Compare commits

..

403 Commits

Author SHA1 Message Date
Scott Rifenbark
c006044611 documentation/poky-ref-manual/extendpoky.xml: removed pokylinux.org link
There was a link whose URL was http://autobuilder.pokylinux.org:8010.
I changed the link to use yoctoproject.org.  Note that this URL
was not visible to the reader in the manual.  However, it was there
in the DocBook code.

(From yocto-docs rev: ca1cf9fb404f148fe4f0868630dc4f109231c5c3)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:18 +00:00
Scott Rifenbark
69cf476e36 documentation/poky-ref-manual/resources.xml: removed reference to poky linux site
There was a reference to the pokylinux.org home site.  I commented this
item out so it does not show in the user documentation.  I was unclear
on whether the reference should have been entirely removed from the manual
or not.

(From yocto-docs rev: 1cda8aab1336cc81648536e1f7d2777047673a64)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:17 +00:00
Scott Rifenbark
0283822752 documentation/poky-ref-manual/ref-images.xml: [BUGID#_1004] - EXTRA_IMAGE_FEATURES
[BUGID#_1004] - The statement telling the reader to comment out EXTRA_IMAGE_FEATURES
incorrectly showed the variable as IMAGE_EXTRA_FEATURE.  I corrected this.

(From yocto-docs rev: e18da2d4e4520a60045f869ca0c63a34c16e3e89)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:17 +00:00
Scott Rifenbark
15c05fc10c documentation: Updated manual history tables for 5.0.2
The 5.0.2 poky release (Yocto Project 1.0.2) required that the
manual history tables be updated.

(From yocto-docs rev: 784e000b9b381e63b453d2b461876611a047ba72)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:17 +00:00
Scott Rifenbark
cc6e20ea98 documentation/yocto-project-qs/yocto-project-qs.xml: updated 5.0.1
For the Bernard release, updated 5.0.1 to 5.0.2.

(From yocto-docs rev: 46508b821f9ae77b083d64764af3c51fdfd20108)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:17 +00:00
Scott Rifenbark
fc7f4b9711 documentation/poky-ref-manual/ref-structure.xml: Fixed a typo
(From yocto-docs rev: 756b11396a26c5f7430595532649acfc3b2caa0e)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:17 +00:00
Scott Rifenbark
4f1611cb8d documentation/poky-ref-manual/extendpoky.xml: YOCTO #1104 EXTRA_IMAGE_FEATURES
YOCTO #1104: The section that describes how to customize images
with new features failed to mention the variable EXTRA_IMAGE_FEATURES.
I added text to include that option and referenced the variable.
(From yocto-docs rev: 69113aeebe4b7047c18727d07d134560ae2018c5)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:17 +00:00
Scott Rifenbark
5347bf352b documentation/poky-ref-manual/ref-varlocality.xml: YOCTO #1104 EXTRA_IMAGE_FEATURES
YOCTO #1104: The section describing the local configuration file
local.conf lists the variables used.  One variable it listed was
IMAGE_FEATURES, which is not actually in local.conf; the variable that
belongs there is EXTRA_IMAGE_FEATURES.  I corrected this.
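
A minimal sketch of the corrected local.conf usage (the feature names here
are illustrative, not taken from the manual):

    # conf/local.conf
    EXTRA_IMAGE_FEATURES = "debug-tweaks tools-debug"

IMAGE_FEATURES itself is normally set by the image recipe, which is why
local.conf exposes EXTRA_IMAGE_FEATURES instead.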

(From yocto-docs rev: 2604e7e4d87bd133341429ffcb83f920ff64f6d5)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:17 +00:00
Scott Rifenbark
d81e13a138 documentation/poky-ref-manual/ref-features.xml: YOCTO #1104 EXTRA_IMAGE_FEATURES
The Images reference section states you can control what features are
in an image by using the IMAGE_FEATURES variable.  It failed to mention
the EXTRA_IMAGE_FEATURES variable.  I included this variable in the
discussion.

(From yocto-docs rev: 0149133ce08e161cffc2a66721537a17623da79e)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:17 +00:00
Scott Rifenbark
3d9db8b275 documentation/poky-ref-manual/ref-variables.xml: YOCTO #1104 EXTRA_IMAGE_FEATURES
Added a cross-reference to the EXTRA_IMAGE_FEATURES glossary
term and provided more explanation describing the relationship
between the variable and the IMAGE_FEATURES variable.

(From yocto-docs rev: 0072ac854c544e218de840d923563ab53fb864d6)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:17 +00:00
Scott Rifenbark
3faa84f835 documentation/poky-ref-manual/ref-variables.xml - YOCTO #1104 EXTRA_IMAGE_FEATURES
YOCTO #1104 - Added a glossary entry for the EXTRA_IMAGE_FEATURES
variable.

(From yocto-docs rev: 4fb4a4b441ac6e52499926f1076826175072cb88)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:17 +00:00
Scott Rifenbark
10a8fb437e documentation/yocto-project-qs/yocto-project-qs.xml: removed 5.0 references
Removed several references to the 5.0 Bernard release.  I replaced
these with the 5.0.1 release.  I also re-wrote a paragraph that
instructs the user on where to find and download the most recent
YP tarball.  It used to point to the "yocto-1.0" folder under the
Index of /Downloads.  I now instruct the user to go to the
Yocto Project website and download the desired release from there.

(From yocto-docs rev: fe35396177a5d48a0d04caf9e3abf4bb414af04d)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:17 +00:00
Scott Rifenbark
2c24e6b9b9 documentation: updated manual history
I added an entry to represent the Yocto Project 1.0.1 Release into
the manual history table for all of the manuals except the Yocto
Project Quick Start, which does not have a table.

(From yocto-docs rev: 5260548a799b4d11942ee0539903f4ea85569894)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:17 +00:00
Scott Rifenbark
7f0a98f9ee documentation/Makefile: Cleaned up Makefile
[BUGID#_1025] - I added some conditionals to handle the
Yocto Project Quick Start case.  This manual does not have
a PDF version.  I put in tests for publishing and for the case
where a user might attempt to specifically generate a PDF
using 'make pdf'.  I also converted the version variable into
a command-line argument so we don't have to edit the Makefile
when a new release comes out.
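
A hedged example of the resulting invocation (the variable name VER and the
values shown are assumptions, not taken from this log):

    # version supplied on the command line instead of being edited into the Makefile
    make all DOC=poky-ref-manual VER=1.0.2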

(From yocto-docs rev: 9ab1e208e6e08b5d05ab2012a05dc3450420cfe8)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:16 +00:00
Scott Rifenbark
47b2f03955 documentation/bsp-guide/Makefile: Fixed publish
There were some URL problems with the publish statement.
Beth debugged it.

(From yocto-docs rev: c13027937a9c541d58949078a53a28891ef60885)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:16 +00:00
Beth Flanagan
a02537187f documentation: [YOCTO #1025] build system for docs
These changes fix the following issues:

1. Multiple Makefiles. There really is no need for this.
2. Unable to maintain more than one version of the docs on the webserver.

This is a quick fix to enable the above.
In order to build the documentation, at the top level, issue:

  make all DOC=<doc directory name>

for example:

  make all DOC=kernel-manual

Also, some changes need to occur on the webserver, i.e. on
http://www.yoctoproject.org/documentation/, to fully
incorporate these fixes.
The docs are now published to:
http://www.yoctoproject.org/docs/<Release MM.mm>/<doc name>

The main page should be changed to point not only to the current doc release,
but also to the prior releases. This will enable us to maintain prior release
documentation without stomping over it when we publish new docs. Also, we'll
need to repoint the yocto-quick-start link to yocto-project-qs. Or rename
documentation/yocto-project-qs/* to support the website naming.

(From yocto-docs rev: b5cb0801691dbedfa9d3733a6b62450c8a674fa0)

Signed-off-by: Beth Flanagan <elizabeth.flanagan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:16 +00:00
Scott Rifenbark
78d092fe7a documentation/poky-ref-manual/faq.xml: Added new FAQ entry x-toolchain
Added a new FAQ entry per Richard Purdie answering the question
'How do I use an external toolchain?'

(From yocto-docs rev: 4203e02eb93a54f4b86554ed07f7353b25fd340e)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:16 +00:00
Scott Rifenbark
361eda901c BUGID#_1083 - documentation/yocto-project-qs/yocto-project-qs.xml: Added -k option
BUGID#_1083 - I added the -k option as part of the bitbake command in the
example that builds an image.  I did not explain it as that is beyond
the scope of the quickstart.  I did however point the user to where
they can find information on it.
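
For illustration only (the image name is an assumption for that release, not
quoted from the quick start):

    # -k keeps BitBake going after an error, building as many targets as possible
    bitbake -k poky-image-sato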

(From yocto-docs rev: 22bdc2da14dea568345fe7e4f6dd35dafe92b2ec)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:16 +00:00
Scott Rifenbark
373d73c7e7 BUGID#_1083 - documentation/poky-ref-manual/usingpoky.xml: -k option added
In the section 2.1.1 BitBake I added a paragraph at the end of the discussion
about BitBake explaining the benefits of the '-k' and '--continue' options.

(From yocto-docs rev: 40e427a74ae2c9252f10843afaec95ad18c30fe3)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:16 +00:00
Scott Rifenbark
b154d10232 documentation/bsp-guide/bsp.xml: Updated Example Filesystem Layout
Added more explanation about the base directory (meta-<bsp_name>) to the
Example Filesystem Layout section.  These changes were suggested by
Tom Zanussi to help users understand better how to add BSP layers
to the build system.

(From yocto-docs rev: 66b05e2b3096539b746f1b597ea8f542bba6be3f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:16 +00:00
Scott Rifenbark
00af854e96 documentation/poky-ref-manual/faq.xml: Added FAQ entry for filename spaces
Added a new entry explaining that we do not support spaces in filenames.
This entry was suggested by Richard.

(From yocto-docs rev: e87b9b1fd61b87d33a67d2b10c3daf834eecbd8f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:16 +00:00
Scott Rifenbark
4005aaf3f8 documentation/bsp-guide/bsp.xml: Updated link to BSP Download site
In the 'BSP Click-Through Licensing Procedure' section there was an
old link to the BSP download page on the Yocto Project website.  The
link was non-functional.  I fixed the link so that it points to the
Yocto Project BSP Download page.  I also re-wrote the paragraph to
read better.

(From yocto-docs rev: 15fea54ad8aa637af487474faf5f66e944c5d224)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:16 +00:00
Scott Rifenbark
5c78a2b02d documentation/bsp-guide/bsp.xml: Updated /binary explanation
In the '1.1.3 Pre-built User Binaries' section it said that the
ADT and minimal images were kept in the optional
meta-<bsp_name>/binary directory.  Jianjun Xu pointed out that
in fact only the minimal and sato images are kept there, and I
confirmed this with Tom.  I re-wrote the sentence to be
clearer and more accurate.

(From yocto-docs rev: 16d32b8ce2b98ef25609922050c8b2c7de672bde)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:16 +00:00
Scott Rifenbark
11355f3a7f documentation/bsp-guide/bsp.xml: BBFILES statement corrected.
In the '1.1.4 Layer Configuration File' section there was a BBFILES
statement that used the '\' character to indicate a continuation of the
command on the following line.  However, the example did not actually
start a new line.  I added the hard return to correct the example.
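
A minimal sketch of the corrected form, with the '\' continuation followed by
an actual new line (the paths are illustrative):

    BBFILES := "${BBFILES} \
                ${LAYERDIR}/recipes-*/*/*.bb"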

(From yocto-docs rev: b27bfc3b7a24b09d974b503937aa02c80101bfb5)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:16 +00:00
Scott Rifenbark
a2283defe2 documentation/bsp-guide/bsp.xml: BSP example name fixed
Changed the example BSP name 'meta-intel_n450' to 'meta-n450' in
the section 'Example Filesystem Layout.'  Error found by Jiajun Xu.

(From yocto-docs rev: a8fa9410b3c2e2111af6c7b700044a7a1ecdb59e)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:16 +00:00
Scott Rifenbark
7ce789de38 documentation/yocto-project-qs/figures/yocto-environment.png: New figure
There is a newer version of the yocto-environment.png file that has
OE-branding.  This is the figure used now.

(From yocto-docs rev: 3e53a15bdb00d9cbb6a4610ac82b587fe64a178b)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:16 +00:00
Scott Rifenbark
4194c83a56 BUGID#_1068 - documentation/poky-ref-manual/extendpoky.xml: updated hello ex.
Changed the hello_2.2.bb example to hello_2.3.bb

(From yocto-docs rev: 7afa2f58e7367a5d29bad075df2e6c39995f5394)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:16 +00:00
Scott Rifenbark
decb8953cd BUGID#_807: documentation/poky-ref-manual/ref-variables.xml: BBFILE_PRIORITY updated
BUGID#_807 - I updated the description of the BBFILE_PRIORITY variable
to provide more detail.  Input from Tom Zanussi on this fix.
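
For example (the layer collection name and priority value are illustrative):

    # higher priority wins when recipes with the same name appear in several layers
    BBFILE_PRIORITY_mybsp = "6"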

(From yocto-docs rev: 7865b8f1d6298a8943e8a512c07b1a32f16679f6)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:15 +00:00
Scott Rifenbark
d6b531e6a1 documentation/poky-ref-manual: Removed the PNG files in screenshots
The directory screenshots is no longer used in this manual.  Previously,
only one file (ss-sato.png) was used and I have moved it to the
figures folder.

(From yocto-docs rev: 816aeaccbac6e89b10ffed88e47e7a9ae50778aa)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:15 +00:00
Scott Rifenbark
d412b923ac documentation/poky-ref-manual: Added new title graphic
I have removed the multi-colored POKY image that was used for
the title of the Poky Handbook.  The image I put in here is in
line with the other graphics used as titles for our Yocto Project
documentation.  To accomplish this I had to create and add a new
PNG file named poky-title.png.  I placed this image in the figures
folder.  I removed the poky-ref-manual.png file (old figure).
I also had to alter the Makefile to use the new figure as part of the
tarball.  Finally, I had to alter the HTML style sheet (style.css)
to include the new file.

(From yocto-docs rev: e640d19b2714702f318adb483302f86a3bfa967f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:15 +00:00
Scott Rifenbark
cb69e75b7b documentation/poky-ref-manual/figures/ss-sato.png: Added this file.
I moved this file from the screenshots directory to the figures
directory so that all figures would be in the figures directory.

(From yocto-docs rev: 61859fc26aee841b05b082a373c09f56e006d7b3)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:15 +00:00
Scott Rifenbark
9b33f20a73 documentation/poky-ref-manual: Figures cleanup
I removed two figures from the figures directory:
cropped-yocto-project-bw.png and yocto-project-transp.png.
Both figures are relics and not used in the manual.  I also
altered the Makefile to pull the ss-sato.png file from the
figures directory instead of the screenshots directory.  I moved
this PNG file from the screenshots directory to the figures
directory so that all figures would be in the figures directory.
Finally, I updated the introduction.xml file so that the html
code to include the ss-sato.png file pulls it from the figures
directory and not the screenshots directory.

(From yocto-docs rev: 4b900cf71a3a87d86bd14ce2056310484daf8081)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:15 +00:00
Scott Rifenbark
00a8552b2b BUGID#_956: documentation/poky-ref-manual/Makefile: Updated publish
BUGID#_956: I updated the publish option so that the HTML and PDF
versions of the manual are automatically pushed to the Yocto Project
website. This fix takes care of BUGID#_956 for the Poky Reference Manual.

(From yocto-docs rev: 2a8a3157512e496a3884f25b5bb060f9571edc8e)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:15 +00:00
Scott Rifenbark
ec3aab7b04 documentation/kernel-manual: removed a figure and updated makefile
I removed the figures/kernel-big-picture.png file as it is not used
in the manual.  I also had to update the Makefile so that it would
not include this PNG file in the tarball.

(From yocto-docs rev: 80fb92f2969445a59f8681fd7512d9ed3e8c6892)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:15 +00:00
Scott Rifenbark
5c51a88346 documentation/kernel-manual/figures/yocto-project-transp.png: Removed
I removed this PNG file as the picture is not used in the manual.

(From yocto-docs rev: 8ade73890dc4ac6706921cf823b822447c748439)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:15 +00:00
Scott Rifenbark
53bbe30ee7 BUGID#_956 - documentation/bsp-guide/Makefile: Updated for publish process
BUGID#_956: I updated the Makefile so that it will push the HTML and PDF files
automatically to the Yocto Project site.  This takes care of
BUGID#_956 for the BSP Guide.

(From yocto-docs rev: 9086e3710ef5df94be4d74683b8e66aa1c74ac91)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:15 +00:00
Scott Rifenbark
1ca2d4316e documentation/bsp-guide/figures/poky-ref-manual.png: Removed figure
Removed this figure as it is not used in the manual.

(From yocto-docs rev: 12a30c966bf476844c0be718b2dec4ce740dcd6f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:15 +00:00
Scott Rifenbark
87d0a3b594 documentation/yocto-project-qs/figures: Removed two figures
Two figures (cropped-yocto-project-bw.png and white-on-black.png)
were not used in the manual.  I removed them from the figures
directory.

(From yocto-docs rev: 092045be84b4108baec2105f39f11b24d8cf496e)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:15 +00:00
Scott Rifenbark
c2c0b9f861 documentation/yocto-project-qs/yocto-project-qs.xml: fixed typo
There was an occurrence of 'the the' in the manual.  I removed the duplicate.

(From yocto-docs rev: b32d92cb698f6f7e0f2127e1de79e81b1db36c8c)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:14 +00:00
Scott Rifenbark
d1c356ad3d documentation/adt-manual/figures/yocto-project-transp.png: removed file
This figure is not used in the manual.  I removed it.

(From yocto-docs rev: 44cab823ee4b5b1d298d2b99eea847b62a2fbe07)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-20 22:35:14 +00:00
Beth Flanagan
4f5622fb01 poky.conf: DISTRO bump
In preparation for release, bumping DISTRO and DISTRO_VERSION

Signed-off-by: Beth Flanagan <elizabeth.flanagan@intel.com>
2011-12-19 14:35:40 -08:00
Joshua Lock
9ee10c93af u-boot: use a hash not a tag for SRCREV
Further, move the SRCREV into the poky-default-revisions.inc file
where the rest of them are defined.
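
A sketch of the kind of entry involved (the hash is a placeholder, not the
actual revision):

    # poky-default-revisions.inc (sketch)
    SRCREV_pn-u-boot ?= "0123456789abcdef0123456789abcdef01234567"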

(Based on OE-Core rev: 04fe616bec7416b5aea55dad6896700652796239)

Signed-off-by: Joshua Lock <josh@linux.intel.com>
2011-11-28 16:04:55 -08:00
Joshua Lock
490b71d15d poky.conf: switch to an appropriate mirror URL
The autobuilder no longer hosts the sources for Yocto 1.1,
update the MIRROR and PREMIRROR URI's to use a mirror location
with 1.0 sources.

Signed-off-by: Joshua Lock <josh@linux.intel.com>
2011-11-21 14:49:30 -08:00
Qing He
60f42f2dc9 quilt: fix test for target build
fixes [YOCTO #969]

(From OE-Core rev: fd2485ab15ed82cb3dc84b8408e516a932de1bd1)

Signed-off-by: Qing He <qing.he@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-11-21 14:02:05 -08:00
Joshua Lock
4dab699e96 persist_data: increase the SQLite connection timeout
We're seeing OperationalError exceptions due to locking in some of the
pysqlite access paths (related to the initial burst of writes) on certain
setups.

This patch increases the sqlite timeout to 30, the same as in BitBake master,
to workaround this issue.

Fixes [YOCTO #1759]

Signed-off-by: Joshua Lock <josh@linux.intel.com>
2011-11-14 18:02:36 -08:00
Bruce Ashfield
1ca9ca2c7d linux-yocto-stable: update SRCREVs to v2.6.34.10
Updating the SRCREVs to pick up the -longterm updates to v2.6.34.10

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
2011-11-14 18:02:36 -08:00
Bruce Ashfield
5359255ce2 linux-yocto-stable: update SRC_URI to generic 2.6.34 repo
The existing linux-windriver repo was cloned into a more generically
named linux-yocto-2.6.34 repository. It is the 2.6.34 repository that
is taking updates for stable and point releases, so switching the
SRC_URI to that repo needs to be done. The existing repository is
maintained for old releases and builds, so nothing is lost.

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
2011-11-14 18:02:36 -08:00
Joshua Lock
c9805a0c3c web: switch to git and fix Makefile
The SVN repo is no longer available and we don't have a mirror of the
SVN tarball.

Further the Makefile in git uses spaces where the Make parser
expects tabs.

Signed-off-by: Joshua Lock <josh@linux.intel.com>
2011-11-14 18:01:46 -08:00
Joshua Lock
3545f453aa texinfo: several changes to build without zlib and ncurses headers on host
Texinfo very cleverly detects cross-compilation and builds host versions
of the texinfo binaries it requires to bootstrap the build, however this
was causing the host to require ncurses and zlib libraries and headers.

Instead, since we require texinfo to be installed on the host, remove this
feature from the texinfo configure.ac (disable-native-tools.patch).

Further, fix texinfo to link with newer binutils (link-zip.patch) and to
generate translations with newer gettext (gettext-macros.patch).

With this patch I am able to build texinfo on Fedora without ncurses-devel
and zlib-devel installed.

This fixes [YOCTO #1483]

(From OE-Core rev: 4b395a9beb6c02f7b23266e7ee2ca3c08a9cbb70)

Signed-off-by: Joshua Lock <josh@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Conflicts:

	meta/recipes-extended/texinfo/texinfo_4.13a.bb
2011-10-17 15:47:53 -07:00
Richard Purdie
2319b2d2d7 Remove help2man dependency
The help2man script is pretty useless to us. It requires running the target
binary to extract help information, which is not possible for any of our
cross-compiled target binaries.

We're not interested in man pages for -cross/-native tools.

It therefore makes no sense to have this as a core build dependency.

This patch removes the dependency and replaces it with a script
returning false. This will trigger autotools' missing utility
to use the copy of the man page included with the sources, which
is what would already happen when we tried to run cross-compiled
binaries anyway.

(From OE-Core rev: 288343e30604b944dc18fd82172febd314d9c520)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-10-17 15:47:53 -07:00
Nitin A Kamble
da22a78bd4 zaurusd: fix a typo in Makefile
(From OE-Core rev: fcc7800834fda37df5a5c2bbd1da712ec8ff12b9)

Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-10-17 15:26:17 -07:00
Nitin A Kamble
0a69e60cfc matchbox-wm-2: fix typo in Makefile
(From OE-Core rev: a708c42065eeeaabf97b97b530f63e4ef484bcf7)

Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-10-17 15:20:30 -07:00
Lin Tong
2816cc0db8 valgrind: support Linux kernel 3.x
The old valgrind package does not support Linux kernel 3.x, only
kernels 2.4 and 2.6. This adds the configuration to the configure.in
file to support Linux kernel 3.0.

This commit fixes the problem in valgrind [YOCTO #1129]

(From OE-Core rev: 5fc1e6d27f52e2032aa7a8ca20bb90d939d03c77)

Signed-off-by: Lin Tong <tong.lin@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Applied to Bernard's valgrind 3.6.0
Signed-off-by: Joshua Lock <josh@linux.intel.com>
2011-10-14 11:39:12 -07:00
Joshua Lock
c998000630 glib-2.0: explicitly disable dtrace and systemtap for native variant
This prevents systemtap and dtrace being picked up from the host as
reported on the Yocto mailing list by Andre Haupt <andre@bitwigglers.org>

(From OE-Core rev: 0d883b5df25635fbad45191d297cbdf78a6c1fe0)

Signed-off-by: Joshua Lock <josh@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>

Conflicts:

	meta/recipes-core/glib-2.0/glib.inc
2011-10-14 09:38:40 -07:00
Joshua Lock
bf8d577f1d python: fix CVE-2011-1015
This patch adds a backported security fix from upstream for CVE-2011-1015 to
address a vulnerability in the CGIHTTPServer module.

Signed-off-by: Joshua Lock <josh@linux.intel.com>
2011-10-14 09:38:40 -07:00
Joshua Lock
e3e50d2c69 libpng: backport security fixes
This patch includes various security fixes from upstream (though the patches
were taken from Debian's packaging) to address the following CVE issues:

libpng CVE-2011-2690
http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2011-2690
libpng CVE-2011-2692
http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2011-2692
libpng CVE-2011-2501
http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2011-2501

Signed-off-by: Joshua Lock <josh@linux.intel.com>
2011-10-14 09:38:40 -07:00
Joshua Lock
e5cce8a57d scripts/poky-qemu: fix libGL checks for recent Debian(ish) systems
On 64bit Debian(ish) systems libGL now lives in /usr/lib/x86_64-linux-gnu/, add an
extra test to the qemu script to check for libGL and libGLU in directories
that match this pattern.

Based on commits by Khem Raj (0350be9458) and
Anders Darander (1927021c78) in OE-Core.

Signed-off-by: Joshua Lock <josh@linux.intel.com>
2011-10-14 09:38:40 -07:00
Anders Darander
bb855dab75 qemu: modify search paths for libgl
On e.g. Debian, libGL is found under /usr/lib/x86_64-linux-gnu/libGL.so.
Use a wildcard to match different locations, as uname -i only returns unknown on Debian.
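
A minimal shell sketch of a wildcard-based check (the exact test used in the
script is not shown here, so treat this as an assumption):

    # succeeds if libGL is present in any multiarch-style directory
    if ls /usr/lib/*-linux-gnu/libGL.so* >/dev/null 2>&1; then
        echo "libGL found"
    fi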

(From OE-Core rev: 32f74152dfe583f005c8654910b15cd7d0e3d421)

Signed-off-by: Anders Darander <anders@chargestorm.se>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-10-14 09:38:40 -07:00
jani.uusi-rantala@nokia.com
7779a1fedc Magic file path should be given for rpmbuild
The magic file path should be given to rpmbuild in the
_rpmfc_magic_path define so that the build system's default file
is not used by accident.  Not doing this caused many
packages to fail to build on several systems.
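
Roughly, the setting amounts to a macro definition like this (the path is
illustrative; only the macro name comes from this message):

    # point rpmbuild's file classifier at an explicit magic file
    %_rpmfc_magic_path   /usr/share/misc/magic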

Fixes [YOCTO #1358]

Signed-off-by: Jani Uusi-Rantala <jani.uusi-rantala@nokia.com>
2011-10-14 09:38:40 -07:00
Khem Raj
1c5171b251 qemu: Poke more paths for presence of libgl
On Ubuntu 11.10 libGL is not in the directories previously searched,
so we also search the /usr/lib/`uname -i`-linux-gnu/ directory.

(From OE-Core rev: ced947e989dfbca8055fe57e14207cb6f1357430)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-10-14 09:38:40 -07:00
Martin Jansa
5eabb17202 python: add patch to fix cross compilation on host with linux-3.0
(From OE-Core rev: 4b7e7b004dacb698ed637f35661a60d2402c00cd)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-10-14 09:38:40 -07:00
Qing He
b3cb28df9f rpm: fix fprint pointer issue
[YOCTO #1030]

(From OE-Core rev: bc4b86639a713c877dbe5e0f984873915d1578d4)

Signed-off-by: Qing He <qing.he@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-10-14 09:38:40 -07:00
Scott Rifenbark
14c9af0056 documentation: Makefile corrections to support web server structure
The web server directory structure for the post-1.0 releases was changed.
Also, a new 1.0 area was retroactively created in the web structure.
This broke the five makefiles for publishing documents to the web.

I fixed all five files so they now push to the 1.0 area only.  The fix included
hard-coding the 1.0 directory structure.  I also set them up to be a little more
generic.

(From yocto-docs rev: d2cd8f1165b0cc995fc322a7d836de0902da7614)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-10-14 00:39:51 +01:00
Darren Hart
d106d15cad README.hardware: update installation instructions for beagleboard
o Add C4 specific instructions
o Replace poky with core
o Correct a kernel version typo
o Clarify some language to avoid confusion encountered during testing

Signed-off-by: Darren Hart <dvhart@linux.intel.com>
Cc: Zhao Yi <yi.zhao@windriver.com>
Cc: Bruce Ashfield <bruce.ashfield@windriver.com>
Cc: Jeff Osier-mixon <jeffrey.osier-mixon@intel.com>
Cc: Koen Kooi <koen@dominion.thruhere.net>
2011-05-25 16:23:29 -07:00
Saul Wold
72f06800bc u-boot: remove old SRCREV from poky-default-revisions.inc
Acked-by: Darren Hart <dvhart@linux.intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
2011-05-20 09:22:18 -07:00
Darren Hart
53b15f2732 u-boot: update SRCREV to 2011.03
Fixes [YOCTO 1029]

u-boot 2010.12 fails to run on the Beagleboard C4 and xM Rev A boards.  Commit
55aacbc30e suggests there was a mixup during
development, as the MD5SUM change is from the 2011.03 SRCREV back to the
2010.12. Chances are a patch was never sent to update the SRCREV, leaving the
MD5SUM in a bad state.

Update the SRCREV and COPYING MD5SUM to use the 2011.03 version. Built
and tested on Beagleboard xM Rev A and Beagleboard Rev C4.

(From OE-Core rev: 68d301e950c06eda8c8a73db1ed299c45dee7b9f)

Signed-off-by: Darren Hart <dvhart@linux.intel.com>
Tested-by: Jeff Osier-Mixon <jefro@jefro.net>
Cc: Jeff Osier-Mixon <jefro@jefro.net>
Cc: Yi Zhao <yi.zhao@windriver.com>
Cc: Robert Berger <pokylinux@reliableembeddedsystems.com>
Cc: Gary Thomas <gary@mlbassoc.com>

Merged Richard's removal of PR from PV

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-19 13:10:11 -07:00
Saul Wold
1b159ff35d poky.conf: Update DISTRO_VERSION to 1.0.1
Signed-off-by: Saul Wold <sgw@linux.intel.com>
2011-05-12 16:38:03 -07:00
Saul Wold
eabe47ed8c distcc: Update SRC_URI
Fixes [YOCTO #1032]

The distcc source location moved from samba.org to googlecode.com

(From OE-Core rev: eb85a7440e5b313ef550c60545d2dcd12d620c84)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-12 16:34:28 -07:00
Kang Kai
5299510bd3 groff: update to 1.20.1
Update groff to 1.20.1, add SUMMARY and LICENSE info
From OE 70bf94cd8669f549ca90581e9592d409b6e24e2e
Fixes [Yocto 879]

(From OE-Core rev: 6c5cbb73550639ec71cb9564883253dbe1c09f36)

Signed-off-by: Kang Kai <kai.kang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-10 09:05:40 -07:00
Otavio Salvador
679e3ae6de insane.bbclass: skip license checksum if LICENSE is "CLOSED"
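For example, a recipe for proprietary code can simply declare (sketch):

    LICENSE = "CLOSED"
    # with this change, no LIC_FILES_CHKSUM entry is required for such recipes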
(From OE-Core rev: 2d2d7710cc51c2656e89c3aec3f3fc0a5b65eb30)

Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-10 07:55:57 -07:00
Scott Garman
6619eff40b gnome-doc-utils: Add additional missing -nonet options to xsltproc
I missed some instances of xsltproc when adding -nonet in my
previous commit. This should take care of them all to fix
the compilation errors.

(From OE-Core rev: b232ad2c74c93f045006a6b03b2eff7f6103a865)

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-10 07:55:57 -07:00
Dexuan Cui
4e41793b5c rsync (GPLv2): fix security vulnerability CVE-2007-4091
Added a patch to fix
http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2007-4091

[YOCTO #984] is partially fixed by this commit.

(From OE-Core rev: 3670f110aacebdde118b79d31aa15156330418c6)

Signed-off-by: Dexuan Cui <dexuan.cui@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-10 07:55:56 -07:00
Darren Hart
5b1d38c0ed u-boot: correct COPYING MD5SUM
(From OE-Core rev: d0dc2b5bb02ef55a41e7a97b6831c72391ae7f36)

Signed-off-by: Darren Hart <dvhart@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-10 07:55:56 -07:00
Richard Purdie
67ef061d39 sanity.bbclass: Add cpio to list of required utilities tested for
(From OE-Core rev: 4f4bac0a459fe238e105e96b2b59b6af88e639c4)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-10 07:55:55 -07:00
Dexuan Cui
5d3bfbbd18 gnu-config-native: add dependency on perl-native
Fixes [YOCTO #968]

(From OE-Core rev: 649a836a6a5c64aa48f2a612a90c2d4c26731e05)

Signed-off-by: Dexuan Cui <dexuan.cui@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-09 08:13:49 -07:00
Scott Rifenbark
36c9135215 documentation/yocto-project-qs/Makefile: BUGID#_956 - fixed remote publish URL
Fixed the remote publish URL so that the HTML version of the manual will
get pushed to the yoctoproject.org site automatically.

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
2011-05-06 17:44:25 -07:00
Scott Rifenbark
437950723f documentation/poky-ref-manual/Makefile: BUGID#_956 - fixed publish URL
Fixed the remote publish URL used to push the HTML and PDF files to
the website.

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
2011-05-06 17:44:25 -07:00
Scott Rifenbark
5f92b6262f documentation/kernel-manual/Makefile: BUGID#_956 - fixed publish URL
Fixed the URL used to publish the HTML and PDF docs to the
yoctoproject.org server and website.

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
2011-05-06 17:44:25 -07:00
Scott Rifenbark
65d61e2d11 documentation/bsp-guide/Makefile: BUGID#_956 - fixing publish process
Added the URL for the manual to the rcp publishing process.  This
is part of the fix for this bug.

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
2011-05-06 17:44:25 -07:00
Scott Rifenbark
4825604977 documentation/adt-manual/Makefile: Updated publish
Updated the publish statement so that the HTML and PDF files will
be published to the Yocto Project website.

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
2011-05-06 17:44:24 -07:00
Nitin A Kamble
00996de4eb tar-1.17 (GPLv2) bugfix
This fixes bug [YOCTO #982]

(From OE-Core rev: 9346961f863b2e0d6489615fa976b002553123de)

Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:24 -07:00
Nitin A Kamble
5a9b3fecde python-pycairo: fix installation path of __init__.py
This fixes Bug [YOCTO #477]

(From OE-Core rev: 8f6436b25a96594d09c64c7ba20a045cb1f8d18a)

Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:24 -07:00
Zhai Edwin
2343f81fb4 avahi: Upgrade to 0.6.30 (from 0.6.28)
This upgrade fix the one security issue:
http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2011-1002
[YOCTO #975] fixed.

This should be included in the Bernard point-release.

(From OE-Core rev: b52e9922e8d9acaa9b94b0f19c54bdee18ae49f1)

Signed-off-by: Zhai Edwin <edwin.zhai@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:24 -07:00
Dongxiao Xu
d8f4a33500 rxvt-unicode: upgrade to version 9.10
Removed some patches since the logic they touch no longer exists upstream.
This upgrade fixes a CVE:
http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2006-0126

Also it fixes [Yocto #980]

(From OE-Core rev: 6108c5962a717e1ece4aa7acb0f543f7d8e86a35)

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:24 -07:00
Saul Wold
a982aa5786 libexif: fix gettext inherit
Signed-off-by: Saul Wold <sgw@linux.intel.com>
2011-05-06 17:44:24 -07:00
Richard Purdie
8404b657fa qemu-config: Enable for qemumips/qemuppc
(From OE-Core rev: 7dbb204266a480435f78837aa1bded30fed96378)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:24 -07:00
Paul Eggleton
e4ab64389e netbase: add /etc/network/interfaces file for qemumips & qemuppc
This fixes the network configuration for qemumips & qemuppc to match the
other qemu* machines.
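
The file typically contains entries along these lines (a sketch, not the
exact contents added):

    # /etc/network/interfaces
    auto lo
    iface lo inet loopback

    auto eth0
    iface eth0 inet dhcp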

(From OE-Core rev: cb181eb4dc2c20a70153f9d69d732978566ba4f7)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:24 -07:00
Nitin A Kamble
9c43741ed6 libxcb: fix for broken library link in the image
log.do_package:
NOTE: the following files were installed but not shipped in any package:
NOTE:   /usr/lib/libxcb-dri2.so.0
NOTE:   /usr/lib/libxcb-dri2.so.0.0.0
NOTE: libxcb-dev contains dangling symlink to
/usr/lib/libxcb-dri2.so.0.0.0

Then because of the dangling symlink, ldconfig fails at the time of
rootfs creation of image.

(From OE-Core rev: 917ac8c82a9e1e9df6029ecfa68e8f9ce2f8013c)

Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:23 -07:00
Koen Kooi
586b7055b3 librsvg 2.32.1: fix postinst script
The symptom:

root@pandaboard-core:~# sh /var/lib/opkg/info/librsvg-2-gtk.postinst
g_module_open() failed for /home/root/--update-cache: /home/root/--update-cache.so: cannot open shared object file: No such file or directory
root@pandaboard-core:~#

the gdk-pixbuf-query-loaders app doesn't support arguments, only .so names, so remove --update-cache

Also being fixed:

* loader libdir
* redirect output to /etc/gtk-2.0/gdk-pixbuf.loaders
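
The corrected postinst then amounts to something like this sketch (derived
from the description above, not copied from the recipe):

    # query the installed loaders and write the cache where GTK+ 2 expects it
    gdk-pixbuf-query-loaders > /etc/gtk-2.0/gdk-pixbuf.loaders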

(From OE-Core rev: e726028424793093f22fd96f7eec791adf55f0ee)

Signed-off-by: Koen Kooi <koen@dominion.thruhere.net>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:23 -07:00
Richard Purdie
0401043d43 gthumb: Add missing DEPENDS on gst-plugins-base as otherwise gstreamer isn't enabled
(From OE-Core rev: 75e2ced78f5164882f933787f9247e30da203613)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:23 -07:00
Saul Wold
40a6a2612e desktop-file-utils: Add SRC_URI checksums
(From OE-Core rev: 1f164043be7fffb38b82f3b24c27e837268e51e5)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:23 -07:00
Saul Wold
2060a0d1f2 alsa-tools: Add checksums
(From OE-Core rev: b6864fa496fa108ac4ef644ee14b841b9fc8565b)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:23 -07:00
Mark Hatle
23a0019b1f Workaround for Global C++ Constructor problem on ARM
[YOCTO #938]

Workaround for a problem with the order of the global C++ constructors on ARM.
The workaround is simply to avoid defining the ID numbers outside of the
usage of the ID's.

This also has the effect of fixing a problem on MIPS, where "_mips" is a
predefined symbol and therefore unavailable for use as a variable name.

(From OE-Core rev: b308149b4b7d2066390aa4eaa7364af3334f70f5)

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:23 -07:00
Tom Zanussi
aa37762223 linux-tools.inc: turn off newt and dwarf for perf
Turn these off for now to avoid the host infection issues for perf.
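
One way to express this (a sketch; the exact mechanism used in
linux-tools.inc is an assumption):

    # perf's Makefile skips the newt-based TUI and DWARF unwinding when these are set
    EXTRA_OEMAKE += "NO_NEWT=1 NO_DWARF=1"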

Fixes yocto [BUGID #994].

(From OE-Core rev: 51cf1ecab860269b3d822e2e372756b8bb8ffe26)

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:23 -07:00
Richard Purdie
5570e0ae78 base-files: Remove sysctl.conf file. This is now provided by the procps recipe.
The base-files version is horribly outdated too.

[YOCTO #924]

(From OE-Core rev: f61df1f1e4a191ed3dd3d71aa78a479c615b14d1)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:23 -07:00
Denys Dmytriyenko
310897df07 qt4: security advisory - blacklist fraudulent comodo certificates
Security advisory: Blacklist fraudulent certificates. More info is in the
patch and at the following links:
http://www.comodo.com/Comodo-Fraud-Incident-2011-03-23.html
http://qt.nokia.com/files/qt-patches/blacklist-fraudulent-comodo-certificates-patch.diff/view

(Imported from OE rev 61eeeec1224c4f974f9185c2b93eeb19d13938af)

(From OE-Core rev: 14419f4a4bc629b171281d46750c6abfa84bf83b)

Signed-off-by: Denys Dmytriyenko <denys@ti.com>
Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:23 -07:00
Paul Eggleton
ca77772632 qt4: replace 4.7.1 with version 4.7.2
Qt 4.7.2 is a bugfix release for the 4.7 series - more details here:

  http://qt.nokia.com/developer/changes/changes-4.7.2/

This was prompted by the equivalent change in OE, however the change was
redone by hand. There are no changes to the recipes themselves other than
updating SRC_URI checksums and resetting PR.

(From OE-Core rev: e8a3686ec108f6095bafa5b601c9f763bc39c123)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:22 -07:00
Qing He
b8765d4efb sat-solver: fix arch=all packages
Add a new option to set noarch architectures as "all" so that platform-independent
packages can be recognized and installed.

fixes [YOCTO #993]

(From OE-Core rev: bd0798120559a8aca726db8e962bbbafb80c2a54)

Signed-off-by: Qing He <qing.he@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:22 -07:00
Mark Hatle
c36361ed5a Fix sat-solver & RPM5 integration issue
From Michael Schroeder, fix the configuration of how RPM5 handles obsoletes
within the sat-solver.

(From OE-Core rev: 7178a540b35a4a5e4a5e0546eb0c2207d2033cdf)

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:22 -07:00
Mark Hatle
60ab27d71b Fix integration of zypper and sat-solver
Adjust the integration of zypper and sat-solver to ensure that all of the
defined architectures for a given machine are defined identically to Poky.

(From OE-Core rev: b2996efc015bc5ae0b8246924083e76fb5129cea)

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:22 -07:00
Richard Purdie
d739fc53eb poky.conf: Add missing POKY_EXTRA_RDEPENDS qemu changes for mips/ppc
[YOCTO #394]

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:22 -07:00
Koen Kooi
84d82c0685 x11vnc: fix SRC_URI
The download structure got changed at some point and made this recipe unfetchable

(From OE-Core rev: 98bd7497c9fa904b01e4984e34d61daac54b2fab)

Signed-off-by: Koen Kooi <koen@dominion.thruhere.net>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:22 -07:00
Nitin A Kamble
9361df5ec2 ldconfig-native-2.12.1: newer recipe with eglibc sources
This fixes [YOCTO #780]

Handle the input/output data with different endian-ness correctly
Also fix the definition of LD_SO for cross environment

And remove the older 2.5 version of ldconfig-native recipe

(From OE-Core rev: 694db055f3729662e0e0193a31f2098be599877f)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:22 -07:00
Bruce Ashfield
5ec8233e2f linux-yocto/qemux86-64: enable profiling and latency
The configuration chunks for profiling and latencytop have
been enabled in tree now, so we can drop the optional feature
additions from the recipe itself.

Build tests show identical configurations.

(From OE-Core rev: 0f69382ac1eea1dea05581c29cf66e3214f0bd74)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:22 -07:00
Chris Larson
c7301228c0 goggle: exit quietly on ^C
(Bitbake rev: bdd10e9b357417774f30cc52e89e3fa83bbbbfc0)

Signed-off-by: Chris Larson <chris_larson@mentor.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:21 -07:00
Richard Purdie
f77efdf544 bitbake/fetch2: Fix the problems introduced by the git fetcher AUTOREV fix
The ordering constraints on the urldata_init functions are not
straightforward.  To avoid further problems, create a helper function to
set up the source revisions, which the init functions can call at the
appropriate point.

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:21 -07:00
Richard Purdie
f15a4a7677 bitbake/fetch2/git: Fix a bug where AUTOREV and the git fetcher interact badly
Fix a bug where ud.branches was being referenced before it was set by
the git fetcher when using AUTOREV. To do this some ordering needed
to be changed. This fixes errors like:

ERROR: Error parsing /recipes-kernel/linux/rt-tests_git.bb: Failure expanding variable
SRCPV, expression was ${@bb.fetch2.get_srcrev(d)} which triggered exception
AttributeError: 'FetchData' object has no attribute 'branches'

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:21 -07:00
Richard Purdie
0e55651fd0 sstate: Add support for taking shared lockfiles
(From OE-Core rev: c411a10e06f479ff364c07766f7c77907b7b4a16)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:21 -07:00
Dexuan Cui
8c888bf67a sstate: ensure an ordered mapping between SSTATETASKS and SSTATETASKNAMES
Fix [YOCTO #964]

A recent commit 25a6e5f9(sstate: use only unique set of SSTATETASK) breaks
the ordered mapping between SSTATETASKS and SSTATETASKNAMES. As a result,
in sstate_cleanall, the line
taskname = tasks[namemap.index(name)]
gets an incorrect result, and "bitbake -c cleanall" doesn't really remove
the files populalted by do_populate_sysroot.

(From OE-Core rev: 2f6505f0e795b6c8cad641a6918739c3faac1f99)

Signed-off-by: Dexuan Cui <dexuan.cui@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:21 -07:00
Martin Jansa
6f904b3550 sstate: use only unique set of SSTATETASK
* otherwise strange error like this:
  ERROR: Logfile of failure stored in: /OE/shr-core/tmp/work/armv7a-oe-linux-gnueabi/libtool-cross-2.4-r1/temp/log.do_package_write_ipk.25551
  Log data follows:
  | ERROR: Package already staged (/OE/shr-core/tmp/sstate-control/manifest-nokia900-libtool-cross.deploy-ipk)?!
  | ERROR: Function 'sstate_task_postfunc' failed
  NOTE: package libtool-cross-2.4-r1: task do_package_write_ipk: Failed
  ERROR: Task 11 (/OE/shr-core/openembedded-core/meta/recipes-devtools/libtool/libtool-cross_2.4.bb, do_package_write_ipk) failed with exit code '1'

  is shown in this case with package_ipk twice in INHERIT

* Thanks to Richard for fix

(From OE-Core rev: f2fe5e840b8aa0558b5462ef2c7517b2f14ec2ea)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:21 -07:00
Chris Larson
5d01c9c296 utils: fix typo in lockfile
(Bitbake rev: 53a10b6793c5bdb45854483abe5da791058dfd84)

Signed-off-by: Chris Larson <chris_larson@mentor.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:21 -07:00
Ilya Yanok
55f72863b9 native, nativesdk, crosssdk: reset TARGET_FPU
When building one of the native, nativesdk or crosssdk packages TARGET_*
variables' values are no longer related to the target we set via MACHINE
variable, they are now related to the BUILD (native) or SDK (nativesdk,
crosssdk) targets instead. We need to change TARGET_FPU variable
accordingly or some of the recipes (the ones that check for TARGET_FPU
value, most notably gcc and eglibc) might be confused.

It's probably cleaner not to reset TARGET_FPU but to change it to
something like ${BUILD_FPU} (for native) or ${SDK_FPU} (for crosssdk and
nativesdk) but as long as BUILD and SDK are x86 it's safe to just reset
TARGET_FPU.
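
A sketch of the reset for one of the three cases (the exact class file and
form are assumptions):

    # native.bbclass (sketch): TARGET_* now describes the build host, so clear the FPU tuning
    TARGET_FPU = ""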

(From OE-Core rev: 0d4ea5d7486dc35001582bef3ff6ebfad0606bda)

Signed-off-by: Ilya Yanok <yanok@emcraft.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:21 -07:00
Saul Wold
e086bc7c11 keymaps: Fix MACHINE -> MACHINE_ARCH
Fixes [YOCTO #960]

(From OE-Core rev: b136520e787744abd61f7aab8430a46c910457aa)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:21 -07:00
Nitin A Kamble
aa468ee163 image.bbclass: make ldconfig execution verbose
The failure of ldconfig was not getting logged anywhere before.

(From OE-Core rev: 880b0a222fdc11ee088bcaf8c832edae23bc28a7)

Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:20 -07:00
Khem Raj
94d2b2c563 scripts/poky-qemu-internal: call stty sane before exit
When qemu is booted to the console with -nographic,
the terminal line settings are messed up after exiting.
This patch calls 'stty sane' to restore the terminal
settings to their defaults.

stty is part of coreutils, which is installed on all
host distros, so there is no need to check whether it
is available.

(From OE-Core rev: 201a43cce6171988999f954a5759f46b330a7812)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:20 -07:00
Darren Hart
f837ecebc6 bitbake docs: use dblatex to build the pdf bitbake manual
Fix [BUGID #593]

The current manual build fails for printing formats which use latex as an
intermediate format. This bug has been reported in multiple locations and I
haven't found a solution posted to any of them.

Using --with-dblatex uses dblatex to make the conversion and successfully
generates the pdf. It adds a dependency on dblatex and its dependencies.
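
Assuming the manual is converted with xmlto (an assumption, not stated in
this message), the backend selection looks roughly like:

    # use dblatex instead of the default TeX backend for PDF output
    xmlto --with-dblatex pdf usermanual.xml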

Signed-off-by: Darren Hart <dvhart@linux.intel.com>
CC: Joshua Lock <josh@linux.intel.com>
2011-05-06 17:44:20 -07:00
Scott Rifenbark
dfb31f15b9 document/poky-ref-manual/ref-classes.xml: removed <function> tag
I got rid of the <function> tag and replaced with <filename>.  We
have too many styles.

(From OE-Core rev: 5ac97ba191c8707ff20105626427998df997d221)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:20 -07:00
Darren Hart
6bfb96bff3 README.hardware: automate boot process for router station pro
(From OE-Core rev: d192b79721c5ef9137720f08bab5d6b97cb041be)

Signed-off-by: Darren Hart <dvhart@linux.intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:20 -07:00
Paul Eggleton
88083714e3 README.hardware: remove u-boot flashing instructions for mpc8315e-rdb
Upgrading u-boot is apparently not necessary with current board revisions,
and these instructions may not work properly anyway (our toolchain seems not
to be able to compile u-boot in the way described), and given that they are
potentially risky they should be removed.

(From OE-Core rev: 52a85e805797bff2ec53b2356da8daf224460e9e)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:20 -07:00
Darren Hart
2bd9b41760 bitbake: correct typo in ??= documentation
??= is a lazy version of ?=

Signed-off-by: Darren Hart <dvhart@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:20 -07:00
Scott Rifenbark
e5f8d44d24 documentation/kernel-manual/kernel-how-to.xml: replaced 'pokylinux' with 'yoctoproject'
(From OE-Core rev: 39f8b1b13072598729a189fb58c14622d300db69)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:20 -07:00
Scott Rifenbark
a3ed4e19e1 documentation/adt-manual/adt-eclipse.xml: Fixed URL with pokylinux.org
Substituted 'pokylinux.org' with 'yoctoproject.org' in an URL
to locate the OProfile viewer and server.

(From OE-Core rev: 6e2553b07be5f06a68f0967775111d7598d9404f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:19 -07:00
Scott Rifenbark
16c10b7a8d documentation/adt-manual/adt-prepare.xml: Fixed URL for nightly builds
There was a stray "0.9" in an URL for the Yocto source downloads.
I changed it to 1.0.

(From OE-Core rev: 82890a85c0422aa6b081497be394aa756da567b2)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:19 -07:00
Scott Rifenbark
ce08910d62 documentation/adt-manual/adt-eclipse.xml: Fixed link to autotools.
The link to autotools was incorrect.  It had 'www' in the URL
when it should not have.  It is now
'http://download.eclipse.org/technology/linuxtools/update/'

(From OE-Core rev: 56965da0631d4619282b5548fc19118429183507)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:19 -07:00
Scott Rifenbark
d2492a6ee2 documentation/yocto-project-qs/Makefile: Updated Makefile to include PDF in tar
(From OE-Core rev: ceffea9c2ffe4422fd98524d3265f8d00bc80f9a)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:19 -07:00
Scott Rifenbark
a05ffe7e61 documentation/kernel-manual/Makefile: Updated Makefile so PDF is in tarball
(From OE-Core rev: dc4c7e396833dd3d0839c458b8762a89e0979138)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:19 -07:00
Scott Rifenbark
05d95c7feb documentation/poky-ref-manual/ref-bitbake.xml: style tags updated
I got rid of the <filename class='directory'...> and
<filename class='extension'...> and replaced with simple
<filename>/</filename> pairs.

(From OE-Core rev: 1bcdaf8d3d39680c154144227ee2caca9a7bb3e5)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:19 -07:00
Scott Rifenbark
22a4bae306 documentation/poky-ref-manual/ref-bitbake.xml: BitBake parsing section update
In section B.1 (Parsing) it said that the BBFILES variable by default
specified the directory 'meta/packages/' as the place to look for .bb
files.  This directory is no longer valid and needed to be changed.

(From OE-Core rev: c48325b1f23201a1e7790bfd7c52191baf14878f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:19 -07:00
Scott Rifenbark
dd99bbf1f3 documentation/poky-ref-manual/ref-variables.xml: added SSTATE_DIR variable
Due to some changes in the file structure for 1.0 there is a new directory
for the shared state.  The variable SSTATE_DIR can be used to point to
the directory.  I added this variable to the list of documented variables.

(From OE-Core rev: fe939d7181856145ea26c193be131883da182fcd)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:19 -07:00
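
The new variable can be pointed at any shared location; a minimal local.conf sketch, where the path shown is only an example and not the documented default:

    SSTATE_DIR ?= "/home/builder/shared-sstate-cache"
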
Scott Rifenbark
6e902e0a31 documentation/poky-ref-manual/ref-structure.xml: Edits for Rel 1.0
These edits reflect changes in the directory structure from the 0.9
version of the software to the 1.0 version.  This set of changes is
still missing a few items.  Changes were based on Saul Wold's input.

(From OE-Core rev: 6288e2af1b05d849e53b90071c66bc893ba015b6)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:19 -07:00
Scott Rifenbark
1e10a0cf03 documentation/poky-ref-manual/ref-classes.xml: tag updates
Removed the various styles for commands and such and replaced with
simple <filename>/</filename> pairs.

(From OE-Core rev: c5a0cc3e6a2f1e7eb1a90c67d2a038d3dc18b1ba)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:18 -07:00
Scott Rifenbark
98820f5b74 documentation/poky-ref-manual/ref-classes.xml: re-write of autotooled packages
Section C.2 (Autotooled Packages) was re-written.  I removed a bunch
of <variable> tags and replaced them with <filename>.  Also removed
some Britishisms.

(From OE-Core rev: 7a932962fb8f0dbfe14eb2d3636ddbb1c974b947)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:18 -07:00
Scott Rifenbark
e8486ec930 documentation/poky-ref-manual/ref-classes.xml: Fix to <filename> tag
Had to fix the <variable> tag by replacing it with <filename>.
The previous commit didn't work, so this fixes it.

(From OE-Core rev: 263e572055b09ad2f432f1feda797813ef254e74)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:18 -07:00
Scott Rifenbark
c0a58abed5 documentation/poky-ref-manual/ref-bitbake.xml: typo fixed
Section B.4 (The Task List) had the typo "taksks".  Changed to
"tasks."

(From OE-Core rev: 7cbd6bb020e16ceb1894a408852648a915f193f3)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:18 -07:00
Scott Rifenbark
41d9bcdabe documentation/poky-ref-manual/ref-bitbake.xml: Grammar fix
Section B.2 (Preferences and Providers) had a grammar error.
It said "An common example is..."  I fixed it.

(From OE-Core rev: 6d04a9ff381b7771b6f080928d4416b76e76cbb0)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:18 -07:00
Scott Rifenbark
58e3304ea0 documentation/poky-ref-manual/ref-bitbake.xml: removed 'varname' style
I replaced varname style with filename style.  Looks better.  We have
too many styles.

(From OE-Core rev: 1b63d69c3c2e4b5561dc59d020b59d875420872f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:18 -07:00
Scott Rifenbark
eb83549448 documentation/poky-ref-manual/ref-structure.xml: bitbake section updated
Section A.1.1 (bitbake/) contained two URLs.  One supposedly went to
a BitBake site and the other to the BitBake online manual.  In reality,
they both went to the online manual.  I removed the one referencing
the site.

(From OE-Core rev: 02c360c3e57409a3982db73ed2b998a7c58610a6)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:18 -07:00
Scott Rifenbark
fdd4dc5db9 documentation/poky-ref-manual/development.xml: OProfileUI section updated
Section 5.3.2 (Using OProfileUI) was out of date.  Several of the URLs
would not resolve.  They were pointing to openedhand links that had
not been maintained.  I updated the entire section.

(From OE-Core rev: 4678fcba5ab02669009d0ab67ec802f2ce1b087f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:18 -07:00
Scott Rifenbark
32d330889f documentation/poky-ref-manual/development.xml: Corrected command syntax
In section 5.2.2 (Building the Cross-GDB Package) there was a resulting
directory listed where you could find the binary.  The directory had
a couple of variables for 'host-arch' and 'target-abi'.  There was
a misplaced angle bracket wrapping the 'host-arch' variable.  This
was fixed.

(From OE-Core rev: a4fbf5caabb9ded34885612ae093759c82d7d2cb)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:18 -07:00
Scott Rifenbark
a6620f2fcf documentation/adt-manual/Makefile: Added PDF file to the tarball
(From OE-Core rev: f30f044355bfe4a1c7b08a201b813afd7cf4bddb)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:17 -07:00
Scott Rifenbark
8e4021a890 documentation/adt-manual/adt-eclipse.xml: Updated repo URL for Eclipse Plug-in
Updated the URL that points to the Yocto Eclipse Plug-in to
http://www.yoctoproject.org/downloads/eclipse-plugin/1.0.

(From OE-Core rev: 6657ee7563efecdaa091ef614c5c1e20a2a4665e)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:17 -07:00
Scott Rifenbark
10a0dca45a documentation/poky-ref-manual/development.xml: Edits for Eclipse and Anjuta
I commented out a large section of the chapter, which went into detail
on how to locate, install, configure, and use the Yocto Eclipse
plug-in.  This information is redundant in this book and is better
explained in the ADT Manual.  This chapter now references that
information instead.

(From OE-Core rev: f4f4efbf3f0b19fdb05ddf48ab48b4f42109a289)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:17 -07:00
Scott Rifenbark
347bbd1d4b documentation/poky-ref-manual/poky-ref-manual.xml: Updated rev-history table
Updated the revision history table for the manual.

(From OE-Core rev: 65c7bb8489de654cc02dcff0dfff21754e2e5ce8)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:17 -07:00
Scott Rifenbark
da39a264ed documentation/yocto-project-qs/yocto-project-qs.xml: Edits plus Matt Madison note regarding older host systems
I made a few small edits and added a reference to the
wiki page 'https://wiki.yoctoproject.org/wiki/BuildingOnRHEL4',
which has entries for older development hosts.  Right now it only
contains the RHEL4 notes, but the wiki page can be expanded as needed.

(From OE-Core rev: a23acbd48ee911d9882a78491280977fb62ea156)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:17 -07:00
Scott Rifenbark
292488656d documentation/yocto-project-qs/yocto-project-qs.xml: cleaned out another "YP"
Removed "YP" from another spot in the manual.

(From OE-Core rev: 22f701b97a8d1412638f5ae79343a37791dde9e6)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:17 -07:00
Scott Rifenbark
6683544362 documentation/bsp-guide/Makefile: Updated to include PDF in tarball
For some reason the PDF version of this manual was not being included
in the tarball created by the Makefile.  I fixed this.

(From OE-Core rev: f8ec09ab31c04b2ae9570b71174f50c58ad09f00)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:17 -07:00
Scott Rifenbark
5ac1d6be71 documentation/poky-ref-manual/usingpoky.xml: [BUGID# 929] - Note added warning about switching up GPL versions
In chapter 2, where we talk about building images, I added a new
note indicating that the user should not switch between different
GPL versions when trying to rebuild an image, as doing so can cause
dependency failures.

(From OE-Core rev: f84441dbcc8254062d55d2452d3d6f4bc6f907fe)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:17 -07:00
Scott Rifenbark
97b223c6fc documentation/yocto-project-qs/yocto-project-qs.xml: removed (YP) acronym
I removed this YP acronym since we never use it.

(From OE-Core rev: b37cab45b4f0dbba0dedbbbe240e91db30df4b8c)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:16 -07:00
Scott Rifenbark
96bc30cf03 documentation/bsp-guide/figures/bsp-title.png: Updated PNG file for title
I updated the figure for the title so that it uses the same color
scheme as the other manuals.

(From OE-Core rev: 23c40367c56e838bb9c1ad89cec8ca2e563a40a7)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-06 17:44:16 -07:00
Saul Wold
1d8535ccb7 web: update svn to 131 to fix build issue
Fixes [YOCTO #974]

(From OE-Core rev: a432001590b1420e6d13b70d5f2711151a304ecd)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-05 16:11:31 -07:00
Scott Garman
6244cbc945 gnome-doc-utils: Add -nonet option to xsltproc
This adds the -nonet option to xsltproc invocations, which fixes
compile errors when building gnome-doc-xslt-de.omf.

Also add intltool-native to DEPENDS, which was discovered to be
needed when building this recipe.

(From OE-Core rev: c6f791853acf8fec922c1ebcf62195be2615870d)

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-05 15:47:20 -07:00
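
A minimal sketch of the dependency change described above as it would appear in the recipe (surrounding recipe contents omitted):

    DEPENDS += "intltool-native"
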
Scott Garman
f76a807400 openjade-native: run install-catalog only during do_populate_sysroot
(From OE-Core rev: 638a3d15a84edfdd218a8c40306482f6c086b4e7)

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-05 11:54:50 -07:00
Scott Garman
b1febbcb26 docbook-sgml-dtd-native.inc: run install-catalog only during do_populate_sysroot
(From OE-Core rev: 34ec9086c429bef167554c57a80b5f69d7e61a21)

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-05 11:54:39 -07:00
Paul Eggleton
24b30e5285 netbase: automatically bring up usb0 on BeagleBoard xM
Avoids manual configuration of the BeagleBoard xM's ethernet port
(which shows up as usb0).

Fixes [YOCTO #930]

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
2011-05-05 09:55:40 -07:00
Tom Zanussi
47724b4320 boot-directdisk: fix bzImage source location
Fixes yocto [BUGID #876]

boot-directdisk.class looks in the wrong location for the bzImage to
install.  Make it look in the right place.

(From OE-Core rev: 173d04ea828e7f790ede40929c8ffd7340b4c077)

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-05 09:55:19 -07:00
Dexuan Cui
058625b713 lttng-viewer: explicitly add linkage to lttvwindow
Fixes [YOCTO #412]

Also update FILES_${PN}.

(From OE-Core rev: 6252898534a885237a3df9c8cb4ea1fdd43f65c5)

Signed-off-by: Dexuan Cui <dexuan.cui@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-05 09:52:34 -07:00
Paul Eggleton
7d75d2cd94 initscripts: remove -i from halt/reboot arguments and allow override
Introduces a variable HALTARGS which specifies the arguments sent to
halt and reboot, and sets the default value to "-d -f", dropping the
previous -i (shutting down all network interfaces before halt/reboot,
which causes a freeze with an NFS root).

Fixes [YOCTO #997].

(From OE-Core rev: ace183894a5319cd73c94fd2653bbe52f29dca0b)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-05 09:52:34 -07:00
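
A sketch of overriding the new variable from a local or distro configuration; the value below simply restores the old -i behaviour and is illustrative only:

    HALTARGS = "-d -f -i"
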
Martin Jansa
4a17fc8a81 git: use DESTDIR=$D instead of prefixing all variables with $D
* with git-native and rm_work enabled I've noticed git fetcher errors like:
  warning: templates not found /OE/shr-core/tmp/work/x86_64-linux/git-native-1.7.3.4-r0/image/OE/shr-core/tmp/sysroots/x86_64-linux/usr/share/git-core/templates
  fatal: Unable to find remote helper for 'http'
  for every recipe using http:// for the git repo
* after this change template_dir points to
  /OE/shr-core/tmp/sysroots/x86_64-linux/usr/share/git-core/templates
  without that workdir prefix
* haven't tested the target recipe, but I guess it needs a different fix, or
  maybe it worked before and gets broken by this change

[sgw: removed RFC comment, target patch to follow]
(From OE-Core rev: 4b2a6fa780567c0876540bb89af78d5c778985cb)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Koen Kooi <koen@dominion.thruhere.net>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-05 09:40:03 -07:00
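
A hedged sketch of the do_install pattern the commit describes, not the exact git recipe contents (the make target and any extra arguments may differ):

    do_install () {
        # let make prepend the staging location itself via DESTDIR
        oe_runmake install DESTDIR=${D}
    }
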
Saul Wold
1327b6b06b linuxdoc-tools-native: add groff-native to DEPENDS
groff-native is needed to ensure that configure finds
the groff-native binary instead of the host's groff;
this ensures the correct macros are used (-ms vs -mgs).

(From OE-Core rev: 1126e4daa69e3f365b060ef235b40e0f97a61705)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-04 18:58:53 -07:00
Tom Zanussi
8bc71db41f core-image-directdisk: add LIC_FILES_CHECKSUM
Fix for build failure.

(From OE-Core rev: 1d7f9211af04bcf77061eaad8a272e976c2d7c1d)

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-04 18:58:52 -07:00
Yu Ke
0137a98b28 git fetcher: make tag back to work, fix Yocto bug 972
In the current git fetcher, tags do not work due to commit http://git.pokylinux.org/cgit/cgit.cgi/poky/commit/?id=5920e85c561624e657c126df58f5c378a8950bbc. A tag is not in SHA-1 form, so it is treated as invalid and silently replaced by the latest revision.

To fix this, the patch treats the tag name as a branch name, so it is handled correctly later. Thanks to Richard for reviewing and proposing the better approach.

Fix [YOCTO #972]

CC: Richard Purdie <richard.purdie@linuxfoundation.org>

Signed-off-by: Yu Ke <ke.yu@intel.com>
2011-05-04 18:58:52 -07:00
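
A hypothetical SRC_URI showing the fetch-by-tag case this commit makes work again (the repository URL and tag name are made up for illustration):

    SRC_URI = "git://git.example.com/project.git;protocol=http;tag=v1.0"
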
Saul Wold
326eb3f2cc libsdl: add SRC_URI Checksums
(From OE-Core rev: fea759adc52456c890b245a458e9053e94e122d0)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-04 18:58:52 -07:00
Scott Garman
5ed5ed5a0e cdrtools: recipe and patch cleanup
* Recipe cleanup, added missing metadata fields and fixed
  whitespace issues
* Added Upstream-Status to patches
* Confirmed that CVE-2003-0655 does not apply to this recipe
  as rscsi is not packaged

(From OE-Core rev: f7c35ad6267c7dfd37bad9c7521488c329f879b5)

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-04 18:58:52 -07:00
Saul Wold
e6668220f2 python: add missing ctypes modules
Contributed by Martin Jansa via OE

Fixes [YOCTO #1003]

(From OE-Core rev: 2870697f08c171f455dbba03dd529b8c4cf11937)

Signed-off-by: Antonio Ospite <ospite@studenti.unina.it>
Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-04 18:58:52 -07:00
Saul Wold
372e52ff6c xproto: Add space to EXTRA_OECONF_append
Signed-off-by: Saul Wold <sgw@linux.intel.com>
2011-05-04 18:58:52 -07:00
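
An illustrative sketch of why the space matters: the _append operator does not insert a separator by itself (the option name here is made up):

    EXTRA_OECONF_append = " --disable-example"   # note the leading space
    # without it, the new option would be glued onto the previous value
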
Saul Wold
3da8a8b9b9 perl: fix Configure-multilib.patch
Thanks to Gary Thomas for his input on fixing this for Ubuntu 11.04

Signed-off-by: Saul Wold <sgw@linux.intel.com>
2011-05-04 18:58:52 -07:00
Khem Raj
841d084555 perl-native_5.12.2.bb: Fix compilation on Ubuntu 11.04-alpha
Ubuntu has moved eglibc to /usr/lib/${arch}-linux-gnu and
/lib/${arch}-linux-gnu, so we need those paths to be added to glibpth in
Configure.

Currently we set LD=ld in the environment for recipes inheriting the native
class. This overrides the LD settings in perl's Makefiles, and perl then tries
to link by calling ld, which does not work since it uses -l<x> on the command
line and the Ubuntu linker does not seem to look in the new location for these
libraries. It is better to use gcc for linking here anyway.

[With tweak from Tom Rini to use CCLD, not LD]
(From OE-Core rev: 8ba700a4c593fd52bd01b6272b4c8285a71964f7)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-04 18:58:52 -07:00
Darren Hart
5a4d5b9c43 kernel-rt: use correct branch names and new git SRC_REV format
The RT kernel recipe was not updated to reflect the new git SRC_REV format nor
to take advantage of the recent updates made to the underlying infrastructure.
These fixes bring it up to date with the other linux-yocto* recipes and fix
various build issues people were seeing.

(From OE-Core rev: 690e87a2ffe8caa16379be26eb356c5bded17c1f)

Signed-off-by: Darren Hart <dvhart@linux.intel.com>
Cc: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-04 15:43:26 -07:00
Chris Larson
7959e40061 lockfile: ask for forgiveness, not permission
Create the lockfile directory if it doesn't exist, rather than erroring out
(the existing check was also racy).

Also improve the wording of the error message shown when the lockfile's
directory is not writable.

Note for the future, this function should be improved, particularly with
regard to its exception handling. It should be catching the *exact*
exception(s) it will encounter when the file is locked, and continuing in that
case only. If it did that, there'd be no need for the proactive directory
writability check, as bb.utils.lockfile() would raise an appropriate IOError
for that case.

(Bitbake rev: 238151441c74db53d6e4d4753f4f96c32f6f13b6)

Signed-off-by: Chris Larson <chris_larson@mentor.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-04 15:43:26 -07:00
Richard Purdie
3d2c481ab0 package_rpm: Ensure we take the sstate shared lockfile in the place we write files
The point where we need to take the lock is when the rpm files are written into
the deploy rpm directory. Since sstate performs the actual installation of the
files, that is the point where we need to take the lock. This also stops the
deploy/rpm directory from being accessed for a lock before it exists.

[YOCTO #797]
[YOCTO #925]

(From OE-Core rev: 833a1e970f087dfcb32967cee3e24540f041cde0)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-04 15:43:25 -07:00
Xiaofeng Yan
232d7322b5 task-poky-lsb: Add some packages required by lsb test suite
Add the gdk-pixbuf-loader-(bmp,ico,ani) packages to task-poky-lsb.bb

(From OE-Core rev: fb88c2600d75302f8d55b710c364b4976ec0473b)

Signed-off-by: Xiaofeng Yan <xiaofeng.yan@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-04 15:43:25 -07:00
Qing He
b116631418 rsync: upgrade to version 3.0.8
[YOCTO #983]

from 3.0.7
fixes CVE-2011-1097

(From OE-Core rev: ea97fcf84c2e1388a62a80cc771de9f3f409afce)

Signed-off-by: Qing He <qing.he@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-04 15:43:25 -07:00
Qing He
9388aa62cf openssl: upgrade to version 0.9.8r
[YOCTO #979]

from 0.9.8p
fixes CVE-2010-4180, CVE-2010-4252, CVE-2010-0014

(From OE-Core rev: e28e11930a22a4e89075e7e026e58c081f984ddf)

Signed-off-by: Qing He <qing.he@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-04 15:43:25 -07:00
Qing He
10ac9442f2 libxml2: upgrade to version 2.7.8
[YOCTO #978]

from 2.7.7
fixes CVE-2010-4008

(From OE-Core rev: cd13726f1eb1f77f55cf202830d6bf13b47b0860)

Signed-off-by: Qing He <qing.he@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-04 15:43:25 -07:00
Qing He
184a5c1c0a libexif: upgrade to 0.6.20
[YOCTO #977]

fixes CVE-2007-6351, CVE-2007-6352, CVE-2009-3895

(From OE-Core rev: 40da3c239406fe6efbf79182ce7fbc53937cf8ca)

Signed-off-by: Qing He <qing.he@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-04 15:43:25 -07:00
Qing He
472a3b34d8 bitbake.conf: fix MACHINE_ARCH
Replaces all '-' in $MACHINE with '_'; fixes [YOCTO #946]

(From OE-Core rev: 69b3a11d90579bca687ad3461e7a5cd325079fe6)

Signed-off-by: Qing He <qing.he@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-05-04 15:43:24 -07:00
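
A small illustration of the substitution (the machine name is hypothetical):

    MACHINE = "example-board"
    # MACHINE_ARCH is then derived as "example_board", with '-' replaced by '_',
    # so the value is safe to use in package architecture names
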
Paul Eggleton
3c81ae17ea bitbake/runqueue: fix clash when setscene & real tasks done in same build
If a build causes a real task to be run when the setscene task has already
run then it was possible for dependent packages to be rebuilding at the same
time as a rebuild of the packages they depended on, resulting in failures
when files were missing. This change looks in the setscene covered list and
removes anything where a dependency of the real task is going to be run (e.g.
do_install is going to be run even though the setscene equivalent of
do_populate_sysroot has already been run).

As an additional safeguard we also delete the stamp file for the setscene
task under these circumstances.

Fixes [YOCTO #792]

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
2011-05-04 15:43:24 -07:00
Scott Garman
1528b88657 docbook-dsssl-stylesheets-native: run install-catalog only during do_populate_sysroot
(From OE-Core rev: 620679dbb552d67c0697497005685df932e1b050)

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-04-19 13:41:39 -07:00
Joshua Lock
65a1eaf069 elfutils: remove unused variables to fix compilation with GCC 4.6
Unused variables trigger warnings in GCC 4.6, which are caught by -Werror as
used in the elfutils makefiles, and therefore the build fails.

This patch adds some consolidated fixes from upstream to remove the unused
variables; they will no longer be required as of elfutils 0.152.

Signed-off-by: Joshua Lock <josh@linux.intel.com>
2011-04-06 11:51:25 +01:00
Joshua Lock
6d853bb196 xserver-xf86: explicitly disable fop document generation
Signed-off-by: Joshua Lock <josh@linux.intel.com>
2011-04-05 12:06:42 +01:00
Joshua Lock
74aeb0a2ec openjade: fix build with GCC 4.6
In GCC 4.6 the compiler no longer allows objects of const-qualified type to
be default initialized unless the type has a user-declared default
constructor.

Patch from Gentoo bugzilla: http://bugs.gentoo.org/show_bug.cgi?id=358021

Signed-off-by: Joshua Lock <josh@linux.intel.com>
2011-04-05 12:06:42 +01:00
Joshua Lock
b02f8a482d libx11: disable building of specs
Generating PostScript specs fails on Fedora 15; I don't *think* we need them,
so disable them.

Signed-off-by: Joshua Lock <josh@linux.intel.com>
2011-04-05 12:06:42 +01:00
Joshua Lock
0a11038665 xorg-[lib-common|proto-common]: disable use of fop document generation
On Fedora 15 I see a huge Java backtrace when document generation runs for
some xorg libs. As fop is automatically detected, with the possibility of
detecting fop on the host whilst doing target builds, the safest bet is to
explicitly disable fop for document generation.

Signed-off-by: Joshua Lock <josh@linux.intel.com>
2011-04-05 12:06:41 +01:00
Joshua Lock
01ab37c9ce libx11: add missing SRC_URI hashes
Signed-off-by: Joshua Lock <josh@linux.intel.com>
2011-04-05 12:06:41 +01:00
Joshua Lock
db95181f8f python-native: add missing SRC_URI hashes
Signed-off-by: Joshua Lock <josh@linux.intel.com>
2011-04-05 12:06:40 +01:00
Scott Rifenbark
8b6416db1e documentation/adt-manual/adt-prepare.xml: Added instruction for building ADT tarball
I added a note in the "Installing the ADT" section (2.1) saying that
if you need to build the ADT tarball you can use
'bitbake adt-installer'.  I also changed the location of the
toolchain from '...yocto-0.9' to '...yocto-1.0'.  Finally,
I changed the host sub-directory in the toolchain directory
from 'i586' to 'i686'.

(From OE-Core rev: 18124c5065fc570e672d068e915e0f476d20379c)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-25 17:40:53 +00:00
Scott Rifenbark
17600d23d8 documentation/yocto-project-qs/yocto-project-qs.xml: [BUGID# 931] - Removed Fedora 14 bitbake native note
[BUGID# 931] - I have removed the Note indicating that the user must run
'bitbake make-native' if running Fedora 14.
This is no longer a requirement for YP Release 1.0

(From OE-Core rev: 33a529f94c494531dbbfca5050898eb4c42f64df)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-25 16:45:38 +00:00
Scott Rifenbark
e347cd769a documentation/yocto-project-qs/yocto-project-qs.xml: Bug reference added courtesy Colin Walters
In the Quick Start there is a note indicating that you should run
'bitbake make-native' followed by 'bitbake poky-image-sato' if you
are running Fedora 14 or another distribution that ships with GNU.
Colin Walters submitted a patch that offered a URL for further
explanation of a Make bug.  The URL is
http://www.mail-archive.com/bug-make@gnu.org/msg06220.html.  Rather
than submit Colin's patch verbatim, I updated the note to include the
reference with slightly different wording.

This extra information submitted by Colin will be very helpful.

(From OE-Core rev: d32ccd0ce620942447c7b49c6117c2ea7eff46ff)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-25 16:45:26 +00:00
Paul Eggleton
25437936c4 README.hardware: add Freescale MPC8315E-RDB; other minor tweaks
* Add Freescale MPC8315E-RDB instructions (based on Wind River README passed
  on by Bruce Ashfield)
* Add short info paragraph for RouterStation Pro (to match BeagleBoard)
* Add example for connecting to RouterStation Pro serial console with picocom

(From OE-Core rev: 58d443a2ff300ff290486b2153f8a90a8ca2a89b)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-25 16:45:16 +00:00
Richard Purdie
063ede8698 poky.conf: Set version to 1.0
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-25 16:44:28 +00:00
Paul Eggleton
82af8b9fb6 sstate.bbclass: always delete stamp files in sstate_clean
For safety, always delete the stamp files in sstate_clean regardless of
whether the manifest file exists or not.

(From OE-Core rev: f781c35da9a11eefdb06bda72ca89753df863efa)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-19 01:33:21 +00:00
Richard Purdie
7b8b77444d sstate: Ensure a clean removes setscene stamps as well as the main task stamps
(From OE-Core rev: d07fe8aef537a8bcb96a802e18d7c980ff4c5ce2)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-19 01:33:04 +00:00
Paul Eggleton
2176606ff7 sstate.bbclass: avoid deleting unrelated stamp files
Avoid deleting stamp files whose names contain the current task's name as a
substring. This will be especially important for example if do_package_write
is ever made an sstate task (as it would previously have deleted the stamps
here for do_package_write_ipk etc.)

(From OE-Core rev: ea743ea30e2289733d27979e8ec921648342da0e)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-19 01:32:59 +00:00
Richard Purdie
8c920456e4 xserver-nodm-init: Mark as machine specific after recent rootless X changes
Fix supplied by ke.yu@intel.com

[YOCTO #906]

(From OE-Core rev: f0afe5827570eff5442d2f9a9846b4098e5c3333)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-19 01:32:20 +00:00
Scott Rifenbark
a80791c568 - documentation/poky-ref-manual: Notes added for non-GPLv3 builds
[BUGID# 873] - Added a note in the Images Appendix indicating that
building an image without GPLv3 components is only supported for
base and minimal images.  Also included the two changes you have to make
to the local.conf file for the build.

Added a note in the second chapter in the section on building images.
The note indicates the same as in the appendix but does not go into the
local.conf file detail.

(From OE-Core rev: c7960a2e820d7ddb8649ab0b27b3f04843f7af0d)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 23:33:22 +00:00
Darren Hart
ed8bcb28b2 qemu: make warning messages consistent in format
Try to make the output of the qemu script a bit more consistent by using the
same format for the various warning messages:

WARNING: description of warning.
Detailed description of warning, actions taken, and/or instructions to user.

(From OE-Core rev: 7895377378c197289b82e3bbc059454770911abd)

Signed-off-by: Darren Hart <dvhart@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 23:20:46 +00:00
Darren Hart
a5d2854104 qemu: warn user if nVidia libGL is detected (leads to qemu segfault)
nVidia's OpenGL libraries are known to have compatibility issues with qemu,
resulting in a segfault. As different workarounds are required for the different
distributions, just warn the user to explain the qemu segfault that will follow,
and suggest a workaround using LD_PRELOAD.

[YOCTO #649]
[YOCTO #698]

(Original patch from Edwin, Darren modified warning and git commit wording)

(From OE-Core rev: 2247ffe954b5a71f82944d23141c836b38716654)

Signed-off-by: Darren Hart <dvhart@linux.intel.com>
CC: Mark Hatle <mark.hatle@windriver.com>
CC: Zhai Edwin <edwin.zhai@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 23:20:45 +00:00
Mark Hatle
d3d7b1d679 db: Fix path of arm-thumb patch
Newer versions of patch, such as in Fedora 14, don't like ".." in
the middle of the path of the file to be patched.

In order to fix the issue we have to hand-apply the patch instead of using
the normal mechanisms.  The only flaw with the os.system(...) approach is that
if it fails we don't get any notification or a resolver failure.

(From OE-Core rev: 4e592efe8c5ff918a77f7b7b2c17a6b698b1dd68)

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 23:20:45 +00:00
Tom Rini
4efe1437dd initscripts: Make umountfs a bit more robust, bump PR
Avoids error messages on shutdown.

Imported from OE commit 072cad0100fd828e7fee8f3fa3ade23e4306b394

(From OE-Core rev: 5188687660f5aa37014aac50c43e141f032455d7)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 23:20:44 +00:00
Paul Eggleton
ee78d54023 nfs-utils: fix "sh: bad number" error on start/stop of nfsserver
Adds a test to avoid the "sh: bad number" error message during service
start or stop of nfsserver when there is no NFS_SERVERS value set in
/etc/default/nfsd.

(From OE-Core rev: 0f2debd9360abac54d3e44551af309f0bdde96e7)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 23:20:43 +00:00
Paul Eggleton
bfdabe46df busybox: enable unmount all feature
This allows "umount -f -a -r" in our initscripts to actually do something.

(From OE-Core rev: 578c938968857976f888f708f1f57cf862c7b3c4)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 23:20:43 +00:00
Scott Rifenbark
236357c05a documentation/yocto-project-qs/yocto-project-qs.xml: Note added about proxy
I added a note to reference the FAQ entry in the Poky manual that describes
how to work around proxy and firewall issues that can hold up fetching the
source code during a build.

(From OE-Core rev: f9abba290157c122f36aed5e52f1a0f792e3add2)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 23:20:42 +00:00
Scott Rifenbark
06ba4f48dc documentation/yocto-project-qs/yocto-project-qs.xml: Added references to FAQs
At the beginning of the manual I added references to the FAQs we
support.

(From OE-Core rev: 615a015189f3b09ea928f288516be1f90447cbf2)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 23:20:41 +00:00
Scott Rifenbark
d7635a9972 documentation/yocto-project-qs/yocto-project-qs.xml: 1.0 edits applied to examples
This is a first guess at the correct example commands and directory
names for the Bernard 5.0 release.  I don't have any real directories
available to look at, and doc changes are supposed to be frozen before the
actual build.  So these are guesses and will need to be reviewed.

(From OE-Core rev: c052537216395019bc436291e1c2ec43c3abc3ae)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 23:20:41 +00:00
Scott Rifenbark
4a8dd99a9f documentation/yocto-project-qs/style.css: Updated note text color
Had to update the note and tip text color to white to match other
books.

(From OE-Core rev: 6d091c39d040525becf5b5ef719356d5d1e43bdb)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 23:20:40 +00:00
Scott Rifenbark
bcc330c80e documentation/yocto-project-qs/style.css: Updated styles
I updated the style sheet to use Yocto blue for the headings and got
rid of the green tip and note stuff.  This style matches the other
style sheet now.

(From OE-Core rev: d8661de305adcb95c281238255cd84e1c41d5469)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 23:20:39 +00:00
Scott Rifenbark
1743bba3ea - documentation/yocto-project-qs/yocto-project-qs.xml: added groff package
[BUGID# 857] In the packages section for the list of Debian-based system package
requirements I added 'groff'.

(From OE-Core rev: b67204a99fe34a165f97dd6bb5191735a4632678)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 23:20:39 +00:00
Richard Purdie
74a635b919 sstate.bbclass: Turn absolute symbolic links into relative ones for sstate packages
(From OE-Core rev: 655139c2644d085331f4f6814119fbd904ff244b)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 23:20:38 +00:00
Paul Eggleton
1fc2d92bf6 README.hardware: update for 1.0 release
* Update to refer to Yocto documentation
* Change title as suggested by Scott Rifenbark
* List all qemu* machine targets
* Remove machines no longer in core layer
* Add instructions for routerstationpro (originally based on an email from
  Mark Hatle)

(From OE-Core rev: f8e9b15aa694b0f6d3373c2b6bf8904fdb0c7b86)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 18:00:09 +00:00
Scott Rifenbark
df56b575cd documentation/poky-ref-manual/faq.xml: Added entry about proxy and firewall
This is an explanation of how to get through a proxy or around a
firewall when Poky is trying to find and download sources.

(From OE-Core rev: 426df8458bb37c81afc6fe03f0e1300985c8d059)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 18:00:09 +00:00
Scott Rifenbark
8753278af8 documentation/poky-ref-manual/extendpoky.xml: Formfactor path corrected
I changed the path 'meta/packages/formfactor/files/config'
to 'meta/recipes-bsp/formfactor/files/config' per Joshua Lock's
instruction for correctness.

(From OE-Core rev: b89ea64db2978f0ec9271565590a5a0529d396f1)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 18:00:09 +00:00
Scott Rifenbark
6f139706ae documentation/poky-ref-manual/extendpoky.xml: small edits
Various small edits and format changes.

(From OE-Core rev: 259128eb1b7676a71d5c0df4ef5db065ba5c3c88)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 18:00:09 +00:00
Scott Rifenbark
b37e6a2234 documentation/poky-ref-manual/usingpoky.xml: Small edits
I made some minor edits.

(From OE-Core rev: bb6fbb484ec912aabca77fd4d124c97fc7f956e1)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 18:00:09 +00:00
Scott Rifenbark
f32ea8feff documentation/poky-ref-manual/faq.xml: Added three FAQ entries
Added three FAQ entries per Joshua Lock.

1. How do I disable the cursor on my touchscreen device?
2. How do I make sure connected network interfaces are brought
   up by default?
3. How do I create images with more free space?

(From OE-Core rev: 9cfed91ee7c0a619e52abc098c20d6ed8b69416b)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 18:00:09 +00:00
Scott Rifenbark
6f7f0810e0 documentation/poky-ref-manual/usingpoky.xml: More BitBake changes
Forgot to search for "Bitbake" occurrences.  These are now changed
to "BitBake."

(From OE-Core rev: 982826b61bf68244fad46ef52b5a203e648e330b)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 18:00:08 +00:00
Scott Rifenbark
8dee7adf47 documentation/poky-ref-manual/usingpoky.xml: grammar fix and BitBake fix
Fixed a grammar problem and then did a search and replace of
"bitbake" with "BitBake".

(From OE-Core rev: a25074cf7f3383ea3963c4dabb9507af34f2e3df)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 18:00:08 +00:00
Scott Rifenbark
bf4f7761b3 documentation/poky-ref-manual/introduction.xml: Wording and release update
I changed several occurrences of "Yocto Project" to "the Yocto Project."
Also changed the statement about what Poky release the book supported.
It previously said "applies to Poky Release 4.0 (Laverne)."  I changed
this to "applies to Poky Release 5.0 (Bernard)."

(From OE-Core rev: 021bc37ad0698c567f9b7089fde99fe985ae3551)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 18:00:08 +00:00
Scott Rifenbark
6f5703e473 documentation/poky-ref-manual/introduction.xml: Removed link
Removed a link to the Intel website.  Upon testing this link I
discovered that it loads the Intel site into the current web
page and then disables the back button.  Rather annoying.  I tried
to change the link to open a new window but couldn't get the
ulink.target parameter to work.  I ran out of time trying to figure
it out, so I removed the link.

(From OE-Core rev: 8f75a06300714938e79800e0e140dd76ba42de86)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 18:00:08 +00:00
Scott Rifenbark
c8bab9bca4 documentation/poky-ref-manual/faq.xml: Spell Check
Performed a spell check and corrected several problems.

(From OE-Core rev: e26e9f41eac1bb34a7d9276921d14e843444622d)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 18:00:08 +00:00
Scott Rifenbark
b209beb54b documentation/poky-ref-manual/poky-ref-manual.xml: Updated title page
Updated the title page by adding a new revision entry for the manual.
Not sure of the current revision numbering scheme so I reset it to
Revision 1.0 to match that of the release.

(From OE-Core rev: 1604f6543eba3757b08bff96e75d045b809de544)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 18:00:08 +00:00
Scott Rifenbark
c1c7f61e80 documentation/poky-ref-manual/style.css: Updated to match other manuals
I have updated some styles so that the GIT manuals look more
consistent and have better color schemes for the section headings.

(From OE-Core rev: 747dbbf250b96cf43eba2b7227226607b9605da4)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 18:00:07 +00:00
Scott Rifenbark
48ea0ca37f documentation/poky-ref-manual/extendpoky.xml: Spell check
A spell check was performed on this chapter.

(From OE-Core rev: 20ef5e573e0c835a2f359f61aa89993c3a2244a1)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 18:00:07 +00:00
Scott Rifenbark
60a7bca27e - documentation/poky-ref-manual/extendpoky.xml: Added text for choosing server
[BUGID# 293] - I added text at the end of section 3.2.3 "Customizing Images
Using Custom IMAGE_FEATURES" to include an explanation of the two servers
Poky uses for images by default, and how to change the IMAGE_FEATURES
variable to configure the server.  This change is part of the
fix for BUGID# 293 and was suggested by Scott Garman.

(From OE-Core rev: 13041874070ea2235f8c3abe156ae5e940b15f5f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 18:00:07 +00:00
Richard Purdie
6e1e21942e bitbake.conf: Increase image overhead factor to account for rpm/zypper database size
(From OE-Core rev: f4305f960cb788d73c5132aa5a9f930e85c20385)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 18:00:07 +00:00
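
The knob being adjusted is the IMAGE_OVERHEAD_FACTOR multiplier in bitbake.conf; a hedged sketch with an illustrative value only (the exact figure chosen by this commit is not shown here):

    IMAGE_OVERHEAD_FACTOR ?= "1.3"   # multiplies the measured rootfs size to leave headroom
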
Scott Rifenbark
8e70535583 documentation/kernel-manual/kernel-how-to.xml: Spell checked
Performed a spell check and found a couple items.

(From OE-Core rev: 45039d008519c13f97d9b195bba4505b3865b5ea)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 17:59:44 +00:00
Scott Rifenbark
5f5c9d133b documentation/kernel-manual/kernel-manual.xml: Updated the title page
I updated the title page to add Revision 1.0 to the Revision history
table.

(From OE-Core rev: 5062c0e09b5e2c4894ccfe322977fdd432b87e39)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 17:59:44 +00:00
Scott Rifenbark
6be2a5e54b documentation/bsp-guide/bsp.xml: Spell check
Performed a spell check and caught a couple small things.

(From OE-Core rev: 17ae7d1e05df495a5e27168cdcdfbcf96337a3f9)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 17:59:44 +00:00
Scott Rifenbark
334ff1fd4f documentation/bsp-guide/style.css: Updated Style Sheet
I updated the HTML style sheet to match that of the other online
manuals.  Section heads are now in Yocto blue and the tip box
color is in line with Yocto color schemes.

(From OE-Core rev: 815b71a6c66e529959a12bd9aa6aabc0afc78bb1)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 17:59:44 +00:00
Scott Rifenbark
b0df49cb10 documentation/bsp-guide/bsp-guide.xml: Updated Title Page
I updated the revision table for the manuals to have better
wording and to go from oldest to newest top down.

(From OE-Core rev: 7a4f802bb4d12f863a13fc4ba095a3de149aa6df)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 17:59:43 +00:00
Saul Wold
a7b0c87a97 util-linux: Setup for GPLv2 Recipe
* Add task to remove the GPLv3 lscpu code
 * Add patch to remove the reference to lscpu in Makefiles

(From OE-Core rev: ebd181cf6ce3fe233b61aef3af093228aa925f4d)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 17:58:54 +00:00
Nitin A Kamble
39734c77f7 coreutils-6.9: fix man page building for the gplv2 recipe
Added a new patch:
   coreutils-6.9/fix_for_manpage_building.patch
The target recipe now depends on the native recipe for the manpage
generation.

A similar fix may be needed for the GPLv3 version of this recipe.

(From OE-Core rev: 543577c25b5a4e89a3ab15ee28e754b71c2a43d5)

Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 12:09:06 +00:00
Lianhao Lu
3f689d6bfd toolchain-scripts.bbclass: Added --sysroot to CPPFLAGS.
[YOCTO #908] Added CPPFLAGS into the environment file and added
--sysroot to it.

(From OE-Core rev: 360daf019101d9b4d08ab1e3d279b08c02e9749e)

Signed-off-by: Lianhao Lu <lianhao.lu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 12:08:55 +00:00
Kevin Tian
dc08a1f933 slang: specify --x-includes to pass qa sanity check
slang by default hardcodes a list of host dirs to search for X header
files, which may break the QA sanity check. Use --x-includes to specify
the sysroot as the fix.

Fix [YOCTO #907]

(From OE-Core rev: 35c9ed7d49309ce0babbf93e205fb2dab117c69f)

Signed-off-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 12:08:42 +00:00
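
A hedged sketch of passing the option from the recipe; the staging path variable shown is an assumption and the real fix may use a different path:

    EXTRA_OECONF += "--x-includes=${STAGING_INCDIR}"
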
Kevin Tian
8be338ed08 bitbake.conf: add POKYBASE to BB_HASHBASE_WHITELIST
Otherwise do_populate_lic varies its checksum when using a different source
directory, and thus further impacts do_package sstate reuse.

Fix [YOCTO #894]
Possibly fix [YOCTO #903]

(From OE-Core rev: 7a0922ba2e7a33005a8830ff8a4e6b1408b29aa5)

Signed-off-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 12:08:28 +00:00
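
A minimal sketch of the whitelist addition described above, shown as an append for illustration:

    BB_HASHBASE_WHITELIST += "POKYBASE"
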
Saul Wold
4c7131c26a gettext: Upgrade GPLv2 version to 0.16.1
This adds a couple of new patches for handling various autoconf
and aclocal issues.  It also hardcodes GETTEXT_MACRO_VERSION
to 0.17 to match the native gettext.

(From OE-Core rev: e897103a58ad672cc87d2bab3ec45501ef09f8f1)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 12:08:12 +00:00
Saul Wold
3d732748b6 poky.conf: remove gnome-common from WHITELIST_GPLV3
This was due to task-poky-extended pulling in qemu-config for
non-GPLv3 poky-image-basic

(From OE-Core rev: 5abe730df009931f5745aadf613d64fe964f94b2)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 12:07:23 +00:00
Saul Wold
4d20c5ffd1 poky.conf: add additional Libraries to the LGPLv2 Whitelist
The libgcc and gcc-runtime libraries are both LGPLv2 libraries although they
are part of the larger GCC GPLv3 code. There are clearly called-out
exceptions for these libraries.

(From OE-Core rev: 63c68ba8a546bd7f05fb048fb2abaa5cfb5eb16c)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 12:07:01 +00:00
Saul Wold
e6a8e53a8d poky-image-basic: remove POKY_BASE_INSTALL
Removing POKY_BASE_INSTALL and replacing it with task-poky-boot
effectively removes task-poky-extended, which was pulling in unwanted
recipes.

(From OE-Core rev: aa42a75e784510e5ee76dc227758bbc7dc650fb3)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 12:06:48 +00:00
Saul Wold
fe6e54773e extended tasks: move binutils from basic to lsb
(From OE-Core rev: 5e6a574db545ea793480765ffb1e69f3723b59bf)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-18 12:06:21 +00:00
Richard Purdie
d659e6242b openjade-native: Run make depend to ensure dependencies are correct and avoid parallel make failures
[YOCTO #877]

(From OE-Core rev: 238a4eb4f4a60e0e0b8d675bb547a423b9a80c9f)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-17 12:38:22 +00:00
Richard Purdie
0cfdb4a029 gnome-vfs: Force acl to be disabled since its not a dependency
Without this patch, if acl was built beforehand, the build could find
the library, resulting in a non-deterministic build.

Sadly there is no --disable or --without option available, so this
approach is the only mechanism available.

(From OE-Core rev: 629e0702161886f1fad9552ce451ed2b7dc77967)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-17 12:38:08 +00:00
Liping Ke
37e29b5434 ADT: bug fix for naming and do_patch sequence
This patch fixes the x86-64 image name bug; also,
do_patch must be done before do_deploy.

(From OE-Core rev: 95e27a0f604796b30d7e7e1d58d0925942cfefa9)

Signed-off-by: Liping Ke <liping.ke@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-17 12:37:55 +00:00
Richard Purdie
ed949c59cf gnu-config: Use perl from the environment, not a hardcoded path
Using the hardcoded perl binary can cause conflicts between the files in the native
sysroot and those of the build system perl. By using perl from the environment
we can at least ensure a consistent perl environment.

Patches taken from OE.dev commits:
be21179c5321bd0afb9221f020ac12ad75c86a3b gnu-config: use /usr/bin/env perl instead of /usr/bin/perl in gnu-configize.in
edcdefbf6e0675c1bcc1fc4f464f654223380e50 gnu-config: update also bindir change to replace /usr/bin/env instead of /usr/bin/perl

(From OE-Core rev: a508e7c03840efcd5877f4185e8f024cedb9453f)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-17 12:37:44 +00:00
Darren Hart
185f2ac9ce gtk+: remove per-machine gtk+ FULL_OPTIMIZATION in favor of tune-atom.inc
Now that the FULL_OPTIMIZATION for gtk+ has been enabled in the core
tune-atom.inc, it is no longer necessary to do so for every Atom-based
BSP.

(From OE-Core rev: 02bc593928735abb9ac5c85b9e94d0285a6f3e8c)

Signed-off-by: Darren Hart <dvhart@linux.intel.com>
CC: Tom Zanussi <tom.zanussi@intel.com>
CC: Ke Yu <ke.yu@intel.com>
CC: Richard Purdie <richard.purdie@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-17 12:37:32 +00:00
Mark Hatle
387e05af6d rootfs_rpm.bbclass: Add additional system configuration to RPM space
The additional configuration should have been there from the beginning.  The
purpose of these config files is to have a consistent Berkeley DB configuration
even if the underlying RPM version changes -- or the RPM macros change.

This likely would not cause any problems until we attempted an upgrade of
either BDB or RPM.

(From OE-Core rev: a0682191e0743ed8ec1d30567eb26d4cde864ee8)

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-16 18:12:08 -07:00
Darren Hart
806df0f8de atom-pc: work around gcc bug for core2
Fixes [YOCTO #853]

Without these added optimization flags, the matchbox-panel (and possibly other)
applications would segfault. This patch applies the changes to all machines
derived from atom-pc.conf.

[Tweaked by RP to apply to gtk+ only]
(From OE-Core rev: 5eb24b1cb57d1e0b43dfc993a635cd2b58d58fcf)

Signed-off-by: Darren Hart <dvhart@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-16 17:46:48 -07:00
Nitin A Kamble
47bbe6afe7 m4: bring back GPLv2 version 1.4.9 of m4 recipe
Note: Downgrading m4 would require rebuilding autoconf.
Fixed a circular dependency with the newer autoconf.

(From OE-Core rev: b581c965b4fbaaa819aa3809db037578f61a56eb)

Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-16 17:46:48 -07:00
Nitin A Kamble
76f0cbaf1f bison: bring back GPLv2 version 2.3 of bison recipe
(From OE-Core rev: 10ea8ad9c9281e5ad6910742f4db54d4f69ef144)

Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-16 17:46:48 -07:00
Scott Rifenbark
51316230ba documentation/adt-manual/adt-manual.xml: Updated front matter
Changed the revision history box for the manual to state the release
and the release date.

(From OE-Core rev: 15f5307f78899a10358ef426cadf5bc792d11d88)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-16 17:46:48 -07:00
Scott Rifenbark
80c4ba0e03 documentation/adt-manual/adt-eclipse.xml: Re-inserted Autotools plug-in requirement.
Jessica flip-flopped on the need for the Autotools plug-in that was
removed from the manual.  I have re-inserted the instructions for adding
this plug-in as part of the Eclipse setup.

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-16 17:46:48 -07:00
Paul Eggleton
97532bc759 sanity: detect if bitbake wrapper is not being used or pseudo is broken
* Shows a warning during sanity checking if the scripts/bitbake wrapper is
  not being used
* Check to see if pseudo is working during sanity checking, and if it
  isn't, an error occurs (if we are using the wrapper script and pseudo
  has been built; otherwise it is a warning).

Fixes [YOCTO #653]

(From OE-Core rev: 0b06b69992dd3df1dfff7bde694d7ad23d8d15a0)

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-16 17:46:48 -07:00
Mark Hatle
a7d927af35 sat-solver: Fix solution DB generation and general cleanup
Uprev sat-solver to the latest git version.  This corrects the solv db
generation with RPM5.

Refactor the patches for RPM5 support, cleaning up components of the
cmake.patch for submission upstream.  (Also fix a problem remaining
in the upstream with a mismatched function name.)

(From OE-Core rev: 89a5ad96eef411dccea817a6c37cb1e24840fdc1)

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-16 17:46:47 -07:00
Mark Hatle
e9105d8b46 package.bbclass: Fix missing debug src files
The previous change used egrep instead of fgrep.  We need to use fgrep because
there are expression-like syntaxes in some file names; we need exact matches.

(From OE-Core rev: 0de88dc9aa30f29ec1ab5cc0c541c8be859392ab)

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-16 17:46:47 -07:00
Mark Hatle
a4adf0d1ec rpm: Disable repackage on upgrade/erasure by default
[YOCTO #787]

Disable the repackage on upgrade/erase by default.  This removes the warning
message:

    error: cannot create %_repackage_dir /var/spool/repackage/1298783317

(From OE-Core rev: 3878ef5deacda480b7c689720733c03ef6b3c702)

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-16 17:46:47 -07:00
Mark Hatle
0473eb2c22 sat-solver: Add workaround for RPM 5 db construction
The first time the database is created on an RPM5 system it works
correctly.  However any subsequent rebuilds cause an empty database to
occur.

The following is from Michael Schroeder <mls@suse.de>:
> rpmdb2solv contains a hack that makes it use the unchanged already
> converted packages. To do this, it needs to get the database id
> for every installed packages by reading the "Name" index. This
> somehow doesn't seem to work with rpm5.
>
> As a workaround you can add a "ref = 0;" line at the top of the
> repo_add_rpmdb() function in ext/repo_rpmdb.c.

(From OE-Core rev: 3db47b9c2a40db8e94c30dca601b0ab82920c14f)

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-16 17:46:47 -07:00
Mark Hatle
d9d74a549d sat-solver: uprev to the latest version
Upgrade to the latest git version.  Also update the cmake.patch to enable
debugging in all configurations.

(From OE-Core rev: 04da04e371da12815e176c96d852e6bd6afc2b34)

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-16 17:46:47 -07:00
Mark Hatle
b5afabf41b libzypp: Fix release query
Libzypp is looking for the "redhat-release" file and using that version
number to help adjust the system version.  This ensures that there is
something on the system that returns a correct value.

This patch is likely not necessary.

(From OE-Core rev: a1bb79372e75269b8d135c0018955c533ba06027)

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-16 17:46:47 -07:00
Kang Kai
b09e273fab slang: export INST_LIB_DIR to fix compile problems
Export "INST_LIB_DIR" in do_install to slang/slsh to fix cross compile warnings
Fixes [YOCTO #812]

Add necessary files to run slsh.

(From OE-Core rev: 71782f844552636bb0158e7a2271e849259a48c0)

Signed-off-by: Kang Kai <kai.kang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-16 08:08:17 -07:00
Darren Hart
6d990c8ca1 formfactor: Assume HAVE_TOUCHSCREEN=0
If no machine-specific formfactor is found, the formfactor config defaults
to HAVE_TOUCHSCREEN=1. The result is that the matchbox session disables
the cursor. This can lead to a lot of churn sorting out why the cursor doesn't
appear: xorg bug, xorg driver bug, kernel drm driver bug, kms bug, many
of which come up when searching for an invisible cursor on the web.

On the other hand, if a cursor appears on a touchscreen device, one is much
more likely to reach the correct conclusion: "I need to set HAVE_TOUCHSCREEN=1
in my custom machine formfactor config". That config likely already exists, or
is needed anyway for other formfactor-specific settings such as dpi, screen
size, rotation, etc.
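
As a sketch only (the variable names follow the existing formfactor defaults;
the file path is an assumption for a custom BSP layer), a machine-specific
machconfig might look like:

    # meta-mybsp/recipes-bsp/formfactor/formfactor/mymachine/machconfig  (hypothetical path)
    HAVE_TOUCHSCREEN=1
    DISPLAY_CAN_ROTATE=0
    DISPLAY_DPI=140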

(From OE-Core rev: 361f7536e75893c51cdcb2c6449e300ee2bbd53a)

Signed-off-by: Darren Hart <dvhart@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-16 08:08:17 -07:00
Scott Rifenbark
d065ae7311 documentation/adt-manual/adt-eclipse.xml: Specified qemu options outside of brackets.
Section 4.1.3.4 discusses custom options for when you want to run
a QEMU image.  Jessica felt that we needed to stress the fact that
the options "serial", "nographic", and "kvm" must all appear outside
of the angled brackets.

(From OE-Core rev: 845770e12b6ed51db3179f42de6b8deacdff5093)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-16 08:08:16 -07:00
Scott Rifenbark
f6185c6d85 documentation/adt-manual/adt-eclipse.xml: Removed Autotools plug-in requirement
Section 4.1.2 lists plug-ins that need to be installed prior to installing
the Yocto Plug-in for Eclipse.  I removed the Autotools plug-in
requirement per Jessica Zhang's instructions.

(From OE-Core rev: 94e3971c95e0549a0857f07e1a38d7b7628f0022)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-16 08:08:16 -07:00
Scott Rifenbark
0d4aa19918 documentation/adt-manual/adt-command.xml: Initial draft of command line chapter
This is the initial draft of the Using the Command Line chapter.

(From OE-Core rev: 76bbb867d6e4e9c49c9d4a2d9c453d0cdf692c44)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-16 08:08:16 -07:00
Scott Rifenbark
4dfed39284 documentation/adt-manual/adt-eclipse.xml: Initial draft for Eclipse chapter.
This is the initial draft of the Eclipse chapter.

(From OE-Core rev: 44512573d62fa5e209bf227d6811f9a94ec42372)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-16 08:08:16 -07:00
Scott Rifenbark
fc6863bea9 documentation/adt-manual/adt-package.xml: Initial file
This file is the initial XML file for the chapter on optionally
customizing the development packages installation.

(From OE-Core rev: 2e3d29d493d6a3be006e80e75e41a0ff9ad29564)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-16 08:08:16 -07:00
Scott Rifenbark
b4af02bcc4 documentation/adt-manual/adt-prepare.xml: Initial draft of preparation chapter
This commit is the initial draft of the preparation chapter (chapter 2).

(From OE-Core rev: c32b215eb37828cd31c0c9ba288c2216fcd034de)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-16 08:08:16 -07:00
Scott Rifenbark
811b28ae39 documentation/adt-manual/adt-intro.xml: Initial text
This commit is the initial text for the introduction chapter.

(From OE-Core rev: 7c0899aa6d712e373bd1a2df1fb52dcf3a87b2fe)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-16 08:08:15 -07:00
Scott Rifenbark
55b141c756 documentation/adt-manual/style.css: Changed PNG file in the title page style
The .authorgroup style uses a 'background-image' item to add the
book title image.  This had to be changed to 'figures/adt-title.png'
from 'figures/kernel-title.png' since it is for the ADT manual.

(From OE-Core rev: 4c9dda2ac52139f67dc8e461c9f68a5d97d4690f)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-16 08:08:15 -07:00
Scott Rifenbark
fbe5fdcd05 documentation/adt-manual: cleaned up labels in chapter files
The initial chapters were failing to make due to duplicate section
identifiers that were created when I copied in the original files.
I gave each of the five chapter files (adt-command.xml, adt-eclipse.xml,
adt-intro.xml, adt-package.xml, and adt-prepare.xml) unique identifier
tags.

(From OE-Core rev: d30460c835c51dcc9301bcd848ceda29ba9ceeb6)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-16 08:08:15 -07:00
Scott Rifenbark
a2075d255b documentation/adt-manual/Makefile: Initial Makefile
These edits take the Makefile from the version I copied over from
the Kernel manual to create the initial version for the ADT Manual.

(From OE-Core rev: 50c61a4fe2f4ad65d6934a3ec3799e6a64709ed3)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-16 08:08:15 -07:00
Scott Rifenbark
7ea36613da documentation/adt-manual/adt-manual.xml: Initial file
This is the initial file that the Makefile calls.  The changes in this
commit reflect edits taking it from the copied kernel manual version.

(From OE-Core rev: a7c2c126e4ab12e4ba13cd4cfad70b6556739bc5)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-16 08:08:15 -07:00
Scott Rifenbark
e4021e2d21 documentation/adt-manual/figures: Added title PNG file, deleted kernel title PNG file.
I added the title PNG file and removed the existing (copied)
kernel title PNG file.

(From OE-Core rev: a4a9c47c1bd1e53652f73cc76f781f1c5df8adcc)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-16 08:08:15 -07:00
Scott Rifenbark
dfbc6b2d28 documentation/adt-manual: New file structure for ADT Manual
I have added a new directory to documentation named adt-manual.
This directory holds a Figures folder and the 9 files needed
for the ADT manual.  The book consists of five chapters:
adt-intro, adt-prepare, adt-package, adt-eclipse, and adt-command.
There is also an adt-manual.xml file called by the Makefile, a style.css
file, and finally an adt-manual-customization.xsl file to control numbering.

(From OE-Core rev: ac2c8848bbefcf7d24192573904baaef87c67382)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-16 08:08:14 -07:00
Yu Ke
b14246e828 matchbox-desktop: add configure event handler to fix bug 658
Bug658 - "the bottom icons on Applications and All screen are cut-off in qemu"

The reason is that the desktop work area is not resized after window manager
decoration, so adding a configure event handler to resize the desktop work area
fixes this issue.

[YOCTO #658]

(From OE-Core rev: 79f160a7ac9426ec9952f7a9c40190da8b95c88d)

[sgw: Tweaked ${PN} -> ${BPN}]
Signed-off-by: Yu Ke <ke.yu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
2011-03-16 08:07:05 -07:00
Richard Purdie
cc764902bc tune-atom.inc: Remove duplicate TARGET_ARCH entry to avoid ipk rootfs issues as temp workaround for problems pending a proper fix
(From OE-Core rev: a39610f0ac4c77f225671916610f78a18ff70350)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-15 18:25:49 -07:00
Paul Eggleton
1156930bd7 bitbake/runqueue: show correct task name for setscene task failure
If a setscene task failed previously it was showing an incorrect task
name in the error line. This patch ensures we show the correct name, also
including the "_setscene" suffix.

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
2011-03-15 12:55:50 -07:00
Lianhao Lu
a6f0062bd7 package-index.bb: Added missing dependencies.
[YOCTO #871] Added missing dependencies to opkg-utils-native and
opkg-native.

(From OE-Core rev: f50997891a236954f827de73e9422a67eaacb95c)

Signed-off-by: Lianhao Lu <lianhao.lu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-15 12:55:50 -07:00
Dongxiao Xu
c86bd7f528 xserver-nodm-init: add xuser to group audio
Add the rootless X user to group audio so it can access /dev/snd/*.

Fixes [YOCTO #799]
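
As an illustration only (the actual recipe change may do this differently,
for example in the init script or a postinst), adding a user to the audio
group looks like:

    # add the rootless X user to the audio group so it can open /dev/snd/*
    usermod -a -G audio xuser
    # or, with BusyBox tools:
    addgroup xuser audio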

CC: Ke Yu <ke.yu@intel.com>
(From OE-Core rev: 4df75586c0f5447670fe945285c7ad01c5e1f37f)

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-15 12:55:50 -07:00
Beth Flanagan
f971949135 cairo_1.10.2: Fix DEPENDS to include glib-2.0
The autobuilder picked up cairo having a dependency on glib-2.0. Added glib-2.0 to DEPENDS.

(From OE-Core rev: 65010151368c255bef7b2aefc47de48f658cf15b)

Signed-off-by: Beth Flanagan <elizabeth.flanagan@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-14 18:31:04 -07:00
Darren Hart
93970e41e3 qemux86-64: Enable latencytop and profiling (temporary)
Fixes [YOCTO #858] and [YOCTO #859]

common-pc-64.scc in the linux-yocto meta data omits latencytop and profiling
(but common-pc.scc includes them). The right fix is in common-pc-64.inc, but
this fix gets people people unblocked until Bruce can commit the proper fix to
linux-yocto.

(From OE-Core rev: e906c6ea72b0edcc509a2ef5f44cba5584432dd1)

Signed-off-by: Darren Hart <dvhart@linux.intel.com>
CC: Bruce Ashfield <bruce.ashfield@windriver.com>
CC: Jessica Zhang <jessica.zhang@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-14 18:30:57 -07:00
Richard Purdie
a250829cb6 sanity.bbclass: Fix inverted mmap_min_addr logic
(From OE-Core rev: 2956705bb0dad88b5ad7d42490c345ccb1d9d478)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-14 17:41:47 -07:00
Tom Zanussi
aa3af99591 documentation: Kernel Manual fixes
Updated to reflect 1.0 usage rather than 0.90 usage, along with some other
clarifications and minor changes.

[RP - added tweaks suggested by Darren Hart]
(From OE-Core rev: c6f06f478ac229c4619f815b8b313711d47b1551)

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-14 17:41:47 -07:00
Tom Zanussi
1800ff1c5f documentation: BSP Developer's Guide fixes
- use linux-yocto instead of linux-yocto-stable in examples
- change branch names to match linux-yocto usage
- remove outdated 'wrs' where it appears

(From OE-Core rev: 7f1662ef01b383c9fecb2b30ade50de97f17529a)

Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-14 17:41:47 -07:00
Dexuan Cui
c7f5dcaf38 poky-qemu-internal: force oprofile into using timer interrupt mode for qemux86/qemux86-64 for now
Currently oprofile's event-based interrupt mode doesn't work (Bug #828) in
qemux86 and qemux86-64. We can use timer interrupt mode for now.

(From OE-Core rev: 39249cfde962b3338c2c55b99a03842ec25ecd44)

Signed-off-by: Dexuan Cui <dexuan.cui@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-14 17:41:47 -07:00
Liping Ke
95fa18fd1b adt: fix ppc/powerpcc naming bug
For ipk files on ppc, the name should be powerpc, as should the
environment file name.  For the tar file name, it should be ppc.
This patch corrects the arch/machine name pair.
Related Bug #864

(From OE-Core rev: 9b94486c6cc7295ed872e3c03ea297c3f3c7dcdf)

Signed-off-by: Liping ke <liping.ke@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-14 17:41:47 -07:00
Qing He
8797e389c6 qemux86-64: set qemux86_64 as package arch name
This allows rpmbuild to generate RPMs with the right architecture.

(From OE-Core rev: 73b27dc6c326c8465944f8b6397dc6b1ef647452)

Signed-off-by: Qing He <qing.he@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-14 17:41:46 -07:00
Qing He
8f0fc87a18 zypper: add machine arch support
Since libzypp is now ${MACHINE_ARCH}, change zypper to be based
on this arch too.
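
A minimal sketch of the kind of change described (recipe syntax of that era;
not necessarily the exact patch):

    # in the zypper recipe, make the package machine-specific
    PACKAGE_ARCH = "${MACHINE_ARCH}"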

(From OE-Core rev: 90b618231e77c96e36d7955815aad2ed85258a23)

Signed-off-by: Qing He <qing.he@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-14 17:41:46 -07:00
Qing He
c455f4ccbd libzypp: add machine arch support
(From OE-Core rev: b463188407c0c783c8d5aeb0098fc59445db57bf)

Signed-off-by: Qing He <qing.he@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-14 17:41:46 -07:00
Qing He
b8f4c95e21 sat-solver: add machine arch support
(From OE-Core rev: ca758fa404fa447689ff205ee3b4b76bd3f1068a)

Signed-off-by: Qing He <qing.he@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-14 17:41:46 -07:00
Joshua Lock
498c628a1e bitbake/xmlrpc: only use BBTransport for affected Python versions
Upstream has fixed the xmlrpclib.Transport() bug from Python #8194 for
the Python 2.7.2 release; therefore, as we know which versions of the
standard library are affected, we can use our copy/paste class only when
it's needed.

Signed-off-by: Joshua Lock <josh@linux.intel.com>
2011-03-14 17:41:46 -07:00
Joshua Lock
5c0a84fd95 bitbake/bitbake-layers: fix to run with recent changes
This patch marks the bitbake-layers script as executable and fixes the
instantiation of the BBCooker to match recent changes in the BitBake
libraries.

I've also added a brief header which demonstrates the intent and usage
as taken from Chris Larson's original commit message.

Note: this fix is not upstreamable; it's only required in Poky because of an
outstanding difference between BitBake master and Poky's BitBake.

Signed-off-by: Joshua Lock <josh@linux.intel.com>
2011-03-14 17:41:46 -07:00
Joshua Lock
abcec8015c bitbake/hob: fix cancel button
An accidental logic inversion (aka thinko) had the cancel button only
cancel a build when the user didn't confirm the cancellation (i.e. clicked
no)...

Signed-off-by: Joshua Lock <josh@linux.intel.com>
2011-03-14 17:41:46 -07:00
Joshua Lock
70febdf0ce bitbake/cooker: don't error in prepareTreeData for unbuildable targets
Set abort to False in prepareTreeData so that unbuildable targets do not
raise an exception.

Signed-off-by: Joshua Lock <josh@linux.intel.com>
2011-03-14 17:41:46 -07:00
Nitin A Kamble
a33a2cc024 perl: another set of parallel build fixes
[YOCTO #784]

Imported more commits from the perl upstream tree

(From OE-Core rev: c3b74b0c3833541ab5e89a7f9597f1ef8a413a70)

Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-11 16:35:10 -08:00
Scott Garman
d16085b67b openssh: allow the openssh meta package to be empty
This allows the openssh meta-package to be used in the
poky-ssh task. Otherwise there will be no package named
openssh to install during image creation.

(From OE-Core rev: 9f4747a1e7e04e0b08b7b402bd8dd7cf8ccd0166)

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-10 21:43:05 -08:00
Mark Hatle
5903a8fb4f gcc-runtime: Fix dbg files
In order to debug certain C++ items, you need the helper python
components.  These components should live in the -dbg package; ensure
they are added to the recipe.

(From OE-Core rev: 285fbd8a206eee061e27f37430499fcbe1e7284d)

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-10 21:43:05 -08:00
Mark Hatle
e865b4f106 package_rpm: Fix rootfs generation
[YOCTO #797]

During rootfs generation, if other RPM packages are being written,
this could cause a failure during the solvedb generation.  We
add a shared lock around the RPM package building.  This will allow
multiple RPM packages to continue to be written at the same time, but
prevent rootfs generation and RPM package generation from happening at
the same time.

(From OE-Core rev: 1d5ca654a482f582c75faf546140dfd6064da73b)

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-10 21:43:05 -08:00
Mark Hatle
b507383230 package.bbclass: Change the debug directory to avoid conflicts
The debug directory before was below ${WORKDIR}.  Unfortunately if
something was based on a git tree, it meant that "git" was the
directory name being preserved for usr/src/debug usage.  The patch
moves to using "${WORKDIR}/.." as the base, to ensure that the
WORKDIR naming is used in usr/src/debug.

(From OE-Core rev: dbc752c75786b0985fbeb4986467ae01290f424a)

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-10 21:43:04 -08:00
Bruce Ashfield
ee0bd97330 linux-yocto: update to 2.6.37.3
The 2.6.37.3 -stable update is available and can safely be merged
into the linux-yocto BSPs. This updates the SRCREVs of the BSP
branches to their new values.

(From OE-Core rev: 3845eb8285d6b57fe2b824ce482cbeaba561eef5)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-10 21:43:04 -08:00
Paul Eggleton
cd7615343e gst-plugins-good: remove dependency on hal
Disable hal usage at configure time to avoid dependency on hal (which is
deprecated). Only affects "halelements" which is of no use without hal.

Fixes [YOCTO #810] and reverts changes from c6b0c5720fa.
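
As a rough sketch (assuming the recipe passes configure flags via EXTRA_OECONF;
not necessarily the exact patch), disabling hal at configure time looks like:

    # in the gst-plugins-good recipe
    EXTRA_OECONF += "--disable-hal"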

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
2011-03-10 21:43:04 -08:00
Richard Purdie
3087be111c autotools.bbclass: Fix automake file race issues
If one package is configuring when automake is built, the aclocal-VERSION
directory can be created or removed and this can confuse the configure
process.

Since we always run automake-native, it should always be using the
automake-native aclocal directory for automake files, which is the
result of this patch.

(From OE-Core rev: 2a15188d631a97dc20940f7edc801212e191332f)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-10 21:43:04 -08:00
Liping Ke
e415fd6d5b Disable wget server side cache
We found that some proxy servers cache incorrectly for long http file names,
which can cause the wrong ipk files to be fetched from the adt repo.  To avoid
this, we use the wget option --no-cache to disable all server-side caching.
This makes fetching slower, but always correct.
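
An illustrative invocation (the URL and file name are placeholders, not the
actual repo layout):

    # bypass any intermediate proxy cache when fetching an ipk from the ADT repo
    wget --no-cache http://adtrepo.example.com/ipk/armv5te/some-package_1.0-r0_armv5te.ipk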

(From OE-Core rev: 2e9e8af197671ae06de1bdc9201765b160869d60)

Signed-off-by: Liping Ke <liping.ke@intel.com>
Signed-off-by: Lianhao Lu <lianhao.lu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-10 21:43:04 -08:00
Dongxiao Xu
d6c639e64b gst-plugins-bad: add missing dependency librsvg
By default, gst-plugins-bad is configured with the --enable-rsvg option.
In addition, its configure code checks whether librsvg really exists.

Therefore there is a race condition: during librsvg's populate_sysroot,
gst-plugins-bad's do_compile can find that some header files do not yet
exist, even though its configure said the library was supported.

Explicitly adding librsvg as a gst-plugins-bad dependency solves
this issue.

This fixes [YOCTO #831]
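
A minimal sketch of the kind of change described (recipe syntax of that era;
not necessarily the exact patch):

    # in the gst-plugins-bad recipe, make the build-time dependency explicit
    DEPENDS += "librsvg"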

(From OE-Core rev: 5b675f91b17eb9d01a4552506518cc0f7de4eba4)

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-10 21:43:04 -08:00
Lianhao Lu
6bb3da2236 gcc/collect2: Added --sysroot support into collect2 in gcc.
[YOCTO #815] Added --sysroot into COLLECT_GCC_OPTIONS to allow
collect2 to support a user-specified sysroot.

(From OE-Core rev: 868f8d3dd04e3c6dbbce154742cf877fda460a3e)

Signed-off-by: Lianhao Lu <lianhao.lu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-10 21:43:04 -08:00
Dongxiao Xu
f4b458f9e2 tinylogin: Fix rotate passwd check logic
Fix the rotate passwd check logic, which would write data into unallocated
memory.

This fixes [YOCTO #735]

(From OE-Core rev: 4499beb9ef70d207e0d1f60eae77634a77fc44c3)

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-10 21:43:04 -08:00
Mark Hatle
c966517392 gcc-runtime: Ensure that gcc-runtime builds a debug package
The gcc-runtime package will now create the proper dbg package.  The
RRECOMMENDS change is required to deal with the default.  This is
documented in bug 824.

(From OE-Core rev: 724137e50762f190438e8e87d3f0f9edd99ea11d)

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-10 21:43:03 -08:00
Dongxiao Xu
1b774773ac telepathy-python: Fix parallel make issue.
Some tasks are missing a dependency (creation of the "src/_generated"
directory).  Add it to fix the parallel make issue.

[YOCTO #783]

(From OE-Core rev: 184b5c83df9ecdb1891b760155d6a9ce587531ae)

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-10 21:43:03 -08:00
Dexuan Cui
095f420299 distro_tracking_fields.inc: update the info for oprofileui
(From OE-Core rev: 25e84e0e3d24bc86b31490c5de600f081823fd06)

Signed-off-by: Dexuan Cui <dexuan.cui@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-10 21:43:03 -08:00
Dexuan Cui
f6b945c739 oprofileui: upgrade to the latest version maintained by the Yocto project
Oprofileui at http://labs.o-hand.com/oprofileui/ is not maintained now, so
we should change SRC_URI to the one maintained by the Yocto project. This
one includes new bugfixes.

This fixes [YOCTO #820]

[sgw: merged oprofile-git.inc back into .bb as suggested by Joshua]
(From OE-Core rev: d694c6700ee27672e5372939a98d5050cda44ca9)

Signed-off-by: Dexuan Cui <dexuan.cui@intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-10 21:43:03 -08:00
Khem Raj
4a7c467763 gcc-configure-runtime.inc: Add immediate evaluation otherwise it ends in circular dependency
(From OE-Core rev: 547c62361b21d9cae281d58c54ec2d19a5e25306)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-10 21:43:03 -08:00
Martin Jansa
ff5680b8f1 distutils-base: Only RDEPEND python-core on target packages
* fixes e.g. setuptools; without this patch it RDEPENDS on python-core-native, which is not RPROVIDED by anything
* imported from OE 8377b8ec57f35b9e5b81a74c77f68fd6e02949c8

(From OE-Core rev: 65317f21736293cc4eeb9a404e9f01043df7565d)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Acked-by: Koen Kooi <koen@dominion.thruhere.net>
Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-10 21:43:03 -08:00
Martin Jansa
111c268fbb debian.bbclass: call auto_libname in reverse sorted AUTO_LIBNAME_PKGS
* see comment for reason why we need this
* more info:
  http://lists.linuxtogo.org/pipermail/openembedded-devel/2011-February/029877.html

(From OE-Core rev: 6f0bbe463204d377f92140b6540d9d518d5c6d6b)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Acked-by: Koen Kooi <koen@dominion.thruhere.net>
Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-10 21:43:03 -08:00
Martin Jansa
20b41bd136 python: add generate-manifest-2.6.py script and regen python-2.6-manifest.inc
* imported from OE with sorted entries etc

(From OE-Core rev: 94b36524550ff2c94a5f8d82a9bc2073c06d418a)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Acked-by: Nitin A Kamble <nitin.a.kamble@intel.com>
Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-10 21:43:03 -08:00
Nitin A Kamble
232dcb7241 gcc-runtime: fix LSB library checks for libstdc++.so.6
[YOCTO #795]

When we run the LSB library check on qemux86 and qemuppc, we get some failures
about 'libstdc++.so.6'.

Test environment:
Platform: Qemu-x86, Qemu-ppc
lsb image: poky-image-lsb-qemux86-test.ext3(Feb 26th, auto-build server)
Library check of LSB: 4.1.0-1

The error log:
Did not find _ZNKSt5ctypeIcE8do_widenEPKcS2_Pc (GLIBCXX_3.4) in libstdc++.so.6
Unmangled symbol name: std::ctype<char>::do_widen(char const*, char const*,
char*) const
...

We found that some weak symbols ('W') change into local ('t') at link time
and are stripped. According to the compile log, the option
"-fvisibility-inlines-hidden" is passed to gcc, and this option caused some weak
symbols to become local.

see http://bugzilla.pokylinux.org/show_bug.cgi?id=795 for more information on the bug.

(From OE-Core rev: 4bb281ef5f12096d0889ba8efcc3fd3bb0ed3b3c)

Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
Signed-off-by: Jingdong Lu <jingdong.lu@windriver.com>
Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-10 21:43:02 -08:00
Richard Purdie
09d166ebfd bitbake/fetch2/local: Fix inverted update required logic
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-09 13:21:23 -08:00
Richard Purdie
f432f1b010 bitbake/fetch2: Allow local file:// urls to be found on mirrors
With the current implementation, file:// urls as used by sstate don't access the
mirror code, breaking sstate mirror support. This change enables the usual
mirror handling. To do this, we remove the localfile special case, using the basename
parameter instead. We also ensure the downloads directory is checked for files.

The drawback of this change is that file urls containing "*" globing require special
casing in the core.

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-08 17:16:53 -08:00
Saul Wold
d7fcae0778 quilt: add autotools inheritance
(From OE-Core rev: c0ce17aed98c6475b6c1dc18c6655f3a52eda0fa)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-08 17:15:54 -08:00
Joshua Lock
223b4a9fb2 util-macros: fix DEPENDS for nativesdk
(From OE-Core rev: adf342de34604fc5a75df9798feac1e4e2b27944)

Signed-off-by: Joshua Lock <josh@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-08 17:15:49 -08:00
Richard Purdie
1af309aa19 sstate: Ensure the SRCURI fetcher cache is not used for sstate
(From OE-Core rev: 115b3b95e87320b4a6a678df45fece06469dfaeb)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-08 15:23:06 -08:00
Khem Raj
13d14d0ddf sanity.bbclass: Check for /proc/sys/vm/mmap_min_addr to be >= 65536
* Now that qemu can handle lower values, we can change this sanity test
  to check whether the value is less than 65536
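
For reference (an illustrative check on the build host, not the sanity.bbclass
code itself):

    # the sanity check wants the kernel's minimum mmap address to be at least 65536
    cat /proc/sys/vm/mmap_min_addr
    # if it is lower, it can be raised (as root) with, for example:
    echo 65536 > /proc/sys/vm/mmap_min_addr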

(From OE-Core rev: 5f172d8b9b829554f3d884a9007a33fff7dcc187)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-08 15:22:35 -08:00
Martin Jansa
66b30531ac sanity.bbclass: some multilib systems have symlink /lib -> /lib64
* e.g. gentoo has /lib -> /lib64
* old test assumed only /lib64 -> /lib

(From OE-Core rev: 776af6c2fa5a80debfafb4697c462d0dd0e7d76c)

Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
Acked-by: Koen Kooi <koen@dominion.thruhere.net>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-08 15:22:28 -08:00
Khem Raj
66cf5423c6 poky-default.inc: Change LINUXLIBCVERSION "2.6.36" -> "2.6.37.2"
(From OE-Core rev: 9a86fa5235ab8715319709ff2171864a074aed37)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-08 15:22:23 -08:00
Liping Ke
3f0cec517b adt: removed unused repo source and opkg options
The opkg option --force-overwrite was only a workaround for bug #547.
Now that this bug is gone, remove the option.

Also, the first opkg repo source is not useful, so remove it.

(From OE-Core rev: e6c72db2ac5684dd2bb65207b2f3da7214f5dca7)

Signed-off-by: Liping Ke <liping.ke@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-08 15:22:17 -08:00
Saul Wold
833b8160b5 cpio: Fix the SHA256 Checksum for the src tarball
(From OE-Core rev: b8550ac3f30bd983191afe0f1afe3c6c45a54bca)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-08 15:22:11 -08:00
Dongxiao Xu
a776cc376e connman: add xuser to the dbus permission list
Some platforms (like atom-pc) enable rootless X, so the connman
frontend running on them needs permission to connect to connman
over dbus.  This commit grants that permission to xuser.

This fixes [YOCTO #779]

(From OE-Core rev: cfbf50c235c2faeb53f43b42a12c49c022288488)

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-08 15:21:52 -08:00
Liping Ke
83777bf1bc adt: Update to svn r596 to fix symbolic link issues
See the longlinksfix patch for details, but symlinks over 100 chars long
were broken in sdk tarballs due to problems in the built-in tar in
libbb in opkg.  svn r596 has already fixed the problem.

(From OE-Core rev: 90d4624f0c5de6a35eace1f13c3e04df9737390c)

Signed-off-by: Liping Ke <liping.ke@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-08 15:21:32 -08:00
Saul Wold
a15bc3ddd9 lsb-live image: add lsb-live and lsb-sdk-live image types
(From OE-Core rev: 7ba79b4c25126b42d3697cec9ecdf8d688d6da54)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-08 15:21:26 -08:00
Saul Wold
6dfddf5410 attr: Added ncurses to depends
(From OE-Core rev: 21f294d9600a369fff5eafb0c7358694d9ff0221)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-08 15:21:20 -08:00
Saul Wold
fb928dc8ea gst-plugins: Added hal to DEPENDS
Fixes [YOCTO #810]

(From OE-Core rev: c6b0c5720fa0fc2ba7a6792b7f52faad38dd47dc)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-08 15:21:13 -08:00
Saul Wold
1bac3117fa util-macros: add libgpg-error to DEPENDS list
Signed-off-by: Saul Wold <sgw@linux.intel.com>
2011-03-06 08:52:00 -08:00
Saul Wold
07c55e9db4 lsbsetup: Fix LIC_FILE_CHKSUM
Signed-off-by: Saul Wold <sgw@linux.intel.com>
2011-03-06 08:51:56 -08:00
Saul Wold
5a8991913d elfutils: add bzip2 to DEPENDS
Signed-off-by: Saul Wold <sgw@linux.intel.com>
2011-03-06 08:51:50 -08:00
Scott Garman
fcce8449bc linuxdoc-tools-native: Fix build error with txt documentation
Disable building txt documentation. This is a temporary workaround,
as I have found an Ubuntu 10.10 system which throws errors while
building it that I'd like to ultimately fix. The error manifests
itself from the end of LinuxDocTools.pm with the following messages
during do_install:

| - Building txt docs
| Processing file ./guide
| troff: fatal error: can't find macro file s
|  fmt_txt::postASP: Empty output file, error when calling groff. Aborting...

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
Signed-off-by: Khem Raj <raj.khem@gmail.com>
2011-03-06 08:51:44 -08:00
Lianhao Lu
283d452ede toolchain-script.bbclass: Added --sysroot to LDFLAGS.
[YOCTO #808] Added --sysroot to LDFLAGS in environment files.

Signed-off-by: Lianhao Lu <lianhao.lu@intel.com>
Signed-off-by: Khem Raj <raj.khem@gmail.com>
2011-03-06 08:51:38 -08:00
Saul Wold
52ba9b76e0 task-poky-lsb: Remove new eglibc-* packages
Remove the new eglibc packages that were part of another
patch and did not get cleaned up here.

Signed-off-by: Saul Wold <sgw@linux.intel.com>
2011-03-04 16:07:04 -08:00
Bruce Ashfield
9d051f5808 linux-yocto: update machine configurations
Fixes [YOCTO #733, YOCTO #766, YOCTO #801]

Updating the configuration for the routerstation pro and
mpc8315e-rdb to 2.6.37 variants of the RTC, USB and VFAT
filesystem types.

(From OE-Core rev: 404d47cf579c24b126a9cb2783a3224aabb27810)

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-04 14:34:32 -08:00
Xiaofeng Yan
b2ad1b9b42 task-poky-lsb: Add packges needed by LSB Test Suite
These packages, added to task-poky-lsb.bb, are absent from lsb-image during LSB testing.

(From OE-Core rev: 472f89dec06f0be43ff3e0638cac3f55f7b7e7cf)

Signed-off-by: Xiaofeng Yan <xiaofeng.yan@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-04 14:34:32 -08:00
Xiaofeng Yan
a5d3c7c4f4 creat-lsb-image: Add some functions for creating an appropriate image for the lsb test
Add all packages from the LSB Test Suite from the Linux Foundation website.

(From OE-Core rev: fc87e45c24eaee29dc3f803eca4f8e303cc582cb)

Signed-off-by: Xiaofeng Yan <xiaofeng.yan@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-04 14:34:32 -08:00
Scott Rifenbark
bef6f89563 documentation/kernel-manual: Kernel manual Style changes
Modifications to the figure image (figures/kernel-title.png),
the heading styles (style.css), and the numbering system with
TOC display (yocto-project-kernel-manual-customization.xsl).

I updated the title image to display the manual title using
color #00557D, which coordinates with the Yocto Project website
color scheme.  I also updated the style sheet to use this same
color for the section headings.  This helps to set them off better
from the text.  Finally, I flipped the switch back on for this
manual to create chapter-specific table of contents sections
prior to each chapter and to include an all-inclusive TOC at the
beginning of the book.

(From OE-Core rev: 2f24addbd02039fb9b6489c90c5d1c687c0d0698)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-04 14:34:32 -08:00
Scott Rifenbark
4b77527f7a documentation/kernel-manual/figures/kernel-title.png: Updated title graphic
I changed the font to Arial Narrow and inserted a better logo.

(From OE-Core rev: 7b84f126b09125b306ea9f9b59c437bb741800d2)

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-04 14:34:32 -08:00
Yu Ke
a080556e7e x11vnc: fix the endian issue in mips for bug 782
x11vnc uses LIBVNCSERVER_WORDS_BIGENDIAN to handle endianness; however,
it is not set correctly when cross-compiling for mips, so x11vnc on mips
does not work correctly.

Meanwhile, x11vnc has the autoconf macro AC_C_BIGENDIAN which can
handle endianness correctly, so this patch replaces
LIBVNCSERVER_WORDS_BIGENDIAN with WORDS_BIGENDIAN (generated by
AC_C_BIGENDIAN) to fix this issue.

This patch fixes the bug [YOCTO #782]

This approach was suggested by Khem Raj

CC: Khem Raj <raj.khem@gmail.com>

(From OE-Core rev: da4b22c8bdf00813164d8830e52e1d6ad35cdd94)

Signed-off-by: Yu Ke <ke.yu@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-04 14:34:32 -08:00
Zhai Edwin
95fe31c60d qemu-script: Remove mmap_min_addr check
qemu 0.13.0 can handle mmap_min_addr well, and a patch to remove the checks in
sanity.bbclass is already on the oe-core mailing list from Raj.  This patch does
the same thing for qemu-script.

(From OE-Core rev: 48181023314ac09743b958b0035399797fe6cff9)

Signed-off-by: Zhai Edwin <edwin.zhai@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-04 14:34:32 -08:00
Jingdong Lu
8f1465aa9c task-poky-lsb: add python-misc
python-misc is also needed by the python-runtime test of LSB.

(From OE-Core rev: 266562710b86a2373d8fffa5153557e4660f9596)

Signed-off-by: Jingdong Lu <jingdong.lu@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-04 14:34:31 -08:00
Darren Hart
1b11ff7752 hello-mod: add a module for testing module.bbclass
The following patch creates a hello-mod recipe for building a trivial
out-of-tree kernel module, hello-mod.ko. This demonstrates the hostprogs
build modifications added to module.bbclass. When loaded and unloaded,
the module prints a simple string to the console to demonstrate it was
compiled correctly.

Tested on qemux86 poky-image-sato and beagleboard poky-image-minimal
(after adding hello-mod to the images).

(From OE-Core rev: d4765569d51448e8918bb15e7ab342983344074a)

Signed-off-by: Darren Hart <dvhart@linux.intel.com>
CC: Gary Thomas <gary@mlbassoc.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-04 14:34:31 -08:00
Darren Hart
101ce7109e module: build hostprogs for each module
This fixes [BUGID #241]

The kernel hostprogs are built for the host architecture. They should not be
deployed to the target, and they should not be included in an sstate package
which might get reused on a host of a different architecture.

As we don't build many out-of-tree modules, this patch takes the approach of
building the hostprogs as part of the module compile process with a
do_compile_prepend() routine in module.bbclass.

We don't have to clean the hostprogs as modules depend on the kernel being
populate_staging, so it's done with the staging directory by the time we run.

(From OE-Core rev: e807fc977770cb64a217768672c18437ea8f3057)

Signed-off-by: Darren Hart <dvhart@linux.intel.com>
CC: Gary Thomas <gary@mlbassoc.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-04 14:34:31 -08:00
Xiaofeng Yan
7caf083ebe LSB_Setup.sh: Install LSB Test Suite and set up the lsb test environment
Improve some functions for the lsb test in yocto 1.0

(From OE-Core rev: aa60f178d9f6b4ebdf03bbfcf2b46e94bf4e78d3)

Signed-off-by: Xiaofeng Yan <xiaofeng.yan@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-04 14:34:31 -08:00
Mei Lei
11f85405e0 distrodata.bbclass: Get git repo tag information
For those recipes which use a git repo and have tag information, we can use the
tag to trace version changes.  For other recipes without tags, we still use their
commit checksum to trace their version changes.

(From OE-Core rev: 30343a72b89167b46ff4cc33be6ada2fd4b13a59)

Signed-off-by: Mei Lei <lei.mei@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-04 14:34:31 -08:00
Saul Wold
be297836a1 distro tracking: Updates to Tracking info for clutter and other changes
(From OE-Core rev: 3a5fed48f3254ac6aafb4a5c7fa4015ad87b02e7)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-04 14:34:31 -08:00
Richard Purdie
91d72e822e Fixup merge error and apply cleanups
(From OE-Core rev: a72822d315d7bc35a424b0807693ad7a3317c519)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-03 16:33:59 -08:00
Khem Raj
8640414cca rpm: Fix linking error encountered in rpm-native
* This patch passes the correct LDFLAGS to account for
  additional dependencies of librpmio on libbeecrypt and libsyck
  and hence fixes the build error.

(From OE-Core rev: bcdd048e4857b5f8a343c434ade5a02ab1db33bc)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-03 16:33:59 -08:00
Mark Hatle
091ace83f8 qa.py: Fix a typo when evaluating bitsize
This should be setting a variable, not performing a comparison.

(From OE-Core rev: cbe1b8277c610e8e31d1270757877300532bed56)

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-03 16:33:59 -08:00
Mark Hatle
e81957973d poky-env-internal: Add FETCH2 enablement
We need to enable the new fetch2 implementation out of bitbake.  Otherwise
we get various errors about SRCPV issues.

(From OE-Core rev: c8495be774a5cbf235a023cecf005b2763c98745)

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-03 16:33:59 -08:00
Saul Wold
f3af7d55a8 task-poky-lsb: add chkconfig
(From OE-Core rev: 0e3c98374ed6d87286b59754cee2c88414933c1e)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-03 16:33:59 -08:00
Kang Kai
e92f3a25ec recipe: add slang from OE
slang is the shared library for the S-Lang extension language,
and is required by newt because of the LSB command check.

(From OE-Core rev: 2ce924c19e8fe8fb67e7cd2aace483e3dffb24cc)

Signed-off-by: Kang Kai <kai.kang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-03 16:33:58 -08:00
Kang Kai
ecbe894712 recipe: add newt from OE
newt is a library for text mode user interfaces, and is required by
chkconfig because of the LSB command test.

(From OE-Core rev: 57c5da295855431160403b9ea356b2beae5cedca)

Signed-off-by: Kang Kai <kai.kang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-03 16:33:58 -08:00
Kang Kai
9a432a2328 recipe: add chkconfig for LSB command test
chkconfig is a system tool for maintaining the /etc/rc*.d hierarchy,
and the LSB command test checks that 2 links point to the chkconfig command.

(From OE-Core rev: 994cb5be07270b8414d46e01ed7888e2de448589)

Signed-off-by: Kang Kai <kai.kang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-03 16:33:58 -08:00
Liping Ke
976cb2d81d ADT: Bug fix for Suse Linux
On Suse Linux 11.2, we found that when using sudo, we must add the -E
option to preserve some network proxy environment settings.  Otherwise,
opkg-cl can't access files behind a firewall. [bug #785]
Also, we need to use absolute paths when sourcing files.
Fix for [bug #786]
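
An illustrative invocation (the package name and offline-root path are
placeholders, not the installer's actual command line):

    # -E keeps http_proxy/https_proxy set for the command run under sudo
    sudo -E ./opkg-cl -o /opt/target-rootfs install some-package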

(From OE-Core rev: 794da1a4cffaedc8a9ceeb0b089d7236b22e7913)

Signed-off-by: Liping Ke <liping.ke@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-03 16:33:58 -08:00
Liping Ke
00d70680f9 Add libtool-nativesdk for ADT
We need to add libtool nativesdk support in the ADT installer.
This patch fixes bug #791.

(From OE-Core rev: a003ba3d2b80dc08d128f9b58890fe89c612236d)

Signed-off-by: Liping Ke <liping.ke@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-03 16:33:58 -08:00
Zhai Edwin
3b8e7319f1 gstreamer: install the sound card driver of es1370
When "audio" is appended to poky-qemu, an emulated sound card such as the
es1370 is exported to the guest.  This patch installs the kernel driver in
poky-image-qemux86/x86_64 so that it can be used.

[BUGID #751]

(From OE-Core rev: 95e7b7b280d8f7e699a949fa775a6846a256266c)

Signed-off-by: Zhai Edwin <edwin.zhai@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-03 16:33:58 -08:00
Zhai Edwin
39c4f1f7c5 sato-icon-theme: Explicitly use "Sato" as gtk icon theme
There is a tricky race condition where the "Sato" icons go missing on
matchbox-desktop because the low-priority "hicolor" theme was chosen. Explicitly
set "Sato" in the gtk config file to avoid this.

[BUGID #456] got fixed.

(From OE-Core rev: 06cf0e5fc4acf00738f5d2aaa505fbac665dca02)

Signed-off-by: Zhai Edwin <edwin.zhai@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-03 16:33:58 -08:00
Zhai Edwin
50a7f8483a x11vnc: Fix the start failure
The default parameters "-gui" and "-rfbport" make x11vnc fail to start if
"wish" is not installed.

[BUGID #781] got fixed.

(From OE-Core rev: 1e1b59cd94a3fb3092b4334cd247d2d18c9e8071)

Signed-off-by: Zhai Edwin <edwin.zhai@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-03 16:33:58 -08:00
Qing He
52df73c3ff libpcre: fix the name collision with libc
fixes [YOCTO #721] [YOCTO #722]

[sgw: added patch comment, bump PR, and changed BUGID -> YOCTO]
(From OE-Core rev: 6a4cb991ea473a84c620b33fbb82b5ae860971a3)

Signed-off-by: Qing He <qing.he@intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-03 16:33:58 -08:00
Chris Larson
6adaf5a554 bitbake-layers: drop 2.6 from #!, per Joshua Lock
(Bitbake rev: 898f557cbd443cdeff137fd926aac06f2aaee6c4)

Signed-off-by: Chris Larson <chris_larson@mentor.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-03 16:33:58 -08:00
Khem Raj
fc94ae7a77 fetch, fetch2: Get rid of DeprecationWarning notice
* This patch fixes a cosmetic issue currently we get with master

WARNING: /home/kraj/work/bitbake/lib/bb/fetch2/__init__.py:733:
DeprecationWarning: Call to deprecated function bb.mkdirhier: Please use bb.utils.mkdirhier instead.  bb.mkdirhier("%s/%s" % (rootdir, destdir))

(Bitbake rev: 36fe59ce314c295d239b76de34c8714def2c32d5)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Chris Larson <chris_larson@mentor.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-03 16:33:57 -08:00
Chris Larson
ec8ab90763 build: add missing newline
(Bitbake rev: a7aa0415bdaa458a941004bf8dd8bbfeddd6ef5a)

Signed-off-by: Chris Larson <chris_larson@mentor.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-03 16:33:57 -08:00
Chris Larson
5a0f713935 build: switch to old cwd handling
We want this to ensure the user can run the run. script from anywhere.

(Bitbake rev: a600b79ecefc95eeb266c3f362c7160fa8c948c1)

Signed-off-by: Chris Larson <chris_larson@mentor.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-03 16:33:57 -08:00
Khem Raj
34921ffbba qemu-0.13.0: Add patch to avoid mmap_min_addr
* This patch is taken from OE commit 40e293342ca76921904a43b03b635d9219432edf

(From OE-Core rev: 11d76595e036f46906859b59dc06094b2e979771)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-03 16:33:57 -08:00
Khem Raj
62ad9a8dc5 linux-libc-headers_2.6.37.2.bb: Add checksums
(From OE-Core rev: 370e082c8bbf14c9b0f54269eb99d291d187cd40)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-03 16:33:57 -08:00
Khem Raj
84752f34f9 lib/oe/path.py: Use bb.utils.mkdirhier instead of bb.mkdirhier
(From OE-Core rev: 5a22a8c06743b0a8a3d949288b99d53bd4b7ceb3)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-03-03 16:33:57 -08:00
Mark Hatle
3a39d96928 insane.bbclass: Fix ELF bitsize comparison
Fix the way the ELF size is compared to ensure that incorrectly
sized ELF binaries are captured during the file scan.

lib/oe/qa.py is changed to accept a bitsize as a parameter.  Instead
of previously defining true/false, it now takes "0" undefined, "32"
32-bit, and "64" 64-bit as the size argument.  This allows us to
preserve the existing behavior of only loading one ELF type, while
allowing the function to discover the size on its own.

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
2011-03-01 16:36:44 -08:00
Bruce Ashfield
65d37c34b7 kernel: remove explicit bash call in do_menuconfig
Fixes [BUGID #598]

The explicit addition of "bash" before "make menuconfig"
is clearing variables that are required for pseudo. The
end result is that menuconfig often fails silently with:

ERROR: ld.so: object 'libpseudo.so' from LD_PRELOAD cannot be preloaded: ignored.

Removing bash from the menuconfig SHELLCMDS variable fixes
the pseudo problem.

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
2011-03-01 16:36:43 -08:00
Scott Garman
8e174d9437 screenshot: change the order of LDADD arguments
Rather than setting linker flags explicitly in LDADD as the
previous patch did, simply put libshot.la before GTK_LIBS.

This fixes [BUGID #664]

Signed-off-by: Scott Garman <scott.a.garman@intel.com>
2011-03-01 16:36:43 -08:00
Nitin A Kamble
4ec9b314c1 gcc: take out libiberty files from gcc packages
This fixes [BUGID #754]

Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
2011-03-01 16:36:43 -08:00
Paul Eggleton
ba59c319b8 zypper: add util-linux-uuidgen to RRECOMMENDS
zypper complains if uuidgen is not available, so add it to RRECOMMENDS
for the zypper package.
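
A minimal sketch of the kind of change described (recipe syntax of that era;
not necessarily the exact patch):

    # in the zypper recipe
    RRECOMMENDS_${PN} += "util-linux-uuidgen"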

Addresses [BUGID #749]

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
2011-03-01 16:36:43 -08:00
Paul Eggleton
7708dde102 util-linux: split out uuidgen to a separate package
uuidgen is needed by zypper and we don't want to drag in everything else
in util-linux, so split it out to a separate package.

Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
2011-03-01 16:36:43 -08:00
Scott Rifenbark
6c4c621475 documentation/bsp-guide/bsp-guide.xml: Updated revision history on title page.
I updated the revision history on the title page to reflect the upcoming
Release 1.0.  I will likely have to change this as we get nearer the
release so I can be sure of the number and also add meaningful release
remarks to the entry.

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
2011-03-01 16:36:43 -08:00
Scott Rifenbark
9a8cc4eeb5 documentation/bsp-guide/bsp-guide.xml: Updated RP email address
Changed the email address for Richard Purdie in the author title
page to richard.purdie@linuxfoundation.org.

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
2011-03-01 16:36:43 -08:00
Scott Rifenbark
389ab65ab9 [BUGID# 695] - documentation/bsp-guide/figures/bsp-title.png: Updated title graphic
[BUGID# 695] - I updated the title to use a less bold and intrusive
font and one that is still common for systems.  Also removed the
"s" in the title so it now reads "Board Support Package (BSP)
Developer's Guide."  I also put a better looking Yocto logo in.

Once this commit is merged bug #695 can be marked resolved.

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
2011-03-01 16:36:43 -08:00
Scott Rifenbark
d3ae37234c [BUGID# 553] - documentation/bsp-guide/bsp.xml: Re-write of click-through
[BUGID# 553] - In the 'BSP Click-Through Licensing Procedure'
section, which is shared between the BSP Guide and the Poky
Reference Manual, there were three links to 'pokylinux.org'
sites.  These links were intended to help a user get a license
for encumbered BSPs.  However, the links never did work.  The
section also had some wording that described a propsed naming
convention for BSP tarballs that were encumbered and non-encumbered.
The naming convention is a good idea but has not been followed
so far.

I removed the links and replaced them with general instructions
on how to get through the licensing situation.  I also removed the
hard-line naming rules and replaced them with a more general explanation
of how we are naming BSPs (e.g. Crown Bay).

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
2011-03-01 16:36:43 -08:00
Scott Rifenbark
f3c6ccd13c [BUGID# 553] - documentation/poky-ref-manual/resources.xml: Fixed pokylinux URL
[BUGID# 553] - In appendix I in the Contributions section (I.6) there is
mention of a Poky contributions tree and the URL
git://git.pokylinux.org/poky-contrib.git is given.  I changed this
URL to git://git.yoctoproject.org/poky-contrib.git.

This is a partial fix for bug 553.

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
2011-03-01 16:36:42 -08:00
Scott Rifenbark
7305ee0962 documentation/poky-ref-manual/resources.xml: Text additions in Links section in Appendix I
I added text after the bulleted items "The Poky website" and "BitBake User Manual."
These were blank, which was not consistent with the rest of the list.

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
2011-03-01 16:36:42 -08:00
Scott Rifenbark
38d6560c11 [BUGID# 553] - documentation/poky-ref-manual/resources.xml: Fixed pokylinux URL
[BUGID# 553] - In the "Bugtracker" section (appendix I - I.2) there is
a reference to the bugtracker.  The text shows just the string
"bug tracker" but the hidden URL was http://bugzilla.pokylinux.org.
I updated the text to say to report problems by using the Bugzilla
application and then gave the URL http://bugzilla.yoctoproject.org
as the reference.

This is a partial fix for bug 553.

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
2011-03-01 16:36:42 -08:00
Scott Rifenbark
87ab152239 [BUGID# 553] - documentation/poky-ref-manual/faq.xml: Fixed pokylinux.org reference
[BUGID# 553] - In the FAQ appendix item H.12 there was a reference
to http://pokylinux.org/sources/* in the question portion.  The
reference should really be http://autobuilder.yoctoproject.org/sources/*.
I made the change.

This is a partial fix for bug 553.

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
2011-03-01 16:36:42 -08:00
Scott Rifenbark
c387491661 [BUGID# 553] - documentation/poky-ref-manual/development.xml: Fixed pokylinux.org URL
[BUGID# 553] - In "The Anjuta Plug-in" section (5.1.2.2) there was
an URL to the source for the Anjuta Plug-in.  The URL had the
pokylinux.org string in it and pointed to the old area.  I changed
the URL to http://git.yoctoproject.org and directed the user to
look under IDE Plugins.

This is a partial fix to bug 553.

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
2011-03-01 16:36:42 -08:00
Scott Rifenbark
7db4e07719 [BUGID# 553] - documentation/poky-ref-manual/development.xml: Fixed pokylinux URLs
[BUGID# 553] - In "The Eclipse Plug-in" section (5.1.2.1) there were two
URLs referencing the place to get the Eclipse plug-in.  One specified
the URL to put into the HTTP:// field in the Eclipse IDE when installing
the software.  This URL was incorrect.  I replaced it with the correct
URL, which was http://www.yoctoproject.org/downloads/eclipse-plugin/.

The second URL that was fixed was referencing the source code for the
plug-in.  It had the old pokylinux.org string.  I changed it to
http://git.yoctoproject.org.

These fixes partially address bug 553.

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
2011-03-01 16:36:42 -08:00
Scott Rifenbark
0073fae58e [BUGID# 553] - documentation/poky-ref-manual/introduction.xml: fixed pokylinux.org URL
[BUGID# 553] - In the Development Checkouts section (1.5.3) there was a
reference to our git repository located at git://git.pokylinux.org/poky.git.
I changed this to git://git.yoctoproject.org/poky.git.  This is a
partial fix to Bug 553.

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
2011-03-01 16:36:42 -08:00
Scott Rifenbark
960c76bad2 [BUGID# 553] - documentation/poky-ref-manual/introduction.xml: Fixed pokylinux.org URL
[BUGID# 553] - In the Releases section (1.5.1) there was an URL to
http://pokylinux.org/releases.  This URL was old and I replaced it
with http://yoctoproject.org/downloads/poky.  This partially fixes
bug 553.

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
2011-03-01 16:36:42 -08:00
Scott Rifenbark
93c36a6f68 documentation/poky-ref-manual/introduction.xml: [BUGID# 553] - Fixed pokylinux URL
[BUGID# 553] - In the Development Checkouts section (1.5.3) there is a
reference to http://git.pokylinux.org/.  This URL resolves to an older looking
source area.  I determined that the URL http://git.yoctoproject.org/ resolves
to the newer Yocto source web interface, so I changed the URL to that.
This is a partial fix to Bug 553.

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
2011-03-01 16:36:42 -08:00
Scott Rifenbark
781c12f2a9 documentation/poky-ref-manual/introduction.xml: [BUGID# 553] - Fixed pokylinux.org link in Nightly Build section
[BUGID# 553] - In the nightly build section (1.5.2) there is a reference to
http://autobuilder.pokylinux.org/.  This URL resolves to an autobuilder
page that has a bunch of pokylinux links.  I determined that the URL
http://autobuilder.yoctoproject.org/ also resolves to the autobuilder
page so I updated the URL to use the YP link.  This is a partial fix
to Bug 553.

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
2011-03-01 16:36:41 -08:00
Scott Rifenbark
55f3c2f438 documentation/poky-ref-manual/ref-images.xml: Update to Images Appendix
Added 'ls meta*/recipes*/images/*.bb' as the command to use to see the
supported images.  Also added poky-image-lsb as an image and noted
that poky-image-sdk has become poky-image-sato-dev.

These fixes are in response to alpha testing for the Yocto 1.0 release.

Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com>
2011-03-01 16:36:41 -08:00
Bruce Ashfield
60e922f180 u-boot: remove do_install from u-boot.inc
Fixes [BUGID #777]

The do_install rule in u-boot.inc was installing a host
tool into the target ${bindir}, which is subsequently
stripped with target strip during packaging, and the
obvious error ensues.

The native u-boot recipe has its own install rule, and
the machine specific u-boot doesn't require mkimage or
anything else in the do_install function. So we remove
it completely until it is needed again.

[sgw: PR bump]
Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
2011-03-01 10:01:40 -08:00
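For context, the failure mode described above comes from a do_install that copies a
host binary into the target image.  The sketch below is illustrative only (the real
u-boot.inc contents differ); it shows the pattern that triggers the target-strip
error during packaging:

# illustrative sketch, not the actual u-boot.inc do_install
do_install () {
    install -d ${D}${bindir}
    # mkimage here is built for the host; installing it into the target ${bindir}
    # means it gets stripped with the target strip during packaging and fails
    install -m 0755 tools/mkimage ${D}${bindir}/mkimage
}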
Liping Ke
a8a305a8ca ADT: Fix several bugs for adt installer
Two bugs were found:
1) The image download file path was not correct, so even if the file was
   already downloaded, it could not be detected.
2) Several images have been renamed (such as sato-dev and sato-sdk), so the
   names need to be changed accordingly.

Signed-off-by: Liping Ke <liping.ke@intel.com>
2011-03-01 10:01:40 -08:00
Yu Ke
87e8e1b31c shadow: upgrade to 4.1.4.3 to fix security vulnerability
For CVE-2011-0721: http://lists.debian.org/debian-security-announce/2011/msg00030.html

Signed-off-by: Yu Ke <ke.yu@intel.com>
2011-03-01 10:01:40 -08:00
Dongxiao Xu
f68e7a365f ncurses: Change ncurses patch SRC_URI location
One of the ncurses patches has been removed from its original repo
location, so use the autobuilder cache location instead.

Comment out the original patch address instead of removing it,
since we may still need that address when upgrading the recipe later.

This fixes [BUGID #709].

[sgw: fixed having comment embedded in SRC_URI]
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
2011-03-01 10:01:39 -08:00
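The resulting SRC_URI arrangement would look roughly like the sketch below; the patch
file name and the original address are placeholders, not the recipe's real entries.
Keeping the old address as a comment on its own line, rather than embedding the comment
inside the SRC_URI value (the problem the follow-up fix corrected), preserves the
reference for a later upgrade:

# original upstream patch location, kept for the next recipe upgrade
# (placeholder address, not the recipe's actual entry)
#   ftp://original.example.org/ncurses/some-fix.patch
SRC_URI += "http://autobuilder.yoctoproject.org/sources/some-fix.patch"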
Dongxiao Xu
8abb5f60ca attr: Change SRC_URI to a correct location
attr has changed its download link, so the SRC_URI was changed accordingly.

This fixes [BUGID #710]

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
2011-03-01 10:01:39 -08:00
Saul Wold
59aa9a23d8 Revert "base/utility-tasks.bbclass: Drop do_setscene and do_rebuild"
This reverts commit 6d79765420.

The original patch broke the incremental build, so not all is right
with this change yet.

Signed-off-by: Saul Wold <sgw@linux.intel.com>
2011-02-28 18:12:44 -08:00
Richard Purdie
6d79765420 base/utility-tasks.bbclass: Drop do_setscene and do_rebuild
The do_setscene task now only exists for rebuild support, as all its other
functionality has been superseded. The rebuild task currently crashes due
to removal of the working directory and therefore isn't working for anyone.
It also interacts extremely badly with the newer sstate technology, to the
point of being dangerous.

In summary, if we want rebuild support it needs a reimplementation, so remove
this version and all its remnants and hacks.

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-02-28 16:40:44 -08:00
Scott Garman
49ca11e02d distro_tracking_fields.inc: add transfig and linuxdoc-tools recipes
Signed-off-by: Scott Garman <scott.a.garman@intel.com>
2011-02-28 16:34:55 -08:00
Saul Wold
c004e18fb1 distro_tracking: update for newer packages added
Signed-off-by: Saul Wold <sgw@linux.intel.com>
2011-02-28 16:34:49 -08:00
Mark Hatle
ee5918d9d7 populate_sdk_rpm.bbclass: Add the necessary solvedb lock
[BUG #776]

When using the RPM solve databases, we have to lock our operations
to avoid removing a database while it is in use.

The same lock is shared by rootfs_rpm.bbclass.

Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
2011-02-28 16:34:41 -08:00
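A minimal sketch of taking such a lock from a class is shown below, assuming BitBake's
bb.utils.lockfile()/unlockfile() helpers and a lock file path shared with
rootfs_rpm.bbclass; the function name, variable, and file name are illustrative, not
the actual class contents:

# illustrative sketch, not the real populate_sdk_rpm.bbclass code
python rpm_solvedb_locked_example () {
    # share one lock file with rootfs_rpm.bbclass so the solve DB is
    # never removed while another task is still using it
    lockname = os.path.join(d.getVar('DEPLOY_DIR_RPM', True), 'rpm-solvedb.lock')
    lock = bb.utils.lockfile(lockname)
    try:
        bb.note("RPM solve database work would happen here")
    finally:
        bb.utils.unlockfile(lock)
}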
Richard Purdie
9837e78bfc bitbake/cache/runqueue.py: Move workload for recipe parsing to the child process
Parsing the recipe in the parent before forking off the child worker
can mean the parent doesn't hit the idle loop and becomes a bottleneck
when launching many short-lived processes.

The reason we need this in the parent is to figure out the fakeroot
environment options. To address this, add the fakeroot variables
to the cache and move the recipe loadData into the child task.

For a poky-image-sato build this results in about a 2 minute speedup
(1.8%).

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-02-28 16:34:34 -08:00
Bruce Ashfield
e08dc5aaae linux-yocto: add crownbay BSP infrastructure
Updating the meta SRCREV to grab this linux-yocto commit:

    meta: add crownbay BSP infrastructure

    Import the 2.6.34 crownbay infrastructure and update for the
    2.6.37 kernel. This also brings in the feature/drm-emgd that
    the crownbay requires.

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
2011-02-26 14:30:34 -08:00
Beth Flanagan
9ae2e2ef95 Merge branch 'bernard' of ssh://git.pokylinux.org/poky into bernard-1.0 2011-02-26 14:29:21 -08:00
Saul Wold
55b58a5d4c task-poky-lsb: libqtopenqgl4 should be for qemux86 and atom-pc only
Signed-off-by: Saul Wold <sgw@linux.intel.com>
2011-02-26 10:50:15 -08:00
2898 changed files with 81896 additions and 120182 deletions

17
.gitignore vendored
View File

@@ -7,16 +7,31 @@ build/tmp/
build/sstate-cache
build/pyshtables.py
pstage/
scripts/oe-git-proxy-socks
scripts/poky-git-proxy-socks
sources/
meta-darwin
meta-maemo
meta-extras
meta-m2
meta-prvt*
poky-autobuilder*
*.swp
*.orig
*.rej
*~
documentation/poky-ref-manual/poky-ref-manual.html
documentation/poky-ref-manual/poky-ref-manual.pdf
documentation/poky-ref-manual/poky-ref-manual.tgz
documentation/poky-ref-manual/bsp-guide.html
documentation/poky-ref-manual/bsp-guide.pdf
documentation/bsp-guide/bsp-guide.html
documentation/bsp-guide/bsp-guide.pdf
documentation/bsp-guide/bsp-guide.tgz
documentation/yocto-project-qs/yocto-project-qs.html
documentation/yocto-project-qs/yocto-project-qs.tgz
documentation/kernel-manual/kernel-manual.html
documentation/kernel-manual/kernel-manual.tgz
documentation/kernel-manual/kernel-manual.pdf

30
README
View File

@@ -1,25 +1,15 @@
Poky
====
Poky is an integration of various components to form a complete prepackaged
build system and development environment. It features support for building
customised embedded device style images. There are reference demo images
featuring a X11/Matchbox/GTK themed UI called Sato. The system supports
cross-architecture application development using QEMU emulation and a
standalone toolchain and SDK with IDE integration.
Poky platform builder is a combined cross build system and development
environment. It features support for building X11/Matchbox/GTK based
filesystem images for various embedded devices and boards. It also
supports cross-architecture application development using QEMU emulation
and a standalone toolchain and SDK with IDE integration.
Poky has an extensive handbook, the source of which is contained in
the handbook directory. For compiled HTML or pdf versions of this,
see the Poky website http://pokylinux.org.
Additional information on the specifics of hardware that Poky supports
is available in README.hardware. Further hardware support can easily be added
in the form of layers which extend the systems capabilities in a modular way.
As an integration layer Poky consists of several upstream projects such as
BitBake, OpenEmbedded-Core, Yocto documentation and various sources of information
e.g. for the hardware support. Poky is in turn a component of the Yocto Project.
The Yocto Project has extensive documentation about the system including a
reference manual which can be found at:
http://yoctoproject.org/community/documentation
For information about OpenEmbedded see their website:
http://www.openembedded.org/
is available in README.hardware.

View File

@@ -87,22 +87,22 @@ Hard Disk:
1. Build a directdisk image format. This will generate proper partition tables
that will in turn be written to the physical media. For example:
$ bitbake core-image-minimal-directdisk
$ bitbake poky-image-minimal-directdisk
2. Use the "dd" utility to write the image to the raw block device. For example:
# dd if=core-image-minimal-directdisk-atom-pc.hdddirect of=/dev/sdb
# dd if=poky-image-minimal-directdisk-atom-pc.hdddirect of=/dev/sdb
USB Device:
1. Build an hddimg image format. This is a simple filesystem without partition
tables and is suitable for USB keys. For example:
$ bitbake core-image-minimal-live
$ bitbake poky-image-minimal-live
2. Use the "dd" utility to write the image to the raw block device. For
example:
# dd if=core-image-minimal-live-atom-pc.hddimg of=/dev/sdb
# dd if=poky-image-minimal-live-atom-pc.hddimg of=/dev/sdb
If the device fails to boot with "Boot error" displayed, it is likely the BIOS
cannot understand the physical layout of the disk (or rather it expects a
@@ -126,7 +126,7 @@ USB Device:
b. Copy the contents of the poky image to the USB-ZIP mode device:
# mount -o loop core-image-minimal-live-atom-pc.hddimg /tmp/image
# mount -o loop poky-image-minimal-live-atom-pc.hddimg /tmp/image
# mount /dev/sdb4 /tmp/usbkey
# cp -rf /tmp/image/* /tmp/usbkey
@@ -196,7 +196,7 @@ if used via a usb card reader):
# cp u-boot-beagleboard.bin /media/boot/u-boot.bin
3. Install the root filesystem
# tar x -C /media/root -f core-image-$IMAGE_TYPE-beagleboard.tar.bz2
# tar x -C /media/root -f poky-image-$IMAGE_TYPE-beagleboard.tar.bz2
# tar x -C /media/root -f modules-$KERNEL_VERSION-beagleboard.tgz
4. Install the kernel uImage
@@ -291,11 +291,11 @@ name in all commands where appropriate.
--- Preparation ---
1) Build an image (e.g. core-image-minimal) using "routerstationpro" as the
1) Build an image (e.g. poky-image-minimal) using "routerstationpro" as the
MACHINE
2) Partition the USB drive so that primary partition 1 is type Linux (83).
Minimum size depends on your root image size - core-image-minimal probably
Minimum size depends on your root image size - poky-image-minimal probably
only needs 8-16MB, other images will need more.
# fdisk /dev/sdb
@@ -316,11 +316,11 @@ only needs 8-16MB, other images will need more.
# mke2fs -j /dev/sdb1
4) Mount partition 1 and then extract the contents of
tmp/deploy/images/core-image-XXXX.tar.bz2 into it (preserving permissions).
tmp/deploy/images/poky-image-XXXX.tar.bz2 into it (preserving permissions).
# mount /dev/sdb1 /media/sdb1
# cd /media/sdb1
# tar -xvjpf tmp/deploy/images/core-image-XXXX.tar.bz2
# tar -xvjpf tmp/deploy/images/poky-image-XXXX.tar.bz2
5) Unmount the USB drive and then plug it into the board's USB port

View File

@@ -32,15 +32,17 @@ import warnings
from traceback import format_exception
try:
import bb
except RuntimeError as exc:
except RuntimeError, exc:
sys.exit(str(exc))
from bb import event
import bb.msg
from bb import cooker
from bb import ui
from bb import server
from bb.server import none
#from bb.server import xmlrpc
__version__ = "1.13.2"
__version__ = "1.11.0"
logger = logging.getLogger("BitBake")
@@ -118,10 +120,7 @@ Default BBFILES are the .bb files in the current directory.""")
action = "store", dest = "cmd")
parser.add_option("-r", "--read", help = "read the specified file before bitbake.conf",
action = "append", dest = "prefile", default = [])
parser.add_option("-R", "--postread", help = "read the specified file after bitbake.conf",
action = "append", dest = "postfile", default = [])
action = "append", dest = "file", default = [])
parser.add_option("-v", "--verbose", help = "output more chit-chat to the terminal",
action = "store_true", dest = "verbose", default = False)
@@ -138,6 +137,9 @@ Default BBFILES are the .bb files in the current directory.""")
parser.add_option("-p", "--parse-only", help = "quit after parsing the BB files (developers only)",
action = "store_true", dest = "parse_only", default = False)
parser.add_option("-d", "--disable-psyco", help = "disable using the psyco just-in-time compiler (not recommended)",
action = "store_true", dest = "disable_psyco", default = False)
parser.add_option("-s", "--show-versions", help = "show current and preferred versions of all packages",
action = "store_true", dest = "show_versions", default = False)
@@ -159,9 +161,6 @@ Default BBFILES are the .bb files in the current directory.""")
parser.add_option("-u", "--ui", help = "userinterface to use",
action = "store", dest = "ui")
parser.add_option("-t", "--servertype", help = "Choose which server to use, none, process or xmlrpc",
action = "store", dest = "servertype")
parser.add_option("", "--revisions-changed", help = "Set the exit code depending on whether upstream floating revisions have changed or not",
action = "store_true", dest = "revisions_changed", default = False)
@@ -169,22 +168,15 @@ Default BBFILES are the .bb files in the current directory.""")
configuration = BBConfiguration(options)
configuration.pkgs_to_build.extend(args[1:])
configuration.initial_path = os.environ['PATH']
ui_main = get_ui(configuration)
# Server type could be xmlrpc or none currently, if nothing is specified,
# default server would be none
if configuration.servertype:
server_type = configuration.servertype
else:
server_type = 'process'
loghandler = event.LogHandler()
logger.addHandler(loghandler)
try:
module = __import__("bb.server", fromlist = [server_type])
server = getattr(module, server_type)
except AttributeError:
sys.exit("FATAL: Invalid server type '%s' specified.\n"
"Valid interfaces: xmlrpc, process, none [default]." % servertype)
#server = bb.server.xmlrpc
server = bb.server.none
# Save a logfile for cooker into the current working directory. When the
# server is daemonized this logfile will be truncated.
@@ -193,42 +185,35 @@ Default BBFILES are the .bb files in the current directory.""")
bb.utils.init_logger(bb.msg, configuration.verbose, configuration.debug,
configuration.debug_domains)
# Ensure logging messages get sent to the UI as events
handler = bb.event.LogHandler()
logger.addHandler(handler)
# Clear away any spurious environment variables. But don't wipe the
# environment totally. This is necessary to ensure the correct operation
# of the UIs (e.g. for DISPLAY, etc.)
bb.utils.clean_environment()
server = server.BitBakeServer()
server.initServer()
idle = server.getServerIdleCB()
cooker = bb.cooker.BBCooker(configuration, idle)
cooker = bb.cooker.BBCooker(configuration, server)
cooker.parseCommandLine()
server.addcooker(cooker)
server.saveConnectionDetails()
server.detach(cooker_logfile)
serverinfo = server.BitbakeServerInfo(cooker.server)
# Should no longer need to ever reference cooker
server.BitBakeServerFork(cooker, cooker.server, serverinfo, cooker_logfile)
del cooker
logger.removeHandler(handler)
logger.removeHandler(loghandler)
# Setup a connection to the server (cooker)
server_connection = server.establishConnection()
server_connection = server.BitBakeServerConnection(serverinfo)
# Launch the UI
if configuration.ui:
ui = configuration.ui
else:
ui = "knotty"
try:
return server.launchUI(ui_main, server_connection.connection, server_connection.events)
return server.BitbakeUILauch().launch(serverinfo, ui_main, server_connection.connection, server_connection.events)
finally:
server_connection.terminate()
return 1
if __name__ == "__main__":
try:
ret = main()
@@ -237,4 +222,3 @@ if __name__ == "__main__":
import traceback
traceback.print_exc(5)
sys.exit(ret)

View File

@@ -18,8 +18,8 @@ sys.path[0:0] = [os.path.join(topdir, 'lib')]
import bb.cache
import bb.cooker
import bb.providers
import bb.utils
from bb.cooker import state
from bb.server import none
logger = logging.getLogger('BitBake')
@@ -45,17 +45,14 @@ class Commands(cmd.Cmd):
self.returncode = 0
self.config = Config(parse_only=True)
self.cooker = bb.cooker.BBCooker(self.config,
self.register_idle_function)
bb.server.none)
self.config_data = self.cooker.configuration.data
bb.providers.logger.setLevel(logging.ERROR)
self.prepare_cooker()
def register_idle_function(self, function, data):
pass
def prepare_cooker(self):
sys.stderr.write("Parsing recipes..")
logger.setLevel(logging.WARNING)
logger.setLevel(logging.ERROR)
try:
while self.cooker.state in (state.initial, state.parsing):
@@ -74,74 +71,6 @@ class Commands(cmd.Cmd):
def do_show_layers(self, args):
logger.info(str(self.config_data.getVar('BBLAYERS', True)))
def do_show_overlayed(self, args):
if self.cooker.overlayed:
logger.info('Overlayed recipes:')
for f in self.cooker.overlayed.iterkeys():
logger.info('%s' % f)
for of in self.cooker.overlayed[f]:
logger.info(' %s' % of)
else:
logger.info('No overlayed recipes found')
def do_flatten(self, args):
arglist = args.split()
if len(arglist) != 1:
logger.error('syntax: flatten <outputdir>')
return
if os.path.exists(arglist[0]) and os.listdir(arglist[0]):
logger.error('Directory %s exists and is non-empty, please clear it out first' % arglist[0])
return
layers = (self.config_data.getVar('BBLAYERS', True) or "").split()
for layer in layers:
overlayed = []
for f in self.cooker.overlayed.iterkeys():
for of in self.cooker.overlayed[f]:
if of.startswith(layer):
overlayed.append(of)
logger.info('Copying files from %s...' % layer )
for root, dirs, files in os.walk(layer):
for f1 in files:
f1full = os.sep.join([root, f1])
if f1full in overlayed:
logger.info(' Skipping overlayed file %s' % f1full )
else:
ext = os.path.splitext(f1)[1]
if ext != '.bbappend':
fdest = f1full[len(layer):]
fdest = os.path.normpath(os.sep.join([arglist[0],fdest]))
bb.utils.mkdirhier(os.path.dirname(fdest))
if os.path.exists(fdest):
if f1 == 'layer.conf' and root.endswith('/conf'):
logger.info(' Skipping layer config file %s' % f1full )
continue
else:
logger.warn('Overwriting file %s', fdest)
bb.utils.copyfile(f1full, fdest)
if ext == '.bb':
if f1 in self.cooker_data.appends:
appends = self.cooker_data.appends[f1]
if appends:
logger.info(' Applying appends to %s' % fdest )
for appendname in appends:
self.apply_append(appendname, fdest)
def get_append_layer(self, appendname):
for layer, _, regex, _ in self.cooker.status.bbfile_config_priorities:
if regex.match(appendname):
return layer
return "?"
def apply_append(self, appendname, recipename):
appendfile = open(appendname, 'r')
recipefile = open(recipename, 'a')
recipefile.write('\n')
recipefile.write('##### bbappended from %s #####\n' % self.get_append_layer(appendname))
recipefile.writelines(appendfile.readlines())
def do_show_appends(self, args):
if not self.cooker_data.appends:
logger.info('No append files found')
@@ -149,12 +78,10 @@ class Commands(cmd.Cmd):
logger.info('State of append files:')
pnlist = list(self.cooker_data.pkg_pn.keys())
pnlist.sort()
for pn in pnlist:
for pn in self.cooker_data.pkg_pn:
self.show_appends_for_pn(pn)
self.show_appends_for_skipped()
self.show_appends_with_no_recipes()
def show_appends_for_pn(self, pn):
filenames = self.cooker_data.pkg_pn[pn]
@@ -165,30 +92,20 @@ class Commands(cmd.Cmd):
self.cooker_data.pkg_pn)
best_filename = os.path.basename(best[3])
self.show_appends_output(filenames, best_filename)
def show_appends_for_skipped(self):
filenames = [os.path.basename(f)
for f in self.cooker.skiplist.iterkeys()]
self.show_appends_output(filenames, None, " (skipped)")
def show_appends_output(self, filenames, best_filename, name_suffix = ''):
appended, missing = self.get_appends_for_files(filenames)
if appended:
for basename, appends in appended:
logger.info('%s%s:', basename, name_suffix)
logger.info('%s:', basename)
for append in appends:
logger.info(' %s', append)
if best_filename:
if best_filename in missing:
logger.warn('%s: missing append for preferred version',
best_filename)
self.returncode |= 1
if best_filename in missing:
logger.warn('%s: missing append for preferred version',
best_filename)
self.returncode |= 1
def get_appends_for_files(self, filenames):
appended, notappended = [], []
appended, notappended = set(), set()
for filename in filenames:
_, cls = bb.cache.Cache.virtualfn2realfn(filename)
if cls:
@@ -197,11 +114,26 @@ class Commands(cmd.Cmd):
basename = os.path.basename(filename)
appends = self.cooker_data.appends.get(basename)
if appends:
appended.append((basename, list(appends)))
appended.add((basename, frozenset(appends)))
else:
notappended.append(basename)
notappended.add(basename)
return appended, notappended
def show_appends_with_no_recipes(self):
recipes = set(os.path.basename(f)
for f in self.cooker_data.pkg_fn.iterkeys())
appended_recipes = self.cooker_data.appends.iterkeys()
appends_without_recipes = [self.cooker_data.appends[recipe]
for recipe in appended_recipes
if recipe not in recipes]
if appends_without_recipes:
appendlines = (' %s' % append
for appends in appends_without_recipes
for append in appends)
logger.warn('No recipes available for:\n%s',
'\n'.join(appendlines))
self.returncode |= 4
def do_EOF(self, line):
return True
@@ -211,8 +143,7 @@ class Config(object):
self.pkgs_to_build = []
self.debug_domains = []
self.extra_assume_provided = []
self.prefile = []
self.postfile = []
self.file = []
self.debug = 0
self.__dict__.update(options)

View File

@@ -1,53 +0,0 @@
#!/usr/bin/env python
import os
import sys,logging
import optparse
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)),'lib'))
import prserv
import prserv.serv
__version__="1.0.0"
PRHOST_DEFAULT=''
PRPORT_DEFAULT=8585
def main():
parser = optparse.OptionParser(
version="Bitbake PR Service Core version %s, %%prog version %s" % (prserv.__version__, __version__),
usage = "%prog [options]")
parser.add_option("-f", "--file", help="database filename(default prserv.db)", action="store",
dest="dbfile", type="string", default="prserv.db")
parser.add_option("-l", "--log", help="log filename(default prserv.log)", action="store",
dest="logfile", type="string", default="prserv.log")
parser.add_option("--loglevel", help="logging level, i.e. CRITICAL, ERROR, WARNING, INFO, DEBUG",
action = "store", type="string", dest="loglevel", default = "WARNING")
parser.add_option("--start", help="start daemon",
action="store_true", dest="start", default="True")
parser.add_option("--stop", help="stop daemon",
action="store_false", dest="start")
parser.add_option("--host", help="ip address to bind", action="store",
dest="host", type="string", default=PRHOST_DEFAULT)
parser.add_option("--port", help="port number(default 8585)", action="store",
dest="port", type="int", default=PRPORT_DEFAULT)
options, args = parser.parse_args(sys.argv)
prserv.init_logger(os.path.abspath(options.logfile),options.loglevel)
if options.start:
prserv.serv.start_daemon(options)
else:
prserv.serv.stop_daemon()
if __name__ == "__main__":
try:
ret = main()
except Exception:
ret = 1
import traceback
traceback.print_exc(5)
sys.exit(ret)

View File

@@ -85,6 +85,9 @@ don't execute, just go through the motions
.B \-p, \-\-parse-only
quit after parsing the BB files (developers only)
.TP
.B \-d, \-\-disable-psyco
disable using the psyco just-in-time compiler (not recommended)
.TP
.B \-s, \-\-show-versions
show current and preferred versions of all packages
.TP

View File

@@ -29,7 +29,7 @@ tasks and managing metadata. As such, its similarities to GNU make and other
build tools are readily apparent. It was inspired by Portage, the package management system used by the Gentoo Linux distribution. BitBake is the basis of the <ulink url="http://www.openembedded.org/">OpenEmbedded</ulink> project, which is being used to build and maintain a number of embedded Linux distributions, including OpenZaurus and Familiar.</para>
</section>
<section>
<title>Background and goals</title>
<title>Background and Goals</title>
<para>Prior to BitBake, no other build tool adequately met
the needs of an aspiring embedded Linux distribution. All of the
buildsystems used by traditional desktop Linux distributions lacked
@@ -42,9 +42,9 @@ embedded space, were scalable or maintainable.</para>
<listitem><para>Handle crosscompilation.</para></listitem>
<listitem><para>Handle interpackage dependencies (build time on target architecture, build time on native architecture, and runtime).</para></listitem>
<listitem><para>Support running any number of tasks within a given package, including, but not limited to, fetching upstream sources, unpacking them, patching them, configuring them, et cetera.</para></listitem>
<listitem><para>Must be Linux distribution agnostic (both build and target).</para></listitem>
<listitem><para>Must be linux distribution agnostic (both build and target).</para></listitem>
<listitem><para>Must be architecture agnostic</para></listitem>
<listitem><para>Must support multiple build and target operating systems (including Cygwin, the BSDs, etc).</para></listitem>
<listitem><para>Must support multiple build and target operating systems (including cygwin, the BSDs, etc).</para></listitem>
<listitem><para>Must be able to be self contained, rather than tightly integrated into the build machine's root filesystem.</para></listitem>
<listitem><para>There must be a way to handle conditional metadata (on target architecture, operating system, distribution, machine).</para></listitem>
<listitem><para>It must be easy for the person using the tools to supply their own local metadata and packages to operate against.</para></listitem>
@@ -91,13 +91,13 @@ share common metadata between many packages.</para></listitem>
<section>
<title>Setting a default value (?=)</title>
<para><screen><varname>A</varname> ?= "aval"</screen></para>
<para>If <varname>A</varname> is set before the above is called, it will retain its previous value. If <varname>A</varname> is unset prior to the above call, <varname>A</varname> will be set to <literal>aval</literal>. Note that this assignment is immediate, so if there are multiple ?= assignments to a single variable, the first of those will be used.</para>
<para>If <varname>A</varname> is set before the above is called, it will retain it's previous value. If <varname>A</varname> is unset prior to the above call, <varname>A</varname> will be set to <literal>aval</literal>. Note that this assignment is immediate, so if there are multiple ?= assignments to a single variable, the first of those will be used.</para>
</section>
<section>
<title>Setting a default value (??=)</title>
<para><screen><varname>A</varname> ??= "somevalue"</screen></para>
<para><screen><varname>A</varname> ??= "someothervalue"</screen></para>
<para>If <varname>A</varname> is set before the above, it will retain that value. If <varname>A</varname> is unset prior to the above, <varname>A</varname> will be set to <literal>someothervalue</literal>. This is a lazy version of ??=, in that the assignment does not occur until the end of the parsing process, so that the last, rather than the first, ??= assignment to a given variable will be used.</para>
<para>If <varname>A</varname> is set before the above, it will retain that value. If <varname>A</varname> is unset prior to the above, <varname>A</varname> will be set to <literal>someothervalue</literal>. This is a lazy version of ?=, in that the assignment does not occur until the end of the parsing process, so that the last, rather than the first, ??= assignment to a given variable will be used.</para>
</section>
<section>
<title>Immediate variable expansion (:=)</title>
@@ -125,7 +125,7 @@ share common metadata between many packages.</para></listitem>
<varname>B</varname> .= "additionaldata"
<varname>C</varname> = "cval"
<varname>C</varname> =. "test"</screen></para>
<para>In this example, <varname>B</varname> is now <literal>bvaladditionaldata</literal> and <varname>C</varname> is <literal>testcval</literal>. In contrast to the above appending and prepending operators, no additional space
<para>In this example, <varname>B</varname> is now <literal>bvaladditionaldata</literal> and <varname>C</varname> is <literal>testcval</literal>. In contrast to the above Appending and Prepending operators no additional space
will be introduced.</para>
</section>
<section>
@@ -147,12 +147,12 @@ will be introduced.</para>
</section>
<section>
<title>Inclusion</title>
<para>Next, there is the <literal>include</literal> directive, which causes BitBake to parse whatever file you specify, and insert it at that location, which is not unlike <command>make</command>. However, if the path specified on the <literal>include</literal> line is a relative path, BitBake will locate the first one it can find within <envar>BBPATH</envar>.</para>
<para>Next, there is the <literal>include</literal> directive, which causes BitBake to parse in whatever file you specify, and insert it at that location, which is not unlike <command>make</command>. However, if the path specified on the <literal>include</literal> line is a relative path, BitBake will locate the first one it can find within <envar>BBPATH</envar>.</para>
</section>
<section>
<title>Requiring inclusion</title>
<title>Requiring Inclusion</title>
<para>In contrast to the <literal>include</literal> directive, <literal>require</literal> will
raise an ParseError if the file to be included cannot be found. Otherwise it will behave just like the <literal>
raise an ParseError if the to be included file can not be found. Otherwise it will behave just like the <literal>
include</literal> directive.</para>
</section>
<section>
@@ -171,10 +171,10 @@ include</literal> directive.</para>
import time
print time.strftime('%Y%m%d', time.gmtime())
}</screen></para>
<para>This is the similar to the previous, but flags it as Python so that BitBake knows it is Python code.</para>
<para>This is the similar to the previous, but flags it as python so that BitBake knows it is python code.</para>
</section>
<section>
<title>Defining Python functions into the global Python namespace</title>
<title>Defining python functions into the global python namespace</title>
<para><emphasis>NOTE:</emphasis> This is only supported in .bb and .bbclass files.</para>
<para><screen>def get_depends(bb, d):
if bb.data.getVar('SOMECONDITION', d, True):
@@ -187,8 +187,8 @@ include</literal> directive.</para>
<para>This would result in <varname>DEPENDS</varname> containing <literal>dependencywithcond</literal>.</para>
</section>
<section>
<title>Variable flags</title>
<para>Variables can have associated flags which provide a way of tagging extra information onto a variable. Several flags are used internally by BitBake but they can be used externally too if needed. The standard operations mentioned above also work on flags.</para>
<title>Variable Flags</title>
<para>Variables can have associated flags which provide a way of tagging extra information onto a variable. Several flags are used internally by bitbake but they can be used externally too if needed. The standard operations mentioned above also work on flags.</para>
<para><screen><varname>VARIABLE</varname>[<varname>SOMEFLAG</varname>] = "value"</screen></para>
<para>In this example, <varname>VARIABLE</varname> has a flag, <varname>SOMEFLAG</varname> which is set to <literal>value</literal>.</para>
</section>
@@ -200,19 +200,19 @@ include</literal> directive.</para>
<section>
<title>Tasks</title>
<para><emphasis>NOTE:</emphasis> This is only supported in .bb and .bbclass files.</para>
<para>In BitBake, each step that needs to be run for a given .bb is known as a task. There is a command <literal>addtask</literal> to add new tasks (must be a defined Python executable metadata and must start with <quote>do_</quote>) and describe intertask dependencies.</para>
<para>In BitBake, each step that needs to be run for a given .bb is known as a task. There is a command <literal>addtask</literal> to add new tasks (must be a defined python executable metadata and must start with <quote>do_</quote>) and describe intertask dependencies.</para>
<para><screen>python do_printdate () {
import time
print time.strftime('%Y%m%d', time.gmtime())
}
addtask printdate before do_build</screen></para>
<para>This defines the necessary Python function and adds it as a task which is now a dependency of do_build, the default task. If anyone executes the do_build task, that will result in do_printdate being run first.</para>
<para>This defines the necessary python function and adds it as a task which is now a dependency of do_build (the default task). If anyone executes the do_build task, that will result in do_printdate being run first.</para>
</section>
<section>
<title>Events</title>
<para><emphasis>NOTE:</emphasis> This is only supported in .bb and .bbclass files.</para>
<para>BitBake allows installation of event handlers. Events are triggered at certain points during operation, such as the beginning of operation against a given .bb, the start of a given task, task failure, task success, et cetera. The intent is to make it easy to do things like email notification on build failure.</para>
<para>BitBake allows to install event handlers. Events are triggered at certain points during operation, such as, the beginning of operation against a given .bb, the start of a given task, task failure, task success, et cetera. The intent was to make it easy to do things like email notifications on build failure.</para>
<para><screen>addhandler myclass_eventhandler
python myclass_eventhandler() {
from bb.event import getName
@@ -228,20 +228,20 @@ of the event and the content of the <varname>FILE</varname> variable.</para>
</section>
<section>
<title>Variants</title>
<para>Two BitBake features exist to facilitate the creation of multiple buildable incarnations from a single recipe file.</para>
<para>The first is <varname>BBCLASSEXTEND</varname>. This variable is a space separated list of classes used to "extend" the recipe for each variant. As an example, setting <screen>BBCLASSEXTEND = "native"</screen> results in a second incarnation of the current recipe being available. This second incarantion will have the "native" class inherited.</para>
<para>The second feature is <varname>BBVERSIONS</varname>. This variable allows a single recipe to build multiple versions of a project from a single recipe file, and allows you to specify conditional metadata (using the <varname>OVERRIDES</varname> mechanism) for a single version, or an optionally named range of versions:</para>
<para>Two Bitbake features exist to facilitate the creation of multiple buildable incarnations from a single recipe file.</para>
<para>The first is <varname>BBCLASSEXTEND</varname>. This variable is a space separated list of classes to utilize to "extend" the recipe for each variant. As an example, setting <screen>BBCLASSEXTEND = "native"</screen> results in a second incarnation of the current recipe being available. This second incarantion will have the "native" class inherited.</para>
<para>The second feature is <varname>BBVERSIONS</varname>. This variable allows a single recipe to be able to build multiple versions of a project from a single recipe file, and allows you to specify conditional metadata (using the <varname>OVERRIDES</varname> mechanism) for a single version, or an optionally named range of versions:</para>
<para><screen>BBVERSIONS = "1.0 2.0 git"
SRC_URI_git = "git://someurl/somepath.git"</screen></para>
<para><screen>BBVERSIONS = "1.0.[0-6]:1.0.0+ \
1.0.[7-9]:1.0.7+"
SRC_URI_append_1.0.7+ = "file://some_patch_which_the_new_versions_need.patch;patch=1"</screen></para>
<para>Note that the name of the range will default to the original version of the recipe, so given OE, a recipe file of foo_1.0.0+.bb will default the name of its versions to 1.0.0+. This is useful, as the range name is not only placed into overrides; it's also made available for the metadata to use in the form of the <varname>BPV</varname> variable, for use in file:// search paths (<varname>FILESPATH</varname>).</para>
<para>Note that the name of the range will default to the original version of the recipe, so given OE, a recipe file of foo_1.0.0+.bb will default the name of its versions to 1.0.0+. This is useful, as the range name is not only placed into overrides, it's also made available for the metadata to use in the form of the <varname>BPV</varname> variable, for use in file:// search paths (<varname>FILESPATH</varname>).</para>
</section>
</section>
<section>
<title>Dependency handling</title>
<para>BitBake 1.7.x onwards works with the metadata at the task level since this is optimal when dealing with multiple threads of execution. A robust method of specifing task dependencies is therefore needed. </para>
<title>Dependency Handling</title>
<para>Bitbake 1.7.x onwards works with the metadata at the task level since this is optimal when dealing with multiple threads of execution. A robust method of specifing task dependencies is therefore needed. </para>
<section>
<title>Dependencies internal to the .bb file</title>
<para>Where the dependencies are internal to a given .bb file, the dependencies are handled by the previously detailed addtask directive.</para>
@@ -249,26 +249,26 @@ SRC_URI_append_1.0.7+ = "file://some_patch_which_the_new_versions_need.patch;pat
<section>
<title>DEPENDS</title>
<para>DEPENDS lists build time dependencies. The 'deptask' flag for tasks is used to signify the task of each item listed in DEPENDS which must have completed before that task can be executed.</para>
<para>DEPENDS is taken to specify build time dependencies. The 'deptask' flag for tasks is used to signify the task of each DEPENDS which must have completed before that task can be executed.</para>
<para><screen>do_configure[deptask] = "do_populate_staging"</screen></para>
<para>means the do_populate_staging task of each item in DEPENDS must have completed before do_configure can execute.</para>
</section>
<section>
<title>RDEPENDS</title>
<para>RDEPENDS lists runtime dependencies. The 'rdeptask' flag for tasks is used to signify the task of each item listed in RDEPENDS which must have completed before that task can be executed.</para>
<para>RDEPENDS is taken to specify runtime dependencies. The 'rdeptask' flag for tasks is used to signify the task of each RDEPENDS which must have completed before that task can be executed.</para>
<para><screen>do_package_write[rdeptask] = "do_package"</screen></para>
<para>means the do_package task of each item in RDEPENDS must have completed before do_package_write can execute.</para>
</section>
<section>
<title>Recursive DEPENDS</title>
<para>These are specified with the 'recdeptask' flag and is used signify the task(s) of each DEPENDS which must have completed before that task can be executed. It applies recursively so the DEPENDS of each item in the original DEPENDS must be met and so on.</para>
<para>These are specified with the 'recdeptask' flag and is used signify the task(s) of each DEPENDS which must have completed before that task can be executed. It applies recursively so also, the DEPENDS of each item in the original DEPENDS must be met and so on.</para>
</section>
<section>
<title>Recursive RDEPENDS</title>
<para>These are specified with the 'recrdeptask' flag and is used signify the task(s) of each RDEPENDS which must have completed before that task can be executed. It applies recursively so the RDEPENDS of each item in the original RDEPENDS must be met and so on. It also runs all DEPENDS first.</para>
<para>These are specified with the 'recrdeptask' flag and is used signify the task(s) of each RDEPENDS which must have completed before that task can be executed. It applies recursively so also, the RDEPENDS of each item in the original RDEPENDS must be met and so on. It also runs all DEPENDS first too.</para>
</section>
<section>
<title>Inter task</title>
<title>Inter Task</title>
<para>The 'depends' flag for tasks is a more generic form of which allows an interdependency on specific tasks rather than specifying the data in DEPENDS or RDEPENDS.</para>
<para><screen>do_patch[depends] = "quilt-native:do_populate_staging"</screen></para>
<para>means the do_populate_staging task of the target quilt-native must have completed before the do_patch can execute.</para>
@@ -278,34 +278,35 @@ SRC_URI_append_1.0.7+ = "file://some_patch_which_the_new_versions_need.patch;pat
<section>
<title>Parsing</title>
<section>
<title>Configuration files</title>
<para>The first kind of metadata in BitBake is configuration metadata. This metadata is global, and therefore affects <emphasis>all</emphasis> packages and tasks which are executed.</para>
<para>BitBake will first search the current working directory for an optional "conf/bblayers.conf" configuration file. This file is expected to contain a BBLAYERS variable which is a space delimited list of 'layer' directories. For each directory in this list, a "conf/layer.conf" file will be searched for and parsed with the LAYERDIR variable being set to the directory where the layer was found. The idea is these files will setup BBPATH and other variables correctly for a given build directory automatically for the user.</para>
<para>BitBake will then expect to find 'conf/bitbake.conf' somewhere in the user specified <envar>BBPATH</envar>. That configuration file generally has include directives to pull in any other metadata (generally files specific to architecture, machine, <emphasis>local</emphasis> and so on).</para>
<title>Configuration Files</title>
<para>The first of the classifications of metadata in BitBake is configuration metadata. This metadata is global, and therefore affects <emphasis>all</emphasis> packages and tasks which are executed.</para>
<para>Bitbake will first search the current working directory for an optional "conf/bblayers.conf" configuration file. This file is expected to contain a BBLAYERS variable which is a space delimited list of 'layer' directories. For each directory in this list a "conf/layer.conf" file will be searched for and parsed with the LAYERDIR variable being set to the directory where the layer was found. The idea is these files will setup BBPATH and other variables correctly for a given build directory automatically for the user.</para>
<para>Bitbake will then expect to find 'conf/bitbake.conf' somewhere in the user specified <envar>BBPATH</envar>. That configuration file generally has include directives to pull in any other metadata (generally files specific to architecture, machine, <emphasis>local</emphasis> and so on.</para>
<para>Only variable definitions and include directives are allowed in .conf files.</para>
</section>
<section>
<title>Classes</title>
<para>BitBake classes are our rudimentary inheritance mechanism. As briefly mentioned in the metadata introduction, they're parsed when an <literal>inherit</literal> directive is encountered, and they are located in classes/ relative to the directories in <envar>BBPATH</envar>.</para>
<para>BitBake classes are our rudimentary inheritance mechanism. As briefly mentioned in the metadata introduction, they're parsed when an <literal>inherit</literal> directive is encountered, and they are located in classes/ relative to the dirs in <envar>BBPATH</envar>.</para>
</section>
<section>
<title>.bb files</title>
<title>.bb Files</title>
<para>A BitBake (.bb) file is a logical unit of tasks to be executed. Normally this is a package to be built. Inter-.bb dependencies are obeyed. The files themselves are located via the <varname>BBFILES</varname> variable, which is set to a space separated list of .bb files, and does handle wildcards.</para>
</section>
</section>
</chapter>
<chapter>
<title>File download support</title>
<title>File Download support</title>
<section>
<title>Overview</title>
<para>BitBake provides support to download files this procedure is called fetching. The SRC_URI is normally used to tell BitBake which files to fetch. The next sections will describe the available fetchers and their options. Each fetcher honors a set of variables and per URI parameters separated by a <quote>;</quote> consisting of a key and a value. The semantics of the variables and parameters are defined by the fetcher. BitBake tries to have consistent semantics between the different fetchers.
<para>BitBake provides support to download files this procedure is called fetching. The SRC_URI is normally used to indicate BitBake which files to fetch. The next sections will describe th available fetchers and the options they have. Each Fetcher honors a set of Variables and
a per URI parameters separated by a <quote>;</quote> consisting of a key and a value. The semantic of the Variables and Parameters are defined by the Fetcher. BitBakes tries to have a consistent semantic between the different Fetchers.
</para>
</section>
<section>
<title>Local file fetcher</title>
<para>The URN for the local file fetcher is <emphasis>file</emphasis>. The filename can be either absolute or relative. If the filename is relative, <varname>FILESPATH</varname> and <varname>FILESDIR</varname> will be used to find the appropriate relative file, depending on the <varname>OVERRIDES</varname>. Single files and complete directories can be specified.
<title>Local File Fetcher</title>
<para>The URN for the Local File Fetcher is <emphasis>file</emphasis>. The filename can be either absolute or relative. If the filename is relative <varname>FILESPATH</varname> and <varname>FILESDIR</varname> will be used to find the appropriate relative file depending on the <varname>OVERRIDES</varname>. Single files and complete directories can be specified.
<screen><varname>SRC_URI</varname>= "file://relativefile.patch"
<varname>SRC_URI</varname>= "file://relativefile.patch;this=ignored"
<varname>SRC_URI</varname>= "file:///Users/ich/very_important_software"
@@ -314,11 +315,10 @@ SRC_URI_append_1.0.7+ = "file://some_patch_which_the_new_versions_need.patch;pat
</section>
<section>
<title>CVS file fetcher</title>
<para>The URN for the CVS fetcher is <emphasis>cvs</emphasis>. This fetcher honors the variables <varname>DL_DIR</varname>, <varname>SRCDATE</varname>, <varname>FETCHCOMMAND_cvs</varname>, <varname>UPDATECOMMAND_cvs</varname>. <varname>DL_DIR</varname> specifies where a temporary checkout is saved. <varname>SRCDATE</varname> specifies which date to use when doing the fetching (the special value of "now" will cause the checkout to be updated on every build). <varname>FETCHCOMMAND</varname> and <varname>UPDATECOMMAND</varname> specify which executables to use for the CVS checkout or update.
<title>CVS File Fetcher</title>
<para>The URN for the CVS Fetcher is <emphasis>cvs</emphasis>. This Fetcher honors the variables <varname>DL_DIR</varname>, <varname>SRCDATE</varname>, <varname>FETCHCOMMAND_cvs</varname>, <varname>UPDATECOMMAND_cvs</varname>. <varname>DL_DIR</varname> specifies where a temporary checkout is saved, <varname>SRCDATE</varname> specifies which date to use when doing the fetching (the special value of "now" will cause the checkout to be updated on every build), <varname>FETCHCOMMAND</varname> and <varname>UPDATECOMMAND</varname> specify which executables should be used when doing the CVS checkout or update.
</para>
<para>The supported parameters are <varname>module</varname>, <varname>tag</varname>, <varname>date</varname>, <varname>method</varname>, <varname>localdir</varname>, <varname>rsh</varname> and <varname>scmdata</varname>. The <varname>module</varname> specifies which module to check out, the <varname>tag</varname> describes which CVS TAG should be used for the checkout. By default the TAG is empty. A <varname>date</varname> can be specified to override the SRCDATE of the configuration to checkout a specific date. The special value of "now" will cause the checkout to be updated on every build.<varname>method</varname> is by default <emphasis>pserver</emphasis>. If <emphasis>ext</emphasis> is used the <varname>rsh</varname> parameter will be evaluated and <varname>CVS_RSH</varname> will be set. Finally, <varname>localdir</varname> is used to checkout into a special directory relative to <varname>CVSDIR</varname>.
<para>The supported Parameters are <varname>module</varname>, <varname>tag</varname>, <varname>date</varname>, <varname>method</varname>, <varname>localdir</varname>, <varname>rsh</varname> and <varname>scmdata</varname>. The <varname>module</varname> specifies which module to check out, the <varname>tag</varname> describes which CVS TAG should be used for the checkout. By default the TAG is empty. A <varname>date</varname> can be specified to override the SRCDATE of the configuration to checkout a specific date. The special value of "now" will cause the checkout to be updated on every build.<varname>method</varname> is by default <emphasis>pserver</emphasis>, if <emphasis>ext</emphasis> is used the <varname>rsh</varname> parameter will be evaluated and <varname>CVS_RSH</varname> will be set. Finally <varname>localdir</varname> is used to checkout into a special directory relative to <varname>CVSDIR</varname>. If <varname>scmdata</varname> is set to <quote>keep</quote>
<screen><varname>SRC_URI</varname> = "cvs://CVSROOT;module=mymodule;tag=some-version;method=ext"
<varname>SRC_URI</varname> = "cvs://CVSROOT;module=mymodule;date=20060126;localdir=usethat"
</screen>
@@ -326,10 +326,11 @@ SRC_URI_append_1.0.7+ = "file://some_patch_which_the_new_versions_need.patch;pat
</section>
<section>
<title>HTTP/FTP fetcher</title>
<para>The URNs for the HTTP/FTP fetcher are <emphasis>http</emphasis>, <emphasis>https</emphasis> and <emphasis>ftp</emphasis>. This fetcher honors the variables <varname>DL_DIR</varname>, <varname>FETCHCOMMAND_wget</varname>, <varname>PREMIRRORS</varname>, <varname>MIRRORS</varname>. The <varname>DL_DIR</varname> defines where to store the fetched file. <varname>FETCHCOMMAND</varname> contains the command used for fetching. <quote>${URI}</quote> and <quote>${FILES}</quote> will be replaced by the URI and basename of the file to be fetched. <varname>PREMIRRORS</varname> will be tried first when fetching a file. If that fails, the actual file will be tried and finally all <varname>MIRRORS</varname> will be tried.
<title>HTTP/FTP Fetcher</title>
<para>The URNs for the HTTP/FTP are <emphasis>http</emphasis>, <emphasis>https</emphasis> and <emphasis>ftp</emphasis>. This Fetcher honors the variables <varname>DL_DIR</varname>, <varname>FETCHCOMMAND_wget</varname>, <varname>PREMIRRORS</varname>, <varname>MIRRORS</varname>. The <varname>DL_DIR</varname> defines where to store the fetched file, <varname>FETCHCOMMAND</varname> contains the command used for fetching. <quote>${URI}</quote> and <quote>${FILES}</quote> will be replaced by the uri and basename of the to be fetched file. <varname>PREMIRRORS</varname>
will be tried first when fetching a file if that fails the actual file will be tried and finally all <varname>MIRRORS</varname> will be tried.
</para>
<para>The only supported parameter is <varname>md5sum</varname>. After a fetch the <varname>md5sum</varname> of the file will be calculated and the two sums will be compared.
<para>The only supported Parameter is <varname>md5sum</varname>. After a fetch the <varname>md5sum</varname> of the file will be calculated and the two sums will be compared.
</para>
<para><screen><varname>SRC_URI</varname> = "http://oe.handhelds.org/not_there.aac;md5sum=12343"
<varname>SRC_URI</varname> = "ftp://oe.handhelds.org/not_there_as_well.aac;md5sum=1234"
@@ -338,19 +339,19 @@ SRC_URI_append_1.0.7+ = "file://some_patch_which_the_new_versions_need.patch;pat
</section>
<section>
<title>SVK fetcher</title>
<title>SVK Fetcher</title>
<para>
<emphasis>Currently NOT supported</emphasis>
</para>
</section>
<section>
<title>SVN fetcher</title>
<para>The URN for the SVN fetcher is <emphasis>svn</emphasis>.
<title>SVN Fetcher</title>
<para>The URN for the SVN Fetcher is <emphasis>svn</emphasis>.
</para>
<para>This fetcher honors the variables <varname>FETCHCOMMAND_svn</varname>, <varname>DL_DIR</varname>, <varname>SRCDATE</varname>. <varname>FETCHCOMMAND</varname> contains the subversion command. <varname>DL_DIR</varname> is the directory where tarballs will be saved. <varname>SRCDATE</varname> specifies which date to use when doing the fetching (the special value of "now" will cause the checkout to be updated on every build).
<para>This Fetcher honors the variables <varname>FETCHCOMMAND_svn</varname>, <varname>DL_DIR</varname>, <varname>SRCDATE</varname>. <varname>FETCHCOMMAND</varname> contains the subversion command, <varname>DL_DIR</varname> is the directory where tarballs will be saved, <varname>SRCDATE</varname> specifies which date to use when doing the fetching (the special value of "now" will cause the checkout to be updated on every build).
</para>
<para>The supported parameters are <varname>proto</varname>, <varname>rev</varname> and <varname>scmdata</varname>. <varname>proto</varname> is the Subversion protocol, <varname>rev</varname> is the Subversion revision. If <varname>scmdata</varname> is set to <quote>keep</quote>, the <quote>.svn</quote> directories will be available during compile-time.
<para>The supported Parameters are <varname>proto</varname>, <varname>rev</varname> and <varname>scmdata</varname>. <varname>proto</varname> is the subversion protocol, <varname>rev</varname> is the subversion revision. If <varname>scmdata</varname> is set to <quote>keep</quote>, the <quote>.svn</quote> directories will be available during compile-time.
</para>
<para><screen><varname>SRC_URI</varname> = "svn://svn.oe.handhelds.org/svn;module=vip;proto=http;rev=667"
<varname>SRC_URI</varname> = "svn://svn.oe.handhelds.org/svn/;module=opie;proto=svn+ssh;date=20060126"
@@ -358,12 +359,12 @@ SRC_URI_append_1.0.7+ = "file://some_patch_which_the_new_versions_need.patch;pat
</section>
<section>
<title>GIT fetcher</title>
<title>GIT Fetcher</title>
<para>The URN for the GIT Fetcher is <emphasis>git</emphasis>.
</para>
<para>The Variables <varname>DL_DIR</varname>, <varname>GITDIR</varname> are used. <varname>DL_DIR</varname> will be used to store the checkedout version. <varname>GITDIR</varname> will be used as the base directory where the git tree is cloned to.
</para>
<para>The parameters are <emphasis>tag</emphasis>, <emphasis>protocol</emphasis> and <emphasis>scmdata</emphasis>. <emphasis>tag</emphasis> is a Git tag, the default is <quote>master</quote>. <emphasis>protocol</emphasis> is the Git protocol to use and defaults to <quote>rsync</quote>. If <emphasis>scmdata</emphasis> is set to <quote>keep</quote>, the <quote>.git</quote> directory will be available during compile-time.
<para>The Parameters are <emphasis>tag</emphasis>, <emphasis>protocol</emphasis> and <emphasis>scmdata</emphasis>. <emphasis>tag</emphasis> is a git tag, the default is <quote>master</quote>. <emphasis>protocol</emphasis> is the git protocol to use and defaults to <quote>rsync</quote>. If <emphasis>scmdata</emphasis> is set to <quote>keep</quote>, the <quote>.git</quote> directory will be available during compile-time.
</para>
<para><screen><varname>SRC_URI</varname> = "git://git.oe.handhelds.org/git/vip.git;tag=version-1"
<varname>SRC_URI</varname> = "git://git.oe.handhelds.org/git/vip.git;protocol=http"
@@ -374,13 +375,13 @@ SRC_URI_append_1.0.7+ = "file://some_patch_which_the_new_versions_need.patch;pat
<chapter>
<title>The BitBake command</title>
<title>The bitbake command</title>
<section>
<title>Introduction</title>
<para>bitbake is the primary command in the system. It facilitates executing tasks in a single .bb file, or executing a given task on a set of multiple .bb files, accounting for interdependencies amongst them.</para>
</section>
<section>
<title>Usage and syntax</title>
<title>Usage and Syntax</title>
<para>
<screen><prompt>$ </prompt>bitbake --help
usage: bitbake [options] [package ...]
@@ -416,6 +417,8 @@ options:
than once.
-n, --dry-run don't execute, just go through the motions
-p, --parse-only quit after parsing the BB files (developers only)
-d, --disable-psyco disable using the psyco just-in-time compiler (not
recommended)
-s, --show-versions show current and preferred versions of all packages
-e, --environment show the global or per-package environment (this is
what used to be bbread)
@@ -435,7 +438,7 @@ options:
<para>
<example>
<title>Executing a task against a single .bb</title>
<para>Executing tasks for a single file is relatively simple. You specify the file in question, and BitBake parses it and executes the specified task (or <quote>build</quote> by default). It obeys intertask dependencies when doing so.</para>
<para>Executing tasks for a single file is relatively simple. You specify the file in question, and bitbake parses it and executes the specified task (or <quote>build</quote> by default). It obeys intertask dependencies when doing so.</para>
<para><quote>clean</quote> task:</para>
<para><screen><prompt>$ </prompt>bitbake -b blah_1.0.bb -c clean</screen></para>
<para><quote>build</quote> task:</para>
@@ -445,8 +448,8 @@ options:
<para>
<example>
<title>Executing tasks against a set of .bb files</title>
<para>There are a number of additional complexities introduced when one wants to manage multiple .bb files. Clearly there needs to be a way to tell BitBake what files are available, and of those, which we want to execute at this time. There also needs to be a way for each .bb to express its dependencies, both for build time and runtime. There must be a way for the user to express their preferences when multiple .bb's provide the same functionality, or when there are multiple versions of a .bb.</para>
<para>The next section, Metadata, outlines how to specify such things.</para>
<para>There are a number of additional complexities introduced when one wants to manage multiple .bb files. Clearly there needs to be a way to tell bitbake what files are available, and of those, which we want to execute at this time. There also needs to be a way for each .bb to express its dependencies, both for build time and runtime. There must be a way for the user to express their preferences when multiple .bb's provide the same functionality, or when there are multiple versions of a .bb.</para>
<para>The next section, Metadata, outlines how one goes about specifying such things.</para>
<para>Note that the bitbake command, when not using --buildfile, accepts a <varname>PROVIDER</varname>, not a filename or anything else. By default, a .bb generally PROVIDES its packagename, packagename-version, and packagename-version-revision.</para>
<screen><prompt>$ </prompt>bitbake blah</screen>
<screen><prompt>$ </prompt>bitbake blah-1.0</screen>
@@ -458,8 +461,8 @@ options:
<example>
<title>Generating dependency graphs</title>
<para>BitBake is able to generate dependency graphs using the dot syntax. These graphs can be converted
to images using the <application>dot</application> application from <ulink url="http://www.graphviz.org">Graphviz</ulink>.
Two files will be written into the current working directory, <emphasis>depends.dot</emphasis> containing dependency information at the package level and <emphasis>task-depends.dot</emphasis> containing a breakdown of the dependencies at the task level. To stop depending on common depends, one can use the <prompt>-I depend</prompt> to omit these from the graph. This can lead to more readable graphs. This way, <varname>DEPENDS</varname> from inherited classes such as base.bbclass can be removed from the graph.</para>
to images using the <application>dot</application> application from <ulink url="http://www.graphviz.org">graphviz</ulink>.
Two files will be written into the current working directory, <emphasis>depends.dot</emphasis> containing dependency information at the package level and <emphasis>task-depends.dot</emphasis> containing a breakdown of the dependencies at the task level. To stop depending on common depends one can use the <prompt>-I depend</prompt> to omit these from the graph. This can lead to more readable graphs. E.g. this way <varname>DEPENDS</varname> from inherited classes, e.g. base.bbclass, can be removed from the graph.</para>
<screen><prompt>$ </prompt>bitbake -g blah</screen>
<screen><prompt>$ </prompt>bitbake -g -I virtual/whatever -I bloom blah</screen>
</example>
@@ -467,20 +470,20 @@ Two files will be written into the current working directory, <emphasis>depends.
</section>
<section>
<title>Special variables</title>
<para>Certain variables affect BitBake operation:</para>
<para>Certain variables affect bitbake operation:</para>
<section>
<title><varname>BB_NUMBER_THREADS</varname></title>
<para> The number of threads BitBake should run at once (default: 1).</para>
<para> The number of threads bitbake should run at once (default: 1).</para>
</section>
</section>
<section>
<title>Metadata</title>
<para>As you may have seen in the usage information, or in the information about .bb files, the <varname>BBFILES</varname> variable is how the BitBake tool locates its files. This variable is a space separated list of files that are available, and supports wildcards.
<para>As you may have seen in the usage information, or in the information about .bb files, the BBFILES variable is how the bitbake tool locates its files. This variable is a space separated list of files that are available, and supports wildcards.
<example>
<title>Setting BBFILES</title>
<programlisting><varname>BBFILES</varname> = "/path/to/bbfiles/*.bb"</programlisting>
</example></para>
<para>With regard to dependencies, it expects the .bb to define a <varname>DEPENDS</varname> variable, which contains a space separated list of <quote>package names</quote>, which themselves are the <varname>PN</varname> variable. The <varname>PN</varname> variable is, in general, set to a component of the .bb filename by default.</para>
<para>With regard to dependencies, it expects the .bb to define a <varname>DEPENDS</varname> variable, which contains a space separated list of <quote>package names</quote>, which themselves are the <varname>PN</varname> variable. The <varname>PN</varname> variable is, in general, by default, set to a component of the .bb filename.</para>
<example>
<title>Depending on another .bb</title>
<para>a.bb:
@@ -493,7 +496,7 @@ DEPENDS += "package-b"</screen>
</example>
<example>
<title>Using PROVIDES</title>
<para>This example shows the usage of the <varname>PROVIDES</varname> variable, which allows a given .bb to specify what functionality it provides.</para>
<para>This example shows the usage of the PROVIDES variable, which allows a given .bb to specify what functionality it provides.</para>
<para>package1.bb:
<screen>PROVIDES += "virtual/package"</screen>
</para>
@@ -503,16 +506,16 @@ DEPENDS += "package-b"</screen>
<para>package3.bb:
<screen>PROVIDES += "virtual/package"</screen>
</para>
<para>As you can see, we have two different .bb's that provide the same functionality (virtual/package). Clearly, there needs to be a way for the person running BitBake to control which of those providers gets used. There is, indeed, such a way.</para>
<para>As you can see, here there are two different .bb's that provide the same functionality (virtual/package). Clearly, there needs to be a way for the person running bitbake to control which of those providers gets used. There is, indeed, such a way.</para>
<para>The following would go into a .conf file, to select package1:
<screen>PREFERRED_PROVIDER_virtual/package = "package1"</screen>
</para>
</example>
<example>
<title>Specifying version preference</title>
<para>When there are multiple <quote>versions</quote> of a given package, BitBake defaults to selecting the most recent version, unless otherwise specified. If the .bb in question has a <varname>DEFAULT_PREFERENCE</varname> set lower than the other .bb's (default is 0), then it will not be selected. This allows the person or persons maintaining the repository of .bb files to specify their preference for the default selected version. In addition, the user can specify their preferred version.</para>
<para>When there are multiple <quote>versions</quote> of a given package, bitbake defaults to selecting the most recent version, unless otherwise specified. If the .bb in question has a <varname>DEFAULT_PREFERENCE</varname> set lower than the other .bb's (default is 0), then it will not be selected. This allows the person or persons maintaining the repository of .bb files to specify their preferences for the default selected version. In addition, the user can specify their preferences with regard to version.</para>
<para>If the first .bb is named <filename>a_1.1.bb</filename>, then the <varname>PN</varname> variable will be set to <quote>a</quote>, and the <varname>PV</varname> variable will be set to 1.1.</para>
<para>If we then have an <filename>a_1.2.bb</filename>, BitBake will choose 1.2 by default. However, if we define the following variable in a .conf that BitBake parses, we can change that.
<para>If we then have an <filename>a_1.2.bb</filename>, bitbake will choose 1.2 by default. However, if we define the following variable in a .conf that bitbake parses, we can change that.
<screen>PREFERRED_VERSION_a = "1.1"</screen>
</para>
</example>
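To make the selection order described above concrete, here is a rough, hedged sketch in Python of how the preference rules read: an explicit PREFERRED_VERSION wins, otherwise the highest DEFAULT_PREFERENCE, otherwise the newest version. This is an illustration only (with naive string version comparison), not bitbake's actual provider-selection code; the function name and tuple layout are invented for the example.

def pick_version(candidates, preferred=None):
    """candidates: list of (pv, default_preference) tuples for one package."""
    # An explicit user preference (PREFERRED_VERSION_<pn>) wins outright.
    if preferred is not None and any(pv == preferred for pv, _ in candidates):
        return preferred
    # Otherwise prefer the highest DEFAULT_PREFERENCE, then the newest
    # version (naive string comparison, good enough for this illustration).
    return max(candidates, key=lambda c: (c[1], c[0]))[0]

print(pick_version([("1.1", 0), ("1.2", 0)]))                   # 1.2, newest wins
print(pick_version([("1.1", 0), ("1.2", -1)]))                  # 1.1, lower DEFAULT_PREFERENCE loses
print(pick_version([("1.1", 0), ("1.2", 0)], preferred="1.1"))  # 1.1, PREFERRED_VERSION wins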


@@ -21,7 +21,7 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
__version__ = "1.13.2"
__version__ = "1.11.0"
import sys
if sys.version_info < (2, 6, 0):
@@ -29,7 +29,7 @@ if sys.version_info < (2, 6, 0):
import os
import logging
import traceback
class NullHandler(logging.Handler):
def emit(self, record):
@@ -51,6 +51,9 @@ class BBLogger(Logger):
def verbose(self, msg, *args, **kwargs):
return self.log(logging.INFO - 1, msg, *args, **kwargs)
def exception(self, msg, *args, **kwargs):
return self.critical("%s\n%s" % (msg, traceback.format_exc()), *args, **kwargs)
logging.raiseExceptions = False
logging.setLoggerClass(BBLogger)
@@ -76,10 +79,6 @@ def plain(*args):
logger.plain(''.join(args))
def debug(lvl, *args):
if isinstance(lvl, basestring):
logger.warn("Passed invalid debug level '%s' to bb.debug", lvl)
args = (lvl,) + args
lvl = 1
logger.debug(lvl, ''.join(args))
def note(*args):
@@ -96,7 +95,7 @@ def fatal(*args):
sys.exit(1)
def deprecated(func, name=None, advice=""):
def deprecated(func, name = None, advice = ""):
"""This is a decorator which can be used to mark functions
as deprecated. It will result in a warning being emmitted
when the function is used."""
@@ -110,8 +109,8 @@ def deprecated(func, name=None, advice=""):
def newFunc(*args, **kwargs):
warnings.warn("Call to deprecated function %s%s." % (name,
advice),
category=DeprecationWarning,
stacklevel=2)
category = PendingDeprecationWarning,
stacklevel = 2)
return func(*args, **kwargs)
newFunc.__name__ = func.__name__
newFunc.__doc__ = func.__doc__
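For readers following the bb.deprecated changes above, here is a small, self-contained sketch of the same wrapper pattern and of how a caller would apply it. The names old_api/new_api are invented for the example, and the warning category follows one side of the diff (DeprecationWarning); this is an illustration, not the bb module itself.

import warnings
warnings.simplefilter("always")   # DeprecationWarning is hidden by default

def deprecated(func, name=None, advice=""):
    """Wrap func so each call emits a deprecation warning before delegating."""
    if advice:
        advice = ": %s" % advice
    if name is None:
        name = func.__name__

    def newFunc(*args, **kwargs):
        warnings.warn("Call to deprecated function %s%s." % (name, advice),
                      category=DeprecationWarning,
                      stacklevel=2)
        return func(*args, **kwargs)

    newFunc.__name__ = func.__name__
    newFunc.__doc__ = func.__doc__
    return newFunc

def new_api():
    return 42

old_api = deprecated(new_api, name="old_api", advice="use new_api() instead")
old_api()   # warns, then returns 42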


@@ -28,12 +28,11 @@
import os
import sys
import logging
import shlex
import bb
import bb.msg
import bb.process
from contextlib import nested
from bb import data, event, utils
from bb import data, event, mkdirhier, utils
bblogger = logging.getLogger('BitBake')
logger = logging.getLogger('BitBake.Build')
@@ -163,7 +162,6 @@ def exec_func(func, d, dirs = None):
lockfiles = None
tempdir = data.getVar('T', d, 1)
bb.utils.mkdirhier(tempdir)
runfile = os.path.join(tempdir, 'run.{0}.{1}'.format(func, os.getpid()))
with bb.utils.fileslocked(lockfiles):
@@ -183,16 +181,16 @@ def exec_func_python(func, d, runfile, cwd=None):
"""Execute a python BB 'function'"""
bbfile = d.getVar('FILE', True)
try:
olddir = os.getcwd()
except OSError:
olddir = None
code = _functionfmt.format(function=func, body=d.getVar(func, True))
bb.utils.mkdirhier(os.path.dirname(runfile))
with open(runfile, 'w') as script:
script.write(code)
if cwd:
try:
olddir = os.getcwd()
except OSError:
olddir = None
os.chdir(cwd)
try:
@@ -204,11 +202,8 @@ def exec_func_python(func, d, runfile, cwd=None):
raise FuncFailed(func, None)
finally:
if cwd and olddir:
try:
os.chdir(olddir)
except OSError:
pass
if olddir:
os.chdir(olddir)
def exec_func_shell(function, d, runfile, cwd=None):
"""Execute a shell function from the metadata
@@ -229,8 +224,12 @@ def exec_func_shell(function, d, runfile, cwd=None):
if cwd:
script.write("cd %s\n" % cwd)
script.write("%s\n" % function)
os.fchmod(script.fileno(), 0775)
os.chmod(runfile, 0775)
env = {
'PATH': d.getVar('PATH', True),
'LC_ALL': 'C',
}
cmd = runfile
@@ -240,7 +239,7 @@ def exec_func_shell(function, d, runfile, cwd=None):
logfile = sys.stdout
try:
bb.process.run(cmd, shell=False, stdin=NULL, log=logfile)
bb.process.run(cmd, env=env, shell=False, stdin=NULL, log=logfile)
except bb.process.CmdError:
logfn = d.getVar('BB_LOGFILE', True)
raise FuncFailed(function, logfn)
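The exec_func_shell hunks above follow a common pattern: write the function body to a run file, make it executable, and execute it with a minimal, controlled environment. Below is a self-contained sketch of that pattern using only the standard library (subprocess stands in for the internal bb.process helper; the script body and paths are invented).

import os
import subprocess
import tempfile

script = "#!/bin/sh\nset -e\necho running do_example\n"

fd, runfile = tempfile.mkstemp(prefix="run.do_example.")
with os.fdopen(fd, "w") as f:
    f.write(script)
os.chmod(runfile, 0o775)          # mirrors the 0775 chmod in the diff

env = {"PATH": os.environ.get("PATH", ""), "LC_ALL": "C"}
try:
    subprocess.check_call([runfile], env=env)
finally:
    os.unlink(runfile)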
@@ -383,10 +382,10 @@ def stamp_internal(taskname, d, file_name):
taskflagname = taskname.replace("_setscene", "")
if file_name:
stamp = d.stamp_base[file_name].get(taskflagname) or d.stamp[file_name]
stamp = d.stamp[file_name]
extrainfo = d.stamp_extrainfo[file_name].get(taskflagname) or ""
else:
stamp = d.getVarFlag(taskflagname, 'stamp-base', True) or d.getVar('STAMP', True)
stamp = d.getVar('STAMP', True)
file_name = d.getVar('BB_FILENAME', True)
extrainfo = d.getVarFlag(taskflagname, 'stamp-extra-info', True) or ""
@@ -412,12 +411,6 @@ def make_stamp(task, d, file_name = None):
f = open(stamp, "w")
f.close()
# If we're in task context, write out a signature file for each task
# as it completes
if not task.endswith("_setscene") and task != "do_setscene" and not file_name:
file_name = d.getVar('BB_FILENAME', True)
bb.parse.siggen.dump_sigtask(file_name, task, d.getVar('STAMP', True), True)
def del_stamp(task, d, file_name = None):
"""
Removes a stamp for a given task
@@ -463,7 +456,6 @@ def add_tasks(tasklist, d):
getTask('nostamp')
getTask('fakeroot')
getTask('noexec')
getTask('umask')
task_deps['parents'][task] = []
for dep in flags['deps']:
dep = data.expand(dep, d)


@@ -30,7 +30,7 @@
import os
import logging
from collections import defaultdict
from collections import defaultdict, namedtuple
import bb.data
import bb.utils
@@ -43,15 +43,48 @@ except ImportError:
logger.info("Importing cPickle failed. "
"Falling back to a very slow implementation.")
__cache_version__ = "141"
__cache_version__ = "138"
def getCacheFile(path, filename):
return os.path.join(path, filename)
recipe_fields = (
'pn',
'pv',
'pr',
'pe',
'defaultpref',
'depends',
'provides',
'task_deps',
'stamp',
'stamp_extrainfo',
'broken',
'not_world',
'skipped',
'timestamp',
'packages',
'packages_dynamic',
'rdepends',
'rdepends_pkg',
'rprovides',
'rprovides_pkg',
'rrecommends',
'rrecommends_pkg',
'nocache',
'variants',
'file_depends',
'tasks',
'basetaskhashes',
'hashfilename',
'inherits',
'summary',
'license',
'section',
'fakerootenv',
'fakerootdirs'
)
# RecipeInfoCommon defines common data retrieving methods
# from meta data for caches. CoreRecipeInfo as well as other
# Extra RecipeInfo needs to inherit this class
class RecipeInfoCommon(object):
class RecipeInfo(namedtuple('RecipeInfo', recipe_fields)):
__slots__ = ()
@classmethod
def listvar(cls, var, metadata):
@@ -84,166 +117,66 @@ class RecipeInfoCommon(object):
def getvar(cls, var, metadata):
return metadata.getVar(var, True) or ''
class CoreRecipeInfo(RecipeInfoCommon):
__slots__ = ()
cachefile = "bb_cache.dat"
def __init__(self, filename, metadata):
self.file_depends = metadata.getVar('__depends', False)
self.timestamp = bb.parse.cached_mtime(filename)
self.variants = self.listvar('__VARIANTS', metadata) + ['']
self.appends = self.listvar('__BBAPPEND', metadata)
self.nocache = self.getvar('__BB_DONT_CACHE', metadata)
self.skipreason = self.getvar('__SKIPPED', metadata)
if self.skipreason:
self.skipped = True
self.provides = self.depvar('PROVIDES', metadata)
self.rprovides = self.depvar('RPROVIDES', metadata)
return
self.tasks = metadata.getVar('__BBTASKS', False)
self.pn = self.getvar('PN', metadata)
self.packages = self.listvar('PACKAGES', metadata)
if not self.pn in self.packages:
self.packages.append(self.pn)
self.basetaskhashes = self.taskvar('BB_BASEHASH', self.tasks, metadata)
self.hashfilename = self.getvar('BB_HASHFILENAME', metadata)
self.file_depends = metadata.getVar('__depends', False)
self.task_deps = metadata.getVar('_task_deps', False) or {'tasks': [], 'parents': {}}
self.skipped = False
self.pe = self.getvar('PE', metadata)
self.pv = self.getvar('PV', metadata)
self.pr = self.getvar('PR', metadata)
self.defaultpref = self.intvar('DEFAULT_PREFERENCE', metadata)
self.broken = self.getvar('BROKEN', metadata)
self.not_world = self.getvar('EXCLUDE_FROM_WORLD', metadata)
self.stamp = self.getvar('STAMP', metadata)
self.stamp_base = self.flaglist('stamp-base', self.tasks, metadata)
self.stamp_extrainfo = self.flaglist('stamp-extra-info', self.tasks, metadata)
self.packages_dynamic = self.listvar('PACKAGES_DYNAMIC', metadata)
self.depends = self.depvar('DEPENDS', metadata)
self.provides = self.depvar('PROVIDES', metadata)
self.rdepends = self.depvar('RDEPENDS', metadata)
self.rprovides = self.depvar('RPROVIDES', metadata)
self.rrecommends = self.depvar('RRECOMMENDS', metadata)
self.rprovides_pkg = self.pkgvar('RPROVIDES', self.packages, metadata)
self.rdepends_pkg = self.pkgvar('RDEPENDS', self.packages, metadata)
self.rrecommends_pkg = self.pkgvar('RRECOMMENDS', self.packages, metadata)
self.inherits = self.getvar('__inherit_cache', metadata)
self.summary = self.getvar('SUMMARY', metadata)
self.license = self.getvar('LICENSE', metadata)
self.section = self.getvar('SECTION', metadata)
self.fakerootenv = self.getvar('FAKEROOTENV', metadata)
self.fakerootdirs = self.getvar('FAKEROOTDIRS', metadata)
@classmethod
def make_optional(cls, default=None, **kwargs):
"""Construct the namedtuple from the specified keyword arguments,
with every value considered optional, using the default value if
it was not specified."""
for field in cls._fields:
kwargs[field] = kwargs.get(field, default)
return cls(**kwargs)
@classmethod
def init_cacheData(cls, cachedata):
# CacheData in Core RecipeInfo Class
cachedata.task_deps = {}
cachedata.pkg_fn = {}
cachedata.pkg_pn = defaultdict(list)
cachedata.pkg_pepvpr = {}
cachedata.pkg_dp = {}
def from_metadata(cls, filename, metadata):
if cls.getvar('__SKIPPED', metadata):
return cls.make_optional(skipped=True)
cachedata.stamp = {}
cachedata.stamp_base = {}
cachedata.stamp_extrainfo = {}
cachedata.fn_provides = {}
cachedata.pn_provides = defaultdict(list)
cachedata.all_depends = []
tasks = metadata.getVar('__BBTASKS', False)
cachedata.deps = defaultdict(list)
cachedata.packages = defaultdict(list)
cachedata.providers = defaultdict(list)
cachedata.rproviders = defaultdict(list)
cachedata.packages_dynamic = defaultdict(list)
pn = cls.getvar('PN', metadata)
packages = cls.listvar('PACKAGES', metadata)
if not pn in packages:
packages.append(pn)
cachedata.rundeps = defaultdict(lambda: defaultdict(list))
cachedata.runrecs = defaultdict(lambda: defaultdict(list))
cachedata.possible_world = []
cachedata.universe_target = []
cachedata.hashfn = {}
return RecipeInfo(
tasks = tasks,
basetaskhashes = cls.taskvar('BB_BASEHASH', tasks, metadata),
hashfilename = cls.getvar('BB_HASHFILENAME', metadata),
cachedata.basetaskhash = {}
cachedata.inherits = {}
cachedata.summary = {}
cachedata.license = {}
cachedata.section = {}
cachedata.fakerootenv = {}
cachedata.fakerootdirs = {}
def add_cacheData(self, cachedata, fn):
cachedata.task_deps[fn] = self.task_deps
cachedata.pkg_fn[fn] = self.pn
cachedata.pkg_pn[self.pn].append(fn)
cachedata.pkg_pepvpr[fn] = (self.pe, self.pv, self.pr)
cachedata.pkg_dp[fn] = self.defaultpref
cachedata.stamp[fn] = self.stamp
cachedata.stamp_base[fn] = self.stamp_base
cachedata.stamp_extrainfo[fn] = self.stamp_extrainfo
provides = [self.pn]
for provide in self.provides:
if provide not in provides:
provides.append(provide)
cachedata.fn_provides[fn] = provides
for provide in provides:
cachedata.providers[provide].append(fn)
if provide not in cachedata.pn_provides[self.pn]:
cachedata.pn_provides[self.pn].append(provide)
for dep in self.depends:
if dep not in cachedata.deps[fn]:
cachedata.deps[fn].append(dep)
if dep not in cachedata.all_depends:
cachedata.all_depends.append(dep)
rprovides = self.rprovides
for package in self.packages:
cachedata.packages[package].append(fn)
rprovides += self.rprovides_pkg[package]
for rprovide in rprovides:
cachedata.rproviders[rprovide].append(fn)
for package in self.packages_dynamic:
cachedata.packages_dynamic[package].append(fn)
# Build hash of runtime depends and rececommends
for package in self.packages + [self.pn]:
cachedata.rundeps[fn][package] = list(self.rdepends) + self.rdepends_pkg[package]
cachedata.runrecs[fn][package] = list(self.rrecommends) + self.rrecommends_pkg[package]
# Collect files we may need for possible world-dep
# calculations
if not self.broken and not self.not_world:
cachedata.possible_world.append(fn)
# create a collection of all targets for sanity checking
# tasks, such as upstream versions, license, and tools for
# task and image creation.
cachedata.universe_target.append(self.pn)
cachedata.hashfn[fn] = self.hashfilename
for task, taskhash in self.basetaskhashes.iteritems():
identifier = '%s.%s' % (fn, task)
cachedata.basetaskhash[identifier] = taskhash
cachedata.inherits[fn] = self.inherits
cachedata.summary[fn] = self.summary
cachedata.license[fn] = self.license
cachedata.section[fn] = self.section
cachedata.fakerootenv[fn] = self.fakerootenv
cachedata.fakerootdirs[fn] = self.fakerootdirs
file_depends = metadata.getVar('__depends', False),
task_deps = metadata.getVar('_task_deps', False) or
{'tasks': [], 'parents': {}},
variants = cls.listvar('__VARIANTS', metadata) + [''],
skipped = False,
timestamp = bb.parse.cached_mtime(filename),
packages = cls.listvar('PACKAGES', metadata),
pn = pn,
pe = cls.getvar('PE', metadata),
pv = cls.getvar('PV', metadata),
pr = cls.getvar('PR', metadata),
nocache = cls.getvar('__BB_DONT_CACHE', metadata),
defaultpref = cls.intvar('DEFAULT_PREFERENCE', metadata),
broken = cls.getvar('BROKEN', metadata),
not_world = cls.getvar('EXCLUDE_FROM_WORLD', metadata),
stamp = cls.getvar('STAMP', metadata),
stamp_extrainfo = cls.flaglist('stamp-extra-info', tasks, metadata),
packages_dynamic = cls.listvar('PACKAGES_DYNAMIC', metadata),
depends = cls.depvar('DEPENDS', metadata),
provides = cls.depvar('PROVIDES', metadata),
rdepends = cls.depvar('RDEPENDS', metadata),
rprovides = cls.depvar('RPROVIDES', metadata),
rrecommends = cls.depvar('RRECOMMENDS', metadata),
rprovides_pkg = cls.pkgvar('RPROVIDES', packages, metadata),
rdepends_pkg = cls.pkgvar('RDEPENDS', packages, metadata),
rrecommends_pkg = cls.pkgvar('RRECOMMENDS', packages, metadata),
inherits = cls.getvar('__inherit_cache', metadata),
summary = cls.getvar('SUMMARY', metadata),
license = cls.getvar('LICENSE', metadata),
section = cls.getvar('SECTION', metadata),
fakerootenv = cls.getvar('FAKEROOTENV', metadata),
fakerootdirs = cls.getvar('FAKEROOTDIRS', metadata),
)
class Cache(object):
@@ -251,11 +184,7 @@ class Cache(object):
BitBake Cache implementation
"""
def __init__(self, data, caches_array):
# Pass caches_array information into Cache Constructor
# It will be used in later for deciding whether we
# need extra cache file dump/load support
self.caches_array = caches_array
def __init__(self, data):
self.cachedir = bb.data.getVar("CACHE", data, True)
self.clean = set()
self.checked = set()
@@ -271,7 +200,7 @@ class Cache(object):
return
self.has_cache = True
self.cachefile = getCacheFile(self.cachedir, "bb_cache.dat")
self.cachefile = os.path.join(self.cachedir, "bb_cache.dat")
logger.debug(1, "Using cache in '%s'", self.cachedir)
bb.utils.mkdirhier(self.cachedir)
@@ -285,21 +214,12 @@ class Cache(object):
old_mtimes.append(newest_mtime)
newest_mtime = max(old_mtimes)
bNeedUpdate = True
if self.caches_array:
for cache_class in self.caches_array:
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
cachefile = getCacheFile(self.cachedir, cache_class.cachefile)
bNeedUpdate = bNeedUpdate and (bb.parse.cached_mtime_noerror(cachefile) >= newest_mtime)
cache_class.init_cacheData(self)
if bNeedUpdate:
if bb.parse.cached_mtime_noerror(self.cachefile) >= newest_mtime:
self.load_cachefile()
elif os.path.isfile(self.cachefile):
logger.info("Out of date cache found, rebuilding...")
def load_cachefile(self):
# Firstly, using core cache file information for
# valid checking
with open(self.cachefile, "rb") as cachefile:
pickled = pickle.Unpickler(cachefile)
try:
@@ -316,52 +236,31 @@ class Cache(object):
logger.info('Bitbake version mismatch, rebuilding...')
return
cachesize = os.fstat(cachefile.fileno()).st_size
bb.event.fire(bb.event.CacheLoadStarted(cachesize), self.data)
cachesize = 0
previous_progress = 0
previous_percent = 0
previous_percent = 0
while cachefile:
try:
key = pickled.load()
value = pickled.load()
except Exception:
break
# Calculate the correct cachesize of all those cache files
for cache_class in self.caches_array:
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
cachefile = getCacheFile(self.cachedir, cache_class.cachefile)
with open(cachefile, "rb") as cachefile:
cachesize += os.fstat(cachefile.fileno()).st_size
self.depends_cache[key] = value
bb.event.fire(bb.event.CacheLoadStarted(cachesize), self.data)
for cache_class in self.caches_array:
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
cachefile = getCacheFile(self.cachedir, cache_class.cachefile)
with open(cachefile, "rb") as cachefile:
pickled = pickle.Unpickler(cachefile)
while cachefile:
try:
key = pickled.load()
value = pickled.load()
except Exception:
break
if self.depends_cache.has_key(key):
self.depends_cache[key].append(value)
else:
self.depends_cache[key] = [value]
# only fire events on even percentage boundaries
current_progress = cachefile.tell() + previous_progress
current_percent = 100 * current_progress / cachesize
if current_percent > previous_percent:
previous_percent = current_percent
bb.event.fire(bb.event.CacheLoadProgress(current_progress),
self.data)
# only fire events on even percentage boundaries
current_progress = cachefile.tell()
current_percent = 100 * current_progress / cachesize
if current_percent > previous_percent:
previous_percent = current_percent
bb.event.fire(bb.event.CacheLoadProgress(current_progress),
self.data)
previous_progress += current_progress
bb.event.fire(bb.event.CacheLoadCompleted(cachesize,
len(self.depends_cache)),
self.data)
# Note: depends cache number is corresponding to the parsing file numbers.
# The same file has several caches, still regarded as one item in the cache
bb.event.fire(bb.event.CacheLoadCompleted(cachesize,
len(self.depends_cache)),
self.data)
@staticmethod
def virtualfn2realfn(virtualfn):
"""
@@ -395,12 +294,11 @@ class Cache(object):
logger.debug(1, "Parsing %s (full)", fn)
cfgData.setVar("__ONLYFINALISE", virtual or "default")
bb_data = cls.load_bbfile(fn, appends, cfgData)
return bb_data[virtual]
@classmethod
def parse(cls, filename, appends, configdata, caches_array):
def parse(cls, filename, appends, configdata):
"""Parse the specified filename, returning the recipe information"""
infos = []
datastores = cls.load_bbfile(filename, appends, configdata)
@@ -412,14 +310,8 @@ class Cache(object):
depends |= (data.getVar("__depends", False) or set())
if depends and not variant:
data.setVar("__depends", depends)
info_array = []
for cache_class in caches_array:
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
info = cache_class(filename, data)
info_array.append(info)
infos.append((virtualfn, info_array))
info = RecipeInfo.from_metadata(filename, data)
infos.append((virtualfn, info))
return infos
def load(self, filename, appends, configdata):
@@ -430,17 +322,16 @@ class Cache(object):
automatically add the information to the cache or to your
CacheData. Use the add or add_info method to do so after
running this, or use loadData instead."""
cached = self.cacheValid(filename, appends)
cached = self.cacheValid(filename)
if cached:
infos = []
# info_array item is a list of [CoreRecipeInfo, XXXRecipeInfo]
info_array = self.depends_cache[filename]
for variant in info_array[0].variants:
info = self.depends_cache[filename]
for variant in info.variants:
virtualfn = self.realfn2virtual(filename, variant)
infos.append((virtualfn, self.depends_cache[virtualfn]))
else:
logger.debug(1, "Parsing %s", filename)
return self.parse(filename, appends, configdata, self.caches_array)
return self.parse(filename, appends, configdata)
return cached, infos
@@ -451,23 +342,23 @@ class Cache(object):
skipped, virtuals = 0, 0
cached, infos = self.load(fn, appends, cfgData)
for virtualfn, info_array in infos:
if info_array[0].skipped:
logger.debug(1, "Skipping %s: %s", virtualfn, info_array[0].skipreason)
for virtualfn, info in infos:
if info.skipped:
logger.debug(1, "Skipping %s", virtualfn)
skipped += 1
else:
self.add_info(virtualfn, info_array, cacheData, not cached)
self.add_info(virtualfn, info, cacheData, not cached)
virtuals += 1
return cached, skipped, virtuals
def cacheValid(self, fn, appends):
def cacheValid(self, fn):
"""
Is the cache valid for fn?
Fast version, no timestamps checked.
"""
if fn not in self.checked:
self.cacheValidUpdate(fn, appends)
self.cacheValidUpdate(fn)
# Is cache enabled?
if not self.has_cache:
@@ -476,7 +367,7 @@ class Cache(object):
return True
return False
def cacheValidUpdate(self, fn, appends):
def cacheValidUpdate(self, fn):
"""
Is the cache valid for fn?
Make thorough (slower) checks including timestamps.
@@ -500,15 +391,15 @@ class Cache(object):
self.remove(fn)
return False
info_array = self.depends_cache[fn]
info = self.depends_cache[fn]
# Check the file's timestamp
if mtime != info_array[0].timestamp:
if mtime != info.timestamp:
logger.debug(2, "Cache: %s changed", fn)
self.remove(fn)
return False
# Check dependencies are still valid
depends = info_array[0].file_depends
depends = info.file_depends
if depends:
for f, old_mtime in depends:
fmtime = bb.parse.cached_mtime_noerror(f)
@@ -525,14 +416,8 @@ class Cache(object):
self.remove(fn)
return False
if appends != info_array[0].appends:
logger.debug(2, "Cache: appends for %s changed", fn)
bb.note("%s to %s" % (str(appends), str(info_array[0].appends)))
self.remove(fn)
return False
invalid = False
for cls in info_array[0].variants:
for cls in info.variants:
virtualfn = self.realfn2virtual(fn, cls)
self.clean.add(virtualfn)
if virtualfn not in self.depends_cache:
@@ -579,30 +464,13 @@ class Cache(object):
logger.debug(2, "Cache is clean, not saving.")
return
file_dict = {}
pickler_dict = {}
for cache_class in self.caches_array:
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
cache_class_name = cache_class.__name__
cachefile = getCacheFile(self.cachedir, cache_class.cachefile)
file_dict[cache_class_name] = open(cachefile, "wb")
pickler_dict[cache_class_name] = pickle.Pickler(file_dict[cache_class_name], pickle.HIGHEST_PROTOCOL)
pickler_dict['CoreRecipeInfo'].dump(__cache_version__)
pickler_dict['CoreRecipeInfo'].dump(bb.__version__)
try:
for key, info_array in self.depends_cache.iteritems():
for info in info_array:
if isinstance(info, RecipeInfoCommon):
cache_class_name = info.__class__.__name__
pickler_dict[cache_class_name].dump(key)
pickler_dict[cache_class_name].dump(info)
finally:
for cache_class in self.caches_array:
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
cache_class_name = cache_class.__name__
file_dict[cache_class_name].close()
with open(self.cachefile, "wb") as cachefile:
pickler = pickle.Pickler(cachefile, pickle.HIGHEST_PROTOCOL)
pickler.dump(__cache_version__)
pickler.dump(bb.__version__)
for key, value in self.depends_cache.iteritems():
pickler.dump(key)
pickler.dump(value)
del self.depends_cache
@@ -610,17 +478,15 @@ class Cache(object):
def mtime(cachefile):
return bb.parse.cached_mtime_noerror(cachefile)
def add_info(self, filename, info_array, cacheData, parsed=None):
if isinstance(info_array[0], CoreRecipeInfo) and (not info_array[0].skipped):
cacheData.add_from_recipeinfo(filename, info_array)
def add_info(self, filename, info, cacheData, parsed=None):
cacheData.add_from_recipeinfo(filename, info)
if not self.has_cache:
return
if (info_array[0].skipped or 'SRCREVINACTION' not in info_array[0].pv) and not info_array[0].nocache:
if 'SRCREVINACTION' not in info.pv and not info.nocache:
if parsed:
self.cacheclean = False
self.depends_cache[filename] = info_array
self.depends_cache[filename] = info
def add(self, file_name, data, cacheData, parsed=None):
"""
@@ -628,12 +494,8 @@ class Cache(object):
"""
realfn = self.virtualfn2realfn(file_name)[0]
info_array = []
for cache_class in self.caches_array:
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
info_array.append(cache_class(realfn, data))
self.add_info(file_name, info_array, cacheData, parsed)
info = RecipeInfo.from_metadata(realfn, data)
self.add_info(file_name, info, cacheData, parsed)
@staticmethod
def load_bbfile(bbfile, appends, config):
@@ -697,23 +559,99 @@ class CacheData(object):
The data structures we compile from the cached data
"""
def __init__(self, caches_array):
self.caches_array = caches_array
for cache_class in self.caches_array:
if type(cache_class) is type and issubclass(cache_class, RecipeInfoCommon):
cache_class.init_cacheData(self)
def __init__(self):
# Direct cache variables
self.providers = defaultdict(list)
self.rproviders = defaultdict(list)
self.packages = defaultdict(list)
self.packages_dynamic = defaultdict(list)
self.possible_world = []
self.pkg_pn = defaultdict(list)
self.pkg_fn = {}
self.pkg_pepvpr = {}
self.pkg_dp = {}
self.pn_provides = defaultdict(list)
self.fn_provides = {}
self.all_depends = []
self.deps = defaultdict(list)
self.rundeps = defaultdict(lambda: defaultdict(list))
self.runrecs = defaultdict(lambda: defaultdict(list))
self.task_queues = {}
self.task_deps = {}
self.stamp = {}
self.stamp_extrainfo = {}
self.preferred = {}
self.tasks = {}
self.basetaskhash = {}
self.hashfn = {}
self.inherits = {}
self.summary = {}
self.license = {}
self.section = {}
self.fakerootenv = {}
self.fakerootdirs = {}
# Indirect Cache variables (set elsewhere)
self.ignored_dependencies = []
self.world_target = set()
self.bbfile_priority = {}
self.bbfile_config_priorities = []
def add_from_recipeinfo(self, fn, info_array):
for info in info_array:
info.add_cacheData(self, fn)
def add_from_recipeinfo(self, fn, info):
self.task_deps[fn] = info.task_deps
self.pkg_fn[fn] = info.pn
self.pkg_pn[info.pn].append(fn)
self.pkg_pepvpr[fn] = (info.pe, info.pv, info.pr)
self.pkg_dp[fn] = info.defaultpref
self.stamp[fn] = info.stamp
self.stamp_extrainfo[fn] = info.stamp_extrainfo
provides = [info.pn]
for provide in info.provides:
if provide not in provides:
provides.append(provide)
self.fn_provides[fn] = provides
for provide in provides:
self.providers[provide].append(fn)
if provide not in self.pn_provides[info.pn]:
self.pn_provides[info.pn].append(provide)
for dep in info.depends:
if dep not in self.deps[fn]:
self.deps[fn].append(dep)
if dep not in self.all_depends:
self.all_depends.append(dep)
rprovides = info.rprovides
for package in info.packages:
self.packages[package].append(fn)
rprovides += info.rprovides_pkg[package]
for rprovide in rprovides:
self.rproviders[rprovide].append(fn)
for package in info.packages_dynamic:
self.packages_dynamic[package].append(fn)
# Build hash of runtime depends and rececommends
for package in info.packages + [info.pn]:
self.rundeps[fn][package] = list(info.rdepends) + info.rdepends_pkg[package]
self.runrecs[fn][package] = list(info.rrecommends) + info.rrecommends_pkg[package]
# Collect files we may need for possible world-dep
# calculations
if not info.broken and not info.not_world:
self.possible_world.append(fn)
self.hashfn[fn] = info.hashfilename
for task, taskhash in info.basetaskhashes.iteritems():
identifier = '%s.%s' % (fn, task)
self.basetaskhash[identifier] = taskhash
self.inherits[fn] = info.inherits
self.summary[fn] = info.summary
self.license[fn] = info.license
self.section[fn] = info.section
self.fakerootenv[fn] = info.fakerootenv
self.fakerootdirs[fn] = info.fakerootdirs


@@ -1,54 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# Extra RecipeInfo will be all defined in this file. Currently,
# Only Hob (Image Creator) Requests some extra fields. So
# HobRecipeInfo is defined. It's named HobRecipeInfo because it
# is introduced by 'hob'. Users could also introduce other
# RecipeInfo or simply use those already defined RecipeInfo.
# In the following patch, this newly defined new extra RecipeInfo
# will be dynamically loaded and used for loading/saving the extra
# cache fields
# Copyright (C) 2011, Intel Corporation. All rights reserved.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
from bb.cache import RecipeInfoCommon
class HobRecipeInfo(RecipeInfoCommon):
__slots__ = ()
classname = "HobRecipeInfo"
# please override this member with the correct data cache file
# such as (bb_cache.dat, bb_extracache_hob.dat)
cachefile = "bb_extracache_" + classname +".dat"
def __init__(self, filename, metadata):
self.summary = self.getvar('SUMMARY', metadata)
self.license = self.getvar('LICENSE', metadata)
self.section = self.getvar('SECTION', metadata)
@classmethod
def init_cacheData(cls, cachedata):
# CacheData in Hob RecipeInfo Class
cachedata.summary = {}
cachedata.license = {}
cachedata.section = {}
def add_cacheData(self, cachedata, fn):
cachedata.summary[fn] = self.summary
cachedata.license[fn] = self.license
cachedata.section[fn] = self.section
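The header comment of this file describes extra RecipeInfo classes that a UI can request; the cooker changes later in this diff resolve them from "module:ClassName" strings with __import__. A minimal sketch of that resolution step follows (the 'bb.cache_extra:HobRecipeInfo' spelling is an assumption based on the class above; 'collections:OrderedDict' is used only so the snippet runs anywhere).

def resolve(spec):
    """Turn a 'module:ClassName' string into the named class object."""
    module_name, cls_name = spec.split(':')
    module = __import__(module_name, fromlist=(cls_name,))
    return getattr(module, cls_name)

print(resolve('collections:OrderedDict'))
# A UI wanting the extra Hob fields would advertise something like:
#     extraCaches = ['bb.cache_extra:HobRecipeInfo']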


@@ -21,13 +21,13 @@ def check_indent(codestr):
"""If the code is indented, add a top level piece of code to 'remove' the indentation"""
i = 0
while codestr[i] in ["\n", "\t", " "]:
while codestr[i] in ["\n", " ", " "]:
i = i + 1
if i == 0:
return codestr
if codestr[i-1] == "\t" or codestr[i-1] == " ":
if codestr[i-1] is " " or codestr[i-1] is " ":
return "if 1:\n" + codestr
return codestr
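Both versions of check_indent above exist to cope with python fragments that arrive uniformly indented from the metadata. A rough illustration of the problem and of the "if 1:" wrapper (a sketch, not the bb.codeparser API; the snippet text is invented):

indented = "    x = 1\n    y = x + 1\n"

try:
    compile(indented, "<snippet>", "exec")
except IndentationError:
    pass          # "unexpected indent": not valid at top level as-is

wrapped = "if 1:\n" + indented
exec(compile(wrapped, "<snippet>", "exec"))   # now compiles and runs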
@@ -70,85 +70,8 @@ def parser_cache_save(d):
if not cachefile:
return
glf = bb.utils.lockfile(cachefile + ".lock", shared=True)
i = os.getpid()
lf = None
while not lf:
shellcache = {}
pythoncache = {}
lf = bb.utils.lockfile(cachefile + ".lock." + str(i), retry=False)
if not lf or os.path.exists(cachefile + "-" + str(i)):
if lf:
bb.utils.unlockfile(lf)
lf = None
i = i + 1
continue
try:
p = pickle.Unpickler(file(cachefile, "rb"))
data, version = p.load()
except (IOError, EOFError, ValueError):
data, version = None, None
if version != PARSERCACHE_VERSION:
shellcache = shellparsecache
pythoncache = pythonparsecache
else:
for h in pythonparsecache:
if h not in data[0]:
pythoncache[h] = pythonparsecache[h]
for h in shellparsecache:
if h not in data[1]:
shellcache[h] = shellparsecache[h]
p = pickle.Pickler(file(cachefile + "-" + str(i), "wb"), -1)
p.dump([[pythoncache, shellcache], PARSERCACHE_VERSION])
bb.utils.unlockfile(lf)
bb.utils.unlockfile(glf)
def parser_cache_savemerge(d):
cachefile = parser_cachefile(d)
if not cachefile:
return
glf = bb.utils.lockfile(cachefile + ".lock")
try:
p = pickle.Unpickler(file(cachefile, "rb"))
data, version = p.load()
except (IOError, EOFError):
data, version = None, None
if version != PARSERCACHE_VERSION:
data = [{}, {}]
for f in [y for y in os.listdir(os.path.dirname(cachefile)) if y.startswith(os.path.basename(cachefile) + '-')]:
f = os.path.join(os.path.dirname(cachefile), f)
try:
p = pickle.Unpickler(file(f, "rb"))
extradata, version = p.load()
except (IOError, EOFError):
extradata, version = [{}, {}], None
if version != PARSERCACHE_VERSION:
continue
for h in extradata[0]:
if h not in data[0]:
data[0][h] = extradata[0][h]
for h in extradata[1]:
if h not in data[1]:
data[1][h] = extradata[1][h]
os.unlink(f)
p = pickle.Pickler(file(cachefile, "wb"), -1)
p.dump([data, PARSERCACHE_VERSION])
bb.utils.unlockfile(glf)
p.dump([[pythonparsecache, shellparsecache], PARSERCACHE_VERSION])
class PythonParser():
class ValueVisitor():


@@ -82,7 +82,7 @@ class Command:
if command not in CommandsAsync.__dict__:
return "No such command"
self.currentAsyncCommand = (command, commandline)
self.cooker.server_registration_cb(self.cooker.runCommands, self.cooker)
self.cooker.server.register_idle_function(self.cooker.runCommands, self.cooker)
return True
except:
import traceback
@@ -224,19 +224,11 @@ class CommandsAsync:
def generateTargetsTree(self, command, params):
"""
Generate a tree of buildable targets.
If klass is provided ensure all recipes that inherit the class are
included in the package list.
If pkg_list provided use that list (plus any extras brought in by
klass) rather than generating a tree for all packages.
Generate a tree of all buildable targets.
"""
klass = params[0]
if len(params) > 1:
pkg_list = params[1]
else:
pkg_list = []
command.cooker.generateTargetsTree(klass, pkg_list)
command.cooker.generateTargetsTree(klass)
command.finishAsyncCommand()
generateTargetsTree.needcache = True
@@ -251,28 +243,6 @@ class CommandsAsync:
command.finishAsyncCommand()
findConfigFiles.needcache = True
def findFilesMatchingInDir(self, command, params):
"""
Find implementation files matching the specified pattern
in the requested subdirectory of a BBPATH
"""
pattern = params[0]
directory = params[1]
command.cooker.findFilesMatchingInDir(pattern, directory)
command.finishAsyncCommand()
findFilesMatchingInDir.needcache = True
def findConfigFilePath(self, command, params):
"""
Find the path of the requested configuration file
"""
configfile = params[0]
command.cooker.findConfigFilePath(configfile)
command.finishAsyncCommand()
findConfigFilePath.needcache = False
def showVersions(self, command, params):
"""
Show the currently selected versions


@@ -1,28 +0,0 @@
"""Code pulled from future python versions, here for compatibility"""
def total_ordering(cls):
"""Class decorator that fills in missing ordering methods"""
convert = {
'__lt__': [('__gt__', lambda self, other: other < self),
('__le__', lambda self, other: not other < self),
('__ge__', lambda self, other: not self < other)],
'__le__': [('__ge__', lambda self, other: other <= self),
('__lt__', lambda self, other: not other <= self),
('__gt__', lambda self, other: not self <= other)],
'__gt__': [('__lt__', lambda self, other: other > self),
('__ge__', lambda self, other: not other > self),
('__le__', lambda self, other: not self > other)],
'__ge__': [('__le__', lambda self, other: other >= self),
('__gt__', lambda self, other: not other >= self),
('__lt__', lambda self, other: not self >= other)]
}
roots = set(dir(cls)) & set(convert)
if not roots:
raise ValueError('must define at least one ordering operation: < > <= >=')
root = max(roots) # prefer __lt__ to __le__ to __gt__ to __ge__
for opname, opfunc in convert[root]:
if opname not in roots:
opfunc.__name__ = opname
opfunc.__doc__ = getattr(int, opname).__doc__
setattr(cls, opname, opfunc)
return cls
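A short usage sketch for the total_ordering backport above (the Version class is invented for the example; on Python 2.7 and later the same decorator ships as functools.total_ordering):

@total_ordering
class Version(object):
    def __init__(self, num):
        self.num = num
    def __eq__(self, other):
        return self.num == other.num
    def __lt__(self, other):            # one ordering method is enough...
        return self.num < other.num

assert Version(1) < Version(2)
assert Version(2) >= Version(1)         # ...the decorator fills in the rest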


@@ -28,13 +28,12 @@ import atexit
import itertools
import logging
import multiprocessing
import signal
import sre_constants
import threading
from cStringIO import StringIO
from contextlib import closing
from functools import wraps
from collections import defaultdict
import bb, bb.exceptions
import bb
from bb import utils, data, parse, event, cache, providers, taskdata, command, runqueue
logger = logging.getLogger("BitBake")
@@ -56,20 +55,6 @@ class NothingToBuild(Exception):
class state:
initial, parsing, running, shutdown, stop = range(5)
class SkippedPackage:
def __init__(self, info = None, reason = None):
self.skipreason = None
self.provides = None
self.rprovides = None
if info:
self.skipreason = info.skipreason
self.provides = info.provides
self.rprovides = info.rprovides
elif reason:
self.skipreason = reason
#============================================================================#
# BBCooker
#============================================================================#
@@ -78,65 +63,23 @@ class BBCooker:
Manages one bitbake build run
"""
def __init__(self, configuration, server_registration_cb):
def __init__(self, configuration, server):
self.status = None
self.appendlist = {}
self.skiplist = {}
self.server_registration_cb = server_registration_cb
if server:
self.server = server.BitBakeServer(self)
self.configuration = configuration
self.caches_array = []
# Currently, only Image Creator hob ui needs extra cache.
# So, we save Extra Cache class name and container file
# information into a extraCaches field in hob UI.
# TODO: In future, bin/bitbake should pass information into cooker,
# instead of getting information from configuration.ui. Also, some
# UI start up issues need to be addressed at the same time.
caches_name_array = ['bb.cache:CoreRecipeInfo']
if configuration.ui:
try:
module = __import__('bb.ui', fromlist=[configuration.ui])
name_array = (getattr(module, configuration.ui)).extraCaches
for recipeInfoName in name_array:
caches_name_array.append(recipeInfoName)
except ImportError as exc:
# bb.ui.XXX is not defined and imported. It's an error!
logger.critical("Unable to import '%s' interface from bb.ui: %s" % (configuration.ui, exc))
sys.exit("FATAL: Failed to import '%s' interface." % configuration.ui)
except AttributeError:
# This is not an error. If the field is not defined in the ui,
# this interface might need no extra cache fields, so
# just skip this error!
logger.debug(2, "UI '%s' does not require extra cache!" % (configuration.ui))
# At least CoreRecipeInfo will be loaded, so caches_array will never be empty!
# This is the entry point, no further check needed!
for var in caches_name_array:
try:
module_name, cache_name = var.split(':')
module = __import__(module_name, fromlist=(cache_name,))
self.caches_array.append(getattr(module, cache_name))
except ImportError as exc:
logger.critical("Unable to import extra RecipeInfo '%s' from '%s': %s" % (cache_name, module_name, exc))
sys.exit("FATAL: Failed to import extra cache class '%s'." % cache_name)
self.configuration.data = bb.data.init()
if not self.server_registration_cb:
if not server:
bb.data.setVar("BB_WORKERCONTEXT", "1", self.configuration.data)
bb.data.inheritFromOS(self.configuration.data)
try:
self.parseConfigurationFiles(self.configuration.prefile,
self.configuration.postfile)
except SyntaxError:
sys.exit(1)
except Exception:
logger.exception("Error parsing configuration files")
sys.exit(1)
self.parseConfigurationFiles(self.configuration.file)
if not self.configuration.cmd:
self.configuration.cmd = bb.data.getVar("BB_DEFAULT_TASK", self.configuration.data, True) or "build"
@@ -164,8 +107,6 @@ class BBCooker:
self.command = bb.command.Command(self)
self.state = state.initial
self.parser = None
def parseConfiguration(self):
@@ -178,39 +119,39 @@ class BBCooker:
def parseCommandLine(self):
# Parse any commandline into actions
self.commandlineAction = {'action':None, 'msg':None}
if self.configuration.show_environment:
self.commandlineAction = None
if 'world' in self.configuration.pkgs_to_build:
self.commandlineAction['msg'] = "'world' is not a valid target for --environment."
elif 'universe' in self.configuration.pkgs_to_build:
self.commandlineAction['msg'] = "'universe' is not a valid target for --environment."
buildlog.error("'world' is not a valid target for --environment.")
elif len(self.configuration.pkgs_to_build) > 1:
self.commandlineAction['msg'] = "Only one target can be used with the --environment option."
buildlog.error("Only one target can be used with the --environment option.")
elif self.configuration.buildfile and len(self.configuration.pkgs_to_build) > 0:
self.commandlineAction['msg'] = "No target should be used with the --environment and --buildfile options."
buildlog.error("No target should be used with the --environment and --buildfile options.")
elif len(self.configuration.pkgs_to_build) > 0:
self.commandlineAction['action'] = ["showEnvironmentTarget", self.configuration.pkgs_to_build]
self.commandlineAction = ["showEnvironmentTarget", self.configuration.pkgs_to_build]
else:
self.commandlineAction['action'] = ["showEnvironment", self.configuration.buildfile]
self.commandlineAction = ["showEnvironment", self.configuration.buildfile]
elif self.configuration.buildfile is not None:
self.commandlineAction['action'] = ["buildFile", self.configuration.buildfile, self.configuration.cmd]
self.commandlineAction = ["buildFile", self.configuration.buildfile, self.configuration.cmd]
elif self.configuration.revisions_changed:
self.commandlineAction['action'] = ["compareRevisions"]
self.commandlineAction = ["compareRevisions"]
elif self.configuration.show_versions:
self.commandlineAction['action'] = ["showVersions"]
self.commandlineAction = ["showVersions"]
elif self.configuration.parse_only:
self.commandlineAction['action'] = ["parseFiles"]
self.commandlineAction = ["parseFiles"]
elif self.configuration.dot_graph:
if self.configuration.pkgs_to_build:
self.commandlineAction['action'] = ["generateDotGraph", self.configuration.pkgs_to_build, self.configuration.cmd]
self.commandlineAction = ["generateDotGraph", self.configuration.pkgs_to_build, self.configuration.cmd]
else:
self.commandlineAction['msg'] = "Please specify a package name for dependency graph generation."
self.commandlineAction = None
buildlog.error("Please specify a package name for dependency graph generation.")
else:
if self.configuration.pkgs_to_build:
self.commandlineAction['action'] = ["buildTargets", self.configuration.pkgs_to_build, self.configuration.cmd]
self.commandlineAction = ["buildTargets", self.configuration.pkgs_to_build, self.configuration.cmd]
else:
#self.commandlineAction['msg'] = "Nothing to do. Use 'bitbake world' to build everything, or run 'bitbake --help' for usage information."
self.commandlineAction = None
buildlog.error("Nothing to do. Use 'bitbake world' to build everything, or run 'bitbake --help' for usage information.")
def runCommands(self, server, data, abort):
"""
@@ -280,7 +221,7 @@ class BBCooker:
if fn:
try:
envdata = bb.cache.Cache.loadDataFull(fn, self.get_file_appends(fn), self.configuration.data)
except Exception as e:
except Exception, e:
parselog.exception("Unable to read %s", fn)
raise
@@ -327,11 +268,9 @@ class BBCooker:
return taskdata, rq
def generateDepTreeData(self, pkgs_to_build, task, more_meta=False):
def generateDepTreeData(self, pkgs_to_build, task):
"""
Create a dependency tree of pkgs_to_build, returning the data.
When more_meta is set to True include summary, license and group
information in the returned tree.
"""
taskdata, rq = self.prepareTreeData(pkgs_to_build, task)
@@ -351,18 +290,10 @@ class BBCooker:
fn = taskdata.fn_index[fnid]
pn = self.status.pkg_fn[fn]
version = "%s:%s-%s" % self.status.pkg_pepvpr[fn]
if more_meta:
summary = self.status.summary[fn]
lic = self.status.license[fn]
section = self.status.section[fn]
if pn not in depend_tree["pn"]:
depend_tree["pn"][pn] = {}
depend_tree["pn"][pn]["filename"] = fn
depend_tree["pn"][pn]["version"] = version
if more_meta:
depend_tree["pn"][pn]["summary"] = summary
depend_tree["pn"][pn]["license"] = lic
depend_tree["pn"][pn]["section"] = section
for dep in rq.rqdata.runq_depends[task]:
depfn = taskdata.fn_index[rq.rqdata.runq_fnid[dep]]
deppn = self.status.pkg_fn[depfn]
@@ -472,36 +403,6 @@ class BBCooker:
print("}", file=tdepends_file)
logger.info("Task dependencies saved to 'task-depends.dot'")
def calc_bbfile_priority( self, filename, matched = None ):
for _, _, regex, pri in self.status.bbfile_config_priorities:
if regex.match(filename):
if matched != None:
if not regex in matched:
matched.add(regex)
return pri
return 0
def show_appends_with_no_recipes( self ):
recipes = set(os.path.basename(f)
for f in self.status.pkg_fn.iterkeys())
recipes |= set(os.path.basename(f)
for f in self.skiplist.iterkeys())
appended_recipes = self.appendlist.iterkeys()
appends_without_recipes = [self.appendlist[recipe]
for recipe in appended_recipes
if recipe not in recipes]
if appends_without_recipes:
appendlines = (' %s' % append
for appends in appends_without_recipes
for append in appends)
msg = 'No recipes available for:\n%s' % '\n'.join(appendlines)
warn_only = data.getVar("BB_DANGLINGAPPENDS_WARNONLY", \
self.configuration.data, False) or "no"
if warn_only.lower() in ("1", "yes", "true"):
bb.warn(msg)
else:
bb.fatal(msg)
def buildDepgraph( self ):
all_depends = self.status.all_depends
pn_provides = self.status.pn_provides
@@ -510,6 +411,15 @@ class BBCooker:
bb.data.update_data(localdata)
bb.data.expandKeys(localdata)
matched = set()
def calc_bbfile_priority(filename):
for _, _, regex, pri in self.status.bbfile_config_priorities:
if regex.match(filename):
if not regex in matched:
matched.add(regex)
return pri
return 0
# Handle PREFERRED_PROVIDERS
for p in (bb.data.getVar('PREFERRED_PROVIDERS', localdata, 1) or "").split():
try:
@@ -522,60 +432,13 @@ class BBCooker:
self.status.preferred[providee] = provider
# Calculate priorities for each file
matched = set()
for p in self.status.pkg_fn:
self.status.bbfile_priority[p] = self.calc_bbfile_priority(p, matched)
# Don't show the warning if the BBFILE_PATTERN did match .bbappend files
unmatched = set()
for _, _, regex, pri in self.status.bbfile_config_priorities:
if not regex in matched:
unmatched.add(regex)
def findmatch(regex):
for bbfile in self.appendlist:
for append in self.appendlist[bbfile]:
if regex.match(append):
return True
return False
for unmatch in unmatched.copy():
if findmatch(unmatch):
unmatched.remove(unmatch)
self.status.bbfile_priority[p] = calc_bbfile_priority(p)
for collection, pattern, regex, _ in self.status.bbfile_config_priorities:
if regex in unmatched:
if not regex in matched:
collectlog.warn("No bb files matched BBFILE_PATTERN_%s '%s'" % (collection, pattern))
def findConfigFilePath(self, configfile):
path = self._findConfigFile(configfile)
if path:
bb.event.fire(bb.event.ConfigFilePathFound(path), self.configuration.data)
def findFilesMatchingInDir(self, filepattern, directory):
"""
Searches for files matching the regex 'pattern' which are children of
'directory' in each BBPATH. i.e. to find all rootfs package classes available
to BitBake one could call findFilesMatchingInDir(self, 'rootfs_', 'classes')
or to find all machine configuration files one could call:
findFilesMatchingInDir(self, 'conf/machines', 'conf')
"""
import re
matches = []
p = re.compile(re.escape(filepattern))
bbpaths = bb.data.getVar('BBPATH', self.configuration.data, True).split(':')
for path in bbpaths:
dirpath = os.path.join(path, directory)
if os.path.exists(dirpath):
for root, dirs, files in os.walk(dirpath):
for f in files:
if p.search(f):
matches.append(f)
if matches:
bb.event.fire(bb.event.FilesMatchingFound(filepattern, matches), self.configuration.data)
def findConfigFiles(self, varname):
"""
Find config files which are appropriate values for varname.
@@ -597,8 +460,7 @@ class BBCooker:
if end == 'conf':
possible.append(val)
if possible:
bb.event.fire(bb.event.ConfigFilesFound(var, possible), self.configuration.data)
bb.event.fire(bb.event.ConfigFilesFound(var, possible), self.configuration.data)
def findInheritsClass(self, klass):
"""
@@ -613,14 +475,54 @@ class BBCooker:
return pkg_list
def generateTargetsTree(self, klass=None, pkgs=[]):
def generateTargetsTreeData(self, pkgs_to_build, task):
"""
Create a tree of pkgs_to_build metadata, returning the data.
"""
taskdata, rq = self.prepareTreeData(pkgs_to_build, task)
seen_fnids = []
target_tree = {}
target_tree["depends"] = {}
target_tree["pn"] = {}
target_tree["rdepends-pn"] = {}
for task in xrange(len(rq.rqdata.runq_fnid)):
taskname = rq.rqdata.runq_task[task]
fnid = rq.rqdata.runq_fnid[task]
fn = taskdata.fn_index[fnid]
pn = self.status.pkg_fn[fn]
version = "%s:%s-%s" % self.status.pkg_pepvpr[fn]
summary = self.status.summary[fn]
license = self.status.license[fn]
section = self.status.section[fn]
if pn not in target_tree["pn"]:
target_tree["pn"][pn] = {}
target_tree["pn"][pn]["filename"] = fn
target_tree["pn"][pn]["version"] = version
target_tree["pn"][pn]["summary"] = summary
target_tree["pn"][pn]["license"] = license
target_tree["pn"][pn]["section"] = section
if fnid not in seen_fnids:
seen_fnids.append(fnid)
packages = []
target_tree["depends"][pn] = []
for dep in taskdata.depids[fnid]:
target_tree["depends"][pn].append(taskdata.build_names_index[dep])
target_tree["rdepends-pn"][pn] = []
for rdep in taskdata.rdepids[fnid]:
target_tree["rdepends-pn"][pn].append(taskdata.run_names_index[rdep])
return target_tree
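For reference, the dictionary returned by generateTargetsTreeData() above has roughly the shape sketched below; the recipe name and values are made-up illustrations, the real entries come from the parsed recipe cache.

example_tree = {
    "pn": {
        "busybox": {
            "filename": "/path/to/busybox_1.18.4.bb",   # illustrative path
            "version": "0:1.18.4-r0",                    # PE:PV-PR
            "summary": "Tiny versions of many common UNIX utilities",
            "license": "GPLv2",
            "section": "base",
        },
    },
    "depends": {"busybox": ["virtual/libc"]},        # build-time dependencies
    "rdepends-pn": {"busybox": ["busybox-syslog"]},  # runtime dependencies
}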
def generateTargetsTree(self, klass):
"""
Generate a dependency tree of buildable targets
Generate an event with the result
"""
# if the caller hasn't specified a pkgs list default to universe
if not len(pkgs):
pkgs = ['universe']
pkgs = ['world']
# if inherited_class passed ensure all recipes which inherit the
# specified class are included in pkgs
if klass:
@@ -628,7 +530,7 @@ class BBCooker:
pkgs = pkgs + extra_pkgs
# generate a dependency tree for all our packages
tree = self.generateDepTreeData(pkgs, 'build', more_meta=True)
tree = self.generateTargetsTreeData(pkgs, 'build')
bb.event.fire(bb.event.TargetsTreeGenerated(tree), self.configuration.data)
def buildWorldTargetList(self):
@@ -665,25 +567,26 @@ class BBCooker:
else:
shell.start( self )
def _findConfigFile(self, configfile):
def _findLayerConf(self):
path = os.getcwd()
while path != "/":
confpath = os.path.join(path, "conf", configfile)
if os.path.exists(confpath):
return confpath
bblayers = os.path.join(path, "conf", "bblayers.conf")
if os.path.exists(bblayers):
return bblayers
path, _ = os.path.split(path)
return None
def _findLayerConf(self):
return self._findConfigFile("bblayers.conf")
def parseConfigurationFiles(self, files):
def _parse(f, data, include=False):
try:
return bb.parse.handle(f, data, include)
except (IOError, bb.parse.ParseError) as exc:
parselog.critical("Unable to parse %s: %s" % (f, exc))
sys.exit(1)
def parseConfigurationFiles(self, prefiles, postfiles):
data = self.configuration.data
bb.parse.init_parser(data)
# Parse files for loading *before* bitbake.conf and any includes
for f in prefiles:
for f in files:
data = _parse(f, data)
layerconf = self._findLayerConf()
@@ -707,112 +610,47 @@ class BBCooker:
data = _parse(os.path.join("conf", "bitbake.conf"), data)
# Parse files for loading *after* bitbake.conf and any includes
for p in postfiles:
data = _parse(p, data)
self.configuration.data = data
# Handle any INHERITs and inherit the base class
bbclasses = ["base"] + (data.getVar('INHERIT', True) or "").split()
for bbclass in bbclasses:
data = _inherit(bbclass, data)
inherits = ["base"] + (bb.data.getVar('INHERIT', self.configuration.data, True ) or "").split()
for inherit in inherits:
self.configuration.data = _parse(os.path.join('classes', '%s.bbclass' % inherit), self.configuration.data, True )
# Normally we only register event handlers at the end of parsing .bb files
# We register any handlers we've found so far here...
for var in bb.data.getVar('__BBHANDLERS', data) or []:
bb.event.register(var, bb.data.getVar(var, data))
for var in bb.data.getVar('__BBHANDLERS', self.configuration.data) or []:
bb.event.register(var, bb.data.getVar(var, self.configuration.data))
if data.getVar("BB_WORKERCONTEXT", False) is None:
bb.fetch.fetcher_init(data)
bb.codeparser.parser_cache_init(data)
if bb.data.getVar("BB_WORKERCONTEXT", self.configuration.data) is None:
bb.fetch.fetcher_init(self.configuration.data)
bb.codeparser.parser_cache_init(self.configuration.data)
bb.parse.init_parser(data)
bb.event.fire(bb.event.ConfigParsed(), data)
self.configuration.data = data
bb.event.fire(bb.event.ConfigParsed(), self.configuration.data)
def handleCollections( self, collections ):
"""Handle collections"""
self.status.bbfile_config_priorities = []
if collections:
collection_priorities = {}
collection_depends = {}
collection_list = collections.split()
min_prio = 0
for c in collection_list:
# Get collection priority if defined explicitly
priority = bb.data.getVar("BBFILE_PRIORITY_%s" % c, self.configuration.data, 1)
if priority:
try:
prio = int(priority)
except ValueError:
parselog.error("invalid value for BBFILE_PRIORITY_%s: \"%s\"", c, priority)
if min_prio == 0 or prio < min_prio:
min_prio = prio
collection_priorities[c] = prio
else:
collection_priorities[c] = None
# Check dependencies and store information for priority calculation
deps = bb.data.getVar("LAYERDEPENDS_%s" % c, self.configuration.data, 1)
if deps:
depnamelist = []
deplist = deps.split()
for dep in deplist:
depsplit = dep.split(':')
if len(depsplit) > 1:
try:
depver = int(depsplit[1])
except ValueError:
parselog.error("invalid version value in LAYERDEPENDS_%s: \"%s\"", c, dep)
continue
else:
depver = None
dep = depsplit[0]
depnamelist.append(dep)
if dep in collection_list:
if depver:
layerver = bb.data.getVar("LAYERVERSION_%s" % dep, self.configuration.data, 1)
if layerver:
try:
lver = int(layerver)
except ValueError:
parselog.error("invalid value for LAYERVERSION_%s: \"%s\"", c, layerver)
continue
if lver <> depver:
parselog.error("Layer dependency %s of layer %s is at version %d, expected %d", dep, c, lver, depver)
else:
parselog.error("Layer dependency %s of layer %s has no version, expected %d", dep, c, depver)
else:
parselog.error("Layer dependency %s of layer %s not found", dep, c)
collection_depends[c] = depnamelist
else:
collection_depends[c] = []
# Recursively work out collection priorities based on dependencies
def calc_layer_priority(collection):
if not collection_priorities[collection]:
max_depprio = min_prio
for dep in collection_depends[collection]:
calc_layer_priority(dep)
depprio = collection_priorities[dep]
if depprio > max_depprio:
max_depprio = depprio
max_depprio += 1
parselog.debug(1, "Calculated priority of layer %s as %d", collection, max_depprio)
collection_priorities[collection] = max_depprio
# Calculate all layer priorities using calc_layer_priority and store in bbfile_config_priorities
for c in collection_list:
calc_layer_priority(c)
regex = bb.data.getVar("BBFILE_PATTERN_%s" % c, self.configuration.data, 1)
if regex == None:
parselog.error("BBFILE_PATTERN_%s not defined" % c)
continue
priority = bb.data.getVar("BBFILE_PRIORITY_%s" % c, self.configuration.data, 1)
if priority == None:
parselog.error("BBFILE_PRIORITY_%s not defined" % c)
continue
try:
cre = re.compile(regex)
except re.error:
parselog.error("BBFILE_PATTERN_%s \"%s\" is not a valid regular expression", c, regex)
continue
self.status.bbfile_config_priorities.append((c, regex, cre, collection_priorities[c]))
try:
pri = int(priority)
self.status.bbfile_config_priorities.append((c, regex, cre, pri))
except ValueError:
parselog.error("invalid value for BBFILE_PRIORITY_%s: \"%s\"", c, priority)
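A minimal standalone sketch of the recursive priority calculation used by handleCollections() above, assuming the explicit BBFILE_PRIORITY values and the LAYERDEPENDS graph have already been parsed into plain dicts (the helper names below are illustrative, not BitBake API):

def calc_priorities(explicit, depends):
    """explicit: {layer: int or None}; depends: {layer: [layers it depends on]}"""
    min_prio = min((p for p in explicit.values() if p), default=0)
    prio = dict(explicit)

    def calc(layer):
        # Layers without an explicit priority get one above their deepest dependency
        if not prio[layer]:
            max_depprio = min_prio
            for dep in depends.get(layer, []):
                calc(dep)
                if prio[dep] > max_depprio:
                    max_depprio = prio[dep]
            prio[layer] = max_depprio + 1
        return prio[layer]

    for layer in list(prio):
        calc(layer)
    return prio

print(calc_priorities({"meta-base": 5, "meta-app": None},
                      {"meta-app": ["meta-base"]}))
# -> {'meta-base': 5, 'meta-app': 6}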
def buildSetVars(self):
"""
@@ -822,22 +660,22 @@ class BBCooker:
bb.data.setVar("BUILDNAME", time.strftime('%Y%m%d%H%M'), self.configuration.data)
bb.data.setVar("BUILDSTART", time.strftime('%m/%d/%Y %H:%M:%S', time.gmtime()), self.configuration.data)
def matchFiles(self, bf):
def matchFiles(self, buildfile):
"""
Find the .bb files which match the expression in 'buildfile'.
"""
if bf.startswith("/") or bf.startswith("../"):
bf = os.path.abspath(bf)
bf = os.path.abspath(buildfile)
filelist, masked = self.collect_bbfiles()
try:
os.stat(bf)
return [bf]
except OSError:
regexp = re.compile(bf)
regexp = re.compile(buildfile)
matches = []
for f in filelist:
if regexp.search(f) and os.path.isfile(f):
bf = f
matches.append(f)
return matches
@@ -862,33 +700,28 @@ class BBCooker:
# Parse the configuration here. We need to do it explicitly here since
# buildFile() doesn't use the cache
self.parseConfiguration()
self.status = bb.cache.CacheData(self.caches_array)
self.handleCollections( bb.data.getVar("BBFILE_COLLECTIONS", self.configuration.data, 1) )
# If we are told to do the None task then query the default task
if (task == None):
task = self.configuration.cmd
fn, cls = bb.cache.Cache.virtualfn2realfn(buildfile)
fn = self.matchFile(fn)
(fn, cls) = bb.cache.Cache.virtualfn2realfn(buildfile)
buildfile = self.matchFile(fn)
fn = bb.cache.Cache.realfn2virtual(buildfile, cls)
self.buildSetVars()
self.status = bb.cache.CacheData(self.caches_array)
self.status = bb.cache.CacheData()
infos = bb.cache.Cache.parse(fn, self.get_file_appends(fn), \
self.configuration.data,
self.caches_array)
infos = dict(infos)
fn = bb.cache.Cache.realfn2virtual(fn, cls)
try:
info_array = infos[fn]
except KeyError:
bb.fatal("%s does not exist" % fn)
self.status.add_from_recipeinfo(fn, info_array)
self.configuration.data)
maininfo = None
for vfn, info in infos:
self.status.add_from_recipeinfo(vfn, info)
if vfn == fn:
maininfo = info
# Tweak some variables
item = info_array[0].pn
item = maininfo.pn
self.status.ignored_dependencies = set()
self.status.bbfile_priority[fn] = 1
@@ -910,6 +743,9 @@ class BBCooker:
buildname = bb.data.getVar("BUILDNAME", self.configuration.data)
bb.event.fire(bb.event.BuildStarted(buildname, [item]), self.configuration.event_data)
# Clear locks
bb.fetch.persistent_database_connection = {}
# Execute the runqueue
runlist = [[item, "do_%s" % task]]
@@ -929,10 +765,6 @@ class BBCooker:
buildlog.error("'%s' failed" % taskdata.fn_index[fnid])
failures += len(exc.args)
retval = False
except SystemExit as exc:
self.command.finishAsyncCommand()
return False
if not retval:
bb.event.fire(bb.event.BuildCompleted(buildname, item, failures), self.configuration.event_data)
self.command.finishAsyncCommand()
@@ -941,7 +773,7 @@ class BBCooker:
return True
return retval
self.server_registration_cb(buildFileIdle, rq)
self.server.register_idle_function(buildFileIdle, rq)
def buildTargets(self, targets, task):
"""
@@ -970,10 +802,6 @@ class BBCooker:
buildlog.error("'%s' failed" % taskdata.fn_index[fnid])
failures += len(exc.args)
retval = False
except SystemExit as exc:
self.command.finishAsyncCommand()
return False
if not retval:
bb.event.fire(bb.event.BuildCompleted(buildname, targets, failures), self.configuration.event_data)
self.command.finishAsyncCommand()
@@ -999,22 +827,34 @@ class BBCooker:
runlist.append([k, "do_%s" % task])
taskdata.add_unresolved(localdata, self.status)
# Clear locks
bb.fetch.persistent_database_connection = {}
rq = bb.runqueue.RunQueue(self, self.configuration.data, self.status, taskdata, runlist)
self.server_registration_cb(buildTargetsIdle, rq)
self.server.register_idle_function(buildTargetsIdle, rq)
def updateCache(self):
if self.state == state.running:
return
if self.state in (state.shutdown, state.stop):
self.parser.shutdown(clean=False)
sys.exit(1)
if self.state != state.parsing:
self.parseConfiguration ()
self.status = bb.cache.CacheData(self.caches_array)
# Import Psyco if available and not disabled
import platform
if platform.machine() in ['i386', 'i486', 'i586', 'i686']:
if not self.configuration.disable_psyco:
try:
import psyco
except ImportError:
collectlog.info("Psyco JIT Compiler (http://psyco.sf.net) not available. Install it to increase performance.")
else:
psyco.bind( CookerParser.parse_next )
else:
collectlog.info("You have disabled Psyco. This decreases performance.")
self.status = bb.cache.CacheData()
ignore = bb.data.getVar("ASSUME_PROVIDED", self.configuration.data, 1) or ""
self.status.ignored_dependencies = set(ignore.split())
@@ -1032,7 +872,6 @@ class BBCooker:
if not self.parser.parse_next():
collectlog.debug(1, "parsing complete")
self.show_appends_with_no_recipes()
self.buildDepgraph()
self.state = state.running
return None
@@ -1050,12 +889,6 @@ class BBCooker:
for t in self.status.world_target:
pkgs_to_build.append(t)
if 'universe' in pkgs_to_build:
parselog.debug(1, "collating packages for \"universe\"")
pkgs_to_build.remove('universe')
for t in self.status.universe_target:
pkgs_to_build.append(t)
return pkgs_to_build
def get_bbfiles( self, path = os.getcwd() ):
@@ -1090,9 +923,6 @@ class BBCooker:
files = (data.getVar( "BBFILES", self.configuration.data, 1 ) or "").split()
data.setVar("BBFILES", " ".join(files), self.configuration.data)
# Sort files by priority
files.sort( key=lambda fileitem: self.calc_bbfile_priority(fileitem) )
if not len(files):
files = self.get_bbfiles()
@@ -1100,21 +930,16 @@ class BBCooker:
collectlog.error("no recipe files to build, check your BBPATH and BBFILES?")
bb.event.fire(CookerExit(), self.configuration.event_data)
# Can't use set here as order is important
newfiles = []
newfiles = set()
for f in files:
if os.path.isdir(f):
dirfiles = self.find_bbfiles(f)
for g in dirfiles:
if g not in newfiles:
newfiles.append(g)
newfiles.update(dirfiles)
else:
globbed = glob.glob(f)
if not globbed and os.path.exists(f):
globbed = [f]
for g in globbed:
if g not in newfiles:
newfiles.append(g)
newfiles.update(globbed)
bbmask = bb.data.getVar('BBMASK', self.configuration.data, 1)
@@ -1146,18 +971,6 @@ class BBCooker:
self.appendlist[base] = []
self.appendlist[base].append(f)
# Find overlayed recipes
# bbfiles will be in priority order which makes this easy
bbfile_seen = dict()
self.overlayed = defaultdict(list)
for f in reversed(bbfiles):
base = os.path.basename(f)
if base not in bbfile_seen:
bbfile_seen[base] = f
else:
topfile = bbfile_seen[base]
self.overlayed[topfile].append(f)
return (bbfiles, masked)
def get_file_appends(self, fn):
@@ -1234,45 +1047,24 @@ class CookerExit(bb.event.Event):
def __init__(self):
bb.event.Event.__init__(self)
def catch_parse_error(func):
"""Exception handling bits for our parsing"""
@wraps(func)
def wrapped(fn, *args):
try:
return func(fn, *args)
except (IOError, bb.parse.ParseError, bb.data_smart.ExpansionError) as exc:
parselog.critical("Unable to parse %s: %s" % (fn, exc))
sys.exit(1)
return wrapped
@catch_parse_error
def _parse(fn, data, include=False):
return bb.parse.handle(fn, data, include)
@catch_parse_error
def _inherit(bbclass, data):
bb.parse.BBHandler.inherit([bbclass], data)
return data
class ParsingFailure(Exception):
def __init__(self, realexception, recipe):
self.realexception = realexception
self.recipe = recipe
Exception.__init__(self, realexception, recipe)
Exception.__init__(self, "Failure when parsing %s" % recipe)
self.args = (realexception, recipe)
def parse_file(task):
filename, appends, caches_array = task
filename, appends = task
try:
return True, bb.cache.Cache.parse(filename, appends, parse_file.cfg, caches_array)
except Exception as exc:
tb = sys.exc_info()[2]
return True, bb.cache.Cache.parse(filename, appends, parse_file.cfg)
except Exception, exc:
exc.recipe = filename
exc.traceback = list(bb.exceptions.extract_traceback(tb, context=3))
raise exc
# Need to turn BaseExceptions into Exceptions here so we gracefully shutdown
# and for example a worker thread doesn't just exit on its own in response to
# a SystemExit event for example.
except BaseException as exc:
except BaseException, exc:
raise ParsingFailure(exc, filename)
class CookerParser(object):
@@ -1295,13 +1087,13 @@ class CookerParser(object):
self.num_processes = int(self.cfgdata.getVar("BB_NUMBER_PARSE_THREADS", True) or
multiprocessing.cpu_count())
self.bb_cache = bb.cache.Cache(self.cfgdata, cooker.caches_array)
self.bb_cache = bb.cache.Cache(self.cfgdata)
self.fromcache = []
self.willparse = []
for filename in self.filelist:
appends = self.cooker.get_file_appends(filename)
if not self.bb_cache.cacheValid(filename, appends):
self.willparse.append((filename, appends, cooker.caches_array))
if not self.bb_cache.cacheValid(filename):
self.willparse.append((filename, appends))
else:
self.fromcache.append((filename, appends))
self.toparse = self.total - len(self.fromcache)
@@ -1311,24 +1103,18 @@ class CookerParser(object):
def start(self):
def init(cfg):
signal.signal(signal.SIGINT, signal.SIG_IGN)
parse_file.cfg = cfg
multiprocessing.util.Finalize(None, bb.codeparser.parser_cache_save, args=(self.cooker.configuration.data, ), exitpriority=1)
self.results = self.load_cached()
bb.event.fire(bb.event.ParseStarted(self.toparse), self.cfgdata)
if self.toparse:
bb.event.fire(bb.event.ParseStarted(self.toparse), self.cfgdata)
self.pool = multiprocessing.Pool(self.num_processes, init, [self.cfgdata])
parsed = self.pool.imap(parse_file, self.willparse)
self.pool.close()
self.pool = multiprocessing.Pool(self.num_processes, init, [self.cfgdata])
parsed = self.pool.imap(parse_file, self.willparse)
self.pool.close()
self.results = itertools.chain(self.results, parsed)
self.results = itertools.chain(self.load_cached(), parsed)
def shutdown(self, clean=True):
if not self.toparse:
return
if clean:
event = bb.event.ParseCompleted(self.cached, self.parsed,
self.skipped, self.masked,
@@ -1341,8 +1127,11 @@ class CookerParser(object):
sync = threading.Thread(target=self.bb_cache.sync)
sync.start()
multiprocessing.util.Finalize(None, sync.join, exitpriority=-100)
bb.codeparser.parser_cache_savemerge(self.cooker.configuration.data)
atexit.register(lambda: sync.join())
codesync = threading.Thread(target=bb.codeparser.parser_cache_save(self.cooker.configuration.data))
codesync.start()
atexit.register(lambda: codesync.join())
def load_cached(self):
for filename, appends in self.fromcache:
@@ -1355,21 +1144,12 @@ class CookerParser(object):
except StopIteration:
self.shutdown()
return False
except ParsingFailure as exc:
except KeyboardInterrupt:
self.shutdown(clean=False)
bb.fatal('Unable to parse %s: %s' %
(exc.recipe, bb.exceptions.to_string(exc.realexception)))
except (bb.parse.ParseError, bb.data_smart.ExpansionError) as exc:
bb.fatal(str(exc))
except SyntaxError as exc:
logger.error('Unable to parse %s', exc.recipe)
sys.exit(1)
raise
except Exception as exc:
etype, value, tb = sys.exc_info()
logger.error('Unable to parse %s', value.recipe,
exc_info=(etype, value, exc.traceback))
self.shutdown(clean=False)
sys.exit(1)
bb.fatal('Error parsing %s: %s' % (exc.recipe, exc))
self.current += 1
self.virtuals += len(result)
@@ -1381,17 +1161,17 @@ class CookerParser(object):
else:
self.cached += 1
for virtualfn, info_array in result:
if info_array[0].skipped:
for virtualfn, info in result:
if info.skipped:
self.skipped += 1
self.cooker.skiplist[virtualfn] = SkippedPackage(info_array[0])
self.bb_cache.add_info(virtualfn, info_array, self.cooker.status,
else:
self.bb_cache.add_info(virtualfn, info, self.cooker.status,
parsed=parsed)
return True
def reparse(self, filename):
infos = self.bb_cache.parse(filename,
self.cooker.get_file_appends(filename),
self.cfgdata, self.cooker.caches_array)
for vfn, info_array in infos:
self.cooker.status.add_from_recipeinfo(vfn, info_array)
self.cfgdata)
for vfn, info in infos:
self.cooker.status.add_from_recipeinfo(vfn, info)
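The CookerParser changes above keep the multiprocessing.Pool/imap parsing pattern; a minimal standalone version of that pattern looks roughly like this (parse_one is a stand-in for the real cache parse, not BitBake code):

import multiprocessing

def parse_one(filename):
    # Pretend "parse" result; the real worker returns cache info per recipe
    return filename, len(filename)

if __name__ == "__main__":
    files = ["a.bb", "bb.bb", "core-image.bb"]
    pool = multiprocessing.Pool(2)
    results = pool.imap(parse_one, files)   # lazy iterator over worker results
    pool.close()
    for name, cost in results:
        print(name, cost)
    pool.join()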

View File

@@ -187,7 +187,7 @@ def emit_var(var, o=sys.__stdout__, d = init(), all=False):
val = getVar(var, d, 1)
except (KeyboardInterrupt, bb.build.FuncFailed):
raise
except Exception as exc:
except Exception, exc:
o.write('# expansion of %s threw %s: %s\n' % (var, exc.__class__.__name__, str(exc)))
return 0
@@ -234,20 +234,25 @@ def emit_env(o=sys.__stdout__, d = init(), all=False):
for key in keys:
emit_var(key, o, d, all and not isfunc) and o.write('\n')
def exported_keys(d):
return (key for key in d.keys() if not key.startswith('__') and
d.getVarFlag(key, 'export') and
not d.getVarFlag(key, 'unexport'))
def exported_vars(d):
for key in exported_keys(d):
def export_vars(d):
keys = (key for key in d.keys() if d.getVarFlag(key, "export"))
ret = {}
for k in keys:
try:
value = d.getVar(key, True)
except Exception:
v = d.getVar(k, True)
if v:
ret[k] = v
except (KeyboardInterrupt, bb.build.FuncFailed):
raise
except Exception, exc:
pass
return ret
if value is not None:
yield key, str(value)
def export_envvars(v, d):
for s in os.environ.keys():
if s not in v:
v[s] = os.environ[s]
return v
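Both variants above (exported_vars and export_vars) reduce the datastore to variables flagged for export; a toy illustration of that filtering with a plain dict standing in for the real data store (the helper names are assumptions):

def exported_items(store):
    for key, (value, flags) in store.items():
        if key.startswith("__"):
            continue
        if flags.get("export") and not flags.get("unexport"):
            yield key, str(value)

store = {
    "PATH": ("/usr/sbin:/usr/bin", {"export": True}),
    "PRIVATE": ("x", {"export": True, "unexport": True}),
    "__internal": ("y", {"export": True}),
}
for key, value in sorted(exported_items(store)):
    print('export %s="%s"' % (key, value))   # only PATH is emitted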
def emit_func(func, o=sys.__stdout__, d = init()):
"""Emits all items in the data store in a format such that it can be sourced by a shell."""

View File

@@ -172,12 +172,11 @@ class DataSmart(MutableMapping):
if o not in self._seen_overrides:
continue
vars = self._seen_overrides[o].copy()
vars = self._seen_overrides[o]
for var in vars:
name = var[:-l]
try:
self.setVar(name, self.getVar(var, False))
self.delVar(var)
except Exception:
logger.info("Untracked delVar")
@@ -192,11 +191,11 @@ class DataSmart(MutableMapping):
keep.append((a ,o))
continue
if op == "_append":
if op is "_append":
sval = self.getVar(append, False) or ""
sval += a
self.setVar(append, sval)
elif op == "_prepend":
elif op is "_prepend":
sval = a + (self.getVar(append, False) or "")
self.setVar(append, sval)
@@ -259,16 +258,19 @@ class DataSmart(MutableMapping):
# more cookies for the cookie monster
if '_' in var:
override = var[var.rfind('_')+1:]
if len(override) > 0:
if override not in self._seen_overrides:
self._seen_overrides[override] = set()
self._seen_overrides[override].add( var )
if override not in self._seen_overrides:
self._seen_overrides[override] = set()
self._seen_overrides[override].add( var )
# setting var
self.dict[var]["content"] = value
def getVar(self, var, expand=False, noweakdefault=False):
return self.getVarFlag(var, "content", expand, noweakdefault)
def getVar(self, var, exp):
value = self.getVarFlag(var, "content")
if exp and value:
return self.expand(value, var)
return value
def renameVar(self, key, newkey):
"""
@@ -296,23 +298,19 @@ class DataSmart(MutableMapping):
def delVar(self, var):
self.expand_cache = {}
self.dict[var] = {}
if '_' in var:
override = var[var.rfind('_')+1:]
if override and override in self._seen_overrides and var in self._seen_overrides[override]:
self._seen_overrides[override].remove(var)
def setVarFlag(self, var, flag, flagvalue):
if not var in self.dict:
self._makeShadowCopy(var)
self.dict[var][flag] = flagvalue
def getVarFlag(self, var, flag, expand=False, noweakdefault=False):
def getVarFlag(self, var, flag, expand=False):
local_var = self._findVar(var)
value = None
if local_var:
if flag in local_var:
value = copy.copy(local_var[flag])
elif flag == "content" and "defaultval" in local_var and not noweakdefault:
elif flag == "content" and "defaultval" in local_var:
value = copy.copy(local_var["defaultval"])
if expand and value:
value = self.expand(value, None)
@@ -400,22 +398,18 @@ class DataSmart(MutableMapping):
yield key
def __iter__(self):
def keylist(d):
klist = set()
for key in d:
if key == "_data":
continue
if not d[key]:
continue
klist.add(key)
seen = set()
def _keys(d):
if "_data" in d:
klist |= keylist(d["_data"])
for key in _keys(d["_data"]):
yield key
return klist
for k in keylist(self.dict):
yield k
for key in d:
if key != "_data":
if not key in seen:
seen.add(key)
yield key
return _keys(self.dict)
def __len__(self):
return len(frozenset(self))
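The getVarFlag() change in the hunk above adds a noweakdefault switch controlling whether the "defaultval" weak default (set with ?=) may satisfy a "content" lookup; a toy version of that lookup order:

def get_flag(var_flags, flag, noweakdefault=False):
    value = None
    if flag in var_flags:
        value = var_flags[flag]
    elif flag == "content" and "defaultval" in var_flags and not noweakdefault:
        value = var_flags["defaultval"]
    return value

weak = {"defaultval": "fallback"}
print(get_flag(weak, "content"))                      # 'fallback'
print(get_flag(weak, "content", noweakdefault=True))  # None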

View File

@@ -30,7 +30,6 @@ except ImportError:
import pickle
import logging
import atexit
import traceback
import bb.utils
# This is the pid for which we should generate the event. This is set when
@@ -38,8 +37,6 @@ import bb.utils
worker_pid = 0
worker_pipe = None
logger = logging.getLogger('BitBake.Event')
class Event(object):
"""Base class for events"""
@@ -61,35 +58,23 @@ _ui_handler_seq = 0
bb.utils._context["NotHandled"] = NotHandled
bb.utils._context["Handled"] = Handled
def execute_handler(name, handler, event, d):
event.data = d
try:
ret = handler(event)
except Exception:
etype, value, tb = sys.exc_info()
logger.error("Execution of event handler '%s' failed" % name,
exc_info=(etype, value, tb.tb_next))
raise
except SystemExit as exc:
if exc.code != 0:
logger.error("Execution of event handler '%s' failed" % name)
raise
finally:
del event.data
if ret is not None:
warnings.warn("Using Handled/NotHandled in event handlers is deprecated",
DeprecationWarning, stacklevel = 2)
def fire_class_handlers(event, d):
if isinstance(event, logging.LogRecord):
return
for name, handler in _handlers.iteritems():
try:
execute_handler(name, handler, event, d)
except Exception:
continue
for handler in _handlers:
h = _handlers[handler]
event.data = d
if type(h).__name__ == "code":
locals = {"e": event}
bb.utils.simple_exec(h, locals)
ret = bb.utils.better_eval("tmpHandler(e)", locals)
if ret is not None:
warnings.warn("Using Handled/NotHandled in event handlers is deprecated",
DeprecationWarning, stacklevel = 2)
else:
h(event)
del event.data
ui_queue = []
@atexit.register
@@ -120,10 +105,7 @@ def fire_ui_handlers(event, d):
# We use pickle here since it better handles object instances
# which xmlrpc's marshaller does not. Events *must* be serializable
# by pickle.
if hasattr(_ui_handlers[h].event, "sendpickle"):
_ui_handlers[h].event.sendpickle((pickle.dumps(event)))
else:
_ui_handlers[h].event.send(event)
_ui_handlers[h].event.send((pickle.dumps(event)))
except:
errors.append(h)
for h in errors:
@@ -154,7 +136,6 @@ def fire_from_worker(event, d):
event = pickle.loads(event[7:-8])
fire_ui_handlers(event, d)
noop = lambda _: None
def register(name, handler):
"""Register an Event handler"""
@@ -165,18 +146,9 @@ def register(name, handler):
if handler is not None:
# handle string containing python code
if isinstance(handler, basestring):
tmp = "def %s(e):\n%s" % (name, handler)
try:
code = compile(tmp, "%s(e)" % name, "exec")
except SyntaxError:
logger.error("Unable to register event handler '%s':\n%s", name,
''.join(traceback.format_exc(limit=0)))
_handlers[name] = noop
return
env = {}
bb.utils.simple_exec(code, env)
func = bb.utils.better_eval(name, env)
_handlers[name] = func
tmp = "def tmpHandler(e):\n%s" % handler
comp = bb.utils.better_compile(tmp, "tmpHandler(e)", "bb.event._registerCode")
_handlers[name] = comp
else:
_handlers[name] = handler
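Both sides of the register() hunk above turn a handler supplied as Python source into a callable by wrapping it in a function definition and compiling it; a simplified standalone sketch of that pattern (no BitBake utilities involved):

handlers = {}

def register(name, handler_source):
    # Wrap the metadata-supplied source in a named function, compile it,
    # execute it in a scratch namespace, then keep the function object.
    src = "def %s(e):\n%s" % (name, handler_source)
    code = compile(src, "%s(e)" % name, "exec")
    env = {}
    exec(code, env)
    handlers[name] = env[name]

register("on_parse_done", "    print('handled:', e)")
handlers["on_parse_done"]({"event": "ParseCompleted"})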
@@ -206,17 +178,13 @@ def getName(e):
class ConfigParsed(Event):
"""Configuration Parsing Complete"""
class RecipeEvent(Event):
class RecipeParsed(Event):
""" Recipe Parsing Complete """
def __init__(self, fn):
self.fn = fn
Event.__init__(self)
class RecipePreFinalise(RecipeEvent):
""" Recipe Parsing Complete but not yet finalised"""
class RecipeParsed(RecipeEvent):
""" Recipe Parsing Complete """
class StampUpdate(Event):
"""Trigger for any adjustment of the stamp files to happen"""
@@ -390,16 +358,6 @@ class TargetsTreeGenerated(Event):
Event.__init__(self)
self._model = model
class FilesMatchingFound(Event):
"""
Event when a list of files matching the supplied pattern has
been generated
"""
def __init__(self, pattern, matches):
Event.__init__(self)
self._pattern = pattern
self._matches = matches
class ConfigFilesFound(Event):
"""
Event when a list of appropriate config files has been generated
@@ -409,14 +367,6 @@ class ConfigFilesFound(Event):
self._variable = variable
self._values = values
class ConfigFilePathFound(Event):
"""
Event when a path for a config file has been found
"""
def __init__(self, path):
Event.__init__(self)
self._path = path
class MsgBase(Event):
"""Base class for messages"""
@@ -446,12 +396,6 @@ class LogHandler(logging.Handler):
"""Dispatch logging messages as bitbake events"""
def emit(self, record):
if record.exc_info:
etype, value, tb = record.exc_info
if hasattr(tb, 'tb_next'):
tb = list(bb.exceptions.extract_traceback(tb, context=3))
record.bb_exc_info = (etype, value, tb)
record.exc_info = None
fire(record, None)
def filter(self, record):

View File

@@ -1,84 +0,0 @@
from __future__ import absolute_import
import inspect
import traceback
import bb.namedtuple_with_abc
from collections import namedtuple
class TracebackEntry(namedtuple.abc):
"""Pickleable representation of a traceback entry"""
_fields = 'filename lineno function args code_context index'
_header = ' File "{0.filename}", line {0.lineno}, in {0.function}{0.args}'
def format(self, formatter=None):
if not self.code_context:
return self._header.format(self) + '\n'
formatted = [self._header.format(self) + ':\n']
for lineindex, line in enumerate(self.code_context):
if formatter:
line = formatter(line)
if lineindex == self.index:
formatted.append(' >%s' % line)
else:
formatted.append(' %s' % line)
return formatted
def __str__(self):
return ''.join(self.format())
def _get_frame_args(frame):
"""Get the formatted arguments and class (if available) for a frame"""
arginfo = inspect.getargvalues(frame)
if not arginfo.args:
return '', None
firstarg = arginfo.args[0]
if firstarg == 'self':
self = arginfo.locals['self']
cls = self.__class__.__name__
arginfo.args.pop(0)
del arginfo.locals['self']
else:
cls = None
formatted = inspect.formatargvalues(*arginfo)
return formatted, cls
def extract_traceback(tb, context=1):
frames = inspect.getinnerframes(tb, context)
for frame, filename, lineno, function, code_context, index in frames:
formatted_args, cls = _get_frame_args(frame)
if cls:
function = '%s.%s' % (cls, function)
yield TracebackEntry(filename, lineno, function, formatted_args,
code_context, index)
def format_extracted(extracted, formatter=None, limit=None):
if limit:
extracted = extracted[-limit:]
formatted = []
for tracebackinfo in extracted:
formatted.extend(tracebackinfo.format(formatter))
return formatted
def format_exception(etype, value, tb, context=1, limit=None, formatter=None):
formatted = ['Traceback (most recent call last):\n']
if hasattr(tb, 'tb_next'):
tb = extract_traceback(tb, context)
formatted.extend(format_extracted(tb, formatter, limit))
formatted.extend(traceback.format_exception_only(etype, value))
return formatted
def to_string(exc):
if isinstance(exc, SystemExit):
if not isinstance(exc.code, basestring):
return 'Exited with "%d"' % exc.code
return str(exc)
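The module above exists to turn live tracebacks into plain, pickleable entries so they can cross process boundaries (for example from parser workers back to the cooker); purely for comparison, the standard library offers a rough analogue of the same idea:

import sys
import traceback

try:
    1 / 0
except ZeroDivisionError:
    tb = sys.exc_info()[2]
    # extract_tb() yields simple (filename, lineno, name, line) entries,
    # which pickle cleanly, unlike a live traceback object
    for filename, lineno, func, text in traceback.extract_tb(tb):
        print("%s:%d in %s" % (filename, lineno, func))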

View File

@@ -153,18 +153,18 @@ def fetcher_init(d):
Called to initialize the fetchers once the configuration data is known.
Calls before this must not hit the cache.
"""
pd = persist_data.persist(d)
# When to drop SCM head revisions controlled by user policy
srcrev_policy = bb.data.getVar('BB_SRCREV_POLICY', d, 1) or "clear"
if srcrev_policy == "cache":
logger.debug(1, "Keeping SRCREV cache due to cache policy of: %s", srcrev_policy)
elif srcrev_policy == "clear":
logger.debug(1, "Clearing SRCREV cache due to cache policy of: %s", srcrev_policy)
revs = persist_data.persist('BB_URI_HEADREVS', d)
try:
bb.fetch.saved_headrevs = revs.items()
bb.fetch.saved_headrevs = pd['BB_URI_HEADREVS'].items()
except:
pass
revs.clear()
del pd['BB_URI_HEADREVS']
else:
raise FetchError("Invalid SRCREV cache policy of: %s" % srcrev_policy)
@@ -178,7 +178,8 @@ def fetcher_compare_revisions(d):
return true/false on whether they've changed.
"""
data = persist_data.persist('BB_URI_HEADREVS', d).items()
pd = persist_data.persist(d)
data = pd['BB_URI_HEADREVS'].items()
data2 = bb.fetch.saved_headrevs
changed = False
@@ -755,13 +756,15 @@ class Fetch(object):
if not hasattr(self, "_latest_revision"):
raise ParameterError
revs = persist_data.persist('BB_URI_HEADREVS', d)
pd = persist_data.persist(d)
revs = pd['BB_URI_HEADREVS']
key = self.generate_revision_key(url, ud, d)
try:
return revs[key]
except KeyError:
revs[key] = rev = self._latest_revision(url, ud, d)
return rev
rev = revs[key]
if rev != None:
return str(rev)
revs[key] = rev = self._latest_revision(url, ud, d)
return rev
def sortable_revision(self, url, ud, d):
"""
@@ -770,17 +773,18 @@ class Fetch(object):
if hasattr(self, "_sortable_revision"):
return self._sortable_revision(url, ud, d)
localcounts = persist_data.persist('BB_URI_LOCALCOUNT', d)
pd = persist_data.persist(d)
localcounts = pd['BB_URI_LOCALCOUNT']
key = self.generate_revision_key(url, ud, d)
latest_rev = self._build_revision(url, ud, d)
last_rev = localcounts.get(key + '_rev')
last_rev = localcounts[key + '_rev']
uselocalcount = bb.data.getVar("BB_LOCALCOUNT_OVERRIDE", d, True) or False
count = None
if uselocalcount:
count = Fetch.localcount_internal_helper(ud, d)
if count is None:
count = localcounts.get(key + '_count')
count = localcounts[key + '_count']
if last_rev == latest_rev:
return str(count + "+" + latest_rev)
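Both versions of latest_revision()/sortable_revision() above follow the same memoisation pattern against the persistent BB_URI_HEADREVS store; with a plain dict standing in for that store, the pattern is simply (URL and revision below are made up):

def latest_revision(key, cache, resolve):
    try:
        return cache[key]            # cached head revision, if any
    except KeyError:
        cache[key] = rev = resolve(key)
        return rev

cache = {}
print(latest_revision("git://example.invalid/repo", cache, lambda k: "abc123"))
print(latest_revision("git://example.invalid/repo", cache,
                      lambda k: "never called: value is cached"))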

View File

@@ -67,15 +67,15 @@ class Bzr(Fetch):
options = []
if command == "revno":
if command is "revno":
bzrcmd = "%s revno %s %s://%s" % (basecmd, " ".join(options), proto, bzrroot)
else:
if ud.revision:
options.append("-r %s" % ud.revision)
if command == "fetch":
if command is "fetch":
bzrcmd = "%s co %s %s://%s" % (basecmd, " ".join(options), proto, bzrroot)
elif command == "update":
elif command is "update":
bzrcmd = "%s pull %s --overwrite" % (basecmd, " ".join(options))
else:
raise FetchError("Invalid bzr command %s" % command)
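Note that the `command is "revno"` form seen in this hunk (and in the hg, osc and svn fetchers further down) compares object identity rather than value; `==` is the correct comparison for strings. A quick demonstration:

command = "".join(["rev", "no"])   # a string built at runtime
print(command == "revno")          # True  - value comparison
print(command is "revno")          # False here in CPython - identity comparison
                                   # (Python 3.8+ also emits a SyntaxWarning)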

View File

@@ -242,36 +242,36 @@ class Git(Fetch):
"""
Look in the cache for the latest revision, if not present ask the SCM.
"""
revs = bb.persist_data.persist('BB_URI_HEADREVS', d)
persisted = bb.persist_data.persist(d)
revs = persisted['BB_URI_HEADREVS']
key = self.generate_revision_key(url, ud, d, branch=True)
try:
return revs[key]
except KeyError:
rev = revs[key]
if rev is None:
# Compatibility with old key format, no branch included
oldkey = self.generate_revision_key(url, ud, d, branch=False)
try:
rev = revs[oldkey]
except KeyError:
rev = self._latest_revision(url, ud, d)
else:
rev = revs[oldkey]
if rev is not None:
del revs[oldkey]
else:
rev = self._latest_revision(url, ud, d)
revs[key] = rev
return rev
return str(rev)
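A standalone sketch of the lookup above: try the branch-aware cache key first, fall back to the legacy key (migrating the entry), and only ask the SCM when neither is present. Plain dicts and made-up keys stand in for the persistent store:

def cached_revision(cache, key, oldkey, resolve):
    rev = cache.get(key)
    if rev is None:
        rev = cache.get(oldkey)
        if rev is not None:
            del cache[oldkey]          # migrate the old-format entry
        else:
            rev = resolve()            # fall back to querying the SCM
        cache[key] = rev
    return str(rev)

cache = {"git://host/path": "deadbeef"}              # entry under the old key
print(cached_revision(cache, "git://host/path;branch=master",
                      "git://host/path", lambda: "cafef00d"))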
def sortable_revision(self, url, ud, d):
"""
"""
localcounts = bb.persist_data.persist('BB_URI_LOCALCOUNT', d)
pd = bb.persist_data.persist(d)
localcounts = pd['BB_URI_LOCALCOUNT']
key = self.generate_revision_key(url, ud, d, branch=True)
oldkey = self.generate_revision_key(url, ud, d, branch=False)
latest_rev = self._build_revision(url, ud, d)
last_rev = localcounts.get(key + '_rev')
last_rev = localcounts[key + '_rev']
if last_rev is None:
last_rev = localcounts.get(oldkey + '_rev')
last_rev = localcounts[oldkey + '_rev']
if last_rev is not None:
del localcounts[oldkey + '_rev']
localcounts[key + '_rev'] = last_rev
@@ -281,9 +281,9 @@ class Git(Fetch):
if uselocalcount:
count = Fetch.localcount_internal_helper(ud, d)
if count is None:
count = localcounts.get(key + '_count')
count = localcounts[key + '_count']
if count is None:
count = localcounts.get(oldkey + '_count')
count = localcounts[oldkey + '_count']
if count is not None:
del localcounts[oldkey + '_count']
localcounts[key + '_count'] = count

View File

@@ -28,8 +28,10 @@ from __future__ import absolute_import
from __future__ import print_function
import os, re
import logging
import bb.data, bb.persist_data, bb.utils
from bb import data
import bb
from bb import data
from bb import persist_data
from bb import utils
__version__ = "2"
@@ -203,10 +205,7 @@ def uri_replace(ud, uri_find, uri_replace, d):
result_decoded[loc] = uri_decoded[loc]
if isinstance(i, basestring):
if (re.match(i, uri_decoded[loc])):
if not uri_replace_decoded[loc]:
result_decoded[loc] = ""
else:
result_decoded[loc] = re.sub(i, uri_replace_decoded[loc], uri_decoded[loc])
result_decoded[loc] = re.sub(i, uri_replace_decoded[loc], uri_decoded[loc])
if uri_find_decoded.index(i) == 2:
if ud.mirrortarball:
result_decoded[loc] = os.path.join(os.path.dirname(result_decoded[loc]), os.path.basename(ud.mirrortarball))
@@ -225,18 +224,18 @@ def fetcher_init(d):
Called to initialize the fetchers once the configuration data is known.
Calls before this must not hit the cache.
"""
pd = persist_data.persist(d)
# When to drop SCM head revisions controlled by user policy
srcrev_policy = bb.data.getVar('BB_SRCREV_POLICY', d, True) or "clear"
if srcrev_policy == "cache":
logger.debug(1, "Keeping SRCREV cache due to cache policy of: %s", srcrev_policy)
elif srcrev_policy == "clear":
logger.debug(1, "Clearing SRCREV cache due to cache policy of: %s", srcrev_policy)
revs = bb.persist_data.persist('BB_URI_HEADREVS', d)
try:
bb.fetch2.saved_headrevs = revs.items()
bb.fetch2.saved_headrevs = pd['BB_URI_HEADREVS'].items()
except:
pass
revs.clear()
del pd['BB_URI_HEADREVS']
else:
raise FetchError("Invalid SRCREV cache policy of: %s" % srcrev_policy)
@@ -250,7 +249,8 @@ def fetcher_compare_revisions(d):
return true/false on whether they've changed.
"""
data = bb.persist_data.persist('BB_URI_HEADREVS', d).items()
pd = persist_data.persist(d)
data = pd['BB_URI_HEADREVS'].items()
data2 = bb.fetch2.saved_headrevs
changed = False
@@ -300,22 +300,6 @@ def verify_checksum(u, ud, d):
if ud.sha256_expected != sha256data:
raise SHA256SumError(ud.localpath, ud.sha256_expected, sha256data, u)
def update_stamp(u, ud, d):
"""
donestamp is a file stamp indicating that the whole fetch is done;
this function updates the stamp after verifying the checksum.
"""
if os.path.exists(ud.donestamp):
# Touch the done stamp file to show active use of the download
try:
os.utime(ud.donestamp, None)
except:
# Errors aren't fatal here
pass
else:
verify_checksum(u, ud, d)
open(ud.donestamp, 'w').close()
def subprocess_setup():
import signal
# Python installs a SIGPIPE handler by default. This is usually not what
@@ -368,7 +352,7 @@ def get_srcrev(d):
def localpath(url, d):
fetcher = bb.fetch2.Fetch([url], d)
return fetcher.localpath(url)
return fetcher.localpath(url)
def runfetchcmd(cmd, d, quiet = False, cleanup = []):
"""
@@ -388,7 +372,7 @@ def runfetchcmd(cmd, d, quiet = False, cleanup = []):
'SSH_AUTH_SOCK', 'SSH_AGENT_PID', 'HOME']
for var in exportvars:
val = bb.data.getVar(var, d, True)
val = data.getVar(var, d, True)
if val:
cmd = 'export ' + var + '=\"%s\"; %s' % (val, cmd)
@@ -514,15 +498,15 @@ def srcrev_internal_helper(ud, d, name):
return ud.parm['tag']
rev = None
pn = bb.data.getVar("PN", d, True)
pn = data.getVar("PN", d, True)
if name != '':
rev = bb.data.getVar("SRCREV_%s_pn-%s" % (name, pn), d, True)
rev = data.getVar("SRCREV_%s_pn-%s" % (name, pn), d, True)
if not rev:
rev = bb.data.getVar("SRCREV_%s" % name, d, True)
rev = data.getVar("SRCREV_%s" % name, d, True)
if not rev:
rev = bb.data.getVar("SRCREV_pn-%s" % pn, d, True)
rev = data.getVar("SRCREV_pn-%s" % pn, d, True)
if not rev:
rev = bb.data.getVar("SRCREV", d, True)
rev = data.getVar("SRCREV", d, True)
if rev == "INVALID":
raise FetchError("Please set SRCREV to a valid value", ud.url)
if rev == "AUTOINC":
@@ -608,12 +592,12 @@ class FetchData(object):
if "srcdate" in self.parm:
return self.parm['srcdate']
pn = bb.data.getVar("PN", d, True)
pn = data.getVar("PN", d, True)
if pn:
return bb.data.getVar("SRCDATE_%s" % pn, d, True) or bb.data.getVar("SRCDATE", d, True) or bb.data.getVar("DATE", d, True)
return data.getVar("SRCDATE_%s" % pn, d, True) or data.getVar("SRCDATE", d, True) or data.getVar("DATE", d, True)
return bb.data.getVar("SRCDATE", d, True) or bb.data.getVar("DATE", d, True)
return data.getVar("SRCDATE", d, True) or data.getVar("DATE", d, True)
class FetchMethod(object):
"""Base class for 'fetch'ing data"""
@@ -679,7 +663,7 @@ class FetchMethod(object):
try:
unpack = bb.utils.to_boolean(urldata.parm.get('unpack'), True)
except ValueError as exc:
except ValueError, exc:
bb.fatal("Invalid value for 'unpack' parameter for %s: %s" %
(file, urldata.parm.get('unpack')))
@@ -708,7 +692,7 @@ class FetchMethod(object):
elif file.endswith('.zip') or file.endswith('.jar'):
try:
dos = bb.utils.to_boolean(urldata.parm.get('dos'), False)
except ValueError as exc:
except ValueError, exc:
bb.fatal("Invalid value for 'dos' parameter for %s: %s" %
(file, urldata.parm.get('dos')))
cmd = 'unzip -q -o'
@@ -806,10 +790,10 @@ class FetchMethod(object):
localcount = None
if name != '':
pn = bb.data.getVar("PN", d, True)
localcount = bb.data.getVar("LOCALCOUNT_" + name, d, True)
pn = data.getVar("PN", d, True)
localcount = data.getVar("LOCALCOUNT_" + name, d, True)
if not localcount:
localcount = bb.data.getVar("LOCALCOUNT", d, True)
localcount = data.getVar("LOCALCOUNT", d, True)
return localcount
localcount_internal_helper = staticmethod(localcount_internal_helper)
@@ -821,13 +805,15 @@ class FetchMethod(object):
if not hasattr(self, "_latest_revision"):
raise ParameterError("The fetcher for this URL does not support _latest_revision", url)
revs = bb.persist_data.persist('BB_URI_HEADREVS', d)
pd = persist_data.persist(d)
revs = pd['BB_URI_HEADREVS']
key = self.generate_revision_key(url, ud, d, name)
try:
return revs[key]
except KeyError:
revs[key] = rev = self._latest_revision(url, ud, d, name)
return rev
rev = revs[key]
if rev != None:
return str(rev)
revs[key] = rev = self._latest_revision(url, ud, d, name)
return rev
def sortable_revision(self, url, ud, d, name):
"""
@@ -836,17 +822,18 @@ class FetchMethod(object):
if hasattr(self, "_sortable_revision"):
return self._sortable_revision(url, ud, d)
localcounts = bb.persist_data.persist('BB_URI_LOCALCOUNT', d)
pd = persist_data.persist(d)
localcounts = pd['BB_URI_LOCALCOUNT']
key = self.generate_revision_key(url, ud, d, name)
latest_rev = self._build_revision(url, ud, d, name)
last_rev = localcounts.get(key + '_rev')
last_rev = localcounts[key + '_rev']
uselocalcount = bb.data.getVar("BB_LOCALCOUNT_OVERRIDE", d, True) or False
count = None
if uselocalcount:
count = FetchMethod.localcount_internal_helper(ud, d, name)
if count is None:
count = localcounts.get(key + '_count') or "0"
count = localcounts[key + '_count'] or "0"
if last_rev == latest_rev:
return str(count + "+" + latest_rev)
@@ -948,9 +935,6 @@ class Fetch(object):
if hasattr(m, "build_mirror_data"):
m.build_mirror_data(u, ud, self.d)
localpath = ud.localpath
# early checksum verify, so that if checksum mismatched,
# fetcher still have chance to fetch from mirror
update_stamp(u, ud, self.d)
except bb.fetch2.NetworkAccess:
raise
@@ -967,7 +951,17 @@ class Fetch(object):
if not localpath or ((not os.path.exists(localpath)) and localpath.find("*") == -1):
raise FetchError("Unable to fetch URL %s from any source." % u, u)
update_stamp(u, ud, self.d)
if os.path.exists(ud.donestamp):
# Touch the done stamp file to show active use of the download
try:
os.utime(ud.donestamp, None)
except:
# Errors aren't fatal here
pass
else:
# Only check the checksums if we've not seen this item before, then create the stamp
verify_checksum(u, ud, self.d)
open(ud.donestamp, 'w').close()
finally:
bb.utils.unlockfile(lf)
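The donestamp handling moved inline above follows a simple pattern: touch the stamp if it already exists, otherwise verify checksums once and create it. A sketch under those assumptions (the path and verify hook below are illustrative):

import os

def update_stamp(donestamp, verify_checksum):
    if os.path.exists(donestamp):
        try:
            os.utime(donestamp, None)      # just touch it; errors are not fatal
        except OSError:
            pass
    else:
        verify_checksum()                  # only verified the first time
        open(donestamp, "w").close()

update_stamp("/tmp/example.done", lambda: print("checksums verified"))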

View File

@@ -66,15 +66,15 @@ class Bzr(FetchMethod):
options = []
if command == "revno":
if command is "revno":
bzrcmd = "%s revno %s %s://%s" % (basecmd, " ".join(options), proto, bzrroot)
else:
if ud.revision:
options.append("-r %s" % ud.revision)
if command == "fetch":
if command is "fetch":
bzrcmd = "%s co %s %s://%s" % (basecmd, " ".join(options), proto, bzrroot)
elif command == "update":
elif command is "update":
bzrcmd = "%s pull %s --overwrite" % (basecmd, " ".join(options))
else:
raise FetchError("Invalid bzr command %s" % command, ud.url)

View File

@@ -3,41 +3,6 @@
"""
BitBake 'Fetch' git implementation
git fetcher support the SRC_URI with format of:
SRC_URI = "git://some.host/somepath;OptionA=xxx;OptionB=xxx;..."
Supported SRC_URI options are:
- branch
The git branch to retrieve from. The default is "master".
This option also supports fetching multiple branches; branches
are separated by commas. In the multiple-branch case, the name option
must supply the same number of names to match the branches; each name
is used to specify the SRCREV for its branch,
e.g.:
SRC_URI="git://some.host/somepath;branch=branchX,branchY;name=nameX,nameY"
SRCREV_nameX = "xxxxxxxxxxxxxxxxxxxx"
SRCREV_nameY = "YYYYYYYYYYYYYYYYYYYY"
- tag
The git tag to retrieve. The default is "master"
- protocol
The method to use to access the repository. Common options are "git",
"http", "file" and "rsync". The default is "git"
- rebaseable
rebaseable indicates that the upstream git repo may rebase in the future,
and the current revision may disappear from the upstream repo. This option
reminds the fetcher to preserve the local cache carefully for future use.
The default value is "0"; set rebaseable=1 for a rebaseable git repo.
- nocheckout
Don't check out source code when unpacking. Set this option for recipes
that have their own routine for checking out code.
The default is "0"; set nocheckout=1 if needed.
"""
#Copyright (C) 2005 Richard Purdie
@@ -86,14 +51,11 @@ class Git(FetchMethod):
elif not ud.host:
ud.proto = 'file'
else:
ud.proto = "git"
ud.proto = "rsync"
if not ud.proto in ('git', 'file', 'ssh', 'http', 'https'):
raise bb.fetch2.ParameterError("Invalid protocol type", ud.url)
ud.nocheckout = ud.parm.get("nocheckout","0") == "1"
ud.rebaseable = ud.parm.get("rebaseable","0") == "1"
ud.nocheckout = False
if 'nocheckout' in ud.parm:
ud.nocheckout = True
branches = ud.parm.get("branch", "master").split(',')
if len(branches) != len(ud.names):
@@ -103,9 +65,16 @@ class Git(FetchMethod):
branch = branches[ud.names.index(name)]
ud.branches[name] = branch
gitsrcname = '%s%s' % (ud.host, ud.path.replace('/', '.'))
ud.mirrortarball = 'git2_%s.tar.gz' % (gitsrcname)
ud.fullmirror = os.path.join(data.getVar("DL_DIR", d, True), ud.mirrortarball)
ud.clonedir = os.path.join(data.expand('${GITDIR}', d), gitsrcname)
ud.basecmd = data.getVar("FETCHCMD_git", d, True) or "git"
ud.write_tarballs = ((data.getVar("BB_GENERATE_MIRROR_TARBALLS", d, True) or "0") != "0") or ud.rebaseable
ud.write_tarballs = (data.getVar("BB_GENERATE_MIRROR_TARBALLS", d, True) or "0") != "0"
ud.localfile = ud.clonedir
ud.setup_revisons(d)
@@ -115,20 +84,6 @@ class Git(FetchMethod):
ud.branches[name] = ud.revisions[name]
ud.revisions[name] = self.latest_revision(ud.url, ud, d, name)
gitsrcname = '%s%s' % (ud.host, ud.path.replace('/', '.'))
# for rebaseable git repo, it is necessary to keep mirror tar ball
# per revision, so that even the revision disappears from the
# upstream repo in the future, the mirror will remain intact and still
# contains the revision
if ud.rebaseable:
for name in ud.names:
gitsrcname = gitsrcname + '_' + ud.revisions[name]
ud.mirrortarball = 'git2_%s.tar.gz' % (gitsrcname)
ud.fullmirror = os.path.join(data.getVar("DL_DIR", d, True), ud.mirrortarball)
ud.clonedir = os.path.join(data.expand('${GITDIR}', d), gitsrcname)
ud.localfile = ud.clonedir
def localpath(self, url, ud, d):
return ud.clonedir
@@ -170,10 +125,8 @@ class Git(FetchMethod):
# If the repo still doesn't exist, fallback to cloning it
if not os.path.exists(ud.clonedir):
clone_cmd = "%s clone --bare --mirror %s://%s%s%s %s" % \
(ud.basecmd, ud.proto, username, ud.host, ud.path, ud.clonedir)
bb.fetch2.check_network_access(d, clone_cmd)
runfetchcmd(clone_cmd, d)
bb.fetch2.check_network_access(d, "git clone --bare %s%s" % (ud.host, ud.path))
runfetchcmd("%s clone --bare %s://%s%s%s %s" % (ud.basecmd, ud.proto, username, ud.host, ud.path, ud.clonedir), d)
os.chdir(ud.clonedir)
# Update the checkout if needed
@@ -182,16 +135,15 @@ class Git(FetchMethod):
if not self._contains_ref(ud.revisions[name], d):
needupdate = True
if needupdate:
bb.fetch2.check_network_access(d, "git fetch %s%s" % (ud.host, ud.path), ud.url)
try:
runfetchcmd("%s remote prune origin" % ud.basecmd, d)
runfetchcmd("%s remote rm origin" % ud.basecmd, d)
except bb.fetch2.FetchError:
logger.debug(1, "No Origin")
runfetchcmd("%s remote add --mirror origin %s://%s%s%s" % (ud.basecmd, ud.proto, username, ud.host, ud.path), d)
fetch_cmd = "%s fetch --all -t" % ud.basecmd
bb.fetch2.check_network_access(d, fetch_cmd, ud.url)
runfetchcmd(fetch_cmd, d)
runfetchcmd("%s remote add origin %s://%s%s%s" % (ud.basecmd, ud.proto, username, ud.host, ud.path), d)
runfetchcmd("%s fetch --all -t" % ud.basecmd, d)
runfetchcmd("%s prune-packed" % ud.basecmd, d)
runfetchcmd("%s pack-redundant --all | xargs -r rm" % ud.basecmd, d)
ud.repochanged = True
@@ -219,11 +171,8 @@ class Git(FetchMethod):
runfetchcmd("git clone -s -n %s %s" % (ud.clonedir, destdir), d)
if not ud.nocheckout:
os.chdir(destdir)
if subdir != "":
runfetchcmd("%s read-tree %s%s" % (ud.basecmd, ud.revisions[ud.names[0]], readpathspec), d)
runfetchcmd("%s checkout-index -q -f -a" % ud.basecmd, d)
else:
runfetchcmd("%s checkout %s" % (ud.basecmd, ud.revisions[ud.names[0]]), d)
runfetchcmd("%s read-tree %s%s" % (ud.basecmd, ud.revisions[ud.names[0]], readpathspec), d)
runfetchcmd("%s checkout-index -q -f -a" % ud.basecmd, d)
return True
def clean(self, ud, d):
@@ -255,10 +204,9 @@ class Git(FetchMethod):
else:
username = ""
bb.fetch2.check_network_access(d, "git ls-remote %s%s %s" % (ud.host, ud.path, ud.branches[name]))
basecmd = data.getVar("FETCHCMD_git", d, True) or "git"
cmd = "%s ls-remote %s://%s%s%s %s" % \
(basecmd, ud.proto, username, ud.host, ud.path, ud.branches[name])
bb.fetch2.check_network_access(d, cmd)
cmd = "%s ls-remote %s://%s%s%s %s" % (basecmd, ud.proto, username, ud.host, ud.path, ud.branches[name])
output = runfetchcmd(cmd, d, True)
if not output:
raise bb.fetch2.FetchError("The command %s gave empty output unexpectedly" % cmd, url)
@@ -278,13 +226,10 @@ class Git(FetchMethod):
# Check if we have the rev already
if not os.path.exists(ud.clonedir):
logging.debug("GIT repository for %s does not exist in %s. \
Downloading.", url, ud.clonedir)
print("no repo")
self.download(None, ud, d)
if not os.path.exists(ud.clonedir):
logger.error("GIT repository for %s does not exist in %s after \
download. Cannot get sortable buildnumber, using \
old value", url, ud.clonedir)
logger.error("GIT repository for %s doesn't exist in %s, cannot get sortable buildnumber, using old value", url, ud.clonedir)
return None
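For the rebaseable handling shown in this hunk, the mirror tarball name gains the revision so the tarball stays usable even if the commit later disappears upstream; roughly (host, path, revision and download directory below are examples):

import os

def mirror_tarball(host, path, revisions, rebaseable, dl_dir="/downloads"):
    gitsrcname = "%s%s" % (host, path.replace("/", "."))
    if rebaseable:
        # one tarball per revision, so a rebased-away commit is still cached
        for rev in revisions:
            gitsrcname += "_" + rev
    tarball = "git2_%s.tar.gz" % gitsrcname
    return os.path.join(dl_dir, tarball)

print(mirror_tarball("git.example.com", "/proj/repo", ["deadbeef"], True))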

View File

@@ -94,21 +94,21 @@ class Hg(FetchMethod):
else:
hgroot = ud.user + "@" + host + ud.path
if command == "info":
if command is "info":
return "%s identify -i %s://%s/%s" % (basecmd, proto, hgroot, ud.module)
options = [];
if ud.revision:
options.append("-r %s" % ud.revision)
if command == "fetch":
if command is "fetch":
cmd = "%s clone %s %s://%s/%s %s" % (basecmd, " ".join(options), proto, hgroot, ud.module, ud.module)
elif command == "pull":
elif command is "pull":
# do not pass options list; limiting pull to rev causes the local
# repo not to contain it and immediately following "update" command
# will crash
cmd = "%s pull" % (basecmd)
elif command == "update":
elif command is "update":
cmd = "%s update -C %s" % (basecmd, " ".join(options))
else:
raise FetchError("Invalid hg command %s" % command, ud.url)

View File

@@ -68,9 +68,9 @@ class Osc(FetchMethod):
coroot = self._strip_leading_slashes(ud.path)
if command == "fetch":
if command is "fetch":
osccmd = "%s %s co %s/%s %s" % (basecmd, config, coroot, ud.module, " ".join(options))
elif command == "update":
elif command is "update":
osccmd = "%s %s up %s" % (basecmd, config, " ".join(options))
else:
raise FetchError("Invalid osc command %s" % command, ud.url)

View File

@@ -87,7 +87,7 @@ class Svn(FetchMethod):
if ud.pswd:
options.append("--password %s" % ud.pswd)
if command == "info":
if command is "info":
svncmd = "%s info %s %s://%s/%s/" % (basecmd, " ".join(options), proto, svnroot, ud.module)
else:
suffix = ""
@@ -95,9 +95,9 @@ class Svn(FetchMethod):
options.append("-r %s" % ud.revision)
suffix = "@%s" % (ud.revision)
if command == "fetch":
if command is "fetch":
svncmd = "%s co %s %s://%s/%s%s %s" % (basecmd, " ".join(options), proto, svnroot, ud.module, suffix, ud.module)
elif command == "update":
elif command is "update":
svncmd = "%s update %s" % (basecmd, " ".join(options))
else:
raise FetchError("Invalid svn command %s" % command, ud.url)

View File

@@ -65,15 +65,9 @@ class BBLogFormatter(logging.Formatter):
def format(self, record):
record.levelname = self.getLevelName(record.levelno)
if record.levelno == self.PLAIN:
msg = record.getMessage()
return record.getMessage()
else:
msg = logging.Formatter.format(self, record)
if hasattr(record, 'bb_exc_info'):
etype, value, tb = record.bb_exc_info
formatted = bb.exceptions.format_exception(etype, value, tb, limit=5)
msg += '\n' + ''.join(formatted)
return msg
return logging.Formatter.format(self, record)
class Loggers(dict):
def __getitem__(self, key):
@@ -153,8 +147,8 @@ def set_debug_domains(domainargs):
#
def debug(level, msgdomain, msg):
warnings.warn("bb.msg.debug is deprecated in favor of the python 'logging' module",
DeprecationWarning, stacklevel=2)
warnings.warn("bb.msg.debug will soon be deprecated in favor of the python 'logging' module",
PendingDeprecationWarning, stacklevel=2)
level = logging.DEBUG - (level - 1)
if not msgdomain:
logger.debug(level, msg)
@@ -162,13 +156,13 @@ def debug(level, msgdomain, msg):
loggers[msgdomain].debug(level, msg)
def plain(msg):
warnings.warn("bb.msg.plain is deprecated in favor of the python 'logging' module",
DeprecationWarning, stacklevel=2)
warnings.warn("bb.msg.plain will soon be deprecated in favor of the python 'logging' module",
PendingDeprecationWarning, stacklevel=2)
logger.plain(msg)
def note(level, msgdomain, msg):
warnings.warn("bb.msg.note is deprecated in favor of the python 'logging' module",
DeprecationWarning, stacklevel=2)
warnings.warn("bb.msg.note will soon be deprecated in favor of the python 'logging' module",
PendingDeprecationWarning, stacklevel=2)
if level > 1:
if msgdomain:
logger.verbose(msg)
@@ -181,22 +175,24 @@ def note(level, msgdomain, msg):
loggers[msgdomain].info(msg)
def warn(msgdomain, msg):
warnings.warn("bb.msg.warn is deprecated in favor of the python 'logging' module",
DeprecationWarning, stacklevel=2)
warnings.warn("bb.msg.warn will soon be deprecated in favor of the python 'logging' module",
PendingDeprecationWarning, stacklevel=2)
if not msgdomain:
logger.warn(msg)
else:
loggers[msgdomain].warn(msg)
def error(msgdomain, msg):
warnings.warn("bb.msg.error is deprecated in favor of the python 'logging' module",
DeprecationWarning, stacklevel=2)
warnings.warn("bb.msg.error will soon be deprecated in favor of the python 'logging' module",
PendingDeprecationWarning, stacklevel=2)
if not msgdomain:
logger.error(msg)
else:
loggers[msgdomain].error(msg)
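The hunks above only switch the warning category and wording; the warnings.warn(..., stacklevel=2) calling pattern is unchanged. For reference, stacklevel=2 attributes the warning to the caller of the deprecated function:

import warnings

def old_api():
    warnings.warn("old_api is deprecated in favor of new_api",
                  PendingDeprecationWarning, stacklevel=2)

warnings.simplefilter("always")
old_api()   # stacklevel=2 makes the warning point at this call site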
def fatal(msgdomain, msg):
warnings.warn("bb.msg.fatal will soon be deprecated in favor of raising appropriate exceptions",
PendingDeprecationWarning, stacklevel=2)
if not msgdomain:
logger.critical(msg)
else:

View File

@@ -1,255 +0,0 @@
# http://code.activestate.com/recipes/577629-namedtupleabc-abstract-base-class-mix-in-for-named/
#!/usr/bin/env python
# Copyright (c) 2011 Jan Kaliszewski (zuo). Available under the MIT License.
"""
namedtuple_with_abc.py:
* named tuple mix-in + ABC (abstract base class) recipe,
* works under Python 2.6, 2.7 as well as 3.x.
Import this module to patch collections.namedtuple() factory function
-- enriching it with the 'abc' attribute (an abstract base class + mix-in
for named tuples) and decorating it with a wrapper that registers each
newly created named tuple as a subclass of namedtuple.abc.
How to import:
import collections, namedtuple_with_abc
or:
import namedtuple_with_abc
from collections import namedtuple
# ^ in this variant you must import namedtuple function
# *after* importing namedtuple_with_abc module
or simply:
from namedtuple_with_abc import namedtuple
Simple usage example:
class Credentials(namedtuple.abc):
_fields = 'username password'
def __str__(self):
return ('{0.__class__.__name__}'
'(username={0.username}, password=...)'.format(self))
print(Credentials("alice", "Alice's password"))
For more advanced examples -- see below the "if __name__ == '__main__':".
"""
import collections
from abc import ABCMeta, abstractproperty
from functools import wraps
from sys import version_info
__all__ = ('namedtuple',)
_namedtuple = collections.namedtuple
class _NamedTupleABCMeta(ABCMeta):
'''The metaclass for the abstract base class + mix-in for named tuples.'''
def __new__(mcls, name, bases, namespace):
fields = namespace.get('_fields')
for base in bases:
if fields is not None:
break
fields = getattr(base, '_fields', None)
if not isinstance(fields, abstractproperty):
basetuple = _namedtuple(name, fields)
bases = (basetuple,) + bases
namespace.pop('_fields', None)
namespace.setdefault('__doc__', basetuple.__doc__)
namespace.setdefault('__slots__', ())
return ABCMeta.__new__(mcls, name, bases, namespace)
exec(
# Python 2.x metaclass declaration syntax
"""class _NamedTupleABC(object):
'''The abstract base class + mix-in for named tuples.'''
__metaclass__ = _NamedTupleABCMeta
_fields = abstractproperty()""" if version_info[0] < 3 else
# Python 3.x metaclass declaration syntax
"""class _NamedTupleABC(metaclass=_NamedTupleABCMeta):
'''The abstract base class + mix-in for named tuples.'''
_fields = abstractproperty()"""
)
_namedtuple.abc = _NamedTupleABC
#_NamedTupleABC.register(type(version_info)) # (and similar, in the future...)
@wraps(_namedtuple)
def namedtuple(*args, **kwargs):
'''Named tuple factory with namedtuple.abc subclass registration.'''
cls = _namedtuple(*args, **kwargs)
_NamedTupleABC.register(cls)
return cls
collections.namedtuple = namedtuple

if __name__ == '__main__':

    '''Examples and explanations'''

    # Simple usage
    class MyRecord(namedtuple.abc):
        _fields = 'x y z'  # such form will be transformed into ('x', 'y', 'z')
        def _my_custom_method(self):
            return list(self._asdict().items())
    # (the '_fields' attribute belongs to the named tuple public API anyway)
    rec = MyRecord(1, 2, 3)
    print(rec)
    print(rec._my_custom_method())
    print(rec._replace(y=222))
    print(rec._replace(y=222)._my_custom_method())

    # Custom abstract classes...
    class MyAbstractRecord(namedtuple.abc):
        def _my_custom_method(self):
            return list(self._asdict().items())
    try:
        MyAbstractRecord()  # (abstract classes cannot be instantiated)
    except TypeError as exc:
        print(exc)

    class AnotherAbstractRecord(MyAbstractRecord):
        def __str__(self):
            return '<<<{0}>>>'.format(super(AnotherAbstractRecord,
                                            self).__str__())

    # ...and their non-abstract subclasses
    class MyRecord2(MyAbstractRecord):
        _fields = 'a, b'
    class MyRecord3(AnotherAbstractRecord):
        _fields = 'p', 'q', 'r'
    rec2 = MyRecord2('foo', 'bar')
    print(rec2)
    print(rec2._my_custom_method())
    print(rec2._replace(b=222))
    print(rec2._replace(b=222)._my_custom_method())
    rec3 = MyRecord3('foo', 'bar', 'baz')
    print(rec3)
    print(rec3._my_custom_method())
    print(rec3._replace(q=222))
    print(rec3._replace(q=222)._my_custom_method())

    # You can also subclass non-abstract ones...
    class MyRecord33(MyRecord3):
        def __str__(self):
            return '< {0!r}, ..., {1!r} >'.format(self.p, self.r)
    rec33 = MyRecord33('foo', 'bar', 'baz')
    print(rec33)
    print(rec33._my_custom_method())
    print(rec33._replace(q=222))
    print(rec33._replace(q=222)._my_custom_method())

    # ...and even override the magic '_fields' attribute again
    class MyRecord345(MyRecord3):
        _fields = 'e f g h i j k'
    rec345 = MyRecord345(1, 2, 3, 4, 3, 2, 1)
    print(rec345)
    print(rec345._my_custom_method())
    print(rec345._replace(f=222))
    print(rec345._replace(f=222)._my_custom_method())

    # Mixing-in some other classes is also possible:
    class MyMixIn(object):
        def method(self):
            return "MyMixIn.method() called"
        def _my_custom_method(self):
            return "MyMixIn._my_custom_method() called"
        def count(self, item):
            return "MyMixIn.count({0}) called".format(item)
        def _asdict(self):  # (cannot override a namedtuple method, see below)
            return "MyMixIn._asdict() called"
    class MyRecord4(MyRecord33, MyMixIn):  # mix-in on the right
        _fields = 'j k l x'
    class MyRecord5(MyMixIn, MyRecord33):  # mix-in on the left
        _fields = 'j k l x y'
    rec4 = MyRecord4(1, 2, 3, 2)
    print(rec4)
    print(rec4.method())
    print(rec4._my_custom_method())  # MyRecord33's
    print(rec4.count(2))  # tuple's
    print(rec4._replace(k=222))
    print(rec4._replace(k=222).method())
    print(rec4._replace(k=222)._my_custom_method())  # MyRecord33's
    print(rec4._replace(k=222).count(8))  # tuple's
    rec5 = MyRecord5(1, 2, 3, 2, 1)
    print(rec5)
    print(rec5.method())
    print(rec5._my_custom_method())  # MyMixIn's
    print(rec5.count(2))  # MyMixIn's
    print(rec5._replace(k=222))
    print(rec5._replace(k=222).method())
    print(rec5._replace(k=222)._my_custom_method())  # MyMixIn's
    print(rec5._replace(k=222).count(2))  # MyMixIn's
    # Note this behavior: the standard namedtuple methods cannot be
    # overridden by a foreign mix-in -- even if the mix-in is declared
    # as the leftmost base class (but, obviously, you can override them
    # in the defined class or its subclasses):
    print(rec4._asdict())  # (returns a dict, not "MyMixIn._asdict() called")
    print(rec5._asdict())  # (returns a dict, not "MyMixIn._asdict() called")

    class MyRecord6(MyRecord33):
        _fields = 'j k l x y z'
        def _asdict(self):
            return "MyRecord6._asdict() called"
    rec6 = MyRecord6(1, 2, 3, 1, 2, 3)
    print(rec6._asdict())  # (this returns "MyRecord6._asdict() called")

    # All these record classes are real subclasses of namedtuple.abc:
    assert issubclass(MyRecord, namedtuple.abc)
    assert issubclass(MyAbstractRecord, namedtuple.abc)
    assert issubclass(AnotherAbstractRecord, namedtuple.abc)
    assert issubclass(MyRecord2, namedtuple.abc)
    assert issubclass(MyRecord3, namedtuple.abc)
    assert issubclass(MyRecord33, namedtuple.abc)
    assert issubclass(MyRecord345, namedtuple.abc)
    assert issubclass(MyRecord4, namedtuple.abc)
    assert issubclass(MyRecord5, namedtuple.abc)
    assert issubclass(MyRecord6, namedtuple.abc)

    # ...but abstract ones are not subclasses of tuple
    # (and this is what you probably want):
    assert not issubclass(MyAbstractRecord, tuple)
    assert not issubclass(AnotherAbstractRecord, tuple)
    assert issubclass(MyRecord, tuple)
    assert issubclass(MyRecord2, tuple)
    assert issubclass(MyRecord3, tuple)
    assert issubclass(MyRecord33, tuple)
    assert issubclass(MyRecord345, tuple)
    assert issubclass(MyRecord4, tuple)
    assert issubclass(MyRecord5, tuple)
    assert issubclass(MyRecord6, tuple)

    # Named tuple classes created with namedtuple() factory function
    # (in the "traditional" way) are registered as "virtual" subclasses
    # of namedtuple.abc:
    MyTuple = namedtuple('MyTuple', 'a b c')
    mt = MyTuple(1, 2, 3)
    assert issubclass(MyTuple, namedtuple.abc)
    assert isinstance(mt, namedtuple.abc)
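
A minimal usage sketch (not part of the file above): it assumes the file is saved as namedtuple_with_abc.py on the import path, and the names Point, Host and describe() are made up for illustration. The point is that a single isinstance() check against namedtuple.abc accepts both records declared through the ABC and named tuples created with the patched factory.

# Hypothetical sketch -- assumes the file above is importable as namedtuple_with_abc.
import collections
import namedtuple_with_abc  # patches collections.namedtuple on import

Point = collections.namedtuple('Point', 'x y')   # factory-made, registered as a virtual subclass

class Host(collections.namedtuple.abc):          # declared through the mix-in ABC
    _fields = 'name port'

def describe(record):
    """Accept any named tuple, however it was created."""
    if isinstance(record, collections.namedtuple.abc):
        return dict(record._asdict())
    raise TypeError('expected a named tuple, got %r' % type(record).__name__)

print(describe(Point(1, 2)))            # {'x': 1, 'y': 2}  (key order may vary)
print(describe(Host('gateway', 8080)))  # {'name': 'gateway', 'port': 8080}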

@@ -84,9 +84,9 @@ class DataNode(AstNode):
def getFunc(self, key, data):
if 'flag' in self.groupd and self.groupd['flag'] != None:
return data.getVarFlag(key, self.groupd['flag'], noweakdefault=True)
return bb.data.getVarFlag(key, self.groupd['flag'], data)
else:
return data.getVar(key, noweakdefault=True)
return bb.data.getVar(key, data)
def eval(self, data):
groupd = self.groupd
@@ -100,7 +100,7 @@ class DataNode(AstNode):
elif "colon" in groupd and groupd["colon"] != None:
e = data.createCopy()
bb.data.update_data(e)
val = bb.data.expand(groupd["value"], e, key + "[:=]")
val = bb.data.expand(groupd["value"], e)
elif "append" in groupd and groupd["append"] != None:
val = "%s %s" % ((self.getFunc(key, data) or ""), groupd["value"])
elif "prepend" in groupd and groupd["prepend"] != None:
@@ -307,14 +307,6 @@ def handleInherit(statements, filename, lineno, m):
statements.append(InheritNode(filename, lineno, classes.split()))
def finalize(fn, d, variant = None):
all_handlers = {}
for var in bb.data.getVar('__BBHANDLERS', d) or []:
# try to add the handler
handler = bb.data.getVar(var, d)
bb.event.register(var, handler)
bb.event.fire(bb.event.RecipePreFinalise(fn), d)
bb.data.expandKeys(d)
bb.data.update_data(d)
code = []
@@ -323,6 +315,12 @@ def finalize(fn, d, variant = None):
bb.utils.simple_exec("\n".join(code), {"d": d})
bb.data.update_data(d)
all_handlers = {}
for var in bb.data.getVar('__BBHANDLERS', d) or []:
# try to add the handler
handler = bb.data.getVar(var, d)
bb.event.register(var, handler)
tasklist = bb.data.getVar('__BBTASKS', d) or []
bb.build.add_tasks(tasklist, d)
@@ -371,14 +369,12 @@ def multi_finalize(fn, d):
logger.debug(2, "Appending .bbappend file %s to %s", append, fn)
bb.parse.BBHandler.handle(append, d, True)
onlyfinalise = d.getVar("__ONLYFINALISE", False)
safe_d = d
d = bb.data.createCopy(safe_d)
try:
finalize(fn, d)
except bb.parse.SkipPackage as e:
bb.data.setVar("__SKIPPED", e.args[0], d)
except bb.parse.SkipPackage:
bb.data.setVar("__SKIPPED", True, d)
datastores = {"": safe_d}
versions = (d.getVar("BBVERSIONS", True) or "").split()
@@ -420,46 +416,27 @@ def multi_finalize(fn, d):
verfunc(pv, d, safe_d)
try:
finalize(fn, d)
except bb.parse.SkipPackage as e:
bb.data.setVar("__SKIPPED", e.args[0], d)
except bb.parse.SkipPackage:
bb.data.setVar("__SKIPPED", True, d)
_create_variants(datastores, versions, verfunc)
extended = d.getVar("BBCLASSEXTEND", True) or ""
if extended:
# the following is to support bbextends with argument, for e.g. multilib
# an example is as follow:
# BBCLASSEXTEND = "multilib:lib32"
# it will create foo-lib32, inheriting multilib.bbclass and set
# CURRENTEXTEND to "lib32"
extendedmap = {}
for ext in extended.split():
eext = ext.split(':')
if len(eext) > 1:
extendedmap[eext[1]] = eext[0]
else:
extendedmap[ext] = ext
pn = d.getVar("PN", True)
def extendfunc(name, d):
if name != extendedmap[name]:
d.setVar("BBEXTENDCURR", extendedmap[name])
d.setVar("BBEXTENDVARIANT", name)
else:
d.setVar("PN", "%s-%s" % (pn, name))
bb.parse.BBHandler.inherit([extendedmap[name]], d)
d.setVar("PN", "%s-%s" % (pn, name))
bb.parse.BBHandler.inherit([name], d)
safe_d.setVar("BBCLASSEXTEND", extended)
_create_variants(datastores, extendedmap.keys(), extendfunc)
_create_variants(datastores, extended.split(), extendfunc)
for variant, variant_d in datastores.iteritems():
if variant:
try:
if not onlyfinalise or variant in onlyfinalise:
finalize(fn, variant_d, variant)
except bb.parse.SkipPackage as e:
bb.data.setVar("__SKIPPED", e.args[0], variant_d)
finalize(fn, variant_d, variant)
except bb.parse.SkipPackage:
bb.data.setVar("__SKIPPED", True, variant_d)
if len(datastores) > 1:
variants = filter(None, datastores.iterkeys())

@@ -26,8 +26,7 @@ import logging
import os.path
import sys
import warnings
from bb.compat import total_ordering
from collections import Mapping
import bb.msg, bb.data, bb.utils
try:
import sqlite3
@@ -40,11 +39,8 @@ if sqlversion[0] < 3 or (sqlversion[0] == 3 and sqlversion[1] < 3):
logger = logging.getLogger("BitBake.PersistData")
if hasattr(sqlite3, 'enable_shared_cache'):
sqlite3.enable_shared_cache(True)
@total_ordering
class SQLTable(collections.MutableMapping):
"""Object representing a table/domain in the database"""
def __init__(self, cursor, table):
@@ -66,31 +62,16 @@ class SQLTable(collections.MutableMapping):
continue
raise
def __enter__(self):
self.cursor.__enter__()
return self
def __exit__(self, *excinfo):
self.cursor.__exit__(*excinfo)
def __getitem__(self, key):
data = self._execute("SELECT * from %s where key=?;" %
self.table, [key])
for row in data:
return row[1]
raise KeyError(key)
def __delitem__(self, key):
if key not in self:
raise KeyError(key)
self._execute("DELETE from %s where key=?;" % self.table, [key])
def __setitem__(self, key, value):
if not isinstance(key, basestring):
raise TypeError('Only string keys are supported')
elif not isinstance(value, basestring):
raise TypeError('Only string values are supported')
data = self._execute("SELECT * from %s where key=?;" %
self.table, [key])
exists = len(list(data))
@@ -111,40 +92,53 @@ class SQLTable(collections.MutableMapping):
def __iter__(self):
data = self._execute("SELECT key FROM %s;" % self.table)
return (row[0] for row in data)
for row in data:
yield row[0]
def __lt__(self, other):
if not isinstance(other, Mapping):
raise NotImplemented
return len(self) < len(other)
def values(self):
return list(self.itervalues())
def iteritems(self):
data = self._execute("SELECT * FROM %s;" % self.table)
for row in data:
yield row[0], row[1]
def itervalues(self):
data = self._execute("SELECT value FROM %s;" % self.table)
return (row[0] for row in data)
for row in data:
yield row[0]
def items(self):
return list(self.iteritems())
def iteritems(self):
return self._execute("SELECT * FROM %s;" % self.table)
class SQLData(object):
"""Object representing the persistent data"""
def __init__(self, filename):
bb.utils.mkdirhier(os.path.dirname(filename))
def clear(self):
self._execute("DELETE FROM %s;" % self.table)
self.filename = filename
self.connection = sqlite3.connect(filename, timeout=30,
isolation_level=None)
self.cursor = self.connection.cursor()
self._tables = {}
def has_key(self, key):
return key in self
def __getitem__(self, table):
if not isinstance(table, basestring):
raise TypeError("table argument must be a string, not '%s'" %
type(table))
if table in self._tables:
return self._tables[table]
else:
tableobj = self._tables[table] = SQLTable(self.cursor, table)
return tableobj
def __delitem__(self, table):
if table in self._tables:
del self._tables[table]
self.cursor.execute("DROP TABLE IF EXISTS %s;" % table)
class PersistData(object):
"""Deprecated representation of the bitbake persistent data store"""
def __init__(self, d):
warnings.warn("Use of PersistData is deprecated. Please use "
"persist(domain, d) instead.",
category=DeprecationWarning,
warnings.warn("Use of PersistData will be deprecated in the future",
category=PendingDeprecationWarning,
stacklevel=2)
self.data = persist(d)
@@ -187,19 +181,14 @@ class PersistData(object):
"""
del self.data[domain][key]
def connect(database):
return sqlite3.connect(database, timeout=30, isolation_level=None)
def persist(domain, d):
"""Convenience factory for SQLTable objects based upon metadata"""
import bb.data, bb.utils
def persist(d):
"""Convenience factory for construction of SQLData based upon metadata"""
cachedir = (bb.data.getVar("PERSISTENT_DIR", d, True) or
bb.data.getVar("CACHE", d, True))
if not cachedir:
logger.critical("Please set the 'PERSISTENT_DIR' or 'CACHE' variable")
sys.exit(1)
bb.utils.mkdirhier(cachedir)
cachefile = os.path.join(cachedir, "bb_persist_data.sqlite3")
connection = connect(cachefile)
return SQLTable(connection, domain)
return SQLData(cachefile)

@@ -93,7 +93,7 @@ def run(cmd, input=None, log=None, **options):
try:
pipe = Popen(cmd, **options)
except OSError as exc:
except OSError, exc:
if exc.errno == 2:
raise NotFoundError(cmd)
else:

@@ -84,10 +84,10 @@ def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
preferred_ver = None
localdata = data.createCopy(cfgData)
bb.data.setVar('OVERRIDES', "%s:pn-%s:%s" % (data.getVar('OVERRIDES', localdata), pn, pn), localdata)
bb.data.setVar('OVERRIDES', "pn-%s:%s:%s" % (pn, pn, data.getVar('OVERRIDES', localdata)), localdata)
bb.data.update_data(localdata)
preferred_v = bb.data.getVar('PREFERRED_VERSION', localdata, True)
preferred_v = bb.data.getVar('PREFERRED_VERSION_%s' % pn, localdata, True)
if preferred_v:
m = re.match('(\d+:)*(.*)(_.*)*', preferred_v)
if m:

@@ -151,7 +151,7 @@ def builtin_trap(name, args, interp, env, stdin, stdout, stderr, debugflags):
for sig in args[1:]:
try:
env.traps[sig] = action
except Exception as e:
except Exception, e:
stderr.write('trap: %s\n' % str(e))
return 0
@@ -214,7 +214,7 @@ def utility_cat(name, args, interp, env, stdin, stdout, stderr, debugflags):
data = f.read()
finally:
f.close()
except IOError as e:
except IOError, e:
if e.errno != errno.ENOENT:
raise
status = 1
@@ -433,7 +433,7 @@ def utility_mkdir(name, args, interp, env, stdin, stdout, stderr, debugflags):
if option.has_p:
try:
os.makedirs(path)
except IOError as e:
except IOError, e:
if e.errno != errno.EEXIST:
raise
else:
@@ -561,7 +561,7 @@ def utility_sort(name, args, interp, env, stdin, stdout, stderr, debugflags):
lines = f.readlines()
finally:
f.close()
except IOError as e:
except IOError, e:
stderr.write(str(e) + '\n')
return 1
@@ -679,7 +679,7 @@ def run_command(name, args, interp, env, stdin, stdout,
p = subprocess.Popen([name] + args, cwd=env['PWD'], env=exec_env,
stdin=stdin, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
out, err = p.communicate()
except WindowsError as e:
except WindowsError, e:
raise UtilityError(str(e))
if not unixoutput:

@@ -248,7 +248,7 @@ class Redirections:
raise NotImplementedError('cannot open absolute path %s' % repr(filename))
else:
f = file(filename, mode+'b')
except IOError as e:
except IOError, e:
raise RedirectionError(str(e))
wrapper = None
@@ -368,7 +368,7 @@ def resolve_shebang(path, ignoreshell=False):
if arg is None:
return [cmd, win32_to_unix_path(path)]
return [cmd, arg, win32_to_unix_path(path)]
except IOError as e:
except IOError, e:
if e.errno!=errno.ENOENT and \
(e.errno!=errno.EPERM and not os.path.isdir(path)): # Opening a directory raises EPERM
raise
@@ -747,7 +747,7 @@ class Interpreter:
for cmd in cmds:
try:
status = self.execute(cmd)
except ExitSignal as e:
except ExitSignal, e:
if sourced:
raise
status = int(e.args[0])
@@ -758,13 +758,13 @@ class Interpreter:
if 'debug-utility' in self._debugflags or 'debug-cmd' in self._debugflags:
self.log('returncode ' + str(status)+ '\n')
return status
except CommandNotFound as e:
except CommandNotFound, e:
print >>self._redirs.stderr, str(e)
self._redirs.stderr.flush()
# Command not found by non-interactive shell
# return 127
raise
except RedirectionError as e:
except RedirectionError, e:
# TODO: should be handled depending on the utility status
print >>self._redirs.stderr, str(e)
self._redirs.stderr.flush()
@@ -948,7 +948,7 @@ class Interpreter:
status = self.execute(func, redirs)
finally:
redirs.close()
except ReturnSignal as e:
except ReturnSignal, e:
status = int(e.args[0])
env['?'] = status
return status
@@ -1044,7 +1044,7 @@ class Interpreter:
except ReturnSignal:
raise
except ShellError as e:
except ShellError, e:
if is_special or isinstance(e, (ExitSignal,
ShellSyntaxError, ExpansionError)):
raise e

@@ -105,11 +105,6 @@ class RunQueueScheduler(object):
if self.rq.runq_running[taskid] == 1:
continue
if self.rq.runq_buildable[taskid] == 1:
fn = self.rqdata.taskData.fn_index[self.rqdata.runq_fnid[taskid]]
taskname = self.rqdata.runq_task[taskid]
stamp = bb.build.stampfile(taskname, self.rqdata.dataCache, fn)
if stamp in self.rq.build_stamps.values():
continue
return taskid
def next(self):
@@ -758,6 +753,7 @@ class RunQueueData:
self.rqdata.runq_depends[task],
self.rqdata.runq_revdeps[task])
class RunQueue:
def __init__(self, cooker, cfgData, dataCache, taskData, targets):
@@ -933,7 +929,7 @@ class RunQueue:
if self.state is runQueuePrepare:
self.rqexe = RunQueueExecuteDummy(self)
if self.rqdata.prepare() == 0:
if self.rqdata.prepare() is 0:
self.state = runQueueComplete
else:
self.state = runQueueSceneInit
@@ -1014,7 +1010,6 @@ class RunQueueExecute:
self.runq_complete = []
self.build_pids = {}
self.build_pipes = {}
self.build_stamps = {}
self.failed_fnids = []
def runqueue_process_waitpid(self):
@@ -1023,15 +1018,12 @@ class RunQueueExecute:
collect the process exit codes and close the information pipe.
"""
result = os.waitpid(-1, os.WNOHANG)
if result[0] == 0 and result[1] == 0:
if result[0] is 0 and result[1] is 0:
return None
task = self.build_pids[result[0]]
del self.build_pids[result[0]]
self.build_pipes[result[0]].close()
del self.build_pipes[result[0]]
# self.build_stamps[result[0]] may not exist when use shared work directory.
if result[0] in self.build_stamps.keys():
del self.build_stamps[result[0]]
if result[1] != 0:
self.task_fail(task, result[1]>>8)
else:
@@ -1068,32 +1060,23 @@ class RunQueueExecute:
return
def fork_off_task(self, fn, task, taskname, quieterrors=False):
# We need to setup the environment BEFORE the fork, since
# a fork() or exec*() activates PSEUDO...
envbackup = {}
umask = None
envbackup = os.environ.copy()
env = {}
taskdep = self.rqdata.dataCache.task_deps[fn]
if 'umask' in taskdep and taskname in taskdep['umask']:
# umask might come in as a number or text string..
try:
umask = int(taskdep['umask'][taskname],8)
except TypeError:
umask = taskdep['umask'][taskname]
if 'fakeroot' in taskdep and taskname in taskdep['fakeroot']:
envvars = (self.rqdata.dataCache.fakerootenv[fn] or "").split()
for key, value in (var.split('=') for var in envvars):
envbackup[key] = os.environ.get(key)
os.environ[key] = value
for var in envvars:
comps = var.split("=")
env[comps[0]] = comps[1]
fakedirs = (self.rqdata.dataCache.fakerootdirs[fn] or "").split()
for p in fakedirs:
bb.utils.mkdirhier(p)
logger.debug(2, 'Running %s:%s under fakeroot, fakedirs: %s' %
(fn, taskname, ', '.join(fakedirs)))
bb.mkdirhier(p)
logger.debug(2, "Running %s:%s under fakeroot, state dir is %s" % (fn, taskname, fakedirs))
for e in env:
os.putenv(e, env[e])
sys.stdout.flush()
sys.stderr.flush()
@@ -1104,7 +1087,6 @@ class RunQueueExecute:
pid = os.fork()
except OSError as e:
bb.msg.fatal(bb.msg.domain.RunQueue, "fork failed: %d (%s)" % (e.errno, e.strerror))
if pid == 0:
pipein.close()
@@ -1112,6 +1094,12 @@ class RunQueueExecute:
# events
bb.event.worker_pid = os.getpid()
bb.event.worker_pipe = pipeout
bb.event.useStdout = False
# Child processes should send their messages to the UI
# process via the server process, not print them
# themselves
bblogger.handlers = [bb.event.LogHandler()]
self.rq.state = runQueueChildProcess
# Make the child the process group leader
@@ -1120,43 +1108,47 @@ class RunQueueExecute:
newsi = os.open(os.devnull, os.O_RDWR)
os.dup2(newsi, sys.stdin.fileno())
if umask:
os.umask(umask)
bb.data.setVar("BB_WORKERCONTEXT", "1", self.cooker.configuration.data)
the_data = bb.cache.Cache.loadDataFull(fn, self.cooker.get_file_appends(fn), self.cooker.configuration.data)
env2 = bb.data.export_vars(the_data)
env2 = bb.data.export_envvars(env2, the_data)
for e in os.environ:
os.unsetenv(e)
for e in env2:
os.putenv(e, env2[e])
for e in env:
os.putenv(e, env[e])
if quieterrors:
the_data.setVarFlag(taskname, "quieterrors", "1")
bb.data.setVar("__RUNQUEUE_DO_NOT_USE_EXTERNALLY", self, self.cooker.configuration.data)
bb.data.setVar("__RUNQUEUE_DO_NOT_USE_EXTERNALLY2", fn, self.cooker.configuration.data)
bb.data.setVar("BB_WORKERCONTEXT", "1", the_data)
bb.parse.siggen.set_taskdata(self.rqdata.hashes, self.rqdata.hash_deps)
for h in self.rqdata.hashes:
bb.data.setVar("BBHASH_%s" % h, self.rqdata.hashes[h], the_data)
for h in self.rqdata.hash_deps:
bb.data.setVar("BBHASHDEPS_%s" % h, self.rqdata.hash_deps[h], the_data)
bb.data.setVar("BB_TASKHASH", self.rqdata.runq_hash[task], the_data)
ret = 0
try:
the_data = bb.cache.Cache.loadDataFull(fn, self.cooker.get_file_appends(fn), self.cooker.configuration.data)
the_data.setVar('BB_TASKHASH', self.rqdata.runq_hash[task])
for h in self.rqdata.hashes:
the_data.setVar("BBHASH_%s" % h, self.rqdata.hashes[h])
for h in self.rqdata.hash_deps:
the_data.setVar("BBHASHDEPS_%s" % h, self.rqdata.hash_deps[h])
os.environ.update(bb.data.exported_vars(the_data))
if quieterrors:
the_data.setVarFlag(taskname, "quieterrors", "1")
except Exception as exc:
if not quieterrors:
logger.critical(str(exc))
os._exit(1)
try:
if not self.cooker.configuration.dry_run:
ret = bb.build.exec_task(fn, taskname, the_data)
os._exit(ret)
except:
os._exit(1)
else:
for key, value in envbackup.iteritems():
if value is None:
del os.environ[key]
else:
os.environ[key] = value
for e in env:
os.unsetenv(e)
for e in envbackup:
if e in env:
os.putenv(e, envbackup[e])
return pid, pipein, pipeout
@@ -1248,7 +1240,7 @@ class RunQueueExecuteTasks(RunQueueExecute):
modname, name = sched.rsplit(".", 1)
try:
module = __import__(modname, fromlist=(name,))
except ImportError as exc:
except ImportError, exc:
logger.critical("Unable to import scheduler '%s' from '%s': %s" % (name, modname, exc))
raise SystemExit(1)
else:
@@ -1339,7 +1331,6 @@ class RunQueueExecuteTasks(RunQueueExecute):
self.build_pids[pid] = task
self.build_pipes[pid] = runQueuePipe(pipein, pipeout, self.cfgData)
self.build_stamps[pid] = bb.build.stampfile(taskname, self.rqdata.dataCache, fn)
self.runq_running[task] = 1
self.stats.taskActive()
if self.stats.active < self.number_tasks:
@@ -1462,25 +1453,16 @@ class RunQueueExecuteScenequeue(RunQueueExecute):
sq_taskname = []
sq_task = []
noexec = []
stamppresent = []
for task in xrange(len(self.sq_revdeps)):
realtask = self.rqdata.runq_setscene[task]
fn = self.rqdata.taskData.fn_index[self.rqdata.runq_fnid[realtask]]
taskname = self.rqdata.runq_task[realtask]
taskdep = self.rqdata.dataCache.task_deps[fn]
if 'noexec' in taskdep and taskname in taskdep['noexec']:
noexec.append(task)
self.task_skip(task)
bb.build.make_stamp(taskname + "_setscene", self.rqdata.dataCache, fn)
continue
if self.rq.check_stamp_task(realtask, taskname + "_setscene"):
logger.debug(2, 'Setscene stamp current for task %s(%s)', task, self.rqdata.get_user_idstring(realtask))
stamppresent.append(task)
self.task_skip(task)
continue
sq_fn.append(fn)
sq_hashfn.append(self.rqdata.dataCache.hashfn[fn])
sq_hash.append(self.rqdata.runq_hash[realtask])
@@ -1490,7 +1472,7 @@ class RunQueueExecuteScenequeue(RunQueueExecute):
locs = { "sq_fn" : sq_fn, "sq_task" : sq_taskname, "sq_hash" : sq_hash, "sq_hashfn" : sq_hashfn, "d" : self.cooker.configuration.data }
valid = bb.utils.better_eval(call, locs)
valid_new = stamppresent
valid_new = []
for v in valid:
valid_new.append(sq_task[v])

@@ -28,6 +28,7 @@
import time
import bb
import pickle
import signal
DEBUG = False
@@ -35,7 +36,8 @@ DEBUG = False
import inspect, select
class BitBakeServerCommands():
def __init__(self, server):
def __init__(self, server, cooker):
self.cooker = cooker
self.server = server
def runCommand(self, command):
@@ -67,7 +69,7 @@ class BBUIEventQueue:
self.parent = parent
@staticmethod
def send(event):
bb.server.none.eventQueue.append(event)
bb.server.none.eventQueue.append(pickle.loads(event))
@staticmethod
def quit():
return
@@ -104,17 +106,13 @@ class BBUIEventQueue:
def chldhandler(signum, stackframe):
pass
class BitBakeNoneServer():
class BitBakeServer():
# remove this when you're done with debugging
# allow_reuse_address = True
def __init__(self):
def __init__(self, cooker):
self._idlefuns = {}
self.commands = BitBakeServerCommands(self)
def addcooker(self, cooker):
self.cooker = cooker
self.commands.cooker = cooker
self.commands = BitBakeServerCommands(self, cooker)
def register_idle_function(self, function, data):
"""Register a function to be called while the server is idle"""
@@ -159,10 +157,25 @@ class BitBakeNoneServer():
except:
pass
class BitBakeServerConnection():
class BitbakeServerInfo():
def __init__(self, server):
self.server = server.server
self.connection = self.server.commands
self.server = server
self.commands = server.commands
class BitBakeServerFork():
def __init__(self, cooker, server, serverinfo, logfile):
serverinfo.logfile = logfile
serverinfo.cooker = cooker
serverinfo.server = server
class BitbakeUILauch():
def launch(self, serverinfo, uifunc, *args):
return bb.cooker.server_main(serverinfo.cooker, uifunc, *args)
class BitBakeServerConnection():
def __init__(self, serverinfo):
self.server = serverinfo.server
self.connection = serverinfo.commands
self.events = bb.server.none.BBUIEventQueue(self.server)
for event in bb.event.ui_queue:
self.events.queue_event(event)
@@ -176,28 +189,3 @@ class BitBakeServerConnection():
self.connection.terminateServer()
except:
pass
class BitBakeServer(object):
def initServer(self):
self.server = BitBakeNoneServer()
def addcooker(self, cooker):
self.cooker = cooker
self.server.addcooker(cooker)
def getServerIdleCB(self):
return self.server.register_idle_function
def saveConnectionDetails(self):
return
def detach(self, cooker_logfile):
self.logfile = cooker_logfile
def establishConnection(self):
self.connection = BitBakeServerConnection(self)
return self.connection
def launchUI(self, uifunc, *args):
return bb.cooker.server_main(self.cooker, uifunc, *args)

@@ -1,270 +0,0 @@
#
# BitBake Process based server.
#
# Copyright (C) 2010 Bob Foerster <robert@erafx.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
"""
This module implements a multiprocessing.Process based server for bitbake.
"""
import bb
import bb.event
import itertools
import logging
import multiprocessing
import os
import signal
import sys
import time
from Queue import Empty
from multiprocessing import Event, Process, util, Queue, Pipe, queues
logger = logging.getLogger('BitBake')
class ServerCommunicator():
def __init__(self, connection):
self.connection = connection
def runCommand(self, command):
# @todo try/except
self.connection.send(command)
while True:
# don't let the user ctrl-c while we're waiting for a response
try:
if self.connection.poll(.5):
return self.connection.recv()
else:
return None
except KeyboardInterrupt:
pass
class EventAdapter():
"""
Adapter to wrap our event queue since the caller (bb.event) expects to
call a send() method, but our actual queue only has put()
"""
def __init__(self, queue):
self.queue = queue
def send(self, event):
try:
self.queue.put(event)
except Exception as err:
print("EventAdapter puked: %s" % str(err))
class ProcessServer(Process):
profile_filename = "profile.log"
profile_processed_filename = "profile.log.processed"
def __init__(self, command_channel, event_queue):
Process.__init__(self)
self.command_channel = command_channel
self.event_queue = event_queue
self.event = EventAdapter(event_queue)
self._idlefunctions = {}
self.quit = False
self.keep_running = Event()
self.keep_running.set()
def register_idle_function(self, function, data):
"""Register a function to be called while the server is idle"""
assert hasattr(function, '__call__')
self._idlefunctions[function] = data
def run(self):
for event in bb.event.ui_queue:
self.event_queue.put(event)
self.event_handle = bb.event.register_UIHhandler(self)
bb.cooker.server_main(self.cooker, self.main)
def main(self):
# Ignore SIGINT within the server, as all SIGINT handling is done by
# the UI and communicated to us
signal.signal(signal.SIGINT, signal.SIG_IGN)
while self.keep_running.is_set():
try:
if self.command_channel.poll():
command = self.command_channel.recv()
self.runCommand(command)
self.idle_commands(.1)
except Exception:
logger.exception('Running command %s', command)
self.event_queue.cancel_join_thread()
bb.event.unregister_UIHhandler(self.event_handle)
self.command_channel.close()
self.cooker.stop()
self.idle_commands(.1)
def idle_commands(self, delay):
nextsleep = delay
for function, data in self._idlefunctions.items():
try:
retval = function(self, data, False)
if retval is False:
del self._idlefunctions[function]
elif retval is True:
nextsleep = None
elif nextsleep is None:
continue
elif retval < nextsleep:
nextsleep = retval
except SystemExit:
raise
except Exception:
logger.exception('Running idle function')
if nextsleep is not None:
time.sleep(nextsleep)
def runCommand(self, command):
"""
Run a cooker command on the server
"""
self.command_channel.send(self.cooker.command.runCommand(command))
def stop(self):
self.keep_running.clear()
def bootstrap_2_6_6(self):
"""Pulled from python 2.6.6. Needed to ensure we have the fix from
http://bugs.python.org/issue5313 when running on python version 2.6.2
or lower."""
try:
self._children = set()
self._counter = itertools.count(1)
try:
sys.stdin.close()
sys.stdin = open(os.devnull)
except (OSError, ValueError):
pass
multiprocessing._current_process = self
util._finalizer_registry.clear()
util._run_after_forkers()
util.info('child process calling self.run()')
try:
self.run()
exitcode = 0
finally:
util._exit_function()
except SystemExit as e:
if not e.args:
exitcode = 1
elif type(e.args[0]) is int:
exitcode = e.args[0]
else:
sys.stderr.write(e.args[0] + '\n')
sys.stderr.flush()
exitcode = 1
except:
exitcode = 1
import traceback
sys.stderr.write('Process %s:\n' % self.name)
sys.stderr.flush()
traceback.print_exc()
util.info('process exiting with exitcode %d' % exitcode)
return exitcode
# Python versions 2.6.0 through 2.6.2 suffer from a multiprocessing bug
# which can result in a bitbake server hang during the parsing process
if (2, 6, 0) <= sys.version_info < (2, 6, 3):
_bootstrap = bootstrap_2_6_6
class BitBakeServerConnection():
def __init__(self, server):
self.server = server
self.procserver = server.server
self.connection = ServerCommunicator(server.ui_channel)
self.events = server.event_queue
def terminate(self, force = False):
signal.signal(signal.SIGINT, signal.SIG_IGN)
self.procserver.stop()
if force:
self.procserver.join(0.5)
if self.procserver.is_alive():
self.procserver.terminate()
self.procserver.join()
else:
self.procserver.join()
while True:
try:
event = self.server.event_queue.get(block=False)
except (Empty, IOError):
break
if isinstance(event, logging.LogRecord):
logger.handle(event)
self.server.ui_channel.close()
self.server.event_queue.close()
if force:
sys.exit(1)
# Wrap Queue to provide API which isn't server implementation specific
class ProcessEventQueue(multiprocessing.queues.Queue):
def waitEvent(self, timeout):
try:
return self.get(True, timeout)
except Empty:
return None
def getEvent(self):
try:
return self.get(False)
except Empty:
return None
class BitBakeServer(object):
def initServer(self):
# establish communication channels. We use bidirectional pipes for
# ui <--> server command/response pairs
# and a queue for server -> ui event notifications
#
self.ui_channel, self.server_channel = Pipe()
self.event_queue = ProcessEventQueue(0)
self.server = ProcessServer(self.server_channel, self.event_queue)
def addcooker(self, cooker):
self.cooker = cooker
self.server.cooker = cooker
def getServerIdleCB(self):
return self.server.register_idle_function
def saveConnectionDetails(self):
return
def detach(self, cooker_logfile):
self.server.start()
return
def establishConnection(self):
self.connection = BitBakeServerConnection(self)
signal.signal(signal.SIGTERM, lambda i, s: self.connection.terminate(force=True))
return self.connection
def launchUI(self, uifunc, *args):
return bb.cooker.server_main(self.cooker, uifunc, *args)

@@ -122,7 +122,8 @@ def _create_server(host, port):
return s
class BitBakeServerCommands():
def __init__(self, server):
def __init__(self, server, cooker):
self.cooker = cooker
self.server = server
def registerEventHandler(self, host, port):
@@ -150,7 +151,7 @@ class BitBakeServerCommands():
Trigger the server to quit
"""
self.server.quit = True
print("Server (cooker) exiting")
print("Server (cooker) exitting")
return
def ping(self):
@@ -159,11 +160,11 @@ class BitBakeServerCommands():
"""
return True
class BitBakeXMLRPCServer(SimpleXMLRPCServer):
class BitBakeServer(SimpleXMLRPCServer):
# remove this when you're done with debugging
# allow_reuse_address = True
def __init__(self, interface = ("localhost", 0)):
def __init__(self, cooker, interface = ("localhost", 0)):
"""
Constructor
"""
@@ -173,12 +174,9 @@ class BitBakeXMLRPCServer(SimpleXMLRPCServer):
self._idlefuns = {}
self.host, self.port = self.socket.getsockname()
#self.register_introspection_functions()
self.commands = BitBakeServerCommands(self)
self.autoregister_all_functions(self.commands, "")
def addcooker(self, cooker):
commands = BitBakeServerCommands(self, cooker)
self.autoregister_all_functions(commands, "")
self.cooker = cooker
self.commands.cooker = cooker
def autoregister_all_functions(self, context, prefix):
"""
@@ -246,6 +244,14 @@ class BitbakeServerInfo():
self.host = server.host
self.port = server.port
class BitBakeServerFork():
def __init__(self, cooker, server, serverinfo, logfile):
daemonize.createDaemon(server.serve_forever, logfile)
class BitbakeUILauch():
def launch(self, serverinfo, uifunc, *args):
return uifunc(*args)
class BitBakeServerConnection():
def __init__(self, serverinfo):
self.connection = _create_server(serverinfo.host, serverinfo.port)
@@ -265,31 +271,3 @@ class BitBakeServerConnection():
self.connection.terminateServer()
except:
pass
class BitBakeServer(object):
def initServer(self):
self.server = BitBakeXMLRPCServer()
def addcooker(self, cooker):
self.cooker = cooker
self.server.addcooker(cooker)
def getServerIdleCB(self):
return self.server.register_idle_function
def saveConnectionDetails(self):
self.serverinfo = BitbakeServerInfo(self.server)
def detach(self, cooker_logfile):
daemonize.createDaemon(self.server.serve_forever, cooker_logfile)
del self.cooker
del self.server
def establishConnection(self):
self.connection = BitBakeServerConnection(self.serverinfo)
return self.connection
def launchUI(self, uifunc, *args):
return uifunc(*args)

@@ -407,7 +407,7 @@ SRC_URI = ""
def parse( self, params ):
"""(Re-)parse .bb files and calculate the dependency graph"""
cooker.status = cache.CacheData(cooker.caches_array)
cooker.status = cache.CacheData()
ignore = data.getVar("ASSUME_PROVIDED", cooker.configuration.data, 1) or ""
cooker.status.ignored_dependencies = set( ignore.split() )
cooker.handleCollections( data.getVar("BBFILE_COLLECTIONS", cooker.configuration.data, 1) )

@@ -1,6 +1,5 @@
import hashlib
import logging
import os
import re
import bb.data
@@ -47,9 +46,6 @@ class SignatureGenerator(object):
def stampfile(self, stampbase, file_name, taskname, extrainfo):
return ("%s.%s.%s" % (stampbase, taskname, extrainfo)).rstrip('.')
def dump_sigtask(self, fn, task, stampbase, runtime):
return
class SignatureGeneratorBasic(SignatureGenerator):
"""
"""
@@ -82,10 +78,6 @@ class SignatureGeneratorBasic(SignatureGenerator):
data = d.getVar(task, False)
lookupcache[task] = data
if data is None:
bb.error("Task %s from %s seems to be empty?!" % (task, fn))
data = ''
newdeps = gendeps[task]
seen = set()
while newdeps:
@@ -107,7 +99,9 @@ class SignatureGeneratorBasic(SignatureGenerator):
var = d.getVar(dep, False)
lookupcache[dep] = var
if var:
data = data + str(var)
data = data + var
if data is None:
bb.error("Task %s from %s seems to be empty?!" % (task, fn))
self.basehash[fn + "." + task] = hashlib.md5(data).hexdigest()
taskdeps[task] = sorted(alldeps)

@@ -1,278 +0,0 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gobject
import copy
import re, os
from bb import data
class Configurator(gobject.GObject):
"""
A GObject to handle writing modified configuration values back
to conf files.
"""
__gsignals__ = {
"layers-loaded" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
"layers-changed" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
())
}
def __init__(self):
gobject.GObject.__init__(self)
self.local = None
self.bblayers = None
self.enabled_layers = {}
self.loaded_layers = {}
self.config = {}
self.orig_config = {}
# NOTE: cribbed from the cooker...
def _parse(self, f, data, include=False):
try:
return bb.parse.handle(f, data, include)
except (IOError, bb.parse.ParseError) as exc:
parselog.critical("Unable to parse %s: %s" % (f, exc))
sys.exit(1)
def _loadLocalConf(self, path):
def getString(var):
return bb.data.getVar(var, data, True) or ""
self.local = path
if self.orig_config:
del self.orig_config
self.orig_config = {}
data = bb.data.init()
data = self._parse(self.local, data)
# We only need to care about certain variables
mach = getString('MACHINE')
if mach and mach != self.config.get('MACHINE', ''):
self.config['MACHINE'] = mach
sdkmach = getString('SDKMACHINE')
if sdkmach and sdkmach != self.config.get('SDKMACHINE', ''):
self.config['SDKMACHINE'] = sdkmach
distro = getString('DISTRO')
if distro and distro != self.config.get('DISTRO', ''):
self.config['DISTRO'] = distro
bbnum = getString('BB_NUMBER_THREADS')
if bbnum and bbnum != self.config.get('BB_NUMBER_THREADS', ''):
self.config['BB_NUMBER_THREADS'] = bbnum
pmake = getString('PARALLEL_MAKE')
if pmake and pmake != self.config.get('PARALLEL_MAKE', ''):
self.config['PARALLEL_MAKE'] = pmake
incompat = getString('INCOMPATIBLE_LICENSE')
if incompat and incompat != self.config.get('INCOMPATIBLE_LICENSE', ''):
self.config['INCOMPATIBLE_LICENSE'] = incompat
pclass = getString('PACKAGE_CLASSES')
if pclass and pclass != self.config.get('PACKAGE_CLASSES', ''):
self.config['PACKAGE_CLASSES'] = pclass
self.orig_config = copy.deepcopy(self.config)
def setLocalConfVar(self, var, val):
if var in self.config:
self.config[var] = val
def _loadLayerConf(self, path):
self.bblayers = path
self.enabled_layers = {}
self.loaded_layers = {}
data = bb.data.init()
data = self._parse(self.bblayers, data)
layers = (bb.data.getVar('BBLAYERS', data, True) or "").split()
for layer in layers:
# TODO: we may be better off calling the layer by its
# BBFILE_COLLECTIONS value?
name = self._getLayerName(layer)
self.loaded_layers[name] = layer
self.enabled_layers = copy.deepcopy(self.loaded_layers)
self.emit("layers-loaded")
def _addConfigFile(self, path):
pref, sep, filename = path.rpartition("/")
if filename == "local.conf" or filename == "hob.local.conf":
self._loadLocalConf(path)
elif filename == "bblayers.conf":
self._loadLayerConf(path)
def _splitLayer(self, path):
# we only care about the path up to /conf/layer.conf
layerpath, conf, end = path.rpartition("/conf/")
return layerpath
def _getLayerName(self, path):
# Should this be the collection name?
layerpath, sep, name = path.rpartition("/")
return name
def disableLayer(self, layer):
if layer in self.enabled_layers:
del self.enabled_layers[layer]
def addLayerConf(self, confpath):
layerpath = self._splitLayer(confpath)
name = self._getLayerName(layerpath)
if name not in self.enabled_layers:
self.addLayer(name, layerpath)
return name, layerpath
def addLayer(self, name, path):
self.enabled_layers[name] = path
def _isLayerConfDirty(self):
# if a different number of layers enabled to what was
# loaded, definitely different
if len(self.enabled_layers) != len(self.loaded_layers):
return True
for layer in self.loaded_layers:
# if layer loaded but no longer present, definitely dirty
if layer not in self.enabled_layers:
return True
for layer in self.enabled_layers:
# if this layer wasn't present at load, definitely dirty
if layer not in self.loaded_layers:
return True
# if this layers path has changed, definitely dirty
if self.enabled_layers[layer] != self.loaded_layers[layer]:
return True
return False
def _constructLayerEntry(self):
"""
Returns a string representing the new layer selection
"""
layers = self.enabled_layers.copy()
# Construct BBLAYERS entry
layer_entry = "BBLAYERS = \" \\\n"
if 'meta' in layers:
layer_entry = layer_entry + " %s \\\n" % layers['meta']
del layers['meta']
for layer in layers:
layer_entry = layer_entry + " %s \\\n" % layers[layer]
layer_entry = layer_entry + " \""
return "".join(layer_entry)
def writeLocalConf(self):
# Dictionary containing only new or modified variables
changed_values = {}
for var in self.config:
val = self.config[var]
if self.orig_config.get(var, None) != val:
changed_values[var] = val
if not len(changed_values):
return
# Create a backup of the local.conf
bkup = "%s~" % self.local
os.rename(self.local, bkup)
# read the original conf into a list
with open(bkup, 'r') as config:
config_lines = config.readlines()
new_config_lines = ["\n"]
for var in changed_values:
# Convenience function for re.subn(). If the pattern matches
# return a string which contains an assignment using the same
# assignment operator as the old assignment.
def replace_val(matchobj):
var = matchobj.group(1) # config variable
op = matchobj.group(2) # assignment operator
val = changed_values[var] # new config value
return "%s %s \"%s\"" % (var, op, val)
pattern = '^\s*(%s)\s*([+=?.]+)(.*)' % re.escape(var)
p = re.compile(pattern)
cnt = 0
replaced = False
# Iterate over the local.conf lines and if they are a match
# for the pattern comment out the line and append a new line
# with the new VAR op "value" entry
for line in config_lines:
new_line, replacements = p.subn(replace_val, line)
if replacements:
config_lines[cnt] = "#%s" % line
new_config_lines.append(new_line)
replaced = True
cnt = cnt + 1
if not replaced:
new_config_lines.append("%s = \"%s\"" % (var, changed_values[var]))
# Add the modified variables
config_lines.extend(new_config_lines)
# Write the updated lines list object to the local.conf
with open(self.local, "w") as n:
n.write("".join(config_lines))
del self.orig_config
self.orig_config = copy.deepcopy(self.config)
def writeLayerConf(self):
# If we've not added/removed new layers don't write
if not self._isLayerConfDirty():
return
# This pattern should find the existing BBLAYERS
pattern = 'BBLAYERS\s=\s\".*\"'
# Backup the users bblayers.conf
bkup = "%s~" % self.bblayers
os.rename(self.bblayers, bkup)
replacement = self._constructLayerEntry()
with open(bkup, "r") as f:
contents = f.read()
p = re.compile(pattern, re.DOTALL)
new = p.sub(replacement, contents)
with open(self.bblayers, "w") as n:
n.write(new)
# At some stage we should remove the backup we've created
# though we should probably verify it first
#os.remove(bkup)
# set loaded_layers for dirtiness tracking
self.loaded_layers = copy.deepcopy(self.enabled_layers)
self.emit("layers-changed")
def configFound(self, handler, path):
self._addConfigFile(path)
def loadConfig(self, path):
self._addConfigFile(path)

@@ -1,61 +0,0 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gobject
import gtk
"""
The following are convenience classes for implementing GNOME HIG compliant
BitBake GUI's
In summary: spacing = 12px, border-width = 6px
"""
class CrumbsDialog(gtk.Dialog):
"""
A GNOME HIG compliant dialog widget.
Add buttons with gtk.Dialog.add_button or gtk.Dialog.add_buttons
"""
def __init__(self, parent=None, label="", icon=gtk.STOCK_INFO):
gtk.Dialog.__init__(self, "", parent, gtk.DIALOG_DESTROY_WITH_PARENT)
#self.set_property("has-separator", False) # note: deprecated in 2.22
self.set_border_width(6)
self.vbox.set_property("spacing", 12)
self.action_area.set_property("spacing", 12)
self.action_area.set_property("border-width", 6)
first_row = gtk.HBox(spacing=12)
first_row.set_property("border-width", 6)
first_row.show()
self.vbox.add(first_row)
self.icon = gtk.Image()
self.icon.set_from_stock(icon, gtk.ICON_SIZE_DIALOG)
self.icon.set_property("yalign", 0.00)
self.icon.show()
first_row.add(self.icon)
self.label = gtk.Label()
self.label.set_use_markup(True)
self.label.set_line_wrap(True)
self.label.set_markup(label)
self.label.set_property("yalign", 0.00)
self.label.show()
first_row.add(self.label)

@@ -19,6 +19,7 @@
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gobject
from bb.ui.crumbs.progress import ProgressBar
progress_total = 0
@@ -28,78 +29,46 @@ class HobHandler(gobject.GObject):
This object does BitBake event handling for the hob gui.
"""
__gsignals__ = {
"machines-updated" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_PYOBJECT,)),
"sdk-machines-updated": (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_PYOBJECT,)),
"distros-updated" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_PYOBJECT,)),
"package-formats-found" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_PYOBJECT,)),
"config-found" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_STRING,)),
"generating-data" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
"data-generated" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
"error" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_STRING,)),
"build-complete" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
"reload-triggered" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_STRING,
gobject.TYPE_STRING)),
"machines-updated" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_PYOBJECT,)),
"distros-updated" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_PYOBJECT,)),
"generating-data" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
"data-generated" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
())
}
def __init__(self, taskmodel, server):
gobject.GObject.__init__(self)
self.current_command = None
self.building = None
self.gplv3_excluded = False
self.build_toolchain = False
self.build_toolchain_headers = False
self.generating = False
self.build_queue = []
self.model = taskmodel
self.server = server
self.current_command = None
self.building = False
self.command_map = {
"findConfigFilePathLocal" : ("findConfigFilePath", ["hob.local.conf"], "findConfigFilePathHobLocal"),
"findConfigFilePathHobLocal" : ("findConfigFilePath", ["bblayers.conf"], "findConfigFilePathLayers"),
"findConfigFilePathLayers" : ("findConfigFiles", ["DISTRO"], "findConfigFilesDistro"),
"findConfigFilesDistro" : ("findConfigFiles", ["MACHINE"], "findConfigFilesMachine"),
"findConfigFilesMachine" : ("findConfigFiles", ["MACHINE-SDK"], "findConfigFilesSdkMachine"),
"findConfigFilesSdkMachine" : ("findFilesMatchingInDir", ["rootfs_", "classes"], "findFilesMatchingPackage"),
"findFilesMatchingPackage" : ("generateTargetsTree", ["classes/image.bbclass"], None),
"generateTargetsTree" : (None, [], None),
"findConfigFilesDistro" : ("findConfigFiles", "MACHINE", "findConfigFilesMachine"),
"findConfigFilesMachine" : ("generateTargetsTree", "classes/image.bbclass", None),
"generateTargetsTree" : (None, None, None),
}
def run_next_command(self):
# FIXME: this is ugly and I *will* replace it
if self.current_command:
if not self.generating:
self.emit("generating-data")
self.generating = True
next_cmd = self.command_map[self.current_command]
command = next_cmd[0]
argument = next_cmd[1]
self.current_command = next_cmd[2]
args = [command]
args.extend(argument)
self.server.runCommand(args)
if command == "generateTargetsTree":
self.emit("generating-data")
self.server.runCommand([command, argument])
def handle_event(self, event, running_build, pbar):
def handle_event(self, event, running_build, pbar=None):
if not event:
return
@@ -108,9 +77,9 @@ class HobHandler(gobject.GObject):
running_build.handle_event(event)
elif isinstance(event, bb.event.TargetsTreeGenerated):
self.emit("data-generated")
self.generating = False
if event._model:
self.model.populate(event._model)
elif isinstance(event, bb.event.ConfigFilesFound):
var = event._variable
if var == "distro":
@@ -121,44 +90,26 @@ class HobHandler(gobject.GObject):
machines = event._values
machines.sort()
self.emit("machines-updated", machines)
elif var == "machine-sdk":
sdk_machines = event._values
sdk_machines.sort()
self.emit("sdk-machines-updated", sdk_machines)
elif isinstance(event, bb.event.ConfigFilePathFound):
path = event._path
self.emit("config-found", path)
elif isinstance(event, bb.event.FilesMatchingFound):
# FIXME: hard coding, should at least be a variable shared between
# here and the caller
if event._pattern == "rootfs_":
formats = []
for match in event._matches:
classname, sep, cls = match.rpartition(".")
fs, sep, format = classname.rpartition("_")
formats.append(format)
formats.sort()
self.emit("package-formats-found", formats)
elif isinstance(event, bb.command.CommandCompleted):
self.run_next_command()
elif isinstance(event, bb.command.CommandFailed):
self.emit("error", event.error)
elif isinstance(event, bb.event.CacheLoadStarted):
elif isinstance(event, bb.event.CacheLoadStarted) and pbar:
pbar.set_title("Loading cache")
bb.ui.crumbs.hobeventhandler.progress_total = event.total
pbar.set_text("Loading cache: %s/%s" % (0, bb.ui.crumbs.hobeventhandler.progress_total))
elif isinstance(event, bb.event.CacheLoadProgress):
pbar.set_text("Loading cache: %s/%s" % (event.current, bb.ui.crumbs.hobeventhandler.progress_total))
elif isinstance(event, bb.event.CacheLoadCompleted):
pbar.set_text("Loading cache: %s/%s" % (bb.ui.crumbs.hobeventhandler.progress_total, bb.ui.crumbs.hobeventhandler.progress_total))
elif isinstance(event, bb.event.ParseStarted):
if event.total == 0:
return
pbar.update(0, bb.ui.crumbs.hobeventhandler.progress_total)
elif isinstance(event, bb.event.CacheLoadProgress) and pbar:
pbar.update(event.current, bb.ui.crumbs.hobeventhandler.progress_total)
elif isinstance(event, bb.event.CacheLoadCompleted) and pbar:
pbar.update(bb.ui.crumbs.hobeventhandler.progress_total, bb.ui.crumbs.hobeventhandler.progress_total)
elif isinstance(event, bb.event.ParseStarted) and pbar:
pbar.set_title("Processing recipes")
bb.ui.crumbs.hobeventhandler.progress_total = event.total
pbar.set_text("Processing recipes: %s/%s" % (0, bb.ui.crumbs.hobeventhandler.progress_total))
elif isinstance(event, bb.event.ParseProgress):
pbar.set_text("Processing recipes: %s/%s" % (event.current, bb.ui.crumbs.hobeventhandler.progress_total))
elif isinstance(event, bb.event.ParseCompleted):
pbar.set_fraction(1.0)
pbar.update(0, bb.ui.crumbs.hobeventhandler.progress_total)
elif isinstance(event, bb.event.ParseProgress) and pbar:
pbar.update(event.current, bb.ui.crumbs.hobeventhandler.progress_total)
elif isinstance(event, bb.event.ParseCompleted) and pbar:
pbar.hide()
return
def event_handle_idle_func (self, eventHandler, running_build, pbar):
@@ -171,95 +122,16 @@ class HobHandler(gobject.GObject):
def set_machine(self, machine):
self.server.runCommand(["setVariable", "MACHINE", machine])
def set_sdk_machine(self, sdk_machine):
self.server.runCommand(["setVariable", "SDKMACHINE", sdk_machine])
self.current_command = "findConfigFilesMachine"
self.run_next_command()
def set_distro(self, distro):
self.server.runCommand(["setVariable", "DISTRO", distro])
def set_package_format(self, format):
self.server.runCommand(["setVariable", "PACKAGE_CLASSES", "package_%s" % format])
def reload_data(self, config=None):
img = self.model.selected_image
selected_packages, _ = self.model.get_selected_packages()
self.emit("reload-triggered", img, " ".join(selected_packages))
self.server.runCommand(["reparseFiles"])
self.current_command = "findConfigFilePathLayers"
self.run_next_command()
def set_bbthreads(self, threads):
self.server.runCommand(["setVariable", "BB_NUMBER_THREADS", threads])
def set_pmake(self, threads):
pmake = "-j %s" % threads
self.server.runCommand(["setVariable", "BB_NUMBER_THREADS", pmake])
def run_build(self, tgts):
self.building = "image"
targets = []
targets.append(tgts)
if self.build_toolchain and self.build_toolchain_headers:
targets = ["meta-toolchain-sdk"] + targets
elif self.build_toolchain:
targets = ["meta-toolchain"] + targets
def run_build(self, targets):
self.building = True
self.server.runCommand(["buildTargets", targets, "build"])
def build_packages(self, pkgs):
self.building = "packages"
if 'meta-toolchain' in self.build_queue:
self.build_queue.remove('meta-toolchain')
pkgs.extend('meta-toolchain')
self.server.runCommand(["buildTargets", pkgs, "build"])
def build_file(self, image):
self.building = "image"
self.server.runCommand(["buildFile", image, "build"])
def cancel_build(self, force=False):
if force:
# Force the cooker to stop as quickly as possible
self.server.runCommand(["stateStop"])
else:
# Wait for tasks to complete before shutting down, this helps
# leave the workdir in a usable state
self.server.runCommand(["stateShutdown"])
def toggle_gplv3(self, excluded):
if self.gplv3_excluded != excluded:
self.gplv3_excluded = excluded
if excluded:
self.server.runCommand(["setVariable", "INCOMPATIBLE_LICENSE", "GPLv3"])
else:
self.server.runCommand(["setVariable", "INCOMPATIBLE_LICENSE", ""])
def toggle_toolchain(self, enabled):
if self.build_toolchain != enabled:
self.build_toolchain = enabled
def toggle_toolchain_headers(self, enabled):
if self.build_toolchain_headers != enabled:
self.build_toolchain_headers = enabled
def queue_image_recipe_path(self, path):
self.build_queue.append(path)
def build_complete_cb(self, running_build):
if len(self.build_queue) > 0:
next = self.build_queue.pop(0)
if next.endswith('.bb'):
self.build_file(next)
self.building = 'image'
self.build_file(next)
else:
self.build_packages(next.split(" "))
else:
self.building = None
self.emit("build-complete")
def set_image_output_type(self, output_type):
self.server.runCommand(["setVariable", "IMAGE_FSTYPES", output_type])
def get_image_deploy_dir(self):
return self.server.runCommand(["getVariable", "DEPLOY_DIR_IMAGE"])
def cancel_build(self):
# Note: this may not be the right way to stop an in-progress build
self.server.runCommand(["stateStop"])

@@ -1,293 +0,0 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gtk
from bb.ui.crumbs.configurator import Configurator
class HobPrefs(gtk.Dialog):
"""
"""
def empty_combo_text(self, combo_text):
model = combo_text.get_model()
if model:
model.clear()
def output_type_changed_cb(self, combo, handler):
ot = combo.get_active_text()
if ot != self.curr_output_type:
self.curr_output_type = ot
handler.set_image_output_type(ot)
def sdk_machine_combo_changed_cb(self, combo, handler):
sdk_mach = combo.get_active_text()
if sdk_mach != self.curr_sdk_mach:
self.curr_sdk_mach = sdk_mach
self.configurator.setLocalConfVar('SDKMACHINE', sdk_mach)
handler.set_sdk_machine(sdk_mach)
def update_sdk_machines(self, handler, sdk_machines):
active = 0
# disconnect the signal handler before updating the combo model
if self.sdk_machine_handler_id:
self.sdk_machine_combo.disconnect(self.sdk_machine_handler_id)
self.sdk_machine_handler_id = None
self.empty_combo_text(self.sdk_machine_combo)
for sdk_machine in sdk_machines:
self.sdk_machine_combo.append_text(sdk_machine)
if sdk_machine == self.curr_sdk_mach:
self.sdk_machine_combo.set_active(active)
active = active + 1
self.sdk_machine_handler_id = self.sdk_machine_combo.connect("changed", self.sdk_machine_combo_changed_cb, handler)
def distro_combo_changed_cb(self, combo, handler):
distro = combo.get_active_text()
if distro != self.curr_distro:
self.curr_distro = distro
self.configurator.setLocalConfVar('DISTRO', distro)
handler.set_distro(distro)
self.reload_required = True
def update_distros(self, handler, distros):
active = 0
# disconnect the signal handler before updating combo model
if self.distro_handler_id:
self.distro_combo.disconnect(self.distro_handler_id)
self.distro_handler_id = None
self.empty_combo_text(self.distro_combo)
for distro in distros:
self.distro_combo.append_text(distro)
if distro == self.curr_distro:
self.distro_combo.set_active(active)
active = active + 1
self.distro_handler_id = self.distro_combo.connect("changed", self.distro_combo_changed_cb, handler)
def package_format_combo_changed_cb(self, combo, handler):
package_format = combo.get_active_text()
if package_format != self.curr_package_format:
self.curr_package_format = package_format
self.configurator.setLocalConfVar('PACKAGE_CLASSES', 'package_%s' % package_format)
handler.set_package_format(package_format)
def update_package_formats(self, handler, formats):
active = 0
# disconnect the signal handler before updating the model
if self.package_handler_id:
self.package_combo.disconnect(self.package_handler_id)
self.package_handler_id = None
self.empty_combo_text(self.package_combo)
for format in formats:
self.package_combo.append_text(format)
if format == self.curr_package_format:
self.package_combo.set_active(active)
active = active + 1
self.package_handler_id = self.package_combo.connect("changed", self.package_format_combo_changed_cb, handler)
def include_gplv3_cb(self, toggle):
excluded = toggle.get_active()
self.handler.toggle_gplv3(excluded)
if excluded:
self.configurator.setLocalConfVar('INCOMPATIBLE_LICENSE', 'GPLv3')
else:
self.configurator.setLocalConfVar('INCOMPATIBLE_LICENSE', '')
self.reload_required = True
def change_bb_threads_cb(self, spinner):
val = spinner.get_value_as_int()
self.handler.set_bbthreads(val)
self.configurator.setLocalConfVar('BB_NUMBER_THREADS', val)
def change_make_threads_cb(self, spinner):
val = spinner.get_value_as_int()
self.handler.set_pmake(val)
self.configurator.setLocalConfVar('PARALLEL_MAKE', "-j %s" % val)
def toggle_toolchain_cb(self, check):
enabled = check.get_active()
self.handler.toggle_toolchain(enabled)
def toggle_headers_cb(self, check):
enabled = check.get_active()
self.handler.toggle_toolchain_headers(enabled)
def set_parent_window(self, parent):
self.set_transient_for(parent)
def write_changes(self):
self.configurator.writeLocalConf()
def prefs_response_cb(self, dialog, response):
if self.reload_required:
glib.idle_add(self.handler.reload_data)
def __init__(self, configurator, handler, curr_sdk_mach, curr_distro, pclass,
cpu_cnt, pmake, bbthread, image_types):
"""
"""
gtk.Dialog.__init__(self, "Preferences", None,
gtk.DIALOG_DESTROY_WITH_PARENT,
(gtk.STOCK_CLOSE, gtk.RESPONSE_OK))
self.set_border_width(6)
self.vbox.set_property("spacing", 12)
self.action_area.set_property("spacing", 12)
self.action_area.set_property("border-width", 6)
self.handler = handler
self.configurator = configurator
self.curr_sdk_mach = curr_sdk_mach
self.curr_distro = curr_distro
self.curr_package_format = pclass
self.curr_output_type = None
self.cpu_cnt = cpu_cnt
self.pmake = pmake
self.bbthread = bbthread
self.reload_required = False
self.distro_handler_id = None
self.sdk_machine_handler_id = None
self.package_handler_id = None
left = gtk.SizeGroup(gtk.SIZE_GROUP_HORIZONTAL)
right = gtk.SizeGroup(gtk.SIZE_GROUP_HORIZONTAL)
label = gtk.Label()
label.set_markup("<b>Policy</b>")
label.show()
frame = gtk.Frame()
frame.set_label_widget(label)
frame.set_shadow_type(gtk.SHADOW_NONE)
frame.show()
self.vbox.pack_start(frame)
pbox = gtk.VBox(False, 12)
pbox.show()
frame.add(pbox)
hbox = gtk.HBox(False, 12)
hbox.show()
pbox.pack_start(hbox, expand=False, fill=False, padding=6)
# Distro selector
label = gtk.Label("Distribution:")
label.show()
hbox.pack_start(label, expand=False, fill=False, padding=6)
self.distro_combo = gtk.combo_box_new_text()
self.distro_combo.set_tooltip_text("Select the Yocto distribution you would like to use")
self.distro_combo.show()
hbox.pack_start(self.distro_combo, expand=False, fill=False, padding=6)
# Exclude GPLv3
check = gtk.CheckButton("Exclude GPLv3 packages")
check.set_tooltip_text("Check this box to prevent GPLv3 packages from being included in your image")
check.show()
check.connect("toggled", self.include_gplv3_cb)
hbox.pack_start(check, expand=False, fill=False, padding=6)
hbox = gtk.HBox(False, 12)
hbox.show()
pbox.pack_start(hbox, expand=False, fill=False, padding=6)
# Package format selector
label = gtk.Label("Package format:")
label.show()
hbox.pack_start(label, expand=False, fill=False, padding=6)
self.package_combo = gtk.combo_box_new_text()
self.package_combo.set_tooltip_text("Select the package format you would like to use in your image")
self.package_combo.show()
hbox.pack_start(self.package_combo, expand=False, fill=False, padding=6)
# Image output type selector
label = gtk.Label("Image output type:")
label.show()
hbox.pack_start(label, expand=False, fill=False, padding=6)
output_combo = gtk.combo_box_new_text()
if image_types:
for it in image_types.split(" "):
output_combo.append_text(it)
output_combo.connect("changed", self.output_type_changed_cb, handler)
else:
output_combo.set_sensitive(False)
output_combo.show()
hbox.pack_start(output_combo)
# BitBake
label = gtk.Label()
label.set_markup("<b>BitBake</b>")
label.show()
frame = gtk.Frame()
frame.set_label_widget(label)
frame.set_shadow_type(gtk.SHADOW_NONE)
frame.show()
self.vbox.pack_start(frame)
pbox = gtk.VBox(False, 12)
pbox.show()
frame.add(pbox)
hbox = gtk.HBox(False, 12)
hbox.show()
pbox.pack_start(hbox, expand=False, fill=False, padding=6)
label = gtk.Label("BitBake threads:")
label.show()
spin_max = 9 #self.cpu_cnt * 3
hbox.pack_start(label, expand=False, fill=False, padding=6)
bbadj = gtk.Adjustment(value=self.bbthread, lower=1, upper=spin_max, step_incr=1)
bbspinner = gtk.SpinButton(adjustment=bbadj, climb_rate=1, digits=0)
bbspinner.show()
bbspinner.connect("value-changed", self.change_bb_threads_cb)
hbox.pack_start(bbspinner, expand=False, fill=False, padding=6)
label = gtk.Label("Make threads:")
label.show()
hbox.pack_start(label, expand=False, fill=False, padding=6)
madj = gtk.Adjustment(value=self.pmake, lower=1, upper=spin_max, step_incr=1)
makespinner = gtk.SpinButton(adjustment=madj, climb_rate=1, digits=0)
makespinner.connect("value-changed", self.change_make_threads_cb)
makespinner.show()
hbox.pack_start(makespinner, expand=False, fill=False, padding=6)
# Toolchain
label = gtk.Label()
label.set_markup("<b>External Toolchain</b>")
label.show()
frame = gtk.Frame()
frame.set_label_widget(label)
frame.set_shadow_type(gtk.SHADOW_NONE)
frame.show()
self.vbox.pack_start(frame)
pbox = gtk.VBox(False, 12)
pbox.show()
frame.add(pbox)
hbox = gtk.HBox(False, 12)
hbox.show()
pbox.pack_start(hbox, expand=False, fill=False, padding=6)
toolcheck = gtk.CheckButton("Build external development toolchain with image")
toolcheck.show()
toolcheck.connect("toggled", self.toggle_toolchain_cb)
hbox.pack_start(toolcheck, expand=False, fill=False, padding=6)
hbox = gtk.HBox(False, 12)
hbox.show()
pbox.pack_start(hbox, expand=False, fill=False, padding=6)
label = gtk.Label("Toolchain host:")
label.show()
hbox.pack_start(label, expand=False, fill=False, padding=6)
self.sdk_machine_combo = gtk.combo_box_new_text()
self.sdk_machine_combo.set_tooltip_text("Select the host architecture of the external machine")
self.sdk_machine_combo.show()
hbox.pack_start(self.sdk_machine_combo, expand=False, fill=False, padding=6)
headerscheck = gtk.CheckButton("Include development headers with toolchain")
headerscheck.show()
headerscheck.connect("toggled", self.toggle_headers_cb)
hbox.pack_start(headerscheck, expand=False, fill=False, padding=6)
self.connect("response", self.prefs_response_cb)
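As a rough sketch of where the two spinner callbacks above send their values: BB_NUMBER_THREADS stores the integer directly, while PARALLEL_MAKE stores a "-j N" string. The LocalConf class below is a hypothetical stand-in for the Configurator's setLocalConfVar()/writeLocalConf() pair; the real class edits conf/local.conf in place rather than rewriting it from scratch.

class LocalConf:
    """Hypothetical stand-in collecting variables destined for local.conf."""
    def __init__(self):
        self.vars = {}

    def setLocalConfVar(self, name, value):
        self.vars[name] = value

    def writeLocalConf(self, path="local.conf.sketch"):
        # Simplified: the real Configurator preserves the existing file
        with open(path, "w") as f:
            for name, value in sorted(self.vars.items()):
                f.write('%s = "%s"\n' % (name, value))

conf = LocalConf()
threads = 4
conf.setLocalConfVar('BB_NUMBER_THREADS', threads)
conf.setLocalConfVar('PARALLEL_MAKE', "-j %s" % threads)
conf.writeLocalConf()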

View File

@@ -1,136 +0,0 @@
#
# BitBake Graphical GTK User Interface
#
# Copyright (C) 2011 Intel Corporation
#
# Authored by Joshua Lock <josh@linux.intel.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import gobject
import gtk
from bb.ui.crumbs.configurator import Configurator
class LayerEditor(gtk.Dialog):
"""
Gtk+ Widget for enabling and disabling layers.
Layers are added through using an open dialog to find the layer.conf
Disabled layers are deleted from conf/bblayers.conf
"""
def __init__(self, configurator, parent=None):
gtk.Dialog.__init__(self, "Layers", None,
gtk.DIALOG_DESTROY_WITH_PARENT,
(gtk.STOCK_CLOSE, gtk.RESPONSE_OK))
# We want to show a little more of the treeview in the default,
# emptier, case
self.set_size_request(-1, 300)
self.set_border_width(6)
self.vbox.set_property("spacing", 0)
self.action_area.set_property("border-width", 6)
self.configurator = configurator
self.newly_added = {}
# Label to inform users that meta is enabled but that you can't
# disable it as it'd be a *bad* idea
msg = "As the core of the build system the <i>meta</i> layer must always be included and therefore can't be viewed or edited here."
lbl = gtk.Label()
lbl.show()
lbl.set_use_markup(True)
lbl.set_markup(msg)
lbl.set_line_wrap(True)
lbl.set_justify(gtk.JUSTIFY_FILL)
self.vbox.pack_start(lbl, expand=False, fill=False, padding=6)
# Create a treeview in which to list layers
# ListStore of Name, Path, Enabled
self.layer_store = gtk.ListStore(gobject.TYPE_STRING, gobject.TYPE_STRING, gobject.TYPE_BOOLEAN)
self.tv = gtk.TreeView(self.layer_store)
self.tv.set_headers_visible(True)
col0 = gtk.TreeViewColumn('Name')
self.tv.append_column(col0)
col1 = gtk.TreeViewColumn('Path')
self.tv.append_column(col1)
col2 = gtk.TreeViewColumn('Enabled')
self.tv.append_column(col2)
cell0 = gtk.CellRendererText()
col0.pack_start(cell0, True)
col0.set_attributes(cell0, text=0)
cell1 = gtk.CellRendererText()
col1.pack_start(cell1, True)
col1.set_attributes(cell1, text=1)
cell2 = gtk.CellRendererToggle()
cell2.connect("toggled", self._toggle_layer_cb)
col2.pack_start(cell2, True)
col2.set_attributes(cell2, active=2)
self.tv.show()
self.vbox.pack_start(self.tv, expand=True, fill=True, padding=0)
tb = gtk.Toolbar()
tb.set_icon_size(gtk.ICON_SIZE_SMALL_TOOLBAR)
tb.set_style(gtk.TOOLBAR_BOTH)
tb.set_tooltips(True)
tb.show()
icon = gtk.Image()
icon.set_from_stock(gtk.STOCK_ADD, gtk.ICON_SIZE_SMALL_TOOLBAR)
icon.show()
tb.insert_item("Add Layer", "Add new layer", None, icon,
self._find_layer_cb, None, -1)
self.vbox.pack_start(tb, expand=False, fill=False, padding=0)
def set_parent_window(self, parent):
self.set_transient_for(parent)
def load_current_layers(self, data):
for layer, path in self.configurator.enabled_layers.items():
if layer != 'meta':
self.layer_store.append([layer, path, True])
def save_current_layers(self):
self.configurator.writeLayerConf()
def _toggle_layer_cb(self, cell, path):
name = self.layer_store[path][0]
toggle = not self.layer_store[path][2]
if toggle:
self.configurator.addLayer(name, path)
else:
self.configurator.disableLayer(name)
self.layer_store[path][2] = toggle
def _find_layer_cb(self, button):
self.find_layer(self)
def find_layer(self, parent):
dialog = gtk.FileChooserDialog("Add new layer", parent,
gtk.FILE_CHOOSER_ACTION_OPEN,
(gtk.STOCK_CANCEL, gtk.RESPONSE_NO,
gtk.STOCK_OPEN, gtk.RESPONSE_YES))
label = gtk.Label("Select the layer.conf of the layer you wish to add")
label.show()
dialog.set_extra_widget(label)
response = dialog.run()
path = dialog.get_filename()
dialog.destroy()
if response == gtk.RESPONSE_YES:
# FIXME: verify we've actually got a layer conf?
if path.endswith(".conf"):
name, layerpath = self.configurator.addLayerConf(path)
self.newly_added[name] = layerpath
self.layer_store.append([name, layerpath, True])
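The dialog above keeps its state in a Configurator: enabling a layer records its directory, disabling drops it, and writeLayerConf() persists the set as a BBLAYERS assignment. A simplified, hypothetical stand-in might look like the sketch below; the real Configurator in bb.ui.crumbs.configurator does considerably more bookkeeping.

class TinyLayerConf:
    """Hypothetical stand-in for the layer bookkeeping used by LayerEditor."""
    def __init__(self, path="bblayers.conf.sketch"):
        self.path = path
        self.enabled_layers = {}    # layer name -> layer directory

    def addLayer(self, name, layerdir):
        self.enabled_layers[name] = layerdir

    def disableLayer(self, name):
        self.enabled_layers.pop(name, None)

    def writeLayerConf(self):
        dirs = " \\\n  ".join(sorted(self.enabled_layers.values()))
        with open(self.path, "w") as f:
            f.write('BBLAYERS = "%s"\n' % dirs)

conf = TinyLayerConf()
conf.addLayer("meta-demo", "/home/user/poky/meta-demo")
conf.writeLayerConf()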

View File

@@ -47,18 +47,12 @@ class RunningBuildModel (gtk.TreeStore):
class RunningBuild (gobject.GObject):
__gsignals__ = {
'build-started' : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
'build-succeeded' : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
'build-failed' : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
'build-complete' : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
())
())
}
pids_to_task = {}
tasks_to_iter = {}
@@ -207,7 +201,6 @@ class RunningBuild (gobject.GObject):
elif isinstance(event, bb.event.BuildStarted):
self.emit("build-started")
self.model.prepend(None, (None,
None,
None,
@@ -225,9 +218,6 @@ class RunningBuild (gobject.GObject):
Colors.OK,
0))
# Emit a generic "build-complete" signal for things wishing to
# handle when the build is finished
self.emit("build-complete")
# Emit the appropriate signal depending on the number of failures
if (failures >= 1):
self.emit ("build-failed")
@@ -244,8 +234,6 @@ class RunningBuild (gobject.GObject):
pbar.update(self.progress_total, self.progress_total)
elif isinstance(event, bb.event.ParseStarted) and pbar:
if event.total == 0:
return
pbar.set_title("Processing recipes")
self.progress_total = event.total
pbar.update(0, self.progress_total)
@@ -320,4 +308,4 @@ class RunningBuildTreeView (gtk.TreeView):
clipboard = gtk.clipboard_get()
clipboard.set_text(paste_url)
clipboard.store()
clipboard.store()
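The signal plumbing trimmed in the hunks above follows the standard PyGTK pattern: declare the signals in __gsignals__, emit them as BitBake events arrive, and let the UI connect handlers. A minimal, self-contained sketch of that pattern follows; the class and signal names here are illustrative only.

import gobject

class MiniBuild(gobject.GObject):
    __gsignals__ = {
        'build-started':  (gobject.SIGNAL_RUN_LAST, gobject.TYPE_NONE, ()),
        'build-complete': (gobject.SIGNAL_RUN_LAST, gobject.TYPE_NONE, ()),
    }

    def run(self):
        self.emit("build-started")
        # ... process BitBake events here ...
        self.emit("build-complete")

def on_complete(build):
    print("build finished")

build = MiniBuild()
build.connect("build-complete", on_complete)
build.run()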

View File

@@ -20,57 +20,6 @@
import gtk
import gobject
import re
class BuildRep(gobject.GObject):
def __init__(self, userpkgs, allpkgs, base_image=None):
gobject.GObject.__init__(self)
self.base_image = base_image
self.allpkgs = allpkgs
self.userpkgs = userpkgs
def loadRecipe(self, pathname):
contents = []
packages = ""
base_image = ""
with open(pathname, 'r') as f:
contents = f.readlines()
pkg_pattern = "^\s*(IMAGE_INSTALL)\s*([+=.?]+)\s*(\"\S*\")"
img_pattern = "^\s*(require)\s+(\S+.bb)"
for line in contents:
matchpkg = re.search(pkg_pattern, line)
matchimg = re.search(img_pattern, line)
if matchpkg:
packages = packages + matchpkg.group(3).strip('"')
if matchimg:
base_image = os.path.basename(matchimg.group(2)).split(".")[0]
self.base_image = base_image
self.userpkgs = packages
def writeRecipe(self, writepath, model):
template = """
# Recipe generated by the HOB
require %s
IMAGE_INSTALL += "%s"
"""
meta_path = model.find_image_path(self.base_image)
recipe = template % (meta_path, self.userpkgs)
if os.path.exists(writepath):
os.rename(writepath, "%s~" % writepath)
with open(writepath, 'w') as r:
r.write(recipe)
return writepath
class TaskListModel(gtk.ListStore):
"""
@@ -79,18 +28,12 @@ class TaskListModel(gtk.ListStore):
providing convenience functions to access gtk.TreeModel subclasses which
provide filtered views of the data.
"""
(COL_NAME, COL_DESC, COL_LIC, COL_GROUP, COL_DEPS, COL_BINB, COL_TYPE, COL_INC, COL_IMG, COL_PATH) = range(10)
(COL_NAME, COL_DESC, COL_LIC, COL_GROUP, COL_DEPS, COL_BINB, COL_TYPE, COL_INC) = range(8)
__gsignals__ = {
"tasklist-populated" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
()),
"contents-changed" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_INT,)),
"image-changed" : (gobject.SIGNAL_RUN_LAST,
gobject.TYPE_NONE,
(gobject.TYPE_STRING,)),
())
}
"""
@@ -100,7 +43,6 @@ class TaskListModel(gtk.ListStore):
self.tasks = None
self.packages = None
self.images = None
self.selected_image = None
gtk.ListStore.__init__ (self,
gobject.TYPE_STRING,
@@ -110,22 +52,7 @@ class TaskListModel(gtk.ListStore):
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_STRING,
gobject.TYPE_BOOLEAN,
gobject.TYPE_BOOLEAN,
gobject.TYPE_STRING)
def contents_changed_cb(self, tree_model, path, it=None):
pkg_cnt = self.contents.iter_n_children(None)
self.emit("contents-changed", pkg_cnt)
def contents_model_filter(self, model, it):
if not model.get_value(it, self.COL_INC) or model.get_value(it, self.COL_TYPE) == 'image':
return False
name = model.get_value(it, self.COL_NAME)
if name.endswith('-native') or name.endswith('-cross'):
return False
else:
return True
gobject.TYPE_BOOLEAN)
"""
Create, if required, and return a filtered gtk.TreeModel
@@ -135,9 +62,7 @@ class TaskListModel(gtk.ListStore):
def contents_model(self):
if not self.contents:
self.contents = self.filter_new()
self.contents.set_visible_func(self.contents_model_filter)
self.contents.connect("row-inserted", self.contents_changed_cb)
self.contents.connect("row-deleted", self.contents_changed_cb)
self.contents.set_visible_column(self.COL_INC)
return self.contents
"""
@@ -182,10 +107,10 @@ class TaskListModel(gtk.ListStore):
Helper function to determine whether an item is a package
"""
def package_model_filter(self, model, it):
if model.get_value(it, self.COL_TYPE) != 'package':
return False
else:
if model.get_value(it, self.COL_TYPE) == 'package':
return True
else:
return False
"""
Create, if required, and return a filtered gtk.TreeModel
@@ -204,78 +129,33 @@ class TaskListModel(gtk.ListStore):
to notify any listeners that the model is ready
"""
def populate(self, event_model):
# First clear the model, in case repopulating
self.clear()
for item in event_model["pn"]:
atype = 'package'
name = item
summary = event_model["pn"][item]["summary"]
lic = event_model["pn"][item]["license"]
license = event_model["pn"][item]["license"]
group = event_model["pn"][item]["section"]
filename = event_model["pn"][item]["filename"]
depends = event_model["depends"].get(item, "")
depends = event_model["depends"].get(item, "")
rdepends = event_model["rdepends-pn"].get(item, "")
if rdepends:
for rdep in rdepends:
if event_model["packages"].get(rdep, ""):
pn = event_model["packages"][rdep].get("pn", "")
if pn:
depends.append(pn)
depends = depends + rdepends
self.squish(depends)
deps = " ".join(depends)
if name.count('task-') > 0:
atype = 'task'
elif name.count('-image-') > 0:
atype = 'image'
self.set(self.append(), self.COL_NAME, name, self.COL_DESC, summary,
self.COL_LIC, lic, self.COL_GROUP, group,
self.COL_DEPS, deps, self.COL_BINB, "",
self.COL_TYPE, atype, self.COL_INC, False,
self.COL_IMG, False, self.COL_PATH, filename)
self.COL_LIC, license, self.COL_GROUP, group,
self.COL_DEPS, deps, self.COL_BINB, "",
self.COL_TYPE, atype, self.COL_INC, False)
self.emit("tasklist-populated")
"""
Load a BuildRep into the model
"""
def load_image_rep(self, rep):
# Unset everything
it = self.get_iter_first()
while it:
path = self.get_path(it)
self[path][self.COL_INC] = False
self[path][self.COL_IMG] = False
it = self.iter_next(it)
# Iterate the images and disable them all
it = self.images.get_iter_first()
while it:
path = self.images.convert_path_to_child_path(self.images.get_path(it))
name = self[path][self.COL_NAME]
if name == rep.base_image:
self.include_item(path, image_contents=True)
else:
self[path][self.COL_INC] = False
it = self.images.iter_next(it)
# Mark all of the additional packages for inclusion
packages = rep.packages.split(" ")
it = self.get_iter_first()
while it:
path = self.get_path(it)
name = self[path][self.COL_NAME]
if name in packages:
self.include_item(path)
packages.remove(name)
it = self.iter_next(it)
self.emit("image-changed", rep.base_image)
"""
squish lst so that it doesn't contain any duplicate entries
squish lst so that it doesn't contain any duplicates
"""
def squish(self, lst):
seen = {}
@@ -293,61 +173,54 @@ class TaskListModel(gtk.ListStore):
self[path][self.COL_INC] = False
"""
recursively called to mark the item at opath and any package which
depends on it for removal
"""
def mark(self, opath):
removals = []
def mark(self, path):
name = self[path][self.COL_NAME]
it = self.get_iter_first()
name = self[opath][self.COL_NAME]
removals = []
#print("Removing %s" % name)
self.remove_item_path(opath)
self.remove_item_path(path)
# Remove all dependent packages, update binb
while it:
path = self.get_path(it)
inc = self[path][self.COL_INC]
deps = self[path][self.COL_DEPS]
binb = self[path][self.COL_BINB]
# FIXME: need to ensure partial name matching doesn't happen
if inc and deps.count(name):
# FIXME: need to ensure partial name matching doesn't happen, regexp?
if self[path][self.COL_INC] and self[path][self.COL_DEPS].count(name):
#print("%s depended on %s, marking for removal" % (self[path][self.COL_NAME], name))
# found a dependency, remove it
self.mark(path)
if inc and binb.count(name):
bib = self.find_alt_dependency(name)
self[path][self.COL_BINB] = bib
if self[path][self.COL_INC] and self[path][self.COL_BINB].count(name):
binb = self.find_alt_dependency(self[path][self.COL_NAME])
#print("%s was brought in by %s, binb set to %s" % (self[path][self.COL_NAME], name, binb))
self[path][self.COL_BINB] = binb
it = self.iter_next(it)
"""
Remove items from contents if they have an empty COL_BINB (brought in by)
caused by all packages they are a dependency of being removed.
If the item isn't a package we leave it included.
"""
def sweep_up(self):
it = self.contents.get_iter_first()
while it:
binb = self.contents.get_value(it, self.COL_BINB)
itype = self.contents.get_value(it, self.COL_TYPE)
remove = False
removals = []
it = self.get_iter_first()
if itype == 'package' and not binb:
oit = self.contents.convert_iter_to_child_iter(it)
opath = self.get_path(oit)
self.mark(opath)
remove = True
while it:
path = self.get_path(it)
binb = self[path][self.COL_BINB]
if binb == "" or binb is None:
#print("Sweeping up %s" % self[path][self.COL_NAME])
if not path in removals:
removals.extend(path)
it = self.iter_next(it)
# When we remove a package from the contents model we alter the
# model, so continuing to iterate is bad. *Furthermore* it's
# likely that the removal has affected an already iterated item
# so we should start from the beginning anyway.
# Only when we've managed to iterate the entire contents model
# without removing any items do we allow the loop to exit.
if remove:
it = self.contents.get_iter_first()
else:
it = self.contents.iter_next(it)
while removals:
path = removals.pop()
self.mark(path)
"""
Remove an item from the contents
"""
def remove_item(self, path):
self.mark(path)
self.sweep_up()
"""
Find the name of an item in the image contents which depends on the item
@@ -365,10 +238,17 @@ class TaskListModel(gtk.ListStore):
inc = self[path][self.COL_INC]
if itname != name and inc and deps.count(name) > 0:
# if this item depends on the item, return this items name
#print("%s depends on %s" % (itname, name))
return itname
it = self.iter_next(it)
return ""
"""
Convert a path in self to a path in the filtered contents model
"""
def contents_path_for_path(self, path):
return self.contents.convert_child_path_to_path(path)
"""
Check the self.contents gtk.TreeModel for an item
where COL_NAME matches item_name
@@ -386,38 +266,27 @@ class TaskListModel(gtk.ListStore):
"""
Add this item, and any of its dependencies, to the image contents
"""
def include_item(self, item_path, binb="", image_contents=False):
def include_item(self, item_path, binb=""):
name = self[item_path][self.COL_NAME]
deps = self[item_path][self.COL_DEPS]
cur_inc = self[item_path][self.COL_INC]
#print("Adding %s for %s dependency" % (name, binb))
if not cur_inc:
self[item_path][self.COL_INC] = True
self[item_path][self.COL_BINB] = binb
# We want to do some magic with things which are brought in by the
# base image so tag them as so
if image_contents:
self[item_path][self.COL_IMG] = True
if self[item_path][self.COL_TYPE] == 'image':
self.selected_image = name
if deps:
#print("Dependencies of %s are %s" % (name, deps))
# add all of the deps and set their binb to this item
for dep in deps.split(" "):
# FIXME: this skipping virtuals can't be right? Unless we choose only to show target
# packages? In which case we should handle this server side...
# If the contents model doesn't already contain dep, add it
# We only care to show things which will end up in the
# resultant image, so filter cross and native recipes
dep_included = self.contents_includes_name(dep)
path = self.find_path_for_item(dep)
if not dep_included and not dep.endswith("-native") and not dep.endswith("-cross"):
if not dep.startswith("virtual") and not self.contents_includes_name(dep):
path = self.find_path_for_item(dep)
if path:
self.include_item(path, name, image_contents)
self.include_item(path, name)
else:
pass
# Set brought in by for any no longer orphan packages
elif dep_included and path:
if not self[path][self.COL_BINB]:
self[path][self.COL_BINB] = name
"""
Find the model path for the item_name
@@ -438,100 +307,40 @@ class TaskListModel(gtk.ListStore):
Empty self.contents by setting the include of each entry to None
"""
def reset(self):
# Deselect images - slightly more complex logic so that we don't
# have to iterate all of the contents of the main model, instead
# just iterate the images model.
if self.selected_image:
iit = self.images.get_iter_first()
while iit:
pit = self.images.convert_iter_to_child_iter(iit)
self.set(pit, self.COL_INC, False)
iit = self.images.iter_next(iit)
self.selected_image = None
it = self.contents.get_iter_first()
while it:
oit = self.contents.convert_iter_to_child_iter(it)
self.set(oit,
self.COL_INC, False,
self.COL_BINB, "",
self.COL_IMG, False)
path = self.contents.get_path(it)
opath = self.contents.convert_path_to_child_path(path)
self[opath][self.COL_INC] = False
self[opath][self.COL_BINB] = ""
# As we've just removed the first item...
it = self.contents.get_iter_first()
"""
Returns two lists. One of user selected packages and the other containing
all selected packages
Returns True if one of the selected tasks is an image, False otherwise
"""
def get_selected_packages(self):
allpkgs = []
userpkgs = []
it = self.contents.get_iter_first()
while it:
sel = self.contents.get_value(it, self.COL_BINB) == "User Selected"
name = self.contents.get_value(it, self.COL_NAME)
allpkgs.append(name)
if sel:
userpkgs.append(name)
it = self.contents.iter_next(it)
return userpkgs, allpkgs
def get_build_rep(self):
userpkgs, allpkgs = self.get_selected_packages()
image = self.selected_image
return BuildRep(" ".join(userpkgs), " ".join(allpkgs), image)
def find_reverse_depends(self, pn):
revdeps = []
it = self.contents.get_iter_first()
while it:
if self.contents.get_value(it, self.COL_DEPS).count(pn) != 0:
revdeps.append(self.contents.get_value(it, self.COL_NAME))
it = self.contents.iter_next(it)
if pn in revdeps:
revdeps.remove(pn)
return revdeps
def set_selected_image(self, img):
self.selected_image = img
path = self.find_path_for_item(img)
self.include_item(item_path=path,
binb="User Selected",
image_contents=True)
self.emit("image-changed", self.selected_image)
def set_selected_packages(self, pkglist):
selected = pkglist
it = self.get_iter_first()
while it:
name = self.get_value(it, self.COL_NAME)
if name in pkglist:
pkglist.remove(name)
path = self.get_path(it)
self.include_item(item_path=path,
binb="User Selected")
if len(pkglist) == 0:
return
it = self.iter_next(it)
def find_image_path(self, image):
def targets_contains_image(self):
it = self.images.get_iter_first()
while it:
image_name = self.images.get_value(it, self.COL_NAME)
if image_name == image:
path = self.images.get_value(it, self.COL_PATH)
meta_pattern = "(\S*)/(meta*/)(\S*)"
meta_match = re.search(meta_pattern, path)
if meta_match:
_, lyr, bbrel = path.partition(meta_match.group(2))
if bbrel:
path = bbrel
return path
path = self.images.get_path(it)
inc = self.images[path][self.COL_INC]
if inc:
return True
it = self.images.iter_next(it)
return False
"""
Return a list of all selected items which are not -native or -cross
"""
def get_targets(self):
tasks = []
it = self.contents.get_iter_first()
while it:
path = self.contents.get_path(it)
name = self.contents[path][self.COL_NAME]
stype = self.contents[path][self.COL_TYPE]
if not name.count('-native') and not name.count('-cross'):
tasks.append(name)
it = self.contents.iter_next(it)
return tasks
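Much of the model logic above relies on gtk.TreeModelFilter: the ListStore holds every recipe, filter_new() produces the filtered "contents" view, and either set_visible_column() or a visible-func decides what appears. Below is a minimal sketch of that pattern with an illustrative two-column layout rather than the model's real schema.

import gobject
import gtk

# name, included  (stand-ins for COL_NAME and COL_INC)
store = gtk.ListStore(gobject.TYPE_STRING, gobject.TYPE_BOOLEAN)
for name, inc in [("busybox", True), ("gcc-cross", False), ("dropbear", True)]:
    store.append([name, inc])

contents = store.filter_new()
contents.set_visible_column(1)      # only rows whose 'included' flag is True

it = contents.get_iter_first()
while it:
    print(contents.get_value(it, 0))
    it = contents.iter_next(it)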

View File

@@ -199,13 +199,10 @@ class gtkthread(threading.Thread):
def main(server, eventHandler):
try:
cmdline = server.runCommand(["getCmdLineAction"])
if cmdline and not cmdline['action']:
print(cmdline['msg'])
return
elif not cmdline or (cmdline['action'] and cmdline['action'][0] != "generateDotGraph"):
if not cmdline or cmdline[0] != "generateDotGraph":
print("This UI is only compatible with the -g option")
return
ret = server.runCommand(["generateDepTreeEvent", cmdline['action'][1], cmdline['action'][2]])
ret = server.runCommand(["generateDepTreeEvent", cmdline[1], cmdline[2]])
if ret != True:
print("Couldn't run command! %s" % ret)
return
@@ -250,13 +247,13 @@ def main(server, eventHandler):
continue
if isinstance(event, bb.event.CacheLoadCompleted):
pbar.hide()
gtk.gdk.threads_enter()
pbar.update(progress_total, progress_total)
gtk.gdk.threads_leave()
continue
if isinstance(event, bb.event.ParseStarted):
progress_total = event.total
if progress_total == 0:
continue
gtk.gdk.threads_enter()
pbar.set_title("Processing recipes")
pbar.update(0, progress_total)

View File

@@ -82,12 +82,8 @@ def main (server, eventHandler):
try:
cmdline = server.runCommand(["getCmdLineAction"])
if not cmdline:
print("Nothing to do. Use 'bitbake world' to build everything, or run 'bitbake --help' for usage information.")
return 1
elif not cmdline['action']:
print(cmdline['msg'])
return 1
ret = server.runCommand(cmdline['action'])
ret = server.runCommand(cmdline)
if ret != True:
print("Couldn't get default commandline! %s" % ret)
return 1

File diff suppressed because it is too large.

View File

@@ -80,12 +80,8 @@ def main(server, eventHandler):
try:
cmdline = server.runCommand(["getCmdLineAction"])
if not cmdline:
print("Nothing to do. Use 'bitbake world' to build everything, or run 'bitbake --help' for usage information.")
return 1
elif not cmdline['action']:
print(cmdline['msg'])
return 1
ret = server.runCommand(cmdline['action'])
ret = server.runCommand(cmdline)
if ret != True:
print("Couldn't get default commandline! %s" % ret)
return 1
@@ -154,17 +150,12 @@ def main(server, eventHandler):
logger.info(event._message)
continue
if isinstance(event, bb.event.ParseStarted):
if event.total == 0:
continue
parseprogress = new_progress("Parsing recipes", event.total).start()
continue
if isinstance(event, bb.event.ParseProgress):
parseprogress.update(event.current)
continue
if isinstance(event, bb.event.ParseCompleted):
if not parseprogress:
continue
parseprogress.finish()
print(("Parsing of %d .bb files complete (%d cached, %d parsed). %d targets, %d skipped, %d masked, %d errors."
% ( event.total, event.cached, event.parsed, event.virtuals, event.skipped, event.masked, event.errors)))
@@ -232,7 +223,6 @@ def main(server, eventHandler):
bb.event.StampUpdate,
bb.event.ConfigParsed,
bb.event.RecipeParsed,
bb.event.RecipePreFinalise,
bb.runqueue.runQueueEvent,
bb.runqueue.runQueueExitWait)):
continue

View File

@@ -232,12 +232,8 @@ class NCursesUI:
try:
cmdline = server.runCommand(["getCmdLineAction"])
if not cmdline:
print("Nothing to do. Use 'bitbake world' to build everything, or run 'bitbake --help' for usage information.")
return
elif not cmdline['action']:
print(cmdline['msg'])
return
ret = server.runCommand(cmdline['action'])
ret = server.runCommand(cmdline)
if ret != True:
print("Couldn't get default commandline! %s" % ret)
return

View File

@@ -76,7 +76,7 @@ class BBUIEventQueue:
self.host, self.port = server.socket.getsockname()
server.register_function( self.system_quit, "event.quit" )
server.register_function( self.send_event, "event.sendpickle" )
server.register_function( self.send_event, "event.send" )
server.socket.settimeout(1)
self.EventHandle = self.BBServer.registerEventHandler(self.host, self.port)

View File

@@ -402,7 +402,7 @@ def fileslocked(files):
for lock in locks:
bb.utils.unlockfile(lock)
def lockfile(name, shared=False, retry=True):
def lockfile(name, shared=False):
"""
Use the file fn as a lock file, return when the lock has been acquired.
Returns a variable to pass to unlockfile().
@@ -418,8 +418,6 @@ def lockfile(name, shared=False, retry=True):
op = fcntl.LOCK_EX
if shared:
op = fcntl.LOCK_SH
if not retry:
op = op | fcntl.LOCK_NB
while True:
# If we leave the lockfiles lying around there is no problem
@@ -444,8 +442,6 @@ def lockfile(name, shared=False, retry=True):
lf.close()
except Exception:
continue
if not retry:
return None
def unlockfile(lf):
"""

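The hunk above adds a retry parameter to bb.utils.lockfile(): when retry is False the lock operation is made non-blocking and the function returns None if the lock cannot be taken. A simplified sketch of that flock pattern follows; the real function also re-validates the lock file on disk before returning.

import fcntl

def simple_lockfile(name, shared=False, retry=True):
    # LOCK_SH for shared locks, LOCK_EX for exclusive ones
    op = fcntl.LOCK_SH if shared else fcntl.LOCK_EX
    if not retry:
        op |= fcntl.LOCK_NB          # fail fast instead of blocking
    lf = open(name, "a+")
    try:
        fcntl.flock(lf.fileno(), op)
    except IOError:
        lf.close()
        return None                  # lock is held elsewhere
    return lf

def simple_unlockfile(lf):
    fcntl.flock(lf.fileno(), fcntl.LOCK_UN)
    lf.close()

lock = simple_lockfile("/tmp/sketch.lock", retry=False)
if lock:
    simple_unlockfile(lock)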
View File

@@ -1,11 +0,0 @@
__version__ = "1.0.0"
import os, time
import sys,logging
def init_logger(logfile, loglevel):
numeric_level = getattr(logging, loglevel.upper(), None)
if not isinstance(numeric_level, int):
raise ValueError('Invalid log level: %s' % loglevel)
logging.basicConfig(level=numeric_level, filename=logfile)

View File

@@ -1,100 +0,0 @@
import logging
import os.path
import errno
import sys
import warnings
import sqlite3
try:
import sqlite3
except ImportError:
from pysqlite2 import dbapi2 as sqlite3
sqlversion = sqlite3.sqlite_version_info
if sqlversion[0] < 3 or (sqlversion[0] == 3 and sqlversion[1] < 3):
raise Exception("sqlite3 version 3.3.0 or later is required.")
class NotFoundError(StandardError):
pass
class PRTable():
def __init__(self,cursor,table):
self.cursor = cursor
self.table = table
#create the table
self._execute("CREATE TABLE IF NOT EXISTS %s \
(version TEXT NOT NULL, \
checksum TEXT NOT NULL, \
value INTEGER, \
PRIMARY KEY (version,checksum));"
% table)
def _execute(self, *query):
"""Execute a query, waiting to acquire a lock if necessary"""
count = 0
while True:
try:
return self.cursor.execute(*query)
except sqlite3.OperationalError as exc:
if 'database is locked' in str(exc) and count < 500:
count = count + 1
continue
raise
except sqlite3.IntegrityError as exc:
print "Integrity error %s" % str(exc)
break
def getValue(self, version, checksum):
data=self._execute("SELECT value FROM %s WHERE version=? AND checksum=?;" % self.table,
(version,checksum))
row=data.fetchone()
if row != None:
return row[0]
else:
#no value found, try to insert
self._execute("INSERT INTO %s VALUES (?, ?, (select ifnull(max(value)+1,0) from %s where version=?));"
% (self.table,self.table),
(version,checksum,version))
data=self._execute("SELECT value FROM %s WHERE version=? AND checksum=?;" % self.table,
(version,checksum))
row=data.fetchone()
if row != None:
return row[0]
else:
raise NotFoundError
class PRData(object):
"""Object representing the PR database"""
def __init__(self, filename):
self.filename=os.path.abspath(filename)
#build directory hierarchy
try:
os.makedirs(os.path.dirname(self.filename))
except OSError as e:
if e.errno != errno.EEXIST:
raise e
self.connection=sqlite3.connect(self.filename, timeout=5,
isolation_level=None)
self.cursor=self.connection.cursor()
self._tables={}
def __del__(self):
print "PRData: closing DB %s" % self.filename
self.connection.close()
def __getitem__(self,tblname):
if not isinstance(tblname, basestring):
raise TypeError("tblname argument must be a string, not '%s'" %
type(tblname))
if tblname in self._tables:
return self._tables[tblname]
else:
tableobj = self._tables[tblname] = PRTable(self.cursor, tblname)
return tableobj
def __delitem__(self, tblname):
if tblname in self._tables:
del self._tables[tblname]
logging.info("drop table %s" % (tblname))
self.cursor.execute("DROP TABLE IF EXISTS %s;" % tblname)
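The _execute() helper above copes with SQLite's coarse locking by retrying while the "database is locked" error is reported, and getValue() then uses an insert-if-absent query to hand out increasing PR values. The retry loop in isolation, written as a standalone helper, is sketched below; the 500-attempt cap mirrors the code above and everything else is illustrative.

import sqlite3

def execute_with_retry(cursor, *query):
    """Retry a statement while SQLite reports the database as locked."""
    count = 0
    while True:
        try:
            return cursor.execute(*query)
        except sqlite3.OperationalError as exc:
            if 'database is locked' in str(exc) and count < 500:
                count += 1
                continue
            raise

conn = sqlite3.connect(":memory:")
execute_with_retry(conn.cursor(), "CREATE TABLE IF NOT EXISTS demo (value INTEGER)")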

View File

@@ -1,198 +0,0 @@
import os,sys,logging
import signal,time, atexit
from SimpleXMLRPCServer import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler
import xmlrpclib,sqlite3
import bb.server.xmlrpc
import prserv
import prserv.db
if sys.hexversion < 0x020600F0:
print("Sorry, python 2.6 or later is required.")
sys.exit(1)
class Handler(SimpleXMLRPCRequestHandler):
def _dispatch(self,method,params):
try:
value=self.server.funcs[method](*params)
except:
import traceback
traceback.print_exc()
raise
return value
class PRServer(SimpleXMLRPCServer):
pidfile="/tmp/PRServer.pid"
def __init__(self, dbfile, logfile, interface, daemon=True):
''' constructor '''
SimpleXMLRPCServer.__init__(self, interface,
requestHandler=SimpleXMLRPCRequestHandler,
logRequests=False, allow_none=True)
self.dbfile=dbfile
self.daemon=daemon
self.logfile=logfile
self.host, self.port = self.socket.getsockname()
self.db=prserv.db.PRData(dbfile)
self.table=self.db["PRMAIN"]
self.register_function(self.getPR, "getPR")
self.register_function(self.quit, "quit")
self.register_function(self.ping, "ping")
self.register_introspection_functions()
def ping(self):
return not self.quit
def getPR(self, version, checksum):
try:
return self.table.getValue(version,checksum)
except prserv.NotFoundError:
logging.error("can not find value for (%s, %s)",version,checksum)
return None
except sqlite3.Error as exc:
logging.error(str(exc))
return None
def quit(self):
self.quit=True
return
def _serve_forever(self):
self.quit = False
self.timeout = 0.5
while not self.quit:
self.handle_request()
logging.info("PRServer: stopping...")
self.server_close()
return
def start(self):
if self.daemon is True:
logging.info("PRServer: starting daemon...")
self.daemonize()
else:
logging.info("PRServer: starting...")
self._serve_forever()
def delpid(self):
os.remove(PRServer.pidfile)
def daemonize(self):
"""
See Advanced Programming in the UNIX, Sec 13.3
"""
os.umask(0)
try:
pid = os.fork()
if pid > 0:
sys.exit(0)
except OSError as e:
sys.stderr.write("1st fork failed: %d %s\n" % (e.errno, e.strerror))
sys.exit(1)
os.setsid()
"""
fork again to make sure the daemon is not session leader,
which prevents it from acquiring controlling terminal
"""
try:
pid = os.fork()
if pid > 0: #parent
sys.exit(0)
except OSError as e:
sys.stderr.write("2nd fork failed: %d %s\n" % (e.errno, e.strerror))
sys.exit(1)
os.chdir("/")
sys.stdout.flush()
sys.stderr.flush()
si = file('/dev/null', 'r')
so = file(self.logfile, 'a+')
se = so
os.dup2(si.fileno(),sys.stdin.fileno())
os.dup2(so.fileno(),sys.stdout.fileno())
os.dup2(se.fileno(),sys.stderr.fileno())
# write pidfile
atexit.register(self.delpid)
pid = str(os.getpid())
pf = file(PRServer.pidfile, 'w+')
pf.write("%s\n" % pid)
pf.write("%s\n" % self.host)
pf.write("%s\n" % self.port)
pf.close()
self._serve_forever()
class PRServerConnection():
def __init__(self, host, port):
self.connection = bb.server.xmlrpc._create_server(host, port)
self.host = host
self.port = port
def terminate(self):
# Don't wait for server indefinitely
import socket
socket.setdefaulttimeout(2)
try:
self.connection.quit()
except:
pass
def getPR(self, version, checksum):
return self.connection.getPR(version, checksum)
def ping(self):
return self.connection.ping()
def start_daemon(options):
try:
pf = file(PRServer.pidfile,'r')
pid = int(pf.readline().strip())
pf.close()
except IOError:
pid = None
if pid:
sys.stderr.write("pidfile %s already exists. Daemon already running?\n"
% PRServer.pidfile)
sys.exit(1)
server = PRServer(options.dbfile, interface=(options.host, options.port),
logfile=os.path.abspath(options.logfile))
server.start()
def stop_daemon():
try:
pf = file(PRServer.pidfile,'r')
pid = int(pf.readline().strip())
host = pf.readline().strip()
port = int(pf.readline().strip())
pf.close()
except IOError:
pid = None
if not pid:
sys.stderr.write("pidfile %s does not exist. Daemon not running?\n"
% PRServer.pidfile)
sys.exit(1)
PRServerConnection(host,port).terminate()
time.sleep(0.5)
try:
while 1:
os.kill(pid,signal.SIGTERM)
time.sleep(0.1)
except OSError as err:
err = str(err)
if err.find("No such process") > 0:
if os.path.exists(PRServer.pidfile):
os.remove(PRServer.pidfile)
else:
print err
sys.exit(1)
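daemonize() above follows the classic double-fork recipe referenced in its docstring: fork and exit in the parent, start a new session, fork again so the daemon can never reacquire a controlling terminal, then detach standard I/O. Stripped of the pidfile and logfile handling the server also performs, the skeleton is roughly the sketch below.

import os
import sys

def daemonize():
    if os.fork() > 0:        # first fork: the original parent exits
        sys.exit(0)
    os.setsid()              # new session, no controlling terminal
    if os.fork() > 0:        # second fork: no longer a session leader
        sys.exit(0)
    os.chdir("/")
    os.umask(0)
    sys.stdout.flush()
    sys.stderr.flush()
    devnull = open("/dev/null", "r+")
    os.dup2(devnull.fileno(), sys.stdin.fileno())
    os.dup2(devnull.fileno(), sys.stdout.fileno())
    os.dup2(devnull.fileno(), sys.stderr.fileno())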

View File

@@ -6,20 +6,19 @@
<para>
Recall that earlier we talked about how to use an existing toolchain
tarball that had been installed into <filename>/opt/poky</filename>,
which is outside of the Yocto Project build tree
which is outside of the Poky build environment
(see <xref linkend='using-an-existing-toolchain-tarball'>
“Using an Existing Toolchain Tarball”)</xref>.
And, that sourcing your architecture-specific environment setup script
initializes a suitable cross-toolchain development environment.
initializes a suitable development environment.
This setup occurs by adding the compiler, QEMU scripts, QEMU binary,
a special version of <filename>pkgconfig</filename> and other useful
utilities to the <filename>PATH</filename> variable.
Variables to assist <filename>pkgconfig</filename> and <filename>autotools</filename>
are also defined so that,
Variables to assist pkgconfig and autotools are also defined so that,
for example, <filename>configure.sh</filename> can find pre-generated
test results for tests that need target hardware on which to run.
These conditions allow you to easily use the toolchain outside of the
Yocto Project build environment on both autotools-based projects and
Poky build environment on both autotools-based projects and
makefile-based projects.
</para>

View File

@@ -29,7 +29,7 @@
<orderedlist>
<listitem><para>Be sure the optimal version of Eclipse IDE
is installed.</para></listitem>
<listitem><para>Install Eclipse plug-in requirements prior to installing
<listitem><para>Install required Eclipse plug-ins prior to installing
the Eclipse Yocto Plug-in.</para></listitem>
<listitem><para>Configure the Eclipse Yocto Plug-in.</para></listitem>
</orderedlist>
@@ -38,7 +38,7 @@
<section id='installing-eclipse-ide'>
<title>Installing Eclipse IDE</title>
<para>
It is recommended that you have the Indigo 3.7 version of the
It is recommended that you have the Helios 3.6.1 version of the
Eclipse IDE installed on your development system.
If you don't have this version you can find it at
<ulink url='http://www.eclipse.org/downloads'></ulink>.
@@ -78,14 +78,14 @@
<title>Installing Required Plug-ins and the Eclipse Yocto Plug-in</title>
<para>
Before installing the Yocto Plug-in you need to be sure that the
CDT 8.0, RSE 3.2, and Autotools plug-ins are all installed in the
CDT 7.0, RSE 3.2, and Autotools plug-ins are all installed in the
following order.
After installing these three plug-ins, you can install the
Eclipse Yocto Plug-in.
Use the following URLs for the plug-ins:
<orderedlist>
<listitem><para><emphasis>CDT 8.0</emphasis>
<ulink url='http://download.eclipse.org/tools/cdt/releases/indigo/'></ulink>:
<listitem><para><emphasis>CDT 7.0</emphasis>
<ulink url='http://download.eclipse.org/tools/cdt/releases/helios/'></ulink>:
For CDT main features select the checkbox so you get all items.
For CDT optional features expand the selections and check
“C/C++ Remote Launch”.</para></listitem>
@@ -147,26 +147,26 @@
<section id='configuring-the-cross-compiler-options'>
<title>Configuring the Cross-Compiler Options</title>
<para>
Choose between Stand-alone Prebuilt Toolchain and Build System Derived Toolchain for Cross
Choose between SDK Root Mode and Poky Tree Mode for Cross
Compiler Options.
<itemizedlist>
<listitem><para><emphasis>Stand-alone Prebuilt Toolchain</emphasis> Select this mode
when you are not concerned with building a target image or you do not have
a Yocto Project build tree on your development system.
<listitem><para><emphasis>SDK Root Mode</emphasis> Select this mode
when you are not concerned with building an image or you do not have
a Poky build tree on your system.
For example, suppose you are an application developer and do not
need to build a target image.
Instead, you just want to use an architecture-specific toolchain on an
existing kernel and target root filesystem.
When you use Stand-alone Prebuilt Toolchain you are using the toolchain installed
need to build an image.
You just want to use an architecture-specific toolchain on an
existing kernel and root filesystem.
When you use SDK Root Mode you are using the toolchain installed
in the <filename>/opt/poky</filename> directory.</para></listitem>
<listitem><para><emphasis>Build System Derived Toolchain</emphasis> Select this mode
if you are building images for target hardware or your
development environment already has a Yocto Project build tree.
In this case you likely already have a Yocto Project build tree installed on
<listitem><para><emphasis>Poky Tree Mode</emphasis> Select this mode
if you are concerned with building images for hardware or your
development environment already has a build tree.
In this case you likely already have a Poky build tree installed on
your system or you (or someone else) will be building one.
When you select Build System Derived Toolchain you are using the toolchain bundled
inside the Yocto Project build tree.
If you use this mode you must also supply the Yocto Project build directory
When you use the Poky Tree Mode you are using the toolchain bundled
inside the Poky build tree.
If you use this mode you must also supply the Poky Root Location
in the Preferences Dialog.</para></listitem>
</itemizedlist>
</para>
@@ -175,11 +175,11 @@
<section id='configuring-the-sysroot'>
<title>Configuring the Sysroot</title>
<para>
Specify the sysroot location, which is where the root filesystem for the
target hardware is created on the development system by the ADT Installer.
The QEMU user-space tools, the
NFS boot process and the cross-toolchain all use the sysroot location
regardless of whether you select (Stand-alone Prebuilt Toolchain or Build System Derived Toolchain).
Specify the sysroot, which is used by both the QEMU user-space
NFS boot process and by the cross-toolchain regardless of the
mode you select (SDK Root Mode or Poky Tree Mode).
For example, sysroot is the location to which you extract the
downloaded image's root filesystem through the ADT Installer.
</para>
</section>
@@ -212,11 +212,10 @@
<listitem><para><emphasis>QEMU</emphasis> Select this option if
you will be using the QEMU emulator.
If you are using the emulator you also need to locate the Kernel
and specify any custom options.</para>
<para>If you select Build System Derived Toolchain the target kernel you built
will be located in the
Yocto Project build tree in <filename>tmp/deploy/images</filename> directory.
If you select Stand-alone Prebuilt Toolchain the pre-built kernel you downloaded is located
and you can specify custom options.</para>
<para>In Poky Tree Mode the kernel you built will be located in the
Poky Build tree in <filename>tmp/deploy/images</filename> directory.
In SDK Root Mode the pre-built kernel you downloaded is located
in the directory you specified when you downloaded the image.</para>
<para>Most custom options are for advanced QEMU users to further
customize their QEMU instance.
@@ -288,10 +287,10 @@
You can change these settings for a given project by following these steps:
<orderedlist>
<listitem><para>Select Project -> Invoke Yocto Tools -> Reconfigure Yocto.
This brings up the project's Yocto Settings Dialog.
This brings up the project Yocto Settings Dialog.
Settings are inherited from the default project configuration.
The information in this dialogue is identical to that chosen earlier
for the Cross Compiler Option (Stand-alone Prebuilt Toolchain or Build System Derived Toolchain),
for the Cross Compiler Option (SDK Root Mode or Poky Tree Mode),
the Target Architecture, and the Target Options.
The settings are inherited from the Yocto Plug-in configuration performed
after installing the plug-in.</para></listitem>
@@ -309,7 +308,7 @@
<title>Building the Project</title>
<para>
To build the project, select Project -&gt; Build Project.
The console should update and you can note the cross-compiler you are using.
You should see the console updated and you can note the cross-compiler you are using.
</para>
</section>
@@ -401,10 +400,10 @@
on your local host machine.
The oprofile-server is installed by default in the image.</para></listitem>
<listitem><para><emphasis>Lttng-ust:</emphasis> Selecting this tool runs
<filename>usttrace</filename> on the remote target, transfers the output data back to the
local host machine and uses <filename>lttv-gui</filename> to graphically display the output.
The <filename>lttv-gui</filename> must be installed on the local host machine to use this tool.
For information on how to use <filename>lttng</filename> to trace an application, see
"usttrace" on the remote target, transfers the output data back to the
local host machine and uses "lttv-gui" to graphically display the output.
The "lttv-gui" must be installed on the local host machine to use this tool.
For information on how to use "lttng" to trace an application, see
<ulink url='http://lttng.org/files/ust/manual/ust.html'></ulink>.</para>
<para>For "Application" you must supply the absolute path name of the
application to be traced by user mode lttng.
@@ -418,10 +417,10 @@
new view called "powertop".</para>
<para>"Time to gather data(sec):" is the time passed in seconds before data
is gathered from the remote target for analysis.</para>
<para>"show pids in wakeups list:" corresponds to the <filename>-p</filename> argument
passed to <filename>powertop</filename>.</para></listitem>
<para>"show pids in wakeups list:" corresponds to the -p argument
passed to "powertop".</para></listitem>
<listitem><para><emphasis>LatencyTOP and Perf:</emphasis> "LatencyTOP"
identifies system latency, while <filename>perf</filename> monitors the system's
identifies system latency, while "perf" monitors the system's
performance counter registers.
Selecting either of these tools causes an RSE terminal view to appear
from which you can run the tools.

View File

@@ -21,7 +21,7 @@
</para>
<para>
Additionally, to provide an effective development platform, the Yocto Project
makes available and suggests other tools you can use with the ADT.
makes available and suggests other tools as part of the ADT.
These other tools include the Eclipse IDE Yocto Plug-in, an emulator (QEMU),
and various user-space tools that greatly enhance your development experience.
</para>
@@ -35,9 +35,7 @@
<title>The Cross-Toolchain</title>
<para>
The cross-toolchain consists of a cross-compiler, cross-linker, and cross-debugger
that are used to develop for targeted hardware.
This toolchain is created either by running the ADT Installer script or
through a Yocto Project build tree that is based on your metadata
that are all generated through a Poky build that is based on your metadata
configuration or extension for your targeted device.
The cross-toolchain works with a matching target sysroot.
</para>
@@ -57,19 +55,9 @@
<title>The QEMU Emulator</title>
<para>
The QEMU emulator allows you to simulate your hardware while running your
application or image.
QEMU is made available in a number of ways:
<itemizedlist>
<listitem><para>If you use the ADT Installer script to install ADT you can
specify whether or not to install QEMU.</para></listitem>
<listitem><para>If you have downloaded a Yocto Project release and unpacked
it to create a Yocto Project source directory followed by sourcing
the Yocto Project environment setup script, QEMU is installed and automatically
available.</para></listitem>
<listitem><para>If you have installed the cross-toolchain
tarball followed by sourcing the toolchain's setup environment script, QEMU
is installed and automatically available.</para></listitem>
</itemizedlist>
application or image.
QEMU is installed several ways: as part of the Poky tree, ADT installation
through a toolchain tarball, or through the ADT Installer.
</para>
</section>

View File

@@ -38,6 +38,11 @@
<date>23 May 2011</date>
<revremark>Released with Yocto Project 1.0.1 on 23 May 2011.</revremark>
</revision>
<revision>
<revnumber>1.0.2</revnumber>
<date>20 December 2011</date>
<revremark>Released with Yocto Project 1.0.2 on 20 December 2011.</revremark>
</revision>
</revhistory>
<copyright>

View File

@@ -8,13 +8,13 @@
likely that you will need to customize your development packages installation.
For example, if you are developing a minimal image then you might not need
certain packages (e.g. graphics support packages).
Thus, you would like to be able to remove those packages from your target sysroot.
Thus, you would like to be able to remove those packages from your sysroot.
</para>
<section id='package-management-systems'>
<title>Package Management Systems</title>
<para>
The Yocto Project supports the generation of sysroot files using
The Yocto Project supports the generation of root filesystem files using
three different Package Management Systems (PMS):
<itemizedlist>
<listitem><para><emphasis>OPKG</emphasis> A less well known PMS whose use
@@ -30,7 +30,7 @@
for more information about RPM.</para></listitem>
<listitem><para><emphasis>Debian</emphasis> The PMS for Debian-based systems
is built on many PMS tools.
The lower-level PMS tool <filename>dpkg</filename> forms the base of the Debian PMS.
The lower-level PMS tool dpkg forms the base of the Debian PMS.
For information on dpkg see
<ulink url='http://en.wikipedia.org/wiki/Dpkg'></ulink>.</para></listitem>
</itemizedlist>
@@ -44,16 +44,16 @@
<filename>PACKAGE_CLASSES</filename> variable in the <filename>conf/local.conf</filename>
file is set to reflect that system.
The first value you choose for the variable specifies the package file format for the root
filesystem at sysroot.
filesystem.
Additional values specify additional formats for convenience or testing.
See the configuration file for details.
</para>
<para>
As an example, consider a scenario where you are using OPKG and you want to add
the <filename>libglade</filename> package to the target sysroot.
the libglade package to sysroot.
</para>
<para>
First, you should generate the ipk file for the <filename>libglade</filename> package and add it
First, you should generate the ipk file for the libglade package and add it
into a working opkg repository.
Use these commands:
<literallayout class='monospaced'>
@@ -62,17 +62,17 @@
</literallayout>
</para>
<para>
Next, source the environment setup script found in the Yocto Project source directory.
Next, source the environment setup script.
Follow that by setting up the installation destination to point to your
sysroot as <filename>&lt;sysroot_dir&gt;</filename>.
Finally, have an opkg configuration file <filename>&lt;conf_file&gt;</filename>
sysroot as <filename>&lt;sysroot dir&gt;</filename>.
Finally, have an opkg configuration file <filename>&lt;conf file&gt;</filename>
that corresponds to the opkg repository you have just created.
The following command forms should now work:
<literallayout class='monospaced'>
$ opkg-cl f &lt;conf_file&gt; -o &lt;sysroot-dir&gt; update
$ opkg-cl f &lt;conf_file&gt; -o &lt;sysroot-dir&gt; --force-overwrite install libglade
$ opkg-cl f &lt;conf_file&gt; -o &lt;sysroot-dir&gt; --force-overwrite install libglade-dbg
$ opkg-cl f &lt;conf_file&gt; -o &lt;sysroot-dir&gt; --force-overwrite install libglade-dev
$ opkg-cl f &lt;conf file&gt; -o &lt;sysroot dir&gt; update
$ opkg-cl f &lt;conf file&gt; -o &lt;sysroot dir&gt; --force-overwrite install libglade
$ opkg-cl f &lt;conf file&gt; -o &lt;sysroot dir&gt; --force-overwrite install libglade-dbg
$ opkg-cl f &lt;conf file&gt; -o &lt;sysroot dir&gt; --force-overwrite install libglade-dev
</literallayout>
</para>
</section>
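The opkg-cl invocations above lend themselves to a small wrapper. The sketch below simply shells out to the same commands; it assumes the conventional -f spelling of the configuration-file option, and the paths in the commented example are placeholders you would replace with your own repository configuration file and sysroot directory.

import subprocess

def opkg_install(conf_file, sysroot_dir, packages):
    base = ["opkg-cl", "-f", conf_file, "-o", sysroot_dir]
    subprocess.check_call(base + ["update"])
    for pkg in packages:
        subprocess.check_call(base + ["--force-overwrite", "install", pkg])

# For example (placeholder paths):
# opkg_install("/path/to/opkg.conf", "/path/to/sysroot",
#              ["libglade", "libglade-dbg", "libglade-dev"])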

View File

@@ -6,159 +6,74 @@
<title>Preparing to Use the Application Development Toolkit (ADT)</title>
<para>
In order to use the ADT you must install it, source a script to set up the
environment, and be sure the kernel and filesystem image specific to the target architecture
exist.
</para>
<para>
This section describes how to be sure you meet these requirements.
Throughout this section two important terms are used:
<itemizedlist>
<listitem><para><emphasis>Yocto Project Source Tree:</emphasis>
This term refers to the directory structure created as a result of downloading
and unpacking a Yocto Project release tarball.
The Yocto Project source tree contains Bitbake, Documentation, Meta-data and
other files.
The name of the top-level directory of the Yocto Project source tree
is derived from the Yocto Project release tarball.
For example, downloading and unpacking <filename>poky-bernard-5.0.1.tar.bz2</filename>
results in a Yocto Project source tree whose Yocto Project source directory is named
<filename>poky-bernard-5.0.1</filename>.</para></listitem>
<listitem><para><emphasis>Yocto Project Build Tree:</emphasis>
This term refers to the area where you run your builds.
The area is created when you source the Yocto Project setup environment script
that is found in the Yocto Project source directory
(e.g. <filename>poky-init-build-env</filename>).
You can create the Yocto Project build tree anywhere you want on your
development system.
Here is an example that creates the tree in <filename>mybuilds</filename>
and names the Yocto Project build directory <filename>YP-5.0.1</filename>:
<literallayout class='monospaced'>
$ source poky-bernard-5.0.1/poky-init-build-env $HOME/mybuilds/YP-5.0.1
</literallayout>
If you don't specifically name the build directory then Bitbake creates it
in the current directory and uses the name <filename>build</filename>.
Also, if you supply an existing directory then Bitbake uses that
directory as the Yocto Project build directory and populates the build tree
beneath it.</para></listitem>
</itemizedlist>
In order to use the ADT it must be installed, the environment setup script must be
sourced, and the kernel and filesystem image specific to the target architecture must exist.
This section describes how to install the ADT, set up the environment, and provides
some reference information on kernels and filesystem images.
</para>
<section id='installing-the-adt'>
<title>Installing the ADT</title>
<para>
The following list describes how you can install the ADT, which includes the cross-toolchain.
Regardless of the installation you choose, however, you must source the cross-toolchain
environment setup script before you use the toolchain.
See the <xref linkend='setting-up-the-environment'>“Setting Up the Environment”</xref>
section for more information.
<itemizedlist>
<listitem><para><emphasis>Use the ADT Installer Script:</emphasis>
This method is the recommended way to install the ADT because it
automates much of the process for you.
For example, you can configure the installation to install the QEMU emulator
and the user-space NFS, specify which root filesystem profiles to download,
and define the target sysroot location.
</para></listitem>
<listitem><para><emphasis>Use an Existing Toolchain Tarball:</emphasis>
Using this method you select and download an architecture-specific
toolchain tarball and then hand-install the toolchain.
If you use this method you just get the cross-toolchain and QEMU - you do not
get any of the other benefits you would have had by running the ADT Installer script.</para></listitem>
<listitem><para><emphasis>Use the Toolchain from Within a Yocto Project Build Tree:</emphasis>
If you already have a Yocto Project build tree you can install the cross-toolchain
using that tree.
However, like the previous method mentioned, you only get the cross-toolchain and QEMU - you
do not get any of the other benefits without taking separate steps.</para></listitem>
</itemizedlist>
You can install the ADT in three ways.
However, we recommend configuring and running the ADT Installer script.
Running this script automates much of the process for you.
For example, the script allows you to install the QEMU emulator and
user-space NFS, define which root filesystem profiles to download,
and define the target sysroot location.
</para>
<note>
If you need to generate the ADT tarball you can do so using the following command:
<literallayout class='monospaced'>
$ bitbake adt-installer
</literallayout>
This command generates the file <filename>adt-installer.tar.bz2</filename>
in the <filename>../build/tmp/deploy/sdk</filename> directory.
</note>
<section id='using-the-adt-installer'>
<title>Using the ADT Installer</title>
<section id='configuring-and-running-the-adt-installer'>
<title>Configuring and Running the ADT Installer</title>
<para>
To run the ADT Installer you first need to get the ADT Installer tarball and then run the ADT
Installer script.
The ADT Installer is contained in a tarball that can be built using
<filename>bitbake adt-installer</filename>.
Yocto Project also provides a pre-built ADT Installer tarball that you can download;
if you build the tarball yourself, you can find it in <filename>tmp/deploy/sdk</filename> within the build directory.
</para>
<section id='getting-the-adt-installer-tarball'>
<title>Getting the ADT Installer Tarball</title>
<note>
You can unpack the ADT Installer tarball and run the installer from any directory you want.
</note>
<para>
The ADT Installer is contained in the ADT Installer tarball.
You can download the tarball into any directory from
<ulink url='http://autobuilder.yoctoproject.org/downloads/yocto-1.0/adt-installer/'></ulink>.
Or, you can use Bitbake to generate the tarball inside the existing Yocto Project build tree.
</para>
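<para>
For example, you could fetch the pre-built tarball with a command similar to the following
(a sketch only; the exact tarball filename at that location may differ):
<literallayout class='monospaced'>
$ wget http://autobuilder.yoctoproject.org/downloads/yocto-1.0/adt-installer/adt_installer.tar.bz2
</literallayout>
</para>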
<para>
Before running the ADT Installer you need to configure it by editing
the <filename>adt-installer.conf</filename> file, which is located in the
directory where the ADT Installer tarball was installed.
Your configurations determine which kernel and filesystem image are downloaded.
The following list describes the variables you can define for the ADT Installer.
For configuration values and restrictions see the comments in
the <filename>adt-installer.conf</filename> file:
<para>
If you use Bitbake to generate the ADT Installer tarball, you must
source the Yocto Project environment setup script located in the Yocto Project
source directory before running the Bitbake command that creates the tarball.
</para>
<para>
The following example commands download the Yocto Project release tarball, create the Yocto
Project source tree, set up the environment while also creating the Yocto Project build tree,
and finally run the Bitbake command that results in the tarball
<filename>~/yocto-project/build/tmp/deploy/sdk/adt_installer.tar.bz2</filename>:
<literallayout class='monospaced'>
$ cd ~
$ mkdir yocto-project
$ cd yocto-project
$ wget http://www.yoctoproject.org/downloads/poky/poky-bernard-5.0.1.tar.bz2
$ tar xjf poky-bernard-5.0.1.tar.bz2
$ source poky-bernard-5.0.1/poky-init-build-env poky-5.0.1-build
$ bitbake adt-installer
</literallayout>
</para>
</section>
<section id='configuring-and-running-the-adt-installer-script'>
<title>Configuring and Running the ADT Installer Script</title>
<para>
Before running the ADT Installer script you need to unpack the tarball.
You can unpack the tarball in any directory you wish.
Unpacking it creates the directory <filename>adt-installer</filename>,
which contains the ADT Installer script and its configuration file.
</para>
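<para>
For example, assuming the tarball is in your current working directory, you could unpack it
as follows (a sketch only; your tarball filename may differ slightly):
<literallayout class='monospaced'>
$ tar xjf adt_installer.tar.bz2
$ cd adt-installer
</literallayout>
</para>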
<para>
Before you run the script, however, you should examine the ADT Installer configuration
file (<filename>adt_installer.conf</filename>) and be sure you are going to get what you want.
Your configurations determine which kernel and filesystem image are downloaded.
</para>
<para>
The following list describes the configurations you can define for the ADT Installer.
For configuration values and restrictions see the comments in
the <filename>adt-installer.conf</filename> file:
<itemizedlist>
<listitem><para><filename>YOCTOADT_IPKG_REPO</filename> This area
includes the IPKG-based packages and the root filesystem upon which
the installation is based.
If you want to set up your own IPKG repository pointed to by
<filename>YOCTOADT_IPKG_REPO</filename>, you need to be sure that the
directory structure follows the same layout as the reference directory
set up at <ulink url='http://adtrepo.yoctoproject.org'></ulink>.
Also, your repository needs to be accessible through HTTP.
</para></listitem>
<listitem><para><filename>YOCTOADT_TARGETS</filename> The machine
target architectures for which you want to set up cross-development
environments.
</para></listitem>
<listitem><para><filename>YOCTOADT_QEMU</filename> Indicates whether
or not to install the emulator QEMU.
</para></listitem>
<listitem><para><filename>YOCTOADT_NFS_UTIL</filename> Indicates whether
or not to install user-mode NFS.
If you plan to use the Yocto Eclipse IDE plug-in against QEMU,
you should install NFS.
<note>
To boot QEMU images using our userspace NFS server, you need
to be running portmap or rpcbind.
@@ -168,138 +83,112 @@
Your firewall settings may also have to be modified to allow
NFS booting to work.
</note>
</para></listitem>
<listitem><para><filename>YOCTOADT_ROOTFS_&lt;arch&gt;</filename> - The root
filesystem images you want to download from the <filename>YOCTOADT_IPKG_REPO</filename>
repository.
</para></listitem>
<listitem><para><filename>YOCTOADT_TARGET_SYSROOT_IMAGE_&lt;arch&gt;</filename> - The
particular root filesystem used to extract and create the target sysroot.
The value of this variable must have been specified with
<filename>YOCTOADT_ROOTFS_&lt;arch&gt;</filename>.
For example, if you downloaded both <filename>minimal</filename> and
<filename>sato-sdk</filename> images by setting <filename>YOCTOADT_ROOTFS_&lt;arch&gt;</filename>
to "minimal sato-sdk", then <filename>YOCTOADT_ROOTFS_&lt;arch&gt;</filename>
must be set to either "minimal" or "sato-sdk".
</para></listitem>
<listitem><para><filename>YOCTOADT_TARGET_SYSROOT_LOC_&lt;arch&gt;</filename> - The
location on the development host where the target sysroot will be created.
</para></listitem>
</itemizedlist>
</para>
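<para>
The following fragment sketches what a configuration for a single arm target might look like.
The variable names come from the list above; the values are illustrative assumptions only, so
see the comments in the configuration file for the values that apply to your setup:
<literallayout class='monospaced'>
YOCTOADT_IPKG_REPO="http://adtrepo.yoctoproject.org/1.0/ipk"
YOCTOADT_TARGETS="arm"
YOCTOADT_QEMU="Y"
YOCTOADT_NFS_UTIL="Y"
YOCTOADT_ROOTFS_arm="minimal sato-sdk"
YOCTOADT_TARGET_SYSROOT_IMAGE_arm="sato-sdk"
YOCTOADT_TARGET_SYSROOT_LOC_arm="$HOME/test_yocto/sysroot_arm"
</literallayout>
</para>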
<para>
After you have configured the <filename>adt_installer.conf</filename> file,
run the installer using the following command:
<literallayout class='monospaced'>
$ adt_installer
</literallayout>
</para>
<note>
The ADT Installer requires the <filename>libtool</filename> package to complete.
If you install the recommended packages as described in the
<ulink url='http://www.yoctoproject.org/docs/yocto-project-qs/yocto-project-qs.html'>
Yocto Project Quick Start</ulink> then you will have libtool installed.
</note>
<para>
Once the installer begins to run you are asked whether you want to run in
interactive or silent mode.
If you want to closely monitor the installation then choose “I” for interactive
mode rather than “S” for silent mode.
Follow the prompts from the script to complete the installation.
</para>
<para>
Once the installation completes, the ADT, which includes the cross-toolchain, is installed.
The environment setup files for the cross-toolchain are placed in
<filename>/opt/poky/$SDKVERSION</filename>,
the image tarballs are placed in the <filename>adt-installer</filename>
directory according to your installer configuration, and the target sysroot is created
in the location given by the <filename>YOCTOADT_TARGET_SYSROOT_LOC_&lt;arch&gt;</filename> variable
in your configuration file.
</para>
</section>
</section>
<section id='using-an-existing-toolchain-tarball'>
<title>Using a Cross-Toolchain Tarball</title>
<para>
If you want to simply install the cross-toolchain by hand you can do so by using an existing
cross-toolchain tarball.
If you install the cross-toolchain by hand you will have to set up the target sysroot separately.
</para></listitem>
<listitem><para><filename>YOCTOADT_ROOTFS_&lt;arch&gt;</filename> - The root
filesystem images you want to download.
</para></listitem>
<listitem><para><filename>YOCTOADT_TARGET_SYSROOT_IMAGE_&lt;arch&gt;</filename> - The
root filesystem used to extract and create the target sysroot.
</para></listitem>
<listitem><para><filename>YOCTOADT_TARGET_SYSROOT_LOC_&lt;arch&gt;</filename> - The
location of the target sysroot that will be set up on the development machine.
</para></listitem>
</itemizedlist>
</para>
<para>
After you have configured the <filename>adt-installer.conf</filename> file,
run the installer using the following command:
<literallayout class='monospaced'>
$ adt_installer
</literallayout>
</para>
<para>
Once the installer begins to run you are asked whether you want to run in
interactive or silent mode.
If you want to closely monitor the installation then choose “I” for interactive
mode rather than “S” for silent mode.
Follow the prompts from the script to complete the installation.
</para>
<para>
Once the installation completes, the cross-toolchain is installed in
<filename>/opt/poky/$SDKVERSION</filename>.
</para>
<para>
Before using the ADT you need to run the environment setup script for
your target architecture also located in <filename>/opt/poky/$SDKVERSION</filename>.
See the <xref linkend='setting-up-the-environment'>“Setting Up the Environment”</xref>
section for information.
</para>
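<para>
For example, for an arm target the command might look like the following (a sketch only; the
exact script name depends on your target architecture and toolchain):
<literallayout class='monospaced'>
$ source /opt/poky/1.0/environment-setup-armv5te-poky-linux-gnueabi
</literallayout>
</para>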
</section>
<section id='using-an-existing-toolchain-tarball'>
<title>Using an Existing Toolchain Tarball</title>
<para>
If you do not want to use the ADT Installer you can install the toolchain
and the sysroot by hand.
Follow these steps:
<orderedlist>
<listitem><para>Go to
<ulink url='http://autobuilder.yoctoproject.org/downloads/yocto-1.0/toolchain'></ulink>
and find the folder that matches your host development system
(i.e. 'i686' for 32-bit machines or 'x86_64' for 64-bit machines).</para>
</listitem>
<listitem><para>Go into that folder and download the toolchain tarball whose name
includes the appropriate target architecture.
For example, if your host development system is an Intel-based 64-bit system and
you are going to use your cross-toolchain for an arm target go into the
<filename>x86_64</filename> folder and download the following tarball:
<literallayout class='monospaced'>
yocto-eglibc-x86_64-arm-toolchain-gmae-1.0.tar.bz2
</literallayout>
<listitem><para>Locate and download the architecture-specific toolchain
tarball from <ulink url='http://autobuilder.yoctoproject.org/downloads/yocto-1.0'></ulink>.
Look in the toolchain folder and then open up the folder that matches your
host development system (i.e. 'i686' for 32-bit machines or 'x86_64'
for 64-bit machines).
Then, select the toolchain tarball whose name includes the appropriate
target architecture.
<note>
Alternatively you can build the toolchain tarball if you have a Yocto Project build tree.
Use the <filename>bitbake meta-toolchain</filename> command after you have
sourced the <filename>poky-init-build-env</filename> script located in the Yocto Project
source directory.
When the <filename>bitbake</filename> command completes the toolchain tarball will
be in <filename>tmp/deploy/sdk</filename> in the Yocto Project build tree.
</note></para></listitem>
If you need to build the toolchain tarball, use the
<filename>bitbake meta-toolchain</filename> command after you have
sourced the <filename>poky-init-build-env</filename> script.
The tarball will be located in the build directory at
<filename>tmp/deploy/sdk</filename> after the build.
</note>
</para></listitem>
<listitem><para>Make sure you are in the root directory and then expand
the tarball (see the example command after this list).
The tarball expands into <filename>/opt/poky/$SDKVERSION</filename>.
Once the tarball is unpacked, the cross-toolchain is installed.
You will notice environment setup files for the cross-toolchain in the directory.
The tarball expands into the <filename>/opt/poky/$SDKVERSION</filename> directory.
</para></listitem>
<listitem><para>Set up the environment by sourcing the environment setup
script.
See the <xref linkend='setting-up-the-environment'>“Setting Up the Environment”</xref>
section for information.
</para></listitem>
</orderedlist>
</para>
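<para>
As an example of the unpacking step referenced above, the following commands expand a
downloaded arm toolchain tarball from the root directory (a sketch only; the download
location is an assumption and you may need root privileges to write to <filename>/opt</filename>):
<literallayout class='monospaced'>
$ cd /
$ sudo tar xjf ~/Downloads/yocto-eglibc-x86_64-arm-toolchain-gmae-1.0.tar.bz2
</literallayout>
</para>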
</section>
<section id='using-the-toolchain-from-within-the-build-tree'>
<title>Using Bitbake and the Yocto Project Build Tree</title>
<title>Using the Toolchain from Within the Build Tree</title>
<para>
A final way of installing just the cross-toolchain is to use Bitbake within an existing
Yocto Project build tree.
Follow these steps:
<orderedlist>
<listitem><para>Source the environment setup script located in the Yocto Project
source directory.
The script has the string <filename>init-build-env</filename>
as part of the name.</para></listitem>
<listitem><para>At this point you should be sure that the
<filename>MACHINE</filename> variable
in the <filename>local.conf</filename> file is set for the target architecture.
You can find the <filename>local.conf</filename> file in the Yocto Project source
directory.
Comments within the <filename>local.conf</filename> file list the values you
can use for the <filename>MACHINE</filename> variable.
<note>You can populate the build tree with the cross-toolchains for more
than a single architecture.
You just need to edit the <filename>MACHINE</filename> variable in the
<filename>local.conf</filename> file and re-run the BitBake command.</note></para></listitem>
<listitem><para>Run <filename>bitbake meta-ide-support</filename> to complete the
cross-toolchain installation.
<note>If you change your working directory after you source the environment
setup script and before you run the BitBake command, the command will not work.
Be sure to run the BitBake command immediately after checking or editing the
<filename>local.conf</filename> file, without changing your working directory.</note>
Once Bitbake finishes, the cross-toolchain is installed.
You will notice environment setup files for the cross-toolchain in the
Yocto Project build tree in the <filename>tmp</filename> directory.
Setup script filenames contain the strings <filename>environment-setup</filename>.
</para></listitem>
</orderedlist>
A final way of accessing the toolchain is from the build tree.
The build tree can be set up to contain the architecture-specific cross toolchain.
To populate the build tree with the toolchain you need to run the following command:
<literallayout class='monospaced'>
$ bitbake meta-ide-support
</literallayout>
</para>
<para>
Before running the command you need to be sure that the
<filename>conf/local.conf</filename> file in the build directory has
the desired architecture specified for the <filename>MACHINE</filename>
variable.
See the <filename>local.conf</filename> file for a list of values you
can supply for this variable.
You can populate the build tree with the cross-toolchains for more
than a single architecture.
You just need to edit the <filename>local.conf</filename> file and re-run
the BitBake command.
</para>
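<para>
For example, to build the cross-toolchain for an emulated ARM target you might set the
following in <filename>conf/local.conf</filename> (the machine name shown is just one of the
values listed in that file):
<literallayout class='monospaced'>
MACHINE ?= "qemuarm"
</literallayout>
</para>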
<para>
Once the build tree has the toolchain you need to source the environment
setup script so that you can run the cross-tools without having to locate them.
See the <xref linkend='setting-up-the-environment'>“Setting Up the Environment”</xref>
section for information.
</para>
</section>
</section>
@@ -307,14 +196,13 @@
<section id='setting-up-the-environment'>
<title>Setting Up the Environment</title>
<para>
Before you can use the cross-toolchain you need to set up the toolchain environment by
Before you can use the cross-toolchain you need to set up the environment by
sourcing the environment setup script.
If you used the ADT Installer or used an existing ADT tarball to install the ADT,
If you used adt_installer or used an existing ADT tarball to install the ADT,
then you can find this script in the <filename>/opt/poky/$SDKVERSION</filename>
directory.
If you used Bitbake and the Yocto Project Build Tree to install the cross-toolchain
then you can find the environment setup scripts in the Yocto Project build tree
in the <filename>tmp</filename> directory.
If you are using the ADT from a Poky build tree, then look in the build
directory in <filename>tmp</filename> for the setup script.
</para>
<para>
@@ -325,7 +213,7 @@
For example, the environment setup script for a 64-bit IA-based architecture would
be the following:
<literallayout class='monospaced'>
/opt/poky/1.0/environment-setup-x86_64-poky-linux
/opt/poky/environment-setup-x86_64-poky-linux
</literallayout>
</para>
</section>
@@ -341,10 +229,10 @@
<ulink url='http://www.yoctoproject.org/docs/yocto-quick-start/yocto-project-qs.html'></ulink>.
<note>
Yocto Project provides basic kernels and filesystem images for several
architectures (x86, x86-64, mips, powerpc, and arm) that you can use
architectures (x86, x86-64, mips, powerpc, and arm) that can be used
unaltered in the QEMU emulator.
These kernels and filesystem images reside in the Yocto Project release
area - <ulink url='http://autobuilder.yoctoproject.org/downloads/yocto-1.0/machines/'></ulink>
area - <ulink url='http://autobuilder.yoctoproject.org/downloads/yocto-1.0/'></ulink>
and are ideal for experimentation within Yocto Project.
</note>
</para>

View File

@@ -44,6 +44,11 @@
<date>23 May 2011</date>
<revremark>Released with Yocto Project 1.0.1 on 23 May 2011.</revremark>
</revision>
<revision>
<revnumber>1.0.2</revnumber>
<date>20 December 2011</date>
<revremark>Released with Yocto Project 1.0.2 on 20 December 2011.</revremark>
</revision>
</revhistory>
<copyright>

View File

@@ -609,7 +609,7 @@ FILESEXTRAPATHS := "${THISDIR}/${PN}"
</para>
<programlisting>
$ BSPKEY_&lt;keydomain&gt;=&lt;key&gt; bitbake core-image-sato
$ BSPKEY_&lt;keydomain&gt;=&lt;key&gt; bitbake poky-image-sato
</programlisting>
<para>

View File

@@ -1126,7 +1126,7 @@ That's it. Configure and build.
<para>
You should now be able to build and boot an image with the new kernel:
<literallayout class='monospaced'>
$ bitbake core-image-sato-live
$ bitbake poky-image-sato-live
</literallayout>
</para></listitem>

View File

@@ -44,6 +44,11 @@
<date>23 May 2011</date>
<revremark>Released with Yocto Project 1.0.1 on 23 May 2011.</revremark>
</revision>
<revision>
<revnumber>1.0.2</revnumber>
<date>20 December 2011</date>
<revremark>Released with Yocto Project 1.0.2 on 20 December 2011.</revremark>
</revision>
</revhistory>
<copyright>

View File

@@ -383,8 +383,8 @@
triplet is "i586-poky-linux".</para></listitem>
<listitem><para>Kernel: Use the file chooser to select the kernel used with QEMU.</para></listitem>
<listitem><para>Root filesystem: Use the file chooser to select the root
filesystem directory. This directory is where you use "runqemu-extract-sdk" to extract the
core-image-sdk tarball.</para></listitem>
filesystem directory. This directory is where you use "poky-extract-sdk" to extract the
poky-image-sdk tarball.</para></listitem>
</itemizedlist>
</para>
</section>
@@ -738,7 +738,7 @@ tmp/sysroots/&lt;host-arch&gt;/usr/bin/&lt;target-abi&gt;-gdb
<para>
Perhaps the easiest is to have an 'sdk' image that corresponds to the plain
image installed on the device.
In the case of 'core-image-sato', 'core-image-sdk' would contain suitable symbols.
In the case of 'poky-image-sato', 'poky-image-sdk' would contain suitable symbols.
Because the sdk images already have the debugging symbols installed it is just a
question of expanding the archive to some location and then informing GDB.
</para>
@@ -764,17 +764,17 @@ tmp/sysroots/&lt;host-arch&gt;/usr/bin/&lt;target-abi&gt;-gdb
<filename>tmp/rootfs</filename>:
<programlisting>
tmp/sysroots/i686-linux/usr/bin/opkg-cl -f \
tmp/work/&lt;target-abi&gt;/core-image-sato-1.0-r0/temp/opkg.conf -o \
tmp/work/&lt;target-abi&gt;/poky-image-sato-1.0-r0/temp/opkg.conf -o \
tmp/rootfs/ update
</programlisting></para></listitem>
<listitem><para>Install the debugging information:
<programlisting>
tmp/sysroots/i686-linux/usr/bin/opkg-cl -f \
tmp/work/&lt;target-abi&gt;/core-image-sato-1.0-r0/temp/opkg.conf \
tmp/work/&lt;target-abi&gt;/poky-image-sato-1.0-r0/temp/opkg.conf \
-o tmp/rootfs install foo
tmp/sysroots/i686-linux/usr/bin/opkg-cl -f \
tmp/work/&lt;target-abi&gt;/core-image-sato-1.0-r0/temp/opkg.conf \
tmp/work/&lt;target-abi&gt;/poky-image-sato-1.0-r0/temp/opkg.conf \
-o tmp/rootfs install foo-dbg
</programlisting></para></listitem>
</orderedlist>

View File

@@ -269,9 +269,9 @@ fi
The following example shows the form for the two lines you need:
</para>
<programlisting>
IMAGE_INSTALL = "task-core-x11-base package1 package2"
IMAGE_INSTALL = "task-poky-x11-base package1 package2"
inherit core-image
inherit poky-image
</programlisting>
<para>
By creating a custom image, a developer has total control
@@ -283,11 +283,11 @@ inherit core-image
</para>
<para>
The other method for creating a custom image is to modify an existing image.
For example, if a developer wants to add "strace" into "core-image-sato", they can use
For example, if a developer wants to add "strace" into "poky-image-sato", they can use
the following recipe:
</para>
<programlisting>
require core-image-sato.bb
require poky-image-sato.bb
IMAGE_INSTALL += "strace"
</programlisting>
@@ -355,7 +355,7 @@ RRECOMMENDS_task-custom-tools = "\
<glossterm><link linkend='var-IMAGE_FEATURES'>IMAGE_FEATURES</link></glossterm>
variable.
To create these features, the best reference is
<filename>meta/classes/core-image.bbclass</filename>, which shows how poky achieves this.
<filename>meta/classes/poky-image.bbclass</filename>, which shows how poky achieves this.
In summary, the file looks at the contents of the
<glossterm><link linkend='var-IMAGE_FEATURES'>IMAGE_FEATURES</link></glossterm>
variable and then maps that into a set of tasks or packages.
@@ -371,8 +371,8 @@ RRECOMMENDS_task-custom-tools = "\
Poky ships with two SSH servers you can use in your images: Dropbear and OpenSSH.
Dropbear is a minimal SSH server appropriate for resource-constrained environments,
while OpenSSH is a well-known standard SSH server implementation.
By default, core-image-sato is configured to use Dropbear.
The core-image-basic and core-image-lsb images both include OpenSSH.
By default, poky-image-sato is configured to use Dropbear.
The poky-image-basic and poky-image-lsb images both include OpenSSH.
To change these defaults, edit the <filename>IMAGE_FEATURES</filename> variable
so that it sets the image you are working with to include ssh-server-dropbear
or ssh-server-openssh.
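For example, to include OpenSSH in the image you are customizing, you could add the feature
in your image recipe (a sketch only; adjust for the image you are working with):
<programlisting>
IMAGE_FEATURES += "ssh-server-openssh"
</programlisting>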
@@ -415,7 +415,7 @@ DISTRO_EXTRA_RDEPENDS += "strace"
</para>
<programlisting>
$ bitbake -c clean task-boot task-base task-poky
$ bitbake core-image-sato
$ bitbake poky-image-sato
</programlisting>
</section>
@@ -637,7 +637,7 @@ BBFILE_PRIORITY_emenlow = "6"
tree.</para></listitem>
</itemizedlist>
Following these recommendations keeps your Poky tree and its configuration entirely
inside COREBASE.
inside POKYBASE.
</para>
</section>

View File

@@ -28,7 +28,7 @@
<qandaentry>
<question>
<para>
I only have Python 2.4 or 2.5 but BitBake requires Python 2.6 or 2.7.
I only have Python 2.4 or 2.5 but BitBake requires Python 2.6.
Can I still use Poky?
</para>
</question>
@@ -37,8 +37,8 @@
You can use a stand-alone tarball to provide Python 2.6.
You can find pre-built 32 and 64-bit versions of Python 2.6 at the following locations:
<itemizedlist>
<listitem><para><ulink url='http://autobuilder.yoctoproject.org/downloads/miscsupport/yocto-1.0-python-nativesdk/python-nativesdk-standalone-i686.tar.bz2'>32-bit tarball</ulink></para></listitem>
<listitem><para><ulink url='http://autobuilder.yoctoproject.org/downloads/miscsupport/yocto-1.0-python-nativesdk/python-nativesdk-standalone-x86_64.tar.bz2'>64-bit tarball</ulink></para></listitem>
<listitem><para><ulink url='http://autobuilder.yoctoproject.org/downloads/miscsupport/python-nativesdk-standalone-i586.tar.bz2'></ulink></para></listitem>
<listitem><para><ulink url='http://autobuilder.yoctoproject.org/downloads/miscsupport/python-nativesdk-standalone-x86_64.tar.bz2'></ulink></para></listitem>
</itemizedlist>
</para>
<para>

View File

@@ -57,6 +57,11 @@
<date>23 May 2011</date>
<revremark>Released with Yocto Project 1.0.1 on 23 May 2011.</revremark>
</revision>
<revision>
<revnumber>1.0.2</revnumber>
<date>20 December 2011</date>
<revremark>Released with Yocto Project 1.0.2 on 20 December 2011.</revremark>
</revision>
</revhistory>
<copyright>

View File

@@ -9,7 +9,7 @@
BitBake is a program written in Python that interprets the metadata that makes up Poky.
At some point, people wonder what actually happens when you enter:
<literallayout class='monospaced'>
$ bitbake core-image-sato
$ bitbake poky-image-sato
</literallayout>
</para>
@@ -111,11 +111,11 @@
<para>
Once all the <filename>.bb</filename> files have been
parsed, BitBake starts to build the target (core-image-sato in the previous section's
parsed, BitBake starts to build the target (poky-image-sato in the previous section's
example) and looks for providers of that target.
Once a provider is selected, BitBake resolves all the dependencies for
the target.
In the case of "core-image-sato", it would lead to <filename>task-base.bb</filename>,
In the case of "poky-image-sato", it would lead to <filename>task-base.bb</filename>,
which in turn leads to packages like <application>Contacts</application>,
<application>Dates</application> and <application>BusyBox</application>.
These packages in turn depend on glibc and the toolchain.

View File

@@ -28,41 +28,41 @@
<itemizedlist>
<listitem>
<para>
<emphasis>core-image-minimal</emphasis> - A small image just capable
<emphasis>poky-image-minimal</emphasis> - A small image just capable
of allowing a device to boot.
</para>
</listitem>
<listitem>
<para>
<emphasis>core-image-base</emphasis> - A console-only image that fully
<emphasis>poky-image-base</emphasis> - A console-only image that fully
supports the target device hardware.
</para>
</listitem>
<listitem>
<para>
<emphasis>core-image-core</emphasis> - An X11 image with simple
<emphasis>poky-image-core</emphasis> - An X11 image with simple
applications such as terminal, editor, and file manager.
</para>
</listitem>
<listitem>
<para>
<emphasis>core-image-sato</emphasis> - An X11 image with Sato theme and
<emphasis>poky-image-sato</emphasis> - An X11 image with Sato theme and
Pimlico applications.
The image also contains terminal, editor, and file manager.
</para>
</listitem>
<listitem>
<para>
<emphasis>core-image-sato-dev</emphasis> - An X11 image similar to
core-image-sato but
<emphasis>poky-image-sato-dev</emphasis> - An X11 image similar to
poky-image-sato but
also includes a native toolchain and libraries needed to build applications
on the device itself. The image also includes testing and profiling tools
as well as debug symbols. This image was formerly core-image-sdk.
as well as debug symbols. This image was formerly poky-image-sdk.
</para>
</listitem>
<listitem>
<para>
<emphasis>core-image-lsb</emphasis> - An image suitable for implementations
<emphasis>poky-image-lsb</emphasis> - An image suitable for implementations
that conform to Linux Standard Base (LSB).
</para>
</listitem>

View File

@@ -28,7 +28,7 @@
Consequently, most users do not need to worry about BitBake.
The <filename class="directory">bitbake/bin/</filename> directory is placed
into the PATH environment variable by the
<link linkend="structure-core-script">oe-init-build-env</link> script.
<link linkend="structure-core-script">poky-init-build-env</link> script.
</para>
<para>
@@ -47,7 +47,7 @@
It is also possible to place output and configuration
files in a directory separate from the Poky source.
For information on separating output from the Poky source, see <link
linkend='structure-core-script'>oe-init-build-env</link>.
linkend='structure-core-script'>poky-init-build-env</link>.
</para>
</section>
@@ -104,7 +104,7 @@
<para>
This directory contains various integration scripts that implement
extra functionality in the Poky environment (e.g. QEMU scripts).
The <link linkend="structure-core-script">oe-init-build-env</link> script appends this
The <link linkend="structure-core-script">poky-init-build-env</link> script appends this
directory to the PATH environment variable.
</para>
</section>
@@ -154,7 +154,7 @@
</section>
<section id='structure-core-script'>
<title><filename>oe-init-build-env</filename></title>
<title><filename>poky-init-build-env</filename></title>
<para>
This script sets up the Poky build environment.
@@ -168,7 +168,7 @@
</para>
<literallayout class='monospaced'>
$ source POKY_SRC/oe-init-build-env [BUILDDIR]
$ source POKY_SRC/poky-init-build-env [BUILDDIR]
</literallayout>
<para>

View File

@@ -139,14 +139,14 @@
</para>
<para>
<literallayout class='monospaced'>
$ source oe-init-build-env [build_dir]
$ source poky-init-build-env [build_dir]
</literallayout>
</para>
<para>
The build_dir is the directory that contains all of the build's output files. The default
build directory is poky-dir/build. A different build_dir can be used for each of the targets.
For example, use ~/build/x86 for a qemux86 target and ~/build/arm for a qemuarm target.
Please refer to <link linkend="structure-core-script">oe-init-build-env</link>
Please refer to <link linkend="structure-core-script">poky-init-build-env</link>
for more detailed information.
</para>
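<para>
For example, to keep qemuarm builds in their own directory you could run the following, and
use a separate directory such as <filename>~/build/x86</filename> for qemux86 builds
(a sketch only; use <filename>oe-init-build-env</filename> or
<filename>poky-init-build-env</filename> depending on your Poky version, and set MACHINE
in each directory's <filename>conf/local.conf</filename>):
<literallayout class='monospaced'>
$ source poky-init-build-env ~/build/arm
</literallayout>
</para>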
<para>

View File

@@ -138,51 +138,14 @@
<title>The Linux Distribution</title>
<para>
The Yocto Project has been tested and is known to work on the current releases minus one
of the following distributions.
Follow this <ulink url='https://wiki.pokylinux.org/wiki/Distro_Test'>link </ulink> for more
information on distribution testing.
<itemizedlist>
<listitem><para>Ubuntu</para></listitem>
<listitem><para>Fedora</para></listitem>
<listitem><para>OpenSuse</para></listitem>
</itemizedlist>
</para>
<para>
The build system should be able to run on any modern distribution with Python 2.6 or 2.7.
Earlier releases of Python are known not to work and the system does not support Python 3 at this time.
This document assumes you are running one of the previously noted distributions on your Linux-based
host systems.
This document assumes you are running a reasonably current Linux-based host system.
The examples work for both Debian-based and RPM-based distributions.
</para>
<note><para>
If you attempt to use a distribution not in the above list, you may or may not have success - you
are venturing into untested territory.
Refer to
<ulink url='http://openembedded.net/index.php?title=OEandYourDistro&amp;action=historysubmit&amp;diff=4309&amp;okdid=4225'>OE and Your Distro</ulink> and
<ulink url='http://openembedded.net/index.php?title=Required_software&amp;action=historysubmit&amp;diff=4311&amp;oldid=4251'>Required Software</ulink>
for information on other distributions used with the OpenEmbedded project, which might be
a starting point for exploration.
If you go down this path, you should expect problems.
When you do, please go to <ulink url='http://bugzilla.yoctoproject.org'>Yocto Project Bugzilla</ulink>
and submit a bug.
We are interested in hearing about your experience.
</para></note>
</section>
<section id='packages'>
<title>The Packages</title>
<para>
Packages and package installation vary depending on your development system.
In general, you need to have root access and then install the required packages.
</para>
<note><para>
If you are using a Fedora version prior to version 15 you will need to take some
extra steps to enable <filename>sudo</filename>.
See <ulink url='https://fedoraproject.org/wiki/Configureing_Sudo'></ulink> for details.
</para></note>
<para>
The packages you need for a Debian-based host are shown in the following command:
</para>
@@ -192,12 +155,11 @@
unzip texi2html texinfo libsdl1.2-dev docbook-utils gawk \
python-pysqlite2 diffstat help2man make gcc build-essential \
g++ desktop-file-utils chrpath libgl1-mesa-dev libglu1-mesa-dev \
mercurial autoconf automake groff libtool
mercurial autoconf automake groff
</literallayout>
<para>
The packages you need for an RPM-based host like Fedora and OpenSUSE,
respectively, are as follows:
The packages you need for an RPM-based host like Fedora are shown in these commands:
</para>
<literallayout class='monospaced'>
@@ -209,15 +171,17 @@
groff linuxdoc-tools patch linuxdoc-tools cmake help2man \
perl-ExtUtils-MakeMaker tcl-devel gettext chrpath ncurses apr \
SDL-devel mesa-libGL-devel mesa-libGLU-devel gnome-doc-utils \
autoconf automake libtool
</literallayout>
<literallayout class='monospaced'>
$ sudo zypper install python gcc gcc-c++ libtool \
  subversion git chrpath automake \
  help2man diffstat texinfo mercurial wget \
  autoconf automake
</literallayout>
<note><para>
Packages vary in number and name for other Linux distributions.
The commands here should work. We are interested, though, in learning what works for you.
You can find more information for package requirements on common Linux distributions
at <ulink url="http://wiki.openembedded.net/index.php/OEandYourDistro"></ulink>.
However, be careful when using this information because it applies
to older Linux distributions that are known not to work with a current Poky install.
</para></note>
</section>
<section id='releases'>
@@ -294,9 +258,9 @@
<para>
<literallayout class='monospaced'>
$ wget http://www.yoctoproject.org/downloads/poky/poky-bernard-5.0.1.tar.bz2
$ tar xjf poky-bernard-5.0.1.tar.bz2
$ source poky-bernard-5.0.1/poky-init-build-env poky-5.0.1-build
$ wget http://www.yoctoproject.org/downloads/poky/poky-bernard-5.0.2.tar.bz2
$ tar xjf poky-bernard-5.0.2.tar.bz2
$ source poky-bernard-5.0.2/poky-init-build-env poky-5.0.2-build
</literallayout>
</para>
@@ -314,8 +278,8 @@
<listitem><para>The first two commands extract the Yocto Project files from the
release tarball and place them into a subdirectory of your current directory.</para></listitem>
<listitem><para>The <command>source</command> command creates the
<filename>poky-5.0.1-build</filename> directory and executes the <command>cd</command>
command to make <filename>poky-5.0.1-build</filename> the working directory.
<filename>poky-5.0.2-build</filename> directory and executes the <command>cd</command>
command to make <filename>poky-5.0.2-build</filename> the working directory.
The resulting build directory contains all the files created during the build.
By default the target architecture is qemux86.
To change this default, edit the value of the MACHINE variable in the
@@ -324,10 +288,8 @@
<para>
Take some time to examine your <filename>conf/local.conf</filename> file.
The defaults should work fine.
However, if you have a multi-core CPU you might want to set the variable
BB_NUMBER_THREADS equal to twice the number of processor cores your system has.
And, set the variable PARALLEL_MAKE equal to the number of processor cores.
Setting these variables can significantly shorten your build time.
However, if you have a multi-core CPU you might want to set the variables
BB_NUMBER_THREADS and PARALLEL_MAKE to the number of processor cores on your build machine.
By default, these variables are commented out.
</para>
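<para>
For example, on a build machine with four processor cores you might uncomment and set the
variables as follows (a sketch only; scale the numbers to your own hardware and to the
guidance above):
<literallayout class='monospaced'>
BB_NUMBER_THREADS = "4"
PARALLEL_MAKE = "-j 4"
</literallayout>
</para>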
<para>
@@ -341,7 +303,7 @@
$ bitbake -k poky-image-sato
</literallayout>
<note><para>
BitBake requires Python 2.6 or 2.7. For more information on this requirement,
BitBake requires Python 2.6. For more information on this requirement,
see the FAQ appendix in the
<ulink url='http://www.yoctoproject.org/docs/poky-ref-manual/poky-ref-manual.html'>
Poky Reference Manual</ulink>.

View File

@@ -1,5 +1,3 @@
Upstream-Status: Inappropriate [configuration]
---
configure.ac | 1 +
1 file changed, 1 insertion(+)

View File

@@ -6,7 +6,8 @@ HOMEPAGE = "http://www.openswan.org"
LICENSE = "GPLv2"
DEPENDS = "gmp flex-native"
RRECOMMENDS_${PN} = "kernel-module-ipsec"
PR = "r2"
RDEPENDS_${PN}_nylon = "perl"
PR = "r1"
SRC_URI = "http://www.openswan.org/download/old/openswan-${PV}.tar.gz \
file://openswan-2.4.7-gentoo.patch;patch=1 \

View File

@@ -1,5 +1,3 @@
Upstream-Status: Inappropriate [configuration]
---
cmake/OpenSyncInternal.cmake.in | 1 -
1 file changed, 1 deletion(-)

View File

@@ -1,5 +1,3 @@
Upstream-Status: Inappropriate [configuration]
---
opensync/CMakeLists.txt | 1 -
1 file changed, 1 deletion(-)

View File

@@ -1,5 +1,3 @@
Upstream-Status: Inappropriate [configuration]
---
CMakeLists.txt | 1 -
1 file changed, 1 deletion(-)

View File

@@ -1,5 +1,3 @@
Upstream-Status: Inappropriate [configuration]
Index: libopensync-plugin-evolution2-0.36/cmake/modules/FindOpenSync.cmake
===================================================================
--- libopensync-plugin-evolution2-0.36.orig/cmake/modules/FindOpenSync.cmake 2008-10-20 13:07:14.000000000 +0100

View File

@@ -1,5 +1,3 @@
Upstream-Status: Inappropriate [others]
Index: libopensync-plugin-syncml-0.38/src/syncml_callbacks.c
===================================================================
--- libopensync-plugin-syncml-0.38.orig/src/syncml_callbacks.c 2009-07-31 10:30:33.000000000 +0100

View File

@@ -1,5 +1,3 @@
Upstream-Status: Inappropriate [configuration]
---
CMakeLists.txt | 4 ----
1 file changed, 4 deletions(-)

View File

@@ -1,5 +1,3 @@
Upstream-Status: Inappropriate [disable feature]
---
Makefile.am | 2 +-
configure.ac | 1 -

View File

@@ -1,6 +1,5 @@
require abiword.inc
SRCDATE = "20070130"
PV="2.5.0+cvs${SRCDATE}"
PR = "r4"

View File

@@ -1,5 +1,3 @@
Upstream-Status: Inappropriate [configuration]
Index: gnome-settings-daemon-2.26.1/configure.ac
===================================================================
--- gnome-settings-daemon-2.26.1.orig/configure.ac 2009-09-16 22:57:31.000000000 +0100

View File

@@ -1,5 +1,3 @@
Upstream-Status: Inappropriate [configuration]
--- gnome-settings-daemon-2.26.1/data/gnome-settings-daemon.desktop.in.in~ 2009-04-24 20:59:51.000000000 -0700
+++ gnome-settings-daemon-2.26.1/data/gnome-settings-daemon.desktop.in.in 2009-04-24 20:59:51.000000000 -0700
@@ -2,7 +2,7 @@

View File

@@ -1,5 +1,3 @@
Upstream-Status: Pending
============================================================
Listen for DeviceAdded in addition to DeviceEnabled

View File

@@ -1,5 +1,3 @@
Upstream-Status: Inappropriate [configuration]
Index: gnome-settings-daemon-2.25.90/configure.ac
===================================================================
--- gnome-settings-daemon-2.25.90.orig/configure.ac

View File

@@ -1,5 +1,3 @@
Upstream-Status: Pending
diff --git a/plugins/housekeeping/gsd-housekeeping-manager.c b/plugins/housekeeping/gsd-housekeeping-manager.c
index f84cfad..e8f474a 100644
--- a/plugins/housekeeping/gsd-housekeeping-manager.c

View File

@@ -1,5 +1,3 @@
Upstream-Status: Pending
diff --git a/configure.ac b/configure.ac
index 135f2ce..ba737a5 100644
--- a/configure.ac

View File

@@ -3,8 +3,6 @@ From: Seán de Búrca <leftmostcat@gmail.com>
Date: Fri, 07 Aug 2009 00:38:52 +0000
Subject: Remove useless Plural-Forms line which breaks build with gnome-doc-utils master
Upstream-Status: Inappropriate [configuration]
---
diff --git a/help/el/el.po b/help/el/el.po
index ab77264..635b68f 100644

View File

@@ -1,7 +1,7 @@
DESCRIPTION = "GNOME keyboard library"
LICENSE = "LGPL"
DEPENDS = "gconf dbus libxklavier gtk+"
DEPENDS = "gconf-dbus dbus libxklavier gtk+"
inherit gnome

View File

@@ -1,5 +1,3 @@
Upstream-Status: Inappropriate [configuration]
Index: wv-1.2.0/wv-1.0.pc.in
===================================================================
--- wv-1.2.0.orig/wv-1.0.pc.in 2008-03-19 22:25:18.000000000 +0000

View File

@@ -1,5 +1,3 @@
Upstream-Status: Inappropriate [configuration]
Index: libxklavier-3.7/libxklavier.pc.in
===================================================================
--- libxklavier-3.7.orig/libxklavier.pc.in 2009-06-10 15:58:46.000000000 +0100

View File

@@ -1,5 +1,6 @@
DESCRIPTION = "Utility library to make using XKB easier"
SECTION = "x11/libs"
PRIORITY = "optional"
DEPENDS = "iso-codes libxml2 glib-2.0 libxkbfile"
LICENSE = "LGPL"
PR = "r2"

Some files were not shown because too many files have changed in this diff.