Compare commits


2 Commits

Author SHA1 Message Date
Marcin Juszkiewicz
b9065372f4 db: fix SRC_URI
git-svn-id: https://svn.o-hand.com/repos/poky/branches/clyde@4130 311d38ba-8fff-0310-9ca6-ca027cbcb966
2008-03-28 13:14:58 +00:00
Richard Purdie
0e09f04573 Branch for clyde
git-svn-id: https://svn.o-hand.com/repos/poky/branches/clyde@1165 311d38ba-8fff-0310-9ca6-ca027cbcb966
2007-01-19 10:14:05 +00:00
1519 changed files with 29193 additions and 227401 deletions

LICENSE

@@ -1,11 +0,0 @@
Different components of Poky are under different licenses (a mix of
MIT and GPLv2). Please see:
bitbake/COPYING (GPLv2)
meta/COPYING.MIT (MIT)
meta-extras/COPYING.MIT (MIT)
which cover the components in those subdirectories.
License information for any other files is either explicitly stated
or defaults to GPL version 2.

README

@@ -44,15 +44,12 @@ Building An Image
Simply run:
% source poky-init-build-env
% bitbake poky-image-sato
% bitbake oh-image-pda
This will result in an ext2 image and kernel for qemu arm (see scripts dir).
To build for other machine types see MACHINE in build/conf/local.conf.
Other image targets such as poky-image-sdk or poky-image-minimal are available,
see meta/packages/images/*.
Notes:
===
@@ -67,4 +64,3 @@ http://projects.o-hand.com/poky
OE Homepage and wiki
http://openembedded.org
Copyright (C) 2006-2007 OpenedHand Ltd.


@@ -1,106 +0,0 @@
Using Poky - Poky Commands
==========================
Bitbake
=======
Bitbake is the tool at the heart of poky and is responsible for parsing the
metadata, generating a list of tasks from it and then executing them. To see a
list of the options it supports look at "bitbake --help".
The most common usage is "bitbake <packagename>" where <packagename> is the name
of the package you wish to build. This often equates to the first part of a .bb
filename, so to run the matchbox-desktop_1.2.3.bb file you might type "bitbake
matchbox-desktop". Several different versions of matchbox-desktop might exist
and bitbake will choose the one selected by the distribution configuration.
Bitbake will also try to execute any dependent tasks first so before building
matchbox-desktop it would build a cross compiler and glibc if not already built.
Bitbake - Package Tasks
=======================
Any given package consists of a set of tasks, in most cases the series is fetch,
unpack, patch, configure, compile, install, package, package_write and build.
The default task is "build" and any tasks this depends on are built first, hence
the standard bitbake behaviour. There are some tasks such as devshell which are
not part of the default build chain. If you wish to run such a task you can use
the "-c" option to bitbake e.g. "bitbake matchbox-desktop -c devshell".
If you wish to rerun a task you can use the force option "-f". A typical usage
case might look like:
% bitbake matchbox-desktop
[change some source in the WORKDIR for example]
% bitbake matchbox-desktop -c compile -f
% bitbake matchbox-desktop
which would build matchbox-desktop, then recompile it. The final command reruns
all tasks after the compile (basically the packaging tasks) since bitbake will
notice that the compile has been rerun and hence the other tasks also need to run
again.
You can view a list of tasks in a given package by running the listtasks task
e.g. "bitbake matchbox-desktop -c listtasks".
Bitbake - Dependency Graphs
===========================
Sometimes it can be hard to see why bitbake wants to build some other packages
before a given package you've specified. "bitbake matchbox-desktop -g" will
create a task-depends.dot file in the current directory. This shows which
packages and tasks depend on which other packages and tasks and is useful for
debugging purposes.
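As a sketch, the emitted file is plain graphviz "dot" text; the task names below are made up for illustration, not taken from a real build:

```shell
# fabricate a tiny task-depends.dot of the shape "bitbake -g" emits
cat > task-depends.dot <<'EOF'
digraph depends {
    "matchbox-desktop.do_compile" -> "gcc-cross.do_populate_staging"
    "matchbox-desktop.do_compile" -> "glibc.do_populate_staging"
}
EOF
# each "->" edge is one task dependency; render with graphviz if installed:
#   dot -Tpng task-depends.dot -o task-depends.png
grep -c -e '->' task-depends.dot   # prints 2
```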
Bitbake - Advanced Usage
========================
Debug output from bitbake can be seen with the "-D" option, which can sometimes
give more information about what bitbake is doing and/or why. Each additional -D
option increases the logging level, the most common usage being "-DDD".
If you really want to build a specific .bb file, you can use the form "bitbake
-b somepath/somefile.bb". Note that this will not check the dependencies so this
option should only be used when you know the dependencies already exist. You can
specify fragments of the filename and bitbake will see if it can find a unique
match.
The -e option will dump the resulting environment for either the configuration
(no package specified) or for a specific package when specified with the -b
option.
The -k option will cause bitbake to try and continue even if a task fails. It
can be useful for world or unattended builds.
The -s option lists all the versions of packages that bitbake will use.
Bitbake - More Information
==========================
See the bitbake user manual at: http://bitbake.berlios.de/manual/
QEMU
====
Running images built by poky under qemu is possible within the poky environment
through the "runqemu" command. It has the form:
runqemu MACHINE IMAGETYPE ZIMAGE IMAGEFILE
where:
MACHINE - the machine to emulate (qemux86, qemuarm, spitz, akita)
IMAGETYPE - the type of image to use (nfs or ext2)
ZIMAGE - location of the kernel binary to use
IMAGEFILE - location of the image file to use
(common options are in brackets)
MACHINE is mandatory, the others are optional.
This assumes a suitable qemu binary is available with support for a given
machine. For further information see scripts/poky-qemu.README.
Copyright (C) 2006-2007 OpenedHand Ltd.


@@ -1,51 +0,0 @@
Using Poky generated host SDK
=============================
How to build host SDK
====
You need to set up Poky and then run one command:
$ bitbake meta-toolchain
The result will be a tarball in tmp/deploy/sdk/ with everything needed to build
for your target device. Unpack it in the / directory; the toolchain will reside
in /usr/local/poky/arm/.
Usage of SDK
=====
First, add the toolchain to your PATH:
$ export PATH=/usr/local/poky/arm/bin/:$PATH
The compiler is 'arm-poky-linux-gnueabi-gcc'. Building the 'helloworld' example
is simple:
$ arm-poky-linux-gnueabi-gcc hello.c -o hello
$ file hello
hello: ELF 32-bit LSB executable, ARM, version 1 (SYSV), for GNU/Linux 2.6.14, dynamically linked (uses shared libs), not stripped
Autotools and SDK
======
'configure' scripts allow you to specify the host, target and build
architectures. To build with the Poky SDK you need to specify:
./configure --target=arm-poky-linux-gnueabi --host=arm-poky-linux-gnueabi
Using packages from Poky
========
During development it is often the case that you want to use libraries which
are available in a Poky build. Their packages need to be unpacked into the
/usr/local/poky/arm/arm-poky-linux-gnueabi/ directory.
For example, to add libiw (from the wireless-tools package) you need to unpack
two packages:
libiw29_29-pre20-r0_armv5te.ipk
libiw-dev_29-pre20-r0_armv5te.ipk
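An .ipk is an 'ar' archive whose data.tar.gz member holds the files to install. The sketch below fabricates a dummy package to show the unpack step; the package contents are invented and a scratch directory stands in for the real SDK sysroot:

```shell
# build a stand-in .ipk (an 'ar' archive wrapping data.tar.gz)
mkdir -p pkgroot/usr/include
echo '/* dummy header */' > pkgroot/usr/include/iwlib.h
tar czf data.tar.gz -C pkgroot .
ar rc libiw-dev_29-pre20-r0_armv5te.ipk data.tar.gz

# unpack it into the SDK sysroot ("sysroot" here stands in for
# /usr/local/poky/arm/arm-poky-linux-gnueabi/)
mkdir -p sysroot
ar p libiw-dev_29-pre20-r0_armv5te.ipk data.tar.gz | tar xzf - -C sysroot
ls sysroot/usr/include
```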
Copyright (C) 2006-2007 OpenedHand Ltd.


@@ -1,214 +0,0 @@
A walk through the poky directory tree
======================================
Poky consists of several components and understanding what these are and where
they each live is one of the keys to using it.
Top level core components
=========================
bitbake/
A copy of bitbake is included within poky for ease of use and resides here.
This should usually be the same as a standard bitbake release from the bitbake
project. Bitbake is a metadata interpreter and is responsible for reading the
poky metadata and running the tasks it defines. Failures are usually from the
metadata and not bitbake itself and most users don't need to worry about
bitbake. bitbake/bin is placed into the PATH environment variable so bitbake
can be found.
build/
This directory contains user configuration files and the output from Poky is
also placed here.
meta/
The core metadata - this is the key part of poky. Within this directory there
are definitions of the machines, the poky distribution and the packages that
make up a given system.
meta-extras/
Similar to meta, containing some extra package files not included in standard
poky; these are disabled by default and hence not supported as part of poky.
scripts/
Various integration scripts which implement extra functionality in the poky
environment, for example the qemu scripts. This directory is appended to the
PATH environment variable.
sources/
Whilst not part of a checkout, poky will create this directory as part of any
build. Any downloads are placed in this directory (as specified by the
DL_DIR variable). This directory can be shared between poky builds to save
downloading files multiple times. SCM checkouts are also stored here as e.g.
sources/svn/, sources/cvs/ or sources/git/ and the sources directory may contain
archives of checkouts for various revisions or dates.
It's worth noting that bitbake creates .md5 stamp files for downloads. It uses
these to mark downloads as complete as well as for checksum and access
accounting purposes. If you add a file manually to the directory, you need to
touch the corresponding .md5 file too.
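For example, if you copy a tarball into the sources directory by hand, create the matching stamp (the filename below is illustrative):

```shell
mkdir -p sources
# a download copied in manually, bypassing bitbake's fetcher
touch sources/example-1.0.tar.gz
# the empty .md5 stamp marks the download as complete for bitbake
touch sources/example-1.0.tar.gz.md5
ls sources
```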
poky-init-build-env
This script is used to setup the poky build environment. Sourcing this file in
a shell makes changes to PATH and sets other core bitbake variables based on the
current working directory. You need to use this before running poky commands.
Internally it uses scripts within the scripts/ directory to do the bulk of the
work.
The Build Directory
===================
conf/local.conf
This file contains all the local user configuration of poky. If it isn't
present, it's created from local.conf.sample. That file contains documentation
on the various standard options which can be configured there, although any
standard conf file variable can also be set here and usually overrides any
variable set elsewhere within poky.
Edit this file to set the MACHINE you want to build for, which package types
you wish to use (PACKAGE_CLASSES) or where downloaded files should go (DL_DIR),
for example.
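A minimal local.conf fragment of the kind described above (all values illustrative):

```
MACHINE = "qemuarm"
PACKAGE_CLASSES = "package_ipk"
DL_DIR = "${TOPDIR}/../sources"
```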
tmp/
This is created by bitbake if it doesn't exist and is where all the poky output
is placed. To clean poky and start a build from scratch (other than downloads),
you can wipe this directory. tmp has some important subcomponents detailed
below.
tmp/cache/
When bitbake parses the metadata it creates a cache file of the result which can
be used when subsequently running the command. These are stored here, usually on
a per machine basis.
tmp/cross/
The cross compiler when generated is placed into this directory and those
beneath it.
tmp/deploy/
Any 'end result' output from poky is placed under here.
tmp/deploy/deb/
Any .deb packages emitted by poky are placed here, sorted into feeds for
different architecture types.
tmp/deploy/images/
Complete filesystem images are placed here. If you want to flash the resulting
image from a build onto a device, look here for them.
tmp/deploy/ipk/
Any resulting .ipk packages emitted by poky are placed here.
tmp/rootfs/
This is a temporary scratch area used when creating filesystem images. It is run
under fakeroot and is not useful once that fakeroot session has ended as
information is lost. It is left around since it is still useful in debugging
image creation problems.
tmp/staging/
Any package needing to share output with other packages does so within staging.
This means it contains any shared header files and any shared libraries amongst
other data. It is subdivided by architecture so multiple builds can run within
the one build directory.
tmp/stamps/
This is used by bitbake for accounting purposes to keep track of which tasks
have been run and when. It is also subdivided by architecture. The files are
empty and the important information is the filenames and timestamps.
tmp/work/
Each package built by bitbake is worked on in its own work directory. Here, the
source is unpacked, patched, configured, compiled etc. It is subdivided by
architecture.
It is worth considering the structure of a typical work directory. An example is
the linux-rp kernel, version 2.6.20 r7 on the machine spitz built within poky
which would result in a work directory of
"tmp/work/spitz-poky-linux-gnueabi/linux-rp-2.6.20-r7", referred to as WORKDIR.
Within this, the source is unpacked to linux-2.6.20 and then patched by quilt
hence the existence of the standard quilt directories linux-2.6.20/patches and
linux-2.6.20/.pc. Within the linux-2.6.20 directory, standard quilt commands
can be used.
There are other directories generated within WORKDIR. The most important/useful
is WORKDIR/temp which has log files for each task (log.do_*.pid) and the scripts
bitbake runs for each task (run.do_*.pid). WORKDIR/image is where "make install"
places its output which is then split into subpackages within WORKDIR/install.
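Sketching that layout (the directories are created by hand here, purely to illustrate the shape described above):

```shell
# WORKDIR for linux-rp 2.6.20-r7 built for the spitz machine
W=tmp/work/spitz-poky-linux-gnueabi/linux-rp-2.6.20-r7
# unpacked source plus the standard quilt directories
mkdir -p "$W/linux-2.6.20/patches" "$W/linux-2.6.20/.pc"
# per-task logs/scripts, "make install" output, and split subpackages
mkdir -p "$W/temp" "$W/image" "$W/install"
find tmp/work -maxdepth 3 -type d
```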
The Metadata
============
As mentioned previously, this is the core of poky. It has several important
subdivisions:
meta/classes/
Contains the *.bbclass files. Class files are used to abstract common code
allowing it to be reused by multiple packages. The base.bbclass file is
inherited by every package. Examples of other important classes are
autotools.bbclass which in theory allows any "autotooled" package to work with
poky with minimal effort or kernel.bbclass which contains common code and
functions for working with the linux kernel. Functions like image generation or
packaging also have their specific class files (image.bbclass, rootfs_*.bbclass
and package*.bbclass).
meta/conf/
This is the core set of configuration files which start from bitbake.conf and
from which all other configuration files are included (see the includes at the
end of the file, even local.conf is loaded from there!). Whilst bitbake.conf
sets up the defaults, often these can be overridden by user (local.conf),
machine or distribution configuration files.
meta/conf/machine/
Contains all the machine configuration files. If you set MACHINE="spitz", the
end result is poky looking for a spitz.conf file in this directory. The includes
directory contains various data common to multiple machines. If you want to add
support for a new machine to poky, this is the directory to look in.
meta/conf/distro/
Any distribution specific configuration is controlled from here. OpenEmbedded
supports multiple distributions of which poky is one. Poky only contains the
poky distribution so poky.conf is the main file here. This includes the
versions and SRCDATES for applications which are configured here. An example of
an alternative configuration is poky-bleeding.conf although this mainly inherits
its configuration from poky itself.
packages/
Each application (package) poky can build has an associated .bb file, all of
which are stored under this directory. Poky finds them through the BBFILES
variable, which defaults to packages/*/*.bb. Adding a new piece of software to poky
consists of adding the appropriate .bb file. The .bb files from OpenEmbedded
upstream are usually compatible although they are not supported.
site/
Certain autoconf test results cannot be determined when cross compiling since
tests can't be run on a live system. This directory therefore contains a list of
cached results for various architectures which is passed to autoconf.
Copyright (C) 2006-2007 OpenedHand Ltd.


@@ -1,68 +1,10 @@
Changes in Bitbake 1.8.x:
- Correctly redirect stdin when forking
- If parsing errors are found, exit, too many users miss the errors
- Remove spurious PREFERRED_PROVIDER warnings
- Start to fix path quoting
Changes in BitBake 1.7.3:
Changes in Bitbake 1.8.4:
- Make sure __inherit_cache is updated before calling include() (from Michael Krelin)
- Fix bug when target was in ASSUME_PROVIDED (#2236)
- Raise ParseError for filenames with multiple underscores instead of infinitely looping (#2062)
- Fix invalid regexp in BBMASK error handling (missing import) (#1124)
- Don't run build sanity checks on incomplete builds
- Promote certain warnings from debug to note 2 level
- Update manual
Changes in Bitbake 1.8.2:
- Catch truncated cache file errors
- Add PE (Package Epoch) support from Philipp Zabel (pH5)
- Add code to handle inter-task dependencies
- Allow operations other than assignment on flag variables
- Fix cache errors when generating dot graphs
Changes in Bitbake 1.8.0:
- Release 1.7.x as a stable series
Changes in BitBake 1.7.x:
- Major updates of the dependency handling and execution
of tasks. Code from bin/bitbake replaced with runqueue.py
and taskdata.py
- New task execution code supports multithreading with a simplistic
threading algorithm controlled by BB_NUMBER_THREADS
- Change of the SVN Fetcher to keep the checkout around
courtesy of Paul Sokolovsky (#1367)
- PATH fix to bbimage (#1108)
- Allow debug domains to be specified on the commandline (-l)
- Allow 'interactive' tasks
- Logging message improvements
- Drop now unneeded BUILD_ALL_DEPS variable
- Add support for wildcards to -b option
- Major overhaul of the fetchers making a large amount of code common
including mirroring code
- Fetchers now touch md5 stamps upon access (to show activity)
- Fix -f force option when used without -b (long standing bug)
- Add expand_cache to data_cache.py, caching expanded data (speedup)
- Allow version field in DEPENDS (ignored for now)
- Add abort flag support to the shell
- Make inherit fail if the class doesn't exist (#1478)
- Fix data.emit_env() to expand keynames as well as values
- Add ssh fetcher
- Add perforce fetcher
- Make PREFERRED_PROVIDER_foobar defaults to foobar if available
- Share the parser's mtime_cache, reducing the number of stat syscalls
- Compile all anonfuncs at once!
*** Anonfuncs must now use common spacing format ***
- Memorise the list of handlers in __BBHANDLERS and tasks in __BBTASKS
This removes 2 million function calls resulting in a 5-10% speedup
- Add manpage
- Update generateDotGraph to use taskData/runQueue improving accuracy
and also adding a task dependency graph
- Fix/standardise on GPLv2 licence
- Move most functionality from bin/bitbake to cooker.py and split into
separate functions
- CVS fetcher: Added support for non-default port
- Add BBINCLUDELOGS_LINES, the number of lines to read from any logfile
- Drop shebangs from lib/bb scripts
Changes in BitBake 1.7.1:
- Major updates of the dependency handling and execution
of tasks
- Change of the SVN Fetcher to keep the checkout around
courtesy of Paul Sokolovsky (#1367)
Changes in Bitbake 1.6.0:
- Better msg handling


@@ -1,49 +1,45 @@
AUTHORS
COPYING
ChangeLog
MANIFEST
setup.py
bin/bitdoc
bin/bbimage
bin/bitbake
lib/bb/COW.py
lib/bb/__init__.py
lib/bb/build.py
lib/bb/cache.py
lib/bb/cooker.py
lib/bb/COW.py
lib/bb/data.py
lib/bb/data_smart.py
lib/bb/event.py
lib/bb/fetch/__init__.py
lib/bb/manifest.py
lib/bb/methodpool.py
lib/bb/msg.py
lib/bb/providers.py
lib/bb/runqueue.py
lib/bb/shell.py
lib/bb/taskdata.py
lib/bb/utils.py
lib/bb/fetch/cvs.py
lib/bb/fetch/git.py
lib/bb/fetch/__init__.py
lib/bb/fetch/local.py
lib/bb/fetch/perforce.py
lib/bb/fetch/ssh.py
lib/bb/fetch/svk.py
lib/bb/fetch/svn.py
lib/bb/fetch/wget.py
lib/bb/manifest.py
lib/bb/methodpool.py
lib/bb/msg.py
lib/bb/parse/__init__.py
lib/bb/parse/parse_py/__init__.py
lib/bb/parse/parse_py/BBHandler.py
lib/bb/parse/parse_py/ConfHandler.py
lib/bb/providers.py
lib/bb/runqueue.py
lib/bb/shell.py
lib/bb/taskdata.py
lib/bb/utils.py
setup.py
lib/bb/parse/parse_py/__init__.py
doc/COPYING.GPL
doc/COPYING.MIT
doc/bitbake.1
doc/manual/html.css
doc/manual/Makefile
doc/manual/usermanual.xml
contrib/bbdev.sh
contrib/vim/syntax/bitbake.vim
contrib/vim/ftdetect/bitbake.vim
conf/bitbake.conf
classes/base.bbclass


@@ -27,7 +27,7 @@ sys.path.insert(0,os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'l
import bb
from bb import cooker
__version__ = "1.8.5"
__version__ = "1.7.4"
#============================================================================#
# BBOptions
@@ -109,9 +109,15 @@ Default BBFILES are the .bb files in the current directory.""" )
configuration.pkgs_to_build = []
configuration.pkgs_to_build.extend(args[1:])
cooker = bb.cooker.BBCooker(configuration)
cooker.cook()
bb.cooker.BBCooker().cook(configuration)
if __name__ == "__main__":
main()
sys.exit(0)
import profile
profile.run('main()', "profile.log")
import pstats
p = pstats.Stats('profile.log')
p.sort_stats('time')
p.print_stats()
p.print_callers()


@@ -0,0 +1,79 @@
# Copyright (C) 2003 Chris Larson
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
# ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
# OTHER DEALINGS IN THE SOFTWARE.
die() {
bbfatal "$*"
}
bbnote() {
echo "NOTE:" "$*"
}
bbwarn() {
echo "WARNING:" "$*"
}
bbfatal() {
echo "FATAL:" "$*"
exit 1
}
bbdebug() {
test $# -ge 2 || {
echo "Usage: bbdebug level \"message\""
exit 1
}
test ${@bb.msg.debug_level} -ge $1 && {
shift
echo "DEBUG:" $*
}
}
addtask showdata
do_showdata[nostamp] = "1"
python do_showdata() {
import sys
# emit variables and shell functions
bb.data.emit_env(sys.__stdout__, d, True)
# emit the metadata which isn't valid shell
for e in bb.data.keys(d):
if bb.data.getVarFlag(e, 'python', d):
sys.__stdout__.write("\npython %s () {\n%s}\n" % (e, bb.data.getVar(e, d, 1)))
}
addtask listtasks
do_listtasks[nostamp] = "1"
python do_listtasks() {
import sys
for e in bb.data.keys(d):
if bb.data.getVarFlag(e, 'task', d):
sys.__stdout__.write("%s\n" % e)
}
addtask build
do_build[dirs] = "${TOPDIR}"
do_build[nostamp] = "1"
python base_do_build () {
bb.note("The included, default BB base.bbclass does not define a useful default task.")
bb.note("Try running the 'listtasks' task against a .bb to see what tasks are defined.")
}
EXPORT_FUNCTIONS do_clean do_mrproper do_build

bitbake/conf/bitbake.conf

@@ -0,0 +1,58 @@
# Copyright (C) 2003 Chris Larson
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
# ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
# OTHER DEALINGS IN THE SOFTWARE.
B = "${S}"
CVSDIR = "${DL_DIR}/cvs"
DEPENDS = ""
DEPLOY_DIR = "${TMPDIR}/deploy"
DEPLOY_DIR_IMAGE = "${DEPLOY_DIR}/images"
DL_DIR = "${TMPDIR}/downloads"
FETCHCOMMAND = ""
FETCHCOMMAND_cvs = "/usr/bin/env cvs -d${CVSROOT} co ${CVSCOOPTS} ${CVSMODULE}"
FETCHCOMMAND_svn = "/usr/bin/env svn co ${SVNCOOPTS} ${SVNROOT} ${SVNMODULE}"
FETCHCOMMAND_wget = "/usr/bin/env wget -t 5 --passive-ftp -P ${DL_DIR} ${URI}"
FILESDIR = "${@bb.which(bb.data.getVar('FILESPATH', d, 1), '.')}"
FILESPATH = "${FILE_DIRNAME}/${PF}:${FILE_DIRNAME}/${P}:${FILE_DIRNAME}/${PN}:${FILE_DIRNAME}/files:${FILE_DIRNAME}"
FILE_DIRNAME = "${@os.path.dirname(bb.data.getVar('FILE', d))}"
GITDIR = "${DL_DIR}/git"
IMAGE_CMD = "_NO_DEFINED_IMAGE_TYPES_"
IMAGE_ROOTFS = "${TMPDIR}/rootfs"
MKTEMPCMD = "mktemp -q ${TMPBASE}"
MKTEMPDIRCMD = "mktemp -d -q ${TMPBASE}"
OVERRIDES = "local:${MACHINE}:${TARGET_OS}:${TARGET_ARCH}"
P = "${PN}-${PV}"
PF = "${PN}-${PV}-${PR}"
PN = "${@bb.parse.BBHandler.vars_from_file(bb.data.getVar('FILE',d),d)[0] or 'defaultpkgname'}"
PR = "${@bb.parse.BBHandler.vars_from_file(bb.data.getVar('FILE',d),d)[2] or 'r0'}"
PROVIDES = ""
PV = "${@bb.parse.BBHandler.vars_from_file(bb.data.getVar('FILE',d),d)[1] or '1.0'}"
RESUMECOMMAND = ""
RESUMECOMMAND_wget = "/usr/bin/env wget -c -t 5 --passive-ftp -P ${DL_DIR} ${URI}"
S = "${WORKDIR}/${P}"
SRC_URI = "file://${FILE}"
STAMP = "${TMPDIR}/stamps/${PF}"
SVNDIR = "${DL_DIR}/svn"
T = "${WORKDIR}/temp"
TARGET_ARCH = "${BUILD_ARCH}"
TMPDIR = "${TOPDIR}/tmp"
UPDATECOMMAND = ""
UPDATECOMMAND_cvs = "/usr/bin/env cvs -d${CVSROOT} update ${CVSCOOPTS}"
UPDATECOMMAND_svn = "/usr/bin/env svn update ${SVNCOOPTS}"
WORKDIR = "${TMPDIR}/work/${PF}"
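The PN, PV and PR defaults above come from splitting the .bb filename on underscores (vars_from_file). A rough shell equivalent of that split, purely for illustration and not bitbake's actual parser:

```shell
# hypothetical recipe filename carrying name, version and revision fields
f=matchbox-desktop_1.2.3_r7.bb
base=${f%.bb}
PN=${base%%_*}    # package name: part before the first '_'
rest=${base#*_}
PV=${rest%%_*}    # version: the next '_'-separated field
PR=${rest#*_}     # revision (bitbake falls back to 'r0' when absent)
echo "$PN $PV $PR"   # prints: matchbox-desktop 1.2.3 r7
```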


@@ -175,12 +175,6 @@ include</literal> directive.</para>
<varname>DEPENDS</varname> = "${@get_depends(bb, d)}"</screen></para>
<para>This would result in <varname>DEPENDS</varname> containing <literal>dependencywithcond</literal>.</para>
</section>
<section>
<title>Variable Flags</title>
<para>Variables can have associated flags which provide a way of tagging extra information onto a variable. Several flags are used internally by bitbake but they can be used externally too if needed. The standard operations mentioned above also work on flags.</para>
<para><screen><varname>VARIABLE</varname>[<varname>SOMEFLAG</varname>] = "value"</screen></para>
<para>In this example, <varname>VARIABLE</varname> has a flag, <varname>SOMEFLAG</varname> which is set to <literal>value</literal>.</para>
</section>
<section>
<title>Inheritance</title>
<para><emphasis>NOTE:</emphasis> This is only supported in .bb and .bbclass files.</para>
@@ -218,42 +212,6 @@ method one can get the name of the triggered event.</para><para>The above event
of the event and the content of the <varname>FILE</varname> variable.</para>
</section>
</section>
<section>
<title>Dependency Handling</title>
<para>Bitbake 1.7.x onwards works with the metadata at the task level since this is optimal when dealing with multiple threads of execution. A robust method of specifying task dependencies is therefore needed.</para>
<section>
<title>Dependencies internal to the .bb file</title>
<para>Where the dependencies are internal to a given .bb file, the dependencies are handled by the previously detailed addtask directive.</para>
</section>
<section>
<title>DEPENDS</title>
<para>DEPENDS is taken to specify build time dependencies. The 'deptask' flag for tasks is used to signify the task of each DEPENDS which must have completed before that task can be executed.</para>
<para><screen>do_configure[deptask] = "do_populate_staging"</screen></para>
<para>means the do_populate_staging task of each item in DEPENDS must have completed before do_configure can execute.</para>
</section>
<section>
<title>RDEPENDS</title>
<para>RDEPENDS is taken to specify runtime dependencies. The 'rdeptask' flag for tasks is used to signify the task of each RDEPENDS which must have completed before that task can be executed.</para>
<para><screen>do_package_write[rdeptask] = "do_package"</screen></para>
<para>means the do_package task of each item in RDEPENDS must have completed before do_package_write can execute.</para>
</section>
<section>
<title>Recursive DEPENDS</title>
<para>These are specified with the 'recdeptask' flag and are used to signify the task(s) of each DEPENDS which must have completed before that task can be executed. It applies recursively, so the DEPENDS of each item in the original DEPENDS must also be met, and so on.</para>
</section>
<section>
<title>Recursive RDEPENDS</title>
<para>These are specified with the 'recrdeptask' flag and are used to signify the task(s) of each RDEPENDS which must have completed before that task can be executed. It applies recursively, so the RDEPENDS of each item in the original RDEPENDS must also be met, and so on. It also runs all DEPENDS first.</para>
</section>
<section>
<title>Inter Task</title>
<para>The 'depends' flag for tasks is a more generic form which allows an interdependency on specific tasks rather than specifying the data in DEPENDS or RDEPENDS.</para>
<para><screen>do_patch[depends] = "quilt-native:do_populate_staging"</screen></para>
<para>means the do_populate_staging task of the target quilt-native must have completed before the do_patch can execute.</para>
</section>
</section>
<section>
<title>Parsing</title>
<section>
@@ -413,8 +371,6 @@ options:
Stop processing at the given list of dependencies when
generating dependency graphs. This can help to make
the graph more appealing
-l DEBUG_DOMAINS, --log-domains=DEBUG_DOMAINS
Show debug logging for the specified logging domains
</screen>
</para>
@@ -445,20 +401,12 @@ options:
<title>Generating dependency graphs</title>
<para>BitBake is able to generate dependency graphs using the dot syntax. These graphs can be converted
to images using the <application>dot</application> application from <ulink url="http://www.graphviz.org">graphviz</ulink>.
Two files will be written into the current working directory, <emphasis>depends.dot</emphasis> containing dependency information at the package level and <emphasis>task-depends.dot</emphasis> containing a breakdown of the dependencies at the task level. To stop depending on common depends one can use the <prompt>-I depend</prompt> to omit these from the graph. This can lead to more readable graphs. E.g. this way <varname>DEPENDS</varname> from inherited classes, e.g. base.bbclass, can be removed from the graph.</para>
Three files will be written into the current working directory, <emphasis>depends.dot</emphasis> containing <varname>DEPENDS</varname> variables, <emphasis>rdepends.dot</emphasis> and <emphasis>alldepends.dot</emphasis> containing both <varname>DEPENDS</varname> and <varname>RDEPENDS</varname>. To stop depending on common depends one can use the <prompt>-I depend</prompt> to omit these from the graph. This can lead to more readable graphs. E.g. this way <varname>DEPENDS</varname> from inherited classes, e.g. base.bbclass, can be removed from the graph.</para>
<screen><prompt>$ </prompt>bitbake -g blah</screen>
<screen><prompt>$ </prompt>bitbake -g -I virtual/whatever -I bloom blah</screen>
</example>
</para>
</section>
<section>
<title>Special variables</title>
<para>Certain variables affect bitbake operation:</para>
<section>
<title><varname>BB_NUMBER_THREADS</varname></title>
<para> The number of threads bitbake should run at once (default: 1).</para>
</section>
</section>
<section>
<title>Metadata</title>
<para>As you may have seen in the usage information, or in the information about .bb files, the BBFILES variable is how the bitbake tool locates its files. This variable is a space separated list of files that are available, and supports wildcards.


@@ -21,7 +21,7 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
__version__ = "1.8.5"
__version__ = "1.7.4"
__all__ = [


@@ -150,7 +150,7 @@ def exec_func_shell(func, d):
if bb.msg.debug_level['default'] > 0: f.write("set -x\n")
data.emit_env(f, d)
f.write("cd '%s'\n" % os.getcwd())
f.write("cd %s\n" % os.getcwd())
if func: f.write("%s\n" % func)
f.close()
os.chmod(runfile, 0775)
@@ -188,8 +188,7 @@ def exec_func_shell(func, d):
maybe_fakeroot = "PATH=\"%s\" fakeroot " % bb.data.getVar("PATH", d, 1)
else:
maybe_fakeroot = ''
lang_environment = "LC_ALL=C "
ret = os.system('%s%ssh -e "%s"' % (lang_environment, maybe_fakeroot, runfile))
ret = os.system('%ssh -e %s' % (maybe_fakeroot, runfile))
try:
os.chdir(prevdir)
except:
@@ -416,11 +415,9 @@ def add_task(task, deps, d):
def getTask(name):
deptask = data.getVarFlag(task, name, d)
if deptask:
deptask = data.expand(deptask, d)
if not name in task_deps:
task_deps[name] = {}
task_deps[name][task] = deptask
getTask('depends')
getTask('deptask')
getTask('rdeptask')
getTask('recrdeptask')


@@ -39,7 +39,7 @@ except ImportError:
import pickle
bb.msg.note(1, bb.msg.domain.Cache, "Importing cPickle failed. Falling back to a very slow implementation.")
__cache_version__ = "126"
__cache_version__ = "125"
class Cache:
"""
@@ -75,9 +75,6 @@ class Cache:
raise ValueError, 'Cache Version Mismatch'
if version_data['BITBAKE_VER'] != bb.__version__:
raise ValueError, 'Bitbake Version Mismatch'
except EOFError:
bb.msg.note(1, bb.msg.domain.Cache, "Truncated cache found, rebuilding...")
self.depends_cache = {}
except (ValueError, KeyError):
bb.msg.note(1, bb.msg.domain.Cache, "Invalid cache found, rebuilding...")
self.depends_cache = {}
@@ -254,7 +251,6 @@ class Cache:
"""
pn = self.getVar('PN', file_name, True)
pe = self.getVar('PE', file_name, True) or "0"
pv = self.getVar('PV', file_name, True)
pr = self.getVar('PR', file_name, True)
dp = int(self.getVar('DEFAULT_PREFERENCE', file_name, True) or "0")
@@ -276,7 +272,7 @@ class Cache:
# build FileName to PackageName lookup table
cacheData.pkg_fn[file_name] = pn
cacheData.pkg_pepvpr[file_name] = (pe,pv,pr)
cacheData.pkg_pvpr[file_name] = (pv,pr)
cacheData.pkg_dp[file_name] = dp
# Build forward and reverse provider hashes
@@ -411,7 +407,7 @@ class CacheData:
self.possible_world = []
self.pkg_pn = {}
self.pkg_fn = {}
self.pkg_pepvpr = {}
self.pkg_pvpr = {}
self.pkg_dp = {}
self.pn_provides = {}
self.all_depends = Set()


@@ -26,10 +26,33 @@ import sys, os, getopt, glob, copy, os.path, re, time
import bb
from bb import utils, data, parse, event, cache, providers, taskdata, runqueue
from sets import Set
import itertools, sre_constants
import itertools
parsespin = itertools.cycle( r'|/-\\' )
#============================================================================#
# BBStatistics
#============================================================================#
class BBStatistics:
"""
Manage build statistics for one run
"""
def __init__(self ):
self.attempt = 0
self.success = 0
self.fail = 0
self.deps = 0
def show( self ):
print "Build statistics:"
print " Attempted builds: %d" % self.attempt
if self.fail:
print " Failed builds: %d" % self.fail
if self.deps:
print " Dependencies not satisfied: %d" % self.deps
if self.fail or self.deps: return 1
else: return 0
#============================================================================#
# BBCooker
#============================================================================#
@@ -38,61 +61,43 @@ class BBCooker:
Manages one bitbake build run
"""
def __init__(self, configuration):
Statistics = BBStatistics # make it visible from the shell
def __init__( self ):
self.build_cache_fail = []
self.build_cache = []
self.stats = BBStatistics()
self.status = None
self.cache = None
self.bb_cache = None
self.configuration = configuration
if self.configuration.verbose:
bb.msg.set_verbose(True)
if self.configuration.debug:
bb.msg.set_debug_level(self.configuration.debug)
else:
bb.msg.set_debug_level(0)
if self.configuration.debug_domains:
bb.msg.set_debug_domains(self.configuration.debug_domains)
self.configuration.data = bb.data.init()
for f in self.configuration.file:
self.parseConfigurationFile( f )
self.parseConfigurationFile( os.path.join( "conf", "bitbake.conf" ) )
if not self.configuration.cmd:
self.configuration.cmd = bb.data.getVar("BB_DEFAULT_TASK", self.configuration.data) or "build"
#
# Special updated configuration we use for firing events
#
self.configuration.event_data = bb.data.createCopy(self.configuration.data)
bb.data.update_data(self.configuration.event_data)
def tryBuildPackage(self, fn, item, task, the_data, build_depends):
"""
Build one task of a package, optionally build following task depends
"""
bb.event.fire(bb.event.PkgStarted(item, the_data))
try:
self.stats.attempt += 1
if not build_depends:
bb.data.setVarFlag('do_%s' % task, 'dontrundeps', 1, the_data)
if not self.configuration.dry_run:
bb.build.exec_task('do_%s' % task, the_data)
bb.event.fire(bb.event.PkgSucceeded(item, the_data))
self.build_cache.append(fn)
return True
except bb.build.FuncFailed:
self.stats.fail += 1
bb.msg.error(bb.msg.domain.Build, "task stack execution failed")
bb.event.fire(bb.event.PkgFailed(item, the_data))
self.build_cache_fail.append(fn)
raise
except bb.build.EventException, e:
self.stats.fail += 1
event = e.args[1]
bb.msg.error(bb.msg.domain.Build, "%s event exception, aborting" % bb.event.getName(event))
bb.event.fire(bb.event.PkgFailed(item, the_data))
self.build_cache_fail.append(fn)
raise
def tryBuild( self, fn, build_depends):
@@ -107,11 +112,12 @@ class BBCooker:
item = self.status.pkg_fn[fn]
if bb.build.stamp_is_current('do_%s' % self.configuration.cmd, the_data):
self.build_cache.append(fn)
return True
return self.tryBuildPackage(fn, item, self.configuration.cmd, the_data, build_depends)
def showVersions(self):
def showVersions( self ):
pkg_pn = self.status.pkg_pn
preferred_versions = {}
latest_versions = {}
@@ -130,11 +136,11 @@ class BBCooker:
latest = latest_versions[p]
if pref != latest:
prefstr = pref[0][0] + ":" + pref[0][1] + '-' + pref[0][2]
prefstr = pref[0][0] + "-" + pref[0][1]
else:
prefstr = ""
print "%-30s %20s %20s" % (p, latest[0][0] + ":" + latest[0][1] + "-" + latest[0][2],
print "%-30s %20s %20s" % (p, latest[0][0] + "-" + latest[0][1],
prefstr)
@@ -186,8 +192,8 @@ class BBCooker:
taskdata.add_unresolved(localdata, self.status)
except bb.providers.NoProvider:
sys.exit(1)
rq = bb.runqueue.RunQueue(self, self.configuration.data, self.status, taskdata, runlist)
rq.prepare_runqueue()
rq = bb.runqueue.RunQueue()
rq.prepare_runqueue(self, self.configuration.data, self.status, taskdata, runlist)
seen_fnids = []
depends_file = file('depends.dot', 'w' )
@@ -201,7 +207,7 @@ class BBCooker:
fnid = rq.runq_fnid[task]
fn = taskdata.fn_index[fnid]
pn = self.status.pkg_fn[fn]
version = "%s:%s-%s" % self.status.pkg_pepvpr[fn]
version = self.bb_cache.getVar('PV', fn, True ) + '-' + self.bb_cache.getVar('PR', fn, True)
print >> tdepends_file, '"%s.%s" [label="%s %s\\n%s\\n%s"]' % (pn, taskname, pn, taskname, version, fn)
for dep in rq.runq_depends[task]:
depfn = taskdata.fn_index[rq.runq_fnid[dep]]
@@ -365,138 +371,98 @@ class BBCooker:
except ValueError:
bb.msg.error(bb.msg.domain.Parsing, "invalid value for BBFILE_PRIORITY_%s: \"%s\"" % (c, priority))
def buildSetVars(self):
"""
Setup any variables needed before starting a build
"""
if not bb.data.getVar("BUILDNAME", self.configuration.data):
bb.data.setVar("BUILDNAME", os.popen('date +%Y%m%d%H%M').readline().strip(), self.configuration.data)
bb.data.setVar("BUILDSTART", time.strftime('%m/%d/%Y %H:%M:%S',time.gmtime()),self.configuration.data)
def buildFile(self, buildfile):
"""
Build the file matching regexp buildfile
"""
bf = os.path.abspath(buildfile)
try:
os.stat(bf)
except OSError:
(filelist, masked) = self.collect_bbfiles()
regexp = re.compile(buildfile)
matches = []
for f in filelist:
if regexp.search(f) and os.path.isfile(f):
bf = f
matches.append(f)
if len(matches) != 1:
bb.msg.error(bb.msg.domain.Parsing, "Unable to match %s (%s matches found):" % (buildfile, len(matches)))
for f in matches:
bb.msg.error(bb.msg.domain.Parsing, " %s" % f)
sys.exit(1)
bf = matches[0]
bbfile_data = bb.parse.handle(bf, self.configuration.data)
# Remove stamp for target if force mode active
if self.configuration.force:
bb.msg.note(2, bb.msg.domain.RunQueue, "Remove stamp %s, %s" % (self.configuration.cmd, bf))
bb.build.del_stamp('do_%s' % self.configuration.cmd, bbfile_data)
item = bb.data.getVar('PN', bbfile_data, 1)
try:
self.tryBuildPackage(bf, item, self.configuration.cmd, bbfile_data, True)
except bb.build.EventException:
bb.msg.error(bb.msg.domain.Build, "Build of '%s' failed" % item )
sys.exit(0)
def buildTargets(self, targets):
"""
Attempt to build the targets specified
"""
buildname = bb.data.getVar("BUILDNAME", self.configuration.data)
bb.event.fire(bb.event.BuildStarted(buildname, targets, self.configuration.event_data))
localdata = data.createCopy(self.configuration.data)
bb.data.update_data(localdata)
bb.data.expandKeys(localdata)
taskdata = bb.taskdata.TaskData(self.configuration.abort)
runlist = []
try:
for k in targets:
taskdata.add_provider(localdata, self.status, k)
runlist.append([k, "do_%s" % self.configuration.cmd])
taskdata.add_unresolved(localdata, self.status)
except bb.providers.NoProvider:
sys.exit(1)
rq = bb.runqueue.RunQueue(self, self.configuration.data, self.status, taskdata, runlist)
rq.prepare_runqueue()
try:
failures = rq.execute_runqueue()
except runqueue.TaskFailure, fnids:
for fnid in fnids:
bb.msg.error(bb.msg.domain.Build, "'%s' failed" % taskdata.fn_index[fnid])
sys.exit(1)
bb.event.fire(bb.event.BuildCompleted(buildname, targets, self.configuration.event_data, failures))
sys.exit(0)
def updateCache(self):
# Import Psyco if available and not disabled
if not self.configuration.disable_psyco:
try:
import psyco
except ImportError:
bb.msg.note(1, bb.msg.domain.Collection, "Psyco JIT Compiler (http://psyco.sf.net) not available. Install it to increase performance.")
else:
psyco.bind( self.parse_bbfiles )
else:
bb.msg.note(1, bb.msg.domain.Collection, "You have disabled Psyco. This decreases performance.")
self.status = bb.cache.CacheData()
ignore = bb.data.getVar("ASSUME_PROVIDED", self.configuration.data, 1) or ""
self.status.ignored_dependencies = Set( ignore.split() )
self.handleCollections( bb.data.getVar("BBFILE_COLLECTIONS", self.configuration.data, 1) )
bb.msg.debug(1, bb.msg.domain.Collection, "collecting .bb files")
(filelist, masked) = self.collect_bbfiles()
self.parse_bbfiles(filelist, masked, self.myProgressCallback)
bb.msg.debug(1, bb.msg.domain.Collection, "parsing complete")
self.buildDepgraph()
def cook(self):
def cook(self, configuration):
"""
We are building stuff here. We do the building
from here. By default we try to execute task
build.
"""
self.configuration = configuration
if self.configuration.verbose:
bb.msg.set_verbose(True)
if self.configuration.debug:
bb.msg.set_debug_level(self.configuration.debug)
else:
bb.msg.set_debug_level(0)
if self.configuration.debug_domains:
bb.msg.set_debug_domains(self.configuration.debug_domains)
self.configuration.data = bb.data.init()
for f in self.configuration.file:
self.parseConfigurationFile( f )
self.parseConfigurationFile( os.path.join( "conf", "bitbake.conf" ) )
if not self.configuration.cmd:
self.configuration.cmd = bb.data.getVar("BB_DEFAULT_TASK", self.configuration.data) or "build"
#
# Special updated configuration we use for firing events
#
self.configuration.event_data = bb.data.createCopy(self.configuration.data)
bb.data.update_data(self.configuration.event_data)
if self.configuration.show_environment:
self.showEnvironment()
sys.exit( 0 )
self.buildSetVars()
# inject custom variables
if not bb.data.getVar("BUILDNAME", self.configuration.data):
bb.data.setVar("BUILDNAME", os.popen('date +%Y%m%d%H%M').readline().strip(), self.configuration.data)
bb.data.setVar("BUILDSTART", time.strftime('%m/%d/%Y %H:%M:%S',time.gmtime()),self.configuration.data)
buildname = bb.data.getVar("BUILDNAME", self.configuration.data)
if self.configuration.interactive:
self.interactiveMode()
if self.configuration.buildfile is not None:
return self.buildFile(self.configuration.buildfile)
bf = os.path.abspath( self.configuration.buildfile )
try:
os.stat(bf)
except OSError:
(filelist, masked) = self.collect_bbfiles()
regexp = re.compile(self.configuration.buildfile)
matches = []
for f in filelist:
if regexp.search(f) and os.path.isfile(f):
bf = f
matches.append(f)
if len(matches) != 1:
bb.msg.error(bb.msg.domain.Parsing, "Unable to match %s (%s matches found):" % (self.configuration.buildfile, len(matches)))
for f in matches:
bb.msg.error(bb.msg.domain.Parsing, " %s" % f)
sys.exit(1)
bf = matches[0]
bbfile_data = bb.parse.handle(bf, self.configuration.data)
# Remove stamp for target if force mode active
if self.configuration.force:
bb.msg.note(2, bb.msg.domain.RunQueue, "Remove stamp %s, %s" % (self.configuration.cmd, bf))
bb.build.del_stamp('do_%s' % self.configuration.cmd, bbfile_data)
item = bb.data.getVar('PN', bbfile_data, 1)
try:
self.tryBuildPackage(bf, item, self.configuration.cmd, bbfile_data, True)
except bb.build.EventException:
bb.msg.error(bb.msg.domain.Build, "Build of '%s' failed" % item )
sys.exit( self.stats.show() )
# initialise the parsing status now we know we will need deps
self.updateCache()
self.status = bb.cache.CacheData()
if self.configuration.parse_only:
bb.msg.note(1, bb.msg.domain.Collection, "Requested parsing .bb files only. Exiting.")
return 0
ignore = bb.data.getVar("ASSUME_PROVIDED", self.configuration.data, 1) or ""
self.status.ignored_dependencies = Set( ignore.split() )
self.handleCollections( bb.data.getVar("BBFILE_COLLECTIONS", self.configuration.data, 1) )
pkgs_to_build = self.configuration.pkgs_to_build
@@ -509,7 +475,30 @@ class BBCooker:
print "for usage information."
sys.exit(0)
# Import Psyco if available and not disabled
if not self.configuration.disable_psyco:
try:
import psyco
except ImportError:
bb.msg.note(1, bb.msg.domain.Collection, "Psyco JIT Compiler (http://psyco.sf.net) not available. Install it to increase performance.")
else:
psyco.bind( self.parse_bbfiles )
else:
bb.msg.note(1, bb.msg.domain.Collection, "You have disabled Psyco. This decreases performance.")
try:
bb.msg.debug(1, bb.msg.domain.Collection, "collecting .bb files")
(filelist, masked) = self.collect_bbfiles()
self.parse_bbfiles(filelist, masked, self.myProgressCallback)
bb.msg.debug(1, bb.msg.domain.Collection, "parsing complete")
print
if self.configuration.parse_only:
bb.msg.note(1, bb.msg.domain.Collection, "Requested parsing .bb files only. Exiting.")
return
self.buildDepgraph()
if self.configuration.show_versions:
self.showVersions()
sys.exit( 0 )
@@ -523,7 +512,34 @@ class BBCooker:
self.generateDotGraph( pkgs_to_build, self.configuration.ignored_dot_deps )
sys.exit( 0 )
return self.buildTargets(pkgs_to_build)
bb.event.fire(bb.event.BuildStarted(buildname, pkgs_to_build, self.configuration.event_data))
localdata = data.createCopy(self.configuration.data)
bb.data.update_data(localdata)
bb.data.expandKeys(localdata)
taskdata = bb.taskdata.TaskData(self.configuration.abort)
runlist = []
try:
for k in pkgs_to_build:
taskdata.add_provider(localdata, self.status, k)
runlist.append([k, "do_%s" % self.configuration.cmd])
taskdata.add_unresolved(localdata, self.status)
except bb.providers.NoProvider:
sys.exit(1)
rq = bb.runqueue.RunQueue()
rq.prepare_runqueue(self, self.configuration.data, self.status, taskdata, runlist)
try:
failures = rq.execute_runqueue(self, self.configuration.data, self.status, taskdata, runlist)
except runqueue.TaskFailure, fnids:
for fnid in fnids:
bb.msg.error(bb.msg.domain.Build, "'%s' failed" % taskdata.fn_index[fnid])
sys.exit(1)
bb.event.fire(bb.event.BuildCompleted(buildname, pkgs_to_build, self.configuration.event_data, failures))
sys.exit( self.stats.show() )
except KeyboardInterrupt:
bb.msg.note(1, bb.msg.domain.Collection, "KeyboardInterrupt - Build not completed.")
@@ -540,17 +556,13 @@ class BBCooker:
return bbfiles
def find_bbfiles( self, path ):
"""Find all the .bb files in a directory"""
from os.path import join
found = []
for dir, dirs, files in os.walk(path):
for ignored in ('SCCS', 'CVS', '.svn'):
if ignored in dirs:
dirs.remove(ignored)
found += [join(dir,f) for f in files if f.endswith('.bb')]
return found
"""Find all the .bb files in a directory (uses find)"""
findcmd = 'find ' + path + ' -name *.bb | grep -v SCCS/'
try:
finddata = os.popen(findcmd)
except OSError:
return []
return finddata.readlines()
def collect_bbfiles( self ):
"""Collect all available .bb build files"""
@@ -597,7 +609,7 @@ class BBCooker:
return (finalfiles, masked)
def parse_bbfiles(self, filelist, masked, progressCallback = None):
parsed, cached, skipped, error = 0, 0, 0, 0
parsed, cached, skipped = 0, 0, 0
for i in xrange( len( filelist ) ):
f = filelist[i]
@@ -640,7 +652,6 @@ class BBCooker:
self.bb_cache.sync()
raise
except Exception, e:
error += 1
self.bb_cache.remove(f)
bb.msg.error(bb.msg.domain.Collection, "%s while parsing %s" % (e, f))
except:
@@ -652,6 +663,3 @@ class BBCooker:
bb.msg.note(1, bb.msg.domain.Collection, "Parsing finished. %d cached, %d parsed, %d skipped, %d masked." % ( cached, parsed, skipped, masked ))
self.bb_cache.sync()
if error > 0:
bb.msg.fatal(bb.msg.domain.Collection, "Parsing errors found, exiting...")


@@ -23,13 +23,14 @@ BitBake build tools.
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import os, re
import bb.data
import bb.utils
class Event:
"""Base class for events"""
type = "Event"
def __init__(self, d):
def __init__(self, d = bb.data.init()):
self._data = d
def getData(self):
@@ -128,7 +129,7 @@ def getName(e):
class PkgBase(Event):
"""Base class for package events"""
def __init__(self, t, d):
def __init__(self, t, d = bb.data.init()):
self._pkg = t
Event.__init__(self, d)


@@ -91,12 +91,6 @@ class Svn(Fetch):
elif ud.date != "now":
options.append("-r {%s}" % ud.date)
if ud.user:
options.append("--username %s" % ud.user)
if ud.pswd:
options.append("--password %s" % ud.pswd)
localdata = data.createCopy(d)
data.setVar('OVERRIDES', "svn:%s" % data.getVar('OVERRIDES', localdata), localdata)
data.update_data(localdata)


@@ -23,7 +23,7 @@ Message handling infrastructure for bitbake
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
import sys, os, re, bb
from bb import utils, event
from bb import utils
debug_level = {}
@@ -42,29 +42,6 @@ domain = bb.utils.Enum(
'TaskData',
'Util')
class MsgBase(bb.event.Event):
"""Base class for messages"""
def __init__(self, msg, d ):
self._message = msg
event.Event.__init__(self, d)
class MsgDebug(MsgBase):
"""Debug Message"""
class MsgNote(MsgBase):
"""Note Message"""
class MsgWarn(MsgBase):
"""Warning Message"""
class MsgError(MsgBase):
"""Error Message"""
class MsgFatal(MsgBase):
"""Fatal Message"""
#
# Message control functions
#
@@ -94,7 +71,6 @@ def set_debug_domains(domains):
def debug(level, domain, msg, fn = None):
if debug_level[domain] >= level:
bb.event.fire(MsgDebug(msg, None))
print 'DEBUG: ' + msg
def note(level, domain, msg, fn = None):
@@ -115,22 +91,17 @@ def fatal(domain, msg, fn = None):
#
def std_debug(lvl, msg):
if debug_level['default'] >= lvl:
bb.event.fire(MsgDebug(msg, None))
print 'DEBUG: ' + msg
def std_note(msg):
bb.event.fire(MsgNote(msg, None))
print 'NOTE: ' + msg
def std_warn(msg):
bb.event.fire(MsgWarn(msg, None))
print 'WARNING: ' + msg
def std_error(msg):
bb.event.fire(MsgError(msg, None))
print 'ERROR: ' + msg
def std_fatal(msg):
bb.event.fire(MsgFatal(msg, None))
print 'ERROR: ' + msg
sys.exit(1)


@@ -72,9 +72,9 @@ def inherit(files, d):
if not file in __inherit_cache:
bb.msg.debug(2, bb.msg.domain.Parsing, "BB %s:%d: inheriting %s" % (fn, lineno, file))
__inherit_cache.append( file )
data.setVar('__inherit_cache', __inherit_cache, d)
include(fn, file, d, "inherit")
__inherit_cache = data.getVar('__inherit_cache', d) or []
data.setVar('__inherit_cache', __inherit_cache, d)
def handle(fn, d, include = 0):
global __func_start_regexp__, __inherit_regexp__, __export_func_regexp__, __addtask_regexp__, __addhandler_regexp__, __infunc__, __body__, __residue__
@@ -377,8 +377,6 @@ def vars_from_file(mypkg, d):
myfile = os.path.splitext(os.path.basename(mypkg))
parts = myfile[0].split('_')
__pkgsplit_cache__[mypkg] = parts
if len(parts) > 3:
raise ParseError("Unable to generate default variables from the filename: %s (too many underscores)" % mypkg)
exp = 3 - len(parts)
tmplist = []
while exp != 0:


@@ -161,12 +161,6 @@ def handle(fn, data, include = 0):
return data
def feeder(lineno, s, fn, data):
def getFunc(groupd, key, data):
if 'flag' in groupd and groupd['flag'] != None:
return bb.data.getVarFlag(key, groupd['flag'], data)
else:
return bb.data.getVar(key, data)
m = __config_regexp__.match(s)
if m:
groupd = m.groupdict()
@@ -174,19 +168,19 @@ def feeder(lineno, s, fn, data):
if "exp" in groupd and groupd["exp"] != None:
bb.data.setVarFlag(key, "export", 1, data)
if "ques" in groupd and groupd["ques"] != None:
val = getFunc(groupd, key, data)
val = bb.data.getVar(key, data)
if val == None:
val = groupd["value"]
elif "colon" in groupd and groupd["colon"] != None:
val = bb.data.expand(groupd["value"], data)
elif "append" in groupd and groupd["append"] != None:
val = "%s %s" % ((getFunc(groupd, key, data) or ""), groupd["value"])
val = "%s %s" % ((bb.data.getVar(key, data) or ""), groupd["value"])
elif "prepend" in groupd and groupd["prepend"] != None:
val = "%s %s" % (groupd["value"], (getFunc(groupd, key, data) or ""))
val = "%s %s" % (groupd["value"], (bb.data.getVar(key, data) or ""))
elif "postdot" in groupd and groupd["postdot"] != None:
val = "%s%s" % ((getFunc(groupd, key, data) or ""), groupd["value"])
val = "%s%s" % ((bb.data.getVar(key, data) or ""), groupd["value"])
elif "predot" in groupd and groupd["predot"] != None:
val = "%s%s" % (groupd["value"], (getFunc(groupd, key, data) or ""))
val = "%s%s" % (groupd["value"], (bb.data.getVar(key, data) or ""))
else:
val = groupd["value"]
if 'flag' in groupd and groupd['flag'] != None:


@@ -61,27 +61,19 @@ def findBestProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
preferred_v = bb.data.getVar('PREFERRED_VERSION_%s' % pn, localdata, True)
if preferred_v:
m = re.match('(\d+:)*(.*)(_.*)*', preferred_v)
m = re.match('(.*)_(.*)', preferred_v)
if m:
if m.group(1):
preferred_e = int(m.group(1)[:-1])
else:
preferred_e = None
preferred_v = m.group(2)
if m.group(3):
preferred_r = m.group(3)[1:]
else:
preferred_r = None
preferred_v = m.group(1)
preferred_r = m.group(2)
else:
preferred_e = None
preferred_r = None
for file_set in tmp_pn:
for f in file_set:
pe,pv,pr = dataCache.pkg_pepvpr[f]
if preferred_v == pv and (preferred_r == pr or preferred_r == None) and (preferred_e == pe or preferred_e == None):
pv,pr = dataCache.pkg_pvpr[f]
if preferred_v == pv and (preferred_r == pr or preferred_r == None):
preferred_file = f
preferred_ver = (pe, pv, pr)
preferred_ver = (pv, pr)
break
if preferred_file:
break;
@@ -89,8 +81,6 @@ def findBestProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
pv_str = '%s-%s' % (preferred_v, preferred_r)
else:
pv_str = preferred_v
if not (preferred_e is None):
pv_str = '%s:%s' % (preferred_e, pv_str)
itemstr = ""
if item:
itemstr = " (for item %s)" % item
@@ -107,11 +97,11 @@ def findBestProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
latest_p = 0
latest_f = None
for file_name in files:
pe,pv,pr = dataCache.pkg_pepvpr[file_name]
pv,pr = dataCache.pkg_pvpr[file_name]
dp = dataCache.pkg_dp[file_name]
if (latest is None) or ((latest_p == dp) and (utils.vercmp(latest, (pe, pv, pr)) < 0)) or (dp > latest_p):
latest = (pe, pv, pr)
if (latest is None) or ((latest_p == dp) and (utils.vercmp(latest, (pv, pr)) < 0)) or (dp > latest_p):
latest = (pv, pr)
latest_f = file_name
latest_p = dp
if preferred_file is None:
@@ -120,7 +110,10 @@ def findBestProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
return (latest,latest_f,preferred_ver, preferred_file)
def filterProviders(providers, item, cfgData, dataCache):
#
# RP - build_cache_fail needs to move elsewhere
#
def filterProviders(providers, item, cfgData, dataCache, build_cache_fail = {}):
"""
Take a list of providers and filter/reorder according to the
environment variables and previous build results
@@ -142,6 +135,12 @@ def filterProviders(providers, item, cfgData, dataCache):
preferred_versions[pn] = bb.providers.findBestProvider(pn, cfgData, dataCache, pkg_pn, item)[2:4]
eligible.append(preferred_versions[pn][1])
for p in eligible:
if p in build_cache_fail:
bb.msg.debug(1, bb.msg.domain.Provider, "rejecting already-failed %s" % p)
eligible.remove(p)
if len(eligible) == 0:
bb.msg.error(bb.msg.domain.Provider, "no eligible providers for %s" % item)
return 0
@@ -163,7 +162,7 @@ def filterProviders(providers, item, cfgData, dataCache):
# if so, bump it to the head of the queue
for p in providers:
pn = dataCache.pkg_fn[p]
pe, pv, pr = dataCache.pkg_pepvpr[p]
pv, pr = dataCache.pkg_pvpr[p]
stamp = '%s.do_populate_staging' % dataCache.stamp[p]
if os.path.exists(stamp):
@@ -172,11 +171,7 @@ def filterProviders(providers, item, cfgData, dataCache):
# package was made ineligible by already-failed check
continue
oldver = "%s-%s" % (pv, pr)
if pe > 0:
oldver = "%s:%s" % (pe, oldver)
newver = "%s-%s" % (newvers[1], newvers[2])
if newvers[0] > 0:
newver = "%s:%s" % (newvers[0], newver)
newver = '-'.join(newvers)
if (newver != oldver):
extra_chat = "%s (%s) already staged but upgrading to %s to satisfy %s" % (pn, oldver, newver, item)
else:


@@ -25,47 +25,20 @@ Handles preparation and execution of a queue of tasks
from bb import msg, data, fetch, event, mkdirhier, utils
from sets import Set
import bb, os, sys
import signal
class TaskFailure(Exception):
"""Exception raised when a task in a runqueue fails"""
def __init__(self, x):
self.args = x
class RunQueueStats:
"""
Holds statistics on the tasks handled by the associated runQueue
"""
def __init__(self):
self.completed = 0
self.skipped = 0
self.failed = 0
def taskFailed(self):
self.failed = self.failed + 1
def taskCompleted(self):
self.completed = self.completed + 1
def taskSkipped(self):
self.skipped = self.skipped + 1
class RunQueue:
"""
BitBake Run Queue implementation
"""
def __init__(self, cooker, cfgData, dataCache, taskData, targets):
def __init__(self):
self.reset_runqueue()
self.cooker = cooker
self.dataCache = dataCache
self.taskData = taskData
self.targets = targets
self.number_tasks = int(bb.data.getVar("BB_NUMBER_THREADS", cfgData) or 1)
def reset_runqueue(self):
self.runq_fnid = []
self.runq_task = []
self.runq_depends = []
@@ -73,15 +46,16 @@ class RunQueue:
self.runq_weight = []
self.prio_map = []
def get_user_idstring(self, task):
fn = self.taskData.fn_index[self.runq_fnid[task]]
def get_user_idstring(self, task, taskData):
fn = taskData.fn_index[self.runq_fnid[task]]
taskname = self.runq_task[task]
return "%s, %s" % (fn, taskname)
def prepare_runqueue(self):
def prepare_runqueue(self, cooker, cfgData, dataCache, taskData, targets):
"""
Turn a set of taskData into a RunQueue and compute data needed
to optimise the execution order.
targets is list of paired values - a provider name and the task to run
"""
depends = []
@@ -89,18 +63,12 @@ class RunQueue:
runq_build = []
runq_done = []
taskData = self.taskData
if len(taskData.tasks_name) == 0:
# Nothing to do
return
bb.msg.note(1, bb.msg.domain.RunQueue, "Preparing runqueue")
bb.msg.note(1, bb.msg.domain.RunQueue, "Preparing Runqueue")
for task in range(len(taskData.tasks_name)):
fnid = taskData.tasks_fnid[task]
fn = taskData.fn_index[fnid]
task_deps = self.dataCache.task_deps[fn]
task_deps = dataCache.task_deps[fn]
if fnid not in taskData.failed_fnids:
@@ -126,15 +94,6 @@ class RunQueue:
dep = taskData.fn_index[depdata]
depends.append(taskData.gettask_id(dep, taskname))
idepends = taskData.tasks_idepends[task]
for idepend in idepends:
depid = int(idepend.split(":")[0])
if depid in taskData.build_targets:
depdata = taskData.build_targets[depid][0]
if depdata:
dep = taskData.fn_index[depdata]
depends.append(taskData.gettask_id(dep, idepend.split(":")[1]))
def add_recursive_build(depid):
"""
Add build depends of depid to depends
@@ -193,9 +152,9 @@ class RunQueue:
# Resolve Recursive Runtime Depends
# Also includes all Build Depends (and their runtime depends)
if 'recrdeptask' in task_deps and taskData.tasks_name[task] in task_deps['recrdeptask']:
dep_seen = []
rdep_seen = []
for taskname in task_deps['recrdeptask'][taskData.tasks_name[task]].split():
dep_seen = []
rdep_seen = []
for depid in taskData.depids[fnid]:
add_recursive_build(depid)
for rdepid in taskData.rdepids[fnid]:
@@ -238,22 +197,22 @@ class RunQueue:
for depend in depends:
mark_active(depend, depth+1)
for target in self.targets:
for target in targets:
targetid = taskData.getbuild_id(target[0])
if targetid not in taskData.build_targets:
continue
if targetid in taskData.failed_deps:
continue
fnid = taskData.build_targets[targetid][0]
# Remove stamps for targets if force mode active
if self.cooker.configuration.force:
if cooker.configuration.force:
fn = taskData.fn_index[fnid]
bb.msg.note(2, bb.msg.domain.RunQueue, "Remove stamp %s, %s" % (target[1], fn))
bb.build.del_stamp(target[1], self.dataCache, fn)
bb.build.del_stamp(target[1], dataCache, fn)
if targetid in taskData.failed_deps:
continue
if fnid in taskData.failed_fnids:
continue
@@ -340,18 +299,18 @@ class RunQueue:
seen.append(taskid)
for revdep in self.runq_revdeps[taskid]:
if runq_done[revdep] == 0 and revdep not in seen and not finish:
bb.msg.error(bb.msg.domain.RunQueue, "Task %s (%s) (depends: %s)" % (revdep, self.get_user_idstring(revdep), self.runq_depends[revdep]))
bb.msg.error(bb.msg.domain.RunQueue, "Task %s (%s) (depends: %s)" % (revdep, self.get_user_idstring(revdep, taskData), self.runq_depends[revdep]))
if revdep in deps_seen:
bb.msg.error(bb.msg.domain.RunQueue, "Chain ends at Task %s (%s)" % (revdep, self.get_user_idstring(revdep)))
bb.msg.error(bb.msg.domain.RunQueue, "Chain ends at Task %s (%s)" % (revdep, self.get_user_idstring(revdep, taskData)))
finish = True
return
for dep in self.runq_depends[revdep]:
deps_seen.append(dep)
print_chain(revdep, finish)
print_chain(task, False)
bb.msg.fatal(bb.msg.domain.RunQueue, "Task %s (%s) not processed!\nThis is probably a circular dependency (the chain might be printed above)." % (task, self.get_user_idstring(task)))
bb.msg.fatal(bb.msg.domain.RunQueue, "Task %s (%s) not processed!\nThis is probably a circular dependency (the chain might be printed above)." % (task, self.get_user_idstring(task, taskData)))
if runq_weight1[task] != 0:
bb.msg.fatal(bb.msg.domain.RunQueue, "Task %s (%s) count not zero!" % (task, self.get_user_idstring(task)))
bb.msg.fatal(bb.msg.domain.RunQueue, "Task %s (%s) count not zero!" % (task, self.get_user_idstring(task, taskData)))
# Make a weight sorted map
from copy import deepcopy
@@ -369,7 +328,7 @@ class RunQueue:
#self.dump_data(taskData)
def execute_runqueue(self):
def execute_runqueue(self, cooker, cfgData, dataCache, taskData, runlist):
"""
Run the tasks in a queue prepared by prepare_runqueue
Upon failure, optionally try to recover the build using any alternate providers
@@ -378,192 +337,181 @@ class RunQueue:
failures = 0
while 1:
failed_fnids = []
try:
self.execute_runqueue_internal()
finally:
if self.master_process:
failed_fnids = self.finish_runqueue()
failed_fnids = self.execute_runqueue_internal(cooker, cfgData, dataCache, taskData)
if len(failed_fnids) == 0:
return failures
if self.taskData.abort:
if taskData.abort:
raise bb.runqueue.TaskFailure(failed_fnids)
for fnid in failed_fnids:
#print "Failure: %s %s %s" % (fnid, self.taskData.fn_index[fnid], self.runq_task[fnid])
self.taskData.fail_fnid(fnid)
#print "Failure: %s %s %s" % (fnid, taskData.fn_index[fnid], self.runq_task[fnid])
taskData.fail_fnid(fnid)
failures = failures + 1
self.reset_runqueue()
self.prepare_runqueue()
self.prepare_runqueue(cfgData, dataCache, taskData, runlist)
def execute_runqueue_initVars(self):
self.stats = RunQueueStats()
self.active_builds = 0
self.runq_buildable = []
self.runq_running = []
self.runq_complete = []
self.build_pids = {}
self.failed_fnids = []
self.master_process = True
# Mark initial buildable tasks
for task in range(len(self.runq_fnid)):
self.runq_running.append(0)
self.runq_complete.append(0)
if len(self.runq_depends[task]) == 0:
self.runq_buildable.append(1)
else:
self.runq_buildable.append(0)
def task_complete(self, task):
"""
Mark a task as completed
Look at the reverse dependencies and mark any task with
completed dependencies as buildable
"""
self.runq_complete[task] = 1
for revdep in self.runq_revdeps[task]:
if self.runq_running[revdep] == 1:
continue
if self.runq_buildable[revdep] == 1:
continue
alldeps = 1
for dep in self.runq_depends[revdep]:
if self.runq_complete[dep] != 1:
alldeps = 0
if alldeps == 1:
self.runq_buildable[revdep] = 1
fn = self.taskData.fn_index[self.runq_fnid[revdep]]
taskname = self.runq_task[revdep]
bb.msg.debug(1, bb.msg.domain.RunQueue, "Marking task %s (%s, %s) as buildable" % (revdep, fn, taskname))
def get_next_task(self):
"""
Return the id of the highest priority task that is buildable
"""
for task1 in range(len(self.runq_fnid)):
task = self.prio_map[task1]
if self.runq_running[task] == 1:
continue
if self.runq_buildable[task] == 1:
return task
return None
def execute_runqueue_internal(self):
def execute_runqueue_internal(self, cooker, cfgData, dataCache, taskData):
"""
Run the tasks in a queue prepared by prepare_runqueue
"""
import signal
bb.msg.note(1, bb.msg.domain.RunQueue, "Executing runqueue")
self.execute_runqueue_initVars()
active_builds = 0
tasks_completed = 0
tasks_skipped = 0
runq_buildable = []
runq_running = []
runq_complete = []
build_pids = {}
failed_fnids = []
if len(self.runq_fnid) == 0:
# nothing to do
return []
return
def sigint_handler(signum, frame):
raise KeyboardInterrupt
while True:
task = self.get_next_task()
if task is not None:
fn = self.taskData.fn_index[self.runq_fnid[task]]
taskname = self.runq_task[task]
if bb.build.stamp_is_current(taskname, self.dataCache, fn):
bb.msg.debug(2, bb.msg.domain.RunQueue, "Stamp current task %s (%s)" % (task, self.get_user_idstring(task)))
self.runq_running[task] = 1
self.task_complete(task)
self.stats.taskCompleted()
self.stats.taskSkipped()
def get_next_task(data):
"""
Return the id of the highest priority task that is buildable
"""
for task1 in range(len(data.runq_fnid)):
task = data.prio_map[task1]
if runq_running[task] == 1:
continue
if runq_buildable[task] == 1:
return task
return None
bb.msg.note(1, bb.msg.domain.RunQueue, "Running task %d of %d (ID: %s, %s)" % (self.stats.completed + self.active_builds + 1, len(self.runq_fnid), task, self.get_user_idstring(task)))
try:
pid = os.fork()
except OSError, e:
bb.msg.fatal(bb.msg.domain.RunQueue, "fork failed: %d (%s)" % (e.errno, e.strerror))
if pid == 0:
# Bypass master process' handling
self.master_process = False
# Stop Ctrl+C being sent to children
# signal.signal(signal.SIGINT, signal.SIG_IGN)
# Make the child the process group leader
os.setpgid(0, 0)
newsi = os.open('/dev/null', os.O_RDWR)
os.dup2(newsi, sys.stdin.fileno())
self.cooker.configuration.cmd = taskname[3:]
try:
self.cooker.tryBuild(fn, False)
except bb.build.EventException:
bb.msg.error(bb.msg.domain.Build, "Build of " + fn + " " + taskname + " failed")
sys.exit(1)
except:
bb.msg.error(bb.msg.domain.Build, "Build of " + fn + " " + taskname + " failed")
raise
sys.exit(0)
self.build_pids[pid] = task
self.runq_running[task] = 1
self.active_builds = self.active_builds + 1
if self.active_builds < self.number_tasks:
def task_complete(data, task):
"""
Mark a task as completed
Look at the reverse dependencies and mark any task with
completed dependencies as buildable
"""
runq_complete[task] = 1
for revdep in data.runq_revdeps[task]:
if runq_running[revdep] == 1:
continue
if self.active_builds > 0:
result = os.waitpid(-1, 0)
self.active_builds = self.active_builds - 1
task = self.build_pids[result[0]]
if result[1] != 0:
del self.build_pids[result[0]]
bb.msg.error(bb.msg.domain.RunQueue, "Task %s (%s) failed" % (task, self.get_user_idstring(task)))
self.failed_fnids.append(self.runq_fnid[task])
self.stats.taskFailed()
break
self.task_complete(task)
self.stats.taskCompleted()
del self.build_pids[result[0]]
continue
return
if runq_buildable[revdep] == 1:
continue
alldeps = 1
for dep in data.runq_depends[revdep]:
if runq_complete[dep] != 1:
alldeps = 0
if alldeps == 1:
runq_buildable[revdep] = 1
fn = taskData.fn_index[self.runq_fnid[revdep]]
taskname = self.runq_task[revdep]
bb.msg.debug(1, bb.msg.domain.RunQueue, "Marking task %s (%s, %s) as buildable" % (revdep, fn, taskname))
# Mark initial buildable tasks
for task in range(len(self.runq_fnid)):
runq_running.append(0)
runq_complete.append(0)
if len(self.runq_depends[task]) == 0:
runq_buildable.append(1)
else:
runq_buildable.append(0)
number_tasks = int(bb.data.getVar("BB_NUMBER_THREADS", cfgData) or 1)
def finish_runqueue(self):
try:
while self.active_builds > 0:
bb.msg.note(1, bb.msg.domain.RunQueue, "Waiting for %s active tasks to finish" % self.active_builds)
tasknum = 1
for k, v in self.build_pids.iteritems():
bb.msg.note(1, bb.msg.domain.RunQueue, "%s: %s (%s)" % (tasknum, self.get_user_idstring(v), k))
tasknum = tasknum + 1
result = os.waitpid(-1, 0)
task = self.build_pids[result[0]]
if result[1] != 0:
bb.msg.error(bb.msg.domain.RunQueue, "Task %s (%s) failed" % (task, self.get_user_idstring(task)))
self.failed_fnids.append(self.runq_fnid[task])
self.stats.taskFailed()
del self.build_pids[result[0]]
self.active_builds = self.active_builds - 1
bb.msg.note(1, bb.msg.domain.RunQueue, "Tasks Summary: Attempted %d tasks of which %d didn't need to be rerun and %d failed." % (self.stats.completed, self.stats.skipped, self.stats.failed))
return self.failed_fnids
except KeyboardInterrupt:
bb.msg.note(1, bb.msg.domain.RunQueue, "Sending SIGINT to remaining %s tasks" % self.active_builds)
for k, v in self.build_pids.iteritems():
try:
while 1:
task = get_next_task(self)
if task is not None:
fn = taskData.fn_index[self.runq_fnid[task]]
taskname = self.runq_task[task]
if bb.build.stamp_is_current(taskname, dataCache, fn):
bb.msg.debug(2, bb.msg.domain.RunQueue, "Stamp current task %s (%s)" % (task, self.get_user_idstring(task, taskData)))
runq_running[task] = 1
task_complete(self, task)
tasks_completed = tasks_completed + 1
tasks_skipped = tasks_skipped + 1
continue
bb.msg.note(1, bb.msg.domain.RunQueue, "Running task %d of %d (ID: %s, %s)" % (tasks_completed + active_builds + 1, len(self.runq_fnid), task, self.get_user_idstring(task, taskData)))
try:
pid = os.fork()
except OSError, e:
bb.msg.fatal(bb.msg.domain.RunQueue, "fork failed: %d (%s)" % (e.errno, e.strerror))
if pid == 0:
# Bypass finally below
active_builds = 0
# Stop Ctrl+C being sent to children
# signal.signal(signal.SIGINT, signal.SIG_IGN)
# Make the child the process group leader
os.setpgid(0, 0)
sys.stdin = open('/dev/null', 'r')
cooker.configuration.cmd = taskname[3:]
try:
cooker.tryBuild(fn, False)
except bb.build.EventException:
bb.msg.error(bb.msg.domain.Build, "Build of " + fn + " " + taskname + " failed")
sys.exit(1)
except:
bb.msg.error(bb.msg.domain.Build, "Build of " + fn + " " + taskname + " failed")
raise
sys.exit(0)
build_pids[pid] = task
runq_running[task] = 1
active_builds = active_builds + 1
if active_builds < number_tasks:
continue
if active_builds > 0:
result = os.waitpid(-1, 0)
active_builds = active_builds - 1
task = build_pids[result[0]]
if result[1] != 0:
del build_pids[result[0]]
bb.msg.error(bb.msg.domain.RunQueue, "Task %s (%s) failed" % (task, self.get_user_idstring(task, taskData)))
failed_fnids.append(self.runq_fnid[task])
break
task_complete(self, task)
tasks_completed = tasks_completed + 1
del build_pids[result[0]]
continue
break
finally:
try:
while active_builds > 0:
bb.msg.note(1, bb.msg.domain.RunQueue, "Waiting for %s active tasks to finish" % active_builds)
tasknum = 1
for k, v in build_pids.iteritems():
bb.msg.note(1, bb.msg.domain.RunQueue, "%s: %s (%s)" % (tasknum, self.get_user_idstring(v, taskData), k))
tasknum = tasknum + 1
result = os.waitpid(-1, 0)
task = build_pids[result[0]]
if result[1] != 0:
bb.msg.error(bb.msg.domain.RunQueue, "Task %s (%s) failed" % (task, self.get_user_idstring(task, taskData)))
failed_fnids.append(self.runq_fnid[task])
del build_pids[result[0]]
active_builds = active_builds - 1
if len(failed_fnids) > 0:
return failed_fnids
except:
bb.msg.note(1, bb.msg.domain.RunQueue, "Sending SIGINT to remaining %s tasks" % active_builds)
for k, v in build_pids.iteritems():
os.kill(-k, signal.SIGINT)
except:
pass
raise
raise
# Sanity Checks
for task in range(len(self.runq_fnid)):
if self.runq_buildable[task] == 0:
if runq_buildable[task] == 0:
bb.msg.error(bb.msg.domain.RunQueue, "Task %s never buildable!" % task)
if self.runq_running[task] == 0:
if runq_running[task] == 0:
bb.msg.error(bb.msg.domain.RunQueue, "Task %s never ran!" % task)
if self.runq_complete[task] == 0:
if runq_complete[task] == 0:
bb.msg.error(bb.msg.domain.RunQueue, "Task %s never completed!" % task)
bb.msg.note(1, bb.msg.domain.RunQueue, "Tasks Summary: Attempted %d tasks of which %d didn't need to be rerun and %d failed." % (self.stats.completed, self.stats.skipped, self.stats.failed))
bb.msg.note(1, bb.msg.domain.RunQueue, "Tasks Summary: Attempted %d tasks of which %d didn't need to be rerun and %d failed." % (tasks_completed, tasks_skipped, len(failed_fnids)))
return self.failed_fnids
return failed_fnids
def dump_data(self, taskQueue):
"""


@@ -104,11 +104,10 @@ class BitBakeShellCommands:
def _findProvider( self, item ):
self._checkParsed()
# Need to use taskData for this information
preferred = data.getVar( "PREFERRED_PROVIDER_%s" % item, cooker.configuration.data, 1 )
if not preferred: preferred = item
try:
lv, lf, pv, pf = Providers.findBestProvider(preferred, cooker.configuration.data, cooker.status)
lv, lf, pv, pf = Providers.findBestProvider(preferred, cooker.configuration.data, cooker.status, cooker.build_cache_fail)
except KeyError:
if item in cooker.status.providers:
pf = cooker.status.providers[item][0]
@@ -145,7 +144,6 @@ class BitBakeShellCommands:
def build( self, params, cmd = "build" ):
"""Build a providee"""
global last_exception
globexpr = params[0]
self._checkParsed()
names = globfilter( cooker.status.pkg_pn.keys(), globexpr )
@@ -154,6 +152,8 @@ class BitBakeShellCommands:
oldcmd = cooker.configuration.cmd
cooker.configuration.cmd = cmd
cooker.build_cache = []
cooker.build_cache_fail = []
td = taskdata.TaskData(cooker.configuration.abort)
@@ -170,21 +170,24 @@ class BitBakeShellCommands:
td.add_unresolved(cooker.configuration.data, cooker.status)
rq = runqueue.RunQueue(cooker, cooker.configuration.data, cooker.status, td, tasks)
rq.prepare_runqueue()
rq.execute_runqueue()
rq = runqueue.RunQueue()
rq.prepare_runqueue(cooker, cooker.configuration.data, cooker.status, td, tasks)
rq.execute_runqueue(cooker, cooker.configuration.data, cooker.status, td, tasks)
except Providers.NoProvider:
print "ERROR: No Provider"
global last_exception
last_exception = Providers.NoProvider
except runqueue.TaskFailure, fnids:
for fnid in fnids:
print "ERROR: '%s' failed" % td.fn_index[fnid]
global last_exception
last_exception = runqueue.TaskFailure
except build.EventException, e:
print "ERROR: Couldn't build '%s'" % names
global last_exception
last_exception = e
cooker.configuration.cmd = oldcmd
@@ -233,13 +236,14 @@ class BitBakeShellCommands:
def fileBuild( self, params, cmd = "build" ):
"""Parse and build a .bb file"""
global last_exception
name = params[0]
bf = completeFilePath( name )
print "SHELL: Calling '%s' on '%s'" % ( cmd, bf )
oldcmd = cooker.configuration.cmd
cooker.configuration.cmd = cmd
cooker.build_cache = []
cooker.build_cache_fail = []
thisdata = copy.deepcopy( initdata )
# Caution: parse.handle modifies thisdata, hence it would
@@ -262,6 +266,7 @@ class BitBakeShellCommands:
cooker.tryBuildPackage( os.path.abspath( bf ), item, cmd, bbfile_data, True )
except build.EventException, e:
print "ERROR: Couldn't build '%s'" % name
global last_exception
last_exception = e
cooker.configuration.cmd = oldcmd
@@ -532,6 +537,8 @@ SRC_URI = ""
def status( self, params ):
"""<just for testing>"""
print "-" * 78
print "build cache = '%s'" % cooker.build_cache
print "build cache fail = '%s'" % cooker.build_cache_fail
print "building list = '%s'" % cooker.building_list
print "build path = '%s'" % cooker.build_path
print "consider_msgs_cache = '%s'" % cooker.consider_msgs_cache
@@ -550,7 +557,6 @@ SRC_URI = ""
def which( self, params ):
"""Computes the providers for a given providee"""
# Need to use taskData for this information
item = params[0]
self._checkParsed()
@@ -559,7 +565,8 @@ SRC_URI = ""
if not preferred: preferred = item
try:
lv, lf, pv, pf = Providers.findBestProvider(preferred, cooker.configuration.data, cooker.status)
lv, lf, pv, pf = Providers.findBestProvider(preferred, cooker.configuration.data, cooker.status,
cooker.build_cache_fail)
except KeyError:
lv, lf, pv, pf = (None,)*4


@@ -43,7 +43,6 @@ class TaskData:
self.tasks_fnid = []
self.tasks_name = []
self.tasks_tdepends = []
self.tasks_idepends = []
# Cache to speed up task ID lookups
self.tasks_lookup = {}
@@ -109,7 +108,6 @@ class TaskData:
self.tasks_name.append(task)
self.tasks_fnid.append(fnid)
self.tasks_tdepends.append([])
self.tasks_idepends.append([])
listid = len(self.tasks_name) - 1
@@ -136,9 +134,8 @@ class TaskData:
if fnid in self.tasks_fnid:
return
# Work out task dependencies
for task in task_graph.allnodes():
# Work out task dependencies
parentids = []
for dep in task_graph.getparents(task):
parentid = self.gettask_id(fn, dep)
@@ -146,14 +143,6 @@ class TaskData:
taskid = self.gettask_id(fn, task)
self.tasks_tdepends[taskid].extend(parentids)
# Touch all intertask dependencies
if 'depends' in task_deps and task in task_deps['depends']:
ids = []
for dep in task_deps['depends'][task].split(" "):
if dep:
ids.append(str(self.getbuild_id(dep.split(":")[0])) + ":" + dep.split(":")[1])
self.tasks_idepends[taskid].extend(ids)
# Work out build dependencies
if not fnid in self.depids:
dependids = {}
@@ -348,7 +337,7 @@ class TaskData:
return
if not item in dataCache.providers:
bb.msg.note(2, bb.msg.domain.Provider, "No providers of build target %s (for %s)" % (item, self.get_dependees_str(item)))
bb.msg.debug(1, bb.msg.domain.Provider, "No providers of build target %s (for %s)" % (item, self.get_dependees_str(item)))
bb.event.fire(bb.event.NoProvider(item, cfgData))
raise bb.providers.NoProvider(item)
@@ -365,7 +354,7 @@ class TaskData:
eligible.remove(p)
if not eligible:
bb.msg.note(2, bb.msg.domain.Provider, "No providers of build target %s after filtering (for %s)" % (item, self.get_dependees_str(item)))
bb.msg.debug(1, bb.msg.domain.Provider, "No providers of build target %s after filtering (for %s)" % (item, self.get_dependees_str(item)))
bb.event.fire(bb.event.NoProvider(item, cfgData))
raise bb.providers.NoProvider(item)
@@ -448,7 +437,6 @@ class TaskData:
eligible.remove(p)
eligible = [p] + eligible
preferred.append(p)
break
if len(eligible) > 1 and len(preferred) == 0:
if item not in self.consider_msgs_cache:
@@ -504,7 +492,7 @@ class TaskData:
Mark a build target as failed (unbuildable)
Trigger removal of any files that have this as a dependency
"""
bb.msg.note(2, bb.msg.domain.Provider, "Removing failed build target %s" % self.build_names_index[targetid])
bb.msg.debug(1, bb.msg.domain.Provider, "Removing failed build target %s" % self.build_names_index[targetid])
self.failed_deps.append(targetid)
dependees = self.get_dependees(targetid)
for fnid in dependees:


@@ -62,12 +62,10 @@ def vercmp_part(a, b):
return -1
def vercmp(ta, tb):
(ea, va, ra) = ta
(eb, vb, rb) = tb
(va, ra) = ta
(vb, rb) = tb
r = int(ea)-int(eb)
if (r == 0):
r = vercmp_part(va, vb)
r = vercmp_part(va, vb)
if (r == 0):
r = vercmp_part(ra, rb)
return r


@@ -24,26 +24,25 @@ MACHINE ?= "qemuarm"
#MACHINE ?= "nokia770"
DISTRO ?= "poky"
DISTRO = "poky"
# For bleeding edge / experimental / unstable package versions
# DISTRO ?= "poky-bleeding"
# DISTRO = "poky-bleeding"
# IMAGE_FEATURES configuration of the generated images
# (Some of these are automatically added to certain image types)
# "dbg-pkgs" - add -dbg packages for all installed packages
# (adds symbol information for debugging/profiling)
# "dev-pkgs" - add -dev packages for all installed packages
# (useful if you want to develop against libs in the image)
# "tools-sdk" - add development tools (gcc, make, pkgconfig etc.)
# "tools-debug" - add debugging tools (gdb, strace)
# "tools-profile" - add profiling tools (oprofile, exmap, lttng valgrind (x86 only))
# "tools-testapps" - add useful testing tools (ts_print, aplay, arecord etc.)
# "debug-tweaks"      - make an image suitable for development
# e.g. ssh root access has a blank password
# There are other application targets too, see meta/classes/poky-image.bbclass
# and meta/packages/tasks/task-poky.bb for more details.
# "dev-pkgs" - add -dev packages for all installed packages
# (useful if you want to develop against libs in the image)
# "dbg-pkgs" - add -dbg packages for all installed packages
# (adds symbol information for debugging/profiling)
# "apps-core" - core applications
# "apps-pda" - add PDA application suite (contacts, dates, etc.)
# "dev-tools" - add development tools (gcc, make, pkgconfig etc.)
# "dbg-tools" - add debugging tools (gdb, strace, oprofile, etc.)
# "test-tools" - add useful testing tools (ts_print, aplay, arecord etc.)
# "debug-tweaks"  - make an image suitable for development
# e.g. ssh root access has a blank password
IMAGE_FEATURES = "tools-dbg tools-profile tools-testapps debug-tweaks"
IMAGE_FEATURES = "dbg-tools test-tools debug-tweaks"
# A list of packaging systems used in generated images
# The first package type listed will be used for rootfs generation
@@ -67,6 +66,7 @@ TMPDIR = "${OEROOT}/build/tmp"
# Uncomment and set to allow bitbake to execute multiple tasks at once.
# Note, This option is currently experimental - YMMV.
# 'quilt' is also required on the host system
# BB_NUMBER_THREADS = "1"
# Comment this out if you are *not* using provided qemu deb - see README
@@ -96,7 +96,3 @@ BBINCLUDELOGS = "yes"
CVS_TARBALL_STASH = "http://folks.o-hand.com/~richard/poky/sources/"
ENABLE_BINARY_LOCALE_GENERATION = "1"
# A precompiled poky toolchain is available. If installed, uncomment the
# line below to enable (Note this support is still experimental)
#require conf/external_toolchain.conf


@@ -1,312 +0,0 @@
Collected upstream patches: 001 -> 005
Index: bash-3.2/parse.y
===================================================================
--- bash-3.2.orig/parse.y 2006-11-27 20:09:18.000000000 +0100
+++ bash-3.2/parse.y 2006-11-27 20:10:10.000000000 +0100
@@ -1029,6 +1029,7 @@
#define PST_CMDTOKEN 0x1000 /* command token OK - unused */
#define PST_COMPASSIGN 0x2000 /* parsing x=(...) compound assignment */
#define PST_ASSIGNOK 0x4000 /* assignment statement ok in this context */
+#define PST_REGEXP 0x8000 /* parsing an ERE/BRE as a single word */
/* Initial size to allocate for tokens, and the
amount to grow them by. */
@@ -2591,6 +2592,9 @@
return (character);
}
+ if (parser_state & PST_REGEXP)
+ goto tokword;
+
/* Shell meta-characters. */
if MBTEST(shellmeta (character) && ((parser_state & PST_DBLPAREN) == 0))
{
@@ -2698,6 +2702,7 @@
if MBTEST(character == '-' && (last_read_token == LESS_AND || last_read_token == GREATER_AND))
return (character);
+tokword:
/* Okay, if we got this far, we have to read a word. Read one,
and then check it against the known ones. */
result = read_token_word (character);
@@ -2735,7 +2740,7 @@
/* itrace("parse_matched_pair: open = %c close = %c", open, close); */
count = 1;
pass_next_character = backq_backslash = was_dollar = in_comment = 0;
- check_comment = (flags & P_COMMAND) && qc != '\'' && qc != '"' && (flags & P_DQUOTE) == 0;
+ check_comment = (flags & P_COMMAND) && qc != '`' && qc != '\'' && qc != '"' && (flags & P_DQUOTE) == 0;
/* RFLAGS is the set of flags we want to pass to recursive calls. */
rflags = (qc == '"') ? P_DQUOTE : (flags & P_DQUOTE);
@@ -3202,8 +3207,11 @@
if (tok == WORD && test_binop (yylval.word->word))
op = yylval.word;
#if defined (COND_REGEXP)
- else if (tok == WORD && STREQ (yylval.word->word,"=~"))
- op = yylval.word;
+ else if (tok == WORD && STREQ (yylval.word->word, "=~"))
+ {
+ op = yylval.word;
+ parser_state |= PST_REGEXP;
+ }
#endif
else if (tok == '<' || tok == '>')
op = make_word_from_token (tok); /* ( */
@@ -3234,6 +3242,7 @@
/* rhs */
tok = read_token (READ);
+ parser_state &= ~PST_REGEXP;
if (tok == WORD)
{
tright = make_cond_node (COND_TERM, yylval.word, (COND_COM *)NULL, (COND_COM *)NULL);
@@ -3419,9 +3428,34 @@
goto next_character;
}
+#ifdef COND_REGEXP
+ /* When parsing a regexp as a single word inside a conditional command,
+ we need to special-case characters special to both the shell and
+ regular expressions. Right now, that is only '(' and '|'. */ /*)*/
+ if MBTEST((parser_state & PST_REGEXP) && (character == '(' || character == '|')) /*)*/
+ {
+ if (character == '|')
+ goto got_character;
+
+ push_delimiter (dstack, character);
+ ttok = parse_matched_pair (cd, '(', ')', &ttoklen, 0);
+ pop_delimiter (dstack);
+ if (ttok == &matched_pair_error)
+ return -1; /* Bail immediately. */
+ RESIZE_MALLOCED_BUFFER (token, token_index, ttoklen + 2,
+ token_buffer_size, TOKEN_DEFAULT_GROW_SIZE);
+ token[token_index++] = character;
+ strcpy (token + token_index, ttok);
+ token_index += ttoklen;
+ FREE (ttok);
+ dollar_present = all_digit_token = 0;
+ goto next_character;
+ }
+#endif /* COND_REGEXP */
+
#ifdef EXTENDED_GLOB
/* Parse a ksh-style extended pattern matching specification. */
- if (extended_glob && PATTERN_CHAR (character))
+ if MBTEST(extended_glob && PATTERN_CHAR (character))
{
peek_char = shell_getc (1);
if MBTEST(peek_char == '(') /* ) */
Index: bash-3.2/patchlevel.h
===================================================================
--- bash-3.2.orig/patchlevel.h 2006-11-27 20:09:18.000000000 +0100
+++ bash-3.2/patchlevel.h 2006-11-27 20:11:06.000000000 +0100
@@ -25,6 +25,6 @@
regexp `^#define[ ]*PATCHLEVEL', since that's what support/mkversion.sh
looks for to find the patch level (for the sccs version string). */
-#define PATCHLEVEL 0
+#define PATCHLEVEL 5
#endif /* _PATCHLEVEL_H_ */
Index: bash-3.2/po/ru.po
===================================================================
--- bash-3.2.orig/po/ru.po 2006-11-27 20:09:18.000000000 +0100
+++ bash-3.2/po/ru.po 2006-11-27 20:10:00.000000000 +0100
@@ -12,7 +12,7 @@
"Last-Translator: Evgeniy Dushistov <dushistov@mail.ru>\n"
"Language-Team: Russian <ru@li.org>\n"
"MIME-Version: 1.0\n"
-"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Type: text/plain; charset=KOI8-R\n"
"Content-Transfer-Encoding: 8bit\n"
"Plural-Forms: nplurals=3; plural=(n%10==1 && n%100!=11 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2);\n"
Index: bash-3.2/subst.c
===================================================================
--- bash-3.2.orig/subst.c 2006-11-27 20:09:18.000000000 +0100
+++ bash-3.2/subst.c 2006-11-27 20:10:26.000000000 +0100
@@ -5707,6 +5707,11 @@
vtype &= ~VT_STARSUB;
mflags = 0;
+ if (patsub && *patsub == '/')
+ {
+ mflags |= MATCH_GLOBREP;
+ patsub++;
+ }
/* Malloc this because expand_string_if_necessary or one of the expansion
functions in its call chain may free it on a substitution error. */
@@ -5741,13 +5746,12 @@
}
/* ksh93 doesn't allow the match specifier to be a part of the expanded
- pattern. This is an extension. */
+ pattern. This is an extension. Make sure we don't anchor the pattern
+ at the beginning or end of the string if we're doing global replacement,
+ though. */
p = pat;
- if (pat && pat[0] == '/')
- {
- mflags |= MATCH_GLOBREP|MATCH_ANY;
- p++;
- }
+ if (mflags & MATCH_GLOBREP)
+ mflags |= MATCH_ANY;
else if (pat && pat[0] == '#')
{
mflags |= MATCH_BEG;
Index: bash-3.2/tests/new-exp.right
===================================================================
--- bash-3.2.orig/tests/new-exp.right 2006-11-27 20:09:18.000000000 +0100
+++ bash-3.2/tests/new-exp.right 2006-11-27 20:10:29.000000000 +0100
@@ -430,7 +430,7 @@
Case06---1---A B C::---
Case07---3---A:B:C---
Case08---3---A:B:C---
-./new-exp.tests: line 506: /${$(($#-1))}: bad substitution
+./new-exp.tests: line 506: ${$(($#-1))}: bad substitution
argv[1] = <a>
argv[2] = <b>
argv[3] = <c>
Index: bash-3.2/builtins/printf.def
===================================================================
--- bash-3.2.orig/builtins/printf.def 2006-11-27 20:09:18.000000000 +0100
+++ bash-3.2/builtins/printf.def 2006-11-27 20:11:05.000000000 +0100
@@ -49,6 +49,12 @@
# define INT_MIN (-2147483647-1)
#endif
+#if defined (PREFER_STDARG)
+# include <stdarg.h>
+#else
+# include <varargs.h>
+#endif
+
#include <stdio.h>
#include <chartypes.h>
@@ -151,6 +157,10 @@
#define SKIP1 "#'-+ 0"
#define LENMODS "hjlLtz"
+#ifndef HAVE_ASPRINTF
+extern int asprintf __P((char **, const char *, ...)) __attribute__((__format__ (printf, 2, 3)));
+#endif
+
static void printf_erange __P((char *));
static int printstr __P((char *, char *, int, int, int));
static int tescape __P((char *, char *, int *));
Index: bash-3.2/lib/sh/snprintf.c
===================================================================
--- bash-3.2.orig/lib/sh/snprintf.c 2006-11-27 20:09:18.000000000 +0100
+++ bash-3.2/lib/sh/snprintf.c 2006-11-27 20:11:06.000000000 +0100
@@ -471,6 +471,8 @@
10^x ~= r
* log_10(200) = 2;
* log_10(250) = 2;
+ *
+ * NOTE: do not call this with r == 0 -- an infinite loop results.
*/
static int
log_10(r)
@@ -576,8 +578,11 @@
{
integral_part[0] = '0';
integral_part[1] = '\0';
- fraction_part[0] = '0';
- fraction_part[1] = '\0';
+ /* The fractional part has to take the precision into account */
+ for (ch = 0; ch < precision-1; ch++)
+ fraction_part[ch] = '0';
+ fraction_part[ch] = '0';
+ fraction_part[ch+1] = '\0';
if (fract)
*fract = fraction_part;
return integral_part;
@@ -805,6 +810,7 @@
PUT_CHAR(*tmp, p);
tmp++;
}
+
PAD_LEFT(p);
}
@@ -972,11 +978,21 @@
if ((p->flags & PF_THOUSANDS) && grouping && (t = groupnum (tmp)))
tmp = t;
+ if ((*p->pf == 'g' || *p->pf == 'G') && (p->flags & PF_ALTFORM) == 0)
+ {
+ /* smash the trailing zeros unless altform */
+ for (i = strlen(tmp2) - 1; i >= 0 && tmp2[i] == '0'; i--)
+ tmp2[i] = '\0';
+ if (tmp2[0] == '\0')
+ p->precision = 0;
+ }
+
/* calculate the padding. 1 for the dot */
p->width = p->width -
((d > 0. && p->justify == RIGHT) ? 1:0) -
((p->flags & PF_SPACE) ? 1:0) -
- strlen(tmp) - p->precision - 1;
+ strlen(tmp) - p->precision -
+ ((p->precision != 0 || (p->flags & PF_ALTFORM)) ? 1 : 0); /* radix char */
PAD_RIGHT(p);
PUT_PLUS(d, p, 0.);
PUT_SPACE(d, p, 0.);
@@ -991,11 +1007,6 @@
if (p->precision != 0 || (p->flags & PF_ALTFORM))
PUT_CHAR(decpoint, p); /* put the '.' */
- if ((*p->pf == 'g' || *p->pf == 'G') && (p->flags & PF_ALTFORM) == 0)
- /* smash the trailing zeros unless altform */
- for (i = strlen(tmp2) - 1; i >= 0 && tmp2[i] == '0'; i--)
- tmp2[i] = '\0';
-
for (; *tmp2; tmp2++)
PUT_CHAR(*tmp2, p); /* the fraction */
@@ -1011,14 +1022,19 @@
char *tmp, *tmp2;
int j, i;
- if (chkinfnan(p, d, 1) || chkinfnan(p, d, 2))
+ if (d != 0 && (chkinfnan(p, d, 1) || chkinfnan(p, d, 2)))
return; /* already printed nan or inf */
GETLOCALEDATA(decpoint, thoussep, grouping);
DEF_PREC(p);
- j = log_10(d);
- d = d / pow_10(j); /* get the Mantissa */
- d = ROUND(d, p);
+ if (d == 0.)
+ j = 0;
+ else
+ {
+ j = log_10(d);
+ d = d / pow_10(j); /* get the Mantissa */
+ d = ROUND(d, p);
+ }
tmp = dtoa(d, p->precision, &tmp2);
/* 1 for unit, 1 for the '.', 1 for 'e|E',
@@ -1076,6 +1092,7 @@
PUT_CHAR(*tmp, p);
tmp++;
}
+
PAD_LEFT(p);
}
#endif
@@ -1358,7 +1375,7 @@
STAR_ARGS(data);
DEF_PREC(data);
d = GETDOUBLE(data);
- i = log_10(d);
+ i = (d != 0.) ? log_10(d) : -1;
/*
* for '%g|%G' ANSI: use f if exponent
* is in the range or [-4,p] exclusively


@@ -1,28 +0,0 @@
DESCRIPTION = "An sh-compatible command language interpreter."
HOMEPAGE = "http://cnswww.cns.cwru.edu/~chet/bash/bashtop.html"
DEPENDS = "ncurses"
SECTION = "base/shell"
LICENSE = "GPL"
SRC_URI = "${GNU_MIRROR}/bash/bash-${PV}.tar.gz \
file://001-005.patch;patch=1"
inherit autotools gettext
PARALLEL_MAKE = ""
bindir = "/bin"
sbindir = "/sbin"
EXTRA_OECONF = "--with-ncurses"
export CC_FOR_BUILD = "${BUILD_CC}"
do_configure () {
gnu-configize
oe_runconf
}
pkg_postinst () {
grep -q "bin/bash" ${sysconfdir}/shells || echo /bin/bash >> ${sysconfdir}/shells
grep -q "bin/sh" ${sysconfdir}/shells || echo /bin/sh >> ${sysconfdir}/shells
}


@@ -1,23 +0,0 @@
# cdrtools-native OE build file
# Copyright (C) 2004-2006, Advanced Micro Devices, Inc. All Rights Reserved
# Released under the MIT license (see packages/COPYING)
LICENSE="GPL"
DESCRIPTION="A set of tools for CD recording, including cdrecord"
HOMEPAGE="http://cdrecord.berlios.de/old/private/cdrecord.html"
SRC_URI="ftp://ftp.berlios.de/pub/cdrecord/cdrtools-${PV}.tar.bz2"
S="${WORKDIR}/cdrtools-${PV}"
inherit native
STAGE_TEMP="${WORKDIR}/stage_temp"
do_stage() {
install -d ${STAGE_TEMP}
make install INS_BASE=${STAGE_TEMP}
install -d ${STAGING_BINDIR}
install ${STAGE_TEMP}/bin/* ${STAGING_BINDIR}
}


@@ -1,25 +0,0 @@
DESCRIPTION="DBus-enabled dhcp client"
SECTION="net"
LICENSE="GPL"
HOMEPAGE="http://people.redhat.com/jvdias/dhcdbd/"
DEPENDS = "dbus"
RDEPENDS = "dhcp-client"
PR = "r0"
SRC_URI="http://people.redhat.com/dcantrel/dhcdbd/dhcdbd-${PV}.tar.bz2 \
file://no-ext-options.patch;patch=1 \
file://dhcdbd"
do_compile() {
CC=${TARGET_SYS}-gcc DESTDIR=${prefix} make
}
do_install() {
DESTDIR=${D} make install
install -d ${D}/etc/init.d
install -m 0755 ${WORKDIR}/dhcdbd ${D}/etc/init.d/
}
FILES_${PN} += "${sysconfdir} ${datadir}/dbus-1 ${base_sbindir}/*"


@@ -1,20 +0,0 @@
--- /tmp/dbus_service.c 2006-08-24 22:09:14.000000000 +0200
+++ dhcdbd-1.14/dbus_service.c 2006-08-24 22:09:44.228306000 +0200
@@ -1412,7 +1412,7 @@
return ( cs );
give_up:
- dbus_connection_disconnect( connection );
+ dbus_connection_close( connection );
dbus_shutdown();
return ( 0L );
}
@@ -1456,7 +1456,7 @@
cs->roots=0L;
- dbus_connection_disconnect( cs->connection );
+ dbus_connection_close( cs->connection );
dbus_shutdown();
free( cs );
}


@@ -1,28 +0,0 @@
#!/bin/sh
#
# DHCDBD startup script
. /etc/profile
case $1 in
'start')
echo -n "Starting dhcdbd daemon: dhcdbd"
/sbin/dhcdbd --system
echo "."
;;
'stop')
echo -n "Stopping dhcdbd: dhcdbd"
killall `ps |grep /sbin/dhcdbd | grep -v grep | cut "-d " -f2`
echo "."
;;
'restart')
$0 stop
$0 start
;;
*)
echo "Usage: $0 { start | stop | restart }"
;;
esac


@@ -1,26 +0,0 @@
diff -Naur dhcdbd-1.14/Makefile dhcdbd-1.14-mod/Makefile
--- dhcdbd-1.14/Makefile 2006-01-17 22:23:51.000000000 +0100
+++ dhcdbd-1.14-mod/Makefile 2006-08-02 18:02:42.000000000 +0200
@@ -7,8 +7,8 @@
LDFLAGS ?= -g
DESTDIR ?= /
LIBDIR ?= lib
-DBUS_INCLUDES ?= -I/usr/$(LIBDIR)/dbus-1.0/include -I/usr/include/dbus-1.0
-DBUS_LIBS ?= -ldbus-1
+DBUS_INCLUDES ?= `pkg-config dbus-1 --cflags`
+DBUS_LIBS ?= `pkg-config dbus-1 --libs`
OBJS = dbus_service.o dhcdbd.o dhcp_options.o main.o
SRCS = dbus_service.c dhcdbd.c dhcp_options.c main.c
INCS = dbus_service.h dhcdbd.h dhcp_options.h includes.h
diff -Naur dhcdbd-1.14/tests/Makefile dhcdbd-1.14-mod/tests/Makefile
--- dhcdbd-1.14/tests/Makefile 2006-01-17 22:23:51.000000000 +0100
+++ dhcdbd-1.14-mod/tests/Makefile 2006-08-02 18:11:43.000000000 +0200
@@ -2,7 +2,7 @@
LD = ${CC}
CFLAGS ?= -g -Wall
LDFLAGS ?= -g
-DBUS_LIBS ?= -ldbus-1
+DBUS_LIBS ?= `pkg-config dbus-1 --libs`
all: test_dhcp_options test_dhcdbd_state test_subscriber test_subscriber_dbus test_prospective_subscriber


@@ -1,13 +0,0 @@
Index: dhcdbd-2.0/include/dhcdbd.h
===================================================================
--- dhcdbd-2.0.orig/include/dhcdbd.h 2006-10-18 09:38:18.000000000 +0100
+++ dhcdbd-2.0/include/dhcdbd.h 2006-10-18 09:38:45.000000000 +0100
@@ -76,7 +76,7 @@
#endif
#ifndef DHCLIENT_EXTENDED_OPTION_ENVIRONMENT
-#define DHCLIENT_EXTENDED_OPTION_ENVIRONMENT 1
+#define DHCLIENT_EXTENDED_OPTION_ENVIRONMENT 0
#endif
#define DHCDBD_INTERFACE_TEXT "text"

@@ -1,51 +0,0 @@
SECTION = "console/network"
DESCRIPTION = "Internet Software Consortium DHCP package"
HOMEPAGE = "http://www.isc.org/"
LICENSE = "BSD"
PR = "r4"
SRC_URI = "ftp://ftp.isc.org/isc/dhcp/dhcp-3.0-history/dhcp-${PV}.tar.gz \
file://noattrmode.patch;patch=1 \
file://fixincludes.patch;patch=1 \
file://dhcp-3.0.3-dhclient-dbus.patch;patch=1;pnum=0 \
file://init-relay file://default-relay \
file://init-server file://default-server \
file://dhclient.conf file://dhcpd.conf"
do_configure() {
./configure
}
do_compile() {
make RANLIB=${RANLIB} PREDEFINES='-D_PATH_DHCPD_DB=\"/var/lib/dhcp/dhcpd.leases\" \
-D_PATH_DHCLIENT_DB=\"/var/lib/dhcp/dhclient.leases\" \
-D_PATH_DHCLIENT_SCRIPT=\"/sbin/dhclient-script\" \
-D_PATH_DHCPD_CONF=\"/etc/dhcp/dhcpd.conf\" \
-D_PATH_DHCLIENT_CONF=\"/etc/dhcp/dhclient.conf\"'
}
do_install() {
make -e DESTDIR=${D} USRMANDIR=${mandir}/man1 ADMMANDIR=${mandir}/man8 FFMANDIR=${mandir}/man5 LIBMANDIR=${mandir}/man3 LIBDIR=${libdir} INCDIR=${includedir} install
install -d ${D}${sysconfdir}/init.d
install -d ${D}${sysconfdir}/default
install -d ${D}${sysconfdir}/dhcp
install -m 0755 ${WORKDIR}/init-relay ${D}${sysconfdir}/init.d/dhcp-relay
install -m 0644 ${WORKDIR}/default-relay ${D}${sysconfdir}/default/dhcp-relay
install -m 0755 ${WORKDIR}/init-server ${D}${sysconfdir}/init.d/dhcp-server
install -m 0644 ${WORKDIR}/default-server ${D}${sysconfdir}/default/dhcp-server
install -m 0644 ${WORKDIR}/dhclient.conf ${D}${sysconfdir}/dhcp/dhclient.conf
install -m 0644 ${WORKDIR}/dhcpd.conf ${D}${sysconfdir}/dhcp/dhcpd.conf
}
PACKAGES += "dhcp-server dhcp-client dhcp-relay dhcp-omshell"
FILES_${PN} = ""
FILES_dhcp-server = "${sbindir}/dhcpd ${sysconfdir}/init.d/dhcp-server ${sysconfdir}/default/dhcp-server ${sysconfdir}/dhcp/dhcpd.conf"
FILES_dhcp-relay = "${sbindir}/dhcrelay ${sysconfdir}/init.d/dhcp-relay ${sysconfdir}/default/dhcp-relay"
FILES_dhcp-client = "${base_sbindir}/dhclient ${base_sbindir}/dhclient-script ${sysconfdir}/dhcp/dhclient.conf"
RDEPENDS_dhcp-client = "bash"
FILES_dhcp-omshell = "${bindir}/omshell"
CONFFILES_dhcp-server_nylon = "/etc/dhcp/dhcpd.conf"
CONFFILES_dhcp-relay_nylon = "/etc/default/dhcp-relay"
CONFFILES_dhcp-client_nylon = "/etc/dhcp/dhclient.conf"

@@ -1,12 +0,0 @@
# Defaults for dhcp-relay initscript
# sourced by /etc/init.d/dhcp-relay
# What servers should the DHCP relay forward requests to?
# e.g: SERVERS="192.168.0.1"
SERVERS=""
# On what interfaces should the DHCP relay (dhrelay) serve DHCP requests?
INTERFACES=""
# Additional options that are passed to the DHCP relay daemon?
OPTIONS=""

@@ -1,7 +0,0 @@
# Defaults for dhcp initscript
# sourced by /etc/init.d/dhcp-server
# installed at /etc/default/dhcp-server by the maintainer scripts
# On what interfaces should the DHCP server (dhcpd) serve DHCP requests?
# Separate multiple interfaces with spaces, e.g. "eth0 eth1".
INTERFACES=""

@@ -1,50 +0,0 @@
# Configuration file for /sbin/dhclient, which is included in Debian's
# dhcp3-client package.
#
# This is a sample configuration file for dhclient. See dhclient.conf's
# man page for more information about the syntax of this file
# and a more comprehensive list of the parameters understood by
# dhclient.
#
# Normally, if the DHCP server provides reasonable information and does
# not leave anything out (like the domain name, for example), then
# few changes must be made to this file, if any.
#
#send host-name "andare.fugue.com";
#send dhcp-client-identifier 1:0:a0:24:ab:fb:9c;
#send dhcp-lease-time 3600;
#supersede domain-name "fugue.com home.vix.com";
#prepend domain-name-servers 127.0.0.1;
request subnet-mask, broadcast-address, time-offset, routers,
domain-name, domain-name-servers, host-name,
netbios-name-servers, netbios-scope;
#require subnet-mask, domain-name-servers;
#timeout 60;
#retry 60;
#reboot 10;
#select-timeout 5;
#initial-interval 2;
#script "/etc/dhcp3/dhclient-script";
#media "-link0 -link1 -link2", "link0 link1";
#reject 192.33.137.209;
#alias {
# interface "eth0";
# fixed-address 192.5.5.213;
# option subnet-mask 255.255.255.255;
#}
#lease {
# interface "eth0";
# fixed-address 192.33.137.200;
# medium "link0 link1";
# option host-name "andare.swiftmedia.com";
# option subnet-mask 255.255.255.0;
# option broadcast-address 192.33.137.255;
# option routers 192.33.137.250;
# option domain-name-servers 127.0.0.1;
# renew 2 2000/1/12 00:00:01;
# rebind 2 2000/1/12 00:00:01;
# expire 2 2000/1/12 00:00:01;
#}

@@ -1,84 +0,0 @@
--- client/scripts/bsdos
+++ client/scripts/bsdos
@@ -47,6 +47,11 @@
. /etc/dhcp/dhclient-exit-hooks
fi
# probably should do something with exit status of the local script
+ if [ x$dhc_dbus != x -a $exit_status -eq 0 ]; then
+ dbus-send --system --dest=com.redhat.dhcp \
+ --type=method_call /com/redhat/dhcp/$interface com.redhat.dhcp.set \
+ 'string:'"`env | grep -Ev '^(PATH|SHLVL|_|PWD|dhc_dbus)\='`"
+ fi
exit $exit_status
}
--- client/scripts/freebsd
+++ client/scripts/freebsd
@@ -57,6 +57,11 @@
. /etc/dhcp/dhclient-exit-hooks
fi
# probably should do something with exit status of the local script
+ if [ x$dhc_dbus != x -a $exit_status -eq 0 ]; then
+ dbus-send --system --dest=com.redhat.dhcp \
+ --type=method_call /com/redhat/dhcp/$interface com.redhat.dhcp.set \
+ 'string:'"`env | grep -Ev '^(PATH|SHLVL|_|PWD|dhc_dbus)\='`"
+ fi
exit $exit_status
}
--- client/scripts/linux
+++ client/scripts/linux
@@ -69,6 +69,11 @@
. /etc/dhcp/dhclient-exit-hooks
fi
# probably should do something with exit status of the local script
+ if [ x$dhc_dbus != x -a $exit_status -eq 0 ]; then
+ dbus-send --system --dest=com.redhat.dhcp \
+ --type=method_call /com/redhat/dhcp/$interface com.redhat.dhcp.set \
+ 'string:'"`env | grep -Ev '^(PATH|SHLVL|_|PWD|dhc_dbus)\='`"
+ fi
exit $exit_status
}
--- client/scripts/netbsd
+++ client/scripts/netbsd
@@ -47,6 +47,11 @@
. /etc/dhcp/dhclient-exit-hooks
fi
# probably should do something with exit status of the local script
+ if [ x$dhc_dbus != x -a $exit_status -eq 0 ]; then
+ dbus-send --system --dest=com.redhat.dhcp \
+ --type=method_call /com/redhat/dhcp/$interface com.redhat.dhcp.set \
+ 'string:'"`env | grep -Ev '^(PATH|SHLVL|_|PWD|dhc_dbus)\='`"
+ fi
exit $exit_status
}
--- client/scripts/openbsd
+++ client/scripts/openbsd
@@ -47,6 +47,11 @@
. /etc/dhcp/dhclient-exit-hooks
fi
# probably should do something with exit status of the local script
+ if [ x$dhc_dbus != x -a $exit_status -eq 0 ]; then
+ dbus-send --system --dest=com.redhat.dhcp \
+ --type=method_call /com/redhat/dhcp/$interface com.redhat.dhcp.set \
+ 'string:'"`env | grep -Ev '^(PATH|SHLVL|_|PWD|dhc_dbus)\='`"
+ fi
exit $exit_status
}
--- client/scripts/solaris
+++ client/scripts/solaris
@@ -47,6 +47,11 @@
. /etc/dhcp/dhclient-exit-hooks
fi
# probably should do something with exit status of the local script
+ if [ x$dhc_dbus != x -a $exit_status -eq 0 ]; then
+ dbus-send --system --dest=com.redhat.dhcp \
+ --type=method_call /com/redhat/dhcp/$interface com.redhat.dhcp.set \
+ 'string:'"`env | grep -Ev '^(PATH|SHLVL|_|PWD|dhc_dbus)\='`"
+ fi
exit $exit_status
}

@@ -1,108 +0,0 @@
#
# Sample configuration file for ISC dhcpd for Debian
#
# $Id: dhcpd.conf,v 1.1.1.1 2002/05/21 00:07:44 peloy Exp $
#
# The ddns-updates-style parameter controls whether or not the server will
# attempt to do a DNS update when a lease is confirmed. We default to the
# behavior of the version 2 packages ('none', since DHCP v2 didn't
# have support for DDNS.)
ddns-update-style none;
# option definitions common to all supported networks...
option domain-name "example.org";
option domain-name-servers ns1.example.org, ns2.example.org;
default-lease-time 600;
max-lease-time 7200;
# If this DHCP server is the official DHCP server for the local
# network, the authoritative directive should be uncommented.
#authoritative;
# Use this to send dhcp log messages to a different log file (you also
# have to hack syslog.conf to complete the redirection).
log-facility local7;
# No service will be given on this subnet, but declaring it helps the
# DHCP server to understand the network topology.
#subnet 10.152.187.0 netmask 255.255.255.0 {
#}
# This is a very basic subnet declaration.
#subnet 10.254.239.0 netmask 255.255.255.224 {
# range 10.254.239.10 10.254.239.20;
# option routers rtr-239-0-1.example.org, rtr-239-0-2.example.org;
#}
# This declaration allows BOOTP clients to get dynamic addresses,
# which we don't really recommend.
#subnet 10.254.239.32 netmask 255.255.255.224 {
# range dynamic-bootp 10.254.239.40 10.254.239.60;
# option broadcast-address 10.254.239.31;
# option routers rtr-239-32-1.example.org;
#}
# A slightly different configuration for an internal subnet.
#subnet 10.5.5.0 netmask 255.255.255.224 {
# range 10.5.5.26 10.5.5.30;
# option domain-name-servers ns1.internal.example.org;
# option domain-name "internal.example.org";
# option routers 10.5.5.1;
# option broadcast-address 10.5.5.31;
# default-lease-time 600;
# max-lease-time 7200;
#}
# Hosts which require special configuration options can be listed in
# host statements. If no address is specified, the address will be
# allocated dynamically (if possible), but the host-specific information
# will still come from the host declaration.
#host passacaglia {
# hardware ethernet 0:0:c0:5d:bd:95;
# filename "vmunix.passacaglia";
# server-name "toccata.fugue.com";
#}
# Fixed IP addresses can also be specified for hosts. These addresses
# should not also be listed as being available for dynamic assignment.
# Hosts for which fixed IP addresses have been specified can boot using
# BOOTP or DHCP. Hosts for which no fixed address is specified can only
# be booted with DHCP, unless there is an address range on the subnet
# to which a BOOTP client is connected which has the dynamic-bootp flag
# set.
#host fantasia {
# hardware ethernet 08:00:07:26:c0:a5;
# fixed-address fantasia.fugue.com;
#}
# You can declare a class of clients and then do address allocation
# based on that. The example below shows a case where all clients
# in a certain class get addresses on the 10.17.224/24 subnet, and all
# other clients get addresses on the 10.0.29/24 subnet.
#class "foo" {
# match if substring (option vendor-class-identifier, 0, 4) = "SUNW";
#}
#shared-network 224-29 {
# subnet 10.17.224.0 netmask 255.255.255.0 {
# option routers rtr-224.example.org;
# }
# subnet 10.0.29.0 netmask 255.255.255.0 {
# option routers rtr-29.example.org;
# }
# pool {
# allow members of "foo";
# range 10.17.224.10 10.17.224.250;
# }
# pool {
# deny members of "foo";
# range 10.0.29.10 10.0.29.230;
# }
#}

View File

@@ -1,10 +0,0 @@
--- dhcp-3.0.2/common/tr.c~compile 2005-10-13 14:23:37.000000000 +0200
+++ dhcp-3.0.2/common/tr.c 2005-10-13 14:23:45.000000000 +0200
@@ -39,6 +39,7 @@
#include "includes/netinet/udp.h"
#include "includes/netinet/if_ether.h"
#include "netinet/if_tr.h"
+#include <asm/types.h>
#include <sys/time.h>
/*

@@ -1,44 +0,0 @@
#!/bin/sh
#
# $Id: dhcp3-relay,v 1.1 2004/04/16 15:41:08 ml Exp $
#
# It is not safe to start if we don't have a default configuration...
if [ ! -f /etc/default/dhcp-relay ]; then
echo "/etc/default/dhcp-relay does not exist! - Aborting..."
echo "create this file to fix the problem."
exit 1
fi
# Read init script configuration (interfaces the daemon should listen on
# and the DHCP server we should forward requests to.)
. /etc/default/dhcp-relay
# Build command line for interfaces (will be passed to dhrelay below.)
IFCMD=""
if test "$INTERFACES" != ""; then
for I in $INTERFACES; do
IFCMD=${IFCMD}"-i "${I}" "
done
fi
DHCRELAYPID=/var/run/dhcrelay.pid
case "$1" in
start)
start-stop-daemon -S -x /usr/sbin/dhcrelay -- -q $OPTIONS $IFCMD $SERVERS
;;
stop)
start-stop-daemon -K -x /usr/sbin/dhcrelay
;;
restart | force-reload)
$0 stop
sleep 2
$0 start
;;
*)
echo "Usage: /etc/init.d/dhcp-relay {start|stop|restart|force-reload}"
exit 1
esac
exit 0

@@ -1,44 +0,0 @@
#!/bin/sh
#
# $Id: dhcp3-server.init.d,v 1.4 2003/07/13 19:12:41 mdz Exp $
#
test -f /usr/sbin/dhcpd || exit 0
# It is not safe to start if we don't have a default configuration...
if [ ! -f /etc/default/dhcp-server ]; then
echo "/etc/default/dhcp-server does not exist! - Aborting..."
exit 0
fi
# Read init script configuration (so far only interfaces the daemon
# should listen on.)
. /etc/default/dhcp-server
case "$1" in
start)
echo -n "Starting DHCP server: "
test -d /var/lib/dhcp/ || mkdir -p /var/lib/dhcp/
test -f /var/lib/dhcp/dhcpd.leases || touch /var/lib/dhcp/dhcpd.leases
start-stop-daemon -S -x /usr/sbin/dhcpd -- -q $INTERFACES
echo "."
;;
stop)
echo -n "Stopping DHCP server: dhcpd3"
start-stop-daemon -K -x /usr/sbin/dhcpd
echo "."
;;
restart | force-reload)
$0 stop
sleep 2
$0 start
if [ "$?" != "0" ]; then
exit 1
fi
;;
*)
echo "Usage: /etc/init.d/dhcp-server {start|stop|restart|force-reload}"
exit 1
esac
exit 0

@@ -1,20 +0,0 @@
#
# Patch managed by http://www.holgerschurig.de/patcher.html
#
--- dhcp-3.0.1/includes/dhcpd.h~compile
+++ dhcp-3.0.1/includes/dhcpd.h
@@ -306,9 +306,9 @@
# define EPHEMERAL_FLAGS (MS_NULL_TERMINATION | \
UNICAST_BROADCAST_HACK)
- binding_state_t __attribute__ ((mode (__byte__))) binding_state;
- binding_state_t __attribute__ ((mode (__byte__))) next_binding_state;
- binding_state_t __attribute__ ((mode (__byte__))) desired_binding_state;
+ binding_state_t binding_state;
+ binding_state_t next_binding_state;
+ binding_state_t desired_binding_state;
struct lease_state *state;

@@ -1,27 +0,0 @@
# dosfstools-native OE build file
# Copyright (C) 2004-2006, Advanced Micro Devices, Inc. All Rights Reserved
# Released under the MIT license (see packages/COPYING)
require dosfstools_${PV}.bb
FILESDIR = "${@os.path.dirname(bb.data.getVar('FILE',d,1))}/dosfstools-${PV}"
S="${WORKDIR}/dosfstools-${PV}"
PR="r4"
SRC_URI = "ftp://ftp.uni-erlangen.de/pub/Linux/LOCAL/dosfstools/dosfstools-${PV}.src.tar.gz \
file://mkdosfs-bootcode.patch;patch=1 \
file://mkdosfs-dir.patch;patch=1 \
file://alignment_hack.patch;patch=1 \
file://dosfstools-2.10-kernel-2.6.patch;patch=1 \
file://msdos_fat12_undefined.patch;patch=1 \
file://dosfstools-msdos_fs-types.patch;patch=1 \
file://include-linux-types.patch;patch=1 \
file://2.6.20-syscall.patch;patch=1"
inherit native
do_stage() {
install -m 755 ${S}/mkdosfs/mkdosfs ${STAGING_BINDIR}/mkdosfs
install -m 755 ${S}/dosfsck/dosfsck ${STAGING_BINDIR}/dosfsck
}

@@ -1,22 +0,0 @@
# dosfstools OE build file
# Copyright (C) 2004-2006, Advanced Micro Devices, Inc. All Rights Reserved
# Released under the MIT license (see packages/COPYING)
DESCRIPTION = "DOS FAT Filesystem Utilities"
SECTION = "base"
PRIORITY = "optional"
LICENSE = "GPL"
PR = "r2"
SRC_URI = "ftp://ftp.uni-erlangen.de/pub/Linux/LOCAL/dosfstools/dosfstools-${PV}.src.tar.gz \
file://alignment_hack.patch;patch=1 \
file://dosfstools-2.10-kernel-2.6.patch;patch=1 \
file://msdos_fat12_undefined.patch;patch=1 \
file://include-linux-types.patch;patch=1"
do_install () {
oe_runmake "PREFIX=${D}" "SBINDIR=${D}${sbindir}" \
"MANDIR=${D}${mandir}/man8" install
}

@@ -1,21 +0,0 @@
# dosfstools OE build file
# Copyright (C) 2004-2006, Advanced Micro Devices, Inc. All Rights Reserved
# Released under the MIT license (see packages/COPYING)
DESCRIPTION = "DOS FAT Filesystem Utilities"
SECTION = "base"
PRIORITY = "optional"
LICENSE = "GPL"
PR = "r0"
SRC_URI = "ftp://ftp.uni-erlangen.de/pub/Linux/LOCAL/dosfstools/dosfstools-${PV}.src.tar.gz \
file://alignment_hack.patch;patch=1 \
file://msdos_fat12_undefined.patch;patch=1 \
file://include-linux-types.patch;patch=1"
do_install () {
oe_runmake "PREFIX=${D}" "SBINDIR=${D}${sbindir}" \
"MANDIR=${D}${mandir}/man8" install
}

@@ -1,65 +0,0 @@
Index: dosfstools-2.10/dosfsck/io.c
===================================================================
--- dosfstools-2.10.orig/dosfsck/io.c 2007-06-07 16:15:52.000000000 +0200
+++ dosfstools-2.10/dosfsck/io.c 2007-06-07 16:16:06.000000000 +0200
@@ -42,28 +42,11 @@
/* Use the _llseek system call directly, because there (once?) was a bug in
* the glibc implementation of it. */
#include <linux/unistd.h>
-#if defined __alpha || defined __ia64__ || defined __s390x__ || defined __x86_64__ || defined __ppc64__
/* On alpha, the syscall is simply lseek, because it's a 64 bit system. */
static loff_t llseek( int fd, loff_t offset, int whence )
{
return lseek(fd, offset, whence);
}
-#else
-# ifndef __NR__llseek
-# error _llseek system call not present
-# endif
-static _syscall5( int, _llseek, uint, fd, ulong, hi, ulong, lo,
- loff_t *, res, uint, wh );
-
-static loff_t llseek( int fd, loff_t offset, int whence )
-{
- loff_t actual;
-
- if (_llseek(fd, offset>>32, offset&0xffffffff, &actual, whence) != 0)
- return (loff_t)-1;
- return actual;
-}
-#endif
void fs_open(char *path,int rw)
Index: dosfstools-2.10/mkdosfs/mkdosfs.c
===================================================================
--- dosfstools-2.10.orig/mkdosfs/mkdosfs.c 2007-06-07 16:15:11.000000000 +0200
+++ dosfstools-2.10/mkdosfs/mkdosfs.c 2007-06-07 16:15:30.000000000 +0200
@@ -116,27 +116,11 @@
/* Use the _llseek system call directly, because there (once?) was a bug in
* the glibc implementation of it. */
#include <linux/unistd.h>
-#if defined __alpha || defined __ia64__ || defined __s390x__ || defined __x86_64__ || defined __ppc64__
/* On alpha, the syscall is simply lseek, because it's a 64 bit system. */
static loff_t llseek( int fd, loff_t offset, int whence )
{
return lseek(fd, offset, whence);
}
-#else
-# ifndef __NR__llseek
-# error _llseek system call not present
-# endif
-static _syscall5( int, _llseek, uint, fd, ulong, hi, ulong, lo,
- loff_t *, res, uint, wh );
-static loff_t llseek( int fd, loff_t offset, int whence )
-{
- loff_t actual;
-
- if (_llseek(fd, offset>>32, offset&0xffffffff, &actual, whence) != 0)
- return (loff_t)-1;
- return actual;
-}
-#endif
#define ROUND_UP(value, divisor) (value + (divisor - (value % divisor))) / divisor

@@ -1,34 +0,0 @@
The problem is that unsigned char[2] is
guranteed to be 8Bit aligned on arm
but unsigned short is/needs to be 16bit aligned
the union { unsigned short; unsigned char[2] } trick
didn't work so no we use the alpha hack.
memcpy into an 16bit aligned
-zecke
--- dosfstools/dosfsck/boot.c.orig 2003-05-15 19:32:23.000000000 +0200
+++ dosfstools/dosfsck/boot.c 2003-06-13 17:44:25.000000000 +0200
@@ -36,17 +36,15 @@
{ 0xff, "5.25\" 320k floppy 2s/40tr/8sec" },
};
-#if defined __alpha || defined __ia64__ || defined __s390x__ || defined __x86_64__ || defined __ppc64__
+
/* Unaligned fields must first be copied byte-wise */
#define GET_UNALIGNED_W(f) \
({ \
unsigned short __v; \
memcpy( &__v, &f, sizeof(__v) ); \
- CF_LE_W( *(unsigned short *)&f ); \
+ CF_LE_W( *(unsigned short *)&__v ); \
})
-#else
-#define GET_UNALIGNED_W(f) CF_LE_W( *(unsigned short *)&f )
-#endif
+
static char *get_media_descr( unsigned char media )

@@ -1,74 +0,0 @@
Submitted By: Jim Gifford (jim at linuxfromscratch dot org)
Date: 2004-02-09
Initial Package Version: 2.6
Origin: Jim Gifford
Upstream Status: Accepted
Description: Fixes Compile Issues with the 2.6 Kernel
--- dosfstools-2.10/dosfsck/common.h.orig 2004-02-09 18:37:59.056737458 +0000
+++ dosfstools-2.10/dosfsck/common.h 2004-02-09 18:38:18.333392952 +0000
@@ -2,6 +2,13 @@
/* Written 1993 by Werner Almesberger */
+#include <linux/version.h>
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 0)
+ #define __KERNEL__
+ #include <asm/types.h>
+ #undef __KERNEL__
+ #define MSDOS_FAT12 4084 /* maximum number of clusters in a 12 bit FAT */
+#endif
#ifndef _COMMON_H
#define _COMMON_H
--- dosfstools-2.10/dosfsck/file.c.orig 2004-02-09 18:40:52.016728845 +0000
+++ dosfstools-2.10/dosfsck/file.c 2004-02-09 18:40:03.665117865 +0000
@@ -15,6 +15,14 @@
#define _LINUX_STAT_H /* hack to avoid inclusion of <linux/stat.h> */
#define _LINUX_STRING_H_ /* hack to avoid inclusion of <linux/string.h>*/
#define _LINUX_FS_H /* hack to avoid inclusion of <linux/fs.h> */
+
+#include <linux/version.h>
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 0)
+ #define __KERNEL__
+ #include <asm/types.h>
+ #undef __KERNEL__
+#endif
+
#include <linux/msdos_fs.h>
#include "common.h"
--- dosfstools-2.10/dosfsck/dosfsck.h.orig 2004-02-09 18:57:11.022870974 +0000
+++ dosfstools-2.10/dosfsck/dosfsck.h 2004-02-09 18:56:20.628614393 +0000
@@ -13,6 +13,15 @@
#define _LINUX_STAT_H /* hack to avoid inclusion of <linux/stat.h> */
#define _LINUX_STRING_H_ /* hack to avoid inclusion of <linux/string.h>*/
#define _LINUX_FS_H /* hack to avoid inclusion of <linux/fs.h> */
+
+#include <linux/version.h>
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 0)
+ #define __KERNEL__
+ #include <asm/types.h>
+ #include <asm/byteorder.h>
+ #undef __KERNEL__
+#endif
+
#include <linux/msdos_fs.h>
/* 2.1 kernels use le16_to_cpu() type functions for CF_LE_W & Co., but don't
--- dosfstools-2.10/mkdosfs/mkdosfs.c.orig 2004-02-09 18:31:41.997157413 +0000
+++ dosfstools-2.10/mkdosfs/mkdosfs.c 2004-02-09 18:34:07.311945252 +0000
@@ -66,6 +66,13 @@
#include <time.h>
#include <errno.h>
+#include <linux/version.h>
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 0)
+ #define __KERNEL__
+ #include <asm/types.h>
+ #undef __KERNEL__
+#endif
+
#if __BYTE_ORDER == __BIG_ENDIAN
#include <asm/byteorder.h>

@@ -1,30 +0,0 @@
--- dosfstools-2.10/dosfsck/dosfsck.h.org 2006-02-21 08:36:14.000000000 -0700
+++ dosfstools-2.10/dosfsck/dosfsck.h 2006-02-21 08:40:12.000000000 -0700
@@ -22,6 +22,14 @@
#undef __KERNEL__
#endif
+#ifndef __s8
+#include <asm/types.h>
+#endif
+
+#ifndef __ASM_STUB_BYTEORDER_H__
+#include <asm/byteorder.h>
+#endif
+
#include <linux/msdos_fs.h>
/* 2.1 kernels use le16_to_cpu() type functions for CF_LE_W & Co., but don't
--- dosfstools-2.10/dosfsck/file.c.org 2006-02-21 08:37:36.000000000 -0700
+++ dosfstools-2.10/dosfsck/file.c 2006-02-21 08:37:47.000000000 -0700
@@ -23,6 +23,10 @@
#undef __KERNEL__
#endif
+#ifndef __s8
+#include <asm/types.h>
+#endif
+
#include <linux/msdos_fs.h>
#include "common.h"

@@ -1,17 +0,0 @@
mkdsofs is using types of the style __u8, which it gets with some
versions of libc headers via linux/hdreg.h including asm/types.h.
Newer version of fedora (at least) have a hdreg.h whichdoes not
include asm/types.h. To work around this patch mkdosfs.c to explicity
include linux/types.h which will in turn pull in asm/types.h which
defines these variables.
--- dosfstools-2.10/mkdosfs/mkdosfs.c~ 2006-07-12 18:46:21.000000000 +1000
+++ dosfstools-2.10/mkdosfs/mkdosfs.c 2006-07-12 18:46:21.000000000 +1000
@@ -60,6 +60,7 @@
#include "../version.h"
#include <fcntl.h>
+#include <linux/types.h>
#include <linux/hdreg.h>
#include <linux/fs.h>
#include <linux/fd.h>

@@ -1,240 +0,0 @@
diff -urN dosfstools-2.10.orig/mkdosfs/ChangeLog dosfstools-2.10/mkdosfs/ChangeLog
--- dosfstools-2.10.orig/mkdosfs/ChangeLog 1997-06-18 03:09:38.000000000 -0700
+++ dosfstools-2.10/mkdosfs/ChangeLog 2004-08-02 20:57:57.734939816 -0700
@@ -1,3 +1,14 @@
+19th June 2003 Sam Bingner (sam@bingner.com)
+
+ Added option to read in bootcode from a file so that if you have
+ for example Windows 2000 boot code, you can have it write that
+ as the bootcode. This is a dump of the behinning of a partition
+ generally 512 bytes, but can be up to reserved sectors*512 bytes.
+ Also writes 0x80 as the BIOS drive number if we are formatting a
+ hard drive, and sets the number of hidden sectors to be the
+ number of sectors in one track. These were required so that DOS
+ could boot using the bootcode.
+
28th January 1995 H. Peter Anvin (hpa@yggdrasil.com)
Better algorithm to select cluster sizes on large filesystems.
diff -urN dosfstools-2.10.orig/mkdosfs/mkdosfs.8 dosfstools-2.10/mkdosfs/mkdosfs.8
--- dosfstools-2.10.orig/mkdosfs/mkdosfs.8 2003-05-15 11:28:28.000000000 -0700
+++ dosfstools-2.10/mkdosfs/mkdosfs.8 2004-08-02 20:57:57.735939664 -0700
@@ -40,6 +40,10 @@
.I message-file
]
[
+.B \-B
+.I bootcode-file
+]
+[
.B \-n
.I volume-name
]
@@ -155,6 +159,18 @@
carriage return-line feed combinations, and tabs have been expanded.
If the filename is a hyphen (-), the text is taken from standard input.
.TP
+.BI \-B " bootcode-file"
+Uses boot machine code from file "file". On any thing other than FAT32,
+this only writes the first 3 bytes, and 480 bytes from offset 3Eh. On
+FAT32, this writes the first 3 bytes, 420 bytes from offset 5Ah to both
+primary and backup boot sectors. Also writes all other reserved sectors
+excluding the sectors following boot sectors (usually sector 2 and 7).
+Does not require that the input file be as large as reserved_sectors*512.
+To make a FAT32 partition bootable, you will need at least the first
+13 sectors (6656 bytes). You can also specify a partition as the argument
+to clone the boot code from that partition.
+i.e mkdosfs -B /dev/sda1 /dev/sda1
+.TP
.BI \-n " volume-name"
Sets the volume name (label) of the filesystem. The volume name can
be up to 11 characters long. The default is no label.
@@ -188,8 +204,9 @@
simply will not support it ;)
.SH AUTHOR
Dave Hudson - <dave@humbug.demon.co.uk>; modified by Peter Anvin
-<hpa@yggdrasil.com>. Fixes and additions by Roman Hodek
-<Roman.Hodek@informatik.uni-erlangen.de> for Debian/GNU Linux.
+<hpa@yggdrasil.com> and Sam Bingner <sam@bingner.com>. Fixes and
+additions by Roman Hodek <Roman.Hodek@informatik.uni-erlangen.de>
+for Debian/GNU Linux.
.SH ACKNOWLEDGEMENTS
.B mkdosfs
is based on code from
diff -urN dosfstools-2.10.orig/mkdosfs/mkdosfs.c dosfstools-2.10/mkdosfs/mkdosfs.c
--- dosfstools-2.10.orig/mkdosfs/mkdosfs.c 2003-06-14 13:07:08.000000000 -0700
+++ dosfstools-2.10/mkdosfs/mkdosfs.c 2004-08-02 20:57:57.736939512 -0700
@@ -24,6 +24,12 @@
- New options -A, -S, -C
- Support for filesystems > 2GB
- FAT32 support
+
+ Fixes/additions June 2003 by Sam Bingner
+ <sam@bingner.com>:
+ - Add -B option to read in bootcode from a file
+ - Write BIOS drive number so that FS can properly boot
+ - Set number of hidden sectors before boot code to be one track
Copying: Copyright 1993, 1994 David Hudson (dave@humbug.demon.co.uk)
@@ -167,6 +173,8 @@
#define FAT_BAD 0x0ffffff7
#define MSDOS_EXT_SIGN 0x29 /* extended boot sector signature */
+#define HD_DRIVE_NUMBER 0x80 /* Boot off first hard drive */
+#define FD_DRIVE_NUMBER 0x00 /* Boot off first floppy drive */
#define MSDOS_FAT12_SIGN "FAT12 " /* FAT12 filesystem signature */
#define MSDOS_FAT16_SIGN "FAT16 " /* FAT16 filesystem signature */
#define MSDOS_FAT32_SIGN "FAT32 " /* FAT32 filesystem signature */
@@ -188,6 +196,8 @@
#define BOOTCODE_SIZE 448
#define BOOTCODE_FAT32_SIZE 420
+#define MAX_RESERVED 0xFFFF
+
/* __attribute__ ((packed)) is used on all structures to make gcc ignore any
* alignments */
@@ -215,7 +225,7 @@
__u16 fat_length; /* sectors/FAT */
__u16 secs_track; /* sectors per track */
__u16 heads; /* number of heads */
- __u32 hidden; /* hidden sectors (unused) */
+ __u32 hidden; /* hidden sectors (one track) */
__u32 total_sect; /* number of sectors (if sectors == 0) */
union {
struct {
@@ -298,6 +308,8 @@
/* Global variables - the root of all evil :-) - see these and weep! */
+static char *template_boot_code; /* Variable to store a full template boot sector in */
+static int use_template = 0;
static char *program_name = "mkdosfs"; /* Name of the program */
static char *device_name = NULL; /* Name of the device on which to create the filesystem */
static int atari_format = 0; /* Use Atari variation of MS-DOS FS format */
@@ -842,6 +854,12 @@
vi->volume_id[2] = (unsigned char) ((volume_id & 0x00ff0000) >> 16);
vi->volume_id[3] = (unsigned char) (volume_id >> 24);
}
+ if (bs.media == 0xf8) {
+ vi->drive_number = HD_DRIVE_NUMBER; /* Set bios drive number to 80h */
+ }
+ else {
+ vi->drive_number = FD_DRIVE_NUMBER; /* Set bios drive number to 00h */
+ }
if (!atari_format) {
memcpy(vi->volume_label, volume_name, 11);
@@ -886,7 +904,7 @@
printf( "Using %d reserved sectors\n", reserved_sectors );
bs.fats = (char) nr_fats;
if (!atari_format || size_fat == 32)
- bs.hidden = CT_LE_L(0);
+ bs.hidden = bs.secs_track;
else
/* In Atari format, hidden is a 16 bit field */
memset( &bs.hidden, 0, 2 );
@@ -1358,6 +1376,32 @@
* dir area on FAT12/16, and the first cluster on FAT32. */
writebuf( (char *) root_dir, size_root_dir, "root directory" );
+ if (use_template == 1) {
+ /* dupe template into reserved sectors */
+ seekto( 0, "Start of partition" );
+ if (size_fat == 32) {
+ writebuf( template_boot_code, 3, "backup jmpBoot" );
+ seekto( 0x5a, "sector 1 boot area" );
+ writebuf( template_boot_code+0x5a, 420, "sector 1 boot area" );
+ seekto( 512*2, "third sector" );
+ if (backup_boot != 0) {
+ writebuf( template_boot_code+512*2, backup_boot*sector_size - 512*2, "data to backup boot" );
+ seekto( backup_boot*sector_size, "backup boot sector" );
+ writebuf( template_boot_code, 3, "backup jmpBoot" );
+ seekto( backup_boot*sector_size+0x5a, "backup boot sector boot area" );
+ writebuf( template_boot_code+0x5a, 420, "backup boot sector boot area" );
+ seekto( (backup_boot+2)*sector_size, "sector following backup code" );
+ writebuf( template_boot_code+(backup_boot+2)*sector_size, (reserved_sectors-backup_boot-2)*512, "remaining data" );
+ } else {
+ writebuf( template_boot_code+512*2, (reserved_sectors-2)*512, "remaining data" );
+ }
+ } else {
+ writebuf( template_boot_code, 3, "jmpBoot" );
+ seekto( 0x3e, "sector 1 boot area" );
+ writebuf( template_boot_code+0x3e, 448, "boot code" );
+ }
+ }
+
if (info_sector) free( info_sector );
free (root_dir); /* Free up the root directory space from setup_tables */
free (fat); /* Free up the fat table space reserved during setup_tables */
@@ -1371,7 +1415,7 @@
{
fatal_error("\
Usage: mkdosfs [-A] [-c] [-C] [-v] [-I] [-l bad-block-file] [-b backup-boot-sector]\n\
- [-m boot-msg-file] [-n volume-name] [-i volume-id]\n\
+ [-m boot-msg-file] [-n volume-name] [-i volume-id] [-B bootcode]\n\
[-s sectors-per-cluster] [-S logical-sector-size] [-f number-of-FATs]\n\
[-F fat-size] [-r root-dir-entries] [-R reserved-sectors]\n\
/dev/name [blocks]\n");
@@ -1433,7 +1477,7 @@
printf ("%s " VERSION " (" VERSION_DATE ")\n",
program_name);
- while ((c = getopt (argc, argv, "AcCf:F:Ii:l:m:n:r:R:s:S:v")) != EOF)
+ while ((c = getopt (argc, argv, "AcCf:F:Ii:l:m:n:r:R:s:S:v:B:b")) != EOF)
/* Scan the command line for options */
switch (c)
{
@@ -1494,6 +1538,51 @@
listfile = optarg;
break;
+ case 'B': /* B : read in bootcode */
+ if ( strcmp(optarg, "-") )
+ {
+ msgfile = fopen(optarg, "r");
+ if ( !msgfile )
+ perror(optarg);
+ }
+ else
+ msgfile = stdin;
+
+ if ( msgfile )
+ {
+ if (!(template_boot_code = malloc( MAX_RESERVED )))
+ die( "Out of memory" );
+ /* The template boot sector including reserved must not be > 65535 */
+ use_template = 1;
+ i = 0;
+ do
+ {
+ ch = getc(msgfile);
+ switch (ch)
+ {
+ case EOF:
+ break;
+
+ default:
+ template_boot_code[i++] = ch; /* Store character */
+ break;
+ }
+ }
+ while ( ch != EOF && i < MAX_RESERVED );
+ ch = getc(msgfile); /* find out if we're at EOF */
+
+ /* Fill up with zeros */
+ while( i < MAX_RESERVED )
+ template_boot_code[i++] = '\0';
+
+ if ( ch != EOF )
+ printf ("Warning: template too long; truncated after %d bytes\n", i);
+
+ if ( msgfile != stdin )
+ fclose(msgfile);
+ }
+ break;
+
case 'm': /* m : Set boot message */
if ( strcmp(optarg, "-") )
{

@@ -1,634 +0,0 @@
diff -urN dosfstools-2.10.orig/mkdosfs/mkdosfs.c dosfstools-2.10/mkdosfs/mkdosfs.c
--- dosfstools-2.10.orig/mkdosfs/mkdosfs.c 2004-08-02 20:48:45.000000000 -0700
+++ dosfstools-2.10/mkdosfs/mkdosfs.c 2004-08-02 20:49:44.296953792 -0700
@@ -18,6 +18,10 @@
as a rule), and not the block. For example the boot block does not
occupy a full cluster.
+ June 2004 - Jordan Crouse (info.linux@amd.com)
+ Added -d <directory> support to populate the image
+ Copyright (C) 2004, Advanced Micro Devices, All Rights Reserved
+
Fixes/additions May 1998 by Roman Hodek
<Roman.Hodek@informatik.uni-erlangen.de>:
- Atari format support
@@ -71,6 +75,8 @@
#include <unistd.h>
#include <time.h>
#include <errno.h>
+#include <libgen.h>
+#include <dirent.h>
#if __BYTE_ORDER == __BIG_ENDIAN
@@ -124,6 +130,8 @@
}
#endif
+#define ROUND_UP(value, divisor) (value + (divisor - (value % divisor))) / divisor
+
/* Constant definitions */
#define TRUE 1 /* Boolean constants */
@@ -163,7 +171,6 @@
#define ATTR_VOLUME 8 /* volume label */
#define ATTR_DIR 16 /* directory */
#define ATTR_ARCH 32 /* archived */
-
#define ATTR_NONE 0 /* no attribute bits */
#define ATTR_UNUSED (ATTR_VOLUME | ATTR_ARCH | ATTR_SYS | ATTR_HIDDEN)
/* attribute bits that are copied "as is" */
@@ -258,6 +265,19 @@
__u32 reserved2[4];
};
+/* This stores up to 13 chars of the name */
+
+struct msdos_dir_slot {
+ __u8 id; /* sequence number for slot */
+ __u8 name0_4[10]; /* first 5 characters in name */
+ __u8 attr; /* attribute byte */
+ __u8 reserved; /* always 0 */
+ __u8 alias_checksum; /* checksum for 8.3 alias */
+ __u8 name5_10[12]; /* 6 more characters in name */
+ __u16 start; /* starting cluster number, 0 in long slots */
+ __u8 name11_12[4]; /* last 2 characters in name */
+};
+
struct msdos_dir_entry
{
char name[8], ext[3]; /* name and extension */
@@ -306,6 +326,15 @@
#define MESSAGE_OFFSET 29 /* Offset of message in above code */
+/* Special structure to keep track of directories as we add them for the -d option */
+
+struct dir_entry {
+ int root; /* Specifies if this is the root dir or not */
+ int count; /* Number of items in the table */
+ int entries; /* Number of entries in the table */
+ struct msdos_dir_entry *table; /* Pointer to the entry table */
+};
+
/* Global variables - the root of all evil :-) - see these and weep! */
static char *template_boot_code; /* Variable to store a full template boot sector in */
@@ -339,6 +368,9 @@
static int size_root_dir; /* Size of the root directory in bytes */
static int sectors_per_cluster = 0; /* Number of sectors per disk cluster */
static int root_dir_entries = 0; /* Number of root directory entries */
+static int root_dir_num_entries = 0;
+static int last_cluster_written = 0;
+
static char *blank_sector; /* Blank sector - all zeros */
@@ -411,7 +443,6 @@
}
}
-
/* Mark a specified sector as having a particular value in its FAT entry */
static void
@@ -1262,6 +1293,9 @@
die ("unable to allocate space for root directory in memory");
}
+
+ last_cluster_written = 2;
+
memset(root_dir, 0, size_root_dir);
if ( memcmp(volume_name, " ", 11) )
{
@@ -1310,11 +1344,11 @@
}
if (!(blank_sector = malloc( sector_size )))
- die( "Out of memory" );
+ die( "Out of memory" );
+
memset(blank_sector, 0, sector_size);
}
-
-
+
/* Write the new filesystem's data tables to wherever they're going to end up! */
#define error(str) \
@@ -1336,7 +1370,7 @@
do { \
int __size = (size); \
if (write (dev, buf, __size) != __size) \
- error ("failed whilst writing " errstr); \
+ error ("failed whilst writing " errstr); \
} while(0)
@@ -1407,6 +1441,452 @@
free (fat); /* Free up the fat table space reserved during setup_tables */
}
+/* Add a file to the specified directory entry, and also write it into the image */
+
+static void copy_filename(char *filename, char *base, char *ext) {
+
+ char *ch = filename;
+ int i, len;
+
+ memset(base, 0x20, 8);
+ memset(ext, 0x20, 3);
+
+ for(len = 0 ; *ch && *ch != '.'; ch++) {
+ base[len++] = toupper(*ch);
+ if (len == 8) break;
+ }
+
+ for ( ; *ch && *ch != '.'; ch++);
+ if (*ch) ch++;
+
+ for(len = 0 ; *ch; ch++) {
+ ext[len++] = toupper(*ch);
+ if (len == 3) break;
+ }
+}
+
+/* Check for an .attrib.<filename> file, and read the attributes therein */
+
+/* We are going to be pretty pedantic about this. The file needs 3
+ bytes at the beginning, the attributes are listed in this order:
+
+ (H)idden|(S)ystem|(A)rchived
+
+ A capital HSA means to enable it, anything else will disable it
+ (I recommend a '-') The unix user attributes will still be used
+ for write access.
+
+ For example, to enable system file access for ldlinux.sys, write
+ the following to .attrib.ldlinux.sys: -S-
+*/
+
+unsigned char check_attrib_file(char *dir, char *filename) {
+
+ char attrib[4] = { '-', '-', '-' };
+ unsigned char *buffer = 0;
+ int ret = ATTR_NONE;
+ int fd = -1;
+
+ buffer = (char *) calloc(1, strlen(dir) + strlen(filename) + 10);
+ if (!buffer) return ATTR_NONE;
+
+ sprintf(buffer, "%s/.attrib.%s", dir, filename);
+
+ if (access(buffer, R_OK))
+ goto exit_attrib;
+
+ if ((fd = open(buffer, O_RDONLY, 0)) < 0)
+ goto exit_attrib;
+
+ if (read(fd, attrib, 3) < 0)
+ goto exit_attrib;
+
+ if (attrib[0] == 'H') ret |= ATTR_HIDDEN;
+ if (attrib[1] == 'S') ret |= ATTR_SYS;
+ if (attrib[2] == 'A') ret |= ATTR_ARCH;
+
+ printf("%s: Setting attribute %x\n", filename, ret);
+
+ exit_attrib:
+ if (fd >= 0) close(fd);
+ if (buffer) free(buffer);
+
+ return ret;
+}
+
+static void copy_name(char *buffer, int size, char **pointer) {
+ int i;
+
+ for(i = 0; i < size; i += 2) {
+ if (*pointer) {
+ buffer[i] = **pointer;
+ buffer[i + 1] = 0x00;
+ *pointer = **pointer ? *pointer + 1 : 0;
+ }
+ else {
+ buffer[i] = 0xFF;
+ buffer[i + 1] = 0xFF;
+ }
+ }
+}
+
+static int add_file(char *filename, struct dir_entry *dir, unsigned char attr)
+{
+ struct stat stat;
+ struct msdos_dir_entry *entry;
+ int infile = 0;
+ int sectors, clusters;
+ struct tm *ctime;
+ int c, s;
+ int ptr;
+ char *buffer, *base;
+ int start;
+ int usedsec, totalsec;
+
+ char name83[8], ext83[3];
+
+ struct msdos_dir_slot *slot;
+ int i;
+ char *p;
+
+ /* The root directory is static, everything else grows as needed */
+
+ if (dir->root) {
+ if (dir->count == dir->entries) {
+ printf("Error - too many directory entries\n");
+ }
+ }
+ else {
+ if (dir->count == dir->entries) {
+ if (!dir->table)
+ dir->table =
+ (struct msdos_dir_entry *) malloc(sizeof(struct msdos_dir_entry));
+ else {
+ dir->table =
+ (struct msdos_dir_entry *) realloc(dir->table, (dir->entries + 1) *
+ sizeof(struct msdos_dir_entry));
+
+ memset(&dir->table[dir->entries], 0, sizeof(struct msdos_dir_entry));
+ }
+
+ dir->entries++;
+ }
+ }
+
+ infile = open(filename, O_RDONLY, 0);
+ if (!infile) return;
+
+ if (fstat(infile, &stat))
+ goto exit_add;
+
+ if (S_ISCHR(stat.st_mode) ||S_ISBLK(stat.st_mode) ||
+ S_ISFIFO(stat.st_mode) || S_ISLNK(stat.st_mode)) {
+ printf("Error - cannot create a special file in a FATFS\n");
+ goto exit_add;
+ }
+
+ /* FIXME: This isn't very pretty */
+
+ usedsec = start_data_sector + (size_root_dir / sector_size) +
+ (last_cluster_written * bs.cluster_size);
+
+ totalsec = blocks * BLOCK_SIZE / sector_size;
+
+ /* Figure out how many sectors / clusters the file requires */
+
+ sectors = ROUND_UP(stat.st_size, sector_size);
+ clusters = ROUND_UP(sectors, (int) bs.cluster_size);
+
+ if (usedsec + sectors > totalsec) {
+ printf("Error - %s is too big (%d vs %d)\n", filename, sectors, totalsec - usedsec);
+ close(infile);
+ return -1;
+ }
+
+ printf("ADD %s\n", filename);
+
+ /* Grab the basename of the file */
+ base = basename(filename);
+
+ /* Extract out the 8.3 name */
+ copy_filename(base, name83, ext83);
+
+ /* Make an extended name slot */
+
+ slot = (struct msdos_dir_slot *) &dir->table[dir->count++];
+ slot->id = 'A';
+ slot->attr = 0x0F;
+ slot->reserved = 0;
+ slot->start = 0;
+
+ slot->alias_checksum = 0;
+
+ for(i = 0; i < 8; i++)
+ slot->alias_checksum = (((slot->alias_checksum&1)<<7)|((slot->alias_checksum&0xfe)>>1)) + name83[i];
+
+ for(i = 0; i < 3; i++)
+ slot->alias_checksum = (((slot->alias_checksum&1)<<7)|((slot->alias_checksum&0xfe)>>1)) + ext83[i];
+
+ p = base;
+
+ copy_name(slot->name0_4, 10, &p);
+ copy_name(slot->name5_10, 12, &p);
+ copy_name(slot->name11_12, 4, &p);
+
+
+ /* Get the entry from the root filesystem */
+ entry = &dir->table[dir->count++];
+
+ strncpy(entry->name, name83, 8);
+ strncpy(entry->ext, ext83, 3);
+
+
+ /* If the user has it read only, then add read only to the incoming
+ attribute settings */
+
+ if (!(stat.st_mode & S_IWUSR)) attr |= ATTR_RO;
+ entry->attr = attr;
+
+ /* Set the access time on the file */
+ ctime = localtime(&create_time);
+
+ entry->time = CT_LE_W((unsigned short)((ctime->tm_sec >> 1) +
+ (ctime->tm_min << 5) + (ctime->tm_hour << 11)));
+
+ entry->date = CT_LE_W((unsigned short)(ctime->tm_mday +
+ ((ctime->tm_mon+1) << 5) +
+ ((ctime->tm_year-80) << 9)));
+
+ entry->ctime_ms = 0;
+ entry->ctime = entry->time;
+ entry->cdate = entry->date;
+ entry->adate = entry->date;
+ entry->size = stat.st_size;
+
+ start = last_cluster_written;
+
+ entry->start = CT_LE_W(start); /* start sector */
+ entry->starthi = CT_LE_W((start & 0xFFFF0000) >> 16); /* High start sector (for FAT32) */
+
+ /* We mark all of the clusters we use in the FAT */
+
+ for(c = 0; c < clusters; c++ ) {
+ int free;
+ int next = c == (clusters - 1) ? FAT_EOF : start + c + 1;
+ mark_FAT_cluster(start + c, next);
+ last_cluster_written++;
+ }
+
+ /* This confused me too - cluster 2 starts after the
+ root directory data - search me as to why */
+
+ ptr = (start_data_sector * sector_size) + size_root_dir;
+ ptr += (start - 2) * bs.cluster_size * sector_size;
+
+ buffer = (char *) malloc(sector_size);
+
+ if (!buffer) {
+ printf("Error - couldn't allocate memory\n");
+ goto exit_add;
+ }
+
+ /* Write the file into the file block */
+
+ seekto(ptr, "datafile");
+
+ while(1) {
+ int size = read(infile, buffer, sector_size);
+ if (size <= 0) break;
+
+ writebuf(buffer, size, "data");
+ }
+
+ exit_add:
+ if (infile) close(infile);
+}
+
+/* Add a new directory to the specified directory entry, and in turn populate
+ it with its own files */
+
+/* FIXME: This should check to make sure there is enough size to add itself */
+
+static void add_directory(char *filename, struct dir_entry *dir) {
+
+ struct dir_entry *newdir = 0;
+ struct msdos_dir_entry *entry;
+ struct tm *ctime;
+ DIR *rddir = opendir(filename);
+ struct dirent *dentry = 0;
+ int remain;
+ char *data;
+
+ /* If the directory doesn't exist */
+ if (!rddir) return;
+
+ if (dir->root) {
+ if (dir->count == dir->entries) {
+ printf("Error - too many directory entries\n");
+ goto exit_add_dir;
+ }
+ }
+ else {
+ if (dir->count == dir->entries) {
+ if (!dir->table)
+ dir->table = (struct msdos_dir_entry *) malloc(sizeof(struct msdos_dir_entry));
+ else {
+ dir->table = (struct msdos_dir_entry *) realloc(dir->table, (dir->entries + 1) *
+ sizeof(struct msdos_dir_entry));
+
+ /* Zero it out to avoid issues */
+ memset(&dir->table[dir->entries], 0, sizeof(struct msdos_dir_entry));
+ }
+ dir->entries++;
+ }
+ }
+
+ /* Now, create a new directory entry for the new directory */
+ newdir = (struct dir_entry *) calloc(1, sizeof(struct dir_entry));
+ if (!newdir) goto exit_add_dir;
+
+ entry = &dir->table[dir->count++];
+
+ strncpy(entry->name, basename(filename), sizeof(entry->name));
+
+ entry->attr = ATTR_DIR;
+ ctime = localtime(&create_time);
+
+ entry->time = CT_LE_W((unsigned short)((ctime->tm_sec >> 1) +
+ (ctime->tm_min << 5) + (ctime->tm_hour << 11)));
+
+ entry->date = CT_LE_W((unsigned short)(ctime->tm_mday +
+ ((ctime->tm_mon+1) << 5) +
+ ((ctime->tm_year-80) << 9)));
+
+ entry->ctime_ms = 0;
+ entry->ctime = entry->time;
+ entry->cdate = entry->date;
+ entry->adate = entry->date;
+
+ /* Now, read the directory */
+
+ while((dentry = readdir(rddir))) {
+ struct stat st;
+ char *buffer;
+
+ if (!strcmp(dentry->d_name, ".") || !strcmp(dentry->d_name, ".."))
+ continue;
+
+ /* DOS wouldn't like a typical unix . (dot) file, so we skip those too */
+ if (dentry->d_name[0] == '.') continue;
+
+ buffer = malloc(strlen(filename) + strlen(dentry->d_name) + 3);
+ if (!buffer) continue;
+
+ sprintf(buffer, "%s/%s", filename, dentry->d_name);
+ if (!stat(buffer, &st)) {
+ if (S_ISDIR(st.st_mode))
+ add_directory(buffer, newdir);
+ else if (S_ISREG(st.st_mode)) {
+ unsigned char attrib = check_attrib_file(filename, dentry->d_name);
+ add_file(buffer, newdir, attrib);
+ }
+ }
+
+ free(buffer);
+ }
+
+ /* Now that the entire directory has been written, go ahead and write the directory
+ entry as well */
+
+ entry->start = CT_LE_W(last_cluster_written);
+ entry->starthi = CT_LE_W((last_cluster_written & 0xFFFF0000) >> 16);
+ entry->size = newdir->count * sizeof(struct msdos_dir_entry);
+
+ remain = entry->size;
+ data = (char *) newdir->table;
+
+ while(remain) {
+ int size =
+ remain > bs.cluster_size * sector_size ? bs.cluster_size * sector_size : remain;
+
+ int pos = (start_data_sector * sector_size) + size_root_dir;
+ pos += (last_cluster_written - 2) * bs.cluster_size * sector_size;
+
+ seekto(pos, "add_dir");
+ writebuf(data, size, "add_dir");
+
+ remain -= size;
+ data += size;
+
+ mark_FAT_cluster(last_cluster_written, remain ? last_cluster_written + 1 : FAT_EOF);
+ last_cluster_written++;
+ }
+
+ exit_add_dir:
+ if (rddir) closedir(rddir);
+ if (newdir->table) free(newdir->table);
+ if (newdir) free(newdir);
+}
+
+/* Given a directory, add all the files and directories to the root directory of the
+ image.
+*/
+
+static void add_root_directory(char *dirname)
+{
+ DIR *dir = opendir(dirname);
+ struct dirent *entry = 0;
+ struct dir_entry *newdir = 0;
+
+ if (!dir) {
+ printf("Error - directory %s does not exist\n", dirname);
+ return;
+ }
+
+ /* Create the root directory structure - this is a bit different than
+ above, because the table already exists, we just refer to it. */
+
+ newdir = (struct dir_entry *) calloc(1,sizeof(struct dir_entry));
+
+ if (!newdir) {
+ closedir(dir);
+ return;
+ }
+
+ newdir->entries = root_dir_entries;
+ newdir->root = 1;
+ newdir->count = 0;
+ newdir->table = root_dir;
+
+ while((entry = readdir(dir))) {
+ struct stat st;
+ char *buffer;
+
+ if (!strcmp(entry->d_name, ".") || !strcmp(entry->d_name, ".."))
+ continue;
+
+ /* DOS wouldn't like a typical unix . (dot) file, so we skip those too */
+ if (entry->d_name[0] == '.') continue;
+
+ buffer = malloc(strlen(dirname) + strlen(entry->d_name) + 3);
+ if (!buffer) continue;
+
+ sprintf(buffer, "%s/%s", dirname, entry->d_name);
+ if (!stat(buffer, &st)) {
+ if (S_ISDIR(st.st_mode))
+ add_directory(buffer, newdir);
+ else if (S_ISREG(st.st_mode)) {
+ unsigned char attrib = check_attrib_file(dirname, entry->d_name);
+ add_file(buffer, newdir, attrib);
+ }
+ }
+
+ free(buffer);
+ }
+
+ closedir(dir);
+ if (newdir) free(newdir);
+}
/* Report the command usage and return a failure error code */
@@ -1418,9 +1898,9 @@
[-m boot-msg-file] [-n volume-name] [-i volume-id] [-B bootcode]\n\
[-s sectors-per-cluster] [-S logical-sector-size] [-f number-of-FATs]\n\
[-F fat-size] [-r root-dir-entries] [-R reserved-sectors]\n\
- /dev/name [blocks]\n");
+ [-d directory] /dev/name [blocks]\n");
}
-
+
/*
* ++roman: On m68k, check if this is an Atari; if yes, turn on Atari variant
* of MS-DOS filesystem by default.
@@ -1458,6 +1938,8 @@
int c;
char *tmp;
char *listfile = NULL;
+ char *dirname = NULL;
+
FILE *msgfile;
struct stat statbuf;
int i = 0, pos, ch;
@@ -1477,7 +1959,7 @@
printf ("%s " VERSION " (" VERSION_DATE ")\n",
program_name);
- while ((c = getopt (argc, argv, "AcCf:F:Ii:l:m:n:r:R:s:S:v:B:b")) != EOF)
+ while ((c = getopt (argc, argv, "AcCd:f:F:Ii:l:m:n:r:R:s:S:v:B:b")) != EOF)
/* Scan the command line for options */
switch (c)
{
@@ -1502,6 +1984,10 @@
create = TRUE;
break;
+ case 'd':
+ dirname = optarg;
+ break;
+
case 'f': /* f : Choose number of FATs */
nr_fats = (int) strtol (optarg, &tmp, 0);
if (*tmp || nr_fats < 1 || nr_fats > 4)
@@ -1796,8 +2282,10 @@
else if (listfile)
get_list_blocks (listfile);
- write_tables (); /* Write the file system tables away! */
+ if (dirname) add_root_directory(dirname);
+
+ write_tables (); /* Write the file system tables away! */
exit (0); /* Terminate with no errors! */
}


@@ -1,12 +0,0 @@
--- dosfstools-2.10/dosfsck/boot.c.orig 2004-10-15 08:51:42.394725176 -0600
+++ dosfstools-2.10/dosfsck/boot.c 2004-10-15 08:49:16.776862456 -0600
@@ -14,6 +14,9 @@
#include "io.h"
#include "boot.h"
+#ifndef MSDOS_FAT12
+#define MSDOS_FAT12 4084
+#endif
#define ROUND_TO_MULTIPLE(n,m) ((n) && (m) ? (n)+(m)-1-((n)-1)%(m) : 0)
/* don't divide by zero */


@@ -1,11 +0,0 @@
DESCRIPTION = "Evince is a document viewer for document formats like PDF, PS, DjVu."
LICENSE = "GPL"
SECTION = "x11/office"
DEPENDS = "gnome-doc-utils poppler libxml2 gtk+ gnome-vfs gconf libglade gnome-keyring"
inherit gnome pkgconfig gtk-icon-cache
SRC_URI = "${GNOME_MIRROR}/${PN}/0.9/${PN}-${PV}.tar.bz2 \
file://no-icon-theme.diff;patch=1;pnum=0"
EXTRA_OECONF = "--without-libgnome --disable-thumbnailer"


@@ -1,15 +0,0 @@
DESCRIPTION = "Evince is a document viewer for document formats like PDF, PS, DjVu."
LICENSE = "GPL"
SECTION = "x11/office"
DEPENDS = "gnome-doc-utils poppler libxml2 gtk+ gnome-vfs gconf libglade gnome-keyring"
PV = "0.9.0+svn${SRCDATE}"
inherit gnome pkgconfig gtk-icon-cache
SRC_URI = "svn://svn.gnome.org/svn/evince;module=trunk \
file://no-icon-theme.diff;patch=1;pnum=0"
S = "${WORKDIR}/trunk"
EXTRA_OECONF = "--without-libgnome --disable-thumbnailer"


@@ -1,13 +0,0 @@
Index: configure.ac
===================================================================
--- configure.ac (revision 2436)
+++ configure.ac (working copy)
@@ -57,7 +57,7 @@
PKG_CHECK_MODULES(LIB, gtk+-2.0 >= $GTK_REQUIRED libxml-2.0 >= $LIBXML_REQUIRED)
PKG_CHECK_MODULES(BACKEND, gtk+-2.0 >= $GTK_REQUIRED gnome-vfs-2.0)
PKG_CHECK_MODULES(FRONTEND_CORE, gtk+-2.0 >= $GTK_REQUIRED libglade-2.0 gnome-vfs-2.0)
-PKG_CHECK_MODULES(SHELL_CORE, libxml-2.0 >= $LIBXML_REQUIRED gtk+-2.0 >= $GTK_REQUIRED gnome-icon-theme >= $GNOME_ICON_THEME_REQUIRED gnome-vfs-2.0 libglade-2.0 gconf-2.0 gnome-keyring-1 >= $KEYRING_REQUIRED)
+PKG_CHECK_MODULES(SHELL_CORE, libxml-2.0 >= $LIBXML_REQUIRED gtk+-2.0 >= $GTK_REQUIRED gnome-vfs-2.0 libglade-2.0 gconf-2.0 gnome-keyring-1 >= $KEYRING_REQUIRED)
AC_ARG_WITH(libgnome,
AC_HELP_STRING([--without-libgnome],[disable the use of libgnome]),


@@ -1,68 +0,0 @@
Index: flumotion-0.4.1/configure.ac
===================================================================
--- flumotion-0.4.1.orig/configure.ac 2007-03-05 17:16:48.121264330 +0100
+++ flumotion-0.4.1/configure.ac 2007-03-05 17:20:40.343837320 +0100
@@ -73,13 +73,6 @@
AC_MSG_ERROR([PyGTK 2.5.2 contains known bugs, please install other version])
fi
-if test "x$DISPLAY" != "x"; then
- AS_PYTHON_IMPORT([gtk.glade],,
- AC_MSG_ERROR([You need to have python libglade bindings installed]))
-else
- AC_MSG_NOTICE([Not trying to import gtk.glade because DISPLAY is unset])
-fi
-
if test $GST_010_SUPPORTED = "no"; then
AC_MSG_ERROR([No appropriate version of PyGTK installed. Correct the above
errors and try again.])
@@ -94,16 +87,6 @@
[AC_MSG_RESULT([$PYGST_010_PKG_ERRORS])
GST_010_SUPPORTED=no])
- if test $GST_010_SUPPORTED = "yes"; then
- saved_PYTHONPATH=$PYTHONPATH
- export PYTHONPATH=$PYGST_010_DIR:$PYTHONPATH
- AS_PYTHON_IMPORT([gst],,
- [AC_MSG_NOTICE([Unable to import gst-python 0.10 -- check your PYTHONPATH?])
- GST_010_SUPPORTED=no],
- [import pygst; pygst.require('0.10')],
- [assert gst.pygst_version[[1]] == 10 or (gst.pygst_version[[1]] == 9 and gst.pygst_version[[2]] >= 7)])
-
- fi
fi
if test $GST_010_SUPPORTED = "no"; then
@@ -158,32 +141,7 @@
AC_CHECK_PROG(PYCHECKER, pychecker, yes, no)
AM_CONDITIONAL(HAVE_PYCHECKER, test "x$PYCHECKER" = "xyes")
-dnl check for Twisted
-AS_PYTHON_IMPORT(twisted,
- [
- AC_MSG_CHECKING(for Twisted >= 2.0.1)
- prog="
-import sys
-import twisted.copyright
-minver = '2.0.1'
-if twisted.copyright.version < minver:
- sys.exit(1)
-sys.exit(0)
-"
- if $PYTHON -c "$prog" 1>&AC_FD_CC 2>&AC_FD_CC
- then
- AC_MSG_RESULT(found)
- else
- AC_MSG_RESULT(too old)
- AC_MSG_ERROR([You need at least version 2.0.1 of Twisted])
- fi
- ]
- ,
- AC_MSG_ERROR([You need at least version 2.0.1 of Twisted])
-)
-TWISTED_MODULE([twisted.names])
-TWISTED_MODULE([twisted.web])
AC_CONFIG_FILES([env], [chmod +x env])
AC_CONFIG_FILES([bin/flumotion], [chmod +x bin/flumotion])


@@ -1,25 +0,0 @@
Index: flumotion-0.3.1/common/as-python.m4
===================================================================
--- flumotion-0.3.1.orig/common/as-python.m4 2007-03-02 15:26:46.704717964 +0100
+++ flumotion-0.3.1/common/as-python.m4 2007-03-02 15:27:28.601326374 +0100
@@ -199,6 +199,12 @@
AC_MSG_CHECKING(for headers required to compile python extensions)
dnl deduce PYTHON_INCLUDES
+
+ AC_ARG_WITH(python-includes,
+ [ --with-python-includes=DIR path to Python includes], py_exec_prefix=$withval)
+ if test x$py_exec_prefix != x; then
+ PYTHON_INCLUDES="-I${py_exec_prefix}/include/python${PYTHON_VERSION}"
+ else
py_prefix=`$PYTHON -c "import sys; print sys.prefix"`
py_exec_prefix=`$PYTHON -c "import sys; print sys.exec_prefix"`
PYTHON_INCLUDES="-I${py_prefix}/include/python${PYTHON_VERSION}"
@@ -206,6 +212,7 @@
if test "$py_prefix" != "$py_exec_prefix"; then
PYTHON_INCLUDES="$PYTHON_INCLUDES -I${py_exec_prefix}/include/python${PYTHON_VERSION}"
fi
+ fi
AC_SUBST(PYTHON_INCLUDES)
dnl check if the headers exist:


@@ -1,30 +0,0 @@
DESCRIPTION = "Fluendo Streaming Server"
LICENSE = "GPL"
DEPENDS = "gstreamer python-gst twisted python-pygtk2"
RDEPENDS = "python-twisted-core python-twisted-web python-core python-gst"
RDEPENDS_${PN}-gui = "${PN} python-pygtk2"
PR = "r3"
SRC_URI = "http://www.flumotion.net/src/flumotion/flumotion-${PV}.tar.bz2 \
file://python-path.patch;patch=1 \
file://no-check-for-python-stuff.patch;patch=1"
inherit autotools distutils-base pkgconfig
export EPYDOC = "no"
EXTRA_OECONF += "--with-python-includes=${STAGING_INCDIR}/../"
PACKAGES =+ "flumotion-gui"
FILES_${PN} = "${bindir} ${sbindir} ${libdir}/flumotion"
FILES_${PN}-dev += "${libdir}/pkgconfig"
FILES_${PN}-gui = "${bindir}/flumotion-admin ${bindir}/flumotion-tester \
${libdir}/flumotion/python/flumotion/admin/gtk \
${libdir}/flumotion/python/flumotion/component/*/admin_gtk* \
${libdir}/flumotion/python/flumotion/component/*/*/admin_gtk* \
${libdir}/flumotion/python/flumotion/extern \
${libdir}/flumotion/python/flumotion/manager \
${libdir}/flumotion/python/flumotion/ui \
${libdir}/flumotion/python/flumotion/wizard \
${datadir}/pixmaps ${datadir}/flumotion ${datadir}/applications"


@@ -0,0 +1,28 @@
PR = "r9"
export IMAGE_BASENAME = "oh-extras"
GUI_MACHINE_CLASS ?= "none"
XSERVER ?= "xserver-kdrive-fbdev"
DEPENDS = "\
task-oh \
task-oh-extras"
RDEPENDS = "\
task-base \
task-oh-boot \
task-oh-boot-extras \
task-oh-base \
task-oh-standard \
task-oh-testapps \
task-oh-devtools \
task-oh-extraapps \
${XSERVER} "
export PACKAGE_INSTALL = "${RDEPENDS}"
#ROOTFS_POSTPROCESS_COMMAND += "zap_root_password; "
inherit image
LICENSE = "MIT"


@@ -1,9 +0,0 @@
#
# Copyright (C) 2007 OpenedHand Ltd.
#
IMAGE_FEATURES += "apps-core apps-pda"
inherit poky-image
IMAGE_INSTALL += "task-poky-extraapps"


@@ -1,25 +0,0 @@
--- ip/Makefile 2006/02/23 21:22:18 1.1
+++ ip/Makefile 2006/02/23 21:22:27
@@ -16,7 +16,7 @@
rtmon: $(RTMONOBJ) $(LIBNETLINK)
install: all
- install -m 0755 -s $(TARGETS) $(DESTDIR)$(SBINDIR)
+ install -m 0755 $(TARGETS) $(DESTDIR)$(SBINDIR)
install -m 0755 $(SCRIPTS) $(DESTDIR)$(SBINDIR)
clean:
--- tc/Makefile 2006/02/23 21:23:52 1.1
+++ tc/Makefile 2006/02/23 21:23:57
@@ -70,9 +70,9 @@
install: all
mkdir -p $(DESTDIR)/usr/lib/tc
- install -m 0755 -s tc $(DESTDIR)$(SBINDIR)
+ install -m 0755 tc $(DESTDIR)$(SBINDIR)
for i in $(TCSO); \
- do install -m 755 -s $$i $(DESTDIR)/usr/lib/tc; \
+ do install -m 755 $$i $(DESTDIR)/usr/lib/tc; \
done
clean:


@@ -1,83 +0,0 @@
The tc command was failing to build due to flex errors. These errors are
caused by an incompatible change to flex in recent versions, including the
version shipped with OE.
This fix is as per the one used by openSUSE:
http://lists.opensuse.org/opensuse-commit/2006-04/msg00090.html
and simply renames str to prevent it from conflicting.
--- iproute2-2.6.16-060323/tc/emp_ematch.l 2006/10/30 22:46:29 1.1
+++ iproute2-2.6.16-060323/tc/emp_ematch.l 2006/10/30 22:47:26
@@ -63,7 +63,7 @@
%}
-%x str
+%x STR
%option 8bit stack warn noyywrap prefix="ematch_"
%%
@@ -78,17 +78,17 @@
}
strbuf_index = 0;
- BEGIN(str);
+ BEGIN(STR);
}
-<str>\" {
+<STR>\" {
BEGIN(INITIAL);
yylval.b = bstr_new(strbuf, strbuf_index);
yylval.b->quoted = 1;
return ATTRIBUTE;
}
-<str>\\[0-7]{1,3} { /* octal escape sequence */
+<STR>\\[0-7]{1,3} { /* octal escape sequence */
int res;
sscanf(yytext + 1, "%o", &res);
@@ -100,12 +100,12 @@
strbuf_append_char((unsigned char) res);
}
-<str>\\[0-9]+ { /* catch wrong octal escape seq. */
+<STR>\\[0-9]+ { /* catch wrong octal escape seq. */
fprintf(stderr, "error: invalid octal escape sequence\n");
return ERROR;
}
-<str>\\x[0-9a-fA-F]{1,2} {
+<STR>\\x[0-9a-fA-F]{1,2} {
int res;
sscanf(yytext + 2, "%x", &res);
@@ -118,16 +118,16 @@
strbuf_append_char((unsigned char) res);
}
-<str>\\n strbuf_append_char('\n');
-<str>\\r strbuf_append_char('\r');
-<str>\\t strbuf_append_char('\t');
-<str>\\v strbuf_append_char('\v');
-<str>\\b strbuf_append_char('\b');
-<str>\\f strbuf_append_char('\f');
-<str>\\a strbuf_append_char('\a');
+<STR>\\n strbuf_append_char('\n');
+<STR>\\r strbuf_append_char('\r');
+<STR>\\t strbuf_append_char('\t');
+<STR>\\v strbuf_append_char('\v');
+<STR>\\b strbuf_append_char('\b');
+<STR>\\f strbuf_append_char('\f');
+<STR>\\a strbuf_append_char('\a');
-<str>\\(.|\n) strbuf_append_char(yytext[1]);
-<str>[^\\\n\"]+ strbuf_append_charp(yytext);
+<STR>\\(.|\n) strbuf_append_char(yytext[1]);
+<STR>[^\\\n\"]+ strbuf_append_charp(yytext);
[aA][nN][dD] return AND;
[oO][rR] return OR;


@@ -1,18 +0,0 @@
DESCRIPTION = "kernel routing and traffic control utilities"
SECTION = "base"
LICENSE = "GPL"
DEPENDS = "flex-native bison-native"
# Set DATE in the .bb file
SRC_URI = "http://developer.osdl.org/dev/iproute2/download/${P}-${DATE}.tar.gz"
S = "${WORKDIR}/${P}-${DATE}"
EXTRA_OEMAKE = "CC='${CC}' KERNEL_INCLUDE=${STAGING_KERNEL_DIR}/include DOCDIR=${docdir}/iproute2 SUBDIRS='lib tc ip' SBINDIR=/sbin"
do_install () {
oe_runmake DESTDIR=${D} install
}
FILES_${PN} += "/usr/lib/tc/*"
FILES_${PN}-dbg += "/usr/lib/tc/.debug"


@@ -1,8 +0,0 @@
PR = "r0"
SRC_URI += "file://iproute2-2.6.15_no_strip.diff;patch=1;pnum=0 \
file://new-flex-fix.patch;patch=1"
require iproute2.inc
DATE = "061002"


@@ -0,0 +1,19 @@
SECTION = "libs"
PRIORITY = "optional"
MAINTAINER = "Greg Gilbert <greg@treke.net>"
DEPENDS = "zlib"
DESCRIPTION = "Library for interacting with ID3 tags."
LICENSE = "GPL"
PR = "r2"
SRC_URI = "ftp://ftp.mars.org/pub/mpeg/libid3tag-${PV}.tar.gz "
S = "${WORKDIR}/libid3tag-${PV}"
inherit autotools
EXTRA_OECONF = "--enable-speed"
do_stage() {
oe_libinstall -so libid3tag ${STAGING_LIBDIR}
install -m 0644 id3tag.h ${STAGING_INCDIR}
}


@@ -0,0 +1,26 @@
DESCRIPTION = "MPEG Audio Decoder Library"
SECTION = "libs"
PRIORITY = "optional"
MAINTAINER = "Greg Gilbert <greg@treke.net>"
DEPENDS = "libid3tag"
LICENSE = "GPL"
PR = "r2"
SRC_URI = "ftp://ftp.mars.org/pub/mpeg/libmad-${PV}.tar.gz"
S = "${WORKDIR}/libmad-${PV}"
inherit autotools
EXTRA_OECONF = "--enable-speed --enable-shared"
# The ASO's don't take any account of thumb...
EXTRA_OECONF_append_thumb = " --disable-aso --enable-fpm=default"
do_configure_prepend () {
# damn picky automake...
touch NEWS AUTHORS ChangeLog
}
do_stage() {
oe_libinstall -so libmad ${STAGING_LIBDIR}
install -m 0644 mad.h ${STAGING_INCDIR}
}


@@ -1,11 +0,0 @@
--- libnl-1.0-pre6/Makefile.opts.in.orig 2006-08-24 14:57:42.000000000 +0200
+++ libnl-1.0-pre6/Makefile.opts.in 2006-08-24 14:58:20.000000000 +0200
@@ -10,7 +10,7 @@
#
CC := @CC@
-CFLAGS := @CFLAGS@
+CFLAGS := -I./include -I. -I../include @CFLAGS@
LDFLAGS := @LDFLAGS@
CPPFLAGS := @CPPFLAGS@
PACKAGE_NAME := @PACKAGE_NAME@


@@ -1,18 +0,0 @@
DESCRIPTION = "libnl is a library for applications dealing with netlink sockets"
SECTION = "libs/network"
LICENSE = "LGPL"
HOMEPAGE = "http://people.suug.ch/~tgr/libnl/"
PRIORITY = "optional"
PV = "0.99+1.0-pre6"
inherit autotools pkgconfig
SRC_URI= "http://people.suug.ch/~tgr/libnl/files/${PN}-1.0-pre6.tar.gz \
file://local-includes.patch;patch=1"
S = "${WORKDIR}/${PN}-1.0-pre6"
do_stage () {
autotools_stage_all prefix=${prefix}
}


@@ -1,14 +0,0 @@
PR = "r7"
SRC_URI = "http://www.balabit.com/downloads/libol/0.3/${P}.tar.gz"
S = "${WORKDIR}/${PN}-${PV}"
inherit autotools binconfig
do_stage() {
install -d ${STAGING_INCDIR}/libol
install -m 0755 ${S}/src/.libs/libol.so.0.0.0 ${STAGING_LIBDIR}/
ln -fs ${STAGING_LIBDIR}/libol.so.0.0.0 ${STAGING_LIBDIR}/libol.so.0
install ${S}/src/*.h ${STAGING_INCDIR}/libol/
}


@@ -1,116 +0,0 @@
Index: current/configure.ac
===================================================================
--- current.orig/configure.ac 2007-06-14 09:06:04.000000000 +0000
+++ current/configure.ac 2007-06-14 09:14:37.000000000 +0000
@@ -52,8 +52,6 @@
CFLAGS="$CFLAGS $X_CFLAGS -Wall"
dnl Checks for libraries.
-AM_PATH_GTK(1.2.2,,
- AC_MSG_WARN([*** GTK+ >= 1.2.2 not found ***]))
AC_CHECK_LIB(X11, XInitThreads,,
AC_MSG_ERROR([*** X11 not threadsafe ***]))
AC_CHECK_LIB(Xext, XShapeQueryExtension,,
@@ -107,63 +105,6 @@
AC_MSG_WARN("beep media player not found")
fi
-dnl Check for xmms
-AM_PATH_XMMS(1.2.7,,
- AC_MSG_WARN("xmms plugins can not be built"))
-dnl Override where to place libxmms_xosd.so
-AC_ARG_WITH([plugindir],
- AC_HELP_STRING([--with-plugindir=DIR],
- [Set the xmms plugin directory]),
- [XMMS_PLUGIN_DIR="$withval"],
- [XMMS_PLUGIN_DIR="$XMMS_GENERAL_PLUGIN_DIR"])
-
-dnl Check for gdk-pixbuf
-AM_PATH_GDK_PIXBUF(0.22.0,,
- AC_MSG_WARN("new xmms plugin can not be built"))
-dnl Override where to place pixmaps for libxmms_xosd.so
-AC_ARG_WITH([pixmapdir],
- AC_HELP_STRING([--with-pixmapdir=DIR],
- [Set the directory for xmms plugin pixmaps]),
- [XMMS_PIXMAPDIR="$withval"],
- [XMMS_PIXMAPDIR="${datadir}/xosd"])
-AC_SUBST(XMMS_PIXMAPDIR)
-
-dnl NEW plugin
-AC_ARG_ENABLE([new-plugin],
- AC_HELP_STRING([--disable-new-plugin],
- [Disable new xmms plugin (enabled by default)]),
- [enable_new_plugin="$enableval"],
- [enable_new_plugin="yes"])
-AC_MSG_CHECKING([whether new xmms plugin was requested])
-AC_MSG_RESULT($enable_new_plugin)
-AM_CONDITIONAL([BUILD_NEW_PLUGIN],
- [test x"$enable_new_plugin" = "xyes" -a x"$no_xmms" != "xyes" -a x"$no_gdk_pixbuf" != "xyes"])
-
-
-dnl Check for Beep Media player
-AC_ARG_ENABLE([beep_media_player_plugin],
- AC_HELP_STRING([--enable-beep_media_player_plugin],
- [Enable beep media plugin (enabled by default)]),
- [beep_media_player_plugin="$enableval"],
- [beep_media_player_plugin="yes"])
-AC_MSG_CHECKING([whether beep media plugin was requested])
-AC_MSG_RESULT($beep_media_player_plugin)
-
-AM_CONDITIONAL([BUILD_BEEP_MEDIA_PLUGIN],
- [test x"$beep_media_player_plugin" = "xyes" -a x"$no_bmp" == "xyes"])
-
-dnl OLD plugin
-AC_ARG_ENABLE([old-plugin],
- AC_HELP_STRING([--enable-old-plugin],
- [Enable old xmms plugin (disabled by default)]),
- [enable_old_plugin="$enableval"],
- [enable_old_plugin="no"])
-AC_MSG_CHECKING([whether old xmms plugin was requested])
-AC_MSG_RESULT($enable_old_plugin)
-AM_CONDITIONAL([BUILD_OLD_PLUGIN],
- [test x"$enable_old_plugin" = "xyes" -a x"$no_xmms" != "xyes"])
-
-
dnl Define XOSD_VERSION
AC_DEFINE_UNQUOTED(XOSD_VERSION, "${VERSION}")
@@ -174,8 +115,6 @@
Makefile
src/Makefile
src/libxosd/Makefile
-src/xmms_plugin/Makefile
-src/bmp_plugin/Makefile
macros/Makefile
man/Makefile
pixmaps/Makefile
Index: current/src/Makefile.am
===================================================================
--- current.orig/src/Makefile.am 2007-06-14 09:06:04.000000000 +0000
+++ current/src/Makefile.am 2007-06-14 09:14:37.000000000 +0000
@@ -10,6 +10,4 @@
include_HEADERS = xosd.h
-AM_CFLAGS = ${GTK_CFLAGS}
-
-SUBDIRS=libxosd xmms_plugin bmp_plugin
+SUBDIRS=libxosd
Index: current/Makefile.am
===================================================================
--- current.orig/Makefile.am 2007-06-14 09:06:04.000000000 +0000
+++ current/Makefile.am 2007-06-14 09:16:15.000000000 +0000
@@ -1,11 +1,5 @@
-if BUILD_NEW_PLUGIN
- NEW_SUB = pixmaps
-endif
-if BUILD_BEEP_MEDIA_PLUGIN
- NEW_SUB = pixmaps
-endif
-SUBDIRS = macros man $(NEW_SUB) src script
-DIST_SUBDIRS = macros man pixmaps src script
+SUBDIRS = macros man src script
+DIST_SUBDIRS = macros man src script
DISTCLEANFILES = libtool-disable-static
