Compare commits


9 Commits

Author SHA1 Message Date
Richard Purdie
b806f726b1 scripts/runqemu-ifup: Ensure netmask is set correctly
Without this, the command will add a route for the 192.168.7.0 subnet, which
means multiple qemu instances can't operate correctly, since all but the last
one will be masked out.
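
As an editor's illustration (not part of the commit), Python's ipaddress
module shows why the bare address is a problem: without an explicit netmask
the interface route covers the whole classful subnet, so every instance
claims the same network. The addresses and prefix lengths below are
assumptions for the sketch; the script itself uses ifconfig.

    import ipaddress

    # Without an explicit netmask, each tap interface's route covers the
    # whole 192.168.7.0/24 subnet, so all instances claim the same network.
    tap0 = ipaddress.ip_interface("192.168.7.1/24")
    tap1 = ipaddress.ip_interface("192.168.7.3/24")
    print(tap0.network == tap1.network)         # True: tap0 is shadowed

    # With netmask 255.255.255.255 each route covers only its own endpoint,
    # so multiple qemu instances can coexist.
    tap0 = ipaddress.ip_interface("192.168.7.1/32")
    tap1 = ipaddress.ip_interface("192.168.7.3/32")
    print(tap0.network.overlaps(tap1.network))  # False: no masking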

(From OE-Core rev: 9e00d6b343120496ec0dd72240c7b04e0a8b7eaa)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-13 12:05:29 -08:00
Richard Purdie
160c1d9977 bitbake.conf/image.bbclass: Ensure images use the correct passwd/group files
We need pseudo to use the passwd/group files belonging to the rootfs when
building images. This patch ensures that we use the rootfs files instead of
those in the sysroot, which can otherwise lead to incorrect file ownership.
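
A minimal sketch of the idea (editor's addition, not the patch itself): it
assumes pseudo honors a PSEUDO_PASSWD variable pointing at a directory
containing etc/passwd and etc/group, and the path below is hypothetical.

    import os
    import subprocess

    rootfs = "/build/tmp/rootfs"   # hypothetical image rootfs path

    env = dict(os.environ)
    # Resolve user/group names against the image's own database rather
    # than the sysroot's, so ownership inside the image is correct.
    env["PSEUDO_PASSWD"] = rootfs

    # Run the ownership fixup under pseudo with the rootfs passwd/group.
    subprocess.check_call(["pseudo", "chown", "-R", "root:root", rootfs],
                          env=env)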

(From OE-Core rev: c4da803ef78322b758380eb0af0dcb73cae6553c)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-13 12:04:55 -08:00
Richard Purdie
b9d3a5224c conf/machine: Don't poke around providers which aren't machine specific/safe
Machines shouldn't be poking at PREFERRED_PROVIDERS entries which aren't
machine specific or at least machine safe. Kernels are machine specific
and the xserver is selectable, but libx11 and mesa are now really distro
choices, and machine configurations shouldn't be touching them as that
just leads to corruption, conflicts and confusion.

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-07 00:30:15 +00:00
Richard Purdie
dc51e4a982 conf/machine: Don't poke around providers which aren't machine specific/safe
Machines shouldn't be poking at PREFERRED_PROVIDERS entries which aren't
machine specific or at least machine safe. Kernels are machine specific
and the xserver is selectable, but libx11 and mesa are now really distro
choices, and machine configurations shouldn't be touching them as that
just leads to corruption, conflicts and confusion.

(From OE-Core rev: 97a57aca12437c24b628071bb189c9f3b94e27ca)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-07 00:30:04 +00:00
Saul Wold
d22daf5c06 wget: Fix wget alternative path to be /usr/bin not /bin
(From OE-Core rev: 4339459bd38c75250610c4cdb767504e808c5bf0)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-06 16:37:19 +00:00
Saul Wold
bc557dece3 distro_tracking: fix manual entries
(From OE-Core rev: a1784e814a412f209fe36626affdb82e2dfbeffe)

Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-06 16:37:19 +00:00
Koen Kooi
4290ee8526 buildhistory bbclass: avoid absolute paths for files-in-image.txt to avoid diff churn when relocating TMPDIR
(From OE-Core rev: fb642d21111691b9302e16e984aff9d8fb18c431)

Signed-off-by: Koen Kooi <koen@dominion.thruhere.net>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-06 16:37:19 +00:00
Mei Lei
965777b2fb distrodata.bbclass: Fix upstream version check for some recipes.
Some recipes, like rt-tests, clutter-box2d and iproute2, didn't declare an upstream protocol, but distrodata.bbclass used rsync as the default protocol,
which led to an error when checking the upstream version.
Change the default protocol from rsync to git in distrodata.bbclass.
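
The described logic amounts to something like the following Python sketch
(editor's illustration, not the bbclass code; the helper name is made up):

    from urllib.parse import urlparse

    DEFAULT_PROTO = "git"   # previously "rsync", which broke the check

    def upstream_check_proto(src_uri):
        """Protocol to use when polling upstream for new versions."""
        scheme = urlparse(src_uri).scheme
        return scheme or DEFAULT_PROTO

    # A recipe that declares its protocol keeps it; one that doesn't now
    # falls back to git instead of rsync.
    print(upstream_check_proto("git://git.kernel.org/pub/scm/utils/rt-tests/rt-tests.git"))
    print(upstream_check_proto("rt-tests"))   # no scheme declared -> "git"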

(From OE-Core rev: 7f38cbef365c05d75563760f15b10284147c2de3)

Signed-off-by: Mei Lei <lei.mei@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-06 16:37:19 +00:00
Richard Purdie
8d182bb423 libsdl: Disable pulseaudio explicitly
It's not listed in DEPENDS, so it should never have been built. We could
make this a configurable option and I'll take a patch for that, but I like
deterministic builds, so force it off for now.

(From OE-Core rev: 0a7a8597be05c8def8af58eecab49d963dc9d757)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-12-06 16:37:19 +00:00
868 changed files with 26684 additions and 20430 deletions

README

@@ -20,10 +20,6 @@ The Yocto Project has extensive documentation about the system including a
reference manual which can be found at:
http://yoctoproject.org/community/documentation
OpenEmbedded-Core is a layer containing the core metadata for current versions
of OpenEmbedded. It is distro-less (can build a functional image with
DISTRO = "") and contains only emulated machine support.
For information about OpenEmbedded, see the OpenEmbedded website:
For information about OpenEmbedded see their website:
http://www.openembedded.org/


@@ -165,9 +165,6 @@ Default BBFILES are the .bb files in the current directory.""")
parser.add_option("", "--revisions-changed", help = "Set the exit code depending on whether upstream floating revisions have changed or not",
action = "store_true", dest = "revisions_changed", default = False)
parser.add_option("", "--server-only", help = "Run bitbake without UI, the frontend can connect with bitbake server itself",
action = "store_true", dest = "server_only", default = False)
options, args = parser.parse_args(sys.argv)
configuration = BBConfiguration(options)
@@ -189,9 +186,6 @@ Default BBFILES are the .bb files in the current directory.""")
sys.exit("FATAL: Invalid server type '%s' specified.\n"
"Valid interfaces: xmlrpc, process [default], none." % servertype)
if configuration.server_only and configuration.servertype != "xmlrpc":
sys.exit("FATAL: If '--server-only' is defined, we must set the servertype as 'xmlrpc'.\n")
# Save a logfile for cooker into the current working directory. When the
# server is daemonized this logfile will be truncated.
cooker_logfile = os.path.join(os.getcwd(), "cooker.log")
@@ -228,17 +222,14 @@ Default BBFILES are the .bb files in the current directory.""")
logger.removeHandler(handler)
if not configuration.server_only:
# Setup a connection to the server (cooker)
server_connection = server.establishConnection()
# Setup a connection to the server (cooker)
server_connection = server.establishConnection()
try:
return server.launchUI(ui_main, server_connection.connection, server_connection.events)
finally:
bb.event.ui_queue = []
server_connection.terminate()
else:
print("server address: %s, server port: %s" % (server.serverinfo.host, server.serverinfo.port))
try:
return server.launchUI(ui_main, server_connection.connection, server_connection.events)
finally:
bb.event.ui_queue = []
server_connection.terminate()
return 1


@@ -8,7 +8,6 @@ import cmd
import logging
import os
import sys
import fnmatch
bindir = os.path.dirname(__file__)
topdir = os.path.dirname(bindir)
@@ -19,7 +18,6 @@ import bb.cooker
import bb.providers
import bb.utils
from bb.cooker import state
import bb.fetch2
logger = logging.getLogger('BitBake')
@@ -140,10 +138,9 @@ Highest priority recipes are listed with the recipes they overlay as subitems.
def do_flatten(self, args):
"""flattens layer configuration into a separate output directory.
usage: flatten [layer1 layer2 [layer3]...] <outputdir>
usage: flatten <outputdir>
Takes the specified layers (or all layers in the current layer
configuration if none are specified) and builds a "flattened" directory
Takes the current layer configuration and builds a "flattened" directory
containing the contents of all layers, with any overlayed recipes removed
and bbappends appended to the corresponding recipes. Note that some manual
cleanup may still be necessary afterwards, in particular:
@@ -151,63 +148,21 @@ cleanup may still be necessary afterwards, in particular:
* where non-recipe files (such as patches) are overwritten (the flatten
command will show a warning for these)
* where anything beyond the normal layer setup has been added to
layer.conf (only the lowest priority number layer's layer.conf is used)
layer.conf (only the lowest priority layer's layer.conf is used)
* overridden/appended items from bbappends will need to be tidied up
* when the flattened layers do not have the same directory structure (the
flatten command should show a warning when this will cause a problem)
Warning: if you flatten several layers where another layer is intended to
be used "inbetween" them (in layer priority order) such that recipes /
bbappends in the layers interact, and then attempt to use the new output
layer together with that other layer, you may no longer get the same
build results (as the layer priority order has effectively changed).
"""
arglist = args.split()
if len(arglist) < 1:
if len(arglist) != 1:
logger.error('Please specify an output directory')
self.do_help('flatten')
return
if len(arglist) == 2:
logger.error('If you specify layers to flatten you must specify at least two')
self.do_help('flatten')
return
outputdir = arglist[-1]
if os.path.exists(outputdir) and os.listdir(outputdir):
logger.error('Directory %s exists and is non-empty, please clear it out first' % outputdir)
if os.path.exists(arglist[0]) and os.listdir(arglist[0]):
logger.error('Directory %s exists and is non-empty, please clear it out first' % arglist[0])
return
self.check_prepare_cooker()
layers = (self.config_data.getVar('BBLAYERS', True) or "").split()
if len(arglist) > 2:
layernames = arglist[:-1]
found_layernames = []
found_layerdirs = []
for layerdir in layers:
for layername, _, regex, _ in self.cooker.status.bbfile_config_priorities:
if layername in layernames:
if regex.match(os.path.join(layerdir, 'test')):
found_layerdirs.append(layerdir)
found_layernames.append(layername)
break
for layername in layernames:
if not layername in found_layernames:
logger.error('Unable to find layer %s in current configuration, please run "%s show_layers" to list configured layers' % (layername, os.path.basename(sys.argv[0])))
return
layers = found_layerdirs
else:
layernames = []
# Ensure a specified path matches our list of layers
def layer_path_match(path):
for layerdir in layers:
if path.startswith(os.path.join(layerdir, '')):
return layerdir
return None
appended_recipes = []
for layer in layers:
overlayed = []
for f in self.cooker.overlayed.iterkeys():
@@ -225,7 +180,7 @@ build results (as the layer priority order has effectively changed).
ext = os.path.splitext(f1)[1]
if ext != '.bbappend':
fdest = f1full[len(layer):]
fdest = os.path.normpath(os.sep.join([outputdir,fdest]))
fdest = os.path.normpath(os.sep.join([arglist[0],fdest]))
bb.utils.mkdirhier(os.path.dirname(fdest))
if os.path.exists(fdest):
if f1 == 'layer.conf' and root.endswith('/conf'):
@@ -240,60 +195,7 @@ build results (as the layer priority order has effectively changed).
if appends:
logger.plain(' Applying appends to %s' % fdest )
for appendname in appends:
if layer_path_match(appendname):
self.apply_append(appendname, fdest)
appended_recipes.append(f1)
# Take care of when some layers are excluded and yet we have included bbappends for those recipes
for recipename in self.cooker_data.appends.iterkeys():
if recipename not in appended_recipes:
appends = self.cooker_data.appends[recipename]
first_append = None
for appendname in appends:
layer = layer_path_match(appendname)
if layer:
if first_append:
self.apply_append(appendname, first_append)
else:
fdest = appendname[len(layer):]
fdest = os.path.normpath(os.sep.join([outputdir,fdest]))
bb.utils.mkdirhier(os.path.dirname(fdest))
bb.utils.copyfile(appendname, fdest)
first_append = fdest
# Get the regex for the first layer in our list (which is where the conf/layer.conf file will
# have come from)
first_regex = None
layerdir = layers[0]
for layername, pattern, regex, _ in self.cooker.status.bbfile_config_priorities:
if (not layernames) or layername in layernames:
if regex.match(os.path.join(layerdir, 'test')):
first_regex = regex
break
if first_regex:
# Find the BBFILES entries that match (which will have come from this conf/layer.conf file)
bbfiles = str(self.config_data.getVar('BBFILES', True)).split()
bbfiles_layer = []
for item in bbfiles:
if first_regex.match(item):
newpath = os.path.join(outputdir, item[len(layerdir)+1:])
bbfiles_layer.append(newpath)
if bbfiles_layer:
# Check that all important layer files match BBFILES
for root, dirs, files in os.walk(outputdir):
for f1 in files:
ext = os.path.splitext(f1)[1]
if ext in ['.bb', '.bbappend']:
f1full = os.sep.join([root, f1])
entry_found = False
for item in bbfiles_layer:
if fnmatch.fnmatch(f1full, item):
entry_found = True
break
if not entry_found:
logger.warning("File %s does not match the flattened layer's BBFILES setting, you may need to edit conf/layer.conf or move the file elsewhere" % f1full)
self.apply_append(appendname, fdest)
def get_append_layer(self, appendname):
for layer, _, regex, _ in self.cooker.status.bbfile_config_priorities:
@@ -307,8 +209,6 @@ build results (as the layer priority order has effectively changed).
recipefile.write('\n')
recipefile.write('##### bbappended from %s #####\n' % self.get_append_layer(appendname))
recipefile.writelines(appendfile.readlines())
recipefile.close()
appendfile.close()
def do_show_appends(self, args):
"""list bbappend files and recipe files they apply to


@@ -10,39 +10,37 @@ import prserv.serv
__version__="1.0.0"
PRHOST_DEFAULT='0.0.0.0'
PRHOST_DEFAULT=''
PRPORT_DEFAULT=8585
def main():
parser = optparse.OptionParser(
version="Bitbake PR Service Core version %s, %%prog version %s" % (prserv.__version__, __version__),
usage = "%prog < --start | --stop > [options]")
usage = "%prog [options]")
parser.add_option("-f", "--file", help="database filename(default: prserv.sqlite3)", action="store",
dest="dbfile", type="string", default="prserv.sqlite3")
parser.add_option("-l", "--log", help="log filename(default: prserv.log)", action="store",
parser.add_option("-f", "--file", help="database filename(default prserv.db)", action="store",
dest="dbfile", type="string", default="prserv.db")
parser.add_option("-l", "--log", help="log filename(default prserv.log)", action="store",
dest="logfile", type="string", default="prserv.log")
parser.add_option("--loglevel", help="logging level, i.e. CRITICAL, ERROR, WARNING, INFO, DEBUG",
action = "store", type="string", dest="loglevel", default = "INFO")
action = "store", type="string", dest="loglevel", default = "WARNING")
parser.add_option("--start", help="start daemon",
action="store_true", dest="start")
action="store_true", dest="start", default="True")
parser.add_option("--stop", help="stop daemon",
action="store_true", dest="stop")
action="store_false", dest="start")
parser.add_option("--host", help="ip address to bind", action="store",
dest="host", type="string", default=PRHOST_DEFAULT)
parser.add_option("--port", help="port number(default: 8585)", action="store",
parser.add_option("--port", help="port number(default 8585)", action="store",
dest="port", type="int", default=PRPORT_DEFAULT)
options, args = parser.parse_args(sys.argv)
prserv.init_logger(os.path.abspath(options.logfile),options.loglevel)
if options.start:
ret=prserv.serv.start_daemon(options.dbfile, options.host, options.port,os.path.abspath(options.logfile))
elif options.stop:
ret=prserv.serv.stop_daemon(options.host, options.port)
prserv.serv.start_daemon(options)
else:
ret=parser.print_help()
return ret
prserv.serv.stop_daemon()
if __name__ == "__main__":
try:


@@ -52,8 +52,8 @@ syn match bbExport "^export" nextgroup=bbIdentifier skipwhite
syn keyword bbExportFlag export contained nextgroup=bbIdentifier skipwhite
syn match bbIdentifier "[a-zA-Z0-9\-_\.\/\+]\+" display contained
syn match bbVarDeref "${[a-zA-Z0-9\-_\.\/\+]\+}" contained
syn match bbVarEq "\(:=\|+=\|=+\|\.=\|=\.\|?=\|??=\|=\)" contained nextgroup=bbVarValue
syn match bbVarDef "^\(export\s*\)\?\([a-zA-Z0-9\-_\.\/\+]\+\(_[${}a-zA-Z0-9\-_\.\/\+]\+\)\?\)\s*\(:=\|+=\|=+\|\.=\|=\.\|?=\|??=\|=\)\@=" contains=bbExportFlag,bbIdentifier,bbVarDeref nextgroup=bbVarEq
syn match bbVarEq "\(:=\|+=\|=+\|\.=\|=\.\|?=\|=\)" contained nextgroup=bbVarValue
syn match bbVarDef "^\(export\s*\)\?\([a-zA-Z0-9\-_\.\/\+]\+\(_[${}a-zA-Z0-9\-_\.\/\+]\+\)\?\)\s*\(:=\|+=\|=+\|\.=\|=\.\|?=\|=\)\@=" contains=bbExportFlag,bbIdentifier,bbVarDeref nextgroup=bbVarEq
syn match bbVarValue ".*$" contained contains=bbString,bbVarDeref,bbVarPyValue
syn region bbVarPyValue start=+${@+ skip=+\\$+ excludenl end=+}+ contained contains=@python


@@ -27,18 +27,6 @@ import sys
if sys.version_info < (2, 6, 0):
raise RuntimeError("Sorry, python 2.6.0 or later is required for this version of bitbake")
class BBHandledException(Exception):
"""
The big dilemma for generic bitbake code is what information to give the user
when an exception occurs. Any exception inheriting this base exception class
has already provided information to the user via some 'fired' message type such as
an explicitly fired event using bb.fire, or a bb.error message. If bitbake
encounters an exception derived from this class, no backtrace or other information
will be given to the user, its assumed the earlier event provided the relevant information.
"""
pass
import os
import logging


@@ -98,12 +98,9 @@ class Command:
else:
self.finishAsyncCommand("Exited with %s" % arg)
return False
except Exception as exc:
except Exception:
import traceback
if isinstance(exc, bb.BBHandledException):
self.finishAsyncCommand("")
else:
self.finishAsyncCommand(traceback.format_exc())
self.finishAsyncCommand(traceback.format_exc())
return False
def finishAsyncCommand(self, msg=None, code=None):
@@ -160,12 +157,6 @@ class CommandsSync:
value = params[1]
command.cooker.configuration.data.setVar(varname, value)
def initCooker(self, command, params):
"""
Init the cooker to initial state with nothing parsed
"""
command.cooker.initialize()
def resetCooker(self, command, params):
"""
Reset the cooker to its initial state, thus forcing a reparse for
@@ -241,17 +232,6 @@ class CommandsAsync:
command.finishAsyncCommand()
generateTargetsTree.needcache = True
def findCoreBaseFiles(self, command, params):
"""
Find certain files in COREBASE directory. i.e. Layers
"""
subdir = params[0]
filename = params[1]
command.cooker.findCoreBaseFiles(subdir, filename)
command.finishAsyncCommand()
findCoreBaseFiles.needcache = False
def findConfigFiles(self, command, params):
"""
Find config files which provide appropriate values
@@ -261,7 +241,7 @@ class CommandsAsync:
command.cooker.findConfigFiles(varname)
command.finishAsyncCommand()
findConfigFiles.needcache = False
findConfigFiles.needcache = True
def findFilesMatchingInDir(self, command, params):
"""
@@ -273,7 +253,7 @@ class CommandsAsync:
command.cooker.findFilesMatchingInDir(pattern, directory)
command.finishAsyncCommand()
findFilesMatchingInDir.needcache = False
findFilesMatchingInDir.needcache = True
def findConfigFilePath(self, command, params):
"""
@@ -340,13 +320,3 @@ class CommandsAsync:
else:
command.finishAsyncCommand()
compareRevisions.needcache = True
def parseConfigurationFiles(self, command, params):
"""
Parse the configuration files
"""
prefiles = params[0]
postfiles = params[1]
command.cooker.parseConfigurationFiles(prefiles, postfiles)
command.finishAsyncCommand()
parseConfigurationFiles.needcache = False


@@ -34,9 +34,8 @@ from cStringIO import StringIO
from contextlib import closing
from functools import wraps
from collections import defaultdict
import bb, bb.exceptions, bb.command
from bb import utils, data, parse, event, cache, providers, taskdata, runqueue
import prserv.serv
import bb, bb.exceptions
from bb import utils, data, parse, event, cache, providers, taskdata, command, runqueue
logger = logging.getLogger("BitBake")
collectlog = logging.getLogger("BitBake.Collection")
@@ -168,15 +167,6 @@ class BBCooker:
self.parser = None
def initConfigurationData(self):
self.configuration.data = bb.data.init()
if not self.server_registration_cb:
bb.data.setVar("BB_WORKERCONTEXT", "1", self.configuration.data)
filtered_keys = bb.utils.approved_variables()
bb.data.inheritFromOS(self.configuration.data, self.savedenv, filtered_keys)
def loadConfigurationData(self):
self.configuration.data = bb.data.init()
@@ -642,18 +632,6 @@ class BBCooker:
if regex in unmatched:
collectlog.warn("No bb files matched BBFILE_PATTERN_%s '%s'" % (collection, pattern))
def findCoreBaseFiles(self, subdir, configfile):
corebase = self.configuration.data.getVar('COREBASE', True) or ""
paths = []
for root, dirs, files in os.walk(corebase + '/' + subdir):
for d in dirs:
configfilepath = os.path.join(root, d, configfile)
if os.path.exists(configfilepath):
paths.append(os.path.join(root, d))
if paths:
bb.event.fire(bb.event.CoreBaseFilesFound(paths), self.configuration.data)
def findConfigFilePath(self, configfile):
"""
Find the location on disk of configfile and if it exists and was parsed by BitBake
@@ -1110,7 +1088,7 @@ class BBCooker:
return False
if not retval:
bb.event.fire(bb.event.BuildCompleted(buildname, targets, failures), self.configuration.data)
bb.event.fire(bb.event.BuildCompleted(buildname, targets, failures), self.configuration.event_data)
self.command.finishAsyncCommand()
return False
if retval is True:
@@ -1120,7 +1098,7 @@ class BBCooker:
self.buildSetVars()
buildname = self.configuration.data.getVar("BUILDNAME")
bb.event.fire(bb.event.BuildStarted(buildname, targets), self.configuration.data)
bb.event.fire(bb.event.BuildStarted(buildname, targets), self.configuration.event_data)
localdata = data.createCopy(self.configuration.data)
bb.data.update_data(localdata)
@@ -1312,11 +1290,9 @@ class BBCooker:
# Empty the environment. The environment will be populated as
# necessary from the data store.
#bb.utils.empty_environment()
prserv.serv.auto_start(self.configuration.data)
return
def post_serve(self):
prserv.serv.auto_shutdown(self.configuration.data)
bb.event.fire(CookerExit(), self.configuration.event_data)
def shutdown(self):
@@ -1328,10 +1304,6 @@ class BBCooker:
def reparseFiles(self):
return
def initialize(self):
self.state = state.initial
self.initConfigurationData()
def reset(self):
self.state = state.initial
self.loadConfigurationData()


@@ -402,14 +402,6 @@ class FilesMatchingFound(Event):
self._pattern = pattern
self._matches = matches
class CoreBaseFilesFound(Event):
"""
Event when a list of appropriate config files has been generated
"""
def __init__(self, paths):
Event.__init__(self)
self._paths = paths
class ConfigFilesFound(Event):
"""
Event when a list of appropriate config files has been generated


@@ -115,7 +115,7 @@ class Git(FetchMethod):
ud.branches[name] = ud.revisions[name]
ud.revisions[name] = self.latest_revision(ud.url, ud, d, name)
gitsrcname = '%s%s' % (ud.host.replace(':','.'), ud.path.replace('/', '.'))
gitsrcname = '%s%s' % (ud.host, ud.path.replace('/', '.'))
# for rebaseable git repo, it is necessary to keep mirror tar ball
# per revision, so that even the revision disappears from the
# upstream repo in the future, the mirror will remain intact and still


@@ -28,10 +28,10 @@ import bb
logger = logging.getLogger("BitBake.Provider")
class NoProvider(bb.BBHandledException):
class NoProvider(Exception):
"""Exception raised when no provider of a build dependency can be found"""
class NoRProvider(bb.BBHandledException):
class NoRProvider(Exception):
"""Exception raised when no provider of a runtime dependency can be found"""


@@ -1209,14 +1209,10 @@ class RunQueueExecuteTasks(RunQueueExecute):
for task in xrange(self.stats.total):
if task in self.rq.scenequeue_covered:
continue
logger.debug(1, 'Considering %s (%s): %s' % (task, self.rqdata.get_user_idstring(task), str(self.rqdata.runq_revdeps[task])))
if len(self.rqdata.runq_revdeps[task]) > 0 and self.rqdata.runq_revdeps[task].issubset(self.rq.scenequeue_covered):
ok = True
for revdep in self.rqdata.runq_revdeps[task]:
if self.rqdata.runq_fnid[task] != self.rqdata.runq_fnid[revdep]:
logger.debug(1, 'Found "bad" dep %s (%s) for %s (%s)' % (revdep, self.rqdata.get_user_idstring(revdep), task, self.rqdata.get_user_idstring(task)))
ok = False
break
if ok:


@@ -242,9 +242,9 @@ class BitBakeXMLRPCServer(SimpleXMLRPCServer):
return
class BitbakeServerInfo():
def __init__(self, host, port):
self.host = host
self.port = port
def __init__(self, server):
self.host = server.host
self.port = server.port
class BitBakeServerConnection():
def __init__(self, serverinfo):
@@ -278,7 +278,7 @@ class BitBakeServer(object):
return self.server.register_idle_function
def saveConnectionDetails(self):
self.serverinfo = BitbakeServerInfo(self.server.host, self.server.port)
self.serverinfo = BitbakeServerInfo(self.server)
def detach(self, cooker_logfile):
daemonize.createDaemon(self.server.serve_forever, cooker_logfile)

View File

@@ -39,7 +39,6 @@ class HobPrefs(gtk.Dialog):
self.selected_image_types = handler.remove_image_output_type(ot)
self.configurator.setConfVar('IMAGE_FSTYPES', "%s" % " ".join(self.selected_image_types).lstrip(" "))
self.reload_required = True
def sdk_machine_combo_changed_cb(self, combo, handler):
sdk_mach = combo.get_active_text()


@@ -179,10 +179,6 @@ class RunningBuild (gobject.GObject):
# that we need to attach to a task.
self.tasks_to_iter[(package, task)] = i
# If we don't handle these the GUI does not proceed
elif isinstance(event, bb.build.TaskInvalid):
return
elif isinstance(event, bb.build.TaskBase):
current = self.tasks_to_iter[(package, task)]
parent = self.tasks_to_iter[(package, None)]


@@ -105,8 +105,6 @@ def main(server, eventHandler):
cacheprogress = None
shutdown = 0
return_value = 0
errors = 0
warnings = 0
while True:
try:
event = eventHandler.waitEvent(0.25)
@@ -125,15 +123,13 @@ def main(server, eventHandler):
if isinstance(event, logging.LogRecord):
if event.levelno >= format.ERROR:
errors = errors + 1
return_value = 1
if event.levelno >= format.WARNING:
warnings = warnings + 1
# For "normal" logging conditions, don't show note logs from tasks
# but do show them if the user has changed the default log level to
# include verbose/debug messages
#if logger.getEffectiveLevel() > format.VERBOSE:
if event.taskpid != 0 and event.levelno <= format.NOTE:
continue
continue
logger.handle(event)
continue
@@ -212,7 +208,6 @@ def main(server, eventHandler):
continue
if isinstance(event, bb.event.NoProvider):
return_value = 1
errors = errors + 1
if event._runtime:
r = "R"
else:
@@ -272,8 +267,4 @@ def main(server, eventHandler):
server.runCommand(["stateShutdown"])
shutdown = shutdown + 1
pass
if warnings:
print("Summary: There were %s WARNING messages shown.\n" % warnings)
if return_value:
print("Summary: There were %s ERROR messages shown, returning a non-zero exit code.\n" % errors)
return return_value


@@ -7,8 +7,5 @@ def init_logger(logfile, loglevel):
numeric_level = getattr(logging, loglevel.upper(), None)
if not isinstance(numeric_level, int):
raise ValueError('Invalid log level: %s' % loglevel)
FORMAT = '%(asctime)-15s %(message)s'
logging.basicConfig(level=numeric_level, filename=logfile, format=FORMAT)
logging.basicConfig(level=numeric_level, filename=logfile)
class NotFoundError(StandardError):
pass


@@ -1,233 +1,86 @@
import logging
import os.path
import errno
import prserv
import sys
import warnings
import sqlite3
try:
import sqlite3
except ImportError:
from pysqlite2 import dbapi2 as sqlite3
logger = logging.getLogger("BitBake.PRserv")
sqlversion = sqlite3.sqlite_version_info
if sqlversion[0] < 3 or (sqlversion[0] == 3 and sqlversion[1] < 3):
raise Exception("sqlite3 version 3.3.0 or later is required.")
class PRTable():
def __init__(self, conn, table, nohist):
self.conn = conn
self.nohist = nohist
if nohist:
self.table = "%s_nohist" % table
else:
self.table = "%s_hist" % table
class NotFoundError(StandardError):
pass
class PRTable():
def __init__(self,cursor,table):
self.cursor = cursor
self.table = table
#create the table
self._execute("CREATE TABLE IF NOT EXISTS %s \
(version TEXT NOT NULL, \
pkgarch TEXT NOT NULL, \
checksum TEXT NOT NULL, \
value INTEGER, \
PRIMARY KEY (version, pkgarch, checksum));" % self.table)
PRIMARY KEY (version,checksum));"
% table)
def _execute(self, *query):
"""Execute a query, waiting to acquire a lock if necessary"""
count = 0
while True:
try:
return self.conn.execute(*query)
return self.cursor.execute(*query)
except sqlite3.OperationalError as exc:
if 'database is locked' in str(exc) and count < 500:
count = count + 1
continue
raise exc
raise
except sqlite3.IntegrityError as exc:
print "Integrity error %s" % str(exc)
break
def _getValueHist(self, version, pkgarch, checksum):
data=self._execute("SELECT value FROM %s WHERE version=? AND pkgarch=? AND checksum=?;" % self.table,
(version, pkgarch, checksum))
def getValue(self, version, checksum):
data=self._execute("SELECT value FROM %s WHERE version=? AND checksum=?;" % self.table,
(version,checksum))
row=data.fetchone()
if row != None:
return row[0]
else:
#no value found, try to insert
try:
self._execute("BEGIN")
self._execute("INSERT OR ROLLBACK INTO %s VALUES (?, ?, ?, (select ifnull(max(value)+1,0) from %s where version=? AND pkgarch=?));"
self._execute("INSERT INTO %s VALUES (?, ?, (select ifnull(max(value)+1,0) from %s where version=?));"
% (self.table,self.table),
(version,pkgarch, checksum,version, pkgarch))
self.conn.commit()
except sqlite3.IntegrityError as exc:
logger.error(str(exc))
data=self._execute("SELECT value FROM %s WHERE version=? AND pkgarch=? AND checksum=?;" % self.table,
(version, pkgarch, checksum))
(version,checksum,version))
data=self._execute("SELECT value FROM %s WHERE version=? AND checksum=?;" % self.table,
(version,checksum))
row=data.fetchone()
if row != None:
return row[0]
else:
raise prserv.NotFoundError
def _getValueNohist(self, version, pkgarch, checksum):
data=self._execute("SELECT value FROM %s \
WHERE version=? AND pkgarch=? AND checksum=? AND \
value >= (select max(value) from %s where version=? AND pkgarch=?);"
% (self.table, self.table),
(version, pkgarch, checksum, version, pkgarch))
row=data.fetchone()
if row != None:
return row[0]
else:
#no value found, try to insert
try:
self._execute("BEGIN")
self._execute("INSERT OR REPLACE INTO %s VALUES (?, ?, ?, (select ifnull(max(value)+1,0) from %s where version=? AND pkgarch=?));"
% (self.table,self.table),
(version, pkgarch, checksum, version, pkgarch))
self.conn.commit()
except sqlite3.IntegrityError as exc:
logger.error(str(exc))
self.conn.rollback()
data=self._execute("SELECT value FROM %s WHERE version=? AND pkgarch=? AND checksum=?;" % self.table,
(version, pkgarch, checksum))
row=data.fetchone()
if row != None:
return row[0]
else:
raise prserv.NotFoundError
def getValue(self, version, pkgarch, checksum):
if self.nohist:
return self._getValueNohist(version, pkgarch, checksum)
else:
return self._getValueHist(version, pkgarch, checksum)
def _importHist(self, version, pkgarch, checksum, value):
val = None
data = self._execute("SELECT value FROM %s WHERE version=? AND pkgarch=? AND checksum=?;" % self.table,
(version, pkgarch, checksum))
row = data.fetchone()
if row != None:
val=row[0]
else:
#no value found, try to insert
try:
self._execute("BEGIN")
self._execute("INSERT OR ROLLBACK INTO %s VALUES (?, ?, ?, ?);" % (self.table),
(version, pkgarch, checksum, value))
self.conn.commit()
except sqlite3.IntegrityError as exc:
logger.error(str(exc))
data = self._execute("SELECT value FROM %s WHERE version=? AND pkgarch=? AND checksum=?;" % self.table,
(version, pkgarch, checksum))
row = data.fetchone()
if row != None:
val = row[0]
return val
def _importNohist(self, version, pkgarch, checksum, value):
try:
#try to insert
self._execute("BEGIN")
self._execute("INSERT OR ROLLBACK INTO %s VALUES (?, ?, ?, ?);" % (self.table),
(version, pkgarch, checksum,value))
self.conn.commit()
except sqlite3.IntegrityError as exc:
#already have the record, try to update
try:
self._execute("BEGIN")
self._execute("UPDATE %s SET value=? WHERE version=? AND pkgarch=? AND checksum=? AND value<?"
% (self.table),
(value,version,pkgarch,checksum,value))
self.conn.commit()
except sqlite3.IntegrityError as exc:
logger.error(str(exc))
data = self._execute("SELECT value FROM %s WHERE version=? AND pkgarch=? AND checksum=? AND value>=?;" % self.table,
(version,pkgarch,checksum,value))
row=data.fetchone()
if row != None:
return row[0]
else:
return None
def importone(self, version, pkgarch, checksum, value):
if self.nohist:
return self._importNohist(version, pkgarch, checksum, value)
else:
return self._importHist(version, pkgarch, checksum, value)
def export(self, version, pkgarch, checksum, colinfo):
metainfo = {}
#column info
if colinfo:
metainfo['tbl_name'] = self.table
metainfo['core_ver'] = prserv.__version__
metainfo['col_info'] = []
data = self._execute("PRAGMA table_info(%s);" % self.table)
for row in data:
col = {}
col['name'] = row['name']
col['type'] = row['type']
col['notnull'] = row['notnull']
col['dflt_value'] = row['dflt_value']
col['pk'] = row['pk']
metainfo['col_info'].append(col)
#data info
datainfo = []
if self.nohist:
sqlstmt = "SELECT T1.version, T1.pkgarch, T1.checksum, T1.value FROM %s as T1, \
(SELECT version,pkgarch,max(value) as maxvalue FROM %s GROUP BY version,pkgarch) as T2 \
WHERE T1.version=T2.version AND T1.pkgarch=T2.pkgarch AND T1.value=T2.maxvalue " % (self.table, self.table)
else:
sqlstmt = "SELECT * FROM %s as T1 WHERE 1=1 " % self.table
sqlarg = []
where = ""
if version:
where += "AND T1.version=? "
sqlarg.append(str(version))
if pkgarch:
where += "AND T1.pkgarch=? "
sqlarg.append(str(pkgarch))
if checksum:
where += "AND T1.checksum=? "
sqlarg.append(str(checksum))
sqlstmt += where + ";"
if len(sqlarg):
data = self._execute(sqlstmt, tuple(sqlarg))
else:
data = self._execute(sqlstmt)
for row in data:
if row['version']:
col = {}
col['version'] = row['version']
col['pkgarch'] = row['pkgarch']
col['checksum'] = row['checksum']
col['value'] = row['value']
datainfo.append(col)
return (metainfo, datainfo)
raise NotFoundError
class PRData(object):
"""Object representing the PR database"""
def __init__(self, filename, nohist=True):
def __init__(self, filename):
self.filename=os.path.abspath(filename)
self.nohist=nohist
#build directory hierarchy
try:
os.makedirs(os.path.dirname(self.filename))
except OSError as e:
if e.errno != errno.EEXIST:
raise e
self.connection=sqlite3.connect(self.filename, isolation_level="DEFERRED")
self.connection.row_factory=sqlite3.Row
self.connection=sqlite3.connect(self.filename, timeout=5,
isolation_level=None)
self.cursor=self.connection.cursor()
self._tables={}
def __del__(self):
print "PRData: closing DB %s" % self.filename
self.connection.close()
def __getitem__(self,tblname):
@@ -237,11 +90,11 @@ class PRData(object):
if tblname in self._tables:
return self._tables[tblname]
else:
tableobj = self._tables[tblname] = PRTable(self.connection, tblname, self.nohist)
tableobj = self._tables[tblname] = PRTable(self.cursor, tblname)
return tableobj
def __delitem__(self, tblname):
if tblname in self._tables:
del self._tables[tblname]
logger.info("drop table %s" % (tblname))
self.connection.execute("DROP TABLE IF EXISTS %s;" % tblname)
logging.info("drop table %s" % (tblname))
self.cursor.execute("DROP TABLE IF EXISTS %s;" % tblname)


@@ -1,5 +1,5 @@
import os,sys,logging
import signal, time, atexit, threading
import signal,time, atexit
from SimpleXMLRPCServer import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler
import xmlrpclib,sqlite3
@@ -7,8 +7,6 @@ import bb.server.xmlrpc
import prserv
import prserv.db
logger = logging.getLogger("BitBake.PRserv")
if sys.hexversion < 0x020600F0:
print("Sorry, python 2.6 or later is required.")
sys.exit(1)
@@ -23,10 +21,8 @@ class Handler(SimpleXMLRPCRequestHandler):
raise
return value
PIDPREFIX = "/tmp/PRServer_%s_%s.pid"
singleton = None
class PRServer(SimpleXMLRPCServer):
pidfile="/tmp/PRServer.pid"
def __init__(self, dbfile, logfile, interface, daemon=True):
''' constructor '''
SimpleXMLRPCServer.__init__(self, interface,
@@ -35,88 +31,66 @@ class PRServer(SimpleXMLRPCServer):
self.dbfile=dbfile
self.daemon=daemon
self.logfile=logfile
self.working_thread=None
self.host, self.port = self.socket.getsockname()
self.db=prserv.db.PRData(dbfile)
self.table=self.db["PRMAIN"]
self.pidfile=PIDPREFIX % (self.host, self.port)
self.register_function(self.getPR, "getPR")
self.register_function(self.quit, "quit")
self.register_function(self.ping, "ping")
self.register_function(self.export, "export")
self.register_function(self.importone, "importone")
self.register_introspection_functions()
def export(self, version=None, pkgarch=None, checksum=None, colinfo=True):
try:
return self.table.export(version, pkgarch, checksum, colinfo)
except sqlite3.Error as exc:
logger.error(str(exc))
return None
def importone(self, version, pkgarch, checksum, value):
return self.table.importone(version, pkgarch, checksum, value)
def ping(self):
return not self.quit
def getinfo(self):
return (self.host, self.port)
def getPR(self, version, pkgarch, checksum):
def getPR(self, version, checksum):
try:
return self.table.getValue(version, pkgarch, checksum)
return self.table.getValue(version,checksum)
except prserv.NotFoundError:
logger.error("can not find value for (%s, %s)",version, checksum)
logging.error("can not find value for (%s, %s)",version,checksum)
return None
except sqlite3.Error as exc:
logger.error(str(exc))
logging.error(str(exc))
return None
def quit(self):
self.quit=True
return
def work_forever(self,):
def _serve_forever(self):
self.quit = False
self.timeout = 0.5
logger.info("PRServer: started! DBfile: %s, IP: %s, PORT: %s, PID: %s" %
(self.dbfile, self.host, self.port, str(os.getpid())))
while not self.quit:
self.handle_request()
logger.info("PRServer: stopping...")
logging.info("PRServer: stopping...")
self.server_close()
return
def start(self):
if self.daemon is True:
logger.info("PRServer: try to start daemon...")
logging.info("PRServer: starting daemon...")
self.daemonize()
else:
atexit.register(self.delpid)
pid = str(os.getpid())
pf = file(self.pidfile, 'w+')
pf.write("%s\n" % pid)
pf.close()
self.work_forever()
logging.info("PRServer: starting...")
self._serve_forever()
def delpid(self):
os.remove(self.pidfile)
os.remove(PRServer.pidfile)
def daemonize(self):
"""
See Advanced Programming in the UNIX, Sec 13.3
"""
os.umask(0)
try:
pid = os.fork()
if pid > 0:
#parent return instead of exit to give control
return
if pid > 0:
sys.exit(0)
except OSError as e:
raise Exception("%s [%d]" % (e.strerror, e.errno))
sys.stderr.write("1st fork failed: %d %s\n" % (e.errno, e.strerror))
sys.exit(1)
os.setsid()
"""
@@ -128,9 +102,9 @@ class PRServer(SimpleXMLRPCServer):
if pid > 0: #parent
sys.exit(0)
except OSError as e:
raise Exception("%s [%d]" % (e.strerror, e.errno))
sys.stderr.write("2nd fork failed: %d %s\n" % (e.errno, e.strerror))
sys.exit(1)
os.umask(0)
os.chdir("/")
sys.stdout.flush()
@@ -145,44 +119,19 @@ class PRServer(SimpleXMLRPCServer):
# write pidfile
atexit.register(self.delpid)
pid = str(os.getpid())
pf = file(self.pidfile, 'w')
pf = file(PRServer.pidfile, 'w+')
pf.write("%s\n" % pid)
pf.write("%s\n" % self.host)
pf.write("%s\n" % self.port)
pf.close()
self.work_forever()
sys.exit(0)
class PRServSingleton():
def __init__(self, dbfile, logfile, interface):
self.dbfile = dbfile
self.logfile = logfile
self.interface = interface
self.host = None
self.port = None
self.event = threading.Event()
def _work(self):
self.prserv = PRServer(self.dbfile, self.logfile, self.interface, False)
self.host, self.port = self.prserv.getinfo()
self.event.set()
self.prserv.work_forever()
del self.prserv.db
def start(self):
self.working_thread = threading.Thread(target=self._work)
self.working_thread.start()
def getinfo(self):
self.event.wait()
return (self.host, self.port)
self._serve_forever()
class PRServerConnection():
def __init__(self, host, port):
if is_local_special(host, port):
host, port = singleton.getinfo()
self.connection = bb.server.xmlrpc._create_server(host, port)
self.host = host
self.port = port
self.connection = bb.server.xmlrpc._create_server(self.host, self.port)
def terminate(self):
# Don't wait for server indefinitely
@@ -190,25 +139,18 @@ class PRServerConnection():
socket.setdefaulttimeout(2)
try:
self.connection.quit()
except Exception as exc:
sys.stderr.write("%s\n" % str(exc))
except:
pass
def getPR(self, version, pkgarch, checksum):
return self.connection.getPR(version, pkgarch, checksum)
def getPR(self, version, checksum):
return self.connection.getPR(version, checksum)
def ping(self):
return self.connection.ping()
def export(self,version=None, pkgarch=None, checksum=None, colinfo=True):
return self.connection.export(version, pkgarch, checksum, colinfo)
def importone(self, version, pkgarch, checksum, value):
return self.connection.importone(version, pkgarch, checksum, value)
def start_daemon(dbfile, host, port, logfile):
pidfile = PIDPREFIX % (host, port)
def start_daemon(options):
try:
pf = file(pidfile,'r')
pf = file(PRServer.pidfile,'r')
pid = int(pf.readline().strip())
pf.close()
except IOError:
@@ -216,89 +158,41 @@ def start_daemon(dbfile, host, port, logfile):
if pid:
sys.stderr.write("pidfile %s already exist. Daemon already running?\n"
% pidfile)
return 1
% PRServer.pidfile)
sys.exit(1)
server = PRServer(os.path.abspath(dbfile), os.path.abspath(logfile), (host,port))
server = PRServer(options.dbfile, interface=(options.host, options.port),
logfile=os.path.abspath(options.logfile))
server.start()
return 0
def stop_daemon(host, port):
pidfile = PIDPREFIX % (host, port)
def stop_daemon():
try:
pf = file(pidfile,'r')
pf = file(PRServer.pidfile,'r')
pid = int(pf.readline().strip())
host = pf.readline().strip()
port = int(pf.readline().strip())
pf.close()
except IOError:
pid = None
if not pid:
sys.stderr.write("pidfile %s does not exist. Daemon not running?\n"
% pidfile)
% PRServer.pidfile)
sys.exit(1)
try:
PRServerConnection(host, port).terminate()
except:
logger.critical("Stop PRService %s:%d failed" % (host,port))
PRServerConnection(host,port).terminate()
time.sleep(0.5)
try:
if pid:
if os.path.exists(pidfile):
os.remove(pidfile)
while 1:
os.kill(pid,signal.SIGTERM)
time.sleep(0.1)
except OSError as e:
err = str(e)
if err.find("No such process") <= 0:
raise e
return 0
def is_local_special(host, port):
if host.strip().upper() == 'localhost'.upper() and (not port):
return True
else:
return False
def auto_start(d):
global singleton
if d.getVar('USE_PR_SERV', True) == '0':
return True
if is_local_special(d.getVar('PRSERV_HOST', True), int(d.getVar('PRSERV_PORT', True))) and not singleton:
import bb.utils
cachedir = (d.getVar("PERSISTENT_DIR", True) or d.getVar("CACHE", True))
if not cachedir:
logger.critical("Please set the 'PERSISTENT_DIR' or 'CACHE' variable")
except OSError as err:
err = str(err)
if err.find("No such process") > 0:
if os.path.exists(PRServer.pidfile):
os.remove(PRServer.pidfile)
else:
print err
sys.exit(1)
bb.utils.mkdirhier(cachedir)
dbfile = os.path.join(cachedir, "prserv.sqlite3")
logfile = os.path.join(cachedir, "prserv.log")
singleton = PRServSingleton(os.path.abspath(dbfile), os.path.abspath(logfile), ("localhost",0))
singleton.start()
if singleton:
host, port = singleton.getinfo()
else:
host = d.getVar('PRSERV_HOST', True)
port = int(d.getVar('PRSERV_PORT', True))
try:
return PRServerConnection(host,port).ping()
except Exception:
logger.critical("PRservice %s:%d not available" % (host, port))
return False
def auto_shutdown(d=None):
global singleton
if singleton:
host, port = singleton.getinfo()
try:
PRServerConnection(host, port).terminate()
except:
logger.critical("Stop PRService %s:%d failed" % (host,port))
singleton = None
def ping(host, port):
conn=PRServerConnection(host, port)
return conn.ping()


@@ -44,13 +44,13 @@
management techniques.</para></listitem>
<listitem><para>Deliver the most up-to-date kernel possible while still ensuring that
the baseline kernel is the most stable official release.</para></listitem>
<listitem><para>Include major technological features as part of Yocto Project's
upward revision strategy.</para></listitem>
<listitem><para>Include major technological features as part of Yocto Project's up-rev
strategy.</para></listitem>
<listitem><para>Present a kernel Git repository that, similar to the upstream
<filename>kernel.org</filename> tree,
has a clear and continuous history.</para></listitem>
<listitem><para>Deliver a key set of supported kernel types, where each type is tailored
to meet a specific use (e.g. networking, consumer, devices, and so forth).</para></listitem>
to a specific use case (e.g. networking, consumer, devices, and so forth).</para></listitem>
<listitem><para>Employ a Git branching strategy that, from a developer's point of view,
results in a linear path from the baseline <filename>kernel.org</filename>,
through a select group of features and
@@ -78,7 +78,7 @@
</para>
<para>
This balance allows the team to deliver the most up-to-date kernel
as possible, while still ensuring that the team has a stable official release for
as possible, while still ensuring that the team has a stable official release as
the baseline kernel version.
</para>
<para>
@@ -94,8 +94,8 @@
</para>
<para>
Once a Yocto Project kernel is officially released, the Yocto Project team goes into
their next development cycle, or upward revision (uprev) cycle, while still
continuing maintenance on the released kernel.
their next development cycle, or "uprev" cycle, while still continuing maintenance on the
released kernel.
It is important to note that the most sustainable and stable way
to include feature development upstream is through a kernel uprev process.
Back-porting hundreds of individual fixes and minor features from various
@@ -148,8 +148,7 @@
<section id='architecture-overview'>
<title>Overview</title>
<para>
As mentioned earlier, a key goal of the Yocto Project is to present the
developer with
As mentioned earlier, a key goal of Yocto Project is to present the developer with
a kernel that has a clear and continuous history that is visible to the user.
The architecture and mechanisms used achieve that goal in a manner similar to the
upstream <filename>kernel.org</filename>.
@@ -177,7 +176,7 @@
<imagedata fileref="figures/kernel-architecture-overview.png" width="6in" depth="7in" align="center" scale="100" />
</para>
<para>
In the illustration, the "Kernel.org Branch Point"
In the illustration, the "<filename>kernel.org</filename> Branch Point"
marks the specific spot (or release) from
which the Yocto Project kernel is created.
From this point "up" in the tree, features and differences are organized and tagged.
@@ -292,8 +291,8 @@
"<ulink url='http://www.yoctoproject.org/docs/latest/dev-manual/dev-manual.html#git'>Git</ulink>"
section in <ulink url='http://www.yoctoproject.org/docs/latest/dev-manual/dev-manual.html'>The
Yocto Project Development Manual</ulink>.
These referenced sections overview Git and describe a minimal set of
commands that allow you to be functional using Git.
This section overviews Git and describes a minimal set of commands that allow you to be
functional using Git.
<note>
You can use as much, or as little, of what Git has to offer to accomplish what
you need for your project.


@@ -11,7 +11,7 @@
The Yocto Project presents the kernel as a fully patched, history-clean Git
repository.
The Git tree represents the selected features, board support,
and configurations extensively tested by the Yocto Project.
and configurations extensively tested by Yocto Project.
The Yocto Project kernel allows the end user to leverage community
best practices to seamlessly manage the development, build and debug cycles.
</para>
@@ -28,34 +28,27 @@
<listitem><para><emphasis>Using the Kernel:</emphasis> Describes best practices
and "how-to" information
that lets you put the kernel to practical use.
Some examples are how to examine changes in a branch and how to
save kernel modifications.</para></listitem>
Some examples are "How to Build a
Project Specific Tree", "How to Examine Changes in a Branch", and "How to
Save Kernel Modifications."</para></listitem>
</itemizedlist>
</para>
<para>
For more information on the Linux kernel, see the following links:
For more information on the kernel, see the following links:
<itemizedlist>
<listitem><para>The Linux Foundation's guide for kernel development
process - <ulink url='http://ldn.linuxfoundation.org/book/1-a-guide-kernel-development-process'></ulink></para></listitem>
<!-- <listitem><para><ulink url='http://userweb.kernel.org/~akpm/stuff/tpp.txt'></ulink></para></listitem> -->
<listitem><para>A fairly encompassing guide on Linux kernel development -
<ulink url='http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob_plain;f=Documentation/HOWTO;hb=HEAD'></ulink></para></listitem>
<listitem><para><ulink url='http://ldn.linuxfoundation.org/book/1-a-guide-kernel-development-process'></ulink></para></listitem>
<listitem><para><ulink url='http://userweb.kernel.org/~akpm/stuff/tpp.txt'></ulink></para></listitem>
<listitem><para><ulink url='http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob_plain;f=Documentation/HOWTO;hb=HEAD'></ulink></para></listitem>
</itemizedlist>
</para>
<para>
For more discussion on the Yocto Project kernel, you can see these sections
in <ulink url='http://www.yoctoproject.org/docs/latest/dev-manual/dev-manual.html'>The Yocto Project Development Manual</ulink>:
<itemizedlist>
<listitem><para>
"<ulink url='http://www.yoctoproject.org/docs/latest/dev-manual/dev-manual.html#kernel-overview'>Kernel Overview</ulink>"</para></listitem>
<listitem><para>
"<ulink url='http://www.yoctoproject.org/docs/latest/dev-manual/dev-manual.html#kernel-modification-workflow'>Kernel Modification Workflow</ulink>"
</para></listitem>
<listitem><para>
"<ulink url='http://www.yoctoproject.org/docs/latest/dev-manual/dev-manual.html#dev-manual-kernel-appendix'>Kernel Modification Example</ulink>".</para></listitem>
</itemizedlist>
For more discussion on the Yocto Project kernel, you can also see the
"<ulink url='http://www.yoctoproject.org/docs/latest/dev-manual/dev-manual.html#kernel-overview'>Kernel Overview</ulink>",
"<ulink url='http://www.yoctoproject.org/docs/latest/dev-manual/dev-manual.html#kernel-modification-workflow'>Kernel Modification Workflow</ulink>", and
"<ulink url='http://www.yoctoproject.org/docs/latest/dev-manual/dev-manual.html#dev-manual-kernel-appendix'>Kernel Modification Example</ulink>" sections all in
<ulink url='http://www.yoctoproject.org/docs/latest/dev-manual/dev-manual.html'>The Yocto Project Development Manual</ulink>.
</para>
<para>


@@ -10,8 +10,8 @@
<title>Introduction</title>
<para>
This chapter describes how to accomplish tasks involving the kernel's tree structure.
The information is designed to help the developer that wants to modify the Yocto
Project kernel and contribute changes upstream to the Yocto Project.
This information is designed to help the developer that wants to modify the Yocto Project kernel
and contribute changes upstream to the Yocto Project.
The information covers the following:
<itemizedlist>
<listitem><para>Tree construction</para></listitem>
@@ -24,11 +24,10 @@
<section id='tree-construction'>
<title>Tree Construction</title>
<para>
This section describes construction of the Yocto Project kernel repositories
as accomplished by the Yocto Project team to create kernel repositories.
These kernel repositories are found at
<ulink url='http://git.yoctoproject.org/cgit.cgi'>http://git.yoctoproject.org/cgit.cgi</ulink>
and can be shipped as part of a Yocto Project release.
This section describes construction of the Yocto Project kernel repositories as accomplished
by the Yocto Project team to create kernel repositories, which are found at
<ulink url='http://git.yoctoproject.org/cgit.cgi'>http://git.yoctoproject.org/cgit.cgi</ulink>,
that can be shipped as part of a Yocto Project release.
The team creates these repositories by
compiling and executing the set of feature descriptions for every BSP/feature
in the product.
@@ -130,8 +129,7 @@
<ulink url='http://git.yoctoproject.org/cgit.cgi'>http://git.yoctoproject.org/cgit.cgi</ulink>
is the combination of all supported boards and configurations.</para>
<para>The technique the Yocto Project team uses is flexible and allows for seamless
blending of an immutable history with additional patches specific to a
deployment.
blending of an immutable history with additional deployment specific patches.
Any additions to the kernel become an integrated part of the branches.</para>
</note>
</para>
@@ -265,9 +263,9 @@
<para>
Following are a few examples that show how to use Git to examine changes.
Because the Yocto Project Git repository does not break existing Git
Note that because the Yocto Project Git repository does not break existing Git
functionality and because there exist many permutations of these types of
commands, there are many more methods to discover changes.
commands there are many more methods to discover changes.
<note>
Unless you provide a commit range
(&lt;kernel-type&gt;..&lt;bsp&gt;-&lt;kernel-type&gt;), <filename>kernel.org</filename> history


@@ -3,13 +3,13 @@
<chapter id='extendpoky'>
<title>Common Tasks</title>
<title>Extending the Yocto Project</title>
<para>
This chapter describes standard tasks such as adding new
This chapter provides information about how to extend the functionality
already present in the Yocto Project.
The chapter also documents standard tasks such as adding new
software packages, extending or customizing images or porting the Yocto Project to
new hardware (adding a new machine).
The chapter also describes ways to modify package source code, combine multiple
versions of library files into a single image, and handle a package name alias.
Finally, the chapter contains advice about how to make changes to the
Yocto Project to achieve the best results.
</para>
@@ -658,307 +658,6 @@
</section>
</section>
<section id="usingpoky-modifing-packages">
<title>Modifying Package Source Code</title>
<para>
Although the Yocto Project is usually used to build software, you can use it to modify software.
</para>
<para>
During a build, source is available in the
<filename><link linkend='var-WORKDIR'>WORKDIR</link></filename> directory.
The actual location depends on the type of package and the architecture of the target device.
For a standard recipe not related to
<filename><link linkend='var-MACHINE'>MACHINE</link></filename>, the location is
<filename>tmp/work/PACKAGE_ARCH-poky-TARGET_OS/PN-PV-PR/</filename>.
For target device-dependent packages, you should use the <filename>MACHINE</filename>
variable instead of
<filename><link linkend='var-PACKAGE_ARCH'>PACKAGE_ARCH</link></filename>
in the directory name.
</para>
<tip>
Be sure the package recipe sets the
<filename><link linkend='var-S'>S</link></filename> variable to something
other than the standard <filename>WORKDIR/PN-PV/</filename> value.
</tip>
<para>
After building a package, you can modify the package source code without problems.
The easiest way to test your changes is by calling the
<filename>compile</filename> task as shown in the following example:
<literallayout class='monospaced'>
$ bitbake -c compile -f NAME_OF_PACKAGE
</literallayout>
</para>
<para>
The <filename>-f</filename> or <filename>--force</filename>
option forces re-execution of the specified task.
You can call other tasks this way as well.
But note that all the modifications in
<filename><link linkend='var-WORKDIR'>WORKDIR</link></filename>
are gone once you execute <filename>-c clean</filename> for a package.
</para>
</section>
<section id="usingpoky-modifying-packages-quilt">
<title>Modifying Package Source Code with Quilt</title>
<para>
By default Poky uses <ulink url='http://savannah.nongnu.org/projects/quilt'>Quilt</ulink>
to manage patches in the <filename>do_patch</filename> task.
This is a powerful tool that you can use to track all modifications to package sources.
</para>
<para>
Before modifying source code, it is important to notify Quilt so it can track the changes
into the new patch file:
<literallayout class='monospaced'>
$ quilt new NAME-OF-PATCH.patch
</literallayout>
</para>
<para>
After notifying Quilt, add all modified files into that patch:
<literallayout class='monospaced'>
$ quilt add file1 file2 file3
</literallayout>
</para>
<para>
You can now start editing.
Once you are done editing, you need to use Quilt to generate the final patch that
will contain all your modifications.
<literallayout class='monospaced'>
$ quilt refresh
</literallayout>
</para>
<para>
You can find the resulting patch file in the
<filename>patches/</filename> subdirectory of the source
(<filename><link linkend='var-S'>S</link></filename>) directory.
For future builds, you should copy the patch into the Yocto Project metadata and add it into the
<filename><link linkend='var-SRC_URI'>SRC_URI</link></filename> of a recipe.
Here is an example:
<literallayout class='monospaced'>
SRC_URI += "file://NAME-OF-PATCH.patch"
</literallayout>
</para>
<para>
Finally, don't forget to 'bump' the
<filename><link linkend='var-PR'>PR</link></filename> value in the same recipe since
the resulting packages have changed.
</para>
</section>
<section id="building-multiple-architecture-libraries-into-one-image">
<title>Combining Multiple Versions of Library Files into One Image</title>
<para>
The build system offers the ability to build libraries with different
target optimizations or architecture formats and combine these together
into one system image.
You can link different binaries in the image
against the different libraries as needed for specific use cases.
This feature is called "Multilib."
</para>
<para>
An example would be where you have most of a system compiled in 32-bit
mode using 32-bit libraries, but you have something large, like a database
engine, that needs to be a 64-bit application and use 64-bit libraries.
Multilib allows you to get the best of both 32-bit and 64-bit libraries.
</para>
<para>
While the Multilib feature is most commonly used for 32 and 64-bit differences,
the approach the build system uses facilitates different target optimizations.
You could compile some binaries to use one set of libraries and other binaries
to use other different sets of libraries.
The libraries could differ in architecture, compiler options, or other
optimizations.
</para>
<para>
This section overviews the Multilib process only.
For more details on how to implement Multilib, see the
<ulink url='https://wiki.yoctoproject.org/wiki/Multilib'>Multilib</ulink> wiki
page.
</para>
<section id='preparing-to-use-multilib'>
<title>Preparing to use Multilib</title>
<para>
User-specific requirements drive the Multilib feature.
Consequently, there is no single "out-of-the-box" configuration that is likely
to meet your needs.
</para>
<para>
In order to enable Multilib, you first need to ensure your recipe is
extended to support multiple libraries.
Many standard recipes are already extended and support multiple libraries.
You can check in the <filename>meta/conf/multilib.conf</filename>
configuration file in the Yocto Project files directory to see how this is
done using the <filename>BBCLASSEXTEND</filename> variable.
Eventually, all recipes will be covered and this list will be unneeded.
</para>
<para>
For the most part, the Multilib class extension works automatically to
extend the package name from <filename>${PN}</filename> to
<filename>${MLPREFIX}${PN}</filename>, where <filename>MLPREFIX</filename>
is the prefix for the particular multilib variant (e.g. "lib32-" or "lib64-").
Standard variables such as <filename>DEPENDS</filename>,
<filename>RDEPENDS</filename>, <filename>RPROVIDES</filename>,
<filename>RRECOMMENDS</filename>, <filename>PACKAGES</filename>, and
<filename>PACKAGES_DYNAMIC</filename> are automatically extended by the system.
If the recipe contains manually coded names, you can use the
<filename>${MLPREFIX}</filename> variable to ensure those names are extended
correctly.
This automatic extension code resides in <filename>multilib.bbclass</filename>.
</para>
</section>
<section id='using-multilib'>
<title>Using Multilib</title>
<para>
After you have set up the recipes, you need to define the actual
combination of multiple libraries you want to build.
You accomplish this through your <filename>local.conf</filename>
configuration file in the Yocto Project build directory.
An example configuration would be as follows:
<literallayout class='monospaced'>
MACHINE = "qemux86-64"
require conf/multilib.conf
MULTILIBS = "multilib:lib32"
DEFAULTTUNE_virtclass-multilib-lib32 = "x86"
MULTILIB_IMAGE_INSTALL = "lib32-connman"
</literallayout>
This example enables an
additional library named <filename>lib32</filename> alongside the
normal target packages.
When combining these "lib32" alternatives, the example uses "x86" for tuning.
For information on this particular tuning, see
<filename>meta/conf/machine/include/ia32/arch-ia32.inc</filename>.
</para>
<para>
The example then includes <filename>lib32-connman</filename>
in all the images, which illustrates one method of including a
multiple library dependency.
You can use a normal image build to include this dependency,
for example:
<literallayout class='monospaced'>
$ bitbake core-image-sato
</literallayout>
You can also build Multilib packages specifically with a command like this:
<literallayout class='monospaced'>
$ bitbake lib32-connman
</literallayout>
</para>
</section>
<section id='additional-implementation-details'>
<title>Additional Implementation Details</title>
<para>
Different packaging systems have different levels of native Multilib
support.
For the RPM Package Management System, the following implementation details
exist:
<itemizedlist>
<listitem><para>A unique architecture is defined for the Multilib packages,
and a unique deploy folder is created under
<filename>tmp/deploy/rpm</filename> in the Yocto
Project build directory.
For example, consider <filename>lib32</filename> in a
<filename>qemux86-64</filename> image.
The possible architectures in the system are "all", "qemux86_64",
"lib32_qemux86_64", and "lib32_x86".</para></listitem>
<listitem><para>The <filename>${MLPREFIX}</filename> variable is stripped from
<filename>${PN}</filename> during RPM packaging.
The naming for a normal RPM package and a Multilib RPM package in a
<filename>qemux86-64</filename> system resolves to something similar to
<filename>bash-4.1-r2.x86_64.rpm</filename> and
<filename>bash-4.1-r2.lib32_x86.rpm</filename>, respectively.
</para></listitem>
<listitem><para>When installing a Multilib image, the RPM backend first
installs the base image and then installs the Multilib libraries.
</para></listitem>
<listitem><para>The build system relies on RPM to resolve the identical files in the
two (or more) Multilib packages.</para></listitem>
</itemizedlist>
</para>
<para>
For the IPK Package Management System, the following implementation details exist:
<itemizedlist>
<listitem><para>The <filename>${MLPREFIX}</filename> is not stripped from
<filename>${PN}</filename> during IPK packaging.
The naming for a normal IPK package and a Multilib IPK package in a
<filename>qemux86-64</filename> system resolves to something like
<filename>bash_4.1-r2_x86_64.ipk</filename> and
<filename>lib32-bash_4.1-r2_x86.ipk</filename>, respectively.
</para></listitem>
<listitem><para>The IPK deploy folder is not modified with
<filename>${MLPREFIX}</filename> because packages with and without
the Multilib feature can exist in the same folder due to the
<filename>${PN}</filename> differences.</para></listitem>
<listitem><para>IPK defines a sanity check for Multilib installation
using certain rules for file comparison, overrides, and so forth.
</para></listitem>
</itemizedlist>
</para>
</section>
</section>
<section id="usingpoky-configuring-DISTRO_PN_ALIAS">
<title>Handling a Package Name Alias</title>
<para>
Sometimes a package name you are using might exist under an alias or as a similarly named
package in a different distribution.
The Yocto Project implements a <filename>distro_check</filename>
task that automatically connects to major distributions
and checks for these situations.
If the package exists under a different name in a different distribution, you get a
<filename>distro_check</filename> mismatch.
You can resolve this problem by defining a per-distro recipe name alias using the
<filename><link linkend='var-DISTRO_PN_ALIAS'>DISTRO_PN_ALIAS</link></filename> variable.
</para>
<para>
Following is an example that shows how you specify the <filename>DISTRO_PN_ALIAS</filename>
variable:
<literallayout class='monospaced'>
DISTRO_PN_ALIAS_pn-PACKAGENAME = "distro1=package_name_alias1 \
distro2=package_name_alias2 \
distro3=package_name_alias3 \
..."
</literallayout>
</para>
<para>
If you have more than one distribution alias, separate them with a space.
Note that the Yocto Project currently automatically checks the
Fedora, OpenSuSE, Debian, Ubuntu,
and Mandriva distributions for source package recipes without having to specify them
using the <filename>DISTRO_PN_ALIAS</filename> variable.
For example, the following command generates a report that lists the Linux distributions
that include the sources for each of the Yocto Project recipes.
<literallayout class='monospaced'>
$ bitbake world -f -c distro_check
</literallayout>
The results are stored in the <filename>build/tmp/log/distro_check-${DATETIME}.results</filename>
file found in the Yocto Project files area.
</para>
</section>
<section id="usingpoky-changes">
<title>Making and Maintaining Changes</title>
<para>
@@ -1277,6 +976,411 @@
</section>
</section>
<section id="usingpoky-modifing-packages">
<title>Modifying Package Source Code</title>
<para>
Although the Yocto Project is usually used to build software, you can use it to modify software.
</para>
<para>
During a build, source is available in the
<filename><link linkend='var-WORKDIR'>WORKDIR</link></filename> directory.
The actual location depends on the type of package and the architecture of the target device.
For a standard recipe not related to
<filename><link linkend='var-MACHINE'>MACHINE</link></filename>, the location is
<filename>tmp/work/PACKAGE_ARCH-poky-TARGET_OS/PN-PV-PR/</filename>.
For target device-dependent packages, you should use the <filename>MACHINE</filename>
variable instead of
<filename><link linkend='var-PACKAGE_ARCH'>PACKAGE_ARCH</link></filename>
in the directory name.
</para>
<tip>
Check whether the package recipe sets the
<filename><link linkend='var-S'>S</link></filename> variable to something
other than the standard <filename>WORKDIR/PN-PV/</filename> value;
if it does, the source resides under that location instead.
</tip>
<para>
After building a package, you can modify the package source code without problems.
The easiest way to test your changes is by calling the
<filename>compile</filename> task as shown in the following example:
<literallayout class='monospaced'>
$ bitbake -c compile -f NAME_OF_PACKAGE
</literallayout>
</para>
<para>
The <filename>-f</filename> or <filename>--force</filename>
option forces re-execution of the specified task.
You can call other tasks this way as well.
But note that all the modifications in
<filename><link linkend='var-WORKDIR'>WORKDIR</link></filename>
are gone once you execute <filename>-c clean</filename> for a package.
</para>
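<para>
For instance, assuming you had modified the sources of
<filename>matchbox-desktop</filename> (the package name here is only an
illustration), a minimal edit-and-test cycle might look like this:
<literallayout class='monospaced'>
$ bitbake -c compile -f matchbox-desktop
$ bitbake matchbox-desktop
</literallayout>
The second command runs the remaining tasks, such as packaging, so they
pick up the rebuilt output.
</para>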
</section>
<section id="usingpoky-modifying-packages-quilt">
<title>Modifying Package Source Code with Quilt</title>
<para>
By default Poky uses <ulink url='http://savannah.nongnu.org/projects/quilt'>Quilt</ulink>
to manage patches in the <filename>do_patch</filename> task.
This is a powerful tool that you can use to track all modifications to package sources.
</para>
<para>
Before modifying source code, it is important to notify Quilt so it can track the changes
into the new patch file:
<literallayout class='monospaced'>
$ quilt new NAME-OF-PATCH.patch
</literallayout>
</para>
<para>
After notifying Quilt, add all modified files into that patch:
<literallayout class='monospaced'>
$ quilt add file1 file2 file3
</literallayout>
</para>
<para>
You can now start editing.
Once you are done editing, you need to use Quilt to generate the final patch that
will contain all your modifications.
<literallayout class='monospaced'>
$ quilt refresh
</literallayout>
</para>
<para>
You can find the resulting patch file in the
<filename>patches/</filename> subdirectory of the source
(<filename><link linkend='var-S'>S</link></filename>) directory.
For future builds, you should copy the patch into the Yocto Project metadata and add it into the
<filename><link linkend='var-SRC_URI'>SRC_URI</link></filename> of a recipe.
Here is an example:
<literallayout class='monospaced'>
SRC_URI += "file://NAME-OF-PATCH.patch"
</literallayout>
</para>
<para>
Finally, don't forget to 'bump' the
<filename><link linkend='var-PR'>PR</link></filename> value in the same recipe since
the resulting packages have changed.
</para>
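<para>
As a sketch, if the recipe currently contained the hypothetical value
<filename>PR = "r3"</filename>, you would bump it as follows:
<literallayout class='monospaced'>
PR = "r4"
</literallayout>
</para>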
</section>
<section id="building-multiple-architecture-libraries-into-one-image">
<title>Combining Multiple Versions of Library Files into One Image</title>
<para>
The build system offers the ability to build libraries with different
target optimizations or architecture formats and combine these together
into one system image.
You can link different binaries in the image
against the different libraries as needed for specific use cases.
This feature is called "Multilib."
</para>
<para>
An example would be where you have most of a system compiled in 32-bit
mode using 32-bit libraries, but you have something large, like a database
engine, that needs to be a 64-bit application and use 64-bit libraries.
Multilib allows you to get the best of both 32-bit and 64-bit libraries.
</para>
<para>
While the Multilib feature is most commonly used for 32 and 64-bit differences,
the approach the build system uses facilitates different target optimizations.
You could compile some binaries to use one set of libraries and other binaries
to use other different sets of libraries.
The libraries could differ in architecture, compiler options, or other
optimizations.
</para>
<para>
This section overviews the Multilib process only.
For more details on how to implement Multilib, see the
<ulink url='https://wiki.yoctoproject.org/wiki/Multilib'>Multilib</ulink> wiki
page.
</para>
<section id='preparing-to-use-multilib'>
<title>Preparing to use Multilib</title>
<para>
User-specific requirements drive the Multilib feature.
Consequently, there is no single "out-of-the-box" configuration that is likely
to meet your needs.
</para>
<para>
In order to enable Multilib, you first need to ensure your recipe is
extended to support multiple libraries.
Many standard recipes are already extended and support multiple libraries.
You can check in the <filename>meta/conf/multilib.conf</filename>
configuration file in the Yocto Project files directory to see how this is
done using the <filename>BBCLASSEXTEND</filename> variable.
Eventually, all recipes will be covered and this list will be unneeded.
</para>
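<para>
As a hypothetical sketch of the pattern used in that file, extending a
single recipe for a "lib32" variant amounts to an entry similar to the
following (the recipe name is illustrative; see the actual file for the
real entries):
<literallayout class='monospaced'>
BBCLASSEXTEND_append_pn-glib-2.0 = " multilib:lib32"
</literallayout>
</para>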
<para>
For the most part, the Multilib class extension works automatically to
extend the package name from <filename>${PN}</filename> to
<filename>${MLPREFIX}${PN}</filename>, where <filename>MLPREFIX</filename>
is the prefix for the particular multilib variant (e.g. "lib32-" or "lib64-").
Standard variables such as <filename>DEPENDS</filename>,
<filename>RDEPENDS</filename>, <filename>RPROVIDES</filename>,
<filename>RRECOMMENDS</filename>, <filename>PACKAGES</filename>, and
<filename>PACKAGES_DYNAMIC</filename> are automatically extended by the system.
If the recipe contains manually coded names, you can use the
<filename>${MLPREFIX}</filename> variable to ensure those names are extended
correctly.
This automatic extension code resides in <filename>multilib.bbclass</filename>.
</para>
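<para>
For example, a hand-written runtime dependency could be prefixed as
follows so that the "lib32" variant of the recipe pulls in the "lib32"
variant of the dependency (<filename>libexample</filename> is a
placeholder for a real package name):
<literallayout class='monospaced'>
RDEPENDS_${PN} += "${MLPREFIX}libexample"
</literallayout>
</para>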
</section>
<section id='using-multilib'>
<title>Using Multilib</title>
<para>
After you have set up the recipes, you need to define the actual
combination of multiple libraries you want to build.
You accomplish this through your <filename>local.conf</filename>
configuration file in the Yocto Project build directory.
An example configuration would be as follows:
<literallayout class='monospaced'>
MACHINE = "qemux86-64"
require conf/multilib.conf
MULTILIBS = "multilib:lib32"
DEFAULTTUNE_virtclass-multilib-lib32 = "x86"
MULTILIB_IMAGE_INSTALL = "lib32-connman"
</literallayout>
This example enables an
additional library named <filename>lib32</filename> alongside the
normal target packages.
When combining these "lib32" alternatives, the example uses "x86" for tuning.
For information on this particular tuning, see
<filename>meta/conf/machine/include/ia32/arch-ia32.inc</filename>.
</para>
<para>
The example then includes <filename>lib32-connman</filename>
in all the images, which illustrates one method of including a
multiple library dependency.
You can use a normal image build to include this dependency,
for example:
<literallayout class='monospaced'>
$ bitbake core-image-sato
</literallayout>
You can also build Multilib packages specifically with a command like this:
<literallayout class='monospaced'>
$ bitbake lib32-connman
</literallayout>
</para>
</section>
<section id='additional-implementation-details'>
<title>Additional Implementation Details</title>
<para>
Different packaging systems have different levels of native Multilib
support.
For the RPM Package Management System, the following implementation details
exist:
<itemizedlist>
<listitem><para>A unique architecture is defined for the Multilib packages,
and a unique deploy folder is created under
<filename>tmp/deploy/rpm</filename> in the Yocto
Project build directory.
For example, consider <filename>lib32</filename> in a
<filename>qemux86-64</filename> image.
The possible architectures in the system are "all", "qemux86_64",
"lib32_qemux86_64", and "lib32_x86".</para></listitem>
<listitem><para>The <filename>${MLPREFIX}</filename> variable is stripped from
<filename>${PN}</filename> during RPM packaging.
The naming for a normal RPM package and a Multilib RPM package in a
<filename>qemux86-64</filename> system resolves to something similar to
<filename>bash-4.1-r2.x86_64.rpm</filename> and
<filename>bash-4.1-r2.lib32_x86.rpm</filename>, respectively.
</para></listitem>
<listitem><para>When installing a Multilib image, the RPM backend first
installs the base image and then installs the Multilib libraries.
</para></listitem>
<listitem><para>The build system relies on RPM to resolve the identical files in the
two (or more) Multilib packages.</para></listitem>
</itemizedlist>
</para>
<para>
For the IPK Package Management System, the following implementation details exist:
<itemizedlist>
<listitem><para>The <filename>${MLPREFIX}</filename> is not stripped from
<filename>${PN}</filename> during IPK packaging.
The naming for a normal IPK package and a Multilib IPK package in a
<filename>qemux86-64</filename> system resolves to something like
<filename>bash_4.1-r2_x86_64.ipk</filename> and
<filename>lib32-bash_4.1-r2_x86.ipk</filename>, respectively.
</para></listitem>
<listitem><para>The IPK deploy folder is not modified with
<filename>${MLPREFIX}</filename> because packages with and without
the Multilib feature can exist in the same folder due to the
<filename>${PN}</filename> differences.</para></listitem>
<listitem><para>IPK defines a sanity check for Multilib installation
using certain rules for file comparison, overrides, and so forth.
</para></listitem>
</itemizedlist>
</para>
</section>
</section>
<section id="usingpoky-configuring-LIC_FILES_CHKSUM">
<title>Tracking License Changes</title>
<para>
The license of an upstream project might change in the future. In order to prevent these
changes from going unnoticed, the Yocto Project provides a
<filename><link linkend='var-LIC_FILES_CHKSUM'>LIC_FILES_CHKSUM</link></filename>
variable to track changes to the license text. The checksums are validated at the end of the
configure step, and if the checksums do not match, the build will fail.
</para>
<section id="usingpoky-specifying-LIC_FILES_CHKSUM">
<title>Specifying the <filename>LIC_FILES_CHKSUM</filename> Variable</title>
<para>
The <filename>LIC_FILES_CHKSUM</filename>
variable contains checksums of the license text in the source code for the recipe.
Following is an example of how to specify <filename>LIC_FILES_CHKSUM</filename>:
<literallayout class='monospaced'>
LIC_FILES_CHKSUM = "file://COPYING;md5=xxxx \
file://licfile1.txt;beginline=5;endline=29;md5=yyyy \
file://licfile2.txt;endline=50;md5=zzzz \
..."
</literallayout>
</para>
<para>
The Yocto Project uses the
<filename><link linkend='var-S'>S</link></filename> variable as the
default directory used when searching files listed in
<filename>LIC_FILES_CHKSUM</filename>.
The previous example employs the default directory.
</para>
<para>
You can also use relative paths as shown in the following example:
<literallayout class='monospaced'>
LIC_FILES_CHKSUM = "file://src/ls.c;startline=5;endline=16;\
md5=bb14ed3c4cda583abc85401304b5cd4e"
LIC_FILES_CHKSUM = "file://../license.html;md5=5c94767cedb5d6987c902ac850ded2c6"
</literallayout>
</para>
<para>
In this example, the first line locates a file in
<filename><link linkend='var-S'>S</link>/src/ls.c</filename>.
The second line refers to a file in
<filename><link linkend='var-WORKDIR'>WORKDIR</link></filename>, which is the parent
of <filename>S</filename>.
</para>
<para>
Note that this variable is mandatory for all recipes, unless the
<filename>LICENSE</filename> variable is set to "CLOSED".
</para>
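<para>
In other words, a recipe for code whose license text cannot be tracked can
simply state the following and omit <filename>LIC_FILES_CHKSUM</filename>
entirely:
<literallayout class='monospaced'>
LICENSE = "CLOSED"
</literallayout>
</para>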
</section>
<section id="usingpoky-LIC_FILES_CHKSUM-explanation-of-syntax">
<title>Explanation of Syntax</title>
<para>
As mentioned in the previous section, the
<filename>LIC_FILES_CHKSUM</filename> variable lists all the
important files that contain the license text for the source code.
It is possible to specify a checksum for an entire file, or a specific section of a
file (specified by beginning and ending line numbers with the "beginline" and "endline"
parameters, respectively).
The latter is useful for source files with a license notice header,
README documents, and so forth.
If you do not use the "beginline" parameter, then it is assumed that the text begins on the
first line of the file.
Similarly, if you do not use the "endline" parameter, it is assumed that the license text
ends with the last line of the file.
</para>
<para>
The "md5" parameter stores the md5 checksum of the license text.
If the license text changes in any way as compared to this parameter
then a mismatch occurs.
This mismatch triggers a build failure and notifies the developer.
Notification allows the developer to review and address the license text changes.
Also note that if a mismatch occurs during the build, the correct md5
checksum is placed in the build log and can be easily copied to the recipe.
</para>
<para>
There is no limit to how many files you can specify using the
<filename>LIC_FILES_CHKSUM</filename> variable.
Generally, however, every project requires a few specifications for license tracking.
Many projects have a "COPYING" file that stores the license information for all the source
code files.
This practice allows you to just track the "COPYING" file as long as it is kept up to date.
</para>
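<para>
In that common case, a single entry pointing at the top-level license file
suffices (the checksum value here is a placeholder):
<literallayout class='monospaced'>
LIC_FILES_CHKSUM = "file://COPYING;md5=xxxx"
</literallayout>
</para>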
<tip>
If you specify an empty or invalid "md5" parameter, BitBake returns an md5 mismatch
error and displays the correct "md5" parameter value during the build.
The correct parameter is also captured in the build log.
</tip>
<tip>
If the whole file contains only license text, you do not need to use the "beginline" and
"endline" parameters.
</tip>
</section>
</section>
<section id="usingpoky-configuring-DISTRO_PN_ALIAS">
<title>Handling a Package Name Alias</title>
<para>
Sometimes a package name you are using might exist under an alias or as a similarly named
package in a different distribution.
The Yocto Project implements a <filename>distro_check</filename>
task that automatically connects to major distributions
and checks for these situations.
If the package exists under a different name in a different distribution, you get a
<filename>distro_check</filename> mismatch.
You can resolve this problem by defining a per-distro recipe name alias using the
<filename><link linkend='var-DISTRO_PN_ALIAS'>DISTRO_PN_ALIAS</link></filename> variable.
</para>
<para>
Following is an example that shows how you specify the <filename>DISTRO_PN_ALIAS</filename>
variable:
<literallayout class='monospaced'>
DISTRO_PN_ALIAS_pn-PACKAGENAME = "distro1=package_name_alias1 \
distro2=package_name_alias2 \
distro3=package_name_alias3 \
..."
</literallayout>
</para>
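<para>
As a hypothetical illustration, a recipe named <filename>glib-2.0</filename>
could be mapped to the differently named packages of two distributions
like this:
<literallayout class='monospaced'>
DISTRO_PN_ALIAS_pn-glib-2.0 = "Fedora=glib2 Debian=libglib2.0-0"
</literallayout>
</para>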
<para>
If you have more than one distribution alias, separate them with a space.
Note that the Yocto Project currently automatically checks the
Fedora, OpenSuSE, Debian, Ubuntu,
and Mandriva distributions for source package recipes without having to specify them
using the <filename>DISTRO_PN_ALIAS</filename> variable.
For example, the following command generates a report that lists the Linux distributions
that include the sources for each of the Yocto Project recipes.
<literallayout class='monospaced'>
$ bitbake world -f -c distro_check
</literallayout>
The results are stored in the <filename>build/tmp/log/distro_check-${DATETIME}.results</filename>
file found in the Yocto Project files area.
</para>
</section>
</chapter>
@@ -17,9 +17,6 @@
by reading the
<ulink url='http://www.yoctoproject.org/docs/latest/yocto-project-qs/yocto-project-qs.html'>
Yocto Project Quick Start</ulink>.
For task-based information using the Yocto Project, see
<ulink url='http://www.yoctoproject.org/docs/latest/dev-manual/dev-manual.html'>
The Yocto Project Development Manual</ulink>.
You can also find lots of information on the Yocto Project on the
<ulink url="http://www.yoctoproject.org">Yocto Project website</ulink>.
</para>
@@ -39,11 +36,6 @@
<link linkend='extendpoky'>Extending the Yocto Project</link>:</emphasis> This chapter
provides information about how to extend and customize the Yocto Project
along with advice on how to manage these changes.</para></listitem>
<listitem><para><emphasis>
<link linkend='technical-details'>Technical Details</link>:</emphasis>
This chapter describes fundamental Yocto Project components as well as an explanation
behind how the Yocto Project uses shared state (sstate) cache to speed build time.
</para></listitem>
<listitem><para><emphasis>
<link linkend='bsp'>Board Support Packages (BSP) - Developer's Guide</link>:</emphasis>
This chapter describes the example filesystem layout for BSP development and
@@ -92,8 +92,6 @@
<xi:include href="extendpoky.xml"/>
<xi:include href="technical-details.xml"/>
<xi:include href="../bsp-guide/bsp.xml"/>
<xi:include href="development.xml"/>

View File

@@ -260,51 +260,9 @@
<para>
Once all the tasks have been completed BitBake exits.
</para>
<para>
When running a task, BitBake tightly controls the execution environment
of the build tasks to make sure unwanted contamination from the build machine
cannot influence the build.
Consequently, if you do want something to get passed into the build
task's environment, you must take a few steps:
<orderedlist>
<listitem><para>Tell BitBake to load what you want from the environment
into the data store.
You can do so through the <filename>BB_ENV_EXTRAWHITE</filename>
variable, which extends the default <filename>BB_ENV_WHITELIST</filename>.
For example, assume you want to prevent the build system from
accessing your <filename>$HOME/.ccache</filename> directory by pointing
<filename>CCACHE_DIR</filename> somewhere else.
The following command tells BitBake to load
<filename>CCACHE_DIR</filename> from the environment into the data
store:
<literallayout class='monospaced'>
export BB_ENV_EXTRAWHITE="$BB_ENV_EXTRAWHITE CCACHE_DIR"
</literallayout></para></listitem>
<listitem><para>Tell BitBake to export what you have loaded into the
environment store to the task environment of every running task.
Loading something from the environment into the data store
(previous step) only makes it available in the data store.
To export it to the task environment of every running task,
use a command similar to the following in your
<filename>local.conf</filename> or distro configuration file:
<literallayout class='monospaced'>
export CCACHE_DIR
</literallayout></para></listitem>
</orderedlist>
</para>
<note>
A side effect of the previous steps is that BitBake records the variable
as a dependency of the build process in things like the shared state
checksums.
If doing so results in unnecessary rebuilds of tasks, you can whitelist the
variable so that the shared state code ignores the dependency when it creates
checksums.
For information on this process, see the <filename>BB_HASHBASE_WHITELIST</filename>
example in <xref linkend='checksums'>Checksums (Signatures)</xref>.
</note>
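<para>
As a minimal sketch of that whitelisting, assuming you had passed
<filename>CCACHE_DIR</filename> into the build as described above and
wanted the shared state code to ignore it, you could add the following to
your <filename>local.conf</filename>:
<literallayout class='monospaced'>
BB_HASHBASE_WHITELIST_append = " CCACHE_DIR"
</literallayout>
</para>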
</section>
<section id='ref-bitbake-commandline'>
<title>BitBake Command Line</title>
@@ -391,87 +391,6 @@
for common problems that show up during runtime.
Distribution policy usually dictates whether to include this class, as the Yocto Project does.
</para>
<para>
You can configure the sanity checks so that specific test failures either raise a warning or
an error message.
Typically, failures for new tests generate a warning.
Subsequent failures for the same test would then generate an error message
once the metadata is in a known and good condition.
You use the <filename>WARN_QA</filename> variable to specify tests for which you
want to generate a warning message on failure.
You use the <filename>ERROR_QA</filename> variable to specify tests for which you
want to generate an error message on failure.
</para>
<para>
The following list shows the tests you can list with the <filename>WARN_QA</filename>
and <filename>ERROR_QA</filename> variables:
<itemizedlist>
<listitem><para><emphasis><filename>ldflags:</filename></emphasis>
Ensures that the binaries were linked with the
<filename>LDFLAGS</filename> options provided by the build system.
If this test fails, check that the <filename>LDFLAGS</filename> variable
is being passed to the linker command.</para></listitem>
<listitem><para><emphasis><filename>useless-rpaths:</filename></emphasis>
Checks for dynamic library load paths (rpaths) in the binaries that
by default on a standard system are searched by the linker (e.g.
<filename>/lib</filename> and <filename>/usr/lib</filename>).
While these paths will not cause any breakage, they do waste space and
are unnecessary.</para></listitem>
<listitem><para><emphasis><filename>rpaths:</filename></emphasis>
Checks for rpaths in the binaries that contain build system paths such
as <filename>TMPDIR</filename>.
If this test fails, bad <filename>-rpath</filename> options are being
passed to the linker commands and your binaries have potential security
issues.</para></listitem>
<listitem><para><emphasis><filename>dev-so:</filename></emphasis>
Checks that the <filename>.so</filename> symbolic links are in the
<filename>-dev</filename> package and not in any of the other packages.
In general, these symlinks are only useful for development purposes.
Thus, the <filename>-dev</filename> package is the correct location for
them.
Some very rare cases do exist for dynamically loaded modules where
these symlinks are needed instead in the main package.
</para></listitem>
<listitem><para><emphasis><filename>debug-files:</filename></emphasis>
Checks for <filename>.debug</filename> directories in anything but the
<filename>-dbg</filename> package.
The debug files should all be in the <filename>-dbg</filename> package.
Thus, anything packaged elsewhere is incorrect packaging.</para></listitem>
<listitem><para><emphasis><filename>arch:</filename></emphasis>
Checks the Executable and Linkable Format (ELF) type, bit size, and endianness
of any binaries to ensure they match the target architecture.
This test fails if any binary does not match the target since that would
indicate an incompatibility.
Sometimes software, like bootloaders, might need to bypass this check.
</para></listitem>
<listitem><para><emphasis><filename>debug-deps:</filename></emphasis>
Checks that <filename>-dbg</filename> packages only depend on other
<filename>-dbg</filename> packages and not on any other types of packages,
which would cause a packaging bug.</para></listitem>
<listitem><para><emphasis><filename>dev-deps:</filename></emphasis>
Checks that <filename>-dev</filename> packages only depend on other
<filename>-dev</filename> packages and not on any other types of packages,
which would be a packaging bug.</para></listitem>
<listitem><para><emphasis><filename>pkgconfig:</filename></emphasis>
Checks <filename>.pc</filename> files for any
<filename>TMPDIR/WORKDIR</filename> paths.
Any <filename>.pc</filename> file containing these paths is incorrect
since <filename>pkg-config</filename> itself adds the correct sysroot prefix
when the files are accessed.</para></listitem>
<listitem><para><emphasis><filename>la:</filename></emphasis>
Checks <filename>.la</filename> files for any <filename>TMPDIR</filename>
paths.
Any <filename>.la</filename> file containing these paths is incorrect since
<filename>libtool</filename> adds the correct sysroot prefix when using the
files automatically itself.</para></listitem>
<listitem><para><emphasis><filename>desktop:</filename></emphasis>
Runs the <filename>desktop-file-validate</filename> program against any
<filename>.desktop</filename> files to validate their contents against
the specification for <filename>.desktop</filename> files.</para></listitem>
</itemizedlist>
</para>
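<para>
As one hypothetical policy, a distribution could leave a couple of the
checks as warnings while promoting the rest to hard errors:
<literallayout class='monospaced'>
WARN_QA = "useless-rpaths debug-files"
ERROR_QA = "ldflags rpaths dev-so arch debug-deps dev-deps pkgconfig la desktop"
</literallayout>
</para>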
</section>
<section id='ref-classes-siteinfo'>
@@ -510,87 +510,6 @@
</glossdef>
</glossentry>
<glossentry id='var-IMAGE_OVERHEAD_FACTOR'><glossterm>IMAGE_OVERHEAD_FACTOR</glossterm>
<glossdef>
<para>
Defines a multiplier that the build system might apply to the initial image
size to create free disk space in the image as overhead.
By default, the build process uses a multiplier of 1.3 for this variable.
This default value results in 30% free disk space added to the image when this
method is used to determine the final generated image size.
See <filename><link linkend='var-IMAGE_ROOTFS_SIZE'>IMAGE_ROOTFS_SIZE</link></filename>
for information on how the build system determines the overall image size.
</para>
<para>
The default 30% free disk space typically gives the image enough room to boot
and allows for basic post-installation operations while still leaving a small
amount of free disk space.
If 30% free space is inadequate, you can increase the default value.
For example, the following setting gives you 50% free space added to the image:
<literallayout class='monospaced'>
IMAGE_OVERHEAD_FACTOR = "1.5"
</literallayout>
</para>
<para>
Alternatively, you can ensure a specific amount of free disk space is added
to the image by using the
<filename><link linkend='var-IMAGE_ROOTFS_EXTRA_SPACE'>IMAGE_ROOTFS_EXTRA_SPACE</link></filename>
variable.
</para>
</glossdef>
</glossentry>
<glossentry id='var-IMAGE_ROOTFS_EXTRA_SPACE'><glossterm>IMAGE_ROOTFS_EXTRA_SPACE</glossterm>
<glossdef>
<para>
Defines additional free disk space created in the image in Kbytes.
By default, this variable is set to "0".
This free disk space is added to the image after the build system determines
the image size as described in
<filename><link linkend='var-IMAGE_ROOTFS_SIZE'>IMAGE_ROOTFS_SIZE</link></filename>.
</para>
<para>
This variable is particularly useful when you want to ensure that a
specific amount of free disk space is available on a device after an image
is installed and running.
For example, to be sure 5 Gbytes of free disk space is available, set the
variable as follows:
<literallayout class='monospaced'>
IMAGE_ROOTFS_EXTRA_SPACE = "5242880"
</literallayout>
</para>
</glossdef>
</glossentry>
<glossentry id='var-IMAGE_ROOTFS_SIZE'><glossterm>IMAGE_ROOTFS_SIZE</glossterm>
<glossdef>
<para>
Defines the size in Kbytes for the generated image.
The Yocto Project build system determines the final size for the generated
image using an algorithm that takes into account the initial disk space used
for the generated image, a requested size for the image, and requested
additional free disk space to be added to the image.
Programmatically, the build system determines the final size of the
generated image as follows:
<literallayout class='monospaced'>
if (du * overhead) &lt; IMAGE_ROOTFS_SIZE:
    IMAGE_ROOTFS_SIZE = IMAGE_ROOTFS_SIZE + xspace
else:
    IMAGE_ROOTFS_SIZE = (du * overhead) + xspace
</literallayout>
In the above example, <filename>overhead</filename> is defined by the
<filename><link linkend='var-IMAGE_OVERHEAD_FACTOR'>IMAGE_OVERHEAD_FACTOR</link></filename>
variable, <filename>xspace</filename> is defined by the
<filename><link linkend='var-IMAGE_ROOTFS_EXTRA_SPACE'>IMAGE_ROOTFS_EXTRA_SPACE</link></filename>
variable, and <filename>du</filename> is the result of running the disk usage command
on the initially generated image.
</para>
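<para>
As a worked sketch with hypothetical numbers: if <filename>du</filename>
reports 4000000 Kbytes, <filename>overhead</filename> is the default 1.3,
<filename>xspace</filename> is 0, and the requested size is smaller than
5200000 Kbytes, the final image size becomes
(4000000 * 1.3) + 0 = 5200000 Kbytes.
</para>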
</glossdef>
</glossentry>
<glossentry id='var-INC_PR'><glossterm>INC_PR</glossterm>
<glossdef>
<para>Defines the Package revision.
@@ -758,6 +677,12 @@
</glossdef>
</glossentry>
<glossentry id='var-LICENSE'><glossterm>LICENSE</glossterm>
<glossdef>
<para>The list of package source licenses.</para>
</glossdef>
</glossentry>
<glossentry id='var-LIC_FILES_CHKSUM'><glossterm>LIC_FILES_CHKSUM</glossterm>
<glossdef>
<para>Checksums of the license text in the recipe source code.</para>
@@ -775,25 +700,6 @@
</glossdef>
</glossentry>
<glossentry id='var-LICENSE'><glossterm>LICENSE</glossterm>
<glossdef>
<para>The list of package source licenses.</para>
</glossdef>
</glossentry>
<glossentry id='var-LICENSE_DIR'><glossterm>LICENSE_DIR</glossterm>
<glossdef>
<para>Path to additional licenses used during the build.
By default, the Yocto Project uses <filename>COMMON_LICENSE_DIR</filename>
to define the directory that holds common license text used during the build.
The <filename>LICENSE_DIR</filename> variable allows you to extend that
location to other areas that have additional licenses:
<literallayout class='monospaced'>
LICENSE_DIR += "/path/to/additional/common/licenses"
</literallayout></para>
</glossdef>
</glossentry>
</glossdiv>
<glossdiv id='var-glossary-m'><title>M</title>
@@ -1,733 +0,0 @@
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<chapter id='technical-details'>
<title>Technical Details</title>
<para>
This chapter provides technical details for various parts of the Yocto Project.
Currently, topics include Yocto Project components and shared state (sstate) cache.
</para>
<section id='usingpoky-components'>
<title>Yocto Project Components</title>
<para>
The BitBake task executor together with various types of configuration files form the
Yocto Project core.
This section overviews the BitBake task executor and the
configuration files by describing what they are used for and how they interact.
</para>
<para>
BitBake handles the parsing and execution of the data files.
The data itself is of various types:
<itemizedlist>
<listitem><para><emphasis>Recipes:</emphasis> Provides details about particular
pieces of software.</para></listitem>
<listitem><para><emphasis>Class Data:</emphasis> An abstraction of common build
information (e.g. how to build a Linux kernel).</para></listitem>
<listitem><para><emphasis>Configuration Data:</emphasis> Defines machine-specific settings,
policy decisions, etc.
Configuration data acts as the glue to bind everything together.</para></listitem>
</itemizedlist>
For more information on data, see the
<ulink url='http://www.yoctoproject.org/docs/latest/dev-manual/dev-manual.html#yocto-project-terms'>
Yocto Project Terms</ulink> section in
<ulink url='http://www.yoctoproject.org/docs/latest/dev-manual/dev-manual.html'>
The Yocto Project Development Manual</ulink>.
</para>
<para>
BitBake knows how to combine multiple data sources together and refers to each data source
as a "<link linkend='usingpoky-changes-layers'>layer</link>".
</para>
<para>
Following are some brief details on these core components.
For more detailed information on these components see the
<link linkend='ref-structure'>'Reference: Directory Structure'</link>
appendix.
</para>
<section id='usingpoky-components-bitbake'>
<title>BitBake</title>
<para>
BitBake is the tool at the heart of the Yocto Project and is responsible
for parsing the metadata, generating a list of tasks from it,
and then executing those tasks.
To see a list of the options BitBake supports, use the following help command:
<literallayout class='monospaced'>
$ bitbake --help
</literallayout>
</para>
<para>
The most common usage for BitBake is <filename>bitbake &lt;packagename&gt;</filename>, where
<filename>packagename</filename> is the name of the package you want to build
(referred to as the "target" in this manual).
The target often equates to the first part of a <filename>.bb</filename> filename.
So, to run the <filename>matchbox-desktop_1.2.3.bb</filename> file, you
might type the following:
<literallayout class='monospaced'>
$ bitbake matchbox-desktop
</literallayout>
Several different versions of <filename>matchbox-desktop</filename> might exist.
BitBake chooses the one selected by the distribution configuration.
You can get more details about how BitBake chooses between different
target versions and providers in the
<link linkend='ref-bitbake-providers'>Preferences and Providers</link> section.
</para>
<para>
BitBake also tries to execute any dependent tasks first.
So for example, before building <filename>matchbox-desktop</filename>, BitBake
would build a cross compiler and <filename>eglibc</filename> if they had not already
been built.
<note>This release of the Yocto Project does not support the <filename>glibc</filename>
GNU version of the Unix standard C library. By default, the Yocto Project builds with
<filename>eglibc</filename>.</note>
</para>
<para>
A useful BitBake option to consider is the <filename>-k</filename> or
<filename>--continue</filename> option.
This option instructs BitBake to continue processing the job as much
as possible even after encountering an error.
When an error occurs, the target that
failed and those that depend on it cannot be remade.
However, when you use this option, other parts of the build that do not
depend on the failed target can still be processed.
</para>
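<para>
For example, the following invocation builds as much of an image as
possible even if some recipes fail along the way:
<literallayout class='monospaced'>
$ bitbake -k core-image-minimal
</literallayout>
</para>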
</section>
<section id='usingpoky-components-metadata'>
<title>Metadata (Recipes)</title>
<para>
The <filename>.bb</filename> files are usually referred to as "recipes."
In general, a recipe contains information about a single piece of software.
The information includes the location from which to download the source patches
(if any are needed), which special configuration options to apply,
how to compile the source files, and how to package the compiled output.
</para>
<para>
The term "package" can also be used to describe recipes.
However, since the same word is used for the packaged output from the Yocto
Project (i.e. <filename>.ipk</filename> or <filename>.deb</filename> files),
this document avoids using the term "package" when referring to recipes.
</para>
</section>
<section id='usingpoky-components-classes'>
<title>Classes</title>
<para>
Class files (<filename>.bbclass</filename>) contain information that is useful to share
between metadata files.
An example is the Autotools class, which contains
common settings for any application that Autotools uses.
The <link linkend='ref-classes'>Reference: Classes</link> appendix provides details
about common classes and how to use them.
</para>
</section>
<section id='usingpoky-components-configuration'>
<title>Configuration</title>
<para>
The configuration files (<filename>.conf</filename>) define various configuration variables
that govern the Yocto Project build process.
These files fall into several areas that define machine configuration options,
distribution configuration options, compiler tuning options, general common configuration
options, and user configuration options (<filename>local.conf</filename>, which is found
in the Yocto Project files build directory).
</para>
</section>
</section>
<section id="shared-state-cache">
<title>Shared State Cache</title>
<para>
By design, the Yocto Project build system builds everything from scratch unless
BitBake can determine that parts don't need to be rebuilt.
Fundamentally, building from scratch is attractive as it means all parts are
built fresh and there is no possibility of stale data causing problems.
When developers hit problems, they typically default back to building from scratch
so they know the state of things from the start.
</para>
<para>
Building an image from scratch is both an advantage and a disadvantage to the process.
As mentioned in the previous paragraph, building from scratch ensures that
everything is current and starts from a known state.
However, building from scratch also takes much longer because it generally means
rebuilding things that do not necessarily need to be rebuilt.
</para>
<para>
The Yocto Project implements shared state code that supports incremental builds.
The implementation of the shared state code answers the following questions, which
were fundamental roadblocks to incremental build support in the Yocto Project:
<itemizedlist>
<listitem>What pieces of the system have changed and what pieces have not changed?</listitem>
<listitem>How are changed pieces of software removed and replaced?</listitem>
<listitem>How are pre-built components that don't need to be rebuilt from scratch
used when they are available?</listitem>
</itemizedlist>
</para>
<para>
For the first question, the build system detects changes in the "inputs" to a given task by
creating a checksum (or signature) of the task's inputs.
If the checksum changes, the system assumes the inputs have changed and the task needs to be
rerun.
For the second question, the shared state (sstate) code tracks which tasks add which output
to the build process.
This means the output from a given task can be removed, upgraded or otherwise manipulated.
The third question is partly addressed by the solution for the second question
assuming the build system can fetch the sstate objects from remote locations and
install them if they are deemed to be valid.
</para>
<para>
The rest of this section goes into detail about the overall incremental build
architecture, the checksums (signatures), shared state, and some tips and tricks.
</para>
<section id='overall-architecture'>
<title>Overall Architecture</title>
<para>
When determining what parts of the system need to be built, BitBake
works on a per-task basis rather than a per-recipe basis.
You might wonder why using a per-task basis is preferred over a per-recipe basis.
To help explain, consider having the IPK packaging backend enabled and then switching to DEB.
In this case, <filename>do_install</filename> and <filename>do_package</filename>
output are still valid.
However, with a per-recipe approach, the build would not include the
<filename>.deb</filename> files.
Consequently, you would have to invalidate the whole build and rerun it.
Rerunning everything is not the best situation.
Also, with a per-recipe approach, the build core would need detailed knowledge of
specific tasks.
This methodology does not scale well and does not allow users to easily add new tasks
in layers or as external recipes without touching the packaged-staging core.
</para>
</section>
<section id='checksums'>
<title>Checksums (Signatures)</title>
<para>
The shared state code uses a checksum, which is a unique signature of a task's
inputs, to determine if a task needs to be run again.
Because it is a change in a task's inputs that triggers a rerun, the process
needs to detect all the inputs to a given task.
For shell tasks, this turns out to be fairly easy because
the build process generates a "run" shell script for each task and
it is possible to create a checksum that gives you a good idea of when
the task's data changes.
</para>
<para>
To complicate the problem, there are things that should not be included in
the checksum.
First, there is the actual specific build path of a given task -
the <filename>WORKDIR</filename>.
It does not matter if the working directory changes because it should not
affect the output for target packages.
Also, the build process has the objective of making native/cross packages relocatable.
The checksum therefore needs to exclude <filename>WORKDIR</filename>.
The simplistic approach for excluding the working directory is to set
<filename>WORKDIR</filename> to some fixed value and create the checksum
for the "run" script.
</para>
<para>
Another problem results from the "run" scripts containing functions that
might or might not get called.
The incremental build solution contains code that figures out dependencies
between shell functions.
This code is used to prune the "run" scripts down to the minimum set,
thereby alleviating this problem and making the "run" scripts much more
readable as a bonus.
</para>
<para>
So far we have solutions for shell scripts.
What about Python tasks?
The same approach applies even though these tasks are more difficult.
The process needs to figure out what variables a Python function accesses
and what functions it calls.
Again, the incremental build solution contains code that first figures out
the variable and function dependencies, and then creates a checksum for the data
used as the input to the task.
</para>
<para>
Like the <filename>WORKDIR</filename> case, situations exist where dependencies
should be ignored.
For these cases, you can instruct the build process to ignore a dependency
by using a line like the following:
<literallayout class='monospaced'>
PACKAGE_ARCHS[vardepsexclude] = "MACHINE"
</literallayout>
This example ensures that the <filename>PACKAGE_ARCHS</filename> variable does not
depend on the value of <filename>MACHINE</filename>, even if it does reference it.
</para>
<para>
Equally, there are cases where we need to add dependencies BitBake is not able to find.
You can accomplish this by using a line like the following:
<literallayout class='monospaced'>
PACKAGE_ARCHS[vardeps] = "MACHINE"
</literallayout>
This example explicitly adds the <filename>MACHINE</filename> variable as a
dependency for <filename>PACKAGE_ARCHS</filename>.
</para>
<para>
Consider a case with inline Python, for example, where BitBake is not
able to figure out dependencies.
When running in debug mode (i.e. using <filename>-DDD</filename>), BitBake
produces output when it discovers something for which it cannot figure out
dependencies.
The Yocto Project team has currently not managed to cover those dependencies
in detail and is aware of the need to fix this situation.
</para>
<para>
Thus far, this section has limited discussion to the direct inputs into a task.
Information based on direct inputs is referred to as the "basehash" in the code.
However, there is still the question of a task's indirect inputs, the things that
were already built and present in the build directory.
The checksum (or signature) for a particular task needs to add the hashes of all the
tasks on which the particular task depends.
Choosing which dependencies to add is a policy decision.
However, the effect is to generate a master checksum that combines the
basehash and the hashes of the task's dependencies.
</para>
<para>
While figuring out the dependencies and creating these checksums is good,
what does the Yocto Project build system do with the checksum information?
The build system uses a signature handler that is responsible for
processing the checksum information.
By default, there is a dummy "noop" signature handler enabled in BitBake.
This means that behavior is unchanged from previous versions.
OE-Core uses the "basic" signature handler through this setting in the
<filename>bitbake.conf</filename> file:
<literallayout class='monospaced'>
BB_SIGNATURE_HANDLER ?= "basic"
</literallayout>
Also within the BitBake configuration file, we can give BitBake
some extra information to help it handle the checksums.
The following statements effectively result in a list of global
variable dependency excludes - variables never included in
any checksum:
<literallayout class='monospaced'>
BB_HASHBASE_WHITELIST ?= "TMPDIR FILE PATH PWD BB_TASKHASH BBPATH"
BB_HASHBASE_WHITELIST += "DL_DIR SSTATE_DIR THISDIR FILESEXTRAPATHS"
BB_HASHBASE_WHITELIST += "FILE_DIRNAME HOME LOGNAME SHELL TERM USER"
BB_HASHBASE_WHITELIST += "FILESPATH USERNAME STAGING_DIR_HOST STAGING_DIR_TARGET"
BB_HASHTASK_WHITELIST += "(.*-cross$|.*-native$|.*-cross-initial$| \
.*-cross-intermediate$|^virtual:native:.*|^virtual:nativesdk:.*)"
</literallayout>
This example is actually where <filename>WORKDIR</filename>
is excluded since <filename>WORKDIR</filename> is constructed as a
path within <filename>TMPDIR</filename>, which is on the whitelist.
</para>
<para>
The <filename>BB_HASHTASK_WHITELIST</filename> covers dependent tasks and
excludes certain kinds of tasks from the dependency chains.
The effect of the previous example is to isolate the native, target,
and cross-components.
So, for example, toolchain changes do not force a rebuild of the whole system.
</para>
<para>
The end result of the "basic" handler is to make some dependency and
hash information available to the build.
This includes:
<literallayout class='monospaced'>
BB_BASEHASH_task-&lt;taskname&gt; - the base hashes for each task in the recipe
BB_BASEHASH_&lt;filename:taskname&gt; - the base hashes for each dependent task
BBHASHDEPS_&lt;filename:taskname&gt; - the task dependencies for each task
BB_TASKHASH - the hash of the currently running task
</literallayout>
There is also a "basichash" <filename>BB_SIGNATURE_HANDLER</filename>,
which is the same as the basic version but adds the task hash to the stamp files.
As a result, any metadata change that alters the task hash
automatically causes the task to be run again.
This removes the need to bump <filename>PR</filename>
values and changes to metadata automatically ripple across the build.
Currently, this behavior is not the default behavior.
However, it is likely that the Yocto Project team will go forward with this
behavior in the future since all the functionality exists.
The reason for the delay is the potential impact on distribution feed
creation, which needs increasing <filename>PR</filename> fields,
and the fact that the Yocto Project currently lacks a mechanism to automate
incrementing this field.
</para>
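<para>
As a sketch, you could experiment with this behavior today by overriding
the handler in your <filename>local.conf</filename>:
<literallayout class='monospaced'>
BB_SIGNATURE_HANDLER = "basichash"
</literallayout>
</para>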
</section>
<section id='shared-state'>
<title>Shared State</title>
<para>
Checksums and dependencies, as discussed in the previous section, solve half the
problem.
The other part of the problem is being able to use checksum information during the build
and being able to reuse or rebuild specific components.
</para>
<para>
The shared state class (<filename>sstate.bbclass</filename>)
is a relatively generic implementation of how to "capture" a snapshot of a given task.
The idea is that the build process does not care about the source of a task's output.
Output could be freshly built or it could be downloaded and unpacked from
somewhere - the build process doesn't need to worry about its source.
</para>
<para>
There are two types of output.
One is simply the creation of a directory in <filename>WORKDIR</filename>.
A good example is the output of either <filename>do_install</filename> or
<filename>do_package</filename>.
The other type of output occurs when a set of data is merged into a shared directory
tree such as the sysroot.
</para>
<para>
The Yocto Project team has tried to keep the details of the implementation hidden in
<filename>sstate.bbclass</filename>.
From a user's perspective, adding shared state wrapping to a task
is as simple as this <filename>do_deploy</filename> example taken from
<filename>do_deploy.bbclass</filename>:
<literallayout class='monospaced'>
DEPLOYDIR = "${WORKDIR}/deploy-${PN}"
SSTATETASKS += "do_deploy"
do_deploy[sstate-name] = "deploy"
do_deploy[sstate-inputdirs] = "${DEPLOYDIR}"
do_deploy[sstate-outputdirs] = "${DEPLOY_DIR_IMAGE}"
python do_deploy_setscene () {
sstate_setscene(d)
}
addtask do_deploy_setscene
</literallayout>
In the example, we add some extra flags to the task, a name field ("deploy"), an
input directory where the task sends data, and the output
directory where the data from the task should eventually be copied.
We also add a <filename>_setscene</filename> variant of the task and add the task
name to the <filename>SSTATETASKS</filename> list.
</para>
<para>
If you have a directory whose contents you need to preserve, you can do this with
a line like the following:
<literallayout class='monospaced'>
do_package[sstate-plaindirs] = "${PKGD} ${PKGDEST}"
</literallayout>
This method, as well as the following example, also works for multiple directories.
<literallayout class='monospaced'>
do_package[sstate-inputdirs] = "${PKGDESTWORK} ${SHLIBSWORKDIR}"
do_package[sstate-outputdirs] = "${PKGDATA_DIR} ${SHLIBSDIR}"
do_package[sstate-lockfile] = "${PACKAGELOCK}"
</literallayout>
These methods also include the ability to take a lockfile when manipulating
shared state directory structures since some cases are sensitive to file
additions or removals.
</para>
<para>
Behind the scenes, the shared state code works by looking in
<filename>SSTATE_DIR</filename> and
<filename>SSTATE_MIRRORS</filename> for shared state files.
Here is an example:
<literallayout class='monospaced'>
SSTATE_MIRRORS ?= "\
file://.* http://someserver.tld/share/sstate/ \n \
file://.* file:///some/local/dir/sstate/"
</literallayout>
</para>
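<para>
Similarly, pointing the local cache at a location shared between builds is
a matter of setting <filename>SSTATE_DIR</filename>, for example (the path
here is hypothetical):
<literallayout class='monospaced'>
SSTATE_DIR = "/shared/sstate-cache"
</literallayout>
</para>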
<para>
The shared state package validity can be detected just by looking at the
filename since the filename contains the task checksum (or signature) as
described earlier in this section.
If a valid shared state package is found, the build process downloads it
and uses it to accelerate the task.
</para>
<para>
The build process uses the <filename>*_setscene</filename> tasks
for the task acceleration phase.
BitBake goes through this phase before the main execution code and tries
to accelerate any tasks for which it can find shared state packages.
If a shared state package for a task is available, the shared state
package is used.
This means the task and any tasks on which it is dependent are not
executed.
</para>
<para>
As a real-world example, consider building an IPK-based image: the aim is that
only the <filename>do_package_write_ipk</filename> tasks would have their
shared state packages fetched and extracted.
Since the sysroot is not used, it would never get extracted.
This is another reason why a task-based approach is preferred over a
recipe-based approach, which would have to install the output from every task.
</para>
</section>
<section id='tips-and-tricks'>
<title>Tips and Tricks</title>
<para>
The code in the Yocto Project that supports incremental builds is not
simple.
This section presents some tips and tricks that help you work around
issues related to shared state code.
</para>
<section id='debugging'>
<title>Debugging</title>
<para>
When things go wrong, debugging needs to be straightforward.
Because of this, the Yocto Project team included strong debugging
tools:
<itemizedlist>
<listitem><para>Whenever a shared state package is written out, so is a
corresponding <filename>.siginfo</filename> file.
This practice results in a pickled Python database of all
the metadata that went into creating the hash for a given shared state
package.</para></listitem>
<listitem><para>If BitBake is run with the <filename>--dump-signatures</filename>
(or <filename>-S</filename>) option, BitBake dumps out
<filename>.siginfo</filename> files in
the stamp directory for every task it would have executed instead of
building the specified target package.</para></listitem>
<listitem><para>There is a <filename>bitbake-diffsigs</filename> command that
can process these <filename>.siginfo</filename> files.
If one file is specified, the command dumps out the dependency
information in the file.
If two files are specified, the command compares the two files and dumps out
the differences between them.
This allows the question of "What changed between X and Y?" to be
answered easily, as the example after this list shows.</para></listitem>
</itemizedlist>
</para>
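<para>
As a minimal, hypothetical invocation (the <filename>.siginfo</filename>
filenames here are placeholders for files found in your stamp directory or
shared state cache), you could compare two signatures as follows:
<literallayout class='monospaced'>
$ bitbake-diffsigs task_A.siginfo task_B.siginfo
</literallayout>
The command then reports which dependencies, variables, or file checksums
differ between the two signatures.
</para>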
</section>
<section id='invalidating-shared-state'>
<title>Invalidating Shared State</title>
<para>
The shared state code uses checksums and the shared state cache to avoid
unnecessarily rebuilding tasks.
As with all schemes, this one has some drawbacks.
It is possible that you could make implicit changes that are not factored
into the checksum calculation, but do affect a task's output.
A good example is when a tool changes its output.
Suppose, for example, that the output of <filename>rpmdeps</filename> changes.
The result of the change should be that all the "package", "package_write_rpm",
and "package_deploy-rpm" shared state cache items would become invalid.
But, because this is a change that is external to the code and therefore implicit,
the associated shared state cache items do not become invalidated.
In this case, the build process would use the cached items rather than running the
task again.
Obviously, these types of implicit changes can cause problems.
</para>
<para>
To avoid these problems during the build, you need to understand the effects of any
change you make.
Note that any changes you make directly to a function are automatically factored into
the checksum calculation and thus invalidate the associated area of the shared state cache.
You need to be aware of any implicit changes that are not obvious changes to the
code and could affect the output of a given task.
Once you are aware of such a change, you can take steps to invalidate the cache
and force the task to run.
The step to take is as simple as changing a function's comments in the source code.
For example, to invalidate package shared state files, change the comments
of <filename>do_package</filename> or the comments of one of the functions it calls.
The change is purely cosmetic, but it causes the checksum to be recalculated and
forces the task to be run again.
</para>
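<para>
Following is a minimal sketch of such a change (the comment text is arbitrary,
and the elided body stands for the existing function code):
<literallayout class='monospaced'>
python do_package () {
    # Cosmetic change made to invalidate the package shared state
    # cache and force this task to re-run.
    ...
}
</literallayout>
Because the comment is part of the function body, it is included in the
checksum calculation, so the edit forces the task to re-run even though the
task's behavior is unchanged.
</para>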
<note>
For an example of a commit that makes a cosmetic change to invalidate
a shared state, see this
<ulink url='http://git.yoctoproject.org/cgit.cgi/poky/commit/meta/classes/package.bbclass?id=737f8bbb4f27b4837047cb9b4fbfe01dfde36d54'>commit</ulink>.
</note>
</section>
</section>
</section>
<section id="licenses">
<title>Licenses</title>
<para>
This section describes the mechanism by which the Yocto Project build system
tracks changes to licensing text.
The section also describes how to enable commercially licensed recipes,
which by default are disabled.
</para>
<section id="usingpoky-configuring-LIC_FILES_CHKSUM">
<title>Tracking License Changes</title>
<para>
The license of an upstream project might change in the future. In order to prevent such changes
from going unnoticed, the Yocto Project provides a
<filename><link linkend='var-LIC_FILES_CHKSUM'>LIC_FILES_CHKSUM</link></filename>
variable to track changes to the license text. The checksums are validated at the end of the
configure step, and if the checksums do not match, the build will fail.
</para>
<section id="usingpoky-specifying-LIC_FILES_CHKSUM">
<title>Specifying the <filename>LIC_FILES_CHKSUM</filename> Variable</title>
<para>
The <filename>LIC_FILES_CHKSUM</filename>
variable contains checksums of the license text in the source code for the recipe.
Following is an example of how to specify <filename>LIC_FILES_CHKSUM</filename>:
<literallayout class='monospaced'>
LIC_FILES_CHKSUM = "file://COPYING;md5=xxxx \
file://licfile1.txt;beginline=5;endline=29;md5=yyyy \
file://licfile2.txt;endline=50;md5=zzzz \
..."
</literallayout>
</para>
<para>
The Yocto Project uses the
<filename><link linkend='var-S'>S</link></filename> variable as the
default directory when searching for files listed in
<filename>LIC_FILES_CHKSUM</filename>.
The previous example employs the default directory.
</para>
<para>
You can also use relative paths as shown in the following example:
<literallayout class='monospaced'>
LIC_FILES_CHKSUM = "file://src/ls.c;startline=5;endline=16;\
md5=bb14ed3c4cda583abc85401304b5cd4e"
LIC_FILES_CHKSUM = "file://../license.html;md5=5c94767cedb5d6987c902ac850ded2c6"
</literallayout>
</para>
<para>
In this example, the first line locates a file in
<filename><link linkend='var-S'>S</link>/src/ls.c</filename>.
The second line refers to a file in
<filename><link linkend='var-WORKDIR'>WORKDIR</link></filename>, which is the parent
of <filename>S</filename>.
</para>
<para>
Note that this variable is mandatory for all recipes, unless the
<filename>LICENSE</filename> variable is set to "CLOSED".
</para>
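<para>
For example, a recipe for proprietary software whose license text cannot be
tracked might simply declare the following and omit
<filename>LIC_FILES_CHKSUM</filename> entirely:
<literallayout class='monospaced'>
LICENSE = "CLOSED"
</literallayout>
</para>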
</section>
<section id="usingpoky-LIC_FILES_CHKSUM-explanation-of-syntax">
<title>Explanation of Syntax</title>
<para>
As mentioned in the previous section, the
<filename>LIC_FILES_CHKSUM</filename> variable lists all the
important files that contain the license text for the source code.
It is possible to specify a checksum for an entire file, or a specific section of a
file (specified by beginning and ending line numbers with the "beginline" and "endline"
parameters, respectively).
The latter is useful for source files with a license notice header,
README documents, and so forth.
If you do not use the "beginline" parameter, then it is assumed that the text begins on the
first line of the file.
Similarly, if you do not use the "endline" parameter, it is assumed that the license text
ends with the last line of the file.
</para>
<para>
The "md5" parameter stores the md5 checksum of the license text.
If the license text changes in any way as compared to this parameter
then a mismatch occurs.
This mismatch triggers a build failure and notifies the developer.
Notification allows the developer to review and address the license text changes.
Also note that if a mismatch occurs during the build, the correct md5
checksum is placed in the build log and can be easily copied to the recipe.
</para>
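<para>
If you prefer to compute the checksum yourself rather than copying it from the
build log, you can run the standard <filename>md5sum</filename> tool against
the license file.
Note that this straightforward approach applies only to files tracked in their
entirety; when the "beginline" or "endline" parameters are used, only the
selected lines are checksummed.
Using the "COPYING" placeholder example from earlier:
<literallayout class='monospaced'>
$ md5sum COPYING
xxxx  COPYING
</literallayout>
</para>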
<para>
There is no limit to how many files you can specify using the
<filename>LIC_FILES_CHKSUM</filename> variable.
Generally, however, every project requires a few specifications for license tracking.
Many projects have a "COPYING" file that stores the license information for all the source
code files.
This practice allows you to just track the "COPYING" file as long as it is kept up to date.
</para>
<tip>
If you specify an empty or invalid "md5" parameter, BitBake returns an md5 mismatch
error and displays the correct "md5" parameter value during the build.
The correct parameter is also captured in the build log.
</tip>
<tip>
If the whole file contains only license text, you do not need to use the "beginline" and
"endline" parameters.
</tip>
</section>
</section>
<section id="enabling-commercially-licensed-recipes">
<title>Enabling Commercially Licensed Recipes</title>
<para>
By default, the Yocto Project build system disables components that
have commercial licensing requirements.
The following four statements in the
<filename>$HOME/poky/meta/conf/distro/poky.conf</filename> file
disable components:
<literallayout class='monospaced'>
COMMERCIAL_LICENSE ?= "lame gst-fluendo-mp3 libmad mpeg2dec ffmpeg qmmp"
COMMERCIAL_AUDIO_PLUGINS ?= ""
COMMERCIAL_VIDEO_PLUGINS ?= ""
COMMERCIAL_QT ?= "qmmp"
</literallayout>
</para>
<para>
If you want to enable these components, you can do so by making sure you have
the following statements in the configuration file:
<literallayout class='monospaced'>
COMMERCIAL_AUDIO_PLUGINS = "gst-plugins-ugly-mad \
gst-plugins-ugly-mpegaudioparse"
COMMERCIAL_VIDEO_PLUGINS = "gst-plugins-ugly-mpeg2dec \
gst-plugins-ugly-mpegstream gst-plugins-bad-mpegvideoparse"
COMMERCIAL_LICENSE = ""
COMMERCIAL_QT = ""
</literallayout>
</para>
<para>
Excluding a package name from the
<filename>COMMERCIAL_LICENSE</filename> or
<filename>COMMERCIAL_QT</filename> statement enables that package.
</para>
<para>
Specifying audio and video plug-ins as part of the
<filename>COMMERCIAL_AUDIO_PLUGINS</filename> and
<filename>COMMERCIAL_VIDEO_PLUGINS</filename> statements includes
the plug-ins in built images, thus adding support for media formats.
</para>
</section>
</section>
</chapter>
<!--
vim: expandtab tw=80 ts=4
-->

View File

@@ -4,84 +4,213 @@
<title>Using the Yocto Project</title>
<para>
This chapter describes common usage for the Yocto Project.
The information is introductory in nature as other manuals in the Yocto Project
provide more details on how to use the Yocto Project.
This section gives an overview of the components that make up the Yocto Project
followed by information about Yocto Project builds and dealing with any
problems that might arise.
</para>
<section id='usingpoky-components'>
<title>Yocto Project Components</title>
<para>
The BitBake task executor together with various types of configuration files form the
Yocto Project core.
This section overviews the BitBake task executor and the
configuration files by describing what they are used for and how they interact.
</para>
<para>
BitBake handles the parsing and execution of the data files.
The data itself is of various types:
<itemizedlist>
<listitem><para><emphasis>Recipes:</emphasis> Provides details about particular
pieces of software</para></listitem>
<listitem><para><emphasis>Class Data:</emphasis> An abstraction of common build
information (e.g. how to build a Linux kernel).</para></listitem>
<listitem><para><emphasis>Configuration Data:</emphasis> Defines machine-specific settings,
policy decisions, etc.
Configuration data acts as the glue to bind everything together.</para></listitem>
</itemizedlist>
For more information on data, see the
<ulink url='http://www.yoctoproject.org/docs/latest/dev-manual/dev-manual.html#yocto-project-terms'>
Yocto Project Terms</ulink> section in
<ulink url='http://www.yoctoproject.org/docs/latest/dev-manual/dev-manual.html'>
The Yocto Project Development Manual</ulink>.
</para>
<para>
BitBake knows how to combine multiple data sources together and refers to each data source
as a <link linkend='usingpoky-changes-layers'>'layer'</link>.
</para>
<para>
Following are some brief details on these core components.
For more detailed information on these components see the
<link linkend='ref-structure'>'Reference: Directory Structure'</link>
appendix.
</para>
<section id='usingpoky-components-bitbake'>
<title>BitBake</title>
<para>
BitBake is the tool at the heart of the Yocto Project and is responsible
for parsing the metadata, generating a list of tasks from it,
and then executing those tasks.
To see a list of the options BitBake supports, use the following help command:
<literallayout class='monospaced'>
$ bitbake --help
</literallayout>
</para>
<para>
The most common usage for BitBake is <filename>bitbake &lt;packagename&gt;</filename>, where
<filename>packagename</filename> is the name of the package you want to build
(referred to as the "target" in this manual).
The target often equates to the first part of a <filename>.bb</filename> filename.
So, to run the <filename>matchbox-desktop_1.2.3.bb</filename> file, you
might type the following:
<literallayout class='monospaced'>
$ bitbake matchbox-desktop
</literallayout>
Several different versions of <filename>matchbox-desktop</filename> might exist.
BitBake chooses the one selected by the distribution configuration.
You can get more details about how BitBake chooses between different
target versions and providers in the
<link linkend='ref-bitbake-providers'>Preferences and Providers</link> section.
</para>
<para>
BitBake also tries to execute any dependent tasks first.
So for example, before building <filename>matchbox-desktop</filename>, BitBake
would build a cross compiler and <filename>eglibc</filename> if they had not already
been built.
<note>This release of the Yocto Project does not support the <filename>glibc</filename>
GNU version of the Unix standard C library. By default, the Yocto Project builds with
<filename>eglibc</filename>.</note>
</para>
<para>
A useful BitBake option to consider is the <filename>-k</filename> or
<filename>--continue</filename> option.
This option instructs BitBake to try and continue processing the job as much
as possible even after encountering an error.
When an error occurs, the target that
failed and those that depend on it cannot be remade.
However, when you use this option other dependencies can still be processed.
</para>
</section>
<section id='usingpoky-components-metadata'>
<title>Metadata (Recipes)</title>
<para>
The <filename>.bb</filename> files are usually referred to as "recipes."
In general, a recipe contains information about a single piece of software.
The information includes the location from which to download the source patches
(if any are needed), which special configuration options to apply,
how to compile the source files, and how to package the compiled output.
</para>
<para>
The term "package" can also be used to describe recipes.
However, since the same word is used for the packaged output from the Yocto
Project (i.e. <filename>.ipk</filename> or <filename>.deb</filename> files),
this document avoids using the term "package" to refer to recipes.
</para>
</section>
<section id='usingpoky-components-classes'>
<title>Classes</title>
<para>
Class files (<filename>.bbclass</filename>) contain information that is useful to share
between metadata files.
An example is the Autotools class, which contains
common settings for any application that Autotools uses.
The <link linkend='ref-classes'>Reference: Classes</link> appendix provides details
about common classes and how to use them.
</para>
</section>
<section id='usingpoky-components-configuration'>
<title>Configuration</title>
<para>
The configuration files (<filename>.conf</filename>) define various configuration variables
that govern the Yocto Project build process.
These files fall into several areas that define machine configuration options,
distribution configuration options, compiler tuning options, general common configuration
options and user configuration options (<filename>local.conf</filename>, which is found
in the Yocto Project files build directory).
</para>
</section>
</section>
<section id='usingpoky-build'>
<title>Running a Build</title>
<para>
You can find general information on how to build an image using the
Yocto Project in the
You can find information on how to build an image using the Yocto Project in the
<ulink url='http://www.yoctoproject.org/docs/latest/yocto-project-qs/yocto-project-qs.html#building-image'>
Building an Image</ulink> section of the
<ulink url='http://www.yoctoproject.org/docs/latest/yocto-project-qs/yocto-project-qs.html'>
Yocto Project Quick Start</ulink>.
This section provides a summary of the build process and provides information
for less obvious aspects of the build process.
This section provides a quick overview.
</para>
<section id='build-overview'>
<title>Build Overview</title>
<para>
The first thing you need to do is set up the Yocto Project build environment by sourcing
the environment setup script as follows:
<literallayout class='monospaced'>
<para>
The first thing you need to do is set up the Yocto Project build environment by sourcing
the environment setup script as follows:
<literallayout class='monospaced'>
$ source oe-init-build-env [build_dir]
</literallayout>
</para>
</literallayout>
</para>
<para>
The <filename>build_dir</filename> is optional and specifies the directory Yocto Project
uses for the build.
If you do not specify a build directory it defaults to <filename>build</filename>
in your current working directory.
A common practice is to use a different build directory for different targets.
For example, <filename>~/build/x86</filename> for a <filename>qemux86</filename>
target, and <filename>~/build/arm</filename> for a <filename>qemuarm</filename> target.
See <link linkend="structure-core-script">oe-init-build-env</link>
for more information on this script.
</para>
<para>
The <filename>build_dir</filename> is optional and specifies the directory Yocto Project
uses for the build.
If you do not specify a build directory it defaults to <filename>build</filename>
in your current working directory.
A common practice is to use a different build directory for different targets.
For example, <filename>~/build/x86</filename> for a <filename>qemux86</filename>
target, and <filename>~/build/arm</filename> for a <filename>qemuarm</filename> target.
See <link linkend="structure-core-script">oe-init-build-env</link>
for more information on this script.
</para>
<para>
Once the Yocto Project build environment is set up, you can build a target using:
<literallayout class='monospaced'>
<para>
Once the Yocto Project build environment is set up, you can build a target using:
<literallayout class='monospaced'>
$ bitbake &lt;target&gt;
</literallayout>
</para>
</literallayout>
</para>
<para>
The <filename>target</filename> is the name of the recipe you want to build.
Common targets are the images in <filename>meta/recipes-core/images</filename>,
<filename>/meta/recipes-sato/images</filename>, etc. all found in the Yocto Project
files.
Or, the target can be the name of a recipe for a specific piece of software such as
<application>busybox</application>.
For more details about the images Yocto Project supports, see the
<link linkend="ref-images">'Reference: Images'</link> appendix.
</para>
<para>
The <filename>target</filename> is the name of the recipe you want to build.
Common targets are the images in <filename>meta/recipes-core/images</filename>,
<filename>/meta/recipes-sato/images</filename>, etc. all found in the Yocto Project
files.
Or, the target can be the name of a recipe for a specific piece of software such as
<application>busybox</application>.
For more details about the images Yocto Project supports, see the
<link linkend="ref-images">'Reference: Images'</link> appendix.
</para>
<note>
Building an image without GNU Public License Version 3 (GPLv3) components is
only supported for minimal and base images.
See <link linkend='ref-images'>'Reference: Images'</link> for more information.
</note>
</section>
<note>
Building an image without GNU Public License Version 3 (GPLv3) components is
only supported for minimal and base images.
See <link linkend='ref-images'>'Reference: Images'</link> for more information.
</note>
<section id='building-an-image-using-gpl-components'>
<title>Building an Image Using GPL Components</title>
<para>
When building an image using GPL components, you need to maintain your original
settings and not switch back and forth applying different versions of the GNU
Public License.
If you rebuild using different versions of GPL, dependency errors might occur
due to some components not being rebuilt.
</para>
</section>
<note>
When building an image using GPL components, you need to maintain your original
settings and not switch back and forth applying different versions of the GNU
Public License.
If you rebuild using different versions of GPL, dependency errors might occur
due to some components not being rebuilt.
</note>
</section>
<section id='usingpoky-install'>

View File

@@ -198,6 +198,15 @@
<section id='ubuntu'>
<title>Ubuntu</title>
<para>
If your distribution is Ubuntu, you need to be running the bash shell.
You can be sure you are running this shell by entering the following command and
selecting "No" at the prompt:
<literallayout class='monospaced'>
$ sudo dpkg-reconfigure dash
</literallayout>
</para>
<para>
The packages you need for a supported Ubuntu distribution are shown in the following command:
</para>
@@ -207,7 +216,7 @@
unzip texi2html texinfo libsdl1.2-dev docbook-utils gawk \
python-pysqlite2 diffstat help2man make gcc build-essential \
g++ desktop-file-utils chrpath libgl1-mesa-dev libglu1-mesa-dev \
mercurial autoconf automake groff libtool xterm libxml-parser-perl
mercurial autoconf automake groff libtool xterm
</literallayout>
</section>

View File

@@ -1,5 +1,3 @@
Upstream-Status: Inappropriate [configuration]
Index: openswan-2.4.7/Makefile.inc
===================================================================
--- openswan-2.4.7.orig/Makefile.inc 2006-12-25 18:05:40.608503250 +0100

View File

@@ -1,5 +1,3 @@
Upstream-Status: Inappropriate [configuration]
--- openswan-2.2.0.orig/programs/Makefile.program 2004-06-03 03:06:27.000000000 +0200
+++ openswan-2.2.0/programs/Makefile.program 2005-03-05 13:50:19.000000000 +0100
@@ -30,10 +30,6 @@

View File

@@ -1,5 +1,3 @@
Upstream-Status: Inappropriate [configuration]
diff -Nru openswan-2.4.7.orig/doc/Makefile openswan-2.4.7/doc/Makefile
--- openswan-2.4.7.orig/doc/Makefile 2005-11-08 23:32:45.000000000 +0200
+++ openswan-2.4.7/doc/Makefile 2006-12-06 22:46:54.732830840 +0200

View File

@@ -1,17 +1,16 @@
SUMMARY = "GObject-based sync library"
DESCRIPTION = "LibSync is a GObject-based framework for more convenient use of \
OpenSync in GLib applications."
LICENSE = "LGPLv2"
LICENSE = "LGPL"
SECTION = "x11"
DEPENDS = "glib-2.0 gtk+ libglade libopensync avahi"
RRECOMMENDS_${PN} = "\
libopensync-plugin-file \
"
SRCREV = "3f375969d56028505db97cd25ef1679a167cfc59"
PV = "0.0+gitr${SRCPV}"
PR = "r2"
PV = "0.0+svnr${SRCPV}"
PR = "r1"
SRC_URI = "git://git.yoctoproject.org/sync;protocol=git"
SRC_URI = "svn://svn.o-hand.com/repos/sync/trunk;module=sync;proto=http"
inherit autotools pkgconfig

View File

@@ -1,5 +1,3 @@
Upstream-Status: Inappropriate [configuration]
--- wbxml2-0.9.2/Makefile.am.old 2007-01-03 19:50:24.000000000 +0000
+++ wbxml2-0.9.2/Makefile.am 2007-01-03 19:50:39.000000000 +0000
@@ -24,9 +24,9 @@

View File

@@ -3,8 +3,6 @@
gcalctool/Makefile.am | 2 --
2 files changed, 1 insertion(+), 3 deletions(-)
Upstream-Status: Inappropriate [configuration]
Index: gcalctool-5.8.17/gcalctool/Makefile.am
===================================================================
--- gcalctool-5.8.17.orig/gcalctool/Makefile.am 2005-12-19 15:46:57.000000000 +0000

View File

@@ -3,12 +3,9 @@ LICENSE = "GPL"
DEPENDS = "matchbox-wm"
SECTION = "x11/wm"
SRC_URI = "http://downloads.yoctoproject.org/releases/matchbox/matchbox-themes-extra/${PV}/matchbox-themes-extra-${PV}.tar.bz2"
SRC_URI = "http://projects.o-hand.com/matchbox/sources/matchbox-themes-extra/${PV}/matchbox-themes-extra-${PV}.tar.bz2"
S = "${WORKDIR}/matchbox-themes-extra-${PV}"
SRC_URI[md5sum] = "04312628f4a21f4105bce1251ea08035"
SRC_URI[sha256sum] = "98a1c8695842b0cd7f32e67b0ef9118fd0f32db5297f3f08706c706dee8fc6be"
inherit autotools pkgconfig
# split into several packages plus one meta package

View File

@@ -1,5 +1,3 @@
Upstream-Status: Inappropriate [configuration]
--- fribidi-0.10.4/configure.in~ 2002-05-19 11:06:48.000000000 +0100
+++ fribidi-0.10.4/configure.in 2004-08-03 17:42:28.000000000 +0100
@@ -50,7 +50,7 @@

View File

@@ -1,8 +1,7 @@
#
# Patch managed by http://www.holgerschurig.de/patcher.html
#
Upstream-Status: Inappropriate [configuration]
#
--- openobex-1.2/apps/Makefile.am~disable-cable-test
+++ openobex-1.2/apps/Makefile.am

View File

@@ -1,5 +1,3 @@
Upstream-Status: Inappropriate [configuration]
Index: openobex-1.5/acinclude.m4
===================================================================
--- openobex-1.5.orig/acinclude.m4 2009-02-08 18:30:22.000000000 +0000

View File

@@ -3,7 +3,7 @@
LCONF_VERSION = "4"
BBFILES ?= ""
BBLAYERS ?= " \
BBLAYERS = " \
##COREBASE##/meta \
##COREBASE##/meta-yocto \
"

View File

@@ -1,110 +0,0 @@
# Distribution definition for: poky-tiny
#
# Copyright (c) 2011, Intel Corporation.
# All rights reserved.
#
# This file is released under the MIT license as described in
# ../meta/COPYING.MIT.
#
# Poky-tiny is intended to define a tiny Linux system comprised of a
# Linux kernel tailored to support each specific MACHINE and busybox.
# Poky-tiny sets some basic policy to ensure a usable system while still
# keeping the rootfs and kernel image as small as possible.
#
# The policies defined are intended to meet the following goals:
# o Serial consoles only (no framebuffer or VGA console)
# o Basic support for IPV4 networking
# o Single user ash shell
# o Static images (no support for adding packages or libraries later)
# o Read-only or RAMFS root filesystem
# o Combined Linux kernel + rootfs in under 4MB
# o Allow the user to select between eglibc or uclibc with the TCLIBC variable
#
# This is currently a partial definition, the following tasks remain:
# [ ] Integrate linux-yocto-tiny ktype into linux-yocto
# [ ] Define linux-yocto-tiny configs for all supported BSPs
# [ ] Drop ldconfig from the installation
# [ ] Modify the runqemu scripts to work with ext2 parameter:
# runqemu qemux86 qemuparams="-nographic" bootparams="console=ttyS0,115200 root=0800"
# [ ] Modify busybox to allow for DISTRO_FEATURES-like confiruration
require conf/distro/poky.conf
DISTRO = "poky-tiny"
# FIXME: consider adding a new "tiny" feature
#DISTRO_FEATURES_append = " tiny"
# Distro config is evaluated after the machine config, so we have to explicitly
# set the kernel provider to override a machine config.
PREFERRED_PROVIDER_virtual/kernel = "linux-yocto-tiny"
PREFERRED_VERSION_linux-yocto-tiny = "3.0%"
# We can use task-core-boot, but in the future we may need a new task-core-tiny
#POKY_DEFAULT_EXTRA_RDEPENDS += "task-core-boot"
# Drop kernel-module-af-packet from RRECOMMENDS
POKY_DEFAULT_EXTRA_RRECOMMENDS = ""
# FIXME: what should we do with this?
TCLIBCAPPEND = ""
# Disable wide char support for ncurses as we don't include it in
# in the LIBC features below.
ENABLE_WIDEC="false"
# Drop native language support. This removes the
# eglibc->bash->gettext->libc-posix-clang-wchar dependency.
USE_NLS="no"
# Reconfigure eglibc for a smaller installation
# Comment out any of the lines below to disable them in the build
DISTRO_FEATURES_LIBC_TINY = "libc-libm libc-crypt"
# Required for "who"
DISTRO_FEATURES_LIBC_MINIMAL = "libc-utmp libc-getlogin"
DISTRO_FEATURES_LIBC_REGEX = "libc-posix-regexp"
DISTRO_FEATURES_LIBC_NET = "libc-inet libc-nis"
DISTRO_FEATURES_LIBC = "${DISTRO_FEATURES_LIBC_TINY} \
${DISTRO_FEATURES_LIBC_MINIMAL} \
${DISTRO_FEATURES_LIBC_REGEX} \
${DISTRO_FEATURES_LIBC_NET} \
"
# Comment out any of the lines below to disable them in the build
# DISTRO_FEATURES options:
# alsa bluetooth ext2 irda pcmcia usbgadget usbhost wifi nfs zeroconf pci
DISTRO_FEATURES_TINY = "pci"
DISTRO_FEATURES_NET = "ipv4"
DISTRO_FEATURES_USB = "usbhost"
#DISTRO_FEATURES_USBGADGET = "usbgadget"
#DISTRO_FEATURES_WIFI = "wifi"
DISTRO_FEATURES = "${DISTRO_FEATURES_TINY} \
${DISTRO_FEATURES_NET} \
${DISTRO_FEATURES_USB} \
${DISTRO_FEATURES_USBGADGET} \
${DISTRO_FEATURES_WIFI} \
${DISTRO_FEATURES_LIBC} \
"
# Use tmpdevfs and the busybox runtime services
VIRTUAL-RUNTIME_dev_manager = ""
VIRTUAL-RUNTIME_login_manager = ""
VIRTUAL-RUNTIME_init_manager = ""
VIRTUAL-RUNTIME_keymaps = ""
# FIXME: Consider adding "modules" to MACHINE_FEATURES and using that in
# task-core-base to select modutils-initscripts or not. Similar with "net" and
# netbase.
# By default we only support ext2 and initramfs. We don't build live as that
# pulls in a lot of dependencies for the live image and the installer, like
# udev, grub, etc. These pull in gettext, which fails to build with wide
# character support.
IMAGE_FSTYPES = "ext2 cpio.gz"
# Drop v86d from qemu dependency list (we support serial)
# Drop grub from meta-intel BSPs
# FIXME: A different mechanism is needed here. We could define -tiny
# variants of all compatible machines, but that leads to a lot
# more machine configs to maintain long term.
MACHINE_ESSENTIAL_EXTRA_RDEPENDS = ""

View File

@@ -24,12 +24,8 @@ SDKPATH = "/opt/${DISTRO}/${SDK_VERSION}"
EXTRAOPKGCONFIG = "poky-feed-config-opkg"
# Override these in poky based distros to modify DISTRO_EXTRA_R*
POKY_DEFAULT_EXTRA_RDEPENDS = "task-core-boot"
POKY_DEFAULT_EXTRA_RRECOMMENDS = "kernel-module-af-packet"
DISTRO_EXTRA_RDEPENDS += " ${POKY_DEFAULT_EXTRA_RDEPENDS}"
DISTRO_EXTRA_RRECOMMENDS += " ${POKY_DEFAULT_EXTRA_RRECOMMENDS}"
DISTRO_EXTRA_RDEPENDS += "task-core-boot"
DISTRO_EXTRA_RRECOMMENDS += "kernel-module-af-packet"
POKYQEMUDEPS = "${@base_contains("INCOMPATIBLE_LICENSE", "GPLv3", "", "qemu-config",d)}"
DISTRO_EXTRA_RDEPENDS_append_qemuarm = " ${POKYQEMUDEPS}"

View File

@@ -4,7 +4,7 @@
# to the system might want to change but pretty much any configuration option can
# be set in this file. More adventurous users can look at local.conf.extended
# which contains other examples of configuration which can be placed in this file
# but new users likely won't need any of them initially.
# but new users likely don't need any of them initially.
#
# Lines starting with the '#' character are commented out and in some cases the
# default values are provided as comments to show people example syntax. Enabling
@@ -22,16 +22,16 @@
# The second option controls how many processes make should run in parallel when
# running compile tasks:
#
#PARALLEL_MAKE = "-j 4"
# PARALLEL_MAKE = "-j 4"
#
# For a quad-core machine, BB_NUMBER_THREADS = "4", PARALLEL_MAKE = "-j 4" would
# For a quadcore, BB_NUMBER_THREADS = "4", PARALLEL_MAKE = "-j 4" would
# be appropriate for example.
#
# Machine Selection
#
# You need to select a specific machine to target the build with. There are a selection
# of emulated machines available which can boot and run in the QEMU emulator:
# emulated machines available which can boot and run in the QEMU emulator:
#
#MACHINE ?= "qemuarm"
#MACHINE ?= "qemumips"
@@ -67,7 +67,7 @@ MACHINE ??= "qemux86"
# Where to place shared-state files
#
# BitBake has the capability to accelerate builds based on previously built output.
# This is done using "shared state" files which can be thought of as cache objects
# This is done using "shared state" files which can be through of as cache objects
# and this option determines where those files are placed.
#
# You can wipe out TMPDIR leaving this directory intact and the build would regenerate
@@ -143,10 +143,10 @@ PACKAGE_CLASSES ?= "package_rpm"
# "tools-debug" - add debugging tools (gdb, strace)
# "tools-profile" - add profiling tools (oprofile, exmap, lttng valgrind (x86 only))
# "tools-testapps" - add useful testing tools (ts_print, aplay, arecord etc.)
# "debug-tweaks" - make an image suitable for development
# "debug-tweaks" - make an image for suitable of development
# e.g. ssh root access has a blank password
# There are other application targets that can be used here too, see
# meta/classes/image.bbclass and meta/classes/core-image.bbclass for more details.
# There are other application targets that can be uses here too, see
# meta/classes/core-image.bbclass and meta/recipes-core/tasks/task-core.bb for more details.
# We default to enabling the debugging tweaks.
EXTRA_IMAGE_FEATURES = "debug-tweaks"
@@ -156,13 +156,12 @@ EXTRA_IMAGE_FEATURES = "debug-tweaks"
# The following is a list of additional classes to use when building images which
# enable extra features. Some available options which can be included in this variable
# are:
# - 'buildstats' collect build statistics
# - 'image-mklibs' to reduce shared library files size for an image
# - 'image-prelink' in order to prelink the filesystem image
# - 'image-swab' to perform host system intrusion detection
# NOTE: if listing mklibs & prelink both, then make sure mklibs is before prelink
# NOTE: mklibs also needs to be explicitly enabled for a given image, see local.conf.extended
USER_CLASSES ?= "buildstats image-mklibs image-prelink"
USER_CLASSES ?= "image-mklibs image-prelink"
#
# Runtime testing of images
@@ -173,17 +172,16 @@ USER_CLASSES ?= "buildstats image-mklibs image-prelink"
#IMAGETEST = "qemu"
#
# This variable controls which tests are run against virtual images if enabled
# above. The following would enable bat, boot the test case under the sanity suite
# and perform toolchain tests
# above. The following would enable bat, oot test case under sanity suite and
# toolchain tests
#TEST_SCEN = "sanity bat sanity:boot toolchain"
#
# Because of the QEMU booting slowness issue (see bug #646 and #618), the
# autobuilder may suffer a timeout issue when running sanity tests. We introduce
# the variable TEST_SERIALIZE here to reduce the time taken by the sanity tests.
# It is set to 1 by default, which will boot the image and run cases in the same
# image without rebooting or killing the machine instance. If it is set to 0, the
# image will be copied and tested for each case, which will take longer but be
# more precise.
# Because of the QEMU booting slowness issue(see bug #646 and #618), autobuilder
# may suffer a timeout issue when running sanity test. We introduce variable
# TEST_SERIALIZE here to reduce the time on sanity test. It is by default set
# to 1. This will start image and run cases in the same image without reboot
# or kill. If it is set to 0, the image will be copied and tested for each
# case, which will take longer but be more precise.
#TEST_SERIALIZE = "1"
#
@@ -198,7 +196,7 @@ USER_CLASSES ?= "buildstats image-mklibs image-prelink"
# Examples of the occasions this may happen are when resolving patches which cannot
# be applied, to use the devshell or the kernel menuconfig
#
# Supported values are auto, gnome, xfce, rxvt, screen, konsole (KDE 3.x only), none
# Supported values are auto, gnome, xfce, rxvt, xcreen, konsole (3.x only), none
# Note: currently, Konsole support only works for KDE 3.x due to the way
# newer Konsole versions behave
#OE_TERMINAL = "auto"

View File

@@ -123,10 +123,3 @@
# The following is a list of classes to import to use in the generation of images
# currently an example class is image_types_uboot
# IMAGE_CLASSES = " image_types_uboot"
# Incremental rpm image generation, the rootfs would be totally removed
# and re-created in the second generation by default, but with
# INC_RPM_IMAGE_GEN = "1", the rpm based rootfs would be kept, and will
# do update(remove/add some pkgs) on it. NOTE: This is not suggested
# when you want to create a productive rootfs
#INC_RPM_IMAGE_GEN = "1"

View File

@@ -32,7 +32,6 @@ EXTRA_IMAGECMD_jffs2 = "-lnp "
SERIAL_CONSOLE = "115200 ttyO2"
PREFERRED_PROVIDER_virtual/kernel ?= "linux-yocto"
PREFERRED_VERSION_linux-yocto ?= "3.0%"
KERNEL_IMAGETYPE = "uImage"

View File

@@ -11,7 +11,6 @@ KERNEL_IMAGETYPE = "vmlinux"
KERNEL_ALT_IMAGETYPE = "vmlinux.bin"
PREFERRED_PROVIDER_virtual/kernel ?= "linux-yocto"
PREFERRED_VERSION_linux-yocto ?= "3.0%"
PREFERRED_PROVIDER_virtual/xserver = "xserver-kdrive"
XSERVER = "xserver-kdrive-fbdev"

View File

@@ -19,7 +19,7 @@ SRCREV_machine_pn-linux-yocto-rt_mpc8315e-rdb = "0b805cce57f61a244eb3b8fce460b14
#SRCREV_machine_pn-linux-yocto-rt_beagleboard =
# routerstationpro support - preempt-rt kernel build failure
COMPATIBLE_MACHINE_routerstationpro = "routerstationpro"
KMACHINE_routerstationpro = "routerstationpro"
KBRANCH_routerstationpro = "yocto/standard/preempt-rt/routerstationpro"
SRCREV_machine_pn-linux-yocto-rt_routerstationpro = "43dcdffebb64d9ce2f5cdcb18bb74bd9c301133f"
#COMPATIBLE_MACHINE_routerstationpro = "routerstationpro"
#KMACHINE_routerstationpro = "routerstationpro"
#KBRANCH_routerstationpro = "yocto/standard/preempt-rt/base"
#SRCREV_machine_pn-linux-yocto-rt_routerstationpro = "7e1e5b6c8a13c615feb0d7b6d37988a094aae98f"

View File

@@ -5,11 +5,11 @@ KMACHINE_beagleboard = "yocto/standard/beagleboard"
SRCREV_machine_atom-pc ?= "1e18e44adbe79b846e382370eb29bc4b8cd5a1a0"
SRCREV_machine_routerstationpro ?= "8f38705810634a84326d3a3ebe9653951aa4bf61"
SRCREV_machine_routerstationpro ?= "ed0e03a8b04388a982141919da805392b7ca1c91"
SRCREV_machine_mpc8315e-rdb ?= "58ffdb8000e34d2ba7c3ef278b26680b0886e8b5"
SRCREV_machine_beagleboard ?= "6b4bf6173b0bd2d1619a8218bac66ebc4681dd35"
SRCREV_machine_beagleboard ?= "2bba211297d10047637b8f49abd2c5415480ce4d"
COMPATIBLE_MACHINE_mpc8315e-rdb = "mpc8315e-rdb"
COMPATIBLE_MACHINE_routerstationpro = "routerstationpro"
COMPATIBLE_MACHINE_beagleboard = "beagleboard"
# COMPATIBLE_MACHINE_routerstationpro = "routerstationpro"
# COMPATIBLE_MACHINE_beagleboard = "beagleboard"
COMPATIBLE_MACHINE_atom-pc = "atom-pc"

View File

@@ -73,7 +73,7 @@ oe_runconf () {
cfgscript="${S}/configure"
if [ -x "$cfgscript" ] ; then
bbnote "Running $cfgscript ${CONFIGUREOPTS} ${EXTRA_OECONF} $@"
${CACHED_CONFIGUREVARS} $cfgscript ${CONFIGUREOPTS} ${EXTRA_OECONF} "$@" || bbfatal "oe_runconf failed"
$cfgscript ${CONFIGUREOPTS} ${EXTRA_OECONF} "$@" || bbfatal "oe_runconf failed"
else
bbfatal "no configure script found at $cfgscript"
fi
@@ -122,9 +122,7 @@ autotools_do_configure() {
# We avoid this by taking a copy here and then files cannot disappear.
if [ -d ${STAGING_DATADIR}/aclocal ]; then
mkdir -p ${B}/aclocal-copy/
# for scratch build this directory can be empty
# so avoid cp's no files to copy error
cp -r ${STAGING_DATADIR}/aclocal/. ${B}/aclocal-copy/
cp ${STAGING_DATADIR}/aclocal/* ${B}/aclocal-copy/
acpaths="$acpaths -I ${B}/aclocal-copy/"
fi
# autoreconf is too shy to overwrite aclocal.m4 if it doesn't look

View File

@@ -7,6 +7,7 @@ inherit mirrors
inherit utils
inherit utility-tasks
inherit metadata_scm
inherit buildstats
inherit logging
OE_IMPORTS += "os sys time oe.path oe.utils oe.data oe.packagegroup"
@@ -398,8 +399,9 @@ python () {
dont_want_whitelist = (d.getVar('WHITELIST_%s' % dont_want_license, 1) or "").split()
if pn not in hosttools_whitelist and pn not in lgplv2_whitelist and pn not in dont_want_whitelist:
import re
this_license = d.getVar('LICENSE', 1)
if incompatible_license(d,dont_want_license):
if this_license and re.search(dont_want_license, this_license):
bb.note("SKIPPING %s because it's %s" % (pn, this_license))
raise bb.parse.SkipPackage("incompatible with license %s" % this_license)

View File

@@ -103,21 +103,12 @@ build_hddimg() {
grubefi_hddimg_populate
fi
# Determine the 1024 byte block count for the final image.
BLOCKS=`du --apparent-size -ks ${HDDDIR} | cut -f 1`
# Determine the block count for the final image
BLOCKS=`du -bks ${HDDDIR} | cut -f 1`
SIZE=`expr $BLOCKS + ${BOOTIMG_EXTRA_SPACE}`
# Ensure total sectors is an integral number of sectors per
# track or mcopy will complain. Sectors are 512 bytes, and and
# we generate images with 32 sectors per track. This calculation
# is done in blocks, which are twice the size of sectors, thus
# the 16 instead of 32.
SIZE=$(expr $SIZE + $(expr 16 - $(expr $SIZE % 16)))
IMG=${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}.hddimg
mkdosfs -n ${BOOTIMG_VOLUME_ID} -S 512 -C ${IMG} ${SIZE}
# Copy HDDDIR recursively into the image file directly
mcopy -i ${IMG} -s ${HDDDIR}/* ::/
mkdosfs -n ${BOOTIMG_VOLUME_ID} -d ${HDDDIR} \
-C ${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}.hddimg $SIZE
if [ "${PCBIOS}" = "1" ]; then
syslinux_hddimg_install

View File

@@ -158,7 +158,7 @@ python buildhistory_emit_pkghistory() {
last_pr = lastversion.pr
r = bb.utils.vercmp((pe, pv, pr), (last_pe, last_pv, last_pr))
if r < 0:
bb.error("Package version for package %s went backwards which would break package feeds from (%s:%s-%s to %s:%s-%s)" % (pkg, last_pe, last_pv, last_pr, pe, pv, pr))
bb.fatal("Package version for package %s went backwards which would break package feeds from (%s:%s-%s to %s:%s-%s)" % (pkg, last_pe, last_pv, last_pr, pe, pv, pr))
pkginfo = PackageInfo(pkg)
pkginfo.pe = pe
@@ -182,10 +182,21 @@ python buildhistory_emit_pkghistory() {
write_pkghistory(pkginfo, d)
if lastversion:
check_pkghistory(pkginfo, lastversion)
write_latestlink(pkg, pe, pv, pr, d)
}
def check_pkghistory(pkginfo, lastversion):
bb.debug(2, "Checking package history")
# RDEPENDS removed?
# PKG changed?
# Each file list of each package for file removals?
def write_recipehistory(rcpinfo, d):
bb.debug(2, "Writing recipe history")
@@ -314,10 +325,7 @@ buildhistory_get_imageinfo() {
# Add some configuration information
echo "${MACHINE}: ${IMAGE_BASENAME} configured for ${DISTRO} ${DISTRO_VERSION}" > ${BUILDHISTORY_DIR_IMAGE}/build-id
cat >> ${BUILDHISTORY_DIR_IMAGE}/build-id <<END
${@buildhistory_get_layers(d)}
END
echo "${@buildhistory_get_layers(d)}" >> ${BUILDHISTORY_DIR_IMAGE}/build-id
}
# By prepending we get in before the removal of packaging files
@@ -331,25 +339,11 @@ def buildhistory_get_layers(d):
buildhistory_commit() {
if [ ! -d ${BUILDHISTORY_DIR} ] ; then
# Code above that creates this dir never executed, so there can't be anything to commit
exit
fi
( cd ${BUILDHISTORY_DIR}/
# Initialise the repo if necessary
if [ ! -d .git ] ; then
git init -q
fi
# Ensure there are new/changed files to commit
repostatus=`git status --porcelain`
if [ "$repostatus" != "" ] ; then
git add ${BUILDHISTORY_DIR}/*
HOSTNAME=`cat /etc/hostname 2>/dev/null || echo unknown`
git commit ${BUILDHISTORY_DIR}/ -m "Build ${BUILDNAME} of ${DISTRO} ${DISTRO_VERSION} for machine ${MACHINE} on $HOSTNAME" --author "${BUILDHISTORY_COMMIT_AUTHOR}" > /dev/null
if [ "${BUILDHISTORY_PUSH_REPO}" != "" ] ; then
git push -q ${BUILDHISTORY_PUSH_REPO}
fi
git add ${BUILDHISTORY_DIR}/*
git commit ${BUILDHISTORY_DIR}/ -m "Build ${BUILDNAME} for machine ${MACHINE} configured for ${DISTRO} ${DISTRO_VERSION}" --author "${BUILDHISTORY_COMMIT_AUTHOR}" > /dev/null
if [ "${BUILDHISTORY_PUSH_REPO}" != "" ] ; then
git push -q ${BUILDHISTORY_PUSH_REPO}
fi) || true
}

View File

@@ -1,96 +0,0 @@
# Deploy sources for recipes for compliance with copyleft-style licenses
# Defaults to using symlinks, as it's a quick operation, and one can easily
# follow the links when making use of the files (e.g. tar with the -h arg).
#
# By default, includes all GPL and LGPL, and excludes CLOSED and Proprietary.
#
# vi:sts=4:sw=4:et
COPYLEFT_SOURCES_DIR ?= '${DEPLOY_DIR}/copyleft_sources'
COPYLEFT_LICENSE_INCLUDE ?= 'GPL* LGPL*'
COPYLEFT_LICENSE_INCLUDE[type] = 'list'
COPYLEFT_LICENSE_INCLUDE[doc] = 'Space separated list of globs which include licenses'
COPYLEFT_LICENSE_EXCLUDE ?= 'CLOSED Proprietary'
COPYLEFT_LICENSE_EXCLUDE[type] = 'list'
COPYLEFT_LICENSE_INCLUDE[doc] = 'Space separated list of globs which exclude licenses'
def copyleft_should_include(d):
"""Determine if this recipe's sources should be deployed for compliance"""
import ast
import oe.license
from fnmatch import fnmatchcase as fnmatch
if oe.utils.inherits(d, 'native', 'nativesdk', 'cross', 'crossdk'):
# not a target recipe
return
include = oe.data.typed_value('COPYLEFT_LICENSE_INCLUDE', d)
exclude = oe.data.typed_value('COPYLEFT_LICENSE_EXCLUDE', d)
def include_license(license):
if any(fnmatch(license, pattern) for pattern in exclude):
return False
if any(fnmatch(license, pattern) for pattern in include):
return True
return False
def choose_licenses(a, b):
"""Select the left option in an OR if all its licenses are to be included"""
if all(include_license(lic) for lic in a):
return a
else:
return b
try:
licenses = oe.license.flattened_licenses(d.getVar('LICENSE', True), choose_licenses)
except oe.license.InvalidLicense as exc:
bb.fatal('%s: %s' % (d.getVar('PF', True), exc))
except SyntaxError:
bb.warn("%s: Failed to parse it's LICENSE field." % (d.getVar('PF', True)))
return all(include_license(lic) for lic in licenses)
python do_prepare_copyleft_sources () {
"""Populate a tree of the recipe sources and emit patch series files"""
import os.path
import shutil
if not copyleft_should_include(d):
return
sources_dir = d.getVar('COPYLEFT_SOURCES_DIR', 1)
src_uri = d.getVar('SRC_URI', 1).split()
fetch = bb.fetch2.Fetch(src_uri, d)
ud = fetch.ud
locals = (fetch.localpath(url) for url in fetch.urls)
localpaths = [local for local in locals if not local.endswith('.bb')]
if not localpaths:
return
pf = d.getVar('PF', True)
dest = os.path.join(sources_dir, pf)
shutil.rmtree(dest, ignore_errors=True)
bb.mkdirhier(dest)
for path in localpaths:
os.symlink(path, os.path.join(dest, os.path.basename(path)))
patches = src_patches(d)
for patch in patches:
_, _, local, _, _, parm = bb.decodeurl(patch)
patchdir = parm.get('patchdir')
if patchdir:
series = os.path.join(dest, 'series.subdir.%s' % patchdir.replace('/', '_'))
else:
series = os.path.join(dest, 'series')
with open(series, 'a') as s:
s.write('%s -p%s\n' % (os.path.basename(local), parm['striplevel']))
}
addtask prepare_copyleft_sources after do_fetch before do_build
do_build[recrdeptask] += 'do_prepare_copyleft_sources'

View File

@@ -13,7 +13,6 @@ LIC_FILES_CHKSUM = "file://${COREBASE}/LICENSE;md5=3f40d7994397109285ec7b81fdeb3
# Available IMAGE_FEATURES:
#
# - apps-console-core
# - x11-mini - minimal environment for X11 server
# - x11-base - X11 server + minimal desktop
# - x11-sato - OpenedHand Sato environment
# - x11-netbook - Metacity based environment for netbooks
@@ -30,7 +29,6 @@ LIC_FILES_CHKSUM = "file://${COREBASE}/LICENSE;md5=3f40d7994397109285ec7b81fdeb3
# - debug-tweaks - makes an image suitable for development
#
PACKAGE_GROUP_apps-console-core = "task-core-apps-console"
PACKAGE_GROUP_x11-mini = "task-core-x11-mini"
PACKAGE_GROUP_x11-base = "task-core-x11-base"
PACKAGE_GROUP_x11-sato = "task-core-x11-sato"
PACKAGE_GROUP_x11-netbook = "task-core-x11-netbook"

View File

@@ -3,15 +3,15 @@ def gettext_dependencies(d):
return ""
if d.getVar('INHIBIT_DEFAULT_DEPS', True) and not oe.utils.inherits(d, 'cross-canadian'):
return ""
if oe.utils.inherits(d, 'native', 'cross'):
if oe.utils.inherits(d, 'native'):
return "gettext-minimal-native"
return d.getVar('DEPENDS_GETTEXT', False)
def gettext_oeconf(d):
if oe.utils.inherits(d, 'native', 'cross'):
if oe.utils.inherits(d, 'native'):
return '--disable-nls'
# Remove the NLS bits if USE_NLS is no.
if d.getVar('USE_NLS', True) == 'no' and not oe.utils.inherits(d, 'nativesdk', 'cross-canadian'):
if d.getVar('USE_NLS', True) == 'no' and not oe.utils.inherits(d, 'native', 'nativesdk', 'cross', 'cross-canadian'):
return '--disable-nls'
return "--enable-nls"

View File

@@ -21,8 +21,10 @@ GRUB_TIMEOUT ?= "10"
#FIXME: build this from the machine config
GRUB_OPTS ?= "serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1"
# FIXME: add EFI/BOOT to GRUB_HDDDIR once the mkdosfs subdir bug is resolved
# http://bugzilla.yoctoproject.org/show_bug.cgi?id=1783
EFIDIR = "/EFI/BOOT"
GRUB_HDDDIR = "${HDDDIR}${EFIDIR}"
GRUB_HDDDIR = "${HDDDIR}"
GRUB_ISODIR = "${ISODIR}${EFIDIR}"
grubefi_populate() {
@@ -51,12 +53,22 @@ grubefi_populate() {
grubefi_iso_populate() {
grubefi_populate ${GRUB_ISODIR}
# FIXUP the <EFIDIR> token in the config
# FIXME: This can be dropped once mkdosfs is fixed
sed -i "s@<EFIDIR>@${EFIDIR}@g" ${GRUB_ISODIR}/$(basename "${GRUBCFG}")
}
grubefi_hddimg_populate() {
grubefi_populate ${GRUB_HDDDIR}
# FIXUP the <EFIDIR> token in the config
# FIXME: This can be dropped once mkdosfs is fixed
sed -i "s@<EFIDIR>@@g" ${GRUB_HDDDIR}/$(basename "${GRUBCFG}")
}
# FIXME: The <EFIDIR> token can be replaced with ${EFIDIR} once the
# mkdosfs bug is resolved.
python build_grub_cfg() {
import sys
@@ -109,7 +121,7 @@ python build_grub_cfg() {
bb.data.update_data(localdata)
cfgfile.write('\nmenuentry \'%s\'{\n' % (label))
cfgfile.write('linux ${EFIDIR}/vmlinuz LABEL=%s' % (label))
cfgfile.write('linux <EFIDIR>/vmlinuz LABEL=%s' % (label))
append = localdata.getVar('APPEND', True)
initrd = localdata.getVar('INITRD', True)
@@ -119,7 +131,7 @@ python build_grub_cfg() {
cfgfile.write('\n')
if initrd:
cfgfile.write('initrd ${EFIDIR}/initrd')
cfgfile.write('initrd <EFIDIR>/initrd')
cfgfile.write('\n}\n')
cfgfile.close()

View File

@@ -14,7 +14,7 @@ GDK_PIXBUF_MODULEDIR=${libdir}/gdk-pixbuf-2.0/2.10.0/loaders gdk-pixbuf-query-lo
for icondir in /usr/share/icons/* ; do
if [ -d $icondir ] ; then
gtk-update-icon-cache -fqt $icondir
gtk-update-icon-cache -qt $icondir
fi
done
}

View File

@@ -211,10 +211,6 @@ do_compile_prepend() {
set_icecc_env
}
do_compile_kernelmodules_prepend() {
set_icecc_env
}
#do_install_prepend() {
# set_icecc_env
#}

View File

@@ -60,7 +60,7 @@ mklibs_optimize_image_doit() {
mklibs_optimize_image() {
for img in ${MKLIBS_OPTIMIZED_IMAGES}
do
if [ "${img}" = "${PN}" ] || [ "${img}" = "all" ]
if [ "${img}" == "${PN}" ] || [ "${img}" == "all" ]
then
mklibs_optimize_image_doit
break

View File

@@ -119,7 +119,7 @@ ROOTFS_POSTPROCESS_COMMAND ?= ""
# some default locales
IMAGE_LINGUAS ?= "de-de fr-fr en-gb"
LINGUAS_INSTALL ?= "${@" ".join(map(lambda s: "locale-base-%s" % s, d.getVar('IMAGE_LINGUAS', 1).split()))}"
LINGUAS_INSTALL = "${@" ".join(map(lambda s: "locale-base-%s" % s, d.getVar('IMAGE_LINGUAS', 1).split()))}"
PSEUDO_PASSWD = "${IMAGE_ROOTFS}"
@@ -134,22 +134,15 @@ do_rootfs[umask] = 022
fakeroot do_rootfs () {
#set -x
# When use the rpm incremental image generation, don't remove the rootfs
if [ "${INC_RPM_IMAGE_GEN}" != "1" -o "${IMAGE_PKGTYPE}" != "rpm" ]; then
rm -rf ${IMAGE_ROOTFS}
fi
rm -rf ${IMAGE_ROOTFS}
rm -rf ${MULTILIB_TEMP_ROOTFS}
mkdir -p ${IMAGE_ROOTFS}
mkdir -p ${DEPLOY_DIR_IMAGE}
cp ${COREBASE}/meta/files/deploydir_readme.txt ${DEPLOY_DIR_IMAGE}/README_-_DO_NOT_DELETE_FILES_IN_THIS_DIRECTORY.txt
# If "${IMAGE_ROOTFS}/dev" exists, then the device had been made by
# the previous build
if [ "${USE_DEVFS}" != "1" -a ! -r "${IMAGE_ROOTFS}/dev" ]; then
if [ "${USE_DEVFS}" != "1" ]; then
for devtable in ${@get_devtable_list(d)}; do
# Always return ture since there maybe already one when use the
# incremental image generation
makedevs -r ${IMAGE_ROOTFS} -D $devtable
done
fi
@@ -262,43 +255,6 @@ multilib_sanity_check() {
echo $@ | python ${MULTILIB_CHECK_FILE}
}
get_split_linguas() {
for translation in ${IMAGE_LINGUAS}; do
translation_split=$(echo ${translation} | awk -F '-' '{print $1}')
echo ${translation}
echo ${translation_split}
done | sort | uniq
}
rootfs_install_all_locales() {
# Generate list of installed packages for which additional locale packages might be available
INSTALLED_PACKAGES=`list_installed_packages | egrep -v -- "(-locale-|^locale-base-|-dev$|-doc$|^kernel|^glibc|^ttf|^task|^perl|^python)"`
# Generate a list of locale packages that exist
SPLIT_LINGUAS=`get_split_linguas`
PACKAGES_TO_INSTALL=""
for lang in $SPLIT_LINGUAS; do
for pkg in $INSTALLED_PACKAGES; do
existing_pkg=`rootfs_check_package_exists $pkg-locale-$lang`
if [ "$existing_pkg" != "" ]; then
PACKAGES_TO_INSTALL="$PACKAGES_TO_INSTALL $existing_pkg"
fi
done
done
# Install the packages, if any
if [ "$PACKAGES_TO_INSTALL" != "" ]; then
rootfs_install_packages $PACKAGES_TO_INSTALL
fi
# Workaround for broken shell function dependencies
if false ; then
get_split_linguas
list_installed_packages
rootfs_check_package_exists
fi
}
# set '*' as the root password so the images
# can decide if they want it or not
zap_root_password () {

View File

@@ -23,7 +23,7 @@ def get_imagecmds(d):
runimagecmd () {
# Image generation code for image type ${type}
ROOTFS_SIZE=`du -ks ${IMAGE_ROOTFS}|awk '{base_size = ($1 * ${IMAGE_OVERHEAD_FACTOR}); OFMT = "%.0f" ; print ((base_size > ${IMAGE_ROOTFS_SIZE} ? base_size : ${IMAGE_ROOTFS_SIZE}) + ${IMAGE_ROOTFS_EXTRA_SPACE}) }'`
ROOTFS_SIZE=`du -ks ${IMAGE_ROOTFS}|awk '{size = $1 * ${IMAGE_OVERHEAD_FACTOR} + ${IMAGE_ROOTFS_EXTRA_SPACE}; OFMT = "%.0f" ; print (size > ${IMAGE_ROOTFS_SIZE} ? size : ${IMAGE_ROOTFS_SIZE}) }'`
${cmd}
cd ${DEPLOY_DIR_IMAGE}/
rm -f ${DEPLOY_DIR_IMAGE}/${IMAGE_LINK_NAME}.${type}
@@ -34,9 +34,7 @@ runimagecmd () {
XZ_COMPRESSION_LEVEL ?= "-e -9"
XZ_INTEGRITY_CHECK ?= "crc32"
IMAGE_CMD_jffs2 = "mkfs.jffs2 --root=${IMAGE_ROOTFS} --faketime --output=${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}.rootfs.jffs2 -n ${EXTRA_IMAGECMD}"
IMAGE_CMD_sum.jffs2 = "${IMAGE_CMD_jffs2} && sumtool -i ${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}.rootfs.jffs2 \
-o ${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}.rootfs.sum.jffs2 -n ${EXTRA_IMAGECMD}"
IMAGE_CMD_jffs2 = "mkfs.jffs2 --root=${IMAGE_ROOTFS} --faketime --output=${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}.rootfs.jffs2 ${EXTRA_IMAGECMD}"
IMAGE_CMD_cramfs = "mkcramfs ${IMAGE_ROOTFS} ${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}.rootfs.cramfs ${EXTRA_IMAGECMD}"
@@ -110,14 +108,8 @@ IMAGE_CMD_tar = "cd ${IMAGE_ROOTFS} && tar -cvf ${DEPLOY_DIR_IMAGE}/${IMAGE_NAME
IMAGE_CMD_tar.gz = "cd ${IMAGE_ROOTFS} && tar -zcvf ${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}.rootfs.tar.gz ."
IMAGE_CMD_tar.bz2 = "cd ${IMAGE_ROOTFS} && tar -jcvf ${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}.rootfs.tar.bz2 ."
IMAGE_CMD_tar.xz = "cd ${IMAGE_ROOTFS} && tar --xz -cvf ${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}.rootfs.tar.xz ."
IMAGE_CMD_cpio () {
touch ${IMAGE_ROOTFS}/init
cd ${IMAGE_ROOTFS} && (find . | cpio -o -H newc >${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}.rootfs.cpio)
}
IMAGE_CMD_cpio.gz () {
touch ${IMAGE_ROOTFS}/init
cd ${IMAGE_ROOTFS} && (find . | cpio -o -H newc | gzip -c -9 >${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}.rootfs.cpio.gz)
}
IMAGE_CMD_cpio = "cd ${IMAGE_ROOTFS} && (find . | cpio -o -H newc >${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}.rootfs.cpio)"
IMAGE_CMD_cpio.gz = "cd ${IMAGE_ROOTFS} && (find . | cpio -o -H newc | gzip -c -9 >${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}.rootfs.cpio.gz)"
IMAGE_CMD_cpio.xz = "type cpio >/dev/null; cd ${IMAGE_ROOTFS} && (find . | cpio -o -H newc | xz -c ${XZ_COMPRESSION_LEVEL} --check=${XZ_INTEGRITY_CHECK} > ${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}.rootfs.cpio.xz) ${EXTRA_IMAGECMD}"
IMAGE_CMD_cpio.lzma = "type cpio >/dev/null; cd ${IMAGE_ROOTFS} && (find . | cpio -o -H newc | xz --format=lzma -c ${XZ_COMPRESSION_LEVEL} --check=${XZ_INTEGRITY_CHECK} >${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}.rootfs.cpio.lzma) ${EXTRA_IMAGECMD}"
@@ -146,7 +138,6 @@ EXTRA_IMAGECMD_btrfs ?= ""
IMAGE_DEPENDS = ""
IMAGE_DEPENDS_jffs2 = "mtd-utils-native"
IMAGE_DEPENDS_sum.jffs2 = "mtd-utils-native"
IMAGE_DEPENDS_cramfs = "cramfs-native"
IMAGE_DEPENDS_ext2 = "genext2fs-native"
IMAGE_DEPENDS_ext2.gz = "genext2fs-native"
@@ -166,4 +157,4 @@ IMAGE_DEPENDS_ubi = "mtd-utils-native"
IMAGE_DEPENDS_ubifs = "mtd-utils-native"
# This variable is available to request which values are suitable for IMAGE_FSTYPES
IMAGE_TYPES = "jffs2 sum.jffs2 cramfs ext2 ext2.gz ext2.bz2 ext3 ext3.gz ext2.lzma btrfs live squashfs squashfs-lzma ubi tar tar.gz tar.bz2 tar.xz cpio cpio.gz cpio.xz cpio.lzma"
IMAGE_TYPES = "jffs2 cramfs ext2 ext2.gz ext2.bz2 ext3 ext3.gz ext2.lzma live squashfs squashfs-lzma ubi tar tar.gz tar.bz2 tar.xz cpio cpio.gz cpio.xz cpio.lzma"

View File

@@ -11,10 +11,6 @@
# -Check if packages contains .debug directories or .so files
# where they should be in -dev or -dbg
# -Check if config.log contains traces to broken autoconf tests
# -Ensure that binaries in base_[bindir|sbindir|libdir] do not link
# into exec_prefix
# -Check that scripts in base_[bindir|sbindir|libdir] do not reference
# files under exec_prefix
#
@@ -23,14 +19,9 @@
# The package.bbclass can help us here.
#
inherit package
PACKAGE_DEPENDS += "pax-utils-native desktop-file-utils-native ${QADEPENDS}"
PACKAGE_DEPENDS += "pax-utils-native desktop-file-utils-native"
PACKAGEFUNCS += " do_package_qa "
# unsafe-references-in-binaries requires prelink-rtld from
# prelink-native, but we don't want this DEPENDS for -native builds
QADEPENDS = "prelink-native"
QADEPENDS_virtclass-native = ""
QADEPENDS_virtclass-nativesdk = ""
#
# dictionary for elf headers
@@ -109,7 +100,7 @@ def package_qa_get_machine_dict():
# Currently not being used by default "desktop"
WARN_QA ?= "ldflags useless-rpaths rpaths unsafe-references-in-binaries unsafe-references-in-scripts"
WARN_QA ?= "ldflags useless-rpaths rpaths"
ERROR_QA ?= "dev-so debug-deps dev-deps debug-files arch la2 pkgconfig la perms"
def package_qa_clean_path(path,d):
@@ -210,100 +201,6 @@ def package_qa_check_perm(path,name,d, elf, messages):
"""
return
QAPATHTEST[unsafe-references-in-binaries] = "package_qa_check_unsafe_references_in_binaries"
def package_qa_check_unsafe_references_in_binaries(path, name, d, elf, messages):
"""
Ensure binaries in base_[bindir|sbindir|libdir] do not link to files under exec_prefix
"""
if unsafe_references_skippable(path, name, d):
return
if elf:
import subprocess as sub
pn = d.getVar('PN', True)
exec_prefix = d.getVar('exec_prefix', True)
sysroot_path = d.getVar('STAGING_DIR_TARGET', True)
sysroot_path_usr = sysroot_path + exec_prefix
try:
ldd_output = bb.process.Popen(["prelink-rtld", "--root", sysroot_path, path], stdout=sub.PIPE).stdout.read()
except bb.process.CmdError:
error_msg = pn + ": prelink-rtld aborted when processing %s" % path
package_qa_handle_error("unsafe-references-in-binaries", error_msg, d)
return False
if sysroot_path_usr in ldd_output:
error_msg = pn + ": %s links to something under exec_prefix" % path
package_qa_handle_error("unsafe-references-in-binaries", error_msg, d)
error_msg = "ldd reports: %s" % ldd_output
package_qa_handle_error("unsafe-references-in-binaries", error_msg, d)
return False
QAPATHTEST[unsafe-references-in-scripts] = "package_qa_check_unsafe_references_in_scripts"
def package_qa_check_unsafe_references_in_scripts(path, name, d, elf, messages):
"""
Warn if scripts in base_[bindir|sbindir|libdir] reference files under exec_prefix
"""
if unsafe_references_skippable(path, name, d):
return
if not elf:
import stat
pn = d.getVar('PN', True)
# Ensure we're checking an executable script
statinfo = os.stat(path)
if bool(statinfo.st_mode & stat.S_IXUSR):
# grep shell scripts for possible references to /exec_prefix/
exec_prefix = d.getVar('exec_prefix', True)
statement = "grep -e '%s/' %s > /dev/null" % (exec_prefix, path)
if os.system(statement) == 0:
error_msg = pn + ": Found a reference to %s/ in %s" % (exec_prefix, path)
package_qa_handle_error("unsafe-references-in-scripts", error_msg, d)
error_msg = "Shell scripts in base_bindir and base_sbindir should not reference anything in exec_prefix"
package_qa_handle_error("unsafe-references-in-scripts", error_msg, d)
def unsafe_references_skippable(path, name, d):
if bb.data.inherits_class('native', d) or bb.data.inherits_class('nativesdk', d):
return True
if "-dbg" in name or "-dev" in name:
return True
# Other package names to skip:
if name.startswith("kernel-module-"):
return True
# Skip symlinks
if os.path.islink(path):
return True
# Skip unusual rootfs layouts which make these tests irrelevant
exec_prefix = d.getVar('exec_prefix', True)
if exec_prefix == "":
return True
pkgdest = d.getVar('PKGDEST', True)
pkgdest = pkgdest + "/" + name
pkgdest = os.path.abspath(pkgdest)
base_bindir = pkgdest + d.getVar('base_bindir', True)
base_sbindir = pkgdest + d.getVar('base_sbindir', True)
base_libdir = pkgdest + d.getVar('base_libdir', True)
bindir = pkgdest + d.getVar('bindir', True)
sbindir = pkgdest + d.getVar('sbindir', True)
libdir = pkgdest + d.getVar('libdir', True)
if base_bindir == bindir and base_sbindir == sbindir and base_libdir == libdir:
return True
# Skip files not in base_[bindir|sbindir|libdir]
path = os.path.abspath(path)
if not (base_bindir in path or base_sbindir in path or base_libdir in path):
return True
return False
QAPATHTEST[arch] = "package_qa_check_arch"
def package_qa_check_arch(path,name,d, elf, messages):
"""


@@ -1,15 +1,5 @@
S = "${WORKDIR}/linux"
def find_patches(d):
patches=src_patches(d)
patch_list=[]
for p in patches:
_, _, local, _, _, _ = bb.decodeurl(p)
patch_list.append(local)
return patch_list
do_patch() {
cd ${S}
if [ -f ${WORKDIR}/defconfig ]; then
@@ -41,67 +31,14 @@ do_patch() {
exit 1
fi
patches="${@" ".join(find_patches(d))}"
# This loops through all patches, and looks for directories that do
# not already have feature descriptions. If a directory doesn't have
# a feature description, we switch to the ${WORKDIR} variant of the
# feature (so we can write to it) and generate a feature for those
# patches. The generated feature will respect the patch order.
#
# By leaving source patch directories that already have .scc files
# as-is it means that a SRC_URI can only contain a .scc file, and all
# patches that the .scc references will be picked up, without having
# to be repeated on the SRC_URI line .. which is more intuitive
set +e
patch_dirs=
for p in ${patches}; do
pdir=`dirname ${p}`
pname=`basename ${p}`
scc=`find ${pdir} -maxdepth 1 -name '*.scc'`
if [ -z "${scc}" ]; then
# there is no scc file. We need to switch to someplace that we know
# we can create content (the workdir)
workdir_subdir=`echo ${pdir} | sed "s%^.*/${PN}%%" | sed 's%^/%%'`
suggested_dir="${WORKDIR}/${workdir_subdir}"
echo ${gen_feature_dirs} | grep -q ${suggested_dir}
if [ $? -ne 0 ]; then
gen_feature_dirs="${gen_feature_dirs} ${suggested_dir}"
fi
# we call the file *.scc_tmp, so the test above will continue to find
# that patches from a common subdirectory don't have an scc file, and
# they'll be placed, in order, into this file. We'll rename it later.
echo "patch ${pname}" >> ${suggested_dir}/gen_${workdir_subdir}_desc.scc_tmp
else
suggested_dir="${pdir}"
fi
echo ${patch_dirs} | grep -q ${suggested_dir}
if [ $? -ne 0 ]; then
patch_dirs="${patch_dirs} ${suggested_dir}"
fi
done
# go through the patch directories and look for any scc feature files
# that were constructed above. If one is found, rename it to ".scc" so
# the kernel patching can see it.
for pdir in ${patch_dirs}; do
scc=`find ${pdir} -maxdepth 1 -name '*.scc_tmp'`
if [ -n "${scc}" ]; then
new_scc=`echo ${scc} | sed 's/_tmp//'`
mv -f ${scc} ${new_scc}
fi
done
# add any explicitly referenced features onto the end of the feature
# list that is passed to the kernel build scripts.
# updates or generates the target description
if [ -n "${KERNEL_FEATURES}" ]; then
for feat in ${KERNEL_FEATURES}; do
addon_features="$addon_features --feature $feat"
done
fi
# updates or generates the target description
updateme --branch ${kbranch} -DKDESC=${KMACHINE}:${LINUX_KERNEL_TYPE} \
${addon_features} ${ARCH} ${KMACHINE} ${patch_dirs}
${addon_features} ${ARCH} ${KMACHINE} ${WORKDIR}
if [ $? -ne 0 ]; then
echo "ERROR. Could not update ${kbranch}"
exit 1
@@ -187,8 +124,8 @@ python do_kernel_configcheck() {
bb.plain("NOTE: validating kernel configuration")
pathprefix = "export PATH=%s:%s; " % (d.getVar('PATH', True), "${S}/scripts/util/")
cmd = bb.data.expand("cd ${B}/..; kconf_check -config- ${B} ${S} ${B} ${KBRANCH}",d )
pathprefix = "export PATH=%s; " % d.getVar('PATH', True)
cmd = bb.data.expand("cd ${B}/..; ${S}/scripts/util/kconf_check -config- ${B} ${S} ${B} ${KBRANCH}",d )
ret, result = commands.getstatusoutput("%s%s" % (pathprefix, cmd))
bb.plain( "%s" % result )
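The do_patch logic in this hunk keys off .scc feature descriptions: a SRC_URI may list bare patches (a feature file is generated for them in ${WORKDIR}) or just an .scc file whose referenced patches are picked up implicitly. Hypothetical recipe fragments for the two styles:

SRC_URI += "file://fix-widget.patch"
SRC_URI += "file://myfeature.scc"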


@@ -89,7 +89,7 @@ kernel_do_compile() {
do_compile_kernelmodules() {
unset CFLAGS CPPFLAGS CXXFLAGS LDFLAGS MACHINE
if (grep -q -i -e '^CONFIG_MODULES=y$' .config); then
oe_runmake ${PARALLEL_MAKE} modules CC="${KERNEL_CC}" LD="${KERNEL_LD}"
oe_runmake modules CC="${KERNEL_CC}" LD="${KERNEL_LD}"
else
bbnote "no modules to compile"
fi
@@ -223,11 +223,11 @@ do_savedefconfig() {
do_savedefconfig[nostamp] = "1"
addtask savedefconfig after do_configure
pkg_postinst_kernel-base () {
pkg_postinst_kernel () {
cd /${KERNEL_IMAGEDEST}; update-alternatives --install /${KERNEL_IMAGEDEST}/${KERNEL_IMAGETYPE} ${KERNEL_IMAGETYPE} ${KERNEL_IMAGETYPE}-${KERNEL_VERSION} ${KERNEL_PRIORITY} || true
}
pkg_postrm_kernel-base () {
pkg_postrm_kernel () {
cd /${KERNEL_IMAGEDEST}; update-alternatives --remove ${KERNEL_IMAGETYPE} ${KERNEL_IMAGETYPE}-${KERNEL_VERSION} || true
}


@@ -1,12 +1,17 @@
# Populates LICENSE_DIRECTORY as set in distro config with the license files as set by
# LIC_FILES_CHKSUM.
# LIC_FILES_CHKSUM.
# TODO:
# - There is a real issue revolving around license naming standards.
# - We should also enable the ability to put the generated license directory onto the
# rootfs
# - Gather up more generic licenses
# - There is a real issue revolving around license naming standards. See license names
# licenses.conf and compare them to the license names in the recipes. You'll see some
# differences and that should be corrected.
LICENSE_DIRECTORY ??= "${DEPLOY_DIR}/licenses"
LICSSTATEDIR = "${WORKDIR}/license-destdir/"
addtask populate_lic after do_patch before do_package
addtask populate_lic after do_patch before do_package
do_populate_lic[dirs] = "${LICSSTATEDIR}/${PN}"
do_populate_lic[cleandirs] = "${LICSSTATEDIR}"
@@ -15,121 +20,35 @@ do_populate_lic[cleandirs] = "${LICSSTATEDIR}"
# break the non-standardized license names that we find in LICENSE, we'll set
# up a bunch of VarFlags to accommodate non-SPDX license names.
#
# We should really discuss standardizing this field, but that's a longer term goal.
# We should really discuss standardizing this field, but that's a longer term goal.
# For now, we can do this and it should grab the most common LICENSE naming variations.
#
# Changing GPL mapping to GPL-2 as it's not very likely to be GPL-1
# We should NEVER have a GPL/LGPL without a version!!!!
# Any mapping to MPL/LGPL/GPL should be fixed
# see: https://wiki.yoctoproject.org/wiki/License_Audit
# GPL variations
SPDXLICENSEMAP[GPL-2] = "GPL-2.0"
SPDXLICENSEMAP[GPLv2] = "GPL-2.0"
SPDXLICENSEMAP[GPLv2.0] = "GPL-2.0"
SPDXLICENSEMAP[GPL-3] = "GPL-3.0"
SPDXLICENSEMAP[GPLv3] = "GPL-3.0"
SPDXLICENSEMAP[GPLv3.0] = "GPL-3.0"
#GPL variations
SPDXLICENSEMAP[GPL] = "GPL-1"
SPDXLICENSEMAP[GPLv2] = "GPL-2"
SPDXLICENSEMAP[GPLv3] = "GPL-3"
#LGPL variations
SPDXLICENSEMAP[LGPLv2] = "LGPL-2.0"
SPDXLICENSEMAP[LGPL] = "LGPL-2"
SPDXLICENSEMAP[LGPLv2] = "LGPL-2"
SPDXLICENSEMAP[LGPL2.1] = "LGPL-2.1"
SPDXLICENSEMAP[LGPLv2.1] = "LGPL-2.1"
SPDXLICENSEMAP[LGPLv3] = "LGPL-3.0"
SPDXLICENSEMAP[LGPLv3] = "LGPL-3"
#MPL variations
SPDXLICENSEMAP[MPL-1] = "MPL-1.0"
SPDXLICENSEMAP[MPLv1] = "MPL-1.0"
SPDXLICENSEMAP[MPLv1.1] = "MPL-1.1"
SPDXLICENSEMAP[MPL] = "MPL-1"
SPDXLICENSEMAP[MPLv1] = "MPL-1"
SPDXLICENSEMAP[MPLv1.1] = "MPL-1"
#MIT variations
SPDXLICENSEMAP[MIT-X] = "MIT"
SPDXLICENSEMAP[MIT-style] = "MIT"
#Openssl variations
SPDXLICENSEMAP[openssl] = "OpenSSL"
#Python variations
SPDXLICENSEMAP[PSF] = "Python-2.0"
SPDXLICENSEMAP[PSFv2] = "Python-2.0"
SPDXLICENSEMAP[Python-2] = "Python-2.0"
#Apache variations
SPDXLICENSEMAP[Apachev2] = "Apache-2.0"
SPDXLICENSEMAP[Apache-2] = "Apache-2.0"
#Artistic variations
SPDXLICENSEMAP[Artisticv1] = "Artistic-1.0"
SPDXLICENSEMAP[Artistic-1] = "Artistic-1.0"
#Academic variations
SPDXLICENSEMAP[AFL-2] = "AFL-2.0"
SPDXLICENSEMAP[AFL-1] = "AFL-1.2"
SPDXLICENSEMAP[AFLv2] = "AFL-2.0"
SPDXLICENSEMAP[AFLv1] = "AFL-1.2"
#Other variations
SPDXLICENSEMAP[EPLv1.0] = "EPL-1.0"
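Since SPDXLICENSEMAP is an ordinary varflag table, layers can extend it in the same style; a hypothetical distro-level addition mapping a local license name onto a shipped generic:

SPDXLICENSEMAP[BSD-style] = "BSD"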
license_create_manifest() {
mkdir -p ${LICENSE_DIRECTORY}/${IMAGE_NAME}
# Get list of installed packages
list_installed_packages | grep -v "locale" |sort > ${LICENSE_DIRECTORY}/${IMAGE_NAME}/package.manifest
INSTALLED_PKGS=`cat ${LICENSE_DIRECTORY}/${IMAGE_NAME}/package.manifest`
# list of installed packages is broken for deb
for pkg in ${INSTALLED_PKGS}; do
# not the best way to do this, but licenses are not arch dependent iirc
files=`find ${TMPDIR}/pkgdata/*/runtime -name ${pkg}| head -1`
for filename in $files; do
pkged_pn="$(sed -n 's/^PN: //p' ${filename})"
pkged_lic="$(sed -n '/^LICENSE: /{ s/^LICENSE: //; s/[+|&()*]/ /g; s/ */ /g; p }' ${filename})"
# check to see if the package name exists in the manifest. if so, bail.
if ! grep -q "PACKAGE NAME: ${pkg}" ${filename}; then
# exclude local recipes
if [ ! "${pkged_pn}" = "*locale*" ]; then
echo "PACKAGE NAME:" ${pkg} >> ${LICENSE_DIRECTORY}/${IMAGE_NAME}/license.manifest
echo "RECIPE NAME:" ${pkged_pn} >> ${LICENSE_DIRECTORY}/${IMAGE_NAME}/license.manifest
echo "LICENSE: " >> ${LICENSE_DIRECTORY}/${IMAGE_NAME}/license.manifest
for lic in ${pkged_lic}; do
if [ -e "${LICENSE_DIRECTORY}/${pkged_pn}/generic_${lic}" ]; then
echo ${lic}|sed s'/generic_//'g >> ${LICENSE_DIRECTORY}/${IMAGE_NAME}/license.manifest
else
echo "WARNING: The license listed, " ${lic} " was not in the licenses collected for " ${pkged_pn}>> ${LICENSE_DIRECTORY}/${IMAGE_NAME}/license.manifest
fi
done
echo "" >> ${LICENSE_DIRECTORY}/${IMAGE_NAME}/license.manifest
fi
fi
done
done
# Two options here:
# - Just copy the manifest
# - Copy the manifest and the license directories
# This will make your image a bit larger, however
# if you are concerned about license compliance
# and delivery this should cover all your bases
if [ -n "${COPY_LIC_MANIFEST}" ]; then
mkdir -p ${IMAGE_ROOTFS}/usr/share/common-licenses/
cp ${LICENSE_DIRECTORY}/${IMAGE_NAME}/license.manifest ${IMAGE_ROOTFS}/usr/share/common-licenses/license.manifest
if [ -n "${COPY_LIC_DIRS}" ]; then
for pkg in ${INSTALLED_PKGS}; do
mkdir -p ${IMAGE_ROOTFS}/usr/share/common-licenses/${pkg}
for lic in `ls ${LICENSE_DIRECTORY}/${pkged_pn}`; do
# Really don't need to copy the generics as they're
# represented in the manifest and in the actual pkg licenses
# Doing so would make your image quite a bit larger
if [ ! ${lic} = "generic_*" ]; then
cp ${LICENSE_DIRECTORY}/${pkged_pn}/${lic} ${IMAGE_ROOTFS}/usr/share/common-licenses/${pkg}/${lic}
fi
done
done
fi
fi
}
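Both copy behaviours are opt-in; the shell tests above only check for a non-empty value, so a minimal local.conf sketch is:

COPY_LIC_MANIFEST = "1"
COPY_LIC_DIRS = "1"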
SPDXLICENSEMAP[AFL2.1] = "AFL-2"
SPDXLICENSEMAP[EPLv1.0] = "EPL-1"
python do_populate_lic() {
"""
@@ -138,7 +57,67 @@ python do_populate_lic() {
import os
import bb
import shutil
import oe.license
import ast
class LicenseVisitor(ast.NodeVisitor):
def generic_visit(self, node):
ast.NodeVisitor.generic_visit(self, node)
def visit_Str(self, node):
#
# Until I figure out what to do with
# the two modifiers I support (or greater = +
# and "with exceptions" being *
# we'll just strip out the modifier and put
# the base license.
find_license(node.s.replace("+", "").replace("*", ""))
ast.NodeVisitor.generic_visit(self, node)
def visit_BinOp(self, node):
op = node.op
if isinstance(op, ast.BitOr):
x = LicenseVisitor()
x.visit(node.left)
x.visit(node.right)
else:
ast.NodeVisitor.generic_visit(self, node)
def copy_license(source, destination, file_name):
try:
bb.copyfile(os.path.join(source, file_name), os.path.join(destination, file_name))
except:
bb.warn("%s: No generic license file exists for: %s at %s" % (pn, file_name, source))
pass
def link_license(source, destination, file_name):
try:
os.symlink(os.path.join(source, file_name), os.path.join(destination, "generic_" + file_name))
except:
bb.warn("%s: Could not symlink: %s at %s to %s at %s" % (pn, file_name, source, file_name, destination))
pass
def find_license(license_type):
try:
bb.mkdirhier(gen_lic_dest)
except:
pass
# If the generic does not exist we need to check to see if there is an SPDX mapping to it
if not os.path.isfile(os.path.join(generic_directory, license_type)):
if d.getVarFlag('SPDXLICENSEMAP', license_type) != None:
# Great, there is an SPDXLICENSEMAP. We can copy!
bb.note("We need to use a SPDXLICENSEMAP for %s" % (license_type))
spdx_generic = d.getVarFlag('SPDXLICENSEMAP', license_type)
copy_license(generic_directory, gen_lic_dest, spdx_generic)
link_license(gen_lic_dest, destdir, spdx_generic)
else:
# And here is where we warn people that their licenses are lousy
bb.warn("%s: No generic license file exists for: %s at %s" % (pn, license_type, generic_directory))
bb.warn("%s: There is also no SPDXLICENSEMAP for this license type: %s at %s" % (pn, license_type, generic_directory))
pass
elif os.path.isfile(os.path.join(generic_directory, license_type)):
copy_license(generic_directory, gen_lic_dest, license_type)
link_license(gen_lic_dest, destdir, license_type)
# All the license types for the package
license_types = d.getVar('LICENSE', True)
@@ -151,59 +130,7 @@ python do_populate_lic() {
srcdir = d.getVar('S', True)
# Directory we store the generic licenses as set in the distro configuration
generic_directory = d.getVar('COMMON_LICENSE_DIR', True)
license_source_dirs = []
license_source_dirs.append(generic_directory)
try:
additional_lic_dirs = d.getVar('LICENSE_DIR', True).split()
for lic_dir in additional_lic_dirs:
license_source_dirs.append(lic_dir)
except:
pass
class FindVisitor(oe.license.LicenseVisitor):
def visit_Str(self, node):
#
# Until I figure out what to do with
# the two modifiers I support (or greater = +
# and "with exceptions" being *
# we'll just strip out the modifier and put
# the base license.
find_license(node.s.replace("+", "").replace("*", ""))
self.generic_visit(node)
def find_license(license_type):
try:
bb.mkdirhier(gen_lic_dest)
except:
pass
spdx_generic = None
license_source = None
# If the generic does not exist we need to check to see if there is an SPDX mapping to it
for lic_dir in license_source_dirs:
if not os.path.isfile(os.path.join(lic_dir, license_type)):
if d.getVarFlag('SPDXLICENSEMAP', license_type) != None:
# Great, there is an SPDXLICENSEMAP. We can copy!
bb.debug(1, "We need to use a SPDXLICENSEMAP for %s" % (license_type))
spdx_generic = d.getVarFlag('SPDXLICENSEMAP', license_type)
license_source = lic_dir
break
elif os.path.isfile(os.path.join(lic_dir, license_type)):
spdx_generic = license_type
license_source = lic_dir
break
if spdx_generic and license_source:
# we really should copy to generic_ + spdx_generic, however, that ends up messing the manifest
# audit up. This should be fixed in emit_pkgdata (or we actually go and fix all the recipes)
ret = bb.copyfile(os.path.join(license_source, spdx_generic), os.path.join(os.path.join(d.getVar('LICSSTATEDIR', True), pn), "generic_" + license_type))
# If the copy didn't occur, something horrible went wrong and we fail out
if not ret:
bb.warn("%s for %s could not be copied for some reason. It may not exist. WARN for now." % (spdx_generic, pn))
else:
# And here is where we warn people that their licenses are lousy
bb.warn("%s: No generic license file exists for: %s in any provider" % (pn, license_type))
pass
try:
bb.mkdirhier(destdir)
except:
@@ -224,72 +151,32 @@ python do_populate_lic() {
srclicfile = os.path.join(srcdir, path)
ret = bb.copyfile(srclicfile, os.path.join(destdir, os.path.basename(path)))
# If the copy didn't occur, something horrible went wrong and we fail out
if not ret:
if ret is False or ret == 0:
bb.warn("%s could not be copied for some reason. It may not exist. WARN for now." % srclicfile)
gen_lic_dest = os.path.join(d.getVar('LICENSE_DIRECTORY', True), "common-licenses")
clean_licenses = ""
v = FindVisitor()
try:
v.visit_string(license_types)
except oe.license.InvalidLicense as exc:
bb.fatal('%s: %s' % (d.getVar('PF', True), exc))
except SyntaxError:
bb.warn("%s: Failed to parse it's LICENSE field." % (d.getVar('PF', True)))
}
def incompatible_license(d,dont_want_license):
"""
This function checks if a package has only incompatible licenses. It also takes the 'or'
operand into consideration.
"""
import re
import oe.license
from fnmatch import fnmatchcase as fnmatch
dont_want_licenses = []
dont_want_licenses.append(d.getVar('INCOMPATIBLE_LICENSE', 1))
if d.getVarFlag('SPDXLICENSEMAP', dont_want_license):
dont_want_licenses.append(d.getVarFlag('SPDXLICENSEMAP', dont_want_license))
def include_license(license):
if any(fnmatch(license, pattern) for pattern in dont_want_licenses):
return False
else:
spdx_license = d.getVarFlag('SPDXLICENSEMAP', license)
if spdx_license and any(fnmatch(spdx_license, pattern) for pattern in dont_want_licenses):
return False
else:
return True
def choose_licenses(a, b):
if all(include_license(lic) for lic in a):
return a
for x in license_types.replace("(", " ( ").replace(")", " ) ").split():
if ((x != "(") and (x != ")") and (x != "&") and (x != "|")):
clean_licenses += "'" + x + "'"
else:
return b
clean_licenses += " " + x + " "
"""
If you want to exclude a license named generically 'X', we surely want to exclude 'X+' as well.
In consequence, we will exclude the '+' character from LICENSE in case INCOMPATIBLE_LICENSE
is not a 'X+' license.
"""
if not re.search(r'[+]',dont_want_license):
licenses=oe.license.flattened_licenses(re.sub(r'[+]', '', d.getVar('LICENSE', True)), choose_licenses)
else:
licenses=oe.license.flattened_licenses(d.getVar('LICENSE', True), choose_licenses)
for onelicense in licenses:
if not include_license(onelicense):
return True
return False
# lstrip any possible indents, since ast needs python syntax.
node = ast.parse(clean_licenses.lstrip())
v = LicenseVisitor()
v.visit(node)
}
SSTATETASKS += "do_populate_lic"
do_populate_lic[sstate-name] = "populate-lic"
do_populate_lic[sstate-inputdirs] = "${LICSSTATEDIR}"
do_populate_lic[sstate-outputdirs] = "${LICENSE_DIRECTORY}/"
ROOTFS_POSTINSTALL_COMMAND += "license_create_manifest; "
python do_populate_lic_setscene () {
sstate_setscene(d)
}
addtask do_populate_lic_setscene


@@ -37,14 +37,25 @@ STAGINGCC_prepend = "${BBEXTENDVARIANT}-"
python __anonymous () {
variant = d.getVar("BBEXTENDVARIANT", True)
import oe.classextend
clsextend = oe.classextend.ClassExtender(variant, d)
def map_dependencies(varname, d, suffix = ""):
if suffix:
varname = varname + "_" + suffix
deps = d.getVar(varname, True)
if not deps:
return
deps = bb.utils.explode_deps(deps)
newdeps = []
for dep in deps:
if dep.endswith(("-native", "-native-runtime")):
newdeps.append(dep)
else:
newdeps.append(multilib_extend_name(variant, dep))
d.setVar(varname, " ".join(newdeps))
if bb.data.inherits_class('image', d):
clsextend.map_depends_variable("PACKAGE_INSTALL")
clsextend.map_depends_variable("LINGUAS_INSTALL")
clsextend.map_depends_variable("RDEPENDS")
map_dependencies("PACKAGE_INSTALL", d)
map_dependencies("LINGUAS_INSTALL", d)
map_dependencies("RDEPENDS", d)
pinstall = d.getVar("LINGUAS_INSTALL", True) + " " + d.getVar("PACKAGE_INSTALL", True)
d.setVar("PACKAGE_INSTALL", pinstall)
d.setVar("LINGUAS_INSTALL", "")
@@ -52,13 +63,32 @@ python __anonymous () {
d.setVar("PACKAGE_INSTALL_ATTEMPTONLY", "")
return
clsextend.rename_packages()
clsextend.rename_package_variables((d.getVar("PACKAGEVARS", True) or "").split())
pkgs_mapping = []
for pkg in (d.getVar("PACKAGES", True) or "").split():
if pkg.startswith(variant):
pkgs_mapping.append([pkg.split(variant + "-")[1], pkg])
continue
pkgs_mapping.append([pkg, multilib_extend_name(variant, pkg)])
clsextend.map_depends_variable("DEPENDS")
clsextend.map_packagevars()
clsextend.map_variable("PROVIDES")
clsextend.map_variable("PACKAGES_DYNAMIC")
clsextend.map_variable("PACKAGE_INSTALL")
clsextend.map_variable("INITSCRIPT_PACKAGES")
d.setVar("PACKAGES", " ".join([row[1] for row in pkgs_mapping]))
vars = (d.getVar("PACKAGEVARS", True) or "").split()
for pkg_mapping in pkgs_mapping:
for subs in vars:
d.renameVar("%s_%s" % (subs, pkg_mapping[0]), "%s_%s" % (subs, pkg_mapping[1]))
map_dependencies("DEPENDS", d)
for pkg in (d.getVar("PACKAGES", True).split() + [""]):
map_dependencies("RDEPENDS", d, pkg)
map_dependencies("RRECOMMENDS", d, pkg)
map_dependencies("RSUGGESTS", d, pkg)
map_dependencies("RPROVIDES", d, pkg)
map_dependencies("RREPLACES", d, pkg)
map_dependencies("RCONFLICTS", d, pkg)
map_dependencies("PKG", d, pkg)
multilib_map_variable("PROVIDES", variant, d)
multilib_map_variable("PACKAGES_DYNAMIC", variant, d)
multilib_map_variable("PACKAGE_INSTALL", variant, d)
multilib_map_variable("INITSCRIPT_PACKAGES", variant, d)
}


@@ -8,31 +8,55 @@ python multilib_virtclass_handler_global () {
if bb.data.inherits_class('kernel', e.data) or bb.data.inherits_class('module-base', e.data):
variants = (e.data.getVar("MULTILIB_VARIANTS", True) or "").split()
import oe.classextend
clsextends = []
for variant in variants:
clsextends.append(oe.classextend.ClassExtender(variant, e.data))
# Process PROVIDES
origprovs = provs = e.data.getVar("PROVIDES", True) or ""
for clsextend in clsextends:
provs = provs + " " + clsextend.map_variable("PROVIDES", setvar=False)
for variant in variants:
provs = provs + " " + multilib_map_variable("PROVIDES", variant, e.data)
# Reset to original value so next time around multilib_map_variable works properly
e.data.setVar("PROVIDES", origprovs)
e.data.setVar("PROVIDES", provs)
# Process RPROVIDES
origrprovs = rprovs = e.data.getVar("RPROVIDES", True) or ""
for clsextend in clsextends:
rprovs = rprovs + " " + clsextend.map_variable("RPROVIDES", setvar=False)
for variant in variants:
rprovs = rprovs + " " + multilib_map_variable("RPROVIDES", variant, e.data)
# Reset to original value so next time around multilib_map_variable works properly
e.data.setVar("RPROVIDES", origrprovs)
e.data.setVar("RPROVIDES", rprovs)
# Process RPROVIDES_${PN}...
for pkg in (e.data.getVar("PACKAGES", True) or "").split():
origrprovs = rprovs = e.data.getVar("RPROVIDES_%s" % pkg, True) or ""
for clsextend in clsextends:
rprovs = rprovs + " " + clsextend.map_variable("RPROVIDES_%s" % pkg, setvar=False)
rprovs = rprovs + " " + clsextend.extname + "-" + pkg
for variant in variants:
rprovs = rprovs + " " + multilib_map_variable("RPROVIDES_%s" % pkg, variant, e.data)
rprovs = rprovs + " " + variant + "-" + pkg
# Reset to original value so next time around multilib_map_variable works properly
e.data.setVar("RPROVIDES_%s" % pkg, origrprovs)
e.data.setVar("RPROVIDES_%s" % pkg, rprovs)
}
addhandler multilib_virtclass_handler_global
def multilib_extend_name(variant, name):
if name.startswith("kernel-module"):
return name
if name.startswith("virtual/"):
subs = name.split("/", 1)[1]
if not subs.startswith(variant):
return "virtual/" + variant + "-" + subs
return name
if not name.startswith(variant):
return variant + "-" + name
return name
def multilib_map_variable(varname, variant, d):
var = d.getVar(varname, True)
if not var:
return ""
var = var.split()
newvar = []
for v in var:
newvar.append(multilib_extend_name(variant, v))
newdata = " ".join(newvar)
d.setVar(varname, newdata)
return newdata
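A quick, illustrative check of the renaming rules in multilib_extend_name (plain Python, assuming the function as defined above):

assert multilib_extend_name("lib32", "gcc") == "lib32-gcc"
assert multilib_extend_name("lib32", "virtual/libc") == "virtual/lib32-libc"
assert multilib_extend_name("lib32", "kernel-module-ab") == "kernel-module-ab"
assert multilib_extend_name("lib32", "lib32-gcc") == "lib32-gcc"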


@@ -127,7 +127,7 @@ python native_virtclass_handler () {
d.setVar(varname, " ".join(newdeps))
map_dependencies("DEPENDS", e.data)
for pkg in [e.data.getVar("PN", True), "", "${PN}"]:
for pkg in (e.data.getVar("PACKAGES", True).split() + [""]):
map_dependencies("RDEPENDS", e.data, pkg)
map_dependencies("RRECOMMENDS", e.data, pkg)
map_dependencies("RSUGGESTS", e.data, pkg)


@@ -53,6 +53,11 @@ prefix = "${SDKPATHNATIVE}${prefix_nativesdk}"
exec_prefix = "${SDKPATHNATIVE}${prefix_nativesdk}"
baselib = "lib"
FILES_${PN} += "${prefix}"
FILES_${PN}-dbg += "${prefix}/.debug \
${prefix}/bin/.debug \
"
export PKG_CONFIG_DIR = "${STAGING_DIR_HOST}${libdir}/pkgconfig"
export PKG_CONFIG_SYSROOT_DIR = "${STAGING_DIR_HOST}"


@@ -350,26 +350,11 @@ def runtime_mapping_rename (varname, d):
#
python package_get_auto_pr() {
# per recipe PRSERV_HOST PRSERV_PORT
pn = d.getVar('PN', True)
host = d.getVar("PRSERV_HOST_" + pn, True)
port = d.getVar("PRSERV_PORT_" + pn, True)
if not (host is None):
d.setVar("PRSERV_HOST", host)
if not (port is None):
d.setVar("PRSERV_PORT", port)
if d.getVar('USE_PR_SERV', True) != "0":
try:
auto_pr=prserv_get_pr_auto(d)
except Exception as e:
bb.fatal("Can NOT get PRAUTO, exception %s" % str(e))
return
auto_pr=prserv_get_pr_auto(d)
if auto_pr is None:
if d.getVar('PRSERV_LOCKDOWN', True):
bb.fatal("Can NOT get PRAUTO from lockdown exported file")
else:
bb.fatal("Can NOT get PRAUTO from remote PR service")
return
bb.fatal("Can NOT get auto PR revision from remote PR service")
return
d.setVar('PRAUTO',str(auto_pr))
}
@@ -1080,7 +1065,6 @@ python emit_pkgdata() {
write_if_exists(sf, pkg, 'PR')
write_if_exists(sf, pkg, 'PKGV')
write_if_exists(sf, pkg, 'PKGR')
write_if_exists(sf, pkg, 'LICENSE')
write_if_exists(sf, pkg, 'DESCRIPTION')
write_if_exists(sf, pkg, 'SUMMARY')
write_if_exists(sf, pkg, 'RDEPENDS')
@@ -1128,7 +1112,7 @@ if [ x"$D" = "x" ]; then
fi
}
RPMDEPS = "${STAGING_LIBDIR_NATIVE}/rpm/bin/rpmdeps --macros ${STAGING_LIBDIR_NATIVE}/rpm/macros --define '_rpmfc_magic_path ${STAGING_DIR_NATIVE}${datadir_native}/misc/magic.mgc' --rpmpopt ${STAGING_LIBDIR_NATIVE}/rpm/rpmpopt"
RPMDEPS = "${STAGING_LIBDIR_NATIVE}/rpm/bin/rpmdeps --macros ${STAGING_LIBDIR_NATIVE}/rpm/macros --define '_rpmfc_magic_path ${STAGING_DIR_NATIVE}/usr/share/misc/magic.mgc' --rpmpopt ${STAGING_LIBDIR_NATIVE}/rpm/rpmpopt"
# Collect perfile run-time dependency metadata
# Output:


@@ -72,10 +72,8 @@ package_tryout_install_multilib_ipk() {
local ipkg_args="-f ${INSTALL_CONF_IPK} -o ${target_rootfs} --force_overwrite"
local selected_pkg=""
local pkgname_prefix="${item}-"
local pkgname_len=${#pkgname_prefix}
for pkg in ${INSTALL_PACKAGES_MULTILIB_IPK}; do
local pkgname=$(echo $pkg | awk -v var=$pkgname_len '{ pkgname=substr($1, 1, var - 1); print pkgname; }' )
if [ ${pkgname} = ${pkgname_prefix} ]; then
if [ ${pkg:0:${#pkgname_prefix}} == ${pkgname_prefix} ]; then
selected_pkg="${selected_pkg} ${pkg}"
fi
done
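One side of this hunk tests the prefix with shell substring expansion instead of awk; note that ${pkg:0:N} is a bash extension rather than POSIX sh. Illustratively:

pkgname_prefix="lib32-"
pkg="lib32-glibc"
# ${pkg:0:${#pkgname_prefix}} expands to "lib32-", so the test matches
[ "${pkg:0:${#pkgname_prefix}}" = "${pkgname_prefix}" ] && echo match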
@@ -96,9 +94,7 @@ split_multilib_packages() {
is_multilib=0
for item in ${MULTILIB_VARIANTS}; do
local pkgname_prefix="${item}-"
local pkgname_len=${#pkgname_prefix}
local pkgname=$(echo $pkg | awk -v var=$pkgname_len '{ pkgname=substr($1, 1, var - 1); print pkgname; }' )
if [ ${pkgname} = ${pkgname_prefix} ]; then
if [ ${pkg:0:${#pkgname_prefix}} == ${pkgname_prefix} ]; then
is_multilib=1
break
fi
@@ -137,7 +133,7 @@ package_install_internal_ipk() {
mkdir -p ${target_rootfs}${localstatedir}/lib/opkg/
local ipkg_args="-f ${conffile} -o ${target_rootfs} --force-overwrite --force_postinstall"
local ipkg_args="-f ${conffile} -o ${target_rootfs} --force-overwrite"
opkg-cl ${ipkg_args} update


@@ -147,67 +147,6 @@ resolve_package_rpm () {
echo $pkg_name
}
# rpm common command and options
rpm_common_comand () {
local target_rootfs="${INSTALL_ROOTFS_RPM}"
local extra_args="$@"
${RPM} --root ${target_rootfs} \
--predefine "_rpmds_sysinfo_path ${target_rootfs}/etc/rpm/sysinfo" \
--predefine "_rpmrc_platform_path ${target_rootfs}/etc/rpm/platform" \
-D "_var ${localstatedir}" \
-D "_dbpath ${rpmlibdir}" \
--noparentdirs --nolinktos \
-D "__dbi_txn create nofsync private" \
-D "_cross_scriptlet_wrapper ${WORKDIR}/scriptlet_wrapper" $extra_args
}
# install or remove the pkg
rpm_update_pkg () {
local target_rootfs="${INSTALL_ROOTFS_RPM}"
# Save the rpm's build time for incremental image generation, and the file
# would be moved to ${T}
rm -f ${target_rootfs}/install/total_solution_bt.manifest
for i in `cat ${target_rootfs}/install/total_solution.manifest`; do
# Use "rpm" rather than "${RPM}" here, since we don't need the
# '--dbpath' option
echo "$i `rpm -qp --qf '%{BUILDTIME}\n' $i`" >> \
${target_rootfs}/install/total_solution_bt.manifest
done
# Only install the different pkgs if incremental image generation is set
if [ "${INC_RPM_IMAGE_GEN}" = "1" -a -f ${T}/total_solution_bt.manifest -a \
"${IMAGE_PKGTYPE}" = "rpm" ]; then
cur_list="${target_rootfs}/install/total_solution_bt.manifest"
pre_list="${T}/total_solution_bt.manifest"
sort -u $cur_list -o $cur_list
sort -u $pre_list -o $pre_list
comm -1 -3 $cur_list $pre_list | sed 's#.*/\(.*\)\.rpm .*#\1#' > \
${target_rootfs}/install/remove.manifest
comm -2 -3 $cur_list $pre_list | awk '{print $1}' > \
${target_rootfs}/install/incremental.manifest
# Attempt to remove unwanted pkgs; the scriptlets (pre, post, etc.) have not
# been run by now, so we don't have to run them (preun, postun, etc.) when
# erasing the pkg
if [ -s ${target_rootfs}/install/remove.manifest ]; then
rpm_common_comand --noscripts --nodeps \
-e `cat ${target_rootfs}/install/remove.manifest`
fi
# Attempt to install the incremental pkgs
rpm_common_comand --nodeps --replacefiles --replacepkgs \
-Uvh ${target_rootfs}/install/incremental.manifest
else
# Attempt to install
rpm_common_comand --replacepkgs \
-Uhv ${target_rootfs}/install/total_solution.manifest
fi
}
#
# install a bunch of packages using rpm
# the following shell variables needs to be set before calling this func:
@@ -467,8 +406,16 @@ EOF
chmod 0755 ${WORKDIR}/scriptlet_wrapper
rpm_update_pkg
# Attempt install
${RPM} --root ${target_rootfs} \
--predefine "_rpmds_sysinfo_path ${target_rootfs}/etc/rpm/sysinfo" \
--predefine "_rpmrc_platform_path ${target_rootfs}/etc/rpm/platform" \
-D "_var ${localstatedir}" \
-D "_dbpath ${rpmlibdir}" \
--noparentdirs --nolinktos --replacepkgs \
-D "__dbi_txn create nofsync private" \
-D "_cross_scriptlet_wrapper ${WORKDIR}/scriptlet_wrapper" \
-Uhv ${target_rootfs}/install/total_solution.manifest
}
python write_specfile () {
@@ -870,12 +817,6 @@ python write_specfile () {
except OSError:
raise bb.build.FuncFailed("unable to open spec file for writing.")
# RPMSPEC_PREAMBLE is a way to add arbitrary text to the top
# of the generated spec file
external_preamble = d.getVar("RPMSPEC_PREAMBLE", True)
if external_preamble:
specfile.write(external_preamble + "\n")
for line in spec_preamble_top:
specfile.write(line + "\n")
@@ -1000,7 +941,7 @@ python do_package_rpm () {
d.setVar('PACKAGE_ARCH_EXTEND', package_arch)
pkgwritedir = bb.data.expand('${PKGWRITEDIRRPM}/${PACKAGE_ARCH_EXTEND}', d)
pkgarch = bb.data.expand('${PACKAGE_ARCH_EXTEND}${TARGET_VENDOR}-${TARGET_OS}', d)
magicfile = bb.data.expand('${STAGING_DIR_NATIVE}${datadir_native}/misc/magic.mgc', d)
magicfile = bb.data.expand('${STAGING_DIR_NATIVE}/usr/share/misc/magic.mgc', d)
bb.mkdirhier(pkgwritedir)
os.chmod(pkgwritedir, 0755)


@@ -7,131 +7,115 @@ PATCHDEPENDENCY = "${PATCHTOOL}-native:do_populate_sysroot"
inherit terminal
def src_patches(d):
workdir = d.getVar('WORKDIR', True)
fetch = bb.fetch2.Fetch([], d)
patches = []
for url in fetch.urls:
local = patch_path(url, fetch, workdir)
if not local:
continue
urldata = fetch.ud[url]
parm = urldata.parm
patchname = parm.get('pname') or os.path.basename(local)
apply, reason = should_apply(parm, d)
if not apply:
if reason:
bb.note("Patch %s %s" % (patchname, reason))
continue
patchparm = {'patchname': patchname}
if "striplevel" in parm:
striplevel = parm["striplevel"]
elif "pnum" in parm:
#bb.msg.warn(None, "Deprecated usage of 'pnum' url parameter in '%s', please use 'striplevel'" % url)
striplevel = parm["pnum"]
else:
striplevel = '1'
patchparm['striplevel'] = striplevel
patchdir = parm.get('patchdir')
if patchdir:
patchparm['patchdir'] = patchdir
localurl = bb.encodeurl(('file', '', local, '', '', patchparm))
patches.append(localurl)
return patches
def patch_path(url, fetch, workdir):
"""Return the local path of a patch, or None if this isn't a patch"""
local = fetch.localpath(url)
base, ext = os.path.splitext(os.path.basename(local))
if ext in ('.gz', '.bz2', '.Z'):
local = os.path.join(workdir, base)
ext = os.path.splitext(base)[1]
urldata = fetch.ud[url]
if "apply" in urldata.parm:
apply = oe.types.boolean(urldata.parm["apply"])
if not apply:
return
elif ext not in (".diff", ".patch"):
return
return local
def should_apply(parm, d):
"""Determine if we should apply the given patch"""
if "mindate" in parm or "maxdate" in parm:
pn = d.getVar('PN', True)
srcdate = d.getVar('SRCDATE_%s' % pn, True)
if not srcdate:
srcdate = d.getVar('SRCDATE', True)
if srcdate == "now":
srcdate = d.getVar('DATE', True)
if "maxdate" in parm and parm["maxdate"] < srcdate:
return False, 'is outdated'
if "mindate" in parm and parm["mindate"] > srcdate:
return False, 'is predated'
if "minrev" in parm:
srcrev = d.getVar('SRCREV', True)
if srcrev and srcrev < parm["minrev"]:
return False, 'applies to later revisions'
if "maxrev" in parm:
srcrev = d.getVar('SRCREV', True)
if srcrev and srcrev > parm["maxrev"]:
return False, 'applies to earlier revisions'
if "rev" in parm:
srcrev = d.getVar('SRCREV', True)
if srcrev and parm["rev"] not in srcrev:
return False, "doesn't apply to revision"
if "notrev" in parm:
srcrev = d.getVar('SRCREV', True)
if srcrev and parm["notrev"] in srcrev:
return False, "doesn't apply to revision"
return True, None
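All of these knobs arrive as ordinary SRC_URI parameters; a hypothetical entry exercising several of them:

SRC_URI += "file://fix-driver.patch;striplevel=2;mindate=20110101;patchdir=drivers"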
python patch_do_patch() {
import oe.patch
src_uri = (d.getVar('SRC_URI', 1) or '').split()
if not src_uri:
return
patchsetmap = {
"patch": oe.patch.PatchTree,
"quilt": oe.patch.QuiltTree,
"git": oe.patch.GitApplyTree,
}
cls = patchsetmap[d.getVar('PATCHTOOL', True) or 'quilt']
cls = patchsetmap[d.getVar('PATCHTOOL', 1) or 'quilt']
resolvermap = {
"noop": oe.patch.NOOPResolver,
"user": oe.patch.UserResolver,
}
rcls = resolvermap[d.getVar('PATCHRESOLVE', True) or 'user']
rcls = resolvermap[d.getVar('PATCHRESOLVE', 1) or 'user']
s = d.getVar('S', 1)
path = os.getenv('PATH')
os.putenv('PATH', d.getVar('PATH', 1))
classes = {}
s = d.getVar('S', True)
workdir = d.getVar('WORKDIR', 1)
for url in src_uri:
(type, host, path, user, pswd, parm) = bb.decodeurl(url)
path = os.getenv('PATH')
os.putenv('PATH', d.getVar('PATH', True))
local = None
base, ext = os.path.splitext(os.path.basename(path))
if ext in ('.gz', '.bz2', '.Z'):
local = os.path.join(workdir, base)
ext = os.path.splitext(base)[1]
for patch in src_patches(d):
_, _, local, _, _, parm = bb.decodeurl(patch)
if "apply" in parm:
apply = parm["apply"]
if apply != "yes":
if apply != "no":
bb.msg.warn(None, "Unsupported value '%s' for 'apply' url param in '%s', please use 'yes' or 'no'" % (apply, url))
continue
#elif "patch" in parm:
#bb.msg.warn(None, "Deprecated usage of 'patch' url param in '%s', please use 'apply={yes,no}'" % url)
elif ext not in (".diff", ".patch"):
continue
if not local:
url = bb.encodeurl((type, host, path, user, pswd, []))
local = os.path.join('/', bb.fetch2.localpath(url, d))
local = bb.data.expand(local, d)
if "striplevel" in parm:
striplevel = parm["striplevel"]
elif "pnum" in parm:
#bb.msg.warn(None, "Deprecated usage of 'pnum' url parameter in '%s', please use 'striplevel'" % url)
striplevel = parm["pnum"]
else:
striplevel = '1'
if "pname" in parm:
pname = parm["pname"]
else:
pname = os.path.basename(local)
if "mindate" in parm or "maxdate" in parm:
pn = d.getVar('PN', 1)
srcdate = d.getVar('SRCDATE_%s' % pn, 1)
if not srcdate:
srcdate = d.getVar('SRCDATE', 1)
if srcdate == "now":
srcdate = d.getVar('DATE', 1)
if "maxdate" in parm and parm["maxdate"] < srcdate:
bb.note("Patch '%s' is outdated" % pname)
continue
if "mindate" in parm and parm["mindate"] > srcdate:
bb.note("Patch '%s' is predated" % pname)
continue
if "minrev" in parm:
srcrev = d.getVar('SRCREV', 1)
if srcrev and srcrev < parm["minrev"]:
bb.note("Patch '%s' applies to later revisions" % pname)
continue
if "maxrev" in parm:
srcrev = d.getVar('SRCREV', 1)
if srcrev and srcrev > parm["maxrev"]:
bb.note("Patch '%s' applies to earlier revisions" % pname)
continue
if "rev" in parm:
srcrev = d.getVar('SRCREV', 1)
if srcrev and parm["rev"] not in srcrev:
bb.note("Patch '%s' doesn't apply to revision" % pname)
continue
if "notrev" in parm:
srcrev = d.getVar('SRCREV', 1)
if srcrev and parm["notrev"] in srcrev:
bb.note("Patch '%s' doesn't apply to revision" % pname)
continue
if "patchdir" in parm:
patchdir = parm["patchdir"]
@@ -148,11 +132,12 @@ python patch_do_patch() {
else:
patchset, resolver = classes[patchdir]
bb.note("Applying patch '%s' (%s)" % (parm['patchname'], oe.path.format_display(local, d)))
bb.note("Applying patch '%s' (%s)" % (pname, oe.path.format_display(local, d)))
try:
patchset.Import({"file":local, "strippath": parm['striplevel']}, True)
except Exception as exc:
bb.fatal(str(exc))
patchset.Import({"file":local, "remote":url, "strippath": striplevel}, True)
except Exception:
import sys
raise bb.build.FuncFailed(str(sys.exc_value))
resolver.Resolve()
}
patch_do_patch[vardepsexclude] = "DATE SRCDATE PATCHRESOLVE"


@@ -18,13 +18,6 @@ PID = "${@os.getpid()}"
EXCLUDE_FROM_WORLD = "1"
python () {
# If we don't do this we try and run the mapping hooks while parsing which is slow
# bitbake should really provide something to let us know this...
if bb.data.getVar('BB_WORKERCONTEXT', d, True) is not None:
runtime_mapping_rename("TOOLCHAIN_TARGET_TASK", d)
}
fakeroot do_populate_sdk() {
rm -rf ${SDK_OUTPUT}
mkdir -p ${SDK_OUTPUT}


@@ -20,11 +20,6 @@ populate_sdk_ipk() {
export INSTALL_CONF_IPK="${IPKGCONF_TARGET}"
export INSTALL_PACKAGES_IPK="${TOOLCHAIN_TARGET_TASK}"
export D=${INSTALL_ROOTFS_IPK}
export OFFLINE_ROOT=${INSTALL_ROOTFS_IPK}
export IPKG_OFFLINE_ROOT=${INSTALL_ROOTFS_IPK}
export OPKG_OFFLINE_ROOT=${IPKG_OFFLINE_ROOT}
package_install_internal_ipk
#install host


@@ -1,45 +0,0 @@
PRSERV_DUMPOPT_VERSION = "${PRAUTOINX}"
PRSERV_DUMPOPT_PKGARCH = ""
PRSERV_DUMPOPT_CHECKSUM = ""
PRSERV_DUMPOPT_COL = "0"
PRSERV_DUMPDIR ??= "${LOG_DIR}/db"
PRSERV_DUMPFILE ??= "${PRSERV_DUMPDIR}/prserv.inc"
python prexport_handler () {
import bb.event
if not e.data:
return
if isinstance(e, bb.event.RecipeParsed):
import oe.prservice
#get all PR values for the current PRAUTOINX
ver = e.data.getVar('PRSERV_DUMPOPT_VERSION', True)
ver = ver.replace('%','-')
retval = oe.prservice.prserv_dump_db(e.data)
if not retval:
bb.fatal("prexport_handler: export failed!")
(metainfo, datainfo) = retval
if not datainfo:
bb.error("prexport_handler: No AUROPR values found for %s" % ver)
return
oe.prservice.prserv_export_tofile(e.data, None, datainfo, False)
elif isinstance(e, bb.event.ParseStarted):
import bb.utils
#remove dumpfile
bb.utils.remove(e.data.getVar('PRSERV_DUMPFILE', True))
elif isinstance(e, bb.event.ParseCompleted):
import oe.prservice
#dump meta info of tables
d = e.data.createCopy()
d.setVar('PRSERV_DUMPOPT_COL', "1")
retval = oe.prservice.prserv_dump_db(d)
if not retval:
bb.error("prexport_handler: export failed!")
return
(metainfo, datainfo) = retval
oe.prservice.prserv_export_tofile(d, metainfo, None, True)
}
addhandler prexport_handler


@@ -1,17 +0,0 @@
python primport_handler () {
import bb.event
if not e.data:
return
if isinstance(e, bb.event.ParseCompleted):
import oe.prservice
#import all exported AUTOPR values
imported = oe.prservice.prserv_import_db(e.data)
if imported is None:
bb.fatal("import failed!")
for (version, pkgarch, checksum, value) in imported:
bb.note("imported (%s,%s,%s,%d)" % (version, pkgarch, checksum, value))
}
addhandler primport_handler


@@ -1,21 +1,29 @@
def prserv_make_conn(d):
import prserv.serv
host=d.getVar("PRSERV_HOST",True)
port=d.getVar("PRSERV_PORT",True)
try:
conn=None
conn=prserv.serv.PRServerConnection(host,int(port))
d.setVar("__PRSERV_CONN",conn)
except Exception, exc:
bb.fatal("Connecting to PR service %s:%s failed: %s" % (host, port, str(exc)))
return conn
def prserv_get_pr_auto(d):
import oe.prservice
if d.getVar('USE_PR_SERV', True) != "1":
if d.getVar('USE_PR_SERV', True) != "0":
bb.warn("Not using network based PR service")
return None
version = d.getVar("PRAUTOINX", True)
pkgarch = d.getVar("PACKAGE_ARCH", True)
checksum = d.getVar("BB_TASKHASH", True)
if d.getVar('PRSERV_LOCKDOWN', True):
auto_rev = d.getVar('PRAUTO_' + version + '_' + pkgarch, True) or d.getVar('PRAUTO_' + version, True) or None
else:
conn = d.getVar("__PRSERV_CONN", True)
conn=d.getVar("__PRSERV_CONN", True)
if conn is None:
conn=prserv_make_conn(d)
if conn is None:
conn = oe.prservice.prserv_make_conn(d)
if conn is None:
return None
auto_rev = conn.getPR(version, pkgarch, checksum)
return None
version=d.getVar("PF", True)
checksum=d.getVar("BB_TASKHASH", True)
auto_rev=conn.getPR(version,checksum)
bb.debug(1,"prserv_get_pr_auto: version: %s checksum: %s result %d" % (version, checksum, auto_rev))
return auto_rev
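Whether the network path above is taken at all is driven purely by configuration; a minimal sketch with a hypothetical host and port:

PRSERV_HOST = "prserv.example.com"
PRSERV_PORT = "8585"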


@@ -1,4 +1,3 @@
QMAKE_MKSPEC_PATH ?= "${STAGING_DATADIR_NATIVE}/qmake"
OE_QMAKE_PLATFORM = "${TARGET_OS}-oe-g++"
QMAKESPEC := "${QMAKE_MKSPEC_PATH}/${OE_QMAKE_PLATFORM}"


@@ -2,7 +2,6 @@ DEPENDS_prepend = "${@["qt4-embedded ", ""][(d.getVar('PN', 1)[:12] == 'qt4-embe
inherit qmake2
QT_BASE_NAME = "qt4-embedded"
QT_DIR_NAME = "qtopia"
QT_LIBINFIX = "E"
# override variables set by qmake-base to compile Qt/Embedded apps


@@ -2,7 +2,6 @@ DEPENDS_prepend = "${@["qt4-x11-free ", ""][(d.getVar('BPN', True)[:12] == 'qt4-
inherit qmake2
QT_BASE_NAME = "qt4"
QT_DIR_NAME = "qt4"
QT_LIBINFIX = ""


@@ -8,18 +8,8 @@ ROOTFS_PKGMANAGE_BOOTSTRAP = "run-postinsts"
do_rootfs[depends] += "dpkg-native:do_populate_sysroot apt-native:do_populate_sysroot"
do_rootfs[recrdeptask] += "do_package_write_deb"
DEB_POSTPROCESS_COMMANDS = "rootfs_install_all_locales; "
opkglibdir = "${localstatedir}/lib/opkg"
deb_package_setflag() {
sed -i -e "/^Package: $2\$/{n; s/Status: install ok .*/Status: install ok $1/;}" ${IMAGE_ROOTFS}/var/lib/dpkg/status
}
deb_package_getflag() {
cat ${IMAGE_ROOTFS}/var/lib/dpkg/status | sed -n -e "/^Package: $2\$/{n; s/Status: install ok .*/$1/; p}"
}
fakeroot rootfs_deb_do_rootfs () {
set +e
@@ -38,18 +28,25 @@ fakeroot rootfs_deb_do_rootfs () {
export INSTALL_TASK_DEB="rootfs"
package_install_internal_deb
${DEB_POSTPROCESS_COMMANDS}
export D=${IMAGE_ROOTFS}
export OFFLINE_ROOT=${IMAGE_ROOTFS}
export IPKG_OFFLINE_ROOT=${IMAGE_ROOTFS}
export OPKG_OFFLINE_ROOT=${IMAGE_ROOTFS}
_flag () {
sed -i -e "/^Package: $2\$/{n; s/Status: install ok .*/Status: install ok $1/;}" ${IMAGE_ROOTFS}/var/lib/dpkg/status
}
_getflag () {
cat ${IMAGE_ROOTFS}/var/lib/dpkg/status | sed -n -e "/^Package: $2\$/{n; s/Status: install ok .*/$1/; p}"
}
# Attempt to run preinsts
# Mark packages with preinst failures as unpacked
for i in ${IMAGE_ROOTFS}/var/lib/dpkg/info/*.preinst; do
if [ -f $i ] && ! sh $i; then
deb_package_setflag unpacked `basename $i .preinst`
_flag unpacked `basename $i .preinst`
fi
done
@@ -57,7 +54,7 @@ fakeroot rootfs_deb_do_rootfs () {
# Mark packages with postinst failures as unpacked
for i in ${IMAGE_ROOTFS}/var/lib/dpkg/info/*.postinst; do
if [ -f $i ] && ! sh $i configure; then
deb_package_setflag unpacked `basename $i .postinst`
_flag unpacked `basename $i .postinst`
fi
done
@@ -84,40 +81,3 @@ remove_packaging_data_files() {
rm -rf ${IMAGE_ROOTFS}${opkglibdir}
rm -rf ${IMAGE_ROOTFS}/usr/dpkg/
}
DPKG_QUERY_COMMAND = "${STAGING_BINDIR_NATIVE}/dpkg --admindir=${IMAGE_ROOTFS}/var/lib/dpkg"
list_installed_packages() {
${DPKG_QUERY_COMMAND} -l | grep ^ii | awk '{ print $2 }'
}
get_package_filename() {
fullname=`find ${DEPLOY_DIR_DEB} -name "$1_*.deb" || true`
if [ "$fullname" = "" ] ; then
echo $name
else
echo $fullname
fi
}
list_package_depends() {
${DPKG_QUERY_COMMAND} -s $1 | grep ^Depends | sed -e 's/^Depends: //' -e 's/,//g' -e 's:([=<>]* [0-9a-zA-Z.~\-]*)::g'
}
list_package_recommends() {
${DPKG_QUERY_COMMAND} -s $1 | grep ^Recommends | sed -e 's/^Recommends: //' -e 's/,//g' -e 's:([=<>]* [0-9a-zA-Z.~\-]*)::g'
}
rootfs_check_package_exists() {
if [ `apt-cache showpkg $1 | wc -l` -gt 2 ]; then
echo $1
fi
}
rootfs_install_packages() {
${STAGING_BINDIR_NATIVE}/apt-get install $@ --force-yes --allow-unauthenticated
for pkg in $@ ; do
deb_package_setflag installed $pkg
done
}


@@ -16,7 +16,7 @@ IPKG_ARGS = "-f ${IPKGCONF_TARGET} -o ${IMAGE_ROOTFS} --force-overwrite"
OPKG_PREPROCESS_COMMANDS = "package_update_index_ipk; package_generate_ipkg_conf"
OPKG_POSTPROCESS_COMMANDS = "ipk_insert_feed_uris; rootfs_install_all_locales; "
OPKG_POSTPROCESS_COMMANDS = "ipk_insert_feed_uris"
opkglibdir = "${localstatedir}/lib/opkg"
@@ -60,14 +60,14 @@ fakeroot rootfs_ipk_do_rootfs () {
export INSTALL_CONF_IPK="${IPKGCONF_TARGET}"
export INSTALL_PACKAGES_IPK="${PACKAGE_INSTALL}"
package_install_internal_ipk
#post install
export D=${IMAGE_ROOTFS}
export OFFLINE_ROOT=${IMAGE_ROOTFS}
export IPKG_OFFLINE_ROOT=${IMAGE_ROOTFS}
export OPKG_OFFLINE_ROOT=${IPKG_OFFLINE_ROOT}
package_install_internal_ipk
# Distro specific packages should create this
#mkdir -p ${IMAGE_ROOTFS}/etc/opkg/
#grep "^arch" ${IPKGCONF_TARGET} >${IMAGE_ROOTFS}/etc/opkg/arch.conf
@@ -75,8 +75,28 @@ fakeroot rootfs_ipk_do_rootfs () {
${OPKG_POSTPROCESS_COMMANDS}
${ROOTFS_POSTINSTALL_COMMAND}
runtime_script_required=0
# Base-passwd needs to run first to install /etc/passwd and friends
if [ -e ${IMAGE_ROOTFS}${opkglibdir}/info/base-passwd.preinst ] ; then
sh ${IMAGE_ROOTFS}${opkglibdir}/info/base-passwd.preinst
fi
for i in ${IMAGE_ROOTFS}${opkglibdir}/info/*.preinst; do
if [ -f $i ] && ! sh $i; then
runtime_script_required=1
opkg-cl ${IPKG_ARGS} flag unpacked `basename $i .preinst`
fi
done
for i in ${IMAGE_ROOTFS}${opkglibdir}/info/*.postinst; do
if [ -f $i ] && ! sh $i configure; then
runtime_script_required=1
opkg-cl ${IPKG_ARGS} flag unpacked `basename $i .postinst`
fi
done
if ${@base_contains("IMAGE_FEATURES", "read-only-rootfs", "true", "false" ,d)}; then
if grep Status:.install.ok.unpacked ${IMAGE_ROOTFS}${opkglibdir}status; then
if [ $runtime_script_required -eq 1 ]; then
echo "Some packages could not be configured offline and rootfs is read-only."
exit 1
fi
@@ -148,14 +168,26 @@ list_package_recommends() {
opkg-cl ${IPKG_ARGS} info $1 | grep ^Recommends | sed -e 's/^Recommends: //' -e 's/,//g' -e 's:([=<>]* [0-9a-zA-Z.~\-]*)::g'
}
rootfs_check_package_exists() {
if [ `opkg-cl ${IPKG_ARGS} info $1 | wc -l` -gt 2 ]; then
echo $1
fi
}
install_all_locales() {
rootfs_install_packages() {
opkg-cl ${IPKG_ARGS} install $PACKAGES_TO_INSTALL
PACKAGES_TO_INSTALL=""
INSTALLED_PACKAGES=`list_installed_packages | egrep -v -- "(-locale-|-dev$|-doc$|^kernel|^glibc|^ttf|^task|^perl|^python)"`
for pkg in $INSTALLED_PACKAGES
do
for lang in ${IMAGE_LOCALES}
do
if [ `opkg-cl ${IPKG_ARGS} info $pkg-locale-$lang | wc -l` -gt 2 ]
then
PACKAGES_TO_INSTALL="$PACKAGES_TO_INSTALL $pkg-locale-$lang"
fi
done
done
if [ "$PACKAGES_TO_INSTALL" != "" ]
then
opkg-cl ${IPKG_ARGS} install $PACKAGES_TO_INSTALL
fi
}
ipk_insert_feed_uris () {
@@ -173,18 +205,7 @@ ipk_insert_feed_uris () {
# insert new feed-sources
echo "src/gz $feed_name $feed_uri" >> ${IPKGCONF_TARGET}
done
# Allow to use package deploy directory contents as quick devel-testing
# feed. This creates individual feed configs for each arch subdir of those
# specified as compatible for the current machine.
# NOTE: Development-helper feature, NOT a full-fledged feed.
if [ -n "${FEED_DEPLOYDIR_BASE_URI}" ]; then
for arch in ${PACKAGE_ARCHS}
do
echo "src/gz local-$arch ${FEED_DEPLOYDIR_BASE_URI}/$arch" >> ${IMAGE_ROOTFS}/etc/opkg/local-$arch-feed.conf
done
fi
done
}
python () {


@@ -21,7 +21,12 @@ do_rootfs[depends] += "opkg-native:do_populate_sysroot"
do_rootfs[recrdeptask] += "do_package_write_rpm"
RPM_PREPROCESS_COMMANDS = "package_update_index_rpm; package_generate_rpm_conf; "
RPM_POSTPROCESS_COMMANDS = "rootfs_install_all_locales; "
RPM_POSTPROCESS_COMMANDS = ""
# To test the install_all_locales.. enable the following...
#RPM_POSTPROCESS_COMMANDS = "install_all_locales; "
#
#IMAGE_LOCALES="en-gb"
#
# Allow distributions to alter when [postponed] package install scripts are run
@@ -61,9 +66,6 @@ fakeroot rootfs_rpm_do_rootfs () {
mkdir -p ${INSTALL_ROOTFS_RPM}${rpmlibdir}
mkdir -p ${INSTALL_ROOTFS_RPM}${rpmlibdir}/log
# After changing the __db.* cache size, the log file will not be generated automatically,
# which raises some warnings, so touch a bare log file for rpm to write into.
touch ${INSTALL_ROOTFS_RPM}${rpmlibdir}/log/log.0000000001
cat > ${INSTALL_ROOTFS_RPM}${rpmlibdir}/DB_CONFIG << EOF
# ================ Environment
set_data_dir .
@@ -172,9 +174,7 @@ get_package_filename() {
list_package_depends() {
pkglist=`list_installed_packages`
# REQUIRE* lists "soft" requirements (which we know as recommends and which RPM refers to
# as "suggests"), so filter these out with the help of awk
for req in `${RPM_QUERY_CMD} -q --qf "[%{REQUIRENAME} %{REQUIREFLAGS}\n]" $1 | awk '{ if( and($2, 0x80000) == 0) print $1 }'`; do
for req in `${RPM_QUERY_CMD} -q --qf "[%{REQUIRES}\n]" $1`; do
if echo "$req" | grep -q "^rpmlib" ; then continue ; fi
realpkg=""
@@ -193,23 +193,27 @@ list_package_depends() {
}
list_package_recommends() {
${RPM_QUERY_CMD} -q --suggests $1
:
}
rootfs_check_package_exists() {
resolve_package_rpm ${RPMCONF_TARGET_BASE}-base_archs.conf $1
}
install_all_locales() {
PACKAGES_TO_INSTALL=""
rootfs_install_packages() {
# The pkg to be installed here is not controlled by
# package_install_internal_rpm, so it may have already been
# installed (e.g. installed the first time the rootfs was
# generated), so use '--replacepkgs' to always install them
for pkg in $@; do
${RPM} --root ${IMAGE_ROOTFS} -D "_dbpath ${rpmlibdir}" \
-D "__dbi_txn create nofsync private" \
--noscripts --notriggers --noparentdirs --nolinktos \
--replacepkgs -Uhv $pkg || true
# Generate list of installed packages...
INSTALLED_PACKAGES=`list_installed_packages | egrep -v -- "(-locale-|-dev$|-doc$|^kernel|^glibc|^ttf|^task|^perl|^python)"`
# This would likely be faster if we did it in one transaction
# but this should be good enough for the few users of this function...
for pkg in $INSTALLED_PACKAGES; do
for lang in ${IMAGE_LOCALES}; do
pkg_name=$(resolve_package_rpm $pkg-locale-$lang ${RPMCONF_TARGET_BASE}.conf)
if [ -n "$pkg_name" ]; then
${RPM} --root ${IMAGE_ROOTFS} -D "_dbpath ${rpmlibdir}" \
-D "__dbi_txn create nofsync private" \
--noscripts --notriggers --noparentdirs --nolinktos \
-Uhv $pkg_name || true
fi
done
done
}


@@ -1,8 +1,6 @@
# Build Class for Sip based Python Bindings
# (C) Michael 'Mickey' Lauer <mickey@Vanille.de>
#
STAGING_SIPDIR ?= "${STAGING_DATADIR_NATIVE}/sip"
DEPENDS =+ "sip-native"
RDEPENDS += "python-sip"


@@ -76,7 +76,7 @@ def siteinfo_data(d):
"x86_64-linux": "bit-64",
"x86_64-linux-uclibc": "bit-64",
"x86_64-linux-gnu": "bit-64 x86_64-linux",
"x86_64-linux-gnux32": "bit-32 ix86-common x32-linux",
"x86_64-linux-gnux32": "bit-32 ix86-common",
"x86_64-mingw32": "bit-64",
}


@@ -10,8 +10,7 @@ SSTATE_PKGSPEC = "sstate-${PN}-${PACKAGE_ARCH}${TARGET_VENDOR}-${TARGET_OS}-$
SSTATE_PKGNAME = "${SSTATE_PKGSPEC}${BB_TASKHASH}"
SSTATE_PKG = "${SSTATE_DIR}/${SSTATE_PKGNAME}"
SSTATE_SCAN_FILES ?= "*.la *-config"
SSTATE_SCAN_CMD ?= 'find ${SSTATE_BUILDDIR} \( -name "${@"\" -o -name \"".join(d.getVar("SSTATE_SCAN_FILES", True).split())}" \) -type f'
SSTATE_SCAN_CMD ?= "find ${SSTATE_BUILDDIR} \( -name "*.la" -o -name "*-config" \) -type f"
BB_HASHFILENAME = "${SSTATE_PKGNAME}"


@@ -49,7 +49,7 @@ toolchain_create_tree_env_script () {
echo 'export CXX=${TARGET_PREFIX}g++' >> $script
echo 'export GDB=${TARGET_PREFIX}gdb' >> $script
echo 'export TARGET_PREFIX=${TARGET_PREFIX}' >> $script
echo 'export CONFIGURE_FLAGS="--target=${TARGET_SYS} --host=${TARGET_SYS} --build=${BUILD_SYS} --with-libtool-sysroot=${STAGING_DIR_TARGET}"' >> $script
echo 'export CONFIGURE_FLAGS="--target=${TARGET_SYS} --host=${TARGET_SYS} --build=${BUILD_SYS}"' >> $script
if [ "${TARGET_OS}" = "darwin8" ]; then
echo 'export TARGET_CFLAGS="-I${STAGING_DIR}${MACHINE}${includedir}"' >> $script
echo 'export TARGET_LDFLAGS="-L${STAGING_DIR}${MACHINE}${libdir}"' >> $script
@@ -57,10 +57,9 @@ toolchain_create_tree_env_script () {
cd ${SDK_OUTPUT}${SDKTARGETSYSROOT}/usr
ln -s /usr/local local
fi
echo 'export CFLAGS="${TARGET_CC_ARCH} --sysroot=${STAGING_DIR_TARGET}"' >> $script
echo 'export CXXFLAGS="${TARGET_CC_ARCH} --sysroot=${STAGING_DIR_TARGET}"' >> $script
echo 'export LDFLAGS="${TARGET_LD_ARCH} --sysroot=${STAGING_DIR_TARGET}"' >> $script
echo 'export CPPFLAGS="${TARGET_CC_ARCH} --sysroot=${STAGING_DIR_TARGET}"' >> $script
echo 'export CFLAGS="${TARGET_CC_ARCH}"' >> $script
echo 'export CXXFLAGS="${TARGET_CC_ARCH}"' >> $script
echo 'export LDFLAGS="${TARGET_LD_ARCH}"' >> $script
echo 'export OECORE_NATIVE_SYSROOT="${STAGING_DIR_NATIVE}"' >> $script
echo 'export OECORE_TARGET_SYSROOT="${STAGING_DIR_TARGET}"' >> $script
echo 'export OECORE_ACLOCAL_OPTS="-I ${STAGING_DIR_NATIVE}/usr/share/aclocal"' >> $script


@@ -3,7 +3,7 @@
# command directly in your recipe, but in most cases this class simplifies
# that job.
#
# There are two basic modes supported: 'single update' and 'batch update'
# There're two basic modes supported: 'single update' and 'batch update'
#
# 'single update' is used for a single alternative command, and you're
# expected to provide at least below keywords:
@@ -11,19 +11,19 @@
# ALTERNATIVE_NAME - the name that the alternative is registered
# ALTERNATIVE_PATH - the path of installed alternative
#
# ALTERNATIVE_PRIORITY and ALTERNATIVE_LINK are optional which have defaults
# ALTENATIVE_PRIORITY and ALTERNATIVE_LINK are optional which have defautls
# in this class.
#
# 'batch update' is used if you have multiple alternatives to be updated.
# Unlike 'single update', 'batch update' in most times only require two
# parameters:
# parameter:
#
# ALTERNATIVE_LINKS - a list of symbolic links for which you'd like to
# ALTERNATIVE_LINKS - a list of symbol links for which you'd like to
# create alternatives, with space as delimiter, e.g:
#
# ALTERNATIVE_LINKS = "${bindir}/cmd1 ${sbindir}/cmd2 ..."
#
# ALTERNATIVE_PRIORITY - optional, applies to all
# ALTNERATIVE_PRIORITY - optional, applies to all
#
# To simplify the design, this class has the assumption that for a name
# listed in ALTERNATIVE_LINKS, say /path/cmd:
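A hypothetical recipe fragment using the 'single update' mode documented above (names and priority are illustrative):

inherit update-alternatives
ALTERNATIVE_NAME = "vim"
ALTERNATIVE_PATH = "${bindir}/vim.vim"
ALTERNATIVE_PRIORITY = "100"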
@@ -49,7 +49,7 @@ update-alternatives --remove ${ALTERNATIVE_NAME} ${ALTERNATIVE_PATH}
}
# for batch alternatives, we use a simple approach to require only one parameter
# with the rest of the info deduced implicitly
# with the rest info deduced implicitly
update_alternatives_batch_postinst() {
for link in ${ALTERNATIVE_LINKS}
do
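
For reference, a recipe-side sketch of the two modes documented above; a recipe would normally pick one mode or the other, and the command names, paths and priority here are illustrative:

inherit update-alternatives
# 'single update': one command, explicit name and path
ALTERNATIVE_NAME = "cmd1"
ALTERNATIVE_PATH = "${bindir}/cmd1.${PN}"
# 'batch update': several links, one shared priority
ALTERNATIVE_LINKS = "${bindir}/cmd1 ${sbindir}/cmd2"
ALTERNATIVE_PRIORITY = "100"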

View File

@@ -3,8 +3,6 @@ UPDATERCPN ?= "${PN}"
DEPENDS_append = " update-rc.d-native"
UPDATERCD = "update-rc.d"
UPDATERCD_virtclass-native = ""
UPDATERCD_virtclass-nativesdk = ""
RDEPENDS_${UPDATERCPN}_append = " ${UPDATERCD}"
INITSCRIPT_PARAMS ?= "defaults"
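
A minimal consumer of this class, as a sketch (the service name and update-rc.d arguments are illustrative):

inherit update-rc.d
INITSCRIPT_NAME = "myservice"
INITSCRIPT_PARAMS = "defaults 30"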

View File

@@ -156,6 +156,7 @@ ASSUME_PROVIDED = "\
python-native-runtime \
subversion-native \
tar-native \
texinfo-native \
virtual/libintl-native \
"
@@ -190,7 +191,7 @@ BP = "${BPN}-${PV}"
#
# network based PR service
#
USE_PR_SERV = "${@[1,0][((not d.getVar('PRSERV_HOST', True)) or (not d.getVar('PRSERV_PORT', True))) and (not d.getVar('PRSERV_LOCKDOWN', True))]}"
USE_PR_SERV = "${@[1,0][(d.getVar('PRSERV_HOST',1) is None) or (d.getVar('PRSERV_PORT',1) is None)]}"
# Package info.
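
The "${@[1,0][...]}" construct is inline Python list indexing: a true condition selects element 1 of [1,0], i.e. 0, and a false condition selects element 0, i.e. 1. So with both settings present (values illustrative):

PRSERV_HOST = "localhost"
PRSERV_PORT = "8585"
# the bracketed test is false, [1,0][False] picks element 0 = 1, so USE_PR_SERV = "1"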
@@ -606,6 +607,9 @@ export PATCH_GET="0"
# Not sure about the rest of this yet.
##################################################################
# slot - currently unused by OE. portage remnants
SLOT = "0"
# Other
export PKG_CONFIG_DIR = "${STAGING_DIR_HOST}/${libdir}/pkgconfig"
@@ -614,6 +618,10 @@ export PKG_CONFIG_LIBDIR = "${PKG_CONFIG_DIR}"
export PKG_CONFIG_SYSROOT_DIR = "${STAGING_DIR_HOST}"
export PKG_CONFIG_DISABLE_UNINSTALLED = "yes"
export QMAKE_MKSPEC_PATH = "${STAGING_DATADIR_NATIVE}/qmake"
export STAGING_SIPDIR = "${STAGING_DATADIR_NATIVE}/sip"
export STAGING_IDLDIR = "${STAGING_DATADIR}/idl"
# library package naming
AUTO_LIBNAME_PKGS = "${PACKAGES}"
@@ -666,7 +674,6 @@ require conf/abi_version.conf
DL_DIR ?= "${TOPDIR}/downloads"
SSTATE_DIR ?= "${TOPDIR}/sstate-cache"
IMAGE_FSTYPES ?= "tar.gz"
INITRAMFS_FSTYPES ?= "cpio.gz"
PCMCIA_MANAGER ?= "pcmcia-cs"
DEFAULT_TASK_PROVIDER ?= "task-base"
MACHINE_TASK_PROVIDER ?= "${DEFAULT_TASK_PROVIDER}"
@@ -685,6 +692,7 @@ OES_BITBAKE_CONF = "1"
# Machine properties and task-base stuff
##################################################################
MACHINE_FEATURES ?= "kernel26"
DISTRO_FEATURES ?= ""
# This is used to limit what packages go into the images built, so set big by default
@@ -732,7 +740,7 @@ BB_CONSOLELOG = "${TMPDIR}/cooker.log.${DATETIME}"
# Setup our default hash policy
BB_SIGNATURE_HANDLER ?= "basic"
BB_HASHTASK_WHITELIST ?= "(.*-cross$|.*-native$|.*-cross-initial$|.*-cross-intermediate$|^virtual:native:.*|^virtual:nativesdk:.*)"
BB_HASHBASE_WHITELIST ?= "TMPDIR FILE PATH PWD BB_TASKHASH BBPATH DL_DIR SSTATE_DIR THISDIR FILESEXTRAPATHS FILE_DIRNAME HOME LOGNAME SHELL TERM USER FILESPATH STAGING_DIR_HOST STAGING_DIR_TARGET COREBASE PRSERV_HOST PRSERV_PORT PRSERV_DUMPDIR PRSERV_DUMPFILE PRSERV_LOCKDOWN"
BB_HASHBASE_WHITELIST ?= "TMPDIR FILE PATH PWD BB_TASKHASH BBPATH DL_DIR SSTATE_DIR THISDIR FILESEXTRAPATHS FILE_DIRNAME HOME LOGNAME SHELL TERM USER FILESPATH STAGING_DIR_HOST STAGING_DIR_TARGET COREBASE"
MLPREFIX ??= ""
MULTILIB_VARIANTS ??= ""
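
Variables named in BB_HASHBASE_WHITELIST are left out of the base task signature, which is why the PR-service settings added above can change without forcing rebuilds. Site-local variables can be excluded the same way; a sketch with a hypothetical variable name:

BB_HASHBASE_WHITELIST += "MY_SITE_MIRROR_ROOT"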

View File

@@ -48,6 +48,3 @@ NO32LIBS ??= "1"
BBINCLUDELOGS ??= "yes"
SDK_VERSION ??= "oe-core.0"
DISTRO_VERSION ??= "oe-core.0"
# Missing checksums should raise an error
BB_STRICT_CHECKSUM = "1"
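
With BB_STRICT_CHECKSUM = "1", fetching a remote SRC_URI fails unless the recipe declares checksums, so recipes need entries along these lines (URI and checksum values are illustrative placeholders):

SRC_URI = "http://example.com/foo-1.0.tar.gz"
SRC_URI[md5sum] = "0123456789abcdef0123456789abcdef"
SRC_URI[sha256sum] = "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef"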

File diff suppressed because it is too large

View File

@@ -23,6 +23,13 @@ EGLIBCVERSION ?= "2.13"
UCLIBCVERSION ?= "0.9.32"
LINUXLIBCVERSION ?= "3.1"
# Temporary preferred version overrides for PPC
PREFERRED_VERSION_u-boot-mkimage-native_powerpc ?= "2009.08"
# Temporary workaround for gcc 4.6.0 ICE with beagleboard
# gcc bug: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=47719
TARGET_CC_ARCH_arm_pn-mesa-xlib := "${@'${TARGET_CC_ARCH}'.replace('armv7-a','armv5')}"
PREFERRED_VERSION_gcc ?= "${GCCVERSION}"
PREFERRED_VERSION_gcc-cross ?= "${GCCVERSION}"
PREFERRED_VERSION_gcc-cross-initial ?= "${GCCVERSION}"
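
The mesa-xlib workaround relies on immediate (:=) expansion of inline Python: the .replace() runs once at parse time and rewrites the tune flags for that single recipe. A sketch of the effect, with illustrative flag values:

# TARGET_CC_ARCH                  = "-march=armv7-a -mthumb-interwork"
# TARGET_CC_ARCH_arm_pn-mesa-xlib = "-march=armv5 -mthumb-interwork"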

View File

@@ -1,51 +1,5 @@
# These aren't actually used anywhere that I can tell
# They may be in the future (or are used by someone else)
# For completeness' sake, I've updated them
SRC_DISTRIBUTE_LICENSES += "AAL Adobe AFL-1.2 AFL-2.0 AFL-2.1 AFL-3.0"
SRC_DISTRIBUTE_LICENSES += "AGPL-3.0 ANTLR-PD Apache-1.0 Apache-1.1 Apache-2.0"
SRC_DISTRIBUTE_LICENSES += "APL-1.0 APSL-1.0 APSL-1.1 APSL-1.2 APSL-2.0"
SRC_DISTRIBUTE_LICENSES += "Artistic-1.0 Artistic-2.0 BitstreamVera BSD"
SRC_DISTRIBUTE_LICENSES += "BSD-2-Clause BSD-3-Clause BSD-4-Clause BSL-1.0"
SRC_DISTRIBUTE_LICENSES += "CATOSL-1.1 CC0-1.0 CC-BY-1.0 CC-BY-2.0 CC-BY-2.5"
SRC_DISTRIBUTE_LICENSES += "CC-BY-3.0 CC-BY-NC-1.0 CC-BY-NC-2.0 CC-BY-NC-2.5"
SRC_DISTRIBUTE_LICENSES += "CC-BY-NC-3.0 CC-BY-NC-ND-1.0 CC-BY-NC-ND-2.0"
SRC_DISTRIBUTE_LICENSES += "CC-BY-NC-ND-2.5 CC-BY-NC-ND-3.0 CC-BY-NC-SA-1.0"
SRC_DISTRIBUTE_LICENSES += "CC-BY-NC-SA-2.0 CC-BY-NC-SA-2.5 CC-BY-NC-SA-3.0"
SRC_DISTRIBUTE_LICENSES += "CC-BY-ND-1.0 CC-BY-ND-2.0 CC-BY-ND-2.5 CC-BY-ND-3.0"
SRC_DISTRIBUTE_LICENSES += "CC-BY-SA-1.0 CC-BY-SA-2.0 CC-BY-SA-2.5 CC-BY-SA-3.0"
SRC_DISTRIBUTE_LICENSES += "CDDL-1.0 CECILL-1.0 CECILL-2.0 CECILL-B CECILL-C"
SRC_DISTRIBUTE_LICENSES += "ClArtistic CPAL-1.0 CPL-1.0 CUA-OPL-1.0 DSSSL"
SRC_DISTRIBUTE_LICENSES += "ECL-1.0 ECL-2.0 eCos-2.0 EDL-1.0 EFL-1.0 EFL-2.0"
SRC_DISTRIBUTE_LICENSES += "Elfutils-Exception Entessa EPL-1.0 ErlPL-1.1"
SRC_DISTRIBUTE_LICENSES += "EUDatagrid EUPL-1.0 EUPL-1.1 Fair Frameworx-1.0"
SRC_DISTRIBUTE_LICENSES += "FreeType GFDL-1.1 GFDL-1.2 GFDL-1.3 GPL-1.0"
SRC_DISTRIBUTE_LICENSES += "GPL-2.0 GPL-2.0-with-autoconf-exception"
SRC_DISTRIBUTE_LICENSES += "GPL-2.0-with-classpath-exception"
SRC_DISTRIBUTE_LICENSES += "GPL-2.0-with-font-exception"
SRC_DISTRIBUTE_LICENSES += "GPL-2.0-with-GCC-exception"
SRC_DISTRIBUTE_LICENSES += "GPL-2-with-bison-exception GPL-3.0"
SRC_DISTRIBUTE_LICENSES += "GPL-3.0-with-autoconf-exception"
SRC_DISTRIBUTE_LICENSES += "GPL-3.0-with-GCC-exception"
SRC_DISTRIBUTE_LICENSES += "gSOAP-1 gSOAP-1.3b HPND IPA IPL-1.0 ISC LGPL-2.0"
SRC_DISTRIBUTE_LICENSES += "LGPL-2.1 LGPL-3.0 Libpng LPL-1.02 LPPL-1.0 LPPL-1.1"
SRC_DISTRIBUTE_LICENSES += "LPPL-1.2 LPPL-1.3c MirOS MIT Motosoto MPL-1.0"
SRC_DISTRIBUTE_LICENSES += "MPL-1.1 MS-PL MS-RL Multics NASA-1.3 Nauman NCSA"
SRC_DISTRIBUTE_LICENSES += "NGPL Nokia NPOSL-3.0 NTP OASIS OCLC-2.0 ODbL-1.0"
SRC_DISTRIBUTE_LICENSES += "OFL-1.1 OGTSL OLDAP-2.8 OpenSSL OSL-1.0 OSL-2.0"
SRC_DISTRIBUTE_LICENSES += "OSL-3.0 PD PHP-3.0 PostgreSQL Proprietary"
SRC_DISTRIBUTE_LICENSES += "Python-2.0 QPL-1.0 RHeCos-1 RHeCos-1.1 RPL-1.5"
SRC_DISTRIBUTE_LICENSES += "RPSL-1.0 RSCPL Ruby SAX-PD Simple-2.0 Sleepycat"
SRC_DISTRIBUTE_LICENSES += "SPL-1.0 SugarCRM-1 SugarCRM-1.1.3 UCB VSL-1.0 W3C
SRC_DISTRIBUTE_LICENSES += "Watcom-1.0 WXwindows XFree86-1.1 Xnet YPL-1.1"
SRC_DISTRIBUTE_LICENSES += "Zimbra-1.3 Zlib ZPL-1.1 ZPL-2.0 ZPL-2.1"
SRC_DISTRIBUTE_LICENSES += "GPL GPLv2 BSD LGPL Apache-2.0 QPL AFL"
SRC_DISTRIBUTE_LICENSES += "MIT Sleepycat Classpath Perl PSF PD Artistic"
SRC_DISTRIBUTE_LICENSES += "bzip2 zlib ntp cron libpng netperf openssl"
SRC_DISTRIBUTE_LICENSES += "Info-ZIP tcp-wrappers"
# Additional license directories. Add your custom license directories to this path.
# LICENSE_PATH += "${COREBASE}/custom-licenses"
# Set if you want the license.manifest copied to the image
#COPY_LIC_MANIFEST = "1"
# If you want the pkg licenses copied over as well you must set
# both COPY_LIC_MANIFEST and COPY_LIC_DIRS
#COPY_LIC_DIRS = "1"
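
Pulling those commented options together, a local.conf sketch that registers a custom license directory and copies license data into the image (the directory path is illustrative):

LICENSE_PATH += "${COREBASE}/custom-licenses"
COPY_LIC_MANIFEST = "1"
COPY_LIC_DIRS = "1"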

View File

@@ -4,7 +4,6 @@ ARMPKGARCH ?= "armv4"
TUNEVALID[armv4] = "Enable instructions for ARMv4"
TUNE_CCARGS += "${@bb.utils.contains("TUNE_FEATURES", "armv4", "-march=armv4${ARMPKGSFX_THUMB}", "", d)}"
MACHINEOVERRIDES .= "${@bb.utils.contains("TUNE_FEATURES", "armv4", ":armv4", "" ,d)}"
require conf/machine/include/arm/arch-arm.inc
require conf/machine/include/arm/feature-arm-thumb.inc
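
A machine configuration opts in by including the tune file and selecting the tune; bb.utils.contains() then injects the -march flag and the :armv4 override only when "armv4" appears in TUNE_FEATURES. A sketch, assuming the usual DEFAULTTUNE indirection of these tune files:

require conf/machine/include/arm/arch-armv4.inc
DEFAULTTUNE = "armv4"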

View File

@@ -5,7 +5,6 @@ ARMPKGARCH ?= "armv5"
TUNEVALID[armv5] = "Enable instructions for ARMv5"
TUNE_CONFLICTS[armv5] = "armv4"
TUNE_CCARGS += "${@bb.utils.contains("TUNE_FEATURES", "armv5", "-march=armv5${ARMPKGSFX_THUMB}${ARMPKGSFX_DSP}", "", d)}"
MACHINEOVERRIDES .= "${@bb.utils.contains("TUNE_FEATURES", "armv5", ":armv5", "" ,d)}"
ARMPKGSFX_DSP = "${@bb.utils.contains("TUNE_FEATURES", [ "armv5", "dsp" ], "e", "", d)}"
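
The ARMPKGSFX_DSP line illustrates the suffix pattern: it expands to "e" only when TUNE_FEATURES contains both "armv5" and "dsp". Assuming feature-arm-thumb.inc likewise sets ARMPKGSFX_THUMB to "t" for the "thumb" feature, a tune such as:

TUNE_FEATURES = "arm armv5 thumb dsp"
# yields ARMPKGSFX_THUMB = "t" and ARMPKGSFX_DSP = "e",
# so TUNE_CCARGS picks up -march=armv5te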

View File

@@ -5,7 +5,6 @@ ARMPKGARCH ?= "armv6"
TUNEVALID[armv6] = "Enable instructions for ARMv6"
TUNE_CONFLICTS[armv6] = "armv4 armv5"
TUNE_CCARGS += "${@bb.utils.contains("TUNE_FEATURES", "armv6", "-march=armv6", "", d)}"
MACHINEOVERRIDES .= "${@bb.utils.contains("TUNE_FEATURES", "armv6", ":armv6", "" ,d)}"
require conf/machine/include/arm/arch-armv5-dsp.inc

Some files were not shown because too many files have changed in this diff