Compare commits
5 Commits
dylan...1.4_M3.fin

| Author | SHA1 | Date |
|---|---|---|
|  | 3ae1a4ba97 |  |
|  | 5e17deac79 |  |
|  | b47d0b23d9 |  |
|  | 8617f7eb15 |  |
|  | 87017b554e |  |
@@ -1,34 +1,28 @@
Poky Hardware README
====================

This file gives details about using Poky with the reference machines
supported out of the box. A full list of supported reference target machines
can be found by looking in the following directories:

meta/conf/machine/
meta-yocto-bsp/conf/machine/

If you are in doubt about using Poky/OpenEmbedded with your hardware, consult
the documentation for your board/device.
This file gives details about using Poky with different hardware reference
boards and consumer devices. A full list of target machines can be found by
looking in the meta/conf/machine/ directory. If in doubt about using Poky with
your hardware, consult the documentation for your board/device.

Support for additional devices is normally added by creating BSP layers - for
more information please see the Yocto Board Support Package (BSP) Developer's
Guide - documentation source is in documentation/bspguide or download the PDF
from:

http://yoctoproject.org/documentation
http://yoctoproject.org/community/documentation

Support for physical reference hardware has now been split out into a
meta-yocto-bsp layer which can be removed separately from other layers if not
needed.
Support for machines other than QEMU may be moved out to separate BSP layers in
future versions.


QEMU Emulation Targets
======================

To simplify development, the build system supports building images to
work with the QEMU emulator in system emulation mode. Several architectures
are currently supported:
To simplify development Poky supports building images to work with the QEMU
emulator in system emulation mode. Several architectures are currently
supported:

* ARM (qemuarm)
* x86 (qemux86)
@@ -36,33 +30,32 @@ are currently supported:
* PowerPC (qemuppc)
* MIPS (qemumips)

Use of the QEMU images is covered in the Yocto Project Reference Manual.
The appropriate MACHINE variable value corresponding to the target is given
in brackets.
Use of the QEMU images is covered in the Poky Reference Manual. The Poky
MACHINE setting corresponding to the target is given in brackets.


Hardware Reference Boards
=========================

The following boards are supported by the meta-yocto-bsp layer:
The following boards are supported by Poky's core layer:

* Texas Instruments Beagleboard (beagleboard)
* Freescale MPC8315E-RDB (mpc8315e-rdb)
* Ubiquiti Networks RouterStation Pro (routerstationpro)

For more information see the board's section below. The appropriate MACHINE
variable value corresponding to the board is given in brackets.
For more information see the board's section below. The Poky MACHINE setting
corresponding to the board is given in brackets.


Consumer Devices
================

The following consumer devices are supported by the meta-yocto-bsp layer:
The following consumer devices are supported by Poky's core layer:

* Intel Atom based PCs and devices (atom-pc)

For more information see the device's section below. The appropriate MACHINE
variable value corresponding to the device is given in brackets.
For more information see the device's section below. The Poky MACHINE setting
corresponding to the device is given in brackets.
@@ -85,7 +78,7 @@ supports ethernet, wifi, sound, and i915 graphics by default in addition to
common PC input devices, busses, and so on.

Depending on the device, it can boot from a traditional hard-disk, a USB device,
or over the network. Writing generated images to physical media is
or over the network. Writing poky generated images to physical media is
straightforward with a caveat for USB devices. The following examples assume the
target boot device is /dev/sdb, be sure to verify this and use the correct
device as the following commands are run as root and are not reversable.
@@ -138,7 +131,7 @@ USB Device:
device stops flashing, remove and reinsert the device to allow the
kernel to detect the new partition layout.

c. Copy the contents of the image to the USB-ZIP mode device:
c. Copy the contents of the poky image to the USB-ZIP mode device:

# mkdir /tmp/image
# mkdir /tmp/usbkey
@@ -288,8 +281,8 @@ anything here.
Load the kernel and dtb (device tree blob), and boot the system as follows:

1. Get the kernel (uImage-mpc8315e-rdb.bin) and dtb (uImage-mpc8315e-rdb.dtb)
files from the tmp/deploy directory, and make them available on your TFTP
server.
files from the Poky build tmp/deploy directory, and make them available on
your TFTP server.

2. Connect the board's first serial port to your workstation and then start up
your favourite serial terminal so that you will be able to interact with
@@ -308,9 +301,9 @@ Load the kernel and dtb (device tree blob), and boot the system as follows:

5. Download the kernel and dtb, and boot:

=> tftp 1000000 uImage-mpc8315e-rdb.bin
=> tftp 2000000 uImage-mpc8315e-rdb.dtb
=> bootm 1000000 - 2000000
=> tftp 800000 uImage-mpc8315e-rdb.bin
=> tftp 780000 uImage-mpc8315e-rdb.dtb
=> bootm 800000 - 780000


Ubiquiti Networks RouterStation Pro (routerstationpro)
@@ -40,7 +40,7 @@ from bb import cooker
from bb import ui
from bb import server

__version__ = "1.18.0"
__version__ = "1.17.0"
logger = logging.getLogger("BitBake")

# Unbuffer stdout to avoid log truncation in the event
@@ -229,8 +229,10 @@ Default BBFILES are the .bb files in the current directory.""")
# Before we start modifying the environment we should take a pristine
# copy for possible later use
initialenv = os.environ.copy()
# Clear away any spurious environment variables while we stoke up the cooker
cleanedvars = bb.utils.clean_environment()
# Clear away any spurious environment variables. But don't wipe the
# environment totally. This is necessary to ensure the correct operation
# of the UIs (e.g. for DISPLAY, etc.)
bb.utils.clean_environment()

server = server.BitBakeServer()
if configuration.bind:
@@ -256,10 +258,6 @@ Default BBFILES are the .bb files in the current directory.""")
# Setup a connection to the server (cooker)
server_connection = server.establishConnection()

# Restore the environment in case the UI needs it
for k in cleanedvars:
os.environ[k] = cleanedvars[k]

try:
return server.launchUI(ui_main, server_connection.connection, server_connection.events)
finally:
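The two hunks above swap bitbake's startup environment handling: one side keeps the variables removed by bb.utils.clean_environment() so they can be restored for the UI, while the other simply clears them. A minimal, self-contained sketch of the save/clean/restore pattern (the whitelist and helper name below are illustrative, not bitbake's actual implementation):

```python
import os

def clean_environment(whitelist=("PATH", "HOME", "LANG", "SHELL")):
    # Drop everything not on the whitelist, but remember what was removed
    # so a UI process can still get e.g. DISPLAY back later.
    removed = {}
    for key in list(os.environ):
        if key not in whitelist:
            removed[key] = os.environ.pop(key)
    return removed

cleanedvars = clean_environment()
# ... start the server/cooker with the cleaned environment ...
for k, v in cleanedvars.items():  # restore before launching the UI
    os.environ[k] = v
```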
@@ -85,8 +85,8 @@ options, args = parser.parse_args(sys.argv)
if len(args) == 1:
parser.print_help()
else:
tinfoil = bb.tinfoil.Tinfoil()
if options.taskmode:
tinfoil = bb.tinfoil.Tinfoil()
if len(args) < 3:
logger.error("Please specify a recipe and task name")
sys.exit(1)
@@ -26,7 +26,6 @@ import os
import sys
import fnmatch
from collections import defaultdict
import re

bindir = os.path.dirname(__file__)
topdir = os.path.dirname(bindir)
@@ -73,7 +72,7 @@ class Commands(cmd.Cmd):
else:
sys.stdout.write("usage: bitbake-layers <command> [arguments]\n\n")
sys.stdout.write("Available commands:\n")
procnames = list(set(self.get_names()))
procnames = self.get_names()
for procname in procnames:
if procname[:3] == 'do_':
sys.stdout.write(" %s\n" % procname[3:].replace('_', '-'))
@@ -459,25 +458,10 @@ build results (as the layer priority order has effectively changed).
for layer, _, regex, _ in self.bbhandler.cooker.status.bbfile_config_priorities:
if regex.match(filename):
for layerdir in self.bblayers:
if regex.match(os.path.join(layerdir, 'test')) and re.match(layerdir, filename):
if regex.match(os.path.join(layerdir, 'test')):
return self.get_layer_name(layerdir)
return "?"

def get_file_layerdir(self, filename):
for layer, _, regex, _ in self.bbhandler.cooker.status.bbfile_config_priorities:
if regex.match(filename):
for layerdir in self.bblayers:
if regex.match(os.path.join(layerdir, 'test')) and re.match(layerdir, filename):
return layerdir
return "?"

def remove_layer_prefix(self, f):
"""Remove the layer_dir prefix, e.g., f = /path/to/layer_dir/foo/blah, the
return value will be: layer_dir/foo/blah"""
f_layerdir = self.get_file_layerdir(f)
prefix = os.path.join(os.path.dirname(f_layerdir), '')
return f[len(prefix):] if f.startswith(prefix) else f

def get_layer_name(self, layerdir):
return os.path.basename(layerdir.rstrip(os.sep))
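The get_file_layer()/get_file_layerdir() hunk above tightens the lookup on one side of the diff by also checking that the file path actually falls under the candidate layer directory. A hedged sketch of that idea, with a made-up layers mapping standing in for the cooker's bbfile_config_priorities and a plain prefix check approximating the extra re.match() guard:

```python
import os
import re

# Hypothetical mapping: layer directory -> compiled BBFILE_PATTERN-style regex.
layers = {
    "/srv/poky/meta": re.compile(r"^/srv/poky/meta/"),
    "/srv/poky/meta-yocto-bsp": re.compile(r"^/srv/poky/meta-yocto-bsp/"),
}

def get_file_layer(filename):
    for layerdir, regex in layers.items():
        # The regex identifies the layer; the prefix check confirms the file
        # really lives under that layer directory.
        if regex.match(os.path.join(layerdir, "test")) and \
                filename.startswith(os.path.join(layerdir, "")):
            return os.path.basename(layerdir.rstrip(os.sep))
    return "?"

print(get_file_layer("/srv/poky/meta/recipes-core/busybox/busybox_1.20.2.bb"))  # meta
```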
@@ -557,164 +541,6 @@ Recipes are listed with the bbappends that apply to them as subitems.
|
||||
notappended.append(basename)
|
||||
return appended, notappended
|
||||
|
||||
def do_show_cross_depends(self, args):
|
||||
"""figure out the dependency between recipes that crosses a layer boundary.
|
||||
|
||||
usage: show-cross-depends [-f]
|
||||
|
||||
Figure out the dependency between recipes that crosses a layer boundary.
|
||||
|
||||
Options:
|
||||
-f show full file path
|
||||
|
||||
NOTE:
|
||||
The .bbappend file can impact the dependency.
|
||||
"""
|
||||
self.bbhandler.prepare()
|
||||
|
||||
show_filenames = False
|
||||
for arg in args.split():
|
||||
if arg == '-f':
|
||||
show_filenames = True
|
||||
else:
|
||||
sys.stderr.write("show-cross-depends: invalid option %s\n" % arg)
|
||||
self.do_help('')
|
||||
return
|
||||
|
||||
pkg_fn = self.bbhandler.cooker_data.pkg_fn
|
||||
bbpath = str(self.bbhandler.config_data.getVar('BBPATH', True))
|
||||
self.require_re = re.compile(r"require\s+(.+)")
|
||||
self.include_re = re.compile(r"include\s+(.+)")
|
||||
self.inherit_re = re.compile(r"inherit\s+(.+)")
|
||||
|
||||
# The bb's DEPENDS and RDEPENDS
|
||||
for f in pkg_fn:
|
||||
f = bb.cache.Cache.virtualfn2realfn(f)[0]
|
||||
# Get the layername that the file is in
|
||||
layername = self.get_file_layer(f)
|
||||
|
||||
# The DEPENDS
|
||||
deps = self.bbhandler.cooker_data.deps[f]
|
||||
for pn in deps:
|
||||
if pn in self.bbhandler.cooker_data.pkg_pn:
|
||||
best = bb.providers.findBestProvider(pn,
|
||||
self.bbhandler.cooker.configuration.data,
|
||||
self.bbhandler.cooker_data,
|
||||
self.bbhandler.cooker_data.pkg_pn)
|
||||
self.check_cross_depends("DEPENDS", layername, f, best[3], show_filenames)
|
||||
|
||||
# The RDPENDS
|
||||
all_rdeps = self.bbhandler.cooker_data.rundeps[f].values()
|
||||
# Remove the duplicated or null one.
|
||||
sorted_rdeps = {}
|
||||
# The all_rdeps is the list in list, so we need two for loops
|
||||
for k1 in all_rdeps:
|
||||
for k2 in k1:
|
||||
sorted_rdeps[k2] = 1
|
||||
all_rdeps = sorted_rdeps.keys()
|
||||
for rdep in all_rdeps:
|
||||
all_p = bb.providers.getRuntimeProviders(self.bbhandler.cooker_data, rdep)
|
||||
if all_p:
|
||||
best = bb.providers.filterProvidersRunTime(all_p, rdep,
|
||||
self.bbhandler.cooker.configuration.data,
|
||||
self.bbhandler.cooker_data)[0][0]
|
||||
self.check_cross_depends("RDEPENDS", layername, f, best, show_filenames)
|
||||
|
||||
# The inherit class
|
||||
cls_re = re.compile('classes/')
|
||||
if f in self.bbhandler.cooker_data.inherits:
|
||||
inherits = self.bbhandler.cooker_data.inherits[f]
|
||||
for cls in inherits:
|
||||
# The inherits' format is [classes/cls, /path/to/classes/cls]
|
||||
# ignore the classes/cls.
|
||||
if not cls_re.match(cls):
|
||||
inherit_layername = self.get_file_layer(cls)
|
||||
if inherit_layername != layername:
|
||||
if not show_filenames:
|
||||
f_short = self.remove_layer_prefix(f)
|
||||
cls = self.remove_layer_prefix(cls)
|
||||
else:
|
||||
f_short = f
|
||||
logger.plain("%s inherits %s" % (f_short, cls))
|
||||
|
||||
# The 'require/include xxx' in the bb file
|
||||
pv_re = re.compile(r"\${PV}")
|
||||
fnfile = open(f, 'r')
|
||||
line = fnfile.readline()
|
||||
while line:
|
||||
m, keyword = self.match_require_include(line)
|
||||
# Found the 'require/include xxxx'
|
||||
if m:
|
||||
needed_file = m.group(1)
|
||||
# Replace the ${PV} with the real PV
|
||||
if pv_re.search(needed_file) and f in self.bbhandler.cooker_data.pkg_pepvpr:
|
||||
pv = self.bbhandler.cooker_data.pkg_pepvpr[f][1]
|
||||
needed_file = re.sub(r"\${PV}", pv, needed_file)
|
||||
self.print_cross_files(bbpath, keyword, layername, f, needed_file, show_filenames)
|
||||
line = fnfile.readline()
|
||||
fnfile.close()
|
||||
|
||||
# The "require/include xxx" in conf/machine/*.conf, .inc and .bbclass
|
||||
conf_re = re.compile(".*/conf/machine/[^\/]*\.conf$")
|
||||
inc_re = re.compile(".*\.inc$")
|
||||
# The "inherit xxx" in .bbclass
|
||||
bbclass_re = re.compile(".*\.bbclass$")
|
||||
for layerdir in self.bblayers:
|
||||
layername = self.get_layer_name(layerdir)
|
||||
for dirpath, dirnames, filenames in os.walk(layerdir):
|
||||
for name in filenames:
|
||||
f = os.path.join(dirpath, name)
|
||||
s = conf_re.match(f) or inc_re.match(f) or bbclass_re.match(f)
|
||||
if s:
|
||||
ffile = open(f, 'r')
|
||||
line = ffile.readline()
|
||||
while line:
|
||||
m, keyword = self.match_require_include(line)
|
||||
# Only bbclass has the "inherit xxx" here.
|
||||
bbclass=""
|
||||
if not m and f.endswith(".bbclass"):
|
||||
m, keyword = self.match_inherit(line)
|
||||
bbclass=".bbclass"
|
||||
# Find a 'require/include xxxx'
|
||||
if m:
|
||||
self.print_cross_files(bbpath, keyword, layername, f, m.group(1) + bbclass, show_filenames)
|
||||
line = ffile.readline()
|
||||
ffile.close()
|
||||
|
||||
def print_cross_files(self, bbpath, keyword, layername, f, needed_filename, show_filenames):
|
||||
"""Print the depends that crosses a layer boundary"""
|
||||
needed_file = bb.utils.which(bbpath, needed_filename)
|
||||
if needed_file:
|
||||
# Which layer is this file from
|
||||
needed_layername = self.get_file_layer(needed_file)
|
||||
if needed_layername != layername:
|
||||
if not show_filenames:
|
||||
f = self.remove_layer_prefix(f)
|
||||
needed_file = self.remove_layer_prefix(needed_file)
|
||||
logger.plain("%s %s %s" %(f, keyword, needed_file))
|
||||
def match_inherit(self, line):
|
||||
"""Match the inherit xxx line"""
|
||||
return (self.inherit_re.match(line), "inherits")
|
||||
|
||||
def match_require_include(self, line):
|
||||
"""Match the require/include xxx line"""
|
||||
m = self.require_re.match(line)
|
||||
keyword = "requires"
|
||||
if not m:
|
||||
m = self.include_re.match(line)
|
||||
keyword = "includes"
|
||||
return (m, keyword)
|
||||
|
||||
def check_cross_depends(self, keyword, layername, f, needed_file, show_filenames):
|
||||
"""Print the DEPENDS/RDEPENDS file that crosses a layer boundary"""
|
||||
best_realfn = bb.cache.Cache.virtualfn2realfn(needed_file)[0]
|
||||
needed_layername = self.get_file_layer(best_realfn)
|
||||
if needed_layername != layername:
|
||||
if not show_filenames:
|
||||
f = self.remove_layer_prefix(f)
|
||||
best_realfn = self.remove_layer_prefix(best_realfn)
|
||||
|
||||
logger.plain("%s %s %s" % (f, keyword, best_realfn))
|
||||
|
||||
if __name__ == '__main__':
|
||||
sys.exit(main(sys.argv[1:]) or 0)
|
||||
|
||||
@@ -103,24 +103,6 @@ Show debug logging for the specified logging domains
|
||||
.TP
|
||||
.B \-P, \-\-profile
|
||||
profile the command and print a report
|
||||
.TP
|
||||
.B \-uUI, \-\-ui=UI
|
||||
User interface to use. Currently, hob, depexp, goggle or ncurses can be specified as UI.
|
||||
.TP
|
||||
.B \-tSERVERTYPE, \-\-servertype=SERVERTYPE
|
||||
Choose which server to use, none, process or xmlrpc.
|
||||
.TP
|
||||
.B \-\-revisions-changed
|
||||
Set the exit code depending on whether upstream floating revisions have changed or not.
|
||||
.TP
|
||||
.B \-\-server-only
|
||||
Run bitbake without UI, the frontend can connect with bitbake server itself.
|
||||
.TP
|
||||
.B \-BBIND, \-\-bind=BIND
|
||||
The name/address for the bitbake server to bind to.
|
||||
.TP
|
||||
.B \-\-no\-setscene
|
||||
Do not run any setscene tasks, forces builds.
|
||||
|
||||
.SH ENVIRONMENT VARIABLES
|
||||
bitbake uses the following environment variables to control its
|
||||
|
||||
@@ -21,7 +21,7 @@
|
||||
# with this program; if not, write to the Free Software Foundation, Inc.,
|
||||
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
|
||||
|
||||
__version__ = "1.18.0"
|
||||
__version__ = "1.17.0"
|
||||
|
||||
import sys
|
||||
if sys.version_info < (2, 6, 0):
|
||||
|
||||
@@ -423,27 +423,13 @@ def _exec_task(fn, task, d, quieterr):
|
||||
|
||||
return 0
|
||||
|
||||
def exec_task(fn, task, d, profile = False):
|
||||
def exec_task(fn, task, d):
|
||||
try:
|
||||
quieterr = False
|
||||
if d.getVarFlag(task, "quieterrors") is not None:
|
||||
quieterr = True
|
||||
|
||||
if profile:
|
||||
profname = "profile-%s.log" % (os.path.basename(fn) + "-" + task)
|
||||
try:
|
||||
import cProfile as profile
|
||||
except:
|
||||
import profile
|
||||
prof = profile.Profile()
|
||||
ret = profile.Profile.runcall(prof, _exec_task, fn, task, d, quieterr)
|
||||
prof.dump_stats(profname)
|
||||
bb.utils.process_profilelog(profname)
|
||||
|
||||
return ret
|
||||
else:
|
||||
return _exec_task(fn, task, d, quieterr)
|
||||
|
||||
return _exec_task(fn, task, d, quieterr)
|
||||
except Exception:
|
||||
from traceback import format_exc
|
||||
if not quieterr:
|
||||
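The exec_task hunk above adds/removes a profiling path that wraps the real task function in cProfile and dumps per-task statistics. A small stand-alone sketch of that wrapper; the function and log names here are illustrative, not bitbake's exact code:

```python
import cProfile
import pstats

def run_profiled(func, profname, *args, **kwargs):
    # Run func under the profiler, dump raw stats, and print a processed summary.
    prof = cProfile.Profile()
    ret = prof.runcall(func, *args, **kwargs)
    prof.dump_stats(profname)
    pstats.Stats(profname).sort_stats("cumulative").print_stats(20)
    return ret

def fake_task():
    return sum(i * i for i in range(100000))

run_profiled(fake_task, "profile-fake_task.log")
```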
@@ -519,13 +505,10 @@ def make_stamp(task, d, file_name = None):
|
||||
"""
|
||||
cleanmask = stamp_cleanmask_internal(task, d, file_name)
|
||||
for mask in cleanmask:
|
||||
# Preserve sigdata files in the stamps directory
|
||||
for name in glob.glob(mask):
|
||||
# Preserve sigdata files in the stamps directory
|
||||
if "sigdata" in name:
|
||||
continue
|
||||
# Preserve taint files in the stamps directory
|
||||
if name.endswith('.taint'):
|
||||
continue
|
||||
os.unlink(name)
|
||||
|
||||
stamp = stamp_internal(task, d, file_name)
|
||||
|
||||
@@ -119,6 +119,7 @@ class CoreRecipeInfo(RecipeInfoCommon):
|
||||
self.basetaskhashes = self.taskvar('BB_BASEHASH', self.tasks, metadata)
|
||||
self.hashfilename = self.getvar('BB_HASHFILENAME', metadata)
|
||||
|
||||
self.file_depends = metadata.getVar('__depends', False)
|
||||
self.task_deps = metadata.getVar('_task_deps', False) or {'tasks': [], 'parents': {}}
|
||||
|
||||
self.skipped = False
|
||||
@@ -126,6 +127,7 @@ class CoreRecipeInfo(RecipeInfoCommon):
|
||||
self.pv = self.getvar('PV', metadata)
|
||||
self.pr = self.getvar('PR', metadata)
|
||||
self.defaultpref = self.intvar('DEFAULT_PREFERENCE', metadata)
|
||||
self.broken = self.getvar('BROKEN', metadata)
|
||||
self.not_world = self.getvar('EXCLUDE_FROM_WORLD', metadata)
|
||||
self.stamp = self.getvar('STAMP', metadata)
|
||||
self.stampclean = self.getvar('STAMPCLEAN', metadata)
|
||||
@@ -232,7 +234,7 @@ class CoreRecipeInfo(RecipeInfoCommon):
|
||||
|
||||
# Collect files we may need for possible world-dep
|
||||
# calculations
|
||||
if not self.not_world:
|
||||
if not self.broken and not self.not_world:
|
||||
cachedata.possible_world.append(fn)
|
||||
|
||||
# create a collection of all targets for sanity checking
|
||||
@@ -526,7 +528,7 @@ class Cache(object):
|
||||
|
||||
if appends != info_array[0].appends:
|
||||
logger.debug(2, "Cache: appends for %s changed", fn)
|
||||
logger.debug(2, "%s to %s" % (str(appends), str(info_array[0].appends)))
|
||||
bb.note("%s to %s" % (str(appends), str(info_array[0].appends)))
|
||||
self.remove(fn)
|
||||
return False
|
||||
|
||||
|
||||
@@ -41,10 +41,6 @@ class HobRecipeInfo(RecipeInfoCommon):
|
||||
self.license = self.getvar('LICENSE', metadata)
|
||||
self.section = self.getvar('SECTION', metadata)
|
||||
self.description = self.getvar('DESCRIPTION', metadata)
|
||||
self.homepage = self.getvar('HOMEPAGE', metadata)
|
||||
self.bugtracker = self.getvar('BUGTRACKER', metadata)
|
||||
self.prevision = self.getvar('PR', metadata)
|
||||
self.files_info = self.getvar('FILES_INFO', metadata)
|
||||
|
||||
@classmethod
|
||||
def init_cacheData(cls, cachedata):
|
||||
@@ -53,17 +49,9 @@ class HobRecipeInfo(RecipeInfoCommon):
|
||||
cachedata.license = {}
|
||||
cachedata.section = {}
|
||||
cachedata.description = {}
|
||||
cachedata.homepage = {}
|
||||
cachedata.bugtracker = {}
|
||||
cachedata.prevision = {}
|
||||
cachedata.files_info = {}
|
||||
|
||||
def add_cacheData(self, cachedata, fn):
|
||||
cachedata.summary[fn] = self.summary
|
||||
cachedata.license[fn] = self.license
|
||||
cachedata.section[fn] = self.section
|
||||
cachedata.description[fn] = self.description
|
||||
cachedata.homepage[fn] = self.homepage
|
||||
cachedata.bugtracker[fn] = self.bugtracker
|
||||
cachedata.prevision[fn] = self.prevision
|
||||
cachedata.files_info[fn] = self.files_info
|
||||
|
||||
@@ -35,7 +35,7 @@ def check_indent(codestr):
|
||||
|
||||
class CodeParserCache(MultiProcessCache):
|
||||
cache_file_name = "bb_codeparser.dat"
|
||||
CACHE_VERSION = 3
|
||||
CACHE_VERSION = 2
|
||||
|
||||
def __init__(self):
|
||||
MultiProcessCache.__init__(self)
|
||||
@@ -100,8 +100,7 @@ class BufferedLogger(Logger):
|
||||
self.buffer = []
|
||||
|
||||
class PythonParser():
|
||||
getvars = ("d.getVar", "bb.data.getVar", "data.getVar", "d.appendVar", "d.prependVar")
|
||||
containsfuncs = ("bb.utils.contains", "base_contains", "oe.utils.contains")
|
||||
getvars = ("d.getVar", "bb.data.getVar", "data.getVar")
|
||||
execfuncs = ("bb.build.exec_func", "bb.build.exec_task")
|
||||
|
||||
def warn(self, func, arg):
|
||||
@@ -120,7 +119,7 @@ class PythonParser():
|
||||
|
||||
def visit_Call(self, node):
|
||||
name = self.called_node_name(node.func)
|
||||
if name in self.getvars or name in self.containsfuncs:
|
||||
if name in self.getvars:
|
||||
if isinstance(node.args[0], ast.Str):
|
||||
self.var_references.add(node.args[0].s)
|
||||
else:
|
||||
|
||||
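The codeparser hunk above changes which call names PythonParser.visit_Call treats as variable accessors (getVar plus, on one side, appendVar/prependVar and the contains helpers). A rough illustration of the underlying AST walk that collects literal getVar arguments; the visitor below is a simplified stand-in, not bitbake's parser:

```python
import ast

GETVAR_NAMES = {"d.getVar", "bb.data.getVar", "data.getVar"}

def dotted_name(func):
    # Rebuild a "d.getVar"-style dotted name from an Attribute/Name chain.
    parts = []
    while isinstance(func, ast.Attribute):
        parts.append(func.attr)
        func = func.value
    if isinstance(func, ast.Name):
        parts.append(func.id)
    return ".".join(reversed(parts))

class GetVarVisitor(ast.NodeVisitor):
    def __init__(self):
        self.var_references = set()

    def visit_Call(self, node):
        if dotted_name(node.func) in GETVAR_NAMES and node.args:
            first = node.args[0]
            if isinstance(first, ast.Constant) and isinstance(first.value, str):
                self.var_references.add(first.value)
        self.generic_visit(node)

visitor = GetVarVisitor()
visitor.visit(ast.parse('mach = d.getVar("MACHINE", True)'))
print(visitor.var_references)  # {'MACHINE'}
```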
@@ -174,18 +174,6 @@ class CommandsSync:
|
||||
value = str(params[1])
|
||||
command.cooker.configuration.data.setVar(varname, value)
|
||||
|
||||
def enableDataTracking(self, command, params):
|
||||
"""
|
||||
Enable history tracking for variables
|
||||
"""
|
||||
command.cooker.enableDataTracking()
|
||||
|
||||
def disableDataTracking(self, command, params):
|
||||
"""
|
||||
Disable history tracking for variables
|
||||
"""
|
||||
command.cooker.disableDataTracking()
|
||||
|
||||
def initCooker(self, command, params):
|
||||
"""
|
||||
Init the cooker to initial state with nothing parsed
|
||||
@@ -205,6 +193,13 @@ class CommandsSync:
|
||||
"""
|
||||
return bb.utils.cpu_count()
|
||||
|
||||
def setConfFilter(self, command, params):
|
||||
"""
|
||||
Set the configuration file parsing filter
|
||||
"""
|
||||
filterfunc = params[0]
|
||||
bb.parse.parse_py.ConfHandler.confFilters.append(filterfunc)
|
||||
|
||||
def matchFile(self, command, params):
|
||||
fMatch = params[0]
|
||||
return command.cooker.matchFile(fMatch)
|
||||
@@ -215,12 +210,6 @@ class CommandsSync:
|
||||
package_queue = params[2]
|
||||
return command.cooker.generateNewImage(image, base_image, package_queue)
|
||||
|
||||
def setVarFile(self, command, params):
|
||||
var = params[0]
|
||||
val = params[1]
|
||||
default_file = params[2]
|
||||
command.cooker.saveConfigurationVar(var, val, default_file)
|
||||
|
||||
class CommandsAsync:
|
||||
"""
|
||||
A class of asynchronous commands
|
||||
|
||||
@@ -239,690 +239,3 @@ class OrderedDict(dict):
|
||||
def viewitems(self):
|
||||
"od.viewitems() -> a set-like object providing a view on od's items"
|
||||
return ItemsView(self)
|
||||
|
||||
# Multiprocessing pool code imported from python 2.7.3. Previous versions of
|
||||
# python have issues in this code which hang pool usage
|
||||
|
||||
#
|
||||
# Module providing the `Pool` class for managing a process pool
|
||||
#
|
||||
# multiprocessing/pool.py
|
||||
#
|
||||
# Copyright (c) 2006-2008, R Oudkerk
|
||||
# All rights reserved.
|
||||
#
|
||||
# Redistribution and use in source and binary forms, with or without
|
||||
# modification, are permitted provided that the following conditions
|
||||
# are met:
|
||||
#
|
||||
# 1. Redistributions of source code must retain the above copyright
|
||||
# notice, this list of conditions and the following disclaimer.
|
||||
# 2. Redistributions in binary form must reproduce the above copyright
|
||||
# notice, this list of conditions and the following disclaimer in the
|
||||
# documentation and/or other materials provided with the distribution.
|
||||
# 3. Neither the name of author nor the names of any contributors may be
|
||||
# used to endorse or promote products derived from this software
|
||||
# without specific prior written permission.
|
||||
#
|
||||
# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS "AS IS" AND
|
||||
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
|
||||
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
|
||||
# ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
|
||||
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
|
||||
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
|
||||
# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
|
||||
# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
|
||||
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
|
||||
# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
|
||||
# SUCH DAMAGE.
|
||||
#
|
||||
import threading
|
||||
import Queue
|
||||
import itertools
|
||||
import collections
|
||||
import time
|
||||
|
||||
import multiprocessing
|
||||
from multiprocessing import Process, cpu_count, TimeoutError, pool
|
||||
from multiprocessing.util import Finalize, debug
|
||||
|
||||
#
|
||||
# Constants representing the state of a pool
|
||||
#
|
||||
|
||||
RUN = 0
|
||||
CLOSE = 1
|
||||
TERMINATE = 2
|
||||
|
||||
#
|
||||
# Miscellaneous
|
||||
#
|
||||
|
||||
def mapstar(args):
|
||||
return map(*args)
|
||||
|
||||
class MaybeEncodingError(Exception):
|
||||
"""Wraps possible unpickleable errors, so they can be
|
||||
safely sent through the socket."""
|
||||
|
||||
def __init__(self, exc, value):
|
||||
self.exc = repr(exc)
|
||||
self.value = repr(value)
|
||||
super(MaybeEncodingError, self).__init__(self.exc, self.value)
|
||||
|
||||
def __str__(self):
|
||||
return "Error sending result: '%s'. Reason: '%s'" % (self.value,
|
||||
self.exc)
|
||||
|
||||
def __repr__(self):
|
||||
return "<MaybeEncodingError: %s>" % str(self)
|
||||
|
||||
def worker(inqueue, outqueue, initializer=None, initargs=(), maxtasks=None):
|
||||
assert maxtasks is None or (type(maxtasks) == int and maxtasks > 0)
|
||||
put = outqueue.put
|
||||
get = inqueue.get
|
||||
if hasattr(inqueue, '_writer'):
|
||||
inqueue._writer.close()
|
||||
outqueue._reader.close()
|
||||
|
||||
if initializer is not None:
|
||||
initializer(*initargs)
|
||||
|
||||
completed = 0
|
||||
while maxtasks is None or (maxtasks and completed < maxtasks):
|
||||
try:
|
||||
task = get()
|
||||
except (EOFError, IOError):
|
||||
debug('worker got EOFError or IOError -- exiting')
|
||||
break
|
||||
|
||||
if task is None:
|
||||
debug('worker got sentinel -- exiting')
|
||||
break
|
||||
|
||||
job, i, func, args, kwds = task
|
||||
try:
|
||||
result = (True, func(*args, **kwds))
|
||||
except Exception, e:
|
||||
result = (False, e)
|
||||
try:
|
||||
put((job, i, result))
|
||||
except Exception as e:
|
||||
wrapped = MaybeEncodingError(e, result[1])
|
||||
debug("Possible encoding error while sending result: %s" % (
|
||||
wrapped))
|
||||
put((job, i, (False, wrapped)))
|
||||
completed += 1
|
||||
debug('worker exiting after %d tasks' % completed)
|
||||
|
||||
|
||||
class Pool(object):
|
||||
'''
|
||||
Class which supports an async version of the `apply()` builtin
|
||||
'''
|
||||
Process = Process
|
||||
|
||||
def __init__(self, processes=None, initializer=None, initargs=(),
|
||||
maxtasksperchild=None):
|
||||
self._setup_queues()
|
||||
self._taskqueue = Queue.Queue()
|
||||
self._cache = {}
|
||||
self._state = RUN
|
||||
self._maxtasksperchild = maxtasksperchild
|
||||
self._initializer = initializer
|
||||
self._initargs = initargs
|
||||
|
||||
if processes is None:
|
||||
try:
|
||||
processes = cpu_count()
|
||||
except NotImplementedError:
|
||||
processes = 1
|
||||
if processes < 1:
|
||||
raise ValueError("Number of processes must be at least 1")
|
||||
|
||||
if initializer is not None and not hasattr(initializer, '__call__'):
|
||||
raise TypeError('initializer must be a callable')
|
||||
|
||||
self._processes = processes
|
||||
self._pool = []
|
||||
self._repopulate_pool()
|
||||
|
||||
self._worker_handler = threading.Thread(
|
||||
target=Pool._handle_workers,
|
||||
args=(self, )
|
||||
)
|
||||
self._worker_handler.daemon = True
|
||||
self._worker_handler._state = RUN
|
||||
self._worker_handler.start()
|
||||
|
||||
|
||||
self._task_handler = threading.Thread(
|
||||
target=Pool._handle_tasks,
|
||||
args=(self._taskqueue, self._quick_put, self._outqueue, self._pool)
|
||||
)
|
||||
self._task_handler.daemon = True
|
||||
self._task_handler._state = RUN
|
||||
self._task_handler.start()
|
||||
|
||||
self._result_handler = threading.Thread(
|
||||
target=Pool._handle_results,
|
||||
args=(self._outqueue, self._quick_get, self._cache)
|
||||
)
|
||||
self._result_handler.daemon = True
|
||||
self._result_handler._state = RUN
|
||||
self._result_handler.start()
|
||||
|
||||
self._terminate = Finalize(
|
||||
self, self._terminate_pool,
|
||||
args=(self._taskqueue, self._inqueue, self._outqueue, self._pool,
|
||||
self._worker_handler, self._task_handler,
|
||||
self._result_handler, self._cache),
|
||||
exitpriority=15
|
||||
)
|
||||
|
||||
def _join_exited_workers(self):
|
||||
"""Cleanup after any worker processes which have exited due to reaching
|
||||
their specified lifetime. Returns True if any workers were cleaned up.
|
||||
"""
|
||||
cleaned = False
|
||||
for i in reversed(range(len(self._pool))):
|
||||
worker = self._pool[i]
|
||||
if worker.exitcode is not None:
|
||||
# worker exited
|
||||
debug('cleaning up worker %d' % i)
|
||||
worker.join()
|
||||
cleaned = True
|
||||
del self._pool[i]
|
||||
return cleaned
|
||||
|
||||
def _repopulate_pool(self):
|
||||
"""Bring the number of pool processes up to the specified number,
|
||||
for use after reaping workers which have exited.
|
||||
"""
|
||||
for i in range(self._processes - len(self._pool)):
|
||||
w = self.Process(target=worker,
|
||||
args=(self._inqueue, self._outqueue,
|
||||
self._initializer,
|
||||
self._initargs, self._maxtasksperchild)
|
||||
)
|
||||
self._pool.append(w)
|
||||
w.name = w.name.replace('Process', 'PoolWorker')
|
||||
w.daemon = True
|
||||
w.start()
|
||||
debug('added worker')
|
||||
|
||||
def _maintain_pool(self):
|
||||
"""Clean up any exited workers and start replacements for them.
|
||||
"""
|
||||
if self._join_exited_workers():
|
||||
self._repopulate_pool()
|
||||
|
||||
def _setup_queues(self):
|
||||
from multiprocessing.queues import SimpleQueue
|
||||
self._inqueue = SimpleQueue()
|
||||
self._outqueue = SimpleQueue()
|
||||
self._quick_put = self._inqueue._writer.send
|
||||
self._quick_get = self._outqueue._reader.recv
|
||||
|
||||
def apply(self, func, args=(), kwds={}):
|
||||
'''
|
||||
Equivalent of `apply()` builtin
|
||||
'''
|
||||
assert self._state == RUN
|
||||
return self.apply_async(func, args, kwds).get()
|
||||
|
||||
def map(self, func, iterable, chunksize=None):
|
||||
'''
|
||||
Equivalent of `map()` builtin
|
||||
'''
|
||||
assert self._state == RUN
|
||||
return self.map_async(func, iterable, chunksize).get()
|
||||
|
||||
def imap(self, func, iterable, chunksize=1):
|
||||
'''
|
||||
Equivalent of `itertools.imap()` -- can be MUCH slower than `Pool.map()`
|
||||
'''
|
||||
assert self._state == RUN
|
||||
if chunksize == 1:
|
||||
result = IMapIterator(self._cache)
|
||||
self._taskqueue.put((((result._job, i, func, (x,), {})
|
||||
for i, x in enumerate(iterable)), result._set_length))
|
||||
return result
|
||||
else:
|
||||
assert chunksize > 1
|
||||
task_batches = Pool._get_tasks(func, iterable, chunksize)
|
||||
result = IMapIterator(self._cache)
|
||||
self._taskqueue.put((((result._job, i, mapstar, (x,), {})
|
||||
for i, x in enumerate(task_batches)), result._set_length))
|
||||
return (item for chunk in result for item in chunk)
|
||||
|
||||
def imap_unordered(self, func, iterable, chunksize=1):
|
||||
'''
|
||||
Like `imap()` method but ordering of results is arbitrary
|
||||
'''
|
||||
assert self._state == RUN
|
||||
if chunksize == 1:
|
||||
result = IMapUnorderedIterator(self._cache)
|
||||
self._taskqueue.put((((result._job, i, func, (x,), {})
|
||||
for i, x in enumerate(iterable)), result._set_length))
|
||||
return result
|
||||
else:
|
||||
assert chunksize > 1
|
||||
task_batches = Pool._get_tasks(func, iterable, chunksize)
|
||||
result = IMapUnorderedIterator(self._cache)
|
||||
self._taskqueue.put((((result._job, i, mapstar, (x,), {})
|
||||
for i, x in enumerate(task_batches)), result._set_length))
|
||||
return (item for chunk in result for item in chunk)
|
||||
|
||||
def apply_async(self, func, args=(), kwds={}, callback=None):
|
||||
'''
|
||||
Asynchronous equivalent of `apply()` builtin
|
||||
'''
|
||||
assert self._state == RUN
|
||||
result = ApplyResult(self._cache, callback)
|
||||
self._taskqueue.put(([(result._job, None, func, args, kwds)], None))
|
||||
return result
|
||||
|
||||
def map_async(self, func, iterable, chunksize=None, callback=None):
|
||||
'''
|
||||
Asynchronous equivalent of `map()` builtin
|
||||
'''
|
||||
assert self._state == RUN
|
||||
if not hasattr(iterable, '__len__'):
|
||||
iterable = list(iterable)
|
||||
|
||||
if chunksize is None:
|
||||
chunksize, extra = divmod(len(iterable), len(self._pool) * 4)
|
||||
if extra:
|
||||
chunksize += 1
|
||||
if len(iterable) == 0:
|
||||
chunksize = 0
|
||||
|
||||
task_batches = Pool._get_tasks(func, iterable, chunksize)
|
||||
result = MapResult(self._cache, chunksize, len(iterable), callback)
|
||||
self._taskqueue.put((((result._job, i, mapstar, (x,), {})
|
||||
for i, x in enumerate(task_batches)), None))
|
||||
return result
|
||||
|
||||
@staticmethod
|
||||
def _handle_workers(pool):
|
||||
thread = threading.current_thread()
|
||||
|
||||
# Keep maintaining workers until the cache gets drained, unless the pool
|
||||
# is terminated.
|
||||
while thread._state == RUN or (pool._cache and thread._state != TERMINATE):
|
||||
pool._maintain_pool()
|
||||
time.sleep(0.1)
|
||||
# send sentinel to stop workers
|
||||
pool._taskqueue.put(None)
|
||||
debug('worker handler exiting')
|
||||
|
||||
@staticmethod
|
||||
def _handle_tasks(taskqueue, put, outqueue, pool):
|
||||
thread = threading.current_thread()
|
||||
|
||||
for taskseq, set_length in iter(taskqueue.get, None):
|
||||
i = -1
|
||||
for i, task in enumerate(taskseq):
|
||||
if thread._state:
|
||||
debug('task handler found thread._state != RUN')
|
||||
break
|
||||
try:
|
||||
put(task)
|
||||
except IOError:
|
||||
debug('could not put task on queue')
|
||||
break
|
||||
else:
|
||||
if set_length:
|
||||
debug('doing set_length()')
|
||||
set_length(i+1)
|
||||
continue
|
||||
break
|
||||
else:
|
||||
debug('task handler got sentinel')
|
||||
|
||||
|
||||
try:
|
||||
# tell result handler to finish when cache is empty
|
||||
debug('task handler sending sentinel to result handler')
|
||||
outqueue.put(None)
|
||||
|
||||
# tell workers there is no more work
|
||||
debug('task handler sending sentinel to workers')
|
||||
for p in pool:
|
||||
put(None)
|
||||
except IOError:
|
||||
debug('task handler got IOError when sending sentinels')
|
||||
|
||||
debug('task handler exiting')
|
||||
|
||||
@staticmethod
|
||||
def _handle_results(outqueue, get, cache):
|
||||
thread = threading.current_thread()
|
||||
|
||||
while 1:
|
||||
try:
|
||||
task = get()
|
||||
except (IOError, EOFError):
|
||||
debug('result handler got EOFError/IOError -- exiting')
|
||||
return
|
||||
|
||||
if thread._state:
|
||||
assert thread._state == TERMINATE
|
||||
debug('result handler found thread._state=TERMINATE')
|
||||
break
|
||||
|
||||
if task is None:
|
||||
debug('result handler got sentinel')
|
||||
break
|
||||
|
||||
job, i, obj = task
|
||||
try:
|
||||
cache[job]._set(i, obj)
|
||||
except KeyError:
|
||||
pass
|
||||
|
||||
while cache and thread._state != TERMINATE:
|
||||
try:
|
||||
task = get()
|
||||
except (IOError, EOFError):
|
||||
debug('result handler got EOFError/IOError -- exiting')
|
||||
return
|
||||
|
||||
if task is None:
|
||||
debug('result handler ignoring extra sentinel')
|
||||
continue
|
||||
job, i, obj = task
|
||||
try:
|
||||
cache[job]._set(i, obj)
|
||||
except KeyError:
|
||||
pass
|
||||
|
||||
if hasattr(outqueue, '_reader'):
|
||||
debug('ensuring that outqueue is not full')
|
||||
# If we don't make room available in outqueue then
|
||||
# attempts to add the sentinel (None) to outqueue may
|
||||
# block. There is guaranteed to be no more than 2 sentinels.
|
||||
try:
|
||||
for i in range(10):
|
||||
if not outqueue._reader.poll():
|
||||
break
|
||||
get()
|
||||
except (IOError, EOFError):
|
||||
pass
|
||||
|
||||
debug('result handler exiting: len(cache)=%s, thread._state=%s',
|
||||
len(cache), thread._state)
|
||||
|
||||
@staticmethod
|
||||
def _get_tasks(func, it, size):
|
||||
it = iter(it)
|
||||
while 1:
|
||||
x = tuple(itertools.islice(it, size))
|
||||
if not x:
|
||||
return
|
||||
yield (func, x)
|
||||
|
||||
def __reduce__(self):
|
||||
raise NotImplementedError(
|
||||
'pool objects cannot be passed between processes or pickled'
|
||||
)
|
||||
|
||||
def close(self):
|
||||
debug('closing pool')
|
||||
if self._state == RUN:
|
||||
self._state = CLOSE
|
||||
self._worker_handler._state = CLOSE
|
||||
|
||||
def terminate(self):
|
||||
debug('terminating pool')
|
||||
self._state = TERMINATE
|
||||
self._worker_handler._state = TERMINATE
|
||||
self._terminate()
|
||||
|
||||
def join(self):
|
||||
debug('joining pool')
|
||||
assert self._state in (CLOSE, TERMINATE)
|
||||
self._worker_handler.join()
|
||||
self._task_handler.join()
|
||||
self._result_handler.join()
|
||||
for p in self._pool:
|
||||
p.join()
|
||||
|
||||
@staticmethod
|
||||
def _help_stuff_finish(inqueue, task_handler, size):
|
||||
# task_handler may be blocked trying to put items on inqueue
|
||||
debug('removing tasks from inqueue until task handler finished')
|
||||
inqueue._rlock.acquire()
|
||||
while task_handler.is_alive() and inqueue._reader.poll():
|
||||
inqueue._reader.recv()
|
||||
time.sleep(0)
|
||||
|
||||
@classmethod
|
||||
def _terminate_pool(cls, taskqueue, inqueue, outqueue, pool,
|
||||
worker_handler, task_handler, result_handler, cache):
|
||||
# this is guaranteed to only be called once
|
||||
debug('finalizing pool')
|
||||
|
||||
worker_handler._state = TERMINATE
|
||||
task_handler._state = TERMINATE
|
||||
|
||||
debug('helping task handler/workers to finish')
|
||||
cls._help_stuff_finish(inqueue, task_handler, len(pool))
|
||||
|
||||
assert result_handler.is_alive() or len(cache) == 0
|
||||
|
||||
result_handler._state = TERMINATE
|
||||
outqueue.put(None) # sentinel
|
||||
|
||||
# We must wait for the worker handler to exit before terminating
|
||||
# workers because we don't want workers to be restarted behind our back.
|
||||
debug('joining worker handler')
|
||||
if threading.current_thread() is not worker_handler:
|
||||
worker_handler.join(1e100)
|
||||
|
||||
# Terminate workers which haven't already finished.
|
||||
if pool and hasattr(pool[0], 'terminate'):
|
||||
debug('terminating workers')
|
||||
for p in pool:
|
||||
if p.exitcode is None:
|
||||
p.terminate()
|
||||
|
||||
debug('joining task handler')
|
||||
if threading.current_thread() is not task_handler:
|
||||
task_handler.join(1e100)
|
||||
|
||||
debug('joining result handler')
|
||||
if threading.current_thread() is not result_handler:
|
||||
result_handler.join(1e100)
|
||||
|
||||
if pool and hasattr(pool[0], 'terminate'):
|
||||
debug('joining pool workers')
|
||||
for p in pool:
|
||||
if p.is_alive():
|
||||
# worker has not yet exited
|
||||
debug('cleaning up worker %d' % p.pid)
|
||||
p.join()
|
||||
|
||||
class ApplyResult(object):
|
||||
|
||||
def __init__(self, cache, callback):
|
||||
self._cond = threading.Condition(threading.Lock())
|
||||
self._job = multiprocessing.pool.job_counter.next()
|
||||
self._cache = cache
|
||||
self._ready = False
|
||||
self._callback = callback
|
||||
cache[self._job] = self
|
||||
|
||||
def ready(self):
|
||||
return self._ready
|
||||
|
||||
def successful(self):
|
||||
assert self._ready
|
||||
return self._success
|
||||
|
||||
def wait(self, timeout=None):
|
||||
self._cond.acquire()
|
||||
try:
|
||||
if not self._ready:
|
||||
self._cond.wait(timeout)
|
||||
finally:
|
||||
self._cond.release()
|
||||
|
||||
def get(self, timeout=None):
|
||||
self.wait(timeout)
|
||||
if not self._ready:
|
||||
raise TimeoutError
|
||||
if self._success:
|
||||
return self._value
|
||||
else:
|
||||
raise self._value
|
||||
|
||||
def _set(self, i, obj):
|
||||
self._success, self._value = obj
|
||||
if self._callback and self._success:
|
||||
self._callback(self._value)
|
||||
self._cond.acquire()
|
||||
try:
|
||||
self._ready = True
|
||||
self._cond.notify()
|
||||
finally:
|
||||
self._cond.release()
|
||||
del self._cache[self._job]
|
||||
|
||||
#
|
||||
# Class whose instances are returned by `Pool.map_async()`
|
||||
#
|
||||
|
||||
class MapResult(ApplyResult):
|
||||
|
||||
def __init__(self, cache, chunksize, length, callback):
|
||||
ApplyResult.__init__(self, cache, callback)
|
||||
self._success = True
|
||||
self._value = [None] * length
|
||||
self._chunksize = chunksize
|
||||
if chunksize <= 0:
|
||||
self._number_left = 0
|
||||
self._ready = True
|
||||
del cache[self._job]
|
||||
else:
|
||||
self._number_left = length//chunksize + bool(length % chunksize)
|
||||
|
||||
def _set(self, i, success_result):
|
||||
success, result = success_result
|
||||
if success:
|
||||
self._value[i*self._chunksize:(i+1)*self._chunksize] = result
|
||||
self._number_left -= 1
|
||||
if self._number_left == 0:
|
||||
if self._callback:
|
||||
self._callback(self._value)
|
||||
del self._cache[self._job]
|
||||
self._cond.acquire()
|
||||
try:
|
||||
self._ready = True
|
||||
self._cond.notify()
|
||||
finally:
|
||||
self._cond.release()
|
||||
|
||||
else:
|
||||
self._success = False
|
||||
self._value = result
|
||||
del self._cache[self._job]
|
||||
self._cond.acquire()
|
||||
try:
|
||||
self._ready = True
|
||||
self._cond.notify()
|
||||
finally:
|
||||
self._cond.release()
|
||||
|
||||
#
|
||||
# Class whose instances are returned by `Pool.imap()`
|
||||
#
|
||||
|
||||
class IMapIterator(object):
|
||||
|
||||
def __init__(self, cache):
|
||||
self._cond = threading.Condition(threading.Lock())
|
||||
self._job = multiprocessing.pool.job_counter.next()
|
||||
self._cache = cache
|
||||
self._items = collections.deque()
|
||||
self._index = 0
|
||||
self._length = None
|
||||
self._unsorted = {}
|
||||
cache[self._job] = self
|
||||
|
||||
def __iter__(self):
|
||||
return self
|
||||
|
||||
def next(self, timeout=None):
|
||||
self._cond.acquire()
|
||||
try:
|
||||
try:
|
||||
item = self._items.popleft()
|
||||
except IndexError:
|
||||
if self._index == self._length:
|
||||
raise StopIteration
|
||||
self._cond.wait(timeout)
|
||||
try:
|
||||
item = self._items.popleft()
|
||||
except IndexError:
|
||||
if self._index == self._length:
|
||||
raise StopIteration
|
||||
raise TimeoutError
|
||||
finally:
|
||||
self._cond.release()
|
||||
|
||||
success, value = item
|
||||
if success:
|
||||
return value
|
||||
raise value
|
||||
|
||||
__next__ = next # XXX
|
||||
|
||||
def _set(self, i, obj):
|
||||
self._cond.acquire()
|
||||
try:
|
||||
if self._index == i:
|
||||
self._items.append(obj)
|
||||
self._index += 1
|
||||
while self._index in self._unsorted:
|
||||
obj = self._unsorted.pop(self._index)
|
||||
self._items.append(obj)
|
||||
self._index += 1
|
||||
self._cond.notify()
|
||||
else:
|
||||
self._unsorted[i] = obj
|
||||
|
||||
if self._index == self._length:
|
||||
del self._cache[self._job]
|
||||
finally:
|
||||
self._cond.release()
|
||||
|
||||
def _set_length(self, length):
|
||||
self._cond.acquire()
|
||||
try:
|
||||
self._length = length
|
||||
if self._index == self._length:
|
||||
self._cond.notify()
|
||||
del self._cache[self._job]
|
||||
finally:
|
||||
self._cond.release()
|
||||
|
||||
#
|
||||
# Class whose instances are returned by `Pool.imap_unordered()`
|
||||
#
|
||||
|
||||
class IMapUnorderedIterator(IMapIterator):
|
||||
|
||||
def _set(self, i, obj):
|
||||
self._cond.acquire()
|
||||
try:
|
||||
self._items.append(obj)
|
||||
self._index += 1
|
||||
self._cond.notify()
|
||||
if self._index == self._length:
|
||||
del self._cache[self._job]
|
||||
finally:
|
||||
self._cond.release()
|
||||
|
||||
|
||||
|
||||
@@ -185,13 +185,6 @@ class BBCooker:
|
||||
|
||||
filtered_keys = bb.utils.approved_variables()
|
||||
bb.data.inheritFromOS(self.configuration.data, self.savedenv, filtered_keys)
|
||||
self.configuration.data.setVar("BB_ORIGENV", self.savedenv)
|
||||
|
||||
def enableDataTracking(self):
|
||||
self.configuration.data.enableTracking()
|
||||
|
||||
def disableDataTracking(self):
|
||||
self.configuration.data.disableTracking()
|
||||
|
||||
def loadConfigurationData(self):
|
||||
self.initConfigurationData()
|
||||
@@ -208,89 +201,6 @@ class BBCooker:
|
||||
if not self.configuration.cmd:
|
||||
self.configuration.cmd = self.configuration.data.getVar("BB_DEFAULT_TASK", True) or "build"
|
||||
|
||||
def saveConfigurationVar(self, var, val, default_file):
|
||||
|
||||
replaced = False
|
||||
#do not save if nothing changed
|
||||
if str(val) == self.configuration.data.getVar(var):
|
||||
return
|
||||
|
||||
conf_files = self.configuration.data.varhistory.get_variable_files(var)
|
||||
|
||||
#format the value when it is a list
|
||||
if isinstance(val, list):
|
||||
listval = ""
|
||||
for value in val:
|
||||
listval += "%s " % value
|
||||
val = listval
|
||||
|
||||
topdir = self.configuration.data.getVar("TOPDIR")
|
||||
|
||||
#comment or replace operations made on var
|
||||
for conf_file in conf_files:
|
||||
if topdir in conf_file:
|
||||
with open(conf_file, 'r') as f:
|
||||
contents = f.readlines()
|
||||
f.close()
|
||||
|
||||
lines = self.configuration.data.varhistory.get_variable_lines(var, conf_file)
|
||||
for line in lines:
|
||||
total = ""
|
||||
i = 0
|
||||
for c in contents:
|
||||
total += c
|
||||
i = i + 1
|
||||
if i==int(line):
|
||||
end_index = len(total)
|
||||
index = total.rfind(var, 0, end_index)
|
||||
|
||||
begin_line = total.count("\n",0,index)
|
||||
end_line = int(line)
|
||||
|
||||
#check if the variable was saved before in the same way
|
||||
#if true it replace the place where the variable was declared
|
||||
#else it comments it
|
||||
if contents[begin_line-1]== "#added by bitbake\n":
|
||||
contents[begin_line] = "%s = \"%s\"\n" % (var, val)
|
||||
replaced = True
|
||||
else:
|
||||
for ii in range(begin_line, end_line):
|
||||
contents[ii] = "#" + contents[ii]
|
||||
|
||||
total = ""
|
||||
for c in contents:
|
||||
total += c
|
||||
with open(conf_file, 'w') as f:
|
||||
f.write(total)
|
||||
f.close()
|
||||
|
||||
if replaced == False:
|
||||
#remove var from history
|
||||
self.configuration.data.varhistory.del_var_history(var)
|
||||
|
||||
#add var to the end of default_file
|
||||
default_file = self._findConfigFile(default_file)
|
||||
|
||||
with open(default_file, 'r') as f:
|
||||
contents = f.readlines()
|
||||
f.close()
|
||||
|
||||
total = ""
|
||||
for c in contents:
|
||||
total += c
|
||||
|
||||
#add the variable on a single line, to be easy to replace the second time
|
||||
total += "#added by bitbake"
|
||||
total += "\n%s = \"%s\"\n" % (var, val)
|
||||
|
||||
with open(default_file, 'w') as f:
|
||||
f.write(total)
|
||||
f.close()
|
||||
|
||||
#add to history
|
||||
loginfo = {"op":set, "file":default_file, "line":total.count("\n")}
|
||||
self.configuration.data.setVar(var, val, **loginfo)
|
||||
|
||||
def parseConfiguration(self):
|
||||
|
||||
# Set log file verbosity
|
||||
@@ -471,8 +381,6 @@ class BBCooker:
|
||||
taskdata.add_unresolved(localdata, self.status)
|
||||
bb.event.fire(bb.event.TreeDataPreparationCompleted(len(pkgs_to_build)), self.configuration.data)
|
||||
return runlist, taskdata
|
||||
|
||||
######## WARNING : this function requires cache_extra to be enabled ########
|
||||
|
||||
def generateTaskDepTreeData(self, pkgs_to_build, task):
|
||||
"""
|
||||
@@ -546,7 +454,6 @@ class BBCooker:
|
||||
|
||||
return depend_tree
|
||||
|
||||
######## WARNING : this function requires cache_extra to be enabled ########
|
||||
def generatePkgDepTreeData(self, pkgs_to_build, task):
|
||||
"""
|
||||
Create a dependency tree of pkgs_to_build, returning the data.
|
||||
@@ -574,12 +481,8 @@ class BBCooker:
|
||||
lic = self.status.license[fn]
|
||||
section = self.status.section[fn]
|
||||
description = self.status.description[fn]
|
||||
homepage = self.status.homepage[fn]
|
||||
bugtracker = self.status.bugtracker[fn]
|
||||
files_info = self.status.files_info[fn]
|
||||
rdepends = self.status.rundeps[fn]
|
||||
rrecs = self.status.runrecs[fn]
|
||||
prevision = self.status.prevision[fn]
|
||||
inherits = self.status.inherits.get(fn, None)
|
||||
if pn not in depend_tree["pn"]:
|
||||
depend_tree["pn"][pn] = {}
|
||||
@@ -590,10 +493,6 @@ class BBCooker:
|
||||
depend_tree["pn"][pn]["section"] = section
|
||||
depend_tree["pn"][pn]["description"] = description
|
||||
depend_tree["pn"][pn]["inherits"] = inherits
|
||||
depend_tree["pn"][pn]["homepage"] = homepage
|
||||
depend_tree["pn"][pn]["bugtracker"] = bugtracker
|
||||
depend_tree["pn"][pn]["files_info"] = files_info
|
||||
depend_tree["pn"][pn]["revision"] = prevision
|
||||
|
||||
if fnid not in seen_fnids:
|
||||
seen_fnids.append(fnid)
|
||||
@@ -988,16 +887,10 @@ class BBCooker:
|
||||
bb.fetch.fetcher_init(data)
|
||||
bb.codeparser.parser_cache_init(data)
|
||||
bb.event.fire(bb.event.ConfigParsed(), data)
|
||||
|
||||
if data.getVar("BB_INVALIDCONF") is True:
|
||||
data.setVar("BB_INVALIDCONF", False)
|
||||
self.parseConfigurationFiles(self.configuration.prefile,
|
||||
self.configuration.postfile)
|
||||
else:
|
||||
bb.parse.init_parser(data)
|
||||
data.setVar('BBINCLUDED',bb.parse.get_file_depends(data))
|
||||
self.configuration.data = data
|
||||
self.configuration.data_hash = data.get_hash()
|
||||
bb.parse.init_parser(data)
|
||||
data.setVar('BBINCLUDED',bb.parse.get_file_depends(data))
|
||||
self.configuration.data = data
|
||||
self.configuration.data_hash = data.get_hash()
|
||||
|
||||
def handleCollections( self, collections ):
|
||||
"""Handle collections"""
|
||||
@@ -1486,10 +1379,7 @@ class BBCooker:
|
||||
# Empty the environment. The environment will be populated as
|
||||
# necessary from the data store.
|
||||
#bb.utils.empty_environment()
|
||||
try:
|
||||
prserv.serv.auto_start(self.configuration.data)
|
||||
except prserv.serv.PRServiceConfigError:
|
||||
bb.event.fire(CookerExit(), self.configuration.event_data)
|
||||
prserv.serv.auto_start(self.configuration.data)
|
||||
return
|
||||
|
||||
def post_serve(self):
|
||||
@@ -1526,7 +1416,25 @@ def server_main(cooker, func, *args):
|
||||
ret = profile.Profile.runcall(prof, func, *args)
|
||||
|
||||
prof.dump_stats("profile.log")
|
||||
bb.utils.process_profilelog("profile.log")
|
||||
|
||||
# Redirect stdout to capture profile information
|
||||
pout = open('profile.log.processed', 'w')
|
||||
so = sys.stdout.fileno()
|
||||
orig_so = os.dup(sys.stdout.fileno())
|
||||
os.dup2(pout.fileno(), so)
|
||||
|
||||
import pstats
|
||||
p = pstats.Stats('profile.log')
|
||||
p.sort_stats('time')
|
||||
p.print_stats()
|
||||
p.print_callers()
|
||||
p.sort_stats('cumulative')
|
||||
p.print_stats()
|
||||
|
||||
os.dup2(orig_so, so)
|
||||
pout.flush()
|
||||
pout.close()
|
||||
|
||||
print("Raw profiling information saved to profile.log and processed statistics to profile.log.processed")
|
||||
|
||||
else:
|
||||
@@ -1606,7 +1514,6 @@ class Parser(multiprocessing.Process):
|
||||
self.quit = quit
|
||||
self.init = init
|
||||
multiprocessing.Process.__init__(self)
|
||||
self.context = bb.utils._context.copy()
|
||||
|
||||
def run(self):
|
||||
if self.init:
|
||||
@@ -1641,7 +1548,6 @@ class Parser(multiprocessing.Process):
|
||||
|
||||
def parse(self, filename, appends, caches_array):
|
||||
try:
|
||||
bb.utils._context = self.context.copy()
|
||||
return True, bb.cache.Cache.parse(filename, appends, self.cfg, caches_array)
|
||||
except Exception as exc:
|
||||
tb = sys.exc_info()[2]
|
||||
|
||||
@@ -158,11 +158,6 @@ def expandKeys(alterdata, readdata = None):
|
||||
|
||||
for key in todolist:
|
||||
ekey = todolist[key]
|
||||
newval = alterdata.getVar(ekey, 0)
|
||||
if newval:
|
||||
val = alterdata.getVar(key, 0)
|
||||
if val is not None and newval is not None:
|
||||
bb.warn("Variable key %s (%s) replaces original key %s (%s)." % (key, val, ekey, newval))
|
||||
alterdata.renameVar(key, ekey)
|
||||
|
||||
def inheritFromOS(d, savedenv, permitted):
|
||||
|
||||
@@ -259,27 +259,6 @@ class VariableHistory(object):
|
||||
o.write("#\n# $%s\n# [no history recorded]\n#\n" % var)
|
||||
o.write('# "%s"\n' % (commentVal))
|
||||
|
||||
def get_variable_files(self, var):
|
||||
"""Get the files where operations are made on a variable"""
|
||||
var_history = self.variable(var)
|
||||
files = []
|
||||
for event in var_history:
|
||||
files.append(event['file'])
|
||||
return files
|
||||
|
||||
def get_variable_lines(self, var, f):
|
||||
"""Get the line where a operation is made on a variable in file f"""
|
||||
var_history = self.variable(var)
|
||||
lines = []
|
||||
for event in var_history:
|
||||
if f== event['file']:
|
||||
line = event['line']
|
||||
lines.append(line)
|
||||
return lines
|
||||
|
||||
def del_var_history(self, var):
|
||||
if var in self.variables:
|
||||
self.variables[var] = []
|
||||
|
||||
class DataSmart(MutableMapping):
|
||||
def __init__(self, special = COWDictBase.copy(), seen = COWDictBase.copy() ):
|
||||
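The VariableHistory methods removed in the hunk above answer "which files and lines touched this variable?" from the recorded per-variable events. A toy version of the same bookkeeping, with a hypothetical history dictionary in place of the real varhistory object:

```python
# Each recorded event carries at least the originating file and line.
history = {
    "MACHINE": [
        {"op": "set", "file": "conf/local.conf", "line": 12},
        {"op": "set", "file": "conf/auto.conf", "line": 3},
    ],
}

def get_variable_files(var):
    return [event["file"] for event in history.get(var, [])]

def get_variable_lines(var, f):
    return [event["line"] for event in history.get(var, []) if event["file"] == f]

def del_var_history(var):
    if var in history:
        history[var] = []

print(get_variable_files("MACHINE"))                    # ['conf/local.conf', 'conf/auto.conf']
print(get_variable_lines("MACHINE", "conf/auto.conf"))  # [3]
```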
@@ -450,7 +429,6 @@ class DataSmart(MutableMapping):
|
||||
|
||||
|
||||
def setVar(self, var, value, **loginfo):
|
||||
#print("var=" + str(var) + " val=" + str(value))
|
||||
if 'op' not in loginfo:
|
||||
loginfo['op'] = "set"
|
||||
self.expand_cache = {}
|
||||
@@ -738,21 +716,5 @@ class DataSmart(MutableMapping):
|
||||
value = d.getVar(key, False) or ""
|
||||
data.update({key:value})
|
||||
|
||||
varflags = d.getVarFlags(key)
|
||||
if not varflags:
|
||||
continue
|
||||
for f in varflags:
|
||||
data.update({'%s[%s]' % (key, f):varflags[f]})
|
||||
|
||||
for key in ["__BBTASKS", "__BBANONFUNCS", "__BBHANDLERS"]:
|
||||
bb_list = d.getVar(key, False) or []
|
||||
bb_list.sort()
|
||||
data.update({key:str(bb_list)})
|
||||
|
||||
if key == "__BBANONFUNCS":
|
||||
for i in bb_list:
|
||||
value = d.getVar(i, True) or ""
|
||||
data.update({i:value})
|
||||
|
||||
data_str = str([(k, data[k]) for k in sorted(data.keys())])
|
||||
return hashlib.md5(data_str).hexdigest()
|
||||
|
||||
@@ -30,10 +30,6 @@ from __future__ import print_function
|
||||
import os, re
|
||||
import logging
|
||||
import urllib
|
||||
import urlparse
|
||||
if 'git' not in urlparse.uses_netloc:
|
||||
urlparse.uses_netloc.append('git')
|
||||
from urlparse import urlparse
|
||||
import operator
|
||||
import bb.persist_data, bb.utils
|
||||
import bb.checksum
|
||||
@@ -74,9 +70,6 @@ class FetchError(BBFetchException):
|
||||
|
||||
class ChecksumError(FetchError):
|
||||
"""Exception when mismatched checksum encountered"""
|
||||
def __init__(self, message, url = None, checksum = None):
|
||||
self.checksum = checksum
|
||||
FetchError.__init__(self, message, url)
|
||||
|
||||
class NoChecksumError(FetchError):
|
||||
"""Exception when no checksum is specified, but BB_STRICT_CHECKSUM is set"""
|
||||
@@ -127,205 +120,12 @@ class NonLocalMethod(Exception):
|
||||
def __init__(self):
|
||||
Exception.__init__(self)
|
||||
|
||||
|
||||
class URI(object):
|
||||
"""
|
||||
A class representing a generic URI, with methods for
|
||||
accessing the URI components, and stringifies to the
|
||||
URI.
|
||||
|
||||
It is constructed by calling it with a URI, or setting
|
||||
the attributes manually:
|
||||
|
||||
uri = URI("http://example.com/")
|
||||
|
||||
uri = URI()
|
||||
uri.scheme = 'http'
|
||||
uri.hostname = 'example.com'
|
||||
uri.path = '/'
|
||||
|
||||
It has the following attributes:
|
||||
|
||||
* scheme (read/write)
|
||||
* userinfo (authentication information) (read/write)
|
||||
* username (read/write)
|
||||
* password (read/write)
|
||||
|
||||
Note, password is deprecated as of RFC 3986.
|
||||
|
||||
* hostname (read/write)
|
||||
* port (read/write)
|
||||
* hostport (read only)
|
||||
"hostname:port", if both are set, otherwise just "hostname"
|
||||
* path (read/write)
|
||||
* path_quoted (read/write)
|
||||
A URI quoted version of path
|
||||
* params (dict) (read/write)
|
||||
* relative (bool) (read only)
|
||||
True if this is a "relative URI", (e.g. file:foo.diff)
|
||||
|
||||
It stringifies to the URI itself.
|
||||
|
||||
Some notes about relative URIs: while it's specified that
|
||||
a URI beginning with <scheme>:// should either be directly
|
||||
followed by a hostname or a /, the old URI handling of the
|
||||
fetch2 library did not conform to this. Therefore, this URI
class has some kludges to make sure that URIs are parsed in
a way conforming to bitbake's current usage. This URI class
|
||||
supports the following:
|
||||
|
||||
file:relative/path.diff (IETF compliant)
|
||||
git:relative/path.git (IETF compliant)
|
||||
git:///absolute/path.git (IETF compliant)
|
||||
file:///absolute/path.diff (IETF compliant)
|
||||
|
||||
file://relative/path.diff (not IETF compliant)
|
||||
|
||||
But it does not support the following:
|
||||
|
||||
file://hostname/absolute/path.diff (would be IETF compliant)
|
||||
|
||||
Note that the last case only applies to a list of
|
||||
"whitelisted" schemes (currently only file://), that requires
|
||||
its URIs to not have a network location.
|
||||
"""
|
||||
|
||||
_relative_schemes = ['file', 'git']
|
||||
_netloc_forbidden = ['file']
|
||||
|
||||
def __init__(self, uri=None):
|
||||
self.scheme = ''
|
||||
self.userinfo = ''
|
||||
self.hostname = ''
|
||||
self.port = None
|
||||
self._path = ''
|
||||
self.params = {}
|
||||
self.relative = False
|
||||
|
||||
if not uri:
|
||||
return
|
||||
|
||||
urlp = urlparse(uri)
|
||||
self.scheme = urlp.scheme
|
||||
|
||||
# Convert URI to be relative
|
||||
if urlp.scheme in self._netloc_forbidden:
|
||||
uri = re.sub("(?<=:)//(?!/)", "", uri, 1)
|
||||
urlp = urlparse(uri)
|
||||
|
||||
# Identify if the URI is relative or not
|
||||
if urlp.scheme in self._relative_schemes and \
|
||||
re.compile("^\w+:(?!//)").match(uri):
|
||||
self.relative = True
|
||||
|
||||
if not self.relative:
|
||||
self.hostname = urlp.hostname or ''
|
||||
self.port = urlp.port
|
||||
|
||||
self.userinfo += urlp.username or ''
|
||||
|
||||
if urlp.password:
|
||||
self.userinfo += ':%s' % urlp.password
|
||||
|
||||
# Do support params even for URI schemes that Python's
|
||||
# urlparse doesn't support params for.
|
||||
path = ''
|
||||
param_str = ''
|
||||
if not urlp.params:
|
||||
path, param_str = (list(urlp.path.split(";", 1)) + [None])[:2]
|
||||
else:
|
||||
path = urlp.path
|
||||
param_str = urlp.params
|
||||
|
||||
self.path = urllib.unquote(path)
|
||||
|
||||
if param_str:
|
||||
self.params = self._param_dict(param_str)
|
||||
|
||||
def __str__(self):
|
||||
userinfo = self.userinfo
|
||||
if userinfo:
|
||||
userinfo += '@'
|
||||
|
||||
return "%s:%s%s%s%s%s" % (
|
||||
self.scheme,
|
||||
'' if self.relative else '//',
|
||||
userinfo,
|
||||
self.hostport,
|
||||
self.path_quoted,
|
||||
self._param_str)
|
||||
|
||||
@property
|
||||
def _param_str(self):
|
||||
ret = ''
|
||||
for key, val in self.params.items():
|
||||
ret += ";%s=%s" % (key, val)
|
||||
return ret
|
||||
|
||||
def _param_dict(self, param_str):
|
||||
parm = {}
|
||||
|
||||
for keyval in param_str.split(";"):
|
||||
key, val = keyval.split("=", 1)
|
||||
parm[key] = val
|
||||
|
||||
return parm
|
||||
|
||||
@property
|
||||
def hostport(self):
|
||||
if not self.port:
|
||||
return self.hostname
|
||||
return "%s:%d" % (self.hostname, self.port)
|
||||
|
||||
@property
|
||||
def path_quoted(self):
|
||||
return urllib.quote(self.path)
|
||||
|
||||
@path_quoted.setter
|
||||
def path_quoted(self, path):
|
||||
self.path = urllib.unquote(path)
|
||||
|
||||
@property
|
||||
def path(self):
|
||||
return self._path
|
||||
|
||||
@path.setter
|
||||
def path(self, path):
|
||||
self._path = path
|
||||
|
||||
if re.compile("^/").match(path):
|
||||
self.relative = False
|
||||
else:
|
||||
self.relative = True
|
||||
|
||||
@property
|
||||
def username(self):
|
||||
if self.userinfo:
|
||||
return (self.userinfo.split(":", 1))[0]
|
||||
return ''
|
||||
|
||||
@username.setter
|
||||
def username(self, username):
|
||||
self.userinfo = username
|
||||
if self.password:
|
||||
self.userinfo += ":%s" % self.password
|
||||
|
||||
@property
|
||||
def password(self):
|
||||
if self.userinfo and ":" in self.userinfo:
|
||||
return (self.userinfo.split(":", 1))[1]
|
||||
return ''
|
||||
|
||||
@password.setter
|
||||
def password(self, password):
|
||||
self.userinfo = "%s:%s" % (self.username, password)
|
||||
|
||||
def decodeurl(url):
|
||||
"""Decodes an URL into the tokens (scheme, network location, path,
|
||||
user, password, parameters).
|
||||
"""
|
||||
|
||||
m = re.compile('(?P<type>[^:]*)://((?P<user>[^/]+)@)?(?P<location>[^;]+)(;(?P<parm>.*))?').match(url)
|
||||
m = re.compile('(?P<type>[^:]*)://((?P<user>.+)@)?(?P<location>[^;]+)(;(?P<parm>.*))?').match(url)
|
||||
if not m:
|
||||
raise MalformedUrl(url)
|
||||
|
||||
@@ -414,8 +214,6 @@ def uri_replace(ud, uri_find, uri_replace, replacements, d):
|
||||
return None
|
||||
# Overwrite any specified replacement parameters
|
||||
for k in uri_replace_decoded[loc]:
|
||||
for l in replacements:
|
||||
uri_replace_decoded[loc][k] = uri_replace_decoded[loc][k].replace(l, replacements[l])
|
||||
result_decoded[loc][k] = uri_replace_decoded[loc][k]
|
||||
elif (re.match(regexp, uri_decoded[loc])):
|
||||
if not uri_replace_decoded[loc]:
|
||||
@@ -564,7 +362,7 @@ def verify_checksum(u, ud, d):
|
||||
msg = msg + '\nIf this change is expected (e.g. you have upgraded to a new version without updating the checksums) then you can use these lines within the recipe:\nSRC_URI[%s] = "%s"\nSRC_URI[%s] = "%s"\nOtherwise you should retry the download and/or check with upstream to determine if the file has become corrupted or otherwise unexpectedly modified.\n' % (ud.md5_name, md5data, ud.sha256_name, sha256data)
|
||||
|
||||
if len(msg):
|
||||
raise ChecksumError('Checksum mismatch!%s' % msg, u, md5data)
|
||||
raise ChecksumError('Checksum mismatch!%s' % msg, u)
|
||||
|
||||
|
||||
def update_stamp(u, ud, d):
|
||||
@@ -660,16 +458,11 @@ def runfetchcmd(cmd, d, quiet = False, cleanup = []):
|
||||
# rather than host provided
|
||||
# Also include some other variables.
|
||||
# FIXME: Should really include all export variables?
|
||||
exportvars = ['HOME', 'PATH',
|
||||
'HTTP_PROXY', 'http_proxy',
|
||||
'HTTPS_PROXY', 'https_proxy',
|
||||
'FTP_PROXY', 'ftp_proxy',
|
||||
'FTPS_PROXY', 'ftps_proxy',
|
||||
'NO_PROXY', 'no_proxy',
|
||||
'ALL_PROXY', 'all_proxy',
|
||||
'GIT_PROXY_COMMAND',
|
||||
'SSH_AUTH_SOCK', 'SSH_AGENT_PID',
|
||||
'SOCKS5_USER', 'SOCKS5_PASSWD']
|
||||
exportvars = ['PATH', 'GIT_PROXY_COMMAND', 'GIT_PROXY_HOST',
|
||||
'GIT_PROXY_PORT', 'GIT_CONFIG', 'http_proxy', 'ftp_proxy',
|
||||
'https_proxy', 'no_proxy', 'ALL_PROXY', 'all_proxy',
|
||||
'SSH_AUTH_SOCK', 'SSH_AGENT_PID', 'HOME',
|
||||
'GIT_PROXY_IGNORE', 'SOCKS5_USER', 'SOCKS5_PASSWD']
|
||||
|
||||
for var in exportvars:
|
||||
val = d.getVar(var, True)
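The hunk above narrows the set of environment variables that are re-exported to the fetch command; a small, purely illustrative sketch of the whitelisting idea (it builds a filtered environment from os.environ rather than from the datastore, and is not the runfetchcmd implementation):

import os

exportvars = ['PATH', 'HOME', 'http_proxy', 'https_proxy', 'ftp_proxy',
              'no_proxy', 'ALL_PROXY', 'all_proxy', 'GIT_PROXY_COMMAND',
              'SSH_AUTH_SOCK', 'SSH_AGENT_PID', 'SOCKS5_USER', 'SOCKS5_PASSWD']

# Keep only the whitelisted variables that are actually set
fetchenv = dict((var, os.environ[var]) for var in exportvars if var in os.environ)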
|
||||
@@ -756,19 +549,6 @@ def build_mirroruris(origud, mirrors, ld):
|
||||
|
||||
return uris, uds
|
||||
|
||||
def rename_bad_checksum(ud, suffix):
|
||||
"""
|
||||
Renames files to have the suffix given as a parameter
|
||||
"""
|
||||
|
||||
if ud.localpath is None:
|
||||
return
|
||||
|
||||
new_localpath = "%s_bad-checksum_%s" % (ud.localpath, suffix)
|
||||
bb.warn("Renaming %s to %s" % (ud.localpath, new_localpath))
|
||||
bb.utils.movefile(ud.localpath, new_localpath)
|
||||
|
||||
|
||||
def try_mirror_url(newuri, origud, ud, ld, check = False):
|
||||
# Return of None or a value means we're finished
|
||||
# False means try another url
|
||||
@@ -797,7 +577,6 @@ def try_mirror_url(newuri, origud, ud, ld, check = False):
|
||||
dldir = ld.getVar("DL_DIR", True)
|
||||
if origud.mirrortarball and os.path.basename(ud.localpath) == os.path.basename(origud.mirrortarball) \
|
||||
and os.path.basename(ud.localpath) != os.path.basename(origud.localpath):
|
||||
bb.utils.mkdirhier(os.path.dirname(ud.donestamp))
|
||||
open(ud.donestamp, 'w').close()
|
||||
dest = os.path.join(dldir, os.path.basename(ud.localpath))
|
||||
if not os.path.exists(dest):
|
||||
@@ -820,7 +599,6 @@ def try_mirror_url(newuri, origud, ud, ld, check = False):
|
||||
if isinstance(e, ChecksumError):
|
||||
logger.warn("Mirror checksum failure for url %s (original url: %s)\nCleaning and trying again." % (newuri, origud.url))
|
||||
logger.warn(str(e))
|
||||
rename_bad_checksum(ud, e.checksum)
|
||||
elif isinstance(e, NoChecksumError):
|
||||
raise
|
||||
else:
|
||||
@@ -872,14 +650,11 @@ def srcrev_internal_helper(ud, d, name):
|
||||
if not rev:
|
||||
rev = d.getVar("SRCREV_%s" % name, True)
|
||||
if not rev:
|
||||
rev = d.getVar("SRCREV_pn-%s" % pn, True)
|
||||
rev = d.getVar("SRCREV_pn-%s" % pn, True)
|
||||
if not rev:
|
||||
rev = d.getVar("SRCREV", True)
|
||||
if rev == "INVALID":
|
||||
var = "SRCREV_pn-%s" % pn
|
||||
if name != '':
|
||||
var = "SRCREV_%s_pn-%s" % (name, pn)
|
||||
raise FetchError("Please set %s to a valid value" % var, ud.url)
|
||||
raise FetchError("Please set SRCREV to a valid value", ud.url)
|
||||
if rev == "AUTOINC":
|
||||
rev = ud.method.latest_revision(ud.url, ud, d, name)
|
||||
|
||||
@@ -971,7 +746,6 @@ class FetchData(object):
|
||||
self.lockfile = None
|
||||
self.mirrortarball = None
|
||||
self.basename = None
|
||||
self.basepath = None
|
||||
(self.type, self.host, self.path, self.user, self.pswd, self.parm) = decodeurl(data.expand(url, d))
|
||||
self.date = self.getSRCDate(d)
|
||||
self.url = url
|
||||
@@ -989,13 +763,13 @@ class FetchData(object):
|
||||
self.sha256_name = "sha256sum"
|
||||
if self.md5_name in self.parm:
|
||||
self.md5_expected = self.parm[self.md5_name]
|
||||
elif self.type not in ["http", "https", "ftp", "ftps", "sftp"]:
|
||||
elif self.type not in ["http", "https", "ftp", "ftps"]:
|
||||
self.md5_expected = None
|
||||
else:
|
||||
self.md5_expected = d.getVarFlag("SRC_URI", self.md5_name)
|
||||
if self.sha256_name in self.parm:
|
||||
self.sha256_expected = self.parm[self.sha256_name]
|
||||
elif self.type not in ["http", "https", "ftp", "ftps", "sftp"]:
|
||||
elif self.type not in ["http", "https", "ftp", "ftps"]:
|
||||
self.sha256_expected = None
|
||||
else:
|
||||
self.sha256_expected = d.getVarFlag("SRC_URI", self.sha256_name)
|
||||
@@ -1028,14 +802,8 @@ class FetchData(object):
|
||||
elif self.localfile:
|
||||
self.localpath = self.method.localpath(self.url, self, d)
|
||||
|
||||
dldir = d.getVar("DL_DIR", True)
|
||||
# Note: .done and .lock files should always be in DL_DIR whereas localpath may not be.
|
||||
if self.localpath and self.localpath.startswith(dldir):
|
||||
basepath = self.localpath
|
||||
elif self.localpath:
|
||||
basepath = dldir + os.sep + os.path.basename(self.localpath)
|
||||
else:
|
||||
basepath = dldir + os.sep + (self.basepath or self.basename)
|
||||
# Note: These files should always be in DL_DIR whereas localpath may not be.
|
||||
basepath = d.expand("${DL_DIR}/%s" % os.path.basename(self.localpath or self.basename))
|
||||
self.donestamp = basepath + '.done'
|
||||
self.lockfile = basepath + '.lock'
|
||||
|
||||
@@ -1405,7 +1173,6 @@ class Fetch(object):
|
||||
if isinstance(e, ChecksumError):
|
||||
logger.warn("Checksum failure encountered with download of %s - will attempt other sources if available" % u)
|
||||
logger.debug(1, str(e))
|
||||
rename_bad_checksum(ud, e.checksum)
|
||||
elif isinstance(e, NoChecksumError):
|
||||
raise
|
||||
else:
|
||||
@@ -1515,13 +1282,11 @@ class Fetch(object):
|
||||
|
||||
from . import cvs
|
||||
from . import git
|
||||
from . import gitsm
|
||||
from . import local
|
||||
from . import svn
|
||||
from . import wget
|
||||
from . import svk
|
||||
from . import ssh
|
||||
from . import sftp
|
||||
from . import perforce
|
||||
from . import bzr
|
||||
from . import hg
|
||||
@@ -1532,11 +1297,9 @@ methods.append(local.Local())
|
||||
methods.append(wget.Wget())
|
||||
methods.append(svn.Svn())
|
||||
methods.append(git.Git())
|
||||
methods.append(gitsm.GitSM())
|
||||
methods.append(cvs.Cvs())
|
||||
methods.append(svk.Svk())
|
||||
methods.append(ssh.SSH())
|
||||
methods.append(sftp.SFTP())
|
||||
methods.append(perforce.Perforce())
|
||||
methods.append(bzr.Bzr())
|
||||
methods.append(hg.Hg())
|
||||
|
||||
@@ -217,10 +217,6 @@ class Git(FetchMethod):
|
||||
def build_mirror_data(self, url, ud, d):
|
||||
# Generate a mirror tarball if needed
|
||||
if ud.write_tarballs and (ud.repochanged or not os.path.exists(ud.fullmirror)):
|
||||
# it's possible that this symlink points to a read-only filesystem with PREMIRROR
|
||||
if os.path.islink(ud.fullmirror):
|
||||
os.unlink(ud.fullmirror)
|
||||
|
||||
os.chdir(ud.clonedir)
|
||||
logger.info("Creating tarball of git repository")
|
||||
runfetchcmd("tar -czf %s %s" % (ud.fullmirror, os.path.join(".") ), d)
|
||||
@@ -238,7 +234,7 @@ class Git(FetchMethod):
|
||||
def_destsuffix = "git/"
|
||||
|
||||
destsuffix = ud.parm.get("destsuffix", def_destsuffix)
|
||||
destdir = ud.destdir = os.path.join(destdir, destsuffix)
|
||||
destdir = os.path.join(destdir, destsuffix)
|
||||
if os.path.exists(destdir):
|
||||
bb.utils.prunedir(destdir)
|
||||
|
||||
|
||||
@@ -1,78 +0,0 @@
|
||||
# ex:ts=4:sw=4:sts=4:et
|
||||
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
|
||||
"""
|
||||
BitBake 'Fetch' git submodules implementation
|
||||
"""
|
||||
|
||||
# Copyright (C) 2013 Richard Purdie
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License version 2 as
|
||||
# published by the Free Software Foundation.
|
||||
#
|
||||
# This program is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License along
|
||||
# with this program; if not, write to the Free Software Foundation, Inc.,
|
||||
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
|
||||
|
||||
import os
|
||||
import bb
|
||||
from bb import data
|
||||
from bb.fetch2.git import Git
|
||||
from bb.fetch2 import runfetchcmd
|
||||
from bb.fetch2 import logger
|
||||
|
||||
class GitSM(Git):
|
||||
def supports(self, url, ud, d):
|
||||
"""
|
||||
Check to see if a given url can be fetched with git.
|
||||
"""
|
||||
return ud.type in ['gitsm']
|
||||
|
||||
def uses_submodules(self, ud, d):
|
||||
for name in ud.names:
|
||||
try:
|
||||
runfetchcmd("%s show %s:.gitmodules" % (ud.basecmd, ud.revisions[name]), d, quiet=True)
|
||||
return True
|
||||
except bb.fetch.FetchError:
|
||||
pass
|
||||
return False
|
||||
|
||||
def update_submodules(self, u, ud, d):
|
||||
# We have to convert bare -> full repo, do the submodule bit, then convert back
|
||||
tmpclonedir = ud.clonedir + ".tmp"
|
||||
gitdir = tmpclonedir + os.sep + ".git"
|
||||
bb.utils.remove(tmpclonedir, True)
|
||||
os.mkdir(tmpclonedir)
|
||||
os.rename(ud.clonedir, gitdir)
|
||||
runfetchcmd("sed " + gitdir + "/config -i -e 's/bare.*=.*true/bare = false/'", d)
|
||||
os.chdir(tmpclonedir)
|
||||
runfetchcmd("git reset --hard", d)
|
||||
runfetchcmd("git submodule init", d)
|
||||
runfetchcmd("git submodule update", d)
|
||||
runfetchcmd("sed " + gitdir + "/config -i -e 's/bare.*=.*false/bare = true/'", d)
|
||||
os.rename(gitdir, ud.clonedir,)
|
||||
bb.utils.remove(tmpclonedir, True)
|
||||
|
||||
def download(self, loc, ud, d):
|
||||
Git.download(self, loc, ud, d)
|
||||
|
||||
os.chdir(ud.clonedir)
|
||||
submodules = self.uses_submodules(ud, d)
|
||||
if submodules:
|
||||
self.update_submodules(loc, ud, d)
|
||||
|
||||
def unpack(self, ud, destdir, d):
|
||||
Git.unpack(self, ud, destdir, d)
|
||||
|
||||
os.chdir(ud.destdir)
|
||||
submodules = self.uses_submodules(ud, d)
|
||||
if submodules:
|
||||
runfetchcmd("cp -r " + ud.clonedir + "/modules " + ud.destdir + "/.git/", d)
|
||||
runfetchcmd("git submodule init", d)
|
||||
runfetchcmd("git submodule update", d)
|
||||
|
||||
@@ -92,10 +92,7 @@ class Hg(FetchMethod):
|
||||
if not ud.user:
|
||||
hgroot = host + ud.path
|
||||
else:
|
||||
if ud.pswd:
|
||||
hgroot = ud.user + ":" + ud.pswd + "@" + host + ud.path
|
||||
else:
|
||||
hgroot = ud.user + "@" + host + ud.path
|
||||
hgroot = ud.user + "@" + host + ud.path
|
||||
|
||||
if command == "info":
|
||||
return "%s identify -i %s://%s/%s" % (basecmd, proto, hgroot, ud.module)
|
||||
@@ -115,10 +112,7 @@ class Hg(FetchMethod):
|
||||
# do not pass options list; limiting pull to rev causes the local
|
||||
# repo not to contain it and immediately following "update" command
|
||||
# will crash
|
||||
if ud.user and ud.pswd:
|
||||
cmd = "%s --config auth.default.prefix=* --config auth.default.username=%s --config auth.default.password=%s --config \"auth.default.schemes=%s\" pull" % (basecmd, ud.user, ud.pswd, proto)
|
||||
else:
|
||||
cmd = "%s pull" % (basecmd)
|
||||
cmd = "%s pull" % (basecmd)
|
||||
elif command == "update":
|
||||
cmd = "%s update -C %s" % (basecmd, " ".join(options))
|
||||
else:
|
||||
|
||||
@@ -112,7 +112,7 @@ class Perforce(FetchMethod):
|
||||
base = path
|
||||
which = path.find('/...')
|
||||
if which != -1:
|
||||
base = path[:which-1]
|
||||
base = path[:which]
|
||||
|
||||
base = self._strip_leading_slashes(base)
|
||||
|
||||
@@ -170,7 +170,7 @@ class Perforce(FetchMethod):
|
||||
logger.info("Fetch " + loc)
|
||||
logger.info("%s%s files %s", p4cmd, p4opt, depot)
|
||||
p4file, errors = bb.process.run("%s%s files %s" % (p4cmd, p4opt, depot))
|
||||
p4file = [f.rstrip() for f in p4file.splitlines()]
|
||||
p4file = p4file.strip()
|
||||
|
||||
if not p4file:
|
||||
raise FetchError("Fetch: unable to get the P4 files from %s" % depot, loc)
|
||||
|
||||
@@ -1,129 +0,0 @@
|
||||
# ex:ts=4:sw=4:sts=4:et
|
||||
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
|
||||
"""
|
||||
BitBake SFTP Fetch implementation
|
||||
|
||||
Class for fetching files via SFTP. It tries to adhere to the (now
|
||||
expired) IETF Internet Draft for "Uniform Resource Identifier (URI)
|
||||
Scheme for Secure File Transfer Protocol (SFTP) and Secure Shell
|
||||
(SSH)" (SECSH URI).
|
||||
|
||||
It uses SFTP (so as to adhere to the SECSH URI specification). It only
|
||||
supports key based authentication, not password. This class, unlike
|
||||
the SSH fetcher, does not support fetching a directory tree from the
|
||||
remote.
|
||||
|
||||
http://tools.ietf.org/html/draft-ietf-secsh-scp-sftp-ssh-uri-04
|
||||
https://www.iana.org/assignments/uri-schemes/prov/sftp
|
||||
https://tools.ietf.org/html/draft-ietf-secsh-filexfer-13
|
||||
|
||||
Please note that '/' is used as the host path separator, and not ":"
as you may be used to from the scp/sftp commands. You can use a
~ (tilde) to specify a path relative to your home directory.
(The /~user/ syntax, for specifying a path relative to another
user's home directory, is not supported.) Note that the tilde must
still follow the host path separator ("/"). See examples below.
|
||||
|
||||
Example SRC_URIs:
|
||||
|
||||
SRC_URI = "sftp://host.example.com/dir/path.file.txt"
|
||||
|
||||
A path relative to your home directory.
|
||||
|
||||
SRC_URI = "sftp://host.example.com/~/dir/path.file.txt"
|
||||
|
||||
You can also specify a username (specifying a password in the
URI is not supported; use SSH keys to authenticate):
|
||||
|
||||
SRC_URI = "sftp://user@host.example.com/dir/path.file.txt"
|
||||
|
||||
"""
|
||||
|
||||
# Copyright (C) 2013, Olof Johansson <olof.johansson@axis.com>
|
||||
#
|
||||
# Based in part on bb.fetch2.wget:
|
||||
# Copyright (C) 2003, 2004 Chris Larson
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License version 2 as
|
||||
# published by the Free Software Foundation.
|
||||
#
|
||||
# This program is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License along
|
||||
# with this program; if not, write to the Free Software Foundation, Inc.,
|
||||
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
|
||||
#
|
||||
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
|
||||
|
||||
import os
|
||||
import bb
|
||||
import urllib
|
||||
import commands
|
||||
from bb import data
|
||||
from bb.fetch2 import URI
|
||||
from bb.fetch2 import FetchMethod
|
||||
from bb.fetch2 import runfetchcmd
|
||||
|
||||
|
||||
class SFTP(FetchMethod):
|
||||
"""Class to fetch urls via 'sftp'"""
|
||||
|
||||
def supports(self, url, ud, d):
|
||||
"""
|
||||
Check to see if a given url can be fetched with sftp.
|
||||
"""
|
||||
return ud.type in ['sftp']
|
||||
|
||||
def recommends_checksum(self, urldata):
|
||||
return True
|
||||
|
||||
def urldata_init(self, ud, d):
|
||||
if 'protocol' in ud.parm and ud.parm['protocol'] == 'git':
|
||||
raise bb.fetch2.ParameterError(
|
||||
"Invalid protocol - if you wish to fetch from a " +
|
||||
"git repository using ssh, you need to use the " +
|
||||
"git:// prefix with protocol=ssh", ud.url)
|
||||
|
||||
if 'downloadfilename' in ud.parm:
|
||||
ud.basename = ud.parm['downloadfilename']
|
||||
else:
|
||||
ud.basename = os.path.basename(ud.path)
|
||||
|
||||
ud.localfile = data.expand(urllib.unquote(ud.basename), d)
|
||||
|
||||
def download(self, uri, ud, d):
|
||||
"""Fetch urls"""
|
||||
|
||||
urlo = URI(uri)
|
||||
basecmd = 'sftp -oPasswordAuthentication=no'
|
||||
port = ''
|
||||
if urlo.port:
|
||||
port = '-P %d' % urlo.port
|
||||
urlo.port = None
|
||||
|
||||
dldir = data.getVar('DL_DIR', d, True)
|
||||
lpath = os.path.join(dldir, ud.localfile)
|
||||
|
||||
user = ''
|
||||
if urlo.userinfo:
|
||||
user = urlo.userinfo + '@'
|
||||
|
||||
path = urlo.path
|
||||
|
||||
# Support URIs relative to the user's home directory, with
|
||||
# the tilde syntax. (E.g. <sftp://example.com/~/foo.diff>).
|
||||
if path[:3] == '/~/':
|
||||
path = path[3:]
|
||||
|
||||
remote = '%s%s:%s' % (user, urlo.hostname, path)
|
||||
|
||||
cmd = '%s %s %s %s' % (basecmd, port, commands.mkarg(remote),
|
||||
commands.mkarg(lpath))
|
||||
|
||||
bb.fetch2.check_network_access(d, cmd, uri)
|
||||
runfetchcmd(cmd, d)
|
||||
return True
|
||||
@@ -10,12 +10,6 @@ IETF secsh internet draft:
|
||||
Currently does not support the sftp parameters, as this uses scp
|
||||
Also does not support the 'fingerprint' connection parameter.
|
||||
|
||||
Please note that '/' is used as the host/path separator, not ':' as you may
be used to; also '~' can be used to specify the user HOME, but again after '/'
|
||||
|
||||
Example SRC_URI:
|
||||
SRC_URI = "ssh://user@host.example.com/dir/path/file.txt"
|
||||
SRC_URI = "ssh://user@host.example.com/~/file.txt"
|
||||
'''
|
||||
|
||||
# Copyright (C) 2006 OpenedHand Ltd.
|
||||
@@ -78,19 +72,15 @@ class SSH(FetchMethod):
|
||||
def supports_checksum(self, urldata):
|
||||
return False
|
||||
|
||||
def urldata_init(self, urldata, d):
|
||||
if 'protocol' in urldata.parm and urldata.parm['protocol'] == 'git':
|
||||
raise bb.fetch2.ParameterError(
|
||||
"Invalid protocol - if you wish to fetch from a git " +
|
||||
"repository using ssh, you need to use " +
|
||||
"git:// prefix with protocol=ssh", urldata.url)
|
||||
def localpath(self, url, urldata, d):
|
||||
m = __pattern__.match(urldata.url)
|
||||
path = m.group('path')
|
||||
host = m.group('host')
|
||||
urldata.localpath = os.path.join(d.getVar('DL_DIR', True), os.path.basename(path))
|
||||
lpath = os.path.join(data.getVar('DL_DIR', d, True), host, os.path.basename(path))
|
||||
return lpath
|
||||
|
||||
def download(self, url, urldata, d):
|
||||
dldir = d.getVar('DL_DIR', True)
|
||||
dldir = data.getVar('DL_DIR', d, True)
|
||||
|
||||
m = __pattern__.match(url)
|
||||
path = m.group('path')
|
||||
@@ -99,10 +89,16 @@ class SSH(FetchMethod):
|
||||
user = m.group('user')
|
||||
password = m.group('pass')
|
||||
|
||||
ldir = os.path.join(dldir, host)
|
||||
lpath = os.path.join(ldir, os.path.basename(path))
|
||||
|
||||
if not os.path.exists(ldir):
|
||||
os.makedirs(ldir)
|
||||
|
||||
if port:
|
||||
portarg = '-P %s' % port
|
||||
port = '-P %s' % port
|
||||
else:
|
||||
portarg = ''
|
||||
port = ''
|
||||
|
||||
if user:
|
||||
fr = user
|
||||
@@ -116,9 +112,9 @@ class SSH(FetchMethod):
|
||||
|
||||
import commands
|
||||
cmd = 'scp -B -r %s %s %s/' % (
|
||||
portarg,
|
||||
port,
|
||||
commands.mkarg(fr),
|
||||
commands.mkarg(dldir)
|
||||
commands.mkarg(ldir)
|
||||
)
|
||||
|
||||
bb.fetch2.check_network_access(d, cmd, urldata.url)
|
||||
|
||||
@@ -27,7 +27,6 @@ import os
|
||||
import sys
|
||||
import logging
|
||||
import bb
|
||||
import re
|
||||
from bb import data
|
||||
from bb.fetch2 import FetchMethod
|
||||
from bb.fetch2 import FetchError
|
||||
@@ -90,8 +89,6 @@ class Svn(FetchMethod):
|
||||
|
||||
if command == "info":
|
||||
svncmd = "%s info %s %s://%s/%s/" % (ud.basecmd, " ".join(options), proto, svnroot, ud.module)
|
||||
elif command == "log1":
|
||||
svncmd = "%s log --limit 1 %s %s://%s/%s/" % (ud.basecmd, " ".join(options), proto, svnroot, ud.module)
|
||||
else:
|
||||
suffix = ""
|
||||
if ud.revision:
|
||||
@@ -168,13 +165,14 @@ class Svn(FetchMethod):
|
||||
"""
|
||||
Return the latest upstream revision number
|
||||
"""
|
||||
bb.fetch2.check_network_access(d, self._buildsvncommand(ud, d, "log1"))
|
||||
bb.fetch2.check_network_access(d, self._buildsvncommand(ud, d, "info"))
|
||||
|
||||
output = runfetchcmd("LANG=C LC_ALL=C " + self._buildsvncommand(ud, d, "log1"), d, True)
|
||||
output = runfetchcmd("LANG=C LC_ALL=C " + self._buildsvncommand(ud, d, "info"), d, True)
|
||||
|
||||
# skip the first line, as per output of svn log
|
||||
# then we expect the revision on the 2nd line
|
||||
revision = re.search('^r([0-9]*)', output.splitlines()[1]).group(1)
|
||||
revision = None
|
||||
for line in output.splitlines():
|
||||
if "Last Changed Rev" in line:
|
||||
revision = line.split(":")[1].strip()
|
||||
|
||||
return revision
|
||||
|
||||
|
||||
@@ -32,6 +32,8 @@ import urllib
|
||||
from bb import data
|
||||
from bb.fetch2 import FetchMethod
|
||||
from bb.fetch2 import FetchError
|
||||
from bb.fetch2 import encodeurl
|
||||
from bb.fetch2 import decodeurl
|
||||
from bb.fetch2 import logger
|
||||
from bb.fetch2 import runfetchcmd
|
||||
|
||||
@@ -63,10 +65,8 @@ class Wget(FetchMethod):
|
||||
|
||||
basecmd = d.getVar("FETCHCMD_wget", True) or "/usr/bin/env wget -t 2 -T 30 -nv --passive-ftp --no-check-certificate"
|
||||
|
||||
if not checkonly and 'downloadfilename' in ud.parm:
|
||||
dldir = d.getVar("DL_DIR", True)
|
||||
bb.utils.mkdirhier(os.path.dirname(dldir + os.sep + ud.localfile))
|
||||
basecmd += " -O " + dldir + os.sep + ud.localfile
|
||||
if 'downloadfilename' in ud.parm:
|
||||
basecmd += " -O ${DL_DIR}/" + ud.localfile
|
||||
|
||||
if checkonly:
|
||||
fetchcmd = d.getVar("CHECKCOMMAND_wget", True) or d.expand(basecmd + " --spider '${URI}'")
|
||||
@@ -77,6 +77,9 @@ class Wget(FetchMethod):
|
||||
fetchcmd = d.getVar("FETCHCOMMAND_wget", True) or d.expand(basecmd + " -P ${DL_DIR} '${URI}'")
|
||||
|
||||
uri = uri.split(";")[0]
|
||||
uri_decoded = list(decodeurl(uri))
|
||||
uri_type = uri_decoded[0]
|
||||
uri_host = uri_decoded[1]
|
||||
|
||||
fetchcmd = fetchcmd.replace("${URI}", uri.split(";")[0])
|
||||
fetchcmd = fetchcmd.replace("${FILE}", ud.basename)
|
||||
|
||||
@@ -17,7 +17,24 @@
|
||||
# with this program; if not, write to the Free Software Foundation, Inc.,
|
||||
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
|
||||
|
||||
|
||||
"""
|
||||
What is a method pool?
|
||||
|
||||
BitBake has a global method scope where .bb, .inc and .bbclass
|
||||
files can install methods. These methods are parsed from strings.
|
||||
To avoid recompiling and executing these string we introduce
|
||||
a method pool to do this task.
|
||||
|
||||
This pool will be used to compile and execute the functions. It
|
||||
will be smart enough to
|
||||
"""
|
||||
|
||||
from bb.utils import better_compile, better_exec
|
||||
from bb import error
|
||||
|
||||
# A dict of function names we have seen
|
||||
_parsed_fns = { }
|
||||
|
||||
def insert_method(modulename, code, fn):
|
||||
"""
|
||||
@@ -26,3 +43,29 @@ def insert_method(modulename, code, fn):
|
||||
"""
|
||||
comp = better_compile(code, modulename, fn )
|
||||
better_exec(comp, None, code, fn)
|
||||
|
||||
# now some instrumentation
|
||||
code = comp.co_names
|
||||
for name in code:
|
||||
if name in ['None', 'False']:
|
||||
continue
|
||||
elif name in _parsed_fns and not _parsed_fns[name] == modulename:
|
||||
error("The function %s defined in %s was already declared in %s. BitBake has a global python function namespace so shared functions should be declared in a common include file rather than being duplicated, or if the functions are different, please use different function names." % (name, modulename, _parsed_fns[name]))
|
||||
else:
|
||||
_parsed_fns[name] = modulename
|
||||
|
||||
# A dict of modules the parser has finished with
|
||||
_parsed_methods = {}
|
||||
|
||||
def parsed_module(modulename):
|
||||
"""
|
||||
Has module been parsed?
|
||||
"""
|
||||
return modulename in _parsed_methods
|
||||
|
||||
def set_parsed_module(modulename):
|
||||
"""
|
||||
Set module as parsed
|
||||
"""
|
||||
_parsed_methods[modulename] = True
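A brief usage sketch of the method pool API shown in the hunk above (the module and file names are invented for illustration):

import bb.methodpool

code = "def do_report(d):\n    bb.note('building ' + (d.getVar('PN', True) or ''))\n"

# Compile and register the function, but only once per module
if not bb.methodpool.parsed_module("example_module"):
    bb.methodpool.insert_method("example_module", code, "example.bbclass")
    bb.methodpool.set_parsed_module("example_module")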
|
||||
|
||||
|
||||
@@ -107,7 +107,7 @@ def getDiskData(BBDirs, configuration):
|
||||
printErr("Invalid disk space value in BB_DISKMON_DIRS: %s" % pathSpaceInodeRe.group(3))
|
||||
return None
|
||||
else:
|
||||
# None means that it is not specified
|
||||
# 0 means that it is not specified
|
||||
minSpace = None
|
||||
|
||||
minInode = pathSpaceInodeRe.group(4)
|
||||
@@ -117,7 +117,7 @@ def getDiskData(BBDirs, configuration):
|
||||
printErr("Invalid inode value in BB_DISKMON_DIRS: %s" % pathSpaceInodeRe.group(4))
|
||||
return None
|
||||
else:
|
||||
# None means that it is not specified
|
||||
# 0 means that it is not specified
|
||||
minInode = None
|
||||
|
||||
if minSpace is None and minInode is None:
|
||||
@@ -127,9 +127,8 @@ def getDiskData(BBDirs, configuration):
|
||||
# DL_DIR may not exist at the very beginning
|
||||
if not os.path.exists(path):
|
||||
bb.utils.mkdirhier(path)
|
||||
dev = getMountedDev(path)
|
||||
# Use path/action as the key
|
||||
devDict[os.path.join(path, action)] = [dev, minSpace, minInode]
|
||||
mountedDev = getMountedDev(path)
|
||||
devDict[mountedDev] = action, path, minSpace, minInode
|
||||
|
||||
return devDict
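The hunk above re-keys devDict by the mounted device, with an (action, path, minSpace, minInode) tuple as the value; an illustrative example of the resulting shape (device names, paths and limits are invented):

# For a configuration such as:
#   BB_DISKMON_DIRS = "STOPTASKS,${TMPDIR},1G,100K ABORT,${DL_DIR},100M,1K"
# getDiskData() would return something shaped like:
devDict = {
    "/dev/sda3": ("STOPTASKS", "/home/user/build/tmp", 1024 ** 3, 100 * 1024),
    "/dev/sdb1": ("ABORT", "/home/user/downloads", 100 * 1024 ** 2, 1024),
}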
|
||||
|
||||
@@ -193,10 +192,10 @@ class diskMonitor:
|
||||
# This is for STOPTASKS and ABORT, to avoid printing the message repeatedly
# while waiting for the tasks to finish
|
||||
self.checked = {}
|
||||
for k in self.devDict:
|
||||
self.preFreeS[k] = 0
|
||||
self.preFreeI[k] = 0
|
||||
self.checked[k] = False
|
||||
for dev in self.devDict:
|
||||
self.preFreeS[dev] = 0
|
||||
self.preFreeI[dev] = 0
|
||||
self.checked[dev] = False
|
||||
if self.spaceInterval is None and self.inodeInterval is None:
|
||||
self.enableMonitor = False
|
||||
|
||||
@@ -205,61 +204,46 @@ class diskMonitor:
|
||||
""" Take action for the monitor """
|
||||
|
||||
if self.enableMonitor:
|
||||
for k in self.devDict:
|
||||
path = os.path.dirname(k)
|
||||
action = os.path.basename(k)
|
||||
dev = self.devDict[k][0]
|
||||
minSpace = self.devDict[k][1]
|
||||
minInode = self.devDict[k][2]
|
||||
|
||||
st = os.statvfs(path)
|
||||
for dev in self.devDict:
|
||||
st = os.statvfs(self.devDict[dev][1])
|
||||
|
||||
# The free space, float point number
|
||||
freeSpace = st.f_bavail * st.f_frsize
|
||||
|
||||
if minSpace and freeSpace < minSpace:
|
||||
if self.devDict[dev][2] and freeSpace < self.devDict[dev][2]:
|
||||
# Always show warning, the self.checked would always be False if the action is WARN
|
||||
if self.preFreeS[k] == 0 or self.preFreeS[k] - freeSpace > self.spaceInterval and not self.checked[k]:
|
||||
logger.warn("The free space of %s (%s) is running low (%.3fGB left)" % \
|
||||
(path, dev, freeSpace / 1024 / 1024 / 1024.0))
|
||||
self.preFreeS[k] = freeSpace
|
||||
if self.preFreeS[dev] == 0 or self.preFreeS[dev] - freeSpace > self.spaceInterval and not self.checked[dev]:
|
||||
logger.warn("The free space of %s is running low (%.3fGB left)" % (dev, freeSpace / 1024 / 1024 / 1024.0))
|
||||
self.preFreeS[dev] = freeSpace
|
||||
|
||||
if action == "STOPTASKS" and not self.checked[k]:
|
||||
if self.devDict[dev][0] == "STOPTASKS" and not self.checked[dev]:
|
||||
logger.error("No new tasks can be excuted since the disk space monitor action is \"STOPTASKS\"!")
|
||||
self.checked[k] = True
|
||||
self.checked[dev] = True
|
||||
rq.finish_runqueue(False)
|
||||
bb.event.fire(bb.event.DiskFull(dev, 'disk', freeSpace, path), self.configuration)
|
||||
elif action == "ABORT" and not self.checked[k]:
|
||||
bb.event.fire(bb.event.DiskFull(dev, 'disk', freeSpace, self.devDict[dev][1]), self.configuration)
|
||||
elif self.devDict[dev][0] == "ABORT" and not self.checked[dev]:
|
||||
logger.error("Immediately abort since the disk space monitor action is \"ABORT\"!")
|
||||
self.checked[k] = True
|
||||
self.checked[dev] = True
|
||||
rq.finish_runqueue(True)
|
||||
bb.event.fire(bb.event.DiskFull(dev, 'disk', freeSpace, path), self.configuration)
|
||||
bb.event.fire(bb.event.DiskFull(dev, 'disk', freeSpace, self.devDict[dev][1]), self.configuration)
|
||||
|
||||
# The free inodes, float point number
|
||||
freeInode = st.f_favail
|
||||
|
||||
if minInode and freeInode < minInode:
|
||||
# Some fs formats' (e.g., btrfs) statvfs.f_files (inodes) is
|
||||
# zero, this is a feature of the fs, we disable the inode
|
||||
# checking for such a fs.
|
||||
if st.f_files == 0:
|
||||
logger.warn("Inode check for %s is unavaliable, will remove it from disk monitor" % path)
|
||||
self.devDict[k][2] = None
|
||||
continue
|
||||
if self.devDict[dev][3] and freeInode < self.devDict[dev][3]:
|
||||
# Always show warning, the self.checked would always be False if the action is WARN
|
||||
if self.preFreeI[k] == 0 or self.preFreeI[k] - freeInode > self.inodeInterval and not self.checked[k]:
|
||||
logger.warn("The free inode of %s (%s) is running low (%.3fK left)" % \
|
||||
(path, dev, freeInode / 1024.0))
|
||||
self.preFreeI[k] = freeInode
|
||||
if self.preFreeI[dev] == 0 or self.preFreeI[dev] - freeInode > self.inodeInterval and not self.checked[dev]:
|
||||
logger.warn("The free inode of %s is running low (%.3fK left)" % (dev, freeInode / 1024.0))
|
||||
self.preFreeI[dev] = freeInode
|
||||
|
||||
if action == "STOPTASKS" and not self.checked[k]:
|
||||
if self.devDict[dev][0] == "STOPTASKS" and not self.checked[dev]:
|
||||
logger.error("No new tasks can be excuted since the disk space monitor action is \"STOPTASKS\"!")
|
||||
self.checked[k] = True
|
||||
self.checked[dev] = True
|
||||
rq.finish_runqueue(False)
|
||||
bb.event.fire(bb.event.DiskFull(dev, 'inode', freeInode, path), self.configuration)
|
||||
elif action == "ABORT" and not self.checked[k]:
|
||||
bb.event.fire(bb.event.DiskFull(dev, 'inode', freeSpace, self.devDict[dev][1]), self.configuration)
|
||||
elif self.devDict[dev][0] == "ABORT" and not self.checked[dev]:
|
||||
logger.error("Immediately abort since the disk space monitor action is \"ABORT\"!")
|
||||
self.checked[k] = True
|
||||
self.checked[dev] = True
|
||||
rq.finish_runqueue(True)
|
||||
bb.event.fire(bb.event.DiskFull(dev, 'inode', freeInode, path), self.configuration)
|
||||
bb.event.fire(bb.event.DiskFull(dev, 'inode', freeSpace, self.devDict[dev][1]), self.configuration)
|
||||
return
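The checks above rely on os.statvfs; a minimal sketch of the two quantities being monitored (the path is illustrative):

import os

st = os.statvfs("/tmp")
free_space = st.f_bavail * st.f_frsize   # free bytes available to unprivileged users
free_inodes = st.f_favail                # free inodes available to unprivileged users
print("%.3fGB left, %.3fK inodes left" % (free_space / 1024.0 / 1024 / 1024,
                                          free_inodes / 1024.0))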
|
||||
|
||||
@@ -148,8 +148,9 @@ class MethodNode(AstNode):
|
||||
text = '\n'.join(self.body)
|
||||
if self.func_name == "__anonymous":
|
||||
funcname = ("__anon_%s_%s" % (self.lineno, self.filename.translate(string.maketrans('/.+-', '____'))))
|
||||
text = "def %s(d):\n" % (funcname) + text
|
||||
bb.methodpool.insert_method(funcname, text, self.filename)
|
||||
if not funcname in bb.methodpool._parsed_fns:
|
||||
text = "def %s(d):\n" % (funcname) + text
|
||||
bb.methodpool.insert_method(funcname, text, self.filename)
|
||||
anonfuncs = data.getVar('__BBANONFUNCS') or []
|
||||
anonfuncs.append(funcname)
|
||||
data.setVar('__BBANONFUNCS', anonfuncs)
|
||||
@@ -170,7 +171,8 @@ class PythonMethodNode(AstNode):
|
||||
# 'this' file. This means we will not parse methods from
|
||||
# bb classes twice
|
||||
text = '\n'.join(self.body)
|
||||
bb.methodpool.insert_method(self.modulename, text, self.filename)
|
||||
if not bb.methodpool.parsed_module(self.modulename):
|
||||
bb.methodpool.insert_method(self.modulename, text, self.filename)
|
||||
data.setVarFlag(self.function, "func", 1)
|
||||
data.setVarFlag(self.function, "python", 1)
|
||||
data.setVar(self.function, text)
|
||||
|
||||
@@ -166,6 +166,10 @@ def handle(fn, d, include):
|
||||
if oldfile:
|
||||
d.setVar("FILE", oldfile)
|
||||
|
||||
# we have parsed the bb class now
|
||||
if ext == ".bbclass" or ext == ".inc":
|
||||
bb.methodpool.set_parsed_module(base_name)
|
||||
|
||||
return d
|
||||
|
||||
def feeder(lineno, s, fn, root, statements):
|
||||
|
||||
@@ -29,30 +29,7 @@ import logging
|
||||
import bb.utils
|
||||
from bb.parse import ParseError, resolve_file, ast, logger
|
||||
|
||||
__config_regexp__ = re.compile( r"""
|
||||
^
|
||||
(?P<exp>export\s*)?
|
||||
(?P<var>[a-zA-Z0-9\-~_+.${}/]+?)
|
||||
(\[(?P<flag>[a-zA-Z0-9\-_+.]+)\])?
|
||||
|
||||
\s* (
|
||||
(?P<colon>:=) |
|
||||
(?P<lazyques>\?\?=) |
|
||||
(?P<ques>\?=) |
|
||||
(?P<append>\+=) |
|
||||
(?P<prepend>=\+) |
|
||||
(?P<predot>=\.) |
|
||||
(?P<postdot>\.=) |
|
||||
=
|
||||
) \s*
|
||||
|
||||
(?!'[^']*'[^']*'$)
|
||||
(?!\"[^\"]*\"[^\"]*\"$)
|
||||
(?P<apo>['\"])
|
||||
(?P<value>.*)
|
||||
(?P=apo)
|
||||
$
|
||||
""", re.X)
|
||||
__config_regexp__ = re.compile( r"(?P<exp>export\s*)?(?P<var>[a-zA-Z0-9\-~_+.${}/]+)(\[(?P<flag>[a-zA-Z0-9\-_+.]+)\])?\s*((?P<colon>:=)|(?P<lazyques>\?\?=)|(?P<ques>\?=)|(?P<append>\+=)|(?P<prepend>=\+)|(?P<predot>=\.)|(?P<postdot>\.=)|=)\s*(?!'[^']*'[^']*'$)(?!\"[^\"]*\"[^\"]*\"$)(?P<apo>['\"])(?P<value>.*)(?P=apo)$")
|
||||
__include_regexp__ = re.compile( r"include\s+(.+)" )
|
||||
__require_regexp__ = re.compile( r"require\s+(.+)" )
|
||||
__export_regexp__ = re.compile( r"export\s+([a-zA-Z0-9\-_+.${}/]+)$" )
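The hunk above collapses the verbose form of __config_regexp__ back into a single-line pattern; a small demonstration of the named groups it yields (the pattern is copied verbatim from the line above, the test string is invented):

import re

__config_regexp__ = re.compile( r"(?P<exp>export\s*)?(?P<var>[a-zA-Z0-9\-~_+.${}/]+)(\[(?P<flag>[a-zA-Z0-9\-_+.]+)\])?\s*((?P<colon>:=)|(?P<lazyques>\?\?=)|(?P<ques>\?=)|(?P<append>\+=)|(?P<prepend>=\+)|(?P<predot>=\.)|(?P<postdot>\.=)|=)\s*(?!'[^']*'[^']*'$)(?!\"[^\"]*\"[^\"]*\"$)(?P<apo>['\"])(?P<value>.*)(?P=apo)$")

m = __config_regexp__.match('SRC_URI[md5sum] = "abc123"')
print("%s %s %s" % (m.group('var'), m.group('flag'), m.group('value')))
# SRC_URI md5sum abc123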
|
||||
|
||||
@@ -472,7 +472,7 @@ class RunQueueData:
|
||||
if depdata is not None:
|
||||
taskid = taskData.gettask_id_fromfnid(depdata, idependtask)
|
||||
if taskid is None:
|
||||
bb.msg.fatal("RunQueue", "Task %s in %s depends upon non-existent task %s in %s" % (taskData.tasks_name[task], fn, idependtask, taskData.fn_index[depdata]))
|
||||
bb.msg.fatal("RunQueue", "Task %s in %s depends upon non-existent task %s in %s" % (taskData.tasks_name[task], fn, idependtask, dep))
|
||||
depends.add(taskid)
|
||||
irdepends = taskData.tasks_irdepends[task]
|
||||
for (depid, idependtask) in irdepends:
|
||||
@@ -482,7 +482,7 @@ class RunQueueData:
|
||||
if depdata is not None:
|
||||
taskid = taskData.gettask_id_fromfnid(depdata, idependtask)
|
||||
if taskid is None:
|
||||
bb.msg.fatal("RunQueue", "Task %s in %s rdepends upon non-existent task %s in %s" % (taskData.tasks_name[task], fn, idependtask, taskData.fn_index[depdata]))
|
||||
bb.msg.fatal("RunQueue", "Task %s in %s rdepends upon non-existent task %s in %s" % (taskData.tasks_name[task], fn, idependtask, dep))
|
||||
depends.add(taskid)
|
||||
|
||||
# Resolve recursive 'recrdeptask' dependencies (Part A)
|
||||
@@ -1149,8 +1149,7 @@ class RunQueueExecute:
|
||||
os._exit(1)
|
||||
try:
|
||||
if not self.cooker.configuration.dry_run:
|
||||
profile = self.cooker.configuration.profile
|
||||
ret = bb.build.exec_task(fn, taskname, the_data, profile)
|
||||
ret = bb.build.exec_task(fn, taskname, the_data)
|
||||
os._exit(ret)
|
||||
except:
|
||||
os._exit(1)
|
||||
|
||||
@@ -266,5 +266,5 @@ class BitBakeServer(object):
|
||||
return self.connection
|
||||
|
||||
def launchUI(self, uifunc, *args):
|
||||
return uifunc(*args)
|
||||
return bb.cooker.server_main(self.cooker, uifunc, *args)
|
||||
|
||||
|
||||
@@ -98,7 +98,6 @@ class SignatureGeneratorBasic(SignatureGenerator):
|
||||
bb.error("Task %s from %s seems to be empty?!" % (task, fn))
|
||||
data = ''
|
||||
|
||||
gendeps[task] -= self.basewhitelist
|
||||
newdeps = gendeps[task]
|
||||
seen = set()
|
||||
while newdeps:
|
||||
@@ -108,12 +107,12 @@ class SignatureGeneratorBasic(SignatureGenerator):
|
||||
for dep in nextdeps:
|
||||
if dep in self.basewhitelist:
|
||||
continue
|
||||
gendeps[dep] -= self.basewhitelist
|
||||
newdeps |= gendeps[dep]
|
||||
newdeps -= seen
|
||||
|
||||
alldeps = sorted(seen)
|
||||
for dep in alldeps:
|
||||
alldeps = seen - self.basewhitelist
|
||||
|
||||
for dep in sorted(alldeps):
|
||||
data = data + dep
|
||||
if dep in lookupcache:
|
||||
var = lookupcache[dep]
|
||||
@@ -127,7 +126,7 @@ class SignatureGeneratorBasic(SignatureGenerator):
|
||||
if var:
|
||||
data = data + str(var)
|
||||
self.basehash[fn + "." + task] = hashlib.md5(data).hexdigest()
|
||||
taskdeps[task] = alldeps
|
||||
taskdeps[task] = sorted(alldeps)
|
||||
|
||||
self.taskdeps[fn] = taskdeps
|
||||
self.gendeps[fn] = gendeps
|
||||
@@ -331,12 +330,12 @@ def compare_sigfiles(a, b, recursecb = None):
|
||||
return changed, added, removed
|
||||
|
||||
if 'basewhitelist' in a_data and a_data['basewhitelist'] != b_data['basewhitelist']:
|
||||
output.append("basewhitelist changed from '%s' to '%s'" % (a_data['basewhitelist'], b_data['basewhitelist']))
|
||||
output.append("basewhitelist changed from %s to %s" % (a_data['basewhitelist'], b_data['basewhitelist']))
|
||||
if a_data['basewhitelist'] and b_data['basewhitelist']:
|
||||
output.append("changed items: %s" % a_data['basewhitelist'].symmetric_difference(b_data['basewhitelist']))
|
||||
|
||||
if 'taskwhitelist' in a_data and a_data['taskwhitelist'] != b_data['taskwhitelist']:
|
||||
output.append("taskwhitelist changed from '%s' to '%s'" % (a_data['taskwhitelist'], b_data['taskwhitelist']))
|
||||
output.append("taskwhitelist changed from %s to %s" % (a_data['taskwhitelist'], b_data['taskwhitelist']))
|
||||
if a_data['taskwhitelist'] and b_data['taskwhitelist']:
|
||||
output.append("changed items: %s" % a_data['taskwhitelist'].symmetric_difference(b_data['taskwhitelist']))
|
||||
|
||||
@@ -349,7 +348,7 @@ def compare_sigfiles(a, b, recursecb = None):
|
||||
changed, added, removed = dict_diff(a_data['gendeps'], b_data['gendeps'], a_data['basewhitelist'] & b_data['basewhitelist'])
|
||||
if changed:
|
||||
for dep in changed:
|
||||
output.append("List of dependencies for variable %s changed from '%s' to '%s'" % (dep, a_data['gendeps'][dep], b_data['gendeps'][dep]))
|
||||
output.append("List of dependencies for variable %s changed from %s to %s" % (dep, a_data['gendeps'][dep], b_data['gendeps'][dep]))
|
||||
if a_data['gendeps'][dep] and b_data['gendeps'][dep]:
|
||||
output.append("changed items: %s" % a_data['gendeps'][dep].symmetric_difference(b_data['gendeps'][dep]))
|
||||
if added:
|
||||
@@ -363,7 +362,7 @@ def compare_sigfiles(a, b, recursecb = None):
|
||||
changed, added, removed = dict_diff(a_data['varvals'], b_data['varvals'])
|
||||
if changed:
|
||||
for dep in changed:
|
||||
output.append("Variable %s value changed from '%s' to '%s'" % (dep, a_data['varvals'][dep], b_data['varvals'][dep]))
|
||||
output.append("Variable %s value changed from %s to %s" % (dep, a_data['varvals'][dep], b_data['varvals'][dep]))
|
||||
|
||||
changed, added, removed = dict_diff(a_data['file_checksum_values'], b_data['file_checksum_values'])
|
||||
if changed:
|
||||
|
||||
@@ -1,5 +1,3 @@
|
||||
# ex:ts=4:sw=4:sts=4:et
|
||||
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
|
||||
#
|
||||
# BitBake Test for codeparser.py
|
||||
#
|
||||
@@ -26,9 +24,6 @@ import bb
|
||||
|
||||
logger = logging.getLogger('BitBake.TestCodeParser')
|
||||
|
||||
# bb.data references bb.parse but can't directly import due to circular dependencies.
|
||||
# Hack around it for now :(
|
||||
import bb.parse
|
||||
import bb.data
|
||||
|
||||
class ReferenceTest(unittest.TestCase):
|
||||
|
||||
@@ -1,5 +1,3 @@
|
||||
# ex:ts=4:sw=4:sts=4:et
|
||||
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
|
||||
#
|
||||
# BitBake Tests for Copy-on-Write (cow.py)
|
||||
#
|
||||
|
||||
@@ -1,5 +1,3 @@
|
||||
# ex:ts=4:sw=4:sts=4:et
|
||||
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
|
||||
#
|
||||
# BitBake Tests for the Data Store (data.py/data_smart.py)
|
||||
#
|
||||
|
||||
@@ -1,5 +1,3 @@
|
||||
# ex:ts=4:sw=4:sts=4:et
|
||||
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
|
||||
#
|
||||
# BitBake Tests for the Fetcher (fetch2/)
|
||||
#
|
||||
@@ -23,252 +21,11 @@ import unittest
|
||||
import tempfile
|
||||
import subprocess
|
||||
import os
|
||||
from bb.fetch2 import URI
|
||||
import bb
|
||||
|
||||
class URITest(unittest.TestCase):
|
||||
test_uris = {
|
||||
"http://www.google.com/index.html" : {
|
||||
'uri': 'http://www.google.com/index.html',
|
||||
'scheme': 'http',
|
||||
'hostname': 'www.google.com',
|
||||
'port': None,
|
||||
'hostport': 'www.google.com',
|
||||
'path': '/index.html',
|
||||
'userinfo': '',
|
||||
'username': '',
|
||||
'password': '',
|
||||
'params': {},
|
||||
'relative': False
|
||||
},
|
||||
"http://www.google.com/index.html;param1=value1" : {
|
||||
'uri': 'http://www.google.com/index.html;param1=value1',
|
||||
'scheme': 'http',
|
||||
'hostname': 'www.google.com',
|
||||
'port': None,
|
||||
'hostport': 'www.google.com',
|
||||
'path': '/index.html',
|
||||
'userinfo': '',
|
||||
'username': '',
|
||||
'password': '',
|
||||
'params': {
|
||||
'param1': 'value1'
|
||||
},
|
||||
'relative': False
|
||||
},
|
||||
"http://www.example.com:8080/index.html" : {
|
||||
'uri': 'http://www.example.com:8080/index.html',
|
||||
'scheme': 'http',
|
||||
'hostname': 'www.example.com',
|
||||
'port': 8080,
|
||||
'hostport': 'www.example.com:8080',
|
||||
'path': '/index.html',
|
||||
'userinfo': '',
|
||||
'username': '',
|
||||
'password': '',
|
||||
'params': {},
|
||||
'relative': False
|
||||
},
|
||||
"cvs://anoncvs@cvs.handhelds.org/cvs;module=familiar/dist/ipkg" : {
|
||||
'uri': 'cvs://anoncvs@cvs.handhelds.org/cvs;module=familiar/dist/ipkg',
|
||||
'scheme': 'cvs',
|
||||
'hostname': 'cvs.handhelds.org',
|
||||
'port': None,
|
||||
'hostport': 'cvs.handhelds.org',
|
||||
'path': '/cvs',
|
||||
'userinfo': 'anoncvs',
|
||||
'username': 'anoncvs',
|
||||
'password': '',
|
||||
'params': {
|
||||
'module': 'familiar/dist/ipkg'
|
||||
},
|
||||
'relative': False
|
||||
},
|
||||
"cvs://anoncvs:anonymous@cvs.handhelds.org/cvs;tag=V0-99-81;module=familiar/dist/ipkg": {
|
||||
'uri': 'cvs://anoncvs:anonymous@cvs.handhelds.org/cvs;tag=V0-99-81;module=familiar/dist/ipkg',
|
||||
'scheme': 'cvs',
|
||||
'hostname': 'cvs.handhelds.org',
|
||||
'port': None,
|
||||
'hostport': 'cvs.handhelds.org',
|
||||
'path': '/cvs',
|
||||
'userinfo': 'anoncvs:anonymous',
|
||||
'username': 'anoncvs',
|
||||
'password': 'anonymous',
|
||||
'params': {
|
||||
'tag': 'V0-99-81',
|
||||
'module': 'familiar/dist/ipkg'
|
||||
},
|
||||
'relative': False
|
||||
},
|
||||
"file://example.diff": { # NOTE: Not RFC compliant!
|
||||
'uri': 'file:example.diff',
|
||||
'scheme': 'file',
|
||||
'hostname': '',
|
||||
'port': None,
|
||||
'hostport': '',
|
||||
'path': 'example.diff',
|
||||
'userinfo': '',
|
||||
'username': '',
|
||||
'password': '',
|
||||
'params': {},
|
||||
'relative': True
|
||||
},
|
||||
"file:example.diff": { # NOTE: RFC compliant version of the former
|
||||
'uri': 'file:example.diff',
|
||||
'scheme': 'file',
|
||||
'hostname': '',
|
||||
'port': None,
|
||||
'hostport': '',
|
||||
'path': 'example.diff',
|
||||
'userinfo': '',
|
||||
'userinfo': '',
|
||||
'username': '',
|
||||
'password': '',
|
||||
'params': {},
|
||||
'relative': True
|
||||
},
|
||||
"file:///tmp/example.diff": {
|
||||
'uri': 'file:///tmp/example.diff',
|
||||
'scheme': 'file',
|
||||
'hostname': '',
|
||||
'port': None,
|
||||
'hostport': '',
|
||||
'path': '/tmp/example.diff',
|
||||
'userinfo': '',
|
||||
'userinfo': '',
|
||||
'username': '',
|
||||
'password': '',
|
||||
'params': {},
|
||||
'relative': False
|
||||
},
|
||||
"git:///path/example.git": {
|
||||
'uri': 'git:///path/example.git',
|
||||
'scheme': 'git',
|
||||
'hostname': '',
|
||||
'port': None,
|
||||
'hostport': '',
|
||||
'path': '/path/example.git',
|
||||
'userinfo': '',
|
||||
'userinfo': '',
|
||||
'username': '',
|
||||
'password': '',
|
||||
'params': {},
|
||||
'relative': False
|
||||
},
|
||||
"git:path/example.git": {
|
||||
'uri': 'git:path/example.git',
|
||||
'scheme': 'git',
|
||||
'hostname': '',
|
||||
'port': None,
|
||||
'hostport': '',
|
||||
'path': 'path/example.git',
|
||||
'userinfo': '',
|
||||
'userinfo': '',
|
||||
'username': '',
|
||||
'password': '',
|
||||
'params': {},
|
||||
'relative': True
|
||||
},
|
||||
"git://example.net/path/example.git": {
|
||||
'uri': 'git://example.net/path/example.git',
|
||||
'scheme': 'git',
|
||||
'hostname': 'example.net',
|
||||
'port': None,
|
||||
'hostport': 'example.net',
|
||||
'path': '/path/example.git',
|
||||
'userinfo': '',
|
||||
'userinfo': '',
|
||||
'username': '',
|
||||
'password': '',
|
||||
'params': {},
|
||||
'relative': False
|
||||
}
|
||||
}
|
||||
|
||||
def test_uri(self):
|
||||
for test_uri, ref in self.test_uris.items():
|
||||
uri = URI(test_uri)
|
||||
|
||||
self.assertEqual(str(uri), ref['uri'])
|
||||
|
||||
# expected attributes
|
||||
self.assertEqual(uri.scheme, ref['scheme'])
|
||||
|
||||
self.assertEqual(uri.userinfo, ref['userinfo'])
|
||||
self.assertEqual(uri.username, ref['username'])
|
||||
self.assertEqual(uri.password, ref['password'])
|
||||
|
||||
self.assertEqual(uri.hostname, ref['hostname'])
|
||||
self.assertEqual(uri.port, ref['port'])
|
||||
self.assertEqual(uri.hostport, ref['hostport'])
|
||||
|
||||
self.assertEqual(uri.path, ref['path'])
|
||||
self.assertEqual(uri.params, ref['params'])
|
||||
|
||||
self.assertEqual(uri.relative, ref['relative'])
|
||||
|
||||
def test_dict(self):
|
||||
for test in self.test_uris.values():
|
||||
uri = URI()
|
||||
|
||||
self.assertEqual(uri.scheme, '')
|
||||
self.assertEqual(uri.userinfo, '')
|
||||
self.assertEqual(uri.username, '')
|
||||
self.assertEqual(uri.password, '')
|
||||
self.assertEqual(uri.hostname, '')
|
||||
self.assertEqual(uri.port, None)
|
||||
self.assertEqual(uri.path, '')
|
||||
self.assertEqual(uri.params, {})
|
||||
|
||||
|
||||
uri.scheme = test['scheme']
|
||||
self.assertEqual(uri.scheme, test['scheme'])
|
||||
|
||||
uri.userinfo = test['userinfo']
|
||||
self.assertEqual(uri.userinfo, test['userinfo'])
|
||||
        self.assertEqual(uri.username, test['username'])
        self.assertEqual(uri.password, test['password'])

        uri.hostname = test['hostname']
        self.assertEqual(uri.hostname, test['hostname'])
        self.assertEqual(uri.hostport, test['hostname'])

        uri.port = test['port']
        self.assertEqual(uri.port, test['port'])
        self.assertEqual(uri.hostport, test['hostport'])

        uri.path = test['path']
        self.assertEqual(uri.path, test['path'])

        uri.params = test['params']
        self.assertEqual(uri.params, test['params'])

        self.assertEqual(str(uri)+str(uri.relative), str(test['uri'])+str(test['relative']))

        self.assertEqual(str(uri), test['uri'])

        uri.params = {}
        self.assertEqual(uri.params, {})
        self.assertEqual(str(uri), (str(uri).split(";"))[0])

class FetcherTest(unittest.TestCase):

    def setUp(self):
        self.d = bb.data.init()
        self.tempdir = tempfile.mkdtemp()
        self.dldir = os.path.join(self.tempdir, "download")
        os.mkdir(self.dldir)
        self.d.setVar("DL_DIR", self.dldir)
        self.unpackdir = os.path.join(self.tempdir, "unpacked")
        os.mkdir(self.unpackdir)
        persistdir = os.path.join(self.tempdir, "persistdata")
        self.d.setVar("PERSISTENT_DIR", persistdir)

    def tearDown(self):
        bb.utils.prunedir(self.tempdir)

class MirrorUriTest(FetcherTest):

    replaceuris = {
        ("git://git.invalid.infradead.org/mtd-utils.git;tag=1234567890123456789012345678901234567890", "git://.*/.*", "http://somewhere.org/somedir/")
            : "http://somewhere.org/somedir/git2_git.invalid.infradead.org.mtd-utils.git.tar.gz",
@@ -309,6 +66,87 @@ class MirrorUriTest(FetcherTest):
               "https://.*/.* file:///someotherpath/downloads/ \n" \
               "http://.*/.* file:///someotherpath/downloads/ \n"

    def setUp(self):
        self.d = bb.data.init()
        self.tempdir = tempfile.mkdtemp()
        self.dldir = os.path.join(self.tempdir, "download")
        os.mkdir(self.dldir)
        self.d.setVar("DL_DIR", self.dldir)
        self.unpackdir = os.path.join(self.tempdir, "unpacked")
        os.mkdir(self.unpackdir)
        persistdir = os.path.join(self.tempdir, "persistdata")
        self.d.setVar("PERSISTENT_DIR", persistdir)

    def tearDown(self):
        bb.utils.prunedir(self.tempdir)

    def test_fetch(self):
        fetcher = bb.fetch.Fetch(["http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz", "http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.1.tar.gz"], self.d)
        fetcher.download()
        self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
        self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.1.tar.gz"), 57892)
        self.d.setVar("BB_NO_NETWORK", "1")
        fetcher = bb.fetch.Fetch(["http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz", "http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.1.tar.gz"], self.d)
        fetcher.download()
        fetcher.unpack(self.unpackdir)
        self.assertEqual(len(os.listdir(self.unpackdir + "/bitbake-1.0/")), 9)
        self.assertEqual(len(os.listdir(self.unpackdir + "/bitbake-1.1/")), 9)

    def test_fetch_mirror(self):
        self.d.setVar("MIRRORS", "http://.*/.* http://downloads.yoctoproject.org/releases/bitbake")
        fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz"], self.d)
        fetcher.download()
        self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)

    def test_fetch_premirror(self):
        self.d.setVar("PREMIRRORS", "http://.*/.* http://downloads.yoctoproject.org/releases/bitbake")
        fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz"], self.d)
        fetcher.download()
        self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)

    def gitfetcher(self, url1, url2):
        def checkrevision(self, fetcher):
            fetcher.unpack(self.unpackdir)
            revision = subprocess.check_output("git rev-parse HEAD", shell=True, cwd=self.unpackdir + "/git").strip()
            self.assertEqual(revision, "270a05b0b4ba0959fe0624d2a4885d7b70426da5")

        self.d.setVar("BB_GENERATE_MIRROR_TARBALLS", "1")
        self.d.setVar("SRCREV", "270a05b0b4ba0959fe0624d2a4885d7b70426da5")
        fetcher = bb.fetch.Fetch([url1], self.d)
        fetcher.download()
        checkrevision(self, fetcher)
        # Wipe out the dldir clone and the unpacked source, turn off the network and check mirror tarball works
        bb.utils.prunedir(self.dldir + "/git2/")
        bb.utils.prunedir(self.unpackdir)
        self.d.setVar("BB_NO_NETWORK", "1")
        fetcher = bb.fetch.Fetch([url2], self.d)
        fetcher.download()
        checkrevision(self, fetcher)

    def test_gitfetch(self):
        url1 = url2 = "git://git.openembedded.org/bitbake"
        self.gitfetcher(url1, url2)

    def test_gitfetch_premirror(self):
        url1 = "git://git.openembedded.org/bitbake"
        url2 = "git://someserver.org/bitbake"
        self.d.setVar("PREMIRRORS", "git://someserver.org/bitbake git://git.openembedded.org/bitbake \n")
        self.gitfetcher(url1, url2)

    def test_gitfetch_premirror2(self):
        url1 = url2 = "git://someserver.org/bitbake"
        self.d.setVar("PREMIRRORS", "git://someserver.org/bitbake git://git.openembedded.org/bitbake \n")
        self.gitfetcher(url1, url2)

    def test_gitfetch_premirror3(self):
        realurl = "git://git.openembedded.org/bitbake"
        dummyurl = "git://someserver.org/bitbake"
        self.sourcedir = self.unpackdir.replace("unpacked", "sourcemirror.git")
        os.chdir(self.tempdir)
        subprocess.check_output("git clone %s %s 2> /dev/null" % (realurl, self.sourcedir), shell=True)
        self.d.setVar("PREMIRRORS", "%s git://%s;protocol=file \n" % (dummyurl, self.sourcedir))
        self.gitfetcher(dummyurl, dummyurl)

    def test_urireplace(self):
        for k, v in self.replaceuris.items():
            ud = bb.fetch.FetchData(k[0], self.d)
@@ -330,85 +168,13 @@ class MirrorUriTest(FetcherTest):
        uris, uds = bb.fetch2.build_mirroruris(fetcher, mirrors, self.d)
        self.assertEqual(uris, ['file:///someotherpath/downloads/bitbake-1.0.tar.gz'])

class FetcherNetworkTest(FetcherTest):

    if os.environ.get("BB_SKIP_NETTESTS") == "yes":
        print("Unset BB_SKIP_NETTESTS to run network tests")
    else:
        def test_fetch(self):
            fetcher = bb.fetch.Fetch(["http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz", "http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.1.tar.gz"], self.d)
            fetcher.download()
            self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
            self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.1.tar.gz"), 57892)
            self.d.setVar("BB_NO_NETWORK", "1")
            fetcher = bb.fetch.Fetch(["http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz", "http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.1.tar.gz"], self.d)
            fetcher.download()
            fetcher.unpack(self.unpackdir)
            self.assertEqual(len(os.listdir(self.unpackdir + "/bitbake-1.0/")), 9)
            self.assertEqual(len(os.listdir(self.unpackdir + "/bitbake-1.1/")), 9)

        def test_fetch_mirror(self):
            self.d.setVar("MIRRORS", "http://.*/.* http://downloads.yoctoproject.org/releases/bitbake")
            fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz"], self.d)
            fetcher.download()
            self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)

        def test_fetch_premirror(self):
            self.d.setVar("PREMIRRORS", "http://.*/.* http://downloads.yoctoproject.org/releases/bitbake")
            fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz"], self.d)
            fetcher.download()
            self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)

        def gitfetcher(self, url1, url2):
            def checkrevision(self, fetcher):
                fetcher.unpack(self.unpackdir)
                revision = bb.process.run("git rev-parse HEAD", shell=True, cwd=self.unpackdir + "/git")[0].strip()
                self.assertEqual(revision, "270a05b0b4ba0959fe0624d2a4885d7b70426da5")

            self.d.setVar("BB_GENERATE_MIRROR_TARBALLS", "1")
            self.d.setVar("SRCREV", "270a05b0b4ba0959fe0624d2a4885d7b70426da5")
            fetcher = bb.fetch.Fetch([url1], self.d)
            fetcher.download()
            checkrevision(self, fetcher)
            # Wipe out the dldir clone and the unpacked source, turn off the network and check mirror tarball works
            bb.utils.prunedir(self.dldir + "/git2/")
            bb.utils.prunedir(self.unpackdir)
            self.d.setVar("BB_NO_NETWORK", "1")
            fetcher = bb.fetch.Fetch([url2], self.d)
            fetcher.download()
            checkrevision(self, fetcher)

        def test_gitfetch(self):
            url1 = url2 = "git://git.openembedded.org/bitbake"
            self.gitfetcher(url1, url2)

        def test_gitfetch_premirror(self):
            url1 = "git://git.openembedded.org/bitbake"
            url2 = "git://someserver.org/bitbake"
            self.d.setVar("PREMIRRORS", "git://someserver.org/bitbake git://git.openembedded.org/bitbake \n")
            self.gitfetcher(url1, url2)

        def test_gitfetch_premirror2(self):
            url1 = url2 = "git://someserver.org/bitbake"
            self.d.setVar("PREMIRRORS", "git://someserver.org/bitbake git://git.openembedded.org/bitbake \n")
            self.gitfetcher(url1, url2)

        def test_gitfetch_premirror3(self):
            realurl = "git://git.openembedded.org/bitbake"
            dummyurl = "git://someserver.org/bitbake"
            self.sourcedir = self.unpackdir.replace("unpacked", "sourcemirror.git")
            os.chdir(self.tempdir)
            bb.process.run("git clone %s %s 2> /dev/null" % (realurl, self.sourcedir), shell=True)
            self.d.setVar("PREMIRRORS", "%s git://%s;protocol=file \n" % (dummyurl, self.sourcedir))
            self.gitfetcher(dummyurl, dummyurl)

class URLHandle(unittest.TestCase):

    datatable = {
        "http://www.google.com/index.html" : ('http', 'www.google.com', '/index.html', '', '', {}),
        "cvs://anoncvs@cvs.handhelds.org/cvs;module=familiar/dist/ipkg" : ('cvs', 'cvs.handhelds.org', '/cvs', 'anoncvs', '', {'module': 'familiar/dist/ipkg'}),
        "cvs://anoncvs:anonymous@cvs.handhelds.org/cvs;tag=V0-99-81;module=familiar/dist/ipkg" : ('cvs', 'cvs.handhelds.org', '/cvs', 'anoncvs', 'anonymous', {'tag': 'V0-99-81', 'module': 'familiar/dist/ipkg'}),
        "git://git.openembedded.org/bitbake;branch=@foo" : ('git', 'git.openembedded.org', '/bitbake', '', '', {'branch': '@foo'})
        "cvs://anoncvs:anonymous@cvs.handhelds.org/cvs;tag=V0-99-81;module=familiar/dist/ipkg" : ('cvs', 'cvs.handhelds.org', '/cvs', 'anoncvs', 'anonymous', {'tag': 'V0-99-81', 'module': 'familiar/dist/ipkg'})
    }

    def test_decodeurl(self):

@@ -1,5 +1,3 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# BitBake Tests for utils.py
#

@@ -29,17 +29,15 @@ from bb.cooker import state
import bb.fetch2

class Tinfoil:
    def __init__(self, output=sys.stdout):
    def __init__(self):
        # Needed to avoid deprecation warnings with python 2.6
        warnings.filterwarnings("ignore", category=DeprecationWarning)

        # Set up logging
        self.logger = logging.getLogger('BitBake')
        console = logging.StreamHandler(output)
        bb.msg.addDefaultlogFilter(console)
        console = logging.StreamHandler(sys.stdout)
        format = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
        if output.isatty():
            format.enable_color()
        bb.msg.addDefaultlogFilter(console)
        console.setFormatter(format)
        self.logger.addHandler(console)


@@ -47,7 +47,6 @@ from bb.ui.crumbs.hig.deployimagedialog import DeployImageDialog
from bb.ui.crumbs.hig.layerselectiondialog import LayerSelectionDialog
from bb.ui.crumbs.hig.imageselectiondialog import ImageSelectionDialog
from bb.ui.crumbs.hig.parsingwarningsdialog import ParsingWarningsDialog
from bb.ui.crumbs.hig.propertydialog import PropertyDialog

hobVer = 20120808

@@ -56,7 +55,7 @@ class Configuration:

    @classmethod
    def parse_proxy_string(cls, proxy):
        pattern = "^\s*((http|https|ftp|socks|cvs)://)?((\S+):(\S+)@)?([^\s:]+)(:(\d+))?/?"
        pattern = "^\s*((http|https|ftp|git|cvs)://)?((\S+):(\S+)@)?([^\s:]+)(:(\d+))?/?"
        match = re.search(pattern, proxy)
        if match:
            return match.group(2), match.group(4), match.group(5), match.group(6), match.group(8)
@@ -124,7 +123,7 @@ class Configuration:
            "http" : [None, None, None, "", ""], # protocol : [prot, user, passwd, host, port]
            "https" : [None, None, None, "", ""],
            "ftp" : [None, None, None, "", ""],
            "socks" : [None, None, None, "", ""],
            "git" : [None, None, None, "", ""],
            "cvs" : [None, None, None, "", ""],
        }

@@ -181,16 +180,30 @@ class Configuration:
        self.default_task = params["default_task"]

        # proxy settings
        self.enable_proxy = params["http_proxy"] != "" or params["https_proxy"] != "" \
            or params["ftp_proxy"] != "" or params["socks_proxy"] != "" \
        self.enable_proxy = params["http_proxy"] != "" or params["https_proxy"] != "" or params["ftp_proxy"] != "" \
            or params["git_proxy_host"] != "" or params["git_proxy_port"] != "" \
            or params["cvs_proxy_host"] != "" or params["cvs_proxy_port"] != ""
        self.split_proxy("http", params["http_proxy"])
        self.split_proxy("https", params["https_proxy"])
        self.split_proxy("ftp", params["ftp_proxy"])
        self.split_proxy("socks", params["socks_proxy"])
        self.split_proxy("git", params["git_proxy_host"] + ":" + params["git_proxy_port"])
        self.split_proxy("cvs", params["cvs_proxy_host"] + ":" + params["cvs_proxy_port"])

    def load(self, template):
        self.curr_mach = template.getVar("MACHINE")
        self.curr_package_format = " ".join(template.getVar("PACKAGE_CLASSES").split("package_")).strip()
        self.curr_distro = template.getVar("DISTRO")
        self.dldir = template.getVar("DL_DIR")
        self.sstatedir = template.getVar("SSTATE_DIR")
        self.sstatemirror = template.getVar("SSTATE_MIRRORS")
        try:
            self.pmake = int(template.getVar("PARALLEL_MAKE").split()[1])
        except:
            pass
        try:
            self.bbthread = int(template.getVar("BB_NUMBER_THREADS"))
        except:
            pass
        try:
            self.image_rootfs_size = int(template.getVar("IMAGE_ROOTFS_SIZE"))
        except:
@@ -202,9 +215,13 @@ class Configuration:
        # image_overhead_factor is read-only.
        self.incompat_license = template.getVar("INCOMPATIBLE_LICENSE")
        self.curr_sdk_machine = template.getVar("SDKMACHINE")
        self.conf_version = template.getVar("CONF_VERSION")
        self.lconf_version = template.getVar("LCONF_VERSION")
        self.extra_setting = eval(template.getVar("EXTRA_SETTING"))
        self.toolchain_build = eval(template.getVar("TOOLCHAIN_BUILD"))
        self.image_fstypes = template.getVar("IMAGE_FSTYPES")
        # bblayers.conf
        self.layers = template.getVar("BBLAYERS").split()
        # image/recipes/packages
        self.selected_image = template.getVar("__SELECTED_IMAGE__")
        self.selected_recipes = template.getVar("DEPENDS").split()
@@ -215,35 +232,29 @@ class Configuration:
        self.split_proxy("http", template.getVar("http_proxy"))
        self.split_proxy("https", template.getVar("https_proxy"))
        self.split_proxy("ftp", template.getVar("ftp_proxy"))
        self.split_proxy("socks", template.getVar("all_proxy"))
        self.split_proxy("git", template.getVar("GIT_PROXY_HOST") + ":" + template.getVar("GIT_PROXY_PORT"))
        self.split_proxy("cvs", template.getVar("CVS_PROXY_HOST") + ":" + template.getVar("CVS_PROXY_PORT"))

    def save(self, handler, template, defaults=False):
    def save(self, template, defaults=False):
        template.setVar("VERSION", "%s" % hobVer)
        # bblayers.conf
        handler.set_var_in_file("BBLAYERS", self.layers, "bblayers.conf")
        template.setVar("BBLAYERS", " ".join(self.layers))
        # local.conf
        if not defaults:
            handler.set_var_in_file("MACHINE", self.curr_mach, "local.conf")
            handler.set_var_in_file("DISTRO", self.curr_distro, "local.conf")
            handler.set_var_in_file("DL_DIR", self.dldir, "local.conf")
            handler.set_var_in_file("SSTATE_DIR", self.sstatedir, "local.conf")
            sstate_mirror_list = self.sstatemirror.split("\\n ")
            sstate_mirror_list_modified = []
            for mirror in sstate_mirror_list:
                if mirror != "":
                    mirror = mirror + "\\n"
                    sstate_mirror_list_modified.append(mirror)
            handler.set_var_in_file("SSTATE_MIRRORS", sstate_mirror_list_modified, "local.conf")
            handler.set_var_in_file("PARALLEL_MAKE", "-j %s" % self.pmake, "local.conf")
            handler.set_var_in_file("BB_NUMBER_THREADS", self.bbthread, "local.conf")
            handler.set_var_in_file("PACKAGE_CLASSES", " ".join(["package_" + i for i in self.curr_package_format.split()]), "local.conf")
            template.setVar("MACHINE", self.curr_mach)
            template.setVar("DISTRO", self.curr_distro)
            template.setVar("DL_DIR", self.dldir)
            template.setVar("SSTATE_DIR", self.sstatedir)
            template.setVar("SSTATE_MIRRORS", self.sstatemirror)
            template.setVar("PARALLEL_MAKE", "-j %s" % self.pmake)
            template.setVar("BB_NUMBER_THREADS", self.bbthread)
            template.setVar("PACKAGE_CLASSES", " ".join(["package_" + i for i in self.curr_package_format.split()]))
            template.setVar("IMAGE_ROOTFS_SIZE", self.image_rootfs_size)
            template.setVar("IMAGE_EXTRA_SPACE", self.image_extra_size)
            template.setVar("INCOMPATIBLE_LICENSE", self.incompat_license)
            template.setVar("SDKMACHINE", self.curr_sdk_machine)
        handler.set_var_in_file("CONF_VERSION", self.conf_version, "local.conf")
        handler.set_var_in_file("LCONF_VERSION", self.lconf_version, "bblayers.conf")
        template.setVar("CONF_VERSION", self.conf_version)
        template.setVar("LCONF_VERSION", self.lconf_version)
        template.setVar("EXTRA_SETTING", self.extra_setting)
        template.setVar("TOOLCHAIN_BUILD", self.toolchain_build)
        template.setVar("IMAGE_FSTYPES", self.image_fstypes)
@@ -258,7 +269,8 @@ class Configuration:
        template.setVar("http_proxy", self.combine_proxy("http"))
        template.setVar("https_proxy", self.combine_proxy("https"))
        template.setVar("ftp_proxy", self.combine_proxy("ftp"))
        template.setVar("all_proxy", self.combine_proxy("socks"))
        template.setVar("GIT_PROXY_HOST", self.combine_host_only("git"))
        template.setVar("GIT_PROXY_PORT", self.combine_port_only("git"))
        template.setVar("CVS_PROXY_HOST", self.combine_host_only("cvs"))
        template.setVar("CVS_PROXY_PORT", self.combine_port_only("cvs"))

@@ -273,8 +285,8 @@ class Configuration:
            (self.lconf_version, self.extra_setting, self.toolchain_build, self.image_fstypes, self.selected_image)
        s += "DEPENDS: '%s', IMAGE_INSTALL: '%s', enable_proxy: '%s', use_same_proxy: '%s', http_proxy: '%s', " % \
            (self.selected_recipes, self.user_selected_packages, self.enable_proxy, self.same_proxy, self.combine_proxy("http"))
        s += "https_proxy: '%s', ftp_proxy: '%s', all_proxy: '%s', CVS_PROXY_HOST: '%s', CVS_PROXY_PORT: '%s'" % \
            (self.combine_proxy("https"), self.combine_proxy("ftp"), self.combine_proxy("socks"),
        s += "https_proxy: '%s', ftp_proxy: '%s', GIT_PROXY_HOST: '%s', GIT_PROXY_PORT: '%s', CVS_PROXY_HOST: '%s', CVS_PROXY_PORT: '%s'" % \
            (self.combine_proxy("https"), self.combine_proxy("ftp"),self.combine_host_only("git"), self.combine_port_only("git"),
             self.combine_host_only("cvs"), self.combine_port_only("cvs"))
        return s

@@ -469,6 +481,8 @@ class Builder(gtk.Window):
        self.handler.connect("recipe-populated", self.handler_recipe_populated_cb)
        self.handler.connect("package-populated", self.handler_package_populated_cb)

        self.handler.set_config_filter(hob_conf_filter)

        self.initiate_new_build_async()

    def create_visual_elements(self):
@@ -547,10 +561,11 @@ class Builder(gtk.Window):

    def initiate_new_build_async(self):
        self.switch_page(self.MACHINE_SELECTION)
        self.handler.init_cooker()
        self.handler.set_extra_inherit("image_types")
        self.generate_configuration()
        self.load_template(TemplateMgr.convert_to_template_pathfilename("default", ".hob/"))
        if self.load_template(TemplateMgr.convert_to_template_pathfilename("default", ".hob/")) == False:
            self.show_sanity_check_page()
            self.handler.init_cooker()
            self.handler.set_extra_inherit("image_types")
            self.generate_configuration()

    def update_config_async(self):
        self.switch_page(self.MACHINE_SELECTION)
@@ -659,7 +674,8 @@ class Builder(gtk.Window):
            if not os.path.exists(layer+'/conf/layer.conf'):
                return False

        self.set_user_config_extra()
        self.save_defaults() # remember layers and settings
        self.update_config_async()
        return True

    def save_template(self, path, defaults=False):
@@ -673,7 +689,7 @@ class Builder(gtk.Window):
        self.template = TemplateMgr()
        try:
            self.template.open(filename, path)
            self.configuration.save(self.handler, self.template, defaults)
            self.configuration.save(self.template, defaults)

            self.template.save()
        except Exception as e:
@@ -754,26 +770,15 @@ class Builder(gtk.Window):
            self.handler.set_http_proxy(self.configuration.combine_proxy("http"))
            self.handler.set_https_proxy(self.configuration.combine_proxy("https"))
            self.handler.set_ftp_proxy(self.configuration.combine_proxy("ftp"))
            self.handler.set_socks_proxy(self.configuration.combine_proxy("socks"))
            self.handler.set_git_proxy(self.configuration.combine_host_only("git"), self.configuration.combine_port_only("git"))
            self.handler.set_cvs_proxy(self.configuration.combine_host_only("cvs"), self.configuration.combine_port_only("cvs"))
        elif self.configuration.enable_proxy == False:
            self.handler.set_http_proxy("")
            self.handler.set_https_proxy("")
            self.handler.set_ftp_proxy("")
            self.handler.set_socks_proxy("")
            self.handler.set_git_proxy("", "")
            self.handler.set_cvs_proxy("", "")

    def set_user_config_extra(self):
        self.handler.set_rootfs_size(self.configuration.image_rootfs_size)
        self.handler.set_extra_size(self.configuration.image_extra_size)
        self.handler.set_incompatible_license(self.configuration.incompat_license)
        self.handler.set_sdk_machine(self.configuration.curr_sdk_machine)
        self.handler.set_image_fstypes(self.configuration.image_fstypes)
        self.handler.set_extra_config(self.configuration.extra_setting)
        self.handler.set_extra_inherit("packageinfo")
        self.handler.set_extra_inherit("image_types")
        self.set_user_config_proxies()

    def set_user_config(self):
        self.handler.init_cooker()
        # set bb layers
@@ -787,7 +792,15 @@ class Builder(gtk.Window):
        self.handler.set_sstate_mirrors(self.configuration.sstatemirror)
        self.handler.set_pmake(self.configuration.pmake)
        self.handler.set_bbthreads(self.configuration.bbthread)
        self.set_user_config_extra()
        self.handler.set_rootfs_size(self.configuration.image_rootfs_size)
        self.handler.set_extra_size(self.configuration.image_extra_size)
        self.handler.set_incompatible_license(self.configuration.incompat_license)
        self.handler.set_sdk_machine(self.configuration.curr_sdk_machine)
        self.handler.set_image_fstypes(self.configuration.image_fstypes)
        self.handler.set_extra_config(self.configuration.extra_setting)
        self.handler.set_extra_inherit("packageinfo")
        self.handler.set_extra_inherit("image_types")
        self.set_user_config_proxies()

    def update_recipe_model(self, selected_image, selected_recipes):
        self.recipe_model.set_selected_image(selected_image)
@@ -1024,12 +1037,10 @@ class Builder(gtk.Window):
            fraction = 0.9
        elif self.current_step == self.IMAGE_GENERATING:
            fraction = 1.0
            version = ""
            self.parameters.image_names = []
            selected_image = self.recipe_model.get_selected_image()
            if selected_image == self.recipe_model.__custom_image__:
                if self.configuration.initial_selected_image != selected_image:
                    version = self.recipe_model.get_custom_image_version()
                version = self.recipe_model.get_custom_image_version()
                linkname = 'hob-image' + version+ "-" + self.configuration.curr_mach
            else:
                linkname = selected_image + '-' + self.configuration.curr_mach
@@ -1205,37 +1216,11 @@ class Builder(gtk.Window):

        self.fast_generate_image_async(True)

    def show_recipe_property_dialog(self, properties):
        information = {}
        dialog = PropertyDialog(title = properties["name"] +' '+ "properties",
                                parent = self,
                                information = properties,
                                flags = gtk.DIALOG_DESTROY_WITH_PARENT
                                | gtk.DIALOG_NO_SEPARATOR)
    def show_binb_dialog(self, binb):
        markup = "<b>Brought in by:</b>\n%s" % binb
        ptip = PersistentTooltip(markup, self)

        dialog.set_modal(False)

        button = dialog.add_button("Close", gtk.RESPONSE_NO)
        HobAltButton.style_button(button)
        button.connect("clicked", lambda w: dialog.destroy())

        dialog.run()

    def show_packages_property_dialog(self, properties):
        information = {}
        dialog = PropertyDialog(title = properties["name"] +' '+ "properties",
                                parent = self,
                                information = properties,
                                flags = gtk.DIALOG_DESTROY_WITH_PARENT
                                | gtk.DIALOG_NO_SEPARATOR)

        dialog.set_modal(False)

        button = dialog.add_button("Close", gtk.RESPONSE_NO)
        HobAltButton.style_button(button)
        button.connect("clicked", lambda w: dialog.destroy())

        dialog.run()
        ptip.show()

    def show_layer_selection_dialog(self):
        dialog = LayerSelectionDialog(title = "Layers",
@@ -1259,6 +1244,39 @@ class Builder(gtk.Window):
        self.update_config_async()
        dialog.destroy()

    def show_load_template_dialog(self):
        dialog = gtk.FileChooserDialog("Load Template Files", self,
                                       gtk.FILE_CHOOSER_ACTION_OPEN)
        button = dialog.add_button("Cancel", gtk.RESPONSE_NO)
        HobAltButton.style_button(button)
        button = dialog.add_button("Open", gtk.RESPONSE_YES)
        HobButton.style_button(button)
        filter = gtk.FileFilter()
        filter.set_name("Hob Files")
        filter.add_pattern("*.hob")
        dialog.add_filter(filter)

        response = dialog.run()
        path = None
        if response == gtk.RESPONSE_YES:
            path = dialog.get_filename()
        dialog.destroy()
        return response == gtk.RESPONSE_YES, path

    def show_save_template_dialog(self):
        dialog = gtk.FileChooserDialog("Save Template Files", self,
                                       gtk.FILE_CHOOSER_ACTION_SAVE)
        button = dialog.add_button("Cancel", gtk.RESPONSE_NO)
        HobAltButton.style_button(button)
        button = dialog.add_button("Save", gtk.RESPONSE_YES)
        HobButton.style_button(button)
        dialog.set_current_name("hob")
        response = dialog.run()
        if response == gtk.RESPONSE_YES:
            path = dialog.get_filename()
            self.save_template(path)
        dialog.destroy()

    def get_image_extension(self):
        image_extension = {}
        for type in self.parameters.image_types:

@@ -208,7 +208,7 @@ class AdvancedSettingsDialog (CrumbsDialog, SettingsUIHelper):
                self.all_distros[ i ] = "Default"
        if self.configuration.curr_distro == "defaultsetup":
            self.configuration.curr_distro = "Default"
        distro_widget, self.distro_combo = self.gen_combo_widget(self.configuration.curr_distro, self.all_distros,"<b>Distro</b>" + "*" + tooltip)
        distro_widget, self.distro_combo = self.gen_combo_widget(self.configuration.curr_distro, self.all_distros, tooltip)
        distro_vbox.pack_start(label, expand=False, fill=False)
        distro_vbox.pack_start(distro_widget, expand=False, fill=False)
        main_vbox.pack_start(distro_vbox, expand=False, fill=False)
@@ -219,7 +219,7 @@ class AdvancedSettingsDialog (CrumbsDialog, SettingsUIHelper):
        advanced_vbox.pack_start(table, expand=False, fill=False)

        tooltip = "Image file system types you want."
        info = HobInfoButton("<b>Image types</b>" + "*" + tooltip, self)
        info = HobInfoButton(tooltip, self)
        label = self.gen_label_widget("Image types:")
        align = gtk.Alignment(0, 0.5, 0, 0)
        table.attach(align, 0, 4, 0, 1)
@@ -257,7 +257,7 @@ class AdvancedSettingsDialog (CrumbsDialog, SettingsUIHelper):
        advanced_vbox.pack_start(sub_vbox, expand=False, fill=False)
        tooltip_combo = "Selects the package format used to generate rootfs."
        tooltip_extra = "Selects extra package formats to build"
        pkgfmt_widget, self.rootfs_combo, self.check_hbox = self.gen_pkgfmt_widget(self.configuration.curr_package_format, self.all_package_formats,"<b>Root file system package format</b>" + "*" + tooltip_combo,"<b>Additional package formats</b>" + "*" + tooltip_extra)
        pkgfmt_widget, self.rootfs_combo, self.check_hbox = self.gen_pkgfmt_widget(self.configuration.curr_package_format, self.all_package_formats, tooltip_combo, tooltip_extra)
        sub_vbox.pack_start(pkgfmt_widget, expand=False, fill=False)

        advanced_vbox.pack_start(self.gen_label_widget('<span weight="bold">Image size</span>'), expand=False, fill=False)
@@ -265,7 +265,7 @@ class AdvancedSettingsDialog (CrumbsDialog, SettingsUIHelper):
        advanced_vbox.pack_start(sub_vbox, expand=False, fill=False)
        label = self.gen_label_widget("Image basic size (in MB)")
        tooltip = "Sets the basic size of your target image.\nThis is the basic size of your target image unless your selected package size exceeds this value or you select \'Image Extra Size\'."
        rootfs_size_widget, self.rootfs_size_spinner = self.gen_spinner_widget(int(self.configuration.image_rootfs_size*1.0/1024), 0, 65536,"<b>Image basic size</b>" + "*" + tooltip)
        rootfs_size_widget, self.rootfs_size_spinner = self.gen_spinner_widget(int(self.configuration.image_rootfs_size*1.0/1024), 0, 65536, tooltip)
        sub_vbox.pack_start(label, expand=False, fill=False)
        sub_vbox.pack_start(rootfs_size_widget, expand=False, fill=False)

@@ -273,7 +273,7 @@ class AdvancedSettingsDialog (CrumbsDialog, SettingsUIHelper):
        advanced_vbox.pack_start(sub_vbox, expand=False, fill=False)
        label = self.gen_label_widget("Additional free space (in MB)")
        tooltip = "Sets the extra free space of your target image.\nBy default, the system reserves 30% of your image size as free space. If your image contains zypper, it brings in 50MB more space. The maximum free space is 64GB."
        extra_size_widget, self.extra_size_spinner = self.gen_spinner_widget(int(self.configuration.image_extra_size*1.0/1024), 0, 65536,"<b>Additional free space</b>" + "*" + tooltip)
        extra_size_widget, self.extra_size_spinner = self.gen_spinner_widget(int(self.configuration.image_extra_size*1.0/1024), 0, 65536, tooltip)
        sub_vbox.pack_start(label, expand=False, fill=False)
        sub_vbox.pack_start(extra_size_widget, expand=False, fill=False)

@@ -295,7 +295,7 @@ class AdvancedSettingsDialog (CrumbsDialog, SettingsUIHelper):
        sub_hbox.pack_start(self.toolchain_checkbox, expand=False, fill=False)

        tooltip = "Selects the host platform for which you want to run the toolchain"
        sdk_machine_widget, self.sdk_machine_combo = self.gen_combo_widget(self.configuration.curr_sdk_machine, self.all_sdk_machines,"<b>Build toolchain</b>" + "*" + tooltip)
        sdk_machine_widget, self.sdk_machine_combo = self.gen_combo_widget(self.configuration.curr_sdk_machine, self.all_sdk_machines, tooltip)
        sub_hbox.pack_start(sdk_machine_widget, expand=False, fill=False)

        return advanced_vbox

@@ -84,7 +84,7 @@ class ImageSelectionDialog (CrumbsDialog):
        open_button.connect("clicked", self.select_path_cb, self, entry)
        table.attach(open_button, 9, 10, 0, 1)

        self.image_table = HobViewTable(self.__columns__, "Images")
        self.image_table = HobViewTable(self.__columns__)
        self.image_table.set_size_request(-1, 300)
        self.image_table.connect("toggled", self.toggled_cb)
        self.image_table.connect_group_selection(self.table_selected_cb)

@@ -132,13 +132,12 @@ class LayerSelectionDialog (CrumbsDialog):
        tree_selection.set_mode(gtk.SELECTION_SINGLE)

        # Allow enable drag and drop of rows including row move
        dnd_internal_target = ''
        dnd_targets = [(dnd_internal_target, gtk.TARGET_SAME_WIDGET, 0)]
        layer_tv.enable_model_drag_source( gtk.gdk.BUTTON1_MASK,
                                           dnd_targets,
                                           gtk.gdk.ACTION_MOVE)
        layer_tv.enable_model_drag_dest(dnd_targets,
                                        self.TARGETS,
                                        gtk.gdk.ACTION_DEFAULT|
                                        gtk.gdk.ACTION_MOVE)
        layer_tv.enable_model_drag_dest(self.TARGETS,
                                        gtk.gdk.ACTION_DEFAULT)
        layer_tv.connect("drag_data_get", self.drag_data_get_cb)
        layer_tv.connect("drag_data_received", self.drag_data_received_cb)


@@ -105,10 +105,8 @@ class ParsingWarningsDialog (CrumbsDialog):
    def create_visual_elements(self):
        self.set_size_request(350, 350)
        self.heading_label = gtk.Label()
        self.heading_label.set_alignment(0, 0)
        self.heading_label.set_alignment(0.1, 0)
        self.warning_label = gtk.Label()
        self.warning_label.set_selectable(True)
        self.warning_label.set_alignment(0, 0)
        self.textWindow = gtk.ScrolledWindow()

        table = gtk.Table(1, 10, False)
@@ -157,7 +155,7 @@ class ParsingWarningsDialog (CrumbsDialog):
        self.vbox.pack_start(self.heading_label, expand=False, fill=False)
        self.vbox.pack_start(self.warning_label, expand=False, fill=False)
        self.vbox.pack_start(self.textWindow, expand=False, fill=False)
        cancel_button = self.add_button("Close", gtk.RESPONSE_CANCEL)
        cancel_button = self.add_button("Cancel", gtk.RESPONSE_CANCEL)
        HobAltButton.style_button(cancel_button)

        self.refresh_components()

@@ -1,437 +0,0 @@
|
||||
#
|
||||
# BitBake Graphical GTK User Interface
|
||||
#
|
||||
# Copyright (C) 2011-2013 Intel Corporation
|
||||
#
|
||||
# Authored by Andrei Dinu <andrei.adrianx.dinu@intel.com>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License version 2 as
|
||||
# published by the Free Software Foundation.
|
||||
#
|
||||
# This program is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License along
|
||||
# with this program; if not, write to the Free Software Foundation, Inc.,
|
||||
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
|
||||
|
||||
import string
|
||||
import gtk
|
||||
import gobject
|
||||
import os
|
||||
import tempfile
|
||||
import glib
|
||||
from bb.ui.crumbs.hig.crumbsdialog import CrumbsDialog
|
||||
from bb.ui.crumbs.hig.settingsuihelper import SettingsUIHelper
|
||||
from bb.ui.crumbs.hig.crumbsmessagedialog import CrumbsMessageDialog
|
||||
from bb.ui.crumbs.hig.layerselectiondialog import LayerSelectionDialog
|
||||
|
||||
"""
|
||||
The following are convenience classes for implementing GNOME HIG compliant
|
||||
BitBake GUI's
|
||||
In summary: spacing = 12px, border-width = 6px
|
||||
"""
|
||||
|
||||
class PropertyDialog(CrumbsDialog):
|
||||
|
||||
def __init__(self, title, parent, information, flags, buttons=None):
|
||||
|
||||
super(PropertyDialog, self).__init__(title, parent, flags, buttons)
|
||||
|
||||
self.properties = information
|
||||
|
||||
if len(self.properties) == 10:
|
||||
self.create_recipe_visual_elements()
|
||||
elif len(self.properties) == 5:
|
||||
self.create_package_visual_elements()
|
||||
else:
|
||||
self.create_information_visual_elements()
|
||||
|
||||
|
||||
def create_information_visual_elements(self):
|
||||
|
||||
HOB_ICON_BASE_DIR = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(__file__))), ("icons/"))
|
||||
ICON_PACKAGES_DISPLAY_FILE = os.path.join(HOB_ICON_BASE_DIR, ('info/info_display.png'))
|
||||
|
||||
self.set_resizable(False)
|
||||
|
||||
self.table = gtk.Table(1,1,False)
|
||||
self.table.set_row_spacings(0)
|
||||
self.table.set_col_spacings(0)
|
||||
|
||||
self.image = gtk.Image()
|
||||
self.image.set_from_file(ICON_PACKAGES_DISPLAY_FILE)
|
||||
self.image.set_property("xalign",0)
|
||||
#self.vbox.add(self.image)
|
||||
|
||||
image_info = self.properties.split("*")[0]
|
||||
info = self.properties.split("*")[1]
|
||||
|
||||
vbox = gtk.VBox(True, spacing=30)
|
||||
|
||||
self.label_short = gtk.Label()
|
||||
self.label_short.set_line_wrap(False)
|
||||
self.label_short.set_markup(image_info)
|
||||
self.label_short.set_property("xalign", 0)
|
||||
|
||||
self.info_label = gtk.Label()
|
||||
self.info_label.set_line_wrap(True)
|
||||
self.info_label.set_markup(info)
|
||||
self.info_label.set_property("yalign", 0.5)
|
||||
|
||||
self.table.attach(self.image, 0,1,0,1, xoptions=gtk.FILL|gtk.EXPAND, yoptions=gtk.FILL,xpadding=5,ypadding=5)
|
||||
self.table.attach(self.label_short, 0,1,0,1, xoptions=gtk.FILL|gtk.EXPAND, yoptions=gtk.FILL,xpadding=40,ypadding=5)
|
||||
self.table.attach(self.info_label, 0,1,1,2, xoptions=gtk.FILL|gtk.EXPAND, yoptions=gtk.FILL,xpadding=40,ypadding=10)
|
||||
|
||||
self.vbox.add(self.table)
|
||||
|
||||
def treeViewTooltip( self, widget, e, tooltips, cell, emptyText="" ):
|
||||
try:
|
||||
(path,col,x,y) = widget.get_path_at_pos( int(e.x), int(e.y) )
|
||||
it = widget.get_model().get_iter(path)
|
||||
value = widget.get_model().get_value(it,cell)
|
||||
if value in self.tooltip_items:
|
||||
tooltips.set_tip(widget, self.tooltip_items[value])
|
||||
tooltips.enable()
|
||||
else:
|
||||
tooltips.set_tip(widget, emptyText)
|
||||
except:
|
||||
tooltips.set_tip(widget, emptyText)
|
||||
|
||||
|
||||
def create_package_visual_elements(self):
|
||||
|
||||
name = self.properties['name']
|
||||
binb = self.properties['binb']
|
||||
size = self.properties['size']
|
||||
recipe = self.properties['recipe']
|
||||
file_list = self.properties['files_list']
|
||||
|
||||
file_list = file_list.strip("{}'")
|
||||
files_temp = ''
|
||||
paths_temp = ''
|
||||
files_binb = []
|
||||
paths_binb = []
|
||||
|
||||
self.tooltip_items = {}
|
||||
|
||||
self.set_resizable(False)
|
||||
|
||||
#cleaning out the recipe variable
|
||||
recipe = recipe.split("+")[0]
|
||||
|
||||
vbox = gtk.VBox(True,spacing = 0)
|
||||
|
||||
###################################### NAME ROW + COL #################################
|
||||
|
||||
self.label_short = gtk.Label()
|
||||
self.label_short.set_size_request(300,-1)
|
||||
self.label_short.set_selectable(True)
|
||||
self.label_short.set_line_wrap(True)
|
||||
self.label_short.set_markup("<span weight=\"bold\">Name: </span>" + name)
|
||||
self.label_short.set_property("xalign", 0)
|
||||
|
||||
self.vbox.add(self.label_short)
|
||||
|
||||
###################################### SIZE ROW + COL ######################################
|
||||
|
||||
self.label_short = gtk.Label()
|
||||
self.label_short.set_size_request(300,-1)
|
||||
self.label_short.set_selectable(True)
|
||||
self.label_short.set_line_wrap(True)
|
||||
self.label_short.set_markup("<span weight=\"bold\">Size: </span>" + size)
|
||||
self.label_short.set_property("xalign", 0)
|
||||
|
||||
self.vbox.add(self.label_short)
|
||||
|
||||
##################################### RECIPE ROW + COL #########################################
|
||||
|
||||
self.label_short = gtk.Label()
|
||||
self.label_short.set_size_request(300,-1)
|
||||
self.label_short.set_selectable(True)
|
||||
self.label_short.set_line_wrap(True)
|
||||
self.label_short.set_markup("<span weight=\"bold\">Recipe: </span>" + recipe)
|
||||
self.label_short.set_property("xalign", 0)
|
||||
|
||||
self.vbox.add(self.label_short)
|
||||
|
||||
##################################### BINB ROW + COL #######################################
|
||||
|
||||
if binb != '':
|
||||
self.label_short = gtk.Label()
|
||||
self.label_short.set_selectable(True)
|
||||
self.label_short.set_line_wrap(True)
|
||||
self.label_short.set_markup("<span weight=\"bold\">Brought in by: </span>")
|
||||
self.label_short.set_property("xalign", 0)
|
||||
|
||||
self.label_info = gtk.Label()
|
||||
self.label_info.set_size_request(300,-1)
|
||||
self.label_info.set_selectable(True)
|
||||
self.label_info.set_line_wrap(True)
|
||||
self.label_info.set_markup(binb)
|
||||
self.label_info.set_property("xalign", 0)
|
||||
|
||||
self.vbox.add(self.label_short)
|
||||
self.vbox.add(self.label_info)
|
||||
|
||||
#################################### FILES BROUGHT BY PACKAGES ###################################
|
||||
|
||||
if file_list != '':
|
||||
|
||||
self.textWindow = gtk.ScrolledWindow()
|
||||
self.textWindow.set_shadow_type(gtk.SHADOW_IN)
|
||||
self.textWindow.set_policy(gtk.POLICY_AUTOMATIC, gtk.POLICY_AUTOMATIC)
|
||||
self.textWindow.set_size_request(100, 170)
|
||||
|
||||
sstatemirrors_store = gtk.ListStore(str)
|
||||
|
||||
self.sstatemirrors_tv = gtk.TreeView()
|
||||
self.sstatemirrors_tv.set_rules_hint(True)
|
||||
self.sstatemirrors_tv.set_headers_visible(True)
|
||||
self.textWindow.add(self.sstatemirrors_tv)
|
||||
|
||||
self.cell1 = gtk.CellRendererText()
|
||||
col1 = gtk.TreeViewColumn('Package files', self.cell1)
|
||||
col1.set_cell_data_func(self.cell1, self.regex_field)
|
||||
self.sstatemirrors_tv.append_column(col1)
|
||||
|
||||
for items in file_list.split(']'):
|
||||
if len(items) > 1:
|
||||
paths_temp = items.split(":")[0]
|
||||
paths_binb.append(paths_temp.strip(" ,'"))
|
||||
files_temp = items.split(":")[1]
|
||||
files_binb.append(files_temp.strip(" ['"))
|
||||
|
||||
unsorted_list = []
|
||||
|
||||
for items in range(len(paths_binb)):
|
||||
if len(files_binb[items]) > 1:
|
||||
for aduse in (files_binb[items].split(",")):
|
||||
unsorted_list.append(paths_binb[items].split(name)[len(paths_binb[items].split(name))-1] + '/' + aduse.strip(" '"))
|
||||
|
||||
|
||||
unsorted_list.sort()
|
||||
for items in unsorted_list:
|
||||
temp = items
|
||||
while len(items) > 35:
|
||||
items = items[:len(items)/2] + "" + items[len(items)/2+1:]
|
||||
if len(items) == 35:
|
||||
items = items[:len(items)/2] + "..." + items[len(items)/2+3:]
|
||||
self.tooltip_items[items] = temp
|
||||
|
||||
sstatemirrors_store.append([str(items)])
|
||||
|
||||
|
||||
self.sstatemirrors_tv.set_model(sstatemirrors_store)
|
||||
|
||||
tips = gtk.Tooltips()
|
||||
tips.set_tip(self.sstatemirrors_tv, "")
|
||||
self.sstatemirrors_tv.connect("motion-notify-event", self.treeViewTooltip, tips, 0)
|
||||
self.sstatemirrors_tv.set_events(gtk.gdk.POINTER_MOTION_MASK)
|
||||
|
||||
self.vbox.add(self.textWindow)
|
||||
|
||||
self.vbox.show_all()
|
||||
|
||||
|
||||
def regex_field(self, column, cell, model, iter):
|
||||
cell.set_property('text', model.get_value(iter, 0))
|
||||
return
|
||||
|
||||
|
||||
def create_recipe_visual_elements(self):
|
||||
|
||||
summary = self.properties['summary']
|
||||
name = self.properties['name']
|
||||
version = self.properties['version']
|
||||
revision = self.properties['revision']
|
||||
binb = self.properties['binb']
|
||||
group = self.properties['group']
|
||||
license = self.properties['license']
|
||||
homepage = self.properties['homepage']
|
||||
bugtracker = self.properties['bugtracker']
|
||||
description = self.properties['description']
|
||||
|
||||
self.set_resizable(False)
|
||||
|
||||
#cleaning out the version variable and also the summary
|
||||
version = version.split(":")[1]
|
||||
if len(version) > 30:
|
||||
version = version.split("+")[0]
|
||||
else:
|
||||
version = version.split("-")[0]
|
||||
license = license.replace("&" , "and")
|
||||
if (homepage == ''):
|
||||
homepage = 'unknown'
|
||||
if (bugtracker == ''):
|
||||
bugtracker = 'unknown'
|
||||
summary = summary.split("+")[0]
|
||||
|
||||
#calculating the rows needed for the table
|
||||
binb_items_count = len(binb.split(','))
|
||||
binb_items = binb.split(',')
|
||||
|
||||
vbox = gtk.VBox(True,spacing = 0)
|
||||
|
||||
######################################## SUMMARY LABEL #########################################
|
||||
|
||||
if summary != '':
|
||||
self.label_short = gtk.Label()
|
||||
self.label_short.set_size_request(300,-1)
|
||||
self.label_short.set_selectable(True)
|
||||
self.label_short.set_line_wrap(True)
|
||||
self.label_short.set_markup("<b>" + summary + "</b>")
|
||||
self.label_short.set_property("xalign", 0)
|
||||
|
||||
self.vbox.pack_start(self.label_short, expand=False, fill=False, padding=0)
|
||||
|
||||
########################################## NAME ROW + COL #######################################
|
||||
|
||||
self.label_short = gtk.Label()
|
||||
self.label_short.set_size_request(300,-1)
|
||||
self.label_short.set_selectable(True)
|
||||
self.label_short.set_line_wrap(True)
|
||||
self.label_short.set_markup("<span weight=\"bold\">Name: </span>" + name)
|
||||
self.label_short.set_property("xalign", 0)
|
||||
|
||||
self.vbox.add(self.label_short)
|
||||
|
||||
####################################### VERSION ROW + COL ####################################
|
||||
|
||||
self.label_short = gtk.Label()
|
||||
self.label_short.set_size_request(300,-1)
|
||||
self.label_short.set_selectable(True)
|
||||
self.label_short.set_line_wrap(True)
|
||||
self.label_short.set_markup("<span weight=\"bold\">Version: </span>" + version)
|
||||
self.label_short.set_property("xalign", 0)
|
||||
|
||||
self.vbox.add(self.label_short)
|
||||
|
||||
##################################### REVISION ROW + COL #####################################
|
||||
|
||||
self.label_short = gtk.Label()
|
||||
self.label_short.set_size_request(300,-1)
|
||||
self.label_short.set_line_wrap(True)
|
||||
self.label_short.set_selectable(True)
|
||||
self.label_short.set_markup("<span weight=\"bold\">Revision: </span>" + revision)
|
||||
self.label_short.set_property("xalign", 0)
|
||||
|
||||
self.vbox.add(self.label_short)
|
||||
|
||||
################################## GROUP ROW + COL ############################################
|
||||
|
||||
self.label_short = gtk.Label()
|
||||
self.label_short.set_size_request(300,-1)
|
||||
self.label_short.set_selectable(True)
|
||||
self.label_short.set_line_wrap(True)
|
||||
self.label_short.set_markup("<span weight=\"bold\">Group: </span>" + group)
|
||||
self.label_short.set_property("xalign", 0)
|
||||
|
||||
self.vbox.add(self.label_short)
|
||||
|
||||
################################# HOMEPAGE ROW + COL ############################################
|
||||
|
||||
if homepage != 'unknown':
|
||||
self.label_info = gtk.Label()
|
||||
self.label_info.set_selectable(True)
|
||||
self.label_info.set_line_wrap(True)
|
||||
if len(homepage) > 35:
|
||||
self.label_info.set_markup("<a href=\"" + homepage + "\">" + homepage[0:35] + "..." + "</a>")
|
||||
else:
|
||||
self.label_info.set_markup("<a href=\"" + homepage + "\">" + homepage[0:60] + "</a>")
|
||||
|
||||
self.label_info.set_property("xalign", 0)
|
||||
|
||||
self.label_short = gtk.Label()
|
||||
self.label_short.set_size_request(300,-1)
|
||||
self.label_short.set_selectable(True)
|
||||
self.label_short.set_line_wrap(True)
|
||||
self.label_short.set_markup("<b>Homepage: </b>")
|
||||
self.label_short.set_property("xalign", 0)
|
||||
|
||||
self.vbox.add(self.label_short)
|
||||
self.vbox.add(self.label_info)
|
||||
|
||||
################################# BUGTRACKER ROW + COL ###########################################
|
||||
|
||||
if bugtracker != 'unknown':
|
||||
self.label_info = gtk.Label()
|
||||
self.label_info.set_selectable(True)
|
||||
self.label_info.set_line_wrap(True)
|
||||
if len(bugtracker) > 35:
|
||||
self.label_info.set_markup("<a href=\"" + bugtracker + "\">" + bugtracker[0:35] + "..." + "</a>")
|
||||
else:
|
||||
self.label_info.set_markup("<a href=\"" + bugtracker + "\">" + bugtracker[0:60] + "</a>")
|
||||
self.label_info.set_property("xalign", 0)
|
||||
|
||||
self.label_short = gtk.Label()
|
||||
self.label_short.set_size_request(300,-1)
|
||||
self.label_short.set_selectable(True)
|
||||
self.label_short.set_line_wrap(True)
|
||||
self.label_short.set_markup("<b>Bugtracker: </b>")
|
||||
self.label_short.set_property("xalign", 0)
|
||||
|
||||
self.vbox.add(self.label_short)
|
||||
self.vbox.add(self.label_info)
|
||||
|
||||
################################# LICENSE ROW + COL ############################################
|
||||
|
||||
self.label_info = gtk.Label()
|
||||
self.label_info.set_size_request(300,-1)
|
||||
self.label_info.set_selectable(True)
|
||||
self.label_info.set_line_wrap(True)
|
||||
self.label_info.set_markup(license)
|
||||
self.label_info.set_property("xalign", 0)
|
||||
|
||||
self.label_short = gtk.Label()
|
||||
self.label_short.set_selectable(True)
|
||||
self.label_short.set_line_wrap(True)
|
||||
self.label_short.set_markup("<span weight=\"bold\">License: </span>")
|
||||
self.label_short.set_property("xalign", 0)
|
||||
|
||||
self.vbox.add(self.label_short)
|
||||
self.vbox.add(self.label_info)
|
||||
|
||||
################################### BINB ROW+COL #############################################
|
||||
|
||||
if binb != '':
|
||||
self.label_short = gtk.Label()
|
||||
self.label_short.set_selectable(True)
|
||||
self.label_short.set_line_wrap(True)
|
||||
self.label_short.set_markup("<span weight=\"bold\">Brought in by: </span>")
|
||||
self.label_short.set_property("xalign", 0)
|
||||
|
||||
self.label_info = gtk.Label()
|
||||
self.label_info.set_size_request(300,-1)
|
||||
self.label_info.set_selectable(True)
|
||||
self.label_info.set_markup(binb)
|
||||
self.label_info.set_property("xalign", 0)
|
||||
self.label_info.set_line_wrap(True)
|
||||
|
||||
self.vbox.add(self.label_short)
|
||||
self.vbox.add(self.label_info)
|
||||
|
||||
################################ DESCRIPTION TAG ROW #################################################
|
||||
|
||||
self.label_short = gtk.Label()
|
||||
self.label_short.set_line_wrap(True)
|
||||
self.label_short.set_markup("<span weight=\"bold\">Description </span>")
|
||||
self.label_short.set_property("xalign", 0)
|
||||
self.vbox.add(self.label_short)
|
||||
|
||||
################################ DESCRIPTION INFORMATION ROW ##########################################
|
||||
|
||||
hbox = gtk.HBox(True,spacing = 0)
|
||||
|
||||
self.label_short = gtk.Label()
|
||||
self.label_short.set_size_request(300,-1)
|
||||
self.label_short.set_selectable(True)
|
||||
self.label_short.set_text(description)
|
||||
self.label_short.set_line_wrap(True)
|
||||
self.label_short.set_property("xalign", 0)
|
||||
self.vbox.add(self.label_short)
|
||||
|
||||
self.vbox.show_all()
|
||||
@@ -120,3 +120,97 @@ class SettingsUIHelper():

        hbox.show_all()
        return hbox, entry

    def gen_mirror_entry_widget(self, content, index, match_content=""):
        hbox = gtk.HBox(False)
        entry = gtk.Entry()
        content = content[:-2]
        entry.set_text(content)
        entry.set_size_request(350,30)

        entry_match = gtk.Entry()
        entry_match.set_text(match_content)
        entry_match.set_size_request(100,30)

        table = gtk.Table(2, 5, False)
        table.set_row_spacings(12)
        table.set_col_spacings(6)
        hbox.pack_start(table, expand=True, fill=True)

        label_configuration = gtk.Label("Configuration")
        label_configuration.set_alignment(0.0,0.5)
        label_mirror_url = gtk.Label("Mirror URL")
        label_mirror_url.set_alignment(0.0,0.5)
        label_match = gtk.Label("Match")
        label_match.set_alignment(0.0,0.5)
        label_replace_with = gtk.Label("Replace with")
        label_replace_with.set_alignment(0.0,0.5)

        combo = gtk.combo_box_new_text()
        combo.append_text("Standard")
        combo.append_text("Custom")
        if match_content == "":
            combo.set_active(0)
        else:
            combo.set_active(1)
        combo.connect("changed", self.on_combo_changed, index)
        combo.set_size_request(100,30)

        delete_button = HobAltButton("Delete")
        delete_button.connect("clicked", self.delete_cb, index, entry)
        if content == "" and index == 0 and len(self.sstatemirrors_list) == 1:
            delete_button.set_sensitive(False)
        delete_button.set_size_request(100, 30)

        entry_match.connect("changed", self.insert_entry_match_cb, index)
        entry.connect("changed", self.insert_entry_cb, index, delete_button)

        if match_content == "":
            table.attach(label_configuration, 1, 2, 0, 1, xoptions=gtk.SHRINK|gtk.FILL)
            table.attach(label_mirror_url, 2, 3, 0, 1, xoptions=gtk.SHRINK|gtk.FILL)
            table.attach(combo, 1, 2, 1, 2, xoptions=gtk.SHRINK)
            table.attach(entry, 2, 3, 1, 2, xoptions=gtk.SHRINK)
            table.attach(delete_button, 3, 4, 1, 2, xoptions=gtk.SHRINK)
        else:
            table.attach(label_configuration, 1, 2, 0, 1, xoptions=gtk.SHRINK|gtk.FILL)
            table.attach(label_match, 2, 3, 0, 1, xoptions=gtk.SHRINK|gtk.FILL)
            table.attach(label_replace_with, 3, 4, 0, 1, xoptions=gtk.SHRINK|gtk.FILL)
            table.attach(combo, 1, 2, 1, 2, xoptions=gtk.SHRINK)
            table.attach(entry_match, 2, 3, 1, 2, xoptions=gtk.SHRINK)
            table.attach(entry, 3, 4, 1, 2, xoptions=gtk.SHRINK)
            table.attach(delete_button, 4, 5, 1, 2, xoptions=gtk.SHRINK)

        hbox.show_all()
        return hbox

    def insert_entry_match_cb(self, entry_match, index):
        self.sstatemirrors_list[index][2] = entry_match.get_text()

    def insert_entry_cb(self, entry, index, button):
        self.sstatemirrors_list[index][1] = entry.get_text()
        if entry.get_text() == "" and index == 0:
            button.set_sensitive(False)
        else:
            button.set_sensitive(True)

    def on_combo_changed(self, combo, index):
        if combo.get_active_text() == "Standard":
            self.sstatemirrors_list[index][0] = 0
            self.sstatemirrors_list[index][2] = "file://(.*)"
        else:
            self.sstatemirrors_list[index][0] = 1
        self.refresh_shared_state_page()

    def delete_cb(self, button, index, entry):
        if index == 0 and len(self.sstatemirrors_list)==1:
            entry.set_text("")
        else:
            self.sstatemirrors_list.pop(index)
        self.refresh_shared_state_page()

    def add_mirror(self, button):
        tooltip = "Select the pre-built mirror that will speed your build"
        index = len(self.sstatemirrors_list)
        sm_list = [0, "", "file://(.*)"]
        self.sstatemirrors_list.append(sm_list)
        self.refresh_shared_state_page()

@@ -50,13 +50,6 @@ class SimpleSettingsDialog (CrumbsDialog, SettingsUIHelper):
|
||||
TEST_NETWORK_FAILED,
|
||||
TEST_NETWORK_CANCELED) = range(6)
|
||||
|
||||
TARGETS = [
|
||||
("MY_TREE_MODEL_ROW", gtk.TARGET_SAME_WIDGET, 0),
|
||||
("text/plain", 0, 1),
|
||||
("TEXT", 0, 2),
|
||||
("STRING", 0, 3),
|
||||
]
|
||||
|
||||
    def __init__(self, title, configuration, all_image_types,
                 all_package_formats, all_distros, all_sdk_machines,
                 max_threads, parent, flags, handler, buttons=None):
@@ -91,8 +84,6 @@ class SimpleSettingsDialog (CrumbsDialog, SettingsUIHelper):
        self.proxy_settings_changed = False
        self.handler = handler
        self.proxy_test_ran = False
        self.selected_mirror_row = 0
        self.new_mirror = False

        # create visual elements on the dialog
        self.create_visual_elements()
@@ -161,13 +152,13 @@ class SimpleSettingsDialog (CrumbsDialog, SettingsUIHelper):
        self.ftp_proxy_port.set_sensitive(self.configuration.enable_proxy and (not self.configuration.same_proxy))
        self.ftp_proxy_details.set_sensitive(self.configuration.enable_proxy and (not self.configuration.same_proxy))

        self.socks_proxy.set_text(self.configuration.combine_host_only("socks"))
        self.socks_proxy.set_editable(self.configuration.enable_proxy and (not self.configuration.same_proxy))
        self.socks_proxy.set_sensitive(self.configuration.enable_proxy and (not self.configuration.same_proxy))
        self.socks_proxy_port.set_text(self.configuration.combine_port_only("socks"))
        self.socks_proxy_port.set_editable(self.configuration.enable_proxy and (not self.configuration.same_proxy))
        self.socks_proxy_port.set_sensitive(self.configuration.enable_proxy and (not self.configuration.same_proxy))
        self.socks_proxy_details.set_sensitive(self.configuration.enable_proxy and (not self.configuration.same_proxy))
        self.git_proxy.set_text(self.configuration.combine_host_only("git"))
        self.git_proxy.set_editable(self.configuration.enable_proxy and (not self.configuration.same_proxy))
        self.git_proxy.set_sensitive(self.configuration.enable_proxy and (not self.configuration.same_proxy))
        self.git_proxy_port.set_text(self.configuration.combine_port_only("git"))
        self.git_proxy_port.set_editable(self.configuration.enable_proxy and (not self.configuration.same_proxy))
        self.git_proxy_port.set_sensitive(self.configuration.enable_proxy and (not self.configuration.same_proxy))
        self.git_proxy_details.set_sensitive(self.configuration.enable_proxy and (not self.configuration.same_proxy))

        self.cvs_proxy.set_text(self.configuration.combine_host_only("cvs"))
        self.cvs_proxy.set_editable(self.configuration.enable_proxy and (not self.configuration.same_proxy))
@@ -201,12 +192,12 @@ class SimpleSettingsDialog (CrumbsDialog, SettingsUIHelper):
        if self.configuration.same_proxy:
            self.configuration.split_proxy("https", self.http_proxy.get_text() + ":" + self.http_proxy_port.get_text())
            self.configuration.split_proxy("ftp", self.http_proxy.get_text() + ":" + self.http_proxy_port.get_text())
            self.configuration.split_proxy("socks", self.http_proxy.get_text() + ":" + self.http_proxy_port.get_text())
            self.configuration.split_proxy("git", self.http_proxy.get_text() + ":" + self.http_proxy_port.get_text())
            self.configuration.split_proxy("cvs", self.http_proxy.get_text() + ":" + self.http_proxy_port.get_text())
        else:
            self.configuration.split_proxy("https", self.https_proxy.get_text() + ":" + self.https_proxy_port.get_text())
            self.configuration.split_proxy("ftp", self.ftp_proxy.get_text() + ":" + self.ftp_proxy_port.get_text())
            self.configuration.split_proxy("socks", self.socks_proxy.get_text() + ":" + self.socks_proxy_port.get_text())
            self.configuration.split_proxy("git", self.git_proxy.get_text() + ":" + self.git_proxy_port.get_text())
            self.configuration.split_proxy("cvs", self.cvs_proxy.get_text() + ":" + self.cvs_proxy_port.get_text())

    def response_cb(self, dialog, response_id):
@@ -228,7 +219,7 @@ class SimpleSettingsDialog (CrumbsDialog, SettingsUIHelper):
            self.configuration.sstatedir = self.sstatedir_text.get_text()
        self.configuration.sstatemirror = ""
        for mirror in self.sstatemirrors_list:
            if mirror[1] != "" and mirror[2].startswith("file://"):
            if mirror[1] != "":
                if mirror[1].endswith("\\1"):
                    smirror = mirror[2] + " " + mirror[1] + " \\n "
                else:
@@ -260,7 +251,7 @@ class SimpleSettingsDialog (CrumbsDialog, SettingsUIHelper):
        tooltip = "Sets the number of threads that BitBake tasks can simultaneously run. See the <a href=\""
        tooltip += "http://www.yoctoproject.org/docs/current/poky-ref-manual/"
        tooltip += "poky-ref-manual.html#var-BB_NUMBER_THREADS\">Poky reference manual</a> for information"
        bbthread_widget, self.bb_spinner = self.gen_spinner_widget(self.configuration.bbthread, 1, self.max_threads,"<b>BitBake prallalel threads</b>" + "*" + tooltip)
        bbthread_widget, self.bb_spinner = self.gen_spinner_widget(self.configuration.bbthread, 1, self.max_threads, tooltip)
        sub_vbox.pack_start(label, expand=False, fill=False)
        sub_vbox.pack_start(bbthread_widget, expand=False, fill=False)

@@ -270,7 +261,7 @@ class SimpleSettingsDialog (CrumbsDialog, SettingsUIHelper):
        tooltip = "Sets the maximum number of threads the host can use during the build. See the <a href=\""
        tooltip += "http://www.yoctoproject.org/docs/current/poky-ref-manual/"
        tooltip += "poky-ref-manual.html#var-PARALLEL_MAKE\">Poky reference manual</a> for information"
        pmake_widget, self.pmake_spinner = self.gen_spinner_widget(self.configuration.pmake, 1, self.max_threads,"<b>Make parallel threads</b>" + "*" + tooltip)
        pmake_widget, self.pmake_spinner = self.gen_spinner_widget(self.configuration.pmake, 1, self.max_threads, tooltip)
        sub_vbox.pack_start(label, expand=False, fill=False)
        sub_vbox.pack_start(pmake_widget, expand=False, fill=False)
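The two spinners above control the BB_NUMBER_THREADS and PARALLEL_MAKE values described in their tooltips. As a rough sketch of where those values end up (not part of this change; it simply strings together the HobHandler setters that appear later in this comparison, and assumes a handler already connected to the BitBake server):

    # Illustrative only: route the spinner values to BitBake through HobHandler.
    def apply_thread_settings(handler, bb_threads, make_threads):
        handler.set_bbthreads(str(bb_threads))   # becomes BB_NUMBER_THREADS
        handler.set_pmake(str(make_threads))     # becomes PARALLEL_MAKE = "-j <n>"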
@@ -279,7 +270,7 @@ class SimpleSettingsDialog (CrumbsDialog, SettingsUIHelper):
        advanced_vbox.pack_start(sub_vbox, expand=False, fill=False)
        label = self.gen_label_widget("Downloads directory")
        tooltip = "Select a folder that caches the upstream project source code"
        dldir_widget, self.dldir_text = self.gen_entry_widget(self.configuration.dldir, self,"<b>Downloaded source code</b>" + "*" + tooltip)
        dldir_widget, self.dldir_text = self.gen_entry_widget(self.configuration.dldir, self, tooltip)
        sub_vbox.pack_start(label, expand=False, fill=False)
        sub_vbox.pack_start(dldir_widget, expand=False, fill=False)

@@ -293,10 +284,10 @@ class SimpleSettingsDialog (CrumbsDialog, SettingsUIHelper):
        advanced_vbox.pack_start(sub_vbox, expand=False, fill=False, padding=24)
        content = "<span>Shared state directory</span>"
        tooltip = "Select a folder that caches your prebuilt results"
        label = self.gen_label_info_widget(content,"<b>Shared state directory</b>" + "*" + tooltip)
        label = self.gen_label_info_widget(content, tooltip)
        sstatedir_widget, self.sstatedir_text = self.gen_entry_widget(self.configuration.sstatedir, self)
        sub_vbox.pack_start(label, expand=False, fill=False)
        sub_vbox.pack_start(sstatedir_widget, expand=False, fill=False, padding=6)
        sub_vbox.pack_start(sstatedir_widget, expand=False, fill=False, padding=12)

        content = "<span weight=\"bold\">Shared state mirrors</span>"
        tooltip = "URLs pointing to pre-built mirrors that will speed your build. "
@@ -305,18 +296,22 @@ class SimpleSettingsDialog (CrumbsDialog, SettingsUIHelper):
        tooltip += "For more information on shared state mirrors, check the <a href=\""
        tooltip += "http://www.yoctoproject.org/docs/current/poky-ref-manual/"
        tooltip += "poky-ref-manual.html#shared-state\">Yocto Project Reference Manual</a>."
        table = self.gen_label_info_widget(content,"<b>Shared state mirrors</b>" + "*" + tooltip)
        advanced_vbox.pack_start(table, expand=False, fill=False, padding=6)
        table = self.gen_label_info_widget(content, tooltip)
        advanced_vbox.pack_start(table, expand=False, fill=False)

        sub_vbox = gtk.VBox(False)
        advanced_vbox.pack_start(sub_vbox, gtk.TRUE, gtk.TRUE, 0)
        scroll = gtk.ScrolledWindow()
        scroll.set_policy(gtk.POLICY_NEVER, gtk.POLICY_AUTOMATIC)
        scroll.add_with_viewport(sub_vbox)
        scroll.connect('size-allocate', self.scroll_changed)
        advanced_vbox.pack_start(scroll, gtk.TRUE, gtk.TRUE, 0)
        searched_string = "file://"

        if self.sstatemirrors_changed == 0:
            self.sstatemirrors_changed = 1
            sstatemirrors = self.configuration.sstatemirror
            if sstatemirrors == "":
                sm_list = ["Standard", "", "file://(.*)"]
                sm_list = [ 0, "", "file://(.*)"]
                self.sstatemirrors_list.append(sm_list)
            else:
                while sstatemirrors.find(searched_string) != -1:
@@ -328,208 +323,31 @@ class SimpleSettingsDialog (CrumbsDialog, SettingsUIHelper):
                    sstatemirrors = sstatemirrors[1:]

                    sstatemirror_fields = [x for x in sstatemirror.split(' ') if x.strip()]
                    if len(sstatemirror_fields):
                        if sstatemirror_fields[0] == "file://(.*)":
                            sm_list = ["Standard", sstatemirror_fields[1], "file://(.*)"]
                        else:
                            sm_list = ["Custom", sstatemirror_fields[1], sstatemirror_fields[0]]
                        self.sstatemirrors_list.append(sm_list)
                        if sstatemirror_fields[0] == "file://(.*)":
                            sm_list = [ 0, sstatemirror_fields[1], "file://(.*)"]
                        else:
                            sm_list = [ 1, sstatemirror_fields[1], sstatemirror_fields[0]]
                        self.sstatemirrors_list.append(sm_list)

        sstatemirrors_widget, sstatemirrors_store = self.gen_shared_sstate_widget(self.sstatemirrors_list, self)
        sub_vbox.pack_start(sstatemirrors_widget, expand=True, fill=True)
        index = 0
        for mirror in self.sstatemirrors_list:
            if mirror[0] == 0:
                sstatemirror_widget = self.gen_mirror_entry_widget(mirror[1], index)
            else:
                sstatemirror_widget = self.gen_mirror_entry_widget(mirror[1], index, mirror[2])
            sub_vbox.pack_start(sstatemirror_widget, expand=False, fill=False, padding=9)
            index += 1

        table = gtk.Table(1, 10, False)
        table = gtk.Table(1, 1, False)
        table.set_col_spacings(6)
        add_mirror_button = HobAltButton("Add mirror")
        add_mirror_button = HobAltButton("Add another mirror")
        add_mirror_button.connect("clicked", self.add_mirror)
        add_mirror_button.set_size_request(120,30)
        add_mirror_button.set_size_request(150,30)
        table.attach(add_mirror_button, 1, 2, 0, 1, xoptions=gtk.SHRINK)

        self.delete_button = HobAltButton("Delete mirror")
        self.delete_button.connect("clicked", self.delete_cb)
        self.delete_button.set_size_request(120, 30)
        table.attach(self.delete_button, 3, 4, 0, 1, xoptions=gtk.SHRINK)

        advanced_vbox.pack_start(table, expand=False, fill=False, padding=6)
        advanced_vbox.pack_start(table, expand=False, fill=False, padding=9)

        return advanced_vbox
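For reference, each row of the mirror table corresponds to one "&lt;regex&gt; &lt;URL&gt;" pair stored in SSTATE_MIRRORS; a regex of "file://(.*)" is shown as "Standard" and anything else as "Custom". A simplified, stand-alone sketch of that mapping (hypothetical helper, condensed from the parsing loop above):

    # Sketch: split an SSTATE_MIRRORS-style string into [type, url, regex] rows
    # (0 = Standard, 1 = Custom), mirroring how sstatemirrors_list is built above.
    def mirrors_to_rows(sstatemirrors):
        rows = []
        for entry in sstatemirrors.split("\\n"):
            fields = [x for x in entry.split(' ') if x.strip()]
            if len(fields) >= 2:
                regex, url = fields[0], fields[1]
                rows.append([0 if regex == "file://(.*)" else 1, url, regex])
        return rows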
    def gen_shared_sstate_widget(self, sstatemirrors_list, window):
        hbox = gtk.HBox(False)

        sstatemirrors_store = gtk.ListStore(str, str, str)
        for sstatemirror in sstatemirrors_list:
            sstatemirrors_store.append(sstatemirror)

        self.sstatemirrors_tv = gtk.TreeView()
        self.sstatemirrors_tv.set_rules_hint(True)
        self.sstatemirrors_tv.set_headers_visible(True)
        tree_selection = self.sstatemirrors_tv.get_selection()
        tree_selection.set_mode(gtk.SELECTION_SINGLE)

        # Allow enable drag and drop of rows including row move
        self.sstatemirrors_tv.enable_model_drag_source( gtk.gdk.BUTTON1_MASK,
            self.TARGETS,
            gtk.gdk.ACTION_DEFAULT|
            gtk.gdk.ACTION_MOVE)
        self.sstatemirrors_tv.enable_model_drag_dest(self.TARGETS,
            gtk.gdk.ACTION_DEFAULT)
        self.sstatemirrors_tv.connect("drag_data_get", self.drag_data_get_cb)
        self.sstatemirrors_tv.connect("drag_data_received", self.drag_data_received_cb)


        self.scroll = gtk.ScrolledWindow()
        self.scroll.set_policy(gtk.POLICY_NEVER, gtk.POLICY_AUTOMATIC)
        self.scroll.set_shadow_type(gtk.SHADOW_IN)
        self.scroll.connect('size-allocate', self.scroll_changed)
        self.scroll.add(self.sstatemirrors_tv)

        #list store for cell renderer
        m = gtk.ListStore(gobject.TYPE_STRING)
        m.append(["Standard"])
        m.append(["Custom"])

        cell0 = gtk.CellRendererCombo()
        cell0.set_property("model",m)
        cell0.set_property("text-column", 0)
        cell0.set_property("editable", True)
        cell0.set_property("has-entry", False)
        col0 = gtk.TreeViewColumn("Configuration")
        col0.pack_start(cell0, False)
        col0.add_attribute(cell0, "text", 0)
        col0.set_cell_data_func(cell0, self.configuration_field)
        self.sstatemirrors_tv.append_column(col0)

        cell0.connect("edited", self.combo_changed, sstatemirrors_store)

        self.cell1 = gtk.CellRendererText()
        self.cell1.set_padding(5,2)
        col1 = gtk.TreeViewColumn('Regex', self.cell1)
        col1.set_cell_data_func(self.cell1, self.regex_field)
        self.sstatemirrors_tv.append_column(col1)

        self.cell1.connect("edited", self.regex_changed, sstatemirrors_store)

        cell2 = gtk.CellRendererText()
        cell2.set_padding(5,2)
        cell2.set_property("editable", True)
        col2 = gtk.TreeViewColumn('URL', cell2)
        col2.set_cell_data_func(cell2, self.url_field)
        self.sstatemirrors_tv.append_column(col2)

        cell2.connect("edited", self.url_changed, sstatemirrors_store)

        self.sstatemirrors_tv.set_model(sstatemirrors_store)
        self.sstatemirrors_tv.set_cursor(self.selected_mirror_row)
        hbox.pack_start(self.scroll, expand=True, fill=True)
        hbox.show_all()

        return hbox, sstatemirrors_store

    def drag_data_get_cb(self, treeview, context, selection, target_id, etime):
        treeselection = treeview.get_selection()
        model, iter = treeselection.get_selected()
        data = model.get_string_from_iter(iter)
        selection.set(selection.target, 8, data)

    def drag_data_received_cb(self, treeview, context, x, y, selection, info, etime):
        model = treeview.get_model()
        data = []
        tree_iter = model.get_iter_from_string(selection.data)
        data.append(model.get_value(tree_iter, 0))
        data.append(model.get_value(tree_iter, 1))
        data.append(model.get_value(tree_iter, 2))

        drop_info = treeview.get_dest_row_at_pos(x, y)
        if drop_info:
            path, position = drop_info
            iter = model.get_iter(path)
            if (position == gtk.TREE_VIEW_DROP_BEFORE or position == gtk.TREE_VIEW_DROP_INTO_OR_BEFORE):
                model.insert_before(iter, data)
            else:
                model.insert_after(iter, data)
        else:
            model.append(data)
        if context.action == gtk.gdk.ACTION_MOVE:
            context.finish(True, True, etime)
        return

    def delete_cb(self, button):
        selection = self.sstatemirrors_tv.get_selection()
        tree_model, tree_iter = selection.get_selected()
        index = int(tree_model.get_string_from_iter(tree_iter))
        if index == 0:
            self.selected_mirror_row = index
        else:
            self.selected_mirror_row = index - 1
        self.sstatemirrors_list.pop(index)
        self.refresh_shared_state_page()
        if not self.sstatemirrors_list:
            self.delete_button.set_sensitive(False)

    def add_mirror(self, button):
        self.new_mirror = True
        tooltip = "Select the pre-built mirror that will speed your build"
        index = len(self.sstatemirrors_list)
        self.selected_mirror_row = index
        sm_list = ["Standard", "", "file://(.*)"]
        self.sstatemirrors_list.append(sm_list)
        self.refresh_shared_state_page()

    def scroll_changed(self, widget, event, data=None):
        if self.new_mirror == True:
            adj = widget.get_vadjustment()
            adj.set_value(adj.upper - adj.page_size)
            self.new_mirror = False

    def combo_changed(self, widget, path, text, model):
        model[path][0] = text
        selection = self.sstatemirrors_tv.get_selection()
        tree_model, tree_iter = selection.get_selected()
        index = int(tree_model.get_string_from_iter(tree_iter))
        self.sstatemirrors_list[index][0] = text

    def regex_changed(self, cell, path, new_text, user_data):
        user_data[path][2] = new_text
        selection = self.sstatemirrors_tv.get_selection()
        tree_model, tree_iter = selection.get_selected()
        index = int(tree_model.get_string_from_iter(tree_iter))
        self.sstatemirrors_list[index][2] = new_text
        return

    def url_changed(self, cell, path, new_text, user_data):
        if new_text!="Enter the mirror URL" and new_text!="Match regex and replace it with this URL":
            user_data[path][1] = new_text
            selection = self.sstatemirrors_tv.get_selection()
            tree_model, tree_iter = selection.get_selected()
            index = int(tree_model.get_string_from_iter(tree_iter))
            self.sstatemirrors_list[index][1] = new_text
        return

    def configuration_field(self, column, cell, model, iter):
        cell.set_property('text', model.get_value(iter, 0))
        if model.get_value(iter, 0) == "Standard":
            self.cell1.set_property("sensitive", False)
            self.cell1.set_property("editable", False)
        else:
            self.cell1.set_property("sensitive", True)
            self.cell1.set_property("editable", True)
        return

    def regex_field(self, column, cell, model, iter):
        cell.set_property('text', model.get_value(iter, 2))
        return

    def url_field(self, column, cell, model, iter):
        text = model.get_value(iter, 1)
        if text == "":
            if model.get_value(iter, 0) == "Standard":
                text = "Enter the mirror URL"
            else:
                text = "Match regex and replace it with this URL"
        cell.set_property('text', text)
        return

    def refresh_shared_state_page(self):
        page_num = self.nb.get_current_page()
        self.nb.remove_page(page_num);
@@ -555,13 +373,13 @@ class SimpleSettingsDialog (CrumbsDialog, SettingsUIHelper):
            self.handler.set_http_proxy(self.configuration.combine_proxy("http"))
            self.handler.set_https_proxy(self.configuration.combine_proxy("https"))
            self.handler.set_ftp_proxy(self.configuration.combine_proxy("ftp"))
            self.handler.set_socks_proxy(self.configuration.combine_proxy("socks"))
            self.handler.set_git_proxy(self.configuration.combine_host_only("git"), self.configuration.combine_port_only("git"))
            self.handler.set_cvs_proxy(self.configuration.combine_host_only("cvs"), self.configuration.combine_port_only("cvs"))
        elif self.configuration.enable_proxy == False:
            self.handler.set_http_proxy("")
            self.handler.set_https_proxy("")
            self.handler.set_ftp_proxy("")
            self.handler.set_socks_proxy("")
            self.handler.set_git_proxy("", "")
            self.handler.set_cvs_proxy("", "")
        self.proxy_test_ran = True
        self.proxy_test_running = True
@@ -627,7 +445,7 @@ class SimpleSettingsDialog (CrumbsDialog, SettingsUIHelper):
        advanced_vbox.pack_start(sub_vbox, expand=False, fill=False)
        label = self.gen_label_widget("<span weight=\"bold\">Set the proxies used when fetching source code</span>")
        tooltip = "Set the proxies used when fetching source code. A blank field uses a direct internet connection."
        info = HobInfoButton("<span weight=\"bold\">Set the proxies used when fetching source code</span>" + "*" + tooltip, self)
        info = HobInfoButton(tooltip, self)
        hbox = gtk.HBox(False, 12)
        hbox.pack_start(label, expand=True, fill=True)
        hbox.pack_start(info, expand=False, fill=False)
@@ -673,11 +491,11 @@ class SimpleSettingsDialog (CrumbsDialog, SettingsUIHelper):
        self.same_proxy_addresses.append(self.ftp_proxy)
        self.same_proxy_ports.append(self.ftp_proxy_port)

        self.socks_proxy, self.socks_proxy_port, self.socks_proxy_details = self.gen_proxy_entry_widget(
            "socks", self, True, 3)
        proxy_test_focus += [self.socks_proxy, self.socks_proxy_port]
        self.same_proxy_addresses.append(self.socks_proxy)
        self.same_proxy_ports.append(self.socks_proxy_port)
        self.git_proxy, self.git_proxy_port, self.git_proxy_details = self.gen_proxy_entry_widget(
            "git", self, True, 3)
        proxy_test_focus += [self.git_proxy, self.git_proxy_port]
        self.same_proxy_addresses.append(self.git_proxy)
        self.same_proxy_ports.append(self.git_proxy_port)

        self.cvs_proxy, self.cvs_proxy_port, self.cvs_proxy_details = self.gen_proxy_entry_widget(
            "cvs", self, True, 4)
@@ -875,7 +693,7 @@ class SimpleSettingsDialog (CrumbsDialog, SettingsUIHelper):
        advanced_vbox.pack_start(sub_vbox, expand=True, fill=True)
        label = self.gen_label_widget("<span weight=\"bold\">Add your own variables:</span>")
        tooltip = "These are key/value pairs for your extra settings. Click \'Add\' and then directly edit the key and the value"
        setting_widget, self.setting_store = self.gen_editable_settings(self.configuration.extra_setting,"<b>Add your own variables</b>" + "*" + tooltip)
        setting_widget, self.setting_store = self.gen_editable_settings(self.configuration.extra_setting, tooltip)
        sub_vbox.pack_start(label, expand=False, fill=False)
        sub_vbox.pack_start(setting_widget, expand=True, fill=True)

@@ -898,3 +716,7 @@ class SimpleSettingsDialog (CrumbsDialog, SettingsUIHelper):
        self.handler.disconnect(self.proxy_test_passed_id)
        self.handler.disconnect(self.proxy_test_failed_id)
        super(SimpleSettingsDialog, self).destroy()

    def scroll_changed(self, widget, event, data=None):
        adj = widget.get_vadjustment()
        adj.set_value(adj.upper - adj.page_size)
@@ -146,9 +146,7 @@ class HobHandler(gobject.GObject):
        elif next_command == self.SUB_MATCH_CLASS:
            self.runCommand(["findFilesMatchingInDir", "rootfs_", "classes"])
        elif next_command == self.SUB_PARSE_CONFIG:
            self.runCommand(["enableDataTracking"])
            self.runCommand(["parseConfigurationFiles", "", ""])
            self.runCommand(["disableDataTracking"])
        elif next_command == self.SUB_GNERATE_TGTS:
            self.runCommand(["generateTargetsTree", "classes/image.bbclass", []])
        elif next_command == self.SUB_GENERATE_PKGINFO:
@@ -202,10 +200,6 @@ class HobHandler(gobject.GObject):
            self.run_next_command()

        elif isinstance(event, bb.event.SanityCheckPassed):
            reparse = self.runCommand(["getVariable", "BB_INVALIDCONF"]) or None
            if reparse is True:
                self.runCommand(["setVariable", "BB_INVALIDCONF", False])
                self.runCommand(["parseConfigurationFiles", "", ""])
            self.run_next_command()

        elif isinstance(event, bb.event.SanityCheckFailed):
@@ -307,42 +301,42 @@ class HobHandler(gobject.GObject):
        self.runCommand(["setVariable", "INHERIT", inherits])

    def set_bblayers(self, bblayers):
        self.runCommand(["setVariable", "BBLAYERS", " ".join(bblayers)])
        self.runCommand(["setVariable", "BBLAYERS_HOB", " ".join(bblayers)])

    def set_machine(self, machine):
        if machine:
            self.runCommand(["setVariable", "MACHINE", machine])
            self.runCommand(["setVariable", "MACHINE_HOB", machine])

    def set_sdk_machine(self, sdk_machine):
        self.runCommand(["setVariable", "SDKMACHINE", sdk_machine])
        self.runCommand(["setVariable", "SDKMACHINE_HOB", sdk_machine])

    def set_image_fstypes(self, image_fstypes):
        self.runCommand(["setVariable", "IMAGE_FSTYPES", image_fstypes])

    def set_distro(self, distro):
        self.runCommand(["setVariable", "DISTRO", distro])
        self.runCommand(["setVariable", "DISTRO_HOB", distro])

    def set_package_format(self, format):
        package_classes = ""
        for pkgfmt in format.split():
            package_classes += ("package_%s" % pkgfmt + " ")
        self.runCommand(["setVariable", "PACKAGE_CLASSES", package_classes])
        self.runCommand(["setVariable", "PACKAGE_CLASSES_HOB", package_classes])

    def set_bbthreads(self, threads):
        self.runCommand(["setVariable", "BB_NUMBER_THREADS", threads])
        self.runCommand(["setVariable", "BB_NUMBER_THREADS_HOB", threads])

    def set_pmake(self, threads):
        pmake = "-j %s" % threads
        self.runCommand(["setVariable", "PARALLEL_MAKE", pmake])
        self.runCommand(["setVariable", "PARALLEL_MAKE_HOB", pmake])

    def set_dl_dir(self, directory):
        self.runCommand(["setVariable", "DL_DIR", directory])
        self.runCommand(["setVariable", "DL_DIR_HOB", directory])

    def set_sstate_dir(self, directory):
        self.runCommand(["setVariable", "SSTATE_DIR", directory])
        self.runCommand(["setVariable", "SSTATE_DIR_HOB", directory])

    def set_sstate_mirrors(self, url):
        self.runCommand(["setVariable", "SSTATE_MIRRORS", url])
        self.runCommand(["setVariable", "SSTATE_MIRRORS_HOB", url])

    def set_extra_size(self, image_extra_size):
        self.runCommand(["setVariable", "IMAGE_ROOTFS_EXTRA_SPACE", str(image_extra_size)])
@@ -351,13 +345,16 @@ class HobHandler(gobject.GObject):
        self.runCommand(["setVariable", "IMAGE_ROOTFS_SIZE", str(image_rootfs_size)])

    def set_incompatible_license(self, incompat_license):
        self.runCommand(["setVariable", "INCOMPATIBLE_LICENSE", incompat_license])
        self.runCommand(["setVariable", "INCOMPATIBLE_LICENSE_HOB", incompat_license])

    def set_extra_config(self, extra_setting):
        for key in extra_setting.keys():
            value = extra_setting[key]
            self.runCommand(["setVariable", key, value])

    def set_config_filter(self, config_filter):
        self.runCommand(["setConfFilter", config_filter])

    def set_http_proxy(self, http_proxy):
        self.runCommand(["setVariable", "http_proxy", http_proxy])

@@ -367,8 +364,9 @@ class HobHandler(gobject.GObject):
    def set_ftp_proxy(self, ftp_proxy):
        self.runCommand(["setVariable", "ftp_proxy", ftp_proxy])

    def set_socks_proxy(self, socks_proxy):
        self.runCommand(["setVariable", "all_proxy", socks_proxy])
    def set_git_proxy(self, host, port):
        self.runCommand(["setVariable", "GIT_PROXY_HOST", host])
        self.runCommand(["setVariable", "GIT_PROXY_PORT", port])

    def set_cvs_proxy(self, host, port):
        self.runCommand(["setVariable", "CVS_PROXY_HOST", host])
@@ -453,9 +451,6 @@ class HobHandler(gobject.GObject):
                ret.append(i)
        return " ".join(ret)

    def set_var_in_file(self, var, val, default_file=None):
        self.server.runCommand(["setVarFile", var, val, default_file])

    def get_parameters(self):
        # retrieve the parameters from bitbake
        params = {}
@@ -564,7 +559,9 @@ class HobHandler(gobject.GObject):

        params["default_task"] = self.runCommand(["getVariable", "BB_DEFAULT_TASK"]) or "build"

        params["socks_proxy"] = self.runCommand(["getVariable", "all_proxy"]) or ""
        params["git_proxy_host"] = self.runCommand(["getVariable", "GIT_PROXY_HOST"]) or ""
        params["git_proxy_port"] = self.runCommand(["getVariable", "GIT_PROXY_PORT"]) or ""

        params["http_proxy"] = self.runCommand(["getVariable", "http_proxy"]) or ""
        params["ftp_proxy"] = self.runCommand(["getVariable", "ftp_proxy"]) or ""
        params["https_proxy"] = self.runCommand(["getVariable", "https_proxy"]) or ""
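Taken together, the setter and get_parameters() changes above give each proxy type in the settings dialog a corresponding BitBake variable. Summarised as a sketch (the dictionary itself is illustrative; the variable names are the ones used in the calls above):

    # Which datastore variable each proxy setting reads and writes.
    PROXY_VARIABLES = {
        "http":  "http_proxy",
        "https": "https_proxy",
        "ftp":   "ftp_proxy",
        "socks": "all_proxy",                            # added by this change
        "git":   ("GIT_PROXY_HOST", "GIT_PROXY_PORT"),
        "cvs":   ("CVS_PROXY_HOST", "CVS_PROXY_PORT"),
    }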
@@ -27,15 +27,14 @@ from bb.ui.crumbs.hobpages import HobPage
#
# PackageListModel
#
class PackageListModel(gtk.ListStore):
class PackageListModel(gtk.TreeStore):
    """
    This class defines an gtk.ListStore subclass which will convert the output
    of the bb.event.TargetsTreeGenerated event into a gtk.ListStore whilst also
    This class defines an gtk.TreeStore subclass which will convert the output
    of the bb.event.TargetsTreeGenerated event into a gtk.TreeStore whilst also
    providing convenience functions to access gtk.TreeModel subclasses which
    provide filtered views of the data.
    """

    (COL_NAME, COL_VER, COL_REV, COL_RNM, COL_SEC, COL_SUM, COL_RDEP, COL_RPROV, COL_SIZE, COL_RCP, COL_BINB, COL_INC, COL_FADE_INC, COL_FONT, COL_FLIST) = range(15)
    (COL_NAME, COL_VER, COL_REV, COL_RNM, COL_SEC, COL_SUM, COL_RDEP, COL_RPROV, COL_SIZE, COL_BINB, COL_INC, COL_FADE_INC, COL_FONT) = range(13)

    __gsignals__ = {
        "package-selection-changed" : (gobject.SIGNAL_RUN_LAST,
@@ -46,9 +45,15 @@ class PackageListModel(gtk.ListStore):
    __toolchain_required_packages__ = ["packagegroup-core-standalone-sdk-target", "packagegroup-core-standalone-sdk-target-dbg"]

    def __init__(self):

        self.contents = None
        self.images = None
        self.pkgs_size = 0
        self.pn_path = {}
        self.pkg_path = {}
        self.rprov_pkg = {}
        gtk.ListStore.__init__ (self,
                                gobject.TYPE_STRING,

        gtk.TreeStore.__init__ (self,
                                gobject.TYPE_STRING,
                                gobject.TYPE_STRING,
                                gobject.TYPE_STRING,
@@ -61,23 +66,23 @@ class PackageListModel(gtk.ListStore):
                                gobject.TYPE_STRING,
                                gobject.TYPE_BOOLEAN,
                                gobject.TYPE_BOOLEAN,
                                gobject.TYPE_STRING,
                                gobject.TYPE_STRING)


    """
    Find the model path for the item_name
    Returns the path in the model or None
    """
    def find_path_for_item(self, item_name):
        pkg = item_name
        if item_name not in self.pn_path.keys():
        if item_name not in self.pkg_path.keys():
            if item_name not in self.rprov_pkg.keys():
                return None
            pkg = self.rprov_pkg[item_name]
            if pkg not in self.pn_path.keys():
            if pkg not in self.pkg_path.keys():
                return None

        return self.pn_path[pkg]
        return self.pkg_path[pkg]

    def find_item_for_path(self, item_path):
        return self[item_path][self.COL_NAME]
@@ -86,110 +91,25 @@ class PackageListModel(gtk.ListStore):
    Helper function to determine whether an item is an item specified by filter
    """
    def tree_model_filter(self, model, it, filter):
        name = model.get_value(it, self.COL_NAME)

        for key in filter.keys():
            if key == self.COL_NAME:
                if filter[key] != 'Search packages by name':
                    if name and filter[key] not in name:
                        return False
            else:
                if model.get_value(it, key) not in filter[key]:
                    return False
        self.filtered_nb += 1
            if model.get_value(it, key) not in filter[key]:
                return False

        return True

    """
    Create, if required, and return a filtered gtk.TreeModelSort
    containing only the items specified by filter
    """
    def tree_model(self, filter, excluded_items_ahead=False, included_items_ahead=False, search_data=None, initial=False):
    def tree_model(self, filter):
        model = self.filter_new()
        self.filtered_nb = 0
        model.set_visible_func(self.tree_model_filter, filter)

        sort = gtk.TreeModelSort(model)
        if initial:
            sort.set_sort_column_id(PackageListModel.COL_NAME, gtk.SORT_ASCENDING)
            sort.set_default_sort_func(None)

        if excluded_items_ahead:
            sort.set_default_sort_func(self.exclude_item_sort_func, search_data)
        elif included_items_ahead:
            sort.set_default_sort_func(self.include_item_sort_func, search_data)
        else:
            if search_data and search_data!='Search recipes by name' and search_data!='Search package groups by name':
                sort.set_default_sort_func(self.sort_func, search_data)
            else:
                sort.set_sort_column_id(PackageListModel.COL_NAME, gtk.SORT_ASCENDING)
                sort.set_default_sort_func(None)

        sort.set_sort_func(PackageListModel.COL_INC, self.sort_column, PackageListModel.COL_INC)
        sort.set_sort_func(PackageListModel.COL_SIZE, self.sort_column, PackageListModel.COL_SIZE)
        sort.set_sort_func(PackageListModel.COL_BINB, self.sort_column, PackageListModel.COL_BINB)
        sort.set_sort_func(PackageListModel.COL_RCP, self.sort_column, PackageListModel.COL_RCP)
        sort.set_sort_column_id(PackageListModel.COL_NAME, gtk.SORT_ASCENDING)
        sort.set_default_sort_func(None)
        return sort

    def sort_column(self, model, row1, row2, col):
        value1 = model.get_value(row1, col)
        value2 = model.get_value(row2, col)
        if col==PackageListModel.COL_SIZE:
            value1 = HobPage._string_to_size(value1)
            value2 = HobPage._string_to_size(value2)

        cmp_res = cmp(value1, value2)
        if cmp_res!=0:
            if col==PackageListModel.COL_INC:
                return -cmp_res
            else:
                return cmp_res
        else:
            name1 = model.get_value(row1, PackageListModel.COL_NAME)
            name2 = model.get_value(row2, PackageListModel.COL_NAME)
            return cmp(name1,name2)

    def exclude_item_sort_func(self, model, iter1, iter2, user_data=None):
        if user_data:
            val1 = model.get_value(iter1, PackageListModel.COL_NAME)
            val2 = model.get_value(iter2, PackageListModel.COL_NAME)
            if val1.startswith(user_data) and not val2.startswith(user_data):
                return -1
            elif not val1.startswith(user_data) and val2.startswith(user_data):
                return 1
            else:
                return 0
        else:
            val1 = model.get_value(iter1, PackageListModel.COL_FADE_INC)
            val2 = model.get_value(iter2, PackageListModel.COL_INC)
            return ((val1 == True) and (val2 == False))

    def include_item_sort_func(self, model, iter1, iter2, user_data=None):
        if user_data:
            val1 = model.get_value(iter1, PackageListModel.COL_NAME)
            val2 = model.get_value(iter2, PackageListModel.COL_NAME)
            if val1.startswith(user_data) and not val2.startswith(user_data):
                return -1
            elif not val1.startswith(user_data) and val2.startswith(user_data):
                return 1
            else:
                return 0
        else:
            val1 = model.get_value(iter1, PackageListModel.COL_INC)
            val2 = model.get_value(iter2, PackageListModel.COL_INC)
            return ((val1 == False) and (val2 == True))

    def sort_func(self, model, iter1, iter2, user_data):
        val1 = model.get_value(iter1, PackageListModel.COL_NAME)
        val2 = model.get_value(iter2, PackageListModel.COL_NAME)
        if val1 is None or val2 is None:
            return 0
        elif val1.startswith(user_data) and not val2.startswith(user_data):
            return -1
        elif not val1.startswith(user_data) and val2.startswith(user_data):
            return 1
        else:
            return 0

    def convert_vpath_to_path(self, view_model, view_path):
        # view_model is the model sorted
        # get the path of the model filtered
@@ -201,13 +121,16 @@ class PackageListModel(gtk.ListStore):
        return path

    def convert_path_to_vpath(self, view_model, path):
        name = self.find_item_for_path(path)
        it = view_model.get_iter_first()
        while it:
            name = self.find_item_for_path(path)
            view_name = view_model.get_value(it, PackageListModel.COL_NAME)
            if view_name == name:
                view_path = view_model.get_path(it)
                return view_path
            child_it = view_model.iter_children(it)
            while child_it:
                view_name = view_model.get_value(child_it, self.COL_NAME)
                if view_name == name:
                    view_path = view_model.get_path(child_it)
                    return view_path
                child_it = view_model.iter_next(child_it)
            it = view_model.iter_next(it)
        return None
@@ -216,8 +139,11 @@ class PackageListModel(gtk.ListStore):
    bb.event.PackageInfo event and populates the package list.
    """
    def populate(self, pkginfolist):
        # First clear the model, in case repopulating
        self.clear()
        self.pkgs_size = 0
        self.pn_path = {}
        self.pkg_path = {}
        self.rprov_pkg = {}

        def getpkgvalue(pkgdict, key, pkgname, defaultval = None):
            value = pkgdict.get('%s_%s' % (key, pkgname), None)
@@ -229,6 +155,15 @@ class PackageListModel(gtk.ListStore):
            pn = pkginfo['PN']
            pv = pkginfo['PV']
            pr = pkginfo['PR']
            if pn in self.pn_path.keys():
                pniter = self.get_iter(self.pn_path[pn])
            else:
                pniter = self.append(None)
                self.set(pniter, self.COL_NAME, pn + '-' + pv + '-' + pr,
                         self.COL_INC, False)
                self.pn_path[pn] = self.get_path(pniter)

            # PKG is always present
            pkg = pkginfo['PKG']
            pkgv = getpkgvalue(pkginfo, 'PKGV', pkg)
            pkgr = getpkgvalue(pkginfo, 'PKGR', pkg)
@@ -242,12 +177,9 @@ class PackageListModel(gtk.ListStore):
            rdep = getpkgvalue(pkginfo, 'RDEPENDS', pkg, "")
            rrec = getpkgvalue(pkginfo, 'RRECOMMENDS', pkg, "")
            rprov = getpkgvalue(pkginfo, 'RPROVIDES', pkg, "")
            files_list = getpkgvalue(pkginfo, 'FILES_INFO', pkg, "")
            for i in rprov.split():
                self.rprov_pkg[i] = pkg

            recipe = pn + '-' + pv + '-' + pr

            allow_empty = getpkgvalue(pkginfo, 'ALLOW_EMPTY', pkg, "")

            if pkgsize == "0" and not allow_empty:
@@ -255,27 +187,15 @@ class PackageListModel(gtk.ListStore):

            # pkgsize is in KB
            size = HobPage._size_to_string(HobPage._string_to_size(pkgsize + ' KB'))
            self.set(self.append(), self.COL_NAME, pkg, self.COL_VER, pkgv,

            it = self.append(pniter)
            self.pkg_path[pkg] = self.get_path(it)
            self.set(it, self.COL_NAME, pkg, self.COL_VER, pkgv,
                     self.COL_REV, pkgr, self.COL_RNM, pkg_rename,
                     self.COL_SEC, section, self.COL_SUM, summary,
                     self.COL_RDEP, rdep + ' ' + rrec,
                     self.COL_RPROV, rprov, self.COL_SIZE, size,
                     self.COL_RCP, recipe, self.COL_BINB, "",
                     self.COL_INC, False, self.COL_FONT, '10', self.COL_FLIST, files_list)

        self.pn_path = {}
        it = self.get_iter_first()
        while it:
            pn = self.get_value(it, self.COL_NAME)
            path = self.get_path(it)
            self.pn_path[pn] = path
            it = self.iter_next(it)

    """
    Update the model, send out the notification.
    """
    def selection_change_notification(self):
        self.emit("package-selection-changed")
                     self.COL_BINB, "", self.COL_INC, False, self.COL_FONT, '10')

    """
    Check whether the item at item_path is included or not
@@ -284,26 +204,54 @@ class PackageListModel(gtk.ListStore):
        return self[item_path][self.COL_INC]

    """
    Add this item, and any of its dependencies, to the image contents
    Update the model, send out the notification.
    """
    def selection_change_notification(self):
        self.emit("package-selection-changed")

    """
    Mark a certain package as selected.
    All its dependencies are marked as selected.
    The recipe provides the package is marked as selected.
    If user explicitly selects a recipe, all its providing packages are selected
    """
    def include_item(self, item_path, binb=""):
        if self.path_included(item_path):
            return

        item_name = self[item_path][self.COL_NAME]
        item_deps = self[item_path][self.COL_RDEP]
        item_rdep = self[item_path][self.COL_RDEP]

        self[item_path][self.COL_INC] = True

        it = self.get_iter(item_path)

        # If user explicitly selects a recipe, all its providing packages are selected.
        if not self[item_path][self.COL_VER] and binb == "User Selected":
            child_it = self.iter_children(it)
            while child_it:
                child_path = self.get_path(child_it)
                child_included = self.path_included(child_path)
                if not child_included:
                    self.include_item(child_path, binb="User Selected")
                child_it = self.iter_next(child_it)
            return

        # The recipe provides the package is also marked as selected
        parent_it = self.iter_parent(it)
        if parent_it:
            parent_path = self.get_path(parent_it)
            self[parent_path][self.COL_INC] = True

        item_bin = self[item_path][self.COL_BINB].split(', ')
        if binb and not binb in item_bin:
            item_bin.append(binb)
            self[item_path][self.COL_BINB] = ', '.join(item_bin).lstrip(', ')

        if item_deps:
        if item_rdep:
            # Ensure all of the items deps are included and, where appropriate,
            # add this item to their COL_BINB
            for dep in item_deps.split(" "):
            for dep in item_rdep.split(" "):
                if dep.startswith('('):
                    continue
                # If the contents model doesn't already contain dep, add it
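The switch from gtk.ListStore to gtk.TreeStore gives the package model two levels: one parent row per recipe, named "&lt;pn&gt;-&lt;pv&gt;-&lt;pr&gt;", with a child row per package it provides. A small traversal sketch in the style of the loops below (the model argument is assumed to be a populated PackageListModel; the printed names are made up):

    # Sketch: walk the two-level package model (recipe rows, then package rows).
    def dump_packages(model):
        it = model.get_iter_first()
        while it:
            print model.get_value(it, model.COL_NAME)                    # e.g. "busybox-1.19.4-r6"
            child_it = model.iter_children(it)
            while child_it:
                print "  " + model.get_value(child_it, model.COL_NAME)   # e.g. "busybox-syslog"
                child_it = model.iter_next(child_it)
            it = model.iter_next(it)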
@@ -322,6 +270,12 @@ class PackageListModel(gtk.ListStore):
                elif not dep_included:
                    self.include_item(dep_path, binb=item_name)

    """
    Mark a certain package as de-selected.
    All other packages that depends on this package are marked as de-selected.
    If none of the packages provided by the recipe, the recipe should be marked as de-selected.
    If user explicitly de-select a recipe, all its providing packages are de-selected.
    """
    def exclude_item(self, item_path):
        if not self.path_included(item_path):
            return
@@ -329,9 +283,37 @@ class PackageListModel(gtk.ListStore):
        self[item_path][self.COL_INC] = False

        item_name = self[item_path][self.COL_NAME]
        item_deps = self[item_path][self.COL_RDEP]
        if item_deps:
            for dep in item_deps.split(" "):
        item_rdep = self[item_path][self.COL_RDEP]
        it = self.get_iter(item_path)

        # If user explicitly de-select a recipe, all its providing packages are de-selected.
        if not self[item_path][self.COL_VER]:
            child_it = self.iter_children(it)
            while child_it:
                child_path = self.get_path(child_it)
                child_included = self[child_path][self.COL_INC]
                if child_included:
                    self.exclude_item(child_path)
                child_it = self.iter_next(child_it)
            return

        # If none of the packages provided by the recipe, the recipe should be marked as de-selected.
        parent_it = self.iter_parent(it)
        peer_iter = self.iter_children(parent_it)
        enabled = 0
        while peer_iter:
            peer_path = self.get_path(peer_iter)
            if self[peer_path][self.COL_INC]:
                enabled = 1
                break
            peer_iter = self.iter_next(peer_iter)
        if not enabled:
            parent_path = self.get_path(parent_it)
            self[parent_path][self.COL_INC] = False

        # All packages that depends on this package are de-selected.
        if item_rdep:
            for dep in item_rdep.split(" "):
                if dep.startswith('('):
                    continue
                dep_path = self.find_path_for_item(dep)
@@ -351,40 +333,51 @@ class PackageListModel(gtk.ListStore):
                    self.exclude_item(binb_path)

    """
    Empty self.contents by setting the include of each entry to None
    Package model may be incomplete, therefore when calling the
    set_selected_packages(), some packages will not be set included.
    Return the un-set packages list.
    """
    def reset(self):
        it = self.get_iter_first()
        while it:
            self.set(it,
                     self.COL_INC, False,
                     self.COL_BINB, "")
            it = self.iter_next(it)
    def set_selected_packages(self, packagelist, user_selected=False):
        left = []
        binb = 'User Selected' if user_selected else ''
        for pn in packagelist:
            if pn in self.pkg_path.keys():
                path = self.pkg_path[pn]
                self.include_item(item_path=path, binb=binb)
            else:
                left.append(pn)

        self.selection_change_notification()

    def get_selected_packages(self):
        packagelist = []

        it = self.get_iter_first()
        while it:
            if self.get_value(it, self.COL_INC):
                name = self.get_value(it, self.COL_NAME)
                packagelist.append(name)
            it = self.iter_next(it)

        return packagelist
        return left

    def get_user_selected_packages(self):
        packagelist = []

        it = self.get_iter_first()
        while it:
            if self.get_value(it, self.COL_INC):
                binb = self.get_value(it, self.COL_BINB)
                if binb == "User Selected":
                    name = self.get_value(it, self.COL_NAME)
            child_it = self.iter_children(it)
            while child_it:
                if self.get_value(child_it, self.COL_INC):
                    binb = self.get_value(child_it, self.COL_BINB)
                    if binb == "User Selected":
                        name = self.get_value(child_it, self.COL_NAME)
                        packagelist.append(name)
                child_it = self.iter_next(child_it)
            it = self.iter_next(it)

        return packagelist

    def get_selected_packages(self):
        packagelist = []

        it = self.get_iter_first()
        while it:
            child_it = self.iter_children(it)
            while child_it:
                if self.get_value(child_it, self.COL_INC):
                    name = self.get_value(child_it, self.COL_NAME)
                    packagelist.append(name)
                child_it = self.iter_next(child_it)
            it = self.iter_next(it)

        return packagelist
@@ -395,31 +388,16 @@ class PackageListModel(gtk.ListStore):
        it = self.get_iter_first()
        while it:
            if self.get_value(it, self.COL_INC):
                name = self.get_value(it, self.COL_NAME)
                if name.endswith("-dev") or name.endswith("-dbg"):
                    packagelist.append(name)
            child_it = self.iter_children(it)
            while child_it:
                name = self.get_value(child_it, self.COL_NAME)
                inc = self.get_value(child_it, self.COL_INC)
                if inc or name.endswith("-dev") or name.endswith("-dbg"):
                    packagelist.append(name)
                child_it = self.iter_next(child_it)
            it = self.iter_next(it)

        return list(set(packagelist + self.__toolchain_required_packages__));

    """
    Package model may be incomplete, therefore when calling the
    set_selected_packages(), some packages will not be set included.
    Return the un-set packages list.
    """
    def set_selected_packages(self, packagelist, user_selected=False):
        left = []
        binb = 'User Selected' if user_selected else ''
        for pn in packagelist:
            if pn in self.pn_path.keys():
                path = self.pn_path[pn]
                self.include_item(item_path=path, binb=binb)
            else:
                left.append(pn)

        self.selection_change_notification()
        return left

    """
    Return the selected package size, unit is B.
    """
@@ -427,16 +405,37 @@ class PackageListModel(gtk.ListStore):
        packages_size = 0
        it = self.get_iter_first()
        while it:
            if self.get_value(it, self.COL_INC):
                str_size = self.get_value(it, self.COL_SIZE)
                if not str_size:
                    continue
            child_it = self.iter_children(it)
            while child_it:
                if self.get_value(child_it, self.COL_INC):
                    str_size = self.get_value(child_it, self.COL_SIZE)
                    if not str_size:
                        continue

                packages_size += HobPage._string_to_size(str_size)
                    packages_size += HobPage._string_to_size(str_size)

                child_it = self.iter_next(child_it)
            it = self.iter_next(it)
        return packages_size

    """
    Empty self.contents by setting the include of each entry to None
    """
    def reset(self):
        self.pkgs_size = 0
        it = self.get_iter_first()
        while it:
            self.set(it, self.COL_INC, False)
            child_it = self.iter_children(it)
            while child_it:
                self.set(child_it,
                         self.COL_INC, False,
                         self.COL_BINB, "")
                child_it = self.iter_next(child_it)
            it = self.iter_next(it)

        self.selection_change_notification()

    """
    Resync the state of included items to a backup column before performing the fadeout visible effect
    """
@@ -445,6 +444,9 @@ class PackageListModel(gtk.ListStore):
        while it:
            active = self.get_value(it, self.COL_INC)
            self.set(it, self.COL_FADE_INC, active)
            if self.iter_has_child(it):
                self.resync_fadeout_column(self.iter_children(it))

            it = self.iter_next(it)

#
@@ -457,8 +459,7 @@ class RecipeListModel(gtk.ListStore):
    providing convenience functions to access gtk.TreeModel subclasses which
    provide filtered views of the data.
    """
    (COL_NAME, COL_DESC, COL_LIC, COL_GROUP, COL_DEPS, COL_BINB, COL_TYPE, COL_INC, COL_IMG, COL_INSTALL, COL_PN, COL_FADE_INC, COL_SUMMARY, COL_VERSION,
     COL_REVISION, COL_HOMEPAGE, COL_BUGTRACKER) = range(17)
    (COL_NAME, COL_DESC, COL_LIC, COL_GROUP, COL_DEPS, COL_BINB, COL_TYPE, COL_INC, COL_IMG, COL_INSTALL, COL_PN, COL_FADE_INC) = range(12)

    __custom_image__ = "Create your own image"

@@ -483,12 +484,7 @@ class RecipeListModel(gtk.ListStore):
                                gobject.TYPE_BOOLEAN,
                                gobject.TYPE_STRING,
                                gobject.TYPE_STRING,
                                gobject.TYPE_BOOLEAN,
                                gobject.TYPE_STRING,
                                gobject.TYPE_STRING,
                                gobject.TYPE_STRING,
                                gobject.TYPE_STRING,
                                gobject.TYPE_STRING)
                                gobject.TYPE_BOOLEAN)

    """
    Find the model path for the item_name
@@ -520,104 +516,39 @@ class RecipeListModel(gtk.ListStore):
            return False

        for key in filter.keys():
            if key == self.COL_NAME:
                if filter[key] != 'Search recipes by name' and filter[key] != 'Search package groups by name':
                    if filter[key] not in name:
                        return False
            else:
                if model.get_value(it, key) not in filter[key]:
                    return False
        self.filtered_nb += 1
            if model.get_value(it, key) not in filter[key]:
                return False

        return True

    def exclude_item_sort_func(self, model, iter1, iter2, user_data=None):
        if user_data:
            val1 = model.get_value(iter1, RecipeListModel.COL_NAME)
            val2 = model.get_value(iter2, RecipeListModel.COL_NAME)
            if val1.startswith(user_data) and not val2.startswith(user_data):
                return -1
            elif not val1.startswith(user_data) and val2.startswith(user_data):
                return 1
            else:
                return 0
        else:
            val1 = model.get_value(iter1, RecipeListModel.COL_FADE_INC)
            val2 = model.get_value(iter2, RecipeListModel.COL_INC)
            return ((val1 == True) and (val2 == False))
    def exclude_item_sort_func(self, model, iter1, iter2):
        val1 = model.get_value(iter1, RecipeListModel.COL_FADE_INC)
        val2 = model.get_value(iter2, RecipeListModel.COL_INC)
        return ((val1 == True) and (val2 == False))

    def include_item_sort_func(self, model, iter1, iter2, user_data=None):
        if user_data:
            val1 = model.get_value(iter1, RecipeListModel.COL_NAME)
            val2 = model.get_value(iter2, RecipeListModel.COL_NAME)
            if val1.startswith(user_data) and not val2.startswith(user_data):
                return -1
            elif not val1.startswith(user_data) and val2.startswith(user_data):
                return 1
            else:
                return 0
        else:
            val1 = model.get_value(iter1, RecipeListModel.COL_INC)
            val2 = model.get_value(iter2, RecipeListModel.COL_INC)
            return ((val1 == False) and (val2 == True))

    def sort_func(self, model, iter1, iter2, user_data):
        val1 = model.get_value(iter1, RecipeListModel.COL_NAME)
        val2 = model.get_value(iter2, RecipeListModel.COL_NAME)
        if val1 is None or val2 is None:
            return 0
        elif val1.startswith(user_data) and not val2.startswith(user_data):
            return -1
        elif not val1.startswith(user_data) and val2.startswith(user_data):
            return 1
        else:
            return 0
    def include_item_sort_func(self, model, iter1, iter2):
        val1 = model.get_value(iter1, RecipeListModel.COL_INC)
        val2 = model.get_value(iter2, RecipeListModel.COL_INC)
        return ((val1 == False) and (val2 == True))

    """
    Create, if required, and return a filtered gtk.TreeModelSort
    containing only the items specified by filter
    containing only the items which are items specified by filter
    """
    def tree_model(self, filter, excluded_items_ahead=False, included_items_ahead=False, search_data=None, initial=False):
    def tree_model(self, filter, excluded_items_ahead=False, included_items_ahead=True):
        model = self.filter_new()
        self.filtered_nb = 0
        model.set_visible_func(self.tree_model_filter, filter)

        sort = gtk.TreeModelSort(model)
        if initial:
        if excluded_items_ahead:
            sort.set_default_sort_func(self.exclude_item_sort_func)
        elif included_items_ahead:
            sort.set_default_sort_func(self.include_item_sort_func)
        else:
            sort.set_sort_column_id(RecipeListModel.COL_NAME, gtk.SORT_ASCENDING)
            sort.set_default_sort_func(None)

        if excluded_items_ahead:
            sort.set_default_sort_func(self.exclude_item_sort_func, search_data)
        elif included_items_ahead:
            sort.set_default_sort_func(self.include_item_sort_func, search_data)
        else:
            if search_data and search_data!='Search recipes by name' and search_data!='Search package groups by name':
                sort.set_default_sort_func(self.sort_func, search_data)
            else:
                sort.set_sort_column_id(RecipeListModel.COL_NAME, gtk.SORT_ASCENDING)
                sort.set_default_sort_func(None)

        sort.set_sort_func(RecipeListModel.COL_INC, self.sort_column, RecipeListModel.COL_INC)
        sort.set_sort_func(RecipeListModel.COL_GROUP, self.sort_column, RecipeListModel.COL_GROUP)
        sort.set_sort_func(RecipeListModel.COL_BINB, self.sort_column, RecipeListModel.COL_BINB)
        sort.set_sort_func(RecipeListModel.COL_LIC, self.sort_column, RecipeListModel.COL_LIC)
        return sort

    def sort_column(self, model, row1, row2, col):
        value1 = model.get_value(row1, col)
        value2 = model.get_value(row2, col)
        cmp_res = cmp(value1, value2)
        if cmp_res!=0:
            if col==RecipeListModel.COL_INC:
                return -cmp_res
            else:
                return cmp_res
        else:
            name1 = model.get_value(row1, RecipeListModel.COL_NAME)
            name2 = model.get_value(row2, RecipeListModel.COL_NAME)
            return cmp(name1,name2)

    def convert_vpath_to_path(self, view_model, view_path):
        filtered_model_path = view_model.convert_path_to_child_path(view_path)
        filtered_model = view_model.get_model()
@@ -652,9 +583,7 @@ class RecipeListModel(gtk.ListStore):
                 self.COL_LIC, "", self.COL_GROUP, "",
                 self.COL_DEPS, "", self.COL_BINB, "",
                 self.COL_TYPE, "image", self.COL_INC, False,
                 self.COL_IMG, False, self.COL_INSTALL, "", self.COL_PN, self.__custom_image__,
                 self.COL_SUMMARY, "", self.COL_VERSION, "", self.COL_REVISION, "",
                 self.COL_HOMEPAGE, "", self.COL_BUGTRACKER, "")
                 self.COL_IMG, False, self.COL_INSTALL, "", self.COL_PN, self.__custom_image__)

        for item in event_model["pn"]:
            name = item
@@ -662,11 +591,6 @@ class RecipeListModel(gtk.ListStore):
            lic = event_model["pn"][item]["license"]
            group = event_model["pn"][item]["section"]
            inherits = event_model["pn"][item]["inherits"]
            summary = event_model["pn"][item]["summary"]
            version = event_model["pn"][item]["version"]
            revision = event_model["pn"][item]["revision"]
            homepage = event_model["pn"][item]["homepage"]
            bugtracker = event_model["pn"][item]["bugtracker"]
            install = []

            depends = event_model["depends"].get(item, []) + event_model["rdepends-pn"].get(item, [])
@@ -688,9 +612,7 @@ class RecipeListModel(gtk.ListStore):
                     self.COL_LIC, lic, self.COL_GROUP, group,
                     self.COL_DEPS, " ".join(depends), self.COL_BINB, "",
                     self.COL_TYPE, atype, self.COL_INC, False,
                     self.COL_IMG, False, self.COL_INSTALL, " ".join(install), self.COL_PN, item,
                     self.COL_SUMMARY, summary, self.COL_VERSION, version, self.COL_REVISION, revision,
                     self.COL_HOMEPAGE, homepage, self.COL_BUGTRACKER, bugtracker)
                     self.COL_IMG, False, self.COL_INSTALL, " ".join(install), self.COL_PN, item)

        self.pn_path = {}
        it = self.get_iter_first()
@@ -44,6 +44,8 @@ class hic:
    ICON_PACKAGES_HOVER_FILE = os.path.join(HOB_ICON_BASE_DIR, ('packages/packages_hover.png'))
    ICON_LAYERS_DISPLAY_FILE = os.path.join(HOB_ICON_BASE_DIR, ('layers/layers_display.png'))
    ICON_LAYERS_HOVER_FILE = os.path.join(HOB_ICON_BASE_DIR, ('layers/layers_hover.png'))
    ICON_TEMPLATES_DISPLAY_FILE = os.path.join(HOB_ICON_BASE_DIR, ('templates/templates_display.png'))
    ICON_TEMPLATES_HOVER_FILE = os.path.join(HOB_ICON_BASE_DIR, ('templates/templates_hover.png'))
    ICON_IMAGES_DISPLAY_FILE = os.path.join(HOB_ICON_BASE_DIR, ('images/images_display.png'))
    ICON_IMAGES_HOVER_FILE = os.path.join(HOB_ICON_BASE_DIR, ('images/images_hover.png'))
    ICON_SETTINGS_DISPLAY_FILE = os.path.join(HOB_ICON_BASE_DIR, ('settings/settings_display.png'))
@@ -83,29 +85,23 @@ class HobViewTable (gtk.VBox):
                          gobject.TYPE_PYOBJECT,)),
    }

    def __init__(self, columns, name):
    def __init__(self, columns):
        gtk.VBox.__init__(self, False, 6)
        self.table_tree = gtk.TreeView()
        self.table_tree.set_headers_visible(True)
        self.table_tree.set_headers_clickable(True)
        self.table_tree.set_enable_search(True)
        self.table_tree.set_rules_hint(True)
        self.table_tree.set_enable_tree_lines(True)
        self.table_tree.get_selection().set_mode(gtk.SELECTION_SINGLE)
        self.toggle_columns = []
        self.table_tree.connect("row-activated", self.row_activated_cb)
        self.top_bar = None
        self.tab_name = name

        for i, column in enumerate(columns):
            col_name = column['col_name']
            col = gtk.TreeViewColumn(col_name)
            col = gtk.TreeViewColumn(column['col_name'])
            col.set_clickable(True)
            col.set_resizable(True)
            if self.tab_name.startswith('Included'):
                if col_name!='Included':
                    col.set_sort_column_id(column['col_id'])
            else:
                col.set_sort_column_id(column['col_id'])
            col.set_sort_column_id(column['col_id'])
            if 'col_min' in column.keys():
                col.set_min_width(column['col_min'])
            if 'col_max' in column.keys():
@@ -128,7 +124,7 @@ class HobViewTable (gtk.VBox):
                    self.toggle_id = i
                col.pack_end(cell, True)
                col.set_attributes(cell, active=column['col_id'])
                self.toggle_columns.append(col_name)
                self.toggle_columns.append(column['col_name'])
                if 'col_group' in column.keys():
                    col.set_cell_data_func(cell, self.set_group_number_cb)
            elif column['col_style'] == 'radio toggle':
@@ -139,7 +135,7 @@ class HobViewTable (gtk.VBox):
                    self.toggle_id = i
                col.pack_end(cell, True)
                col.set_attributes(cell, active=column['col_id'])
                self.toggle_columns.append(col_name)
                self.toggle_columns.append(column['col_name'])
            elif column['col_style'] == 'binb':
                cell = gtk.CellRendererText()
                col.pack_start(cell, True)
@@ -147,42 +143,10 @@ class HobViewTable (gtk.VBox):
            if 'col_t_id' in column.keys():
                col.add_attribute(cell, 'font', column['col_t_id'])

        self.scroll = gtk.ScrolledWindow()
        self.scroll.set_policy(gtk.POLICY_NEVER, gtk.POLICY_AUTOMATIC)
        self.scroll.add(self.table_tree)

        self.pack_end(self.scroll, True, True, 0)

    def add_no_result_bar(self, entry):
        color = HobColors.KHAKI
        self.top_bar = gtk.EventBox()
        self.top_bar.set_size_request(-1, 70)
        self.top_bar.modify_bg(gtk.STATE_NORMAL, gtk.gdk.color_parse(color))
        self.top_bar.set_flags(gtk.CAN_DEFAULT)
        self.top_bar.grab_default()

        no_result_tab = gtk.Table(5, 20, True)
        self.top_bar.add(no_result_tab)

        label = gtk.Label()
        label.set_alignment(0.0, 0.5)
        title = "No results matching your search"
        label.set_markup("<span size='x-large'><b>%s</b></span>" % title)
        no_result_tab.attach(label, 1, 14, 1, 4)

        clear_button = HobButton("Clear search")
        clear_button.set_tooltip_text("Clear search query")
        clear_button.connect('clicked', self.set_search_entry_clear_cb, entry)
        no_result_tab.attach(clear_button, 16, 19, 1, 4)

        self.pack_start(self.top_bar, False, True, 12)
        self.top_bar.show_all()

    def set_search_entry_clear_cb(self, button, search):
        if search.get_editable() == True:
            search.set_text("")
        search.set_icon_sensitive(gtk.ENTRY_ICON_SECONDARY, False)
        search.grab_focus()
        scroll = gtk.ScrolledWindow()
        scroll.set_policy(gtk.POLICY_NEVER, gtk.POLICY_ALWAYS)
        scroll.add(self.table_tree)
        self.pack_start(scroll, True, True, 0)
def display_binb_cb(self, col, cell, model, it, col_id):
|
||||
binb = model.get_value(it, col_id)
|
||||
@@ -208,6 +172,10 @@ class HobViewTable (gtk.VBox):
|
||||
def set_model(self, tree_model):
|
||||
self.table_tree.set_model(tree_model)
|
||||
|
||||
def set_search_entry(self, search_column_id, entry):
|
||||
self.table_tree.set_search_column(search_column_id)
|
||||
self.table_tree.set_search_entry(entry)
|
||||
|
||||
def toggle_default(self):
|
||||
model = self.table_tree.get_model()
|
||||
if not model:
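
The hunk above adds set_search_entry() so every HobViewTable can share one search box. As a rough illustration only (stock PyGTK 2 with a made-up one-column model, not code from this patch), the same wiring looks like this:

# Illustrative sketch, not part of the patch: a shared gtk.Entry driving
# interactive search on a gtk.TreeView, mirroring HobViewTable.set_search_entry().
# Column index 0 is a hypothetical "name" column; normally this would live
# inside a gtk.Window with gtk.main() running.
import gtk

store = gtk.ListStore(str)
for name in ["busybox", "dropbear", "zlib"]:
    store.append([name])

tree = gtk.TreeView(store)
tree.append_column(gtk.TreeViewColumn("Name", gtk.CellRendererText(), text=0))

entry = gtk.Entry()
tree.set_search_column(0)      # match typed text against column 0
tree.set_search_entry(entry)   # typing in the shared entry jumps to matches
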
@@ -379,13 +347,18 @@ class HobInfoButton(gtk.EventBox):
hic.ICON_INFO_DISPLAY_FILE)
self.image.show()
self.add(self.image)
self.tip_markup = tip_markup
self.my_parent = parent

self.set_events(gtk.gdk.BUTTON_RELEASE |
gtk.gdk.ENTER_NOTIFY_MASK |
gtk.gdk.LEAVE_NOTIFY_MASK)

self.ptip = PersistentTooltip(tip_markup)

if parent:
self.ptip.set_parent(parent)
self.ptip.set_transient_for(parent)
self.ptip.set_destroy_with_parent(True)

self.connect("button-release-event", self.button_release_cb)
self.connect("enter-notify-event", self.mouse_in_cb)
self.connect("leave-notify-event", self.mouse_out_cb)
@@ -395,18 +368,7 @@ class HobInfoButton(gtk.EventBox):
PersistentTooltip
"""
def button_release_cb(self, widget, event):
from bb.ui.crumbs.hig.propertydialog import PropertyDialog
self.dialog = PropertyDialog(title = '',
parent = self.my_parent,
information = self.tip_markup,
flags = gtk.DIALOG_DESTROY_WITH_PARENT
| gtk.DIALOG_NO_SEPARATOR)

button = self.dialog.add_button("Close", gtk.RESPONSE_CANCEL)
HobAltButton.style_button(button)
button.connect("clicked", lambda w: self.dialog.destroy())
self.dialog.show_all()
self.dialog.run()
self.ptip.show()

"""
Change to the prelight image when the mouse enters the widget
@@ -487,8 +449,7 @@ class HobNotebook(gtk.Notebook):
self.pages = []

self.search = None
self.search_focus = False
self.page_changed = False
self.search_name = ""

self.connect("switch-page", self.page_changed_cb)

@@ -501,10 +462,6 @@ class HobNotebook(gtk.Notebook):
else:
lbl.set_active(False)

if self.search:
self.page_changed = True
self.reset_entry(self.search, page_num)

def append_page(self, child, tab_label, tab_tooltip=None):
label = HobTabLabel(tab_label)
if tab_tooltip:
@@ -513,22 +470,16 @@ class HobNotebook(gtk.Notebook):
self.pages.append(label)
gtk.Notebook.append_page(self, child, label)

def set_entry(self, names, tips):
def set_entry(self, name="Search:"):
self.search = gtk.Entry()
self.search_names = names
self.search_tips = tips
self.search_name = name
style = self.search.get_style()
style.text[gtk.STATE_NORMAL] = self.get_colormap().alloc_color(HobColors.GRAY, False, False)
self.search.set_style(style)
self.search.set_text(names[0])
self.search.set_tooltip_text(self.search_tips[0])
self.search.props.has_tooltip = True

self.search.set_text(name)
self.search.set_editable(False)
self.search.set_icon_from_stock(gtk.ENTRY_ICON_SECONDARY, gtk.STOCK_CLEAR)
self.search.set_icon_sensitive(gtk.ENTRY_ICON_SECONDARY, False)
self.search.connect("icon-release", self.set_search_entry_clear_cb)
self.search.set_width_chars(30)
self.search.show()

self.search.connect("focus-in-event", self.set_search_entry_editable_cb)
@@ -546,35 +497,25 @@ class HobNotebook(gtk.Notebook):
child.set_count(0)

def set_search_entry_editable_cb(self, search, event):
self.search_focus = True
search.set_editable(True)
text = search.get_text()
if text in self.search_names:
search.set_text("")
search.set_text("")
style = self.search.get_style()
style.text[gtk.STATE_NORMAL] = self.get_colormap().alloc_color(HobColors.BLACK, False, False)
search.set_style(style)

def set_search_entry_reset_cb(self, search, event):
page_num = self.get_current_page()
text = search.get_text()
if not text:
self.reset_entry(search, page_num)

def reset_entry(self, entry, page_num):
def reset_entry(self, entry):
style = entry.get_style()
style.text[gtk.STATE_NORMAL] = self.get_colormap().alloc_color(HobColors.GRAY, False, False)
entry.set_style(style)
entry.set_text(self.search_names[page_num])
entry.set_tooltip_text(self.search_tips[page_num])
entry.set_text(self.search_name)
entry.set_editable(False)
entry.set_icon_sensitive(gtk.ENTRY_ICON_SECONDARY, False)

def set_search_entry_reset_cb(self, search, event):
self.reset_entry(search)

def set_search_entry_clear_cb(self, search, icon_pos, event):
if search.get_editable() == True:
search.set_text("")
search.set_icon_sensitive(gtk.ENTRY_ICON_SECONDARY, False)
search.grab_focus()

def set_page(self, title):
for child in self.pages:
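
The notebook now keeps a single placeholder string instead of per-page search names. For reference, a hedged sketch of the grey-placeholder idiom that set_entry()/reset_entry() hand-roll (GTK+ 2 entries have no native placeholder support); the placeholder text and colours below are made up:

# Illustrative sketch, not part of the patch: a grey placeholder that clears
# itself on focus and comes back when the entry is left empty.
import gtk

PLACEHOLDER = "Search recipes:"   # hypothetical placeholder text

def make_search_entry():
    entry = gtk.Entry()
    entry.set_text(PLACEHOLDER)
    entry.modify_text(gtk.STATE_NORMAL, gtk.gdk.color_parse("#888888"))

    def on_focus_in(widget, event):
        if widget.get_text() == PLACEHOLDER:
            widget.set_text("")
        widget.modify_text(gtk.STATE_NORMAL, gtk.gdk.color_parse("#000000"))

    def on_focus_out(widget, event):
        if not widget.get_text():
            widget.set_text(PLACEHOLDER)
            widget.modify_text(gtk.STATE_NORMAL, gtk.gdk.color_parse("#888888"))

    entry.connect("focus-in-event", on_focus_in)
    entry.connect("focus-out-event", on_focus_out)
    return entry
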

@@ -55,6 +55,12 @@ class ImageConfigurationPage (HobPage):
self.toolbar.set_orientation(gtk.ORIENTATION_HORIZONTAL)
self.toolbar.set_style(gtk.TOOLBAR_BOTH)

template_button = self.append_toolbar_button(self.toolbar,
"Templates",
hic.ICON_TEMPLATES_DISPLAY_FILE,
hic.ICON_TEMPLATES_HOVER_FILE,
"Load a previously saved template",
self.template_button_clicked_cb)
my_images_button = self.append_toolbar_button(self.toolbar,
"Images",
hic.ICON_IMAGES_DISPLAY_FILE,
@@ -167,11 +173,6 @@ class ImageConfigurationPage (HobPage):

return warnings_bar

def disable_warnings_bar(self):
if self.builder.parsing_warnings:
self.warnings_bar.hide_all()
self.builder.parsing_warnings = []

def create_config_machine(self):
self.machine_title = gtk.Label()
self.machine_title.set_alignment(0.0, 0.5)
@@ -198,7 +199,8 @@ class ImageConfigurationPage (HobPage):
markup += "For more on layers, check the <a href=\""
markup += "http://www.yoctoproject.org/docs/current/dev-manual/"
markup += "dev-manual.html#understanding-and-using-layers\">reference manual</a>."
self.layer_info_icon = HobInfoButton("<b>Layers</b>" + "*" + markup, self.get_parent())
self.layer_info_icon = HobInfoButton(markup, self.get_parent())

# self.progress_box = gtk.HBox(False, 6)
self.progress_bar = HobProgressBar()
# self.progress_box.pack_start(self.progress_bar, expand=True, fill=True)
@@ -296,7 +298,6 @@ class ImageConfigurationPage (HobPage):

def machine_combo_changed_cb(self, machine_combo):
self.stopping = False
self.builder.parsing_warnings = []
combo_item = machine_combo.get_active_text()
if not combo_item or combo_item == self.__dummy_machine__:
return
@@ -317,7 +318,6 @@ class ImageConfigurationPage (HobPage):
self.builder.populate_recipe_package_info_async()

def update_machine_combo(self):
self.disable_warnings_bar()
all_machines = [self.__dummy_machine__] + self.builder.parameters.all_machines

model = self.machine_combo.get_model()
@@ -327,7 +327,6 @@ class ImageConfigurationPage (HobPage):
self.machine_combo.set_active(0)

def switch_machine_combo(self):
self.disable_warnings_bar()
self.machine_combo_changed_by_manual = False
model = self.machine_combo.get_model()
active = 0
@@ -487,6 +486,13 @@ class ImageConfigurationPage (HobPage):
self.builder.configuration.initial_selected_image = self.builder.configuration.selected_image
self.builder.show_recipes()

def template_button_clicked_cb(self, button):
response, path = self.builder.show_load_template_dialog()
if not response:
return
if path:
self.builder.load_template(path)

def my_images_button_clicked_cb(self, button):
self.builder.show_load_my_images_dialog()

@@ -197,6 +197,12 @@ class ImageDetailsPage (HobPage):
self.toolbar.set_orientation(gtk.ORIENTATION_HORIZONTAL)
self.toolbar.set_style(gtk.TOOLBAR_BOTH)

template_button = self.append_toolbar_button(self.toolbar,
"Templates",
hic.ICON_TEMPLATES_DISPLAY_FILE,
hic.ICON_TEMPLATES_HOVER_FILE,
"Load a previously saved template",
self.template_button_clicked_cb)
my_images_button = self.append_toolbar_button(self.toolbar,
"Images",
hic.ICON_IMAGES_DISPLAY_FILE,
@@ -259,7 +265,11 @@ class ImageDetailsPage (HobPage):
self.build_result = self.BuildDetailBox(varlist=varlist, vallist=vallist, icon=icon, color=color)
self.box_group_area.pack_start(self.build_result, expand=False, fill=False)

self.buttonlist = ["Build new image", "Run image", "Deploy image"]
# create the buttons at the bottom first because the buttons are used in apply_button_per_image()
if self.build_succeeded:
self.buttonlist = ["Build new image", "Save as template", "Run image", "Deploy image"]
else: # get to this page from "My images"
self.buttonlist = ["Build new image", "Run image", "Deploy image"]

# Name
self.image_store = []
@@ -572,6 +582,26 @@ class ImageDetailsPage (HobPage):
created = True
is_runnable = True

name = "Save as template"
if name in buttonlist:
if created == True:
# separator
#label = gtk.Label(" or ")
#self.details_bottom_buttons.pack_end(label, expand=False, fill=False)

# create button "Save as template"
save_button = HobAltButton("Save as template")
else:
save_button = HobButton("Save as template")
#save_button.set_size_request(205, 49)
save_button.set_flags(gtk.CAN_DEFAULT)
packed = True
save_button.set_tooltip_text("Save the image configuration for reuse")
button_id = save_button.connect("clicked", self.save_button_clicked_cb)
self.button_ids[button_id] = save_button
self.details_bottom_buttons.pack_end(save_button, expand=False, fill=False)
create = True

name = "Build new image"
if name in buttonlist:
# create button "Build new image"
@@ -588,6 +618,9 @@ class ImageDetailsPage (HobPage):

return is_runnable

def save_button_clicked_cb(self, button):
self.builder.show_save_template_dialog()

def deploy_button_clicked_cb(self, button):
if self.toggled_image:
if self.num_toggled > 1:
@@ -615,6 +648,13 @@ class ImageDetailsPage (HobPage):
def edit_packages_button_clicked_cb(self, button):
self.builder.show_packages(ask=False)

def template_button_clicked_cb(self, button):
response, path = self.builder.show_load_template_dialog()
if not response:
return
if path:
self.builder.load_template(path)

def my_images_button_clicked_cb(self, button):
self.builder.show_load_my_images_dialog()

@@ -34,12 +34,10 @@ class PackageSelectionPage (HobPage):

pages = [
{
'name' : 'Included packages',
'tooltip' : 'The packages currently included for your image',
'filter' : { PackageListModel.COL_INC : [True] },
'search' : 'Search packages by name',
'searchtip' : 'Enter a package name to find it',
'columns' : [{
'name' : 'Included packages',
'tooltip' : 'The packages currently included for your image',
'filter' : { PackageListModel.COL_INC : [True] },
'columns' : [{
'col_name' : 'Package name',
'col_id' : PackageListModel.COL_NAME,
'col_style': 'text',
@@ -53,13 +51,6 @@ class PackageSelectionPage (HobPage):
'col_min' : 100,
'col_max' : 300,
'expand' : 'True'
}, {
'col_name' : 'Recipe',
'col_id' : PackageListModel.COL_RCP,
'col_style': 'text',
'col_min' : 100,
'col_max' : 250,
'expand' : 'True'
}, {
'col_name' : 'Brought in by (+others)',
'col_id' : PackageListModel.COL_BINB,
@@ -75,12 +66,10 @@ class PackageSelectionPage (HobPage):
'col_max' : 100
}]
}, {
'name' : 'All packages',
'tooltip' : 'All packages that have been built',
'filter' : {},
'search' : 'Search packages by name',
'searchtip' : 'Enter a package name to find it',
'columns' : [{
'name' : 'All packages',
'tooltip' : 'All packages that have been built',
'filter' : {},
'columns' : [{
'col_name' : 'Package name',
'col_id' : PackageListModel.COL_NAME,
'col_style': 'text',
@@ -94,13 +83,6 @@ class PackageSelectionPage (HobPage):
'col_min' : 100,
'col_max' : 500,
'expand' : 'True'
}, {
'col_name' : 'Recipe',
'col_id' : PackageListModel.COL_RCP,
'col_style': 'text',
'col_min' : 100,
'col_max' : 250,
'expand' : 'True'
}, {
'col_name' : 'Included',
'col_id' : PackageListModel.COL_INC,
@@ -117,7 +99,7 @@ class PackageSelectionPage (HobPage):
def __init__(self, builder):
super(PackageSelectionPage, self).__init__(builder, "Edit packages")

# set invisible members
# set invisiable members
self.recipe_model = self.builder.recipe_model
self.package_model = self.builder.package_model

@@ -136,31 +118,26 @@ class PackageSelectionPage (HobPage):
# set visible members
self.ins = HobNotebook()
self.tables = [] # we need to modify table when the dialog is shown

search_names = []
search_tips = []
# append the tab
for page in self.pages:
columns = page['columns']
name = page['name']
tab = HobViewTable(columns, name)
search_names.append(page['search'])
search_tips.append(page['searchtip'])
tab = HobViewTable(columns)
filter = page['filter']
sort_model = self.package_model.tree_model(filter, initial=True)
tab.set_model(sort_model)
tab.connect("toggled", self.table_toggled_cb, name)
if name == "Included packages":
tab.connect("button-release-event", self.button_click_cb)
tab.connect("cell-fadeinout-stopped", self.after_fadeout_checkin_include)
if name == "All packages":
tab.set_model(self.package_model.tree_model(filter))
tab.connect("toggled", self.table_toggled_cb, page['name'])
if page['name'] == "Included packages":
tab.connect("button-release-event", self.button_click_cb)
tab.connect("cell-fadeinout-stopped", self.after_fadeout_checkin_include)
self.ins.append_page(tab, page['name'], page['tooltip'])
self.tables.append(tab)

self.ins.set_entry(search_names, search_tips)
self.ins.search.connect("changed", self.search_entry_changed)
self.ins.set_entry("Search packages:")
# set the search entry for each table
for tab in self.tables:
search_tip = "Enter a package name to find it"
self.ins.search.set_tooltip_text(search_tip)
self.ins.search.props.has_tooltip = True
tab.set_search_entry(0, self.ins.search)

# add all into the dialog
self.box_group_area.pack_start(self.ins, expand=True, fill=True)
@@ -180,48 +157,13 @@ class PackageSelectionPage (HobPage):
self.back_button.connect("clicked", self.back_button_clicked_cb)
self.button_box.pack_end(self.back_button, expand=False, fill=False)

def search_entry_changed(self, entry):
text = entry.get_text()
if self.ins.search_focus:
self.ins.search_focus = False
elif self.ins.page_changed:
self.ins.page_change = False
self.filter_search(entry)
elif text not in self.ins.search_names:
self.filter_search(entry)

def filter_search(self, entry):
text = entry.get_text()
current_tab = self.ins.get_current_page()
filter = self.pages[current_tab]['filter']
filter[PackageListModel.COL_NAME] = text
self.tables[current_tab].set_model(self.package_model.tree_model(filter, search_data=text))
if self.package_model.filtered_nb == 0:
if not self.ins.get_nth_page(current_tab).top_bar:
self.ins.get_nth_page(current_tab).add_no_result_bar(entry)
self.ins.get_nth_page(current_tab).top_bar.show()
self.ins.get_nth_page(current_tab).scroll.hide()
else:
if self.ins.get_nth_page(current_tab).top_bar:
self.ins.get_nth_page(current_tab).top_bar.hide()
self.ins.get_nth_page(current_tab).scroll.show()
if entry.get_text() == '':
entry.set_icon_sensitive(gtk.ENTRY_ICON_SECONDARY, False)
else:
entry.set_icon_sensitive(gtk.ENTRY_ICON_SECONDARY, True)

def button_click_cb(self, widget, event):
path, col = widget.table_tree.get_cursor()
tree_model = widget.table_tree.get_model()
if path and col.get_title() != 'Included': # else activation is likely a removal
properties = {'binb': '' , 'name': '', 'size':'', 'recipe':'', 'files_list':''}
properties['binb'] = tree_model.get_value(tree_model.get_iter(path), PackageListModel.COL_BINB)
properties['name'] = tree_model.get_value(tree_model.get_iter(path), PackageListModel.COL_NAME)
properties['size'] = tree_model.get_value(tree_model.get_iter(path), PackageListModel.COL_SIZE)
properties['recipe'] = tree_model.get_value(tree_model.get_iter(path), PackageListModel.COL_RCP)
properties['files_list'] = tree_model.get_value(tree_model.get_iter(path), PackageListModel.COL_FLIST)

self.builder.show_recipe_property_dialog(properties)
if path: # else activation is likely a removal
binb = tree_model.get_value(tree_model.get_iter(path), PackageListModel.COL_BINB)
if binb:
self.builder.show_binb_dialog(binb)

def open_log_clicked_cb(self, button, log_file):
if log_file:
@@ -254,7 +196,13 @@ class PackageSelectionPage (HobPage):
else:
self.builder.show_configuration()

def _expand_all(self):
for tab in self.tables:
tab.table_tree.expand_all()

def refresh_selection(self):
self._expand_all()

self.builder.configuration.selected_packages = self.package_model.get_selected_packages()
self.builder.configuration.user_selected_packages = self.package_model.get_user_selected_packages()
selected_packages_num = len(self.builder.configuration.selected_packages)
@@ -288,7 +236,6 @@ class PackageSelectionPage (HobPage):
self.refresh_selection()
if not self.builder.customized:
self.builder.customized = True
self.builder.configuration.initial_selected_image = self.builder.configuration.selected_image
self.builder.configuration.selected_image = self.recipe_model.__custom_image__
self.builder.rcppkglist_populated()
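
Both selection pages describe their tabs with a pages list whose 'filter' dict maps model columns to accepted values. A small illustrative sketch (plain dicts and invented column constants, not the real PackageListModel API) of how such a filter picks rows:

# Illustrative sketch, not part of the patch: applying a page's 'filter'
# dict (column -> list of allowed values) to rows.
COL_NAME, COL_INC, COL_TYPE = 0, 1, 2

rows = [
    {COL_NAME: "busybox", COL_INC: True,  COL_TYPE: "recipe"},
    {COL_NAME: "zlib",    COL_INC: False, COL_TYPE: "recipe"},
    {COL_NAME: "core-image-minimal", COL_INC: True, COL_TYPE: "image"},
]

def apply_filter(rows, flt):
    """Keep rows whose value for every filtered column is in the allowed list."""
    return [r for r in rows
            if all(r.get(col) in allowed for col, allowed in flt.items())]

included_recipes = apply_filter(rows, {COL_INC: [True], COL_TYPE: ["recipe"]})
# -> only the "busybox" row survives
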

@@ -33,13 +33,11 @@ from bb.ui.crumbs.hobpages import HobPage

class RecipeSelectionPage (HobPage):
pages = [
{
'name' : 'Included recipes',
'tooltip' : 'The recipes currently included for your image',
'filter' : { RecipeListModel.COL_INC : [True],
'name' : 'Included recipes',
'tooltip' : 'The recipes currently included for your image',
'filter' : { RecipeListModel.COL_INC : [True],
RecipeListModel.COL_TYPE : ['recipe', 'packagegroup'] },
'search' : 'Search recipes by name',
'searchtip' : 'Enter a recipe name to find it',
'columns' : [{
'columns' : [{
'col_name' : 'Recipe name',
'col_id' : RecipeListModel.COL_NAME,
'col_style': 'text',
@@ -68,12 +66,10 @@ class RecipeSelectionPage (HobPage):
'col_max' : 100
}]
}, {
'name' : 'All recipes',
'tooltip' : 'All recipes in your configured layers',
'filter' : { RecipeListModel.COL_TYPE : ['recipe'] },
'search' : 'Search recipes by name',
'searchtip' : 'Enter a recipe name to find it',
'columns' : [{
'name' : 'All recipes',
'tooltip' : 'All recipes in your configured layers',
'filter' : { RecipeListModel.COL_TYPE : ['recipe'] },
'columns' : [{
'col_name' : 'Recipe name',
'col_id' : RecipeListModel.COL_NAME,
'col_style': 'text',
@@ -102,12 +98,10 @@ class RecipeSelectionPage (HobPage):
'col_max' : 100
}]
}, {
'name' : 'Package Groups',
'tooltip' : 'All package groups in your configured layers',
'filter' : { RecipeListModel.COL_TYPE : ['packagegroup'] },
'search' : 'Search package groups by name',
'searchtip' : 'Enter a package group name to find it',
'columns' : [{
'name' : 'Package Groups',
'tooltip' : 'All package groups in your configured layers',
'filter' : { RecipeListModel.COL_TYPE : ['packagegroup'] },
'columns' : [{
'col_name' : 'Package group name',
'col_id' : RecipeListModel.COL_NAME,
'col_style': 'text',
@@ -148,34 +142,26 @@ class RecipeSelectionPage (HobPage):
# set visible members
self.ins = HobNotebook()
self.tables = [] # we need modify table when the dialog is shown

search_names = []
search_tips = []
# append the tabs in order
for page in self.pages:
columns = page['columns']
name = page['name']
tab = HobViewTable(columns, name)
search_names.append(page['search'])
search_tips.append(page['searchtip'])
tab = HobViewTable(columns)
filter = page['filter']
sort_model = self.recipe_model.tree_model(filter, initial=True)
tab.set_model(sort_model)
tab.connect("toggled", self.table_toggled_cb, name)
if name == "Included recipes":
tab.set_model(self.recipe_model.tree_model(filter))
tab.connect("toggled", self.table_toggled_cb, page['name'])
if page['name'] == "Included recipes":
tab.connect("button-release-event", self.button_click_cb)
tab.connect("cell-fadeinout-stopped", self.after_fadeout_checkin_include)
if name == "Package Groups":
tab.connect("button-release-event", self.button_click_cb)
tab.connect("cell-fadeinout-stopped", self.after_fadeout_checkin_include)
if name == "All recipes":
tab.connect("button-release-event", self.button_click_cb)
tab.connect("cell-fadeinout-stopped", self.button_click_cb)
self.ins.append_page(tab, page['name'], page['tooltip'])
self.tables.append(tab)

self.ins.set_entry(search_names, search_tips)
self.ins.search.connect("changed", self.search_entry_changed)
self.ins.set_entry("Search recipes:")
# set the search entry for each table
for tab in self.tables:
search_tip = "Enter a recipe's or task's name to find it"
self.ins.search.set_tooltip_text(search_tip)
self.ins.search.props.has_tooltip = True
tab.set_search_entry(0, self.ins.search)

# add all into the window
self.box_group_area.pack_start(self.ins, expand=True, fill=True)
@@ -195,52 +181,13 @@ class RecipeSelectionPage (HobPage):
self.back_button.connect("clicked", self.back_button_clicked_cb)
button_box.pack_end(self.back_button, expand=False, fill=False)

def search_entry_changed(self, entry):
text = entry.get_text()
if self.ins.search_focus:
self.ins.search_focus = False
elif self.ins.page_changed:
self.ins.page_change = False
self.filter_search(entry)
elif text not in self.ins.search_names:
self.filter_search(entry)

def filter_search(self, entry):
text = entry.get_text()
current_tab = self.ins.get_current_page()
filter = self.pages[current_tab]['filter']
filter[RecipeListModel.COL_NAME] = text
self.tables[current_tab].set_model(self.recipe_model.tree_model(filter, search_data=text))
if self.recipe_model.filtered_nb == 0:
if not self.ins.get_nth_page(current_tab).top_bar:
self.ins.get_nth_page(current_tab).add_no_result_bar(entry)
self.ins.get_nth_page(current_tab).top_bar.show()
self.ins.get_nth_page(current_tab).scroll.hide()
else:
if self.ins.get_nth_page(current_tab).top_bar:
self.ins.get_nth_page(current_tab).top_bar.hide()
self.ins.get_nth_page(current_tab).scroll.show()
if entry.get_text() == '':
entry.set_icon_sensitive(gtk.ENTRY_ICON_SECONDARY, False)
else:
entry.set_icon_sensitive(gtk.ENTRY_ICON_SECONDARY, True)

def button_click_cb(self, widget, event):
path, col = widget.table_tree.get_cursor()
tree_model = widget.table_tree.get_model()
if path and col.get_title() != 'Included': # else activation is likely a removal
properties = {'summary': '', 'name': '', 'version': '', 'revision': '', 'binb': '', 'group': '', 'license': '', 'homepage': '', 'bugtracker': '', 'description': ''}
properties['summary'] = tree_model.get_value(tree_model.get_iter(path), RecipeListModel.COL_SUMMARY)
properties['name'] = tree_model.get_value(tree_model.get_iter(path), RecipeListModel.COL_NAME)
properties['version'] = tree_model.get_value(tree_model.get_iter(path), RecipeListModel.COL_VERSION)
properties['revision'] = tree_model.get_value(tree_model.get_iter(path), RecipeListModel.COL_REVISION)
properties['binb'] = tree_model.get_value(tree_model.get_iter(path), RecipeListModel.COL_BINB)
properties['group'] = tree_model.get_value(tree_model.get_iter(path), RecipeListModel.COL_GROUP)
properties['license'] = tree_model.get_value(tree_model.get_iter(path), RecipeListModel.COL_LIC)
properties['homepage'] = tree_model.get_value(tree_model.get_iter(path), RecipeListModel.COL_HOMEPAGE)
properties['bugtracker'] = tree_model.get_value(tree_model.get_iter(path), RecipeListModel.COL_BUGTRACKER)
properties['description'] = tree_model.get_value(tree_model.get_iter(path), RecipeListModel.COL_DESC)
self.builder.show_recipe_property_dialog(properties)
if path: # else activation is likely a removal
binb = tree_model.get_value(tree_model.get_iter(path), RecipeListModel.COL_BINB)
if binb:
self.builder.show_binb_dialog(binb)

def build_packages_clicked_cb(self, button):
self.builder.build_packages()

@@ -137,6 +137,8 @@ class RecipeFile(ConfigFile):

class TemplateMgr(gobject.GObject):

__gLocalVars__ = ["MACHINE", "PACKAGE_CLASSES", "DISTRO", "DL_DIR", "SSTATE_DIR", "SSTATE_MIRRORS", "PARALLEL_MAKE", "BB_NUMBER_THREADS", "CONF_VERSION"]
__gBBLayersVars__ = ["BBLAYERS", "LCONF_VERSION"]
__gRecipeVars__ = ["DEPENDS", "IMAGE_INSTALL"]

def __init__(self):
@@ -150,21 +152,37 @@ class TemplateMgr(gobject.GObject):
def convert_to_template_pathfilename(cls, filename, path):
return "%s/%s%s%s" % (path, "template-", filename, ".hob")

@classmethod
def convert_to_bblayers_pathfilename(cls, filename, path):
return "%s/%s%s%s" % (path, "bblayers-", filename, ".conf")

@classmethod
def convert_to_local_pathfilename(cls, filename, path):
return "%s/%s%s%s" % (path, "local-", filename, ".conf")

@classmethod
def convert_to_image_pathfilename(cls, filename, path):
return "%s/%s%s%s" % (path, "hob-image-", filename, ".bb")

def open(self, filename, path):
self.template_hob = HobTemplateFile(TemplateMgr.convert_to_template_pathfilename(filename, path))
self.bblayers_conf = ConfigFile(TemplateMgr.convert_to_bblayers_pathfilename(filename, path))
self.local_conf = ConfigFile(TemplateMgr.convert_to_local_pathfilename(filename, path))
self.image_bb = RecipeFile(TemplateMgr.convert_to_image_pathfilename(filename, path))

def setVar(self, var, val):
if var in TemplateMgr.__gLocalVars__:
self.local_conf.setVar(var, val)
if var in TemplateMgr.__gBBLayersVars__:
self.bblayers_conf.setVar(var, val)
if var in TemplateMgr.__gRecipeVars__:
self.image_bb.setVar(var, val)

self.template_hob.setVar(var, val)

def save(self):
self.local_conf.save()
self.bblayers_conf.save()
self.image_bb.save()
self.template_hob.save()

@@ -182,6 +200,12 @@ class TemplateMgr(gobject.GObject):
if self.template_hob:
del self.template_hob
template_hob = None
if self.bblayers_conf:
del self.bblayers_conf
self.bblayers_conf = None
if self.local_conf:
del self.local_conf
self.local_conf = None
if self.image_bb:
del self.image_bb
self.image_bb = None
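
TemplateMgr.setVar() routes each variable to local.conf, bblayers.conf or the image recipe according to the class-level name lists, and always mirrors it into the .hob template. A minimal sketch of that dispatch, using a hypothetical key/value writer rather than the real ConfigFile classes:

# Illustrative sketch, not part of the patch: variable routing by membership
# in per-file name lists, mirroring TemplateMgr.setVar() above.
class KeyValueFile(object):
    def __init__(self, path):
        self.path = path
        self.vars = {}

    def setVar(self, var, val):
        self.vars[var] = val

    def save(self):
        with open(self.path, "w") as f:
            for var, val in sorted(self.vars.items()):
                f.write('%s = "%s"\n' % (var, val))

LOCAL_VARS = ["MACHINE", "DISTRO", "PACKAGE_CLASSES"]   # hypothetical subset
BBLAYERS_VARS = ["BBLAYERS"]

local_conf = KeyValueFile("local-mytemplate.conf")
bblayers_conf = KeyValueFile("bblayers-mytemplate.conf")
template_hob = KeyValueFile("template-mytemplate.hob")

def set_var(var, val):
    if var in LOCAL_VARS:
        local_conf.setVar(var, val)
    if var in BBLAYERS_VARS:
        bblayers_conf.setVar(var, val)
    template_hob.setVar(var, val)   # the template records every variable

set_var("MACHINE", "qemux86")
set_var("BBLAYERS", "/path/to/meta /path/to/meta-yocto")
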

@@ -81,7 +81,7 @@ def main (server, eventHandler):

try:
cmdline, error = server.runCommand(["getCmdLineAction"])
if error:
if err:
print("Error getting bitbake commandline: %s" % error)
return 1
elif not cmdline:

@@ -453,8 +453,7 @@ def main(server, eventHandler, tf = TerminalFilter):
bb.runqueue.runQueueExitWait,
bb.event.OperationStarted,
bb.event.OperationCompleted,
bb.event.OperationProgress,
bb.event.DiskFull)):
bb.event.OperationProgress)):
continue

logger.error("Unknown event: %s", event)

@@ -458,6 +458,27 @@ def preserved_envvars_exported():
'USER',
]

def preserved_envvars_exported_interactive():
"""Variables which are taken from the environment and placed in and exported
from the metadata, for interactive tasks"""
return [
'COLORTERM',
'DBUS_SESSION_BUS_ADDRESS',
'DESKTOP_SESSION',
'DESKTOP_STARTUP_ID',
'DISPLAY',
'GNOME_KEYRING_PID',
'GNOME_KEYRING_SOCKET',
'GPG_AGENT_INFO',
'GTK_RC_FILES',
'SESSION_MANAGER',
'KRB5CCNAME',
'SSH_AUTH_SOCK',
'XAUTHORITY',
'XDG_DATA_DIRS',
'XDG_SESSION_COOKIE',
]

def preserved_envvars():
"""Variables which are taken from the environment and placed in the metadata"""
v = [
@@ -466,7 +487,7 @@ def preserved_envvars():
'BB_ENV_WHITELIST',
'BB_ENV_EXTRAWHITE',
]
return v + preserved_envvars_exported()
return v + preserved_envvars_exported() + preserved_envvars_exported_interactive()

def filter_environment(good_vars):
"""
@@ -474,20 +495,24 @@ def filter_environment(good_vars):
are not known and may influence the build in a negative way.
"""

removed_vars = {}
removed_vars = []
for key in os.environ.keys():
if key in good_vars:
continue

removed_vars[key] = os.environ[key]
removed_vars.append(key)
os.unsetenv(key)
del os.environ[key]

if len(removed_vars):
logger.debug(1, "Removed the following variables from the environment: %s", ", ".join(removed_vars.keys()))
logger.debug(1, "Removed the following variables from the environment: %s", ", ".join(removed_vars))

return removed_vars

def create_interactive_env(d):
for k in preserved_envvars_exported_interactive():
os.setenv(k, d.getVar(k, True))

def approved_variables():
"""
Determine and return the list of whitelisted variables which are approved
@@ -496,13 +521,10 @@ def approved_variables():
approved = []
if 'BB_ENV_WHITELIST' in os.environ:
approved = os.environ['BB_ENV_WHITELIST'].split()
approved.extend(['BB_ENV_WHITELIST'])
else:
approved = preserved_envvars()
if 'BB_ENV_EXTRAWHITE' in os.environ:
approved.extend(os.environ['BB_ENV_EXTRAWHITE'].split())
if 'BB_ENV_EXTRAWHITE' not in approved:
approved.extend(['BB_ENV_EXTRAWHITE'])
return approved

def clean_environment():
@@ -512,9 +534,7 @@
"""
if 'BB_PRESERVE_ENV' not in os.environ:
good_vars = approved_variables()
return filter_environment(good_vars)

return {}
filter_environment(good_vars)
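
The hunks above rework the environment whitelist: approved_variables() assembles the allowed names from BB_ENV_WHITELIST and BB_ENV_EXTRAWHITE, and filter_environment() now records only the names of the variables it drops. A reduced, hedged sketch of that flow (defaults and edge cases omitted, not the exact BitBake code):

# Illustrative sketch, not part of the patch.
import os

def approved_variables(default_whitelist):
    if 'BB_ENV_WHITELIST' in os.environ:
        approved = os.environ['BB_ENV_WHITELIST'].split()
    else:
        approved = list(default_whitelist)
        approved.extend(os.environ.get('BB_ENV_EXTRAWHITE', '').split())
    return approved

def filter_environment(good_vars):
    removed = []
    for key in list(os.environ.keys()):
        if key not in good_vars:
            removed.append(key)       # remember only the name, not the value
            del os.environ[key]
    return removed

# Example: keep only PATH and HOME plus anything the user whitelisted.
dropped = filter_environment(approved_variables(['PATH', 'HOME']))
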

def empty_environment():
"""
@@ -538,17 +558,14 @@ remove(path, recurse=False):
"""Equivalent to rm -f or rm -rf"""
if not path:
return
if recurse:
import subprocess, glob
# shutil.rmtree(name) would be ideal but its too slow
subprocess.call(['rm', '-rf'] + glob.glob(path))
return
import os, errno, glob
import os, errno, shutil, glob
for name in glob.glob(path):
try:
os.unlink(name)
except OSError as exc:
if exc.errno != errno.ENOENT:
if recurse and exc.errno == errno.EISDIR:
shutil.rmtree(name)
elif exc.errno != errno.ENOENT:
raise

def prunedir(topdir):
@@ -803,31 +820,3 @@ def cpu_count():
def nonblockingfd(fd):
fcntl.fcntl(fd, fcntl.F_SETFL, fcntl.fcntl(fd, fcntl.F_GETFL) | os.O_NONBLOCK)

def process_profilelog(fn):
# Redirect stdout to capture profile information
pout = open(fn + '.processed', 'w')
so = sys.stdout.fileno()
orig_so = os.dup(sys.stdout.fileno())
os.dup2(pout.fileno(), so)

import pstats
p = pstats.Stats(fn)
p.sort_stats('time')
p.print_stats()
p.print_callers()
p.sort_stats('cumulative')
p.print_stats()

os.dup2(orig_so, so)
pout.flush()
pout.close()

#
# Work around multiprocessing pool bugs in python < 2.7.3
#
def multiprocessingpool(*args, **kwargs):
if sys.version_info < (2, 7, 3):
return bb.compat.Pool(*args, **kwargs)
else:
return multiprocessing.pool.Pool(*args, **kwargs)
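
process_profilelog() above replays a profile dump through pstats by temporarily redirecting stdout. A hedged variant of the same idea, using the stream argument of pstats.Stats instead of dup2(); the file names here are made up:

# Illustrative sketch, not part of the patch: post-process a cProfile dump
# with pstats and write the report to a separate file.
import cProfile
import pstats

cProfile.run("sum(range(100000))", "profile.out")   # produce a sample dump

with open("profile.out.processed", "w") as out:
    p = pstats.Stats("profile.out", stream=out)
    p.sort_stats("time").print_stats()
    p.print_callers()
    p.sort_stats("cumulative").print_stats()
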

@@ -86,8 +86,7 @@ class PRServer(SimpleXMLRPCServer):
def work_forever(self,):
self.quit = False
self.timeout = 0.5

logger.info("Started PRServer with DBfile: %s, IP: %s, PORT: %s, PID: %s" %
logger.info("PRServer: started! DBfile: %s, IP: %s, PORT: %s, PID: %s" %
(self.dbfile, self.host, self.port, str(os.getpid())))

while not self.quit:
@@ -98,10 +97,16 @@ class PRServer(SimpleXMLRPCServer):
return

def start(self):
pid = self.daemonize()
# Ensure both the parent sees this and the child from the work_forever log entry above
logger.info("Started PRServer with DBfile: %s, IP: %s, PORT: %s, PID: %s" %
(self.dbfile, self.host, self.port, str(pid)))
if self.daemon is True:
logger.info("PRServer: try to start daemon...")
self.daemonize()
else:
atexit.register(self.delpid)
pid = str(os.getpid())
pf = file(self.pidfile, 'w+')
pf.write("%s\n" % pid)
pf.close()
self.work_forever()

def delpid(self):
os.remove(self.pidfile)
@@ -113,9 +118,8 @@ class PRServer(SimpleXMLRPCServer):
try:
pid = os.fork()
if pid > 0:
os.waitpid(pid, 0)
#parent return instead of exit to give control
return pid
return
except OSError as e:
raise Exception("%s [%d]" % (e.strerror, e.errno))

@@ -127,7 +131,7 @@ class PRServer(SimpleXMLRPCServer):
try:
pid = os.fork()
if pid > 0: #parent
os._exit(0)
sys.exit(0)
except OSError as e:
raise Exception("%s [%d]" % (e.strerror, e.errno))

@@ -143,22 +147,15 @@ class PRServer(SimpleXMLRPCServer):
os.dup2(so.fileno(),sys.stdout.fileno())
os.dup2(se.fileno(),sys.stderr.fileno())

# Ensure logging makes it to the logfile
streamhandler = logging.StreamHandler()
streamhandler.setLevel(logging.DEBUG)
formatter = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
streamhandler.setFormatter(formatter)
logger.addHandler(streamhandler)

# write pidfile
atexit.register(self.delpid)
pid = str(os.getpid())
pf = file(self.pidfile, 'w')
pf.write("%s\n" % pid)
pf.close()

self.work_forever()
self.delpid
os._exit(0)
sys.exit(0)

class PRServSingleton():
def __init__(self, dbfile, logfile, interface):
@@ -167,14 +164,21 @@ class PRServSingleton():
self.interface = interface
self.host = None
self.port = None
self.event = threading.Event()

def start(self):
self.prserv = PRServer(self.dbfile, self.logfile, self.interface)
self.prserv.start()
def _work(self):
self.prserv = PRServer(self.dbfile, self.logfile, self.interface, False)
self.host, self.port = self.prserv.getinfo()
self.event.set()
self.prserv.work_forever()
del self.prserv.db

def start(self):
self.working_thread = threading.Thread(target=self._work)
self.working_thread.start()

def getinfo(self):
self.event.wait()
return (self.host, self.port)

class PRServerConnection():
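
PRServSingleton now runs the server on a worker thread and publishes its address through a threading.Event, so getinfo() can block until the thread is ready. The same handshake in isolation, as a sketch (the address values below are placeholders):

# Illustrative sketch, not part of the patch: a worker thread publishes its
# address and sets an Event so callers can wait for it in getinfo().
import threading

class Service(object):
    def __init__(self):
        self.host = None
        self.port = None
        self.event = threading.Event()

    def _work(self):
        # ... bind a server socket here; placeholder values for the sketch ...
        self.host, self.port = "127.0.0.1", 8585
        self.event.set()          # tell waiters the address is known
        # ... then serve requests until asked to quit ...

    def start(self):
        threading.Thread(target=self._work).start()

    def getinfo(self):
        self.event.wait()         # blocks until _work() has published
        return (self.host, self.port)

svc = Service()
svc.start()
print(svc.getinfo())
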
@@ -190,7 +194,6 @@ class PRServerConnection():
import socket
socket.setdefaulttimeout(2)
try:
logger.info("Terminating PRServer...")
self.connection.quit()
except Exception as exc:
sys.stderr.write("%s\n" % str(exc))
@@ -263,27 +266,17 @@ def is_local_special(host, port):
else:
return False

class PRServiceConfigError(Exception):
pass

def auto_start(d):
global singleton

host_params = filter(None, (d.getVar('PRSERV_HOST', True) or '').split(':'))
if not host_params:
if (not d.getVar('PRSERV_HOST', True)) or (not d.getVar('PRSERV_PORT', True)):
return True

if len(host_params) != 2:
logger.critical('\n'.join(['PRSERV_HOST: incorrect format',
'Usage: PRSERV_HOST = "<hostname>:<port>"']))
raise PRServiceConfigError

if is_local_special(host_params[0], int(host_params[1])) and not singleton:
if is_local_special(d.getVar('PRSERV_HOST', True), int(d.getVar('PRSERV_PORT', True))) and not singleton:
import bb.utils
cachedir = (d.getVar("PERSISTENT_DIR", True) or d.getVar("CACHE", True))
if not cachedir:
logger.critical("Please set the 'PERSISTENT_DIR' or 'CACHE' variable")
raise PRServiceConfigError
sys.exit(1)
bb.utils.mkdirhier(cachedir)
dbfile = os.path.join(cachedir, "prserv.sqlite3")
logfile = os.path.join(cachedir, "prserv.log")
@@ -292,14 +285,14 @@ def auto_start(d):
if singleton:
host, port = singleton.getinfo()
else:
host = host_params[0]
port = int(host_params[1])
host = d.getVar('PRSERV_HOST', True)
port = int(d.getVar('PRSERV_PORT', True))

try:
return PRServerConnection(host,port).ping()
except Exception:
logger.critical("PRservice %s:%d not available" % (host, port))
raise PRServiceConfigError
return False

def auto_shutdown(d=None):
global singleton
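
auto_start() above switches to a single PRSERV_HOST = "<hostname>:<port>" setting and validates its format. A standalone sketch of that parsing (the function name is invented; the real code reads the value through d.getVar):

# Illustrative sketch, not part of the patch: split and validate a
# "<hostname>:<port>" style setting, with unset meaning "no PR service".
def parse_prserv_host(value):
    host_params = [p for p in (value or '').split(':') if p]
    if not host_params:
        return None                      # PR service not configured
    if len(host_params) != 2:
        raise ValueError('PRSERV_HOST: incorrect format, '
                         'use "<hostname>:<port>"')
    return host_params[0], int(host_params[1])

print(parse_prserv_host("localhost:0"))   # ('localhost', 0) -> start locally
print(parse_prserv_host(None))            # None -> feature disabled
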
|
||||
|
||||
@@ -11,8 +11,6 @@
|
||||
# or the mega-manual (single, large HTML file comprised of all
|
||||
# Yocto Project manuals).
|
||||
# html: generates an HTML version of a manual.
|
||||
# eclipse: generates an HTML version of a manual that can be used as
|
||||
# eclipse help (including necessary metadata files).
|
||||
# tarball: creates a tarball for the doc files.
|
||||
# validate: validates
|
||||
# publish: pushes generated files to the Yocto Project website
|
||||
@@ -72,19 +70,26 @@
|
||||
#
|
||||
|
||||
ifeq ($(DOC),bsp-guide)
|
||||
XSLTOPTS = --xinclude
|
||||
ALLPREQ = html pdf eclipse tarball
|
||||
TARFILES = bsp-style.css bsp-guide.html bsp-guide.pdf figures/bsp-title.png \
|
||||
eclipse
|
||||
MANUALS = $(DOC)/$(DOC).html $(DOC)/$(DOC).pdf $(DOC)/eclipse
|
||||
XSLTOPTS = --stringparam html.stylesheet bsp-style.css \
|
||||
--stringparam chapter.autolabel 1 \
|
||||
--stringparam section.autolabel 1 \
|
||||
--stringparam section.label.includes.component.label 1 \
|
||||
--xinclude
|
||||
ALLPREQ = html pdf tarball
|
||||
TARFILES = bsp-style.css bsp-guide.html bsp-guide.pdf figures/bsp-title.png
|
||||
MANUALS = $(DOC)/$(DOC).html $(DOC)/$(DOC).pdf
|
||||
FIGURES = figures
|
||||
STYLESHEET = $(DOC)/*.css
|
||||
|
||||
endif
|
||||
|
||||
ifeq ($(DOC),dev-manual)
|
||||
XSLTOPTS = --xinclude
|
||||
ALLPREQ = html pdf eclipse tarball
|
||||
XSLTOPTS = --stringparam html.stylesheet dev-style.css \
|
||||
--stringparam chapter.autolabel 1 \
|
||||
--stringparam section.autolabel 1 \
|
||||
--stringparam section.label.includes.component.label 1 \
|
||||
--xinclude
|
||||
ALLPREQ = html pdf tarball
|
||||
#
|
||||
# Note that the tarfile might produce the "Cannot stat: No such file or directory" error
|
||||
# message for .PNG files that are not present when building a particular branch. The
|
||||
@@ -115,24 +120,21 @@ TARFILES = dev-style.css dev-manual.html dev-manual.pdf \
|
||||
figures/app-dev-flow.png figures/bsp-dev-flow.png figures/dev-title.png \
|
||||
figures/git-workflow.png figures/index-downloads.png figures/kernel-dev-flow.png \
|
||||
figures/kernel-overview-1.png figures/kernel-overview-2-generic.png \
|
||||
figures/source-repos.png figures/yp-download.png \
|
||||
eclipse
|
||||
figures/source-repos.png figures/yp-download.png
|
||||
endif
|
||||
|
||||
MANUALS = $(DOC)/$(DOC).html $(DOC)/$(DOC).pdf $(DOC)/eclipse
|
||||
MANUALS = $(DOC)/$(DOC).html $(DOC)/$(DOC).pdf
|
||||
FIGURES = figures
|
||||
STYLESHEET = $(DOC)/*.css
|
||||
|
||||
endif
|
||||
|
||||
ifeq ($(DOC),yocto-project-qs)
|
||||
XSLTOPTS = --xinclude
|
||||
ALLPREQ = html eclipse tarball
|
||||
TARFILES = yocto-project-qs.html qs-style.css figures/yocto-environment.png \
|
||||
figures/building-an-image.png figures/using-a-pre-built-image.png \
|
||||
figures/yocto-project-transp.png \
|
||||
eclipse
|
||||
MANUALS = $(DOC)/$(DOC).html $(DOC)/eclipse
|
||||
XSLTOPTS = --stringparam html.stylesheet qs-style.css \
|
||||
--xinclude
|
||||
ALLPREQ = html tarball
|
||||
TARFILES = yocto-project-qs.html qs-style.css figures/yocto-environment.png figures/building-an-image.png figures/using-a-pre-built-image.png figures/yocto-project-transp.png
|
||||
MANUALS = $(DOC)/$(DOC).html
|
||||
FIGURES = figures
|
||||
STYLESHEET = $(DOC)/*.css
|
||||
endif
|
||||
@@ -176,27 +178,11 @@ TARFILES = mega-manual.html mega-style.css figures/yocto-environment.png figures
|
||||
figures/using-a-pre-built-image.png \
|
||||
figures/poky-title.png figures/buildhistory.png figures/buildhistory-web.png \
|
||||
figures/adt-title.png figures/bsp-title.png \
|
||||
figures/kernel-dev-title.png figures/kernel-architecture-overview.png \
|
||||
figures/kernel-title.png figures/kernel-architecture-overview.png \
|
||||
figures/app-dev-flow.png figures/bsp-dev-flow.png figures/dev-title.png \
|
||||
figures/git-workflow.png figures/index-downloads.png figures/kernel-dev-flow.png \
|
||||
figures/kernel-overview-1.png figures/kernel-overview-2-generic.png \
|
||||
figures/source-repos.png figures/yp-download.png \
|
||||
figures/profile-title.png figures/kernelshark-all.png \
|
||||
figures/kernelshark-choose-events.png figures/kernelshark-i915-display.png \
|
||||
figures/kernelshark-output-display.png figures/lttngmain0.png \
|
||||
figures/oprofileui-busybox.png figures/oprofileui-copy-to-user.png \
|
||||
figures/oprofileui-downloading.png figures/oprofileui-processes.png \
|
||||
figures/perf-probe-do_fork-profile.png figures/perf-report-cycles-u.png \
|
||||
figures/perf-systemwide.png figures/perf-systemwide-libc.png \
|
||||
figures/perf-wget-busybox-annotate-menu.png figures/perf-wget-busybox-annotate-udhcpc.png \
|
||||
figures/perf-wget-busybox-debuginfo.png figures/perf-wget-busybox-dso-zoom.png \
|
||||
figures/perf-wget-busybox-dso-zoom-menu.png figures/perf-wget-busybox-expanded-stripped.png \
|
||||
figures/perf-wget-flat-stripped.png figures/perf-wget-g-copy-from-user-expanded-stripped.png \
|
||||
figures/perf-wget-g-copy-to-user-expanded-debuginfo.png figures/perf-wget-g-copy-to-user-expanded-stripped.png \
|
||||
figures/perf-wget-g-copy-to-user-expanded-stripped-unresolved-hidden.png figures/pybootchartgui-linux-yocto.png \
|
||||
figures/pychart-linux-yocto-rpm.png figures/pychart-linux-yocto-rpm-nostrip.png \
|
||||
figures/sched-wakeup-profile.png figures/sysprof-callers.png \
|
||||
figures/sysprof-copy-from-user.png figures/sysprof-copy-to-user.png
|
||||
figures/source-repos.png figures/yp-download.png figures/kernel-dev-title.png
|
||||
endif
|
||||
|
||||
MANUALS = $(DOC)/$(DOC).html
|
||||
@@ -206,59 +192,59 @@ STYLESHEET = $(DOC)/*.css
|
||||
endif
|
||||
|
||||
ifeq ($(DOC),ref-manual)
|
||||
XSLTOPTS = --xinclude
|
||||
ALLPREQ = html pdf eclipse tarball
|
||||
XSLTOPTS = --stringparam html.stylesheet ref-style.css \
|
||||
--stringparam chapter.autolabel 1 \
|
||||
--stringparam appendix.autolabel A \
|
||||
--stringparam section.autolabel 1 \
|
||||
--stringparam section.label.includes.component.label 1 \
|
||||
--xinclude
|
||||
ALLPREQ = html pdf tarball
|
||||
TARFILES = ref-manual.html ref-style.css figures/poky-title.png \
|
||||
figures/buildhistory.png figures/buildhistory-web.png eclipse
|
||||
MANUALS = $(DOC)/$(DOC).html $(DOC)/$(DOC).pdf $(DOC)/eclipse
|
||||
figures/buildhistory.png figures/buildhistory-web.png
|
||||
MANUALS = $(DOC)/$(DOC).html $(DOC)/$(DOC).pdf
|
||||
FIGURES = figures
|
||||
STYLESHEET = $(DOC)/*.css
|
||||
endif
|
||||
|
||||
|
||||
ifeq ($(DOC),adt-manual)
|
||||
XSLTOPTS = --xinclude
|
||||
ALLPREQ = html pdf eclipse tarball
|
||||
TARFILES = adt-manual.html adt-manual.pdf adt-style.css figures/adt-title.png \
|
||||
eclipse
|
||||
MANUALS = $(DOC)/$(DOC).html $(DOC)/$(DOC).pdf $(DOC)/eclipse
|
||||
XSLTOPTS = --stringparam html.stylesheet adt-style.css \
|
||||
--stringparam chapter.autolabel 1 \
|
||||
--stringparam appendix.autolabel A \
|
||||
--stringparam section.autolabel 1 \
|
||||
--stringparam section.label.includes.component.label 1 \
|
||||
--xinclude
|
||||
ALLPREQ = html pdf tarball
|
||||
TARFILES = adt-manual.html adt-manual.pdf adt-style.css figures/adt-title.png
|
||||
MANUALS = $(DOC)/$(DOC).html $(DOC)/$(DOC).pdf
|
||||
FIGURES = figures
|
||||
STYLESHEET = $(DOC)/*.css
|
||||
endif
|
||||
|
||||
ifeq ($(DOC),profile-manual)
|
||||
XSLTOPTS = --xinclude
|
||||
ALLPREQ = html pdf eclipse tarball
|
||||
TARFILES = profile-manual.html profile-manual.pdf profile-manual-style.css \
|
||||
figures/profile-title.png figures/kernelshark-all.png \
|
||||
figures/kernelshark-choose-events.png figures/kernelshark-i915-display.png \
|
||||
figures/kernelshark-output-display.png figures/lttngmain0.png \
|
||||
figures/oprofileui-busybox.png figures/oprofileui-copy-to-user.png \
|
||||
figures/oprofileui-downloading.png figures/oprofileui-processes.png \
|
||||
figures/perf-probe-do_fork-profile.png figures/perf-report-cycles-u.png \
|
||||
figures/perf-systemwide.png figures/perf-systemwide-libc.png \
|
||||
figures/perf-wget-busybox-annotate-menu.png figures/perf-wget-busybox-annotate-udhcpc.png \
|
||||
figures/perf-wget-busybox-debuginfo.png figures/perf-wget-busybox-dso-zoom.png \
|
||||
figures/perf-wget-busybox-dso-zoom-menu.png figures/perf-wget-busybox-expanded-stripped.png \
|
||||
figures/perf-wget-flat-stripped.png figures/perf-wget-g-copy-from-user-expanded-stripped.png \
|
||||
figures/perf-wget-g-copy-to-user-expanded-debuginfo.png figures/perf-wget-g-copy-to-user-expanded-stripped.png \
|
||||
figures/perf-wget-g-copy-to-user-expanded-stripped-unresolved-hidden.png figures/pybootchartgui-linux-yocto.png \
|
||||
figures/pychart-linux-yocto-rpm.png figures/pychart-linux-yocto-rpm-nostrip.png \
|
||||
figures/sched-wakeup-profile.png figures/sysprof-callers.png \
|
||||
figures/sysprof-copy-from-user.png figures/sysprof-copy-to-user.png \
|
||||
eclipse
|
||||
MANUALS = $(DOC)/$(DOC).html $(DOC)/$(DOC).pdf $(DOC)/eclipse
|
||||
ifeq ($(DOC),kernel-manual)
|
||||
XSLTOPTS = --stringparam html.stylesheet kernel-style.css \
|
||||
--stringparam chapter.autolabel 1 \
|
||||
--stringparam appendix.autolabel A \
|
||||
--stringparam section.autolabel 1 \
|
||||
--stringparam section.label.includes.component.label 1 \
|
||||
--xinclude
|
||||
ALLPREQ = html pdf tarball
|
||||
TARFILES = kernel-manual.html kernel-manual.pdf kernel-style.css figures/kernel-title.png figures/kernel-architecture-overview.png
|
||||
MANUALS = $(DOC)/$(DOC).html $(DOC)/$(DOC).pdf
|
||||
FIGURES = figures
|
||||
STYLESHEET = $(DOC)/*.css
|
||||
endif
|
||||
|
||||
ifeq ($(DOC),kernel-dev)
|
||||
XSLTOPTS = --xinclude
|
||||
ALLPREQ = html pdf eclipse tarball
|
||||
TARFILES = kernel-dev.html kernel-dev.pdf kernel-dev-style.css figures/kernel-dev-title.png \
|
||||
figures/kernel-architecture-overview.png \
|
||||
eclipse
|
||||
MANUALS = $(DOC)/$(DOC).html $(DOC)/$(DOC).pdf $(DOC)/eclipse
|
||||
XSLTOPTS = --stringparam html.stylesheet kernel-dev-style.css \
|
||||
--stringparam chapter.autolabel 1 \
|
||||
--stringparam appendix.autolabel A \
|
||||
--stringparam section.autolabel 1 \
|
||||
--stringparam section.label.includes.component.label 1 \
|
||||
--xinclude
|
||||
ALLPREQ = html pdf tarball
|
||||
TARFILES = kernel-dev.html kernel-dev.pdf kernel-dev-style.css figures/kernel-dev-title.png
|
||||
MANUALS = $(DOC)/$(DOC).html $(DOC)/$(DOC).pdf
|
||||
FIGURES = figures
|
||||
STYLESHEET = $(DOC)/*.css
|
||||
endif
|
||||
@@ -312,49 +298,6 @@ else
|
||||
endif
|
||||
|
||||
|
||||
eclipse: BASE_DIR = html/$(DOC)/
|
||||
|
||||
eclipse: eclipse-generate eclipse-resolve-links
|
||||
|
||||
.PHONY : eclipse-generate eclipse-resolve-links
|
||||
|
||||
eclipse-generate:
|
||||
ifeq ($(filter $(DOC), adt-manual bsp-guide dev-manual kernel-dev profile-manual ref-manual yocto-project-qs),)
|
||||
@echo " "
|
||||
@echo "ERROR: You can only create eclipse documentation"
|
||||
@echo " of the following documentation parts:"
|
||||
@echo " - adt-manual"
|
||||
@echo " - bsp-guide"
|
||||
@echo " - dev-manual"
|
||||
@echo " - kernel-dev"
|
||||
@echo " - profile-manual"
|
||||
@echo " - ref-manual"
|
||||
@echo " - yocto-project-qs"
|
||||
@echo " "
|
||||
else
|
||||
@echo " "
|
||||
@echo "******** Building eclipse help of "$(DOC)
|
||||
@echo " "
|
||||
cd $(DOC) && \
|
||||
xsltproc $(XSLTOPTS) \
|
||||
--stringparam base.dir '$(BASE_DIR)' \
|
||||
-o eclipse/$(DOC).html \
|
||||
$(DOC)-eclipse-customization.xsl $(DOC).xml && \
|
||||
mv eclipse/toc.xml eclipse/$(DOC)-toc.xml && \
|
||||
cp -rf $(FIGURES) eclipse/$(BASE_DIR) && \
|
||||
cd ..;
|
||||
|
||||
$(call modify-eclipse)
|
||||
endif
|
||||
|
||||
eclipse-resolve-links:
|
||||
@echo " "
|
||||
@echo "******** Using eclipse-help.sed to process external links"
|
||||
@echo " "
|
||||
$(foreach FILE, \
|
||||
$(wildcard $(DOC)/eclipse/html/$(DOC)/*.html), \
|
||||
$(shell sed -i -f tools/eclipse-help.sed $(FILE)))
|
||||
|
||||
tarball: html
|
||||
@echo " "
|
||||
@echo "******** Creating Tarball of document files"
|
||||
|
||||
@@ -7,8 +7,8 @@
|
||||
|
||||
<para>
|
||||
Recall that earlier the manual discussed how to use an existing toolchain
|
||||
tarball that had been installed into the default installation
|
||||
directory, <filename>/opt/poky</filename>, which is outside of the
|
||||
tarball that had been installed into <filename>/opt/poky</filename>,
|
||||
which is outside of the
|
||||
<ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>
|
||||
(see the section "<link linkend='using-an-existing-toolchain-tarball'>Using a Cross-Toolchain Tarball)</link>".
|
||||
And, that sourcing your architecture-specific environment setup script
|
||||
@@ -81,21 +81,19 @@
|
||||
<listitem><para><emphasis>Source the cross-toolchain
|
||||
environment setup file:</emphasis>
|
||||
Installation of the cross-toolchain creates a cross-toolchain
|
||||
environment setup script in the directory that the ADT
|
||||
was installed.
|
||||
environment setup script in <filename>/opt/poky/<release></filename>.
|
||||
Before you can use the tools to develop your project, you must
|
||||
source this setup script.
|
||||
The script begins with the string "environment-setup" and contains
|
||||
the machine architecture, which is followed by the string
|
||||
"poky-linux".
|
||||
Here is an example that sources a script from the
|
||||
default ADT installation directory that uses the
|
||||
Here is an example for an environment setup using the
|
||||
32-bit Intel x86 Architecture and using the
|
||||
&DISTRO_NAME; Yocto Project release:
|
||||
<literallayout class='monospaced'>
|
||||
$ source /opt/poky/&DISTRO;/environment-setup-i586-poky-linux
|
||||
</literallayout></para></listitem>
|
||||
<listitem><para><emphasis>Generate the local aclocal.m4
|
||||
<listitem><para><emphasis>Generate the local <filename>aclocal.m4</filename>
|
||||
files and create the configure script:</emphasis>
|
||||
The following GNU Autotools generate the local
|
||||
<filename>aclocal.m4</filename> files and create the
|
||||
@@ -112,7 +110,7 @@
|
||||
<literallayout class='monospaced'>
|
||||
$ touch NEWS README AUTHORS ChangeLog
|
||||
</literallayout></para></listitem>
|
||||
<listitem><para><emphasis>Generate the configure
|
||||
<listitem><para><emphasis>Generate the <filename>configure</filename>
|
||||
file:</emphasis>
|
||||
This command generates the <filename>configure</filename>:
|
||||
<literallayout class='monospaced'>
|
||||
@@ -160,7 +158,8 @@
|
||||
For an Autotools-based project, you can use the cross-toolchain by just
|
||||
passing the appropriate host option to <filename>configure.sh</filename>.
|
||||
The host option you use is derived from the name of the environment setup
|
||||
script found in the directory in which you installed the cross-toolchain.
|
||||
script in <filename>/opt/poky</filename> resulting from installation of the
|
||||
cross-toolchain tarball.
|
||||
For example, the host option for an ARM-based target that uses the GNU EABI
|
||||
is <filename>armv5te-poky-linux-gnueabi</filename>.
|
||||
You will notice that the name of the script is
|
||||
|
||||
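As a concrete illustration of the host option discussed in the hunk above, a minimal Autotools cross-build might look like the following sketch; the install path and target triplet are examples taken from the surrounding text, not fixed values:

    $ source /opt/poky/1.4/environment-setup-armv5te-poky-linux-gnueabi
    $ ./configure --host=armv5te-poky-linux-gnueabi
    $ make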
@@ -19,13 +19,7 @@
|
||||
how to access and use the cross-development toolchains, how to
|
||||
customize the development packages installation,
|
||||
how to use command line development for both Autotools-based and Makefile-based projects,
|
||||
and an introduction to the <trademark class='trade'>Eclipse</trademark> IDE
|
||||
Yocto Plug-in.
|
||||
<note>
|
||||
The ADT is distribution-neutral and does not require the Yocto
|
||||
Project reference distribution, which is called Poky.
|
||||
This manual, however, uses examples that use the Poky distribution.
|
||||
</note>
|
||||
and an introduction to the Eclipse Yocto Plug-in.
|
||||
</para>
|
||||
|
||||
<section id='adt-intro-section'>
|
||||
@@ -42,10 +36,8 @@
|
||||
Fundamentally, the ADT consists of the following:
|
||||
<itemizedlist>
|
||||
<listitem><para>An architecture-specific cross-toolchain and matching
|
||||
sysroot both built by the OpenEmbedded build system.
|
||||
The toolchain and sysroot are based on a
|
||||
<ulink url='&YOCTO_DOCS_DEV_URL;#metadata'>Metadata</ulink>
|
||||
configuration and extensions,
|
||||
sysroot both built by the OpenEmbedded build system, which uses Poky.
|
||||
The toolchain and sysroot are based on a metadata configuration and extensions,
|
||||
which allows you to cross-develop on the host machine for the target hardware.
|
||||
</para></listitem>
|
||||
<listitem><para>The Eclipse IDE Yocto Plug-in.</para></listitem>
|
||||
@@ -56,20 +48,17 @@
|
||||
</itemizedlist>
|
||||
</para>
|
||||
|
||||
<section id='the-cross-development-toolchain'>
|
||||
<title>The Cross-Development Toolchain</title>
|
||||
<section id='the-cross-toolchain'>
|
||||
<title>The Cross-Toolchain</title>
|
||||
|
||||
<para>
|
||||
The
|
||||
<ulink url='&YOCTO_DOCS_DEV_URL;#cross-development-toolchain'>Cross-Development Toolchain</ulink>
|
||||
consists of a cross-compiler, cross-linker, and cross-debugger
|
||||
that are used to develop user-space applications for targeted
|
||||
hardware.
|
||||
This toolchain is created either by running the ADT Installer
|
||||
script, a toolchain installer script, or through a
|
||||
<ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>
|
||||
that is based on your Metadata configuration or extension for
|
||||
your targeted device.
|
||||
The cross-toolchain consists of a cross-compiler, cross-linker, and cross-debugger
|
||||
that are used to develop user-space applications for targeted hardware.
|
||||
This toolchain is created either by running the ADT Installer script, a toolchain installer
|
||||
script, or through a
|
||||
<ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink> that
|
||||
is based on your metadata
|
||||
configuration or extension for your targeted device.
|
||||
The cross-toolchain works with a matching target sysroot.
|
||||
</para>
|
||||
</section>
|
||||
@@ -81,7 +70,7 @@
|
||||
The matching target sysroot contains needed headers and libraries for generating
|
||||
binaries that run on the target architecture.
|
||||
The sysroot is based on the target root filesystem image that is built by
|
||||
the OpenEmbedded build system and uses the same Metadata configuration
|
||||
the OpenEmbedded build system Poky and uses the same metadata configuration
|
||||
used to build the cross-toolchain.
|
||||
</para>
|
||||
</section>
|
||||
@@ -130,7 +119,7 @@
|
||||
the environment setup script, QEMU is installed and automatically
|
||||
available.</para></listitem>
|
||||
<listitem><para>If you have installed the cross-toolchain
|
||||
tarball and you have sourced the toolchain's setup environment script, QEMU
|
||||
tarball and you have sourced the toolchain's setup environment script, QEMU
|
||||
is also installed and automatically available.</para></listitem>
|
||||
</itemizedlist>
|
||||
</para>
|
||||
@@ -150,7 +139,7 @@
|
||||
stutters in your desktop experience, or situations that overload your server
|
||||
even when you have plenty of CPU power left.
|
||||
You can find out more about LatencyTOP at
|
||||
<ulink url='https://latencytop.org/'></ulink>.</para></listitem>
|
||||
<ulink url='http://www.latencytop.org/'></ulink>.</para></listitem>
|
||||
<listitem><para><emphasis>PowerTOP:</emphasis> Helps you determine what
|
||||
software is using the most power.
|
||||
You can find out more about PowerTOP at
|
||||
@@ -158,29 +147,18 @@
|
||||
<listitem><para><emphasis>OProfile:</emphasis> A system-wide profiler for Linux
|
||||
systems that is capable of profiling all running code at low overhead.
|
||||
You can find out more about OProfile at
|
||||
<ulink url='http://oprofile.sourceforge.net/about/'></ulink>.
|
||||
For examples on how to setup and use this tool, see the
|
||||
"<ulink url='&YOCTO_DOCS_PROF_URL;#profile-manual-oprofile'>OProfile</ulink>"
|
||||
section in the Yocto Project Profiling and Tracing Manual.
|
||||
</para></listitem>
|
||||
<ulink url='http://oprofile.sourceforge.net/about/'></ulink>.</para></listitem>
|
||||
<listitem><para><emphasis>Perf:</emphasis> Performance counters for Linux used
|
||||
to keep track of certain types of hardware and software events.
|
||||
For more information on these types of counters see
|
||||
<ulink url='https://perf.wiki.kernel.org/'></ulink> and click
|
||||
on “Perf tools.”
|
||||
For examples on how to setup and use this tool, see the
|
||||
"<ulink url='&YOCTO_DOCS_PROF_URL;#profile-manual-perf'>perf</ulink>"
|
||||
section in the Yocto Project Profiling and Tracing Manual.
|
||||
</para></listitem>
|
||||
on “Perf tools.”</para></listitem>
|
||||
<listitem><para><emphasis>SystemTap:</emphasis> A free software infrastructure
|
||||
that simplifies information gathering about a running Linux system.
|
||||
This information helps you diagnose performance or functional problems.
|
||||
SystemTap is not available as a user-space tool through the Eclipse IDE Yocto Plug-in.
|
||||
See <ulink url='http://sourceware.org/systemtap'></ulink> for more information
|
||||
on SystemTap.
|
||||
For examples on how to setup and use this tool, see the
|
||||
"<ulink url='&YOCTO_DOCS_PROF_URL;#profile-manual-systemtap'>SystemTap</ulink>"
|
||||
section in the Yocto Project Profiling and Tracing Manual.</para></listitem>
|
||||
on SystemTap.</para></listitem>
|
||||
<listitem><para><emphasis>Lttng-ust:</emphasis> A User-space Tracer designed to
|
||||
provide detailed information on user-space activity.
|
||||
See <ulink url='http://lttng.org/ust'></ulink> for more information on Lttng-ust.
|
||||
|
||||
@@ -3,9 +3,6 @@
|
||||
|
||||
<xsl:import href="http://docbook.sourceforge.net/release/xsl/current/xhtml/docbook.xsl" />
|
||||
|
||||
<xsl:param name="html.stylesheet" select="'adt-style.css'" />
|
||||
<xsl:param name="chapter.autolabel" select="1" />
|
||||
<xsl:param name="appendix.autolabel" select="1" />
|
||||
<xsl:param name="section.autolabel" select="1" />
|
||||
<xsl:param name="section.label.includes.component.label" select="1" />
|
||||
<!-- <xsl:param name="generate.toc" select="'article nop'"></xsl:param> -->
|
||||
|
||||
</xsl:stylesheet>
|
||||
|
||||
@@ -1,27 +0,0 @@
|
||||
<?xml version='1.0'?>
|
||||
<xsl:stylesheet
|
||||
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
|
||||
xmlns="http://www.w3.org/1999/xhtml"
|
||||
xmlns:fo="http://www.w3.org/1999/XSL/Format"
|
||||
version="1.0">
|
||||
|
||||
<xsl:import
|
||||
href="http://docbook.sourceforge.net/release/xsl/current/eclipse/eclipse3.xsl" />
|
||||
|
||||
<xsl:param name="chunker.output.indent" select="'yes'"/>
|
||||
<xsl:param name="chunk.quietly" select="1"/>
|
||||
<xsl:param name="chunk.first.sections" select="1"/>
|
||||
<xsl:param name="chunk.section.depth" select="10"/>
|
||||
<xsl:param name="use.id.as.filename" select="1"/>
|
||||
<xsl:param name="ulink.target" select="'_self'" />
|
||||
<xsl:param name="base.dir" select="'html/adt-manual/'"/>
|
||||
<xsl:param name="html.stylesheet" select="'../book.css'"/>
|
||||
<xsl:param name="eclipse.manifest" select="0"/>
|
||||
<xsl:param name="create.plugin.xml" select="0"/>
|
||||
<xsl:param name="suppress.navigation" select="1"/>
|
||||
<xsl:param name="generate.index" select="0"/>
|
||||
<xsl:param name="chapter.autolabel" select="1" />
|
||||
<xsl:param name="appendix.autolabel" select="1" />
|
||||
<xsl:param name="section.autolabel" select="1" />
|
||||
<xsl:param name="section.label.includes.component.label" select="1" />
|
||||
</xsl:stylesheet>
|
||||
@@ -16,9 +16,7 @@
|
||||
</imageobject>
|
||||
</mediaobject>
|
||||
|
||||
<title>
|
||||
Yocto Project Application Developer's Guide
|
||||
</title>
|
||||
<title></title>
|
||||
|
||||
<authorgroup>
|
||||
<author>
|
||||
@@ -58,35 +56,10 @@
|
||||
</revision>
|
||||
<revision>
|
||||
<revnumber>1.4</revnumber>
|
||||
<date>April 2013</date>
|
||||
<date>Sometime in 2013</date>
|
||||
<revremark>Released with the Yocto Project 1.4 Release.</revremark>
|
||||
</revision>
|
||||
<revision>
|
||||
<revnumber>1.4.1</revnumber>
|
||||
<date>June 2013</date>
|
||||
<revremark>Released with the Yocto Project 1.4.1 Release.</revremark>
|
||||
</revision>
|
||||
<revision>
|
||||
<revnumber>1.4.2</revnumber>
|
||||
<date>August 2013</date>
|
||||
<revremark>Released with the Yocto Project 1.4.2 Release.</revremark>
|
||||
</revision>
|
||||
<revision>
|
||||
<revnumber>1.4.3</revnumber>
|
||||
<date>March 2014</date>
|
||||
<revremark>Released with the Yocto Project 1.4.3 Release.</revremark>
|
||||
</revision>
|
||||
<revision>
|
||||
<revnumber>1.4.4</revnumber>
|
||||
<date>May 2014</date>
|
||||
<revremark>Released with the Yocto Project 1.4.4 Release.</revremark>
|
||||
</revision>
|
||||
<revision>
|
||||
<revnumber>1.4.5</revnumber>
|
||||
<date>July 2014</date>
|
||||
<revremark>Released with the Yocto Project 1.4.5 Release.</revremark>
|
||||
</revision>
|
||||
</revhistory>
|
||||
</revhistory>
|
||||
|
||||
<copyright>
|
||||
<year>&COPYRIGHT_YEAR;</year>
|
||||
|
||||
@@ -55,9 +55,9 @@
|
||||
</para>
|
||||
|
||||
<note>
|
||||
For build performance information related to the PMS, see the
|
||||
"<ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-package'>Packaging - <filename>package*.bbclass</filename></ulink>"
|
||||
section in the Yocto Project Reference Manual.
|
||||
For build performance information related to the PMS, see
|
||||
<ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-package'>Packaging - <filename>package*.bbclass</filename></ulink>
|
||||
in the Yocto Project Reference Manual.
|
||||
</note>
|
||||
|
||||
<para>
|
||||
|
||||
@@ -39,18 +39,18 @@
|
||||
|
||||
<para>
|
||||
<itemizedlist>
|
||||
<listitem><para><emphasis>Use the ADT installer script:</emphasis>
|
||||
<listitem><para><emphasis>Use the ADT Installer Script:</emphasis>
|
||||
This method is the recommended way to install the ADT because it
|
||||
automates much of the process for you.
|
||||
For example, you can configure the installation to install the QEMU emulator
|
||||
and the user-space NFS, specify which root filesystem profiles to download,
|
||||
and define the target sysroot location.</para></listitem>
|
||||
<listitem><para><emphasis>Use an existing toolchain:</emphasis>
|
||||
<listitem><para><emphasis>Use an Existing Toolchain:</emphasis>
|
||||
Using this method, you select and download an architecture-specific
|
||||
toolchain installer and then run the script to hand-install the toolchain.
|
||||
If you use this method, you just get the cross-toolchain and QEMU - you do not
|
||||
get any of the other mentioned benefits had you run the ADT Installer script.</para></listitem>
|
||||
<listitem><para><emphasis>Use the toolchain from within the Build Directory:</emphasis>
|
||||
<listitem><para><emphasis>Use the Toolchain from within the Build Directory:</emphasis>
|
||||
If you already have a
|
||||
<ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>,
|
||||
you can build the cross-toolchain within the directory.
|
||||
@@ -91,16 +91,16 @@
|
||||
<para>
|
||||
If you use BitBake to generate the ADT Installer tarball, you must
|
||||
<filename>source</filename> the environment setup script
|
||||
(<ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>&OE_INIT_FILE;</filename></ulink>)
|
||||
located in the Source Directory before running the
|
||||
BitBake command that creates the tarball.
|
||||
(<filename>&OE_INIT_FILE;</filename>) located
|
||||
in the Source Directory before running the <filename>bitbake</filename>
|
||||
command that creates the tarball.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
The following example commands download the Poky tarball, set up the
|
||||
<ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>,
|
||||
set up the environment while also creating the default Build Directory,
|
||||
and run the BitBake command that results in the tarball
|
||||
and run the <filename>bitbake</filename> command that results in the tarball
|
||||
<filename>~/yocto-project/build/tmp/deploy/sdk/adt_installer.tar.bz2</filename>:
|
||||
<literallayout class='monospaced'>
|
||||
$ cd ~
|
||||
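The literallayout in the hunk above is cut off after "$ cd ~"; a plausible continuation of that example, with the release tarball name and download URL as assumptions only, would be along these lines:

    $ mkdir yocto-project
    $ cd yocto-project
    $ wget http://downloads.yoctoproject.org/releases/yocto/yocto-1.4/poky-dylan-9.0.0.tar.bz2
    $ tar xjf poky-dylan-9.0.0.tar.bz2
    $ cd poky-dylan-9.0.0
    $ source oe-init-build-env
    $ bitbake adt-installer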
@@ -198,29 +198,24 @@
|
||||
$ cd ~/adt-installer
|
||||
$ ./adt_installer
|
||||
</literallayout>
|
||||
Once the installer begins to run, you are asked to enter the
|
||||
location for cross-toolchain installation.
|
||||
The default location is
|
||||
<filename>/opt/poky/<release></filename>.
|
||||
After either accepting the default location or selecting your
|
||||
own location, you are prompted to run the installation script
|
||||
interactively or in silent mode.
|
||||
If you want to closely monitor the installation,
|
||||
choose “I” for interactive mode rather than “S” for silent mode.
|
||||
Once the installer begins to run, you are asked to enter the location for
|
||||
cross-toolchain installation.
|
||||
The default location is <filename>/opt/poky/<release></filename>.
|
||||
After selecting the location, you are prompted to run in
|
||||
interactive or silent mode.
|
||||
If you want to closely monitor the installation, choose “I” for interactive
|
||||
mode rather than “S” for silent mode.
|
||||
Follow the prompts from the script to complete the installation.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
Once the installation completes, the ADT, which includes the
|
||||
cross-toolchain, is installed in the selected installation
|
||||
directory.
|
||||
You will notice environment setup files for the cross-toolchain
|
||||
in the installation directory, and image tarballs in the
|
||||
<filename>adt-installer</filename> directory according to your
|
||||
installer configurations, and the target sysroot located
|
||||
according to the
|
||||
<filename>YOCTOADT_TARGET_SYSROOT_LOC_<arch></filename>
|
||||
variable also in your configuration file.
|
||||
Once the installation completes, the ADT, which includes the cross-toolchain, is installed.
|
||||
You will notice environment setup files for the cross-toolchain in
|
||||
<filename>&YOCTO_ADTPATH_DIR;</filename>,
|
||||
and image tarballs in the <filename>adt-installer</filename>
|
||||
directory according to your installer configurations, and the target sysroot located
|
||||
according to the <filename>YOCTOADT_TARGET_SYSROOT_LOC_<arch></filename> variable
|
||||
also in your configuration file.
|
||||
</para>
|
||||
</section>
|
||||
</section>
|
||||
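The installer behavior described above (QEMU, user-space NFS, rootfs profiles, and the sysroot location variable) is driven by the adt-installer configuration file; a hedged excerpt with example values only:

    YOCTOADT_QEMU="Y"
    YOCTOADT_NFS_UTIL="Y"
    YOCTOADT_TARGETS="arm x86"
    YOCTOADT_ROOTFS_arm="minimal sato-sdk"
    YOCTOADT_TARGET_SYSROOT_IMAGE_arm="sato-sdk"
    YOCTOADT_TARGET_SYSROOT_LOC_arm="$HOME/test-yocto/arm"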
@@ -229,12 +224,11 @@
|
||||
<title>Using a Cross-Toolchain Tarball</title>
|
||||
|
||||
<para>
|
||||
If you want to simply install the cross-toolchain by hand, you can
|
||||
do so by running the toolchain installer.
|
||||
If you use this method to install the cross-toolchain, you
|
||||
might still need to install the target sysroot by installing and
|
||||
extracting it separately.
|
||||
For information on how to install the sysroot, see the
|
||||
If you want to simply install the cross-toolchain by hand, you can do so by running the
|
||||
toolchain installer.
|
||||
If you use this method to install the cross-toolchain and you still need to install the target
|
||||
sysroot, you will have to extract and install the sysroot separately.
|
||||
For information on how to do this, see the
|
||||
"<link linkend='extracting-the-root-filesystem'>Extracting the Root Filesystem</link>" section.
|
||||
</para>
|
||||
|
||||
@@ -254,50 +248,29 @@
|
||||
<literallayout class='monospaced'>
|
||||
poky-eglibc-x86_64-i586-toolchain-gmae-&DISTRO;.sh
|
||||
</literallayout>
|
||||
<note><para>As an alternative to steps one and two, you can
|
||||
build the toolchain installer if you have a
|
||||
<ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
|
||||
If you need GMAE, you should use the
|
||||
<filename>bitbake meta-toolchain-gmae</filename>
|
||||
<note><para>As an alternative to steps one and two, you can build the toolchain installer
|
||||
if you have a <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
|
||||
If you need GMAE, you should use the <filename>bitbake meta-toolchain-gmae</filename>
|
||||
command.
|
||||
Running the resulting installation script will support
|
||||
such development.
|
||||
If you are not concerned with GMAE, you can generate
|
||||
the toolchain installer using
|
||||
<filename>bitbake meta-toolchain</filename>.
|
||||
Either of these methods requires you to still
|
||||
install the target sysroot by installing and
|
||||
extracting it separately.
|
||||
For information on how to install the sysroot, see the
|
||||
"<link linkend='extracting-the-root-filesystem'>Extracting the Root Filesystem</link>" section.
|
||||
</para>
|
||||
<para>A final method of building the toolchain installer
|
||||
exists that has significant advantages over the previous
|
||||
two methods.
|
||||
This method results in a toolchain installer that
|
||||
contains the sysroot that matches your target root
|
||||
filesystem.
|
||||
To build this installer, use the
|
||||
<filename>bitbake image -c populate_sdk</filename>
|
||||
command.</para>
|
||||
<para>Remember, before using any
|
||||
<filename>bitbake</filename> command, you must source
|
||||
the <filename>&OE_INIT_PATH;</filename> script
|
||||
located in the Source Directory and you must make sure
|
||||
your <filename>conf/local.conf</filename> variables are
|
||||
correct.
|
||||
The resulting installation script when run will support such development.
|
||||
However, if you are not concerned with GMAE,
|
||||
you can generate the toolchain installer using
|
||||
<filename>bitbake meta-toolchain</filename>.</para>
|
||||
<para>Use the appropriate <filename>bitbake</filename> command only after you have
|
||||
sourced the <filename>&OE_INIT_PATH;</filename> script located in the Source
|
||||
Directory and you have made sure your <filename>conf/local.conf</filename>
|
||||
variables are correct.
|
||||
In particular, you need to be sure the
|
||||
<ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'><filename>MACHINE</filename></ulink>
|
||||
variable matches the architecture for which you are
|
||||
building and that the <filename>SDKMACHINE</filename>
|
||||
variable is correctly set if you are building
|
||||
a toolchain for an architecture that differs from your
|
||||
current development host machine.</para>
|
||||
<para>When the BitBake command
|
||||
completes, the toolchain installer will be in
|
||||
<filename>tmp/deploy/sdk</filename> in the Build
|
||||
Directory.</para>
|
||||
</note></para></listitem>
|
||||
variable matches the architecture for which you are building and that the
|
||||
<filename>SDKMACHINE</filename> variable is correctly set if you are building
|
||||
a toolchain for an architecture that differs from your current
|
||||
development host machine.</para>
|
||||
<para>When the <filename>bitbake</filename> command completes, the
|
||||
toolchain installer will be in <filename>tmp/deploy/sdk</filename> in the
|
||||
Build Directory.
|
||||
</para></note>
|
||||
</para></listitem>
|
||||
<listitem><para>Once you have the installer, run it to install the toolchain.
|
||||
You must change the permissions on the toolchain installer
|
||||
script so that it is executable.</para>
|
||||
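Putting the note above together, a sketch of building and locating a toolchain installer from a Build Directory; the machine and SDK machine values are placeholders:

    $ cd ~/poky
    $ source oe-init-build-env
    # in conf/local.conf:  MACHINE ?= "qemuarm"
    #                      SDKMACHINE ?= "i686"   (only if the SDK host differs from your build host)
    $ bitbake meta-toolchain
    $ ls tmp/deploy/sdk/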
@@ -326,7 +299,7 @@
|
||||
A final way of making the cross-toolchain available is to use BitBake
|
||||
to generate the toolchain within an existing
|
||||
<ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
|
||||
This method does not install the toolchain into the default
|
||||
This method does not install the toolchain into the
|
||||
<filename>/opt</filename> directory.
|
||||
As with the previous method, if you need to install the target sysroot, you must
|
||||
do that separately as well.
|
||||
@@ -355,11 +328,11 @@
|
||||
cross-toolchain generation.
|
||||
<note>If you change out of your working directory after you
|
||||
<filename>source</filename> the environment setup script and before you run
|
||||
the BitBake command, the command might not work.
|
||||
Be sure to run the BitBake command immediately
|
||||
the <filename>bitbake</filename> command, the command might not work.
|
||||
Be sure to run the <filename>bitbake</filename> command immediately
|
||||
after checking or editing the <filename>local.conf</filename> but without
|
||||
changing out of your working directory.</note>
|
||||
Once the BitBake command finishes,
|
||||
Once the <filename>bitbake</filename> command finishes,
|
||||
the cross-toolchain is generated and populated within the Build Directory.
|
||||
You will notice environment setup files for the cross-toolchain in the
|
||||
Build Directory in the <filename>tmp</filename> directory.
|
||||
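The bitbake invocation for this in-tree method is not shown in the hunk; the recipe normally used to generate the in-tree setup files is meta-ide-support, so a hedged sketch would be:

    $ cd ~/poky
    $ source oe-init-build-env
    $ bitbake meta-ide-support      # assumed target; populates tmp/environment-setup-*
    $ ls tmp/environment-setup-*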
@@ -381,8 +354,7 @@
|
||||
Before you can develop using the cross-toolchain, you need to set up the
|
||||
cross-development environment by sourcing the toolchain's environment setup script.
|
||||
If you used the ADT Installer or hand-installed cross-toolchain,
|
||||
then you can find this script in the directory you chose for installation.
|
||||
The default installation directory is the <filename>&YOCTO_ADTPATH_DIR;</filename>
|
||||
then you can find this script in the <filename>&YOCTO_ADTPATH_DIR;</filename>
|
||||
directory.
|
||||
If you installed the toolchain in the
|
||||
<ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>,
|
||||
@@ -393,11 +365,10 @@
|
||||
<para>
|
||||
Be sure to run the environment setup script that matches the architecture for
|
||||
which you are developing.
|
||||
Environment setup scripts begin with the string "<filename>environment-setup</filename>"
|
||||
Environment setup scripts begin with the string “<filename>environment-setup</filename>”
|
||||
and include as part of their name the architecture.
|
||||
For example, the toolchain environment setup script for a 64-bit
|
||||
IA-based architecture installed in the default installation directory
|
||||
would be the following:
|
||||
For example, the toolchain environment setup script for a 64-bit IA-based architecture would
|
||||
be the following:
|
||||
<literallayout class='monospaced'>
|
||||
&YOCTO_ADTPATH_DIR;/environment-setup-x86_64-poky-linux
|
||||
</literallayout>
|
||||
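A quick, hedged way to confirm the right script was sourced is to inspect the exported compiler variable; the exact flags vary by release and architecture:

    $ source /opt/poky/1.4/environment-setup-x86_64-poky-linux   # path is an example
    $ echo $CC
    x86_64-poky-linux-gcc -m64 --sysroot=...                     # output shape is illustrative only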
@@ -442,8 +413,8 @@
|
||||
<para>
|
||||
If you are planning on developing against your image and you are not
|
||||
building or using one of the Yocto Project development images
|
||||
(e.g. <filename>core-image-*-dev</filename>), you must be sure to
|
||||
include the development packages as part of your image recipe.
|
||||
(e.g. core-image-*-dev), you must be sure to include the development
|
||||
packages as part of your image recipe.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
@@ -526,9 +497,9 @@
|
||||
The example extracts the root filesystem into the <filename>$HOME/qemux86-sato</filename>
|
||||
directory:
|
||||
<literallayout class='monospaced'>
|
||||
$ source $HOME/toolchain_dir/environment-setup-i586-poky-linux
|
||||
$ source $HOME/poky/build/tmp/environment-setup-i586-poky-linux
|
||||
$ runqemu-extract-sdk \
|
||||
~/Downloads/core-image-sato-sdk-qemux86-2011091411831.rootfs.tar.bz2 \
|
||||
tmp/deploy/images/core-image-sato-sdk-qemux86-2011091411831.rootfs.tar.bz2 \
|
||||
$HOME/qemux86-sato
|
||||
</literallayout>
|
||||
In this case, you could now point to the target sysroot at
|
||||
|
||||
@@ -3,8 +3,4 @@
|
||||
|
||||
<xsl:import href="http://docbook.sourceforge.net/release/xsl/current/xhtml/docbook.xsl" />
|
||||
|
||||
<xsl:param name="html.stylesheet" select="'bsp-style.css'" />
|
||||
<xsl:param name="chapter.autolabel" select="1" />
|
||||
<xsl:param name="section.autolabel" select="1" />
|
||||
<xsl:param name="section.label.includes.component.label" select="1" />
|
||||
</xsl:stylesheet>
|
||||
|
||||
@@ -1,27 +0,0 @@
|
||||
<?xml version='1.0'?>
|
||||
<xsl:stylesheet
|
||||
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
|
||||
xmlns="http://www.w3.org/1999/xhtml"
|
||||
xmlns:fo="http://www.w3.org/1999/XSL/Format"
|
||||
version="1.0">
|
||||
|
||||
<xsl:import
|
||||
href="http://docbook.sourceforge.net/release/xsl/current/eclipse/eclipse3.xsl" />
|
||||
|
||||
<xsl:param name="chunker.output.indent" select="'yes'"/>
|
||||
<xsl:param name="chunk.quietly" select="1"/>
|
||||
<xsl:param name="chunk.first.sections" select="1"/>
|
||||
<xsl:param name="chunk.section.depth" select="10"/>
|
||||
<xsl:param name="use.id.as.filename" select="1"/>
|
||||
<xsl:param name="ulink.target" select="'_self'" />
|
||||
<xsl:param name="base.dir" select="'html/bsp-guide/'"/>
|
||||
<xsl:param name="html.stylesheet" select="'../book.css'"/>
|
||||
<xsl:param name="eclipse.manifest" select="0"/>
|
||||
<xsl:param name="create.plugin.xml" select="0"/>
|
||||
<xsl:param name="suppress.navigation" select="1"/>
|
||||
<xsl:param name="generate.index" select="0"/>
|
||||
<xsl:param name="chapter.autolabel" select="1" />
|
||||
<xsl:param name="appendix.autolabel" select="1" />
|
||||
<xsl:param name="section.autolabel" select="1" />
|
||||
<xsl:param name="section.label.includes.component.label" select="1" />
|
||||
</xsl:stylesheet>
|
||||
@@ -16,9 +16,7 @@
|
||||
</imageobject>
|
||||
</mediaobject>
|
||||
|
||||
<title>
|
||||
Yocto Project Board Support Package Developer's Guide
|
||||
</title>
|
||||
<title></title>
|
||||
|
||||
<authorgroup>
|
||||
<author>
|
||||
@@ -70,34 +68,9 @@
|
||||
</revision>
|
||||
<revision>
|
||||
<revnumber>1.4</revnumber>
|
||||
<date>April 2013</date>
|
||||
<date>Sometime in 2013</date>
|
||||
<revremark>Released with the Yocto Project 1.4 Release.</revremark>
|
||||
</revision>
|
||||
<revision>
|
||||
<revnumber>1.4.1</revnumber>
|
||||
<date>June 2013</date>
|
||||
<revremark>Released with the Yocto Project 1.4.1 Release.</revremark>
|
||||
</revision>
|
||||
<revision>
|
||||
<revnumber>1.4.2</revnumber>
|
||||
<date>August 2013</date>
|
||||
<revremark>Released with the Yocto Project 1.4.2 Release.</revremark>
|
||||
</revision>
|
||||
<revision>
|
||||
<revnumber>1.4.3</revnumber>
|
||||
<date>March 2014</date>
|
||||
<revremark>Released with the Yocto Project 1.4.3 Release.</revremark>
|
||||
</revision>
|
||||
<revision>
|
||||
<revnumber>1.4.4</revnumber>
|
||||
<date>May 2014</date>
|
||||
<revremark>Released with the Yocto Project 1.4.4 Release.</revremark>
|
||||
</revision>
|
||||
<revision>
|
||||
<revnumber>1.4.5</revnumber>
|
||||
<date>July 2014</date>
|
||||
<revremark>Released with the Yocto Project 1.4.5 Release.</revremark>
|
||||
</revision>
|
||||
</revhistory>
|
||||
|
||||
<copyright>
|
||||
|
||||
@@ -185,13 +185,10 @@
|
||||
meta-crownbay/recipes-graphics/xorg-xserver/xserver-xf86-config/crownbay-noemgd/xorg.conf
|
||||
meta-crownbay/recipes-kernel/
|
||||
meta-crownbay/recipes-kernel/linux/
|
||||
meta-crownbay/recipes-kernel/linux/linux-yocto_3.2.bbappend
|
||||
meta-crownbay/recipes-kernel/linux/linux-yocto_3.4.bbappend
|
||||
meta-crownbay/recipes-kernel/linux/linux-yocto_3.8.bbappend
|
||||
meta-crownbay/recipes-kernel/linux/linux-yocto-dev.bbappend
|
||||
meta-crownbay/recipes-kernel/linux/linux-yocto-rt_3.2.bbappend
|
||||
meta-crownbay/recipes-kernel/linux/linux-yocto-rt_3.4.bbappend
|
||||
meta-crownbay/recipes-kernel/linux/linux-yocto-rt_3.8.bbappend
|
||||
meta-crownbay/recipes-kernel/linux/linux-yocto_3.2.bbappend
|
||||
meta-crownbay/recipes-kernel/linux/linux-yocto_3.4.bbappend
|
||||
</literallayout>
|
||||
</para>
|
||||
|
||||
@@ -259,9 +256,8 @@
|
||||
This file provides information on where to locate the BSP source files.
|
||||
For example, information provides where to find the sources that comprise
|
||||
the images shipped with the BSP.
|
||||
Information is also included to help you find the
|
||||
<ulink url='&YOCTO_DOCS_DEV_URL;#metadata'>Metadata</ulink>
|
||||
used to generate the images that ship with the BSP.
|
||||
Information is also included to help you find the metadata used to generate the images
|
||||
that ship with the BSP.
|
||||
</para>
|
||||
</section>
|
||||
|
||||
@@ -277,7 +273,7 @@
|
||||
<para>
|
||||
This optional area contains useful pre-built kernels and user-space filesystem
|
||||
images appropriate to the target system.
|
||||
This directory typically contains graphical (e.g. Sato) and minimal live images
|
||||
This directory typically contains graphical (e.g. sato) and minimal live images
|
||||
when the BSP tarball has been created and made available in the
|
||||
<ulink url='&YOCTO_HOME_URL;'>Yocto Project</ulink> website.
|
||||
You can use these kernels and images to get a system running and quickly get started
|
||||
@@ -316,14 +312,14 @@
|
||||
<para>
|
||||
<literallayout class='monospaced'>
|
||||
# We have a conf and classes directory, add to BBPATH
|
||||
BBPATH .= ":${LAYERDIR}"
|
||||
BBPATH := "${BBPATH}:${LAYERDIR}"
|
||||
|
||||
# We have recipes-* directories, add to BBFILES
|
||||
BBFILES += "${LAYERDIR}/recipes-*/*.bb \
|
||||
# We have a recipes directory, add to BBFILES
|
||||
BBFILES := "${BBFILES} ${LAYERDIR}/recipes-*/*.bb \
|
||||
${LAYERDIR}/recipes-*/*.bbappend"
|
||||
|
||||
BBFILE_COLLECTIONS += "bsp"
|
||||
BBFILE_PATTERN_bsp = "^${LAYERDIR}/"
|
||||
BBFILE_PATTERN_bsp := "^${LAYERDIR}/"
|
||||
BBFILE_PRIORITY_bsp = "6"
|
||||
</literallayout>
|
||||
</para>
|
||||
@@ -333,7 +329,7 @@
|
||||
Bay <filename>conf/layer.conf</filename> file:
|
||||
<literallayout class='monospaced'>
|
||||
BBFILE_COLLECTIONS += "crownbay"
|
||||
BBFILE_PATTERN_crownbay = "^${LAYERDIR}/"
|
||||
BBFILE_PATTERN_crownbay := "^${LAYERDIR}/"
|
||||
BBFILE_PRIORITY_crownbay = "6"
|
||||
</literallayout>
|
||||
</para>
|
||||
@@ -380,9 +376,9 @@
|
||||
The <filename>crownbay.conf</filename> file is used for the Crown Bay BSP
|
||||
that supports the <trademark class='registered'>Intel</trademark> Embedded
|
||||
Media and Graphics Driver (<trademark class='registered'>Intel</trademark>
|
||||
EMGD), while the <filename>crownbay-noemgd</filename> file is used for the
|
||||
Crown Bay BSP that supports Video Electronics Standards Association (VESA)
|
||||
graphics only.
|
||||
EMGD), while the <filename>crownbay-noemgd.conf</filename> file is used for the
|
||||
Crown Bay BSP that does not support the <trademark class='registered'>Intel</trademark>
|
||||
EMGD.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
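To build against one of the two machine configurations described above, the machine is selected in local.conf; a minimal sketch, with the values following directly from the .conf file names:

    # conf/local.conf
    MACHINE ?= "crownbay"
    # or, for the VESA-only / non-EMGD variant:
    # MACHINE ?= "crownbay-noemgd"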
@@ -625,8 +621,7 @@
|
||||
</para>
|
||||
|
||||
<para>
|
||||
The <ulink url='&YOCTO_DOCS_REF_URL;#var-FILESEXTRAPATHS'><filename>FILESEXTRAPATHS</filename></ulink>
|
||||
variable is in boilerplate form in the
|
||||
The <filename>FILESEXTRAPATHS</filename> variable is in boilerplate form in the
|
||||
previous example in order to make it easy to do that.
|
||||
This variable must be in your layer or BitBake will not find the patches or
|
||||
configurations even if you have them in your <filename>SRC_URI</filename>.
|
||||
@@ -666,8 +661,8 @@
|
||||
<para>
|
||||
Certain requirements exist for a released BSP to be considered
|
||||
compliant with the Yocto Project.
|
||||
Additionally, recommendations also exist.
|
||||
This section describes the requirements and recommendations for
|
||||
Additionally, a single recommendation also exists.
|
||||
This section describes the requirements and recommendation for
|
||||
released BSPs.
|
||||
</para>
|
||||
|
||||
@@ -688,11 +683,11 @@
|
||||
You should consult the packaging and distribution guidelines for your
|
||||
specific release process.
|
||||
For an example of packaging and distribution requirements, see the
|
||||
"<ulink url='https://wiki.yoctoproject.org/wiki/Third_Party_BSP_Release_Process'>Third Party BSP Release Process</ulink>"
|
||||
wiki page.</para></listitem>
|
||||
<ulink url='https://wiki.yoctoproject.org/wiki/Third_Party_BSP_Release_Process'>Third
|
||||
Party BSP Release Process</ulink> wiki page.</para></listitem>
|
||||
<listitem><para>The requirements for the BSP as it is made available to a developer
|
||||
are completely independent of the released form of the BSP.
|
||||
For example, the BSP Metadata can be contained within a Git repository
|
||||
For example, the BSP metadata can be contained within a Git repository
|
||||
and could have a directory structure completely different from what appears
|
||||
in the officially released BSP layer.</para></listitem>
|
||||
<listitem><para>It is not required that specific packages or package
|
||||
@@ -741,12 +736,12 @@
|
||||
recipes.
|
||||
The recipes themselves should follow the general guidelines
|
||||
for recipes used in the Yocto Project found in the
|
||||
"<ulink url='https://wiki.yoctoproject.org/wiki/Recipe_%26_Patch_Style_Guide'>Yocto Recipe and Patch Style Guide</ulink>".
|
||||
</para></listitem>
|
||||
<ulink url='https://wiki.yoctoproject.org/wiki/Recipe_%26_Patch_Style_Guide'>Yocto
|
||||
Recipe and Patch Style Guide</ulink>.</para></listitem>
|
||||
<listitem><para><emphasis>License File:</emphasis>
|
||||
You must include a license file in the
|
||||
<filename>meta-<bsp_name></filename> directory.
|
||||
This license covers the BSP Metadata as a whole.
|
||||
This license covers the BSP metadata as a whole.
|
||||
You must specify which license to use since there is no
|
||||
default license if one is not specified.
|
||||
See the
|
||||
@@ -775,15 +770,11 @@
|
||||
For example, this includes information on
|
||||
special variables needed to satisfy a EULA,
|
||||
or instructions on information needed to build or distribute
|
||||
binaries built from the BSP Metadata.</para></listitem>
|
||||
binaries built from the BSP metadata.</para></listitem>
|
||||
<listitem><para>The name and contact information for the
|
||||
BSP layer maintainer.
|
||||
This is the person to whom patches and questions should
|
||||
be sent.
|
||||
For information on how to find the right person, see the
|
||||
"<ulink url='&YOCTO_DOCS_DEV_URL;#how-to-submit-a-change'>How to Submit a Change</ulink>"
|
||||
section in the Yocto Project Development Manual.
|
||||
</para></listitem>
|
||||
be sent.</para></listitem>
|
||||
<listitem><para>Instructions on how to build the BSP using the BSP
|
||||
layer.</para></listitem>
|
||||
<listitem><para>Instructions on how to boot the BSP build from
|
||||
@@ -826,7 +817,7 @@
|
||||
BSP layers for each target.
|
||||
<note>It is completely possible for a developer to structure the
|
||||
working repository as a conglomeration of unrelated BSP
|
||||
files, and to possibly generate BSPs targeted for release
|
||||
files, and to possibly generate specifically targeted 'release' BSPs
|
||||
from that directory using scripts or some other mechanism.
|
||||
Such considerations are outside the scope of this document.</note>
|
||||
</para></listitem>
|
||||
@@ -883,59 +874,31 @@
|
||||
If you plan on customizing a recipe for a particular BSP, you need to do the
|
||||
following:
|
||||
<itemizedlist>
|
||||
<listitem><para>Create a <filename>.bbappend</filename>
|
||||
file for the modified recipe.
|
||||
For information on using append files, see the
|
||||
"<ulink url='&YOCTO_DOCS_DEV_URL;#using-bbappend-files'>Using .bbappend Files</ulink>"
|
||||
section in the Yocto Project Development Manual.
|
||||
</para></listitem>
|
||||
<listitem><para>
|
||||
Ensure your directory structure in the BSP layer
|
||||
that supports your machine is such that it can be found
|
||||
by the build system.
|
||||
See the example later in this section for more information.
|
||||
</para></listitem>
|
||||
<listitem><para>
|
||||
Put the append file in a directory whose name matches
|
||||
the machine's name and is located in an appropriate
|
||||
sub-directory inside the BSP layer (i.e.
|
||||
<filename>recipes-bsp</filename>, <filename>recipes-graphics</filename>,
|
||||
<filename>recipes-core</filename>, and so forth).
|
||||
</para></listitem>
|
||||
<listitem><para>Place the BSP-specific files in the directory named for
|
||||
your machine inside the BSP layer.
|
||||
</para></listitem>
|
||||
<listitem><para>Include within the BSP layer a <filename>.bbappend</filename>
|
||||
file for the modified recipe.</para></listitem>
|
||||
<listitem><para>Place the BSP-specific file in the BSP's recipe
|
||||
<filename>.bbappend</filename> file path under a directory named
|
||||
after the machine.</para></listitem>
|
||||
</itemizedlist>
|
||||
</para>
|
||||
|
||||
<para>
|
||||
Following is a specific example to help you better understand the process.
|
||||
Consider an example that customizes a recipe by adding
|
||||
To better understand this, consider an example that customizes a recipe by adding
|
||||
a BSP-specific configuration file named <filename>interfaces</filename> to the
|
||||
<filename>init-ifupdown_1.0.bb</filename> recipe for machine "xyz".
|
||||
<filename>netbase_5.0.bb</filename> recipe for machine "xyz".
|
||||
Do the following:
|
||||
<orderedlist>
|
||||
<listitem><para>Edit the <filename>init-ifupdown_1.0.bbappend</filename> file so that it
|
||||
<listitem><para>Edit the <filename>netbase_4.47.bbappend</filename> file so that it
|
||||
contains the following:
|
||||
<literallayout class='monospaced'>
|
||||
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
|
||||
PRINC := "${@int(PRINC) + 2}"
|
||||
</literallayout>
|
||||
The append file needs to be in the
|
||||
<filename>meta-xyz/recipes-core/init-ifupdown</filename> directory.
|
||||
</para></listitem>
|
||||
</literallayout></para></listitem>
|
||||
<listitem><para>Create and place the new <filename>interfaces</filename>
|
||||
configuration file in the BSP's layer here:
|
||||
<literallayout class='monospaced'>
|
||||
meta-xyz/recipes-core/init-ifupdown/files/xyz/interfaces
|
||||
</literallayout>
|
||||
The
|
||||
<ulink url='&YOCTO_DOCS_REF_URL;#var-FILESEXTRAPATHS'><filename>FILESEXTRAPATHS</filename></ulink>
|
||||
variable in the append files extends the search path
|
||||
the build system uses to find files during the build.
|
||||
Consequently, for this example you need to have the
|
||||
<filename>files</filename> directory in the same location
|
||||
as your append file.</para></listitem>
|
||||
meta-xyz/recipes-core/netbase/files/xyz/interfaces
|
||||
</literallayout></para></listitem>
|
||||
</orderedlist>
|
||||
</para>
|
||||
</section>
|
||||
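The layout implied by the steps above, using the recipe names from one side of the diff (the other side uses netbase instead of init-ifupdown), would be roughly:

    meta-xyz/recipes-core/init-ifupdown/init-ifupdown_1.0.bbappend
    meta-xyz/recipes-core/init-ifupdown/files/xyz/interfaces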
@@ -969,9 +932,9 @@
|
||||
|
||||
<para>
|
||||
For cases where you can substitute a free component and still
|
||||
maintain the system's functionality, the "Downloads" page from the
|
||||
<ulink url='&YOCTO_HOME_URL;'>Yocto Project website's</ulink>
|
||||
makes available de-featured BSPs
|
||||
maintain the system's functionality, the Yocto Project website's
|
||||
<ulink url='&YOCTO_HOME_URL;/download/all?keys=&download_type=1&download_version='>BSP
|
||||
Download Page</ulink> makes available de-featured BSPs
|
||||
that are completely free of any IP encumbrances.
|
||||
For these cases, you can use the substitution directly and
|
||||
without any further licensing requirements.
|
||||
@@ -1024,9 +987,9 @@
|
||||
can build the encumbered image with no change at all
|
||||
to the normal build process.</para></listitem>
|
||||
<listitem><para><emphasis>Get a pre-built version of the BSP:</emphasis>
|
||||
You can get this type of BSP by visiting the
|
||||
"Downloads" page of the
|
||||
<ulink url='&YOCTO_HOME_URL;'>Yocto Project website</ulink>.
|
||||
You can get this type of BSP by visiting the Yocto Project website's
|
||||
<ulink url='&YOCTO_HOME_URL;/download'>Download</ulink>
|
||||
page and clicking on "BSP Downloads".
|
||||
You can download BSP tarballs that contain proprietary components
|
||||
after agreeing to the licensing
|
||||
requirements of each of the individually encumbered
|
||||
@@ -1061,7 +1024,7 @@
|
||||
The Yocto Project includes a couple of tools that enable
|
||||
you to create a <link linkend='bsp-layers'>BSP layer</link>
|
||||
from scratch and do basic configuration and maintenance
|
||||
of the kernel without ever looking at a Metadata file.
|
||||
of the kernel without ever looking at a metadata file.
|
||||
These tools are <filename>yocto-bsp</filename> and <filename>yocto-kernel</filename>,
|
||||
respectively.
|
||||
</para>
|
||||
@@ -1148,7 +1111,7 @@
|
||||
</para>
|
||||
|
||||
<para>
|
||||
For any sub-command, you can use the word "help" option just before the
|
||||
For any sub-command, you can also use the word 'help' just before the
|
||||
sub-command to get more extensive documentation:
|
||||
<literallayout class='monospaced'>
|
||||
$ yocto-bsp help create
|
||||
@@ -1170,7 +1133,7 @@
|
||||
The value of the 'karch' parameter determines the set of files
|
||||
that will be generated for the BSP, along with the specific set of
|
||||
'properties' that will be used to fill out the BSP-specific
|
||||
portions of the BSP. The possible values for the 'karch' parameter
|
||||
portions of the BSP. The possible values for the 'karch' parameter
|
||||
can be listed via 'yocto-bsp list karch'.
|
||||
|
||||
...
|
||||
@@ -1182,16 +1145,6 @@
|
||||
on them, you should find it relatively straightforward to discover the commands
|
||||
necessary to create a BSP and perform basic kernel maintenance on that BSP using
|
||||
the tools.
|
||||
<note>
|
||||
You can also use the <filename>yocto-layer</filename> tool to create
|
||||
a "generic" layer.
|
||||
For information on this tool, see the
|
||||
"<ulink url='&YOCTO_DOCS_DEV_URL;#creating-a-general-layer-using-the-yocto-layer-script'>Creating a General Layer Using the yocto-layer Script</ulink>"
|
||||
section in the Yocto Project Development Guide.
|
||||
</note>
|
||||
</para>
|
||||
|
||||
<para>
|
||||
The next sections provide a concrete starting point to expand on a few points that
|
||||
might not be immediately obvious or that could use further explanation.
|
||||
</para>
|
||||
@@ -1207,9 +1160,6 @@
|
||||
by the Yocto Project, as well as QEMU versions of the same.
|
||||
The default mode of the script's operation is to prompt you for information needed
|
||||
to generate the BSP layer.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
For the current set of BSPs, the script prompts you for various important
|
||||
parameters such as:
|
||||
<itemizedlist>
|
||||
@@ -1250,7 +1200,7 @@
|
||||
Of the available architectures, <filename>qemu</filename> is the only architecture
|
||||
that causes the script to prompt you further for an actual architecture.
|
||||
In every other way, this architecture is representative of how creating a BSP for
|
||||
an actual machine would work.
|
||||
a 'real' machine would work.
|
||||
The reason the example uses this architecture is because it is an emulated architecture
|
||||
and can easily be followed without requiring actual hardware.
|
||||
</para>
|
||||
@@ -1262,9 +1212,8 @@
|
||||
and providing an invalid response causes the script to accept the default value.
|
||||
Once the script completes, the new <filename>meta-myarm</filename> BSP layer
|
||||
is created in the current working directory.
|
||||
This example assumes you have sourced the
|
||||
<ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>&OE_INIT_FILE;</filename></ulink>
|
||||
and are currently in the top-level folder of the
|
||||
This example assumes you have sourced the &OE_INIT_FILE; and are currently
|
||||
in the top-level folder of the
|
||||
<ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
|
||||
</para>
|
||||
|
||||
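The interactive transcript in the next hunk would be produced by a command along the following lines, with the layer name and kernel architecture matching the example:

    $ yocto-bsp create myarm qemu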
@@ -1279,17 +1228,17 @@
|
||||
4) PowerPC (32-bit)
|
||||
5) MIPS (32-bit)
|
||||
3
|
||||
Would you like to use the default (3.8) kernel? (y/n) [default: y]
|
||||
Would you like to use the default (3.4) kernel? (y/n) [default: y]
|
||||
Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n] [default: y]
|
||||
Getting branches from remote repo git://git.yoctoproject.org/linux-yocto-3.8.git...
|
||||
Getting branches from remote repo git://git.yoctoproject.org/linux-yocto-3.4.git...
|
||||
Please choose a machine branch to base your new BSP branch on: [default: standard/base]
|
||||
1) standard/arm-versatile-926ejs
|
||||
2) standard/base
|
||||
3) standard/beagleboard
|
||||
4) standard/ck
|
||||
4) standard/cedartrail
|
||||
5) standard/crownbay
|
||||
6) standard/edf
|
||||
7) standard/emenlow
|
||||
6) standard/emenlow
|
||||
7) standard/fishriver
|
||||
8) standard/fri2
|
||||
9) standard/fsl-mpc8315e-rdb
|
||||
10) standard/mti-malta32
|
||||
@@ -1301,26 +1250,25 @@
|
||||
Would you like SMP support? (y/n) [default: y]
|
||||
Does your BSP have a touchscreen? (y/n) [default: n]
|
||||
Does your BSP have a keyboard? (y/n) [default: y]
|
||||
|
||||
New qemu BSP created in meta-myarm
|
||||
</literallayout>
|
||||
Let's take a closer look at the example now:
|
||||
<orderedlist>
|
||||
<listitem><para>For the QEMU architecture,
|
||||
<listitem><para>For the <filename>qemu</filename> architecture,
|
||||
the script first prompts you for which emulated architecture to use.
|
||||
In the example, we use the ARM architecture.
|
||||
In the example, we use the <filename>arm</filename> architecture.
|
||||
</para></listitem>
|
||||
<listitem><para>The script then prompts you for the kernel.
|
||||
The default 3.8 kernel is acceptable.
|
||||
The default 3.4 kernel is acceptable.
|
||||
So, the example accepts the default.
|
||||
If you enter 'n', the script prompts you to further enter the kernel
|
||||
you do want to use (e.g. 3.2, 3.2_preempt-rt, and so forth.).</para></listitem>
|
||||
you do want to use (e.g. 3.0, 3.2_preempt-rt, and so forth.).</para></listitem>
|
||||
<listitem><para>Next, the script asks whether you would like to have a new
|
||||
branch created especially for your BSP in the local
|
||||
<ulink url='&YOCTO_DOCS_DEV_URL;#local-kernel-files'>Linux Yocto Kernel</ulink>
|
||||
Git repository.
|
||||
If not, then the script re-uses an existing branch.</para>
|
||||
<para>In this example, the default (or "yes") is accepted.
|
||||
<para>In this example, the default (or 'yes') is accepted.
|
||||
Thus, a new branch is created for the BSP rather than using a common, shared
|
||||
branch.
|
||||
The new branch is the branch committed to for any patches you might later add.
|
||||
@@ -1332,8 +1280,8 @@
|
||||
you are now given the opportunity to select a particular machine branch on
|
||||
which to base your new BSP-specific machine branch
|
||||
(or to re-use if you had elected to not create a new branch).
|
||||
Because this example is generating an ARM-based BSP, the example
|
||||
uses <filename>#1</filename> at the prompt, which selects the ARM-versatile branch.
|
||||
Because this example is generating an <filename>arm</filename> BSP, the example
|
||||
uses <filename>#1</filename> at the prompt, which selects the arm-versatile branch.
|
||||
</para></listitem>
|
||||
<listitem><para>The remainder of the prompts are routine.
|
||||
Defaults are accepted for each.</para></listitem>
|
||||
@@ -1364,7 +1312,7 @@
|
||||
</literallayout>
|
||||
Adding the layer to this file allows the build system to build the BSP and
|
||||
the <filename>yocto-kernel</filename> tool to be able to find the layer and
|
||||
other Metadata it needs on which to operate.
|
||||
other metadata it needs on which to operate.
|
||||
</para>
|
||||
</section>
|
||||
|
||||
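The bblayers.conf edit referred to above would look roughly like this; the paths are placeholders for your own checkout:

    BBLAYERS ?= " \
      /home/user/poky/meta \
      /home/user/poky/meta-yocto \
      /home/user/poky/meta-yocto-bsp \
      /home/user/poky/meta-myarm \
      "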
@@ -1403,13 +1351,6 @@
|
||||
patch list List the patches associated with a BSP
|
||||
patch add Patch the Yocto kernel for a BSP
|
||||
patch rm Remove patches from a BSP
|
||||
feature list List the features used by a BSP
|
||||
feature add Have a BSP use a feature
|
||||
feature rm Have a BSP stop using a feature
|
||||
features list List the features available to BSPs
|
||||
feature describe Describe a particular feature
|
||||
feature create Create a new BSP-local feature
|
||||
feature destroy Remove a BSP-local feature
|
||||
|
||||
See 'yocto-kernel help COMMAND' for more information on a specific command.
|
||||
|
||||
@@ -1486,7 +1427,7 @@
|
||||
Added items:
|
||||
CONFIG_MISC_DEVICES=y
|
||||
|
||||
$ yocto-kernel config add myarm CONFIG_YOCTO_TESTMOD=y
|
||||
$ yocto-kernel config add myarm KCONFIG_YOCTO_TESTMOD=y
|
||||
Added items:
|
||||
CONFIG_YOCTO_TESTMOD=y
|
||||
</literallayout>
|
||||
|
||||
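To confirm the settings added in the hunk above, the matching list sub-command can be used; the output shown is only indicative:

    $ yocto-kernel config list myarm
    The current configuration for the 'myarm' BSP is:
      CONFIG_MISC_DEVICES=y
      CONFIG_YOCTO_TESTMOD=y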
@@ -3,8 +3,6 @@
|
||||
|
||||
<xsl:import href="http://docbook.sourceforge.net/release/xsl/current/xhtml/docbook.xsl" />
|
||||
|
||||
<xsl:param name="html.stylesheet" select="'dev-style.css'" />
|
||||
<xsl:param name="chapter.autolabel" select="1" />
|
||||
<xsl:param name="section.autolabel" select="1" />
|
||||
<xsl:param name="section.label.includes.component.label" select="1" />
|
||||
<!-- <xsl:param name="generate.toc" select="'article nop'"></xsl:param> -->
|
||||
|
||||
</xsl:stylesheet>
|
||||
|
||||
@@ -1,27 +0,0 @@
|
||||
<?xml version='1.0'?>
|
||||
<xsl:stylesheet
|
||||
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
|
||||
xmlns="http://www.w3.org/1999/xhtml"
|
||||
xmlns:fo="http://www.w3.org/1999/XSL/Format"
|
||||
version="1.0">
|
||||
|
||||
<xsl:import
|
||||
href="http://docbook.sourceforge.net/release/xsl/current/eclipse/eclipse3.xsl" />
|
||||
|
||||
<xsl:param name="chunker.output.indent" select="'yes'"/>
|
||||
<xsl:param name="chunk.quietly" select="1"/>
|
||||
<xsl:param name="chunk.first.sections" select="1"/>
|
||||
<xsl:param name="chunk.section.depth" select="10"/>
|
||||
<xsl:param name="use.id.as.filename" select="1"/>
|
||||
<xsl:param name="ulink.target" select="'_self'" />
|
||||
<xsl:param name="base.dir" select="'html/dev-manual/'"/>
|
||||
<xsl:param name="html.stylesheet" select="'../book.css'"/>
|
||||
<xsl:param name="eclipse.manifest" select="0"/>
|
||||
<xsl:param name="create.plugin.xml" select="0"/>
|
||||
<xsl:param name="suppress.navigation" select="1"/>
|
||||
<xsl:param name="generate.index" select="0"/>
|
||||
<xsl:param name="chapter.autolabel" select="1" />
|
||||
<xsl:param name="appendix.autolabel" select="1" />
|
||||
<xsl:param name="section.autolabel" select="1" />
|
||||
<xsl:param name="section.label.includes.component.label" select="1" />
|
||||
</xsl:stylesheet>
|
||||
@@ -10,61 +10,43 @@
|
||||
|
||||
<para>
|
||||
Welcome to the Yocto Project Development Manual!
|
||||
This manual provides information on how to use the Yocto Project to
|
||||
develop embedded Linux images and user-space applications that
|
||||
run on targeted devices.
|
||||
The manual provides an overview of image, kernel, and
|
||||
user-space application development using the Yocto Project.
|
||||
Because much of the information in this manual is general, it
|
||||
contains many references to other sources where you can find more
|
||||
detail.
|
||||
For example, you can find detailed information on Git, repositories,
|
||||
and open source in general in many places on the Internet.
|
||||
Another example specific to the Yocto Project is how to quickly
|
||||
set up your host development system and build an image, which you
|
||||
find in the
|
||||
<ulink url='&YOCTO_DOCS_QS_URL;'>Yocto Project Quick Start</ulink>.
|
||||
<note>
|
||||
By default, using the Yocto Project creates a Poky distribution.
|
||||
However, you can create your own distribution by providing key
|
||||
<link linkend='metadata'>Metadata</link>.
|
||||
A good example is Angstrom, which has had a distribution
|
||||
based on the Yocto Project since its inception.
|
||||
Other examples include commercial distributions like
|
||||
Wind River Linux, Mentor Embedded Linux, and ENEA Linux.
|
||||
See the "<link linkend='creating-your-own-distribution'>Creating Your Own Distribution</link>"
|
||||
section for more information.
|
||||
</note>
|
||||
This manual gives you an idea of how to use the Yocto Project to develop embedded Linux
|
||||
images and user-space applications to run on targeted devices.
|
||||
Reading this manual gives you an overview of image, kernel, and user-space application development
|
||||
using the Yocto Project.
|
||||
Because much of the information in this manual is general, it contains many references to other
|
||||
sources where you can find more detail.
|
||||
For example, detailed information on Git, repositories and open source in general
|
||||
can be found in many places.
|
||||
Another example is how to get set up to use the Yocto Project, which our Yocto Project
|
||||
Quick Start covers.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
The Yocto Project Development Manual does, however, provide
|
||||
guidance and examples on how to change the kernel source code,
|
||||
reconfigure the kernel, and develop an application using the
|
||||
popular <trademark class='trade'>Eclipse</trademark> IDE.
|
||||
The Yocto Project Development Manual, however, does provide detailed examples
|
||||
on how to change the kernel source code, reconfigure the kernel, and develop
|
||||
an application using the popular <trademark class='trade'>Eclipse</trademark> IDE.
|
||||
</para>
|
||||
</section>
|
||||
|
||||
<section id='what-this-manual-provides'>
|
||||
<title>What This Manual Provides</title>
|
||||
<title>What this Manual Provides</title>
|
||||
|
||||
<para>
|
||||
The following list describes what you can get from this guide:
|
||||
<itemizedlist>
|
||||
<listitem><para>Information that lets you get set
|
||||
up to develop using the Yocto Project.</para></listitem>
|
||||
<listitem><para>Information to help developers who are new to
|
||||
the open source environment and to the distributed revision
|
||||
control system Git, which the Yocto Project uses.
|
||||
</para></listitem>
|
||||
<listitem><para>An understanding of common end-to-end
|
||||
development models and tasks.</para></listitem>
|
||||
<listitem><para>Information about common development tasks
|
||||
generally used during image development for
|
||||
embedded devices.
|
||||
</para></listitem>
|
||||
<listitem><para>Many references to other sources of related
|
||||
information.</para></listitem>
|
||||
<listitem><para>Information to help developers who are new to the open source environment
|
||||
and to the distributed revision control system Git, which the Yocto Project
|
||||
uses.</para></listitem>
|
||||
<listitem><para>An understanding of common end-to-end development models and tasks.</para></listitem>
|
||||
<listitem><para>Development case overviews for both system development and user-space
|
||||
applications.</para></listitem>
|
||||
<listitem><para>An overview and understanding of the emulation environment used with
|
||||
the Yocto Project - the Quick EMUlator (QEMU).</para></listitem>
|
||||
<listitem><para>An understanding of basic kernel architecture and concepts.</para></listitem>
|
||||
<listitem><para>Many references to other sources of related information.</para></listitem>
|
||||
</itemizedlist>
|
||||
</para>
|
||||
</section>
|
||||
@@ -100,7 +82,7 @@
|
||||
need to supplement it with other information.
|
||||
The following list presents other sources of information you might find helpful:
|
||||
<itemizedlist>
|
||||
<listitem><para><emphasis><ulink url='&YOCTO_HOME_URL;'>Yocto Project Website</ulink>:
|
||||
<listitem><para><emphasis>The <ulink url='&YOCTO_HOME_URL;'>Yocto Project Website</ulink>:
|
||||
</emphasis> The home page for the Yocto Project provides lots of information on the project
|
||||
as well as links to software and documentation.</para></listitem>
|
||||
<listitem><para><emphasis>
|
||||
@@ -109,7 +91,8 @@
|
||||
<listitem><para><emphasis>
|
||||
<ulink url='&YOCTO_DOCS_REF_URL;'>Yocto Project Reference Manual</ulink>:</emphasis> This manual is a reference
|
||||
guide to the OpenEmbedded build system known as "Poky."
|
||||
</para></listitem>
|
||||
The manual also contains a reference chapter on Board Support Package (BSP)
|
||||
layout.</para></listitem>
|
||||
<listitem><para><emphasis>
|
||||
<ulink url='&YOCTO_DOCS_ADT_URL;'>Yocto Project Application Developer's Guide</ulink>:</emphasis>
|
||||
This guide provides information that lets you get going with the Application
|
||||
@@ -120,15 +103,9 @@
|
||||
This guide defines the structure for BSP components.
|
||||
Having a commonly understood structure encourages standardization.</para></listitem>
|
||||
<listitem><para><emphasis>
|
||||
<ulink url='&YOCTO_DOCS_KERNEL_DEV_URL;'>Yocto Project Linux Kernel Development Manual</ulink>:</emphasis>
|
||||
This manual describes how to work with Linux Yocto kernels as well as provides a bit
|
||||
of conceptual information on the construction of the Yocto Linux kernel tree.
|
||||
</para></listitem>
|
||||
<listitem><para><emphasis>
|
||||
<ulink url='&YOCTO_DOCS_PROF_URL;'>Yocto Project Profiling and Tracing Manual</ulink>:</emphasis>
|
||||
This manual presents a set of common and generally useful tracing and
|
||||
profiling schemes along with their applications (as appropriate) to each tool.
|
||||
</para></listitem>
|
||||
<ulink url='&YOCTO_DOCS_KERNEL_URL;'>Yocto Project Kernel Architecture and Use Manual</ulink>:</emphasis>
|
||||
This manual describes the architecture of the Yocto Project kernel and provides
|
||||
some work flow examples.</para></listitem>
|
||||
<listitem><para><emphasis>
|
||||
<ulink url='http://www.youtube.com/watch?v=3ZlOu-gLsh0'>
|
||||
Eclipse IDE Yocto Plug-in</ulink>:</emphasis> A step-by-step instructional video that
|
||||
@@ -138,8 +115,8 @@
|
||||
<ulink url='&YOCTO_WIKI_URL;/wiki/FAQ'>FAQ</ulink>:</emphasis>
|
||||
A list of commonly asked questions and their answers.</para></listitem>
|
||||
<listitem><para><emphasis>
|
||||
<ulink url='&YOCTO_RELEASE_NOTES;'>Release Notes</ulink>:</emphasis>
|
||||
Features, updates and known issues for the current
|
||||
<ulink url='&YOCTO_HOME_URL;/download/yocto/yocto-project-&DISTRO;-release-notes-poky-&POKYVERSION;'>
|
||||
Release Notes</ulink>:</emphasis> Features, updates and known issues for the current
|
||||
release of the Yocto Project.</para></listitem>
|
||||
<listitem><para><emphasis>
|
||||
<ulink url='&YOCTO_HOME_URL;/tools-resources/projects/hob'>
|
||||
@@ -147,13 +124,11 @@
|
||||
Hob's primary goal is to enable a user to perform common tasks more easily.</para></listitem>
|
||||
<listitem><para><emphasis>
|
||||
<ulink url='&YOCTO_HOME_URL;/download/build-appliance-0'>
|
||||
Build Appliance</ulink>:</emphasis> A virtual machine that
|
||||
enables you to build and boot a custom embedded Linux image
|
||||
with the Yocto Project using a non-Linux development system.
|
||||
For more information, see the
|
||||
<ulink url='&YOCTO_HOME_URL;/documentation/build-appliance-manual'>Build Appliance</ulink>
|
||||
page.
|
||||
</para></listitem>
|
||||
Build Appliance</ulink>:</emphasis> A bootable custom embedded Linux image you can
|
||||
either build using a non-Linux development system (VMware applications) or download
|
||||
from the Yocto Project website.
|
||||
See the <ulink url='&YOCTO_HOME_URL;/documentation/build-appliance-manual'>Build Appliance</ulink>
|
||||
page for more information.</para></listitem>
|
||||
<listitem><para><emphasis>
|
||||
<ulink url='&YOCTO_BUGZILLA_URL;'>Bugzilla</ulink>:</emphasis>
|
||||
The bug tracking application the Yocto Project uses.
|
||||
@@ -166,9 +141,7 @@
|
||||
<listitem><para><ulink url='&YOCTO_LISTS_URL;/listinfo/yocto'></ulink> for a
|
||||
Yocto Project Discussions mailing list.</para></listitem>
|
||||
<listitem><para><ulink url='&YOCTO_LISTS_URL;/listinfo/poky'></ulink> for a
|
||||
Yocto Project Discussions mailing list about the
|
||||
OpenEmbedded build system (Poky).
|
||||
</para></listitem>
|
||||
Yocto Project Discussions mailing list about the Poky build system.</para></listitem>
|
||||
<listitem><para><ulink url='&YOCTO_LISTS_URL;/listinfo/yocto-announce'></ulink>
|
||||
for a mailing list to receive official Yocto Project announcements for developments
|
||||
as well as Yocto Project milestones.</para></listitem>
|
||||
@@ -180,6 +153,16 @@
|
||||
Two IRC channels on freenode are available
|
||||
for Yocto Project and Poky discussions: <filename>#yocto</filename> and
|
||||
<filename>#poky</filename>, respectively.</para></listitem>
|
||||
<listitem><para><emphasis>
|
||||
<ulink url='&OH_HOME_URL;'>OpenedHand</ulink>:</emphasis>
|
||||
The company that initially developed the Poky project, which is the basis
|
||||
for the OpenEmbedded build system used by the Yocto Project.
|
||||
OpenedHand was acquired by Intel Corporation in 2008.</para></listitem>
|
||||
<listitem><para><emphasis>
|
||||
<ulink url='http://www.intel.com/'>Intel Corporation</ulink>:</emphasis>
|
||||
A multinational semiconductor chip manufacturer whose Software and
|
||||
Services Group created and supports the Yocto Project.
|
||||
Intel acquired OpenedHand in 2008.</para></listitem>
|
||||
<listitem><para><emphasis>
|
||||
<ulink url='&OE_HOME_URL;'>OpenEmbedded</ulink>:</emphasis>
|
||||
The build system used by the Yocto Project.
|
||||
@@ -192,7 +175,7 @@
|
||||
<listitem><para><emphasis>
|
||||
BitBake User Manual:</emphasis>
|
||||
A comprehensive guide to the BitBake tool.
|
||||
If you want information on BitBake, see the user manual included in the
|
||||
If you want information on BitBake, see the user manual included in the
|
||||
<filename>bitbake/doc/manual</filename> directory of the
|
||||
<link linkend='source-directory'>Source Directory</link>.</para></listitem>
|
||||
<listitem><para><emphasis>
|
||||
|
||||
@@ -33,15 +33,8 @@
|
||||
<para>
|
||||
You can use the OpenEmbedded build system, which uses
|
||||
BitBake to develop complete Linux
|
||||
images and associated user-space applications for architectures based
|
||||
on ARM, MIPS, PowerPC, x86 and x86-64.
|
||||
<note>
|
||||
By default, using the Yocto Project creates a Poky distribution.
|
||||
However, you can create your own distribution by providing key
|
||||
<link linkend='metadata'>Metadata</link>.
|
||||
See the "<link linkend='creating-your-own-distribution'>Creating Your Own Distribution</link>"
|
||||
section for more information.
|
||||
</note>
|
||||
images and associated user-space applications for architectures based on ARM, MIPS, PowerPC,
|
||||
x86 and x86-64.
|
||||
While the Yocto Project does not provide a strict testing framework,
|
||||
it does provide or generate for you artifacts that let you perform target-level and
|
||||
emulated testing and debugging.
|
||||
@@ -72,12 +65,9 @@
|
||||
<listitem><para><emphasis>Packages:</emphasis> The OpenEmbedded build system
|
||||
requires certain packages exist on your development system (e.g. Python 2.6 or 2.7).
|
||||
See "<ulink url='&YOCTO_DOCS_QS_URL;#packages'>The Packages</ulink>"
|
||||
section in the Yocto Project Quick Start and the
|
||||
"<ulink url='&YOCTO_DOCS_REF_URL;#required-packages-for-the-host-development-system'>Required Packages for the Host Development System</ulink>"
|
||||
section in the Yocto Project Reference Manual for the exact
|
||||
package requirements and the installation commands to install
|
||||
them for the supported distributions.
|
||||
</para></listitem>
|
||||
section in the Yocto Project Quick Start for the exact package
|
||||
requirements and the installation commands to install them
|
||||
for the supported distributions.</para></listitem>
|
||||
<listitem id='local-yp-release'><para><emphasis>Yocto Project Release:</emphasis>
|
||||
You need a release of the Yocto Project.
|
||||
You set that up with a local <link linkend='source-directory'>Source Directory</link>
|
||||
@@ -88,15 +78,12 @@
|
||||
hierarchical set of files as the "Source Directory."
|
||||
</note>
|
||||
<itemizedlist>
|
||||
<listitem><para><emphasis>Tarball Extraction:</emphasis>
|
||||
If you are not going to contribute back into the Yocto
|
||||
Project, you can simply go to the
|
||||
<ulink url='&YOCTO_HOME_URL;'>Yocto Project Website</ulink>,
|
||||
select the "Downloads" tab, and choose what you want.
|
||||
Once you have the tarball, just extract it into a
|
||||
directory of your choice.</para>
|
||||
<para>For example, the following command extracts the
|
||||
Yocto Project &DISTRO; release tarball
|
||||
<listitem><para><emphasis>Tarball Extraction:</emphasis> If you are not going to contribute
|
||||
back into the Yocto Project, you can simply download a Yocto Project release you want
|
||||
from the website’s <ulink url='&YOCTO_HOME_URL;/download'>download page</ulink>.
|
||||
Once you have the tarball, just extract it into a directory of your choice.</para>
|
||||
<para>For example, the following command extracts the Yocto Project &DISTRO;
|
||||
release tarball
|
||||
into the current working directory and sets up the local Source Directory
|
||||
with a top-level folder named <filename>&YOCTO_POKY;</filename>:
|
||||
<literallayout class='monospaced'>
|
||||
@@ -110,23 +97,23 @@
|
||||
Git repository of the upstream <filename>poky</filename> source repository.
|
||||
Doing so creates a repository with a complete history of changes and allows
|
||||
you to easily submit your changes upstream to the project.
|
||||
Because you clone the repository, you have access to all the Yocto Project development
|
||||
Because you cloned the repository, you have access to all the Yocto Project development
|
||||
branches and tag names used in the upstream repository.</para>
|
||||
<note>You can view the Yocto Project Source Repositories at
|
||||
<ulink url='&YOCTO_GIT_URL;/cgit.cgi'></ulink></note>
|
||||
<para>The following transcript shows how to clone the <filename>poky</filename>
|
||||
Git repository into the current working directory.
|
||||
<note>You can view the Yocto Project Source Repositories at
|
||||
<ulink url='&YOCTO_GIT_URL;/cgit.cgi'></ulink></note>
|
||||
The command creates the local repository in a directory named <filename>poky</filename>.
|
||||
For information on Git used within the Yocto Project, see the
|
||||
"<link linkend='git'>Git</link>" section.
|
||||
<literallayout class='monospaced'>
|
||||
$ git clone git://git.yoctoproject.org/poky
|
||||
Cloning into 'poky'...
|
||||
remote: Counting objects: 183981, done.
|
||||
remote: Compressing objects: 100% (47428/47428), done.
|
||||
remote: Total 183981 (delta 132271), reused 183703 (delta 132044)
|
||||
Receiving objects: 100% (183981/183981), 89.71 MiB | 2.93 MiB/s, done.
|
||||
Resolving deltas: 100% (132271/132271), done.
|
||||
Initialized empty Git repository in /home/scottrif/poky/.git/
|
||||
remote: Counting objects: 141863, done.
|
||||
remote: Compressing objects: 100% (38624/38624), done.
|
||||
remote: Total 141863 (delta 99661), reused 141816 (delta 99614)
|
||||
Receiving objects: 100% (141863/141863), 76.64 MiB | 126 KiB/s, done.
|
||||
Resolving deltas: 100% (99661/99661), done.
|
||||
</literallayout></para>
|
||||
<para>For another example of how to set up your own local Git repositories, see this
|
||||
<ulink url='&YOCTO_WIKI_URL;/wiki/Transcript:_from_git_checkout_to_meta-intel_BSP'>
|
||||
@@ -145,32 +132,33 @@
|
||||
For simplicity, it is recommended that you create these structures outside of the
|
||||
Source Directory (usually <filename>poky</filename>).</para>
|
||||
<para>As an example, the following transcript shows how to create the bare clone
|
||||
of the <filename>linux-yocto-3.8</filename> kernel and then create a copy of
|
||||
of the <filename>linux-yocto-3.4</filename> kernel and then create a copy of
|
||||
that clone.
|
||||
<note>When you have a local Yocto Project kernel Git repository, you can
|
||||
reference that repository rather than the upstream Git repository as
|
||||
part of the <filename>clone</filename> command.
|
||||
Doing so can speed up the process.</note></para>
|
||||
<para>In the following example, the bare clone is named
|
||||
<filename>linux-yocto-3.8.git</filename>, while the
|
||||
copy is named <filename>my-linux-yocto-3.8-work</filename>:
|
||||
<filename>linux-yocto-3.4.git</filename>, while the
|
||||
copy is named <filename>my-linux-yocto-3.4-work</filename>:
|
||||
<literallayout class='monospaced'>
|
||||
$ git clone --bare git://git.yoctoproject.org/linux-yocto-3.8 linux-yocto-3.8.git
|
||||
Cloning into bare repository 'linux-yocto-3.8.git'...
|
||||
remote: Counting objects: 2847090, done.
|
||||
remote: Compressing objects: 100% (454675/454675), done.
|
||||
remote: Total 2847090 (delta 2386170), reused 2825793 (delta 2364886)
|
||||
Receiving objects: 100% (2847090/2847090), 603.19 MiB | 3.54 MiB/s, done.
|
||||
Resolving deltas: 100% (2386170/2386170), done. </literallayout></para>
|
||||
$ git clone --bare git://git.yoctoproject.org/linux-yocto-3.4 linux-yocto-3.4.git
|
||||
Initialized empty Git repository in /home/scottrif/linux-yocto-3.4.git/
|
||||
remote: Counting objects: 2468027, done.
|
||||
remote: Compressing objects: 100% (392255/392255), done.
|
||||
remote: Total 2468027 (delta 2071693), reused 2448773 (delta 2052498)
|
||||
Receiving objects: 100% (2468027/2468027), 530.46 MiB | 129 KiB/s, done.
|
||||
Resolving deltas: 100% (2071693/2071693), done.
|
||||
</literallayout></para>
|
||||
<para>Now create a clone of the bare clone just created:
|
||||
<literallayout class='monospaced'>
|
||||
$ git clone linux-yocto-3.8.git my-linux-yocto-3.8-work
|
||||
Cloning into 'my-linux-yocto-3.8-work'...
|
||||
$ git clone linux-yocto-3.4.git my-linux-yocto-3.4-work
|
||||
Cloning into 'my-linux-yocto-3.4-work'...
|
||||
done.
|
||||
</literallayout></para></listitem>
|
||||
<listitem id='poky-extras-repo'><para><emphasis>
|
||||
The <filename>poky-extras</filename> Git Repository</emphasis>:
|
||||
The <filename>poky-extras</filename> Git repository contains Metadata needed
|
||||
The <filename>poky-extras</filename> Git repository contains metadata needed
|
||||
only if you are modifying and building the kernel image.
|
||||
In particular, it contains the kernel BitBake append (<filename>.bbappend</filename>)
|
||||
files that you
|
||||
@@ -188,12 +176,13 @@
|
||||
<literallayout class='monospaced'>
|
||||
$ cd ~/poky
|
||||
$ git clone git://git.yoctoproject.org/poky-extras poky-extras
|
||||
Cloning into 'poky-extras'...
|
||||
remote: Counting objects: 690, done.
|
||||
remote: Compressing objects: 100% (431/431), done.
|
||||
remote: Total 690 (delta 238), reused 690 (delta 238)
|
||||
Receiving objects: 100% (690/690), 532.60 KiB, done.
|
||||
Resolving deltas: 100% (238/238), done. </literallayout></para></listitem>
|
||||
Initialized empty Git repository in /home/scottrif/poky/poky-extras/.git/
|
||||
remote: Counting objects: 618, done.
|
||||
remote: Compressing objects: 100% (558/558), done.
|
||||
remote: Total 618 (delta 192), reused 307 (delta 39)
|
||||
Receiving objects: 100% (618/618), 526.26 KiB | 111 KiB/s, done.
|
||||
Resolving deltas: 100% (192/192), done.
|
||||
</literallayout></para></listitem>
|
||||
<listitem><para id='supported-board-support-packages-(bsps)'><emphasis>Supported Board
|
||||
Support Packages (BSPs):</emphasis>
|
||||
The Yocto Project provides a layer called <filename>meta-intel</filename> and
|
||||
@@ -223,12 +212,10 @@
|
||||
information on BSP Layers.
|
||||
<itemizedlist>
|
||||
<listitem><para><emphasis>Tarball Extraction:</emphasis> You can download any released
|
||||
BSP tarball from the same "Downloads" page of the
|
||||
<ulink url='&YOCTO_HOME_URL;'>Yocto Project Website</ulink>
|
||||
BSP tarball from the same
|
||||
<ulink url='&YOCTO_HOME_URL;/download'>download site</ulink> used
|
||||
to get the Yocto Project release.
|
||||
Once on the "Download" page, look for "BSP" under the
|
||||
"Type" heading.</para>
|
||||
<para>Once you have the tarball, just extract it into a directory of your choice.
|
||||
Once you have the tarball, just extract it into a directory of your choice.
|
||||
Again, this method just produces a snapshot of the BSP layer in the form
|
||||
of a hierarchical directory structure.</para></listitem>
|
||||
<listitem><para><emphasis>Git Repository Method:</emphasis> If you are working
|
||||
@@ -245,12 +232,12 @@
|
||||
<literallayout class='monospaced'>
|
||||
$ cd ~/poky
|
||||
$ git clone git://git.yoctoproject.org/meta-intel.git
|
||||
Cloning into 'meta-intel'...
|
||||
remote: Counting objects: 6264, done.
|
||||
remote: Compressing objects: 100% (2135/2135), done.
|
||||
remote: Total 6264 (delta 3321), reused 6235 (delta 3293)
|
||||
Receiving objects: 100% (6264/6264), 2.17 MiB | 2.63 MiB/s, done.
|
||||
Resolving deltas: 100% (3321/3321), done.
|
||||
Initialized empty Git repository in /home/scottrif/poky/meta-intel/.git/
|
||||
remote: Counting objects: 3380, done.
|
||||
remote: Compressing objects: 100% (2750/2750), done.
|
||||
remote: Total 3380 (delta 1689), reused 227 (delta 113)
|
||||
Receiving objects: 100% (3380/3380), 1.77 MiB | 128 KiB/s, done.
|
||||
Resolving deltas: 100% (1689/1689), done.
|
||||
</literallayout></para>
|
||||
<para>The same
|
||||
<ulink url='&YOCTO_WIKI_URL;/wiki/Transcript:_from_git_checkout_to_meta-intel_BSP'>
|
||||
@@ -297,7 +284,7 @@
|
||||
a centralized tarball download directory through the
|
||||
<filename><ulink url='&YOCTO_DOCS_REF_URL;#var-DL_DIR'>DL_DIR</ulink></filename> variable.</para></listitem>
|
||||
<listitem><para>Build the image using the <filename>bitbake</filename> command.
|
||||
If you want information on BitBake, see the user manual included in the
|
||||
If you want information on BitBake, see the user manual included in the
|
||||
<filename>bitbake/doc/manual</filename> directory of the
|
||||
<link linkend='source-directory'>Source Directory</link>.</para></listitem>
|
||||
<listitem><para>Run the image either on the actual hardware or using the QEMU
|
||||
@@ -333,7 +320,7 @@
|
||||
Regardless of the type of image you are using, you need to download the pre-built kernel
|
||||
that you will boot in the QEMU emulator and then download and extract the target root
|
||||
filesystem for your target machine’s architecture.
|
||||
You can get architecture-specific binaries and file systems from
|
||||
You can get architecture-specific binaries and filesystems from
|
||||
<ulink url='&YOCTO_MACHINES_DL_URL;'>machines</ulink>.
|
||||
You can get installation scripts for stand-alone toolchains from
|
||||
<ulink url='&YOCTO_TOOLCHAIN_DL_URL;'>toolchains</ulink>.
|
||||
@@ -366,7 +353,7 @@
|
||||
You can accomplish this by defining the cross-compiler variable
|
||||
(e.g. <filename>export CC="distcc"</filename>).
|
||||
Alternatively, if you are using a suitable SDK image or the appropriate
|
||||
stand-alone toolchain is present,
|
||||
stand-alone toolchain is present in <filename>/opt/poky</filename>,
|
||||
the toolchain is also automatically used.
|
||||
</para>
|
||||
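<para>
    A minimal sketch of driving such a build through distcc (the helper
    host names and job count are illustrative assumptions, not taken from
    this manual):
    <literallayout class='monospaced'>
     # hypothetical distcc setup - host names are placeholders
     $ export DISTCC_HOSTS="localhost builder1 builder2"
     $ make CC="distcc" -j8
    </literallayout>
</para>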
|
||||
@@ -382,12 +369,12 @@
|
||||
The connection uses standard IP networking.</para></listitem>
|
||||
<listitem><para>SSH servers exist in some QEMU images.
|
||||
The <filename>core-image-sato</filename> QEMU image has a Dropbear secure
|
||||
shell (SSH) server that runs with the root password disabled.
|
||||
shell (ssh) server that runs with the root password disabled.
|
||||
The <filename>core-image-basic</filename> and <filename>core-image-lsb</filename> QEMU images
|
||||
have OpenSSH instead of Dropbear.
|
||||
Including these SSH servers allows you to use standard <filename>ssh</filename> and
|
||||
<filename>scp</filename> commands.
|
||||
The <filename>core-image-minimal</filename> QEMU image, however, contains no SSH
|
||||
The <filename>core-image-minimal</filename> QEMU image, however, contains no ssh
|
||||
server.</para></listitem>
|
||||
<listitem><para>You can use a provided, user-space NFS server to boot the QEMU session
|
||||
using a local copy of the root filesystem on the host.
|
||||
|
||||
@@ -16,9 +16,7 @@
|
||||
</imageobject>
|
||||
</mediaobject>
|
||||
|
||||
<title>
|
||||
Yocto Project Development Manual
|
||||
</title>
|
||||
<title></title>
|
||||
|
||||
<authorgroup>
|
||||
<author>
|
||||
@@ -48,34 +46,9 @@
|
||||
</revision>
|
||||
<revision>
|
||||
<revnumber>1.4</revnumber>
|
||||
<date>April 2013</date>
|
||||
<date>Sometime in 2013</date>
|
||||
<revremark>Released with the Yocto Project 1.4 Release.</revremark>
|
||||
</revision>
|
||||
<revision>
|
||||
<revnumber>1.4.1</revnumber>
|
||||
<date>June 2013</date>
|
||||
<revremark>Released with the Yocto Project 1.4.1 Release.</revremark>
|
||||
</revision>
|
||||
<revision>
|
||||
<revnumber>1.4.2</revnumber>
|
||||
<date>August 2013</date>
|
||||
<revremark>Released with the Yocto Project 1.4.2 Release.</revremark>
|
||||
</revision>
|
||||
<revision>
|
||||
<revnumber>1.4.3</revnumber>
|
||||
<date>March 2014</date>
|
||||
<revremark>Released with the Yocto Project 1.4.3 Release.</revremark>
|
||||
</revision>
|
||||
<revision>
|
||||
<revnumber>1.4.4</revnumber>
|
||||
<date>May 2014</date>
|
||||
<revremark>Released with the Yocto Project 1.4.4 Release.</revremark>
|
||||
</revision>
|
||||
<revision>
|
||||
<revnumber>1.4.5</revnumber>
|
||||
<date>July 2014</date>
|
||||
<revremark>Released with the Yocto Project 1.4.5 Release.</revremark>
|
||||
</revision>
|
||||
</revhistory>
|
||||
|
||||
<copyright>
|
||||
|
||||
|
Image: Before Size: 98 KiB | After Size: 57 KiB
|
Image: Before Size: 210 KiB | After Size: 116 KiB
|
Image: Before Size: 40 KiB
@@ -18,6 +18,26 @@
|
||||
to help you manage the complexity of the configuration and sources
|
||||
used to support multiple BSPs and Linux kernel types.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
In particular, the kernel tools allow you to specify only what you
|
||||
must, and nothing more.
|
||||
Where a complete Linux kernel <filename>.config</filename> includes
|
||||
all the automatically selected <filename>CONFIG</filename> options,
|
||||
the configuration fragments only need to contain the highest level
|
||||
visible <filename>CONFIG</filename> options as presented by the Linux
|
||||
kernel <filename>menuconfig</filename> system.
|
||||
This reduces your maintenance effort and allows you
|
||||
to further separate your configuration in ways that make sense for
|
||||
your project.
|
||||
A common split is policy and hardware.
|
||||
For example, all your kernels might support
|
||||
the <filename>proc</filename> and <filename>sys</filename> filesystems,
|
||||
but only specific boards will require sound, USB, or specific drivers.
|
||||
Specifying these individually allows you to aggregate them
|
||||
together as needed, but maintain them in only one place.
|
||||
Similar logic applies to source changes.
|
||||
</para>
|
||||
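<para>
    As a sketch (the fragment name and option set are illustrative
    assumptions, not taken from the kernel Metadata itself), a
    board-specific hardware fragment might carry only the visible
    options for sound support:
    <literallayout class='monospaced'>
     # sound.cfg - hypothetical hardware configuration fragment
     CONFIG_SOUND=y
     CONFIG_SND=y
     CONFIG_SND_HDA_INTEL=y
    </literallayout>
    while a separate policy fragment could enable the
    <filename>proc</filename> and <filename>sys</filename> filesystem
    support shared by every kernel type.
</para>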
</section>
|
||||
|
||||
<section id='using-kernel-metadata-in-a-recipe'>
|
||||
@@ -30,7 +50,7 @@
|
||||
This Metadata defines Board Support Packages (BSPs) that
|
||||
correspond to definitions in linux-yocto recipes for the same BSPs.
|
||||
A BSP consists of an aggregation of kernel policy and hardware-specific
|
||||
feature enablements.
|
||||
feature enablement.
|
||||
The BSP can be influenced from within the linux-yocto recipe.
|
||||
<note>
|
||||
Linux kernel source that contains kernel Metadata is said to be
|
||||
@@ -62,10 +82,10 @@
|
||||
The linux-yocto style recipes can optionally define the following
|
||||
variables:
|
||||
<literallayout class='monospaced'>
|
||||
<ulink url='&YOCTO_DOCS_REF_URL;#var-KBRANCH'>KBRANCH</ulink>
|
||||
<ulink url='&YOCTO_DOCS_REF_URL;#var-KERNEL_FEATURES'>KERNEL_FEATURES</ulink>
|
||||
<ulink url='&YOCTO_DOCS_REF_URL;#var-KBRANCH_DEFAULT'>KBRANCH_DEFAULT</ulink>
|
||||
<ulink url='&YOCTO_DOCS_REF_URL;#var-LINUX_KERNEL_TYPE'>LINUX_KERNEL_TYPE</ulink>
|
||||
KBRANCH
|
||||
KERNEL_FEATURES
|
||||
KBRANCH_DEFAULT
|
||||
LINUX_KERNEL_TYPE
|
||||
</literallayout>
|
||||
<filename>KBRANCH_DEFAULT</filename> defines the Linux kernel source
|
||||
repository's default branch to use to build the Linux kernel.
|
||||
@@ -84,8 +104,7 @@
|
||||
used in assembling the configuration.
|
||||
If you do not specify a <filename>LINUX_KERNEL_TYPE</filename>,
|
||||
it defaults to "standard".
|
||||
Together with
|
||||
<ulink url='&YOCTO_DOCS_REF_URL;#var-KMACHINE'><filename>KMACHINE</filename></ulink>,
|
||||
Together with <filename>KMACHINE</filename>,
|
||||
<filename>LINUX_KERNEL_TYPE</filename> defines the search
|
||||
arguments used by the kernel tools to find the
|
||||
appropriate description within the kernel Metadata with which to
|
||||
@@ -120,8 +139,7 @@
|
||||
then for the <filename>LINUX_KERNEL_TYPE</filename>.
|
||||
If the tools cannot find a partial match, they will use the
|
||||
sources from the <filename>KBRANCH</filename> and any configuration
|
||||
specified in the
|
||||
<ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink>.
|
||||
specified in the <filename>SRC_URI</filename>.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
@@ -203,7 +221,7 @@
|
||||
<filename>oe-core/meta-skeleton/recipes-kernel/linux/linux-yocto-custom.bb</filename>
|
||||
to a recipe in your layer, <filename>FILESEXTRAPATHS</filename>
|
||||
is typically set to
|
||||
<filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-THISDIR'><filename>THISDIR</filename></ulink><filename>}/${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-PN'><filename>PN</filename></ulink><filename>}</filename>.
|
||||
<filename>${THISDIR}/${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-PN'><filename>PN</filename></ulink><filename>}</filename>.
|
||||
See the "<link linkend='modifying-an-existing-recipe'>Modifying an Existing Recipe</link>"
|
||||
section for more information.
|
||||
</para>
|
||||
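<para>
    A minimal sketch of that setting inside a custom kernel recipe or
    append file (a sketch only, mirroring the value described above):
    <literallayout class='monospaced'>
     # extend the search path to files stored next to the recipe
     FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
    </literallayout>
</para>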
@@ -376,20 +394,19 @@
|
||||
description files within the structure:
|
||||
<itemizedlist>
|
||||
<listitem><para>If your file contains
|
||||
only configuration fragments, place the file in the
|
||||
<filename>cfg</filename> directory.</para></listitem>
|
||||
only configuration fragments, place the file in
|
||||
<filename>cfg</filename>.</para></listitem>
|
||||
<listitem><para>If your file contains
|
||||
only source-code fixes, place the file in the
|
||||
<filename>patches</filename> directory.</para></listitem>
|
||||
only source-code fixes, place the file in
|
||||
<filename>patches</filename>.</para></listitem>
|
||||
<listitem><para>If your file encapsulates
|
||||
a major feature, often combining sources and configurations,
|
||||
place the file in the <filename>features</filename> directory.
|
||||
place the file in <filename>features</filename>.
|
||||
</para></listitem>
|
||||
<listitem><para>If your file aggregates
|
||||
non-hardware configuration and patches in order to define a
|
||||
base kernel policy or major kernel type to be reused across
|
||||
multiple BSPs, place the file in the <filename>ktypes</filename>
|
||||
directory.
|
||||
multiple BSPs, place the file in <filename>ktypes</filename>.
|
||||
</para></listitem>
|
||||
</itemizedlist>
|
||||
</para>
|
||||
@@ -449,8 +466,8 @@
|
||||
</para>
|
||||
|
||||
<para>
|
||||
<ulink url='&YOCTO_DOCS_REF_URL;#var-KFEATURE_DESCRIPTION'><filename>KFEATURE_DESCRIPTION</filename></ulink>
|
||||
provides a short description of the fragment.
|
||||
<filename>KFEATURE_DESCRIPTION</filename> provides a short
|
||||
description of the fragment.
|
||||
Higher level kernel tools use this description.
|
||||
</para>
|
||||
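<para>
    As a sketch (the file name and description text are illustrative
    assumptions), a fragment description that carries this variable
    might read:
    <literallayout class='monospaced'>
     # 8250.scc - hypothetical description for a configuration fragment
     define KFEATURE_DESCRIPTION "Enable the 8250 serial console"
     kconf hardware 8250.cfg
    </literallayout>
</para>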
|
||||
@@ -636,8 +653,7 @@
|
||||
It is not strictly necessary to create a kernel type
|
||||
<filename>.scc</filename> file.
|
||||
The Board Support Package (BSP) file can implicitly define
|
||||
the kernel type using a <filename>define
|
||||
<ulink url='&YOCTO_DOCS_REF_URL;#var-KTYPE'>KTYPE</ulink> myktype</filename>
|
||||
the kernel type using a <filename>define KTYPE myktype</filename>
|
||||
line.
|
||||
See the "<link linkend='bsp-descriptions'>BSP Descriptions</link>"
|
||||
section for more information.
|
||||
@@ -664,15 +680,13 @@
|
||||
kconf mybsp.cfg
|
||||
</literallayout>
|
||||
Every BSP description should define the
|
||||
<ulink url='&YOCTO_DOCS_REF_URL;#var-KMACHINE'><filename>KMACHINE</filename></ulink>,
|
||||
<ulink url='&YOCTO_DOCS_REF_URL;#var-KTYPE'><filename>KTYPE</filename></ulink>,
|
||||
and <ulink url='&YOCTO_DOCS_REF_URL;#var-KARCH'><filename>KARCH</filename></ulink>
|
||||
variables.
|
||||
<filename>KMACHINE</filename>, <filename>KTYPE</filename>,
|
||||
and <filename>KARCH</filename> variables.
|
||||
These variables allow the OpenEmbedded build system to identify
|
||||
the description as meeting the criteria set by the recipe being
|
||||
built.
|
||||
This simple example supports the "mybsp" machine for the "standard"
|
||||
kernel and the "i386" architecture.
|
||||
kernel and the "i386" architecture.
|
||||
</para>
|
||||
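<para>
    Pulling the required definitions together, a minimal description
    along those lines might look as follows (a sketch only; the file
    name and values are illustrative):
    <literallayout class='monospaced'>
     # mybsp.scc - hypothetical minimal BSP description
     define KMACHINE mybsp
     define KTYPE standard
     define KARCH i386

     kconf mybsp.cfg
    </literallayout>
</para>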
|
||||
<para>
|
||||
@@ -681,9 +695,9 @@
|
||||
description file does not exist.
|
||||
Thus, if you do not have kernel types defined in your kernel
|
||||
Metadata, you only need to ensure that the kernel recipe's
|
||||
<ulink url='&YOCTO_DOCS_REF_URL;#var-LINUX_KERNEL_TYPE'><filename>LINUX_KERNEL_TYPE</filename></ulink>
|
||||
variable and the <filename>KTYPE</filename> variable in the
|
||||
BSP description file match.
|
||||
<filename>LINUX_KERNEL_TYPE</filename> variable and the
|
||||
<filename>KTYPE</filename> variable in the BSP description
|
||||
file match.
|
||||
<note>
|
||||
Future versions of the tooling will make the specification of
|
||||
<filename>KTYPE</filename> in the BSP optional.
|
||||
@@ -787,7 +801,7 @@
|
||||
</literallayout>
|
||||
The <filename>include</filename> command midway through the file
|
||||
includes the <filename>fri2.scc</filename> description that
|
||||
defines all hardware enablements for the BSP that is common to all
|
||||
defines all hardware enablement for the BSP that is common to all
|
||||
kernel types.
|
||||
Using this command significantly reduces duplication.
|
||||
</para>
|
||||
@@ -889,7 +903,7 @@
|
||||
if you are reusing patches from an external tree and are not
|
||||
working on the patches, you might find the encapsulated feature
|
||||
to be appropriate.
|
||||
Given this scenario, you do not need to create any branches in the
|
||||
Given this scenario, you don't need to create any branches in the
|
||||
source repository.
|
||||
Rather, you just take the static patches you need and encapsulate
|
||||
them within a feature description.
|
||||
@@ -1029,8 +1043,9 @@
|
||||
<listitem><para><filename>branch [ref]</filename>:
|
||||
Creates a new branch relative to the current branch
|
||||
(typically <filename>${KTYPE}</filename>) using
|
||||
the currently checked-out branch, or "ref" if specified.
|
||||
</para></listitem>
|
||||
the currently checked-out branch, or "ref" if specified.</para>
|
||||
<para><emphasis>TODO:</emphasis> Bruce, we need to clarify
|
||||
the "relative to the current branch" bit.</para></listitem>
|
||||
<listitem><para><filename>define</filename>:
|
||||
Defines variables, such as <filename>KMACHINE</filename>,
|
||||
<filename>KTYPE</filename>, <filename>KARCH</filename>,
|
||||
|
||||
@@ -7,15 +7,11 @@
|
||||
<title>Common Tasks</title>
|
||||
|
||||
<para>
|
||||
This chapter presents several common tasks you perform when you
|
||||
This chapter presents several common tasks that are performed when you
|
||||
work with the Yocto Project Linux kernel.
|
||||
These tasks include preparing a layer, modifying an existing recipe,
|
||||
iterative development, working with your own sources, and incorporating
|
||||
out-of-tree modules.
|
||||
<note>
|
||||
The examples presented in this chapter work with the Yocto Project
|
||||
1.2.2 Release and forward.
|
||||
</note>
|
||||
</para>
|
||||
|
||||
<section id='creating-and-preparing-a-layer'>
|
||||
@@ -33,8 +29,8 @@
|
||||
sections in the Yocto Project Development Manual:
|
||||
<itemizedlist>
|
||||
<listitem><para>"<ulink url='&YOCTO_DOCS_DEV_URL;#understanding-and-creating-layers'>Understanding and Creating Layers</ulink>" for
|
||||
general information on layers and how to create layers.</para></listitem>
|
||||
<listitem><para>"<ulink url='&YOCTO_DOCS_DEV_URL;#set-up-your-layer-for-the-build'>Set Up Your Layer for the Build</ulink>" for
|
||||
general information on layers and how to create them.</para></listitem>
|
||||
<listitem><para>"<ulink url='&YOCTO_DOCS_DEV_URL;#get-your-layer-setup-for-the-build'>Get Your Layer Setup for the Build</ulink>" for
|
||||
specific instructions on setting up a layer for kernel
|
||||
development.</para></listitem>
|
||||
</itemizedlist>
|
||||
@@ -69,7 +65,7 @@
|
||||
See the "<link linkend='creating-and-preparing-a-layer'>Creating and Preparing a Layer</link>"
|
||||
section for some general resources.
|
||||
You can also see the
|
||||
"<ulink url='&YOCTO_DOCS_DEV_URL;#set-up-your-layer-for-the-build'>Set Up Your Layer for the Build</ulink>" section
|
||||
"<ulink url='&YOCTO_DOCS_DEV_URL;#get-your-layer-setup-for-the-build'>Get Your Layer Setup for the Build</ulink>" section
|
||||
of the Yocto Project Development Manual for a detailed
|
||||
example.
|
||||
</para>
|
||||
@@ -92,13 +88,10 @@
|
||||
<literallayout class='monospaced'>
|
||||
<ulink url='&YOCTO_DOCS_REF_URL;#var-FILESEXTRAPATHS'>FILESEXTRAPATHS</ulink> := "${THISDIR}/${PN}"
|
||||
</literallayout>
|
||||
The path <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-THISDIR'><filename>THISDIR</filename></ulink><filename>}/${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-PN'><filename>PN</filename></ulink><filename>}</filename>
|
||||
expands to "linux-yocto" in the current directory for this
|
||||
example.
|
||||
If you add any new files that modify the kernel recipe and you
|
||||
have extended <filename>FILESPATH</filename> as
|
||||
described above, you must place the files in your layer in the
|
||||
following area:
|
||||
The path <filename>${THISDIR}/${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-PN'><filename>PN</filename></ulink><filename>}</filename> expands
|
||||
to "linux-yocto" in the current directory for this example.
|
||||
If you add any new files that modify the kernel recipe,
|
||||
you need to place them in your layer in the following area:
|
||||
<literallayout class='monospaced'>
|
||||
<your-layer>/recipes-kernel/linux/linux-yocto/
|
||||
</literallayout>
|
||||
@@ -152,31 +145,19 @@
|
||||
You can make wholesale or incremental changes to the Linux
|
||||
kernel <filename>.config</filename> file by including a
|
||||
<filename>defconfig</filename> or by specifying
|
||||
configuration fragments in the
|
||||
<ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink>.
|
||||
configuration fragments in the <filename>SRC_URI</filename>.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
If you have a complete Linux kernel <filename>.config</filename>
|
||||
file you want to use, copy it to a directory named
|
||||
<filename>files</filename>, which must be in
|
||||
your layer's <filename>recipes-kernel/linux</filename>
|
||||
directory, and name the file "defconfig".
|
||||
Then, add the following lines to your linux-yocto
|
||||
file you want to use, copy it to the
|
||||
<filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-FILES'><filename>FILES</filename></ulink><filename>}</filename>
|
||||
directory within your layer and name it "defconfig".
|
||||
Then, add the following line to your linux-yocto
|
||||
<filename>.bbappend</filename> file in your layer:
|
||||
<literallayout class='monospaced'>
|
||||
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
|
||||
SRC_URI += "file://defconfig"
|
||||
</literallayout>
|
||||
The
|
||||
<filename>SRC_URI</filename> tells the build system how to
|
||||
search for the file, while the
|
||||
<ulink url='&YOCTO_DOCS_REF_URL;#var-FILESEXTRAPATHS'><filename>FILESEXTRAPATHS</filename></ulink>
|
||||
extends the
|
||||
<ulink url='&YOCTO_DOCS_REF_URL;#var-FILESPATH'><filename>FILESPATH</filename></ulink>
|
||||
variable (search directories) to include the
|
||||
<filename>files</filename> directory you created for the
|
||||
configuration changes.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
@@ -185,7 +166,7 @@
|
||||
configuration fragment.
|
||||
For example, if you want to add support for a basic serial
|
||||
console, create a file named <filename>8250.cfg</filename> in the
|
||||
<filename>files</filename> directory with the following
|
||||
<filename>${FILES}</filename> directory with the following
|
||||
content (without indentation):
|
||||
<literallayout class='monospaced'>
|
||||
CONFIG_SERIAL_8250=y
|
||||
@@ -196,11 +177,10 @@
|
||||
CONFIG_SERIAL_CORE=y
|
||||
CONFIG_SERIAL_CORE_CONSOLE=y
|
||||
</literallayout>
|
||||
Next, include this configuration fragment and extend the
|
||||
<filename>FILESPATH</filename> variable in your
|
||||
Next, include this configuration fragment in a
|
||||
<filename>SRC_URI</filename> statement in your
|
||||
<filename>.bbappend</filename> file:
|
||||
<literallayout class='monospaced'>
|
||||
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
|
||||
SRC_URI += "file://8250.cfg"
|
||||
</literallayout>
|
||||
The next time you run BitBake to build the Linux kernel, BitBake
|
||||
@@ -237,51 +217,6 @@
|
||||
"linux-yocto".
|
||||
</para>
|
||||
|
||||
<section id='tip-dirty-string'>
|
||||
<title>"-dirty" String</title>
|
||||
|
||||
<!--
|
||||
<para>
|
||||
<emphasis>AR - Darren Hart:</emphasis> This section
|
||||
originated from the old Yocto Project Kernel Architecture
|
||||
and Use Manual.
|
||||
It was decided we need to put it in this section here.
|
||||
Darren needs to figure out where we want it and what part
|
||||
of it we want (all, revision???)
|
||||
</para>
|
||||
-->
|
||||
|
||||
<para>
|
||||
If kernel images are being built with "-dirty" on the
|
||||
end of the version string, this simply means that
|
||||
modifications in the source directory have not been committed.
|
||||
<literallayout class='monospaced'>
|
||||
$ git status
|
||||
</literallayout>
|
||||
</para>
|
||||
|
||||
<para>
|
||||
You can use the above Git command to report modified,
|
||||
removed, or added files.
|
||||
You should commit those changes to the tree regardless of
|
||||
whether they will be saved, exported, or used.
|
||||
Once you commit the changes, you need to rebuild the kernel.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
To force a pickup and commit of all such pending changes,
|
||||
enter the following:
|
||||
<literallayout class='monospaced'>
|
||||
$ git add .
|
||||
$ git commit -s -a -m "getting rid of -dirty"
|
||||
</literallayout>
|
||||
</para>
|
||||
|
||||
<para>
|
||||
Next, rebuild the kernel.
|
||||
</para>
|
||||
</section>
|
||||
|
||||
<section id='generating-configuration-files'>
|
||||
<title>Generating Configuration Files</title>
|
||||
|
||||
@@ -303,7 +238,7 @@
|
||||
The resulting <filename>.config</filename> file is
|
||||
located in
|
||||
<filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-WORKDIR'><filename>WORKDIR</filename></ulink><filename>}</filename> under the
|
||||
<filename>linux-${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'><filename>MACHINE</filename></ulink><filename>}-${<ulink url='&YOCTO_DOCS_REF_URL;#var-KTYPE'><filename>KTYPE</filename></ulink>}-build</filename> directory.
|
||||
<filename>linux-${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'><filename>MACHINE</filename></ulink><filename>}-${KTYPE}-build</filename> directory.
|
||||
You can use the entire <filename>.config</filename> file as the
|
||||
<filename>defconfig</filename> file as described in the
|
||||
"<link linkend='changing-the-configuration'>Changing the Configuration</link>" section.
|
||||
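<para>
    A minimal sketch of producing such a <filename>.config</filename>
    interactively (assuming the linux-yocto recipe used throughout this
    chapter):
    <literallayout class='monospaced'>
     $ bitbake linux-yocto -c menuconfig
    </literallayout>
</para>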
@@ -387,7 +322,7 @@
|
||||
|
||||
WARNING: There were 2 hardware options requested that do not
|
||||
have a corresponding value present in the final ".config" file.
|
||||
This probably means you are not getting the config you wanted.
|
||||
This probably means you aren't getting the config you wanted.
|
||||
The full list can be found in your kernel src dir at:
|
||||
meta/cfg/standard/mybsp/mismatch.cfg
|
||||
</literallayout>
|
||||
@@ -677,14 +612,11 @@
|
||||
|
||||
<para>
|
||||
The important point to note here is the
|
||||
<ulink url='&YOCTO_DOCS_REF_URL;#var-KERNEL_SRC'><filename>KERNEL_SRC</filename></ulink>
|
||||
variable.
|
||||
The class <filename>module.bbclass</filename> sets this variable,
|
||||
as well as the
|
||||
<ulink url='&YOCTO_DOCS_REF_URL;#var-KERNEL_PATH'><filename>KERNEL_PATH</filename></ulink>
|
||||
variable to
|
||||
<filename>${<ulink url='&YOCTO_DOCS_REF_URL;#var-STAGING_KERNEL_DIR'><filename>STAGING_KERNEL_DIR</filename></ulink>}</filename>
|
||||
with the necessary Linux kernel build information to build modules.
|
||||
<filename>KERNEL_SRC</filename> variable.
|
||||
The module <filename>bbclass</filename> sets this variable,
|
||||
as well as the <filename>KERNEL_PATH</filename> variable
|
||||
to <filename>${STAGING_KERNEL_DIR}</filename> with the
|
||||
necessary Linux kernel build information to build modules.
|
||||
If your module <filename>Makefile</filename> uses a different
|
||||
variable, you might want to override the
|
||||
<filename>do_compile()</filename> step, or create a patch to
|
||||
@@ -724,122 +656,11 @@
|
||||
</para>
|
||||
|
||||
<para>
|
||||
Because the variable is
|
||||
<ulink url='&YOCTO_DOCS_REF_URL;#var-RRECOMMENDS'><filename>RRECOMMENDS</filename></ulink>
|
||||
and not a
|
||||
<ulink url='&YOCTO_DOCS_REF_URL;#var-RDEPENDS'><filename>RDEPENDS</filename></ulink>
|
||||
variable, the build will not fail if this module is not available
|
||||
to include in the image.
|
||||
Because the variable is <filename>RRECOMMENDS</filename> and not
|
||||
a <filename>RDEPENDS</filename> variable, the build will not fail
|
||||
if this module is not available to include in the image.
|
||||
</para>
|
||||
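<para>
    As a sketch (the module name is hypothetical), such a recommendation
    typically appears in the machine configuration or in
    <filename>local.conf</filename>:
    <literallayout class='monospaced'>
     MACHINE_EXTRA_RRECOMMENDS += "kernel-module-mymodule"
    </literallayout>
    If the recommended module package cannot be built, the image build
    still completes.
</para>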
</section>
|
||||
|
||||
<section id='inspecting-changes-and-commits'>
|
||||
<title>Inspecting Changes and Commits</title>
|
||||
|
||||
<para>
|
||||
A common question when working with a kernel is:
|
||||
"What changes have been applied to this tree?"
|
||||
Rather than using "grep" across directories to see what has
|
||||
changed, you can use Git to inspect or search the kernel tree.
|
||||
Using Git is an efficient way to see what has changed in the tree.
|
||||
</para>
|
||||
|
||||
<section id='what-changed-in-a-kernel'>
|
||||
<title>What Changed in a Kernel?</title>
|
||||
|
||||
<para>
|
||||
Following are a few examples that show how to use Git
|
||||
commands to examine changes.
|
||||
These examples are by no means the only way to see changes.
|
||||
<note>
|
||||
In the following examples, unless you provide a commit
|
||||
range, <filename>kernel.org</filename> history is blended
|
||||
with Yocto Project kernel changes.
|
||||
You can form ranges by using branch names from the
|
||||
kernel tree as the upper and lower commit markers with
|
||||
the Git commands.
|
||||
You can see the branch names through the web interface
|
||||
to the Yocto Project source repositories at
|
||||
<ulink url='http://git.yoctoproject.org/cgit.cgi'></ulink>.
|
||||
</note>
|
||||
To see a full range of the changes, use the
|
||||
<filename>git whatchanged</filename> command and specify a
|
||||
commit range for the branch
|
||||
(<filename><commit>..<commit></filename>).
|
||||
</para>
|
||||
|
||||
<para>
|
||||
Here is an example that looks at what has changed in the
|
||||
<filename>emenlow</filename> branch of the
|
||||
<filename>linux-yocto-3.4</filename> kernel.
|
||||
The lower commit range is the commit associated with the
|
||||
<filename>standard/base</filename> branch, while
|
||||
the upper commit range is the commit associated with the
|
||||
<filename>standard/emenlow</filename> branch.
|
||||
<literallayout class='monospaced'>
|
||||
$ git whatchanged origin/standard/base..origin/standard/emenlow
|
||||
</literallayout>
|
||||
</para>
|
||||
|
||||
<para>
|
||||
To see short, one line summaries of changes use the
|
||||
<filename>git log</filename> command:
|
||||
<literallayout class='monospaced'>
|
||||
$ git log --oneline origin/standard/base..origin/standard/emenlow
|
||||
</literallayout>
|
||||
</para>
|
||||
|
||||
<para>
|
||||
Use this command to see code differences for the changes:
|
||||
<literallayout class='monospaced'>
|
||||
$ git diff origin/standard/base..origin/standard/emenlow
|
||||
</literallayout>
|
||||
</para>
|
||||
|
||||
<para>
|
||||
Use this command to see the commit log messages and the
|
||||
text differences:
|
||||
<literallayout class='monospaced'>
|
||||
$ git show origin/standard/base..origin/standard/emenlow
|
||||
</literallayout>
|
||||
</para>
|
||||
|
||||
<para>
|
||||
Use this command to create individual patches for
|
||||
each change.
|
||||
Here is an example that creates patch files for each
|
||||
commit and places them in your <filename>Documents</filename>
|
||||
directory:
|
||||
<literallayout class='monospaced'>
|
||||
$ git format-patch -o $HOME/Documents origin/standard/base..origin/standard/emenlow
|
||||
</literallayout>
|
||||
</para>
|
||||
</section>
|
||||
|
||||
<section id='showing-a-particular-feature-or-branch-change'>
|
||||
<title>Showing a Particular Feature or Branch Change</title>
|
||||
|
||||
<para>
|
||||
Tags in the Yocto Project kernel tree divide changes for
|
||||
significant features or branches.
|
||||
The <filename>git show <tag></filename> command shows
|
||||
changes based on a tag.
|
||||
Here is an example that shows <filename>systemtap</filename>
|
||||
changes:
|
||||
<literallayout class='monospaced'>
|
||||
$ git show systemtap
|
||||
</literallayout>
|
||||
You can use the
|
||||
<filename>git branch --contains <tag></filename> command
|
||||
to show the branches that contain a particular feature.
|
||||
This command shows the branches that contain the
|
||||
<filename>systemtap</filename> feature:
|
||||
<literallayout class='monospaced'>
|
||||
$ git branch --contains systemtap
|
||||
</literallayout>
|
||||
</para>
|
||||
</section>
|
||||
</section>
|
||||
</chapter>
|
||||
<!--
|
||||
vim: expandtab tw=80 ts=4
|
||||
|
||||
@@ -1,253 +0,0 @@
|
||||
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
|
||||
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"
|
||||
[<!ENTITY % poky SYSTEM "../poky.ent"> %poky; ] >
|
||||
|
||||
<appendix id='kernel-dev-concepts-appx'>
|
||||
<title>Advanced Kernel Concepts</title>
|
||||
|
||||
<section id='kernel-big-picture'>
|
||||
<title>Yocto Project Kernel Development and Maintenance</title>
|
||||
<para>
|
||||
Kernels available through the Yocto Project, like other kernels, are based off the Linux
|
||||
kernel releases from <ulink url='http://www.kernel.org'></ulink>.
|
||||
At the beginning of a major development cycle, the Yocto Project team
|
||||
chooses its kernel based on factors such as release timing, the anticipated release
|
||||
timing of final upstream <filename>kernel.org</filename> versions, and Yocto Project
|
||||
feature requirements.
|
||||
Typically, the kernel chosen is in the
|
||||
final stages of development by the community.
|
||||
In other words, the kernel is in the release
|
||||
candidate or "rc" phase and not yet a final release.
|
||||
But, by being in the final stages of external development, the team knows that the
|
||||
<filename>kernel.org</filename> final release will clearly be within the early stages of
|
||||
the Yocto Project development window.
|
||||
</para>
|
||||
<para>
|
||||
This balance allows the team to deliver the most up-to-date kernel
|
||||
possible, while still ensuring that the team has a stable official release for
|
||||
the baseline Linux kernel version.
|
||||
</para>
|
||||
<para>
|
||||
The ultimate source for kernels available through the Yocto Project are released kernels
|
||||
from <filename>kernel.org</filename>.
|
||||
In addition to a foundational kernel from <filename>kernel.org</filename>, the
|
||||
kernels available contain a mix of important new mainline
|
||||
developments, non-mainline developments (when there is no alternative),
|
||||
Board Support Package (BSP) developments,
|
||||
and custom features.
|
||||
These additions result in a commercially released Yocto Project Linux kernel that caters
|
||||
to specific embedded designer needs for targeted hardware.
|
||||
</para>
|
||||
<para>
|
||||
Once a kernel is officially released, the Yocto Project team goes into
|
||||
their next development cycle, or upward revision (uprev) cycle, while still
|
||||
continuing maintenance on the released kernel.
|
||||
It is important to note that the most sustainable and stable way
|
||||
to include feature development upstream is through a kernel uprev process.
|
||||
Back-porting hundreds of individual fixes and minor features from various
|
||||
kernel versions is not sustainable and can easily compromise quality.
|
||||
</para>
|
||||
<para>
|
||||
During the uprev cycle, the Yocto Project team uses an ongoing analysis of
|
||||
kernel development, BSP support, and release timing to select the best
|
||||
possible <filename>kernel.org</filename> version.
|
||||
The team continually monitors community kernel
|
||||
development to look for significant features of interest.
|
||||
The team does consider back-porting large features if they have a significant advantage.
|
||||
User or community demand can also trigger a back-port or creation of new
|
||||
functionality in the Yocto Project baseline kernel during the uprev cycle.
|
||||
</para>
|
||||
<para>
|
||||
Generally speaking, every new kernel both adds features and introduces new bugs.
|
||||
These consequences are the basic properties of upstream kernel development and are
|
||||
managed by the Yocto Project team's kernel strategy.
|
||||
It is the Yocto Project team's policy to not back-port minor features to the released kernel.
|
||||
They only consider back-porting significant technological jumps - and, that is done
|
||||
after a complete gap analysis.
|
||||
The reason for this policy is that back-porting any small to medium sized change
|
||||
from an evolving kernel can easily create mismatches, incompatibilities and very
|
||||
subtle errors.
|
||||
</para>
|
||||
<para>
|
||||
These policies result in both a stable and a cutting
|
||||
edge kernel that mixes forward ports of existing features and significant and critical
|
||||
new functionality.
|
||||
Forward porting functionality in the kernels available through the Yocto Project kernel
|
||||
can be thought of as a "micro uprev."
|
||||
The many “micro uprevs” produce a kernel version with a mix of
|
||||
important new mainline, non-mainline, BSP developments and feature integrations.
|
||||
This kernel gives insight into new features and allows focused
|
||||
amounts of testing to be done on the kernel, which prevents
|
||||
surprises when selecting the next major uprev.
|
||||
The quality of these cutting edge kernels is evolving and the kernels are used in leading edge
|
||||
feature and BSP development.
|
||||
</para>
|
||||
</section>
|
||||
|
||||
<section id='kernel-architecture'>
|
||||
<title>Kernel Architecture</title>
|
||||
<para>
|
||||
This section describes the architecture of the kernels available through the
|
||||
Yocto Project and provides information
|
||||
on the mechanisms used to achieve that architecture.
|
||||
</para>
|
||||
|
||||
<section id='architecture-overview'>
|
||||
<title>Overview</title>
|
||||
<para>
|
||||
As mentioned earlier, a key goal of the Yocto Project is to present the
|
||||
developer with
|
||||
a kernel that has a clear and continuous history that is visible to the user.
|
||||
The architecture and mechanisms used achieve that goal in a manner similar to the
|
||||
upstream <filename>kernel.org</filename>.
|
||||
</para>
|
||||
<para>
|
||||
You can think of a Yocto Project kernel as consisting of a baseline Linux kernel with
|
||||
added features logically structured on top of the baseline.
|
||||
The features are tagged and organized by way of a branching strategy implemented by the
|
||||
source code manager (SCM) Git.
|
||||
For information on Git as applied to the Yocto Project, see the
|
||||
"<ulink url='&YOCTO_DOCS_DEV_URL;#git'>Git</ulink>" section in the
|
||||
Yocto Project Development Manual.
|
||||
</para>
|
||||
<para>
|
||||
The result is that the user has the ability to see the added features and
|
||||
the commits that make up those features.
|
||||
In addition to being able to see added features, the user can also view the history of what
|
||||
made up the baseline kernel.
|
||||
</para>
|
||||
<para>
|
||||
The following illustration shows the conceptual Yocto Project kernel.
|
||||
</para>
|
||||
<para>
|
||||
<imagedata fileref="figures/kernel-architecture-overview.png" width="6in" depth="7in" align="center" scale="100" />
|
||||
</para>
|
||||
<para>
|
||||
In the illustration, the "Kernel.org Branch Point"
|
||||
marks the specific spot (or release) from
|
||||
which the Yocto Project kernel is created.
|
||||
From this point "up" in the tree, features and differences are organized and tagged.
|
||||
</para>
|
||||
<para>
|
||||
The "Yocto Project Baseline Kernel" contains functionality that is common to every kernel
|
||||
type and BSP that is organized further up the tree.
|
||||
Placing these common features in the
|
||||
tree this way means features do not have to be duplicated along individual branches of the
|
||||
structure.
|
||||
</para>
|
||||
<para>
|
||||
From the Yocto Project Baseline Kernel, branch points represent specific functionality
|
||||
for individual BSPs as well as real-time kernels.
|
||||
The illustration represents this through three BSP-specific branches and a real-time
|
||||
kernel branch.
|
||||
Each branch represents some unique functionality for the BSP or a real-time kernel.
|
||||
</para>
|
||||
<para>
|
||||
In this example structure, the real-time kernel branch has common features for all
|
||||
real-time kernels and contains
|
||||
more branches for individual BSP-specific real-time kernels.
|
||||
The illustration shows three branches as an example.
|
||||
Each branch points the way to specific, unique features for a respective real-time
|
||||
kernel as they apply to a given BSP.
|
||||
</para>
|
||||
<para>
|
||||
The resulting tree structure presents a clear path of markers (or branches) to the
|
||||
developer that, for all practical purposes, is the kernel needed for any given set
|
||||
of requirements.
|
||||
</para>
|
||||
</section>
|
||||
|
||||
<section id='branching-and-workflow'>
<title>Branching Strategy and Workflow</title>
<para>
The Yocto Project team creates kernel branches at points where functionality is no longer shared and thus needs to be isolated.
For example, board-specific incompatibilities would require different functionality and would require a branch to separate the features.
Likewise, the same branching strategy is used for specific kernel features.
</para>
<para>
This branching strategy results in a tree that has features organized to be specific to particular functionality, single kernel types, or a subset of kernel types.
The strategy also means the same feature never has to be stored twice internally in the tree.
Rather, the kernel team stores only the unique differences required to apply the feature onto the kernel type in question.
<note>
The Yocto Project team strives to place features in the tree such that they can be shared by all boards and kernel types where possible.
However, during development cycles or when large features are merged, the team cannot always follow this practice.
In those cases, the team uses isolated branches to merge features.
</note>
</para>
<para>
BSP-specific code additions are handled in a similar manner to kernel-specific additions.
Some BSPs only make sense given certain kernel types.
So, for these types, the team creates branches off the end of that kernel type for all of the BSPs that are supported on that kernel type.
From the perspective of the tools that create the BSP branch, the BSP is really no different than a feature.
Consequently, the same branching strategy applies to BSPs as it does to features.
So again, rather than store the BSP twice, the team only stores the unique differences for the BSP across the supported multiple kernels.
</para>
<para>
While this strategy can result in a tree with a significant number of branches, it is important to realize that from the developer's point of view, there is a linear path that travels from the baseline <filename>kernel.org</filename>, through a select group of features, and ends with their BSP-specific commits.
In other words, the divisions of the kernel are transparent and are not relevant to the developer on a day-to-day basis.
From the developer's perspective, this path is the "master" branch.
The developer does not need to be aware of the existence of any other branches at all.
Of course, there is value in the existence of these branches in the tree, should a person decide to explore them.
For example, a comparison between two BSPs at either the commit level or at the line-by-line code <filename>diff</filename> level is now a trivial operation.
</para>
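<para>
As a minimal sketch of such a comparison, assuming your clone contains two BSP branches (the branch names here are only examples and depend on the repository and release you are using), the commands might look like the following:
<literallayout class='monospaced'>
# summarize the commits unique to one BSP branch relative to the other
$ git log --oneline origin/standard/fishriver..origin/standard/emenlow

# show the line-by-line differences between the two BSP branches
$ git diff origin/standard/fishriver origin/standard/emenlow
</literallayout>
</para>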
<para>
Working with the kernel as a structured tree follows recognized community best practices.
In particular, the kernel as shipped with the product should be considered an "upstream source" and viewed as a series of historical and documented modifications (commits).
These modifications represent the development and stabilization done by the Yocto Project kernel development team.
</para>
<para>
Because commits only change at significant release points in the product life cycle, developers can work on a branch created from the last relevant commit in the shipped Yocto Project kernel.
As mentioned previously, the structure is transparent to the developer because the kernel tree is left in this state after cloning and building the kernel.
</para>
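<para>
As a minimal sketch, assuming you have already cloned a Yocto Project kernel repository and that <filename>standard/base</filename> is the branch relevant to your work, you could create such a development branch as follows; the branch name <filename>my-feature</filename> is only an example:
<literallayout class='monospaced'>
# create and switch to a local work branch based on the shipped branch
$ git checkout -b my-feature origin/standard/base
</literallayout>
</para>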
</section>

<section id='source-code-manager-git'>
<title>Source Code Manager - Git</title>
<para>
The Source Code Manager (SCM) is Git.
This SCM is the obvious mechanism for meeting the previously mentioned goals.
Not only is Git the SCM for <filename>kernel.org</filename>, but it also continues to grow in popularity and supports many different workflows, front-ends, and management techniques.
</para>
<para>
You can find documentation on Git at <ulink url='http://git-scm.com/documentation'></ulink>.
You can also get an introduction to Git as it applies to the Yocto Project in the "<ulink url='&YOCTO_DOCS_DEV_URL;#git'>Git</ulink>" section in the Yocto Project Development Manual.
The referenced sections provide an overview of Git and describe a minimal set of commands that allows you to be functional using Git.
<note>
You can use as much, or as little, of what Git has to offer to accomplish what you need for your project.
You do not have to be a "Git Master" in order to use it with the Yocto Project.
</note>
</para>
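<para>
As a rough illustration only, a minimal everyday set of Git commands when working with a Yocto Project kernel tree might look like the following; the repository and branch names are examples, not requirements:
<literallayout class='monospaced'>
# clone a kernel repository and switch to a branch of interest
$ git clone git://git.yoctoproject.org/linux-yocto-3.4
$ cd linux-yocto-3.4
$ git checkout -b my-work origin/standard/base

# review history and record a change
$ git log --oneline -10
$ git add <file>
$ git commit -s
</literallayout>
</para>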
</section>
</section>
</appendix>
<!--
vim: expandtab tw=80 ts=4
-->
@@ -1,11 +1,8 @@
<?xml version='1.0'?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns="http://www.w3.org/1999/xhtml" xmlns:fo="http://www.w3.org/1999/XSL/Format" version="1.0">

<xsl:import href="http://docbook.sourceforge.net/release/xsl/current/xhtml/docbook.xsl" />

<xsl:param name="html.stylesheet" select="'kernel-dev-style.css'" />
<xsl:param name="chapter.autolabel" select="1" />
<xsl:param name="appendix.autolabel">A</xsl:param>
<xsl:param name="section.autolabel" select="1" />
<xsl:param name="section.label.includes.component.label" select="1" />
<!-- <xsl:param name="generate.toc" select="'article nop'"></xsl:param> -->

</xsl:stylesheet>

@@ -1,27 +0,0 @@
<?xml version='1.0'?>
<xsl:stylesheet
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns="http://www.w3.org/1999/xhtml"
    xmlns:fo="http://www.w3.org/1999/XSL/Format"
    version="1.0">

<xsl:import
    href="http://docbook.sourceforge.net/release/xsl/current/eclipse/eclipse3.xsl" />

<xsl:param name="chunker.output.indent" select="'yes'"/>
<xsl:param name="chunk.quietly" select="1"/>
<xsl:param name="chunk.first.sections" select="1"/>
<xsl:param name="chunk.section.depth" select="10"/>
<xsl:param name="use.id.as.filename" select="1"/>
<xsl:param name="ulink.target" select="'_self'" />
<xsl:param name="base.dir" select="'html/kernel-dev/'"/>
<xsl:param name="html.stylesheet" select="'../book.css'"/>
<xsl:param name="eclipse.manifest" select="0"/>
<xsl:param name="create.plugin.xml" select="0"/>
<xsl:param name="suppress.navigation" select="1"/>
<xsl:param name="generate.index" select="0"/>
<xsl:param name="chapter.autolabel" select="1" />
<xsl:param name="appendix.autolabel">A</xsl:param>
<xsl:param name="section.autolabel" select="1" />
<xsl:param name="section.label.includes.component.label" select="1" />
</xsl:stylesheet>
@@ -2,7 +2,7 @@
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"
[<!ENTITY % poky SYSTEM "../poky.ent"> %poky; ] >

<appendix id='kernel-dev-faq'>
<chapter id='kernel-dev-faq'>
<title>Kernel Development FAQ</title>
<qandaset>
    <qandaentry>
@@ -125,7 +125,7 @@
    </qandaentry>

</qandaset>
</appendix>
</chapter>
<!--
vim: expandtab tw=80 ts=4
-->

@@ -5,29 +5,6 @@
<chapter id='kernel-dev-intro'>
<title>Introduction</title>

<!--
<para>
<emphasis>AR - Darren Hart:</emphasis> See if the concepts in these three bullets are adequately covered somewhere in this manual:
<itemizedlist>
    <listitem><para>Do we convey that our kernel Git repositories have a clear and continuous history, similar to the way the kernel Git repositories for <filename>kernel.org</filename> do.</para></listitem>
    <listitem><para>Does the manual note that the Yocto Project delivers a key set of supported kernel types, where each type is tailored to meet a specific use (e.g. networking, consumer devices, and so forth).</para></listitem>
    <listitem><para>Do we convey that the Yocto Project uses a Git branching strategy that, from a developer's point of view, results in a linear path from the baseline kernel.org, through a select group of features, and ends with their BSP-specific commits.</para></listitem>
</itemizedlist>
</para>
-->

<section id='kernel-dev-overview'>
<title>Overview</title>

@@ -37,7 +14,7 @@
This manual provides background information on the Yocto Linux kernel
<ulink url='&YOCTO_DOCS_DEV_URL;#metadata'>Metadata</ulink>,
describes common tasks you can perform using the kernel tools,
and shows you how to use the kernel Metadata needed to work with
and shows you how to use the Metadata needed to work with
the kernel inside the Yocto Project.
</para>

@@ -55,7 +32,7 @@
Also included is a linux-yocto development recipe
(<filename>linux-yocto-dev.bb</filename>) should you want to work
with the very latest in upstream Linux kernel development and
kernel Metadata development.
Metadata development.
</para>

<para>

@@ -1,220 +0,0 @@
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"
[<!ENTITY % poky SYSTEM "../poky.ent"> %poky; ] >

<appendix id='kernel-dev-maint-appx'>
<title>Kernel Maintenance</title>

<section id='tree-construction'>
<title>Tree Construction</title>
<para>
This section describes how the Yocto Project team constructs the Yocto Project kernel source repositories.
These kernel repositories are found under the heading "Yocto Linux Kernel" at <ulink url='&YOCTO_GIT_URL;/cgit.cgi'>&YOCTO_GIT_URL;/cgit.cgi</ulink> and can be shipped as part of a Yocto Project release.
The team creates these repositories by compiling and executing the set of feature descriptions for every BSP and feature in the product.
Those feature descriptions list all necessary patches, configuration, branching, tagging, and feature divisions found in a kernel.
Thus, the Yocto Project kernel repository (or tree) is built.
</para>
<para>
The existence of this tree allows you to access and clone a particular Yocto Project kernel repository and use it to build images based on its configurations and features.
</para>
<para>
You can find the files used to describe all the valid features and BSPs in the Yocto Project kernel in any clone of the Yocto Project kernel source repository Git tree.
For example, the following command clones the Yocto Project baseline kernel that branched off of <filename>kernel.org</filename> version 3.4:
<literallayout class='monospaced'>
$ git clone git://git.yoctoproject.org/linux-yocto-3.4
</literallayout>
For another example of how to set up a local Git repository of the Yocto Project kernel files, see the "<ulink url='&YOCTO_DOCS_DEV_URL;#local-kernel-files'>Yocto Project Kernel</ulink>" bulleted item in the Yocto Project Development Manual.
</para>
<para>
Once you have cloned the kernel Git repository on your local machine, you can switch to the <filename>meta</filename> branch within the repository.
Here is an example that assumes the local Git repository for the kernel is in a top-level directory named <filename>linux-yocto-3.4</filename>:
<literallayout class='monospaced'>
$ cd ~/linux-yocto-3.4
$ git checkout -b meta origin/meta
</literallayout>
Once you have checked out and switched to the <filename>meta</filename> branch, you can see a snapshot of all the kernel configuration and feature descriptions that are used to build that particular kernel repository.
These descriptions are in the form of <filename>.scc</filename> files.
</para>
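<para>
As a rough illustration, a feature description in an <filename>.scc</filename> file might look like the following; the feature, fragment, and patch names are hypothetical:
<literallayout class='monospaced'>
# mybsp-standard.scc - example feature description
define KFEATURE_DESCRIPTION "Enable support for the hypothetical MyBSP board"

# pull in the base kernel type, then board-specific pieces
include ktypes/standard/standard.scc
kconf hardware mybsp.cfg
patch 0001-mybsp-enable-board-quirks.patch
</literallayout>
</para>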
<para>
You should realize, however, that browsing your local kernel repository for feature descriptions and patches is not an effective way to determine what is in a particular kernel branch.
Instead, you should use Git directly to discover the changes in a branch.
Using Git is an efficient and flexible way to inspect changes to the kernel.
<note>
Ground-up reconstruction of the complete kernel tree is an action only taken by the Yocto Project team during an active development cycle.
When you create a clone of the kernel Git repository, you are simply making it efficiently available for building and development.
</note>
</para>
<para>
The following steps describe what happens when the Yocto Project team constructs the Yocto Project kernel source Git repository (or tree) found at <ulink url='&YOCTO_GIT_URL;/cgit.cgi'></ulink> given the introduction of a new top-level kernel feature or BSP.
These are the actions that effectively create the tree that includes the new feature, patch, or BSP:
<orderedlist>
    <listitem><para>A top-level kernel feature is passed to the kernel build subsystem.
        Normally, this feature is a BSP for a particular kernel type.</para></listitem>
    <listitem><para>The file that describes the top-level feature is located by searching these system directories:
        <itemizedlist>
            <listitem><para>The in-tree kernel-cache directories, which are located in <filename>meta/cfg/kernel-cache</filename></para></listitem>
            <listitem><para>Areas pointed to by <filename>SRC_URI</filename> statements found in recipes</para></listitem>
        </itemizedlist>
        For a typical build, the target of the search is a feature description in an <filename>.scc</filename> file whose name follows this format:
        <literallayout class='monospaced'>
<bsp_name>-<kernel_type>.scc
        </literallayout>
        </para></listitem>
    <listitem><para>Once located, the feature description is either compiled into a simple script of actions, or into an existing equivalent script that is already part of the shipped kernel.</para></listitem>
    <listitem><para>Extra features are appended to the top-level feature description.
        These features can come from the <ulink url='&YOCTO_DOCS_REF_URL;#var-KERNEL_FEATURES'><filename>KERNEL_FEATURES</filename></ulink> variable in recipes.</para></listitem>
    <listitem><para>Each extra feature is located, compiled, and appended to the script as described in step three.</para></listitem>
    <listitem><para>The script is executed to produce a series of <filename>meta-*</filename> directories.
        These directories are descriptions of all the branches, tags, patches, and configurations that need to be applied to the base Git repository to completely create the source (build) branch for the new BSP or feature.</para></listitem>
    <listitem><para>The base repository is cloned, and the actions listed in the <filename>meta-*</filename> directories are applied to the tree.</para></listitem>
    <listitem><para>The Git repository is left with the desired branch checked out and any required branching, patching, and tagging has been performed.</para></listitem>
</orderedlist>
</para>
<para>
The kernel tree is now ready for developer consumption to be locally cloned, configured, and built into a Yocto Project kernel specific to some target hardware.
<note><para>The generated <filename>meta-*</filename> directories add to the kernel as shipped with the Yocto Project release.
Any add-ons and configuration data are applied to the end of an existing branch.
The full repository generation that is found in the official Yocto Project kernel repositories at <ulink url='&YOCTO_GIT_URL;/cgit.cgi'>http://git.yoctoproject.org/cgit.cgi</ulink> is the combination of all supported boards and configurations.</para>
<para>The technique the Yocto Project team uses is flexible and allows for seamless blending of an immutable history with additional patches specific to a deployment.
Any additions to the kernel become an integrated part of the branches.</para>
</note>
</para>
</section>

<section id='build-strategy'>
<title>Build Strategy</title>

<!--
<para>
<emphasis>AR - Darren Hart:</emphasis> Some parts of this section need to be in the "<link linkend='using-an-iterative-development-process'>Using an Iterative Development Process</link>" section.
Darren needs to figure out which parts and identify them.
</para>
-->

<para>
Once a local Git repository of the Yocto Project kernel exists on a development system, you can consider the compilation phase of kernel development - building a kernel image.
Some prerequisites exist that are validated by the build process before compilation starts:
</para>

<itemizedlist>
    <listitem><para>The <ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink> points to the kernel Git repository.</para></listitem>
    <listitem><para>A BSP build branch exists.
        This branch has the following form:
        <literallayout class='monospaced'>
<kernel_type>/<bsp_name>
        </literallayout></para></listitem>
</itemizedlist>

<para>
The OpenEmbedded build system makes sure these conditions exist before attempting compilation.
Other means, however, do exist, such as bootstrapping a BSP.
</para>
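<para>
As a rough sketch only, a kernel recipe might satisfy these conditions with settings along the following lines; the repository, branch, and variable values here are illustrative and are not taken from an actual recipe, so consult the linux-yocto recipes shipped with your release for the exact form:
<literallayout class='monospaced'>
# point the build at the kernel Git repository and the machine branch
SRC_URI = "git://git.yoctoproject.org/linux-yocto-3.4.git;branch=${KBRANCH};name=machine"
KBRANCH = "standard/common-pc/base"
</literallayout>
</para>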

<para>
Before building a kernel, the build process verifies the tree and configures the kernel by processing all of the configuration "fragments" specified by feature descriptions in the <filename>.scc</filename> files.
As the features are compiled, associated kernel configuration fragments are noted and recorded in the <filename>meta-*</filename> series of directories in their compilation order.
The fragments are migrated, pre-processed, and passed to the Linux Kernel Configuration subsystem (<filename>lkc</filename>) as raw input in the form of a <filename>.config</filename> file.
The <filename>lkc</filename> uses its own internal dependency constraints to do the final processing of that information and generates the final <filename>.config</filename> file that is used during compilation.
</para>
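<para>
As a simple, illustrative example (the fragment name and options are hypothetical), a configuration fragment is just a list of kernel options:
<literallayout class='monospaced'>
# mybsp.cfg - hardware fragment enabling the board's serial console
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_CONSOLE=y
</literallayout>
Fragments such as this are collected in the order listed by the feature descriptions and merged into the single <filename>.config</filename> used for the build.
</para>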

<para>
Using the board's architecture and other relevant values from the board's template, kernel compilation is started and a kernel image is produced.
</para>

<para>
Another thing you will notice once you configure a kernel is that the build process generates a build tree that is separate from your kernel's local Git source repository tree.
This build tree has a name that uses the following form, where <filename>${MACHINE}</filename> is the metadata name of the machine (BSP) and "kernel_type" is one of the Yocto Project supported kernel types (e.g. "standard"):
<literallayout class='monospaced'>
linux-${MACHINE}-<kernel_type>-build
</literallayout>
</para>
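<para>
For example, assuming a machine named "qemux86" built against the "standard" kernel type, the build tree directory would be named along these lines:
<literallayout class='monospaced'>
linux-qemux86-standard-build
</literallayout>
</para>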

<para>
The existing support in the <filename>kernel.org</filename> tree achieves this default functionality.
</para>

<para>
This behavior means that all the generated files for a particular machine or BSP are now in the build tree directory.
The files include the final <filename>.config</filename> file, all the <filename>.o</filename> files, the <filename>.a</filename> files, and so forth.
Since each machine or BSP has its own separate <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink> in its own separate branch of the Git repository, you can easily switch between different builds.
</para>
</section>
</appendix>
<!--
vim: expandtab tw=80 ts=4
-->
@@ -16,9 +16,7 @@
</imageobject>
</mediaobject>

<title>
Yocto Project Linux Kernel Development Manual
</title>
<title></title>

<authorgroup>
<author>
@@ -33,34 +31,9 @@
<revhistory>
    <revision>
        <revnumber>1.4</revnumber>
        <date>April 2013</date>
        <date>Sometime in 2013</date>
        <revremark>Released with the Yocto Project 1.4 Release.</revremark>
    </revision>
    <revision>
        <revnumber>1.4.1</revnumber>
        <date>June 2013</date>
        <revremark>Released with the Yocto Project 1.4.1 Release.</revremark>
    </revision>
    <revision>
        <revnumber>1.4.2</revnumber>
        <date>August 2013</date>
        <revremark>Released with the Yocto Project 1.4.2 Release.</revremark>
    </revision>
    <revision>
        <revnumber>1.4.3</revnumber>
        <date>March 2014</date>
        <revremark>Released with the Yocto Project 1.4.3 Release.</revremark>
    </revision>
    <revision>
        <revnumber>1.4.4</revnumber>
        <date>May 2014</date>
        <revremark>Released with the Yocto Project 1.4.4 Release.</revremark>
    </revision>
    <revision>
        <revnumber>1.4.5</revnumber>
        <date>July 2014</date>
        <revremark>Released with the Yocto Project 1.4.5 Release.</revremark>
    </revision>
</revhistory>

<copyright>
@@ -76,7 +49,7 @@
<note>
Due to production processes, there could be differences between the Yocto Project
documentation bundled in the release tarball and the
<ulink url='&YOCTO_DOCS_KERNEL_DEV_URL;'>Yocto Project Linux Kernel Development Manual</ulink> on
<ulink url='&YOCTO_DOCS_KERNEL_URL;'>Yocto Project Linux Kernel Development Manual</ulink> on
the <ulink url='&YOCTO_HOME_URL;'>Yocto Project</ulink> website.
For the latest version of this manual, see the manual on the website.
</note>
@@ -90,10 +63,6 @@

<xi:include href="kernel-dev-advanced.xml"/>

<xi:include href="kernel-dev-concepts-appx.xml"/>

<xi:include href="kernel-dev-maint-appx.xml"/>

<!--
<xi:include href="kernel-dev-examples.xml"/>
-->

@@ -246,10 +246,10 @@
</para>

<section id='change-inspection-kernel-changes-commits'>
<title>Change Inspection: Changes/Commits</title>
<title>Change Inspection: Kernel Changes/Commits</title>

<para>
A common question when working with a kernel is:
A common question when working with a BSP or kernel is:
"What changes have been applied to this tree?"
</para>

@@ -257,95 +257,53 @@
In projects that have a collection of directories that
contain patches to the kernel, it is possible to inspect or "grep" the contents
of the directories to get a general feel for the changes.
This sort of patch inspection is not an efficient way to determine what has been
done to the kernel.
This sort of patch inspection is not an efficient way to determine what has been done to the
kernel.
The reason it is inefficient is because there are many optional patches that are
selected based on the kernel type and the feature description.
Additionally, patches could exist in directories that are not included in the search.
</para>

<para>
A more efficient way to determine what has changed in the branch is to use
A more efficient way to determine what has changed in the kernel is to use
Git and inspect or search the kernel tree.
This method gives you a full view of not only the source code modifications,
but also provides the reasons for the changes.
</para>

<section id='what-changed-in-a-kernel'>
<title>What Changed in a Kernel?</title>
<section id='what-changed-in-a-bsp'>
<title>What Changed in a BSP?</title>

<para>
Following are a few examples that show how to use Git commands to examine changes.
Because Git repositories in the Yocto Project do not break existing Git
functionality, and because there exist many permutations of these types of
Git commands, many methods exist by which you can discover changes.
Following are a few examples that show how to use Git to examine changes.
Because the Yocto Project Git repository does not break existing Git
functionality and because there exist many permutations of these types of
commands, there are many more methods to discover changes.
<note>
In the following examples, unless you provide a commit range,
<filename>kernel.org</filename> history is blended with Yocto Project
kernel changes.
You can form ranges by using branch names from the kernel tree as the
upper and lower commit markers with the Git commands.
You can see the branch names through the web interface to the
Yocto Project source repositories at
<ulink url='http://git.yoctoproject.org/cgit.cgi'></ulink>.
For example, the branch names for the <filename>linux-yocto-3.4</filename>
kernel repository can be seen at
<ulink url='http://git.yoctoproject.org/cgit.cgi/linux-yocto-3.4/refs/heads'></ulink>.
Unless you provide a commit range
(<kernel-type>..<bsp>-<kernel-type>), <filename>kernel.org</filename> history
is blended with Yocto Project changes.
</note>
To see a full range of the changes, use the
<filename>git whatchanged</filename> command and specify a commit range
for the branch (<filename><commit>..<commit></filename>).
</para>

<para>
Here is an example that looks at what has changed in the
<filename>emenlow</filename> branch of the
<filename>linux-yocto-3.4</filename> kernel.
The lower commit range is the commit associated with the
<filename>standard/base</filename> branch, while
the upper commit range is the commit associated with the
<filename>standard/emenlow</filename> branch.
<literallayout class='monospaced'>
$ git whatchanged origin/standard/base..origin/standard/emenlow
</literallayout>
</para>
# full description of the changes
> git whatchanged <kernel type>..<kernel type>/<bsp>
> eg: git whatchanged yocto/standard/base..yocto/standard/common-pc/base

<para>
To see a summary of changes use the <filename>git log</filename> command.
Here is an example using the same branches:
<literallayout class='monospaced'>
$ git log --oneline origin/standard/base..origin/standard/emenlow
</literallayout>
The <filename>git log</filename> output might be more useful than
the <filename>git whatchanged</filename> output as you get
a short, one-line summary of each change and not the entire commit.
</para>
# summary of the changes
> git log --pretty=oneline --abbrev-commit <kernel type>..<kernel type>/<bsp>

<para>
If you want to see code differences associated with all the changes, use
the <filename>git diff</filename> command.
Here is an example:
<literallayout class='monospaced'>
$ git diff origin/standard/base..origin/standard/emenlow
</literallayout>
</para>
# source code changes (one combined diff)
> git diff <kernel type>..<kernel type>/<bsp>
> git show <kernel type>..<kernel type>/<bsp>

<para>
You can see the commit log messages and the text differences using the
<filename>git show</filename> command.
Here is an example:
<literallayout class='monospaced'>
$ git show origin/standard/base..origin/standard/emenlow
</literallayout>
</para>
# dump individual patches per commit
> git format-patch -o <dir> <kernel type>..<kernel type>/<bsp>

<para>
You can create individual patches for each change by using the
<filename>git format-patch</filename> command.
Here is an example that creates patch files for each commit and
places them in your <filename>Documents</filename> directory:
<literallayout class='monospaced'>
$ git format-patch -o $HOME/Documents origin/standard/base..origin/standard/emenlow
# determine the change history of a particular file
> git whatchanged <path to file>

# determine the commits which touch each line in a file
> git blame <path to file>
</literallayout>
</para>
</section>

@@ -3,9 +3,6 @@

<xsl:import href="http://docbook.sourceforge.net/release/xsl/current/xhtml/docbook.xsl" />

<xsl:param name="html.stylesheet" select="'kernel-style.css'" />
<xsl:param name="chapter.autolabel" select="1" />
<xsl:param name="appendix.autolabel" select="A" />
<xsl:param name="section.autolabel" select="1" />
<xsl:param name="section.label.includes.component.label" select="1" />
<!-- <xsl:param name="generate.toc" select="'article nop'"></xsl:param> -->

</xsl:stylesheet>

@@ -1,27 +0,0 @@
<?xml version='1.0'?>
<xsl:stylesheet
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns="http://www.w3.org/1999/xhtml"
    xmlns:fo="http://www.w3.org/1999/XSL/Format"
    version="1.0">

<xsl:import
    href="http://docbook.sourceforge.net/release/xsl/current/eclipse/eclipse3.xsl" />

<xsl:param name="chunker.output.indent" select="'yes'"/>
<xsl:param name="chunk.quietly" select="1"/>
<xsl:param name="chunk.first.sections" select="1"/>
<xsl:param name="chunk.section.depth" select="10"/>
<xsl:param name="use.id.as.filename" select="1"/>
<xsl:param name="ulink.target" select="'_self'" />
<xsl:param name="base.dir" select="'html/kernel-manual/'"/>
<xsl:param name="html.stylesheet" select="'../book.css'"/>
<xsl:param name="eclipse.manifest" select="0"/>
<xsl:param name="create.plugin.xml" select="0"/>
<xsl:param name="suppress.navigation" select="1"/>
<xsl:param name="generate.index" select="0"/>
<xsl:param name="chapter.autolabel" select="1" />
<xsl:param name="appendix.autolabel" select="A" />
<xsl:param name="section.autolabel" select="1" />
<xsl:param name="section.label.includes.component.label" select="1" />
</xsl:stylesheet>
@@ -16,9 +16,7 @@
</imageobject>
</mediaobject>

<title>
The Yocto Project Kernel Architecture and Use Manual
</title>
<title></title>

<authorgroup>
<author>
Before Width: | Height: | Size: 87 KiB
Before Width: | Height: | Size: 56 KiB
Before Width: | Height: | Size: 96 KiB
Before Width: | Height: | Size: 200 KiB
Before Width: | Height: | Size: 118 KiB
Before Width: | Height: | Size: 96 KiB