Mirror of https://git.yoctoproject.org/poky, synced 2026-02-16 05:33:03 +01:00.

Compare commits: yocto-4.1 ... yocto-3.2 (94 Commits)
| Author | SHA1 | Date |
|---|---|---|
| | 943ef2fad8 | |
| | 76dac9d657 | |
| | 333f24caec | |
| | e5bd9b93b4 | |
| | a4ff9dd2dc | |
| | 2d3224bf20 | |
| | e6f6420d98 | |
| | f0b8b3a960 | |
| | fef73fcd3a | |
| | d12e2d67c9 | |
| | eeb98ec6ae | |
| | 3f2bc0a2e1 | |
| | cbd023e0db | |
| | 307146220b | |
| | d754cd3a49 | |
| | 3d5309b736 | |
| | 369b6e0192 | |
| | e03e489758 | |
| | 321e17803e | |
| | 086ed4af2a | |
| | 67ff1d9ffb | |
| | 8de9b33e14 | |
| | afe59c8e1d | |
| | f6434fde67 | |
| | e46465c718 | |
| | e4156f232b | |
| | bfa254bd1a | |
| | 4315a12330 | |
| | 9b58e1d1a8 | |
| | f4ff33fd11 | |
| | f9f50c5638 | |
| | 23eef02eff | |
| | bef1f4761e | |
| | 8b9bdf1d1e | |
| | 1a4b81a392 | |
| | c111b692cc | |
| | 701e43727a | |
| | dedca9ecb7 | |
| | d890775c90 | |
| | fd3e68b355 | |
| | 678eafa74d | |
| | c2014927f2 | |
| | c5b7872dab | |
| | 2691a54e91 | |
| | e2de476001 | |
| | 45c8a7e583 | |
| | 4d2fd8ddd3 | |
| | ea0af53e2a | |
| | 2d342da2a3 | |
| | f1b304df93 | |
| | b569f2a414 | |
| | 411f541288 | |
| | 83477f0280 | |
| | 7e7893983f | |
| | e3a67d60cc | |
| | 23a0428069 | |
| | b74901b816 | |
| | 010625f35a | |
| | 0647439a0a | |
| | 87a05c7316 | |
| | 5c33ee311c | |
| | 3ad92d4d09 | |
| | 5e5a7fd73d | |
| | 3269613984 | |
| | b955cbdcfb | |
| | 58e47e1b70 | |
| | bb0524e189 | |
| | 7d58c8bed6 | |
| | 5232b03e22 | |
| | e2312cd887 | |
| | f552970178 | |
| | d59e28ea73 | |
| | 61642ef429 | |
| | 7f6f1519b9 | |
| | 528de6bc4f | |
| | 0ccf16fab3 | |
| | 4e513e2b86 | |
| | 1272d1b8fc | |
| | 686396e3dc | |
| | 2fa7fde32f | |
| | 72050b72e2 | |
| | 2fa97151cd | |
| | e67a7af07c | |
| | 2306702899 | |
| | f652c4d1b8 | |
| | ca1ed50ab3 | |
| | 46db037b1f | |
| | 70761072f5 | |
| | efa68c6490 | |
| | 3daa976efb | |
| | 4d35e4b168 | |
| | dff89518bd | |
| | cdae385f7d | |
| | b7a7dde44a | |
.gitignore (vendored, 3 lines changed)

@@ -30,5 +30,4 @@ hob-image-*.bb
 pull-*/
 bitbake/lib/toaster/contrib/tts/backlog.txt
 bitbake/lib/toaster/contrib/tts/log/*
-bitbake/lib/toaster/contrib/tts/.cache/*
-bitbake/lib/bb/tests/runqueue-tests/bitbake-cookerdaemon.log
+bitbake/lib/toaster/contrib/tts/.cache/*
@@ -1,2 +1,2 @@
 # Template settings
-TEMPLATECONF=${TEMPLATECONF:-meta-poky/conf/templates/default}
+TEMPLATECONF=${TEMPLATECONF:-meta-poky/conf}
@@ -1,71 +0,0 @@
|
||||
OpenEmbedded-Core and Yocto Project Maintainer Information
|
||||
==========================================================
|
||||
|
||||
OpenEmbedded and Yocto Project work jointly together to maintain the metadata,
|
||||
layers, tools and sub-projects that make up their ecosystems.
|
||||
|
||||
The projects operate through collaborative development. This currently takes
|
||||
place on mailing lists for many components as the "pull request on github"
|
||||
workflow works well for single or small numbers of maintainers but we have
|
||||
a large number, all with different specialisms and benefit from the mailing
|
||||
list review process. Changes therefore undergo peer review through mailing
|
||||
lists in many cases.
|
||||
|
||||
This file aims to acknowledge people with specific skills/knowledge/interest
|
||||
both to recognise their contributions but also empower them to help lead and
|
||||
curate those components. Where we have people with specialist knowledge in
|
||||
particular areas, during review patches/feedback from these people in these
|
||||
areas would generally carry weight.
|
||||
|
||||
This file is maintained in OE-Core but may refer to components that are separate
|
||||
to it if that makes sense in the context of maintainership. The README of specific
|
||||
layers and components should ultimately be definitive about the patch process and
|
||||
maintainership for the component.
|
||||
|
||||
Recipe Maintainers
|
||||
------------------
|
||||
|
||||
See meta/conf/distro/include/maintainers.inc
|
||||
|
||||
Component/Subsystem Maintainers
|
||||
-------------------------------
|
||||
|
||||
* Kernel (inc. linux-yocto, perf): Bruce Ashfield
|
||||
* Reproducible Builds: Joshua Watt
|
||||
* Toaster: David Reyna
|
||||
* Hash-Equivalence: Joshua Watt
|
||||
* Recipe upgrade infrastructure: Alex Kanavin
|
||||
* Toolchain: Khem Raj
|
||||
* ptest-runner: Aníbal Limón
|
||||
* opkg: Alex Stewart
|
||||
* devtool: Saul Wold
|
||||
* eSDK: Saul Wold
|
||||
* overlayfs: Vyacheslav Yurkov
|
||||
|
||||
Maintainers needed
|
||||
------------------
|
||||
|
||||
* Pseudo
|
||||
* Layer Index
|
||||
* recipetool
|
||||
* QA framework/automated testing
|
||||
* error reporting system/web UI
|
||||
* wic
|
||||
* Patchwork
|
||||
* Patchtest
|
||||
* Matchbox
|
||||
* Sato
|
||||
* Autobuilder
|
||||
|
||||
Layer Maintainers needed
|
||||
------------------------
|
||||
|
||||
* meta-gplv2 (ideally new strategy but active maintainer welcome)
|
||||
|
||||
Shadow maintainers/development needed
|
||||
--------------------------------------
|
||||
|
||||
* toaster
|
||||
* bitbake
|
||||
|
||||
|
||||
README.OE-Core (new file, 29 lines)

@@ -0,0 +1,29 @@
+OpenEmbedded-Core
+=================
+
+OpenEmbedded-Core is a layer containing the core metadata for current versions
+of OpenEmbedded. It is distro-less (can build a functional image with
+DISTRO = "nodistro") and contains only emulated machine support.
+
+For information about OpenEmbedded, see the OpenEmbedded website:
+    http://www.openembedded.org/
+
+The Yocto Project has extensive documentation about OE including a reference manual
+which can be found at:
+    http://yoctoproject.org/documentation
+
+
+Contributing
+------------
+
+Please refer to
+http://www.openembedded.org/wiki/How_to_submit_a_patch_to_OpenEmbedded
+for guidelines on how to submit patches.
+
+Mailing list:
+
+    http://lists.openembedded.org/mailman/listinfo/openembedded-core
+
+Source code:
+
+    http://git.openembedded.org/openembedded-core/
@@ -1,29 +0,0 @@
|
||||
OpenEmbedded-Core
|
||||
=================
|
||||
|
||||
OpenEmbedded-Core is a layer containing the core metadata for current versions
|
||||
of OpenEmbedded. It is distro-less (can build a functional image with
|
||||
DISTRO = "nodistro") and contains only emulated machine support.
|
||||
|
||||
For information about OpenEmbedded, see the OpenEmbedded website:
|
||||
https://www.openembedded.org/
|
||||
|
||||
The Yocto Project has extensive documentation about OE including a reference manual
|
||||
which can be found at:
|
||||
https://docs.yoctoproject.org/
|
||||
|
||||
|
||||
Contributing
|
||||
------------
|
||||
|
||||
Please refer to
|
||||
https://www.openembedded.org/wiki/How_to_submit_a_patch_to_OpenEmbedded
|
||||
for guidelines on how to submit patches.
|
||||
|
||||
Mailing list:
|
||||
|
||||
https://lists.openembedded.org/g/openembedded-core
|
||||
|
||||
Source code:
|
||||
|
||||
https://git.openembedded.org/openembedded-core/
|
||||
README.hardware (symbolic link, 1 line)

@@ -0,0 +1 @@
+meta-yocto-bsp/README.hardware
@@ -1 +0,0 @@
-meta-yocto-bsp/README.hardware.md
README.poky (symbolic link, 1 line)

@@ -0,0 +1 @@
+meta-poky/README.poky
@@ -1 +0,0 @@
-meta-poky/README.poky.md
@@ -7,17 +7,17 @@ One of BitBake's main users, OpenEmbedded, takes this core and builds embedded L
 stacks using a task-oriented approach.
 
 For information about Bitbake, see the OpenEmbedded website:
-    https://www.openembedded.org/
+    http://www.openembedded.org/
 
 Bitbake plain documentation can be found under the doc directory or its integrated
 html version at the Yocto Project website:
-    https://docs.yoctoproject.org
+    http://yoctoproject.org/documentation
 
 Contributing
 ------------
 
 Please refer to
-https://www.openembedded.org/wiki/How_to_submit_a_patch_to_OpenEmbedded
+http://www.openembedded.org/wiki/How_to_submit_a_patch_to_OpenEmbedded
 for guidelines on how to submit patches, just note that the latter documentation is intended
 for OpenEmbedded (and its core) not bitbake patches (bitbake-devel@lists.openembedded.org)
 but in general main guidelines apply. Once the commit(s) have been created, the way to send
@@ -28,16 +28,8 @@ branch, type:
 
 Mailing list:
 
-    https://lists.openembedded.org/g/bitbake-devel
+    http://lists.openembedded.org/mailman/listinfo/bitbake-devel
 
 Source code:
 
-    https://git.openembedded.org/bitbake/
-
-Testing:
-
-Bitbake has a testsuite located in lib/bb/tests/ whichs aim to try and prevent regressions.
-You can run this with "bitbake-selftest". In particular the fetcher is well covered since
-it has so many corner cases. The datastore has many tests too. Testing with the testsuite is
-recommended before submitting patches, particularly to the fetcher and datastore. We also
-appreciate new test cases and may require them for more obscure issues.
+    http://git.openembedded.org/bitbake/
@@ -12,8 +12,6 @@
 import os
 import sys
-import warnings
-warnings.simplefilter("default")
 
 sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)),
                 'lib'))
@@ -28,7 +26,7 @@ from bb.main import bitbake_main, BitBakeConfigParameters, BBMainException
 if sys.getfilesystemencoding() != "utf-8":
     sys.exit("Please use a locale setting which supports UTF-8 (such as LANG=en_US.UTF-8).\nPython can't change the filesystem locale after loading so we need a UTF-8 when Python starts or things won't work.")
 
-__version__ = "2.2.0"
+__version__ = "1.48.0"
 
 if __name__ == "__main__":
     if __version__ != bb.__version__:
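A pattern worth noting across the yocto-4.1 side of these script diffs: each executable gains an `import warnings` / `warnings.simplefilter("default")` preamble. A minimal sketch of what that buys (my illustration, not repository code):

```python
import warnings

# Python hides DeprecationWarning raised from library code by default;
# simplefilter("default") prints each unique warning location once, so
# deprecations triggered inside the bb/ modules become visible at runtime.
warnings.simplefilter("default")
warnings.warn("old API, scheduled for removal", DeprecationWarning)
```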
@@ -11,8 +11,6 @@
 import os
 import sys
 import warnings
-
-warnings.simplefilter("default")
 import argparse
 import logging
 import pickle
@@ -28,7 +26,6 @@ logger = bb.msg.logger_create(myname)
 
 is_dump = myname == 'bitbake-dumpsig'
 
-
 def find_siginfo(tinfoil, pn, taskname, sigs=None):
     result = None
     tinfoil.set_event_mask(['bb.event.FindSigInfoResult',
@@ -54,7 +51,6 @@ def find_siginfo(tinfoil, pn, taskname, sigs=None):
         sys.exit(2)
     return result
 
-
 def find_siginfo_task(bbhandler, pn, taskname, sig1=None, sig2=None):
     """ Find the most recent signature files for the specified PN/task """
 
@@ -63,13 +59,13 @@ def find_siginfo_task(bbhandler, pn, taskname, sig1=None, sig2=None):
 
     if sig1 and sig2:
         sigfiles = find_siginfo(bbhandler, pn, taskname, [sig1, sig2])
-        if not sigfiles:
+        if len(sigfiles) == 0:
             logger.error('No sigdata files found matching %s %s matching either %s or %s' % (pn, taskname, sig1, sig2))
             sys.exit(1)
-        elif sig1 not in sigfiles:
+        elif not sig1 in sigfiles:
             logger.error('No sigdata files found matching %s %s with signature %s' % (pn, taskname, sig1))
             sys.exit(1)
-        elif sig2 not in sigfiles:
+        elif not sig2 in sigfiles:
             logger.error('No sigdata files found matching %s %s with signature %s' % (pn, taskname, sig2))
             sys.exit(1)
         latestfiles = [sigfiles[sig1], sigfiles[sig2]]
@@ -89,11 +85,11 @@ def recursecb(key, hash1, hash2):
     hashfiles = find_siginfo(tinfoil, key, None, hashes)
 
     recout = []
-    if not hashfiles:
+    if len(hashfiles) == 0:
         recout.append("Unable to find matching sigdata for %s with hashes %s or %s" % (key, hash1, hash2))
-    elif hash1 not in hashfiles:
+    elif not hash1 in hashfiles:
         recout.append("Unable to find matching sigdata for %s with hash %s" % (key, hash1))
-    elif hash2 not in hashfiles:
+    elif not hash2 in hashfiles:
         recout.append("Unable to find matching sigdata for %s with hash %s" % (key, hash2))
     else:
         out2 = bb.siggen.compare_sigfiles(hashfiles[hash1], hashfiles[hash2], recursecb, color=color)
@@ -113,36 +109,36 @@ parser.add_argument('-D', '--debug',
 
 if is_dump:
     parser.add_argument("-t", "--task",
-                        help="find the signature data file for the last run of the specified task",
-                        action="store", dest="taskargs", nargs=2, metavar=('recipename', 'taskname'))
+            help="find the signature data file for the last run of the specified task",
+            action="store", dest="taskargs", nargs=2, metavar=('recipename', 'taskname'))
 
     parser.add_argument("sigdatafile1",
-                        help="Signature file to dump. Not used when using -t/--task.",
-                        action="store", nargs='?', metavar="sigdatafile")
+            help="Signature file to dump. Not used when using -t/--task.",
+            action="store", nargs='?', metavar="sigdatafile")
 else:
     parser.add_argument('-c', '--color',
-                        help='Colorize the output (where %(metavar)s is %(choices)s)',
-                        choices=['auto', 'always', 'never'], default='auto', metavar='color')
+            help='Colorize the output (where %(metavar)s is %(choices)s)',
+            choices=['auto', 'always', 'never'], default='auto', metavar='color')
 
     parser.add_argument('-d', '--dump',
-                        help='Dump the last signature data instead of comparing (equivalent to using bitbake-dumpsig)',
-                        action='store_true')
+            help='Dump the last signature data instead of comparing (equivalent to using bitbake-dumpsig)',
+            action='store_true')
 
     parser.add_argument("-t", "--task",
-                        help="find the signature data files for the last two runs of the specified task and compare them",
-                        action="store", dest="taskargs", nargs=2, metavar=('recipename', 'taskname'))
+            help="find the signature data files for the last two runs of the specified task and compare them",
+            action="store", dest="taskargs", nargs=2, metavar=('recipename', 'taskname'))
 
     parser.add_argument("-s", "--signature",
-                        help="With -t/--task, specify the signatures to look for instead of taking the last two",
-                        action="store", dest="sigargs", nargs=2, metavar=('fromsig', 'tosig'))
+            help="With -t/--task, specify the signatures to look for instead of taking the last two",
+            action="store", dest="sigargs", nargs=2, metavar=('fromsig', 'tosig'))
 
     parser.add_argument("sigdatafile1",
-                        help="First signature file to compare (or signature file to dump, if second not specified). Not used when using -t/--task.",
-                        action="store", nargs='?')
+            help="First signature file to compare (or signature file to dump, if second not specified). Not used when using -t/--task.",
+            action="store", nargs='?')
 
     parser.add_argument("sigdatafile2",
-                        help="Second signature file to compare",
-                        action="store", nargs='?')
+            help="Second signature file to compare",
+            action="store", nargs='?')
 
 options = parser.parse_args()
 if is_dump:
@@ -160,8 +156,7 @@ if options.taskargs:
     with bb.tinfoil.Tinfoil() as tinfoil:
         tinfoil.prepare(config_only=True)
         if not options.dump and options.sigargs:
-            files = find_siginfo_task(tinfoil, options.taskargs[0], options.taskargs[1], options.sigargs[0],
-                                      options.sigargs[1])
+            files = find_siginfo_task(tinfoil, options.taskargs[0], options.taskargs[1], options.sigargs[0], options.sigargs[1])
         else:
             files = find_siginfo_task(tinfoil, options.taskargs[0], options.taskargs[1])
 
@@ -170,8 +165,7 @@ if options.taskargs:
             output = bb.siggen.dump_sigfile(files[-1])
         else:
             if len(files) < 2:
-                logger.error('Only one matching sigdata file found for the specified task (%s %s)' % (
-                    options.taskargs[0], options.taskargs[1]))
+                logger.error('Only one matching sigdata file found for the specified task (%s %s)' % (options.taskargs[0], options.taskargs[1]))
                 sys.exit(1)
 
             # Recurse into signature comparison
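The changed conditionals above are idiom updates on the 4.1 side: truthiness tests instead of `len(x) == 0`, and `x not in y` instead of `not x in y`. A quick sketch of why the first form is also more robust (my example, not repository code):

```python
sigfiles = {}

# An empty dict is falsy, and so is None; len(None) would raise TypeError
# if find_siginfo() ever returned None instead of an empty mapping.
if not sigfiles:
    print("No sigdata files found")

# Both spellings parse identically; "not in" reads as a single operator.
assert ("sig1" not in sigfiles) == (not "sig1" in sigfiles)
```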
@@ -1,50 +0,0 @@
|
||||
#! /usr/bin/env python3
|
||||
#
|
||||
# Copyright (C) 2021 Richard Purdie
|
||||
#
|
||||
# SPDX-License-Identifier: GPL-2.0-only
|
||||
#
|
||||
|
||||
import argparse
|
||||
import io
|
||||
import os
|
||||
import sys
|
||||
import warnings
|
||||
warnings.simplefilter("default")
|
||||
|
||||
bindir = os.path.dirname(__file__)
|
||||
topdir = os.path.dirname(bindir)
|
||||
sys.path[0:0] = [os.path.join(topdir, 'lib')]
|
||||
|
||||
import bb.tinfoil
|
||||
|
||||
if __name__ == "__main__":
|
||||
parser = argparse.ArgumentParser(description="Bitbake Query Variable")
|
||||
parser.add_argument("variable", help="variable name to query")
|
||||
parser.add_argument("-r", "--recipe", help="Recipe name to query", default=None, required=False)
|
||||
parser.add_argument('-u', '--unexpand', help='Do not expand the value (with --value)', action="store_true")
|
||||
parser.add_argument('-f', '--flag', help='Specify a variable flag to query (with --value)', default=None)
|
||||
parser.add_argument('--value', help='Only report the value, no history and no variable name', action="store_true")
|
||||
args = parser.parse_args()
|
||||
|
||||
if args.unexpand and not args.value:
|
||||
print("--unexpand only makes sense with --value")
|
||||
sys.exit(1)
|
||||
|
||||
if args.flag and not args.value:
|
||||
print("--flag only makes sense with --value")
|
||||
sys.exit(1)
|
||||
|
||||
with bb.tinfoil.Tinfoil(tracking=True) as tinfoil:
|
||||
if args.recipe:
|
||||
tinfoil.prepare(quiet=2)
|
||||
d = tinfoil.parse_recipe(args.recipe)
|
||||
else:
|
||||
tinfoil.prepare(quiet=2, config_only=True)
|
||||
d = tinfoil.config_data
|
||||
if args.flag:
|
||||
print(str(d.getVarFlag(args.variable, args.flag, expand=(not args.unexpand))))
|
||||
elif args.value:
|
||||
print(str(d.getVar(args.variable, expand=(not args.unexpand))))
|
||||
else:
|
||||
bb.data.emit_var(args.variable, d=d, all=True)
|
||||
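The deleted script above (present only on the yocto-4.1 side) is a thin wrapper over the tinfoil API it imports, so the same query can be scripted directly. A sketch assuming an initialized build environment:

```python
import bb.tinfoil

# Equivalent of the script's "--value MACHINE" path: parse only the base
# configuration and print one variable's expanded value.
with bb.tinfoil.Tinfoil(tracking=True) as tinfoil:
    tinfoil.prepare(quiet=2, config_only=True)
    print(tinfoil.config_data.getVar("MACHINE"))
```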
@@ -13,8 +13,6 @@ import pprint
 import sys
 import threading
 import time
-import warnings
-warnings.simplefilter("default")
 
 try:
     import tqdm
@@ -153,6 +151,9 @@ def main():
     func = getattr(args, 'func', None)
     if func:
         client = hashserv.create_client(args.address)
+        # Try to establish a connection to the server now to detect failures
+        # early
+        client.connect()
 
         return func(args, client)
 
@@ -10,8 +10,6 @@ import sys
 import logging
 import argparse
 import sqlite3
-import warnings
-warnings.simplefilter("default")
 
 sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'lib'))
 
@@ -32,11 +30,9 @@ def main():
         "--bind [::1]:8686"'''
     )
 
-    parser.add_argument('-b', '--bind', default=DEFAULT_BIND, help='Bind address (default "%(default)s")')
-    parser.add_argument('-d', '--database', default='./hashserv.db', help='Database file (default "%(default)s")')
-    parser.add_argument('-l', '--log', default='WARNING', help='Set logging level')
-    parser.add_argument('-u', '--upstream', help='Upstream hashserv to pull hashes from')
-    parser.add_argument('-r', '--read-only', action='store_true', help='Disallow write operations from clients')
+    parser.add_argument('--bind', default=DEFAULT_BIND, help='Bind address (default "%(default)s")')
+    parser.add_argument('--database', default='./hashserv.db', help='Database file (default "%(default)s")')
+    parser.add_argument('--log', default='WARNING', help='Set logging level')
 
     args = parser.parse_args()
 
@@ -51,7 +47,7 @@ def main():
     console.setLevel(level)
     logger.addHandler(console)
 
-    server = hashserv.create_server(args.bind, args.database, upstream=args.upstream, read_only=args.read_only)
+    server = hashserv.create_server(args.bind, args.database)
     server.serve_forever()
     return 0
 
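The `-u/--upstream` and `-r/--read-only` options that exist only on the 4.1 side combine naturally into a local read-only mirror of a shared hash-equivalence server. A sketch using the `create_server()` signature visible in the hunk above (both addresses are placeholder values):

```python
import hashserv

# Answer hash queries locally, pull unknown hashes from the shared
# upstream, and reject client writes so local builds cannot pollute it.
server = hashserv.create_server(
    "127.0.0.1:8686",                      # placeholder bind address
    "./hashserv.db",
    upstream="hashserv.example.com:8686",  # placeholder upstream server
    read_only=True,
)
server.serve_forever()
```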
@@ -14,8 +14,6 @@ import logging
 import os
 import sys
 import argparse
-import warnings
-warnings.simplefilter("default")
 
 bindir = os.path.dirname(__file__)
 topdir = os.path.dirname(bindir)
@@ -68,11 +66,11 @@ def main():
 
     registered = False
     for plugin in plugins:
-        if hasattr(plugin, 'tinfoil_init'):
-            plugin.tinfoil_init(tinfoil)
         if hasattr(plugin, 'register_commands'):
             registered = True
             plugin.register_commands(subparsers)
+        if hasattr(plugin, 'tinfoil_init'):
+            plugin.tinfoil_init(tinfoil)
 
     if not registered:
         logger.error("No commands registered - missing plugins?")
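The reordering above means a plugin's `tinfoil_init()` runs before `register_commands()` on the 4.1 side, so command registration can already use the tinfoil handle. A hypothetical plugin showing the duck-typed contract the loop probes with `hasattr()`:

```python
class ExamplePlugin:
    """Hypothetical plugin; method names match the loop shown above."""

    def tinfoil_init(self, tinfoil):
        # Runs first in the 4.1 ordering, so register_commands() below
        # can safely consult self.tinfoil.
        self.tinfoil = tinfoil

    def register_commands(self, subparsers):
        parser = subparsers.add_parser("example")  # hypothetical subcommand
        parser.set_defaults(func=lambda args: print(self.tinfoil))
```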
@@ -1,15 +1,11 @@
 #!/usr/bin/env python3
 #
-# Copyright BitBake Contributors
-#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
 import os
 import sys,logging
 import optparse
-import warnings
-warnings.simplefilter("default")
 
 sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)),'lib'))
 
@@ -40,14 +36,12 @@ def main():
                       dest="host", type="string", default=PRHOST_DEFAULT)
     parser.add_option("--port", help="port number(default: 8585)", action="store",
                       dest="port", type="int", default=PRPORT_DEFAULT)
-    parser.add_option("-r", "--read-only", help="open database in read-only mode",
-                      action="store_true")
 
     options, args = parser.parse_args(sys.argv)
     prserv.init_logger(os.path.abspath(options.logfile),options.loglevel)
 
     if options.start:
-        ret=prserv.serv.start_daemon(options.dbfile, options.host, options.port,os.path.abspath(options.logfile), options.read_only)
+        ret=prserv.serv.start_daemon(options.dbfile, options.host, options.port,os.path.abspath(options.logfile))
     elif options.stop:
         ret=prserv.serv.stop_daemon(options.host, options.port)
     else:
@@ -7,8 +7,6 @@
 
 import os
 import sys, logging
-import warnings
-warnings.simplefilter("default")
 sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'lib'))
 
 import unittest
@@ -31,7 +29,6 @@ tests = ["bb.tests.codeparser",
          "bb.tests.runqueue",
          "bb.tests.siggen",
          "bb.tests.utils",
-         "bb.tests.compression",
          "hashserv.tests",
          "layerindexlib.tests.layerindexobj",
          "layerindexlib.tests.restapi",
@@ -8,7 +8,6 @@
 import os
 import sys
 import warnings
-warnings.simplefilter("default")
 import logging
 sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
 
@@ -27,10 +26,12 @@ readypipeinfd = int(sys.argv[3])
 logfile = sys.argv[4]
 lockname = sys.argv[5]
 sockname = sys.argv[6]
-timeout = float(sys.argv[7])
+timeout = sys.argv[7]
 xmlrpcinterface = (sys.argv[8], int(sys.argv[9]))
 if xmlrpcinterface[0] == "None":
     xmlrpcinterface = (None, xmlrpcinterface[1])
+if timeout == "None":
+    timeout = None
 
 # Replace standard fds with our own
 with open('/dev/null', 'r') as si:
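The timeout hunk is a behavioral change: the 3.2 side passes the sentinel string "None" through argv and converts late, while the 4.1 side converts to float immediately and would raise on "None". A sketch of the two contracts (values illustrative):

```python
raw = "None"  # what sys.argv[7] may carry under the 3.2 scheme

# 3.2-style: sentinel string decoded after the fact.
timeout = None if raw == "None" else float(raw)

# 4.1-style does float(sys.argv[7]) up front, so the parent process must
# always send a numeric string; float("None") raises ValueError.
```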
@@ -1,14 +1,11 @@
 #!/usr/bin/env python3
 #
-# Copyright BitBake Contributors
-#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
 import os
 import sys
 import warnings
-warnings.simplefilter("default")
 sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
 from bb import fetch2
 import logging
@@ -19,8 +16,6 @@ import signal
 import pickle
 import traceback
 import queue
-import shlex
-import subprocess
 from multiprocessing import Lock
 from threading import Thread
 
@@ -123,9 +118,7 @@ def worker_child_fire(event, d):
     data = b"<event>" + pickle.dumps(event) + b"</event>"
     try:
         worker_pipe_lock.acquire()
-        while(len(data)):
-            written = worker_pipe.write(data)
-            data = data[written:]
+        worker_pipe.write(data)
         worker_pipe_lock.release()
     except IOError:
         sigterm_handler(None, None)
@@ -150,31 +143,21 @@ def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, taskha
     # a fork() or exec*() activates PSEUDO...
 
     envbackup = {}
-    fakeroot = False
     fakeenv = {}
     umask = None
 
-    uid = os.getuid()
-    gid = os.getgid()
-
 
     taskdep = workerdata["taskdeps"][fn]
     if 'umask' in taskdep and taskname in taskdep['umask']:
-        umask = taskdep['umask'][taskname]
-    elif workerdata["umask"]:
-        umask = workerdata["umask"]
-    if umask:
         # umask might come in as a number or text string..
         try:
-            umask = int(umask, 8)
+            umask = int(taskdep['umask'][taskname],8)
         except TypeError:
-            pass
+            umask = taskdep['umask'][taskname]
 
     dry_run = cfg.dry_run or dry_run_exec
 
     # We can't use the fakeroot environment in a dry run as it possibly hasn't been built
     if 'fakeroot' in taskdep and taskname in taskdep['fakeroot'] and not dry_run:
         fakeroot = True
         envvars = (workerdata["fakerootenv"][fn] or "").split()
         for key, value in (var.split('=') for var in envvars):
             envbackup[key] = os.environ.get(key)
@@ -184,7 +167,7 @@ def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, taskha
         fakedirs = (workerdata["fakerootdirs"][fn] or "").split()
         for p in fakedirs:
             bb.utils.mkdirhier(p)
-        logger.debug2('Running %s:%s under fakeroot, fakedirs: %s' %
+        logger.debug(2, 'Running %s:%s under fakeroot, fakedirs: %s' %
                     (fn, taskname, ', '.join(fakedirs)))
     else:
         envvars = (workerdata["fakerootnoenv"][fn] or "").split()
@@ -243,7 +226,6 @@ def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, taskha
                 the_data = databuilder.mcdata[mc]
                 the_data.setVar("BB_WORKERCONTEXT", "1")
                 the_data.setVar("BB_TASKDEPDATA", taskdepdata)
-                the_data.setVar('BB_CURRENTTASK', taskname.replace("do_", ""))
                 if cfg.limited_deps:
                     the_data.setVar("BB_LIMITEDDEPS", "1")
                 the_data.setVar("BUILDNAME", workerdata["buildname"])
@@ -263,13 +245,6 @@ def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, taskha
 
                 bb.utils.set_process_name("%s:%s" % (the_data.getVar("PN"), taskname.replace("do_", "")))
 
-                if not the_data.getVarFlag(taskname, 'network', False):
-                    if bb.utils.is_local_uid(uid):
-                        logger.debug("Attempting to disable network for %s" % taskname)
-                        bb.utils.disable_network(uid, gid)
-                    else:
-                        logger.debug("Skipping disable network for %s since %s is not a local uid." % (taskname, uid))
-
                 # exported_vars() returns a generator which *cannot* be passed to os.environ.update()
                 # successfully. We also need to unset anything from the environment which shouldn't be there
                 exports = bb.data.exported_vars(the_data)
@@ -301,13 +276,7 @@ def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, taskha
             try:
                 if dry_run:
                     return 0
-                try:
-                    ret = bb.build.exec_task(fn, taskname, the_data, cfg.profile)
-                finally:
-                    if fakeroot:
-                        fakerootcmd = shlex.split(the_data.getVar("FAKEROOTCMD"))
-                        subprocess.run(fakerootcmd + ['-S'], check=True, stdout=subprocess.PIPE)
-                return ret
+                return bb.build.exec_task(fn, taskname, the_data, cfg.profile)
             except:
                 os._exit(1)
         if not profiling:
@@ -352,9 +321,7 @@ class runQueueWorkerPipe():
         end = len(self.queue)
         index = self.queue.find(b"</event>")
         while index != -1:
-            msg = self.queue[:index+8]
-            assert msg.startswith(b"<event>") and msg.count(b"<event>") == 1
-            worker_fire_prepickled(msg)
+            worker_fire_prepickled(self.queue[:index+8])
             self.queue = self.queue[index+8:]
             index = self.queue.find(b"</event>")
         return (end > start)
@@ -431,18 +398,14 @@ class BitbakeWorker(object):
         if self.queue.startswith(b"<" + item + b">"):
             index = self.queue.find(b"</" + item + b">")
             while index != -1:
-                try:
-                    func(self.queue[(len(item) + 2):index])
-                except pickle.UnpicklingError:
-                    workerlog_write("Unable to unpickle data: %s\n" % ":".join("{:02x}".format(c) for c in self.queue))
-                    raise
+                func(self.queue[(len(item) + 2):index])
                 self.queue = self.queue[(index + len(item) + 3):]
                 index = self.queue.find(b"</" + item + b">")
 
     def handle_cookercfg(self, data):
         self.cookercfg = pickle.loads(data)
         self.databuilder = bb.cookerdata.CookerDataBuilder(self.cookercfg, worker=True)
-        self.databuilder.parseBaseConfiguration(worker=True)
+        self.databuilder.parseBaseConfiguration()
         self.data = self.databuilder.data
 
     def handle_extraconfigdata(self, data):
@@ -457,7 +420,6 @@ class BitbakeWorker(object):
         for mc in self.databuilder.mcdata:
             self.databuilder.mcdata[mc].setVar("PRSERV_HOST", self.workerdata["prhost"])
             self.databuilder.mcdata[mc].setVar("BB_HASHSERVE", self.workerdata["hashservaddr"])
-            self.databuilder.mcdata[mc].setVar("__bbclasstype", "recipe")
 
     def handle_newtaskhashes(self, data):
         self.workerdata["newhashes"] = pickle.loads(data)
@@ -543,11 +505,9 @@ except BaseException as e:
     import traceback
     sys.stderr.write(traceback.format_exc())
     sys.stderr.write(str(e))
-finally:
-    worker_thread_exit = True
-    worker_thread.join()
 
-workerlog_write("exiting")
-if not normalexit:
-    sys.exit(1)
+worker_thread_exit = True
+worker_thread.join()
+
+workerlog_write("exitting")
 sys.exit(0)
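The umask hunk above shows the 4.1 side normalizing a value that may arrive either as an octal string or as an already-parsed int, with a fall-back to a worker-wide default. The conversion rule in isolation (my sketch, not repository code):

```python
def normalize_umask(value):
    # Task metadata may carry the umask as an octal string ("022") or as
    # an int; int(value, 8) accepts only strings and raises TypeError on
    # ints, which the diff's except clause uses to keep the value as-is.
    try:
        return int(value, 8)
    except TypeError:
        return value

assert normalize_umask("022") == 0o22
assert normalize_umask(0o22) == 0o22
```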
@@ -1,7 +1,5 @@
 #!/usr/bin/env python3
 #
-# Copyright BitBake Contributors
-#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
@@ -18,8 +16,6 @@ import itertools
 import os
 import subprocess
 import sys
-import warnings
-warnings.simplefilter("default")
 
 version = 1.0
 
@@ -33,7 +33,7 @@ databaseCheck()
     $MANAGE migrate --noinput || retval=1
 
     if [ $retval -eq 1 ]; then
-        echo "Failed migrations, halting system start" 1>&2
+        echo "Failed migrations, aborting system start" 1>&2
         return $retval
     fi
     # Make sure that checksettings can pick up any value for TEMPLATECONF
@@ -41,7 +41,7 @@ databaseCheck()
     $MANAGE checksettings --traceback || retval=1
 
     if [ $retval -eq 1 ]; then
-        printf "\nError while checking settings; exiting\n"
+        printf "\nError while checking settings; aborting\n"
        return $retval
     fi
 
@@ -248,7 +248,7 @@ fi
 # 3) the sqlite db if that is being used.
 # 4) pid's we need to clean up on exit/shutdown
 export TOASTER_DIR=$TOASTERDIR
-export BB_ENV_PASSTHROUGH_ADDITIONS="$BB_ENV_PASSTHROUGH_ADDITIONS TOASTER_DIR"
+export BB_ENV_EXTRAWHITE="$BB_ENV_EXTRAWHITE TOASTER_DIR"
 
 # Determine the action. If specified by arguments, fine, if not, toggle it
 if [ "$CMD" = "start" ] ; then
@@ -19,8 +19,6 @@ import sys
 import json
 import pickle
 import codecs
-import warnings
-warnings.simplefilter("default")
 
 from collections import namedtuple
 
@@ -1,23 +0,0 @@
|
||||
# SPDX-License-Identifier: MIT
|
||||
#
|
||||
# Copyright (c) 2021 Joshua Watt <JPEWhacker@gmail.com>
|
||||
#
|
||||
# Dockerfile to build a bitbake hash equivalence server container
|
||||
#
|
||||
# From the root of the bitbake repository, run:
|
||||
#
|
||||
# docker build -f contrib/hashserv/Dockerfile .
|
||||
#
|
||||
|
||||
FROM alpine:3.13.1
|
||||
|
||||
RUN apk add --no-cache python3
|
||||
|
||||
COPY bin/bitbake-hashserv /opt/bbhashserv/bin/
|
||||
COPY lib/hashserv /opt/bbhashserv/lib/hashserv/
|
||||
COPY lib/bb /opt/bbhashserv/lib/bb/
|
||||
COPY lib/codegen.py /opt/bbhashserv/lib/codegen.py
|
||||
COPY lib/ply /opt/bbhashserv/lib/ply/
|
||||
COPY lib/bs4 /opt/bbhashserv/lib/bs4/
|
||||
|
||||
ENTRYPOINT ["/opt/bbhashserv/bin/bitbake-hashserv"]
|
||||
@@ -1,62 +0,0 @@
|
||||
# SPDX-License-Identifier: MIT
|
||||
#
|
||||
# Copyright (c) 2022 Daniel Gomez <daniel@qtec.com>
|
||||
#
|
||||
# Dockerfile to build a bitbake PR service container
|
||||
#
|
||||
# From the root of the bitbake repository, run:
|
||||
#
|
||||
# docker build -f contrib/prserv/Dockerfile . -t prserv
|
||||
#
|
||||
# Running examples:
|
||||
#
|
||||
# 1. PR Service in RW mode, port 18585:
|
||||
#
|
||||
# docker run --detach --tty \
|
||||
# --env PORT=18585 \
|
||||
# --publish 18585:18585 \
|
||||
# --volume $PWD:/var/lib/bbprserv \
|
||||
# prserv
|
||||
#
|
||||
# 2. PR Service in RO mode, default port (8585) and custom LOGFILE:
|
||||
#
|
||||
# docker run --detach --tty \
|
||||
# --env DBMODE="--read-only" \
|
||||
# --env LOGFILE=/var/lib/bbprserv/prservro.log \
|
||||
# --publish 8585:8585 \
|
||||
# --volume $PWD:/var/lib/bbprserv \
|
||||
# prserv
|
||||
#
|
||||
|
||||
FROM alpine:3.14.4
|
||||
|
||||
RUN apk add --no-cache python3
|
||||
|
||||
COPY bin/bitbake-prserv /opt/bbprserv/bin/
|
||||
COPY lib/prserv /opt/bbprserv/lib/prserv/
|
||||
COPY lib/bb /opt/bbprserv/lib/bb/
|
||||
COPY lib/codegen.py /opt/bbprserv/lib/codegen.py
|
||||
COPY lib/ply /opt/bbprserv/lib/ply/
|
||||
COPY lib/bs4 /opt/bbprserv/lib/bs4/
|
||||
|
||||
ENV PATH=$PATH:/opt/bbprserv/bin
|
||||
|
||||
RUN mkdir -p /var/lib/bbprserv
|
||||
|
||||
ENV DBFILE=/var/lib/bbprserv/prserv.sqlite3 \
|
||||
LOGFILE=/var/lib/bbprserv/prserv.log \
|
||||
LOGLEVEL=debug \
|
||||
HOST=0.0.0.0 \
|
||||
PORT=8585 \
|
||||
DBMODE=""
|
||||
|
||||
ENTRYPOINT [ "/bin/sh", "-c", \
|
||||
"bitbake-prserv \
|
||||
--file=$DBFILE \
|
||||
--log=$LOGFILE \
|
||||
--loglevel=$LOGLEVEL \
|
||||
--start \
|
||||
--host=$HOST \
|
||||
--port=$PORT \
|
||||
$DBMODE \
|
||||
&& tail -f $LOGFILE"]
|
||||
@@ -20,7 +20,7 @@ fun! NewBBAppendTemplate()
     set nopaste
 
     " New bbappend template
-    0 put ='FILESEXTRAPATHS:prepend := \"${THISDIR}/${PN}:\"'
+    0 put ='FILESEXTRAPATHS_prepend := \"${THISDIR}/${PN}:\"'
     2
 
     if paste == 1
@@ -51,9 +51,9 @@ syn region bbString matchgroup=bbQuote start=+'+ skip=+\\$+ end=+'+
 syn match bbExport        "^export" nextgroup=bbIdentifier skipwhite
 syn keyword bbExportFlag  export contained nextgroup=bbIdentifier skipwhite
 syn match bbIdentifier    "[a-zA-Z0-9\-_\.\/\+]\+" display contained
-syn match bbVarDeref      "${[a-zA-Z0-9\-_:\.\/\+]\+}" contained
+syn match bbVarDeref      "${[a-zA-Z0-9\-_\.\/\+]\+}" contained
 syn match bbVarEq         "\(:=\|+=\|=+\|\.=\|=\.\|?=\|??=\|=\)" contained nextgroup=bbVarValue
-syn match bbVarDef        "^\(export\s*\)\?\([a-zA-Z0-9\-_\.\/\+][${}a-zA-Z0-9\-_:\.\/\+]*\)\s*\(:=\|+=\|=+\|\.=\|=\.\|?=\|??=\|=\)\@=" contains=bbExportFlag,bbIdentifier,bbOverrideOperator,bbVarDeref nextgroup=bbVarEq
+syn match bbVarDef        "^\(export\s*\)\?\([a-zA-Z0-9\-_\.\/\+]\+\(_[${}a-zA-Z0-9\-_\.\/\+]\+\)\?\)\s*\(:=\|+=\|=+\|\.=\|=\.\|?=\|??=\|=\)\@=" contains=bbExportFlag,bbIdentifier,bbVarDeref nextgroup=bbVarEq
 syn match bbVarValue      ".*$" contained contains=bbString,bbVarDeref,bbVarPyValue
 syn region bbVarPyValue   start=+${@+ skip=+\\$+ end=+}+ contained contains=@python
@@ -77,15 +77,13 @@ syn keyword bbOEFunctions do_fetch do_unpack do_patch do_configure do_comp
 " Generic Functions
 syn match bbFunction      "\h[0-9A-Za-z_\-\.]*" display contained contains=bbOEFunctions
 
-syn keyword bbOverrideOperator  append prepend remove contained
-
 " BitBake shell metadata
 syn include @shell syntax/sh.vim
 if exists("b:current_syntax")
   unlet b:current_syntax
 endif
 syn keyword bbShFakeRootFlag  fakeroot contained
-syn match bbShFuncDef         "^\(fakeroot\s*\)\?\([\.0-9A-Za-z_:${}\-\.]\+\)\(python\)\@<!\(\s*()\s*\)\({\)\@=" contains=bbShFakeRootFlag,bbFunction,bbOverrideOperator,bbVarDeref,bbDelimiter nextgroup=bbShFuncRegion skipwhite
+syn match bbShFuncDef         "^\(fakeroot\s*\)\?\([\.0-9A-Za-z_${}\-\.]\+\)\(python\)\@<!\(\s*()\s*\)\({\)\@=" contains=bbShFakeRootFlag,bbFunction,bbVarDeref,bbDelimiter nextgroup=bbShFuncRegion skipwhite
 syn region bbShFuncRegion     matchgroup=bbDelimiter start="{\s*$" end="^}\s*$" contained contains=@shell
@@ -93,7 +91,7 @@ syn region shDeref start=+${@+ skip=+\\$+ excludenl end=+}+ contained co
 
 " BitBake python metadata
 syn keyword bbPyFlag          python contained
-syn match bbPyFuncDef         "^\(fakeroot\s*\)\?\(python\)\(\s\+[0-9A-Za-z_:${}\-\.]\+\)\?\(\s*()\s*\)\({\)\@=" contains=bbShFakeRootFlag,bbPyFlag,bbFunction,bbOverrideOperator,bbVarDeref,bbDelimiter nextgroup=bbPyFuncRegion skipwhite
+syn match bbPyFuncDef         "^\(fakeroot\s*\)\?\(python\)\(\s\+[0-9A-Za-z_${}\-\.]\+\)\?\(\s*()\s*\)\({\)\@=" contains=bbShFakeRootFlag,bbPyFlag,bbFunction,bbVarDeref,bbDelimiter nextgroup=bbPyFuncRegion skipwhite
 syn region bbPyFuncRegion     matchgroup=bbDelimiter start="{\s*$" end="^}\s*$" contained contains=@python
 
 " BitBake 'def'd python functions
@@ -124,6 +122,5 @@ hi def link bbStatement Statement
 hi def link bbStatementRest     Identifier
 hi def link bbOEFunctions       Special
 hi def link bbVarPyValue        PreProc
-hi def link bbOverrideOperator  Operator
 
 let b:current_syntax = "bb"
@@ -3,7 +3,7 @@
 
 # You can set these variables from the command line, and also
 # from the environment for the first two.
-SPHINXOPTS    ?= -W --keep-going -j auto
+SPHINXOPTS    ?=
 SPHINXBUILD   ?= sphinx-build
 SOURCEDIR     = .
 BUILDDIR      = _build
@@ -8,12 +8,12 @@ Manual Organization
 
 Folders exist for individual manuals as follows:
 
-* bitbake-user-manual --- The BitBake User Manual
+* bitbake-user-manual - The BitBake User Manual
 
 Each folder is self-contained regarding content and figures.
 
 If you want to find HTML versions of the BitBake manuals on the web,
-go to https://www.openembedded.org/wiki/Documentation.
+go to http://www.openembedded.org/wiki/Documentation.
 
 Sphinx
 ======
 
@@ -16,7 +16,7 @@ data, or simply return information about the execution environment.
|
||||
|
||||
This chapter describes BitBake's execution process from start to finish
|
||||
when you use it to create an image. The execution process is launched
|
||||
using the following command form::
|
||||
using the following command form: ::
|
||||
|
||||
$ bitbake target
|
||||
|
||||
@@ -32,7 +32,7 @@ the BitBake command and its options, see ":ref:`The BitBake Command
|
||||
your project's ``local.conf`` configuration file.
|
||||
|
||||
A common method to determine this value for your build host is to run
|
||||
the following::
|
||||
the following: ::
|
||||
|
||||
$ grep processor /proc/cpuinfo
|
||||
|
||||
@@ -40,7 +40,7 @@ the BitBake command and its options, see ":ref:`The BitBake Command
|
||||
the number of processors, which takes into account hyper-threading.
|
||||
Thus, a quad-core build host with hyper-threading most likely shows
|
||||
eight processors, which is the value you would then assign to
|
||||
:term:`BB_NUMBER_THREADS`.
|
||||
``BB_NUMBER_THREADS``.
|
||||
|
||||
A possibly simpler solution is that some Linux distributions (e.g.
|
||||
Debian and Ubuntu) provide the ``ncpus`` command.
|
||||
@@ -65,13 +65,13 @@ data itself is of various types:
|
||||
|
||||
The ``layer.conf`` files are used to construct key variables such as
|
||||
:term:`BBPATH` and :term:`BBFILES`.
|
||||
:term:`BBPATH` is used to search for configuration and class files under the
|
||||
``conf`` and ``classes`` directories, respectively. :term:`BBFILES` is used
|
||||
``BBPATH`` is used to search for configuration and class files under the
|
||||
``conf`` and ``classes`` directories, respectively. ``BBFILES`` is used
|
||||
to locate both recipe and recipe append files (``.bb`` and
|
||||
``.bbappend``). If there is no ``bblayers.conf`` file, it is assumed the
|
||||
user has set the :term:`BBPATH` and :term:`BBFILES` directly in the environment.
|
||||
user has set the ``BBPATH`` and ``BBFILES`` directly in the environment.
|
||||
|
||||
Next, the ``bitbake.conf`` file is located using the :term:`BBPATH` variable
|
||||
Next, the ``bitbake.conf`` file is located using the ``BBPATH`` variable
|
||||
that was just constructed. The ``bitbake.conf`` file may also include
|
||||
other configuration files using the ``include`` or ``require``
|
||||
directives.
|
||||
@@ -79,8 +79,8 @@ directives.
|
||||
Prior to parsing configuration files, BitBake looks at certain
|
||||
variables, including:
|
||||
|
||||
- :term:`BB_ENV_PASSTHROUGH`
|
||||
- :term:`BB_ENV_PASSTHROUGH_ADDITIONS`
|
||||
- :term:`BB_ENV_WHITELIST`
|
||||
- :term:`BB_ENV_EXTRAWHITE`
|
||||
- :term:`BB_PRESERVE_ENV`
|
||||
- :term:`BB_ORIGENV`
|
||||
- :term:`BITBAKE_UI`
|
||||
@@ -104,7 +104,7 @@ BitBake first searches the current working directory for an optional
|
||||
contain a :term:`BBLAYERS` variable that is a
|
||||
space-delimited list of 'layer' directories. Recall that if BitBake
|
||||
cannot find a ``bblayers.conf`` file, then it is assumed the user has
|
||||
set the :term:`BBPATH` and :term:`BBFILES` variables directly in the
|
||||
set the ``BBPATH`` and ``BBFILES`` variables directly in the
|
||||
environment.
|
||||
|
||||
For each directory (layer) in this list, a ``conf/layer.conf`` file is
|
||||
@@ -114,7 +114,7 @@ files automatically set up :term:`BBPATH` and other
|
||||
variables correctly for a given build directory.
|
||||
|
||||
BitBake then expects to find the ``conf/bitbake.conf`` file somewhere in
|
||||
the user-specified :term:`BBPATH`. That configuration file generally has
|
||||
the user-specified ``BBPATH``. That configuration file generally has
|
||||
include directives to pull in any other metadata such as files specific
|
||||
to the architecture, the machine, the local environment, and so forth.
|
||||
|
||||
@@ -135,11 +135,11 @@ The ``base.bbclass`` file is always included. Other classes that are
|
||||
specified in the configuration using the
|
||||
:term:`INHERIT` variable are also included. BitBake
|
||||
searches for class files in a ``classes`` subdirectory under the paths
|
||||
in :term:`BBPATH` in the same way as configuration files.
|
||||
in ``BBPATH`` in the same way as configuration files.
|
||||
|
||||
A good way to get an idea of the configuration files and the class files
|
||||
used in your execution environment is to run the following BitBake
|
||||
command::
|
||||
command: ::
|
||||
|
||||
$ bitbake -e > mybb.log
|
||||
|
||||
@@ -155,7 +155,7 @@ execution environment.
|
||||
pair of curly braces in a shell function, the closing curly brace
|
||||
must not be located at the start of the line without leading spaces.
|
||||
|
||||
Here is an example that causes BitBake to produce a parsing error::
|
||||
Here is an example that causes BitBake to produce a parsing error: ::
|
||||
|
||||
fakeroot create_shar() {
|
||||
cat << "EOF" > ${SDK_DEPLOY}/${TOOLCHAIN_OUTPUTNAME}.sh
|
||||
@@ -184,13 +184,13 @@ Locating and Parsing Recipes
|
||||
During the configuration phase, BitBake will have set
|
||||
:term:`BBFILES`. BitBake now uses it to construct a
|
||||
list of recipes to parse, along with any append files (``.bbappend``) to
|
||||
apply. :term:`BBFILES` is a space-separated list of available files and
|
||||
supports wildcards. An example would be::
|
||||
apply. ``BBFILES`` is a space-separated list of available files and
|
||||
supports wildcards. An example would be: ::
|
||||
|
||||
BBFILES = "/path/to/bbfiles/*.bb /path/to/appends/*.bbappend"
|
||||
|
||||
BitBake parses each
|
||||
recipe and append file located with :term:`BBFILES` and stores the values of
|
||||
recipe and append file located with ``BBFILES`` and stores the values of
|
||||
various variables into the datastore.
|
||||
|
||||
.. note::
|
||||
@@ -201,18 +201,18 @@ For each file, a fresh copy of the base configuration is made, then the
|
||||
recipe is parsed line by line. Any inherit statements cause BitBake to
|
||||
find and then parse class files (``.bbclass``) using
|
||||
:term:`BBPATH` as the search path. Finally, BitBake
|
||||
parses in order any append files found in :term:`BBFILES`.
|
||||
parses in order any append files found in ``BBFILES``.
|
||||
|
||||
One common convention is to use the recipe filename to define pieces of
|
||||
metadata. For example, in ``bitbake.conf`` the recipe name and version
|
||||
are used to set the variables :term:`PN` and
|
||||
:term:`PV`::
|
||||
:term:`PV`: ::
|
||||
|
||||
PN = "${@bb.parse.vars_from_file(d.getVar('FILE', False),d)[0] or 'defaultpkgname'}"
|
||||
PV = "${@bb.parse.vars_from_file(d.getVar('FILE', False),d)[1] or '1.0'}"
|
||||
PN = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE', False),d)[0] or 'defaultpkgname'}"
|
||||
PV = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE', False),d)[1] or '1.0'}"
|
||||
|
||||
In this example, a recipe called "something_1.2.3.bb" would set
|
||||
:term:`PN` to "something" and :term:`PV` to "1.2.3".
|
||||
``PN`` to "something" and ``PV`` to "1.2.3".
|
||||
|
||||
By the time parsing is complete for a recipe, BitBake has a list of
|
||||
tasks that the recipe defines and a set of data consisting of keys and
|
||||
@@ -228,7 +228,7 @@ and then reload it.
|
||||
Where possible, subsequent BitBake commands reuse this cache of recipe
|
||||
information. The validity of this cache is determined by first computing
|
||||
a checksum of the base configuration data (see
|
||||
:term:`BB_HASHCONFIG_IGNORE_VARS`) and
|
||||
:term:`BB_HASHCONFIG_WHITELIST`) and
|
||||
then checking if the checksum matches. If that checksum matches what is
|
||||
in the cache and the recipe and class files have not changed, BitBake is
|
||||
able to use the cache. BitBake then reloads the cached information about
|
||||
@@ -238,14 +238,13 @@ Recipe file collections exist to allow the user to have multiple
|
||||
repositories of ``.bb`` files that contain the same exact package. For
|
||||
example, one could easily use them to make one's own local copy of an
|
||||
upstream repository, but with custom modifications that one does not
|
||||
want upstream. Here is an example::
|
||||
want upstream. Here is an example: ::
|
||||
|
||||
BBFILES = "/stuff/openembedded/*/*.bb /stuff/openembedded.modified/*/*.bb"
|
||||
BBFILE_COLLECTIONS = "upstream local"
|
||||
BBFILE_PATTERN_upstream = "^/stuff/openembedded/"
|
||||
BBFILE_PATTERN_local = "^/stuff/openembedded.modified/"
|
||||
BBFILE_PRIORITY_upstream = "5"
|
||||
BBFILE_PRIORITY_local = "10"
|
||||
BBFILE_PRIORITY_upstream = "5" BBFILE_PRIORITY_local = "10"
|
||||
|
||||
.. note::
|
||||
|
||||
@@ -260,21 +259,21 @@ Providers
|
||||
|
||||
Assuming BitBake has been instructed to execute a target and that all
|
||||
the recipe files have been parsed, BitBake starts to figure out how to
|
||||
build the target. BitBake looks through the :term:`PROVIDES` list for each
|
||||
of the recipes. A :term:`PROVIDES` list is the list of names by which the
|
||||
recipe can be known. Each recipe's :term:`PROVIDES` list is created
|
||||
build the target. BitBake looks through the ``PROVIDES`` list for each
|
||||
of the recipes. A ``PROVIDES`` list is the list of names by which the
|
||||
recipe can be known. Each recipe's ``PROVIDES`` list is created
|
||||
implicitly through the recipe's :term:`PN` variable and
|
||||
explicitly through the recipe's :term:`PROVIDES`
|
||||
variable, which is optional.
|
||||
|
||||
When a recipe uses :term:`PROVIDES`, that recipe's functionality can be
|
||||
found under an alternative name or names other than the implicit :term:`PN`
|
||||
When a recipe uses ``PROVIDES``, that recipe's functionality can be
|
||||
found under an alternative name or names other than the implicit ``PN``
|
||||
name. As an example, suppose a recipe named ``keyboard_1.0.bb``
|
||||
contained the following::
|
||||
contained the following: ::
|
||||
|
||||
PROVIDES += "fullkeyboard"
|
||||
|
||||
The :term:`PROVIDES`
|
||||
The ``PROVIDES``
|
||||
list for this recipe becomes "keyboard", which is implicit, and
|
||||
"fullkeyboard", which is explicit. Consequently, the functionality found
|
||||
in ``keyboard_1.0.bb`` can be found under two different names.
|
||||
@@ -284,14 +283,14 @@ in ``keyboard_1.0.bb`` can be found under two different names.
|
||||
Preferences
|
||||
===========
|
||||
|
||||
The :term:`PROVIDES` list is only part of the solution for figuring out a
|
||||
The ``PROVIDES`` list is only part of the solution for figuring out a
|
||||
target's recipes. Because targets might have multiple providers, BitBake
|
||||
needs to prioritize providers by determining provider preferences.
|
||||
|
||||
A common example in which a target has multiple providers is
|
||||
"virtual/kernel", which is on the :term:`PROVIDES` list for each kernel
|
||||
"virtual/kernel", which is on the ``PROVIDES`` list for each kernel
|
||||
recipe. Each machine often selects the best kernel provider by using a
|
||||
line similar to the following in the machine configuration file::
|
||||
line similar to the following in the machine configuration file: ::
|
||||
|
||||
PREFERRED_PROVIDER_virtual/kernel = "linux-yocto"
|
||||
|
||||
@@ -309,10 +308,10 @@ specify a particular version. You can influence the order by using the
|
||||
:term:`DEFAULT_PREFERENCE` variable.
|
||||
|
||||
By default, files have a preference of "0". Setting
|
||||
:term:`DEFAULT_PREFERENCE` to "-1" makes the recipe unlikely to be used
|
||||
unless it is explicitly referenced. Setting :term:`DEFAULT_PREFERENCE` to
|
||||
"1" makes it likely the recipe is used. :term:`PREFERRED_VERSION` overrides
|
||||
any :term:`DEFAULT_PREFERENCE` setting. :term:`DEFAULT_PREFERENCE` is often used
|
||||
``DEFAULT_PREFERENCE`` to "-1" makes the recipe unlikely to be used
|
||||
unless it is explicitly referenced. Setting ``DEFAULT_PREFERENCE`` to
|
||||
"1" makes it likely the recipe is used. ``PREFERRED_VERSION`` overrides
|
||||
any ``DEFAULT_PREFERENCE`` setting. ``DEFAULT_PREFERENCE`` is often used
|
||||
to mark newer and more experimental recipe versions until they have
|
||||
undergone sufficient testing to be considered stable.
|
||||
|
||||
@@ -331,7 +330,7 @@ If the first recipe is named ``a_1.1.bb``, then the
|
||||
|
||||
Thus, if a recipe named ``a_1.2.bb`` exists, BitBake will choose 1.2 by
|
||||
default. However, if you define the following variable in a ``.conf``
|
||||
file that BitBake parses, you can change that preference::
|
||||
file that BitBake parses, you can change that preference: ::
|
||||
|
||||
PREFERRED_VERSION_a = "1.1"
|
||||
|
||||
@@ -394,7 +393,7 @@ ready to run, those tasks have all their dependencies met, and the
|
||||
thread threshold has not been exceeded.
|
||||
|
||||
It is worth noting that you can greatly speed up the build time by
|
||||
properly setting the :term:`BB_NUMBER_THREADS` variable.
|
||||
properly setting the ``BB_NUMBER_THREADS`` variable.
|
||||
|
||||
As each task completes, a timestamp is written to the directory
|
||||
specified by the :term:`STAMP` variable. On subsequent
|
||||
@@ -435,7 +434,7 @@ BitBake writes a shell script to
|
||||
executes the script. The generated shell script contains all the
|
||||
exported variables, and the shell functions with all variables expanded.
|
||||
Output from the shell script goes to the file
|
||||
``${``\ :term:`T`\ ``}/log.do_taskname.pid``. Looking at the expanded shell functions in
|
||||
``${T}/log.do_taskname.pid``. Looking at the expanded shell functions in
|
||||
the run file and the output in the log files is a useful debugging
|
||||
technique.
|
||||
|
||||
@@ -477,7 +476,7 @@ changes because it should not affect the output for target packages. The
|
||||
simplistic approach for excluding the working directory is to set it to
|
||||
some fixed value and create the checksum for the "run" script. BitBake
|
||||
goes one step better and uses the
|
||||
:term:`BB_BASEHASH_IGNORE_VARS` variable
|
||||
:term:`BB_HASHBASE_WHITELIST` variable
|
||||
to define a list of variables that should never be included when
|
||||
generating the signatures.
|
||||
|
||||
@@ -498,7 +497,7 @@ to the task.
|
||||
|
||||
Like the working directory case, situations exist where dependencies
|
||||
should be ignored. For these cases, you can instruct the build process
|
||||
to ignore a dependency by using a line like the following::
|
||||
to ignore a dependency by using a line like the following: ::
|
||||
|
||||
PACKAGE_ARCHS[vardepsexclude] = "MACHINE"
|
||||
|
||||
@@ -508,7 +507,7 @@ even if it does reference it.
|
||||
|
||||
Equally, there are cases where we need to add dependencies BitBake is
|
||||
not able to find. You can accomplish this by using a line like the
|
||||
following::
|
||||
following: ::
|
||||
|
||||
PACKAGE_ARCHS[vardeps] = "MACHINE"
|
||||
|
||||
@@ -523,7 +522,7 @@ it cannot figure out dependencies.
|
||||
Thus far, this section has limited discussion to the direct inputs into
|
||||
a task. Information based on direct inputs is referred to as the
|
||||
"basehash" in the code. However, there is still the question of a task's
|
||||
indirect inputs --- the things that were already built and present in the
|
||||
indirect inputs - the things that were already built and present in the
|
||||
build directory. The checksum (or signature) for a particular task needs
|
||||
to add the hashes of all the tasks on which the particular task depends.
|
||||
Choosing which dependencies to add is a policy decision. However, the
|
||||
@@ -534,11 +533,11 @@ At the code level, there are a variety of ways both the basehash and the
|
||||
dependent task hashes can be influenced. Within the BitBake
|
||||
configuration file, we can give BitBake some extra information to help
|
||||
it construct the basehash. The following statement effectively results
|
||||
in a list of global variable dependency excludes --- variables never
|
||||
in a list of global variable dependency excludes - variables never
|
||||
included in any checksum. This example uses variables from OpenEmbedded
|
||||
to help illustrate the concept::
|
||||
to help illustrate the concept: ::
|
||||
|
||||
BB_BASEHASH_IGNORE_VARS ?= "TMPDIR FILE PATH PWD BB_TASKHASH BBPATH DL_DIR \
|
||||
BB_HASHBASE_WHITELIST ?= "TMPDIR FILE PATH PWD BB_TASKHASH BBPATH DL_DIR \
|
||||
SSTATE_DIR THISDIR FILESEXTRAPATHS FILE_DIRNAME HOME LOGNAME SHELL \
|
||||
USER FILESPATH STAGING_DIR_HOST STAGING_DIR_TARGET COREBASE PRSERV_HOST \
|
||||
PRSERV_DUMPDIR PRSERV_DUMPFILE PRSERV_LOCKDOWN PARALLEL_MAKE \
|
||||
@@ -557,11 +556,11 @@ OpenEmbedded-Core uses: "OEBasic" and "OEBasicHash". By default, there
 is a dummy "noop" signature handler enabled in BitBake. This means that
 behavior is unchanged from previous versions. ``OE-Core`` uses the
 "OEBasicHash" signature handler by default through this setting in the
-``bitbake.conf`` file::
+``bitbake.conf`` file: ::

     BB_SIGNATURE_HANDLER ?= "OEBasicHash"

-The "OEBasicHash" :term:`BB_SIGNATURE_HANDLER` is the same as the "OEBasic"
+The "OEBasicHash" ``BB_SIGNATURE_HANDLER`` is the same as the "OEBasic"
 version but adds the task hash to the stamp files. This results in any
 metadata change that changes the task hash, automatically causing the
 task to be run again. This removes the need to bump

@@ -578,7 +577,10 @@ the build. This information includes:

 - ``BB_BASEHASH_``\ *filename:taskname*: The base hashes for each
   dependent task.

-- :term:`BB_TASKHASH`: The hash of the currently running task.
 - ``BBHASHDEPS_``\ *filename:taskname*: The task dependencies for
   each task.

+- ``BB_TASKHASH``: The hash of the currently running task.
+
 It is worth noting that BitBake's "-S" option lets you debug BitBake's
 processing of signatures. The options passed to -S allow different
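As a quick illustration (a sketch assuming an initialized build environment; the target and stamp paths are illustrative), you can dump signature data without executing anything and then compare two signature files::

    $ bitbake -S none quilt-native
    $ bitbake-diffsigs tmp/stamps/*/quilt-native/*.do_compile.sigdata.*
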
@@ -645,6 +647,13 @@ compiled binary. To handle this, BitBake calls the
 each successful setscene task to know whether or not it needs to obtain
 the dependencies of that task.

+Finally, after all the setscene tasks have executed, BitBake calls the
+function listed in
+:term:`BB_SETSCENE_VERIFY_FUNCTION2`
+with the list of tasks BitBake thinks have been "covered". The metadata
+can then ensure that this list is correct and can inform BitBake that it
+wants specific tasks to be run regardless of the setscene result.
+
 You can find more information on setscene metadata in the
 :ref:`bitbake-user-manual/bitbake-user-manual-metadata:task checksums and setscene`
 section.

@@ -27,7 +27,7 @@ and unpacking the files is often optionally followed by patching.
 Patching, however, is not covered by this module.

 The code to execute the first part of this process, a fetch, looks
-something like the following::
+something like the following: ::

     src_uri = (d.getVar('SRC_URI') or "").split()
     fetcher = bb.fetch2.Fetch(src_uri, d)

@@ -37,7 +37,7 @@ This code sets up an instance of the fetch class. The instance uses a
 space-separated list of URLs from the :term:`SRC_URI`
 variable and then calls the ``download`` method to download the files.

-The instantiation of the fetch class is usually followed by::
+The instantiation of the fetch class is usually followed by: ::

     rootdir = l.getVar('WORKDIR')
     fetcher.unpack(rootdir)

@@ -51,7 +51,7 @@ This code unpacks the downloaded files to the directory specified by ``WORKDIR``.
 examine the OpenEmbedded class file ``base.bbclass``
 .

-The :term:`SRC_URI` and ``WORKDIR`` variables are not hardcoded into the
+The ``SRC_URI`` and ``WORKDIR`` variables are not hardcoded into the
 fetcher, since those fetcher methods can be (and are) called with
 different variable names. In OpenEmbedded for example, the shared state
 (sstate) code uses the fetch module to fetch the sstate files.
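Combining the two snippets above into a single task, a minimal sketch (assuming it runs as a BitBake Python task with the usual datastore ``d``; the task name is hypothetical) with basic error handling might look like::

    python do_fetch_and_unpack () {
        import bb.fetch2
        src_uri = (d.getVar('SRC_URI') or "").split()
        try:
            fetcher = bb.fetch2.Fetch(src_uri, d)
            fetcher.download()                    # consults pre-mirrors, then mirrors
            fetcher.unpack(d.getVar('WORKDIR'))   # unpack into the work directory
        except bb.fetch2.BBFetchException as e:
            bb.fatal("Fetch failed: %s" % str(e))
    }
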
@@ -64,38 +64,38 @@ URLs by looking for source files in a specific search order:
   :term:`PREMIRRORS` variable.

 - *Source URI:* If pre-mirrors fail, BitBake uses the original URL (e.g.
-  from :term:`SRC_URI`).
+  from ``SRC_URI``).

 - *Mirror Sites:* If fetch failures occur, BitBake next uses mirror
   locations as defined by the :term:`MIRRORS` variable.

 For each URL passed to the fetcher, the fetcher calls the submodule that
 handles that particular URL type. This behavior can be the source of
-some confusion when you are providing URLs for the :term:`SRC_URI` variable.
-Consider the following two URLs::
+some confusion when you are providing URLs for the ``SRC_URI`` variable.
+Consider the following two URLs: ::

-    https://git.yoctoproject.org/git/poky;protocol=git
+    http://git.yoctoproject.org/git/poky;protocol=git
     git://git.yoctoproject.org/git/poky;protocol=http

 In the former case, the URL is passed to the ``wget`` fetcher, which does not
 understand "git". Therefore, the latter case is the correct form since the Git
 fetcher does know how to use HTTP as a transport.

-Here are some examples that show commonly used mirror definitions::
+Here are some examples that show commonly used mirror definitions: ::

     PREMIRRORS ?= "\
-        bzr://.*/.\* http://somemirror.org/sources/ \
-        cvs://.*/.\* http://somemirror.org/sources/ \
-        git://.*/.\* http://somemirror.org/sources/ \
-        hg://.*/.\* http://somemirror.org/sources/ \
-        osc://.*/.\* http://somemirror.org/sources/ \
-        p4://.*/.\* http://somemirror.org/sources/ \
-        svn://.*/.\* http://somemirror.org/sources/"
+        bzr://.*/.\* http://somemirror.org/sources/ \\n \
+        cvs://.*/.\* http://somemirror.org/sources/ \\n \
+        git://.*/.\* http://somemirror.org/sources/ \\n \
+        hg://.*/.\* http://somemirror.org/sources/ \\n \
+        osc://.*/.\* http://somemirror.org/sources/ \\n \
+        p4://.*/.\* http://somemirror.org/sources/ \\n \
+        svn://.*/.\* http://somemirror.org/sources/ \\n"

     MIRRORS =+ "\
-        ftp://.*/.\* http://somemirror.org/sources/ \
-        http://.*/.\* http://somemirror.org/sources/ \
-        https://.*/.\* http://somemirror.org/sources/"
+        ftp://.*/.\* http://somemirror.org/sources/ \\n \
+        http://.*/.\* http://somemirror.org/sources/ \\n \
+        https://.*/.\* http://somemirror.org/sources/ \\n"

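As an additional sketch (the mirror URL is hypothetical, and the trailing ``\n`` separator follows the older style shown above), a single pre-mirror entry that redirects all Git fetches to a local file mirror could be written as::

    PREMIRRORS ?= "git://.*/.\* file:///srv/local-source-mirror/ \\n"
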
 It is useful to note that BitBake
 supports cross-URLs. It is possible to mirror a Git repository on an
@@ -110,26 +110,26 @@ which is specified by the :term:`DL_DIR` variable.
 File integrity is of key importance for reproducing builds. For
 non-local archive downloads, the fetcher code can verify SHA-256 and MD5
 checksums to ensure the archives have been downloaded correctly. You can
-specify these checksums by using the :term:`SRC_URI` variable with the
-appropriate varflags as follows::
+specify these checksums by using the ``SRC_URI`` variable with the
+appropriate varflags as follows: ::

     SRC_URI[md5sum] = "value"
     SRC_URI[sha256sum] = "value"

 You can also specify the checksums as
-parameters on the :term:`SRC_URI` as shown below::
+parameters on the ``SRC_URI`` as shown below: ::

     SRC_URI = "http://example.com/foobar.tar.bz2;md5sum=4a8e0f237e961fd7785d19d07fdb994d"

 If multiple URIs exist, you can specify the checksums either directly as
 in the previous example, or you can name the URLs. The following syntax
-shows how you name the URIs::
+shows how you name the URIs: ::

     SRC_URI = "http://example.com/foobar.tar.bz2;name=foo"
     SRC_URI[foo.md5sum] = 4a8e0f237e961fd7785d19d07fdb994d

 After a file has been downloaded and
-has had its checksum checked, a ".done" stamp is placed in :term:`DL_DIR`.
+has had its checksum checked, a ".done" stamp is placed in ``DL_DIR``.
 BitBake uses this stamp during subsequent builds to avoid downloading or
 comparing a checksum for the file again.
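Extending that naming scheme to several files, a sketch with placeholder checksum values (fill in the real digests for your archives)::

    SRC_URI = "http://example.com/foo.tar.bz2;name=foo \
               http://example.com/bar.tar.bz2;name=bar"
    SRC_URI[foo.sha256sum] = "<sha256 of foo.tar.bz2>"
    SRC_URI[bar.sha256sum] = "<sha256 of bar.tar.bz2>"
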

@@ -144,10 +144,6 @@ download without a checksum triggers an error message. The
 make any attempted network access a fatal error, which is useful for
 checking that mirrors are complete as well as other things.

-If :term:`BB_CHECK_SSL_CERTS` is set to ``0`` then SSL certificate checking will
-be disabled. This variable defaults to ``1`` so SSL certificates are normally
-checked.
-
 .. _bb-the-unpack:

 The Unpack
@@ -167,8 +163,8 @@ govern the behavior of the unpack stage:
 - *dos:* Applies to ``.zip`` and ``.jar`` files and specifies whether
   to use DOS line ending conversion on text files.

-- *striplevel:* Strip the specified number of leading components (levels)
-  from file names on extraction.
+- *basepath:* Instructs the unpack stage to strip the specified
+  directories from the source path when unpacking.

 - *subdir:* Unpacks the specific URL to the specified subdirectory
   within the root directory.
@@ -208,7 +204,7 @@ time the ``download()`` method is called.
 If you specify a directory, the entire directory is unpacked.

 Here are a couple of example URLs, the first relative and the second
-absolute::
+absolute: ::

     SRC_URI = "file://relativefile.patch"
     SRC_URI = "file:///Users/ich/very_important_software"

@@ -229,12 +225,7 @@ downloaded file is useful for avoiding collisions in
 :term:`DL_DIR` when dealing with multiple files that
 have the same name.

-If a username and password are specified in the ``SRC_URI``, a Basic
-Authorization header will be added to each request, including across redirects.
-To instead limit the Authorization header to the first request, add
-"redirectauth=0" to the list of parameters.
-
-Some example URLs are as follows::
+Some example URLs are as follows: ::

     SRC_URI = "http://oe.handhelds.org/not_there.aac"
     SRC_URI = "ftp://oe.handhelds.org/not_there_as_well.aac"

@@ -244,13 +235,15 @@ Some example URLs are as follows::

    Because URL parameters are delimited by semi-colons, this can
    introduce ambiguity when parsing URLs that also contain semi-colons,
-   for example::
+   for example:
+   ::

        SRC_URI = "http://abc123.org/git/?p=gcc/gcc.git;a=snapshot;h=a5dd47"

    Such URLs should be modified by replacing semi-colons with '&'
-   characters::
+   characters:
+   ::

        SRC_URI = "http://abc123.org/git/?p=gcc/gcc.git&a=snapshot&h=a5dd47"
@@ -258,7 +251,8 @@ Some example URLs are as follows::
    In most cases this should work. Treating semi-colons and '&' in
    queries identically is recommended by the World Wide Web Consortium
    (W3C). Note that due to the nature of the URL, you may have to
-   specify the name of the downloaded file as well::
+   specify the name of the downloaded file as well:
+   ::

        SRC_URI = "http://abc123.org/git/?p=gcc/gcc.git&a=snapshot&h=a5dd47;downloadfilename=myfile.bz2"

@@ -327,7 +321,7 @@ The supported parameters are as follows:

 - *"port":* The port to which the CVS server connects.

-Some example URLs are as follows::
+Some example URLs are as follows: ::

     SRC_URI = "cvs://CVSROOT;module=mymodule;tag=some-version;method=ext"
     SRC_URI = "cvs://CVSROOT;module=mymodule;date=20060126;localdir=usethat"

@@ -369,7 +363,7 @@ The supported parameters are as follows:
   username is different than the username used in the main URL, which
   is passed to the subversion command.

-Following are three examples using svn::
+Following are three examples using svn: ::

     SRC_URI = "svn://myrepos/proj1;module=vip;protocol=http;rev=667"
     SRC_URI = "svn://myrepos/proj1;module=opie;protocol=svn+ssh"
@@ -396,19 +390,6 @@ This fetcher supports the following parameters:
   protocol is "file". You can also use "http", "https", "ssh" and
   "rsync".

-  .. note::
-
-     When ``protocol`` is "ssh", the URL expected in :term:`SRC_URI` differs
-     from the one that is typically passed to the ``git clone`` command and provided
-     by the Git server to fetch from. For example, the URL returned by a GitLab
-     server for ``mesa`` when cloning over SSH is
-     ``git@gitlab.freedesktop.org:mesa/mesa.git``, however the expected URL in
-     :term:`SRC_URI` is the following::
-
-        SRC_URI = "git://git@gitlab.freedesktop.org/mesa/mesa.git;branch=main;protocol=ssh;..."
-
-     Note the ``:`` character changed for a ``/`` before the path to the project.
-
 - *"nocheckout":* Tells the fetcher to not checkout source code when
   unpacking when set to "1". Set this option for the URL where there is
   a custom routine to checkout code. The default is "0".

@@ -432,9 +413,9 @@ This fetcher supports the following parameters:
   raw Git metadata is provided. This parameter implies the "nocheckout"
   parameter as well.

-- *"branch":* The branch(es) of the Git tree to clone. Unless
-  "nobranch" is set to "1", this is a mandatory parameter. The number of
-  branch parameters must match the number of name parameters.
+- *"branch":* The branch(es) of the Git tree to clone. If unset, this
+  is assumed to be "master". The number of branch parameters must match
+  the number of name parameters.

 - *"rev":* The revision to use for the checkout. The default is
   "master".
@@ -455,27 +436,10 @@ This fetcher supports the following parameters:
   parameter implies no branch and only works when the transfer protocol
   is ``file://``.

-Here are some example URLs::
-
-    SRC_URI = "git://github.com/fronteed/icheck.git;protocol=https;branch=${PV};tag=${PV}"
-    SRC_URI = "git://github.com/asciidoc/asciidoc-py;protocol=https;branch=main"
-    SRC_URI = "git://git@gitlab.freedesktop.org/mesa/mesa.git;branch=main;protocol=ssh;..."
-
-.. note::
-
-   When using ``git`` as the fetcher of the main source code of your software,
-   ``S`` should be set accordingly::
-
-      S = "${WORKDIR}/git"
-
-.. note::
-
-   Specifying passwords directly in ``git://`` urls is not supported.
-   There are several reasons: :term:`SRC_URI` is often written out to logs and
-   other places, and that could easily leak passwords; it is also all too
-   easy to share metadata without removing passwords. SSH keys, ``~/.netrc``
-   and ``~/.ssh/config`` files can be used as alternatives.
+Here are some example URLs: ::

+    SRC_URI = "git://git.oe.handhelds.org/git/vip.git;tag=version-1"
+    SRC_URI = "git://git.oe.handhelds.org/git/vip.git;protocol=http"

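As a fuller sketch (the repository URL is hypothetical), a recipe whose main source comes from Git typically pairs the URL with a revision and points ``S`` at the checkout::

    SRC_URI = "git://git.example.com/myproject.git;protocol=http"
    SRCREV = "${AUTOREV}"
    S = "${WORKDIR}/git"
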
.. _gitsm-fetcher:

@@ -511,7 +475,7 @@ repository.

 To use this fetcher, make sure your recipe has proper
 :term:`SRC_URI`, :term:`SRCREV`, and
-:term:`PV` settings. Here is an example::
+:term:`PV` settings. Here is an example: ::

     SRC_URI = "ccrc://cc.example.org/ccrc;vob=/example_vob;module=/example_module"
     SRCREV = "EXAMPLE_CLEARCASE_TAG"

@@ -520,7 +484,7 @@ To use this fetcher, make sure your recipe has proper
 The fetcher uses the ``rcleartool`` or
 ``cleartool`` remote client, depending on which one is available.

-Following are options for the :term:`SRC_URI` statement:
+Following are options for the ``SRC_URI`` statement:

 - *vob*: The name, which must include the prepending "/" character,
   of the ClearCase VOB. This option is required.

@@ -533,7 +497,7 @@ Following are options for the :term:`SRC_URI` statement:
 The module and vob options are combined to create the load rule in the
 view config spec. As an example, consider the vob and module values from
 the SRC_URI statement at the start of this section. Combining those values
-results in the following::
+results in the following: ::

     load /example_vob/example_module

@@ -582,10 +546,10 @@ password if you do not wish to keep those values in a recipe itself. If
 you choose not to use ``P4CONFIG``, or to explicitly set variables that
 ``P4CONFIG`` can contain, you can specify the ``P4PORT`` value, which is
 the server's URL and port number, and you can specify a username and
-password directly in your recipe within :term:`SRC_URI`.
+password directly in your recipe within ``SRC_URI``.

 Here is an example that relies on ``P4CONFIG`` to specify the server URL
-and port, username, and password, and fetches the Head Revision::
+and port, username, and password, and fetches the Head Revision: ::

     SRC_URI = "p4://example-depot/main/source/..."
     SRCREV = "${AUTOREV}"

@@ -593,7 +557,7 @@ and port, username, and password, and fetches the Head Revision::
     S = "${WORKDIR}/p4"

 Here is an example that specifies the server URL and port, username, and
-password, and fetches a Revision based on a Label::
+password, and fetches a Revision based on a Label: ::

     P4PORT = "tcp:p4server.example.net:1666"
     SRC_URI = "p4://user:passwd@example-depot/main/source/..."

@@ -619,7 +583,7 @@ paths locally is desirable, the fetcher supports two parameters:
   paths locally for the specified location, even in combination with the
   ``module`` parameter.

-Here is an example use of the ``module`` parameter::
+Here is an example use of the ``module`` parameter: ::

     SRC_URI = "p4://user:passwd@example-depot/main;module=source/..."

@@ -627,7 +591,7 @@ In this case, the content of the top-level directory ``source/`` will be fetched
 to ``${P4DIR}``, including the directory itself. The top-level directory will
 be accessible at ``${P4DIR}/source/``.

-Here is an example use of the ``remotepath`` parameter::
+Here is an example use of the ``remotepath`` parameter: ::

     SRC_URI = "p4://user:passwd@example-depot/main;module=source/...;remotepath=keep"

@@ -655,131 +619,11 @@ This fetcher supports the following parameters:

 - *"manifest":* Name of the manifest file (default: ``default.xml``).

-Here are some example URLs::
+Here are some example URLs: ::

     SRC_URI = "repo://REPOROOT;protocol=git;branch=some_branch;manifest=my_manifest.xml"
     SRC_URI = "repo://REPOROOT;protocol=file;branch=some_branch;manifest=my_manifest.xml"

-.. _az-fetcher:
-
-Az Fetcher (``az://``)
---------------------------
-
-This submodule fetches data from an
-`Azure Storage account <https://docs.microsoft.com/en-us/azure/storage/>`__ .
-It inherits its functionality from the HTTP wget fetcher, but modifies its
-behavior to accommodate the usage of a
-`Shared Access Signature (SAS) <https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview>`__
-for non-public data.
-
-Such functionality is set by the variable:
-
-- :term:`AZ_SAS`: The Azure Storage Shared Access Signature provides secure
-  delegate access to resources. If this variable is set, the Az Fetcher will
-  use it when fetching artifacts from the cloud.
-
-You can specify the AZ_SAS variable as shown below::
-
-   AZ_SAS = "se=2021-01-01&sp=r&sv=2018-11-09&sr=c&skoid=<skoid>&sig=<signature>"
-
-Here is an example URL::
-
-   SRC_URI = "az://<azure-storage-account>.blob.core.windows.net/<foo_container>/<bar_file>"
-
-It can also be used when setting mirrors definitions using the :term:`PREMIRRORS` variable.
-
-.. _crate-fetcher:
-
-Crate Fetcher (``crate://``)
-----------------------------
-
-This submodule fetches code for
-`Rust language "crates" <https://doc.rust-lang.org/reference/glossary.html?highlight=crate#crate>`__
-corresponding to Rust libraries and programs to compile. Such crates are typically shared
-on https://crates.io/ but this fetcher supports other crate registries too.
-
-The format for the :term:`SRC_URI` setting must be::
-
-   SRC_URI = "crate://REGISTRY/NAME/VERSION"
-
-Here is an example URL::
-
-   SRC_URI = "crate://crates.io/glob/0.2.11"
-
-.. _npm-fetcher:
-
-NPM Fetcher (``npm://``)
-------------------------
-
-This submodule fetches source code from an
-`NPM <https://en.wikipedia.org/wiki/Npm_(software)>`__
-Javascript package registry.
-
-The format for the :term:`SRC_URI` setting must be::
-
-   SRC_URI = "npm://some.registry.url;ParameterA=xxx;ParameterB=xxx;..."
-
-This fetcher supports the following parameters:
-
-- *"package":* The NPM package name. This is a mandatory parameter.
-
-- *"version":* The NPM package version. This is a mandatory parameter.
-
-- *"downloadfilename":* Specifies the filename used when storing the downloaded file.
-
-- *"destsuffix":* Specifies the directory to use to unpack the package (default: ``npm``).
-
-Note that the NPM fetcher only fetches the package source itself. The dependencies
-can be fetched through the `npmsw-fetcher`_.
-
-Here is an example URL with both fetchers::
-
-   SRC_URI = " \
-       npm://registry.npmjs.org/;package=cute-files;version=${PV} \
-       npmsw://${THISDIR}/${BPN}/npm-shrinkwrap.json \
-   "
-
-See :yocto_docs:`Creating Node Package Manager (NPM) Packages
-</dev-manual/common-tasks.html#creating-node-package-manager-npm-packages>`
-in the Yocto Project manual for details about using
-:yocto_docs:`devtool <https://docs.yoctoproject.org/ref-manual/devtool-reference.html>`
-to automatically create a recipe from an NPM URL.
-
-.. _npmsw-fetcher:
-
-NPM shrinkwrap Fetcher (``npmsw://``)
--------------------------------------
-
-This submodule fetches source code from an
-`NPM shrinkwrap <https://docs.npmjs.com/cli/v8/commands/npm-shrinkwrap>`__
-description file, which lists the dependencies
-of an NPM package while locking their versions.
-
-The format for the :term:`SRC_URI` setting must be::
-
-   SRC_URI = "npmsw://some.registry.url;ParameterA=xxx;ParameterB=xxx;..."
-
-This fetcher supports the following parameters:
-
-- *"dev":* Set this parameter to ``1`` to install "devDependencies".
-
-- *"destsuffix":* Specifies the directory to use to unpack the dependencies
-  (``${S}`` by default).
-
-Note that the shrinkwrap file can also be provided by the recipe for
-the package which has such dependencies, for example::
-
-   SRC_URI = " \
-       npm://registry.npmjs.org/;package=cute-files;version=${PV} \
-       npmsw://${THISDIR}/${BPN}/npm-shrinkwrap.json \
-   "
-
-Such a file can automatically be generated using
-:yocto_docs:`devtool <https://docs.yoctoproject.org/ref-manual/devtool-reference.html>`
-as described in the :yocto_docs:`Creating Node Package Manager (NPM) Packages
-</dev-manual/common-tasks.html#creating-node-package-manager-npm-packages>`
-section of the Yocto Project.
-
 Other Fetchers
 --------------

@@ -789,6 +633,8 @@ Fetch submodules also exist for the following:

 - Mercurial (``hg://``)

+- npm (``npm://``)
+
 - OSC (``osc://``)

 - Secure FTP (``sftp://``)

@@ -803,4 +649,4 @@ submodules. However, you might find the code helpful and readable.
 Auto Revisions
 ==============

-We need to document ``AUTOREV`` and :term:`SRCREV_FORMAT` here.
+We need to document ``AUTOREV`` and ``SRCREV_FORMAT`` here.

@@ -20,7 +20,7 @@ Obtaining BitBake

 See the :ref:`bitbake-user-manual/bitbake-user-manual-hello:obtaining bitbake` section for
 information on how to obtain BitBake. Once you have the source code on
-your machine, the BitBake directory appears as follows::
+your machine, the BitBake directory appears as follows: ::

     $ ls -al
     total 100

@@ -49,7 +49,7 @@ Setting Up the BitBake Environment

 First, you need to be sure that you can run BitBake. Set your working
 directory to where your local BitBake files are and run the following
-command::
+command: ::

     $ ./bin/bitbake --version
     BitBake Build Tool Core version 1.23.0, bitbake version 1.23.0

@@ -61,14 +61,14 @@ The recommended method to run BitBake is from a directory of your
 choice. To be able to run BitBake from any directory, you need to add
 the executable binary to your shell's environment
 ``PATH`` variable. First, look at your current ``PATH`` variable by
-entering the following::
+entering the following: ::

     $ echo $PATH

 Next, add the directory location
 for the BitBake binary to the ``PATH``. Here is an example that adds the
 ``/home/scott-lenovo/bitbake/bin`` directory to the front of the
-``PATH`` variable::
+``PATH`` variable: ::

     $ export PATH=/home/scott-lenovo/bitbake/bin:$PATH

@@ -99,7 +99,7 @@ discussion mailing list about the BitBake build tool.

 This example was inspired by and drew heavily from
 `Mailing List post - The BitBake equivalent of "Hello, World!"
-<https://www.mail-archive.com/yocto@yoctoproject.org/msg09379.html>`_.
+<http://www.mail-archive.com/yocto@yoctoproject.org/msg09379.html>`_.

 As stated earlier, the goal of this example is to eventually compile
 "Hello World". However, it is unknown what BitBake needs and what you

@@ -116,7 +116,7 @@ Following is the complete "Hello World" example.

 #. **Create a Project Directory:** First, set up a directory for the
    "Hello World" project. Here is how you can do so in your home
-   directory::
+   directory: ::

       $ mkdir ~/hello
      $ cd ~/hello

@@ -127,7 +127,7 @@ Following is the complete "Hello World" example.
    directory is a good way to isolate your project.

 #. **Run BitBake:** At this point, you have nothing but a project
-   directory. Run the ``bitbake`` command and see what it does::
+   directory. Run the ``bitbake`` command and see what it does: ::

       $ bitbake
       The BBPATH variable is not set and bitbake did not
@@ -145,23 +145,23 @@ Following is the complete "Hello World" example.

    The majority of this output is specific to environment variables that
    are not directly relevant to BitBake. However, the very first
-   message regarding the :term:`BBPATH` variable and the
+   message regarding the ``BBPATH`` variable and the
    ``conf/bblayers.conf`` file is relevant.

    When you run BitBake, it begins looking for metadata files. The
    :term:`BBPATH` variable is what tells BitBake where
-   to look for those files. :term:`BBPATH` is not set and you need to set
-   it. Without :term:`BBPATH`, BitBake cannot find any configuration files
+   to look for those files. ``BBPATH`` is not set and you need to set
+   it. Without ``BBPATH``, BitBake cannot find any configuration files
    (``.conf``) or recipe files (``.bb``) at all. BitBake also cannot
    find the ``bitbake.conf`` file.

-#. **Setting BBPATH:** For this example, you can set :term:`BBPATH` in
+#. **Setting BBPATH:** For this example, you can set ``BBPATH`` in
    the same manner that you set ``PATH`` earlier in the appendix. You
    should realize, though, that it is much more flexible to set the
-   :term:`BBPATH` variable up in a configuration file for each project.
+   ``BBPATH`` variable up in a configuration file for each project.

    From your shell, enter the following commands to set and export the
-   :term:`BBPATH` variable::
+   ``BBPATH`` variable: ::

       $ BBPATH="projectdirectory"
       $ export BBPATH

@@ -175,8 +175,8 @@ Following is the complete "Hello World" example.
    ("~") character as BitBake does not expand that character as the
    shell would.

-#. **Run BitBake:** Now that you have :term:`BBPATH` defined, run the
-   ``bitbake`` command again::
+#. **Run BitBake:** Now that you have ``BBPATH`` defined, run the
+   ``bitbake`` command again: ::

       $ bitbake
       ERROR: Traceback (most recent call last):
@@ -205,18 +205,18 @@ Following is the complete "Hello World" example.
    recipe files. For this example, you need to create the file in your
    project directory and define some key BitBake variables. For more
    information on the ``bitbake.conf`` file, see
-   https://git.openembedded.org/bitbake/tree/conf/bitbake.conf.
+   http://git.openembedded.org/bitbake/tree/conf/bitbake.conf.

    Use the following commands to create the ``conf`` directory in the
-   project directory::
+   project directory: ::

       $ mkdir conf

    From within the ``conf`` directory,
    use some editor to create the ``bitbake.conf`` so that it contains
-   the following::
+   the following: ::

-      PN = "${@bb.parse.vars_from_file(d.getVar('FILE', False),d)[0] or 'defaultpkgname'}"
+      PN = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE', False),d)[0] or 'defaultpkgname'}"

       TMPDIR = "${TOPDIR}/tmp"
       CACHE = "${TMPDIR}/cache"

@@ -251,7 +251,7 @@ Following is the complete "Hello World" example.
    glossary.

 #. **Run BitBake:** After making sure that the ``conf/bitbake.conf`` file
-   exists, you can run the ``bitbake`` command again::
+   exists, you can run the ``bitbake`` command again: ::

       $ bitbake
       ERROR: Traceback (most recent call last):

@@ -278,7 +278,7 @@ Following is the complete "Hello World" example.
    in the ``classes`` directory of the project (i.e ``hello/classes``
    in this example).

-   Create the ``classes`` directory as follows::
+   Create the ``classes`` directory as follows: ::

      $ cd $HOME/hello
      $ mkdir classes
@@ -291,7 +291,7 @@ Following is the complete "Hello World" example.
    environments BitBake is supporting.

 #. **Run BitBake:** After making sure that the ``classes/base.bbclass``
-   file exists, you can run the ``bitbake`` command again::
+   file exists, you can run the ``bitbake`` command again: ::

       $ bitbake
       Nothing to do. Use 'bitbake world' to build everything, or run 'bitbake --help' for usage information.

@@ -314,7 +314,7 @@ Following is the complete "Hello World" example.
    Minimally, you need a recipe file and a layer configuration file in
    your layer. The configuration file needs to be in the ``conf``
    directory inside the layer. Use these commands to set up the layer
-   and the ``conf`` directory::
+   and the ``conf`` directory: ::

       $ cd $HOME
       $ mkdir mylayer
@@ -322,12 +322,12 @@ Following is the complete "Hello World" example.
       $ mkdir conf

    Move to the ``conf`` directory and create a ``layer.conf`` file that has the
-   following::
+   following: ::

       BBPATH .= ":${LAYERDIR}"
-      BBFILES += "${LAYERDIR}/*.bb"
+      BBFILES += "${LAYERDIR}/\*.bb"
       BBFILE_COLLECTIONS += "mylayer"
-      BBFILE_PATTERN_mylayer := "^${LAYERDIR_RE}/"
+      BBFILE_PATTERN_mylayer := "^${LAYERDIR_RE}/"

    For information on these variables, click on :term:`BBFILES`,
    :term:`LAYERDIR`, :term:`BBFILE_COLLECTIONS` or :term:`BBFILE_PATTERN_mylayer <BBFILE_PATTERN>`
@@ -335,7 +335,7 @@ Following is the complete "Hello World" example.

    You need to create the recipe file next. Inside your layer at the
    top-level, use an editor and create a recipe file named
-   ``printhello.bb`` that has the following::
+   ``printhello.bb`` that has the following: ::

       DESCRIPTION = "Prints Hello World"
       PN = 'printhello'
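For reference, a complete ``printhello.bb`` consistent with the fragment above (a minimal sketch, not necessarily the manual's exact listing) could read::

    DESCRIPTION = "Prints Hello World"
    PN = 'printhello'
    PV = '1'

    python do_build() {
        bb.plain("********************")
        bb.plain("*                  *")
        bb.plain("*  Hello, World!   *")
        bb.plain("*                  *")
        bb.plain("********************")
    }
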
@@ -356,7 +356,7 @@ Following is the complete "Hello World" example.
    follow the links to the glossary.

 #. **Run BitBake With a Target:** Now that a BitBake target exists, run
-   the command and provide that target::
+   the command and provide that target: ::

       $ cd $HOME/hello
       $ bitbake printhello

@@ -376,7 +376,7 @@ Following is the complete "Hello World" example.
    ``hello/conf`` for this example).

    Set your working directory to the ``hello/conf`` directory and then
-   create the ``bblayers.conf`` file so that it contains the following::
+   create the ``bblayers.conf`` file so that it contains the following: ::

       BBLAYERS ?= " \
          /home/<you>/mylayer \

@@ -386,7 +386,7 @@ Following is the complete "Hello World" example.

 #. **Run BitBake With a Target:** Now that you have supplied the
    ``bblayers.conf`` file, run the ``bitbake`` command and provide the
-   target::
+   target: ::

       $ bitbake printhello
       Parsing recipes: 100% |##################################################################################|

@@ -27,7 +27,7 @@ Linux software stacks using a task-oriented approach.
 Conceptually, BitBake is similar to GNU Make in some regards but has
 significant differences:

-- BitBake executes tasks according to the provided metadata that builds up
+- BitBake executes tasks according to provided metadata that builds up
   the tasks. Metadata is stored in recipe (``.bb``) and related recipe
   "append" (``.bbappend``) files, configuration (``.conf``) and
   underlying include (``.inc``) files, and in class (``.bbclass``)

@@ -60,10 +60,11 @@ member Chris Larson split the project into two distinct pieces:

 - OpenEmbedded, a metadata set utilized by BitBake

 Today, BitBake is the primary basis of the
-`OpenEmbedded <https://www.openembedded.org/>`__ project, which is being
-used to build and maintain Linux distributions such as the `Poky
-Reference Distribution <https://www.yoctoproject.org/software-item/poky/>`__,
-developed under the umbrella of the `Yocto Project <https://www.yoctoproject.org>`__.
+`OpenEmbedded <http://www.openembedded.org/>`__ project, which is being
+used to build and maintain Linux distributions such as the `Angstrom
+Distribution <http://www.angstrom-distribution.org/>`__, and which is
+also being used as the build tool for Linux projects such as the `Yocto
+Project <http://www.yoctoproject.org>`__.

 Prior to BitBake, no other build tool adequately met the needs of an
 aspiring embedded Linux distribution. All of the build systems used by
@@ -247,13 +248,13 @@ underlying, similarly-named recipe files.

 When you name an append file, you can use the "``%``" wildcard character
 to allow for matching recipe names. For example, suppose you have an
-append file named as follows::
+append file named as follows: ::

     busybox_1.21.%.bbappend

 That append file
 would match any ``busybox_1.21.``\ x\ ``.bb`` version of the recipe. So,
-the append file would match the following recipe names::
+the append file would match the following recipe names: ::

     busybox_1.21.1.bb
     busybox_1.21.2.bb

@@ -289,7 +290,7 @@ You can obtain BitBake several different ways:
   are using. The metadata is generally backwards compatible but not
   forward compatible.

-  Here is an example that clones the BitBake repository::
+  Here is an example that clones the BitBake repository: ::

     $ git clone git://git.openembedded.org/bitbake

@@ -297,7 +298,7 @@ You can obtain BitBake several different ways:
   Git repository into a directory called ``bitbake``. Alternatively,
   you can designate a directory after the ``git clone`` command if you
   want to call the new directory something other than ``bitbake``. Here
-  is an example that names the directory ``bbdev``::
+  is an example that names the directory ``bbdev``: ::

     $ git clone git://git.openembedded.org/bitbake bbdev

@@ -316,9 +317,9 @@ You can obtain BitBake several different ways:
   method for getting BitBake. Cloning the repository makes it easier
   to update as patches are added to the stable branches.

-  The following example downloads a snapshot of BitBake version 1.17.0::
+  The following example downloads a snapshot of BitBake version 1.17.0: ::

-    $ wget https://git.openembedded.org/bitbake/snapshot/bitbake-1.17.0.tar.gz
+    $ wget http://git.openembedded.org/bitbake/snapshot/bitbake-1.17.0.tar.gz
     $ tar zxpvf bitbake-1.17.0.tar.gz

 After extraction of the tarball using
@@ -346,7 +347,7 @@ execution examples.
 Usage and syntax
 ----------------

-Following is the usage and syntax for BitBake::
+Following is the usage and syntax for BitBake: ::

     $ bitbake -h
     Usage: bitbake [options] [recipename/target recipe:do_task ...]

@@ -416,8 +417,8 @@ Following is the usage and syntax for BitBake::
     -l DEBUG_DOMAINS, --log-domains=DEBUG_DOMAINS
                         Show debug logging for the specified logging domains
     -P, --profile       Profile the command and save reports.
-    -u UI, --ui=UI      The user interface to use (knotty, ncurses, taskexp or
-                        teamcity - default knotty).
+    -u UI, --ui=UI      The user interface to use (knotty, ncurses or taskexp
+                        - default knotty).
     --token=XMLRPCTOKEN Specify the connection token to be used when
                         connecting to a remote server.
     --revisions-changed Set the exit code depending on whether upstream

@@ -432,9 +433,6 @@ Following is the usage and syntax for BitBake::
                         Environment variable BB_SERVER_TIMEOUT.
     --no-setscene       Do not run any setscene tasks. sstate will be ignored
                         and everything needed, built.
-    --skip-setscene     Skip setscene tasks if they would be executed. Tasks
-                        previously restored from sstate will be kept, unlike
-                        --no-setscene
     --setscene-only     Only run setscene tasks, don't run any real tasks.
     --remote-server=REMOTE_SERVER
                         Connect to the specified server.
@@ -471,11 +469,11 @@ default task, which is "build". BitBake obeys inter-task dependencies
 when doing so.

 The following command runs the build task, which is the default task, on
-the ``foo_1.0.bb`` recipe file::
+the ``foo_1.0.bb`` recipe file: ::

     $ bitbake -b foo_1.0.bb

-The following command runs the clean task on the ``foo.bb`` recipe file::
+The following command runs the clean task on the ``foo.bb`` recipe file: ::

     $ bitbake -b foo.bb -c clean

@@ -499,13 +497,13 @@ functionality, or when there are multiple versions of a recipe.
 The ``bitbake`` command, when not using "--buildfile" or "-b" only
 accepts a "PROVIDES". You cannot provide anything else. By default, a
 recipe file generally "PROVIDES" its "packagename" as shown in the
-following example::
+following example: ::

     $ bitbake foo

 This next example "PROVIDES" the
 package name and also uses the "-c" option to tell BitBake to just
-execute the ``do_clean`` task::
+execute the ``do_clean`` task: ::

     $ bitbake -c clean foo

@@ -516,7 +514,7 @@ The BitBake command line supports specifying different tasks for
 individual targets when you specify multiple targets. For example,
 suppose you had two targets (or recipes) ``myfirstrecipe`` and
 ``mysecondrecipe`` and you needed BitBake to run ``taskA`` for the first
-recipe and ``taskB`` for the second recipe::
+recipe and ``taskB`` for the second recipe: ::

     $ bitbake myfirstrecipe:do_taskA mysecondrecipe:do_taskB

@@ -536,13 +534,13 @@ current working directory:

 - ``pn-buildlist``: Shows a simple list of targets that are to be
   built.

-To stop depending on common depends, use the ``-I`` depend option and
+To stop depending on common depends, use the "-I" depend option and
 BitBake omits them from the graph. Leaving this information out can
 produce more readable graphs. This way, you can remove from the graph
-:term:`DEPENDS` from inherited classes such as ``base.bbclass``.
+``DEPENDS`` from inherited classes such as ``base.bbclass``.

 Here are two examples that create dependency graphs. The second example
-omits depends common in OpenEmbedded from the graph::
+omits depends common in OpenEmbedded from the graph: ::

     $ bitbake -g foo

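The second of those examples, which the hunk above truncates, would look something like this (the excluded items are illustrative)::

    $ bitbake -g -I virtual/kernel -I eglibc foo
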
@@ -566,7 +564,7 @@ for two separate targets:

 .. image:: figures/bb_multiconfig_files.png
    :align: center

-The reason for this required file hierarchy is because the :term:`BBPATH`
+The reason for this required file hierarchy is because the ``BBPATH``
 variable is not constructed until the layers are parsed. Consequently,
 using the configuration file as a pre-configuration file is not possible
 unless it is located in the current working directory.

@@ -584,17 +582,17 @@ accomplished by setting the
 configuration files for ``target1`` and ``target2`` defined in the build
 directory. The following statement in the ``local.conf`` file both
 enables BitBake to perform multiple configuration builds and specifies
-the two extra multiconfigs::
+the two extra multiconfigs: ::

     BBMULTICONFIG = "target1 target2"

 Once the target configuration files are in place and BitBake has been
 enabled to perform multiple configuration builds, use the following
-command form to start the builds::
+command form to start the builds: ::

     $ bitbake [mc:multiconfigname:]target [[[mc:multiconfigname:]target] ... ]

-Here is an example for two extra multiconfigs: ``target1`` and ``target2``::
+Here is an example for two extra multiconfigs: ``target1`` and ``target2``: ::

     $ bitbake mc::target mc:target1:target mc:target2:target

@@ -615,12 +613,12 @@ multiconfig.

 To enable dependencies in a multiple configuration build, you must
 declare the dependencies in the recipe using the following statement
-form::
+form: ::

     task_or_package[mcdepends] = "mc:from_multiconfig:to_multiconfig:recipe_name:task_on_which_to_depend"

 To better show how to use this statement, consider an example with two
-multiconfigs: ``target1`` and ``target2``::
+multiconfigs: ``target1`` and ``target2``: ::

     image_task[mcdepends] = "mc:target1:target2:image2:rootfs_task"

@@ -631,7 +629,7 @@ completion of the rootfs_task used to build out image2, which is
 associated with the "target2" multiconfig.

 Once you set up this dependency, you can build the "target1" multiconfig
-using a BitBake command as follows::
+using a BitBake command as follows: ::

     $ bitbake mc:target1:image1

@@ -641,7 +639,7 @@ the ``rootfs_task`` for the "target2" multiconfig build.

 Having a recipe depend on the root filesystem of another build might not
 seem that useful. Consider this change to the statement in the image1
-recipe::
+recipe: ::

     image_task[mcdepends] = "mc:target1:target2:image2:image_task"

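To make the prerequisite concrete, here is a sketch with hypothetical machine names: the two extra configuration files normally live in a ``multiconfig`` directory under the build directory's ``conf`` directory, and need only contain the settings that differ between the builds::

    $ cat conf/multiconfig/target1.conf
    MACHINE = "qemux86-64"

    $ cat conf/multiconfig/target2.conf
    MACHINE = "qemuarm"
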
File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -14,7 +14,6 @@
 # import sys
 # sys.path.insert(0, os.path.abspath('.'))

-import sys
 import datetime

 current_version = "dev"

@@ -1,74 +1,32 @@
 .. SPDX-License-Identifier: CC-BY-2.5

-===========================
-Supported Release Manuals
-===========================
-
-******************************
-Release Series 3.4 (honister)
-******************************
-
-- :yocto_docs:`3.4 BitBake User Manual </3.4/bitbake-user-manual/bitbake-user-manual.html>`
-- :yocto_docs:`3.4.1 BitBake User Manual </3.4.1/bitbake-user-manual/bitbake-user-manual.html>`
-- :yocto_docs:`3.4.2 BitBake User Manual </3.4.2/bitbake-user-manual/bitbake-user-manual.html>`
-
-******************************
-Release Series 3.3 (hardknott)
-******************************
-
-- :yocto_docs:`3.3 BitBake User Manual </3.3/bitbake-user-manual/bitbake-user-manual.html>`
-- :yocto_docs:`3.3.1 BitBake User Manual </3.3.1/bitbake-user-manual/bitbake-user-manual.html>`
-- :yocto_docs:`3.3.2 BitBake User Manual </3.3.2/bitbake-user-manual/bitbake-user-manual.html>`
-- :yocto_docs:`3.3.3 BitBake User Manual </3.3.3/bitbake-user-manual/bitbake-user-manual.html>`
-- :yocto_docs:`3.3.4 BitBake User Manual </3.3.4/bitbake-user-manual/bitbake-user-manual.html>`
-- :yocto_docs:`3.3.5 BitBake User Manual </3.3.5/bitbake-user-manual/bitbake-user-manual.html>`
+=========================
+Current Release Manuals
+=========================

 ****************************
-Release Series 3.1 (dunfell)
+3.1 'dunfell' Release Series
 ****************************

 - :yocto_docs:`3.1 BitBake User Manual </3.1/bitbake-user-manual/bitbake-user-manual.html>`
 - :yocto_docs:`3.1.1 BitBake User Manual </3.1.1/bitbake-user-manual/bitbake-user-manual.html>`
 - :yocto_docs:`3.1.2 BitBake User Manual </3.1.2/bitbake-user-manual/bitbake-user-manual.html>`
 - :yocto_docs:`3.1.3 BitBake User Manual </3.1.3/bitbake-user-manual/bitbake-user-manual.html>`
 - :yocto_docs:`3.1.4 BitBake User Manual </3.1.4/bitbake-user-manual/bitbake-user-manual.html>`
-- :yocto_docs:`3.1.5 BitBake User Manual </3.1.5/bitbake-user-manual/bitbake-user-manual.html>`
-- :yocto_docs:`3.1.6 BitBake User Manual </3.1.6/bitbake-user-manual/bitbake-user-manual.html>`
-- :yocto_docs:`3.1.7 BitBake User Manual </3.1.7/bitbake-user-manual/bitbake-user-manual.html>`
-- :yocto_docs:`3.1.8 BitBake User Manual </3.1.8/bitbake-user-manual/bitbake-user-manual.html>`
-- :yocto_docs:`3.1.9 BitBake User Manual </3.1.9/bitbake-user-manual/bitbake-user-manual.html>`
-- :yocto_docs:`3.1.10 BitBake User Manual </3.1.10/bitbake-user-manual/bitbake-user-manual.html>`
-- :yocto_docs:`3.1.11 BitBake User Manual </3.1.11/bitbake-user-manual/bitbake-user-manual.html>`
-- :yocto_docs:`3.1.12 BitBake User Manual </3.1.12/bitbake-user-manual/bitbake-user-manual.html>`
-- :yocto_docs:`3.1.13 BitBake User Manual </3.1.13/bitbake-user-manual/bitbake-user-manual.html>`
-- :yocto_docs:`3.1.14 BitBake User Manual </3.1.14/bitbake-user-manual/bitbake-user-manual.html>`

 ==========================
-Outdated Release Manuals
+Previous Release Manuals
 ==========================

-*******************************
-Release Series 3.2 (gatesgarth)
-*******************************
-
-- :yocto_docs:`3.2 BitBake User Manual </3.2/bitbake-user-manual/bitbake-user-manual.html>`
-- :yocto_docs:`3.2.1 BitBake User Manual </3.2.1/bitbake-user-manual/bitbake-user-manual.html>`
-- :yocto_docs:`3.2.2 BitBake User Manual </3.2.2/bitbake-user-manual/bitbake-user-manual.html>`
-- :yocto_docs:`3.2.3 BitBake User Manual </3.2.3/bitbake-user-manual/bitbake-user-manual.html>`
-- :yocto_docs:`3.2.4 BitBake User Manual </3.2.4/bitbake-user-manual/bitbake-user-manual.html>`
-
 *************************
-Release Series 3.0 (zeus)
+3.0 'zeus' Release Series
 *************************

 - :yocto_docs:`3.0 BitBake User Manual </3.0/bitbake-user-manual/bitbake-user-manual.html>`
 - :yocto_docs:`3.0.1 BitBake User Manual </3.0.1/bitbake-user-manual/bitbake-user-manual.html>`
 - :yocto_docs:`3.0.2 BitBake User Manual </3.0.2/bitbake-user-manual/bitbake-user-manual.html>`
 - :yocto_docs:`3.0.3 BitBake User Manual </3.0.3/bitbake-user-manual/bitbake-user-manual.html>`
 - :yocto_docs:`3.0.4 BitBake User Manual </3.0.4/bitbake-user-manual/bitbake-user-manual.html>`

 ****************************
-Release Series 2.7 (warrior)
+2.7 'warrior' Release Series
 ****************************

 - :yocto_docs:`2.7 BitBake User Manual </2.7/bitbake-user-manual/bitbake-user-manual.html>`
@@ -78,7 +36,7 @@ Release Series 2.7 (warrior)
 - :yocto_docs:`2.7.4 BitBake User Manual </2.7.4/bitbake-user-manual/bitbake-user-manual.html>`

 *************************
-Release Series 2.6 (thud)
+2.6 'thud' Release Series
 *************************

 - :yocto_docs:`2.6 BitBake User Manual </2.6/bitbake-user-manual/bitbake-user-manual.html>`
@@ -88,16 +46,16 @@ Release Series 2.6 (thud)
 - :yocto_docs:`2.6.4 BitBake User Manual </2.6.4/bitbake-user-manual/bitbake-user-manual.html>`

 *************************
-Release Series 2.5 (sumo)
+2.5 'sumo' Release Series
 *************************

-- :yocto_docs:`2.5 Documentation </2.5>`
-- :yocto_docs:`2.5.1 Documentation </2.5.1>`
-- :yocto_docs:`2.5.2 Documentation </2.5.2>`
-- :yocto_docs:`2.5.3 Documentation </2.5.3>`
+- :yocto_docs:`2.5 BitBake User Manual </2.5/bitbake-user-manual/bitbake-user-manual.html>`
+- :yocto_docs:`2.5.1 BitBake User Manual </2.5.1/bitbake-user-manual/bitbake-user-manual.html>`
+- :yocto_docs:`2.5.2 BitBake User Manual </2.5.2/bitbake-user-manual/bitbake-user-manual.html>`
+- :yocto_docs:`2.5.3 BitBake User Manual </2.5.3/bitbake-user-manual/bitbake-user-manual.html>`

 **************************
-Release Series 2.4 (rocko)
+2.4 'rocko' Release Series
 **************************

 - :yocto_docs:`2.4 BitBake User Manual </2.4/bitbake-user-manual/bitbake-user-manual.html>`
@@ -107,7 +65,7 @@ Release Series 2.4 (rocko)
 - :yocto_docs:`2.4.4 BitBake User Manual </2.4.4/bitbake-user-manual/bitbake-user-manual.html>`

 *************************
-Release Series 2.3 (pyro)
+2.3 'pyro' Release Series
 *************************

 - :yocto_docs:`2.3 BitBake User Manual </2.3/bitbake-user-manual/bitbake-user-manual.html>`
@@ -117,7 +75,7 @@ Release Series 2.3 (pyro)
 - :yocto_docs:`2.3.4 BitBake User Manual </2.3.4/bitbake-user-manual/bitbake-user-manual.html>`

 **************************
-Release Series 2.2 (morty)
+2.2 'morty' Release Series
 **************************

 - :yocto_docs:`2.2 BitBake User Manual </2.2/bitbake-user-manual/bitbake-user-manual.html>`
@@ -126,7 +84,7 @@ Release Series 2.2 (morty)
 - :yocto_docs:`2.2.3 BitBake User Manual </2.2.3/bitbake-user-manual/bitbake-user-manual.html>`

 ****************************
-Release Series 2.1 (krogoth)
+2.1 'krogoth' Release Series
 ****************************

 - :yocto_docs:`2.1 BitBake User Manual </2.1/bitbake-user-manual/bitbake-user-manual.html>`
@@ -135,7 +93,7 @@ Release Series 2.1 (krogoth)
 - :yocto_docs:`2.1.3 BitBake User Manual </2.1.3/bitbake-user-manual/bitbake-user-manual.html>`

 ***************************
-Release Series 2.0 (jethro)
+2.0 'jethro' Release Series
 ***************************

 - :yocto_docs:`1.9 BitBake User Manual </1.9/bitbake-user-manual/bitbake-user-manual.html>`
@@ -145,7 +103,7 @@ Release Series 2.0 (jethro)
 - :yocto_docs:`2.0.3 BitBake User Manual </2.0.3/bitbake-user-manual/bitbake-user-manual.html>`

 *************************
-Release Series 1.8 (fido)
+1.8 'fido' Release Series
 *************************

 - :yocto_docs:`1.8 BitBake User Manual </1.8/bitbake-user-manual/bitbake-user-manual.html>`
@@ -153,7 +111,7 @@ Release Series 1.8 (fido)
 - :yocto_docs:`1.8.2 BitBake User Manual </1.8.2/bitbake-user-manual/bitbake-user-manual.html>`

 **************************
-Release Series 1.7 (dizzy)
+1.7 'dizzy' Release Series
 **************************

 - :yocto_docs:`1.7 BitBake User Manual </1.7/bitbake-user-manual/bitbake-user-manual.html>`
@@ -162,7 +120,7 @@ Release Series 1.7 (dizzy)
 - :yocto_docs:`1.7.3 BitBake User Manual </1.7.3/bitbake-user-manual/bitbake-user-manual.html>`

 **************************
-Release Series 1.6 (daisy)
+1.6 'daisy' Release Series
 **************************

 - :yocto_docs:`1.6 BitBake User Manual </1.6/bitbake-user-manual/bitbake-user-manual.html>`

@@ -3,8 +3,6 @@
 #
 # Copyright (C) 2006 Tim Ansell
 #
-# SPDX-License-Identifier: GPL-2.0-only
-#
 # Please Note:
 # Be careful when using mutable types (ie Dict and Lists) - operations involving these are SLOW.
 # Assign a file to __warn__ to get warnings about slow operations.

@@ -9,11 +9,11 @@
 # SPDX-License-Identifier: GPL-2.0-only
 #

-__version__ = "2.2.0"
+__version__ = "1.48.0"

 import sys
-if sys.version_info < (3, 6, 0):
-    raise RuntimeError("Sorry, python 3.6.0 or later is required for this version of bitbake")
+if sys.version_info < (3, 5, 0):
+    raise RuntimeError("Sorry, python 3.5.0 or later is required for this version of bitbake")


 class BBHandledException(Exception):
@@ -21,8 +21,8 @@ class BBHandledException(Exception):
     The big dilemma for generic bitbake code is what information to give the user
     when an exception occurs. Any exception inheriting this base exception class
     has already provided information to the user via some 'fired' message type such as
-    an explicitly fired event using bb.fire, or a bb.error message. If bitbake
-    encounters an exception derived from this class, no backtrace or other information
+    an explicitly fired event using bb.fire, or a bb.error message. If bitbake
+    encounters an exception derived from this class, no backtrace or other information
     will be given to the user, its assumed the earlier event provided the relevant information.
     """
     pass
@@ -42,28 +42,15 @@ class BBLoggerMixin(object):

     def setup_bblogger(self, name):
         if name.split(".")[0] == "BitBake":
-            self.debug = self._debug_helper
-
-    def _debug_helper(self, *args, **kwargs):
-        return self.bbdebug(1, *args, **kwargs)
-
-    def debug2(self, *args, **kwargs):
-        return self.bbdebug(2, *args, **kwargs)
-
-    def debug3(self, *args, **kwargs):
-        return self.bbdebug(3, *args, **kwargs)
+            self.debug = self.bbdebug

     def bbdebug(self, level, msg, *args, **kwargs):
         loglevel = logging.DEBUG - level + 1
-        if not bb.event.worker_pid:
-            if self.name in bb.msg.loggerDefaultDomains and loglevel > (bb.msg.loggerDefaultDomains[self.name]):
-                return
-            if loglevel < bb.msg.loggerDefaultLogLevel:
-                return
-
-        if not isinstance(level, int) or not isinstance(msg, str):
-            mainlogger.warning("Invalid arguments in bbdebug: %s" % repr((level, msg,) + args))
-
+        if loglevel > bb.msg.loggerDefaultLogLevel:
+            return
         return self.log(loglevel, msg, *args, **kwargs)

     def plain(self, msg, *args, **kwargs):

@@ -75,13 +62,6 @@ class BBLoggerMixin(object):
     def verbnote(self, msg, *args, **kwargs):
         return self.log(logging.INFO + 2, msg, *args, **kwargs)

-    def warnonce(self, msg, *args, **kwargs):
-        return self.log(logging.WARNING - 1, msg, *args, **kwargs)
-
-    def erroronce(self, msg, *args, **kwargs):
-        return self.log(logging.ERROR - 1, msg, *args, **kwargs)
-

 Logger = logging.getLoggerClass()
 class BBLogger(Logger, BBLoggerMixin):
     def __init__(self, name, *args, **kwargs):
@@ -148,7 +128,7 @@ def debug(lvl, *args):
|
||||
mainlogger.warning("Passed invalid debug level '%s' to bb.debug", lvl)
|
||||
args = (lvl,) + args
|
||||
lvl = 1
|
||||
mainlogger.bbdebug(lvl, ''.join(args))
|
||||
mainlogger.debug(lvl, ''.join(args))
|
||||
|
||||
def note(*args):
|
||||
mainlogger.info(''.join(args))
|
||||
@@ -168,15 +148,9 @@ def verbnote(*args):
|
||||
def warn(*args):
|
||||
mainlogger.warning(''.join(args))
|
||||
|
||||
def warnonce(*args):
|
||||
mainlogger.warnonce(''.join(args))
|
||||
|
||||
def error(*args, **kwargs):
|
||||
mainlogger.error(''.join(args), extra=kwargs)
|
||||
|
||||
def erroronce(*args):
|
||||
mainlogger.erroronce(''.join(args))
|
||||
|
||||
def fatal(*args, **kwargs):
|
||||
mainlogger.critical(''.join(args), extra=kwargs)
|
||||
raise BBHandledException()
|
||||
|
||||
bitbake/lib/bb/asyncrpc/__init__.py (deleted)

@@ -1,33 +0,0 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#

import itertools
import json

# The Python async server defaults to a 64K receive buffer, so we hardcode our
# maximum chunk size. It would be better if the client and server reported to
# each other what the maximum chunk sizes were, but that will slow down the
# connection setup with a round trip delay so I'd rather not do that unless it
# is necessary
DEFAULT_MAX_CHUNK = 32 * 1024


def chunkify(msg, max_chunk):
    if len(msg) < max_chunk - 1:
        yield ''.join((msg, "\n"))
    else:
        yield ''.join((json.dumps({
            'chunk-stream': None
        }), "\n"))

        args = [iter(msg)] * (max_chunk - 1)
        for m in map(''.join, itertools.zip_longest(*args, fillvalue='')):
            yield ''.join(itertools.chain(m, "\n"))
        yield "\n"


from .client import AsyncClient, Client
from .serv import AsyncServer, AsyncServerConnection, ClientError, ServerError
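The chunking scheme above is self-describing: a short message travels as one newline-terminated JSON line, while a long one is announced with a 'chunk-stream' marker line, streamed as fixed-width lines, and terminated by a blank line. A standalone sketch of the receiving side (plain Python, stdlib only; reassemble() is a hypothetical helper written here to mirror the client's read loop):

import itertools
import json

DEFAULT_MAX_CHUNK = 32 * 1024

def chunkify(msg, max_chunk):
    # Same framing as the deleted module above.
    if len(msg) < max_chunk - 1:
        yield ''.join((msg, "\n"))
    else:
        yield ''.join((json.dumps({'chunk-stream': None}), "\n"))
        args = [iter(msg)] * (max_chunk - 1)
        for m in map(''.join, itertools.zip_longest(*args, fillvalue='')):
            yield ''.join(itertools.chain(m, "\n"))
        yield "\n"

def reassemble(lines):
    # 'lines' is an iterator of newline-terminated strings as read
    # from the socket.
    first = json.loads(next(lines))
    if isinstance(first, dict) and "chunk-stream" in first:
        parts = []
        for l in lines:
            l = l.rstrip("\n")
            if not l:
                break          # blank line terminates the stream
            parts.append(l)
        return json.loads("".join(parts))
    return first

msg = json.dumps({"payload": "x" * 100000})
assert reassemble(iter(list(chunkify(msg, DEFAULT_MAX_CHUNK)))) == {"payload": "x" * 100000}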
bitbake/lib/bb/asyncrpc/client.py (deleted)

@@ -1,178 +0,0 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#

import abc
import asyncio
import json
import os
import socket
import sys
from . import chunkify, DEFAULT_MAX_CHUNK


class AsyncClient(object):
    def __init__(self, proto_name, proto_version, logger, timeout=30):
        self.reader = None
        self.writer = None
        self.max_chunk = DEFAULT_MAX_CHUNK
        self.proto_name = proto_name
        self.proto_version = proto_version
        self.logger = logger
        self.timeout = timeout

    async def connect_tcp(self, address, port):
        async def connect_sock():
            return await asyncio.open_connection(address, port)

        self._connect_sock = connect_sock

    async def connect_unix(self, path):
        async def connect_sock():
            # AF_UNIX has path length issues so chdir here to workaround
            cwd = os.getcwd()
            try:
                os.chdir(os.path.dirname(path))
                # The socket must be opened synchronously so that CWD doesn't get
                # changed out from underneath us so we pass as a sock into asyncio
                sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM, 0)
                sock.connect(os.path.basename(path))
            finally:
                os.chdir(cwd)
            return await asyncio.open_unix_connection(sock=sock)

        self._connect_sock = connect_sock

    async def setup_connection(self):
        s = '%s %s\n\n' % (self.proto_name, self.proto_version)
        self.writer.write(s.encode("utf-8"))
        await self.writer.drain()

    async def connect(self):
        if self.reader is None or self.writer is None:
            (self.reader, self.writer) = await self._connect_sock()
            await self.setup_connection()

    async def close(self):
        self.reader = None

        if self.writer is not None:
            self.writer.close()
            self.writer = None

    async def _send_wrapper(self, proc):
        count = 0
        while True:
            try:
                await self.connect()
                return await proc()
            except (
                OSError,
                ConnectionError,
                json.JSONDecodeError,
                UnicodeDecodeError,
            ) as e:
                self.logger.warning("Error talking to server: %s" % e)
                if count >= 3:
                    if not isinstance(e, ConnectionError):
                        raise ConnectionError(str(e))
                    raise e
                await self.close()
                count += 1

    async def send_message(self, msg):
        async def get_line():
            try:
                line = await asyncio.wait_for(self.reader.readline(), self.timeout)
            except asyncio.TimeoutError:
                raise ConnectionError("Timed out waiting for server")

            if not line:
                raise ConnectionError("Connection closed")

            line = line.decode("utf-8")

            if not line.endswith("\n"):
                raise ConnectionError("Bad message %r" % (line))

            return line

        async def proc():
            for c in chunkify(json.dumps(msg), self.max_chunk):
                self.writer.write(c.encode("utf-8"))
            await self.writer.drain()

            l = await get_line()

            m = json.loads(l)
            if m and "chunk-stream" in m:
                lines = []
                while True:
                    l = (await get_line()).rstrip("\n")
                    if not l:
                        break
                    lines.append(l)

                m = json.loads("".join(lines))

            return m

        return await self._send_wrapper(proc)

    async def ping(self):
        return await self.send_message(
            {'ping': {}}
        )


class Client(object):
    def __init__(self):
        self.client = self._get_async_client()
        self.loop = asyncio.new_event_loop()

        # Override any pre-existing loop.
        # Without this, the PR server export selftest triggers a hang
        # when running with Python 3.7. The drawback is that there is
        # potential for issues if the PR and hash equiv (or some new)
        # clients need to both be instantiated in the same process.
        # This should be revisited if/when Python 3.9 becomes the
        # minimum required version for BitBake, as it seems not
        # required (but harmless) with it.
        asyncio.set_event_loop(self.loop)

        self._add_methods('connect_tcp', 'ping')

    @abc.abstractmethod
    def _get_async_client(self):
        pass

    def _get_downcall_wrapper(self, downcall):
        def wrapper(*args, **kwargs):
            return self.loop.run_until_complete(downcall(*args, **kwargs))

        return wrapper

    def _add_methods(self, *methods):
        for m in methods:
            downcall = getattr(self.client, m)
            setattr(self, m, self._get_downcall_wrapper(downcall))

    def connect_unix(self, path):
        self.loop.run_until_complete(self.client.connect_unix(path))
        self.loop.run_until_complete(self.client.connect())

    @property
    def max_chunk(self):
        return self.client.max_chunk

    @max_chunk.setter
    def max_chunk(self, value):
        self.client.max_chunk = value

    def close(self):
        self.loop.run_until_complete(self.client.close())
        if sys.version_info >= (3, 6):
            self.loop.run_until_complete(self.loop.shutdown_asyncgens())
        self.loop.close()
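connect_unix() above works around the AF_UNIX sun_path length limit (roughly 108 bytes on Linux) by changing into the socket's directory and connecting via its basename. The same trick with a plain blocking socket, as a sketch (connect_unix_short() is a hypothetical name):

import os
import socket

def connect_unix_short(path):
    # Long absolute paths overflow sun_path, so connect next to the
    # socket by basename and restore the working directory afterwards.
    cwd = os.getcwd()
    try:
        os.chdir(os.path.dirname(path))
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM, 0)
        sock.connect(os.path.basename(path))
    finally:
        os.chdir(cwd)
    return sock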
bitbake/lib/bb/asyncrpc/serv.py (deleted)

@@ -1,295 +0,0 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#

import abc
import asyncio
import json
import os
import signal
import socket
import sys
import multiprocessing
from . import chunkify, DEFAULT_MAX_CHUNK


class ClientError(Exception):
    pass


class ServerError(Exception):
    pass


class AsyncServerConnection(object):
    def __init__(self, reader, writer, proto_name, logger):
        self.reader = reader
        self.writer = writer
        self.proto_name = proto_name
        self.max_chunk = DEFAULT_MAX_CHUNK
        self.handlers = {
            'chunk-stream': self.handle_chunk,
            'ping': self.handle_ping,
        }
        self.logger = logger

    async def process_requests(self):
        try:
            self.addr = self.writer.get_extra_info('peername')
            self.logger.debug('Client %r connected' % (self.addr,))

            # Read protocol and version
            client_protocol = await self.reader.readline()
            if not client_protocol:
                return

            (client_proto_name, client_proto_version) = client_protocol.decode('utf-8').rstrip().split()
            if client_proto_name != self.proto_name:
                self.logger.debug('Rejecting invalid protocol %s' % (self.proto_name))
                return

            self.proto_version = tuple(int(v) for v in client_proto_version.split('.'))
            if not self.validate_proto_version():
                self.logger.debug('Rejecting invalid protocol version %s' % (client_proto_version))
                return

            # Read headers. Currently, no headers are implemented, so look for
            # an empty line to signal the end of the headers
            while True:
                line = await self.reader.readline()
                if not line:
                    return

                line = line.decode('utf-8').rstrip()
                if not line:
                    break

            # Handle messages
            while True:
                d = await self.read_message()
                if d is None:
                    break
                await self.dispatch_message(d)
                await self.writer.drain()
        except ClientError as e:
            self.logger.error(str(e))
        finally:
            self.writer.close()

    async def dispatch_message(self, msg):
        for k in self.handlers.keys():
            if k in msg:
                self.logger.debug('Handling %s' % k)
                await self.handlers[k](msg[k])
                return

        raise ClientError("Unrecognized command %r" % msg)

    def write_message(self, msg):
        for c in chunkify(json.dumps(msg), self.max_chunk):
            self.writer.write(c.encode('utf-8'))

    async def read_message(self):
        l = await self.reader.readline()
        if not l:
            return None

        try:
            message = l.decode('utf-8')

            if not message.endswith('\n'):
                return None

            return json.loads(message)
        except (json.JSONDecodeError, UnicodeDecodeError) as e:
            self.logger.error('Bad message from client: %r' % message)
            raise e

    async def handle_chunk(self, request):
        lines = []
        try:
            while True:
                l = await self.reader.readline()
                l = l.rstrip(b"\n").decode("utf-8")
                if not l:
                    break
                lines.append(l)

            msg = json.loads(''.join(lines))
        except (json.JSONDecodeError, UnicodeDecodeError) as e:
            self.logger.error('Bad message from client: %r' % lines)
            raise e

        if 'chunk-stream' in msg:
            raise ClientError("Nested chunks are not allowed")

        await self.dispatch_message(msg)

    async def handle_ping(self, request):
        response = {'alive': True}
        self.write_message(response)


class AsyncServer(object):
    def __init__(self, logger):
        self._cleanup_socket = None
        self.logger = logger
        self.start = None
        self.address = None
        self.loop = None

    def start_tcp_server(self, host, port):
        def start_tcp():
            self.server = self.loop.run_until_complete(
                asyncio.start_server(self.handle_client, host, port)
            )

            for s in self.server.sockets:
                self.logger.debug('Listening on %r' % (s.getsockname(),))
                # Newer python does this automatically. Do it manually here for
                # maximum compatibility
                s.setsockopt(socket.SOL_TCP, socket.TCP_NODELAY, 1)
                s.setsockopt(socket.SOL_TCP, socket.TCP_QUICKACK, 1)

                # Enable keep alives. This prevents broken client connections
                # from persisting on the server for long periods of time.
                s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
                s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 30)
                s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 15)
                s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 4)

            name = self.server.sockets[0].getsockname()
            if self.server.sockets[0].family == socket.AF_INET6:
                self.address = "[%s]:%d" % (name[0], name[1])
            else:
                self.address = "%s:%d" % (name[0], name[1])

        self.start = start_tcp

    def start_unix_server(self, path):
        def cleanup():
            os.unlink(path)

        def start_unix():
            cwd = os.getcwd()
            try:
                # Work around path length limits in AF_UNIX
                os.chdir(os.path.dirname(path))
                self.server = self.loop.run_until_complete(
                    asyncio.start_unix_server(self.handle_client, os.path.basename(path))
                )
            finally:
                os.chdir(cwd)

            self.logger.debug('Listening on %r' % path)

        self._cleanup_socket = cleanup
        self.address = "unix://%s" % os.path.abspath(path)

        self.start = start_unix

    @abc.abstractmethod
    def accept_client(self, reader, writer):
        pass

    async def handle_client(self, reader, writer):
        # writer.transport.set_write_buffer_limits(0)
        try:
            client = self.accept_client(reader, writer)
            await client.process_requests()
        except Exception as e:
            import traceback
            self.logger.error('Error from client: %s' % str(e), exc_info=True)
            traceback.print_exc()
            writer.close()
        self.logger.debug('Client disconnected')

    def run_loop_forever(self):
        try:
            self.loop.run_forever()
        except KeyboardInterrupt:
            pass

    def signal_handler(self):
        self.logger.debug("Got exit signal")
        self.loop.stop()

    def _serve_forever(self):
        try:
            self.loop.add_signal_handler(signal.SIGTERM, self.signal_handler)
            signal.pthread_sigmask(signal.SIG_UNBLOCK, [signal.SIGTERM])

            self.run_loop_forever()
            self.server.close()

            self.loop.run_until_complete(self.server.wait_closed())
            self.logger.debug('Server shutting down')
        finally:
            if self._cleanup_socket is not None:
                self._cleanup_socket()

    def serve_forever(self):
        """
        Serve requests in the current process
        """
        # Create loop and override any loop that may have existed in
        # a parent process. It is possible that the usecases of
        # serve_forever might be constrained enough to allow using
        # get_event_loop here, but better safe than sorry for now.
        self.loop = asyncio.new_event_loop()
        asyncio.set_event_loop(self.loop)
        self.start()
        self._serve_forever()

    def serve_as_process(self, *, prefunc=None, args=()):
        """
        Serve requests in a child process
        """
        def run(queue):
            # Create loop and override any loop that may have existed
            # in a parent process. Without doing this and instead
            # using get_event_loop, at the very minimum the hashserv
            # unit tests will hang when running the second test.
            # This happens since get_event_loop in the spawned server
            # process for the second testcase ends up with the loop
            # from the hashserv client created in the unit test process
            # when running the first testcase. The problem is somewhat
            # more general, though, as any potential use of asyncio in
            # Cooker could create a loop that needs to replaced in this
            # new process.
            self.loop = asyncio.new_event_loop()
            asyncio.set_event_loop(self.loop)
            try:
                self.start()
            finally:
                queue.put(self.address)
                queue.close()

            if prefunc is not None:
                prefunc(self, *args)

            self._serve_forever()

            if sys.version_info >= (3, 6):
                self.loop.run_until_complete(self.loop.shutdown_asyncgens())
            self.loop.close()

        queue = multiprocessing.Queue()

        # Temporarily block SIGTERM. The server process will inherit this
        # block which will ensure it doesn't receive the SIGTERM until the
        # handler is ready for it
        mask = signal.pthread_sigmask(signal.SIG_BLOCK, [signal.SIGTERM])
        try:
            self.process = multiprocessing.Process(target=run, args=(queue,))
            self.process.start()

            self.address = queue.get()
            queue.close()
            queue.join_thread()

            return self.process
        finally:
            signal.pthread_sigmask(signal.SIG_SETMASK, mask)
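start_tcp() above tunes each listening socket so dead client connections are detected and dropped promptly. The same option set applied to a bare socket, for reference (values mirror the server; TCP_QUICKACK and the TCP_KEEP* constants are Linux-specific):

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_TCP, socket.TCP_NODELAY, 1)        # send small writes immediately
s.setsockopt(socket.SOL_TCP, socket.TCP_QUICKACK, 1)       # ack without delay
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)    # enable keepalive probes
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 30)  # idle seconds before first probe
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 15) # seconds between probes
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 4)    # failed probes before drop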
bitbake/lib/bb/build.py

@@ -20,7 +20,6 @@ import itertools
import time
import re
import stat
import datetime
import bb
import bb.msg
import bb.process
@@ -296,13 +295,9 @@ def exec_func_python(func, d, runfile, cwd=None):
            lineno = int(d.getVarFlag(func, "lineno", False))
            bb.methodpool.insert_method(func, text, fn, lineno - 1)

        comp = utils.better_compile(code, func, "exec_func_python() autogenerated")
        utils.better_exec(comp, {"d": d}, code, "exec_func_python() autogenerated")
        comp = utils.better_compile(code, func, "exec_python_func() autogenerated")
        utils.better_exec(comp, {"d": d}, code, "exec_python_func() autogenerated")
    finally:
        # We want any stdout/stderr to be printed before any other log messages to make debugging
        # more accurate. In some cases we seem to lose stdout/stderr entirely in logging tests without this.
        sys.stdout.flush()
        sys.stderr.flush()
        bb.debug(2, "Python function %s finished" % func)

        if cwd and olddir:
@@ -570,6 +565,7 @@ exit $ret
def _task_data(fn, task, d):
    localdata = bb.data.createCopy(d)
    localdata.setVar('BB_FILENAME', fn)
    localdata.setVar('BB_CURRENTTASK', task[3:])
    localdata.setVar('OVERRIDES', 'task-%s:%s' %
                     (task[3:].replace('_', '-'), d.getVar('OVERRIDES', False)))
    localdata.finalize()
@@ -583,11 +579,11 @@ def _exec_task(fn, task, d, quieterr):
    running it with its own local metadata, and with some useful variables set.
    """
    if not d.getVarFlag(task, 'task', False):
        event.fire(TaskInvalid(task, fn, d), d)
        event.fire(TaskInvalid(task, d), d)
        logger.error("No such task: %s" % task)
        return 1

    logger.debug("Executing task %s", task)
    logger.debug(1, "Executing task %s", task)

    localdata = _task_data(fn, task, d)
    tempdir = localdata.getVar('T')
@@ -600,7 +596,7 @@ def _exec_task(fn, task, d, quieterr):
            curnice = os.nice(0)
            nice = int(nice) - curnice
            newnice = os.nice(nice)
            logger.debug("Renice to %s " % newnice)
            logger.debug(1, "Renice to %s " % newnice)
        ionice = localdata.getVar("BB_TASK_IONICE_LEVEL")
        if ionice:
            try:
@@ -619,8 +615,7 @@ def _exec_task(fn, task, d, quieterr):
    logorder = os.path.join(tempdir, 'log.task_order')
    try:
        with open(logorder, 'a') as logorderfile:
            timestamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S.%f")
            logorderfile.write('{0} {1} ({2}): {3}\n'.format(timestamp, task, os.getpid(), logbase))
            logorderfile.write('{0} ({1}): {2}\n'.format(task, os.getpid(), logbase))
    except OSError:
        logger.exception("Opening log file '%s'", logorder)
        pass
@@ -687,55 +682,47 @@ def _exec_task(fn, task, d, quieterr):
    try:
        try:
            event.fire(TaskStarted(task, fn, logfn, flags, localdata), localdata)
        except (bb.BBHandledException, SystemExit):
            return 1

        try:
            for func in (prefuncs or '').split():
                exec_func(func, localdata)
            exec_func(task, localdata)
            for func in (postfuncs or '').split():
                exec_func(func, localdata)
        finally:
            # Need to flush and close the logs before sending events where the
            # UI may try to look at the logs.
            sys.stdout.flush()
            sys.stderr.flush()
    except bb.BBHandledException:
        event.fire(TaskFailed(task, fn, logfn, localdata, True), localdata)
        return 1
    except Exception as exc:
        if quieterr:
            event.fire(TaskFailedSilent(task, fn, logfn, localdata), localdata)
        else:
            errprinted = errchk.triggered
            logger.error(str(exc))
            event.fire(TaskFailed(task, fn, logfn, localdata, errprinted), localdata)
        return 1
    finally:
        sys.stdout.flush()
        sys.stderr.flush()

        bblogger.removeHandler(handler)
        bblogger.removeHandler(handler)

        # Restore the backup fds
        os.dup2(osi[0], osi[1])
        os.dup2(oso[0], oso[1])
        os.dup2(ose[0], ose[1])
        # Restore the backup fds
        os.dup2(osi[0], osi[1])
        os.dup2(oso[0], oso[1])
        os.dup2(ose[0], ose[1])

        # Close the backup fds
        os.close(osi[0])
        os.close(oso[0])
        os.close(ose[0])

        logfile.close()
        if os.path.exists(logfn) and os.path.getsize(logfn) == 0:
            logger.debug2("Zero size logfn %s, removing", logfn)
            bb.utils.remove(logfn)
            bb.utils.remove(loglink)
    except (Exception, SystemExit) as exc:
        handled = False
        if isinstance(exc, bb.BBHandledException):
            handled = True

        if quieterr:
            if not handled:
                logger.warning(repr(exc))
            event.fire(TaskFailedSilent(task, fn, logfn, localdata), localdata)
        else:
            errprinted = errchk.triggered
            # If the output is already on stdout, we've printed the information in the
            # logs once already so don't duplicate
            if verboseStdoutLogging or handled:
                errprinted = True
            if not handled:
                logger.error(repr(exc))
            event.fire(TaskFailed(task, fn, logfn, localdata, errprinted), localdata)
        return 1
        # Close the backup fds
        os.close(osi[0])
        os.close(oso[0])
        os.close(ose[0])

        logfile.close()
        if os.path.exists(logfn) and os.path.getsize(logfn) == 0:
            logger.debug(2, "Zero size logfn %s, removing", logfn)
            bb.utils.remove(logfn)
            bb.utils.remove(loglink)
    event.fire(TaskSucceeded(task, fn, logfn, localdata), localdata)

    if not localdata.getVarFlag(task, 'nostamp', False) and not localdata.getVarFlag(task, 'selfstamp', False):
@@ -837,7 +824,11 @@ def stamp_cleanmask_internal(taskname, d, file_name):

    return [cleanmask, cleanmask.replace(taskflagname, taskflagname + "_setscene")]

def clean_stamp(task, d, file_name = None):
def make_stamp(task, d, file_name = None):
    """
    Creates/updates a stamp for a given task
    (d can be a data dict or dataCache)
    """
    cleanmask = stamp_cleanmask_internal(task, d, file_name)
    for mask in cleanmask:
        for name in glob.glob(mask):
@@ -848,14 +839,6 @@ def clean_stamp(task, d, file_name = None):
            if name.endswith('.taint'):
                continue
            os.unlink(name)
    return

def make_stamp(task, d, file_name = None):
    """
    Creates/updates a stamp for a given task
    (d can be a data dict or dataCache)
    """
    clean_stamp(task, d, file_name)

    stamp = stamp_internal(task, d, file_name)
    # Remove the file and recreate to force timestamp
@@ -871,23 +854,6 @@ def make_stamp(task, d, file_name = None):
        file_name = d.getVar('BB_FILENAME')
        bb.parse.siggen.dump_sigtask(file_name, task, stampbase, True)

def find_stale_stamps(task, d, file_name=None):
    current = stamp_internal(task, d, file_name)
    current2 = stamp_internal(task + "_setscene", d, file_name)
    cleanmask = stamp_cleanmask_internal(task, d, file_name)
    found = []
    for mask in cleanmask:
        for name in glob.glob(mask):
            if "sigdata" in name or "sigbasedata" in name:
                continue
            if name.endswith('.taint'):
                continue
            if name == current or name == current2:
                continue
            logger.debug2("Stampfile %s does not match %s or %s" % (name, current, current2))
            found.append(name)
    return found

def del_stamp(task, d, file_name = None):
    """
    Removes a stamp for a given task
@@ -944,11 +910,6 @@ def add_tasks(tasklist, d):
        task_deps[name] = {}
        if name in flags:
            deptask = d.expand(flags[name])
            if name in ['noexec', 'fakeroot', 'nostamp']:
                if deptask != '1':
                    bb.warn("In a future version of BitBake, setting the '{}' flag to something other than '1' "
                            "will result in the flag not being set. See YP bug #13808.".format(name))

            task_deps[name][task] = deptask
    getTask('mcdepends')
    getTask('depends')
@@ -1047,8 +1008,6 @@ def tasksbetween(task_start, task_end, d):
    def follow_chain(task, endtask, chain=None):
        if not chain:
            chain = []
        if task in chain:
            bb.fatal("Circular task dependencies as %s depends on itself via the chain %s" % (task, " -> ".join(chain)))
        chain.append(task)
        for othertask in tasks:
            if othertask == task:
bitbake/lib/bb/cache.py

@@ -19,16 +19,14 @@
import os
import logging
import pickle
from collections import defaultdict
from collections.abc import Mapping
from collections import defaultdict, Mapping
import bb.utils
from bb import PrefixLoggerAdapter
import re
import shutil

logger = logging.getLogger("BitBake.Cache")

__cache_version__ = "154"
__cache_version__ = "153"

def getCacheFile(path, filename, mc, data_hash):
    mcspec = ''
@@ -55,12 +53,12 @@ class RecipeInfoCommon(object):

    @classmethod
    def pkgvar(cls, var, packages, metadata):
        return dict((pkg, cls.depvar("%s:%s" % (var, pkg), metadata))
        return dict((pkg, cls.depvar("%s_%s" % (var, pkg), metadata))
                    for pkg in packages)

    @classmethod
    def taskvar(cls, var, tasks, metadata):
        return dict((task, cls.getvar("%s:task-%s" % (var, task), metadata))
        return dict((task, cls.getvar("%s_task-%s" % (var, task), metadata))
                    for task in tasks)

    @classmethod
@@ -96,7 +94,6 @@ class CoreRecipeInfo(RecipeInfoCommon):
        if not self.packages:
            self.packages.append(self.pn)
        self.packages_dynamic = self.listvar('PACKAGES_DYNAMIC', metadata)
        self.rprovides_pkg = self.pkgvar('RPROVIDES', self.packages, metadata)

        self.skipreason = self.getvar('__SKIPPED', metadata)
        if self.skipreason:
@@ -123,12 +120,12 @@ class CoreRecipeInfo(RecipeInfoCommon):
        self.depends = self.depvar('DEPENDS', metadata)
        self.rdepends = self.depvar('RDEPENDS', metadata)
        self.rrecommends = self.depvar('RRECOMMENDS', metadata)
        self.rprovides_pkg = self.pkgvar('RPROVIDES', self.packages, metadata)
        self.rdepends_pkg = self.pkgvar('RDEPENDS', self.packages, metadata)
        self.rrecommends_pkg = self.pkgvar('RRECOMMENDS', self.packages, metadata)
        self.inherits = self.getvar('__inherit_cache', metadata, expand=False)
        self.fakerootenv = self.getvar('FAKEROOTENV', metadata)
        self.fakerootdirs = self.getvar('FAKEROOTDIRS', metadata)
        self.fakerootlogs = self.getvar('FAKEROOTLOGS', metadata)
        self.fakerootnoenv = self.getvar('FAKEROOTNOENV', metadata)
        self.extradepsfunc = self.getvar('calculate_extra_depends', metadata)

@@ -166,7 +163,6 @@ class CoreRecipeInfo(RecipeInfoCommon):
        cachedata.fakerootenv = {}
        cachedata.fakerootnoenv = {}
        cachedata.fakerootdirs = {}
        cachedata.fakerootlogs = {}
        cachedata.extradepsfunc = {}

    def add_cacheData(self, cachedata, fn):
@@ -219,7 +215,7 @@ class CoreRecipeInfo(RecipeInfoCommon):
        if not self.not_world:
            cachedata.possible_world.append(fn)
        #else:
        #    logger.debug2("EXCLUDE FROM WORLD: %s", fn)
        #    logger.debug(2, "EXCLUDE FROM WORLD: %s", fn)

        # create a collection of all targets for sanity checking
        # tasks, such as upstream versions, license, and tools for
@@ -235,7 +231,6 @@ class CoreRecipeInfo(RecipeInfoCommon):
        cachedata.fakerootenv[fn] = self.fakerootenv
        cachedata.fakerootnoenv[fn] = self.fakerootnoenv
        cachedata.fakerootdirs[fn] = self.fakerootdirs
        cachedata.fakerootlogs[fn] = self.fakerootlogs
        cachedata.extradepsfunc[fn] = self.extradepsfunc

def virtualfn2realfn(virtualfn):
@@ -243,7 +238,7 @@ def virtualfn2realfn(virtualfn):
    Convert a virtual file name to a real one + the associated subclass keyword
    """
    mc = ""
    if virtualfn.startswith('mc:') and virtualfn.count(':') >= 2:
    if virtualfn.startswith('mc:'):
        elems = virtualfn.split(':')
        mc = elems[1]
        virtualfn = ":".join(elems[2:])
@@ -273,7 +268,7 @@ def variant2virtual(realfn, variant):
    """
    if variant == "":
        return realfn
    if variant.startswith("mc:") and variant.count(':') >= 2:
    if variant.startswith("mc:"):
        elems = variant.split(":")
        if elems[2]:
            return "mc:" + elems[1] + ":virtual:" + ":".join(elems[2:]) + ":" + realfn
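The 'mc:' handling above splits multiconfig virtual filenames of the form mc:<config>:<virtual...>:<realfile>. A standalone restatement with a worked example (assuming the rest of the helper's body matches upstream; only the lines in the hunk above are confirmed by this diff):

def virtualfn2realfn(virtualfn):
    # "mc:<config>:..." carries the multiconfig name in the second field.
    mc = ""
    if virtualfn.startswith('mc:') and virtualfn.count(':') >= 2:
        elems = virtualfn.split(':')
        mc = elems[1]
        virtualfn = ":".join(elems[2:])

    # "virtual:<class>:<file>" carries the class extension (e.g. native).
    fn = virtualfn
    cls = ""
    if virtualfn.startswith('virtual:'):
        elems = virtualfn.split(':', 2)
        cls = elems[1]
        fn = elems[2]
    return (fn, cls, mc)

print(virtualfn2realfn("mc:musl:virtual:native:/meta/recipes/foo_1.0.bb"))
# -> ('/meta/recipes/foo_1.0.bb', 'native', 'musl')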
@@ -285,15 +280,36 @@ def parse_recipe(bb_data, bbfile, appends, mc=''):
    Parse a recipe
    """

    chdir_back = False

    bb_data.setVar("__BBMULTICONFIG", mc)

    # expand tmpdir to include this topdir
    bb_data.setVar('TMPDIR', bb_data.getVar('TMPDIR') or "")
    bbfile_loc = os.path.abspath(os.path.dirname(bbfile))
    oldpath = os.path.abspath(os.getcwd())
    bb.parse.cached_mtime_noerror(bbfile_loc)

    if appends:
        bb_data.setVar('__BBAPPEND', " ".join(appends))
    bb_data = bb.parse.handle(bbfile, bb_data)
    return bb_data
    # The ConfHandler first looks if there is a TOPDIR and if not
    # then it would call getcwd().
    # Previously, we chdir()ed to bbfile_loc, called the handler
    # and finally chdir()ed back, a couple of thousand times. We now
    # just fill in TOPDIR to point to bbfile_loc if there is no TOPDIR yet.
    if not bb_data.getVar('TOPDIR', False):
        chdir_back = True
        bb_data.setVar('TOPDIR', bbfile_loc)
    try:
        if appends:
            bb_data.setVar('__BBAPPEND', " ".join(appends))
        bb_data = bb.parse.handle(bbfile, bb_data)
        if chdir_back:
            os.chdir(oldpath)
        return bb_data
    except:
        if chdir_back:
            os.chdir(oldpath)
        raise



class NoCache(object):
@@ -307,7 +323,7 @@ class NoCache(object):
        Return a complete set of data for fn.
        To do this, we need to parse the file.
        """
        logger.debug("Parsing %s (full)" % virtualfn)
        logger.debug(1, "Parsing %s (full)" % virtualfn)
        (fn, virtual, mc) = virtualfn2realfn(virtualfn)
        bb_data = self.load_bbfile(virtualfn, appends, virtonly=True)
        return bb_data[virtual]
@@ -384,7 +400,7 @@ class Cache(NoCache):

        self.cachefile = self.getCacheFile("bb_cache.dat")

        self.logger.debug("Cache dir: %s", self.cachedir)
        self.logger.debug(1, "Cache dir: %s", self.cachedir)
        bb.utils.mkdirhier(self.cachedir)

        cache_ok = True
@@ -392,7 +408,7 @@ class Cache(NoCache):
            for cache_class in self.caches_array:
                cachefile = self.getCacheFile(cache_class.cachefile)
                cache_exists = os.path.exists(cachefile)
                self.logger.debug2("Checking if %s exists: %r", cachefile, cache_exists)
                self.logger.debug(2, "Checking if %s exists: %r", cachefile, cache_exists)
                cache_ok = cache_ok and cache_exists
                cache_class.init_cacheData(self)
        if cache_ok:
@@ -400,7 +416,7 @@ class Cache(NoCache):
        elif os.path.isfile(self.cachefile):
            self.logger.info("Out of date cache found, rebuilding...")
        else:
            self.logger.debug("Cache file %s not found, building..." % self.cachefile)
            self.logger.debug(1, "Cache file %s not found, building..." % self.cachefile)

        # We don't use the symlink, its just for debugging convinience
        if self.mc:
@@ -433,11 +449,13 @@ class Cache(NoCache):
        return cachesize

    def load_cachefile(self, progress):
        cachesize = self.cachesize()
        previous_progress = 0
        previous_percent = 0

        for cache_class in self.caches_array:
            cachefile = self.getCacheFile(cache_class.cachefile)
            self.logger.debug('Loading cache file: %s' % cachefile)
            self.logger.debug(1, 'Loading cache file: %s' % cachefile)
            with open(cachefile, "rb") as cachefile:
                pickled = pickle.Unpickler(cachefile)
                # Check cache version information
@@ -484,7 +502,7 @@ class Cache(NoCache):

    def parse(self, filename, appends):
        """Parse the specified filename, returning the recipe information"""
        self.logger.debug("Parsing %s", filename)
        self.logger.debug(1, "Parsing %s", filename)
        infos = []
        datastores = self.load_bbfile(filename, appends, mc=self.mc)
        depends = []
@@ -538,7 +556,7 @@ class Cache(NoCache):
        cached, infos = self.load(fn, appends)
        for virtualfn, info_array in infos:
            if info_array[0].skipped:
                self.logger.debug("Skipping %s: %s", virtualfn, info_array[0].skipreason)
                self.logger.debug(1, "Skipping %s: %s", virtualfn, info_array[0].skipreason)
                skipped += 1
            else:
                self.add_info(virtualfn, info_array, cacheData, not cached)
@@ -574,21 +592,21 @@ class Cache(NoCache):

        # File isn't in depends_cache
        if not fn in self.depends_cache:
            self.logger.debug2("%s is not cached", fn)
            self.logger.debug(2, "%s is not cached", fn)
            return False

        mtime = bb.parse.cached_mtime_noerror(fn)

        # Check file still exists
        if mtime == 0:
            self.logger.debug2("%s no longer exists", fn)
            self.logger.debug(2, "%s no longer exists", fn)
            self.remove(fn)
            return False

        info_array = self.depends_cache[fn]
        # Check the file's timestamp
        if mtime != info_array[0].timestamp:
            self.logger.debug2("%s changed", fn)
            self.logger.debug(2, "%s changed", fn)
            self.remove(fn)
            return False

@@ -599,13 +617,13 @@ class Cache(NoCache):
                fmtime = bb.parse.cached_mtime_noerror(f)
                # Check if file still exists
                if old_mtime != 0 and fmtime == 0:
                    self.logger.debug2("%s's dependency %s was removed",
                    self.logger.debug(2, "%s's dependency %s was removed",
                                       fn, f)
                    self.remove(fn)
                    return False

                if (fmtime != old_mtime):
                    self.logger.debug2("%s's dependency %s changed",
                    self.logger.debug(2, "%s's dependency %s changed",
                                       fn, f)
                    self.remove(fn)
                    return False
@@ -620,16 +638,16 @@ class Cache(NoCache):
            for f in flist:
                if not f:
                    continue
                f, exist = f.rsplit(":", 1)
                f, exist = f.split(":")
                if (exist == "True" and not os.path.exists(f)) or (exist == "False" and os.path.exists(f)):
                    self.logger.debug2("%s's file checksum list file %s changed",
                    self.logger.debug(2, "%s's file checksum list file %s changed",
                                       fn, f)
                    self.remove(fn)
                    return False

            if tuple(appends) != tuple(info_array[0].appends):
                self.logger.debug2("appends for %s changed", fn)
                self.logger.debug2("%s to %s" % (str(appends), str(info_array[0].appends)))
                self.logger.debug(2, "appends for %s changed", fn)
                self.logger.debug(2, "%s to %s" % (str(appends), str(info_array[0].appends)))
                self.remove(fn)
                return False
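The rsplit(":", 1) change above matters when a tracked path itself contains a colon; splitting on every colon would mangle the entry. A two-line illustration:

# A checksum list entry is "<path>:<True|False>"; paths may contain
# colons, so only the last colon separates the existence flag.
entry = "/srv/odd:name/file.txt:True"
f, exist = entry.rsplit(":", 1)
print(f, exist)                 # /srv/odd:name/file.txt True
# The older form would fail here:
# f, exist = entry.split(":")   # ValueError: too many values to unpack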
@@ -638,10 +656,10 @@ class Cache(NoCache):
            virtualfn = variant2virtual(fn, cls)
            self.clean.add(virtualfn)
            if virtualfn not in self.depends_cache:
                self.logger.debug2("%s is not cached", virtualfn)
                self.logger.debug(2, "%s is not cached", virtualfn)
                invalid = True
            elif len(self.depends_cache[virtualfn]) != len(self.caches_array):
                self.logger.debug2("Extra caches missing for %s?" % virtualfn)
                self.logger.debug(2, "Extra caches missing for %s?" % virtualfn)
                invalid = True

        # If any one of the variants is not present, mark as invalid for all
@@ -649,10 +667,10 @@ class Cache(NoCache):
            for cls in info_array[0].variants:
                virtualfn = variant2virtual(fn, cls)
                if virtualfn in self.clean:
                    self.logger.debug2("Removing %s from cache", virtualfn)
                    self.logger.debug(2, "Removing %s from cache", virtualfn)
                    self.clean.remove(virtualfn)
            if fn in self.clean:
                self.logger.debug2("Marking %s as not clean", fn)
                self.logger.debug(2, "Marking %s as not clean", fn)
                self.clean.remove(fn)
            return False

@@ -665,10 +683,10 @@ class Cache(NoCache):
        Called from the parser in error cases
        """
        if fn in self.depends_cache:
            self.logger.debug("Removing %s from cache", fn)
            self.logger.debug(1, "Removing %s from cache", fn)
            del self.depends_cache[fn]
        if fn in self.clean:
            self.logger.debug("Marking %s as unclean", fn)
            self.logger.debug(1, "Marking %s as unclean", fn)
            self.clean.remove(fn)

    def sync(self):
@@ -681,13 +699,13 @@ class Cache(NoCache):
            return

        if self.cacheclean:
            self.logger.debug2("Cache is clean, not saving.")
            self.logger.debug(2, "Cache is clean, not saving.")
            return

        for cache_class in self.caches_array:
            cache_class_name = cache_class.__name__
            cachefile = self.getCacheFile(cache_class.cachefile)
            self.logger.debug2("Writing %s", cachefile)
            self.logger.debug(2, "Writing %s", cachefile)
            with open(cachefile, "wb") as f:
                p = pickle.Pickler(f, pickle.HIGHEST_PROTOCOL)
                p.dump(__cache_version__)
@@ -798,6 +816,10 @@ class MulticonfigCache(Mapping):
        for k in self.__caches:
            yield k

    def keys(self):
        return self.__caches[key]


def init(cooker):
    """
    The Objective: Cache the minimum amount of data possible yet get to the
@@ -863,7 +885,7 @@ class MultiProcessCache(object):
        bb.utils.mkdirhier(cachedir)
        self.cachefile = os.path.join(cachedir,
                                      cache_file_name or self.__class__.cache_file_name)
        logger.debug("Using cache in '%s'", self.cachefile)
        logger.debug(1, "Using cache in '%s'", self.cachefile)

        glf = bb.utils.lockfile(self.cachefile + ".lock")

@@ -969,7 +991,7 @@ class SimpleCache(object):
        bb.utils.mkdirhier(cachedir)
        self.cachefile = os.path.join(cachedir,
                                      cache_file_name or self.__class__.cache_file_name)
        logger.debug("Using cache in '%s'", self.cachefile)
        logger.debug(1, "Using cache in '%s'", self.cachefile)

        glf = bb.utils.lockfile(self.cachefile + ".lock")

@@ -999,11 +1021,3 @@ class SimpleCache(object):
        p.dump([data, self.cacheversion])

        bb.utils.unlockfile(glf)

    def copyfile(self, target):
        if not self.cachefile:
            return

        glf = bb.utils.lockfile(self.cachefile + ".lock")
        shutil.copy(self.cachefile, target)
        bb.utils.unlockfile(glf)
bitbake/lib/bb/checksum.py

@@ -11,13 +11,10 @@ import os
import stat
import bb.utils
import logging
import re
from bb.cache import MultiProcessCache

logger = logging.getLogger("BitBake.Cache")

filelist_regex = re.compile(r'(?:(?<=:True)|(?<=:False))\s+')

# mtime cache (non-persistent)
# based upon the assumption that files do not change during bitbake run
class FileMtimeCache(object):
@@ -53,7 +50,6 @@ class FileChecksumCache(MultiProcessCache):
        MultiProcessCache.__init__(self)

    def get_checksum(self, f):
        f = os.path.normpath(f)
        entry = self.cachedata[0].get(f)
        cmtime = self.mtime_cache.cached_mtime(f)
        if entry:
@@ -88,36 +84,22 @@ class FileChecksumCache(MultiProcessCache):
                        return None
                    return checksum

        #
        # Changing the format of file-checksums is problematic as both OE and Bitbake have
        # knowledge of them. We need to encode a new piece of data, the portion of the path
        # we care about from a checksum perspective. This means that files that change subdirectory
        # are tracked by the task hashes. To do this, we do something horrible and put a "/./" into
        # the path. The filesystem handles it but it gives us a marker to know which subsection
        # of the path to cache.
        #
        def checksum_dir(pth):
            # Handle directories recursively
            if pth == "/":
                bb.fatal("Refusing to checksum /")
            pth = pth.rstrip("/")
            dirchecksums = []
            for root, dirs, files in os.walk(pth, topdown=True):
                [dirs.remove(d) for d in list(dirs) if d in localdirsexclude]
                for name in files:
                    fullpth = os.path.join(root, name).replace(pth, os.path.join(pth, "."))
                    fullpth = os.path.join(root, name)
                    checksum = checksum_file(fullpth)
                    if checksum:
                        dirchecksums.append((fullpth, checksum))
            return dirchecksums

        checksums = []
        for pth in filelist_regex.split(filelist):
            if not pth:
                continue
            pth = pth.strip()
            if not pth:
                continue
        for pth in filelist.split():
            exist = pth.split(":")[1]
            if exist == "False":
                continue
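The "/./" marker described in the comment block above splits a path into an untracked prefix and the suffix whose layout feeds the task hash. A small illustration (the paths are hypothetical):

import os

pth = "/srv/downloads"                     # hypothetical tracked root
full = os.path.join(pth, "patches/fix.patch")
# Everything after "/./" is the subsection that the checksum cares about.
marked = full.replace(pth, os.path.join(pth, "."))
print(marked)                              # /srv/downloads/./patches/fix.patch
print(marked.split("/./")[1])              # patches/fix.patch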
bitbake/lib/bb/codeparser.py

@@ -1,6 +1,4 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#

@@ -197,10 +195,6 @@ class BufferedLogger(Logger):
            self.target.handle(record)
        self.buffer = []

class DummyLogger():
    def flush(self):
        return

class PythonParser():
    getvars = (".getVar", ".appendVar", ".prependVar", "oe.utils.conditional")
    getvarflags = (".getVarFlag", ".appendVarFlag", ".prependVarFlag")
@@ -218,9 +212,9 @@ class PythonParser():
            funcstr = codegen.to_source(func)
            argstr = codegen.to_source(arg)
        except TypeError:
            self.log.debug2('Failed to convert function and argument to source form')
            self.log.debug(2, 'Failed to convert function and argument to source form')
        else:
            self.log.debug(self.unhandled_message % (funcstr, argstr))
            self.log.debug(1, self.unhandled_message % (funcstr, argstr))

    def visit_Call(self, node):
        name = self.called_node_name(node.func)
@@ -282,9 +276,7 @@ class PythonParser():
        self.contains = {}
        self.execs = set()
        self.references = set()
        self._log = log
        # Defer init as expensive
        self.log = DummyLogger()
        self.log = BufferedLogger('BitBake.Data.PythonParser', logging.DEBUG, log)

        self.unhandled_message = "in call of %s, argument '%s' is not a string literal"
        self.unhandled_message = "while parsing %s, %s" % (name, self.unhandled_message)
@@ -311,9 +303,6 @@ class PythonParser():
                self.contains[i] = set(codeparsercache.pythoncacheextras[h].contains[i])
            return

        # Need to parse so take the hit on the real log buffer
        self.log = BufferedLogger('BitBake.Data.PythonParser', logging.DEBUG, self._log)

        # We can't add to the linenumbers for compile, we can pad to the correct number of blank lines though
        node = "\n" * int(lineno) + node
        code = compile(check_indent(str(node)), filename, "exec",
@@ -332,11 +321,7 @@ class ShellParser():
        self.funcdefs = set()
        self.allexecs = set()
        self.execs = set()
        self._name = name
        self._log = log
        # Defer init as expensive
        self.log = DummyLogger()

        self.log = BufferedLogger('BitBake.Data.%s' % name, logging.DEBUG, log)
        self.unhandled_template = "unable to handle non-literal command '%s'"
        self.unhandled_template = "while parsing %s, %s" % (name, self.unhandled_template)

@@ -355,9 +340,6 @@ class ShellParser():
                self.execs = set(codeparsercache.shellcacheextras[h].execs)
                return self.execs

        # Need to parse so take the hit on the real log buffer
        self.log = BufferedLogger('BitBake.Data.%s' % self._name, logging.DEBUG, self._log)

        self._parse_shell(value)
        self.execs = set(cmd for cmd in self.allexecs if cmd not in self.funcdefs)

@@ -468,7 +450,7 @@ class ShellParser():

        cmd = word[1]
        if cmd.startswith("$"):
            self.log.debug(self.unhandled_template % cmd)
            self.log.debug(1, self.unhandled_template % cmd)
        elif cmd == "eval":
            command = " ".join(word for _, word in words[1:])
            self._parse_shell(command)
bitbake/lib/bb/command.py

@@ -20,7 +20,6 @@ Commands are queued in a CommandQueue

from collections import OrderedDict, defaultdict

import io
import bb.event
import bb.cooker
import bb.remotedata
@@ -65,17 +64,9 @@ class Command:

        # Ensure cooker is ready for commands
        if command != "updateConfig" and command != "setFeatures":
            try:
                self.cooker.init_configdata()
                if not self.remotedatastores:
                    self.remotedatastores = bb.remotedata.RemoteDatastores(self.cooker)
            except (Exception, SystemExit) as exc:
                import traceback
                if isinstance(exc, bb.BBHandledException):
                    # We need to start returning real exceptions here. Until we do, we can't
                    # tell if an exception is an instance of bb.BBHandledException
                    return None, "bb.BBHandledException()\n" + traceback.format_exc()
                return None, traceback.format_exc()
            self.cooker.init_configdata()
            if not self.remotedatastores:
                self.remotedatastores = bb.remotedata.RemoteDatastores(self.cooker)

        if hasattr(CommandsSync, command):
            # Can run synchronous commands straight away
@@ -509,17 +500,6 @@ class CommandsSync:
        d = command.remotedatastores[dsindex].varhistory
        return getattr(d, method)(*args, **kwargs)

    def dataStoreConnectorVarHistCmdEmit(self, command, params):
        dsindex = params[0]
        var = params[1]
        oval = params[2]
        val = params[3]
        d = command.remotedatastores[params[4]]

        o = io.StringIO()
        command.remotedatastores[dsindex].varhistory.emit(var, oval, val, o, d)
        return o.getvalue()

    def dataStoreConnectorIncHistCmd(self, command, params):
        dsindex = params[0]
        method = params[1]
@@ -667,16 +647,6 @@ class CommandsAsync:
        command.finishAsyncCommand()
    findFilesMatchingInDir.needcache = False

    def testCookerCommandEvent(self, command, params):
        """
        Dummy command used by OEQA selftest to test tinfoil without IO
        """
        pattern = params[0]

        command.cooker.testCookerCommandEvent(pattern)
        command.finishAsyncCommand()
    testCookerCommandEvent.needcache = False

    def findConfigFilePath(self, command, params):
        """
        Find the path of the requested configuration file
bitbake/lib/bb/compress/_pipecompress.py (deleted)

@@ -1,196 +0,0 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
# Helper library to implement streaming compression and decompression using an
# external process
#
# This library should be used directly by end users; a wrapper library for the
# specific compression tool should be created

import builtins
import io
import os
import subprocess


def open_wrap(
    cls, filename, mode="rb", *, encoding=None, errors=None, newline=None, **kwargs
):
    """
    Open a compressed file in binary or text mode.

    Users should not call this directly. A specific compression library can use
    this helper to provide it's own "open" command

    The filename argument can be an actual filename (a str or bytes object), or
    an existing file object to read from or write to.

    The mode argument can be "r", "rb", "w", "wb", "x", "xb", "a" or "ab" for
    binary mode, or "rt", "wt", "xt" or "at" for text mode. The default mode is
    "rb".

    For binary mode, this function is equivalent to the cls constructor:
    cls(filename, mode). In this case, the encoding, errors and newline
    arguments must not be provided.

    For text mode, a cls object is created, and wrapped in an
    io.TextIOWrapper instance with the specified encoding, error handling
    behavior, and line ending(s).
    """
    if "t" in mode:
        if "b" in mode:
            raise ValueError("Invalid mode: %r" % (mode,))
    else:
        if encoding is not None:
            raise ValueError("Argument 'encoding' not supported in binary mode")
        if errors is not None:
            raise ValueError("Argument 'errors' not supported in binary mode")
        if newline is not None:
            raise ValueError("Argument 'newline' not supported in binary mode")

    file_mode = mode.replace("t", "")
    if isinstance(filename, (str, bytes, os.PathLike, int)):
        binary_file = cls(filename, file_mode, **kwargs)
    elif hasattr(filename, "read") or hasattr(filename, "write"):
        binary_file = cls(None, file_mode, fileobj=filename, **kwargs)
    else:
        raise TypeError("filename must be a str or bytes object, or a file")

    if "t" in mode:
        return io.TextIOWrapper(
            binary_file, encoding, errors, newline, write_through=True
        )
    else:
        return binary_file


class CompressionError(OSError):
    pass


class PipeFile(io.RawIOBase):
    """
    Class that implements generically piping to/from a compression program

    Derived classes should add the function get_compress() and get_decompress()
    that return the required commands. Input will be piped into stdin and the
    (de)compressed output should be written to stdout, e.g.:

    class FooFile(PipeCompressionFile):
        def get_decompress(self):
            return ["fooc", "--decompress", "--stdout"]

        def get_compress(self):
            return ["fooc", "--compress", "--stdout"]

    """

    READ = 0
    WRITE = 1

    def __init__(self, filename=None, mode="rb", *, stderr=None, fileobj=None):
        if "t" in mode or "U" in mode:
            raise ValueError("Invalid mode: {!r}".format(mode))

        if not "b" in mode:
            mode += "b"

        if mode.startswith("r"):
            self.mode = self.READ
        elif mode.startswith("w"):
            self.mode = self.WRITE
        else:
            raise ValueError("Invalid mode %r" % mode)

        if fileobj is not None:
            self.fileobj = fileobj
        else:
            self.fileobj = builtins.open(filename, mode or "rb")

        if self.mode == self.READ:
            self.p = subprocess.Popen(
                self.get_decompress(),
                stdin=self.fileobj,
                stdout=subprocess.PIPE,
                stderr=stderr,
                close_fds=True,
            )
            self.pipe = self.p.stdout
        else:
            self.p = subprocess.Popen(
                self.get_compress(),
                stdin=subprocess.PIPE,
                stdout=self.fileobj,
                stderr=stderr,
                close_fds=True,
            )
            self.pipe = self.p.stdin

        self.__closed = False

    def _check_process(self):
        if self.p is None:
            return

        returncode = self.p.wait()
        if returncode:
            raise CompressionError("Process died with %d" % returncode)
        self.p = None

    def close(self):
        if self.closed:
            return

        self.pipe.close()
        if self.p is not None:
            self._check_process()
        self.fileobj.close()

        self.__closed = True

    @property
    def closed(self):
        return self.__closed

    def fileno(self):
        return self.pipe.fileno()

    def flush(self):
        self.pipe.flush()

    def isatty(self):
        return self.pipe.isatty()

    def readable(self):
        return self.mode == self.READ

    def writable(self):
        return self.mode == self.WRITE

    def readinto(self, b):
        if self.mode != self.READ:
            import errno

            raise OSError(
                errno.EBADF, "read() on write-only %s object" % self.__class__.__name__
            )
        size = self.pipe.readinto(b)
        if size == 0:
            self._check_process()
        return size

    def write(self, data):
        if self.mode != self.WRITE:
            import errno

            raise OSError(
                errno.EBADF, "write() on read-only %s object" % self.__class__.__name__
            )
        data = self.pipe.write(data)

        if not data:
            self._check_process()

        return data
bitbake/lib/bb/compress/lz4.py (deleted)

@@ -1,19 +0,0 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#

import bb.compress._pipecompress


def open(*args, **kwargs):
    return bb.compress._pipecompress.open_wrap(LZ4File, *args, **kwargs)


class LZ4File(bb.compress._pipecompress.PipeFile):
    def get_compress(self):
        return ["lz4c", "-z", "-c"]

    def get_decompress(self):
        return ["lz4c", "-d", "-c"]
bitbake/lib/bb/compress/zstd.py (deleted)

@@ -1,30 +0,0 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#

import bb.compress._pipecompress
import shutil


def open(*args, **kwargs):
    return bb.compress._pipecompress.open_wrap(ZstdFile, *args, **kwargs)


class ZstdFile(bb.compress._pipecompress.PipeFile):
    def __init__(self, *args, num_threads=1, compresslevel=3, **kwargs):
        self.num_threads = num_threads
        self.compresslevel = compresslevel
        super().__init__(*args, **kwargs)

    def _get_zstd(self):
        if self.num_threads == 1 or not shutil.which("pzstd"):
            return ["zstd"]
        return ["pzstd", "-p", "%d" % self.num_threads]

    def get_compress(self):
        return self._get_zstd() + ["-c", "-%d" % self.compresslevel]

    def get_decompress(self):
        return self._get_zstd() + ["-d", "-c"]
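With the wrappers above (present on the newer side of this compare), a compressed file opens like a regular one. A round-trip sketch, assuming bitbake/lib is on sys.path and a zstd binary is installed; the /tmp path is arbitrary:

import bb.compress.zstd

# "wt"/"rt" route through io.TextIOWrapper via open_wrap() above.
with bb.compress.zstd.open("/tmp/demo.zst", "wt", encoding="utf-8") as f:
    f.write("hello from the pipe compressor\n")

with bb.compress.zstd.open("/tmp/demo.zst", "rt", encoding="utf-8") as f:
    print(f.read())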
@@ -13,6 +13,7 @@ import sys, os, glob, os.path, re, time
import itertools
import logging
import multiprocessing
import sre_constants
import threading
from io import StringIO, UnsupportedOperation
from contextlib import closing
@@ -72,9 +73,7 @@ class SkippedPackage:
            self.pn = info.pn
            self.skipreason = info.skipreason
            self.provides = info.provides
            self.rprovides = info.packages + info.rprovides
            for package in info.packages:
                self.rprovides += info.rprovides_pkg[package]
            self.rprovides = info.rprovides
        elif reason:
            self.skipreason = reason
@@ -158,9 +157,6 @@ class BBCooker:
        for f in featureSet:
            self.featureset.setFeature(f)

        self.orig_syspath = sys.path.copy()
        self.orig_sysmodules = [*sys.modules]

        self.configuration = bb.cookerdata.CookerConfiguration()

        self.idleCallBackRegister = idleCallBackRegister
@@ -168,15 +164,27 @@ class BBCooker:
        bb.debug(1, "BBCooker starting %s" % time.time())
        sys.stdout.flush()

        self.configwatcher = None
        self.confignotifier = None
        self.configwatcher = pyinotify.WatchManager()
        bb.debug(1, "BBCooker pyinotify1 %s" % time.time())
        sys.stdout.flush()

        self.configwatcher.bbseen = set()
        self.configwatcher.bbwatchedfiles = set()
        self.confignotifier = pyinotify.Notifier(self.configwatcher, self.config_notifications)
        bb.debug(1, "BBCooker pyinotify2 %s" % time.time())
        sys.stdout.flush()
        self.watchmask = pyinotify.IN_CLOSE_WRITE | pyinotify.IN_CREATE | pyinotify.IN_DELETE | \
                         pyinotify.IN_DELETE_SELF | pyinotify.IN_MODIFY | pyinotify.IN_MOVE_SELF | \
                         pyinotify.IN_MOVED_FROM | pyinotify.IN_MOVED_TO
        self.watcher = pyinotify.WatchManager()
        bb.debug(1, "BBCooker pyinotify3 %s" % time.time())
        sys.stdout.flush()
        self.watcher.bbseen = set()
        self.watcher.bbwatchedfiles = set()
        self.notifier = pyinotify.Notifier(self.watcher, self.notifications)

        self.watcher = None
        self.notifier = None
        bb.debug(1, "BBCooker pyinotify complete %s" % time.time())
        sys.stdout.flush()

        # If being called by something like tinfoil, we need to clean cached data
        # which may now be invalid
@@ -189,7 +197,7 @@ class BBCooker:

        self.inotify_modified_files = []

        def _process_inotify_updates(server, cooker, halt):
        def _process_inotify_updates(server, cooker, abort):
            cooker.process_inotify_updates()
            return 1.0
@@ -227,29 +235,9 @@ class BBCooker:
        sys.stdout.flush()
        self.handlePRServ()

    def setupConfigWatcher(self):
        if self.configwatcher:
            self.configwatcher.close()
            self.confignotifier = None
            self.configwatcher = None
        self.configwatcher = pyinotify.WatchManager()
        self.configwatcher.bbseen = set()
        self.configwatcher.bbwatchedfiles = set()
        self.confignotifier = pyinotify.Notifier(self.configwatcher, self.config_notifications)

    def setupParserWatcher(self):
        if self.watcher:
            self.watcher.close()
            self.notifier = None
            self.watcher = None
        self.watcher = pyinotify.WatchManager()
        self.watcher.bbseen = set()
        self.watcher.bbwatchedfiles = set()
        self.notifier = pyinotify.Notifier(self.watcher, self.notifications)

    def process_inotify_updates(self):
        for n in [self.confignotifier, self.notifier]:
            if n and n.check_events(timeout=0):
            if n.check_events(timeout=0):
                # read notified events and enqueue them
                n.read_events()
                n.process_events()
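The cooker polls its two pyinotify notifiers rather than blocking on them. A
standalone sketch of the same check/read/process pattern, assuming the
pyinotify package is installed (the watch path and handler class are
illustrative, not BitBake's):

    import pyinotify

    class Handler(pyinotify.ProcessEvent):
        def process_default(self, event):
            # Called once per queued event when process_events() runs.
            print("changed:", event.pathname)

    wm = pyinotify.WatchManager()
    notifier = pyinotify.Notifier(wm, Handler())
    wm.add_watch("/tmp", pyinotify.IN_CLOSE_WRITE | pyinotify.IN_CREATE)

    # Non-blocking poll, as in process_inotify_updates() above.
    if notifier.check_events(timeout=0):
        notifier.read_events()
        notifier.process_events()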
@@ -263,12 +251,6 @@ class BBCooker:
            return
        if not event.pathname in self.configwatcher.bbwatchedfiles:
            return
        if "IN_ISDIR" in event.maskname:
            if "IN_CREATE" in event.maskname or "IN_DELETE" in event.maskname:
                if event.pathname in self.configwatcher.bbseen:
                    self.configwatcher.bbseen.remove(event.pathname)
                # Could remove all entries starting with the directory but for now...
                bb.parse.clear_cache()
        if not event.pathname in self.inotify_modified_files:
            self.inotify_modified_files.append(event.pathname)
        self.baseconfig_valid = False
@@ -282,12 +264,6 @@ class BBCooker:
        if event.pathname.endswith("bitbake-cookerdaemon.log") \
                or event.pathname.endswith("bitbake.lock"):
            return
        if "IN_ISDIR" in event.maskname:
            if "IN_CREATE" in event.maskname or "IN_DELETE" in event.maskname:
                if event.pathname in self.watcher.bbseen:
                    self.watcher.bbseen.remove(event.pathname)
                # Could remove all entries starting with the directory but for now...
                bb.parse.clear_cache()
        if not event.pathname in self.inotify_modified_files:
            self.inotify_modified_files.append(event.pathname)
        self.parsecache_valid = False
@@ -352,13 +328,6 @@ class BBCooker:
        self.state = state.initial
        self.caches_array = []

        sys.path = self.orig_syspath.copy()
        for mod in [*sys.modules]:
            if mod not in self.orig_sysmodules:
                del sys.modules[mod]

        self.setupConfigWatcher()

        # Need to preserve BB_CONSOLELOG over resets
        consolelog = None
        if hasattr(self, "data"):
@@ -402,7 +371,6 @@ class BBCooker:
        for mc in self.databuilder.mcdata.values():
            mc.renameVar("__depends", "__base_depends")
            self.add_filewatch(mc.getVar("__base_depends", False), self.configwatcher)
            mc.setVar("__bbclasstype", "recipe")

        self.baseconfig_valid = True
        self.parsecache_valid = False
@@ -412,30 +380,16 @@ class BBCooker:
        try:
            self.prhost = prserv.serv.auto_start(self.data)
        except prserv.serv.PRServiceConfigError as e:
            bb.fatal("Unable to start PR Server, exiting, check the bitbake-cookerdaemon.log")
            bb.fatal("Unable to start PR Server, exitting")

        if self.data.getVar("BB_HASHSERVE") == "auto":
            # Create a new hash server bound to a unix domain socket
            if not self.hashserv:
                dbfile = (self.data.getVar("PERSISTENT_DIR") or self.data.getVar("CACHE")) + "/hashserv.db"
                upstream = self.data.getVar("BB_HASHSERVE_UPSTREAM") or None
                if upstream:
                    import socket
                    try:
                        sock = socket.create_connection(upstream.split(":"), 5)
                        sock.close()
                    except socket.error as e:
                        bb.warn("BB_HASHSERVE_UPSTREAM is not valid, unable to connect hash equivalence server at '%s': %s"
                                % (upstream, repr(e)))

                self.hashservaddr = "unix://%s/hashserve.sock" % self.data.getVar("TOPDIR")
                self.hashserv = hashserv.create_server(
                    self.hashservaddr,
                    dbfile,
                    sync=False,
                    upstream=upstream,
                )
                self.hashserv.serve_as_process()
                self.hashserv = hashserv.create_server(self.hashservaddr, dbfile, sync=False)
                self.hashserv.process = multiprocessing.Process(target=self.hashserv.serve_forever)
                self.hashserv.process.start()
            self.data.setVar("BB_HASHSERVE", self.hashservaddr)
            self.databuilder.origdata.setVar("BB_HASHSERVE", self.hashservaddr)
            self.databuilder.data.setVar("BB_HASHSERVE", self.hashservaddr)
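The upstream check in the newer side of this hunk is just a cheap TCP probe
before the address is handed to the hash server. The same pattern in
isolation, using only the standard library (the host/port value is
illustrative):

    import socket

    def reachable(hostport, timeout=5):
        # hostport is "host:port", as in BB_HASHSERVE_UPSTREAM.
        host, port = hostport.split(":")
        try:
            sock = socket.create_connection((host, int(port)), timeout)
            sock.close()
            return True
        except socket.error:
            return False

    print(reachable("localhost:8686"))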
@@ -455,8 +409,6 @@ class BBCooker:
        self.data.disableTracking()

    def parseConfiguration(self):
        self.updateCacheSync()

        # Change nice level if we're asked to
        nice = self.data.getVar("BB_NICE_LEVEL")
        if nice:
@@ -487,7 +439,7 @@ class BBCooker:
                    continue
            except AttributeError:
                pass
            logger.debug("Marking as dirty due to '%s' option change to '%s'" % (o, options[o]))
            logger.debug(1, "Marking as dirty due to '%s' option change to '%s'" % (o, options[o]))
            print("Marking as dirty due to '%s' option change to '%s'" % (o, options[o]))
            clean = False
            if hasattr(self.configuration, o):
@@ -514,17 +466,17 @@ class BBCooker:

        for k in bb.utils.approved_variables():
            if k in environment and k not in self.configuration.env:
                logger.debug("Updating new environment variable %s to %s" % (k, environment[k]))
                logger.debug(1, "Updating new environment variable %s to %s" % (k, environment[k]))
                self.configuration.env[k] = environment[k]
                clean = False
            if k in self.configuration.env and k not in environment:
                logger.debug("Updating environment variable %s (deleted)" % (k))
                logger.debug(1, "Updating environment variable %s (deleted)" % (k))
                del self.configuration.env[k]
                clean = False
            if k not in self.configuration.env and k not in environment:
                continue
            if environment[k] != self.configuration.env[k]:
                logger.debug("Updating environment variable %s from %s to %s" % (k, self.configuration.env[k], environment[k]))
                logger.debug(1, "Updating environment variable %s from %s to %s" % (k, self.configuration.env[k], environment[k]))
                self.configuration.env[k] = environment[k]
                clean = False

@@ -532,10 +484,10 @@ class BBCooker:
            self.configuration.env = environment

        if not clean:
            logger.debug("Base environment change, triggering reparse")
            logger.debug(1, "Base environment change, triggering reparse")
            self.reset()

    def runCommands(self, server, data, halt):
    def runCommands(self, server, data, abort):
        """
        Run any queued asynchronous command
        This is done by the idle handler so it runs in true context rather than
@@ -546,30 +498,22 @@ class BBCooker:

    def showVersions(self):

        (latest_versions, preferred_versions, required) = self.findProviders()
        (latest_versions, preferred_versions) = self.findProviders()

        logger.plain("%-35s %25s %25s %25s", "Recipe Name", "Latest Version", "Preferred Version", "Required Version")
        logger.plain("%-35s %25s %25s %25s\n", "===========", "==============", "=================", "================")
        logger.plain("%-35s %25s %25s", "Recipe Name", "Latest Version", "Preferred Version")
        logger.plain("%-35s %25s %25s\n", "===========", "==============", "=================")

        for p in sorted(self.recipecaches[''].pkg_pn):
            preferred = preferred_versions[p]
            pref = preferred_versions[p]
            latest = latest_versions[p]
            requiredstr = ""
            preferredstr = ""
            if required[p]:
                if preferred[0] is not None:
                    requiredstr = preferred[0][0] + ":" + preferred[0][1] + '-' + preferred[0][2]
                else:
                    bb.fatal("REQUIRED_VERSION of package %s not available" % p)
            else:
                preferredstr = preferred[0][0] + ":" + preferred[0][1] + '-' + preferred[0][2]

            prefstr = pref[0][0] + ":" + pref[0][1] + '-' + pref[0][2]
            lateststr = latest[0][0] + ":" + latest[0][1] + "-" + latest[0][2]

            if preferred == latest:
                preferredstr = ""
            if pref == latest:
                prefstr = ""

            logger.plain("%-35s %25s %25s %25s", p, lateststr, preferredstr, requiredstr)
            logger.plain("%-35s %25s %25s", p, lateststr, prefstr)

    def showEnvironment(self, buildfile=None, pkgs_to_build=None):
        """
@@ -585,8 +529,6 @@ class BBCooker:
        if not orig_tracking:
            self.enableDataTracking()
            self.reset()
            # reset() resets to the UI requested value so we have to redo this
            self.enableDataTracking()

        def mc_base(p):
            if p.startswith('mc:'):
@@ -610,7 +552,7 @@ class BBCooker:
            if pkgs_to_build[0] in set(ignore.split()):
                bb.fatal("%s is in ASSUME_PROVIDED" % pkgs_to_build[0])

            taskdata, runlist = self.buildTaskData(pkgs_to_build, None, self.configuration.halt, allowincomplete=True)
            taskdata, runlist = self.buildTaskData(pkgs_to_build, None, self.configuration.abort, allowincomplete=True)

            mc = runlist[0][0]
            fn = runlist[0][3]
@@ -639,7 +581,7 @@ class BBCooker:
            data.emit_env(env, envdata, True)
            logger.plain(env.getvalue())

        # emit the metadata which isn't valid shell
        # emit the metadata which isnt valid shell
        for e in sorted(envdata.keys()):
            if envdata.getVarFlag(e, 'func', False) and envdata.getVarFlag(e, 'python', False):
                logger.plain("\npython %s () {\n%s}\n", e, envdata.getVar(e, False))
@@ -648,7 +590,7 @@ class BBCooker:
            self.disableDataTracking()
            self.reset()

    def buildTaskData(self, pkgs_to_build, task, halt, allowincomplete=False):
    def buildTaskData(self, pkgs_to_build, task, abort, allowincomplete=False):
        """
        Prepare a runqueue and taskdata object for iteration over pkgs_to_build
        """
@@ -670,7 +612,7 @@ class BBCooker:
        # Replace string such as "mc:*:bash"
        # into "mc:A:bash mc:B:bash bash"
        for k in targetlist:
            if k.startswith("mc:") and k.count(':') >= 2:
            if k.startswith("mc:"):
                if wildcard:
                    bb.fatal('multiconfig conflict')
                if k.split(":")[1] == "*":
@@ -695,7 +637,7 @@ class BBCooker:
        localdata = {}

        for mc in self.multiconfigs:
            taskdata[mc] = bb.taskdata.TaskData(halt, skiplist=self.skiplist, allowincomplete=allowincomplete)
            taskdata[mc] = bb.taskdata.TaskData(abort, skiplist=self.skiplist, allowincomplete=allowincomplete)
            localdata[mc] = data.createCopy(self.databuilder.mcdata[mc])
            bb.data.expandKeys(localdata[mc])

@@ -704,7 +646,7 @@ class BBCooker:
        for k in fulltargetlist:
            origk = k
            mc = ""
            if k.startswith("mc:") and k.count(':') >= 2:
            if k.startswith("mc:"):
                mc = k.split(":")[1]
                k = ":".join(k.split(":")[2:])
            ktask = task
@@ -744,18 +686,19 @@ class BBCooker:
                taskdata[mc].add_unresolved(localdata[mc], self.recipecaches[mc])
                mcdeps |= set(taskdata[mc].get_mcdepends())
            new = False
            for k in mcdeps:
                if k in seen:
                    continue
                l = k.split(':')
                depmc = l[2]
                if depmc not in self.multiconfigs:
                    bb.fatal("Multiconfig dependency %s depends on nonexistent multiconfig configuration named configuration %s" % (k,depmc))
                else:
                    logger.debug("Adding providers for multiconfig dependency %s" % l[3])
                    taskdata[depmc].add_provider(localdata[depmc], self.recipecaches[depmc], l[3])
                    seen.add(k)
                    new = True
            for mc in self.multiconfigs:
                for k in mcdeps:
                    if k in seen:
                        continue
                    l = k.split(':')
                    depmc = l[2]
                    if depmc not in self.multiconfigs:
                        bb.fatal("Multiconfig dependency %s depends on nonexistent multiconfig configuration named configuration %s" % (k,depmc))
                    else:
                        logger.debug(1, "Adding providers for multiconfig dependency %s" % l[3])
                        taskdata[depmc].add_provider(localdata[depmc], self.recipecaches[depmc], l[3])
                        seen.add(k)
                        new = True

        for mc in self.multiconfigs:
            taskdata[mc].add_unresolved(localdata[mc], self.recipecaches[mc])
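Both sides of that hunk implement the same idea: keep resolving multiconfig
dependencies until a pass adds nothing new. A generic sketch of that
fixed-point loop, where collect_deps and add_provider are made-up placeholders
standing in for the cooker's get_mcdepends()/add_provider() calls:

    def resolve_until_stable(targets, collect_deps, add_provider):
        seen = set()
        new = True
        while new:
            new = False
            for dep in collect_deps(targets):
                if dep in seen:
                    continue
                add_provider(dep)
                seen.add(dep)
                new = True  # another pass may uncover further deps

The loop shape, not the helper names, is what the diff is changing around.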
@@ -768,7 +711,7 @@ class BBCooker:
        Prepare a runqueue and taskdata object for iteration over pkgs_to_build
        """

        # We set halt to False here to prevent unbuildable targets raising
        # We set abort to False here to prevent unbuildable targets raising
        # an exception when we're just generating data
        taskdata, runlist = self.buildTaskData(pkgs_to_build, task, False, allowincomplete=True)

@@ -845,9 +788,7 @@ class BBCooker:
            for dep in rq.rqdata.runtaskentries[tid].depends:
                (depmc, depfn, _, deptaskfn) = bb.runqueue.split_tid_mcfn(dep)
                deppn = self.recipecaches[depmc].pkg_fn[deptaskfn]
                if depmc:
                    depmc = "mc:" + depmc + ":"
                depend_tree["tdepends"][dotname].append("%s%s.%s" % (depmc, deppn, bb.runqueue.taskname_from_tid(dep)))
                depend_tree["tdepends"][dotname].append("%s.%s" % (deppn, bb.runqueue.taskname_from_tid(dep)))
            if taskfn not in seen_fns:
                seen_fns.append(taskfn)
                packages = []
@@ -1111,11 +1052,6 @@ class BBCooker:
        if matches:
            bb.event.fire(bb.event.FilesMatchingFound(filepattern, matches), self.data)

    def testCookerCommandEvent(self, filepattern):
        # Dummy command used by OEQA selftest to test tinfoil without IO
        matches = ["A", "B"]
        bb.event.fire(bb.event.FilesMatchingFound(filepattern, matches), self.data)

    def findProviders(self, mc=''):
        return bb.providers.findProviders(self.databuilder.mcdata[mc], self.recipecaches[mc], self.recipecaches[mc].pkg_pn)

@@ -1123,16 +1059,10 @@ class BBCooker:
        if pn in self.recipecaches[mc].providers:
            filenames = self.recipecaches[mc].providers[pn]
            eligible, foundUnique = bb.providers.filterProviders(filenames, pn, self.databuilder.mcdata[mc], self.recipecaches[mc])
            if eligible is not None:
                filename = eligible[0]
            else:
                filename = None
            filename = eligible[0]
            return None, None, None, filename
        elif pn in self.recipecaches[mc].pkg_pn:
            (latest, latest_f, preferred_ver, preferred_file, required) = bb.providers.findBestProvider(pn, self.databuilder.mcdata[mc], self.recipecaches[mc], self.recipecaches[mc].pkg_pn)
            if required and preferred_file is None:
                return None, None, None, None
            return (latest, latest_f, preferred_ver, preferred_file)
            return bb.providers.findBestProvider(pn, self.databuilder.mcdata[mc], self.recipecaches[mc], self.recipecaches[mc].pkg_pn)
        else:
            return None, None, None, None

@@ -1277,15 +1207,15 @@ class BBCooker:
                        except bb.utils.VersionStringException as vse:
                            bb.fatal('Error parsing LAYERRECOMMENDS_%s: %s' % (c, str(vse)))
                        if not res:
                            parselog.debug3("Layer '%s' recommends version %s of layer '%s', but version %s is currently enabled in your configuration. Check that you are using the correct matching versions/branches of these two layers.", c, opstr, rec, layerver)
                            parselog.debug(3,"Layer '%s' recommends version %s of layer '%s', but version %s is currently enabled in your configuration. Check that you are using the correct matching versions/branches of these two layers.", c, opstr, rec, layerver)
                            continue
                    else:
                        parselog.debug3("Layer '%s' recommends version %s of layer '%s', which exists in your configuration but does not specify a version. Check that you are using the correct matching versions/branches of these two layers.", c, opstr, rec)
                        parselog.debug(3,"Layer '%s' recommends version %s of layer '%s', which exists in your configuration but does not specify a version. Check that you are using the correct matching versions/branches of these two layers.", c, opstr, rec)
                        continue
                    parselog.debug3("Layer '%s' recommends layer '%s', so we are adding it", c, rec)
                    parselog.debug(3,"Layer '%s' recommends layer '%s', so we are adding it", c, rec)
                    collection_depends[c].append(rec)
                else:
                    parselog.debug3("Layer '%s' recommends layer '%s', but this layer is not enabled in your configuration", c, rec)
                    parselog.debug(3,"Layer '%s' recommends layer '%s', but this layer is not enabled in your configuration", c, rec)

        # Recursively work out collection priorities based on dependencies
        def calc_layer_priority(collection):
@@ -1297,7 +1227,7 @@ class BBCooker:
                    if depprio > max_depprio:
                        max_depprio = depprio
                max_depprio += 1
                parselog.debug("Calculated priority of layer %s as %d", collection, max_depprio)
                parselog.debug(1, "Calculated priority of layer %s as %d", collection, max_depprio)
                collection_priorities[collection] = max_depprio

        # Calculate all layer priorities using calc_layer_priority and store in bbfile_config_priorities
@@ -1309,7 +1239,7 @@ class BBCooker:
                errors = True
                continue
            elif regex == "":
                parselog.debug("BBFILE_PATTERN_%s is empty" % c)
                parselog.debug(1, "BBFILE_PATTERN_%s is empty" % c)
                cre = re.compile('^NULL$')
                errors = False
            else:
@@ -1456,7 +1386,7 @@ class BBCooker:

        # Setup taskdata structure
        taskdata = {}
        taskdata[mc] = bb.taskdata.TaskData(self.configuration.halt)
        taskdata[mc] = bb.taskdata.TaskData(self.configuration.abort)
        taskdata[mc].add_provider(self.databuilder.mcdata[mc], self.recipecaches[mc], item)

        if quietlog:
@@ -1472,11 +1402,11 @@ class BBCooker:

        rq = bb.runqueue.RunQueue(self, self.data, self.recipecaches, taskdata, runlist)

        def buildFileIdle(server, rq, halt):
        def buildFileIdle(server, rq, abort):

            msg = None
            interrupted = 0
            if halt or self.state == state.forceshutdown:
            if abort or self.state == state.forceshutdown:
                rq.finish_runqueue(True)
                msg = "Forced shutdown"
                interrupted = 2
@@ -1518,10 +1448,10 @@ class BBCooker:
        Attempt to build the targets specified
        """

        def buildTargetsIdle(server, rq, halt):
        def buildTargetsIdle(server, rq, abort):
            msg = None
            interrupted = 0
            if halt or self.state == state.forceshutdown:
            if abort or self.state == state.forceshutdown:
                rq.finish_runqueue(True)
                msg = "Forced shutdown"
                interrupted = 2
@@ -1564,7 +1494,7 @@ class BBCooker:

        bb.event.fire(bb.event.BuildInit(packages), self.data)

        taskdata, runlist = self.buildTaskData(targets, task, self.configuration.halt)
        taskdata, runlist = self.buildTaskData(targets, task, self.configuration.abort)

        buildname = self.data.getVar("BUILDNAME", False)

@@ -1621,7 +1551,7 @@ class BBCooker:
            self.inotify_modified_files = []

            if not self.baseconfig_valid:
                logger.debug("Reloading base configuration data")
                logger.debug(1, "Reloading base configuration data")
                self.initConfigurationData()
                self.handlePRServ()

@@ -1632,7 +1562,7 @@ class BBCooker:

        if self.state in (state.shutdown, state.forceshutdown, state.error):
            if hasattr(self.parser, 'shutdown'):
                self.parser.shutdown(clean=False)
                self.parser.shutdown(clean=False, force = True)
                self.parser.final_cleanup()
            raise bb.BBHandledException()

@@ -1640,8 +1570,6 @@ class BBCooker:
            self.updateCacheSync()

        if self.state != state.parsing and not self.parsecache_valid:
            self.setupParserWatcher()

            bb.parse.siggen.reset(self.data)
            self.parseConfiguration ()
            if CookerFeatures.SEND_SANITYEVENTS in self.featureset:
@@ -1678,7 +1606,7 @@ class BBCooker:
        self.state = state.parsing

        if not self.parser.parse_next():
            collectlog.debug("parsing complete")
            collectlog.debug(1, "parsing complete")
            if self.parser.error:
                raise bb.BBHandledException()
            self.show_appends_with_no_recipes()
@@ -1701,7 +1629,7 @@ class BBCooker:
        # Return a copy, don't modify the original
        pkgs_to_build = pkgs_to_build[:]

        if not pkgs_to_build:
        if len(pkgs_to_build) == 0:
            raise NothingToBuild

        ignore = (self.data.getVar("ASSUME_PROVIDED") or "").split()
@@ -1723,7 +1651,7 @@ class BBCooker:

        if 'universe' in pkgs_to_build:
            parselog.verbnote("The \"universe\" target is only intended for testing and may produce errors.")
            parselog.debug("collating packages for \"universe\"")
            parselog.debug(1, "collating packages for \"universe\"")
            pkgs_to_build.remove('universe')
            for mc in self.multiconfigs:
                for t in self.recipecaches[mc].universe_target:
@@ -1748,8 +1676,6 @@ class BBCooker:
    def post_serve(self):
        self.shutdown(force=True)
        prserv.serv.auto_shutdown()
        if hasattr(bb.parse, "siggen"):
            bb.parse.siggen.exit()
        if self.hashserv:
            self.hashserv.process.terminate()
            self.hashserv.process.join()
@@ -1763,15 +1689,13 @@ class BBCooker:
        self.state = state.shutdown

        if self.parser:
            self.parser.shutdown(clean=not force)
            self.parser.shutdown(clean=not force, force=force)
            self.parser.final_cleanup()

    def finishcommand(self):
        self.state = state.initial

    def reset(self):
        if hasattr(bb.parse, "siggen"):
            bb.parse.siggen.exit()
        self.initConfigurationData()
        self.handlePRServ()

@@ -1800,7 +1724,7 @@ class CookerCollectFiles(object):
    def __init__(self, priorities, mc=''):
        self.mc = mc
        self.bbappends = []
        # Priorities is a list of tuples, with the second element as the pattern.
        # Priorities is a list of tupples, with the second element as the pattern.
        # We need to sort the list with the longest pattern first, and so on to
        # the shortest. This allows nested layers to be properly evaluated.
        self.bbfile_config_priorities = sorted(priorities, key=lambda tup: tup[1], reverse=True)
@@ -1836,7 +1760,7 @@ class CookerCollectFiles(object):
        """Collect all available .bb build files"""
        masked = 0

        collectlog.debug("collecting .bb files")
        collectlog.debug(1, "collecting .bb files")

        files = (config.getVar( "BBFILES") or "").split()

@@ -1844,10 +1768,10 @@ class CookerCollectFiles(object):
        files.sort( key=lambda fileitem: self.calc_bbfile_priority(fileitem)[0] )
        config.setVar("BBFILES_PRIORITIZED", " ".join(files))

        if not files:
        if not len(files):
            files = self.get_bbfiles()

        if not files:
        if not len(files):
            collectlog.error("no recipe files to build, check your BBPATH and BBFILES?")
            bb.event.fire(CookerExit(), eventdata)

@@ -1907,7 +1831,7 @@ class CookerCollectFiles(object):
            try:
                re.compile(mask)
                bbmasks.append(mask)
            except re.error:
            except sre_constants.error:
                collectlog.critical("BBMASK contains an invalid regular expression, ignoring: %s" % mask)

        # Then validate the combined regular expressions. This should never
@@ -1915,7 +1839,7 @@ class CookerCollectFiles(object):
        bbmask = "|".join(bbmasks)
        try:
            bbmask_compiled = re.compile(bbmask)
        except re.error:
        except sre_constants.error:
            collectlog.critical("BBMASK is not a valid regular expression, ignoring: %s" % bbmask)
            bbmask = None

@@ -1923,7 +1847,7 @@ class CookerCollectFiles(object):
        bbappend = []
        for f in newfiles:
            if bbmask and bbmask_compiled.search(f):
                collectlog.debug("skipping masked file %s", f)
                collectlog.debug(1, "skipping masked file %s", f)
                masked += 1
                continue
            if f.endswith('.bb'):
@@ -1931,7 +1855,7 @@ class CookerCollectFiles(object):
            elif f.endswith('.bbappend'):
                bbappend.append(f)
            else:
                collectlog.debug("skipping %s: unknown file extension", f)
                collectlog.debug(1, "skipping %s: unknown file extension", f)

        # Build a list of .bbappend files for each .bb file
        for f in bbappend:
@@ -2033,30 +1957,15 @@ class ParsingFailure(Exception):
        Exception.__init__(self, realexception, recipe)

class Parser(multiprocessing.Process):
    def __init__(self, jobs, results, quit, profile):
    def __init__(self, jobs, results, quit, init, profile):
        self.jobs = jobs
        self.results = results
        self.quit = quit
        self.init = init
        multiprocessing.Process.__init__(self)
        self.context = bb.utils.get_context().copy()
        self.handlers = bb.event.get_class_handlers().copy()
        self.profile = profile
        self.queue_signals = False
        self.signal_received = []
        self.signal_threadlock = threading.Lock()

    def catch_sig(self, signum, frame):
        if self.queue_signals:
            self.signal_received.append(signum)
        else:
            self.handle_sig(signum, frame)

    def handle_sig(self, signum, frame):
        if signum == signal.SIGTERM:
            signal.signal(signal.SIGTERM, signal.SIG_DFL)
            os.kill(os.getpid(), signal.SIGTERM)
        elif signum == signal.SIGINT:
            signal.default_int_handler(signum, frame)

    def run(self):

@@ -2076,48 +1985,36 @@ class Parser(multiprocessing.Process):
            prof.dump_stats(logfile)

    def realrun(self):
        # Signal handling here is hard. We must not terminate any process or thread holding the write
        # lock for the event stream as it will not be released, ever, and things will hang.
        # Python handles signals in the main thread/process but they can be raised from any thread and
        # we want to defer processing of any SIGTERM/SIGINT signal until we're outside the critical section
        # and don't hold the lock (see server/process.py). We therefore always catch the signals (so any
        # new thread should also do so) and we defer handling but we handle with the local thread lock
        # held (a threading lock, not a multiprocessing one) so that no other thread in the process
        # can be in the critical section.
        signal.signal(signal.SIGTERM, self.catch_sig)
        signal.signal(signal.SIGHUP, signal.SIG_DFL)
        signal.signal(signal.SIGINT, self.catch_sig)
        bb.utils.set_process_name(multiprocessing.current_process().name)
        multiprocessing.util.Finalize(None, bb.codeparser.parser_cache_save, exitpriority=1)
        multiprocessing.util.Finalize(None, bb.fetch.fetcher_parse_save, exitpriority=1)
        if self.init:
            self.init()

        pending = []
        try:
            while True:
                try:
                    self.quit.get_nowait()
                except queue.Empty:
                    pass
                else:
                    break
        while True:
            try:
                self.quit.get_nowait()
            except queue.Empty:
                pass
            else:
                self.results.close()
                self.results.join_thread()
                break

                if pending:
                    result = pending.pop()
                else:
                    try:
                        job = self.jobs.pop()
                    except IndexError:
                        break
                    result = self.parse(*job)
                    # Clear the siggen cache after parsing to control memory usage, it's huge
                    bb.parse.siggen.postparsing_clean_cache()
            if pending:
                result = pending.pop()
            else:
                try:
                    self.results.put(result, timeout=0.25)
                except queue.Full:
                    pending.append(result)
        finally:
            self.results.close()
            self.results.join_thread()
                    job = self.jobs.pop()
                except IndexError:
                    self.results.close()
                    self.results.join_thread()
                    break
                result = self.parse(*job)
                # Clear the siggen cache after parsing to control memory usage, it's huge
                bb.parse.siggen.postparsing_clean_cache()
            try:
                self.results.put(result, timeout=0.25)
            except queue.Full:
                pending.append(result)
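The comment at the top of realrun() describes the deferral trick precisely:
signals are always caught, but handling is postponed while the process is in a
critical section. A self-contained sketch of the same idea using only the
standard library (the variable and function names are illustrative, not
BitBake's):

    import os, signal

    queue_signals = False
    signal_received = []

    def catch_sig(signum, frame):
        if queue_signals:
            signal_received.append(signum)  # defer: we are in a critical section
        else:
            signal.signal(signum, signal.SIG_DFL)
            os.kill(os.getpid(), signum)    # deliver with the default action

    signal.signal(signal.SIGTERM, catch_sig)

    queue_signals = True
    # ... critical section: hold locks, write to the shared event stream ...
    queue_signals = False
    for signum in signal_received:
        catch_sig(signum, None)             # now handle anything deferred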
    def parse(self, mc, cache, filename, appends):
        try:
@@ -2138,12 +2035,12 @@ class Parser(multiprocessing.Process):
            tb = sys.exc_info()[2]
            exc.recipe = filename
            exc.traceback = list(bb.exceptions.extract_traceback(tb, context=3))
            return True, None, exc
            return True, exc
        # Need to turn BaseExceptions into Exceptions here so we gracefully shutdown
        # and, for example, a worker thread doesn't just exit on its own in response to
        # a SystemExit event.
        except BaseException as exc:
            return True, None, ParsingFailure(exc, filename)
            return True, ParsingFailure(exc, filename)
        finally:
            bb.event.LogHandler.filter = origfilter

@@ -2194,6 +2091,13 @@ class CookerParser(object):
        self.processes = []
        if self.toparse:
            bb.event.fire(bb.event.ParseStarted(self.toparse), self.cfgdata)
            def init():
                signal.signal(signal.SIGTERM, signal.SIG_DFL)
                signal.signal(signal.SIGHUP, signal.SIG_DFL)
                signal.signal(signal.SIGINT, signal.SIG_IGN)
                bb.utils.set_process_name(multiprocessing.current_process().name)
                multiprocessing.util.Finalize(None, bb.codeparser.parser_cache_save, exitpriority=1)
                multiprocessing.util.Finalize(None, bb.fetch.fetcher_parse_save, exitpriority=1)

            self.parser_quit = multiprocessing.Queue(maxsize=self.num_processes)
            self.result_queue = multiprocessing.Queue()
@@ -2203,14 +2107,14 @@ class CookerParser(object):
            self.jobs = chunkify(list(self.willparse), self.num_processes)

            for i in range(0, self.num_processes):
                parser = Parser(self.jobs[i], self.result_queue, self.parser_quit, self.cooker.configuration.profile)
                parser = Parser(self.jobs[i], self.result_queue, self.parser_quit, init, self.cooker.configuration.profile)
                parser.start()
                self.process_names.append(parser.name)
                self.processes.append(parser)

            self.results = itertools.chain(self.results, self.parse_generator())

    def shutdown(self, clean=True):
    def shutdown(self, clean=True, force=False):
        if not self.toparse:
            return
        if self.haveshutdown:
@@ -2224,8 +2128,6 @@ class CookerParser(object):
                                    self.total)

            bb.event.fire(event, self.cfgdata)
        else:
            bb.error("Parsing halted due to errors, see error messages above")

        for process in self.processes:
            self.parser_quit.put(None)
@@ -2239,24 +2141,11 @@ class CookerParser(object):
                break

        for process in self.processes:
            process.join(0.5)

        for process in self.processes:
            if process.exitcode is None:
                os.kill(process.pid, signal.SIGINT)

        for process in self.processes:
            process.join(0.5)

        for process in self.processes:
            if process.exitcode is None:
                if force:
                    process.join(.1)
                    process.terminate()

        for process in self.processes:
            process.join()
            # Added in 3.7, cleans up zombies
            if hasattr(process, "close"):
                process.close()
            else:
                process.join()

        self.parser_quit.close()
        # Allow data left in the cancel queue to be discarded
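The newer shutdown sequence escalates gently: a short join, then SIGINT, then
another join, and only then terminate() and close(). The same ladder for a
generic worker pool, using only the standard library (the worker function is
illustrative):

    import multiprocessing, os, signal, time

    def worker():
        time.sleep(30)  # stands in for a long-running parse

    procs = [multiprocessing.Process(target=worker) for _ in range(2)]
    for p in procs:
        p.start()

    for p in procs:
        p.join(0.5)                        # polite wait
    for p in procs:
        if p.exitcode is None:
            os.kill(p.pid, signal.SIGINT)  # ask nicely
    for p in procs:
        p.join(0.5)
    for p in procs:
        if p.exitcode is None:
            p.terminate()                  # last resort: SIGTERM
    for p in procs:
        p.join()
        p.close()                          # Python 3.7+: reap the handle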
@@ -2292,47 +2181,32 @@ class CookerParser(object):
        yield not cached, mc, infos

    def parse_generator(self):
        empty = False
        while self.processes or not empty:
            for process in self.processes.copy():
                if not process.is_alive():
                    process.join()
                    self.processes.remove(process)

        while True:
            if self.parsed >= self.toparse:
                break

            try:
                result = self.result_queue.get(timeout=0.25)
            except queue.Empty:
                empty = True
                yield None, None, None
                pass
            else:
                empty = False
                yield result

        if not (self.parsed >= self.toparse):
            raise bb.parse.ParseError("Not all recipes parsed, parser thread killed/died? Exiting.", None)

                value = result[1]
                if isinstance(value, BaseException):
                    raise value
                else:
                    yield result

    def parse_next(self):
        result = []
        parsed = None
        try:
            parsed, mc, result = next(self.results)
            if isinstance(result, BaseException):
                # Turn exceptions back into exceptions
                raise result
            if parsed is None:
                # Timeout, loop back through the main loop
                return True

        except StopIteration:
            self.shutdown()
            return False
        except bb.BBHandledException as exc:
            self.error += 1
            logger.debug('Failed to parse recipe: %s' % exc.recipe)
            logger.error('Failed to parse recipe: %s' % exc.recipe)
            self.shutdown(clean=False)
            return False
        except ParsingFailure as exc:

@@ -23,8 +23,8 @@ logger = logging.getLogger("BitBake")
parselog = logging.getLogger("BitBake.Parsing")

class ConfigParameters(object):
    def __init__(self, argv=None):
        self.options, targets = self.parseCommandLine(argv or sys.argv)
    def __init__(self, argv=sys.argv):
        self.options, targets = self.parseCommandLine(argv)
        self.environment = self.parseEnvironment()

        self.options.pkgs_to_build = targets or []
@@ -57,7 +57,7 @@ class ConfigParameters(object):

    def updateToServer(self, server, environment):
        options = {}
        for o in ["halt", "force", "invalidate_stamp",
        for o in ["abort", "force", "invalidate_stamp",
                  "dry_run", "dump_signatures",
                  "extra_assume_provided", "profile",
                  "prefile", "postfile", "server_timeout",
@@ -86,7 +86,7 @@ class ConfigParameters(object):
            action['msg'] = "Only one target can be used with the --environment option."
        elif self.options.buildfile and len(self.options.pkgs_to_build) > 0:
            action['msg'] = "No target should be used with the --environment and --buildfile options."
        elif self.options.pkgs_to_build:
        elif len(self.options.pkgs_to_build) > 0:
            action['action'] = ["showEnvironmentTarget", self.options.pkgs_to_build]
        else:
            action['action'] = ["showEnvironment", self.options.buildfile]
@@ -124,7 +124,7 @@ class CookerConfiguration(object):
        self.prefile = []
        self.postfile = []
        self.cmd = None
        self.halt = True
        self.abort = True
        self.force = False
        self.profile = False
        self.nosetscene = False
@@ -209,8 +209,8 @@ def findConfigFile(configfile, data):
    return None

#
# We search for a conf/bblayers.conf under an entry in BBPATH or in cwd working
# up to /. If that fails, bitbake would fall back to cwd.
# We search for a conf/bblayers.conf under an entry in BBPATH or in cwd working
# up to /. If that fails, we search for a conf/bitbake.conf in BBPATH.
#

def findTopdir():
@@ -223,8 +223,11 @@ def findTopdir():
    layerconf = findConfigFile("bblayers.conf", d)
    if layerconf:
        return os.path.dirname(os.path.dirname(layerconf))

    return os.path.abspath(os.getcwd())
    if bbpath:
        bitbakeconf = bb.utils.which(bbpath, "conf/bitbake.conf")
        if bitbakeconf:
            return os.path.dirname(os.path.dirname(bitbakeconf))
    return None

class CookerDataBuilder(object):

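The comment block above documents the lookup order: conf/bblayers.conf via an
entry in BBPATH, or by walking the working directory up to /, with the two
versions differing only in the fallback. A standalone sketch of the walk-up
part, using only os.path (the function name is illustrative):

    import os

    def find_up(filename, start=None):
        # Walk from start (default: cwd) towards / looking for conf/<filename>.
        d = os.path.abspath(start or os.getcwd())
        while True:
            candidate = os.path.join(d, "conf", filename)
            if os.path.exists(candidate):
                return candidate
            parent = os.path.dirname(d)
            if parent == d:  # reached the filesystem root
                return None
            d = parent

    print(find_up("bblayers.conf"))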
@@ -247,14 +250,10 @@ class CookerDataBuilder(object):
        self.savedenv = bb.data.init()
        for k in cookercfg.env:
            self.savedenv.setVar(k, cookercfg.env[k])
            if k in bb.data_smart.bitbake_renamed_vars:
                bb.error('Shell environment variable %s has been renamed to %s' % (k, bb.data_smart.bitbake_renamed_vars[k]))
                bb.fatal("Exiting to allow environment variables to be corrected")

        filtered_keys = bb.utils.approved_variables()
        bb.data.inheritFromOS(self.basedata, self.savedenv, filtered_keys)
        self.basedata.setVar("BB_ORIGENV", self.savedenv)
        self.basedata.setVar("__bbclasstype", "global")

        if worker:
            self.basedata.setVar("BB_WORKERCONTEXT", "1")
@@ -262,12 +261,12 @@ class CookerDataBuilder(object):
        self.data = self.basedata
        self.mcdata = {}

    def parseBaseConfiguration(self, worker=False):
    def parseBaseConfiguration(self):
        data_hash = hashlib.sha256()
        try:
            self.data = self.parseConfigurationFiles(self.prefiles, self.postfiles)

            if self.data.getVar("BB_WORKERCONTEXT", False) is None and not worker:
            if self.data.getVar("BB_WORKERCONTEXT", False) is None:
                bb.fetch.fetcher_init(self.data)
            bb.parse.init_parser(self.data)
            bb.codeparser.parser_cache_init(self.data)
@@ -292,8 +291,6 @@ class CookerDataBuilder(object):

            multiconfig = (self.data.getVar("BBMULTICONFIG") or "").split()
            for config in multiconfig:
                if config[0].isdigit():
                    bb.fatal("Multiconfig name '%s' is invalid as multiconfigs cannot start with a digit" % config)
                mcdata = self.parseConfigurationFiles(self.prefiles, self.postfiles, config)
                bb.event.fire(bb.event.ConfigParsed(), mcdata)
                self.mcdata[config] = mcdata
@@ -311,26 +308,6 @@ class CookerDataBuilder(object):
            logger.exception("Error parsing configuration files")
            raise bb.BBHandledException()

        # Handle obsolete variable names
        d = self.data
        renamedvars = d.getVarFlags('BB_RENAMED_VARIABLES') or {}
        renamedvars.update(bb.data_smart.bitbake_renamed_vars)
        issues = False
        for v in renamedvars:
            if d.getVar(v) != None or d.hasOverrides(v):
                issues = True
                loginfo = {}
                history = d.varhistory.get_variable_refs(v)
                for h in history:
                    for line in history[h]:
                        loginfo = {'file' : h, 'line' : line}
                        bb.data.data_smart._print_rename_error(v, loginfo, renamedvars)
                if not history:
                    bb.data.data_smart._print_rename_error(v, loginfo, renamedvars)
        if issues:
            raise bb.BBHandledException()

        # Create a copy so we can reset at a later date when UIs disconnect
        self.origdata = self.data
        self.data = bb.data.createCopy(self.origdata)
@@ -356,7 +333,7 @@ class CookerDataBuilder(object):

        layerconf = self._findLayerConf(data)
        if layerconf:
            parselog.debug2("Found bblayers.conf (%s)", layerconf)
            parselog.debug(2, "Found bblayers.conf (%s)", layerconf)
            # By definition bblayers.conf is in conf/ of TOPDIR.
            # We may have been called with cwd somewhere else so reset TOPDIR
            data.setVar("TOPDIR", os.path.dirname(os.path.dirname(layerconf)))
@@ -365,9 +342,6 @@ class CookerDataBuilder(object):
            layers = (data.getVar('BBLAYERS') or "").split()
            broken_layers = []

            if not layers:
                bb.fatal("The bblayers.conf file doesn't contain any BBLAYERS definition")

            data = bb.data.createCopy(data)
            approved = bb.utils.approved_variables()

@@ -384,7 +358,7 @@ class CookerDataBuilder(object):
                    raise bb.BBHandledException()

            for layer in layers:
                parselog.debug2("Adding layer %s", layer)
                parselog.debug(2, "Adding layer %s", layer)
                if 'HOME' in approved and '~' in layer:
                    layer = os.path.expanduser(layer)
                if layer.endswith('/'):
@@ -422,8 +396,6 @@ class CookerDataBuilder(object):
                if c in collections_tmp:
                    bb.fatal("Found duplicated BBFILE_COLLECTIONS '%s', check bblayers.conf or layer.conf to fix it." % c)
                compat = set((data.getVar("LAYERSERIES_COMPAT_%s" % c) or "").split())
                if compat and not layerseries:
                    bb.fatal("No core layer found to work with layer '%s'. Missing entry in bblayers.conf?" % c)
                if compat and not (compat & layerseries):
                    bb.fatal("Layer %s is not compatible with the core layer which only supports these series: %s (layer is compatible with %s)"
                              % (c, " ".join(layerseries), " ".join(compat)))
@@ -438,9 +410,6 @@ class CookerDataBuilder(object):
                      " invoked bitbake from the wrong directory?")
            raise SystemExit(msg)

        if not data.getVar("TOPDIR"):
            data.setVar("TOPDIR", os.path.abspath(os.getcwd()))

        data = parse_config_file(os.path.join("conf", "bitbake.conf"), data)

        # Parse files for loading *after* bitbake.conf and any includes
@@ -452,7 +421,7 @@ class CookerDataBuilder(object):
        for bbclass in bbclasses:
            data = _inherit(bbclass, data)

        # Normally we only register event handlers at the end of parsing .bb files
        # Nomally we only register event handlers at the end of parsing .bb files
        # We register any handlers we've found so far here...
        for var in data.getVar('__BBHANDLERS', False) or []:
            handlerfn = data.getVarFlag(var, "filename", False)
@@ -460,7 +429,7 @@ class CookerDataBuilder(object):
                parselog.critical("Undefined event handler function '%s'" % var)
                raise bb.BBHandledException()
            handlerln = int(data.getVarFlag(var, "lineno", False))
            bb.event.register(var, data.getVar(var, False), (data.getVarFlag(var, "eventmask") or "").split(), handlerfn, handlerln, data)
            bb.event.register(var, data.getVar(var, False), (data.getVarFlag(var, "eventmask") or "").split(), handlerfn, handlerln)

        data.setVar('BBINCLUDED',bb.parse.get_file_depends(data))

@@ -1,6 +1,4 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#

@@ -76,26 +74,26 @@ def createDaemon(function, logfile):
    with open('/dev/null', 'r') as si:
        os.dup2(si.fileno(), sys.stdin.fileno())

    with open(logfile, 'a+') as so:
        try:
            os.dup2(so.fileno(), sys.stdout.fileno())
            os.dup2(so.fileno(), sys.stderr.fileno())
        except io.UnsupportedOperation:
            sys.stdout = so
    try:
        so = open(logfile, 'a+')
        os.dup2(so.fileno(), sys.stdout.fileno())
        os.dup2(so.fileno(), sys.stderr.fileno())
    except io.UnsupportedOperation:
        sys.stdout = open(logfile, 'a+')

        # Have stdout and stderr be the same so log output matches chronologically
        # and there aren't two separate buffers
        sys.stderr = sys.stdout
    # Have stdout and stderr be the same so log output matches chronologically
    # and there aren't two seperate buffers
    sys.stderr = sys.stdout

        try:
            function()
        except Exception as e:
            traceback.print_exc()
        finally:
            bb.event.print_ui_queue()
            # os._exit() doesn't flush open files like os.exit() does. Manually flush
            # stdout and stderr so that any logging output will be seen, particularly
            # exception tracebacks.
            sys.stdout.flush()
            sys.stderr.flush()
            os._exit(0)
    try:
        function()
    except Exception as e:
        traceback.print_exc()
    finally:
        bb.event.print_ui_queue()
        # os._exit() doesn't flush open files like os.exit() does. Manually flush
        # stdout and stderr so that any logging output will be seen, particularly
        # exception tracebacks.
        sys.stdout.flush()
        sys.stderr.flush()
        os._exit(0)

@@ -226,7 +226,7 @@ def emit_func(func, o=sys.__stdout__, d = init()):
        deps = newdeps
        seen |= deps
        newdeps = set()
        for dep in sorted(deps):
        for dep in deps:
            if d.getVarFlag(dep, "func", False) and not d.getVarFlag(dep, "python", False):
                emit_var(dep, o, d, False) and o.write('\n')
                newdeps |= bb.codeparser.ShellParser(dep, logger).parse_shell(d.getVar(dep))
@@ -272,37 +272,34 @@ def update_data(d):
    """Performs final steps upon the datastore, including application of overrides"""
    d.finalize(parent = True)

def build_dependencies(key, keys, shelldeps, varflagsexcl, ignored_vars, d):
def build_dependencies(key, keys, shelldeps, varflagsexcl, d):
    deps = set()
    try:
        if key[-1] == ']':
            vf = key[:-1].split('[')
            if vf[1] == "vardepvalueexclude":
                return deps, ""
            value, parser = d.getVarFlag(vf[0], vf[1], False, retparser=True)
            deps |= parser.references
            deps = deps | (keys & parser.execs)
            return deps, value
        varflags = d.getVarFlags(key, ["vardeps", "vardepvalue", "vardepsexclude", "exports", "postfuncs", "prefuncs", "lineno", "filename"]) or {}
        vardeps = varflags.get("vardeps")
        exclusions = varflags.get("vardepsexclude", "").split()

        def handle_contains(value, contains, exclusions, d):
            newvalue = []
            if value:
                newvalue.append(str(value))
        def handle_contains(value, contains, d):
            newvalue = ""
            for k in sorted(contains):
                if k in exclusions or k in ignored_vars:
                    continue
                l = (d.getVar(k) or "").split()
                for item in sorted(contains[k]):
                    for word in item.split():
                        if not word in l:
                            newvalue.append("\n%s{%s} = Unset" % (k, item))
                            newvalue += "\n%s{%s} = Unset" % (k, item)
                            break
                    else:
                        newvalue.append("\n%s{%s} = Set" % (k, item))
            return "".join(newvalue)
                        newvalue += "\n%s{%s} = Set" % (k, item)
            if not newvalue:
                return value
            if not value:
                return newvalue
            return value + newvalue

        def handle_remove(value, deps, removes, d):
            for r in sorted(removes):
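One visible change in the newer side of this hunk: repeated string
concatenation (newvalue += ...) was replaced by appending to a list and joining
once, which avoids rebuilding the string on every iteration. The two styles
side by side, with illustrative data:

    items = ["A", "B", "C"]

    # Quadratic in the worst case: each += may copy the whole string.
    s = ""
    for i in items:
        s += "\n%s = Set" % i

    # Linear: collect pieces, join once at the end.
    parts = []
    for i in items:
        parts.append("\n%s = Set" % i)
    s2 = "".join(parts)

    assert s == s2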
@@ -321,7 +318,7 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, ignored_vars, d):
                parser.parse_python(value, filename=varflags.get("filename"), lineno=varflags.get("lineno"))
                deps = deps | parser.references
                deps = deps | (keys & parser.execs)
                value = handle_contains(value, parser.contains, exclusions, d)
                value = handle_contains(value, parser.contains, d)
            else:
                value, parsedvar = d.getVarFlag(key, "_content", False, retparser=True)
                parser = bb.codeparser.ShellParser(key, logger)
@@ -329,7 +326,7 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, ignored_vars, d):
                deps = deps | shelldeps
            deps = deps | parsedvar.references
            deps = deps | (keys & parser.execs) | (keys & parsedvar.execs)
            value = handle_contains(value, parsedvar.contains, exclusions, d)
            value = handle_contains(value, parsedvar.contains, d)
            if hasattr(parsedvar, "removes"):
                value = handle_remove(value, deps, parsedvar.removes, d)
            if vardeps is None:
@@ -344,7 +341,7 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, ignored_vars, d):
            value, parser = d.getVarFlag(key, "_content", False, retparser=True)
            deps |= parser.references
            deps = deps | (keys & parser.execs)
            value = handle_contains(value, parser.contains, exclusions, d)
            value = handle_contains(value, parser.contains, d)
            if hasattr(parser, "removes"):
                value = handle_remove(value, deps, parser.removes, d)

@@ -364,7 +361,7 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, ignored_vars, d):
            deps |= set(varfdeps)

        deps |= set((vardeps or "").split())
        deps -= set(exclusions)
        deps -= set(varflags.get("vardepsexclude", "").split())
    except bb.parse.SkipRecipe:
        raise
    except Exception as e:
@@ -374,7 +371,7 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, ignored_vars, d):
    #bb.note("Variable %s references %s and calls %s" % (key, str(deps), str(execs)))
    #d.setVarFlag(key, "vardeps", deps)

def generate_dependencies(d, ignored_vars):
def generate_dependencies(d, whitelist):

    keys = set(key for key in d if not key.startswith("__"))
    shelldeps = set(key for key in d.getVar("__exportlist", False) if d.getVarFlag(key, "export", False) and not d.getVarFlag(key, "unexport", False))
@@ -385,22 +382,22 @@ def generate_dependencies(d, ignored_vars):

    tasklist = d.getVar('__BBTASKS', False) or []
    for task in tasklist:
        deps[task], values[task] = build_dependencies(task, keys, shelldeps, varflagsexcl, ignored_vars, d)
        deps[task], values[task] = build_dependencies(task, keys, shelldeps, varflagsexcl, d)
        newdeps = deps[task]
        seen = set()
        while newdeps:
            nextdeps = newdeps - ignored_vars
            nextdeps = newdeps - whitelist
            seen |= nextdeps
            newdeps = set()
            for dep in nextdeps:
                if dep not in deps:
                    deps[dep], values[dep] = build_dependencies(dep, keys, shelldeps, varflagsexcl, ignored_vars, d)
                    deps[dep], values[dep] = build_dependencies(dep, keys, shelldeps, varflagsexcl, d)
                newdeps |= deps[dep]
            newdeps -= seen
        #print "For %s: %s" % (task, str(deps[task]))
    return tasklist, deps, values

def generate_dependency_hash(tasklist, gendeps, lookupcache, ignored_vars, fn):
def generate_dependency_hash(tasklist, gendeps, lookupcache, whitelist, fn):
    taskdeps = {}
    basehash = {}

@@ -409,11 +406,9 @@ def generate_dependency_hash(tasklist, gendeps, lookupcache, ignored_vars, fn):

        if data is None:
            bb.error("Task %s from %s seems to be empty?!" % (task, fn))
            data = []
        else:
            data = [data]
            data = ''

        gendeps[task] -= ignored_vars
        gendeps[task] -= whitelist
        newdeps = gendeps[task]
        seen = set()
        while newdeps:
@@ -421,27 +416,27 @@ def generate_dependency_hash(tasklist, gendeps, lookupcache, ignored_vars, fn):
            seen |= nextdeps
            newdeps = set()
            for dep in nextdeps:
                if dep in ignored_vars:
                if dep in whitelist:
                    continue
                gendeps[dep] -= ignored_vars
                gendeps[dep] -= whitelist
                newdeps |= gendeps[dep]
            newdeps -= seen

        alldeps = sorted(seen)
        for dep in alldeps:
            data.append(dep)
            data = data + dep
            var = lookupcache[dep]
            if var is not None:
                data.append(str(var))
                data = data + str(var)
        k = fn + ":" + task
        basehash[k] = hashlib.sha256("".join(data).encode("utf-8")).hexdigest()
        basehash[k] = hashlib.sha256(data.encode("utf-8")).hexdigest()
        taskdeps[task] = alldeps

    return taskdeps, basehash

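The basehash computed above is simply a SHA-256 over the task's data plus every
dependency name and value in sorted order; the two versions differ only in how
the byte string is accumulated. In miniature, with illustrative values:

    import hashlib

    lookupcache = {"CFLAGS": "-O2", "CC": "gcc"}

    data = ["do_compile"]
    for dep in sorted(lookupcache):
        data.append(dep)
        data.append(str(lookupcache[dep]))

    basehash = hashlib.sha256("".join(data).encode("utf-8")).hexdigest()
    print(basehash)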
def inherits_class(klass, d):
    val = d.getVar('__inherit_cache', False) or []
    needle = '/%s.bbclass' % klass
    needle = os.path.join('classes', '%s.bbclass' % klass)
    for v in val:
        if v.endswith(needle):
            return True

@@ -17,7 +17,7 @@ BitBake build tools.
# Based on functions from the base bb module, Copyright 2003 Holger Schurig

import copy, re, sys, traceback
from collections.abc import MutableMapping
from collections import MutableMapping
import logging
import hashlib
import bb, bb.codeparser
@@ -26,25 +26,13 @@ from bb.COW import COWDictBase

logger = logging.getLogger("BitBake.Data")

__setvar_keyword__ = [":append", ":prepend", ":remove"]
__setvar_regexp__ = re.compile(r'(?P<base>.*?)(?P<keyword>:append|:prepend|:remove)(:(?P<add>[^A-Z]*))?$')
__expand_var_regexp__ = re.compile(r"\${[a-zA-Z0-9\-_+./~:]+?}")
__setvar_keyword__ = ["_append", "_prepend", "_remove"]
__setvar_regexp__ = re.compile(r'(?P<base>.*?)(?P<keyword>_append|_prepend|_remove)(_(?P<add>[^A-Z]*))?$')
__expand_var_regexp__ = re.compile(r"\${[a-zA-Z0-9\-_+./~]+?}")
__expand_python_regexp__ = re.compile(r"\${@.+?}")
__whitespace_split__ = re.compile(r'(\s)')
__override_regexp__ = re.compile(r'[a-z0-9]+')

bitbake_renamed_vars = {
    "BB_ENV_WHITELIST": "BB_ENV_PASSTHROUGH",
    "BB_ENV_EXTRAWHITE": "BB_ENV_PASSTHROUGH_ADDITIONS",
    "BB_HASHBASE_WHITELIST": "BB_BASEHASH_IGNORE_VARS",
    "BB_HASHCONFIG_WHITELIST": "BB_HASHCONFIG_IGNORE_VARS",
    "BB_HASHTASK_WHITELIST": "BB_TASKHASH_IGNORE_TASKS",
    "BB_SETSCENE_ENFORCE_WHITELIST": "BB_SETSCENE_ENFORCE_IGNORE_TASKS",
    "MULTI_PROVIDER_WHITELIST": "BB_MULTI_PROVIDER_ALLOWED",
    "BB_STAMP_WHITELIST": "is a deprecated variable and support has been removed",
    "BB_STAMP_POLICY": "is a deprecated variable and support has been removed",
}

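The first three lines of that hunk are the 3.x-to-4.x operator rename in one
place: ":append"/":prepend"/":remove" replaced the old "_append"/"_prepend"/
"_remove" suffixes. How the newer regexp splits a variable key (the pattern is
copied from the hunk; the key itself is illustrative):

    import re

    setvar_re = re.compile(
        r'(?P<base>.*?)(?P<keyword>:append|:prepend|:remove)(:(?P<add>[^A-Z]*))?$')

    m = setvar_re.match("FOO:append:libc-musl")
    print(m.group("base"))     # FOO
    print(m.group("keyword"))  # :append
    print(m.group("add"))      # libc-musl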
def infer_caller_details(loginfo, parent = False, varval = True):
    """Save the caller the trouble of specifying everything."""
    # Save effort.
@@ -152,9 +140,6 @@ class DataContext(dict):
        self['d'] = metadata

    def __missing__(self, key):
        # Skip commonly accessed invalid variables
        if key in ['bb', 'oe', 'int', 'bool', 'time', 'str', 'os']:
            raise KeyError(key)
        value = self.metadata.getVar(key)
        if value is None or self.metadata.getVarFlag(key, 'func', False):
            raise KeyError(key)
@@ -166,7 +151,6 @@ class ExpansionError(Exception):
        self.expression = expression
        self.variablename = varname
        self.exception = exception
        self.varlist = [varname or expression or ""]
        if varname:
            if expression:
                self.msg = "Failure expanding variable %s, expression was %s which triggered exception %s: %s" % (varname, expression, type(exception).__name__, exception)
@@ -176,14 +160,8 @@ class ExpansionError(Exception):
            self.msg = "Failure expanding expression %s which triggered exception %s: %s" % (expression, type(exception).__name__, exception)
        Exception.__init__(self, self.msg)
        self.args = (varname, expression, exception)

    def addVar(self, varname):
        if varname:
            self.varlist.append(varname)

    def __str__(self):
        chain = "\nThe variable dependency chain for the failure is: " + " -> ".join(self.varlist)
        return self.msg + chain
        return self.msg

class IncludeHistory(object):
    def __init__(self, parent = None, filename = '[TOP LEVEL]'):
@@ -299,7 +277,7 @@ class VariableHistory(object):
            for (r, override) in d.overridedata[var]:
                for event in self.variable(r):
                    loginfo = event.copy()
                    if 'flag' in loginfo and not loginfo['flag'].startswith(("_", ":")):
                    if 'flag' in loginfo and not loginfo['flag'].startswith("_"):
                        continue
                    loginfo['variable'] = var
                    loginfo['op'] = 'override[%s]:%s' % (override, loginfo['op'])
@@ -351,16 +329,6 @@ class VariableHistory(object):
            lines.append(line)
        return lines

    def get_variable_refs(self, var):
        """Return a dict of file/line references"""
        var_history = self.variable(var)
        refs = {}
        for event in var_history:
            if event['file'] not in refs:
                refs[event['file']] = []
            refs[event['file']].append(event['line'])
        return refs

    def get_variable_items_files(self, var):
        """
        Use variable history to map items added to a list variable and
@@ -374,7 +342,7 @@ class VariableHistory(object):
        for event in history:
            if 'flag' in event:
                continue
            if event['op'] == ':remove':
            if event['op'] == '_remove':
                continue
            if isset and event['op'] == 'set?':
                continue
@@ -395,23 +363,6 @@ class VariableHistory(object):
        else:
            self.variables[var] = []

def _print_rename_error(var, loginfo, renamedvars, fullvar=None):
|
||||
info = ""
|
||||
if "file" in loginfo:
|
||||
info = " file: %s" % loginfo["file"]
|
||||
if "line" in loginfo:
|
||||
info += " line: %s" % loginfo["line"]
|
||||
if fullvar and fullvar != var:
|
||||
info += " referenced as: %s" % fullvar
|
||||
if info:
|
||||
info = " (%s)" % info.strip()
|
||||
renameinfo = renamedvars[var]
|
||||
if " " in renameinfo:
|
||||
# A space signals a string to display instead of a rename
|
||||
bb.erroronce('Variable %s %s%s' % (var, renameinfo, info))
|
||||
else:
|
||||
bb.erroronce('Variable %s has been renamed to %s%s' % (var, renameinfo, info))
|
||||
|
||||
class DataSmart(MutableMapping):
|
||||
def __init__(self):
|
||||
self.dict = {}
|
||||
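
bitbake_renamed_vars and _print_rename_error() exist only on the yocto-4.1 side; together they warn when metadata still uses pre-honister variable names. A rough standalone sketch of the lookup, with the table truncated to one entry:

    renamed = {"BB_ENV_WHITELIST": "BB_ENV_PASSTHROUGH"}  # subset of the table above

    def warn_if_renamed(var):
        base = var.split(":", 1)[0]       # strip any override suffix first
        if base in renamed:
            print("Variable %s has been renamed to %s" % (base, renamed[base]))

    warn_if_renamed("BB_ENV_WHITELIST")   # -> points the user at the new name
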
@@ -419,8 +370,6 @@ class DataSmart(MutableMapping):
         self.inchistory = IncludeHistory()
         self.varhistory = VariableHistory(self)
         self._tracking = False
-        self._var_renames = {}
-        self._var_renames.update(bitbake_renamed_vars)
 
         self.expand_cache = {}
 
@@ -454,17 +403,14 @@ class DataSmart(MutableMapping):
                     s = __expand_python_regexp__.sub(varparse.python_sub, s)
                 except SyntaxError as e:
                     # Likely unmatched brackets, just don't expand the expression
-                    if e.msg != "EOL while scanning string literal" and not e.msg.startswith("unterminated string literal"):
+                    if e.msg != "EOL while scanning string literal":
                         raise
                 if s == olds:
                     break
-            except ExpansionError as e:
-                e.addVar(varname)
+            except ExpansionError:
                 raise
             except bb.parse.SkipRecipe:
                 raise
             except bb.BBHandledException:
                 raise
             except Exception as exc:
                 tb = sys.exc_info()[2]
                 raise ExpansionError(varname, s, exc).with_traceback(tb) from exc
@@ -532,26 +478,9 @@ class DataSmart(MutableMapping):
         else:
             self.initVar(var)
 
     def hasOverrides(self, var):
         return var in self.overridedata
 
     def setVar(self, var, value, **loginfo):
         #print("var=" + str(var) + " val=" + str(value))
 
-        if not var.startswith("__anon_") and ("_append" in var or "_prepend" in var or "_remove" in var):
-            info = "%s" % var
-            if "file" in loginfo:
-                info += " file: %s" % loginfo["file"]
-            if "line" in loginfo:
-                info += " line: %s" % loginfo["line"]
-            bb.fatal("Variable %s contains an operation using the old override syntax. Please convert this layer/metadata before attempting to use with a newer bitbake." % info)
-
-        shortvar = var.split(":", 1)[0]
-        if shortvar in self._var_renames:
-            _print_rename_error(shortvar, loginfo, self._var_renames, fullvar=var)
-            # Mark that we have seen a renamed variable
-            self.setVar("_FAILPARSINGERRORHANDLED", True)
-
         self.expand_cache = {}
         parsing=False
         if 'parsing' in loginfo:
@@ -580,7 +509,7 @@ class DataSmart(MutableMapping):
                 # pay the cookie monster
 
                 # more cookies for the cookie monster
-                if ':' in var:
+                if '_' in var:
                     self._setvar_update_overrides(base, **loginfo)
 
                 if base in self.overridevars:
@@ -591,27 +520,27 @@ class DataSmart(MutableMapping):
             self._makeShadowCopy(var)
 
         if not parsing:
-            if ":append" in self.dict[var]:
-                del self.dict[var][":append"]
-            if ":prepend" in self.dict[var]:
-                del self.dict[var][":prepend"]
-            if ":remove" in self.dict[var]:
-                del self.dict[var][":remove"]
+            if "_append" in self.dict[var]:
+                del self.dict[var]["_append"]
+            if "_prepend" in self.dict[var]:
+                del self.dict[var]["_prepend"]
+            if "_remove" in self.dict[var]:
+                del self.dict[var]["_remove"]
             if var in self.overridedata:
                 active = []
                 self.need_overrides()
                 for (r, o) in self.overridedata[var]:
                     if o in self.overridesset:
                         active.append(r)
-                    elif ":" in o:
-                        if set(o.split(":")).issubset(self.overridesset):
+                    elif "_" in o:
+                        if set(o.split("_")).issubset(self.overridesset):
                             active.append(r)
                 for a in active:
                     self.delVar(a)
                 del self.overridedata[var]
 
         # more cookies for the cookie monster
-        if ':' in var:
+        if '_' in var:
             self._setvar_update_overrides(var, **loginfo)
 
         # setting var
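
setVar() above also routes the append/prepend/remove suffixes into per-variable flags that getVar() applies later. A minimal 3.2-style sketch, assuming bitbake's lib/ directory is importable:

    from bb.data_smart import DataSmart

    d = DataSmart()
    d.setVar("DEPENDS", "zlib")
    d.setVar("DEPENDS_append", " openssl")   # queued under the "_append" flag
    print(d.getVar("DEPENDS"))               # -> "zlib openssl" once expanded
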
@@ -637,8 +566,8 @@ class DataSmart(MutableMapping):
 
     def _setvar_update_overrides(self, var, **loginfo):
         # aka pay the cookie monster
-        override = var[var.rfind(':')+1:]
-        shortvar = var[:var.rfind(':')]
+        override = var[var.rfind('_')+1:]
+        shortvar = var[:var.rfind('_')]
         while override and __override_regexp__.match(override):
             if shortvar not in self.overridedata:
                 self.overridedata[shortvar] = []
@@ -647,9 +576,9 @@ class DataSmart(MutableMapping):
                 self.overridedata[shortvar] = list(self.overridedata[shortvar])
             self.overridedata[shortvar].append([var, override])
             override = None
-            if ":" in shortvar:
-                override = var[shortvar.rfind(':')+1:]
-                shortvar = var[:shortvar.rfind(':')]
+            if "_" in shortvar:
+                override = var[shortvar.rfind('_')+1:]
+                shortvar = var[:shortvar.rfind('_')]
                 if len(shortvar) == 0:
                     override = None
 
@@ -673,11 +602,10 @@ class DataSmart(MutableMapping):
         self.varhistory.record(**loginfo)
         self.setVar(newkey, val, ignore=True, parsing=True)
 
-        srcflags = self.getVarFlags(key, False, True) or {}
-        for i in srcflags:
-            if i not in (__setvar_keyword__):
+        for i in (__setvar_keyword__):
+            src = self.getVarFlag(key, i, False)
+            if src is None:
                 continue
-            src = srcflags[i]
 
             dest = self.getVarFlag(newkey, i, False) or []
             dest.extend(src)
@@ -689,7 +617,7 @@ class DataSmart(MutableMapping):
                 self.overridedata[newkey].append([v.replace(key, newkey), o])
                 self.renameVar(v, v.replace(key, newkey))
 
-        if ':' in newkey and val is None:
+        if '_' in newkey and val is None:
             self._setvar_update_overrides(newkey, **loginfo)
 
         loginfo['variable'] = key
@@ -701,12 +629,12 @@ class DataSmart(MutableMapping):
     def appendVar(self, var, value, **loginfo):
         loginfo['op'] = 'append'
         self.varhistory.record(**loginfo)
-        self.setVar(var + ":append", value, ignore=True, parsing=True)
+        self.setVar(var + "_append", value, ignore=True, parsing=True)
 
     def prependVar(self, var, value, **loginfo):
         loginfo['op'] = 'prepend'
         self.varhistory.record(**loginfo)
-        self.setVar(var + ":prepend", value, ignore=True, parsing=True)
+        self.setVar(var + "_prepend", value, ignore=True, parsing=True)
 
     def delVar(self, var, **loginfo):
         self.expand_cache = {}
@@ -717,9 +645,9 @@ class DataSmart(MutableMapping):
         self.dict[var] = {}
         if var in self.overridedata:
             del self.overridedata[var]
-        if ':' in var:
-            override = var[var.rfind(':')+1:]
-            shortvar = var[:var.rfind(':')]
+        if '_' in var:
+            override = var[var.rfind('_')+1:]
+            shortvar = var[:var.rfind('_')]
             while override and override.islower():
                 try:
                     if shortvar in self.overridedata:
@@ -729,23 +657,15 @@ class DataSmart(MutableMapping):
                 except ValueError as e:
                     pass
                 override = None
-                if ":" in shortvar:
-                    override = var[shortvar.rfind(':')+1:]
-                    shortvar = var[:shortvar.rfind(':')]
+                if "_" in shortvar:
+                    override = var[shortvar.rfind('_')+1:]
+                    shortvar = var[:shortvar.rfind('_')]
                     if len(shortvar) == 0:
                         override = None
 
     def setVarFlag(self, var, flag, value, **loginfo):
         self.expand_cache = {}
 
-        if var == "BB_RENAMED_VARIABLES":
-            self._var_renames[flag] = value
-
-        if var in self._var_renames:
-            _print_rename_error(var, loginfo, self._var_renames)
-            # Mark that we have seen a renamed variable
-            self.setVar("_FAILPARSINGERRORHANDLED", True)
-
         if 'op' not in loginfo:
             loginfo['op'] = "set"
         loginfo['flag'] = flag
@@ -754,7 +674,7 @@ class DataSmart(MutableMapping):
             self._makeShadowCopy(var)
         self.dict[var][flag] = value
 
-        if flag == "_defaultval" and ':' in var:
+        if flag == "_defaultval" and '_' in var:
             self._setvar_update_overrides(var, **loginfo)
         if flag == "_defaultval" and var in self.overridevars:
             self._setvar_update_overridevars(var, value)
@@ -786,11 +706,11 @@ class DataSmart(MutableMapping):
             active = {}
             self.need_overrides()
             for (r, o) in overridedata:
-                # FIXME What about double overrides both with "_" in the name?
+                # What about double overrides both with "_" in the name?
                 if o in self.overridesset:
                     active[o] = r
-                elif ":" in o:
-                    if set(o.split(":")).issubset(self.overridesset):
+                elif "_" in o:
+                    if set(o.split("_")).issubset(self.overridesset):
                         active[o] = r
 
             mod = True
@@ -798,10 +718,10 @@ class DataSmart(MutableMapping):
                 mod = False
                 for o in self.overrides:
                     for a in active.copy():
-                        if a.endswith(":" + o):
+                        if a.endswith("_" + o):
                             t = active[a]
                             del active[a]
-                            active[a.replace(":" + o, "")] = t
+                            active[a.replace("_" + o, "")] = t
                             mod = True
                         elif a == o:
                             match = active[a]
@@ -820,31 +740,31 @@ class DataSmart(MutableMapping):
                 value = copy.copy(local_var["_defaultval"])
 
 
-        if flag == "_content" and local_var is not None and ":append" in local_var and not parsing:
+        if flag == "_content" and local_var is not None and "_append" in local_var and not parsing:
             if not value:
                 value = ""
             self.need_overrides()
-            for (r, o) in local_var[":append"]:
+            for (r, o) in local_var["_append"]:
                 match = True
                 if o:
-                    for o2 in o.split(":"):
+                    for o2 in o.split("_"):
                         if not o2 in self.overrides:
                             match = False
                 if match:
                     if value is None:
                         value = ""
                     value = value + r
 
-        if flag == "_content" and local_var is not None and ":prepend" in local_var and not parsing:
+        if flag == "_content" and local_var is not None and "_prepend" in local_var and not parsing:
             if not value:
                 value = ""
             self.need_overrides()
-            for (r, o) in local_var[":prepend"]:
+            for (r, o) in local_var["_prepend"]:
 
                 match = True
                 if o:
-                    for o2 in o.split(":"):
+                    for o2 in o.split("_"):
                         if not o2 in self.overrides:
                             match = False
                 if match:
                     if value is None:
                         value = ""
                     value = r + value
 
         parser = None
@@ -853,12 +773,12 @@ class DataSmart(MutableMapping):
             if expand:
                 value = parser.value
 
-        if value and flag == "_content" and local_var is not None and ":remove" in local_var and not parsing:
+        if value and flag == "_content" and local_var is not None and "_remove" in local_var and not parsing:
             self.need_overrides()
-            for (r, o) in local_var[":remove"]:
+            for (r, o) in local_var["_remove"]:
                 match = True
                 if o:
-                    for o2 in o.split(":"):
+                    for o2 in o.split("_"):
                         if not o2 in self.overrides:
                             match = False
                 if match:
@@ -871,7 +791,7 @@ class DataSmart(MutableMapping):
                         expanded_removes[r] = self.expand(r).split()
 
             parser.removes = set()
-            val = []
+            val = ""
             for v in __whitespace_split__.split(parser.value):
                 skip = False
                 for r in removes:
@@ -880,8 +800,8 @@ class DataSmart(MutableMapping):
                         skip = True
                 if skip:
                     continue
-                val.append(v)
-            parser.value = "".join(val)
+                val = val + v
+            parser.value = val
             if expand:
                 value = parser.value
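
The val = [] / "".join(val) change in the last two hunks is only a string-building optimisation; the interesting part is the whitespace-preserving split that both versions share. A standalone sketch of that removal pass, with hypothetical values:

    import re

    __whitespace_split__ = re.compile(r'(\s)')
    removes = {"-g"}

    value = "-O2 -g\t-Wall"
    val = []
    for v in __whitespace_split__.split(value):
        if v in removes:
            continue           # drop the removed item, keep surrounding whitespace
        val.append(v)
    print("".join(val))        # -> "-O2 \t-Wall"
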
@@ -944,7 +864,7 @@ class DataSmart(MutableMapping):
 
         if local_var:
             for i in local_var:
-                if i.startswith(("_", ":")) and not internalflags:
+                if i.startswith("_") and not internalflags:
                     continue
                 flags[i] = local_var[i]
                 if expand and i in expand:
@@ -985,7 +905,6 @@ class DataSmart(MutableMapping):
         data.inchistory = self.inchistory.copy()
 
         data._tracking = self._tracking
-        data._var_renames = self._var_renames
 
         data.overrides = None
         data.overridevars = copy.copy(self.overridevars)
@@ -1008,7 +927,7 @@ class DataSmart(MutableMapping):
         value = self.getVar(variable, False)
         for key in keys:
             referrervalue = self.getVar(key, False)
-            if referrervalue and isinstance(referrervalue, str) and ref in referrervalue:
+            if referrervalue and ref in referrervalue:
                 self.setVar(key, referrervalue.replace(ref, value))
 
     def localkeys(self):
@@ -1043,8 +962,8 @@ class DataSmart(MutableMapping):
                 for (r, o) in self.overridedata[var]:
                     if o in self.overridesset:
                         overrides.add(var)
-                    elif ":" in o:
-                        if set(o.split(":")).issubset(self.overridesset):
+                    elif "_" in o:
+                        if set(o.split("_")).issubset(self.overridesset):
                             overrides.add(var)
 
         for k in keylist(self.dict):
@@ -1074,10 +993,10 @@ class DataSmart(MutableMapping):
         d = self.createCopy()
         bb.data.expandKeys(d)
 
-        config_ignore_vars = set((d.getVar("BB_HASHCONFIG_IGNORE_VARS") or "").split())
+        config_whitelist = set((d.getVar("BB_HASHCONFIG_WHITELIST") or "").split())
         keys = set(key for key in iter(d) if not key.startswith("__"))
         for key in keys:
-            if key in config_ignore_vars:
+            if key in config_whitelist:
                 continue
 
             value = d.getVar(key, False) or ""
@@ -1086,7 +1005,7 @@ class DataSmart(MutableMapping):
             else:
                 data.update({key:value})
 
-            varflags = d.getVarFlags(key, internalflags = True, expand=["vardepvalue"])
+            varflags = d.getVarFlags(key, internalflags = True)
            if not varflags:
                 continue
             for f in varflags:
bitbake/lib/bb/event.py
@@ -40,7 +40,7 @@ class HeartbeatEvent(Event):
     """Triggered at regular time intervals of 10 seconds. Other events can fire much more often
        (runQueueTaskStarted when there are many short tasks) or not at all for long periods
        of time (again runQueueTaskStarted, when there is just one long-running task), so this
-       event is more suitable for doing some task-independent work occasionally."""
+       event is more suitable for doing some task-independent work occassionally."""
     def __init__(self, time):
         Event.__init__(self)
         self.time = time
@@ -118,8 +118,6 @@ def fire_class_handlers(event, d):
         if _eventfilter:
             if not _eventfilter(name, handler, event, d):
                 continue
-        if d is not None and not name in (d.getVar("__BBHANDLERS_MC") or set()):
-            continue
         execute_handler(name, handler, event, d)
 
 ui_queue = []
@@ -132,14 +130,8 @@ def print_ui_queue():
     if not _uiready:
         from bb.msg import BBLogFormatter
         # Flush any existing buffered content
-        try:
-            sys.stdout.flush()
-        except:
-            pass
-        try:
-            sys.stderr.flush()
-        except:
-            pass
+        sys.stdout.flush()
+        sys.stderr.flush()
         stdout = logging.StreamHandler(sys.stdout)
         stderr = logging.StreamHandler(sys.stderr)
         formatter = BBLogFormatter("%(levelname)s: %(message)s")
@@ -235,19 +227,11 @@ def fire_from_worker(event, d):
     fire_ui_handlers(event, d)
 
 noop = lambda _: None
-def register(name, handler, mask=None, filename=None, lineno=None, data=None):
+def register(name, handler, mask=None, filename=None, lineno=None):
     """Register an Event handler"""
 
-    if data is not None and data.getVar("BB_CURRENT_MC"):
-        mc = data.getVar("BB_CURRENT_MC")
-        name = '%s%s' % (mc.replace('-', '_'), name)
-
     # already registered
     if name in _handlers:
-        if data is not None:
-            bbhands_mc = (data.getVar("__BBHANDLERS_MC") or set())
-            bbhands_mc.add(name)
-            data.setVar("__BBHANDLERS_MC", bbhands_mc)
         return AlreadyRegistered
 
     if handler is not None:
@@ -284,20 +268,10 @@ def register(name, handler, mask=None, filename=None, lineno=None, data=None):
             _event_handler_map[m] = {}
         _event_handler_map[m][name] = True
 
-    if data is not None:
-        bbhands_mc = (data.getVar("__BBHANDLERS_MC") or set())
-        bbhands_mc.add(name)
-        data.setVar("__BBHANDLERS_MC", bbhands_mc)
-
     return Registered
 
-def remove(name, handler, data=None):
+def remove(name, handler):
     """Remove an Event handler"""
-    if data is not None:
-        if data.getVar("BB_CURRENT_MC"):
-            mc = data.getVar("BB_CURRENT_MC")
-            name = '%s%s' % (mc.replace('-', '_'), name)
-
     _handlers.pop(name)
     if name in _catchall_handlers:
         _catchall_handlers.pop(name)
@@ -305,12 +279,6 @@ def remove(name, handler, data=None):
         if name in _event_handler_map[event]:
             _event_handler_map[event].pop(name)
 
-    if data is not None:
-        bbhands_mc = (data.getVar("__BBHANDLERS_MC") or set())
-        if name in bbhands_mc:
-            bbhands_mc.remove(name)
-        data.setVar("__BBHANDLERS_MC", bbhands_mc)
-
 def get_handlers():
     return _handlers
 
@@ -492,7 +460,7 @@ class BuildCompleted(BuildBase, OperationCompleted):
         BuildBase.__init__(self, n, p, failures)
 
 class DiskFull(Event):
-    """Disk full case build halted"""
+    """Disk full case build aborted"""
     def __init__(self, dev, type, freespace, mountpoint):
         Event.__init__(self)
         self._dev = dev
@@ -676,17 +644,6 @@ class ReachableStamps(Event):
         Event.__init__(self)
         self.stamps = stamps
 
-class StaleSetSceneTasks(Event):
-    """
-    An event listing setscene tasks which are 'stale' and will
-    be rerun. The metadata may use to clean up stale data.
-    tasks is a mapping of tasks and matching stale stamps.
-    """
-
-    def __init__(self, tasks):
-        Event.__init__(self)
-        self.tasks = tasks
-
 class FilesMatchingFound(Event):
     """
     Event when a list of files matching the supplied pattern has
@@ -770,7 +727,7 @@ class LogHandler(logging.Handler):
 class MetadataEvent(Event):
     """
     Generic event that target for OE-Core classes
-    to report information during asynchronous execution
+    to report information during asynchrous execution
     """
     def __init__(self, eventtype, eventdata):
         Event.__init__(self)
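
The __BBHANDLERS_MC bookkeeping removed above is what lets yocto-4.1 namespace event handlers per multiconfig. A hedged sketch of the 3.2-era registration API these hunks restore; the handler name and mask are illustrative:

    import bb.event

    def on_build_started(event):
        print("build started: %s" % event)

    bb.event.register("on_build_started", on_build_started,
                      mask=["bb.event.BuildStarted"])
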
@@ -1,6 +1,4 @@
 #
-# Copyright BitBake Contributors
-#
 # SPDX-License-Identifier: GPL-2.0-only
 #
 
bitbake/lib/bb/fetch2/README
@@ -1,57 +0,0 @@
-There are expectations of users of the fetcher code. This file attempts to document
-some of the constraints that are present. Some are obvious, some are less so. It is
-documented in the context of how OE uses it but the API calls are generic.
-
-a) network access for sources is only expected to happen in the do_fetch task.
-   This is not enforced or tested but is required so that we can:
-
-   i) audit the sources used (i.e. for license/manifest reasons)
-   ii) support offline builds with a suitable cache
-   iii) allow work to continue even with downtime upstream
-   iv) allow for changes upstream in incompatible ways
-   v) allow rebuilding of the software in X years time
-
-b) network access is not expected in do_unpack task.
-
-c) you can take DL_DIR and use it as a mirror for offline builds.
-
-d) access to the network is only made when explicitly configured in recipes
-   (e.g. use of AUTOREV, or use of git tags which change revision).
-
-e) fetcher output is deterministic (i.e. if you fetch configuration XXX now it
-   will match in future exactly in a clean build with a new DL_DIR).
-   One specific pain point example are git tags. They can be replaced and change
-   so the git fetcher has to resolve them with the network. We use git revisions
-   where possible to avoid this and ensure determinism.
-
-f) network access is expected to work with the standard linux proxy variables
-   so that access behind firewalls works (the fetcher sets these in the
-   environment but only in the do_fetch tasks).
-
-g) access during parsing has to be minimal, a "git ls-remote" for an AUTOREV
-   git recipe might be ok but you can't expect to checkout a git tree.
-
-h) we need to provide revision information during parsing such that a version
-   for the recipe can be constructed.
-
-i) versions are expected to be able to increase in a way which sorts allowing
-   package feeds to operate (see PR server required for git revisions to sort).
-
-j) API to query for possible version upgrades of a url is highly desireable to
-   allow our automated upgrage code to function (it is implied this does always
-   have network access).
-
-k) Where fixes or changes to behaviour in the fetcher are made, we ask that
-   test cases are added (run with "bitbake-selftest bb.tests.fetch"). We do
-   have fairly extensive test coverage of the fetcher as it is the only way
-   to track all of its corner cases, it still doesn't give entire coverage
-   though sadly.
-
-l) If using tools during parse time, they will have to be in ASSUME_PROVIDED
-   in OE's context as we can't build git-native, then parse a recipe and use
-   git ls-remote.
-
-Not all fetchers support all features, autorev is optional and doesn't make
-sense for some. Upgrade detection means different things in different contexts
-too.
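
Point (c) of the deleted README deserves a concrete illustration. A hedged sketch of seeding an offline build from a saved DL_DIR; in practice these assignments live in local.conf (SOURCE_MIRROR_URL path is hypothetical), here expressed against a datastore:

    from bb.data_smart import DataSmart

    d = DataSmart()   # stand-in for the configuration datastore
    d.setVar("SOURCE_MIRROR_URL", "file:///path/to/saved/downloads")
    d.setVar("INHERIT", (d.getVar("INHERIT") or "") + " own-mirrors")
    d.setVar("BB_NO_NETWORK", "1")   # fail loudly if anything still hits the network
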
bitbake/lib/bb/fetch2/__init__.py
@@ -113,7 +113,7 @@ class MissingParameterError(BBFetchException):
         self.args = (missing, url)
 
 class ParameterError(BBFetchException):
-    """Exception raised when a url cannot be processed due to invalid parameters."""
+    """Exception raised when a url cannot be proccessed due to invalid parameters."""
     def __init__(self, message, url):
         msg = "URL: '%s' has invalid parameters. %s" % (url, message)
         self.url = url
@@ -182,7 +182,7 @@ class URI(object):
     Some notes about relative URIs: while it's specified that
     a URI beginning with <scheme>:// should either be directly
     followed by a hostname or a /, the old URI handling of the
-    fetch2 library did not conform to this. Therefore, this URI
+    fetch2 library did not comform to this. Therefore, this URI
     class has some kludges to make sure that URIs are parsed in
     a way comforming to bitbake's current usage. This URI class
     supports the following:
@@ -199,7 +199,7 @@ class URI(object):
        file://hostname/absolute/path.diff (would be IETF compliant)
 
     Note that the last case only applies to a list of
-    explicitly allowed schemes (currently only file://), that requires
+    "whitelisted" schemes (currently only file://), that requires
     its URIs to not have a network location.
     """
 
@@ -290,7 +290,7 @@ class URI(object):
 
     def _param_str_split(self, string, elmdelim, kvdelim="="):
         ret = collections.OrderedDict()
-        for k, v in [x.split(kvdelim, 1) for x in string.split(elmdelim) if x]:
+        for k, v in [x.split(kvdelim, 1) for x in string.split(elmdelim)]:
             ret[k] = v
         return ret
 
@@ -402,24 +402,24 @@ def encodeurl(decoded):
 
     if not type:
         raise MissingParameterError('type', "encoded from the data %s" % str(decoded))
-    url = ['%s://' % type]
+    url = '%s://' % type
     if user and type != "file":
-        url.append("%s" % user)
+        url += "%s" % user
         if pswd:
-            url.append(":%s" % pswd)
-        url.append("@")
+            url += ":%s" % pswd
+        url += "@"
     if host and type != "file":
-        url.append("%s" % host)
+        url += "%s" % host
     if path:
         # Standardise path to ensure comparisons work
         while '//' in path:
             path = path.replace("//", "/")
-        url.append("%s" % urllib.parse.quote(path))
+        url += "%s" % urllib.parse.quote(path)
     if p:
         for parm in p:
-            url.append(";%s=%s" % (parm, p[parm]))
+            url += ";%s=%s" % (parm, p[parm])
 
-    return "".join(url)
+    return url
 
 def uri_replace(ud, uri_find, uri_replace, replacements, d, mirrortarball=None):
     if not ud.url or not uri_find or not uri_replace:
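
A small usage sketch of the encodeurl()/decodeurl() pair touched above; it assumes bitbake's lib/ directory is on sys.path:

    from bb.fetch2 import decodeurl, encodeurl

    decoded = decodeurl("git://git.yoctoproject.org/poky;branch=master")
    # decoded is a (type, host, path, user, pswd, params) tuple
    print(decoded[0], decoded[1])    # -> git git.yoctoproject.org
    print(encodeurl(decoded))        # round-trips back to the original URL
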
@@ -428,9 +428,8 @@ def uri_replace(ud, uri_find, uri_replace, replacements, d, mirrortarball=None):
     uri_decoded = list(decodeurl(ud.url))
     uri_find_decoded = list(decodeurl(uri_find))
     uri_replace_decoded = list(decodeurl(uri_replace))
-    logger.debug2("For url %s comparing %s to %s" % (uri_decoded, uri_find_decoded, uri_replace_decoded))
+    logger.debug(2, "For url %s comparing %s to %s" % (uri_decoded, uri_find_decoded, uri_replace_decoded))
     result_decoded = ['', '', '', '', '', {}]
-    # 0 - type, 1 - host, 2 - path, 3 - user, 4- pswd, 5 - params
     for loc, i in enumerate(uri_find_decoded):
         result_decoded[loc] = uri_decoded[loc]
         regexp = i
@@ -450,9 +449,6 @@ def uri_replace(ud, uri_find, uri_replace, replacements, d, mirrortarball=None):
                     for l in replacements:
                         uri_replace_decoded[loc][k] = uri_replace_decoded[loc][k].replace(l, replacements[l])
                     result_decoded[loc][k] = uri_replace_decoded[loc][k]
-            elif (loc == 3 or loc == 4) and uri_replace_decoded[loc]:
-                # User/password in the replacement is just a straight replacement
-                result_decoded[loc] = uri_replace_decoded[loc]
             elif (re.match(regexp, uri_decoded[loc])):
                 if not uri_replace_decoded[loc]:
                     result_decoded[loc] = ""
@@ -471,21 +467,14 @@ def uri_replace(ud, uri_find, uri_replace, replacements, d, mirrortarball=None):
                     uri_decoded[5] = {}
             elif ud.localpath and ud.method.supports_checksum(ud):
                 basename = os.path.basename(ud.localpath)
-                if basename:
-                    uri_basename = os.path.basename(uri_decoded[loc])
-                    # Prefix with a slash as a sentinel in case
-                    # result_decoded[loc] does not contain one.
-                    path = "/" + result_decoded[loc]
-                    if uri_basename and basename != uri_basename and path.endswith("/" + uri_basename):
-                        result_decoded[loc] = path[1:-len(uri_basename)] + basename
-                    elif not path.endswith("/" + basename):
-                        result_decoded[loc] = os.path.join(path[1:], basename)
+                if basename and not result_decoded[loc].endswith(basename):
+                    result_decoded[loc] = os.path.join(result_decoded[loc], basename)
         else:
             return None
     result = encodeurl(result_decoded)
     if result == ud.url:
         return None
-    logger.debug2("For url %s returning %s" % (ud.url, result))
+    logger.debug(2, "For url %s returning %s" % (ud.url, result))
     return result
 
 methods = []
@@ -510,9 +499,9 @@ def fetcher_init(d):
     # When to drop SCM head revisions controlled by user policy
     srcrev_policy = d.getVar('BB_SRCREV_POLICY') or "clear"
     if srcrev_policy == "cache":
-        logger.debug("Keeping SRCREV cache due to cache policy of: %s", srcrev_policy)
+        logger.debug(1, "Keeping SRCREV cache due to cache policy of: %s", srcrev_policy)
     elif srcrev_policy == "clear":
-        logger.debug("Clearing SRCREV cache due to cache policy of: %s", srcrev_policy)
+        logger.debug(1, "Clearing SRCREV cache due to cache policy of: %s", srcrev_policy)
         revs.clear()
     else:
         raise FetchError("Invalid SRCREV cache policy of: %s" % srcrev_policy)
@@ -545,7 +534,7 @@ def mirror_from_string(data):
         bb.warn('Invalid mirror data %s, should have paired members.' % data)
     return list(zip(*[iter(mirrors)]*2))
 
-def verify_checksum(ud, d, precomputed={}, localpath=None, fatal_nochecksum=True):
+def verify_checksum(ud, d, precomputed={}):
     """
     verify the MD5 and SHA256 checksum for downloaded src
 
@@ -563,22 +552,16 @@ def verify_checksum(ud, d, precomputed={}, localpath=None, fatal_nochecksum=True
     if ud.ignore_checksums or not ud.method.supports_checksum(ud):
         return {}
 
-    if localpath is None:
-        localpath = ud.localpath
-
     def compute_checksum_info(checksum_id):
         checksum_name = getattr(ud, "%s_name" % checksum_id)
 
         if checksum_id in precomputed:
             checksum_data = precomputed[checksum_id]
         else:
-            checksum_data = getattr(bb.utils, "%s_file" % checksum_id)(localpath)
+            checksum_data = getattr(bb.utils, "%s_file" % checksum_id)(ud.localpath)
 
         checksum_expected = getattr(ud, "%s_expected" % checksum_id)
 
-        if checksum_expected == '':
-            checksum_expected = None
-
         return {
             "id": checksum_id,
             "name": checksum_name,
@@ -598,7 +581,7 @@ def verify_checksum(ud, d, precomputed={}, localpath=None, fatal_nochecksum=True
         checksum_lines = ["SRC_URI[%s] = \"%s\"" % (ci["name"], ci["data"])]
 
     # If no checksum has been provided
-    if fatal_nochecksum and ud.method.recommends_checksum(ud) and all(ci["expected"] is None for ci in checksum_infos):
+    if ud.method.recommends_checksum(ud) and all(ci["expected"] is None for ci in checksum_infos):
         messages = []
         strict = d.getVar("BB_STRICT_CHECKSUM") or "0"
 
@@ -629,8 +612,8 @@ def verify_checksum(ud, d, precomputed={}, localpath=None, fatal_nochecksum=True
 
     for ci in checksum_infos:
         if ci["expected"] and ci["expected"] != ci["data"]:
-            messages.append("File: '%s' has %s checksum '%s' when '%s' was " \
-                            "expected" % (localpath, ci["id"], ci["data"], ci["expected"]))
+            messages.append("File: '%s' has %s checksum %s when %s was " \
+                            "expected" % (ud.localpath, ci["id"], ci["data"], ci["expected"]))
             bad_checksum = ci["data"]
 
     if bad_checksum:
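
A hedged sketch of what verify_checksum() above is checking: a recipe supplies an expected digest via a SRC_URI varflag (the values and path here are hypothetical), and the fetcher recomputes it over the downloaded file with the bb.utils helpers:

    import bb.utils

    expected_sha256 = "0123...abcd"   # would come from SRC_URI[sha256sum] in a recipe
    actual_sha256 = bb.utils.sha256_file("/path/to/DL_DIR/foo-1.0.tar.gz")
    if expected_sha256 != actual_sha256:
        print("checksum mismatch - the real code raises ChecksumError here")
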
@@ -768,12 +751,6 @@ def get_srcrev(d, method_name='sortable_revision'):
     that fetcher provides a method with the given name and the same signature as sortable_revision.
     """
 
-    d.setVar("__BBSEENSRCREV", "1")
-    recursion = d.getVar("__BBINSRCREV")
-    if recursion:
-        raise FetchError("There are recursive references in fetcher variables, likely through SRC_URI")
-    d.setVar("__BBINSRCREV", True)
-
     scms = []
     fetcher = Fetch(d.getVar('SRC_URI').split(), d)
     urldata = fetcher.ud
@@ -781,14 +758,13 @@ def get_srcrev(d, method_name='sortable_revision'):
         if urldata[u].method.supports_srcrev():
             scms.append(u)
 
-    if not scms:
+    if len(scms) == 0:
         raise FetchError("SRCREV was used yet no valid SCM was found in SRC_URI")
 
     if len(scms) == 1 and len(urldata[scms[0]].names) == 1:
         autoinc, rev = getattr(urldata[scms[0]].method, method_name)(urldata[scms[0]], d, urldata[scms[0]].names[0])
         if len(rev) > 10:
             rev = rev[:10]
-        d.delVar("__BBINSRCREV")
         if autoinc:
             return "AUTOINC+" + rev
         return rev
@@ -823,49 +799,12 @@ def get_srcrev(d, method_name='sortable_revision'):
     if seenautoinc:
         format = "AUTOINC+" + format
 
-    d.delVar("__BBINSRCREV")
     return format
 
 def localpath(url, d):
     fetcher = bb.fetch2.Fetch([url], d)
     return fetcher.localpath(url)
 
-# Need to export PATH as binary could be in metadata paths
-# rather than host provided
-# Also include some other variables.
-FETCH_EXPORT_VARS = ['HOME', 'PATH',
-                     'HTTP_PROXY', 'http_proxy',
-                     'HTTPS_PROXY', 'https_proxy',
-                     'FTP_PROXY', 'ftp_proxy',
-                     'FTPS_PROXY', 'ftps_proxy',
-                     'NO_PROXY', 'no_proxy',
-                     'ALL_PROXY', 'all_proxy',
-                     'GIT_PROXY_COMMAND',
-                     'GIT_SSH',
-                     'GIT_SSH_COMMAND',
-                     'GIT_SSL_CAINFO',
-                     'GIT_SMART_HTTP',
-                     'SSH_AUTH_SOCK', 'SSH_AGENT_PID',
-                     'SOCKS5_USER', 'SOCKS5_PASSWD',
-                     'DBUS_SESSION_BUS_ADDRESS',
-                     'P4CONFIG',
-                     'SSL_CERT_FILE',
-                     'AWS_PROFILE',
-                     'AWS_ACCESS_KEY_ID',
-                     'AWS_SECRET_ACCESS_KEY',
-                     'AWS_DEFAULT_REGION']
-
-def get_fetcher_environment(d):
-    newenv = {}
-    origenv = d.getVar("BB_ORIGENV")
-    for name in bb.fetch2.FETCH_EXPORT_VARS:
-        value = d.getVar(name)
-        if not value and origenv:
-            value = origenv.getVar(name)
-        if value:
-            newenv[name] = value
-    return newenv
-
 def runfetchcmd(cmd, d, quiet=False, cleanup=None, log=None, workdir=None):
     """
     Run cmd returning the command output
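
get_fetcher_environment() exists only on the yocto-4.1 side of this diff; callers use it to run external network tools with the same proxy/ssh environment the fetcher itself would see. A hedged usage sketch, with "d" standing in for a live datastore:

    import os, subprocess
    import bb.fetch2
    from bb.data_smart import DataSmart

    d = DataSmart()                  # stand-in for the cooker datastore
    env = dict(os.environ)
    env.update(bb.fetch2.get_fetcher_environment(d))
    subprocess.run(["git", "ls-remote", "https://git.yoctoproject.org/poky"],
                   env=env)
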
@@ -874,7 +813,25 @@ def runfetchcmd(cmd, d, quiet=False, cleanup=None, log=None, workdir=None):
     Optionally remove the files/directories listed in cleanup upon failure
     """
 
-    exportvars = FETCH_EXPORT_VARS
+    # Need to export PATH as binary could be in metadata paths
+    # rather than host provided
+    # Also include some other variables.
+    # FIXME: Should really include all export varaiables?
+    exportvars = ['HOME', 'PATH',
+                  'HTTP_PROXY', 'http_proxy',
+                  'HTTPS_PROXY', 'https_proxy',
+                  'FTP_PROXY', 'ftp_proxy',
+                  'FTPS_PROXY', 'ftps_proxy',
+                  'NO_PROXY', 'no_proxy',
+                  'ALL_PROXY', 'all_proxy',
+                  'GIT_PROXY_COMMAND',
+                  'GIT_SSH',
+                  'GIT_SSL_CAINFO',
+                  'GIT_SMART_HTTP',
+                  'SSH_AUTH_SOCK', 'SSH_AGENT_PID',
+                  'SOCKS5_USER', 'SOCKS5_PASSWD',
+                  'DBUS_SESSION_BUS_ADDRESS',
+                  'P4CONFIG']
 
     if not cleanup:
         cleanup = []
@@ -896,13 +853,18 @@ def runfetchcmd(cmd, d, quiet=False, cleanup=None, log=None, workdir=None):
         if val:
             cmd = 'export ' + var + '=\"%s\"; %s' % (val, cmd)
 
+    # Ensure that a _PYTHON_SYSCONFIGDATA_NAME value set by a recipe
+    # (for example via python3native.bbclass since warrior) is not set for
+    # host Python (otherwise tools like git-make-shallow will fail)
+    cmd = 'unset _PYTHON_SYSCONFIGDATA_NAME; ' + cmd
+
     # Disable pseudo as it may affect ssh, potentially causing it to hang.
     cmd = 'export PSEUDO_DISABLED=1; ' + cmd
 
     if workdir:
-        logger.debug("Running '%s' in %s" % (cmd, workdir))
+        logger.debug(1, "Running '%s' in %s" % (cmd, workdir))
     else:
-        logger.debug("Running %s", cmd)
+        logger.debug(1, "Running %s", cmd)
 
     success = False
     error_message = ""
@@ -911,7 +873,7 @@ def runfetchcmd(cmd, d, quiet=False, cleanup=None, log=None, workdir=None):
         (output, errors) = bb.process.run(cmd, log=log, shell=True, stderr=subprocess.PIPE, cwd=workdir)
         success = True
     except bb.process.NotFoundError as e:
-        error_message = "Fetch command %s not found" % (e.command)
+        error_message = "Fetch command %s" % (e.command)
     except bb.process.ExecutionError as e:
         if e.stdout:
             output = "output:\n%s\n%s" % (e.stdout, e.stderr)
@@ -943,7 +905,7 @@ def check_network_access(d, info, url):
     elif not trusted_network(d, url):
         raise UntrustedUrl(url, info)
     else:
-        logger.debug("Fetcher accessed the network with the command %s" % info)
+        logger.debug(1, "Fetcher accessed the network with the command %s" % info)
 
 def build_mirroruris(origud, mirrors, ld):
     uris = []
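
The logger.debug(1, ...) / logger.debug(...) churn repeated throughout these fetcher hunks reflects bb.msg's logger subclass: the 3.2-era BBLogger takes an explicit numeric debug level as the first argument, while 4.1 uses the stdlib signature plus debug2()/debug3() helpers. A hedged sketch; the logger name is illustrative:

    import bb.msg    # installs bitbake's logger class for subsequently created loggers
    import logging

    logger = logging.getLogger("BitBake.Fetcher")
    logger.debug(1, "Running %s", "some fetch command")   # 3.2-era signature
    # yocto-4.1 equivalents: logger.debug("Running %s", ...), logger.debug2(...),
    # logger.debug3(...) - the numeric first argument is gone.
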
@@ -969,7 +931,7 @@ def build_mirroruris(origud, mirrors, ld):
                 continue
 
             if not trusted_network(ld, newuri):
-                logger.debug("Mirror %s not in the list of trusted networks, skipping" % (newuri))
+                logger.debug(1, "Mirror %s not in the list of trusted networks, skipping" % (newuri))
                 continue
 
             # Create a local copy of the mirrors minus the current line
@@ -980,11 +942,10 @@ def build_mirroruris(origud, mirrors, ld):
 
             try:
                 newud = FetchData(newuri, ld)
-                newud.ignore_checksums = True
                 newud.setup_localpath(ld)
             except bb.fetch2.BBFetchException as e:
-                logger.debug("Mirror fetch failure for url %s (original url: %s)" % (newuri, origud.url))
-                logger.debug(str(e))
+                logger.debug(1, "Mirror fetch failure for url %s (original url: %s)" % (newuri, origud.url))
+                logger.debug(1, str(e))
                 try:
                     # setup_localpath of file:// urls may fail, we should still see
                     # if mirrors of the url exist
@@ -1087,8 +1048,8 @@ def try_mirror_url(fetch, origud, ud, ld, check = False):
         elif isinstance(e, NoChecksumError):
             raise
         else:
-            logger.debug("Mirror fetch failure for url %s (original url: %s)" % (ud.url, origud.url))
-            logger.debug(str(e))
+            logger.debug(1, "Mirror fetch failure for url %s (original url: %s)" % (ud.url, origud.url))
+            logger.debug(1, str(e))
         try:
             ud.method.clean(ud, ld)
         except UnboundLocalError:
@@ -1101,8 +1062,6 @@ def try_mirror_url(fetch, origud, ud, ld, check = False):
 
 def ensure_symlink(target, link_name):
     if not os.path.exists(link_name):
-        dirname = os.path.dirname(link_name)
-        bb.utils.mkdirhier(dirname)
         if os.path.islink(link_name):
             # Broken symbolic link
             os.unlink(link_name)
@@ -1186,11 +1145,11 @@ def srcrev_internal_helper(ud, d, name):
     pn = d.getVar("PN")
     attempts = []
     if name != '' and pn:
-        attempts.append("SRCREV_%s:pn-%s" % (name, pn))
+        attempts.append("SRCREV_%s_pn-%s" % (name, pn))
     if name != '':
         attempts.append("SRCREV_%s" % name)
     if pn:
-        attempts.append("SRCREV:pn-%s" % pn)
+        attempts.append("SRCREV_pn-%s" % pn)
     attempts.append("SRCREV")
 
     for a in attempts:
@@ -1226,21 +1185,23 @@ def get_checksum_file_list(d):
     SRC_URI as a space-separated string
     """
     fetch = Fetch([], d, cache = False, localonly = True)
 
+    dl_dir = d.getVar('DL_DIR')
     filelist = []
     for u in fetch.urls:
         ud = fetch.ud[u]
 
         if ud and isinstance(ud.method, local.Local):
-            found = False
             paths = ud.method.localpaths(ud, d)
             for f in paths:
                 pth = ud.decodedurl
-                if os.path.exists(f):
-                    found = True
+                if f.startswith(dl_dir):
+                    # The local fetcher's behaviour is to return a path under DL_DIR if it couldn't find the file anywhere else
+                    if os.path.exists(f):
+                        bb.warn("Getting checksum for %s SRC_URI entry %s: file not found except in DL_DIR" % (d.getVar('PN'), os.path.basename(f)))
+                    else:
+                        bb.warn("Unable to get checksum for %s SRC_URI entry %s: file could not be found" % (d.getVar('PN'), os.path.basename(f)))
                 filelist.append(f + ":" + str(os.path.exists(f)))
-            if not found:
-                bb.fatal(("Unable to get checksum for %s SRC_URI entry %s: file could not be found"
-                        "\nThe following paths were searched:"
-                        "\n%s") % (d.getVar('PN'), os.path.basename(f), '\n'.join(paths)))
 
     return " ".join(filelist)
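
srcrev_internal_helper() above searches a fixed priority order of SRCREV keys, with per-recipe pins first; the delimiter change is the same ':' versus '_' override switch seen earlier. A hedged 3.2-style sketch with hypothetical recipe values:

    from bb.data_smart import DataSmart

    d = DataSmart()
    d.setVar("PN", "mylayer-app")
    # Per-recipe pin, highest priority in the attempts list:
    d.setVar("SRCREV_pn-mylayer-app", "0123456789abcdef0123456789abcdef01234567")
    d.setVar("SRCREV", "${AUTOREV}")   # fallback for everything else
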
@@ -1287,7 +1248,7 @@ class FetchData(object):
 
             if checksum_name in self.parm:
                 checksum_expected = self.parm[checksum_name]
-            elif self.type not in ["http", "https", "ftp", "ftps", "sftp", "s3", "az"]:
+            elif self.type not in ["http", "https", "ftp", "ftps", "sftp", "s3"]:
                 checksum_expected = None
             else:
                 checksum_expected = d.getVarFlag("SRC_URI", checksum_name)
@@ -1478,35 +1439,28 @@ class FetchMethod(object):
         cmd = None
 
         if unpack:
-            tar_cmd = 'tar --extract --no-same-owner'
-            if 'striplevel' in urldata.parm:
-                tar_cmd += ' --strip-components=%s' % urldata.parm['striplevel']
             if file.endswith('.tar'):
-                cmd = '%s -f %s' % (tar_cmd, file)
+                cmd = 'tar x --no-same-owner -f %s' % file
             elif file.endswith('.tgz') or file.endswith('.tar.gz') or file.endswith('.tar.Z'):
-                cmd = '%s -z -f %s' % (tar_cmd, file)
+                cmd = 'tar xz --no-same-owner -f %s' % file
             elif file.endswith('.tbz') or file.endswith('.tbz2') or file.endswith('.tar.bz2'):
-                cmd = 'bzip2 -dc %s | %s -f -' % (file, tar_cmd)
+                cmd = 'bzip2 -dc %s | tar x --no-same-owner -f -' % file
             elif file.endswith('.gz') or file.endswith('.Z') or file.endswith('.z'):
                 cmd = 'gzip -dc %s > %s' % (file, efile)
             elif file.endswith('.bz2'):
                 cmd = 'bzip2 -dc %s > %s' % (file, efile)
             elif file.endswith('.txz') or file.endswith('.tar.xz'):
-                cmd = 'xz -dc %s | %s -f -' % (file, tar_cmd)
+                cmd = 'xz -dc %s | tar x --no-same-owner -f -' % file
             elif file.endswith('.xz'):
                 cmd = 'xz -dc %s > %s' % (file, efile)
             elif file.endswith('.tar.lz'):
-                cmd = 'lzip -dc %s | %s -f -' % (file, tar_cmd)
+                cmd = 'lzip -dc %s | tar x --no-same-owner -f -' % file
             elif file.endswith('.lz'):
                 cmd = 'lzip -dc %s > %s' % (file, efile)
             elif file.endswith('.tar.7z'):
-                cmd = '7z x -so %s | %s -f -' % (file, tar_cmd)
+                cmd = '7z x -so %s | tar x --no-same-owner -f -' % file
             elif file.endswith('.7z'):
                 cmd = '7za x -y %s 1>/dev/null' % file
-            elif file.endswith('.tzst') or file.endswith('.tar.zst'):
-                cmd = 'zstd --decompress --stdout %s | %s -f -' % (file, tar_cmd)
-            elif file.endswith('.zst'):
-                cmd = 'zstd --decompress --stdout %s > %s' % (file, efile)
             elif file.endswith('.zip') or file.endswith('.jar'):
                 try:
                     dos = bb.utils.to_boolean(urldata.parm.get('dos'), False)
@@ -1537,7 +1491,7 @@ class FetchMethod(object):
                 raise UnpackError("Unable to unpack deb/ipk package - does not contain data.tar.* file", urldata.url)
             else:
                 raise UnpackError("Unable to unpack deb/ipk package - could not list contents", urldata.url)
-            cmd = 'ar x %s %s && %s -p -f %s && rm %s' % (file, datafile, tar_cmd, datafile, datafile)
+            cmd = 'ar x %s %s && tar --no-same-owner -xpf %s && rm %s' % (file, datafile, datafile, datafile)
 
         # If 'subdir' param exists, create a dir and use it as destination for unpack cmd
         if 'subdir' in urldata.parm:
@@ -1663,7 +1617,7 @@ class Fetch(object):
         if localonly and cache:
             raise Exception("bb.fetch2.Fetch.__init__: cannot set cache and localonly at same time")
 
-        if not urls:
+        if len(urls) == 0:
             urls = d.getVar("SRC_URI").split()
         self.urls = urls
         self.d = d
@@ -1735,7 +1689,7 @@ class Fetch(object):
                 if m.verify_donestamp(ud, self.d) and not m.need_update(ud, self.d):
                     done = True
                 elif m.try_premirror(ud, self.d):
-                    logger.debug("Trying PREMIRRORS")
+                    logger.debug(1, "Trying PREMIRRORS")
                     mirrors = mirror_from_string(self.d.getVar('PREMIRRORS'))
                     done = m.try_mirrors(self, ud, self.d, mirrors)
                     if done:
@@ -1745,21 +1699,19 @@ class Fetch(object):
                             m.update_donestamp(ud, self.d)
                         except ChecksumError as e:
                             logger.warning("Checksum failure encountered with premirror download of %s - will attempt other sources." % u)
-                            logger.debug(str(e))
+                            logger.debug(1, str(e))
                             done = False
 
                 if premirroronly:
                     self.d.setVar("BB_NO_NETWORK", "1")
 
                 firsterr = None
-                verified_stamp = False
-                if done:
-                    verified_stamp = m.verify_donestamp(ud, self.d)
+                verified_stamp = m.verify_donestamp(ud, self.d)
                 if not done and (not verified_stamp or m.need_update(ud, self.d)):
                     try:
                         if not trusted_network(self.d, ud.url):
                             raise UntrustedUrl(ud.url)
-                        logger.debug("Trying Upstream")
+                        logger.debug(1, "Trying Upstream")
                         m.download(ud, self.d)
                         if hasattr(m, "build_mirror_data"):
                             m.build_mirror_data(ud, self.d)
@@ -1774,19 +1726,19 @@ class Fetch(object):
                     except BBFetchException as e:
                         if isinstance(e, ChecksumError):
                             logger.warning("Checksum failure encountered with download of %s - will attempt other sources if available" % u)
-                            logger.debug(str(e))
+                            logger.debug(1, str(e))
                             if os.path.exists(ud.localpath):
                                 rename_bad_checksum(ud, e.checksum)
                         elif isinstance(e, NoChecksumError):
                             raise
                         else:
                             logger.warning('Failed to fetch URL %s, attempting MIRRORS if available' % u)
-                            logger.debug(str(e))
+                            logger.debug(1, str(e))
                         firsterr = e
                         # Remove any incomplete fetch
                         if not verified_stamp:
                             m.clean(ud, self.d)
-                        logger.debug("Trying MIRRORS")
+                        logger.debug(1, "Trying MIRRORS")
                         mirrors = mirror_from_string(self.d.getVar('MIRRORS'))
                         done = m.try_mirrors(self, ud, self.d, mirrors)
 
@@ -1813,11 +1765,7 @@ class Fetch(object):
 
     def checkstatus(self, urls=None):
         """
-        Check all URLs exist upstream.
-
-        Returns None if the URLs exist, raises FetchError if the check wasn't
-        successful but there wasn't an error (such as file not found), and
-        raises other exceptions in error cases.
+        Check all urls exist upstream
         """
 
         if not urls:
@@ -1827,7 +1775,7 @@ class Fetch(object):
             ud = self.ud[u]
             ud.setup_localpath(self.d)
             m = ud.method
-            logger.debug("Testing URL %s", u)
+            logger.debug(1, "Testing URL %s", u)
             # First try checking uri, u, from PREMIRRORS
             mirrors = mirror_from_string(self.d.getVar('PREMIRRORS'))
             ret = m.try_mirrors(self, ud, self.d, mirrors, True)
@@ -1961,8 +1909,6 @@ from . import repo
 from . import clearcase
 from . import npm
 from . import npmsw
-from . import az
-from . import crate
 
 methods.append(local.Local())
 methods.append(wget.Wget())
@@ -1982,5 +1928,3 @@ methods.append(repo.Repo())
 methods.append(clearcase.ClearCase())
 methods.append(npm.Npm())
 methods.append(npmsw.NpmShrinkWrap())
-methods.append(az.Az())
-methods.append(crate.Crate())
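
The tar_cmd refactoring removed above exists to support the 4.1-only "striplevel" URL parameter. A hedged sketch of the command it produces; the archive name is hypothetical, and in a recipe the trigger would be a SRC_URI entry such as "https://example.com/foo-1.0.tar.gz;striplevel=1":

    tar_cmd = 'tar --extract --no-same-owner --strip-components=1'
    cmd = '%s -z -f %s' % (tar_cmd, 'foo-1.0.tar.gz')
    print(cmd)   # tar --extract --no-same-owner --strip-components=1 -z -f foo-1.0.tar.gz
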
bitbake/lib/bb/fetch2/az.py
@@ -1,93 +0,0 @@
-"""
-BitBake 'Fetch' Azure Storage implementation
-
-"""
-
-# Copyright (C) 2021 Alejandro Hernandez Samaniego
-#
-# Based on bb.fetch2.wget:
-#    Copyright (C) 2003, 2004  Chris Larson
-#
-# SPDX-License-Identifier: GPL-2.0-only
-#
-# Based on functions from the base bb module, Copyright 2003 Holger Schurig
-
-import shlex
-import os
-import bb
-from bb.fetch2 import FetchError
-from bb.fetch2 import logger
-from bb.fetch2.wget import Wget
-
-
-class Az(Wget):
-
-    def supports(self, ud, d):
-        """
-        Check to see if a given url can be fetched from Azure Storage
-        """
-        return ud.type in ['az']
-
-
-    def checkstatus(self, fetch, ud, d, try_again=True):
-
-        # checkstatus discards parameters either way, we need to do this before adding the SAS
-        ud.url = ud.url.replace('az://','https://').split(';')[0]
-
-        az_sas = d.getVar('AZ_SAS')
-        if az_sas and az_sas not in ud.url:
-            ud.url += az_sas
-
-        return Wget.checkstatus(self, fetch, ud, d, try_again)
-
-    # Override download method, include retries
-    def download(self, ud, d, retries=3):
-        """Fetch urls"""
-
-        # If were reaching the account transaction limit we might be refused a connection,
-        # retrying allows us to avoid false negatives since the limit changes over time
-        fetchcmd = self.basecmd + ' --retry-connrefused --waitretry=5'
-
-        # We need to provide a localpath to avoid wget using the SAS
-        # ud.localfile either has the downloadfilename or ud.path
-        localpath = os.path.join(d.getVar("DL_DIR"), ud.localfile)
-        bb.utils.mkdirhier(os.path.dirname(localpath))
-        fetchcmd += " -O %s" % shlex.quote(localpath)
-
-
-        if ud.user and ud.pswd:
-            fetchcmd += " --user=%s --password=%s --auth-no-challenge" % (ud.user, ud.pswd)
-
-        # Check if a Shared Access Signature was given and use it
-        az_sas = d.getVar('AZ_SAS')
-
-        if az_sas:
-            azuri = '%s%s%s%s' % ('https://', ud.host, ud.path, az_sas)
-        else:
-            azuri = '%s%s%s' % ('https://', ud.host, ud.path)
-
-        if os.path.exists(ud.localpath):
-            # file exists, but we didnt complete it.. trying again.
-            fetchcmd += d.expand(" -c -P ${DL_DIR} '%s'" % azuri)
-        else:
-            fetchcmd += d.expand(" -P ${DL_DIR} '%s'" % azuri)
-
-        try:
-            self._runwget(ud, d, fetchcmd, False)
-        except FetchError as e:
-            # Azure fails on handshake sometimes when using wget after some stress, producing a
-            # FetchError from the fetcher, if the artifact exists retyring should succeed
-            if 'Unable to establish SSL connection' in str(e):
-                logger.debug2('Unable to establish SSL connection: Retries remaining: %s, Retrying...' % retries)
-                self.download(ud, d, retries -1)
-
-        # Sanity check since wget can pretend it succeed when it didn't
-        # Also, this used to happen if sourceforge sent us to the mirror page
-        if not os.path.exists(ud.localpath):
-            raise FetchError("The fetch command returned success for url %s but %s doesn't exist?!" % (azuri, ud.localpath), azuri)
-
-        if os.path.getsize(ud.localpath) == 0:
-            os.remove(ud.localpath)
-            raise FetchError("The fetch of %s resulted in a zero size file?! Deleting and failing since this isn't right." % (azuri), azuri)
-
-        return True
bitbake/lib/bb/fetch2/bzr.py
@@ -74,16 +74,16 @@ class Bzr(FetchMethod):
 
         if os.access(os.path.join(ud.pkgdir, os.path.basename(ud.pkgdir), '.bzr'), os.R_OK):
             bzrcmd = self._buildbzrcommand(ud, d, "update")
-            logger.debug("BZR Update %s", ud.url)
+            logger.debug(1, "BZR Update %s", ud.url)
             bb.fetch2.check_network_access(d, bzrcmd, ud.url)
             runfetchcmd(bzrcmd, d, workdir=os.path.join(ud.pkgdir, os.path.basename(ud.path)))
         else:
             bb.utils.remove(os.path.join(ud.pkgdir, os.path.basename(ud.pkgdir)), True)
             bzrcmd = self._buildbzrcommand(ud, d, "fetch")
             bb.fetch2.check_network_access(d, bzrcmd, ud.url)
-            logger.debug("BZR Checkout %s", ud.url)
+            logger.debug(1, "BZR Checkout %s", ud.url)
             bb.utils.mkdirhier(ud.pkgdir)
-            logger.debug("Running %s", bzrcmd)
+            logger.debug(1, "Running %s", bzrcmd)
             runfetchcmd(bzrcmd, d, workdir=ud.pkgdir)
 
         scmdata = ud.parm.get("scmdata", "")
@@ -109,7 +109,7 @@ class Bzr(FetchMethod):
         """
         Return the latest upstream revision number
         """
-        logger.debug2("BZR fetcher hitting network for %s", ud.url)
+        logger.debug(2, "BZR fetcher hitting network for %s", ud.url)
 
         bb.fetch2.check_network_access(d, self._buildbzrcommand(ud, d, "revno"), ud.url)
 
bitbake/lib/bb/fetch2/clearcase.py
@@ -70,7 +70,7 @@ class ClearCase(FetchMethod):
         return ud.type in ['ccrc']
 
     def debug(self, msg):
-        logger.debug("ClearCase: %s", msg)
+        logger.debug(1, "ClearCase: %s", msg)
 
     def urldata_init(self, ud, d):
         """
@@ -1,136 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
"""
BitBake 'Fetch' implementation for crates.io
"""

# Copyright (C) 2016 Doug Goldstein
#
# SPDX-License-Identifier: GPL-2.0-only
#
# Based on functions from the base bb module, Copyright 2003 Holger Schurig

import hashlib
import json
import os
import subprocess
import bb
from bb.fetch2 import logger, subprocess_setup, UnpackError
from bb.fetch2.wget import Wget


class Crate(Wget):

"""Class to fetch crates via wget"""

def _cargo_bitbake_path(self, rootdir):
return os.path.join(rootdir, "cargo_home", "bitbake")

def supports(self, ud, d):
"""
Check to see if a given url is for this fetcher
"""
return ud.type in ['crate']

def recommends_checksum(self, urldata):
return False

def urldata_init(self, ud, d):
"""
Sets up to download the respective crate from crates.io
"""

if ud.type == 'crate':
self._crate_urldata_init(ud, d)

super(Crate, self).urldata_init(ud, d)

def _crate_urldata_init(self, ud, d):
"""
Sets up the download for a crate
"""

# URL syntax is: crate://NAME/VERSION
# break the URL apart by /
parts = ud.url.split('/')
if len(parts) < 5:
raise bb.fetch2.ParameterError("Invalid URL: Must be crate://HOST/NAME/VERSION", ud.url)

# last field is version
version = parts[len(parts) - 1]
# second to last field is name
name = parts[len(parts) - 2]
# host (this is to allow custom crate registries to be specified
host = '/'.join(parts[2:len(parts) - 2])

# if using upstream just fix it up nicely
if host == 'crates.io':
host = 'crates.io/api/v1/crates'

ud.url = "https://%s/%s/%s/download" % (host, name, version)
ud.parm['downloadfilename'] = "%s-%s.crate" % (name, version)
ud.parm['name'] = name

logger.debug2("Fetching %s to %s" % (ud.url, ud.parm['downloadfilename']))

def unpack(self, ud, rootdir, d):
"""
Uses the crate to build the necessary paths for cargo to utilize it
"""
if ud.type == 'crate':
return self._crate_unpack(ud, rootdir, d)
else:
super(Crate, self).unpack(ud, rootdir, d)

def _crate_unpack(self, ud, rootdir, d):
"""
Unpacks a crate
"""
thefile = ud.localpath

# possible metadata we need to write out
metadata = {}

# change to the rootdir to unpack but save the old working dir
save_cwd = os.getcwd()
os.chdir(rootdir)

pn = d.getVar('BPN')
if pn == ud.parm.get('name'):
cmd = "tar -xz --no-same-owner -f %s" % thefile
else:
cargo_bitbake = self._cargo_bitbake_path(rootdir)

cmd = "tar -xz --no-same-owner -f %s -C %s" % (thefile, cargo_bitbake)

# ensure we've got these paths made
bb.utils.mkdirhier(cargo_bitbake)

# generate metadata necessary
with open(thefile, 'rb') as f:
# get the SHA256 of the original tarball
tarhash = hashlib.sha256(f.read()).hexdigest()

metadata['files'] = {}
metadata['package'] = tarhash

path = d.getVar('PATH')
if path:
cmd = "PATH=\"%s\" %s" % (path, cmd)
bb.note("Unpacking %s to %s/" % (thefile, os.getcwd()))

ret = subprocess.call(cmd, preexec_fn=subprocess_setup, shell=True)

os.chdir(save_cwd)

if ret != 0:
raise UnpackError("Unpack command %s failed with return value %s" % (cmd, ret), ud.url)

# if we have metadata to write out..
if len(metadata) > 0:
cratepath = os.path.splitext(os.path.basename(thefile))[0]
bbpath = self._cargo_bitbake_path(rootdir)
mdfile = '.cargo-checksum.json'
mdpath = os.path.join(bbpath, cratepath, mdfile)
with open(mdpath, "w") as f:
json.dump(metadata, f)
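
For context on the file removed above: the crate fetcher turns a crate://HOST/NAME/VERSION URL into a plain HTTPS download. A minimal standalone sketch of that mapping (the crate_download_url helper name is invented for illustration, it is not BitBake API):

def crate_download_url(url):
    # crate://HOST/NAME/VERSION, e.g. crate://crates.io/glob/0.2.11
    parts = url.split('/')
    version, name = parts[-1], parts[-2]
    host = '/'.join(parts[2:-2])
    if host == 'crates.io':
        host = 'crates.io/api/v1/crates'
    return "https://%s/%s/%s/download" % (host, name, version)

crate_download_url("crate://crates.io/glob/0.2.11")
# -> 'https://crates.io/api/v1/crates/glob/0.2.11/download'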
@@ -109,7 +109,7 @@ class Cvs(FetchMethod):
cvsupdatecmd = "CVS_RSH=\"%s\" %s" % (cvs_rsh, cvsupdatecmd)

# create module directory
logger.debug2("Fetch: checking for module directory")
logger.debug(2, "Fetch: checking for module directory")
moddir = os.path.join(ud.pkgdir, localdir)
workdir = None
if os.access(os.path.join(moddir, 'CVS'), os.R_OK):
@@ -123,7 +123,7 @@ class Cvs(FetchMethod):
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
workdir = ud.pkgdir
logger.debug("Running %s", cvscmd)
logger.debug(1, "Running %s", cvscmd)
bb.fetch2.check_network_access(d, cvscmd, ud.url)
cmd = cvscmd


@@ -68,15 +68,11 @@ import subprocess
import tempfile
import bb
import bb.progress
from contextlib import contextmanager
from bb.fetch2 import FetchMethod
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger


sha1_re = re.compile(r'^[0-9a-f]{40}$')
slash_re = re.compile(r"/+")

class GitProgressHandler(bb.progress.LineFilterProgressHandler):
"""Extract progress information from git output"""
def __init__(self, d):
@@ -145,11 +141,6 @@ class Git(FetchMethod):
ud.proto = 'file'
else:
ud.proto = "git"
if ud.host == "github.com" and ud.proto == "git":
# github stopped supporting git protocol
# https://github.blog/2021-09-01-improving-git-protocol-security-github/#no-more-unauthenticated-git
ud.proto = "https"
bb.warn("URL: %s uses git protocol which is no longer supported by github. Please change to ;protocol=https in the url." % ud.url)

if not ud.proto in ('git', 'file', 'ssh', 'http', 'https', 'rsync'):
raise bb.fetch2.ParameterError("Invalid protocol type", ud.url)
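
The github hunk above only rewrites the protocol in memory; the recipe itself still needs fixing. A hedched, standalone sketch of the check with illustrative values (not the fetcher's actual entry point):

def github_needs_https(host, proto):
    # github dropped unauthenticated git:// in 2021, so git://github.com/...
    # SRC_URIs must carry ;protocol=https
    return host == "github.com" and proto == "git"

github_needs_https("github.com", "git")    # True -> warn and switch to https
github_needs_https("github.com", "https")  # False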
@@ -173,18 +164,11 @@ class Git(FetchMethod):
ud.nocheckout = 1

ud.unresolvedrev = {}
branches = ud.parm.get("branch", "").split(',')
if branches == [""] and not ud.nobranch:
bb.warn("URL: %s does not set any branch parameter. The future default branch used by tools and repositories is uncertain and we will therefore soon require this is set in all git urls." % ud.url)
branches = ["master"]
branches = ud.parm.get("branch", "master").split(',')
if len(branches) != len(ud.names):
raise bb.fetch2.ParameterError("The number of name and branch parameters is not balanced", ud.url)

ud.noshared = d.getVar("BB_GIT_NOSHARED") == "1"

ud.cloneflags = "-n"
if not ud.noshared:
ud.cloneflags += " -s"
ud.cloneflags = "-s -n"
if ud.bareclone:
ud.cloneflags += " --mirror"

@@ -236,14 +220,9 @@ class Git(FetchMethod):
ud.shallow = False

if ud.usehead:
# When usehead is set let's associate 'HEAD' with the unresolved
# rev of this repository. This will get resolved into a revision
# later. If an actual revision happens to have also been provided
# then this setting will be overridden.
for name in ud.names:
ud.unresolvedrev[name] = 'HEAD'
ud.unresolvedrev['default'] = 'HEAD'

ud.basecmd = d.getVar("FETCHCMD_git") or "git -c gc.autoDetach=false -c core.pager=cat"
ud.basecmd = d.getVar("FETCHCMD_git") or "git -c core.fsyncobjectfiles=0"

write_tarballs = d.getVar("BB_GENERATE_MIRROR_TARBALLS") or "0"
ud.write_tarballs = write_tarballs != "0" or ud.rebaseable
@@ -252,8 +231,8 @@ class Git(FetchMethod):
ud.setup_revisions(d)

for name in ud.names:
# Ensure any revision that doesn't look like a SHA-1 is translated into one
if not sha1_re.match(ud.revisions[name] or ''):
# Ensure anything that doesn't look like a sha256 checksum/revision is translated into one
if not ud.revisions[name] or len(ud.revisions[name]) != 40 or (False in [c in "abcdef0123456789" for c in ud.revisions[name]]):
if ud.revisions[name]:
ud.unresolvedrev[name] = ud.revisions[name]
ud.revisions[name] = self.latest_revision(ud, d, name)
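
The yocto-4.1 side of the hunk above replaces the character-by-character test with a precompiled regex; the two are equivalent, as this small self-contained check shows:

import re

sha1_re = re.compile(r'^[0-9a-f]{40}$')

def looks_like_sha1(rev):
    # The older manual test: non-empty, exactly 40 chars, all lowercase hex
    return bool(rev) and len(rev) == 40 and all(c in "abcdef0123456789" for c in rev)

assert bool(sha1_re.match("a" * 40)) == looks_like_sha1("a" * 40)
assert bool(sha1_re.match("not-a-rev")) == looks_like_sha1("not-a-rev")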
@@ -262,10 +241,10 @@ class Git(FetchMethod):
if gitsrcname.startswith('.'):
gitsrcname = gitsrcname[1:]

# For a rebaseable git repo, it is necessary to keep a mirror tar ball
# per revision, so that even if the revision disappears from the
# for rebaseable git repo, it is necessary to keep mirror tar ball
# per revision, so that even the revision disappears from the
# upstream repo in the future, the mirror will remain intact and still
# contain the revision
# contains the revision
if ud.rebaseable:
for name in ud.names:
gitsrcname = gitsrcname + '_' + ud.revisions[name]
@@ -353,15 +332,10 @@ class Git(FetchMethod):
if ud.shallow and os.path.exists(ud.fullshallow) and self.need_update(ud, d):
ud.localpath = ud.fullshallow
return
elif os.path.exists(ud.fullmirror) and self.need_update(ud, d):
if not os.path.exists(ud.clonedir):
bb.utils.mkdirhier(ud.clonedir)
runfetchcmd("tar -xzf %s" % ud.fullmirror, d, workdir=ud.clonedir)
else:
tmpdir = tempfile.mkdtemp(dir=d.getVar('DL_DIR'))
runfetchcmd("tar -xzf %s" % ud.fullmirror, d, workdir=tmpdir)
fetch_cmd = "LANG=C %s fetch -f --progress %s " % (ud.basecmd, shlex.quote(tmpdir))
runfetchcmd(fetch_cmd, d, workdir=ud.clonedir)
elif os.path.exists(ud.fullmirror) and not os.path.exists(ud.clonedir):
bb.utils.mkdirhier(ud.clonedir)
runfetchcmd("tar -xzf %s" % ud.fullmirror, d, workdir=ud.clonedir)

repourl = self._get_repo_url(ud)

# If the repo still doesn't exist, fallback to cloning it
@@ -405,50 +379,7 @@ class Git(FetchMethod):
if missing_rev:
raise bb.fetch2.FetchError("Unable to find revision %s even from upstream" % missing_rev)

if self._contains_lfs(ud, d, ud.clonedir) and self._need_lfs(ud):
# Unpack temporary working copy, use it to run 'git checkout' to force pre-fetching
# of all LFS blobs needed at the srcrev.
#
# It would be nice to just do this inline here by running 'git-lfs fetch'
# on the bare clonedir, but that operation requires a working copy on some
# releases of Git LFS.
tmpdir = tempfile.mkdtemp(dir=d.getVar('DL_DIR'))
try:
# Do the checkout. This implicitly involves a Git LFS fetch.
Git.unpack(self, ud, tmpdir, d)

# Scoop up a copy of any stuff that Git LFS downloaded. Merge them into
# the bare clonedir.
#
# As this procedure is invoked repeatedly on incremental fetches as
# a recipe's SRCREV is bumped throughout its lifetime, this will
# result in a gradual accumulation of LFS blobs in <ud.clonedir>/lfs
# corresponding to all the blobs reachable from the different revs
# fetched across time.
#
# Only do this if the unpack resulted in a .git/lfs directory being
# created; this only happens if at least one blob needed to be
# downloaded.
if os.path.exists(os.path.join(tmpdir, "git", ".git", "lfs")):
runfetchcmd("tar -cf - lfs | tar -xf - -C %s" % ud.clonedir, d, workdir="%s/git/.git" % tmpdir)
finally:
bb.utils.remove(tmpdir, recurse=True)

def build_mirror_data(self, ud, d):

# Create as a temp file and move atomically into position to avoid races
@contextmanager
def create_atomic(filename):
fd, tfile = tempfile.mkstemp(dir=os.path.dirname(filename))
try:
yield tfile
umask = os.umask(0o666)
os.umask(umask)
os.chmod(tfile, (0o666 & ~umask))
os.rename(tfile, filename)
finally:
os.close(fd)

if ud.shallow and ud.write_shallow_tarballs:
if not os.path.exists(ud.fullshallow):
if os.path.islink(ud.fullshallow):
@@ -459,8 +390,7 @@ class Git(FetchMethod):
self.clone_shallow_local(ud, shallowclone, d)

logger.info("Creating tarball of git repository")
with create_atomic(ud.fullshallow) as tfile:
runfetchcmd("tar -czf %s ." % tfile, d, workdir=shallowclone)
runfetchcmd("tar -czf %s ." % ud.fullshallow, d, workdir=shallowclone)
runfetchcmd("touch %s.done" % ud.fullshallow, d)
finally:
bb.utils.remove(tempdir, recurse=True)
@@ -469,11 +399,7 @@ class Git(FetchMethod):
os.unlink(ud.fullmirror)

logger.info("Creating tarball of git repository")
with create_atomic(ud.fullmirror) as tfile:
mtime = runfetchcmd("git log --all -1 --format=%cD", d,
quiet=True, workdir=ud.clonedir)
runfetchcmd("tar -czf %s --owner oe:0 --group oe:0 --mtime \"%s\" ."
% (tfile, mtime), d, workdir=ud.clonedir)
runfetchcmd("tar -czf %s ." % ud.fullmirror, d, workdir=ud.clonedir)
runfetchcmd("touch %s.done" % ud.fullmirror, d)

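The create_atomic helper above is the standard write-then-rename pattern; a simplified standalone sketch, without the umask/chmod handling of the original:

import os
import tempfile
from contextlib import contextmanager

@contextmanager
def create_atomic(filename):
    # Write to a temp file in the target directory, then rename into place;
    # rename() is atomic within a filesystem, so readers never observe a
    # half-written tarball
    fd, tfile = tempfile.mkstemp(dir=os.path.dirname(filename) or ".")
    try:
        yield tfile
        os.rename(tfile, filename)
    finally:
        os.close(fd)

with create_atomic("mirror.tar.gz") as tmpname:
    with open(tmpname, "wb") as f:
        f.write(b"tar data")
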
def clone_shallow_local(self, ud, dest, d):
@@ -535,31 +461,20 @@ class Git(FetchMethod):
def unpack(self, ud, destdir, d):
""" unpack the downloaded src to destdir"""

subdir = ud.parm.get("subdir")
subpath = ud.parm.get("subpath")
readpathspec = ""
def_destsuffix = "git/"

if subpath:
readpathspec = ":%s" % subpath
def_destsuffix = "%s/" % os.path.basename(subpath.rstrip('/'))

if subdir:
# If 'subdir' param exists, create a dir and use it as destination for unpack cmd
if os.path.isabs(subdir):
if not os.path.realpath(subdir).startswith(os.path.realpath(destdir)):
raise bb.fetch2.UnpackError("subdir argument isn't a subdirectory of unpack root %s" % destdir, ud.url)
destdir = subdir
else:
destdir = os.path.join(destdir, subdir)
def_destsuffix = ""
subdir = ud.parm.get("subpath", "")
if subdir != "":
readpathspec = ":%s" % subdir
def_destsuffix = "%s/" % os.path.basename(subdir.rstrip('/'))
else:
readpathspec = ""
def_destsuffix = "git/"

destsuffix = ud.parm.get("destsuffix", def_destsuffix)
destdir = ud.destdir = os.path.join(destdir, destsuffix)
if os.path.exists(destdir):
bb.utils.prunedir(destdir)

need_lfs = self._need_lfs(ud)
need_lfs = ud.parm.get("lfs", "1") == "1"

if not need_lfs:
ud.basecmd = "GIT_LFS_SKIP_SMUDGE=1 " + ud.basecmd
@@ -567,12 +482,13 @@ class Git(FetchMethod):
source_found = False
source_error = []

clonedir_is_up_to_date = not self.clonedir_need_update(ud, d)
if clonedir_is_up_to_date:
runfetchcmd("%s clone %s %s/ %s" % (ud.basecmd, ud.cloneflags, ud.clonedir, destdir), d)
source_found = True
else:
source_error.append("clone directory not available or not up to date: " + ud.clonedir)
if not source_found:
clonedir_is_up_to_date = not self.clonedir_need_update(ud, d)
if clonedir_is_up_to_date:
runfetchcmd("%s clone %s %s/ %s" % (ud.basecmd, ud.cloneflags, ud.clonedir, destdir), d)
source_found = True
else:
source_error.append("clone directory not available or not up to date: " + ud.clonedir)

if not source_found:
if ud.shallow:
@@ -598,7 +514,7 @@ class Git(FetchMethod):
bb.note("Repository %s has LFS content but it is not being fetched" % (repourl))

if not ud.nocheckout:
if subpath:
if subdir != "":
runfetchcmd("%s read-tree %s%s" % (ud.basecmd, ud.revisions[ud.names[0]], readpathspec), d,
workdir=destdir)
runfetchcmd("%s checkout-index -q -f -a" % ud.basecmd, d, workdir=destdir)
@@ -647,9 +563,6 @@ class Git(FetchMethod):
raise bb.fetch2.FetchError("The command '%s' gave output with more then 1 line unexpectedly, output: '%s'" % (cmd, output))
return output.split()[0] != "0"

def _need_lfs(self, ud):
return ud.parm.get("lfs", "1") == "1"

def _contains_lfs(self, ud, d, wd):
"""
Check if the repository has 'lfs' (large file) content
@@ -660,14 +573,8 @@ class Git(FetchMethod):
else:
branchname = "master"

# The bare clonedir doesn't use the remote names; it has the branch immediately.
if wd == ud.clonedir:
refname = ud.branches[ud.names[0]]
else:
refname = "origin/%s" % ud.branches[ud.names[0]]

cmd = "%s grep lfs %s:.gitattributes | wc -l" % (
ud.basecmd, refname)
cmd = "%s grep lfs origin/%s:.gitattributes | wc -l" % (
ud.basecmd, ud.branches[ud.names[0]])

try:
output = runfetchcmd(cmd, d, quiet=True, workdir=wd)
@@ -688,11 +595,6 @@ class Git(FetchMethod):
"""
Return the repository URL
"""
# Note that we do not support passwords directly in the git urls. There are several
# reasons. SRC_URI can be written out to things like buildhistory and people don't
# want to leak passwords like that. Its also all too easy to share metadata without
# removing the password. ssh keys, ~/.netrc and ~/.ssh/config files can be used as
# alternatives so we will not take patches adding password support here.
if ud.user:
username = ud.user + '@'
else:
@@ -704,6 +606,7 @@ class Git(FetchMethod):
Return a unique key for the url
"""
# Collapse adjacent slashes
slash_re = re.compile(r"/+")
return "git:" + ud.host + slash_re.sub(".", ud.path) + ud.unresolvedrev[name]

def _lsremote(self, ud, d, search):
@@ -736,12 +639,6 @@ class Git(FetchMethod):
"""
Compute the HEAD revision for the url
"""
if not d.getVar("__BBSEENSRCREV"):
raise bb.fetch2.FetchError("Recipe uses a floating tag/branch '%s' for repo '%s' without a fixed SRCREV yet doesn't call bb.fetch2.get_srcrev() (use SRCPV in PV for OE)." % (ud.unresolvedrev[name], ud.host+ud.path))

# Ensure we mark as not cached
bb.fetch2.get_autorev(d)

output = self._lsremote(ud, d, "")
# Tags of the form ^{} may not work, need to fallback to other form
if ud.unresolvedrev[name][:5] == "refs/" or ud.usehead:

@@ -78,7 +78,7 @@ class GitSM(Git):
module_hash = ""

if not module_hash:
logger.debug("submodule %s is defined, but is not initialized in the repository. Skipping", m)
logger.debug(1, "submodule %s is defined, but is not initialized in the repository. Skipping", m)
continue

submodules.append(m)
@@ -88,7 +88,7 @@ class GitSM(Git):
subrevision[m] = module_hash.split()[2]

# Convert relative to absolute uri based on parent uri
if uris[m].startswith('..') or uris[m].startswith('./'):
if uris[m].startswith('..'):
newud = copy.copy(ud)
newud.path = os.path.realpath(os.path.join(newud.path, uris[m]))
uris[m] = Git._get_repo_url(self, newud)
@@ -115,9 +115,6 @@ class GitSM(Git):
# This has to be a file reference
proto = "file"
url = "gitsm://" + uris[module]
if url.endswith("{}{}".format(ud.host, ud.path)):
raise bb.fetch2.FetchError("Submodule refers to the parent repository. This will cause deadlock situation in current version of Bitbake." \
"Consider using git fetcher instead.")

url += ';protocol=%s' % proto
url += ";name=%s" % module
@@ -143,6 +140,16 @@ class GitSM(Git):
if Git.need_update(self, ud, d):
return True

try:
# Check for the nugget dropped by the download operation
known_srcrevs = runfetchcmd("%s config --get-all bitbake.srcrev" % \
(ud.basecmd), d, workdir=ud.clonedir)

if ud.revisions[ud.names[0]] in known_srcrevs.split():
return False
except bb.fetch2.FetchError:
pass

need_update_list = []
def need_update_submodule(ud, url, module, modpath, workdir, d):
url += ";bareclone=1;nobranch=1"
@@ -165,9 +172,14 @@ class GitSM(Git):
shutil.rmtree(tmpdir)
else:
self.process_submodules(ud, ud.clonedir, need_update_submodule, d)
if len(need_update_list) == 0:
# We already have the required commits of all submodules. Drop
# a nugget so we don't need to check again.
runfetchcmd("%s config --add bitbake.srcrev %s" % \
(ud.basecmd, ud.revisions[ud.names[0]]), d, workdir=ud.clonedir)

if need_update_list:
logger.debug('gitsm: Submodules requiring update: %s' % (' '.join(need_update_list)))
if len(need_update_list) > 0:
logger.debug(1, 'gitsm: Submodules requiring update: %s' % (' '.join(need_update_list)))
return True

return False
@@ -197,6 +209,9 @@ class GitSM(Git):
shutil.rmtree(tmpdir)
else:
self.process_submodules(ud, ud.clonedir, download_submodule, d)
# Drop a nugget for the srcrev we've fetched (used by need_update)
runfetchcmd("%s config --add bitbake.srcrev %s" % \
(ud.basecmd, ud.revisions[ud.names[0]]), d, workdir=ud.clonedir)

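The "nugget" used in the gitsm hunks above is nothing more than a multi-valued key in the bare clone's git config; a sketch of the mechanism with plain subprocess calls (the clonedir path is illustrative):

import subprocess

def mark_srcrev_fetched(clonedir, srcrev):
    # Record that this revision's submodules have all been fetched
    subprocess.run(["git", "config", "--add", "bitbake.srcrev", srcrev],
                   cwd=clonedir, check=True)

def srcrev_already_fetched(clonedir, srcrev):
    out = subprocess.run(["git", "config", "--get-all", "bitbake.srcrev"],
                         cwd=clonedir, capture_output=True, text=True)
    return srcrev in out.stdout.split()
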
def unpack(self, ud, destdir, d):
def unpack_submodules(ud, url, module, modpath, workdir, d):

@@ -150,7 +150,7 @@ class Hg(FetchMethod):
def download(self, ud, d):
"""Fetch url"""

logger.debug2("Fetch: checking for module directory '" + ud.moddir + "'")
logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")

# If the checkout doesn't exist and the mirror tarball does, extract it
if not os.path.exists(ud.pkgdir) and os.path.exists(ud.fullmirror):
@@ -160,7 +160,7 @@ class Hg(FetchMethod):
if os.access(os.path.join(ud.moddir, '.hg'), os.R_OK):
# Found the source, check whether need pull
updatecmd = self._buildhgcommand(ud, d, "update")
logger.debug("Running %s", updatecmd)
logger.debug(1, "Running %s", updatecmd)
try:
runfetchcmd(updatecmd, d, workdir=ud.moddir)
except bb.fetch2.FetchError:
@@ -168,7 +168,7 @@ class Hg(FetchMethod):
pullcmd = self._buildhgcommand(ud, d, "pull")
logger.info("Pulling " + ud.url)
# update sources there
logger.debug("Running %s", pullcmd)
logger.debug(1, "Running %s", pullcmd)
bb.fetch2.check_network_access(d, pullcmd, ud.url)
runfetchcmd(pullcmd, d, workdir=ud.moddir)
try:
@@ -183,14 +183,14 @@ class Hg(FetchMethod):
logger.info("Fetch " + ud.url)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
logger.debug("Running %s", fetchcmd)
logger.debug(1, "Running %s", fetchcmd)
bb.fetch2.check_network_access(d, fetchcmd, ud.url)
runfetchcmd(fetchcmd, d, workdir=ud.pkgdir)

# Even when we clone (fetch), we still need to update as hg's clone
# won't checkout the specified revision if its on a branch
updatecmd = self._buildhgcommand(ud, d, "update")
logger.debug("Running %s", updatecmd)
logger.debug(1, "Running %s", updatecmd)
runfetchcmd(updatecmd, d, workdir=ud.moddir)

def clean(self, ud, d):
@@ -247,9 +247,9 @@ class Hg(FetchMethod):
if scmdata != "nokeep":
proto = ud.parm.get('protocol', 'http')
if not os.access(os.path.join(codir, '.hg'), os.R_OK):
logger.debug2("Unpack: creating new hg repository in '" + codir + "'")
logger.debug(2, "Unpack: creating new hg repository in '" + codir + "'")
runfetchcmd("%s init %s" % (ud.basecmd, codir), d)
logger.debug2("Unpack: updating source in '" + codir + "'")
logger.debug(2, "Unpack: updating source in '" + codir + "'")
if ud.user and ud.pswd:
runfetchcmd("%s --config auth.default.prefix=* --config auth.default.username=%s --config auth.default.password=%s --config \"auth.default.schemes=%s\" pull %s" % (ud.basecmd, ud.user, ud.pswd, proto, ud.moddir), d, workdir=codir)
else:
@@ -259,5 +259,5 @@ class Hg(FetchMethod):
else:
runfetchcmd("%s up -C %s" % (ud.basecmd, revflag), d, workdir=codir)
else:
logger.debug2("Unpack: extracting source to '" + codir + "'")
logger.debug(2, "Unpack: extracting source to '" + codir + "'")
runfetchcmd("%s archive -t files %s %s" % (ud.basecmd, revflag, codir), d, workdir=ud.moddir)

@@ -54,9 +54,15 @@ class Local(FetchMethod):
return [path]
filespath = d.getVar('FILESPATH')
if filespath:
logger.debug2("Searching for %s in paths:\n    %s" % (path, "\n    ".join(filespath.split(":"))))
logger.debug(2, "Searching for %s in paths:\n    %s" % (path, "\n    ".join(filespath.split(":"))))
newpath, hist = bb.utils.which(filespath, path, history=True)
searched.extend(hist)
if not os.path.exists(newpath):
dldirfile = os.path.join(d.getVar("DL_DIR"), path)
logger.debug(2, "Defaulting to %s for %s" % (dldirfile, path))
bb.utils.mkdirhier(os.path.dirname(dldirfile))
searched.append(dldirfile)
return searched
return searched

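The added lines above give file:// URLs a DL_DIR fallback after the FILESPATH search; the resulting search order, sketched standalone:

import os

def candidate_locations(filespath, dl_dir, path):
    # FILESPATH entries are tried first, DL_DIR is appended as a last resort
    locations = (filespath or "").split(":")
    locations.append(dl_dir)
    return [os.path.join(loc, path) for loc in locations if loc]

candidate_locations("/layer/files:/layer", "/downloads", "defconfig")
# ['/layer/files/defconfig', '/layer/defconfig', '/downloads/defconfig']
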
def need_update(self, ud, d):
@@ -72,6 +78,8 @@ class Local(FetchMethod):
filespath = d.getVar('FILESPATH')
if filespath:
locations = filespath.split(":")
locations.append(d.getVar("DL_DIR"))

msg = "Unable to find file " + urldata.url + " anywhere. The paths that were searched were:\n    " + "\n    ".join(locations)
raise FetchError(msg)


@@ -52,13 +52,9 @@ def npm_filename(package, version):
"""Get the filename of a npm package"""
return npm_package(package) + "-" + version + ".tgz"

def npm_localfile(package, version=None):
def npm_localfile(package, version):
"""Get the local filename of a npm package"""
if version is not None:
filename = npm_filename(package, version)
else:
filename = package
return os.path.join("npm2", filename)
return os.path.join("npm2", npm_filename(package, version))

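With the optional version argument, npm_localfile can also name pre-built tarballs; a simplified sketch of both behaviours (the real helper additionally normalises scoped package names via npm_package):

import os

def npm_localfile(package, version=None):
    # With a version, derive "<name>-<version>.tgz"; without one, the argument
    # is treated as an already-formed filename (e.g. a downloadfilename)
    filename = "%s-%s.tgz" % (package, version) if version is not None else package
    return os.path.join("npm2", filename)

npm_localfile("array-flatten", "1.1.1")  # 'npm2/array-flatten-1.1.1.tgz'
npm_localfile("custom-name.tgz")         # 'npm2/custom-name.tgz'
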
def npm_integrity(integrity):
"""
@@ -73,31 +69,17 @@ def npm_unpack(tarball, destdir, d):
bb.utils.mkdirhier(destdir)
cmd = "tar --extract --gzip --file=%s" % shlex.quote(tarball)
cmd += " --no-same-owner"
cmd += " --delay-directory-restore"
cmd += " --strip-components=1"
runfetchcmd(cmd, d, workdir=destdir)
runfetchcmd("chmod -R +X '%s'" % (destdir), d, quiet=True, workdir=destdir)

class NpmEnvironment(object):
"""
Using a npm config file seems more reliable than using cli arguments.
This class allows to create a controlled environment for npm commands.
"""
def __init__(self, d, configs=[], npmrc=None):
def __init__(self, d, configs=None):
self.d = d

self.user_config = tempfile.NamedTemporaryFile(mode="w", buffering=1)
for key, value in configs:
self.user_config.write("%s=%s\n" % (key, value))

if npmrc:
self.global_config_name = npmrc
else:
self.global_config_name = "/dev/null"

def __del__(self):
if self.user_config:
self.user_config.close()
self.configs = configs

def run(self, cmd, args=None, configs=None, workdir=None):
"""Run npm command in a controlled environment"""
@@ -105,19 +87,23 @@ class NpmEnvironment(object):
d = bb.data.createCopy(self.d)
d.setVar("HOME", tmpdir)

cfgfile = os.path.join(tmpdir, "npmrc")

if not workdir:
workdir = tmpdir

def _run(cmd):
cmd = "NPM_CONFIG_USERCONFIG=%s " % (self.user_config.name) + cmd
cmd = "NPM_CONFIG_GLOBALCONFIG=%s " % (self.global_config_name) + cmd
cmd = "NPM_CONFIG_USERCONFIG=%s " % cfgfile + cmd
cmd = "NPM_CONFIG_GLOBALCONFIG=%s " % cfgfile + cmd
return runfetchcmd(cmd, d, workdir=workdir)

if self.configs:
for key, value in self.configs:
_run("npm config set %s %s" % (key, shlex.quote(value)))

if configs:
bb.warn("Use of configs argument of NpmEnvironment.run() function"
" is deprecated. Please use args argument instead.")
for key, value in configs:
cmd += " --%s=%s" % (key, shlex.quote(value))
_run("npm config set %s %s" % (key, shlex.quote(value)))

if args:
for key, value in args:
@@ -156,12 +142,12 @@ class Npm(FetchMethod):
raise ParameterError("Invalid 'version' parameter", ud.url)

# Extract the 'registry' part of the url
ud.registry = re.sub(r"^npm://", "https://", ud.url.split(";")[0])
ud.registry = re.sub(r"^npm://", "http://", ud.url.split(";")[0])

# Using the 'downloadfilename' parameter as local filename
# or the npm package name.
if "downloadfilename" in ud.parm:
ud.localfile = npm_localfile(d.expand(ud.parm["downloadfilename"]))
ud.localfile = d.expand(ud.parm["downloadfilename"])
else:
ud.localfile = npm_localfile(ud.package, ud.version)

@@ -179,14 +165,14 @@ class Npm(FetchMethod):

def _resolve_proxy_url(self, ud, d):
def _npm_view():
args = []
args.append(("json", "true"))
args.append(("registry", ud.registry))
configs = []
configs.append(("json", "true"))
configs.append(("registry", ud.registry))
pkgver = shlex.quote(ud.package + "@" + ud.version)
cmd = ud.basecmd + " view %s" % pkgver
env = NpmEnvironment(d)
check_network_access(d, cmd, ud.registry)
view_string = env.run(cmd, args=args)
view_string = env.run(cmd, configs=configs)

if not view_string:
raise FetchError("Unavailable package %s" % pkgver, ud.url)

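The _npm_view change above passes settings as command-line arguments instead of running "npm config set" first; a sketch of how such (key, value) pairs end up on the command line (simplified from NpmEnvironment.run):

import shlex

def append_npm_args(cmd, args=None):
    # Each (key, value) pair becomes --key=value on the npm command line
    for key, value in args or []:
        cmd += " --%s=%s" % (key, shlex.quote(value))
    return cmd

append_npm_args("npm view express@4.18.1", [("json", "true")])
# 'npm view express@4.18.1 --json=true'
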
@@ -24,14 +24,11 @@ import bb
from bb.fetch2 import Fetch
from bb.fetch2 import FetchMethod
from bb.fetch2 import ParameterError
from bb.fetch2 import runfetchcmd
from bb.fetch2 import URI
from bb.fetch2.npm import npm_integrity
from bb.fetch2.npm import npm_localfile
from bb.fetch2.npm import npm_unpack
from bb.utils import is_semver
from bb.utils import lockfile
from bb.utils import unlockfile

def foreach_dependencies(shrinkwrap, callback=None, dev=False):
"""
@@ -81,18 +78,13 @@ class NpmShrinkWrap(FetchMethod):
extrapaths = []
destsubdirs = [os.path.join("node_modules", dep) for dep in deptree]
destsuffix = os.path.join(*destsubdirs)
unpack = True

integrity = params.get("integrity", None)
resolved = params.get("resolved", None)
version = params.get("version", None)

# Handle registry sources
if is_semver(version) and integrity:
# Handle duplicate dependencies without url
if not resolved:
return

if is_semver(version) and resolved and integrity:
localfile = npm_localfile(name, version)

uri = URI(resolved)
@@ -117,7 +109,7 @@ class NpmShrinkWrap(FetchMethod):

# Handle http tarball sources
elif version.startswith("http") and integrity:
localfile = npm_localfile(os.path.basename(version))
localfile = os.path.join("npm2", os.path.basename(version))

uri = URI(version)
uri.params["downloadfilename"] = localfile
@@ -131,8 +123,6 @@ class NpmShrinkWrap(FetchMethod):

# Handle git sources
elif version.startswith("git"):
if version.startswith("github:"):
version = "git+https://github.com/" + version[len("github:"):]
regex = re.compile(r"""
^
git\+
@@ -158,12 +148,7 @@ class NpmShrinkWrap(FetchMethod):

url = str(uri)

# Handle local tarball and link sources
elif version.startswith("file"):
localpath = version[5:]
if not version.endswith(".tgz"):
unpack = False

# local tarball sources and local link sources are unsupported
else:
raise ParameterError("Unsupported dependency: %s" % name, ud.url)

@@ -172,7 +157,6 @@ class NpmShrinkWrap(FetchMethod):
"localpath": localpath,
"extrapaths": extrapaths,
"destsuffix": destsuffix,
"unpack": unpack,
})

try:
@@ -193,7 +177,7 @@ class NpmShrinkWrap(FetchMethod):
# This fetcher resolves multiple URIs from a shrinkwrap file and then
# forwards it to a proxy fetcher. The management of the donestamp file,
# the lockfile and the checksums are forwarded to the proxy fetcher.
ud.proxy = Fetch([dep["url"] for dep in ud.deps if dep["url"]], data)
ud.proxy = Fetch([dep["url"] for dep in ud.deps], data)
ud.needdonestamp = False

@staticmethod
@@ -203,9 +187,7 @@ class NpmShrinkWrap(FetchMethod):
proxy_ud = ud.proxy.ud[proxy_url]
proxy_d = ud.proxy.d
proxy_ud.setup_localpath(proxy_d)
lf = lockfile(proxy_ud.lockfile)
returns.append(handle(proxy_ud.method, proxy_ud, proxy_d))
unlockfile(lf)
return returns

def verify_donestamp(self, ud, d):
@@ -255,16 +237,7 @@ class NpmShrinkWrap(FetchMethod):

for dep in manual:
depdestdir = os.path.join(destdir, dep["destsuffix"])
if dep["url"]:
npm_unpack(dep["localpath"], depdestdir, d)
else:
depsrcdir= os.path.join(destdir, dep["localpath"])
if dep["unpack"]:
npm_unpack(depsrcdir, depdestdir, d)
else:
bb.utils.mkdirhier(depdestdir)
cmd = 'cp -fpPRH "%s/." .' % (depsrcdir)
runfetchcmd(cmd, d, workdir=depdestdir)
npm_unpack(dep["localpath"], depdestdir, d)

def clean(self, ud, d):
"""Clean any existing full or partial download"""

@@ -1,6 +1,4 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
"""
@@ -11,7 +9,6 @@ Based on the svn "Fetch" implementation.

import logging
import os
import re
import bb
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
@@ -39,7 +36,6 @@ class Osc(FetchMethod):
# Create paths to osc checkouts
oscdir = d.getVar("OSCDIR") or (d.getVar("DL_DIR") + "/osc")
relpath = self._strip_leading_slashes(ud.path)
ud.oscdir = oscdir
ud.pkgdir = os.path.join(oscdir, ud.host)
ud.moddir = os.path.join(ud.pkgdir, relpath, ud.module)

@@ -47,13 +43,13 @@ class Osc(FetchMethod):
ud.revision = ud.parm['rev']
else:
pv = d.getVar("PV", False)
rev = bb.fetch2.srcrev_internal_helper(ud, d, '')
rev = bb.fetch2.srcrev_internal_helper(ud, d)
if rev:
ud.revision = rev
else:
ud.revision = ""

ud.localfile = d.expand('%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), relpath.replace('/', '.'), ud.revision))
ud.localfile = d.expand('%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.path.replace('/', '.'), ud.revision))

def _buildosccommand(self, ud, d, command):
"""
@@ -63,61 +59,38 @@ class Osc(FetchMethod):

basecmd = d.getVar("FETCHCMD_osc") or "/usr/bin/env osc"

proto = ud.parm.get('protocol', 'https')
proto = ud.parm.get('protocol', 'ocs')

options = []

config = "-c %s" % self.generate_config(ud, d)

if getattr(ud, 'revision', ''):
if ud.revision:
options.append("-r %s" % ud.revision)

coroot = self._strip_leading_slashes(ud.path)

if command == "fetch":
osccmd = "%s %s -A %s://%s co %s/%s %s" % (basecmd, config, proto, ud.host, coroot, ud.module, " ".join(options))
osccmd = "%s %s co %s/%s %s" % (basecmd, config, coroot, ud.module, " ".join(options))
elif command == "update":
osccmd = "%s %s -A %s://%s up %s" % (basecmd, config, proto, ud.host, " ".join(options))
elif command == "api_source":
osccmd = "%s %s -A %s://%s api source/%s/%s" % (basecmd, config, proto, ud.host, coroot, ud.module)
osccmd = "%s %s up %s" % (basecmd, config, " ".join(options))
else:
raise FetchError("Invalid osc command %s" % command, ud.url)

return osccmd

def _latest_revision(self, ud, d, name):
"""
Fetch latest revision for the given package
"""
api_source_cmd = self._buildosccommand(ud, d, "api_source")

output = runfetchcmd(api_source_cmd, d)
match = re.match(r'<directory ?.* rev="(\d+)".*>', output)
if match is None:
raise FetchError("Unable to parse osc response", ud.url)
return match.groups()[0]

def _revision_key(self, ud, d, name):
"""
Return a unique key for the url
"""
# Collapse adjacent slashes
slash_re = re.compile(r"/+")
rev = getattr(ud, 'revision', "latest")
return "osc:%s%s.%s.%s" % (ud.host, slash_re.sub(".", ud.path), name, rev)

def download(self, ud, d):
"""
Fetch url
"""

logger.debug2("Fetch: checking for module directory '" + ud.moddir + "'")
logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")

if os.access(ud.moddir, os.R_OK):
if os.access(os.path.join(d.getVar('OSCDIR'), ud.path, ud.module), os.R_OK):
oscupdatecmd = self._buildosccommand(ud, d, "update")
logger.info("Update "+ ud.url)
# update sources there
logger.debug("Running %s", oscupdatecmd)
logger.debug(1, "Running %s", oscupdatecmd)
bb.fetch2.check_network_access(d, oscupdatecmd, ud.url)
runfetchcmd(oscupdatecmd, d, workdir=ud.moddir)
else:
@@ -125,7 +98,7 @@ class Osc(FetchMethod):
logger.info("Fetch " + ud.url)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
logger.debug("Running %s", oscfetchcmd)
logger.debug(1, "Running %s", oscfetchcmd)
bb.fetch2.check_network_access(d, oscfetchcmd, ud.url)
runfetchcmd(oscfetchcmd, d, workdir=ud.pkgdir)

@@ -141,23 +114,20 @@ class Osc(FetchMethod):
Generate a .oscrc to be used for this run.
"""

config_path = os.path.join(ud.oscdir, "oscrc")
if not os.path.exists(ud.oscdir):
bb.utils.mkdirhier(ud.oscdir)

config_path = os.path.join(d.getVar('OSCDIR'), "oscrc")
if (os.path.exists(config_path)):
os.remove(config_path)

f = open(config_path, 'w')
proto = ud.parm.get('protocol', 'https')
f.write("[general]\n")
f.write("apiurl = %s://%s\n" % (proto, ud.host))
f.write("apisrv = %s\n" % ud.host)
f.write("scheme = http\n")
f.write("su-wrapper = su -c\n")
f.write("build-root = %s\n" % d.getVar('WORKDIR'))
f.write("urllist = %s\n" % d.getVar("OSCURLLIST"))
f.write("extra-pkgs = gzip\n")
f.write("\n")
f.write("[%s://%s]\n" % (proto, ud.host))
f.write("[%s]\n" % ud.host)
f.write("user = %s\n" % ud.parm["user"])
f.write("pass = %s\n" % ud.parm["pswd"])
f.close()

@@ -90,16 +90,16 @@ class Perforce(FetchMethod):
p4port = d.getVar('P4PORT')

if p4port:
logger.debug('Using recipe provided P4PORT: %s' % p4port)
logger.debug(1, 'Using recipe provided P4PORT: %s' % p4port)
ud.host = p4port
else:
logger.debug('Trying to use P4CONFIG to automatically set P4PORT...')
logger.debug(1, 'Trying to use P4CONFIG to automatically set P4PORT...')
ud.usingp4config = True
p4cmd = '%s info | grep "Server address"' % ud.basecmd
bb.fetch2.check_network_access(d, p4cmd, ud.url)
ud.host = runfetchcmd(p4cmd, d, True)
ud.host = ud.host.split(': ')[1].strip()
logger.debug('Determined P4PORT to be: %s' % ud.host)
logger.debug(1, 'Determined P4PORT to be: %s' % ud.host)
if not ud.host:
raise FetchError('Could not determine P4PORT from P4CONFIG')

@@ -119,7 +119,6 @@ class Perforce(FetchMethod):
cleanedpath = ud.path.replace('/...', '').replace('/', '.')
cleanedhost = ud.host.replace(':', '.')

cleanedmodule = ""
# Merge the path and module into the final depot location
if ud.module:
if ud.module.find('/') == 0:
@@ -134,7 +133,7 @@ class Perforce(FetchMethod):

ud.setup_revisions(d)

ud.localfile = d.expand('%s_%s_%s_%s.tar.gz' % (cleanedhost, cleanedpath, cleanedmodule, ud.revision))
ud.localfile = d.expand('%s_%s_%s.tar.gz' % (cleanedhost, cleanedpath, ud.revision))

def _buildp4command(self, ud, d, command, depot_filename=None):
"""
@@ -208,7 +207,7 @@ class Perforce(FetchMethod):
for filename in p4fileslist:
item = filename.split(' - ')
lastaction = item[1].split()
logger.debug('File: %s Last Action: %s' % (item[0], lastaction[0]))
logger.debug(1, 'File: %s Last Action: %s' % (item[0], lastaction[0]))
if lastaction[0] == 'delete':
continue
filelist.append(item[0])
@@ -255,7 +254,7 @@ class Perforce(FetchMethod):
raise FetchError('Could not determine the latest perforce changelist')

tipcset = tip.split(' ')[1]
logger.debug('p4 tip found to be changelist %s' % tipcset)
logger.debug(1, 'p4 tip found to be changelist %s' % tipcset)
return tipcset

def sortable_revision(self, ud, d, name):

@@ -47,7 +47,7 @@ class Repo(FetchMethod):
"""Fetch url"""

if os.access(os.path.join(d.getVar("DL_DIR"), ud.localfile), os.R_OK):
logger.debug("%s already exists (or was stashed). Skipping repo init / sync.", ud.localpath)
logger.debug(1, "%s already exists (or was stashed). Skipping repo init / sync.", ud.localpath)
return

repodir = d.getVar("REPODIR") or (d.getVar("DL_DIR") + "/repo")

@@ -18,47 +18,10 @@ The aws tool must be correctly installed and configured prior to use.
import os
import bb
import urllib.request, urllib.parse, urllib.error
import re
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import runfetchcmd

def convertToBytes(value, unit):
value = float(value)
if (unit == "KiB"):
value = value*1024.0;
elif (unit == "MiB"):
value = value*1024.0*1024.0;
elif (unit == "GiB"):
value = value*1024.0*1024.0*1024.0;
return value

class S3ProgressHandler(bb.progress.LineFilterProgressHandler):
"""
Extract progress information from s3 cp output, e.g.:
Completed 5.1 KiB/8.8 GiB (12.0 MiB/s) with 1 file(s) remaining
"""
def __init__(self, d):
super(S3ProgressHandler, self).__init__(d)
# Send an initial progress event so the bar gets shown
self._fire_progress(0)

def writeline(self, line):
percs = re.findall(r'^Completed (\d+.{0,1}\d*) (\w+)\/(\d+.{0,1}\d*) (\w+) (\(.+\)) with\s+', line)
if percs:
completed = (percs[-1][0])
completedUnit = (percs[-1][1])
total = (percs[-1][2])
totalUnit = (percs[-1][3])
completed = convertToBytes(completed, completedUnit)
total = convertToBytes(total, totalUnit)
progress = (completed/total)*100.0
rate = percs[-1][4]
self.update(progress, rate)
return False
return True


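convertToBytes above exists so the mixed units in the aws progress line can be compared; the same arithmetic written as a table lookup, applied to the docstring's example line:

def convert_to_bytes(value, unit):
    multipliers = {"KiB": 1024.0, "MiB": 1024.0 ** 2, "GiB": 1024.0 ** 3}
    return float(value) * multipliers.get(unit, 1.0)

# "Completed 5.1 KiB/8.8 GiB (12.0 MiB/s) ..." -> roughly 0.00006% complete
progress = convert_to_bytes("5.1", "KiB") / convert_to_bytes("8.8", "GiB") * 100.0
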
class S3(FetchMethod):
"""Class to fetch urls via 'aws s3'"""

@@ -89,9 +52,7 @@ class S3(FetchMethod):

cmd = '%s cp s3://%s%s %s' % (ud.basecmd, ud.host, ud.path, ud.localpath)
bb.fetch2.check_network_access(d, cmd, ud.url)

progresshandler = S3ProgressHandler(d)
runfetchcmd(cmd, d, False, log=progresshandler)
runfetchcmd(cmd, d)

# Additional sanity checks copied from the wget class (although there
# are no known issues which mean these are required, treat the aws cli

@@ -32,7 +32,6 @@ IETF secsh internet draft:

import re, os
from bb.fetch2 import check_network_access, FetchMethod, ParameterError, runfetchcmd
import urllib


__pattern__ = re.compile(r'''
@@ -41,9 +40,9 @@ __pattern__ = re.compile(r'''
( # Optional username/password block
(?P<user>\S+) # username
(:(?P<pass>\S+))? # colon followed by the password (optional)
)?
(?P<cparam>(;[^;]+)*)? # connection parameters block (optional)
@
)?
(?P<host>\S+?) # non-greedy match of the host
(:(?P<port>[0-9]+))? # colon followed by the port (optional)
/
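
The fragment above is part of a verbose regex for ssh:// URLs; a trimmed-down, self-contained version showing what the named groups capture (the URL is illustrative):

import re

pattern = re.compile(r'''ssh://
    ((?P<user>\S+?)(:(?P<pass>\S+?))?@)?
    (?P<host>\S+?)(:(?P<port>[0-9]+))?/
    (?P<path>.*)''', re.VERBOSE)

m = pattern.match("ssh://builder@example.com:2222/srv/downloads/file.tar.gz")
(m.group('user'), m.group('host'), m.group('port'), m.group('path'))
# ('builder', 'example.com', '2222', 'srv/downloads/file.tar.gz')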
@@ -71,7 +70,6 @@ class SSH(FetchMethod):
"git:// prefix with protocol=ssh", urldata.url)
m = __pattern__.match(urldata.url)
path = m.group('path')
path = urllib.parse.unquote(path)
host = m.group('host')
urldata.localpath = os.path.join(d.getVar('DL_DIR'),
os.path.basename(os.path.normpath(path)))
@@ -98,11 +96,6 @@ class SSH(FetchMethod):
fr += '@%s' % host
else:
fr = host

if path[0] != '~':
path = '/%s' % path
path = urllib.parse.unquote(path)

fr += ':%s' % path

cmd = 'scp -B -r %s %s %s/' % (
@@ -115,41 +108,3 @@ class SSH(FetchMethod):

runfetchcmd(cmd, d)

def checkstatus(self, fetch, urldata, d):
"""
Check the status of the url
"""
m = __pattern__.match(urldata.url)
path = m.group('path')
host = m.group('host')
port = m.group('port')
user = m.group('user')
password = m.group('pass')

if port:
portarg = '-P %s' % port
else:
portarg = ''

if user:
fr = user
if password:
fr += ':%s' % password
fr += '@%s' % host
else:
fr = host

if path[0] != '~':
path = '/%s' % path
path = urllib.parse.unquote(path)

cmd = 'ssh -o BatchMode=true %s %s [ -f %s ]' % (
portarg,
fr,
path
)

check_network_access(d, cmd, urldata.url)
runfetchcmd(cmd, d)

return True

@@ -57,12 +57,7 @@ class Svn(FetchMethod):
if 'rev' in ud.parm:
ud.revision = ud.parm['rev']

# Whether to use the @REV peg-revision syntax in the svn command or not
ud.pegrevision = True
if 'nopegrevision' in ud.parm:
ud.pegrevision = False

ud.localfile = d.expand('%s_%s_%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision, ["0", "1"][ud.pegrevision]))
ud.localfile = d.expand('%s_%s_%s_%s_.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision))

def _buildsvncommand(self, ud, d, command):
"""
@@ -91,7 +86,7 @@ class Svn(FetchMethod):
if command == "info":
svncmd = "%s info %s %s://%s/%s/" % (ud.basecmd, " ".join(options), proto, svnroot, ud.module)
elif command == "log1":
svncmd = "%s log --limit 1 --quiet %s %s://%s/%s/" % (ud.basecmd, " ".join(options), proto, svnroot, ud.module)
svncmd = "%s log --limit 1 %s %s://%s/%s/" % (ud.basecmd, " ".join(options), proto, svnroot, ud.module)
else:
suffix = ""

@@ -103,8 +98,7 @@ class Svn(FetchMethod):

if ud.revision:
options.append("-r %s" % ud.revision)
if ud.pegrevision:
suffix = "@%s" % (ud.revision)
suffix = "@%s" % (ud.revision)

if command == "fetch":
transportuser = ud.parm.get("transportuser", "")
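
Peg revisions pin the path lookup itself, not just the operative revision, which is why yocto-4.1 appends @REV to the URL by default; a sketch of the difference (URL and revision illustrative):

def svn_checkout_args(url, rev, pegrevision=True):
    # -r REV selects the operative revision; URL@REV additionally pegs the
    # path, which matters when the path was moved or renamed upstream
    suffix = "@%s" % rev if pegrevision else ""
    return ["svn", "co", "-r", str(rev), url + suffix]

svn_checkout_args("https://svn.example.com/repo/trunk", 12345)
# ['svn', 'co', '-r', '12345', 'https://svn.example.com/repo/trunk@12345']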
@@ -122,7 +116,7 @@ class Svn(FetchMethod):
def download(self, ud, d):
"""Fetch url"""

logger.debug2("Fetch: checking for module directory '" + ud.moddir + "'")
logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")

lf = bb.utils.lockfile(ud.svnlock)

@@ -135,7 +129,7 @@ class Svn(FetchMethod):
runfetchcmd(ud.basecmd + " upgrade", d, workdir=ud.moddir)
except FetchError:
pass
logger.debug("Running %s", svncmd)
logger.debug(1, "Running %s", svncmd)
bb.fetch2.check_network_access(d, svncmd, ud.url)
runfetchcmd(svncmd, d, workdir=ud.moddir)
else:
@@ -143,7 +137,7 @@ class Svn(FetchMethod):
logger.info("Fetch " + ud.url)
# check out sources there
bb.utils.mkdirhier(ud.pkgdir)
logger.debug("Running %s", svncmd)
logger.debug(1, "Running %s", svncmd)
bb.fetch2.check_network_access(d, svncmd, ud.url)
runfetchcmd(svncmd, d, workdir=ud.pkgdir)


@@ -53,23 +53,11 @@ class WgetProgressHandler(bb.progress.LineFilterProgressHandler):

class Wget(FetchMethod):
"""Class to fetch urls via 'wget'"""

# CDNs like CloudFlare may do a 'browser integrity test' which can fail
# with the standard wget/urllib User-Agent, so pretend to be a modern
# browser.
user_agent = "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:84.0) Gecko/20100101 Firefox/84.0"

def check_certs(self, d):
"""
Should certificates be checked?
"""
return (d.getVar("BB_CHECK_SSL_CERTS") or "1") != "0"

def supports(self, ud, d):
"""
Check to see if a given url can be fetched with wget.
"""
return ud.type in ['http', 'https', 'ftp', 'ftps']
return ud.type in ['http', 'https', 'ftp']

def recommends_checksum(self, urldata):
return True
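
In yocto-4.1 certificate checking is on unless BB_CHECK_SSL_CERTS is explicitly "0", whereas the yocto-3.2 command line always carried --no-check-certificate; the gate, sketched standalone:

def check_certs(getvar):
    # Anything other than an explicit "0" keeps certificate checking enabled
    return (getvar("BB_CHECK_SSL_CERTS") or "1") != "0"

basecmd = "/usr/bin/env wget -t 2 -T 30 --passive-ftp"
if not check_certs(lambda name: None):  # variable unset -> checking stays on
    basecmd += " --no-check-certificate"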
@@ -88,16 +76,13 @@ class Wget(FetchMethod):
if not ud.localfile:
ud.localfile = d.expand(urllib.parse.unquote(ud.host + ud.path).replace("/", "."))

self.basecmd = d.getVar("FETCHCMD_wget") or "/usr/bin/env wget -t 2 -T 30 --passive-ftp"

if not self.check_certs(d):
self.basecmd += " --no-check-certificate"
self.basecmd = d.getVar("FETCHCMD_wget") or "/usr/bin/env wget -t 2 -T 30 --passive-ftp --no-check-certificate"

def _runwget(self, ud, d, command, quiet, workdir=None):

progresshandler = WgetProgressHandler(d)

logger.debug2("Fetching %s using command '%s'" % (ud.url, command))
logger.debug(2, "Fetching %s using command '%s'" % (ud.url, command))
bb.fetch2.check_network_access(d, command, ud.url)
runfetchcmd(command + ' --progress=dot -v', d, quiet, log=progresshandler, workdir=workdir)

@@ -106,22 +91,13 @@ class Wget(FetchMethod):

fetchcmd = self.basecmd

localpath = os.path.join(d.getVar("DL_DIR"), ud.localfile) + ".tmp"
bb.utils.mkdirhier(os.path.dirname(localpath))
fetchcmd += " -O %s" % shlex.quote(localpath)
if 'downloadfilename' in ud.parm:
localpath = os.path.join(d.getVar("DL_DIR"), ud.localfile)
bb.utils.mkdirhier(os.path.dirname(localpath))
fetchcmd += " -O %s" % shlex.quote(localpath)

if ud.user and ud.pswd:
fetchcmd += " --auth-no-challenge"
if ud.parm.get("redirectauth", "1") == "1":
# An undocumented feature of wget is that if the
# username/password are specified on the URI, wget will only
# send the Authorization header to the first host and not to
# any hosts that it is redirected to. With the increasing
# usage of temporary AWS URLs, this difference now matters as
# AWS will reject any request that has authentication both in
# the query parameters (from the redirect) and in the
# Authorization header.
fetchcmd += " --user=%s --password=%s" % (ud.user, ud.pswd)
fetchcmd += " --user=%s --password=%s --auth-no-challenge" % (ud.user, ud.pswd)

uri = ud.url.split(";")[0]
if os.path.exists(ud.localpath):
@@ -132,15 +108,6 @@ class Wget(FetchMethod):

self._runwget(ud, d, fetchcmd, False)

# Try and verify any checksum now, meaning if it isn't correct, we don't remove the
# original file, which might be a race (imagine two recipes referencing the same
# source, one with an incorrect checksum)
bb.fetch2.verify_checksum(ud, d, localpath=localpath, fatal_nochecksum=False)

# Remove the ".tmp" and move the file into position atomically
# Our lock prevents multiple writers but mirroring code may grab incomplete files
os.rename(localpath, localpath[:-4])

# Sanity check since wget can pretend it succeed when it didn't
# Also, this used to happen if sourceforge sent us to the mirror page
if not os.path.exists(ud.localpath):
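
The newer download path above fetches to a ".tmp" name, verifies the checksum, and only then renames; the ordering matters, as this sketch shows (verify stands in for bb.fetch2.verify_checksum):

import os

def finalize_download(tmppath, verify):
    # Verify before renaming so a bad download never replaces a file that
    # another recipe may already reference
    verify(tmppath)
    os.rename(tmppath, tmppath[:-len(".tmp")])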
@@ -236,7 +203,7 @@ class Wget(FetchMethod):
|
||||
# We let the request fail and expect it to be
|
||||
# tried once more ("try_again" in check_status()),
|
||||
# with the dead connection removed from the cache.
|
||||
# If it still fails, we give up, which can happen for bad
|
||||
# If it still fails, we give up, which can happend for bad
|
||||
# HTTP proxy settings.
|
||||
fetch.connection_cache.remove_connection(h.host, h.port)
|
||||
raise urllib.error.URLError(err)
|
||||
@@ -309,82 +276,56 @@ class Wget(FetchMethod):
|
||||
newreq = urllib.request.HTTPRedirectHandler.redirect_request(self, req, fp, code, msg, headers, newurl)
|
||||
newreq.get_method = req.get_method
|
||||
return newreq
|
||||
exported_proxies = export_proxies(d)
|
||||
|
||||
# We need to update the environment here as both the proxy and HTTPS
|
||||
# handlers need variables set. The proxy needs http_proxy and friends to
|
||||
# be set, and HTTPSHandler ends up calling into openssl to load the
|
||||
# certificates. In buildtools configurations this will be looking at the
|
||||
# wrong place for certificates by default: we set SSL_CERT_FILE to the
|
||||
# right location in the buildtools environment script but as BitBake
|
||||
# prunes prunes the environment this is lost. When binaries are executed
|
||||
# runfetchcmd ensures these values are in the environment, but this is
|
||||
# pure Python so we need to update the environment.
|
||||
#
|
||||
# Avoid tramping the environment too much by using bb.utils.environment
|
||||
# to scope the changes to the build_opener request, which is when the
|
||||
# environment lookups happen.
|
||||
newenv = bb.fetch2.get_fetcher_environment(d)
|
||||
handlers = [FixedHTTPRedirectHandler, HTTPMethodFallback]
|
||||
if exported_proxies:
-            with bb.utils.environment(**newenv):
-                import ssl
-
-                if self.check_certs(d):
-                    context = ssl.create_default_context()
-                else:
-                    context = ssl._create_unverified_context()
-
-                handlers = [FixedHTTPRedirectHandler,
-                            HTTPMethodFallback,
-                            urllib.request.ProxyHandler(),
-                            CacheHTTPHandler(),
-                            urllib.request.HTTPSHandler(context=context)]
-                opener = urllib.request.build_opener(*handlers)
-
-                try:
-                    uri = ud.url.split(";")[0]
-                    r = urllib.request.Request(uri)
-                    r.get_method = lambda: "HEAD"
-                    # Some servers (FusionForge, as used on Alioth) require that the
-                    # optional Accept header is set.
-                    r.add_header("Accept", "*/*")
-                    r.add_header("User-Agent", self.user_agent)
-                    def add_basic_auth(login_str, request):
-                        '''Adds Basic auth to http request, pass in login:password as string'''
-                        import base64
-                        encodeuser = base64.b64encode(login_str.encode('utf-8')).decode("utf-8")
-                        authheader = "Basic %s" % encodeuser
-                        r.add_header("Authorization", authheader)
-
-                    if ud.user and ud.pswd:
-                        add_basic_auth(ud.user + ':' + ud.pswd, r)
-
-                    try:
-                        import netrc
-                        n = netrc.netrc()
-                        login, unused, password = n.authenticators(urllib.parse.urlparse(uri).hostname)
-                        add_basic_auth("%s:%s" % (login, password), r)
-                    except (TypeError, ImportError, IOError, netrc.NetrcParseError):
-                        pass
-
-                    with opener.open(r, timeout=30) as response:
-                        pass
-                except urllib.error.URLError as e:
-                    if try_again:
-                        logger.debug2("checkstatus: trying again")
-                        return self.checkstatus(fetch, ud, d, False)
-                    else:
-                        # debug for now to avoid spamming the logs in e.g. remote sstate searches
-                        logger.debug2("checkstatus() urlopen failed: %s" % e)
-                        return False
-                except ConnectionResetError as e:
-                    if try_again:
-                        logger.debug2("checkstatus: trying again")
-                        return self.checkstatus(fetch, ud, d, False)
-                    else:
-                        # debug for now to avoid spamming the logs in e.g. remote sstate searches
-                        logger.debug2("checkstatus() urlopen failed: %s" % e)
-                        return False
+        handlers.append(urllib.request.ProxyHandler())
+        handlers.append(CacheHTTPHandler())
+        # Since Python 2.7.9 ssl cert validation is enabled by default
+        # see PEP-0476, this causes verification errors on some https servers
+        # so disable by default.
+        import ssl
+        if hasattr(ssl, '_create_unverified_context'):
+            handlers.append(urllib.request.HTTPSHandler(context=ssl._create_unverified_context()))
+        opener = urllib.request.build_opener(*handlers)
+
+        try:
+            uri = ud.url.split(";")[0]
+            r = urllib.request.Request(uri)
+            r.get_method = lambda: "HEAD"
+            # Some servers (FusionForge, as used on Alioth) require that the
+            # optional Accept header is set.
+            r.add_header("Accept", "*/*")
+            r.add_header("User-Agent", "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.12) Gecko/20101027 Ubuntu/9.10 (karmic) Firefox/3.6.12")
+            def add_basic_auth(login_str, request):
+                '''Adds Basic auth to http request, pass in login:password as string'''
+                import base64
+                encodeuser = base64.b64encode(login_str.encode('utf-8')).decode("utf-8")
+                authheader = "Basic %s" % encodeuser
+                r.add_header("Authorization", authheader)
+
+            if ud.user and ud.pswd:
+                add_basic_auth(ud.user + ':' + ud.pswd, r)
+
+            try:
+                import netrc
+                n = netrc.netrc()
+                login, unused, password = n.authenticators(urllib.parse.urlparse(uri).hostname)
+                add_basic_auth("%s:%s" % (login, password), r)
+            except (TypeError, ImportError, IOError, netrc.NetrcParseError):
+                pass
+
+            with opener.open(r) as response:
+                pass
+        except urllib.error.URLError as e:
+            if try_again:
+                logger.debug(2, "checkstatus: trying again")
+                return self.checkstatus(fetch, ud, d, False)
+            else:
+                # debug for now to avoid spamming the logs in e.g. remote sstate searches
+                logger.debug(2, "checkstatus() urlopen failed: %s" % e)
+                return False

         return True

     def _parse_path(self, regex, s):
@@ -460,8 +401,9 @@ class Wget(FetchMethod):
         """
+        f = tempfile.NamedTemporaryFile()
         with tempfile.TemporaryDirectory(prefix="wget-index-") as workdir, tempfile.NamedTemporaryFile(dir=workdir, prefix="wget-listing-") as f:
+            agent = "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.12) Gecko/20101027 Ubuntu/9.10 (karmic) Firefox/3.6.12"
             fetchcmd = self.basecmd
-            fetchcmd += " -O " + f.name + " --user-agent='" + self.user_agent + "' '" + uri + "'"
+            fetchcmd += " -O " + f.name + " --user-agent='" + agent + "' '" + uri + "'"
             try:
                 self._runwget(ud, d, fetchcmd, True, workdir=workdir)
                 fetchresult = f.read()
@@ -517,7 +459,7 @@ class Wget(FetchMethod):
             version_dir = ['', '', '']
             version = ['', '', '']

-            dirver_regex = re.compile(r"(?P<pfx>\D*)(?P<ver>(\d+[\.\-_])*(\d+))")
+            dirver_regex = re.compile(r"(?P<pfx>\D*)(?P<ver>(\d+[\.\-_])+(\d+))")
             s = dirver_regex.search(dirver)
             if s:
                 version_dir[1] = s.group('ver')
@@ -593,7 +535,7 @@ class Wget(FetchMethod):

         # src.rpm extension was added only for rpm package. Can be removed if the rpm
         # packaged will always be considered as having to be manually upgraded
-        psuffix_regex = r"(tar\.\w+|tgz|zip|xz|rpm|bz2|orig\.tar\.\w+|src\.tar\.\w+|src\.tgz|svnr\d+\.tar\.\w+|stable\.tar\.\w+|src\.rpm)"
+        psuffix_regex = r"(tar\.gz|tgz|tar\.bz2|zip|xz|tar\.lz|rpm|bz2|orig\.tar\.gz|tar\.xz|src\.tar\.gz|src\.tgz|svnr\d+\.tar\.bz2|stable\.tar\.gz|src\.rpm)"

         # match name, version and archive type of a package
         package_regex_comp = re.compile(r"(?P<name>%s?\.?v?)(?P<pver>%s)(?P<arch>%s)?[\.-](?P<type>%s$)"
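The version-directory regex change above is subtle: with `*` the "digits plus separator" group is optional, so a bare component like "2" matches, while the older `+` form requires at least one separated group. A quick standalone sketch of the difference (illustrative check only, not BitBake code):

    import re

    # yocto-4.1 side: zero or more "digits + separator" groups, so "2" matches.
    new = re.compile(r"(?P<pfx>\D*)(?P<ver>(\d+[\.\-_])*(\d+))")
    # yocto-3.2 side: at least one "digits + separator" group, so "2" does not.
    old = re.compile(r"(?P<pfx>\D*)(?P<ver>(\d+[\.\-_])+(\d+))")

    for dirver in ("v2.1", "2"):
        print(dirver, bool(new.search(dirver)), bool(old.search(dirver)))
    # v2.1 True True
    # 2    True False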
@@ -112,181 +112,185 @@ def _showwarning(message, category, filename, lineno, file=None, line=None):
     warnlog.warning(s)

 warnings.showwarning = _showwarning
-warnings.filterwarnings("ignore")
-warnings.filterwarnings("default", module="(<string>$|(oe|bb)\.)")
+warnings.filterwarnings("ignore", category=PendingDeprecationWarning)
+warnings.filterwarnings("ignore", category=ImportWarning)
+warnings.filterwarnings("ignore", category=DeprecationWarning, module="<string>$")
+warnings.filterwarnings("ignore", message="With-statements now directly support multiple context managers")

-def create_bitbake_parser():
-    parser = optparse.OptionParser(
-        formatter=BitbakeHelpFormatter(),
-        version="BitBake Build Tool Core version %s" % bb.__version__,
-        usage="""%prog [options] [recipename/target recipe:do_task ...]
+class BitBakeConfigParameters(cookerdata.ConfigParameters):
+
+    def parseCommandLine(self, argv=sys.argv):
+        parser = optparse.OptionParser(
+                   formatter=BitbakeHelpFormatter(),
+                   version="BitBake Build Tool Core version %s" % bb.__version__,
+                   usage="""%prog [options] [recipename/target recipe:do_task ...]

 Executes the specified task (default is 'build') for a given set of target recipes (.bb files).
 It is assumed there is a conf/bblayers.conf available in cwd or in BBPATH which
 will provide the layer, BBFILES and other configuration information.""")

(The option table below is common to both sides and is shown once; on the
yocto-4.1 side it sits in create_bitbake_parser() at one indentation level
less. The only option that differs is "-k", shown as a changed pair.)

     parser.add_option("-b", "--buildfile", action="store", dest="buildfile", default=None,
                       help="Execute tasks from a specific .bb recipe directly. WARNING: Does "
                            "not handle any dependencies from other recipes.")

-    parser.add_option("-k", "--continue", action="store_false", dest="halt", default=True,
+    parser.add_option("-k", "--continue", action="store_false", dest="abort", default=True,
                       help="Continue as much as possible after an error. While the target that "
                            "failed and anything depending on it cannot be built, as much as "
                            "possible will be built before stopping.")

     parser.add_option("-f", "--force", action="store_true", dest="force", default=False,
                       help="Force the specified targets/task to run (invalidating any "
                            "existing stamp file).")

     parser.add_option("-c", "--cmd", action="store", dest="cmd",
                       help="Specify the task to execute. The exact options available "
                            "depend on the metadata. Some examples might be 'compile'"
                            " or 'populate_sysroot' or 'listtasks' may give a list of "
                            "the tasks available.")

     parser.add_option("-C", "--clear-stamp", action="store", dest="invalidate_stamp",
                       help="Invalidate the stamp for the specified task such as 'compile' "
                            "and then run the default task for the specified target(s).")

     parser.add_option("-r", "--read", action="append", dest="prefile", default=[],
                       help="Read the specified file before bitbake.conf.")

     parser.add_option("-R", "--postread", action="append", dest="postfile", default=[],
                       help="Read the specified file after bitbake.conf.")

     parser.add_option("-v", "--verbose", action="store_true", dest="verbose", default=False,
                       help="Enable tracing of shell tasks (with 'set -x'). "
                            "Also print bb.note(...) messages to stdout (in "
                            "addition to writing them to ${T}/log.do_<task>).")

     parser.add_option("-D", "--debug", action="count", dest="debug", default=0,
                       help="Increase the debug level. You can specify this "
                            "more than once. -D sets the debug level to 1, "
                            "where only bb.debug(1, ...) messages are printed "
                            "to stdout; -DD sets the debug level to 2, where "
                            "both bb.debug(1, ...) and bb.debug(2, ...) "
                            "messages are printed; etc. Without -D, no debug "
                            "messages are printed. Note that -D only affects "
                            "output to stdout. All debug messages are written "
                            "to ${T}/log.do_taskname, regardless of the debug "
                            "level.")

     parser.add_option("-q", "--quiet", action="count", dest="quiet", default=0,
                       help="Output less log message data to the terminal. You can specify this more than once.")

     parser.add_option("-n", "--dry-run", action="store_true", dest="dry_run", default=False,
                       help="Don't execute, just go through the motions.")

     parser.add_option("-S", "--dump-signatures", action="append", dest="dump_signatures",
                       default=[], metavar="SIGNATURE_HANDLER",
                       help="Dump out the signature construction information, with no task "
                            "execution. The SIGNATURE_HANDLER parameter is passed to the "
                            "handler. Two common values are none and printdiff but the handler "
                            "may define more/less. none means only dump the signature, printdiff"
                            " means compare the dumped signature with the cached one.")

     parser.add_option("-p", "--parse-only", action="store_true",
                       dest="parse_only", default=False,
                       help="Quit after parsing the BB recipes.")

     parser.add_option("-s", "--show-versions", action="store_true",
                       dest="show_versions", default=False,
                       help="Show current and preferred versions of all recipes.")

     parser.add_option("-e", "--environment", action="store_true",
                       dest="show_environment", default=False,
                       help="Show the global or per-recipe environment complete with information"
                            " about where variables were set/changed.")

     parser.add_option("-g", "--graphviz", action="store_true", dest="dot_graph", default=False,
                       help="Save dependency tree information for the specified "
                            "targets in the dot syntax.")

     parser.add_option("-I", "--ignore-deps", action="append",
                       dest="extra_assume_provided", default=[],
                       help="Assume these dependencies don't exist and are already provided "
                            "(equivalent to ASSUME_PROVIDED). Useful to make dependency "
                            "graphs more appealing")

     parser.add_option("-l", "--log-domains", action="append", dest="debug_domains", default=[],
                       help="Show debug logging for the specified logging domains")

     parser.add_option("-P", "--profile", action="store_true", dest="profile", default=False,
                       help="Profile the command and save reports.")

     # @CHOICES@ is substituted out by BitbakeHelpFormatter above
     parser.add_option("-u", "--ui", action="store", dest="ui",
                       default=os.environ.get('BITBAKE_UI', 'knotty'),
                       help="The user interface to use (@CHOICES@ - default %default).")

     parser.add_option("", "--token", action="store", dest="xmlrpctoken",
                       default=os.environ.get("BBTOKEN"),
                       help="Specify the connection token to be used when connecting "
                            "to a remote server.")

     parser.add_option("", "--revisions-changed", action="store_true",
                       dest="revisions_changed", default=False,
                       help="Set the exit code depending on whether upstream floating "
                            "revisions have changed or not.")

     parser.add_option("", "--server-only", action="store_true",
                       dest="server_only", default=False,
                       help="Run bitbake without a UI, only starting a server "
                            "(cooker) process.")

     parser.add_option("-B", "--bind", action="store", dest="bind", default=False,
                       help="The name/address for the bitbake xmlrpc server to bind to.")

     parser.add_option("-T", "--idle-timeout", type=float, dest="server_timeout",
                       default=os.getenv("BB_SERVER_TIMEOUT"),
                       help="Set timeout to unload bitbake server due to inactivity, "
                            "set to -1 means no unload, "
                            "default: Environment variable BB_SERVER_TIMEOUT.")

     parser.add_option("", "--no-setscene", action="store_true",
                       dest="nosetscene", default=False,
                       help="Do not run any setscene tasks. sstate will be ignored and "
                            "everything needed, built.")

     parser.add_option("", "--skip-setscene", action="store_true",
                       dest="skipsetscene", default=False,
                       help="Skip setscene tasks if they would be executed. Tasks previously "
                            "restored from sstate will be kept, unlike --no-setscene")

     parser.add_option("", "--setscene-only", action="store_true",
                       dest="setsceneonly", default=False,
                       help="Only run setscene tasks, don't run any real tasks.")

     parser.add_option("", "--remote-server", action="store", dest="remote_server",
                       default=os.environ.get("BBSERVER"),
                       help="Connect to the specified server.")

     parser.add_option("-m", "--kill-server", action="store_true",
                       dest="kill_server", default=False,
                       help="Terminate any running bitbake server.")

     parser.add_option("", "--observe-only", action="store_true",
                       dest="observe_only", default=False,
                       help="Connect to a server as an observing-only client.")

     parser.add_option("", "--status-only", action="store_true",
                       dest="status_only", default=False,
                       help="Check the status of the remote bitbake server.")

     parser.add_option("-w", "--write-log", action="store", dest="writeeventlog",
                       default=os.environ.get("BBEVENTLOG"),
                       help="Writes the event log of the build to a bitbake event json file. "
                            "Use '' (empty string) to assign the name automatically.")

     parser.add_option("", "--runall", action="append", dest="runall",
                       help="Run the specified task for any recipe in the taskgraph of the specified target (even if it wouldn't otherwise have run).")

     parser.add_option("", "--runonly", action="append", dest="runonly",
                       help="Run only the specified task within the taskgraph of the specified targets (and any task dependencies those tasks may have).")
-    return parser
-
-
-class BitBakeConfigParameters(cookerdata.ConfigParameters):
-    def parseCommandLine(self, argv=sys.argv):
-        parser = create_bitbake_parser()
-        options, targets = parser.parse_args(argv)
+        options, targets = parser.parse_args(argv)

         if options.quiet and options.verbose:
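The only behavioral difference in the long option table above is the "-k" destination. A minimal sketch of how that rename surfaces to callers (hypothetical standalone example, not BitBake code):

    import optparse

    parser = optparse.OptionParser()
    # yocto-3.2 spelling: the flag is stored as options.abort ...
    parser.add_option("-k", "--continue", action="store_false", dest="abort", default=True)
    options, args = parser.parse_args(["-k"])
    print(options.abort)   # False: "-k" clears the flag
    # ... while yocto-4.1 renamed dest to "halt", so consumers read options.halt instead.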
@@ -462,7 +466,7 @@ def setup_bitbake(configParams, extrafeatures=None):
                     logger.info("Retrying server connection (#%d)..." % tryno)
                 else:
                     logger.info("Retrying server connection (#%d)... (%s)" % (tryno, traceback.format_exc()))

         if not retries:
             bb.fatal("Unable to connect to bitbake server, or start one (server startup failures would be in bitbake-cookerdaemon.log).")
         bb.event.print_ui_queue()

@@ -59,7 +59,7 @@ def getMountedDev(path):
         pass
     return None

-def getDiskData(BBDirs):
+def getDiskData(BBDirs, configuration):

     """Prepare disk data for disk space monitor"""

@@ -76,12 +76,7 @@ def getDiskData(BBDirs):
             return None

         action = pathSpaceInodeRe.group(1)
-        if action == "ABORT":
-            # Emit a deprecation warning
-            logger.warnonce("The BB_DISKMON_DIRS \"ABORT\" action has been renamed to \"HALT\", update configuration")
-            action = "HALT"
-
-        if action not in ("HALT", "STOPTASKS", "WARN"):
+        if action not in ("ABORT", "STOPTASKS", "WARN"):
             printErr("Unknown disk space monitor action: %s" % action)
             return None

@@ -173,7 +168,7 @@ class diskMonitor:

         BBDirs = configuration.getVar("BB_DISKMON_DIRS") or None
         if BBDirs:
-            self.devDict = getDiskData(BBDirs)
+            self.devDict = getDiskData(BBDirs, configuration)
             if self.devDict:
                 self.spaceInterval, self.inodeInterval = getInterval(configuration)
                 if self.spaceInterval and self.inodeInterval:
@@ -182,7 +177,7 @@ class diskMonitor:
                     # use them to avoid printing too many warning messages
                     self.preFreeS = {}
                     self.preFreeI = {}
-                    # This is for STOPTASKS and HALT, to avoid printing the message
+                    # This is for STOPTASKS and ABORT, to avoid printing the message
                     # repeatedly while waiting for the tasks to finish
                     self.checked = {}
                     for k in self.devDict:
@@ -224,8 +219,8 @@ class diskMonitor:
                     self.checked[k] = True
                     rq.finish_runqueue(False)
                     bb.event.fire(bb.event.DiskFull(dev, 'disk', freeSpace, path), self.configuration)
-                elif action == "HALT" and not self.checked[k]:
-                    logger.error("Immediately halt since the disk space monitor action is \"HALT\"!")
+                elif action == "ABORT" and not self.checked[k]:
+                    logger.error("Immediately abort since the disk space monitor action is \"ABORT\"!")
                     self.checked[k] = True
                     rq.finish_runqueue(True)
                     bb.event.fire(bb.event.DiskFull(dev, 'disk', freeSpace, path), self.configuration)
@@ -250,8 +245,8 @@ class diskMonitor:
                     self.checked[k] = True
                     rq.finish_runqueue(False)
                     bb.event.fire(bb.event.DiskFull(dev, 'inode', freeInode, path), self.configuration)
-                elif action == "HALT" and not self.checked[k]:
-                    logger.error("Immediately halt since the disk space monitor action is \"HALT\"!")
+                elif action == "ABORT" and not self.checked[k]:
+                    logger.error("Immediately abort since the disk space monitor action is \"ABORT\"!")
                     self.checked[k] = True
                     rq.finish_runqueue(True)
                     bb.event.fire(bb.event.DiskFull(dev, 'inode', freeInode, path), self.configuration)

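For context, each BB_DISKMON_DIRS entry takes the form "ACTION,path,min_space,min_inodes". A rough sketch of the rename-then-validate logic the yocto-4.1 side applies (simplified hypothetical helper; the real code uses a compiled regex and handles the remaining fields):

    def parse_action(entry):
        # e.g. entry = "ABORT,${TMPDIR},1G,100K"
        action = entry.split(",")[0]
        if action == "ABORT":
            # Renamed upstream; accept the old spelling but warn.
            print('warning: BB_DISKMON_DIRS "ABORT" is now "HALT"')
            action = "HALT"
        if action not in ("HALT", "STOPTASKS", "WARN"):
            raise ValueError("Unknown disk space monitor action: %s" % action)
        return action

    print(parse_action("ABORT,/tmp,1G,100K"))  # -> HALT (with a warning)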
@@ -30,9 +30,7 @@ class BBLogFormatter(logging.Formatter):
     PLAIN = logging.INFO + 1
     VERBNOTE = logging.INFO + 2
     ERROR = logging.ERROR
-    ERRORONCE = logging.ERROR - 1
     WARNING = logging.WARNING
-    WARNONCE = logging.WARNING - 1
     CRITICAL = logging.CRITICAL

     levelnames = {
@@ -44,9 +42,7 @@
         PLAIN : '',
         VERBNOTE: 'NOTE',
         WARNING : 'WARNING',
-        WARNONCE : 'WARNING',
         ERROR : 'ERROR',
-        ERRORONCE : 'ERROR',
         CRITICAL: 'ERROR',
     }

@@ -62,9 +58,7 @@
         PLAIN : BASECOLOR,
         VERBNOTE: BASECOLOR,
         WARNING : YELLOW,
-        WARNONCE : YELLOW,
         ERROR : RED,
-        ERRORONCE : RED,
         CRITICAL: RED,
     }

@@ -127,22 +121,6 @@ class BBLogFilter(object):
             return True
         return False

-class LogFilterShowOnce(logging.Filter):
-    def __init__(self):
-        self.seen_warnings = set()
-        self.seen_errors = set()
-
-    def filter(self, record):
-        if record.levelno == bb.msg.BBLogFormatter.WARNONCE:
-            if record.msg in self.seen_warnings:
-                return False
-            self.seen_warnings.add(record.msg)
-        if record.levelno == bb.msg.BBLogFormatter.ERRORONCE:
-            if record.msg in self.seen_errors:
-                return False
-            self.seen_errors.add(record.msg)
-        return True
-
 class LogFilterGEQLevel(logging.Filter):
     def __init__(self, level):
         self.strlevel = str(level)
@@ -228,7 +206,6 @@ def logger_create(name, output=sys.stderr, level=logging.INFO, preserve_handlers
     """Standalone logger creation function"""
     logger = logging.getLogger(name)
     console = logging.StreamHandler(output)
-    console.addFilter(bb.msg.LogFilterShowOnce())
     format = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
     if color == 'always' or (color == 'auto' and output.isatty()):
         format.enable_color()
@@ -301,7 +278,7 @@ def setLoggingConfig(defaultconfig, userconfigfile=None):
         with open(os.path.normpath(userconfigfile), 'r') as f:
             if userconfigfile.endswith('.yml') or userconfigfile.endswith('.yaml'):
                 import yaml
-                userconfig = yaml.safe_load(f)
+                userconfig = yaml.load(f)
             elif userconfigfile.endswith('.json') or userconfigfile.endswith('.cfg'):
                 import json
                 userconfig = json.load(f)
@@ -316,17 +293,10 @@ def setLoggingConfig(defaultconfig, userconfigfile=None):

     # Convert all level parameters to integers in case users want to use the
     # bitbake defined level names
-    for name, h in logconfig["handlers"].items():
+    for h in logconfig["handlers"].values():
         if "level" in h:
             h["level"] = bb.msg.stringToLevel(h["level"])

-        # Every handler needs its own instance of the once filter.
-        once_filter_name = name + ".showonceFilter"
-        logconfig.setdefault("filters", {})[once_filter_name] = {
-            "()": "bb.msg.LogFilterShowOnce",
-        }
-        h.setdefault("filters", []).append(once_filter_name)
-
     for l in logconfig["loggers"].values():
         if "level" in l:
             l["level"] = bb.msg.stringToLevel(l["level"])

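The show-once mechanism in the hunks above is plain stdlib logging: a Filter that drops repeat records at a dedicated level. A self-contained sketch of the same idea (the level number here is illustrative, not BitBake's exact constant):

    import logging

    WARNONCE = logging.WARNING - 1

    class ShowOnceFilter(logging.Filter):
        """Drop repeated messages logged at the WARNONCE level."""
        def __init__(self):
            super().__init__()
            self.seen = set()
        def filter(self, record):
            if record.levelno == WARNONCE:
                if record.msg in self.seen:
                    return False
                self.seen.add(record.msg)
            return True

    logger = logging.getLogger("demo")
    handler = logging.StreamHandler()
    handler.addFilter(ShowOnceFilter())   # one filter instance per handler
    logger.addHandler(handler)
    logger.setLevel(1)

    logger.log(WARNONCE, "only printed once")
    logger.log(WARNONCE, "only printed once")   # filtered out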
@@ -71,7 +71,7 @@ def update_mtime(f):

 def update_cache(f):
     if f in __mtime_cache:
-        logger.debug("Updating mtime cache for %s" % f)
+        logger.debug(1, "Updating mtime cache for %s" % f)
         update_mtime(f)

 def clear_cache():
@@ -113,8 +113,6 @@ def init(fn, data):
     return h['init'](data)

 def init_parser(d):
-    if hasattr(bb.parse, "siggen"):
-        bb.parse.siggen.exit()
     bb.parse.siggen = bb.siggen.init(d)

 def resolve_file(fn, d):

@@ -34,7 +34,7 @@ class IncludeNode(AstNode):
         Include the file and evaluate the statements
         """
         s = data.expand(self.what_file)
-        logger.debug2("CONF %s:%s: including %s", self.filename, self.lineno, s)
+        logger.debug(2, "CONF %s:%s: including %s", self.filename, self.lineno, s)

         # TODO: Cache those includes... maybe not here though
         if self.force:
@@ -130,10 +130,6 @@ class DataNode(AstNode):
         else:
             val = groupd["value"]

-        if ":append" in key or ":remove" in key or ":prepend" in key:
-            if op in ["append", "prepend", "postdot", "predot", "ques"]:
-                bb.warn(key + " " + groupd[op] + " is not a recommended operator combination, please replace it.")
-
         flag = None
         if 'flag' in groupd and groupd['flag'] is not None:
             flag = groupd['flag']
@@ -149,7 +145,7 @@
             data.setVar(key, val, parsing=True, **loginfo)

 class MethodNode(AstNode):
-    tr_tbl = str.maketrans('/.+-@%&~', '________')
+    tr_tbl = str.maketrans('/.+-@%&', '_______')

     def __init__(self, filename, lineno, func_name, body, python, fakeroot):
         AstNode.__init__(self, filename, lineno)
@@ -223,7 +219,7 @@ class ExportFuncsNode(AstNode):
             for flag in [ "func", "python" ]:
                 if data.getVarFlag(calledfunc, flag, False):
                     data.setVarFlag(func, flag, data.getVarFlag(calledfunc, flag, False))
-            for flag in ["dirs", "cleandirs", "fakeroot"]:
+            for flag in [ "dirs" ]:
                 if data.getVarFlag(func, flag, False):
                     data.setVarFlag(calledfunc, flag, data.getVarFlag(func, flag, False))
             data.setVarFlag(func, "filename", "autogenerated")
@@ -333,17 +329,13 @@ def runAnonFuncs(d):
 def finalize(fn, d, variant = None):
     saved_handlers = bb.event.get_handlers().copy()
     try:
-        # Found renamed variables. Exit immediately
-        if d.getVar("_FAILPARSINGERRORHANDLED", False) == True:
-            raise bb.BBHandledException()
-
         for var in d.getVar('__BBHANDLERS', False) or []:
             # try to add the handler
             handlerfn = d.getVarFlag(var, "filename", False)
             if not handlerfn:
                 bb.fatal("Undefined event handler function '%s'" % var)
             handlerln = int(d.getVarFlag(var, "lineno", False))
-            bb.event.register(var, d.getVar(var, False), (d.getVarFlag(var, "eventmask") or "").split(), handlerfn, handlerln, data=d)
+            bb.event.register(var, d.getVar(var, False), (d.getVarFlag(var, "eventmask") or "").split(), handlerfn, handlerln)

         bb.event.fire(bb.event.RecipePreFinalise(fn), d)

@@ -384,7 +376,7 @@ def _create_variants(datastores, names, function, onlyfinalise):
 def multi_finalize(fn, d):
     appends = (d.getVar("__BBAPPEND") or "").split()
     for append in appends:
-        logger.debug("Appending .bbappend file %s to %s", append, fn)
+        logger.debug(1, "Appending .bbappend file %s to %s", append, fn)
         bb.parse.BBHandler.handle(append, d, True)

     onlyfinalise = d.getVar("__ONLYFINALISE", False)

@@ -13,13 +13,16 @@
 #

 import re, bb, os
-import bb.build, bb.utils, bb.data_smart
+import bb.build, bb.utils

 from . import ConfHandler
 from .. import resolve_file, ast, logger, ParseError
 from .ConfHandler import include, init

-__func_start_regexp__ = re.compile(r"(((?P<py>python(?=(\s|\()))|(?P<fr>fakeroot(?=\s)))\s*)*(?P<func>[\w\.\-\+\{\}\$:]+)?\s*\(\s*\)\s*{$" )
+# For compatibility
+bb.deprecate_import(__name__, "bb.parse", ["vars_from_file"])
+
+__func_start_regexp__ = re.compile(r"(((?P<py>python)|(?P<fr>fakeroot))\s*)*(?P<func>[\w\.\-\+\{\}\$]+)?\s*\(\s*\)\s*{$" )
 __inherit_regexp__ = re.compile(r"inherit\s+(.+)" )
 __export_func_regexp__ = re.compile(r"EXPORT_FUNCTIONS\s+(.+)" )
 __addtask_regexp__ = re.compile(r"addtask\s+(?P<func>\w+)\s*((before\s*(?P<before>((.*(?=after))|(.*))))|(after\s*(?P<after>((.*(?=before))|(.*)))))*")
@@ -44,36 +47,23 @@ def inherit(files, fn, lineno, d):
     __inherit_cache = d.getVar('__inherit_cache', False) or []
     files = d.expand(files).split()
     for file in files:
-        classtype = d.getVar("__bbclasstype", False)
-        origfile = file
-        for t in ["classes-" + classtype, "classes"]:
-            file = origfile
-            if not os.path.isabs(file) and not file.endswith(".bbclass"):
-                file = os.path.join(t, '%s.bbclass' % file)
+        if not os.path.isabs(file) and not file.endswith(".bbclass"):
+            file = os.path.join('classes', '%s.bbclass' % file)

-            if not os.path.isabs(file):
-                bbpath = d.getVar("BBPATH")
-                abs_fn, attempts = bb.utils.which(bbpath, file, history=True)
-                for af in attempts:
-                    if af != abs_fn:
-                        bb.parse.mark_dependency(d, af)
-                if abs_fn:
-                    file = abs_fn
-
-            if os.path.exists(file):
-                break
-
-        if not os.path.exists(file):
-            raise ParseError("Could not inherit file %s" % (file), fn, lineno)
+        if not os.path.isabs(file):
+            bbpath = d.getVar("BBPATH")
+            abs_fn, attempts = bb.utils.which(bbpath, file, history=True)
+            for af in attempts:
+                if af != abs_fn:
+                    bb.parse.mark_dependency(d, af)
+            if abs_fn:
+                file = abs_fn

         if not file in __inherit_cache:
-            logger.debug("Inheriting %s (from %s:%d)" % (file, fn, lineno))
+            logger.debug(1, "Inheriting %s (from %s:%d)" % (file, fn, lineno))
             __inherit_cache.append( file )
             d.setVar('__inherit_cache', __inherit_cache)
-            try:
-                bb.parse.handle(file, d, True)
-            except (IOError, OSError) as exc:
-                raise ParseError("Could not inherit file %s: %s" % (fn, exc.strerror), fn, lineno)
+            include(fn, file, lineno, d, "inherit")
             __inherit_cache = d.getVar('__inherit_cache', False) or []

 def get_statements(filename, absolute_filename, base_name):
@@ -191,10 +181,10 @@ def feeder(lineno, s, fn, root, statements, eof=False):

     if s and s[0] == '#':
         if len(__residue__) != 0 and __residue__[0][0] != "#":
-            bb.fatal("There is a comment on line %s of file %s:\n'''\n%s\n'''\nwhich is in the middle of a multiline expression. This syntax is invalid, please correct it." % (lineno, fn, s))
+            bb.fatal("There is a comment on line %s of file %s (%s) which is in the middle of a multiline expression.\nBitbake used to ignore these but no longer does so, please fix your metadata as errors are likely as a result of this change." % (lineno, fn, s))

     if len(__residue__) != 0 and __residue__[0][0] == "#" and (not s or s[0] != "#"):
-        bb.fatal("There is a confusing multiline partially commented expression on line %s of file %s:\n%s\nPlease clarify whether this is all a comment or should be parsed." % (lineno - len(__residue__), fn, "\n".join(__residue__)))
+        bb.fatal("There is a confusing multiline, partially commented expression on line %s of file %s (%s).\nPlease clarify whether this is all a comment or should be parsed." % (lineno, fn, s))

     if s and s[-1] == '\\':
         __residue__.append(s[:-1])
@@ -243,10 +233,6 @@ def feeder(lineno, s, fn, root, statements, eof=False):
         if taskexpression.count(word) > 1:
             logger.warning("addtask contained multiple '%s' keywords, only one is supported" % word)

-    # Check and warn for having task with exprssion as part of task name
-    for te in taskexpression:
-        if any( ( "%s_" % keyword ) in te for keyword in bb.data_smart.__setvar_keyword__ ):
-            raise ParseError("Task name '%s' contains a keyword which is not recommended/supported.\nPlease rename the task not to include the keyword.\n%s" % (te, ("\n".join(map(str, bb.data_smart.__setvar_keyword__)))), fn)
     ast.handleAddTask(statements, fn, lineno, m)
     return

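The task-name guard above (yocto-4.1 side) rejects task names that embed datastore operator keywords such as "append". A rough illustration of the check (keyword list abridged; the real one comes from bb.data_smart.__setvar_keyword__):

    # Abridged stand-in for bb.data_smart.__setvar_keyword__.
    SETVAR_KEYWORDS = ["append", "prepend", "remove"]

    def check_task_name(te):
        if any(("%s_" % kw) in te for kw in SETVAR_KEYWORDS):
            raise ValueError("Task name '%s' contains a keyword which is not recommended/supported." % te)

    check_task_name("do_compile")              # fine
    try:
        check_task_name("do_append_things")    # rejected
    except ValueError as e:
        print(e)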
@@ -20,7 +20,7 @@ from bb.parse import ParseError, resolve_file, ast, logger, handle
 __config_regexp__ = re.compile( r"""
     ^
     (?P<exp>export\s+)?
-    (?P<var>[a-zA-Z0-9\-_+.${}/~:]+?)
+    (?P<var>[a-zA-Z0-9\-_+.${}/~]+?)
     (\[(?P<flag>[a-zA-Z0-9\-_+.]+)\])?

     \s* (
@@ -48,7 +48,10 @@ __unset_regexp__ = re.compile( r"unset\s+([a-zA-Z0-9\-_+.${}/~]+)$" )
 __unset_flag_regexp__ = re.compile( r"unset\s+([a-zA-Z0-9\-_+.${}/~]+)\[([a-zA-Z0-9\-_+.]+)\]$" )

 def init(data):
-    return
+    topdir = data.getVar('TOPDIR', False)
+    if not topdir:
+        data.setVar('TOPDIR', os.getcwd())


 def supports(fn, d):
     return fn[-5:] == ".conf"
@@ -92,7 +95,7 @@ def include_single_file(parentfn, fn, lineno, data, error_out):
         if exc.errno == errno.ENOENT:
             if error_out:
                 raise ParseError("Could not %s file %s" % (error_out, fn), parentfn, lineno)
-            logger.debug2("CONF file '%s' not found", fn)
+            logger.debug(2, "CONF file '%s' not found", fn)
         else:
             if error_out:
                 raise ParseError("Could not %s file %s: %s" % (error_out, fn, exc.strerror), parentfn, lineno)
@@ -125,21 +128,16 @@ def handle(fn, data, include):
             s = f.readline()
             if not s:
                 break
-            origlineno = lineno
-            origline = s
             w = s.strip()
             # skip empty lines
             if not w:
                 continue
             s = s.rstrip()
             while s[-1] == '\\':
-                line = f.readline()
-                origline += line
-                s2 = line.rstrip()
+                s2 = f.readline().rstrip()
                 lineno = lineno + 1
                 if (not s2 or s2 and s2[0] != "#") and s[0] == "#" :
-                    bb.fatal("There is a confusing multiline, partially commented expression starting on line %s of file %s:\n%s\nPlease clarify whether this is all a comment or should be parsed." % (origlineno, fn, origline))
+                    bb.fatal("There is a confusing multiline, partially commented expression on line %s of file %s (%s).\nPlease clarify whether this is all a comment or should be parsed." % (lineno, fn, s))
                 s = s[:-1] + s2
             # skip comments
             if s[0] == '#':
@@ -152,6 +150,8 @@ def handle(fn, data, include):
     if oldfile:
         data.setVar('FILE', oldfile)

+    f.close()
+
     for f in confFilters:
         f(fn, data)

@@ -12,14 +12,14 @@ currently, providing a key/value store accessed by 'domain'.
 #

 import collections
-import collections.abc
 import contextlib
 import functools
 import logging
 import os.path
 import sqlite3
 import sys
-from collections.abc import Mapping
+import warnings
+from collections import Mapping

 sqlversion = sqlite3.sqlite_version_info
 if sqlversion[0] < 3 or (sqlversion[0] == 3 and sqlversion[1] < 3):
@@ -29,7 +29,7 @@ if sqlversion[0] < 3 or (sqlversion[0] == 3 and sqlversion[1] < 3):
 logger = logging.getLogger("BitBake.PersistData")

 @functools.total_ordering
-class SQLTable(collections.abc.MutableMapping):
+class SQLTable(collections.MutableMapping):
     class _Decorators(object):
         @staticmethod
         def retry(*, reconnect=True):
@@ -63,7 +63,7 @@ class SQLTable(collections.abc.MutableMapping):
         """
         Decorator that starts a database transaction and creates a database
         cursor for performing queries. If no exception is thrown, the
-        database results are committed. If an exception occurs, the database
+        database results are commited. If an exception occurs, the database
         is rolled back. In all cases, the cursor is closed after the
         function ends.

@@ -208,7 +208,7 @@ class SQLTable(collections.abc.MutableMapping):

     def __lt__(self, other):
         if not isinstance(other, Mapping):
-            raise NotImplementedError()
+            raise NotImplemented

         return len(self) < len(other)

@@ -238,6 +238,55 @@ class SQLTable(collections.abc.MutableMapping):
     def has_key(self, key):
         return key in self

+
+class PersistData(object):
+    """Deprecated representation of the bitbake persistent data store"""
+    def __init__(self, d):
+        warnings.warn("Use of PersistData is deprecated. Please use "
+                      "persist(domain, d) instead.",
+                      category=DeprecationWarning,
+                      stacklevel=2)
+
+        self.data = persist(d)
+        logger.debug(1, "Using '%s' as the persistent data cache",
+                     self.data.filename)
+
+    def addDomain(self, domain):
+        """
+        Add a domain (pending deprecation)
+        """
+        return self.data[domain]
+
+    def delDomain(self, domain):
+        """
+        Removes a domain and all the data it contains
+        """
+        del self.data[domain]
+
+    def getKeyValues(self, domain):
+        """
+        Return a list of key + value pairs for a domain
+        """
+        return list(self.data[domain].items())
+
+    def getValue(self, domain, key):
+        """
+        Return the value of a key for a domain
+        """
+        return self.data[domain][key]
+
+    def setValue(self, domain, key, value):
+        """
+        Sets the value of a key for a domain
+        """
+        self.data[domain][key] = value
+
+    def delValue(self, domain, key):
+        """
+        Deletes a key/value pair
+        """
+        del self.data[domain][key]
+
 def persist(domain, d):
     """Convenience factory for SQLTable objects based upon metadata"""
     import bb.utils

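The collections vs collections.abc move is the Python-compatibility point here: the ABC aliases in the bare collections namespace were removed in Python 3.10, so only the collections.abc spelling still works. A minimal sketch of a MutableMapping in the surviving style:

    import collections.abc

    # On Python >= 3.10 only the collections.abc name exists;
    # "collections.MutableMapping" raises AttributeError.
    class Store(collections.abc.MutableMapping):
        def __init__(self):
            self._d = {}
        def __getitem__(self, k): return self._d[k]
        def __setitem__(self, k, v): self._d[k] = v
        def __delitem__(self, k): del self._d[k]
        def __iter__(self): return iter(self._d)
        def __len__(self): return len(self._d)

    s = Store()
    s["pn"] = "zlib"
    print(dict(s))  # {'pn': 'zlib'}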
@@ -1,6 +1,4 @@
 #
-# Copyright BitBake Contributors
-#
 # SPDX-License-Identifier: GPL-2.0-only
 #

@@ -62,7 +60,7 @@ class Popen(subprocess.Popen):
         "close_fds": True,
         "preexec_fn": subprocess_setup,
         "stdout": subprocess.PIPE,
-        "stderr": subprocess.PIPE,
+        "stderr": subprocess.STDOUT,
         "stdin": subprocess.PIPE,
         "shell": False,
     }
@@ -144,7 +142,7 @@ def _logged_communicate(pipe, log, input, extrafiles):
             while pipe.poll() is None:
                 read_all_pipes(log, rin, outdata, errdata)

-            # Process closed, drain all pipes...
+            # Pocess closed, drain all pipes...
             read_all_pipes(log, rin, outdata, errdata)
         finally:
             log.flush()
@@ -183,8 +181,5 @@ def run(cmd, input=None, log=None, extrafiles=None, **options):
         stderr = stderr.decode("utf-8")

     if pipe.returncode != 0:
-        if log:
-            # Don't duplicate the output in the exception if logging it
-            raise ExecutionError(cmd, pipe.returncode, None, None)
         raise ExecutionError(cmd, pipe.returncode, stdout, stderr)
     return stdout, stderr

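The stderr default above is the substantive change: with subprocess.STDOUT the two streams are merged and cannot be reported separately, while with PIPE they stay distinct. A quick stdlib illustration (assumes a python3 interpreter on PATH):

    import subprocess

    # Separate streams (the newer default shown above):
    p = subprocess.run(["python3", "-c", "import sys; sys.stderr.write('oops')"],
                       stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    print(p.stdout, p.stderr)   # b'' b'oops'

    # Merged streams (the older default): stderr arrives on stdout.
    p = subprocess.run(["python3", "-c", "import sys; sys.stderr.write('oops')"],
                       stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    print(p.stdout, p.stderr)   # b'oops' None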
@@ -94,15 +94,12 @@ class LineFilterProgressHandler(ProgressHandler):
         while True:
             breakpos = self._linebuffer.find('\n') + 1
             if breakpos == 0:
-                # for the case when the line with progress ends with only '\r'
-                breakpos = self._linebuffer.find('\r') + 1
-                if breakpos == 0:
-                    break
+                break
             line = self._linebuffer[:breakpos]
             self._linebuffer = self._linebuffer[breakpos:]
             # Drop any line feeds and anything that precedes them
             lbreakpos = line.rfind('\r') + 1
-            if lbreakpos and lbreakpos != breakpos:
+            if lbreakpos:
                 line = line[lbreakpos:]
             if self.writeline(filter_color(line)):
                 super().write(line)
@@ -148,7 +145,7 @@ class MultiStageProgressReporter:
     for tasks made up of python code spread across multiple
     classes / functions - the progress reporter object can
     be passed around or stored at the object level and calls
-    to next_stage() and update() made wherever needed.
+    to next_stage() and update() made whereever needed.
     """
     def __init__(self, d, stage_weights, debug=False):
         """

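The '\r' handling added on the yocto-4.1 side is what lets progress bars that never emit a newline still be parsed line by line. A self-contained sketch of the same buffering idea (simplified; the real handler also filters colors and forwards output):

    def split_progress_lines(buf):
        """Yield complete 'lines' ended by \n, or by \r for bar-style updates."""
        lines = []
        while True:
            breakpos = buf.find('\n') + 1
            if breakpos == 0:
                # progress bars often end updates with '\r' only
                breakpos = buf.find('\r') + 1
                if breakpos == 0:
                    break
            line = buf[:breakpos]
            buf = buf[breakpos:]
            # keep only the text after the last carriage return
            lbreakpos = line.rfind('\r') + 1
            if lbreakpos and lbreakpos != breakpos:
                line = line[lbreakpos:]
            lines.append(line)
        return lines, buf

    print(split_progress_lines("10%\r20%\r30%\ndone\n"))
    # (['30%\n', 'done\n'], '')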
@@ -38,17 +38,16 @@ def findProviders(cfgData, dataCache, pkg_pn = None):
     localdata = data.createCopy(cfgData)
     bb.data.expandKeys(localdata)

-    required = {}
     preferred_versions = {}
     latest_versions = {}

     for pn in pkg_pn:
-        (last_ver, last_file, pref_ver, pref_file, req) = findBestProvider(pn, localdata, dataCache, pkg_pn)
+        (last_ver, last_file, pref_ver, pref_file) = findBestProvider(pn, localdata, dataCache, pkg_pn)
         preferred_versions[pn] = (pref_ver, pref_file)
         latest_versions[pn] = (last_ver, last_file)
-        required[pn] = req

-    return (latest_versions, preferred_versions, required)
+    return (latest_versions, preferred_versions)


 def allProviders(dataCache):
     """
@@ -60,6 +59,7 @@ def allProviders(dataCache):
         all_providers[pn].append((ver, fn))
     return all_providers


 def sortPriorities(pn, dataCache, pkg_pn = None):
     """
     Reorder pkg_pn by file priority and default preference
@@ -87,21 +87,6 @@

     return tmp_pn

-def versionVariableMatch(cfgData, keyword, pn):
-    """
-    Return the value of the <keyword>_VERSION variable if set.
-    """
-
-    # pn can contain '_', e.g. gcc-cross-x86_64 and an override cannot
-    # hence we do this manually rather than use OVERRIDES
-    ver = cfgData.getVar("%s_VERSION:pn-%s" % (keyword, pn))
-    if not ver:
-        ver = cfgData.getVar("%s_VERSION_%s" % (keyword, pn))
-    if not ver:
-        ver = cfgData.getVar("%s_VERSION" % keyword)
-
-    return ver
-
 def preferredVersionMatch(pe, pv, pr, preferred_e, preferred_v, preferred_r):
     """
     Check if the version pe,pv,pr is the preferred one.
@@ -117,28 +102,19 @@ def preferredVersionMatch(pe, pv, pr, preferred_e, preferred_v, preferred_r):

 def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
     """
-    Find the first provider in pkg_pn with REQUIRED_VERSION or PREFERRED_VERSION set.
+    Find the first provider in pkg_pn with a PREFERRED_VERSION set.
     """

     preferred_file = None
     preferred_ver = None
-    required = False

-    required_v = versionVariableMatch(cfgData, "REQUIRED", pn)
-    preferred_v = versionVariableMatch(cfgData, "PREFERRED", pn)
-
-    itemstr = ""
-    if item:
-        itemstr = " (for item %s)" % item
-
-    if required_v is not None:
-        if preferred_v is not None:
-            logger.warning("REQUIRED_VERSION and PREFERRED_VERSION for package %s%s are both set using REQUIRED_VERSION %s", pn, itemstr, required_v)
-        else:
-            logger.debug("REQUIRED_VERSION is set for package %s%s", pn, itemstr)
-        # REQUIRED_VERSION always takes precedence over PREFERRED_VERSION
-        preferred_v = required_v
-        required = True
+    # pn can contain '_', e.g. gcc-cross-x86_64 and an override cannot
+    # hence we do this manually rather than use OVERRIDES
+    preferred_v = cfgData.getVar("PREFERRED_VERSION_pn-%s" % pn)
+    if not preferred_v:
+        preferred_v = cfgData.getVar("PREFERRED_VERSION_%s" % pn)
+    if not preferred_v:
+        preferred_v = cfgData.getVar("PREFERRED_VERSION")

     if preferred_v:
         m = re.match(r'(\d+:)*(.*)(_.*)*', preferred_v)
@@ -171,9 +147,11 @@ def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
         pv_str = preferred_v
     if not (preferred_e is None):
         pv_str = '%s:%s' % (preferred_e, pv_str)
+    itemstr = ""
+    if item:
+        itemstr = " (for item %s)" % item
     if preferred_file is None:
-        if not required:
-            logger.warning("preferred version %s of %s not available%s", pv_str, pn, itemstr)
+        logger.info("preferred version %s of %s not available%s", pv_str, pn, itemstr)
         available_vers = []
         for file_set in pkg_pn:
             for f in file_set:
@@ -185,16 +163,12 @@ def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
                 available_vers.append(ver_str)
         if available_vers:
             available_vers.sort()
-            logger.warning("versions of %s available: %s", pn, ' '.join(available_vers))
-        if required:
-            logger.error("required version %s of %s not available%s", pv_str, pn, itemstr)
+            logger.info("versions of %s available: %s", pn, ' '.join(available_vers))
     else:
-        if required:
-            logger.debug("selecting %s as REQUIRED_VERSION %s of package %s%s", preferred_file, pv_str, pn, itemstr)
-        else:
-            logger.debug("selecting %s as PREFERRED_VERSION %s of package %s%s", preferred_file, pv_str, pn, itemstr)
+        logger.debug(1, "selecting %s as PREFERRED_VERSION %s of package %s%s", preferred_file, pv_str, pn, itemstr)

-    return (preferred_ver, preferred_file, required)
+    return (preferred_ver, preferred_file)

 def findLatestProvider(pn, cfgData, dataCache, file_set):
     """
@@ -215,6 +189,7 @@ def findLatestProvider(pn, cfgData, dataCache, file_set):

     return (latest, latest_f)


 def findBestProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
     """
     If there is a PREFERRED_VERSION, find the highest-priority bbfile
@@ -223,16 +198,17 @@
     """

     sortpkg_pn = sortPriorities(pn, dataCache, pkg_pn)
-    # Find the highest priority provider with a REQUIRED_VERSION or PREFERRED_VERSION set
-    (preferred_ver, preferred_file, required) = findPreferredProvider(pn, cfgData, dataCache, sortpkg_pn, item)
+    # Find the highest priority provider with a PREFERRED_VERSION set
+    (preferred_ver, preferred_file) = findPreferredProvider(pn, cfgData, dataCache, sortpkg_pn, item)
     # Find the latest version of the highest priority provider
     (latest, latest_f) = findLatestProvider(pn, cfgData, dataCache, sortpkg_pn[0])

-    if not required and preferred_file is None:
+    if preferred_file is None:
         preferred_file = latest_f
         preferred_ver = latest

-    return (latest, latest_f, preferred_ver, preferred_file, required)
+    return (latest, latest_f, preferred_ver, preferred_file)


 def _filterProviders(providers, item, cfgData, dataCache):
     """
@@ -256,15 +232,12 @@ def _filterProviders(providers, item, cfgData, dataCache):
             pkg_pn[pn] = []
         pkg_pn[pn].append(p)

-    logger.debug("providers for %s are: %s", item, list(sorted(pkg_pn.keys())))
+    logger.debug(1, "providers for %s are: %s", item, list(sorted(pkg_pn.keys())))

-    # First add REQUIRED_VERSIONS or PREFERRED_VERSIONS
+    # First add PREFERRED_VERSIONS
     for pn in sorted(pkg_pn):
         sortpkg_pn[pn] = sortPriorities(pn, dataCache, pkg_pn)
-        preferred_ver, preferred_file, required = findPreferredProvider(pn, cfgData, dataCache, sortpkg_pn[pn], item)
-        if required and preferred_file is None:
-            return eligible
-        preferred_versions[pn] = (preferred_ver, preferred_file)
+        preferred_versions[pn] = findPreferredProvider(pn, cfgData, dataCache, sortpkg_pn[pn], item)
         if preferred_versions[pn][1]:
             eligible.append(preferred_versions[pn][1])

@@ -275,8 +248,9 @@ def _filterProviders(providers, item, cfgData, dataCache):
             preferred_versions[pn] = findLatestProvider(pn, cfgData, dataCache, sortpkg_pn[pn][0])
             eligible.append(preferred_versions[pn][1])

-    if not eligible:
-        return eligible
+    if len(eligible) == 0:
+        logger.error("no eligible providers for %s", item)
+        return 0

     # If pn == item, give it a slight default preference
     # This means PREFERRED_PROVIDER_foobar defaults to foobar if available
@@ -292,6 +266,7 @@ def _filterProviders(providers, item, cfgData, dataCache):

     return eligible


 def filterProviders(providers, item, cfgData, dataCache):
     """
     Take a list of providers and filter/reorder according to the
@@ -316,7 +291,7 @@ def filterProviders(providers, item, cfgData, dataCache):
             foundUnique = True
             break

-    logger.debug("sorted providers for %s are: %s", item, eligible)
+    logger.debug(1, "sorted providers for %s are: %s", item, eligible)

     return eligible, foundUnique

@@ -358,7 +333,7 @@ def filterProvidersRunTime(providers, item, cfgData, dataCache):
         provides = dataCache.pn_provides[pn]
         for provide in provides:
             prefervar = cfgData.getVar('PREFERRED_PROVIDER_%s' % provide)
-            #logger.debug("checking PREFERRED_PROVIDER_%s (value %s) against %s", provide, prefervar, pns.keys())
+            #logger.debug(1, "checking PREFERRED_PROVIDER_%s (value %s) against %s", provide, prefervar, pns.keys())
             if prefervar in pns and pns[prefervar] not in preferred:
                 var = "PREFERRED_PROVIDER_%s = %s" % (provide, prefervar)
                 logger.verbose("selecting %s to satisfy runtime %s due to %s", prefervar, item, var)
@@ -374,7 +349,7 @@ def filterProvidersRunTime(providers, item, cfgData, dataCache):
     if numberPreferred > 1:
         logger.error("Trying to resolve runtime dependency %s resulted in conflicting PREFERRED_PROVIDER entries being found.\nThe providers found were: %s\nThe PREFERRED_PROVIDER entries resulting in this conflict were: %s. You could set PREFERRED_RPROVIDER_%s" % (item, preferred, preferred_vars, item))

-    logger.debug("sorted runtime providers for %s are: %s", item, eligible)
+    logger.debug(1, "sorted runtime providers for %s are: %s", item, eligible)

     return eligible, numberPreferred

@@ -396,8 +371,8 @@ def getRuntimeProviders(dataCache, rdepend):
         return rproviders

     # Only search dynamic packages if we can't find anything in other variables
-    for pat_key in dataCache.packages_dynamic:
-        pattern = pat_key.replace(r'+', r"\+")
+    for pattern in dataCache.packages_dynamic:
+        pattern = pattern.replace(r'+', r"\+")
         if pattern in regexp_cache:
             regexp = regexp_cache[pattern]
         else:
@@ -408,11 +383,12 @@ def getRuntimeProviders(dataCache, rdepend):
             raise
         regexp_cache[pattern] = regexp
         if regexp.match(rdepend):
-            rproviders += dataCache.packages_dynamic[pat_key]
-            logger.debug("Assuming %s is a dynamic package, but it may not exist" % rdepend)
+            rproviders += dataCache.packages_dynamic[pattern]
+            logger.debug(1, "Assuming %s is a dynamic package, but it may not exist" % rdepend)

     return rproviders


 def buildWorldTargetList(dataCache, task=None):
     """
     Build package list for "bitbake world"
@@ -420,22 +396,22 @@ def buildWorldTargetList(dataCache, task=None):
     if dataCache.world_target:
         return

-    logger.debug("collating packages for \"world\"")
+    logger.debug(1, "collating packages for \"world\"")
     for f in dataCache.possible_world:
         terminal = True
         pn = dataCache.pkg_fn[f]
         if task and task not in dataCache.task_deps[f]['tasks']:
-            logger.debug2("World build skipping %s as task %s doesn't exist", f, task)
+            logger.debug(2, "World build skipping %s as task %s doesn't exist", f, task)
             terminal = False

         for p in dataCache.pn_provides[pn]:
             if p.startswith('virtual/'):
-                logger.debug2("World build skipping %s due to %s provider starting with virtual/", f, p)
+                logger.debug(2, "World build skipping %s due to %s provider starting with virtual/", f, p)
                 terminal = False
                 break
             for pf in dataCache.providers[p]:
                 if dataCache.pkg_fn[pf] != pn:
-                    logger.debug2("World build skipping %s due to both us and %s providing %s", f, pf, p)
+                    logger.debug(2, "World build skipping %s due to both us and %s providing %s", f, pf, p)
                     terminal = False
                     break
         if terminal:

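The version-selection precedence on the yocto-4.1 side is: the pn-specific REQUIRED_VERSION, then the global one, and only then the PREFERRED_VERSION equivalents, with REQUIRED winning outright. A toy lookup mirroring that precedence (a plain dict stands in for the datastore, and the legacy "<keyword>_VERSION_<pn>" middle form is omitted for brevity):

    def version_variable_match(cfg, keyword, pn):
        # Mirrors versionVariableMatch(): pn-specific override first, then global.
        return (cfg.get("%s_VERSION:pn-%s" % (keyword, pn))
                or cfg.get("%s_VERSION" % keyword))

    def pick_version(cfg, pn):
        required = version_variable_match(cfg, "REQUIRED", pn)
        preferred = version_variable_match(cfg, "PREFERRED", pn)
        if required is not None:
            return required, True      # REQUIRED_VERSION always wins
        return preferred, False

    cfg = {"PREFERRED_VERSION:pn-gcc": "11.%", "REQUIRED_VERSION:pn-gcc": "12.2"}
    print(pick_version(cfg, "gcc"))    # ('12.2', True)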
(One file's diff was suppressed because it is too large.)
@@ -26,8 +26,6 @@ import errno
 import re
 import datetime
 import pickle
-import traceback
-import gc
 import bb.server.xmlrpcserver
 from bb import daemonize
 from multiprocessing import queues
@@ -149,7 +147,7 @@ class ProcessServer():
                     conn = newconnections.pop(-1)
                     fds.append(conn)
                     self.controllersock = conn
-            elif not self.timeout and not ready:
+            elif self.timeout is None and not ready:
                 serverlog("No timeout, exiting.")
                 self.quit = True

@@ -219,9 +217,8 @@ class ProcessServer():
                         self.command_channel_reply.send(self.cooker.command.runCommand(command))
                         serverlog("Command Completed")
                     except Exception as e:
-                        stack = traceback.format_exc()
-                        serverlog('Exception in server main event loop running command %s (%s)' % (command, stack))
-                        logger.exception('Exception in server main event loop running command %s (%s)' % (command, stack))
+                        serverlog('Exception in server main event loop running command %s (%s)' % (command, str(e)))
+                        logger.exception('Exception in server main event loop running command %s (%s)' % (command, str(e)))

             if self.xmlrpc in ready:
                 self.xmlrpc.handle_requests()
@@ -244,6 +241,9 @@ class ProcessServer():

             ready = self.idle_commands(.1, fds)

+        if len(threading.enumerate()) != 1:
+            serverlog("More than one thread left?: " + str(threading.enumerate()))
+
         serverlog("Exiting")
         # Remove the socket file so we don't get any more connections to avoid races
         try:
@@ -261,9 +261,6 @@ class ProcessServer():

         self.cooker.post_serve()

-        if len(threading.enumerate()) != 1:
-            serverlog("More than one thread left?: " + str(threading.enumerate()))
-
         # Flush logs before we release the lock
         sys.stdout.flush()
         sys.stderr.flush()
@@ -327,10 +324,10 @@ class ProcessServer():
                 if e.errno != errno.ENOENT:
                     raise

-            msg = ["Delaying shutdown due to active processes which appear to be holding bitbake.lock"]
+            msg = "Delaying shutdown due to active processes which appear to be holding bitbake.lock"
             if procs:
-                msg.append(":\n%s" % str(procs.decode("utf-8")))
-            serverlog("".join(msg))
+                msg += ":\n%s" % str(procs.decode("utf-8"))
+            serverlog(msg)

     def idle_commands(self, delay, fds=None):
         nextsleep = delay
@@ -370,12 +367,7 @@ class ProcessServer():
                 self.next_heartbeat = now + self.heartbeat_seconds
                 if hasattr(self.cooker, "data"):
                     heartbeat = bb.event.HeartbeatEvent(now)
-                    try:
-                        bb.event.fire(heartbeat, self.cooker.data)
-                    except Exception as exc:
-                        if not isinstance(exc, bb.BBHandledException):
-                            logger.exception('Running heartbeat function')
-                        self.quit = True
+                    bb.event.fire(heartbeat, self.cooker.data)
             if nextsleep and now + nextsleep > self.next_heartbeat:
                 # Shorten timeout so that we we wake up in time for
                 # the heartbeat.
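
The heartbeat hunk is a robustness change rather than a rename: the newer server wraps the HeartbeatEvent dispatch so that an exception from any heartbeat handler is logged (unless already handled) and shuts the server down cleanly, instead of escaping the main loop. The guard pattern in isolation, as a hedged sketch with a plain handler list standing in for bb.event:

    import logging

    logger = logging.getLogger("BitBake.Example")

    class HeartbeatEvent:
        def __init__(self, time):
            self.time = time

    class HandledException(Exception):
        """Stand-in for bb.BBHandledException: already reported elsewhere."""

    def fire_heartbeat(handlers, now):
        """Dispatch a heartbeat; returns True if the server should quit."""
        heartbeat = HeartbeatEvent(now)
        try:
            for handler in handlers:
                handler(heartbeat)
        except Exception as exc:
            if not isinstance(exc, HandledException):
                logger.exception('Running heartbeat function')
            # A failing heartbeat handler stops the server loop (self.quit = True).
            return True
        return False
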
@@ -437,7 +429,6 @@ class BitBakeProcessServerConnection(object):
         self.socket_connection = sock

     def terminate(self):
-        self.events.close()
         self.socket_connection.close()
         self.connection.connection.close()
         self.connection.recv.close()
@@ -475,7 +466,7 @@ class BitBakeServer(object):
         try:
             r = ready.get()
         except EOFError:
-            # Trap the child exiting/closing the pipe and error out
+            # Trap the child exitting/closing the pipe and error out
             r = None
         if not r or r[0] != "r":
             ready.close()
@@ -518,7 +509,7 @@ class BitBakeServer(object):
             os.set_inheritable(self.bitbake_lock.fileno(), True)
             os.set_inheritable(self.readypipein, True)
             serverscript = os.path.realpath(os.path.dirname(__file__) + "/../../../bin/bitbake-server")
-            os.execl(sys.executable, "bitbake-server", serverscript, "decafbad", str(self.bitbake_lock.fileno()), str(self.readypipein), self.logfile, self.bitbake_lock.name, self.sockname, str(self.server_timeout or 0), str(self.xmlrpcinterface[0]), str(self.xmlrpcinterface[1]))
+            os.execl(sys.executable, "bitbake-server", serverscript, "decafbad", str(self.bitbake_lock.fileno()), str(self.readypipein), self.logfile, self.bitbake_lock.name, self.sockname, str(self.server_timeout), str(self.xmlrpcinterface[0]), str(self.xmlrpcinterface[1]))

 def execServer(lockfd, readypipeinfd, lockname, sockname, server_timeout, xmlrpcinterface):

@@ -558,7 +549,7 @@ def execServer(lockfd, readypipeinfd, lockname, sockname, server_timeout, xmlrpcinterface):

         server.run()
     finally:
-        # Flush any messages/errors to the logfile before exit
+        # Flush any ,essages/errors to the logfile before exit
         sys.stdout.flush()
         sys.stderr.flush()
@@ -663,18 +654,23 @@ class BBUIEventQueue:
         self.reader = ConnectionReader(readfd)

         self.t = threading.Thread()
         self.t.setDaemon(True)
         self.t.run = self.startCallbackHandler
         self.t.start()

     def getEvent(self):
-        with self.eventQueueLock:
-            if len(self.eventQueue) == 0:
-                return None
+        self.eventQueueLock.acquire()

-            item = self.eventQueue.pop(0)
-            if len(self.eventQueue) == 0:
-                self.eventQueueNotify.clear()
+        if len(self.eventQueue) == 0:
+            self.eventQueueLock.release()
+            return None
+
+        item = self.eventQueue.pop(0)
+
+        if len(self.eventQueue) == 0:
+            self.eventQueueNotify.clear()
+
+        self.eventQueueLock.release()
         return item

     def waitEvent(self, delay):
@@ -682,9 +678,10 @@ class BBUIEventQueue:
         return self.getEvent()

     def queue_event(self, event):
-        with self.eventQueueLock:
-            self.eventQueue.append(event)
-            self.eventQueueNotify.set()
+        self.eventQueueLock.acquire()
+        self.eventQueue.append(event)
+        self.eventQueueNotify.set()
+        self.eventQueueLock.release()

     def send_event(self, event):
         self.queue_event(pickle.loads(event))
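
The getEvent()/queue_event() hunks swap manual acquire()/release() for a with block on the queue lock. Functionally the two are close, but the context manager releases the lock on every exit path, including early returns and exceptions, which is what motivates the newer form. A side-by-side sketch of the two idioms from the hunk:

    import threading

    event_queue = []
    event_queue_lock = threading.Lock()

    def get_event_manual():
        # Older idiom: each exit path must release the lock explicitly.
        event_queue_lock.acquire()
        if len(event_queue) == 0:
            event_queue_lock.release()
            return None
        item = event_queue.pop(0)
        event_queue_lock.release()
        return item

    def get_event_with():
        # Newer idiom: the lock is released on return or exception.
        with event_queue_lock:
            if len(event_queue) == 0:
                return None
            return event_queue.pop(0)
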
@@ -693,17 +690,13 @@ class BBUIEventQueue:
         bb.utils.set_process_name("UIEventQueue")
         while True:
             try:
-                ready = self.reader.wait(0.25)
-                if ready:
-                    event = self.reader.get()
-                    self.queue_event(event)
-            except (EOFError, OSError, TypeError):
+                self.reader.wait()
+                event = self.reader.get()
+                self.queue_event(event)
+            except EOFError:
                 # Easiest way to exit is to close the file descriptor to cause an exit
                 break

-    def close(self):
-        self.reader.close()
-        self.t.join()

 class ConnectionReader(object):

@@ -737,32 +730,10 @@ class ConnectionWriter(object):
         # Why bb.event needs this I have no idea
         self.event = self

-    def _send(self, obj):
-        gc.disable()
-        with self.wlock:
-            self.writer.send_bytes(obj)
-        gc.enable()
-
     def send(self, obj):
         obj = multiprocessing.reduction.ForkingPickler.dumps(obj)
-        # See notes/code in CookerParser
-        # We must not terminate holding this lock else processes will hang.
-        # For SIGTERM, raising afterwards avoids this.
-        # For SIGINT, we don't want to have written partial data to the pipe.
-        # pthread_sigmask block/unblock would be nice but doesn't work, https://bugs.python.org/issue47139
-        process = multiprocessing.current_process()
-        if process and hasattr(process, "queue_signals"):
-            with process.signal_threadlock:
-                process.queue_signals = True
-                self._send(obj)
-                process.queue_signals = False
-                try:
-                    for sig in process.signal_received.pop():
-                        process.handle_sig(sig, None)
-                except IndexError:
-                    pass
-        else:
-            self._send(obj)
+        with self.wlock:
+            self.writer.send_bytes(obj)

     def fileno(self):
         return self.writer.fileno()
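
The ConnectionWriter hunk shows the newer send() queueing signals for the duration of the write (and disabling the garbage collector in _send()), so that a SIGTERM or SIGINT cannot interrupt the process while it holds the write lock with partial pickle data in the pipe. A stripped-down sketch of the defer-and-replay idea; the attribute names in the real code (queue_signals, signal_received, handle_sig) follow the diff, but this plumbing is simplified and hypothetical:

    import signal

    class SignalDeferrer:
        """Queue the given signals during a critical section, replay them after."""
        def __init__(self, signums):
            self.signums = signums
            self.received = []

        def _record(self, signum, frame):
            self.received.append(signum)

        def __enter__(self):
            self.saved = {s: signal.signal(s, self._record) for s in self.signums}
            return self

        def __exit__(self, *exc):
            # Restore the original handlers, then replay anything queued.
            for s, old in self.saved.items():
                signal.signal(s, old)
            while self.received:
                signal.raise_signal(self.received.pop(0))
            return False

    # Usage sketch: the write itself cannot be interrupted by SIGTERM.
    # with SignalDeferrer([signal.SIGTERM]):
    #     writer.send_bytes(payload)
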
@@ -11,7 +11,6 @@ import hashlib
 import time
 import inspect
 from xmlrpc.server import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler
 import bb.server.xmlrpcclient
-
 import bb

@@ -1,6 +1,4 @@
 #
-# Copyright BitBake Contributors
-#
 # SPDX-License-Identifier: GPL-2.0-only
 #

@@ -13,8 +11,6 @@ import pickle
 import bb.data
 import difflib
 import simplediff
-import json
-import bb.compress.zstd
 from bb.checksum import FileChecksumCache
 from bb import runqueue
 import hashserv
@@ -23,17 +19,6 @@ import hashserv.client
 logger = logging.getLogger('BitBake.SigGen')
 hashequiv_logger = logging.getLogger('BitBake.SigGen.HashEquiv')

-class SetEncoder(json.JSONEncoder):
-    def default(self, obj):
-        if isinstance(obj, set):
-            return dict(_set_object=list(sorted(obj)))
-        return json.JSONEncoder.default(self, obj)
-
-def SetDecoder(dct):
-    if '_set_object' in dct:
-        return set(dct['_set_object'])
-    return dct
-
 def init(d):
     siggens = [obj for obj in globals().values()
         if type(obj) is type and issubclass(obj, SignatureGenerator)]
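
The removed SetEncoder/SetDecoder pair exists because the newer siginfo format is JSON, and JSON has no set type: sets are boxed as {"_set_object": [...]} when dumping and unboxed on load. A runnable round trip using the two helpers exactly as they appear in the hunk above:

    import json

    class SetEncoder(json.JSONEncoder):
        def default(self, obj):
            if isinstance(obj, set):
                return dict(_set_object=list(sorted(obj)))
            return json.JSONEncoder.default(self, obj)

    def SetDecoder(dct):
        if '_set_object' in dct:
            return set(dct['_set_object'])
        return dct

    data = {"basehash_ignore_vars": {"TMPDIR", "DATETIME"}}
    text = json.dumps(data, sort_keys=True, cls=SetEncoder)
    # -> {"basehash_ignore_vars": {"_set_object": ["DATETIME", "TMPDIR"]}}
    assert json.loads(text, object_hook=SetDecoder) == data
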
@@ -42,6 +27,7 @@ def init(d):
     for sg in siggens:
         if desired == sg.name:
             return sg(d)
+            break
     else:
         logger.error("Invalid signature generator '%s', using default 'noop'\n"
                      "Available generators: %s", desired,
@@ -122,9 +108,6 @@ class SignatureGenerator(object):
     def save_unitaskhashes(self):
         return

-    def copy_unitaskhashes(self, targetdir):
-        return
-
     def set_setscene_tasks(self, setscene_tasks):
         return

@@ -160,9 +143,6 @@ class SignatureGenerator(object):

         return DataCacheProxy()

-    def exit(self):
-        return
-
 class SignatureGeneratorBasic(SignatureGenerator):
     """
     """
@@ -179,8 +159,8 @@ class SignatureGeneratorBasic(SignatureGenerator):
         self.gendeps = {}
         self.lookupcache = {}
         self.setscenetasks = set()
-        self.basehash_ignore_vars = set((data.getVar("BB_BASEHASH_IGNORE_VARS") or "").split())
-        self.taskhash_ignore_tasks = None
+        self.basewhitelist = set((data.getVar("BB_HASHBASE_WHITELIST") or "").split())
+        self.taskwhitelist = None
         self.init_rundepcheck(data)
         checksum_cache_file = data.getVar("BB_HASH_CHECKSUM_CACHE_FILE")
         if checksum_cache_file:
@@ -195,18 +175,18 @@ class SignatureGeneratorBasic(SignatureGenerator):
         self.tidtopn = {}

     def init_rundepcheck(self, data):
-        self.taskhash_ignore_tasks = data.getVar("BB_TASKHASH_IGNORE_TASKS") or None
-        if self.taskhash_ignore_tasks:
-            self.twl = re.compile(self.taskhash_ignore_tasks)
+        self.taskwhitelist = data.getVar("BB_HASHTASK_WHITELIST") or None
+        if self.taskwhitelist:
+            self.twl = re.compile(self.taskwhitelist)
         else:
             self.twl = None

     def _build_data(self, fn, d):

         ignore_mismatch = ((d.getVar("BB_HASH_IGNORE_MISMATCH") or '') == '1')
-        tasklist, gendeps, lookupcache = bb.data.generate_dependencies(d, self.basehash_ignore_vars)
+        tasklist, gendeps, lookupcache = bb.data.generate_dependencies(d, self.basewhitelist)

-        taskdeps, basehash = bb.data.generate_dependency_hash(tasklist, gendeps, lookupcache, self.basehash_ignore_vars, fn)
+        taskdeps, basehash = bb.data.generate_dependency_hash(tasklist, gendeps, lookupcache, self.basewhitelist, fn)

         for task in tasklist:
             tid = fn + ":" + task
@@ -248,7 +228,7 @@ class SignatureGeneratorBasic(SignatureGenerator):
         #    self.dump_sigtask(fn, task, d.getVar("STAMP"), False)

         for task in taskdeps:
-            d.setVar("BB_BASEHASH:task-%s" % task, self.basehash[fn + ":" + task])
+            d.setVar("BB_BASEHASH_task-%s" % task, self.basehash[fn + ":" + task])

     def postparsing_clean_cache(self):
         #
@@ -260,8 +240,7 @@ class SignatureGeneratorBasic(SignatureGenerator):

     def rundep_check(self, fn, recipename, task, dep, depname, dataCaches):
         # Return True if we should keep the dependency, False to drop it
-        # We only manipulate the dependencies for packages not in the ignore
-        # list
+        # We only manipulate the dependencies for packages not in the whitelist
         if self.twl and not self.twl.search(recipename):
             # then process the actual dependencies
             if self.twl.search(depname):
@@ -332,12 +311,16 @@ class SignatureGeneratorBasic(SignatureGenerator):

         data = self.basehash[tid]
         for dep in self.runtaskdeps[tid]:
-            data = data + self.get_unihash(dep)
+            if dep in self.unihash:
+                if self.unihash[dep] is None:
+                    data = data + self.taskhash[dep]
+                else:
+                    data = data + self.unihash[dep]
+            else:
+                data = data + self.get_unihash(dep)

         for (f, cs) in self.file_checksum_values[tid]:
             if cs:
-                if "/./" in f:
-                    data = data + "./" + f.split("/./")[1]
                 data = data + cs

         if tid in self.taints:
@@ -348,7 +331,7 @@ class SignatureGeneratorBasic(SignatureGenerator):

         h = hashlib.sha256(data.encode("utf-8")).hexdigest()
         self.taskhash[tid] = h
-        #d.setVar("BB_TASKHASH:task-%s" % task, taskhash[task])
+        #d.setVar("BB_TASKHASH_task-%s" % task, taskhash[task])
         return h

     def writeout_file_checksum_cache(self):
@@ -363,9 +346,6 @@ class SignatureGeneratorBasic(SignatureGenerator):
     def save_unitaskhashes(self):
         self.unihash_cache.save(self.unitaskhashes)

-    def copy_unitaskhashes(self, targetdir):
-        self.unihash_cache.copyfile(targetdir)
-
     def dump_sigtask(self, fn, task, stampbase, runtime):

         tid = fn + ":" + task
@@ -383,27 +363,22 @@ class SignatureGeneratorBasic(SignatureGenerator):

         data = {}
         data['task'] = task
-        data['basehash_ignore_vars'] = self.basehash_ignore_vars
-        data['taskhash_ignore_tasks'] = self.taskhash_ignore_tasks
+        data['basewhitelist'] = self.basewhitelist
+        data['taskwhitelist'] = self.taskwhitelist
         data['taskdeps'] = self.taskdeps[fn][task]
         data['basehash'] = self.basehash[tid]
         data['gendeps'] = {}
         data['varvals'] = {}
         data['varvals'][task] = self.lookupcache[fn][task]
         for dep in self.taskdeps[fn][task]:
-            if dep in self.basehash_ignore_vars:
+            if dep in self.basewhitelist:
                 continue
             data['gendeps'][dep] = self.gendeps[fn][dep]
             data['varvals'][dep] = self.lookupcache[fn][dep]

         if runtime and tid in self.taskhash:
             data['runtaskdeps'] = self.runtaskdeps[tid]
-            data['file_checksum_values'] = []
-            for f,cs in self.file_checksum_values[tid]:
-                if "/./" in f:
-                    data['file_checksum_values'].append(("./" + f.split("/./")[1], cs))
-                else:
-                    data['file_checksum_values'].append((os.path.basename(f), cs))
+            data['file_checksum_values'] = [(os.path.basename(f), cs) for f,cs in self.file_checksum_values[tid]]
             data['runtaskhashes'] = {}
             for dep in data['runtaskdeps']:
                 data['runtaskhashes'][dep] = self.get_unihash(dep)
@@ -427,13 +402,13 @@ class SignatureGeneratorBasic(SignatureGenerator):
                 bb.error("Taskhash mismatch %s versus %s for %s" % (computed_taskhash, self.taskhash[tid], tid))
                 sigfile = sigfile.replace(self.taskhash[tid], computed_taskhash)

-        fd, tmpfile = bb.utils.mkstemp(dir=os.path.dirname(sigfile), prefix="sigtask.")
+        fd, tmpfile = tempfile.mkstemp(dir=os.path.dirname(sigfile), prefix="sigtask.")
         try:
-            with bb.compress.zstd.open(fd, "wt", encoding="utf-8", num_threads=1) as f:
-                json.dump(data, f, sort_keys=True, separators=(",", ":"), cls=SetEncoder)
-                f.flush()
+            with os.fdopen(fd, "wb") as stream:
+                p = pickle.dump(data, stream, -1)
+                stream.flush()
             os.chmod(tmpfile, 0o664)
-            bb.utils.rename(tmpfile, sigfile)
+            os.rename(tmpfile, sigfile)
         except (OSError, IOError) as err:
             try:
                 os.unlink(tmpfile)
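
Both sides of the dump_sigtask hunk share the same write-to-temp-then-rename pattern; only the serialization changed (zstd-compressed JSON in the newer tree, pickle in the older). The temp file lives in the destination directory so the final rename stays on one filesystem and is atomic on POSIX, meaning a concurrent reader never sees a half-written siginfo file. The pattern in isolation, with plain JSON standing in for the compressed stream:

    import json
    import os
    import tempfile

    def atomic_json_dump(data, sigfile):
        # Temp file in the same directory => os.rename() is atomic.
        fd, tmpfile = tempfile.mkstemp(dir=os.path.dirname(sigfile) or ".",
                                       prefix="sigtask.")
        try:
            with os.fdopen(fd, "w", encoding="utf-8") as f:
                json.dump(data, f, sort_keys=True, separators=(",", ":"))
                f.flush()
            os.chmod(tmpfile, 0o664)
            os.rename(tmpfile, sigfile)
        except OSError:
            try:
                os.unlink(tmpfile)
            except OSError:
                pass
            raise

    atomic_json_dump({"task": "do_compile"}, "example.siginfo")
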
@@ -499,18 +474,6 @@ class SignatureGeneratorUniHashMixIn(object):
             self._client = hashserv.create_client(self.server)
         return self._client

-    def reset(self, data):
-        if getattr(self, '_client', None) is not None:
-            self._client.close()
-            self._client = None
-        return super().reset(data)
-
-    def exit(self):
-        if getattr(self, '_client', None) is not None:
-            self._client.close()
-            self._client = None
-        return super().exit()
-
     def get_stampfile_hash(self, tid):
         if tid in self.taskhash:
             # If a unique hash is reported, use it as the stampfile hash. This
@@ -584,8 +547,8 @@ class SignatureGeneratorUniHashMixIn(object):
                     # is much more interesting, so it is reported at debug level 1
                     hashequiv_logger.debug((1, 2)[unihash == taskhash], 'Found unihash %s in place of %s for %s from %s' % (unihash, taskhash, tid, self.server))
                 else:
-                    hashequiv_logger.debug2('No reported unihash for %s:%s from %s' % (tid, taskhash, self.server))
-        except ConnectionError as e:
+                    hashequiv_logger.debug(2, 'No reported unihash for %s:%s from %s' % (tid, taskhash, self.server))
+        except hashserv.client.HashConnectionError as e:
             bb.warn('Error contacting Hash Equivalence Server %s: %s' % (self.server, str(e)))

         self.set_unihash(tid, unihash)
@@ -606,7 +569,7 @@ class SignatureGeneratorUniHashMixIn(object):
         if self.setscenetasks and tid not in self.setscenetasks:
             return

-        # This can happen if locked sigs are in action. Detect and just exit
+        # This can happen if locked sigs are in action. Detect and just abort
         if taskhash != self.taskhash[tid]:
             return

@@ -658,13 +621,13 @@ class SignatureGeneratorUniHashMixIn(object):
                 new_unihash = data['unihash']

                 if new_unihash != unihash:
-                    hashequiv_logger.debug('Task %s unihash changed %s -> %s by server %s' % (taskhash, unihash, new_unihash, self.server))
+                    hashequiv_logger.debug(1, 'Task %s unihash changed %s -> %s by server %s' % (taskhash, unihash, new_unihash, self.server))
                     bb.event.fire(bb.runqueue.taskUniHashUpdate(fn + ':do_' + task, new_unihash), d)
                     self.set_unihash(tid, new_unihash)
                     d.setVar('BB_UNIHASH', new_unihash)
                 else:
-                    hashequiv_logger.debug('Reported task %s as unihash %s to %s' % (taskhash, unihash, self.server))
-        except ConnectionError as e:
+                    hashequiv_logger.debug(1, 'Reported task %s as unihash %s to %s' % (taskhash, unihash, self.server))
+        except hashserv.client.HashConnectionError as e:
             bb.warn('Error contacting Hash Equivalence Server %s: %s' % (self.server, str(e)))
         finally:
             if sigfile:
@@ -704,7 +667,7 @@ class SignatureGeneratorUniHashMixIn(object):
                 # TODO: What to do here?
                 hashequiv_logger.verbose('Task %s unihash reported as unwanted hash %s' % (tid, finalunihash))

-        except ConnectionError as e:
+        except hashserv.client.HashConnectionError as e:
             bb.warn('Error contacting Hash Equivalence Server %s: %s' % (self.server, str(e)))

         return False
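
The three except hunks above are one rename seen from the caller's side: the hash equivalence client used to raise its own hashserv.client.HashConnectionError, while newer clients surface the standard builtin ConnectionError. The caller shape stays the same; a hedged sketch (the client object and its get_unihash() call are assumed from the surrounding code, not defined here):

    def query_unihash(client, method, taskhash, server):
        # Newer BitBake catches the builtin ConnectionError; older releases
        # caught hashserv.client.HashConnectionError instead.
        try:
            return client.get_unihash(method, taskhash)
        except ConnectionError as e:
            print('Error contacting Hash Equivalence Server %s: %s' % (server, str(e)))
            return None
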
@@ -791,7 +754,7 @@ def clean_basepath(basepath):
     if basepath[0] == '/':
         return cleaned

-    if basepath.startswith("mc:") and basepath.count(':') >= 2:
+    if basepath.startswith("mc:"):
         mc, mc_name, basepath = basepath.split(":", 2)
         mc_suffix = ':mc:' + mc_name
     else:
@@ -817,16 +780,6 @@ def clean_basepaths_list(a):
         b.append(clean_basepath(x))
     return b

-# Handled renamed fields
-def handle_renames(data):
-    if 'basewhitelist' in data:
-        data['basehash_ignore_vars'] = data['basewhitelist']
-        del data['basewhitelist']
-    if 'taskwhitelist' in data:
-        data['taskhash_ignore_tasks'] = data['taskwhitelist']
-        del data['taskwhitelist']
-
-
 def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
     output = []

@@ -847,21 +800,20 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
         formatparams.update(values)
         return formatstr.format(**formatparams)

-    with bb.compress.zstd.open(a, "rt", encoding="utf-8", num_threads=1) as f:
-        a_data = json.load(f, object_hook=SetDecoder)
-    with bb.compress.zstd.open(b, "rt", encoding="utf-8", num_threads=1) as f:
-        b_data = json.load(f, object_hook=SetDecoder)
+    with open(a, 'rb') as f:
+        p1 = pickle.Unpickler(f)
+        a_data = p1.load()
+    with open(b, 'rb') as f:
+        p2 = pickle.Unpickler(f)
+        b_data = p2.load()

-    for data in [a_data, b_data]:
-        handle_renames(data)

-    def dict_diff(a, b, ignored_vars=set()):
+    def dict_diff(a, b, whitelist=set()):
         sa = set(a.keys())
         sb = set(b.keys())
         common = sa & sb
         changed = set()
         for i in common:
-            if a[i] != b[i] and i not in ignored_vars:
+            if a[i] != b[i] and i not in whitelist:
                 changed.add(i)
         added = sb - sa
         removed = sa - sb
@@ -869,11 +821,11 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):

     def file_checksums_diff(a, b):
         from collections import Counter

-        # Convert lists back to tuples
-        a = [(f[0], f[1]) for f in a]
-        b = [(f[0], f[1]) for f in b]
-
+        # Handle old siginfo format
+        if isinstance(a, dict):
+            a = [(os.path.basename(f), cs) for f, cs in a.items()]
+        if isinstance(b, dict):
+            b = [(os.path.basename(f), cs) for f, cs in b.items()]
         # Compare lists, ensuring we can handle duplicate filenames if they exist
         removedcount = Counter(a)
         removedcount.subtract(b)
@@ -900,15 +852,15 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
         removed = [x[0] for x in removed]
         return changed, added, removed

-    if 'basehash_ignore_vars' in a_data and a_data['basehash_ignore_vars'] != b_data['basehash_ignore_vars']:
-        output.append(color_format("{color_title}basehash_ignore_vars changed{color_default} from '%s' to '%s'") % (a_data['basehash_ignore_vars'], b_data['basehash_ignore_vars']))
-        if a_data['basehash_ignore_vars'] and b_data['basehash_ignore_vars']:
-            output.append("changed items: %s" % a_data['basehash_ignore_vars'].symmetric_difference(b_data['basehash_ignore_vars']))
+    if 'basewhitelist' in a_data and a_data['basewhitelist'] != b_data['basewhitelist']:
+        output.append(color_format("{color_title}basewhitelist changed{color_default} from '%s' to '%s'") % (a_data['basewhitelist'], b_data['basewhitelist']))
+        if a_data['basewhitelist'] and b_data['basewhitelist']:
+            output.append("changed items: %s" % a_data['basewhitelist'].symmetric_difference(b_data['basewhitelist']))

-    if 'taskhash_ignore_tasks' in a_data and a_data['taskhash_ignore_tasks'] != b_data['taskhash_ignore_tasks']:
-        output.append(color_format("{color_title}taskhash_ignore_tasks changed{color_default} from '%s' to '%s'") % (a_data['taskhash_ignore_tasks'], b_data['taskhash_ignore_tasks']))
-        if a_data['taskhash_ignore_tasks'] and b_data['taskhash_ignore_tasks']:
-            output.append("changed items: %s" % a_data['taskhash_ignore_tasks'].symmetric_difference(b_data['taskhash_ignore_tasks']))
+    if 'taskwhitelist' in a_data and a_data['taskwhitelist'] != b_data['taskwhitelist']:
+        output.append(color_format("{color_title}taskwhitelist changed{color_default} from '%s' to '%s'") % (a_data['taskwhitelist'], b_data['taskwhitelist']))
+        if a_data['taskwhitelist'] and b_data['taskwhitelist']:
+            output.append("changed items: %s" % a_data['taskwhitelist'].symmetric_difference(b_data['taskwhitelist']))

     if a_data['taskdeps'] != b_data['taskdeps']:
         output.append(color_format("{color_title}Task dependencies changed{color_default} from:\n%s\nto:\n%s") % (sorted(a_data['taskdeps']), sorted(b_data['taskdeps'])))
@@ -916,23 +868,23 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
     if a_data['basehash'] != b_data['basehash'] and not collapsed:
         output.append(color_format("{color_title}basehash changed{color_default} from %s to %s") % (a_data['basehash'], b_data['basehash']))

-    changed, added, removed = dict_diff(a_data['gendeps'], b_data['gendeps'], a_data['basehash_ignore_vars'] & b_data['basehash_ignore_vars'])
+    changed, added, removed = dict_diff(a_data['gendeps'], b_data['gendeps'], a_data['basewhitelist'] & b_data['basewhitelist'])
     if changed:
-        for dep in sorted(changed):
+        for dep in changed:
             output.append(color_format("{color_title}List of dependencies for variable %s changed from '{color_default}%s{color_title}' to '{color_default}%s{color_title}'") % (dep, a_data['gendeps'][dep], b_data['gendeps'][dep]))
             if a_data['gendeps'][dep] and b_data['gendeps'][dep]:
                 output.append("changed items: %s" % a_data['gendeps'][dep].symmetric_difference(b_data['gendeps'][dep]))
     if added:
-        for dep in sorted(added):
+        for dep in added:
             output.append(color_format("{color_title}Dependency on variable %s was added") % (dep))
     if removed:
-        for dep in sorted(removed):
+        for dep in removed:
             output.append(color_format("{color_title}Dependency on Variable %s was removed") % (dep))


     changed, added, removed = dict_diff(a_data['varvals'], b_data['varvals'])
     if changed:
-        for dep in sorted(changed):
+        for dep in changed:
             oldval = a_data['varvals'][dep]
             newval = b_data['varvals'][dep]
             if newval and oldval and ('\n' in oldval or '\n' in newval):
@@ -956,9 +908,9 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
             output.append(color_format("{color_title}Variable {var} value changed from '{color_default}{oldval}{color_title}' to '{color_default}{newval}{color_title}'{color_default}", var=dep, oldval=oldval, newval=newval))

     if not 'file_checksum_values' in a_data:
-        a_data['file_checksum_values'] = []
+        a_data['file_checksum_values'] = {}
     if not 'file_checksum_values' in b_data:
-        b_data['file_checksum_values'] = []
+        b_data['file_checksum_values'] = {}

     changed, added, removed = file_checksums_diff(a_data['file_checksum_values'], b_data['file_checksum_values'])
     if changed:
@@ -998,11 +950,11 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):


     if 'runtaskhashes' in a_data and 'runtaskhashes' in b_data:
-        a = clean_basepaths(a_data['runtaskhashes'])
-        b = clean_basepaths(b_data['runtaskhashes'])
+        a = a_data['runtaskhashes']
+        b = b_data['runtaskhashes']
         changed, added, removed = dict_diff(a, b)
         if added:
-            for dep in sorted(added):
+            for dep in added:
                 bdep_found = False
                 if removed:
                     for bdep in removed:
@@ -1010,9 +962,9 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
                         #output.append("Dependency on task %s was replaced by %s with same hash" % (dep, bdep))
                         bdep_found = True
                 if not bdep_found:
-                    output.append(color_format("{color_title}Dependency on task %s was added{color_default} with hash %s") % (dep, b[dep]))
+                    output.append(color_format("{color_title}Dependency on task %s was added{color_default} with hash %s") % (clean_basepath(dep), b[dep]))
         if removed:
-            for dep in sorted(removed):
+            for dep in removed:
                 adep_found = False
                 if added:
                     for adep in added:
@@ -1020,11 +972,11 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
                         #output.append("Dependency on task %s was replaced by %s with same hash" % (adep, dep))
                         adep_found = True
                 if not adep_found:
-                    output.append(color_format("{color_title}Dependency on task %s was removed{color_default} with hash %s") % (dep, a[dep]))
+                    output.append(color_format("{color_title}Dependency on task %s was removed{color_default} with hash %s") % (clean_basepath(dep), a[dep]))
         if changed:
-            for dep in sorted(changed):
+            for dep in changed:
                 if not collapsed:
-                    output.append(color_format("{color_title}Hash for task dependency %s changed{color_default} from %s to %s") % (dep, a[dep], b[dep]))
+                    output.append(color_format("{color_title}Hash for dependent task %s changed{color_default} from %s to %s") % (clean_basepath(dep), a[dep], b[dep]))
                 if callable(recursecb):
                     recout = recursecb(dep, a[dep], b[dep])
                     if recout:
@@ -1034,7 +986,6 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
                         # If a dependent hash changed, might as well print the line above and then defer to the changes in
                         # that hash since in all likelyhood, they're the same changes this task also saw.
                         output = [output[-1]] + recout
-                        break

     a_taint = a_data.get('taint', None)
     b_taint = b_data.get('taint', None)
@@ -1072,8 +1023,6 @@ def calc_taskhash(sigdata):

     for c in sigdata['file_checksum_values']:
         if c[1]:
-            if "./" in c[0]:
-                data = data + c[0]
             data = data + c[1]

     if 'taint' in sigdata:
@@ -1088,33 +1037,32 @@
 def dump_sigfile(a):
     output = []

-    with bb.compress.zstd.open(a, "rt", encoding="utf-8", num_threads=1) as f:
-        a_data = json.load(f, object_hook=SetDecoder)
-
-    handle_renames(a_data)
+    with open(a, 'rb') as f:
+        p1 = pickle.Unpickler(f)
+        a_data = p1.load()

-    output.append("basehash_ignore_vars: %s" % (sorted(a_data['basehash_ignore_vars'])))
+    output.append("basewhitelist: %s" % (a_data['basewhitelist']))

-    output.append("taskhash_ignore_tasks: %s" % (sorted(a_data['taskhash_ignore_tasks'] or [])))
+    output.append("taskwhitelist: %s" % (a_data['taskwhitelist']))

     output.append("Task dependencies: %s" % (sorted(a_data['taskdeps'])))

     output.append("basehash: %s" % (a_data['basehash']))

-    for dep in sorted(a_data['gendeps']):
-        output.append("List of dependencies for variable %s is %s" % (dep, sorted(a_data['gendeps'][dep])))
+    for dep in a_data['gendeps']:
+        output.append("List of dependencies for variable %s is %s" % (dep, a_data['gendeps'][dep]))

-    for dep in sorted(a_data['varvals']):
+    for dep in a_data['varvals']:
         output.append("Variable %s value is %s" % (dep, a_data['varvals'][dep]))

     if 'runtaskdeps' in a_data:
-        output.append("Tasks this task depends on: %s" % (sorted(a_data['runtaskdeps'])))
+        output.append("Tasks this task depends on: %s" % (a_data['runtaskdeps']))

     if 'file_checksum_values' in a_data:
-        output.append("This task depends on the checksums of files: %s" % (sorted(a_data['file_checksum_values'])))
+        output.append("This task depends on the checksums of files: %s" % (a_data['file_checksum_values']))

     if 'runtaskhashes' in a_data:
-        for dep in sorted(a_data['runtaskhashes']):
+        for dep in a_data['runtaskhashes']:
             output.append("Hash for dependent task %s is %s" % (dep, a_data['runtaskhashes'][dep]))

     if 'taint' in a_data:
@@ -39,7 +39,7 @@ class TaskData:
     """
     BitBake Task Data implementation
     """
-    def __init__(self, halt = True, skiplist = None, allowincomplete = False):
+    def __init__(self, abort = True, skiplist = None, allowincomplete = False):
         self.build_targets = {}
         self.run_targets = {}

@@ -57,7 +57,7 @@ class TaskData:
         self.failed_rdeps = []
         self.failed_fns = []

-        self.halt = halt
+        self.abort = abort
         self.allowincomplete = allowincomplete

         self.skiplist = skiplist
@@ -131,7 +131,7 @@ class TaskData:
             for depend in dataCache.deps[fn]:
                 dependids.add(depend)
             self.depids[fn] = list(dependids)
-            logger.debug2("Added dependencies %s for %s", str(dataCache.deps[fn]), fn)
+            logger.debug(2, "Added dependencies %s for %s", str(dataCache.deps[fn]), fn)

         # Work out runtime dependencies
         if not fn in self.rdepids:
@@ -149,9 +149,9 @@ class TaskData:
                     rreclist.append(rdepend)
                     rdependids.add(rdepend)
             if rdependlist:
-                logger.debug2("Added runtime dependencies %s for %s", str(rdependlist), fn)
+                logger.debug(2, "Added runtime dependencies %s for %s", str(rdependlist), fn)
             if rreclist:
-                logger.debug2("Added runtime recommendations %s for %s", str(rreclist), fn)
+                logger.debug(2, "Added runtime recommendations %s for %s", str(rreclist), fn)
             self.rdepids[fn] = list(rdependids)

         for dep in self.depids[fn]:
@@ -328,7 +328,7 @@ class TaskData:
             try:
                 self.add_provider_internal(cfgData, dataCache, item)
             except bb.providers.NoProvider:
-                if self.halt:
+                if self.abort:
                     raise
                 self.remove_buildtarget(item)

@@ -378,7 +378,7 @@ class TaskData:
         for fn in eligible:
             if fn in self.failed_fns:
                 continue
-            logger.debug2("adding %s to satisfy %s", fn, item)
+            logger.debug(2, "adding %s to satisfy %s", fn, item)
             self.add_build_target(fn, item)
             self.add_tasks(fn, dataCache)

@@ -431,7 +431,7 @@ class TaskData:
         for fn in eligible:
             if fn in self.failed_fns:
                 continue
-            logger.debug2("adding '%s' to satisfy runtime '%s'", fn, item)
+            logger.debug(2, "adding '%s' to satisfy runtime '%s'", fn, item)
             self.add_runtime_target(fn, item)
             self.add_tasks(fn, dataCache)

@@ -446,17 +446,17 @@ class TaskData:
             return
         if not missing_list:
             missing_list = []
-        logger.debug("File '%s' is unbuildable, removing...", fn)
+        logger.debug(1, "File '%s' is unbuildable, removing...", fn)
         self.failed_fns.append(fn)
         for target in self.build_targets:
             if fn in self.build_targets[target]:
                 self.build_targets[target].remove(fn)
-                if not self.build_targets[target]:
+                if len(self.build_targets[target]) == 0:
                     self.remove_buildtarget(target, missing_list)
         for target in self.run_targets:
             if fn in self.run_targets[target]:
                 self.run_targets[target].remove(fn)
-                if not self.run_targets[target]:
+                if len(self.run_targets[target]) == 0:
                     self.remove_runtarget(target, missing_list)

     def remove_buildtarget(self, target, missing_list=None):
@@ -479,7 +479,7 @@ class TaskData:
                 fn = tid.rsplit(":",1)[0]
                 self.fail_fn(fn, missing_list)

-        if self.halt and target in self.external_targets:
+        if self.abort and target in self.external_targets:
             logger.error("Required build target '%s' has no buildable providers.\nMissing or unbuildable dependency chain was: %s", target, missing_list)
             raise bb.providers.NoProvider(target)

@@ -516,7 +516,7 @@ class TaskData:
                     self.add_provider_internal(cfgData, dataCache, target)
                     added = added + 1
                 except bb.providers.NoProvider:
-                    if self.halt and target in self.external_targets and not self.allowincomplete:
+                    if self.abort and target in self.external_targets and not self.allowincomplete:
                         raise
                     if not self.allowincomplete:
                         self.remove_buildtarget(target)
@@ -526,7 +526,7 @@ class TaskData:
                     added = added + 1
                 except (bb.providers.NoRProvider, bb.providers.MultipleRProvider):
                     self.remove_runtarget(target)
-            logger.debug("Resolved " + str(added) + " extra dependencies")
+            logger.debug(1, "Resolved " + str(added) + " extra dependencies")
             if added == 0:
                 break
         # self.dump_data()
@@ -549,38 +549,38 @@ class TaskData:
         """
        Dump some debug information on the internal data structures
        """
-        logger.debug3("build_names:")
-        logger.debug3(", ".join(self.build_targets))
+        logger.debug(3, "build_names:")
+        logger.debug(3, ", ".join(self.build_targets))

-        logger.debug3("run_names:")
-        logger.debug3(", ".join(self.run_targets))
+        logger.debug(3, "run_names:")
+        logger.debug(3, ", ".join(self.run_targets))

-        logger.debug3("build_targets:")
+        logger.debug(3, "build_targets:")
         for target in self.build_targets:
             targets = "None"
             if target in self.build_targets:
                 targets = self.build_targets[target]
-            logger.debug3(" %s: %s", target, targets)
+            logger.debug(3, " %s: %s", target, targets)

-        logger.debug3("run_targets:")
+        logger.debug(3, "run_targets:")
         for target in self.run_targets:
             targets = "None"
             if target in self.run_targets:
                 targets = self.run_targets[target]
-            logger.debug3(" %s: %s", target, targets)
+            logger.debug(3, " %s: %s", target, targets)

-        logger.debug3("tasks:")
+        logger.debug(3, "tasks:")
         for tid in self.taskentries:
-            logger.debug3(" %s: %s %s %s",
+            logger.debug(3, " %s: %s %s %s",
                           tid,
                           self.taskentries[tid].idepends,
                           self.taskentries[tid].irdepends,
                           self.taskentries[tid].tdepends)

-        logger.debug3("dependency ids (per fn):")
+        logger.debug(3, "dependency ids (per fn):")
         for fn in self.depids:
-            logger.debug3(" %s: %s", fn, self.depids[fn])
+            logger.debug(3, " %s: %s", fn, self.depids[fn])

-        logger.debug3("runtime dependency ids (per fn):")
+        logger.debug(3, "runtime dependency ids (per fn):")
         for fn in self.rdepids:
-            logger.debug3(" %s: %s", fn, self.rdepids[fn])
+            logger.debug(3, " %s: %s", fn, self.rdepids[fn])
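
Almost all of the taskdata.py hunks are a single mechanical rename, abort -> halt, for the constructor argument and the attribute gating failure handling (plus the logger changes already seen). The contract is unchanged: with the flag set, a missing provider for an externally requested target raises instead of being pruned. A toy model of that contract, not the real TaskData:

    class NoProvider(Exception):
        pass

    class MiniTaskData:
        """Toy model of the halt/abort contract only."""
        def __init__(self, halt=True):   # spelled 'abort' before the rename
            self.halt = halt
            self.external_targets = set()

        def remove_buildtarget(self, target):
            if self.halt and target in self.external_targets:
                raise NoProvider(target)
            # otherwise the target is silently dropped from the build

    td = MiniTaskData(halt=False)
    td.external_targets.add("virtual/kernel")
    td.remove_buildtarget("virtual/kernel")  # pruned without raising
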
@@ -111,9 +111,9 @@ ${D}${libdir}/pkgconfig/*.pc
         self.assertExecs(set(["sed"]))

     def test_parameter_expansion_modifiers(self):
-        # -,+ and : are also valid modifiers for parameter expansion, but are
+        # - and + are also valid modifiers for parameter expansion, but are
         # valid characters in bitbake variable names, so are not included here
-        for i in ('=', '?', '#', '%', '##', '%%'):
+        for i in ('=', ':-', ':=', '?', ':?', ':+', '#', '%', '##', '%%'):
             name = "foo%sbar" % i
             self.parseExpression("${%s}" % name)
             self.assertNotIn(name, self.references)
@@ -318,7 +318,7 @@ d.getVar(a(), False)
             "filename": "example.bb",
         })

-        deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), set(), self.d)
+        deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), self.d)

         self.assertEqual(deps, set(["somevar", "bar", "something", "inexpand", "test", "test2", "a"]))

@@ -365,7 +365,7 @@ esac
         self.d.setVarFlags("FOO", {"func": True})
         self.setEmptyVars(execs)

-        deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), set(), self.d)
+        deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), self.d)

         self.assertEqual(deps, set(["somevar", "inverted"] + execs))

@@ -375,7 +375,7 @@ esac
         self.d.setVar("FOO", "foo=oe_libinstall; eval $foo")
         self.d.setVarFlag("FOO", "vardeps", "oe_libinstall")

-        deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), set(), self.d)
+        deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), self.d)

         self.assertEqual(deps, set(["oe_libinstall"]))

@@ -384,7 +384,7 @@ esac
         self.d.setVar("FOO", "foo=oe_libinstall; eval $foo")
         self.d.setVarFlag("FOO", "vardeps", "${@'oe_libinstall'}")

-        deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), set(), self.d)
+        deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), self.d)

         self.assertEqual(deps, set(["oe_libinstall"]))

@@ -399,7 +399,7 @@ esac
         # Check dependencies
         self.d.setVar('ANOTHERVAR', expr)
         self.d.setVar('TESTVAR', 'anothervalue testval testval2')
-        deps, values = bb.data.build_dependencies("ANOTHERVAR", set(self.d.keys()), set(), set(), set(), self.d)
+        deps, values = bb.data.build_dependencies("ANOTHERVAR", set(self.d.keys()), set(), set(), self.d)
         self.assertEqual(sorted(values.splitlines()),
                          sorted([expr,
                                  'TESTVAR{anothervalue} = Set',
@@ -412,24 +412,6 @@ esac
         # Check final value
         self.assertEqual(self.d.getVar('ANOTHERVAR').split(), ['anothervalue', 'yetanothervalue', 'lastone'])

-    def test_contains_vardeps_excluded(self):
-        # Check the ignored_vars option to build_dependencies is handled by contains functionality
-        varval = '${TESTVAR2} ${@bb.utils.filter("TESTVAR", "somevalue anothervalue", d)}'
-        self.d.setVar('ANOTHERVAR', varval)
-        self.d.setVar('TESTVAR', 'anothervalue testval testval2')
-        self.d.setVar('TESTVAR2', 'testval3')
-        deps, values = bb.data.build_dependencies("ANOTHERVAR", set(self.d.keys()), set(), set(), set(["TESTVAR"]), self.d)
-        self.assertEqual(sorted(values.splitlines()), sorted([varval]))
-        self.assertEqual(deps, set(["TESTVAR2"]))
-        self.assertEqual(self.d.getVar('ANOTHERVAR').split(), ['testval3', 'anothervalue'])
-
-        # Check the vardepsexclude flag is handled by contains functionality
-        self.d.setVarFlag('ANOTHERVAR', 'vardepsexclude', 'TESTVAR')
-        deps, values = bb.data.build_dependencies("ANOTHERVAR", set(self.d.keys()), set(), set(), set(), self.d)
-        self.assertEqual(sorted(values.splitlines()), sorted([varval]))
-        self.assertEqual(deps, set(["TESTVAR2"]))
-        self.assertEqual(self.d.getVar('ANOTHERVAR').split(), ['testval3', 'anothervalue'])
-
     #Currently no wildcard support
     #def test_vardeps_wildcards(self):
     #    self.d.setVar("oe_libinstall", "echo test")
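
Every test hunk in this file reflects one signature change: newer bb.data.build_dependencies() takes an extra set of variables to ignore (exercised by the removed test_contains_vardeps_excluded), so existing call sites gained a fifth positional set(). A hedged compatibility shim; the parameter meanings are inferred from the tests, not from upstream documentation:

    import inspect
    import bb.data

    def build_dependencies_compat(key, keys, d, ignored_vars=frozenset()):
        # Newer signature: (key, keys, shelldeps, varflagsexcl, ignored_vars, d)
        # Older signature: (key, keys, shelldeps, varflagsexcl, d)
        nparams = len(inspect.signature(bb.data.build_dependencies).parameters)
        if nparams >= 6:
            return bb.data.build_dependencies(key, keys, set(), set(),
                                              set(ignored_vars), d)
        return bb.data.build_dependencies(key, keys, set(), set(), d)
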
@@ -31,7 +31,7 @@ class ColorCodeTests(unittest.TestCase):
     def setUp(self):
         self.d = bb.data.init()
         self._progress_watcher = ProgressWatcher()
-        bb.event.register("bb.build.TaskProgress", self._progress_watcher.handle_event, data=self.d)
+        bb.event.register("bb.build.TaskProgress", self._progress_watcher.handle_event)

     def tearDown(self):
         bb.event.remove("bb.build.TaskProgress", None)
@@ -1,100 +0,0 @@
-#
-# Copyright BitBake Contributors
-#
-# SPDX-License-Identifier: GPL-2.0-only
-#
-
-from pathlib import Path
-import bb.compress.lz4
-import bb.compress.zstd
-import contextlib
-import os
-import shutil
-import tempfile
-import unittest
-import subprocess
-
-
-class CompressionTests(object):
-    def setUp(self):
-        self._t = tempfile.TemporaryDirectory()
-        self.tmpdir = Path(self._t.name)
-        self.addCleanup(self._t.cleanup)
-
-    def _file_helper(self, mode_suffix, data):
-        tmp_file = self.tmpdir / "compressed"
-
-        with self.do_open(tmp_file, mode="w" + mode_suffix) as f:
-            f.write(data)
-
-        with self.do_open(tmp_file, mode="r" + mode_suffix) as f:
-            read_data = f.read()
-
-        self.assertEqual(read_data, data)
-
-    def test_text_file(self):
-        self._file_helper("t", "Hello")
-
-    def test_binary_file(self):
-        self._file_helper("b", "Hello".encode("utf-8"))
-
-    def _pipe_helper(self, mode_suffix, data):
-        rfd, wfd = os.pipe()
-        with open(rfd, "rb") as r, open(wfd, "wb") as w:
-            with self.do_open(r, mode="r" + mode_suffix) as decompress:
-                with self.do_open(w, mode="w" + mode_suffix) as compress:
-                    compress.write(data)
-                read_data = decompress.read()
-
-        self.assertEqual(read_data, data)
-
-    def test_text_pipe(self):
-        self._pipe_helper("t", "Hello")
-
-    def test_binary_pipe(self):
-        self._pipe_helper("b", "Hello".encode("utf-8"))
-
-    def test_bad_decompress(self):
-        tmp_file = self.tmpdir / "compressed"
-        with tmp_file.open("wb") as f:
-            f.write(b"\x00")
-
-        with self.assertRaises(OSError):
-            with self.do_open(tmp_file, mode="rb", stderr=subprocess.DEVNULL) as f:
-                data = f.read()
-
-
-class LZ4Tests(CompressionTests, unittest.TestCase):
-    def setUp(self):
-        if shutil.which("lz4c") is None:
-            self.skipTest("'lz4c' not found")
-        super().setUp()
-
-    @contextlib.contextmanager
-    def do_open(self, *args, **kwargs):
-        with bb.compress.lz4.open(*args, **kwargs) as f:
-            yield f
-
-
-class ZStdTests(CompressionTests, unittest.TestCase):
-    def setUp(self):
-        if shutil.which("zstd") is None:
-            self.skipTest("'zstd' not found")
-        super().setUp()
-
-    @contextlib.contextmanager
-    def do_open(self, *args, **kwargs):
-        with bb.compress.zstd.open(*args, **kwargs) as f:
-            yield f
-
-
-class PZStdTests(CompressionTests, unittest.TestCase):
-    def setUp(self):
-        if shutil.which("pzstd") is None:
-            self.skipTest("'pzstd' not found")
-        super().setUp()
-
-    @contextlib.contextmanager
-    def do_open(self, *args, **kwargs):
-        with bb.compress.zstd.open(*args, num_threads=2, **kwargs) as f:
-            yield f
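
The deleted tests/compression.py exists only on the newer side; it covers the bb.compress.lz4 and bb.compress.zstd file-object wrappers that the JSON siginfo code earlier in this diff depends on. Their basic round-trip usage, following the deleted tests themselves (requires the zstd tool on PATH):

    import bb.compress.zstd

    # Text-mode round trip through a zstd-compressed file.
    with bb.compress.zstd.open("example.zst", mode="wt", encoding="utf-8") as f:
        f.write("Hello")

    with bb.compress.zstd.open("example.zst", mode="rt", encoding="utf-8") as f:
        assert f.read() == "Hello"
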
@@ -1,8 +1,6 @@
 #
 # BitBake Tests for cooker.py
 #
-# Copyright BitBake Contributors
-#
 # SPDX-License-Identifier: GPL-2.0-only
 #

@@ -245,35 +245,35 @@ class TestConcatOverride(unittest.TestCase):

     def test_prepend(self):
         self.d.setVar("TEST", "${VAL}")
-        self.d.setVar("TEST:prepend", "${FOO}:")
+        self.d.setVar("TEST_prepend", "${FOO}:")
         self.assertEqual(self.d.getVar("TEST"), "foo:val")

     def test_append(self):
         self.d.setVar("TEST", "${VAL}")
-        self.d.setVar("TEST:append", ":${BAR}")
+        self.d.setVar("TEST_append", ":${BAR}")
         self.assertEqual(self.d.getVar("TEST"), "val:bar")

     def test_multiple_append(self):
         self.d.setVar("TEST", "${VAL}")
-        self.d.setVar("TEST:prepend", "${FOO}:")
-        self.d.setVar("TEST:append", ":val2")
-        self.d.setVar("TEST:append", ":${BAR}")
+        self.d.setVar("TEST_prepend", "${FOO}:")
+        self.d.setVar("TEST_append", ":val2")
+        self.d.setVar("TEST_append", ":${BAR}")
         self.assertEqual(self.d.getVar("TEST"), "foo:val:val2:bar")

     def test_append_unset(self):
-        self.d.setVar("TEST:prepend", "${FOO}:")
-        self.d.setVar("TEST:append", ":val2")
-        self.d.setVar("TEST:append", ":${BAR}")
+        self.d.setVar("TEST_prepend", "${FOO}:")
+        self.d.setVar("TEST_append", ":val2")
+        self.d.setVar("TEST_append", ":${BAR}")
         self.assertEqual(self.d.getVar("TEST"), "foo::val2:bar")

     def test_remove(self):
         self.d.setVar("TEST", "${VAL} ${BAR}")
-        self.d.setVar("TEST:remove", "val")
+        self.d.setVar("TEST_remove", "val")
         self.assertEqual(self.d.getVar("TEST"), " bar")

     def test_remove_cleared(self):
         self.d.setVar("TEST", "${VAL} ${BAR}")
-        self.d.setVar("TEST:remove", "val")
+        self.d.setVar("TEST_remove", "val")
         self.d.setVar("TEST", "${VAL} ${BAR}")
         self.assertEqual(self.d.getVar("TEST"), "val bar")
@@ -281,42 +281,42 @@ class TestConcatOverride(unittest.TestCase):
     # (including that whitespace is preserved)
     def test_remove_inactive_override(self):
         self.d.setVar("TEST", "${VAL} ${BAR} 123")
-        self.d.setVar("TEST:remove:inactiveoverride", "val")
+        self.d.setVar("TEST_remove_inactiveoverride", "val")
         self.assertEqual(self.d.getVar("TEST"), "val bar 123")

     def test_doubleref_remove(self):
         self.d.setVar("TEST", "${VAL} ${BAR}")
-        self.d.setVar("TEST:remove", "val")
+        self.d.setVar("TEST_remove", "val")
         self.d.setVar("TEST_TEST", "${TEST} ${TEST}")
         self.assertEqual(self.d.getVar("TEST_TEST"), " bar bar")

     def test_empty_remove(self):
         self.d.setVar("TEST", "")
-        self.d.setVar("TEST:remove", "val")
+        self.d.setVar("TEST_remove", "val")
         self.assertEqual(self.d.getVar("TEST"), "")

     def test_remove_expansion(self):
         self.d.setVar("BAR", "Z")
         self.d.setVar("TEST", "${BAR}/X Y")
-        self.d.setVar("TEST:remove", "${BAR}/X")
+        self.d.setVar("TEST_remove", "${BAR}/X")
         self.assertEqual(self.d.getVar("TEST"), " Y")

     def test_remove_expansion_items(self):
         self.d.setVar("TEST", "A B C D")
         self.d.setVar("BAR", "B D")
-        self.d.setVar("TEST:remove", "${BAR}")
+        self.d.setVar("TEST_remove", "${BAR}")
         self.assertEqual(self.d.getVar("TEST"), "A C ")

     def test_remove_preserve_whitespace(self):
         # When the removal isn't active, the original value should be preserved
         self.d.setVar("TEST", " A B")
-        self.d.setVar("TEST:remove", "C")
+        self.d.setVar("TEST_remove", "C")
         self.assertEqual(self.d.getVar("TEST"), " A B")

     def test_remove_preserve_whitespace2(self):
         # When the removal is active preserve the whitespace
         self.d.setVar("TEST", " A B")
-        self.d.setVar("TEST:remove", "B")
+        self.d.setVar("TEST_remove", "B")
         self.assertEqual(self.d.getVar("TEST"), " A ")

 class TestOverrides(unittest.TestCase):
@@ -329,70 +329,70 @@ class TestOverrides(unittest.TestCase):
         self.assertEqual(self.d.getVar("TEST"), "testvalue")

     def test_one_override(self):
-        self.d.setVar("TEST:bar", "testvalue2")
+        self.d.setVar("TEST_bar", "testvalue2")
         self.assertEqual(self.d.getVar("TEST"), "testvalue2")

     def test_one_override_unset(self):
-        self.d.setVar("TEST2:bar", "testvalue2")
+        self.d.setVar("TEST2_bar", "testvalue2")

         self.assertEqual(self.d.getVar("TEST2"), "testvalue2")
-        self.assertCountEqual(list(self.d.keys()), ['TEST', 'TEST2', 'OVERRIDES', 'TEST2:bar'])
+        self.assertCountEqual(list(self.d.keys()), ['TEST', 'TEST2', 'OVERRIDES', 'TEST2_bar'])

     def test_multiple_override(self):
-        self.d.setVar("TEST:bar", "testvalue2")
-        self.d.setVar("TEST:local", "testvalue3")
-        self.d.setVar("TEST:foo", "testvalue4")
+        self.d.setVar("TEST_bar", "testvalue2")
+        self.d.setVar("TEST_local", "testvalue3")
+        self.d.setVar("TEST_foo", "testvalue4")
         self.assertEqual(self.d.getVar("TEST"), "testvalue3")
-        self.assertCountEqual(list(self.d.keys()), ['TEST', 'TEST:foo', 'OVERRIDES', 'TEST:bar', 'TEST:local'])
+        self.assertCountEqual(list(self.d.keys()), ['TEST', 'TEST_foo', 'OVERRIDES', 'TEST_bar', 'TEST_local'])

     def test_multiple_combined_overrides(self):
-        self.d.setVar("TEST:local:foo:bar", "testvalue3")
+        self.d.setVar("TEST_local_foo_bar", "testvalue3")
         self.assertEqual(self.d.getVar("TEST"), "testvalue3")

     def test_multiple_overrides_unset(self):
-        self.d.setVar("TEST2:local:foo:bar", "testvalue3")
+        self.d.setVar("TEST2_local_foo_bar", "testvalue3")
         self.assertEqual(self.d.getVar("TEST2"), "testvalue3")

     def test_keyexpansion_override(self):
         self.d.setVar("LOCAL", "local")
-        self.d.setVar("TEST:bar", "testvalue2")
-        self.d.setVar("TEST:${LOCAL}", "testvalue3")
-        self.d.setVar("TEST:foo", "testvalue4")
+        self.d.setVar("TEST_bar", "testvalue2")
+        self.d.setVar("TEST_${LOCAL}", "testvalue3")
+        self.d.setVar("TEST_foo", "testvalue4")
         bb.data.expandKeys(self.d)
         self.assertEqual(self.d.getVar("TEST"), "testvalue3")

     def test_rename_override(self):
-        self.d.setVar("ALTERNATIVE:ncurses-tools:class-target", "a")
+        self.d.setVar("ALTERNATIVE_ncurses-tools_class-target", "a")
         self.d.setVar("OVERRIDES", "class-target")
-        self.d.renameVar("ALTERNATIVE:ncurses-tools", "ALTERNATIVE:lib32-ncurses-tools")
-        self.assertEqual(self.d.getVar("ALTERNATIVE:lib32-ncurses-tools"), "a")
+        self.d.renameVar("ALTERNATIVE_ncurses-tools", "ALTERNATIVE_lib32-ncurses-tools")
+        self.assertEqual(self.d.getVar("ALTERNATIVE_lib32-ncurses-tools"), "a")

     def test_underscore_override(self):
-        self.d.setVar("TEST:bar", "testvalue2")
-        self.d.setVar("TEST:some_val", "testvalue3")
-        self.d.setVar("TEST:foo", "testvalue4")
+        self.d.setVar("TEST_bar", "testvalue2")
+        self.d.setVar("TEST_some_val", "testvalue3")
+        self.d.setVar("TEST_foo", "testvalue4")
         self.d.setVar("OVERRIDES", "foo:bar:some_val")
         self.assertEqual(self.d.getVar("TEST"), "testvalue3")

     def test_remove_with_override(self):
-        self.d.setVar("TEST:bar", "testvalue2")
-        self.d.setVar("TEST:some_val", "testvalue3 testvalue5")
-        self.d.setVar("TEST:some_val:remove", "testvalue3")
-        self.d.setVar("TEST:foo", "testvalue4")
+        self.d.setVar("TEST_bar", "testvalue2")
+        self.d.setVar("TEST_some_val", "testvalue3 testvalue5")
+        self.d.setVar("TEST_some_val_remove", "testvalue3")
+        self.d.setVar("TEST_foo", "testvalue4")
         self.d.setVar("OVERRIDES", "foo:bar:some_val")
         self.assertEqual(self.d.getVar("TEST"), " testvalue5")

     def test_append_and_override_1(self):
-        self.d.setVar("TEST:append", "testvalue2")
-        self.d.setVar("TEST:bar", "testvalue3")
+        self.d.setVar("TEST_append", "testvalue2")
+        self.d.setVar("TEST_bar", "testvalue3")
         self.assertEqual(self.d.getVar("TEST"), "testvalue3testvalue2")

     def test_append_and_override_2(self):
-        self.d.setVar("TEST:append:bar", "testvalue2")
+        self.d.setVar("TEST_append_bar", "testvalue2")
         self.assertEqual(self.d.getVar("TEST"), "testvaluetestvalue2")

     def test_append_and_override_3(self):
-        self.d.setVar("TEST:bar:append", "testvalue2")
+        self.d.setVar("TEST_bar_append", "testvalue2")
         self.assertEqual(self.d.getVar("TEST"), "testvalue2")

     # Test an override with _<numeric> in it based on a real world OE issue
@@ -400,16 +400,11 @@ class TestOverrides(unittest.TestCase):
         self.d.setVar("TARGET_ARCH", "x86_64")
         self.d.setVar("PN", "test-${TARGET_ARCH}")
         self.d.setVar("VERSION", "1")
-        self.d.setVar("VERSION:pn-test-${TARGET_ARCH}", "2")
+        self.d.setVar("VERSION_pn-test-${TARGET_ARCH}", "2")
         self.d.setVar("OVERRIDES", "pn-${PN}")
         bb.data.expandKeys(self.d)
         self.assertEqual(self.d.getVar("VERSION"), "2")

-    def test_append_and_unused_override(self):
-        # Had a bug where an unused override append could return "" instead of None
-        self.d.setVar("BAR:append:unusedoverride", "testvalue2")
-        self.assertEqual(self.d.getVar("BAR"), None)
-
 class TestKeyExpansion(unittest.TestCase):
     def setUp(self):
         self.d = bb.data.init()
@@ -503,7 +498,7 @@ class TaskHash(unittest.TestCase):
         d.setVar("VAR", "val")
         # Adding an inactive removal shouldn't change the hash
         d.setVar("BAR", "notbar")
-        d.setVar("MYCOMMAND:remove", "${BAR}")
+        d.setVar("MYCOMMAND_remove", "${BAR}")
         nexthash = gettask_bashhash("mytask", d)
         self.assertEqual(orighash, nexthash)
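
This last file is one migration end to end: newer BitBake uses ':' as the override separator (TEST:append, TEST:remove, TEST:bar), where the 3.2-era release used '_' (TEST_append, TEST_bar). The assertions are untouched because only the spelling of the key changed, not the datastore semantics. On a newer datastore the tested behaviour looks like this:

    import bb.data

    d = bb.data.init()
    d.setVar("TEST", "val")

    # Newer override syntax (':' separator); the older equivalent was
    # d.setVar("TEST_append", " extra")
    d.setVar("TEST:append", " extra")

    assert d.getVar("TEST") == "val extra"
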
Some files were not shown because too many files have changed in this diff.