Mirror of https://git.yoctoproject.org/poky, synced 2026-02-20 16:39:40 +01:00

Compare commits: yocto-4.0...yocto-3.3 (130 commits)
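To inspect the same range locally, one option is to clone the mirror and compare the two tags with plain git. The commands below are only a sketch; the URL and tag names come from the header above, everything else is standard git usage:

```sh
# Clone the poky mirror and look at the yocto-4.0...yocto-3.3 range locally.
git clone https://git.yoctoproject.org/poky
cd poky
git log --oneline yocto-4.0...yocto-3.3   # commits that differ between the two tags
git diff yocto-4.0 yocto-3.3              # file-level changes between the two tags
```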
| SHA1 |
|---|
| 05a8aad57c |
| 11e25e2fec |
| 96e8fcd6a2 |
| d3fd4f6154 |
| 2ab37d09cf |
| 32f185b0cf |
| 81aabb718e |
| 7ef7ee5247 |
| d39350cf1f |
| 6b5baa4a29 |
| 900aae50c9 |
| 337fa11044 |
| 3b39e2801d |
| 2c96b17e0d |
| 4634aca5c2 |
| 2f897b26bb |
| 6ad4febbcd |
| 6f1d6448c5 |
| 641848ab75 |
| cc5c08c1bb |
| eb69aa8d9c |
| d3865958fa |
| cc40e858d2 |
| 12e069303c |
| 1cbca26e04 |
| e18e4cb59f |
| 7843c046ad |
| 75ea23a238 |
| 34161a0513 |
| a0ce4f3bba |
| b75787562e |
| 44de79f8f5 |
| 77de7815a7 |
| 22f2abe250 |
| 3a7b618eee |
| 9d41c4e5f9 |
| 93583d3ac9 |
| 3e1d3be9ef |
| 01f537542e |
| 230bfb1399 |
| b150da374a |
| e5ab48e8b5 |
| b35658c7fc |
| e4068c5359 |
| 7e61f58125 |
| 865d46981a |
| d06606e6dd |
| 344481289b |
| ba3ea82f68 |
| 58cbdaecf7 |
| 8aad4c243a |
| 2d5f42e6fa |
| 70fb6c0d86 |
| 49034c5cda |
| 8197c42f57 |
| 1ecf10dc33 |
| 509dd69bad |
| 81a90094bb |
| e7aece6988 |
| 5cde220103 |
| 80c76a82ec |
| aa8618b624 |
| 5c1a29e6de |
| 381aebe82f |
| 4de4010f3f |
| a1f77137d2 |
| 686f914733 |
| 53390d2261 |
| 7db4ec5372 |
| 6821ee2eed |
| 461c4d5821 |
| de631334cc |
| a1970b56f9 |
| 5bfd931c2e |
| 043817eabd |
| 855a1596f5 |
| a2bf564b5f |
| c20e75d07d |
| 6708092ea4 |
| 8921498bed |
| 9a3439012d |
| 4577cbed9a |
| c55a25d272 |
| 1fa9d0e19b |
| efcded9727 |
| 4c15652bbd |
| 47a3f7cdd5 |
| 3db9d8dbb7 |
| 3c1f5940fb |
| d881748096 |
| 2afb0cb4df |
| f2c9dd561b |
| 3b0bc8961e |
| 9e33e0a215 |
| ee6c3be627 |
| 2ddb6be223 |
| 80257e06f5 |
| 7c25e4c53c |
| f5b9a8206d |
| 4fb8f152ee |
| f5d3d43422 |
| c135ee79f6 |
| d2f044b182 |
| d652d25f2a |
| d390ebc18b |
| 692b2ffa92 |
| 1a3ffc03d5 |
| b4bbdbd6dd |
| ddcdddeadf |
| 04b23cf21e |
| 04aa3aa0a9 |
| b354987235 |
| 98af459189 |
| e5f3493efd |
| 0cafbb003a |
| 852f67148e |
| a7c05409a1 |
| 8494d268f7 |
| 73c71f3d77 |
| 9b9ecfdca9 |
| 8248f857c0 |
| 04267a31cc |
| 4783bb12d2 |
| 5db4f25ece |
| 762fe99172 |
| 464472d851 |
| 2742e73760 |
| 3f8758b54d |
| b7cba05f82 |
| a49e5c0f4f |
@@ -1,72 +0,0 @@
OpenEmbedded-Core and Yocto Project Maintainer Information
==========================================================

OpenEmbedded and Yocto Project work jointly together to maintain the metadata,
layers, tools and sub-projects that make up their ecosystems.

The projects operate through collaborative development. This currently takes
place on mailing lists for many components as the "pull request on github"
workflow works well for single or small numbers of maintainers but we have
a large number, all with different specialisms and benefit from the mailing
list review process. Changes therefore undergo peer review through mailing
lists in many cases.

This file aims to acknowledge people with specific skills/knowledge/interest
both to recognise their contributions but also empower them to help lead and
curate those components. Where we have people with specialist knowledge in
particular areas, during review patches/feedback from these people in these
areas would generally carry weight.

This file is maintained in OE-Core but may refer to components that are separate
to it if that makes sense in the context of maintainership. The README of specific
layers and components should ultimately be definitive about the patch process and
maintainership for the component.

Recipe Maintainers
------------------

See meta/conf/distro/include/maintainers.inc

Component/Subsystem Maintainers
-------------------------------

* Kernel (inc. linux-yocto, perf): Bruce Ashfield
* Reproducible Builds: Joshua Watt
* Toaster: David Reyna
* Hash-Equivalence: Joshua Watt
* Recipe upgrade infrastructure: Alex Kanavin
* Toolchain: Khem Raj
* ptest-runner: Aníbal Limón
* opkg: Alex Stewart
* devtool: Saul Wold
* eSDK: Saul Wold
* overlayfs: Vyacheslav Yurkov

Maintainers needed
------------------

* Pseudo
* Layer Index
* recipetool
* QA framework/automated testing
* error reporting system/web UI
* wic
* Patchwork
* Patchtest
* Prelink-cross
* Matchbox
* Sato
* Autobuilder

Layer Maintainers needed
------------------------

* meta-gplv2 (ideally new strategy but active maintainer welcome)

Shadow maintainers/development needed
--------------------------------------

* toaster
* bitbake

README.OE-Core (Normal file, 29 lines)
@@ -0,0 +1,29 @@
OpenEmbedded-Core
=================

OpenEmbedded-Core is a layer containing the core metadata for current versions
of OpenEmbedded. It is distro-less (can build a functional image with
DISTRO = "nodistro") and contains only emulated machine support.

For information about OpenEmbedded, see the OpenEmbedded website:
http://www.openembedded.org/

The Yocto Project has extensive documentation about OE including a reference manual
which can be found at:
http://yoctoproject.org/documentation


Contributing
------------

Please refer to
http://www.openembedded.org/wiki/How_to_submit_a_patch_to_OpenEmbedded
for guidelines on how to submit patches.

Mailing list:

http://lists.openembedded.org/mailman/listinfo/openembedded-core

Source code:

http://git.openembedded.org/openembedded-core/
@@ -1,29 +0,0 @@
OpenEmbedded-Core
=================

OpenEmbedded-Core is a layer containing the core metadata for current versions
of OpenEmbedded. It is distro-less (can build a functional image with
DISTRO = "nodistro") and contains only emulated machine support.

For information about OpenEmbedded, see the OpenEmbedded website:
https://www.openembedded.org/

The Yocto Project has extensive documentation about OE including a reference manual
which can be found at:
https://docs.yoctoproject.org/


Contributing
------------

Please refer to
https://www.openembedded.org/wiki/How_to_submit_a_patch_to_OpenEmbedded
for guidelines on how to submit patches.

Mailing list:

https://lists.openembedded.org/g/openembedded-core

Source code:

https://git.openembedded.org/openembedded-core/

README.hardware (Symbolic link, 1 line)
@@ -0,0 +1 @@
meta-yocto-bsp/README.hardware
@@ -1 +0,0 @@
meta-yocto-bsp/README.hardware.md

README.poky (Symbolic link, 1 line)
@@ -0,0 +1 @@
meta-poky/README.poky
@@ -1 +0,0 @@
meta-poky/README.poky.md

SECURITY.md (24 lines)
@@ -1,24 +0,0 @@
How to Report a Potential Vulnerability?
========================================

If you would like to report a public issue (for example, one with a released
CVE number), please report it using the
[https://bugzilla.yoctoproject.org/enter_bug.cgi?product=Security Security Bugzilla].
If you have a patch ready, submit it following the same procedure as any other
patch as described in README.md.

If you are dealing with a not-yet released or urgent issue, please send a
message to security AT yoctoproject DOT org, including as many details as
possible: the layer or software module affected, the recipe and its version,
and any example code, if available.

Branches maintained with security fixes
---------------------------------------

See [https://wiki.yoctoproject.org/wiki/Stable_Release_and_LTS Stable release and LTS]
for detailed info regarding the policies and maintenance of Stable branches.

The [https://wiki.yoctoproject.org/wiki/Releases Release page] contains a list of all
releases of the Yocto Project. Versions in grey are no longer actively maintained with
security patches, but well-tested patches may still be accepted for them for
significant issues.

@@ -7,7 +7,7 @@ One of BitBake's main users, OpenEmbedded, takes this core and builds embedded L
stacks using a task-oriented approach.

For information about Bitbake, see the OpenEmbedded website:
https://www.openembedded.org/
http://www.openembedded.org/

Bitbake plain documentation can be found under the doc directory or its integrated
html version at the Yocto Project website:
@@ -17,7 +17,7 @@ Contributing
------------

Please refer to
https://www.openembedded.org/wiki/How_to_submit_a_patch_to_OpenEmbedded
http://www.openembedded.org/wiki/How_to_submit_a_patch_to_OpenEmbedded
for guidelines on how to submit patches, just note that the latter documentation is intended
for OpenEmbedded (and its core) not bitbake patches (bitbake-devel@lists.openembedded.org)
but in general main guidelines apply. Once the commit(s) have been created, the way to send
@@ -28,16 +28,8 @@ branch, type:

Mailing list:

https://lists.openembedded.org/g/bitbake-devel
http://lists.openembedded.org/mailman/listinfo/bitbake-devel

Source code:

https://git.openembedded.org/bitbake/

Testing:

Bitbake has a testsuite located in lib/bb/tests/ whichs aim to try and prevent regressions.
You can run this with "bitbake-selftest". In particular the fetcher is well covered since
it has so many corner cases. The datastore has many tests too. Testing with the testsuite is
recommended before submitting patches, particularly to the fetcher and datastore. We also
appreciate new test cases and may require them for more obscure issues.
http://git.openembedded.org/bitbake/

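The testing note in the README hunk above refers to BitBake's own test suite. As a rough sketch, and assuming a checkout with bitbake/bin on PATH, running it looks like this (the module name is one of those listed later in the bitbake-selftest hunk; this example is not part of the diff itself):

```sh
# Run the whole BitBake test suite, then a single module from the tests list.
bitbake-selftest
bitbake-selftest bb.tests.utils
```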
@@ -1,24 +0,0 @@
How to Report a Potential Vulnerability?
========================================

If you would like to report a public issue (for example, one with a released
CVE number), please report it using the
[https://bugzilla.yoctoproject.org/enter_bug.cgi?product=Security Security Bugzilla].
If you have a patch ready, submit it following the same procedure as any other
patch as described in README.md.

If you are dealing with a not-yet released or urgent issue, please send a
message to security AT yoctoproject DOT org, including as many details as
possible: the layer or software module affected, the recipe and its version,
and any example code, if available.

Branches maintained with security fixes
---------------------------------------

See [https://wiki.yoctoproject.org/wiki/Stable_Release_and_LTS Stable release and LTS]
for detailed info regarding the policies and maintenance of Stable branches.

The [https://wiki.yoctoproject.org/wiki/Releases Release page] contains a list of all
releases of the Yocto Project. Versions in grey are no longer actively maintained with
security patches, but well-tested patches may still be accepted for them for
significant issues.

@@ -12,8 +12,6 @@

import os
import sys
import warnings
warnings.simplefilter("default")

sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)),
'lib'))
@@ -25,9 +23,10 @@ except RuntimeError as exc:
from bb import cookerdata
from bb.main import bitbake_main, BitBakeConfigParameters, BBMainException

bb.utils.check_system_locale()
if sys.getfilesystemencoding() != "utf-8":
sys.exit("Please use a locale setting which supports UTF-8 (such as LANG=en_US.UTF-8).\nPython can't change the filesystem locale after loading so we need a UTF-8 when Python starts or things won't work.")

__version__ = "2.0.0"
__version__ = "1.50.0"

if __name__ == "__main__":
if __version__ != bb.__version__:

@@ -11,8 +11,6 @@

import os
import sys
import warnings

warnings.simplefilter("default")
import argparse
import logging
import pickle
@@ -28,7 +26,6 @@ logger = bb.msg.logger_create(myname)

is_dump = myname == 'bitbake-dumpsig'


def find_siginfo(tinfoil, pn, taskname, sigs=None):
result = None
tinfoil.set_event_mask(['bb.event.FindSigInfoResult',
@@ -54,7 +51,6 @@ def find_siginfo(tinfoil, pn, taskname, sigs=None):
sys.exit(2)
return result


def find_siginfo_task(bbhandler, pn, taskname, sig1=None, sig2=None):
""" Find the most recent signature files for the specified PN/task """

@@ -63,13 +59,13 @@ def find_siginfo_task(bbhandler, pn, taskname, sig1=None, sig2=None):

if sig1 and sig2:
sigfiles = find_siginfo(bbhandler, pn, taskname, [sig1, sig2])
if not sigfiles:
if len(sigfiles) == 0:
logger.error('No sigdata files found matching %s %s matching either %s or %s' % (pn, taskname, sig1, sig2))
sys.exit(1)
elif sig1 not in sigfiles:
elif not sig1 in sigfiles:
logger.error('No sigdata files found matching %s %s with signature %s' % (pn, taskname, sig1))
sys.exit(1)
elif sig2 not in sigfiles:
elif not sig2 in sigfiles:
logger.error('No sigdata files found matching %s %s with signature %s' % (pn, taskname, sig2))
sys.exit(1)
latestfiles = [sigfiles[sig1], sigfiles[sig2]]
@@ -89,11 +85,11 @@ def recursecb(key, hash1, hash2):
hashfiles = find_siginfo(tinfoil, key, None, hashes)

recout = []
if not hashfiles:
if len(hashfiles) == 0:
recout.append("Unable to find matching sigdata for %s with hashes %s or %s" % (key, hash1, hash2))
elif hash1 not in hashfiles:
elif not hash1 in hashfiles:
recout.append("Unable to find matching sigdata for %s with hash %s" % (key, hash1))
elif hash2 not in hashfiles:
elif not hash2 in hashfiles:
recout.append("Unable to find matching sigdata for %s with hash %s" % (key, hash2))
else:
out2 = bb.siggen.compare_sigfiles(hashfiles[hash1], hashfiles[hash2], recursecb, color=color)
@@ -113,36 +109,36 @@ parser.add_argument('-D', '--debug',

if is_dump:
parser.add_argument("-t", "--task",
help="find the signature data file for the last run of the specified task",
action="store", dest="taskargs", nargs=2, metavar=('recipename', 'taskname'))
help="find the signature data file for the last run of the specified task",
action="store", dest="taskargs", nargs=2, metavar=('recipename', 'taskname'))

parser.add_argument("sigdatafile1",
help="Signature file to dump. Not used when using -t/--task.",
action="store", nargs='?', metavar="sigdatafile")
help="Signature file to dump. Not used when using -t/--task.",
action="store", nargs='?', metavar="sigdatafile")
else:
parser.add_argument('-c', '--color',
help='Colorize the output (where %(metavar)s is %(choices)s)',
choices=['auto', 'always', 'never'], default='auto', metavar='color')
help='Colorize the output (where %(metavar)s is %(choices)s)',
choices=['auto', 'always', 'never'], default='auto', metavar='color')

parser.add_argument('-d', '--dump',
help='Dump the last signature data instead of comparing (equivalent to using bitbake-dumpsig)',
action='store_true')
help='Dump the last signature data instead of comparing (equivalent to using bitbake-dumpsig)',
action='store_true')

parser.add_argument("-t", "--task",
help="find the signature data files for the last two runs of the specified task and compare them",
action="store", dest="taskargs", nargs=2, metavar=('recipename', 'taskname'))
help="find the signature data files for the last two runs of the specified task and compare them",
action="store", dest="taskargs", nargs=2, metavar=('recipename', 'taskname'))

parser.add_argument("-s", "--signature",
help="With -t/--task, specify the signatures to look for instead of taking the last two",
action="store", dest="sigargs", nargs=2, metavar=('fromsig', 'tosig'))
help="With -t/--task, specify the signatures to look for instead of taking the last two",
action="store", dest="sigargs", nargs=2, metavar=('fromsig', 'tosig'))

parser.add_argument("sigdatafile1",
help="First signature file to compare (or signature file to dump, if second not specified). Not used when using -t/--task.",
action="store", nargs='?')
help="First signature file to compare (or signature file to dump, if second not specified). Not used when using -t/--task.",
action="store", nargs='?')

parser.add_argument("sigdatafile2",
help="Second signature file to compare",
action="store", nargs='?')
help="Second signature file to compare",
action="store", nargs='?')

options = parser.parse_args()
if is_dump:
@@ -160,8 +156,7 @@ if options.taskargs:

with bb.tinfoil.Tinfoil() as tinfoil:
tinfoil.prepare(config_only=True)
if not options.dump and options.sigargs:
files = find_siginfo_task(tinfoil, options.taskargs[0], options.taskargs[1], options.sigargs[0],
options.sigargs[1])
files = find_siginfo_task(tinfoil, options.taskargs[0], options.taskargs[1], options.sigargs[0], options.sigargs[1])
else:
files = find_siginfo_task(tinfoil, options.taskargs[0], options.taskargs[1])

@@ -170,8 +165,7 @@ if options.taskargs:

output = bb.siggen.dump_sigfile(files[-1])
else:
if len(files) < 2:
logger.error('Only one matching sigdata file found for the specified task (%s %s)' % (
options.taskargs[0], options.taskargs[1]))
logger.error('Only one matching sigdata file found for the specified task (%s %s)' % (options.taskargs[0], options.taskargs[1]))
sys.exit(1)

# Recurse into signature comparison

@@ -1,52 +0,0 @@
#! /usr/bin/env python3
#
# Copyright (C) 2021 Richard Purdie
#
# SPDX-License-Identifier: GPL-2.0-only
#

import argparse
import io
import os
import sys
import warnings
warnings.simplefilter("default")

bindir = os.path.dirname(__file__)
topdir = os.path.dirname(bindir)
sys.path[0:0] = [os.path.join(topdir, 'lib')]

import bb.tinfoil

if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Bitbake Query Variable")
parser.add_argument("variable", help="variable name to query")
parser.add_argument("-r", "--recipe", help="Recipe name to query", default=None, required=False)
parser.add_argument('-u', '--unexpand', help='Do not expand the value (with --value)', action="store_true")
parser.add_argument('-f', '--flag', help='Specify a variable flag to query (with --value)', default=None)
parser.add_argument('--value', help='Only report the value, no history and no variable name', action="store_true")
parser.add_argument('-q', '--quiet', help='Silence bitbake server logging', action="store_true")
args = parser.parse_args()

if args.unexpand and not args.value:
print("--unexpand only makes sense with --value")
sys.exit(1)

if args.flag and not args.value:
print("--flag only makes sense with --value")
sys.exit(1)

quiet = args.quiet
with bb.tinfoil.Tinfoil(tracking=True, setup_logging=not quiet) as tinfoil:
if args.recipe:
tinfoil.prepare(quiet=3 if quiet else 2)
d = tinfoil.parse_recipe(args.recipe)
else:
tinfoil.prepare(quiet=2, config_only=True)
d = tinfoil.config_data
if args.flag:
print(str(d.getVarFlag(args.variable, args.flag, expand=(not args.unexpand))))
elif args.value:
print(str(d.getVar(args.variable, expand=(not args.unexpand))))
else:
bb.data.emit_var(args.variable, d=d, all=True)

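For reference, the argparse options defined in the script above map onto invocations like the following. This is only a sketch; the recipe name "busybox" is an illustrative placeholder, not something taken from the diff:

```sh
# Query a configuration-level variable, with history plus value.
bitbake-getvar MACHINE
# Print only the expanded value.
bitbake-getvar --value MACHINE
# Query a variable in the context of a specific recipe (recipe name is hypothetical).
bitbake-getvar -r busybox --value PV
```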
@@ -13,8 +13,6 @@ import pprint
import sys
import threading
import time
import warnings
warnings.simplefilter("default")

try:
import tqdm
@@ -56,24 +54,25 @@ def main():
nonlocal missed_hashes
nonlocal max_time

with hashserv.create_client(args.address) as client:
for i in range(args.requests):
taskhash = hashlib.sha256()
taskhash.update(args.taskhash_seed.encode('utf-8'))
taskhash.update(str(i).encode('utf-8'))
client = hashserv.create_client(args.address)

start_time = time.perf_counter()
l = client.get_unihash(METHOD, taskhash.hexdigest())
elapsed = time.perf_counter() - start_time
for i in range(args.requests):
taskhash = hashlib.sha256()
taskhash.update(args.taskhash_seed.encode('utf-8'))
taskhash.update(str(i).encode('utf-8'))

with lock:
if l:
found_hashes += 1
else:
missed_hashes += 1
start_time = time.perf_counter()
l = client.get_unihash(METHOD, taskhash.hexdigest())
elapsed = time.perf_counter() - start_time

max_time = max(elapsed, max_time)
pbar.update()
with lock:
if l:
found_hashes += 1
else:
missed_hashes += 1

max_time = max(elapsed, max_time)
pbar.update()

max_time = 0
found_hashes = 0
@@ -151,8 +150,9 @@ def main():

func = getattr(args, 'func', None)
if func:
with hashserv.create_client(args.address) as client:
return func(args, client)
client = hashserv.create_client(args.address)

return func(args, client)

return 0


@@ -10,8 +10,6 @@ import sys
import logging
import argparse
import sqlite3
import warnings
warnings.simplefilter("default")

sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'lib'))


@@ -14,8 +14,6 @@ import logging
import os
import sys
import argparse
import warnings
warnings.simplefilter("default")

bindir = os.path.dirname(__file__)
topdir = os.path.dirname(bindir)
@@ -68,11 +66,11 @@ def main():

registered = False
for plugin in plugins:
if hasattr(plugin, 'tinfoil_init'):
plugin.tinfoil_init(tinfoil)
if hasattr(plugin, 'register_commands'):
registered = True
plugin.register_commands(subparsers)
if hasattr(plugin, 'tinfoil_init'):
plugin.tinfoil_init(tinfoil)

if not registered:
logger.error("No commands registered - missing plugins?")

@@ -1,15 +1,11 @@
#!/usr/bin/env python3
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#

import os
import sys,logging
import optparse
import warnings
warnings.simplefilter("default")

sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)),'lib'))

@@ -40,14 +36,12 @@ def main():
dest="host", type="string", default=PRHOST_DEFAULT)
parser.add_option("--port", help="port number(default: 8585)", action="store",
dest="port", type="int", default=PRPORT_DEFAULT)
parser.add_option("-r", "--read-only", help="open database in read-only mode",
action="store_true")

options, args = parser.parse_args(sys.argv)
prserv.init_logger(os.path.abspath(options.logfile),options.loglevel)

if options.start:
ret=prserv.serv.start_daemon(options.dbfile, options.host, options.port,os.path.abspath(options.logfile), options.read_only)
ret=prserv.serv.start_daemon(options.dbfile, options.host, options.port,os.path.abspath(options.logfile))
elif options.stop:
ret=prserv.serv.stop_daemon(options.host, options.port)
else:

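The options shown in the bitbake-prserv hunk above correspond to a command line like the one baked into the prserv Dockerfile later in this comparison. As a rough sketch (file paths are placeholders, not from the diff):

```sh
# Start a PR server on all interfaces, port 8585 (sketch; paths are illustrative).
bitbake-prserv --file=prserv.sqlite3 --log=prserv.log --loglevel=debug --start --host=0.0.0.0 --port=8585
```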
@@ -7,8 +7,6 @@

import os
import sys, logging
import warnings
warnings.simplefilter("default")
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'lib'))

import unittest
@@ -31,7 +29,6 @@ tests = ["bb.tests.codeparser",
"bb.tests.runqueue",
"bb.tests.siggen",
"bb.tests.utils",
"bb.tests.compression",
"hashserv.tests",
"layerindexlib.tests.layerindexobj",
"layerindexlib.tests.restapi",

@@ -8,13 +8,11 @@
import os
import sys
import warnings
warnings.simplefilter("default")
import logging
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))

import bb

bb.utils.check_system_locale()
if sys.getfilesystemencoding() != "utf-8":
sys.exit("Please use a locale setting which supports UTF-8 (such as LANG=en_US.UTF-8).\nPython can't change the filesystem locale after loading so we need a UTF-8 when Python starts or things won't work.")

# Users shouldn't be running this code directly
if len(sys.argv) != 10 or not sys.argv[1].startswith("decafbad"):
@@ -32,6 +30,8 @@ timeout = float(sys.argv[7])
xmlrpcinterface = (sys.argv[8], int(sys.argv[9]))
if xmlrpcinterface[0] == "None":
xmlrpcinterface = (None, xmlrpcinterface[1])
if timeout == "None":
timeout = None

# Replace standard fds with our own
with open('/dev/null', 'r') as si:

@@ -1,14 +1,11 @@
#!/usr/bin/env python3
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#

import os
import sys
import warnings
warnings.simplefilter("default")
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
from bb import fetch2
import logging
@@ -19,12 +16,11 @@ import signal
import pickle
import traceback
import queue
import shlex
import subprocess
from multiprocessing import Lock
from threading import Thread

bb.utils.check_system_locale()
if sys.getfilesystemencoding() != "utf-8":
sys.exit("Please use a locale setting which supports UTF-8 (such as LANG=en_US.UTF-8).\nPython can't change the filesystem locale after loading so we need a UTF-8 when Python starts or things won't work.")

# Users shouldn't be running this code directly
if len(sys.argv) != 2 or not sys.argv[1].startswith("decafbad"):
@@ -91,19 +87,19 @@ def worker_fire_prepickled(event):
worker_thread_exit = False

def worker_flush(worker_queue):
worker_queue_int = bytearray()
worker_queue_int = b""
global worker_pipe, worker_thread_exit

while True:
try:
worker_queue_int.extend(worker_queue.get(True, 1))
worker_queue_int = worker_queue_int + worker_queue.get(True, 1)
except queue.Empty:
pass
while (worker_queue_int or not worker_queue.empty()):
try:
(_, ready, _) = select.select([], [worker_pipe], [], 1)
if not worker_queue.empty():
worker_queue_int.extend(worker_queue.get())
worker_queue_int = worker_queue_int + worker_queue.get()
written = os.write(worker_pipe, worker_queue_int)
worker_queue_int = worker_queue_int[written:]
except (IOError, OSError) as e:
@@ -149,14 +145,9 @@ def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, taskha
# a fork() or exec*() activates PSEUDO...

envbackup = {}
fakeroot = False
fakeenv = {}
umask = None

uid = os.getuid()
gid = os.getgid()


taskdep = workerdata["taskdeps"][fn]
if 'umask' in taskdep and taskname in taskdep['umask']:
umask = taskdep['umask'][taskname]
@@ -173,7 +164,6 @@ def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, taskha

# We can't use the fakeroot environment in a dry run as it possibly hasn't been built
if 'fakeroot' in taskdep and taskname in taskdep['fakeroot'] and not dry_run:
fakeroot = True
envvars = (workerdata["fakerootenv"][fn] or "").split()
for key, value in (var.split('=') for var in envvars):
envbackup[key] = os.environ.get(key)
@@ -242,7 +232,6 @@ def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, taskha
the_data = databuilder.mcdata[mc]
the_data.setVar("BB_WORKERCONTEXT", "1")
the_data.setVar("BB_TASKDEPDATA", taskdepdata)
the_data.setVar('BB_CURRENTTASK', taskname.replace("do_", ""))
if cfg.limited_deps:
the_data.setVar("BB_LIMITEDDEPS", "1")
the_data.setVar("BUILDNAME", workerdata["buildname"])
@@ -262,13 +251,6 @@ def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, taskha

bb.utils.set_process_name("%s:%s" % (the_data.getVar("PN"), taskname.replace("do_", "")))

if not the_data.getVarFlag(taskname, 'network', False):
if bb.utils.is_local_uid(uid):
logger.debug("Attempting to disable network for %s" % taskname)
bb.utils.disable_network(uid, gid)
else:
logger.debug("Skipping disable network for %s since %s is not a local uid." % (taskname, uid))

# exported_vars() returns a generator which *cannot* be passed to os.environ.update()
# successfully. We also need to unset anything from the environment which shouldn't be there
exports = bb.data.exported_vars(the_data)
@@ -300,13 +282,7 @@ def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, taskha
try:
if dry_run:
return 0
try:
ret = bb.build.exec_task(fn, taskname, the_data, cfg.profile)
finally:
if fakeroot:
fakerootcmd = shlex.split(the_data.getVar("FAKEROOTCMD"))
subprocess.run(fakerootcmd + ['-S'], check=True, stdout=subprocess.PIPE)
return ret
return bb.build.exec_task(fn, taskname, the_data, cfg.profile)
except:
os._exit(1)
if not profiling:
@@ -338,12 +314,12 @@ class runQueueWorkerPipe():
if pipeout:
pipeout.close()
bb.utils.nonblockingfd(self.input)
self.queue = bytearray()
self.queue = b""

def read(self):
start = len(self.queue)
try:
self.queue.extend(self.input.read(102400) or b"")
self.queue = self.queue + (self.input.read(102400) or b"")
except (OSError, IOError) as e:
if e.errno != errno.EAGAIN:
raise
@@ -371,7 +347,7 @@ class BitbakeWorker(object):
def __init__(self, din):
self.input = din
bb.utils.nonblockingfd(self.input)
self.queue = bytearray()
self.queue = b""
self.cookercfg = None
self.databuilder = None
self.data = None
@@ -405,7 +381,7 @@ class BitbakeWorker(object):
if len(r) == 0:
# EOF on pipe, server must have terminated
self.sigterm_exception(signal.SIGTERM, None)
self.queue.extend(r)
self.queue = self.queue + r
except (OSError, IOError):
pass
if len(self.queue):
@@ -430,18 +406,14 @@ class BitbakeWorker(object):
if self.queue.startswith(b"<" + item + b">"):
index = self.queue.find(b"</" + item + b">")
while index != -1:
try:
func(self.queue[(len(item) + 2):index])
except pickle.UnpicklingError:
workerlog_write("Unable to unpickle data: %s\n" % ":".join("{:02x}".format(c) for c in self.queue))
raise
func(self.queue[(len(item) + 2):index])
self.queue = self.queue[(index + len(item) + 3):]
index = self.queue.find(b"</" + item + b">")

def handle_cookercfg(self, data):
self.cookercfg = pickle.loads(data)
self.databuilder = bb.cookerdata.CookerDataBuilder(self.cookercfg, worker=True)
self.databuilder.parseBaseConfiguration(worker=True)
self.databuilder.parseBaseConfiguration()
self.data = self.databuilder.data

def handle_extraconfigdata(self, data):
@@ -541,11 +513,9 @@ except BaseException as e:
import traceback
sys.stderr.write(traceback.format_exc())
sys.stderr.write(str(e))
finally:
worker_thread_exit = True
worker_thread.join()

workerlog_write("exiting")
if not normalexit:
sys.exit(1)
worker_thread_exit = True
worker_thread.join()

workerlog_write("exitting")
sys.exit(0)

@@ -1,7 +1,5 @@
#!/usr/bin/env python3
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#

@@ -18,8 +16,6 @@ import itertools
import os
import subprocess
import sys
import warnings
warnings.simplefilter("default")

version = 1.0

@@ -33,7 +33,7 @@ databaseCheck()
$MANAGE migrate --noinput || retval=1

if [ $retval -eq 1 ]; then
echo "Failed migrations, halting system start" 1>&2
echo "Failed migrations, aborting system start" 1>&2
return $retval
fi
# Make sure that checksettings can pick up any value for TEMPLATECONF
@@ -41,7 +41,7 @@ databaseCheck()
$MANAGE checksettings --traceback || retval=1

if [ $retval -eq 1 ]; then
printf "\nError while checking settings; exiting\n"
printf "\nError while checking settings; aborting\n"
return $retval
fi

@@ -248,7 +248,7 @@ fi
# 3) the sqlite db if that is being used.
# 4) pid's we need to clean up on exit/shutdown
export TOASTER_DIR=$TOASTERDIR
export BB_ENV_PASSTHROUGH_ADDITIONS="$BB_ENV_PASSTHROUGH_ADDITIONS TOASTER_DIR"
export BB_ENV_EXTRAWHITE="$BB_ENV_EXTRAWHITE TOASTER_DIR"

# Determine the action. If specified by arguments, fine, if not, toggle it
if [ "$CMD" = "start" ] ; then

@@ -19,8 +19,6 @@ import sys
import json
import pickle
import codecs
import warnings
warnings.simplefilter("default")

from collections import namedtuple

@@ -1,7 +1,7 @@
# SPDX-License-Identifier: MIT
#
# Copyright (c) 2021 Joshua Watt <JPEWhacker@gmail.com>
#
#
# Dockerfile to build a bitbake hash equivalence server container
#
# From the root of the bitbake repository, run:
@@ -15,9 +15,5 @@ RUN apk add --no-cache python3

COPY bin/bitbake-hashserv /opt/bbhashserv/bin/
COPY lib/hashserv /opt/bbhashserv/lib/hashserv/
COPY lib/bb /opt/bbhashserv/lib/bb/
COPY lib/codegen.py /opt/bbhashserv/lib/codegen.py
COPY lib/ply /opt/bbhashserv/lib/ply/
COPY lib/bs4 /opt/bbhashserv/lib/bs4/

ENTRYPOINT ["/opt/bbhashserv/bin/bitbake-hashserv"]

@@ -1,62 +0,0 @@
# SPDX-License-Identifier: MIT
#
# Copyright (c) 2022 Daniel Gomez <daniel@qtec.com>
#
# Dockerfile to build a bitbake PR service container
#
# From the root of the bitbake repository, run:
#
# docker build -f contrib/prserv/Dockerfile . -t prserv
#
# Running examples:
#
# 1. PR Service in RW mode, port 18585:
#
# docker run --detach --tty \
# --env PORT=18585 \
# --publish 18585:18585 \
# --volume $PWD:/var/lib/bbprserv \
# prserv
#
# 2. PR Service in RO mode, default port (8585) and custom LOGFILE:
#
# docker run --detach --tty \
# --env DBMODE="--read-only" \
# --env LOGFILE=/var/lib/bbprserv/prservro.log \
# --publish 8585:8585 \
# --volume $PWD:/var/lib/bbprserv \
# prserv
#

FROM alpine:3.14.4

RUN apk add --no-cache python3

COPY bin/bitbake-prserv /opt/bbprserv/bin/
COPY lib/prserv /opt/bbprserv/lib/prserv/
COPY lib/bb /opt/bbprserv/lib/bb/
COPY lib/codegen.py /opt/bbprserv/lib/codegen.py
COPY lib/ply /opt/bbprserv/lib/ply/
COPY lib/bs4 /opt/bbprserv/lib/bs4/

ENV PATH=$PATH:/opt/bbprserv/bin

RUN mkdir -p /var/lib/bbprserv

ENV DBFILE=/var/lib/bbprserv/prserv.sqlite3 \
LOGFILE=/var/lib/bbprserv/prserv.log \
LOGLEVEL=debug \
HOST=0.0.0.0 \
PORT=8585 \
DBMODE=""

ENTRYPOINT [ "/bin/sh", "-c", \
"bitbake-prserv \
--file=$DBFILE \
--log=$LOGFILE \
--loglevel=$LOGLEVEL \
--start \
--host=$HOST \
--port=$PORT \
$DBMODE \
&& tail -f $LOGFILE"]

@@ -20,7 +20,7 @@ fun! NewBBAppendTemplate()
set nopaste

" New bbappend template
0 put ='FILESEXTRAPATHS:prepend := \"${THISDIR}/${PN}:\"'
0 put ='FILESEXTRAPATHS_prepend := \"${THISDIR}/${PN}:\"'
2

if paste == 1

@@ -51,9 +51,9 @@ syn region bbString matchgroup=bbQuote start=+'+ skip=+\\$+ end=+'+
syn match bbExport "^export" nextgroup=bbIdentifier skipwhite
syn keyword bbExportFlag export contained nextgroup=bbIdentifier skipwhite
syn match bbIdentifier "[a-zA-Z0-9\-_\.\/\+]\+" display contained
syn match bbVarDeref "${[a-zA-Z0-9\-_:\.\/\+]\+}" contained
syn match bbVarDeref "${[a-zA-Z0-9\-_\.\/\+]\+}" contained
syn match bbVarEq "\(:=\|+=\|=+\|\.=\|=\.\|?=\|??=\|=\)" contained nextgroup=bbVarValue
syn match bbVarDef "^\(export\s*\)\?\([a-zA-Z0-9\-_\.\/\+][${}a-zA-Z0-9\-_:\.\/\+]*\)\s*\(:=\|+=\|=+\|\.=\|=\.\|?=\|??=\|=\)\@=" contains=bbExportFlag,bbIdentifier,bbOverrideOperator,bbVarDeref nextgroup=bbVarEq
syn match bbVarDef "^\(export\s*\)\?\([a-zA-Z0-9\-_\.\/\+]\+\(_[${}a-zA-Z0-9\-_\.\/\+]\+\)\?\)\s*\(:=\|+=\|=+\|\.=\|=\.\|?=\|??=\|=\)\@=" contains=bbExportFlag,bbIdentifier,bbVarDeref nextgroup=bbVarEq
syn match bbVarValue ".*$" contained contains=bbString,bbVarDeref,bbVarPyValue
syn region bbVarPyValue start=+${@+ skip=+\\$+ end=+}+ contained contains=@python

@@ -77,15 +77,13 @@ syn keyword bbOEFunctions do_fetch do_unpack do_patch do_configure do_comp
" Generic Functions
syn match bbFunction "\h[0-9A-Za-z_\-\.]*" display contained contains=bbOEFunctions

syn keyword bbOverrideOperator append prepend remove contained

" BitBake shell metadata
syn include @shell syntax/sh.vim
if exists("b:current_syntax")
unlet b:current_syntax
endif
syn keyword bbShFakeRootFlag fakeroot contained
syn match bbShFuncDef "^\(fakeroot\s*\)\?\([\.0-9A-Za-z_:${}\-\.]\+\)\(python\)\@<!\(\s*()\s*\)\({\)\@=" contains=bbShFakeRootFlag,bbFunction,bbOverrideOperator,bbVarDeref,bbDelimiter nextgroup=bbShFuncRegion skipwhite
syn match bbShFuncDef "^\(fakeroot\s*\)\?\([\.0-9A-Za-z_${}\-\.]\+\)\(python\)\@<!\(\s*()\s*\)\({\)\@=" contains=bbShFakeRootFlag,bbFunction,bbVarDeref,bbDelimiter nextgroup=bbShFuncRegion skipwhite
syn region bbShFuncRegion matchgroup=bbDelimiter start="{\s*$" end="^}\s*$" contained contains=@shell

" Python value inside shell functions
@@ -93,7 +91,7 @@ syn region shDeref start=+${@+ skip=+\\$+ excludenl end=+}+ contained co

" BitBake python metadata
syn keyword bbPyFlag python contained
syn match bbPyFuncDef "^\(fakeroot\s*\)\?\(python\)\(\s\+[0-9A-Za-z_:${}\-\.]\+\)\?\(\s*()\s*\)\({\)\@=" contains=bbShFakeRootFlag,bbPyFlag,bbFunction,bbOverrideOperator,bbVarDeref,bbDelimiter nextgroup=bbPyFuncRegion skipwhite
syn match bbPyFuncDef "^\(fakeroot\s*\)\?\(python\)\(\s\+[0-9A-Za-z_${}\-\.]\+\)\?\(\s*()\s*\)\({\)\@=" contains=bbShFakeRootFlag,bbPyFlag,bbFunction,bbVarDeref,bbDelimiter nextgroup=bbPyFuncRegion skipwhite
syn region bbPyFuncRegion matchgroup=bbDelimiter start="{\s*$" end="^}\s*$" contained contains=@python

" BitBake 'def'd python functions
@@ -124,6 +122,5 @@ hi def link bbStatement Statement
hi def link bbStatementRest Identifier
hi def link bbOEFunctions Special
hi def link bbVarPyValue PreProc
hi def link bbOverrideOperator Operator

let b:current_syntax = "bb"

@@ -3,7 +3,7 @@

# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?= -W --keep-going -j auto
SPHINXOPTS ?= -j auto
SPHINXBUILD ?= sphinx-build
SOURCEDIR = .
BUILDDIR = _build

@@ -8,12 +8,12 @@ Manual Organization

Folders exist for individual manuals as follows:

* bitbake-user-manual --- The BitBake User Manual
* bitbake-user-manual - The BitBake User Manual

Each folder is self-contained regarding content and figures.

If you want to find HTML versions of the BitBake manuals on the web,
go to https://www.openembedded.org/wiki/Documentation.
go to http://www.openembedded.org/wiki/Documentation.

Sphinx
======

@@ -16,7 +16,7 @@ data, or simply return information about the execution environment.

This chapter describes BitBake's execution process from start to finish
when you use it to create an image. The execution process is launched
using the following command form::
using the following command form: ::

$ bitbake target

@@ -32,7 +32,7 @@ the BitBake command and its options, see ":ref:`The BitBake Command
your project's ``local.conf`` configuration file.

A common method to determine this value for your build host is to run
the following::
the following: ::

$ grep processor /proc/cpuinfo

@@ -40,7 +40,7 @@ the BitBake command and its options, see ":ref:`The BitBake Command
the number of processors, which takes into account hyper-threading.
Thus, a quad-core build host with hyper-threading most likely shows
eight processors, which is the value you would then assign to
:term:`BB_NUMBER_THREADS`.
``BB_NUMBER_THREADS``.

A possibly simpler solution is that some Linux distributions (e.g.
Debian and Ubuntu) provide the ``ncpus`` command.
@@ -65,13 +65,13 @@ data itself is of various types:

The ``layer.conf`` files are used to construct key variables such as
:term:`BBPATH` and :term:`BBFILES`.
:term:`BBPATH` is used to search for configuration and class files under the
``conf`` and ``classes`` directories, respectively. :term:`BBFILES` is used
``BBPATH`` is used to search for configuration and class files under the
``conf`` and ``classes`` directories, respectively. ``BBFILES`` is used
to locate both recipe and recipe append files (``.bb`` and
``.bbappend``). If there is no ``bblayers.conf`` file, it is assumed the
user has set the :term:`BBPATH` and :term:`BBFILES` directly in the environment.
user has set the ``BBPATH`` and ``BBFILES`` directly in the environment.

Next, the ``bitbake.conf`` file is located using the :term:`BBPATH` variable
Next, the ``bitbake.conf`` file is located using the ``BBPATH`` variable
that was just constructed. The ``bitbake.conf`` file may also include
other configuration files using the ``include`` or ``require``
directives.
@@ -79,8 +79,8 @@ directives.
Prior to parsing configuration files, BitBake looks at certain
variables, including:

- :term:`BB_ENV_PASSTHROUGH`
- :term:`BB_ENV_PASSTHROUGH_ADDITIONS`
- :term:`BB_ENV_WHITELIST`
- :term:`BB_ENV_EXTRAWHITE`
- :term:`BB_PRESERVE_ENV`
- :term:`BB_ORIGENV`
- :term:`BITBAKE_UI`
@@ -104,7 +104,7 @@ BitBake first searches the current working directory for an optional
contain a :term:`BBLAYERS` variable that is a
space-delimited list of 'layer' directories. Recall that if BitBake
cannot find a ``bblayers.conf`` file, then it is assumed the user has
set the :term:`BBPATH` and :term:`BBFILES` variables directly in the
set the ``BBPATH`` and ``BBFILES`` variables directly in the
environment.

For each directory (layer) in this list, a ``conf/layer.conf`` file is
@@ -114,7 +114,7 @@ files automatically set up :term:`BBPATH` and other
variables correctly for a given build directory.

BitBake then expects to find the ``conf/bitbake.conf`` file somewhere in
the user-specified :term:`BBPATH`. That configuration file generally has
the user-specified ``BBPATH``. That configuration file generally has
include directives to pull in any other metadata such as files specific
to the architecture, the machine, the local environment, and so forth.

@@ -135,11 +135,11 @@ The ``base.bbclass`` file is always included. Other classes that are
specified in the configuration using the
:term:`INHERIT` variable are also included. BitBake
searches for class files in a ``classes`` subdirectory under the paths
in :term:`BBPATH` in the same way as configuration files.
in ``BBPATH`` in the same way as configuration files.

A good way to get an idea of the configuration files and the class files
used in your execution environment is to run the following BitBake
command::
command: ::

$ bitbake -e > mybb.log

@@ -155,7 +155,7 @@ execution environment.
pair of curly braces in a shell function, the closing curly brace
must not be located at the start of the line without leading spaces.

Here is an example that causes BitBake to produce a parsing error::
Here is an example that causes BitBake to produce a parsing error: ::

fakeroot create_shar() {
cat << "EOF" > ${SDK_DEPLOY}/${TOOLCHAIN_OUTPUTNAME}.sh
@@ -184,13 +184,13 @@ Locating and Parsing Recipes
During the configuration phase, BitBake will have set
:term:`BBFILES`. BitBake now uses it to construct a
list of recipes to parse, along with any append files (``.bbappend``) to
apply. :term:`BBFILES` is a space-separated list of available files and
supports wildcards. An example would be::
apply. ``BBFILES`` is a space-separated list of available files and
supports wildcards. An example would be: ::

BBFILES = "/path/to/bbfiles/*.bb /path/to/appends/*.bbappend"

BitBake parses each
recipe and append file located with :term:`BBFILES` and stores the values of
recipe and append file located with ``BBFILES`` and stores the values of
various variables into the datastore.

.. note::
@@ -201,18 +201,18 @@ For each file, a fresh copy of the base configuration is made, then the
recipe is parsed line by line. Any inherit statements cause BitBake to
find and then parse class files (``.bbclass``) using
:term:`BBPATH` as the search path. Finally, BitBake
parses in order any append files found in :term:`BBFILES`.
parses in order any append files found in ``BBFILES``.

One common convention is to use the recipe filename to define pieces of
metadata. For example, in ``bitbake.conf`` the recipe name and version
are used to set the variables :term:`PN` and
:term:`PV`::
:term:`PV`: ::

PN = "${@bb.parse.vars_from_file(d.getVar('FILE', False),d)[0] or 'defaultpkgname'}"
PV = "${@bb.parse.vars_from_file(d.getVar('FILE', False),d)[1] or '1.0'}"
PN = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE', False),d)[0] or 'defaultpkgname'}"
PV = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE', False),d)[1] or '1.0'}"

In this example, a recipe called "something_1.2.3.bb" would set
:term:`PN` to "something" and :term:`PV` to "1.2.3".
``PN`` to "something" and ``PV`` to "1.2.3".

By the time parsing is complete for a recipe, BitBake has a list of
tasks that the recipe defines and a set of data consisting of keys and
@@ -228,7 +228,7 @@ and then reload it.
Where possible, subsequent BitBake commands reuse this cache of recipe
information. The validity of this cache is determined by first computing
a checksum of the base configuration data (see
:term:`BB_HASHCONFIG_IGNORE_VARS`) and
:term:`BB_HASHCONFIG_WHITELIST`) and
then checking if the checksum matches. If that checksum matches what is
in the cache and the recipe and class files have not changed, BitBake is
able to use the cache. BitBake then reloads the cached information about
@@ -238,7 +238,7 @@ Recipe file collections exist to allow the user to have multiple
repositories of ``.bb`` files that contain the same exact package. For
example, one could easily use them to make one's own local copy of an
upstream repository, but with custom modifications that one does not
want upstream. Here is an example::
want upstream. Here is an example: ::

BBFILES = "/stuff/openembedded/*/*.bb /stuff/openembedded.modified/*/*.bb"
BBFILE_COLLECTIONS = "upstream local"

@@ -260,21 +260,21 @@ Providers

Assuming BitBake has been instructed to execute a target and that all
the recipe files have been parsed, BitBake starts to figure out how to
build the target. BitBake looks through the :term:`PROVIDES` list for each
of the recipes. A :term:`PROVIDES` list is the list of names by which the
recipe can be known. Each recipe's :term:`PROVIDES` list is created
build the target. BitBake looks through the ``PROVIDES`` list for each
of the recipes. A ``PROVIDES`` list is the list of names by which the
recipe can be known. Each recipe's ``PROVIDES`` list is created
implicitly through the recipe's :term:`PN` variable and
explicitly through the recipe's :term:`PROVIDES`
variable, which is optional.

When a recipe uses :term:`PROVIDES`, that recipe's functionality can be
found under an alternative name or names other than the implicit :term:`PN`
When a recipe uses ``PROVIDES``, that recipe's functionality can be
found under an alternative name or names other than the implicit ``PN``
name. As an example, suppose a recipe named ``keyboard_1.0.bb``
contained the following::
contained the following: ::

PROVIDES += "fullkeyboard"

The :term:`PROVIDES`
The ``PROVIDES``
list for this recipe becomes "keyboard", which is implicit, and
"fullkeyboard", which is explicit. Consequently, the functionality found
in ``keyboard_1.0.bb`` can be found under two different names.
@@ -284,14 +284,14 @@ in ``keyboard_1.0.bb`` can be found under two different names.
Preferences
===========

The :term:`PROVIDES` list is only part of the solution for figuring out a
The ``PROVIDES`` list is only part of the solution for figuring out a
target's recipes. Because targets might have multiple providers, BitBake
needs to prioritize providers by determining provider preferences.

A common example in which a target has multiple providers is
"virtual/kernel", which is on the :term:`PROVIDES` list for each kernel
"virtual/kernel", which is on the ``PROVIDES`` list for each kernel
recipe. Each machine often selects the best kernel provider by using a
line similar to the following in the machine configuration file::
line similar to the following in the machine configuration file: ::

PREFERRED_PROVIDER_virtual/kernel = "linux-yocto"

@@ -309,10 +309,10 @@ specify a particular version. You can influence the order by using the
:term:`DEFAULT_PREFERENCE` variable.

By default, files have a preference of "0". Setting
:term:`DEFAULT_PREFERENCE` to "-1" makes the recipe unlikely to be used
unless it is explicitly referenced. Setting :term:`DEFAULT_PREFERENCE` to
"1" makes it likely the recipe is used. :term:`PREFERRED_VERSION` overrides
any :term:`DEFAULT_PREFERENCE` setting. :term:`DEFAULT_PREFERENCE` is often used
``DEFAULT_PREFERENCE`` to "-1" makes the recipe unlikely to be used
unless it is explicitly referenced. Setting ``DEFAULT_PREFERENCE`` to
"1" makes it likely the recipe is used. ``PREFERRED_VERSION`` overrides
any ``DEFAULT_PREFERENCE`` setting. ``DEFAULT_PREFERENCE`` is often used
to mark newer and more experimental recipe versions until they have
undergone sufficient testing to be considered stable.

@@ -331,7 +331,7 @@ If the first recipe is named ``a_1.1.bb``, then the

Thus, if a recipe named ``a_1.2.bb`` exists, BitBake will choose 1.2 by
default. However, if you define the following variable in a ``.conf``
file that BitBake parses, you can change that preference::
file that BitBake parses, you can change that preference: ::

PREFERRED_VERSION_a = "1.1"

@@ -394,7 +394,7 @@ ready to run, those tasks have all their dependencies met, and the
thread threshold has not been exceeded.

It is worth noting that you can greatly speed up the build time by
properly setting the :term:`BB_NUMBER_THREADS` variable.
properly setting the ``BB_NUMBER_THREADS`` variable.

As each task completes, a timestamp is written to the directory
specified by the :term:`STAMP` variable. On subsequent
@@ -435,7 +435,7 @@ BitBake writes a shell script to
executes the script. The generated shell script contains all the
exported variables, and the shell functions with all variables expanded.
Output from the shell script goes to the file
``${``\ :term:`T`\ ``}/log.do_taskname.pid``. Looking at the expanded shell functions in
``${T}/log.do_taskname.pid``. Looking at the expanded shell functions in
the run file and the output in the log files is a useful debugging
technique.

@@ -477,7 +477,7 @@ changes because it should not affect the output for target packages. The
simplistic approach for excluding the working directory is to set it to
some fixed value and create the checksum for the "run" script. BitBake
goes one step better and uses the
:term:`BB_BASEHASH_IGNORE_VARS` variable
:term:`BB_HASHBASE_WHITELIST` variable
to define a list of variables that should never be included when
generating the signatures.

@@ -498,7 +498,7 @@ to the task.

Like the working directory case, situations exist where dependencies
should be ignored. For these cases, you can instruct the build process
to ignore a dependency by using a line like the following::
to ignore a dependency by using a line like the following: ::

PACKAGE_ARCHS[vardepsexclude] = "MACHINE"

@@ -508,7 +508,7 @@ even if it does reference it.

Equally, there are cases where we need to add dependencies BitBake is
not able to find. You can accomplish this by using a line like the
following::
following: ::

PACKAGE_ARCHS[vardeps] = "MACHINE"

@@ -523,7 +523,7 @@ it cannot figure out dependencies.
Thus far, this section has limited discussion to the direct inputs into
a task. Information based on direct inputs is referred to as the
"basehash" in the code. However, there is still the question of a task's
indirect inputs --- the things that were already built and present in the
indirect inputs - the things that were already built and present in the
build directory. The checksum (or signature) for a particular task needs
to add the hashes of all the tasks on which the particular task depends.
Choosing which dependencies to add is a policy decision. However, the
@@ -534,11 +534,11 @@ At the code level, there are a variety of ways both the basehash and the
dependent task hashes can be influenced. Within the BitBake
configuration file, we can give BitBake some extra information to help
it construct the basehash. The following statement effectively results
in a list of global variable dependency excludes --- variables never
in a list of global variable dependency excludes - variables never
included in any checksum. This example uses variables from OpenEmbedded
to help illustrate the concept::
to help illustrate the concept: ::

BB_BASEHASH_IGNORE_VARS ?= "TMPDIR FILE PATH PWD BB_TASKHASH BBPATH DL_DIR \
BB_HASHBASE_WHITELIST ?= "TMPDIR FILE PATH PWD BB_TASKHASH BBPATH DL_DIR \
SSTATE_DIR THISDIR FILESEXTRAPATHS FILE_DIRNAME HOME LOGNAME SHELL \
USER FILESPATH STAGING_DIR_HOST STAGING_DIR_TARGET COREBASE PRSERV_HOST \
PRSERV_DUMPDIR PRSERV_DUMPFILE PRSERV_LOCKDOWN PARALLEL_MAKE \
@@ -557,11 +557,11 @@ OpenEmbedded-Core uses: "OEBasic" and "OEBasicHash". By default, there
is a dummy "noop" signature handler enabled in BitBake. This means that
behavior is unchanged from previous versions. ``OE-Core`` uses the
"OEBasicHash" signature handler by default through this setting in the
``bitbake.conf`` file::
``bitbake.conf`` file: ::

BB_SIGNATURE_HANDLER ?= "OEBasicHash"

The "OEBasicHash" :term:`BB_SIGNATURE_HANDLER` is the same as the "OEBasic"
The "OEBasicHash" ``BB_SIGNATURE_HANDLER`` is the same as the "OEBasic"
version but adds the task hash to the stamp files. This results in any
metadata change that changes the task hash, automatically causing the
|
||||
task to be run again. This removes the need to bump
|
||||
@@ -578,7 +578,10 @@ the build. This information includes:
|
||||
- ``BB_BASEHASH_``\ *filename:taskname*: The base hashes for each
|
||||
dependent task.
|
||||
|
||||
- :term:`BB_TASKHASH`: The hash of the currently running task.
|
||||
- ``BBHASHDEPS_``\ *filename:taskname*: The task dependencies for
|
||||
each task.
|
||||
|
||||
- ``BB_TASKHASH``: The hash of the currently running task.
|
||||
|
||||
It is worth noting that BitBake's "-S" option lets you debug BitBake's
|
||||
processing of signatures. The options passed to -S allow different
|
||||
@@ -645,6 +648,13 @@ compiled binary. To handle this, BitBake calls the
|
||||
each successful setscene task to know whether or not it needs to obtain
|
||||
the dependencies of that task.
|
||||
|
||||
Finally, after all the setscene tasks have executed, BitBake calls the
|
||||
function listed in
|
||||
:term:`BB_SETSCENE_VERIFY_FUNCTION2`
|
||||
with the list of tasks BitBake thinks has been "covered". The metadata
|
||||
can then ensure that this list is correct and can inform BitBake that it
|
||||
wants specific tasks to be run regardless of the setscene result.
|
||||
|
||||
You can find more information on setscene metadata in the
|
||||
:ref:`bitbake-user-manual/bitbake-user-manual-metadata:task checksums and setscene`
|
||||
section.
|
||||
|
||||
@@ -27,7 +27,7 @@ and unpacking the files is often optionally followed by patching.
|
||||
Patching, however, is not covered by this module.
|
||||
|
||||
The code to execute the first part of this process, a fetch, looks
|
||||
something like the following::
|
||||
something like the following: ::
|
||||
|
||||
src_uri = (d.getVar('SRC_URI') or "").split()
|
||||
fetcher = bb.fetch2.Fetch(src_uri, d)
|
||||
@@ -37,7 +37,7 @@ This code sets up an instance of the fetch class. The instance uses a
|
||||
space-separated list of URLs from the :term:`SRC_URI`
|
||||
variable and then calls the ``download`` method to download the files.
|
||||
|
||||
The instantiation of the fetch class is usually followed by::
|
||||
The instantiation of the fetch class is usually followed by: ::
|
||||
|
||||
rootdir = l.getVar('WORKDIR')
|
||||
fetcher.unpack(rootdir)
|
||||
@@ -51,7 +51,7 @@ This code unpacks the downloaded files to the directory specified by ``WORKDIR``.
|
||||
examine the OpenEmbedded class file ``base.bbclass``.
|
||||
|
||||
The :term:`SRC_URI` and ``WORKDIR`` variables are not hardcoded into the
|
||||
The ``SRC_URI`` and ``WORKDIR`` variables are not hardcoded into the
|
||||
fetcher, since those fetcher methods can be (and are) called with
|
||||
different variable names. In OpenEmbedded for example, the shared state
|
||||
(sstate) code uses the fetch module to fetch the sstate files.
|
||||
@@ -64,38 +64,38 @@ URLs by looking for source files in a specific search order:
|
||||
:term:`PREMIRRORS` variable.
|
||||
|
||||
- *Source URI:* If pre-mirrors fail, BitBake uses the original URL (e.g.
|
||||
from :term:`SRC_URI`).
|
||||
from ``SRC_URI``).
|
||||
|
||||
- *Mirror Sites:* If fetch failures occur, BitBake next uses mirror
|
||||
locations as defined by the :term:`MIRRORS` variable.
|
||||
|
||||
For each URL passed to the fetcher, the fetcher calls the submodule that
|
||||
handles that particular URL type. This behavior can be the source of
|
||||
some confusion when you are providing URLs for the :term:`SRC_URI` variable.
|
||||
Consider the following two URLs::
|
||||
some confusion when you are providing URLs for the ``SRC_URI`` variable.
|
||||
Consider the following two URLs: ::
|
||||
|
||||
https://git.yoctoproject.org/git/poky;protocol=git
|
||||
http://git.yoctoproject.org/git/poky;protocol=git
|
||||
git://git.yoctoproject.org/git/poky;protocol=http
|
||||
|
||||
In the former case, the URL is passed to the ``wget`` fetcher, which does not
|
||||
understand "git". Therefore, the latter case is the correct form since the Git
|
||||
fetcher does know how to use HTTP as a transport.
|
||||
|
||||
Here are some examples that show commonly used mirror definitions::
|
||||
Here are some examples that show commonly used mirror definitions: ::
|
||||
|
||||
PREMIRRORS ?= "\
|
||||
bzr://.*/.\* http://somemirror.org/sources/ \
|
||||
cvs://.*/.\* http://somemirror.org/sources/ \
|
||||
git://.*/.\* http://somemirror.org/sources/ \
|
||||
hg://.*/.\* http://somemirror.org/sources/ \
|
||||
osc://.*/.\* http://somemirror.org/sources/ \
|
||||
p4://.*/.\* http://somemirror.org/sources/ \
|
||||
svn://.*/.\* http://somemirror.org/sources/"
|
||||
bzr://.*/.\* http://somemirror.org/sources/ \\n \
|
||||
cvs://.*/.\* http://somemirror.org/sources/ \\n \
|
||||
git://.*/.\* http://somemirror.org/sources/ \\n \
|
||||
hg://.*/.\* http://somemirror.org/sources/ \\n \
|
||||
osc://.*/.\* http://somemirror.org/sources/ \\n \
|
||||
p4://.*/.\* http://somemirror.org/sources/ \\n \
|
||||
svn://.*/.\* http://somemirror.org/sources/ \\n"
|
||||
|
||||
MIRRORS =+ "\
|
||||
ftp://.*/.\* http://somemirror.org/sources/ \
|
||||
http://.*/.\* http://somemirror.org/sources/ \
|
||||
https://.*/.\* http://somemirror.org/sources/"
|
||||
ftp://.*/.\* http://somemirror.org/sources/ \\n \
|
||||
http://.*/.\* http://somemirror.org/sources/ \\n \
|
||||
https://.*/.\* http://somemirror.org/sources/ \\n"
|
||||
|
||||
It is useful to note that BitBake
|
||||
supports cross-URLs. It is possible to mirror a Git repository on an
|
||||
@@ -110,26 +110,26 @@ which is specified by the :term:`DL_DIR` variable.
|
||||
File integrity is of key importance for reproducing builds. For
|
||||
non-local archive downloads, the fetcher code can verify SHA-256 and MD5
|
||||
checksums to ensure the archives have been downloaded correctly. You can
|
||||
specify these checksums by using the :term:`SRC_URI` variable with the
|
||||
appropriate varflags as follows::
|
||||
specify these checksums by using the ``SRC_URI`` variable with the
|
||||
appropriate varflags as follows: ::
|
||||
|
||||
SRC_URI[md5sum] = "value"
|
||||
SRC_URI[sha256sum] = "value"
|
||||
|
||||
You can also specify the checksums as
|
||||
parameters on the :term:`SRC_URI` as shown below::
|
||||
parameters on the ``SRC_URI`` as shown below: ::
|
||||
|
||||
SRC_URI = "http://example.com/foobar.tar.bz2;md5sum=4a8e0f237e961fd7785d19d07fdb994d"
|
||||
|
||||
If multiple URIs exist, you can specify the checksums either directly as
|
||||
in the previous example, or you can name the URLs. The following syntax
|
||||
shows how you name the URIs::
|
||||
shows how you name the URIs: ::
|
||||
|
||||
SRC_URI = "http://example.com/foobar.tar.bz2;name=foo"
|
||||
SRC_URI[foo.md5sum] = 4a8e0f237e961fd7785d19d07fdb994d
|
||||
|
||||
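Extending that idea to a hypothetical recipe with two URIs, each URI gets a
name and a checksum keyed on that name (the hosts and checksum placeholders
below are illustrative only)::

   SRC_URI = "http://example.com/foobar.tar.bz2;name=foo \
              http://example.com/bazqux.tar.bz2;name=baz"
   SRC_URI[foo.sha256sum] = "<sha256 of foobar.tar.bz2>"
   SRC_URI[baz.sha256sum] = "<sha256 of bazqux.tar.bz2>"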
After a file has been downloaded and
|
||||
has had its checksum checked, a ".done" stamp is placed in :term:`DL_DIR`.
|
||||
has had its checksum checked, a ".done" stamp is placed in ``DL_DIR``.
|
||||
BitBake uses this stamp during subsequent builds to avoid downloading or
|
||||
comparing a checksum for the file again.
|
||||
|
||||
@@ -144,10 +144,6 @@ download without a checksum triggers an error message. The
|
||||
make any attempted network access a fatal error, which is useful for
|
||||
checking that mirrors are complete as well as other things.
|
||||
|
||||
If :term:`BB_CHECK_SSL_CERTS` is set to ``0`` then SSL certificate checking will
|
||||
be disabled. This variable defaults to ``1`` so SSL certificates are normally
|
||||
checked.
|
||||
|
||||
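For example, to disable certificate checking, such as when fetching from an
internal server with a self-signed certificate, the following line could be
added to a configuration file (use with care, since it weakens download
security)::

   BB_CHECK_SSL_CERTS = "0"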
.. _bb-the-unpack:
|
||||
|
||||
The Unpack
|
||||
@@ -167,8 +163,8 @@ govern the behavior of the unpack stage:
|
||||
- *dos:* Applies to ``.zip`` and ``.jar`` files and specifies whether
|
||||
to use DOS line ending conversion on text files.
|
||||
|
||||
- *striplevel:* Strip specified number of leading components (levels)
|
||||
from file names on extraction
|
||||
- *basepath:* Instructs the unpack stage to strip the specified
|
||||
directories from the source path when unpacking.
|
||||
|
||||
- *subdir:* Unpacks the specific URL to the specified subdirectory
|
||||
within the root directory.
|
||||
@@ -208,7 +204,7 @@ time the ``download()`` method is called.
|
||||
If you specify a directory, the entire directory is unpacked.
|
||||
|
||||
Here are a couple of example URLs, the first relative and the second
|
||||
absolute::
|
||||
absolute: ::
|
||||
|
||||
SRC_URI = "file://relativefile.patch"
|
||||
SRC_URI = "file:///Users/ich/very_important_software"
|
||||
@@ -229,12 +225,7 @@ downloaded file is useful for avoiding collisions in
|
||||
:term:`DL_DIR` when dealing with multiple files that
|
||||
have the same name.
|
||||
|
||||
If a username and password are specified in the ``SRC_URI``, a Basic
|
||||
Authorization header will be added to each request, including across redirects.
|
||||
To instead limit the Authorization header to the first request, add
|
||||
"redirectauth=0" to the list of parameters.
|
||||
|
||||
Some example URLs are as follows::
|
||||
Some example URLs are as follows: ::
|
||||
|
||||
SRC_URI = "http://oe.handhelds.org/not_there.aac"
|
||||
SRC_URI = "ftp://oe.handhelds.org/not_there_as_well.aac"
|
||||
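As a sketch of the ``redirectauth`` parameter mentioned above (the host and
credentials are placeholders), a URL whose credentials should only be sent on
the initial request could look like::

   SRC_URI = "https://user:password@example.com/archive.tar.bz2;redirectauth=0"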
@@ -244,13 +235,15 @@ Some example URLs are as follows::
|
||||
|
||||
Because URL parameters are delimited by semi-colons, this can
|
||||
introduce ambiguity when parsing URLs that also contain semi-colons,
|
||||
for example::
|
||||
for example:
|
||||
::
|
||||
|
||||
SRC_URI = "http://abc123.org/git/?p=gcc/gcc.git;a=snapshot;h=a5dd47"
|
||||
|
||||
|
||||
Such URLs should be modified by replacing semi-colons with '&'
|
||||
characters::
|
||||
characters:
|
||||
::
|
||||
|
||||
SRC_URI = "http://abc123.org/git/?p=gcc/gcc.git&a=snapshot&h=a5dd47"
|
||||
|
||||
@@ -258,7 +251,8 @@ Some example URLs are as follows::
|
||||
In most cases this should work. Treating semi-colons and '&' in
|
||||
queries identically is recommended by the World Wide Web Consortium
|
||||
(W3C). Note that due to the nature of the URL, you may have to
|
||||
specify the name of the downloaded file as well::
|
||||
specify the name of the downloaded file as well:
|
||||
::
|
||||
|
||||
SRC_URI = "http://abc123.org/git/?p=gcc/gcc.git&a=snapshot&h=a5dd47;downloadfilename=myfile.bz2"
|
||||
|
||||
@@ -327,7 +321,7 @@ The supported parameters are as follows:
|
||||
|
||||
- *"port":* The port to which the CVS server connects.
|
||||
|
||||
Some example URLs are as follows::
|
||||
Some example URLs are as follows: ::
|
||||
|
||||
SRC_URI = "cvs://CVSROOT;module=mymodule;tag=some-version;method=ext"
|
||||
SRC_URI = "cvs://CVSROOT;module=mymodule;date=20060126;localdir=usethat"
|
||||
@@ -369,7 +363,7 @@ The supported parameters are as follows:
|
||||
username is different than the username used in the main URL, which
|
||||
is passed to the subversion command.
|
||||
|
||||
Following are three examples using svn::
|
||||
Following are three examples using svn: ::
|
||||
|
||||
SRC_URI = "svn://myrepos/proj1;module=vip;protocol=http;rev=667"
|
||||
SRC_URI = "svn://myrepos/proj1;module=opie;protocol=svn+ssh"
|
||||
@@ -396,19 +390,6 @@ This fetcher supports the following parameters:
|
||||
protocol is "file". You can also use "http", "https", "ssh" and
|
||||
"rsync".
|
||||
|
||||
.. note::
|
||||
|
||||
When ``protocol`` is "ssh", the URL expected in :term:`SRC_URI` differs
|
||||
from the one that is typically passed to ``git clone`` command and provided
|
||||
by the Git server to fetch from. For example, the URL returned by GitLab
|
||||
server for ``mesa`` when cloning over SSH is
|
||||
``git@gitlab.freedesktop.org:mesa/mesa.git``, however the expected URL in
|
||||
:term:`SRC_URI` is the following::
|
||||
|
||||
SRC_URI = "git://git@gitlab.freedesktop.org/mesa/mesa.git;branch=main;protocol=ssh;..."
|
||||
|
||||
Note that the ``:`` character is changed to a ``/`` before the path to the project.
|
||||
|
||||
- *"nocheckout":* Tells the fetcher to not checkout source code when
|
||||
unpacking when set to "1". Set this option for the URL where there is
|
||||
a custom routine to checkout code. The default is "0".
|
||||
@@ -424,17 +405,17 @@ This fetcher supports the following parameters:
|
||||
|
||||
- *"nobranch":* Tells the fetcher to not check the SHA validation for
|
||||
the branch when set to "1". The default is "0". Set this option for
|
||||
the recipe that refers to the commit that is valid for any namespace
|
||||
(branch, tag, ...) instead of the branch.
|
||||
the recipe that refers to the commit that is valid for a tag instead
|
||||
of the branch.
|
||||
|
||||
- *"bareclone":* Tells the fetcher to clone a bare clone into the
|
||||
destination directory without checking out a working tree. Only the
|
||||
raw Git metadata is provided. This parameter implies the "nocheckout"
|
||||
parameter as well.
|
||||
|
||||
- *"branch":* The branch(es) of the Git tree to clone. Unless
|
||||
"nobranch" is set to "1", this is a mandatory parameter. The number of
|
||||
branch parameters must match the number of name parameters.
|
||||
- *"branch":* The branch(es) of the Git tree to clone. If unset, this
|
||||
is assumed to be "master". The number of branch parameters much match
|
||||
the number of name parameters.
|
||||
|
||||
- *"rev":* The revision to use for the checkout. The default is
|
||||
"master".
|
||||
@@ -455,23 +436,15 @@ This fetcher supports the following parameters:
|
||||
parameter implies no branch and only works when the transfer protocol
|
||||
is ``file://``.
|
||||
|
||||
Here are some example URLs::
|
||||
Here are some example URLs: ::
|
||||
|
||||
SRC_URI = "git://github.com/fronteed/icheck.git;protocol=https;branch=${PV};tag=${PV}"
|
||||
SRC_URI = "git://github.com/asciidoc/asciidoc-py;protocol=https;branch=main"
|
||||
SRC_URI = "git://git@gitlab.freedesktop.org/mesa/mesa.git;branch=main;protocol=ssh;..."
|
||||
|
||||
.. note::
|
||||
|
||||
When using ``git`` as the fetcher of the main source code of your software,
|
||||
``S`` should be set accordingly::
|
||||
|
||||
S = "${WORKDIR}/git"
|
||||
SRC_URI = "git://git.oe.handhelds.org/git/vip.git;tag=version-1"
|
||||
SRC_URI = "git://git.oe.handhelds.org/git/vip.git;protocol=http"
|
||||
|
||||
.. note::
|
||||
|
||||
Specifying passwords directly in ``git://`` urls is not supported.
|
||||
There are several reasons: :term:`SRC_URI` is often written out to logs and
|
||||
There are several reasons: ``SRC_URI`` is often written out to logs and
|
||||
other places, and that could easily leak passwords; it is also all too
|
||||
easy to share metadata without removing passwords. SSH keys, ``~/.netrc``
|
||||
and ``~/.ssh/config`` files can be used as alternatives.
|
||||
@@ -511,7 +484,7 @@ repository.
|
||||
|
||||
To use this fetcher, make sure your recipe has proper
|
||||
:term:`SRC_URI`, :term:`SRCREV`, and
|
||||
:term:`PV` settings. Here is an example::
|
||||
:term:`PV` settings. Here is an example: ::
|
||||
|
||||
SRC_URI = "ccrc://cc.example.org/ccrc;vob=/example_vob;module=/example_module"
|
||||
SRCREV = "EXAMPLE_CLEARCASE_TAG"
|
||||
@@ -520,7 +493,7 @@ To use this fetcher, make sure your recipe has proper
|
||||
The fetcher uses the ``rcleartool`` or
|
||||
``cleartool`` remote client, depending on which one is available.
|
||||
|
||||
Following are options for the :term:`SRC_URI` statement:
|
||||
Following are options for the ``SRC_URI`` statement:
|
||||
|
||||
- *vob*: The name, which must include the prepending "/" character,
|
||||
of the ClearCase VOB. This option is required.
|
||||
@@ -533,7 +506,7 @@ Following are options for the :term:`SRC_URI` statement:
|
||||
The module and vob options are combined to create the load rule in the
|
||||
view config spec. As an example, consider the vob and module values from
|
||||
the SRC_URI statement at the start of this section. Combining those values
|
||||
results in the following::
|
||||
results in the following: ::
|
||||
|
||||
load /example_vob/example_module
|
||||
|
||||
@@ -582,10 +555,10 @@ password if you do not wish to keep those values in a recipe itself. If
|
||||
you choose not to use ``P4CONFIG``, or to explicitly set variables that
|
||||
``P4CONFIG`` can contain, you can specify the ``P4PORT`` value, which is
|
||||
the server's URL and port number, and you can specify a username and
|
||||
password directly in your recipe within :term:`SRC_URI`.
|
||||
password directly in your recipe within ``SRC_URI``.
|
||||
|
||||
Here is an example that relies on ``P4CONFIG`` to specify the server URL
|
||||
and port, username, and password, and fetches the Head Revision::
|
||||
and port, username, and password, and fetches the Head Revision: ::
|
||||
|
||||
SRC_URI = "p4://example-depot/main/source/..."
|
||||
SRCREV = "${AUTOREV}"
|
||||
@@ -593,7 +566,7 @@ and port, username, and password, and fetches the Head Revision::
|
||||
S = "${WORKDIR}/p4"
|
||||
|
||||
Here is an example that specifies the server URL and port, username, and
|
||||
password, and fetches a Revision based on a Label::
|
||||
password, and fetches a Revision based on a Label: ::
|
||||
|
||||
P4PORT = "tcp:p4server.example.net:1666"
|
||||
SRC_URI = "p4://user:passwd@example-depot/main/source/..."
|
||||
@@ -619,7 +592,7 @@ paths locally is desirable, the fetcher supports two parameters:
|
||||
paths locally for the specified location, even in combination with the
|
||||
``module`` parameter.
|
||||
|
||||
Here is an example use of the ``module`` parameter::
|
||||
Here is an example use of the ``module`` parameter: ::
|
||||
|
||||
SRC_URI = "p4://user:passwd@example-depot/main;module=source/..."
|
||||
|
||||
@@ -627,7 +600,7 @@ In this case, the content of the top-level directory ``source/`` will be fetched
|
||||
to ``${P4DIR}``, including the directory itself. The top-level directory will
|
||||
be accessible at ``${P4DIR}/source/``.
|
||||
|
||||
Here is an example use of the ``remotepath`` parameter::
|
||||
Here is an example use of the ``remotepath`` parameter: ::
|
||||
|
||||
SRC_URI = "p4://user:passwd@example-depot/main;module=source/...;remotepath=keep"
|
||||
|
||||
@@ -655,7 +628,7 @@ This fetcher supports the following parameters:
|
||||
|
||||
- *"manifest":* Name of the manifest file (default: ``default.xml``).
|
||||
|
||||
Here are some example URLs::
|
||||
Here are some example URLs: ::
|
||||
|
||||
SRC_URI = "repo://REPOROOT;protocol=git;branch=some_branch;manifest=my_manifest.xml"
|
||||
SRC_URI = "repo://REPOROOT;protocol=file;branch=some_branch;manifest=my_manifest.xml"
|
||||
@@ -678,108 +651,16 @@ Such functionality is set by the variable:
|
||||
delegate access to resources, if this variable is set, the Az Fetcher will
|
||||
use it when fetching artifacts from the cloud.
|
||||
|
||||
You can specify the AZ_SAS variable as shown below::
|
||||
You can specify the AZ_SAS variable as shown below: ::
|
||||
|
||||
AZ_SAS = "se=2021-01-01&sp=r&sv=2018-11-09&sr=c&skoid=<skoid>&sig=<signature>"
|
||||
|
||||
Here is an example URL::
|
||||
Here is an example URL: ::
|
||||
|
||||
SRC_URI = "az://<azure-storage-account>.blob.core.windows.net/<foo_container>/<bar_file>"
|
||||
|
||||
It can also be used when setting mirrors definitions using the :term:`PREMIRRORS` variable.
|
||||
|
||||
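A hedged sketch of such a mirror definition, reusing the placeholder names
from the example above, might look like::

   PREMIRRORS ?= "\
       https://.*/.\*   az://<azure-storage-account>.blob.core.windows.net/<foo_container>/"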
.. _crate-fetcher:
|
||||
|
||||
Crate Fetcher (``crate://``)
|
||||
----------------------------
|
||||
|
||||
This submodule fetches code for
|
||||
`Rust language "crates" <https://doc.rust-lang.org/reference/glossary.html?highlight=crate#crate>`__
|
||||
corresponding to Rust libraries and programs to compile. Such crates are typically shared
|
||||
on https://crates.io/ but this fetcher supports other crate registries too.
|
||||
|
||||
The format for the :term:`SRC_URI` setting must be::
|
||||
|
||||
SRC_URI = "crate://REGISTRY/NAME/VERSION"
|
||||
|
||||
Here is an example URL::
|
||||
|
||||
SRC_URI = "crate://crates.io/glob/0.2.11"
|
||||
|
||||
.. _npm-fetcher:
|
||||
|
||||
NPM Fetcher (``npm://``)
|
||||
------------------------
|
||||
|
||||
This submodule fetches source code from an
|
||||
`NPM <https://en.wikipedia.org/wiki/Npm_(software)>`__
|
||||
Javascript package registry.
|
||||
|
||||
The format for the :term:`SRC_URI` setting must be::
|
||||
|
||||
SRC_URI = "npm://some.registry.url;ParameterA=xxx;ParameterB=xxx;..."
|
||||
|
||||
This fetcher supports the following parameters:
|
||||
|
||||
- *"package":* The NPM package name. This is a mandatory parameter.
|
||||
|
||||
- *"version":* The NPM package version. This is a mandatory parameter.
|
||||
|
||||
- *"downloadfilename":* Specifies the filename used when storing the downloaded file.
|
||||
|
||||
- *"destsuffix":* Specifies the directory to use to unpack the package (default: ``npm``).
|
||||
|
||||
Note that the NPM fetcher only fetches the package source itself. The dependencies
|
||||
can be fetched through the `npmsw-fetcher`_.
|
||||
|
||||
Here is an example URL with both fetchers::
|
||||
|
||||
SRC_URI = " \
|
||||
npm://registry.npmjs.org/;package=cute-files;version=${PV} \
|
||||
npmsw://${THISDIR}/${BPN}/npm-shrinkwrap.json \
|
||||
"
|
||||
|
||||
See :yocto_docs:`Creating Node Package Manager (NPM) Packages
|
||||
</dev-manual/common-tasks.html#creating-node-package-manager-npm-packages>`
|
||||
in the Yocto Project manual for details about using
|
||||
:yocto_docs:`devtool <https://docs.yoctoproject.org/ref-manual/devtool-reference.html>`
|
||||
to automatically create a recipe from an NPM URL.
|
||||
|
||||
.. _npmsw-fetcher:
|
||||
|
||||
NPM shrinkwrap Fetcher (``npmsw://``)
|
||||
-------------------------------------
|
||||
|
||||
This submodule fetches source code from an
|
||||
`NPM shrinkwrap <https://docs.npmjs.com/cli/v8/commands/npm-shrinkwrap>`__
|
||||
description file, which lists the dependencies
|
||||
of an NPM package while locking their versions.
|
||||
|
||||
The format for the :term:`SRC_URI` setting must be::
|
||||
|
||||
SRC_URI = "npmsw://some.registry.url;ParameterA=xxx;ParameterB=xxx;..."
|
||||
|
||||
This fetcher supports the following parameters:
|
||||
|
||||
- *"dev":* Set this parameter to ``1`` to install "devDependencies".
|
||||
|
||||
- *"destsuffix":* Specifies the directory to use to unpack the dependencies
|
||||
(``${S}`` by default).
|
||||
|
||||
Note that the shrinkwrap file can also be provided by the recipe for
|
||||
the package which has such dependencies, for example::
|
||||
|
||||
SRC_URI = " \
|
||||
npm://registry.npmjs.org/;package=cute-files;version=${PV} \
|
||||
npmsw://${THISDIR}/${BPN}/npm-shrinkwrap.json \
|
||||
"
|
||||
|
||||
Such a file can automatically be generated using
|
||||
:yocto_docs:`devtool <https://docs.yoctoproject.org/ref-manual/devtool-reference.html>`
|
||||
as described in the :yocto_docs:`Creating Node Package Manager (NPM) Packages
|
||||
</dev-manual/common-tasks.html#creating-node-package-manager-npm-packages>`
|
||||
section of the Yocto Project.
|
||||
|
||||
Other Fetchers
|
||||
--------------
|
||||
|
||||
@@ -789,6 +670,8 @@ Fetch submodules also exist for the following:
|
||||
|
||||
- Mercurial (``hg://``)
|
||||
|
||||
- npm (``npm://``)
|
||||
|
||||
- OSC (``osc://``)
|
||||
|
||||
- Secure FTP (``sftp://``)
|
||||
@@ -803,4 +686,4 @@ submodules. However, you might find the code helpful and readable.
|
||||
Auto Revisions
|
||||
==============
|
||||
|
||||
We need to document ``AUTOREV`` and :term:`SRCREV_FORMAT` here.
|
||||
We need to document ``AUTOREV`` and ``SRCREV_FORMAT`` here.
|
||||
|
||||
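Until that documentation exists, here is a brief, hedged sketch of typical
usage (the repository URLs are hypothetical). ``AUTOREV`` lets ``SRCREV`` float
to the latest upstream revision, and ``SRCREV_FORMAT`` describes how the
per-name revisions are combined when ``SRC_URI`` contains several named SCM
entries::

   SRC_URI = "git://example.org/device.git;branch=master;name=machine \
              git://example.org/metadata.git;branch=master;name=meta"
   SRCREV_machine = "${AUTOREV}"
   SRCREV_meta = "${AUTOREV}"
   SRCREV_FORMAT = "machine_meta"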
@@ -20,7 +20,7 @@ Obtaining BitBake
|
||||
|
||||
See the :ref:`bitbake-user-manual/bitbake-user-manual-hello:obtaining bitbake` section for
|
||||
information on how to obtain BitBake. Once you have the source code on
|
||||
your machine, the BitBake directory appears as follows::
|
||||
your machine, the BitBake directory appears as follows: ::
|
||||
|
||||
$ ls -al
|
||||
total 100
|
||||
@@ -49,7 +49,7 @@ Setting Up the BitBake Environment
|
||||
|
||||
First, you need to be sure that you can run BitBake. Set your working
|
||||
directory to where your local BitBake files are and run the following
|
||||
command::
|
||||
command: ::
|
||||
|
||||
$ ./bin/bitbake --version
|
||||
BitBake Build Tool Core version 1.23.0, bitbake version 1.23.0
|
||||
@@ -61,14 +61,14 @@ The recommended method to run BitBake is from a directory of your
|
||||
choice. To be able to run BitBake from any directory, you need to add
|
||||
the executable binary to your shell's environment
|
||||
``PATH`` variable. First, look at your current ``PATH`` variable by
|
||||
entering the following::
|
||||
entering the following: ::
|
||||
|
||||
$ echo $PATH
|
||||
|
||||
Next, add the directory location
|
||||
for the BitBake binary to the ``PATH``. Here is an example that adds the
|
||||
``/home/scott-lenovo/bitbake/bin`` directory to the front of the
|
||||
``PATH`` variable::
|
||||
``PATH`` variable: ::
|
||||
|
||||
$ export PATH=/home/scott-lenovo/bitbake/bin:$PATH
|
||||
|
||||
@@ -99,7 +99,7 @@ discussion mailing list about the BitBake build tool.
|
||||
|
||||
This example was inspired by and drew heavily from
|
||||
`Mailing List post - The BitBake equivalent of "Hello, World!"
|
||||
<https://www.mail-archive.com/yocto@yoctoproject.org/msg09379.html>`_.
|
||||
<http://www.mail-archive.com/yocto@yoctoproject.org/msg09379.html>`_.
|
||||
|
||||
As stated earlier, the goal of this example is to eventually compile
|
||||
"Hello World". However, it is unknown what BitBake needs and what you
|
||||
@@ -116,7 +116,7 @@ Following is the complete "Hello World" example.
|
||||
|
||||
#. **Create a Project Directory:** First, set up a directory for the
|
||||
"Hello World" project. Here is how you can do so in your home
|
||||
directory::
|
||||
directory: ::
|
||||
|
||||
$ mkdir ~/hello
|
||||
$ cd ~/hello
|
||||
@@ -127,7 +127,7 @@ Following is the complete "Hello World" example.
|
||||
directory is a good way to isolate your project.
|
||||
|
||||
#. **Run BitBake:** At this point, you have nothing but a project
|
||||
directory. Run the ``bitbake`` command and see what it does::
|
||||
directory. Run the ``bitbake`` command and see what it does: ::
|
||||
|
||||
$ bitbake
|
||||
The BBPATH variable is not set and bitbake did not
|
||||
@@ -145,23 +145,23 @@ Following is the complete "Hello World" example.
|
||||
|
||||
The majority of this output is specific to environment variables that
|
||||
are not directly relevant to BitBake. However, the very first
|
||||
message regarding the :term:`BBPATH` variable and the
|
||||
message regarding the ``BBPATH`` variable and the
|
||||
``conf/bblayers.conf`` file is relevant.
|
||||
|
||||
When you run BitBake, it begins looking for metadata files. The
|
||||
:term:`BBPATH` variable is what tells BitBake where
|
||||
to look for those files. :term:`BBPATH` is not set and you need to set
|
||||
it. Without :term:`BBPATH`, BitBake cannot find any configuration files
|
||||
to look for those files. ``BBPATH`` is not set and you need to set
|
||||
it. Without ``BBPATH``, BitBake cannot find any configuration files
|
||||
(``.conf``) or recipe files (``.bb``) at all. BitBake also cannot
|
||||
find the ``bitbake.conf`` file.
|
||||
|
||||
#. **Setting BBPATH:** For this example, you can set :term:`BBPATH` in
|
||||
#. **Setting BBPATH:** For this example, you can set ``BBPATH`` in
|
||||
the same manner that you set ``PATH`` earlier in the appendix. You
|
||||
should realize, though, that it is much more flexible to set the
|
||||
:term:`BBPATH` variable up in a configuration file for each project.
|
||||
``BBPATH`` variable up in a configuration file for each project.
|
||||
|
||||
From your shell, enter the following commands to set and export the
|
||||
:term:`BBPATH` variable::
|
||||
``BBPATH`` variable: ::
|
||||
|
||||
$ BBPATH="projectdirectory"
|
||||
$ export BBPATH
|
||||
@@ -175,8 +175,8 @@ Following is the complete "Hello World" example.
|
||||
("~") character as BitBake does not expand that character as the
|
||||
shell would.
|
||||
|
||||
#. **Run BitBake:** Now that you have :term:`BBPATH` defined, run the
|
||||
``bitbake`` command again::
|
||||
#. **Run BitBake:** Now that you have ``BBPATH`` defined, run the
|
||||
``bitbake`` command again: ::
|
||||
|
||||
$ bitbake
|
||||
ERROR: Traceback (most recent call last):
|
||||
@@ -205,18 +205,18 @@ Following is the complete "Hello World" example.
|
||||
recipe files. For this example, you need to create the file in your
|
||||
project directory and define some key BitBake variables. For more
|
||||
information on the ``bitbake.conf`` file, see
|
||||
https://git.openembedded.org/bitbake/tree/conf/bitbake.conf.
|
||||
http://git.openembedded.org/bitbake/tree/conf/bitbake.conf.
|
||||
|
||||
Use the following commands to create the ``conf`` directory in the
|
||||
project directory::
|
||||
project directory: ::
|
||||
|
||||
$ mkdir conf
|
||||
|
||||
From within the ``conf`` directory,
|
||||
use some editor to create the ``bitbake.conf`` so that it contains
|
||||
the following::
|
||||
the following: ::
|
||||
|
||||
PN = "${@bb.parse.vars_from_file(d.getVar('FILE', False),d)[0] or 'defaultpkgname'}"
|
||||
PN = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE', False),d)[0] or 'defaultpkgname'}"
|
||||
|
||||
TMPDIR = "${TOPDIR}/tmp"
|
||||
CACHE = "${TMPDIR}/cache"
|
||||
@@ -251,7 +251,7 @@ Following is the complete "Hello World" example.
|
||||
glossary.
|
||||
|
||||
#. **Run BitBake:** After making sure that the ``conf/bitbake.conf`` file
|
||||
exists, you can run the ``bitbake`` command again::
|
||||
exists, you can run the ``bitbake`` command again: ::
|
||||
|
||||
$ bitbake
|
||||
ERROR: Traceback (most recent call last):
|
||||
@@ -278,7 +278,7 @@ Following is the complete "Hello World" example.
|
||||
in the ``classes`` directory of the project (i.e. ``hello/classes``
|
||||
in this example).
|
||||
|
||||
Create the ``classes`` directory as follows::
|
||||
Create the ``classes`` directory as follows: ::
|
||||
|
||||
$ cd $HOME/hello
|
||||
$ mkdir classes
|
||||
@@ -291,7 +291,7 @@ Following is the complete "Hello World" example.
|
||||
environments BitBake is supporting.
|
||||
|
||||
#. **Run BitBake:** After making sure that the ``classes/base.bbclass``
|
||||
file exists, you can run the ``bitbake`` command again::
|
||||
file exists, you can run the ``bitbake`` command again: ::
|
||||
|
||||
$ bitbake
|
||||
Nothing to do. Use 'bitbake world' to build everything, or run 'bitbake --help' for usage information.
|
||||
@@ -314,7 +314,7 @@ Following is the complete "Hello World" example.
|
||||
Minimally, you need a recipe file and a layer configuration file in
|
||||
your layer. The configuration file needs to be in the ``conf``
|
||||
directory inside the layer. Use these commands to set up the layer
|
||||
and the ``conf`` directory::
|
||||
and the ``conf`` directory: ::
|
||||
|
||||
$ cd $HOME
|
||||
$ mkdir mylayer
|
||||
@@ -322,12 +322,12 @@ Following is the complete "Hello World" example.
|
||||
$ mkdir conf
|
||||
|
||||
Move to the ``conf`` directory and create a ``layer.conf`` file that has the
|
||||
following::
|
||||
following: ::
|
||||
|
||||
BBPATH .= ":${LAYERDIR}"
|
||||
BBFILES += "${LAYERDIR}/*.bb"
|
||||
BBFILES += "${LAYERDIR}/\*.bb"
|
||||
BBFILE_COLLECTIONS += "mylayer"
|
||||
BBFILE_PATTERN_mylayer := "^${LAYERDIR_RE}/"
|
||||
`BBFILE_PATTERN_mylayer := "^${LAYERDIR_RE}/"
|
||||
|
||||
For information on these variables, click on :term:`BBFILES`,
|
||||
:term:`LAYERDIR`, :term:`BBFILE_COLLECTIONS` or :term:`BBFILE_PATTERN_mylayer <BBFILE_PATTERN>`
|
||||
@@ -335,7 +335,7 @@ Following is the complete "Hello World" example.
|
||||
|
||||
You need to create the recipe file next. Inside your layer at the
|
||||
top-level, use an editor and create a recipe file named
|
||||
``printhello.bb`` that has the following::
|
||||
``printhello.bb`` that has the following: ::
|
||||
|
||||
DESCRIPTION = "Prints Hello World"
|
||||
PN = 'printhello'
|
||||
@@ -356,7 +356,7 @@ Following is the complete "Hello World" example.
|
||||
follow the links to the glossary.
|
||||
|
||||
#. **Run BitBake With a Target:** Now that a BitBake target exists, run
|
||||
the command and provide that target::
|
||||
the command and provide that target: ::
|
||||
|
||||
$ cd $HOME/hello
|
||||
$ bitbake printhello
|
||||
@@ -376,7 +376,7 @@ Following is the complete "Hello World" example.
|
||||
``hello/conf`` for this example).
|
||||
|
||||
Set your working directory to the ``hello/conf`` directory and then
|
||||
create the ``bblayers.conf`` file so that it contains the following::
|
||||
create the ``bblayers.conf`` file so that it contains the following: ::
|
||||
|
||||
BBLAYERS ?= " \
|
||||
/home/<you>/mylayer \
|
||||
@@ -386,7 +386,7 @@ Following is the complete "Hello World" example.
|
||||
|
||||
#. **Run BitBake With a Target:** Now that you have supplied the
|
||||
``bblayers.conf`` file, run the ``bitbake`` command and provide the
|
||||
target::
|
||||
target: ::
|
||||
|
||||
$ bitbake printhello
|
||||
Parsing recipes: 100% |##################################################################################|
|
||||
|
||||
@@ -27,7 +27,7 @@ Linux software stacks using a task-oriented approach.
|
||||
Conceptually, BitBake is similar to GNU Make in some regards but has
|
||||
significant differences:
|
||||
|
||||
- BitBake executes tasks according to the provided metadata that builds up
|
||||
- BitBake executes tasks according to provided metadata that builds up
|
||||
the tasks. Metadata is stored in recipe (``.bb``) and related recipe
|
||||
"append" (``.bbappend``) files, configuration (``.conf``) and
|
||||
underlying include (``.inc``) files, and in class (``.bbclass``)
|
||||
@@ -60,10 +60,11 @@ member Chris Larson split the project into two distinct pieces:
|
||||
- OpenEmbedded, a metadata set utilized by BitBake
|
||||
|
||||
Today, BitBake is the primary basis of the
|
||||
`OpenEmbedded <https://www.openembedded.org/>`__ project, which is being
|
||||
used to build and maintain Linux distributions such as the `Poky
|
||||
Reference Distribution <https://www.yoctoproject.org/software-item/poky/>`__,
|
||||
developed under the umbrella of the `Yocto Project <https://www.yoctoproject.org>`__.
|
||||
`OpenEmbedded <http://www.openembedded.org/>`__ project, which is being
|
||||
used to build and maintain Linux distributions such as the `Angstrom
|
||||
Distribution <http://www.angstrom-distribution.org/>`__, and which is
|
||||
also being used as the build tool for Linux projects such as the `Yocto
|
||||
Project <http://www.yoctoproject.org>`__.
|
||||
|
||||
Prior to BitBake, no other build tool adequately met the needs of an
|
||||
aspiring embedded Linux distribution. All of the build systems used by
|
||||
@@ -247,13 +248,13 @@ underlying, similarly-named recipe files.
|
||||
|
||||
When you name an append file, you can use the "``%``" wildcard character
|
||||
to allow for matching recipe names. For example, suppose you have an
|
||||
append file named as follows::
|
||||
append file named as follows: ::
|
||||
|
||||
busybox_1.21.%.bbappend
|
||||
|
||||
That append file
|
||||
would match any ``busybox_1.21.``\ x\ ``.bb`` version of the recipe. So,
|
||||
the append file would match the following recipe names::
|
||||
the append file would match the following recipe names: ::
|
||||
|
||||
busybox_1.21.1.bb
|
||||
busybox_1.21.2.bb
|
||||
@@ -289,7 +290,7 @@ You can obtain BitBake several different ways:
|
||||
are using. The metadata is generally backwards compatible but not
|
||||
forward compatible.
|
||||
|
||||
Here is an example that clones the BitBake repository::
|
||||
Here is an example that clones the BitBake repository: ::
|
||||
|
||||
$ git clone git://git.openembedded.org/bitbake
|
||||
|
||||
@@ -297,7 +298,7 @@ You can obtain BitBake several different ways:
|
||||
Git repository into a directory called ``bitbake``. Alternatively,
|
||||
you can designate a directory after the ``git clone`` command if you
|
||||
want to call the new directory something other than ``bitbake``. Here
|
||||
is an example that names the directory ``bbdev``::
|
||||
is an example that names the directory ``bbdev``: ::
|
||||
|
||||
$ git clone git://git.openembedded.org/bitbake bbdev
|
||||
|
||||
@@ -316,9 +317,9 @@ You can obtain BitBake several different ways:
|
||||
method for getting BitBake. Cloning the repository makes it easier
|
||||
to update as patches are added to the stable branches.
|
||||
|
||||
The following example downloads a snapshot of BitBake version 1.17.0::
|
||||
The following example downloads a snapshot of BitBake version 1.17.0: ::
|
||||
|
||||
$ wget https://git.openembedded.org/bitbake/snapshot/bitbake-1.17.0.tar.gz
|
||||
$ wget http://git.openembedded.org/bitbake/snapshot/bitbake-1.17.0.tar.gz
|
||||
$ tar zxpvf bitbake-1.17.0.tar.gz
|
||||
|
||||
After extraction of the tarball using
|
||||
@@ -346,7 +347,7 @@ execution examples.
|
||||
Usage and syntax
|
||||
----------------
|
||||
|
||||
Following is the usage and syntax for BitBake::
|
||||
Following is the usage and syntax for BitBake: ::
|
||||
|
||||
$ bitbake -h
|
||||
Usage: bitbake [options] [recipename/target recipe:do_task ...]
|
||||
@@ -416,8 +417,8 @@ Following is the usage and syntax for BitBake::
|
||||
-l DEBUG_DOMAINS, --log-domains=DEBUG_DOMAINS
|
||||
Show debug logging for the specified logging domains
|
||||
-P, --profile Profile the command and save reports.
|
||||
-u UI, --ui=UI The user interface to use (knotty, ncurses, taskexp or
|
||||
teamcity - default knotty).
|
||||
-u UI, --ui=UI The user interface to use (knotty, ncurses or taskexp
|
||||
- default knotty).
|
||||
--token=XMLRPCTOKEN Specify the connection token to be used when
|
||||
connecting to a remote server.
|
||||
--revisions-changed Set the exit code depending on whether upstream
|
||||
@@ -432,9 +433,6 @@ Following is the usage and syntax for BitBake::
|
||||
Environment variable BB_SERVER_TIMEOUT.
|
||||
--no-setscene Do not run any setscene tasks. sstate will be ignored
|
||||
and everything needed, built.
|
||||
--skip-setscene Skip setscene tasks if they would be executed. Tasks
|
||||
previously restored from sstate will be kept, unlike
|
||||
--no-setscene
|
||||
--setscene-only Only run setscene tasks, don't run any real tasks.
|
||||
--remote-server=REMOTE_SERVER
|
||||
Connect to the specified server.
|
||||
@@ -471,11 +469,11 @@ default task, which is "build". BitBake obeys inter-task dependencies
|
||||
when doing so.
|
||||
|
||||
The following command runs the build task, which is the default task, on
|
||||
the ``foo_1.0.bb`` recipe file::
|
||||
the ``foo_1.0.bb`` recipe file: ::
|
||||
|
||||
$ bitbake -b foo_1.0.bb
|
||||
|
||||
The following command runs the clean task on the ``foo.bb`` recipe file::
|
||||
The following command runs the clean task on the ``foo.bb`` recipe file: ::
|
||||
|
||||
$ bitbake -b foo.bb -c clean
|
||||
|
||||
@@ -499,13 +497,13 @@ functionality, or when there are multiple versions of a recipe.
|
||||
The ``bitbake`` command, when not using "--buildfile" or "-b" only
|
||||
accepts a "PROVIDES". You cannot provide anything else. By default, a
|
||||
recipe file generally "PROVIDES" its "packagename" as shown in the
|
||||
following example::
|
||||
following example: ::
|
||||
|
||||
$ bitbake foo
|
||||
|
||||
This next example "PROVIDES" the
|
||||
package name and also uses the "-c" option to tell BitBake to just
|
||||
execute the ``do_clean`` task::
|
||||
execute the ``do_clean`` task: ::
|
||||
|
||||
$ bitbake -c clean foo
|
||||
|
||||
@@ -516,7 +514,7 @@ The BitBake command line supports specifying different tasks for
|
||||
individual targets when you specify multiple targets. For example,
|
||||
suppose you had two targets (or recipes) ``myfirstrecipe`` and
|
||||
``mysecondrecipe`` and you needed BitBake to run ``taskA`` for the first
|
||||
recipe and ``taskB`` for the second recipe::
|
||||
recipe and ``taskB`` for the second recipe: ::
|
||||
|
||||
$ bitbake myfirstrecipe:do_taskA mysecondrecipe:do_taskB
|
||||
|
||||
@@ -536,13 +534,13 @@ current working directory:
|
||||
- ``pn-buildlist``: Shows a simple list of targets that are to be
|
||||
built.
|
||||
|
||||
To stop depending on common depends, use the ``-I`` depend option and
|
||||
To stop depending on common depends, use the "-I" depend option and
|
||||
BitBake omits them from the graph. Leaving this information out can
|
||||
produce more readable graphs. This way, you can remove from the graph
|
||||
:term:`DEPENDS` from inherited classes such as ``base.bbclass``.
|
||||
``DEPENDS`` from inherited classes such as ``base.bbclass``.
|
||||
|
||||
Here are two examples that create dependency graphs. The second example
|
||||
omits depends common in OpenEmbedded from the graph::
|
||||
omits depends common in OpenEmbedded from the graph: ::
|
||||
|
||||
$ bitbake -g foo
|
||||
|
||||
@@ -566,7 +564,7 @@ for two separate targets:
|
||||
.. image:: figures/bb_multiconfig_files.png
|
||||
:align: center
|
||||
|
||||
The reason for this required file hierarchy is because the :term:`BBPATH`
|
||||
The reason for this required file hierarchy is because the ``BBPATH``
|
||||
variable is not constructed until the layers are parsed. Consequently,
|
||||
using the configuration file as a pre-configuration file is not possible
|
||||
unless it is located in the current working directory.
|
||||
@@ -584,17 +582,17 @@ accomplished by setting the
|
||||
configuration files for ``target1`` and ``target2`` defined in the build
|
||||
directory. The following statement in the ``local.conf`` file both
|
||||
enables BitBake to perform multiple configuration builds and specifies
|
||||
the two extra multiconfigs::
|
||||
the two extra multiconfigs: ::
|
||||
|
||||
BBMULTICONFIG = "target1 target2"
|
||||
|
||||
Once the target configuration files are in place and BitBake has been
|
||||
enabled to perform multiple configuration builds, use the following
|
||||
command form to start the builds::
|
||||
command form to start the builds: ::
|
||||
|
||||
$ bitbake [mc:multiconfigname:]target [[[mc:multiconfigname:]target] ... ]
|
||||
|
||||
Here is an example for two extra multiconfigs: ``target1`` and ``target2``::
|
||||
Here is an example for two extra multiconfigs: ``target1`` and ``target2``: ::
|
||||
|
||||
$ bitbake mc::target mc:target1:target mc:target2:target
|
||||
|
||||
@@ -615,12 +613,12 @@ multiconfig.
|
||||
|
||||
To enable dependencies in a multiple configuration build, you must
|
||||
declare the dependencies in the recipe using the following statement
|
||||
form::
|
||||
form: ::
|
||||
|
||||
task_or_package[mcdepends] = "mc:from_multiconfig:to_multiconfig:recipe_name:task_on_which_to_depend"
|
||||
|
||||
To better show how to use this statement, consider an example with two
|
||||
multiconfigs: ``target1`` and ``target2``::
|
||||
multiconfigs: ``target1`` and ``target2``: ::
|
||||
|
||||
image_task[mcdepends] = "mc:target1:target2:image2:rootfs_task"
|
||||
|
||||
@@ -631,7 +629,7 @@ completion of the rootfs_task used to build out image2, which is
|
||||
associated with the "target2" multiconfig.
|
||||
|
||||
Once you set up this dependency, you can build the "target1" multiconfig
|
||||
using a BitBake command as follows::
|
||||
using a BitBake command as follows: ::
|
||||
|
||||
$ bitbake mc:target1:image1
|
||||
|
||||
@@ -641,7 +639,7 @@ the ``rootfs_task`` for the "target2" multiconfig build.
|
||||
|
||||
Having a recipe depend on the root filesystem of another build might not
|
||||
seem that useful. Consider this change to the statement in the image1
|
||||
recipe::
|
||||
recipe: ::
|
||||
|
||||
image_task[mcdepends] = "mc:target1:target2:image2:image_task"
|
||||
|
||||
|
||||
File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -1,74 +1,32 @@
|
||||
.. SPDX-License-Identifier: CC-BY-2.5
|
||||
|
||||
===========================
|
||||
Supported Release Manuals
|
||||
===========================
|
||||
|
||||
******************************
|
||||
Release Series 3.4 (honister)
|
||||
******************************
|
||||
|
||||
- :yocto_docs:`3.4 BitBake User Manual </3.4/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
- :yocto_docs:`3.4.1 BitBake User Manual </3.4.1/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
- :yocto_docs:`3.4.2 BitBake User Manual </3.4.2/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
|
||||
******************************
|
||||
Release Series 3.3 (hardknott)
|
||||
******************************
|
||||
|
||||
- :yocto_docs:`3.3 BitBake User Manual </3.3/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
- :yocto_docs:`3.3.1 BitBake User Manual </3.3.1/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
- :yocto_docs:`3.3.2 BitBake User Manual </3.3.2/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
- :yocto_docs:`3.3.3 BitBake User Manual </3.3.3/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
- :yocto_docs:`3.3.4 BitBake User Manual </3.3.4/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
- :yocto_docs:`3.3.5 BitBake User Manual </3.3.5/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
=========================
|
||||
Current Release Manuals
|
||||
=========================
|
||||
|
||||
****************************
|
||||
Release Series 3.1 (dunfell)
|
||||
3.1 'dunfell' Release Series
|
||||
****************************
|
||||
|
||||
- :yocto_docs:`3.1 BitBake User Manual </3.1/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
- :yocto_docs:`3.1.1 BitBake User Manual </3.1.1/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
- :yocto_docs:`3.1.2 BitBake User Manual </3.1.2/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
- :yocto_docs:`3.1.3 BitBake User Manual </3.1.3/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
- :yocto_docs:`3.1.4 BitBake User Manual </3.1.4/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
- :yocto_docs:`3.1.5 BitBake User Manual </3.1.5/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
- :yocto_docs:`3.1.6 BitBake User Manual </3.1.6/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
- :yocto_docs:`3.1.7 BitBake User Manual </3.1.7/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
- :yocto_docs:`3.1.8 BitBake User Manual </3.1.8/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
- :yocto_docs:`3.1.9 BitBake User Manual </3.1.9/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
- :yocto_docs:`3.1.10 BitBake User Manual </3.1.10/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
- :yocto_docs:`3.1.11 BitBake User Manual </3.1.11/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
- :yocto_docs:`3.1.12 BitBake User Manual </3.1.12/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
- :yocto_docs:`3.1.13 BitBake User Manual </3.1.13/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
- :yocto_docs:`3.1.14 BitBake User Manual </3.1.14/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
|
||||
==========================
|
||||
Outdated Release Manuals
|
||||
Previous Release Manuals
|
||||
==========================
|
||||
|
||||
*******************************
|
||||
Release Series 3.2 (gatesgarth)
|
||||
*******************************
|
||||
|
||||
- :yocto_docs:`3.2 BitBake User Manual </3.2/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
- :yocto_docs:`3.2.1 BitBake User Manual </3.2.1/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
- :yocto_docs:`3.2.2 BitBake User Manual </3.2.2/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
- :yocto_docs:`3.2.3 BitBake User Manual </3.2.3/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
- :yocto_docs:`3.2.4 BitBake User Manual </3.2.4/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
|
||||
*************************
|
||||
Release Series 3.0 (zeus)
|
||||
3.0 'zeus' Release Series
|
||||
*************************
|
||||
|
||||
- :yocto_docs:`3.0 BitBake User Manual </3.0/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
- :yocto_docs:`3.0.1 BitBake User Manual </3.0.1/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
- :yocto_docs:`3.0.2 BitBake User Manual </3.0.2/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
- :yocto_docs:`3.0.3 BitBake User Manual </3.0.3/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
- :yocto_docs:`3.0.4 BitBake User Manual </3.0.4/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
|
||||
****************************
|
||||
Release Series 2.7 (warrior)
|
||||
2.7 'warrior' Release Series
|
||||
****************************
|
||||
|
||||
- :yocto_docs:`2.7 BitBake User Manual </2.7/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
@@ -78,7 +36,7 @@ Release Series 2.7 (warrior)
|
||||
- :yocto_docs:`2.7.4 BitBake User Manual </2.7.4/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
|
||||
*************************
|
||||
Release Series 2.6 (thud)
|
||||
2.6 'thud' Release Series
|
||||
*************************
|
||||
|
||||
- :yocto_docs:`2.6 BitBake User Manual </2.6/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
@@ -88,16 +46,16 @@ Release Series 2.6 (thud)
|
||||
- :yocto_docs:`2.6.4 BitBake User Manual </2.6.4/bitbake-user-manual/bitbake-user-manual.html>`
|
||||
|
||||
*************************
|
||||
Release Series 2.5 (sumo)
|
||||
2.5 'sumo' Release Series
|
||||
*************************
|
||||
|
||||
- :yocto_docs:`2.5 Documentation </2.5>`
- :yocto_docs:`2.5.1 Documentation </2.5.1>`
- :yocto_docs:`2.5.2 Documentation </2.5.2>`
- :yocto_docs:`2.5.3 Documentation </2.5.3>`
- :yocto_docs:`2.5 BitBake User Manual </2.5/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.5.1 BitBake User Manual </2.5.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.5.2 BitBake User Manual </2.5.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`2.5.3 BitBake User Manual </2.5.3/bitbake-user-manual/bitbake-user-manual.html>`

**************************
Release Series 2.4 (rocko)
2.4 'rocko' Release Series
**************************

- :yocto_docs:`2.4 BitBake User Manual </2.4/bitbake-user-manual/bitbake-user-manual.html>`
@@ -107,7 +65,7 @@ Release Series 2.4 (rocko)
- :yocto_docs:`2.4.4 BitBake User Manual </2.4.4/bitbake-user-manual/bitbake-user-manual.html>`

*************************
Release Series 2.3 (pyro)
2.3 'pyro' Release Series
*************************

- :yocto_docs:`2.3 BitBake User Manual </2.3/bitbake-user-manual/bitbake-user-manual.html>`
@@ -117,7 +75,7 @@ Release Series 2.3 (pyro)
- :yocto_docs:`2.3.4 BitBake User Manual </2.3.4/bitbake-user-manual/bitbake-user-manual.html>`

**************************
Release Series 2.2 (morty)
2.2 'morty' Release Series
**************************

- :yocto_docs:`2.2 BitBake User Manual </2.2/bitbake-user-manual/bitbake-user-manual.html>`
@@ -126,7 +84,7 @@ Release Series 2.2 (morty)
- :yocto_docs:`2.2.3 BitBake User Manual </2.2.3/bitbake-user-manual/bitbake-user-manual.html>`

****************************
Release Series 2.1 (krogoth)
2.1 'krogoth' Release Series
****************************

- :yocto_docs:`2.1 BitBake User Manual </2.1/bitbake-user-manual/bitbake-user-manual.html>`
@@ -135,7 +93,7 @@ Release Series 2.1 (krogoth)
- :yocto_docs:`2.1.3 BitBake User Manual </2.1.3/bitbake-user-manual/bitbake-user-manual.html>`

***************************
Release Series 2.0 (jethro)
2.0 'jethro' Release Series
***************************

- :yocto_docs:`1.9 BitBake User Manual </1.9/bitbake-user-manual/bitbake-user-manual.html>`
@@ -145,7 +103,7 @@ Release Series 2.0 (jethro)
- :yocto_docs:`2.0.3 BitBake User Manual </2.0.3/bitbake-user-manual/bitbake-user-manual.html>`

*************************
Release Series 1.8 (fido)
1.8 'fido' Release Series
*************************

- :yocto_docs:`1.8 BitBake User Manual </1.8/bitbake-user-manual/bitbake-user-manual.html>`
@@ -153,7 +111,7 @@ Release Series 1.8 (fido)
- :yocto_docs:`1.8.2 BitBake User Manual </1.8.2/bitbake-user-manual/bitbake-user-manual.html>`

**************************
Release Series 1.7 (dizzy)
1.7 'dizzy' Release Series
**************************

- :yocto_docs:`1.7 BitBake User Manual </1.7/bitbake-user-manual/bitbake-user-manual.html>`
@@ -162,7 +120,7 @@ Release Series 1.7 (dizzy)
- :yocto_docs:`1.7.3 BitBake User Manual </1.7.3/bitbake-user-manual/bitbake-user-manual.html>`

**************************
Release Series 1.6 (daisy)
1.6 'daisy' Release Series
**************************

- :yocto_docs:`1.6 BitBake User Manual </1.6/bitbake-user-manual/bitbake-user-manual.html>`

@@ -3,8 +3,6 @@
#
# Copyright (C) 2006 Tim Ansell
#
# SPDX-License-Identifier: GPL-2.0-only
#
# Please Note:
# Be careful when using mutable types (ie Dict and Lists) - operations involving these are SLOW.
# Assign a file to __warn__ to get warnings about slow operations.

@@ -9,19 +9,12 @@
# SPDX-License-Identifier: GPL-2.0-only
#

__version__ = "2.0.0"
__version__ = "1.50.0"

import sys
if sys.version_info < (3, 6, 0):
raise RuntimeError("Sorry, python 3.6.0 or later is required for this version of bitbake")
if sys.version_info < (3, 5, 0):
raise RuntimeError("Sorry, python 3.5.0 or later is required for this version of bitbake")

if sys.version_info < (3, 10, 0):
# With python 3.8 and 3.9, we see errors of "libgcc_s.so.1 must be installed for pthread_cancel to work"
# https://stackoverflow.com/questions/64797838/libgcc-s-so-1-must-be-installed-for-pthread-cancel-to-work
# https://bugs.ams1.psf.io/issue42888
# so ensure libgcc_s is loaded early on
import ctypes
libgcc_s = ctypes.CDLL('libgcc_s.so.1')

class BBHandledException(Exception):
"""
@@ -78,13 +71,6 @@ class BBLoggerMixin(object):
def verbnote(self, msg, *args, **kwargs):
return self.log(logging.INFO + 2, msg, *args, **kwargs)

def warnonce(self, msg, *args, **kwargs):
return self.log(logging.WARNING - 1, msg, *args, **kwargs)

def erroronce(self, msg, *args, **kwargs):
return self.log(logging.ERROR - 1, msg, *args, **kwargs)


Logger = logging.getLoggerClass()
class BBLogger(Logger, BBLoggerMixin):
def __init__(self, name, *args, **kwargs):
@@ -171,15 +157,9 @@ def verbnote(*args):
def warn(*args):
mainlogger.warning(''.join(args))

def warnonce(*args):
mainlogger.warnonce(''.join(args))

def error(*args, **kwargs):
mainlogger.error(''.join(args), extra=kwargs)

def erroronce(*args):
mainlogger.erroronce(''.join(args))

def fatal(*args, **kwargs):
mainlogger.critical(''.join(args), extra=kwargs)
raise BBHandledException()

@@ -1,33 +0,0 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#

import itertools
import json

# The Python async server defaults to a 64K receive buffer, so we hardcode our
# maximum chunk size. It would be better if the client and server reported to
# each other what the maximum chunk sizes were, but that will slow down the
# connection setup with a round trip delay so I'd rather not do that unless it
# is necessary
DEFAULT_MAX_CHUNK = 32 * 1024


def chunkify(msg, max_chunk):
if len(msg) < max_chunk - 1:
yield ''.join((msg, "\n"))
else:
yield ''.join((json.dumps({
'chunk-stream': None
}), "\n"))

args = [iter(msg)] * (max_chunk - 1)
for m in map(''.join, itertools.zip_longest(*args, fillvalue='')):
yield ''.join(itertools.chain(m, "\n"))
yield "\n"


from .client import AsyncClient, Client
from .serv import AsyncServer, AsyncServerConnection, ClientError, ServerError
@@ -1,191 +0,0 @@
|
||||
#
|
||||
# Copyright BitBake Contributors
|
||||
#
|
||||
# SPDX-License-Identifier: GPL-2.0-only
|
||||
#
|
||||
|
||||
import abc
|
||||
import asyncio
|
||||
import json
|
||||
import os
|
||||
import socket
|
||||
import sys
|
||||
from . import chunkify, DEFAULT_MAX_CHUNK
|
||||
|
||||
|
||||
class AsyncClient(object):
|
||||
def __init__(self, proto_name, proto_version, logger, timeout=30):
|
||||
self.reader = None
|
||||
self.writer = None
|
||||
self.max_chunk = DEFAULT_MAX_CHUNK
|
||||
self.proto_name = proto_name
|
||||
self.proto_version = proto_version
|
||||
self.logger = logger
|
||||
self.timeout = timeout
|
||||
|
||||
async def connect_tcp(self, address, port):
|
||||
async def connect_sock():
|
||||
return await asyncio.open_connection(address, port)
|
||||
|
||||
self._connect_sock = connect_sock
|
||||
|
||||
async def connect_unix(self, path):
|
||||
async def connect_sock():
|
||||
# AF_UNIX has path length issues so chdir here to workaround
|
||||
cwd = os.getcwd()
|
||||
try:
|
||||
os.chdir(os.path.dirname(path))
|
||||
# The socket must be opened synchronously so that CWD doesn't get
|
||||
# changed out from underneath us so we pass as a sock into asyncio
|
||||
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM, 0)
|
||||
sock.connect(os.path.basename(path))
|
||||
finally:
|
||||
os.chdir(cwd)
|
||||
return await asyncio.open_unix_connection(sock=sock)
|
||||
|
||||
self._connect_sock = connect_sock
|
||||
|
||||
async def setup_connection(self):
|
||||
s = '%s %s\n\n' % (self.proto_name, self.proto_version)
|
||||
self.writer.write(s.encode("utf-8"))
|
||||
await self.writer.drain()
|
||||
|
||||
async def connect(self):
|
||||
if self.reader is None or self.writer is None:
|
||||
(self.reader, self.writer) = await self._connect_sock()
|
||||
await self.setup_connection()
|
||||
|
||||
async def close(self):
|
||||
self.reader = None
|
||||
|
||||
if self.writer is not None:
|
||||
self.writer.close()
|
||||
self.writer = None
|
||||
|
||||
async def _send_wrapper(self, proc):
|
||||
count = 0
|
||||
while True:
|
||||
try:
|
||||
await self.connect()
|
||||
return await proc()
|
||||
except (
|
||||
OSError,
|
||||
ConnectionError,
|
||||
json.JSONDecodeError,
|
||||
UnicodeDecodeError,
|
||||
) as e:
|
||||
self.logger.warning("Error talking to server: %s" % e)
|
||||
if count >= 3:
|
||||
if not isinstance(e, ConnectionError):
|
||||
raise ConnectionError(str(e))
|
||||
raise e
|
||||
await self.close()
|
||||
count += 1
|
||||
|
||||
async def send_message(self, msg):
|
||||
async def get_line():
|
||||
try:
|
||||
line = await asyncio.wait_for(self.reader.readline(), self.timeout)
|
||||
except asyncio.TimeoutError:
|
||||
raise ConnectionError("Timed out waiting for server")
|
||||
|
||||
if not line:
|
||||
raise ConnectionError("Connection closed")
|
||||
|
||||
line = line.decode("utf-8")
|
||||
|
||||
if not line.endswith("\n"):
|
||||
raise ConnectionError("Bad message %r" % (line))
|
||||
|
||||
return line
|
||||
|
||||
async def proc():
|
||||
for c in chunkify(json.dumps(msg), self.max_chunk):
|
||||
self.writer.write(c.encode("utf-8"))
|
||||
await self.writer.drain()
|
||||
|
||||
l = await get_line()
|
||||
|
||||
m = json.loads(l)
|
||||
if m and "chunk-stream" in m:
|
||||
lines = []
|
||||
while True:
|
||||
l = (await get_line()).rstrip("\n")
|
||||
if not l:
|
||||
break
|
||||
lines.append(l)
|
||||
|
||||
m = json.loads("".join(lines))
|
||||
|
||||
return m
|
||||
|
||||
return await self._send_wrapper(proc)
|
||||
|
||||
async def ping(self):
|
||||
return await self.send_message(
|
||||
{'ping': {}}
|
||||
)
|
||||
|
||||
async def __aenter__(self):
|
||||
return self
|
||||
|
||||
async def __aexit__(self, exc_type, exc_value, traceback):
|
||||
await self.close()
|
||||
|
||||
|
||||
class Client(object):
|
||||
def __init__(self):
|
||||
self.client = self._get_async_client()
|
||||
self.loop = asyncio.new_event_loop()
|
||||
|
||||
# Override any pre-existing loop.
|
||||
# Without this, the PR server export selftest triggers a hang
|
||||
# when running with Python 3.7. The drawback is that there is
|
||||
# potential for issues if the PR and hash equiv (or some new)
|
||||
# clients need to both be instantiated in the same process.
|
||||
# This should be revisited if/when Python 3.9 becomes the
|
||||
# minimum required version for BitBake, as it seems not
|
||||
# required (but harmless) with it.
|
||||
asyncio.set_event_loop(self.loop)
|
||||
|
||||
self._add_methods('connect_tcp', 'ping')
|
||||
|
||||
@abc.abstractmethod
|
||||
def _get_async_client(self):
|
||||
pass
|
||||
|
||||
def _get_downcall_wrapper(self, downcall):
|
||||
def wrapper(*args, **kwargs):
|
||||
return self.loop.run_until_complete(downcall(*args, **kwargs))
|
||||
|
||||
return wrapper
|
||||
|
||||
def _add_methods(self, *methods):
|
||||
for m in methods:
|
||||
downcall = getattr(self.client, m)
|
||||
setattr(self, m, self._get_downcall_wrapper(downcall))
|
||||
|
||||
def connect_unix(self, path):
|
||||
self.loop.run_until_complete(self.client.connect_unix(path))
|
||||
self.loop.run_until_complete(self.client.connect())
|
||||
|
||||
@property
|
||||
def max_chunk(self):
|
||||
return self.client.max_chunk
|
||||
|
||||
@max_chunk.setter
|
||||
def max_chunk(self, value):
|
||||
self.client.max_chunk = value
|
||||
|
||||
def close(self):
|
||||
self.loop.run_until_complete(self.client.close())
|
||||
if sys.version_info >= (3, 6):
|
||||
self.loop.run_until_complete(self.loop.shutdown_asyncgens())
|
||||
self.loop.close()
|
||||
|
||||
def __enter__(self):
|
||||
return self
|
||||
|
||||
def __exit__(self, exc_type, exc_value, traceback):
|
||||
self.close()
|
||||
return False
|
||||
@@ -1,288 +0,0 @@
|
||||
#
|
||||
# Copyright BitBake Contributors
|
||||
#
|
||||
# SPDX-License-Identifier: GPL-2.0-only
|
||||
#
|
||||
|
||||
import abc
|
||||
import asyncio
|
||||
import json
|
||||
import os
|
||||
import signal
|
||||
import socket
|
||||
import sys
|
||||
import multiprocessing
|
||||
from . import chunkify, DEFAULT_MAX_CHUNK
|
||||
|
||||
|
||||
class ClientError(Exception):
|
||||
pass
|
||||
|
||||
|
||||
class ServerError(Exception):
|
||||
pass
|
||||
|
||||
|
||||
class AsyncServerConnection(object):
|
||||
def __init__(self, reader, writer, proto_name, logger):
|
||||
self.reader = reader
|
||||
self.writer = writer
|
||||
self.proto_name = proto_name
|
||||
self.max_chunk = DEFAULT_MAX_CHUNK
|
||||
self.handlers = {
|
||||
'chunk-stream': self.handle_chunk,
|
||||
'ping': self.handle_ping,
|
||||
}
|
||||
self.logger = logger
|
||||
|
||||
async def process_requests(self):
|
||||
try:
|
||||
self.addr = self.writer.get_extra_info('peername')
|
||||
self.logger.debug('Client %r connected' % (self.addr,))
|
||||
|
||||
# Read protocol and version
|
||||
client_protocol = await self.reader.readline()
|
||||
if client_protocol is None:
|
||||
return
|
||||
|
||||
(client_proto_name, client_proto_version) = client_protocol.decode('utf-8').rstrip().split()
|
||||
if client_proto_name != self.proto_name:
|
||||
self.logger.debug('Rejecting invalid protocol %s' % (self.proto_name))
|
||||
return
|
||||
|
||||
self.proto_version = tuple(int(v) for v in client_proto_version.split('.'))
|
||||
if not self.validate_proto_version():
|
||||
self.logger.debug('Rejecting invalid protocol version %s' % (client_proto_version))
|
||||
return
|
||||
|
||||
# Read headers. Currently, no headers are implemented, so look for
|
||||
# an empty line to signal the end of the headers
|
||||
while True:
|
||||
line = await self.reader.readline()
|
||||
if line is None:
|
||||
return
|
||||
|
||||
line = line.decode('utf-8').rstrip()
|
||||
if not line:
|
||||
break
|
||||
|
||||
# Handle messages
|
||||
while True:
|
||||
d = await self.read_message()
|
||||
if d is None:
|
||||
break
|
||||
await self.dispatch_message(d)
|
||||
await self.writer.drain()
|
||||
except ClientError as e:
|
||||
self.logger.error(str(e))
|
||||
finally:
|
||||
self.writer.close()
|
||||
|
||||
async def dispatch_message(self, msg):
|
||||
for k in self.handlers.keys():
|
||||
if k in msg:
|
||||
self.logger.debug('Handling %s' % k)
|
||||
await self.handlers[k](msg[k])
|
||||
return
|
||||
|
||||
raise ClientError("Unrecognized command %r" % msg)
|
||||
|
||||
def write_message(self, msg):
|
||||
for c in chunkify(json.dumps(msg), self.max_chunk):
|
||||
self.writer.write(c.encode('utf-8'))
|
||||
|
||||
async def read_message(self):
|
||||
l = await self.reader.readline()
|
||||
if not l:
|
||||
return None
|
||||
|
||||
try:
|
||||
message = l.decode('utf-8')
|
||||
|
||||
if not message.endswith('\n'):
|
||||
return None
|
||||
|
||||
return json.loads(message)
|
||||
except (json.JSONDecodeError, UnicodeDecodeError) as e:
|
||||
self.logger.error('Bad message from client: %r' % message)
|
||||
raise e
|
||||
|
||||
async def handle_chunk(self, request):
|
||||
lines = []
|
||||
try:
|
||||
while True:
|
||||
l = await self.reader.readline()
|
||||
l = l.rstrip(b"\n").decode("utf-8")
|
||||
if not l:
|
||||
break
|
||||
lines.append(l)
|
||||
|
||||
msg = json.loads(''.join(lines))
|
||||
except (json.JSONDecodeError, UnicodeDecodeError) as e:
|
||||
self.logger.error('Bad message from client: %r' % lines)
|
||||
raise e
|
||||
|
||||
if 'chunk-stream' in msg:
|
||||
raise ClientError("Nested chunks are not allowed")
|
||||
|
||||
await self.dispatch_message(msg)
|
||||
|
||||
async def handle_ping(self, request):
|
||||
response = {'alive': True}
|
||||
self.write_message(response)
|
||||
|
||||
|
||||
class AsyncServer(object):
|
||||
def __init__(self, logger):
|
||||
self._cleanup_socket = None
|
||||
self.logger = logger
|
||||
self.start = None
|
||||
self.address = None
|
||||
self.loop = None
|
||||
|
||||
def start_tcp_server(self, host, port):
|
||||
def start_tcp():
|
||||
self.server = self.loop.run_until_complete(
|
||||
asyncio.start_server(self.handle_client, host, port)
|
||||
)
|
||||
|
||||
for s in self.server.sockets:
|
||||
self.logger.debug('Listening on %r' % (s.getsockname(),))
|
||||
# Newer python does this automatically. Do it manually here for
|
||||
# maximum compatibility
|
||||
s.setsockopt(socket.SOL_TCP, socket.TCP_NODELAY, 1)
|
||||
s.setsockopt(socket.SOL_TCP, socket.TCP_QUICKACK, 1)
|
||||
|
||||
name = self.server.sockets[0].getsockname()
|
||||
if self.server.sockets[0].family == socket.AF_INET6:
|
||||
self.address = "[%s]:%d" % (name[0], name[1])
|
||||
else:
|
||||
self.address = "%s:%d" % (name[0], name[1])
|
||||
|
||||
self.start = start_tcp
|
||||
|
||||
def start_unix_server(self, path):
|
||||
def cleanup():
|
||||
os.unlink(path)
|
||||
|
||||
def start_unix():
|
||||
cwd = os.getcwd()
|
||||
try:
|
||||
# Work around path length limits in AF_UNIX
|
||||
os.chdir(os.path.dirname(path))
|
||||
self.server = self.loop.run_until_complete(
|
||||
asyncio.start_unix_server(self.handle_client, os.path.basename(path))
|
||||
)
|
||||
finally:
|
||||
os.chdir(cwd)
|
||||
|
||||
self.logger.debug('Listening on %r' % path)
|
||||
|
||||
self._cleanup_socket = cleanup
|
||||
self.address = "unix://%s" % os.path.abspath(path)
|
||||
|
||||
self.start = start_unix
|
||||
|
||||
@abc.abstractmethod
|
||||
def accept_client(self, reader, writer):
|
||||
pass
|
||||
|
||||
async def handle_client(self, reader, writer):
|
||||
# writer.transport.set_write_buffer_limits(0)
|
||||
try:
|
||||
client = self.accept_client(reader, writer)
|
||||
await client.process_requests()
|
||||
except Exception as e:
|
||||
import traceback
|
||||
self.logger.error('Error from client: %s' % str(e), exc_info=True)
|
||||
traceback.print_exc()
|
||||
writer.close()
|
||||
self.logger.debug('Client disconnected')
|
||||
|
||||
def run_loop_forever(self):
|
||||
try:
|
||||
self.loop.run_forever()
|
||||
except KeyboardInterrupt:
|
||||
pass
|
||||
|
||||
def signal_handler(self):
|
||||
self.logger.debug("Got exit signal")
|
||||
self.loop.stop()
|
||||
|
||||
def _serve_forever(self):
|
||||
try:
|
||||
self.loop.add_signal_handler(signal.SIGTERM, self.signal_handler)
|
||||
signal.pthread_sigmask(signal.SIG_UNBLOCK, [signal.SIGTERM])
|
||||
|
||||
self.run_loop_forever()
|
||||
self.server.close()
|
||||
|
||||
self.loop.run_until_complete(self.server.wait_closed())
|
||||
self.logger.debug('Server shutting down')
|
||||
finally:
|
||||
if self._cleanup_socket is not None:
|
||||
self._cleanup_socket()
|
||||
|
||||
def serve_forever(self):
|
||||
"""
|
||||
Serve requests in the current process
|
||||
"""
|
||||
# Create loop and override any loop that may have existed in
|
||||
# a parent process. It is possible that the usecases of
|
||||
# serve_forever might be constrained enough to allow using
|
||||
# get_event_loop here, but better safe than sorry for now.
|
||||
self.loop = asyncio.new_event_loop()
|
||||
asyncio.set_event_loop(self.loop)
|
||||
self.start()
|
||||
self._serve_forever()
|
||||
|
||||
def serve_as_process(self, *, prefunc=None, args=()):
|
||||
"""
|
||||
Serve requests in a child process
|
||||
"""
|
||||
def run(queue):
|
||||
# Create loop and override any loop that may have existed
|
||||
# in a parent process. Without doing this and instead
|
||||
# using get_event_loop, at the very minimum the hashserv
|
||||
# unit tests will hang when running the second test.
|
||||
# This happens since get_event_loop in the spawned server
|
||||
# process for the second testcase ends up with the loop
|
||||
# from the hashserv client created in the unit test process
|
||||
# when running the first testcase. The problem is somewhat
|
||||
# more general, though, as any potential use of asyncio in
|
||||
# Cooker could create a loop that needs to replaced in this
|
||||
# new process.
|
||||
self.loop = asyncio.new_event_loop()
|
||||
asyncio.set_event_loop(self.loop)
|
||||
try:
|
||||
self.start()
|
||||
finally:
|
||||
queue.put(self.address)
|
||||
queue.close()
|
||||
|
||||
if prefunc is not None:
|
||||
prefunc(self, *args)
|
||||
|
||||
self._serve_forever()
|
||||
|
||||
if sys.version_info >= (3, 6):
|
||||
self.loop.run_until_complete(self.loop.shutdown_asyncgens())
|
||||
self.loop.close()
|
||||
|
||||
queue = multiprocessing.Queue()
|
||||
|
||||
# Temporarily block SIGTERM. The server process will inherit this
|
||||
# block which will ensure it doesn't receive the SIGTERM until the
|
||||
# handler is ready for it
|
||||
mask = signal.pthread_sigmask(signal.SIG_BLOCK, [signal.SIGTERM])
|
||||
try:
|
||||
self.process = multiprocessing.Process(target=run, args=(queue,))
|
||||
self.process.start()
|
||||
|
||||
self.address = queue.get()
|
||||
queue.close()
|
||||
queue.join_thread()
|
||||
|
||||
return self.process
|
||||
finally:
|
||||
signal.pthread_sigmask(signal.SIG_SETMASK, mask)
|
||||
@@ -295,13 +295,9 @@ def exec_func_python(func, d, runfile, cwd=None):
|
||||
lineno = int(d.getVarFlag(func, "lineno", False))
|
||||
bb.methodpool.insert_method(func, text, fn, lineno - 1)
|
||||
|
||||
comp = utils.better_compile(code, func, "exec_func_python() autogenerated")
|
||||
utils.better_exec(comp, {"d": d}, code, "exec_func_python() autogenerated")
|
||||
comp = utils.better_compile(code, func, "exec_python_func() autogenerated")
|
||||
utils.better_exec(comp, {"d": d}, code, "exec_python_func() autogenerated")
|
||||
finally:
|
||||
# We want any stdout/stderr to be printed before any other log messages to make debugging
|
||||
# more accurate. In some cases we seem to lose stdout/stderr entirely in logging tests without this.
|
||||
sys.stdout.flush()
|
||||
sys.stderr.flush()
|
||||
bb.debug(2, "Python function %s finished" % func)
|
||||
|
||||
if cwd and olddir:
|
||||
@@ -569,6 +565,7 @@ exit $ret
|
||||
def _task_data(fn, task, d):
|
||||
localdata = bb.data.createCopy(d)
|
||||
localdata.setVar('BB_FILENAME', fn)
|
||||
localdata.setVar('BB_CURRENTTASK', task[3:])
|
||||
localdata.setVar('OVERRIDES', 'task-%s:%s' %
|
||||
(task[3:].replace('_', '-'), d.getVar('OVERRIDES', False)))
|
||||
localdata.finalize()
|
||||
@@ -582,7 +579,7 @@ def _exec_task(fn, task, d, quieterr):
|
||||
running it with its own local metadata, and with some useful variables set.
|
||||
"""
|
||||
if not d.getVarFlag(task, 'task', False):
|
||||
event.fire(TaskInvalid(task, fn, d), d)
|
||||
event.fire(TaskInvalid(task, d), d)
|
||||
logger.error("No such task: %s" % task)
|
||||
return 1
|
||||
|
||||
@@ -685,55 +682,47 @@ def _exec_task(fn, task, d, quieterr):
|
||||
try:
|
||||
try:
|
||||
event.fire(TaskStarted(task, fn, logfn, flags, localdata), localdata)
|
||||
except (bb.BBHandledException, SystemExit):
|
||||
return 1
|
||||
|
||||
try:
|
||||
for func in (prefuncs or '').split():
|
||||
exec_func(func, localdata)
|
||||
exec_func(task, localdata)
|
||||
for func in (postfuncs or '').split():
|
||||
exec_func(func, localdata)
|
||||
finally:
|
||||
# Need to flush and close the logs before sending events where the
|
||||
# UI may try to look at the logs.
|
||||
sys.stdout.flush()
|
||||
sys.stderr.flush()
|
||||
except bb.BBHandledException:
|
||||
event.fire(TaskFailed(task, fn, logfn, localdata, True), localdata)
|
||||
return 1
|
||||
except Exception as exc:
|
||||
if quieterr:
|
||||
event.fire(TaskFailedSilent(task, fn, logfn, localdata), localdata)
|
||||
else:
|
||||
errprinted = errchk.triggered
|
||||
logger.error(str(exc))
|
||||
event.fire(TaskFailed(task, fn, logfn, localdata, errprinted), localdata)
|
||||
return 1
|
||||
finally:
|
||||
sys.stdout.flush()
|
||||
sys.stderr.flush()
|
||||
|
||||
bblogger.removeHandler(handler)
|
||||
bblogger.removeHandler(handler)
|
||||
|
||||
# Restore the backup fds
|
||||
os.dup2(osi[0], osi[1])
|
||||
os.dup2(oso[0], oso[1])
|
||||
os.dup2(ose[0], ose[1])
|
||||
# Restore the backup fds
|
||||
os.dup2(osi[0], osi[1])
|
||||
os.dup2(oso[0], oso[1])
|
||||
os.dup2(ose[0], ose[1])
|
||||
|
||||
# Close the backup fds
|
||||
os.close(osi[0])
|
||||
os.close(oso[0])
|
||||
os.close(ose[0])
|
||||
|
||||
logfile.close()
|
||||
if os.path.exists(logfn) and os.path.getsize(logfn) == 0:
|
||||
logger.debug2("Zero size logfn %s, removing", logfn)
|
||||
bb.utils.remove(logfn)
|
||||
bb.utils.remove(loglink)
|
||||
except (Exception, SystemExit) as exc:
|
||||
handled = False
|
||||
if isinstance(exc, bb.BBHandledException):
|
||||
handled = True
|
||||
|
||||
if quieterr:
|
||||
if not handled:
|
||||
logger.warning(repr(exc))
|
||||
event.fire(TaskFailedSilent(task, fn, logfn, localdata), localdata)
|
||||
else:
|
||||
errprinted = errchk.triggered
|
||||
# If the output is already on stdout, we've printed the information in the
|
||||
# logs once already so don't duplicate
|
||||
if verboseStdoutLogging or handled:
|
||||
errprinted = True
|
||||
if not handled:
|
||||
logger.error(repr(exc))
|
||||
event.fire(TaskFailed(task, fn, logfn, localdata, errprinted), localdata)
|
||||
return 1
|
||||
# Close the backup fds
|
||||
os.close(osi[0])
|
||||
os.close(oso[0])
|
||||
os.close(ose[0])
|
||||
|
||||
logfile.close()
|
||||
if os.path.exists(logfn) and os.path.getsize(logfn) == 0:
|
||||
logger.debug2("Zero size logfn %s, removing", logfn)
|
||||
bb.utils.remove(logfn)
|
||||
bb.utils.remove(loglink)
|
||||
event.fire(TaskSucceeded(task, fn, logfn, localdata), localdata)
|
||||
|
||||
if not localdata.getVarFlag(task, 'nostamp', False) and not localdata.getVarFlag(task, 'selfstamp', False):
|
||||
@@ -835,7 +824,11 @@ def stamp_cleanmask_internal(taskname, d, file_name):
|
||||
|
||||
return [cleanmask, cleanmask.replace(taskflagname, taskflagname + "_setscene")]
|
||||
|
||||
def clean_stamp(task, d, file_name = None):
|
||||
def make_stamp(task, d, file_name = None):
|
||||
"""
|
||||
Creates/updates a stamp for a given task
|
||||
(d can be a data dict or dataCache)
|
||||
"""
|
||||
cleanmask = stamp_cleanmask_internal(task, d, file_name)
|
||||
for mask in cleanmask:
|
||||
for name in glob.glob(mask):
|
||||
@@ -846,14 +839,6 @@ def clean_stamp(task, d, file_name = None):
|
||||
if name.endswith('.taint'):
|
||||
continue
|
||||
os.unlink(name)
|
||||
return
|
||||
|
||||
def make_stamp(task, d, file_name = None):
|
||||
"""
|
||||
Creates/updates a stamp for a given task
|
||||
(d can be a data dict or dataCache)
|
||||
"""
|
||||
clean_stamp(task, d, file_name)
|
||||
|
||||
stamp = stamp_internal(task, d, file_name)
|
||||
# Remove the file and recreate to force timestamp
|
||||
@@ -942,11 +927,6 @@ def add_tasks(tasklist, d):
|
||||
task_deps[name] = {}
|
||||
if name in flags:
|
||||
deptask = d.expand(flags[name])
|
||||
if name in ['noexec', 'fakeroot', 'nostamp']:
|
||||
if deptask != '1':
|
||||
bb.warn("In a future version of BitBake, setting the '{}' flag to something other than '1' "
|
||||
"will result in the flag not being set. See YP bug #13808.".format(name))
|
||||
|
||||
task_deps[name][task] = deptask
|
||||
getTask('mcdepends')
|
||||
getTask('depends')
|
||||
@@ -1045,8 +1025,6 @@ def tasksbetween(task_start, task_end, d):
|
||||
def follow_chain(task, endtask, chain=None):
|
||||
if not chain:
|
||||
chain = []
|
||||
if task in chain:
|
||||
bb.fatal("Circular task dependencies as %s depends on itself via the chain %s" % (task, " -> ".join(chain)))
|
||||
chain.append(task)
|
||||
for othertask in tasks:
|
||||
if othertask == task:
|
||||
|
||||
@@ -19,8 +19,7 @@
import os
import logging
import pickle
from collections import defaultdict
from collections.abc import Mapping
from collections import defaultdict, Mapping
import bb.utils
from bb import PrefixLoggerAdapter
import re
@@ -54,12 +53,12 @@ class RecipeInfoCommon(object):

@classmethod
def pkgvar(cls, var, packages, metadata):
return dict((pkg, cls.depvar("%s:%s" % (var, pkg), metadata))
return dict((pkg, cls.depvar("%s_%s" % (var, pkg), metadata))
for pkg in packages)

@classmethod
def taskvar(cls, var, tasks, metadata):
return dict((task, cls.getvar("%s:task-%s" % (var, task), metadata))
return dict((task, cls.getvar("%s_task-%s" % (var, task), metadata))
for task in tasks)

@classmethod
@@ -284,15 +283,36 @@ def parse_recipe(bb_data, bbfile, appends, mc=''):
Parse a recipe
"""

chdir_back = False

bb_data.setVar("__BBMULTICONFIG", mc)

# expand tmpdir to include this topdir
bb_data.setVar('TMPDIR', bb_data.getVar('TMPDIR') or "")
bbfile_loc = os.path.abspath(os.path.dirname(bbfile))
oldpath = os.path.abspath(os.getcwd())
bb.parse.cached_mtime_noerror(bbfile_loc)

if appends:
bb_data.setVar('__BBAPPEND', " ".join(appends))
bb_data = bb.parse.handle(bbfile, bb_data)
return bb_data
# The ConfHandler first looks if there is a TOPDIR and if not
# then it would call getcwd().
# Previously, we chdir()ed to bbfile_loc, called the handler
# and finally chdir()ed back, a couple of thousand times. We now
# just fill in TOPDIR to point to bbfile_loc if there is no TOPDIR yet.
if not bb_data.getVar('TOPDIR', False):
chdir_back = True
bb_data.setVar('TOPDIR', bbfile_loc)
try:
if appends:
bb_data.setVar('__BBAPPEND', " ".join(appends))
bb_data = bb.parse.handle(bbfile, bb_data)
if chdir_back:
os.chdir(oldpath)
return bb_data
except:
if chdir_back:
os.chdir(oldpath)
raise



class NoCache(object):
@@ -619,7 +639,7 @@ class Cache(NoCache):
for f in flist:
if not f:
continue
f, exist = f.rsplit(":", 1)
f, exist = f.split(":")
if (exist == "True" and not os.path.exists(f)) or (exist == "False" and os.path.exists(f)):
self.logger.debug2("%s's file checksum list file %s changed",
fn, f)

@@ -11,13 +11,10 @@ import os
|
||||
import stat
|
||||
import bb.utils
|
||||
import logging
|
||||
import re
|
||||
from bb.cache import MultiProcessCache
|
||||
|
||||
logger = logging.getLogger("BitBake.Cache")
|
||||
|
||||
filelist_regex = re.compile(r'(?:(?<=:True)|(?<=:False))\s+')
|
||||
|
||||
# mtime cache (non-persistent)
|
||||
# based upon the assumption that files do not change during bitbake run
|
||||
class FileMtimeCache(object):
|
||||
@@ -53,7 +50,6 @@ class FileChecksumCache(MultiProcessCache):
|
||||
MultiProcessCache.__init__(self)
|
||||
|
||||
def get_checksum(self, f):
|
||||
f = os.path.normpath(f)
|
||||
entry = self.cachedata[0].get(f)
|
||||
cmtime = self.mtime_cache.cached_mtime(f)
|
||||
if entry:
|
||||
@@ -88,36 +84,22 @@ class FileChecksumCache(MultiProcessCache):
|
||||
return None
|
||||
return checksum
|
||||
|
||||
#
|
||||
# Changing the format of file-checksums is problematic as both OE and Bitbake have
|
||||
# knowledge of them. We need to encode a new piece of data, the portion of the path
|
||||
# we care about from a checksum perspective. This means that files that change subdirectory
|
||||
# are tracked by the task hashes. To do this, we do something horrible and put a "/./" into
|
||||
# the path. The filesystem handles it but it gives us a marker to know which subsection
|
||||
# of the path to cache.
|
||||
#
|
||||
def checksum_dir(pth):
|
||||
# Handle directories recursively
|
||||
if pth == "/":
|
||||
bb.fatal("Refusing to checksum /")
|
||||
pth = pth.rstrip("/")
|
||||
dirchecksums = []
|
||||
for root, dirs, files in os.walk(pth, topdown=True):
|
||||
[dirs.remove(d) for d in list(dirs) if d in localdirsexclude]
|
||||
for name in files:
|
||||
fullpth = os.path.join(root, name).replace(pth, os.path.join(pth, "."))
|
||||
fullpth = os.path.join(root, name)
|
||||
checksum = checksum_file(fullpth)
|
||||
if checksum:
|
||||
dirchecksums.append((fullpth, checksum))
|
||||
return dirchecksums
|
||||
|
||||
checksums = []
|
||||
for pth in filelist_regex.split(filelist):
|
||||
if not pth:
|
||||
continue
|
||||
pth = pth.strip()
|
||||
if not pth:
|
||||
continue
|
||||
for pth in filelist.split():
|
||||
exist = pth.split(":")[1]
|
||||
if exist == "False":
|
||||
continue
|
||||
|
||||
@@ -1,6 +1,4 @@
|
||||
#
|
||||
# Copyright BitBake Contributors
|
||||
#
|
||||
# SPDX-License-Identifier: GPL-2.0-only
|
||||
#
|
||||
|
||||
@@ -197,26 +195,6 @@ class BufferedLogger(Logger):
|
||||
self.target.handle(record)
|
||||
self.buffer = []
|
||||
|
||||
class DummyLogger():
|
||||
def flush(self):
|
||||
return
|
||||
|
||||
|
||||
# Starting with Python 3.8, the ast module exposes all string nodes as a
|
||||
# Constant. While earlier versions of the module also have the Constant type
|
||||
# those use the Str type to encapsulate strings.
|
||||
if sys.version_info < (3, 8):
|
||||
def node_str_value(node):
|
||||
if isinstance(node, ast.Str):
|
||||
return node.s
|
||||
return None
|
||||
else:
|
||||
def node_str_value(node):
|
||||
if isinstance(node, ast.Constant) and isinstance(node.value, str):
|
||||
return node.value
|
||||
return None
|
||||
|
||||
|
||||
class PythonParser():
|
||||
getvars = (".getVar", ".appendVar", ".prependVar", "oe.utils.conditional")
|
||||
getvarflags = (".getVarFlag", ".appendVarFlag", ".prependVarFlag")
|
||||
@@ -241,22 +219,19 @@ class PythonParser():
|
||||
def visit_Call(self, node):
|
||||
name = self.called_node_name(node.func)
|
||||
if name and (name.endswith(self.getvars) or name.endswith(self.getvarflags) or name in self.containsfuncs or name in self.containsanyfuncs):
|
||||
varname = node_str_value(node.args[0])
|
||||
if varname is not None:
|
||||
arg_str_value = None
|
||||
if len(node.args) >= 2:
|
||||
arg_str_value = node_str_value(node.args[1])
|
||||
if name in self.containsfuncs and arg_str_value is not None:
|
||||
if isinstance(node.args[0], ast.Str):
|
||||
varname = node.args[0].s
|
||||
if name in self.containsfuncs and isinstance(node.args[1], ast.Str):
|
||||
if varname not in self.contains:
|
||||
self.contains[varname] = set()
|
||||
self.contains[varname].add(arg_str_value)
|
||||
elif name in self.containsanyfuncs and arg_str_value is not None:
|
||||
self.contains[varname].add(node.args[1].s)
|
||||
elif name in self.containsanyfuncs and isinstance(node.args[1], ast.Str):
|
||||
if varname not in self.contains:
|
||||
self.contains[varname] = set()
|
||||
self.contains[varname].update(arg_str_value.split())
|
||||
self.contains[varname].update(node.args[1].s.split())
|
||||
elif name.endswith(self.getvarflags):
|
||||
if arg_str_value is not None:
|
||||
self.references.add('%s[%s]' % (varname, arg_str_value))
|
||||
if isinstance(node.args[1], ast.Str):
|
||||
self.references.add('%s[%s]' % (varname, node.args[1].s))
|
||||
else:
|
||||
self.warn(node.func, node.args[1])
|
||||
else:
|
||||
@@ -264,10 +239,10 @@ class PythonParser():
|
||||
else:
|
||||
self.warn(node.func, node.args[0])
|
||||
elif name and name.endswith(".expand"):
|
||||
arg_str_value = node_str_value(node.args[0])
|
||||
if arg_str_value is not None:
|
||||
if isinstance(node.args[0], ast.Str):
|
||||
value = node.args[0].s
|
||||
d = bb.data.init()
|
||||
parser = d.expandWithRefs(arg_str_value, self.name)
|
||||
parser = d.expandWithRefs(value, self.name)
|
||||
self.references |= parser.references
|
||||
self.execs |= parser.execs
|
||||
for varname in parser.contains:
|
||||
@@ -275,9 +250,8 @@ class PythonParser():
|
||||
self.contains[varname] = set()
|
||||
self.contains[varname] |= parser.contains[varname]
|
||||
elif name in self.execfuncs:
|
||||
arg_str_value = node_str_value(node.args[0])
|
||||
if arg_str_value is not None:
|
||||
self.var_execs.add(arg_str_value)
|
||||
if isinstance(node.args[0], ast.Str):
|
||||
self.var_execs.add(node.args[0].s)
|
||||
else:
|
||||
self.warn(node.func, node.args[0])
|
||||
elif name and isinstance(node.func, (ast.Name, ast.Attribute)):
|
||||
@@ -302,9 +276,7 @@ class PythonParser():
|
||||
self.contains = {}
|
||||
self.execs = set()
|
||||
self.references = set()
|
||||
self._log = log
|
||||
# Defer init as expensive
|
||||
self.log = DummyLogger()
|
||||
self.log = BufferedLogger('BitBake.Data.PythonParser', logging.DEBUG, log)
|
||||
|
||||
self.unhandled_message = "in call of %s, argument '%s' is not a string literal"
|
||||
self.unhandled_message = "while parsing %s, %s" % (name, self.unhandled_message)
|
||||
@@ -331,9 +303,6 @@ class PythonParser():
|
||||
self.contains[i] = set(codeparsercache.pythoncacheextras[h].contains[i])
|
||||
return
|
||||
|
||||
# Need to parse so take the hit on the real log buffer
|
||||
self.log = BufferedLogger('BitBake.Data.PythonParser', logging.DEBUG, self._log)
|
||||
|
||||
# We can't add to the linenumbers for compile, we can pad to the correct number of blank lines though
|
||||
node = "\n" * int(lineno) + node
|
||||
code = compile(check_indent(str(node)), filename, "exec",
|
||||
@@ -352,11 +321,7 @@ class ShellParser():
|
||||
self.funcdefs = set()
|
||||
self.allexecs = set()
|
||||
self.execs = set()
|
||||
self._name = name
|
||||
self._log = log
|
||||
# Defer init as expensive
|
||||
self.log = DummyLogger()
|
||||
|
||||
self.log = BufferedLogger('BitBake.Data.%s' % name, logging.DEBUG, log)
|
||||
self.unhandled_template = "unable to handle non-literal command '%s'"
|
||||
self.unhandled_template = "while parsing %s, %s" % (name, self.unhandled_template)
|
||||
|
||||
@@ -375,9 +340,6 @@ class ShellParser():
|
||||
self.execs = set(codeparsercache.shellcacheextras[h].execs)
|
||||
return self.execs
|
||||
|
||||
# Need to parse so take the hit on the real log buffer
|
||||
self.log = BufferedLogger('BitBake.Data.%s' % self._name, logging.DEBUG, self._log)
|
||||
|
||||
self._parse_shell(value)
|
||||
self.execs = set(cmd for cmd in self.allexecs if cmd not in self.funcdefs)
|
||||
|
||||
|
||||
@@ -20,7 +20,6 @@ Commands are queued in a CommandQueue
|
||||
|
||||
from collections import OrderedDict, defaultdict
|
||||
|
||||
import io
|
||||
import bb.event
|
||||
import bb.cooker
|
||||
import bb.remotedata
|
||||
@@ -65,17 +64,9 @@ class Command:
|
||||
|
||||
# Ensure cooker is ready for commands
|
||||
if command != "updateConfig" and command != "setFeatures":
|
||||
try:
|
||||
self.cooker.init_configdata()
|
||||
if not self.remotedatastores:
|
||||
self.remotedatastores = bb.remotedata.RemoteDatastores(self.cooker)
|
||||
except (Exception, SystemExit) as exc:
|
||||
import traceback
|
||||
if isinstance(exc, bb.BBHandledException):
|
||||
# We need to start returning real exceptions here. Until we do, we can't
|
||||
# tell if an exception is an instance of bb.BBHandledException
|
||||
return None, "bb.BBHandledException()\n" + traceback.format_exc()
|
||||
return None, traceback.format_exc()
|
||||
self.cooker.init_configdata()
|
||||
if not self.remotedatastores:
|
||||
self.remotedatastores = bb.remotedata.RemoteDatastores(self.cooker)
|
||||
|
||||
if hasattr(CommandsSync, command):
|
||||
# Can run synchronous commands straight away
|
||||
@@ -509,17 +500,6 @@ class CommandsSync:
|
||||
d = command.remotedatastores[dsindex].varhistory
|
||||
return getattr(d, method)(*args, **kwargs)
|
||||
|
||||
def dataStoreConnectorVarHistCmdEmit(self, command, params):
|
||||
dsindex = params[0]
|
||||
var = params[1]
|
||||
oval = params[2]
|
||||
val = params[3]
|
||||
d = command.remotedatastores[params[4]]
|
||||
|
||||
o = io.StringIO()
|
||||
command.remotedatastores[dsindex].varhistory.emit(var, oval, val, o, d)
|
||||
return o.getvalue()
|
||||
|
||||
def dataStoreConnectorIncHistCmd(self, command, params):
|
||||
dsindex = params[0]
|
||||
method = params[1]
|
||||
@@ -667,16 +647,6 @@ class CommandsAsync:
|
||||
command.finishAsyncCommand()
|
||||
findFilesMatchingInDir.needcache = False
|
||||
|
||||
def testCookerCommandEvent(self, command, params):
|
||||
"""
|
||||
Dummy command used by OEQA selftest to test tinfoil without IO
|
||||
"""
|
||||
pattern = params[0]
|
||||
|
||||
command.cooker.testCookerCommandEvent(pattern)
|
||||
command.finishAsyncCommand()
|
||||
testCookerCommandEvent.needcache = False
|
||||
|
||||
def findConfigFilePath(self, command, params):
|
||||
"""
|
||||
Find the path of the requested configuration file
|
||||
|
||||
@@ -1,196 +0,0 @@
|
||||
#
|
||||
# Copyright BitBake Contributors
|
||||
#
|
||||
# SPDX-License-Identifier: GPL-2.0-only
|
||||
#
|
||||
# Helper library to implement streaming compression and decompression using an
|
||||
# external process
|
||||
#
|
||||
# This library should be used directly by end users; a wrapper library for the
|
||||
# specific compression tool should be created
|
||||
|
||||
import builtins
|
||||
import io
|
||||
import os
|
||||
import subprocess
|
||||
|
||||
|
||||
def open_wrap(
|
||||
cls, filename, mode="rb", *, encoding=None, errors=None, newline=None, **kwargs
|
||||
):
|
||||
"""
|
||||
Open a compressed file in binary or text mode.
|
||||
|
||||
Users should not call this directly. A specific compression library can use
|
||||
this helper to provide it's own "open" command
|
||||
|
||||
The filename argument can be an actual filename (a str or bytes object), or
|
||||
an existing file object to read from or write to.
|
||||
|
||||
The mode argument can be "r", "rb", "w", "wb", "x", "xb", "a" or "ab" for
|
||||
binary mode, or "rt", "wt", "xt" or "at" for text mode. The default mode is
|
||||
"rb".
|
||||
|
||||
For binary mode, this function is equivalent to the cls constructor:
|
||||
cls(filename, mode). In this case, the encoding, errors and newline
|
||||
arguments must not be provided.
|
||||
|
||||
For text mode, a cls object is created, and wrapped in an
|
||||
io.TextIOWrapper instance with the specified encoding, error handling
|
||||
behavior, and line ending(s).
|
||||
"""
|
||||
if "t" in mode:
|
||||
if "b" in mode:
|
||||
raise ValueError("Invalid mode: %r" % (mode,))
|
||||
else:
|
||||
if encoding is not None:
|
||||
raise ValueError("Argument 'encoding' not supported in binary mode")
|
||||
if errors is not None:
|
||||
raise ValueError("Argument 'errors' not supported in binary mode")
|
||||
if newline is not None:
|
||||
raise ValueError("Argument 'newline' not supported in binary mode")
|
||||
|
||||
file_mode = mode.replace("t", "")
|
||||
if isinstance(filename, (str, bytes, os.PathLike, int)):
|
||||
binary_file = cls(filename, file_mode, **kwargs)
|
||||
elif hasattr(filename, "read") or hasattr(filename, "write"):
|
||||
binary_file = cls(None, file_mode, fileobj=filename, **kwargs)
|
||||
else:
|
||||
raise TypeError("filename must be a str or bytes object, or a file")
|
||||
|
||||
if "t" in mode:
|
||||
return io.TextIOWrapper(
|
||||
binary_file, encoding, errors, newline, write_through=True
|
||||
)
|
||||
else:
|
||||
return binary_file
|
||||
|
||||
|
||||
class CompressionError(OSError):
|
||||
pass
|
||||
|
||||
|
||||
class PipeFile(io.RawIOBase):
|
||||
"""
|
||||
Class that implements generically piping to/from a compression program
|
||||
|
||||
Derived classes should add the function get_compress() and get_decompress()
|
||||
that return the required commands. Input will be piped into stdin and the
|
||||
(de)compressed output should be written to stdout, e.g.:
|
||||
|
||||
class FooFile(PipeCompressionFile):
|
||||
def get_decompress(self):
|
||||
return ["fooc", "--decompress", "--stdout"]
|
||||
|
||||
def get_compress(self):
|
||||
return ["fooc", "--compress", "--stdout"]
|
||||
|
||||
"""
|
||||
|
||||
READ = 0
|
||||
WRITE = 1
|
||||
|
||||
def __init__(self, filename=None, mode="rb", *, stderr=None, fileobj=None):
|
||||
if "t" in mode or "U" in mode:
|
||||
raise ValueError("Invalid mode: {!r}".format(mode))
|
||||
|
||||
if not "b" in mode:
|
||||
mode += "b"
|
||||
|
||||
if mode.startswith("r"):
|
||||
self.mode = self.READ
|
||||
elif mode.startswith("w"):
|
||||
self.mode = self.WRITE
|
||||
else:
|
||||
raise ValueError("Invalid mode %r" % mode)
|
||||
|
||||
if fileobj is not None:
|
||||
self.fileobj = fileobj
|
||||
else:
|
||||
self.fileobj = builtins.open(filename, mode or "rb")
|
||||
|
||||
if self.mode == self.READ:
|
||||
self.p = subprocess.Popen(
|
||||
self.get_decompress(),
|
||||
stdin=self.fileobj,
|
||||
stdout=subprocess.PIPE,
|
||||
stderr=stderr,
|
||||
close_fds=True,
|
||||
)
|
||||
self.pipe = self.p.stdout
|
||||
else:
|
||||
self.p = subprocess.Popen(
|
||||
self.get_compress(),
|
||||
stdin=subprocess.PIPE,
|
||||
stdout=self.fileobj,
|
||||
stderr=stderr,
|
||||
close_fds=True,
|
||||
)
|
||||
self.pipe = self.p.stdin
|
||||
|
||||
self.__closed = False
|
||||
|
||||
def _check_process(self):
|
||||
if self.p is None:
|
||||
return
|
||||
|
||||
returncode = self.p.wait()
|
||||
if returncode:
|
||||
raise CompressionError("Process died with %d" % returncode)
|
||||
self.p = None
|
||||
|
||||
def close(self):
|
||||
if self.closed:
|
||||
return
|
||||
|
||||
self.pipe.close()
|
||||
if self.p is not None:
|
||||
self._check_process()
|
||||
self.fileobj.close()
|
||||
|
||||
self.__closed = True
|
||||
|
||||
@property
|
||||
def closed(self):
|
||||
return self.__closed
|
||||
|
||||
def fileno(self):
|
||||
return self.pipe.fileno()
|
||||
|
||||
def flush(self):
|
||||
self.pipe.flush()
|
||||
|
||||
def isatty(self):
|
||||
return self.pipe.isatty()
|
||||
|
||||
def readable(self):
|
||||
return self.mode == self.READ
|
||||
|
||||
def writable(self):
|
||||
return self.mode == self.WRITE
|
||||
|
||||
def readinto(self, b):
|
||||
if self.mode != self.READ:
|
||||
import errno
|
||||
|
||||
raise OSError(
|
||||
errno.EBADF, "read() on write-only %s object" % self.__class__.__name__
|
||||
)
|
||||
size = self.pipe.readinto(b)
|
||||
if size == 0:
|
||||
self._check_process()
|
||||
return size
|
||||
|
||||
def write(self, data):
|
||||
if self.mode != self.WRITE:
|
||||
import errno
|
||||
|
||||
raise OSError(
|
||||
errno.EBADF, "write() on read-only %s object" % self.__class__.__name__
|
||||
)
|
||||
data = self.pipe.write(data)
|
||||
|
||||
if not data:
|
||||
self._check_process()
|
||||
|
||||
return data
|
||||
@@ -1,19 +0,0 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#

import bb.compress._pipecompress


def open(*args, **kwargs):
return bb.compress._pipecompress.open_wrap(LZ4File, *args, **kwargs)


class LZ4File(bb.compress._pipecompress.PipeFile):
def get_compress(self):
return ["lz4c", "-z", "-c"]

def get_decompress(self):
return ["lz4c", "-d", "-c"]
@@ -1,30 +0,0 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#

import bb.compress._pipecompress
import shutil


def open(*args, **kwargs):
return bb.compress._pipecompress.open_wrap(ZstdFile, *args, **kwargs)


class ZstdFile(bb.compress._pipecompress.PipeFile):
def __init__(self, *args, num_threads=1, compresslevel=3, **kwargs):
self.num_threads = num_threads
self.compresslevel = compresslevel
super().__init__(*args, **kwargs)

def _get_zstd(self):
if self.num_threads == 1 or not shutil.which("pzstd"):
return ["zstd"]
return ["pzstd", "-p", "%d" % self.num_threads]

def get_compress(self):
return self._get_zstd() + ["-c", "-%d" % self.compresslevel]

def get_decompress(self):
return self._get_zstd() + ["-d", "-c"]
@@ -13,6 +13,7 @@ import sys, os, glob, os.path, re, time
|
||||
import itertools
|
||||
import logging
|
||||
import multiprocessing
|
||||
import sre_constants
|
||||
import threading
|
||||
from io import StringIO, UnsupportedOperation
|
||||
from contextlib import closing
|
||||
@@ -158,9 +159,6 @@ class BBCooker:
|
||||
for f in featureSet:
|
||||
self.featureset.setFeature(f)
|
||||
|
||||
self.orig_syspath = sys.path.copy()
|
||||
self.orig_sysmodules = [*sys.modules]
|
||||
|
||||
self.configuration = bb.cookerdata.CookerConfiguration()
|
||||
|
||||
self.idleCallBackRegister = idleCallBackRegister
|
||||
@@ -168,15 +166,27 @@ class BBCooker:
|
||||
bb.debug(1, "BBCooker starting %s" % time.time())
|
||||
sys.stdout.flush()
|
||||
|
||||
self.configwatcher = None
|
||||
self.confignotifier = None
|
||||
self.configwatcher = pyinotify.WatchManager()
|
||||
bb.debug(1, "BBCooker pyinotify1 %s" % time.time())
|
||||
sys.stdout.flush()
|
||||
|
||||
self.configwatcher.bbseen = set()
|
||||
self.configwatcher.bbwatchedfiles = set()
|
||||
self.confignotifier = pyinotify.Notifier(self.configwatcher, self.config_notifications)
|
||||
bb.debug(1, "BBCooker pyinotify2 %s" % time.time())
|
||||
sys.stdout.flush()
|
||||
self.watchmask = pyinotify.IN_CLOSE_WRITE | pyinotify.IN_CREATE | pyinotify.IN_DELETE | \
|
||||
pyinotify.IN_DELETE_SELF | pyinotify.IN_MODIFY | pyinotify.IN_MOVE_SELF | \
|
||||
pyinotify.IN_MOVED_FROM | pyinotify.IN_MOVED_TO
|
||||
self.watcher = pyinotify.WatchManager()
|
||||
bb.debug(1, "BBCooker pyinotify3 %s" % time.time())
|
||||
sys.stdout.flush()
|
||||
self.watcher.bbseen = set()
|
||||
self.watcher.bbwatchedfiles = set()
|
||||
self.notifier = pyinotify.Notifier(self.watcher, self.notifications)
|
||||
|
||||
self.watcher = None
|
||||
self.notifier = None
|
||||
bb.debug(1, "BBCooker pyinotify complete %s" % time.time())
|
||||
sys.stdout.flush()
|
||||
|
||||
# If being called by something like tinfoil, we need to clean cached data
|
||||
# which may now be invalid
|
||||
@@ -189,7 +199,7 @@ class BBCooker:
|
||||
|
||||
self.inotify_modified_files = []
|
||||
|
||||
def _process_inotify_updates(server, cooker, halt):
|
||||
def _process_inotify_updates(server, cooker, abort):
|
||||
cooker.process_inotify_updates()
|
||||
return 1.0
|
||||
|
||||
@@ -227,29 +237,9 @@ class BBCooker:
|
||||
sys.stdout.flush()
|
||||
self.handlePRServ()
|
||||
|
||||
def setupConfigWatcher(self):
|
||||
if self.configwatcher:
|
||||
self.configwatcher.close()
|
||||
self.confignotifier = None
|
||||
self.configwatcher = None
|
||||
self.configwatcher = pyinotify.WatchManager()
|
||||
self.configwatcher.bbseen = set()
|
||||
self.configwatcher.bbwatchedfiles = set()
|
||||
self.confignotifier = pyinotify.Notifier(self.configwatcher, self.config_notifications)
|
||||
|
||||
def setupParserWatcher(self):
|
||||
if self.watcher:
|
||||
self.watcher.close()
|
||||
self.notifier = None
|
||||
self.watcher = None
|
||||
self.watcher = pyinotify.WatchManager()
|
||||
self.watcher.bbseen = set()
|
||||
self.watcher.bbwatchedfiles = set()
|
||||
self.notifier = pyinotify.Notifier(self.watcher, self.notifications)
|
||||
|
||||
def process_inotify_updates(self):
|
||||
for n in [self.confignotifier, self.notifier]:
|
||||
if n and n.check_events(timeout=0):
|
||||
if n.check_events(timeout=0):
|
||||
# read notified events and enqeue them
|
||||
n.read_events()
|
||||
n.process_events()
|
||||
@@ -263,12 +253,6 @@ class BBCooker:
|
||||
return
|
||||
if not event.pathname in self.configwatcher.bbwatchedfiles:
|
||||
return
|
||||
if "IN_ISDIR" in event.maskname:
|
||||
if "IN_CREATE" in event.maskname or "IN_DELETE" in event.maskname:
|
||||
if event.pathname in self.configwatcher.bbseen:
|
||||
self.configwatcher.bbseen.remove(event.pathname)
|
||||
# Could remove all entries starting with the directory but for now...
|
||||
bb.parse.clear_cache()
|
||||
if not event.pathname in self.inotify_modified_files:
|
||||
self.inotify_modified_files.append(event.pathname)
|
||||
self.baseconfig_valid = False
|
||||
@@ -282,12 +266,6 @@ class BBCooker:
|
||||
if event.pathname.endswith("bitbake-cookerdaemon.log") \
|
||||
or event.pathname.endswith("bitbake.lock"):
|
||||
return
|
||||
if "IN_ISDIR" in event.maskname:
|
||||
if "IN_CREATE" in event.maskname or "IN_DELETE" in event.maskname:
|
||||
if event.pathname in self.watcher.bbseen:
|
||||
self.watcher.bbseen.remove(event.pathname)
|
||||
# Could remove all entries starting with the directory but for now...
|
||||
bb.parse.clear_cache()
|
||||
if not event.pathname in self.inotify_modified_files:
|
||||
self.inotify_modified_files.append(event.pathname)
|
||||
self.parsecache_valid = False
|
||||
@@ -352,13 +330,6 @@ class BBCooker:
|
||||
self.state = state.initial
|
||||
self.caches_array = []
|
||||
|
||||
sys.path = self.orig_syspath.copy()
|
||||
for mod in [*sys.modules]:
|
||||
if mod not in self.orig_sysmodules:
|
||||
del sys.modules[mod]
|
||||
|
||||
self.setupConfigWatcher()
|
||||
|
||||
# Need to preserve BB_CONSOLELOG over resets
|
||||
consolelog = None
|
||||
if hasattr(self, "data"):
|
||||
@@ -411,30 +382,16 @@ class BBCooker:
|
||||
try:
|
||||
self.prhost = prserv.serv.auto_start(self.data)
|
||||
except prserv.serv.PRServiceConfigError as e:
|
||||
bb.fatal("Unable to start PR Server, exiting, check the bitbake-cookerdaemon.log")
|
||||
bb.fatal("Unable to start PR Server, exitting")
|
||||
|
||||
if self.data.getVar("BB_HASHSERVE") == "auto":
|
||||
# Create a new hash server bound to a unix domain socket
|
||||
if not self.hashserv:
|
||||
dbfile = (self.data.getVar("PERSISTENT_DIR") or self.data.getVar("CACHE")) + "/hashserv.db"
|
||||
upstream = self.data.getVar("BB_HASHSERVE_UPSTREAM") or None
|
||||
if upstream:
|
||||
import socket
|
||||
try:
|
||||
sock = socket.create_connection(upstream.split(":"), 5)
|
||||
sock.close()
|
||||
except socket.error as e:
|
||||
bb.warn("BB_HASHSERVE_UPSTREAM is not valid, unable to connect hash equivalence server at '%s': %s"
|
||||
% (upstream, repr(e)))
|
||||
|
||||
self.hashservaddr = "unix://%s/hashserve.sock" % self.data.getVar("TOPDIR")
|
||||
self.hashserv = hashserv.create_server(
|
||||
self.hashservaddr,
|
||||
dbfile,
|
||||
sync=False,
|
||||
upstream=upstream,
|
||||
)
|
||||
self.hashserv.serve_as_process()
|
||||
self.hashserv = hashserv.create_server(self.hashservaddr, dbfile, sync=False)
|
||||
self.hashserv.process = multiprocessing.Process(target=self.hashserv.serve_forever)
|
||||
self.hashserv.process.start()
|
||||
self.data.setVar("BB_HASHSERVE", self.hashservaddr)
|
||||
self.databuilder.origdata.setVar("BB_HASHSERVE", self.hashservaddr)
|
||||
self.databuilder.data.setVar("BB_HASHSERVE", self.hashservaddr)
|
||||
@@ -534,7 +491,7 @@ class BBCooker:
|
||||
logger.debug("Base environment change, triggering reparse")
|
||||
self.reset()
|
||||
|
||||
def runCommands(self, server, data, halt):
|
||||
def runCommands(self, server, data, abort):
|
||||
"""
|
||||
Run any queued asynchronous command
|
||||
This is done by the idle handler so it runs in true context rather than
|
||||
@@ -584,8 +541,6 @@ class BBCooker:
|
||||
if not orig_tracking:
|
||||
self.enableDataTracking()
|
||||
self.reset()
|
||||
# reset() resets to the UI requested value so we have to redo this
|
||||
self.enableDataTracking()
|
||||
|
||||
def mc_base(p):
|
||||
if p.startswith('mc:'):
|
||||
@@ -609,7 +564,7 @@ class BBCooker:
|
||||
if pkgs_to_build[0] in set(ignore.split()):
|
||||
bb.fatal("%s is in ASSUME_PROVIDED" % pkgs_to_build[0])
|
||||
|
||||
taskdata, runlist = self.buildTaskData(pkgs_to_build, None, self.configuration.halt, allowincomplete=True)
|
||||
taskdata, runlist = self.buildTaskData(pkgs_to_build, None, self.configuration.abort, allowincomplete=True)
|
||||
|
||||
mc = runlist[0][0]
|
||||
fn = runlist[0][3]
|
||||
@@ -638,7 +593,7 @@ class BBCooker:
|
||||
data.emit_env(env, envdata, True)
|
||||
logger.plain(env.getvalue())
|
||||
|
||||
# emit the metadata which isn't valid shell
|
||||
# emit the metadata which isnt valid shell
|
||||
for e in sorted(envdata.keys()):
|
||||
if envdata.getVarFlag(e, 'func', False) and envdata.getVarFlag(e, 'python', False):
|
||||
logger.plain("\npython %s () {\n%s}\n", e, envdata.getVar(e, False))
|
||||
@@ -647,7 +602,7 @@ class BBCooker:
|
||||
self.disableDataTracking()
|
||||
self.reset()
|
||||
|
||||
def buildTaskData(self, pkgs_to_build, task, halt, allowincomplete=False):
|
||||
def buildTaskData(self, pkgs_to_build, task, abort, allowincomplete=False):
|
||||
"""
|
||||
Prepare a runqueue and taskdata object for iteration over pkgs_to_build
|
||||
"""
|
||||
@@ -694,7 +649,7 @@ class BBCooker:
|
||||
localdata = {}
|
||||
|
||||
for mc in self.multiconfigs:
|
||||
taskdata[mc] = bb.taskdata.TaskData(halt, skiplist=self.skiplist, allowincomplete=allowincomplete)
|
||||
taskdata[mc] = bb.taskdata.TaskData(abort, skiplist=self.skiplist, allowincomplete=allowincomplete)
|
||||
localdata[mc] = data.createCopy(self.databuilder.mcdata[mc])
|
||||
bb.data.expandKeys(localdata[mc])
|
||||
|
||||
@@ -743,18 +698,19 @@ class BBCooker:
|
||||
taskdata[mc].add_unresolved(localdata[mc], self.recipecaches[mc])
|
||||
mcdeps |= set(taskdata[mc].get_mcdepends())
|
||||
new = False
|
||||
for k in mcdeps:
|
||||
if k in seen:
|
||||
continue
|
||||
l = k.split(':')
|
||||
depmc = l[2]
|
||||
if depmc not in self.multiconfigs:
|
||||
bb.fatal("Multiconfig dependency %s depends on nonexistent multiconfig configuration named configuration %s" % (k,depmc))
|
||||
else:
|
||||
logger.debug("Adding providers for multiconfig dependency %s" % l[3])
|
||||
taskdata[depmc].add_provider(localdata[depmc], self.recipecaches[depmc], l[3])
|
||||
seen.add(k)
|
||||
new = True
|
||||
for mc in self.multiconfigs:
|
||||
for k in mcdeps:
|
||||
if k in seen:
|
||||
continue
|
||||
l = k.split(':')
|
||||
depmc = l[2]
|
||||
if depmc not in self.multiconfigs:
|
||||
bb.fatal("Multiconfig dependency %s depends on nonexistent multiconfig configuration named configuration %s" % (k,depmc))
|
||||
else:
|
||||
logger.debug("Adding providers for multiconfig dependency %s" % l[3])
|
||||
taskdata[depmc].add_provider(localdata[depmc], self.recipecaches[depmc], l[3])
|
||||
seen.add(k)
|
||||
new = True
|
||||
|
||||
for mc in self.multiconfigs:
|
||||
taskdata[mc].add_unresolved(localdata[mc], self.recipecaches[mc])
|
||||
@@ -767,7 +723,7 @@ class BBCooker:
|
||||
Prepare a runqueue and taskdata object for iteration over pkgs_to_build
|
||||
"""
|
||||
|
||||
# We set halt to False here to prevent unbuildable targets raising
|
||||
# We set abort to False here to prevent unbuildable targets raising
|
||||
# an exception when we're just generating data
|
||||
taskdata, runlist = self.buildTaskData(pkgs_to_build, task, False, allowincomplete=True)
|
||||
|
||||
@@ -844,9 +800,7 @@ class BBCooker:
|
||||
for dep in rq.rqdata.runtaskentries[tid].depends:
|
||||
(depmc, depfn, _, deptaskfn) = bb.runqueue.split_tid_mcfn(dep)
|
||||
deppn = self.recipecaches[depmc].pkg_fn[deptaskfn]
|
||||
if depmc:
|
||||
depmc = "mc:" + depmc + ":"
|
||||
depend_tree["tdepends"][dotname].append("%s%s.%s" % (depmc, deppn, bb.runqueue.taskname_from_tid(dep)))
|
||||
depend_tree["tdepends"][dotname].append("%s.%s" % (deppn, bb.runqueue.taskname_from_tid(dep)))
|
||||
if taskfn not in seen_fns:
|
||||
seen_fns.append(taskfn)
|
||||
packages = []
|
||||
@@ -1110,11 +1064,6 @@ class BBCooker:
|
||||
if matches:
|
||||
bb.event.fire(bb.event.FilesMatchingFound(filepattern, matches), self.data)
|
||||
|
||||
def testCookerCommandEvent(self, filepattern):
|
||||
# Dummy command used by OEQA selftest to test tinfoil without IO
|
||||
matches = ["A", "B"]
|
||||
bb.event.fire(bb.event.FilesMatchingFound(filepattern, matches), self.data)
|
||||
|
||||
def findProviders(self, mc=''):
|
||||
return bb.providers.findProviders(self.databuilder.mcdata[mc], self.recipecaches[mc], self.recipecaches[mc].pkg_pn)
|
||||
|
||||
@@ -1455,7 +1404,7 @@ class BBCooker:
|
||||
|
||||
# Setup taskdata structure
|
||||
taskdata = {}
|
||||
taskdata[mc] = bb.taskdata.TaskData(self.configuration.halt)
|
||||
taskdata[mc] = bb.taskdata.TaskData(self.configuration.abort)
|
||||
taskdata[mc].add_provider(self.databuilder.mcdata[mc], self.recipecaches[mc], item)
|
||||
|
||||
if quietlog:
|
||||
@@ -1471,11 +1420,11 @@ class BBCooker:
|
||||
|
||||
rq = bb.runqueue.RunQueue(self, self.data, self.recipecaches, taskdata, runlist)
|
||||
|
||||
def buildFileIdle(server, rq, halt):
|
||||
def buildFileIdle(server, rq, abort):
|
||||
|
||||
msg = None
|
||||
interrupted = 0
|
||||
if halt or self.state == state.forceshutdown:
|
||||
if abort or self.state == state.forceshutdown:
|
||||
rq.finish_runqueue(True)
|
||||
msg = "Forced shutdown"
|
||||
interrupted = 2
|
||||
@@ -1517,10 +1466,10 @@ class BBCooker:
|
||||
Attempt to build the targets specified
|
||||
"""
|
||||
|
||||
def buildTargetsIdle(server, rq, halt):
|
||||
def buildTargetsIdle(server, rq, abort):
|
||||
msg = None
|
||||
interrupted = 0
|
||||
if halt or self.state == state.forceshutdown:
|
||||
if abort or self.state == state.forceshutdown:
|
||||
rq.finish_runqueue(True)
|
||||
msg = "Forced shutdown"
|
||||
interrupted = 2
|
||||
@@ -1563,7 +1512,7 @@ class BBCooker:
|
||||
|
||||
bb.event.fire(bb.event.BuildInit(packages), self.data)
|
||||
|
||||
taskdata, runlist = self.buildTaskData(targets, task, self.configuration.halt)
|
||||
taskdata, runlist = self.buildTaskData(targets, task, self.configuration.abort)
|
||||
|
||||
buildname = self.data.getVar("BUILDNAME", False)
|
||||
|
||||
@@ -1631,7 +1580,7 @@ class BBCooker:
|
||||
|
||||
if self.state in (state.shutdown, state.forceshutdown, state.error):
|
||||
if hasattr(self.parser, 'shutdown'):
|
||||
self.parser.shutdown(clean=False)
|
||||
self.parser.shutdown(clean=False, force = True)
|
||||
self.parser.final_cleanup()
|
||||
raise bb.BBHandledException()
|
||||
|
||||
@@ -1639,8 +1588,6 @@ class BBCooker:
|
||||
self.updateCacheSync()
|
||||
|
||||
if self.state != state.parsing and not self.parsecache_valid:
|
||||
self.setupParserWatcher()
|
||||
|
||||
bb.parse.siggen.reset(self.data)
|
||||
self.parseConfiguration ()
|
||||
if CookerFeatures.SEND_SANITYEVENTS in self.featureset:
|
||||
@@ -1700,7 +1647,7 @@ class BBCooker:
|
||||
# Return a copy, don't modify the original
|
||||
pkgs_to_build = pkgs_to_build[:]
|
||||
|
||||
if not pkgs_to_build:
|
||||
if len(pkgs_to_build) == 0:
|
||||
raise NothingToBuild
|
||||
|
||||
ignore = (self.data.getVar("ASSUME_PROVIDED") or "").split()
|
||||
@@ -1747,8 +1694,6 @@ class BBCooker:
|
||||
def post_serve(self):
|
||||
self.shutdown(force=True)
|
||||
prserv.serv.auto_shutdown()
|
||||
if hasattr(bb.parse, "siggen"):
|
||||
bb.parse.siggen.exit()
|
||||
if self.hashserv:
|
||||
self.hashserv.process.terminate()
|
||||
self.hashserv.process.join()
|
||||
@@ -1762,15 +1707,13 @@ class BBCooker:
|
||||
self.state = state.shutdown
|
||||
|
||||
if self.parser:
|
||||
self.parser.shutdown(clean=not force)
|
||||
self.parser.shutdown(clean=not force, force=force)
|
||||
self.parser.final_cleanup()
|
||||
|
||||
def finishcommand(self):
|
||||
self.state = state.initial
|
||||
|
||||
def reset(self):
|
||||
if hasattr(bb.parse, "siggen"):
|
||||
bb.parse.siggen.exit()
|
||||
self.initConfigurationData()
|
||||
self.handlePRServ()
|
||||
|
||||
@@ -1799,7 +1742,7 @@ class CookerCollectFiles(object):
def __init__(self, priorities, mc=''):
self.mc = mc
self.bbappends = []
# Priorities is a list of tuples, with the second element as the pattern.
# Priorities is a list of tupples, with the second element as the pattern.
# We need to sort the list with the longest pattern first, and so on to
# the shortest. This allows nested layers to be properly evaluated.
self.bbfile_config_priorities = sorted(priorities, key=lambda tup: tup[1], reverse=True)
@@ -1843,10 +1786,10 @@ class CookerCollectFiles(object):
|
||||
files.sort( key=lambda fileitem: self.calc_bbfile_priority(fileitem)[0] )
|
||||
config.setVar("BBFILES_PRIORITIZED", " ".join(files))
|
||||
|
||||
if not files:
|
||||
if not len(files):
|
||||
files = self.get_bbfiles()
|
||||
|
||||
if not files:
|
||||
if not len(files):
|
||||
collectlog.error("no recipe files to build, check your BBPATH and BBFILES?")
|
||||
bb.event.fire(CookerExit(), eventdata)
|
||||
|
||||
@@ -1906,7 +1849,7 @@ class CookerCollectFiles(object):
|
||||
try:
|
||||
re.compile(mask)
|
||||
bbmasks.append(mask)
|
||||
except re.error:
|
||||
except sre_constants.error:
|
||||
collectlog.critical("BBMASK contains an invalid regular expression, ignoring: %s" % mask)
|
||||
|
||||
# Then validate the combined regular expressions. This should never
|
||||
@@ -1914,7 +1857,7 @@ class CookerCollectFiles(object):
|
||||
bbmask = "|".join(bbmasks)
|
||||
try:
|
||||
bbmask_compiled = re.compile(bbmask)
|
||||
except re.error:
|
||||
except sre_constants.error:
|
||||
collectlog.critical("BBMASK is not a valid regular expression, ignoring: %s" % bbmask)
|
||||
bbmask = None
|
||||
|
||||
@@ -2032,30 +1975,15 @@ class ParsingFailure(Exception):
|
||||
Exception.__init__(self, realexception, recipe)
|
||||
|
||||
class Parser(multiprocessing.Process):
|
||||
def __init__(self, jobs, results, quit, profile):
|
||||
def __init__(self, jobs, results, quit, init, profile):
|
||||
self.jobs = jobs
|
||||
self.results = results
|
||||
self.quit = quit
|
||||
self.init = init
|
||||
multiprocessing.Process.__init__(self)
|
||||
self.context = bb.utils.get_context().copy()
|
||||
self.handlers = bb.event.get_class_handlers().copy()
|
||||
self.profile = profile
|
||||
self.queue_signals = False
|
||||
self.signal_received = []
|
||||
self.signal_threadlock = threading.Lock()
|
||||
|
||||
def catch_sig(self, signum, frame):
|
||||
if self.queue_signals:
|
||||
self.signal_received.append(signum)
|
||||
else:
|
||||
self.handle_sig(signum, frame)
|
||||
|
||||
def handle_sig(self, signum, frame):
|
||||
if signum == signal.SIGTERM:
|
||||
signal.signal(signal.SIGTERM, signal.SIG_DFL)
|
||||
os.kill(os.getpid(), signal.SIGTERM)
|
||||
elif signum == signal.SIGINT:
|
||||
signal.default_int_handler(signum, frame)
|
||||
|
||||
def run(self):
|
||||
|
||||
@@ -2075,48 +2003,36 @@ class Parser(multiprocessing.Process):
|
||||
prof.dump_stats(logfile)
|
||||
|
||||
def realrun(self):
# Signal handling here is hard. We must not terminate any process or thread holding the write
# lock for the event stream as it will not be released, ever, and things will hang.
# Python handles signals in the main thread/process but they can be raised from any thread and
# we want to defer processing of any SIGTERM/SIGINT signal until we're outside the critical section
# and don't hold the lock (see server/process.py). We therefore always catch the signals (so any
# new thread should also do so) and we defer handling but we handle with the local thread lock
# held (a threading lock, not a multiprocessing one) so that no other thread in the process
# can be in the critical section.
signal.signal(signal.SIGTERM, self.catch_sig)
signal.signal(signal.SIGHUP, signal.SIG_DFL)
signal.signal(signal.SIGINT, self.catch_sig)
bb.utils.set_process_name(multiprocessing.current_process().name)
multiprocessing.util.Finalize(None, bb.codeparser.parser_cache_save, exitpriority=1)
multiprocessing.util.Finalize(None, bb.fetch.fetcher_parse_save, exitpriority=1)
if self.init:
self.init()

pending = []
|
||||
try:
|
||||
while True:
|
||||
try:
|
||||
self.quit.get_nowait()
|
||||
except queue.Empty:
|
||||
pass
|
||||
else:
|
||||
break
|
||||
while True:
|
||||
try:
|
||||
self.quit.get_nowait()
|
||||
except queue.Empty:
|
||||
pass
|
||||
else:
|
||||
self.results.close()
|
||||
self.results.join_thread()
|
||||
break
|
||||
|
||||
if pending:
|
||||
result = pending.pop()
|
||||
else:
|
||||
try:
|
||||
job = self.jobs.pop()
|
||||
except IndexError:
|
||||
break
|
||||
result = self.parse(*job)
|
||||
# Clear the siggen cache after parsing to control memory usage, its huge
|
||||
bb.parse.siggen.postparsing_clean_cache()
|
||||
if pending:
|
||||
result = pending.pop()
|
||||
else:
|
||||
try:
|
||||
self.results.put(result, timeout=0.25)
|
||||
except queue.Full:
|
||||
pending.append(result)
|
||||
finally:
|
||||
self.results.close()
|
||||
self.results.join_thread()
|
||||
job = self.jobs.pop()
|
||||
except IndexError:
|
||||
self.results.close()
|
||||
self.results.join_thread()
|
||||
break
|
||||
result = self.parse(*job)
|
||||
# Clear the siggen cache after parsing to control memory usage, its huge
|
||||
bb.parse.siggen.postparsing_clean_cache()
|
||||
try:
|
||||
self.results.put(result, timeout=0.25)
|
||||
except queue.Full:
|
||||
pending.append(result)
|
||||
|
||||
def parse(self, mc, cache, filename, appends):
|
||||
try:
|
||||
@@ -2137,12 +2053,12 @@ class Parser(multiprocessing.Process):
|
||||
tb = sys.exc_info()[2]
|
||||
exc.recipe = filename
|
||||
exc.traceback = list(bb.exceptions.extract_traceback(tb, context=3))
|
||||
return True, None, exc
|
||||
return True, exc
|
||||
# Need to turn BaseExceptions into Exceptions here so we gracefully shutdown
|
||||
# and for example a worker thread doesn't just exit on its own in response to
|
||||
# a SystemExit event for example.
|
||||
except BaseException as exc:
|
||||
return True, None, ParsingFailure(exc, filename)
|
||||
return True, ParsingFailure(exc, filename)
|
||||
finally:
|
||||
bb.event.LogHandler.filter = origfilter
|
||||
|
||||
@@ -2193,6 +2109,13 @@ class CookerParser(object):
|
||||
self.processes = []
|
||||
if self.toparse:
|
||||
bb.event.fire(bb.event.ParseStarted(self.toparse), self.cfgdata)
|
||||
def init():
|
||||
signal.signal(signal.SIGTERM, signal.SIG_DFL)
|
||||
signal.signal(signal.SIGHUP, signal.SIG_DFL)
|
||||
signal.signal(signal.SIGINT, signal.SIG_IGN)
|
||||
bb.utils.set_process_name(multiprocessing.current_process().name)
|
||||
multiprocessing.util.Finalize(None, bb.codeparser.parser_cache_save, exitpriority=1)
|
||||
multiprocessing.util.Finalize(None, bb.fetch.fetcher_parse_save, exitpriority=1)
|
||||
|
||||
self.parser_quit = multiprocessing.Queue(maxsize=self.num_processes)
|
||||
self.result_queue = multiprocessing.Queue()
|
||||
@@ -2202,14 +2125,14 @@ class CookerParser(object):
|
||||
self.jobs = chunkify(list(self.willparse), self.num_processes)
|
||||
|
||||
for i in range(0, self.num_processes):
|
||||
parser = Parser(self.jobs[i], self.result_queue, self.parser_quit, self.cooker.configuration.profile)
|
||||
parser = Parser(self.jobs[i], self.result_queue, self.parser_quit, init, self.cooker.configuration.profile)
|
||||
parser.start()
|
||||
self.process_names.append(parser.name)
|
||||
self.processes.append(parser)
|
||||
|
||||
self.results = itertools.chain(self.results, self.parse_generator())
|
||||
|
||||
def shutdown(self, clean=True):
|
||||
def shutdown(self, clean=True, force=False):
|
||||
if not self.toparse:
|
||||
return
|
||||
if self.haveshutdown:
|
||||
@@ -2223,8 +2146,6 @@ class CookerParser(object):
|
||||
self.total)
|
||||
|
||||
bb.event.fire(event, self.cfgdata)
|
||||
else:
|
||||
bb.error("Parsing halted due to errors, see error messages above")
|
||||
|
||||
for process in self.processes:
|
||||
self.parser_quit.put(None)
|
||||
@@ -2238,24 +2159,11 @@ class CookerParser(object):
|
||||
break
|
||||
|
||||
for process in self.processes:
|
||||
process.join(0.5)
|
||||
|
||||
for process in self.processes:
|
||||
if process.exitcode is None:
|
||||
os.kill(process.pid, signal.SIGINT)
|
||||
|
||||
for process in self.processes:
|
||||
process.join(0.5)
|
||||
|
||||
for process in self.processes:
|
||||
if process.exitcode is None:
|
||||
if force:
|
||||
process.join(.1)
|
||||
process.terminate()
|
||||
|
||||
for process in self.processes:
|
||||
process.join()
|
||||
# Added in 3.7, cleans up zombies
|
||||
if hasattr(process, "close"):
|
||||
process.close()
|
||||
else:
|
||||
process.join()
|
||||
|
||||
self.parser_quit.close()
|
||||
# Allow data left in the cancel queue to be discarded
|
||||
@@ -2291,59 +2199,44 @@ class CookerParser(object):
|
||||
yield not cached, mc, infos
|
||||
|
||||
def parse_generator(self):
|
||||
empty = False
|
||||
while self.processes or not empty:
|
||||
for process in self.processes.copy():
|
||||
if not process.is_alive():
|
||||
process.join()
|
||||
self.processes.remove(process)
|
||||
|
||||
while True:
|
||||
if self.parsed >= self.toparse:
|
||||
break
|
||||
|
||||
try:
|
||||
result = self.result_queue.get(timeout=0.25)
|
||||
except queue.Empty:
|
||||
empty = True
|
||||
yield None, None, None
|
||||
pass
|
||||
else:
|
||||
empty = False
|
||||
yield result
|
||||
|
||||
if not (self.parsed >= self.toparse):
|
||||
raise bb.parse.ParseError("Not all recipes parsed, parser thread killed/died? Exiting.", None)
|
||||
|
||||
value = result[1]
|
||||
if isinstance(value, BaseException):
|
||||
raise value
|
||||
else:
|
||||
yield result
|
||||
|
||||
def parse_next(self):
|
||||
result = []
|
||||
parsed = None
|
||||
try:
|
||||
parsed, mc, result = next(self.results)
|
||||
if isinstance(result, BaseException):
|
||||
# Turn exceptions back into exceptions
|
||||
raise result
|
||||
if parsed is None:
|
||||
# Timeout, loop back through the main loop
|
||||
return True
|
||||
|
||||
except StopIteration:
|
||||
self.shutdown()
|
||||
return False
|
||||
except bb.BBHandledException as exc:
|
||||
self.error += 1
|
||||
logger.debug('Failed to parse recipe: %s' % exc.recipe)
|
||||
self.shutdown(clean=False)
|
||||
logger.error('Failed to parse recipe: %s' % exc.recipe)
|
||||
self.shutdown(clean=False, force=True)
|
||||
return False
|
||||
except ParsingFailure as exc:
|
||||
self.error += 1
|
||||
logger.error('Unable to parse %s: %s' %
|
||||
(exc.recipe, bb.exceptions.to_string(exc.realexception)))
|
||||
self.shutdown(clean=False)
|
||||
self.shutdown(clean=False, force=True)
|
||||
return False
|
||||
except bb.parse.ParseError as exc:
|
||||
self.error += 1
|
||||
logger.error(str(exc))
|
||||
self.shutdown(clean=False)
|
||||
self.shutdown(clean=False, force=True)
|
||||
return False
|
||||
except bb.data_smart.ExpansionError as exc:
|
||||
self.error += 1
|
||||
@@ -2352,7 +2245,7 @@ class CookerParser(object):
|
||||
tb = list(itertools.dropwhile(lambda e: e.filename.startswith(bbdir), exc.traceback))
|
||||
logger.error('ExpansionError during parsing %s', value.recipe,
|
||||
exc_info=(etype, value, tb))
|
||||
self.shutdown(clean=False)
|
||||
self.shutdown(clean=False, force=True)
|
||||
return False
|
||||
except Exception as exc:
|
||||
self.error += 1
|
||||
@@ -2364,7 +2257,7 @@ class CookerParser(object):
|
||||
# Most likely, an exception occurred during raising an exception
|
||||
import traceback
|
||||
logger.error('Exception during parse: %s' % traceback.format_exc())
|
||||
self.shutdown(clean=False)
|
||||
self.shutdown(clean=False, force=True)
|
||||
return False
|
||||
|
||||
self.current += 1
|
||||
|
||||
@@ -57,7 +57,7 @@ class ConfigParameters(object):
|
||||
|
||||
def updateToServer(self, server, environment):
|
||||
options = {}
|
||||
for o in ["halt", "force", "invalidate_stamp",
|
||||
for o in ["abort", "force", "invalidate_stamp",
|
||||
"dry_run", "dump_signatures",
|
||||
"extra_assume_provided", "profile",
|
||||
"prefile", "postfile", "server_timeout",
|
||||
@@ -86,7 +86,7 @@ class ConfigParameters(object):
|
||||
action['msg'] = "Only one target can be used with the --environment option."
|
||||
elif self.options.buildfile and len(self.options.pkgs_to_build) > 0:
|
||||
action['msg'] = "No target should be used with the --environment and --buildfile options."
|
||||
elif self.options.pkgs_to_build:
|
||||
elif len(self.options.pkgs_to_build) > 0:
|
||||
action['action'] = ["showEnvironmentTarget", self.options.pkgs_to_build]
|
||||
else:
|
||||
action['action'] = ["showEnvironment", self.options.buildfile]
|
||||
@@ -124,7 +124,7 @@ class CookerConfiguration(object):
|
||||
self.prefile = []
|
||||
self.postfile = []
|
||||
self.cmd = None
|
||||
self.halt = True
|
||||
self.abort = True
|
||||
self.force = False
|
||||
self.profile = False
|
||||
self.nosetscene = False
|
||||
@@ -160,7 +160,12 @@ def catch_parse_error(func):
|
||||
def wrapped(fn, *args):
|
||||
try:
|
||||
return func(fn, *args)
|
||||
except Exception as exc:
|
||||
except IOError as exc:
|
||||
import traceback
|
||||
parselog.critical(traceback.format_exc())
|
||||
parselog.critical("Unable to parse %s: %s" % (fn, exc))
|
||||
raise bb.BBHandledException()
|
||||
except bb.data_smart.ExpansionError as exc:
|
||||
import traceback
|
||||
|
||||
bbdir = os.path.dirname(__file__) + os.sep
|
||||
@@ -172,6 +177,9 @@ def catch_parse_error(func):
|
||||
break
|
||||
parselog.critical("Unable to parse %s" % fn, exc_info=(exc_class, exc, tb))
|
||||
raise bb.BBHandledException()
|
||||
except bb.parse.ParseError as exc:
|
||||
parselog.critical(str(exc))
|
||||
raise bb.BBHandledException()
|
||||
return wrapped
|
||||
|
||||
@catch_parse_error
|
||||
@@ -202,7 +210,7 @@ def findConfigFile(configfile, data):
|
||||
|
||||
#
|
||||
# We search for a conf/bblayers.conf under an entry in BBPATH or in cwd working
|
||||
# up to /. If that fails, bitbake would fall back to cwd.
|
||||
# up to /. If that fails, we search for a conf/bitbake.conf in BBPATH.
|
||||
#
|
||||
|
||||
def findTopdir():
|
||||
@@ -215,8 +223,11 @@ def findTopdir():
|
||||
layerconf = findConfigFile("bblayers.conf", d)
|
||||
if layerconf:
|
||||
return os.path.dirname(os.path.dirname(layerconf))
|
||||
|
||||
return os.path.abspath(os.getcwd())
|
||||
if bbpath:
|
||||
bitbakeconf = bb.utils.which(bbpath, "conf/bitbake.conf")
|
||||
if bitbakeconf:
|
||||
return os.path.dirname(os.path.dirname(bitbakeconf))
|
||||
return None
|
||||
|
||||
class CookerDataBuilder(object):
|
||||
|
||||
@@ -239,9 +250,6 @@ class CookerDataBuilder(object):
|
||||
self.savedenv = bb.data.init()
|
||||
for k in cookercfg.env:
|
||||
self.savedenv.setVar(k, cookercfg.env[k])
|
||||
if k in bb.data_smart.bitbake_renamed_vars:
|
||||
bb.error('Variable %s from the shell environment has been renamed to %s' % (k, bb.data_smart.bitbake_renamed_vars[k]))
|
||||
bb.fatal("Exiting to allow enviroment variables to be corrected")
|
||||
|
||||
filtered_keys = bb.utils.approved_variables()
|
||||
bb.data.inheritFromOS(self.basedata, self.savedenv, filtered_keys)
|
||||
@@ -253,12 +261,12 @@ class CookerDataBuilder(object):
|
||||
self.data = self.basedata
|
||||
self.mcdata = {}
|
||||
|
||||
def parseBaseConfiguration(self, worker=False):
|
||||
def parseBaseConfiguration(self):
|
||||
data_hash = hashlib.sha256()
|
||||
try:
|
||||
self.data = self.parseConfigurationFiles(self.prefiles, self.postfiles)
|
||||
|
||||
if self.data.getVar("BB_WORKERCONTEXT", False) is None and not worker:
|
||||
if self.data.getVar("BB_WORKERCONTEXT", False) is None:
|
||||
bb.fetch.fetcher_init(self.data)
|
||||
bb.parse.init_parser(self.data)
|
||||
bb.codeparser.parser_cache_init(self.data)
|
||||
@@ -283,8 +291,6 @@ class CookerDataBuilder(object):
|
||||
|
||||
multiconfig = (self.data.getVar("BBMULTICONFIG") or "").split()
|
||||
for config in multiconfig:
|
||||
if config[0].isdigit():
|
||||
bb.fatal("Multiconfig name '%s' is invalid as multiconfigs cannot start with a digit" % config)
|
||||
mcdata = self.parseConfigurationFiles(self.prefiles, self.postfiles, config)
|
||||
bb.event.fire(bb.event.ConfigParsed(), mcdata)
|
||||
self.mcdata[config] = mcdata
|
||||
@@ -293,28 +299,13 @@ class CookerDataBuilder(object):
|
||||
bb.event.fire(bb.event.MultiConfigParsed(self.mcdata), self.data)
|
||||
|
||||
self.data_hash = data_hash.hexdigest()
|
||||
except (SyntaxError, bb.BBHandledException):
|
||||
raise bb.BBHandledException()
|
||||
except bb.data_smart.ExpansionError as e:
|
||||
logger.error(str(e))
|
||||
raise bb.BBHandledException()
|
||||
|
||||
|
||||
# Handle obsolete variable names
|
||||
d = self.data
|
||||
renamedvars = d.getVarFlags('BB_RENAMED_VARIABLES') or {}
|
||||
renamedvars.update(bb.data_smart.bitbake_renamed_vars)
|
||||
issues = False
|
||||
for v in renamedvars:
|
||||
if d.getVar(v) != None or d.hasOverrides(v):
|
||||
issues = True
|
||||
loginfo = {}
|
||||
history = d.varhistory.get_variable_refs(v)
|
||||
for h in history:
|
||||
for line in history[h]:
|
||||
loginfo = {'file' : h, 'line' : line}
|
||||
bb.data.data_smart._print_rename_error(v, loginfo, renamedvars)
|
||||
if not history:
|
||||
bb.data.data_smart._print_rename_error(v, loginfo, renamedvars)
|
||||
if issues:
|
||||
except Exception:
|
||||
logger.exception("Error parsing configuration files")
|
||||
raise bb.BBHandledException()
|
||||
|
||||
# Create a copy so we can reset at a later date when UIs disconnect
|
||||
@@ -351,9 +342,6 @@ class CookerDataBuilder(object):
|
||||
layers = (data.getVar('BBLAYERS') or "").split()
|
||||
broken_layers = []
|
||||
|
||||
if not layers:
|
||||
bb.fatal("The bblayers.conf file doesn't contain any BBLAYERS definition")
|
||||
|
||||
data = bb.data.createCopy(data)
|
||||
approved = bb.utils.approved_variables()
|
||||
|
||||
@@ -408,8 +396,6 @@ class CookerDataBuilder(object):
|
||||
if c in collections_tmp:
|
||||
bb.fatal("Found duplicated BBFILE_COLLECTIONS '%s', check bblayers.conf or layer.conf to fix it." % c)
|
||||
compat = set((data.getVar("LAYERSERIES_COMPAT_%s" % c) or "").split())
|
||||
if compat and not layerseries:
|
||||
bb.fatal("No core layer found to work with layer '%s'. Missing entry in bblayers.conf?" % c)
|
||||
if compat and not (compat & layerseries):
|
||||
bb.fatal("Layer %s is not compatible with the core layer which only supports these series: %s (layer is compatible with %s)"
|
||||
% (c, " ".join(layerseries), " ".join(compat)))
|
||||
@@ -422,10 +408,7 @@ class CookerDataBuilder(object):
|
||||
msg += (" and bitbake did not find a conf/bblayers.conf file in"
|
||||
" the expected location.\nMaybe you accidentally"
|
||||
" invoked bitbake from the wrong directory?")
|
||||
bb.fatal(msg)
|
||||
|
||||
if not data.getVar("TOPDIR"):
|
||||
data.setVar("TOPDIR", os.path.abspath(os.getcwd()))
|
||||
raise SystemExit(msg)
|
||||
|
||||
data = parse_config_file(os.path.join("conf", "bitbake.conf"), data)
|
||||
|
||||
@@ -438,7 +421,7 @@ class CookerDataBuilder(object):
|
||||
for bbclass in bbclasses:
|
||||
data = _inherit(bbclass, data)
|
||||
|
||||
# Normally we only register event handlers at the end of parsing .bb files
|
||||
# Nomally we only register event handlers at the end of parsing .bb files
|
||||
# We register any handlers we've found so far here...
|
||||
for var in data.getVar('__BBHANDLERS', False) or []:
|
||||
handlerfn = data.getVarFlag(var, "filename", False)
|
||||
|
||||
@@ -1,6 +1,4 @@
|
||||
#
|
||||
# Copyright BitBake Contributors
|
||||
#
|
||||
# SPDX-License-Identifier: GPL-2.0-only
|
||||
#
|
||||
|
||||
@@ -76,26 +74,26 @@ def createDaemon(function, logfile):
|
||||
with open('/dev/null', 'r') as si:
|
||||
os.dup2(si.fileno(), sys.stdin.fileno())
|
||||
|
||||
with open(logfile, 'a+') as so:
|
||||
try:
|
||||
os.dup2(so.fileno(), sys.stdout.fileno())
|
||||
os.dup2(so.fileno(), sys.stderr.fileno())
|
||||
except io.UnsupportedOperation:
|
||||
sys.stdout = so
|
||||
try:
|
||||
so = open(logfile, 'a+')
|
||||
os.dup2(so.fileno(), sys.stdout.fileno())
|
||||
os.dup2(so.fileno(), sys.stderr.fileno())
|
||||
except io.UnsupportedOperation:
|
||||
sys.stdout = open(logfile, 'a+')
|
||||
|
||||
# Have stdout and stderr be the same so log output matches chronologically
|
||||
# and there aren't two separate buffers
|
||||
sys.stderr = sys.stdout
|
||||
# Have stdout and stderr be the same so log output matches chronologically
|
||||
# and there aren't two seperate buffers
|
||||
sys.stderr = sys.stdout
|
||||
|
||||
try:
|
||||
function()
|
||||
except Exception as e:
|
||||
traceback.print_exc()
|
||||
finally:
|
||||
bb.event.print_ui_queue()
|
||||
# os._exit() doesn't flush open files like os.exit() does. Manually flush
|
||||
# stdout and stderr so that any logging output will be seen, particularly
|
||||
# exception tracebacks.
|
||||
sys.stdout.flush()
|
||||
sys.stderr.flush()
|
||||
os._exit(0)
|
||||
try:
|
||||
function()
|
||||
except Exception as e:
|
||||
traceback.print_exc()
|
||||
finally:
|
||||
bb.event.print_ui_queue()
|
||||
# os._exit() doesn't flush open files like os.exit() does. Manually flush
|
||||
# stdout and stderr so that any logging output will be seen, particularly
|
||||
# exception tracebacks.
|
||||
sys.stdout.flush()
|
||||
sys.stderr.flush()
|
||||
os._exit(0)
|
||||
|
||||
@@ -226,7 +226,7 @@ def emit_func(func, o=sys.__stdout__, d = init()):
|
||||
deps = newdeps
|
||||
seen |= deps
|
||||
newdeps = set()
|
||||
for dep in sorted(deps):
|
||||
for dep in deps:
|
||||
if d.getVarFlag(dep, "func", False) and not d.getVarFlag(dep, "python", False):
|
||||
emit_var(dep, o, d, False) and o.write('\n')
|
||||
newdeps |= bb.codeparser.ShellParser(dep, logger).parse_shell(d.getVar(dep))
|
||||
@@ -272,37 +272,34 @@ def update_data(d):
|
||||
"""Performs final steps upon the datastore, including application of overrides"""
|
||||
d.finalize(parent = True)
|
||||
|
||||
def build_dependencies(key, keys, shelldeps, varflagsexcl, ignored_vars, d):
|
||||
def build_dependencies(key, keys, shelldeps, varflagsexcl, d):
|
||||
deps = set()
|
||||
try:
|
||||
if key[-1] == ']':
|
||||
vf = key[:-1].split('[')
|
||||
if vf[1] == "vardepvalueexclude":
|
||||
return deps, ""
|
||||
value, parser = d.getVarFlag(vf[0], vf[1], False, retparser=True)
|
||||
deps |= parser.references
|
||||
deps = deps | (keys & parser.execs)
|
||||
return deps, value
|
||||
varflags = d.getVarFlags(key, ["vardeps", "vardepvalue", "vardepsexclude", "exports", "postfuncs", "prefuncs", "lineno", "filename"]) or {}
|
||||
vardeps = varflags.get("vardeps")
|
||||
exclusions = varflags.get("vardepsexclude", "").split()
|
||||
|
||||
def handle_contains(value, contains, exclusions, d):
|
||||
newvalue = []
|
||||
if value:
|
||||
newvalue.append(str(value))
|
||||
def handle_contains(value, contains, d):
|
||||
newvalue = ""
|
||||
for k in sorted(contains):
|
||||
if k in exclusions or k in ignored_vars:
|
||||
continue
|
||||
l = (d.getVar(k) or "").split()
|
||||
for item in sorted(contains[k]):
|
||||
for word in item.split():
|
||||
if not word in l:
|
||||
newvalue.append("\n%s{%s} = Unset" % (k, item))
|
||||
newvalue += "\n%s{%s} = Unset" % (k, item)
|
||||
break
|
||||
else:
|
||||
newvalue.append("\n%s{%s} = Set" % (k, item))
|
||||
return "".join(newvalue)
|
||||
newvalue += "\n%s{%s} = Set" % (k, item)
|
||||
if not newvalue:
|
||||
return value
|
||||
if not value:
|
||||
return newvalue
|
||||
return value + newvalue
|
||||
|
||||
def handle_remove(value, deps, removes, d):
|
||||
for r in sorted(removes):
|
||||
@@ -310,7 +307,6 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, ignored_vars, d):
|
||||
value += "\n_remove of %s" % r
|
||||
deps |= r2.references
|
||||
deps = deps | (keys & r2.execs)
|
||||
value = handle_contains(value, r2.contains, exclusions, d)
|
||||
return value
|
||||
|
||||
if "vardepvalue" in varflags:
|
||||
@@ -322,7 +318,7 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, ignored_vars, d):
|
||||
parser.parse_python(value, filename=varflags.get("filename"), lineno=varflags.get("lineno"))
|
||||
deps = deps | parser.references
|
||||
deps = deps | (keys & parser.execs)
|
||||
value = handle_contains(value, parser.contains, exclusions, d)
|
||||
value = handle_contains(value, parser.contains, d)
|
||||
else:
|
||||
value, parsedvar = d.getVarFlag(key, "_content", False, retparser=True)
|
||||
parser = bb.codeparser.ShellParser(key, logger)
|
||||
@@ -330,7 +326,7 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, ignored_vars, d):
|
||||
deps = deps | shelldeps
|
||||
deps = deps | parsedvar.references
|
||||
deps = deps | (keys & parser.execs) | (keys & parsedvar.execs)
|
||||
value = handle_contains(value, parsedvar.contains, exclusions, d)
|
||||
value = handle_contains(value, parsedvar.contains, d)
|
||||
if hasattr(parsedvar, "removes"):
|
||||
value = handle_remove(value, deps, parsedvar.removes, d)
|
||||
if vardeps is None:
|
||||
@@ -345,7 +341,7 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, ignored_vars, d):
|
||||
value, parser = d.getVarFlag(key, "_content", False, retparser=True)
|
||||
deps |= parser.references
|
||||
deps = deps | (keys & parser.execs)
|
||||
value = handle_contains(value, parser.contains, exclusions, d)
|
||||
value = handle_contains(value, parser.contains, d)
|
||||
if hasattr(parser, "removes"):
|
||||
value = handle_remove(value, deps, parser.removes, d)
|
||||
|
||||
@@ -365,7 +361,7 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, ignored_vars, d):
|
||||
deps |= set(varfdeps)
|
||||
|
||||
deps |= set((vardeps or "").split())
|
||||
deps -= set(exclusions)
|
||||
deps -= set(varflags.get("vardepsexclude", "").split())
|
||||
except bb.parse.SkipRecipe:
|
||||
raise
|
||||
except Exception as e:
|
||||
@@ -375,7 +371,7 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, ignored_vars, d):
|
||||
#bb.note("Variable %s references %s and calls %s" % (key, str(deps), str(execs)))
|
||||
#d.setVarFlag(key, "vardeps", deps)
|
||||
|
||||
def generate_dependencies(d, ignored_vars):
|
||||
def generate_dependencies(d, whitelist):
|
||||
|
||||
keys = set(key for key in d if not key.startswith("__"))
|
||||
shelldeps = set(key for key in d.getVar("__exportlist", False) if d.getVarFlag(key, "export", False) and not d.getVarFlag(key, "unexport", False))
|
||||
@@ -386,22 +382,22 @@ def generate_dependencies(d, ignored_vars):
|
||||
|
||||
tasklist = d.getVar('__BBTASKS', False) or []
|
||||
for task in tasklist:
|
||||
deps[task], values[task] = build_dependencies(task, keys, shelldeps, varflagsexcl, ignored_vars, d)
|
||||
deps[task], values[task] = build_dependencies(task, keys, shelldeps, varflagsexcl, d)
|
||||
newdeps = deps[task]
|
||||
seen = set()
|
||||
while newdeps:
|
||||
nextdeps = newdeps - ignored_vars
|
||||
nextdeps = newdeps - whitelist
|
||||
seen |= nextdeps
|
||||
newdeps = set()
|
||||
for dep in nextdeps:
|
||||
if dep not in deps:
|
||||
deps[dep], values[dep] = build_dependencies(dep, keys, shelldeps, varflagsexcl, ignored_vars, d)
|
||||
deps[dep], values[dep] = build_dependencies(dep, keys, shelldeps, varflagsexcl, d)
|
||||
newdeps |= deps[dep]
|
||||
newdeps -= seen
|
||||
#print "For %s: %s" % (task, str(deps[task]))
|
||||
return tasklist, deps, values
|
||||
|
||||
def generate_dependency_hash(tasklist, gendeps, lookupcache, ignored_vars, fn):
|
||||
def generate_dependency_hash(tasklist, gendeps, lookupcache, whitelist, fn):
|
||||
taskdeps = {}
|
||||
basehash = {}
|
||||
|
||||
@@ -410,11 +406,9 @@ def generate_dependency_hash(tasklist, gendeps, lookupcache, ignored_vars, fn):
|
||||
|
||||
if data is None:
|
||||
bb.error("Task %s from %s seems to be empty?!" % (task, fn))
|
||||
data = []
|
||||
else:
|
||||
data = [data]
|
||||
data = ''
|
||||
|
||||
gendeps[task] -= ignored_vars
|
||||
gendeps[task] -= whitelist
|
||||
newdeps = gendeps[task]
|
||||
seen = set()
|
||||
while newdeps:
|
||||
@@ -422,20 +416,20 @@ def generate_dependency_hash(tasklist, gendeps, lookupcache, ignored_vars, fn):
|
||||
seen |= nextdeps
|
||||
newdeps = set()
|
||||
for dep in nextdeps:
|
||||
if dep in ignored_vars:
|
||||
if dep in whitelist:
|
||||
continue
|
||||
gendeps[dep] -= ignored_vars
|
||||
gendeps[dep] -= whitelist
|
||||
newdeps |= gendeps[dep]
|
||||
newdeps -= seen
|
||||
|
||||
alldeps = sorted(seen)
|
||||
for dep in alldeps:
|
||||
data.append(dep)
|
||||
data = data + dep
|
||||
var = lookupcache[dep]
|
||||
if var is not None:
|
||||
data.append(str(var))
|
||||
data = data + str(var)
|
||||
k = fn + ":" + task
|
||||
basehash[k] = hashlib.sha256("".join(data).encode("utf-8")).hexdigest()
|
||||
basehash[k] = hashlib.sha256(data.encode("utf-8")).hexdigest()
|
||||
taskdeps[task] = alldeps
|
||||
|
||||
return taskdeps, basehash
|
||||
|
||||
@@ -17,7 +17,7 @@ BitBake build tools.
|
||||
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
|
||||
|
||||
import copy, re, sys, traceback
|
||||
from collections.abc import MutableMapping
|
||||
from collections import MutableMapping
|
||||
import logging
|
||||
import hashlib
|
||||
import bb, bb.codeparser
|
||||
@@ -26,25 +26,13 @@ from bb.COW import COWDictBase
|
||||
|
||||
logger = logging.getLogger("BitBake.Data")
|
||||
|
||||
__setvar_keyword__ = [":append", ":prepend", ":remove"]
__setvar_regexp__ = re.compile(r'(?P<base>.*?)(?P<keyword>:append|:prepend|:remove)(:(?P<add>[^A-Z]*))?$')
__expand_var_regexp__ = re.compile(r"\${[a-zA-Z0-9\-_+./~:]+?}")
__setvar_keyword__ = ["_append", "_prepend", "_remove"]
__setvar_regexp__ = re.compile(r'(?P<base>.*?)(?P<keyword>_append|_prepend|_remove)(_(?P<add>[^A-Z]*))?$')
__expand_var_regexp__ = re.compile(r"\${[a-zA-Z0-9\-_+./~]+?}")
__expand_python_regexp__ = re.compile(r"\${@.+?}")
__whitespace_split__ = re.compile(r'(\s)')
__override_regexp__ = re.compile(r'[a-z0-9]+')

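The paired keyword lists above capture the operator syntax difference between the two releases being compared: the newer series matches colon-style operators while the older one matches underscores. A minimal illustrative metadata fragment showing the two forms side by side (the appended package is only an example value, not taken from this diff):

    # newer syntax, matched by the ":append" keyword list
    IMAGE_INSTALL:append = " dropbear"

    # equivalent older syntax, matched by the "_append" keyword list
    IMAGE_INSTALL_append = " dropbear"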
bitbake_renamed_vars = {
"BB_ENV_WHITELIST": "BB_ENV_PASSTHROUGH",
"BB_ENV_EXTRAWHITE": "BB_ENV_PASSTHROUGH_ADDITIONS",
"BB_HASHBASE_WHITELIST": "BB_BASEHASH_IGNORE_VARS",
"BB_HASHCONFIG_WHITELIST": "BB_HASHCONFIG_IGNORE_VARS",
"BB_HASHTASK_WHITELIST": "BB_TASKHASH_IGNORE_TASKS",
"BB_SETSCENE_ENFORCE_WHITELIST": "BB_SETSCENE_ENFORCE_IGNORE_TASKS",
"MULTI_PROVIDER_WHITELIST": "BB_MULTI_PROVIDER_ALLOWED",
"BB_STAMP_WHITELIST": "is a deprecated variable and support has been removed",
"BB_STAMP_POLICY": "is a deprecated variable and support has been removed",
}

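This table drives the rename warnings emitted during parsing. As a minimal illustration of what it means for user configuration (the value shown is hypothetical):

    # accepted by the older release
    BB_ENV_EXTRAWHITE = "MY_PROXY_SETTING"
    # the naming expected by the rename table in the newer release
    BB_ENV_PASSTHROUGH_ADDITIONS = "MY_PROXY_SETTING"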
def infer_caller_details(loginfo, parent = False, varval = True):
|
||||
"""Save the caller the trouble of specifying everything."""
|
||||
# Save effort.
|
||||
@@ -152,9 +140,6 @@ class DataContext(dict):
|
||||
self['d'] = metadata
|
||||
|
||||
def __missing__(self, key):
|
||||
# Skip commonly accessed invalid variables
|
||||
if key in ['bb', 'oe', 'int', 'bool', 'time', 'str', 'os']:
|
||||
raise KeyError(key)
|
||||
value = self.metadata.getVar(key)
|
||||
if value is None or self.metadata.getVarFlag(key, 'func', False):
|
||||
raise KeyError(key)
|
||||
@@ -166,7 +151,6 @@ class ExpansionError(Exception):
|
||||
self.expression = expression
|
||||
self.variablename = varname
|
||||
self.exception = exception
|
||||
self.varlist = [varname or expression or ""]
|
||||
if varname:
|
||||
if expression:
|
||||
self.msg = "Failure expanding variable %s, expression was %s which triggered exception %s: %s" % (varname, expression, type(exception).__name__, exception)
|
||||
@@ -176,14 +160,8 @@ class ExpansionError(Exception):
|
||||
self.msg = "Failure expanding expression %s which triggered exception %s: %s" % (expression, type(exception).__name__, exception)
|
||||
Exception.__init__(self, self.msg)
|
||||
self.args = (varname, expression, exception)
|
||||
|
||||
def addVar(self, varname):
|
||||
if varname:
|
||||
self.varlist.append(varname)
|
||||
|
||||
def __str__(self):
|
||||
chain = "\nThe variable dependency chain for the failure is: " + " -> ".join(self.varlist)
|
||||
return self.msg + chain
|
||||
return self.msg
|
||||
|
||||
class IncludeHistory(object):
|
||||
def __init__(self, parent = None, filename = '[TOP LEVEL]'):
|
||||
@@ -261,9 +239,12 @@ class VariableHistory(object):
|
||||
return
|
||||
if 'op' not in loginfo or not loginfo['op']:
|
||||
loginfo['op'] = 'set'
|
||||
if 'detail' in loginfo:
|
||||
loginfo['detail'] = str(loginfo['detail'])
|
||||
if 'variable' not in loginfo or 'file' not in loginfo:
|
||||
raise ValueError("record() missing variable or file.")
|
||||
var = loginfo['variable']
|
||||
|
||||
if var not in self.variables:
|
||||
self.variables[var] = []
|
||||
if not isinstance(self.variables[var], list):
|
||||
@@ -296,7 +277,7 @@ class VariableHistory(object):
|
||||
for (r, override) in d.overridedata[var]:
|
||||
for event in self.variable(r):
|
||||
loginfo = event.copy()
|
||||
if 'flag' in loginfo and not loginfo['flag'].startswith(("_", ":")):
|
||||
if 'flag' in loginfo and not loginfo['flag'].startswith("_"):
|
||||
continue
|
||||
loginfo['variable'] = var
|
||||
loginfo['op'] = 'override[%s]:%s' % (override, loginfo['op'])
|
||||
@@ -322,8 +303,7 @@ class VariableHistory(object):
|
||||
flag = '[%s] ' % (event['flag'])
|
||||
else:
|
||||
flag = ''
|
||||
o.write("# %s %s:%s%s\n# %s\"%s\"\n" % \
|
||||
(event['op'], event['file'], event['line'], display_func, flag, re.sub('\n', '\n# ', str(event['detail']))))
|
||||
o.write("# %s %s:%s%s\n# %s\"%s\"\n" % (event['op'], event['file'], event['line'], display_func, flag, re.sub('\n', '\n# ', event['detail'])))
|
||||
if len(history) > 1:
|
||||
o.write("# pre-expansion value:\n")
|
||||
o.write('# "%s"\n' % (commentVal))
|
||||
@@ -349,16 +329,6 @@ class VariableHistory(object):
|
||||
lines.append(line)
|
||||
return lines
|
||||
|
||||
def get_variable_refs(self, var):
|
||||
"""Return a dict of file/line references"""
|
||||
var_history = self.variable(var)
|
||||
refs = {}
|
||||
for event in var_history:
|
||||
if event['file'] not in refs:
|
||||
refs[event['file']] = []
|
||||
refs[event['file']].append(event['line'])
|
||||
return refs
|
||||
|
||||
def get_variable_items_files(self, var):
|
||||
"""
|
||||
Use variable history to map items added to a list variable and
|
||||
@@ -372,12 +342,12 @@ class VariableHistory(object):
|
||||
for event in history:
|
||||
if 'flag' in event:
|
||||
continue
|
||||
if event['op'] == ':remove':
|
||||
if event['op'] == '_remove':
|
||||
continue
|
||||
if isset and event['op'] == 'set?':
|
||||
continue
|
||||
isset = True
|
||||
items = d.expand(str(event['detail'])).split()
|
||||
items = d.expand(event['detail']).split()
|
||||
for item in items:
|
||||
# This is a little crude but is belt-and-braces to avoid us
|
||||
# having to handle every possible operation type specifically
|
||||
@@ -393,23 +363,6 @@ class VariableHistory(object):
|
||||
else:
|
||||
self.variables[var] = []
|
||||
|
||||
def _print_rename_error(var, loginfo, renamedvars, fullvar=None):
|
||||
info = ""
|
||||
if "file" in loginfo:
|
||||
info = " file: %s" % loginfo["file"]
|
||||
if "line" in loginfo:
|
||||
info += " line: %s" % loginfo["line"]
|
||||
if fullvar and fullvar != var:
|
||||
info += " referenced as: %s" % fullvar
|
||||
if info:
|
||||
info = " (%s)" % info.strip()
|
||||
renameinfo = renamedvars[var]
|
||||
if " " in renameinfo:
|
||||
# A space signals a string to display instead of a rename
|
||||
bb.erroronce('Variable %s %s%s' % (var, renameinfo, info))
|
||||
else:
|
||||
bb.erroronce('Variable %s has been renamed to %s%s' % (var, renameinfo, info))
|
||||
|
||||
class DataSmart(MutableMapping):
|
||||
def __init__(self):
|
||||
self.dict = {}
|
||||
@@ -417,8 +370,6 @@ class DataSmart(MutableMapping):
|
||||
self.inchistory = IncludeHistory()
|
||||
self.varhistory = VariableHistory(self)
|
||||
self._tracking = False
|
||||
self._var_renames = {}
|
||||
self._var_renames.update(bitbake_renamed_vars)
|
||||
|
||||
self.expand_cache = {}
|
||||
|
||||
@@ -452,17 +403,14 @@ class DataSmart(MutableMapping):
|
||||
s = __expand_python_regexp__.sub(varparse.python_sub, s)
|
||||
except SyntaxError as e:
|
||||
# Likely unmatched brackets, just don't expand the expression
|
||||
if e.msg != "EOL while scanning string literal" and not e.msg.startswith("unterminated string literal"):
|
||||
if e.msg != "EOL while scanning string literal":
|
||||
raise
|
||||
if s == olds:
|
||||
break
|
||||
except ExpansionError as e:
|
||||
e.addVar(varname)
|
||||
except ExpansionError:
|
||||
raise
|
||||
except bb.parse.SkipRecipe:
|
||||
raise
|
||||
except bb.BBHandledException:
|
||||
raise
|
||||
except Exception as exc:
|
||||
tb = sys.exc_info()[2]
|
||||
raise ExpansionError(varname, s, exc).with_traceback(tb) from exc
|
||||
@@ -530,26 +478,9 @@ class DataSmart(MutableMapping):
|
||||
else:
|
||||
self.initVar(var)
|
||||
|
||||
def hasOverrides(self, var):
|
||||
return var in self.overridedata
|
||||
|
||||
def setVar(self, var, value, **loginfo):
|
||||
#print("var=" + str(var) + " val=" + str(value))
|
||||
|
||||
if not var.startswith("__anon_") and ("_append" in var or "_prepend" in var or "_remove" in var):
|
||||
info = "%s" % var
|
||||
if "file" in loginfo:
|
||||
info += " file: %s" % loginfo["file"]
|
||||
if "line" in loginfo:
|
||||
info += " line: %s" % loginfo["line"]
|
||||
bb.fatal("Variable %s contains an operation using the old override syntax. Please convert this layer/metadata before attempting to use with a newer bitbake." % info)
|
||||
|
||||
shortvar = var.split(":", 1)[0]
|
||||
if shortvar in self._var_renames:
|
||||
_print_rename_error(shortvar, loginfo, self._var_renames, fullvar=var)
|
||||
# Mark that we have seen a renamed variable
|
||||
self.setVar("_FAILPARSINGERRORHANDLED", True)
|
||||
|
||||
self.expand_cache = {}
|
||||
parsing=False
|
||||
if 'parsing' in loginfo:
|
||||
@@ -578,7 +509,7 @@ class DataSmart(MutableMapping):
|
||||
# pay the cookie monster
|
||||
|
||||
# more cookies for the cookie monster
|
||||
if ':' in var:
|
||||
if '_' in var:
|
||||
self._setvar_update_overrides(base, **loginfo)
|
||||
|
||||
if base in self.overridevars:
|
||||
@@ -589,27 +520,27 @@ class DataSmart(MutableMapping):
|
||||
self._makeShadowCopy(var)
|
||||
|
||||
if not parsing:
|
||||
if ":append" in self.dict[var]:
|
||||
del self.dict[var][":append"]
|
||||
if ":prepend" in self.dict[var]:
|
||||
del self.dict[var][":prepend"]
|
||||
if ":remove" in self.dict[var]:
|
||||
del self.dict[var][":remove"]
|
||||
if "_append" in self.dict[var]:
|
||||
del self.dict[var]["_append"]
|
||||
if "_prepend" in self.dict[var]:
|
||||
del self.dict[var]["_prepend"]
|
||||
if "_remove" in self.dict[var]:
|
||||
del self.dict[var]["_remove"]
|
||||
if var in self.overridedata:
|
||||
active = []
|
||||
self.need_overrides()
|
||||
for (r, o) in self.overridedata[var]:
|
||||
if o in self.overridesset:
|
||||
active.append(r)
|
||||
elif ":" in o:
|
||||
if set(o.split(":")).issubset(self.overridesset):
|
||||
elif "_" in o:
|
||||
if set(o.split("_")).issubset(self.overridesset):
|
||||
active.append(r)
|
||||
for a in active:
|
||||
self.delVar(a)
|
||||
del self.overridedata[var]
|
||||
|
||||
# more cookies for the cookie monster
|
||||
if ':' in var:
|
||||
if '_' in var:
|
||||
self._setvar_update_overrides(var, **loginfo)
|
||||
|
||||
# setting var
|
||||
@@ -635,8 +566,8 @@ class DataSmart(MutableMapping):
|
||||
|
||||
def _setvar_update_overrides(self, var, **loginfo):
|
||||
# aka pay the cookie monster
|
||||
override = var[var.rfind(':')+1:]
|
||||
shortvar = var[:var.rfind(':')]
|
||||
override = var[var.rfind('_')+1:]
|
||||
shortvar = var[:var.rfind('_')]
|
||||
while override and __override_regexp__.match(override):
|
||||
if shortvar not in self.overridedata:
|
||||
self.overridedata[shortvar] = []
|
||||
@@ -645,9 +576,9 @@ class DataSmart(MutableMapping):
|
||||
self.overridedata[shortvar] = list(self.overridedata[shortvar])
|
||||
self.overridedata[shortvar].append([var, override])
|
||||
override = None
|
||||
if ":" in shortvar:
|
||||
override = var[shortvar.rfind(':')+1:]
|
||||
shortvar = var[:shortvar.rfind(':')]
|
||||
if "_" in shortvar:
|
||||
override = var[shortvar.rfind('_')+1:]
|
||||
shortvar = var[:shortvar.rfind('_')]
|
||||
if len(shortvar) == 0:
|
||||
override = None
|
||||
|
||||
@@ -671,11 +602,10 @@ class DataSmart(MutableMapping):
|
||||
self.varhistory.record(**loginfo)
|
||||
self.setVar(newkey, val, ignore=True, parsing=True)
|
||||
|
||||
srcflags = self.getVarFlags(key, False, True) or {}
|
||||
for i in srcflags:
|
||||
if i not in (__setvar_keyword__):
|
||||
for i in (__setvar_keyword__):
|
||||
src = self.getVarFlag(key, i, False)
|
||||
if src is None:
|
||||
continue
|
||||
src = srcflags[i]
|
||||
|
||||
dest = self.getVarFlag(newkey, i, False) or []
|
||||
dest.extend(src)
|
||||
@@ -687,7 +617,7 @@ class DataSmart(MutableMapping):
|
||||
self.overridedata[newkey].append([v.replace(key, newkey), o])
|
||||
self.renameVar(v, v.replace(key, newkey))
|
||||
|
||||
if ':' in newkey and val is None:
|
||||
if '_' in newkey and val is None:
|
||||
self._setvar_update_overrides(newkey, **loginfo)
|
||||
|
||||
loginfo['variable'] = key
|
||||
@@ -699,12 +629,12 @@ class DataSmart(MutableMapping):
|
||||
def appendVar(self, var, value, **loginfo):
|
||||
loginfo['op'] = 'append'
|
||||
self.varhistory.record(**loginfo)
|
||||
self.setVar(var + ":append", value, ignore=True, parsing=True)
|
||||
self.setVar(var + "_append", value, ignore=True, parsing=True)
|
||||
|
||||
def prependVar(self, var, value, **loginfo):
|
||||
loginfo['op'] = 'prepend'
|
||||
self.varhistory.record(**loginfo)
|
||||
self.setVar(var + ":prepend", value, ignore=True, parsing=True)
|
||||
self.setVar(var + "_prepend", value, ignore=True, parsing=True)
|
||||
|
||||
def delVar(self, var, **loginfo):
|
||||
self.expand_cache = {}
|
||||
@@ -715,9 +645,9 @@ class DataSmart(MutableMapping):
|
||||
self.dict[var] = {}
|
||||
if var in self.overridedata:
|
||||
del self.overridedata[var]
|
||||
if ':' in var:
|
||||
override = var[var.rfind(':')+1:]
|
||||
shortvar = var[:var.rfind(':')]
|
||||
if '_' in var:
|
||||
override = var[var.rfind('_')+1:]
|
||||
shortvar = var[:var.rfind('_')]
|
||||
while override and override.islower():
|
||||
try:
|
||||
if shortvar in self.overridedata:
|
||||
@@ -727,23 +657,15 @@ class DataSmart(MutableMapping):
|
||||
except ValueError as e:
|
||||
pass
|
||||
override = None
|
||||
if ":" in shortvar:
|
||||
override = var[shortvar.rfind(':')+1:]
|
||||
shortvar = var[:shortvar.rfind(':')]
|
||||
if "_" in shortvar:
|
||||
override = var[shortvar.rfind('_')+1:]
|
||||
shortvar = var[:shortvar.rfind('_')]
|
||||
if len(shortvar) == 0:
|
||||
override = None
|
||||
|
||||
def setVarFlag(self, var, flag, value, **loginfo):
|
||||
self.expand_cache = {}
|
||||
|
||||
if var == "BB_RENAMED_VARIABLES":
|
||||
self._var_renames[flag] = value
|
||||
|
||||
if var in self._var_renames:
|
||||
_print_rename_error(var, loginfo, self._var_renames)
|
||||
# Mark that we have seen a renamed variable
|
||||
self.setVar("_FAILPARSINGERRORHANDLED", True)
|
||||
|
||||
if 'op' not in loginfo:
|
||||
loginfo['op'] = "set"
|
||||
loginfo['flag'] = flag
|
||||
@@ -752,7 +674,7 @@ class DataSmart(MutableMapping):
|
||||
self._makeShadowCopy(var)
|
||||
self.dict[var][flag] = value
|
||||
|
||||
if flag == "_defaultval" and ':' in var:
|
||||
if flag == "_defaultval" and '_' in var:
|
||||
self._setvar_update_overrides(var, **loginfo)
|
||||
if flag == "_defaultval" and var in self.overridevars:
|
||||
self._setvar_update_overridevars(var, value)
|
||||
@@ -784,11 +706,11 @@ class DataSmart(MutableMapping):
|
||||
active = {}
|
||||
self.need_overrides()
|
||||
for (r, o) in overridedata:
|
||||
# FIXME What about double overrides both with "_" in the name?
|
||||
# What about double overrides both with "_" in the name?
|
||||
if o in self.overridesset:
|
||||
active[o] = r
|
||||
elif ":" in o:
|
||||
if set(o.split(":")).issubset(self.overridesset):
|
||||
elif "_" in o:
|
||||
if set(o.split("_")).issubset(self.overridesset):
|
||||
active[o] = r
|
||||
|
||||
mod = True
|
||||
@@ -796,10 +718,10 @@ class DataSmart(MutableMapping):
|
||||
mod = False
|
||||
for o in self.overrides:
|
||||
for a in active.copy():
|
||||
if a.endswith(":" + o):
|
||||
if a.endswith("_" + o):
|
||||
t = active[a]
|
||||
del active[a]
|
||||
active[a.replace(":" + o, "")] = t
|
||||
active[a.replace("_" + o, "")] = t
|
||||
mod = True
|
||||
elif a == o:
|
||||
match = active[a]
|
||||
@@ -818,31 +740,31 @@ class DataSmart(MutableMapping):
|
||||
value = copy.copy(local_var["_defaultval"])
|
||||
|
||||
|
||||
if flag == "_content" and local_var is not None and ":append" in local_var and not parsing:
|
||||
if flag == "_content" and local_var is not None and "_append" in local_var and not parsing:
|
||||
if not value:
|
||||
value = ""
|
||||
self.need_overrides()
|
||||
for (r, o) in local_var[":append"]:
|
||||
for (r, o) in local_var["_append"]:
|
||||
match = True
|
||||
if o:
|
||||
for o2 in o.split(":"):
|
||||
for o2 in o.split("_"):
|
||||
if not o2 in self.overrides:
|
||||
match = False
|
||||
if match:
|
||||
if value is None:
|
||||
value = ""
|
||||
value = value + r
|
||||
|
||||
if flag == "_content" and local_var is not None and ":prepend" in local_var and not parsing:
|
||||
if flag == "_content" and local_var is not None and "_prepend" in local_var and not parsing:
|
||||
if not value:
|
||||
value = ""
|
||||
self.need_overrides()
|
||||
for (r, o) in local_var[":prepend"]:
|
||||
for (r, o) in local_var["_prepend"]:
|
||||
|
||||
match = True
|
||||
if o:
|
||||
for o2 in o.split(":"):
|
||||
for o2 in o.split("_"):
|
||||
if not o2 in self.overrides:
|
||||
match = False
|
||||
if match:
|
||||
if value is None:
|
||||
value = ""
|
||||
value = r + value
|
||||
|
||||
parser = None
|
||||
@@ -851,12 +773,12 @@ class DataSmart(MutableMapping):
|
||||
if expand:
|
||||
value = parser.value
|
||||
|
||||
if value and flag == "_content" and local_var is not None and ":remove" in local_var and not parsing:
|
||||
if value and flag == "_content" and local_var is not None and "_remove" in local_var and not parsing:
|
||||
self.need_overrides()
|
||||
for (r, o) in local_var[":remove"]:
|
||||
for (r, o) in local_var["_remove"]:
|
||||
match = True
|
||||
if o:
|
||||
for o2 in o.split(":"):
|
||||
for o2 in o.split("_"):
|
||||
if not o2 in self.overrides:
|
||||
match = False
|
||||
if match:
|
||||
@@ -869,7 +791,7 @@ class DataSmart(MutableMapping):
|
||||
expanded_removes[r] = self.expand(r).split()
|
||||
|
||||
parser.removes = set()
|
||||
val = []
|
||||
val = ""
|
||||
for v in __whitespace_split__.split(parser.value):
|
||||
skip = False
|
||||
for r in removes:
|
||||
@@ -878,8 +800,8 @@ class DataSmart(MutableMapping):
|
||||
skip = True
|
||||
if skip:
|
||||
continue
|
||||
val.append(v)
|
||||
parser.value = "".join(val)
|
||||
val = val + v
|
||||
parser.value = val
|
||||
if expand:
|
||||
value = parser.value
|
||||
|
||||
@@ -942,7 +864,7 @@ class DataSmart(MutableMapping):
|
||||
|
||||
if local_var:
|
||||
for i in local_var:
|
||||
if i.startswith(("_", ":")) and not internalflags:
|
||||
if i.startswith("_") and not internalflags:
|
||||
continue
|
||||
flags[i] = local_var[i]
|
||||
if expand and i in expand:
|
||||
@@ -983,7 +905,6 @@ class DataSmart(MutableMapping):
|
||||
data.inchistory = self.inchistory.copy()
|
||||
|
||||
data._tracking = self._tracking
|
||||
data._var_renames = self._var_renames
|
||||
|
||||
data.overrides = None
|
||||
data.overridevars = copy.copy(self.overridevars)
|
||||
@@ -1006,7 +927,7 @@ class DataSmart(MutableMapping):
|
||||
value = self.getVar(variable, False)
|
||||
for key in keys:
|
||||
referrervalue = self.getVar(key, False)
|
||||
if referrervalue and isinstance(referrervalue, str) and ref in referrervalue:
|
||||
if referrervalue and ref in referrervalue:
|
||||
self.setVar(key, referrervalue.replace(ref, value))
|
||||
|
||||
def localkeys(self):
|
||||
@@ -1041,8 +962,8 @@ class DataSmart(MutableMapping):
|
||||
for (r, o) in self.overridedata[var]:
|
||||
if o in self.overridesset:
|
||||
overrides.add(var)
|
||||
elif ":" in o:
|
||||
if set(o.split(":")).issubset(self.overridesset):
|
||||
elif "_" in o:
|
||||
if set(o.split("_")).issubset(self.overridesset):
|
||||
overrides.add(var)
|
||||
|
||||
for k in keylist(self.dict):
|
||||
@@ -1072,10 +993,10 @@ class DataSmart(MutableMapping):
|
||||
d = self.createCopy()
|
||||
bb.data.expandKeys(d)
|
||||
|
||||
config_ignore_vars = set((d.getVar("BB_HASHCONFIG_IGNORE_VARS") or "").split())
|
||||
config_whitelist = set((d.getVar("BB_HASHCONFIG_WHITELIST") or "").split())
|
||||
keys = set(key for key in iter(d) if not key.startswith("__"))
|
||||
for key in keys:
|
||||
if key in config_ignore_vars:
|
||||
if key in config_whitelist:
|
||||
continue
|
||||
|
||||
value = d.getVar(key, False) or ""
|
||||
|
||||
@@ -40,7 +40,7 @@ class HeartbeatEvent(Event):
"""Triggered at regular time intervals of 10 seconds. Other events can fire much more often
(runQueueTaskStarted when there are many short tasks) or not at all for long periods
of time (again runQueueTaskStarted, when there is just one long-running task), so this
event is more suitable for doing some task-independent work occasionally."""
event is more suitable for doing some task-independent work occassionally."""
def __init__(self, time):
Event.__init__(self)
self.time = time
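The docstring above describes when this event is useful; a minimal sketch of consuming it from a class or recipe, assuming the usual addhandler/eventmask registration (the handler name and log message are hypothetical):

    addhandler my_heartbeat_handler
    my_heartbeat_handler[eventmask] = "bb.event.HeartbeatEvent"
    python my_heartbeat_handler() {
        # 'e' is the fired HeartbeatEvent; e.time is the timestamp set in __init__ above
        bb.note("heartbeat received at %s" % e.time)
    }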
@@ -132,14 +132,8 @@ def print_ui_queue():
|
||||
if not _uiready:
|
||||
from bb.msg import BBLogFormatter
|
||||
# Flush any existing buffered content
|
||||
try:
|
||||
sys.stdout.flush()
|
||||
except:
|
||||
pass
|
||||
try:
|
||||
sys.stderr.flush()
|
||||
except:
|
||||
pass
|
||||
sys.stdout.flush()
|
||||
sys.stderr.flush()
|
||||
stdout = logging.StreamHandler(sys.stdout)
|
||||
stderr = logging.StreamHandler(sys.stderr)
|
||||
formatter = BBLogFormatter("%(levelname)s: %(message)s")
|
||||
@@ -492,7 +486,7 @@ class BuildCompleted(BuildBase, OperationCompleted):
|
||||
BuildBase.__init__(self, n, p, failures)
|
||||
|
||||
class DiskFull(Event):
|
||||
"""Disk full case build halted"""
|
||||
"""Disk full case build aborted"""
|
||||
def __init__(self, dev, type, freespace, mountpoint):
|
||||
Event.__init__(self)
|
||||
self._dev = dev
|
||||
@@ -770,7 +764,7 @@ class LogHandler(logging.Handler):
|
||||
class MetadataEvent(Event):
|
||||
"""
|
||||
Generic event that target for OE-Core classes
|
||||
to report information during asynchronous execution
|
||||
to report information during asynchrous execution
|
||||
"""
|
||||
def __init__(self, eventtype, eventdata):
|
||||
Event.__init__(self)
|
||||
|
||||
@@ -1,6 +1,4 @@
|
||||
#
|
||||
# Copyright BitBake Contributors
|
||||
#
|
||||
# SPDX-License-Identifier: GPL-2.0-only
|
||||
#
|
||||
|
||||
|
||||
@@ -1,57 +0,0 @@
There are expectations of users of the fetcher code. This file attempts to document
some of the constraints that are present. Some are obvious, some are less so. It is
documented in the context of how OE uses it but the API calls are generic.

a) network access for sources is only expected to happen in the do_fetch task.
   This is not enforced or tested but is required so that we can:

   i) audit the sources used (i.e. for license/manifest reasons)
   ii) support offline builds with a suitable cache
   iii) allow work to continue even with downtime upstream
   iv) allow for changes upstream in incompatible ways
   v) allow rebuilding of the software in X years time

b) network access is not expected in the do_unpack task.

c) you can take DL_DIR and use it as a mirror for offline builds.

d) access to the network is only made when explicitly configured in recipes
   (e.g. use of AUTOREV, or use of git tags which change revision).

e) fetcher output is deterministic (i.e. if you fetch configuration XXX now it
   will match in future exactly in a clean build with a new DL_DIR).
   One specific pain point example is git tags. They can be replaced and change,
   so the git fetcher has to resolve them with the network. We use git revisions
   where possible to avoid this and ensure determinism.

f) network access is expected to work with the standard linux proxy variables
   so that access behind firewalls works (the fetcher sets these in the
   environment but only in the do_fetch tasks).

g) access during parsing has to be minimal; a "git ls-remote" for an AUTOREV
   git recipe might be ok but you can't expect to checkout a git tree.

h) we need to provide revision information during parsing such that a version
   for the recipe can be constructed.

i) versions are expected to be able to increase in a way which sorts, allowing
   package feeds to operate (see the PR server, required for git revisions to sort).

j) an API to query for possible version upgrades of a url is highly desirable to
   allow our automated upgrade code to function (it is implied this does always
   have network access).

k) where fixes or changes to behaviour in the fetcher are made, we ask that
   test cases are added (run with "bitbake-selftest bb.tests.fetch"). We do
   have fairly extensive test coverage of the fetcher as it is the only way
   to track all of its corner cases; sadly it still doesn't give complete coverage.

l) if using tools during parse time, they will have to be in ASSUME_PROVIDED
   in OE's context as we can't build git-native, then parse a recipe and use
   git ls-remote.

Not all fetchers support all features; autorev is optional and doesn't make
sense for some. Upgrade detection means different things in different contexts
too.
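As a rough sketch of how these expectations map onto the API (the datastore d and workdir are assumed to be in scope; error handling omitted):

    from bb.fetch2 import Fetch

    fetcher = Fetch(d.getVar("SRC_URI").split(), d)
    fetcher.download()       # network access happens here (do_fetch)
    fetcher.unpack(workdir)  # no network access expected here (do_unpack)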
@@ -113,7 +113,7 @@ class MissingParameterError(BBFetchException):
|
||||
self.args = (missing, url)
|
||||
|
||||
class ParameterError(BBFetchException):
|
||||
"""Exception raised when a url cannot be processed due to invalid parameters."""
|
||||
"""Exception raised when a url cannot be proccessed due to invalid parameters."""
|
||||
def __init__(self, message, url):
|
||||
msg = "URL: '%s' has invalid parameters. %s" % (url, message)
|
||||
self.url = url
|
||||
@@ -182,7 +182,7 @@ class URI(object):
|
||||
Some notes about relative URIs: while it's specified that
|
||||
a URI beginning with <scheme>:// should either be directly
|
||||
followed by a hostname or a /, the old URI handling of the
|
||||
fetch2 library did not conform to this. Therefore, this URI
|
||||
fetch2 library did not comform to this. Therefore, this URI
|
||||
class has some kludges to make sure that URIs are parsed in
|
||||
a way comforming to bitbake's current usage. This URI class
|
||||
supports the following:
|
||||
@@ -199,7 +199,7 @@ class URI(object):
|
||||
file://hostname/absolute/path.diff (would be IETF compliant)
|
||||
|
||||
Note that the last case only applies to a list of
|
||||
explicitly allowed schemes (currently only file://), that requires
|
||||
"whitelisted" schemes (currently only file://), that requires
|
||||
its URIs to not have a network location.
|
||||
"""
|
||||
|
||||
@@ -402,24 +402,24 @@ def encodeurl(decoded):
|
||||
|
||||
if not type:
|
||||
raise MissingParameterError('type', "encoded from the data %s" % str(decoded))
|
||||
url = ['%s://' % type]
|
||||
url = '%s://' % type
|
||||
if user and type != "file":
|
||||
url.append("%s" % user)
|
||||
url += "%s" % user
|
||||
if pswd:
|
||||
url.append(":%s" % pswd)
|
||||
url.append("@")
|
||||
url += ":%s" % pswd
|
||||
url += "@"
|
||||
if host and type != "file":
|
||||
url.append("%s" % host)
|
||||
url += "%s" % host
|
||||
if path:
|
||||
# Standardise path to ensure comparisons work
|
||||
while '//' in path:
|
||||
path = path.replace("//", "/")
|
||||
url.append("%s" % urllib.parse.quote(path))
|
||||
url += "%s" % urllib.parse.quote(path)
|
||||
if p:
|
||||
for parm in p:
|
||||
url.append(";%s=%s" % (parm, p[parm]))
|
||||
url += ";%s=%s" % (parm, p[parm])
|
||||
|
||||
return "".join(url)
|
||||
return url
|
||||
|
||||
def uri_replace(ud, uri_find, uri_replace, replacements, d, mirrortarball=None):
|
||||
if not ud.url or not uri_find or not uri_replace:
|
||||
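For reference, encodeurl() consumes the same six-element layout that decodeurl() produces; a small round-trip sketch (the URL is an example only):

    from bb.fetch2 import decodeurl, encodeurl

    decoded = decodeurl("git://git.example.com/repo.git;protocol=https;branch=main")
    # decoded is (type, host, path, user, pswd, params)
    rebuilt = encodeurl(decoded)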
@@ -430,7 +430,6 @@ def uri_replace(ud, uri_find, uri_replace, replacements, d, mirrortarball=None):
|
||||
uri_replace_decoded = list(decodeurl(uri_replace))
|
||||
logger.debug2("For url %s comparing %s to %s" % (uri_decoded, uri_find_decoded, uri_replace_decoded))
|
||||
result_decoded = ['', '', '', '', '', {}]
|
||||
# 0 - type, 1 - host, 2 - path, 3 - user, 4- pswd, 5 - params
|
||||
for loc, i in enumerate(uri_find_decoded):
|
||||
result_decoded[loc] = uri_decoded[loc]
|
||||
regexp = i
|
||||
@@ -450,9 +449,6 @@ def uri_replace(ud, uri_find, uri_replace, replacements, d, mirrortarball=None):
|
||||
for l in replacements:
|
||||
uri_replace_decoded[loc][k] = uri_replace_decoded[loc][k].replace(l, replacements[l])
|
||||
result_decoded[loc][k] = uri_replace_decoded[loc][k]
|
||||
elif (loc == 3 or loc == 4) and uri_replace_decoded[loc]:
|
||||
# User/password in the replacement is just a straight replacement
|
||||
result_decoded[loc] = uri_replace_decoded[loc]
|
||||
elif (re.match(regexp, uri_decoded[loc])):
|
||||
if not uri_replace_decoded[loc]:
|
||||
result_decoded[loc] = ""
|
||||
@@ -471,15 +467,8 @@ def uri_replace(ud, uri_find, uri_replace, replacements, d, mirrortarball=None):
|
||||
uri_decoded[5] = {}
|
||||
elif ud.localpath and ud.method.supports_checksum(ud):
|
||||
basename = os.path.basename(ud.localpath)
|
||||
if basename:
|
||||
uri_basename = os.path.basename(uri_decoded[loc])
|
||||
# Prefix with a slash as a sentinel in case
|
||||
# result_decoded[loc] does not contain one.
|
||||
path = "/" + result_decoded[loc]
|
||||
if uri_basename and basename != uri_basename and path.endswith("/" + uri_basename):
|
||||
result_decoded[loc] = path[1:-len(uri_basename)] + basename
|
||||
elif not path.endswith("/" + basename):
|
||||
result_decoded[loc] = os.path.join(path[1:], basename)
|
||||
if basename and not result_decoded[loc].endswith(basename):
|
||||
result_decoded[loc] = os.path.join(result_decoded[loc], basename)
|
||||
else:
|
||||
return None
|
||||
result = encodeurl(result_decoded)
|
||||
@@ -573,9 +562,6 @@ def verify_checksum(ud, d, precomputed={}):
|
||||
|
||||
checksum_expected = getattr(ud, "%s_expected" % checksum_id)
|
||||
|
||||
if checksum_expected == '':
|
||||
checksum_expected = None
|
||||
|
||||
return {
|
||||
"id": checksum_id,
|
||||
"name": checksum_name,
|
||||
@@ -626,7 +612,7 @@ def verify_checksum(ud, d, precomputed={}):
|
||||
|
||||
for ci in checksum_infos:
|
||||
if ci["expected"] and ci["expected"] != ci["data"]:
|
||||
messages.append("File: '%s' has %s checksum '%s' when '%s' was " \
|
||||
messages.append("File: '%s' has %s checksum %s when %s was " \
|
||||
"expected" % (ud.localpath, ci["id"], ci["data"], ci["expected"]))
|
||||
bad_checksum = ci["data"]
|
||||
|
||||
@@ -765,12 +751,6 @@ def get_srcrev(d, method_name='sortable_revision'):
|
||||
that fetcher provides a method with the given name and the same signature as sortable_revision.
|
||||
"""
|
||||
|
||||
d.setVar("__BBSEENSRCREV", "1")
|
||||
recursion = d.getVar("__BBINSRCREV")
|
||||
if recursion:
|
||||
raise FetchError("There are recursive references in fetcher variables, likely through SRC_URI")
|
||||
d.setVar("__BBINSRCREV", True)
|
||||
|
||||
scms = []
|
||||
fetcher = Fetch(d.getVar('SRC_URI').split(), d)
|
||||
urldata = fetcher.ud
|
||||
@@ -778,14 +758,13 @@ def get_srcrev(d, method_name='sortable_revision'):
|
||||
if urldata[u].method.supports_srcrev():
|
||||
scms.append(u)
|
||||
|
||||
if not scms:
|
||||
if len(scms) == 0:
|
||||
raise FetchError("SRCREV was used yet no valid SCM was found in SRC_URI")
|
||||
|
||||
if len(scms) == 1 and len(urldata[scms[0]].names) == 1:
|
||||
autoinc, rev = getattr(urldata[scms[0]].method, method_name)(urldata[scms[0]], d, urldata[scms[0]].names[0])
|
||||
if len(rev) > 10:
|
||||
rev = rev[:10]
|
||||
d.delVar("__BBINSRCREV")
|
||||
if autoinc:
|
||||
return "AUTOINC+" + rev
|
||||
return rev
|
||||
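A brief illustrative call (in OE this is normally reached indirectly through SRCPV rather than called by hand):

    # d is a datastore whose SRC_URI contains an SCM url and whose SRCREV is set
    rev = bb.fetch2.get_srcrev(d)   # e.g. "0123456789" or "AUTOINC+0123456789"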
@@ -820,49 +799,12 @@ def get_srcrev(d, method_name='sortable_revision'):
|
||||
if seenautoinc:
|
||||
format = "AUTOINC+" + format
|
||||
|
||||
d.delVar("__BBINSRCREV")
|
||||
return format
|
||||
|
||||
def localpath(url, d):
|
||||
fetcher = bb.fetch2.Fetch([url], d)
|
||||
return fetcher.localpath(url)
|
||||
|
||||
# Need to export PATH as binary could be in metadata paths
|
||||
# rather than host provided
|
||||
# Also include some other variables.
|
||||
FETCH_EXPORT_VARS = ['HOME', 'PATH',
|
||||
'HTTP_PROXY', 'http_proxy',
|
||||
'HTTPS_PROXY', 'https_proxy',
|
||||
'FTP_PROXY', 'ftp_proxy',
|
||||
'FTPS_PROXY', 'ftps_proxy',
|
||||
'NO_PROXY', 'no_proxy',
|
||||
'ALL_PROXY', 'all_proxy',
|
||||
'GIT_PROXY_COMMAND',
|
||||
'GIT_SSH',
|
||||
'GIT_SSH_COMMAND',
|
||||
'GIT_SSL_CAINFO',
|
||||
'GIT_SMART_HTTP',
|
||||
'SSH_AUTH_SOCK', 'SSH_AGENT_PID',
|
||||
'SOCKS5_USER', 'SOCKS5_PASSWD',
|
||||
'DBUS_SESSION_BUS_ADDRESS',
|
||||
'P4CONFIG',
|
||||
'SSL_CERT_FILE',
|
||||
'AWS_PROFILE',
|
||||
'AWS_ACCESS_KEY_ID',
|
||||
'AWS_SECRET_ACCESS_KEY',
|
||||
'AWS_DEFAULT_REGION']
|
||||
|
||||
def get_fetcher_environment(d):
|
||||
newenv = {}
|
||||
origenv = d.getVar("BB_ORIGENV")
|
||||
for name in bb.fetch2.FETCH_EXPORT_VARS:
|
||||
value = d.getVar(name)
|
||||
if not value and origenv:
|
||||
value = origenv.getVar(name)
|
||||
if value:
|
||||
newenv[name] = value
|
||||
return newenv
|
||||
|
||||
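A short usage sketch of the helper added on the newer side of this comparison (assumes BB_ORIGENV is populated, as it is during a normal build):

    env = bb.fetch2.get_fetcher_environment(d)
    # env holds only those FETCH_EXPORT_VARS (proxy, ssh, aws, ...) that are
    # actually set, ready to pass to a subprocess.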
def runfetchcmd(cmd, d, quiet=False, cleanup=None, log=None, workdir=None):
|
||||
"""
|
||||
Run cmd returning the command output
|
||||
@@ -871,7 +813,25 @@ def runfetchcmd(cmd, d, quiet=False, cleanup=None, log=None, workdir=None):
|
||||
Optionally remove the files/directories listed in cleanup upon failure
|
||||
"""
|
||||
|
||||
exportvars = FETCH_EXPORT_VARS
|
||||
# Need to export PATH as binary could be in metadata paths
|
||||
# rather than host provided
|
||||
# Also include some other variables.
|
||||
# FIXME: Should really include all export varaiables?
|
||||
exportvars = ['HOME', 'PATH',
|
||||
'HTTP_PROXY', 'http_proxy',
|
||||
'HTTPS_PROXY', 'https_proxy',
|
||||
'FTP_PROXY', 'ftp_proxy',
|
||||
'FTPS_PROXY', 'ftps_proxy',
|
||||
'NO_PROXY', 'no_proxy',
|
||||
'ALL_PROXY', 'all_proxy',
|
||||
'GIT_PROXY_COMMAND',
|
||||
'GIT_SSH',
|
||||
'GIT_SSL_CAINFO',
|
||||
'GIT_SMART_HTTP',
|
||||
'SSH_AUTH_SOCK', 'SSH_AGENT_PID',
|
||||
'SOCKS5_USER', 'SOCKS5_PASSWD',
|
||||
'DBUS_SESSION_BUS_ADDRESS',
|
||||
'P4CONFIG']
|
||||
|
||||
if not cleanup:
|
||||
cleanup = []
|
||||
@@ -908,7 +868,7 @@ def runfetchcmd(cmd, d, quiet=False, cleanup=None, log=None, workdir=None):
|
||||
(output, errors) = bb.process.run(cmd, log=log, shell=True, stderr=subprocess.PIPE, cwd=workdir)
|
||||
success = True
|
||||
except bb.process.NotFoundError as e:
|
||||
error_message = "Fetch command %s not found" % (e.command)
|
||||
error_message = "Fetch command %s" % (e.command)
|
||||
except bb.process.ExecutionError as e:
|
||||
if e.stdout:
|
||||
output = "output:\n%s\n%s" % (e.stdout, e.stderr)
|
||||
@@ -1097,8 +1057,6 @@ def try_mirror_url(fetch, origud, ud, ld, check = False):
|
||||
|
||||
def ensure_symlink(target, link_name):
|
||||
if not os.path.exists(link_name):
|
||||
dirname = os.path.dirname(link_name)
|
||||
bb.utils.mkdirhier(dirname)
|
||||
if os.path.islink(link_name):
|
||||
# Broken symbolic link
|
||||
os.unlink(link_name)
|
||||
@@ -1182,11 +1140,11 @@ def srcrev_internal_helper(ud, d, name):
|
||||
pn = d.getVar("PN")
|
||||
attempts = []
|
||||
if name != '' and pn:
|
||||
attempts.append("SRCREV_%s:pn-%s" % (name, pn))
|
||||
attempts.append("SRCREV_%s_pn-%s" % (name, pn))
|
||||
if name != '':
|
||||
attempts.append("SRCREV_%s" % name)
|
||||
if pn:
|
||||
attempts.append("SRCREV:pn-%s" % pn)
|
||||
attempts.append("SRCREV_pn-%s" % pn)
|
||||
attempts.append("SRCREV")
|
||||
|
||||
for a in attempts:
|
||||
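The lookup order above lets configuration pin the revision of a single recipe; a sketch using the newer override syntax (the recipe name is invented):

    # Equivalent of a local.conf line, expressed as datastore operations:
    d.setVar("SRCREV:pn-my-recipe", "0123456789abcdef0123456789abcdef01234567")
    # Older releases spelled this SRCREV_pn-my-recipe, as the other side of
    # the hunk shows.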
@@ -1476,33 +1434,30 @@ class FetchMethod(object):
|
||||
cmd = None
|
||||
|
||||
if unpack:
|
||||
tar_cmd = 'tar --extract --no-same-owner'
|
||||
if 'striplevel' in urldata.parm:
|
||||
tar_cmd += ' --strip-components=%s' % urldata.parm['striplevel']
|
||||
if file.endswith('.tar'):
|
||||
cmd = '%s -f %s' % (tar_cmd, file)
|
||||
cmd = 'tar x --no-same-owner -f %s' % file
|
||||
elif file.endswith('.tgz') or file.endswith('.tar.gz') or file.endswith('.tar.Z'):
|
||||
cmd = '%s -z -f %s' % (tar_cmd, file)
|
||||
cmd = 'tar xz --no-same-owner -f %s' % file
|
||||
elif file.endswith('.tbz') or file.endswith('.tbz2') or file.endswith('.tar.bz2'):
|
||||
cmd = 'bzip2 -dc %s | %s -f -' % (file, tar_cmd)
|
||||
cmd = 'bzip2 -dc %s | tar x --no-same-owner -f -' % file
|
||||
elif file.endswith('.gz') or file.endswith('.Z') or file.endswith('.z'):
|
||||
cmd = 'gzip -dc %s > %s' % (file, efile)
|
||||
elif file.endswith('.bz2'):
|
||||
cmd = 'bzip2 -dc %s > %s' % (file, efile)
|
||||
elif file.endswith('.txz') or file.endswith('.tar.xz'):
|
||||
cmd = 'xz -dc %s | %s -f -' % (file, tar_cmd)
|
||||
cmd = 'xz -dc %s | tar x --no-same-owner -f -' % file
|
||||
elif file.endswith('.xz'):
|
||||
cmd = 'xz -dc %s > %s' % (file, efile)
|
||||
elif file.endswith('.tar.lz'):
|
||||
cmd = 'lzip -dc %s | %s -f -' % (file, tar_cmd)
|
||||
cmd = 'lzip -dc %s | tar x --no-same-owner -f -' % file
|
||||
elif file.endswith('.lz'):
|
||||
cmd = 'lzip -dc %s > %s' % (file, efile)
|
||||
elif file.endswith('.tar.7z'):
|
||||
cmd = '7z x -so %s | %s -f -' % (file, tar_cmd)
|
||||
cmd = '7z x -so %s | tar x --no-same-owner -f -' % file
|
||||
elif file.endswith('.7z'):
|
||||
cmd = '7za x -y %s 1>/dev/null' % file
|
||||
elif file.endswith('.tzst') or file.endswith('.tar.zst'):
|
||||
cmd = 'zstd --decompress --stdout %s | %s -f -' % (file, tar_cmd)
|
||||
cmd = 'zstd --decompress --stdout %s | tar x --no-same-owner -f -' % file
|
||||
elif file.endswith('.zst'):
|
||||
cmd = 'zstd --decompress --stdout %s > %s' % (file, efile)
|
||||
elif file.endswith('.zip') or file.endswith('.jar'):
|
||||
@@ -1535,7 +1490,7 @@ class FetchMethod(object):
|
||||
raise UnpackError("Unable to unpack deb/ipk package - does not contain data.tar.* file", urldata.url)
|
||||
else:
|
||||
raise UnpackError("Unable to unpack deb/ipk package - could not list contents", urldata.url)
|
||||
cmd = 'ar x %s %s && %s -p -f %s && rm %s' % (file, datafile, tar_cmd, datafile, datafile)
|
||||
cmd = 'ar x %s %s && tar --no-same-owner -xpf %s && rm %s' % (file, datafile, datafile, datafile)
|
||||
|
||||
# If 'subdir' param exists, create a dir and use it as destination for unpack cmd
|
||||
if 'subdir' in urldata.parm:
|
||||
@@ -1661,7 +1616,7 @@ class Fetch(object):
|
||||
if localonly and cache:
|
||||
raise Exception("bb.fetch2.Fetch.__init__: cannot set cache and localonly at same time")
|
||||
|
||||
if not urls:
|
||||
if len(urls) == 0:
|
||||
urls = d.getVar("SRC_URI").split()
|
||||
self.urls = urls
|
||||
self.d = d
|
||||
@@ -1750,9 +1705,7 @@ class Fetch(object):
|
||||
self.d.setVar("BB_NO_NETWORK", "1")
|
||||
|
||||
firsterr = None
|
||||
verified_stamp = False
|
||||
if done:
|
||||
verified_stamp = m.verify_donestamp(ud, self.d)
|
||||
verified_stamp = m.verify_donestamp(ud, self.d)
|
||||
if not done and (not verified_stamp or m.need_update(ud, self.d)):
|
||||
try:
|
||||
if not trusted_network(self.d, ud.url):
|
||||
@@ -1811,11 +1764,7 @@ class Fetch(object):
|
||||
|
||||
def checkstatus(self, urls=None):
|
||||
"""
|
||||
Check all URLs exist upstream.
|
||||
|
||||
Returns None if the URLs exist, raises FetchError if the check wasn't
|
||||
successful but there wasn't an error (such as file not found), and
|
||||
raises other exceptions in error cases.
|
||||
Check all urls exist upstream
|
||||
"""
|
||||
|
||||
if not urls:
|
||||
@@ -1960,7 +1909,6 @@ from . import clearcase
|
||||
from . import npm
|
||||
from . import npmsw
|
||||
from . import az
|
||||
from . import crate
|
||||
|
||||
methods.append(local.Local())
|
||||
methods.append(wget.Wget())
|
||||
@@ -1981,4 +1929,3 @@ methods.append(clearcase.ClearCase())
|
||||
methods.append(npm.Npm())
|
||||
methods.append(npmsw.NpmShrinkWrap())
|
||||
methods.append(az.Az())
|
||||
methods.append(crate.Crate())
|
||||
|
||||
@@ -1,136 +0,0 @@
|
||||
# ex:ts=4:sw=4:sts=4:et
|
||||
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
|
||||
"""
|
||||
BitBake 'Fetch' implementation for crates.io
|
||||
"""
|
||||
|
||||
# Copyright (C) 2016 Doug Goldstein
|
||||
#
|
||||
# SPDX-License-Identifier: GPL-2.0-only
|
||||
#
|
||||
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
|
||||
|
||||
import hashlib
|
||||
import json
|
||||
import os
|
||||
import subprocess
|
||||
import bb
|
||||
from bb.fetch2 import logger, subprocess_setup, UnpackError
|
||||
from bb.fetch2.wget import Wget
|
||||
|
||||
|
||||
class Crate(Wget):
|
||||
|
||||
"""Class to fetch crates via wget"""
|
||||
|
||||
def _cargo_bitbake_path(self, rootdir):
|
||||
return os.path.join(rootdir, "cargo_home", "bitbake")
|
||||
|
||||
def supports(self, ud, d):
|
||||
"""
|
||||
Check to see if a given url is for this fetcher
|
||||
"""
|
||||
return ud.type in ['crate']
|
||||
|
||||
def recommends_checksum(self, urldata):
|
||||
return False
|
||||
|
||||
def urldata_init(self, ud, d):
|
||||
"""
|
||||
Sets up to download the respective crate from crates.io
|
||||
"""
|
||||
|
||||
if ud.type == 'crate':
|
||||
self._crate_urldata_init(ud, d)
|
||||
|
||||
super(Crate, self).urldata_init(ud, d)
|
||||
|
||||
def _crate_urldata_init(self, ud, d):
|
||||
"""
|
||||
Sets up the download for a crate
|
||||
"""
|
||||
|
||||
# URL syntax is: crate://NAME/VERSION
|
||||
# break the URL apart by /
|
||||
parts = ud.url.split('/')
|
||||
if len(parts) < 5:
|
||||
raise bb.fetch2.ParameterError("Invalid URL: Must be crate://HOST/NAME/VERSION", ud.url)
|
||||
|
||||
# last field is version
|
||||
version = parts[len(parts) - 1]
|
||||
# second to last field is name
|
||||
name = parts[len(parts) - 2]
|
||||
# host (this is to allow custom crate registries to be specified
|
||||
host = '/'.join(parts[2:len(parts) - 2])
|
||||
|
||||
# if using upstream just fix it up nicely
|
||||
if host == 'crates.io':
|
||||
host = 'crates.io/api/v1/crates'
|
||||
|
||||
ud.url = "https://%s/%s/%s/download" % (host, name, version)
|
||||
ud.parm['downloadfilename'] = "%s-%s.crate" % (name, version)
|
||||
ud.parm['name'] = name
|
||||
|
||||
logger.debug("Fetching %s to %s" % (ud.url, ud.parm['downloadfilename']))
|
||||
|
||||
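A minimal sketch of what a crate URL expands to under this logic (crate name and version are arbitrary examples):

    fetcher = bb.fetch2.Fetch(["crate://crates.io/glob/0.2.11"], d)
    # _crate_urldata_init rewrites this to
    #   https://crates.io/api/v1/crates/glob/0.2.11/download
    # and stores the download as glob-0.2.11.crate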
def unpack(self, ud, rootdir, d):
|
||||
"""
|
||||
Uses the crate to build the necessary paths for cargo to utilize it
|
||||
"""
|
||||
if ud.type == 'crate':
|
||||
return self._crate_unpack(ud, rootdir, d)
|
||||
else:
|
||||
super(Crate, self).unpack(ud, rootdir, d)
|
||||
|
||||
def _crate_unpack(self, ud, rootdir, d):
|
||||
"""
|
||||
Unpacks a crate
|
||||
"""
|
||||
thefile = ud.localpath
|
||||
|
||||
# possible metadata we need to write out
|
||||
metadata = {}
|
||||
|
||||
# change to the rootdir to unpack but save the old working dir
|
||||
save_cwd = os.getcwd()
|
||||
os.chdir(rootdir)
|
||||
|
||||
pn = d.getVar('BPN')
|
||||
if pn == ud.parm.get('name'):
|
||||
cmd = "tar -xz --no-same-owner -f %s" % thefile
|
||||
else:
|
||||
cargo_bitbake = self._cargo_bitbake_path(rootdir)
|
||||
|
||||
cmd = "tar -xz --no-same-owner -f %s -C %s" % (thefile, cargo_bitbake)
|
||||
|
||||
# ensure we've got these paths made
|
||||
bb.utils.mkdirhier(cargo_bitbake)
|
||||
|
||||
# generate metadata necessary
|
||||
with open(thefile, 'rb') as f:
|
||||
# get the SHA256 of the original tarball
|
||||
tarhash = hashlib.sha256(f.read()).hexdigest()
|
||||
|
||||
metadata['files'] = {}
|
||||
metadata['package'] = tarhash
|
||||
|
||||
path = d.getVar('PATH')
|
||||
if path:
|
||||
cmd = "PATH=\"%s\" %s" % (path, cmd)
|
||||
bb.note("Unpacking %s to %s/" % (thefile, os.getcwd()))
|
||||
|
||||
ret = subprocess.call(cmd, preexec_fn=subprocess_setup, shell=True)
|
||||
|
||||
os.chdir(save_cwd)
|
||||
|
||||
if ret != 0:
|
||||
raise UnpackError("Unpack command %s failed with return value %s" % (cmd, ret), ud.url)
|
||||
|
||||
# if we have metadata to write out..
|
||||
if len(metadata) > 0:
|
||||
cratepath = os.path.splitext(os.path.basename(thefile))[0]
|
||||
bbpath = self._cargo_bitbake_path(rootdir)
|
||||
mdfile = '.cargo-checksum.json'
|
||||
mdpath = os.path.join(bbpath, cratepath, mdfile)
|
||||
with open(mdpath, "w") as f:
|
||||
json.dump(metadata, f)
|
||||
@@ -44,8 +44,7 @@ Supported SRC_URI options are:
|
||||
|
||||
- nobranch
|
||||
Don't check the SHA validation for branch. set this option for the recipe
|
||||
referring to commit which is valid in any namespace (branch, tag, ...)
|
||||
instead of branch.
|
||||
referring to commit which is valid in tag instead of branch.
|
||||
The default is "0", set nobranch=1 if needed.
|
||||
|
||||
- usehead
|
||||
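A hedged example of the nobranch option (repository URL is a placeholder); SRCREV would point at a commit reachable only via a tag:

    src_uri = "git://git.example.com/project.git;protocol=https;nobranch=1"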
@@ -69,7 +68,6 @@ import subprocess
|
||||
import tempfile
|
||||
import bb
|
||||
import bb.progress
|
||||
from contextlib import contextmanager
|
||||
from bb.fetch2 import FetchMethod
|
||||
from bb.fetch2 import runfetchcmd
|
||||
from bb.fetch2 import logger
|
||||
@@ -143,11 +141,6 @@ class Git(FetchMethod):
|
||||
ud.proto = 'file'
|
||||
else:
|
||||
ud.proto = "git"
|
||||
if ud.host == "github.com" and ud.proto == "git":
|
||||
# github stopped supporting git protocol
|
||||
# https://github.blog/2021-09-01-improving-git-protocol-security-github/#no-more-unauthenticated-git
|
||||
ud.proto = "https"
|
||||
bb.warn("URL: %s uses git protocol which is no longer supported by github. Please change to ;protocol=https in the url." % ud.url)
|
||||
|
||||
if not ud.proto in ('git', 'file', 'ssh', 'http', 'https', 'rsync'):
|
||||
raise bb.fetch2.ParameterError("Invalid protocol type", ud.url)
|
||||
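The warning above asks recipes to be explicit about the protocol; an illustrative before/after (repository path is a placeholder):

    # Rejected by github (git protocol):
    #   git://github.com/example/project.git;branch=main
    # Preferred form:
    #   git://github.com/example/project.git;protocol=https;branch=main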
@@ -171,18 +164,11 @@ class Git(FetchMethod):
|
||||
ud.nocheckout = 1
|
||||
|
||||
ud.unresolvedrev = {}
|
||||
branches = ud.parm.get("branch", "").split(',')
|
||||
if branches == [""] and not ud.nobranch:
|
||||
bb.warn("URL: %s does not set any branch parameter. The future default branch used by tools and repositories is uncertain and we will therefore soon require this is set in all git urls." % ud.url)
|
||||
branches = ["master"]
|
||||
branches = ud.parm.get("branch", "master").split(',')
|
||||
if len(branches) != len(ud.names):
|
||||
raise bb.fetch2.ParameterError("The number of name and branch parameters is not balanced", ud.url)
|
||||
|
||||
ud.noshared = d.getVar("BB_GIT_NOSHARED") == "1"
|
||||
|
||||
ud.cloneflags = "-n"
|
||||
if not ud.noshared:
|
||||
ud.cloneflags += " -s"
|
||||
ud.cloneflags = "-s -n"
|
||||
if ud.bareclone:
|
||||
ud.cloneflags += " --mirror"
|
||||
|
||||
@@ -241,7 +227,7 @@ class Git(FetchMethod):
|
||||
for name in ud.names:
|
||||
ud.unresolvedrev[name] = 'HEAD'
|
||||
|
||||
ud.basecmd = d.getVar("FETCHCMD_git") or "git -c core.fsyncobjectfiles=0 -c gc.autoDetach=false -c core.pager=cat"
|
||||
ud.basecmd = d.getVar("FETCHCMD_git") or "git -c core.fsyncobjectfiles=0"
|
||||
|
||||
write_tarballs = d.getVar("BB_GENERATE_MIRROR_TARBALLS") or "0"
|
||||
ud.write_tarballs = write_tarballs != "0" or ud.rebaseable
|
||||
@@ -307,10 +293,7 @@ class Git(FetchMethod):
|
||||
return ud.clonedir
|
||||
|
||||
def need_update(self, ud, d):
|
||||
return self.clonedir_need_update(ud, d) \
|
||||
or self.shallow_tarball_need_update(ud) \
|
||||
or self.tarball_need_update(ud) \
|
||||
or self.lfs_need_update(ud, d)
|
||||
return self.clonedir_need_update(ud, d) or self.shallow_tarball_need_update(ud) or self.tarball_need_update(ud)
|
||||
|
||||
def clonedir_need_update(self, ud, d):
|
||||
if not os.path.exists(ud.clonedir):
|
||||
@@ -322,15 +305,6 @@ class Git(FetchMethod):
|
||||
return True
|
||||
return False
|
||||
|
||||
def lfs_need_update(self, ud, d):
|
||||
if self.clonedir_need_update(ud, d):
|
||||
return True
|
||||
|
||||
for name in ud.names:
|
||||
if not self._lfs_objects_downloaded(ud, d, name, ud.clonedir):
|
||||
return True
|
||||
return False
|
||||
|
||||
def clonedir_need_shallow_revs(self, ud, d):
|
||||
for rev in ud.shallow_revs:
|
||||
try:
|
||||
@@ -371,13 +345,9 @@ class Git(FetchMethod):
|
||||
|
||||
# If the repo still doesn't exist, fallback to cloning it
|
||||
if not os.path.exists(ud.clonedir):
|
||||
# We do this since git will use a "-l" option automatically for local urls where possible,
|
||||
# but it doesn't work when git/objects is a symlink, only works when it is a directory.
|
||||
# We do this since git will use a "-l" option automatically for local urls where possible
|
||||
if repourl.startswith("file://"):
|
||||
repourl_path = repourl[7:]
|
||||
objects = os.path.join(repourl_path, 'objects')
|
||||
if os.path.isdir(objects) and not os.path.islink(objects):
|
||||
repourl = repourl_path
|
||||
repourl = repourl[7:]
|
||||
clone_cmd = "LANG=C %s clone --bare --mirror %s %s --progress" % (ud.basecmd, shlex.quote(repourl), ud.clonedir)
|
||||
if ud.proto.lower() != 'file':
|
||||
bb.fetch2.check_network_access(d, clone_cmd, ud.url)
|
||||
@@ -391,11 +361,7 @@ class Git(FetchMethod):
|
||||
runfetchcmd("%s remote rm origin" % ud.basecmd, d, workdir=ud.clonedir)
|
||||
|
||||
runfetchcmd("%s remote add --mirror=fetch origin %s" % (ud.basecmd, shlex.quote(repourl)), d, workdir=ud.clonedir)
|
||||
|
||||
if ud.nobranch:
|
||||
fetch_cmd = "LANG=C %s fetch -f --progress %s refs/*:refs/*" % (ud.basecmd, shlex.quote(repourl))
|
||||
else:
|
||||
fetch_cmd = "LANG=C %s fetch -f --progress %s refs/heads/*:refs/heads/* refs/tags/*:refs/tags/*" % (ud.basecmd, shlex.quote(repourl))
|
||||
fetch_cmd = "LANG=C %s fetch -f --progress %s refs/*:refs/*" % (ud.basecmd, shlex.quote(repourl))
|
||||
if ud.proto.lower() != 'file':
|
||||
bb.fetch2.check_network_access(d, fetch_cmd, ud.url)
|
||||
progresshandler = GitProgressHandler(d)
|
||||
@@ -418,9 +384,9 @@ class Git(FetchMethod):
|
||||
if missing_rev:
|
||||
raise bb.fetch2.FetchError("Unable to find revision %s even from upstream" % missing_rev)
|
||||
|
||||
if self.lfs_need_update(ud, d):
|
||||
if self._contains_lfs(ud, d, ud.clonedir) and self._need_lfs(ud):
|
||||
# Unpack temporary working copy, use it to run 'git checkout' to force pre-fetching
|
||||
# of all LFS blobs needed at the srcrev.
|
||||
# of all LFS blobs needed at the the srcrev.
|
||||
#
|
||||
# It would be nice to just do this inline here by running 'git-lfs fetch'
|
||||
# on the bare clonedir, but that operation requires a working copy on some
|
||||
@@ -448,20 +414,6 @@ class Git(FetchMethod):
|
||||
bb.utils.remove(tmpdir, recurse=True)
|
||||
|
||||
def build_mirror_data(self, ud, d):
|
||||
|
||||
# Create as a temp file and move atomically into position to avoid races
|
||||
@contextmanager
|
||||
def create_atomic(filename):
|
||||
fd, tfile = tempfile.mkstemp(dir=os.path.dirname(filename))
|
||||
try:
|
||||
yield tfile
|
||||
umask = os.umask(0o666)
|
||||
os.umask(umask)
|
||||
os.chmod(tfile, (0o666 & ~umask))
|
||||
os.rename(tfile, filename)
|
||||
finally:
|
||||
os.close(fd)
|
||||
|
||||
if ud.shallow and ud.write_shallow_tarballs:
|
||||
if not os.path.exists(ud.fullshallow):
|
||||
if os.path.islink(ud.fullshallow):
|
||||
@@ -472,8 +424,7 @@ class Git(FetchMethod):
|
||||
self.clone_shallow_local(ud, shallowclone, d)
|
||||
|
||||
logger.info("Creating tarball of git repository")
|
||||
with create_atomic(ud.fullshallow) as tfile:
|
||||
runfetchcmd("tar -czf %s ." % tfile, d, workdir=shallowclone)
|
||||
runfetchcmd("tar -czf %s ." % ud.fullshallow, d, workdir=shallowclone)
|
||||
runfetchcmd("touch %s.done" % ud.fullshallow, d)
|
||||
finally:
|
||||
bb.utils.remove(tempdir, recurse=True)
|
||||
@@ -482,11 +433,7 @@ class Git(FetchMethod):
|
||||
os.unlink(ud.fullmirror)
|
||||
|
||||
logger.info("Creating tarball of git repository")
|
||||
with create_atomic(ud.fullmirror) as tfile:
|
||||
mtime = runfetchcmd("git log --all -1 --format=%cD", d,
|
||||
quiet=True, workdir=ud.clonedir)
|
||||
runfetchcmd("tar -czf %s --owner oe:0 --group oe:0 --mtime \"%s\" ."
|
||||
% (tfile, mtime), d, workdir=ud.clonedir)
|
||||
runfetchcmd("tar -czf %s ." % ud.fullmirror, d, workdir=ud.clonedir)
|
||||
runfetchcmd("touch %s.done" % ud.fullmirror, d)
|
||||
|
||||
def clone_shallow_local(self, ud, dest, d):
|
||||
@@ -548,24 +495,13 @@ class Git(FetchMethod):
|
||||
def unpack(self, ud, destdir, d):
|
||||
""" unpack the downloaded src to destdir"""
|
||||
|
||||
subdir = ud.parm.get("subdir")
|
||||
subpath = ud.parm.get("subpath")
|
||||
readpathspec = ""
|
||||
def_destsuffix = "git/"
|
||||
|
||||
if subpath:
|
||||
readpathspec = ":%s" % subpath
|
||||
def_destsuffix = "%s/" % os.path.basename(subpath.rstrip('/'))
|
||||
|
||||
if subdir:
|
||||
# If 'subdir' param exists, create a dir and use it as destination for unpack cmd
|
||||
if os.path.isabs(subdir):
|
||||
if not os.path.realpath(subdir).startswith(os.path.realpath(destdir)):
|
||||
raise bb.fetch2.UnpackError("subdir argument isn't a subdirectory of unpack root %s" % destdir, ud.url)
|
||||
destdir = subdir
|
||||
else:
|
||||
destdir = os.path.join(destdir, subdir)
|
||||
def_destsuffix = ""
|
||||
subdir = ud.parm.get("subpath", "")
|
||||
if subdir != "":
|
||||
readpathspec = ":%s" % subdir
|
||||
def_destsuffix = "%s/" % os.path.basename(subdir.rstrip('/'))
|
||||
else:
|
||||
readpathspec = ""
|
||||
def_destsuffix = "git/"
|
||||
|
||||
destsuffix = ud.parm.get("destsuffix", def_destsuffix)
|
||||
destdir = ud.destdir = os.path.join(destdir, destsuffix)
|
||||
@@ -612,7 +548,7 @@ class Git(FetchMethod):
|
||||
bb.note("Repository %s has LFS content but it is not being fetched" % (repourl))
|
||||
|
||||
if not ud.nocheckout:
|
||||
if subpath:
|
||||
if subdir != "":
|
||||
runfetchcmd("%s read-tree %s%s" % (ud.basecmd, ud.revisions[ud.names[0]], readpathspec), d,
|
||||
workdir=destdir)
|
||||
runfetchcmd("%s checkout-index -q -f -a" % ud.basecmd, d, workdir=destdir)
|
||||
@@ -661,35 +597,6 @@ class Git(FetchMethod):
|
||||
raise bb.fetch2.FetchError("The command '%s' gave output with more then 1 line unexpectedly, output: '%s'" % (cmd, output))
|
||||
return output.split()[0] != "0"
|
||||
|
||||
def _lfs_objects_downloaded(self, ud, d, name, wd):
|
||||
"""
|
||||
Verifies whether the LFS objects for requested revisions have already been downloaded
|
||||
"""
|
||||
# Bail out early if this repository doesn't use LFS
|
||||
if not self._need_lfs(ud) or not self._contains_lfs(ud, d, wd):
|
||||
return True
|
||||
|
||||
# The Git LFS specification specifies ([1]) the LFS folder layout so it should be safe to check for file
|
||||
# existence.
|
||||
# [1] https://github.com/git-lfs/git-lfs/blob/main/docs/spec.md#intercepting-git
|
||||
cmd = "%s lfs ls-files -l %s" \
|
||||
% (ud.basecmd, ud.revisions[name])
|
||||
output = runfetchcmd(cmd, d, quiet=True, workdir=wd).rstrip()
|
||||
# Do not do any further matching if no objects are managed by LFS
|
||||
if not output:
|
||||
return True
|
||||
|
||||
# Match all lines beginning with the hexadecimal OID
|
||||
oid_regex = re.compile("^(([a-fA-F0-9]{2})([a-fA-F0-9]{2})[A-Fa-f0-9]+)")
|
||||
for line in output.split("\n"):
|
||||
oid = re.search(oid_regex, line)
|
||||
if not oid:
|
||||
bb.warn("git lfs ls-files output '%s' did not match expected format." % line)
|
||||
if not os.path.exists(os.path.join(wd, "lfs", "objects", oid.group(2), oid.group(3), oid.group(1))):
|
||||
return False
|
||||
|
||||
return True
|
||||
|
||||
def _need_lfs(self, ud):
|
||||
return ud.parm.get("lfs", "1") == "1"
|
||||
|
||||
@@ -780,12 +687,6 @@ class Git(FetchMethod):
|
||||
"""
|
||||
Compute the HEAD revision for the url
|
||||
"""
|
||||
if not d.getVar("__BBSEENSRCREV"):
|
||||
raise bb.fetch2.FetchError("Recipe uses a floating tag/branch '%s' for repo '%s' without a fixed SRCREV yet doesn't call bb.fetch2.get_srcrev() (use SRCPV in PV for OE)." % (ud.unresolvedrev[name], ud.host+ud.path))
|
||||
|
||||
# Ensure we mark as not cached
|
||||
bb.fetch2.get_autorev(d)
|
||||
|
||||
output = self._lsremote(ud, d, "")
|
||||
# Tags of the form ^{} may not work, need to fallback to other form
|
||||
if ud.unresolvedrev[name][:5] == "refs/" or ud.usehead:
|
||||
@@ -861,8 +762,9 @@ class Git(FetchMethod):
|
||||
commits = None
|
||||
else:
|
||||
if not os.path.exists(rev_file) or not os.path.getsize(rev_file):
|
||||
from pipes import quote
|
||||
commits = bb.fetch2.runfetchcmd(
|
||||
"git rev-list %s -- | wc -l" % shlex.quote(rev),
|
||||
"git rev-list %s -- | wc -l" % quote(rev),
|
||||
d, quiet=True).strip().lstrip('0')
|
||||
if commits:
|
||||
open(rev_file, "w").write("%d\n" % int(commits))
|
||||
|
||||
@@ -88,7 +88,7 @@ class GitSM(Git):
|
||||
subrevision[m] = module_hash.split()[2]
|
||||
|
||||
# Convert relative to absolute uri based on parent uri
|
||||
if uris[m].startswith('..') or uris[m].startswith('./'):
|
||||
if uris[m].startswith('..'):
|
||||
newud = copy.copy(ud)
|
||||
newud.path = os.path.realpath(os.path.join(newud.path, uris[m]))
|
||||
uris[m] = Git._get_repo_url(self, newud)
|
||||
@@ -115,9 +115,6 @@ class GitSM(Git):
|
||||
# This has to be a file reference
|
||||
proto = "file"
|
||||
url = "gitsm://" + uris[module]
|
||||
if url.endswith("{}{}".format(ud.host, ud.path)):
|
||||
raise bb.fetch2.FetchError("Submodule refers to the parent repository. This will cause deadlock situation in current version of Bitbake." \
|
||||
"Consider using git fetcher instead.")
|
||||
|
||||
url += ';protocol=%s' % proto
|
||||
url += ";name=%s" % module
|
||||
@@ -139,23 +136,20 @@ class GitSM(Git):
|
||||
|
||||
return submodules != []
|
||||
|
||||
def call_process_submodules(self, ud, d, extra_check, subfunc):
|
||||
# If we're using a shallow mirror tarball it needs to be
|
||||
# unpacked temporarily so that we can examine the .gitmodules file
|
||||
if ud.shallow and os.path.exists(ud.fullshallow) and extra_check:
|
||||
tmpdir = tempfile.mkdtemp(dir=d.getVar("DL_DIR"))
|
||||
try:
|
||||
runfetchcmd("tar -xzf %s" % ud.fullshallow, d, workdir=tmpdir)
|
||||
self.process_submodules(ud, tmpdir, subfunc, d)
|
||||
finally:
|
||||
shutil.rmtree(tmpdir)
|
||||
else:
|
||||
self.process_submodules(ud, ud.clonedir, subfunc, d)
|
||||
|
||||
def need_update(self, ud, d):
|
||||
if Git.need_update(self, ud, d):
|
||||
return True
|
||||
|
||||
try:
|
||||
# Check for the nugget dropped by the download operation
|
||||
known_srcrevs = runfetchcmd("%s config --get-all bitbake.srcrev" % \
|
||||
(ud.basecmd), d, workdir=ud.clonedir)
|
||||
|
||||
if ud.revisions[ud.names[0]] in known_srcrevs.split():
|
||||
return False
|
||||
except bb.fetch2.FetchError:
|
||||
pass
|
||||
|
||||
need_update_list = []
|
||||
def need_update_submodule(ud, url, module, modpath, workdir, d):
|
||||
url += ";bareclone=1;nobranch=1"
|
||||
@@ -169,9 +163,22 @@ class GitSM(Git):
|
||||
logger.error('gitsm: submodule update check failed: %s %s' % (type(e).__name__, str(e)))
|
||||
need_update_result = True
|
||||
|
||||
self.call_process_submodules(ud, d, not os.path.exists(ud.clonedir), need_update_submodule)
|
||||
# If we're using a shallow mirror tarball it needs to be unpacked
|
||||
# temporarily so that we can examine the .gitmodules file
|
||||
if ud.shallow and os.path.exists(ud.fullshallow) and not os.path.exists(ud.clonedir):
|
||||
tmpdir = tempfile.mkdtemp(dir=d.getVar("DL_DIR"))
|
||||
runfetchcmd("tar -xzf %s" % ud.fullshallow, d, workdir=tmpdir)
|
||||
self.process_submodules(ud, tmpdir, need_update_submodule, d)
|
||||
shutil.rmtree(tmpdir)
|
||||
else:
|
||||
self.process_submodules(ud, ud.clonedir, need_update_submodule, d)
|
||||
if len(need_update_list) == 0:
|
||||
# We already have the required commits of all submodules. Drop
|
||||
# a nugget so we don't need to check again.
|
||||
runfetchcmd("%s config --add bitbake.srcrev %s" % \
|
||||
(ud.basecmd, ud.revisions[ud.names[0]]), d, workdir=ud.clonedir)
|
||||
|
||||
if need_update_list:
|
||||
if len(need_update_list) > 0:
|
||||
logger.debug('gitsm: Submodules requiring update: %s' % (' '.join(need_update_list)))
|
||||
return True
|
||||
|
||||
@@ -192,7 +199,19 @@ class GitSM(Git):
|
||||
raise
|
||||
|
||||
Git.download(self, ud, d)
|
||||
self.call_process_submodules(ud, d, self.need_update(ud, d), download_submodule)
|
||||
|
||||
# If we're using a shallow mirror tarball it needs to be unpacked
|
||||
# temporarily so that we can examine the .gitmodules file
|
||||
if ud.shallow and os.path.exists(ud.fullshallow) and self.need_update(ud, d):
|
||||
tmpdir = tempfile.mkdtemp(dir=d.getVar("DL_DIR"))
|
||||
runfetchcmd("tar -xzf %s" % ud.fullshallow, d, workdir=tmpdir)
|
||||
self.process_submodules(ud, tmpdir, download_submodule, d)
|
||||
shutil.rmtree(tmpdir)
|
||||
else:
|
||||
self.process_submodules(ud, ud.clonedir, download_submodule, d)
|
||||
# Drop a nugget for the srcrev we've fetched (used by need_update)
|
||||
runfetchcmd("%s config --add bitbake.srcrev %s" % \
|
||||
(ud.basecmd, ud.revisions[ud.names[0]]), d, workdir=ud.clonedir)
|
||||
|
||||
def unpack(self, ud, destdir, d):
|
||||
def unpack_submodules(ud, url, module, modpath, workdir, d):
|
||||
@@ -245,6 +264,14 @@ class GitSM(Git):
|
||||
newfetch = Fetch([url], d, cache=False)
|
||||
urldata.extend(newfetch.expanded_urldata())
|
||||
|
||||
self.call_process_submodules(ud, d, ud.method.need_update(ud, d), add_submodule)
|
||||
# If we're using a shallow mirror tarball it needs to be unpacked
|
||||
# temporarily so that we can examine the .gitmodules file
|
||||
if ud.shallow and os.path.exists(ud.fullshallow) and ud.method.need_update(ud, d):
|
||||
tmpdir = tempfile.mkdtemp(dir=d.getVar("DL_DIR"))
|
||||
subprocess.check_call("tar -xzf %s" % ud.fullshallow, cwd=tmpdir, shell=True)
|
||||
self.process_submodules(ud, tmpdir, add_submodule, d)
|
||||
shutil.rmtree(tmpdir)
|
||||
else:
|
||||
self.process_submodules(ud, ud.clonedir, add_submodule, d)
|
||||
|
||||
return urldata
|
||||
|
||||
@@ -52,13 +52,9 @@ def npm_filename(package, version):
|
||||
"""Get the filename of a npm package"""
|
||||
return npm_package(package) + "-" + version + ".tgz"
|
||||
|
||||
def npm_localfile(package, version=None):
|
||||
def npm_localfile(package, version):
|
||||
"""Get the local filename of a npm package"""
|
||||
if version is not None:
|
||||
filename = npm_filename(package, version)
|
||||
else:
|
||||
filename = package
|
||||
return os.path.join("npm2", filename)
|
||||
return os.path.join("npm2", npm_filename(package, version))
|
||||
|
||||
def npm_integrity(integrity):
|
||||
"""
|
||||
@@ -73,31 +69,17 @@ def npm_unpack(tarball, destdir, d):
|
||||
bb.utils.mkdirhier(destdir)
|
||||
cmd = "tar --extract --gzip --file=%s" % shlex.quote(tarball)
|
||||
cmd += " --no-same-owner"
|
||||
cmd += " --delay-directory-restore"
|
||||
cmd += " --strip-components=1"
|
||||
runfetchcmd(cmd, d, workdir=destdir)
|
||||
runfetchcmd("chmod -R +X '%s'" % (destdir), d, quiet=True, workdir=destdir)
|
||||
|
||||
class NpmEnvironment(object):
|
||||
"""
|
||||
Using a npm config file seems more reliable than using cli arguments.
|
||||
This class allows to create a controlled environment for npm commands.
|
||||
"""
|
||||
def __init__(self, d, configs=[], npmrc=None):
|
||||
def __init__(self, d, configs=None):
|
||||
self.d = d
|
||||
|
||||
self.user_config = tempfile.NamedTemporaryFile(mode="w", buffering=1)
|
||||
for key, value in configs:
|
||||
self.user_config.write("%s=%s\n" % (key, value))
|
||||
|
||||
if npmrc:
|
||||
self.global_config_name = npmrc
|
||||
else:
|
||||
self.global_config_name = "/dev/null"
|
||||
|
||||
def __del__(self):
|
||||
if self.user_config:
|
||||
self.user_config.close()
|
||||
self.configs = configs
|
||||
|
||||
def run(self, cmd, args=None, configs=None, workdir=None):
|
||||
"""Run npm command in a controlled environment"""
|
||||
@@ -105,19 +87,23 @@ class NpmEnvironment(object):
|
||||
d = bb.data.createCopy(self.d)
|
||||
d.setVar("HOME", tmpdir)
|
||||
|
||||
cfgfile = os.path.join(tmpdir, "npmrc")
|
||||
|
||||
if not workdir:
|
||||
workdir = tmpdir
|
||||
|
||||
def _run(cmd):
|
||||
cmd = "NPM_CONFIG_USERCONFIG=%s " % (self.user_config.name) + cmd
|
||||
cmd = "NPM_CONFIG_GLOBALCONFIG=%s " % (self.global_config_name) + cmd
|
||||
cmd = "NPM_CONFIG_USERCONFIG=%s " % cfgfile + cmd
|
||||
cmd = "NPM_CONFIG_GLOBALCONFIG=%s " % cfgfile + cmd
|
||||
return runfetchcmd(cmd, d, workdir=workdir)
|
||||
|
||||
if self.configs:
|
||||
for key, value in self.configs:
|
||||
_run("npm config set %s %s" % (key, shlex.quote(value)))
|
||||
|
||||
if configs:
|
||||
bb.warn("Use of configs argument of NpmEnvironment.run() function"
|
||||
" is deprecated. Please use args argument instead.")
|
||||
for key, value in configs:
|
||||
cmd += " --%s=%s" % (key, shlex.quote(value))
|
||||
_run("npm config set %s %s" % (key, shlex.quote(value)))
|
||||
|
||||
if args:
|
||||
for key, value in args:
|
||||
@@ -156,12 +142,12 @@ class Npm(FetchMethod):
|
||||
raise ParameterError("Invalid 'version' parameter", ud.url)
|
||||
|
||||
# Extract the 'registry' part of the url
|
||||
ud.registry = re.sub(r"^npm://", "https://", ud.url.split(";")[0])
|
||||
ud.registry = re.sub(r"^npm://", "http://", ud.url.split(";")[0])
|
||||
|
||||
# Using the 'downloadfilename' parameter as local filename
|
||||
# or the npm package name.
|
||||
if "downloadfilename" in ud.parm:
|
||||
ud.localfile = npm_localfile(d.expand(ud.parm["downloadfilename"]))
|
||||
ud.localfile = d.expand(ud.parm["downloadfilename"])
|
||||
else:
|
||||
ud.localfile = npm_localfile(ud.package, ud.version)
|
||||
|
||||
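A sketch of the URL form this fetcher handles (package and version are examples):

    fetcher = bb.fetch2.Fetch(
        ["npm://registry.npmjs.org;package=left-pad;version=1.3.0"], d)
    # ud.registry becomes the registry URL (https:// on the newer side of this
    # hunk) and the tarball is cached under npm2/ in DL_DIR.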
@@ -179,14 +165,14 @@ class Npm(FetchMethod):
|
||||
|
||||
def _resolve_proxy_url(self, ud, d):
|
||||
def _npm_view():
|
||||
args = []
|
||||
args.append(("json", "true"))
|
||||
args.append(("registry", ud.registry))
|
||||
configs = []
|
||||
configs.append(("json", "true"))
|
||||
configs.append(("registry", ud.registry))
|
||||
pkgver = shlex.quote(ud.package + "@" + ud.version)
|
||||
cmd = ud.basecmd + " view %s" % pkgver
|
||||
env = NpmEnvironment(d)
|
||||
check_network_access(d, cmd, ud.registry)
|
||||
view_string = env.run(cmd, args=args)
|
||||
view_string = env.run(cmd, configs=configs)
|
||||
|
||||
if not view_string:
|
||||
raise FetchError("Unavailable package %s" % pkgver, ud.url)
|
||||
|
||||
@@ -24,14 +24,11 @@ import bb
|
||||
from bb.fetch2 import Fetch
|
||||
from bb.fetch2 import FetchMethod
|
||||
from bb.fetch2 import ParameterError
|
||||
from bb.fetch2 import runfetchcmd
|
||||
from bb.fetch2 import URI
|
||||
from bb.fetch2.npm import npm_integrity
|
||||
from bb.fetch2.npm import npm_localfile
|
||||
from bb.fetch2.npm import npm_unpack
|
||||
from bb.utils import is_semver
|
||||
from bb.utils import lockfile
|
||||
from bb.utils import unlockfile
|
||||
|
||||
def foreach_dependencies(shrinkwrap, callback=None, dev=False):
|
||||
"""
|
||||
@@ -81,18 +78,13 @@ class NpmShrinkWrap(FetchMethod):
|
||||
extrapaths = []
|
||||
destsubdirs = [os.path.join("node_modules", dep) for dep in deptree]
|
||||
destsuffix = os.path.join(*destsubdirs)
|
||||
unpack = True
|
||||
|
||||
integrity = params.get("integrity", None)
|
||||
resolved = params.get("resolved", None)
|
||||
version = params.get("version", None)
|
||||
|
||||
# Handle registry sources
|
||||
if is_semver(version) and integrity:
|
||||
# Handle duplicate dependencies without url
|
||||
if not resolved:
|
||||
return
|
||||
|
||||
if is_semver(version) and resolved and integrity:
|
||||
localfile = npm_localfile(name, version)
|
||||
|
||||
uri = URI(resolved)
|
||||
@@ -117,7 +109,7 @@ class NpmShrinkWrap(FetchMethod):
|
||||
|
||||
# Handle http tarball sources
|
||||
elif version.startswith("http") and integrity:
|
||||
localfile = npm_localfile(os.path.basename(version))
|
||||
localfile = os.path.join("npm2", os.path.basename(version))
|
||||
|
||||
uri = URI(version)
|
||||
uri.params["downloadfilename"] = localfile
|
||||
@@ -131,8 +123,6 @@ class NpmShrinkWrap(FetchMethod):
|
||||
|
||||
# Handle git sources
|
||||
elif version.startswith("git"):
|
||||
if version.startswith("github:"):
|
||||
version = "git+https://github.com/" + version[len("github:"):]
|
||||
regex = re.compile(r"""
|
||||
^
|
||||
git\+
|
||||
@@ -158,12 +148,7 @@ class NpmShrinkWrap(FetchMethod):
|
||||
|
||||
url = str(uri)
|
||||
|
||||
# Handle local tarball and link sources
|
||||
elif version.startswith("file"):
|
||||
localpath = version[5:]
|
||||
if not version.endswith(".tgz"):
|
||||
unpack = False
|
||||
|
||||
# local tarball sources and local link sources are unsupported
|
||||
else:
|
||||
raise ParameterError("Unsupported dependency: %s" % name, ud.url)
|
||||
|
||||
@@ -172,7 +157,6 @@ class NpmShrinkWrap(FetchMethod):
|
||||
"localpath": localpath,
|
||||
"extrapaths": extrapaths,
|
||||
"destsuffix": destsuffix,
|
||||
"unpack": unpack,
|
||||
})
|
||||
|
||||
try:
|
||||
@@ -193,7 +177,7 @@ class NpmShrinkWrap(FetchMethod):
|
||||
# This fetcher resolves multiple URIs from a shrinkwrap file and then
|
||||
# forwards it to a proxy fetcher. The management of the donestamp file,
|
||||
# the lockfile and the checksums are forwarded to the proxy fetcher.
|
||||
ud.proxy = Fetch([dep["url"] for dep in ud.deps if dep["url"]], data)
|
||||
ud.proxy = Fetch([dep["url"] for dep in ud.deps], data)
|
||||
ud.needdonestamp = False
|
||||
|
||||
@staticmethod
|
||||
@@ -203,9 +187,7 @@ class NpmShrinkWrap(FetchMethod):
|
||||
proxy_ud = ud.proxy.ud[proxy_url]
|
||||
proxy_d = ud.proxy.d
|
||||
proxy_ud.setup_localpath(proxy_d)
|
||||
lf = lockfile(proxy_ud.lockfile)
|
||||
returns.append(handle(proxy_ud.method, proxy_ud, proxy_d))
|
||||
unlockfile(lf)
|
||||
return returns
|
||||
|
||||
def verify_donestamp(self, ud, d):
|
||||
@@ -255,16 +237,7 @@ class NpmShrinkWrap(FetchMethod):
|
||||
|
||||
for dep in manual:
|
||||
depdestdir = os.path.join(destdir, dep["destsuffix"])
|
||||
if dep["url"]:
|
||||
npm_unpack(dep["localpath"], depdestdir, d)
|
||||
else:
|
||||
depsrcdir= os.path.join(destdir, dep["localpath"])
|
||||
if dep["unpack"]:
|
||||
npm_unpack(depsrcdir, depdestdir, d)
|
||||
else:
|
||||
bb.utils.mkdirhier(depdestdir)
|
||||
cmd = 'cp -fpPRH "%s/." .' % (depsrcdir)
|
||||
runfetchcmd(cmd, d, workdir=depdestdir)
|
||||
npm_unpack(dep["localpath"], depdestdir, d)
|
||||
|
||||
def clean(self, ud, d):
|
||||
"""Clean any existing full or partial download"""
|
||||
|
||||
@@ -1,6 +1,4 @@
|
||||
#
|
||||
# Copyright BitBake Contributors
|
||||
#
|
||||
# SPDX-License-Identifier: GPL-2.0-only
|
||||
#
|
||||
"""
|
||||
@@ -38,7 +36,6 @@ class Osc(FetchMethod):
|
||||
# Create paths to osc checkouts
|
||||
oscdir = d.getVar("OSCDIR") or (d.getVar("DL_DIR") + "/osc")
|
||||
relpath = self._strip_leading_slashes(ud.path)
|
||||
ud.oscdir = oscdir
|
||||
ud.pkgdir = os.path.join(oscdir, ud.host)
|
||||
ud.moddir = os.path.join(ud.pkgdir, relpath, ud.module)
|
||||
|
||||
@@ -46,13 +43,13 @@ class Osc(FetchMethod):
|
||||
ud.revision = ud.parm['rev']
|
||||
else:
|
||||
pv = d.getVar("PV", False)
|
||||
rev = bb.fetch2.srcrev_internal_helper(ud, d, '')
|
||||
rev = bb.fetch2.srcrev_internal_helper(ud, d)
|
||||
if rev:
|
||||
ud.revision = rev
|
||||
else:
|
||||
ud.revision = ""
|
||||
|
||||
ud.localfile = d.expand('%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), relpath.replace('/', '.'), ud.revision))
|
||||
ud.localfile = d.expand('%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.path.replace('/', '.'), ud.revision))
|
||||
|
||||
def _buildosccommand(self, ud, d, command):
|
||||
"""
|
||||
@@ -89,7 +86,7 @@ class Osc(FetchMethod):
|
||||
|
||||
logger.debug2("Fetch: checking for module directory '" + ud.moddir + "'")
|
||||
|
||||
if os.access(ud.moddir, os.R_OK):
|
||||
if os.access(os.path.join(d.getVar('OSCDIR'), ud.path, ud.module), os.R_OK):
|
||||
oscupdatecmd = self._buildosccommand(ud, d, "update")
|
||||
logger.info("Update "+ ud.url)
|
||||
# update sources there
|
||||
@@ -117,23 +114,20 @@ class Osc(FetchMethod):
|
||||
Generate a .oscrc to be used for this run.
|
||||
"""
|
||||
|
||||
config_path = os.path.join(ud.oscdir, "oscrc")
|
||||
if not os.path.exists(ud.oscdir):
|
||||
bb.utils.mkdirhier(ud.oscdir)
|
||||
|
||||
config_path = os.path.join(d.getVar('OSCDIR'), "oscrc")
|
||||
if (os.path.exists(config_path)):
|
||||
os.remove(config_path)
|
||||
|
||||
f = open(config_path, 'w')
|
||||
proto = ud.parm.get('proto', 'https')
|
||||
f.write("[general]\n")
|
||||
f.write("apiurl = %s://%s\n" % (proto, ud.host))
|
||||
f.write("apisrv = %s\n" % ud.host)
|
||||
f.write("scheme = http\n")
|
||||
f.write("su-wrapper = su -c\n")
|
||||
f.write("build-root = %s\n" % d.getVar('WORKDIR'))
|
||||
f.write("urllist = %s\n" % d.getVar("OSCURLLIST"))
|
||||
f.write("extra-pkgs = gzip\n")
|
||||
f.write("\n")
|
||||
f.write("[%s://%s]\n" % (proto, ud.host))
|
||||
f.write("[%s]\n" % ud.host)
|
||||
f.write("user = %s\n" % ud.parm["user"])
|
||||
f.write("pass = %s\n" % ud.parm["pswd"])
|
||||
f.close()
|
||||
|
||||
@@ -134,7 +134,7 @@ class Perforce(FetchMethod):
|
||||
|
||||
ud.setup_revisions(d)
|
||||
|
||||
ud.localfile = d.expand('%s_%s_%s_%s.tar.gz' % (cleanedhost, cleanedpath, cleanedmodule, ud.revision))
|
||||
ud.localfile = d.expand('%s_%s_%s_%s.tar.gz' % (cleanedhost, cleanedpath, cleandedmodule, ud.revision))
|
||||
|
||||
def _buildp4command(self, ud, d, command, depot_filename=None):
|
||||
"""
|
||||
|
||||
@@ -18,47 +18,10 @@ The aws tool must be correctly installed and configured prior to use.
|
||||
import os
|
||||
import bb
|
||||
import urllib.request, urllib.parse, urllib.error
|
||||
import re
|
||||
from bb.fetch2 import FetchMethod
|
||||
from bb.fetch2 import FetchError
|
||||
from bb.fetch2 import runfetchcmd
|
||||
|
||||
def convertToBytes(value, unit):
|
||||
value = float(value)
|
||||
if (unit == "KiB"):
|
||||
value = value*1024.0;
|
||||
elif (unit == "MiB"):
|
||||
value = value*1024.0*1024.0;
|
||||
elif (unit == "GiB"):
|
||||
value = value*1024.0*1024.0*1024.0;
|
||||
return value
|
||||
|
||||
class S3ProgressHandler(bb.progress.LineFilterProgressHandler):
|
||||
"""
|
||||
Extract progress information from s3 cp output, e.g.:
|
||||
Completed 5.1 KiB/8.8 GiB (12.0 MiB/s) with 1 file(s) remaining
|
||||
"""
|
||||
def __init__(self, d):
|
||||
super(S3ProgressHandler, self).__init__(d)
|
||||
# Send an initial progress event so the bar gets shown
|
||||
self._fire_progress(0)
|
||||
|
||||
def writeline(self, line):
|
||||
percs = re.findall(r'^Completed (\d+.{0,1}\d*) (\w+)\/(\d+.{0,1}\d*) (\w+) (\(.+\)) with\s+', line)
|
||||
if percs:
|
||||
completed = (percs[-1][0])
|
||||
completedUnit = (percs[-1][1])
|
||||
total = (percs[-1][2])
|
||||
totalUnit = (percs[-1][3])
|
||||
completed = convertToBytes(completed, completedUnit)
|
||||
total = convertToBytes(total, totalUnit)
|
||||
progress = (completed/total)*100.0
|
||||
rate = percs[-1][4]
|
||||
self.update(progress, rate)
|
||||
return False
|
||||
return True
|
||||
|
||||
|
||||
class S3(FetchMethod):
|
||||
"""Class to fetch urls via 'aws s3'"""
|
||||
|
||||
@@ -89,9 +52,7 @@ class S3(FetchMethod):
|
||||
|
||||
cmd = '%s cp s3://%s%s %s' % (ud.basecmd, ud.host, ud.path, ud.localpath)
|
||||
bb.fetch2.check_network_access(d, cmd, ud.url)
|
||||
|
||||
progresshandler = S3ProgressHandler(d)
|
||||
runfetchcmd(cmd, d, False, log=progresshandler)
|
||||
runfetchcmd(cmd, d)
|
||||
|
||||
# Additional sanity checks copied from the wget class (although there
|
||||
# are no known issues which mean these are required, treat the aws cli
|
||||
|
||||
@@ -32,7 +32,6 @@ IETF secsh internet draft:
|
||||
|
||||
import re, os
|
||||
from bb.fetch2 import check_network_access, FetchMethod, ParameterError, runfetchcmd
|
||||
import urllib
|
||||
|
||||
|
||||
__pattern__ = re.compile(r'''
|
||||
@@ -41,9 +40,9 @@ __pattern__ = re.compile(r'''
|
||||
( # Optional username/password block
|
||||
(?P<user>\S+) # username
|
||||
(:(?P<pass>\S+))? # colon followed by the password (optional)
|
||||
)?
|
||||
(?P<cparam>(;[^;]+)*)? # connection parameters block (optional)
|
||||
@
|
||||
)?
|
||||
(?P<host>\S+?) # non-greedy match of the host
|
||||
(:(?P<port>[0-9]+))? # colon followed by the port (optional)
|
||||
/
|
||||
@@ -71,7 +70,6 @@ class SSH(FetchMethod):
|
||||
"git:// prefix with protocol=ssh", urldata.url)
|
||||
m = __pattern__.match(urldata.url)
|
||||
path = m.group('path')
|
||||
path = urllib.parse.unquote(path)
|
||||
host = m.group('host')
|
||||
urldata.localpath = os.path.join(d.getVar('DL_DIR'),
|
||||
os.path.basename(os.path.normpath(path)))
|
||||
@@ -98,11 +96,6 @@ class SSH(FetchMethod):
|
||||
fr += '@%s' % host
|
||||
else:
|
||||
fr = host
|
||||
|
||||
if path[0] != '~':
|
||||
path = '/%s' % path
|
||||
path = urllib.parse.unquote(path)
|
||||
|
||||
fr += ':%s' % path
|
||||
|
||||
cmd = 'scp -B -r %s %s %s/' % (
|
||||
@@ -115,43 +108,3 @@ class SSH(FetchMethod):
|
||||
|
||||
runfetchcmd(cmd, d)
|
||||
|
||||
def checkstatus(self, fetch, urldata, d):
|
||||
"""
|
||||
Check the status of the url
|
||||
"""
|
||||
m = __pattern__.match(urldata.url)
|
||||
path = m.group('path')
|
||||
host = m.group('host')
|
||||
port = m.group('port')
|
||||
user = m.group('user')
|
||||
password = m.group('pass')
|
||||
|
||||
if port:
|
||||
portarg = '-P %s' % port
|
||||
else:
|
||||
portarg = ''
|
||||
|
||||
if user:
|
||||
fr = user
|
||||
if password:
|
||||
fr += ':%s' % password
|
||||
fr += '@%s' % host
|
||||
else:
|
||||
fr = host
|
||||
|
||||
if path[0] != '~':
|
||||
path = '/%s' % path
|
||||
path = urllib.parse.unquote(path)
|
||||
|
||||
cmd = 'ssh -o BatchMode=true %s %s [ -f %s ]' % (
|
||||
portarg,
|
||||
fr,
|
||||
path
|
||||
)
|
||||
|
||||
check_network_access(d, cmd, urldata.url)
|
||||
|
||||
if runfetchcmd(cmd, d):
|
||||
return True
|
||||
|
||||
return False
|
||||
|
||||
@@ -57,12 +57,7 @@ class Svn(FetchMethod):
|
||||
if 'rev' in ud.parm:
|
||||
ud.revision = ud.parm['rev']
|
||||
|
||||
# Whether to use the @REV peg-revision syntax in the svn command or not
|
||||
ud.pegrevision = True
|
||||
if 'nopegrevision' in ud.parm:
|
||||
ud.pegrevision = False
|
||||
|
||||
ud.localfile = d.expand('%s_%s_%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision, ["0", "1"][ud.pegrevision]))
|
||||
ud.localfile = d.expand('%s_%s_%s_%s_.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision))
|
||||
|
||||
def _buildsvncommand(self, ud, d, command):
|
||||
"""
|
||||
@@ -91,7 +86,7 @@ class Svn(FetchMethod):
|
||||
if command == "info":
|
||||
svncmd = "%s info %s %s://%s/%s/" % (ud.basecmd, " ".join(options), proto, svnroot, ud.module)
|
||||
elif command == "log1":
|
||||
svncmd = "%s log --limit 1 --quiet %s %s://%s/%s/" % (ud.basecmd, " ".join(options), proto, svnroot, ud.module)
|
||||
svncmd = "%s log --limit 1 %s %s://%s/%s/" % (ud.basecmd, " ".join(options), proto, svnroot, ud.module)
|
||||
else:
|
||||
suffix = ""
|
||||
|
||||
@@ -103,8 +98,7 @@ class Svn(FetchMethod):
|
||||
|
||||
if ud.revision:
|
||||
options.append("-r %s" % ud.revision)
|
||||
if ud.pegrevision:
|
||||
suffix = "@%s" % (ud.revision)
|
||||
suffix = "@%s" % (ud.revision)
|
||||
|
||||
if command == "fetch":
|
||||
transportuser = ud.parm.get("transportuser", "")
|
||||
|
||||
@@ -52,24 +52,18 @@ class WgetProgressHandler(bb.progress.LineFilterProgressHandler):
|
||||
|
||||
|
||||
class Wget(FetchMethod):
|
||||
"""Class to fetch urls via 'wget'"""
|
||||
|
||||
# CDNs like CloudFlare may do a 'browser integrity test' which can fail
|
||||
# with the standard wget/urllib User-Agent, so pretend to be a modern
|
||||
# browser.
|
||||
user_agent = "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:84.0) Gecko/20100101 Firefox/84.0"
|
||||
|
||||
def check_certs(self, d):
|
||||
"""
|
||||
Should certificates be checked?
|
||||
"""
|
||||
return (d.getVar("BB_CHECK_SSL_CERTS") or "1") != "0"
|
||||
|
||||
"""Class to fetch urls via 'wget'"""
|
||||
def supports(self, ud, d):
|
||||
"""
|
||||
Check to see if a given url can be fetched with wget.
|
||||
"""
|
||||
return ud.type in ['http', 'https', 'ftp', 'ftps']
|
||||
return ud.type in ['http', 'https', 'ftp']
|
||||
|
||||
def recommends_checksum(self, urldata):
|
||||
return True
|
||||
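A short sketch of the knob introduced on the newer side of this hunk (the value shown is the non-default):

    # Datastore form of a local.conf assignment, e.g. behind an intercepting proxy:
    d.setVar("BB_CHECK_SSL_CERTS", "0")   # adds --no-check-certificate to wget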
@@ -88,13 +82,7 @@ class Wget(FetchMethod):
|
||||
if not ud.localfile:
|
||||
ud.localfile = d.expand(urllib.parse.unquote(ud.host + ud.path).replace("/", "."))
|
||||
|
||||
self.basecmd = d.getVar("FETCHCMD_wget") or "/usr/bin/env wget -t 2 -T 30"
|
||||
|
||||
if ud.type == 'ftp' or ud.type == 'ftps':
|
||||
self.basecmd += " --passive-ftp"
|
||||
|
||||
if not self.check_certs(d):
|
||||
self.basecmd += " --no-check-certificate"
|
||||
self.basecmd = d.getVar("FETCHCMD_wget") or "/usr/bin/env wget -t 2 -T 30 --passive-ftp --no-check-certificate"
|
||||
|
||||
def _runwget(self, ud, d, command, quiet, workdir=None):
|
||||
|
@@ -109,37 +97,23 @@ class Wget(FetchMethod):

        fetchcmd = self.basecmd

        dldir = os.path.realpath(d.getVar("DL_DIR"))
        localpath = os.path.join(dldir, ud.localfile) + ".tmp"
        bb.utils.mkdirhier(os.path.dirname(localpath))
        fetchcmd += " -O %s" % shlex.quote(localpath)
        if 'downloadfilename' in ud.parm:
            localpath = os.path.join(d.getVar("DL_DIR"), ud.localfile)
            bb.utils.mkdirhier(os.path.dirname(localpath))
            fetchcmd += " -O %s" % shlex.quote(localpath)

        if ud.user and ud.pswd:
            fetchcmd += " --auth-no-challenge"
            if ud.parm.get("redirectauth", "1") == "1":
                # An undocumented feature of wget is that if the
                # username/password are specified on the URI, wget will only
                # send the Authorization header to the first host and not to
                # any hosts that it is redirected to. With the increasing
                # usage of temporary AWS URLs, this difference now matters as
                # AWS will reject any request that has authentication both in
                # the query parameters (from the redirect) and in the
                # Authorization header.
                fetchcmd += " --user=%s --password=%s" % (ud.user, ud.pswd)
            fetchcmd += " --user=%s --password=%s --auth-no-challenge" % (ud.user, ud.pswd)

        uri = ud.url.split(";")[0]
        if os.path.exists(ud.localpath):
            # file exists, but we didnt complete it.. trying again..
            fetchcmd += " -c -P " + dldir + " '" + uri + "'"
            fetchcmd += d.expand(" -c -P ${DL_DIR} '%s'" % uri)
        else:
            fetchcmd += " -P " + dldir + " '" + uri + "'"
            fetchcmd += d.expand(" -P ${DL_DIR} '%s'" % uri)

        self._runwget(ud, d, fetchcmd, False)

        # Remove the ".tmp" and move the file into position atomically
        # Our lock prevents multiple writers but mirroring code may grab incomplete files
        os.rename(localpath, localpath[:-4])

        # Sanity check since wget can pretend it succeed when it didn't
        # Also, this used to happen if sourceforge sent us to the mirror page
        if not os.path.exists(ud.localpath):
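The redirectauth handling in the hunk above exists because wget only sends a URI-embedded Authorization header to the first host, while credentials passed as --user/--password options (combined with --auth-no-challenge) also apply to redirect targets, which pre-signed AWS URLs reject when auth is already in the query string. A rough sketch of assembling such a command outside bitbake; the function and parameter names are illustrative:

def build_wget_command(basecmd, user, password, auth_follows_redirects=True):
    cmd = basecmd
    if user and password:
        # Send Basic auth up front instead of waiting for a 401 challenge.
        cmd += " --auth-no-challenge"
        if auth_follows_redirects:
            # Mirrors the default redirectauth=1 case: credentials go on the
            # command line. A caller would pass False for redirects to targets
            # (e.g. pre-signed URLs) that must not see an Authorization header.
            cmd += " --user=%s --password=%s" % (user, password)
    return cmd

print(build_wget_command("wget -t 2 -T 30", "builder", "secret"))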
@@ -235,7 +209,7 @@ class Wget(FetchMethod):
|
||||
# We let the request fail and expect it to be
|
||||
# tried once more ("try_again" in check_status()),
|
||||
# with the dead connection removed from the cache.
|
||||
# If it still fails, we give up, which can happen for bad
|
||||
# If it still fails, we give up, which can happend for bad
|
||||
# HTTP proxy settings.
|
||||
fetch.connection_cache.remove_connection(h.host, h.port)
|
||||
raise urllib.error.URLError(err)
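The comment in this hunk describes a drop-the-connection-and-retry-once strategy. A compact sketch of that shape, with a plain dict standing in for the fetcher's connection cache:

import urllib.error
import urllib.request

def check_url(url, connection_cache, try_again=True):
    try:
        with urllib.request.urlopen(url, timeout=30):
            return True
    except (urllib.error.URLError, ConnectionResetError):
        if try_again:
            # Drop whatever cached connection may have gone stale, then make
            # exactly one more attempt before giving up.
            connection_cache.clear()
            return check_url(url, connection_cache, try_again=False)
        return False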
|
||||
@@ -308,82 +282,64 @@ class Wget(FetchMethod):
|
||||
newreq = urllib.request.HTTPRedirectHandler.redirect_request(self, req, fp, code, msg, headers, newurl)
|
||||
newreq.get_method = req.get_method
|
||||
return newreq
|
||||
exported_proxies = export_proxies(d)
|
||||
|
||||
# We need to update the environment here as both the proxy and HTTPS
|
||||
# handlers need variables set. The proxy needs http_proxy and friends to
|
||||
# be set, and HTTPSHandler ends up calling into openssl to load the
|
||||
# certificates. In buildtools configurations this will be looking at the
|
||||
# wrong place for certificates by default: we set SSL_CERT_FILE to the
|
||||
# right location in the buildtools environment script but as BitBake
|
||||
# prunes prunes the environment this is lost. When binaries are executed
|
||||
# runfetchcmd ensures these values are in the environment, but this is
|
||||
# pure Python so we need to update the environment.
|
||||
#
|
||||
# Avoid tramping the environment too much by using bb.utils.environment
|
||||
# to scope the changes to the build_opener request, which is when the
|
||||
# environment lookups happen.
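The bb.utils.environment scoping mentioned in the comment above boils down to overlaying os.environ for one with-block and restoring it afterwards. A minimal sketch of that idea, not bitbake's implementation; the SSL_CERT_FILE value is illustrative:

import contextlib
import os

@contextlib.contextmanager
def scoped_environment(**overrides):
    # Overlay os.environ for the duration of the with-block, then restore the
    # previous values so the rest of the process is unaffected.
    saved = {k: os.environ.get(k) for k in overrides}
    os.environ.update({k: v for k, v in overrides.items() if v is not None})
    try:
        yield
    finally:
        for k, v in saved.items():
            if v is None:
                os.environ.pop(k, None)
            else:
                os.environ[k] = v

# Example use: scope certificate lookups to the opener construction only.
# with scoped_environment(SSL_CERT_FILE="/path/to/ca.pem"):
#     opener = urllib.request.build_opener(*handlers)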
|
||||
newenv = bb.fetch2.get_fetcher_environment(d)
|
||||
handlers = [FixedHTTPRedirectHandler, HTTPMethodFallback]
|
||||
if exported_proxies:
|
||||
handlers.append(urllib.request.ProxyHandler())
|
||||
handlers.append(CacheHTTPHandler())
|
||||
# Since Python 2.7.9 ssl cert validation is enabled by default
|
||||
# see PEP-0476, this causes verification errors on some https servers
|
||||
# so disable by default.
|
||||
import ssl
|
||||
if hasattr(ssl, '_create_unverified_context'):
|
||||
handlers.append(urllib.request.HTTPSHandler(context=ssl._create_unverified_context()))
|
||||
opener = urllib.request.build_opener(*handlers)
|
||||
|
||||
with bb.utils.environment(**newenv):
|
||||
import ssl
|
||||
try:
|
||||
uri = ud.url.split(";")[0]
|
||||
r = urllib.request.Request(uri)
|
||||
r.get_method = lambda: "HEAD"
|
||||
# Some servers (FusionForge, as used on Alioth) require that the
|
||||
# optional Accept header is set.
|
||||
r.add_header("Accept", "*/*")
|
||||
r.add_header("User-Agent", self.user_agent)
|
||||
def add_basic_auth(login_str, request):
|
||||
'''Adds Basic auth to http request, pass in login:password as string'''
|
||||
import base64
|
||||
encodeuser = base64.b64encode(login_str.encode('utf-8')).decode("utf-8")
|
||||
authheader = "Basic %s" % encodeuser
|
||||
r.add_header("Authorization", authheader)
|
||||
|
||||
if self.check_certs(d):
|
||||
context = ssl.create_default_context()
|
||||
else:
|
||||
context = ssl._create_unverified_context()
|
||||
|
||||
handlers = [FixedHTTPRedirectHandler,
|
||||
HTTPMethodFallback,
|
||||
urllib.request.ProxyHandler(),
|
||||
CacheHTTPHandler(),
|
||||
urllib.request.HTTPSHandler(context=context)]
|
||||
opener = urllib.request.build_opener(*handlers)
|
||||
if ud.user and ud.pswd:
|
||||
add_basic_auth(ud.user + ':' + ud.pswd, r)
|
||||
|
||||
try:
|
||||
uri = ud.url.split(";")[0]
|
||||
r = urllib.request.Request(uri)
|
||||
r.get_method = lambda: "HEAD"
|
||||
# Some servers (FusionForge, as used on Alioth) require that the
|
||||
# optional Accept header is set.
|
||||
r.add_header("Accept", "*/*")
|
||||
r.add_header("User-Agent", self.user_agent)
|
||||
def add_basic_auth(login_str, request):
|
||||
'''Adds Basic auth to http request, pass in login:password as string'''
|
||||
import base64
|
||||
encodeuser = base64.b64encode(login_str.encode('utf-8')).decode("utf-8")
|
||||
authheader = "Basic %s" % encodeuser
|
||||
r.add_header("Authorization", authheader)
|
||||
|
||||
if ud.user and ud.pswd:
|
||||
add_basic_auth(ud.user + ':' + ud.pswd, r)
|
||||
|
||||
try:
|
||||
import netrc
|
||||
n = netrc.netrc()
|
||||
login, unused, password = n.authenticators(urllib.parse.urlparse(uri).hostname)
|
||||
add_basic_auth("%s:%s" % (login, password), r)
|
||||
except (TypeError, ImportError, IOError, netrc.NetrcParseError):
|
||||
pass
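Both variants of the status check above build a HEAD request, force the Accept and User-Agent headers, and fall back to ~/.netrc for Basic auth. A condensed sketch of the same pattern as a free function; the names are illustrative:

import base64
import netrc
import urllib.parse
import urllib.request

def head_request(uri, user=None, password=None):
    r = urllib.request.Request(uri)
    r.get_method = lambda: "HEAD"
    r.add_header("Accept", "*/*")
    if not (user and password):
        try:
            # Fall back to ~/.netrc credentials for this host, if any.
            auth = netrc.netrc().authenticators(urllib.parse.urlparse(uri).hostname)
            if auth:
                user, _, password = auth
        except (FileNotFoundError, netrc.NetrcParseError):
            pass
    if user and password:
        token = base64.b64encode(("%s:%s" % (user, password)).encode("utf-8")).decode("utf-8")
        r.add_header("Authorization", "Basic %s" % token)
    return r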
|
||||
|
||||
with opener.open(r, timeout=30) as response:
|
||||
pass
|
||||
except urllib.error.URLError as e:
|
||||
if try_again:
|
||||
logger.debug2("checkstatus: trying again")
|
||||
return self.checkstatus(fetch, ud, d, False)
|
||||
else:
|
||||
# debug for now to avoid spamming the logs in e.g. remote sstate searches
|
||||
logger.debug2("checkstatus() urlopen failed: %s" % e)
|
||||
return False
|
||||
except ConnectionResetError as e:
|
||||
if try_again:
|
||||
logger.debug2("checkstatus: trying again")
|
||||
return self.checkstatus(fetch, ud, d, False)
|
||||
else:
|
||||
# debug for now to avoid spamming the logs in e.g. remote sstate searches
|
||||
logger.debug2("checkstatus() urlopen failed: %s" % e)
|
||||
return False
|
||||
import netrc
|
||||
n = netrc.netrc()
|
||||
login, unused, password = n.authenticators(urllib.parse.urlparse(uri).hostname)
|
||||
add_basic_auth("%s:%s" % (login, password), r)
|
||||
except (TypeError, ImportError, IOError, netrc.NetrcParseError):
|
||||
pass
|
||||
|
||||
with opener.open(r) as response:
|
||||
pass
|
||||
except urllib.error.URLError as e:
|
||||
if try_again:
|
||||
logger.debug2("checkstatus: trying again")
|
||||
return self.checkstatus(fetch, ud, d, False)
|
||||
else:
|
||||
# debug for now to avoid spamming the logs in e.g. remote sstate searches
|
||||
logger.debug2("checkstatus() urlopen failed: %s" % e)
|
||||
return False
|
||||
except ConnectionResetError as e:
|
||||
if try_again:
|
||||
logger.debug2("checkstatus: trying again")
|
||||
return self.checkstatus(fetch, ud, d, False)
|
||||
else:
|
||||
# debug for now to avoid spamming the logs in e.g. remote sstate searches
|
||||
logger.debug2("checkstatus() urlopen failed: %s" % e)
|
||||
return False
|
||||
return True
|
||||
|
||||
def _parse_path(self, regex, s):
|
||||
@@ -516,7 +472,7 @@ class Wget(FetchMethod):
|
||||
version_dir = ['', '', '']
|
||||
version = ['', '', '']
|
||||
|
||||
dirver_regex = re.compile(r"(?P<pfx>\D*)(?P<ver>(\d+[\.\-_])*(\d+))")
|
||||
dirver_regex = re.compile(r"(?P<pfx>\D*)(?P<ver>(\d+[\.\-_])+(\d+))")
|
||||
s = dirver_regex.search(dirver)
|
||||
if s:
|
||||
version_dir[1] = s.group('ver')
|
||||
@@ -592,7 +548,7 @@ class Wget(FetchMethod):
|
||||
|
||||
# src.rpm extension was added only for rpm package. Can be removed if the rpm
|
||||
# packaged will always be considered as having to be manually upgraded
|
||||
psuffix_regex = r"(tar\.\w+|tgz|zip|xz|rpm|bz2|orig\.tar\.\w+|src\.tar\.\w+|src\.tgz|svnr\d+\.tar\.\w+|stable\.tar\.\w+|src\.rpm)"
|
||||
psuffix_regex = r"(tar\.gz|tgz|tar\.bz2|zip|xz|tar\.lz|rpm|bz2|orig\.tar\.gz|tar\.xz|src\.tar\.gz|src\.tgz|svnr\d+\.tar\.bz2|stable\.tar\.gz|src\.rpm)"
|
||||
|
||||
# match name, version and archive type of a package
|
||||
package_regex_comp = re.compile(r"(?P<name>%s?\.?v?)(?P<pver>%s)(?P<arch>%s)?[\.-](?P<type>%s$)"
|
||||
|
||||
@@ -112,6 +112,13 @@ def _showwarning(message, category, filename, lineno, file=None, line=None):
|
||||
warnlog.warning(s)
|
||||
|
||||
warnings.showwarning = _showwarning
|
||||
warnings.filterwarnings("ignore")
|
||||
warnings.filterwarnings("default", module="(<string>$|(oe|bb)\.)")
|
||||
warnings.filterwarnings("ignore", category=PendingDeprecationWarning)
|
||||
warnings.filterwarnings("ignore", category=ImportWarning)
|
||||
warnings.filterwarnings("ignore", category=DeprecationWarning, module="<string>$")
|
||||
warnings.filterwarnings("ignore", message="With-statements now directly support multiple context managers")
|
||||
|
||||
|
||||
def create_bitbake_parser():
|
||||
parser = optparse.OptionParser(
|
||||
@@ -127,7 +134,7 @@ def create_bitbake_parser():
|
||||
help="Execute tasks from a specific .bb recipe directly. WARNING: Does "
|
||||
"not handle any dependencies from other recipes.")
|
||||
|
||||
parser.add_option("-k", "--continue", action="store_false", dest="halt", default=True,
|
||||
parser.add_option("-k", "--continue", action="store_false", dest="abort", default=True,
|
||||
help="Continue as much as possible after an error. While the target that "
|
||||
"failed and anything depending on it cannot be built, as much as "
|
||||
"possible will be built before stopping.")
|
||||
|
||||
@@ -76,12 +76,7 @@ def getDiskData(BBDirs):
|
||||
return None
|
||||
|
||||
action = pathSpaceInodeRe.group(1)
|
||||
if action == "ABORT":
|
||||
# Emit a deprecation warning
|
||||
logger.warnonce("The BB_DISKMON_DIRS \"ABORT\" action has been renamed to \"HALT\", update configuration")
|
||||
action = "HALT"
|
||||
|
||||
if action not in ("HALT", "STOPTASKS", "WARN"):
|
||||
if action not in ("ABORT", "STOPTASKS", "WARN"):
|
||||
printErr("Unknown disk space monitor action: %s" % action)
|
||||
return None
|
||||
|
||||
@@ -182,7 +177,7 @@ class diskMonitor:
|
||||
# use them to avoid printing too many warning messages
|
||||
self.preFreeS = {}
|
||||
self.preFreeI = {}
|
||||
# This is for STOPTASKS and HALT, to avoid printing the message
|
||||
# This is for STOPTASKS and ABORT, to avoid printing the message
|
||||
# repeatedly while waiting for the tasks to finish
|
||||
self.checked = {}
|
||||
for k in self.devDict:
|
||||
@@ -224,8 +219,8 @@ class diskMonitor:
|
||||
self.checked[k] = True
|
||||
rq.finish_runqueue(False)
|
||||
bb.event.fire(bb.event.DiskFull(dev, 'disk', freeSpace, path), self.configuration)
|
||||
elif action == "HALT" and not self.checked[k]:
|
||||
logger.error("Immediately halt since the disk space monitor action is \"HALT\"!")
|
||||
elif action == "ABORT" and not self.checked[k]:
|
||||
logger.error("Immediately abort since the disk space monitor action is \"ABORT\"!")
|
||||
self.checked[k] = True
|
||||
rq.finish_runqueue(True)
|
||||
bb.event.fire(bb.event.DiskFull(dev, 'disk', freeSpace, path), self.configuration)
|
||||
@@ -234,10 +229,9 @@ class diskMonitor:
|
||||
freeInode = st.f_favail
|
||||
|
||||
if minInode and freeInode < minInode:
|
||||
# Some filesystems use dynamic inodes so can't run out.
|
||||
# This is reported by the inode count being 0 (btrfs) or the free
|
||||
# inode count being -1 (cephfs).
|
||||
if st.f_files == 0 or st.f_favail == -1:
|
||||
# Some filesystems use dynamic inodes so can't run out
|
||||
# (e.g. btrfs). This is reported by the inode count being 0.
|
||||
if st.f_files == 0:
|
||||
self.devDict[k][2] = None
|
||||
continue
|
||||
# Always show warning, the self.checked would always be False if the action is WARN
|
||||
@@ -251,8 +245,8 @@ class diskMonitor:
|
||||
self.checked[k] = True
|
||||
rq.finish_runqueue(False)
|
||||
bb.event.fire(bb.event.DiskFull(dev, 'inode', freeInode, path), self.configuration)
|
||||
elif action == "HALT" and not self.checked[k]:
|
||||
logger.error("Immediately halt since the disk space monitor action is \"HALT\"!")
|
||||
elif action == "ABORT" and not self.checked[k]:
|
||||
logger.error("Immediately abort since the disk space monitor action is \"ABORT\"!")
|
||||
self.checked[k] = True
|
||||
rq.finish_runqueue(True)
|
||||
bb.event.fire(bb.event.DiskFull(dev, 'inode', freeInode, path), self.configuration)
|
||||
|
||||
@@ -30,9 +30,7 @@ class BBLogFormatter(logging.Formatter):
|
||||
PLAIN = logging.INFO + 1
|
||||
VERBNOTE = logging.INFO + 2
|
||||
ERROR = logging.ERROR
|
||||
ERRORONCE = logging.ERROR - 1
|
||||
WARNING = logging.WARNING
|
||||
WARNONCE = logging.WARNING - 1
|
||||
CRITICAL = logging.CRITICAL
|
||||
|
||||
levelnames = {
|
||||
@@ -44,9 +42,7 @@ class BBLogFormatter(logging.Formatter):
|
||||
PLAIN : '',
|
||||
VERBNOTE: 'NOTE',
|
||||
WARNING : 'WARNING',
|
||||
WARNONCE : 'WARNING',
|
||||
ERROR : 'ERROR',
|
||||
ERRORONCE : 'ERROR',
|
||||
CRITICAL: 'ERROR',
|
||||
}
|
||||
|
||||
@@ -62,9 +58,7 @@ class BBLogFormatter(logging.Formatter):
|
||||
PLAIN : BASECOLOR,
|
||||
VERBNOTE: BASECOLOR,
|
||||
WARNING : YELLOW,
|
||||
WARNONCE : YELLOW,
|
||||
ERROR : RED,
|
||||
ERRORONCE : RED,
|
||||
CRITICAL: RED,
|
||||
}
|
||||
|
||||
@@ -127,22 +121,6 @@ class BBLogFilter(object):
|
||||
return True
|
||||
return False
|
||||
|
||||
class LogFilterShowOnce(logging.Filter):
|
||||
def __init__(self):
|
||||
self.seen_warnings = set()
|
||||
self.seen_errors = set()
|
||||
|
||||
def filter(self, record):
|
||||
if record.levelno == bb.msg.BBLogFormatter.WARNONCE:
|
||||
if record.msg in self.seen_warnings:
|
||||
return False
|
||||
self.seen_warnings.add(record.msg)
|
||||
if record.levelno == bb.msg.BBLogFormatter.ERRORONCE:
|
||||
if record.msg in self.seen_errors:
|
||||
return False
|
||||
self.seen_errors.add(record.msg)
|
||||
return True
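The WARNONCE/ERRORONCE plumbing shown in this hunk relies on a filter that remembers which messages have already been emitted. A generic sketch of such a show-once filter and how it would be attached to a logger (not the bb.msg implementation):

import logging

class ShowOnceFilter(logging.Filter):
    """Let a message through the first time it is seen at the given level."""
    def __init__(self, level):
        super().__init__()
        self.level = level
        self.seen = set()

    def filter(self, record):
        if record.levelno == self.level:
            if record.msg in self.seen:
                return False
            self.seen.add(record.msg)
        return True

log = logging.getLogger("example")
log.addFilter(ShowOnceFilter(logging.WARNING))
log.warning("only printed once")
log.warning("only printed once")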
|
||||
|
||||
class LogFilterGEQLevel(logging.Filter):
|
||||
def __init__(self, level):
|
||||
self.strlevel = str(level)
|
||||
@@ -228,7 +206,6 @@ def logger_create(name, output=sys.stderr, level=logging.INFO, preserve_handlers
|
||||
"""Standalone logger creation function"""
|
||||
logger = logging.getLogger(name)
|
||||
console = logging.StreamHandler(output)
|
||||
console.addFilter(bb.msg.LogFilterShowOnce())
|
||||
format = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
|
||||
if color == 'always' or (color == 'auto' and output.isatty()):
|
||||
format.enable_color()
|
||||
@@ -316,17 +293,10 @@ def setLoggingConfig(defaultconfig, userconfigfile=None):
|
||||
|
||||
# Convert all level parameters to integers in case users want to use the
|
||||
# bitbake defined level names
|
||||
for name, h in logconfig["handlers"].items():
|
||||
for h in logconfig["handlers"].values():
|
||||
if "level" in h:
|
||||
h["level"] = bb.msg.stringToLevel(h["level"])
|
||||
|
||||
# Every handler needs its own instance of the once filter.
|
||||
once_filter_name = name + ".showonceFilter"
|
||||
logconfig.setdefault("filters", {})[once_filter_name] = {
|
||||
"()": "bb.msg.LogFilterShowOnce",
|
||||
}
|
||||
h.setdefault("filters", []).append(once_filter_name)
|
||||
|
||||
for l in logconfig["loggers"].values():
|
||||
if "level" in l:
|
||||
l["level"] = bb.msg.stringToLevel(l["level"])
|
||||
|
||||
@@ -49,23 +49,20 @@ class SkipPackage(SkipRecipe):
|
||||
__mtime_cache = {}
|
||||
def cached_mtime(f):
|
||||
if f not in __mtime_cache:
|
||||
res = os.stat(f)
|
||||
__mtime_cache[f] = (res.st_mtime_ns, res.st_size, res.st_ino)
|
||||
__mtime_cache[f] = os.stat(f)[stat.ST_MTIME]
|
||||
return __mtime_cache[f]
|
||||
|
||||
def cached_mtime_noerror(f):
|
||||
if f not in __mtime_cache:
|
||||
try:
|
||||
res = os.stat(f)
|
||||
__mtime_cache[f] = (res.st_mtime_ns, res.st_size, res.st_ino)
|
||||
__mtime_cache[f] = os.stat(f)[stat.ST_MTIME]
|
||||
except OSError:
|
||||
return 0
|
||||
return __mtime_cache[f]
|
||||
|
||||
def update_mtime(f):
|
||||
try:
|
||||
res = os.stat(f)
|
||||
__mtime_cache[f] = (res.st_mtime_ns, res.st_size, res.st_ino)
|
||||
__mtime_cache[f] = os.stat(f)[stat.ST_MTIME]
|
||||
except OSError:
|
||||
if f in __mtime_cache:
|
||||
del __mtime_cache[f]
|
||||
@@ -116,8 +113,6 @@ def init(fn, data):
|
||||
return h['init'](data)
|
||||
|
||||
def init_parser(d):
|
||||
if hasattr(bb.parse, "siggen"):
|
||||
bb.parse.siggen.exit()
|
||||
bb.parse.siggen = bb.siggen.init(d)
|
||||
|
||||
def resolve_file(fn, d):
|
||||
|
||||
@@ -130,10 +130,6 @@ class DataNode(AstNode):
|
||||
else:
|
||||
val = groupd["value"]
|
||||
|
||||
if ":append" in key or ":remove" in key or ":prepend" in key:
|
||||
if op in ["append", "prepend", "postdot", "predot", "ques"]:
|
||||
bb.warn(key + " " + groupd[op] + " is not a recommended operator combination, please replace it.")
|
||||
|
||||
flag = None
|
||||
if 'flag' in groupd and groupd['flag'] is not None:
|
||||
flag = groupd['flag']
|
||||
@@ -149,7 +145,7 @@ class DataNode(AstNode):
|
||||
data.setVar(key, val, parsing=True, **loginfo)
|
||||
|
||||
class MethodNode(AstNode):
|
||||
tr_tbl = str.maketrans('/.+-@%&~', '________')
|
||||
tr_tbl = str.maketrans('/.+-@%&', '_______')
|
||||
|
||||
def __init__(self, filename, lineno, func_name, body, python, fakeroot):
|
||||
AstNode.__init__(self, filename, lineno)
|
||||
@@ -223,7 +219,7 @@ class ExportFuncsNode(AstNode):
|
||||
for flag in [ "func", "python" ]:
|
||||
if data.getVarFlag(calledfunc, flag, False):
|
||||
data.setVarFlag(func, flag, data.getVarFlag(calledfunc, flag, False))
|
||||
for flag in ["dirs", "cleandirs", "fakeroot"]:
|
||||
for flag in [ "dirs" ]:
|
||||
if data.getVarFlag(func, flag, False):
|
||||
data.setVarFlag(calledfunc, flag, data.getVarFlag(func, flag, False))
|
||||
data.setVarFlag(func, "filename", "autogenerated")
|
||||
@@ -333,10 +329,6 @@ def runAnonFuncs(d):
|
||||
def finalize(fn, d, variant = None):
|
||||
saved_handlers = bb.event.get_handlers().copy()
|
||||
try:
|
||||
# Found renamed variables. Exit immediately
|
||||
if d.getVar("_FAILPARSINGERRORHANDLED", False) == True:
|
||||
raise bb.BBHandledException()
|
||||
|
||||
for var in d.getVar('__BBHANDLERS', False) or []:
|
||||
# try to add the handler
|
||||
handlerfn = d.getVarFlag(var, "filename", False)
|
||||
|
||||
@@ -19,7 +19,10 @@ from . import ConfHandler
|
||||
from .. import resolve_file, ast, logger, ParseError
|
||||
from .ConfHandler import include, init
|
||||
|
||||
__func_start_regexp__ = re.compile(r"(((?P<py>python(?=(\s|\()))|(?P<fr>fakeroot(?=\s)))\s*)*(?P<func>[\w\.\-\+\{\}\$:]+)?\s*\(\s*\)\s*{$" )
|
||||
# For compatibility
|
||||
bb.deprecate_import(__name__, "bb.parse", ["vars_from_file"])
|
||||
|
||||
__func_start_regexp__ = re.compile(r"(((?P<py>python(?=(\s|\()))|(?P<fr>fakeroot(?=\s)))\s*)*(?P<func>[\w\.\-\+\{\}\$]+)?\s*\(\s*\)\s*{$" )
|
||||
__inherit_regexp__ = re.compile(r"inherit\s+(.+)" )
|
||||
__export_func_regexp__ = re.compile(r"EXPORT_FUNCTIONS\s+(.+)" )
|
||||
__addtask_regexp__ = re.compile(r"addtask\s+(?P<func>\w+)\s*((before\s*(?P<before>((.*(?=after))|(.*))))|(after\s*(?P<after>((.*(?=before))|(.*)))))*")
|
||||
@@ -178,10 +181,10 @@ def feeder(lineno, s, fn, root, statements, eof=False):
|
||||
|
||||
if s and s[0] == '#':
|
||||
if len(__residue__) != 0 and __residue__[0][0] != "#":
|
||||
bb.fatal("There is a comment on line %s of file %s:\n'''\n%s\n'''\nwhich is in the middle of a multiline expression. This syntax is invalid, please correct it." % (lineno, fn, s))
|
||||
bb.fatal("There is a comment on line %s of file %s (%s) which is in the middle of a multiline expression.\nBitbake used to ignore these but no longer does so, please fix your metadata as errors are likely as a result of this change." % (lineno, fn, s))
|
||||
|
||||
if len(__residue__) != 0 and __residue__[0][0] == "#" and (not s or s[0] != "#"):
|
||||
bb.fatal("There is a confusing multiline partially commented expression on line %s of file %s:\n%s\nPlease clarify whether this is all a comment or should be parsed." % (lineno - len(__residue__), fn, "\n".join(__residue__)))
|
||||
bb.fatal("There is a confusing multiline, partially commented expression on line %s of file %s (%s).\nPlease clarify whether this is all a comment or should be parsed." % (lineno, fn, s))
|
||||
|
||||
if s and s[-1] == '\\':
|
||||
__residue__.append(s[:-1])
|
||||
|
||||
@@ -20,7 +20,7 @@ from bb.parse import ParseError, resolve_file, ast, logger, handle
|
||||
__config_regexp__ = re.compile( r"""
|
||||
^
|
||||
(?P<exp>export\s+)?
|
||||
(?P<var>[a-zA-Z0-9\-_+.${}/~:]+?)
|
||||
(?P<var>[a-zA-Z0-9\-_+.${}/~]+?)
|
||||
(\[(?P<flag>[a-zA-Z0-9\-_+.]+)\])?
|
||||
|
||||
\s* (
|
||||
@@ -48,7 +48,10 @@ __unset_regexp__ = re.compile( r"unset\s+([a-zA-Z0-9\-_+.${}/~]+)$" )
|
||||
__unset_flag_regexp__ = re.compile( r"unset\s+([a-zA-Z0-9\-_+.${}/~]+)\[([a-zA-Z0-9\-_+.]+)\]$" )
|
||||
|
||||
def init(data):
|
||||
return
|
||||
topdir = data.getVar('TOPDIR', False)
|
||||
if not topdir:
|
||||
data.setVar('TOPDIR', os.getcwd())
|
||||
|
||||
|
||||
def supports(fn, d):
|
||||
return fn[-5:] == ".conf"
|
||||
@@ -125,21 +128,16 @@ def handle(fn, data, include):
|
||||
s = f.readline()
|
||||
if not s:
|
||||
break
|
||||
origlineno = lineno
|
||||
origline = s
|
||||
w = s.strip()
|
||||
# skip empty lines
|
||||
if not w:
|
||||
continue
|
||||
s = s.rstrip()
|
||||
while s[-1] == '\\':
|
||||
line = f.readline()
|
||||
origline += line
|
||||
s2 = line.rstrip()
|
||||
s2 = f.readline().rstrip()
|
||||
lineno = lineno + 1
|
||||
if (not s2 or s2 and s2[0] != "#") and s[0] == "#" :
|
||||
bb.fatal("There is a confusing multiline, partially commented expression starting on line %s of file %s:\n%s\nPlease clarify whether this is all a comment or should be parsed." % (origlineno, fn, origline))
|
||||
|
||||
bb.fatal("There is a confusing multiline, partially commented expression on line %s of file %s (%s).\nPlease clarify whether this is all a comment or should be parsed." % (lineno, fn, s))
|
||||
s = s[:-1] + s2
|
||||
# skip comments
|
||||
if s[0] == '#':
|
||||
@@ -152,6 +150,8 @@ def handle(fn, data, include):
|
||||
if oldfile:
|
||||
data.setVar('FILE', oldfile)
|
||||
|
||||
f.close()
|
||||
|
||||
for f in confFilters:
|
||||
f(fn, data)
|
||||
|
||||
|
||||
@@ -12,14 +12,14 @@ currently, providing a key/value store accessed by 'domain'.
|
||||
#
|
||||
|
||||
import collections
|
||||
import collections.abc
|
||||
import contextlib
|
||||
import functools
|
||||
import logging
|
||||
import os.path
|
||||
import sqlite3
|
||||
import sys
|
||||
from collections.abc import Mapping
|
||||
import warnings
|
||||
from collections import Mapping
|
||||
|
||||
sqlversion = sqlite3.sqlite_version_info
|
||||
if sqlversion[0] < 3 or (sqlversion[0] == 3 and sqlversion[1] < 3):
|
||||
@@ -29,7 +29,7 @@ if sqlversion[0] < 3 or (sqlversion[0] == 3 and sqlversion[1] < 3):
|
||||
logger = logging.getLogger("BitBake.PersistData")
|
||||
|
||||
@functools.total_ordering
|
||||
class SQLTable(collections.abc.MutableMapping):
|
||||
class SQLTable(collections.MutableMapping):
|
||||
class _Decorators(object):
|
||||
@staticmethod
|
||||
def retry(*, reconnect=True):
|
||||
@@ -63,7 +63,7 @@ class SQLTable(collections.abc.MutableMapping):
|
||||
"""
|
||||
Decorator that starts a database transaction and creates a database
|
||||
cursor for performing queries. If no exception is thrown, the
|
||||
database results are committed. If an exception occurs, the database
|
||||
database results are commited. If an exception occurs, the database
|
||||
is rolled back. In all cases, the cursor is closed after the
|
||||
function ends.
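The decorator described in this docstring follows the usual commit-on-success, rollback-on-error shape. A minimal context-manager sketch of the same idea using sqlite3 directly (not the persist_data implementation):

import contextlib
import sqlite3

@contextlib.contextmanager
def transaction(db_path):
    # Commit if the block finishes cleanly, roll back on any exception, and
    # always close the cursor and connection afterwards.
    conn = sqlite3.connect(db_path)
    cursor = conn.cursor()
    try:
        yield cursor
        conn.commit()
    except Exception:
        conn.rollback()
        raise
    finally:
        cursor.close()
        conn.close()

with transaction(":memory:") as cur:
    cur.execute("CREATE TABLE demo (key TEXT PRIMARY KEY, value TEXT)")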
|
||||
|
||||
@@ -208,7 +208,7 @@ class SQLTable(collections.abc.MutableMapping):
|
||||
|
||||
def __lt__(self, other):
|
||||
if not isinstance(other, Mapping):
|
||||
raise NotImplementedError()
|
||||
raise NotImplemented
|
||||
|
||||
return len(self) < len(other)
|
||||
|
||||
@@ -238,6 +238,55 @@ class SQLTable(collections.abc.MutableMapping):
|
||||
def has_key(self, key):
|
||||
return key in self
|
||||
|
||||
|
||||
class PersistData(object):
|
||||
"""Deprecated representation of the bitbake persistent data store"""
|
||||
def __init__(self, d):
|
||||
warnings.warn("Use of PersistData is deprecated. Please use "
|
||||
"persist(domain, d) instead.",
|
||||
category=DeprecationWarning,
|
||||
stacklevel=2)
|
||||
|
||||
self.data = persist(d)
|
||||
logger.debug("Using '%s' as the persistent data cache",
|
||||
self.data.filename)
|
||||
|
||||
def addDomain(self, domain):
|
||||
"""
|
||||
Add a domain (pending deprecation)
|
||||
"""
|
||||
return self.data[domain]
|
||||
|
||||
def delDomain(self, domain):
|
||||
"""
|
||||
Removes a domain and all the data it contains
|
||||
"""
|
||||
del self.data[domain]
|
||||
|
||||
def getKeyValues(self, domain):
|
||||
"""
|
||||
Return a list of key + value pairs for a domain
|
||||
"""
|
||||
return list(self.data[domain].items())
|
||||
|
||||
def getValue(self, domain, key):
|
||||
"""
|
||||
Return the value of a key for a domain
|
||||
"""
|
||||
return self.data[domain][key]
|
||||
|
||||
def setValue(self, domain, key, value):
|
||||
"""
|
||||
Sets the value of a key for a domain
|
||||
"""
|
||||
self.data[domain][key] = value
|
||||
|
||||
def delValue(self, domain, key):
|
||||
"""
|
||||
Deletes a key/value pair
|
||||
"""
|
||||
del self.data[domain][key]
|
||||
|
||||
def persist(domain, d):
|
||||
"""Convenience factory for SQLTable objects based upon metadata"""
|
||||
import bb.utils
|
||||
|
||||
@@ -1,6 +1,4 @@
|
||||
#
|
||||
# Copyright BitBake Contributors
|
||||
#
|
||||
# SPDX-License-Identifier: GPL-2.0-only
|
||||
#
|
||||
|
||||
@@ -62,7 +60,7 @@ class Popen(subprocess.Popen):
|
||||
"close_fds": True,
|
||||
"preexec_fn": subprocess_setup,
|
||||
"stdout": subprocess.PIPE,
|
||||
"stderr": subprocess.PIPE,
|
||||
"stderr": subprocess.STDOUT,
|
||||
"stdin": subprocess.PIPE,
|
||||
"shell": False,
|
||||
}
|
||||
@@ -144,7 +142,7 @@ def _logged_communicate(pipe, log, input, extrafiles):
|
||||
while pipe.poll() is None:
|
||||
read_all_pipes(log, rin, outdata, errdata)
|
||||
|
||||
# Process closed, drain all pipes...
|
||||
# Pocess closed, drain all pipes...
|
||||
read_all_pipes(log, rin, outdata, errdata)
|
||||
finally:
|
||||
log.flush()
|
||||
@@ -183,8 +181,5 @@ def run(cmd, input=None, log=None, extrafiles=None, **options):
|
||||
stderr = stderr.decode("utf-8")
|
||||
|
||||
if pipe.returncode != 0:
|
||||
if log:
|
||||
# Don't duplicate the output in the exception if logging it
|
||||
raise ExecutionError(cmd, pipe.returncode, None, None)
|
||||
raise ExecutionError(cmd, pipe.returncode, stdout, stderr)
|
||||
return stdout, stderr
|
||||
|
||||
@@ -94,15 +94,12 @@ class LineFilterProgressHandler(ProgressHandler):
|
||||
while True:
|
||||
breakpos = self._linebuffer.find('\n') + 1
|
||||
if breakpos == 0:
|
||||
# for the case when the line with progress ends with only '\r'
|
||||
breakpos = self._linebuffer.find('\r') + 1
|
||||
if breakpos == 0:
|
||||
break
|
||||
break
|
||||
line = self._linebuffer[:breakpos]
|
||||
self._linebuffer = self._linebuffer[breakpos:]
|
||||
# Drop any line feeds and anything that precedes them
|
||||
lbreakpos = line.rfind('\r') + 1
|
||||
if lbreakpos and lbreakpos != breakpos:
|
||||
if lbreakpos:
|
||||
line = line[lbreakpos:]
|
||||
if self.writeline(filter_color(line)):
|
||||
super().write(line)
|
||||
@@ -148,7 +145,7 @@ class MultiStageProgressReporter:
|
||||
for tasks made up of python code spread across multiple
|
||||
classes / functions - the progress reporter object can
|
||||
be passed around or stored at the object level and calls
|
||||
to next_stage() and update() made wherever needed.
|
||||
to next_stage() and update() made whereever needed.
|
||||
"""
|
||||
def __init__(self, d, stage_weights, debug=False):
|
||||
"""
|
||||
|
||||
@@ -94,7 +94,7 @@ def versionVariableMatch(cfgData, keyword, pn):
|
||||
|
||||
# pn can contain '_', e.g. gcc-cross-x86_64 and an override cannot
|
||||
# hence we do this manually rather than use OVERRIDES
|
||||
ver = cfgData.getVar("%s_VERSION:pn-%s" % (keyword, pn))
|
||||
ver = cfgData.getVar("%s_VERSION_pn-%s" % (keyword, pn))
|
||||
if not ver:
|
||||
ver = cfgData.getVar("%s_VERSION_%s" % (keyword, pn))
|
||||
if not ver:
|
||||
@@ -133,7 +133,7 @@ def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
|
||||
|
||||
if required_v is not None:
|
||||
if preferred_v is not None:
|
||||
logger.warning("REQUIRED_VERSION and PREFERRED_VERSION for package %s%s are both set using REQUIRED_VERSION %s", pn, itemstr, required_v)
|
||||
logger.warn("REQUIRED_VERSION and PREFERRED_VERSION for package %s%s are both set using REQUIRED_VERSION %s", pn, itemstr, required_v)
|
||||
else:
|
||||
logger.debug("REQUIRED_VERSION is set for package %s%s", pn, itemstr)
|
||||
# REQUIRED_VERSION always takes precedence over PREFERRED_VERSION
|
||||
@@ -173,7 +173,7 @@ def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
|
||||
pv_str = '%s:%s' % (preferred_e, pv_str)
|
||||
if preferred_file is None:
|
||||
if not required:
|
||||
logger.warning("preferred version %s of %s not available%s", pv_str, pn, itemstr)
|
||||
logger.warn("preferred version %s of %s not available%s", pv_str, pn, itemstr)
|
||||
available_vers = []
|
||||
for file_set in pkg_pn:
|
||||
for f in file_set:
|
||||
@@ -185,7 +185,7 @@ def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
|
||||
available_vers.append(ver_str)
|
||||
if available_vers:
|
||||
available_vers.sort()
|
||||
logger.warning("versions of %s available: %s", pn, ' '.join(available_vers))
|
||||
logger.warn("versions of %s available: %s", pn, ' '.join(available_vers))
|
||||
if required:
|
||||
logger.error("required version %s of %s not available%s", pv_str, pn, itemstr)
|
||||
else:
|
||||
@@ -396,8 +396,8 @@ def getRuntimeProviders(dataCache, rdepend):
|
||||
return rproviders
|
||||
|
||||
# Only search dynamic packages if we can't find anything in other variables
|
||||
for pat_key in dataCache.packages_dynamic:
|
||||
pattern = pat_key.replace(r'+', r"\+")
|
||||
for pattern in dataCache.packages_dynamic:
|
||||
pattern = pattern.replace(r'+', r"\+")
|
||||
if pattern in regexp_cache:
|
||||
regexp = regexp_cache[pattern]
|
||||
else:
|
||||
@@ -408,7 +408,7 @@ def getRuntimeProviders(dataCache, rdepend):
|
||||
raise
|
||||
regexp_cache[pattern] = regexp
|
||||
if regexp.match(rdepend):
|
||||
rproviders += dataCache.packages_dynamic[pat_key]
|
||||
rproviders += dataCache.packages_dynamic[pattern]
|
||||
logger.debug("Assuming %s is a dynamic package, but it may not exist" % rdepend)
|
||||
|
||||
return rproviders
|
||||
|
||||
@@ -24,7 +24,6 @@ import pickle
|
||||
from multiprocessing import Process
|
||||
import shlex
|
||||
import pprint
|
||||
import time
|
||||
|
||||
bblogger = logging.getLogger("BitBake")
|
||||
logger = logging.getLogger("BitBake.RunQueue")
|
||||
@@ -86,19 +85,15 @@ class RunQueueStats:
|
||||
"""
|
||||
Holds statistics on the tasks handled by the associated runQueue
|
||||
"""
|
||||
def __init__(self, total, setscene_total):
|
||||
def __init__(self, total):
|
||||
self.completed = 0
|
||||
self.skipped = 0
|
||||
self.failed = 0
|
||||
self.active = 0
|
||||
self.setscene_active = 0
|
||||
self.setscene_covered = 0
|
||||
self.setscene_notcovered = 0
|
||||
self.setscene_total = setscene_total
|
||||
self.total = total
|
||||
|
||||
def copy(self):
|
||||
obj = self.__class__(self.total, self.setscene_total)
|
||||
obj = self.__class__(self.total)
|
||||
obj.__dict__.update(self.__dict__)
|
||||
return obj
|
||||
|
||||
@@ -117,13 +112,6 @@ class RunQueueStats:
|
||||
def taskActive(self):
|
||||
self.active = self.active + 1
|
||||
|
||||
def updateCovered(self, covered, notcovered):
|
||||
self.setscene_covered = covered
|
||||
self.setscene_notcovered = notcovered
|
||||
|
||||
def updateActiveSetscene(self, active):
|
||||
self.setscene_active = active
|
||||
|
||||
# These values indicate the next step due to be run in the
|
||||
# runQueue state machine
|
||||
runQueuePrepare = 2
|
||||
@@ -160,67 +148,6 @@ class RunQueueScheduler(object):
|
||||
self.buildable.append(tid)
|
||||
|
||||
self.rev_prio_map = None
|
||||
self.is_pressure_usable()
|
||||
|
||||
def is_pressure_usable(self):
|
||||
"""
|
||||
If monitoring pressure, return True if pressure files can be open and read. For example
|
||||
openSUSE /proc/pressure/* files have readable file permissions but when read the error EOPNOTSUPP (Operation not supported)
|
||||
is returned.
|
||||
"""
|
||||
if self.rq.max_cpu_pressure or self.rq.max_io_pressure or self.rq.max_memory_pressure:
|
||||
try:
|
||||
with open("/proc/pressure/cpu") as cpu_pressure_fds, \
|
||||
open("/proc/pressure/io") as io_pressure_fds, \
|
||||
open("/proc/pressure/memory") as memory_pressure_fds:
|
||||
|
||||
self.prev_cpu_pressure = cpu_pressure_fds.readline().split()[4].split("=")[1]
|
||||
self.prev_io_pressure = io_pressure_fds.readline().split()[4].split("=")[1]
|
||||
self.prev_memory_pressure = memory_pressure_fds.readline().split()[4].split("=")[1]
|
||||
self.prev_pressure_time = time.time()
|
||||
self.check_pressure = True
|
||||
except:
|
||||
bb.note("The /proc/pressure files can't be read. Continuing build without monitoring pressure")
|
||||
self.check_pressure = False
|
||||
else:
|
||||
self.check_pressure = False
|
||||
|
||||
def exceeds_max_pressure(self):
|
||||
"""
|
||||
Monitor the difference in total pressure at least once per second, if
|
||||
BB_PRESSURE_MAX_{CPU|IO|MEMORY} are set, return True if above threshold.
|
||||
"""
|
||||
if self.check_pressure:
|
||||
with open("/proc/pressure/cpu") as cpu_pressure_fds, \
|
||||
open("/proc/pressure/io") as io_pressure_fds, \
|
||||
open("/proc/pressure/memory") as memory_pressure_fds:
|
||||
# extract "total" from /proc/pressure/{cpu|io}
|
||||
curr_cpu_pressure = cpu_pressure_fds.readline().split()[4].split("=")[1]
|
||||
curr_io_pressure = io_pressure_fds.readline().split()[4].split("=")[1]
|
||||
curr_memory_pressure = memory_pressure_fds.readline().split()[4].split("=")[1]
|
||||
now = time.time()
|
||||
tdiff = now - self.prev_pressure_time
|
||||
psi_accumulation_interval = 1.0
|
||||
cpu_pressure = (float(curr_cpu_pressure) - float(self.prev_cpu_pressure)) / tdiff
|
||||
io_pressure = (float(curr_io_pressure) - float(self.prev_io_pressure)) / tdiff
|
||||
memory_pressure = (float(curr_memory_pressure) - float(self.prev_memory_pressure)) / tdiff
|
||||
exceeds_cpu_pressure = self.rq.max_cpu_pressure and cpu_pressure > self.rq.max_cpu_pressure
|
||||
exceeds_io_pressure = self.rq.max_io_pressure and io_pressure > self.rq.max_io_pressure
|
||||
exceeds_memory_pressure = self.rq.max_memory_pressure and memory_pressure > self.rq.max_memory_pressure
|
||||
|
||||
if tdiff > psi_accumulation_interval:
|
||||
self.prev_cpu_pressure = curr_cpu_pressure
|
||||
self.prev_io_pressure = curr_io_pressure
|
||||
self.prev_memory_pressure = curr_memory_pressure
|
||||
self.prev_pressure_time = now
|
||||
|
||||
pressure_state = (exceeds_cpu_pressure, exceeds_io_pressure, exceeds_memory_pressure)
|
||||
pressure_values = (round(cpu_pressure,1), self.rq.max_cpu_pressure, round(io_pressure,1), self.rq.max_io_pressure, round(memory_pressure,1), self.rq.max_memory_pressure)
|
||||
if hasattr(self, "pressure_state") and pressure_state != self.pressure_state:
|
||||
bb.note("Pressure status changed to CPU: %s, IO: %s, Mem: %s (CPU: %s/%s, IO: %s/%s, Mem: %s/%s) - using %s/%s bitbake threads" % (pressure_state + pressure_values + (len(self.rq.runq_running.difference(self.rq.runq_complete)), self.rq.number_tasks)))
|
||||
self.pressure_state = pressure_state
|
||||
return (exceeds_cpu_pressure or exceeds_io_pressure or exceeds_memory_pressure)
|
||||
return False
|
||||
|
||||
def next_buildable_task(self):
|
||||
"""
|
||||
@@ -234,12 +161,6 @@ class RunQueueScheduler(object):
|
||||
if not buildable:
|
||||
return None
|
||||
|
||||
# Bitbake requires that at least one task be active. Only check for pressure if
|
||||
# this is the case, otherwise the pressure limitation could result in no tasks
|
||||
# being active and no new tasks started thereby, at times, breaking the scheduler.
|
||||
if self.rq.stats.active and self.exceeds_max_pressure():
|
||||
return None
|
||||
|
||||
# Filter out tasks that have a max number of threads that have been exceeded
|
||||
skip_buildable = {}
|
||||
for running in self.rq.runq_running.difference(self.rq.runq_complete):
|
||||
@@ -453,9 +374,10 @@ class RunQueueData:
|
||||
self.rq = rq
|
||||
self.warn_multi_bb = False
|
||||
|
||||
self.multi_provider_allowed = (cfgData.getVar("BB_MULTI_PROVIDER_ALLOWED") or "").split()
|
||||
self.setscene_ignore_tasks = get_setscene_enforce_ignore_tasks(cfgData, targets)
|
||||
self.setscene_ignore_tasks_checked = False
|
||||
self.stampwhitelist = cfgData.getVar("BB_STAMP_WHITELIST") or ""
|
||||
self.multi_provider_whitelist = (cfgData.getVar("MULTI_PROVIDER_WHITELIST") or "").split()
|
||||
self.setscenewhitelist = get_setscene_enforce_whitelist(cfgData, targets)
|
||||
self.setscenewhitelist_checked = False
|
||||
self.setscene_enforce = (cfgData.getVar('BB_SETSCENE_ENFORCE') == "1")
|
||||
self.init_progress_reporter = bb.progress.DummyMultiStageProcessProgressReporter()
|
||||
|
||||
@@ -553,7 +475,7 @@ class RunQueueData:
|
||||
msgs.append(" Task %s (dependent Tasks %s)\n" % (dep, self.runq_depends_names(self.runtaskentries[dep].depends)))
|
||||
msgs.append("\n")
|
||||
if len(valid_chains) > 10:
|
||||
msgs.append("Halted dependency loops search after 10 matches.\n")
|
||||
msgs.append("Aborted dependency loops search after 10 matches.\n")
|
||||
raise TooManyLoops
|
||||
continue
|
||||
scan = False
|
||||
@@ -614,7 +536,7 @@ class RunQueueData:
|
||||
next_points.append(revdep)
|
||||
task_done[revdep] = True
|
||||
endpoints = next_points
|
||||
if not next_points:
|
||||
if len(next_points) == 0:
|
||||
break
|
||||
|
||||
# Circular dependency sanity check
|
||||
@@ -656,7 +578,7 @@ class RunQueueData:
|
||||
|
||||
found = False
|
||||
for mc in self.taskData:
|
||||
if taskData[mc].taskentries:
|
||||
if len(taskData[mc].taskentries) > 0:
|
||||
found = True
|
||||
break
|
||||
if not found:
|
||||
@@ -840,7 +762,7 @@ class RunQueueData:
|
||||
# Find the dependency chain endpoints
|
||||
endpoints = set()
|
||||
for tid in self.runtaskentries:
|
||||
if not deps[tid]:
|
||||
if len(deps[tid]) == 0:
|
||||
endpoints.add(tid)
|
||||
# Iterate the chains collating dependencies
|
||||
while endpoints:
|
||||
@@ -851,11 +773,11 @@ class RunQueueData:
|
||||
cumulativedeps[dep].update(cumulativedeps[tid])
|
||||
if tid in deps[dep]:
|
||||
deps[dep].remove(tid)
|
||||
if not deps[dep]:
|
||||
if len(deps[dep]) == 0:
|
||||
next.add(dep)
|
||||
endpoints = next
|
||||
#for tid in deps:
|
||||
# if deps[tid]:
|
||||
# if len(deps[tid]) != 0:
|
||||
# bb.warn("Sanity test failure, dependencies left for %s (%s)" % (tid, deps[tid]))
|
||||
|
||||
# Loop here since recrdeptasks can depend upon other recrdeptasks and we have to
|
||||
@@ -993,37 +915,39 @@ class RunQueueData:
|
||||
#
|
||||
# Once all active tasks are marked, prune the ones we don't need.
|
||||
|
||||
delcount = {}
|
||||
for tid in list(self.runtaskentries.keys()):
|
||||
if tid not in runq_build:
|
||||
delcount[tid] = self.runtaskentries[tid]
|
||||
del self.runtaskentries[tid]
|
||||
|
||||
# Handle --runall
|
||||
if self.cooker.configuration.runall:
|
||||
# re-run the mark_active and then drop unused tasks from new list
|
||||
reduced_tasklist = set(self.runtaskentries.keys())
|
||||
for tid in list(self.runtaskentries.keys()):
|
||||
if tid not in runq_build:
|
||||
reduced_tasklist.remove(tid)
|
||||
runq_build = {}
|
||||
|
||||
for task in self.cooker.configuration.runall:
|
||||
if not task.startswith("do_"):
|
||||
task = "do_{0}".format(task)
|
||||
runall_tids = set()
|
||||
for tid in reduced_tasklist:
|
||||
for tid in list(self.runtaskentries):
|
||||
wanttid = "{0}:{1}".format(fn_from_tid(tid), task)
|
||||
if wanttid in delcount:
|
||||
self.runtaskentries[wanttid] = delcount[wanttid]
|
||||
if wanttid in self.runtaskentries:
|
||||
runall_tids.add(wanttid)
|
||||
|
||||
for tid in list(runall_tids):
|
||||
mark_active(tid, 1)
|
||||
mark_active(tid,1)
|
||||
if self.cooker.configuration.force:
|
||||
invalidate_task(tid, False)
|
||||
|
||||
delcount = set()
|
||||
for tid in list(self.runtaskentries.keys()):
|
||||
if tid not in runq_build:
|
||||
delcount.add(tid)
|
||||
del self.runtaskentries[tid]
|
||||
for tid in list(self.runtaskentries.keys()):
|
||||
if tid not in runq_build:
|
||||
delcount[tid] = self.runtaskentries[tid]
|
||||
del self.runtaskentries[tid]
|
||||
|
||||
if self.cooker.configuration.runall:
|
||||
if not self.runtaskentries:
|
||||
if len(self.runtaskentries) == 0:
|
||||
bb.msg.fatal("RunQueue", "Could not find any tasks with the tasknames %s to run within the recipes of the taskgraphs of the targets %s" % (str(self.cooker.configuration.runall), str(self.targets)))
|
||||
|
||||
self.init_progress_reporter.next_stage()
|
||||
@@ -1036,19 +960,19 @@ class RunQueueData:
|
||||
for task in self.cooker.configuration.runonly:
|
||||
if not task.startswith("do_"):
|
||||
task = "do_{0}".format(task)
|
||||
runonly_tids = [k for k in self.runtaskentries.keys() if taskname_from_tid(k) == task]
|
||||
runonly_tids = { k: v for k, v in self.runtaskentries.items() if taskname_from_tid(k) == task }
|
||||
|
||||
for tid in runonly_tids:
|
||||
mark_active(tid, 1)
|
||||
for tid in list(runonly_tids):
|
||||
mark_active(tid,1)
|
||||
if self.cooker.configuration.force:
|
||||
invalidate_task(tid, False)
|
||||
|
||||
for tid in list(self.runtaskentries.keys()):
|
||||
if tid not in runq_build:
|
||||
delcount.add(tid)
|
||||
delcount[tid] = self.runtaskentries[tid]
|
||||
del self.runtaskentries[tid]
|
||||
|
||||
if not self.runtaskentries:
|
||||
if len(self.runtaskentries) == 0:
|
||||
bb.msg.fatal("RunQueue", "Could not find any tasks with the tasknames %s to run within the taskgraphs of the targets %s" % (str(self.cooker.configuration.runonly), str(self.targets)))
|
||||
|
||||
#
|
||||
@@ -1056,8 +980,8 @@ class RunQueueData:
|
||||
#
|
||||
|
||||
# Check to make sure we still have tasks to run
|
||||
if not self.runtaskentries:
|
||||
if not taskData[''].halt:
|
||||
if len(self.runtaskentries) == 0:
|
||||
if not taskData[''].abort:
|
||||
bb.msg.fatal("RunQueue", "All buildable tasks have been run but the build is incomplete (--continue mode). Errors for the tasks that failed will have been printed above.")
|
||||
else:
|
||||
bb.msg.fatal("RunQueue", "No active tasks and not in --continue mode?! Please report this bug.")
|
||||
@@ -1080,7 +1004,7 @@ class RunQueueData:
|
||||
endpoints = []
|
||||
for tid in self.runtaskentries:
|
||||
revdeps = self.runtaskentries[tid].revdeps
|
||||
if not revdeps:
|
||||
if len(revdeps) == 0:
|
||||
endpoints.append(tid)
|
||||
for dep in revdeps:
|
||||
if dep in self.runtaskentries[tid].depends:
|
||||
@@ -1116,7 +1040,7 @@ class RunQueueData:
|
||||
for prov in prov_list:
|
||||
if len(prov_list[prov]) < 2:
|
||||
continue
|
||||
if prov in self.multi_provider_allowed:
|
||||
if prov in self.multi_provider_whitelist:
|
||||
continue
|
||||
seen_pn = []
|
||||
# If two versions of the same PN are being built its fatal, we don't support it.
|
||||
@@ -1126,12 +1050,12 @@ class RunQueueData:
|
||||
seen_pn.append(pn)
|
||||
else:
|
||||
bb.fatal("Multiple versions of %s are due to be built (%s). Only one version of a given PN should be built in any given build. You likely need to set PREFERRED_VERSION_%s to select the correct version or don't depend on multiple versions." % (pn, " ".join(prov_list[prov]), pn))
|
||||
msgs = ["Multiple .bb files are due to be built which each provide %s:\n %s" % (prov, "\n ".join(prov_list[prov]))]
|
||||
msg = "Multiple .bb files are due to be built which each provide %s:\n %s" % (prov, "\n ".join(prov_list[prov]))
|
||||
#
|
||||
# Construct a list of things which uniquely depend on each provider
|
||||
# since this may help the user figure out which dependency is triggering this warning
|
||||
#
|
||||
msgs.append("\nA list of tasks depending on these providers is shown and may help explain where the dependency comes from.")
|
||||
msg += "\nA list of tasks depending on these providers is shown and may help explain where the dependency comes from."
|
||||
deplist = {}
|
||||
commondeps = None
|
||||
for provfn in prov_list[prov]:
|
||||
@@ -1151,12 +1075,12 @@ class RunQueueData:
|
||||
commondeps &= deps
|
||||
deplist[provfn] = deps
|
||||
for provfn in deplist:
|
||||
msgs.append("\n%s has unique dependees:\n %s" % (provfn, "\n ".join(deplist[provfn] - commondeps)))
|
||||
msg += "\n%s has unique dependees:\n %s" % (provfn, "\n ".join(deplist[provfn] - commondeps))
|
||||
#
|
||||
# Construct a list of provides and runtime providers for each recipe
|
||||
# (rprovides has to cover RPROVIDES, PACKAGES, PACKAGES_DYNAMIC)
|
||||
#
|
||||
msgs.append("\nIt could be that one recipe provides something the other doesn't and should. The following provider and runtime provider differences may be helpful.")
|
||||
msg += "\nIt could be that one recipe provides something the other doesn't and should. The following provider and runtime provider differences may be helpful."
|
||||
provide_results = {}
|
||||
rprovide_results = {}
|
||||
commonprovs = None
|
||||
@@ -1183,18 +1107,29 @@ class RunQueueData:
|
||||
else:
|
||||
commonrprovs &= rprovides
|
||||
rprovide_results[provfn] = rprovides
|
||||
#msgs.append("\nCommon provides:\n %s" % ("\n ".join(commonprovs)))
|
||||
#msgs.append("\nCommon rprovides:\n %s" % ("\n ".join(commonrprovs)))
|
||||
#msg += "\nCommon provides:\n %s" % ("\n ".join(commonprovs))
|
||||
#msg += "\nCommon rprovides:\n %s" % ("\n ".join(commonrprovs))
|
||||
for provfn in prov_list[prov]:
|
||||
msgs.append("\n%s has unique provides:\n %s" % (provfn, "\n ".join(provide_results[provfn] - commonprovs)))
|
||||
msgs.append("\n%s has unique rprovides:\n %s" % (provfn, "\n ".join(rprovide_results[provfn] - commonrprovs)))
|
||||
msg += "\n%s has unique provides:\n %s" % (provfn, "\n ".join(provide_results[provfn] - commonprovs))
|
||||
msg += "\n%s has unique rprovides:\n %s" % (provfn, "\n ".join(rprovide_results[provfn] - commonrprovs))
|
||||
|
||||
if self.warn_multi_bb:
|
||||
logger.verbnote("".join(msgs))
|
||||
logger.verbnote(msg)
|
||||
else:
|
||||
logger.error("".join(msgs))
|
||||
logger.error(msg)
|
||||
|
||||
self.init_progress_reporter.next_stage()
|
||||
|
||||
# Create a whitelist usable by the stamp checks
|
||||
self.stampfnwhitelist = {}
|
||||
for mc in self.taskData:
|
||||
self.stampfnwhitelist[mc] = []
|
||||
for entry in self.stampwhitelist.split():
|
||||
if entry not in self.taskData[mc].build_targets:
|
||||
continue
|
||||
fn = self.taskData.build_targets[entry][0]
|
||||
self.stampfnwhitelist[mc].append(fn)
|
||||
|
||||
self.init_progress_reporter.next_stage()
|
||||
|
||||
# Iterate over the task list looking for tasks with a 'setscene' function
|
||||
@@ -1242,9 +1177,9 @@ class RunQueueData:
|
||||
# Iterate over the task list and call into the siggen code
|
||||
dealtwith = set()
|
||||
todeal = set(self.runtaskentries)
|
||||
while todeal:
|
||||
while len(todeal) > 0:
|
||||
for tid in todeal.copy():
|
||||
if not (self.runtaskentries[tid].depends - dealtwith):
|
||||
if len(self.runtaskentries[tid].depends - dealtwith) == 0:
|
||||
dealtwith.add(tid)
|
||||
todeal.remove(tid)
|
||||
self.prepare_task_hash(tid)
|
||||
@@ -1283,6 +1218,7 @@ class RunQueue:
|
||||
self.cfgData = cfgData
|
||||
self.rqdata = RunQueueData(self, cooker, cfgData, dataCaches, taskData, targets)
|
||||
|
||||
self.stamppolicy = cfgData.getVar("BB_STAMP_POLICY") or "perfile"
|
||||
self.hashvalidate = cfgData.getVar("BB_HASHCHECK_FUNCTION") or None
|
||||
self.depvalidate = cfgData.getVar("BB_SETSCENE_DEPVALID") or None
|
||||
|
||||
@@ -1411,6 +1347,14 @@ class RunQueue:
|
||||
if taskname is None:
|
||||
taskname = tn
|
||||
|
||||
if self.stamppolicy == "perfile":
|
||||
fulldeptree = False
|
||||
else:
|
||||
fulldeptree = True
|
||||
stampwhitelist = []
|
||||
if self.stamppolicy == "whitelist":
|
||||
stampwhitelist = self.rqdata.stampfnwhitelist[mc]
|
||||
|
||||
stampfile = bb.build.stampfile(taskname, self.rqdata.dataCaches[mc], taskfn)
|
||||
|
||||
# If the stamp is missing, it's not current
|
||||
@@ -1442,7 +1386,7 @@ class RunQueue:
|
||||
continue
|
||||
if t3 and t3 > t2:
|
||||
continue
|
||||
if fn == fn2:
|
||||
if fn == fn2 or (fulldeptree and fn2 not in stampwhitelist):
|
||||
if not t2:
|
||||
logger.debug2('Stampfile %s does not exist', stampfile2)
|
||||
iscurrent = False
|
||||
@@ -1492,7 +1436,7 @@ class RunQueue:
|
||||
"""
|
||||
Run the tasks in a queue prepared by rqdata.prepare()
|
||||
Upon failure, optionally try to recover the build using any alternate providers
|
||||
(if the halt on failure configuration option isn't set)
|
||||
(if the abort on failure configuration option isn't set)
|
||||
"""
|
||||
|
||||
retval = True
|
||||
@@ -1545,10 +1489,10 @@ class RunQueue:
|
||||
self.rqexe = RunQueueExecute(self)
|
||||
|
||||
# If we don't have any setscene functions, skip execution
|
||||
if not self.rqdata.runq_setscene_tids:
|
||||
if len(self.rqdata.runq_setscene_tids) == 0:
|
||||
logger.info('No setscene tasks')
|
||||
for tid in self.rqdata.runtaskentries:
|
||||
if not self.rqdata.runtaskentries[tid].depends:
|
||||
if len(self.rqdata.runtaskentries[tid].depends) == 0:
|
||||
self.rqexe.setbuildable(tid)
|
||||
self.rqexe.tasks_notcovered.add(tid)
|
||||
self.rqexe.sqdone = True
|
||||
@@ -1742,7 +1686,7 @@ class RunQueue:
|
||||
(mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
|
||||
pn = self.rqdata.dataCaches[mc].pkg_fn[taskfn]
|
||||
h = self.rqdata.runtaskentries[tid].hash
|
||||
matches = bb.siggen.find_siginfo(pn, taskname, [], self.cooker.databuilder.mcdata[mc])
|
||||
matches = bb.siggen.find_siginfo(pn, taskname, [], self.cfgData)
|
||||
match = None
|
||||
for m in matches:
|
||||
if h in m:
|
||||
@@ -1767,9 +1711,6 @@ class RunQueueExecute:
|
||||
|
||||
self.number_tasks = int(self.cfgData.getVar("BB_NUMBER_THREADS") or 1)
|
||||
self.scheduler = self.cfgData.getVar("BB_SCHEDULER") or "speed"
|
||||
self.max_cpu_pressure = self.cfgData.getVar("BB_PRESSURE_MAX_CPU")
|
||||
self.max_io_pressure = self.cfgData.getVar("BB_PRESSURE_MAX_IO")
|
||||
self.max_memory_pressure = self.cfgData.getVar("BB_PRESSURE_MAX_MEMORY")
|
||||
|
||||
self.sq_buildable = set()
|
||||
self.sq_running = set()
|
||||
@@ -1794,7 +1735,8 @@ class RunQueueExecute:
|
||||
self.holdoff_need_update = True
|
||||
self.sqdone = False
|
||||
|
||||
self.stats = RunQueueStats(len(self.rqdata.runtaskentries), len(self.rqdata.runq_setscene_tids))
|
||||
self.stats = RunQueueStats(len(self.rqdata.runtaskentries))
|
||||
self.sq_stats = RunQueueStats(len(self.rqdata.runq_setscene_tids))
|
||||
|
||||
for mc in rq.worker:
|
||||
rq.worker[mc].pipe.setrunqueueexec(self)
|
||||
@@ -1804,29 +1746,6 @@ class RunQueueExecute:
|
||||
if self.number_tasks <= 0:
|
||||
bb.fatal("Invalid BB_NUMBER_THREADS %s" % self.number_tasks)
|
||||
|
||||
lower_limit = 1.0
|
||||
upper_limit = 1000000.0
|
||||
if self.max_cpu_pressure:
|
||||
self.max_cpu_pressure = float(self.max_cpu_pressure)
|
||||
if self.max_cpu_pressure < lower_limit:
|
||||
bb.fatal("Invalid BB_PRESSURE_MAX_CPU %s, minimum value is %s." % (self.max_cpu_pressure, lower_limit))
|
||||
if self.max_cpu_pressure > upper_limit:
|
||||
bb.warn("Your build will be largely unregulated since BB_PRESSURE_MAX_CPU is set to %s. It is very unlikely that such high pressure will be experienced." % (self.max_cpu_pressure))
|
||||
|
||||
if self.max_io_pressure:
|
||||
self.max_io_pressure = float(self.max_io_pressure)
|
||||
if self.max_io_pressure < lower_limit:
|
||||
bb.fatal("Invalid BB_PRESSURE_MAX_IO %s, minimum value is %s." % (self.max_io_pressure, lower_limit))
|
||||
if self.max_io_pressure > upper_limit:
|
||||
bb.warn("Your build will be largely unregulated since BB_PRESSURE_MAX_IO is set to %s. It is very unlikely that such high pressure will be experienced." % (self.max_io_pressure))
|
||||
|
||||
if self.max_memory_pressure:
|
||||
self.max_memory_pressure = float(self.max_memory_pressure)
|
||||
if self.max_memory_pressure < lower_limit:
|
||||
bb.fatal("Invalid BB_PRESSURE_MAX_MEMORY %s, minimum value is %s." % (self.max_memory_pressure, lower_limit))
|
||||
if self.max_memory_pressure > upper_limit:
|
||||
bb.warn("Your build will be largely unregulated since BB_PRESSURE_MAX_MEMORY is set to %s. It is very unlikely that such high pressure will be experienced." % (self.max_io_pressure))
|
||||
|
||||
# List of setscene tasks which we've covered
|
||||
self.scenequeue_covered = set()
|
||||
# List of tasks which are covered (including setscene ones)
|
||||
@@ -1851,7 +1770,7 @@ class RunQueueExecute:
|
||||
bb.fatal("Invalid scheduler '%s'. Available schedulers: %s" %
|
||||
(self.scheduler, ", ".join(obj.name for obj in schedulers)))
|
||||
|
||||
#if self.rqdata.runq_setscene_tids:
|
||||
#if len(self.rqdata.runq_setscene_tids) > 0:
|
||||
self.sqdata = SQData()
|
||||
build_scenequeue_data(self.sqdata, self.rqdata, self.rq, self.cooker, self.stampcache, self)
|
||||
|
||||
@@ -1868,7 +1787,6 @@ class RunQueueExecute:
|
||||
else:
|
||||
self.sq_task_complete(task)
|
||||
self.sq_live.remove(task)
|
||||
self.stats.updateActiveSetscene(len(self.sq_live))
|
||||
else:
|
||||
if status != 0:
|
||||
self.task_fail(task, status, fakerootlog=fakerootlog)
|
||||
@@ -1892,7 +1810,7 @@ class RunQueueExecute:
|
||||
# worker must have died?
|
||||
pass
|
||||
|
||||
if self.failed_tids:
|
||||
if len(self.failed_tids) != 0:
|
||||
self.rq.state = runQueueFailed
|
||||
return
|
||||
|
||||
@@ -1902,13 +1820,13 @@ class RunQueueExecute:
|
||||
def finish(self):
|
||||
self.rq.state = runQueueCleanUp
|
||||
|
||||
active = self.stats.active + len(self.sq_live)
|
||||
active = self.stats.active + self.sq_stats.active
|
||||
if active > 0:
|
||||
bb.event.fire(runQueueExitWait(active), self.cfgData)
|
||||
self.rq.read_workers()
|
||||
return self.rq.active_fds()
|
||||
|
||||
if self.failed_tids:
|
||||
if len(self.failed_tids) != 0:
|
||||
self.rq.state = runQueueFailed
|
||||
return True
|
||||
|
||||
@@ -1935,7 +1853,7 @@ class RunQueueExecute:
|
||||
return valid
|
||||
|
||||
def can_start_task(self):
|
||||
active = self.stats.active + len(self.sq_live)
|
||||
active = self.stats.active + self.sq_stats.active
|
||||
can_start = active < self.number_tasks
|
||||
return can_start
|
||||
|
||||
@@ -1986,20 +1904,6 @@ class RunQueueExecute:
|
||||
self.setbuildable(revdep)
|
||||
logger.debug("Marking task %s as buildable", revdep)
|
||||
|
||||
found = None
|
||||
for t in sorted(self.sq_deferred.copy()):
|
||||
if self.sq_deferred[t] == task:
|
||||
# Allow the next deferred task to run. Any other deferred tasks should be deferred after that task.
|
||||
# We shouldn't allow all to run at once as it is prone to races.
|
||||
if not found:
|
||||
bb.debug(1, "Deferred task %s now buildable" % t)
|
||||
del self.sq_deferred[t]
|
||||
update_scenequeue_data([t], self.sqdata, self.rqdata, self.rq, self.cooker, self.stampcache, self, summary=False)
|
||||
found = t
|
||||
else:
|
||||
bb.debug(1, "Deferring %s after %s" % (t, found))
|
||||
self.sq_deferred[t] = found
|
||||
|
||||
def task_complete(self, task):
|
||||
self.stats.taskCompleted()
|
||||
bb.event.fire(runQueueTaskCompleted(task, self.stats, self.rq), self.cfgData)
|
||||
@@ -2014,7 +1918,7 @@ class RunQueueExecute:
|
||||
self.stats.taskFailed()
|
||||
self.failed_tids.append(task)
|
||||
|
||||
fakeroot_log = []
|
||||
fakeroot_log = ""
|
||||
if fakerootlog and os.path.exists(fakerootlog):
|
||||
with open(fakerootlog) as fakeroot_log_file:
|
||||
fakeroot_failed = False
|
||||
@@ -2024,14 +1928,14 @@ class RunQueueExecute:
|
||||
fakeroot_failed = True
|
||||
if 'doing new pid setup and server start' in line:
|
||||
break
|
||||
fakeroot_log.append(line)
|
||||
fakeroot_log = line + fakeroot_log
|
||||
|
||||
if not fakeroot_failed:
|
||||
fakeroot_log = []
|
||||
fakeroot_log = None
|
||||
|
||||
bb.event.fire(runQueueTaskFailed(task, self.stats, exitcode, self.rq, fakeroot_log=("".join(fakeroot_log) or None)), self.cfgData)
|
||||
bb.event.fire(runQueueTaskFailed(task, self.stats, exitcode, self.rq, fakeroot_log=fakeroot_log), self.cfgData)
|
||||
|
||||
if self.rqdata.taskData[''].halt:
|
||||
if self.rqdata.taskData[''].abort:
|
||||
self.rq.state = runQueueCleanUp
|
||||
|
||||
def task_skip(self, task, reason):
|
||||
@@ -2046,7 +1950,7 @@ class RunQueueExecute:
|
||||
err = False
|
||||
if not self.sqdone:
|
||||
logger.debug('We could skip tasks %s', "\n".join(sorted(self.scenequeue_covered)))
|
||||
completeevent = sceneQueueComplete(self.stats, self.rq)
|
||||
completeevent = sceneQueueComplete(self.sq_stats, self.rq)
|
||||
bb.event.fire(completeevent, self.cfgData)
|
||||
if self.sq_deferred:
|
||||
logger.error("Scenequeue had deferred entries: %s" % pprint.pformat(self.sq_deferred))
|
||||
@@ -2080,7 +1984,7 @@ class RunQueueExecute:
|
||||
if x not in self.tasks_scenequeue_done:
|
||||
logger.error("Task %s was never processed by the setscene code" % x)
|
||||
err = True
|
||||
if not self.rqdata.runtaskentries[x].depends and x not in self.runq_buildable:
|
||||
if len(self.rqdata.runtaskentries[x].depends) == 0 and x not in self.runq_buildable:
|
||||
logger.error("Task %s was never marked as buildable by the setscene code" % x)
|
||||
err = True
|
||||
return err
|
||||
@@ -2104,7 +2008,7 @@ class RunQueueExecute:
|
||||
# Find the next setscene to run
|
||||
for nexttask in self.sorted_setscene_tids:
|
||||
if nexttask in self.sq_buildable and nexttask not in self.sq_running and self.sqdata.stamps[nexttask] not in self.build_stamps.values():
|
||||
if nexttask not in self.sqdata.unskippable and self.sqdata.sq_revdeps[nexttask] and self.sqdata.sq_revdeps[nexttask].issubset(self.scenequeue_covered) and self.check_dependencies(nexttask, self.sqdata.sq_revdeps[nexttask]):
|
||||
if nexttask not in self.sqdata.unskippable and len(self.sqdata.sq_revdeps[nexttask]) > 0 and self.sqdata.sq_revdeps[nexttask].issubset(self.scenequeue_covered) and self.check_dependencies(nexttask, self.sqdata.sq_revdeps[nexttask]):
|
||||
if nexttask not in self.rqdata.target_tids:
|
||||
logger.debug2("Skipping setscene for task %s" % nexttask)
|
||||
self.sq_task_skip(nexttask)
|
||||
@@ -2157,7 +2061,7 @@ class RunQueueExecute:
|
||||
self.sq_task_failoutright(task)
|
||||
return True
|
||||
|
||||
startevent = sceneQueueTaskStarted(task, self.stats, self.rq)
|
||||
startevent = sceneQueueTaskStarted(task, self.sq_stats, self.rq)
|
||||
bb.event.fire(startevent, self.cfgData)
|
||||
|
||||
taskdepdata = self.sq_build_taskdepdata(task)
|
||||
@@ -2178,7 +2082,7 @@ class RunQueueExecute:
|
||||
self.build_stamps2.append(self.build_stamps[task])
|
||||
self.sq_running.add(task)
|
||||
self.sq_live.add(task)
|
||||
self.stats.updateActiveSetscene(len(self.sq_live))
|
||||
self.sq_stats.taskActive()
|
||||
if self.can_start_task():
|
||||
return True
|
||||
|
||||
@@ -2209,9 +2113,9 @@ class RunQueueExecute:
|
||||
if task is not None:
|
||||
(mc, fn, taskname, taskfn) = split_tid_mcfn(task)
|
||||
|
||||
if self.rqdata.setscene_ignore_tasks is not None:
|
||||
if self.check_setscene_ignore_tasks(task):
|
||||
self.task_fail(task, "setscene ignore_tasks")
|
||||
if self.rqdata.setscenewhitelist is not None:
|
||||
if self.check_setscenewhitelist(task):
|
||||
self.task_fail(task, "setscene whitelist")
|
||||
return True
|
||||
|
||||
if task in self.tasks_covered:
|
||||
@@ -2268,18 +2172,18 @@ class RunQueueExecute:
|
||||
if self.can_start_task():
|
||||
return True
|
||||
|
||||
if self.stats.active > 0 or self.sq_live:
|
||||
if self.stats.active > 0 or self.sq_stats.active > 0:
|
||||
self.rq.read_workers()
|
||||
return self.rq.active_fds()
|
||||
|
||||
# No more tasks can be run. If we have deferred setscene tasks we should run them.
|
||||
if self.sq_deferred:
|
||||
deferred_tid = list(self.sq_deferred.keys())[0]
|
||||
blocking_tid = self.sq_deferred.pop(deferred_tid)
|
||||
logger.warning("Runqeueue deadlocked on deferred tasks, forcing task %s blocked by %s" % (deferred_tid, blocking_tid))
|
||||
tid = self.sq_deferred.pop(list(self.sq_deferred.keys())[0])
|
||||
logger.warning("Runqeueue deadlocked on deferred tasks, forcing task %s" % tid)
|
||||
self.sq_task_failoutright(tid)
|
||||
return True
|
||||
|
||||
if self.failed_tids:
|
||||
if len(self.failed_tids) != 0:
|
||||
self.rq.state = runQueueFailed
|
||||
return True
|
||||
|
||||
@@ -2358,7 +2262,7 @@ class RunQueueExecute:
|
||||
covered.intersection_update(self.tasks_scenequeue_done)
|
||||
|
||||
for tid in notcovered | covered:
|
||||
if not self.rqdata.runtaskentries[tid].depends:
|
||||
if len(self.rqdata.runtaskentries[tid].depends) == 0:
|
||||
self.setbuildable(tid)
|
||||
elif self.rqdata.runtaskentries[tid].depends.issubset(self.runq_complete):
|
||||
self.setbuildable(tid)
|
||||
@@ -2400,9 +2304,6 @@ class RunQueueExecute:
|
||||
self.rqdata.runtaskentries[hashtid].unihash = unihash
|
||||
bb.parse.siggen.set_unihash(hashtid, unihash)
|
||||
toprocess.add(hashtid)
|
||||
if torehash:
|
||||
# Need to save after set_unihash above
|
||||
bb.parse.siggen.save_unitaskhashes()
|
||||
|
||||
# Work out all tasks which depend upon these
|
||||
total = set()
|
||||
@@ -2420,7 +2321,7 @@ class RunQueueExecute:
|
||||
# Now iterate those tasks in dependency order to regenerate their taskhash/unihash
|
||||
next = set()
|
||||
for p in total:
|
||||
if not self.rqdata.runtaskentries[p].depends:
|
||||
if len(self.rqdata.runtaskentries[p].depends) == 0:
|
||||
next.add(p)
|
||||
elif self.rqdata.runtaskentries[p].depends.isdisjoint(total):
|
||||
next.add(p)
|
||||
@@ -2430,7 +2331,7 @@ class RunQueueExecute:
|
||||
current = next.copy()
|
||||
next = set()
|
||||
for tid in current:
|
||||
if self.rqdata.runtaskentries[p].depends and not self.rqdata.runtaskentries[tid].depends.isdisjoint(total):
|
||||
if len(self.rqdata.runtaskentries[p].depends) and not self.rqdata.runtaskentries[tid].depends.isdisjoint(total):
|
||||
continue
|
||||
orighash = self.rqdata.runtaskentries[tid].hash
|
||||
dc = bb.parse.siggen.get_data_caches(self.rqdata.dataCaches, mc_from_tid(tid))
|
||||
@@ -2496,7 +2397,7 @@ class RunQueueExecute:
|
||||
self.tasks_scenequeue_done.remove(tid)
|
||||
for dep in self.sqdata.sq_covered_tasks[tid]:
|
||||
if dep in self.runq_complete and dep not in self.runq_tasksrun:
|
||||
bb.error("Task %s marked as completed but now needing to rerun? Halting build." % dep)
|
||||
bb.error("Task %s marked as completed but now needing to rerun? Aborting build." % dep)
|
||||
self.failed_tids.append(tid)
|
||||
self.rq.state = runQueueCleanUp
|
||||
return
|
||||
@@ -2509,6 +2410,17 @@ class RunQueueExecute:
|
||||
self.sq_buildable.remove(tid)
|
||||
if tid in self.sq_running:
|
||||
self.sq_running.remove(tid)
|
||||
harddepfail = False
|
||||
for t in self.sqdata.sq_harddeps:
|
||||
if tid in self.sqdata.sq_harddeps[t] and t in self.scenequeue_notcovered:
|
||||
harddepfail = True
|
||||
break
|
||||
if not harddepfail and self.sqdata.sq_revdeps[tid].issubset(self.scenequeue_covered | self.scenequeue_notcovered):
|
||||
if tid not in self.sq_buildable:
|
||||
self.sq_buildable.add(tid)
|
||||
if len(self.sqdata.sq_revdeps[tid]) == 0:
|
||||
self.sq_buildable.add(tid)
|
||||
|
||||
if tid in self.sqdata.outrightfail:
|
||||
self.sqdata.outrightfail.remove(tid)
|
||||
if tid in self.scenequeue_notcovered:
|
||||
@@ -2527,43 +2439,19 @@ class RunQueueExecute:
|
||||
if tid in self.build_stamps:
|
||||
del self.build_stamps[tid]
|
||||
|
||||
update_tasks.append(tid)
|
||||
update_tasks.append((tid, harddepfail, tid in self.sqdata.valid))
|
||||
|
||||
update_tasks2 = []
|
||||
for tid in update_tasks:
|
||||
harddepfail = False
|
||||
for t in self.sqdata.sq_harddeps:
|
||||
if tid in self.sqdata.sq_harddeps[t] and t in self.scenequeue_notcovered:
|
||||
harddepfail = True
|
||||
break
|
||||
if not harddepfail and self.sqdata.sq_revdeps[tid].issubset(self.scenequeue_covered | self.scenequeue_notcovered):
|
||||
if tid not in self.sq_buildable:
|
||||
self.sq_buildable.add(tid)
|
||||
if not self.sqdata.sq_revdeps[tid]:
|
||||
self.sq_buildable.add(tid)
|
||||
|
||||
update_tasks2.append((tid, harddepfail, tid in self.sqdata.valid))
|
||||
|
||||
if update_tasks2:
|
||||
if update_tasks:
|
||||
self.sqdone = False
|
||||
for mc in sorted(self.sqdata.multiconfigs):
|
||||
for tid in sorted([t[0] for t in update_tasks2]):
|
||||
if mc_from_tid(tid) != mc:
|
||||
continue
|
||||
h = pending_hash_index(tid, self.rqdata)
|
||||
if h in self.sqdata.hashes and tid != self.sqdata.hashes[h]:
|
||||
self.sq_deferred[tid] = self.sqdata.hashes[h]
|
||||
bb.note("Deferring %s after %s" % (tid, self.sqdata.hashes[h]))
|
||||
update_scenequeue_data([t[0] for t in update_tasks2], self.sqdata, self.rqdata, self.rq, self.cooker, self.stampcache, self, summary=False)
|
||||
update_scenequeue_data([t[0] for t in update_tasks], self.sqdata, self.rqdata, self.rq, self.cooker, self.stampcache, self, summary=False)
|
||||
|
||||
for (tid, harddepfail, origvalid) in update_tasks2:
|
||||
for (tid, harddepfail, origvalid) in update_tasks:
|
||||
if tid in self.sqdata.valid and not origvalid:
|
||||
hashequiv_logger.verbose("Setscene task %s became valid" % tid)
|
||||
if harddepfail:
|
||||
self.sq_task_failoutright(tid)
|
||||
|
||||
if changed:
|
||||
self.stats.updateCovered(len(self.scenequeue_covered), len(self.scenequeue_notcovered))
|
||||
self.holdoff_need_update = True
|
||||
|
||||
def scenequeue_updatecounters(self, task, fail=False):
|
||||
@@ -2597,7 +2485,6 @@ class RunQueueExecute:
|
||||
new.add(dep)
|
||||
next = new
|
||||
|
||||
self.stats.updateCovered(len(self.scenequeue_covered), len(self.scenequeue_notcovered))
|
||||
self.holdoff_need_update = True
|
||||
|
||||
def sq_task_completeoutright(self, task):
|
||||
@@ -2612,20 +2499,22 @@ class RunQueueExecute:
|
||||
self.scenequeue_updatecounters(task)
|
||||
|
||||
def sq_check_taskfail(self, task):
|
||||
if self.rqdata.setscene_ignore_tasks is not None:
|
||||
if self.rqdata.setscenewhitelist is not None:
|
||||
realtask = task.split('_setscene')[0]
|
||||
(mc, fn, taskname, taskfn) = split_tid_mcfn(realtask)
|
||||
pn = self.rqdata.dataCaches[mc].pkg_fn[taskfn]
|
||||
if not check_setscene_enforce_ignore_tasks(pn, taskname, self.rqdata.setscene_ignore_tasks):
|
||||
if not check_setscene_enforce_whitelist(pn, taskname, self.rqdata.setscenewhitelist):
|
||||
logger.error('Task %s.%s failed' % (pn, taskname + "_setscene"))
|
||||
self.rq.state = runQueueCleanUp
|
||||
|
||||
def sq_task_complete(self, task):
|
||||
bb.event.fire(sceneQueueTaskCompleted(task, self.stats, self.rq), self.cfgData)
|
||||
self.sq_stats.taskCompleted()
|
||||
bb.event.fire(sceneQueueTaskCompleted(task, self.sq_stats, self.rq), self.cfgData)
|
||||
self.sq_task_completeoutright(task)
|
||||
|
||||
def sq_task_fail(self, task, result):
|
||||
bb.event.fire(sceneQueueTaskFailed(task, self.stats, result, self), self.cfgData)
|
||||
self.sq_stats.taskFailed()
|
||||
bb.event.fire(sceneQueueTaskFailed(task, self.sq_stats, result, self), self.cfgData)
|
||||
self.scenequeue_notcovered.add(task)
|
||||
self.scenequeue_updatecounters(task, True)
|
||||
self.sq_check_taskfail(task)
|
||||
@@ -2633,6 +2522,8 @@ class RunQueueExecute:
|
||||
def sq_task_failoutright(self, task):
|
||||
self.sq_running.add(task)
|
||||
self.sq_buildable.add(task)
|
||||
self.sq_stats.taskSkipped()
|
||||
self.sq_stats.taskCompleted()
|
||||
self.scenequeue_notcovered.add(task)
|
||||
self.scenequeue_updatecounters(task, True)
|
||||
|
||||
@@ -2640,6 +2531,8 @@ class RunQueueExecute:
|
||||
self.sq_running.add(task)
|
||||
self.sq_buildable.add(task)
|
||||
self.sq_task_completeoutright(task)
|
||||
self.sq_stats.taskSkipped()
|
||||
self.sq_stats.taskCompleted()
|
||||
|
||||
def sq_build_taskdepdata(self, task):
|
||||
def getsetscenedeps(tid):
|
||||
@@ -2679,8 +2572,8 @@ class RunQueueExecute:
|
||||
#bb.note("Task %s: " % task + str(taskdepdata).replace("], ", "],\n"))
|
||||
return taskdepdata
|
||||
|
||||
def check_setscene_ignore_tasks(self, tid):
|
||||
# Check task that is going to run against the ignore tasks list
|
||||
def check_setscenewhitelist(self, tid):
|
||||
# Check task that is going to run against the whitelist
|
||||
(mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
|
||||
# Ignore covered tasks
|
||||
if tid in self.tasks_covered:
|
||||
@@ -2694,15 +2587,14 @@ class RunQueueExecute:
|
||||
return False
|
||||
|
||||
pn = self.rqdata.dataCaches[mc].pkg_fn[taskfn]
|
||||
if not check_setscene_enforce_ignore_tasks(pn, taskname, self.rqdata.setscene_ignore_tasks):
|
||||
if not check_setscene_enforce_whitelist(pn, taskname, self.rqdata.setscenewhitelist):
|
||||
if tid in self.rqdata.runq_setscene_tids:
|
||||
msg = ['Task %s.%s attempted to execute unexpectedly and should have been setscened' % (pn, taskname)]
|
||||
msg = 'Task %s.%s attempted to execute unexpectedly and should have been setscened' % (pn, taskname)
|
||||
else:
|
||||
msg = ['Task %s.%s attempted to execute unexpectedly' % (pn, taskname)]
|
||||
msg = 'Task %s.%s attempted to execute unexpectedly' % (pn, taskname)
|
||||
for t in self.scenequeue_notcovered:
|
||||
msg.append("\nTask %s, unihash %s, taskhash %s" % (t, self.rqdata.runtaskentries[t].unihash, self.rqdata.runtaskentries[t].hash))
|
||||
msg.append('\nThis is usually due to missing setscene tasks. Those missing in this build were: %s' % pprint.pformat(self.scenequeue_notcovered))
|
||||
logger.error("".join(msg))
|
||||
msg = msg + "\nTask %s, unihash %s, taskhash %s" % (t, self.rqdata.runtaskentries[t].unihash, self.rqdata.runtaskentries[t].hash)
|
||||
logger.error(msg + '\nThis is usually due to missing setscene tasks. Those missing in this build were: %s' % pprint.pformat(self.scenequeue_notcovered))
|
||||
return True
|
||||
return False
|
||||
|
||||
@@ -2741,7 +2633,7 @@ def build_scenequeue_data(sqdata, rqdata, rq, cooker, stampcache, sqrq):
|
||||
for tid in rqdata.runtaskentries:
|
||||
sq_revdeps[tid] = copy.copy(rqdata.runtaskentries[tid].revdeps)
|
||||
sq_revdeps_squash[tid] = set()
|
||||
if not sq_revdeps[tid] and tid not in rqdata.runq_setscene_tids:
|
||||
if (len(sq_revdeps[tid]) == 0) and tid not in rqdata.runq_setscene_tids:
|
||||
#bb.warn("Added endpoint %s" % (tid))
|
||||
endpoints[tid] = set()
|
||||
|
||||
@@ -2775,15 +2667,16 @@ def build_scenequeue_data(sqdata, rqdata, rq, cooker, stampcache, sqrq):
|
||||
sq_revdeps_squash[point] = set()
|
||||
if point in rqdata.runq_setscene_tids:
|
||||
sq_revdeps_squash[point] = tasks
|
||||
tasks = set()
|
||||
continue
|
||||
for dep in rqdata.runtaskentries[point].depends:
|
||||
if point in sq_revdeps[dep]:
|
||||
sq_revdeps[dep].remove(point)
|
||||
if tasks:
|
||||
sq_revdeps_squash[dep] |= tasks
|
||||
if not sq_revdeps[dep] and dep not in rqdata.runq_setscene_tids:
|
||||
if len(sq_revdeps[dep]) == 0 and dep not in rqdata.runq_setscene_tids:
|
||||
newendpoints[dep] = task
|
||||
if newendpoints:
|
||||
if len(newendpoints) != 0:
|
||||
process_endpoints(newendpoints)
|
||||
|
||||
process_endpoints(endpoints)
|
||||
@@ -2795,7 +2688,7 @@ def build_scenequeue_data(sqdata, rqdata, rq, cooker, stampcache, sqrq):
|
||||
# Take the build endpoints (no revdeps) and find the sstate tasks they depend upon
|
||||
new = True
|
||||
for tid in rqdata.runtaskentries:
|
||||
if not rqdata.runtaskentries[tid].revdeps:
|
||||
if len(rqdata.runtaskentries[tid].revdeps) == 0:
|
||||
sqdata.unskippable.add(tid)
|
||||
sqdata.unskippable |= sqrq.cantskip
|
||||
while new:
|
||||
@@ -2804,7 +2697,7 @@ def build_scenequeue_data(sqdata, rqdata, rq, cooker, stampcache, sqrq):
|
||||
for tid in sorted(orig, reverse=True):
|
||||
if tid in rqdata.runq_setscene_tids:
|
||||
continue
|
||||
if not rqdata.runtaskentries[tid].depends:
|
||||
if len(rqdata.runtaskentries[tid].depends) == 0:
|
||||
# These are tasks which have no setscene tasks in their chain, need to mark as directly buildable
|
||||
sqrq.setbuildable(tid)
|
||||
sqdata.unskippable |= rqdata.runtaskentries[tid].depends
|
||||
@@ -2819,8 +2712,8 @@ def build_scenequeue_data(sqdata, rqdata, rq, cooker, stampcache, sqrq):
|
||||
for taskcounter, tid in enumerate(rqdata.runtaskentries):
|
||||
if tid in rqdata.runq_setscene_tids:
|
||||
pass
|
||||
elif sq_revdeps_squash[tid]:
|
||||
bb.msg.fatal("RunQueue", "Something went badly wrong during scenequeue generation, halting. Please report this problem.")
|
||||
elif len(sq_revdeps_squash[tid]) != 0:
|
||||
bb.msg.fatal("RunQueue", "Something went badly wrong during scenequeue generation, aborting. Please report this problem.")
|
||||
else:
|
||||
del sq_revdeps_squash[tid]
|
||||
rqdata.init_progress_reporter.update(taskcounter)
|
||||
@@ -2884,7 +2777,7 @@ def build_scenequeue_data(sqdata, rqdata, rq, cooker, stampcache, sqrq):
|
||||
sqdata.multiconfigs = set()
|
||||
for tid in sqdata.sq_revdeps:
|
||||
sqdata.multiconfigs.add(mc_from_tid(tid))
|
||||
if not sqdata.sq_revdeps[tid]:
|
||||
if len(sqdata.sq_revdeps[tid]) == 0:
|
||||
sqrq.sq_buildable.add(tid)
|
||||
|
||||
rqdata.init_progress_reporter.finish()
|
||||
@@ -2893,19 +2786,6 @@ def build_scenequeue_data(sqdata, rqdata, rq, cooker, stampcache, sqrq):
|
||||
sqdata.stamppresent = set()
|
||||
sqdata.valid = set()
|
||||
|
||||
sqdata.hashes = {}
|
||||
sqrq.sq_deferred = {}
|
||||
for mc in sorted(sqdata.multiconfigs):
|
||||
for tid in sorted(sqdata.sq_revdeps):
|
||||
if mc_from_tid(tid) != mc:
|
||||
continue
|
||||
h = pending_hash_index(tid, rqdata)
|
||||
if h not in sqdata.hashes:
|
||||
sqdata.hashes[h] = tid
|
||||
else:
|
||||
sqrq.sq_deferred[tid] = sqdata.hashes[h]
|
||||
bb.debug(1, "Deferring %s after %s" % (tid, sqdata.hashes[h]))
|
||||
|
||||
update_scenequeue_data(sqdata.sq_revdeps, sqdata, rqdata, rq, cooker, stampcache, sqrq, summary=True)
|
||||
|
||||
# Compute a list of 'stale' sstate tasks where the current hash does not match the one
|
||||
@@ -2970,20 +2850,32 @@ def update_scenequeue_data(tids, sqdata, rqdata, rq, cooker, stampcache, sqrq, s
|
||||
|
||||
sqdata.valid |= rq.validate_hashes(tocheck, cooker.data, len(sqdata.stamppresent), False, summary=summary)
|
||||
|
||||
for tid in tids:
|
||||
if tid in sqdata.stamppresent:
|
||||
continue
|
||||
if tid in sqdata.valid:
|
||||
continue
|
||||
if tid in sqdata.noexec:
|
||||
continue
|
||||
if tid in sqrq.scenequeue_covered:
|
||||
continue
|
||||
if tid in sqrq.scenequeue_notcovered:
|
||||
continue
|
||||
if tid in sqrq.sq_deferred:
|
||||
continue
|
||||
sqdata.outrightfail.add(tid)
|
||||
sqdata.hashes = {}
|
||||
sqrq.sq_deferred = {}
|
||||
for mc in sorted(sqdata.multiconfigs):
|
||||
for tid in sorted(sqdata.sq_revdeps):
|
||||
if mc_from_tid(tid) != mc:
|
||||
continue
|
||||
if tid in sqdata.stamppresent:
|
||||
continue
|
||||
if tid in sqdata.valid:
|
||||
continue
|
||||
if tid in sqdata.noexec:
|
||||
continue
|
||||
if tid in sqrq.scenequeue_notcovered:
|
||||
continue
|
||||
if tid in sqrq.scenequeue_covered:
|
||||
continue
|
||||
|
||||
h = pending_hash_index(tid, rqdata)
|
||||
if h not in sqdata.hashes:
|
||||
if tid in tids:
|
||||
sqdata.outrightfail.add(tid)
|
||||
sqdata.hashes[h] = tid
|
||||
else:
|
||||
sqrq.sq_deferred[tid] = sqdata.hashes[h]
|
||||
bb.note("Deferring %s after %s" % (tid, sqdata.hashes[h]))
|
||||
|
||||
|
||||
class TaskFailure(Exception):
|
||||
"""
|
||||
@@ -3113,7 +3005,7 @@ class runQueuePipe():
|
||||
if pipeout:
|
||||
pipeout.close()
|
||||
bb.utils.nonblockingfd(self.input)
|
||||
self.queue = bytearray()
|
||||
self.queue = b""
|
||||
self.d = d
|
||||
self.rq = rq
|
||||
self.rqexec = rqexec
|
||||
@@ -3132,13 +3024,13 @@ class runQueuePipe():

start = len(self.queue)
try:
    self.queue.extend(self.input.read(102400) or b"")
    self.queue = self.queue + (self.input.read(102400) or b"")
except (OSError, IOError) as e:
    if e.errno != errno.EAGAIN:
        raise
end = len(self.queue)
found = True
while found and self.queue:
while found and len(self.queue):
    found = False
    index = self.queue.find(b"</event>")
    while index != -1 and self.queue.startswith(b"<event>"):
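
The loop above drains the worker pipe and splits it on <event>...</event> frames. A small standalone sketch of the same framing; the payloads here are plain strings pickled purely for illustration, not real BitBake events.

import pickle

# Build a toy byte stream in the same "<event>payload</event>" framing the reader scans for.
queue = b""
for payload in ("task started", "task finished"):
    queue += b"<event>" + pickle.dumps(payload) + b"</event>"

index = queue.find(b"</event>")
while index != -1 and queue.startswith(b"<event>"):
    print(pickle.loads(queue[7:index]))   # strip the 7-byte "<event>" prefix
    queue = queue[index + 8:]             # drop the frame including "</event>"
    index = queue.find(b"</event>")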
@@ -3176,16 +3068,16 @@ class runQueuePipe():
|
||||
def close(self):
|
||||
while self.read():
|
||||
continue
|
||||
if self.queue:
|
||||
if len(self.queue) > 0:
|
||||
print("Warning, worker left partial message: %s" % self.queue)
|
||||
self.input.close()
|
||||
|
||||
def get_setscene_enforce_ignore_tasks(d, targets):
|
||||
def get_setscene_enforce_whitelist(d, targets):
|
||||
if d.getVar('BB_SETSCENE_ENFORCE') != '1':
|
||||
return None
|
||||
ignore_tasks = (d.getVar("BB_SETSCENE_ENFORCE_IGNORE_TASKS") or "").split()
|
||||
whitelist = (d.getVar("BB_SETSCENE_ENFORCE_WHITELIST") or "").split()
|
||||
outlist = []
|
||||
for item in ignore_tasks[:]:
|
||||
for item in whitelist[:]:
|
||||
if item.startswith('%:'):
|
||||
for (mc, target, task, fn) in targets:
|
||||
outlist.append(target + ':' + item.split(':')[1])
|
||||
@@ -3193,12 +3085,12 @@ def get_setscene_enforce_ignore_tasks(d, targets):
|
||||
outlist.append(item)
|
||||
return outlist
|
||||
|
||||
def check_setscene_enforce_ignore_tasks(pn, taskname, ignore_tasks):
def check_setscene_enforce_whitelist(pn, taskname, whitelist):
    import fnmatch
    if ignore_tasks is not None:
    if whitelist is not None:
        item = '%s:%s' % (pn, taskname)
        for ignore_tasks in ignore_tasks:
            if fnmatch.fnmatch(item, ignore_tasks):
        for whitelist_item in whitelist:
            if fnmatch.fnmatch(item, whitelist_item):
                return True
        return False
    return True
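
The check above matches a "recipe:task" string against shell-style patterns with fnmatch. A tiny illustration with invented entries of the same shape as the BB_SETSCENE_ENFORCE list:

import fnmatch

# Entries use the same "recipe:task" form; the values themselves are made up.
patterns = ["core-image-minimal:do_rootfs", "*:do_package_write_*"]
item = "busybox:do_package_write_rpm"
print(any(fnmatch.fnmatch(item, p) for p in patterns))   # True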
@@ -26,8 +26,6 @@ import errno
|
||||
import re
|
||||
import datetime
|
||||
import pickle
|
||||
import traceback
|
||||
import gc
|
||||
import bb.server.xmlrpcserver
|
||||
from bb import daemonize
|
||||
from multiprocessing import queues
|
||||
@@ -149,7 +147,7 @@ class ProcessServer():
|
||||
conn = newconnections.pop(-1)
|
||||
fds.append(conn)
|
||||
self.controllersock = conn
|
||||
elif not self.timeout and not ready:
|
||||
elif self.timeout is None and not ready:
|
||||
serverlog("No timeout, exiting.")
|
||||
self.quit = True
|
||||
|
||||
@@ -219,9 +217,8 @@ class ProcessServer():
|
||||
self.command_channel_reply.send(self.cooker.command.runCommand(command))
|
||||
serverlog("Command Completed")
|
||||
except Exception as e:
|
||||
stack = traceback.format_exc()
|
||||
serverlog('Exception in server main event loop running command %s (%s)' % (command, stack))
|
||||
logger.exception('Exception in server main event loop running command %s (%s)' % (command, stack))
|
||||
serverlog('Exception in server main event loop running command %s (%s)' % (command, str(e)))
|
||||
logger.exception('Exception in server main event loop running command %s (%s)' % (command, str(e)))
|
||||
|
||||
if self.xmlrpc in ready:
|
||||
self.xmlrpc.handle_requests()
|
||||
@@ -244,6 +241,9 @@ class ProcessServer():
|
||||
|
||||
ready = self.idle_commands(.1, fds)
|
||||
|
||||
if len(threading.enumerate()) != 1:
|
||||
serverlog("More than one thread left?: " + str(threading.enumerate()))
|
||||
|
||||
serverlog("Exiting")
|
||||
# Remove the socket file so we don't get any more connections to avoid races
|
||||
try:
|
||||
@@ -261,9 +261,6 @@ class ProcessServer():
|
||||
|
||||
self.cooker.post_serve()
|
||||
|
||||
if len(threading.enumerate()) != 1:
|
||||
serverlog("More than one thread left?: " + str(threading.enumerate()))
|
||||
|
||||
# Flush logs before we release the lock
|
||||
sys.stdout.flush()
|
||||
sys.stderr.flush()
|
||||
@@ -327,10 +324,10 @@ class ProcessServer():
|
||||
if e.errno != errno.ENOENT:
|
||||
raise
|
||||
|
||||
msg = ["Delaying shutdown due to active processes which appear to be holding bitbake.lock"]
|
||||
msg = "Delaying shutdown due to active processes which appear to be holding bitbake.lock"
|
||||
if procs:
|
||||
msg.append(":\n%s" % str(procs.decode("utf-8")))
|
||||
serverlog("".join(msg))
|
||||
msg += ":\n%s" % str(procs.decode("utf-8"))
|
||||
serverlog(msg)
|
||||
|
||||
def idle_commands(self, delay, fds=None):
|
||||
nextsleep = delay
|
||||
@@ -370,12 +367,7 @@ class ProcessServer():
|
||||
self.next_heartbeat = now + self.heartbeat_seconds
|
||||
if hasattr(self.cooker, "data"):
|
||||
heartbeat = bb.event.HeartbeatEvent(now)
|
||||
try:
|
||||
bb.event.fire(heartbeat, self.cooker.data)
|
||||
except Exception as exc:
|
||||
if not isinstance(exc, bb.BBHandledException):
|
||||
logger.exception('Running heartbeat function')
|
||||
self.quit = True
|
||||
bb.event.fire(heartbeat, self.cooker.data)
|
||||
if nextsleep and now + nextsleep > self.next_heartbeat:
|
||||
# Shorten timeout so that we we wake up in time for
|
||||
# the heartbeat.
|
||||
@@ -474,7 +466,7 @@ class BitBakeServer(object):
|
||||
try:
|
||||
r = ready.get()
|
||||
except EOFError:
|
||||
# Trap the child exiting/closing the pipe and error out
|
||||
# Trap the child exitting/closing the pipe and error out
|
||||
r = None
|
||||
if not r or r[0] != "r":
|
||||
ready.close()
|
||||
@@ -557,7 +549,7 @@ def execServer(lockfd, readypipeinfd, lockname, sockname, server_timeout, xmlrpc
|
||||
|
||||
server.run()
|
||||
finally:
|
||||
# Flush any messages/errors to the logfile before exit
|
||||
# Flush any ,essages/errors to the logfile before exit
|
||||
sys.stdout.flush()
|
||||
sys.stderr.flush()
|
||||
|
||||
@@ -662,7 +654,7 @@ class BBUIEventQueue:
|
||||
self.reader = ConnectionReader(readfd)
|
||||
|
||||
self.t = threading.Thread()
|
||||
self.t.daemon = True
|
||||
self.t.setDaemon(True)
|
||||
self.t.run = self.startCallbackHandler
|
||||
self.t.start()
|
||||
|
||||
@@ -738,32 +730,10 @@ class ConnectionWriter(object):
|
||||
# Why bb.event needs this I have no idea
|
||||
self.event = self
|
||||
|
||||
def _send(self, obj):
|
||||
gc.disable()
|
||||
with self.wlock:
|
||||
self.writer.send_bytes(obj)
|
||||
gc.enable()
|
||||
|
||||
def send(self, obj):
    obj = multiprocessing.reduction.ForkingPickler.dumps(obj)
    # See notes/code in CookerParser
    # We must not terminate holding this lock else processes will hang.
    # For SIGTERM, raising afterwards avoids this.
    # For SIGINT, we don't want to have written partial data to the pipe.
    # pthread_sigmask block/unblock would be nice but doesn't work, https://bugs.python.org/issue47139
    process = multiprocessing.current_process()
    if process and hasattr(process, "queue_signals"):
        with process.signal_threadlock:
            process.queue_signals = True
            self._send(obj)
            process.queue_signals = False
            try:
                for sig in process.signal_received.pop():
                    process.handle_sig(sig, None)
            except IndexError:
                pass
    else:
        self._send(obj)
    with self.wlock:
        self.writer.send_bytes(obj)

def fileno(self):
    return self.writer.fileno()
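
Both variants of send() funnel a ForkingPickler-serialised object into a multiprocessing connection. A self-contained sketch of that round trip, with an invented payload dictionary standing in for a BitBake event:

import multiprocessing
from multiprocessing.reduction import ForkingPickler

# Pickle with ForkingPickler, ship the raw bytes with send_bytes(), and unpickle
# on the receiving end, mirroring the writer/reader pair the server sets up.
reader, writer = multiprocessing.Pipe(duplex=False)
obj = {"task": "do_compile", "status": 0}
writer.send_bytes(ForkingPickler.dumps(obj))
print(ForkingPickler.loads(reader.recv_bytes()))   # {'task': 'do_compile', 'status': 0}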
@@ -11,7 +11,6 @@ import hashlib
|
||||
import time
|
||||
import inspect
|
||||
from xmlrpc.server import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler
|
||||
import bb.server.xmlrpcclient
|
||||
|
||||
import bb
|
||||
|
||||
|
||||
@@ -1,6 +1,4 @@
|
||||
#
|
||||
# Copyright BitBake Contributors
|
||||
#
|
||||
# SPDX-License-Identifier: GPL-2.0-only
|
||||
#
|
||||
|
||||
@@ -13,8 +11,6 @@ import pickle
|
||||
import bb.data
|
||||
import difflib
|
||||
import simplediff
|
||||
import json
|
||||
import bb.compress.zstd
|
||||
from bb.checksum import FileChecksumCache
|
||||
from bb import runqueue
|
||||
import hashserv
|
||||
@@ -23,17 +19,6 @@ import hashserv.client
|
||||
logger = logging.getLogger('BitBake.SigGen')
|
||||
hashequiv_logger = logging.getLogger('BitBake.SigGen.HashEquiv')
|
||||
|
||||
class SetEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, set):
            return dict(_set_object=list(sorted(obj)))
        return json.JSONEncoder.default(self, obj)

def SetDecoder(dct):
    if '_set_object' in dct:
        return set(dct['_set_object'])
    return dct
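
The encoder/decoder pair above exists because the json module cannot serialise Python sets on its own. A quick self-contained round trip with an invented variable-dependency set shows the same idea, expressed as default= and object_hook= callbacks so it runs on its own:

import json

def set_default(obj):
    # Same idea as SetEncoder above.
    if isinstance(obj, set):
        return {"_set_object": sorted(obj)}
    raise TypeError

def set_object_hook(dct):
    # Same idea as SetDecoder above.
    if "_set_object" in dct:
        return set(dct["_set_object"])
    return dct

data = {"vardeps": {"PN", "PV", "SRC_URI"}}
text = json.dumps(data, default=set_default)
print(text)                                           # {"vardeps": {"_set_object": ["PN", "PV", "SRC_URI"]}}
print(json.loads(text, object_hook=set_object_hook))  # {'vardeps': {'PN', 'PV', 'SRC_URI'}}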
def init(d):
|
||||
siggens = [obj for obj in globals().values()
|
||||
if type(obj) is type and issubclass(obj, SignatureGenerator)]
|
||||
@@ -42,6 +27,7 @@ def init(d):
|
||||
for sg in siggens:
|
||||
if desired == sg.name:
|
||||
return sg(d)
|
||||
break
|
||||
else:
|
||||
logger.error("Invalid signature generator '%s', using default 'noop'\n"
|
||||
"Available generators: %s", desired,
|
||||
@@ -157,9 +143,6 @@ class SignatureGenerator(object):
|
||||
|
||||
return DataCacheProxy()
|
||||
|
||||
def exit(self):
|
||||
return
|
||||
|
||||
class SignatureGeneratorBasic(SignatureGenerator):
|
||||
"""
|
||||
"""
|
||||
@@ -176,8 +159,8 @@ class SignatureGeneratorBasic(SignatureGenerator):
|
||||
self.gendeps = {}
|
||||
self.lookupcache = {}
|
||||
self.setscenetasks = set()
|
||||
self.basehash_ignore_vars = set((data.getVar("BB_BASEHASH_IGNORE_VARS") or "").split())
|
||||
self.taskhash_ignore_tasks = None
|
||||
self.basewhitelist = set((data.getVar("BB_HASHBASE_WHITELIST") or "").split())
|
||||
self.taskwhitelist = None
|
||||
self.init_rundepcheck(data)
|
||||
checksum_cache_file = data.getVar("BB_HASH_CHECKSUM_CACHE_FILE")
|
||||
if checksum_cache_file:
|
||||
@@ -192,18 +175,18 @@ class SignatureGeneratorBasic(SignatureGenerator):
|
||||
self.tidtopn = {}
|
||||
|
||||
def init_rundepcheck(self, data):
|
||||
self.taskhash_ignore_tasks = data.getVar("BB_TASKHASH_IGNORE_TASKS") or None
|
||||
if self.taskhash_ignore_tasks:
|
||||
self.twl = re.compile(self.taskhash_ignore_tasks)
|
||||
self.taskwhitelist = data.getVar("BB_HASHTASK_WHITELIST") or None
|
||||
if self.taskwhitelist:
|
||||
self.twl = re.compile(self.taskwhitelist)
|
||||
else:
|
||||
self.twl = None
|
||||
|
||||
def _build_data(self, fn, d):
|
||||
|
||||
ignore_mismatch = ((d.getVar("BB_HASH_IGNORE_MISMATCH") or '') == '1')
|
||||
tasklist, gendeps, lookupcache = bb.data.generate_dependencies(d, self.basehash_ignore_vars)
|
||||
tasklist, gendeps, lookupcache = bb.data.generate_dependencies(d, self.basewhitelist)
|
||||
|
||||
taskdeps, basehash = bb.data.generate_dependency_hash(tasklist, gendeps, lookupcache, self.basehash_ignore_vars, fn)
|
||||
taskdeps, basehash = bb.data.generate_dependency_hash(tasklist, gendeps, lookupcache, self.basewhitelist, fn)
|
||||
|
||||
for task in tasklist:
|
||||
tid = fn + ":" + task
|
||||
@@ -245,7 +228,7 @@ class SignatureGeneratorBasic(SignatureGenerator):
|
||||
# self.dump_sigtask(fn, task, d.getVar("STAMP"), False)
|
||||
|
||||
for task in taskdeps:
|
||||
d.setVar("BB_BASEHASH:task-%s" % task, self.basehash[fn + ":" + task])
|
||||
d.setVar("BB_BASEHASH_task-%s" % task, self.basehash[fn + ":" + task])
|
||||
|
||||
def postparsing_clean_cache(self):
|
||||
#
|
||||
@@ -257,8 +240,7 @@ class SignatureGeneratorBasic(SignatureGenerator):
|
||||
|
||||
def rundep_check(self, fn, recipename, task, dep, depname, dataCaches):
|
||||
# Return True if we should keep the dependency, False to drop it
|
||||
# We only manipulate the dependencies for packages not in the ignore
|
||||
# list
|
||||
# We only manipulate the dependencies for packages not in the whitelist
|
||||
if self.twl and not self.twl.search(recipename):
|
||||
# then process the actual dependencies
|
||||
if self.twl.search(depname):
|
||||
@@ -329,23 +311,21 @@ class SignatureGeneratorBasic(SignatureGenerator):

data = self.basehash[tid]
for dep in self.runtaskdeps[tid]:
    data += self.get_unihash(dep)
    data = data + self.get_unihash(dep)

for (f, cs) in self.file_checksum_values[tid]:
    if cs:
        if "/./" in f:
            data += "./" + f.split("/./")[1]
        data += cs
        data = data + cs

if tid in self.taints:
    if self.taints[tid].startswith("nostamp:"):
        data += self.taints[tid][8:]
        data = data + self.taints[tid][8:]
    else:
        data += self.taints[tid]
        data = data + self.taints[tid]

h = hashlib.sha256(data.encode("utf-8")).hexdigest()
self.taskhash[tid] = h
#d.setVar("BB_TASKHASH:task-%s" % task, taskhash[task])
#d.setVar("BB_TASKHASH_task-%s" % task, taskhash[task])
return h
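
The hunk above builds one string from the base hash, each dependency's unihash, the tracked file checksums and any taint, then hashes it with SHA-256. A worked toy example of the same concatenation, with invented values in place of real hashes:

import hashlib

# Invented stand-ins for the pieces concatenated above.
basehash = "a1b2c3"
dep_unihashes = ["d4e5f6", "0718aa"]
file_checksums = [("./files/defconfig", "9c9c9c"), ("./files/missing.patch", None)]
taint = "nostamp:1234"

data = basehash
for dep in dep_unihashes:
    data = data + dep
for f, cs in file_checksums:
    if cs:
        data = data + cs
if taint.startswith("nostamp:"):
    data = data + taint[8:]

print(hashlib.sha256(data.encode("utf-8")).hexdigest())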
def writeout_file_checksum_cache(self):
|
||||
@@ -377,27 +357,22 @@ class SignatureGeneratorBasic(SignatureGenerator):
|
||||
|
||||
data = {}
|
||||
data['task'] = task
|
||||
data['basehash_ignore_vars'] = self.basehash_ignore_vars
|
||||
data['taskhash_ignore_tasks'] = self.taskhash_ignore_tasks
|
||||
data['basewhitelist'] = self.basewhitelist
|
||||
data['taskwhitelist'] = self.taskwhitelist
|
||||
data['taskdeps'] = self.taskdeps[fn][task]
|
||||
data['basehash'] = self.basehash[tid]
|
||||
data['gendeps'] = {}
|
||||
data['varvals'] = {}
|
||||
data['varvals'][task] = self.lookupcache[fn][task]
|
||||
for dep in self.taskdeps[fn][task]:
|
||||
if dep in self.basehash_ignore_vars:
|
||||
if dep in self.basewhitelist:
|
||||
continue
|
||||
data['gendeps'][dep] = self.gendeps[fn][dep]
|
||||
data['varvals'][dep] = self.lookupcache[fn][dep]
|
||||
|
||||
if runtime and tid in self.taskhash:
|
||||
data['runtaskdeps'] = self.runtaskdeps[tid]
|
||||
data['file_checksum_values'] = []
|
||||
for f,cs in self.file_checksum_values[tid]:
|
||||
if "/./" in f:
|
||||
data['file_checksum_values'].append(("./" + f.split("/./")[1], cs))
|
||||
else:
|
||||
data['file_checksum_values'].append((os.path.basename(f), cs))
|
||||
data['file_checksum_values'] = [(os.path.basename(f), cs) for f,cs in self.file_checksum_values[tid]]
|
||||
data['runtaskhashes'] = {}
|
||||
for dep in data['runtaskdeps']:
|
||||
data['runtaskhashes'][dep] = self.get_unihash(dep)
|
||||
@@ -421,13 +396,13 @@ class SignatureGeneratorBasic(SignatureGenerator):
|
||||
bb.error("Taskhash mismatch %s versus %s for %s" % (computed_taskhash, self.taskhash[tid], tid))
|
||||
sigfile = sigfile.replace(self.taskhash[tid], computed_taskhash)
|
||||
|
||||
fd, tmpfile = bb.utils.mkstemp(dir=os.path.dirname(sigfile), prefix="sigtask.")
|
||||
fd, tmpfile = tempfile.mkstemp(dir=os.path.dirname(sigfile), prefix="sigtask.")
|
||||
try:
|
||||
with bb.compress.zstd.open(fd, "wt", encoding="utf-8", num_threads=1) as f:
|
||||
json.dump(data, f, sort_keys=True, separators=(",", ":"), cls=SetEncoder)
|
||||
f.flush()
|
||||
with os.fdopen(fd, "wb") as stream:
|
||||
p = pickle.dump(data, stream, -1)
|
||||
stream.flush()
|
||||
os.chmod(tmpfile, 0o664)
|
||||
bb.utils.rename(tmpfile, sigfile)
|
||||
os.rename(tmpfile, sigfile)
|
||||
except (OSError, IOError) as err:
|
||||
try:
|
||||
os.unlink(tmpfile)
|
||||
@@ -493,18 +468,6 @@ class SignatureGeneratorUniHashMixIn(object):
|
||||
self._client = hashserv.create_client(self.server)
|
||||
return self._client
|
||||
|
||||
def reset(self, data):
|
||||
if getattr(self, '_client', None) is not None:
|
||||
self._client.close()
|
||||
self._client = None
|
||||
return super().reset(data)
|
||||
|
||||
def exit(self):
|
||||
if getattr(self, '_client', None) is not None:
|
||||
self._client.close()
|
||||
self._client = None
|
||||
return super().exit()
|
||||
|
||||
def get_stampfile_hash(self, tid):
|
||||
if tid in self.taskhash:
|
||||
# If a unique hash is reported, use it as the stampfile hash. This
|
||||
@@ -579,7 +542,7 @@ class SignatureGeneratorUniHashMixIn(object):
|
||||
hashequiv_logger.debug((1, 2)[unihash == taskhash], 'Found unihash %s in place of %s for %s from %s' % (unihash, taskhash, tid, self.server))
|
||||
else:
|
||||
hashequiv_logger.debug2('No reported unihash for %s:%s from %s' % (tid, taskhash, self.server))
|
||||
except ConnectionError as e:
|
||||
except hashserv.client.HashConnectionError as e:
|
||||
bb.warn('Error contacting Hash Equivalence Server %s: %s' % (self.server, str(e)))
|
||||
|
||||
self.set_unihash(tid, unihash)
|
||||
@@ -600,7 +563,7 @@ class SignatureGeneratorUniHashMixIn(object):
|
||||
if self.setscenetasks and tid not in self.setscenetasks:
|
||||
return
|
||||
|
||||
# This can happen if locked sigs are in action. Detect and just exit
|
||||
# This can happen if locked sigs are in action. Detect and just abort
|
||||
if taskhash != self.taskhash[tid]:
|
||||
return
|
||||
|
||||
@@ -658,7 +621,7 @@ class SignatureGeneratorUniHashMixIn(object):
|
||||
d.setVar('BB_UNIHASH', new_unihash)
|
||||
else:
|
||||
hashequiv_logger.debug('Reported task %s as unihash %s to %s' % (taskhash, unihash, self.server))
|
||||
except ConnectionError as e:
|
||||
except hashserv.client.HashConnectionError as e:
|
||||
bb.warn('Error contacting Hash Equivalence Server %s: %s' % (self.server, str(e)))
|
||||
finally:
|
||||
if sigfile:
|
||||
@@ -698,7 +661,7 @@ class SignatureGeneratorUniHashMixIn(object):
|
||||
# TODO: What to do here?
|
||||
hashequiv_logger.verbose('Task %s unihash reported as unwanted hash %s' % (tid, finalunihash))
|
||||
|
||||
except ConnectionError as e:
|
||||
except hashserv.client.HashConnectionError as e:
|
||||
bb.warn('Error contacting Hash Equivalence Server %s: %s' % (self.server, str(e)))
|
||||
|
||||
return False
|
||||
@@ -811,16 +774,6 @@ def clean_basepaths_list(a):
|
||||
b.append(clean_basepath(x))
|
||||
return b
|
||||
|
||||
# Handled renamed fields
|
||||
def handle_renames(data):
|
||||
if 'basewhitelist' in data:
|
||||
data['basehash_ignore_vars'] = data['basewhitelist']
|
||||
del data['basewhitelist']
|
||||
if 'taskwhitelist' in data:
|
||||
data['taskhash_ignore_tasks'] = data['taskwhitelist']
|
||||
del data['taskwhitelist']
|
||||
|
||||
|
||||
def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
|
||||
output = []
|
||||
|
||||
@@ -841,21 +794,20 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
|
||||
formatparams.update(values)
|
||||
return formatstr.format(**formatparams)
|
||||
|
||||
with bb.compress.zstd.open(a, "rt", encoding="utf-8", num_threads=1) as f:
|
||||
a_data = json.load(f, object_hook=SetDecoder)
|
||||
with bb.compress.zstd.open(b, "rt", encoding="utf-8", num_threads=1) as f:
|
||||
b_data = json.load(f, object_hook=SetDecoder)
|
||||
with open(a, 'rb') as f:
|
||||
p1 = pickle.Unpickler(f)
|
||||
a_data = p1.load()
|
||||
with open(b, 'rb') as f:
|
||||
p2 = pickle.Unpickler(f)
|
||||
b_data = p2.load()
|
||||
|
||||
for data in [a_data, b_data]:
|
||||
handle_renames(data)
|
||||
|
||||
def dict_diff(a, b, ignored_vars=set()):
def dict_diff(a, b, whitelist=set()):
    sa = set(a.keys())
    sb = set(b.keys())
    common = sa & sb
    changed = set()
    for i in common:
        if a[i] != b[i] and i not in ignored_vars:
        if a[i] != b[i] and i not in whitelist:
            changed.add(i)
    added = sb - sa
    removed = sa - sb
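
dict_diff compares two signature dictionaries purely with set arithmetic on their keys. The same pattern applied to a pair of invented variable maps:

a = {"CC": "gcc", "CFLAGS": "-O2", "LD": "ld"}
b = {"CC": "clang", "CFLAGS": "-O2", "STRIP": "strip"}

sa, sb = set(a), set(b)
changed = {k for k in sa & sb if a[k] != b[k]}
added = sb - sa
removed = sa - sb
print(changed, added, removed)   # {'CC'} {'STRIP'} {'LD'}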
@@ -863,11 +815,11 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
|
||||
|
||||
def file_checksums_diff(a, b):
    from collections import Counter

    # Convert lists back to tuples
    a = [(f[0], f[1]) for f in a]
    b = [(f[0], f[1]) for f in b]

    # Handle old siginfo format
    if isinstance(a, dict):
        a = [(os.path.basename(f), cs) for f, cs in a.items()]
    if isinstance(b, dict):
        b = [(os.path.basename(f), cs) for f, cs in b.items()]
    # Compare lists, ensuring we can handle duplicate filenames if they exist
    removedcount = Counter(a)
    removedcount.subtract(b)
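
Counter.subtract() keeps negative counts, which is what lets file_checksums_diff cope with duplicate filenames: entries left positive were removed, entries driven negative were added. A short illustration with made-up (filename, checksum) pairs:

from collections import Counter

a = [("defconfig", "aaa"), ("defconfig", "aaa"), ("init.sh", "bbb")]
b = [("defconfig", "aaa"), ("init.sh", "ccc")]

removedcount = Counter(a)
removedcount.subtract(Counter(b))
removed = [x for x in removedcount if removedcount[x] > 0]
added = [x for x in removedcount if removedcount[x] < 0]
print(removed)   # [('defconfig', 'aaa'), ('init.sh', 'bbb')]
print(added)     # [('init.sh', 'ccc')]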
@@ -894,15 +846,15 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
|
||||
removed = [x[0] for x in removed]
|
||||
return changed, added, removed
|
||||
|
||||
if 'basehash_ignore_vars' in a_data and a_data['basehash_ignore_vars'] != b_data['basehash_ignore_vars']:
|
||||
output.append(color_format("{color_title}basehash_ignore_vars changed{color_default} from '%s' to '%s'") % (a_data['basehash_ignore_vars'], b_data['basehash_ignore_vars']))
|
||||
if a_data['basehash_ignore_vars'] and b_data['basehash_ignore_vars']:
|
||||
output.append("changed items: %s" % a_data['basehash_ignore_vars'].symmetric_difference(b_data['basehash_ignore_vars']))
|
||||
if 'basewhitelist' in a_data and a_data['basewhitelist'] != b_data['basewhitelist']:
|
||||
output.append(color_format("{color_title}basewhitelist changed{color_default} from '%s' to '%s'") % (a_data['basewhitelist'], b_data['basewhitelist']))
|
||||
if a_data['basewhitelist'] and b_data['basewhitelist']:
|
||||
output.append("changed items: %s" % a_data['basewhitelist'].symmetric_difference(b_data['basewhitelist']))
|
||||
|
||||
if 'taskhash_ignore_tasks' in a_data and a_data['taskhash_ignore_tasks'] != b_data['taskhash_ignore_tasks']:
|
||||
output.append(color_format("{color_title}taskhash_ignore_tasks changed{color_default} from '%s' to '%s'") % (a_data['taskhash_ignore_tasks'], b_data['taskhash_ignore_tasks']))
|
||||
if a_data['taskhash_ignore_tasks'] and b_data['taskhash_ignore_tasks']:
|
||||
output.append("changed items: %s" % a_data['taskhash_ignore_tasks'].symmetric_difference(b_data['taskhash_ignore_tasks']))
|
||||
if 'taskwhitelist' in a_data and a_data['taskwhitelist'] != b_data['taskwhitelist']:
|
||||
output.append(color_format("{color_title}taskwhitelist changed{color_default} from '%s' to '%s'") % (a_data['taskwhitelist'], b_data['taskwhitelist']))
|
||||
if a_data['taskwhitelist'] and b_data['taskwhitelist']:
|
||||
output.append("changed items: %s" % a_data['taskwhitelist'].symmetric_difference(b_data['taskwhitelist']))
|
||||
|
||||
if a_data['taskdeps'] != b_data['taskdeps']:
|
||||
output.append(color_format("{color_title}Task dependencies changed{color_default} from:\n%s\nto:\n%s") % (sorted(a_data['taskdeps']), sorted(b_data['taskdeps'])))
|
||||
@@ -910,23 +862,23 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
|
||||
if a_data['basehash'] != b_data['basehash'] and not collapsed:
|
||||
output.append(color_format("{color_title}basehash changed{color_default} from %s to %s") % (a_data['basehash'], b_data['basehash']))
|
||||
|
||||
changed, added, removed = dict_diff(a_data['gendeps'], b_data['gendeps'], a_data['basehash_ignore_vars'] & b_data['basehash_ignore_vars'])
|
||||
changed, added, removed = dict_diff(a_data['gendeps'], b_data['gendeps'], a_data['basewhitelist'] & b_data['basewhitelist'])
|
||||
if changed:
|
||||
for dep in sorted(changed):
|
||||
for dep in changed:
|
||||
output.append(color_format("{color_title}List of dependencies for variable %s changed from '{color_default}%s{color_title}' to '{color_default}%s{color_title}'") % (dep, a_data['gendeps'][dep], b_data['gendeps'][dep]))
|
||||
if a_data['gendeps'][dep] and b_data['gendeps'][dep]:
|
||||
output.append("changed items: %s" % a_data['gendeps'][dep].symmetric_difference(b_data['gendeps'][dep]))
|
||||
if added:
|
||||
for dep in sorted(added):
|
||||
for dep in added:
|
||||
output.append(color_format("{color_title}Dependency on variable %s was added") % (dep))
|
||||
if removed:
|
||||
for dep in sorted(removed):
|
||||
for dep in removed:
|
||||
output.append(color_format("{color_title}Dependency on Variable %s was removed") % (dep))
|
||||
|
||||
|
||||
changed, added, removed = dict_diff(a_data['varvals'], b_data['varvals'])
|
||||
if changed:
|
||||
for dep in sorted(changed):
|
||||
for dep in changed:
|
||||
oldval = a_data['varvals'][dep]
|
||||
newval = b_data['varvals'][dep]
|
||||
if newval and oldval and ('\n' in oldval or '\n' in newval):
|
||||
@@ -950,9 +902,9 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
|
||||
output.append(color_format("{color_title}Variable {var} value changed from '{color_default}{oldval}{color_title}' to '{color_default}{newval}{color_title}'{color_default}", var=dep, oldval=oldval, newval=newval))
|
||||
|
||||
if not 'file_checksum_values' in a_data:
|
||||
a_data['file_checksum_values'] = []
|
||||
a_data['file_checksum_values'] = {}
|
||||
if not 'file_checksum_values' in b_data:
|
||||
b_data['file_checksum_values'] = []
|
||||
b_data['file_checksum_values'] = {}
|
||||
|
||||
changed, added, removed = file_checksums_diff(a_data['file_checksum_values'], b_data['file_checksum_values'])
|
||||
if changed:
|
||||
@@ -992,11 +944,11 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
|
||||
|
||||
|
||||
if 'runtaskhashes' in a_data and 'runtaskhashes' in b_data:
|
||||
a = clean_basepaths(a_data['runtaskhashes'])
|
||||
b = clean_basepaths(b_data['runtaskhashes'])
|
||||
a = a_data['runtaskhashes']
|
||||
b = b_data['runtaskhashes']
|
||||
changed, added, removed = dict_diff(a, b)
|
||||
if added:
|
||||
for dep in sorted(added):
|
||||
for dep in added:
|
||||
bdep_found = False
|
||||
if removed:
|
||||
for bdep in removed:
|
||||
@@ -1004,9 +956,9 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
|
||||
#output.append("Dependency on task %s was replaced by %s with same hash" % (dep, bdep))
|
||||
bdep_found = True
|
||||
if not bdep_found:
|
||||
output.append(color_format("{color_title}Dependency on task %s was added{color_default} with hash %s") % (dep, b[dep]))
|
||||
output.append(color_format("{color_title}Dependency on task %s was added{color_default} with hash %s") % (clean_basepath(dep), b[dep]))
|
||||
if removed:
|
||||
for dep in sorted(removed):
|
||||
for dep in removed:
|
||||
adep_found = False
|
||||
if added:
|
||||
for adep in added:
|
||||
@@ -1014,11 +966,11 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
|
||||
#output.append("Dependency on task %s was replaced by %s with same hash" % (adep, dep))
|
||||
adep_found = True
|
||||
if not adep_found:
|
||||
output.append(color_format("{color_title}Dependency on task %s was removed{color_default} with hash %s") % (dep, a[dep]))
|
||||
output.append(color_format("{color_title}Dependency on task %s was removed{color_default} with hash %s") % (clean_basepath(dep), a[dep]))
|
||||
if changed:
|
||||
for dep in sorted(changed):
|
||||
for dep in changed:
|
||||
if not collapsed:
|
||||
output.append(color_format("{color_title}Hash for task dependency %s changed{color_default} from %s to %s") % (dep, a[dep], b[dep]))
|
||||
output.append(color_format("{color_title}Hash for dependent task %s changed{color_default} from %s to %s") % (clean_basepath(dep), a[dep], b[dep]))
|
||||
if callable(recursecb):
|
||||
recout = recursecb(dep, a[dep], b[dep])
|
||||
if recout:
|
||||
@@ -1028,7 +980,6 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
|
||||
# If a dependent hash changed, might as well print the line above and then defer to the changes in
|
||||
# that hash since in all likelyhood, they're the same changes this task also saw.
|
||||
output = [output[-1]] + recout
|
||||
break
|
||||
|
||||
a_taint = a_data.get('taint', None)
|
||||
b_taint = b_data.get('taint', None)
|
||||
@@ -1066,8 +1017,6 @@ def calc_taskhash(sigdata):
|
||||
|
||||
for c in sigdata['file_checksum_values']:
|
||||
if c[1]:
|
||||
if "./" in c[0]:
|
||||
data = data + c[0]
|
||||
data = data + c[1]
|
||||
|
||||
if 'taint' in sigdata:
|
||||
@@ -1082,33 +1031,32 @@ def calc_taskhash(sigdata):
|
||||
def dump_sigfile(a):
|
||||
output = []
|
||||
|
||||
with bb.compress.zstd.open(a, "rt", encoding="utf-8", num_threads=1) as f:
|
||||
a_data = json.load(f, object_hook=SetDecoder)
|
||||
with open(a, 'rb') as f:
|
||||
p1 = pickle.Unpickler(f)
|
||||
a_data = p1.load()
|
||||
|
||||
handle_renames(a_data)
|
||||
output.append("basewhitelist: %s" % (a_data['basewhitelist']))
|
||||
|
||||
output.append("basehash_ignore_vars: %s" % (sorted(a_data['basehash_ignore_vars'])))
|
||||
|
||||
output.append("taskhash_ignore_tasks: %s" % (sorted(a_data['taskhash_ignore_tasks'] or [])))
|
||||
output.append("taskwhitelist: %s" % (a_data['taskwhitelist']))
|
||||
|
||||
output.append("Task dependencies: %s" % (sorted(a_data['taskdeps'])))
|
||||
|
||||
output.append("basehash: %s" % (a_data['basehash']))
|
||||
|
||||
for dep in sorted(a_data['gendeps']):
|
||||
output.append("List of dependencies for variable %s is %s" % (dep, sorted(a_data['gendeps'][dep])))
|
||||
for dep in a_data['gendeps']:
|
||||
output.append("List of dependencies for variable %s is %s" % (dep, a_data['gendeps'][dep]))
|
||||
|
||||
for dep in sorted(a_data['varvals']):
|
||||
for dep in a_data['varvals']:
|
||||
output.append("Variable %s value is %s" % (dep, a_data['varvals'][dep]))
|
||||
|
||||
if 'runtaskdeps' in a_data:
|
||||
output.append("Tasks this task depends on: %s" % (sorted(a_data['runtaskdeps'])))
|
||||
output.append("Tasks this task depends on: %s" % (a_data['runtaskdeps']))
|
||||
|
||||
if 'file_checksum_values' in a_data:
|
||||
output.append("This task depends on the checksums of files: %s" % (sorted(a_data['file_checksum_values'])))
|
||||
output.append("This task depends on the checksums of files: %s" % (a_data['file_checksum_values']))
|
||||
|
||||
if 'runtaskhashes' in a_data:
|
||||
for dep in sorted(a_data['runtaskhashes']):
|
||||
for dep in a_data['runtaskhashes']:
|
||||
output.append("Hash for dependent task %s is %s" % (dep, a_data['runtaskhashes'][dep]))
|
||||
|
||||
if 'taint' in a_data:
|
||||
|
||||
@@ -39,7 +39,7 @@ class TaskData:
|
||||
"""
|
||||
BitBake Task Data implementation
|
||||
"""
|
||||
def __init__(self, halt = True, skiplist = None, allowincomplete = False):
|
||||
def __init__(self, abort = True, skiplist = None, allowincomplete = False):
|
||||
self.build_targets = {}
|
||||
self.run_targets = {}
|
||||
|
||||
@@ -57,7 +57,7 @@ class TaskData:
|
||||
self.failed_rdeps = []
|
||||
self.failed_fns = []
|
||||
|
||||
self.halt = halt
|
||||
self.abort = abort
|
||||
self.allowincomplete = allowincomplete
|
||||
|
||||
self.skiplist = skiplist
|
||||
@@ -328,7 +328,7 @@ class TaskData:
|
||||
try:
|
||||
self.add_provider_internal(cfgData, dataCache, item)
|
||||
except bb.providers.NoProvider:
|
||||
if self.halt:
|
||||
if self.abort:
|
||||
raise
|
||||
self.remove_buildtarget(item)
|
||||
|
||||
@@ -451,12 +451,12 @@ class TaskData:
|
||||
for target in self.build_targets:
|
||||
if fn in self.build_targets[target]:
|
||||
self.build_targets[target].remove(fn)
|
||||
if not self.build_targets[target]:
|
||||
if len(self.build_targets[target]) == 0:
|
||||
self.remove_buildtarget(target, missing_list)
|
||||
for target in self.run_targets:
|
||||
if fn in self.run_targets[target]:
|
||||
self.run_targets[target].remove(fn)
|
||||
if not self.run_targets[target]:
|
||||
if len(self.run_targets[target]) == 0:
|
||||
self.remove_runtarget(target, missing_list)
|
||||
|
||||
def remove_buildtarget(self, target, missing_list=None):
|
||||
@@ -479,7 +479,7 @@ class TaskData:
|
||||
fn = tid.rsplit(":",1)[0]
|
||||
self.fail_fn(fn, missing_list)
|
||||
|
||||
if self.halt and target in self.external_targets:
|
||||
if self.abort and target in self.external_targets:
|
||||
logger.error("Required build target '%s' has no buildable providers.\nMissing or unbuildable dependency chain was: %s", target, missing_list)
|
||||
raise bb.providers.NoProvider(target)
|
||||
|
||||
@@ -516,7 +516,7 @@ class TaskData:
|
||||
self.add_provider_internal(cfgData, dataCache, target)
|
||||
added = added + 1
|
||||
except bb.providers.NoProvider:
|
||||
if self.halt and target in self.external_targets and not self.allowincomplete:
|
||||
if self.abort and target in self.external_targets and not self.allowincomplete:
|
||||
raise
|
||||
if not self.allowincomplete:
|
||||
self.remove_buildtarget(target)
|
||||
|
||||
@@ -111,9 +111,9 @@ ${D}${libdir}/pkgconfig/*.pc
|
||||
self.assertExecs(set(["sed"]))
|
||||
|
||||
def test_parameter_expansion_modifiers(self):
|
||||
# -,+ and : are also valid modifiers for parameter expansion, but are
|
||||
# - and + are also valid modifiers for parameter expansion, but are
|
||||
# valid characters in bitbake variable names, so are not included here
|
||||
for i in ('=', '?', '#', '%', '##', '%%'):
|
||||
for i in ('=', ':-', ':=', '?', ':?', ':+', '#', '%', '##', '%%'):
|
||||
name = "foo%sbar" % i
|
||||
self.parseExpression("${%s}" % name)
|
||||
self.assertNotIn(name, self.references)
|
||||
@@ -318,7 +318,7 @@ d.getVar(a(), False)
|
||||
"filename": "example.bb",
|
||||
})
|
||||
|
||||
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), set(), self.d)
|
||||
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), self.d)
|
||||
|
||||
self.assertEqual(deps, set(["somevar", "bar", "something", "inexpand", "test", "test2", "a"]))
|
||||
|
||||
@@ -365,7 +365,7 @@ esac
|
||||
self.d.setVarFlags("FOO", {"func": True})
|
||||
self.setEmptyVars(execs)
|
||||
|
||||
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), set(), self.d)
|
||||
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), self.d)
|
||||
|
||||
self.assertEqual(deps, set(["somevar", "inverted"] + execs))
|
||||
|
||||
@@ -375,7 +375,7 @@ esac
|
||||
self.d.setVar("FOO", "foo=oe_libinstall; eval $foo")
|
||||
self.d.setVarFlag("FOO", "vardeps", "oe_libinstall")
|
||||
|
||||
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), set(), self.d)
|
||||
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), self.d)
|
||||
|
||||
self.assertEqual(deps, set(["oe_libinstall"]))
|
||||
|
||||
@@ -384,7 +384,7 @@ esac
|
||||
self.d.setVar("FOO", "foo=oe_libinstall; eval $foo")
|
||||
self.d.setVarFlag("FOO", "vardeps", "${@'oe_libinstall'}")
|
||||
|
||||
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), set(), self.d)
|
||||
deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), self.d)
|
||||
|
||||
self.assertEqual(deps, set(["oe_libinstall"]))
|
||||
|
||||
@@ -399,7 +399,7 @@ esac
|
||||
# Check dependencies
|
||||
self.d.setVar('ANOTHERVAR', expr)
|
||||
self.d.setVar('TESTVAR', 'anothervalue testval testval2')
|
||||
deps, values = bb.data.build_dependencies("ANOTHERVAR", set(self.d.keys()), set(), set(), set(), self.d)
|
||||
deps, values = bb.data.build_dependencies("ANOTHERVAR", set(self.d.keys()), set(), set(), self.d)
|
||||
self.assertEqual(sorted(values.splitlines()),
|
||||
sorted([expr,
|
||||
'TESTVAR{anothervalue} = Set',
|
||||
@@ -412,50 +412,6 @@ esac
|
||||
# Check final value
|
||||
self.assertEqual(self.d.getVar('ANOTHERVAR').split(), ['anothervalue', 'yetanothervalue', 'lastone'])
|
||||
|
||||
def test_contains_vardeps_excluded(self):
|
||||
# Check the ignored_vars option to build_dependencies is handled by contains functionality
|
||||
varval = '${TESTVAR2} ${@bb.utils.filter("TESTVAR", "somevalue anothervalue", d)}'
|
||||
self.d.setVar('ANOTHERVAR', varval)
|
||||
self.d.setVar('TESTVAR', 'anothervalue testval testval2')
|
||||
self.d.setVar('TESTVAR2', 'testval3')
|
||||
deps, values = bb.data.build_dependencies("ANOTHERVAR", set(self.d.keys()), set(), set(), set(["TESTVAR"]), self.d)
|
||||
self.assertEqual(sorted(values.splitlines()), sorted([varval]))
|
||||
self.assertEqual(deps, set(["TESTVAR2"]))
|
||||
self.assertEqual(self.d.getVar('ANOTHERVAR').split(), ['testval3', 'anothervalue'])
|
||||
|
||||
# Check the vardepsexclude flag is handled by contains functionality
|
||||
self.d.setVarFlag('ANOTHERVAR', 'vardepsexclude', 'TESTVAR')
|
||||
deps, values = bb.data.build_dependencies("ANOTHERVAR", set(self.d.keys()), set(), set(), set(), self.d)
|
||||
self.assertEqual(sorted(values.splitlines()), sorted([varval]))
|
||||
self.assertEqual(deps, set(["TESTVAR2"]))
|
||||
self.assertEqual(self.d.getVar('ANOTHERVAR').split(), ['testval3', 'anothervalue'])
|
||||
|
||||
def test_contains_vardeps_override_operators(self):
|
||||
# Check override operators handle dependencies correctly with the contains functionality
|
||||
expr_plain = 'testval'
|
||||
expr_prepend = '${@bb.utils.filter("TESTVAR1", "testval1", d)} '
|
||||
expr_append = ' ${@bb.utils.filter("TESTVAR2", "testval2", d)}'
|
||||
expr_remove = '${@bb.utils.contains("TESTVAR3", "no-testval", "testval", "", d)}'
|
||||
# Check dependencies
|
||||
self.d.setVar('ANOTHERVAR', expr_plain)
|
||||
self.d.prependVar('ANOTHERVAR', expr_prepend)
|
||||
self.d.appendVar('ANOTHERVAR', expr_append)
|
||||
self.d.setVar('ANOTHERVAR:remove', expr_remove)
|
||||
self.d.setVar('TESTVAR1', 'blah')
|
||||
self.d.setVar('TESTVAR2', 'testval2')
|
||||
self.d.setVar('TESTVAR3', 'no-testval')
|
||||
deps, values = bb.data.build_dependencies("ANOTHERVAR", set(self.d.keys()), set(), set(), set(), self.d)
|
||||
self.assertEqual(sorted(values.splitlines()),
|
||||
sorted([
|
||||
expr_prepend + expr_plain + expr_append,
|
||||
'_remove of ' + expr_remove,
|
||||
'TESTVAR1{testval1} = Unset',
|
||||
'TESTVAR2{testval2} = Set',
|
||||
'TESTVAR3{no-testval} = Set',
|
||||
]))
|
||||
# Check final value
|
||||
self.assertEqual(self.d.getVar('ANOTHERVAR').split(), ['testval2'])
|
||||
|
||||
#Currently no wildcard support
|
||||
#def test_vardeps_wildcards(self):
|
||||
# self.d.setVar("oe_libinstall", "echo test")
|
||||
|
||||
@@ -1,100 +0,0 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#

from pathlib import Path
import bb.compress.lz4
import bb.compress.zstd
import contextlib
import os
import shutil
import tempfile
import unittest
import subprocess


class CompressionTests(object):
    def setUp(self):
        self._t = tempfile.TemporaryDirectory()
        self.tmpdir = Path(self._t.name)
        self.addCleanup(self._t.cleanup)

    def _file_helper(self, mode_suffix, data):
        tmp_file = self.tmpdir / "compressed"

        with self.do_open(tmp_file, mode="w" + mode_suffix) as f:
            f.write(data)

        with self.do_open(tmp_file, mode="r" + mode_suffix) as f:
            read_data = f.read()

        self.assertEqual(read_data, data)

    def test_text_file(self):
        self._file_helper("t", "Hello")

    def test_binary_file(self):
        self._file_helper("b", "Hello".encode("utf-8"))

    def _pipe_helper(self, mode_suffix, data):
        rfd, wfd = os.pipe()
        with open(rfd, "rb") as r, open(wfd, "wb") as w:
            with self.do_open(r, mode="r" + mode_suffix) as decompress:
                with self.do_open(w, mode="w" + mode_suffix) as compress:
                    compress.write(data)
                read_data = decompress.read()

        self.assertEqual(read_data, data)

    def test_text_pipe(self):
        self._pipe_helper("t", "Hello")

    def test_binary_pipe(self):
        self._pipe_helper("b", "Hello".encode("utf-8"))

    def test_bad_decompress(self):
        tmp_file = self.tmpdir / "compressed"
        with tmp_file.open("wb") as f:
            f.write(b"\x00")

        with self.assertRaises(OSError):
            with self.do_open(tmp_file, mode="rb", stderr=subprocess.DEVNULL) as f:
                data = f.read()


class LZ4Tests(CompressionTests, unittest.TestCase):
    def setUp(self):
        if shutil.which("lz4c") is None:
            self.skipTest("'lz4c' not found")
        super().setUp()

    @contextlib.contextmanager
    def do_open(self, *args, **kwargs):
        with bb.compress.lz4.open(*args, **kwargs) as f:
            yield f


class ZStdTests(CompressionTests, unittest.TestCase):
    def setUp(self):
        if shutil.which("zstd") is None:
            self.skipTest("'zstd' not found")
        super().setUp()

    @contextlib.contextmanager
    def do_open(self, *args, **kwargs):
        with bb.compress.zstd.open(*args, **kwargs) as f:
            yield f


class PZStdTests(CompressionTests, unittest.TestCase):
    def setUp(self):
        if shutil.which("pzstd") is None:
            self.skipTest("'pzstd' not found")
        super().setUp()

    @contextlib.contextmanager
    def do_open(self, *args, **kwargs):
        with bb.compress.zstd.open(*args, num_threads=2, **kwargs) as f:
            yield f
@@ -1,8 +1,6 @@
#
# BitBake Tests for cooker.py
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#

@@ -245,35 +245,35 @@ class TestConcatOverride(unittest.TestCase):
|
||||
|
||||
def test_prepend(self):
|
||||
self.d.setVar("TEST", "${VAL}")
|
||||
self.d.setVar("TEST:prepend", "${FOO}:")
|
||||
self.d.setVar("TEST_prepend", "${FOO}:")
|
||||
self.assertEqual(self.d.getVar("TEST"), "foo:val")
|
||||
|
||||
def test_append(self):
|
||||
self.d.setVar("TEST", "${VAL}")
|
||||
self.d.setVar("TEST:append", ":${BAR}")
|
||||
self.d.setVar("TEST_append", ":${BAR}")
|
||||
self.assertEqual(self.d.getVar("TEST"), "val:bar")
|
||||
|
||||
def test_multiple_append(self):
|
||||
self.d.setVar("TEST", "${VAL}")
|
||||
self.d.setVar("TEST:prepend", "${FOO}:")
|
||||
self.d.setVar("TEST:append", ":val2")
|
||||
self.d.setVar("TEST:append", ":${BAR}")
|
||||
self.d.setVar("TEST_prepend", "${FOO}:")
|
||||
self.d.setVar("TEST_append", ":val2")
|
||||
self.d.setVar("TEST_append", ":${BAR}")
|
||||
self.assertEqual(self.d.getVar("TEST"), "foo:val:val2:bar")
|
||||
|
||||
def test_append_unset(self):
|
||||
self.d.setVar("TEST:prepend", "${FOO}:")
|
||||
self.d.setVar("TEST:append", ":val2")
|
||||
self.d.setVar("TEST:append", ":${BAR}")
|
||||
self.d.setVar("TEST_prepend", "${FOO}:")
|
||||
self.d.setVar("TEST_append", ":val2")
|
||||
self.d.setVar("TEST_append", ":${BAR}")
|
||||
self.assertEqual(self.d.getVar("TEST"), "foo::val2:bar")
|
||||
|
||||
def test_remove(self):
|
||||
self.d.setVar("TEST", "${VAL} ${BAR}")
|
||||
self.d.setVar("TEST:remove", "val")
|
||||
self.d.setVar("TEST_remove", "val")
|
||||
self.assertEqual(self.d.getVar("TEST"), " bar")
|
||||
|
||||
def test_remove_cleared(self):
|
||||
self.d.setVar("TEST", "${VAL} ${BAR}")
|
||||
self.d.setVar("TEST:remove", "val")
|
||||
self.d.setVar("TEST_remove", "val")
|
||||
self.d.setVar("TEST", "${VAL} ${BAR}")
|
||||
self.assertEqual(self.d.getVar("TEST"), "val bar")
|
||||
|
||||
@@ -281,42 +281,42 @@ class TestConcatOverride(unittest.TestCase):
|
||||
# (including that whitespace is preserved)
|
||||
def test_remove_inactive_override(self):
|
||||
self.d.setVar("TEST", "${VAL} ${BAR} 123")
|
||||
self.d.setVar("TEST:remove:inactiveoverride", "val")
|
||||
self.d.setVar("TEST_remove_inactiveoverride", "val")
|
||||
self.assertEqual(self.d.getVar("TEST"), "val bar 123")
|
||||
|
||||
def test_doubleref_remove(self):
|
||||
self.d.setVar("TEST", "${VAL} ${BAR}")
|
||||
self.d.setVar("TEST:remove", "val")
|
||||
self.d.setVar("TEST_remove", "val")
|
||||
self.d.setVar("TEST_TEST", "${TEST} ${TEST}")
|
||||
self.assertEqual(self.d.getVar("TEST_TEST"), " bar bar")
|
||||
|
||||
def test_empty_remove(self):
|
||||
self.d.setVar("TEST", "")
|
||||
self.d.setVar("TEST:remove", "val")
|
||||
self.d.setVar("TEST_remove", "val")
|
||||
self.assertEqual(self.d.getVar("TEST"), "")
|
||||
|
||||
def test_remove_expansion(self):
|
||||
self.d.setVar("BAR", "Z")
|
||||
self.d.setVar("TEST", "${BAR}/X Y")
|
||||
self.d.setVar("TEST:remove", "${BAR}/X")
|
||||
self.d.setVar("TEST_remove", "${BAR}/X")
|
||||
self.assertEqual(self.d.getVar("TEST"), " Y")
|
||||
|
||||
def test_remove_expansion_items(self):
|
||||
self.d.setVar("TEST", "A B C D")
|
||||
self.d.setVar("BAR", "B D")
|
||||
self.d.setVar("TEST:remove", "${BAR}")
|
||||
self.d.setVar("TEST_remove", "${BAR}")
|
||||
self.assertEqual(self.d.getVar("TEST"), "A C ")
|
||||
|
||||
def test_remove_preserve_whitespace(self):
|
||||
# When the removal isn't active, the original value should be preserved
|
||||
self.d.setVar("TEST", " A B")
|
||||
self.d.setVar("TEST:remove", "C")
|
||||
self.d.setVar("TEST_remove", "C")
|
||||
self.assertEqual(self.d.getVar("TEST"), " A B")
|
||||
|
||||
def test_remove_preserve_whitespace2(self):
|
||||
# When the removal is active preserve the whitespace
|
||||
self.d.setVar("TEST", " A B")
|
||||
self.d.setVar("TEST:remove", "B")
|
||||
self.d.setVar("TEST_remove", "B")
|
||||
self.assertEqual(self.d.getVar("TEST"), " A ")
|
||||
|
||||
class TestOverrides(unittest.TestCase):
|
||||
@@ -329,70 +329,70 @@ class TestOverrides(unittest.TestCase):
|
||||
self.assertEqual(self.d.getVar("TEST"), "testvalue")
|
||||
|
||||
def test_one_override(self):
|
||||
self.d.setVar("TEST:bar", "testvalue2")
|
||||
self.d.setVar("TEST_bar", "testvalue2")
|
||||
self.assertEqual(self.d.getVar("TEST"), "testvalue2")
|
||||
|
||||
def test_one_override_unset(self):
|
||||
self.d.setVar("TEST2:bar", "testvalue2")
|
||||
self.d.setVar("TEST2_bar", "testvalue2")
|
||||
|
||||
self.assertEqual(self.d.getVar("TEST2"), "testvalue2")
|
||||
self.assertCountEqual(list(self.d.keys()), ['TEST', 'TEST2', 'OVERRIDES', 'TEST2:bar'])
|
||||
self.assertCountEqual(list(self.d.keys()), ['TEST', 'TEST2', 'OVERRIDES', 'TEST2_bar'])
|
||||
|
||||
def test_multiple_override(self):
|
||||
self.d.setVar("TEST:bar", "testvalue2")
|
||||
self.d.setVar("TEST:local", "testvalue3")
|
||||
self.d.setVar("TEST:foo", "testvalue4")
|
||||
self.d.setVar("TEST_bar", "testvalue2")
|
||||
self.d.setVar("TEST_local", "testvalue3")
|
||||
self.d.setVar("TEST_foo", "testvalue4")
|
||||
self.assertEqual(self.d.getVar("TEST"), "testvalue3")
|
||||
self.assertCountEqual(list(self.d.keys()), ['TEST', 'TEST:foo', 'OVERRIDES', 'TEST:bar', 'TEST:local'])
|
||||
self.assertCountEqual(list(self.d.keys()), ['TEST', 'TEST_foo', 'OVERRIDES', 'TEST_bar', 'TEST_local'])
|
||||
|
||||
def test_multiple_combined_overrides(self):
|
||||
self.d.setVar("TEST:local:foo:bar", "testvalue3")
|
||||
self.d.setVar("TEST_local_foo_bar", "testvalue3")
|
||||
self.assertEqual(self.d.getVar("TEST"), "testvalue3")
|
||||
|
||||
def test_multiple_overrides_unset(self):
|
||||
self.d.setVar("TEST2:local:foo:bar", "testvalue3")
|
||||
self.d.setVar("TEST2_local_foo_bar", "testvalue3")
|
||||
self.assertEqual(self.d.getVar("TEST2"), "testvalue3")
|
||||
|
||||
def test_keyexpansion_override(self):
|
||||
self.d.setVar("LOCAL", "local")
|
||||
self.d.setVar("TEST:bar", "testvalue2")
|
||||
self.d.setVar("TEST:${LOCAL}", "testvalue3")
|
||||
self.d.setVar("TEST:foo", "testvalue4")
|
||||
self.d.setVar("TEST_bar", "testvalue2")
|
||||
self.d.setVar("TEST_${LOCAL}", "testvalue3")
|
||||
self.d.setVar("TEST_foo", "testvalue4")
|
||||
bb.data.expandKeys(self.d)
|
||||
self.assertEqual(self.d.getVar("TEST"), "testvalue3")
|
||||
|
||||
def test_rename_override(self):
|
||||
self.d.setVar("ALTERNATIVE:ncurses-tools:class-target", "a")
|
||||
self.d.setVar("ALTERNATIVE_ncurses-tools_class-target", "a")
|
||||
self.d.setVar("OVERRIDES", "class-target")
|
||||
self.d.renameVar("ALTERNATIVE:ncurses-tools", "ALTERNATIVE:lib32-ncurses-tools")
|
||||
self.assertEqual(self.d.getVar("ALTERNATIVE:lib32-ncurses-tools"), "a")
|
||||
self.d.renameVar("ALTERNATIVE_ncurses-tools", "ALTERNATIVE_lib32-ncurses-tools")
|
||||
self.assertEqual(self.d.getVar("ALTERNATIVE_lib32-ncurses-tools"), "a")
|
||||
|
||||
def test_underscore_override(self):
|
||||
self.d.setVar("TEST:bar", "testvalue2")
|
||||
self.d.setVar("TEST:some_val", "testvalue3")
|
||||
self.d.setVar("TEST:foo", "testvalue4")
|
||||
self.d.setVar("TEST_bar", "testvalue2")
|
||||
self.d.setVar("TEST_some_val", "testvalue3")
|
||||
self.d.setVar("TEST_foo", "testvalue4")
|
||||
self.d.setVar("OVERRIDES", "foo:bar:some_val")
|
||||
self.assertEqual(self.d.getVar("TEST"), "testvalue3")
|
||||
|
||||
def test_remove_with_override(self):
|
||||
self.d.setVar("TEST:bar", "testvalue2")
|
||||
self.d.setVar("TEST:some_val", "testvalue3 testvalue5")
|
||||
self.d.setVar("TEST:some_val:remove", "testvalue3")
|
||||
self.d.setVar("TEST:foo", "testvalue4")
|
||||
self.d.setVar("TEST_bar", "testvalue2")
|
||||
self.d.setVar("TEST_some_val", "testvalue3 testvalue5")
|
||||
self.d.setVar("TEST_some_val_remove", "testvalue3")
|
||||
self.d.setVar("TEST_foo", "testvalue4")
|
||||
self.d.setVar("OVERRIDES", "foo:bar:some_val")
|
||||
self.assertEqual(self.d.getVar("TEST"), " testvalue5")
|
||||
|
||||
def test_append_and_override_1(self):
|
||||
self.d.setVar("TEST:append", "testvalue2")
|
||||
self.d.setVar("TEST:bar", "testvalue3")
|
||||
self.d.setVar("TEST_append", "testvalue2")
|
||||
self.d.setVar("TEST_bar", "testvalue3")
|
||||
self.assertEqual(self.d.getVar("TEST"), "testvalue3testvalue2")
|
||||
|
||||
def test_append_and_override_2(self):
|
||||
self.d.setVar("TEST:append:bar", "testvalue2")
|
||||
self.d.setVar("TEST_append_bar", "testvalue2")
|
||||
self.assertEqual(self.d.getVar("TEST"), "testvaluetestvalue2")
|
||||
|
||||
def test_append_and_override_3(self):
|
||||
self.d.setVar("TEST:bar:append", "testvalue2")
|
||||
self.d.setVar("TEST_bar_append", "testvalue2")
|
||||
self.assertEqual(self.d.getVar("TEST"), "testvalue2")
|
||||
|
||||
# Test an override with _<numeric> in it based on a real world OE issue
|
||||
@@ -400,16 +400,11 @@ class TestOverrides(unittest.TestCase):
|
||||
self.d.setVar("TARGET_ARCH", "x86_64")
|
||||
self.d.setVar("PN", "test-${TARGET_ARCH}")
|
||||
self.d.setVar("VERSION", "1")
|
||||
self.d.setVar("VERSION:pn-test-${TARGET_ARCH}", "2")
|
||||
self.d.setVar("VERSION_pn-test-${TARGET_ARCH}", "2")
|
||||
self.d.setVar("OVERRIDES", "pn-${PN}")
|
||||
bb.data.expandKeys(self.d)
|
||||
self.assertEqual(self.d.getVar("VERSION"), "2")
|
||||
|
||||
def test_append_and_unused_override(self):
|
||||
# Had a bug where an unused override append could return "" instead of None
|
||||
self.d.setVar("BAR:append:unusedoverride", "testvalue2")
|
||||
self.assertEqual(self.d.getVar("BAR"), None)
|
||||
|
||||
class TestKeyExpansion(unittest.TestCase):
|
||||
def setUp(self):
|
||||
self.d = bb.data.init()
|
||||
@@ -503,7 +498,7 @@ class TaskHash(unittest.TestCase):
|
||||
d.setVar("VAR", "val")
|
||||
# Adding an inactive removal shouldn't change the hash
|
||||
d.setVar("BAR", "notbar")
|
||||
d.setVar("MYCOMMAND:remove", "${BAR}")
|
||||
d.setVar("MYCOMMAND_remove", "${BAR}")
|
||||
nexthash = gettask_bashhash("mytask", d)
|
||||
self.assertEqual(orighash, nexthash)
|
||||
|
||||
|
||||
@@ -1,59 +0,0 @@
|
||||
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
|
||||
<html>
|
||||
<head>
|
||||
<title>Index of /debian/pool/main/m/minicom</title>
|
||||
</head>
|
||||
<body>
|
||||
<h1>Index of /debian/pool/main/m/minicom</h1>
|
||||
<table>
|
||||
<tr><th valign="top"><img src="/icons/blank.gif" alt="[ICO]"></th><th><a href="?C=N;O=D">Name</a></th><th><a href="?C=M;O=A">Last modified</a></th><th><a href="?C=S;O=A">Size</a></th></tr>
|
||||
<tr><th colspan="4"><hr></th></tr>
|
||||
<tr><td valign="top"><img src="/icons/back.gif" alt="[PARENTDIR]"></td><td><a href="/debian/pool/main/m/">Parent Directory</a></td><td> </td><td align="right"> - </td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7-1+deb8u1.debian.tar.xz">minicom_2.7-1+deb8u1.debian.tar.xz</a></td><td align="right">2017-04-24 08:22 </td><td align="right"> 14K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7-1+deb8u1.dsc">minicom_2.7-1+deb8u1.dsc</a></td><td align="right">2017-04-24 08:22 </td><td align="right">1.9K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7-1+deb8u1_amd64.deb">minicom_2.7-1+deb8u1_amd64.deb</a></td><td align="right">2017-04-25 21:10 </td><td align="right">257K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7-1+deb8u1_armel.deb">minicom_2.7-1+deb8u1_armel.deb</a></td><td align="right">2017-04-26 00:58 </td><td align="right">246K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7-1+deb8u1_armhf.deb">minicom_2.7-1+deb8u1_armhf.deb</a></td><td align="right">2017-04-26 00:58 </td><td align="right">245K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7-1+deb8u1_i386.deb">minicom_2.7-1+deb8u1_i386.deb</a></td><td align="right">2017-04-25 21:41 </td><td align="right">258K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7-1.1.debian.tar.xz">minicom_2.7-1.1.debian.tar.xz</a></td><td align="right">2017-04-22 09:34 </td><td align="right"> 14K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7-1.1.dsc">minicom_2.7-1.1.dsc</a></td><td align="right">2017-04-22 09:34 </td><td align="right">1.9K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7-1.1_amd64.deb">minicom_2.7-1.1_amd64.deb</a></td><td align="right">2017-04-22 15:29 </td><td align="right">261K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7-1.1_arm64.deb">minicom_2.7-1.1_arm64.deb</a></td><td align="right">2017-04-22 15:29 </td><td align="right">250K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7-1.1_armel.deb">minicom_2.7-1.1_armel.deb</a></td><td align="right">2017-04-22 15:29 </td><td align="right">255K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7-1.1_armhf.deb">minicom_2.7-1.1_armhf.deb</a></td><td align="right">2017-04-22 15:29 </td><td align="right">254K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7-1.1_i386.deb">minicom_2.7-1.1_i386.deb</a></td><td align="right">2017-04-22 15:29 </td><td align="right">266K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7-1.1_mips.deb">minicom_2.7-1.1_mips.deb</a></td><td align="right">2017-04-22 15:29 </td><td align="right">258K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7-1.1_mips64el.deb">minicom_2.7-1.1_mips64el.deb</a></td><td align="right">2017-04-22 15:29 </td><td align="right">259K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7-1.1_mipsel.deb">minicom_2.7-1.1_mipsel.deb</a></td><td align="right">2017-04-22 15:29 </td><td align="right">259K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7-1.1_ppc64el.deb">minicom_2.7-1.1_ppc64el.deb</a></td><td align="right">2017-04-22 15:29 </td><td align="right">253K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7-1.1_s390x.deb">minicom_2.7-1.1_s390x.deb</a></td><td align="right">2017-04-22 15:29 </td><td align="right">261K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7.1-1+b1_amd64.deb">minicom_2.7.1-1+b1_amd64.deb</a></td><td align="right">2018-05-06 08:14 </td><td align="right">262K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7.1-1+b1_arm64.deb">minicom_2.7.1-1+b1_arm64.deb</a></td><td align="right">2018-05-06 07:58 </td><td align="right">250K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7.1-1+b1_armel.deb">minicom_2.7.1-1+b1_armel.deb</a></td><td align="right">2018-05-06 08:45 </td><td align="right">253K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7.1-1+b1_armhf.deb">minicom_2.7.1-1+b1_armhf.deb</a></td><td align="right">2018-05-06 10:42 </td><td align="right">253K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7.1-1+b1_i386.deb">minicom_2.7.1-1+b1_i386.deb</a></td><td align="right">2018-05-06 08:55 </td><td align="right">266K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7.1-1+b1_mips.deb">minicom_2.7.1-1+b1_mips.deb</a></td><td align="right">2018-05-06 08:14 </td><td align="right">258K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7.1-1+b1_mipsel.deb">minicom_2.7.1-1+b1_mipsel.deb</a></td><td align="right">2018-05-06 12:13 </td><td align="right">259K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7.1-1+b1_ppc64el.deb">minicom_2.7.1-1+b1_ppc64el.deb</a></td><td align="right">2018-05-06 09:10 </td><td align="right">260K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7.1-1+b1_s390x.deb">minicom_2.7.1-1+b1_s390x.deb</a></td><td align="right">2018-05-06 08:14 </td><td align="right">257K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7.1-1+b2_mips64el.deb">minicom_2.7.1-1+b2_mips64el.deb</a></td><td align="right">2018-05-06 09:41 </td><td align="right">260K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7.1-1.debian.tar.xz">minicom_2.7.1-1.debian.tar.xz</a></td><td align="right">2017-08-13 15:40 </td><td align="right"> 14K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7.1-1.dsc">minicom_2.7.1-1.dsc</a></td><td align="right">2017-08-13 15:40 </td><td align="right">1.8K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/compressed.gif" alt="[ ]"></td><td><a href="minicom_2.7.1.orig.tar.gz">minicom_2.7.1.orig.tar.gz</a></td><td align="right">2017-08-13 15:40 </td><td align="right">855K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/compressed.gif" alt="[ ]"></td><td><a href="minicom_2.7.orig.tar.gz">minicom_2.7.orig.tar.gz</a></td><td align="right">2014-01-01 09:36 </td><td align="right">843K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.8-2.debian.tar.xz">minicom_2.8-2.debian.tar.xz</a></td><td align="right">2021-06-15 03:47 </td><td align="right"> 14K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.8-2.dsc">minicom_2.8-2.dsc</a></td><td align="right">2021-06-15 03:47 </td><td align="right">1.8K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.8-2_amd64.deb">minicom_2.8-2_amd64.deb</a></td><td align="right">2021-06-15 03:58 </td><td align="right">280K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.8-2_arm64.deb">minicom_2.8-2_arm64.deb</a></td><td align="right">2021-06-15 04:13 </td><td align="right">275K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.8-2_armel.deb">minicom_2.8-2_armel.deb</a></td><td align="right">2021-06-15 04:13 </td><td align="right">271K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.8-2_armhf.deb">minicom_2.8-2_armhf.deb</a></td><td align="right">2021-06-15 04:13 </td><td align="right">272K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.8-2_i386.deb">minicom_2.8-2_i386.deb</a></td><td align="right">2021-06-15 04:13 </td><td align="right">285K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.8-2_mips64el.deb">minicom_2.8-2_mips64el.deb</a></td><td align="right">2021-06-15 04:13 </td><td align="right">277K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.8-2_mipsel.deb">minicom_2.8-2_mipsel.deb</a></td><td align="right">2021-06-15 04:13 </td><td align="right">278K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.8-2_ppc64el.deb">minicom_2.8-2_ppc64el.deb</a></td><td align="right">2021-06-15 04:13 </td><td align="right">286K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.8-2_s390x.deb">minicom_2.8-2_s390x.deb</a></td><td align="right">2021-06-15 03:58 </td><td align="right">275K</td></tr>
|
||||
<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.8.orig.tar.bz2">minicom_2.8.orig.tar.bz2</a></td><td align="right">2021-01-03 12:44 </td><td align="right">598K</td></tr>
|
||||
<tr><th colspan="4"><hr></th></tr>
|
||||
</table>
|
||||
<address>Apache Server at ftp.debian.org Port 80</address>
|
||||
</body></html>
|
||||
File diff suppressed because it is too large
@@ -98,8 +98,8 @@ exportD = "d"
|
||||
|
||||
|
||||
overridetest = """
|
||||
RRECOMMENDS:${PN} = "a"
|
||||
RRECOMMENDS:${PN}:libc = "b"
|
||||
RRECOMMENDS_${PN} = "a"
|
||||
RRECOMMENDS_${PN}_libc = "b"
|
||||
OVERRIDES = "libc:${PN}"
|
||||
PN = "gtk+"
|
||||
"""
|
||||
@@ -110,16 +110,16 @@ PN = "gtk+"
|
||||
self.assertEqual(d.getVar("RRECOMMENDS"), "b")
|
||||
bb.data.expandKeys(d)
|
||||
self.assertEqual(d.getVar("RRECOMMENDS"), "b")
|
||||
d.setVar("RRECOMMENDS:gtk+", "c")
|
||||
d.setVar("RRECOMMENDS_gtk+", "c")
|
||||
self.assertEqual(d.getVar("RRECOMMENDS"), "c")
|
||||
|
||||
overridetest2 = """
|
||||
EXTRA_OECONF = ""
|
||||
EXTRA_OECONF:class-target = "b"
|
||||
EXTRA_OECONF:append = " c"
|
||||
EXTRA_OECONF_class-target = "b"
|
||||
EXTRA_OECONF_append = " c"
|
||||
"""
|
||||
|
||||
def test_parse_overrides2(self):
|
||||
def test_parse_overrides(self):
|
||||
f = self.parsehelper(self.overridetest2)
|
||||
d = bb.parse.handle(f.name, self.d)['']
|
||||
d.appendVar("EXTRA_OECONF", " d")
|
||||
@@ -128,7 +128,7 @@ EXTRA_OECONF:append = " c"
|
||||
|
||||
overridetest3 = """
|
||||
DESCRIPTION = "A"
|
||||
DESCRIPTION:${PN}-dev = "${DESCRIPTION} B"
|
||||
DESCRIPTION_${PN}-dev = "${DESCRIPTION} B"
|
||||
PN = "bc"
|
||||
"""
|
||||
|
||||
@@ -136,15 +136,15 @@ PN = "bc"
|
||||
f = self.parsehelper(self.overridetest3)
|
||||
d = bb.parse.handle(f.name, self.d)['']
|
||||
bb.data.expandKeys(d)
|
||||
self.assertEqual(d.getVar("DESCRIPTION:bc-dev"), "A B")
|
||||
self.assertEqual(d.getVar("DESCRIPTION_bc-dev"), "A B")
|
||||
d.setVar("DESCRIPTION", "E")
|
||||
d.setVar("DESCRIPTION:bc-dev", "C D")
|
||||
d.setVar("DESCRIPTION_bc-dev", "C D")
|
||||
d.setVar("OVERRIDES", "bc-dev")
|
||||
self.assertEqual(d.getVar("DESCRIPTION"), "C D")
|
||||
|
||||
|
||||
classextend = """
|
||||
VAR_var:override1 = "B"
|
||||
VAR_var_override1 = "B"
|
||||
EXTRA = ":override1"
|
||||
OVERRIDES = "nothing${EXTRA}"
|
||||
|
||||
@@ -194,26 +194,3 @@ deltask ${EMPTYVAR}
|
||||
self.assertTrue('addtask ignored: " do_patch"' in stdout)
|
||||
#self.assertTrue('dependent task do_foo for do_patch does not exist' in stdout)
|
||||
|
||||
broken_multiline_comment = """
|
||||
# First line of comment \\
|
||||
# Second line of comment \\
|
||||
|
||||
"""
|
||||
def test_parse_broken_multiline_comment(self):
|
||||
f = self.parsehelper(self.broken_multiline_comment)
|
||||
with self.assertRaises(bb.BBHandledException):
|
||||
d = bb.parse.handle(f.name, self.d)['']
|
||||
|
||||
|
||||
comment_in_var = """
|
||||
VAR = " \\
|
||||
SOMEVAL \\
|
||||
# some comment \\
|
||||
SOMEOTHERVAL \\
|
||||
"
|
||||
"""
|
||||
def test_parse_comment_in_var(self):
|
||||
f = self.parsehelper(self.comment_in_var)
|
||||
with self.assertRaises(bb.BBHandledException):
|
||||
d = bb.parse.handle(f.name, self.d)['']
|
||||
|
||||
|
||||
@@ -12,6 +12,6 @@ STAMP = "${TMPDIR}/stamps/${PN}"
T = "${TMPDIR}/workdir/${PN}/temp"
BB_NUMBER_THREADS = "4"

BB_BASEHASH_IGNORE_VARS = "BB_CURRENT_MC BB_HASHSERVE TMPDIR TOPDIR SLOWTASKS SSTATEVALID FILE BB_CURRENTTASK"
BB_HASHBASE_WHITELIST = "BB_CURRENT_MC BB_HASHSERVE TMPDIR TOPDIR SLOWTASKS SSTATEVALID FILE"

include conf/multiconfig/${BB_CURRENT_MC}.conf

@@ -29,14 +29,13 @@ class RunQueueTests(unittest.TestCase):
|
||||
def run_bitbakecmd(self, cmd, builddir, sstatevalid="", slowtasks="", extraenv=None, cleanup=False):
|
||||
env = os.environ.copy()
|
||||
env["BBPATH"] = os.path.realpath(os.path.join(os.path.dirname(__file__), "runqueue-tests"))
|
||||
env["BB_ENV_PASSTHROUGH_ADDITIONS"] = "SSTATEVALID SLOWTASKS TOPDIR"
|
||||
env["BB_ENV_EXTRAWHITE"] = "SSTATEVALID SLOWTASKS"
|
||||
env["SSTATEVALID"] = sstatevalid
|
||||
env["SLOWTASKS"] = slowtasks
|
||||
env["TOPDIR"] = builddir
|
||||
if extraenv:
|
||||
for k in extraenv:
|
||||
env[k] = extraenv[k]
|
||||
env["BB_ENV_PASSTHROUGH_ADDITIONS"] = env["BB_ENV_PASSTHROUGH_ADDITIONS"] + " " + k
|
||||
env["BB_ENV_EXTRAWHITE"] = env["BB_ENV_EXTRAWHITE"] + " " + k
|
||||
try:
|
||||
output = subprocess.check_output(cmd, env=env, stderr=subprocess.STDOUT,universal_newlines=True, cwd=builddir)
|
||||
print(output)
|
||||
@@ -59,8 +58,6 @@ class RunQueueTests(unittest.TestCase):
|
||||
expected = ['a1:' + x for x in self.alltasks]
|
||||
self.assertEqual(set(tasks), set(expected))
|
||||
|
||||
self.shutdown(tempdir)
|
||||
|
||||
def test_single_setscenevalid(self):
|
||||
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
|
||||
cmd = ["bitbake", "a1"]
|
||||
@@ -71,8 +68,6 @@ class RunQueueTests(unittest.TestCase):
|
||||
'a1:populate_sysroot', 'a1:build']
|
||||
self.assertEqual(set(tasks), set(expected))
|
||||
|
||||
self.shutdown(tempdir)
|
||||
|
||||
def test_intermediate_setscenevalid(self):
|
||||
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
|
||||
cmd = ["bitbake", "a1"]
|
||||
@@ -82,8 +77,6 @@ class RunQueueTests(unittest.TestCase):
|
||||
'a1:populate_sysroot_setscene', 'a1:build']
|
||||
self.assertEqual(set(tasks), set(expected))
|
||||
|
||||
self.shutdown(tempdir)
|
||||
|
||||
def test_intermediate_notcovered(self):
|
||||
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
|
||||
cmd = ["bitbake", "a1"]
|
||||
@@ -93,8 +86,6 @@ class RunQueueTests(unittest.TestCase):
|
||||
'a1:package_qa_setscene', 'a1:build', 'a1:populate_sysroot_setscene']
|
||||
self.assertEqual(set(tasks), set(expected))
|
||||
|
||||
self.shutdown(tempdir)
|
||||
|
||||
def test_all_setscenevalid(self):
|
||||
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
|
||||
cmd = ["bitbake", "a1"]
|
||||
@@ -104,8 +95,6 @@ class RunQueueTests(unittest.TestCase):
|
||||
'a1:package_qa_setscene', 'a1:build', 'a1:populate_sysroot_setscene']
|
||||
self.assertEqual(set(tasks), set(expected))
|
||||
|
||||
self.shutdown(tempdir)
|
||||
|
||||
def test_no_settasks(self):
|
||||
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
|
||||
cmd = ["bitbake", "a1", "-c", "patch"]
|
||||
@@ -114,8 +103,6 @@ class RunQueueTests(unittest.TestCase):
|
||||
expected = ['a1:fetch', 'a1:unpack', 'a1:patch']
|
||||
self.assertEqual(set(tasks), set(expected))
|
||||
|
||||
self.shutdown(tempdir)
|
||||
|
||||
def test_mix_covered_notcovered(self):
|
||||
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
|
||||
cmd = ["bitbake", "a1:do_patch", "a1:do_populate_sysroot"]
|
||||
@@ -124,7 +111,6 @@ class RunQueueTests(unittest.TestCase):
|
||||
expected = ['a1:fetch', 'a1:unpack', 'a1:patch', 'a1:populate_sysroot_setscene']
|
||||
self.assertEqual(set(tasks), set(expected))
|
||||
|
||||
self.shutdown(tempdir)
|
||||
|
||||
# Test targets with intermediate setscene tasks alongside a target with no intermediate setscene tasks
|
||||
def test_mixed_direct_tasks_setscene_tasks(self):
|
||||
@@ -136,8 +122,6 @@ class RunQueueTests(unittest.TestCase):
|
||||
'a1:package_qa_setscene', 'a1:build', 'a1:populate_sysroot_setscene']
|
||||
self.assertEqual(set(tasks), set(expected))
|
||||
|
||||
self.shutdown(tempdir)
|
||||
|
||||
# This test slows down the execution of do_package_setscene until after other real tasks have
|
||||
# started running which tests for a bug where tasks were being lost from the buildable list of real
|
||||
# tasks if they weren't in tasks_covered or tasks_notcovered
|
||||
@@ -152,14 +136,12 @@ class RunQueueTests(unittest.TestCase):
|
||||
'a1:populate_sysroot', 'a1:build']
|
||||
self.assertEqual(set(tasks), set(expected))
|
||||
|
||||
self.shutdown(tempdir)
|
||||
|
||||
def test_setscene_ignore_tasks(self):
|
||||
def test_setscenewhitelist(self):
|
||||
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
|
||||
cmd = ["bitbake", "a1"]
|
||||
extraenv = {
|
||||
"BB_SETSCENE_ENFORCE" : "1",
|
||||
"BB_SETSCENE_ENFORCE_IGNORE_TASKS" : "a1:do_package_write_rpm a1:do_build"
|
||||
"BB_SETSCENE_ENFORCE_WHITELIST" : "a1:do_package_write_rpm a1:do_build"
|
||||
}
|
||||
sstatevalid = "a1:do_package a1:do_package_qa a1:do_packagedata a1:do_package_write_ipk a1:do_populate_lic a1:do_populate_sysroot"
|
||||
tasks = self.run_bitbakecmd(cmd, tempdir, sstatevalid, extraenv=extraenv)
|
||||
@@ -167,8 +149,6 @@ class RunQueueTests(unittest.TestCase):
|
||||
'a1:populate_sysroot_setscene', 'a1:package_setscene']
|
||||
self.assertEqual(set(tasks), set(expected))
|
||||
|
||||
self.shutdown(tempdir)
|
||||
|
||||
# Tests for problems with dependencies between setscene tasks
|
||||
def test_no_setscenevalid_harddeps(self):
|
||||
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
|
||||
@@ -182,8 +162,6 @@ class RunQueueTests(unittest.TestCase):
|
||||
'd1:populate_sysroot', 'd1:build']
|
||||
self.assertEqual(set(tasks), set(expected))
|
||||
|
||||
self.shutdown(tempdir)
|
||||
|
||||
def test_no_setscenevalid_withdeps(self):
|
||||
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
|
||||
cmd = ["bitbake", "b1"]
|
||||
@@ -194,8 +172,6 @@ class RunQueueTests(unittest.TestCase):
|
||||
expected.remove('a1:package_qa')
|
||||
self.assertEqual(set(tasks), set(expected))
|
||||
|
||||
self.shutdown(tempdir)
|
||||
|
||||
def test_single_a1_setscenevalid_withdeps(self):
|
||||
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
|
||||
cmd = ["bitbake", "b1"]
|
||||
@@ -206,8 +182,6 @@ class RunQueueTests(unittest.TestCase):
|
||||
'a1:populate_sysroot'] + ['b1:' + x for x in self.alltasks]
|
||||
self.assertEqual(set(tasks), set(expected))
|
||||
|
||||
self.shutdown(tempdir)
|
||||
|
||||
def test_single_b1_setscenevalid_withdeps(self):
|
||||
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
|
||||
cmd = ["bitbake", "b1"]
|
||||
@@ -219,8 +193,6 @@ class RunQueueTests(unittest.TestCase):
|
||||
expected.remove('b1:package')
|
||||
self.assertEqual(set(tasks), set(expected))
|
||||
|
||||
self.shutdown(tempdir)
|
||||
|
||||
def test_intermediate_setscenevalid_withdeps(self):
|
||||
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
|
||||
cmd = ["bitbake", "b1"]
|
||||
@@ -231,8 +203,6 @@ class RunQueueTests(unittest.TestCase):
|
||||
expected.remove('b1:package')
|
||||
self.assertEqual(set(tasks), set(expected))
|
||||
|
||||
self.shutdown(tempdir)
|
||||
|
||||
def test_all_setscenevalid_withdeps(self):
|
||||
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
|
||||
cmd = ["bitbake", "b1"]
|
||||
@@ -243,8 +213,6 @@ class RunQueueTests(unittest.TestCase):
|
||||
'b1:packagedata_setscene', 'b1:package_qa_setscene', 'b1:populate_sysroot_setscene']
|
||||
self.assertEqual(set(tasks), set(expected))
|
||||
|
||||
self.shutdown(tempdir)
|
||||
|
||||
def test_multiconfig_setscene_optimise(self):
|
||||
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
|
||||
extraenv = {
|
||||
@@ -264,8 +232,6 @@ class RunQueueTests(unittest.TestCase):
|
||||
expected.remove(x)
|
||||
self.assertEqual(set(tasks), set(expected))
|
||||
|
||||
self.shutdown(tempdir)
|
||||
|
||||
def test_multiconfig_bbmask(self):
|
||||
# This test validates that multiconfigs can independently mask off
|
||||
# recipes they do not want with BBMASK. It works by having recipes
|
||||
@@ -282,8 +248,6 @@ class RunQueueTests(unittest.TestCase):
|
||||
cmd = ["bitbake", "mc:mc-1:fails-mc2", "mc:mc_2:fails-mc1"]
|
||||
self.run_bitbakecmd(cmd, tempdir, "", extraenv=extraenv)
|
||||
|
||||
self.shutdown(tempdir)
|
||||
|
||||
def test_multiconfig_mcdepends(self):
|
||||
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
|
||||
extraenv = {
|
||||
@@ -314,8 +278,7 @@ class RunQueueTests(unittest.TestCase):
|
||||
["mc_2:a1:%s" % t for t in rerun_tasks]
|
||||
self.assertEqual(set(tasks), set(expected))
|
||||
|
||||
self.shutdown(tempdir)
|
||||
|
||||
@unittest.skipIf(sys.version_info < (3, 5, 0), 'Python 3.5 or later required')
|
||||
def test_hashserv_single(self):
|
||||
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
|
||||
extraenv = {
|
||||
@@ -341,6 +304,7 @@ class RunQueueTests(unittest.TestCase):
|
||||
|
||||
self.shutdown(tempdir)
|
||||
|
||||
@unittest.skipIf(sys.version_info < (3, 5, 0), 'Python 3.5 or later required')
|
||||
def test_hashserv_double(self):
|
||||
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
|
||||
extraenv = {
|
||||
@@ -365,6 +329,7 @@ class RunQueueTests(unittest.TestCase):
|
||||
|
||||
self.shutdown(tempdir)
|
||||
|
||||
@unittest.skipIf(sys.version_info < (3, 5, 0), 'Python 3.5 or later required')
|
||||
def test_hashserv_multiple_setscene(self):
|
||||
# Runs e1:do_package_setscene twice
|
||||
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
|
||||
@@ -396,6 +361,7 @@ class RunQueueTests(unittest.TestCase):
|
||||
|
||||
def shutdown(self, tempdir):
|
||||
# Wait for the hashserve socket to disappear else we'll see races with the tempdir cleanup
|
||||
while (os.path.exists(tempdir + "/hashserve.sock") or os.path.exists(tempdir + "cache/hashserv.db-wal") or os.path.exists(tempdir + "/bitbake.lock")):
|
||||
while os.path.exists(tempdir + "/hashserve.sock"):
|
||||
time.sleep(0.5)
|
||||
|
||||
|
||||
|
||||
@@ -418,7 +418,7 @@ MULTILINE = " stuff \\
|
||||
['MULTILINE'],
|
||||
handle_var)
|
||||
|
||||
testvalue = re.sub(r'\s+', ' ', value_in_callback.strip())
|
||||
testvalue = re.sub('\s+', ' ', value_in_callback.strip())
|
||||
self.assertEqual(expected_value, testvalue)
|
||||
|
||||
class EditBbLayersConf(unittest.TestCase):
|
||||
@@ -666,21 +666,3 @@ class GetReferencedVars(unittest.TestCase):
|
||||
|
||||
layers = [{"SRC_URI"}, {"QT_GIT", "QT_MODULE", "QT_MODULE_BRANCH_PARAM", "QT_GIT_PROTOCOL"}, {"QT_GIT_PROJECT", "QT_MODULE_BRANCH", "BPN"}, {"PN", "SPECIAL_PKGSUFFIX"}]
|
||||
self.check_referenced("${SRC_URI}", layers)
|
||||
|
||||
|
||||
class EnvironmentTests(unittest.TestCase):
|
||||
def test_environment(self):
|
||||
os.environ["A"] = "this is A"
|
||||
self.assertIn("A", os.environ)
|
||||
self.assertEqual(os.environ["A"], "this is A")
|
||||
self.assertNotIn("B", os.environ)
|
||||
|
||||
with bb.utils.environment(B="this is B"):
|
||||
self.assertIn("A", os.environ)
|
||||
self.assertEqual(os.environ["A"], "this is A")
|
||||
self.assertIn("B", os.environ)
|
||||
self.assertEqual(os.environ["B"], "this is B")
|
||||
|
||||
self.assertIn("A", os.environ)
|
||||
self.assertEqual(os.environ["A"], "this is A")
|
||||
self.assertNotIn("B", os.environ)
|
||||
|
||||
@@ -52,10 +52,6 @@ class TinfoilDataStoreConnectorVarHistory:
|
||||
def remoteCommand(self, cmd, *args, **kwargs):
|
||||
return self.tinfoil.run_command('dataStoreConnectorVarHistCmd', self.dsindex, cmd, args, kwargs)
|
||||
|
||||
def emit(self, var, oval, val, o, d):
|
||||
ret = self.tinfoil.run_command('dataStoreConnectorVarHistCmdEmit', self.dsindex, var, oval, val, d.dsindex)
|
||||
o.write(ret)
|
||||
|
||||
def __getattr__(self, name):
|
||||
if not hasattr(bb.data_smart.VariableHistory, name):
|
||||
raise AttributeError("VariableHistory has no such method %s" % name)
|
||||
@@ -324,11 +320,11 @@ class Tinfoil:
|
||||
self.recipes_parsed = False
|
||||
self.quiet = 0
|
||||
self.oldhandlers = self.logger.handlers[:]
|
||||
self.localhandlers = []
|
||||
if setup_logging:
|
||||
# This is the *client-side* logger, nothing to do with
|
||||
# logging messages from the server
|
||||
bb.msg.logger_create('BitBake', output)
|
||||
self.localhandlers = []
|
||||
for handler in self.logger.handlers:
|
||||
if handler not in self.oldhandlers:
|
||||
self.localhandlers.append(handler)
|
||||
@@ -448,7 +444,7 @@ class Tinfoil:
|
||||
self.run_actions(config_params)
|
||||
self.recipes_parsed = True
|
||||
|
||||
def run_command(self, command, *params, handle_events=True):
|
||||
def run_command(self, command, *params):
|
||||
"""
|
||||
Run a command on the server (as implemented in bb.command).
|
||||
Note that there are two types of command - synchronous and
|
||||
@@ -468,7 +464,7 @@ class Tinfoil:
|
||||
try:
|
||||
result = self.server_connection.connection.runCommand(commandline)
|
||||
finally:
|
||||
while handle_events:
|
||||
while True:
|
||||
event = self.wait_event()
|
||||
if not event:
|
||||
break
|
||||
@@ -493,7 +489,7 @@ class Tinfoil:
|
||||
Wait for an event from the server for the specified time.
|
||||
A timeout of 0 means don't wait if there are no events in the queue.
|
||||
Returns the next event in the queue or None if the timeout was
|
||||
reached. Note that in order to receive any events you will
|
||||
reached. Note that in order to recieve any events you will
|
||||
first need to set the internal event mask using set_event_mask()
|
||||
(otherwise whatever event mask the UI set up will be in effect).
|
||||
"""
|
||||
@@ -761,7 +757,7 @@ class Tinfoil:
|
||||
if parseprogress:
|
||||
parseprogress.update(event.progress)
|
||||
else:
|
||||
bb.warn("Got ProcessProgress event for something that never started?")
|
||||
bb.warn("Got ProcessProgress event for someting that never started?")
|
||||
continue
|
||||
if isinstance(event, bb.event.ProcessFinished):
|
||||
if self.quiet > 1:
|
||||
|
||||
@@ -45,7 +45,7 @@ from pprint import pformat
import logging
from datetime import datetime, timedelta

from django.db import transaction
from django.db import transaction, connection


# pylint: disable=invalid-name
@@ -227,12 +227,6 @@ class ORMWrapper(object):
|
||||
build.completed_on = timezone.now()
|
||||
build.outcome = outcome
|
||||
build.save()
|
||||
|
||||
# We force a sync point here to force the outcome status commit,
|
||||
# which resolves a race condition with the build completion takedown
|
||||
transaction.set_autocommit(True)
|
||||
transaction.set_autocommit(False)
|
||||
|
||||
signal_runbuilds()
|
||||
|
||||
def update_target_set_license_manifest(self, target, license_manifest_path):
|
||||
@@ -489,14 +483,14 @@ class ORMWrapper(object):
|
||||
|
||||
# we already created the root directory, so ignore any
|
||||
# entry for it
|
||||
if not path:
|
||||
if len(path) == 0:
|
||||
continue
|
||||
|
||||
parent_path = "/".join(path.split("/")[:len(path.split("/")) - 1])
|
||||
if not parent_path:
|
||||
if len(parent_path) == 0:
|
||||
parent_path = "/"
|
||||
parent_obj = self._cached_get(Target_File, target = target_obj, path = parent_path, inodetype = Target_File.ITYPE_DIRECTORY)
|
||||
Target_File.objects.create(
|
||||
tf_obj = Target_File.objects.create(
|
||||
target = target_obj,
|
||||
path = path,
|
||||
size = size,
|
||||
@@ -561,7 +555,7 @@ class ORMWrapper(object):
|
||||
|
||||
parent_obj = Target_File.objects.get(target = target_obj, path = parent_path, inodetype = Target_File.ITYPE_DIRECTORY)
|
||||
|
||||
Target_File.objects.create(
|
||||
tf_obj = Target_File.objects.create(
|
||||
target = target_obj,
|
||||
path = path,
|
||||
size = size,
|
||||
@@ -577,7 +571,7 @@ class ORMWrapper(object):
|
||||
assert isinstance(build_obj, Build)
|
||||
assert isinstance(target_obj, Target)
|
||||
|
||||
errormsg = []
|
||||
errormsg = ""
|
||||
for p in packagedict:
|
||||
# Search name swtiches round the installed name vs package name
|
||||
# by default installed name == package name
|
||||
@@ -639,10 +633,10 @@ class ORMWrapper(object):
|
||||
packagefile_objects.append(Package_File( package = packagedict[p]['object'],
|
||||
path = targetpath,
|
||||
size = targetfilesize))
|
||||
if packagefile_objects:
|
||||
if len(packagefile_objects):
|
||||
Package_File.objects.bulk_create(packagefile_objects)
|
||||
except KeyError as e:
|
||||
errormsg.append(" stpi: Key error, package %s key %s \n" % (p, e))
|
||||
errormsg += " stpi: Key error, package %s key %s \n" % ( p, e )
|
||||
|
||||
# save disk installed size
|
||||
packagedict[p]['object'].installed_size = packagedict[p]['size']
|
||||
@@ -679,13 +673,13 @@ class ORMWrapper(object):
|
||||
logger.warning("Could not add dependency to the package %s "
|
||||
"because %s is an unknown package", p, px)
|
||||
|
||||
if packagedeps_objs:
|
||||
if len(packagedeps_objs) > 0:
|
||||
Package_Dependency.objects.bulk_create(packagedeps_objs)
|
||||
else:
|
||||
logger.info("No package dependencies created")
|
||||
|
||||
if errormsg:
|
||||
logger.warning("buildinfohelper: target_package_info could not identify recipes: \n%s", "".join(errormsg))
|
||||
if len(errormsg) > 0:
|
||||
logger.warning("buildinfohelper: target_package_info could not identify recipes: \n%s", errormsg)
|
||||
|
||||
def save_target_image_file_information(self, target_obj, file_name, file_size):
|
||||
Target_Image_File.objects.create(target=target_obj,
|
||||
@@ -773,7 +767,7 @@ class ORMWrapper(object):
|
||||
packagefile_objects.append(Package_File( package = bp_object,
|
||||
path = path,
|
||||
size = package_info['FILES_INFO'][path] ))
|
||||
if packagefile_objects:
|
||||
if len(packagefile_objects):
|
||||
Package_File.objects.bulk_create(packagefile_objects)
|
||||
|
||||
def _po_byname(p):
|
||||
@@ -815,7 +809,7 @@ class ORMWrapper(object):
|
||||
packagedeps_objs.append(Package_Dependency( package = bp_object,
|
||||
depends_on = _po_byname(p), dep_type = Package_Dependency.TYPE_RCONFLICTS))
|
||||
|
||||
if packagedeps_objs:
|
||||
if len(packagedeps_objs) > 0:
|
||||
Package_Dependency.objects.bulk_create(packagedeps_objs)
|
||||
|
||||
return bp_object
|
||||
@@ -832,7 +826,7 @@ class ORMWrapper(object):
|
||||
desc = vardump[root_var]['doc']
|
||||
if desc is None:
|
||||
desc = ''
|
||||
if desc:
|
||||
if len(desc):
|
||||
HelpText.objects.get_or_create(build=build_obj,
|
||||
area=HelpText.VARIABLE,
|
||||
key=k, text=desc)
|
||||
@@ -852,7 +846,7 @@ class ORMWrapper(object):
|
||||
file_name = vh['file'],
|
||||
line_number = vh['line'],
|
||||
operation = vh['op']))
|
||||
if varhist_objects:
|
||||
if len(varhist_objects):
|
||||
VariableHistory.objects.bulk_create(varhist_objects)
|
||||
|
||||
|
||||
@@ -899,6 +893,9 @@ class BuildInfoHelper(object):
|
||||
self.task_order = 0
|
||||
self.autocommit_step = 1
|
||||
self.server = server
|
||||
# we use manual transactions if the database doesn't autocommit on us
|
||||
if not connection.features.autocommits_when_autocommit_is_off:
|
||||
transaction.set_autocommit(False)
|
||||
self.orm_wrapper = ORMWrapper()
|
||||
self.has_build_history = has_build_history
|
||||
self.tmp_dir = self.server.runCommand(["getVariable", "TMPDIR"])[0]
|
||||
@@ -1062,6 +1059,27 @@ class BuildInfoHelper(object):
|
||||
|
||||
return recipe_info
|
||||
|
||||
def _get_path_information(self, task_object):
|
||||
self._ensure_build()
|
||||
|
||||
assert isinstance(task_object, Task)
|
||||
build_stats_format = "{tmpdir}/buildstats/{buildname}/{package}/"
|
||||
build_stats_path = []
|
||||
|
||||
for t in self.internal_state['targets']:
|
||||
buildname = self.internal_state['build'].build_name
|
||||
pe, pv = task_object.recipe.version.split(":",1)
|
||||
if len(pe) > 0:
|
||||
package = task_object.recipe.name + "-" + pe + "_" + pv
|
||||
else:
|
||||
package = task_object.recipe.name + "-" + pv
|
||||
|
||||
build_stats_path.append(build_stats_format.format(tmpdir=self.tmp_dir,
|
||||
buildname=buildname,
|
||||
package=package))
|
||||
|
||||
return build_stats_path
|
||||
|
||||
|
||||
################################
|
||||
## external available methods to store information
|
||||
@@ -1295,11 +1313,12 @@ class BuildInfoHelper(object):
|
||||
task_information['outcome'] = Task.OUTCOME_FAILED
|
||||
del self.internal_state['taskdata'][identifier]
|
||||
|
||||
# we force a sync point here, to get the progress bar to show
|
||||
if self.autocommit_step % 3 == 0:
|
||||
transaction.set_autocommit(True)
|
||||
transaction.set_autocommit(False)
|
||||
self.autocommit_step += 1
|
||||
if not connection.features.autocommits_when_autocommit_is_off:
|
||||
# we force a sync point here, to get the progress bar to show
|
||||
if self.autocommit_step % 3 == 0:
|
||||
transaction.set_autocommit(True)
|
||||
transaction.set_autocommit(False)
|
||||
self.autocommit_step += 1
|
||||
|
||||
self.orm_wrapper.get_update_task_object(task_information, True) # must exist
|
||||
|
||||
@@ -1385,7 +1404,7 @@ class BuildInfoHelper(object):
|
||||
assert 'pn' in event._depgraph
|
||||
assert 'tdepends' in event._depgraph
|
||||
|
||||
errormsg = []
|
||||
errormsg = ""
|
||||
|
||||
# save layer version priorities
|
||||
if 'layer-priorities' in event._depgraph.keys():
|
||||
@@ -1477,7 +1496,7 @@ class BuildInfoHelper(object):
|
||||
elif dep in self.internal_state['recipes']:
|
||||
dependency = self.internal_state['recipes'][dep]
|
||||
else:
|
||||
errormsg.append(" stpd: KeyError saving recipe dependency for %s, %s \n" % (recipe, dep))
|
||||
errormsg += " stpd: KeyError saving recipe dependency for %s, %s \n" % (recipe, dep)
|
||||
continue
|
||||
recipe_dep = Recipe_Dependency(recipe=target,
|
||||
depends_on=dependency,
|
||||
@@ -1518,8 +1537,8 @@ class BuildInfoHelper(object):
|
||||
taskdeps_objects.append(Task_Dependency( task = target, depends_on = dep ))
|
||||
Task_Dependency.objects.bulk_create(taskdeps_objects)
|
||||
|
||||
if errormsg:
|
||||
logger.warning("buildinfohelper: dependency info not identify recipes: \n%s", "".join(errormsg))
|
||||
if len(errormsg) > 0:
|
||||
logger.warning("buildinfohelper: dependency info not identify recipes: \n%s", errormsg)
|
||||
|
||||
|
||||
def store_build_package_information(self, event):
|
||||
@@ -1599,7 +1618,7 @@ class BuildInfoHelper(object):
|
||||
|
||||
if 'backlog' in self.internal_state:
|
||||
# if we have a backlog of events, do our best to save them here
|
||||
if self.internal_state['backlog']:
|
||||
if len(self.internal_state['backlog']):
|
||||
tempevent = self.internal_state['backlog'].pop()
|
||||
logger.debug("buildinfohelper: Saving stored event %s "
|
||||
% tempevent)
|
||||
@@ -1971,6 +1990,8 @@ class BuildInfoHelper(object):
|
||||
# Do not skip command line build events
|
||||
self.store_log_event(tempevent,False)
|
||||
|
||||
if not connection.features.autocommits_when_autocommit_is_off:
|
||||
transaction.set_autocommit(True)
|
||||
|
||||
# unset the brbe; this is to prevent subsequent command-line builds
|
||||
# being incorrectly attached to the previous Toaster-triggered build;
|
||||
|
||||
@@ -21,11 +21,10 @@ import fcntl
import struct
import copy
import atexit
from itertools import groupby

from bb.ui import uihelper

featureSet = [bb.cooker.CookerFeatures.SEND_SANITYEVENTS, bb.cooker.CookerFeatures.BASEDATASTORE_TRACKING]
featureSet = [bb.cooker.CookerFeatures.SEND_SANITYEVENTS]

logger = logging.getLogger("BitBake")
interactive = sys.stdout.isatty()
@@ -228,9 +227,7 @@ class TerminalFilter(object):
|
||||
|
||||
def keepAlive(self, t):
|
||||
if not self.cuu:
|
||||
print("Bitbake still alive (no events for %ds). Active tasks:" % t)
|
||||
for t in self.helper.running_tasks:
|
||||
print(t)
|
||||
print("Bitbake still alive (%ds)" % t)
|
||||
sys.stdout.flush()
|
||||
|
||||
def updateFooter(self):
|
||||
@@ -252,68 +249,58 @@ class TerminalFilter(object):
|
||||
return
|
||||
tasks = []
|
||||
         for t in runningpids:
-            start_time = activetasks[t].get("starttime", None)
-            if start_time:
-                msg = "%s - %s (pid %s)" % (activetasks[t]["title"], self.elapsed(currenttime - start_time), activetasks[t]["pid"])
-            else:
-                msg = "%s (pid %s)" % (activetasks[t]["title"], activetasks[t]["pid"])
             progress = activetasks[t].get("progress", None)
             if progress is not None:
                 pbar = activetasks[t].get("progressbar", None)
                 rate = activetasks[t].get("rate", None)
                 start_time = activetasks[t].get("starttime", None)
                 if not pbar or pbar.bouncing != (progress < 0):
                     if progress < 0:
-                        pbar = BBProgress("0: %s" % msg, 100, widgets=[' ', progressbar.BouncingSlider(), ''], extrapos=3, resize_handler=self.sigwinch_handle)
+                        pbar = BBProgress("0: %s (pid %s)" % (activetasks[t]["title"], activetasks[t]["pid"]), 100, widgets=[' ', progressbar.BouncingSlider(), ''], extrapos=3, resize_handler=self.sigwinch_handle)
                         pbar.bouncing = True
                     else:
-                        pbar = BBProgress("0: %s" % msg, 100, widgets=[' ', progressbar.Percentage(), ' ', progressbar.Bar(), ''], extrapos=5, resize_handler=self.sigwinch_handle)
+                        pbar = BBProgress("0: %s (pid %s)" % (activetasks[t]["title"], activetasks[t]["pid"]), 100, widgets=[' ', progressbar.Percentage(), ' ', progressbar.Bar(), ''], extrapos=5, resize_handler=self.sigwinch_handle)
                         pbar.bouncing = False
                     activetasks[t]["progressbar"] = pbar
-                tasks.append((pbar, msg, progress, rate, start_time))
+                tasks.append((pbar, progress, rate, start_time))
             else:
-                tasks.append(msg)
+                start_time = activetasks[t].get("starttime", None)
+                if start_time:
+                    tasks.append("%s - %s (pid %s)" % (activetasks[t]["title"], self.elapsed(currenttime - start_time), activetasks[t]["pid"]))
+                else:
+                    tasks.append("%s (pid %s)" % (activetasks[t]["title"], activetasks[t]["pid"]))

         if self.main.shutdown:
-            content = pluralise("Waiting for %s running task to finish",
-                                "Waiting for %s running tasks to finish", len(activetasks))
-            if not self.quiet:
-                content += ':'
+            content = "Waiting for %s running tasks to finish:" % len(activetasks)
             print(content)
         else:
-            scene_tasks = "%s of %s" % (self.helper.setscene_current, self.helper.setscene_total)
-            cur_tasks = "%s of %s" % (self.helper.tasknumber_current, self.helper.tasknumber_total)
-
-            content = ''
-            if not self.quiet:
-                msg = "Setscene tasks: %s" % scene_tasks
-                content += msg + "\n"
-                print(msg)
-
             if self.quiet:
-                msg = "Running tasks (%s, %s)" % (scene_tasks, cur_tasks)
+                content = "Running tasks (%s of %s)" % (self.helper.tasknumber_current, self.helper.tasknumber_total)
             elif not len(activetasks):
-                msg = "No currently running tasks (%s)" % cur_tasks
+                content = "No currently running tasks (%s of %s)" % (self.helper.tasknumber_current, self.helper.tasknumber_total)
             else:
-                msg = "Currently %2s running tasks (%s)" % (len(activetasks), cur_tasks)
+                content = "Currently %2s running tasks (%s of %s)" % (len(activetasks), self.helper.tasknumber_current, self.helper.tasknumber_total)
             maxtask = self.helper.tasknumber_total
             if not self.main_progress or self.main_progress.maxval != maxtask:
                 widgets = [' ', progressbar.Percentage(), ' ', progressbar.Bar()]
                 self.main_progress = BBProgress("Running tasks", maxtask, widgets=widgets, resize_handler=self.sigwinch_handle)
                 self.main_progress.start(False)
-            self.main_progress.setmessage(msg)
-            progress = max(0, self.helper.tasknumber_current - 1)
-            content += self.main_progress.update(progress)
+            self.main_progress.setmessage(content)
+            progress = self.helper.tasknumber_current - 1
+            if progress < 0:
+                progress = 0
+            content = self.main_progress.update(progress)
             print('')
-        lines = self.getlines(content)
-        if not self.quiet:
-            for tasknum, task in enumerate(tasks[:(self.rows - 1 - lines)]):
+        lines = 1 + int(len(content) / (self.columns + 1))
+        if self.quiet == 0:
+            for tasknum, task in enumerate(tasks[:(self.rows - 2)]):
                 if isinstance(task, tuple):
-                    pbar, msg, progress, rate, start_time = task
+                    pbar, progress, rate, start_time = task
                     if not pbar.start_time:
                         pbar.start(False)
                         if start_time:
                             pbar.start_time = start_time
-                    pbar.setmessage('%s: %s' % (tasknum, msg))
+                    pbar.setmessage('%s:%s' % (tasknum, pbar.msg.split(':', 1)[1]))
                     pbar.setextra(rate)
                     if progress > -1:
                         content = pbar.update(progress)
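The hunk above moves where the per-task footer line is composed: on the yocto-4.0 side the "title - elapsed (pid N)" message is built once up front and reused as the progress bar title, while on the yocto-3.3 side it is formatted inline in the non-progress branch. A minimal sketch of that message composition, assuming a hypothetical format_elapsed() helper (BitBake's own TerminalFilter.elapsed() formatting may differ):

# Illustrative sketch only, not BitBake code.
import time

def format_elapsed(seconds):
    # hypothetical stand-in for TerminalFilter.elapsed()
    minutes, secs = divmod(int(seconds), 60)
    return "%dm%02ds" % (minutes, secs) if minutes else "%ds" % secs

def footer_line(title, pid, start_time=None, now=None):
    # compose "title - elapsed (pid N)" when a start time is known
    now = time.time() if now is None else now
    if start_time:
        return "%s - %s (pid %s)" % (title, format_elapsed(now - start_time), pid)
    return "%s (pid %s)" % (title, pid)

print(footer_line("foo-1.0-r0 do_compile", 4242, start_time=time.time() - 92))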
@@ -323,17 +310,11 @@ class TerminalFilter(object):
                 else:
                     content = "%s: %s" % (tasknum, task)
                     print(content)
-                lines = lines + self.getlines(content)
+                lines = lines + 1 + int(len(content) / (self.columns + 1))
         self.footer_present = lines
         self.lastpids = runningpids[:]
         self.lastcount = self.helper.tasknumber_current

-    def getlines(self, content):
-        lines = 0
-        for line in content.split("\n"):
-            lines = lines + 1 + int(len(line) / (self.columns + 1))
-        return lines
-
     def finish(self):
         if self.stdinbackup:
             fd = sys.stdin.fileno()
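The getlines() helper removed in this hunk folds terminal-width wrapping into the footer row count, so the footer can later be cleared accurately. A self-contained sketch of the same arithmetic, using a plain columns argument instead of self.columns:

# Minimal sketch: how many terminal rows a block of text occupies when each
# line wraps at `columns` characters, mirroring the
# "1 + int(len(line) / (columns + 1))" expression shown above.
def getlines(content, columns):
    lines = 0
    for line in content.split("\n"):
        lines = lines + 1 + int(len(line) / (columns + 1))
    return lines

assert getlines("short", 80) == 1
assert getlines("x" * 81, 80) == 2   # wraps onto a second row
assert getlines("a\nb", 80) == 2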
@@ -558,13 +539,6 @@ def main(server, eventHandler, params, tf = TerminalFilter):
     except OSError:
         pass

-    # Add the logging domains specified by the user on the command line
-    for (domainarg, iterator) in groupby(params.debug_domains):
-        dlevel = len(tuple(iterator))
-        l = logconfig["loggers"].setdefault("BitBake.%s" % domainarg, {})
-        l["level"] = logging.DEBUG - dlevel + 1
-        l.setdefault("handlers", []).extend(["BitBake.verbconsole"])
-
     conf = bb.msg.setLoggingConfig(logconfig, logconfigfile)

     if sys.stdin.isatty() and sys.stdout.isatty():
@@ -623,8 +597,7 @@ def main(server, eventHandler, params, tf = TerminalFilter):
     warnings = 0
     taskfailures = []

-    printintervaldelta = 10 * 60 # 10 minutes
-    printinterval = printintervaldelta
+    printinterval = 5000
     lastprint = time.time()

     termfilter = tf(main, helper, console_handlers, params.options.quiet)
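Both sides of this hunk and the next implement a keep-alive: if nothing has been printed for printinterval seconds, a heartbeat is emitted and the interval is increased so quiet builds are not flooded; the older side steps by a fixed 5000, the newer side by a 10-minute printintervaldelta. A rough sketch of the pattern, with hypothetical names (maybe_keepalive, the print call) standing in for the real termfilter.keepAlive():

# Sketch under assumed names; not the actual knotty.py code.
import time

def maybe_keepalive(lastprint, printinterval, printintervaldelta):
    now = time.time()
    if (lastprint + printinterval) <= now:
        print("Still alive (%ds since last output)" % int(now - lastprint))
        printinterval += printintervaldelta  # back off before the next heartbeat
    return printinterval

# e.g. a 10-minute interval that grows by another 10 minutes each time it fires:
interval = maybe_keepalive(time.time() - 700, 600, 600)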
@@ -634,7 +607,7 @@ def main(server, eventHandler, params, tf = TerminalFilter):
         try:
             if (lastprint + printinterval) <= time.time():
                 termfilter.keepAlive(printinterval)
-                printinterval += printintervaldelta
+                printinterval += 5000
             event = eventHandler.waitEvent(0)
             if event is None:
                 if main.shutdown > 1:
@@ -665,8 +638,8 @@ def main(server, eventHandler, params, tf = TerminalFilter):

             if isinstance(event, logging.LogRecord):
                 lastprint = time.time()
-                printinterval = printintervaldelta
-                if event.levelno >= bb.msg.BBLogFormatter.ERRORONCE:
+                printinterval = 5000
+                if event.levelno >= bb.msg.BBLogFormatter.ERROR:
                     errors = errors + 1
                     return_value = 1
                 elif event.levelno == bb.msg.BBLogFormatter.WARNING:
@@ -680,10 +653,10 @@ def main(server, eventHandler, params, tf = TerminalFilter):
                     continue

                 # Prefix task messages with recipe/task
-                if event.taskpid in helper.pidmap and event.levelno not in [bb.msg.BBLogFormatter.PLAIN, bb.msg.BBLogFormatter.WARNONCE, bb.msg.BBLogFormatter.ERRORONCE]:
+                if event.taskpid in helper.pidmap and event.levelno != bb.msg.BBLogFormatter.PLAIN:
                     taskinfo = helper.running_tasks[helper.pidmap[event.taskpid]]
                     event.msg = taskinfo['title'] + ': ' + event.msg
-                if hasattr(event, 'fn') and event.levelno not in [bb.msg.BBLogFormatter.WARNONCE, bb.msg.BBLogFormatter.ERRORONCE]:
+                if hasattr(event, 'fn'):
                     event.msg = event.fn + ': ' + event.msg
                 logging.getLogger(event.name).handle(event)
                 continue
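The two sides of this hunk differ only in which log levels are exempt from task prefixing: the newer side also skips WARNONCE/ERRORONCE records. A rough sketch of the prefixing itself, with a placeholder level constant and assumed shapes for pidmap and running_tasks (both are BitBake internals not shown here):

# Sketch with assumed values; PLAIN below is a placeholder, not BitBake's level.
import logging

PLAIN = logging.INFO + 1

def prefix_task_message(record, pidmap, running_tasks):
    # prepend the running task's title to records emitted by that task's pid,
    # except for PLAIN output that should pass through untouched
    taskpid = getattr(record, "taskpid", None)
    if taskpid in pidmap and record.levelno != PLAIN:
        taskinfo = running_tasks[pidmap[taskpid]]
        record.msg = taskinfo["title"] + ": " + record.msg
    return record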
@@ -772,7 +745,7 @@ def main(server, eventHandler, params, tf = TerminalFilter):
                 continue

             if isinstance(event, bb.runqueue.sceneQueueTaskStarted):
-                logger.info("Running setscene task %d of %d (%s)" % (event.stats.setscene_covered + event.stats.setscene_active + event.stats.setscene_notcovered + 1, event.stats.setscene_total, event.taskstring))
+                logger.info("Running setscene task %d of %d (%s)" % (event.stats.completed + event.stats.active + event.stats.failed + 1, event.stats.total, event.taskstring))
                 continue

             if isinstance(event, bb.runqueue.runQueueTaskStarted):
@@ -877,6 +850,7 @@ def main(server, eventHandler, params, tf = TerminalFilter):
                     state_force_shutdown()

             main.shutdown = main.shutdown + 1
+            pass
         except Exception as e:
             import traceback
             sys.stderr.write(traceback.format_exc())
@@ -893,11 +867,11 @@ def main(server, eventHandler, params, tf = TerminalFilter):
             for failure in taskfailures:
                 summary += "\n %s" % failure
         if warnings:
-            summary += pluralise("\nSummary: There was %s WARNING message.",
-                                 "\nSummary: There were %s WARNING messages.", warnings)
+            summary += pluralise("\nSummary: There was %s WARNING message shown.",
+                                 "\nSummary: There were %s WARNING messages shown.", warnings)
         if return_value and errors:
-            summary += pluralise("\nSummary: There was %s ERROR message, returning a non-zero exit code.",
-                                 "\nSummary: There were %s ERROR messages, returning a non-zero exit code.", errors)
+            summary += pluralise("\nSummary: There was %s ERROR message shown, returning a non-zero exit code.",
+                                 "\nSummary: There were %s ERROR messages shown, returning a non-zero exit code.", errors)
         if summary and params.options.quiet == 0:
             print(summary)
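pluralise() itself is not shown in this diff; from the call sites above it presumably just selects the singular or plural format string based on the count. A guess at a minimal implementation matching those calls (the real helper lives elsewhere in knotty.py and may differ):

# Minimal sketch consistent with the pluralise() calls above.
def pluralise(singular, plural, count):
    return (singular if count == 1 else plural) % count

print(pluralise("\nSummary: There was %s WARNING message.",
                "\nSummary: There were %s WARNING messages.", 3))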
Some files were not shown because too many files have changed in this diff.