Mirror of https://git.yoctoproject.org/poky, synced 2026-02-16 05:33:03 +01:00

Compare commits: yocto-5.1...yocto-4.2 (43 commits)
| SHA1 |
|---|
| 21790e71d5 |
| b8007d3c22 |
| bca7ec652f |
| f73e712b6b |
| 60012ae54a |
| 45ccdcfcbc |
| 8b3b075dd5 |
| c3248e0da1 |
| 65cc65fa8d |
| 410290c2f5 |
| ea2feb23bc |
| eb292619e7 |
| b93e695de6 |
| 338bc72e4d |
| 3c0b78802d |
| 23d946b9ba |
| 1b9bcc7b19 |
| 1d4d5371ec |
| 4f833991c2 |
| e55e243f84 |
| 20c58a6cb2 |
| c3c439d62a |
| bdf37e43b0 |
| 958d52f37b |
| 42a6d47754 |
| 64111246ce |
| b1b4ad9a80 |
| e9af582acd |
| 0a75b4afc8 |
| d109d6452f |
| 18d1bcefec |
| 1000c4f2c0 |
| 801734bc6c |
| a91fb4ff74 |
| 54f3339f38 |
| 50c5035dc8 |
| c078df73b9 |
| c570cf1733 |
| 39428da6b6 |
| acf268757f |
| 4bb775aecb |
| 7ebcf1477a |
| fe76a450eb |
.gitignore (vendored, 1 change)

@@ -36,4 +36,3 @@ _toaster_clones/
downloads/
sstate-cache/
toaster.sqlite
.vscode/
@@ -41,7 +41,6 @@ Component/Subsystem Maintainers
* devtool: Saul Wold
* eSDK: Saul Wold
* overlayfs: Vyacheslav Yurkov
* Patchtest: Trevor Gamblin

Maintainers needed
------------------
@@ -53,6 +52,7 @@ Maintainers needed
* error reporting system/web UI
* wic
* Patchwork
* Patchtest
* Matchbox
* Sato
* Autobuilder
Makefile (35 changes)

@@ -0,0 +1,35 @@
# Minimal makefile for Sphinx documentation
#

# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SOURCEDIR = .
BUILDDIR = _build
DESTDIR = final

ifeq ($(shell if which $(SPHINXBUILD) >/dev/null 2>&1; then echo 1; else echo 0; fi),0)
$(error "The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed")
endif

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile.sphinx clean publish

publish: Makefile.sphinx html singlehtml
	rm -rf $(BUILDDIR)/$(DESTDIR)/
	mkdir -p $(BUILDDIR)/$(DESTDIR)/
	cp -r $(BUILDDIR)/html/* $(BUILDDIR)/$(DESTDIR)/
	cp $(BUILDDIR)/singlehtml/index.html $(BUILDDIR)/$(DESTDIR)/singleindex.html
	sed -i -e 's@index.html#@singleindex.html#@g' $(BUILDDIR)/$(DESTDIR)/singleindex.html

clean:
	@rm -rf $(BUILDDIR)

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile.sphinx
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
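The help, publish, and clean targets above all delegate to sphinx-build's "make mode". A minimal Python sketch of the equivalent html build, assuming Sphinx is installed and a conf.py sits in the source directory (the paths are illustrative, not taken from this diff):

    # Equivalent of "make html" via the sphinx-build entry point (assumption:
    # Sphinx is installed and conf.py lives in the current directory).
    from sphinx.cmd.build import main as sphinx_build

    # -M selects "make mode": <builder> <sourcedir> <builddir>
    exit_code = sphinx_build(["-M", "html", ".", "_build"])
    print("sphinx-build exited with", exit_code)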
@@ -16,13 +16,9 @@ which can be found at:
Contributing
------------

Please refer to our contributor guide here: https://docs.yoctoproject.org/dev/contributor-guide/
for full details on how to submit changes.

As a quick guide, patches should be sent to openembedded-core@lists.openembedded.org
The git command to do that would be:

    git send-email -M -1 --to openembedded-core@lists.openembedded.org
Please refer to
https://www.openembedded.org/wiki/How_to_submit_a_patch_to_OpenEmbedded
for guidelines on how to submit patches.

Mailing list:
SECURITY.md (22 changes)

@@ -1,22 +0,0 @@
How to Report a Potential Vulnerability?
========================================

If you would like to report a public issue (for example, one with a released
CVE number), please report it using the
[https://bugzilla.yoctoproject.org/enter_bug.cgi?product=Security Security Bugzilla]

If you are dealing with a not-yet released or urgent issue, please send a
message to security AT yoctoproject DOT org, including as many details as
possible: the layer or software module affected, the recipe and its version,
and any example code, if available.

Branches maintained with security fixes
---------------------------------------

See [https://wiki.yoctoproject.org/wiki/Stable_Release_and_LTS Stable release and LTS]
for detailed info regarding the policies and maintenance of Stable branches.

The [https://wiki.yoctoproject.org/wiki/Releases Release page] contains a list of all
releases of the Yocto Project. Versions in grey are no longer actively maintained with
security patches, but well-tested patches may still be accepted for them for
significant issues.
@@ -18,19 +18,16 @@ Bitbake requires Python version 3.8 or newer.
Contributing
------------

Please refer to our contributor guide here: https://docs.yoctoproject.org/contributor-guide/
for full details on how to submit changes.

As a quick guide, patches should be sent to bitbake-devel@lists.openembedded.org
The git command to do that would be:
Please refer to
https://www.openembedded.org/wiki/How_to_submit_a_patch_to_OpenEmbedded
for guidelines on how to submit patches, just note that the latter documentation is intended
for OpenEmbedded (and its core) not bitbake patches (bitbake-devel@lists.openembedded.org)
but in general main guidelines apply. Once the commit(s) have been created, the way to send
the patch is through git-send-email. For example, to send the last commit (HEAD) on current
branch, type:

    git send-email -M -1 --to bitbake-devel@lists.openembedded.org

If you're sending a patch related to the BitBake manual, make sure you copy
the Yocto Project documentation mailing list:

    git send-email -M -1 --to bitbake-devel@lists.openembedded.org --cc docs@lists.yoctoproject.org

Mailing list:

https://lists.openembedded.org/g/bitbake-devel
@@ -48,7 +45,8 @@ it has so many corner cases. The datastore has many tests too. Testing with the
recommended before submitting patches, particularly to the fetcher and datastore. We also
appreciate new test cases and may require them for more obscure issues.

To run the tests "zstd" and "git" must be installed.
To run the tests "zstd" and "git" must be installed. Git must be correctly configured, in
particular the user.email and user.name values must be set.

The assumption is made that this testsuite is run from an initialized OpenEmbedded build
environment (i.e. `source oe-init-build-env` is used). If this is not the case, run the
@@ -56,8 +54,3 @@ testsuite as follows:

    export PATH=$(pwd)/bin:$PATH
    bin/bitbake-selftest

The testsuite can alternatively be executed using pytest, e.g. obtained from PyPI (in this
case, the PATH is configured automatically):

    pytest
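bitbake-selftest and pytest both discover the same unittest-based modules (bb.tests.*, hashserv.tests, and friends, as listed in the bin/bitbake-selftest hunk further down). A minimal sketch of running one module directly with the standard unittest loader, assuming it is executed from the top of a bitbake checkout so that lib/ is importable:

    # Run one bitbake test module directly (assumption: executed from the top
    # of a bitbake checkout; "lib" is the directory holding the bb package).
    import sys
    import unittest

    sys.path.insert(0, "lib")
    suite = unittest.defaultTestLoader.loadTestsFromName("bb.tests.utils")
    unittest.TextTestRunner(verbosity=2).run(suite)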
@@ -1,24 +0,0 @@
How to Report a Potential Vulnerability?
========================================

If you would like to report a public issue (for example, one with a released
CVE number), please report it using the
[https://bugzilla.yoctoproject.org/enter_bug.cgi?product=Security Security Bugzilla].
If you have a patch ready, submit it following the same procedure as any other
patch as described in README.md.

If you are dealing with a not-yet released or urgent issue, please send a
message to security AT yoctoproject DOT org, including as many details as
possible: the layer or software module affected, the recipe and its version,
and any example code, if available.

Branches maintained with security fixes
---------------------------------------

See [https://wiki.yoctoproject.org/wiki/Stable_Release_and_LTS Stable release and LTS]
for detailed info regarding the policies and maintenance of Stable branches.

The [https://wiki.yoctoproject.org/wiki/Releases Release page] contains a list of all
releases of the Yocto Project. Versions in grey are no longer actively maintained with
security patches, but well-tested patches may still be accepted for them for
significant issues.
@@ -27,7 +27,7 @@ from bb.main import bitbake_main, BitBakeConfigParameters, BBMainException

bb.utils.check_system_locale()

__version__ = "2.9.1"
__version__ = "2.4.0"

if __name__ == "__main__":
if __version__ != bb.__version__:
@@ -72,17 +72,13 @@ def find_siginfo_task(bbhandler, pn, taskname, sig1=None, sig2=None):
elif sig2 not in sigfiles:
logger.error('No sigdata files found matching %s %s with signature %s' % (pn, taskname, sig2))
sys.exit(1)

latestfiles = [sigfiles[sig1]['path'], sigfiles[sig2]['path']]
latestfiles = [sigfiles[sig1], sigfiles[sig2]]
else:
sigfiles = find_siginfo(bbhandler, pn, taskname)
latestsigs = sorted(sigfiles.keys(), key=lambda h: sigfiles[h]['time'])[-2:]
if not latestsigs:
filedates = find_siginfo(bbhandler, pn, taskname)
latestfiles = sorted(filedates.keys(), key=lambda f: filedates[f])[-2:]
if not latestfiles:
logger.error('No sigdata files found matching %s %s' % (pn, taskname))
sys.exit(1)
latestfiles = [sigfiles[latestsigs[0]]['path']]
if len(latestsigs) > 1:
latestfiles.append(sigfiles[latestsigs[1]]['path'])

return latestfiles

@@ -100,7 +96,7 @@ def recursecb(key, hash1, hash2):
elif hash2 not in hashfiles:
recout.append("Unable to find matching sigdata for %s with hash %s" % (key, hash2))
else:
out2 = bb.siggen.compare_sigfiles(hashfiles[hash1]['path'], hashfiles[hash2]['path'], recursecb, color=color)
out2 = bb.siggen.compare_sigfiles(hashfiles[hash1], hashfiles[hash2], recursecb, color=color)
for change in out2:
for line in change.splitlines():
recout.append(' ' + line)
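Both the old and new code paths above end up handing a pair of siginfo file paths to bb.siggen.compare_sigfiles, which yields human-readable change descriptions. A minimal sketch of calling it directly, assuming bitbake's lib/ is on sys.path and with placeholder file names for two signature dumps:

    # Compare two task signature dumps outside of bitbake-diffsigs (assumption:
    # bitbake's lib/ is importable; the .siginfo paths are placeholders for
    # files taken from tmp/stamps/ or the sstate cache).
    import bb.siggen

    old_sig = "old.do_compile.sigdata.siginfo"
    new_sig = "new.do_compile.sigdata.siginfo"

    for change in bb.siggen.compare_sigfiles(old_sig, new_sig, color=False):
        print(change)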
@@ -26,35 +26,26 @@ if __name__ == "__main__":
parser.add_argument('-f', '--flag', help='Specify a variable flag to query (with --value)', default=None)
parser.add_argument('--value', help='Only report the value, no history and no variable name', action="store_true")
parser.add_argument('-q', '--quiet', help='Silence bitbake server logging', action="store_true")
parser.add_argument('--ignore-undefined', help='Suppress any errors related to undefined variables', action="store_true")
args = parser.parse_args()

if not args.value:
if args.unexpand:
sys.exit("--unexpand only makes sense with --value")
if args.unexpand and not args.value:
print("--unexpand only makes sense with --value")
sys.exit(1)

if args.flag:
sys.exit("--flag only makes sense with --value")
if args.flag and not args.value:
print("--flag only makes sense with --value")
sys.exit(1)

quiet = args.quiet or args.value
with bb.tinfoil.Tinfoil(tracking=True, setup_logging=not quiet) as tinfoil:
with bb.tinfoil.Tinfoil(tracking=True, setup_logging=not args.quiet) as tinfoil:
if args.recipe:
tinfoil.prepare(quiet=3 if quiet else 2)
tinfoil.prepare(quiet=2)
d = tinfoil.parse_recipe(args.recipe)
else:
tinfoil.prepare(quiet=2, config_only=True)
d = tinfoil.config_data

value = None
if args.flag:
value = d.getVarFlag(args.variable, args.flag, expand=not args.unexpand)
if value is None and not args.ignore_undefined:
sys.exit(f"The flag '{args.flag}' is not defined for variable '{args.variable}'")
else:
value = d.getVar(args.variable, expand=not args.unexpand)
if value is None and not args.ignore_undefined:
sys.exit(f"The variable '{args.variable}' is not defined")
if args.value:
print(str(value if value is not None else ""))
print(str(d.getVarFlag(args.variable, args.flag, expand=(not args.unexpand))))
elif args.value:
print(str(d.getVar(args.variable, expand=(not args.unexpand))))
else:
bb.data.emit_var(args.variable, d=d, all=True)
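bitbake-getvar is a thin wrapper around bb.tinfoil, and reading a configuration variable programmatically follows the same pattern shown in the hunk above. A minimal sketch, assuming it runs from an initialized build environment (source oe-init-build-env) and using MACHINE purely as an example variable:

    # Read a datastore variable the same way bitbake-getvar does (assumption:
    # run from an initialized OpenEmbedded build environment).
    import bb.tinfoil

    with bb.tinfoil.Tinfoil(tracking=True, setup_logging=False) as tinfoil:
        tinfoil.prepare(quiet=2, config_only=True)  # global config only, no recipe parse
        d = tinfoil.config_data
        print(d.getVar("MACHINE"))                  # expanded value, or None if unset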
@@ -14,9 +14,6 @@ import sys
|
||||
import threading
|
||||
import time
|
||||
import warnings
|
||||
import netrc
|
||||
import json
|
||||
import statistics
|
||||
warnings.simplefilter("default")
|
||||
|
||||
try:
|
||||
@@ -39,42 +36,18 @@ except ImportError:
|
||||
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'lib'))
|
||||
|
||||
import hashserv
|
||||
import bb.asyncrpc
|
||||
|
||||
DEFAULT_ADDRESS = 'unix://./hashserve.sock'
|
||||
METHOD = 'stress.test.method'
|
||||
|
||||
def print_user(u):
|
||||
print(f"Username: {u['username']}")
|
||||
if "permissions" in u:
|
||||
print("Permissions: " + " ".join(u["permissions"]))
|
||||
if "token" in u:
|
||||
print(f"Token: {u['token']}")
|
||||
|
||||
|
||||
def main():
|
||||
def handle_get(args, client):
|
||||
result = client.get_taskhash(args.method, args.taskhash, all_properties=True)
|
||||
if not result:
|
||||
return 0
|
||||
|
||||
print(json.dumps(result, sort_keys=True, indent=4))
|
||||
return 0
|
||||
|
||||
def handle_get_outhash(args, client):
|
||||
result = client.get_outhash(args.method, args.outhash, args.taskhash)
|
||||
if not result:
|
||||
return 0
|
||||
|
||||
print(json.dumps(result, sort_keys=True, indent=4))
|
||||
return 0
|
||||
|
||||
def handle_stats(args, client):
|
||||
if args.reset:
|
||||
s = client.reset_stats()
|
||||
else:
|
||||
s = client.get_stats()
|
||||
print(json.dumps(s, sort_keys=True, indent=4))
|
||||
pprint.pprint(s)
|
||||
return 0
|
||||
|
||||
def handle_stress(args, client):
|
||||
@@ -82,59 +55,47 @@ def main():
|
||||
nonlocal found_hashes
|
||||
nonlocal missed_hashes
|
||||
nonlocal max_time
|
||||
nonlocal times
|
||||
|
||||
with hashserv.create_client(args.address) as client:
|
||||
for i in range(args.requests):
|
||||
taskhash = hashlib.sha256()
|
||||
taskhash.update(args.taskhash_seed.encode('utf-8'))
|
||||
taskhash.update(str(i).encode('utf-8'))
|
||||
client = hashserv.create_client(args.address)
|
||||
|
||||
start_time = time.perf_counter()
|
||||
l = client.get_unihash(METHOD, taskhash.hexdigest())
|
||||
elapsed = time.perf_counter() - start_time
|
||||
for i in range(args.requests):
|
||||
taskhash = hashlib.sha256()
|
||||
taskhash.update(args.taskhash_seed.encode('utf-8'))
|
||||
taskhash.update(str(i).encode('utf-8'))
|
||||
|
||||
with lock:
|
||||
if l:
|
||||
found_hashes += 1
|
||||
else:
|
||||
missed_hashes += 1
|
||||
start_time = time.perf_counter()
|
||||
l = client.get_unihash(METHOD, taskhash.hexdigest())
|
||||
elapsed = time.perf_counter() - start_time
|
||||
|
||||
times.append(elapsed)
|
||||
pbar.update()
|
||||
with lock:
|
||||
if l:
|
||||
found_hashes += 1
|
||||
else:
|
||||
missed_hashes += 1
|
||||
|
||||
max_time = max(elapsed, max_time)
|
||||
pbar.update()
|
||||
|
||||
max_time = 0
|
||||
found_hashes = 0
|
||||
missed_hashes = 0
|
||||
lock = threading.Lock()
|
||||
times = []
|
||||
total_requests = args.clients * args.requests
|
||||
start_time = time.perf_counter()
|
||||
with ProgressBar(total=args.clients * args.requests) as pbar:
|
||||
with ProgressBar(total=total_requests) as pbar:
|
||||
threads = [threading.Thread(target=thread_main, args=(pbar, lock), daemon=False) for _ in range(args.clients)]
|
||||
for t in threads:
|
||||
t.start()
|
||||
|
||||
for t in threads:
|
||||
t.join()
|
||||
total_elapsed = time.perf_counter() - start_time
|
||||
|
||||
elapsed = time.perf_counter() - start_time
|
||||
with lock:
|
||||
mean = statistics.mean(times)
|
||||
median = statistics.median(times)
|
||||
stddev = statistics.pstdev(times)
|
||||
|
||||
print(f"Number of clients: {args.clients}")
|
||||
print(f"Requests per client: {args.requests}")
|
||||
print(f"Number of requests: {len(times)}")
|
||||
print(f"Total elapsed time: {total_elapsed:.3f}s")
|
||||
print(f"Total request rate: {len(times)/total_elapsed:.3f} req/s")
|
||||
print(f"Average request time: {mean:.3f}s")
|
||||
print(f"Median request time: {median:.3f}s")
|
||||
print(f"Request time std dev: {stddev:.3f}s")
|
||||
print(f"Maximum request time: {max(times):.3f}s")
|
||||
print(f"Minimum request time: {min(times):.3f}s")
|
||||
print(f"Hashes found: {found_hashes}")
|
||||
print(f"Hashes missed: {missed_hashes}")
|
||||
print("%d requests in %.1fs. %.1f requests per second" % (total_requests, elapsed, total_requests / elapsed))
|
||||
print("Average request time %.8fs" % (elapsed / total_requests))
|
||||
print("Max request time was %.8fs" % max_time)
|
||||
print("Found %d hashes, missed %d" % (found_hashes, missed_hashes))
|
||||
|
||||
if args.report:
|
||||
with ProgressBar(total=args.requests) as pbar:
|
||||
@@ -152,140 +113,12 @@ def main():
|
||||
with lock:
|
||||
pbar.update()
|
||||
|
||||
def handle_remove(args, client):
|
||||
where = {k: v for k, v in args.where}
|
||||
if where:
|
||||
result = client.remove(where)
|
||||
print("Removed %d row(s)" % (result["count"]))
|
||||
else:
|
||||
print("No query specified")
|
||||
|
||||
def handle_clean_unused(args, client):
|
||||
result = client.clean_unused(args.max_age)
|
||||
print("Removed %d rows" % (result["count"]))
|
||||
return 0
|
||||
|
||||
def handle_refresh_token(args, client):
|
||||
r = client.refresh_token(args.username)
|
||||
print_user(r)
|
||||
|
||||
def handle_set_user_permissions(args, client):
|
||||
r = client.set_user_perms(args.username, args.permissions)
|
||||
print_user(r)
|
||||
|
||||
def handle_get_user(args, client):
|
||||
r = client.get_user(args.username)
|
||||
print_user(r)
|
||||
|
||||
def handle_get_all_users(args, client):
|
||||
users = client.get_all_users()
|
||||
print("{username:20}| {permissions}".format(username="Username", permissions="Permissions"))
|
||||
print(("-" * 20) + "+" + ("-" * 20))
|
||||
for u in users:
|
||||
print("{username:20}| {permissions}".format(username=u["username"], permissions=" ".join(u["permissions"])))
|
||||
|
||||
def handle_new_user(args, client):
|
||||
r = client.new_user(args.username, args.permissions)
|
||||
print_user(r)
|
||||
|
||||
def handle_delete_user(args, client):
|
||||
r = client.delete_user(args.username)
|
||||
print_user(r)
|
||||
|
||||
def handle_get_db_usage(args, client):
|
||||
usage = client.get_db_usage()
|
||||
print(usage)
|
||||
tables = sorted(usage.keys())
|
||||
print("{name:20}| {rows:20}".format(name="Table name", rows="Rows"))
|
||||
print(("-" * 20) + "+" + ("-" * 20))
|
||||
for t in tables:
|
||||
print("{name:20}| {rows:<20}".format(name=t, rows=usage[t]["rows"]))
|
||||
print()
|
||||
|
||||
total_rows = sum(t["rows"] for t in usage.values())
|
||||
print(f"Total rows: {total_rows}")
|
||||
|
||||
def handle_get_db_query_columns(args, client):
|
||||
columns = client.get_db_query_columns()
|
||||
print("\n".join(sorted(columns)))
|
||||
|
||||
def handle_gc_status(args, client):
|
||||
result = client.gc_status()
|
||||
if not result["mark"]:
|
||||
print("No Garbage collection in progress")
|
||||
return 0
|
||||
|
||||
print("Current Mark: %s" % result["mark"])
|
||||
print("Total hashes to keep: %d" % result["keep"])
|
||||
print("Total hashes to remove: %s" % result["remove"])
|
||||
return 0
|
||||
|
||||
def handle_gc_mark(args, client):
|
||||
where = {k: v for k, v in args.where}
|
||||
result = client.gc_mark(args.mark, where)
|
||||
print("New hashes marked: %d" % result["count"])
|
||||
return 0
|
||||
|
||||
def handle_gc_sweep(args, client):
|
||||
result = client.gc_sweep(args.mark)
|
||||
print("Removed %d rows" % result["count"])
|
||||
return 0
|
||||
|
||||
def handle_unihash_exists(args, client):
|
||||
result = client.unihash_exists(args.unihash)
|
||||
if args.quiet:
|
||||
return 0 if result else 1
|
||||
|
||||
print("true" if result else "false")
|
||||
return 0
|
||||
|
||||
def handle_ping(args, client):
|
||||
times = []
|
||||
for i in range(1, args.count + 1):
|
||||
if not args.quiet:
|
||||
print(f"Ping {i} of {args.count}... ", end="")
|
||||
start_time = time.perf_counter()
|
||||
client.ping()
|
||||
elapsed = time.perf_counter() - start_time
|
||||
times.append(elapsed)
|
||||
if not args.quiet:
|
||||
print(f"{elapsed:.3f}s")
|
||||
|
||||
mean = statistics.mean(times)
|
||||
median = statistics.median(times)
|
||||
std_dev = statistics.pstdev(times)
|
||||
|
||||
if not args.quiet:
|
||||
print("------------------------")
|
||||
print(f"Number of pings: {len(times)}")
|
||||
print(f"Average round trip time: {mean:.3f}s")
|
||||
print(f"Median round trip time: {median:.3f}s")
|
||||
print(f"Round trip time std dev: {std_dev:.3f}s")
|
||||
print(f"Min time is: {min(times):.3f}s")
|
||||
print(f"Max time is: {max(times):.3f}s")
|
||||
return 0
|
||||
|
||||
parser = argparse.ArgumentParser(description='Hash Equivalence Client')
|
||||
parser.add_argument('--address', default=DEFAULT_ADDRESS, help='Server address (default "%(default)s")')
|
||||
parser.add_argument('--log', default='WARNING', help='Set logging level')
|
||||
parser.add_argument('--login', '-l', metavar="USERNAME", help="Authenticate as USERNAME")
|
||||
parser.add_argument('--password', '-p', metavar="TOKEN", help="Authenticate using token TOKEN")
|
||||
parser.add_argument('--become', '-b', metavar="USERNAME", help="Impersonate user USERNAME (if allowed) when performing actions")
|
||||
parser.add_argument('--no-netrc', '-n', action="store_false", dest="netrc", help="Do not use .netrc")
|
||||
|
||||
subparsers = parser.add_subparsers()
|
||||
|
||||
get_parser = subparsers.add_parser('get', help="Get the unihash for a taskhash")
|
||||
get_parser.add_argument("method", help="Method to query")
|
||||
get_parser.add_argument("taskhash", help="Task hash to query")
|
||||
get_parser.set_defaults(func=handle_get)
|
||||
|
||||
get_outhash_parser = subparsers.add_parser('get-outhash', help="Get output hash information")
|
||||
get_outhash_parser.add_argument("method", help="Method to query")
|
||||
get_outhash_parser.add_argument("outhash", help="Output hash to query")
|
||||
get_outhash_parser.add_argument("taskhash", help="Task hash to query")
|
||||
get_outhash_parser.set_defaults(func=handle_get_outhash)
|
||||
|
||||
stats_parser = subparsers.add_parser('stats', help='Show server stats')
|
||||
stats_parser.add_argument('--reset', action='store_true',
|
||||
help='Reset server stats')
|
||||
@@ -304,69 +137,6 @@ def main():
|
||||
help='Include string in outhash')
|
||||
stress_parser.set_defaults(func=handle_stress)
|
||||
|
||||
remove_parser = subparsers.add_parser('remove', help="Remove hash entries")
|
||||
remove_parser.add_argument("--where", "-w", metavar="KEY VALUE", nargs=2, action="append", default=[],
|
||||
help="Remove entries from table where KEY == VALUE")
|
||||
remove_parser.set_defaults(func=handle_remove)
|
||||
|
||||
clean_unused_parser = subparsers.add_parser('clean-unused', help="Remove unused database entries")
|
||||
clean_unused_parser.add_argument("max_age", metavar="SECONDS", type=int, help="Remove unused entries older than SECONDS old")
|
||||
clean_unused_parser.set_defaults(func=handle_clean_unused)
|
||||
|
||||
refresh_token_parser = subparsers.add_parser('refresh-token', help="Refresh auth token")
|
||||
refresh_token_parser.add_argument("--username", "-u", help="Refresh the token for another user (if authorized)")
|
||||
refresh_token_parser.set_defaults(func=handle_refresh_token)
|
||||
|
||||
set_user_perms_parser = subparsers.add_parser('set-user-perms', help="Set new permissions for user")
|
||||
set_user_perms_parser.add_argument("--username", "-u", help="Username", required=True)
|
||||
set_user_perms_parser.add_argument("permissions", metavar="PERM", nargs="*", default=[], help="New permissions")
|
||||
set_user_perms_parser.set_defaults(func=handle_set_user_permissions)
|
||||
|
||||
get_user_parser = subparsers.add_parser('get-user', help="Get user")
|
||||
get_user_parser.add_argument("--username", "-u", help="Username")
|
||||
get_user_parser.set_defaults(func=handle_get_user)
|
||||
|
||||
get_all_users_parser = subparsers.add_parser('get-all-users', help="List all users")
|
||||
get_all_users_parser.set_defaults(func=handle_get_all_users)
|
||||
|
||||
new_user_parser = subparsers.add_parser('new-user', help="Create new user")
|
||||
new_user_parser.add_argument("--username", "-u", help="Username", required=True)
|
||||
new_user_parser.add_argument("permissions", metavar="PERM", nargs="*", default=[], help="New permissions")
|
||||
new_user_parser.set_defaults(func=handle_new_user)
|
||||
|
||||
delete_user_parser = subparsers.add_parser('delete-user', help="Delete user")
|
||||
delete_user_parser.add_argument("--username", "-u", help="Username", required=True)
|
||||
delete_user_parser.set_defaults(func=handle_delete_user)
|
||||
|
||||
db_usage_parser = subparsers.add_parser('get-db-usage', help="Database Usage")
|
||||
db_usage_parser.set_defaults(func=handle_get_db_usage)
|
||||
|
||||
db_query_columns_parser = subparsers.add_parser('get-db-query-columns', help="Show columns that can be used in database queries")
|
||||
db_query_columns_parser.set_defaults(func=handle_get_db_query_columns)
|
||||
|
||||
gc_status_parser = subparsers.add_parser("gc-status", help="Show garbage collection status")
|
||||
gc_status_parser.set_defaults(func=handle_gc_status)
|
||||
|
||||
gc_mark_parser = subparsers.add_parser('gc-mark', help="Mark hashes to be kept for garbage collection")
|
||||
gc_mark_parser.add_argument("mark", help="Mark for this garbage collection operation")
|
||||
gc_mark_parser.add_argument("--where", "-w", metavar="KEY VALUE", nargs=2, action="append", default=[],
|
||||
help="Keep entries in table where KEY == VALUE")
|
||||
gc_mark_parser.set_defaults(func=handle_gc_mark)
|
||||
|
||||
gc_sweep_parser = subparsers.add_parser('gc-sweep', help="Perform garbage collection and delete any entries that are not marked")
|
||||
gc_sweep_parser.add_argument("mark", help="Mark for this garbage collection operation")
|
||||
gc_sweep_parser.set_defaults(func=handle_gc_sweep)
|
||||
|
||||
unihash_exists_parser = subparsers.add_parser('unihash-exists', help="Check if a unihash is known to the server")
|
||||
unihash_exists_parser.add_argument("--quiet", action="store_true", help="Don't print status. Instead, exit with 0 if unihash exists and 1 if it does not")
|
||||
unihash_exists_parser.add_argument("unihash", help="Unihash to check")
|
||||
unihash_exists_parser.set_defaults(func=handle_unihash_exists)
|
||||
|
||||
ping_parser = subparsers.add_parser('ping', help="Ping server")
|
||||
ping_parser.add_argument("-n", "--count", type=int, help="Number of pings. Default is %(default)s", default=10)
|
||||
ping_parser.add_argument("-q", "--quiet", action="store_true", help="Don't print each ping; only print results")
|
||||
ping_parser.set_defaults(func=handle_ping)
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
logger = logging.getLogger('hashserv')
|
||||
@@ -380,30 +150,11 @@ def main():
|
||||
console.setLevel(level)
|
||||
logger.addHandler(console)
|
||||
|
||||
login = args.login
|
||||
password = args.password
|
||||
|
||||
if login is None and args.netrc:
|
||||
try:
|
||||
n = netrc.netrc()
|
||||
auth = n.authenticators(args.address)
|
||||
if auth is not None:
|
||||
login, _, password = auth
|
||||
except FileNotFoundError:
|
||||
pass
|
||||
except netrc.NetrcParseError as e:
|
||||
sys.stderr.write(f"Error parsing {e.filename}:{e.lineno}: {e.msg}\n")
|
||||
|
||||
func = getattr(args, 'func', None)
|
||||
if func:
|
||||
try:
|
||||
with hashserv.create_client(args.address, login, password) as client:
|
||||
if args.become:
|
||||
client.become_user(args.become)
|
||||
return func(args, client)
|
||||
except bb.asyncrpc.InvokeError as e:
|
||||
print(f"ERROR: {e}")
|
||||
return 1
|
||||
client = hashserv.create_client(args.address)
|
||||
|
||||
return func(args, client)
|
||||
|
||||
return 0
|
||||
|
||||
|
||||
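The get, get-outhash, and stress handlers above all talk to the server through the same small client API (hashserv.create_client plus per-request methods). A minimal sketch of a one-off query, assuming a hash equivalence server is already listening on the default unix socket used by this script and that the method string matches whatever the entries were reported under:

    # Query a hash equivalence server for a unihash (assumption: a server is
    # listening on unix://./hashserve.sock; the method string is illustrative).
    import hashlib
    import hashserv

    taskhash = hashlib.sha256(b"example task input").hexdigest()
    with hashserv.create_client("unix://./hashserve.sock") as client:
        unihash = client.get_unihash("stress.test.method", taskhash)
        print(unihash or "no equivalent hash recorded")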
@@ -11,169 +11,56 @@ import logging
|
||||
import argparse
|
||||
import sqlite3
|
||||
import warnings
|
||||
|
||||
warnings.simplefilter("default")
|
||||
|
||||
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), "lib"))
|
||||
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'lib'))
|
||||
|
||||
import hashserv
|
||||
from hashserv.server import DEFAULT_ANON_PERMS
|
||||
|
||||
VERSION = "1.0.0"
|
||||
|
||||
DEFAULT_BIND = "unix://./hashserve.sock"
|
||||
DEFAULT_BIND = 'unix://./hashserve.sock'
|
||||
|
||||
|
||||
def main():
|
||||
parser = argparse.ArgumentParser(
|
||||
description="Hash Equivalence Reference Server. Version=%s" % VERSION,
|
||||
formatter_class=argparse.RawTextHelpFormatter,
|
||||
epilog="""
|
||||
The bind address may take one of the following formats:
|
||||
unix://PATH - Bind to unix domain socket at PATH
|
||||
ws://ADDRESS:PORT - Bind to websocket on ADDRESS:PORT
|
||||
ADDRESS:PORT - Bind to raw TCP socket on ADDRESS:PORT
|
||||
parser = argparse.ArgumentParser(description='Hash Equivalence Reference Server. Version=%s' % VERSION,
|
||||
epilog='''The bind address is the path to a unix domain socket if it is
|
||||
prefixed with "unix://". Otherwise, it is an IP address
|
||||
and port in form ADDRESS:PORT. To bind to all addresses, leave
|
||||
the ADDRESS empty, e.g. "--bind :8686". To bind to a specific
|
||||
IPv6 address, enclose the address in "[]", e.g.
|
||||
"--bind [::1]:8686"'''
|
||||
)
|
||||
|
||||
To bind to all addresses, leave the ADDRESS empty, e.g. "--bind :8686" or
|
||||
"--bind ws://:8686". To bind to a specific IPv6 address, enclose the address in
|
||||
"[]", e.g. "--bind [::1]:8686" or "--bind ws://[::1]:8686"
|
||||
|
||||
Note that the default Anonymous permissions are designed to not break existing
|
||||
server instances when upgrading, but are not particularly secure defaults. If
|
||||
you want to use authentication, it is recommended that you use "--anon-perms
|
||||
@read" to only give anonymous users read access, or "--anon-perms @none" to
|
||||
give un-authenticated users no access at all.
|
||||
|
||||
Setting "--anon-perms @all" or "--anon-perms @user-admin" is not allowed, since
|
||||
this would allow anonymous users to manage all users accounts, which is a bad
|
||||
idea.
|
||||
|
||||
If you are using user authentication, you should run your server in websockets
|
||||
mode with an SSL terminating load balancer in front of it (as this server does
|
||||
not implement SSL). Otherwise all usernames and passwords will be transmitted
|
||||
in the clear. When configured this way, clients can connect using a secure
|
||||
websocket, as in "wss://SERVER:PORT"
|
||||
|
||||
The following permissions are supported by the server:
|
||||
|
||||
@none - No permissions
|
||||
@read - The ability to read equivalent hashes from the server
|
||||
@report - The ability to report equivalent hashes to the server
|
||||
@db-admin - Manage the hash database(s). This includes cleaning the
|
||||
database, removing hashes, etc.
|
||||
@user-admin - The ability to manage user accounts. This includes, creating
|
||||
users, deleting users, resetting login tokens, and assigning
|
||||
permissions.
|
||||
@all - All possible permissions, including any that may be added
|
||||
in the future
|
||||
""",
|
||||
)
|
||||
|
||||
parser.add_argument(
|
||||
"-b",
|
||||
"--bind",
|
||||
default=os.environ.get("HASHSERVER_BIND", DEFAULT_BIND),
|
||||
help='Bind address (default $HASHSERVER_BIND, "%(default)s")',
|
||||
)
|
||||
parser.add_argument(
|
||||
"-d",
|
||||
"--database",
|
||||
default=os.environ.get("HASHSERVER_DB", "./hashserv.db"),
|
||||
help='Database file (default $HASHSERVER_DB, "%(default)s")',
|
||||
)
|
||||
parser.add_argument(
|
||||
"-l",
|
||||
"--log",
|
||||
default=os.environ.get("HASHSERVER_LOG_LEVEL", "WARNING"),
|
||||
help='Set logging level (default $HASHSERVER_LOG_LEVEL, "%(default)s")',
|
||||
)
|
||||
parser.add_argument(
|
||||
"-u",
|
||||
"--upstream",
|
||||
default=os.environ.get("HASHSERVER_UPSTREAM", None),
|
||||
help="Upstream hashserv to pull hashes from ($HASHSERVER_UPSTREAM)",
|
||||
)
|
||||
parser.add_argument(
|
||||
"-r",
|
||||
"--read-only",
|
||||
action="store_true",
|
||||
help="Disallow write operations from clients ($HASHSERVER_READ_ONLY)",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--db-username",
|
||||
default=os.environ.get("HASHSERVER_DB_USERNAME", None),
|
||||
help="Database username ($HASHSERVER_DB_USERNAME)",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--db-password",
|
||||
default=os.environ.get("HASHSERVER_DB_PASSWORD", None),
|
||||
help="Database password ($HASHSERVER_DB_PASSWORD)",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--anon-perms",
|
||||
metavar="PERM[,PERM[,...]]",
|
||||
default=os.environ.get("HASHSERVER_ANON_PERMS", ",".join(DEFAULT_ANON_PERMS)),
|
||||
help='Permissions to give anonymous users (default $HASHSERVER_ANON_PERMS, "%(default)s")',
|
||||
)
|
||||
parser.add_argument(
|
||||
"--admin-user",
|
||||
default=os.environ.get("HASHSERVER_ADMIN_USER", None),
|
||||
help="Create default admin user with name ADMIN_USER ($HASHSERVER_ADMIN_USER)",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--admin-password",
|
||||
default=os.environ.get("HASHSERVER_ADMIN_PASSWORD", None),
|
||||
help="Create default admin user with password ADMIN_PASSWORD ($HASHSERVER_ADMIN_PASSWORD)",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--reuseport",
|
||||
action="store_true",
|
||||
help="Enable SO_REUSEPORT, allowing multiple servers to bind to the same port for load balancing",
|
||||
)
|
||||
parser.add_argument('-b', '--bind', default=DEFAULT_BIND, help='Bind address (default "%(default)s")')
|
||||
parser.add_argument('-d', '--database', default='./hashserv.db', help='Database file (default "%(default)s")')
|
||||
parser.add_argument('-l', '--log', default='WARNING', help='Set logging level')
|
||||
parser.add_argument('-u', '--upstream', help='Upstream hashserv to pull hashes from')
|
||||
parser.add_argument('-r', '--read-only', action='store_true', help='Disallow write operations from clients')
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
logger = logging.getLogger("hashserv")
|
||||
logger = logging.getLogger('hashserv')
|
||||
|
||||
level = getattr(logging, args.log.upper(), None)
|
||||
if not isinstance(level, int):
|
||||
raise ValueError(
|
||||
"Invalid log level: %s (Try ERROR/WARNING/INFO/DEBUG)" % args.log
|
||||
)
|
||||
raise ValueError('Invalid log level: %s' % args.log)
|
||||
|
||||
logger.setLevel(level)
|
||||
console = logging.StreamHandler()
|
||||
console.setLevel(level)
|
||||
logger.addHandler(console)
|
||||
|
||||
read_only = (os.environ.get("HASHSERVER_READ_ONLY", "0") == "1") or args.read_only
|
||||
if "," in args.anon_perms:
|
||||
anon_perms = args.anon_perms.split(",")
|
||||
else:
|
||||
anon_perms = args.anon_perms.split()
|
||||
|
||||
server = hashserv.create_server(
|
||||
args.bind,
|
||||
args.database,
|
||||
upstream=args.upstream,
|
||||
read_only=read_only,
|
||||
db_username=args.db_username,
|
||||
db_password=args.db_password,
|
||||
anon_perms=anon_perms,
|
||||
admin_username=args.admin_user,
|
||||
admin_password=args.admin_password,
|
||||
reuseport=args.reuseport,
|
||||
)
|
||||
server = hashserv.create_server(args.bind, args.database, upstream=args.upstream, read_only=args.read_only)
|
||||
server.serve_forever()
|
||||
return 0
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
if __name__ == '__main__':
|
||||
try:
|
||||
ret = main()
|
||||
except Exception:
|
||||
ret = 1
|
||||
import traceback
|
||||
|
||||
traceback.print_exc()
|
||||
sys.exit(ret)
|
||||
|
||||
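On the server side, both the old and new versions of the script above reduce to hashserv.create_server followed by serve_forever; the authentication and database arguments added in the newer version are optional. A minimal read-write sketch backed by a local SQLite file, assuming bitbake's lib/ is importable and using an illustrative TCP bind address:

    # Start a local hash equivalence server (assumption: bitbake's lib/ is on
    # sys.path; the bind address and database filename are illustrative).
    import hashserv

    server = hashserv.create_server(
        "localhost:8686",    # or "unix://./hashserve.sock" / "ws://localhost:8686"
        "./hashserv.db",     # SQLite database file
        upstream=None,
        read_only=False,
    )
    server.serve_forever()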
@@ -33,7 +33,7 @@ def main():
|
||||
add_help=False)
|
||||
parser.add_argument('-d', '--debug', help='Enable debug output', action='store_true')
|
||||
parser.add_argument('-q', '--quiet', help='Print only errors', action='store_true')
|
||||
parser.add_argument('-F', '--force', help='Forced execution: can be specified multiple times. -F will force add without recipe parse verification and -FF will additionally force the run withput layer parsing.', action='count', default=0)
|
||||
parser.add_argument('-F', '--force', help='Force add without recipe parse verification', action='store_true')
|
||||
parser.add_argument('--color', choices=['auto', 'always', 'never'], default='auto', help='Colorize output (where %(metavar)s is %(choices)s)', metavar='COLOR')
|
||||
|
||||
global_args, unparsed_args = parser.parse_known_args()
|
||||
@@ -59,20 +59,16 @@ def main():
|
||||
plugins = []
|
||||
tinfoil = bb.tinfoil.Tinfoil(tracking=True)
|
||||
tinfoil.logger.setLevel(logger.getEffectiveLevel())
|
||||
if global_args.force > 1:
|
||||
bbpaths = []
|
||||
else:
|
||||
try:
|
||||
tinfoil.prepare(True)
|
||||
bbpaths = tinfoil.config_data.getVar('BBPATH').split(':')
|
||||
|
||||
try:
|
||||
for path in ([topdir] + bbpaths):
|
||||
for path in ([topdir] +
|
||||
tinfoil.config_data.getVar('BBPATH').split(':')):
|
||||
pluginpath = os.path.join(path, 'lib', 'bblayers')
|
||||
bb.utils.load_plugins(logger, plugins, pluginpath)
|
||||
|
||||
registered = False
|
||||
for plugin in plugins:
|
||||
if hasattr(plugin, 'tinfoil_init') and global_args.force <= 1:
|
||||
if hasattr(plugin, 'tinfoil_init'):
|
||||
plugin.tinfoil_init(tinfoil)
|
||||
if hasattr(plugin, 'register_commands'):
|
||||
registered = True
|
||||
|
||||
@@ -7,97 +7,49 @@
|
||||
|
||||
import os
|
||||
import sys,logging
|
||||
import argparse
|
||||
import optparse
|
||||
import warnings
|
||||
warnings.simplefilter("default")
|
||||
|
||||
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), "lib"))
|
||||
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)),'lib'))
|
||||
|
||||
import prserv
|
||||
import prserv.serv
|
||||
|
||||
VERSION = "2.0.0"
|
||||
__version__="1.0.0"
|
||||
|
||||
PRHOST_DEFAULT="0.0.0.0"
|
||||
PRHOST_DEFAULT='0.0.0.0'
|
||||
PRPORT_DEFAULT=8585
|
||||
|
||||
def init_logger(logfile, loglevel):
|
||||
numeric_level = getattr(logging, loglevel.upper(), None)
|
||||
if not isinstance(numeric_level, int):
|
||||
raise ValueError("Invalid log level: %s" % loglevel)
|
||||
FORMAT = "%(asctime)-15s %(message)s"
|
||||
logging.basicConfig(level=numeric_level, filename=logfile, format=FORMAT)
|
||||
|
||||
def main():
|
||||
parser = argparse.ArgumentParser(
|
||||
description="BitBake PR Server. Version=%s" % VERSION,
|
||||
formatter_class=argparse.RawTextHelpFormatter)
|
||||
parser = optparse.OptionParser(
|
||||
version="Bitbake PR Service Core version %s, %%prog version %s" % (prserv.__version__, __version__),
|
||||
usage = "%prog < --start | --stop > [options]")
|
||||
|
||||
parser.add_argument(
|
||||
"-f",
|
||||
"--file",
|
||||
default="prserv.sqlite3",
|
||||
help="database filename (default: prserv.sqlite3)",
|
||||
)
|
||||
parser.add_argument(
|
||||
"-l",
|
||||
"--log",
|
||||
default="prserv.log",
|
||||
help="log filename(default: prserv.log)",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--loglevel",
|
||||
default="INFO",
|
||||
help="logging level, i.e. CRITICAL, ERROR, WARNING, INFO, DEBUG",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--start",
|
||||
action="store_true",
|
||||
help="start daemon",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--stop",
|
||||
action="store_true",
|
||||
help="stop daemon",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--host",
|
||||
help="ip address to bind",
|
||||
default=PRHOST_DEFAULT,
|
||||
)
|
||||
parser.add_argument(
|
||||
"--port",
|
||||
type=int,
|
||||
default=PRPORT_DEFAULT,
|
||||
help="port number (default: 8585)",
|
||||
)
|
||||
parser.add_argument(
|
||||
"-r",
|
||||
"--read-only",
|
||||
action="store_true",
|
||||
help="open database in read-only mode",
|
||||
)
|
||||
parser.add_argument(
|
||||
"-u",
|
||||
"--upstream",
|
||||
default=os.environ.get("PRSERVER_UPSTREAM", None),
|
||||
help="Upstream PR service (host:port)",
|
||||
)
|
||||
parser.add_option("-f", "--file", help="database filename(default: prserv.sqlite3)", action="store",
|
||||
dest="dbfile", type="string", default="prserv.sqlite3")
|
||||
parser.add_option("-l", "--log", help="log filename(default: prserv.log)", action="store",
|
||||
dest="logfile", type="string", default="prserv.log")
|
||||
parser.add_option("--loglevel", help="logging level, i.e. CRITICAL, ERROR, WARNING, INFO, DEBUG",
|
||||
action = "store", type="string", dest="loglevel", default = "INFO")
|
||||
parser.add_option("--start", help="start daemon",
|
||||
action="store_true", dest="start")
|
||||
parser.add_option("--stop", help="stop daemon",
|
||||
action="store_true", dest="stop")
|
||||
parser.add_option("--host", help="ip address to bind", action="store",
|
||||
dest="host", type="string", default=PRHOST_DEFAULT)
|
||||
parser.add_option("--port", help="port number(default: 8585)", action="store",
|
||||
dest="port", type="int", default=PRPORT_DEFAULT)
|
||||
parser.add_option("-r", "--read-only", help="open database in read-only mode",
|
||||
action="store_true")
|
||||
|
||||
args = parser.parse_args()
|
||||
init_logger(os.path.abspath(args.log), args.loglevel)
|
||||
options, args = parser.parse_args(sys.argv)
|
||||
prserv.init_logger(os.path.abspath(options.logfile),options.loglevel)
|
||||
|
||||
if args.start:
|
||||
ret=prserv.serv.start_daemon(
|
||||
args.file,
|
||||
args.host,
|
||||
args.port,
|
||||
os.path.abspath(args.log),
|
||||
args.read_only,
|
||||
args.upstream
|
||||
)
|
||||
elif args.stop:
|
||||
ret=prserv.serv.stop_daemon(args.host, args.port)
|
||||
if options.start:
|
||||
ret=prserv.serv.start_daemon(options.dbfile, options.host, options.port,os.path.abspath(options.logfile), options.read_only)
|
||||
elif options.stop:
|
||||
ret=prserv.serv.stop_daemon(options.host, options.port)
|
||||
else:
|
||||
ret=parser.print_help()
|
||||
return ret
|
||||
|
||||
@@ -15,7 +15,6 @@ import unittest
|
||||
try:
|
||||
import bb
|
||||
import hashserv
|
||||
import prserv
|
||||
import layerindexlib
|
||||
except RuntimeError as exc:
|
||||
sys.exit(str(exc))
|
||||
@@ -34,7 +33,6 @@ tests = ["bb.tests.codeparser",
|
||||
"bb.tests.utils",
|
||||
"bb.tests.compression",
|
||||
"hashserv.tests",
|
||||
"prserv.tests",
|
||||
"layerindexlib.tests.layerindexobj",
|
||||
"layerindexlib.tests.restapi",
|
||||
"layerindexlib.tests.cooker"]
|
||||
|
||||
@@ -91,19 +91,19 @@ def worker_fire_prepickled(event):
|
||||
worker_thread_exit = False
|
||||
|
||||
def worker_flush(worker_queue):
|
||||
worker_queue_int = bytearray()
|
||||
worker_queue_int = b""
|
||||
global worker_pipe, worker_thread_exit
|
||||
|
||||
while True:
|
||||
try:
|
||||
worker_queue_int.extend(worker_queue.get(True, 1))
|
||||
worker_queue_int = worker_queue_int + worker_queue.get(True, 1)
|
||||
except queue.Empty:
|
||||
pass
|
||||
while (worker_queue_int or not worker_queue.empty()):
|
||||
try:
|
||||
(_, ready, _) = select.select([], [worker_pipe], [], 1)
|
||||
if not worker_queue.empty():
|
||||
worker_queue_int.extend(worker_queue.get())
|
||||
worker_queue_int = worker_queue_int + worker_queue.get()
|
||||
written = os.write(worker_pipe, worker_queue_int)
|
||||
worker_queue_int = worker_queue_int[written:]
|
||||
except (IOError, OSError) as e:
|
||||
@@ -151,7 +151,6 @@ def fork_off_task(cfg, data, databuilder, workerdata, extraconfigdata, runtask):
|
||||
taskhash = runtask['taskhash']
|
||||
unihash = runtask['unihash']
|
||||
appends = runtask['appends']
|
||||
layername = runtask['layername']
|
||||
taskdepdata = runtask['taskdepdata']
|
||||
quieterrors = runtask['quieterrors']
|
||||
# We need to setup the environment BEFORE the fork, since
|
||||
@@ -183,7 +182,7 @@ def fork_off_task(cfg, data, databuilder, workerdata, extraconfigdata, runtask):
|
||||
if 'fakeroot' in taskdep and taskname in taskdep['fakeroot'] and not dry_run:
|
||||
fakeroot = True
|
||||
envvars = (runtask['fakerootenv'] or "").split()
|
||||
for key, value in (var.split('=',1) for var in envvars):
|
||||
for key, value in (var.split('=') for var in envvars):
|
||||
envbackup[key] = os.environ.get(key)
|
||||
os.environ[key] = value
|
||||
fakeenv[key] = value
|
||||
@@ -195,7 +194,7 @@ def fork_off_task(cfg, data, databuilder, workerdata, extraconfigdata, runtask):
|
||||
(fn, taskname, ', '.join(fakedirs)))
|
||||
else:
|
||||
envvars = (runtask['fakerootnoenv'] or "").split()
|
||||
for key, value in (var.split('=',1) for var in envvars):
|
||||
for key, value in (var.split('=') for var in envvars):
|
||||
envbackup[key] = os.environ.get(key)
|
||||
os.environ[key] = value
|
||||
fakeenv[key] = value
|
||||
@@ -237,13 +236,11 @@ def fork_off_task(cfg, data, databuilder, workerdata, extraconfigdata, runtask):
|
||||
# Let SIGHUP exit as SIGTERM
|
||||
signal.signal(signal.SIGHUP, sigterm_handler)
|
||||
|
||||
# No stdin & stdout
|
||||
# stdout is used as a status report channel and must not be used by child processes.
|
||||
dumbio = os.open(os.devnull, os.O_RDWR)
|
||||
os.dup2(dumbio, sys.stdin.fileno())
|
||||
os.dup2(dumbio, sys.stdout.fileno())
|
||||
# No stdin
|
||||
newsi = os.open(os.devnull, os.O_RDWR)
|
||||
os.dup2(newsi, sys.stdin.fileno())
|
||||
|
||||
if umask is not None:
|
||||
if umask:
|
||||
os.umask(umask)
|
||||
|
||||
try:
|
||||
@@ -265,7 +262,7 @@ def fork_off_task(cfg, data, databuilder, workerdata, extraconfigdata, runtask):
|
||||
bb.parse.siggen.set_taskhashes(workerdata["newhashes"])
|
||||
ret = 0
|
||||
|
||||
the_data = databuilder.parseRecipe(fn, appends, layername)
|
||||
the_data = databuilder.parseRecipe(fn, appends)
|
||||
the_data.setVar('BB_TASKHASH', taskhash)
|
||||
the_data.setVar('BB_UNIHASH', unihash)
|
||||
bb.parse.siggen.setup_datacache_from_datastore(fn, the_data)
|
||||
@@ -307,10 +304,6 @@ def fork_off_task(cfg, data, databuilder, workerdata, extraconfigdata, runtask):
|
||||
if not quieterrors:
|
||||
logger.critical(traceback.format_exc())
|
||||
os._exit(1)
|
||||
|
||||
sys.stdout.flush()
|
||||
sys.stderr.flush()
|
||||
|
||||
try:
|
||||
if dry_run:
|
||||
return 0
|
||||
@@ -352,12 +345,12 @@ class runQueueWorkerPipe():
|
||||
if pipeout:
|
||||
pipeout.close()
|
||||
bb.utils.nonblockingfd(self.input)
|
||||
self.queue = bytearray()
|
||||
self.queue = b""
|
||||
|
||||
def read(self):
|
||||
start = len(self.queue)
|
||||
try:
|
||||
self.queue.extend(self.input.read(102400) or b"")
|
||||
self.queue = self.queue + (self.input.read(102400) or b"")
|
||||
except (OSError, IOError) as e:
|
||||
if e.errno != errno.EAGAIN:
|
||||
raise
|
||||
@@ -385,7 +378,7 @@ class BitbakeWorker(object):
|
||||
def __init__(self, din):
|
||||
self.input = din
|
||||
bb.utils.nonblockingfd(self.input)
|
||||
self.queue = bytearray()
|
||||
self.queue = b""
|
||||
self.cookercfg = None
|
||||
self.databuilder = None
|
||||
self.data = None
|
||||
@@ -419,7 +412,7 @@ class BitbakeWorker(object):
|
||||
if len(r) == 0:
|
||||
# EOF on pipe, server must have terminated
|
||||
self.sigterm_exception(signal.SIGTERM, None)
|
||||
self.queue.extend(r)
|
||||
self.queue = self.queue + r
|
||||
except (OSError, IOError):
|
||||
pass
|
||||
if len(self.queue):
|
||||
@@ -439,30 +432,18 @@ class BitbakeWorker(object):
|
||||
while self.process_waitpid():
|
||||
continue
|
||||
|
||||
|
||||
def handle_item(self, item, func):
|
||||
opening_tag = b"<" + item + b">"
|
||||
if not self.queue.startswith(opening_tag):
|
||||
return
|
||||
|
||||
tag_len = len(opening_tag)
|
||||
if len(self.queue) < tag_len + 4:
|
||||
# we need to receive more data
|
||||
return
|
||||
header = self.queue[tag_len:tag_len + 4]
|
||||
payload_len = int.from_bytes(header, 'big')
|
||||
# closing tag has length (tag_len + 1)
|
||||
if len(self.queue) < tag_len * 2 + 1 + payload_len:
|
||||
# we need to receive more data
|
||||
return
|
||||
|
||||
index = self.queue.find(b"</" + item + b">")
|
||||
if index != -1:
|
||||
try:
|
||||
func(self.queue[(tag_len + 4):index])
|
||||
except pickle.UnpicklingError:
|
||||
workerlog_write("Unable to unpickle data: %s\n" % ":".join("{:02x}".format(c) for c in self.queue))
|
||||
raise
|
||||
self.queue = self.queue[(index + len(b"</") + len(item) + len(b">")):]
|
||||
if self.queue.startswith(b"<" + item + b">"):
|
||||
index = self.queue.find(b"</" + item + b">")
|
||||
while index != -1:
|
||||
try:
|
||||
func(self.queue[(len(item) + 2):index])
|
||||
except pickle.UnpicklingError:
|
||||
workerlog_write("Unable to unpickle data: %s\n" % ":".join("{:02x}".format(c) for c in self.queue))
|
||||
raise
|
||||
self.queue = self.queue[(index + len(item) + 3):]
|
||||
index = self.queue.find(b"</" + item + b">")
|
||||
|
||||
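The new handle_item implementation above expects each message to carry an explicit payload length between the opening tag and the pickled body, rather than scanning for the closing tag alone. A minimal sketch of building a frame in that format (the helper name and payload are illustrative, not part of the diff):

    # Build a message in the framing the new worker code parses: "<item>" tag,
    # 4-byte big-endian payload length, pickled payload, "</item>" closing tag.
    import pickle

    def frame_message(item: bytes, payload_obj) -> bytes:
        payload = pickle.dumps(payload_obj)
        header = len(payload).to_bytes(4, "big")
        return b"<" + item + b">" + header + payload + b"</" + item + b">"

    msg = frame_message(b"event", {"status": "ok"})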
def handle_cookercfg(self, data):
|
||||
self.cookercfg = pickle.loads(data)
|
||||
|
||||
@@ -24,17 +24,15 @@ warnings.simplefilter("default")
|
||||
version = 1.0
|
||||
|
||||
|
||||
git_cmd = ['git', '-c', 'safe.bareRepository=all']
|
||||
|
||||
def main():
|
||||
if sys.version_info < (3, 4, 0):
|
||||
sys.exit('Python 3.4 or greater is required')
|
||||
|
||||
git_dir = check_output(git_cmd + ['rev-parse', '--git-dir']).rstrip()
|
||||
git_dir = check_output(['git', 'rev-parse', '--git-dir']).rstrip()
|
||||
shallow_file = os.path.join(git_dir, 'shallow')
|
||||
if os.path.exists(shallow_file):
|
||||
try:
|
||||
check_output(git_cmd + ['fetch', '--unshallow'])
|
||||
check_output(['git', 'fetch', '--unshallow'])
|
||||
except subprocess.CalledProcessError:
|
||||
try:
|
||||
os.unlink(shallow_file)
|
||||
@@ -43,21 +41,21 @@ def main():
|
||||
raise
|
||||
|
||||
args = process_args()
|
||||
revs = check_output(git_cmd + ['rev-list'] + args.revisions).splitlines()
|
||||
revs = check_output(['git', 'rev-list'] + args.revisions).splitlines()
|
||||
|
||||
make_shallow(shallow_file, args.revisions, args.refs)
|
||||
|
||||
ref_revs = check_output(git_cmd + ['rev-list'] + args.refs).splitlines()
|
||||
ref_revs = check_output(['git', 'rev-list'] + args.refs).splitlines()
|
||||
remaining_history = set(revs) & set(ref_revs)
|
||||
for rev in remaining_history:
|
||||
if check_output(git_cmd + ['rev-parse', '{}^@'.format(rev)]):
|
||||
if check_output(['git', 'rev-parse', '{}^@'.format(rev)]):
|
||||
sys.exit('Error: %s was not made shallow' % rev)
|
||||
|
||||
filter_refs(args.refs)
|
||||
|
||||
if args.shrink:
|
||||
shrink_repo(git_dir)
|
||||
subprocess.check_call(git_cmd + ['fsck', '--unreachable'])
|
||||
subprocess.check_call(['git', 'fsck', '--unreachable'])
|
||||
|
||||
|
||||
def process_args():
|
||||
@@ -74,12 +72,12 @@ def process_args():
|
||||
args = parser.parse_args()
|
||||
|
||||
if args.refs:
|
||||
args.refs = check_output(git_cmd + ['rev-parse', '--symbolic-full-name'] + args.refs).splitlines()
|
||||
args.refs = check_output(['git', 'rev-parse', '--symbolic-full-name'] + args.refs).splitlines()
|
||||
else:
|
||||
args.refs = get_all_refs(lambda r, t, tt: t == 'commit' or tt == 'commit')
|
||||
|
||||
args.refs = list(filter(lambda r: not r.endswith('/HEAD'), args.refs))
|
||||
args.revisions = check_output(git_cmd + ['rev-parse'] + ['%s^{}' % i for i in args.revisions]).splitlines()
|
||||
args.revisions = check_output(['git', 'rev-parse'] + ['%s^{}' % i for i in args.revisions]).splitlines()
|
||||
return args
|
||||
|
||||
|
||||
@@ -97,7 +95,7 @@ def make_shallow(shallow_file, revisions, refs):
|
||||
|
||||
def get_all_refs(ref_filter=None):
|
||||
"""Return all the existing refs in this repository, optionally filtering the refs."""
|
||||
ref_output = check_output(git_cmd + ['for-each-ref', '--format=%(refname)\t%(objecttype)\t%(*objecttype)'])
|
||||
ref_output = check_output(['git', 'for-each-ref', '--format=%(refname)\t%(objecttype)\t%(*objecttype)'])
|
||||
ref_split = [tuple(iter_extend(l.rsplit('\t'), 3)) for l in ref_output.splitlines()]
|
||||
if ref_filter:
|
||||
ref_split = (e for e in ref_split if ref_filter(*e))
|
||||
@@ -115,7 +113,7 @@ def filter_refs(refs):
|
||||
all_refs = get_all_refs()
|
||||
to_remove = set(all_refs) - set(refs)
|
||||
if to_remove:
|
||||
check_output(['xargs', '-0', '-n', '1'] + git_cmd + ['update-ref', '-d', '--no-deref'],
|
||||
check_output(['xargs', '-0', '-n', '1', 'git', 'update-ref', '-d', '--no-deref'],
|
||||
input=''.join(l + '\0' for l in to_remove))
|
||||
|
||||
|
||||
@@ -128,7 +126,7 @@ def follow_history_intersections(revisions, refs):
|
||||
if rev in seen:
|
||||
continue
|
||||
|
||||
parents = check_output(git_cmd + ['rev-parse', '%s^@' % rev]).splitlines()
|
||||
parents = check_output(['git', 'rev-parse', '%s^@' % rev]).splitlines()
|
||||
|
||||
yield rev
|
||||
seen.add(rev)
|
||||
@@ -136,12 +134,12 @@ def follow_history_intersections(revisions, refs):
|
||||
if not parents:
|
||||
continue
|
||||
|
||||
check_refs = check_output(git_cmd + ['merge-base', '--independent'] + sorted(refs)).splitlines()
|
||||
check_refs = check_output(['git', 'merge-base', '--independent'] + sorted(refs)).splitlines()
|
||||
for parent in parents:
|
||||
for ref in check_refs:
|
||||
print("Checking %s vs %s" % (parent, ref))
|
||||
try:
|
||||
merge_base = check_output(git_cmd + ['merge-base', parent, ref]).rstrip()
|
||||
merge_base = check_output(['git', 'merge-base', parent, ref]).rstrip()
|
||||
except subprocess.CalledProcessError:
|
||||
continue
|
||||
else:
|
||||
@@ -161,14 +159,14 @@ def iter_except(func, exception, start=None):
|
||||
|
||||
def shrink_repo(git_dir):
|
||||
"""Shrink the newly shallow repository, removing the unreachable objects."""
|
||||
subprocess.check_call(git_cmd + ['reflog', 'expire', '--expire-unreachable=now', '--all'])
|
||||
subprocess.check_call(git_cmd + ['repack', '-ad'])
|
||||
subprocess.check_call(['git', 'reflog', 'expire', '--expire-unreachable=now', '--all'])
|
||||
subprocess.check_call(['git', 'repack', '-ad'])
|
||||
try:
|
||||
os.unlink(os.path.join(git_dir, 'objects', 'info', 'alternates'))
|
||||
except OSError as exc:
|
||||
if exc.errno != errno.ENOENT:
|
||||
raise
|
||||
subprocess.check_call(git_cmd + ['prune', '--expire', 'now'])
|
||||
subprocess.check_call(['git', 'prune', '--expire', 'now'])
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
|
||||
@@ -84,7 +84,7 @@ webserverStartAll()
|
||||
echo "Starting webserver..."
|
||||
|
||||
$MANAGE runserver --noreload "$ADDR_PORT" \
|
||||
</dev/null >>${TOASTER_LOGS_DIR}/web.log 2>&1 \
|
||||
</dev/null >>${BUILDDIR}/toaster_web.log 2>&1 \
|
||||
& echo $! >${BUILDDIR}/.toastermain.pid
|
||||
|
||||
sleep 1
|
||||
@@ -181,14 +181,6 @@ WEBSERVER=1
|
||||
export TOASTER_BUILDSERVER=1
|
||||
ADDR_PORT="localhost:8000"
|
||||
TOASTERDIR=`dirname $BUILDDIR`
|
||||
# ${BUILDDIR}/toaster_logs/ became the default location for toaster logs
|
||||
# This is needed for implemented django-log-viewer: https://pypi.org/project/django-log-viewer/
|
||||
# If the directory does not exist, create it.
|
||||
TOASTER_LOGS_DIR="${BUILDDIR}/toaster_logs/"
|
||||
if [ ! -d $TOASTER_LOGS_DIR ]
|
||||
then
|
||||
mkdir $TOASTER_LOGS_DIR
|
||||
fi
|
||||
unset CMD
|
||||
for param in $*; do
|
||||
case $param in
|
||||
@@ -307,7 +299,7 @@ case $CMD in
|
||||
export BITBAKE_UI='toasterui'
|
||||
if [ $TOASTER_BUILDSERVER -eq 1 ] ; then
|
||||
$MANAGE runbuilds \
|
||||
</dev/null >>${TOASTER_LOGS_DIR}/toaster_runbuilds.log 2>&1 \
|
||||
</dev/null >>${BUILDDIR}/toaster_runbuilds.log 2>&1 \
|
||||
& echo $! >${BUILDDIR}/.runbuilds.pid
|
||||
else
|
||||
echo "Toaster build server not started."
|
||||
|
||||
@@ -30,23 +30,79 @@ sys.path.insert(0, join(dirname(dirname(abspath(__file__))), 'lib'))
|
||||
|
||||
import bb.cooker
|
||||
from bb.ui import toasterui
|
||||
from bb.ui import eventreplay
|
||||
|
||||
class EventPlayer:
|
||||
"""Emulate a connection to a bitbake server."""
|
||||
|
||||
def __init__(self, eventfile, variables):
|
||||
self.eventfile = eventfile
|
||||
self.variables = variables
|
||||
self.eventmask = []
|
||||
|
||||
def waitEvent(self, _timeout):
|
||||
"""Read event from the file."""
|
||||
line = self.eventfile.readline().strip()
|
||||
if not line:
|
||||
return
|
||||
try:
|
||||
event_str = json.loads(line)['vars'].encode('utf-8')
|
||||
event = pickle.loads(codecs.decode(event_str, 'base64'))
|
||||
event_name = "%s.%s" % (event.__module__, event.__class__.__name__)
|
||||
if event_name not in self.eventmask:
|
||||
return
|
||||
return event
|
||||
except ValueError as err:
|
||||
print("Failed loading ", line)
|
||||
raise err
|
||||
|
||||
def runCommand(self, command_line):
|
||||
"""Emulate running a command on the server."""
|
||||
name = command_line[0]
|
||||
|
||||
if name == "getVariable":
|
||||
var_name = command_line[1]
|
||||
variable = self.variables.get(var_name)
|
||||
if variable:
|
||||
return variable['v'], None
|
||||
return None, "Missing variable %s" % var_name
|
||||
|
||||
elif name == "getAllKeysWithFlags":
|
||||
dump = {}
|
||||
flaglist = command_line[1]
|
||||
for key, val in self.variables.items():
|
||||
try:
|
||||
if not key.startswith("__"):
|
||||
dump[key] = {
|
||||
'v': val['v'],
|
||||
'history' : val['history'],
|
||||
}
|
||||
for flag in flaglist:
|
||||
dump[key][flag] = val[flag]
|
||||
except Exception as err:
|
||||
print(err)
|
||||
return (dump, None)
|
||||
|
||||
elif name == 'setEventMask':
|
||||
self.eventmask = command_line[-1]
|
||||
return True, None
|
||||
|
||||
else:
|
||||
raise Exception("Command %s not implemented" % command_line[0])
|
||||
|
||||
def getEventHandle(self):
|
||||
"""
|
||||
This method is called by toasterui.
|
||||
The return value is passed to self.runCommand but not used there.
|
||||
"""
|
||||
pass
|
||||
|
||||
def main(argv):
|
||||
with open(argv[-1]) as eventfile:
|
||||
# load variables from the first line
|
||||
variables = None
|
||||
while line := eventfile.readline().strip():
|
||||
try:
|
||||
variables = json.loads(line)['allvariables']
|
||||
break
|
||||
except (KeyError, json.JSONDecodeError):
|
||||
continue
|
||||
if not variables:
|
||||
sys.exit("Cannot find allvariables entry in event log file %s" % argv[-1])
|
||||
eventfile.seek(0)
|
||||
variables = json.loads(eventfile.readline().strip())['allvariables']
|
||||
|
||||
params = namedtuple('ConfigParams', ['observe_only'])(True)
|
||||
player = eventreplay.EventPlayer(eventfile, variables)
|
||||
player = EventPlayer(eventfile, variables)
|
||||
|
||||
return toasterui.main(player, player, params)
|
||||
|
||||
|
||||
@@ -40,7 +40,7 @@ set cpo&vim
|
||||
|
||||
let s:maxoff = 50 " maximum number of lines to look backwards for ()
|
||||
|
||||
function! GetBBPythonIndent(lnum)
|
||||
function GetPythonIndent(lnum)
|
||||
|
||||
" If this line is explicitly joined: If the previous line was also joined,
|
||||
" line it up with that one, otherwise add two 'shiftwidth'
|
||||
@@ -257,7 +257,7 @@ let b:did_indent = 1
|
||||
setlocal indentkeys+=0\"
|
||||
|
||||
|
||||
function! BitbakeIndent(lnum)
|
||||
function BitbakeIndent(lnum)
|
||||
if !has('syntax_items')
|
||||
return -1
|
||||
endif
|
||||
@@ -315,7 +315,7 @@ function! BitbakeIndent(lnum)
|
||||
endif
|
||||
|
||||
if index(["bbPyDefRegion", "bbPyFuncRegion"], name) != -1
|
||||
let ret = GetBBPythonIndent(a:lnum)
|
||||
let ret = GetPythonIndent(a:lnum)
|
||||
" Should normally always be indented by at least one shiftwidth; but allow
|
||||
" return of -1 (defer to autoindent) or -2 (force indent to 0)
|
||||
if ret == 0
|
||||
|
||||
@@ -63,14 +63,13 @@ syn region bbVarFlagFlag matchgroup=bbArrayBrackets start="\[" end="\]\s*
|
||||
|
||||
" Includes and requires
|
||||
syn keyword bbInclude inherit include require contained
|
||||
syn match bbIncludeRest ".*$" contained contains=bbString,bbVarDeref,bbVarPyValue
|
||||
syn match bbIncludeRest ".*$" contained contains=bbString,bbVarDeref
|
||||
syn match bbIncludeLine "^\(inherit\|include\|require\)\s\+" contains=bbInclude nextgroup=bbIncludeRest
|
||||
|
||||
" Add taks and similar
|
||||
syn keyword bbStatement addtask deltask addhandler after before EXPORT_FUNCTIONS contained
|
||||
syn match bbStatementRest /[^\\]*$/ skipwhite contained contains=bbStatement,bbVarDeref,bbVarPyValue
|
||||
syn region bbStatementRestCont start=/.*\\$/ end=/^[^\\]*$/ contained contains=bbStatement,bbVarDeref,bbVarPyValue,bbContinue keepend
|
||||
syn match bbStatementLine "^\(addtask\|deltask\|addhandler\|after\|before\|EXPORT_FUNCTIONS\)\s\+" contains=bbStatement nextgroup=bbStatementRest,bbStatementRestCont
|
||||
syn match bbStatementRest ".*$" skipwhite contained contains=bbStatement
|
||||
syn match bbStatementLine "^\(addtask\|deltask\|addhandler\|after\|before\|EXPORT_FUNCTIONS\)\s\+" contains=bbStatement nextgroup=bbStatementRest
|
||||
|
||||
" OE Important Functions
|
||||
syn keyword bbOEFunctions do_fetch do_unpack do_patch do_configure do_compile do_stage do_install do_package contained
|
||||
@@ -123,7 +122,6 @@ hi def link bbPyFlag Type
|
||||
hi def link bbPyDef Statement
|
||||
hi def link bbStatement Statement
|
||||
hi def link bbStatementRest Identifier
|
||||
hi def link bbStatementRestCont Identifier
|
||||
hi def link bbOEFunctions Special
|
||||
hi def link bbVarPyValue PreProc
|
||||
hi def link bbOverrideOperator Operator
|
||||
|
||||
@@ -47,8 +47,8 @@ To install all required packages run:

To build the documentation locally, run:

$ cd doc
$ make html
$ cd documentation
$ make -f Makefile.sphinx html

The resulting HTML index page will be _build/html/index.html, and you
can browse your own copy of the locally generated documentation with
@@ -586,11 +586,10 @@ or possibly those defined in the metadata/signature handler itself. The
simplest parameter to pass is "none", which causes a set of signature
information to be written out into ``STAMPS_DIR`` corresponding to the
targets specified. The other currently available parameter is
"printdiff", which causes BitBake to try to establish the most recent
"printdiff", which causes BitBake to try to establish the closest
signature match it can (e.g. in the sstate cache) and then run
compare the matched signatures to determine the stamps and delta
where these two stamp trees diverge. This can be used to determine why
tasks need to be re-run in situations where that is not expected.
``bitbake-diffsigs`` over the matches to determine the stamps and delta
where these two stamp trees diverge.
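As a rough usage sketch (the ``-S`` option is the usual way these signature handler parameters are passed; the target name is only a placeholder)::

   $ bitbake -S none mytarget
   $ bitbake -S printdiff mytarget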

.. note::

@@ -476,14 +476,6 @@ Here are some example URLs::
easy to share metadata without removing passwords. SSH keys, ``~/.netrc``
and ``~/.ssh/config`` files can be used as alternatives.

Using tags with the git fetcher may cause surprising behaviour. Bitbake needs to
resolve the tag to a specific revision and to do that, it has to connect to and use
the upstream repository. This is because the revision the tags point at can change and
we've seen cases of this happening in well known public repositories. This can mean
many more network connections than expected and recipes may be reparsed at every build.
Source mirrors will also be bypassed as the upstream repository is the only source
of truth to resolve the revision accurately. For these reasons, whilst the fetcher
can support tags, we recommend being specific about revisions in recipes.
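For instance, a recipe would typically pin an exact revision rather than a tag (the repository URL and revision below are only illustrative)::

   SRC_URI = "git://git.example.com/myproject.git;protocol=https;branch=main"
   SRCREV = "0123456789abcdef0123456789abcdef01234567"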

.. _gitsm-fetcher:

@@ -696,41 +688,6 @@ Here is an example URL::

It can also be used when setting mirror definitions using the :term:`PREMIRRORS` variable.

.. _gcp-fetcher:

GCP Fetcher (``gs://``)
--------------------------

This submodule fetches data from a
`Google Cloud Storage Bucket <https://cloud.google.com/storage/docs/buckets>`__.
It uses the `Google Cloud Storage Python Client <https://cloud.google.com/python/docs/reference/storage/latest>`__
to check the status of objects in the bucket and download them.
The use of the Python client makes it substantially faster than using command
line tools such as gsutil.

The fetcher requires the Google Cloud Storage Python Client to be installed, along
with the gsutil tool.

The fetcher requires that the machine has valid credentials for accessing the
chosen bucket. Instructions for authentication can be found in the
`Google Cloud documentation <https://cloud.google.com/docs/authentication/provide-credentials-adc#local-dev>`__.

If it is used from the OpenEmbedded build system, the fetcher can be used for
fetching sstate artifacts from a GCS bucket by specifying the
``SSTATE_MIRRORS`` variable as shown below::

SSTATE_MIRRORS ?= "\
file://.* gs://<bucket name>/PATH \
"

The fetcher can also be used in recipes::

SRC_URI = "gs://<bucket name>/<foo_container>/<bar_file>"

However, the checksum of the file should also be provided::

SRC_URI[sha256sum] = "<sha256 string>"

.. _crate-fetcher:

Crate Fetcher (``crate://``)
@@ -834,8 +791,6 @@ Fetch submodules also exist for the following:

- OSC (``osc://``)

- S3 (``s3://``)

- Secure FTP (``sftp://``)

- Secure Shell (``ssh://``)

@@ -209,12 +209,12 @@ Following is the complete "Hello World" example.

.. note::

Without a value for :term:`PN`, the variables :term:`STAMP`, :term:`T`, and :term:`B`, prevent more
than one recipe from working. You can fix this by either setting :term:`PN` to
Without a value for PN , the variables STAMP , T , and B , prevent more
than one recipe from working. You can fix this by either setting PN to
have a value similar to what OpenEmbedded and BitBake use in the default
``bitbake.conf`` file (see previous example). Or, by manually updating each
recipe to set :term:`PN`. You will also need to include :term:`PN` as part of the :term:`STAMP`,
:term:`T`, and :term:`B` variable definitions in the ``local.conf`` file.
bitbake.conf file (see previous example). Or, by manually updating each
recipe to set PN . You will also need to include PN as part of the STAMP
, T , and B variable definitions in the local.conf file.

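A minimal sketch of the idea (the ``PN`` expression mirrors the usual ``bitbake.conf`` approach; the stamp and work paths are illustrative, not the verbatim defaults)::

   PN = "${@bb.parse.vars_from_file(d.getVar('FILE', False), d)[0] or 'defaultpkgname'}"
   STAMP = "${TMPDIR}/stamps/${PN}"
   T = "${TMPDIR}/work/${PN}/temp"
   B = "${TMPDIR}/work/${PN}"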
The ``TMPDIR`` variable establishes a directory that BitBake uses
for build output and intermediate files other than the cached
@@ -319,9 +319,9 @@ Following is the complete "Hello World" example.

.. note::

We are setting both ``LAYERSERIES_CORENAMES`` and :term:`LAYERSERIES_COMPAT` in this particular case, because we
We are setting both LAYERSERIES_CORENAMES and LAYERSERIES_COMPAT in this particular case, because we
are using bitbake without OpenEmbedded.
You should usually just use :term:`LAYERSERIES_COMPAT` to specify the OE-Core versions for which your layer
You should usually just use LAYERSERIES_COMPAT to specify the OE-Core versions for which your layer
is compatible, and add the meta-openembedded layer to your project.

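A hedged sketch of what such a ``conf/layer.conf`` entry might look like (the layer name and release series names are placeholders)::

   BBFILE_COLLECTIONS += "mylayer"
   LAYERSERIES_COMPAT_mylayer = "kirkstone nanbield"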
You need to create the recipe file next. Inside your layer at the

@@ -1519,12 +1519,6 @@ functionality of the task:
released. You can use this variable flag to accomplish mutual
exclusion.

- ``[network]``: When set to "1", allows a task to access the network. By
default, only the ``do_fetch`` task is granted network access. Recipes
shouldn't access the network outside of ``do_fetch`` as it usually
undermines fetcher source mirroring, image and licence manifests, software
auditing and supply chain security (a one-line example follows this list).

- ``[noexec]``: When set to "1", marks the task as being empty, with
no execution required. You can use the ``[noexec]`` flag to set up
tasks as dependency placeholders, or to disable tasks defined

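A one-line sketch of granting network access to a (hypothetical) task::

   do_run_network_tests[network] = "1"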
@@ -1,91 +0,0 @@
.. SPDX-License-Identifier: CC-BY-2.5

================
Variable Context
================

Variables might only have an impact or can be used in certain contexts. Some
should only be used in global files like ``.conf``, while others are intended only
for local files like ``.bb``. This chapter aims to describe some important variable
contexts.

.. _ref-varcontext-configuration:

BitBake's own configuration
===========================

Variables starting with ``BB_`` usually configure the behaviour of BitBake itself.
For example, one could configure:

- System resources, like disk space to be used (:term:`BB_DISKMON_DIRS`),
or the number of tasks to be run in parallel by BitBake (:term:`BB_NUMBER_THREADS`).

- How the fetchers shall behave, e.g., :term:`BB_FETCH_PREMIRRORONLY` is used
by BitBake to determine if BitBake's fetcher shall search only
:term:`PREMIRRORS` for files.

Those variables are usually configured globally.
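A short sketch of such global settings as they might appear in a configuration file such as ``local.conf`` (the values are illustrative)::

   BB_NUMBER_THREADS = "8"
   BB_DISKMON_DIRS = "STOPTASKS,${TMPDIR},1G,100K"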
BitBake configuration
=====================

There are variables:

- Like :term:`B` or :term:`T`, that are used to specify directories used by
BitBake during the build of a particular recipe. Those variables are
specified in ``bitbake.conf``. Some, like :term:`B`, are quite often
overwritten in recipes.

- Starting with ``FAKEROOT``, to configure how the ``fakeroot`` command is
handled. Those are usually set by ``bitbake.conf`` and might get adapted in a
``bbclass``.

- Detailing where BitBake will store and fetch information from, for
data reuse between build runs like :term:`CACHE`, :term:`DL_DIR` or
:term:`PERSISTENT_DIR`. Those are usually global.


Layers and files
================

Variables starting with ``LAYER`` configure how BitBake handles layers.
Additionally, variables starting with ``BB`` configure how layers and files are
handled. For example:

- :term:`LAYERDEPENDS` is used to configure on which layers a given layer
depends.

- The configured layers are contained in :term:`BBLAYERS` and files in
:term:`BBFILES`.

Those variables are often used in the files ``layer.conf`` and ``bblayers.conf``.

Recipes and packages
====================

Variables handling recipes and packages can be split into:

- :term:`PN`, :term:`PV` or :term:`PF` for example, contain information about
the name or revision of a recipe or package. Usually, the default set in
``bitbake.conf`` is used, but those are from time to time overwritten in
recipes.

- :term:`SUMMARY`, :term:`DESCRIPTION`, :term:`LICENSE` or :term:`HOMEPAGE`
contain the expected information and should be set specifically for every
recipe.

- In recipes, variables are also used to control build and runtime
dependencies between recipes/packages with other recipes/packages. The
most common should be: :term:`PROVIDES`, :term:`RPROVIDES`, :term:`DEPENDS`,
and :term:`RDEPENDS`.

- There are further variables starting with ``SRC`` that specify the sources in
a recipe like :term:`SRC_URI` or :term:`SRCDATE`. Those are also usually set
in recipes.

- Which version or provider of a recipe should be given preference when
multiple recipes would provide the same item, is controlled by variables
starting with ``PREFERRED_``. Those are normally set in the configuration
files of a ``MACHINE`` or ``DISTRO``.
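As a compact illustration of the recipe-level variables above, a hypothetical recipe might set (every name, URL and value here is invented for the example)::

   SUMMARY = "Example utility"
   DESCRIPTION = "A hypothetical recipe used only to illustrate common metadata."
   HOMEPAGE = "https://example.com/myutil"
   LICENSE = "MIT"
   DEPENDS = "zlib"
   RDEPENDS:${PN} = "bash"
   SRC_URI = "https://example.com/myutil-${PV}.tar.gz"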
@@ -424,7 +424,7 @@ overview of their function and contents.

Example usage::

BB_HASHSERVE_UPSTREAM = "hashserv.yoctoproject.org:8686"
BB_HASHSERVE_UPSTREAM = "hashserv.yocto.io:8687"

:term:`BB_INVALIDCONF`
Used in combination with the ``ConfigParsed`` event to trigger
@@ -432,15 +432,6 @@ overview of their function and contents.
``ConfigParsed`` event can set the variable to trigger the re-parse.
You must be careful to avoid recursive loops with this functionality.

:term:`BB_LOADFACTOR_MAX`
Setting this to a value will cause BitBake to check the system load
average before executing new tasks. If the load average is above
the number of CPUs multiplied by this factor, no new task will be started
unless there is no task executing. A value of "1.5" has been found to
work reasonably. This is helpful for systems which don't have pressure
regulation enabled, which is more granular. Pressure values take
precedence over loadfactor.
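Example usage (the value is the one the text above suggests)::

   BB_LOADFACTOR_MAX = "1.5"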
:term:`BB_LOGCONFIG`
Specifies the name of a config file that contains the user logging
configuration. See
@@ -572,7 +563,7 @@ overview of their function and contents.
:term:`BB_RUNFMT` variable is undefined and the run filenames get
created using the following form::

run.{func}.{pid}
run.{task}.{pid}

If you want to force run files to take a specific name, you can set this
variable in a configuration file.
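For example, a sketch that forces run files onto a fixed pattern, using only the substitutions shown above::

   BB_RUNFMT = "run.{task}.{pid}"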
@@ -929,9 +920,9 @@ overview of their function and contents.
section.

:term:`BBPATH`
A colon-separated list used by BitBake to locate class (``.bbclass``)
and configuration (``.conf``) files. This variable is analogous to the
``PATH`` variable.
Used by BitBake to locate class (``.bbclass``) and configuration
(``.conf``) files. This variable is analogous to the ``PATH``
variable.

If you run BitBake from a directory outside of the build directory,
you must be sure to set :term:`BBPATH` to point to the build directory.
@@ -1081,11 +1072,6 @@ overview of their function and contents.
environment variable. The value is a colon-separated list of
directories that are searched left-to-right in order.

:term:`FILE_LAYERNAME`
During parsing and task execution, this is set to the name of the
layer containing the recipe file. Code can use this to identify which
layer a recipe is from.

:term:`GITDIR`
The directory in which a local copy of a Git repository is stored
when it is cloned.
@@ -1179,8 +1165,8 @@ overview of their function and contents.
order.

:term:`OVERRIDES`
A colon-separated list that BitBake uses to control what variables are
overridden after BitBake parses recipes and configuration files.
BitBake uses :term:`OVERRIDES` to control what variables are overridden
after BitBake parses recipes and configuration files.

Following is a simple example that uses an overrides list based on
machine architectures: OVERRIDES = "arm:x86:mips:powerpc" You can
@@ -13,7 +13,6 @@ BitBake User Manual
bitbake-user-manual/bitbake-user-manual-intro
bitbake-user-manual/bitbake-user-manual-execution
bitbake-user-manual/bitbake-user-manual-metadata
bitbake-user-manual/bitbake-user-manual-ref-variables-context
bitbake-user-manual/bitbake-user-manual-fetching
bitbake-user-manual/bitbake-user-manual-ref-variables
bitbake-user-manual/bitbake-user-manual-hello

@@ -4,15 +4,15 @@
|
||||
BitBake Supported Release Manuals
|
||||
=================================
|
||||
|
||||
*******************************
|
||||
Release Series 5.0 (scarthgap)
|
||||
*******************************
|
||||
*****************************
|
||||
Release Series 4.1 (langdale)
|
||||
*****************************
|
||||
|
||||
- :yocto_docs:`BitBake 2.8 User Manual </bitbake/2.8/>`
|
||||
- :yocto_docs:`BitBake 2.2 User Manual </bitbake/2.2/>`
|
||||
|
||||
******************************
|
||||
Release Series 4.0 (kirkstone)
|
||||
******************************
|
||||
*****************************
|
||||
Release Series 4.0 (kirstone)
|
||||
*****************************
|
||||
|
||||
- :yocto_docs:`BitBake 2.0 User Manual </bitbake/2.0/>`
|
||||
|
||||
@@ -26,24 +26,6 @@ Release Series 3.1 (dunfell)
|
||||
BitBake Outdated Release Manuals
|
||||
================================
|
||||
|
||||
*******************************
|
||||
Release Series 4.3 (nanbield)
|
||||
*******************************
|
||||
|
||||
- :yocto_docs:`BitBake 2.6 User Manual </bitbake/2.6/>`
|
||||
|
||||
*******************************
|
||||
Release Series 4.2 (mickledore)
|
||||
*******************************
|
||||
|
||||
- :yocto_docs:`BitBake 2.4 User Manual </bitbake/2.4/>`
|
||||
|
||||
*****************************
|
||||
Release Series 4.1 (langdale)
|
||||
*****************************
|
||||
|
||||
- :yocto_docs:`BitBake 2.2 User Manual </bitbake/2.2/>`
|
||||
|
||||
******************************
|
||||
Release Series 3.4 (honister)
|
||||
******************************
|
||||
|
||||
@@ -9,19 +9,12 @@
|
||||
# SPDX-License-Identifier: GPL-2.0-only
|
||||
#
|
||||
|
||||
__version__ = "2.9.1"
|
||||
__version__ = "2.4.0"
|
||||
|
||||
import sys
|
||||
if sys.version_info < (3, 8, 0):
|
||||
raise RuntimeError("Sorry, python 3.8.0 or later is required for this version of bitbake")
|
||||
|
||||
if sys.version_info < (3, 10, 0):
|
||||
# With python 3.8 and 3.9, we see errors of "libgcc_s.so.1 must be installed for pthread_cancel to work"
|
||||
# https://stackoverflow.com/questions/64797838/libgcc-s-so-1-must-be-installed-for-pthread-cancel-to-work
|
||||
# https://bugs.ams1.psf.io/issue42888
|
||||
# so ensure libgcc_s is loaded early on
|
||||
import ctypes
|
||||
libgcc_s = ctypes.CDLL('libgcc_s.so.1')
|
||||
|
||||
class BBHandledException(Exception):
|
||||
"""
|
||||
@@ -36,7 +29,6 @@ class BBHandledException(Exception):
|
||||
|
||||
import os
|
||||
import logging
|
||||
from collections import namedtuple
|
||||
|
||||
|
||||
class NullHandler(logging.Handler):
|
||||
@@ -104,6 +96,26 @@ class BBLoggerAdapter(logging.LoggerAdapter, BBLoggerMixin):
|
||||
self.setup_bblogger(logger.name)
|
||||
super().__init__(logger, *args, **kwargs)
|
||||
|
||||
if sys.version_info < (3, 6):
|
||||
# These properties were added in Python 3.6. Add them in older versions
|
||||
# for compatibility
|
||||
@property
|
||||
def manager(self):
|
||||
return self.logger.manager
|
||||
|
||||
@manager.setter
|
||||
def manager(self, value):
|
||||
self.logger.manager = value
|
||||
|
||||
@property
|
||||
def name(self):
|
||||
return self.logger.name
|
||||
|
||||
def __repr__(self):
|
||||
logger = self.logger
|
||||
level = logger.getLevelName(logger.getEffectiveLevel())
|
||||
return '<%s %s (%s)>' % (self.__class__.__name__, logger.name, level)
|
||||
|
||||
logging.LoggerAdapter = BBLoggerAdapter
|
||||
|
||||
logger = logging.getLogger("BitBake")
|
||||
@@ -208,14 +220,3 @@ def deprecate_import(current, modulename, fromlist, renames = None):
|
||||
|
||||
setattr(sys.modules[current], newname, newobj)
|
||||
|
||||
TaskData = namedtuple("TaskData", [
|
||||
"pn",
|
||||
"taskname",
|
||||
"fn",
|
||||
"deps",
|
||||
"provides",
|
||||
"taskhash",
|
||||
"unihash",
|
||||
"hashfn",
|
||||
"taskhash_deps",
|
||||
])
|
||||
|
||||
@@ -1,215 +0,0 @@
|
||||
#! /usr/bin/env python3
|
||||
#
|
||||
# Copyright 2023 by Garmin Ltd. or its subsidiaries
|
||||
#
|
||||
# SPDX-License-Identifier: MIT
|
||||
|
||||
|
||||
import sys
|
||||
import ctypes
|
||||
import os
|
||||
import errno
|
||||
import pwd
|
||||
import grp
|
||||
|
||||
libacl = ctypes.CDLL("libacl.so.1", use_errno=True)
|
||||
|
||||
|
||||
ACL_TYPE_ACCESS = 0x8000
|
||||
ACL_TYPE_DEFAULT = 0x4000
|
||||
|
||||
ACL_FIRST_ENTRY = 0
|
||||
ACL_NEXT_ENTRY = 1
|
||||
|
||||
ACL_UNDEFINED_TAG = 0x00
|
||||
ACL_USER_OBJ = 0x01
|
||||
ACL_USER = 0x02
|
||||
ACL_GROUP_OBJ = 0x04
|
||||
ACL_GROUP = 0x08
|
||||
ACL_MASK = 0x10
|
||||
ACL_OTHER = 0x20
|
||||
|
||||
ACL_READ = 0x04
|
||||
ACL_WRITE = 0x02
|
||||
ACL_EXECUTE = 0x01
|
||||
|
||||
acl_t = ctypes.c_void_p
|
||||
acl_entry_t = ctypes.c_void_p
|
||||
acl_permset_t = ctypes.c_void_p
|
||||
acl_perm_t = ctypes.c_uint
|
||||
|
||||
acl_tag_t = ctypes.c_int
|
||||
|
||||
libacl.acl_free.argtypes = [acl_t]
|
||||
|
||||
|
||||
def acl_free(acl):
|
||||
libacl.acl_free(acl)
|
||||
|
||||
|
||||
libacl.acl_get_file.restype = acl_t
|
||||
libacl.acl_get_file.argtypes = [ctypes.c_char_p, ctypes.c_uint]
|
||||
|
||||
|
||||
def acl_get_file(path, typ):
|
||||
acl = libacl.acl_get_file(os.fsencode(path), typ)
|
||||
if acl is None:
|
||||
err = ctypes.get_errno()
|
||||
raise OSError(err, os.strerror(err), str(path))
|
||||
|
||||
return acl
|
||||
|
||||
|
||||
libacl.acl_get_entry.argtypes = [acl_t, ctypes.c_int, ctypes.c_void_p]
|
||||
|
||||
|
||||
def acl_get_entry(acl, entry_id):
|
||||
entry = acl_entry_t()
|
||||
ret = libacl.acl_get_entry(acl, entry_id, ctypes.byref(entry))
|
||||
if ret < 0:
|
||||
err = ctypes.get_errno()
|
||||
raise OSError(err, os.strerror(err))
|
||||
|
||||
if ret == 0:
|
||||
return None
|
||||
|
||||
return entry
|
||||
|
||||
|
||||
libacl.acl_get_tag_type.argtypes = [acl_entry_t, ctypes.c_void_p]
|
||||
|
||||
|
||||
def acl_get_tag_type(entry_d):
|
||||
tag = acl_tag_t()
|
||||
ret = libacl.acl_get_tag_type(entry_d, ctypes.byref(tag))
|
||||
if ret < 0:
|
||||
err = ctypes.get_errno()
|
||||
raise OSError(err, os.strerror(err))
|
||||
return tag.value
|
||||
|
||||
|
||||
libacl.acl_get_qualifier.restype = ctypes.c_void_p
|
||||
libacl.acl_get_qualifier.argtypes = [acl_entry_t]
|
||||
|
||||
|
||||
def acl_get_qualifier(entry_d):
|
||||
ret = libacl.acl_get_qualifier(entry_d)
|
||||
if ret is None:
|
||||
err = ctypes.get_errno()
|
||||
raise OSError(err, os.strerror(err))
|
||||
return ctypes.c_void_p(ret)
|
||||
|
||||
|
||||
libacl.acl_get_permset.argtypes = [acl_entry_t, ctypes.c_void_p]
|
||||
|
||||
|
||||
def acl_get_permset(entry_d):
|
||||
permset = acl_permset_t()
|
||||
ret = libacl.acl_get_permset(entry_d, ctypes.byref(permset))
|
||||
if ret < 0:
|
||||
err = ctypes.get_errno()
|
||||
raise OSError(err, os.strerror(err))
|
||||
|
||||
return permset
|
||||
|
||||
|
||||
libacl.acl_get_perm.argtypes = [acl_permset_t, acl_perm_t]
|
||||
|
||||
|
||||
def acl_get_perm(permset_d, perm):
|
||||
ret = libacl.acl_get_perm(permset_d, perm)
|
||||
if ret < 0:
|
||||
err = ctypes.get_errno()
|
||||
raise OSError(err, os.strerror(err))
|
||||
return bool(ret)
|
||||
|
||||
|
||||
class Entry(object):
|
||||
def __init__(self, tag, qualifier, mode):
|
||||
self.tag = tag
|
||||
self.qualifier = qualifier
|
||||
self.mode = mode
|
||||
|
||||
def __str__(self):
|
||||
typ = ""
|
||||
qual = ""
|
||||
if self.tag == ACL_USER:
|
||||
typ = "user"
|
||||
qual = pwd.getpwuid(self.qualifier).pw_name
|
||||
elif self.tag == ACL_GROUP:
|
||||
typ = "group"
|
||||
qual = grp.getgrgid(self.qualifier).gr_name
|
||||
elif self.tag == ACL_USER_OBJ:
|
||||
typ = "user"
|
||||
elif self.tag == ACL_GROUP_OBJ:
|
||||
typ = "group"
|
||||
elif self.tag == ACL_MASK:
|
||||
typ = "mask"
|
||||
elif self.tag == ACL_OTHER:
|
||||
typ = "other"
|
||||
|
||||
r = "r" if self.mode & ACL_READ else "-"
|
||||
w = "w" if self.mode & ACL_WRITE else "-"
|
||||
x = "x" if self.mode & ACL_EXECUTE else "-"
|
||||
|
||||
return f"{typ}:{qual}:{r}{w}{x}"
|
||||
|
||||
|
||||
class ACL(object):
|
||||
def __init__(self, acl):
|
||||
self.acl = acl
|
||||
|
||||
def __del__(self):
|
||||
acl_free(self.acl)
|
||||
|
||||
def entries(self):
|
||||
entry_id = ACL_FIRST_ENTRY
|
||||
while True:
|
||||
entry = acl_get_entry(self.acl, entry_id)
|
||||
if entry is None:
|
||||
break
|
||||
|
||||
permset = acl_get_permset(entry)
|
||||
|
||||
mode = 0
|
||||
for m in (ACL_READ, ACL_WRITE, ACL_EXECUTE):
|
||||
if acl_get_perm(permset, m):
|
||||
mode |= m
|
||||
|
||||
qualifier = None
|
||||
tag = acl_get_tag_type(entry)
|
||||
|
||||
if tag == ACL_USER or tag == ACL_GROUP:
|
||||
qual = acl_get_qualifier(entry)
|
||||
qualifier = ctypes.cast(qual, ctypes.POINTER(ctypes.c_int))[0]
|
||||
|
||||
yield Entry(tag, qualifier, mode)
|
||||
|
||||
entry_id = ACL_NEXT_ENTRY
|
||||
|
||||
@classmethod
|
||||
def from_path(cls, path, typ):
|
||||
acl = acl_get_file(path, typ)
|
||||
return cls(acl)
|
||||
|
||||
|
||||
def main():
|
||||
import argparse
|
||||
import pwd
|
||||
import grp
|
||||
from pathlib import Path
|
||||
|
||||
parser = argparse.ArgumentParser()
|
||||
parser.add_argument("path", help="File Path", type=Path)
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
acl = ACL.from_path(args.path, ACL_TYPE_ACCESS)
|
||||
for entry in acl.entries():
|
||||
print(str(entry))
|
||||
|
||||
return 0
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
sys.exit(main())
|
||||
@@ -4,13 +4,30 @@
# SPDX-License-Identifier: GPL-2.0-only
#

import itertools
import json

# The Python async server defaults to a 64K receive buffer, so we hardcode our
# maximum chunk size. It would be better if the client and server reported to
# each other what the maximum chunk sizes were, but that will slow down the
# connection setup with a round trip delay so I'd rather not do that unless it
# is necessary
DEFAULT_MAX_CHUNK = 32 * 1024


def chunkify(msg, max_chunk):
    if len(msg) < max_chunk - 1:
        yield ''.join((msg, "\n"))
    else:
        yield ''.join((json.dumps({
            'chunk-stream': None
        }), "\n"))

        args = [iter(msg)] * (max_chunk - 1)
        for m in map(''.join, itertools.zip_longest(*args, fillvalue='')):
            yield ''.join(itertools.chain(m, "\n"))
        yield "\n"


from .client import AsyncClient, Client
from .serv import AsyncServer, AsyncServerConnection
from .connection import DEFAULT_MAX_CHUNK
from .exceptions import (
    ClientError,
    ServerError,
    ConnectionClosedError,
    InvokeError,
)
from .serv import AsyncServer, AsyncServerConnection, ClientError, ServerError
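The chunked line framing above is small enough to exercise directly. Below is a hedged, self-contained sketch: the helper is inlined (copied from the hunk above) rather than imported from ``bb.asyncrpc``, and the payloads are made up for illustration::

    import itertools
    import json

    # Inlined copy of the helper shown in the diff above, so the sketch runs
    # standalone; in-tree it lives in the bb.asyncrpc package.
    DEFAULT_MAX_CHUNK = 32 * 1024

    def chunkify(msg, max_chunk):
        if len(msg) < max_chunk - 1:
            yield ''.join((msg, "\n"))
        else:
            yield ''.join((json.dumps({'chunk-stream': None}), "\n"))
            args = [iter(msg)] * (max_chunk - 1)
            for m in map(''.join, itertools.zip_longest(*args, fillvalue='')):
                yield ''.join(itertools.chain(m, "\n"))
            yield "\n"

    # A small message fits in one chunk: a single newline-terminated JSON line.
    assert list(chunkify(json.dumps({"ping": {}}), DEFAULT_MAX_CHUNK)) == ['{"ping": {}}\n']

    # A message larger than the chunk size is announced with a "chunk-stream"
    # marker line, split into fixed-width lines, and terminated by a blank line.
    payload = json.dumps({"data": "x" * 100})
    frames = list(chunkify(payload, max_chunk=32))
    assert json.loads(frames[0]) == {"chunk-stream": None}
    assert frames[-1] == "\n"
    reassembled = "".join(f.rstrip("\n") for f in frames[1:-1])
    assert json.loads(reassembled) == {"data": "x" * 100}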
@@ -10,66 +10,22 @@ import json
|
||||
import os
|
||||
import socket
|
||||
import sys
|
||||
import re
|
||||
import contextlib
|
||||
from threading import Thread
|
||||
from .connection import StreamConnection, WebsocketConnection, DEFAULT_MAX_CHUNK
|
||||
from .exceptions import ConnectionClosedError, InvokeError
|
||||
|
||||
UNIX_PREFIX = "unix://"
|
||||
WS_PREFIX = "ws://"
|
||||
WSS_PREFIX = "wss://"
|
||||
|
||||
ADDR_TYPE_UNIX = 0
|
||||
ADDR_TYPE_TCP = 1
|
||||
ADDR_TYPE_WS = 2
|
||||
|
||||
WEBSOCKETS_MIN_VERSION = (9, 1)
|
||||
# Need websockets 10 with python 3.10+
|
||||
if sys.version_info >= (3, 10, 0):
|
||||
WEBSOCKETS_MIN_VERSION = (10, 0)
|
||||
|
||||
|
||||
def parse_address(addr):
|
||||
if addr.startswith(UNIX_PREFIX):
|
||||
return (ADDR_TYPE_UNIX, (addr[len(UNIX_PREFIX) :],))
|
||||
elif addr.startswith(WS_PREFIX) or addr.startswith(WSS_PREFIX):
|
||||
return (ADDR_TYPE_WS, (addr,))
|
||||
else:
|
||||
m = re.match(r"\[(?P<host>[^\]]*)\]:(?P<port>\d+)$", addr)
|
||||
if m is not None:
|
||||
host = m.group("host")
|
||||
port = m.group("port")
|
||||
else:
|
||||
host, port = addr.split(":")
|
||||
|
||||
return (ADDR_TYPE_TCP, (host, int(port)))
|
||||
from . import chunkify, DEFAULT_MAX_CHUNK
|
||||
|
||||
|
||||
class AsyncClient(object):
|
||||
def __init__(
|
||||
self,
|
||||
proto_name,
|
||||
proto_version,
|
||||
logger,
|
||||
timeout=30,
|
||||
server_headers=False,
|
||||
headers={},
|
||||
):
|
||||
self.socket = None
|
||||
def __init__(self, proto_name, proto_version, logger, timeout=30):
|
||||
self.reader = None
|
||||
self.writer = None
|
||||
self.max_chunk = DEFAULT_MAX_CHUNK
|
||||
self.proto_name = proto_name
|
||||
self.proto_version = proto_version
|
||||
self.logger = logger
|
||||
self.timeout = timeout
|
||||
self.needs_server_headers = server_headers
|
||||
self.server_headers = {}
|
||||
self.headers = headers
|
||||
|
||||
async def connect_tcp(self, address, port):
|
||||
async def connect_sock():
|
||||
reader, writer = await asyncio.open_connection(address, port)
|
||||
return StreamConnection(reader, writer, self.timeout, self.max_chunk)
|
||||
return await asyncio.open_connection(address, port)
|
||||
|
||||
self._connect_sock = connect_sock
|
||||
|
||||
@@ -84,81 +40,27 @@ class AsyncClient(object):
|
||||
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM, 0)
|
||||
sock.connect(os.path.basename(path))
|
||||
finally:
|
||||
os.chdir(cwd)
|
||||
reader, writer = await asyncio.open_unix_connection(sock=sock)
|
||||
return StreamConnection(reader, writer, self.timeout, self.max_chunk)
|
||||
|
||||
self._connect_sock = connect_sock
|
||||
|
||||
async def connect_websocket(self, uri):
|
||||
import websockets
|
||||
|
||||
try:
|
||||
version = tuple(
|
||||
int(v)
|
||||
for v in websockets.__version__.split(".")[
|
||||
0 : len(WEBSOCKETS_MIN_VERSION)
|
||||
]
|
||||
)
|
||||
except ValueError:
|
||||
raise ImportError(
|
||||
f"Unable to parse websockets version '{websockets.__version__}'"
|
||||
)
|
||||
|
||||
if version < WEBSOCKETS_MIN_VERSION:
|
||||
min_ver_str = ".".join(str(v) for v in WEBSOCKETS_MIN_VERSION)
|
||||
raise ImportError(
|
||||
f"Websockets version {websockets.__version__} is less than minimum required version {min_ver_str}"
|
||||
)
|
||||
|
||||
async def connect_sock():
|
||||
websocket = await websockets.connect(
|
||||
uri,
|
||||
ping_interval=None,
|
||||
open_timeout=self.timeout,
|
||||
)
|
||||
return WebsocketConnection(websocket, self.timeout)
|
||||
os.chdir(cwd)
|
||||
return await asyncio.open_unix_connection(sock=sock)
|
||||
|
||||
self._connect_sock = connect_sock
|
||||
|
||||
async def setup_connection(self):
|
||||
# Send headers
|
||||
await self.socket.send("%s %s" % (self.proto_name, self.proto_version))
|
||||
await self.socket.send(
|
||||
"needs-headers: %s" % ("true" if self.needs_server_headers else "false")
|
||||
)
|
||||
for k, v in self.headers.items():
|
||||
await self.socket.send("%s: %s" % (k, v))
|
||||
|
||||
# End of headers
|
||||
await self.socket.send("")
|
||||
|
||||
self.server_headers = {}
|
||||
if self.needs_server_headers:
|
||||
while True:
|
||||
line = await self.socket.recv()
|
||||
if not line:
|
||||
# End headers
|
||||
break
|
||||
tag, value = line.split(":", 1)
|
||||
self.server_headers[tag.lower()] = value.strip()
|
||||
|
||||
async def get_header(self, tag, default):
|
||||
await self.connect()
|
||||
return self.server_headers.get(tag, default)
|
||||
s = '%s %s\n\n' % (self.proto_name, self.proto_version)
|
||||
self.writer.write(s.encode("utf-8"))
|
||||
await self.writer.drain()
|
||||
|
||||
async def connect(self):
|
||||
if self.socket is None:
|
||||
self.socket = await self._connect_sock()
|
||||
if self.reader is None or self.writer is None:
|
||||
(self.reader, self.writer) = await self._connect_sock()
|
||||
await self.setup_connection()
|
||||
|
||||
async def disconnect(self):
|
||||
if self.socket is not None:
|
||||
await self.socket.close()
|
||||
self.socket = None
|
||||
|
||||
async def close(self):
|
||||
await self.disconnect()
|
||||
self.reader = None
|
||||
|
||||
if self.writer is not None:
|
||||
self.writer.close()
|
||||
self.writer = None
|
||||
|
||||
async def _send_wrapper(self, proc):
|
||||
count = 0
|
||||
@@ -169,7 +71,6 @@ class AsyncClient(object):
|
||||
except (
|
||||
OSError,
|
||||
ConnectionError,
|
||||
ConnectionClosedError,
|
||||
json.JSONDecodeError,
|
||||
UnicodeDecodeError,
|
||||
) as e:
|
||||
@@ -181,27 +82,49 @@ class AsyncClient(object):
|
||||
await self.close()
|
||||
count += 1
|
||||
|
||||
def check_invoke_error(self, msg):
|
||||
if isinstance(msg, dict) and "invoke-error" in msg:
|
||||
raise InvokeError(msg["invoke-error"]["message"])
|
||||
async def send_message(self, msg):
|
||||
async def get_line():
|
||||
try:
|
||||
line = await asyncio.wait_for(self.reader.readline(), self.timeout)
|
||||
except asyncio.TimeoutError:
|
||||
raise ConnectionError("Timed out waiting for server")
|
||||
|
||||
if not line:
|
||||
raise ConnectionError("Connection closed")
|
||||
|
||||
line = line.decode("utf-8")
|
||||
|
||||
if not line.endswith("\n"):
|
||||
raise ConnectionError("Bad message %r" % (line))
|
||||
|
||||
return line
|
||||
|
||||
async def invoke(self, msg):
|
||||
async def proc():
|
||||
await self.socket.send_message(msg)
|
||||
return await self.socket.recv_message()
|
||||
for c in chunkify(json.dumps(msg), self.max_chunk):
|
||||
self.writer.write(c.encode("utf-8"))
|
||||
await self.writer.drain()
|
||||
|
||||
result = await self._send_wrapper(proc)
|
||||
self.check_invoke_error(result)
|
||||
return result
|
||||
l = await get_line()
|
||||
|
||||
m = json.loads(l)
|
||||
if m and "chunk-stream" in m:
|
||||
lines = []
|
||||
while True:
|
||||
l = (await get_line()).rstrip("\n")
|
||||
if not l:
|
||||
break
|
||||
lines.append(l)
|
||||
|
||||
m = json.loads("".join(lines))
|
||||
|
||||
return m
|
||||
|
||||
return await self._send_wrapper(proc)
|
||||
|
||||
async def ping(self):
|
||||
return await self.invoke({"ping": {}})
|
||||
|
||||
async def __aenter__(self):
|
||||
return self
|
||||
|
||||
async def __aexit__(self, exc_type, exc_value, traceback):
|
||||
await self.close()
|
||||
return await self.send_message(
|
||||
{'ping': {}}
|
||||
)
|
||||
|
||||
|
||||
class Client(object):
|
||||
@@ -219,7 +142,7 @@ class Client(object):
|
||||
# required (but harmless) with it.
|
||||
asyncio.set_event_loop(self.loop)
|
||||
|
||||
self._add_methods("connect_tcp", "ping")
|
||||
self._add_methods('connect_tcp', 'ping')
|
||||
|
||||
@abc.abstractmethod
|
||||
def _get_async_client(self):
|
||||
@@ -248,19 +171,8 @@ class Client(object):
|
||||
def max_chunk(self, value):
|
||||
self.client.max_chunk = value
|
||||
|
||||
def disconnect(self):
|
||||
self.loop.run_until_complete(self.client.close())
|
||||
|
||||
def close(self):
|
||||
if self.loop:
|
||||
self.loop.run_until_complete(self.client.close())
|
||||
self.loop.run_until_complete(self.client.close())
|
||||
if sys.version_info >= (3, 6):
|
||||
self.loop.run_until_complete(self.loop.shutdown_asyncgens())
|
||||
self.loop.close()
|
||||
self.loop = None
|
||||
|
||||
def __enter__(self):
|
||||
return self
|
||||
|
||||
def __exit__(self, exc_type, exc_value, traceback):
|
||||
self.close()
|
||||
return False
|
||||
self.loop.close()
|
||||
|
||||
@@ -1,146 +0,0 @@
|
||||
#
|
||||
# Copyright BitBake Contributors
|
||||
#
|
||||
# SPDX-License-Identifier: GPL-2.0-only
|
||||
#
|
||||
|
||||
import asyncio
|
||||
import itertools
|
||||
import json
|
||||
from datetime import datetime
|
||||
from .exceptions import ClientError, ConnectionClosedError
|
||||
|
||||
|
||||
# The Python async server defaults to a 64K receive buffer, so we hardcode our
|
||||
# maximum chunk size. It would be better if the client and server reported to
|
||||
# each other what the maximum chunk sizes were, but that will slow down the
|
||||
# connection setup with a round trip delay so I'd rather not do that unless it
|
||||
# is necessary
|
||||
DEFAULT_MAX_CHUNK = 32 * 1024
|
||||
|
||||
|
||||
def chunkify(msg, max_chunk):
|
||||
if len(msg) < max_chunk - 1:
|
||||
yield "".join((msg, "\n"))
|
||||
else:
|
||||
yield "".join((json.dumps({"chunk-stream": None}), "\n"))
|
||||
|
||||
args = [iter(msg)] * (max_chunk - 1)
|
||||
for m in map("".join, itertools.zip_longest(*args, fillvalue="")):
|
||||
yield "".join(itertools.chain(m, "\n"))
|
||||
yield "\n"
|
||||
|
||||
|
||||
def json_serialize(obj):
|
||||
if isinstance(obj, datetime):
|
||||
return obj.isoformat()
|
||||
raise TypeError("Type %s not serializeable" % type(obj))
|
||||
|
||||
|
||||
class StreamConnection(object):
|
||||
def __init__(self, reader, writer, timeout, max_chunk=DEFAULT_MAX_CHUNK):
|
||||
self.reader = reader
|
||||
self.writer = writer
|
||||
self.timeout = timeout
|
||||
self.max_chunk = max_chunk
|
||||
|
||||
@property
|
||||
def address(self):
|
||||
return self.writer.get_extra_info("peername")
|
||||
|
||||
async def send_message(self, msg):
|
||||
for c in chunkify(json.dumps(msg, default=json_serialize), self.max_chunk):
|
||||
self.writer.write(c.encode("utf-8"))
|
||||
await self.writer.drain()
|
||||
|
||||
async def recv_message(self):
|
||||
l = await self.recv()
|
||||
|
||||
m = json.loads(l)
|
||||
if not m:
|
||||
return m
|
||||
|
||||
if "chunk-stream" in m:
|
||||
lines = []
|
||||
while True:
|
||||
l = await self.recv()
|
||||
if not l:
|
||||
break
|
||||
lines.append(l)
|
||||
|
||||
m = json.loads("".join(lines))
|
||||
|
||||
return m
|
||||
|
||||
async def send(self, msg):
|
||||
self.writer.write(("%s\n" % msg).encode("utf-8"))
|
||||
await self.writer.drain()
|
||||
|
||||
async def recv(self):
|
||||
if self.timeout < 0:
|
||||
line = await self.reader.readline()
|
||||
else:
|
||||
try:
|
||||
line = await asyncio.wait_for(self.reader.readline(), self.timeout)
|
||||
except asyncio.TimeoutError:
|
||||
raise ConnectionError("Timed out waiting for data")
|
||||
|
||||
if not line:
|
||||
raise ConnectionClosedError("Connection closed")
|
||||
|
||||
line = line.decode("utf-8")
|
||||
|
||||
if not line.endswith("\n"):
|
||||
raise ConnectionError("Bad message %r" % (line))
|
||||
|
||||
return line.rstrip()
|
||||
|
||||
async def close(self):
|
||||
self.reader = None
|
||||
if self.writer is not None:
|
||||
self.writer.close()
|
||||
self.writer = None
|
||||
|
||||
|
||||
class WebsocketConnection(object):
|
||||
def __init__(self, socket, timeout):
|
||||
self.socket = socket
|
||||
self.timeout = timeout
|
||||
|
||||
@property
|
||||
def address(self):
|
||||
return ":".join(str(s) for s in self.socket.remote_address)
|
||||
|
||||
async def send_message(self, msg):
|
||||
await self.send(json.dumps(msg, default=json_serialize))
|
||||
|
||||
async def recv_message(self):
|
||||
m = await self.recv()
|
||||
return json.loads(m)
|
||||
|
||||
async def send(self, msg):
|
||||
import websockets.exceptions
|
||||
|
||||
try:
|
||||
await self.socket.send(msg)
|
||||
except websockets.exceptions.ConnectionClosed:
|
||||
raise ConnectionClosedError("Connection closed")
|
||||
|
||||
async def recv(self):
|
||||
import websockets.exceptions
|
||||
|
||||
try:
|
||||
if self.timeout < 0:
|
||||
return await self.socket.recv()
|
||||
|
||||
try:
|
||||
return await asyncio.wait_for(self.socket.recv(), self.timeout)
|
||||
except asyncio.TimeoutError:
|
||||
raise ConnectionError("Timed out waiting for data")
|
||||
except websockets.exceptions.ConnectionClosed:
|
||||
raise ConnectionClosedError("Connection closed")
|
||||
|
||||
async def close(self):
|
||||
if self.socket is not None:
|
||||
await self.socket.close()
|
||||
self.socket = None
|
||||
@@ -1,21 +0,0 @@
|
||||
#
|
||||
# Copyright BitBake Contributors
|
||||
#
|
||||
# SPDX-License-Identifier: GPL-2.0-only
|
||||
#
|
||||
|
||||
|
||||
class ClientError(Exception):
|
||||
pass
|
||||
|
||||
|
||||
class InvokeError(Exception):
|
||||
pass
|
||||
|
||||
|
||||
class ServerError(Exception):
|
||||
pass
|
||||
|
||||
|
||||
class ConnectionClosedError(Exception):
|
||||
pass
|
||||
@@ -12,353 +12,241 @@ import signal
|
||||
import socket
|
||||
import sys
|
||||
import multiprocessing
|
||||
import logging
|
||||
from .connection import StreamConnection, WebsocketConnection
|
||||
from .exceptions import ClientError, ServerError, ConnectionClosedError, InvokeError
|
||||
from . import chunkify, DEFAULT_MAX_CHUNK
|
||||
|
||||
|
||||
class ClientLoggerAdapter(logging.LoggerAdapter):
|
||||
def process(self, msg, kwargs):
|
||||
return f"[Client {self.extra['address']}] {msg}", kwargs
|
||||
class ClientError(Exception):
|
||||
pass
|
||||
|
||||
|
||||
class ServerError(Exception):
|
||||
pass
|
||||
|
||||
|
||||
class AsyncServerConnection(object):
|
||||
# If a handler returns this object (e.g. `return self.NO_RESPONSE`), no
|
||||
# return message will be automatically be sent back to the client
|
||||
NO_RESPONSE = object()
|
||||
|
||||
def __init__(self, socket, proto_name, logger):
|
||||
self.socket = socket
|
||||
def __init__(self, reader, writer, proto_name, logger):
|
||||
self.reader = reader
|
||||
self.writer = writer
|
||||
self.proto_name = proto_name
|
||||
self.max_chunk = DEFAULT_MAX_CHUNK
|
||||
self.handlers = {
|
||||
"ping": self.handle_ping,
|
||||
'chunk-stream': self.handle_chunk,
|
||||
'ping': self.handle_ping,
|
||||
}
|
||||
self.logger = ClientLoggerAdapter(
|
||||
logger,
|
||||
{
|
||||
"address": socket.address,
|
||||
},
|
||||
)
|
||||
self.client_headers = {}
|
||||
|
||||
async def close(self):
|
||||
await self.socket.close()
|
||||
|
||||
async def handle_headers(self, headers):
|
||||
return {}
|
||||
self.logger = logger
|
||||
|
||||
async def process_requests(self):
|
||||
try:
|
||||
self.logger.info("Client %r connected" % (self.socket.address,))
|
||||
self.addr = self.writer.get_extra_info('peername')
|
||||
self.logger.debug('Client %r connected' % (self.addr,))
|
||||
|
||||
# Read protocol and version
|
||||
client_protocol = await self.socket.recv()
|
||||
client_protocol = await self.reader.readline()
|
||||
if not client_protocol:
|
||||
return
|
||||
|
||||
(client_proto_name, client_proto_version) = client_protocol.split()
|
||||
(client_proto_name, client_proto_version) = client_protocol.decode('utf-8').rstrip().split()
|
||||
if client_proto_name != self.proto_name:
|
||||
self.logger.debug("Rejecting invalid protocol %s" % (self.proto_name))
|
||||
self.logger.debug('Rejecting invalid protocol %s' % (self.proto_name))
|
||||
return
|
||||
|
||||
self.proto_version = tuple(int(v) for v in client_proto_version.split("."))
|
||||
self.proto_version = tuple(int(v) for v in client_proto_version.split('.'))
|
||||
if not self.validate_proto_version():
|
||||
self.logger.debug(
|
||||
"Rejecting invalid protocol version %s" % (client_proto_version)
|
||||
)
|
||||
self.logger.debug('Rejecting invalid protocol version %s' % (client_proto_version))
|
||||
return
|
||||
|
||||
# Read headers
|
||||
self.client_headers = {}
|
||||
# Read headers. Currently, no headers are implemented, so look for
|
||||
# an empty line to signal the end of the headers
|
||||
while True:
|
||||
header = await self.socket.recv()
|
||||
if not header:
|
||||
# Empty line. End of headers
|
||||
break
|
||||
tag, value = header.split(":", 1)
|
||||
self.client_headers[tag.lower()] = value.strip()
|
||||
line = await self.reader.readline()
|
||||
if not line:
|
||||
return
|
||||
|
||||
if self.client_headers.get("needs-headers", "false") == "true":
|
||||
for k, v in (await self.handle_headers(self.client_headers)).items():
|
||||
await self.socket.send("%s: %s" % (k, v))
|
||||
await self.socket.send("")
|
||||
line = line.decode('utf-8').rstrip()
|
||||
if not line:
|
||||
break
|
||||
|
||||
# Handle messages
|
||||
while True:
|
||||
d = await self.socket.recv_message()
|
||||
d = await self.read_message()
|
||||
if d is None:
|
||||
break
|
||||
try:
|
||||
response = await self.dispatch_message(d)
|
||||
except InvokeError as e:
|
||||
await self.socket.send_message(
|
||||
{"invoke-error": {"message": str(e)}}
|
||||
)
|
||||
break
|
||||
|
||||
if response is not self.NO_RESPONSE:
|
||||
await self.socket.send_message(response)
|
||||
|
||||
except ConnectionClosedError as e:
|
||||
self.logger.info(str(e))
|
||||
except (ClientError, ConnectionError) as e:
|
||||
await self.dispatch_message(d)
|
||||
await self.writer.drain()
|
||||
except ClientError as e:
|
||||
self.logger.error(str(e))
|
||||
finally:
|
||||
await self.close()
|
||||
self.writer.close()
|
||||
|
||||
async def dispatch_message(self, msg):
|
||||
for k in self.handlers.keys():
|
||||
if k in msg:
|
||||
self.logger.debug("Handling %s" % k)
|
||||
return await self.handlers[k](msg[k])
|
||||
self.logger.debug('Handling %s' % k)
|
||||
await self.handlers[k](msg[k])
|
||||
return
|
||||
|
||||
raise ClientError("Unrecognized command %r" % msg)
|
||||
|
||||
async def handle_ping(self, request):
|
||||
return {"alive": True}
|
||||
def write_message(self, msg):
|
||||
for c in chunkify(json.dumps(msg), self.max_chunk):
|
||||
self.writer.write(c.encode('utf-8'))
|
||||
|
||||
async def read_message(self):
|
||||
l = await self.reader.readline()
|
||||
if not l:
|
||||
return None
|
||||
|
||||
class StreamServer(object):
|
||||
def __init__(self, handler, logger):
|
||||
self.handler = handler
|
||||
self.logger = logger
|
||||
self.closed = False
|
||||
|
||||
async def handle_stream_client(self, reader, writer):
|
||||
# writer.transport.set_write_buffer_limits(0)
|
||||
socket = StreamConnection(reader, writer, -1)
|
||||
if self.closed:
|
||||
await socket.close()
|
||||
return
|
||||
|
||||
await self.handler(socket)
|
||||
|
||||
async def stop(self):
|
||||
self.closed = True
|
||||
|
||||
|
||||
class TCPStreamServer(StreamServer):
|
||||
def __init__(self, host, port, handler, logger, *, reuseport=False):
|
||||
super().__init__(handler, logger)
|
||||
self.host = host
|
||||
self.port = port
|
||||
self.reuseport = reuseport
|
||||
|
||||
def start(self, loop):
|
||||
self.server = loop.run_until_complete(
|
||||
asyncio.start_server(
|
||||
self.handle_stream_client,
|
||||
self.host,
|
||||
self.port,
|
||||
reuse_port=self.reuseport,
|
||||
)
|
||||
)
|
||||
|
||||
for s in self.server.sockets:
|
||||
self.logger.debug("Listening on %r" % (s.getsockname(),))
|
||||
# Newer python does this automatically. Do it manually here for
|
||||
# maximum compatibility
|
||||
s.setsockopt(socket.SOL_TCP, socket.TCP_NODELAY, 1)
|
||||
s.setsockopt(socket.SOL_TCP, socket.TCP_QUICKACK, 1)
|
||||
|
||||
# Enable keep alives. This prevents broken client connections
|
||||
# from persisting on the server for long periods of time.
|
||||
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
|
||||
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 30)
|
||||
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 15)
|
||||
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 4)
|
||||
|
||||
name = self.server.sockets[0].getsockname()
|
||||
if self.server.sockets[0].family == socket.AF_INET6:
|
||||
self.address = "[%s]:%d" % (name[0], name[1])
|
||||
else:
|
||||
self.address = "%s:%d" % (name[0], name[1])
|
||||
|
||||
return [self.server.wait_closed()]
|
||||
|
||||
async def stop(self):
|
||||
await super().stop()
|
||||
self.server.close()
|
||||
|
||||
def cleanup(self):
|
||||
pass
|
||||
|
||||
|
||||
class UnixStreamServer(StreamServer):
|
||||
def __init__(self, path, handler, logger):
|
||||
super().__init__(handler, logger)
|
||||
self.path = path
|
||||
|
||||
def start(self, loop):
|
||||
cwd = os.getcwd()
|
||||
try:
|
||||
# Work around path length limits in AF_UNIX
|
||||
os.chdir(os.path.dirname(self.path))
|
||||
self.server = loop.run_until_complete(
|
||||
asyncio.start_unix_server(
|
||||
self.handle_stream_client, os.path.basename(self.path)
|
||||
)
|
||||
)
|
||||
finally:
|
||||
os.chdir(cwd)
|
||||
message = l.decode('utf-8')
|
||||
|
||||
self.logger.debug("Listening on %r" % self.path)
|
||||
self.address = "unix://%s" % os.path.abspath(self.path)
|
||||
return [self.server.wait_closed()]
|
||||
if not message.endswith('\n'):
|
||||
return None
|
||||
|
||||
async def stop(self):
|
||||
await super().stop()
|
||||
self.server.close()
|
||||
return json.loads(message)
|
||||
except (json.JSONDecodeError, UnicodeDecodeError) as e:
|
||||
self.logger.error('Bad message from client: %r' % message)
|
||||
raise e
|
||||
|
||||
def cleanup(self):
|
||||
os.unlink(self.path)
|
||||
async def handle_chunk(self, request):
|
||||
lines = []
|
||||
try:
|
||||
while True:
|
||||
l = await self.reader.readline()
|
||||
l = l.rstrip(b"\n").decode("utf-8")
|
||||
if not l:
|
||||
break
|
||||
lines.append(l)
|
||||
|
||||
msg = json.loads(''.join(lines))
|
||||
except (json.JSONDecodeError, UnicodeDecodeError) as e:
|
||||
self.logger.error('Bad message from client: %r' % lines)
|
||||
raise e
|
||||
|
||||
class WebsocketsServer(object):
|
||||
def __init__(self, host, port, handler, logger, *, reuseport=False):
|
||||
self.host = host
|
||||
self.port = port
|
||||
self.handler = handler
|
||||
self.logger = logger
|
||||
self.reuseport = reuseport
|
||||
if 'chunk-stream' in msg:
|
||||
raise ClientError("Nested chunks are not allowed")
|
||||
|
||||
def start(self, loop):
|
||||
import websockets.server
|
||||
await self.dispatch_message(msg)
|
||||
|
||||
self.server = loop.run_until_complete(
|
||||
websockets.server.serve(
|
||||
self.client_handler,
|
||||
self.host,
|
||||
self.port,
|
||||
ping_interval=None,
|
||||
reuse_port=self.reuseport,
)
)

for s in self.server.sockets:
self.logger.debug("Listening on %r" % (s.getsockname(),))

# Enable keep alives. This prevents broken client connections
# from persisting on the server for long periods of time.
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 30)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 15)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 4)

name = self.server.sockets[0].getsockname()
if self.server.sockets[0].family == socket.AF_INET6:
self.address = "ws://[%s]:%d" % (name[0], name[1])
else:
self.address = "ws://%s:%d" % (name[0], name[1])

return [self.server.wait_closed()]

async def stop(self):
self.server.close()

def cleanup(self):
pass

async def client_handler(self, websocket):
socket = WebsocketConnection(websocket, -1)
await self.handler(socket)
async def handle_ping(self, request):
response = {'alive': True}
self.write_message(response)


class AsyncServer(object):
def __init__(self, logger):
self._cleanup_socket = None
self.logger = logger
self.start = None
self.address = None
self.loop = None
self.run_tasks = []

def start_tcp_server(self, host, port, *, reuseport=False):
self.server = TCPStreamServer(
host,
port,
self._client_handler,
self.logger,
reuseport=reuseport,
)
def start_tcp_server(self, host, port):
def start_tcp():
self.server = self.loop.run_until_complete(
asyncio.start_server(self.handle_client, host, port)
)

for s in self.server.sockets:
self.logger.debug('Listening on %r' % (s.getsockname(),))
# Newer python does this automatically. Do it manually here for
# maximum compatibility
s.setsockopt(socket.SOL_TCP, socket.TCP_NODELAY, 1)
s.setsockopt(socket.SOL_TCP, socket.TCP_QUICKACK, 1)

# Enable keep alives. This prevents broken client connections
# from persisting on the server for long periods of time.
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 30)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 15)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 4)

name = self.server.sockets[0].getsockname()
if self.server.sockets[0].family == socket.AF_INET6:
self.address = "[%s]:%d" % (name[0], name[1])
else:
self.address = "%s:%d" % (name[0], name[1])

self.start = start_tcp

def start_unix_server(self, path):
self.server = UnixStreamServer(path, self._client_handler, self.logger)
def cleanup():
os.unlink(path)

def start_websocket_server(self, host, port, reuseport=False):
self.server = WebsocketsServer(
host,
port,
self._client_handler,
self.logger,
reuseport=reuseport,
)
def start_unix():
cwd = os.getcwd()
try:
# Work around path length limits in AF_UNIX
os.chdir(os.path.dirname(path))
self.server = self.loop.run_until_complete(
asyncio.start_unix_server(self.handle_client, os.path.basename(path))
)
finally:
os.chdir(cwd)

async def _client_handler(self, socket):
address = socket.address
self.logger.debug('Listening on %r' % path)

self._cleanup_socket = cleanup
self.address = "unix://%s" % os.path.abspath(path)

self.start = start_unix

@abc.abstractmethod
def accept_client(self, reader, writer):
pass

async def handle_client(self, reader, writer):
# writer.transport.set_write_buffer_limits(0)
try:
client = self.accept_client(socket)
client = self.accept_client(reader, writer)
await client.process_requests()
except Exception as e:
import traceback

self.logger.error(
"Error from client %s: %s" % (address, str(e)), exc_info=True
)
self.logger.error('Error from client: %s' % str(e), exc_info=True)
traceback.print_exc()
finally:
self.logger.debug("Client %s disconnected", address)
await socket.close()
writer.close()
self.logger.debug('Client disconnected')

@abc.abstractmethod
def accept_client(self, socket):
pass

async def stop(self):
self.logger.debug("Stopping server")
await self.server.stop()

def start(self):
tasks = self.server.start(self.loop)
self.address = self.server.address
return tasks
def run_loop_forever(self):
try:
self.loop.run_forever()
except KeyboardInterrupt:
pass

def signal_handler(self):
self.logger.debug("Got exit signal")
self.loop.create_task(self.stop())
self.loop.stop()

def _serve_forever(self, tasks):
def _serve_forever(self):
try:
self.loop.add_signal_handler(signal.SIGTERM, self.signal_handler)
self.loop.add_signal_handler(signal.SIGINT, self.signal_handler)
self.loop.add_signal_handler(signal.SIGQUIT, self.signal_handler)
signal.pthread_sigmask(signal.SIG_UNBLOCK, [signal.SIGTERM])

self.loop.run_until_complete(asyncio.gather(*tasks))
self.run_loop_forever()
self.server.close()

self.logger.debug("Server shutting down")
self.loop.run_until_complete(self.server.wait_closed())
self.logger.debug('Server shutting down')
finally:
self.server.cleanup()
if self._cleanup_socket is not None:
self._cleanup_socket()

def serve_forever(self):
"""
Serve requests in the current process
"""
self._create_loop()
tasks = self.start()
self._serve_forever(tasks)
self.loop.close()

def _create_loop(self):
# Create loop and override any loop that may have existed in
# a parent process. It is possible that the usecases of
# serve_forever might be constrained enough to allow using
# get_event_loop here, but better safe than sorry for now.
self.loop = asyncio.new_event_loop()
asyncio.set_event_loop(self.loop)
self.start()
self._serve_forever()

def serve_as_process(self, *, prefunc=None, args=(), log_level=None):
def serve_as_process(self, *, prefunc=None, args=()):
"""
Serve requests in a child process
"""

def run(queue):
# Create loop and override any loop that may have existed
# in a parent process. Without doing this and instead
@@ -371,24 +259,21 @@ class AsyncServer(object):
# more general, though, as any potential use of asyncio in
# Cooker could create a loop that needs to replaced in this
# new process.
self._create_loop()
self.loop = asyncio.new_event_loop()
asyncio.set_event_loop(self.loop)
try:
self.address = None
tasks = self.start()
self.start()
finally:
# Always put the server address to wake up the parent task
queue.put(self.address)
queue.close()

if prefunc is not None:
prefunc(self, *args)

if log_level is not None:
self.logger.setLevel(log_level)
self._serve_forever()

self._serve_forever(tasks)

self.loop.run_until_complete(self.loop.shutdown_asyncgens())
if sys.version_info >= (3, 6):
self.loop.run_until_complete(self.loop.shutdown_asyncgens())
self.loop.close()

queue = multiprocessing.Queue()

@@ -197,8 +197,6 @@ def exec_func(func, d, dirs = None):
for cdir in d.expand(cleandirs).split():
bb.utils.remove(cdir, True)
bb.utils.mkdirhier(cdir)
if cdir == oldcwd:
os.chdir(cdir)

if flags and dirs is None:
dirs = flags.get('dirs')
@@ -743,7 +741,7 @@ def _exec_task(fn, task, d, quieterr):

if quieterr:
if not handled:
logger.warning(str(exc))
logger.warning(repr(exc))
event.fire(TaskFailedSilent(task, fn, logfn, localdata), localdata)
else:
errprinted = errchk.triggered
@@ -752,7 +750,7 @@ def _exec_task(fn, task, d, quieterr):
if verboseStdoutLogging or handled:
errprinted = True
if not handled:
logger.error(str(exc))
logger.error(repr(exc))
event.fire(TaskFailed(task, fn, logfn, localdata, errprinted), localdata)
return 1

@@ -932,13 +930,9 @@ def add_tasks(tasklist, d):
# don't assume holding a reference
d.setVar('_task_deps', task_deps)

def ensure_task_prefix(name):
if name[:3] != "do_":
name = "do_" + name
return name

def addtask(task, before, after, d):
task = ensure_task_prefix(task)
if task[:3] != "do_":
task = "do_" + task

d.setVarFlag(task, "task", 1)
bbtasks = d.getVar('__BBTASKS', False) or []
@@ -950,20 +944,19 @@ def addtask(task, before, after, d):
if after is not None:
# set up deps for function
for entry in after.split():
entry = ensure_task_prefix(entry)
if entry not in existing:
existing.append(entry)
d.setVarFlag(task, "deps", existing)
if before is not None:
# set up things that depend on this func
for entry in before.split():
entry = ensure_task_prefix(entry)
existing = d.getVarFlag(entry, "deps", False) or []
if task not in existing:
d.setVarFlag(entry, "deps", [task] + existing)

def deltask(task, d):
task = ensure_task_prefix(task)
if task[:3] != "do_":
task = "do_" + task

bbtasks = d.getVar('__BBTASKS', False) or []
if task in bbtasks:

@@ -28,7 +28,7 @@ import shutil

logger = logging.getLogger("BitBake.Cache")

__cache_version__ = "156"
__cache_version__ = "155"

def getCacheFile(path, filename, mc, data_hash):
mcspec = ''
@@ -344,7 +344,9 @@ def virtualfn2realfn(virtualfn):
"""
mc = ""
if virtualfn.startswith('mc:') and virtualfn.count(':') >= 2:
(_, mc, virtualfn) = virtualfn.split(':', 2)
elems = virtualfn.split(':')
mc = elems[1]
virtualfn = ":".join(elems[2:])

fn = virtualfn
cls = ""
@@ -367,7 +369,7 @@ def realfn2virtual(realfn, cls, mc):

def variant2virtual(realfn, variant):
"""
Convert a real filename + a variant to a virtual filename
Convert a real filename + the associated subclass keyword to a virtual filename
"""
if variant == "":
return realfn
@@ -441,7 +443,7 @@ class Cache(object):
else:
symlink = os.path.join(self.cachedir, "bb_cache.dat")

if os.path.exists(symlink) or os.path.islink(symlink):
if os.path.exists(symlink):
bb.utils.remove(symlink)
try:
os.symlink(os.path.basename(self.cachefile), symlink)
@@ -512,11 +514,11 @@ class Cache(object):

return len(self.depends_cache)

def parse(self, filename, appends, layername):
def parse(self, filename, appends):
"""Parse the specified filename, returning the recipe information"""
self.logger.debug("Parsing %s", filename)
infos = []
datastores = self.databuilder.parseRecipeVariants(filename, appends, mc=self.mc, layername=layername)
datastores = self.databuilder.parseRecipeVariants(filename, appends, mc=self.mc)
depends = []
variants = []
# Process the "real" fn last so we can store variants list
@@ -779,6 +781,25 @@ class MulticonfigCache(Mapping):
for k in self.__caches:
yield k

def init(cooker):
"""
The Objective: Cache the minimum amount of data possible yet get to the
stage of building packages (i.e. tryBuild) without reparsing any .bb files.

To do this, we intercept getVar calls and only cache the variables we see
being accessed. We rely on the cache getVar calls being made for all
variables bitbake might need to use to reach this stage. For each cached
file we need to track:

* Its mtime
* The mtimes of all its dependencies
* Whether it caused a parse.SkipRecipe exception

Files causing parsing errors are evicted from the cache.

"""
return Cache(cooker.configuration.data, cooker.configuration.data_hash)


class CacheData(object):
"""

@@ -62,7 +62,6 @@ def check_indent(codestr):
modulecode_deps = {}

def add_module_functions(fn, functions, namespace):
import os
fstat = os.stat(fn)
fixedhash = fn + ":" + str(fstat.st_size) + ":" + str(fstat.st_mtime)
for f in functions:
@@ -72,11 +71,6 @@ def add_module_functions(fn, functions, namespace):
parser.parse_python(None, filename=fn, lineno=1, fixedhash=fixedhash+f)
#bb.warn("Cached %s" % f)
except KeyError:
targetfn = inspect.getsourcefile(functions[f])
if fn != targetfn:
# Skip references to other modules outside this file
#bb.warn("Skipping %s" % name)
continue
lines, lineno = inspect.getsourcelines(functions[f])
src = "".join(lines)
parser.parse_python(src, filename=fn, lineno=lineno, fixedhash=fixedhash+f)
@@ -87,17 +81,14 @@ def add_module_functions(fn, functions, namespace):
if e in functions:
execs.remove(e)
execs.add(namespace + "." + e)
visitorcode = None
if hasattr(functions[f], 'visitorcode'):
visitorcode = getattr(functions[f], "visitorcode")
modulecode_deps[name] = [parser.references.copy(), execs, parser.var_execs.copy(), parser.contains.copy(), parser.extra, visitorcode]
#bb.warn("%s: %s\nRefs:%s Execs: %s %s %s" % (name, fn, parser.references, parser.execs, parser.var_execs, parser.contains))
modulecode_deps[name] = [parser.references.copy(), execs, parser.var_execs.copy(), parser.contains.copy()]
#bb.warn("%s: %s\nRefs:%s Execs: %s %s %s" % (name, src, parser.references, parser.execs, parser.var_execs, parser.contains))

def update_module_dependencies(d):
for mod in modulecode_deps:
excludes = set((d.getVarFlag(mod, "vardepsexclude") or "").split())
if excludes:
modulecode_deps[mod] = [modulecode_deps[mod][0] - excludes, modulecode_deps[mod][1] - excludes, modulecode_deps[mod][2] - excludes, modulecode_deps[mod][3], modulecode_deps[mod][4], modulecode_deps[mod][5]]
modulecode_deps[mod] = [modulecode_deps[mod][0] - excludes, modulecode_deps[mod][1] - excludes, modulecode_deps[mod][2] - excludes, modulecode_deps[mod][3]]

# A custom getstate/setstate using tuples is actually worth 15% cachesize by
# avoiding duplication of the attribute names!
@@ -120,22 +111,21 @@ class SetCache(object):
codecache = SetCache()

class pythonCacheLine(object):
def __init__(self, refs, execs, contains, extra):
def __init__(self, refs, execs, contains):
self.refs = codecache.internSet(refs)
self.execs = codecache.internSet(execs)
self.contains = {}
for c in contains:
self.contains[c] = codecache.internSet(contains[c])
self.extra = extra

def __getstate__(self):
return (self.refs, self.execs, self.contains, self.extra)
return (self.refs, self.execs, self.contains)

def __setstate__(self, state):
(refs, execs, contains, extra) = state
self.__init__(refs, execs, contains, extra)
(refs, execs, contains) = state
self.__init__(refs, execs, contains)
def __hash__(self):
l = (hash(self.refs), hash(self.execs), hash(self.extra))
l = (hash(self.refs), hash(self.execs))
for c in sorted(self.contains.keys()):
l = l + (c, hash(self.contains[c]))
return hash(l)
@@ -164,7 +154,7 @@ class CodeParserCache(MultiProcessCache):
# so that an existing cache gets invalidated. Additionally you'll need
# to increment __cache_version__ in cache.py in order to ensure that old
# recipe caches don't trigger "Taskhash mismatch" errors.
CACHE_VERSION = 14
CACHE_VERSION = 11

def __init__(self):
MultiProcessCache.__init__(self)
@@ -178,8 +168,8 @@ class CodeParserCache(MultiProcessCache):
self.pythoncachelines = {}
self.shellcachelines = {}

def newPythonCacheLine(self, refs, execs, contains, extra):
cacheline = pythonCacheLine(refs, execs, contains, extra)
def newPythonCacheLine(self, refs, execs, contains):
cacheline = pythonCacheLine(refs, execs, contains)
h = hash(cacheline)
if h in self.pythoncachelines:
return self.pythoncachelines[h]
@@ -264,28 +254,20 @@ class PythonParser():

def visit_Call(self, node):
name = self.called_node_name(node.func)
if name and name in modulecode_deps and modulecode_deps[name][5]:
visitorcode = modulecode_deps[name][5]
contains, execs, warn = visitorcode(name, node.args)
for i in contains:
self.contains[i] = contains[i]
self.execs |= execs
if warn:
self.warn(node.func, warn)
elif name and (name.endswith(self.getvars) or name.endswith(self.getvarflags) or name in self.containsfuncs or name in self.containsanyfuncs):
if isinstance(node.args[0], ast.Constant) and isinstance(node.args[0].value, str):
varname = node.args[0].value
if name in self.containsfuncs and isinstance(node.args[1], ast.Constant):
if name and (name.endswith(self.getvars) or name.endswith(self.getvarflags) or name in self.containsfuncs or name in self.containsanyfuncs):
if isinstance(node.args[0], ast.Str):
varname = node.args[0].s
if name in self.containsfuncs and isinstance(node.args[1], ast.Str):
if varname not in self.contains:
self.contains[varname] = set()
self.contains[varname].add(node.args[1].value)
elif name in self.containsanyfuncs and isinstance(node.args[1], ast.Constant):
self.contains[varname].add(node.args[1].s)
elif name in self.containsanyfuncs and isinstance(node.args[1], ast.Str):
if varname not in self.contains:
self.contains[varname] = set()
self.contains[varname].update(node.args[1].value.split())
self.contains[varname].update(node.args[1].s.split())
elif name.endswith(self.getvarflags):
if isinstance(node.args[1], ast.Constant):
self.references.add('%s[%s]' % (varname, node.args[1].value))
if isinstance(node.args[1], ast.Str):
self.references.add('%s[%s]' % (varname, node.args[1].s))
else:
self.warn(node.func, node.args[1])
else:
@@ -293,8 +275,8 @@ class PythonParser():
else:
self.warn(node.func, node.args[0])
elif name and name.endswith(".expand"):
if isinstance(node.args[0], ast.Constant):
value = node.args[0].value
if isinstance(node.args[0], ast.Str):
value = node.args[0].s
d = bb.data.init()
parser = d.expandWithRefs(value, self.name)
self.references |= parser.references
@@ -304,8 +286,8 @@ class PythonParser():
self.contains[varname] = set()
self.contains[varname] |= parser.contains[varname]
elif name in self.execfuncs:
if isinstance(node.args[0], ast.Constant):
self.var_execs.add(node.args[0].value)
if isinstance(node.args[0], ast.Str):
self.var_execs.add(node.args[0].s)
else:
self.warn(node.func, node.args[0])
elif name and isinstance(node.func, (ast.Name, ast.Attribute)):
@@ -355,7 +337,6 @@ class PythonParser():
self.contains = {}
for i in codeparsercache.pythoncache[h].contains:
self.contains[i] = set(codeparsercache.pythoncache[h].contains[i])
self.extra = codeparsercache.pythoncache[h].extra
return

if h in codeparsercache.pythoncacheextras:
@@ -364,7 +345,6 @@ class PythonParser():
self.contains = {}
for i in codeparsercache.pythoncacheextras[h].contains:
self.contains[i] = set(codeparsercache.pythoncacheextras[h].contains[i])
self.extra = codeparsercache.pythoncacheextras[h].extra
return

if fixedhash and not node:
@@ -383,11 +363,8 @@ class PythonParser():
self.visit_Call(n)

self.execs.update(self.var_execs)
self.extra = None
if fixedhash:
self.extra = bbhash(str(node))

codeparsercache.pythoncacheextras[h] = codeparsercache.newPythonCacheLine(self.references, self.execs, self.contains, self.extra)
codeparsercache.pythoncacheextras[h] = codeparsercache.newPythonCacheLine(self.references, self.execs, self.contains)

class ShellParser():
def __init__(self, name, log):
@@ -506,34 +483,19 @@ class ShellParser():
"""

words = list(words)
for word in words:
for word in list(words):
wtree = pyshlex.make_wordtree(word[1])
for part in wtree:
if not isinstance(part, list):
continue

candidates = [part]
if part[0] in ('`', '$('):
command = pyshlex.wordtree_as_string(part[1:-1])
self._parse_shell(command)

# If command is of type:
#
# var="... $(cmd [...]) ..."
#
# Then iterate on what's between the quotes and if we find a
# list, make that what we check for below.
if len(part) >= 3 and part[0] == '"':
for p in part[1:-1]:
if isinstance(p, list):
candidates.append(p)

for candidate in candidates:
if len(candidate) >= 2:
if candidate[0] in ('`', '$('):
command = pyshlex.wordtree_as_string(candidate[1:-1])
self._parse_shell(command)

if word[0] in ("cmd_name", "cmd_word"):
if word in words:
words.remove(word)
if word[0] in ("cmd_name", "cmd_word"):
if word in words:
words.remove(word)

usetoken = False
for word in words:

@@ -65,7 +65,7 @@ class Command:
command = commandline.pop(0)

# Ensure cooker is ready for commands
if command not in ["updateConfig", "setFeatures", "ping"]:
if command != "updateConfig" and command != "setFeatures":
try:
self.cooker.init_configdata()
if not self.remotedatastores:
@@ -85,6 +85,7 @@ class Command:
if not hasattr(command_method, 'readonly') or not getattr(command_method, 'readonly'):
return None, "Not able to execute not readonly commands in readonly mode"
try:
self.cooker.process_inotify_updates_apply()
if getattr(command_method, 'needconfig', True):
self.cooker.updateCacheSync()
result = command_method(self, commandline)
@@ -108,6 +109,7 @@ class Command:

def runAsyncCommand(self, _, process_server, halt):
try:
self.cooker.process_inotify_updates_apply()
if self.cooker.state in (bb.cooker.state.error, bb.cooker.state.shutdown, bb.cooker.state.forceshutdown):
# updateCache will trigger a shutdown of the parser
# and then raise BBHandledException triggering an exit
@@ -167,8 +169,6 @@ class CommandsSync:
Allow a UI to check the server is still alive
"""
return "Still alive!"
ping.needconfig = False
ping.readonly = True

def stateShutdown(self, command, params):
"""
@@ -307,11 +307,6 @@ class CommandsSync:
return ret
getLayerPriorities.readonly = True

def revalidateCaches(self, command, params):
"""Called by UI clients when metadata may have changed"""
command.cooker.revalidateCaches()
parseConfiguration.needconfig = False

def getRecipes(self, command, params):
try:
mc = params[0]
@@ -550,8 +545,8 @@ class CommandsSync:
and return a datastore object representing the environment
for the recipe.
"""
virtualfn = params[0]
(fn, cls, mc) = bb.cache.virtualfn2realfn(virtualfn)
fn = params[0]
mc = bb.runqueue.mc_from_tid(fn)
appends = params[1]
appendlist = params[2]
if len(params) > 3:
@@ -566,7 +561,6 @@ class CommandsSync:
appendfiles = command.cooker.collections[mc].get_file_appends(fn)
else:
appendfiles = []
layername = command.cooker.collections[mc].calc_bbfile_priority(fn)[2]
# We are calling bb.cache locally here rather than on the server,
# but that's OK because it doesn't actually need anything from
# the server barring the global datastore (which we have a remote
@@ -574,10 +568,10 @@ class CommandsSync:
if config_data:
# We have to use a different function here if we're passing in a datastore
# NOTE: we took a copy above, so we don't do it here again
envdata = command.cooker.databuilder._parse_recipe(config_data, fn, appendfiles, mc, layername)[cls]
envdata = command.cooker.databuilder._parse_recipe(config_data, fn, appendfiles, mc)['']
else:
# Use the standard path
envdata = command.cooker.databuilder.parseRecipe(virtualfn, appendfiles, layername)
envdata = command.cooker.databuilder.parseRecipe(fn, appendfiles)
idx = command.remotedatastores.store(envdata)
return DataStoreConnectionHandle(idx)
parseRecipeFile.readonly = True
@@ -777,14 +771,7 @@ class CommandsAsync:
(mc, pn) = bb.runqueue.split_mc(params[0])
taskname = params[1]
sigs = params[2]
bb.siggen.check_siggen_version(bb.siggen)
res = bb.siggen.find_siginfo(pn, taskname, sigs, command.cooker.databuilder.mcdata[mc])
bb.event.fire(bb.event.FindSigInfoResult(res), command.cooker.databuilder.mcdata[mc])
command.finishAsyncCommand()
findSigInfo.needcache = False

def getTaskSignatures(self, command, params):
res = command.cooker.getTaskSignatures(params[0], params[1])
bb.event.fire(bb.event.GetTaskSignatureResult(res), command.cooker.data)
command.finishAsyncCommand()
getTaskSignatures.needcache = True

@@ -17,11 +12,12 @@ import threading
from io import StringIO, UnsupportedOperation
from contextlib import closing
from collections import defaultdict, namedtuple
import bb, bb.command
import bb, bb.exceptions, bb.command
from bb import utils, data, parse, event, cache, providers, taskdata, runqueue, build
import queue
import signal
import prserv.serv
import pyinotify
import json
import pickle
import codecs
@@ -102,15 +103,12 @@ class CookerFeatures(object):

class EventWriter:
def __init__(self, cooker, eventfile):
self.file_inited = None
self.cooker = cooker
self.eventfile = eventfile
self.event_queue = []

def write_variables(self):
with open(self.eventfile, "a") as f:
f.write("%s\n" % json.dumps({ "allvariables" : self.cooker.getAllKeysWithFlags(["doc", "func"])}))

def send(self, event):
def write_event(self, event):
with open(self.eventfile, "a") as f:
try:
str_event = codecs.encode(pickle.dumps(event), 'base64').decode('utf-8')
@@ -120,6 +118,28 @@ class EventWriter:
import traceback
print(err, traceback.format_exc())

def send(self, event):
if self.file_inited:
# we have the file, just write the event
self.write_event(event)
else:
# init on bb.event.BuildStarted
name = "%s.%s" % (event.__module__, event.__class__.__name__)
if name in ("bb.event.BuildStarted", "bb.cooker.CookerExit"):
with open(self.eventfile, "w") as f:
f.write("%s\n" % json.dumps({ "allvariables" : self.cooker.getAllKeysWithFlags(["doc", "func"])}))

self.file_inited = True

# write pending events
for evt in self.event_queue:
self.write_event(evt)

# also write the current event
self.write_event(event)
else:
# queue all events until the file is inited
self.event_queue.append(event)

#============================================================================#
# BBCooker
@@ -131,8 +151,6 @@ class BBCooker:

def __init__(self, featureSet=None, server=None):
self.recipecaches = None
self.baseconfig_valid = False
self.parsecache_valid = False
self.eventlog = None
self.skiplist = {}
self.featureset = CookerFeatures()
@@ -153,9 +171,17 @@ class BBCooker:
self.waitIdle = server.wait_for_idle

bb.debug(1, "BBCooker starting %s" % time.time())
sys.stdout.flush()

self.configwatched = {}
self.parsewatched = {}
self.configwatcher = None
self.confignotifier = None

self.watchmask = pyinotify.IN_CLOSE_WRITE | pyinotify.IN_CREATE | pyinotify.IN_DELETE | \
pyinotify.IN_DELETE_SELF | pyinotify.IN_MODIFY | pyinotify.IN_MOVE_SELF | \
pyinotify.IN_MOVED_FROM | pyinotify.IN_MOVED_TO

self.watcher = None
self.notifier = None

# If being called by something like tinfoil, we need to clean cached data
# which may now be invalid
@@ -166,6 +192,8 @@ class BBCooker:
self.hashserv = None
self.hashservaddr = None

self.inotify_modified_files = []

# TOSTOP must not be set or our children will hang when they output
try:
fd = sys.stdout.fileno()
@@ -189,37 +217,135 @@ class BBCooker:
signal.signal(signal.SIGHUP, self.sigterm_exception)

bb.debug(1, "BBCooker startup complete %s" % time.time())
sys.stdout.flush()

self.inotify_threadlock = threading.Lock()

def init_configdata(self):
if not hasattr(self, "data"):
self.initConfigurationData()
bb.debug(1, "BBCooker parsed base configuration %s" % time.time())
sys.stdout.flush()
self.handlePRServ()

def _baseconfig_set(self, value):
if value and not self.baseconfig_valid:
bb.server.process.serverlog("Base config valid")
elif not value and self.baseconfig_valid:
bb.server.process.serverlog("Base config invalidated")
self.baseconfig_valid = value
def setupConfigWatcher(self):
with bb.utils.lock_timeout(self.inotify_threadlock):
if self.configwatcher:
self.configwatcher.close()
self.confignotifier = None
self.configwatcher = None
self.configwatcher = pyinotify.WatchManager()
self.configwatcher.bbseen = set()
self.configwatcher.bbwatchedfiles = set()
self.confignotifier = pyinotify.Notifier(self.configwatcher, self.config_notifications)

def _parsecache_set(self, value):
if value and not self.parsecache_valid:
bb.server.process.serverlog("Parse cache valid")
elif not value and self.parsecache_valid:
bb.server.process.serverlog("Parse cache invalidated")
self.parsecache_valid = value
def setupParserWatcher(self):
with bb.utils.lock_timeout(self.inotify_threadlock):
if self.watcher:
self.watcher.close()
self.notifier = None
self.watcher = None
self.watcher = pyinotify.WatchManager()
self.watcher.bbseen = set()
self.watcher.bbwatchedfiles = set()
self.notifier = pyinotify.Notifier(self.watcher, self.notifications)

def add_filewatch(self, deps, configwatcher=False):
if configwatcher:
watcher = self.configwatched
else:
watcher = self.parsewatched
def process_inotify_updates(self):
with bb.utils.lock_timeout(self.inotify_threadlock):
for n in [self.confignotifier, self.notifier]:
if n and n.check_events(timeout=0):
# read notified events and enqueue them
n.read_events()

def process_inotify_updates_apply(self):
with bb.utils.lock_timeout(self.inotify_threadlock):
for n in [self.confignotifier, self.notifier]:
if n and n.check_events(timeout=0):
n.read_events()
n.process_events()

def config_notifications(self, event):
if event.maskname == "IN_Q_OVERFLOW":
bb.warn("inotify event queue overflowed, invalidating caches.")
self.parsecache_valid = False
self.baseconfig_valid = False
bb.parse.clear_cache()
return
if not event.pathname in self.configwatcher.bbwatchedfiles:
return
if "IN_ISDIR" in event.maskname:
if "IN_CREATE" in event.maskname or "IN_DELETE" in event.maskname:
if event.pathname in self.configwatcher.bbseen:
self.configwatcher.bbseen.remove(event.pathname)
# Could remove all entries starting with the directory but for now...
bb.parse.clear_cache()
if not event.pathname in self.inotify_modified_files:
self.inotify_modified_files.append(event.pathname)
self.baseconfig_valid = False

def notifications(self, event):
if event.maskname == "IN_Q_OVERFLOW":
bb.warn("inotify event queue overflowed, invalidating caches.")
self.parsecache_valid = False
bb.parse.clear_cache()
return
if event.pathname.endswith("bitbake-cookerdaemon.log") \
or event.pathname.endswith("bitbake.lock"):
return
if "IN_ISDIR" in event.maskname:
if "IN_CREATE" in event.maskname or "IN_DELETE" in event.maskname:
if event.pathname in self.watcher.bbseen:
self.watcher.bbseen.remove(event.pathname)
# Could remove all entries starting with the directory but for now...
bb.parse.clear_cache()
if not event.pathname in self.inotify_modified_files:
self.inotify_modified_files.append(event.pathname)
self.parsecache_valid = False

def add_filewatch(self, deps, watcher=None, dirs=False):
if not watcher:
watcher = self.watcher
for i in deps:
f = i[0]
mtime = i[1]
watcher[f] = mtime
watcher.bbwatchedfiles.add(i[0])
if dirs:
f = i[0]
else:
f = os.path.dirname(i[0])
if f in watcher.bbseen:
continue
watcher.bbseen.add(f)
watchtarget = None
while True:
# We try and add watches for files that don't exist but if they did, would influence
# the parser. The parent directory of these files may not exist, in which case we need
# to watch any parent that does exist for changes.
try:
watcher.add_watch(f, self.watchmask, quiet=False)
if watchtarget:
watcher.bbwatchedfiles.add(watchtarget)
break
except pyinotify.WatchManagerError as e:
if 'ENOENT' in str(e):
watchtarget = f
f = os.path.dirname(f)
if f in watcher.bbseen:
break
watcher.bbseen.add(f)
continue
if 'ENOSPC' in str(e):
providerlog.error("No space left on device or exceeds fs.inotify.max_user_watches?")
providerlog.error("To check max_user_watches: sysctl -n fs.inotify.max_user_watches.")
providerlog.error("To modify max_user_watches: sysctl -n -w fs.inotify.max_user_watches=<value>.")
providerlog.error("Root privilege is required to modify max_user_watches.")
raise

def handle_inotify_updates(self):
# reload files for which we got notifications
for p in self.inotify_modified_files:
bb.parse.update_cache(p)
if p in bb.parse.BBHandler.cached_statements:
del bb.parse.BBHandler.cached_statements[p]
self.inotify_modified_files = []

def sigterm_exception(self, signum, stackframe):
if signum == signal.SIGTERM:
@@ -250,7 +376,8 @@ class BBCooker:
if mod not in self.orig_sysmodules:
del sys.modules[mod]

self.configwatched = {}
self.handle_inotify_updates()
self.setupConfigWatcher()

# Need to preserve BB_CONSOLELOG over resets
consolelog = None
@@ -281,12 +408,9 @@ class BBCooker:
self.databuilder = bb.cookerdata.CookerDataBuilder(self.configuration, False)
self.databuilder.parseBaseConfiguration()
self.data = self.databuilder.data
self.data_hash = self.databuilder.data_hash
self.extraconfigdata = {}

eventlog = self.data.getVar("BB_DEFAULT_EVENTLOG")
if not self.configuration.writeeventlog and eventlog:
self.setupEventLog(eventlog)

if consolelog:
self.data.setVar("BB_CONSOLELOG", consolelog)

@@ -296,10 +420,10 @@ class BBCooker:
self.disableDataTracking()

for mc in self.databuilder.mcdata.values():
self.add_filewatch(mc.getVar("__base_depends", False), configwatcher=True)
self.add_filewatch(mc.getVar("__base_depends", False), self.configwatcher)

self._baseconfig_set(True)
self._parsecache_set(False)
self.baseconfig_valid = True
self.parsecache_valid = False

def handlePRServ(self):
# Setup a PR Server based on the new configuration
@@ -314,13 +438,13 @@ class BBCooker:
dbfile = (self.data.getVar("PERSISTENT_DIR") or self.data.getVar("CACHE")) + "/hashserv.db"
upstream = self.data.getVar("BB_HASHSERVE_UPSTREAM") or None
if upstream:
import socket
try:
with hashserv.create_client(upstream) as client:
client.ping()
except (ConnectionError, ImportError) as e:
sock = socket.create_connection(upstream.split(":"), 5)
sock.close()
except socket.error as e:
bb.warn("BB_HASHSERVE_UPSTREAM is not valid, unable to connect hash equivalence server at '%s': %s"
% (upstream, repr(e)))
upstream = None

self.hashservaddr = "unix://%s/hashserve.sock" % self.data.getVar("TOPDIR")
self.hashserv = hashserv.create_server(
@@ -329,7 +453,7 @@ class BBCooker:
sync=False,
upstream=upstream,
)
self.hashserv.serve_as_process(log_level=logging.WARNING)
self.hashserv.serve_as_process()
for mc in self.databuilder.mcdata:
self.databuilder.mcorigdata[mc].setVar("BB_HASHSERVE", self.hashservaddr)
self.databuilder.mcdata[mc].setVar("BB_HASHSERVE", self.hashservaddr)
@@ -346,34 +470,6 @@ class BBCooker:
if hasattr(self, "data"):
self.data.disableTracking()

def revalidateCaches(self):
bb.parse.clear_cache()

clean = True
for f in self.configwatched:
if not bb.parse.check_mtime(f, self.configwatched[f]):
bb.server.process.serverlog("Found %s changed, invalid cache" % f)
self._baseconfig_set(False)
self._parsecache_set(False)
clean = False
break

if clean:
for f in self.parsewatched:
if not bb.parse.check_mtime(f, self.parsewatched[f]):
bb.server.process.serverlog("Found %s changed, invalid cache" % f)
self._parsecache_set(False)
clean = False
break

if not clean:
bb.parse.BBHandler.cached_statements = {}

# If writes were made to any of the data stores, we need to recalculate the data
# store cache
if hasattr(self, "databuilder"):
self.databuilder.calc_datastore_hashes()

def parseConfiguration(self):
self.updateCacheSync()

@@ -392,24 +488,8 @@ class BBCooker:
self.recipecaches[mc] = bb.cache.CacheData(self.caches_array)

self.handleCollections(self.data.getVar("BBFILE_COLLECTIONS"))
self.collections = {}
for mc in self.multiconfigs:
self.collections[mc] = CookerCollectFiles(self.bbfile_config_priorities, mc)

self._parsecache_set(False)

def setupEventLog(self, eventlog):
if self.eventlog and self.eventlog[0] != eventlog:
bb.event.unregister_UIHhandler(self.eventlog[1])
self.eventlog = None
if not self.eventlog or self.eventlog[0] != eventlog:
# we log all events to a file if so directed
# register the log file writer as UI Handler
if not os.path.exists(os.path.dirname(eventlog)):
bb.utils.mkdirhier(os.path.dirname(eventlog))
writer = EventWriter(self, eventlog)
EventLogWriteHandler = namedtuple('EventLogWriteHandler', ['event'])
self.eventlog = (eventlog, bb.event.register_UIHhandler(EventLogWriteHandler(writer)), writer)
self.parsecache_valid = False

def updateConfigOpts(self, options, environment, cmdline):
self.ui_cmdline = cmdline
@@ -430,7 +510,14 @@ class BBCooker:
setattr(self.configuration, o, options[o])

if self.configuration.writeeventlog:
self.setupEventLog(self.configuration.writeeventlog)
if self.eventlog and self.eventlog[0] != self.configuration.writeeventlog:
bb.event.unregister_UIHhandler(self.eventlog[1])
if not self.eventlog or self.eventlog[0] != self.configuration.writeeventlog:
# we log all events to a file if so directed
# register the log file writer as UI Handler
writer = EventWriter(self, self.configuration.writeeventlog)
EventLogWriteHandler = namedtuple('EventLogWriteHandler', ['event'])
self.eventlog = (self.configuration.writeeventlog, bb.event.register_UIHhandler(EventLogWriteHandler(writer)))

bb.msg.loggerDefaultLogLevel = self.configuration.default_loglevel
bb.msg.loggerDefaultDomains = self.configuration.debug_domains
@@ -460,7 +547,6 @@ class BBCooker:
# Now update all the variables not in the datastore to match
self.configuration.env = environment

self.revalidateCaches()
if not clean:
logger.debug("Base environment change, triggering reparse")
self.reset()
@@ -538,14 +624,13 @@ class BBCooker:

if fn:
try:
layername = self.collections[mc].calc_bbfile_priority(fn)[2]
envdata = self.databuilder.parseRecipe(fn, self.collections[mc].get_file_appends(fn), layername)
envdata = self.databuilder.parseRecipe(fn, self.collections[mc].get_file_appends(fn))
except Exception as e:
parselog.exception("Unable to read %s", fn)
raise
else:
if not mc in self.databuilder.mcdata:
bb.fatal('No multiconfig named "%s" found' % mc)
bb.fatal('Not multiconfig named "%s" found' % mc)
envdata = self.databuilder.mcdata[mc]
data.expandKeys(envdata)
parse.ast.runAnonFuncs(envdata)
@@ -684,14 +769,14 @@ class BBCooker:
bb.event.fire(bb.event.TreeDataPreparationCompleted(len(fulltargetlist)), self.data)
return taskdata, runlist

def prepareTreeData(self, pkgs_to_build, task, halt=False):
def prepareTreeData(self, pkgs_to_build, task):
"""
Prepare a runqueue and taskdata object for iteration over pkgs_to_build
"""

# We set halt to False here to prevent unbuildable targets raising
# an exception when we're just generating data
taskdata, runlist = self.buildTaskData(pkgs_to_build, task, halt, allowincomplete=True)
taskdata, runlist = self.buildTaskData(pkgs_to_build, task, False, allowincomplete=True)

return runlist, taskdata

@@ -705,7 +790,7 @@ class BBCooker:
if not task.startswith("do_"):
task = "do_%s" % task

runlist, taskdata = self.prepareTreeData(pkgs_to_build, task, halt=True)
runlist, taskdata = self.prepareTreeData(pkgs_to_build, task)
rq = bb.runqueue.RunQueue(self, self.data, self.recipecaches, taskdata, runlist)
rq.rqdata.prepare()
return self.buildDependTree(rq, taskdata)
@@ -1277,8 +1362,8 @@ class BBCooker:
if bf.startswith("/") or bf.startswith("../"):
bf = os.path.abspath(bf)

collections = {mc: CookerCollectFiles(self.bbfile_config_priorities, mc)}
filelist, masked, searchdirs = collections[mc].collect_bbfiles(self.databuilder.mcdata[mc], self.databuilder.mcdata[mc])
self.collections = {mc: CookerCollectFiles(self.bbfile_config_priorities, mc)}
filelist, masked, searchdirs = self.collections[mc].collect_bbfiles(self.databuilder.mcdata[mc], self.databuilder.mcdata[mc])
try:
os.stat(bf)
bf = os.path.abspath(bf)
@@ -1342,10 +1427,9 @@ class BBCooker:
self.buildSetVars()
self.reset_mtime_caches()

bb_caches = bb.cache.MulticonfigCache(self.databuilder, self.databuilder.data_hash, self.caches_array)
bb_caches = bb.cache.MulticonfigCache(self.databuilder, self.data_hash, self.caches_array)

layername = self.collections[mc].calc_bbfile_priority(fn)[2]
infos = bb_caches[mc].parse(fn, self.collections[mc].get_file_appends(fn), layername)
infos = bb_caches[mc].parse(fn, self.collections[mc].get_file_appends(fn))
infos = dict(infos)

fn = bb.cache.realfn2virtual(fn, cls, mc)
@@ -1390,8 +1474,6 @@ class BBCooker:
buildname = self.databuilder.mcdata[mc].getVar("BUILDNAME")
if fireevents:
bb.event.fire(bb.event.BuildStarted(buildname, [item]), self.databuilder.mcdata[mc])
if self.eventlog:
self.eventlog[2].write_variables()
bb.event.enable_heartbeat()

# Execute the runqueue
@@ -1427,7 +1509,7 @@ class BBCooker:
bb.event.fire(bb.event.BuildCompleted(len(rq.rqdata.runtaskentries), buildname, item, failures, interrupted), self.databuilder.mcdata[mc])
bb.event.disable_heartbeat()
# We trashed self.recipecaches above
self._parsecache_set(False)
self.parsecache_valid = False
self.configuration.limited_deps = False
bb.parse.siggen.reset(self.data)
if quietlog:
@@ -1439,36 +1521,6 @@ class BBCooker:

self.idleCallBackRegister(buildFileIdle, rq)

def getTaskSignatures(self, target, tasks):
sig = []
getAllTaskSignatures = False

if not tasks:
tasks = ["do_build"]
getAllTaskSignatures = True

for task in tasks:
taskdata, runlist = self.buildTaskData(target, task, self.configuration.halt)
rq = bb.runqueue.RunQueue(self, self.data, self.recipecaches, taskdata, runlist)
rq.rqdata.prepare()

for l in runlist:
mc, pn, taskname, fn = l

taskdep = rq.rqdata.dataCaches[mc].task_deps[fn]
for t in taskdep['tasks']:
if t in taskdep['nostamp'] or "setscene" in t:
continue
tid = bb.runqueue.build_tid(mc, fn, t)

if t in task or getAllTaskSignatures:
try:
sig.append([pn, t, rq.rqdata.get_task_unihash(tid)])
except KeyError:
sig.append(self.getTaskSignatures(target, [t])[0])

return sig

def buildTargets(self, targets, task):
"""
Attempt to build the targets specified
@@ -1534,8 +1586,6 @@ class BBCooker:

for mc in self.multiconfigs:
bb.event.fire(bb.event.BuildStarted(buildname, ntargets), self.databuilder.mcdata[mc])
if self.eventlog:
self.eventlog[2].write_variables()
bb.event.enable_heartbeat()

rq = bb.runqueue.RunQueue(self, self.data, self.recipecaches, taskdata, runlist)
@@ -1546,13 +1596,7 @@ class BBCooker:


def getAllKeysWithFlags(self, flaglist):
def dummy_autorev(d):
return

dump = {}
# Horrible but for now we need to avoid any sideeffects of autorev being called
saved = bb.fetch2.get_autorev
bb.fetch2.get_autorev = dummy_autorev
for k in self.data.keys():
try:
expand = True
@@ -1572,7 +1616,6 @@ class BBCooker:
dump[k][d] = None
except Exception as e:
print(e)
bb.fetch2.get_autorev = saved
return dump


@@ -1580,6 +1623,8 @@ class BBCooker:
if self.state == state.running:
return

self.handle_inotify_updates()

if not self.baseconfig_valid:
logger.debug("Reloading base configuration data")
self.initConfigurationData()
@@ -1600,8 +1645,7 @@ class BBCooker:
self.updateCacheSync()

if self.state != state.parsing and not self.parsecache_valid:
bb.server.process.serverlog("Parsing started")
self.parsewatched = {}
self.setupParserWatcher()

bb.parse.siggen.reset(self.data)
self.parseConfiguration ()
@@ -1616,22 +1660,25 @@ class BBCooker:
for dep in self.configuration.extra_assume_provided:
self.recipecaches[mc].ignored_dependencies.add(dep)

self.collections = {}

mcfilelist = {}
total_masked = 0
searchdirs = set()
for mc in self.multiconfigs:
self.collections[mc] = CookerCollectFiles(self.bbfile_config_priorities, mc)
(filelist, masked, search) = self.collections[mc].collect_bbfiles(self.databuilder.mcdata[mc], self.databuilder.mcdata[mc])

mcfilelist[mc] = filelist
total_masked += masked
searchdirs |= set(search)

# Add mtimes for directories searched for bb/bbappend files
# Add inotify watches for directories searched for bb/bbappend files
for dirent in searchdirs:
self.add_filewatch([(dirent, bb.parse.cached_mtime_noerror(dirent))])
self.add_filewatch([[dirent]], dirs=True)

self.parser = CookerParser(self, mcfilelist, total_masked)
self._parsecache_set(True)
self.parsecache_valid = True

self.state = state.parsing

@@ -1749,7 +1796,8 @@ class BBCooker:
self.data = self.databuilder.data
# In theory tinfoil could have modified the base data before parsing,
# ideally need to track if anything did modify the datastore
self._parsecache_set(False)
self.parsecache_valid = False


class CookerExit(bb.event.Event):
"""
@@ -1770,10 +1818,10 @@ class CookerCollectFiles(object):
self.bbfile_config_priorities = sorted(priorities, key=lambda tup: tup[1], reverse=True)

def calc_bbfile_priority(self, filename):
for layername, _, regex, pri in self.bbfile_config_priorities:
for _, _, regex, pri in self.bbfile_config_priorities:
if regex.match(filename):
return pri, regex, layername
return 0, None, None
return pri, regex
return 0, None

def get_bbfiles(self):
"""Get list of default .bb files by reading out the current directory"""
@@ -1792,7 +1840,7 @@ class CookerCollectFiles(object):
for ignored in ('SCCS', 'CVS', '.svn'):
if ignored in dirs:
dirs.remove(ignored)
found += [os.path.join(dir, f) for f in files if (f.endswith(('.bb', '.bbappend')))]
found += [os.path.join(dir, f) for f in files if (f.endswith(['.bb', '.bbappend']))]

return found

@@ -1815,9 +1863,9 @@ class CookerCollectFiles(object):
collectlog.error("no recipe files to build, check your BBPATH and BBFILES?")
bb.event.fire(CookerExit(), eventdata)

# We need to track where we look so that we can know when the cache is invalid. There
# is no nice way to do this, this is horrid. We intercept the os.listdir() and os.scandir()
# calls while we run glob().
# We need to track where we look so that we can add inotify watches. There
# is no nice way to do this, this is horrid. We intercept the os.listdir()
# (or os.scandir() for python 3.6+) calls while we run glob().
origlistdir = os.listdir
if hasattr(os, 'scandir'):
origscandir = os.scandir
@@ -1946,7 +1994,7 @@ class CookerCollectFiles(object):
# Calculate priorities for each file
for p in pkgfns:
realfn, cls, mc = bb.cache.virtualfn2realfn(p)
priorities[p], regex, _ = self.calc_bbfile_priority(realfn)
priorities[p], regex = self.calc_bbfile_priority(realfn)
if regex in unmatched_regex:
matched_regex.add(regex)
unmatched_regex.remove(regex)
@@ -2083,7 +2131,7 @@ class Parser(multiprocessing.Process):
self.results.close()
self.results.join_thread()

def parse(self, mc, cache, filename, appends, layername):
def parse(self, mc, cache, filename, appends):
try:
origfilter = bb.event.LogHandler.filter
# Record the filename we're parsing into any events generated
@@ -2097,10 +2145,11 @@ class Parser(multiprocessing.Process):
bb.event.set_class_handlers(self.handlers.copy())
bb.event.LogHandler.filter = parse_filter

return True, mc, cache.parse(filename, appends, layername)
return True, mc, cache.parse(filename, appends)
except Exception as exc:
tb = sys.exc_info()[2]
exc.recipe = filename
exc.traceback = list(bb.exceptions.extract_traceback(tb, context=3))
return True, None, exc
# Need to turn BaseExceptions into Exceptions here so we gracefully shutdown
# and for example a worker thread doesn't just exit on its own in response to
@@ -2115,7 +2164,7 @@ class CookerParser(object):
self.mcfilelist = mcfilelist
self.cooker = cooker
self.cfgdata = cooker.data
self.cfghash = cooker.databuilder.data_hash
self.cfghash = cooker.data_hash
self.cfgbuilder = cooker.databuilder

# Accounting statistics
@@ -2136,11 +2185,10 @@ class CookerParser(object):
for mc in self.cooker.multiconfigs:
for filename in self.mcfilelist[mc]:
appends = self.cooker.collections[mc].get_file_appends(filename)
layername = self.cooker.collections[mc].calc_bbfile_priority(filename)[2]
if not self.bb_caches[mc].cacheValid(filename, appends):
self.willparse.add((mc, self.bb_caches[mc], filename, appends, layername))
self.willparse.add((mc, self.bb_caches[mc], filename, appends))
else:
self.fromcache.add((mc, self.bb_caches[mc], filename, appends, layername))
self.fromcache.add((mc, self.bb_caches[mc], filename, appends))

self.total = len(self.fromcache) + len(self.willparse)
self.toparse = len(self.willparse)
@@ -2227,8 +2275,9 @@ class CookerParser(object):

for process in self.processes:
process.join()
# clean up zombies
process.close()
# Added in 3.7, cleans up zombies
if hasattr(process, "close"):
process.close()

bb.codeparser.parser_cache_save()
bb.codeparser.parser_cache_savemerge()
@@ -2238,20 +2287,19 @@ class CookerParser(object):
profiles = []
for i in self.process_names:
logfile = "profile-parse-%s.log" % i
if os.path.exists(logfile) and os.path.getsize(logfile):
if os.path.exists(logfile):
profiles.append(logfile)

if profiles:
pout = "profile-parse.log.processed"
bb.utils.process_profilelog(profiles, pout = pout)
print("Processed parsing statistics saved to %s" % (pout))
pout = "profile-parse.log.processed"
bb.utils.process_profilelog(profiles, pout = pout)
print("Processed parsing statistics saved to %s" % (pout))

def final_cleanup(self):
if self.syncthread:
self.syncthread.join()

def load_cached(self):
for mc, cache, filename, appends, layername in self.fromcache:
for mc, cache, filename, appends in self.fromcache:
infos = cache.loadCached(filename, appends)
yield False, mc, infos

@@ -2301,12 +2349,8 @@ class CookerParser(object):
return False
except ParsingFailure as exc:
self.error += 1

exc_desc = str(exc)
if isinstance(exc, SystemExit) and not isinstance(exc.code, str):
exc_desc = 'Exited with "%d"' % exc.code

logger.error('Unable to parse %s: %s' % (exc.recipe, exc_desc))
logger.error('Unable to parse %s: %s' %
(exc.recipe, bb.exceptions.to_string(exc.realexception)))
self.shutdown(clean=False)
return False
except bb.parse.ParseError as exc:
@@ -2315,33 +2359,20 @@ class CookerParser(object):
self.shutdown(clean=False, eventmsg=str(exc))
return False
except bb.data_smart.ExpansionError as exc:
def skip_frames(f, fn_prefix):
while f and f.tb_frame.f_code.co_filename.startswith(fn_prefix):
f = f.tb_next
return f

self.error += 1
bbdir = os.path.dirname(__file__) + os.sep
etype, value, tb = sys.exc_info()

# Remove any frames where the code comes from bitbake. This
# prevents deep (and pretty useless) backtraces for expansion error
tb = skip_frames(tb, bbdir)
cur = tb
while cur:
cur.tb_next = skip_frames(cur.tb_next, bbdir)
cur = cur.tb_next

etype, value, _ = sys.exc_info()
tb = list(itertools.dropwhile(lambda e: e.filename.startswith(bbdir), exc.traceback))
logger.error('ExpansionError during parsing %s', value.recipe,
exc_info=(etype, value, tb))
self.shutdown(clean=False)
return False
except Exception as exc:
self.error += 1
_, value, _ = sys.exc_info()
etype, value, tb = sys.exc_info()
if hasattr(value, "recipe"):
logger.error('Unable to parse %s' % value.recipe,
exc_info=sys.exc_info())
exc_info=(etype, value, exc.traceback))
else:
# Most likely, an exception occurred during raising an exception
import traceback
@@ -2371,10 +2402,9 @@ class CookerParser(object):
bb.cache.SiggenRecipeInfo.reset()
to_reparse = set()
for mc in self.cooker.multiconfigs:
layername = self.cooker.collections[mc].calc_bbfile_priority(filename)[2]
to_reparse.add((mc, filename, self.cooker.collections[mc].get_file_appends(filename), layername))
to_reparse.add((mc, filename, self.cooker.collections[mc].get_file_appends(filename)))

for mc, filename, appends, layername in to_reparse:
infos = self.bb_caches[mc].parse(filename, appends, layername)
for mc, filename, appends in to_reparse:
infos = self.bb_caches[mc].parse(filename, appends)
for vfn, info_array in infos:
self.cooker.recipecaches[mc].add_from_recipeinfo(vfn, info_array)

@@ -254,16 +254,9 @@ class CookerDataBuilder(object):
|
||||
self.data = self.basedata
|
||||
self.mcdata = {}
|
||||
|
||||
def calc_datastore_hashes(self):
|
||||
data_hash = hashlib.sha256()
|
||||
data_hash.update(self.data.get_hash().encode('utf-8'))
|
||||
multiconfig = (self.data.getVar("BBMULTICONFIG") or "").split()
|
||||
for config in multiconfig:
|
||||
data_hash.update(self.mcdata[config].get_hash().encode('utf-8'))
|
||||
self.data_hash = data_hash.hexdigest()
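calc_datastore_hashes() folds the hash of the base datastore and of every multiconfig datastore into a single sha256 digest, so a change in any configuration invalidates the cached parse. A rough standalone equivalent with plain strings standing in for the datastore hashes (names are illustrative):

import hashlib

def calc_datastore_hashes(base_hash, mc_hashes):
    # Combine the base configuration hash with every per-multiconfig hash;
    # changing any of the inputs changes the combined digest.
    h = hashlib.sha256()
    h.update(base_hash.encode('utf-8'))
    for config in mc_hashes:
        h.update(mc_hashes[config].encode('utf-8'))
    return h.hexdigest()

print(calc_datastore_hashes("aaa", {}))
print(calc_datastore_hashes("aaa", {"musl": "bbb"}))   # different digest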
|
||||
|
||||
def parseBaseConfiguration(self, worker=False):
|
||||
mcdata = {}
|
||||
data_hash = hashlib.sha256()
|
||||
try:
|
||||
self.data = self.parseConfigurationFiles(self.prefiles, self.postfiles)
|
||||
|
||||
@@ -286,6 +279,7 @@ class CookerDataBuilder(object):
|
||||
bb.event.fire(bb.event.ConfigParsed(), self.data)
|
||||
|
||||
bb.parse.init_parser(self.data)
|
||||
data_hash.update(self.data.get_hash().encode('utf-8'))
|
||||
mcdata[''] = self.data
|
||||
|
||||
multiconfig = (self.data.getVar("BBMULTICONFIG") or "").split()
|
||||
@@ -295,9 +289,11 @@ class CookerDataBuilder(object):
|
||||
parsed_mcdata = self.parseConfigurationFiles(self.prefiles, self.postfiles, config)
|
||||
bb.event.fire(bb.event.ConfigParsed(), parsed_mcdata)
|
||||
mcdata[config] = parsed_mcdata
|
||||
data_hash.update(parsed_mcdata.get_hash().encode('utf-8'))
|
||||
if multiconfig:
|
||||
bb.event.fire(bb.event.MultiConfigParsed(mcdata), self.data)
|
||||
|
||||
self.data_hash = data_hash.hexdigest()
|
||||
except bb.data_smart.ExpansionError as e:
|
||||
logger.error(str(e))
|
||||
raise bb.BBHandledException()
|
||||
@@ -332,7 +328,6 @@ class CookerDataBuilder(object):
|
||||
for mc in mcdata:
|
||||
self.mcdata[mc] = bb.data.createCopy(mcdata[mc])
|
||||
self.data = self.mcdata['']
|
||||
self.calc_datastore_hashes()
|
||||
|
||||
def reset(self):
|
||||
# We may not have run parseBaseConfiguration() yet
|
||||
@@ -499,19 +494,18 @@ class CookerDataBuilder(object):
|
||||
return data
|
||||
|
||||
@staticmethod
|
||||
def _parse_recipe(bb_data, bbfile, appends, mc, layername):
|
||||
def _parse_recipe(bb_data, bbfile, appends, mc=''):
|
||||
bb_data.setVar("__BBMULTICONFIG", mc)
|
||||
bb_data.setVar("FILE_LAYERNAME", layername)
|
||||
|
||||
bbfile_loc = os.path.abspath(os.path.dirname(bbfile))
|
||||
bb.parse.cached_mtime_noerror(bbfile_loc)
|
||||
|
||||
if appends:
|
||||
bb_data.setVar('__BBAPPEND', " ".join(appends))
|
||||
bb_data = bb.parse.handle(bbfile, bb_data)
|
||||
return bb_data
|
||||
|
||||
return bb.parse.handle(bbfile, bb_data)
|
||||
|
||||
def parseRecipeVariants(self, bbfile, appends, virtonly=False, mc=None, layername=None):
|
||||
def parseRecipeVariants(self, bbfile, appends, virtonly=False, mc=None):
|
||||
"""
|
||||
Load and parse one .bb build file
|
||||
Return the data and whether parsing resulted in the file being skipped
|
||||
@@ -521,31 +515,32 @@ class CookerDataBuilder(object):
|
||||
(bbfile, virtual, mc) = bb.cache.virtualfn2realfn(bbfile)
|
||||
bb_data = self.mcdata[mc].createCopy()
|
||||
bb_data.setVar("__ONLYFINALISE", virtual or "default")
|
||||
return self._parse_recipe(bb_data, bbfile, appends, mc, layername)
|
||||
datastores = self._parse_recipe(bb_data, bbfile, appends, mc)
|
||||
return datastores
|
||||
|
||||
if mc is not None:
|
||||
bb_data = self.mcdata[mc].createCopy()
|
||||
return self._parse_recipe(bb_data, bbfile, appends, mc, layername)
|
||||
return self._parse_recipe(bb_data, bbfile, appends, mc)
|
||||
|
||||
bb_data = self.data.createCopy()
|
||||
datastores = self._parse_recipe(bb_data, bbfile, appends, '', layername)
|
||||
datastores = self._parse_recipe(bb_data, bbfile, appends)
|
||||
|
||||
for mc in self.mcdata:
|
||||
if not mc:
|
||||
continue
|
||||
bb_data = self.mcdata[mc].createCopy()
|
||||
newstores = self._parse_recipe(bb_data, bbfile, appends, mc, layername)
|
||||
newstores = self._parse_recipe(bb_data, bbfile, appends, mc)
|
||||
for ns in newstores:
|
||||
datastores["mc:%s:%s" % (mc, ns)] = newstores[ns]
|
||||
|
||||
return datastores
|
||||
|
||||
def parseRecipe(self, virtualfn, appends, layername):
|
||||
def parseRecipe(self, virtualfn, appends):
|
||||
"""
|
||||
Return a complete set of data for fn.
|
||||
To do this, we need to parse the file.
|
||||
"""
|
||||
logger.debug("Parsing %s (full)" % virtualfn)
|
||||
(fn, virtual, mc) = bb.cache.virtualfn2realfn(virtualfn)
|
||||
datastores = self.parseRecipeVariants(virtualfn, appends, virtonly=True, layername=layername)
|
||||
return datastores[virtual]
|
||||
bb_data = self.parseRecipeVariants(virtualfn, appends, virtonly=True)
|
||||
return bb_data[virtual]
|
||||
|
||||
@@ -285,7 +285,6 @@ def build_dependencies(key, keys, mod_funcs, shelldeps, varflagsexcl, ignored_va
|
||||
value += "\n_remove of %s" % r
|
||||
deps |= r2.references
|
||||
deps = deps | (keys & r2.execs)
|
||||
value = handle_contains(value, r2.contains, exclusions, d)
|
||||
return value
|
||||
|
||||
deps = set()
|
||||
@@ -293,7 +292,7 @@ def build_dependencies(key, keys, mod_funcs, shelldeps, varflagsexcl, ignored_va
|
||||
if key in mod_funcs:
|
||||
exclusions = set()
|
||||
moddep = bb.codeparser.modulecode_deps[key]
|
||||
value = handle_contains(moddep[4], moddep[3], exclusions, d)
|
||||
value = handle_contains("", moddep[3], exclusions, d)
|
||||
return frozenset((moddep[0] | keys & moddep[1]) - ignored_vars), value
|
||||
|
||||
if key[-1] == ']':
|
||||
|
||||
@@ -16,10 +16,7 @@ BitBake build tools.
|
||||
#
|
||||
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
|
||||
|
||||
import builtins
|
||||
import copy
|
||||
import re
|
||||
import sys
|
||||
import copy, re, sys, traceback
|
||||
from collections.abc import MutableMapping
|
||||
import logging
|
||||
import hashlib
|
||||
@@ -153,21 +150,19 @@ class VariableParse:
|
||||
value = utils.better_eval(codeobj, DataContext(self.d), {'d' : self.d})
|
||||
return str(value)
|
||||
|
||||
class DataContext(dict):
|
||||
excluded = set([i for i in dir(builtins) if not i.startswith('_')] + ['oe'])
|
||||
|
||||
class DataContext(dict):
|
||||
def __init__(self, metadata, **kwargs):
|
||||
self.metadata = metadata
|
||||
dict.__init__(self, **kwargs)
|
||||
self['d'] = metadata
|
||||
self.context = set(bb.utils.get_context())
|
||||
|
||||
def __missing__(self, key):
|
||||
if key in self.excluded or key in self.context:
|
||||
# Skip commonly accessed invalid variables
|
||||
if key in ['bb', 'oe', 'int', 'bool', 'time', 'str', 'os']:
|
||||
raise KeyError(key)
|
||||
|
||||
value = self.metadata.getVar(key)
|
||||
if value is None:
|
||||
if value is None or self.metadata.getVarFlag(key, 'func', False):
|
||||
raise KeyError(key)
|
||||
else:
|
||||
return value
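DataContext is the mapping handed to utils.better_eval() when expanding ${@...} Python expressions: a name that is not a Python builtin (newer variant) or not one of a few known-bad names (older variant) falls through to the datastore via __missing__. A simplified standalone sketch, with a plain dict standing in for the metadata object:

import builtins

class DataContext(dict):
    # Names of builtins are skipped so common lookups fail fast instead of
    # hitting the metadata store.
    excluded = set(i for i in dir(builtins) if not i.startswith('_'))

    def __init__(self, metadata, **kwargs):
        super().__init__(**kwargs)
        self.metadata = metadata
        self['d'] = metadata

    def __missing__(self, key):
        if key in self.excluded:
            raise KeyError(key)
        value = self.metadata.get(key)      # stand-in for metadata.getVar(key)
        if value is None:
            raise KeyError(key)
        return value

ctx = DataContext({"PN": "busybox"})
print(eval("PN + '-native'", {}, ctx))      # -> busybox-native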
|
||||
@@ -272,9 +267,12 @@ class VariableHistory(object):
|
||||
return
|
||||
if 'op' not in loginfo or not loginfo['op']:
|
||||
loginfo['op'] = 'set'
|
||||
if 'detail' in loginfo:
|
||||
loginfo['detail'] = str(loginfo['detail'])
|
||||
if 'variable' not in loginfo or 'file' not in loginfo:
|
||||
raise ValueError("record() missing variable or file.")
|
||||
var = loginfo['variable']
|
||||
|
||||
if var not in self.variables:
|
||||
self.variables[var] = []
|
||||
if not isinstance(self.variables[var], list):
|
||||
@@ -333,8 +331,7 @@ class VariableHistory(object):
|
||||
flag = '[%s] ' % (event['flag'])
|
||||
else:
|
||||
flag = ''
|
||||
o.write("# %s %s:%s%s\n# %s\"%s\"\n" % \
|
||||
(event['op'], event['file'], event['line'], display_func, flag, re.sub('\n', '\n# ', str(event['detail']))))
|
||||
o.write("# %s %s:%s%s\n# %s\"%s\"\n" % (event['op'], event['file'], event['line'], display_func, flag, re.sub('\n', '\n# ', event['detail'])))
|
||||
if len(history) > 1:
|
||||
o.write("# pre-expansion value:\n")
|
||||
o.write('# "%s"\n' % (commentVal))
|
||||
@@ -388,7 +385,7 @@ class VariableHistory(object):
|
||||
if isset and event['op'] == 'set?':
|
||||
continue
|
||||
isset = True
|
||||
items = d.expand(str(event['detail'])).split()
|
||||
items = d.expand(event['detail']).split()
|
||||
for item in items:
|
||||
# This is a little crude but is belt-and-braces to avoid us
|
||||
# having to handle every possible operation type specifically
|
||||
|
||||
@@ -19,6 +19,7 @@ import sys
|
||||
import threading
|
||||
import traceback
|
||||
|
||||
import bb.exceptions
|
||||
import bb.utils
|
||||
|
||||
# This is the pid for which we should generate the event. This is set when
|
||||
@@ -256,15 +257,14 @@ def register(name, handler, mask=None, filename=None, lineno=None, data=None):
|
||||
# handle string containing python code
|
||||
if isinstance(handler, str):
|
||||
tmp = "def %s(e, d):\n%s" % (name, handler)
|
||||
# Inject empty lines to make code match lineno in filename
|
||||
if lineno is not None:
|
||||
tmp = "\n" * (lineno-1) + tmp
|
||||
try:
|
||||
code = bb.methodpool.compile_cache(tmp)
|
||||
if not code:
|
||||
if filename is None:
|
||||
filename = "%s(e, d)" % name
|
||||
code = compile(tmp, filename, "exec", ast.PyCF_ONLY_AST)
|
||||
if lineno is not None:
|
||||
ast.increment_lineno(code, lineno-1)
|
||||
code = compile(code, filename, "exec")
|
||||
bb.methodpool.compile_cache_add(tmp, code)
|
||||
except SyntaxError:
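Both branches of register() above compile a string handler so that tracebacks point at the right line of the file the snippet came from: the newer code pads the source with blank lines, the older one shifts the parsed AST with ast.increment_lineno(). A standalone sketch showing that the two approaches yield the same first line number (handler body and filename are invented):

import ast

def compile_with_lineno(name, body, filename, lineno):
    src = "def %s(e, d):\n%s" % (name, body)
    # Newer approach: pad with blank lines so positions line up.
    code_pad = compile("\n" * (lineno - 1) + src, filename, "exec")
    # Older approach: shift the parsed AST instead.
    tree = compile(src, filename, "exec", ast.PyCF_ONLY_AST)
    ast.increment_lineno(tree, lineno - 1)
    code_ast = compile(tree, filename, "exec")
    return code_pad, code_ast

for code in compile_with_lineno("handler", "    return 42", "recipe.bb", 17):
    ns = {}
    exec(code, ns)
    print(ns["handler"].__code__.co_firstlineno)    # 17 both times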
|
||||
@@ -758,7 +758,13 @@ class LogHandler(logging.Handler):
|
||||
|
||||
def emit(self, record):
|
||||
if record.exc_info:
|
||||
record.bb_exc_formatted = traceback.format_exception(*record.exc_info)
|
||||
etype, value, tb = record.exc_info
|
||||
if hasattr(tb, 'tb_next'):
|
||||
tb = list(bb.exceptions.extract_traceback(tb, context=3))
|
||||
# Need to turn the value into something the logging system can pickle
|
||||
record.bb_exc_info = (etype, value, tb)
|
||||
record.bb_exc_formatted = bb.exceptions.format_exception(etype, value, tb, limit=5)
|
||||
value = str(value)
|
||||
record.exc_info = None
|
||||
fire(record, None)
|
||||
|
||||
@@ -851,14 +857,6 @@ class FindSigInfoResult(Event):
|
||||
Event.__init__(self)
|
||||
self.result = result
|
||||
|
||||
class GetTaskSignatureResult(Event):
|
||||
"""
|
||||
Event to return results from GetTaskSignatures command
|
||||
"""
|
||||
def __init__(self, sig):
|
||||
Event.__init__(self)
|
||||
self.sig = sig
|
||||
|
||||
class ParseError(Event):
|
||||
"""
|
||||
Event to indicate parse failed
|
||||
|
||||
96
bitbake/lib/bb/exceptions.py
Normal file
@@ -0,0 +1,96 @@
|
||||
#
|
||||
# Copyright BitBake Contributors
|
||||
#
|
||||
# SPDX-License-Identifier: GPL-2.0-only
|
||||
#
|
||||
|
||||
import inspect
|
||||
import traceback
|
||||
import bb.namedtuple_with_abc
|
||||
from collections import namedtuple
|
||||
|
||||
|
||||
class TracebackEntry(namedtuple.abc):
|
||||
"""Pickleable representation of a traceback entry"""
|
||||
_fields = 'filename lineno function args code_context index'
|
||||
_header = ' File "{0.filename}", line {0.lineno}, in {0.function}{0.args}'
|
||||
|
||||
def format(self, formatter=None):
|
||||
if not self.code_context:
|
||||
return self._header.format(self) + '\n'
|
||||
|
||||
formatted = [self._header.format(self) + ':\n']
|
||||
|
||||
for lineindex, line in enumerate(self.code_context):
|
||||
if formatter:
|
||||
line = formatter(line)
|
||||
|
||||
if lineindex == self.index:
|
||||
formatted.append(' >%s' % line)
|
||||
else:
|
||||
formatted.append(' %s' % line)
|
||||
return formatted
|
||||
|
||||
def __str__(self):
|
||||
return ''.join(self.format())
|
||||
|
||||
def _get_frame_args(frame):
|
||||
"""Get the formatted arguments and class (if available) for a frame"""
|
||||
arginfo = inspect.getargvalues(frame)
|
||||
|
||||
try:
|
||||
if not arginfo.args:
|
||||
return '', None
|
||||
# There have been reports from the field of python 2.6 which doesn't
|
||||
# return a namedtuple here but simply a tuple so fallback gracefully if
|
||||
# args isn't present.
|
||||
except AttributeError:
|
||||
return '', None
|
||||
|
||||
firstarg = arginfo.args[0]
|
||||
if firstarg == 'self':
|
||||
self = arginfo.locals['self']
|
||||
cls = self.__class__.__name__
|
||||
|
||||
arginfo.args.pop(0)
|
||||
del arginfo.locals['self']
|
||||
else:
|
||||
cls = None
|
||||
|
||||
formatted = inspect.formatargvalues(*arginfo)
|
||||
return formatted, cls
|
||||
|
||||
def extract_traceback(tb, context=1):
|
||||
frames = inspect.getinnerframes(tb, context)
|
||||
for frame, filename, lineno, function, code_context, index in frames:
|
||||
formatted_args, cls = _get_frame_args(frame)
|
||||
if cls:
|
||||
function = '%s.%s' % (cls, function)
|
||||
yield TracebackEntry(filename, lineno, function, formatted_args,
|
||||
code_context, index)
|
||||
|
||||
def format_extracted(extracted, formatter=None, limit=None):
|
||||
if limit:
|
||||
extracted = extracted[-limit:]
|
||||
|
||||
formatted = []
|
||||
for tracebackinfo in extracted:
|
||||
formatted.extend(tracebackinfo.format(formatter))
|
||||
return formatted
|
||||
|
||||
|
||||
def format_exception(etype, value, tb, context=1, limit=None, formatter=None):
|
||||
formatted = ['Traceback (most recent call last):\n']
|
||||
|
||||
if hasattr(tb, 'tb_next'):
|
||||
tb = extract_traceback(tb, context)
|
||||
|
||||
formatted.extend(format_extracted(tb, formatter, limit))
|
||||
formatted.extend(traceback.format_exception_only(etype, value))
|
||||
return formatted
|
||||
|
||||
def to_string(exc):
|
||||
if isinstance(exc, SystemExit):
|
||||
if not isinstance(exc.code, str):
|
||||
return 'Exited with "%d"' % exc.code
|
||||
return str(exc)
|
||||
@@ -290,12 +290,12 @@ class URI(object):
|
||||
|
||||
def _param_str_split(self, string, elmdelim, kvdelim="="):
|
||||
ret = collections.OrderedDict()
|
||||
for k, v in [x.split(kvdelim, 1) if kvdelim in x else (x, None) for x in string.split(elmdelim) if x]:
|
||||
for k, v in [x.split(kvdelim, 1) for x in string.split(elmdelim) if x]:
|
||||
ret[k] = v
|
||||
return ret
|
||||
|
||||
def _param_str_join(self, dict_, elmdelim, kvdelim="="):
|
||||
return elmdelim.join([kvdelim.join([k, v]) if v else k for k, v in dict_.items()])
|
||||
return elmdelim.join([kvdelim.join([k, v]) for k, v in dict_.items()])
|
||||
|
||||
@property
|
||||
def hostport(self):
|
||||
@@ -388,7 +388,7 @@ def decodeurl(url):
|
||||
if s:
|
||||
if not '=' in s:
|
||||
raise MalformedUrl(url, "The URL: '%s' is invalid: parameter %s does not specify a value (missing '=')" % (url, s))
|
||||
s1, s2 = s.split('=', 1)
|
||||
s1, s2 = s.split('=')
|
||||
p[s1] = s2
|
||||
|
||||
return type, host, urllib.parse.unquote(path), user, pswd, p
|
||||
@@ -499,30 +499,30 @@ def fetcher_init(d):
|
||||
Calls before this must not hit the cache.
|
||||
"""
|
||||
|
||||
with bb.persist_data.persist('BB_URI_HEADREVS', d) as revs:
|
||||
try:
|
||||
# fetcher_init is called multiple times, so make sure we only save the
|
||||
# revs the first time it is called.
|
||||
if not bb.fetch2.saved_headrevs:
|
||||
bb.fetch2.saved_headrevs = dict(revs)
|
||||
except:
|
||||
pass
|
||||
revs = bb.persist_data.persist('BB_URI_HEADREVS', d)
|
||||
try:
|
||||
# fetcher_init is called multiple times, so make sure we only save the
|
||||
# revs the first time it is called.
|
||||
if not bb.fetch2.saved_headrevs:
|
||||
bb.fetch2.saved_headrevs = dict(revs)
|
||||
except:
|
||||
pass
|
||||
|
||||
# When to drop SCM head revisions controlled by user policy
|
||||
srcrev_policy = d.getVar('BB_SRCREV_POLICY') or "clear"
|
||||
if srcrev_policy == "cache":
|
||||
logger.debug("Keeping SRCREV cache due to cache policy of: %s", srcrev_policy)
|
||||
elif srcrev_policy == "clear":
|
||||
logger.debug("Clearing SRCREV cache due to cache policy of: %s", srcrev_policy)
|
||||
revs.clear()
|
||||
else:
|
||||
raise FetchError("Invalid SRCREV cache policy of: %s" % srcrev_policy)
|
||||
# When to drop SCM head revisions controlled by user policy
|
||||
srcrev_policy = d.getVar('BB_SRCREV_POLICY') or "clear"
|
||||
if srcrev_policy == "cache":
|
||||
logger.debug("Keeping SRCREV cache due to cache policy of: %s", srcrev_policy)
|
||||
elif srcrev_policy == "clear":
|
||||
logger.debug("Clearing SRCREV cache due to cache policy of: %s", srcrev_policy)
|
||||
revs.clear()
|
||||
else:
|
||||
raise FetchError("Invalid SRCREV cache policy of: %s" % srcrev_policy)
|
||||
|
||||
_checksum_cache.init_cache(d.getVar("BB_CACHEDIR"))
|
||||
_checksum_cache.init_cache(d.getVar("BB_CACHEDIR"))
|
||||
|
||||
for m in methods:
|
||||
if hasattr(m, "init"):
|
||||
m.init(d)
|
||||
for m in methods:
|
||||
if hasattr(m, "init"):
|
||||
m.init(d)
|
||||
|
||||
def fetcher_parse_save():
|
||||
_checksum_cache.save_extras()
|
||||
@@ -536,8 +536,8 @@ def fetcher_compare_revisions(d):
|
||||
when bitbake was started and return true if they have changed.
|
||||
"""
|
||||
|
||||
with dict(bb.persist_data.persist('BB_URI_HEADREVS', d)) as headrevs:
|
||||
return headrevs != bb.fetch2.saved_headrevs
|
||||
headrevs = dict(bb.persist_data.persist('BB_URI_HEADREVS', d))
|
||||
return headrevs != bb.fetch2.saved_headrevs
|
||||
|
||||
def mirror_from_string(data):
|
||||
mirrors = (data or "").replace('\\n',' ').split()
|
||||
@@ -753,7 +753,7 @@ def get_autorev(d):
|
||||
d.setVar("__BBAUTOREV_SEEN", True)
|
||||
return "AUTOINC"
|
||||
|
||||
def _get_srcrev(d, method_name='sortable_revision'):
|
||||
def get_srcrev(d, method_name='sortable_revision'):
|
||||
"""
|
||||
Return the revision string, usually for use in the version string (PV) of the current package
|
||||
Most packages usually only have one SCM so we just pass on the call.
|
||||
@@ -774,7 +774,6 @@ def _get_srcrev(d, method_name='sortable_revision'):
|
||||
d.setVar("__BBINSRCREV", True)
|
||||
|
||||
scms = []
|
||||
revs = []
|
||||
fetcher = Fetch(d.getVar('SRC_URI').split(), d)
|
||||
urldata = fetcher.ud
|
||||
for u in urldata:
|
||||
@@ -782,19 +781,16 @@ def _get_srcrev(d, method_name='sortable_revision'):
|
||||
scms.append(u)
|
||||
|
||||
if not scms:
|
||||
d.delVar("__BBINSRCREV")
|
||||
return "", revs
|
||||
|
||||
raise FetchError("SRCREV was used yet no valid SCM was found in SRC_URI")
|
||||
|
||||
if len(scms) == 1 and len(urldata[scms[0]].names) == 1:
|
||||
autoinc, rev = getattr(urldata[scms[0]].method, method_name)(urldata[scms[0]], d, urldata[scms[0]].names[0])
|
||||
revs.append(rev)
|
||||
if len(rev) > 10:
|
||||
rev = rev[:10]
|
||||
d.delVar("__BBINSRCREV")
|
||||
if autoinc:
|
||||
return "AUTOINC+" + rev, revs
|
||||
return rev, revs
|
||||
return "AUTOINC+" + rev
|
||||
return rev
|
||||
|
||||
#
|
||||
# Mutiple SCMs are in SRC_URI so we resort to SRCREV_FORMAT
|
||||
@@ -810,7 +806,6 @@ def _get_srcrev(d, method_name='sortable_revision'):
|
||||
ud = urldata[scm]
|
||||
for name in ud.names:
|
||||
autoinc, rev = getattr(ud.method, method_name)(ud, d, name)
|
||||
revs.append(rev)
|
||||
seenautoinc = seenautoinc or autoinc
|
||||
if len(rev) > 10:
|
||||
rev = rev[:10]
|
||||
@@ -828,21 +823,7 @@ def _get_srcrev(d, method_name='sortable_revision'):
|
||||
format = "AUTOINC+" + format
|
||||
|
||||
d.delVar("__BBINSRCREV")
|
||||
return format, revs
|
||||
|
||||
def get_hashvalue(d, method_name='sortable_revision'):
|
||||
pkgv, revs = _get_srcrev(d, method_name=method_name)
|
||||
return " ".join(revs)
|
||||
|
||||
def get_pkgv_string(d, method_name='sortable_revision'):
|
||||
pkgv, revs = _get_srcrev(d, method_name=method_name)
|
||||
return pkgv
|
||||
|
||||
def get_srcrev(d, method_name='sortable_revision'):
|
||||
pkgv, revs = _get_srcrev(d, method_name=method_name)
|
||||
if not pkgv:
|
||||
raise FetchError("SRCREV was used yet no valid SCM was found in SRC_URI")
|
||||
return pkgv
|
||||
return format
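In the single-SCM case both versions shorten the revision to ten characters and prefix it with "AUTOINC+" when the revision floats; the newer code merely splits the work out so get_hashvalue() can return the full revisions while get_pkgv_string() returns the PV fragment. A tiny sketch of the PV-side transformation (the revision value is made up):

def srcrev_to_pkgv(rev, autoinc=False, maxlen=10):
    # Mirror of the single-SCM branch above: truncate long hashes and mark
    # floating revisions with an AUTOINC+ prefix.
    short = rev[:maxlen] if len(rev) > maxlen else rev
    return "AUTOINC+" + short if autoinc else short

print(srcrev_to_pkgv("f0e1d2c3b4a5968778695a4b3c2d1e0ff0e1d2c3"))                  # f0e1d2c3b4
print(srcrev_to_pkgv("f0e1d2c3b4a5968778695a4b3c2d1e0ff0e1d2c3", autoinc=True))    # AUTOINC+f0e1d2c3b4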
|
||||
|
||||
def localpath(url, d):
|
||||
fetcher = bb.fetch2.Fetch([url], d)
|
||||
@@ -872,12 +853,8 @@ FETCH_EXPORT_VARS = ['HOME', 'PATH',
|
||||
'AWS_PROFILE',
|
||||
'AWS_ACCESS_KEY_ID',
|
||||
'AWS_SECRET_ACCESS_KEY',
|
||||
'AWS_ROLE_ARN',
|
||||
'AWS_WEB_IDENTITY_TOKEN_FILE',
|
||||
'AWS_DEFAULT_REGION',
|
||||
'AWS_SESSION_TOKEN',
|
||||
'GIT_CACHE_PATH',
|
||||
'REMOTE_CONTAINERS_IPC',
|
||||
'SSL_CERT_DIR']
|
||||
|
||||
def get_fetcher_environment(d):
|
||||
@@ -943,10 +920,7 @@ def runfetchcmd(cmd, d, quiet=False, cleanup=None, log=None, workdir=None):
|
||||
elif e.stderr:
|
||||
output = "output:\n%s" % e.stderr
|
||||
else:
|
||||
if log:
|
||||
output = "see logfile for output"
|
||||
else:
|
||||
output = "no output"
|
||||
output = "no output"
|
||||
error_message = "Fetch command %s failed with exit code %s, %s" % (e.command, e.exitcode, output)
|
||||
except bb.process.CmdError as e:
|
||||
error_message = "Fetch command %s could not be run:\n%s" % (e.command, e.msg)
|
||||
@@ -1118,8 +1092,7 @@ def try_mirror_url(fetch, origud, ud, ld, check = False):
|
||||
logger.debug("Mirror fetch failure for url %s (original url: %s)" % (ud.url, origud.url))
|
||||
logger.debug(str(e))
|
||||
try:
|
||||
if ud.method.cleanup_upon_failure():
|
||||
ud.method.clean(ud, ld)
|
||||
ud.method.clean(ud, ld)
|
||||
except UnboundLocalError:
|
||||
pass
|
||||
return False
|
||||
@@ -1261,7 +1234,7 @@ def get_checksum_file_list(d):
|
||||
ud = fetch.ud[u]
|
||||
if ud and isinstance(ud.method, local.Local):
|
||||
found = False
|
||||
paths = ud.method.localfile_searchpaths(ud, d)
|
||||
paths = ud.method.localpaths(ud, d)
|
||||
for f in paths:
|
||||
pth = ud.decodedurl
|
||||
if os.path.exists(f):
|
||||
@@ -1317,7 +1290,7 @@ class FetchData(object):
|
||||
|
||||
if checksum_name in self.parm:
|
||||
checksum_expected = self.parm[checksum_name]
|
||||
elif self.type not in ["http", "https", "ftp", "ftps", "sftp", "s3", "az", "crate", "gs", "gomod"]:
|
||||
elif self.type not in ["http", "https", "ftp", "ftps", "sftp", "s3", "az", "crate"]:
|
||||
checksum_expected = None
|
||||
else:
|
||||
checksum_expected = d.getVarFlag("SRC_URI", checksum_name)
|
||||
@@ -1429,9 +1402,6 @@ class FetchMethod(object):
|
||||
Is localpath something that can be represented by a checksum?
|
||||
"""
|
||||
|
||||
# We cannot compute checksums for None
|
||||
if urldata.localpath is None:
|
||||
return False
|
||||
# We cannot compute checksums for directories
|
||||
if os.path.isdir(urldata.localpath):
|
||||
return False
|
||||
@@ -1444,12 +1414,6 @@ class FetchMethod(object):
|
||||
"""
|
||||
return False
|
||||
|
||||
def cleanup_upon_failure(self):
|
||||
"""
|
||||
When a fetch fails, should clean() be called?
|
||||
"""
|
||||
return True
|
||||
|
||||
def verify_donestamp(self, ud, d):
|
||||
"""
|
||||
Verify the donestamp file
|
||||
@@ -1592,7 +1556,6 @@ class FetchMethod(object):
|
||||
unpackdir = rootdir
|
||||
|
||||
if not unpack or not cmd:
|
||||
urldata.unpack_tracer.unpack("file-copy", unpackdir)
|
||||
# If file == dest, then avoid any copies, as we already put the file into dest!
|
||||
dest = os.path.join(unpackdir, os.path.basename(file))
|
||||
if file != dest and not (os.path.exists(dest) and os.path.samefile(file, dest)):
|
||||
@@ -1606,9 +1569,7 @@ class FetchMethod(object):
|
||||
if urlpath.find("/") != -1:
|
||||
destdir = urlpath.rsplit("/", 1)[0] + '/'
|
||||
bb.utils.mkdirhier("%s/%s" % (unpackdir, destdir))
|
||||
cmd = 'cp --force --preserve=timestamps --no-dereference --recursive -H "%s" "%s"' % (file, destdir)
|
||||
else:
|
||||
urldata.unpack_tracer.unpack("archive-extract", unpackdir)
|
||||
cmd = 'cp -fpPRH "%s" "%s"' % (file, destdir)
|
||||
|
||||
if not cmd:
|
||||
return
|
||||
@@ -1662,13 +1623,13 @@ class FetchMethod(object):
|
||||
if not hasattr(self, "_latest_revision"):
|
||||
raise ParameterError("The fetcher for this URL does not support _latest_revision", ud.url)
|
||||
|
||||
with bb.persist_data.persist('BB_URI_HEADREVS', d) as revs:
|
||||
key = self.generate_revision_key(ud, d, name)
|
||||
try:
|
||||
return revs[key]
|
||||
except KeyError:
|
||||
revs[key] = rev = self._latest_revision(ud, d, name)
|
||||
return rev
|
||||
revs = bb.persist_data.persist('BB_URI_HEADREVS', d)
|
||||
key = self.generate_revision_key(ud, d, name)
|
||||
try:
|
||||
return revs[key]
|
||||
except KeyError:
|
||||
revs[key] = rev = self._latest_revision(ud, d, name)
|
||||
return rev
|
||||
|
||||
def sortable_revision(self, ud, d, name):
|
||||
latest_rev = self._build_revision(ud, d, name)
|
||||
@@ -1700,55 +1661,6 @@ class FetchMethod(object):
|
||||
"""
|
||||
return []
|
||||
|
||||
|
||||
class DummyUnpackTracer(object):
|
||||
"""
|
||||
Abstract API definition for a class that traces unpacked source files back
|
||||
to their respective upstream SRC_URI entries, for software composition
|
||||
analysis, license compliance and detailed SBOM generation purposes.
|
||||
Users may load their own unpack tracer class (instead of the dummy
|
||||
one) by setting the BB_UNPACK_TRACER_CLASS config parameter.
|
||||
"""
|
||||
def start(self, unpackdir, urldata_dict, d):
|
||||
"""
|
||||
Start tracing the core Fetch.unpack process, using an index to map
|
||||
unpacked files to each SRC_URI entry.
|
||||
This method is called by Fetch.unpack and it may receive nested calls by
|
||||
gitsm and npmsw fetchers, that expand SRC_URI entries by adding implicit
|
||||
URLs and by recursively calling Fetch.unpack from new (nested) Fetch
|
||||
instances.
|
||||
"""
|
||||
return
|
||||
def start_url(self, url):
|
||||
"""Start tracing url unpack process.
|
||||
This method is called by Fetch.unpack before the fetcher-specific unpack
|
||||
method starts, and it may receive nested calls by gitsm and npmsw
|
||||
fetchers.
|
||||
"""
|
||||
return
|
||||
def unpack(self, unpack_type, destdir):
|
||||
"""
|
||||
Set unpack_type and destdir for current url.
|
||||
This method is called by the fetcher-specific unpack method after url
|
||||
tracing started.
|
||||
"""
|
||||
return
|
||||
def finish_url(self, url):
|
||||
"""Finish tracing url unpack process and update the file index.
|
||||
This method is called by Fetch.unpack after the fetcher-specific unpack
|
||||
method finished its job, and it may receive nested calls by gitsm
|
||||
and npmsw fetchers.
|
||||
"""
|
||||
return
|
||||
def complete(self):
|
||||
"""
|
||||
Finish tracing the Fetch.unpack process, and check if all nested
|
||||
Fetch.unpack calls (if any) have been completed; if so, save collected
|
||||
metadata.
|
||||
"""
|
||||
return
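The dummy class above also documents the interface a real tracer must provide when BB_UNPACK_TRACER_CLASS points at it (the Fetch constructor further down imports that class with importlib). A minimal user-supplied tracer could look like this sketch; the module path in the comment is purely illustrative:

# e.g. BB_UNPACK_TRACER_CLASS = "mytracers.logging.LoggingUnpackTracer"

class LoggingUnpackTracer:
    def start(self, unpackdir, urldata_dict, d):
        self.unpackdir = unpackdir
        self.records = []
        self.current = None

    def start_url(self, url):
        self.current = url

    def unpack(self, unpack_type, destdir):
        self.records.append((self.current, unpack_type, destdir))

    def finish_url(self, url):
        self.current = None

    def complete(self):
        for url, unpack_type, destdir in self.records:
            print("%s unpacked via %s into %s" % (url, unpack_type, destdir))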
|
||||
|
||||
|
||||
class Fetch(object):
|
||||
def __init__(self, urls, d, cache = True, localonly = False, connection_cache = None):
|
||||
if localonly and cache:
|
||||
@@ -1769,30 +1681,10 @@ class Fetch(object):
|
||||
if key in urldata_cache:
|
||||
self.ud = urldata_cache[key]
|
||||
|
||||
# the unpack_tracer object needs to be made available to possible nested
|
||||
# Fetch instances (when those are created by gitsm and npmsw fetchers)
|
||||
# so we set it as a global variable
|
||||
global unpack_tracer
|
||||
try:
|
||||
unpack_tracer
|
||||
except NameError:
|
||||
class_path = d.getVar("BB_UNPACK_TRACER_CLASS")
|
||||
if class_path:
|
||||
# use user-defined unpack tracer class
|
||||
import importlib
|
||||
module_name, _, class_name = class_path.rpartition(".")
|
||||
module = importlib.import_module(module_name)
|
||||
class_ = getattr(module, class_name)
|
||||
unpack_tracer = class_()
|
||||
else:
|
||||
# fall back to the dummy/abstract class
|
||||
unpack_tracer = DummyUnpackTracer()
|
||||
|
||||
for url in urls:
|
||||
if url not in self.ud:
|
||||
try:
|
||||
self.ud[url] = FetchData(url, d, localonly)
|
||||
self.ud[url].unpack_tracer = unpack_tracer
|
||||
except NonLocalMethod:
|
||||
if localonly:
|
||||
self.ud[url] = None
|
||||
@@ -1895,7 +1787,7 @@ class Fetch(object):
|
||||
logger.debug(str(e))
|
||||
firsterr = e
|
||||
# Remove any incomplete fetch
|
||||
if not verified_stamp and m.cleanup_upon_failure():
|
||||
if not verified_stamp:
|
||||
m.clean(ud, self.d)
|
||||
logger.debug("Trying MIRRORS")
|
||||
mirrors = mirror_from_string(self.d.getVar('MIRRORS'))
|
||||
@@ -1958,7 +1850,7 @@ class Fetch(object):
|
||||
ret = m.try_mirrors(self, ud, self.d, mirrors, True)
|
||||
|
||||
if not ret:
|
||||
raise FetchError("URL doesn't work", u)
|
||||
raise FetchError("URL %s doesn't work" % u, u)
|
||||
|
||||
def unpack(self, root, urls=None):
|
||||
"""
|
||||
@@ -1968,8 +1860,6 @@ class Fetch(object):
|
||||
if not urls:
|
||||
urls = self.urls
|
||||
|
||||
unpack_tracer.start(root, self.ud, self.d)
|
||||
|
||||
for u in urls:
|
||||
ud = self.ud[u]
|
||||
ud.setup_localpath(self.d)
|
||||
@@ -1977,15 +1867,11 @@ class Fetch(object):
|
||||
if ud.lockfile:
|
||||
lf = bb.utils.lockfile(ud.lockfile)
|
||||
|
||||
unpack_tracer.start_url(u)
|
||||
ud.method.unpack(ud, root, self.d)
|
||||
unpack_tracer.finish_url(u)
|
||||
|
||||
if ud.lockfile:
|
||||
bb.utils.unlockfile(lf)
|
||||
|
||||
unpack_tracer.complete()
|
||||
|
||||
def clean(self, urls=None):
|
||||
"""
|
||||
Clean files that the fetcher gets or places
|
||||
@@ -2087,8 +1973,6 @@ from . import npm
|
||||
from . import npmsw
|
||||
from . import az
|
||||
from . import crate
|
||||
from . import gcp
|
||||
from . import gomod
|
||||
|
||||
methods.append(local.Local())
|
||||
methods.append(wget.Wget())
|
||||
@@ -2110,6 +1994,3 @@ methods.append(npm.Npm())
|
||||
methods.append(npmsw.NpmShrinkWrap())
|
||||
methods.append(az.Az())
|
||||
methods.append(crate.Crate())
|
||||
methods.append(gcp.GCP())
|
||||
methods.append(gomod.GoMod())
|
||||
methods.append(gomod.GoModGit())
|
||||
|
||||
@@ -108,7 +108,7 @@ class ClearCase(FetchMethod):
|
||||
ud.module.replace("/", "."),
|
||||
ud.label.replace("/", "."))
|
||||
|
||||
ud.viewname = "%s-view%s" % (ud.identifier, d.getVar("DATETIME"))
|
||||
ud.viewname = "%s-view%s" % (ud.identifier, d.getVar("DATETIME", d, True))
|
||||
ud.csname = "%s-config-spec" % (ud.identifier)
|
||||
ud.ccasedir = os.path.join(d.getVar("DL_DIR"), ud.type)
|
||||
ud.viewdir = os.path.join(ud.ccasedir, ud.viewname)
|
||||
@@ -196,7 +196,7 @@ class ClearCase(FetchMethod):
|
||||
|
||||
def need_update(self, ud, d):
|
||||
if ("LATEST" in ud.label) or (ud.customspec and "LATEST" in ud.customspec):
|
||||
ud.identifier += "-%s" % d.getVar("DATETIME")
|
||||
ud.identifier += "-%s" % d.getVar("DATETIME",d, True)
|
||||
return True
|
||||
if os.path.exists(ud.localpath):
|
||||
return False
|
||||
|
||||
@@ -59,18 +59,17 @@ class Crate(Wget):
|
||||
# version is expected to be the last token
|
||||
# but ignore possible url parameters which will be used
|
||||
# by the top fetcher class
|
||||
version = parts[-1].split(";")[0]
|
||||
version, _, _ = parts[len(parts) -1].partition(";")
|
||||
# second to last field is name
|
||||
name = parts[-2]
|
||||
name = parts[len(parts) - 2]
|
||||
# host (this is to allow custom crate registries to be specified)
|
||||
host = '/'.join(parts[2:-2])
|
||||
host = '/'.join(parts[2:len(parts) - 2])
|
||||
|
||||
# if using upstream just fix it up nicely
|
||||
if host == 'crates.io':
|
||||
host = 'crates.io/api/v1/crates'
|
||||
|
||||
ud.url = "https://%s/%s/%s/download" % (host, name, version)
|
||||
ud.versionsurl = "https://%s/%s/versions" % (host, name)
|
||||
ud.parm['downloadfilename'] = "%s-%s.crate" % (name, version)
|
||||
if 'name' not in ud.parm:
|
||||
ud.parm['name'] = '%s-%s' % (name, version)
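Putting the parsing above together, a crate:// URI is rewritten into a plain https download URL on the registry. A standalone sketch of that rewrite (the crate name and version are just an example):

def crate_download_url(src_uri):
    # Assumes the URI is split on '/' the same way the fetcher does.
    parts = src_uri.split("/")
    version = parts[-1].split(";")[0]        # drop any ;parameters
    name = parts[-2]
    host = "/".join(parts[2:-2])
    if host == "crates.io":
        host = "crates.io/api/v1/crates"
    return "https://%s/%s/%s/download" % (host, name, version)

print(crate_download_url("crate://crates.io/glob/0.2.11"))
# -> https://crates.io/api/v1/crates/glob/0.2.11/download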
|
||||
@@ -99,13 +98,11 @@ class Crate(Wget):
|
||||
save_cwd = os.getcwd()
|
||||
os.chdir(rootdir)
|
||||
|
||||
bp = d.getVar('BP')
|
||||
if bp == ud.parm.get('name'):
|
||||
pn = d.getVar('BPN')
|
||||
if pn == ud.parm.get('name'):
|
||||
cmd = "tar -xz --no-same-owner -f %s" % thefile
|
||||
ud.unpack_tracer.unpack("crate-extract", rootdir)
|
||||
else:
|
||||
cargo_bitbake = self._cargo_bitbake_path(rootdir)
|
||||
ud.unpack_tracer.unpack("cargo-extract", cargo_bitbake)
|
||||
|
||||
cmd = "tar -xz --no-same-owner -f %s -C %s" % (thefile, cargo_bitbake)
|
||||
|
||||
@@ -140,11 +137,3 @@ class Crate(Wget):
|
||||
mdpath = os.path.join(bbpath, cratepath, mdfile)
|
||||
with open(mdpath, "w") as f:
|
||||
json.dump(metadata, f)
|
||||
|
||||
def latest_versionstring(self, ud, d):
|
||||
from functools import cmp_to_key
|
||||
json_data = json.loads(self._fetch_index(ud.versionsurl, ud, d))
|
||||
versions = [(0, i["num"], "") for i in json_data["versions"]]
|
||||
versions = sorted(versions, key=cmp_to_key(bb.utils.vercmp))
|
||||
|
||||
return (versions[-1][1], "")
|
||||
|
||||
@@ -1,102 +0,0 @@
|
||||
"""
|
||||
BitBake 'Fetch' implementation for Google Cloud Platform Storage.
|
||||
|
||||
Class for fetching files from Google Cloud Storage using the
|
||||
Google Cloud Storage Python Client. The GCS Python Client must
|
||||
be correctly installed, configured and authenticated prior to use.
|
||||
Additionally, gsutil must also be installed.
|
||||
|
||||
"""
|
||||
|
||||
# Copyright (C) 2023, Snap Inc.
|
||||
#
|
||||
# Based in part on bb.fetch2.s3:
|
||||
# Copyright (C) 2017 Andre McCurdy
|
||||
#
|
||||
# SPDX-License-Identifier: GPL-2.0-only
|
||||
#
|
||||
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
|
||||
|
||||
import os
|
||||
import bb
|
||||
import urllib.parse, urllib.error
|
||||
from bb.fetch2 import FetchMethod
|
||||
from bb.fetch2 import FetchError
|
||||
from bb.fetch2 import logger
|
||||
|
||||
class GCP(FetchMethod):
|
||||
"""
|
||||
Class to fetch urls via GCP's Python API.
|
||||
"""
|
||||
def __init__(self):
|
||||
self.gcp_client = None
|
||||
|
||||
def supports(self, ud, d):
|
||||
"""
|
||||
Check to see if a given url can be fetched with GCP.
|
||||
"""
|
||||
return ud.type in ['gs']
|
||||
|
||||
def recommends_checksum(self, urldata):
|
||||
return True
|
||||
|
||||
def urldata_init(self, ud, d):
|
||||
if 'downloadfilename' in ud.parm:
|
||||
ud.basename = ud.parm['downloadfilename']
|
||||
else:
|
||||
ud.basename = os.path.basename(ud.path)
|
||||
|
||||
ud.localfile = d.expand(urllib.parse.unquote(ud.basename))
|
||||
|
||||
def get_gcp_client(self):
|
||||
from google.cloud import storage
|
||||
self.gcp_client = storage.Client(project=None)
|
||||
|
||||
def download(self, ud, d):
|
||||
"""
|
||||
Fetch urls using the GCP API.
|
||||
Assumes localpath was called first.
|
||||
"""
|
||||
from google.api_core.exceptions import NotFound
|
||||
logger.debug2(f"Trying to download gs://{ud.host}{ud.path} to {ud.localpath}")
|
||||
if self.gcp_client is None:
|
||||
self.get_gcp_client()
|
||||
|
||||
bb.fetch2.check_network_access(d, "blob.download_to_filename", f"gs://{ud.host}{ud.path}")
|
||||
|
||||
# Path sometimes has leading slash, so strip it
|
||||
path = ud.path.lstrip("/")
|
||||
blob = self.gcp_client.bucket(ud.host).blob(path)
|
||||
try:
|
||||
blob.download_to_filename(ud.localpath)
|
||||
except NotFound:
|
||||
raise FetchError("The GCP API threw a NotFound exception")
|
||||
|
||||
# Additional sanity checks copied from the wget class (although there
|
||||
# are no known issues which mean these are required, treat the GCP API
|
||||
# tool with a little healthy suspicion).
|
||||
if not os.path.exists(ud.localpath):
|
||||
raise FetchError(f"The GCP API returned success for gs://{ud.host}{ud.path} but {ud.localpath} doesn't exist?!")
|
||||
|
||||
if os.path.getsize(ud.localpath) == 0:
|
||||
os.remove(ud.localpath)
|
||||
raise FetchError(f"The downloaded file for gs://{ud.host}{ud.path} resulted in a zero size file?! Deleting and failing since this isn't right.")
|
||||
|
||||
return True
|
||||
|
||||
def checkstatus(self, fetch, ud, d):
|
||||
"""
|
||||
Check the status of a URL.
|
||||
"""
|
||||
logger.debug2(f"Checking status of gs://{ud.host}{ud.path}")
|
||||
if self.gcp_client is None:
|
||||
self.get_gcp_client()
|
||||
|
||||
bb.fetch2.check_network_access(d, "gcp_client.bucket(ud.host).blob(path).exists()", f"gs://{ud.host}{ud.path}")
|
||||
|
||||
# Path sometimes has leading slash, so strip it
|
||||
path = ud.path.lstrip("/")
|
||||
if self.gcp_client.bucket(ud.host).blob(path).exists() == False:
|
||||
raise FetchError(f"The GCP API reported that gs://{ud.host}{ud.path} does not exist")
|
||||
else:
|
||||
return True
|
||||
@@ -48,23 +48,10 @@ Supported SRC_URI options are:
|
||||
instead of branch.
|
||||
The default is "0", set nobranch=1 if needed.
|
||||
|
||||
- subpath
|
||||
Limit the checkout to a specific subpath of the tree.
|
||||
By default, checkout the whole tree, set subpath=<path> if needed
|
||||
|
||||
- destsuffix
|
||||
The name of the path in which to place the checkout.
|
||||
By default, the path is git/, set destsuffix=<suffix> if needed
|
||||
|
||||
- usehead
|
||||
For local git:// urls to use the current branch HEAD as the revision for use with
|
||||
AUTOREV. Implies nobranch.
|
||||
|
||||
- lfs
|
||||
Enable the checkout to use LFS for large files. This will download all LFS files
|
||||
in the download step, as the unpack step does not have network access.
|
||||
The default is "1", set lfs=0 to skip.
|
||||
|
||||
"""
|
||||
|
||||
# Copyright (C) 2005 Richard Purdie
|
||||
@@ -78,7 +65,6 @@ import fnmatch
|
||||
import os
|
||||
import re
|
||||
import shlex
|
||||
import shutil
|
||||
import subprocess
|
||||
import tempfile
|
||||
import bb
|
||||
@@ -87,7 +73,6 @@ from contextlib import contextmanager
|
||||
from bb.fetch2 import FetchMethod
|
||||
from bb.fetch2 import runfetchcmd
|
||||
from bb.fetch2 import logger
|
||||
from bb.fetch2 import trusted_network
|
||||
|
||||
|
||||
sha1_re = re.compile(r'^[0-9a-f]{40}$')
|
||||
@@ -150,9 +135,6 @@ class Git(FetchMethod):
|
||||
def supports_checksum(self, urldata):
|
||||
return False
|
||||
|
||||
def cleanup_upon_failure(self):
|
||||
return False
|
||||
|
||||
def urldata_init(self, ud, d):
|
||||
"""
|
||||
init git specific variable within url data
|
||||
@@ -262,7 +244,7 @@ class Git(FetchMethod):
|
||||
for name in ud.names:
|
||||
ud.unresolvedrev[name] = 'HEAD'
|
||||
|
||||
ud.basecmd = d.getVar("FETCHCMD_git") or "git -c gc.autoDetach=false -c core.pager=cat -c safe.bareRepository=all -c clone.defaultRemoteName=origin"
|
||||
ud.basecmd = d.getVar("FETCHCMD_git") or "git -c gc.autoDetach=false -c core.pager=cat"
|
||||
|
||||
write_tarballs = d.getVar("BB_GENERATE_MIRROR_TARBALLS") or "0"
|
||||
ud.write_tarballs = write_tarballs != "0" or ud.rebaseable
|
||||
@@ -277,7 +259,7 @@ class Git(FetchMethod):
|
||||
ud.unresolvedrev[name] = ud.revisions[name]
|
||||
ud.revisions[name] = self.latest_revision(ud, d, name)
|
||||
|
||||
gitsrcname = '%s%s' % (ud.host.replace(':', '.'), ud.path.replace('/', '.').replace('*', '.').replace(' ','_').replace('(', '_').replace(')', '_'))
|
||||
gitsrcname = '%s%s' % (ud.host.replace(':', '.'), ud.path.replace('/', '.').replace('*', '.').replace(' ','_'))
|
||||
if gitsrcname.startswith('.'):
|
||||
gitsrcname = gitsrcname[1:]
|
||||
|
||||
@@ -328,10 +310,7 @@ class Git(FetchMethod):
|
||||
return ud.clonedir
|
||||
|
||||
def need_update(self, ud, d):
|
||||
return self.clonedir_need_update(ud, d) \
|
||||
or self.shallow_tarball_need_update(ud) \
|
||||
or self.tarball_need_update(ud) \
|
||||
or self.lfs_need_update(ud, d)
|
||||
return self.clonedir_need_update(ud, d) or self.shallow_tarball_need_update(ud) or self.tarball_need_update(ud)
|
||||
|
||||
def clonedir_need_update(self, ud, d):
|
||||
if not os.path.exists(ud.clonedir):
|
||||
@@ -343,15 +322,6 @@ class Git(FetchMethod):
|
||||
return True
|
||||
return False
|
||||
|
||||
def lfs_need_update(self, ud, d):
|
||||
if self.clonedir_need_update(ud, d):
|
||||
return True
|
||||
|
||||
for name in ud.names:
|
||||
if not self._lfs_objects_downloaded(ud, d, name, ud.clonedir):
|
||||
return True
|
||||
return False
|
||||
|
||||
def clonedir_need_shallow_revs(self, ud, d):
|
||||
for rev in ud.shallow_revs:
|
||||
try:
|
||||
@@ -371,16 +341,6 @@ class Git(FetchMethod):
|
||||
# is not possible
|
||||
if bb.utils.to_boolean(d.getVar("BB_FETCH_PREMIRRORONLY")):
|
||||
return True
|
||||
# If the url is not in trusted network, that is, BB_NO_NETWORK is set to 0
|
||||
# and BB_ALLOWED_NETWORKS does not contain the host that ud.url uses, then
|
||||
# we need to try premirrors first as using upstream is destined to fail.
|
||||
if not trusted_network(d, ud.url):
|
||||
return True
|
||||
# the following check is to ensure incremental fetch in downloads, this is
|
||||
# because the premirror might be old and does not contain the new rev required,
|
||||
# and this will cause a total removal and new clone. So if we can reach the
|
||||
# network, we prefer upstream over premirror, though the premirror might contain
|
||||
# the new rev.
|
||||
if os.path.exists(ud.clonedir):
|
||||
return False
|
||||
return True
|
||||
@@ -401,40 +361,12 @@ class Git(FetchMethod):
|
||||
else:
|
||||
tmpdir = tempfile.mkdtemp(dir=d.getVar('DL_DIR'))
|
||||
runfetchcmd("tar -xzf %s" % ud.fullmirror, d, workdir=tmpdir)
|
||||
output = runfetchcmd("%s remote" % ud.basecmd, d, quiet=True, workdir=ud.clonedir)
|
||||
if 'mirror' in output:
|
||||
runfetchcmd("%s remote rm mirror" % ud.basecmd, d, workdir=ud.clonedir)
|
||||
runfetchcmd("%s remote add --mirror=fetch mirror %s" % (ud.basecmd, tmpdir), d, workdir=ud.clonedir)
|
||||
fetch_cmd = "LANG=C %s fetch -f --update-head-ok --progress mirror " % (ud.basecmd)
|
||||
fetch_cmd = "LANG=C %s fetch -f --progress %s " % (ud.basecmd, shlex.quote(tmpdir))
|
||||
runfetchcmd(fetch_cmd, d, workdir=ud.clonedir)
|
||||
repourl = self._get_repo_url(ud)
|
||||
|
||||
needs_clone = False
|
||||
if os.path.exists(ud.clonedir):
|
||||
# The directory may exist, but not be the top level of a bare git
|
||||
# repository in which case it needs to be deleted and re-cloned.
|
||||
try:
|
||||
# Since clones can be bare, use --absolute-git-dir instead of --show-toplevel
|
||||
output = runfetchcmd("LANG=C %s rev-parse --absolute-git-dir" % ud.basecmd, d, workdir=ud.clonedir)
|
||||
toplevel = output.rstrip()
|
||||
|
||||
if not bb.utils.path_is_descendant(toplevel, ud.clonedir):
|
||||
logger.warning("Top level directory '%s' is not a descendant of '%s'. Re-cloning", toplevel, ud.clonedir)
|
||||
needs_clone = True
|
||||
except bb.fetch2.FetchError as e:
|
||||
logger.warning("Unable to get top level for %s (not a git directory?): %s", ud.clonedir, e)
|
||||
needs_clone = True
|
||||
except FileNotFoundError as e:
|
||||
logger.warning("%s", e)
|
||||
needs_clone = True
|
||||
|
||||
if needs_clone:
|
||||
shutil.rmtree(ud.clonedir)
|
||||
else:
|
||||
needs_clone = True
|
||||
|
||||
# If the repo still doesn't exist, fallback to cloning it
|
||||
if needs_clone:
|
||||
if not os.path.exists(ud.clonedir):
|
||||
# We do this since git will use a "-l" option automatically for local urls where possible,
|
||||
# but it doesn't work when git/objects is a symlink, only works when it is a directory.
|
||||
if repourl.startswith("file://"):
|
||||
@@ -482,7 +414,7 @@ class Git(FetchMethod):
|
||||
if missing_rev:
|
||||
raise bb.fetch2.FetchError("Unable to find revision %s even from upstream" % missing_rev)
|
||||
|
||||
if self.lfs_need_update(ud, d):
|
||||
if self._contains_lfs(ud, d, ud.clonedir) and self._need_lfs(ud):
|
||||
# Unpack temporary working copy, use it to run 'git checkout' to force pre-fetching
|
||||
# of all LFS blobs needed at the srcrev.
|
||||
#
|
||||
@@ -505,8 +437,8 @@ class Git(FetchMethod):
|
||||
# Only do this if the unpack resulted in a .git/lfs directory being
|
||||
# created; this only happens if at least one blob needed to be
|
||||
# downloaded.
|
||||
if os.path.exists(os.path.join(ud.destdir, ".git", "lfs")):
|
||||
runfetchcmd("tar -cf - lfs | tar -xf - -C %s" % ud.clonedir, d, workdir="%s/.git" % ud.destdir)
|
||||
if os.path.exists(os.path.join(tmpdir, "git", ".git", "lfs")):
|
||||
runfetchcmd("tar -cf - lfs | tar -xf - -C %s" % ud.clonedir, d, workdir="%s/git/.git" % tmpdir)
|
||||
|
||||
def build_mirror_data(self, ud, d):
|
||||
|
||||
@@ -544,38 +476,25 @@ class Git(FetchMethod):
|
||||
|
||||
logger.info("Creating tarball of git repository")
|
||||
with create_atomic(ud.fullmirror) as tfile:
|
||||
mtime = runfetchcmd("{} log --all -1 --format=%cD".format(ud.basecmd), d,
|
||||
mtime = runfetchcmd("git log --all -1 --format=%cD", d,
|
||||
quiet=True, workdir=ud.clonedir)
|
||||
runfetchcmd("tar -czf %s --owner oe:0 --group oe:0 --mtime \"%s\" ."
|
||||
% (tfile, mtime), d, workdir=ud.clonedir)
|
||||
runfetchcmd("touch %s.done" % ud.fullmirror, d)
|
||||
|
||||
def clone_shallow_local(self, ud, dest, d):
|
||||
"""
|
||||
Shallow fetch from ud.clonedir (${DL_DIR}/git2/<gitrepo> by default):
|
||||
- For BB_GIT_SHALLOW_DEPTH: git fetch --depth <depth> rev
|
||||
- For BB_GIT_SHALLOW_REVS: git fetch --shallow-exclude=<revs> rev
|
||||
"""
|
||||
"""Clone the repo and make it shallow.
|
||||
|
||||
bb.utils.mkdirhier(dest)
|
||||
init_cmd = "%s init -q" % ud.basecmd
|
||||
if ud.bareclone:
|
||||
init_cmd += " --bare"
|
||||
runfetchcmd(init_cmd, d, workdir=dest)
|
||||
runfetchcmd("%s remote add origin %s" % (ud.basecmd, ud.clonedir), d, workdir=dest)
|
||||
|
||||
# Check the histories which should be excluded
|
||||
shallow_exclude = ''
|
||||
for revision in ud.shallow_revs:
|
||||
shallow_exclude += " --shallow-exclude=%s" % revision
|
||||
The upstream url of the new clone isn't set at this time, as it'll be
|
||||
set correctly when unpacked."""
|
||||
runfetchcmd("%s clone %s %s %s" % (ud.basecmd, ud.cloneflags, ud.clonedir, dest), d)
|
||||
|
||||
to_parse, shallow_branches = [], []
|
||||
for name in ud.names:
|
||||
revision = ud.revisions[name]
|
||||
depth = ud.shallow_depths[name]
|
||||
|
||||
# The --depth and --shallow-exclude can't be used together
|
||||
if depth and shallow_exclude:
|
||||
raise bb.fetch2.FetchError("BB_GIT_SHALLOW_REVS is set, but BB_GIT_SHALLOW_DEPTH is not 0.")
|
||||
if depth:
|
||||
to_parse.append('%s~%d^{}' % (revision, depth - 1))
|
||||
|
||||
# For nobranch, we need a ref, otherwise the commits will be
|
||||
# removed, and for non-nobranch, we truncate the branch to our
|
||||
@@ -588,49 +507,36 @@ class Git(FetchMethod):
|
||||
else:
|
||||
ref = "refs/remotes/origin/%s" % branch
|
||||
|
||||
fetch_cmd = "%s fetch origin %s" % (ud.basecmd, revision)
|
||||
if depth:
|
||||
fetch_cmd += " --depth %s" % depth
|
||||
|
||||
if shallow_exclude:
|
||||
fetch_cmd += shallow_exclude
|
||||
|
||||
# Advertise the revision for older git versions such as 2.25.1:
|
||||
# error: Server does not allow request for unadvertised object.
|
||||
# The ud.clonedir is a local temporary dir, will be removed when
|
||||
# fetch is done, so we can do anything on it.
|
||||
adv_cmd = 'git branch -f advertise-%s %s' % (revision, revision)
|
||||
runfetchcmd(adv_cmd, d, workdir=ud.clonedir)
|
||||
|
||||
runfetchcmd(fetch_cmd, d, workdir=dest)
|
||||
shallow_branches.append(ref)
|
||||
runfetchcmd("%s update-ref %s %s" % (ud.basecmd, ref, revision), d, workdir=dest)
|
||||
|
||||
# Map srcrev+depths to revisions
|
||||
parsed_depths = runfetchcmd("%s rev-parse %s" % (ud.basecmd, " ".join(to_parse)), d, workdir=dest)
|
||||
|
||||
# Resolve specified revisions
|
||||
parsed_revs = runfetchcmd("%s rev-parse %s" % (ud.basecmd, " ".join('"%s^{}"' % r for r in ud.shallow_revs)), d, workdir=dest)
|
||||
shallow_revisions = parsed_depths.splitlines() + parsed_revs.splitlines()
|
||||
|
||||
# Apply extra ref wildcards
|
||||
all_refs_remote = runfetchcmd("%s ls-remote origin 'refs/*'" % ud.basecmd, \
|
||||
d, workdir=dest).splitlines()
|
||||
all_refs = []
|
||||
for line in all_refs_remote:
|
||||
all_refs.append(line.split()[-1])
|
||||
extra_refs = []
|
||||
all_refs = runfetchcmd('%s for-each-ref "--format=%%(refname)"' % ud.basecmd,
|
||||
d, workdir=dest).splitlines()
|
||||
for r in ud.shallow_extra_refs:
|
||||
if not ud.bareclone:
|
||||
r = r.replace('refs/heads/', 'refs/remotes/origin/')
|
||||
|
||||
if '*' in r:
|
||||
matches = filter(lambda a: fnmatch.fnmatchcase(a, r), all_refs)
|
||||
extra_refs.extend(matches)
|
||||
shallow_branches.extend(matches)
|
||||
else:
|
||||
extra_refs.append(r)
|
||||
shallow_branches.append(r)
|
||||
|
||||
for ref in extra_refs:
|
||||
ref_fetch = os.path.basename(ref)
|
||||
runfetchcmd("%s fetch origin --depth 1 %s" % (ud.basecmd, ref_fetch), d, workdir=dest)
|
||||
revision = runfetchcmd("%s rev-parse FETCH_HEAD" % ud.basecmd, d, workdir=dest)
|
||||
runfetchcmd("%s update-ref %s %s" % (ud.basecmd, ref, revision), d, workdir=dest)
|
||||
|
||||
# The url is local ud.clonedir, set it to upstream one
|
||||
repourl = self._get_repo_url(ud)
|
||||
runfetchcmd("%s remote set-url origin %s" % (ud.basecmd, shlex.quote(repourl)), d, workdir=dest)
|
||||
# Make the repository shallow
|
||||
shallow_cmd = [self.make_shallow_path, '-s']
|
||||
for b in shallow_branches:
|
||||
shallow_cmd.append('-r')
|
||||
shallow_cmd.append(b)
|
||||
shallow_cmd.extend(shallow_revisions)
|
||||
runfetchcmd(subprocess.list2cmdline(shallow_cmd), d, workdir=dest)
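The newer clone_shallow_local() above builds the shallow history by fetching each revision from the local bare clone with either --depth or --shallow-exclude; the two are mutually exclusive. A standalone sketch of just the command construction (revisions are examples, and runfetchcmd is replaced by returning the string):

def build_shallow_fetch_cmd(revision, depth=0, shallow_exclude_revs=()):
    if depth and shallow_exclude_revs:
        # mirrors the BB_GIT_SHALLOW_REVS / BB_GIT_SHALLOW_DEPTH conflict check
        raise ValueError("--depth and --shallow-exclude cannot be combined")
    cmd = ["git", "fetch", "origin", revision]
    if depth:
        cmd += ["--depth", str(depth)]
    for rev in shallow_exclude_revs:
        cmd.append("--shallow-exclude=%s" % rev)
    return " ".join(cmd)

print(build_shallow_fetch_cmd("v5.1", depth=1))
print(build_shallow_fetch_cmd("v5.1", shallow_exclude_revs=["v4.2"]))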
|
||||
|
||||
def unpack(self, ud, destdir, d):
|
||||
""" unpack the downloaded src to destdir"""
|
||||
@@ -658,8 +564,6 @@ class Git(FetchMethod):
|
||||
destdir = ud.destdir = os.path.join(destdir, destsuffix)
|
||||
if os.path.exists(destdir):
|
||||
bb.utils.prunedir(destdir)
|
||||
if not ud.bareclone:
|
||||
ud.unpack_tracer.unpack("git", destdir)
|
||||
|
||||
need_lfs = self._need_lfs(ud)
|
||||
|
||||
@@ -698,8 +602,6 @@ class Git(FetchMethod):
|
||||
raise bb.fetch2.FetchError("Repository %s has LFS content, install git-lfs on host to download (or set lfs=0 to ignore it)" % (repourl))
|
||||
elif not need_lfs:
|
||||
bb.note("Repository %s has LFS content but it is not being fetched" % (repourl))
|
||||
else:
|
||||
runfetchcmd("%s lfs install --local" % ud.basecmd, d, workdir=destdir)
|
||||
|
||||
if not ud.nocheckout:
|
||||
if subpath:
|
||||
@@ -751,35 +653,6 @@ class Git(FetchMethod):
|
||||
raise bb.fetch2.FetchError("The command '%s' gave output with more then 1 line unexpectedly, output: '%s'" % (cmd, output))
|
||||
return output.split()[0] != "0"
|
||||
|
||||
def _lfs_objects_downloaded(self, ud, d, name, wd):
|
||||
"""
|
||||
Verifies whether the LFS objects for requested revisions have already been downloaded
|
||||
"""
|
||||
# Bail out early if this repository doesn't use LFS
|
||||
if not self._need_lfs(ud) or not self._contains_lfs(ud, d, wd):
|
||||
return True
|
||||
|
||||
# The Git LFS specification specifies ([1]) the LFS folder layout so it should be safe to check for file
|
||||
# existence.
|
||||
# [1] https://github.com/git-lfs/git-lfs/blob/main/docs/spec.md#intercepting-git
|
||||
cmd = "%s lfs ls-files -l %s" \
|
||||
% (ud.basecmd, ud.revisions[name])
|
||||
output = runfetchcmd(cmd, d, quiet=True, workdir=wd).rstrip()
|
||||
# Do not do any further matching if no objects are managed by LFS
|
||||
if not output:
|
||||
return True
|
||||
|
||||
# Match all lines beginning with the hexadecimal OID
|
||||
oid_regex = re.compile("^(([a-fA-F0-9]{2})([a-fA-F0-9]{2})[A-Fa-f0-9]+)")
|
||||
for line in output.split("\n"):
|
||||
oid = re.search(oid_regex, line)
|
||||
if not oid:
|
||||
bb.warn("git lfs ls-files output '%s' did not match expected format." % line)
|
||||
if not os.path.exists(os.path.join(wd, "lfs", "objects", oid.group(2), oid.group(3), oid.group(1))):
|
||||
return False
|
||||
|
||||
return True
|
||||
|
||||
def _need_lfs(self, ud):
|
||||
return ud.parm.get("lfs", "1") == "1"
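_lfs_objects_downloaded() above relies on the git-lfs on-disk layout: an object with OID abcd... is stored under lfs/objects/ab/cd/<full oid> inside the git directory. A small sketch of that path computation (the OID and git directory are examples):

import os

def lfs_object_path(gitdir, oid):
    # lfs/objects/<first two hex chars>/<next two>/<full oid>
    return os.path.join(gitdir, "lfs", "objects", oid[0:2], oid[2:4], oid)

oid = "4d7a2146" + "0" * 56      # made-up 64-character OID
print(lfs_object_path("/srv/downloads/git2/example.git", oid))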
|
||||
|
||||
@@ -788,11 +661,8 @@ class Git(FetchMethod):
|
||||
Check if the repository has 'lfs' (large file) content
|
||||
"""
|
||||
|
||||
if ud.nobranch:
|
||||
# If no branch is specified, use the current git commit
|
||||
refname = self._build_revision(ud, d, ud.names[0])
|
||||
elif wd == ud.clonedir:
|
||||
# The bare clonedir doesn't use the remote names; it has the branch immediately.
|
||||
# The bare clonedir doesn't use the remote names; it has the branch immediately.
|
||||
if wd == ud.clonedir:
|
||||
refname = ud.branches[ud.names[0]]
|
||||
else:
|
||||
refname = "origin/%s" % ud.branches[ud.names[0]]
|
||||
@@ -897,42 +767,38 @@ class Git(FetchMethod):
|
||||
"""
|
||||
pupver = ('', '')
|
||||
|
||||
tagregex = re.compile(d.getVar('UPSTREAM_CHECK_GITTAGREGEX') or r"(?P<pver>([0-9][\.|_]?)+)")
|
||||
try:
|
||||
output = self._lsremote(ud, d, "refs/tags/*")
|
||||
except (bb.fetch2.FetchError, bb.fetch2.NetworkAccess) as e:
|
||||
bb.note("Could not list remote: %s" % str(e))
|
||||
return pupver
|
||||
|
||||
rev_tag_re = re.compile(r"([0-9a-f]{40})\s+refs/tags/(.*)")
|
||||
pver_re = re.compile(d.getVar('UPSTREAM_CHECK_GITTAGREGEX') or r"(?P<pver>([0-9][\.|_]?)+)")
|
||||
nonrel_re = re.compile(r"(alpha|beta|rc|final)+")
|
||||
|
||||
verstring = ""
|
||||
revision = ""
|
||||
for line in output.split("\n"):
|
||||
if not line:
|
||||
break
|
||||
|
||||
m = rev_tag_re.match(line)
|
||||
if not m:
|
||||
continue
|
||||
|
||||
(revision, tag) = m.groups()
|
||||
|
||||
tag_head = line.split("/")[-1]
|
||||
# Ignore non-released branches
|
||||
if nonrel_re.search(tag):
|
||||
m = re.search(r"(alpha|beta|rc|final)+", tag_head)
|
||||
if m:
|
||||
continue
|
||||
|
||||
# search for version in the line
|
||||
m = pver_re.search(tag)
|
||||
if not m:
|
||||
tag = tagregex.search(tag_head)
|
||||
if tag is None:
|
||||
continue
|
||||
|
||||
pver = m.group('pver').replace("_", ".")
|
||||
tag = tag.group('pver')
|
||||
tag = tag.replace("_", ".")
|
||||
|
||||
if verstring and bb.utils.vercmp(("0", pver, ""), ("0", verstring, "")) < 0:
|
||||
if verstring and bb.utils.vercmp(("0", tag, ""), ("0", verstring, "")) < 0:
|
||||
continue
|
||||
|
||||
verstring = pver
|
||||
verstring = tag
|
||||
revision = line.split()[0]
|
||||
pupver = (verstring, revision)
|
||||
|
||||
return pupver
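Both versions of latest_versionstring() scan 'git ls-remote ... refs/tags/*' output, skip alpha/beta/rc/final tags, pull a version out of each remaining tag name and keep the highest one. A standalone sketch with the version comparison simplified to integer tuples instead of bb.utils.vercmp (the sample output is invented):

import re

rev_tag_re = re.compile(r"([0-9a-f]{40})\s+refs/tags/(.*)")
pver_re = re.compile(r"(?P<pver>([0-9][\.|_]?)+)")
nonrel_re = re.compile(r"(alpha|beta|rc|final)+")

def pick_latest_tag(ls_remote_output):
    best, best_key = ("", ""), ()
    for line in ls_remote_output.splitlines():
        m = rev_tag_re.match(line)
        if not m:
            continue
        revision, tag = m.groups()
        if nonrel_re.search(tag):        # ignore pre-release tags
            continue
        vm = pver_re.search(tag)
        if not vm:
            continue
        pver = vm.group("pver").replace("_", ".")
        key = tuple(int(x) for x in pver.split(".") if x)
        if key > best_key:
            best, best_key = (pver, revision), key
    return best

out = "%s\trefs/tags/v1.2.0\n%s\trefs/tags/v1.10.0-rc1\n%s\trefs/tags/v1.3.1\n" % ("1" * 40, "2" * 40, "3" * 40)
print(pick_latest_tag(out))    # ('1.3.1', '333...3')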
|
||||
@@ -952,8 +818,9 @@ class Git(FetchMethod):
|
||||
commits = None
|
||||
else:
|
||||
if not os.path.exists(rev_file) or not os.path.getsize(rev_file):
|
||||
from pipes import quote
|
||||
commits = bb.fetch2.runfetchcmd(
|
||||
"git rev-list %s -- | wc -l" % shlex.quote(rev),
|
||||
"git rev-list %s -- | wc -l" % quote(rev),
|
||||
d, quiet=True).strip().lstrip('0')
|
||||
if commits:
|
||||
open(rev_file, "w").write("%d\n" % int(commits))
|
||||
|
||||
@@ -123,13 +123,6 @@ class GitSM(Git):
|
||||
url += ";name=%s" % module
|
||||
url += ";subpath=%s" % module
|
||||
url += ";nobranch=1"
|
||||
url += ";lfs=%s" % self._need_lfs(ud)
|
||||
# Note that adding "user=" here to give credentials to the
|
||||
# submodule is not supported. Since using SRC_URI to give git://
|
||||
# URL a password is not supported, one has to use one of the
# recommended ways (e.g. ~/.netrc or SSH config), which do specify
|
||||
# the user (See comment in git.py).
|
||||
# So, we will not take patches adding "user=" support here.
|
||||
|
||||
ld = d.createCopy()
|
||||
# Not necessary to set SRC_URI, since we're passing the URI to
|
||||
@@ -147,19 +140,6 @@ class GitSM(Git):
|
||||
|
||||
return submodules != []
|
||||
|
||||
def call_process_submodules(self, ud, d, extra_check, subfunc):
|
||||
# If we're using a shallow mirror tarball it needs to be
|
||||
# unpacked temporarily so that we can examine the .gitmodules file
|
||||
if ud.shallow and os.path.exists(ud.fullshallow) and extra_check:
|
||||
tmpdir = tempfile.mkdtemp(dir=d.getVar("DL_DIR"))
|
||||
try:
|
||||
runfetchcmd("tar -xzf %s" % ud.fullshallow, d, workdir=tmpdir)
|
||||
self.process_submodules(ud, tmpdir, subfunc, d)
|
||||
finally:
|
||||
shutil.rmtree(tmpdir)
|
||||
else:
|
||||
self.process_submodules(ud, ud.clonedir, subfunc, d)
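call_process_submodules() above factors out a pattern the three older call sites repeated: if only a shallow mirror tarball is available, unpack it into a throwaway directory just long enough to look at .gitmodules, then clean up. A standalone sketch of that pattern using tarfile instead of the tar command (the tarball path in the usage comment is hypothetical):

import os
import shutil
import tarfile
import tempfile

def with_unpacked_tarball(tarball, func):
    tmpdir = tempfile.mkdtemp()
    try:
        with tarfile.open(tarball) as tf:
            tf.extractall(tmpdir)
        return func(tmpdir)
    finally:
        shutil.rmtree(tmpdir)

# Usage:
# has_submodules = with_unpacked_tarball(
#     "downloads/git2_example.git.tar.gz",
#     lambda d: os.path.exists(os.path.join(d, ".gitmodules")))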
|
||||
|
||||
def need_update(self, ud, d):
|
||||
if Git.need_update(self, ud, d):
|
||||
return True
|
||||
@@ -177,7 +157,15 @@ class GitSM(Git):
|
||||
logger.error('gitsm: submodule update check failed: %s %s' % (type(e).__name__, str(e)))
|
||||
need_update_result = True
|
||||
|
||||
self.call_process_submodules(ud, d, not os.path.exists(ud.clonedir), need_update_submodule)
|
||||
# If we're using a shallow mirror tarball it needs to be unpacked
|
||||
# temporarily so that we can examine the .gitmodules file
|
||||
if ud.shallow and os.path.exists(ud.fullshallow) and not os.path.exists(ud.clonedir):
|
||||
tmpdir = tempfile.mkdtemp(dir=d.getVar("DL_DIR"))
|
||||
runfetchcmd("tar -xzf %s" % ud.fullshallow, d, workdir=tmpdir)
|
||||
self.process_submodules(ud, tmpdir, need_update_submodule, d)
|
||||
shutil.rmtree(tmpdir)
|
||||
else:
|
||||
self.process_submodules(ud, ud.clonedir, need_update_submodule, d)
|
||||
|
||||
if need_update_list:
|
||||
logger.debug('gitsm: Submodules requiring update: %s' % (' '.join(need_update_list)))
|
||||
@@ -200,7 +188,16 @@ class GitSM(Git):
|
||||
raise
|
||||
|
||||
Git.download(self, ud, d)
|
||||
self.call_process_submodules(ud, d, self.need_update(ud, d), download_submodule)
|
||||
|
||||
# If we're using a shallow mirror tarball it needs to be unpacked
|
||||
# temporarily so that we can examine the .gitmodules file
|
||||
if ud.shallow and os.path.exists(ud.fullshallow) and self.need_update(ud, d):
|
||||
tmpdir = tempfile.mkdtemp(dir=d.getVar("DL_DIR"))
|
||||
runfetchcmd("tar -xzf %s" % ud.fullshallow, d, workdir=tmpdir)
|
||||
self.process_submodules(ud, tmpdir, download_submodule, d)
|
||||
shutil.rmtree(tmpdir)
|
||||
else:
|
||||
self.process_submodules(ud, ud.clonedir, download_submodule, d)
|
||||
|
||||
def unpack(self, ud, destdir, d):
|
||||
def unpack_submodules(ud, url, module, modpath, workdir, d):
|
||||
@@ -214,10 +211,6 @@ class GitSM(Git):
|
||||
|
||||
try:
|
||||
newfetch = Fetch([url], d, cache=False)
|
||||
# modpath is needed by unpack tracer to calculate submodule
|
||||
# checkout dir
|
||||
new_ud = newfetch.ud[url]
|
||||
new_ud.modpath = modpath
|
||||
newfetch.unpack(root=os.path.dirname(os.path.join(repo_conf, 'modules', module)))
|
||||
except Exception as e:
|
||||
logger.error('gitsm: submodule unpack failed: %s %s' % (type(e).__name__, str(e)))
|
||||
@@ -243,12 +236,10 @@ class GitSM(Git):
|
||||
ret = self.process_submodules(ud, ud.destdir, unpack_submodules, d)
|
||||
|
||||
if not ud.bareclone and ret:
|
||||
# All submodules should already be downloaded and configured in the tree. This simply
|
||||
# sets up the configuration and checks out the files. The main project config should
|
||||
# remain unmodified, and no download from the internet should occur. As such, lfs smudge
|
||||
# should also be skipped as these files were already smudged in the fetch stage if lfs
|
||||
# was enabled.
|
||||
runfetchcmd("GIT_LFS_SKIP_SMUDGE=1 %s submodule update --recursive --no-fetch" % (ud.basecmd), d, quiet=True, workdir=ud.destdir)
|
||||
# All submodules should already be downloaded and configured in the tree. This simply sets
|
||||
# up the configuration and checks out the files. The main project config should remain
|
||||
# unmodified, and no download from the internet should occur.
|
||||
runfetchcmd("%s submodule update --recursive --no-fetch" % (ud.basecmd), d, quiet=True, workdir=ud.destdir)
|
||||
|
||||
def implicit_urldata(self, ud, d):
|
||||
import shutil, subprocess, tempfile
|
||||
@@ -259,6 +250,14 @@ class GitSM(Git):
|
||||
newfetch = Fetch([url], d, cache=False)
|
||||
urldata.extend(newfetch.expanded_urldata())
|
||||
|
||||
self.call_process_submodules(ud, d, ud.method.need_update(ud, d), add_submodule)
|
||||
# If we're using a shallow mirror tarball it needs to be unpacked
|
||||
# temporarily so that we can examine the .gitmodules file
|
||||
if ud.shallow and os.path.exists(ud.fullshallow) and ud.method.need_update(ud, d):
|
||||
tmpdir = tempfile.mkdtemp(dir=d.getVar("DL_DIR"))
|
||||
subprocess.check_call("tar -xzf %s" % ud.fullshallow, cwd=tmpdir, shell=True)
|
||||
self.process_submodules(ud, tmpdir, add_submodule, d)
|
||||
shutil.rmtree(tmpdir)
|
||||
else:
|
||||
self.process_submodules(ud, ud.clonedir, add_submodule, d)
|
||||
|
||||
return urldata
|
||||
|
||||
@@ -1,268 +0,0 @@
"""
BitBake 'Fetch' implementation for Go modules

The gomod/gomodgit fetchers are used to download Go modules to the module cache
from a module proxy or directly from a version control repository.

Example SRC_URI:

SRC_URI += "gomod://golang.org/x/net;version=v0.9.0;sha256sum=..."
SRC_URI += "gomodgit://golang.org/x/net;version=v0.9.0;repo=go.googlesource.com/net;srcrev=..."

Required SRC_URI parameters:

- version
    The version of the module.

Optional SRC_URI parameters:

- mod
    Fetch and unpack the go.mod file only instead of the complete module.
    The go command may need to download go.mod files for many different modules
    when computing the build list, and go.mod files are much smaller than
    module zip files.
    The default is "0", set mod=1 for the go.mod file only.

- sha256sum
    The checksum of the module zip file, or the go.mod file in case of fetching
    only the go.mod file. Alternatively, set the SRC_URI variable flag for
    "module@version.sha256sum".

- protocol
    The method used when fetching directly from a version control repository.
    The default is "https" for git.

- repo
    The URL when fetching directly from a version control repository. Required
    when the URL is different from the module path.

- srcrev
    The revision identifier used when fetching directly from a version control
    repository. Alternatively, set the SRCREV variable for "module@version".

- subdir
    The module subdirectory when fetching directly from a version control
    repository. Required when the module is not located in the root of the
    repository.

Related variables:

- GO_MOD_PROXY
    The module proxy used by the fetcher.

- GO_MOD_CACHE_DIR
    The directory where the module cache is located.
    This must match the exported GOMODCACHE variable for the go command to find
    the downloaded modules.

See the Go modules reference, https://go.dev/ref/mod, for more information
about the module cache, module proxies and version control systems.
"""

import hashlib
import os
import re
import shutil
import subprocess
import zipfile

import bb
from bb.fetch2 import FetchError
from bb.fetch2 import MissingParameterError
from bb.fetch2 import runfetchcmd
from bb.fetch2 import subprocess_setup
from bb.fetch2.git import Git
from bb.fetch2.wget import Wget


def escape(path):
    """Escape capital letters using exclamation points."""
    return re.sub(r'([A-Z])', lambda m: '!' + m.group(1).lower(), path)

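A minimal illustrative sketch (not part of the diff) of what escape() above computes; the import path and module name are assumptions for the example only:

    from bb.fetch2.gomod import escape  # assumed import location for this sketch

    # Capital letters are encoded as '!' plus the lowercase letter, so escaped
    # cache paths avoid case-only collisions on case-insensitive filesystems.
    print(escape('github.com/Azure/azure-sdk'))  # -> 'github.com/!azure/azure-sdk'
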
class GoMod(Wget):
|
||||
"""Class to fetch Go modules from a Go module proxy via wget"""
|
||||
|
||||
def supports(self, ud, d):
|
||||
"""Check to see if a given URL is for this fetcher."""
|
||||
return ud.type == 'gomod'
|
||||
|
||||
def urldata_init(self, ud, d):
|
||||
"""Set up to download the module from the module proxy.
|
||||
|
||||
Set up to download the module zip file to the module cache directory
|
||||
and unpack the go.mod file (unless downloading only the go.mod file):
|
||||
|
||||
cache/download/<module>/@v/<version>.zip: The module zip file.
|
||||
cache/download/<module>/@v/<version>.mod: The go.mod file.
|
||||
"""
|
||||
|
||||
proxy = d.getVar('GO_MOD_PROXY') or 'proxy.golang.org'
|
||||
moddir = d.getVar('GO_MOD_CACHE_DIR') or 'pkg/mod'
|
||||
|
||||
if 'version' not in ud.parm:
|
||||
raise MissingParameterError('version', ud.url)
|
||||
|
||||
module = ud.host
|
||||
if ud.path != '/':
|
||||
module += ud.path
|
||||
ud.parm['module'] = module
|
||||
|
||||
# Set URL and filename for wget download
|
||||
path = escape(module + '/@v/' + ud.parm['version'])
|
||||
if ud.parm.get('mod', '0') == '1':
|
||||
path += '.mod'
|
||||
else:
|
||||
path += '.zip'
|
||||
ud.parm['unpack'] = '0'
|
||||
ud.url = bb.fetch2.encodeurl(
|
||||
('https', proxy, '/' + path, None, None, None))
|
||||
ud.parm['downloadfilename'] = path
|
||||
|
||||
# Set name parameter if sha256sum is set in recipe
|
||||
name = f"{module}@{ud.parm['version']}"
|
||||
if d.getVarFlag('SRC_URI', name + '.sha256sum'):
|
||||
ud.parm['name'] = name
|
||||
|
||||
# Set subdir for unpack
|
||||
ud.parm['subdir'] = os.path.join(moddir, 'cache/download',
|
||||
os.path.dirname(path))
|
||||
|
||||
super().urldata_init(ud, d)
|
||||
|
||||
def unpack(self, ud, rootdir, d):
|
||||
"""Unpack the module in the module cache."""
|
||||
|
||||
# Unpack the module zip file or go.mod file
|
||||
super().unpack(ud, rootdir, d)
|
||||
|
||||
if ud.localpath.endswith('.zip'):
|
||||
# Unpack the go.mod file from the zip file
|
||||
module = ud.parm['module']
|
||||
unpackdir = os.path.join(rootdir, ud.parm['subdir'])
|
||||
name = os.path.basename(ud.localpath).rsplit('.', 1)[0] + '.mod'
|
||||
bb.note(f"Unpacking {name} to {unpackdir}/")
|
||||
with zipfile.ZipFile(ud.localpath) as zf:
|
||||
with open(os.path.join(unpackdir, name), mode='wb') as mf:
|
||||
try:
|
||||
f = module + '@' + ud.parm['version'] + '/go.mod'
|
||||
shutil.copyfileobj(zf.open(f), mf)
|
||||
except KeyError:
|
||||
# If the module does not have a go.mod file, synthesize
|
||||
# one containing only a module statement.
|
||||
mf.write(f'module {module}\n'.encode())
|
||||
|
||||
|
||||
class GoModGit(Git):
|
||||
"""Class to fetch Go modules directly from a git repository"""
|
||||
|
||||
def supports(self, ud, d):
|
||||
"""Check to see if a given URL is for this fetcher."""
|
||||
return ud.type == 'gomodgit'
|
||||
|
||||
def urldata_init(self, ud, d):
|
||||
"""Set up to download the module from the git repository.
|
||||
|
||||
Set up to download the git repository to the module cache directory and
|
||||
unpack the module zip file and the go.mod file:
|
||||
|
||||
cache/vcs/<hash>: The bare git repository.
|
||||
cache/download/<module>/@v/<version>.zip: The module zip file.
|
||||
cache/download/<module>/@v/<version>.mod: The go.mod file.
|
||||
"""
|
||||
|
||||
moddir = d.getVar('GO_MOD_CACHE_DIR') or 'pkg/mod'
|
||||
|
||||
if 'version' not in ud.parm:
|
||||
raise MissingParameterError('version', ud.url)
|
||||
|
||||
module = ud.host
|
||||
if ud.path != '/':
|
||||
module += ud.path
|
||||
ud.parm['module'] = module
|
||||
|
||||
# Set host, path and srcrev for git download
|
||||
if 'repo' in ud.parm:
|
||||
repo = ud.parm['repo']
|
||||
idx = repo.find('/')
|
||||
if idx != -1:
|
||||
ud.host = repo[:idx]
|
||||
ud.path = repo[idx:]
|
||||
else:
|
||||
ud.host = repo
|
||||
ud.path = ''
|
||||
if 'protocol' not in ud.parm:
|
||||
ud.parm['protocol'] = 'https'
|
||||
name = f"{module}@{ud.parm['version']}"
|
||||
ud.names = [name]
|
||||
srcrev = d.getVar('SRCREV_' + name)
|
||||
if srcrev:
|
||||
if 'srcrev' not in ud.parm:
|
||||
ud.parm['srcrev'] = srcrev
|
||||
else:
|
||||
if 'srcrev' in ud.parm:
|
||||
d.setVar('SRCREV_' + name, ud.parm['srcrev'])
|
||||
if 'branch' not in ud.parm:
|
||||
ud.parm['nobranch'] = '1'
|
||||
|
||||
# Set subpath, subdir and bareclone for git unpack
|
||||
if 'subdir' in ud.parm:
|
||||
ud.parm['subpath'] = ud.parm['subdir']
|
||||
key = f"git3:{ud.parm['protocol']}://{ud.host}{ud.path}".encode()
|
||||
ud.parm['key'] = key
|
||||
ud.parm['subdir'] = os.path.join(moddir, 'cache/vcs',
|
||||
hashlib.sha256(key).hexdigest())
|
||||
ud.parm['bareclone'] = '1'
|
||||
|
||||
super().urldata_init(ud, d)
|
||||
|
||||
def unpack(self, ud, rootdir, d):
|
||||
"""Unpack the module in the module cache."""
|
||||
|
||||
# Unpack the bare git repository
|
||||
super().unpack(ud, rootdir, d)
|
||||
|
||||
moddir = d.getVar('GO_MOD_CACHE_DIR') or 'pkg/mod'
|
||||
|
||||
# Create the info file
|
||||
module = ud.parm['module']
|
||||
repodir = os.path.join(rootdir, ud.parm['subdir'])
|
||||
with open(repodir + '.info', 'wb') as f:
|
||||
f.write(ud.parm['key'])
|
||||
|
||||
# Unpack the go.mod file from the repository
|
||||
unpackdir = os.path.join(rootdir, moddir, 'cache/download',
|
||||
escape(module), '@v')
|
||||
bb.utils.mkdirhier(unpackdir)
|
||||
srcrev = ud.parm['srcrev']
|
||||
version = ud.parm['version']
|
||||
escaped_version = escape(version)
|
||||
cmd = f"git ls-tree -r --name-only '{srcrev}'"
|
||||
if 'subpath' in ud.parm:
|
||||
cmd += f" '{ud.parm['subpath']}'"
|
||||
files = runfetchcmd(cmd, d, workdir=repodir).split()
|
||||
name = escaped_version + '.mod'
|
||||
bb.note(f"Unpacking {name} to {unpackdir}/")
|
||||
with open(os.path.join(unpackdir, name), mode='wb') as mf:
|
||||
f = 'go.mod'
|
||||
if 'subpath' in ud.parm:
|
||||
f = os.path.join(ud.parm['subpath'], f)
|
||||
if f in files:
|
||||
cmd = ['git', 'cat-file', 'blob', srcrev + ':' + f]
|
||||
subprocess.check_call(cmd, stdout=mf, cwd=repodir,
|
||||
preexec_fn=subprocess_setup)
|
||||
else:
|
||||
# If the module does not have a go.mod file, synthesize one
|
||||
# containing only a module statement.
|
||||
mf.write(f'module {module}\n'.encode())
|
||||
|
||||
# Synthesize the module zip file from the repository
|
||||
name = escaped_version + '.zip'
|
||||
bb.note(f"Unpacking {name} to {unpackdir}/")
|
||||
with zipfile.ZipFile(os.path.join(unpackdir, name), mode='w') as zf:
|
||||
prefix = module + '@' + version + '/'
|
||||
for f in files:
|
||||
cmd = ['git', 'cat-file', 'blob', srcrev + ':' + f]
|
||||
data = subprocess.check_output(cmd, cwd=repodir,
|
||||
preexec_fn=subprocess_setup)
|
||||
zf.writestr(prefix + f, data)
|
||||
@@ -242,7 +242,6 @@ class Hg(FetchMethod):
|
||||
revflag = "-r %s" % ud.revision
|
||||
subdir = ud.parm.get("destsuffix", ud.module)
|
||||
codir = "%s/%s" % (destdir, subdir)
|
||||
ud.unpack_tracer.unpack("hg", codir)
|
||||
|
||||
scmdata = ud.parm.get("scmdata", "")
|
||||
if scmdata != "nokeep":
|
||||
|
||||
@@ -41,9 +41,9 @@ class Local(FetchMethod):
|
||||
"""
|
||||
Return the local filename of a given url assuming a successful fetch.
|
||||
"""
|
||||
return self.localfile_searchpaths(urldata, d)[-1]
|
||||
return self.localpaths(urldata, d)[-1]
|
||||
|
||||
def localfile_searchpaths(self, urldata, d):
|
||||
def localpaths(self, urldata, d):
|
||||
"""
|
||||
Return the local filename of a given url assuming a successful fetch.
|
||||
"""
|
||||
@@ -51,13 +51,11 @@ class Local(FetchMethod):
|
||||
path = urldata.decodedurl
|
||||
newpath = path
|
||||
if path[0] == "/":
|
||||
logger.debug2("Using absolute %s" % (path))
|
||||
return [path]
|
||||
filespath = d.getVar('FILESPATH')
|
||||
if filespath:
|
||||
logger.debug2("Searching for %s in paths:\n %s" % (path, "\n ".join(filespath.split(":"))))
|
||||
newpath, hist = bb.utils.which(filespath, path, history=True)
|
||||
logger.debug2("Using %s for %s" % (newpath, path))
|
||||
searched.extend(hist)
|
||||
return searched
|
||||
|
||||
|
||||
@@ -42,15 +42,11 @@ from bb.utils import is_semver
|
||||
|
||||
def npm_package(package):
    """Convert the npm package name to remove unsupported characters"""
    # For scoped package names ('@user/package') the '/' is replaced by a '-'.
    # This is similar to what 'npm pack' does, but 'npm pack' also strips the
    # leading '@', which can lead to ambiguous package names.
    name = re.sub("/", "-", package)
    name = name.lower()
    name = re.sub(r"[^\-a-z0-9@]", "", name)
    name = name.strip("-")
    return name

    # Scoped package names (with the @) use the same naming convention
    # as the 'npm pack' command.
    if package.startswith("@"):
        return re.sub("/", "-", package[1:])
    return package

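To make the two npm_package() variants shown above concrete, a small self-contained sketch (illustrative only, not part of the diff) comparing their output for a hypothetical scoped package name:

    import re

    def npm_package_a(package):
        # mirrors the first variant in the hunk above
        name = re.sub("/", "-", package)
        name = name.lower()
        name = re.sub(r"[^\-a-z0-9@]", "", name)
        return name.strip("-")

    def npm_package_b(package):
        # mirrors the second variant in the hunk above
        if package.startswith("@"):
            return re.sub("/", "-", package[1:])
        return package

    print(npm_package_a("@User/Foo.js"))  # '@user-foojs'
    print(npm_package_b("@User/Foo.js"))  # 'User-Foo.js'
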
def npm_filename(package, version):
|
||||
"""Get the filename of a npm package"""
|
||||
@@ -107,7 +103,6 @@ class NpmEnvironment(object):
|
||||
"""Run npm command in a controlled environment"""
|
||||
with tempfile.TemporaryDirectory() as tmpdir:
|
||||
d = bb.data.createCopy(self.d)
|
||||
d.setVar("PATH", d.getVar("PATH")) # PATH might contain $HOME - evaluate it before patching
|
||||
d.setVar("HOME", tmpdir)
|
||||
|
||||
if not workdir:
|
||||
@@ -299,7 +294,6 @@ class Npm(FetchMethod):
|
||||
destsuffix = ud.parm.get("destsuffix", "npm")
|
||||
destdir = os.path.join(rootdir, destsuffix)
|
||||
npm_unpack(ud.localpath, destdir, d)
|
||||
ud.unpack_tracer.unpack("npm", destdir)
|
||||
|
||||
def clean(self, ud, d):
|
||||
"""Clean any existing full or partial download"""
|
||||
|
||||
@@ -41,9 +41,8 @@ def foreach_dependencies(shrinkwrap, callback=None, dev=False):
    with:
        name = the package name (string)
        params = the package parameters (dictionary)
        destdir = the destination of the package (string)
        deptree = the package dependency tree (array of strings)
    """
    # For handling old style dependency entries in shrinkwrap files
def _walk_deps(deps, deptree):
|
||||
for name in deps:
|
||||
subtree = [*deptree, name]
|
||||
@@ -53,22 +52,9 @@ def foreach_dependencies(shrinkwrap, callback=None, dev=False):
|
||||
continue
|
||||
elif deps[name].get("bundled", False):
|
||||
continue
|
||||
destsubdirs = [os.path.join("node_modules", dep) for dep in subtree]
|
||||
destsuffix = os.path.join(*destsubdirs)
|
||||
callback(name, deps[name], destsuffix)
|
||||
callback(name, deps[name], subtree)
|
||||
|
||||
# packages entry means new style shrinkwrap file, else use dependencies
|
||||
packages = shrinkwrap.get("packages", None)
|
||||
if packages is not None:
|
||||
for package in packages:
|
||||
if package != "":
|
||||
name = package.split('node_modules/')[-1]
|
||||
package_infos = packages.get(package, {})
|
||||
if dev == False and package_infos.get("dev", False):
|
||||
continue
|
||||
callback(name, package_infos, package)
|
||||
else:
|
||||
_walk_deps(shrinkwrap.get("dependencies", {}), [])
|
||||
_walk_deps(shrinkwrap.get("dependencies", {}), [])
|
||||
|
||||
class NpmShrinkWrap(FetchMethod):
|
||||
"""Class to fetch all package from a shrinkwrap file"""
|
||||
@@ -89,15 +75,17 @@ class NpmShrinkWrap(FetchMethod):
|
||||
# Resolve the dependencies
|
||||
ud.deps = []
|
||||
|
||||
def _resolve_dependency(name, params, destsuffix):
|
||||
def _resolve_dependency(name, params, deptree):
|
||||
url = None
|
||||
localpath = None
|
||||
extrapaths = []
|
||||
destsubdirs = [os.path.join("node_modules", dep) for dep in deptree]
|
||||
destsuffix = os.path.join(*destsubdirs)
|
||||
unpack = True
|
||||
|
||||
integrity = params.get("integrity", None)
|
||||
resolved = params.get("resolved", None)
|
||||
version = params.get("version", resolved)
|
||||
version = params.get("version", None)
|
||||
|
||||
# Handle registry sources
|
||||
if is_semver(version) and integrity:
|
||||
@@ -184,7 +172,6 @@ class NpmShrinkWrap(FetchMethod):
|
||||
uri = URI("git://" + str(groups["url"]))
|
||||
uri.params["protocol"] = str(groups["protocol"])
|
||||
uri.params["rev"] = str(groups["rev"])
|
||||
uri.params["nobranch"] = "1"
|
||||
uri.params["destsuffix"] = destsuffix
|
||||
|
||||
url = str(uri)
|
||||
@@ -192,9 +179,7 @@ class NpmShrinkWrap(FetchMethod):
|
||||
else:
|
||||
raise ParameterError("Unsupported dependency: %s" % name, ud.url)
|
||||
|
||||
# name is needed by unpack tracer for module mapping
|
||||
ud.deps.append({
|
||||
"name": name,
|
||||
"url": url,
|
||||
"localpath": localpath,
|
||||
"extrapaths": extrapaths,
|
||||
@@ -228,15 +213,13 @@ class NpmShrinkWrap(FetchMethod):
|
||||
@staticmethod
|
||||
def _foreach_proxy_method(ud, handle):
|
||||
returns = []
|
||||
# Check if there are dependencies before trying to fetch them
if len(ud.deps) > 0:
|
||||
for proxy_url in ud.proxy.urls:
|
||||
proxy_ud = ud.proxy.ud[proxy_url]
|
||||
proxy_d = ud.proxy.d
|
||||
proxy_ud.setup_localpath(proxy_d)
|
||||
lf = lockfile(proxy_ud.lockfile)
|
||||
returns.append(handle(proxy_ud.method, proxy_ud, proxy_d))
|
||||
unlockfile(lf)
|
||||
for proxy_url in ud.proxy.urls:
|
||||
proxy_ud = ud.proxy.ud[proxy_url]
|
||||
proxy_d = ud.proxy.d
|
||||
proxy_ud.setup_localpath(proxy_d)
|
||||
lf = lockfile(proxy_ud.lockfile)
|
||||
returns.append(handle(proxy_ud.method, proxy_ud, proxy_d))
|
||||
unlockfile(lf)
|
||||
return returns
|
||||
|
||||
def verify_donestamp(self, ud, d):
|
||||
@@ -269,11 +252,10 @@ class NpmShrinkWrap(FetchMethod):
|
||||
|
||||
def unpack(self, ud, rootdir, d):
|
||||
"""Unpack the downloaded dependencies"""
|
||||
destdir = rootdir
|
||||
destdir = d.getVar("S")
|
||||
destsuffix = ud.parm.get("destsuffix")
|
||||
if destsuffix:
|
||||
destdir = os.path.join(rootdir, destsuffix)
|
||||
ud.unpack_tracer.unpack("npm-shrinkwrap", destdir)
|
||||
|
||||
bb.utils.mkdirhier(destdir)
|
||||
bb.utils.copyfile(ud.shrinkwrap_file,
|
||||
|
||||
@@ -210,6 +210,3 @@ class Svn(FetchMethod):
|
||||
|
||||
def _build_revision(self, ud, d):
|
||||
return ud.revision
|
||||
|
||||
def supports_checksum(self, urldata):
|
||||
return False
|
||||
|
||||
@@ -87,10 +87,7 @@ class Wget(FetchMethod):
|
||||
if not ud.localfile:
|
||||
ud.localfile = d.expand(urllib.parse.unquote(ud.host + ud.path).replace("/", "."))
|
||||
|
||||
self.basecmd = d.getVar("FETCHCMD_wget") or "/usr/bin/env wget -t 2 -T 100"
|
||||
|
||||
if ud.type == 'ftp' or ud.type == 'ftps':
|
||||
self.basecmd += " --passive-ftp"
|
||||
self.basecmd = d.getVar("FETCHCMD_wget") or "/usr/bin/env wget -t 2 -T 30 --passive-ftp"
|
||||
|
||||
if not self.check_certs(d):
|
||||
self.basecmd += " --no-check-certificate"
|
||||
@@ -108,8 +105,7 @@ class Wget(FetchMethod):
|
||||
|
||||
fetchcmd = self.basecmd
|
||||
|
||||
dldir = os.path.realpath(d.getVar("DL_DIR"))
|
||||
localpath = os.path.join(dldir, ud.localfile) + ".tmp"
|
||||
localpath = os.path.join(d.getVar("DL_DIR"), ud.localfile) + ".tmp"
|
||||
bb.utils.mkdirhier(os.path.dirname(localpath))
|
||||
fetchcmd += " -O %s" % shlex.quote(localpath)
|
||||
|
||||
@@ -129,21 +125,12 @@ class Wget(FetchMethod):
|
||||
uri = ud.url.split(";")[0]
|
||||
if os.path.exists(ud.localpath):
|
||||
# file exists, but we didn't complete it... trying again
fetchcmd += " -c -P " + dldir + " '" + uri + "'"
|
||||
fetchcmd += d.expand(" -c -P ${DL_DIR} '%s'" % uri)
|
||||
else:
|
||||
fetchcmd += " -P " + dldir + " '" + uri + "'"
|
||||
fetchcmd += d.expand(" -P ${DL_DIR} '%s'" % uri)
|
||||
|
||||
self._runwget(ud, d, fetchcmd, False)
|
||||
|
||||
# Sanity check since wget can pretend it succeeded when it didn't
# Also, this used to happen if sourceforge sent us to the mirror page
if not os.path.exists(localpath):
|
||||
raise FetchError("The fetch command returned success for url %s but %s doesn't exist?!" % (uri, localpath), uri)
|
||||
|
||||
if os.path.getsize(localpath) == 0:
|
||||
os.remove(localpath)
|
||||
raise FetchError("The fetch of %s resulted in a zero size file?! Deleting and failing since this isn't right." % (uri), uri)
|
||||
|
||||
# Try and verify any checksum now, meaning if it isn't correct, we don't remove the
|
||||
# original file, which might be a race (imagine two recipes referencing the same
|
||||
# source, one with an incorrect checksum)
|
||||
@@ -153,6 +140,15 @@ class Wget(FetchMethod):
|
||||
# Our lock prevents multiple writers but mirroring code may grab incomplete files
|
||||
os.rename(localpath, localpath[:-4])
|
||||
|
||||
# Sanity check since wget can pretend it succeeded when it didn't
# Also, this used to happen if sourceforge sent us to the mirror page
if not os.path.exists(ud.localpath):
|
||||
raise FetchError("The fetch command returned success for url %s but %s doesn't exist?!" % (uri, ud.localpath), uri)
|
||||
|
||||
if os.path.getsize(ud.localpath) == 0:
|
||||
os.remove(ud.localpath)
|
||||
raise FetchError("The fetch of %s resulted in a zero size file?! Deleting and failing since this isn't right." % (uri), uri)
|
||||
|
||||
return True
|
||||
|
||||
def checkstatus(self, fetch, ud, d, try_again=True):
|
||||
@@ -244,12 +240,7 @@ class Wget(FetchMethod):
|
||||
fetch.connection_cache.remove_connection(h.host, h.port)
|
||||
raise urllib.error.URLError(err)
|
||||
else:
|
||||
try:
|
||||
r = h.getresponse()
|
||||
except TimeoutError as e:
|
||||
if fetch.connection_cache:
|
||||
fetch.connection_cache.remove_connection(h.host, h.port)
|
||||
raise TimeoutError(e)
|
||||
r = h.getresponse()
|
||||
|
||||
# Pick apart the HTTPResponse object to get the addinfourl
|
||||
# object initialized properly.
|
||||
@@ -376,7 +367,7 @@ class Wget(FetchMethod):
|
||||
except (FileNotFoundError, netrc.NetrcParseError):
|
||||
pass
|
||||
|
||||
with opener.open(r, timeout=100) as response:
|
||||
with opener.open(r, timeout=30) as response:
|
||||
pass
|
||||
except (urllib.error.URLError, ConnectionResetError, TimeoutError) as e:
|
||||
if try_again:
|
||||
@@ -384,7 +375,7 @@ class Wget(FetchMethod):
|
||||
return self.checkstatus(fetch, ud, d, False)
|
||||
else:
|
||||
# debug for now to avoid spamming the logs in e.g. remote sstate searches
|
||||
logger.debug2("checkstatus() urlopen failed for %s: %s" % (uri,e))
|
||||
logger.debug2("checkstatus() urlopen failed: %s" % e)
|
||||
return False
|
||||
|
||||
return True
|
||||
|
||||
@@ -217,9 +217,7 @@ def create_bitbake_parser():
|
||||
"execution. The SIGNATURE_HANDLER parameter is passed to the "
|
||||
"handler. Two common values are none and printdiff but the handler "
|
||||
"may define more/less. none means only dump the signature, printdiff"
|
||||
" means recursively compare the dumped signature with the most recent"
|
||||
" one in a local build or sstate cache (can be used to find out why tasks re-run"
|
||||
" when that is not expected)")
|
||||
" means compare the dumped signature with the cached one.")
|
||||
|
||||
exec_group.add_argument("--revisions-changed", action="store_true",
|
||||
help="Set the exit code depending on whether upstream floating "
|
||||
|
||||
@@ -234,10 +234,9 @@ class diskMonitor:
|
||||
freeInode = st.f_favail
|
||||
|
||||
if minInode and freeInode < minInode:
|
||||
# Some filesystems use dynamic inodes so can't run out.
|
||||
# This is reported by the inode count being 0 (btrfs) or the free
|
||||
# inode count being -1 (cephfs).
|
||||
if st.f_files == 0 or st.f_favail == -1:
|
||||
# Some filesystems use dynamic inodes so can't run out
|
||||
# (e.g. btrfs). This is reported by the inode count being 0.
|
||||
if st.f_files == 0:
|
||||
self.devDict[k][2] = None
|
||||
continue
|
||||
# Always show warning, the self.checked would always be False if the action is WARN
|
||||
|
||||
@@ -89,6 +89,10 @@ class BBLogFormatter(logging.Formatter):
|
||||
msg = logging.Formatter.format(self, record)
|
||||
if hasattr(record, 'bb_exc_formatted'):
|
||||
msg += '\n' + ''.join(record.bb_exc_formatted)
|
||||
elif hasattr(record, 'bb_exc_info'):
|
||||
etype, value, tb = record.bb_exc_info
|
||||
formatted = bb.exceptions.format_exception(etype, value, tb, limit=5)
|
||||
msg += '\n' + ''.join(formatted)
|
||||
return msg
|
||||
|
||||
def colorize(self, record):
|
||||
@@ -226,7 +230,7 @@ def logger_create(name, output=sys.stderr, level=logging.INFO, preserve_handlers
|
||||
console = logging.StreamHandler(output)
|
||||
console.addFilter(bb.msg.LogFilterShowOnce())
|
||||
format = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
|
||||
if color == 'always' or (color == 'auto' and output.isatty() and os.environ.get('NO_COLOR', '') == ''):
|
||||
if color == 'always' or (color == 'auto' and output.isatty()):
|
||||
format.enable_color()
|
||||
console.setFormatter(format)
|
||||
if preserve_handlers:
|
||||
|
||||
@@ -49,32 +49,20 @@ class SkipPackage(SkipRecipe):
|
||||
__mtime_cache = {}
|
||||
def cached_mtime(f):
|
||||
if f not in __mtime_cache:
|
||||
res = os.stat(f)
|
||||
__mtime_cache[f] = (res.st_mtime_ns, res.st_size, res.st_ino)
|
||||
__mtime_cache[f] = os.stat(f)[stat.ST_MTIME]
|
||||
return __mtime_cache[f]
|
||||
|
||||
def cached_mtime_noerror(f):
|
||||
if f not in __mtime_cache:
|
||||
try:
|
||||
res = os.stat(f)
|
||||
__mtime_cache[f] = (res.st_mtime_ns, res.st_size, res.st_ino)
|
||||
__mtime_cache[f] = os.stat(f)[stat.ST_MTIME]
|
||||
except OSError:
|
||||
return 0
|
||||
return __mtime_cache[f]
|
||||
|
||||
def check_mtime(f, mtime):
|
||||
try:
|
||||
res = os.stat(f)
|
||||
current_mtime = (res.st_mtime_ns, res.st_size, res.st_ino)
|
||||
__mtime_cache[f] = current_mtime
|
||||
except OSError:
|
||||
current_mtime = 0
|
||||
return current_mtime == mtime
|
||||
|
||||
def update_mtime(f):
|
||||
try:
|
||||
res = os.stat(f)
|
||||
__mtime_cache[f] = (res.st_mtime_ns, res.st_size, res.st_ino)
|
||||
__mtime_cache[f] = os.stat(f)[stat.ST_MTIME]
|
||||
except OSError:
|
||||
if f in __mtime_cache:
|
||||
del __mtime_cache[f]
|
||||
|
||||
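Background for the cached_mtime hunk above: one of the two variants caches a (st_mtime_ns, st_size, st_ino) triple rather than the bare ST_MTIME value, so a file rewritten within the same second, or replaced by a different inode, is still detected as changed. A minimal sketch of that idea (illustrative only):

    import os

    def file_fingerprint(path):
        # same shape as the cached value in the tuple-based variant above
        st = os.stat(path)
        return (st.st_mtime_ns, st.st_size, st.st_ino)

    def has_changed(path, cached_fingerprint):
        try:
            return file_fingerprint(path) != cached_fingerprint
        except OSError:
            return True
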
@@ -211,12 +211,10 @@ class ExportFuncsNode(AstNode):
|
||||
|
||||
def eval(self, data):
|
||||
|
||||
sentinel = " # Export function set\n"
|
||||
for func in self.n:
|
||||
calledfunc = self.classname + "_" + func
|
||||
|
||||
basevar = data.getVar(func, False)
|
||||
if basevar and sentinel not in basevar:
|
||||
if data.getVar(func, False) and not data.getVarFlag(func, 'export_func', False):
|
||||
continue
|
||||
|
||||
if data.getVar(func, False):
|
||||
@@ -233,23 +231,22 @@ class ExportFuncsNode(AstNode):
|
||||
data.setVarFlag(func, "lineno", 1)
|
||||
|
||||
if data.getVarFlag(calledfunc, "python", False):
|
||||
data.setVar(func, sentinel + " bb.build.exec_func('" + calledfunc + "', d)\n", parsing=True)
|
||||
data.setVar(func, " bb.build.exec_func('" + calledfunc + "', d)\n", parsing=True)
|
||||
else:
|
||||
if "-" in self.classname:
|
||||
bb.fatal("The classname %s contains a dash character and is calling an sh function %s using EXPORT_FUNCTIONS. Since a dash is illegal in sh function names, this cannot work, please rename the class or don't use EXPORT_FUNCTIONS." % (self.classname, calledfunc))
|
||||
data.setVar(func, sentinel + " " + calledfunc + "\n", parsing=True)
|
||||
data.setVar(func, " " + calledfunc + "\n", parsing=True)
|
||||
data.setVarFlag(func, 'export_func', '1')
|
||||
|
||||
class AddTaskNode(AstNode):
|
||||
def __init__(self, filename, lineno, tasks, before, after):
|
||||
def __init__(self, filename, lineno, func, before, after):
|
||||
AstNode.__init__(self, filename, lineno)
|
||||
self.tasks = tasks
|
||||
self.func = func
|
||||
self.before = before
|
||||
self.after = after
|
||||
|
||||
def eval(self, data):
|
||||
tasks = self.tasks.split()
|
||||
for task in tasks:
|
||||
bb.build.addtask(task, self.before, self.after, data)
|
||||
bb.build.addtask(self.func, self.before, self.after, data)
|
||||
|
||||
class DelTaskNode(AstNode):
|
||||
def __init__(self, filename, lineno, tasks):
|
||||
@@ -316,16 +313,6 @@ class InheritNode(AstNode):
|
||||
def eval(self, data):
|
||||
bb.parse.BBHandler.inherit(self.classes, self.filename, self.lineno, data)
|
||||
|
||||
class InheritDeferredNode(AstNode):
|
||||
def __init__(self, filename, lineno, classes):
|
||||
AstNode.__init__(self, filename, lineno)
|
||||
self.inherit = (classes, filename, lineno)
|
||||
|
||||
def eval(self, data):
|
||||
inherits = data.getVar('__BBDEFINHERITS', False) or []
|
||||
inherits.append(self.inherit)
|
||||
data.setVar('__BBDEFINHERITS', inherits)
|
||||
|
||||
def handleInclude(statements, filename, lineno, m, force):
|
||||
statements.append(IncludeNode(filename, lineno, m.group(1), force))
|
||||
|
||||
@@ -350,11 +337,21 @@ def handlePythonMethod(statements, filename, lineno, funcname, modulename, body)
|
||||
def handleExportFuncs(statements, filename, lineno, m, classname):
|
||||
statements.append(ExportFuncsNode(filename, lineno, m.group(1), classname))
|
||||
|
||||
def handleAddTask(statements, filename, lineno, tasks, before, after):
|
||||
statements.append(AddTaskNode(filename, lineno, tasks, before, after))
|
||||
def handleAddTask(statements, filename, lineno, m):
|
||||
func = m.group("func")
|
||||
before = m.group("before")
|
||||
after = m.group("after")
|
||||
if func is None:
|
||||
return
|
||||
|
||||
def handleDelTask(statements, filename, lineno, tasks):
|
||||
statements.append(DelTaskNode(filename, lineno, tasks))
|
||||
statements.append(AddTaskNode(filename, lineno, func, before, after))
|
||||
|
||||
def handleDelTask(statements, filename, lineno, m):
|
||||
func = m.group(1)
|
||||
if func is None:
|
||||
return
|
||||
|
||||
statements.append(DelTaskNode(filename, lineno, func))
|
||||
|
||||
def handleBBHandlers(statements, filename, lineno, m):
|
||||
statements.append(BBHandlerNode(filename, lineno, m.group(1)))
|
||||
@@ -366,10 +363,6 @@ def handleInherit(statements, filename, lineno, m):
|
||||
classes = m.group(1)
|
||||
statements.append(InheritNode(filename, lineno, classes))
|
||||
|
||||
def handleInheritDeferred(statements, filename, lineno, m):
|
||||
classes = m.group(1)
|
||||
statements.append(InheritDeferredNode(filename, lineno, classes))
|
||||
|
||||
def runAnonFuncs(d):
|
||||
code = []
|
||||
for funcname in d.getVar("__BBANONFUNCS", False) or []:
|
||||
@@ -436,14 +429,6 @@ def multi_finalize(fn, d):
|
||||
logger.debug("Appending .bbappend file %s to %s", append, fn)
|
||||
bb.parse.BBHandler.handle(append, d, True)
|
||||
|
||||
while True:
|
||||
inherits = d.getVar('__BBDEFINHERITS', False) or []
|
||||
if not inherits:
|
||||
break
|
||||
inherit, filename, lineno = inherits.pop(0)
|
||||
d.setVar('__BBDEFINHERITS', inherits)
|
||||
bb.parse.BBHandler.inherit(inherit, filename, lineno, d, deferred=True)
|
||||
|
||||
onlyfinalise = d.getVar("__ONLYFINALISE", False)
|
||||
|
||||
safe_d = d
|
||||
|
||||
@@ -21,10 +21,9 @@ from .ConfHandler import include, init
|
||||
|
||||
__func_start_regexp__ = re.compile(r"(((?P<py>python(?=(\s|\()))|(?P<fr>fakeroot(?=\s)))\s*)*(?P<func>[\w\.\-\+\{\}\$:]+)?\s*\(\s*\)\s*{$" )
|
||||
__inherit_regexp__ = re.compile(r"inherit\s+(.+)" )
|
||||
__inherit_def_regexp__ = re.compile(r"inherit_defer\s+(.+)" )
|
||||
__export_func_regexp__ = re.compile(r"EXPORT_FUNCTIONS\s+(.+)" )
|
||||
__addtask_regexp__ = re.compile(r"addtask\s+([^#\n]+)(?P<comment>#.*|.*?)")
|
||||
__deltask_regexp__ = re.compile(r"deltask\s+([^#\n]+)(?P<comment>#.*|.*?)")
|
||||
__addtask_regexp__ = re.compile(r"addtask\s+(?P<func>\w+)\s*((before\s*(?P<before>((.*(?=after))|(.*))))|(after\s*(?P<after>((.*(?=before))|(.*)))))*")
|
||||
__deltask_regexp__ = re.compile(r"deltask\s+(.+)")
|
||||
__addhandler_regexp__ = re.compile(r"addhandler\s+(.+)" )
|
||||
__def_regexp__ = re.compile(r"def\s+(\w+).*:" )
|
||||
__python_func_regexp__ = re.compile(r"(\s+.*)|(^$)|(^#)" )
|
||||
@@ -34,7 +33,6 @@ __infunc__ = []
|
||||
__inpython__ = False
|
||||
__body__ = []
|
||||
__classname__ = ""
|
||||
__residue__ = []
|
||||
|
||||
cached_statements = {}
|
||||
|
||||
@@ -42,10 +40,8 @@ def supports(fn, d):
|
||||
"""Return True if fn has a supported extension"""
|
||||
return os.path.splitext(fn)[-1] in [".bb", ".bbclass", ".inc"]
|
||||
|
||||
def inherit(files, fn, lineno, d, deferred=False):
|
||||
def inherit(files, fn, lineno, d):
|
||||
__inherit_cache = d.getVar('__inherit_cache', False) or []
|
||||
#if "${" in files and not deferred:
|
||||
# bb.warn("%s:%s has non deferred conditional inherit" % (fn, lineno))
|
||||
files = d.expand(files).split()
|
||||
for file in files:
|
||||
classtype = d.getVar("__bbclasstype", False)
|
||||
@@ -81,7 +77,7 @@ def inherit(files, fn, lineno, d, deferred=False):
|
||||
__inherit_cache = d.getVar('__inherit_cache', False) or []
|
||||
|
||||
def get_statements(filename, absolute_filename, base_name):
|
||||
global cached_statements, __residue__, __body__
|
||||
global cached_statements
|
||||
|
||||
try:
|
||||
return cached_statements[absolute_filename]
|
||||
@@ -101,11 +97,6 @@ def get_statements(filename, absolute_filename, base_name):
|
||||
# add a blank line to close out any python definition
|
||||
feeder(lineno, "", filename, base_name, statements, eof=True)
|
||||
|
||||
if __residue__:
|
||||
raise ParseError("Unparsed lines %s: %s" % (filename, str(__residue__)), filename, lineno)
|
||||
if __body__:
|
||||
raise ParseError("Unparsed lines from unclosed function %s: %s" % (filename, str(__body__)), filename, lineno)
|
||||
|
||||
if filename.endswith(".bbclass") or filename.endswith(".inc"):
|
||||
cached_statements[absolute_filename] = statements
|
||||
return statements
|
||||
@@ -239,38 +230,29 @@ def feeder(lineno, s, fn, root, statements, eof=False):
|
||||
|
||||
m = __addtask_regexp__.match(s)
|
||||
if m:
|
||||
after = ""
|
||||
before = ""
|
||||
if len(m.group().split()) == 2:
|
||||
# Check and warn for "addtask task1 task2"
|
||||
m2 = re.match(r"addtask\s+(?P<func>\w+)(?P<ignores>.*)", s)
|
||||
if m2 and m2.group('ignores'):
|
||||
logger.warning('addtask ignored: "%s"' % m2.group('ignores'))
|
||||
|
||||
# This code splits on 'before' and 'after' instead of on whitespace so we can defer
|
||||
# evaluation to as late as possible.
|
||||
tasks = m.group(1).split(" before ")[0].split(" after ")[0]
|
||||
|
||||
for exp in m.group(1).split(" before "):
|
||||
exp2 = exp.split(" after ")
|
||||
if len(exp2) > 1:
|
||||
after = after + " ".join(exp2[1:])
|
||||
|
||||
for exp in m.group(1).split(" after "):
|
||||
exp2 = exp.split(" before ")
|
||||
if len(exp2) > 1:
|
||||
before = before + " ".join(exp2[1:])
|
||||
|
||||
# Check and warn for having a task with a keyword as part of the task name
# Check and warn for "addtask task1 before task2 before task3"; the
# same applies to "after"
taskexpression = s.split()
|
||||
for word in ('before', 'after'):
|
||||
if taskexpression.count(word) > 1:
|
||||
logger.warning("addtask contained multiple '%s' keywords, only one is supported" % word)
|
||||
|
||||
# Check and warn for having a task with an expression as part of the task name
for te in taskexpression:
|
||||
if any( ( "%s_" % keyword ) in te for keyword in bb.data_smart.__setvar_keyword__ ):
|
||||
raise ParseError("Task name '%s' contains a keyword which is not recommended/supported.\nPlease rename the task not to include the keyword.\n%s" % (te, ("\n".join(map(str, bb.data_smart.__setvar_keyword__)))), fn)
|
||||
|
||||
if tasks is not None:
|
||||
ast.handleAddTask(statements, fn, lineno, tasks, before, after)
|
||||
ast.handleAddTask(statements, fn, lineno, m)
|
||||
return
|
||||
|
||||
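To illustrate the addtask handling above, a minimal standalone sketch (not part of the diff) applying the same 'before'/'after' split logic to one hypothetical directive:

    group = "compile before do_install after do_configure"  # stands in for m.group(1)
    tasks = group.split(" before ")[0].split(" after ")[0]

    after = ""
    for exp in group.split(" before "):
        exp2 = exp.split(" after ")
        if len(exp2) > 1:
            after = after + " ".join(exp2[1:])

    before = ""
    for exp in group.split(" after "):
        exp2 = exp.split(" before ")
        if len(exp2) > 1:
            before = before + " ".join(exp2[1:])

    print(tasks, before, after)  # compile do_install do_configure
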
m = __deltask_regexp__.match(s)
|
||||
if m:
|
||||
task = m.group(1)
|
||||
if task is not None:
|
||||
ast.handleDelTask(statements, fn, lineno, task)
|
||||
ast.handleDelTask(statements, fn, lineno, m)
|
||||
return
|
||||
|
||||
m = __addhandler_regexp__.match(s)
|
||||
@@ -283,11 +265,6 @@ def feeder(lineno, s, fn, root, statements, eof=False):
|
||||
ast.handleInherit(statements, fn, lineno, m)
|
||||
return
|
||||
|
||||
m = __inherit_def_regexp__.match(s)
|
||||
if m:
|
||||
ast.handleInheritDeferred(statements, fn, lineno, m)
|
||||
return
|
||||
|
||||
return ConfHandler.feeder(lineno, s, fn, statements, conffile=False)
|
||||
|
||||
# Add us to the handlers list
|
||||
|
||||
@@ -154,7 +154,6 @@ class SQLTable(collections.abc.MutableMapping):
|
||||
|
||||
def __exit__(self, *excinfo):
|
||||
self.connection.__exit__(*excinfo)
|
||||
self.connection.close()
|
||||
|
||||
@_Decorators.retry()
|
||||
@_Decorators.transaction
|
||||
|
||||
@@ -14,7 +14,6 @@ import os
|
||||
import sys
|
||||
import stat
|
||||
import errno
|
||||
import itertools
|
||||
import logging
|
||||
import re
|
||||
import bb
|
||||
@@ -158,7 +157,7 @@ class RunQueueScheduler(object):
|
||||
(mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
|
||||
self.stamps[tid] = bb.parse.siggen.stampfile_mcfn(taskname, taskfn, extrainfo=False)
|
||||
if tid in self.rq.runq_buildable:
|
||||
self.buildable.add(tid)
|
||||
self.buildable.append(tid)
|
||||
|
||||
self.rev_prio_map = None
|
||||
self.is_pressure_usable()
|
||||
@@ -201,36 +200,19 @@ class RunQueueScheduler(object):
|
||||
curr_memory_pressure = memory_pressure_fds.readline().split()[4].split("=")[1]
|
||||
now = time.time()
|
||||
tdiff = now - self.prev_pressure_time
|
||||
psi_accumulation_interval = 1.0
|
||||
cpu_pressure = (float(curr_cpu_pressure) - float(self.prev_cpu_pressure)) / tdiff
|
||||
io_pressure = (float(curr_io_pressure) - float(self.prev_io_pressure)) / tdiff
|
||||
memory_pressure = (float(curr_memory_pressure) - float(self.prev_memory_pressure)) / tdiff
|
||||
exceeds_cpu_pressure = self.rq.max_cpu_pressure and cpu_pressure > self.rq.max_cpu_pressure
|
||||
exceeds_io_pressure = self.rq.max_io_pressure and io_pressure > self.rq.max_io_pressure
|
||||
exceeds_memory_pressure = self.rq.max_memory_pressure and memory_pressure > self.rq.max_memory_pressure
|
||||
|
||||
if tdiff > psi_accumulation_interval:
|
||||
if tdiff > 1.0:
|
||||
exceeds_cpu_pressure = self.rq.max_cpu_pressure and (float(curr_cpu_pressure) - float(self.prev_cpu_pressure)) / tdiff > self.rq.max_cpu_pressure
|
||||
exceeds_io_pressure = self.rq.max_io_pressure and (float(curr_io_pressure) - float(self.prev_io_pressure)) / tdiff > self.rq.max_io_pressure
|
||||
exceeds_memory_pressure = self.rq.max_memory_pressure and (float(curr_memory_pressure) - float(self.prev_memory_pressure)) / tdiff > self.rq.max_memory_pressure
|
||||
self.prev_cpu_pressure = curr_cpu_pressure
|
||||
self.prev_io_pressure = curr_io_pressure
|
||||
self.prev_memory_pressure = curr_memory_pressure
|
||||
self.prev_pressure_time = now
|
||||
|
||||
pressure_state = (exceeds_cpu_pressure, exceeds_io_pressure, exceeds_memory_pressure)
|
||||
pressure_values = (round(cpu_pressure,1), self.rq.max_cpu_pressure, round(io_pressure,1), self.rq.max_io_pressure, round(memory_pressure,1), self.rq.max_memory_pressure)
|
||||
if hasattr(self, "pressure_state") and pressure_state != self.pressure_state:
|
||||
bb.note("Pressure status changed to CPU: %s, IO: %s, Mem: %s (CPU: %s/%s, IO: %s/%s, Mem: %s/%s) - using %s/%s bitbake threads" % (pressure_state + pressure_values + (len(self.rq.runq_running.difference(self.rq.runq_complete)), self.rq.number_tasks)))
|
||||
self.pressure_state = pressure_state
|
||||
else:
|
||||
exceeds_cpu_pressure = self.rq.max_cpu_pressure and (float(curr_cpu_pressure) - float(self.prev_cpu_pressure)) > self.rq.max_cpu_pressure
|
||||
exceeds_io_pressure = self.rq.max_io_pressure and (float(curr_io_pressure) - float(self.prev_io_pressure)) > self.rq.max_io_pressure
|
||||
exceeds_memory_pressure = self.rq.max_memory_pressure and (float(curr_memory_pressure) - float(self.prev_memory_pressure)) > self.rq.max_memory_pressure
|
||||
return (exceeds_cpu_pressure or exceeds_io_pressure or exceeds_memory_pressure)
|
||||
elif self.rq.max_loadfactor:
|
||||
limit = False
|
||||
loadfactor = float(os.getloadavg()[0]) / os.cpu_count()
|
||||
# bb.warn("Comparing %s to %s" % (loadfactor, self.rq.max_loadfactor))
|
||||
if loadfactor > self.rq.max_loadfactor:
|
||||
limit = True
|
||||
if hasattr(self, "loadfactor_limit") and limit != self.loadfactor_limit:
|
||||
bb.note("Load average limiting set to %s as load average: %s - using %s/%s bitbake threads" % (limit, loadfactor, len(self.rq.runq_running.difference(self.rq.runq_complete)), self.rq.number_tasks))
|
||||
self.loadfactor_limit = limit
|
||||
return limit
|
||||
return False
|
||||
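For context on the pressure checks above: both variants sample the "total=" counter from /proc/pressure/{cpu,io,memory} and compare how much it grew over the sampling interval against the configured BB_PRESSURE_MAX_* limits. A minimal sketch of the rate calculation with hypothetical sample values (illustrative only):

    def pressure_rate(prev_total, curr_total, prev_time, curr_time):
        """Pressure accumulated per second between two samples of a PSI counter."""
        return (float(curr_total) - float(prev_total)) / (curr_time - prev_time)

    # e.g. two samples taken 2 seconds apart:
    # pressure_rate(1_000_000, 1_400_000, 100.0, 102.0) == 200000.0
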
|
||||
def next_buildable_task(self):
|
||||
@@ -281,11 +263,11 @@ class RunQueueScheduler(object):
|
||||
best = None
|
||||
bestprio = None
|
||||
for tid in buildable:
|
||||
taskname = taskname_from_tid(tid)
|
||||
if taskname in skip_buildable and skip_buildable[taskname] >= int(self.skip_maxthread[taskname]):
|
||||
continue
|
||||
prio = self.rev_prio_map[tid]
|
||||
if bestprio is None or bestprio > prio:
|
||||
taskname = taskname_from_tid(tid)
|
||||
if taskname in skip_buildable and skip_buildable[taskname] >= int(self.skip_maxthread[taskname]):
|
||||
continue
|
||||
stamp = self.stamps[tid]
|
||||
if stamp in self.rq.build_stamps.values():
|
||||
continue
|
||||
@@ -1015,32 +997,25 @@ class RunQueueData:
|
||||
# Handle --runall
|
||||
if self.cooker.configuration.runall:
|
||||
# re-run the mark_active and then drop unused tasks from new list
|
||||
reduced_tasklist = set(self.runtaskentries.keys())
|
||||
for tid in list(self.runtaskentries.keys()):
|
||||
if tid not in runq_build:
|
||||
reduced_tasklist.remove(tid)
|
||||
runq_build = {}
|
||||
|
||||
runall_tids = set()
|
||||
added = True
|
||||
while added:
|
||||
reduced_tasklist = set(self.runtaskentries.keys())
|
||||
for tid in list(self.runtaskentries.keys()):
|
||||
if tid not in runq_build:
|
||||
reduced_tasklist.remove(tid)
|
||||
runq_build = {}
|
||||
|
||||
orig = runall_tids
|
||||
for task in self.cooker.configuration.runall:
|
||||
if not task.startswith("do_"):
|
||||
task = "do_{0}".format(task)
|
||||
runall_tids = set()
|
||||
for task in self.cooker.configuration.runall:
|
||||
if not task.startswith("do_"):
|
||||
task = "do_{0}".format(task)
|
||||
for tid in reduced_tasklist:
|
||||
wanttid = "{0}:{1}".format(fn_from_tid(tid), task)
|
||||
if wanttid in self.runtaskentries:
|
||||
runall_tids.add(wanttid)
|
||||
for tid in reduced_tasklist:
|
||||
wanttid = "{0}:{1}".format(fn_from_tid(tid), task)
|
||||
if wanttid in self.runtaskentries:
|
||||
runall_tids.add(wanttid)
|
||||
|
||||
for tid in list(runall_tids):
|
||||
mark_active(tid, 1)
|
||||
self.target_tids.append(tid)
|
||||
if self.cooker.configuration.force:
|
||||
invalidate_task(tid, False)
|
||||
added = runall_tids - orig
|
||||
for tid in list(runall_tids):
|
||||
mark_active(tid, 1)
|
||||
if self.cooker.configuration.force:
|
||||
invalidate_task(tid, False)
|
||||
|
||||
delcount = set()
|
||||
for tid in list(self.runtaskentries.keys()):
|
||||
@@ -1274,41 +1249,27 @@ class RunQueueData:
|
||||
|
||||
bb.parse.siggen.set_setscene_tasks(self.runq_setscene_tids)
|
||||
|
||||
starttime = time.time()
|
||||
lasttime = starttime
|
||||
|
||||
# Iterate over the task list and call into the siggen code
|
||||
dealtwith = set()
|
||||
todeal = set(self.runtaskentries)
|
||||
while todeal:
|
||||
ready = set()
|
||||
for tid in todeal.copy():
|
||||
if not (self.runtaskentries[tid].depends - dealtwith):
|
||||
self.runtaskentries[tid].taskhash_deps = bb.parse.siggen.prep_taskhash(tid, self.runtaskentries[tid].depends, self.dataCaches)
|
||||
# get_taskhash for a given tid *must* be called before get_unihash* below
|
||||
self.runtaskentries[tid].hash = bb.parse.siggen.get_taskhash(tid, self.runtaskentries[tid].depends, self.dataCaches)
|
||||
ready.add(tid)
|
||||
unihashes = bb.parse.siggen.get_unihashes(ready)
|
||||
for tid in ready:
|
||||
dealtwith.add(tid)
|
||||
todeal.remove(tid)
|
||||
self.runtaskentries[tid].unihash = unihashes[tid]
|
||||
|
||||
bb.event.check_for_interrupts(self.cooker.data)
|
||||
|
||||
if time.time() > (lasttime + 30):
|
||||
lasttime = time.time()
|
||||
hashequiv_logger.verbose("Initial setup loop progress: %s of %s in %s" % (len(todeal), len(self.runtaskentries), lasttime - starttime))
|
||||
|
||||
endtime = time.time()
|
||||
if (endtime-starttime > 60):
|
||||
hashequiv_logger.verbose("Initial setup loop took: %s" % (endtime-starttime))
|
||||
dealtwith.add(tid)
|
||||
todeal.remove(tid)
|
||||
self.prepare_task_hash(tid)
|
||||
bb.event.check_for_interrupts(self.cooker.data)
|
||||
|
||||
bb.parse.siggen.writeout_file_checksum_cache()
|
||||
|
||||
#self.dump_data()
|
||||
return len(self.runtaskentries)
|
||||
|
||||
def prepare_task_hash(self, tid):
|
||||
bb.parse.siggen.prep_taskhash(tid, self.runtaskentries[tid].depends, self.dataCaches)
|
||||
self.runtaskentries[tid].hash = bb.parse.siggen.get_taskhash(tid, self.runtaskentries[tid].depends, self.dataCaches)
|
||||
self.runtaskentries[tid].unihash = bb.parse.siggen.get_unihash(tid)
|
||||
|
||||
def dump_data(self):
|
||||
"""
|
||||
Dump some debug information on the internal data structures
|
||||
@@ -1350,36 +1311,24 @@ class RunQueue:
|
||||
self.worker = {}
|
||||
self.fakeworker = {}
|
||||
|
||||
@staticmethod
|
||||
def send_pickled_data(worker, data, name):
|
||||
msg = bytearray()
|
||||
msg.extend(b"<" + name.encode() + b">")
|
||||
pickled_data = pickle.dumps(data)
|
||||
msg.extend(len(pickled_data).to_bytes(4, 'big'))
|
||||
msg.extend(pickled_data)
|
||||
msg.extend(b"</" + name.encode() + b">")
|
||||
worker.stdin.write(msg)
|
||||
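As an aside on send_pickled_data() above: the frame it writes is "<name>", a 4-byte big-endian length, the pickled payload, and then "</name>". A hypothetical reader of one such frame (illustrative only, not the worker's actual parser):

    import pickle

    def read_frame(stream, name):
        opening = b"<" + name.encode() + b">"
        assert stream.read(len(opening)) == opening
        length = int.from_bytes(stream.read(4), 'big')
        payload = pickle.loads(stream.read(length))
        closing = b"</" + name.encode() + b">"
        assert stream.read(len(closing)) == closing
        return payload
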
|
||||
def _start_worker(self, mc, fakeroot = False, rqexec = None):
|
||||
logger.debug("Starting bitbake-worker")
|
||||
magic = "decafbad"
|
||||
if self.cooker.configuration.profile:
|
||||
magic = "decafbadbad"
|
||||
fakerootlogs = None
|
||||
|
||||
workerscript = os.path.realpath(os.path.dirname(__file__) + "/../../bin/bitbake-worker")
|
||||
if fakeroot:
|
||||
magic = magic + "beef"
|
||||
mcdata = self.cooker.databuilder.mcdata[mc]
|
||||
fakerootcmd = shlex.split(mcdata.getVar("FAKEROOTCMD"))
|
||||
fakerootenv = (mcdata.getVar("FAKEROOTBASEENV") or "").split()
|
||||
env = os.environ.copy()
|
||||
for key, value in (var.split('=',1) for var in fakerootenv):
|
||||
for key, value in (var.split('=') for var in fakerootenv):
|
||||
env[key] = value
|
||||
worker = subprocess.Popen(fakerootcmd + [sys.executable, workerscript, magic], stdout=subprocess.PIPE, stdin=subprocess.PIPE, env=env)
|
||||
worker = subprocess.Popen(fakerootcmd + ["bitbake-worker", magic], stdout=subprocess.PIPE, stdin=subprocess.PIPE, env=env)
|
||||
fakerootlogs = self.rqdata.dataCaches[mc].fakerootlogs
|
||||
else:
|
||||
worker = subprocess.Popen([sys.executable, workerscript, magic], stdout=subprocess.PIPE, stdin=subprocess.PIPE)
|
||||
worker = subprocess.Popen(["bitbake-worker", magic], stdout=subprocess.PIPE, stdin=subprocess.PIPE)
|
||||
bb.utils.nonblockingfd(worker.stdout)
|
||||
workerpipe = runQueuePipe(worker.stdout, None, self.cfgData, self, rqexec, fakerootlogs=fakerootlogs)
|
||||
|
||||
@@ -1397,9 +1346,9 @@ class RunQueue:
|
||||
"umask" : self.cfgData.getVar("BB_DEFAULT_UMASK"),
|
||||
}
|
||||
|
||||
RunQueue.send_pickled_data(worker, self.cooker.configuration, "cookerconfig")
|
||||
RunQueue.send_pickled_data(worker, self.cooker.extraconfigdata, "extraconfigdata")
|
||||
RunQueue.send_pickled_data(worker, workerdata, "workerdata")
|
||||
worker.stdin.write(b"<cookerconfig>" + pickle.dumps(self.cooker.configuration) + b"</cookerconfig>")
|
||||
worker.stdin.write(b"<extraconfigdata>" + pickle.dumps(self.cooker.extraconfigdata) + b"</extraconfigdata>")
|
||||
worker.stdin.write(b"<workerdata>" + pickle.dumps(workerdata) + b"</workerdata>")
|
||||
worker.stdin.flush()
|
||||
|
||||
return RunQueueWorker(worker, workerpipe)
|
||||
@@ -1409,7 +1358,7 @@ class RunQueue:
|
||||
return
|
||||
logger.debug("Teardown for bitbake-worker")
|
||||
try:
|
||||
RunQueue.send_pickled_data(worker.process, b"", "quit")
|
||||
worker.process.stdin.write(b"<quit></quit>")
|
||||
worker.process.stdin.flush()
|
||||
worker.process.stdin.close()
|
||||
except IOError:
|
||||
@@ -1421,12 +1370,12 @@ class RunQueue:
|
||||
continue
|
||||
worker.pipe.close()
|
||||
|
||||
def start_worker(self, rqexec):
|
||||
def start_worker(self):
|
||||
if self.worker:
|
||||
self.teardown_workers()
|
||||
self.teardown = False
|
||||
for mc in self.rqdata.dataCaches:
|
||||
self.worker[mc] = self._start_worker(mc, False, rqexec)
|
||||
self.worker[mc] = self._start_worker(mc)
|
||||
|
||||
def start_fakeworker(self, rqexec, mc):
|
||||
if not mc in self.fakeworker:
|
||||
@@ -1586,9 +1535,6 @@ class RunQueue:
|
||||
('bb.event.HeartbeatEvent',), data=self.cfgData)
|
||||
self.dm_event_handler_registered = True
|
||||
|
||||
self.rqdata.init_progress_reporter.next_stage()
|
||||
self.rqexe = RunQueueExecute(self)
|
||||
|
||||
dump = self.cooker.configuration.dump_signatures
|
||||
if dump:
|
||||
self.rqdata.init_progress_reporter.finish()
|
||||
@@ -1600,8 +1546,10 @@ class RunQueue:
|
||||
self.state = runQueueComplete
|
||||
|
||||
if self.state is runQueueSceneInit:
|
||||
self.start_worker(self.rqexe)
|
||||
self.rqdata.init_progress_reporter.finish()
|
||||
self.rqdata.init_progress_reporter.next_stage()
|
||||
self.start_worker()
|
||||
self.rqdata.init_progress_reporter.next_stage()
|
||||
self.rqexe = RunQueueExecute(self)
|
||||
|
||||
# If we don't have any setscene functions, skip execution
|
||||
if not self.rqdata.runq_setscene_tids:
|
||||
@@ -1716,17 +1664,6 @@ class RunQueue:
|
||||
return
|
||||
|
||||
def print_diffscenetasks(self):
|
||||
def get_root_invalid_tasks(task, taskdepends, valid, noexec, visited_invalid):
|
||||
invalidtasks = []
|
||||
for t in taskdepends[task].depends:
|
||||
if t not in valid and t not in visited_invalid:
|
||||
invalidtasks.extend(get_root_invalid_tasks(t, taskdepends, valid, noexec, visited_invalid))
|
||||
visited_invalid.add(t)
|
||||
|
||||
direct_invalid = [t for t in taskdepends[task].depends if t not in valid]
|
||||
if not direct_invalid and task not in noexec:
|
||||
invalidtasks = [task]
|
||||
return invalidtasks
|
||||
|
||||
noexec = []
|
||||
tocheck = set()
|
||||
@@ -1760,49 +1697,46 @@ class RunQueue:
|
||||
valid_new.add(dep)
|
||||
|
||||
invalidtasks = set()
|
||||
for tid in self.rqdata.runtaskentries:
|
||||
if tid not in valid_new and tid not in noexec:
|
||||
invalidtasks.add(tid)
|
||||
|
||||
toptasks = set(["{}:{}".format(t[3], t[2]) for t in self.rqdata.targets])
|
||||
for tid in toptasks:
|
||||
found = set()
|
||||
processed = set()
|
||||
for tid in invalidtasks:
|
||||
toprocess = set([tid])
|
||||
while toprocess:
|
||||
next = set()
|
||||
visited_invalid = set()
|
||||
for t in toprocess:
|
||||
if t not in valid_new and t not in noexec:
|
||||
invalidtasks.update(get_root_invalid_tasks(t, self.rqdata.runtaskentries, valid_new, noexec, visited_invalid))
|
||||
continue
|
||||
if t in self.rqdata.runq_setscene_tids:
|
||||
for dep in self.rqexe.sqdata.sq_deps[t]:
|
||||
next.add(dep)
|
||||
continue
|
||||
|
||||
for dep in self.rqdata.runtaskentries[t].depends:
|
||||
next.add(dep)
|
||||
|
||||
if dep in invalidtasks:
|
||||
found.add(tid)
|
||||
if dep not in processed:
|
||||
processed.add(dep)
|
||||
next.add(dep)
|
||||
toprocess = next
|
||||
if tid in found:
|
||||
toprocess = set()
|
||||
|
||||
tasklist = []
|
||||
for tid in invalidtasks:
|
||||
for tid in invalidtasks.difference(found):
|
||||
tasklist.append(tid)
|
||||
|
||||
if tasklist:
|
||||
bb.plain("The differences between the current build and any cached tasks start at the following tasks:\n" + "\n".join(tasklist))
|
||||
|
||||
return invalidtasks
|
||||
return invalidtasks.difference(found)
|
||||
|
||||
def write_diffscenetasks(self, invalidtasks):
|
||||
bb.siggen.check_siggen_version(bb.siggen)
|
||||
|
||||
# Define recursion callback
|
||||
def recursecb(key, hash1, hash2):
|
||||
hashes = [hash1, hash2]
|
||||
bb.debug(1, "Recursively looking for recipe {} hashes {}".format(key, hashes))
|
||||
hashfiles = bb.siggen.find_siginfo(key, None, hashes, self.cfgData)
|
||||
bb.debug(1, "Found hashfiles:\n{}".format(hashfiles))
|
||||
|
||||
recout = []
|
||||
if len(hashfiles) == 2:
|
||||
out2 = bb.siggen.compare_sigfiles(hashfiles[hash1]['path'], hashfiles[hash2]['path'], recursecb)
|
||||
out2 = bb.siggen.compare_sigfiles(hashfiles[hash1], hashfiles[hash2], recursecb)
|
||||
recout.extend(list(' ' + l for l in out2))
|
||||
else:
|
||||
recout.append("Unable to find matching sigdata for %s with hashes %s or %s" % (key, hash1, hash2))
|
||||
@@ -1813,25 +1747,20 @@ class RunQueue:
|
||||
for tid in invalidtasks:
|
||||
(mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
|
||||
pn = self.rqdata.dataCaches[mc].pkg_fn[taskfn]
|
||||
h = self.rqdata.runtaskentries[tid].unihash
|
||||
bb.debug(1, "Looking for recipe {} task {}".format(pn, taskname))
|
||||
h = self.rqdata.runtaskentries[tid].hash
|
||||
matches = bb.siggen.find_siginfo(pn, taskname, [], self.cooker.databuilder.mcdata[mc])
|
||||
bb.debug(1, "Found hashfiles:\n{}".format(matches))
|
||||
match = None
|
||||
for m in matches.values():
|
||||
if h in m['path']:
|
||||
match = m['path']
|
||||
for m in matches:
|
||||
if h in m:
|
||||
match = m
|
||||
if match is None:
|
||||
bb.fatal("Can't find a task we're supposed to have written out? (hash: %s tid: %s)?" % (h, tid))
|
||||
bb.fatal("Can't find a task we're supposed to have written out? (hash: %s)?" % h)
|
||||
matches = {k : v for k, v in iter(matches.items()) if h not in k}
|
||||
matches_local = {k : v for k, v in iter(matches.items()) if h not in k and not v['sstate']}
|
||||
if matches_local:
|
||||
matches = matches_local
|
||||
if matches:
|
||||
latestmatch = matches[sorted(matches.keys(), key=lambda h: matches[h]['time'])[-1]]['path']
|
||||
latestmatch = sorted(matches.keys(), key=lambda f: matches[f])[-1]
|
||||
prevh = __find_sha256__.search(latestmatch).group(0)
|
||||
output = bb.siggen.compare_sigfiles(latestmatch, match, recursecb)
|
||||
bb.plain("\nTask %s:%s couldn't be used from the cache because:\n We need hash %s, most recent matching task was %s\n " % (pn, taskname, h, prevh) + '\n '.join(output))
|
||||
bb.plain("\nTask %s:%s couldn't be used from the cache because:\n We need hash %s, closest matching task was %s\n " % (pn, taskname, h, prevh) + '\n '.join(output))
|
||||
|
||||
|
||||
class RunQueueExecute:
|
||||
@@ -1847,7 +1776,6 @@ class RunQueueExecute:
|
||||
self.max_cpu_pressure = self.cfgData.getVar("BB_PRESSURE_MAX_CPU")
|
||||
self.max_io_pressure = self.cfgData.getVar("BB_PRESSURE_MAX_IO")
|
||||
self.max_memory_pressure = self.cfgData.getVar("BB_PRESSURE_MAX_MEMORY")
|
||||
self.max_loadfactor = self.cfgData.getVar("BB_LOADFACTOR_MAX")
|
||||
|
||||
self.sq_buildable = set()
|
||||
self.sq_running = set()
|
||||
@@ -1865,8 +1793,6 @@ class RunQueueExecute:
|
||||
self.build_stamps2 = []
|
||||
self.failed_tids = []
|
||||
self.sq_deferred = {}
|
||||
self.sq_needed_harddeps = set()
|
||||
self.sq_harddep_deferred = set()
|
||||
|
||||
self.stampcache = {}
|
||||
|
||||
@@ -1876,6 +1802,11 @@ class RunQueueExecute:
|
||||
|
||||
self.stats = RunQueueStats(len(self.rqdata.runtaskentries), len(self.rqdata.runq_setscene_tids))
|
||||
|
||||
for mc in rq.worker:
|
||||
rq.worker[mc].pipe.setrunqueueexec(self)
|
||||
for mc in rq.fakeworker:
|
||||
rq.fakeworker[mc].pipe.setrunqueueexec(self)
|
||||
|
||||
if self.number_tasks <= 0:
|
||||
bb.fatal("Invalid BB_NUMBER_THREADS %s" % self.number_tasks)
|
||||
|
||||
@@ -1901,11 +1832,6 @@ class RunQueueExecute:
|
||||
bb.fatal("Invalid BB_PRESSURE_MAX_MEMORY %s, minimum value is %s." % (self.max_memory_pressure, lower_limit))
|
||||
if self.max_memory_pressure > upper_limit:
|
||||
bb.warn("Your build will be largely unregulated since BB_PRESSURE_MAX_MEMORY is set to %s. It is very unlikely that such high pressure will be experienced." % (self.max_io_pressure))
|
||||
|
||||
if self.max_loadfactor:
|
||||
self.max_loadfactor = float(self.max_loadfactor)
|
||||
if self.max_loadfactor <= 0:
|
||||
bb.fatal("Invalid BB_LOADFACTOR_MAX %s, needs to be greater than zero." % (self.max_loadfactor))
|
||||
|
||||
# List of setscene tasks which we've covered
|
||||
self.scenequeue_covered = set()
|
||||
@@ -1916,6 +1842,11 @@ class RunQueueExecute:
|
||||
self.tasks_notcovered = set()
|
||||
self.scenequeue_notneeded = set()
|
||||
|
||||
# We can't skip specified target tasks which aren't setscene tasks
|
||||
self.cantskip = set(self.rqdata.target_tids)
|
||||
self.cantskip.difference_update(self.rqdata.runq_setscene_tids)
|
||||
self.cantskip.intersection_update(self.rqdata.runtaskentries)
|
||||
|
||||
schedulers = self.get_schedulers()
|
||||
for scheduler in schedulers:
|
||||
if self.scheduler == scheduler.name:
|
||||
@@ -1928,25 +1859,7 @@ class RunQueueExecute:
|
||||
|
||||
#if self.rqdata.runq_setscene_tids:
|
||||
self.sqdata = SQData()
|
||||
build_scenequeue_data(self.sqdata, self.rqdata, self)
|
||||
|
||||
update_scenequeue_data(self.sqdata.sq_revdeps, self.sqdata, self.rqdata, self.rq, self.cooker, self.stampcache, self, summary=True)
|
||||
|
||||
# Compute a list of 'stale' sstate tasks where the current hash does not match the one
|
||||
# in any stamp files. Pass the list out to metadata as an event.
|
||||
found = {}
|
||||
for tid in self.rqdata.runq_setscene_tids:
|
||||
(mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
|
||||
stamps = bb.build.find_stale_stamps(taskname, taskfn)
|
||||
if stamps:
|
||||
if mc not in found:
|
||||
found[mc] = {}
|
||||
found[mc][tid] = stamps
|
||||
for mc in found:
|
||||
event = bb.event.StaleSetSceneTasks(found[mc])
|
||||
bb.event.fire(event, self.cooker.databuilder.mcdata[mc])
|
||||
|
||||
self.build_taskdepdata_cache()
|
||||
build_scenequeue_data(self.sqdata, self.rqdata, self.rq, self.cooker, self.stampcache, self)
|
||||
|
||||
def runqueue_process_waitpid(self, task, status, fakerootlog=None):
|
||||
|
||||
@@ -1972,14 +1885,14 @@ class RunQueueExecute:
|
||||
def finish_now(self):
|
||||
for mc in self.rq.worker:
|
||||
try:
|
||||
RunQueue.send_pickled_data(self.rq.worker[mc].process, b"", "finishnow")
|
||||
self.rq.worker[mc].process.stdin.write(b"<finishnow></finishnow>")
|
||||
self.rq.worker[mc].process.stdin.flush()
|
||||
except IOError:
|
||||
# worker must have died?
|
||||
pass
|
||||
for mc in self.rq.fakeworker:
|
||||
try:
|
||||
RunQueue.send_pickled_data(self.rq.fakeworker[mc].process, b"", "finishnow")
|
||||
self.rq.fakeworker[mc].process.stdin.write(b"<finishnow></finishnow>")
|
||||
self.rq.fakeworker[mc].process.stdin.flush()
|
||||
except IOError:
|
||||
# worker must have died?
|
||||
@@ -2078,19 +1991,11 @@ class RunQueueExecute:
|
||||
self.setbuildable(revdep)
|
||||
logger.debug("Marking task %s as buildable", revdep)
|
||||
|
||||
found = None
|
||||
for t in sorted(self.sq_deferred.copy()):
|
||||
for t in self.sq_deferred.copy():
|
||||
if self.sq_deferred[t] == task:
|
||||
# Allow the next deferred task to run. Any other deferred tasks should be deferred after that task.
|
||||
# We shouldn't allow all to run at once as it is prone to races.
|
||||
if not found:
|
||||
bb.debug(1, "Deferred task %s now buildable" % t)
|
||||
del self.sq_deferred[t]
|
||||
update_scenequeue_data([t], self.sqdata, self.rqdata, self.rq, self.cooker, self.stampcache, self, summary=False)
|
||||
found = t
|
||||
else:
|
||||
bb.debug(1, "Deferring %s after %s" % (t, found))
|
||||
self.sq_deferred[t] = found
|
||||
logger.debug2("Deferred task %s now buildable" % t)
|
||||
del self.sq_deferred[t]
|
||||
update_scenequeue_data([t], self.sqdata, self.rqdata, self.rq, self.cooker, self.stampcache, self, summary=False)
|
||||
|
||||
def task_complete(self, task):
|
||||
self.stats.taskCompleted()
|
||||
@@ -2190,24 +2095,13 @@ class RunQueueExecute:
|
||||
if not hasattr(self, "sorted_setscene_tids"):
|
||||
# Don't want to sort this set every execution
|
||||
self.sorted_setscene_tids = sorted(self.rqdata.runq_setscene_tids)
|
||||
# Resume looping where we left off when we returned to feed the mainloop
|
||||
self.setscene_tids_generator = itertools.cycle(self.rqdata.runq_setscene_tids)
|
||||
|
||||
task = None
|
||||
if not self.sqdone and self.can_start_task():
|
||||
loopcount = 0
|
||||
# Find the next setscene to run, exit the loop when we've processed all tids or found something to execute
|
||||
while loopcount < len(self.rqdata.runq_setscene_tids):
|
||||
loopcount += 1
|
||||
nexttask = next(self.setscene_tids_generator)
|
||||
if nexttask in self.sq_buildable and nexttask not in self.sq_running and self.sqdata.stamps[nexttask] not in self.build_stamps.values() and nexttask not in self.sq_harddep_deferred:
|
||||
if nexttask in self.sq_deferred and self.sq_deferred[nexttask] not in self.runq_complete:
|
||||
# Skip deferred tasks quickly before the 'expensive' tests below - this is key to performant multiconfig builds
|
||||
continue
|
||||
if nexttask not in self.sqdata.unskippable and self.sqdata.sq_revdeps[nexttask] and \
|
||||
nexttask not in self.sq_needed_harddeps and \
|
||||
self.sqdata.sq_revdeps[nexttask].issubset(self.scenequeue_covered) and \
|
||||
self.check_dependencies(nexttask, self.sqdata.sq_revdeps[nexttask]):
|
||||
# Find the next setscene to run
|
||||
for nexttask in self.sorted_setscene_tids:
|
||||
if nexttask in self.sq_buildable and nexttask not in self.sq_running and self.sqdata.stamps[nexttask] not in self.build_stamps.values():
|
||||
if nexttask not in self.sqdata.unskippable and self.sqdata.sq_revdeps[nexttask] and self.sqdata.sq_revdeps[nexttask].issubset(self.scenequeue_covered) and self.check_dependencies(nexttask, self.sqdata.sq_revdeps[nexttask]):
|
||||
if nexttask not in self.rqdata.target_tids:
|
||||
logger.debug2("Skipping setscene for task %s" % nexttask)
|
||||
self.sq_task_skip(nexttask)
|
||||
@@ -2215,25 +2109,13 @@ class RunQueueExecute:
|
||||
if nexttask in self.sq_deferred:
|
||||
del self.sq_deferred[nexttask]
|
||||
return True
|
||||
if nexttask in self.sqdata.sq_harddeps_rev and not self.sqdata.sq_harddeps_rev[nexttask].issubset(self.scenequeue_covered | self.scenequeue_notcovered):
|
||||
logger.debug2("Deferring %s due to hard dependencies" % nexttask)
|
||||
updated = False
|
||||
for dep in self.sqdata.sq_harddeps_rev[nexttask]:
|
||||
if dep not in self.sq_needed_harddeps:
|
||||
logger.debug2("Enabling task %s as it is a hard dependency" % dep)
|
||||
self.sq_buildable.add(dep)
|
||||
self.sq_needed_harddeps.add(dep)
|
||||
updated = True
|
||||
self.sq_harddep_deferred.add(nexttask)
|
||||
if updated:
|
||||
return True
|
||||
continue
|
||||
# If covered tasks are running, need to wait for them to complete
|
||||
for t in self.sqdata.sq_covered_tasks[nexttask]:
|
||||
if t in self.runq_running and t not in self.runq_complete:
|
||||
continue
|
||||
if nexttask in self.sq_deferred:
|
||||
# Deferred tasks that were still deferred were skipped above so we now need to process
|
||||
if self.sq_deferred[nexttask] not in self.runq_complete:
|
||||
continue
|
||||
logger.debug("Task %s no longer deferred" % nexttask)
|
||||
del self.sq_deferred[nexttask]
|
||||
valid = self.rq.validate_hashes(set([nexttask]), self.cooker.data, 0, False, summary=False)
|
||||
@@ -2276,7 +2158,6 @@ class RunQueueExecute:
|
||||
bb.event.fire(startevent, self.cfgData)
|
||||
|
||||
taskdep = self.rqdata.dataCaches[mc].task_deps[taskfn]
|
||||
realfn = bb.cache.virtualfn2realfn(taskfn)[0]
|
||||
runtask = {
|
||||
'fn' : taskfn,
|
||||
'task' : task,
|
||||
@@ -2285,7 +2166,6 @@ class RunQueueExecute:
|
||||
'unihash' : self.rqdata.get_task_unihash(task),
|
||||
'quieterrors' : True,
|
||||
'appends' : self.cooker.collections[mc].get_file_appends(taskfn),
|
||||
'layername' : self.cooker.collections[mc].calc_bbfile_priority(realfn)[2],
|
||||
'taskdepdata' : self.sq_build_taskdepdata(task),
|
||||
'dry_run' : False,
|
||||
'taskdep': taskdep,
|
||||
@@ -2297,10 +2177,10 @@ class RunQueueExecute:
|
||||
if 'fakeroot' in taskdep and taskname in taskdep['fakeroot'] and not self.cooker.configuration.dry_run:
|
||||
if not mc in self.rq.fakeworker:
|
||||
self.rq.start_fakeworker(self, mc)
|
||||
RunQueue.send_pickled_data(self.rq.fakeworker[mc].process, runtask, "runtask")
|
||||
self.rq.fakeworker[mc].process.stdin.write(b"<runtask>" + pickle.dumps(runtask) + b"</runtask>")
|
||||
self.rq.fakeworker[mc].process.stdin.flush()
|
||||
else:
|
||||
RunQueue.send_pickled_data(self.rq.worker[mc].process, runtask, "runtask")
|
||||
self.rq.worker[mc].process.stdin.write(b"<runtask>" + pickle.dumps(runtask) + b"</runtask>")
|
||||
self.rq.worker[mc].process.stdin.flush()
|
||||
|
||||
self.build_stamps[task] = bb.parse.siggen.stampfile_mcfn(taskname, taskfn, extrainfo=False)
|
||||
@@ -2371,7 +2251,6 @@ class RunQueueExecute:
|
||||
bb.event.fire(startevent, self.cfgData)
|
||||
|
||||
taskdep = self.rqdata.dataCaches[mc].task_deps[taskfn]
|
||||
realfn = bb.cache.virtualfn2realfn(taskfn)[0]
|
||||
runtask = {
|
||||
'fn' : taskfn,
|
||||
'task' : task,
|
||||
@@ -2380,7 +2259,6 @@ class RunQueueExecute:
|
||||
'unihash' : self.rqdata.get_task_unihash(task),
|
||||
'quieterrors' : False,
|
||||
'appends' : self.cooker.collections[mc].get_file_appends(taskfn),
|
||||
'layername' : self.cooker.collections[mc].calc_bbfile_priority(realfn)[2],
|
||||
'taskdepdata' : self.build_taskdepdata(task),
|
||||
'dry_run' : self.rqdata.setscene_enforce,
|
||||
'taskdep': taskdep,
|
||||
@@ -2398,10 +2276,10 @@ class RunQueueExecute:
|
||||
self.rq.state = runQueueFailed
|
||||
self.stats.taskFailed()
|
||||
return True
|
||||
RunQueue.send_pickled_data(self.rq.fakeworker[mc].process, runtask, "runtask")
|
||||
self.rq.fakeworker[mc].process.stdin.write(b"<runtask>" + pickle.dumps(runtask) + b"</runtask>")
|
||||
self.rq.fakeworker[mc].process.stdin.flush()
|
||||
else:
|
||||
RunQueue.send_pickled_data(self.rq.worker[mc].process, runtask, "runtask")
|
||||
self.rq.worker[mc].process.stdin.write(b"<runtask>" + pickle.dumps(runtask) + b"</runtask>")
|
||||
self.rq.worker[mc].process.stdin.flush()
|
||||
|
||||
self.build_stamps[task] = bb.parse.siggen.stampfile_mcfn(taskname, taskfn, extrainfo=False)
|
||||
@@ -2455,25 +2333,6 @@ class RunQueueExecute:
|
||||
ret.add(dep)
|
||||
return ret
|
||||
|
||||
# Build the individual cache entries in advance once to save time
|
||||
def build_taskdepdata_cache(self):
|
||||
taskdepdata_cache = {}
|
||||
for task in self.rqdata.runtaskentries:
|
||||
(mc, fn, taskname, taskfn) = split_tid_mcfn(task)
|
||||
taskdepdata_cache[task] = bb.TaskData(
|
||||
pn = self.rqdata.dataCaches[mc].pkg_fn[taskfn],
|
||||
taskname = taskname,
|
||||
fn = fn,
|
||||
deps = self.filtermcdeps(task, mc, self.rqdata.runtaskentries[task].depends),
|
||||
provides = self.rqdata.dataCaches[mc].fn_provides[taskfn],
|
||||
taskhash = self.rqdata.runtaskentries[task].hash,
|
||||
unihash = self.rqdata.runtaskentries[task].unihash,
|
||||
hashfn = self.rqdata.dataCaches[mc].hashfn[taskfn],
|
||||
taskhash_deps = self.rqdata.runtaskentries[task].taskhash_deps,
|
||||
)
|
||||
|
||||
self.taskdepdata_cache = taskdepdata_cache
|
||||
|
||||
# We filter out multiconfig dependencies from taskdepdata we pass to the tasks
|
||||
# as most code can't handle them
|
||||
def build_taskdepdata(self, task):
|
||||
@@ -2485,11 +2344,15 @@ class RunQueueExecute:
|
||||
while next:
|
||||
additional = []
|
||||
for revdep in next:
|
||||
self.taskdepdata_cache[revdep] = self.taskdepdata_cache[revdep]._replace(
|
||||
unihash=self.rqdata.runtaskentries[revdep].unihash
|
||||
)
|
||||
taskdepdata[revdep] = self.taskdepdata_cache[revdep]
|
||||
for revdep2 in self.taskdepdata_cache[revdep].deps:
|
||||
(mc, fn, taskname, taskfn) = split_tid_mcfn(revdep)
|
||||
pn = self.rqdata.dataCaches[mc].pkg_fn[taskfn]
|
||||
deps = self.rqdata.runtaskentries[revdep].depends
|
||||
provides = self.rqdata.dataCaches[mc].fn_provides[taskfn]
|
||||
taskhash = self.rqdata.runtaskentries[revdep].hash
|
||||
unihash = self.rqdata.runtaskentries[revdep].unihash
|
||||
deps = self.filtermcdeps(task, mc, deps)
|
||||
taskdepdata[revdep] = [pn, taskname, fn, deps, provides, taskhash, unihash]
|
||||
for revdep2 in deps:
|
||||
if revdep2 not in taskdepdata:
|
||||
additional.append(revdep2)
|
||||
next = additional
|
||||
@@ -2503,7 +2366,7 @@ class RunQueueExecute:
|
||||
return
|
||||
|
||||
notcovered = set(self.scenequeue_notcovered)
|
||||
notcovered |= self.sqdata.cantskip
|
||||
notcovered |= self.cantskip
|
||||
for tid in self.scenequeue_notcovered:
|
||||
notcovered |= self.sqdata.sq_covered_tasks[tid]
|
||||
notcovered |= self.sqdata.unskippable.difference(self.rqdata.runq_setscene_tids)
|
||||
@@ -2558,6 +2421,9 @@ class RunQueueExecute:
|
||||
self.rqdata.runtaskentries[hashtid].unihash = unihash
|
||||
bb.parse.siggen.set_unihash(hashtid, unihash)
|
||||
toprocess.add(hashtid)
|
||||
if torehash:
|
||||
# Need to save after set_unihash above
|
||||
bb.parse.siggen.save_unitaskhashes()
|
||||
|
||||
# Work out all tasks which depend upon these
|
||||
total = set()
|
||||
@@ -2580,28 +2446,17 @@ class RunQueueExecute:
|
||||
elif self.rqdata.runtaskentries[p].depends.isdisjoint(total):
|
||||
next.add(p)
|
||||
|
||||
starttime = time.time()
|
||||
lasttime = starttime
|
||||
|
||||
# When an item doesn't have dependencies in total, we can process it. Drop items from total when handled
|
||||
while next:
|
||||
current = next.copy()
|
||||
next = set()
|
||||
ready = {}
|
||||
for tid in current:
|
||||
if self.rqdata.runtaskentries[p].depends and not self.rqdata.runtaskentries[tid].depends.isdisjoint(total):
|
||||
continue
|
||||
# get_taskhash for a given tid *must* be called before get_unihash* below
|
||||
ready[tid] = bb.parse.siggen.get_taskhash(tid, self.rqdata.runtaskentries[tid].depends, self.rqdata.dataCaches)
|
||||
|
||||
unihashes = bb.parse.siggen.get_unihashes(ready.keys())
|
||||
|
||||
for tid in ready:
|
||||
orighash = self.rqdata.runtaskentries[tid].hash
|
||||
newhash = ready[tid]
|
||||
newhash = bb.parse.siggen.get_taskhash(tid, self.rqdata.runtaskentries[tid].depends, self.rqdata.dataCaches)
|
||||
origuni = self.rqdata.runtaskentries[tid].unihash
|
||||
newuni = unihashes[tid]
|
||||
|
||||
newuni = bb.parse.siggen.get_unihash(tid)
|
||||
# FIXME, need to check it can come from sstate at all for determinism?
|
||||
remapped = False
|
||||
if newuni == origuni:
|
||||
@@ -2622,21 +2477,12 @@ class RunQueueExecute:
|
||||
next |= self.rqdata.runtaskentries[tid].revdeps
|
||||
total.remove(tid)
|
||||
next.intersection_update(total)
|
||||
bb.event.check_for_interrupts(self.cooker.data)
|
||||
|
||||
if time.time() > (lasttime + 30):
|
||||
lasttime = time.time()
|
||||
hashequiv_logger.verbose("Rehash loop slow progress: %s in %s" % (len(total), lasttime - starttime))
|
||||
|
||||
endtime = time.time()
|
||||
if (endtime-starttime > 60):
|
||||
hashequiv_logger.verbose("Rehash loop took more than 60s: %s" % (endtime-starttime))
|
||||
|
||||
if changed:
|
||||
for mc in self.rq.worker:
|
||||
RunQueue.send_pickled_data(self.rq.worker[mc].process, bb.parse.siggen.get_taskhashes(), "newtaskhashes")
|
||||
self.rq.worker[mc].process.stdin.write(b"<newtaskhashes>" + pickle.dumps(bb.parse.siggen.get_taskhashes()) + b"</newtaskhashes>")
|
||||
for mc in self.rq.fakeworker:
|
||||
RunQueue.send_pickled_data(self.rq.fakeworker[mc].process, bb.parse.siggen.get_taskhashes(), "newtaskhashes")
|
||||
self.rq.fakeworker[mc].process.stdin.write(b"<newtaskhashes>" + pickle.dumps(bb.parse.siggen.get_taskhashes()) + b"</newtaskhashes>")
|
||||
|
||||
hashequiv_logger.debug(pprint.pformat("Tasks changed:\n%s" % (changed)))
|
||||
|
||||
@@ -2706,8 +2552,8 @@ class RunQueueExecute:
|
||||
update_tasks2 = []
|
||||
for tid in update_tasks:
|
||||
harddepfail = False
|
||||
for t in self.sqdata.sq_harddeps_rev[tid]:
|
||||
if t in self.scenequeue_notcovered:
|
||||
for t in self.sqdata.sq_harddeps:
|
||||
if tid in self.sqdata.sq_harddeps[t] and t in self.scenequeue_notcovered:
|
||||
harddepfail = True
|
||||
break
|
||||
if not harddepfail and self.sqdata.sq_revdeps[tid].issubset(self.scenequeue_covered | self.scenequeue_notcovered):
|
||||
@@ -2739,14 +2585,12 @@ class RunQueueExecute:
|
||||
|
||||
if changed:
|
||||
self.stats.updateCovered(len(self.scenequeue_covered), len(self.scenequeue_notcovered))
|
||||
self.sq_needed_harddeps = set()
|
||||
self.sq_harddep_deferred = set()
|
||||
self.holdoff_need_update = True
|
||||
|
||||
def scenequeue_updatecounters(self, task, fail=False):
|
||||
|
||||
if fail and task in self.sqdata.sq_harddeps:
|
||||
for dep in sorted(self.sqdata.sq_harddeps[task]):
|
||||
for dep in sorted(self.sqdata.sq_deps[task]):
|
||||
if fail and task in self.sqdata.sq_harddeps and dep in self.sqdata.sq_harddeps[task]:
|
||||
if dep in self.scenequeue_covered or dep in self.scenequeue_notcovered:
|
||||
# dependency could be already processed, e.g. noexec setscene task
|
||||
continue
|
||||
@@ -2756,12 +2600,7 @@ class RunQueueExecute:
|
||||
logger.debug2("%s was unavailable and is a hard dependency of %s so skipping" % (task, dep))
|
||||
self.sq_task_failoutright(dep)
|
||||
continue
|
||||
|
||||
# For performance, only compute allcovered once if needed
|
||||
if self.sqdata.sq_deps[task]:
|
||||
allcovered = self.scenequeue_covered | self.scenequeue_notcovered
|
||||
for dep in sorted(self.sqdata.sq_deps[task]):
|
||||
if self.sqdata.sq_revdeps[dep].issubset(allcovered):
|
||||
if self.sqdata.sq_revdeps[dep].issubset(self.scenequeue_covered | self.scenequeue_notcovered):
|
||||
if dep not in self.sq_buildable:
|
||||
self.sq_buildable.add(dep)
|
||||
|
||||
@@ -2779,13 +2618,6 @@ class RunQueueExecute:
|
||||
new.add(dep)
|
||||
next = new
|
||||
|
||||
# If this task was one which other setscene tasks have a hard dependency upon, we need
|
||||
# to walk through the hard dependencies and allow execution of those which have completed dependencies.
|
||||
if task in self.sqdata.sq_harddeps:
|
||||
for dep in self.sq_harddep_deferred.copy():
|
||||
if self.sqdata.sq_harddeps_rev[dep].issubset(self.scenequeue_covered | self.scenequeue_notcovered):
|
||||
self.sq_harddep_deferred.remove(dep)
|
||||
|
||||
self.stats.updateCovered(len(self.scenequeue_covered), len(self.scenequeue_notcovered))
|
||||
self.holdoff_need_update = True
|
||||
|
||||
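The removed comment above describes how the 5.1-side code released hard-dependency-deferred setscene tasks: once every task a deferred task hard-depends on has been covered or not covered, it may run. A minimal standalone sketch of that release check, with a plain dict and sets as illustrative stand-ins for sq_harddep_deferred, sq_harddeps_rev and the covered/notcovered sets (this is not bitbake code, only the shape of the test):

# Sketch only: 'deferred' maps a deferred setscene task to the set of tasks it
# hard-depends on; 'covered'/'notcovered' are the sets of processed tasks.
def release_deferred_harddeps(deferred, covered, notcovered):
    done = covered | notcovered
    released = {task for task, harddeps in deferred.items() if harddeps.issubset(done)}
    for task in released:
        del deferred[task]
    return released

# Example: "b" was deferred behind "a"; once "a" is processed, "b" is released.
deferred = {"b": {"a"}}
print(release_deferred_harddeps(deferred, covered={"a"}, notcovered=set()))   # {'b'}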
@@ -2854,19 +2686,12 @@ class RunQueueExecute:
|
||||
additional = []
|
||||
for revdep in next:
|
||||
(mc, fn, taskname, taskfn) = split_tid_mcfn(revdep)
|
||||
pn = self.rqdata.dataCaches[mc].pkg_fn[taskfn]
|
||||
deps = getsetscenedeps(revdep)
|
||||
|
||||
taskdepdata[revdep] = bb.TaskData(
|
||||
pn = self.rqdata.dataCaches[mc].pkg_fn[taskfn],
|
||||
taskname = taskname,
|
||||
fn = fn,
|
||||
deps = deps,
|
||||
provides = self.rqdata.dataCaches[mc].fn_provides[taskfn],
|
||||
taskhash = self.rqdata.runtaskentries[revdep].hash,
|
||||
unihash = self.rqdata.runtaskentries[revdep].unihash,
|
||||
hashfn = self.rqdata.dataCaches[mc].hashfn[taskfn],
|
||||
taskhash_deps = self.rqdata.runtaskentries[revdep].taskhash_deps,
|
||||
)
|
||||
provides = self.rqdata.dataCaches[mc].fn_provides[taskfn]
|
||||
taskhash = self.rqdata.runtaskentries[revdep].hash
|
||||
unihash = self.rqdata.runtaskentries[revdep].unihash
|
||||
taskdepdata[revdep] = [pn, taskname, fn, deps, provides, taskhash, unihash]
|
||||
for revdep2 in deps:
|
||||
if revdep2 not in taskdepdata:
|
||||
additional.append(revdep2)
|
||||
@@ -2910,7 +2735,6 @@ class SQData(object):
|
||||
self.sq_revdeps = {}
|
||||
# Injected inter-setscene task dependencies
|
||||
self.sq_harddeps = {}
|
||||
self.sq_harddeps_rev = {}
|
||||
# Cache of stamp files so duplicates can't run in parallel
|
||||
self.stamps = {}
|
||||
# Setscene tasks directly depended upon by the build
|
||||
@@ -2920,17 +2744,12 @@ class SQData(object):
|
||||
# A list of normal tasks a setscene task covers
|
||||
self.sq_covered_tasks = {}
|
||||
|
||||
def build_scenequeue_data(sqdata, rqdata, sqrq):
|
||||
def build_scenequeue_data(sqdata, rqdata, rq, cooker, stampcache, sqrq):
|
||||
|
||||
sq_revdeps = {}
|
||||
sq_revdeps_squash = {}
|
||||
sq_collated_deps = {}
|
||||
|
||||
# We can't skip specified target tasks which aren't setscene tasks
|
||||
sqdata.cantskip = set(rqdata.target_tids)
|
||||
sqdata.cantskip.difference_update(rqdata.runq_setscene_tids)
|
||||
sqdata.cantskip.intersection_update(rqdata.runtaskentries)
|
||||
|
||||
# We need to construct a dependency graph for the setscene functions. Intermediate
|
||||
# dependencies between the setscene tasks only complicate the code. This code
|
||||
# therefore aims to collapse the huge runqueue dependency tree into a smaller one
|
||||
@@ -2999,7 +2818,7 @@ def build_scenequeue_data(sqdata, rqdata, sqrq):
|
||||
for tid in rqdata.runtaskentries:
|
||||
if not rqdata.runtaskentries[tid].revdeps:
|
||||
sqdata.unskippable.add(tid)
|
||||
sqdata.unskippable |= sqdata.cantskip
|
||||
sqdata.unskippable |= sqrq.cantskip
|
||||
while new:
|
||||
new = False
|
||||
orig = sqdata.unskippable.copy()
|
||||
@@ -3038,7 +2857,6 @@ def build_scenequeue_data(sqdata, rqdata, sqrq):
|
||||
idepends = rqdata.taskData[mc].taskentries[realtid].idepends
|
||||
sqdata.stamps[tid] = bb.parse.siggen.stampfile_mcfn(taskname, taskfn, extrainfo=False)
|
||||
|
||||
sqdata.sq_harddeps_rev[tid] = set()
|
||||
for (depname, idependtask) in idepends:
|
||||
|
||||
if depname not in rqdata.taskData[mc].build_targets:
|
||||
@@ -3051,15 +2869,20 @@ def build_scenequeue_data(sqdata, rqdata, sqrq):
|
||||
if deptid not in rqdata.runtaskentries:
|
||||
bb.msg.fatal("RunQueue", "Task %s depends upon non-existent task %s:%s" % (realtid, depfn, idependtask))
|
||||
|
||||
logger.debug2("Adding hard setscene dependency %s for %s" % (deptid, tid))
|
||||
|
||||
if not deptid in sqdata.sq_harddeps:
|
||||
sqdata.sq_harddeps[deptid] = set()
|
||||
sqdata.sq_harddeps[deptid].add(tid)
|
||||
sqdata.sq_harddeps_rev[tid].add(deptid)
|
||||
|
||||
sq_revdeps_squash[tid].add(deptid)
|
||||
# Have to zero this to avoid circular dependencies
|
||||
sq_revdeps_squash[deptid] = set()
|
||||
|
||||
rqdata.init_progress_reporter.next_stage()
|
||||
|
||||
for task in sqdata.sq_harddeps:
|
||||
for dep in sqdata.sq_harddeps[task]:
|
||||
sq_revdeps_squash[dep].add(task)
|
||||
|
||||
rqdata.init_progress_reporter.next_stage()
|
||||
|
||||
#for tid in sq_revdeps_squash:
|
||||
@@ -3086,7 +2909,7 @@ def build_scenequeue_data(sqdata, rqdata, sqrq):
|
||||
if not sqdata.sq_revdeps[tid]:
|
||||
sqrq.sq_buildable.add(tid)
|
||||
|
||||
rqdata.init_progress_reporter.next_stage()
|
||||
rqdata.init_progress_reporter.finish()
|
||||
|
||||
sqdata.noexec = set()
|
||||
sqdata.stamppresent = set()
|
||||
@@ -3103,7 +2926,23 @@ def build_scenequeue_data(sqdata, rqdata, sqrq):
|
||||
sqdata.hashes[h] = tid
|
||||
else:
|
||||
sqrq.sq_deferred[tid] = sqdata.hashes[h]
|
||||
bb.debug(1, "Deferring %s after %s" % (tid, sqdata.hashes[h]))
|
||||
bb.note("Deferring %s after %s" % (tid, sqdata.hashes[h]))

    update_scenequeue_data(sqdata.sq_revdeps, sqdata, rqdata, rq, cooker, stampcache, sqrq, summary=True)

    # Compute a list of 'stale' sstate tasks where the current hash does not match the one
    # in any stamp files. Pass the list out to metadata as an event.
    found = {}
    for tid in rqdata.runq_setscene_tids:
        (mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
        stamps = bb.build.find_stale_stamps(taskname, taskfn)
        if stamps:
            if mc not in found:
                found[mc] = {}
            found[mc][tid] = stamps
    for mc in found:
        event = bb.event.StaleSetSceneTasks(found[mc])
        bb.event.fire(event, cooker.databuilder.mcdata[mc])

def check_setscene_stamps(tid, rqdata, rq, stampcache, noexecstamp=False):

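The block above (the tail of build_scenequeue_data() on the 4.2 side) collects setscene tasks whose stamp files no longer match the current hash, groups them per multiconfig and fires one StaleSetSceneTasks event per multiconfig. A rough standalone sketch of that group-then-notify shape; find_stale and notify below are hypothetical stand-ins for bb.build.find_stale_stamps() and bb.event.fire(), and the tid handling is deliberately simplified:

# Sketch only, not bitbake API.
from collections import defaultdict

def report_stale_setscene(setscene_tids, split_mc, find_stale, notify):
    found = defaultdict(dict)
    for tid in setscene_tids:
        mc, taskname = split_mc(tid)
        stamps = find_stale(taskname)
        if stamps:
            found[mc][tid] = stamps
    for mc, stale in found.items():
        notify(mc, stale)   # one notification per multiconfig, as above

# Example wiring with dummy callables:
report_stale_setscene(
    ["mc1:recipe:do_package_setscene"],
    split_mc=lambda tid: (tid.split(":", 1)[0], tid.rsplit(":", 1)[1]),
    find_stale=lambda task: ["stale.stamp"],
    notify=lambda mc, stale: print(mc, stale),
)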
@@ -3299,12 +3138,15 @@ class runQueuePipe():
|
||||
if pipeout:
|
||||
pipeout.close()
|
||||
bb.utils.nonblockingfd(self.input)
|
||||
self.queue = bytearray()
|
||||
self.queue = b""
|
||||
self.d = d
|
||||
self.rq = rq
|
||||
self.rqexec = rqexec
|
||||
self.fakerootlogs = fakerootlogs
|
||||
|
||||
def setrunqueueexec(self, rqexec):
|
||||
self.rqexec = rqexec
|
||||
|
||||
def read(self):
|
||||
for workers, name in [(self.rq.worker, "Worker"), (self.rq.fakeworker, "Fakeroot")]:
|
||||
for worker in workers.values():
|
||||
@@ -3315,7 +3157,7 @@ class runQueuePipe():
|
||||
|
||||
start = len(self.queue)
|
||||
try:
|
||||
self.queue.extend(self.input.read(102400) or b"")
|
||||
self.queue = self.queue + (self.input.read(102400) or b"")
|
||||
except (OSError, IOError) as e:
|
||||
if e.errno != errno.EAGAIN:
|
||||
raise
|
||||
|
||||
@@ -38,13 +38,9 @@ logger = logging.getLogger('BitBake')
class ProcessTimeout(SystemExit):
    pass

def currenttime():
    return datetime.datetime.now().strftime('%H:%M:%S.%f')

def serverlog(msg):
    print(str(os.getpid()) + " " + currenttime() + " " + msg)
    #Seems a flush here triggers filesystem sync like behaviour and long hangs in the server
    #sys.stdout.flush()
    print(str(os.getpid()) + " " + datetime.datetime.now().strftime('%H:%M:%S.%f') + " " + msg)
    sys.stdout.flush()

#
# When we have lockfile issues, try and find information about which process is
@@ -293,9 +289,7 @@ class ProcessServer():
|
||||
continue
|
||||
try:
|
||||
serverlog("Running command %s" % command)
|
||||
reply = self.cooker.command.runCommand(command, self)
|
||||
serverlog("Sending reply %s" % repr(reply))
|
||||
self.command_channel_reply.send(reply)
|
||||
self.command_channel_reply.send(self.cooker.command.runCommand(command, self))
|
||||
serverlog("Command Completed (socket: %s)" % os.path.exists(self.sockname))
|
||||
except Exception as e:
|
||||
stack = traceback.format_exc()
|
||||
@@ -381,7 +375,7 @@ class ProcessServer():
|
||||
lock = bb.utils.lockfile(lockfile, shared=False, retry=False, block=False)
|
||||
if not lock:
|
||||
newlockcontents = get_lock_contents(lockfile)
|
||||
if not newlockcontents[0].startswith([f"{os.getpid()}\n", f"{os.getpid()} "]):
|
||||
if not newlockcontents[0].startswith([os.getpid() + "\n", os.getpid() + " "]):
|
||||
# A new server was started, the lockfile contents changed, we can exit
|
||||
serverlog("Lockfile now contains different contents, exiting: " + str(newlockcontents))
|
||||
return
|
||||
@@ -402,22 +396,6 @@ class ProcessServer():
|
||||
serverlog("".join(msg))
|
||||
|
||||
def idle_thread(self):
|
||||
if self.cooker.configuration.profile:
|
||||
try:
|
||||
import cProfile as profile
|
||||
except:
|
||||
import profile
|
||||
prof = profile.Profile()
|
||||
|
||||
ret = profile.Profile.runcall(prof, self.idle_thread_internal)
|
||||
|
||||
prof.dump_stats("profile-mainloop.log")
|
||||
bb.utils.process_profilelog("profile-mainloop.log")
|
||||
serverlog("Raw profiling information saved to profile-mainloop.log and processed statistics to profile-mainloop.log.processed")
|
||||
else:
|
||||
self.idle_thread_internal()
|
||||
|
||||
def idle_thread_internal(self):
|
||||
def remove_idle_func(function):
|
||||
with bb.utils.lock_timeout(self._idlefuncsLock):
|
||||
del self._idlefuns[function]
|
||||
@@ -427,6 +405,12 @@ class ProcessServer():
|
||||
nextsleep = 0.1
|
||||
fds = []
|
||||
|
||||
try:
|
||||
self.cooker.process_inotify_updates()
|
||||
except Exception as exc:
|
||||
            serverlog("Exception %s in inotify updates broke the idle_thread, exiting" % traceback.format_exc())
|
||||
self.quit = True
|
||||
|
||||
with bb.utils.lock_timeout(self._idlefuncsLock):
|
||||
items = list(self._idlefuns.items())
|
||||
|
||||
@@ -516,18 +500,12 @@ class ServerCommunicator():
|
||||
self.recv = recv
|
||||
|
||||
def runCommand(self, command):
|
||||
try:
|
||||
self.connection.send(command)
|
||||
except BrokenPipeError as e:
|
||||
raise BrokenPipeError("bitbake-server might have died or been forcibly stopped, ie. OOM killed") from e
|
||||
self.connection.send(command)
|
||||
if not self.recv.poll(30):
|
||||
logger.info("No reply from server in 30s (for command %s at %s)" % (command[0], currenttime()))
|
||||
logger.info("No reply from server in 30s")
|
||||
if not self.recv.poll(30):
|
||||
raise ProcessTimeout("Timeout while waiting for a reply from the bitbake server (60s at %s)" % currenttime())
|
||||
try:
|
||||
ret, exc = self.recv.get()
|
||||
except EOFError as e:
|
||||
raise EOFError("bitbake-server might have died or been forcibly stopped, ie. OOM killed") from e
|
||||
raise ProcessTimeout("Timeout while waiting for a reply from the bitbake server (60s)")
|
||||
ret, exc = self.recv.get()
|
||||
# Should probably turn all exceptions in exc back into exceptions?
|
||||
# For now, at least handle BBHandledException
|
||||
if exc and ("BBHandledException" in exc or "SystemExit" in exc):
|
||||
@@ -642,7 +620,7 @@ class BitBakeServer(object):
|
||||
os.set_inheritable(self.bitbake_lock.fileno(), True)
|
||||
os.set_inheritable(self.readypipein, True)
|
||||
serverscript = os.path.realpath(os.path.dirname(__file__) + "/../../../bin/bitbake-server")
|
||||
os.execl(sys.executable, sys.executable, serverscript, "decafbad", str(self.bitbake_lock.fileno()), str(self.readypipein), self.logfile, self.bitbake_lock.name, self.sockname, str(self.server_timeout or 0), str(int(self.profile)), str(self.xmlrpcinterface[0]), str(self.xmlrpcinterface[1]))
|
||||
os.execl(sys.executable, "bitbake-server", serverscript, "decafbad", str(self.bitbake_lock.fileno()), str(self.readypipein), self.logfile, self.bitbake_lock.name, self.sockname, str(self.server_timeout or 0), str(int(self.profile)), str(self.xmlrpcinterface[0]), str(self.xmlrpcinterface[1]))
|
||||
|
||||
def execServer(lockfd, readypipeinfd, lockname, sockname, server_timeout, xmlrpcinterface, profile):
|
||||
|
||||
@@ -882,10 +860,11 @@ class ConnectionWriter(object):
|
||||
process.queue_signals = True
|
||||
self._send(obj)
|
||||
process.queue_signals = False
|
||||
|
||||
while len(process.signal_received) > 0:
|
||||
sig = process.signal_received.pop()
|
||||
process.handle_sig(sig, None)
|
||||
try:
|
||||
for sig in process.signal_received.pop():
|
||||
process.handle_sig(sig, None)
|
||||
except IndexError:
|
||||
pass
|
||||
else:
|
||||
self._send(obj)
|
||||
|
||||
|
||||
@@ -15,7 +15,6 @@ import difflib
|
||||
import simplediff
|
||||
import json
|
||||
import types
|
||||
from contextlib import contextmanager
|
||||
import bb.compress.zstd
|
||||
from bb.checksum import FileChecksumCache
|
||||
from bb import runqueue
|
||||
@@ -25,24 +24,6 @@ import hashserv.client
|
||||
logger = logging.getLogger('BitBake.SigGen')
|
||||
hashequiv_logger = logging.getLogger('BitBake.SigGen.HashEquiv')
|

#find_siginfo and find_siginfo_version are set by the metadata siggen
# The minimum version of the find_siginfo function we need
find_siginfo_minversion = 2

HASHSERV_ENVVARS = [
    "SSL_CERT_DIR",
    "SSL_CERT_FILE",
    "NO_PROXY",
    "HTTPS_PROXY",
    "HTTP_PROXY"
]

def check_siggen_version(siggen):
    if not hasattr(siggen, "find_siginfo_version"):
        bb.fatal("Siggen from metadata (OE-Core?) is too old, please update it (no version found)")
    if siggen.find_siginfo_version < siggen.find_siginfo_minversion:
        bb.fatal("Siggen from metadata (OE-Core?) is too old, please update it (%s vs %s)" % (siggen.find_siginfo_version, siggen.find_siginfo_minversion))

|
||||
def default(self, obj):
|
||||
if isinstance(obj, set) or isinstance(obj, frozenset):
|
||||
@@ -111,18 +92,9 @@ class SignatureGenerator(object):
|
||||
if flag:
|
||||
self.datacaches[mc].stamp_extrainfo[mcfn][t] = flag
|
||||
|
||||
def get_cached_unihash(self, tid):
|
||||
return None
|
||||
|
||||
def get_unihash(self, tid):
|
||||
unihash = self.get_cached_unihash(tid)
|
||||
if unihash:
|
||||
return unihash
|
||||
return self.taskhash[tid]
|
||||
|
||||
def get_unihashes(self, tids):
|
||||
return {tid: self.get_unihash(tid) for tid in tids}
|
||||
|
||||
def prep_taskhash(self, tid, deps, dataCaches):
|
||||
return
|
||||
|
||||
@@ -201,17 +173,15 @@ class SignatureGenerator(object):
|
||||
def save_unitaskhashes(self):
|
||||
return
|
||||
|
||||
def copy_unitaskhashes(self, targetdir):
|
||||
return
|
||||
|
||||
def set_setscene_tasks(self, setscene_tasks):
|
||||
return
|
||||
|
||||
def exit(self):
|
||||
return
|
||||
|
||||
def build_pnid(mc, pn, taskname):
|
||||
if mc:
|
||||
return "mc:" + mc + ":" + pn + ":" + taskname
|
||||
return pn + ":" + taskname
|
||||
|
||||
class SignatureGeneratorBasic(SignatureGenerator):
|
||||
"""
|
||||
"""
|
||||
@@ -286,6 +256,10 @@ class SignatureGeneratorBasic(SignatureGenerator):
|
||||
bb.warn("Error during finalise of %s" % mcfn)
|
||||
raise
|
||||
|
||||
#Slow but can be useful for debugging mismatched basehashes
|
||||
#for task in self.taskdeps[mcfn]:
|
||||
# self.dump_sigtask(mcfn, task, d.getVar("STAMP"), False)
|
||||
|
||||
basehashes = {}
|
||||
for task in taskdeps:
|
||||
basehashes[task] = self.basehash[mcfn + ":" + task]
|
||||
@@ -295,11 +269,6 @@ class SignatureGeneratorBasic(SignatureGenerator):
|
||||
d.setVar("__siggen_varvals", lookupcache)
|
||||
d.setVar("__siggen_taskdeps", taskdeps)
|
||||
|
||||
#Slow but can be useful for debugging mismatched basehashes
|
||||
#self.setup_datacache_from_datastore(mcfn, d)
|
||||
#for task in taskdeps:
|
||||
# self.dump_sigtask(mcfn, task, d.getVar("STAMP"), False)
|
||||
|
||||
def setup_datacache_from_datastore(self, mcfn, d):
|
||||
super().setup_datacache_from_datastore(mcfn, d)
|
||||
|
||||
@@ -340,19 +309,15 @@ class SignatureGeneratorBasic(SignatureGenerator):
|
||||
recipename = dataCaches[mc].pkg_fn[mcfn]
|
||||
|
||||
self.tidtopn[tid] = recipename
|
||||
# save hashfn for deps into siginfo?
|
||||
for dep in deps:
|
||||
(depmc, _, deptask, depmcfn) = bb.runqueue.split_tid_mcfn(dep)
|
||||
dep_pn = dataCaches[depmc].pkg_fn[depmcfn]
|
||||
|
||||
if not self.rundep_check(mcfn, recipename, task, dep, dep_pn, dataCaches):
|
||||
for dep in sorted(deps, key=clean_basepath):
|
||||
(depmc, _, _, depmcfn) = bb.runqueue.split_tid_mcfn(dep)
|
||||
depname = dataCaches[depmc].pkg_fn[depmcfn]
|
||||
if not self.rundep_check(mcfn, recipename, task, dep, depname, dataCaches):
|
||||
continue
|
||||
|
||||
if dep not in self.taskhash:
|
||||
bb.fatal("%s is not in taskhash, caller isn't calling in dependency order?" % dep)
|
||||
|
||||
dep_pnid = build_pnid(depmc, dep_pn, deptask)
|
||||
self.runtaskdeps[tid].append((dep_pnid, dep))
|
||||
self.runtaskdeps[tid].append(dep)
|
||||
|
||||
if task in dataCaches[mc].file_checksums[mcfn]:
|
||||
if self.checksum_cache:
|
||||
@@ -378,15 +343,15 @@ class SignatureGeneratorBasic(SignatureGenerator):
|
||||
self.taints[tid] = taint
|
||||
logger.warning("%s is tainted from a forced run" % tid)
|
||||
|
||||
return set(dep for _, dep in self.runtaskdeps[tid])
|
||||
return
|
||||
|
||||
def get_taskhash(self, tid, deps, dataCaches):
|
||||
|
||||
data = self.basehash[tid]
|
||||
for dep in sorted(self.runtaskdeps[tid]):
|
||||
data += self.get_unihash(dep[1])
|
||||
for dep in self.runtaskdeps[tid]:
|
||||
data += self.get_unihash(dep)
|
||||
|
||||
for (f, cs) in sorted(self.file_checksum_values[tid], key=clean_checksum_file_path):
|
||||
for (f, cs) in self.file_checksum_values[tid]:
|
||||
if cs:
|
||||
if "/./" in f:
|
||||
data += "./" + f.split("/./")[1]
|
||||
@@ -415,6 +380,9 @@ class SignatureGeneratorBasic(SignatureGenerator):
|
||||
def save_unitaskhashes(self):
|
||||
self.unihash_cache.save(self.unitaskhashes)
|
||||
|
||||
def copy_unitaskhashes(self, targetdir):
|
||||
self.unihash_cache.copyfile(targetdir)
|
||||
|
||||
def dump_sigtask(self, mcfn, task, stampbase, runtime):
|
||||
tid = mcfn + ":" + task
|
||||
mc = bb.runqueue.mc_from_tid(mcfn)
|
||||
@@ -441,21 +409,21 @@ class SignatureGeneratorBasic(SignatureGenerator):
|
||||
data['varvals'][task] = self.datacaches[mc].siggen_varvals[mcfn][task]
|
||||
for dep in self.datacaches[mc].siggen_taskdeps[mcfn][task]:
|
||||
if dep in self.basehash_ignore_vars:
|
||||
continue
|
||||
continue
|
||||
data['gendeps'][dep] = self.datacaches[mc].siggen_gendeps[mcfn][dep]
|
||||
data['varvals'][dep] = self.datacaches[mc].siggen_varvals[mcfn][dep]
|
||||
|
||||
if runtime and tid in self.taskhash:
|
||||
data['runtaskdeps'] = [dep[0] for dep in sorted(self.runtaskdeps[tid])]
|
||||
data['runtaskdeps'] = self.runtaskdeps[tid]
|
||||
data['file_checksum_values'] = []
|
||||
for f,cs in sorted(self.file_checksum_values[tid], key=clean_checksum_file_path):
|
||||
for f,cs in self.file_checksum_values[tid]:
|
||||
if "/./" in f:
|
||||
data['file_checksum_values'].append(("./" + f.split("/./")[1], cs))
|
||||
else:
|
||||
data['file_checksum_values'].append((os.path.basename(f), cs))
|
||||
data['runtaskhashes'] = {}
|
||||
for dep in self.runtaskdeps[tid]:
|
||||
data['runtaskhashes'][dep[0]] = self.get_unihash(dep[1])
|
||||
for dep in data['runtaskdeps']:
|
||||
data['runtaskhashes'][dep] = self.get_unihash(dep)
|
||||
data['taskhash'] = self.taskhash[tid]
|
||||
data['unihash'] = self.get_unihash(tid)
|
||||
|
||||
@@ -533,79 +501,32 @@ class SignatureGeneratorBasicHash(SignatureGeneratorBasic):
|
||||
class SignatureGeneratorUniHashMixIn(object):
|
||||
def __init__(self, data):
|
||||
self.extramethod = {}
|
||||
# NOTE: The cache only tracks hashes that exist. Hashes that don't
|
||||
# exist are always queried from the server since it is possible for
|
||||
# hashes to appear over time, but much less likely for them to
|
||||
# disappear
|
||||
self.unihash_exists_cache = set()
|
||||
self.username = None
|
||||
self.password = None
|
||||
self.env = {}
|
||||
|
||||
origenv = data.getVar("BB_ORIGENV")
|
||||
for e in HASHSERV_ENVVARS:
|
||||
value = data.getVar(e)
|
||||
if not value and origenv:
|
||||
value = origenv.getVar(e)
|
||||
if value:
|
||||
self.env[e] = value
|
||||
super().__init__(data)
|
||||
|
||||
def get_taskdata(self):
|
||||
return (self.server, self.method, self.extramethod, self.username, self.password, self.env) + super().get_taskdata()
|
||||
return (self.server, self.method, self.extramethod) + super().get_taskdata()
|
||||
|
||||
def set_taskdata(self, data):
|
||||
self.server, self.method, self.extramethod, self.username, self.password, self.env = data[:6]
|
||||
super().set_taskdata(data[6:])
|
||||
self.server, self.method, self.extramethod = data[:3]
|
||||
super().set_taskdata(data[3:])
|
||||
|
||||
def get_hashserv_creds(self):
|
||||
if self.username and self.password:
|
||||
return {
|
||||
"username": self.username,
|
||||
"password": self.password,
|
||||
}
|
||||
|
||||
return {}
|
||||
|
||||
@contextmanager
|
||||
def _client_env(self):
|
||||
orig_env = os.environ.copy()
|
||||
try:
|
||||
for k, v in self.env.items():
|
||||
os.environ[k] = v
|
||||
|
||||
yield
|
||||
finally:
|
||||
for k, v in self.env.items():
|
||||
if k in orig_env:
|
||||
os.environ[k] = orig_env[k]
|
||||
else:
|
||||
del os.environ[k]
|
||||
|
||||
@contextmanager
|
||||
def client(self):
|
||||
with self._client_env():
|
||||
if getattr(self, '_client', None) is None:
|
||||
self._client = hashserv.create_client(self.server, **self.get_hashserv_creds())
|
||||
yield self._client
|
||||
if getattr(self, '_client', None) is None:
|
||||
self._client = hashserv.create_client(self.server)
|
||||
return self._client
|
||||
|
||||
def reset(self, data):
|
||||
self.__close_clients()
|
||||
if getattr(self, '_client', None) is not None:
|
||||
self._client.close()
|
||||
self._client = None
|
||||
return super().reset(data)
|
||||
|
||||
def exit(self):
|
||||
self.__close_clients()
|
||||
if getattr(self, '_client', None) is not None:
|
||||
self._client.close()
|
||||
self._client = None
|
||||
return super().exit()
|
||||
|
||||
def __close_clients(self):
|
||||
with self._client_env():
|
||||
if getattr(self, '_client', None) is not None:
|
||||
self._client.close()
|
||||
self._client = None
|
||||
if getattr(self, '_client_pool', None) is not None:
|
||||
self._client_pool.close()
|
||||
self._client_pool = None
|
||||
|
||||
def get_stampfile_hash(self, tid):
|
||||
if tid in self.taskhash:
|
||||
# If a unique hash is reported, use it as the stampfile hash. This
|
||||
@@ -637,7 +558,7 @@ class SignatureGeneratorUniHashMixIn(object):
|
||||
return None
|
||||
return unihash
|
||||
|
||||
def get_cached_unihash(self, tid):
|
||||
def get_unihash(self, tid):
|
||||
taskhash = self.taskhash[tid]
|
||||
|
||||
# If its not a setscene task we can return
|
||||
@@ -652,96 +573,40 @@ class SignatureGeneratorUniHashMixIn(object):
|
||||
self.unihash[tid] = unihash
|
||||
return unihash
|
||||
|
||||
return None
|
||||
# In the absence of being able to discover a unique hash from the
|
||||
# server, make it be equivalent to the taskhash. The unique "hash" only
|
||||
# really needs to be a unique string (not even necessarily a hash), but
|
||||
# making it match the taskhash has a few advantages:
|
||||
#
|
||||
# 1) All of the sstate code that assumes hashes can be the same
|
||||
# 2) It provides maximal compatibility with builders that don't use
|
||||
# an equivalency server
|
||||
# 3) The value is easy for multiple independent builders to derive the
|
||||
# same unique hash from the same input. This means that if the
|
||||
# independent builders find the same taskhash, but it isn't reported
|
||||
# to the server, there is a better chance that they will agree on
|
||||
# the unique hash.
|
||||
unihash = taskhash
|
||||
|
||||
def _get_method(self, tid):
|
||||
method = self.method
|
||||
if tid in self.extramethod:
|
||||
method = method + self.extramethod[tid]
|
||||
|
||||
return method
|
||||
|
||||
def unihashes_exist(self, query):
|
||||
if len(query) == 0:
|
||||
return {}
|
||||
|
||||
query_keys = []
|
||||
result = {}
|
||||
for key, unihash in query.items():
|
||||
if unihash in self.unihash_exists_cache:
|
||||
result[key] = True
|
||||
else:
|
||||
query_keys.append(key)
|
||||
|
||||
if query_keys:
|
||||
with self.client() as client:
|
||||
query_result = client.unihash_exists_batch(query[k] for k in query_keys)
|
||||
|
||||
for idx, key in enumerate(query_keys):
|
||||
exists = query_result[idx]
|
||||
if exists:
|
||||
self.unihash_exists_cache.add(query[key])
|
||||
result[key] = exists
|
||||
|
||||
return result
|
||||
|
||||
def get_unihash(self, tid):
|
||||
return self.get_unihashes([tid])[tid]
|
||||
|
||||
def get_unihashes(self, tids):
|
||||
"""
|
||||
        For an iterable of tids, returns a dictionary that maps each tid to a
|
||||
unihash
|
||||
"""
|
||||
result = {}
|
||||
query_tids = []
|
||||
|
||||
for tid in tids:
|
||||
unihash = self.get_cached_unihash(tid)
|
||||
if unihash:
|
||||
result[tid] = unihash
|
||||
else:
|
||||
query_tids.append(tid)
|
||||
|
||||
if query_tids:
|
||||
unihashes = []
|
||||
try:
|
||||
with self.client() as client:
|
||||
unihashes = client.get_unihash_batch((self._get_method(tid), self.taskhash[tid]) for tid in query_tids)
|
||||
except (ConnectionError, FileNotFoundError) as e:
|
||||
bb.warn('Error contacting Hash Equivalence Server %s: %s' % (self.server, str(e)))
|
||||
|
||||
for idx, tid in enumerate(query_tids):
|
||||
# In the absence of being able to discover a unique hash from the
|
||||
# server, make it be equivalent to the taskhash. The unique "hash" only
|
||||
# really needs to be a unique string (not even necessarily a hash), but
|
||||
# making it match the taskhash has a few advantages:
|
||||
#
|
||||
# 1) All of the sstate code that assumes hashes can be the same
|
||||
# 2) It provides maximal compatibility with builders that don't use
|
||||
# an equivalency server
|
||||
# 3) The value is easy for multiple independent builders to derive the
|
||||
# same unique hash from the same input. This means that if the
|
||||
# independent builders find the same taskhash, but it isn't reported
|
||||
# to the server, there is a better chance that they will agree on
|
||||
# the unique hash.
|
||||
taskhash = self.taskhash[tid]
|
||||
|
||||
if unihashes and unihashes[idx]:
|
||||
unihash = unihashes[idx]
|
||||
try:
|
||||
method = self.method
|
||||
if tid in self.extramethod:
|
||||
method = method + self.extramethod[tid]
|
||||
data = self.client().get_unihash(method, self.taskhash[tid])
|
||||
if data:
|
||||
unihash = data
|
||||
# A unique hash equal to the taskhash is not very interesting,
|
||||
                    # so it is reported at debug level 2. If they differ, that
|
||||
# is much more interesting, so it is reported at debug level 1
|
||||
hashequiv_logger.bbdebug((1, 2)[unihash == taskhash], 'Found unihash %s in place of %s for %s from %s' % (unihash, taskhash, tid, self.server))
|
||||
else:
|
||||
hashequiv_logger.debug2('No reported unihash for %s:%s from %s' % (tid, taskhash, self.server))
|
||||
unihash = taskhash
|
||||
except ConnectionError as e:
|
||||
bb.warn('Error contacting Hash Equivalence Server %s: %s' % (self.server, str(e)))
|
||||
|
||||
self.set_unihash(tid, unihash)
|
||||
self.unihash[tid] = unihash
|
||||
result[tid] = unihash
|
||||
|
||||
return result
|
||||
self.set_unihash(tid, unihash)
|
||||
self.unihash[tid] = unihash
|
||||
return unihash
|
||||
|
||||
def report_unihash(self, path, task, d):
|
||||
import importlib
|
||||
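The hunk above replaces per-task server lookups with a cached, batched query: tids whose unique hash is already known locally are answered from the cache, the rest go to the hash equivalence server in one get_unihash_batch() call, and any tid the server cannot answer falls back to its own taskhash so builders without a server still derive the same value. A compact sketch of that batch-with-fallback flow; client.get_unihash_batch() mirrors the call shown in the diff, while the function, cache and dummy client here are illustrative scaffolding only:

# Sketch of the batched unihash lookup with taskhash fallback described above.
def resolve_unihashes(tids, taskhashes, cache, client, method="sstate_output_hash"):
    result = {}
    query_tids = []
    for tid in tids:
        if cache.get(tid):
            result[tid] = cache[tid]
        else:
            query_tids.append(tid)

    if query_tids:
        answers = client.get_unihash_batch((method, taskhashes[tid]) for tid in query_tids)
        for tid, answer in zip(query_tids, answers):
            # No server answer: fall back to the taskhash so independent builders
            # derive the same "unique" hash from the same input.
            result[tid] = answer or taskhashes[tid]
            cache[tid] = result[tid]
    return result

# Example with a dummy client that knows nothing:
class DummyClient:
    def get_unihash_batch(self, queries):
        return [None for _ in queries]

print(resolve_unihashes(["a:do_fetch"], {"a:do_fetch": "abc123"}, {}, DummyClient()))   # falls back to 'abc123'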
@@ -805,9 +670,7 @@ class SignatureGeneratorUniHashMixIn(object):
|
||||
if tid in self.extramethod:
|
||||
method = method + self.extramethod[tid]
|
||||
|
||||
with self.client() as client:
|
||||
data = client.report_unihash(taskhash, method, outhash, unihash, extra_data)
|
||||
|
||||
data = self.client().report_unihash(taskhash, method, outhash, unihash, extra_data)
|
||||
new_unihash = data['unihash']
|
||||
|
||||
if new_unihash != unihash:
|
||||
@@ -817,7 +680,7 @@ class SignatureGeneratorUniHashMixIn(object):
|
||||
d.setVar('BB_UNIHASH', new_unihash)
|
||||
else:
|
||||
hashequiv_logger.debug('Reported task %s as unihash %s to %s' % (taskhash, unihash, self.server))
|
||||
except (ConnectionError, FileNotFoundError) as e:
|
||||
except ConnectionError as e:
|
||||
bb.warn('Error contacting Hash Equivalence Server %s: %s' % (self.server, str(e)))
|
||||
finally:
|
||||
if sigfile:
|
||||
@@ -838,9 +701,7 @@ class SignatureGeneratorUniHashMixIn(object):
|
||||
if tid in self.extramethod:
|
||||
method = method + self.extramethod[tid]
|
||||
|
||||
with self.client() as client:
|
||||
data = client.report_unihash_equiv(taskhash, method, wanted_unihash, extra_data)
|
||||
|
||||
data = self.client().report_unihash_equiv(taskhash, method, wanted_unihash, extra_data)
|
||||
hashequiv_logger.verbose('Reported task %s as unihash %s to %s (%s)' % (tid, wanted_unihash, self.server, str(data)))
|
||||
|
||||
if data is None:
|
||||
@@ -859,7 +720,7 @@ class SignatureGeneratorUniHashMixIn(object):
|
||||
# TODO: What to do here?
|
||||
hashequiv_logger.verbose('Task %s unihash reported as unwanted hash %s' % (tid, finalunihash))
|
||||
|
||||
except (ConnectionError, FileNotFoundError) as e:
|
||||
except ConnectionError as e:
|
||||
bb.warn('Error contacting Hash Equivalence Server %s: %s' % (self.server, str(e)))
|
||||
|
||||
return False
|
||||
@@ -874,12 +735,6 @@ class SignatureGeneratorTestEquivHash(SignatureGeneratorUniHashMixIn, SignatureG
|
||||
self.server = data.getVar('BB_HASHSERVE')
|
||||
self.method = "sstate_output_hash"
|
||||
|
||||
def clean_checksum_file_path(file_checksum_tuple):
|
||||
f, cs = file_checksum_tuple
|
||||
if "/./" in f:
|
||||
return "./" + f.split("/./")[1]
|
||||
return os.path.basename(f)
|
||||
|
||||
def dump_this_task(outfile, d):
|
||||
import bb.parse
|
||||
mcfn = d.getVar("BB_FILENAME")
|
||||
@@ -938,6 +793,39 @@ def list_inline_diff(oldlist, newlist, colors=None):
|
||||
ret.append(item)
|
||||
return '[%s]' % (', '.join(ret))
|
||||
|
||||
def clean_basepath(basepath):
|
||||
basepath, dir, recipe_task = basepath.rsplit("/", 2)
|
||||
cleaned = dir + '/' + recipe_task
|
||||
|
||||
if basepath[0] == '/':
|
||||
return cleaned
|
||||
|
||||
if basepath.startswith("mc:") and basepath.count(':') >= 2:
|
||||
mc, mc_name, basepath = basepath.split(":", 2)
|
||||
mc_suffix = ':mc:' + mc_name
|
||||
else:
|
||||
mc_suffix = ''
|
||||
|
||||
# mc stuff now removed from basepath. Whatever was next, if present will be the first
|
||||
# suffix. ':/', recipe path start, marks the end of this. Something like
|
||||
# 'virtual:a[:b[:c]]:/path...' (b and c being optional)
|
||||
if basepath[0] != '/':
|
||||
cleaned += ':' + basepath.split(':/', 1)[0]
|
||||
|
||||
return cleaned + mc_suffix
|
||||
|
||||
def clean_basepaths(a):
|
||||
b = {}
|
||||
for x in a:
|
||||
b[clean_basepath(x)] = a[x]
|
||||
return b
|
||||
|
||||
def clean_basepaths_list(a):
|
||||
b = []
|
||||
for x in a:
|
||||
b.append(clean_basepath(x))
|
||||
return b
|
||||
|
||||
# Handled renamed fields
|
||||
def handle_renames(data):
|
||||
if 'basewhitelist' in data:
|
||||
@@ -968,18 +856,10 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
|
||||
formatparams.update(values)
|
||||
return formatstr.format(**formatparams)
|
||||
|
||||
try:
|
||||
with bb.compress.zstd.open(a, "rt", encoding="utf-8", num_threads=1) as f:
|
||||
a_data = json.load(f, object_hook=SetDecoder)
|
||||
except (TypeError, OSError) as err:
|
||||
bb.error("Failed to open sigdata file '%s': %s" % (a, str(err)))
|
||||
raise err
|
||||
try:
|
||||
with bb.compress.zstd.open(b, "rt", encoding="utf-8", num_threads=1) as f:
|
||||
b_data = json.load(f, object_hook=SetDecoder)
|
||||
except (TypeError, OSError) as err:
|
||||
bb.error("Failed to open sigdata file '%s': %s" % (b, str(err)))
|
||||
raise err
|
||||
with bb.compress.zstd.open(a, "rt", encoding="utf-8", num_threads=1) as f:
|
||||
a_data = json.load(f, object_hook=SetDecoder)
|
||||
with bb.compress.zstd.open(b, "rt", encoding="utf-8", num_threads=1) as f:
|
||||
b_data = json.load(f, object_hook=SetDecoder)
|
||||
|
||||
for data in [a_data, b_data]:
|
||||
handle_renames(data)
|
||||
@@ -1114,11 +994,11 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
|
||||
a = a_data['runtaskdeps'][idx]
|
||||
b = b_data['runtaskdeps'][idx]
|
||||
if a_data['runtaskhashes'][a] != b_data['runtaskhashes'][b] and not collapsed:
|
||||
changed.append("%s with hash %s\n changed to\n%s with hash %s" % (a, a_data['runtaskhashes'][a], b, b_data['runtaskhashes'][b]))
|
||||
changed.append("%s with hash %s\n changed to\n%s with hash %s" % (clean_basepath(a), a_data['runtaskhashes'][a], clean_basepath(b), b_data['runtaskhashes'][b]))
|
||||
|
||||
if changed:
|
||||
clean_a = a_data['runtaskdeps']
|
||||
clean_b = b_data['runtaskdeps']
|
||||
clean_a = clean_basepaths_list(a_data['runtaskdeps'])
|
||||
clean_b = clean_basepaths_list(b_data['runtaskdeps'])
|
||||
if clean_a != clean_b:
|
||||
output.append(color_format("{color_title}runtaskdeps changed:{color_default}\n%s") % list_inline_diff(clean_a, clean_b, colors))
|
||||
else:
|
||||
@@ -1127,8 +1007,8 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
|
||||
|
||||
|
||||
if 'runtaskhashes' in a_data and 'runtaskhashes' in b_data:
|
||||
a = a_data['runtaskhashes']
|
||||
b = b_data['runtaskhashes']
|
||||
a = clean_basepaths(a_data['runtaskhashes'])
|
||||
b = clean_basepaths(b_data['runtaskhashes'])
|
||||
changed, added, removed = dict_diff(a, b)
|
||||
if added:
|
||||
for dep in sorted(added):
|
||||
@@ -1217,12 +1097,8 @@ def calc_taskhash(sigdata):
|
||||
def dump_sigfile(a):
|
||||
output = []
|
||||
|
||||
try:
|
||||
with bb.compress.zstd.open(a, "rt", encoding="utf-8", num_threads=1) as f:
|
||||
a_data = json.load(f, object_hook=SetDecoder)
|
||||
except (TypeError, OSError) as err:
|
||||
bb.error("Failed to open sigdata file '%s': %s" % (a, str(err)))
|
||||
raise err
|
||||
with bb.compress.zstd.open(a, "rt", encoding="utf-8", num_threads=1) as f:
|
||||
a_data = json.load(f, object_hook=SetDecoder)
|
||||
|
||||
handle_renames(a_data)
|
||||
|
||||
|
||||
@@ -44,7 +44,6 @@ class VariableReferenceTest(ReferenceTest):
|
||||
def parseExpression(self, exp):
|
||||
parsedvar = self.d.expandWithRefs(exp, None)
|
||||
self.references = parsedvar.references
|
||||
self.execs = parsedvar.execs
|
||||
|
||||
def test_simple_reference(self):
|
||||
self.setEmptyVars(["FOO"])
|
||||
@@ -62,11 +61,6 @@ class VariableReferenceTest(ReferenceTest):
|
||||
self.parseExpression("${@d.getVar('BAR') + 'foo'}")
|
||||
self.assertReferences(set(["BAR"]))
|
||||
|
||||
def test_python_exec_reference(self):
|
||||
self.parseExpression("${@eval('3 * 5')}")
|
||||
self.assertReferences(set())
|
||||
self.assertExecs(set(["eval"]))
|
||||
|
||||
class ShellReferenceTest(ReferenceTest):
|
||||
|
||||
def parseExpression(self, exp):
|
||||
@@ -106,46 +100,6 @@ ${D}${libdir}/pkgconfig/*.pc
|
||||
self.parseExpression("foo=$(echo bar)")
|
||||
self.assertExecs(set(["echo"]))
|
||||
|
||||
def test_assign_subshell_expansion_quotes(self):
|
||||
self.parseExpression('foo="$(echo bar)"')
|
||||
self.assertExecs(set(["echo"]))
|
||||
|
||||
def test_assign_subshell_expansion_nested(self):
|
||||
self.parseExpression('foo="$(func1 "$(func2 bar$(func3))")"')
|
||||
self.assertExecs(set(["func1", "func2", "func3"]))
|
||||
|
||||
def test_assign_subshell_expansion_multiple(self):
|
||||
self.parseExpression('foo="$(func1 "$(func2)") $(func3)"')
|
||||
self.assertExecs(set(["func1", "func2", "func3"]))
|
||||
|
||||
def test_assign_subshell_expansion_escaped_quotes(self):
|
||||
self.parseExpression('foo="\\"fo\\"o$(func1)"')
|
||||
self.assertExecs(set(["func1"]))
|
||||
|
||||
def test_assign_subshell_expansion_empty(self):
|
||||
self.parseExpression('foo="bar$()foo"')
|
||||
self.assertExecs(set())
|
||||
|
||||
def test_assign_subshell_backticks(self):
|
||||
self.parseExpression("foo=`echo bar`")
|
||||
self.assertExecs(set(["echo"]))
|
||||
|
||||
def test_assign_subshell_backticks_quotes(self):
|
||||
self.parseExpression('foo="`echo bar`"')
|
||||
self.assertExecs(set(["echo"]))
|
||||
|
||||
def test_assign_subshell_backticks_multiple(self):
|
||||
self.parseExpression('foo="`func1 bar` `func2`"')
|
||||
self.assertExecs(set(["func1", "func2"]))
|
||||
|
||||
def test_assign_subshell_backticks_escaped_quotes(self):
|
||||
self.parseExpression('foo="\\"fo\\"o`func1`"')
|
||||
self.assertExecs(set(["func1"]))
|
||||
|
||||
def test_assign_subshell_backticks_empty(self):
|
||||
self.parseExpression('foo="bar``foo"')
|
||||
self.assertExecs(set())
|
||||
|
||||
def test_shell_unexpanded(self):
|
||||
self.setEmptyVars(["QT_BASE_NAME"])
|
||||
self.parseExpression('echo "${QT_BASE_NAME}"')
|
||||
@@ -476,37 +430,11 @@ esac
self.assertEqual(deps, set(["TESTVAR2"]))
self.assertEqual(self.d.getVar('ANOTHERVAR').split(), ['testval3', 'anothervalue'])

def test_contains_vardeps_override_operators(self):
# Check override operators handle dependencies correctly with the contains functionality
expr_plain = 'testval'
expr_prepend = '${@bb.utils.filter("TESTVAR1", "testval1", d)} '
expr_append = ' ${@bb.utils.filter("TESTVAR2", "testval2", d)}'
expr_remove = '${@bb.utils.contains("TESTVAR3", "no-testval", "testval", "", d)}'
# Check dependencies
self.d.setVar('ANOTHERVAR', expr_plain)
self.d.prependVar('ANOTHERVAR', expr_prepend)
self.d.appendVar('ANOTHERVAR', expr_append)
self.d.setVar('ANOTHERVAR:remove', expr_remove)
self.d.setVar('TESTVAR1', 'blah')
self.d.setVar('TESTVAR2', 'testval2')
self.d.setVar('TESTVAR3', 'no-testval')
deps, values = bb.data.build_dependencies("ANOTHERVAR", set(self.d.keys()), set(), set(), set(), set(), self.d, self.d)
self.assertEqual(sorted(values.splitlines()),
sorted([
expr_prepend + expr_plain + expr_append,
'_remove of ' + expr_remove,
'TESTVAR1{testval1} = Unset',
'TESTVAR2{testval2} = Set',
'TESTVAR3{no-testval} = Set',
]))
# Check final value
self.assertEqual(self.d.getVar('ANOTHERVAR').split(), ['testval2'])

#Currently no wildcard support
#def test_vardeps_wildcards(self):
# self.d.setVar("oe_libinstall", "echo test")
# self.d.setVar("FOO", "foo=oe_libinstall; eval $foo")
# self.d.setVarFlag("FOO", "vardeps", "oe_*")
# self.assertEqual(deps, set(["oe_libinstall"]))
# self.assertEquals(deps, set(["oe_libinstall"]))

@@ -77,18 +77,6 @@ class DataExpansions(unittest.TestCase):
val = self.d.expand("${@d.getVar('foo') + ' ${bar}'}")
self.assertEqual(str(val), "value_of_foo value_of_bar")

def test_python_snippet_function_reference(self):
self.d.setVar("TESTVAL", "testvalue")
self.d.setVar("testfunc", 'd.getVar("TESTVAL")')
context = bb.utils.get_context()
context["testfunc"] = lambda d: d.getVar("TESTVAL")
val = self.d.expand("${@testfunc(d)}")
self.assertEqual(str(val), "testvalue")

def test_python_snippet_builtin_metadata(self):
self.d.setVar("eval", "INVALID")
self.d.expand("${@eval('3')}")

def test_python_unexpanded(self):
self.d.setVar("bar", "${unsetvar}")
val = self.d.expand("${@d.getVar('foo') + ' ${bar}'}")
@@ -395,16 +383,6 @@ class TestOverrides(unittest.TestCase):
self.d.setVar("OVERRIDES", "foo:bar:some_val")
self.assertEqual(self.d.getVar("TEST"), "testvalue3")

# Test an override with _<numeric> in it based on a real world OE issue
def test_underscore_override_2(self):
self.d.setVar("TARGET_ARCH", "x86_64")
self.d.setVar("PN", "test-${TARGET_ARCH}")
self.d.setVar("VERSION", "1")
self.d.setVar("VERSION:pn-test-${TARGET_ARCH}", "2")
self.d.setVar("OVERRIDES", "pn-${PN}")
bb.data.expandKeys(self.d)
self.assertEqual(self.d.getVar("VERSION"), "2")

def test_remove_with_override(self):
self.d.setVar("TEST:bar", "testvalue2")
self.d.setVar("TEST:some_val", "testvalue3 testvalue5")
@@ -426,6 +404,16 @@ class TestOverrides(unittest.TestCase):
self.d.setVar("TEST:bar:append", "testvalue2")
self.assertEqual(self.d.getVar("TEST"), "testvalue2")

# Test an override with _<numeric> in it based on a real world OE issue
def test_underscore_override(self):
self.d.setVar("TARGET_ARCH", "x86_64")
self.d.setVar("PN", "test-${TARGET_ARCH}")
self.d.setVar("VERSION", "1")
self.d.setVar("VERSION:pn-test-${TARGET_ARCH}", "2")
self.d.setVar("OVERRIDES", "pn-${PN}")
bb.data.expandKeys(self.d)
self.assertEqual(self.d.getVar("VERSION"), "2")

def test_append_and_unused_override(self):
# Had a bug where an unused override append could return "" instead of None
self.d.setVar("BAR:append:unusedoverride", "testvalue2")

@@ -13,7 +13,6 @@ import pickle
import threading
import time
import unittest
import tempfile
from unittest.mock import Mock
from unittest.mock import call

@@ -469,8 +468,6 @@ class EventClassesTest(unittest.TestCase):

def setUp(self):
bb.event.worker_pid = EventClassesTest._worker_pid
self.d = bb.data.init()
bb.parse.siggen = bb.siggen.init(self.d)

def test_Event(self):
""" Test the Event base class """
@@ -953,24 +950,3 @@ class EventClassesTest(unittest.TestCase):
event = bb.event.FindSigInfoResult(result)
self.assertEqual(event.result, result)
self.assertEqual(event.pid, EventClassesTest._worker_pid)

def test_lineno_in_eventhandler(self):
# The error lineno is 5, not 4 since the first line is '\n'
error_line = """
# Comment line1
# Comment line2
python test_lineno_in_eventhandler() {
This is an error line
}
addhandler test_lineno_in_eventhandler
test_lineno_in_eventhandler[eventmask] = "bb.event.ConfigParsed"
"""

with self.assertLogs() as logs:
f = tempfile.NamedTemporaryFile(suffix = '.bb')
f.write(bytes(error_line, "utf-8"))
f.flush()
d = bb.parse.handle(f.name, self.d)['']

output = "".join(logs.output)
self.assertTrue(" line 5\n" in output)

File diff suppressed because it is too large
@@ -177,19 +177,7 @@ python () {

addtask_deltask = """
addtask do_patch after do_foo after do_unpack before do_configure before do_compile
addtask do_fetch2 do_patch2

addtask do_myplaintask
addtask do_myplaintask2
deltask do_myplaintask2
addtask do_mytask# comment
addtask do_mytask2 # comment2
addtask do_mytask3
deltask do_mytask3# comment
deltask do_mytask4 # comment2

# Ensure a missing task prefix on after works
addtask do_mytask5 after mytask
addtask do_fetch do_patch

MYVAR = "do_patch"
EMPTYVAR = ""
@@ -197,12 +185,15 @@ deltask do_fetch ${MYVAR} ${EMPTYVAR}
deltask ${EMPTYVAR}
"""
def test_parse_addtask_deltask(self):

import sys
f = self.parsehelper(self.addtask_deltask)
d = bb.parse.handle(f.name, self.d)['']

self.assertEqual(['do_fetch2', 'do_patch2', 'do_myplaintask', 'do_mytask', 'do_mytask2', 'do_mytask5'], d.getVar("__BBTASKS"))
self.assertEqual(['do_mytask'], d.getVarFlag("do_mytask5", "deps"))
stdout = sys.stdout.getvalue()
self.assertTrue("addtask contained multiple 'before' keywords" in stdout)
self.assertTrue("addtask contained multiple 'after' keywords" in stdout)
self.assertTrue('addtask ignored: " do_patch"' in stdout)
#self.assertTrue('dependent task do_foo for do_patch does not exist' in stdout)

broken_multiline_comment = """
|
||||
# First line of comment \\
|
||||
@@ -250,101 +241,3 @@ unset A[flag@.service]
|
||||
with self.assertRaises(bb.parse.ParseError):
|
||||
d = bb.parse.handle(f.name, self.d)['']
|
||||
|
||||
export_function_recipe = """
|
||||
inherit someclass
|
||||
"""
|
||||
|
||||
export_function_recipe2 = """
|
||||
inherit someclass
|
||||
|
||||
do_compile () {
|
||||
false
|
||||
}
|
||||
|
||||
python do_compilepython () {
|
||||
bb.note("Something else")
|
||||
}
|
||||
|
||||
"""
|
||||
export_function_class = """
|
||||
someclass_do_compile() {
|
||||
true
|
||||
}
|
||||
|
||||
python someclass_do_compilepython () {
|
||||
bb.note("Something")
|
||||
}
|
||||
|
||||
EXPORT_FUNCTIONS do_compile do_compilepython
|
||||
"""
|
||||
|
||||
export_function_class2 = """
|
||||
secondclass_do_compile() {
|
||||
true
|
||||
}
|
||||
|
||||
python secondclass_do_compilepython () {
|
||||
bb.note("Something")
|
||||
}
|
||||
|
||||
EXPORT_FUNCTIONS do_compile do_compilepython
|
||||
"""
|
||||
|
||||
def test_parse_export_functions(self):
|
||||
def check_function_flags(d):
|
||||
self.assertEqual(d.getVarFlag("do_compile", "func"), 1)
|
||||
self.assertEqual(d.getVarFlag("do_compilepython", "func"), 1)
|
||||
self.assertEqual(d.getVarFlag("do_compile", "python"), None)
|
||||
self.assertEqual(d.getVarFlag("do_compilepython", "python"), "1")
|
||||
|
||||
with tempfile.TemporaryDirectory() as tempdir:
|
||||
self.d.setVar("__bbclasstype", "recipe")
|
||||
recipename = tempdir + "/recipe.bb"
|
||||
os.makedirs(tempdir + "/classes")
|
||||
with open(tempdir + "/classes/someclass.bbclass", "w") as f:
|
||||
f.write(self.export_function_class)
|
||||
f.flush()
|
||||
with open(tempdir + "/classes/secondclass.bbclass", "w") as f:
|
||||
f.write(self.export_function_class2)
|
||||
f.flush()
|
||||
|
||||
with open(recipename, "w") as f:
|
||||
f.write(self.export_function_recipe)
|
||||
f.flush()
|
||||
os.chdir(tempdir)
|
||||
d = bb.parse.handle(recipename, bb.data.createCopy(self.d))['']
|
||||
self.assertIn("someclass_do_compile", d.getVar("do_compile"))
|
||||
self.assertIn("someclass_do_compilepython", d.getVar("do_compilepython"))
|
||||
check_function_flags(d)
|
||||
|
||||
recipename2 = tempdir + "/recipe2.bb"
|
||||
with open(recipename2, "w") as f:
|
||||
f.write(self.export_function_recipe2)
|
||||
f.flush()
|
||||
|
||||
d = bb.parse.handle(recipename2, bb.data.createCopy(self.d))['']
|
||||
self.assertNotIn("someclass_do_compile", d.getVar("do_compile"))
|
||||
self.assertNotIn("someclass_do_compilepython", d.getVar("do_compilepython"))
|
||||
self.assertIn("false", d.getVar("do_compile"))
|
||||
self.assertIn("else", d.getVar("do_compilepython"))
|
||||
check_function_flags(d)
|
||||
|
||||
with open(recipename, "a+") as f:
|
||||
f.write("\ninherit secondclass\n")
|
||||
f.flush()
|
||||
with open(recipename2, "a+") as f:
|
||||
f.write("\ninherit secondclass\n")
|
||||
f.flush()
|
||||
|
||||
d = bb.parse.handle(recipename, bb.data.createCopy(self.d))['']
|
||||
self.assertIn("secondclass_do_compile", d.getVar("do_compile"))
|
||||
self.assertIn("secondclass_do_compilepython", d.getVar("do_compilepython"))
|
||||
check_function_flags(d)
|
||||
|
||||
d = bb.parse.handle(recipename2, bb.data.createCopy(self.d))['']
|
||||
self.assertNotIn("someclass_do_compile", d.getVar("do_compile"))
|
||||
self.assertNotIn("someclass_do_compilepython", d.getVar("do_compilepython"))
|
||||
self.assertIn("false", d.getVar("do_compile"))
|
||||
self.assertIn("else", d.getVar("do_compilepython"))
|
||||
check_function_flags(d)
|
||||
|
||||
|
||||
@@ -17,12 +17,75 @@ import bb.siggen
|
||||
|
||||
class SiggenTest(unittest.TestCase):
|
||||
|
||||
def test_build_pnid(self):
|
||||
tests = {
|
||||
('', 'helloworld', 'do_sometask') : 'helloworld:do_sometask',
|
||||
('XX', 'helloworld', 'do_sometask') : 'mc:XX:helloworld:do_sometask',
|
||||
}
|
||||
def test_clean_basepath_simple_target_basepath(self):
|
||||
basepath = '/full/path/to/poky/meta/recipes-whatever/helloworld/helloworld_1.2.3.bb:do_sometask'
|
||||
expected_cleaned = 'helloworld/helloworld_1.2.3.bb:do_sometask'
|
||||
|
||||
for t in tests:
|
||||
self.assertEqual(bb.siggen.build_pnid(*t), tests[t])
|
||||
actual_cleaned = bb.siggen.clean_basepath(basepath)
|
||||
|
||||
self.assertEqual(actual_cleaned, expected_cleaned)
|
||||
|
||||
def test_clean_basepath_basic_virtual_basepath(self):
|
||||
basepath = 'virtual:something:/full/path/to/poky/meta/recipes-whatever/helloworld/helloworld_1.2.3.bb:do_sometask'
|
||||
expected_cleaned = 'helloworld/helloworld_1.2.3.bb:do_sometask:virtual:something'
|
||||
|
||||
actual_cleaned = bb.siggen.clean_basepath(basepath)
|
||||
|
||||
self.assertEqual(actual_cleaned, expected_cleaned)
|
||||
|
||||
def test_clean_basepath_mc_basepath(self):
|
||||
basepath = 'mc:somemachine:/full/path/to/poky/meta/recipes-whatever/helloworld/helloworld_1.2.3.bb:do_sometask'
|
||||
expected_cleaned = 'helloworld/helloworld_1.2.3.bb:do_sometask:mc:somemachine'
|
||||
|
||||
actual_cleaned = bb.siggen.clean_basepath(basepath)
|
||||
|
||||
self.assertEqual(actual_cleaned, expected_cleaned)
|
||||
|
||||
def test_clean_basepath_virtual_long_prefix_basepath(self):
|
||||
basepath = 'virtual:something:A:B:C:/full/path/to/poky/meta/recipes-whatever/helloworld/helloworld_1.2.3.bb:do_sometask'
|
||||
expected_cleaned = 'helloworld/helloworld_1.2.3.bb:do_sometask:virtual:something:A:B:C'
|
||||
|
||||
actual_cleaned = bb.siggen.clean_basepath(basepath)
|
||||
|
||||
self.assertEqual(actual_cleaned, expected_cleaned)
|
||||
|
||||
def test_clean_basepath_mc_virtual_basepath(self):
|
||||
basepath = 'mc:somemachine:virtual:something:/full/path/to/poky/meta/recipes-whatever/helloworld/helloworld_1.2.3.bb:do_sometask'
|
||||
expected_cleaned = 'helloworld/helloworld_1.2.3.bb:do_sometask:virtual:something:mc:somemachine'
|
||||
|
||||
actual_cleaned = bb.siggen.clean_basepath(basepath)
|
||||
|
||||
self.assertEqual(actual_cleaned, expected_cleaned)
|
||||
|
||||
def test_clean_basepath_mc_virtual_long_prefix_basepath(self):
|
||||
basepath = 'mc:X:virtual:something:C:B:A:/full/path/to/poky/meta/recipes-whatever/helloworld/helloworld_1.2.3.bb:do_sometask'
|
||||
expected_cleaned = 'helloworld/helloworld_1.2.3.bb:do_sometask:virtual:something:C:B:A:mc:X'
|
||||
|
||||
actual_cleaned = bb.siggen.clean_basepath(basepath)
|
||||
|
||||
self.assertEqual(actual_cleaned, expected_cleaned)
|
||||
|
||||
|
||||
# def test_clean_basepath_performance(self):
|
||||
# input_basepaths = [
|
||||
# 'mc:X:/full/path/to/poky/meta/recipes-whatever/helloworld/helloworld_1.2.3.bb:do_sometask',
|
||||
# 'mc:X:virtual:something:C:B:A:/full/path/to/poky/meta/recipes-whatever/helloworld/helloworld_1.2.3.bb:do_sometask',
|
||||
# 'virtual:something:C:B:A:/different/path/to/poky/meta/recipes-whatever/helloworld/helloworld_1.2.3.bb:do_sometask',
|
||||
# 'virtual:something:A:/full/path/to/poky/meta/recipes-whatever/helloworld/helloworld_1.2.3.bb:do_sometask',
|
||||
# '/this/is/most/common/input/recipes-whatever/helloworld/helloworld_1.2.3.bb:do_sometask',
|
||||
# '/and/should/be/tested/with/recipes-whatever/helloworld/helloworld_1.2.3.bb:do_sometask',
|
||||
# '/more/weight/recipes-whatever/helloworld/helloworld_1.2.3.bb:do_sometask',
|
||||
# ]
|
||||
|
||||
# time_start = time.time()
|
||||
|
||||
# i = 2000000
|
||||
# while i >= 0:
|
||||
# for basepath in input_basepaths:
|
||||
# bb.siggen.clean_basepath(basepath)
|
||||
# i -= 1
|
||||
|
||||
# elapsed = time.time() - time_start
|
||||
# print('{} ({}s)'.format(self.id(), round(elapsed, 3)))
|
||||
|
||||
# self.assertTrue(False)
|
||||
|
||||
@@ -325,11 +325,11 @@ class Tinfoil:
self.recipes_parsed = False
self.quiet = 0
self.oldhandlers = self.logger.handlers[:]
self.localhandlers = []
if setup_logging:
# This is the *client-side* logger, nothing to do with
# logging messages from the server
bb.msg.logger_create('BitBake', output)
self.localhandlers = []
for handler in self.logger.handlers:
if handler not in self.oldhandlers:
self.localhandlers.append(handler)
@@ -449,12 +449,6 @@ class Tinfoil:
self.run_actions(config_params)
self.recipes_parsed = True

def modified_files(self):
"""
Notify the server it needs to revalidate it's caches since the client has modified files
"""
self.run_command("revalidateCaches")

def run_command(self, command, *params, handle_events=True):
"""
Run a command on the server (as implemented in bb.command).

@@ -559,10 +559,7 @@ class ORMWrapper(object):
|
||||
# we might have an invalid link; no way to detect this. just set it to None
|
||||
filetarget_obj = None
|
||||
|
||||
try:
|
||||
parent_obj = Target_File.objects.get(target = target_obj, path = parent_path, inodetype = Target_File.ITYPE_DIRECTORY)
|
||||
except Target_File.DoesNotExist:
|
||||
parent_obj = None
|
||||
parent_obj = Target_File.objects.get(target = target_obj, path = parent_path, inodetype = Target_File.ITYPE_DIRECTORY)
|
||||
|
||||
Target_File.objects.create(
|
||||
target = target_obj,
|
||||
@@ -1749,6 +1746,7 @@ class BuildInfoHelper(object):
|
||||
|
||||
buildname = self.server.runCommand(['getVariable', 'BUILDNAME'])[0]
|
||||
machine = self.server.runCommand(['getVariable', 'MACHINE'])[0]
|
||||
image_name = self.server.runCommand(['getVariable', 'IMAGE_NAME'])[0]
|
||||
|
||||
# location of the manifest files for this build;
|
||||
# note that this file is only produced if an image is produced
|
||||
@@ -1769,18 +1767,6 @@ class BuildInfoHelper(object):
|
||||
# filter out anything which isn't an image target
|
||||
image_targets = [target for target in targets if target.is_image]
|
||||
|
||||
if len(image_targets) > 0:
|
||||
#if there are image targets retrieve image_name
|
||||
image_name = self.server.runCommand(['getVariable', 'IMAGE_NAME'])[0]
|
||||
if not image_name:
|
||||
#When build target is an image and image_name is not found as an environment variable
|
||||
logger.info("IMAGE_NAME not found, extracting from bitbake command")
|
||||
cmd = self.server.runCommand(['getVariable','BB_CMDLINE'])[0]
|
||||
#filter out tokens that are command line options
|
||||
cmd = [token for token in cmd if not token.startswith('-')]
|
||||
image_name = cmd[1].split(':', 1)[0] # remove everything after : in image name
|
||||
logger.info("IMAGE_NAME found as : %s " % image_name)
|
||||
|
||||
for image_target in image_targets:
|
||||
# this is set to True if we find at least one file relating to
|
||||
# this target; if this remains False after the scan, we copy the
|
||||
|
||||
@@ -1,86 +0,0 @@
|
||||
#!/usr/bin/env python3
|
||||
#
|
||||
# SPDX-License-Identifier: GPL-2.0-only
|
||||
#
|
||||
# This file re-uses code spread throughout other Bitbake source files.
|
||||
# As such, all other copyrights belong to their own right holders.
|
||||
#
|
||||
|
||||
|
||||
import os
|
||||
import sys
|
||||
import json
|
||||
import pickle
|
||||
import codecs
|
||||
|
||||
|
||||
class EventPlayer:
|
||||
"""Emulate a connection to a bitbake server."""
|
||||
|
||||
def __init__(self, eventfile, variables):
|
||||
self.eventfile = eventfile
|
||||
self.variables = variables
|
||||
self.eventmask = []
|
||||
|
||||
def waitEvent(self, _timeout):
|
||||
"""Read event from the file."""
|
||||
line = self.eventfile.readline().strip()
|
||||
if not line:
|
||||
return
|
||||
try:
|
||||
decodedline = json.loads(line)
|
||||
if 'allvariables' in decodedline:
|
||||
self.variables = decodedline['allvariables']
|
||||
return
|
||||
if not 'vars' in decodedline:
|
||||
raise ValueError
|
||||
event_str = decodedline['vars'].encode('utf-8')
|
||||
event = pickle.loads(codecs.decode(event_str, 'base64'))
|
||||
event_name = "%s.%s" % (event.__module__, event.__class__.__name__)
|
||||
if event_name not in self.eventmask:
|
||||
return
|
||||
return event
|
||||
except ValueError as err:
|
||||
print("Failed loading ", line)
|
||||
raise err
|
||||
|
||||
def runCommand(self, command_line):
|
||||
"""Emulate running a command on the server."""
|
||||
name = command_line[0]
|
||||
|
||||
if name == "getVariable":
|
||||
var_name = command_line[1]
|
||||
variable = self.variables.get(var_name)
|
||||
if variable:
|
||||
return variable['v'], None
|
||||
return None, "Missing variable %s" % var_name
|
||||
|
||||
elif name == "getAllKeysWithFlags":
|
||||
dump = {}
|
||||
flaglist = command_line[1]
|
||||
for key, val in self.variables.items():
|
||||
try:
|
||||
if not key.startswith("__"):
|
||||
dump[key] = {
|
||||
'v': val['v'],
|
||||
'history' : val['history'],
|
||||
}
|
||||
for flag in flaglist:
|
||||
dump[key][flag] = val[flag]
|
||||
except Exception as err:
|
||||
print(err)
|
||||
return (dump, None)
|
||||
|
||||
elif name == 'setEventMask':
|
||||
self.eventmask = command_line[-1]
|
||||
return True, None
|
||||
|
||||
else:
|
||||
raise Exception("Command %s not implemented" % command_line[0])
|
||||
|
||||
def getEventHandle(self):
|
||||
"""
|
||||
This method is called by toasterui.
|
||||
The return value is passed to self.runCommand but not used there.
|
||||
"""
|
||||
pass
|
||||
@@ -179,7 +179,7 @@ class TerminalFilter(object):
|
||||
new[3] = new[3] & ~termios.ECHO
|
||||
termios.tcsetattr(fd, termios.TCSADRAIN, new)
|
||||
curses.setupterm()
|
||||
if curses.tigetnum("colors") > 2 and os.environ.get('NO_COLOR', '') == '':
|
||||
if curses.tigetnum("colors") > 2:
|
||||
for h in handlers:
|
||||
try:
|
||||
h.formatter.enable_color()
|
||||
@@ -420,11 +420,6 @@ def main(server, eventHandler, params, tf = TerminalFilter):
|
||||
except bb.BBHandledException:
|
||||
drain_events_errorhandling(eventHandler)
|
||||
return 1
|
||||
except Exception as e:
|
||||
# bitbake-server comms failure
|
||||
early_logger = bb.msg.logger_create('bitbake', sys.stdout)
|
||||
early_logger.fatal("Attempting to set server environment: %s", e)
|
||||
return 1
|
||||
|
||||
if params.options.quiet == 0:
|
||||
console_loglevel = loglevel
|
||||
@@ -577,8 +572,6 @@ def main(server, eventHandler, params, tf = TerminalFilter):
|
||||
else:
|
||||
log_exec_tty = False
|
||||
|
||||
should_print_hyperlinks = sys.stdout.isatty() and os.environ.get('NO_COLOR', '') == ''
|
||||
|
||||
helper = uihelper.BBUIHelper()
|
||||
|
||||
# Look for the specially designated handlers which need to be passed to the
|
||||
@@ -592,12 +585,7 @@ def main(server, eventHandler, params, tf = TerminalFilter):
|
||||
return
|
||||
|
||||
llevel, debug_domains = bb.msg.constructLogOptions()
|
||||
try:
|
||||
server.runCommand(["setEventMask", server.getEventHandle(), llevel, debug_domains, _evt_list])
|
||||
except (BrokenPipeError, EOFError) as e:
|
||||
# bitbake-server comms failure
|
||||
logger.fatal("Attempting to set event mask: %s", e)
|
||||
return 1
|
||||
server.runCommand(["setEventMask", server.getEventHandle(), llevel, debug_domains, _evt_list])
|
||||
|
||||
# The logging_tree module is *extremely* helpful in debugging logging
|
||||
# domains. Uncomment here to dump the logging tree when bitbake starts
|
||||
@@ -606,11 +594,7 @@ def main(server, eventHandler, params, tf = TerminalFilter):
|
||||
|
||||
universe = False
|
||||
if not params.observe_only:
|
||||
try:
|
||||
params.updateFromServer(server)
|
||||
except Exception as e:
|
||||
logger.fatal("Fetching command line: %s", e)
|
||||
return 1
|
||||
params.updateFromServer(server)
|
||||
cmdline = params.parseActions()
|
||||
if not cmdline:
|
||||
print("Nothing to do. Use 'bitbake world' to build everything, or run 'bitbake --help' for usage information.")
|
||||
@@ -621,12 +605,7 @@ def main(server, eventHandler, params, tf = TerminalFilter):
|
||||
if cmdline['action'][0] == "buildTargets" and "universe" in cmdline['action'][1]:
|
||||
universe = True
|
||||
|
||||
try:
|
||||
ret, error = server.runCommand(cmdline['action'])
|
||||
except (BrokenPipeError, EOFError) as e:
|
||||
# bitbake-server comms failure
|
||||
logger.fatal("Command '{}' failed: %s".format(cmdline), e)
|
||||
return 1
|
||||
ret, error = server.runCommand(cmdline['action'])
|
||||
if error:
|
||||
logger.error("Command '%s' failed: %s" % (cmdline, error))
|
||||
return 1
|
||||
@@ -642,7 +621,7 @@ def main(server, eventHandler, params, tf = TerminalFilter):
|
||||
return_value = 0
|
||||
errors = 0
|
||||
warnings = 0
|
||||
taskfailures = {}
|
||||
taskfailures = []
|
||||
|
||||
printintervaldelta = 10 * 60 # 10 minutes
|
||||
printinterval = printintervaldelta
|
||||
@@ -728,8 +707,6 @@ def main(server, eventHandler, params, tf = TerminalFilter):
|
||||
if isinstance(event, bb.build.TaskFailed):
|
||||
return_value = 1
|
||||
print_event_log(event, includelogs, loglines, termfilter)
|
||||
k = "{}:{}".format(event._fn, event._task)
|
||||
taskfailures[k] = event.logfile
|
||||
if isinstance(event, bb.build.TaskBase):
|
||||
logger.info(event._message)
|
||||
continue
|
||||
@@ -825,7 +802,7 @@ def main(server, eventHandler, params, tf = TerminalFilter):
|
||||
|
||||
if isinstance(event, bb.runqueue.runQueueTaskFailed):
|
||||
return_value = 1
|
||||
taskfailures.setdefault(event.taskstring)
|
||||
taskfailures.append(event.taskstring)
|
||||
logger.error(str(event))
|
||||
continue
|
||||
|
||||
@@ -877,26 +854,15 @@ def main(server, eventHandler, params, tf = TerminalFilter):
|
||||
|
||||
logger.error("Unknown event: %s", event)
|
||||
|
||||
except (BrokenPipeError, EOFError) as e:
|
||||
# bitbake-server comms failure, don't attempt further comms and exit
|
||||
logger.fatal("Executing event: %s", e)
|
||||
return_value = 1
|
||||
errors = errors + 1
|
||||
main.shutdown = 3
|
||||
except EnvironmentError as ioerror:
|
||||
termfilter.clearFooter()
|
||||
# ignore interrupted io
|
||||
if ioerror.args[0] == 4:
|
||||
continue
|
||||
sys.stderr.write(str(ioerror))
|
||||
main.shutdown = 2
|
||||
if not params.observe_only:
|
||||
try:
|
||||
_, error = server.runCommand(["stateForceShutdown"])
|
||||
except (BrokenPipeError, EOFError) as e:
|
||||
# bitbake-server comms failure, don't attempt further comms and exit
|
||||
logger.fatal("Unable to force shutdown: %s", e)
|
||||
main.shutdown = 3
|
||||
_, error = server.runCommand(["stateForceShutdown"])
|
||||
main.shutdown = 2
|
||||
except KeyboardInterrupt:
|
||||
termfilter.clearFooter()
|
||||
if params.observe_only:
|
||||
@@ -905,13 +871,9 @@ def main(server, eventHandler, params, tf = TerminalFilter):
|
||||
|
||||
def state_force_shutdown():
|
||||
print("\nSecond Keyboard Interrupt, stopping...\n")
|
||||
try:
|
||||
_, error = server.runCommand(["stateForceShutdown"])
|
||||
if error:
|
||||
logger.error("Unable to cleanly stop: %s" % error)
|
||||
except (BrokenPipeError, EOFError) as e:
|
||||
# bitbake-server comms failure
|
||||
logger.fatal("Unable to cleanly stop: %s", e)
|
||||
_, error = server.runCommand(["stateForceShutdown"])
|
||||
if error:
|
||||
logger.error("Unable to cleanly stop: %s" % error)
|
||||
|
||||
if not params.observe_only and main.shutdown == 1:
|
||||
state_force_shutdown()
|
||||
@@ -924,9 +886,6 @@ def main(server, eventHandler, params, tf = TerminalFilter):
|
||||
_, error = server.runCommand(["stateShutdown"])
|
||||
if error:
|
||||
logger.error("Unable to cleanly shutdown: %s" % error)
|
||||
except (BrokenPipeError, EOFError) as e:
|
||||
# bitbake-server comms failure
|
||||
logger.fatal("Unable to cleanly shutdown: %s", e)
|
||||
except KeyboardInterrupt:
|
||||
state_force_shutdown()
|
||||
|
||||
@@ -934,33 +893,18 @@ def main(server, eventHandler, params, tf = TerminalFilter):
|
||||
except Exception as e:
|
||||
import traceback
|
||||
sys.stderr.write(traceback.format_exc())
|
||||
main.shutdown = 2
|
||||
if not params.observe_only:
|
||||
try:
|
||||
_, error = server.runCommand(["stateForceShutdown"])
|
||||
except (BrokenPipeError, EOFError) as e:
|
||||
# bitbake-server comms failure, don't attempt further comms and exit
|
||||
logger.fatal("Unable to force shutdown: %s", e)
|
||||
main.shudown = 3
|
||||
_, error = server.runCommand(["stateForceShutdown"])
|
||||
main.shutdown = 2
|
||||
return_value = 1
|
||||
try:
|
||||
termfilter.clearFooter()
|
||||
summary = ""
|
||||
def format_hyperlink(url, link_text):
|
||||
if should_print_hyperlinks:
|
||||
start = f'\033]8;;{url}\033\\'
|
||||
end = '\033]8;;\033\\'
|
||||
return f'{start}{link_text}{end}'
|
||||
return link_text
|
||||
|
||||
if taskfailures:
|
||||
summary += pluralise("\nSummary: %s task failed:",
|
||||
"\nSummary: %s tasks failed:", len(taskfailures))
|
||||
for (failure, log_file) in taskfailures.items():
|
||||
for failure in taskfailures:
|
||||
summary += "\n %s" % failure
|
||||
if log_file:
|
||||
hyperlink = format_hyperlink(f"file://{log_file}", log_file)
|
||||
summary += "\n log: {}".format(hyperlink)
|
||||
if warnings:
|
||||
summary += pluralise("\nSummary: There was %s WARNING message.",
|
||||
"\nSummary: There were %s WARNING messages.", warnings)
|
||||
|
||||
@@ -227,9 +227,6 @@ class NCursesUI:
|
||||
shutdown = 0
|
||||
|
||||
try:
|
||||
if not params.observe_only:
|
||||
params.updateToServer(server, os.environ.copy())
|
||||
|
||||
params.updateFromServer(server)
|
||||
cmdline = params.parseActions()
|
||||
if not cmdline:
|
||||
|
||||
File diff suppressed because it is too large
@@ -30,6 +30,7 @@ import bb.build
|
||||
import bb.command
|
||||
import bb.cooker
|
||||
import bb.event
|
||||
import bb.exceptions
|
||||
import bb.runqueue
|
||||
from bb.ui import uihelper
|
||||
|
||||
@@ -101,6 +102,10 @@ class TeamcityLogFormatter(logging.Formatter):
|
||||
details = ""
|
||||
if hasattr(record, 'bb_exc_formatted'):
|
||||
details = ''.join(record.bb_exc_formatted)
|
||||
elif hasattr(record, 'bb_exc_info'):
|
||||
etype, value, tb = record.bb_exc_info
|
||||
formatted = bb.exceptions.format_exception(etype, value, tb, limit=5)
|
||||
details = ''.join(formatted)
|
||||
|
||||
if record.levelno in [bb.msg.BBLogFormatter.ERROR, bb.msg.BBLogFormatter.CRITICAL]:
|
||||
# ERROR gets a separate errorDetails field
|
||||
|
||||
@@ -385,7 +385,7 @@ def main(server, eventHandler, params):
|
||||
main.shutdown = 1
|
||||
|
||||
logger.info("ToasterUI build done, brbe: %s", brbe)
|
||||
break
|
||||
continue
|
||||
|
||||
if isinstance(event, (bb.command.CommandCompleted,
|
||||
bb.command.CommandFailed,
|
||||
|
||||
@@ -50,7 +50,7 @@ def clean_context():
|
||||
|
||||
def get_context():
|
||||
return _context
|
||||
|
||||
|
||||
|
||||
def set_context(ctx):
|
||||
_context = ctx
|
||||
@@ -212,8 +212,8 @@ def explode_dep_versions2(s, *, sort=True):
|
||||
inversion = True
|
||||
# This list is based on behavior and supported comparisons from deb, opkg and rpm.
|
||||
#
|
||||
# Even though =<, <<, ==, !=, =>, and >> may not be supported,
|
||||
# we list each possibly valid item.
|
||||
# Even though =<, <<, ==, !=, =>, and >> may not be supported,
|
||||
# we list each possibly valid item.
|
||||
# The build system is responsible for validation of what it supports.
|
||||
if i.startswith(('<=', '=<', '<<', '==', '!=', '>=', '=>', '>>')):
|
||||
lastcmp = i[0:2]
|
||||
@@ -347,7 +347,7 @@ def _print_exception(t, value, tb, realfile, text, context):
|
||||
exception = traceback.format_exception_only(t, value)
|
||||
error.append('Error executing a python function in %s:\n' % realfile)
|
||||
|
||||
# Strip 'us' from the stack (better_exec call) unless that was where the
|
||||
# Strip 'us' from the stack (better_exec call) unless that was where the
|
||||
# error came from
|
||||
if tb.tb_next is not None:
|
||||
tb = tb.tb_next
|
||||
@@ -604,6 +604,7 @@ def preserved_envvars():
|
||||
v = [
|
||||
'BBPATH',
|
||||
'BB_PRESERVE_ENV',
|
||||
'BB_ENV_PASSTHROUGH',
|
||||
'BB_ENV_PASSTHROUGH_ADDITIONS',
|
||||
]
|
||||
return v + preserved_envvars_exported()
|
||||
@@ -745,9 +746,9 @@ def prunedir(topdir, ionice=False):
|
||||
# but thats possibly insane and suffixes is probably going to be small
|
||||
#
|
||||
def prune_suffix(var, suffixes, d):
|
||||
"""
|
||||
"""
|
||||
See if var ends with any of the suffixes listed and
|
||||
remove it if found
|
||||
remove it if found
|
||||
"""
|
||||
for suffix in suffixes:
|
||||
if suffix and var.endswith(suffix):
|
||||
@@ -758,8 +759,7 @@ def mkdirhier(directory):
|
||||
"""Create a directory like 'mkdir -p', but does not complain if
|
||||
directory already exists like os.makedirs
|
||||
"""
|
||||
if '${' in str(directory):
|
||||
bb.fatal("Directory name {} contains unexpanded bitbake variable. This may cause build failures and WORKDIR polution.".format(directory))
|
||||
|
||||
try:
|
||||
os.makedirs(directory)
|
||||
except OSError as e:
|
||||
@@ -1001,9 +1001,9 @@ def umask(new_mask):
|
||||
os.umask(current_mask)
|
||||
|
||||
def to_boolean(string, default=None):
|
||||
"""
|
||||
"""
|
||||
Check input string and return boolean value True/False/None
|
||||
depending upon the checks
|
||||
depending upon the checks
|
||||
"""
|
||||
if not string:
|
||||
return default
|
||||
@@ -1142,10 +1142,7 @@ def get_referenced_vars(start_expr, d):
|
||||
|
||||
|
||||
def cpu_count():
|
||||
try:
|
||||
return len(os.sched_getaffinity(0))
|
||||
except OSError:
|
||||
return multiprocessing.cpu_count()
|
||||
return multiprocessing.cpu_count()
|
||||
|
||||
def nonblockingfd(fd):
|
||||
fcntl.fcntl(fd, fcntl.F_SETFL, fcntl.fcntl(fd, fcntl.F_GETFL) | os.O_NONBLOCK)
|
||||
@@ -1831,29 +1828,6 @@ def mkstemp(suffix=None, prefix=None, dir=None, text=False):
|
||||
prefix = tempfile.gettempprefix() + entropy
|
||||
return tempfile.mkstemp(suffix=suffix, prefix=prefix, dir=dir, text=text)
|
||||
|
||||
def path_is_descendant(descendant, ancestor):
|
||||
"""
|
||||
Returns True if the path `descendant` is a descendant of `ancestor`
|
||||
(including being equivalent to `ancestor` itself). Otherwise returns False.
|
||||
Correctly accounts for symlinks, bind mounts, etc. by using
|
||||
os.path.samestat() to compare paths
|
||||
|
||||
May raise any exception that os.stat() raises
|
||||
"""
|
||||
|
||||
ancestor_stat = os.stat(ancestor)
|
||||
|
||||
# Recurse up each directory component of the descendant to see if it is
|
||||
# equivalent to the ancestor
|
||||
check_dir = os.path.abspath(descendant).rstrip("/")
|
||||
while check_dir:
|
||||
check_stat = os.stat(check_dir)
|
||||
if os.path.samestat(check_stat, ancestor_stat):
|
||||
return True
|
||||
check_dir = os.path.dirname(check_dir).rstrip("/")
|
||||
|
||||
return False
|
||||
|
||||
# If we don't have a timeout of some kind and a process/thread exits badly (for example
|
||||
# OOM killed) and held a lock, we'd just hang in the lock futex forever. It is better
|
||||
# we exit at some point than hang. 5 minutes with no progress means we're probably deadlocked.
|
||||
|
||||
@@ -1,126 +0,0 @@
|
||||
#! /usr/bin/env python3
|
||||
#
|
||||
# Copyright 2023 by Garmin Ltd. or its subsidiaries
|
||||
#
|
||||
# SPDX-License-Identifier: MIT
|
||||
|
||||
import sys
|
||||
import ctypes
|
||||
import os
|
||||
import errno
|
||||
|
||||
libc = ctypes.CDLL("libc.so.6", use_errno=True)
|
||||
fsencoding = sys.getfilesystemencoding()
|
||||
|
||||
|
||||
libc.listxattr.argtypes = [ctypes.c_char_p, ctypes.c_char_p, ctypes.c_size_t]
|
||||
libc.llistxattr.argtypes = [ctypes.c_char_p, ctypes.c_char_p, ctypes.c_size_t]
|
||||
|
||||
|
||||
def listxattr(path, follow=True):
|
||||
func = libc.listxattr if follow else libc.llistxattr
|
||||
|
||||
os_path = os.fsencode(path)
|
||||
|
||||
while True:
|
||||
length = func(os_path, None, 0)
|
||||
|
||||
if length < 0:
|
||||
err = ctypes.get_errno()
|
||||
raise OSError(err, os.strerror(err), str(path))
|
||||
|
||||
if length == 0:
|
||||
return []
|
||||
|
||||
arr = ctypes.create_string_buffer(length)
|
||||
|
||||
read_length = func(os_path, arr, length)
|
||||
if read_length != length:
|
||||
# Race!
|
||||
continue
|
||||
|
||||
return [a.decode(fsencoding) for a in arr.raw.split(b"\x00") if a]
|
||||
|
||||
|
||||
libc.getxattr.argtypes = [
|
||||
ctypes.c_char_p,
|
||||
ctypes.c_char_p,
|
||||
ctypes.c_char_p,
|
||||
ctypes.c_size_t,
|
||||
]
|
||||
libc.lgetxattr.argtypes = [
|
||||
ctypes.c_char_p,
|
||||
ctypes.c_char_p,
|
||||
ctypes.c_char_p,
|
||||
ctypes.c_size_t,
|
||||
]
|
||||
|
||||
|
||||
def getxattr(path, name, follow=True):
|
||||
func = libc.getxattr if follow else libc.lgetxattr
|
||||
|
||||
os_path = os.fsencode(path)
|
||||
os_name = os.fsencode(name)
|
||||
|
||||
while True:
|
||||
length = func(os_path, os_name, None, 0)
|
||||
|
||||
if length < 0:
|
||||
err = ctypes.get_errno()
|
||||
if err == errno.ENODATA:
|
||||
return None
|
||||
raise OSError(err, os.strerror(err), str(path))
|
||||
|
||||
if length == 0:
|
||||
return ""
|
||||
|
||||
arr = ctypes.create_string_buffer(length)
|
||||
|
||||
read_length = func(os_path, os_name, arr, length)
|
||||
if read_length != length:
|
||||
# Race!
|
||||
continue
|
||||
|
||||
return arr.raw
|
||||
|
||||
|
||||
def get_all_xattr(path, follow=True):
|
||||
attrs = {}
|
||||
|
||||
names = listxattr(path, follow)
|
||||
|
||||
for name in names:
|
||||
value = getxattr(path, name, follow)
|
||||
if value is None:
|
||||
# This can happen if a value is erased after listxattr is called,
|
||||
# so ignore it
|
||||
continue
|
||||
attrs[name] = value
|
||||
|
||||
return attrs
|
||||
|
||||
|
||||
def main():
|
||||
import argparse
|
||||
from pathlib import Path
|
||||
|
||||
parser = argparse.ArgumentParser()
|
||||
parser.add_argument("path", help="File Path", type=Path)
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
attrs = get_all_xattr(args.path)
|
||||
|
||||
for name, value in attrs.items():
|
||||
try:
|
||||
value = value.decode(fsencoding)
|
||||
except UnicodeDecodeError:
|
||||
pass
|
||||
|
||||
print(f"{name} = {value}")
|
||||
|
||||
return 0
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
sys.exit(main())
|
||||
@@ -51,13 +51,11 @@ class ActionPlugin(LayerPlugin):
|
||||
try:
|
||||
notadded, _ = bb.utils.edit_bblayers_conf(bblayers_conf, layerdirs, None)
|
||||
if not (args.force or notadded):
|
||||
self.tinfoil.modified_files()
|
||||
try:
|
||||
self.tinfoil.run_command('parseConfiguration')
|
||||
except (bb.tinfoil.TinfoilUIException, bb.BBHandledException):
|
||||
# Restore the back up copy of bblayers.conf
|
||||
shutil.copy2(backup, bblayers_conf)
|
||||
self.tinfoil.modified_files()
|
||||
bb.fatal("Parse failure with the specified layer added, exiting.")
|
||||
else:
|
||||
for item in notadded:
|
||||
@@ -83,9 +81,6 @@ class ActionPlugin(LayerPlugin):
|
||||
layerdir = os.path.abspath(item)
|
||||
layerdirs.append(layerdir)
|
||||
(_, notremoved) = bb.utils.edit_bblayers_conf(bblayers_conf, None, layerdirs)
|
||||
if args.force > 1:
|
||||
return 0
|
||||
self.tinfoil.modified_files()
|
||||
if notremoved:
|
||||
for item in notremoved:
|
||||
sys.stderr.write("No layers matching %s found in BBLAYERS\n" % item)
|
||||
@@ -245,9 +240,6 @@ build results (as the layer priority order has effectively changed).
|
||||
if not entry_found:
|
||||
logger.warning("File %s does not match the flattened layer's BBFILES setting, you may need to edit conf/layer.conf or move the file elsewhere" % f1full)
|
||||
|
||||
self.tinfoil.modified_files()
|
||||
|
||||
|
||||
def get_file_layer(self, filename):
|
||||
layerdir = self.get_file_layerdir(filename)
|
||||
if layerdir:
|
||||
|
||||
@@ -282,10 +282,7 @@ Lists recipes with the bbappends that apply to them as subitems.
|
||||
else:
|
||||
logger.plain('=== Appended recipes ===')
|
||||
|
||||
|
||||
cooker_data = self.tinfoil.cooker.recipecaches[args.mc]
|
||||
|
||||
pnlist = list(cooker_data.pkg_pn.keys())
|
||||
pnlist = list(self.tinfoil.cooker_data.pkg_pn.keys())
|
||||
pnlist.sort()
|
||||
appends = False
|
||||
for pn in pnlist:
|
||||
@@ -298,7 +295,7 @@ Lists recipes with the bbappends that apply to them as subitems.
|
||||
if not found:
|
||||
continue
|
||||
|
||||
if self.show_appends_for_pn(pn, cooker_data, args.mc):
|
||||
if self.show_appends_for_pn(pn):
|
||||
appends = True
|
||||
|
||||
if not args.pnspec and self.show_appends_for_skipped():
|
||||
@@ -307,10 +304,8 @@ Lists recipes with the bbappends that apply to them as subitems.
|
||||
if not appends:
|
||||
logger.plain('No append files found')
|
||||
|
||||
def show_appends_for_pn(self, pn, cooker_data, mc):
|
||||
filenames = cooker_data.pkg_pn[pn]
|
||||
if mc:
|
||||
pn = "mc:%s:%s" % (mc, pn)
|
||||
def show_appends_for_pn(self, pn):
|
||||
filenames = self.tinfoil.cooker_data.pkg_pn[pn]
|
||||
|
||||
best = self.tinfoil.find_best_provider(pn)
|
||||
best_filename = os.path.basename(best[3])
|
||||
@@ -535,7 +530,6 @@ NOTE: .bbappend files can impact the dependencies.
|
||||
|
||||
parser_show_appends = self.add_command(sp, 'show-appends', self.do_show_appends)
|
||||
parser_show_appends.add_argument('pnspec', nargs='*', help='optional recipe name specification (wildcards allowed, enclose in quotes to avoid shell expansion)')
|
||||
parser_show_appends.add_argument('--mc', help='use specified multiconfig', default='')
|
||||
|
||||
parser_show_cross_depends = self.add_command(sp, 'show-cross-depends', self.do_show_cross_depends)
|
||||
parser_show_cross_depends.add_argument('-f', '--filenames', help='show full file path', action='store_true')
|
||||
|
||||
@@ -1,49 +0,0 @@
|
||||
Behold, mortal, the origins of Beautiful Soup...
|
||||
================================================
|
||||
|
||||
Leonard Richardson is the primary maintainer.
|
||||
|
||||
Aaron DeVore and Isaac Muse have made significant contributions to the
|
||||
code base.
|
||||
|
||||
Mark Pilgrim provided the encoding detection code that forms the base
|
||||
of UnicodeDammit.
|
||||
|
||||
Thomas Kluyver and Ezio Melotti finished the work of getting Beautiful
|
||||
Soup 4 working under Python 3.
|
||||
|
||||
Simon Willison wrote soupselect, which was used to make Beautiful Soup
|
||||
support CSS selectors. Isaac Muse wrote SoupSieve, which made it
|
||||
possible to _remove_ the CSS selector code from Beautiful Soup.
|
||||
|
||||
Sam Ruby helped with a lot of edge cases.
|
||||
|
||||
Jonathan Ellis was awarded the prestigious Beau Potage D'Or for his
|
||||
work in solving the nestable tags conundrum.
|
||||
|
||||
An incomplete list of people have contributed patches to Beautiful
|
||||
Soup:
|
||||
|
||||
Istvan Albert, Andrew Lin, Anthony Baxter, Oliver Beattie, Andrew
|
||||
Boyko, Tony Chang, Francisco Canas, "Delong", Zephyr Fang, Fuzzy,
|
||||
Roman Gaufman, Yoni Gilad, Richie Hindle, Toshihiro Kamiya, Peteris
|
||||
Krumins, Kent Johnson, Marek Kapolka, Andreas Kostyrka, Roel Kramer,
|
||||
Ben Last, Robert Leftwich, Stefaan Lippens, "liquider", Staffan
|
||||
Malmgren, Ksenia Marasanova, JP Moins, Adam Monsen, John Nagle, "Jon",
|
||||
Ed Oskiewicz, Martijn Peters, Greg Phillips, Giles Radford, Stefano
|
||||
Revera, Arthur Rudolph, Marko Samastur, James Salter, Jouni Sepp<70>nen,
|
||||
Alexander Schmolck, Tim Shirley, Geoffrey Sneddon, Ville Skytt<74>,
|
||||
"Vikas", Jens Svalgaard, Andy Theyers, Eric Weiser, Glyn Webster, John
|
||||
Wiseman, Paul Wright, Danny Yoo
|
||||
|
||||
An incomplete list of people who made suggestions or found bugs or
|
||||
found ways to break Beautiful Soup:
|
||||
|
||||
Hanno B<>ck, Matteo Bertini, Chris Curvey, Simon Cusack, Bruce Eckel,
|
||||
Matt Ernst, Michael Foord, Tom Harris, Bill de hOra, Donald Howes,
|
||||
Matt Patterson, Scott Roberts, Steve Strassmann, Mike Williams,
|
||||
warchild at redho dot com, Sami Kuisma, Carlos Rocha, Bob Hutchison,
|
||||
Joren Mc, Michal Migurski, John Kleven, Tim Heaney, Tripp Lilley, Ed
|
||||
Summers, Dennis Sutch, Chris Smith, Aaron Swartz, Stuart
|
||||
Turner, Greg Edwards, Kevin J Kalupson, Nikos Kouremenos, Artur de
|
||||
Sousa Rocha, Yichun Wei, Per Vognsen
|
||||
43
bitbake/lib/bs4/AUTHORS.txt
Normal file
@@ -0,0 +1,43 @@
|
||||
Behold, mortal, the origins of Beautiful Soup...
|
||||
================================================
|
||||
|
||||
Leonard Richardson is the primary programmer.
|
||||
|
||||
Aaron DeVore is awesome.
|
||||
|
||||
Mark Pilgrim provided the encoding detection code that forms the base
|
||||
of UnicodeDammit.
|
||||
|
||||
Thomas Kluyver and Ezio Melotti finished the work of getting Beautiful
|
||||
Soup 4 working under Python 3.
|
||||
|
||||
Simon Willison wrote soupselect, which was used to make Beautiful Soup
|
||||
support CSS selectors.
|
||||
|
||||
Sam Ruby helped with a lot of edge cases.
|
||||
|
||||
Jonathan Ellis was awarded the prestigous Beau Potage D'Or for his
|
||||
work in solving the nestable tags conundrum.
|
||||
|
||||
An incomplete list of people have contributed patches to Beautiful
|
||||
Soup:
|
||||
|
||||
Istvan Albert, Andrew Lin, Anthony Baxter, Andrew Boyko, Tony Chang,
|
||||
Zephyr Fang, Fuzzy, Roman Gaufman, Yoni Gilad, Richie Hindle, Peteris
|
||||
Krumins, Kent Johnson, Ben Last, Robert Leftwich, Staffan Malmgren,
|
||||
Ksenia Marasanova, JP Moins, Adam Monsen, John Nagle, "Jon", Ed
|
||||
Oskiewicz, Greg Phillips, Giles Radford, Arthur Rudolph, Marko
|
||||
Samastur, Jouni Sepp<70>nen, Alexander Schmolck, Andy Theyers, Glyn
|
||||
Webster, Paul Wright, Danny Yoo
|
||||
|
||||
An incomplete list of people who made suggestions or found bugs or
|
||||
found ways to break Beautiful Soup:
|
||||
|
||||
Hanno B<>ck, Matteo Bertini, Chris Curvey, Simon Cusack, Bruce Eckel,
|
||||
Matt Ernst, Michael Foord, Tom Harris, Bill de hOra, Donald Howes,
|
||||
Matt Patterson, Scott Roberts, Steve Strassmann, Mike Williams,
|
||||
warchild at redho dot com, Sami Kuisma, Carlos Rocha, Bob Hutchison,
|
||||
Joren Mc, Michal Migurski, John Kleven, Tim Heaney, Tripp Lilley, Ed
|
||||
Summers, Dennis Sutch, Chris Smith, Aaron Sweep^W Swartz, Stuart
|
||||
Turner, Greg Edwards, Kevin J Kalupson, Nikos Kouremenos, Artur de
|
||||
Sousa Rocha, Yichun Wei, Per Vognsen
|
||||
File diff suppressed because it is too large
26
bitbake/lib/bs4/COPYING.txt
Normal file
@@ -0,0 +1,26 @@
|
||||
Beautiful Soup is made available under the MIT license:
|
||||
|
||||
Copyright (c) 2004-2012 Leonard Richardson
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining
|
||||
a copy of this software and associated documentation files (the
|
||||
"Software"), to deal in the Software without restriction, including
|
||||
without limitation the rights to use, copy, modify, merge, publish,
|
||||
distribute, sublicense, and/or sell copies of the Software, and to
|
||||
permit persons to whom the Software is furnished to do so, subject to
|
||||
the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be
|
||||
included in all copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
||||
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
|
||||
BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
|
||||
ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
|
||||
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||
SOFTWARE, DAMMIT.
|
||||
|
||||
Beautiful Soup incorporates code from the html5lib library, which is
|
||||
also made available under the MIT license.
|
||||
@@ -1,31 +0,0 @@
|
||||
Beautiful Soup is made available under the MIT license:
|
||||
|
||||
Copyright (c) Leonard Richardson
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining
|
||||
a copy of this software and associated documentation files (the
|
||||
"Software"), to deal in the Software without restriction, including
|
||||
without limitation the rights to use, copy, modify, merge, publish,
|
||||
distribute, sublicense, and/or sell copies of the Software, and to
|
||||
permit persons to whom the Software is furnished to do so, subject to
|
||||
the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be
|
||||
included in all copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
||||
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
|
||||
BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
|
||||
ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
|
||||
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||
SOFTWARE.
|
||||
|
||||
Beautiful Soup incorporates code from the html5lib library, which is
|
||||
also made available under the MIT license. Copyright (c) James Graham
|
||||
and other contributors
|
||||
|
||||
Beautiful Soup has an optional dependency on the soupsieve library,
|
||||
which is also made available under the MIT license. Copyright (c)
|
||||
Isaac Muse
|
||||
1066
bitbake/lib/bs4/NEWS.txt
Normal file
File diff suppressed because it is too large
@@ -1,99 +1,65 @@
|
||||
"""Beautiful Soup Elixir and Tonic - "The Screen-Scraper's Friend".
|
||||
|
||||
"""Beautiful Soup
|
||||
Elixir and Tonic
|
||||
"The Screen-Scraper's Friend"
|
||||
http://www.crummy.com/software/BeautifulSoup/
|
||||
|
||||
Beautiful Soup uses a pluggable XML or HTML parser to parse a
|
||||
(possibly invalid) document into a tree representation. Beautiful Soup
|
||||
provides methods and Pythonic idioms that make it easy to navigate,
|
||||
search, and modify the parse tree.
|
||||
provides provides methods and Pythonic idioms that make it easy to
|
||||
navigate, search, and modify the parse tree.
|
||||
|
||||
Beautiful Soup works with Python 3.6 and up. It works better if lxml
|
||||
Beautiful Soup works with Python 2.6 and up. It works better if lxml
|
||||
and/or html5lib is installed.
|
||||
|
||||
For more than you ever wanted to know about Beautiful Soup, see the
|
||||
documentation: http://www.crummy.com/software/BeautifulSoup/bs4/doc/
|
||||
documentation:
|
||||
http://www.crummy.com/software/BeautifulSoup/bs4/doc/
|
||||
"""
|
||||
|
||||
__author__ = "Leonard Richardson (leonardr@segfault.org)"
|
||||
__version__ = "4.12.3"
|
||||
__copyright__ = "Copyright (c) 2004-2024 Leonard Richardson"
|
||||
# Use of this source code is governed by the MIT license.
|
||||
__version__ = "4.4.1"
|
||||
__copyright__ = "Copyright (c) 2004-2015 Leonard Richardson"
|
||||
__license__ = "MIT"
|
||||
|
||||
__all__ = ['BeautifulSoup']
|
||||
|
||||
from collections import Counter
|
||||
import os
|
||||
import re
|
||||
import sys
|
||||
import traceback
|
||||
import warnings
|
||||
|
||||
# The very first thing we do is give a useful error if someone is
|
||||
# running this code under Python 2.
|
||||
if sys.version_info.major < 3:
|
||||
raise ImportError('You are trying to use a Python 3-specific version of Beautiful Soup under Python 2. This will not work. The final version of Beautiful Soup to support Python 2 was 4.9.3.')
|
||||
|
||||
from .builder import (
|
||||
builder_registry,
|
||||
ParserRejectedMarkup,
|
||||
XMLParsedAsHTMLWarning,
|
||||
HTMLParserTreeBuilder
|
||||
)
|
||||
from .builder import builder_registry, ParserRejectedMarkup
|
||||
from .dammit import UnicodeDammit
|
||||
from .element import (
|
||||
CData,
|
||||
Comment,
|
||||
CSS,
|
||||
DEFAULT_OUTPUT_ENCODING,
|
||||
Declaration,
|
||||
Doctype,
|
||||
NavigableString,
|
||||
PageElement,
|
||||
ProcessingInstruction,
|
||||
PYTHON_SPECIFIC_ENCODINGS,
|
||||
ResultSet,
|
||||
Script,
|
||||
Stylesheet,
|
||||
SoupStrainer,
|
||||
Tag,
|
||||
TemplateString,
|
||||
)
|
||||
|
||||
# Define some custom warnings.
|
||||
class GuessedAtParserWarning(UserWarning):
|
||||
"""The warning issued when BeautifulSoup has to guess what parser to
|
||||
use -- probably because no parser was specified in the constructor.
|
||||
"""
|
||||
# The very first thing we do is give a useful error if someone is
|
||||
# running this code under Python 3 without converting it.
|
||||
'You are trying to run the Python 2 version of Beautiful Soup under Python 3. This will not work.'!='You need to convert the code, either by installing it (`python setup.py install`) or by running 2to3 (`2to3 -w bs4`).'
|
||||
|
||||
class MarkupResemblesLocatorWarning(UserWarning):
|
||||
"""The warning issued when BeautifulSoup is given 'markup' that
|
||||
actually looks like a resource locator -- a URL or a path to a file
|
||||
on disk.
|
||||
"""
|
||||
|
||||
|
||||
class BeautifulSoup(Tag):
|
||||
"""A data structure representing a parsed HTML or XML document.
|
||||
"""
|
||||
This class defines the basic interface called by the tree builders.
|
||||
|
||||
Most of the methods you'll call on a BeautifulSoup object are inherited from
|
||||
PageElement or Tag.
|
||||
|
||||
Internally, this class defines the basic interface called by the
|
||||
tree builders when converting an HTML/XML document into a data
|
||||
structure. The interface abstracts away the differences between
|
||||
parsers. To write a new tree builder, you'll need to understand
|
||||
these methods as a whole.
|
||||
|
||||
These methods will be called by the BeautifulSoup constructor:
|
||||
* reset()
|
||||
* feed(markup)
|
||||
These methods will be called by the parser:
|
||||
reset()
|
||||
feed(markup)
|
||||
|
||||
The tree builder may call these methods from its feed() implementation:
|
||||
* handle_starttag(name, attrs) # See note about return value
|
||||
* handle_endtag(name)
|
||||
* handle_data(data) # Appends to the current data node
|
||||
* endData(containerClass) # Ends the current data node
|
||||
handle_starttag(name, attrs) # See note about return value
|
||||
handle_endtag(name)
|
||||
handle_data(data) # Appends to the current data node
|
||||
endData(containerClass=NavigableString) # Ends the current data node
|
||||
|
||||
No matter how complicated the underlying parser is, you should be
|
||||
able to build a tree using 'start tag' events, 'end tag' events,
|
@@ -103,77 +69,24 @@ class BeautifulSoup(Tag):
like HTML's <br> tag), call handle_starttag and then
handle_endtag.
"""

# Since BeautifulSoup subclasses Tag, it's possible to treat it as
# a Tag with a .name. This name makes it clear the BeautifulSoup
# object isn't a real markup tag.
ROOT_TAG_NAME = '[document]'

# If the end-user gives no indication which tree builder they
# want, look for one with these features.
DEFAULT_BUILDER_FEATURES = ['html', 'fast']

# A string containing all ASCII whitespace characters, used in
# endData() to detect data chunks that seem 'empty'.
ASCII_SPACES = '\x20\x0a\x09\x0c\x0d'

NO_PARSER_SPECIFIED_WARNING = "No parser was explicitly specified, so I'm using the best available %(markup_type)s parser for this system (\"%(parser)s\"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.\n\nThe code that caused this warning is on line %(line_number)s of the file %(filename)s. To get rid of this warning, pass the additional argument 'features=\"%(parser)s\"' to the BeautifulSoup constructor.\n"

NO_PARSER_SPECIFIED_WARNING = "No parser was explicitly specified, so I'm using the best available %(markup_type)s parser for this system (\"%(parser)s\"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.\n\nTo get rid of this warning, change this:\n\n BeautifulSoup([your markup])\n\nto this:\n\n BeautifulSoup([your markup], \"%(parser)s\")\n"

def __init__(self, markup="", features=None, builder=None,
parse_only=None, from_encoding=None, exclude_encodings=None,
element_classes=None, **kwargs):
"""Constructor.
**kwargs):
"""The Soup object is initialized as the 'root tag', and the
provided markup (which can be a string or a file-like object)
is fed into the underlying parser."""

:param markup: A string or a file-like object representing
markup to be parsed.

:param features: Desirable features of the parser to be
used. This may be the name of a specific parser ("lxml",
"lxml-xml", "html.parser", or "html5lib") or it may be the
type of markup to be used ("html", "html5", "xml"). It's
recommended that you name a specific parser, so that
Beautiful Soup gives you the same results across platforms
and virtual environments.

:param builder: A TreeBuilder subclass to instantiate (or
instance to use) instead of looking one up based on
`features`. You only need to use this if you've implemented a
custom TreeBuilder.

:param parse_only: A SoupStrainer. Only parts of the document
matching the SoupStrainer will be considered. This is useful
when parsing part of a document that would otherwise be too
large to fit into memory.

:param from_encoding: A string indicating the encoding of the
document to be parsed. Pass this in if Beautiful Soup is
guessing wrongly about the document's encoding.

:param exclude_encodings: A list of strings indicating
encodings known to be wrong. Pass this in if you don't know
the document's encoding but you know Beautiful Soup's guess is
wrong.

:param element_classes: A dictionary mapping BeautifulSoup
classes like Tag and NavigableString, to other classes you'd
like to be instantiated instead as the parse tree is
built. This is useful for subclassing Tag or NavigableString
to modify default behavior.

:param kwargs: For backwards compatibility purposes, the
constructor accepts certain keyword arguments used in
Beautiful Soup 3. None of these arguments do anything in
Beautiful Soup 4; they will result in a warning and then be
ignored.

Apart from this, any keyword arguments passed into the
BeautifulSoup constructor are propagated to the TreeBuilder
constructor. This makes it possible to configure a
TreeBuilder by passing in arguments, not just by saying which
one to use.
"""
if 'convertEntities' in kwargs:
del kwargs['convertEntities']
warnings.warn(
"BS4 does not respect the convertEntities argument to the "
"BeautifulSoup constructor. Entities are always converted "
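For reference, a minimal usage sketch (not part of this diff) of the features argument documented above; the sample markup and the choice of "html.parser" are illustrative assumptions:

from bs4 import BeautifulSoup

markup = "<p>Hello <b>world</b></p>"

# With no features argument, Beautiful Soup picks whatever parser it can
# find and may emit GuessedAtParserWarning, since another machine could
# pick a different parser and produce a slightly different tree.
guessed = BeautifulSoup(markup)

# Naming a parser ("html.parser", "lxml", "lxml-xml" or "html5lib") makes
# the result reproducible across platforms and virtual environments.
explicit = BeautifulSoup(markup, "html.parser")
print(explicit.b.get_text())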
@@ -212,10 +125,10 @@ class BeautifulSoup(Tag):
if old_name in kwargs:
warnings.warn(
'The "%s" argument to the BeautifulSoup constructor '
'has been renamed to "%s."' % (old_name, new_name),
DeprecationWarning, stacklevel=3
)
return kwargs.pop(old_name)
'has been renamed to "%s."' % (old_name, new_name))
value = kwargs[old_name]
del kwargs[old_name]
return value
return None

parse_only = parse_only or deprecated_argument(
@@ -224,23 +137,13 @@ class BeautifulSoup(Tag):
from_encoding = from_encoding or deprecated_argument(
"fromEncoding", "from_encoding")

if from_encoding and isinstance(markup, str):
warnings.warn("You provided Unicode markup but also provided a value for from_encoding. Your from_encoding will be ignored.")
from_encoding = None
if len(kwargs) > 0:
arg = list(kwargs.keys()).pop()
raise TypeError(
"__init__() got an unexpected keyword argument '%s'" % arg)

self.element_classes = element_classes or dict()

# We need this information to track whether or not the builder
# was specified well enough that we can omit the 'you need to
# specify a parser' warning.
original_builder = builder
original_features = features

if isinstance(builder, type):
# A builder class was passed in; it needs to be instantiated.
builder_class = builder
builder = None
elif builder is None:
if builder is None:
original_features = features
if isinstance(features, str):
features = [features]
if features is None or len(features) == 0:
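A short sketch of the from_encoding behaviour handled above (the byte string and encoding name are illustrative): the hint only applies to byte input, because str input is already decoded and the hint is ignored with the warning shown in this hunk.

from bs4 import BeautifulSoup

raw = "<p>café</p>".encode("iso-8859-1")

# For byte input, from_encoding overrides encoding detection.
soup = BeautifulSoup(raw, "html.parser", from_encoding="iso-8859-1")
print(soup.p.get_text())

# For str input the document is already Unicode, so the same argument
# is ignored and triggers the "will be ignored" warning.
BeautifulSoup("<p>café</p>", "html.parser", from_encoding="iso-8859-1")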
@@ -251,227 +154,85 @@ class BeautifulSoup(Tag):
|
||||
"Couldn't find a tree builder with the features you "
|
||||
"requested: %s. Do you need to install a parser library?"
|
||||
% ",".join(features))
|
||||
|
||||
# At this point either we have a TreeBuilder instance in
|
||||
# builder, or we have a builder_class that we can instantiate
|
||||
# with the remaining **kwargs.
|
||||
if builder is None:
|
||||
builder = builder_class(**kwargs)
|
||||
if not original_builder and not (
|
||||
original_features == builder.NAME or
|
||||
original_features in builder.ALTERNATE_NAMES
|
||||
) and markup:
|
||||
# The user did not tell us which TreeBuilder to use,
|
||||
# and we had to guess. Issue a warning.
|
||||
builder = builder_class()
|
||||
if not (original_features == builder.NAME or
|
||||
original_features in builder.ALTERNATE_NAMES):
|
||||
if builder.is_xml:
|
||||
markup_type = "XML"
|
||||
else:
|
||||
markup_type = "HTML"
|
||||
warnings.warn(self.NO_PARSER_SPECIFIED_WARNING % dict(
|
||||
parser=builder.NAME,
|
||||
markup_type=markup_type))
|
||||
|
||||
# This code adapted from warnings.py so that we get the same line
|
||||
# of code as our warnings.warn() call gets, even if the answer is wrong
|
||||
# (as it may be in a multithreading situation).
|
||||
caller = None
|
||||
try:
|
||||
caller = sys._getframe(1)
|
||||
except ValueError:
|
||||
pass
|
||||
if caller:
|
||||
globals = caller.f_globals
|
||||
line_number = caller.f_lineno
|
||||
else:
|
||||
globals = sys.__dict__
|
||||
line_number= 1
|
||||
filename = globals.get('__file__')
|
||||
if filename:
|
||||
fnl = filename.lower()
|
||||
if fnl.endswith((".pyc", ".pyo")):
|
||||
filename = filename[:-1]
|
||||
if filename:
|
||||
# If there is no filename at all, the user is most likely in a REPL,
|
||||
# and the warning is not necessary.
|
||||
values = dict(
|
||||
filename=filename,
|
||||
line_number=line_number,
|
||||
parser=builder.NAME,
|
||||
markup_type=markup_type
|
||||
)
|
||||
warnings.warn(
|
||||
self.NO_PARSER_SPECIFIED_WARNING % values,
|
||||
GuessedAtParserWarning, stacklevel=2
|
||||
)
|
||||
else:
|
||||
if kwargs:
|
||||
warnings.warn("Keyword arguments to the BeautifulSoup constructor will be ignored. These would normally be passed into the TreeBuilder constructor, but a TreeBuilder instance was passed in as `builder`.")
|
||||
|
||||
self.builder = builder
|
||||
self.is_xml = builder.is_xml
|
||||
self.known_xml = self.is_xml
|
||||
self._namespaces = dict()
|
||||
self.builder.soup = self
|
||||
|
||||
self.parse_only = parse_only
|
||||
|
||||
if hasattr(markup, 'read'): # It's a file-type object.
|
||||
markup = markup.read()
|
||||
elif len(markup) <= 256 and (
|
||||
(isinstance(markup, bytes) and not b'<' in markup)
|
||||
or (isinstance(markup, str) and not '<' in markup)
|
||||
):
|
||||
# Issue warnings for a couple beginner problems
|
||||
elif len(markup) <= 256:
|
||||
# Print out warnings for a couple beginner problems
|
||||
# involving passing non-markup to Beautiful Soup.
|
||||
# Beautiful Soup will still parse the input as markup,
|
||||
# since that is sometimes the intended behavior.
|
||||
if not self._markup_is_url(markup):
|
||||
self._markup_resembles_filename(markup)
|
||||
# just in case that's what the user really wants.
|
||||
if (isinstance(markup, str)
|
||||
and not os.path.supports_unicode_filenames):
|
||||
possible_filename = markup.encode("utf8")
|
||||
else:
|
||||
possible_filename = markup
|
||||
is_file = False
|
||||
try:
|
||||
is_file = os.path.exists(possible_filename)
|
||||
except Exception as e:
|
||||
# This is almost certainly a problem involving
|
||||
# characters not valid in filenames on this
|
||||
# system. Just let it go.
|
||||
pass
|
||||
if is_file:
|
||||
if isinstance(markup, str):
|
||||
markup = markup.encode("utf8")
|
||||
warnings.warn(
|
||||
'"%s" looks like a filename, not markup. You should probably open this file and pass the filehandle into Beautiful Soup.' % markup)
|
||||
if markup[:5] == "http:" or markup[:6] == "https:":
|
||||
# TODO: This is ugly but I couldn't get it to work in
|
||||
# Python 3 otherwise.
|
||||
if ((isinstance(markup, bytes) and not b' ' in markup)
|
||||
or (isinstance(markup, str) and not ' ' in markup)):
|
||||
if isinstance(markup, str):
|
||||
markup = markup.encode("utf8")
|
||||
warnings.warn(
|
||||
'"%s" looks like a URL. Beautiful Soup is not an HTTP client. You should probably use an HTTP client to get the document behind the URL, and feed that document to Beautiful Soup.' % markup)
|
||||
|
||||
rejections = []
|
||||
success = False
|
||||
for (self.markup, self.original_encoding, self.declared_html_encoding,
|
||||
self.contains_replacement_characters) in (
|
||||
self.builder.prepare_markup(
|
||||
markup, from_encoding, exclude_encodings=exclude_encodings)):
|
||||
self.reset()
|
||||
self.builder.initialize_soup(self)
|
||||
try:
|
||||
self._feed()
|
||||
success = True
|
||||
break
|
||||
except ParserRejectedMarkup as e:
|
||||
rejections.append(e)
|
||||
except ParserRejectedMarkup:
|
||||
pass
|
||||
|
||||
if not success:
|
||||
other_exceptions = [str(e) for e in rejections]
|
||||
raise ParserRejectedMarkup(
|
||||
"The markup you provided was rejected by the parser. Trying a different parser or a different encoding may help.\n\nOriginal exception(s) from parser:\n " + "\n ".join(other_exceptions)
|
||||
)
|
||||
|
||||
# Clear out the markup and remove the builder's circular
|
||||
# reference to this object.
|
||||
self.markup = None
|
||||
self.builder.soup = None
|
||||
|
||||
def _clone(self):
|
||||
"""Create a new BeautifulSoup object with the same TreeBuilder,
|
||||
but not associated with any markup.
|
||||
def __copy__(self):
|
||||
return type(self)(self.encode(), builder=self.builder)
|
||||
|
||||
This is the first step of the deepcopy process.
|
||||
"""
|
||||
clone = type(self)("", None, self.builder)
|
||||
|
||||
# Keep track of the encoding of the original document,
|
||||
# since we won't be parsing it again.
|
||||
clone.original_encoding = self.original_encoding
|
||||
return clone
|
||||
|
||||
def __getstate__(self):
|
||||
# Frequently a tree builder can't be pickled.
|
||||
d = dict(self.__dict__)
|
||||
if 'builder' in d and d['builder'] is not None and not self.builder.picklable:
|
||||
d['builder'] = type(self.builder)
|
||||
# Store the contents as a Unicode string.
|
||||
d['contents'] = []
|
||||
d['markup'] = self.decode()
|
||||
|
||||
# If _most_recent_element is present, it's a Tag object left
|
||||
# over from initial parse. It might not be picklable and we
|
||||
# don't need it.
|
||||
if '_most_recent_element' in d:
|
||||
del d['_most_recent_element']
|
||||
if 'builder' in d and not self.builder.picklable:
|
||||
del d['builder']
|
||||
return d
|
||||
|
||||
def __setstate__(self, state):
|
||||
# If necessary, restore the TreeBuilder by looking it up.
|
||||
self.__dict__ = state
|
||||
if isinstance(self.builder, type):
|
||||
self.builder = self.builder()
|
||||
elif not self.builder:
|
||||
# We don't know which builder was used to build this
|
||||
# parse tree, so use a default we know is always available.
|
||||
self.builder = HTMLParserTreeBuilder()
|
||||
self.builder.soup = self
|
||||
self.reset()
|
||||
self._feed()
|
||||
return state
|
||||
|
||||
|
||||
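The pickling hooks above store the rendered markup rather than the live parse state; a rough sketch of what that enables (the parser name and markup are assumptions, and details differ slightly between the two bs4 versions shown in this diff):

import copy
import pickle
from bs4 import BeautifulSoup

soup = BeautifulSoup("<p>data</p>", "html.parser")

# Copying re-serializes and re-parses, giving an independent tree.
duplicate = copy.copy(soup)
print(duplicate.p.get_text())

# Pickling stores self.decode() as 'markup' (dropping an unpicklable
# builder); __setstate__ rebuilds the tree by feeding that markup again.
restored = pickle.loads(pickle.dumps(soup))
print(restored.p.get_text())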
@classmethod
|
||||
def _decode_markup(cls, markup):
|
||||
"""Ensure `markup` is bytes so it's safe to send into warnings.warn.
|
||||
|
||||
TODO: warnings.warn had this problem back in 2010 but it might not
|
||||
anymore.
|
||||
"""
|
||||
if isinstance(markup, bytes):
|
||||
decoded = markup.decode('utf-8', 'replace')
|
||||
else:
|
||||
decoded = markup
|
||||
return decoded
|
||||
|
||||
@classmethod
|
||||
def _markup_is_url(cls, markup):
|
||||
"""Error-handling method to raise a warning if incoming markup looks
|
||||
like a URL.
|
||||
|
||||
:param markup: A string.
|
||||
:return: Whether or not the markup resembles a URL
|
||||
closely enough to justify a warning.
|
||||
"""
|
||||
if isinstance(markup, bytes):
|
||||
space = b' '
|
||||
cant_start_with = (b"http:", b"https:")
|
||||
elif isinstance(markup, str):
|
||||
space = ' '
|
||||
cant_start_with = ("http:", "https:")
|
||||
else:
|
||||
return False
|
||||
|
||||
if any(markup.startswith(prefix) for prefix in cant_start_with):
|
||||
if not space in markup:
|
||||
warnings.warn(
|
||||
'The input looks more like a URL than markup. You may want to use'
|
||||
' an HTTP client like requests to get the document behind'
|
||||
' the URL, and feed that document to Beautiful Soup.',
|
||||
MarkupResemblesLocatorWarning,
|
||||
stacklevel=3
|
||||
)
|
||||
return True
|
||||
return False
|
||||
|
||||
@classmethod
|
||||
def _markup_resembles_filename(cls, markup):
|
||||
"""Error-handling method to raise a warning if incoming markup
|
||||
resembles a filename.
|
||||
|
||||
:param markup: A bytestring or string.
|
||||
:return: Whether or not the markup resembles a filename
|
||||
closely enough to justify a warning.
|
||||
"""
|
||||
path_characters = '/\\'
|
||||
extensions = ['.html', '.htm', '.xml', '.xhtml', '.txt']
|
||||
if isinstance(markup, bytes):
|
||||
path_characters = path_characters.encode("utf8")
|
||||
extensions = [x.encode('utf8') for x in extensions]
|
||||
filelike = False
|
||||
if any(x in markup for x in path_characters):
|
||||
filelike = True
|
||||
else:
|
||||
lower = markup.lower()
|
||||
if any(lower.endswith(ext) for ext in extensions):
|
||||
filelike = True
|
||||
if filelike:
|
||||
warnings.warn(
|
||||
'The input looks more like a filename than markup. You may'
|
||||
' want to open this file and pass the filehandle into'
|
||||
' Beautiful Soup.',
|
||||
MarkupResemblesLocatorWarning, stacklevel=3
|
||||
)
|
||||
return True
|
||||
return False
|
||||
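A hedged example of the two beginner checks above (the temporary file exists only for the demonstration; behaviour follows the newer _markup_is_url/_markup_resembles_filename code in this hunk):

import tempfile
import warnings
from bs4 import BeautifulSoup

with tempfile.NamedTemporaryFile("w", suffix=".html", delete=False) as f:
    f.write("<p>stored on disk</p>")
    path = f.name

# Passing the *path* as markup parses the literal string and warns that
# it looks like a filename rather than markup.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    BeautifulSoup(path, "html.parser")
print([w.category.__name__ for w in caught])  # expect MarkupResemblesLocatorWarning

# The suggested fix: open the file and hand the file object to the parser.
with open(path) as handle:
    soup = BeautifulSoup(handle, "html.parser")
print(soup.p.get_text())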
|
||||
def _feed(self):
|
||||
"""Internal method that parses previously set markup, creating a large
|
||||
number of Tag and NavigableString objects.
|
||||
"""
|
||||
# Convert the document to Unicode.
|
||||
self.builder.reset()
|
||||
|
@@ -482,111 +243,48 @@ class BeautifulSoup(Tag):
self.popTag()

def reset(self):
"""Reset this object to a state as though it had never parsed any
markup.
"""
Tag.__init__(self, self, self.builder, self.ROOT_TAG_NAME)
self.hidden = 1
self.builder.reset()
self.current_data = []
self.currentTag = None
self.tagStack = []
self.open_tag_counter = Counter()
self.preserve_whitespace_tag_stack = []
self.string_container_stack = []
self._most_recent_element = None
self.pushTag(self)

def new_tag(self, name, namespace=None, nsprefix=None, attrs={},
sourceline=None, sourcepos=None, **kwattrs):
"""Create a new Tag associated with this BeautifulSoup object.
def new_tag(self, name, namespace=None, nsprefix=None, **attrs):
"""Create a new tag associated with this soup."""
return Tag(None, self.builder, name, namespace, nsprefix, attrs)

:param name: The name of the new Tag.
:param namespace: The URI of the new Tag's XML namespace, if any.
:param prefix: The prefix for the new Tag's XML namespace, if any.
:param attrs: A dictionary of this Tag's attribute values; can
be used instead of `kwattrs` for attributes like 'class'
that are reserved words in Python.
:param sourceline: The line number where this tag was
(purportedly) found in its source document.
:param sourcepos: The character position within `sourceline` where this
tag was (purportedly) found.
:param kwattrs: Keyword arguments for the new Tag's attribute values.
def new_string(self, s, subclass=NavigableString):
"""Create a new NavigableString associated with this soup."""
return subclass(s)

"""
kwattrs.update(attrs)
return self.element_classes.get(Tag, Tag)(
None, self.builder, name, namespace, nsprefix, kwattrs,
sourceline=sourceline, sourcepos=sourcepos
)

def string_container(self, base_class=None):
container = base_class or NavigableString

# There may be a general override of NavigableString.
container = self.element_classes.get(
container, container
)

# On top of that, we may be inside a tag that needs a special
# container class.
if self.string_container_stack and container is NavigableString:
container = self.builder.string_containers.get(
self.string_container_stack[-1].name, container
)
return container

def new_string(self, s, subclass=None):
"""Create a new NavigableString associated with this BeautifulSoup
object.
"""
container = self.string_container(subclass)
return container(s)

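A small usage sketch for new_tag() and new_string() as defined above (following the newer signature with the attrs parameter; the tag name, URL and class value are illustrative):

from bs4 import BeautifulSoup

soup = BeautifulSoup("<div>hello</div>", "html.parser")

# attrs= covers attribute names that are reserved words in Python,
# such as 'class'; ordinary attributes can be keyword arguments.
link = soup.new_tag("a", href="https://example.com", attrs={"class": "external"})
link.string = soup.new_string("example")
soup.div.append(link)
print(soup)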
def insert_before(self, *args):
|
||||
"""This method is part of the PageElement API, but `BeautifulSoup` doesn't implement
|
||||
it because there is nothing before or after it in the parse tree.
|
||||
"""
|
||||
def insert_before(self, successor):
|
||||
raise NotImplementedError("BeautifulSoup objects don't support insert_before().")
|
||||
|
||||
def insert_after(self, *args):
|
||||
"""This method is part of the PageElement API, but `BeautifulSoup` doesn't implement
|
||||
it because there is nothing before or after it in the parse tree.
|
||||
"""
|
||||
def insert_after(self, successor):
|
||||
raise NotImplementedError("BeautifulSoup objects don't support insert_after().")
|
||||
|
||||
def popTag(self):
|
||||
"""Internal method called by _popToTag when a tag is closed."""
|
||||
tag = self.tagStack.pop()
|
||||
if tag.name in self.open_tag_counter:
|
||||
self.open_tag_counter[tag.name] -= 1
|
||||
if self.preserve_whitespace_tag_stack and tag == self.preserve_whitespace_tag_stack[-1]:
|
||||
self.preserve_whitespace_tag_stack.pop()
|
||||
if self.string_container_stack and tag == self.string_container_stack[-1]:
|
||||
self.string_container_stack.pop()
|
||||
#print("Pop", tag.name)
|
||||
#print "Pop", tag.name
|
||||
if self.tagStack:
|
||||
self.currentTag = self.tagStack[-1]
|
||||
return self.currentTag
|
||||
|
||||
def pushTag(self, tag):
|
||||
"""Internal method called by handle_starttag when a tag is opened."""
|
||||
#print("Push", tag.name)
|
||||
if self.currentTag is not None:
|
||||
#print "Push", tag.name
|
||||
if self.currentTag:
|
||||
self.currentTag.contents.append(tag)
|
||||
self.tagStack.append(tag)
|
||||
self.currentTag = self.tagStack[-1]
|
||||
if tag.name != self.ROOT_TAG_NAME:
|
||||
self.open_tag_counter[tag.name] += 1
|
||||
if tag.name in self.builder.preserve_whitespace_tags:
|
||||
self.preserve_whitespace_tag_stack.append(tag)
|
||||
if tag.name in self.builder.string_containers:
|
||||
self.string_container_stack.append(tag)
|
||||
|
||||
def endData(self, containerClass=None):
|
||||
"""Method called by the TreeBuilder when the end of a data segment
|
||||
occurs.
|
||||
"""
|
||||
def endData(self, containerClass=NavigableString):
|
||||
if self.current_data:
|
||||
current_data = ''.join(self.current_data)
|
||||
# If whitespace is not preserved, and this string contains
|
||||
@@ -613,93 +311,61 @@ class BeautifulSoup(Tag):
|
||||
not self.parse_only.search(current_data)):
|
||||
return
|
||||
|
||||
containerClass = self.string_container(containerClass)
|
||||
o = containerClass(current_data)
|
||||
self.object_was_parsed(o)
|
||||
|
||||
def object_was_parsed(self, o, parent=None, most_recent_element=None):
|
||||
"""Method called by the TreeBuilder to integrate an object into the parse tree."""
|
||||
if parent is None:
|
||||
parent = self.currentTag
|
||||
if most_recent_element is not None:
|
||||
previous_element = most_recent_element
|
||||
else:
|
||||
previous_element = self._most_recent_element
|
||||
"""Add an object to the parse tree."""
|
||||
parent = parent or self.currentTag
|
||||
previous_element = most_recent_element or self._most_recent_element
|
||||
|
||||
next_element = previous_sibling = next_sibling = None
|
||||
if isinstance(o, Tag):
|
||||
next_element = o.next_element
|
||||
next_sibling = o.next_sibling
|
||||
previous_sibling = o.previous_sibling
|
||||
if previous_element is None:
|
||||
if not previous_element:
|
||||
previous_element = o.previous_element
|
||||
|
||||
fix = parent.next_element is not None
|
||||
|
||||
o.setup(parent, previous_element, next_element, previous_sibling, next_sibling)
|
||||
|
||||
self._most_recent_element = o
|
||||
parent.contents.append(o)
|
||||
|
||||
# Check if we are inserting into an already parsed node.
|
||||
if fix:
|
||||
self._linkage_fixer(parent)
|
||||
if parent.next_sibling:
|
||||
# This node is being inserted into an element that has
|
||||
# already been parsed. Deal with any dangling references.
|
||||
index = parent.contents.index(o)
|
||||
if index == 0:
|
||||
previous_element = parent
|
||||
previous_sibling = None
|
||||
else:
|
||||
previous_element = previous_sibling = parent.contents[index-1]
|
||||
if index == len(parent.contents)-1:
|
||||
next_element = parent.next_sibling
|
||||
next_sibling = None
|
||||
else:
|
||||
next_element = next_sibling = parent.contents[index+1]
|
||||
|
||||
def _linkage_fixer(self, el):
|
||||
"""Make sure linkage of this fragment is sound."""
|
||||
|
||||
first = el.contents[0]
|
||||
child = el.contents[-1]
|
||||
descendant = child
|
||||
|
||||
if child is first and el.parent is not None:
|
||||
# Parent should be linked to first child
|
||||
el.next_element = child
|
||||
# We are no longer linked to whatever this element is
|
||||
prev_el = child.previous_element
|
||||
if prev_el is not None and prev_el is not el:
|
||||
prev_el.next_element = None
|
||||
# First child should be linked to the parent, and no previous siblings.
|
||||
child.previous_element = el
|
||||
child.previous_sibling = None
|
||||
|
||||
# We have no sibling as we've been appended as the last.
|
||||
child.next_sibling = None
|
||||
|
||||
# This index is a tag, dig deeper for a "last descendant"
|
||||
if isinstance(child, Tag) and child.contents:
|
||||
descendant = child._last_descendant(False)
|
||||
|
||||
# As the final step, link last descendant. It should be linked
|
||||
# to the parent's next sibling (if found), else walk up the chain
|
||||
# and find a parent with a sibling. It should have no next sibling.
|
||||
descendant.next_element = None
|
||||
descendant.next_sibling = None
|
||||
target = el
|
||||
while True:
|
||||
if target is None:
|
||||
break
|
||||
elif target.next_sibling is not None:
|
||||
descendant.next_element = target.next_sibling
|
||||
target.next_sibling.previous_element = child
|
||||
break
|
||||
target = target.parent
|
||||
o.previous_element = previous_element
|
||||
if previous_element:
|
||||
previous_element.next_element = o
|
||||
o.next_element = next_element
|
||||
if next_element:
|
||||
next_element.previous_element = o
|
||||
o.next_sibling = next_sibling
|
||||
if next_sibling:
|
||||
next_sibling.previous_sibling = o
|
||||
o.previous_sibling = previous_sibling
|
||||
if previous_sibling:
|
||||
previous_sibling.next_sibling = o
|
||||
|
||||
def _popToTag(self, name, nsprefix=None, inclusivePop=True):
|
||||
"""Pops the tag stack up to and including the most recent
|
||||
instance of the given tag.
|
||||
|
||||
If there are no open tags with the given name, nothing will be
|
||||
popped.
|
||||
|
||||
:param name: Pop up to the most recent tag with this name.
|
||||
:param nsprefix: The namespace prefix that goes with `name`.
|
||||
:param inclusivePop: It this is false, pops the tag stack up
|
||||
to but *not* including the most recent instqance of the
|
||||
given tag.
|
||||
|
||||
"""
|
||||
#print("Popping to %s" % name)
|
||||
instance of the given tag. If inclusivePop is false, pops the tag
|
||||
stack up to but *not* including the most recent instqance of
|
||||
the given tag."""
|
||||
#print "Popping to %s" % name
|
||||
if name == self.ROOT_TAG_NAME:
|
||||
# The BeautifulSoup object itself can never be popped.
|
||||
return
|
||||
@@ -708,8 +374,6 @@ class BeautifulSoup(Tag):
|
||||
|
||||
stack_size = len(self.tagStack)
|
||||
for i in range(stack_size - 1, 0, -1):
|
||||
if not self.open_tag_counter.get(name):
|
||||
break
|
||||
t = self.tagStack[i]
|
||||
if (name == t.name and nsprefix == t.prefix):
|
||||
if inclusivePop:
|
||||
@@ -719,26 +383,16 @@ class BeautifulSoup(Tag):
|
||||
|
||||
return most_recently_popped
|
||||
|
||||
def handle_starttag(self, name, namespace, nsprefix, attrs, sourceline=None,
|
||||
sourcepos=None, namespaces=None):
|
||||
"""Called by the tree builder when a new tag is encountered.
|
||||
def handle_starttag(self, name, namespace, nsprefix, attrs):
|
||||
"""Push a start tag on to the stack.
|
||||
|
||||
:param name: Name of the tag.
|
||||
:param nsprefix: Namespace prefix for the tag.
|
||||
:param attrs: A dictionary of attribute values.
|
||||
:param sourceline: The line number where this tag was found in its
|
||||
source document.
|
||||
:param sourcepos: The character position within `sourceline` where this
|
||||
tag was found.
|
||||
:param namespaces: A dictionary of all namespace prefix mappings
|
||||
currently in scope in the document.
|
||||
|
||||
If this method returns None, the tag was rejected by an active
|
||||
SoupStrainer. You should proceed as if the tag had not occurred
|
||||
If this method returns None, the tag was rejected by the
|
||||
SoupStrainer. You should proceed as if the tag had not occured
|
||||
in the document. For instance, if this was a self-closing tag,
|
||||
don't call handle_endtag.
|
||||
"""
|
||||
# print("Start tag %s: %s" % (name, attrs))
|
||||
|
||||
# print "Start tag %s: %s" % (name, attrs)
|
||||
self.endData()
|
||||
|
||||
if (self.parse_only and len(self.tagStack) <= 1
|
||||
@@ -746,54 +400,34 @@ class BeautifulSoup(Tag):
|
||||
or not self.parse_only.search_tag(name, attrs))):
|
||||
return None
|
||||
|
||||
tag = self.element_classes.get(Tag, Tag)(
|
||||
self, self.builder, name, namespace, nsprefix, attrs,
|
||||
self.currentTag, self._most_recent_element,
|
||||
sourceline=sourceline, sourcepos=sourcepos,
|
||||
namespaces=namespaces
|
||||
)
|
||||
tag = Tag(self, self.builder, name, namespace, nsprefix, attrs,
|
||||
self.currentTag, self._most_recent_element)
|
||||
if tag is None:
|
||||
return tag
|
||||
if self._most_recent_element is not None:
|
||||
if self._most_recent_element:
|
||||
self._most_recent_element.next_element = tag
|
||||
self._most_recent_element = tag
|
||||
self.pushTag(tag)
|
||||
return tag
|
||||
|
||||
def handle_endtag(self, name, nsprefix=None):
|
||||
"""Called by the tree builder when an ending tag is encountered.
|
||||
|
||||
:param name: Name of the tag.
|
||||
:param nsprefix: Namespace prefix for the tag.
|
||||
"""
|
||||
#print("End tag: " + name)
|
||||
#print "End tag: " + name
|
||||
self.endData()
|
||||
self._popToTag(name, nsprefix)
|
||||
|
||||
|
||||
def handle_data(self, data):
|
||||
"""Called by the tree builder when a chunk of textual data is encountered."""
|
||||
self.current_data.append(data)
|
||||
|
||||
|
||||
def decode(self, pretty_print=False,
|
||||
eventual_encoding=DEFAULT_OUTPUT_ENCODING,
|
||||
formatter="minimal", iterator=None):
|
||||
"""Returns a string or Unicode representation of the parse tree
|
||||
as an HTML or XML document.
|
||||
formatter="minimal"):
|
||||
"""Returns a string or Unicode representation of this document.
|
||||
To get Unicode, pass None for encoding."""
|
||||
|
||||
:param pretty_print: If this is True, indentation will be used to
|
||||
make the document more readable.
|
||||
:param eventual_encoding: The encoding of the final document.
|
||||
If this is None, the document will be a Unicode string.
|
||||
"""
|
||||
if self.is_xml:
|
||||
# Print the XML declaration
|
||||
encoding_part = ''
|
||||
if eventual_encoding in PYTHON_SPECIFIC_ENCODINGS:
|
||||
# This is a special Python encoding; it can't actually
|
||||
# go into an XML document because it means nothing
|
||||
# outside of Python.
|
||||
eventual_encoding = None
|
||||
if eventual_encoding != None:
|
||||
if eventual_encoding is not None:
|
||||
encoding_part = ' encoding="%s"' % eventual_encoding
|
||||
prefix = '<?xml version="1.0"%s?>\n' % encoding_part
|
||||
else:
|
||||
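For reference, a quick sketch of the decode() entry point documented above (markup and parser are illustrative; prettify(), used at the end of this file, is the convenience wrapper around the pretty_print path):

from bs4 import BeautifulSoup

soup = BeautifulSoup("<p>one<b>two</b></p>", "html.parser")

# decode() renders the tree as a string; pretty_print adds indentation.
print(soup.decode())
print(soup.decode(pretty_print=True))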
@@ -803,9 +437,9 @@ class BeautifulSoup(Tag):
|
||||
else:
|
||||
indent_level = 0
|
||||
return prefix + super(BeautifulSoup, self).decode(
|
||||
indent_level, eventual_encoding, formatter, iterator)
|
||||
indent_level, eventual_encoding, formatter)
|
||||
|
||||
# Aliases to make it easier to get started quickly, e.g. 'from bs4 import _soup'
|
||||
# Alias to make it easier to type import: 'from bs4 import _soup'
|
||||
_s = BeautifulSoup
|
||||
_soup = BeautifulSoup
|
||||
|
||||
@@ -816,25 +450,19 @@ class BeautifulStoneSoup(BeautifulSoup):
|
||||
kwargs['features'] = 'xml'
|
||||
warnings.warn(
|
||||
'The BeautifulStoneSoup class is deprecated. Instead of using '
|
||||
'it, pass features="xml" into the BeautifulSoup constructor.',
|
||||
DeprecationWarning, stacklevel=2
|
||||
)
|
||||
'it, pass features="xml" into the BeautifulSoup constructor.')
|
||||
super(BeautifulStoneSoup, self).__init__(*args, **kwargs)
|
||||
|
||||
|
||||
class StopParsing(Exception):
|
||||
"""Exception raised by a TreeBuilder if it's unable to continue parsing."""
|
||||
pass
|
||||
|
||||
class FeatureNotFound(ValueError):
|
||||
"""Exception raised by the BeautifulSoup constructor if no parser with the
|
||||
requested features is found.
|
||||
"""
|
||||
pass
|
||||
|
||||
|
||||
#If this file is run as a script, act as an HTML pretty-printer.
|
||||
#By default, act as an HTML pretty-printer.
|
||||
if __name__ == '__main__':
|
||||
import sys
|
||||
soup = BeautifulSoup(sys.stdin)
|
||||
print((soup.prettify()))
|
||||
print(soup.prettify())
|
||||
|
||||
@@ -1,21 +1,11 @@
|
||||
# Use of this source code is governed by the MIT license.
|
||||
__license__ = "MIT"
|
||||
|
||||
from collections import defaultdict
|
||||
import itertools
|
||||
import re
|
||||
import warnings
|
||||
import sys
|
||||
from bs4.element import (
|
||||
CharsetMetaAttributeValue,
|
||||
ContentMetaAttributeValue,
|
||||
RubyParenthesisString,
|
||||
RubyTextString,
|
||||
Stylesheet,
|
||||
Script,
|
||||
TemplateString,
|
||||
nonwhitespace_re
|
||||
)
|
||||
whitespace_re
|
||||
)
|
||||
|
||||
__all__ = [
|
||||
'HTMLTreeBuilder',
|
||||
@@ -32,41 +22,20 @@ XML = 'xml'
HTML = 'html'
HTML_5 = 'html5'

class XMLParsedAsHTMLWarning(UserWarning):
"""The warning issued when an HTML parser is used to parse
XML that is not XHTML.
"""
MESSAGE = """It looks like you're parsing an XML document using an HTML parser. If this really is an HTML document (maybe it's XHTML?), you can ignore or filter this warning. If it's XML, you should know that using an XML parser will be more reliable. To parse this document as XML, make sure you have the lxml package installed, and pass the keyword argument `features="xml"` into the BeautifulSoup constructor."""


class TreeBuilderRegistry(object):
"""A way of looking up TreeBuilder subclasses by their name or by desired
features.
"""


def __init__(self):
self.builders_for_feature = defaultdict(list)
self.builders = []

def register(self, treebuilder_class):
"""Register a treebuilder based on its advertised features.

:param treebuilder_class: A subclass of Treebuilder. its .features
attribute should list its features.
"""
"""Register a treebuilder based on its advertised features."""
for feature in treebuilder_class.features:
self.builders_for_feature[feature].insert(0, treebuilder_class)
self.builders.insert(0, treebuilder_class)

def lookup(self, *features):
"""Look up a TreeBuilder subclass with the desired features.

:param features: A list of features to look for. If none are
provided, the most recently registered TreeBuilder subclass
will be used.
:return: A TreeBuilder subclass, or None if there's no
registered subclass with all the requested features.
"""
if len(self.builders) == 0:
# There are no builders at all.
return None
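A brief sketch of how this registry is used (the feature names come from this file; the concrete result depends on which parser libraries are installed):

from bs4.builder import builder_registry

# Look up by parser name or by generic features; lookup() returns None
# when no registered builder advertises all of the requested features.
print(builder_registry.lookup("html.parser"))
print(builder_registry.lookup("html", "fast"))
print(builder_registry.lookup("no-such-feature"))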
@@ -109,7 +78,7 @@ class TreeBuilderRegistry(object):
|
||||
builder_registry = TreeBuilderRegistry()
|
||||
|
||||
class TreeBuilder(object):
|
||||
"""Turn a textual document into a Beautiful Soup object tree."""
|
||||
"""Turn a document into a Beautiful Soup object tree."""
|
||||
|
||||
NAME = "[Unknown tree builder]"
|
||||
ALTERNATE_NAMES = []
|
||||
@@ -117,89 +86,19 @@ class TreeBuilder(object):
|
||||
|
||||
is_xml = False
|
||||
picklable = False
|
||||
preserve_whitespace_tags = set()
|
||||
empty_element_tags = None # A tag will be considered an empty-element
|
||||
# tag when and only when it has no contents.
|
||||
|
||||
|
||||
# A value for these tag/attribute combinations is a space- or
|
||||
# comma-separated list of CDATA, rather than a single CDATA.
|
||||
DEFAULT_CDATA_LIST_ATTRIBUTES = defaultdict(list)
|
||||
cdata_list_attributes = {}
|
||||
|
||||
# Whitespace should be preserved inside these tags.
|
||||
DEFAULT_PRESERVE_WHITESPACE_TAGS = set()
|
||||
|
||||
# The textual contents of tags with these names should be
|
||||
# instantiated with some class other than NavigableString.
|
||||
DEFAULT_STRING_CONTAINERS = {}
|
||||
|
||||
USE_DEFAULT = object()
|
||||
|
||||
# Most parsers don't keep track of line numbers.
|
||||
TRACKS_LINE_NUMBERS = False
|
||||
|
||||
def __init__(self, multi_valued_attributes=USE_DEFAULT,
|
||||
preserve_whitespace_tags=USE_DEFAULT,
|
||||
store_line_numbers=USE_DEFAULT,
|
||||
string_containers=USE_DEFAULT,
|
||||
):
|
||||
"""Constructor.
|
||||
|
||||
:param multi_valued_attributes: If this is set to None, the
|
||||
TreeBuilder will not turn any values for attributes like
|
||||
'class' into lists. Setting this to a dictionary will
|
||||
customize this behavior; look at DEFAULT_CDATA_LIST_ATTRIBUTES
|
||||
for an example.
|
||||
|
||||
Internally, these are called "CDATA list attributes", but that
|
||||
probably doesn't make sense to an end-user, so the argument name
|
||||
is `multi_valued_attributes`.
|
||||
|
||||
:param preserve_whitespace_tags: A list of tags to treat
|
||||
the way <pre> tags are treated in HTML. Tags in this list
|
||||
are immune from pretty-printing; their contents will always be
|
||||
output as-is.
|
||||
|
||||
:param string_containers: A dictionary mapping tag names to
|
||||
the classes that should be instantiated to contain the textual
|
||||
contents of those tags. The default is to use NavigableString
|
||||
for every tag, no matter what the name. You can override the
|
||||
default by changing DEFAULT_STRING_CONTAINERS.
|
||||
|
||||
:param store_line_numbers: If the parser keeps track of the
|
||||
line numbers and positions of the original markup, that
|
||||
information will, by default, be stored in each corresponding
|
||||
`Tag` object. You can turn this off by passing
|
||||
store_line_numbers=False. If the parser you're using doesn't
|
||||
keep track of this information, then setting store_line_numbers=True
|
||||
will do nothing.
|
||||
"""
|
||||
def __init__(self):
|
||||
self.soup = None
|
||||
if multi_valued_attributes is self.USE_DEFAULT:
|
||||
multi_valued_attributes = self.DEFAULT_CDATA_LIST_ATTRIBUTES
|
||||
self.cdata_list_attributes = multi_valued_attributes
|
||||
if preserve_whitespace_tags is self.USE_DEFAULT:
|
||||
preserve_whitespace_tags = self.DEFAULT_PRESERVE_WHITESPACE_TAGS
|
||||
self.preserve_whitespace_tags = preserve_whitespace_tags
|
||||
if store_line_numbers == self.USE_DEFAULT:
|
||||
store_line_numbers = self.TRACKS_LINE_NUMBERS
|
||||
self.store_line_numbers = store_line_numbers
|
||||
if string_containers == self.USE_DEFAULT:
|
||||
string_containers = self.DEFAULT_STRING_CONTAINERS
|
||||
self.string_containers = string_containers
|
||||
|
||||
def initialize_soup(self, soup):
|
||||
"""The BeautifulSoup object has been initialized and is now
|
||||
being associated with the TreeBuilder.
|
||||
|
||||
:param soup: A BeautifulSoup object.
|
||||
"""
|
||||
self.soup = soup
|
||||
|
||||
def reset(self):
|
||||
"""Do any work necessary to reset the underlying parser
|
||||
for a new document.
|
||||
|
||||
By default, this does nothing.
|
||||
"""
|
||||
pass
|
||||
|
||||
def can_be_empty_element(self, tag_name):
|
||||
@@ -211,58 +110,24 @@ class TreeBuilder(object):
|
||||
For instance: an HTMLBuilder does not consider a <p> tag to be
|
||||
an empty-element tag (it's not in
|
||||
HTMLBuilder.empty_element_tags). This means an empty <p> tag
|
||||
will be presented as "<p></p>", not "<p/>" or "<p>".
|
||||
will be presented as "<p></p>", not "<p />".
|
||||
|
||||
The default implementation has no opinion about which tags are
|
||||
empty-element tags, so a tag will be presented as an
|
||||
empty-element tag if and only if it has no children.
|
||||
"<foo></foo>" will become "<foo/>", and "<foo>bar</foo>" will
|
||||
empty-element tag if and only if it has no contents.
|
||||
"<foo></foo>" will become "<foo />", and "<foo>bar</foo>" will
|
||||
be left alone.
|
||||
|
||||
:param tag_name: The name of a markup tag.
|
||||
"""
|
||||
if self.empty_element_tags is None:
|
||||
return True
|
||||
return tag_name in self.empty_element_tags
|
||||
|
||||
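The effect of can_be_empty_element() in practice, as a small sketch (the "html.parser" builder and sample markup are assumptions):

from bs4 import BeautifulSoup

soup = BeautifulSoup("<br><p></p>", "html.parser")

# 'br' is listed in HTMLTreeBuilder.empty_element_tags, so it is rendered
# as an empty-element tag; an empty <p> keeps its closing tag.
print(soup.decode())  # <br/><p></p>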
|
||||
def feed(self, markup):
|
||||
"""Run some incoming markup through some parsing process,
|
||||
populating the `BeautifulSoup` object in self.soup.
|
||||
|
||||
This method is not implemented in TreeBuilder; it must be
|
||||
implemented in subclasses.
|
||||
|
||||
:return: None.
|
||||
"""
|
||||
raise NotImplementedError()
|
||||
|
||||
def prepare_markup(self, markup, user_specified_encoding=None,
|
||||
document_declared_encoding=None, exclude_encodings=None):
|
||||
"""Run any preliminary steps necessary to make incoming markup
|
||||
acceptable to the parser.
|
||||
|
||||
:param markup: Some markup -- probably a bytestring.
|
||||
:param user_specified_encoding: The user asked to try this encoding.
|
||||
:param document_declared_encoding: The markup itself claims to be
|
||||
in this encoding. NOTE: This argument is not used by the
|
||||
calling code and can probably be removed.
|
||||
:param exclude_encodings: The user asked _not_ to try any of
|
||||
these encodings.
|
||||
|
||||
:yield: A series of 4-tuples:
|
||||
(markup, encoding, declared encoding,
|
||||
has undergone character replacement)
|
||||
|
||||
Each 4-tuple represents a strategy for converting the
|
||||
document to Unicode and parsing it. Each strategy will be tried
|
||||
in turn.
|
||||
|
||||
By default, the only strategy is to parse the markup
|
||||
as-is. See `LXMLTreeBuilderForXML` and
|
||||
`HTMLParserTreeBuilder` for implementations that take into
|
||||
account the quirks of particular parsers.
|
||||
"""
|
||||
yield markup, None, None, False
|
||||
document_declared_encoding=None):
|
||||
return markup, None, None, False
|
||||
|
||||
def test_fragment_to_document(self, fragment):
|
||||
"""Wrap an HTML fragment to make it look like a document.
|
||||
@@ -274,36 +139,16 @@ class TreeBuilder(object):
|
||||
results against other HTML fragments.
|
||||
|
||||
This method should not be used outside of tests.
|
||||
|
||||
:param fragment: A string -- fragment of HTML.
|
||||
:return: A string -- a full HTML document.
|
||||
"""
|
||||
return fragment
|
||||
|
||||
def set_up_substitutions(self, tag):
|
||||
"""Set up any substitutions that will need to be performed on
|
||||
a `Tag` when it's output as a string.
|
||||
|
||||
By default, this does nothing. See `HTMLTreeBuilder` for a
|
||||
case where this is used.
|
||||
|
||||
:param tag: A `Tag`
|
||||
:return: Whether or not a substitution was performed.
|
||||
"""
|
||||
return False
|
||||
|
||||
def _replace_cdata_list_attribute_values(self, tag_name, attrs):
|
||||
"""When an attribute value is associated with a tag that can
|
||||
have multiple values for that attribute, convert the string
|
||||
value to a list of strings.
|
||||
"""Replaces class="foo bar" with class=["foo", "bar"]
|
||||
|
||||
Basically, replaces class="foo bar" with class=["foo", "bar"]
|
||||
|
||||
NOTE: This method modifies its input in place.
|
||||
|
||||
:param tag_name: The name of a tag.
|
||||
:param attrs: A dictionary containing the tag's attributes.
|
||||
Any appropriate attribute values will be modified in place.
|
||||
Modifies its input in place.
|
||||
"""
|
||||
if not attrs:
|
||||
return attrs
|
||||
@@ -318,7 +163,7 @@ class TreeBuilder(object):
|
||||
# values. Split it into a list.
|
||||
value = attrs[attr]
|
||||
if isinstance(value, str):
|
||||
values = nonwhitespace_re.findall(value)
|
||||
values = whitespace_re.split(value)
|
||||
else:
|
||||
# html5lib sometimes calls setAttributes twice
|
||||
# for the same tag when rearranging the parse
|
||||
@@ -329,13 +174,9 @@ class TreeBuilder(object):
|
||||
values = value
|
||||
attrs[attr] = values
|
||||
return attrs
|
||||
|
||||
class SAXTreeBuilder(TreeBuilder):
|
||||
"""A Beautiful Soup treebuilder that listens for SAX events.
|
||||
|
||||
This is not currently used for anything, but it demonstrates
|
||||
how a simple TreeBuilder would work.
|
||||
"""
|
||||
class SAXTreeBuilder(TreeBuilder):
|
||||
"""A Beautiful Soup treebuilder that listens for SAX events."""
|
||||
|
||||
def feed(self, markup):
|
||||
raise NotImplementedError()
|
||||
@@ -345,11 +186,11 @@ class SAXTreeBuilder(TreeBuilder):
|
||||
|
||||
def startElement(self, name, attrs):
|
||||
attrs = dict((key[1], value) for key, value in list(attrs.items()))
|
||||
#print("Start %s, %r" % (name, attrs))
|
||||
#print "Start %s, %r" % (name, attrs)
|
||||
self.soup.handle_starttag(name, attrs)
|
||||
|
||||
def endElement(self, name):
|
||||
#print("End %s" % name)
|
||||
#print "End %s" % name
|
||||
self.soup.handle_endtag(name)
|
||||
|
||||
def startElementNS(self, nsTuple, nodeName, attrs):
|
||||
@@ -386,44 +227,10 @@ class HTMLTreeBuilder(TreeBuilder):
|
||||
Such as which tags are empty-element tags.
|
||||
"""
|
||||
|
||||
empty_element_tags = set([
|
||||
# These are from HTML5.
|
||||
'area', 'base', 'br', 'col', 'embed', 'hr', 'img', 'input', 'keygen', 'link', 'menuitem', 'meta', 'param', 'source', 'track', 'wbr',
|
||||
|
||||
# These are from earlier versions of HTML and are removed in HTML5.
|
||||
'basefont', 'bgsound', 'command', 'frame', 'image', 'isindex', 'nextid', 'spacer'
|
||||
])
|
||||
preserve_whitespace_tags = set(['pre', 'textarea'])
|
||||
empty_element_tags = set(['br' , 'hr', 'input', 'img', 'meta',
|
||||
'spacer', 'link', 'frame', 'base'])
|
||||
|
||||
# The HTML standard defines these as block-level elements. Beautiful
|
||||
# Soup does not treat these elements differently from other elements,
|
||||
# but it may do so eventually, and this information is available if
|
||||
# you need to use it.
|
||||
block_elements = set(["address", "article", "aside", "blockquote", "canvas", "dd", "div", "dl", "dt", "fieldset", "figcaption", "figure", "footer", "form", "h1", "h2", "h3", "h4", "h5", "h6", "header", "hr", "li", "main", "nav", "noscript", "ol", "output", "p", "pre", "section", "table", "tfoot", "ul", "video"])
|
||||
|
||||
# These HTML tags need special treatment so they can be
# represented by a string class other than NavigableString.
#
# For some of these tags, it's because the HTML standard defines
# an unusual content model for them. I made this list by going
# through the HTML spec
# (https://html.spec.whatwg.org/#metadata-content) and looking for
# "metadata content" elements that can contain strings.
#
# The Ruby tags (<rt> and <rp>) are here despite being normal
# "phrasing content" tags, because the content they contain is
# qualitatively different from other text in the document, and it
# can be useful to be able to distinguish it.
#
# TODO: Arguably <noscript> could go here but it seems
# qualitatively different from the other tags.
DEFAULT_STRING_CONTAINERS = {
'rt' : RubyTextString,
'rp' : RubyParenthesisString,
'style': Stylesheet,
'script': Script,
'template': TemplateString,
}
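A sketch of what the DEFAULT_STRING_CONTAINERS mapping above changes (this applies to the newer bs4 side of this diff, which provides the Stylesheet and Script classes; parser and markup are illustrative):

from bs4 import BeautifulSoup
from bs4.element import Script, Stylesheet

soup = BeautifulSoup(
    "<style>p {color: red}</style><script>var x = 1;</script>",
    "html.parser",
)

# Strings inside <style> and <script> come back as dedicated subclasses
# of NavigableString instead of plain strings.
print(isinstance(soup.style.string, Stylesheet))  # True
print(isinstance(soup.script.string, Script))     # True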
|
||||
# The HTML standard defines these attributes as containing a
|
||||
# space-separated list of values, not a single value. That is,
|
||||
# class="foo bar" means that the 'class' attribute has two values,
|
||||
@@ -431,7 +238,7 @@ class HTMLTreeBuilder(TreeBuilder):
|
||||
# encounter one of these attributes, we will parse its value into
|
||||
# a list of values if possible. Upon output, the list will be
|
||||
# converted back into a string.
|
||||
DEFAULT_CDATA_LIST_ATTRIBUTES = {
|
||||
cdata_list_attributes = {
|
||||
"*" : ['class', 'accesskey', 'dropzone'],
|
||||
"a" : ['rel', 'rev'],
|
||||
"link" : ['rel', 'rev'],
|
||||
@@ -448,19 +255,7 @@ class HTMLTreeBuilder(TreeBuilder):
|
||||
"output" : ["for"],
|
||||
}
|
||||
|
||||
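A short sketch of the multi-valued ("CDATA list") attribute handling configured above (markup is illustrative; on the newer bs4 side of this diff the multi_valued_attributes keyword is forwarded to the TreeBuilder constructor):

from bs4 import BeautifulSoup

soup = BeautifulSoup('<p class="foo bar" id="main">x</p>', "html.parser")

# 'class' is in the multi-valued list, so its value is split into a list;
# 'id' is single-valued and stays a plain string.
print(soup.p["class"])  # ['foo', 'bar']
print(soup.p["id"])     # 'main'

# Passing multi_valued_attributes=None disables the splitting entirely.
flat = BeautifulSoup('<p class="foo bar">x</p>', "html.parser",
                     multi_valued_attributes=None)
print(flat.p["class"])  # 'foo bar'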
DEFAULT_PRESERVE_WHITESPACE_TAGS = set(['pre', 'textarea'])
|
||||
|
||||
def set_up_substitutions(self, tag):
|
||||
"""Replace the declared encoding in a <meta> tag with a placeholder,
|
||||
to be substituted when the tag is output to a string.
|
||||
|
||||
An HTML document may come in to Beautiful Soup as one
|
||||
encoding, but exit in a different encoding, and the <meta> tag
|
||||
needs to be changed to reflect this.
|
||||
|
||||
:param tag: A `Tag`
|
||||
:return: Whether or not a substitution was performed.
|
||||
"""
|
||||
# We are only interested in <meta> tags
|
||||
if tag.name != 'meta':
|
||||
return False
|
||||
@@ -493,107 +288,10 @@ class HTMLTreeBuilder(TreeBuilder):
|
||||
|
||||
return (meta_encoding is not None)
|
||||
|
||||
class DetectsXMLParsedAsHTML(object):
|
||||
"""A mixin class for any class (a TreeBuilder, or some class used by a
|
||||
TreeBuilder) that's in a position to detect whether an XML
|
||||
document is being incorrectly parsed as HTML, and issue an
|
||||
appropriate warning.
|
||||
|
||||
This requires being able to observe an incoming processing
|
||||
instruction that might be an XML declaration, and also able to
|
||||
observe tags as they're opened. If you can't do that for a given
|
||||
TreeBuilder, there's a less reliable implementation based on
|
||||
examining the raw markup.
|
||||
"""
|
||||
|
||||
# Regular expression for seeing if markup has an <html> tag.
|
||||
LOOKS_LIKE_HTML = re.compile("<[^ +]html", re.I)
|
||||
LOOKS_LIKE_HTML_B = re.compile(b"<[^ +]html", re.I)
|
||||
|
||||
XML_PREFIX = '<?xml'
|
||||
XML_PREFIX_B = b'<?xml'
|
||||
|
||||
@classmethod
|
||||
def warn_if_markup_looks_like_xml(cls, markup, stacklevel=3):
|
||||
"""Perform a check on some markup to see if it looks like XML
|
||||
that's not XHTML. If so, issue a warning.
|
||||
|
||||
This is much less reliable than doing the check while parsing,
|
||||
but some of the tree builders can't do that.
|
||||
|
||||
:param stacklevel: The stacklevel of the code calling this
|
||||
function.
|
||||
|
||||
:return: True if the markup looks like non-XHTML XML, False
|
||||
otherwise.
|
||||
|
||||
"""
|
||||
if isinstance(markup, bytes):
|
||||
prefix = cls.XML_PREFIX_B
|
||||
looks_like_html = cls.LOOKS_LIKE_HTML_B
|
||||
else:
|
||||
prefix = cls.XML_PREFIX
|
||||
looks_like_html = cls.LOOKS_LIKE_HTML
|
||||
|
||||
if (markup is not None
|
||||
and markup.startswith(prefix)
|
||||
and not looks_like_html.search(markup[:500])
|
||||
):
|
||||
cls._warn(stacklevel=stacklevel+2)
|
||||
return True
|
||||
return False
|
||||
|
||||
@classmethod
|
||||
def _warn(cls, stacklevel=5):
|
||||
"""Issue a warning about XML being parsed as HTML."""
|
||||
warnings.warn(
|
||||
XMLParsedAsHTMLWarning.MESSAGE, XMLParsedAsHTMLWarning,
|
||||
stacklevel=stacklevel
|
||||
)
|
||||
|
||||
def _initialize_xml_detector(self):
|
||||
"""Call this method before parsing a document."""
|
||||
self._first_processing_instruction = None
|
||||
self._root_tag = None
|
||||
|
||||
def _document_might_be_xml(self, processing_instruction):
|
||||
"""Call this method when encountering an XML declaration, or a
|
||||
"processing instruction" that might be an XML declaration.
|
||||
"""
|
||||
if (self._first_processing_instruction is not None
|
||||
or self._root_tag is not None):
|
||||
# The document has already started. Don't bother checking
|
||||
# anymore.
|
||||
return
|
||||
|
||||
self._first_processing_instruction = processing_instruction
|
||||
|
||||
# We won't know until we encounter the first tag whether or
|
||||
# not this is actually a problem.
|
||||
|
||||
def _root_tag_encountered(self, name):
|
||||
"""Call this when you encounter the document's root tag.
|
||||
|
||||
This is where we actually check whether an XML document is
|
||||
being incorrectly parsed as HTML, and issue the warning.
|
||||
"""
|
||||
if self._root_tag is not None:
|
||||
# This method was incorrectly called multiple times. Do
|
||||
# nothing.
|
||||
return
|
||||
|
||||
self._root_tag = name
|
||||
if (name != 'html' and self._first_processing_instruction is not None
|
||||
and self._first_processing_instruction.lower().startswith('xml ')):
|
||||
# We encountered an XML declaration and then a tag other
|
||||
# than 'html'. This is a reliable indicator that a
|
||||
# non-XHTML document is being parsed as XML.
|
||||
self._warn()
|
||||
|
||||
|
||||
def register_treebuilders_from(module):
|
||||
"""Copy TreeBuilders from the given module into this module."""
|
||||
this_module = sys.modules[__name__]
|
||||
# I'm fairly sure this is not the best way to do this.
|
||||
this_module = sys.modules['bs4.builder']
|
||||
for name in module.__all__:
|
||||
obj = getattr(module, name)
|
||||
|
||||
@@ -604,22 +302,12 @@ def register_treebuilders_from(module):
|
||||
this_module.builder_registry.register(obj)
|
||||
|
||||
class ParserRejectedMarkup(Exception):
|
||||
"""An Exception to be raised when the underlying parser simply
|
||||
refuses to parse the given markup.
|
||||
"""
|
||||
def __init__(self, message_or_exception):
|
||||
"""Explain why the parser rejected the given markup, either
|
||||
with a textual explanation or another exception.
|
||||
"""
|
||||
if isinstance(message_or_exception, Exception):
|
||||
e = message_or_exception
|
||||
message_or_exception = "%s: %s" % (e.__class__.__name__, str(e))
|
||||
super(ParserRejectedMarkup, self).__init__(message_or_exception)
|
||||
|
||||
pass
|
||||
|
||||
# Builders are registered in reverse order of priority, so that custom
|
||||
# builder registrations will take precedence. In general, we want lxml
|
||||
# to take precedence over html5lib, because it's faster. And we only
|
||||
# want to use HTMLParser as a last resort.
|
||||
# want to use HTMLParser as a last result.
|
||||
from . import _htmlparser
|
||||
register_treebuilders_from(_htmlparser)
|
||||
try:
|
||||
|
||||
@@ -1,14 +1,9 @@
|
||||
# Use of this source code is governed by the MIT license.
|
||||
__license__ = "MIT"
|
||||
|
||||
__all__ = [
|
||||
'HTML5TreeBuilder',
|
||||
]
|
||||
|
||||
import warnings
|
||||
import re
|
||||
from bs4.builder import (
|
||||
DetectsXMLParsedAsHTML,
|
||||
PERMISSIVE,
|
||||
HTML,
|
||||
HTML_5,
|
||||
@@ -16,13 +11,17 @@ from bs4.builder import (
|
||||
)
|
||||
from bs4.element import (
|
||||
NamespacedAttribute,
|
||||
nonwhitespace_re,
|
||||
whitespace_re,
|
||||
)
|
||||
import html5lib
|
||||
from html5lib.constants import (
|
||||
namespaces,
|
||||
prefixes,
|
||||
)
|
||||
try:
|
||||
# html5lib >= 0.99999999/1.0b9
|
||||
from html5lib.treebuilders import base as treebuildersbase
|
||||
except ImportError:
|
||||
# html5lib <= 0.9999999/1.0b8
|
||||
from html5lib.treebuilders import _base as treebuildersbase
|
||||
from html5lib.constants import namespaces
|
||||
|
||||
from bs4.element import (
|
||||
Comment,
|
||||
Doctype,
|
||||
@@ -30,37 +29,13 @@ from bs4.element import (
|
||||
Tag,
|
||||
)
|
||||
|
||||
try:
|
||||
# Pre-0.99999999
|
||||
from html5lib.treebuilders import _base as treebuilder_base
|
||||
new_html5lib = False
|
||||
except ImportError as e:
|
||||
# 0.99999999 and up
|
||||
from html5lib.treebuilders import base as treebuilder_base
|
||||
new_html5lib = True
|
||||
|
||||
class HTML5TreeBuilder(HTMLTreeBuilder):
|
||||
"""Use html5lib to build a tree.
|
||||
|
||||
Note that this TreeBuilder does not support some features common
|
||||
to HTML TreeBuilders. Some of these features could theoretically
|
||||
be implemented, but at the very least it's quite difficult,
|
||||
because html5lib moves the parse tree around as it's being built.
|
||||
|
||||
* This TreeBuilder doesn't use different subclasses of NavigableString
|
||||
based on the name of the tag in which the string was found.
|
||||
|
||||
* You can't use a SoupStrainer to parse only part of a document.
|
||||
"""
|
||||
"""Use html5lib to build a tree."""
|
||||
|
||||
NAME = "html5lib"
|
||||
|
||||
features = [NAME, PERMISSIVE, HTML_5, HTML]
|
||||
|
||||
# html5lib can tell us which line number and position in the
|
||||
# original file is the source of an element.
|
||||
TRACKS_LINE_NUMBERS = True
|
||||
|
||||
def prepare_markup(self, markup, user_specified_encoding,
|
||||
document_declared_encoding=None, exclude_encodings=None):
|
||||
# Store the user-specified encoding for use later on.
|
||||
@@ -70,56 +45,27 @@ class HTML5TreeBuilder(HTMLTreeBuilder):
|
||||
# ATM because the html5lib TreeBuilder doesn't use
|
||||
# UnicodeDammit.
|
||||
if exclude_encodings:
|
||||
warnings.warn(
|
||||
"You provided a value for exclude_encoding, but the html5lib tree builder doesn't support exclude_encoding.",
|
||||
stacklevel=3
|
||||
)
|
||||
|
||||
# html5lib only parses HTML, so if it's given XML that's worth
|
||||
# noting.
|
||||
DetectsXMLParsedAsHTML.warn_if_markup_looks_like_xml(
|
||||
markup, stacklevel=3
|
||||
)
|
||||
|
||||
warnings.warn("You provided a value for exclude_encoding, but the html5lib tree builder doesn't support exclude_encoding.")
|
||||
yield (markup, None, None, False)
|
||||
|
||||
# These methods are defined by Beautiful Soup.
|
||||
def feed(self, markup):
|
||||
if self.soup.parse_only is not None:
|
||||
warnings.warn(
|
||||
"You provided a value for parse_only, but the html5lib tree builder doesn't support parse_only. The entire document will be parsed.",
|
||||
stacklevel=4
|
||||
)
|
||||
warnings.warn("You provided a value for parse_only, but the html5lib tree builder doesn't support parse_only. The entire document will be parsed.")
|
||||
parser = html5lib.HTMLParser(tree=self.create_treebuilder)
|
||||
self.underlying_builder.parser = parser
|
||||
extra_kwargs = dict()
|
||||
if not isinstance(markup, str):
|
||||
if new_html5lib:
|
||||
extra_kwargs['override_encoding'] = self.user_specified_encoding
|
||||
else:
|
||||
extra_kwargs['encoding'] = self.user_specified_encoding
|
||||
doc = parser.parse(markup, **extra_kwargs)
|
||||
|
||||
doc = parser.parse(markup, encoding=self.user_specified_encoding)
|
||||
|
||||
# Set the character encoding detected by the tokenizer.
|
||||
if isinstance(markup, str):
|
||||
# We need to special-case this because html5lib sets
|
||||
# charEncoding to UTF-8 if it gets Unicode input.
|
||||
doc.original_encoding = None
|
||||
else:
|
||||
original_encoding = parser.tokenizer.stream.charEncoding[0]
|
||||
if not isinstance(original_encoding, str):
|
||||
# In 0.99999999 and up, the encoding is an html5lib
|
||||
# Encoding object. We want to use a string for compatibility
|
||||
# with other tree builders.
|
||||
original_encoding = original_encoding.name
|
||||
doc.original_encoding = original_encoding
|
||||
self.underlying_builder.parser = None
|
||||
|
||||
doc.original_encoding = parser.tokenizer.stream.charEncoding[0]
|
||||
|
||||
def create_treebuilder(self, namespaceHTMLElements):
|
||||
self.underlying_builder = TreeBuilderForHtml5lib(
|
||||
namespaceHTMLElements, self.soup,
|
||||
store_line_numbers=self.store_line_numbers
|
||||
)
|
||||
self.soup, namespaceHTMLElements)
|
||||
return self.underlying_builder
|
||||
|
||||
def test_fragment_to_document(self, fragment):
|
||||
@@ -127,30 +73,12 @@ class HTML5TreeBuilder(HTMLTreeBuilder):
|
||||
return '<html><head></head><body>%s</body></html>' % fragment
|
||||
|
||||
|
||||
class TreeBuilderForHtml5lib(treebuilder_base.TreeBuilder):
|
||||
|
||||
def __init__(self, namespaceHTMLElements, soup=None,
|
||||
store_line_numbers=True, **kwargs):
|
||||
if soup:
|
||||
self.soup = soup
|
||||
else:
|
||||
from bs4 import BeautifulSoup
|
||||
# TODO: Why is the parser 'html.parser' here? To avoid an
|
||||
# infinite loop?
|
||||
self.soup = BeautifulSoup(
|
||||
"", "html.parser", store_line_numbers=store_line_numbers,
|
||||
**kwargs
|
||||
)
|
||||
# TODO: What are **kwargs exactly? Should they be passed in
|
||||
# here in addition to/instead of being passed to the BeautifulSoup
|
||||
# constructor?
|
||||
class TreeBuilderForHtml5lib(treebuildersbase.TreeBuilder):
|
||||
|
||||
def __init__(self, soup, namespaceHTMLElements):
|
||||
self.soup = soup
|
||||
super(TreeBuilderForHtml5lib, self).__init__(namespaceHTMLElements)
|
||||
|
||||
# This will be set later to an html5lib.html5parser.HTMLParser
|
||||
# object, which we can use to track the current line number.
|
||||
self.parser = None
|
||||
self.store_line_numbers = store_line_numbers
|
||||
|
||||
def documentClass(self):
|
||||
self.soup.reset()
|
||||
return Element(self.soup, self.soup, None)
|
||||
@@ -164,26 +92,14 @@ class TreeBuilderForHtml5lib(treebuilder_base.TreeBuilder):
|
||||
self.soup.object_was_parsed(doctype)
|
||||
|
||||
def elementClass(self, name, namespace):
|
||||
kwargs = {}
|
||||
if self.parser and self.store_line_numbers:
|
||||
# This represents the point immediately after the end of the
|
||||
# tag. We don't know when the tag started, but we do know
|
||||
# where it ended -- the character just before this one.
|
||||
sourceline, sourcepos = self.parser.tokenizer.stream.position()
|
||||
kwargs['sourceline'] = sourceline
|
||||
kwargs['sourcepos'] = sourcepos-1
|
||||
tag = self.soup.new_tag(name, namespace, **kwargs)
|
||||
|
||||
tag = self.soup.new_tag(name, namespace)
|
||||
return Element(tag, self.soup, namespace)
|
||||
|
||||
def commentClass(self, data):
|
||||
return TextNode(Comment(data), self.soup)
|
||||
|
||||
def fragmentClass(self):
|
||||
from bs4 import BeautifulSoup
|
||||
# TODO: Why is the parser 'html.parser' here? To avoid an
|
||||
# infinite loop?
|
||||
self.soup = BeautifulSoup("", "html.parser")
|
||||
self.soup = BeautifulSoup("")
|
||||
self.soup.name = "[document_fragment]"
|
||||
return Element(self.soup, self.soup, None)
|
||||
|
||||
@@ -195,57 +111,7 @@ class TreeBuilderForHtml5lib(treebuilder_base.TreeBuilder):
|
||||
return self.soup
|
||||
|
||||
def getFragment(self):
|
||||
return treebuilder_base.TreeBuilder.getFragment(self).element
|
||||
|
||||
def testSerializer(self, element):
|
||||
from bs4 import BeautifulSoup
|
||||
rv = []
|
||||
doctype_re = re.compile(r'^(.*?)(?: PUBLIC "(.*?)"(?: "(.*?)")?| SYSTEM "(.*?)")?$')
|
||||
|
||||
def serializeElement(element, indent=0):
|
||||
if isinstance(element, BeautifulSoup):
|
||||
pass
|
||||
if isinstance(element, Doctype):
|
||||
m = doctype_re.match(element)
|
||||
if m:
|
||||
name = m.group(1)
|
||||
if m.lastindex > 1:
|
||||
publicId = m.group(2) or ""
|
||||
systemId = m.group(3) or m.group(4) or ""
|
||||
rv.append("""|%s<!DOCTYPE %s "%s" "%s">""" %
|
||||
(' ' * indent, name, publicId, systemId))
|
||||
else:
|
||||
rv.append("|%s<!DOCTYPE %s>" % (' ' * indent, name))
|
||||
else:
|
||||
rv.append("|%s<!DOCTYPE >" % (' ' * indent,))
|
||||
elif isinstance(element, Comment):
|
||||
rv.append("|%s<!-- %s -->" % (' ' * indent, element))
|
||||
elif isinstance(element, NavigableString):
|
||||
rv.append("|%s\"%s\"" % (' ' * indent, element))
|
||||
else:
|
||||
if element.namespace:
|
||||
name = "%s %s" % (prefixes[element.namespace],
|
||||
element.name)
|
||||
else:
|
||||
name = element.name
|
||||
rv.append("|%s<%s>" % (' ' * indent, name))
|
||||
if element.attrs:
|
||||
attributes = []
|
||||
for name, value in list(element.attrs.items()):
|
||||
if isinstance(name, NamespacedAttribute):
|
||||
name = "%s %s" % (prefixes[name.namespace], name.name)
|
||||
if isinstance(value, list):
|
||||
value = " ".join(value)
|
||||
attributes.append((name, value))
|
||||
|
||||
for name, value in sorted(attributes):
|
||||
rv.append('|%s%s="%s"' % (' ' * (indent + 2), name, value))
|
||||
indent += 2
|
||||
for child in element.children:
|
||||
serializeElement(child, indent)
|
||||
serializeElement(element, 0)
|
||||
|
||||
return "\n".join(rv)
|
||||
return treebuildersbase.TreeBuilder.getFragment(self).element
|
||||
|
||||
class AttrList(object):
|
||||
def __init__(self, element):
|
||||
@@ -256,14 +122,14 @@ class AttrList(object):
|
||||
def __setitem__(self, name, value):
|
||||
# If this attribute is a multi-valued attribute for this element,
|
||||
# turn its value into a list.
|
||||
list_attr = self.element.cdata_list_attributes or {}
|
||||
if (name in list_attr.get('*', [])
|
||||
list_attr = HTML5TreeBuilder.cdata_list_attributes
|
||||
if (name in list_attr['*']
|
||||
or (self.element.name in list_attr
|
||||
and name in list_attr.get(self.element.name, []))):
|
||||
and name in list_attr[self.element.name])):
|
||||
# A node that is being cloned may have already undergone
|
||||
# this procedure.
|
||||
if not isinstance(value, list):
|
||||
value = nonwhitespace_re.findall(value)
|
||||
value = whitespace_re.split(value)
|
||||
self.element[name] = value
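A rough illustration of the multi-valued attribute splitting handled above — a minimal sketch against the public BeautifulSoup API rather than the internal AttrList, assuming bs4 is installed:

    from bs4 import BeautifulSoup

    soup = BeautifulSoup('<p class="a b" id="x">hi</p>', 'html.parser')
    print(soup.p['class'])  # ['a', 'b'] -- whitespace-separated values become a list
    print(soup.p['id'])     # 'x' -- "id" is not a cdata-list attribute, stays a string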
|
||||
def items(self):
|
||||
return list(self.attrs.items())
|
||||
@@ -277,9 +143,9 @@ class AttrList(object):
|
||||
return name in list(self.attrs.keys())
|
||||
|
||||
|
||||
class Element(treebuilder_base.Node):
|
||||
class Element(treebuildersbase.Node):
|
||||
def __init__(self, element, soup, namespace):
|
||||
treebuilder_base.Node.__init__(self, element.name)
|
||||
treebuildersbase.Node.__init__(self, element.name)
|
||||
self.element = element
|
||||
self.soup = soup
|
||||
self.namespace = namespace
|
||||
@@ -298,15 +164,13 @@ class Element(treebuilder_base.Node):
|
||||
child = node
|
||||
elif node.element.__class__ == NavigableString:
|
||||
string_child = child = node.element
|
||||
node.parent = self
|
||||
else:
|
||||
child = node.element
|
||||
node.parent = self
|
||||
|
||||
if not isinstance(child, str) and child.parent is not None:
|
||||
node.element.extract()
|
||||
|
||||
if (string_child is not None and self.element.contents
|
||||
if (string_child and self.element.contents
|
||||
and self.element.contents[-1].__class__ == NavigableString):
|
||||
# We are appending a string onto another string.
|
||||
# TODO This has O(n^2) performance, for input like
|
||||
@@ -339,12 +203,12 @@ class Element(treebuilder_base.Node):
|
||||
most_recent_element=most_recent_element)
|
||||
|
||||
def getAttributes(self):
|
||||
if isinstance(self.element, Comment):
|
||||
return {}
|
||||
return AttrList(self.element)
|
||||
|
||||
def setAttributes(self, attributes):
|
||||
|
||||
if attributes is not None and len(attributes) > 0:
|
||||
|
||||
converted_attributes = []
|
||||
for name, value in list(attributes.items()):
|
||||
if isinstance(name, tuple):
|
||||
@@ -366,11 +230,11 @@ class Element(treebuilder_base.Node):
|
||||
attributes = property(getAttributes, setAttributes)
|
||||
|
||||
def insertText(self, data, insertBefore=None):
|
||||
text = TextNode(self.soup.new_string(data), self.soup)
|
||||
if insertBefore:
|
||||
self.insertBefore(text, insertBefore)
|
||||
text = TextNode(self.soup.new_string(data), self.soup)
|
||||
self.insertBefore(data, insertBefore)
|
||||
else:
|
||||
self.appendChild(text)
|
||||
self.appendChild(data)
|
||||
|
||||
def insertBefore(self, node, refNode):
|
||||
index = self.element.index(refNode.element)
|
||||
@@ -389,10 +253,9 @@ class Element(treebuilder_base.Node):
|
||||
|
||||
def reparentChildren(self, new_parent):
|
||||
"""Move all of this tag's children into another tag."""
|
||||
# print("MOVE", self.element.contents)
|
||||
# print("FROM", self.element)
|
||||
# print("TO", new_parent.element)
|
||||
|
||||
# print "MOVE", self.element.contents
|
||||
# print "FROM", self.element
|
||||
# print "TO", new_parent.element
|
||||
element = self.element
|
||||
new_parent_element = new_parent.element
|
||||
# Determine what this tag's next_element will be once all the children
|
||||
@@ -411,35 +274,29 @@ class Element(treebuilder_base.Node):
|
||||
new_parents_last_descendant_next_element = new_parent_element.next_element
|
||||
|
||||
to_append = element.contents
|
||||
append_after = new_parent_element.contents
|
||||
if len(to_append) > 0:
|
||||
# Set the first child's previous_element and previous_sibling
|
||||
# to elements within the new parent
|
||||
first_child = to_append[0]
|
||||
if new_parents_last_descendant is not None:
|
||||
if new_parents_last_descendant:
|
||||
first_child.previous_element = new_parents_last_descendant
|
||||
else:
|
||||
first_child.previous_element = new_parent_element
|
||||
first_child.previous_sibling = new_parents_last_child
|
||||
if new_parents_last_descendant is not None:
|
||||
if new_parents_last_descendant:
|
||||
new_parents_last_descendant.next_element = first_child
|
||||
else:
|
||||
new_parent_element.next_element = first_child
|
||||
if new_parents_last_child is not None:
|
||||
if new_parents_last_child:
|
||||
new_parents_last_child.next_sibling = first_child
|
||||
|
||||
# Find the very last element being moved. It is now the
|
||||
# parent's last descendant. It has no .next_sibling and
|
||||
# its .next_element is whatever the previous last
|
||||
# descendant had.
|
||||
last_childs_last_descendant = to_append[-1]._last_descendant(False, True)
|
||||
|
||||
last_childs_last_descendant.next_element = new_parents_last_descendant_next_element
|
||||
if new_parents_last_descendant_next_element is not None:
|
||||
# TODO: This code has no test coverage and I'm not sure
|
||||
# how to get html5lib to go through this path, but it's
|
||||
# just the other side of the previous line.
|
||||
new_parents_last_descendant_next_element.previous_element = last_childs_last_descendant
|
||||
last_childs_last_descendant.next_sibling = None
|
||||
# Fix the last child's next_element and next_sibling
|
||||
last_child = to_append[-1]
|
||||
last_child.next_element = new_parents_last_descendant_next_element
|
||||
if new_parents_last_descendant_next_element:
|
||||
new_parents_last_descendant_next_element.previous_element = last_child
|
||||
last_child.next_sibling = None
|
||||
|
||||
for child in to_append:
|
||||
child.parent = new_parent_element
|
||||
@@ -449,9 +306,9 @@ class Element(treebuilder_base.Node):
|
||||
element.contents = []
|
||||
element.next_element = final_next_element
|
||||
|
||||
# print("DONE WITH MOVE")
|
||||
# print("FROM", self.element)
|
||||
# print("TO", new_parent_element)
|
||||
# print "DONE WITH MOVE"
|
||||
# print "FROM", self.element
|
||||
# print "TO", new_parent_element
|
||||
|
||||
def cloneNode(self):
|
||||
tag = self.soup.new_tag(self.element.name, self.namespace)
|
||||
@@ -464,7 +321,7 @@ class Element(treebuilder_base.Node):
|
||||
return self.element.contents
|
||||
|
||||
def getNameTuple(self):
|
||||
if self.namespace == None:
|
||||
if self.namespace is None:
|
||||
return namespaces["html"], self.name
|
||||
else:
|
||||
return self.namespace, self.name
|
||||
@@ -473,7 +330,7 @@ class Element(treebuilder_base.Node):
|
||||
|
||||
class TextNode(Element):
|
||||
def __init__(self, element, soup):
|
||||
treebuilder_base.Node.__init__(self, None)
|
||||
treebuildersbase.Node.__init__(self, None)
|
||||
self.element = element
|
||||
self.soup = soup
|
||||
|
||||
|
||||
@@ -1,18 +1,35 @@
|
||||
# encoding: utf-8
|
||||
"""Use the HTMLParser library to parse HTML files that aren't too bad."""
|
||||
|
||||
# Use of this source code is governed by the MIT license.
|
||||
__license__ = "MIT"
|
||||
|
||||
__all__ = [
|
||||
'HTMLParserTreeBuilder',
|
||||
]
|
||||
|
||||
from html.parser import HTMLParser
|
||||
|
||||
try:
|
||||
from html.parser import HTMLParseError
|
||||
except ImportError as e:
|
||||
# HTMLParseError is removed in Python 3.5. Since it can never be
|
||||
# thrown in 3.5, we can just define our own class as a placeholder.
|
||||
class HTMLParseError(Exception):
|
||||
pass
|
||||
|
||||
import sys
|
||||
import warnings
|
||||
|
||||
# Starting in Python 3.2, the HTMLParser constructor takes a 'strict'
|
||||
# argument, which we'd like to set to False. Unfortunately,
|
||||
# http://bugs.python.org/issue13273 makes strict=True a better bet
|
||||
# before Python 3.2.3.
|
||||
#
|
||||
# At the end of this file, we monkeypatch HTMLParser so that
|
||||
# strict=True works well on Python 3.2.2.
|
||||
major, minor, release = sys.version_info[:3]
|
||||
CONSTRUCTOR_TAKES_STRICT = major == 3 and minor == 2 and release >= 3
|
||||
CONSTRUCTOR_STRICT_IS_DEPRECATED = major == 3 and minor == 3
|
||||
CONSTRUCTOR_TAKES_CONVERT_CHARREFS = major == 3 and minor >= 4
|
||||
|
||||
|
||||
from bs4.element import (
|
||||
CData,
|
||||
Comment,
|
||||
@@ -23,8 +40,6 @@ from bs4.element import (
|
||||
from bs4.dammit import EntitySubstitution, UnicodeDammit
|
||||
|
||||
from bs4.builder import (
|
||||
DetectsXMLParsedAsHTML,
|
||||
ParserRejectedMarkup,
|
||||
HTML,
|
||||
HTMLTreeBuilder,
|
||||
STRICT,
|
||||
@@ -33,84 +48,8 @@ from bs4.builder import (
|
||||
|
||||
HTMLPARSER = 'html.parser'
|
||||
|
||||
class BeautifulSoupHTMLParser(HTMLParser, DetectsXMLParsedAsHTML):
|
||||
"""A subclass of the Python standard library's HTMLParser class, which
|
||||
listens for HTMLParser events and translates them into calls
|
||||
to Beautiful Soup's tree construction API.
|
||||
"""
|
||||
|
||||
# Strategies for handling duplicate attributes
|
||||
IGNORE = 'ignore'
|
||||
REPLACE = 'replace'
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
"""Constructor.
|
||||
|
||||
:param on_duplicate_attribute: A strategy for what to do if a
|
||||
tag includes the same attribute more than once. Accepted
|
||||
values are: REPLACE (replace earlier values with later
|
||||
ones, the default), IGNORE (keep the earliest value
|
||||
encountered), or a callable. A callable must take three
|
||||
arguments: the dictionary of attributes already processed,
|
||||
the name of the duplicate attribute, and the most recent value
|
||||
encountered.
|
||||
"""
|
||||
self.on_duplicate_attribute = kwargs.pop(
|
||||
'on_duplicate_attribute', self.REPLACE
|
||||
)
|
||||
HTMLParser.__init__(self, *args, **kwargs)
|
||||
|
||||
# Keep a list of empty-element tags that were encountered
|
||||
# without an explicit closing tag. If we encounter a closing tag
|
||||
# of this type, we'll associate it with one of those entries.
|
||||
#
|
||||
# This isn't a stack because we don't care about the
|
||||
# order. It's a list of closing tags we've already handled and
|
||||
# will ignore, assuming they ever show up.
|
||||
self.already_closed_empty_element = []
|
||||
|
||||
self._initialize_xml_detector()
|
||||
|
||||
def error(self, message):
|
||||
# NOTE: This method is required so long as Python 3.9 is
|
||||
# supported. The corresponding code is removed from HTMLParser
|
||||
# in 3.5, but not removed from ParserBase until 3.10.
|
||||
# https://github.com/python/cpython/issues/76025
|
||||
#
|
||||
# The original implementation turned the error into a warning,
|
||||
# but in every case I discovered, this made HTMLParser
|
||||
# immediately crash with an error message that was less
|
||||
# helpful than the warning. The new implementation makes it
|
||||
# more clear that html.parser just can't parse this
|
||||
# markup. The 3.10 implementation does the same, though it
|
||||
# raises AssertionError rather than calling a method. (We
|
||||
# catch this error and wrap it in a ParserRejectedMarkup.)
|
||||
raise ParserRejectedMarkup(message)
|
||||
|
||||
def handle_startendtag(self, name, attrs):
|
||||
"""Handle an incoming empty-element tag.
|
||||
|
||||
This is only called when the markup looks like <tag/>.
|
||||
|
||||
:param name: Name of the tag.
|
||||
:param attrs: Dictionary of the tag's attributes.
|
||||
"""
|
||||
# is_startend() tells handle_starttag not to close the tag
|
||||
# just because its name matches a known empty-element tag. We
|
||||
# know that this is an empty-element tag and we want to call
|
||||
# handle_endtag ourselves.
|
||||
tag = self.handle_starttag(name, attrs, handle_empty_element=False)
|
||||
self.handle_endtag(name)
|
||||
|
||||
def handle_starttag(self, name, attrs, handle_empty_element=True):
|
||||
"""Handle an opening tag, e.g. '<tag>'
|
||||
|
||||
:param name: Name of the tag.
|
||||
:param attrs: Dictionary of the tag's attributes.
|
||||
:param handle_empty_element: True if this tag is known to be
|
||||
an empty-element tag (i.e. there is not expected to be any
|
||||
closing tag).
|
||||
"""
|
||||
class BeautifulSoupHTMLParser(HTMLParser):
|
||||
def handle_starttag(self, name, attrs):
|
||||
# XXX namespace
|
||||
attr_dict = {}
|
||||
for key, value in attrs:
|
||||
@@ -118,78 +57,20 @@ class BeautifulSoupHTMLParser(HTMLParser, DetectsXMLParsedAsHTML):
|
||||
# for consistency with the other tree builders.
|
||||
if value is None:
|
||||
value = ''
|
||||
if key in attr_dict:
|
||||
# A single attribute shows up multiple times in this
|
||||
# tag. How to handle it depends on the
|
||||
# on_duplicate_attribute setting.
|
||||
on_dupe = self.on_duplicate_attribute
|
||||
if on_dupe == self.IGNORE:
|
||||
pass
|
||||
elif on_dupe in (None, self.REPLACE):
|
||||
attr_dict[key] = value
|
||||
else:
|
||||
on_dupe(attr_dict, key, value)
|
||||
else:
|
||||
attr_dict[key] = value
|
||||
attr_dict[key] = value
|
||||
attrvalue = '""'
|
||||
#print("START", name)
|
||||
sourceline, sourcepos = self.getpos()
|
||||
tag = self.soup.handle_starttag(
|
||||
name, None, None, attr_dict, sourceline=sourceline,
|
||||
sourcepos=sourcepos
|
||||
)
|
||||
if tag and tag.is_empty_element and handle_empty_element:
|
||||
# Unlike other parsers, html.parser doesn't send separate end tag
|
||||
# events for empty-element tags. (It's handled in
|
||||
# handle_startendtag, but only if the original markup looked like
|
||||
# <tag/>.)
|
||||
#
|
||||
# So we need to call handle_endtag() ourselves. Since we
|
||||
# know the start event is identical to the end event, we
|
||||
# don't want handle_endtag() to cross off any previous end
|
||||
# events for tags of this name.
|
||||
self.handle_endtag(name, check_already_closed=False)
|
||||
self.soup.handle_starttag(name, None, None, attr_dict)
|
||||
|
||||
# But we might encounter an explicit closing tag for this tag
|
||||
# later on. If so, we want to ignore it.
|
||||
self.already_closed_empty_element.append(name)
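A minimal sketch of the duplicate-attribute strategies handled in the block above (assumes a bs4 release that accepts the on_duplicate_attribute keyword, roughly 4.9.1 or later):

    from bs4 import BeautifulSoup

    markup = '<a class="first" class="second">link</a>'
    # Default strategy (REPLACE): the later value wins.
    print(BeautifulSoup(markup, 'html.parser').a['class'])            # ['second']
    # IGNORE: keep the value encountered first.
    print(BeautifulSoup(markup, 'html.parser',
                        on_duplicate_attribute='ignore').a['class'])  # ['first']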
|
||||
def handle_endtag(self, name):
|
||||
self.soup.handle_endtag(name)
|
||||
|
||||
if self._root_tag is None:
|
||||
self._root_tag_encountered(name)
|
||||
|
||||
def handle_endtag(self, name, check_already_closed=True):
|
||||
"""Handle a closing tag, e.g. '</tag>'
|
||||
|
||||
:param name: A tag name.
|
||||
:param check_already_closed: True if this tag is expected to
|
||||
be the closing portion of an empty-element tag,
|
||||
e.g. '<tag></tag>'.
|
||||
"""
|
||||
#print("END", name)
|
||||
if check_already_closed and name in self.already_closed_empty_element:
|
||||
# This is a redundant end tag for an empty-element tag.
|
||||
# We've already called handle_endtag() for it, so just
|
||||
# check it off the list.
|
||||
#print("ALREADY CLOSED", name)
|
||||
self.already_closed_empty_element.remove(name)
|
||||
else:
|
||||
self.soup.handle_endtag(name)
|
||||
|
||||
def handle_data(self, data):
|
||||
"""Handle some textual data that shows up between tags."""
|
||||
self.soup.handle_data(data)
|
||||
|
||||
def handle_charref(self, name):
|
||||
"""Handle a numeric character reference by converting it to the
|
||||
corresponding Unicode character and treating it as textual
|
||||
data.
|
||||
|
||||
:param name: Character number, possibly in hexadecimal.
|
||||
"""
|
||||
# TODO: This was originally a workaround for a bug in
|
||||
# HTMLParser. (http://bugs.python.org/issue13633) The bug has
|
||||
# been fixed, but removing this code still makes some
|
||||
# Beautiful Soup tests fail. This needs investigation.
|
||||
# XXX workaround for a bug in HTMLParser. Remove this once
|
||||
# it's fixed in all supported versions.
|
||||
# http://bugs.python.org/issue13633
|
||||
if name.startswith('x'):
|
||||
real_name = int(name.lstrip('x'), 16)
|
||||
elif name.startswith('X'):
|
||||
@@ -197,71 +78,37 @@ class BeautifulSoupHTMLParser(HTMLParser, DetectsXMLParsedAsHTML):
|
||||
else:
|
||||
real_name = int(name)
|
||||
|
||||
data = None
|
||||
if real_name < 256:
|
||||
# HTML numeric entities are supposed to reference Unicode
|
||||
# code points, but sometimes they reference code points in
|
||||
# some other encoding (ahem, Windows-1252). E.g. &#147;
|
||||
# instead of &#8220; for LEFT DOUBLE QUOTATION MARK. This
|
||||
# code tries to detect this situation and compensate.
|
||||
for encoding in (self.soup.original_encoding, 'windows-1252'):
|
||||
if not encoding:
|
||||
continue
|
||||
try:
|
||||
data = bytearray([real_name]).decode(encoding)
|
||||
except UnicodeDecodeError as e:
|
||||
pass
|
||||
if not data:
|
||||
try:
|
||||
data = chr(real_name)
|
||||
except (ValueError, OverflowError) as e:
|
||||
pass
|
||||
data = data or "\N{REPLACEMENT CHARACTER}"
|
||||
try:
|
||||
data = chr(real_name)
|
||||
except (ValueError, OverflowError) as e:
|
||||
data = "\N{REPLACEMENT CHARACTER}"
|
||||
|
||||
self.handle_data(data)
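The compensation logic above can be read as a standalone helper; a simplified sketch (it stops at the first encoding that succeeds, which the builder's version does not bother to do):

    def resolve_charref(code_point, original_encoding=None):
        data = None
        if code_point < 256:
            # The reference may really be a Windows-1252 byte in disguise.
            for encoding in (original_encoding, 'windows-1252'):
                if not encoding:
                    continue
                try:
                    data = bytearray([code_point]).decode(encoding)
                    break
                except UnicodeDecodeError:
                    pass
        if not data:
            try:
                data = chr(code_point)
            except (ValueError, OverflowError):
                pass
        return data or "\N{REPLACEMENT CHARACTER}"

    print(resolve_charref(147))   # U+201C via the windows-1252 fallback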
|
||||
|
||||
def handle_entityref(self, name):
|
||||
"""Handle a named entity reference by converting it to the
|
||||
corresponding Unicode character(s) and treating it as textual
|
||||
data.
|
||||
|
||||
:param name: Name of the entity reference.
|
||||
"""
|
||||
character = EntitySubstitution.HTML_ENTITY_TO_CHARACTER.get(name)
|
||||
if character is not None:
|
||||
data = character
|
||||
else:
|
||||
# If this were XML, it would be ambiguous whether "&foo"
|
||||
# was a character entity reference with a missing
|
||||
# semicolon or the literal string "&foo". Since this is
|
||||
# HTML, we have a complete list of all character entity references,
|
||||
# and this one wasn't found, so assume it's the literal string "&foo".
|
||||
data = "&%s" % name
|
||||
data = "&%s;" % name
|
||||
self.handle_data(data)
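The lookup above uses a table that can be imported directly; a small sketch using the same bs4.dammit module referenced elsewhere in this diff:

    from bs4.dammit import EntitySubstitution

    for name in ('amp', 'eacute', 'notarealentity'):
        character = EntitySubstitution.HTML_ENTITY_TO_CHARACTER.get(name)
        # Unknown names fall back to the literal string, as handle_entityref() does.
        print(name, '->', character if character is not None else '&%s' % name)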
|
||||
|
||||
def handle_comment(self, data):
|
||||
"""Handle an HTML comment.
|
||||
|
||||
:param data: The text of the comment.
|
||||
"""
|
||||
self.soup.endData()
|
||||
self.soup.handle_data(data)
|
||||
self.soup.endData(Comment)
|
||||
|
||||
def handle_decl(self, data):
|
||||
"""Handle a DOCTYPE declaration.
|
||||
|
||||
:param data: The text of the declaration.
|
||||
"""
|
||||
self.soup.endData()
|
||||
data = data[len("DOCTYPE "):]
|
||||
if data.startswith("DOCTYPE "):
|
||||
data = data[len("DOCTYPE "):]
|
||||
elif data == 'DOCTYPE':
|
||||
# i.e. "<!DOCTYPE>"
|
||||
data = ''
|
||||
self.soup.handle_data(data)
|
||||
self.soup.endData(Doctype)
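A tiny sketch of the DOCTYPE text normalization shown above, separated out as a pure function:

    def normalize_doctype(decl_text):
        if decl_text.startswith("DOCTYPE "):
            return decl_text[len("DOCTYPE "):]
        if decl_text == 'DOCTYPE':
            # i.e. "<!DOCTYPE>"
            return ''
        return decl_text

    print(normalize_doctype('DOCTYPE html'))   # 'html'
    print(normalize_doctype('DOCTYPE'))        # ''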
|
||||
|
||||
def unknown_decl(self, data):
|
||||
"""Handle a declaration of unknown type -- probably a CDATA block.
|
||||
|
||||
:param data: The text of the declaration.
|
||||
"""
|
||||
if data.upper().startswith('CDATA['):
|
||||
cls = CData
|
||||
data = data[len('CDATA['):]
|
||||
@@ -272,116 +119,144 @@ class BeautifulSoupHTMLParser(HTMLParser, DetectsXMLParsedAsHTML):
|
||||
self.soup.endData(cls)
|
||||
|
||||
def handle_pi(self, data):
|
||||
"""Handle a processing instruction.
|
||||
|
||||
:param data: The text of the instruction.
|
||||
"""
|
||||
self.soup.endData()
|
||||
self.soup.handle_data(data)
|
||||
self._document_might_be_xml(data)
|
||||
self.soup.endData(ProcessingInstruction)
|
||||
|
||||
|
||||
class HTMLParserTreeBuilder(HTMLTreeBuilder):
|
||||
"""A Beautiful soup `TreeBuilder` that uses the `HTMLParser` parser,
|
||||
found in the Python standard library.
|
||||
"""
|
||||
|
||||
is_xml = False
|
||||
picklable = True
|
||||
NAME = HTMLPARSER
|
||||
features = [NAME, HTML, STRICT]
|
||||
|
||||
# The html.parser knows which line number and position in the
|
||||
# original file is the source of an element.
|
||||
TRACKS_LINE_NUMBERS = True
|
||||
def __init__(self, *args, **kwargs):
|
||||
if CONSTRUCTOR_TAKES_STRICT and not CONSTRUCTOR_STRICT_IS_DEPRECATED:
|
||||
kwargs['strict'] = False
|
||||
if CONSTRUCTOR_TAKES_CONVERT_CHARREFS:
|
||||
kwargs['convert_charrefs'] = False
|
||||
self.parser_args = (args, kwargs)
|
||||
|
||||
def __init__(self, parser_args=None, parser_kwargs=None, **kwargs):
|
||||
"""Constructor.
|
||||
|
||||
:param parser_args: Positional arguments to pass into
|
||||
the BeautifulSoupHTMLParser constructor, once it's
|
||||
invoked.
|
||||
:param parser_kwargs: Keyword arguments to pass into
|
||||
the BeautifulSoupHTMLParser constructor, once it's
|
||||
invoked.
|
||||
:param kwargs: Keyword arguments for the superclass constructor.
|
||||
"""
|
||||
# Some keyword arguments will be pulled out of kwargs and placed
|
||||
# into parser_kwargs.
|
||||
extra_parser_kwargs = dict()
|
||||
for arg in ('on_duplicate_attribute',):
|
||||
if arg in kwargs:
|
||||
value = kwargs.pop(arg)
|
||||
extra_parser_kwargs[arg] = value
|
||||
super(HTMLParserTreeBuilder, self).__init__(**kwargs)
|
||||
parser_args = parser_args or []
|
||||
parser_kwargs = parser_kwargs or {}
|
||||
parser_kwargs.update(extra_parser_kwargs)
|
||||
parser_kwargs['convert_charrefs'] = False
|
||||
self.parser_args = (parser_args, parser_kwargs)
|
||||
|
||||
def prepare_markup(self, markup, user_specified_encoding=None,
|
||||
document_declared_encoding=None, exclude_encodings=None):
|
||||
|
||||
"""Run any preliminary steps necessary to make incoming markup
|
||||
acceptable to the parser.
|
||||
|
||||
:param markup: Some markup -- probably a bytestring.
|
||||
:param user_specified_encoding: The user asked to try this encoding.
|
||||
:param document_declared_encoding: The markup itself claims to be
|
||||
in this encoding.
|
||||
:param exclude_encodings: The user asked _not_ to try any of
|
||||
these encodings.
|
||||
|
||||
:yield: A series of 4-tuples:
|
||||
(markup, encoding, declared encoding,
|
||||
has undergone character replacement)
|
||||
|
||||
Each 4-tuple represents a strategy for converting the
|
||||
document to Unicode and parsing it. Each strategy will be tried
|
||||
in turn.
|
||||
"""
|
||||
:return: A 4-tuple (markup, original encoding, encoding
|
||||
declared within markup, whether any characters had to be
|
||||
replaced with REPLACEMENT CHARACTER).
|
||||
"""
|
||||
if isinstance(markup, str):
|
||||
# Parse Unicode as-is.
|
||||
yield (markup, None, None, False)
|
||||
return
|
||||
|
||||
# Ask UnicodeDammit to sniff the most likely encoding.
|
||||
|
||||
# This was provided by the end-user; treat it as a known
|
||||
# definite encoding per the algorithm laid out in the HTML5
|
||||
# spec. (See the EncodingDetector class for details.)
|
||||
known_definite_encodings = [user_specified_encoding]
|
||||
|
||||
# This was found in the document; treat it as a slightly lower-priority
|
||||
# user encoding.
|
||||
user_encodings = [document_declared_encoding]
|
||||
|
||||
try_encodings = [user_specified_encoding, document_declared_encoding]
|
||||
dammit = UnicodeDammit(
|
||||
markup,
|
||||
known_definite_encodings=known_definite_encodings,
|
||||
user_encodings=user_encodings,
|
||||
is_html=True,
|
||||
exclude_encodings=exclude_encodings
|
||||
)
|
||||
dammit = UnicodeDammit(markup, try_encodings, is_html=True,
|
||||
exclude_encodings=exclude_encodings)
|
||||
yield (dammit.markup, dammit.original_encoding,
|
||||
dammit.declared_html_encoding,
|
||||
dammit.contains_replacement_characters)
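For the bytestring path above, the single strategy yielded boils down to one UnicodeDammit call; a minimal sketch (the latin-1 input is an assumed example, not taken from this diff):

    from bs4.dammit import UnicodeDammit

    raw = '<p>caf\xe9</p>'.encode('latin-1')
    dammit = UnicodeDammit(raw, ['latin-1'], is_html=True)
    print(dammit.markup)              # the decoded markup handed to the parser
    print(dammit.original_encoding)   # the encoding that actually worked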
|
||||
|
||||
def feed(self, markup):
|
||||
"""Run some incoming markup through some parsing process,
|
||||
populating the `BeautifulSoup` object in self.soup.
|
||||
"""
|
||||
args, kwargs = self.parser_args
|
||||
parser = BeautifulSoupHTMLParser(*args, **kwargs)
|
||||
parser.soup = self.soup
|
||||
try:
|
||||
parser.feed(markup)
|
||||
parser.close()
|
||||
except AssertionError as e:
|
||||
# html.parser raises AssertionError in rare cases to
|
||||
# indicate a fatal problem with the markup, especially
|
||||
# when there's an error in the doctype declaration.
|
||||
raise ParserRejectedMarkup(e)
|
||||
parser.already_closed_empty_element = []
|
||||
except HTMLParseError as e:
|
||||
warnings.warn(RuntimeWarning(
|
||||
"Python's built-in HTMLParser cannot parse the given document. This is not a bug in Beautiful Soup. The best solution is to install an external parser (lxml or html5lib), and use Beautiful Soup with that parser. See http://www.crummy.com/software/BeautifulSoup/bs4/doc/#installing-a-parser for help."))
|
||||
raise e
|
||||
|
||||
# Patch 3.2 versions of HTMLParser earlier than 3.2.3 to use some
|
||||
# 3.2.3 code. This ensures they don't treat markup like <p></p> as a
|
||||
# string.
|
||||
#
|
||||
# XXX This code can be removed once most Python 3 users are on 3.2.3.
|
||||
if major == 3 and minor == 2 and not CONSTRUCTOR_TAKES_STRICT:
|
||||
import re
|
||||
attrfind_tolerant = re.compile(
|
||||
r'\s*((?<=[\'"\s])[^\s/>][^\s/=>]*)(\s*=+\s*'
|
||||
r'(\'[^\']*\'|"[^"]*"|(?![\'"])[^>\s]*))?')
|
||||
HTMLParserTreeBuilder.attrfind_tolerant = attrfind_tolerant
|
||||
|
||||
locatestarttagend = re.compile(r"""
|
||||
<[a-zA-Z][-.a-zA-Z0-9:_]* # tag name
|
||||
(?:\s+ # whitespace before attribute name
|
||||
(?:[a-zA-Z_][-.:a-zA-Z0-9_]* # attribute name
|
||||
(?:\s*=\s* # value indicator
|
||||
(?:'[^']*' # LITA-enclosed value
|
||||
|\"[^\"]*\" # LIT-enclosed value
|
||||
|[^'\">\s]+ # bare value
|
||||
)
|
||||
)?
|
||||
)
|
||||
)*
|
||||
\s* # trailing whitespace
|
||||
""", re.VERBOSE)
|
||||
BeautifulSoupHTMLParser.locatestarttagend = locatestarttagend
|
||||
|
||||
from html.parser import tagfind, attrfind
|
||||
|
||||
def parse_starttag(self, i):
|
||||
self.__starttag_text = None
|
||||
endpos = self.check_for_whole_start_tag(i)
|
||||
if endpos < 0:
|
||||
return endpos
|
||||
rawdata = self.rawdata
|
||||
self.__starttag_text = rawdata[i:endpos]
|
||||
|
||||
# Now parse the data between i+1 and j into a tag and attrs
|
||||
attrs = []
|
||||
match = tagfind.match(rawdata, i+1)
|
||||
assert match, 'unexpected call to parse_starttag()'
|
||||
k = match.end()
|
||||
self.lasttag = tag = rawdata[i+1:k].lower()
|
||||
while k < endpos:
|
||||
if self.strict:
|
||||
m = attrfind.match(rawdata, k)
|
||||
else:
|
||||
m = attrfind_tolerant.match(rawdata, k)
|
||||
if not m:
|
||||
break
|
||||
attrname, rest, attrvalue = m.group(1, 2, 3)
|
||||
if not rest:
|
||||
attrvalue = None
|
||||
elif attrvalue[:1] == '\'' == attrvalue[-1:] or \
|
||||
attrvalue[:1] == '"' == attrvalue[-1:]:
|
||||
attrvalue = attrvalue[1:-1]
|
||||
if attrvalue:
|
||||
attrvalue = self.unescape(attrvalue)
|
||||
attrs.append((attrname.lower(), attrvalue))
|
||||
k = m.end()
|
||||
|
||||
end = rawdata[k:endpos].strip()
|
||||
if end not in (">", "/>"):
|
||||
lineno, offset = self.getpos()
|
||||
if "\n" in self.__starttag_text:
|
||||
lineno = lineno + self.__starttag_text.count("\n")
|
||||
offset = len(self.__starttag_text) \
|
||||
- self.__starttag_text.rfind("\n")
|
||||
else:
|
||||
offset = offset + len(self.__starttag_text)
|
||||
if self.strict:
|
||||
self.error("junk characters in start tag: %r"
|
||||
% (rawdata[k:endpos][:20],))
|
||||
self.handle_data(rawdata[i:endpos])
|
||||
return endpos
|
||||
if end.endswith('/>'):
|
||||
# XHTML-style empty tag: <span attr="value" />
|
||||
self.handle_startendtag(tag, attrs)
|
||||
else:
|
||||
self.handle_starttag(tag, attrs)
|
||||
if tag in self.CDATA_CONTENT_ELEMENTS:
|
||||
self.set_cdata_mode(tag)
|
||||
return endpos
|
||||
|
||||
def set_cdata_mode(self, elem):
|
||||
self.cdata_elem = elem.lower()
|
||||
self.interesting = re.compile(r'</\s*%s\s*>' % self.cdata_elem, re.I)
|
||||
|
||||
BeautifulSoupHTMLParser.parse_starttag = parse_starttag
|
||||
BeautifulSoupHTMLParser.set_cdata_mode = set_cdata_mode
|
||||
|
||||
CONSTRUCTOR_TAKES_STRICT = True
|
||||
|
||||
@@ -1,28 +1,19 @@
|
||||
# Use of this source code is governed by the MIT license.
|
||||
__license__ = "MIT"
|
||||
|
||||
__all__ = [
|
||||
'LXMLTreeBuilderForXML',
|
||||
'LXMLTreeBuilder',
|
||||
]
|
||||
|
||||
try:
|
||||
from collections.abc import Callable # Python 3.6
|
||||
except ImportError as e:
|
||||
from collections import Callable
|
||||
|
||||
from io import BytesIO
|
||||
from io import StringIO
|
||||
import collections
|
||||
from lxml import etree
|
||||
from bs4.element import (
|
||||
Comment,
|
||||
Doctype,
|
||||
NamespacedAttribute,
|
||||
ProcessingInstruction,
|
||||
XMLProcessingInstruction,
|
||||
)
|
||||
from bs4.builder import (
|
||||
DetectsXMLParsedAsHTML,
|
||||
FAST,
|
||||
HTML,
|
||||
HTMLTreeBuilder,
|
||||
@@ -34,15 +25,10 @@ from bs4.dammit import EncodingDetector
|
||||
|
||||
LXML = 'lxml'
|
||||
|
||||
def _invert(d):
|
||||
"Invert a dictionary."
|
||||
return dict((v,k) for k, v in list(d.items()))
|
||||
|
||||
class LXMLTreeBuilderForXML(TreeBuilder):
|
||||
DEFAULT_PARSER_CLASS = etree.XMLParser
|
||||
|
||||
is_xml = True
|
||||
processing_instruction_class = XMLProcessingInstruction
|
||||
|
||||
NAME = "lxml-xml"
|
||||
ALTERNATE_NAMES = ["xml"]
|
||||
@@ -54,79 +40,26 @@ class LXMLTreeBuilderForXML(TreeBuilder):
|
||||
|
||||
# This namespace mapping is specified in the XML Namespace
|
||||
# standard.
|
||||
DEFAULT_NSMAPS = dict(xml='http://www.w3.org/XML/1998/namespace')
|
||||
DEFAULT_NSMAPS = {'http://www.w3.org/XML/1998/namespace' : "xml"}
|
||||
|
||||
DEFAULT_NSMAPS_INVERTED = _invert(DEFAULT_NSMAPS)
|
||||
|
||||
# NOTE: If we parsed Element objects and looked at .sourceline,
|
||||
# we'd be able to see the line numbers from the original document.
|
||||
# But instead we build an XMLParser or HTMLParser object to serve
|
||||
# as the target of parse messages, and those messages don't include
|
||||
# line numbers.
|
||||
# See: https://bugs.launchpad.net/lxml/+bug/1846906
|
||||
|
||||
def initialize_soup(self, soup):
|
||||
"""Let the BeautifulSoup object know about the standard namespace
|
||||
mapping.
|
||||
|
||||
:param soup: A `BeautifulSoup`.
|
||||
"""
|
||||
super(LXMLTreeBuilderForXML, self).initialize_soup(soup)
|
||||
self._register_namespaces(self.DEFAULT_NSMAPS)
|
||||
|
||||
def _register_namespaces(self, mapping):
|
||||
"""Let the BeautifulSoup object know about namespaces encountered
|
||||
while parsing the document.
|
||||
|
||||
This might be useful later on when creating CSS selectors.
|
||||
|
||||
This will track (almost) all namespaces, even ones that were
|
||||
only in scope for part of the document. If two namespaces have
|
||||
the same prefix, only the first one encountered will be
|
||||
tracked. Un-prefixed namespaces are not tracked.
|
||||
|
||||
:param mapping: A dictionary mapping namespace prefixes to URIs.
|
||||
"""
|
||||
for key, value in list(mapping.items()):
|
||||
# This is 'if key' and not 'if key is not None' because we
|
||||
# don't track un-prefixed namespaces. Soupselect will
|
||||
# treat an un-prefixed namespace as the default, which
|
||||
# causes confusion in some cases.
|
||||
if key and key not in self.soup._namespaces:
|
||||
# Let the BeautifulSoup object know about a new namespace.
|
||||
# If there are multiple namespaces defined with the same
|
||||
# prefix, the first one in the document takes precedence.
|
||||
self.soup._namespaces[key] = value
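To see what the registration above enables, a small sketch against the lxml XML builder (assumes lxml and soupsieve are installed, and that this bs4 release consults the tracked prefixes when select() is given no explicit namespaces, as recent upstream releases do):

    from bs4 import BeautifulSoup

    xml = ('<root xmlns:dc="http://purl.org/dc/elements/1.1/">'
           '<dc:title>example</dc:title></root>')
    soup = BeautifulSoup(xml, 'xml')
    print(soup._namespaces.get('dc'))   # prefix -> URI mapping tracked while parsing
    print(soup.select('dc|title'))      # the prefix is usable in CSS selectors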
|
||||
|
||||
def default_parser(self, encoding):
|
||||
"""Find the default parser for the given encoding.
|
||||
|
||||
:param encoding: A string.
|
||||
:return: Either a parser object or a class, which
|
||||
will be instantiated with default arguments.
|
||||
"""
|
||||
# This can either return a parser object or a class, which
|
||||
# will be instantiated with default arguments.
|
||||
if self._default_parser is not None:
|
||||
return self._default_parser
|
||||
return etree.XMLParser(
|
||||
target=self, strip_cdata=False, recover=True, encoding=encoding)
|
||||
|
||||
def parser_for(self, encoding):
|
||||
"""Instantiate an appropriate parser for the given encoding.
|
||||
|
||||
:param encoding: A string.
|
||||
:return: A parser object such as an `etree.XMLParser`.
|
||||
"""
|
||||
# Use the default parser.
|
||||
parser = self.default_parser(encoding)
|
||||
|
||||
if isinstance(parser, Callable):
|
||||
if isinstance(parser, collections.Callable):
|
||||
# Instantiate the parser with default arguments
|
||||
parser = parser(
|
||||
target=self, strip_cdata=False, recover=True, encoding=encoding
|
||||
)
|
||||
parser = parser(target=self, strip_cdata=False, encoding=encoding)
|
||||
return parser
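The parser produced above drives a "target" object instead of building an lxml tree, matching the feed()/close() calls later in this builder; a stripped-down sketch of that interface (PrintTarget is purely illustrative):

    from lxml import etree

    class PrintTarget:
        def start(self, name, attrs, nsmap=None):
            print('start', name, dict(attrs))
        def end(self, name):
            print('end', name)
        def data(self, content):
            print('data', content)
        def close(self):
            return 'done'

    parser = etree.XMLParser(target=PrintTarget(), strip_cdata=False, recover=True)
    parser.feed('<root><child>text</child></root>')
    print(parser.close())   # 'done'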
|
||||
|
||||
def __init__(self, parser=None, empty_element_tags=None, **kwargs):
|
||||
def __init__(self, parser=None, empty_element_tags=None):
|
||||
# TODO: Issue a warning if parser is present but not a
|
||||
# callable, since that means there's no way to create new
|
||||
# parsers for different encodings.
|
||||
@@ -134,10 +67,8 @@ class LXMLTreeBuilderForXML(TreeBuilder):
|
||||
if empty_element_tags is not None:
|
||||
self.empty_element_tags = set(empty_element_tags)
|
||||
self.soup = None
|
||||
self.nsmaps = [self.DEFAULT_NSMAPS_INVERTED]
|
||||
self.active_namespace_prefixes = [dict(self.DEFAULT_NSMAPS)]
|
||||
super(LXMLTreeBuilderForXML, self).__init__(**kwargs)
|
||||
|
||||
self.nsmaps = [self.DEFAULT_NSMAPS]
|
||||
|
||||
def _getNsTag(self, tag):
|
||||
# Split the namespace URL out of a fully-qualified lxml tag
|
||||
# name. Copied from lxml's src/lxml/sax.py.
|
||||
@@ -149,51 +80,16 @@ class LXMLTreeBuilderForXML(TreeBuilder):
|
||||
def prepare_markup(self, markup, user_specified_encoding=None,
|
||||
exclude_encodings=None,
|
||||
document_declared_encoding=None):
|
||||
"""Run any preliminary steps necessary to make incoming markup
|
||||
acceptable to the parser.
|
||||
|
||||
lxml really wants to get a bytestring and convert it to
|
||||
Unicode itself. So instead of using UnicodeDammit to convert
|
||||
the bytestring to Unicode using different encodings, this
|
||||
implementation uses EncodingDetector to iterate over the
|
||||
encodings, and tell lxml to try to parse the document as each
|
||||
one in turn.
|
||||
|
||||
:param markup: Some markup -- hopefully a bytestring.
|
||||
:param user_specified_encoding: The user asked to try this encoding.
|
||||
:param document_declared_encoding: The markup itself claims to be
|
||||
in this encoding.
|
||||
:param exclude_encodings: The user asked _not_ to try any of
|
||||
these encodings.
|
||||
|
||||
:yield: A series of 4-tuples:
|
||||
"""
|
||||
:yield: A series of 4-tuples.
|
||||
(markup, encoding, declared encoding,
|
||||
has undergone character replacement)
|
||||
|
||||
Each 4-tuple represents a strategy for converting the
|
||||
document to Unicode and parsing it. Each strategy will be tried
|
||||
in turn.
|
||||
Each 4-tuple represents a strategy for parsing the document.
|
||||
"""
|
||||
is_html = not self.is_xml
|
||||
if is_html:
|
||||
self.processing_instruction_class = ProcessingInstruction
|
||||
# We're in HTML mode, so if we're given XML, that's worth
|
||||
# noting.
|
||||
DetectsXMLParsedAsHTML.warn_if_markup_looks_like_xml(
|
||||
markup, stacklevel=3
|
||||
)
|
||||
else:
|
||||
self.processing_instruction_class = XMLProcessingInstruction
|
||||
|
||||
if isinstance(markup, str):
|
||||
# We were given Unicode. Maybe lxml can parse Unicode on
|
||||
# this system?
|
||||
|
||||
# TODO: This is a workaround for
|
||||
# https://bugs.launchpad.net/lxml/+bug/1948551.
|
||||
# We can remove it once the upstream issue is fixed.
|
||||
if len(markup) > 0 and markup[0] == u'\N{BYTE ORDER MARK}':
|
||||
markup = markup[1:]
|
||||
yield markup, None, document_declared_encoding, False
|
||||
|
||||
if isinstance(markup, str):
|
||||
@@ -202,19 +98,14 @@ class LXMLTreeBuilderForXML(TreeBuilder):
|
||||
yield (markup.encode("utf8"), "utf8",
|
||||
document_declared_encoding, False)
|
||||
|
||||
# This was provided by the end-user; treat it as a known
|
||||
# definite encoding per the algorithm laid out in the HTML5
|
||||
# spec. (See the EncodingDetector class for details.)
|
||||
known_definite_encodings = [user_specified_encoding]
|
||||
|
||||
# This was found in the document; treat it as a slightly lower-priority
|
||||
# user encoding.
|
||||
user_encodings = [document_declared_encoding]
|
||||
# Instead of using UnicodeDammit to convert the bytestring to
|
||||
# Unicode using different encodings, use EncodingDetector to
|
||||
# iterate over the encodings, and tell lxml to try to parse
|
||||
# the document as each one in turn.
|
||||
is_html = not self.is_xml
|
||||
try_encodings = [user_specified_encoding, document_declared_encoding]
|
||||
detector = EncodingDetector(
|
||||
markup, known_definite_encodings=known_definite_encodings,
|
||||
user_encodings=user_encodings, is_html=is_html,
|
||||
exclude_encodings=exclude_encodings
|
||||
)
|
||||
markup, try_encodings, is_html, exclude_encodings)
|
||||
for encoding in detector.encodings:
|
||||
yield (detector.markup, encoding, document_declared_encoding, False)
|
||||
|
||||
@@ -237,45 +128,25 @@ class LXMLTreeBuilderForXML(TreeBuilder):
|
||||
self.parser.feed(data)
|
||||
self.parser.close()
|
||||
except (UnicodeDecodeError, LookupError, etree.ParserError) as e:
|
||||
raise ParserRejectedMarkup(e)
|
||||
raise ParserRejectedMarkup(str(e))
|
||||
|
||||
def close(self):
|
||||
self.nsmaps = [self.DEFAULT_NSMAPS_INVERTED]
|
||||
self.nsmaps = [self.DEFAULT_NSMAPS]
|
||||
|
||||
def start(self, name, attrs, nsmap={}):
|
||||
# Make sure attrs is a mutable dict--lxml may send an immutable dictproxy.
|
||||
attrs = dict(attrs)
|
||||
nsprefix = None
|
||||
# Invert each namespace map as it comes in.
|
||||
if len(nsmap) == 0 and len(self.nsmaps) > 1:
|
||||
# There are no new namespaces for this tag, but
|
||||
# non-default namespaces are in play, so we need a
|
||||
# separate tag stack to know when they end.
|
||||
self.nsmaps.append(None)
|
||||
if len(self.nsmaps) > 1:
|
||||
# There are no new namespaces for this tag, but
|
||||
# non-default namespaces are in play, so we need a
|
||||
# separate tag stack to know when they end.
|
||||
self.nsmaps.append(None)
|
||||
elif len(nsmap) > 0:
|
||||
# A new namespace mapping has come into play.
|
||||
|
||||
# First, Let the BeautifulSoup object know about it.
|
||||
self._register_namespaces(nsmap)
|
||||
|
||||
# Then, add it to our running list of inverted namespace
|
||||
# mappings.
|
||||
self.nsmaps.append(_invert(nsmap))
|
||||
|
||||
# The currently active namespace prefixes have
|
||||
# changed. Calculate the new mapping so it can be stored
|
||||
# with all Tag objects created while these prefixes are in
|
||||
# scope.
|
||||
current_mapping = dict(self.active_namespace_prefixes[-1])
|
||||
current_mapping.update(nsmap)
|
||||
|
||||
# We should not track un-prefixed namespaces as we can only hold one
|
||||
# and it will be recognized as the default namespace by soupsieve,
|
||||
# which may be confusing in some situations.
|
||||
if '' in current_mapping:
|
||||
del current_mapping['']
|
||||
self.active_namespace_prefixes.append(current_mapping)
|
||||
|
||||
inverted_nsmap = dict((value, key) for key, value in list(nsmap.items()))
|
||||
self.nsmaps.append(inverted_nsmap)
|
||||
# Also treat the namespace mapping as a set of attributes on the
|
||||
# tag, so we can recreate it later.
|
||||
attrs = attrs.copy()
|
||||
@@ -300,11 +171,8 @@ class LXMLTreeBuilderForXML(TreeBuilder):
|
||||
|
||||
namespace, name = self._getNsTag(name)
|
||||
nsprefix = self._prefix_for_namespace(namespace)
|
||||
self.soup.handle_starttag(
|
||||
name, namespace, nsprefix, attrs,
|
||||
namespaces=self.active_namespace_prefixes[-1]
|
||||
)
|
||||
|
||||
self.soup.handle_starttag(name, namespace, nsprefix, attrs)
|
||||
|
||||
def _prefix_for_namespace(self, namespace):
|
||||
"""Find the currently active prefix for the given namespace."""
|
||||
if namespace is None:
|
||||
@@ -328,20 +196,13 @@ class LXMLTreeBuilderForXML(TreeBuilder):
|
||||
if len(self.nsmaps) > 1:
|
||||
# This tag, or one of its parents, introduced a namespace
|
||||
# mapping, so pop it off the stack.
|
||||
out_of_scope_nsmap = self.nsmaps.pop()
|
||||
self.nsmaps.pop()
|
||||
|
||||
if out_of_scope_nsmap is not None:
|
||||
# This tag introduced a namespace mapping which is no
|
||||
# longer in scope. Recalculate the currently active
|
||||
# namespace prefixes.
|
||||
self.active_namespace_prefixes.pop()
|
||||
|
||||
def pi(self, target, data):
|
||||
self.soup.endData()
|
||||
data = target + ' ' + data
|
||||
self.soup.handle_data(data)
|
||||
self.soup.endData(self.processing_instruction_class)
|
||||
|
||||
self.soup.handle_data(target + ' ' + data)
|
||||
self.soup.endData(ProcessingInstruction)
|
||||
|
||||
def data(self, content):
|
||||
self.soup.handle_data(content)
|
||||
|
||||
@@ -368,7 +229,6 @@ class LXMLTreeBuilder(HTMLTreeBuilder, LXMLTreeBuilderForXML):
|
||||
|
||||
features = ALTERNATE_NAMES + [NAME, HTML, FAST, PERMISSIVE]
|
||||
is_xml = False
|
||||
processing_instruction_class = ProcessingInstruction
|
||||
|
||||
def default_parser(self, encoding):
|
||||
return etree.HTMLParser
|
||||
@@ -380,7 +240,7 @@ class LXMLTreeBuilder(HTMLTreeBuilder, LXMLTreeBuilderForXML):
|
||||
self.parser.feed(markup)
|
||||
self.parser.close()
|
||||
except (UnicodeDecodeError, LookupError, etree.ParserError) as e:
|
||||
raise ParserRejectedMarkup(e)
|
||||
raise ParserRejectedMarkup(str(e))
|
||||
|
||||
|
||||
def test_fragment_to_document(self, fragment):
|
||||
|
||||
@@ -1,274 +0,0 @@
|
||||
"""Integration code for CSS selectors using Soup Sieve (pypi: soupsieve)."""
|
||||
|
||||
# We don't use soupsieve
|
||||
soupsieve = None
|
||||
|
||||
|
||||
class CSS(object):
|
||||
"""A proxy object against the soupsieve library, to simplify its
|
||||
CSS selector API.
|
||||
|
||||
Acquire this object through the .css attribute on the
|
||||
BeautifulSoup object, or on the Tag you want to use as the
|
||||
starting point for a CSS selector.
|
||||
|
||||
The main advantage of doing this is that the tag to be selected
|
||||
against doesn't need to be explicitly specified in the function
|
||||
calls, since it's already scoped to a tag.
|
||||
"""
|
||||
|
||||
def __init__(self, tag, api=soupsieve):
|
||||
"""Constructor.
|
||||
|
||||
You don't need to instantiate this class yourself; instead,
|
||||
access the .css attribute on the BeautifulSoup object, or on
|
||||
the Tag you want to use as the starting point for your CSS
|
||||
selector.
|
||||
|
||||
:param tag: All CSS selectors will use this as their starting
|
||||
point.
|
||||
|
||||
:param api: A plug-in replacement for the soupsieve module,
|
||||
designed mainly for use in tests.
|
||||
"""
|
||||
if api is None:
|
||||
raise NotImplementedError(
|
||||
"Cannot execute CSS selectors because the soupsieve package is not installed."
|
||||
)
|
||||
self.api = api
|
||||
self.tag = tag
|
||||
|
||||
def escape(self, ident):
|
||||
"""Escape a CSS identifier.
|
||||
|
||||
This is a simple wrapper around soupselect.escape(). See the
|
||||
documentation for that function for more information.
|
||||
"""
|
||||
if soupsieve is None:
|
||||
raise NotImplementedError(
|
||||
"Cannot escape CSS identifiers because the soupsieve package is not installed."
|
||||
)
|
||||
return self.api.escape(ident)
|
||||
|
||||
def _ns(self, ns, select):
|
||||
"""Normalize a dictionary of namespaces."""
|
||||
if not isinstance(select, self.api.SoupSieve) and ns is None:
|
||||
# If the selector is a precompiled pattern, it already has
|
||||
# a namespace context compiled in, which cannot be
|
||||
# replaced.
|
||||
ns = self.tag._namespaces
|
||||
return ns
|
||||
|
||||
def _rs(self, results):
|
||||
"""Normalize a list of results to a Resultset.
|
||||
|
||||
A ResultSet is more consistent with the rest of Beautiful
|
||||
Soup's API, and ResultSet.__getattr__ has a helpful error
|
||||
message if you try to treat a list of results as a single
|
||||
result (a common mistake).
|
||||
"""
|
||||
# Import here to avoid circular import
|
||||
from bs4.element import ResultSet
|
||||
return ResultSet(None, results)
|
||||
|
||||
def compile(self, select, namespaces=None, flags=0, **kwargs):
|
||||
"""Pre-compile a selector and return the compiled object.
|
||||
|
||||
:param selector: A CSS selector.
|
||||
|
||||
:param namespaces: A dictionary mapping namespace prefixes
|
||||
used in the CSS selector to namespace URIs. By default,
|
||||
Beautiful Soup will use the prefixes it encountered while
|
||||
parsing the document.
|
||||
|
||||
:param flags: Flags to be passed into Soup Sieve's
|
||||
soupsieve.compile() method.
|
||||
|
||||
:param kwargs: Keyword arguments to be passed into SoupSieve's
|
||||
soupsieve.compile() method.
|
||||
|
||||
:return: A precompiled selector object.
|
||||
:rtype: soupsieve.SoupSieve
|
||||
"""
|
||||
return self.api.compile(
|
||||
select, self._ns(namespaces, select), flags, **kwargs
|
||||
)
|
||||
|
||||
def select_one(self, select, namespaces=None, flags=0, **kwargs):
|
||||
"""Perform a CSS selection operation on the current Tag and return the
|
||||
first result.
|
||||
|
||||
This uses the Soup Sieve library. For more information, see
|
||||
that library's documentation for the soupsieve.select_one()
|
||||
method.
|
||||
|
||||
:param selector: A CSS selector.
|
||||
|
||||
:param namespaces: A dictionary mapping namespace prefixes
|
||||
used in the CSS selector to namespace URIs. By default,
|
||||
Beautiful Soup will use the prefixes it encountered while
|
||||
parsing the document.
|
||||
|
||||
:param flags: Flags to be passed into Soup Sieve's
|
||||
soupsieve.select_one() method.
|
||||
|
||||
:param kwargs: Keyword arguments to be passed into SoupSieve's
|
||||
soupsieve.select_one() method.
|
||||
|
||||
:return: A Tag, or None if the selector has no match.
|
||||
:rtype: bs4.element.Tag
|
||||
|
||||
"""
|
||||
return self.api.select_one(
|
||||
select, self.tag, self._ns(namespaces, select), flags, **kwargs
|
||||
)
|
||||
|
||||
def select(self, select, namespaces=None, limit=0, flags=0, **kwargs):
|
||||
"""Perform a CSS selection operation on the current Tag.
|
||||
|
||||
This uses the Soup Sieve library. For more information, see
|
||||
that library's documentation for the soupsieve.select()
|
||||
method.
|
||||
|
||||
:param selector: A string containing a CSS selector.
|
||||
|
||||
:param namespaces: A dictionary mapping namespace prefixes
|
||||
used in the CSS selector to namespace URIs. By default,
|
||||
Beautiful Soup will pass in the prefixes it encountered while
|
||||
parsing the document.
|
||||
|
||||
:param limit: After finding this number of results, stop looking.
|
||||
|
||||
:param flags: Flags to be passed into Soup Sieve's
|
||||
soupsieve.select() method.
|
||||
|
||||
:param kwargs: Keyword arguments to be passed into SoupSieve's
|
||||
soupsieve.select() method.
|
||||
|
||||
:return: A ResultSet of Tag objects.
|
||||
:rtype: bs4.element.ResultSet
|
||||
|
||||
"""
|
||||
if limit is None:
|
||||
limit = 0
|
||||
|
||||
return self._rs(
|
||||
self.api.select(
|
||||
select, self.tag, self._ns(namespaces, select), limit, flags,
|
||||
**kwargs
|
||||
)
|
||||
)
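A short usage sketch of the proxy described above (requires an upstream bs4 4.12+ with soupsieve installed; in the copy shown in this diff soupsieve is stubbed to None, so these calls would raise NotImplementedError against the vendored module):

    from bs4 import BeautifulSoup

    soup = BeautifulSoup('<div><p class="a">x</p><p>y</p></div>', 'html.parser')
    print(soup.css.select('p.a'))       # ResultSet, same scoping as soup.select()
    print(soup.css.select_one('p.a'))   # first match only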
|
||||
|
||||
def iselect(self, select, namespaces=None, limit=0, flags=0, **kwargs):
|
||||
"""Perform a CSS selection operation on the current Tag.
|
||||
|
||||
This uses the Soup Sieve library. For more information, see
|
||||
that library's documentation for the soupsieve.iselect()
|
||||
method. It is the same as select(), but it returns a generator
|
||||
instead of a list.
|
||||
|
||||
:param selector: A string containing a CSS selector.
|
||||
|
||||
:param namespaces: A dictionary mapping namespace prefixes
|
||||
used in the CSS selector to namespace URIs. By default,
|
||||
Beautiful Soup will pass in the prefixes it encountered while
|
||||
parsing the document.
|
||||
|
||||
:param limit: After finding this number of results, stop looking.
|
||||
|
||||
:param flags: Flags to be passed into Soup Sieve's
|
||||
soupsieve.iselect() method.
|
||||
|
||||
:param kwargs: Keyword arguments to be passed into SoupSieve's
|
||||
soupsieve.iselect() method.
|
||||
|
||||
:return: A generator
|
||||
:rtype: types.GeneratorType
|
||||
"""
|
||||
return self.api.iselect(
|
||||
select, self.tag, self._ns(namespaces, select), limit, flags, **kwargs
|
||||
)
|
||||
|
||||
def closest(self, select, namespaces=None, flags=0, **kwargs):
|
||||
"""Find the Tag closest to this one that matches the given selector.
|
||||
|
||||
This uses the Soup Sieve library. For more information, see
|
||||
that library's documentation for the soupsieve.closest()
|
||||
method.
|
||||
|
||||
:param selector: A string containing a CSS selector.
|
||||
|
||||
:param namespaces: A dictionary mapping namespace prefixes
|
||||
used in the CSS selector to namespace URIs. By default,
|
||||
Beautiful Soup will pass in the prefixes it encountered while
|
||||
parsing the document.
|
||||
|
||||
:param flags: Flags to be passed into Soup Sieve's
|
||||
soupsieve.closest() method.
|
||||
|
||||
:param kwargs: Keyword arguments to be passed into SoupSieve's
|
||||
soupsieve.closest() method.
|
||||
|
||||
:return: A Tag, or None if there is no match.
|
||||
:rtype: bs4.Tag
|
||||
|
||||
"""
|
||||
return self.api.closest(
|
||||
select, self.tag, self._ns(namespaces, select), flags, **kwargs
|
||||
)
|
||||
|
||||
def match(self, select, namespaces=None, flags=0, **kwargs):
|
||||
"""Check whether this Tag matches the given CSS selector.
|
||||
|
||||
This uses the Soup Sieve library. For more information, see
|
||||
that library's documentation for the soupsieve.match()
|
||||
method.
|
||||
|
||||
:param: a CSS selector.
|
||||
|
||||
:param namespaces: A dictionary mapping namespace prefixes
|
||||
used in the CSS selector to namespace URIs. By default,
|
||||
Beautiful Soup will pass in the prefixes it encountered while
|
||||
parsing the document.
|
||||
|
||||
:param flags: Flags to be passed into Soup Sieve's
|
||||
soupsieve.match() method.
|
||||
|
||||
:param kwargs: Keyword arguments to be passed into SoupSieve's
|
||||
soupsieve.match() method.
|
||||
|
||||
:return: True if this Tag matches the selector; False otherwise.
|
||||
:rtype: bool
|
||||
"""
|
||||
return self.api.match(
|
||||
select, self.tag, self._ns(namespaces, select), flags, **kwargs
|
||||
)
|
||||
|
||||
def filter(self, select, namespaces=None, flags=0, **kwargs):
|
||||
"""Filter this Tag's direct children based on the given CSS selector.
|
||||
|
||||
This uses the Soup Sieve library. It works the same way as
|
||||
passing this Tag into that library's soupsieve.filter()
|
||||
method. For more information, see the
|
||||
documentation for soupsieve.filter().
|
||||
|
||||
:param namespaces: A dictionary mapping namespace prefixes
|
||||
used in the CSS selector to namespace URIs. By default,
|
||||
Beautiful Soup will pass in the prefixes it encountered while
|
||||
parsing the document.
|
||||
|
||||
:param flags: Flags to be passed into Soup Sieve's
|
||||
soupsieve.filter() method.
|
||||
|
||||
:param kwargs: Keyword arguments to be passed into SoupSieve's
|
||||
soupsieve.filter() method.
|
||||
|
||||
:return: A ResultSet of Tag objects.
|
||||
:rtype: bs4.element.ResultSet
|
||||
|
||||
"""
|
||||
return self._rs(
|
||||
self.api.filter(
|
||||
select, self.tag, self._ns(namespaces, select), flags, **kwargs
|
||||
)
|
||||
)
|
||||