Compare commits


127 Commits

Steve Sakoman
bf9f2f6f60 build-appliance-image: Update to nanbield head revision
(From OE-Core rev: cce77e8e79c860f4ef0ac4a86b9375bf87507360)

Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 08:31:43 -10:00
Steve Sakoman
3bcf525a68 poky.conf: bump version for 4.3.1 release
(From meta-yocto rev: 7324ba75c9ca2cb90704296e3882ad9f46497f61)

Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Lee Chee Yang
2b90e1725c release-notes-4.3: add Repositories / Downloads section
Add Repositories/Downloads Section for 4.3 release notes.

(From yocto-docs rev: 6b98a6164263298648e89b5a5ae1260a58f1bb35)

Signed-off-by: Lee Chee Yang <chee.yang.lee@intel.com>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Michael Halstead
b23377f070 docs: add support for nanbield (4.3) release
This adds support for the Nanbield (4.3) release and updates the
current dev branch to Scarthgap.

(From yocto-docs rev: b66ba1c2d117033493f3ec25ebcf121cde200286)

Signed-off-by: Michael Halstead <mhalstead@linuxfoundation.org>
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Ross Burton
9c1fb1c9ef oeqa/selftest/debuginfod: improve selftest
This test was occasionally failing for no obvious reason, so refactor
and improve:

- While waiting for the daemon, check that it is still running and
  explicitly time out after 10s when making the HTTP call.

- While waiting for the daemon to be ready, log the current state of the
  daemon so we can tell if we're timing out as it is still scanning.

- This was in fact the cause of the intermittent failures, because the
  TMPDIR is reused between tests and may contain a large number of
  packages. Do the tests in an isolated TMPDIR to hopefully mitigate this
  issue and increase the timeout to two minutes.

- Decorate the test that uses runqemu as such, so that it can be skipped in
  environments without runqemu

- Add a second test that doesn't use runqemu or images, which is faster
  but less realistic.
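
A rough sketch of the polling pattern described above (illustrative only; the
daemon handle, port, and readiness criterion are assumptions, not the actual
oeqa code):

    import time
    import urllib.error
    import urllib.request

    def wait_for_debuginfod(daemon, port, deadline_s=120):
        """Poll debuginfod until it responds, failing fast if it dies.

        'daemon' is assumed to be the subprocess.Popen handle of the server.
        """
        end = time.time() + deadline_s
        while time.time() < end:
            # If the daemon has already exited, fail immediately instead of
            # silently waiting out the whole deadline.
            if daemon.poll() is not None:
                raise RuntimeError("debuginfod exited with %d" % daemon.returncode)
            try:
                # Explicit timeout so a hung HTTP call cannot stall the test.
                url = "http://localhost:%d/metrics" % port
                with urllib.request.urlopen(url, timeout=10) as response:
                    state = response.read().decode()
                # Log the state so a timeout can be attributed to ongoing scanning.
                print("debuginfod metrics:\n%s" % state)
                return state
            except urllib.error.URLError:
                time.sleep(1)
        raise TimeoutError("debuginfod not ready after %ds" % deadline_s)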

(From OE-Core rev: 99590fac1bfb5474f5bf0e02d3888b518af9fb3e)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 88b660aaae2527736b6eccec4c952eee969e20a2)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Archana Polampalli
6a35bdf571 vim: Upgrade 9.0.2048 -> 9.0.2068
This includes CVE fix for CVE-2023-46246.
9198c1f2b (tag: v9.0.2068) patch 9.0.2068: [security] overflow in :history

References:
https://nvd.nist.gov/vuln/detail/CVE-2023-46246

(From OE-Core rev: 55dba750cb37fdf09b9b8b768c5ebea86c769248)

Signed-off-by: Archana Polampalli <archana.polampalli@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 63bc72ccb63d2f8eb591d7cc481657a538f0fd42)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Steve Sakoman
08bf0e6743 vim: use upstream generated .po files
A previous commit attempted to fix reproducibility errors by forcing
regeneration of .po files. Unfortunately this triggered a different
type of reproducibility issue.

Work around this by adjusting the timestamps of the troublesome .po
files so they are not regenerated and we use the shipped upstream
versions of the files.

The shipped version of ru.cp1251.po doesn't seem to have been created
with the vim tooling and specifies CP1251 instead of cp1251; fix that.

(From OE-Core rev: 14629902c9bb8ac155cf1077377589ab086c5020)

Signed-off-by: Steve Sakoman <steve@sakoman.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 13d9551ba626f001c71bf908df16caf1d739cf13)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Richard Purdie
899eeaf3fb vim: Improve locale handling
When making checkouts from git, the timestamps can vary and occasionally two files
can end up with the same stamp. This triggers make to regenerate ru.cp1251.po from
ru.po for example. If it isn't regenerated, the output isn't quite the same leading
to reproducibility issues (CP1251 vs cp1251).

Since all locales are now added to the buildtools tarball, we can drop the
locale restrictions too. We need to generate a native binary for the sjis
conversion tool, so also tweak that.

(From OE-Core rev: fdbdfd90f114ace6891f08625fd3fa8e66959ff7)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 042c1a501b1dae5ddb31307b461be02c3591c589)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Trevor Gamblin
1ce41a86e3 patchtest: rework license checksum tests
Remove the pretest_lic_files_chksum_modified_not_mentioned test entirely
and use pyparsing in test_lic_files_chksum_modified_not_mentioned to
scan the patches for lines starting with either "+LIC_FILES_CHKSUM" or
"-LIC_FILES_CHKSUM".  If either is found but no "License-Update" tag is
present in the commit, fail the test.
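
A sketch of how such a check can look with pyparsing (the expressions are
modelled on the description above; the real patchtest code may differ in
detail):

    import pyparsing

    # Hypothetical expressions: match the markers only at the start of a line.
    chksum_change = pyparsing.AtLineStart(
        pyparsing.oneOf(["+LIC_FILES_CHKSUM", "-LIC_FILES_CHKSUM"]))
    license_update = pyparsing.AtLineStart("License-Update")

    def chksum_change_is_mentioned(mbox_text):
        """Return False if LIC_FILES_CHKSUM changed without a License-Update tag."""
        changed = bool(chksum_change.search_string(mbox_text))
        mentioned = bool(license_update.search_string(mbox_text))
        return mentioned or not changed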

(From OE-Core rev: d1871afcec769b6503852adf6217460897ecf301)

Signed-off-by: Trevor Gamblin <tgamblin@baylibre.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 8e1bda0eb225ada22fdf5990edfec512be1d6629)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Trevor Gamblin
bee889b6a1 patchtest-send-results: fix sender parsing
Not all mbox 'from' fields will contain angle brackets, so the
re.findall invocation used for getting a reply_address may fail. Use a
simpler reference to the field to get the sender's email address.
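
For illustration, Python's standard library can extract the address without an
angle-bracket regex (a sketch, not the script's exact code):

    import mailbox
    from email.utils import parseaddr

    def reply_address(mbox_path):
        """Return the submitter's address from the first message's From field.

        parseaddr() copes with both 'Jane Doe <jane@example.com>' and bare
        'jane@example.com' forms, so no angle-bracket regex is needed.
        """
        message = next(iter(mailbox.mbox(mbox_path)))
        _, address = parseaddr(message["From"])
        return address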

(From OE-Core rev: 78e76e2e4f71485a632f1c1ae83032e0e9341a9e)

Signed-off-by: Trevor Gamblin <tgamblin@baylibre.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 86e9afe09a346586114133f5a7470304d2ed733f)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Trevor Gamblin
d171bc2f28 patchtest: remove test for CVE tag in mbox
After patchtest went live it was determined that testing for a CVE tag
in the mbox commit message is unnecessary, since it will already be in
the shortlog and in any carried patches. Remove the test and the
associated selftest files so that its absence isn't flagged in future
test results.

(From OE-Core rev: bf9671896eb60880b5dad36c2706855932ce091f)

Signed-off-by: Trevor Gamblin <tgamblin@baylibre.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 54690f18f04a2ab993a85d551ce4f8d0fa56618a)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Trevor Gamblin
52e8e9e2b6 patchtest: make pylint tests compatible with 3.x
pylint 3.x has removed epylint, which is now a separate module. To avoid
adding another recipe or using outdated modules, modify the
test_python_pylint tests so that they use the standard pylint API.
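
For reference, a minimal call through the standard pylint library API (the
option set here is illustrative, not the one the patchtest tests use):

    from io import StringIO

    from pylint.lint import Run
    from pylint.reporters.text import TextReporter

    def run_pylint(path):
        """Lint one file through pylint's library API and return the report text."""
        output = StringIO()
        # exit=False keeps control in the caller instead of letting pylint
        # call sys.exit() with its own status code.
        Run(["--disable=all", "--enable=E", path],
            reporter=TextReporter(output), exit=False)
        return output.getvalue()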

(From OE-Core rev: 8b3c6837fe2367fa7aa20b2ee5be554be98f2acd)

Signed-off-by: Trevor Gamblin <tgamblin@baylibre.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 72be3d6a116febf46130cccbe12afe5ad93779b5)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Trevor Gamblin
d1c1d93077 patchtest-send-results: add In-Reply-To
Rework the script for sending results to use send_raw_email and specify
the 'In-Reply-To' field so that patchtest replies to the emails, rather
than sending them standalone to the submitter and mailing list.
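
A generic sketch of building such a threaded reply with Python's email
library (the real script's send path, send_raw_email, may differ):

    from email.message import EmailMessage

    def build_reply(original, body, sender):
        """Build a reply that mail clients thread under the original patch email."""
        reply = EmailMessage()
        reply["From"] = sender
        reply["To"] = original["From"]
        reply["Subject"] = "Re: " + (original["Subject"] or "")
        # Threading headers: clients group replies on In-Reply-To/References.
        reply["In-Reply-To"] = original["Message-ID"]
        reply["References"] = original["Message-ID"]
        reply.set_content(body)
        return reply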

(From OE-Core rev: b15537e2e13fd932e16fef5f7a25a3ab2130a19e)

Signed-off-by: Trevor Gamblin <tgamblin@baylibre.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 0c45c92e7f26aea4edf2cfa577b7ba51384e59d3)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Trevor Gamblin
44dd3383c7 patchtest-send-results: send results to submitter
Modify patchtest-send-results so that it extracts the submitter's email
address and responds to them with the patch test results. Also make a
minor adjustment to the suggestions provided with each email and include
a link to the Patchtest wiki page for additional clarification on
specific failures.

(From OE-Core rev: fe9ec57a07f4e341505030fdf49a5827f01a626f)

Signed-off-by: Trevor Gamblin <tgamblin@baylibre.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 64ed88e32cf9e04772319ff6e66c602d1cff4fd7)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Trevor Gamblin
cce3cab334 patchtest: shorten test result outputs
Some test result lines in TestMbox and TestPatch are still too long to
avoid being flagged by the mailer script. Clean them up by removing
redundant information, so that they are all under the length limit of
220 characters.

(From OE-Core rev: c543469e2da32a474a60a497b5d51fd9fc43dbb4)

Signed-off-by: Trevor Gamblin <tgamblin@baylibre.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit c10d0bb542b23fbdc14d76dfa8e5885aa4d33083)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Trevor Gamblin
9f4d69790c patchtest: reduce checksum test output length
The test_lic_files_chksum_modified_not_mentioned test in TestMetadata is
outputting very long lines that fail the maximum length check when
sending email results, preventing the actual errors from being
displayed. Reduce the length of the failure message by rewording and
removing redundant information.

(From OE-Core rev: e3c680bab99f7e5f0cb7874ada0297b5fac66702)

Signed-off-by: Trevor Gamblin <tgamblin@baylibre.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 2e2625735181160e9760a6f3af4955bda2ea6d4d)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Trevor Gamblin
f575a3bdd5 patchtest: simplify test directory structure
Consolidate the various mbox tests into a new TestMbox class, metadata
tests into TestMetadata, and patch tests into TestPatch. Also update the
selftest filenames to match the changes. The test contents are not
significantly changed (other than to reference the new class names).
While this doesn't improve overall readability, it does result in more
obvious categorization, and more importantly reduces the number of calls
to set up tinfoil in the tests, resulting in a roughly 25% reduction in
runtime.

Before:

[tgamblin@megalith poky]$ time ./meta/lib/patchtest/selftest/selftest
XPASS: PatchSignedOffBy.test_signed_off_by_presence (file: PatchSignedOffBy.test_signed_off_by_presence.pass)
XFAIL: Shortlog.test_shortlog_format (file: Shortlog.test_shortlog_format.fail)
XFAIL: MboxFormat.test_mbox_format (file: MboxFormat.test_mbox_format.1.fail)
XPASS: Shortlog.test_shortlog_length (file: Shortlog.test_shortlog_length.pass)
XFAIL: CommitMessage.test_commit_message_presence (file: CommitMessage.test_commit_message_presence.fail)
XFAIL: SrcUri.test_src_uri_left_files (file: SrcUri.test_src_uri_left_files.fail)
XPASS: Author.test_author_valid (file: Author.test_author_valid.1.pass)
XFAIL: LicFilesChkSum.test_lic_files_chksum_modified_not_mentioned (file: LicFilesChkSum.test_lic_files_chksum_modified_not_mentioned.fail)
XPASS: CVE.test_cve_tag_format (file: CVE.test_cve_tag_format.pass)
XPASS: CVE.test_cve_presence_in_commit_message (file: CVE.test_cve_presence_in_commit_message.pass)
XFAIL: CVE.test_cve_tag_format (file: CVE.test_cve_tag_format.fail)
XFAIL: Author.test_author_valid (file: Author.test_author_valid.1.fail)
XFAIL: LicFilesChkSum.test_lic_files_chksum_presence (file: LicFilesChkSum.test_lic_files_chksum_presence.fail)
XSKIP: Merge.test_series_merge_on_head (file: Merge.test_series_merge_on_head.2.skip)
XPASS: MboxFormat.test_mbox_format (file: MboxFormat.test_mbox_format.pass)
XFAIL: SignedOffBy.test_signed_off_by_presence (file: SignedOffBy.test_signed_off_by_presence.1.fail)
XPASS: Shortlog.test_shortlog_format (file: Shortlog.test_shortlog_format.pass)
XFAIL: SignedOffBy.test_signed_off_by_presence (file: SignedOffBy.test_signed_off_by_presence.2.fail)
XFAIL: MboxFormat.test_mbox_format (file: MboxFormat.test_mbox_format.2.fail)
XFAIL: Summary.test_summary_presence (file: Summary.test_summary_presence.fail)
XPASS: Author.test_author_valid (file: Author.test_author_valid.2.pass)
XSKIP: Merge.test_series_merge_on_head (file: Merge.test_series_merge_on_head.1.skip)
XPASS: Bugzilla.test_bugzilla_entry_format (file: Bugzilla.test_bugzilla_entry_format.pass)
XFAIL: CVE.test_cve_presence_in_commit_message (file: CVE.test_cve_presence_in_commit_message.fail)
XPASS: SignedOffBy.test_signed_off_by_presence (file: SignedOffBy.test_signed_off_by_presence.pass)
XPASS: LicFilesChkSum.test_lic_files_chksum_presence (file: LicFilesChkSum.test_lic_files_chksum_presence.pass)
XPASS: CommitMessage.test_commit_message_presence (file: CommitMessage.test_commit_message_presence.pass)
XPASS: Summary.test_summary_presence (file: Summary.test_summary_presence.pass)
XPASS: LicFilesChkSum.test_lic_files_chksum_modified_not_mentioned (file: LicFilesChkSum.test_lic_files_chksum_modified_not_mentioned.pass)
XFAIL: Shortlog.test_shortlog_length (file: Shortlog.test_shortlog_length.fail)
XFAIL: PatchSignedOffBy.test_signed_off_by_presence (file: PatchSignedOffBy.test_signed_off_by_presence.fail)
XFAIL: Bugzilla.test_bugzilla_entry_format (file: Bugzilla.test_bugzilla_entry_format.fail)
XPASS: SrcUri.test_src_uri_left_files (file: SrcUri.test_src_uri_left_files.pass)
XFAIL: Author.test_author_valid (file: Author.test_author_valid.2.fail)
============================================================================
Testsuite summary for patchtest
============================================================================
============================================================================

real    24m14.386s
user    1m13.599s
sys     0m21.477s

After:

[tgamblin@megalith poky]$ time ./meta/lib/patchtest/selftest/selftest
XFAIL: TestMbox.test_bugzilla_entry_format (file: TestMbox.test_bugzilla_entry_format.fail)
XPASS: TestMetadata.test_summary_presence (file: TestMetadata.test_summary_presence.pass)
XFAIL: TestMbox.test_mbox_format (file: TestMbox.test_mbox_format.1.fail)
XFAIL: TestMetadata.test_src_uri_left_files (file: TestMetadata.test_src_uri_left_files.fail)
XSKIP: TestMbox.test_series_merge_on_head (file: TestMbox.test_series_merge_on_head.2.skip)
XPASS: TestMbox.test_commit_message_presence (file: TestMbox.test_commit_message_presence.pass)
XFAIL: TestMbox.test_commit_message_presence (file: TestMbox.test_commit_message_presence.fail)
XPASS: TestMbox.test_signed_off_by_presence (file: TestMbox.test_signed_off_by_presence.pass)
XFAIL: TestPatch.test_cve_tag_format (file: TestPatch.test_cve_tag_format.fail)
XFAIL: TestMbox.test_author_valid (file: TestMbox.test_author_valid.1.fail)
XFAIL: TestMbox.test_shortlog_length (file: TestMbox.test_shortlog_length.fail)
XPASS: TestMbox.test_mbox_format (file: TestMbox.test_mbox_format.pass)
XFAIL: TestPatch.test_signed_off_by_presence (file: TestPatch.test_signed_off_by_presence.fail)
XFAIL: TestMbox.test_shortlog_format (file: TestMbox.test_shortlog_format.fail)
XFAIL: TestMbox.test_mbox_format (file: TestMbox.test_mbox_format.2.fail)
XPASS: TestPatch.test_cve_tag_format (file: TestPatch.test_cve_tag_format.pass)
XSKIP: TestMbox.test_series_merge_on_head (file: TestMbox.test_series_merge_on_head.1.skip)
XPASS: TestMbox.test_author_valid (file: TestMbox.test_author_valid.2.pass)
XPASS: TestMetadata.test_lic_files_chksum_modified_not_mentioned (file: TestMetadata.test_lic_files_chksum_modified_not_mentioned.pass)
XPASS: TestMbox.test_bugzilla_entry_format (file: TestMbox.test_bugzilla_entry_format.pass)
XPASS: TestMetadata.test_src_uri_left_files (file: TestMetadata.test_src_uri_left_files.pass)
XPASS: TestMetadata.test_lic_files_chksum_presence (file: TestMetadata.test_lic_files_chksum_presence.pass)
XPASS: TestMbox.test_cve_presence_in_commit_message (file: TestMbox.test_cve_presence_in_commit_message.pass)
XFAIL: TestMbox.test_signed_off_by_presence (file: TestMbox.test_signed_off_by_presence.2.fail)
XFAIL: TestMbox.test_author_valid (file: TestMbox.test_author_valid.2.fail)
XFAIL: TestMetadata.test_lic_files_chksum_presence (file: TestMetadata.test_lic_files_chksum_presence.fail)
XPASS: TestMbox.test_shortlog_format (file: TestMbox.test_shortlog_format.pass)
XPASS: TestMbox.test_author_valid (file: TestMbox.test_author_valid.1.pass)
XPASS: TestPatch.test_signed_off_by_presence (file: TestPatch.test_signed_off_by_presence.pass)
XFAIL: TestMetadata.test_lic_files_chksum_modified_not_mentioned (file: TestMetadata.test_lic_files_chksum_modified_not_mentioned.fail)
XPASS: TestMbox.test_shortlog_length (file: TestMbox.test_shortlog_length.pass)
XFAIL: TestMbox.test_signed_off_by_presence (file: TestMbox.test_signed_off_by_presence.1.fail)
XFAIL: TestMbox.test_cve_presence_in_commit_message (file: TestMbox.test_cve_presence_in_commit_message.fail)
XFAIL: TestMetadata.test_summary_presence (file: TestMetadata.test_summary_presence.fail)
============================================================================
Testsuite summary for patchtest
============================================================================
============================================================================
real    18m39.749s
user    0m41.857s
sys     0m14.708s

(From OE-Core rev: 497e128546b37d6cf5fb86188fff4c7a22526ec8)

Signed-off-by: Trevor Gamblin <tgamblin@baylibre.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit f788592da2fd0e21638ce2c3326675a060ba51cf)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Trevor Gamblin
5f1c17d70c patchtest/selftest: add XSKIP, update test files
Since we are skipping the merge test, two of the selftests now report
SKIP instead of XPASS/XFAIL as expected. Adjust the two files to have
the right endings for XSKIP, and add the category so that it can be used
for more extensive testing in the future.

(From OE-Core rev: a354265065516ec634042ea8210f97aaba7ff43c)

Signed-off-by: Trevor Gamblin <tgamblin@baylibre.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 3331f53c0be2575784a042bb2401eeba4f2a5a3e)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Trevor Gamblin
3f4011aba4 patchtest-send-results: check max line length, simplify responses
Check that the maximum line length of the testresult file is less than
220 characters, to help guard against malicious changes being sent in
email responses. If any line exceeds this length, replace the normal
testresults used in the response with a line stating that tests failed,
but the results could not be processed. Also clean up the response
substrings slightly to go along with the change.
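
In outline, the guard might look like this (a sketch; the script's exact
wording and structure differ):

    MAX_LINE_LENGTH = 220

    def results_for_email(testresult_path):
        """Return the results text, or a short notice if any line is too long."""
        with open(testresult_path, errors="replace") as f:
            lines = f.read().splitlines()
        if any(len(line) > MAX_LINE_LENGTH for line in lines):
            return ("Tests failed for the patch, but the results could not "
                    "be processed.")
        return "\n".join(lines)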

(From OE-Core rev: 8e7e39134df926203b7bdfad22916e0d5da0589d)

Signed-off-by: Trevor Gamblin <tgamblin@baylibre.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit b0d53cf587dc9afb97f00c1089e45b758e96dd7c)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Trevor Gamblin
48537fb77b patchtest: disable merge test
Disable the merge-on-head test until patchtest properly handles merging
of series subsets and accounts for patches that are rapidly merged (i.e.
before patchtest is run).

(From OE-Core rev: 97c1d8aa7318e36a037aa6c8b721fb75608a92a7)

Signed-off-by: Trevor Gamblin <tgamblin@baylibre.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit e561c614dc72b7f8bf5e09a09bbe6ebc3cf500bb)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Trevor Gamblin
174a642755 patchtest-send-results: improve subject line
Pull the actual email's subject line from the .mbox file and use that in
patchtest's test results response, so that it's clearer which patch it
is replying to.

(From OE-Core rev: 86d00a1b5233250fbea32113ad9c43bd78778406)

Signed-off-by: Trevor Gamblin <tgamblin@baylibre.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 98ca0b151517b3544454fd5c1656a2de631c4897)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Trevor Gamblin
2a89e081ca patchtest: fix lic_files_chksum test regex
The test_lic_files_chksum_modified_not_mentioned test in patchtest
wasn't picking up on 'License-Update:' tags correctly. Use pyparsing's
AtLineStart class to simplify the regex setup and search.

(From OE-Core rev: 978c819c33c0552750f5383f0ed761d505b4bdf5)

Signed-off-by: Trevor Gamblin <tgamblin@baylibre.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit dc9126e45e74b915faaf296037e7ece41785bf4a)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Trevor Gamblin
7dd1867c77 patchtest: skip merge test if not targeting master
Avoid testing mergeability of a patch when not targeting master, so that
patches tested via other means (e.g. maintainer branches and AB runs)
don't get unnecessarily reviewed an extra time.

(From OE-Core rev: 127fa20bd1d628548f7f4ba087b3ca105e705098)

Signed-off-by: Trevor Gamblin <tgamblin@baylibre.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit e6cf23e353f48c57249681bd0b12bd8494d4959a)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Trevor Gamblin
5972abb328 patchtest: test regardless of mergeability
(From OE-Core rev: 06d2066a5061d23a316f65cfc731ce44b576b2bf)

Signed-off-by: Trevor Gamblin <tgamblin@baylibre.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit dc089073eb120de76c8907e476c341ed3e97c164)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Ross Burton
a7a7320737 patchtest: remove unused imports
(From OE-Core rev: 673d1b51b5183a81d1daaa7a48250aa938679883)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit be8429d986335aae65c2426862b97836ba46e42a)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Alejandro Hernandez Samaniego
4f6d210ee0 baremetal-helloworld: Pull in fix for race condition on x86-64
It was previously discovered that there was a race condition during the Makefile
execution between the assemble and compile targets; the previous fix attempted
to serialize the build targets, but it was missing for x86-64.

Pull in latest commit from upstream to fix this issue on x86-64.

[YOCTO #15146]

(From OE-Core rev: 2b236342971cc7a349f6724874a02af8952d378a)

Signed-off-by: Alejandro Enedino Hernandez Samaniego <alejandro@enedino.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit e7e1631a1efbcf421de801e94734f67f25668540)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Rouven Czerwinski
d5f021238c glib-2.0: Remove unnecessary assignment
FILES:${PN}-utils is extended with += and then replaced completely later;
remove the first extension.

(From OE-Core rev: da90f904c47250fbb71f03a3ce961a23dba47a80)

Signed-off-by: Rouven Czerwinski <r.czerwinski@pengutronix.de>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit d9d61c5217938749e3edc5f8a5c987f46bbab3d7)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Richard Purdie
2db7c24bd8 base: Ensure recipes using mercurial-native have certificates
If you try to fetch using mercurial-native, you see certificate errors since
it is configured to find certificates in the sysroot, not on the system. Add the
missing dependency so that mercurial recipes using the native tool work.

Found trying to make mirroring for old meta-oe stable branches work.

(From OE-Core rev: c48206dd82a2faab477002b1ac04d835920755d0)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit fc567e35b374f8b08975602609ee71e64357fb3d)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Xiangyu Chen
c7f18e6c43 linux-yocto: make sure the pahole-native available before do_kernel_configme
When using debug-btf.scc in a clean workspace, CONFIG_MODULE_ALLOW_BTF_MISMATCH cannot
be applied to the kernel until the kernel code is cleaned (bitbake linux-yocto -c cleanall)
and rebuilt.

Tracking through the code shows that some options depend on CONFIG_PAHOLE_VERSION, which
is generated by scripts/pahole-version.sh in the kernel. However, during the
do_kernel_configme step pahole-native is not yet available in sysroot-native, so
do_kernel_configme needs to wait for pahole-native to be installed into sysroot-native.

(From OE-Core rev: f9d434902df4ac0c17a94a977c045c4face65414)

Signed-off-by: Xiangyu Chen <xiangyu.chen@windriver.com>
Signed-off-by: Luca Ceresoli <luca.ceresoli@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 217a4db53edbd88001f6390bbff39e5dd3d137af)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Julien Stephan
83a2a6a65e oeqa/selftest/devtool: abort if a local workspace already exists
If a user runs the devtool selftests with a local workspace layer,
the tests fail with various errors such as:

- devtool.DevtoolAddTests.test_devtool_add just hangs
- devtool.DevtoolModifyTests.* fail with the following error:

 ERROR: Found duplicated BBFILE_COLLECTIONS 'workspacelayer', check bblayers.conf or layer.conf to fix it.
 Found duplicated BBFILE_COLLECTIONS 'workspacelayer', check bblayers.conf or layer.conf to fix it.

Check if a workspace layer exists; warn the user and abort the tests.

(From OE-Core rev: b8756f6e20d15f1cc724784d8ceb45a969ec7f81)

Signed-off-by: Julien Stephan <jstephan@baylibre.com>
Signed-off-by: Luca Ceresoli <luca.ceresoli@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit a74962cfb0485f6f2b9e2b751c33c8eafca8705a)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Dmitry Baryshkov
6d424e1d02 kernel-arch: drop CCACHE from KERNEL_STRIP definition
Building linux-yocto with ccache enabled results in a 'command not
found' error, because kernel-yocto.bbclass passes KERNEL_STRIP
as a single value, which is then interpreted as a command name.

ERROR: Fatal errors occurred in subprocesses:
[Errno 2] No such file or directory: 'ccache aarch64-linaro-linux-strip': Traceback (most recent call last):
  File "/home/lumag/Projects/RPB/build-rpb/conf/../../layers/openembedded-core/meta/lib/oe/utils.py", line 288, in run
    ret = self._target(*self._args, **self._kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Fixes: 03973c8c1c93 ("kernel: Add kernel specific STRIP variable")
(From OE-Core rev: 595b2a89d1af01645cea5d4163b100d59c951db6)

Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 41f019afc41f800b622c46a6d7cf1beffc97716a)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Khem Raj
7e5b4743f4 kernel.bbclass: Use strip utility used for kernel build in do_package
os.environ does not pass this down to the runstrip() function, and in
strip_execs() it uses the STRIP BitBake variable to find the strip utility
to use. Since there might be trailing whitespace in KERNEL_STRIP,
remove it; otherwise Python is not able to launch the tool.
e.g.
e.g.

FileNotFoundError: [Errno 2] No such file or directory: 'riscv64-yoe-linux-strip '

This is more evident when STRIP and KERNEL_STRIP are different utilities
e.g. when using clang as default toolchain but using gcc+binutils only for
kernel build.
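
A sketch of the normalisation idea (kernel.bbclass reads these values from the
BitBake datastore rather than via a helper like this; the function name is
illustrative):

    import shutil

    def resolve_strip_tool(kernel_strip, fallback_strip):
        """Pick the strip utility for kernel packaging, tolerating stray whitespace."""
        # A trailing space such as "riscv64-yoe-linux-strip " makes the exec
        # fail with FileNotFoundError, so normalise before use.
        tool = (kernel_strip or fallback_strip).strip()
        if shutil.which(tool) is None:
            raise FileNotFoundError(tool)
        return tool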

(From OE-Core rev: e0bd7ce93a75c7ddb6b1c572453c37407e7e32da)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Cc: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Luca Ceresoli <luca.ceresoli@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 77497dbdca92ab4d6386a071bc281c42a7e8a14b)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Peter Kjellerstedt
cae5e1ee3d bb-matrix-plot.sh: Show underscores correctly in labels
Underscores previously caused the next character in the label to be
printed using subscript due to the enhanced string support in gnuplot.

(From OE-Core rev: a8039d601187b28d9cec4402c9e0bd72b2805eb2)

Signed-off-by: Peter Kjellerstedt <peter.kjellerstedt@axis.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 282b48f90f77e0766993018d22fe03dd303febdc)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
luca fancellu
20a4de703c oeqa/ssh: Handle SSHCall timeout error code
The current code in ssh.py terminates an ssh process that does not
finish its computation within the given timeout (when a timeout is
passed), and the SSHCall function returns the process error code.

OpenSSH before version 8.6_p1 returns 0 when it is terminated; from
commit 8a9520836e71830f4fccca066dba73fea3d16bda onwards
(version >= 8.6_p1) ssh returns 255 instead.

So for ssh versions older than 8.6_p1, when the SSHCall times out the
return code will be 0, meaning success, which is wrong.

Fix this issue by checking whether the process has timed out (and hence
been terminated) and whether the returned code is 0; in that case set it
to 255 to advertise that an error occurred.

Add a test case exercising the timeout in the SSHTest test_ssh
test function.
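
A simplified sketch of the resulting behaviour (not oeqa's ssh.py itself):

    import subprocess

    def ssh_call(cmd, timeout=None):
        """Run an ssh command, reporting 255 if it was terminated on timeout."""
        process = subprocess.Popen(cmd)
        timed_out = False
        try:
            process.wait(timeout=timeout)
        except subprocess.TimeoutExpired:
            timed_out = True
            process.terminate()
            process.wait()
        status = process.returncode
        # Older OpenSSH exits 0 when terminated; map that to 255 so callers
        # still see a failure, matching OpenSSH >= 8.6_p1 behaviour.
        if timed_out and status == 0:
            status = 255
        return status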

(From OE-Core rev: 82215c855ee39b4e39f24113241a7fb3f20f9531)

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 948fecca1db4c7a30fcca5fcf5eef95cd12efb00)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Max Krummenacher
2dec4dcecf Revert "bin_package.bbclass: Inhibit the default dependencies"
This reverts commit d1d09bd4d7be88f0e341d5fccbfbefeb98d4b727.

The commit not only removes the dependencies on the cross compiler
but also drops the dependency on e.g. virtual/${TARGET_PREFIX}compilerlibs
and virtual/libc, which in turn makes the file-rdeps QA check fail
when installing binaries linked against e.g. libc or libstdc++.

(From OE-Core rev: 47b436c42ba1ef3b24e8fe48c7ea274b1a53a60e)

Signed-off-by: Max Krummenacher <max.krummenacher@toradex.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit ababf6ceebe360c5f59a57428566c27b7a97a9e6)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
William Lyu
423af114ee perl: fix intermittent test failure
Fixes [YOCTO #15136]

This fix addresses the intermittent failure of the Perl ptest
t/op/sigsystem.t.

(From OE-Core rev: a9c39c67e8421103f14302f6cf7aa2bf6a940cba)

Signed-off-by: William Lyu <William.Lyu@windriver.com>
Signed-off-by: Randy MacLeod <randy.macleod@windriver.com>
Reported-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Luca Ceresoli <luca.ceresoli@bootlin.com>
(cherry picked from commit 8c1ee92efa107ed055f1737640a027fa89077494)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Joshua Watt
45736b12e1 goarch: Move Go architecture mapping to a library
Other spaces use the Go architecture definitions as their own (for
example, container arches are defined to be Go arches). To make it
easier for other places to use this mapping, move the code that does the
translation of OpenEmbedded arches to Go arches to a library.
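
For illustration, such a mapping reduces to a small dictionary plus a lookup
helper (the arch subset below is illustrative; the real library covers more
cases and the GOARM/GO386 tuning details):

    # Illustrative subset of an OE-to-Go architecture mapping.
    _OE_TO_GOARCH = {
        "x86_64": "amd64",
        "i586": "386",
        "i686": "386",
        "aarch64": "arm64",
        "arm": "arm",
        "mips": "mips",
        "mips64": "mips64",
        "riscv64": "riscv64",
        "powerpc64le": "ppc64le",
    }

    def go_map_arch(target_arch):
        """Translate an OpenEmbedded TARGET_ARCH value into a GOARCH value."""
        try:
            return _OE_TO_GOARCH[target_arch]
        except KeyError:
            raise KeyError("Unsupported Go architecture: %s" % target_arch)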

(From OE-Core rev: 5e0267aeb7d9f575f270f6856a67ac62ce8a0f71)

Signed-off-by: Joshua Watt <JPEWhacker@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 3e86f72fc2e1cc2e5ea4b4499722d736941167ce)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Peter Marko
006a8f1891 openssl: Upgrade 3.1.3 -> 3.1.4
https://github.com/openssl/openssl/blob/openssl-3.1/NEWS.md#major-changes-between-openssl-313-and-openssl-314-24-oct-2023

Major changes between OpenSSL 3.1.3 and OpenSSL 3.1.4 [24 Oct 2023]
* Mitigate incorrect resize handling for symmetric cipher keys and IVs. (CVE-2023-5363)

(From OE-Core rev: de390034aecb23226a532dad56c821b4edee35bb)

Signed-off-by: Peter Marko <peter.marko@siemens.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
(cherry picked from commit 104ba16de434a08b0c8ba4208be187f0ad1a2cf8)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Khem Raj
051a926579 llvm: Upgrade to 17.0.3
Brings following fixes

* 888437e1b600 [asan] Ensure __asan_register_elf_globals is called in COMDAT asan.module_ctor (#67745)
* 2e00f4ca4e91 [clang-format][doc] Update the Linux kernel coding style URL
* aeb83c3783a6 [clang-format] Fix a serious bug in git-clang-format (#65723)
* 268faa377aee [LSan] Mark create_thread_leak.cpp as UNSUPPORTED: darwin.
* 491a91e8eea2 [PowerPC] Use zext instead of anyext in custom and combine (#68784)
* 8ce6b65c89ad [PowerPC] Add test for #68783 (NFC)
* 7a23a5d43c67 [clang-format] Fix a bug in RemoveParentheses: ReturnStatement (#67911)
* be4016e52779 [X86] Fix logic for optimizing movmsk(bitcast(shuffle(x))); PR67287
* 496b174053bd [X86] Add tests for incorrectly optimizing out shuffle used in `movmsk`; PR67287
* f50c6382c716 [clang] [MinGW] Explicitly always pass the -fno-use-init-array (#68571)
* d10b731adcc8 [LVI][CVP] Treat undef like a full range (#68190)
* 37b79e779f44 [X86] combineConcatVectorOps - only concatenate single-use subops
* 5a13ce2d6020 Bump version to 17.0.3
* e7b3b94cf500 [clang] Correct behavior of `LLVM_UNREACHABLE_OPTIMIZE=OFF` for `Release` builds (#68284)
* f0a687d821c1 [LLD] [COFF] Fix handling of comdat .drectve sections (#68116)
* 8a8ade49ff49 workflows/release-binaries: Use more cores to avoid the 6 hour timeout (#67874)
* 1090b91a2840 [AArch64] Disable loop alignment for Windows targets (#67894)
* 69c8c96691c7 [Sema] Use underlying type of scoped enum for -Wformat diagnostics (#67378)
* b2417f51dbbd (tag: llvmorg-17.0.2) Fix release/export.sh to export runtimes tarball, too (#67404)
* 23988a1d82d5 [libc++] Fix `std::pair`'s  pair-like constructor's incorrect assumption (#66585)
* 33e14ecd6aac [CodeGen] Don't treat thread local globals as large data (#67764)
* 03f797b51df6 [workflow] Fix abi checker in llvm-tests. Same fix as in 99fb0af80d16b0ff886f032441392219e1cac452 (#67957)
* f6cf58eed973 [clang] [MinGW] Tolerate mingw specific linker options during compilation (#67891)
* b338a2830a2c [LLD] [COFF] Restore the current dir as the first entry in the search path (#67857)
* 6a5be8e95b43 [LLD] [COFF] Clarify -print-search-path for the empty string element (#67856)
* 71be0aafe357 [NFC] clang-format lld/COFF/Driver.cpp and lld/Common/Filesystem.cpp
* 0a2d7dae6ef2 [compiler-rt] Reinstate removal of CRT choice flags from CMAKE_*_FLAGS* (#67935)
* 098e653a5bed [MemCpyOpt] Merge alias metadatas when replacing arguments (#67539)
* 78d201ebc3e2 [MemCpyOpt] Add test for #67539 (NFC)
* e718f3240a57 [DependencyScanningFilesystem] Make sure the local/shared cache filename lookups use only absolute paths (#66122)
* 45066b9fbc7b [Sema] Fix fixit cast printing inside macros (#66853)
* 87ec1f460d0e Work around two more instances of __noinline__ conflicts. (#66138)
* 9da5b7a93bca [lldb] Fix building LLDB standlone without framework
* c056d720b534 [lldb][NFCI] Change logic to find clang resource dir in standalone builds
* cb23434f9e63 [XCOFF] Do not generate the special .ref for zero-length sections (#66805)
* 1b55dc9d94c3 Fix buildbot failure caused by D157623
* 28d81a2bfa0a [lld][COFF] Remove incorrect flag from EHcont table
* b7eba056b93c workflows/release-tasks: Setup FileCheck and not for release-lit (#66799)
* 9678f11b057c [StackColoring] Handle fixed object index
* 49e9ee190080 [StackColoring] Handle SEH catch object stack slots conservatively
* 17123a60b87c [X86] Add test for #66984 (NFC)
* 2839aa915066 [SimpleLoopUnswitch] Fix exponential unswitch
* 773f136d6faa [SimpleLoopUnswitch] Fix reversed branch during condition injection
* 4362f3e4cf48 [clang] Include `expected-no-diagnostics` in newly-added test (NFC)
* 5f1fcc43e592 [clang] Bail out when handling union access with virtual inheritance
* 178cf5bc8732 [clang][Diagnostics] Fix wrong line number display (#65238)
* 25a150b830f6 Revert "[InlineCost] Check for conflicting target attributes early"

(From OE-Core rev: 2c161d842af31b4194d54409bba46cdcc33c1e16)

Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit 8cfb833b66e514ea911aa4fbdc72592a06233f68)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Wang Mingyu
bb64157bff libsdl2: upgrade 2.28.3 -> 2.28.4
This is a stable bugfix release, with the following changes:

Enable clipping for zero sized rectangles in the SDL renderer
Notify X11 clipboard managers when the clipboard changes
Fixed sensor timestamps for third-party PS5 controllers
Added detection for Logitech and Simagic racing wheels

(From OE-Core rev: 3923426c799f8772fb84303000d04ac3d968e84f)

Signed-off-by: Wang Mingyu <wangmy@fujitsu.com>
Signed-off-by: Luca Ceresoli <luca.ceresoli@bootlin.com>
(cherry picked from commit f47de111cd66c3f9a5a6d5589e1fd034027a0a75)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Wang Mingyu
7180db61b6 ell: upgrade 0.58 -> 0.59
Changelog:
 Fix issue with symbol visibility.

(From OE-Core rev: daebf66af566e56bb9f4cb6c0e23330221e3ebbc)

Signed-off-by: Wang Mingyu <wangmy@fujitsu.com>
Signed-off-by: Luca Ceresoli <luca.ceresoli@bootlin.com>
(cherry picked from commit 14eba663b56f8f3b9c3aff5661cbe2aa7befe86e)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Alex Stewart
5c3f9cf00e libsndfile1: fix CVE-2022-33065
(From OE-Core rev: 84ea91d63147c19ebf5909f7e9f377ddb1a52a7b)

Signed-off-by: Alex Stewart <alex.stewart@ni.com>
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
(cherry picked from commit f34991c7eeb91702a44ac8b4a190fcb45dac57cb)
Signed-off-by: Steve Sakoman <steve@sakoman.com>
2023-11-24 05:01:37 -10:00
Richard Purdie
2e9c2a2381 layer.conf: Switch layer to nanbield series only
(From OE-Core rev: 28e6fde4627ffd053dde8a8d44441a40dafd545c)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-09 17:09:49 +00:00
Michael Opdenacker
90e004cfe2 migration-guides: fix empty sections
(From yocto-docs rev: 897d5017eae6b3af2d5d489fc4e0915d9ce21458)

Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-06 22:52:32 +00:00
Michael Opdenacker
2ef3fd8c21 ref-manual: classes: explain cml1 class name
(From yocto-docs rev: 0ee4b7417087c105a4419b316c6b2c195c343f82)

Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
CC: Martin Jansa <martin.jansa@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-06 22:52:32 +00:00
Paul Eggleton
a5a10bfec7 migration-4.3: additional migration items
Add the following:

* Removed recipes
* One removed class
* Output file name changes
* Versioning changes
* tunctl removal

(From yocto-docs rev: 72114088bc9be184aab7b55087ea97a32a65cd6d)

Signed-off-by: Paul Eggleton <bluelightning@bluelightning.org>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-06 22:52:32 +00:00
Paul Eggleton
8292c949a0 migration-4.3: adjustments to existing text
* Reword the layername side-effects commentary to be a bit more readable
* Extend edgerouter removal description
* Correct capitalisation of systemd
* For QEMU_USE_SLIRP, specify what to use instead, and adjust the
  following list item to use the same style
* Extend statement on -crosssdk / MLPREFIX change to indicate what
  needs to be done

(From yocto-docs rev: bfc49b59b6cd905cef0294792f05661b36181a6e)

Signed-off-by: Paul Eggleton <bluelightning@bluelightning.org>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-06 22:52:32 +00:00
Paul Eggleton
25716e9d99 migration-4.3: remove some unnecessary items
Remove some items from the 4.3 migration guide:

* The PERLVERSION and PERLARCH items are already mentioned under the
  removed variables section
* The jsDelivr item is interesting, but it isn't a backwards
  compatibility issue that the user would need to take action to
  resolve, and we already cover it in the release notes.

(From yocto-docs rev: c72d190cd8ccc471a0b93b90b272c95cd57ef3dc)

Signed-off-by: Paul Eggleton <bluelightning@bluelightning.org>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-06 22:52:32 +00:00
Paul Eggleton
44ff7e1340 release-notes-4.3: feature additions
Some additional feature items from combing through commits (not 100%
complete yet).

(From yocto-docs rev: 05c13cf0964a892a38531e3cfac68687278ee601)

Signed-off-by: Paul Eggleton <bluelightning@bluelightning.org>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-06 22:52:32 +00:00
Paul Eggleton
fcbe7a5caa release-notes-4.3: move new classes to Rust section
These are both Rust-related; let's move them to the Rust section since they
are more notable there.

(From yocto-docs rev: 0510136abf8868d510125bae7f4096342bb94ec0)

Signed-off-by: Paul Eggleton <bluelightning@bluelightning.org>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-06 22:52:32 +00:00
Paul Eggleton
f0d9d84a74 release-notes-4.3: remove the Distribution section
This section doesn't make a lot of sense to separate out. The SPDX
change is now no longer Poky-specific, and the poky-altcfg usrmerge
change is not really notable given that poky-altcfg is not widely used
outside of our testing and also itself selects systemd as INIT_MANAGER
and thus requires usrmerge anyway (as noted elsewhere).

(From yocto-docs rev: 234379c81db810c1fc3b860d51a59c200e97b2ca)

Signed-off-by: Paul Eggleton <bluelightning@bluelightning.org>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-06 22:52:32 +00:00
Paul Eggleton
1ae470c15a release-notes-4.3: add CVEs, recipe upgrades, license changes, contributors
Add the list of CVE fixes, recipe upgrades (taken from commits, since the
layer index version comparison is not currently working), license changes
and contributor list.

(From yocto-docs rev: 32bc3d603894ddefb4766fdf4e10442f1aa75216)

Signed-off-by: Paul Eggleton <bluelightning@bluelightning.org>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-06 22:52:32 +00:00
Paul Eggleton
f662f8e57a release-notes-4.3: tweaks to existing text
A few grammar tweaks.

(From yocto-docs rev: a3e1258be27a08147b062603bd1b6526b26e9516)

Signed-off-by: Paul Eggleton <bluelightning@bluelightning.org>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-06 22:52:32 +00:00
Paul Eggleton
9536ba3c6c release-notes-4.3: fix some typos
(From yocto-docs rev: 3c98d2a1bc023aed75261ed7f4e18977b587d2f0)

Signed-off-by: Paul Eggleton <bluelightning@bluelightning.org>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-06 22:52:32 +00:00
Paul Eggleton
20b23e1fba ref-manual: remove semicolons from *PROCESS_COMMAND variables
In nanbield these are no longer needed - spaces are sufficient.
The code still handles any semicolons (replacing them with spaces before
interpreting the value), but let's avoid people adding them from now on
in case we decide to change that in future.

(From yocto-docs rev: 2947f6309f86cdf5322a39d4420e77431a8e3572)

Signed-off-by: Paul Eggleton <bluelightning@bluelightning.org>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-06 22:52:32 +00:00
Paul Eggleton
98ab1b436a ref-manual: update SDK_NAME variable documentation
Update for changes in nanbield. Note that I am documenting what is set
by poky.conf here (since this is Yocto Project documentation), which is
slightly different from what is done in meta/conf/bitbake.conf.

(From yocto-docs rev: 9764cb9e19788eb1caea0d2e95fbe7a5c19887d4)

Signed-off-by: Paul Eggleton <bluelightning@bluelightning.org>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-06 22:52:32 +00:00
Paul Eggleton
abc2b81652 Remove references to apm in MACHINE_FEATURES
apm is no longer supported in nanbield.

(From yocto-docs rev: fa07d34db3b5ba670ed2dc1228ffb3c0c09b3c08)

Signed-off-by: Paul Eggleton <bluelightning@bluelightning.org>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-06 22:52:32 +00:00
Michael Opdenacker
73d64902fd bsp-guide: bsp.rst: update beaglebone example
(From yocto-docs rev: 8fb31b507c37d2c11e9dc98559bd7d145e1dce04)

Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-06 22:52:32 +00:00
BELHADJ SALEM Talel
c329d14347 overview-manual: concepts: Add Bitbake Tasks Map
Create a Map to detail how BitBake handles a recipe's tasks
and its compile/runtime dependencies along with detailed comments.

(From yocto-docs rev: 7f0ab56aa302babab6c9d600a8d8a91708cf75f7)

Signed-off-by: Talel BELHAJSALEM <bhstalel@gmail.com>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-06 22:52:32 +00:00
Marta Rybczynska
3151b63cb6 dev-manual: extend the description of CVE patch preparation
Extend the description on how to prepare a patch for a CVE issue.
Add a more illustrative and current example of how to modify
the patch file. Add an example of how to use CVE_STATUS.

(From yocto-docs rev: f982f6be6b52ba0915b2e6f712270dec5dde64fc)

Signed-off-by: Marta Rybczynska <marta.rybczynska@syslinbit.com>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-06 22:52:32 +00:00
Ross Burton
aebf95e7c7 migration-guides: git recipes reword
(From yocto-docs rev: 9ef7cfd47a53ed45f3d0db8534a42cefbfdf63b3)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-06 22:52:32 +00:00
Ross Burton
a24c6cad13 migration-guides: packaging changes
(From yocto-docs rev: 7558c99f50f4d96e12299a5b3c1059a71281a475)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-06 22:52:32 +00:00
Ross Burton
7116cd7350 migration-guides: add BitBake changes
(From yocto-docs rev: c719d78cc9d7fb5092d2f5d0285b3eea9ad8acfe)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-06 22:52:32 +00:00
Ross Burton
cf0b21e7de migration-guides: add utility notes
(From yocto-docs rev: ba0dcf57944058d9d5f2f791d463c72098c49561)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-06 22:52:32 +00:00
Ross Burton
ad3e54bd5f migration-guides: add testing notes
(From yocto-docs rev: cd71d0406c96b44cc872f9eb4c8604bcdd62fed6)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-06 22:52:32 +00:00
Ross Burton
ff26beb48f migration-guides: enabling SPDX only for Poky, not a global default
(From yocto-docs rev: fae0b4af717602d04e06d8619389d6b50e0e8e2d)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-06 22:52:32 +00:00
Ross Burton
7be7f0f852 migration-guides: remove SERIAL_CONSOLES_CHECK
(From yocto-docs rev: 364f8c17ba380107b2d837e17403307c3e04477c)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-06 22:52:32 +00:00
Ross Burton
52fa1a3c52 migration-guides: add kernel notes
(From yocto-docs rev: 45b67c5a37d560738037478b28cb7eb3d2f8e966)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-06 22:52:32 +00:00
Ross Burton
14d33f1d2e migration-guides: mention CDN
(From yocto-docs rev: b7efe7984f9bd62891dc72a6763a6a5935454fdf)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-06 22:52:32 +00:00
Ross Burton
45830dcc7f migration-guides: mention LLVM 17
(From yocto-docs rev: 64099ca9b89dd74df7b3a6a287b95a5a317cf916)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-06 22:52:32 +00:00
Ross Burton
dfb846621d migration-guides: remove non-notable change
(From yocto-docs rev: 7e6276993fa3ce9c87e4d7945f140f381a99a902)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-06 22:52:32 +00:00
Ross Burton
0ffe438e8f migration-guides: QEMU_USE_SLIRP variable removed
(From yocto-docs rev: f50e9fe501ccafd18ed2d8a9e505be503a721846)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-06 22:52:32 +00:00
Ross Burton
b14a3e31ee migration-guides: edgerouter machine removed
(From yocto-docs rev: e2f7b7feea061ee584c554b64efd583a70debcac)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-06 22:52:32 +00:00
Ross Burton
6010b8e8e8 migration-guides: add debian 12 to newly supported distros
(From yocto-docs rev: cccc13437d6172e6b0134288aa67972b001e8d28)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-06 22:52:32 +00:00
Michael Opdenacker
eeab4261db migration-guides: further updates for release 4.3
(From yocto-docs rev: fcd7490afba8e70740a2d4c17f759bf3e330e88a)

Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-06 22:52:32 +00:00
Michael Opdenacker
4eabedf187 ref-manual: variables: remove SERIAL_CONSOLES_CHECK
No longer in use in Poky (dropped in Nanbield through
multiple commits)

(From yocto-docs rev: e5d39e85a0db27bfc857fae9649f799179888eee)

Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-06 22:52:32 +00:00
Michael Opdenacker
a9003d3a83 ref-manual: variables: add RECIPE_MAINTAINER
(From yocto-docs rev: 30e41530402a4f9c37f77e89bae7469b68aad901)

Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-06 22:52:32 +00:00
Michael Opdenacker
0565bd0379 ref-manual: variables: mention new CDN for SSTATE_MIRRORS
(From yocto-docs rev: 4ef0c24b206d71c348ff657a2ab83ab857539fb6)

Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-06 22:52:32 +00:00
Michael Opdenacker
7b8ce9b979 ref-manual: document cargo_c class
(From yocto-docs rev: 74fc6a70d4636b37fe4eab290ea974e0f1531dbf)

Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
CC: Frederic Martinsons <frederic.martinsons@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-06 22:52:32 +00:00
Ross Burton
96290c8b1c cve-check: don't warn if a patch is remote
We don't make do_cve_check depend on do_unpack because that would be a
waste of time 99% of the time.  The compromise here is that we can't
scan remote patches for issues, but this isn't a problem so downgrade
the warning to a note.

Also move the check for CVEs in the filename before the local file check
so that even with remote patches, we still check for CVE references in
the name.
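
A sketch of that ordering (the class's real helper differs in detail; CVE IDs
in the patch name are counted even for remote patches, and only local files
are opened):

    import os
    import re

    CVE_RE = re.compile(r"CVE-\d{4}-\d+", re.IGNORECASE)

    def cves_from_patch(patch_uri, local_path=None):
        """Collect CVE references from a patch name and, if local, its contents."""
        # Check the filename first so remote patches still contribute CVE IDs.
        cves = set(CVE_RE.findall(os.path.basename(patch_uri)))
        if local_path is None:
            # Remote patch: nothing to scan, and that is expected, so only note it.
            print("NOTE: not scanning remote patch %s" % patch_uri)
            return cves
        with open(local_path, errors="ignore") as f:
            cves.update(CVE_RE.findall(f.read()))
        return cves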

(From OE-Core rev: 201f0e1d55ca2fa6ab948a82d94e52c6a77ca7d2)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-03 13:49:23 +00:00
Ross Burton
5cdac8795d cve-check: slightly more verbose warning when adding the same package twice
Occasionally the cve-check tool will warn that it is adding the same
package twice.  Knowing what this package is might be the first step
towards understanding where this message comes from.

(From OE-Core rev: 699863be46fab91d5729fce1dc5b795761247f98)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-03 13:49:23 +00:00
Ross Burton
7b119ca128 cve-check: sort the package list in the JSON report
The JSON report generated by the cve-check class is basically a huge
list of packages.  This list of packages is, however, unsorted.

To make things easier for people comparing the JSON, or more
specifically for git when archiving the JSON over time in a git
repository, we can sort the list by package name.
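
The change amounts to sorting the package list before serialisation; a sketch
assuming a simplified {"package": [{"name": ...}, ...]} layout rather than the
full report format:

    import json

    def write_report(report, path):
        """Write a cve-check style JSON report with its package list sorted by name."""
        report["package"] = sorted(report["package"], key=lambda p: p["name"])
        with open(path, "w") as f:
            json.dump(report, f, indent=2)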

(From OE-Core rev: f3d9dd947e678078b57b4b607e231b702c26dd4a)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-03 13:49:23 +00:00
Ross Burton
3892744324 pixman: ignore CVE-2023-37769
This issue relates to a floating point exception in stress-test, which
is an unlikely security exploit at the best of times, but the test is
not installed so isn't relevant.

(From OE-Core rev: a36d62a06be6cce1a438f8f2178eb60aad6b7267)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-03 13:49:23 +00:00
Ross Burton
1ab33843ef zlib: ignore CVE-2023-45853
This CVE relates to a bug in the minizip tool, but we don't build that.

(From OE-Core rev: 5b06913e5883c35390c87f6660a0578c73ff4ddd)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-03 13:49:23 +00:00
Ross Burton
0542c12e89 libxml2: ignore disputed CVE-2023-45322
This CVE is a use-after-free which theoretically can be an exploit
vector, but this UAF only occurs when malloc() fails.  As it's
unlikely that the user can orchestrate malloc() failures at just the
right place to break on _this_ malloc and not others, it is disputed that
this is actually a security issue.

The underlying bug has been fixed, and will be incorporated into the
next release.

(From OE-Core rev: 8c70e7cecb1beb30a5be4ea9bbc89c2f2e11853b)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-03 13:49:23 +00:00
Ross Burton
372c596db1 linux-yocto: update CVE exclusions
(From OE-Core rev: d401ed0666a3bcb10b013f38e1a528dca62a9c0d)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-03 13:49:23 +00:00
Lee Chee Yang
d3724f0d04 documentation.conf: drop SERIAL_CONSOLES_CHECK
remove obsolete SERIAL_CONSOLES_CHECK.

(From OE-Core rev: 5ec0371e2837428cb1596b5f40f5653de8b64526)

Signed-off-by: Lee Chee Yang <chee.yang.lee@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-03 09:35:39 +00:00
Lee Chee Yang
7888592393 machine: drop obsolete SERIAL_CONSOLES_CHECK
(From meta-yocto rev: 715de050774907dd5596d826929b6588593a91ae)

Signed-off-by: Lee Chee Yang <chee.yang.lee@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-11-02 11:20:25 +00:00
Michael Opdenacker
bc00caadc9 ref-manual: document MESON_TARGET
(From yocto-docs rev: 8109eeb5b7a4e5b2f50047e049ce0295bdc94856)

Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
CC: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-31 13:12:06 +00:00
Michael Opdenacker
6fb4c79030 manuals: improve description of CVE_STATUS and CVE_STATUS_GROUPS
- Mention CVE_STATUS_GROUPS in the development manual
  (it was otherwise only present in the reference manual, with
  no cross-reference pointing to it)

- In the reference manual description of CVE_STATUS,
  link back to the development manual, to provide context.

(From yocto-docs rev: cfef5fe41b6c819e783c88829448ae38141650a5)

Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-31 13:12:06 +00:00
Michael Opdenacker
1e1d892699 migration-guides: further updates for 4.3
(From yocto-docs rev: 3a4d172f0d5668f3c6527bd80d1dad7831e72e89)

Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-31 13:12:06 +00:00
Michael Opdenacker
b6948e5524 ref-manual: document KERNEL_STRIP
(From yocto-docs rev: 0e1861dcb8819b86aba6a3e024efb8bfe4c300ad)

Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-31 13:12:06 +00:00
Michael Opdenacker
779e407a80 migration-guides: mention runqemu change in serial port management
Plus a minor whitespace fix.

(From yocto-docs rev: 6f7e1b935168464b4682a8687aa6d031a1a9fb73)

Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Reported-by: Mark Hatle <mark.hatle@kernel.crashing.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-31 13:12:06 +00:00
Michael Opdenacker
b910386c6a migration-guides: updates for 4.3
(From yocto-docs rev: a2d79ed745df6fe243e6c5e1001d406001c0d3a7)

Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
CC: Paul Eggleton <bluelightning@bluelightning.org>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-31 13:12:06 +00:00
Michael Opdenacker
148c203bd1 ref-manual: variables: document OEQA_REPRODUCIBLE_TEST_PACKAGE
Introduced by
https://git.yoctoproject.org/poky/commit/?id=88abdec715ed0c1f613c9b5132cd45db741d5c65

(From yocto-docs rev: 2e64352653cd7e89a2b08d84d6f7a1e039d4346a)

Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-31 13:12:06 +00:00
Michael Opdenacker
8a032f4dbd ref-manual: document KERNEL_LOCALVERSION
Introduced by
https://git.yoctoproject.org/poky/commit/?id=66ed174ccdf7a89cb998f503cc6b631e2d1adcc0

(From yocto-docs rev: 4bdd4976667b802895b13541b77191a65335a175)

Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
CC: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-31 13:12:06 +00:00
Michael Opdenacker
724a10232c test-manual: reproducible-builds: stop mentioning LTO bug
Now that https://bugzilla.yoctoproject.org/show_bug.cgi?id=14481
is closed.

(From yocto-docs rev: de23d389f3fe7c2e18325cf29361d90b9bb19ead)

Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-31 13:12:06 +00:00
Marta Rybczynska
e7ab20fda4 dev-manual: add security team processes
Add the initial version of the section on vulnerability reports and the
operations of the Security Team, based on a
transcription of https://wiki.yoctoproject.org/wiki/Security_private_reporting

(From yocto-docs rev: 2b86ac95c557f1e57176cceff428eb63e56c6328)

Signed-off-by: Marta Rybczynska <marta.rybczynska@syslinbit.com>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-31 13:12:06 +00:00
Eero Aaltonen
35394fc7e9 ref-manual: add systemd-resolved to distro features
systemd-resolved is a distro feature added in poky commit
6f30e3586e

(From yocto-docs rev: 2adb9c0a37f7bdbb293e78d71c872ca3bd9c06c4)

Signed-off-by: Eero Aaltonen <eero.aaltonen@vaisala.com>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-31 13:12:06 +00:00
Michael Opdenacker
2e0b3adf18 manuals: correct "yocto-linux" by "linux-yocto"
(From yocto-docs rev: 1fc5046100f27126711df0513d1ad87a9a54f55a)

Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-31 13:12:06 +00:00
Marta Rybczynska
fb6d870a75 bitbake: SECURITY.md: add file
Add a SECURITY.md file with hints for security researchers and other
parties who might report potential security vulnerabilities.

(Bitbake rev: 936fcec41efacc4ce988c81882a9ae6403702bea)

Signed-off-by: Marta Rybczynska <marta.rybczynska@syslinbit.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-24 12:52:39 +01:00
BELHADJ SALEM Talel
0ddd876f9f ref-manual: variables: add example for SYSROOT_DIRS variable
(From yocto-docs rev: 65b62118da6f355e56c489c6be08ba9ea94b9f04)

Signed-off-by: Talel BELHAJSALEM <bhstalel@gmail.com>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-20 15:07:07 +01:00
BELHADJ SALEM Talel
cfa4458422 ref-manual: variables: add TOOLCHAIN_OPTIONS variable
(From yocto-docs rev: 6f7bd97a6d3d6d8cfd149a7e07df35da4141e650)

Signed-off-by: Talel BELHAJSALEM <bhstalel@gmail.com>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-20 15:07:07 +01:00
BELHADJ SALEM Talel
f20b3d92eb ref-manual: variables: add RECIPE_SYSROOT and RECIPE_SYSROOT_NATIVE
(From yocto-docs rev: 8aa25e2a668d35bab2f79457248abcde92dc92aa)

Signed-off-by: Talel BELHAJSALEM <bhstalel@gmail.com>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-20 15:07:07 +01:00
Michael Opdenacker
d0d66f5337 dev-manual: start.rst: remove obsolete reference
Remove a reference to a web resource which is clearly marked as obsolete.
Replace the unnecessarily verbose note with plain links to the mentioned tools.

[YOCTO #15233]

(From yocto-docs rev: 3f979f5d2446d57d75f0c4ad2199510d533880e8)

Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Reported-by: Robert P. J. Day <rpjday@crashcourse.ca>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-20 15:07:07 +01:00
Michael Opdenacker
e704b9b4bc brief-yoctoprojectqs: use new CDN mirror for sstate
The CDN mirror is recommended instead of the Yocto Project mirror because it is
expected to be faster. Make sure only one such mirror is set.

(From yocto-docs rev: 5a2d09501ab807a0f61c10533f3bd81894f6f20e)

Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
CC: richard.purdie@linuxfoundation.org
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-20 15:07:07 +01:00
Jérémy Rosen
f35420ba71 ref-manual: Add documentation for the unimplemented-ptest QA warning
(From yocto-docs rev: d90106ff2d905e457659acdb65a91ce5dcfdd05e)

Signed-off-by: Jérémy Rosen <jeremy.rosen@smile.fr>
Reviewed-by: Yoann Congal <yoann.congal@smile.fr>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-20 15:07:07 +01:00
Trevor Gamblin
fc0e384d19 contributor-guide: clarify patchtest usage
- Make it clear that patchtest only supports openembedded-core for now
- Add a short list of instructions for installing Python module
  dependencies on the host
- Add a step to add meta-selftest with bitbake layers so that all tests
  can run

(From yocto-docs rev: bcd58d68e72226be1930593f5f7fb37de15b7913)

Signed-off-by: Trevor Gamblin <tgamblin@baylibre.com>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-20 15:07:07 +01:00
Trevor Gamblin
e3cfbe2d78 contributor-guide: add patchtest section
(From yocto-docs rev: 236cd04d62bdf653aae9b41d32d9f87848a34339)

Signed-off-by: Trevor Gamblin <tgamblin@baylibre.com>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-20 15:07:07 +01:00
BELHADJ SALEM Talel
d9e40c6025 dev-manual: layers: Add notes about layer.conf
As discussed previously with Richard Purdie, the code supports this but the documentation does not.
Developers in general will not notice this or focus on it because they rarely touch the
layer.conf template file, but more details can help.

(From yocto-docs rev: 15fc103d4ddd14698c8e75cc654ac157ca1ad740)

Signed-off-by: Talel BELHAJSALEM <bhstalel@gmail.com>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-20 15:07:07 +01:00
Quentin Schulz
55d6a19062 ref-manual: variables: provide no-match example for COMPATIBLE_MACHINE
COMPATIBLE_MACHINE is used to restrict a recipe and its packages
to a specific set of machines.

In some cases, it may make more sense to have the logic inverted and
have the recipe always forbidden except for hand-picked machines. Such
could be the case for pieces of software that only support some
architectures. In that scenario, it is sometimes a bit easier on the eye
and for maintenance to use the OVERRIDES mechanism but for that, a
default should be set.

COMPATIBLE_MACHINE:aarch64 = "^(aarch64)$"
COMPATIBLE_MACHINE:mips64 = "^(mips64)$"

wouldn't do much on their own, because if COMPATIBLE_MACHINE isn't set, the recipe is
assumed compatible, so without a default we fall back to
that case.

Hence, we need to add

COMPATIBLE_MACHINE = "^$"

as the default, so that it only matches the empty string, which
MACHINEOVERRIDES can never be.

Cc: Quentin Schulz <foss+yocto@0leil.net>
(From yocto-docs rev: 52196d39bc85de267daffb0074eb59786751f57d)

Signed-off-by: Quentin Schulz <quentin.schulz@theobroma-systems.com>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-20 15:07:07 +01:00
Robert P. J. Day
f81ed4fd61 profile-manual: aesthetic cleanups
Various aesthetic cleanups of section 1 of that manual, including:

  * replace 'HOWTO' with 'manual'
  * add more examples of sdk-related images
  * font fixes

(From yocto-docs rev: 608e93e13a8316a8d40e0675d4335084efa3736a)

Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-20 15:07:07 +01:00
BELHADJ SALEM Talel
273fbf4e76 ref-manual: Fix PACKAGECONFIG term and add an example
PACKAGECONFIG's first and second flag values are added to PACKAGECONFIG_CONFARGS,
which is then added to the appropriate variable (EXTRA_OECMAKE, ...).
So we only need to mention PACKAGECONFIG_CONFARGS, and it leads to the other variables.

I added a custom example to help with understanding PACKAGECONFIG.

(From yocto-docs rev: 7f26b0c0a08d6be9810128369265b0c494e7191b)

Signed-off-by: Talel BELHAJSALEM <bhstalel@gmail.com>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-20 15:07:07 +01:00
Robert P. J. Day
dde4dd6bc1 dev-manual: new-recipe.rst: add missing parenthesis to "Patching Code" section
Add missing parenthesis, and another example of a compressed patch filename.

(From yocto-docs rev: d44ccb5ed4292b0371651f38b9a0e3083f60ae87)

Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-20 15:07:07 +01:00
Michael Opdenacker
7e2bedcc5a bsp-guide: bsp: skip Intel machines no longer supported in Poky
(From yocto-docs rev: 0f8fe127eb9ae2f56b280a7634ea7ab9a270f382)

Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Reported-by: Robert P. J. Day <rpjday@crashcourse.ca>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-20 15:07:07 +01:00
Michael Opdenacker
0df0de095a manuals: update list of supported machines
The EdgeRouter machine is no longer supported.
https://git.yoctoproject.org/poky/commit/?id=0c64d0e4317e3749f7f7ed9ecd5d08bbb0cedc9e

(From yocto-docs rev: e600522f2d2514bdd888c91043b9c59563ee7a6d)

Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-20 15:07:07 +01:00
Michael Opdenacker
7eca8e35db sdk-manual: appendix-obtain: improve and update descriptions
- Improve text formatting
- Stop mentioning all possible values
- Update examples
- Correct descriptions

(From yocto-docs rev: f7437c2efa1014dc46481993b5e87d52dcf42b05)

Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-20 15:07:07 +01:00
Michael Opdenacker
115aa0a4fd dev-manual: wic: update "wic list images" output
(From yocto-docs rev: b9791285e5df4fa124230d2da4dcabb67088e23b)

Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-20 15:07:07 +01:00
Michael Opdenacker
b21d2b401e manuals: update linux-yocto append examples
(From yocto-docs rev: 0d195d66e434ddedd33bf8db89643fa5ab192e29)

Signed-off-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-20 15:07:07 +01:00
Arne Schwerdt
9431c9292d ref-manual: Warn about COMPATIBLE_MACHINE skipping native recipes
(From yocto-docs rev: fcc9b54cc46a0831f79a96e041cbe8deed58cf66)

Signed-off-by: Arne Schwerdt <arne.schwerdt@elbbits.com>
Reviewed-by: Michael Opdenacker <michael.opdenacker@bootlin.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-20 15:07:07 +01:00
Richard Purdie
15b576c410 build-appliance-image: Update to nanbield head revision
(From OE-Core rev: 4c261f8cbdf0c7196a74daad041d04eb093015f3)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-19 23:56:34 +01:00
Richard Purdie
631f5c6d4f build-appliance-image: Update to nanbield head revision
(From OE-Core rev: 6ecb3dac0b0033ae92a2727a0ae8803d52edaa64)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-19 23:15:04 +01:00
Richard Purdie
37e997f797 build-appliance-image: Update to nanbield head revision
(From OE-Core rev: 12fa669ea2372e759139430b23edc041e86fb543)

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-19 17:01:41 +01:00
Bruce Ashfield
157e13d4a0 linux-yocto/6.5: serial: core: integrate upstream fixes
Integrating the following commit(s) to linux-yocto/6.5:

    14f83e409308 serial: core: test for -EINPROGRESS during tx power management validation
    1b5b735f311f serial: core: Fix checks for tx runtime PM state
    dee98a75d75c Revert "serial-core: disable power managment for serial tx"

(From OE-Core rev: 4c9a85ed1d69e55963cd77122e5c869b30f3dbe4)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-19 17:01:04 +01:00
Bruce Ashfield
482c4d0a95 linux-yocto/6.5: config: remove VIDEO_STK1160_COMMON
Integrating the following commit(s) to linux-yocto/.:

    4531e74daf0 media/media-usb-tv.cfg: remove VIDEO_STK1160_COMMON

(From OE-Core rev: 40f2edd66afe5e5af607e110da78eb0a4a0b9cb9)

Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-19 17:01:04 +01:00
Ross Burton
53c006ba1a patchtest: sort when reading patches from a directory
When reading patches from a directory it's important to sort the output
of os.listdir(), as that returns the files in an effectively random
order.  We can't test that the patches apply if they're applied in the wrong
order, and typically patch filenames are prefixed with a counter to
ensure the order is correct.
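
A small sketch of the point being made here; the helper name is made up, only the sorted os.listdir() call matters:

    import os

    def read_patch_series(directory):
        # os.listdir() returns entries in arbitrary order, so sort them;
        # the usual NNNN- filename prefix then restores the intended order.
        return [os.path.join(directory, name)
                for name in sorted(os.listdir(directory))
                if name.endswith(".patch")]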

(From OE-Core rev: b2bbd5b4071d913ed24a9ffe43d4a97b0db16c6c)

Signed-off-by: Ross Burton <ross.burton@arm.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-19 17:01:04 +01:00
Trevor Gamblin
a98b1229df patchtest: check for untracked changes
[YOCTO #15243]

Avoid overwriting local changes when running patchtest by checking for
anything unstaged or uncommitted in the target repo, and logging an
error if something is found. This will provide the user with helpful feedback
if (for example) they forgot to commit a change for their patch under
test, and will leave the target repository in a reasonable state (rather
than on a temporary branch created by patchtest).
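
A hedged sketch of such a pre-flight check (the real patchtest code may differ); the repository path below is a placeholder:

    import subprocess

    def repo_is_clean(repo_dir):
        # "git status --porcelain" prints one line per unstaged, uncommitted
        # or untracked path, so empty output means the tree is clean.
        out = subprocess.check_output(
            ["git", "-C", repo_dir, "status", "--porcelain"], text=True)
        return out.strip() == ""

    if not repo_is_clean("/path/to/openembedded-core"):
        raise SystemExit("ERROR: commit or stash local changes before running patchtest")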

(From OE-Core rev: 2d24ff9568d729b17cfc746d0948e63c78d9f3ae)

Signed-off-by: Trevor Gamblin <tgamblin@baylibre.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2023-10-19 17:01:04 +01:00
2868 changed files with 43266 additions and 160183 deletions

.gitignore
View File

@@ -36,4 +36,3 @@ _toaster_clones/
downloads/
sstate-cache/
toaster.sqlite
.vscode/

View File

@@ -22,7 +22,7 @@ for full details on how to submit changes.
As a quick guide, patches should be sent to openembedded-core@lists.openembedded.org
The git command to do that would be:
git send-email -M -1 --to openembedded-core@lists.openembedded.org --subject-prefix='scarthgap][PATCH'
git send-email -M -1 --to openembedded-core@lists.openembedded.org
Mailing list:

View File

@@ -18,7 +18,7 @@ Bitbake requires Python version 3.8 or newer.
Contributing
------------
Please refer to our contributor guide here: https://docs.yoctoproject.org/contributor-guide/
Please refer to our contributor guide here: https://docs.yoctoproject.org/dev/contributor-guide/
for full details on how to submit changes.
As a quick guide, patches should be sent to bitbake-devel@lists.openembedded.org

View File

@@ -27,7 +27,7 @@ from bb.main import bitbake_main, BitBakeConfigParameters, BBMainException
bb.utils.check_system_locale()
__version__ = "2.8.1"
__version__ = "2.6.0"
if __name__ == "__main__":
if __version__ != bb.__version__:

View File

@@ -72,17 +72,13 @@ def find_siginfo_task(bbhandler, pn, taskname, sig1=None, sig2=None):
elif sig2 not in sigfiles:
logger.error('No sigdata files found matching %s %s with signature %s' % (pn, taskname, sig2))
sys.exit(1)
latestfiles = [sigfiles[sig1]['path'], sigfiles[sig2]['path']]
latestfiles = [sigfiles[sig1], sigfiles[sig2]]
else:
sigfiles = find_siginfo(bbhandler, pn, taskname)
latestsigs = sorted(sigfiles.keys(), key=lambda h: sigfiles[h]['time'])[-2:]
if not latestsigs:
filedates = find_siginfo(bbhandler, pn, taskname)
latestfiles = sorted(filedates.keys(), key=lambda f: filedates[f])[-2:]
if not latestfiles:
logger.error('No sigdata files found matching %s %s' % (pn, taskname))
sys.exit(1)
latestfiles = [sigfiles[latestsigs[0]]['path']]
if len(latestsigs) > 1:
latestfiles.append(sigfiles[latestsigs[1]]['path'])
return latestfiles
@@ -100,7 +96,7 @@ def recursecb(key, hash1, hash2):
elif hash2 not in hashfiles:
recout.append("Unable to find matching sigdata for %s with hash %s" % (key, hash2))
else:
out2 = bb.siggen.compare_sigfiles(hashfiles[hash1]['path'], hashfiles[hash2]['path'], recursecb, color=color)
out2 = bb.siggen.compare_sigfiles(hashfiles[hash1], hashfiles[hash2], recursecb, color=color)
for change in out2:
for line in change.splitlines():
recout.append(' ' + line)

View File

@@ -14,8 +14,6 @@ import sys
import threading
import time
import warnings
import netrc
import json
warnings.simplefilter("default")
try:
@@ -38,42 +36,18 @@ except ImportError:
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'lib'))
import hashserv
import bb.asyncrpc
DEFAULT_ADDRESS = 'unix://./hashserve.sock'
METHOD = 'stress.test.method'
def print_user(u):
print(f"Username: {u['username']}")
if "permissions" in u:
print("Permissions: " + " ".join(u["permissions"]))
if "token" in u:
print(f"Token: {u['token']}")
def main():
def handle_get(args, client):
result = client.get_taskhash(args.method, args.taskhash, all_properties=True)
if not result:
return 0
print(json.dumps(result, sort_keys=True, indent=4))
return 0
def handle_get_outhash(args, client):
result = client.get_outhash(args.method, args.outhash, args.taskhash)
if not result:
return 0
print(json.dumps(result, sort_keys=True, indent=4))
return 0
def handle_stats(args, client):
if args.reset:
s = client.reset_stats()
else:
s = client.get_stats()
print(json.dumps(s, sort_keys=True, indent=4))
pprint.pprint(s)
return 0
def handle_stress(args, client):
@@ -82,24 +56,25 @@ def main():
nonlocal missed_hashes
nonlocal max_time
with hashserv.create_client(args.address) as client:
for i in range(args.requests):
taskhash = hashlib.sha256()
taskhash.update(args.taskhash_seed.encode('utf-8'))
taskhash.update(str(i).encode('utf-8'))
client = hashserv.create_client(args.address)
start_time = time.perf_counter()
l = client.get_unihash(METHOD, taskhash.hexdigest())
elapsed = time.perf_counter() - start_time
for i in range(args.requests):
taskhash = hashlib.sha256()
taskhash.update(args.taskhash_seed.encode('utf-8'))
taskhash.update(str(i).encode('utf-8'))
with lock:
if l:
found_hashes += 1
else:
missed_hashes += 1
start_time = time.perf_counter()
l = client.get_unihash(METHOD, taskhash.hexdigest())
elapsed = time.perf_counter() - start_time
max_time = max(elapsed, max_time)
pbar.update()
with lock:
if l:
found_hashes += 1
else:
missed_hashes += 1
max_time = max(elapsed, max_time)
pbar.update()
max_time = 0
found_hashes = 0
@@ -151,101 +126,12 @@ def main():
print("Removed %d rows" % (result["count"]))
return 0
def handle_refresh_token(args, client):
r = client.refresh_token(args.username)
print_user(r)
def handle_set_user_permissions(args, client):
r = client.set_user_perms(args.username, args.permissions)
print_user(r)
def handle_get_user(args, client):
r = client.get_user(args.username)
print_user(r)
def handle_get_all_users(args, client):
users = client.get_all_users()
print("{username:20}| {permissions}".format(username="Username", permissions="Permissions"))
print(("-" * 20) + "+" + ("-" * 20))
for u in users:
print("{username:20}| {permissions}".format(username=u["username"], permissions=" ".join(u["permissions"])))
def handle_new_user(args, client):
r = client.new_user(args.username, args.permissions)
print_user(r)
def handle_delete_user(args, client):
r = client.delete_user(args.username)
print_user(r)
def handle_get_db_usage(args, client):
usage = client.get_db_usage()
print(usage)
tables = sorted(usage.keys())
print("{name:20}| {rows:20}".format(name="Table name", rows="Rows"))
print(("-" * 20) + "+" + ("-" * 20))
for t in tables:
print("{name:20}| {rows:<20}".format(name=t, rows=usage[t]["rows"]))
print()
total_rows = sum(t["rows"] for t in usage.values())
print(f"Total rows: {total_rows}")
def handle_get_db_query_columns(args, client):
columns = client.get_db_query_columns()
print("\n".join(sorted(columns)))
def handle_gc_status(args, client):
result = client.gc_status()
if not result["mark"]:
print("No Garbage collection in progress")
return 0
print("Current Mark: %s" % result["mark"])
print("Total hashes to keep: %d" % result["keep"])
print("Total hashes to remove: %s" % result["remove"])
return 0
def handle_gc_mark(args, client):
where = {k: v for k, v in args.where}
result = client.gc_mark(args.mark, where)
print("New hashes marked: %d" % result["count"])
return 0
def handle_gc_sweep(args, client):
result = client.gc_sweep(args.mark)
print("Removed %d rows" % result["count"])
return 0
def handle_unihash_exists(args, client):
result = client.unihash_exists(args.unihash)
if args.quiet:
return 0 if result else 1
print("true" if result else "false")
return 0
parser = argparse.ArgumentParser(description='Hash Equivalence Client')
parser.add_argument('--address', default=DEFAULT_ADDRESS, help='Server address (default "%(default)s")')
parser.add_argument('--log', default='WARNING', help='Set logging level')
parser.add_argument('--login', '-l', metavar="USERNAME", help="Authenticate as USERNAME")
parser.add_argument('--password', '-p', metavar="TOKEN", help="Authenticate using token TOKEN")
parser.add_argument('--become', '-b', metavar="USERNAME", help="Impersonate user USERNAME (if allowed) when performing actions")
parser.add_argument('--no-netrc', '-n', action="store_false", dest="netrc", help="Do not use .netrc")
subparsers = parser.add_subparsers()
get_parser = subparsers.add_parser('get', help="Get the unihash for a taskhash")
get_parser.add_argument("method", help="Method to query")
get_parser.add_argument("taskhash", help="Task hash to query")
get_parser.set_defaults(func=handle_get)
get_outhash_parser = subparsers.add_parser('get-outhash', help="Get output hash information")
get_outhash_parser.add_argument("method", help="Method to query")
get_outhash_parser.add_argument("outhash", help="Output hash to query")
get_outhash_parser.add_argument("taskhash", help="Task hash to query")
get_outhash_parser.set_defaults(func=handle_get_outhash)
stats_parser = subparsers.add_parser('stats', help='Show server stats')
stats_parser.add_argument('--reset', action='store_true',
help='Reset server stats')
@@ -273,55 +159,6 @@ def main():
clean_unused_parser.add_argument("max_age", metavar="SECONDS", type=int, help="Remove unused entries older than SECONDS old")
clean_unused_parser.set_defaults(func=handle_clean_unused)
refresh_token_parser = subparsers.add_parser('refresh-token', help="Refresh auth token")
refresh_token_parser.add_argument("--username", "-u", help="Refresh the token for another user (if authorized)")
refresh_token_parser.set_defaults(func=handle_refresh_token)
set_user_perms_parser = subparsers.add_parser('set-user-perms', help="Set new permissions for user")
set_user_perms_parser.add_argument("--username", "-u", help="Username", required=True)
set_user_perms_parser.add_argument("permissions", metavar="PERM", nargs="*", default=[], help="New permissions")
set_user_perms_parser.set_defaults(func=handle_set_user_permissions)
get_user_parser = subparsers.add_parser('get-user', help="Get user")
get_user_parser.add_argument("--username", "-u", help="Username")
get_user_parser.set_defaults(func=handle_get_user)
get_all_users_parser = subparsers.add_parser('get-all-users', help="List all users")
get_all_users_parser.set_defaults(func=handle_get_all_users)
new_user_parser = subparsers.add_parser('new-user', help="Create new user")
new_user_parser.add_argument("--username", "-u", help="Username", required=True)
new_user_parser.add_argument("permissions", metavar="PERM", nargs="*", default=[], help="New permissions")
new_user_parser.set_defaults(func=handle_new_user)
delete_user_parser = subparsers.add_parser('delete-user', help="Delete user")
delete_user_parser.add_argument("--username", "-u", help="Username", required=True)
delete_user_parser.set_defaults(func=handle_delete_user)
db_usage_parser = subparsers.add_parser('get-db-usage', help="Database Usage")
db_usage_parser.set_defaults(func=handle_get_db_usage)
db_query_columns_parser = subparsers.add_parser('get-db-query-columns', help="Show columns that can be used in database queries")
db_query_columns_parser.set_defaults(func=handle_get_db_query_columns)
gc_status_parser = subparsers.add_parser("gc-status", help="Show garbage collection status")
gc_status_parser.set_defaults(func=handle_gc_status)
gc_mark_parser = subparsers.add_parser('gc-mark', help="Mark hashes to be kept for garbage collection")
gc_mark_parser.add_argument("mark", help="Mark for this garbage collection operation")
gc_mark_parser.add_argument("--where", "-w", metavar="KEY VALUE", nargs=2, action="append", default=[],
help="Keep entries in table where KEY == VALUE")
gc_mark_parser.set_defaults(func=handle_gc_mark)
gc_sweep_parser = subparsers.add_parser('gc-sweep', help="Perform garbage collection and delete any entries that are not marked")
gc_sweep_parser.add_argument("mark", help="Mark for this garbage collection operation")
gc_sweep_parser.set_defaults(func=handle_gc_sweep)
unihash_exists_parser = subparsers.add_parser('unihash-exists', help="Check if a unihash is known to the server")
unihash_exists_parser.add_argument("--quiet", action="store_true", help="Don't print status. Instead, exit with 0 if unihash exists and 1 if it does not")
unihash_exists_parser.add_argument("unihash", help="Unihash to check")
unihash_exists_parser.set_defaults(func=handle_unihash_exists)
args = parser.parse_args()
logger = logging.getLogger('hashserv')
@@ -335,30 +172,11 @@ def main():
console.setLevel(level)
logger.addHandler(console)
login = args.login
password = args.password
if login is None and args.netrc:
try:
n = netrc.netrc()
auth = n.authenticators(args.address)
if auth is not None:
login, _, password = auth
except FileNotFoundError:
pass
except netrc.NetrcParseError as e:
sys.stderr.write(f"Error parsing {e.filename}:{e.lineno}: {e.msg}\n")
func = getattr(args, 'func', None)
if func:
try:
with hashserv.create_client(args.address, login, password) as client:
if args.become:
client.become_user(args.become)
return func(args, client)
except bb.asyncrpc.InvokeError as e:
print(f"ERROR: {e}")
return 1
client = hashserv.create_client(args.address)
return func(args, client)
return 0

View File

@@ -11,161 +11,56 @@ import logging
import argparse
import sqlite3
import warnings
warnings.simplefilter("default")
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), "lib"))
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'lib'))
import hashserv
from hashserv.server import DEFAULT_ANON_PERMS
VERSION = "1.0.0"
DEFAULT_BIND = "unix://./hashserve.sock"
DEFAULT_BIND = 'unix://./hashserve.sock'
def main():
parser = argparse.ArgumentParser(
description="Hash Equivalence Reference Server. Version=%s" % VERSION,
formatter_class=argparse.RawTextHelpFormatter,
epilog="""
The bind address may take one of the following formats:
unix://PATH - Bind to unix domain socket at PATH
ws://ADDRESS:PORT - Bind to websocket on ADDRESS:PORT
ADDRESS:PORT - Bind to raw TCP socket on ADDRESS:PORT
parser = argparse.ArgumentParser(description='Hash Equivalence Reference Server. Version=%s' % VERSION,
epilog='''The bind address is the path to a unix domain socket if it is
prefixed with "unix://". Otherwise, it is an IP address
and port in form ADDRESS:PORT. To bind to all addresses, leave
the ADDRESS empty, e.g. "--bind :8686". To bind to a specific
IPv6 address, enclose the address in "[]", e.g.
"--bind [::1]:8686"'''
)
To bind to all addresses, leave the ADDRESS empty, e.g. "--bind :8686" or
"--bind ws://:8686". To bind to a specific IPv6 address, enclose the address in
"[]", e.g. "--bind [::1]:8686" or "--bind ws://[::1]:8686"
Note that the default Anonymous permissions are designed to not break existing
server instances when upgrading, but are not particularly secure defaults. If
you want to use authentication, it is recommended that you use "--anon-perms
@read" to only give anonymous users read access, or "--anon-perms @none" to
give un-authenticated users no access at all.
Setting "--anon-perms @all" or "--anon-perms @user-admin" is not allowed, since
this would allow anonymous users to manage all users accounts, which is a bad
idea.
If you are using user authentication, you should run your server in websockets
mode with an SSL terminating load balancer in front of it (as this server does
not implement SSL). Otherwise all usernames and passwords will be transmitted
in the clear. When configured this way, clients can connect using a secure
websocket, as in "wss://SERVER:PORT"
The following permissions are supported by the server:
@none - No permissions
@read - The ability to read equivalent hashes from the server
@report - The ability to report equivalent hashes to the server
@db-admin - Manage the hash database(s). This includes cleaning the
database, removing hashes, etc.
@user-admin - The ability to manage user accounts. This includes, creating
users, deleting users, resetting login tokens, and assigning
permissions.
@all - All possible permissions, including any that may be added
in the future
""",
)
parser.add_argument(
"-b",
"--bind",
default=os.environ.get("HASHSERVER_BIND", DEFAULT_BIND),
help='Bind address (default $HASHSERVER_BIND, "%(default)s")',
)
parser.add_argument(
"-d",
"--database",
default=os.environ.get("HASHSERVER_DB", "./hashserv.db"),
help='Database file (default $HASHSERVER_DB, "%(default)s")',
)
parser.add_argument(
"-l",
"--log",
default=os.environ.get("HASHSERVER_LOG_LEVEL", "WARNING"),
help='Set logging level (default $HASHSERVER_LOG_LEVEL, "%(default)s")',
)
parser.add_argument(
"-u",
"--upstream",
default=os.environ.get("HASHSERVER_UPSTREAM", None),
help="Upstream hashserv to pull hashes from ($HASHSERVER_UPSTREAM)",
)
parser.add_argument(
"-r",
"--read-only",
action="store_true",
help="Disallow write operations from clients ($HASHSERVER_READ_ONLY)",
)
parser.add_argument(
"--db-username",
default=os.environ.get("HASHSERVER_DB_USERNAME", None),
help="Database username ($HASHSERVER_DB_USERNAME)",
)
parser.add_argument(
"--db-password",
default=os.environ.get("HASHSERVER_DB_PASSWORD", None),
help="Database password ($HASHSERVER_DB_PASSWORD)",
)
parser.add_argument(
"--anon-perms",
metavar="PERM[,PERM[,...]]",
default=os.environ.get("HASHSERVER_ANON_PERMS", ",".join(DEFAULT_ANON_PERMS)),
help='Permissions to give anonymous users (default $HASHSERVER_ANON_PERMS, "%(default)s")',
)
parser.add_argument(
"--admin-user",
default=os.environ.get("HASHSERVER_ADMIN_USER", None),
help="Create default admin user with name ADMIN_USER ($HASHSERVER_ADMIN_USER)",
)
parser.add_argument(
"--admin-password",
default=os.environ.get("HASHSERVER_ADMIN_PASSWORD", None),
help="Create default admin user with password ADMIN_PASSWORD ($HASHSERVER_ADMIN_PASSWORD)",
)
parser.add_argument('-b', '--bind', default=DEFAULT_BIND, help='Bind address (default "%(default)s")')
parser.add_argument('-d', '--database', default='./hashserv.db', help='Database file (default "%(default)s")')
parser.add_argument('-l', '--log', default='WARNING', help='Set logging level')
parser.add_argument('-u', '--upstream', help='Upstream hashserv to pull hashes from')
parser.add_argument('-r', '--read-only', action='store_true', help='Disallow write operations from clients')
args = parser.parse_args()
logger = logging.getLogger("hashserv")
logger = logging.getLogger('hashserv')
level = getattr(logging, args.log.upper(), None)
if not isinstance(level, int):
raise ValueError("Invalid log level: %s (Try ERROR/WARNING/INFO/DEBUG)" % args.log)
raise ValueError('Invalid log level: %s' % args.log)
logger.setLevel(level)
console = logging.StreamHandler()
console.setLevel(level)
logger.addHandler(console)
read_only = (os.environ.get("HASHSERVER_READ_ONLY", "0") == "1") or args.read_only
if "," in args.anon_perms:
anon_perms = args.anon_perms.split(",")
else:
anon_perms = args.anon_perms.split()
server = hashserv.create_server(
args.bind,
args.database,
upstream=args.upstream,
read_only=read_only,
db_username=args.db_username,
db_password=args.db_password,
anon_perms=anon_perms,
admin_username=args.admin_user,
admin_password=args.admin_password,
)
server = hashserv.create_server(args.bind, args.database, upstream=args.upstream, read_only=args.read_only)
server.serve_forever()
return 0
if __name__ == "__main__":
if __name__ == '__main__':
try:
ret = main()
except Exception:
ret = 1
import traceback
traceback.print_exc()
sys.exit(ret)

View File

@@ -7,77 +7,49 @@
import os
import sys,logging
import argparse
import optparse
import warnings
warnings.simplefilter("default")
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), "lib"))
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)),'lib'))
import prserv
import prserv.serv
VERSION = "1.1.0"
__version__="1.0.0"
PRHOST_DEFAULT="0.0.0.0"
PRHOST_DEFAULT='0.0.0.0'
PRPORT_DEFAULT=8585
def main():
parser = argparse.ArgumentParser(
description="BitBake PR Server. Version=%s" % VERSION,
formatter_class=argparse.RawTextHelpFormatter)
parser = optparse.OptionParser(
version="Bitbake PR Service Core version %s, %%prog version %s" % (prserv.__version__, __version__),
usage = "%prog < --start | --stop > [options]")
parser.add_argument(
"-f",
"--file",
default="prserv.sqlite3",
help="database filename (default: prserv.sqlite3)",
)
parser.add_argument(
"-l",
"--log",
default="prserv.log",
help="log filename(default: prserv.log)",
)
parser.add_argument(
"--loglevel",
default="INFO",
help="logging level, i.e. CRITICAL, ERROR, WARNING, INFO, DEBUG",
)
parser.add_argument(
"--start",
action="store_true",
help="start daemon",
)
parser.add_argument(
"--stop",
action="store_true",
help="stop daemon",
)
parser.add_argument(
"--host",
help="ip address to bind",
default=PRHOST_DEFAULT,
)
parser.add_argument(
"--port",
type=int,
default=PRPORT_DEFAULT,
help="port number (default: 8585)",
)
parser.add_argument(
"-r",
"--read-only",
action="store_true",
help="open database in read-only mode",
)
parser.add_option("-f", "--file", help="database filename(default: prserv.sqlite3)", action="store",
dest="dbfile", type="string", default="prserv.sqlite3")
parser.add_option("-l", "--log", help="log filename(default: prserv.log)", action="store",
dest="logfile", type="string", default="prserv.log")
parser.add_option("--loglevel", help="logging level, i.e. CRITICAL, ERROR, WARNING, INFO, DEBUG",
action = "store", type="string", dest="loglevel", default = "INFO")
parser.add_option("--start", help="start daemon",
action="store_true", dest="start")
parser.add_option("--stop", help="stop daemon",
action="store_true", dest="stop")
parser.add_option("--host", help="ip address to bind", action="store",
dest="host", type="string", default=PRHOST_DEFAULT)
parser.add_option("--port", help="port number(default: 8585)", action="store",
dest="port", type="int", default=PRPORT_DEFAULT)
parser.add_option("-r", "--read-only", help="open database in read-only mode",
action="store_true")
args = parser.parse_args()
prserv.init_logger(os.path.abspath(args.log), args.loglevel)
options, args = parser.parse_args(sys.argv)
prserv.init_logger(os.path.abspath(options.logfile),options.loglevel)
if args.start:
ret=prserv.serv.start_daemon(args.file, args.host, args.port, os.path.abspath(args.log), args.read_only)
elif args.stop:
ret=prserv.serv.stop_daemon(args.host, args.port)
if options.start:
ret=prserv.serv.start_daemon(options.dbfile, options.host, options.port,os.path.abspath(options.logfile), options.read_only)
elif options.stop:
ret=prserv.serv.stop_daemon(options.host, options.port)
else:
ret=parser.print_help()
return ret

View File

@@ -183,7 +183,7 @@ def fork_off_task(cfg, data, databuilder, workerdata, extraconfigdata, runtask):
if 'fakeroot' in taskdep and taskname in taskdep['fakeroot'] and not dry_run:
fakeroot = True
envvars = (runtask['fakerootenv'] or "").split()
for key, value in (var.split('=',1) for var in envvars):
for key, value in (var.split('=') for var in envvars):
envbackup[key] = os.environ.get(key)
os.environ[key] = value
fakeenv[key] = value
@@ -195,7 +195,7 @@ def fork_off_task(cfg, data, databuilder, workerdata, extraconfigdata, runtask):
(fn, taskname, ', '.join(fakedirs)))
else:
envvars = (runtask['fakerootnoenv'] or "").split()
for key, value in (var.split('=',1) for var in envvars):
for key, value in (var.split('=') for var in envvars):
envbackup[key] = os.environ.get(key)
os.environ[key] = value
fakeenv[key] = value
@@ -237,13 +237,11 @@ def fork_off_task(cfg, data, databuilder, workerdata, extraconfigdata, runtask):
# Let SIGHUP exit as SIGTERM
signal.signal(signal.SIGHUP, sigterm_handler)
# No stdin & stdout
# stdout is used as a status report channel and must not be used by child processes.
dumbio = os.open(os.devnull, os.O_RDWR)
os.dup2(dumbio, sys.stdin.fileno())
os.dup2(dumbio, sys.stdout.fileno())
# No stdin
newsi = os.open(os.devnull, os.O_RDWR)
os.dup2(newsi, sys.stdin.fileno())
if umask is not None:
if umask:
os.umask(umask)
try:
@@ -307,10 +305,6 @@ def fork_off_task(cfg, data, databuilder, workerdata, extraconfigdata, runtask):
if not quieterrors:
logger.critical(traceback.format_exc())
os._exit(1)
sys.stdout.flush()
sys.stderr.flush()
try:
if dry_run:
return 0
@@ -439,30 +433,18 @@ class BitbakeWorker(object):
while self.process_waitpid():
continue
def handle_item(self, item, func):
opening_tag = b"<" + item + b">"
if not self.queue.startswith(opening_tag):
return
tag_len = len(opening_tag)
if len(self.queue) < tag_len + 4:
# we need to receive more data
return
header = self.queue[tag_len:tag_len + 4]
payload_len = int.from_bytes(header, 'big')
# closing tag has length (tag_len + 1)
if len(self.queue) < tag_len * 2 + 1 + payload_len:
# we need to receive more data
return
index = self.queue.find(b"</" + item + b">")
if index != -1:
try:
func(self.queue[(tag_len + 4):index])
except pickle.UnpicklingError:
workerlog_write("Unable to unpickle data: %s\n" % ":".join("{:02x}".format(c) for c in self.queue))
raise
self.queue = self.queue[(index + len(b"</") + len(item) + len(b">")):]
if self.queue.startswith(b"<" + item + b">"):
index = self.queue.find(b"</" + item + b">")
while index != -1:
try:
func(self.queue[(len(item) + 2):index])
except pickle.UnpicklingError:
workerlog_write("Unable to unpickle data: %s\n" % ":".join("{:02x}".format(c) for c in self.queue))
raise
self.queue = self.queue[(index + len(item) + 3):]
index = self.queue.find(b"</" + item + b">")
def handle_cookercfg(self, data):
self.cookercfg = pickle.loads(data)

View File

@@ -24,17 +24,15 @@ warnings.simplefilter("default")
version = 1.0
git_cmd = ['git', '-c', 'safe.bareRepository=all']
def main():
if sys.version_info < (3, 4, 0):
sys.exit('Python 3.4 or greater is required')
git_dir = check_output(git_cmd + ['rev-parse', '--git-dir']).rstrip()
git_dir = check_output(['git', 'rev-parse', '--git-dir']).rstrip()
shallow_file = os.path.join(git_dir, 'shallow')
if os.path.exists(shallow_file):
try:
check_output(git_cmd + ['fetch', '--unshallow'])
check_output(['git', 'fetch', '--unshallow'])
except subprocess.CalledProcessError:
try:
os.unlink(shallow_file)
@@ -43,21 +41,21 @@ def main():
raise
args = process_args()
revs = check_output(git_cmd + ['rev-list'] + args.revisions).splitlines()
revs = check_output(['git', 'rev-list'] + args.revisions).splitlines()
make_shallow(shallow_file, args.revisions, args.refs)
ref_revs = check_output(git_cmd + ['rev-list'] + args.refs).splitlines()
ref_revs = check_output(['git', 'rev-list'] + args.refs).splitlines()
remaining_history = set(revs) & set(ref_revs)
for rev in remaining_history:
if check_output(git_cmd + ['rev-parse', '{}^@'.format(rev)]):
if check_output(['git', 'rev-parse', '{}^@'.format(rev)]):
sys.exit('Error: %s was not made shallow' % rev)
filter_refs(args.refs)
if args.shrink:
shrink_repo(git_dir)
subprocess.check_call(git_cmd + ['fsck', '--unreachable'])
subprocess.check_call(['git', 'fsck', '--unreachable'])
def process_args():
@@ -74,12 +72,12 @@ def process_args():
args = parser.parse_args()
if args.refs:
args.refs = check_output(git_cmd + ['rev-parse', '--symbolic-full-name'] + args.refs).splitlines()
args.refs = check_output(['git', 'rev-parse', '--symbolic-full-name'] + args.refs).splitlines()
else:
args.refs = get_all_refs(lambda r, t, tt: t == 'commit' or tt == 'commit')
args.refs = list(filter(lambda r: not r.endswith('/HEAD'), args.refs))
args.revisions = check_output(git_cmd + ['rev-parse'] + ['%s^{}' % i for i in args.revisions]).splitlines()
args.revisions = check_output(['git', 'rev-parse'] + ['%s^{}' % i for i in args.revisions]).splitlines()
return args
@@ -97,7 +95,7 @@ def make_shallow(shallow_file, revisions, refs):
def get_all_refs(ref_filter=None):
"""Return all the existing refs in this repository, optionally filtering the refs."""
ref_output = check_output(git_cmd + ['for-each-ref', '--format=%(refname)\t%(objecttype)\t%(*objecttype)'])
ref_output = check_output(['git', 'for-each-ref', '--format=%(refname)\t%(objecttype)\t%(*objecttype)'])
ref_split = [tuple(iter_extend(l.rsplit('\t'), 3)) for l in ref_output.splitlines()]
if ref_filter:
ref_split = (e for e in ref_split if ref_filter(*e))
@@ -115,7 +113,7 @@ def filter_refs(refs):
all_refs = get_all_refs()
to_remove = set(all_refs) - set(refs)
if to_remove:
check_output(['xargs', '-0', '-n', '1'] + git_cmd + ['update-ref', '-d', '--no-deref'],
check_output(['xargs', '-0', '-n', '1', 'git', 'update-ref', '-d', '--no-deref'],
input=''.join(l + '\0' for l in to_remove))
@@ -128,7 +126,7 @@ def follow_history_intersections(revisions, refs):
if rev in seen:
continue
parents = check_output(git_cmd + ['rev-parse', '%s^@' % rev]).splitlines()
parents = check_output(['git', 'rev-parse', '%s^@' % rev]).splitlines()
yield rev
seen.add(rev)
@@ -136,12 +134,12 @@ def follow_history_intersections(revisions, refs):
if not parents:
continue
check_refs = check_output(git_cmd + ['merge-base', '--independent'] + sorted(refs)).splitlines()
check_refs = check_output(['git', 'merge-base', '--independent'] + sorted(refs)).splitlines()
for parent in parents:
for ref in check_refs:
print("Checking %s vs %s" % (parent, ref))
try:
merge_base = check_output(git_cmd + ['merge-base', parent, ref]).rstrip()
merge_base = check_output(['git', 'merge-base', parent, ref]).rstrip()
except subprocess.CalledProcessError:
continue
else:
@@ -161,14 +159,14 @@ def iter_except(func, exception, start=None):
def shrink_repo(git_dir):
"""Shrink the newly shallow repository, removing the unreachable objects."""
subprocess.check_call(git_cmd + ['reflog', 'expire', '--expire-unreachable=now', '--all'])
subprocess.check_call(git_cmd + ['repack', '-ad'])
subprocess.check_call(['git', 'reflog', 'expire', '--expire-unreachable=now', '--all'])
subprocess.check_call(['git', 'repack', '-ad'])
try:
os.unlink(os.path.join(git_dir, 'objects', 'info', 'alternates'))
except OSError as exc:
if exc.errno != errno.ENOENT:
raise
subprocess.check_call(git_cmd + ['prune', '--expire', 'now'])
subprocess.check_call(['git', 'prune', '--expire', 'now'])
if __name__ == '__main__':

View File

@@ -84,7 +84,7 @@ webserverStartAll()
echo "Starting webserver..."
$MANAGE runserver --noreload "$ADDR_PORT" \
</dev/null >>${TOASTER_LOGS_DIR}/web.log 2>&1 \
</dev/null >>${BUILDDIR}/toaster_web.log 2>&1 \
& echo $! >${BUILDDIR}/.toastermain.pid
sleep 1
@@ -181,14 +181,6 @@ WEBSERVER=1
export TOASTER_BUILDSERVER=1
ADDR_PORT="localhost:8000"
TOASTERDIR=`dirname $BUILDDIR`
# ${BUILDDIR}/toaster_logs/ became the default location for toaster logs
# This is needed for implemented django-log-viewer: https://pypi.org/project/django-log-viewer/
# If the directory does not exist, create it.
TOASTER_LOGS_DIR="${BUILDDIR}/toaster_logs/"
if [ ! -d $TOASTER_LOGS_DIR ]
then
mkdir $TOASTER_LOGS_DIR
fi
unset CMD
for param in $*; do
case $param in
@@ -307,7 +299,7 @@ case $CMD in
export BITBAKE_UI='toasterui'
if [ $TOASTER_BUILDSERVER -eq 1 ] ; then
$MANAGE runbuilds \
</dev/null >>${TOASTER_LOGS_DIR}/toaster_runbuilds.log 2>&1 \
</dev/null >>${BUILDDIR}/toaster_runbuilds.log 2>&1 \
& echo $! >${BUILDDIR}/.runbuilds.pid
else
echo "Toaster build server not started."

View File

@@ -30,23 +30,79 @@ sys.path.insert(0, join(dirname(dirname(abspath(__file__))), 'lib'))
import bb.cooker
from bb.ui import toasterui
from bb.ui import eventreplay
class EventPlayer:
"""Emulate a connection to a bitbake server."""
def __init__(self, eventfile, variables):
self.eventfile = eventfile
self.variables = variables
self.eventmask = []
def waitEvent(self, _timeout):
"""Read event from the file."""
line = self.eventfile.readline().strip()
if not line:
return
try:
event_str = json.loads(line)['vars'].encode('utf-8')
event = pickle.loads(codecs.decode(event_str, 'base64'))
event_name = "%s.%s" % (event.__module__, event.__class__.__name__)
if event_name not in self.eventmask:
return
return event
except ValueError as err:
print("Failed loading ", line)
raise err
def runCommand(self, command_line):
"""Emulate running a command on the server."""
name = command_line[0]
if name == "getVariable":
var_name = command_line[1]
variable = self.variables.get(var_name)
if variable:
return variable['v'], None
return None, "Missing variable %s" % var_name
elif name == "getAllKeysWithFlags":
dump = {}
flaglist = command_line[1]
for key, val in self.variables.items():
try:
if not key.startswith("__"):
dump[key] = {
'v': val['v'],
'history' : val['history'],
}
for flag in flaglist:
dump[key][flag] = val[flag]
except Exception as err:
print(err)
return (dump, None)
elif name == 'setEventMask':
self.eventmask = command_line[-1]
return True, None
else:
raise Exception("Command %s not implemented" % command_line[0])
def getEventHandle(self):
"""
This method is called by toasterui.
The return value is passed to self.runCommand but not used there.
"""
pass
def main(argv):
with open(argv[-1]) as eventfile:
# load variables from the first line
variables = None
while line := eventfile.readline().strip():
try:
variables = json.loads(line)['allvariables']
break
except (KeyError, json.JSONDecodeError):
continue
if not variables:
sys.exit("Cannot find allvariables entry in event log file %s" % argv[-1])
eventfile.seek(0)
variables = json.loads(eventfile.readline().strip())['allvariables']
params = namedtuple('ConfigParams', ['observe_only'])(True)
player = eventreplay.EventPlayer(eventfile, variables)
player = EventPlayer(eventfile, variables)
return toasterui.main(player, player, params)

View File

@@ -63,14 +63,13 @@ syn region bbVarFlagFlag matchgroup=bbArrayBrackets start="\[" end="\]\s*
" Includes and requires
syn keyword bbInclude inherit include require contained
syn match bbIncludeRest ".*$" contained contains=bbString,bbVarDeref,bbVarPyValue
syn match bbIncludeRest ".*$" contained contains=bbString,bbVarDeref
syn match bbIncludeLine "^\(inherit\|include\|require\)\s\+" contains=bbInclude nextgroup=bbIncludeRest
" Add taks and similar
syn keyword bbStatement addtask deltask addhandler after before EXPORT_FUNCTIONS contained
syn match bbStatementRest /[^\\]*$/ skipwhite contained contains=bbStatement,bbVarDeref,bbVarPyValue
syn region bbStatementRestCont start=/.*\\$/ end=/^[^\\]*$/ contained contains=bbStatement,bbVarDeref,bbVarPyValue,bbContinue keepend
syn match bbStatementLine "^\(addtask\|deltask\|addhandler\|after\|before\|EXPORT_FUNCTIONS\)\s\+" contains=bbStatement nextgroup=bbStatementRest,bbStatementRestCont
syn match bbStatementRest ".*$" skipwhite contained contains=bbStatement
syn match bbStatementLine "^\(addtask\|deltask\|addhandler\|after\|before\|EXPORT_FUNCTIONS\)\s\+" contains=bbStatement nextgroup=bbStatementRest
" OE Important Functions
syn keyword bbOEFunctions do_fetch do_unpack do_patch do_configure do_compile do_stage do_install do_package contained
@@ -123,7 +122,6 @@ hi def link bbPyFlag Type
hi def link bbPyDef Statement
hi def link bbStatement Statement
hi def link bbStatementRest Identifier
hi def link bbStatementRestCont Identifier
hi def link bbOEFunctions Special
hi def link bbVarPyValue PreProc
hi def link bbOverrideOperator Operator

View File

@@ -586,11 +586,10 @@ or possibly those defined in the metadata/signature handler itself. The
simplest parameter to pass is "none", which causes a set of signature
information to be written out into ``STAMPS_DIR`` corresponding to the
targets specified. The other currently available parameter is
"printdiff", which causes BitBake to try to establish the most recent
"printdiff", which causes BitBake to try to establish the closest
signature match it can (e.g. in the sstate cache) and then run
compare the matched signatures to determine the stamps and delta
where these two stamp trees diverge. This can be used to determine why
tasks need to be re-run in situations where that is not expected.
``bitbake-diffsigs`` over the matches to determine the stamps and delta
where these two stamp trees diverge.
.. note::

View File

@@ -1,91 +0,0 @@
.. SPDX-License-Identifier: CC-BY-2.5
================
Variable Context
================
|
Variables might only have an impact or can be used in certain contexts. Some
should only be used in global files like ``.conf``, while others are intended only
for local files like ``.bb``. This chapter aims to describe some important variable
contexts.
.. _ref-varcontext-configuration:
BitBake's own configuration
===========================
Variables starting with ``BB_`` usually configure the behaviour of BitBake itself.
For example, one could configure:
- System resources, like disk space to be used (:term:`BB_DISKMON_DIRS`),
or the number of tasks to be run in parallel by BitBake (:term:`BB_NUMBER_THREADS`).
- How the fetchers shall behave, e.g., :term:`BB_FETCH_PREMIRRORONLY` is used
by BitBake to determine if BitBake's fetcher shall search only
:term:`PREMIRRORS` for files.
Those variables are usually configured globally.
BitBake configuration
=====================
There are variables:
- Like :term:`B` or :term:`T`, that are used to specify directories used by
BitBake during the build of a particular recipe. Those variables are
specified in ``bitbake.conf``. Some, like :term:`B`, are quite often
overwritten in recipes.
- Starting with ``FAKEROOT``, to configure how the ``fakeroot`` command is
handled. Those are usually set by ``bitbake.conf`` and might get adapted in a
``bbclass``.
- Detailing where BitBake will store and fetch information from, for
data reuse between build runs like :term:`CACHE`, :term:`DL_DIR` or
:term:`PERSISTENT_DIR`. Those are usually global.
Layers and files
================
Variables starting with ``LAYER`` configure how BitBake handles layers.
Additionally, variables starting with ``BB`` configure how layers and files are
handled. For example:
- :term:`LAYERDEPENDS` is used to configure on which layers a given layer
depends.
- The configured layers are contained in :term:`BBLAYERS` and files in
:term:`BBFILES`.
Those variables are often used in the files ``layer.conf`` and ``bblayers.conf``.
Recipes and packages
====================
Variables handling recipes and packages can be split into:
- :term:`PN`, :term:`PV` or :term:`PF` for example, contain information about
the name or revision of a recipe or package. Usually, the default set in
``bitbake.conf`` is used, but those are from time to time overwritten in
recipes.
- :term:`SUMMARY`, :term:`DESCRIPTION`, :term:`LICENSE` or :term:`HOMEPAGE`
contain the expected information and should be set specifically for every
recipe.
- In recipes, variables are also used to control build and runtime
dependencies between recipes/packages with other recipes/packages. The
most common should be: :term:`PROVIDES`, :term:`RPROVIDES`, :term:`DEPENDS`,
and :term:`RDEPENDS`.
- There are further variables starting with ``SRC`` that specify the sources in
a recipe like :term:`SRC_URI` or :term:`SRCDATE`. Those are also usually set
in recipes.
- Which version or provider of a recipe should be given preference when
multiple recipes would provide the same item, is controlled by variables
starting with ``PREFERRED_``. Those are normally set in the configuration
files of a ``MACHINE`` or ``DISTRO``.

View File

@@ -424,7 +424,7 @@ overview of their function and contents.
Example usage::
BB_HASHSERVE_UPSTREAM = "hashserv.yoctoproject.org:8686"
BB_HASHSERVE_UPSTREAM = "hashserv.yocto.io:8687"
:term:`BB_INVALIDCONF`
Used in combination with the ``ConfigParsed`` event to trigger
@@ -432,15 +432,6 @@ overview of their function and contents.
``ConfigParsed`` event can set the variable to trigger the re-parse.
You must be careful to avoid recursive loops with this functionality.
:term:`BB_LOADFACTOR_MAX`
Setting this to a value will cause BitBake to check the system load
average before executing new tasks. If the load average is above the
number of CPUs multiplied by this factor, no new task will be started
unless there is no task executing. A value of "1.5" has been found to
work reasonably well. This is helpful for systems which don't have
pressure regulation enabled; pressure values are more granular and take
precedence over the load factor.
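Example usage (the value is only a suggestion, as noted above)::

   BB_LOADFACTOR_MAX = "1.5"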
:term:`BB_LOGCONFIG`
Specifies the name of a config file that contains the user logging
configuration. See

View File

@@ -13,7 +13,6 @@ BitBake User Manual
bitbake-user-manual/bitbake-user-manual-intro
bitbake-user-manual/bitbake-user-manual-execution
bitbake-user-manual/bitbake-user-manual-metadata
bitbake-user-manual/bitbake-user-manual-ref-variables-context
bitbake-user-manual/bitbake-user-manual-fetching
bitbake-user-manual/bitbake-user-manual-ref-variables
bitbake-user-manual/bitbake-user-manual-hello

View File

@@ -36,9 +36,8 @@ class COWDictMeta(COWMeta):
__marker__ = tuple()
def __str__(cls):
ignored_keys = set(["__count__", "__doc__", "__module__", "__firstlineno__", "__static_attributes__"])
keys = set(cls.__dict__.keys()) - ignored_keys
return "<COWDict Level: %i Current Keys: %i>" % (cls.__count__, len(keys))
# FIXME: I have magic numbers!
return "<COWDict Level: %i Current Keys: %i>" % (cls.__count__, len(cls.__dict__) - 3)
__repr__ = __str__
@@ -162,9 +161,8 @@ class COWDictMeta(COWMeta):
class COWSetMeta(COWDictMeta):
def __str__(cls):
ignored_keys = set(["__count__", "__doc__", "__module__", "__firstlineno__", "__static_attributes__"])
keys = set(cls.__dict__.keys()) - ignored_keys
return "<COWSet Level: %i Current Keys: %i>" % (cls.__count__, len(keys))
# FIXME: I have magic numbers!
return "<COWSet Level: %i Current Keys: %i>" % (cls.__count__, len(cls.__dict__) - 3)
__repr__ = __str__

View File

@@ -9,19 +9,12 @@
# SPDX-License-Identifier: GPL-2.0-only
#
__version__ = "2.8.1"
__version__ = "2.6.0"
import sys
if sys.version_info < (3, 8, 0):
raise RuntimeError("Sorry, python 3.8.0 or later is required for this version of bitbake")
if sys.version_info < (3, 10, 0):
# With python 3.8 and 3.9, we see errors of "libgcc_s.so.1 must be installed for pthread_cancel to work"
# https://stackoverflow.com/questions/64797838/libgcc-s-so-1-must-be-installed-for-pthread-cancel-to-work
# https://bugs.ams1.psf.io/issue42888
# so ensure libgcc_s is loaded early on
import ctypes
libgcc_s = ctypes.CDLL('libgcc_s.so.1')
class BBHandledException(Exception):
"""
@@ -36,35 +29,6 @@ class BBHandledException(Exception):
import os
import logging
from collections import namedtuple
import multiprocessing as mp
# Python 3.14 changes the default multiprocessing context from "fork" to
# "forkserver". However, bitbake heavily relies on "fork" behavior to
# efficiently pass data to the child processes. Places that need this should do:
# from bb import multiprocessing
# in place of
# import multiprocessing
class MultiprocessingContext(object):
"""
Multiprocessing proxy object that uses the "fork" context for a property if
available, otherwise goes to the main multiprocessing module. This allows
it to be a drop-in replacement for the multiprocessing module, but use the
fork context
"""
def __init__(self):
super().__setattr__("_ctx", mp.get_context("fork"))
def __getattr__(self, name):
if hasattr(self._ctx, name):
return getattr(self._ctx, name)
return getattr(mp, name)
def __setattr__(self, name, value):
raise AttributeError(f"Unable to set attribute {name}")
multiprocessing = MultiprocessingContext()
class NullHandler(logging.Handler):
@@ -256,14 +220,3 @@ def deprecate_import(current, modulename, fromlist, renames = None):
setattr(sys.modules[current], newname, newobj)
TaskData = namedtuple("TaskData", [
"pn",
"taskname",
"fn",
"deps",
"provides",
"taskhash",
"unihash",
"hashfn",
"taskhash_deps",
])

View File

@@ -4,13 +4,30 @@
# SPDX-License-Identifier: GPL-2.0-only
#
import itertools
import json
from .client import AsyncClient, Client, ClientPool
from .serv import AsyncServer, AsyncServerConnection
from .connection import DEFAULT_MAX_CHUNK
from .exceptions import (
ClientError,
ServerError,
ConnectionClosedError,
InvokeError,
)
# The Python async server defaults to a 64K receive buffer, so we hardcode our
# maximum chunk size. It would be better if the client and server reported to
# each other what the maximum chunk sizes were, but that will slow down the
# connection setup with a round trip delay so I'd rather not do that unless it
# is necessary
DEFAULT_MAX_CHUNK = 32 * 1024
def chunkify(msg, max_chunk):
if len(msg) < max_chunk - 1:
yield ''.join((msg, "\n"))
else:
yield ''.join((json.dumps({
'chunk-stream': None
}), "\n"))
args = [iter(msg)] * (max_chunk - 1)
for m in map(''.join, itertools.zip_longest(*args, fillvalue='')):
yield ''.join(itertools.chain(m, "\n"))
yield "\n"
from .client import AsyncClient, Client
from .serv import AsyncServer, AsyncServerConnection, ClientError, ServerError

View File

@@ -10,59 +10,22 @@ import json
import os
import socket
import sys
import re
import contextlib
from threading import Thread
from .connection import StreamConnection, WebsocketConnection, DEFAULT_MAX_CHUNK
from .exceptions import ConnectionClosedError, InvokeError
from . import chunkify, DEFAULT_MAX_CHUNK
UNIX_PREFIX = "unix://"
WS_PREFIX = "ws://"
WSS_PREFIX = "wss://"
ADDR_TYPE_UNIX = 0
ADDR_TYPE_TCP = 1
ADDR_TYPE_WS = 2
def parse_address(addr):
if addr.startswith(UNIX_PREFIX):
return (ADDR_TYPE_UNIX, (addr[len(UNIX_PREFIX) :],))
elif addr.startswith(WS_PREFIX) or addr.startswith(WSS_PREFIX):
return (ADDR_TYPE_WS, (addr,))
else:
m = re.match(r"\[(?P<host>[^\]]*)\]:(?P<port>\d+)$", addr)
if m is not None:
host = m.group("host")
port = m.group("port")
else:
host, port = addr.split(":")
return (ADDR_TYPE_TCP, (host, int(port)))
class AsyncClient(object):
def __init__(
self,
proto_name,
proto_version,
logger,
timeout=30,
server_headers=False,
headers={},
):
self.socket = None
def __init__(self, proto_name, proto_version, logger, timeout=30):
self.reader = None
self.writer = None
self.max_chunk = DEFAULT_MAX_CHUNK
self.proto_name = proto_name
self.proto_version = proto_version
self.logger = logger
self.timeout = timeout
self.needs_server_headers = server_headers
self.server_headers = {}
self.headers = headers
async def connect_tcp(self, address, port):
async def connect_sock():
reader, writer = await asyncio.open_connection(address, port)
return StreamConnection(reader, writer, self.timeout, self.max_chunk)
return await asyncio.open_connection(address, port)
self._connect_sock = connect_sock
@@ -77,63 +40,27 @@ class AsyncClient(object):
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM, 0)
sock.connect(os.path.basename(path))
finally:
os.chdir(cwd)
reader, writer = await asyncio.open_unix_connection(sock=sock)
return StreamConnection(reader, writer, self.timeout, self.max_chunk)
self._connect_sock = connect_sock
async def connect_websocket(self, uri):
import websockets
async def connect_sock():
websocket = await websockets.connect(
uri,
ping_interval=None,
open_timeout=self.timeout,
)
return WebsocketConnection(websocket, self.timeout)
os.chdir(cwd)
return await asyncio.open_unix_connection(sock=sock)
self._connect_sock = connect_sock
async def setup_connection(self):
# Send headers
await self.socket.send("%s %s" % (self.proto_name, self.proto_version))
await self.socket.send(
"needs-headers: %s" % ("true" if self.needs_server_headers else "false")
)
for k, v in self.headers.items():
await self.socket.send("%s: %s" % (k, v))
# End of headers
await self.socket.send("")
self.server_headers = {}
if self.needs_server_headers:
while True:
line = await self.socket.recv()
if not line:
# End headers
break
tag, value = line.split(":", 1)
self.server_headers[tag.lower()] = value.strip()
async def get_header(self, tag, default):
await self.connect()
return self.server_headers.get(tag, default)
s = '%s %s\n\n' % (self.proto_name, self.proto_version)
self.writer.write(s.encode("utf-8"))
await self.writer.drain()
async def connect(self):
if self.socket is None:
self.socket = await self._connect_sock()
if self.reader is None or self.writer is None:
(self.reader, self.writer) = await self._connect_sock()
await self.setup_connection()
async def disconnect(self):
if self.socket is not None:
await self.socket.close()
self.socket = None
async def close(self):
await self.disconnect()
self.reader = None
if self.writer is not None:
self.writer.close()
self.writer = None
async def _send_wrapper(self, proc):
count = 0
@@ -144,7 +71,6 @@ class AsyncClient(object):
except (
OSError,
ConnectionError,
ConnectionClosedError,
json.JSONDecodeError,
UnicodeDecodeError,
) as e:
@@ -156,27 +82,49 @@ class AsyncClient(object):
await self.close()
count += 1
def check_invoke_error(self, msg):
if isinstance(msg, dict) and "invoke-error" in msg:
raise InvokeError(msg["invoke-error"]["message"])
async def send_message(self, msg):
async def get_line():
try:
line = await asyncio.wait_for(self.reader.readline(), self.timeout)
except asyncio.TimeoutError:
raise ConnectionError("Timed out waiting for server")
if not line:
raise ConnectionError("Connection closed")
line = line.decode("utf-8")
if not line.endswith("\n"):
raise ConnectionError("Bad message %r" % (line))
return line
async def invoke(self, msg):
async def proc():
await self.socket.send_message(msg)
return await self.socket.recv_message()
for c in chunkify(json.dumps(msg), self.max_chunk):
self.writer.write(c.encode("utf-8"))
await self.writer.drain()
result = await self._send_wrapper(proc)
self.check_invoke_error(result)
return result
l = await get_line()
m = json.loads(l)
if m and "chunk-stream" in m:
lines = []
while True:
l = (await get_line()).rstrip("\n")
if not l:
break
lines.append(l)
m = json.loads("".join(lines))
return m
return await self._send_wrapper(proc)
async def ping(self):
return await self.invoke({"ping": {}})
async def __aenter__(self):
return self
async def __aexit__(self, exc_type, exc_value, traceback):
await self.close()
return await self.send_message(
{'ping': {}}
)
class Client(object):
@@ -194,7 +142,7 @@ class Client(object):
# required (but harmless) with it.
asyncio.set_event_loop(self.loop)
self._add_methods("connect_tcp", "ping")
self._add_methods('connect_tcp', 'ping')
@abc.abstractmethod
def _get_async_client(self):
@@ -223,95 +171,8 @@ class Client(object):
def max_chunk(self, value):
self.client.max_chunk = value
def disconnect(self):
def close(self):
self.loop.run_until_complete(self.client.close())
def close(self):
if self.loop:
self.loop.run_until_complete(self.client.close())
if sys.version_info >= (3, 6):
self.loop.run_until_complete(self.loop.shutdown_asyncgens())
self.loop.close()
self.loop = None
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
self.close()
return False
class ClientPool(object):
def __init__(self, max_clients):
self.avail_clients = []
self.num_clients = 0
self.max_clients = max_clients
self.loop = None
self.client_condition = None
@abc.abstractmethod
async def _new_client(self):
raise NotImplementedError("Must be implemented in derived class")
def close(self):
if self.client_condition:
self.client_condition = None
if self.loop:
self.loop.run_until_complete(self.__close_clients())
if sys.version_info >= (3, 6):
self.loop.run_until_complete(self.loop.shutdown_asyncgens())
self.loop.close()
self.loop = None
def run_tasks(self, tasks):
if not self.loop:
self.loop = asyncio.new_event_loop()
thread = Thread(target=self.__thread_main, args=(tasks,))
thread.start()
thread.join()
@contextlib.asynccontextmanager
async def get_client(self):
async with self.client_condition:
if self.avail_clients:
client = self.avail_clients.pop()
elif self.num_clients < self.max_clients:
self.num_clients += 1
client = await self._new_client()
else:
while not self.avail_clients:
await self.client_condition.wait()
client = self.avail_clients.pop()
try:
yield client
finally:
async with self.client_condition:
self.avail_clients.append(client)
self.client_condition.notify()
def __thread_main(self, tasks):
async def process_task(task):
async with self.get_client() as client:
await task(client)
asyncio.set_event_loop(self.loop)
if not self.client_condition:
self.client_condition = asyncio.Condition()
tasks = [process_task(t) for t in tasks]
self.loop.run_until_complete(asyncio.gather(*tasks))
async def __close_clients(self):
for c in self.avail_clients:
await c.close()
self.avail_clients = []
self.num_clients = 0
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
self.close()
return False
self.loop.close()

View File

@@ -1,146 +0,0 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
import asyncio
import itertools
import json
from datetime import datetime
from .exceptions import ClientError, ConnectionClosedError
# The Python async server defaults to a 64K receive buffer, so we hardcode our
# maximum chunk size. It would be better if the client and server reported to
# each other what the maximum chunk sizes were, but that will slow down the
# connection setup with a round trip delay so I'd rather not do that unless it
# is necessary
DEFAULT_MAX_CHUNK = 32 * 1024
def chunkify(msg, max_chunk):
if len(msg) < max_chunk - 1:
yield "".join((msg, "\n"))
else:
yield "".join((json.dumps({"chunk-stream": None}), "\n"))
args = [iter(msg)] * (max_chunk - 1)
for m in map("".join, itertools.zip_longest(*args, fillvalue="")):
yield "".join(itertools.chain(m, "\n"))
yield "\n"
def json_serialize(obj):
if isinstance(obj, datetime):
return obj.isoformat()
raise TypeError("Type %s not serializeable" % type(obj))
class StreamConnection(object):
def __init__(self, reader, writer, timeout, max_chunk=DEFAULT_MAX_CHUNK):
self.reader = reader
self.writer = writer
self.timeout = timeout
self.max_chunk = max_chunk
@property
def address(self):
return self.writer.get_extra_info("peername")
async def send_message(self, msg):
for c in chunkify(json.dumps(msg, default=json_serialize), self.max_chunk):
self.writer.write(c.encode("utf-8"))
await self.writer.drain()
async def recv_message(self):
l = await self.recv()
m = json.loads(l)
if not m:
return m
if "chunk-stream" in m:
lines = []
while True:
l = await self.recv()
if not l:
break
lines.append(l)
m = json.loads("".join(lines))
return m
async def send(self, msg):
self.writer.write(("%s\n" % msg).encode("utf-8"))
await self.writer.drain()
async def recv(self):
if self.timeout < 0:
line = await self.reader.readline()
else:
try:
line = await asyncio.wait_for(self.reader.readline(), self.timeout)
except asyncio.TimeoutError:
raise ConnectionError("Timed out waiting for data")
if not line:
raise ConnectionClosedError("Connection closed")
line = line.decode("utf-8")
if not line.endswith("\n"):
raise ConnectionError("Bad message %r" % (line))
return line.rstrip()
async def close(self):
self.reader = None
if self.writer is not None:
self.writer.close()
self.writer = None
class WebsocketConnection(object):
def __init__(self, socket, timeout):
self.socket = socket
self.timeout = timeout
@property
def address(self):
return ":".join(str(s) for s in self.socket.remote_address)
async def send_message(self, msg):
await self.send(json.dumps(msg, default=json_serialize))
async def recv_message(self):
m = await self.recv()
return json.loads(m)
async def send(self, msg):
import websockets.exceptions
try:
await self.socket.send(msg)
except websockets.exceptions.ConnectionClosed:
raise ConnectionClosedError("Connection closed")
async def recv(self):
import websockets.exceptions
try:
if self.timeout < 0:
return await self.socket.recv()
try:
return await asyncio.wait_for(self.socket.recv(), self.timeout)
except asyncio.TimeoutError:
raise ConnectionError("Timed out waiting for data")
except websockets.exceptions.ConnectionClosed:
raise ConnectionClosedError("Connection closed")
async def close(self):
if self.socket is not None:
await self.socket.close()
self.socket = None

View File

@@ -1,21 +0,0 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
class ClientError(Exception):
pass
class InvokeError(Exception):
pass
class ServerError(Exception):
pass
class ConnectionClosedError(Exception):
pass

View File

@@ -11,334 +11,242 @@ import os
import signal
import socket
import sys
from bb import multiprocessing
import logging
from .connection import StreamConnection, WebsocketConnection
from .exceptions import ClientError, ServerError, ConnectionClosedError, InvokeError
import multiprocessing
from . import chunkify, DEFAULT_MAX_CHUNK
class ClientLoggerAdapter(logging.LoggerAdapter):
def process(self, msg, kwargs):
return f"[Client {self.extra['address']}] {msg}", kwargs
class ClientError(Exception):
pass
class ServerError(Exception):
pass
class AsyncServerConnection(object):
# If a handler returns this object (e.g. `return self.NO_RESPONSE`), no
# return message will automatically be sent back to the client
NO_RESPONSE = object()
def __init__(self, socket, proto_name, logger):
self.socket = socket
def __init__(self, reader, writer, proto_name, logger):
self.reader = reader
self.writer = writer
self.proto_name = proto_name
self.max_chunk = DEFAULT_MAX_CHUNK
self.handlers = {
"ping": self.handle_ping,
'chunk-stream': self.handle_chunk,
'ping': self.handle_ping,
}
self.logger = ClientLoggerAdapter(
logger,
{
"address": socket.address,
},
)
self.client_headers = {}
async def close(self):
await self.socket.close()
async def handle_headers(self, headers):
return {}
self.logger = logger
async def process_requests(self):
try:
self.logger.info("Client %r connected" % (self.socket.address,))
self.addr = self.writer.get_extra_info('peername')
self.logger.debug('Client %r connected' % (self.addr,))
# Read protocol and version
client_protocol = await self.socket.recv()
client_protocol = await self.reader.readline()
if not client_protocol:
return
(client_proto_name, client_proto_version) = client_protocol.split()
(client_proto_name, client_proto_version) = client_protocol.decode('utf-8').rstrip().split()
if client_proto_name != self.proto_name:
self.logger.debug("Rejecting invalid protocol %s" % (self.proto_name))
self.logger.debug('Rejecting invalid protocol %s' % (self.proto_name))
return
self.proto_version = tuple(int(v) for v in client_proto_version.split("."))
self.proto_version = tuple(int(v) for v in client_proto_version.split('.'))
if not self.validate_proto_version():
self.logger.debug(
"Rejecting invalid protocol version %s" % (client_proto_version)
)
self.logger.debug('Rejecting invalid protocol version %s' % (client_proto_version))
return
# Read headers
self.client_headers = {}
# Read headers. Currently, no headers are implemented, so look for
# an empty line to signal the end of the headers
while True:
header = await self.socket.recv()
if not header:
# Empty line. End of headers
break
tag, value = header.split(":", 1)
self.client_headers[tag.lower()] = value.strip()
line = await self.reader.readline()
if not line:
return
if self.client_headers.get("needs-headers", "false") == "true":
for k, v in (await self.handle_headers(self.client_headers)).items():
await self.socket.send("%s: %s" % (k, v))
await self.socket.send("")
line = line.decode('utf-8').rstrip()
if not line:
break
# Handle messages
while True:
d = await self.socket.recv_message()
d = await self.read_message()
if d is None:
break
try:
response = await self.dispatch_message(d)
except InvokeError as e:
await self.socket.send_message(
{"invoke-error": {"message": str(e)}}
)
break
if response is not self.NO_RESPONSE:
await self.socket.send_message(response)
except ConnectionClosedError as e:
self.logger.info(str(e))
except (ClientError, ConnectionError) as e:
await self.dispatch_message(d)
await self.writer.drain()
except ClientError as e:
self.logger.error(str(e))
finally:
await self.close()
self.writer.close()
async def dispatch_message(self, msg):
for k in self.handlers.keys():
if k in msg:
self.logger.debug("Handling %s" % k)
return await self.handlers[k](msg[k])
self.logger.debug('Handling %s' % k)
await self.handlers[k](msg[k])
return
raise ClientError("Unrecognized command %r" % msg)
async def handle_ping(self, request):
return {"alive": True}
def write_message(self, msg):
for c in chunkify(json.dumps(msg), self.max_chunk):
self.writer.write(c.encode('utf-8'))
async def read_message(self):
l = await self.reader.readline()
if not l:
return None
class StreamServer(object):
def __init__(self, handler, logger):
self.handler = handler
self.logger = logger
self.closed = False
async def handle_stream_client(self, reader, writer):
# writer.transport.set_write_buffer_limits(0)
socket = StreamConnection(reader, writer, -1)
if self.closed:
await socket.close()
return
await self.handler(socket)
async def stop(self):
self.closed = True
class TCPStreamServer(StreamServer):
def __init__(self, host, port, handler, logger):
super().__init__(handler, logger)
self.host = host
self.port = port
def start(self, loop):
self.server = loop.run_until_complete(
asyncio.start_server(self.handle_stream_client, self.host, self.port)
)
for s in self.server.sockets:
self.logger.debug("Listening on %r" % (s.getsockname(),))
# Newer python does this automatically. Do it manually here for
# maximum compatibility
s.setsockopt(socket.SOL_TCP, socket.TCP_NODELAY, 1)
s.setsockopt(socket.SOL_TCP, socket.TCP_QUICKACK, 1)
# Enable keep alives. This prevents broken client connections
# from persisting on the server for long periods of time.
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 30)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 15)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 4)
name = self.server.sockets[0].getsockname()
if self.server.sockets[0].family == socket.AF_INET6:
self.address = "[%s]:%d" % (name[0], name[1])
else:
self.address = "%s:%d" % (name[0], name[1])
return [self.server.wait_closed()]
async def stop(self):
await super().stop()
self.server.close()
def cleanup(self):
pass
class UnixStreamServer(StreamServer):
def __init__(self, path, handler, logger):
super().__init__(handler, logger)
self.path = path
def start(self, loop):
cwd = os.getcwd()
try:
# Work around path length limits in AF_UNIX
os.chdir(os.path.dirname(self.path))
self.server = loop.run_until_complete(
asyncio.start_unix_server(
self.handle_stream_client, os.path.basename(self.path)
)
)
finally:
os.chdir(cwd)
message = l.decode('utf-8')
self.logger.debug("Listening on %r" % self.path)
self.address = "unix://%s" % os.path.abspath(self.path)
return [self.server.wait_closed()]
if not message.endswith('\n'):
return None
async def stop(self):
await super().stop()
self.server.close()
return json.loads(message)
except (json.JSONDecodeError, UnicodeDecodeError) as e:
self.logger.error('Bad message from client: %r' % message)
raise e
def cleanup(self):
os.unlink(self.path)
async def handle_chunk(self, request):
lines = []
try:
while True:
l = await self.reader.readline()
l = l.rstrip(b"\n").decode("utf-8")
if not l:
break
lines.append(l)
msg = json.loads(''.join(lines))
except (json.JSONDecodeError, UnicodeDecodeError) as e:
self.logger.error('Bad message from client: %r' % lines)
raise e
class WebsocketsServer(object):
def __init__(self, host, port, handler, logger):
self.host = host
self.port = port
self.handler = handler
self.logger = logger
if 'chunk-stream' in msg:
raise ClientError("Nested chunks are not allowed")
def start(self, loop):
import websockets.server
await self.dispatch_message(msg)
self.server = loop.run_until_complete(
websockets.server.serve(
self.client_handler,
self.host,
self.port,
ping_interval=None,
)
)
for s in self.server.sockets:
self.logger.debug("Listening on %r" % (s.getsockname(),))
# Enable keep alives. This prevents broken client connections
# from persisting on the server for long periods of time.
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 30)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 15)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 4)
name = self.server.sockets[0].getsockname()
if self.server.sockets[0].family == socket.AF_INET6:
self.address = "ws://[%s]:%d" % (name[0], name[1])
else:
self.address = "ws://%s:%d" % (name[0], name[1])
return [self.server.wait_closed()]
async def stop(self):
self.server.close()
def cleanup(self):
pass
async def client_handler(self, websocket):
socket = WebsocketConnection(websocket, -1)
await self.handler(socket)
async def handle_ping(self, request):
response = {'alive': True}
self.write_message(response)
class AsyncServer(object):
def __init__(self, logger):
self._cleanup_socket = None
self.logger = logger
self.start = None
self.address = None
self.loop = None
self.run_tasks = []
def start_tcp_server(self, host, port):
self.server = TCPStreamServer(host, port, self._client_handler, self.logger)
def start_tcp():
self.server = self.loop.run_until_complete(
asyncio.start_server(self.handle_client, host, port)
)
for s in self.server.sockets:
self.logger.debug('Listening on %r' % (s.getsockname(),))
# Newer python does this automatically. Do it manually here for
# maximum compatibility
s.setsockopt(socket.SOL_TCP, socket.TCP_NODELAY, 1)
s.setsockopt(socket.SOL_TCP, socket.TCP_QUICKACK, 1)
# Enable keep alives. This prevents broken client connections
# from persisting on the server for long periods of time.
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 30)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 15)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 4)
name = self.server.sockets[0].getsockname()
if self.server.sockets[0].family == socket.AF_INET6:
self.address = "[%s]:%d" % (name[0], name[1])
else:
self.address = "%s:%d" % (name[0], name[1])
self.start = start_tcp
def start_unix_server(self, path):
self.server = UnixStreamServer(path, self._client_handler, self.logger)
def cleanup():
os.unlink(path)
def start_websocket_server(self, host, port):
self.server = WebsocketsServer(host, port, self._client_handler, self.logger)
def start_unix():
cwd = os.getcwd()
try:
# Work around path length limits in AF_UNIX
os.chdir(os.path.dirname(path))
self.server = self.loop.run_until_complete(
asyncio.start_unix_server(self.handle_client, os.path.basename(path))
)
finally:
os.chdir(cwd)
async def _client_handler(self, socket):
address = socket.address
self.logger.debug('Listening on %r' % path)
self._cleanup_socket = cleanup
self.address = "unix://%s" % os.path.abspath(path)
self.start = start_unix
@abc.abstractmethod
def accept_client(self, reader, writer):
pass
async def handle_client(self, reader, writer):
# writer.transport.set_write_buffer_limits(0)
try:
client = self.accept_client(socket)
client = self.accept_client(reader, writer)
await client.process_requests()
except Exception as e:
import traceback
self.logger.error(
"Error from client %s: %s" % (address, str(e)), exc_info=True
)
self.logger.error('Error from client: %s' % str(e), exc_info=True)
traceback.print_exc()
finally:
self.logger.debug("Client %s disconnected", address)
await socket.close()
writer.close()
self.logger.debug('Client disconnected')
@abc.abstractmethod
def accept_client(self, socket):
pass
async def stop(self):
self.logger.debug("Stopping server")
await self.server.stop()
def start(self):
tasks = self.server.start(self.loop)
self.address = self.server.address
return tasks
def run_loop_forever(self):
try:
self.loop.run_forever()
except KeyboardInterrupt:
pass
def signal_handler(self):
self.logger.debug("Got exit signal")
self.loop.create_task(self.stop())
self.loop.stop()
def _serve_forever(self, tasks):
def _serve_forever(self):
try:
self.loop.add_signal_handler(signal.SIGTERM, self.signal_handler)
self.loop.add_signal_handler(signal.SIGINT, self.signal_handler)
self.loop.add_signal_handler(signal.SIGQUIT, self.signal_handler)
signal.pthread_sigmask(signal.SIG_UNBLOCK, [signal.SIGTERM])
self.loop.run_until_complete(asyncio.gather(*tasks))
self.run_loop_forever()
self.server.close()
self.logger.debug("Server shutting down")
self.loop.run_until_complete(self.server.wait_closed())
self.logger.debug('Server shutting down')
finally:
self.server.cleanup()
if self._cleanup_socket is not None:
self._cleanup_socket()
def serve_forever(self):
"""
Serve requests in the current process
"""
self._create_loop()
tasks = self.start()
self._serve_forever(tasks)
self.loop.close()
def _create_loop(self):
# Create loop and override any loop that may have existed in
# a parent process. It is possible that the usecases of
# serve_forever might be constrained enough to allow using
# get_event_loop here, but better safe than sorry for now.
self.loop = asyncio.new_event_loop()
asyncio.set_event_loop(self.loop)
self.start()
self._serve_forever()
def serve_as_process(self, *, prefunc=None, args=(), log_level=None):
def serve_as_process(self, *, prefunc=None, args=()):
"""
Serve requests in a child process
"""
def run(queue):
# Create loop and override any loop that may have existed
# in a parent process. Without doing this and instead
@@ -351,22 +259,18 @@ class AsyncServer(object):
# more general, though, as any potential use of asyncio in
# Cooker could create a loop that needs to replaced in this
# new process.
self._create_loop()
self.loop = asyncio.new_event_loop()
asyncio.set_event_loop(self.loop)
try:
self.address = None
tasks = self.start()
self.start()
finally:
# Always put the server address to wake up the parent task
queue.put(self.address)
queue.close()
if prefunc is not None:
prefunc(self, *args)
if log_level is not None:
self.logger.setLevel(log_level)
self._serve_forever(tasks)
self._serve_forever()
if sys.version_info >= (3, 6):
self.loop.run_until_complete(self.loop.shutdown_asyncgens())

View File

@@ -344,7 +344,9 @@ def virtualfn2realfn(virtualfn):
"""
mc = ""
if virtualfn.startswith('mc:') and virtualfn.count(':') >= 2:
(_, mc, virtualfn) = virtualfn.split(':', 2)
elems = virtualfn.split(':')
mc = elems[1]
virtualfn = ":".join(elems[2:])
fn = virtualfn
cls = ""
@@ -367,7 +369,7 @@ def realfn2virtual(realfn, cls, mc):
def variant2virtual(realfn, variant):
"""
Convert a real filename + a variant to a virtual filename
Convert a real filename + the associated subclass keyword to a virtual filename
"""
if variant == "":
return realfn

View File

@@ -62,7 +62,6 @@ def check_indent(codestr):
modulecode_deps = {}
def add_module_functions(fn, functions, namespace):
import os
fstat = os.stat(fn)
fixedhash = fn + ":" + str(fstat.st_size) + ":" + str(fstat.st_mtime)
for f in functions:
@@ -72,11 +71,6 @@ def add_module_functions(fn, functions, namespace):
parser.parse_python(None, filename=fn, lineno=1, fixedhash=fixedhash+f)
#bb.warn("Cached %s" % f)
except KeyError:
targetfn = inspect.getsourcefile(functions[f])
if fn != targetfn:
# Skip references to other modules outside this file
#bb.warn("Skipping %s" % name)
continue
lines, lineno = inspect.getsourcelines(functions[f])
src = "".join(lines)
parser.parse_python(src, filename=fn, lineno=lineno, fixedhash=fixedhash+f)
@@ -87,14 +81,14 @@ def add_module_functions(fn, functions, namespace):
if e in functions:
execs.remove(e)
execs.add(namespace + "." + e)
modulecode_deps[name] = [parser.references.copy(), execs, parser.var_execs.copy(), parser.contains.copy(), parser.extra]
modulecode_deps[name] = [parser.references.copy(), execs, parser.var_execs.copy(), parser.contains.copy()]
#bb.warn("%s: %s\nRefs:%s Execs: %s %s %s" % (name, fn, parser.references, parser.execs, parser.var_execs, parser.contains))
def update_module_dependencies(d):
for mod in modulecode_deps:
excludes = set((d.getVarFlag(mod, "vardepsexclude") or "").split())
if excludes:
modulecode_deps[mod] = [modulecode_deps[mod][0] - excludes, modulecode_deps[mod][1] - excludes, modulecode_deps[mod][2] - excludes, modulecode_deps[mod][3], modulecode_deps[mod][4]]
modulecode_deps[mod] = [modulecode_deps[mod][0] - excludes, modulecode_deps[mod][1] - excludes, modulecode_deps[mod][2] - excludes, modulecode_deps[mod][3]]
# A custom getstate/setstate using tuples is actually worth 15% cachesize by
# avoiding duplication of the attribute names!
@@ -117,22 +111,21 @@ class SetCache(object):
codecache = SetCache()
class pythonCacheLine(object):
def __init__(self, refs, execs, contains, extra):
def __init__(self, refs, execs, contains):
self.refs = codecache.internSet(refs)
self.execs = codecache.internSet(execs)
self.contains = {}
for c in contains:
self.contains[c] = codecache.internSet(contains[c])
self.extra = extra
def __getstate__(self):
return (self.refs, self.execs, self.contains, self.extra)
return (self.refs, self.execs, self.contains)
def __setstate__(self, state):
(refs, execs, contains, extra) = state
self.__init__(refs, execs, contains, extra)
(refs, execs, contains) = state
self.__init__(refs, execs, contains)
def __hash__(self):
l = (hash(self.refs), hash(self.execs), hash(self.extra))
l = (hash(self.refs), hash(self.execs))
for c in sorted(self.contains.keys()):
l = l + (c, hash(self.contains[c]))
return hash(l)
@@ -161,7 +154,7 @@ class CodeParserCache(MultiProcessCache):
# so that an existing cache gets invalidated. Additionally you'll need
# to increment __cache_version__ in cache.py in order to ensure that old
# recipe caches don't trigger "Taskhash mismatch" errors.
CACHE_VERSION = 12
CACHE_VERSION = 11
def __init__(self):
MultiProcessCache.__init__(self)
@@ -175,8 +168,8 @@ class CodeParserCache(MultiProcessCache):
self.pythoncachelines = {}
self.shellcachelines = {}
def newPythonCacheLine(self, refs, execs, contains, extra):
cacheline = pythonCacheLine(refs, execs, contains, extra)
def newPythonCacheLine(self, refs, execs, contains):
cacheline = pythonCacheLine(refs, execs, contains)
h = hash(cacheline)
if h in self.pythoncachelines:
return self.pythoncachelines[h]
@@ -264,17 +257,17 @@ class PythonParser():
if name and (name.endswith(self.getvars) or name.endswith(self.getvarflags) or name in self.containsfuncs or name in self.containsanyfuncs):
if isinstance(node.args[0], ast.Constant) and isinstance(node.args[0].value, str):
varname = node.args[0].value
if name in self.containsfuncs and isinstance(node.args[1], ast.Constant):
if name in self.containsfuncs and isinstance(node.args[1], ast.Str):
if varname not in self.contains:
self.contains[varname] = set()
self.contains[varname].add(node.args[1].value)
elif name in self.containsanyfuncs and isinstance(node.args[1], ast.Constant):
self.contains[varname].add(node.args[1].s)
elif name in self.containsanyfuncs and isinstance(node.args[1], ast.Str):
if varname not in self.contains:
self.contains[varname] = set()
self.contains[varname].update(node.args[1].value.split())
self.contains[varname].update(node.args[1].s.split())
elif name.endswith(self.getvarflags):
if isinstance(node.args[1], ast.Constant):
self.references.add('%s[%s]' % (varname, node.args[1].value))
if isinstance(node.args[1], ast.Str):
self.references.add('%s[%s]' % (varname, node.args[1].s))
else:
self.warn(node.func, node.args[1])
else:
@@ -282,8 +275,8 @@ class PythonParser():
else:
self.warn(node.func, node.args[0])
elif name and name.endswith(".expand"):
if isinstance(node.args[0], ast.Constant):
value = node.args[0].value
if isinstance(node.args[0], ast.Str):
value = node.args[0].s
d = bb.data.init()
parser = d.expandWithRefs(value, self.name)
self.references |= parser.references
@@ -293,8 +286,8 @@ class PythonParser():
self.contains[varname] = set()
self.contains[varname] |= parser.contains[varname]
elif name in self.execfuncs:
if isinstance(node.args[0], ast.Constant):
self.var_execs.add(node.args[0].value)
if isinstance(node.args[0], ast.Str):
self.var_execs.add(node.args[0].s)
else:
self.warn(node.func, node.args[0])
elif name and isinstance(node.func, (ast.Name, ast.Attribute)):
@@ -344,7 +337,6 @@ class PythonParser():
self.contains = {}
for i in codeparsercache.pythoncache[h].contains:
self.contains[i] = set(codeparsercache.pythoncache[h].contains[i])
self.extra = codeparsercache.pythoncache[h].extra
return
if h in codeparsercache.pythoncacheextras:
@@ -353,7 +345,6 @@ class PythonParser():
self.contains = {}
for i in codeparsercache.pythoncacheextras[h].contains:
self.contains[i] = set(codeparsercache.pythoncacheextras[h].contains[i])
self.extra = codeparsercache.pythoncacheextras[h].extra
return
if fixedhash and not node:
@@ -372,11 +363,8 @@ class PythonParser():
self.visit_Call(n)
self.execs.update(self.var_execs)
self.extra = None
if fixedhash:
self.extra = bbhash(str(node))
codeparsercache.pythoncacheextras[h] = codeparsercache.newPythonCacheLine(self.references, self.execs, self.contains, self.extra)
codeparsercache.pythoncacheextras[h] = codeparsercache.newPythonCacheLine(self.references, self.execs, self.contains)
class ShellParser():
def __init__(self, name, log):

View File

@@ -420,30 +420,15 @@ class CommandsSync:
return command.cooker.recipecaches[mc].pkg_dp
getDefaultPreference.readonly = True
def getSkippedRecipes(self, command, params):
"""
Get the map of skipped recipes for the specified multiconfig/mc name (`params[0]`).
Invoked by `bb.tinfoil.Tinfoil.get_skipped_recipes`
:param command: Internally used parameter.
:param params: Parameter array. params[0] is multiconfig/mc name. If not given, then default mc '' is assumed.
:return: Dict whose keys are virtualfns and values are `bb.cooker.SkippedPackage`
"""
try:
mc = params[0]
except IndexError:
mc = ''
# Return list sorted by reverse priority order
import bb.cache
def sortkey(x):
vfn, _ = x
realfn, _, item_mc = bb.cache.virtualfn2realfn(vfn)
return -command.cooker.collections[item_mc].calc_bbfile_priority(realfn)[0], vfn
realfn, _, mc = bb.cache.virtualfn2realfn(vfn)
return (-command.cooker.collections[mc].calc_bbfile_priority(realfn)[0], vfn)
skipdict = OrderedDict(sorted(command.cooker.skiplist_by_mc[mc].items(), key=sortkey))
skipdict = OrderedDict(sorted(command.cooker.skiplist.items(), key=sortkey))
return list(skipdict.items())
getSkippedRecipes.readonly = True
@@ -565,8 +550,8 @@ class CommandsSync:
and return a datastore object representing the environment
for the recipe.
"""
virtualfn = params[0]
(fn, cls, mc) = bb.cache.virtualfn2realfn(virtualfn)
fn = params[0]
mc = bb.runqueue.mc_from_tid(fn)
appends = params[1]
appendlist = params[2]
if len(params) > 3:
@@ -589,10 +574,10 @@ class CommandsSync:
if config_data:
# We have to use a different function here if we're passing in a datastore
# NOTE: we took a copy above, so we don't do it here again
envdata = command.cooker.databuilder._parse_recipe(config_data, fn, appendfiles, mc, layername)[cls]
envdata = command.cooker.databuilder._parse_recipe(config_data, fn, appendfiles, mc, layername)['']
else:
# Use the standard path
envdata = command.cooker.databuilder.parseRecipe(virtualfn, appendfiles, layername)
envdata = command.cooker.databuilder.parseRecipe(fn, appendfiles, layername)
idx = command.remotedatastores.store(envdata)
return DataStoreConnectionHandle(idx)
parseRecipeFile.readonly = True
@@ -792,7 +777,6 @@ class CommandsAsync:
(mc, pn) = bb.runqueue.split_mc(params[0])
taskname = params[1]
sigs = params[2]
bb.siggen.check_siggen_version(bb.siggen)
res = bb.siggen.find_siginfo(pn, taskname, sigs, command.cooker.databuilder.mcdata[mc])
bb.event.fire(bb.event.FindSigInfoResult(res), command.cooker.databuilder.mcdata[mc])
command.finishAsyncCommand()

View File

@@ -12,12 +12,12 @@
import sys, os, glob, os.path, re, time
import itertools
import logging
from bb import multiprocessing
import multiprocessing
import threading
from io import StringIO, UnsupportedOperation
from contextlib import closing
from collections import defaultdict, namedtuple
import bb, bb.command
import bb, bb.exceptions, bb.command
from bb import utils, data, parse, event, cache, providers, taskdata, runqueue, build
import queue
import signal
@@ -102,15 +102,12 @@ class CookerFeatures(object):
class EventWriter:
def __init__(self, cooker, eventfile):
self.file_inited = None
self.cooker = cooker
self.eventfile = eventfile
self.event_queue = []
def write_variables(self):
with open(self.eventfile, "a") as f:
f.write("%s\n" % json.dumps({ "allvariables" : self.cooker.getAllKeysWithFlags(["doc", "func"])}))
def send(self, event):
def write_event(self, event):
with open(self.eventfile, "a") as f:
try:
str_event = codecs.encode(pickle.dumps(event), 'base64').decode('utf-8')
@@ -120,6 +117,28 @@ class EventWriter:
import traceback
print(err, traceback.format_exc())
def send(self, event):
if self.file_inited:
# we have the file, just write the event
self.write_event(event)
else:
# init on bb.event.BuildStarted
name = "%s.%s" % (event.__module__, event.__class__.__name__)
if name in ("bb.event.BuildStarted", "bb.cooker.CookerExit"):
with open(self.eventfile, "w") as f:
f.write("%s\n" % json.dumps({ "allvariables" : self.cooker.getAllKeysWithFlags(["doc", "func"])}))
self.file_inited = True
# write pending events
for evt in self.event_queue:
self.write_event(evt)
# also write the current event
self.write_event(event)
else:
# queue all events until the file is inited
self.event_queue.append(event)
#============================================================================#
# BBCooker
@@ -134,8 +153,7 @@ class BBCooker:
self.baseconfig_valid = False
self.parsecache_valid = False
self.eventlog = None
# The skiplists, one per multiconfig
self.skiplist_by_mc = defaultdict(dict)
self.skiplist = {}
self.featureset = CookerFeatures()
if featureSet:
for f in featureSet:
@@ -285,10 +303,6 @@ class BBCooker:
self.data_hash = self.databuilder.data_hash
self.extraconfigdata = {}
eventlog = self.data.getVar("BB_DEFAULT_EVENTLOG")
if not self.configuration.writeeventlog and eventlog:
self.setupEventLog(eventlog)
if consolelog:
self.data.setVar("BB_CONSOLELOG", consolelog)
@@ -316,13 +330,13 @@ class BBCooker:
dbfile = (self.data.getVar("PERSISTENT_DIR") or self.data.getVar("CACHE")) + "/hashserv.db"
upstream = self.data.getVar("BB_HASHSERVE_UPSTREAM") or None
if upstream:
import socket
try:
with hashserv.create_client(upstream) as client:
client.ping()
except (ConnectionError, ImportError) as e:
sock = socket.create_connection(upstream.split(":"), 5)
sock.close()
except socket.error as e:
bb.warn("BB_HASHSERVE_UPSTREAM is not valid, unable to connect hash equivalence server at '%s': %s"
% (upstream, repr(e)))
upstream = None
self.hashservaddr = "unix://%s/hashserve.sock" % self.data.getVar("TOPDIR")
self.hashserv = hashserv.create_server(
@@ -331,7 +345,7 @@ class BBCooker:
sync=False,
upstream=upstream,
)
self.hashserv.serve_as_process(log_level=logging.WARNING)
self.hashserv.serve_as_process()
for mc in self.databuilder.mcdata:
self.databuilder.mcorigdata[mc].setVar("BB_HASHSERVE", self.hashservaddr)
self.databuilder.mcdata[mc].setVar("BB_HASHSERVE", self.hashservaddr)
@@ -395,19 +409,6 @@ class BBCooker:
self._parsecache_set(False)
def setupEventLog(self, eventlog):
if self.eventlog and self.eventlog[0] != eventlog:
bb.event.unregister_UIHhandler(self.eventlog[1])
self.eventlog = None
if not self.eventlog or self.eventlog[0] != eventlog:
# we log all events to a file if so directed
# register the log file writer as UI Handler
if not os.path.exists(os.path.dirname(eventlog)):
bb.utils.mkdirhier(os.path.dirname(eventlog))
writer = EventWriter(self, eventlog)
EventLogWriteHandler = namedtuple('EventLogWriteHandler', ['event'])
self.eventlog = (eventlog, bb.event.register_UIHhandler(EventLogWriteHandler(writer)), writer)
def updateConfigOpts(self, options, environment, cmdline):
self.ui_cmdline = cmdline
clean = True
@@ -427,7 +428,14 @@ class BBCooker:
setattr(self.configuration, o, options[o])
if self.configuration.writeeventlog:
self.setupEventLog(self.configuration.writeeventlog)
if self.eventlog and self.eventlog[0] != self.configuration.writeeventlog:
bb.event.unregister_UIHhandler(self.eventlog[1])
if not self.eventlog or self.eventlog[0] != self.configuration.writeeventlog:
# we log all events to a file if so directed
# register the log file writer as UI Handler
writer = EventWriter(self, self.configuration.writeeventlog)
EventLogWriteHandler = namedtuple('EventLogWriteHandler', ['event'])
self.eventlog = (self.configuration.writeeventlog, bb.event.register_UIHhandler(EventLogWriteHandler(writer)))
bb.msg.loggerDefaultLogLevel = self.configuration.default_loglevel
bb.msg.loggerDefaultDomains = self.configuration.debug_domains
@@ -613,8 +621,8 @@ class BBCooker:
localdata = {}
for mc in self.multiconfigs:
taskdata[mc] = bb.taskdata.TaskData(halt, skiplist=self.skiplist_by_mc[mc], allowincomplete=allowincomplete)
localdata[mc] = bb.data.createCopy(self.databuilder.mcdata[mc])
taskdata[mc] = bb.taskdata.TaskData(halt, skiplist=self.skiplist, allowincomplete=allowincomplete)
localdata[mc] = data.createCopy(self.databuilder.mcdata[mc])
bb.data.expandKeys(localdata[mc])
current = 0
@@ -934,7 +942,7 @@ class BBCooker:
for mc in self.multiconfigs:
# First get list of recipes, including skipped
recipefns = list(self.recipecaches[mc].pkg_fn.keys())
recipefns.extend(self.skiplist_by_mc[mc].keys())
recipefns.extend(self.skiplist.keys())
# Work out list of bbappends that have been applied
applied_appends = []
@@ -1387,8 +1395,6 @@ class BBCooker:
buildname = self.databuilder.mcdata[mc].getVar("BUILDNAME")
if fireevents:
bb.event.fire(bb.event.BuildStarted(buildname, [item]), self.databuilder.mcdata[mc])
if self.eventlog:
self.eventlog[2].write_variables()
bb.event.enable_heartbeat()
# Execute the runqueue
@@ -1460,6 +1466,7 @@ class BBCooker:
if t in task or getAllTaskSignatures:
try:
rq.rqdata.prepare_task_hash(tid)
sig.append([pn, t, rq.rqdata.get_task_unihash(tid)])
except KeyError:
sig.append(self.getTaskSignatures(target, [t])[0])
@@ -1531,8 +1538,6 @@ class BBCooker:
for mc in self.multiconfigs:
bb.event.fire(bb.event.BuildStarted(buildname, ntargets), self.databuilder.mcdata[mc])
if self.eventlog:
self.eventlog[2].write_variables()
bb.event.enable_heartbeat()
rq = bb.runqueue.RunQueue(self, self.data, self.recipecaches, taskdata, runlist)
@@ -1543,13 +1548,7 @@ class BBCooker:
def getAllKeysWithFlags(self, flaglist):
def dummy_autorev(d):
return
dump = {}
# Horrible but for now we need to avoid any sideeffects of autorev being called
saved = bb.fetch2.get_autorev
bb.fetch2.get_autorev = dummy_autorev
for k in self.data.keys():
try:
expand = True
@@ -1569,7 +1568,6 @@ class BBCooker:
dump[k][d] = None
except Exception as e:
print(e)
bb.fetch2.get_autorev = saved
return dump
@@ -1789,7 +1787,7 @@ class CookerCollectFiles(object):
for ignored in ('SCCS', 'CVS', '.svn'):
if ignored in dirs:
dirs.remove(ignored)
found += [os.path.join(dir, f) for f in files if (f.endswith(('.bb', '.bbappend')))]
found += [os.path.join(dir, f) for f in files if (f.endswith(['.bb', '.bbappend']))]
return found
@@ -2098,6 +2096,7 @@ class Parser(multiprocessing.Process):
except Exception as exc:
tb = sys.exc_info()[2]
exc.recipe = filename
exc.traceback = list(bb.exceptions.extract_traceback(tb, context=3))
return True, None, exc
# Need to turn BaseExceptions into Exceptions here so we gracefully shutdown
# and for example a worker thread doesn't just exit on its own in response to
@@ -2298,12 +2297,8 @@ class CookerParser(object):
return False
except ParsingFailure as exc:
self.error += 1
exc_desc = str(exc)
if isinstance(exc, SystemExit) and not isinstance(exc.code, str):
exc_desc = 'Exited with "%d"' % exc.code
logger.error('Unable to parse %s: %s' % (exc.recipe, exc_desc))
logger.error('Unable to parse %s: %s' %
(exc.recipe, bb.exceptions.to_string(exc.realexception)))
self.shutdown(clean=False)
return False
except bb.parse.ParseError as exc:
@@ -2312,33 +2307,20 @@ class CookerParser(object):
self.shutdown(clean=False, eventmsg=str(exc))
return False
except bb.data_smart.ExpansionError as exc:
def skip_frames(f, fn_prefix):
while f and f.tb_frame.f_code.co_filename.startswith(fn_prefix):
f = f.tb_next
return f
self.error += 1
bbdir = os.path.dirname(__file__) + os.sep
etype, value, tb = sys.exc_info()
# Remove any frames where the code comes from bitbake. This
# prevents deep (and pretty useless) backtraces for expansion error
tb = skip_frames(tb, bbdir)
cur = tb
while cur:
cur.tb_next = skip_frames(cur.tb_next, bbdir)
cur = cur.tb_next
etype, value, _ = sys.exc_info()
tb = list(itertools.dropwhile(lambda e: e.filename.startswith(bbdir), exc.traceback))
logger.error('ExpansionError during parsing %s', value.recipe,
exc_info=(etype, value, tb))
self.shutdown(clean=False)
return False
except Exception as exc:
self.error += 1
_, value, _ = sys.exc_info()
etype, value, tb = sys.exc_info()
if hasattr(value, "recipe"):
logger.error('Unable to parse %s' % value.recipe,
exc_info=sys.exc_info())
exc_info=(etype, value, exc.traceback))
else:
# Most likely, an exception occurred during raising an exception
import traceback
@@ -2359,7 +2341,7 @@ class CookerParser(object):
for virtualfn, info_array in result:
if info_array[0].skipped:
self.skipped += 1
self.cooker.skiplist_by_mc[mc][virtualfn] = SkippedPackage(info_array[0])
self.cooker.skiplist[virtualfn] = SkippedPackage(info_array[0])
self.bb_caches[mc].add_info(virtualfn, info_array, self.cooker.recipecaches[mc],
parsed=parsed, watcher = self.cooker.add_filewatch)
return True

View File

@@ -503,8 +503,8 @@ class CookerDataBuilder(object):
if appends:
bb_data.setVar('__BBAPPEND', " ".join(appends))
return bb.parse.handle(bbfile, bb_data)
bb_data = bb.parse.handle(bbfile, bb_data)
return bb_data
def parseRecipeVariants(self, bbfile, appends, virtonly=False, mc=None, layername=None):
"""
@@ -516,7 +516,8 @@ class CookerDataBuilder(object):
(bbfile, virtual, mc) = bb.cache.virtualfn2realfn(bbfile)
bb_data = self.mcdata[mc].createCopy()
bb_data.setVar("__ONLYFINALISE", virtual or "default")
return self._parse_recipe(bb_data, bbfile, appends, mc, layername)
datastores = self._parse_recipe(bb_data, bbfile, appends, mc, layername)
return datastores
if mc is not None:
bb_data = self.mcdata[mc].createCopy()
@@ -542,5 +543,5 @@ class CookerDataBuilder(object):
"""
logger.debug("Parsing %s (full)" % virtualfn)
(fn, virtual, mc) = bb.cache.virtualfn2realfn(virtualfn)
datastores = self.parseRecipeVariants(virtualfn, appends, virtonly=True, layername=layername)
return datastores[virtual]
bb_data = self.parseRecipeVariants(virtualfn, appends, virtonly=True, layername=layername)
return bb_data[virtual]

View File

@@ -293,7 +293,7 @@ def build_dependencies(key, keys, mod_funcs, shelldeps, varflagsexcl, ignored_va
if key in mod_funcs:
exclusions = set()
moddep = bb.codeparser.modulecode_deps[key]
value = handle_contains(moddep[4], moddep[3], exclusions, d)
value = handle_contains("", moddep[3], exclusions, d)
return frozenset((moddep[0] | keys & moddep[1]) - ignored_vars), value
if key[-1] == ']':

View File

@@ -31,7 +31,7 @@ logger = logging.getLogger("BitBake.Data")
__setvar_keyword__ = [":append", ":prepend", ":remove"]
__setvar_regexp__ = re.compile(r'(?P<base>.*?)(?P<keyword>:append|:prepend|:remove)(:(?P<add>[^A-Z]*))?$')
__expand_var_regexp__ = re.compile(r"\${[a-zA-Z0-9\-_+./~:]+}")
__expand_var_regexp__ = re.compile(r"\${[a-zA-Z0-9\-_+./~:]+?}")
__expand_python_regexp__ = re.compile(r"\${@(?:{.*?}|.)+?}")
__whitespace_split__ = re.compile(r'(\s)')
__override_regexp__ = re.compile(r'[a-z0-9]+')
@@ -272,9 +272,12 @@ class VariableHistory(object):
return
if 'op' not in loginfo or not loginfo['op']:
loginfo['op'] = 'set'
if 'detail' in loginfo:
loginfo['detail'] = str(loginfo['detail'])
if 'variable' not in loginfo or 'file' not in loginfo:
raise ValueError("record() missing variable or file.")
var = loginfo['variable']
if var not in self.variables:
self.variables[var] = []
if not isinstance(self.variables[var], list):
@@ -333,8 +336,7 @@ class VariableHistory(object):
flag = '[%s] ' % (event['flag'])
else:
flag = ''
o.write("# %s %s:%s%s\n# %s\"%s\"\n" % \
(event['op'], event['file'], event['line'], display_func, flag, re.sub('\n', '\n# ', str(event['detail']))))
o.write("# %s %s:%s%s\n# %s\"%s\"\n" % (event['op'], event['file'], event['line'], display_func, flag, re.sub('\n', '\n# ', event['detail'])))
if len(history) > 1:
o.write("# pre-expansion value:\n")
o.write('# "%s"\n' % (commentVal))
@@ -388,7 +390,7 @@ class VariableHistory(object):
if isset and event['op'] == 'set?':
continue
isset = True
items = d.expand(str(event['detail'])).split()
items = d.expand(event['detail']).split()
for item in items:
# This is a little crude but is belt-and-braces to avoid us
# having to handle every possible operation type specifically
@@ -580,9 +582,12 @@ class DataSmart(MutableMapping):
else:
loginfo['op'] = keyword
self.varhistory.record(**loginfo)
# todo make sure keyword is not __doc__ or __module__
# pay the cookie monster
# more cookies for the cookie monster
self._setvar_update_overrides(base, **loginfo)
if ':' in var:
self._setvar_update_overrides(base, **loginfo)
if base in self.overridevars:
self._setvar_update_overridevars(var, value)
@@ -635,7 +640,6 @@ class DataSmart(MutableMapping):
nextnew.update(vardata.contains.keys())
new = nextnew
self.overrides = None
self.expand_cache = {}
def _setvar_update_overrides(self, var, **loginfo):
# aka pay the cookie monster

View File

@@ -19,6 +19,7 @@ import sys
import threading
import traceback
import bb.exceptions
import bb.utils
# This is the pid for which we should generate the event. This is set when
@@ -194,12 +195,7 @@ def fire_ui_handlers(event, d):
ui_queue.append(event)
return
with bb.utils.lock_timeout_nocheck(_thread_lock) as lock:
if not lock:
# If we can't get the lock, we may be recursively called, queue and return
ui_queue.append(event)
return
with bb.utils.lock_timeout(_thread_lock):
errors = []
for h in _ui_handlers:
#print "Sending event %s" % event
@@ -218,9 +214,6 @@ def fire_ui_handlers(event, d):
for h in errors:
del _ui_handlers[h]
while ui_queue:
fire_ui_handlers(ui_queue.pop(), d)
def fire(event, d):
"""Fire off an Event"""
@@ -264,15 +257,14 @@ def register(name, handler, mask=None, filename=None, lineno=None, data=None):
# handle string containing python code
if isinstance(handler, str):
tmp = "def %s(e, d):\n%s" % (name, handler)
# Inject empty lines to make code match lineno in filename
if lineno is not None:
tmp = "\n" * (lineno-1) + tmp
try:
code = bb.methodpool.compile_cache(tmp)
if not code:
if filename is None:
filename = "%s(e, d)" % name
code = compile(tmp, filename, "exec", ast.PyCF_ONLY_AST)
if lineno is not None:
ast.increment_lineno(code, lineno-1)
code = compile(code, filename, "exec")
bb.methodpool.compile_cache_add(tmp, code)
except SyntaxError:
@@ -766,7 +758,13 @@ class LogHandler(logging.Handler):
def emit(self, record):
if record.exc_info:
record.bb_exc_formatted = traceback.format_exception(*record.exc_info)
etype, value, tb = record.exc_info
if hasattr(tb, 'tb_next'):
tb = list(bb.exceptions.extract_traceback(tb, context=3))
# Need to turn the value into something the logging system can pickle
record.bb_exc_info = (etype, value, tb)
record.bb_exc_formatted = bb.exceptions.format_exception(etype, value, tb, limit=5)
value = str(value)
record.exc_info = None
fire(record, None)

View File

@@ -0,0 +1,96 @@
#
# Copyright BitBake Contributors
#
# SPDX-License-Identifier: GPL-2.0-only
#
import inspect
import traceback
import bb.namedtuple_with_abc
from collections import namedtuple
class TracebackEntry(namedtuple.abc):
"""Pickleable representation of a traceback entry"""
_fields = 'filename lineno function args code_context index'
_header = ' File "{0.filename}", line {0.lineno}, in {0.function}{0.args}'
def format(self, formatter=None):
if not self.code_context:
return self._header.format(self) + '\n'
formatted = [self._header.format(self) + ':\n']
for lineindex, line in enumerate(self.code_context):
if formatter:
line = formatter(line)
if lineindex == self.index:
formatted.append(' >%s' % line)
else:
formatted.append(' %s' % line)
return formatted
def __str__(self):
return ''.join(self.format())
def _get_frame_args(frame):
"""Get the formatted arguments and class (if available) for a frame"""
arginfo = inspect.getargvalues(frame)
try:
if not arginfo.args:
return '', None
# There have been reports from the field of python 2.6 which doesn't
# return a namedtuple here but simply a tuple so fallback gracefully if
# args isn't present.
except AttributeError:
return '', None
firstarg = arginfo.args[0]
if firstarg == 'self':
self = arginfo.locals['self']
cls = self.__class__.__name__
arginfo.args.pop(0)
del arginfo.locals['self']
else:
cls = None
formatted = inspect.formatargvalues(*arginfo)
return formatted, cls
def extract_traceback(tb, context=1):
frames = inspect.getinnerframes(tb, context)
for frame, filename, lineno, function, code_context, index in frames:
formatted_args, cls = _get_frame_args(frame)
if cls:
function = '%s.%s' % (cls, function)
yield TracebackEntry(filename, lineno, function, formatted_args,
code_context, index)
def format_extracted(extracted, formatter=None, limit=None):
if limit:
extracted = extracted[-limit:]
formatted = []
for tracebackinfo in extracted:
formatted.extend(tracebackinfo.format(formatter))
return formatted
def format_exception(etype, value, tb, context=1, limit=None, formatter=None):
formatted = ['Traceback (most recent call last):\n']
if hasattr(tb, 'tb_next'):
tb = extract_traceback(tb, context)
formatted.extend(format_extracted(tb, formatter, limit))
formatted.extend(traceback.format_exception_only(etype, value))
return formatted
def to_string(exc):
if isinstance(exc, SystemExit):
if not isinstance(exc.code, str):
return 'Exited with "%d"' % exc.code
return str(exc)
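A hypothetical usage sketch for this module, mirroring how the event code above uses it: extract a pickle-friendly traceback first, then format it.

import sys

try:
    1 / 0
except ZeroDivisionError:
    etype, value, tb = sys.exc_info()
    entries = list(extract_traceback(tb, context=3))   # pickleable TracebackEntry objects
    lines = format_exception(etype, value, entries, limit=5)
    print("".join(lines))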


@@ -237,7 +237,7 @@ class URI(object):
# to RFC compliant URL format. E.g.:
# file://foo.diff -> file:foo.diff
if urlp.scheme in self._netloc_forbidden:
uri = re.sub(r"(?<=:)//(?!/)", "", uri, count=1)
uri = re.sub("(?<=:)//(?!/)", "", uri, 1)
reparse = 1
if reparse:
@@ -290,12 +290,12 @@ class URI(object):
def _param_str_split(self, string, elmdelim, kvdelim="="):
ret = collections.OrderedDict()
for k, v in [x.split(kvdelim, 1) if kvdelim in x else (x, None) for x in string.split(elmdelim) if x]:
for k, v in [x.split(kvdelim, 1) for x in string.split(elmdelim) if x]:
ret[k] = v
return ret
def _param_str_join(self, dict_, elmdelim, kvdelim="="):
return elmdelim.join([kvdelim.join([k, v]) if v else k for k, v in dict_.items()])
return elmdelim.join([kvdelim.join([k, v]) for k, v in dict_.items()])
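A standalone illustration (hypothetical input) of the difference between the two split variants above: the conditional form keeps bare keys without an "=" by storing None, whereas the unconditional split fails to unpack them.

string, elmdelim, kvdelim = "name=foo;nobranch=1;bareclone", ";", "="
pairs = [x.split(kvdelim, 1) if kvdelim in x else (x, None)
         for x in string.split(elmdelim) if x]
# -> [['name', 'foo'], ['nobranch', '1'], ('bareclone', None)]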
@property
def hostport(self):
@@ -460,7 +460,7 @@ def uri_replace(ud, uri_find, uri_replace, replacements, d, mirrortarball=None):
for k in replacements:
uri_replace_decoded[loc] = uri_replace_decoded[loc].replace(k, replacements[k])
#bb.note("%s %s %s" % (regexp, uri_replace_decoded[loc], uri_decoded[loc]))
result_decoded[loc] = re.sub(regexp, uri_replace_decoded[loc], uri_decoded[loc], count=1)
result_decoded[loc] = re.sub(regexp, uri_replace_decoded[loc], uri_decoded[loc], 1)
if loc == 2:
# Handle path manipulations
basename = None
@@ -499,30 +499,30 @@ def fetcher_init(d):
Calls before this must not hit the cache.
"""
with bb.persist_data.persist('BB_URI_HEADREVS', d) as revs:
try:
# fetcher_init is called multiple times, so make sure we only save the
# revs the first time it is called.
if not bb.fetch2.saved_headrevs:
bb.fetch2.saved_headrevs = dict(revs)
except:
pass
revs = bb.persist_data.persist('BB_URI_HEADREVS', d)
try:
# fetcher_init is called multiple times, so make sure we only save the
# revs the first time it is called.
if not bb.fetch2.saved_headrevs:
bb.fetch2.saved_headrevs = dict(revs)
except:
pass
# When to drop SCM head revisions controlled by user policy
srcrev_policy = d.getVar('BB_SRCREV_POLICY') or "clear"
if srcrev_policy == "cache":
logger.debug("Keeping SRCREV cache due to cache policy of: %s", srcrev_policy)
elif srcrev_policy == "clear":
logger.debug("Clearing SRCREV cache due to cache policy of: %s", srcrev_policy)
revs.clear()
else:
raise FetchError("Invalid SRCREV cache policy of: %s" % srcrev_policy)
# When to drop SCM head revisions controlled by user policy
srcrev_policy = d.getVar('BB_SRCREV_POLICY') or "clear"
if srcrev_policy == "cache":
logger.debug("Keeping SRCREV cache due to cache policy of: %s", srcrev_policy)
elif srcrev_policy == "clear":
logger.debug("Clearing SRCREV cache due to cache policy of: %s", srcrev_policy)
revs.clear()
else:
raise FetchError("Invalid SRCREV cache policy of: %s" % srcrev_policy)
_checksum_cache.init_cache(d.getVar("BB_CACHEDIR"))
_checksum_cache.init_cache(d.getVar("BB_CACHEDIR"))
for m in methods:
if hasattr(m, "init"):
m.init(d)
for m in methods:
if hasattr(m, "init"):
m.init(d)
def fetcher_parse_save():
_checksum_cache.save_extras()
@@ -536,8 +536,8 @@ def fetcher_compare_revisions(d):
when bitbake was started and return true if they have changed.
"""
with dict(bb.persist_data.persist('BB_URI_HEADREVS', d)) as headrevs:
return headrevs != bb.fetch2.saved_headrevs
headrevs = dict(bb.persist_data.persist('BB_URI_HEADREVS', d))
return headrevs != bb.fetch2.saved_headrevs
def mirror_from_string(data):
mirrors = (data or "").replace('\\n',' ').split()
@@ -872,10 +872,7 @@ FETCH_EXPORT_VARS = ['HOME', 'PATH',
'AWS_PROFILE',
'AWS_ACCESS_KEY_ID',
'AWS_SECRET_ACCESS_KEY',
'AWS_ROLE_ARN',
'AWS_WEB_IDENTITY_TOKEN_FILE',
'AWS_DEFAULT_REGION',
'AWS_SESSION_TOKEN',
'GIT_CACHE_PATH',
'REMOTE_CONTAINERS_IPC',
'SSL_CERT_DIR']
@@ -943,10 +940,7 @@ def runfetchcmd(cmd, d, quiet=False, cleanup=None, log=None, workdir=None):
elif e.stderr:
output = "output:\n%s" % e.stderr
else:
if log:
output = "see logfile for output"
else:
output = "no output"
output = "no output"
error_message = "Fetch command %s failed with exit code %s, %s" % (e.command, e.exitcode, output)
except bb.process.CmdError as e:
error_message = "Fetch command %s could not be run:\n%s" % (e.command, e.msg)
@@ -1118,8 +1112,7 @@ def try_mirror_url(fetch, origud, ud, ld, check = False):
logger.debug("Mirror fetch failure for url %s (original url: %s)" % (ud.url, origud.url))
logger.debug(str(e))
try:
if ud.method.cleanup_upon_failure():
ud.method.clean(ud, ld)
ud.method.clean(ud, ld)
except UnboundLocalError:
pass
return False
@@ -1444,12 +1437,6 @@ class FetchMethod(object):
"""
return False
def cleanup_upon_failure(self):
"""
When a fetch fails, should clean() be called?
"""
return True
def verify_donestamp(self, ud, d):
"""
Verify the donestamp file
@@ -1606,7 +1593,7 @@ class FetchMethod(object):
if urlpath.find("/") != -1:
destdir = urlpath.rsplit("/", 1)[0] + '/'
bb.utils.mkdirhier("%s/%s" % (unpackdir, destdir))
cmd = 'cp --force --preserve=timestamps --no-dereference --recursive -H "%s" "%s"' % (file, destdir)
cmd = 'cp -fpPRH "%s" "%s"' % (file, destdir)
else:
urldata.unpack_tracer.unpack("archive-extract", unpackdir)
@@ -1662,13 +1649,13 @@ class FetchMethod(object):
if not hasattr(self, "_latest_revision"):
raise ParameterError("The fetcher for this URL does not support _latest_revision", ud.url)
with bb.persist_data.persist('BB_URI_HEADREVS', d) as revs:
key = self.generate_revision_key(ud, d, name)
try:
return revs[key]
except KeyError:
revs[key] = rev = self._latest_revision(ud, d, name)
return rev
revs = bb.persist_data.persist('BB_URI_HEADREVS', d)
key = self.generate_revision_key(ud, d, name)
try:
return revs[key]
except KeyError:
revs[key] = rev = self._latest_revision(ud, d, name)
return rev
def sortable_revision(self, ud, d, name):
latest_rev = self._build_revision(ud, d, name)
@@ -1895,7 +1882,7 @@ class Fetch(object):
logger.debug(str(e))
firsterr = e
# Remove any incomplete fetch
if not verified_stamp and m.cleanup_upon_failure():
if not verified_stamp:
m.clean(ud, self.d)
logger.debug("Trying MIRRORS")
mirrors = mirror_from_string(self.d.getVar('MIRRORS'))
@@ -1958,7 +1945,7 @@ class Fetch(object):
ret = m.try_mirrors(self, ud, self.d, mirrors, True)
if not ret:
raise FetchError("URL doesn't work", u)
raise FetchError("URL %s doesn't work" % u, u)
def unpack(self, root, urls=None):
"""


@@ -57,20 +57,16 @@ class GCP(FetchMethod):
Fetch urls using the GCP API.
Assumes localpath was called first.
"""
from google.api_core.exceptions import NotFound
logger.debug2(f"Trying to download gs://{ud.host}{ud.path} to {ud.localpath}")
if self.gcp_client is None:
self.get_gcp_client()
bb.fetch2.check_network_access(d, "blob.download_to_filename", f"gs://{ud.host}{ud.path}")
bb.fetch2.check_network_access(d, "gsutil stat", ud.url)
# Path sometimes has leading slash, so strip it
path = ud.path.lstrip("/")
blob = self.gcp_client.bucket(ud.host).blob(path)
try:
blob.download_to_filename(ud.localpath)
except NotFound:
raise FetchError("The GCP API threw a NotFound exception")
blob.download_to_filename(ud.localpath)
# Additional sanity checks copied from the wget class (although there
# are no known issues which mean these are required, treat the GCP API
@@ -92,7 +88,7 @@ class GCP(FetchMethod):
if self.gcp_client is None:
self.get_gcp_client()
bb.fetch2.check_network_access(d, "gcp_client.bucket(ud.host).blob(path).exists()", f"gs://{ud.host}{ud.path}")
bb.fetch2.check_network_access(d, "gsutil stat", ud.url)
# Path sometimes has leading slash, so strip it
path = ud.path.lstrip("/")


@@ -48,23 +48,10 @@ Supported SRC_URI options are:
instead of branch.
The default is "0", set nobranch=1 if needed.
- subpath
Limit the checkout to a specific subpath of the tree.
By default, checkout the whole tree, set subpath=<path> if needed
- destsuffix
The name of the path in which to place the checkout.
By default, the path is git/, set destsuffix=<suffix> if needed
- usehead
For local git:// urls to use the current branch HEAD as the revision for use with
AUTOREV. Implies nobranch.
- lfs
Enable the checkout to use LFS for large files. This will download all LFS files
in the download step, as the unpack step does not have network access.
The default is "1", set lfs=0 to skip.
"""
# Copyright (C) 2005 Richard Purdie
@@ -87,7 +74,6 @@ from contextlib import contextmanager
from bb.fetch2 import FetchMethod
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
from bb.fetch2 import trusted_network
sha1_re = re.compile(r'^[0-9a-f]{40}$')
@@ -150,9 +136,6 @@ class Git(FetchMethod):
def supports_checksum(self, urldata):
return False
def cleanup_upon_failure(self):
return False
def urldata_init(self, ud, d):
"""
init git specific variable within url data
@@ -262,7 +245,7 @@ class Git(FetchMethod):
for name in ud.names:
ud.unresolvedrev[name] = 'HEAD'
ud.basecmd = d.getVar("FETCHCMD_git") or "git -c gc.autoDetach=false -c core.pager=cat -c safe.bareRepository=all"
ud.basecmd = d.getVar("FETCHCMD_git") or "git -c gc.autoDetach=false -c core.pager=cat"
write_tarballs = d.getVar("BB_GENERATE_MIRROR_TARBALLS") or "0"
ud.write_tarballs = write_tarballs != "0" or ud.rebaseable
@@ -277,7 +260,7 @@ class Git(FetchMethod):
ud.unresolvedrev[name] = ud.revisions[name]
ud.revisions[name] = self.latest_revision(ud, d, name)
gitsrcname = '%s%s' % (ud.host.replace(':', '.'), ud.path.replace('/', '.').replace('*', '.').replace(' ','_').replace('(', '_').replace(')', '_'))
gitsrcname = '%s%s' % (ud.host.replace(':', '.'), ud.path.replace('/', '.').replace('*', '.').replace(' ','_'))
if gitsrcname.startswith('.'):
gitsrcname = gitsrcname[1:]
@@ -328,10 +311,7 @@ class Git(FetchMethod):
return ud.clonedir
def need_update(self, ud, d):
return self.clonedir_need_update(ud, d) \
or self.shallow_tarball_need_update(ud) \
or self.tarball_need_update(ud) \
or self.lfs_need_update(ud, d)
return self.clonedir_need_update(ud, d) or self.shallow_tarball_need_update(ud) or self.tarball_need_update(ud)
def clonedir_need_update(self, ud, d):
if not os.path.exists(ud.clonedir):
@@ -343,15 +323,6 @@ class Git(FetchMethod):
return True
return False
def lfs_need_update(self, ud, d):
if self.clonedir_need_update(ud, d):
return True
for name in ud.names:
if not self._lfs_objects_downloaded(ud, d, name, ud.clonedir):
return True
return False
def clonedir_need_shallow_revs(self, ud, d):
for rev in ud.shallow_revs:
try:
@@ -371,16 +342,6 @@ class Git(FetchMethod):
# is not possible
if bb.utils.to_boolean(d.getVar("BB_FETCH_PREMIRRORONLY")):
return True
# If the url is not in trusted network, that is, BB_NO_NETWORK is set to 0
# and BB_ALLOWED_NETWORKS does not contain the host that ud.url uses, then
# we need to try premirrors first as using upstream is destined to fail.
if not trusted_network(d, ud.url):
return True
# The following check ensures incremental fetching in downloads: the
# premirror might be old and not contain the new rev required, which would
# cause a total removal and a fresh clone. So if we can reach the network,
# we prefer upstream over the premirror, even though the premirror might
# contain the new rev.
if os.path.exists(ud.clonedir):
return False
return True
@@ -401,11 +362,7 @@ class Git(FetchMethod):
else:
tmpdir = tempfile.mkdtemp(dir=d.getVar('DL_DIR'))
runfetchcmd("tar -xzf %s" % ud.fullmirror, d, workdir=tmpdir)
output = runfetchcmd("%s remote" % ud.basecmd, d, quiet=True, workdir=ud.clonedir)
if 'mirror' in output:
runfetchcmd("%s remote rm mirror" % ud.basecmd, d, workdir=ud.clonedir)
runfetchcmd("%s remote add --mirror=fetch mirror %s" % (ud.basecmd, tmpdir), d, workdir=ud.clonedir)
fetch_cmd = "LANG=C %s fetch -f --update-head-ok --progress mirror " % (ud.basecmd)
fetch_cmd = "LANG=C %s fetch -f --progress %s " % (ud.basecmd, shlex.quote(tmpdir))
runfetchcmd(fetch_cmd, d, workdir=ud.clonedir)
repourl = self._get_repo_url(ud)
@@ -482,7 +439,7 @@ class Git(FetchMethod):
if missing_rev:
raise bb.fetch2.FetchError("Unable to find revision %s even from upstream" % missing_rev)
if self.lfs_need_update(ud, d):
if self._contains_lfs(ud, d, ud.clonedir) and self._need_lfs(ud):
# Unpack temporary working copy, use it to run 'git checkout' to force pre-fetching
# of all LFS blobs needed at the srcrev.
#
@@ -505,8 +462,8 @@ class Git(FetchMethod):
# Only do this if the unpack resulted in a .git/lfs directory being
# created; this only happens if at least one blob needed to be
# downloaded.
if os.path.exists(os.path.join(ud.destdir, ".git", "lfs")):
runfetchcmd("tar -cf - lfs | tar -xf - -C %s" % ud.clonedir, d, workdir="%s/.git" % ud.destdir)
if os.path.exists(os.path.join(tmpdir, "git", ".git", "lfs")):
runfetchcmd("tar -cf - lfs | tar -xf - -C %s" % ud.clonedir, d, workdir="%s/git/.git" % tmpdir)
def build_mirror_data(self, ud, d):
@@ -544,7 +501,7 @@ class Git(FetchMethod):
logger.info("Creating tarball of git repository")
with create_atomic(ud.fullmirror) as tfile:
mtime = runfetchcmd("{} log --all -1 --format=%cD".format(ud.basecmd), d,
mtime = runfetchcmd("git log --all -1 --format=%cD", d,
quiet=True, workdir=ud.clonedir)
runfetchcmd("tar -czf %s --owner oe:0 --group oe:0 --mtime \"%s\" ."
% (tfile, mtime), d, workdir=ud.clonedir)
@@ -672,8 +629,6 @@ class Git(FetchMethod):
raise bb.fetch2.FetchError("Repository %s has LFS content, install git-lfs on host to download (or set lfs=0 to ignore it)" % (repourl))
elif not need_lfs:
bb.note("Repository %s has LFS content but it is not being fetched" % (repourl))
else:
runfetchcmd("%s lfs install --local" % ud.basecmd, d, workdir=destdir)
if not ud.nocheckout:
if subpath:
@@ -725,35 +680,6 @@ class Git(FetchMethod):
raise bb.fetch2.FetchError("The command '%s' gave output with more then 1 line unexpectedly, output: '%s'" % (cmd, output))
return output.split()[0] != "0"
def _lfs_objects_downloaded(self, ud, d, name, wd):
"""
Verifies whether the LFS objects for requested revisions have already been downloaded
"""
# Bail out early if this repository doesn't use LFS
if not self._need_lfs(ud) or not self._contains_lfs(ud, d, wd):
return True
# The Git LFS specification defines the LFS folder layout ([1]), so it should be safe to check for file
# existence.
# [1] https://github.com/git-lfs/git-lfs/blob/main/docs/spec.md#intercepting-git
cmd = "%s lfs ls-files -l %s" \
% (ud.basecmd, ud.revisions[name])
output = runfetchcmd(cmd, d, quiet=True, workdir=wd).rstrip()
# Do not do any further matching if no objects are managed by LFS
if not output:
return True
# Match all lines beginning with the hexadecimal OID
oid_regex = re.compile("^(([a-fA-F0-9]{2})([a-fA-F0-9]{2})[A-Fa-f0-9]+)")
for line in output.split("\n"):
oid = re.search(oid_regex, line)
if not oid:
bb.warn("git lfs ls-files output '%s' did not match expected format." % line)
if not os.path.exists(os.path.join(wd, "lfs", "objects", oid.group(2), oid.group(3), oid.group(1))):
return False
return True
def _need_lfs(self, ud):
return ud.parm.get("lfs", "1") == "1"
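The existence check in _lfs_objects_downloaded relies on the on-disk layout from the Git LFS specification: an object with a given OID lives under lfs/objects/<first two hex chars>/<next two>/<full OID> inside the directory passed in as wd. A standalone sketch with a hypothetical OID:

import os

oid = "ab12cd34" + "0" * 56                            # hypothetical 64-character object id
path = os.path.join("lfs", "objects", oid[0:2], oid[2:4], oid)
# -> lfs/objects/ab/12/ab12cd34000... which is the path the check above tests for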
@@ -762,11 +688,8 @@ class Git(FetchMethod):
Check if the repository has 'lfs' (large file) content
"""
if ud.nobranch:
# If no branch is specified, use the current git commit
refname = self._build_revision(ud, d, ud.names[0])
elif wd == ud.clonedir:
# The bare clonedir doesn't use the remote names; it has the branch immediately.
# The bare clonedir doesn't use the remote names; it has the branch immediately.
if wd == ud.clonedir:
refname = ud.branches[ud.names[0]]
else:
refname = "origin/%s" % ud.branches[ud.names[0]]
@@ -871,42 +794,38 @@ class Git(FetchMethod):
"""
pupver = ('', '')
tagregex = re.compile(d.getVar('UPSTREAM_CHECK_GITTAGREGEX') or r"(?P<pver>([0-9][\.|_]?)+)")
try:
output = self._lsremote(ud, d, "refs/tags/*")
except (bb.fetch2.FetchError, bb.fetch2.NetworkAccess) as e:
bb.note("Could not list remote: %s" % str(e))
return pupver
rev_tag_re = re.compile(r"([0-9a-f]{40})\s+refs/tags/(.*)")
pver_re = re.compile(d.getVar('UPSTREAM_CHECK_GITTAGREGEX') or r"(?P<pver>([0-9][\.|_]?)+)")
nonrel_re = re.compile(r"(alpha|beta|rc|final)+")
verstring = ""
revision = ""
for line in output.split("\n"):
if not line:
break
m = rev_tag_re.match(line)
if not m:
continue
(revision, tag) = m.groups()
tag_head = line.split("/")[-1]
# Ignore non-released branches
if nonrel_re.search(tag):
m = re.search(r"(alpha|beta|rc|final)+", tag_head)
if m:
continue
# search for version in the line
m = pver_re.search(tag)
if not m:
tag = tagregex.search(tag_head)
if tag is None:
continue
pver = m.group('pver').replace("_", ".")
tag = tag.group('pver')
tag = tag.replace("_", ".")
if verstring and bb.utils.vercmp(("0", pver, ""), ("0", verstring, "")) < 0:
if verstring and bb.utils.vercmp(("0", tag, ""), ("0", verstring, "")) < 0:
continue
verstring = pver
verstring = tag
revision = line.split()[0]
pupver = (verstring, revision)
return pupver
@@ -926,8 +845,9 @@ class Git(FetchMethod):
commits = None
else:
if not os.path.exists(rev_file) or not os.path.getsize(rev_file):
from pipes import quote
commits = bb.fetch2.runfetchcmd(
"git rev-list %s -- | wc -l" % shlex.quote(rev),
"git rev-list %s -- | wc -l" % quote(rev),
d, quiet=True).strip().lstrip('0')
if commits:
open(rev_file, "w").write("%d\n" % int(commits))


@@ -123,7 +123,7 @@ class GitSM(Git):
url += ";name=%s" % module
url += ";subpath=%s" % module
url += ";nobranch=1"
url += ";lfs=%s" % ("1" if self._need_lfs(ud) else "0")
url += ";lfs=%s" % self._need_lfs(ud)
# Note that adding "user=" here to give credentials to the
# submodule is not supported. Since using SRC_URI to give git://
# URL a password is not supported, one has to use one of the
@@ -147,19 +147,6 @@ class GitSM(Git):
return submodules != []
def call_process_submodules(self, ud, d, extra_check, subfunc):
# If we're using a shallow mirror tarball it needs to be
# unpacked temporarily so that we can examine the .gitmodules file
if ud.shallow and os.path.exists(ud.fullshallow) and extra_check:
tmpdir = tempfile.mkdtemp(dir=d.getVar("DL_DIR"))
try:
runfetchcmd("tar -xzf %s" % ud.fullshallow, d, workdir=tmpdir)
self.process_submodules(ud, tmpdir, subfunc, d)
finally:
shutil.rmtree(tmpdir)
else:
self.process_submodules(ud, ud.clonedir, subfunc, d)
def need_update(self, ud, d):
if Git.need_update(self, ud, d):
return True
@@ -177,7 +164,15 @@ class GitSM(Git):
logger.error('gitsm: submodule update check failed: %s %s' % (type(e).__name__, str(e)))
need_update_result = True
self.call_process_submodules(ud, d, not os.path.exists(ud.clonedir), need_update_submodule)
# If we're using a shallow mirror tarball it needs to be unpacked
# temporarily so that we can examine the .gitmodules file
if ud.shallow and os.path.exists(ud.fullshallow) and not os.path.exists(ud.clonedir):
tmpdir = tempfile.mkdtemp(dir=d.getVar("DL_DIR"))
runfetchcmd("tar -xzf %s" % ud.fullshallow, d, workdir=tmpdir)
self.process_submodules(ud, tmpdir, need_update_submodule, d)
shutil.rmtree(tmpdir)
else:
self.process_submodules(ud, ud.clonedir, need_update_submodule, d)
if need_update_list:
logger.debug('gitsm: Submodules requiring update: %s' % (' '.join(need_update_list)))
@@ -200,7 +195,16 @@ class GitSM(Git):
raise
Git.download(self, ud, d)
self.call_process_submodules(ud, d, self.need_update(ud, d), download_submodule)
# If we're using a shallow mirror tarball it needs to be unpacked
# temporarily so that we can examine the .gitmodules file
if ud.shallow and os.path.exists(ud.fullshallow) and self.need_update(ud, d):
tmpdir = tempfile.mkdtemp(dir=d.getVar("DL_DIR"))
runfetchcmd("tar -xzf %s" % ud.fullshallow, d, workdir=tmpdir)
self.process_submodules(ud, tmpdir, download_submodule, d)
shutil.rmtree(tmpdir)
else:
self.process_submodules(ud, ud.clonedir, download_submodule, d)
def unpack(self, ud, destdir, d):
def unpack_submodules(ud, url, module, modpath, workdir, d):
@@ -243,24 +247,12 @@ class GitSM(Git):
ret = self.process_submodules(ud, ud.destdir, unpack_submodules, d)
if not ud.bareclone and ret:
cmdprefix = ""
# Avoid LFS smudging (replacing the LFS pointers with the actual content) when LFS shouldn't be used but git-lfs is installed.
if not self._need_lfs(ud):
cmdprefix = "GIT_LFS_SKIP_SMUDGE=1 "
runfetchcmd("%s%s submodule update --recursive --no-fetch" % (cmdprefix, ud.basecmd), d, quiet=True, workdir=ud.destdir)
def clean(self, ud, d):
def clean_submodule(ud, url, module, modpath, workdir, d):
url += ";bareclone=1;nobranch=1"
try:
newfetch = Fetch([url], d, cache=False)
newfetch.clean()
except Exception as e:
logger.warning('gitsm: submodule clean failed: %s %s' % (type(e).__name__, str(e)))
self.call_process_submodules(ud, d, True, clean_submodule)
# Clean top git dir
Git.clean(self, ud, d)
# All submodules should already be downloaded and configured in the tree. This simply
# sets up the configuration and checks out the files. The main project config should
# remain unmodified, and no download from the internet should occur. As such, lfs smudge
# should also be skipped as these files were already smudged in the fetch stage if lfs
# was enabled.
runfetchcmd("GIT_LFS_SKIP_SMUDGE=1 %s submodule update --recursive --no-fetch" % (ud.basecmd), d, quiet=True, workdir=ud.destdir)
def implicit_urldata(self, ud, d):
import shutil, subprocess, tempfile
@@ -271,6 +263,14 @@ class GitSM(Git):
newfetch = Fetch([url], d, cache=False)
urldata.extend(newfetch.expanded_urldata())
self.call_process_submodules(ud, d, ud.method.need_update(ud, d), add_submodule)
# If we're using a shallow mirror tarball it needs to be unpacked
# temporarily so that we can examine the .gitmodules file
if ud.shallow and os.path.exists(ud.fullshallow) and ud.method.need_update(ud, d):
tmpdir = tempfile.mkdtemp(dir=d.getVar("DL_DIR"))
subprocess.check_call("tar -xzf %s" % ud.fullshallow, cwd=tmpdir, shell=True)
self.process_submodules(ud, tmpdir, add_submodule, d)
shutil.rmtree(tmpdir)
else:
self.process_submodules(ud, ud.clonedir, add_submodule, d)
return urldata

View File

@@ -87,10 +87,7 @@ class Wget(FetchMethod):
if not ud.localfile:
ud.localfile = d.expand(urllib.parse.unquote(ud.host + ud.path).replace("/", "."))
self.basecmd = d.getVar("FETCHCMD_wget") or "/usr/bin/env wget -t 2 -T 100"
if ud.type == 'ftp' or ud.type == 'ftps':
self.basecmd += " --passive-ftp"
self.basecmd = d.getVar("FETCHCMD_wget") or "/usr/bin/env wget -t 2 -T 30 --passive-ftp"
if not self.check_certs(d):
self.basecmd += " --no-check-certificate"
@@ -108,8 +105,7 @@ class Wget(FetchMethod):
fetchcmd = self.basecmd
dldir = os.path.realpath(d.getVar("DL_DIR"))
localpath = os.path.join(dldir, ud.localfile) + ".tmp"
localpath = os.path.join(d.getVar("DL_DIR"), ud.localfile) + ".tmp"
bb.utils.mkdirhier(os.path.dirname(localpath))
fetchcmd += " -O %s" % shlex.quote(localpath)
@@ -129,21 +125,12 @@ class Wget(FetchMethod):
uri = ud.url.split(";")[0]
if os.path.exists(ud.localpath):
# file exists, but we didn't complete it... trying again...
fetchcmd += " -c -P " + dldir + " '" + uri + "'"
fetchcmd += d.expand(" -c -P ${DL_DIR} '%s'" % uri)
else:
fetchcmd += " -P " + dldir + " '" + uri + "'"
fetchcmd += d.expand(" -P ${DL_DIR} '%s'" % uri)
self._runwget(ud, d, fetchcmd, False)
# Sanity check since wget can pretend it succeeded when it didn't
# Also, this used to happen if sourceforge sent us to the mirror page
if not os.path.exists(localpath):
raise FetchError("The fetch command returned success for url %s but %s doesn't exist?!" % (uri, localpath), uri)
if os.path.getsize(localpath) == 0:
os.remove(localpath)
raise FetchError("The fetch of %s resulted in a zero size file?! Deleting and failing since this isn't right." % (uri), uri)
# Try and verify any checksum now, meaning if it isn't correct, we don't remove the
# original file, which might be a race (imagine two recipes referencing the same
# source, one with an incorrect checksum)
@@ -153,6 +140,15 @@ class Wget(FetchMethod):
# Our lock prevents multiple writers but mirroring code may grab incomplete files
os.rename(localpath, localpath[:-4])
# Sanity check since wget can pretend it succeeded when it didn't
# Also, this used to happen if sourceforge sent us to the mirror page
if not os.path.exists(ud.localpath):
raise FetchError("The fetch command returned success for url %s but %s doesn't exist?!" % (uri, ud.localpath), uri)
if os.path.getsize(ud.localpath) == 0:
os.remove(ud.localpath)
raise FetchError("The fetch of %s resulted in a zero size file?! Deleting and failing since this isn't right." % (uri), uri)
return True
def checkstatus(self, fetch, ud, d, try_again=True):
@@ -344,11 +340,8 @@ class Wget(FetchMethod):
opener = urllib.request.build_opener(*handlers)
try:
parts = urllib.parse.urlparse(ud.url.split(";")[0])
if parts.query:
uri = "{}://{}{}?{}".format(parts.scheme, parts.netloc, parts.path, parts.query)
else:
uri = "{}://{}{}".format(parts.scheme, parts.netloc, parts.path)
uri_base = ud.url.split(";")[0]
uri = "{}://{}{}".format(urllib.parse.urlparse(uri_base).scheme, ud.host, ud.path)
r = urllib.request.Request(uri)
r.get_method = lambda: "HEAD"
# Some servers (FusionForge, as used on Alioth) require that the
@@ -374,7 +367,7 @@ class Wget(FetchMethod):
except (FileNotFoundError, netrc.NetrcParseError):
pass
with opener.open(r, timeout=100) as response:
with opener.open(r, timeout=30) as response:
pass
except (urllib.error.URLError, ConnectionResetError, TimeoutError) as e:
if try_again:
@@ -382,7 +375,7 @@ class Wget(FetchMethod):
return self.checkstatus(fetch, ud, d, False)
else:
# debug for now to avoid spamming the logs in e.g. remote sstate searches
logger.debug2("checkstatus() urlopen failed for %s: %s" % (uri,e))
logger.debug2("checkstatus() urlopen failed: %s" % e)
return False
return True


@@ -217,9 +217,7 @@ def create_bitbake_parser():
"execution. The SIGNATURE_HANDLER parameter is passed to the "
"handler. Two common values are none and printdiff but the handler "
"may define more/less. none means only dump the signature, printdiff"
" means recursively compare the dumped signature with the most recent"
" one in a local build or sstate cache (can be used to find out why tasks re-run"
" when that is not expected)")
" means compare the dumped signature with the cached one.")
exec_group.add_argument("--revisions-changed", action="store_true",
help="Set the exit code depending on whether upstream floating "


@@ -89,6 +89,10 @@ class BBLogFormatter(logging.Formatter):
msg = logging.Formatter.format(self, record)
if hasattr(record, 'bb_exc_formatted'):
msg += '\n' + ''.join(record.bb_exc_formatted)
elif hasattr(record, 'bb_exc_info'):
etype, value, tb = record.bb_exc_info
formatted = bb.exceptions.format_exception(etype, value, tb, limit=5)
msg += '\n' + ''.join(formatted)
return msg
def colorize(self, record):
@@ -226,7 +230,7 @@ def logger_create(name, output=sys.stderr, level=logging.INFO, preserve_handlers
console = logging.StreamHandler(output)
console.addFilter(bb.msg.LogFilterShowOnce())
format = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
if color == 'always' or (color == 'auto' and output.isatty() and os.environ.get('NO_COLOR', '') == ''):
if color == 'always' or (color == 'auto' and output.isatty()):
format.enable_color()
console.setFormatter(format)
if preserve_handlers:


@@ -49,23 +49,20 @@ class SkipPackage(SkipRecipe):
__mtime_cache = {}
def cached_mtime(f):
if f not in __mtime_cache:
res = os.stat(f)
__mtime_cache[f] = (res.st_mtime_ns, res.st_size, res.st_ino)
__mtime_cache[f] = os.stat(f)[stat.ST_MTIME]
return __mtime_cache[f]
def cached_mtime_noerror(f):
if f not in __mtime_cache:
try:
res = os.stat(f)
__mtime_cache[f] = (res.st_mtime_ns, res.st_size, res.st_ino)
__mtime_cache[f] = os.stat(f)[stat.ST_MTIME]
except OSError:
return 0
return __mtime_cache[f]
def check_mtime(f, mtime):
try:
res = os.stat(f)
current_mtime = (res.st_mtime_ns, res.st_size, res.st_ino)
current_mtime = os.stat(f)[stat.ST_MTIME]
__mtime_cache[f] = current_mtime
except OSError:
current_mtime = 0
@@ -73,8 +70,7 @@ def check_mtime(f, mtime):
def update_mtime(f):
try:
res = os.stat(f)
__mtime_cache[f] = (res.st_mtime_ns, res.st_size, res.st_ino)
__mtime_cache[f] = os.stat(f)[stat.ST_MTIME]
except OSError:
if f in __mtime_cache:
del __mtime_cache[f]


@@ -211,12 +211,10 @@ class ExportFuncsNode(AstNode):
def eval(self, data):
sentinel = " # Export function set\n"
for func in self.n:
calledfunc = self.classname + "_" + func
basevar = data.getVar(func, False)
if basevar and sentinel not in basevar:
if data.getVar(func, False) and not data.getVarFlag(func, 'export_func', False):
continue
if data.getVar(func, False):
@@ -233,11 +231,12 @@ class ExportFuncsNode(AstNode):
data.setVarFlag(func, "lineno", 1)
if data.getVarFlag(calledfunc, "python", False):
data.setVar(func, sentinel + " bb.build.exec_func('" + calledfunc + "', d)\n", parsing=True)
data.setVar(func, " bb.build.exec_func('" + calledfunc + "', d)\n", parsing=True)
else:
if "-" in self.classname:
bb.fatal("The classname %s contains a dash character and is calling an sh function %s using EXPORT_FUNCTIONS. Since a dash is illegal in sh function names, this cannot work, please rename the class or don't use EXPORT_FUNCTIONS." % (self.classname, calledfunc))
data.setVar(func, sentinel + " " + calledfunc + "\n", parsing=True)
data.setVar(func, " " + calledfunc + "\n", parsing=True)
data.setVarFlag(func, 'export_func', '1')
class AddTaskNode(AstNode):
def __init__(self, filename, lineno, func, before, after):
@@ -314,16 +313,6 @@ class InheritNode(AstNode):
def eval(self, data):
bb.parse.BBHandler.inherit(self.classes, self.filename, self.lineno, data)
class InheritDeferredNode(AstNode):
def __init__(self, filename, lineno, classes):
AstNode.__init__(self, filename, lineno)
self.inherit = (classes, filename, lineno)
def eval(self, data):
inherits = data.getVar('__BBDEFINHERITS', False) or []
inherits.append(self.inherit)
data.setVar('__BBDEFINHERITS', inherits)
def handleInclude(statements, filename, lineno, m, force):
statements.append(IncludeNode(filename, lineno, m.group(1), force))
@@ -374,10 +363,6 @@ def handleInherit(statements, filename, lineno, m):
classes = m.group(1)
statements.append(InheritNode(filename, lineno, classes))
def handleInheritDeferred(statements, filename, lineno, m):
classes = m.group(1)
statements.append(InheritDeferredNode(filename, lineno, classes))
def runAnonFuncs(d):
code = []
for funcname in d.getVar("__BBANONFUNCS", False) or []:
@@ -391,14 +376,6 @@ def finalize(fn, d, variant = None):
if d.getVar("_FAILPARSINGERRORHANDLED", False) == True:
raise bb.BBHandledException()
while True:
inherits = d.getVar('__BBDEFINHERITS', False) or []
if not inherits:
break
inherit, filename, lineno = inherits.pop(0)
d.setVar('__BBDEFINHERITS', inherits)
bb.parse.BBHandler.inherit(inherit, filename, lineno, d, deferred=True)
for var in d.getVar('__BBHANDLERS', False) or []:
# try to add the handler
handlerfn = d.getVarFlag(var, "filename", False)
@@ -487,9 +464,7 @@ def multi_finalize(fn, d):
d.setVar("BBEXTENDVARIANT", variantmap[name])
else:
d.setVar("PN", "%s-%s" % (pn, name))
inherits = d.getVar('__BBDEFINHERITS', False) or []
inherits.append((extendedmap[name], fn, 0))
d.setVar('__BBDEFINHERITS', inherits)
bb.parse.BBHandler.inherit(extendedmap[name], fn, 0, d)
safe_d.setVar("BBCLASSEXTEND", extended)
_create_variants(datastores, extendedmap.keys(), extendfunc, onlyfinalise)


@@ -21,7 +21,6 @@ from .ConfHandler import include, init
__func_start_regexp__ = re.compile(r"(((?P<py>python(?=(\s|\()))|(?P<fr>fakeroot(?=\s)))\s*)*(?P<func>[\w\.\-\+\{\}\$:]+)?\s*\(\s*\)\s*{$" )
__inherit_regexp__ = re.compile(r"inherit\s+(.+)" )
__inherit_def_regexp__ = re.compile(r"inherit_defer\s+(.+)" )
__export_func_regexp__ = re.compile(r"EXPORT_FUNCTIONS\s+(.+)" )
__addtask_regexp__ = re.compile(r"addtask\s+(?P<func>\w+)\s*((before\s*(?P<before>((.*(?=after))|(.*))))|(after\s*(?P<after>((.*(?=before))|(.*)))))*")
__deltask_regexp__ = re.compile(r"deltask\s+(.+)")
@@ -34,7 +33,6 @@ __infunc__ = []
__inpython__ = False
__body__ = []
__classname__ = ""
__residue__ = []
cached_statements = {}
@@ -42,10 +40,8 @@ def supports(fn, d):
"""Return True if fn has a supported extension"""
return os.path.splitext(fn)[-1] in [".bb", ".bbclass", ".inc"]
def inherit(files, fn, lineno, d, deferred=False):
def inherit(files, fn, lineno, d):
__inherit_cache = d.getVar('__inherit_cache', False) or []
#if "${" in files and not deferred:
# bb.warn("%s:%s has non deferred conditional inherit" % (fn, lineno))
files = d.expand(files).split()
for file in files:
classtype = d.getVar("__bbclasstype", False)
@@ -81,7 +77,7 @@ def inherit(files, fn, lineno, d, deferred=False):
__inherit_cache = d.getVar('__inherit_cache', False) or []
def get_statements(filename, absolute_filename, base_name):
global cached_statements, __residue__, __body__
global cached_statements
try:
return cached_statements[absolute_filename]
@@ -101,11 +97,6 @@ def get_statements(filename, absolute_filename, base_name):
# add a blank line to close out any python definition
feeder(lineno, "", filename, base_name, statements, eof=True)
if __residue__:
raise ParseError("Unparsed lines %s: %s" % (filename, str(__residue__)), filename, lineno)
if __body__:
raise ParseError("Unparsed lines from unclosed function %s: %s" % (filename, str(__body__)), filename, lineno)
if filename.endswith(".bbclass") or filename.endswith(".inc"):
cached_statements[absolute_filename] = statements
return statements
@@ -274,11 +265,6 @@ def feeder(lineno, s, fn, root, statements, eof=False):
ast.handleInherit(statements, fn, lineno, m)
return
m = __inherit_def_regexp__.match(s)
if m:
ast.handleInheritDeferred(statements, fn, lineno, m)
return
return ConfHandler.feeder(lineno, s, fn, statements, conffile=False)
# Add us to the handlers list


@@ -154,7 +154,6 @@ class SQLTable(collections.abc.MutableMapping):
def __exit__(self, *excinfo):
self.connection.__exit__(*excinfo)
self.connection.close()
@_Decorators.retry()
@_Decorators.transaction


@@ -14,7 +14,6 @@ import os
import sys
import stat
import errno
import itertools
import logging
import re
import bb
@@ -158,7 +157,7 @@ class RunQueueScheduler(object):
(mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
self.stamps[tid] = bb.parse.siggen.stampfile_mcfn(taskname, taskfn, extrainfo=False)
if tid in self.rq.runq_buildable:
self.buildable.add(tid)
self.buildable.append(tid)
self.rev_prio_map = None
self.is_pressure_usable()
@@ -221,16 +220,6 @@ class RunQueueScheduler(object):
bb.note("Pressure status changed to CPU: %s, IO: %s, Mem: %s (CPU: %s/%s, IO: %s/%s, Mem: %s/%s) - using %s/%s bitbake threads" % (pressure_state + pressure_values + (len(self.rq.runq_running.difference(self.rq.runq_complete)), self.rq.number_tasks)))
self.pressure_state = pressure_state
return (exceeds_cpu_pressure or exceeds_io_pressure or exceeds_memory_pressure)
elif self.rq.max_loadfactor:
limit = False
loadfactor = float(os.getloadavg()[0]) / os.cpu_count()
# bb.warn("Comparing %s to %s" % (loadfactor, self.rq.max_loadfactor))
if loadfactor > self.rq.max_loadfactor:
limit = True
if hasattr(self, "loadfactor_limit") and limit != self.loadfactor_limit:
bb.note("Load average limiting set to %s as load average: %s - using %s/%s bitbake threads" % (limit, loadfactor, len(self.rq.runq_running.difference(self.rq.runq_complete)), self.rq.number_tasks))
self.loadfactor_limit = limit
return limit
return False
def next_buildable_task(self):
@@ -281,11 +270,11 @@ class RunQueueScheduler(object):
best = None
bestprio = None
for tid in buildable:
taskname = taskname_from_tid(tid)
if taskname in skip_buildable and skip_buildable[taskname] >= int(self.skip_maxthread[taskname]):
continue
prio = self.rev_prio_map[tid]
if bestprio is None or bestprio > prio:
taskname = taskname_from_tid(tid)
if taskname in skip_buildable and skip_buildable[taskname] >= int(self.skip_maxthread[taskname]):
continue
stamp = self.stamps[tid]
if stamp in self.rq.build_stamps.values():
continue
@@ -729,8 +718,6 @@ class RunQueueData:
if mc == frommc:
fn = taskData[mcdep].build_targets[pn][0]
newdep = '%s:%s' % (fn,deptask)
if newdep not in taskData[mcdep].taskentries:
bb.fatal("Task mcdepends on non-existent task %s" % (newdep))
taskData[mc].taskentries[tid].tdepends.append(newdep)
for mc in taskData:
@@ -1017,32 +1004,25 @@ class RunQueueData:
# Handle --runall
if self.cooker.configuration.runall:
# re-run the mark_active and then drop unused tasks from new list
reduced_tasklist = set(self.runtaskentries.keys())
for tid in list(self.runtaskentries.keys()):
if tid not in runq_build:
reduced_tasklist.remove(tid)
runq_build = {}
runall_tids = set()
added = True
while added:
reduced_tasklist = set(self.runtaskentries.keys())
for tid in list(self.runtaskentries.keys()):
if tid not in runq_build:
reduced_tasklist.remove(tid)
runq_build = {}
orig = runall_tids
for task in self.cooker.configuration.runall:
if not task.startswith("do_"):
task = "do_{0}".format(task)
runall_tids = set()
for task in self.cooker.configuration.runall:
if not task.startswith("do_"):
task = "do_{0}".format(task)
for tid in reduced_tasklist:
wanttid = "{0}:{1}".format(fn_from_tid(tid), task)
if wanttid in self.runtaskentries:
runall_tids.add(wanttid)
for tid in reduced_tasklist:
wanttid = "{0}:{1}".format(fn_from_tid(tid), task)
if wanttid in self.runtaskentries:
runall_tids.add(wanttid)
for tid in list(runall_tids):
mark_active(tid, 1)
self.target_tids.append(tid)
if self.cooker.configuration.force:
invalidate_task(tid, False)
added = runall_tids - orig
for tid in list(runall_tids):
mark_active(tid, 1)
if self.cooker.configuration.force:
invalidate_task(tid, False)
delcount = set()
for tid in list(self.runtaskentries.keys()):
@@ -1276,41 +1256,27 @@ class RunQueueData:
bb.parse.siggen.set_setscene_tasks(self.runq_setscene_tids)
starttime = time.time()
lasttime = starttime
# Iterate over the task list and call into the siggen code
dealtwith = set()
todeal = set(self.runtaskentries)
while todeal:
ready = set()
for tid in todeal.copy():
if not (self.runtaskentries[tid].depends - dealtwith):
self.runtaskentries[tid].taskhash_deps = bb.parse.siggen.prep_taskhash(tid, self.runtaskentries[tid].depends, self.dataCaches)
# get_taskhash for a given tid *must* be called before get_unihash* below
self.runtaskentries[tid].hash = bb.parse.siggen.get_taskhash(tid, self.runtaskentries[tid].depends, self.dataCaches)
ready.add(tid)
unihashes = bb.parse.siggen.get_unihashes(ready)
for tid in ready:
dealtwith.add(tid)
todeal.remove(tid)
self.runtaskentries[tid].unihash = unihashes[tid]
bb.event.check_for_interrupts(self.cooker.data)
if time.time() > (lasttime + 30):
lasttime = time.time()
hashequiv_logger.verbose("Initial setup loop progress: %s of %s in %s" % (len(todeal), len(self.runtaskentries), lasttime - starttime))
endtime = time.time()
if (endtime-starttime > 60):
hashequiv_logger.verbose("Initial setup loop took: %s" % (endtime-starttime))
dealtwith.add(tid)
todeal.remove(tid)
self.prepare_task_hash(tid)
bb.event.check_for_interrupts(self.cooker.data)
bb.parse.siggen.writeout_file_checksum_cache()
#self.dump_data()
return len(self.runtaskentries)
def prepare_task_hash(self, tid):
bb.parse.siggen.prep_taskhash(tid, self.runtaskentries[tid].depends, self.dataCaches)
self.runtaskentries[tid].hash = bb.parse.siggen.get_taskhash(tid, self.runtaskentries[tid].depends, self.dataCaches)
self.runtaskentries[tid].unihash = bb.parse.siggen.get_unihash(tid)
def dump_data(self):
"""
Dump some debug information on the internal data structures
@@ -1352,16 +1318,6 @@ class RunQueue:
self.worker = {}
self.fakeworker = {}
@staticmethod
def send_pickled_data(worker, data, name):
msg = bytearray()
msg.extend(b"<" + name.encode() + b">")
pickled_data = pickle.dumps(data)
msg.extend(len(pickled_data).to_bytes(4, 'big'))
msg.extend(pickled_data)
msg.extend(b"</" + name.encode() + b">")
worker.stdin.write(msg)
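send_pickled_data frames each message as an opening <name> tag, a 4-byte big-endian payload length, the pickled payload, and a closing </name> tag. A standalone sketch (not the actual bitbake-worker reader) of decoding one such frame from a bytes buffer:

import pickle

def decode_frame_sketch(buf):
    # buf is assumed to start at a frame boundary produced by send_pickled_data
    end_tag = buf.index(b">")
    name = buf[1:end_tag].decode()
    length = int.from_bytes(buf[end_tag + 1:end_tag + 5], "big")
    payload = buf[end_tag + 5:end_tag + 5 + length]
    return name, pickle.loads(payload)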
def _start_worker(self, mc, fakeroot = False, rqexec = None):
logger.debug("Starting bitbake-worker")
magic = "decafbad"
@@ -1376,7 +1332,7 @@ class RunQueue:
fakerootcmd = shlex.split(mcdata.getVar("FAKEROOTCMD"))
fakerootenv = (mcdata.getVar("FAKEROOTBASEENV") or "").split()
env = os.environ.copy()
for key, value in (var.split('=',1) for var in fakerootenv):
for key, value in (var.split('=') for var in fakerootenv):
env[key] = value
worker = subprocess.Popen(fakerootcmd + [sys.executable, workerscript, magic], stdout=subprocess.PIPE, stdin=subprocess.PIPE, env=env)
fakerootlogs = self.rqdata.dataCaches[mc].fakerootlogs
@@ -1399,9 +1355,9 @@ class RunQueue:
"umask" : self.cfgData.getVar("BB_DEFAULT_UMASK"),
}
RunQueue.send_pickled_data(worker, self.cooker.configuration, "cookerconfig")
RunQueue.send_pickled_data(worker, self.cooker.extraconfigdata, "extraconfigdata")
RunQueue.send_pickled_data(worker, workerdata, "workerdata")
worker.stdin.write(b"<cookerconfig>" + pickle.dumps(self.cooker.configuration) + b"</cookerconfig>")
worker.stdin.write(b"<extraconfigdata>" + pickle.dumps(self.cooker.extraconfigdata) + b"</extraconfigdata>")
worker.stdin.write(b"<workerdata>" + pickle.dumps(workerdata) + b"</workerdata>")
worker.stdin.flush()
return RunQueueWorker(worker, workerpipe)
@@ -1411,7 +1367,7 @@ class RunQueue:
return
logger.debug("Teardown for bitbake-worker")
try:
RunQueue.send_pickled_data(worker.process, b"", "quit")
worker.process.stdin.write(b"<quit></quit>")
worker.process.stdin.flush()
worker.process.stdin.close()
except IOError:
@@ -1423,12 +1379,12 @@ class RunQueue:
continue
worker.pipe.close()
def start_worker(self, rqexec):
def start_worker(self):
if self.worker:
self.teardown_workers()
self.teardown = False
for mc in self.rqdata.dataCaches:
self.worker[mc] = self._start_worker(mc, False, rqexec)
self.worker[mc] = self._start_worker(mc)
def start_fakeworker(self, rqexec, mc):
if not mc in self.fakeworker:
@@ -1588,9 +1544,6 @@ class RunQueue:
('bb.event.HeartbeatEvent',), data=self.cfgData)
self.dm_event_handler_registered = True
self.rqdata.init_progress_reporter.next_stage()
self.rqexe = RunQueueExecute(self)
dump = self.cooker.configuration.dump_signatures
if dump:
self.rqdata.init_progress_reporter.finish()
@@ -1602,8 +1555,10 @@ class RunQueue:
self.state = runQueueComplete
if self.state is runQueueSceneInit:
self.start_worker(self.rqexe)
self.rqdata.init_progress_reporter.finish()
self.rqdata.init_progress_reporter.next_stage()
self.start_worker()
self.rqdata.init_progress_reporter.next_stage()
self.rqexe = RunQueueExecute(self)
# If we don't have any setscene functions, skip execution
if not self.rqdata.runq_setscene_tids:
@@ -1718,17 +1673,6 @@ class RunQueue:
return
def print_diffscenetasks(self):
def get_root_invalid_tasks(task, taskdepends, valid, noexec, visited_invalid):
invalidtasks = []
for t in taskdepends[task].depends:
if t not in valid and t not in visited_invalid:
invalidtasks.extend(get_root_invalid_tasks(t, taskdepends, valid, noexec, visited_invalid))
visited_invalid.add(t)
direct_invalid = [t for t in taskdepends[task].depends if t not in valid]
if not direct_invalid and task not in noexec:
invalidtasks = [task]
return invalidtasks
noexec = []
tocheck = set()
@@ -1762,49 +1706,46 @@ class RunQueue:
valid_new.add(dep)
invalidtasks = set()
for tid in self.rqdata.runtaskentries:
if tid not in valid_new and tid not in noexec:
invalidtasks.add(tid)
toptasks = set(["{}:{}".format(t[3], t[2]) for t in self.rqdata.targets])
for tid in toptasks:
found = set()
processed = set()
for tid in invalidtasks:
toprocess = set([tid])
while toprocess:
next = set()
visited_invalid = set()
for t in toprocess:
if t not in valid_new and t not in noexec:
invalidtasks.update(get_root_invalid_tasks(t, self.rqdata.runtaskentries, valid_new, noexec, visited_invalid))
continue
if t in self.rqdata.runq_setscene_tids:
for dep in self.rqexe.sqdata.sq_deps[t]:
next.add(dep)
continue
for dep in self.rqdata.runtaskentries[t].depends:
next.add(dep)
if dep in invalidtasks:
found.add(tid)
if dep not in processed:
processed.add(dep)
next.add(dep)
toprocess = next
if tid in found:
toprocess = set()
tasklist = []
for tid in invalidtasks:
for tid in invalidtasks.difference(found):
tasklist.append(tid)
if tasklist:
bb.plain("The differences between the current build and any cached tasks start at the following tasks:\n" + "\n".join(tasklist))
return invalidtasks
return invalidtasks.difference(found)
def write_diffscenetasks(self, invalidtasks):
bb.siggen.check_siggen_version(bb.siggen)
# Define recursion callback
def recursecb(key, hash1, hash2):
hashes = [hash1, hash2]
bb.debug(1, "Recursively looking for recipe {} hashes {}".format(key, hashes))
hashfiles = bb.siggen.find_siginfo(key, None, hashes, self.cfgData)
bb.debug(1, "Found hashfiles:\n{}".format(hashfiles))
recout = []
if len(hashfiles) == 2:
out2 = bb.siggen.compare_sigfiles(hashfiles[hash1]['path'], hashfiles[hash2]['path'], recursecb)
out2 = bb.siggen.compare_sigfiles(hashfiles[hash1], hashfiles[hash2], recursecb)
recout.extend(list(' ' + l for l in out2))
else:
recout.append("Unable to find matching sigdata for %s with hashes %s or %s" % (key, hash1, hash2))
@@ -1815,25 +1756,20 @@ class RunQueue:
for tid in invalidtasks:
(mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
pn = self.rqdata.dataCaches[mc].pkg_fn[taskfn]
h = self.rqdata.runtaskentries[tid].unihash
bb.debug(1, "Looking for recipe {} task {}".format(pn, taskname))
h = self.rqdata.runtaskentries[tid].hash
matches = bb.siggen.find_siginfo(pn, taskname, [], self.cooker.databuilder.mcdata[mc])
bb.debug(1, "Found hashfiles:\n{}".format(matches))
match = None
for m in matches.values():
if h in m['path']:
match = m['path']
for m in matches:
if h in m:
match = m
if match is None:
bb.fatal("Can't find a task we're supposed to have written out? (hash: %s tid: %s)?" % (h, tid))
bb.fatal("Can't find a task we're supposed to have written out? (hash: %s)?" % h)
matches = {k : v for k, v in iter(matches.items()) if h not in k}
matches_local = {k : v for k, v in iter(matches.items()) if h not in k and not v['sstate']}
if matches_local:
matches = matches_local
if matches:
latestmatch = matches[sorted(matches.keys(), key=lambda h: matches[h]['time'])[-1]]['path']
latestmatch = sorted(matches.keys(), key=lambda f: matches[f])[-1]
prevh = __find_sha256__.search(latestmatch).group(0)
output = bb.siggen.compare_sigfiles(latestmatch, match, recursecb)
bb.plain("\nTask %s:%s couldn't be used from the cache because:\n We need hash %s, most recent matching task was %s\n " % (pn, taskname, h, prevh) + '\n '.join(output))
bb.plain("\nTask %s:%s couldn't be used from the cache because:\n We need hash %s, closest matching task was %s\n " % (pn, taskname, h, prevh) + '\n '.join(output))
class RunQueueExecute:
@@ -1849,7 +1785,6 @@ class RunQueueExecute:
self.max_cpu_pressure = self.cfgData.getVar("BB_PRESSURE_MAX_CPU")
self.max_io_pressure = self.cfgData.getVar("BB_PRESSURE_MAX_IO")
self.max_memory_pressure = self.cfgData.getVar("BB_PRESSURE_MAX_MEMORY")
self.max_loadfactor = self.cfgData.getVar("BB_LOADFACTOR_MAX")
self.sq_buildable = set()
self.sq_running = set()
@@ -1867,8 +1802,6 @@ class RunQueueExecute:
self.build_stamps2 = []
self.failed_tids = []
self.sq_deferred = {}
self.sq_needed_harddeps = set()
self.sq_harddep_deferred = set()
self.stampcache = {}
@@ -1878,6 +1811,11 @@ class RunQueueExecute:
self.stats = RunQueueStats(len(self.rqdata.runtaskentries), len(self.rqdata.runq_setscene_tids))
for mc in rq.worker:
rq.worker[mc].pipe.setrunqueueexec(self)
for mc in rq.fakeworker:
rq.fakeworker[mc].pipe.setrunqueueexec(self)
if self.number_tasks <= 0:
bb.fatal("Invalid BB_NUMBER_THREADS %s" % self.number_tasks)
@@ -1903,11 +1841,6 @@ class RunQueueExecute:
bb.fatal("Invalid BB_PRESSURE_MAX_MEMORY %s, minimum value is %s." % (self.max_memory_pressure, lower_limit))
if self.max_memory_pressure > upper_limit:
bb.warn("Your build will be largely unregulated since BB_PRESSURE_MAX_MEMORY is set to %s. It is very unlikely that such high pressure will be experienced." % (self.max_io_pressure))
if self.max_loadfactor:
self.max_loadfactor = float(self.max_loadfactor)
if self.max_loadfactor <= 0:
bb.fatal("Invalid BB_LOADFACTOR_MAX %s, needs to be greater than zero." % (self.max_loadfactor))
# List of setscene tasks which we've covered
self.scenequeue_covered = set()
@@ -1918,6 +1851,11 @@ class RunQueueExecute:
self.tasks_notcovered = set()
self.scenequeue_notneeded = set()
# We can't skip specified target tasks which aren't setscene tasks
self.cantskip = set(self.rqdata.target_tids)
self.cantskip.difference_update(self.rqdata.runq_setscene_tids)
self.cantskip.intersection_update(self.rqdata.runtaskentries)
schedulers = self.get_schedulers()
for scheduler in schedulers:
if self.scheduler == scheduler.name:
@@ -1930,25 +1868,7 @@ class RunQueueExecute:
#if self.rqdata.runq_setscene_tids:
self.sqdata = SQData()
build_scenequeue_data(self.sqdata, self.rqdata, self)
update_scenequeue_data(self.sqdata.sq_revdeps, self.sqdata, self.rqdata, self.rq, self.cooker, self.stampcache, self, summary=True)
# Compute a list of 'stale' sstate tasks where the current hash does not match the one
# in any stamp files. Pass the list out to metadata as an event.
found = {}
for tid in self.rqdata.runq_setscene_tids:
(mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
stamps = bb.build.find_stale_stamps(taskname, taskfn)
if stamps:
if mc not in found:
found[mc] = {}
found[mc][tid] = stamps
for mc in found:
event = bb.event.StaleSetSceneTasks(found[mc])
bb.event.fire(event, self.cooker.databuilder.mcdata[mc])
self.build_taskdepdata_cache()
build_scenequeue_data(self.sqdata, self.rqdata, self.rq, self.cooker, self.stampcache, self)
def runqueue_process_waitpid(self, task, status, fakerootlog=None):
@@ -1974,14 +1894,14 @@ class RunQueueExecute:
def finish_now(self):
for mc in self.rq.worker:
try:
RunQueue.send_pickled_data(self.rq.worker[mc].process, b"", "finishnow")
self.rq.worker[mc].process.stdin.write(b"<finishnow></finishnow>")
self.rq.worker[mc].process.stdin.flush()
except IOError:
# worker must have died?
pass
for mc in self.rq.fakeworker:
try:
RunQueue.send_pickled_data(self.rq.fakeworker[mc].process, b"", "finishnow")
self.rq.fakeworker[mc].process.stdin.write(b"<finishnow></finishnow>")
self.rq.fakeworker[mc].process.stdin.flush()
except IOError:
# worker must have died?
@@ -2192,24 +2112,13 @@ class RunQueueExecute:
if not hasattr(self, "sorted_setscene_tids"):
# Don't want to sort this set every execution
self.sorted_setscene_tids = sorted(self.rqdata.runq_setscene_tids)
# Resume looping where we left off when we returned to feed the mainloop
self.setscene_tids_generator = itertools.cycle(self.rqdata.runq_setscene_tids)
task = None
if not self.sqdone and self.can_start_task():
loopcount = 0
# Find the next setscene to run, exit the loop when we've processed all tids or found something to execute
while loopcount < len(self.rqdata.runq_setscene_tids):
loopcount += 1
nexttask = next(self.setscene_tids_generator)
if nexttask in self.sq_buildable and nexttask not in self.sq_running and self.sqdata.stamps[nexttask] not in self.build_stamps.values() and nexttask not in self.sq_harddep_deferred:
if nexttask in self.sq_deferred and self.sq_deferred[nexttask] not in self.runq_complete:
# Skip deferred tasks quickly before the 'expensive' tests below - this is key to performant multiconfig builds
continue
if nexttask not in self.sqdata.unskippable and self.sqdata.sq_revdeps[nexttask] and \
nexttask not in self.sq_needed_harddeps and \
self.sqdata.sq_revdeps[nexttask].issubset(self.scenequeue_covered) and \
self.check_dependencies(nexttask, self.sqdata.sq_revdeps[nexttask]):
# Find the next setscene to run
for nexttask in self.sorted_setscene_tids:
if nexttask in self.sq_buildable and nexttask not in self.sq_running and self.sqdata.stamps[nexttask] not in self.build_stamps.values():
if nexttask not in self.sqdata.unskippable and self.sqdata.sq_revdeps[nexttask] and self.sqdata.sq_revdeps[nexttask].issubset(self.scenequeue_covered) and self.check_dependencies(nexttask, self.sqdata.sq_revdeps[nexttask]):
if nexttask not in self.rqdata.target_tids:
logger.debug2("Skipping setscene for task %s" % nexttask)
self.sq_task_skip(nexttask)
@@ -2217,25 +2126,13 @@ class RunQueueExecute:
if nexttask in self.sq_deferred:
del self.sq_deferred[nexttask]
return True
if nexttask in self.sqdata.sq_harddeps_rev and not self.sqdata.sq_harddeps_rev[nexttask].issubset(self.scenequeue_covered | self.scenequeue_notcovered):
logger.debug2("Deferring %s due to hard dependencies" % nexttask)
updated = False
for dep in self.sqdata.sq_harddeps_rev[nexttask]:
if dep not in self.sq_needed_harddeps:
logger.debug2("Enabling task %s as it is a hard dependency" % dep)
self.sq_buildable.add(dep)
self.sq_needed_harddeps.add(dep)
updated = True
self.sq_harddep_deferred.add(nexttask)
if updated:
return True
continue
# If covered tasks are running, need to wait for them to complete
for t in self.sqdata.sq_covered_tasks[nexttask]:
if t in self.runq_running and t not in self.runq_complete:
continue
if nexttask in self.sq_deferred:
# Deferred tasks that were still deferred were skipped above so we now need to process
if self.sq_deferred[nexttask] not in self.runq_complete:
continue
logger.debug("Task %s no longer deferred" % nexttask)
del self.sq_deferred[nexttask]
valid = self.rq.validate_hashes(set([nexttask]), self.cooker.data, 0, False, summary=False)
@@ -2299,10 +2196,10 @@ class RunQueueExecute:
if 'fakeroot' in taskdep and taskname in taskdep['fakeroot'] and not self.cooker.configuration.dry_run:
if not mc in self.rq.fakeworker:
self.rq.start_fakeworker(self, mc)
RunQueue.send_pickled_data(self.rq.fakeworker[mc].process, runtask, "runtask")
self.rq.fakeworker[mc].process.stdin.write(b"<runtask>" + pickle.dumps(runtask) + b"</runtask>")
self.rq.fakeworker[mc].process.stdin.flush()
else:
RunQueue.send_pickled_data(self.rq.worker[mc].process, runtask, "runtask")
self.rq.worker[mc].process.stdin.write(b"<runtask>" + pickle.dumps(runtask) + b"</runtask>")
self.rq.worker[mc].process.stdin.flush()
self.build_stamps[task] = bb.parse.siggen.stampfile_mcfn(taskname, taskfn, extrainfo=False)
@@ -2400,10 +2297,10 @@ class RunQueueExecute:
self.rq.state = runQueueFailed
self.stats.taskFailed()
return True
RunQueue.send_pickled_data(self.rq.fakeworker[mc].process, runtask, "runtask")
self.rq.fakeworker[mc].process.stdin.write(b"<runtask>" + pickle.dumps(runtask) + b"</runtask>")
self.rq.fakeworker[mc].process.stdin.flush()
else:
RunQueue.send_pickled_data(self.rq.worker[mc].process, runtask, "runtask")
self.rq.worker[mc].process.stdin.write(b"<runtask>" + pickle.dumps(runtask) + b"</runtask>")
self.rq.worker[mc].process.stdin.flush()
self.build_stamps[task] = bb.parse.siggen.stampfile_mcfn(taskname, taskfn, extrainfo=False)
@@ -2457,25 +2354,6 @@ class RunQueueExecute:
ret.add(dep)
return ret
# Build the individual cache entries in advance once to save time
def build_taskdepdata_cache(self):
taskdepdata_cache = {}
for task in self.rqdata.runtaskentries:
(mc, fn, taskname, taskfn) = split_tid_mcfn(task)
taskdepdata_cache[task] = bb.TaskData(
pn = self.rqdata.dataCaches[mc].pkg_fn[taskfn],
taskname = taskname,
fn = fn,
deps = self.filtermcdeps(task, mc, self.rqdata.runtaskentries[task].depends),
provides = self.rqdata.dataCaches[mc].fn_provides[taskfn],
taskhash = self.rqdata.runtaskentries[task].hash,
unihash = self.rqdata.runtaskentries[task].unihash,
hashfn = self.rqdata.dataCaches[mc].hashfn[taskfn],
taskhash_deps = self.rqdata.runtaskentries[task].taskhash_deps,
)
self.taskdepdata_cache = taskdepdata_cache
# We filter out multiconfig dependencies from taskdepdata we pass to the tasks
# as most code can't handle them
def build_taskdepdata(self, task):
@@ -2487,11 +2365,16 @@ class RunQueueExecute:
while next:
additional = []
for revdep in next:
self.taskdepdata_cache[revdep] = self.taskdepdata_cache[revdep]._replace(
unihash=self.rqdata.runtaskentries[revdep].unihash
)
taskdepdata[revdep] = self.taskdepdata_cache[revdep]
for revdep2 in self.taskdepdata_cache[revdep].deps:
(mc, fn, taskname, taskfn) = split_tid_mcfn(revdep)
pn = self.rqdata.dataCaches[mc].pkg_fn[taskfn]
deps = self.rqdata.runtaskentries[revdep].depends
provides = self.rqdata.dataCaches[mc].fn_provides[taskfn]
taskhash = self.rqdata.runtaskentries[revdep].hash
unihash = self.rqdata.runtaskentries[revdep].unihash
deps = self.filtermcdeps(task, mc, deps)
hashfn = self.rqdata.dataCaches[mc].hashfn[taskfn]
taskdepdata[revdep] = [pn, taskname, fn, deps, provides, taskhash, unihash, hashfn]
for revdep2 in deps:
if revdep2 not in taskdepdata:
additional.append(revdep2)
next = additional
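# A minimal sketch of the caching pattern introduced above: build an
# immutable record per task once, then refresh only the field that can
# change (the unihash) via namedtuple._replace(). The field list mirrors
# the constructor call above; the sample task id and values are made up.
from collections import namedtuple

TaskData = namedtuple("TaskData",
                      "pn taskname fn deps provides taskhash unihash hashfn taskhash_deps")

cache = {}
tid = "virtual:native:foo.bb:do_compile"
cache[tid] = TaskData(pn="foo-native", taskname="do_compile", fn="foo.bb",
                      deps=set(), provides=["foo-native"],
                      taskhash="aaaa", unihash="aaaa", hashfn="foo:native",
                      taskhash_deps=[])

# Later, when an equivalence server reports a different unihash, only that
# field is replaced; everything else is reused from the cached record.
cache[tid] = cache[tid]._replace(unihash="bbbb")
assert cache[tid].unihash == "bbbb" and cache[tid].taskhash == "aaaa"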
@@ -2505,7 +2388,7 @@ class RunQueueExecute:
return
notcovered = set(self.scenequeue_notcovered)
notcovered |= self.sqdata.cantskip
notcovered |= self.cantskip
for tid in self.scenequeue_notcovered:
notcovered |= self.sqdata.sq_covered_tasks[tid]
notcovered |= self.sqdata.unskippable.difference(self.rqdata.runq_setscene_tids)
@@ -2585,28 +2468,17 @@ class RunQueueExecute:
elif self.rqdata.runtaskentries[p].depends.isdisjoint(total):
next.add(p)
starttime = time.time()
lasttime = starttime
# When an item doesn't have dependencies in total, we can process it. Drop items from total when handled
while next:
current = next.copy()
next = set()
ready = {}
for tid in current:
if self.rqdata.runtaskentries[p].depends and not self.rqdata.runtaskentries[tid].depends.isdisjoint(total):
continue
# get_taskhash for a given tid *must* be called before get_unihash* below
ready[tid] = bb.parse.siggen.get_taskhash(tid, self.rqdata.runtaskentries[tid].depends, self.rqdata.dataCaches)
unihashes = bb.parse.siggen.get_unihashes(ready.keys())
for tid in ready:
orighash = self.rqdata.runtaskentries[tid].hash
newhash = ready[tid]
newhash = bb.parse.siggen.get_taskhash(tid, self.rqdata.runtaskentries[tid].depends, self.rqdata.dataCaches)
origuni = self.rqdata.runtaskentries[tid].unihash
newuni = unihashes[tid]
newuni = bb.parse.siggen.get_unihash(tid)
# FIXME, need to check it can come from sstate at all for determinism?
remapped = False
if newuni == origuni:
@@ -2627,21 +2499,12 @@ class RunQueueExecute:
next |= self.rqdata.runtaskentries[tid].revdeps
total.remove(tid)
next.intersection_update(total)
bb.event.check_for_interrupts(self.cooker.data)
if time.time() > (lasttime + 30):
lasttime = time.time()
hashequiv_logger.verbose("Rehash loop slow progress: %s in %s" % (len(total), lasttime - starttime))
endtime = time.time()
if (endtime-starttime > 60):
hashequiv_logger.verbose("Rehash loop took more than 60s: %s" % (endtime-starttime))
if changed:
for mc in self.rq.worker:
RunQueue.send_pickled_data(self.rq.worker[mc].process, bb.parse.siggen.get_taskhashes(), "newtaskhashes")
self.rq.worker[mc].process.stdin.write(b"<newtaskhashes>" + pickle.dumps(bb.parse.siggen.get_taskhashes()) + b"</newtaskhashes>")
for mc in self.rq.fakeworker:
RunQueue.send_pickled_data(self.rq.fakeworker[mc].process, bb.parse.siggen.get_taskhashes(), "newtaskhashes")
self.rq.fakeworker[mc].process.stdin.write(b"<newtaskhashes>" + pickle.dumps(bb.parse.siggen.get_taskhashes()) + b"</newtaskhashes>")
hashequiv_logger.debug(pprint.pformat("Tasks changed:\n%s" % (changed)))
@@ -2711,8 +2574,8 @@ class RunQueueExecute:
update_tasks2 = []
for tid in update_tasks:
harddepfail = False
for t in self.sqdata.sq_harddeps_rev[tid]:
if t in self.scenequeue_notcovered:
for t in self.sqdata.sq_harddeps:
if tid in self.sqdata.sq_harddeps[t] and t in self.scenequeue_notcovered:
harddepfail = True
break
if not harddepfail and self.sqdata.sq_revdeps[tid].issubset(self.scenequeue_covered | self.scenequeue_notcovered):
@@ -2744,14 +2607,12 @@ class RunQueueExecute:
if changed:
self.stats.updateCovered(len(self.scenequeue_covered), len(self.scenequeue_notcovered))
self.sq_needed_harddeps = set()
self.sq_harddep_deferred = set()
self.holdoff_need_update = True
def scenequeue_updatecounters(self, task, fail=False):
if fail and task in self.sqdata.sq_harddeps:
for dep in sorted(self.sqdata.sq_harddeps[task]):
for dep in sorted(self.sqdata.sq_deps[task]):
if fail and task in self.sqdata.sq_harddeps and dep in self.sqdata.sq_harddeps[task]:
if dep in self.scenequeue_covered or dep in self.scenequeue_notcovered:
# dependency could be already processed, e.g. noexec setscene task
continue
@@ -2761,12 +2622,7 @@ class RunQueueExecute:
logger.debug2("%s was unavailable and is a hard dependency of %s so skipping" % (task, dep))
self.sq_task_failoutright(dep)
continue
# For performance, only compute allcovered once if needed
if self.sqdata.sq_deps[task]:
allcovered = self.scenequeue_covered | self.scenequeue_notcovered
for dep in sorted(self.sqdata.sq_deps[task]):
if self.sqdata.sq_revdeps[dep].issubset(allcovered):
if self.sqdata.sq_revdeps[dep].issubset(self.scenequeue_covered | self.scenequeue_notcovered):
if dep not in self.sq_buildable:
self.sq_buildable.add(dep)
@@ -2784,13 +2640,6 @@ class RunQueueExecute:
new.add(dep)
next = new
# If this task was one which other setscene tasks have a hard dependency upon, we need
# to walk through the hard dependencies and allow execution of those which have completed dependencies.
if task in self.sqdata.sq_harddeps:
for dep in self.sq_harddep_deferred.copy():
if self.sqdata.sq_harddeps_rev[dep].issubset(self.scenequeue_covered | self.scenequeue_notcovered):
self.sq_harddep_deferred.remove(dep)
self.stats.updateCovered(len(self.scenequeue_covered), len(self.scenequeue_notcovered))
self.holdoff_need_update = True
@@ -2859,19 +2708,13 @@ class RunQueueExecute:
additional = []
for revdep in next:
(mc, fn, taskname, taskfn) = split_tid_mcfn(revdep)
pn = self.rqdata.dataCaches[mc].pkg_fn[taskfn]
deps = getsetscenedeps(revdep)
taskdepdata[revdep] = bb.TaskData(
pn = self.rqdata.dataCaches[mc].pkg_fn[taskfn],
taskname = taskname,
fn = fn,
deps = deps,
provides = self.rqdata.dataCaches[mc].fn_provides[taskfn],
taskhash = self.rqdata.runtaskentries[revdep].hash,
unihash = self.rqdata.runtaskentries[revdep].unihash,
hashfn = self.rqdata.dataCaches[mc].hashfn[taskfn],
taskhash_deps = self.rqdata.runtaskentries[revdep].taskhash_deps,
)
provides = self.rqdata.dataCaches[mc].fn_provides[taskfn]
taskhash = self.rqdata.runtaskentries[revdep].hash
unihash = self.rqdata.runtaskentries[revdep].unihash
hashfn = self.rqdata.dataCaches[mc].hashfn[taskfn]
taskdepdata[revdep] = [pn, taskname, fn, deps, provides, taskhash, unihash, hashfn]
for revdep2 in deps:
if revdep2 not in taskdepdata:
additional.append(revdep2)
@@ -2915,7 +2758,6 @@ class SQData(object):
self.sq_revdeps = {}
# Injected inter-setscene task dependencies
self.sq_harddeps = {}
self.sq_harddeps_rev = {}
# Cache of stamp files so duplicates can't run in parallel
self.stamps = {}
# Setscene tasks directly depended upon by the build
@@ -2925,17 +2767,12 @@ class SQData(object):
# A list of normal tasks a setscene task covers
self.sq_covered_tasks = {}
def build_scenequeue_data(sqdata, rqdata, sqrq):
def build_scenequeue_data(sqdata, rqdata, rq, cooker, stampcache, sqrq):
sq_revdeps = {}
sq_revdeps_squash = {}
sq_collated_deps = {}
# We can't skip specified target tasks which aren't setscene tasks
sqdata.cantskip = set(rqdata.target_tids)
sqdata.cantskip.difference_update(rqdata.runq_setscene_tids)
sqdata.cantskip.intersection_update(rqdata.runtaskentries)
# We need to construct a dependency graph for the setscene functions. Intermediate
# dependencies between the setscene tasks only complicate the code. This code
# therefore aims to collapse the huge runqueue dependency tree into a smaller one
@@ -3004,7 +2841,7 @@ def build_scenequeue_data(sqdata, rqdata, sqrq):
for tid in rqdata.runtaskentries:
if not rqdata.runtaskentries[tid].revdeps:
sqdata.unskippable.add(tid)
sqdata.unskippable |= sqdata.cantskip
sqdata.unskippable |= sqrq.cantskip
while new:
new = False
orig = sqdata.unskippable.copy()
@@ -3043,7 +2880,6 @@ def build_scenequeue_data(sqdata, rqdata, sqrq):
idepends = rqdata.taskData[mc].taskentries[realtid].idepends
sqdata.stamps[tid] = bb.parse.siggen.stampfile_mcfn(taskname, taskfn, extrainfo=False)
sqdata.sq_harddeps_rev[tid] = set()
for (depname, idependtask) in idepends:
if depname not in rqdata.taskData[mc].build_targets:
@@ -3056,15 +2892,20 @@ def build_scenequeue_data(sqdata, rqdata, sqrq):
if deptid not in rqdata.runtaskentries:
bb.msg.fatal("RunQueue", "Task %s depends upon non-existent task %s:%s" % (realtid, depfn, idependtask))
logger.debug2("Adding hard setscene dependency %s for %s" % (deptid, tid))
if not deptid in sqdata.sq_harddeps:
sqdata.sq_harddeps[deptid] = set()
sqdata.sq_harddeps[deptid].add(tid)
sqdata.sq_harddeps_rev[tid].add(deptid)
sq_revdeps_squash[tid].add(deptid)
# Have to zero this to avoid circular dependencies
sq_revdeps_squash[deptid] = set()
rqdata.init_progress_reporter.next_stage()
for task in sqdata.sq_harddeps:
for dep in sqdata.sq_harddeps[task]:
sq_revdeps_squash[dep].add(task)
rqdata.init_progress_reporter.next_stage()
#for tid in sq_revdeps_squash:
@@ -3091,7 +2932,7 @@ def build_scenequeue_data(sqdata, rqdata, sqrq):
if not sqdata.sq_revdeps[tid]:
sqrq.sq_buildable.add(tid)
rqdata.init_progress_reporter.next_stage()
rqdata.init_progress_reporter.finish()
sqdata.noexec = set()
sqdata.stamppresent = set()
@@ -3110,6 +2951,22 @@ def build_scenequeue_data(sqdata, rqdata, sqrq):
sqrq.sq_deferred[tid] = sqdata.hashes[h]
bb.debug(1, "Deferring %s after %s" % (tid, sqdata.hashes[h]))
update_scenequeue_data(sqdata.sq_revdeps, sqdata, rqdata, rq, cooker, stampcache, sqrq, summary=True)
# Compute a list of 'stale' sstate tasks where the current hash does not match the one
# in any stamp files. Pass the list out to metadata as an event.
found = {}
for tid in rqdata.runq_setscene_tids:
(mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
stamps = bb.build.find_stale_stamps(taskname, taskfn)
if stamps:
if mc not in found:
found[mc] = {}
found[mc][tid] = stamps
for mc in found:
event = bb.event.StaleSetSceneTasks(found[mc])
bb.event.fire(event, cooker.databuilder.mcdata[mc])
def check_setscene_stamps(tid, rqdata, rq, stampcache, noexecstamp=False):
(mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
@@ -3310,6 +3167,9 @@ class runQueuePipe():
self.rqexec = rqexec
self.fakerootlogs = fakerootlogs
def setrunqueueexec(self, rqexec):
self.rqexec = rqexec
def read(self):
for workers, name in [(self.rq.worker, "Worker"), (self.rq.fakeworker, "Fakeroot")]:
for worker in workers.values():


@@ -13,7 +13,7 @@
import bb
import bb.event
import logging
from bb import multiprocessing
import multiprocessing
import threading
import array
import os
@@ -402,22 +402,6 @@ class ProcessServer():
serverlog("".join(msg))
def idle_thread(self):
if self.cooker.configuration.profile:
try:
import cProfile as profile
except:
import profile
prof = profile.Profile()
ret = profile.Profile.runcall(prof, self.idle_thread_internal)
prof.dump_stats("profile-mainloop.log")
bb.utils.process_profilelog("profile-mainloop.log")
serverlog("Raw profiling information saved to profile-mainloop.log and processed statistics to profile-mainloop.log.processed")
else:
self.idle_thread_internal()
def idle_thread_internal(self):
def remove_idle_func(function):
with bb.utils.lock_timeout(self._idlefuncsLock):
del self._idlefuns[function]
@@ -516,18 +500,12 @@ class ServerCommunicator():
self.recv = recv
def runCommand(self, command):
try:
self.connection.send(command)
except BrokenPipeError as e:
raise BrokenPipeError("bitbake-server might have died or been forcibly stopped, ie. OOM killed") from e
self.connection.send(command)
if not self.recv.poll(30):
logger.info("No reply from server in 30s (for command %s at %s)" % (command[0], currenttime()))
if not self.recv.poll(30):
raise ProcessTimeout("Timeout while waiting for a reply from the bitbake server (60s at %s)" % currenttime())
try:
ret, exc = self.recv.get()
except EOFError as e:
raise EOFError("bitbake-server might have died or been forcibly stopped, ie. OOM killed") from e
ret, exc = self.recv.get()
# Should probably turn all exceptions in exc back into exceptions?
# For now, at least handle BBHandledException
if exc and ("BBHandledException" in exc or "SystemExit" in exc):
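# A minimal sketch of the defensive IPC pattern shown above: wrap the pipe
# operations so a dead peer surfaces as a clear error, and split the wait
# into two polls so a warning can be logged before timing out. The command
# tuple and timeouts below are illustrative, not BitBake's real protocol.
import logging
from multiprocessing import Pipe

logger = logging.getLogger("sketch")

def run_command(conn, command, poll_s=30):
    try:
        conn.send(command)
    except BrokenPipeError as e:
        raise BrokenPipeError("server process appears to have died") from e
    if not conn.poll(poll_s):
        logger.info("No reply in %ss for %s, waiting again", poll_s, command[0])
        if not conn.poll(poll_s):
            raise TimeoutError("no reply from server after %ss" % (2 * poll_s))
    try:
        return conn.recv()
    except EOFError as e:
        raise EOFError("server closed the connection unexpectedly") from e

parent, child = Pipe()
child.send(("pong", None))                 # simulate the server replying
print(run_command(parent, ("ping",), poll_s=1))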


@@ -15,7 +15,6 @@ import difflib
import simplediff
import json
import types
from contextlib import contextmanager
import bb.compress.zstd
from bb.checksum import FileChecksumCache
from bb import runqueue
@@ -25,24 +24,6 @@ import hashserv.client
logger = logging.getLogger('BitBake.SigGen')
hashequiv_logger = logging.getLogger('BitBake.SigGen.HashEquiv')
#find_siginfo and find_siginfo_version are set by the metadata siggen
# The minimum version of the find_siginfo function we need
find_siginfo_minversion = 2
HASHSERV_ENVVARS = [
"SSL_CERT_DIR",
"SSL_CERT_FILE",
"NO_PROXY",
"HTTPS_PROXY",
"HTTP_PROXY"
]
def check_siggen_version(siggen):
if not hasattr(siggen, "find_siginfo_version"):
bb.fatal("Siggen from metadata (OE-Core?) is too old, please update it (no version found)")
if siggen.find_siginfo_version < siggen.find_siginfo_minversion:
bb.fatal("Siggen from metadata (OE-Core?) is too old, please update it (%s vs %s)" % (siggen.find_siginfo_version, siggen.find_siginfo_minversion))
class SetEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, set) or isinstance(obj, frozenset):
@@ -111,18 +92,9 @@ class SignatureGenerator(object):
if flag:
self.datacaches[mc].stamp_extrainfo[mcfn][t] = flag
def get_cached_unihash(self, tid):
return None
def get_unihash(self, tid):
unihash = self.get_cached_unihash(tid)
if unihash:
return unihash
return self.taskhash[tid]
def get_unihashes(self, tids):
return {tid: self.get_unihash(tid) for tid in tids}
def prep_taskhash(self, tid, deps, dataCaches):
return
@@ -381,7 +353,7 @@ class SignatureGeneratorBasic(SignatureGenerator):
self.taints[tid] = taint
logger.warning("%s is tainted from a forced run" % tid)
return set(dep for _, dep in self.runtaskdeps[tid])
return
def get_taskhash(self, tid, deps, dataCaches):
@@ -539,86 +511,32 @@ class SignatureGeneratorBasicHash(SignatureGeneratorBasic):
class SignatureGeneratorUniHashMixIn(object):
def __init__(self, data):
self.extramethod = {}
# NOTE: The cache only tracks hashes that exist. Hashes that don't
# exist are always queried from the server since it is possible for
# hashes to appear over time, but much less likely for them to
# disappear
self.unihash_exists_cache = set()
self.username = None
self.password = None
self.env = {}
origenv = data.getVar("BB_ORIGENV")
for e in HASHSERV_ENVVARS:
value = data.getVar(e)
if not value and origenv:
value = origenv.getVar(e)
if value:
self.env[e] = value
super().__init__(data)
def get_taskdata(self):
return (self.server, self.method, self.extramethod, self.max_parallel, self.username, self.password, self.env) + super().get_taskdata()
return (self.server, self.method, self.extramethod) + super().get_taskdata()
def set_taskdata(self, data):
self.server, self.method, self.extramethod, self.max_parallel, self.username, self.password, self.env = data[:7]
super().set_taskdata(data[7:])
self.server, self.method, self.extramethod = data[:3]
super().set_taskdata(data[3:])
def get_hashserv_creds(self):
if self.username and self.password:
return {
"username": self.username,
"password": self.password,
}
return {}
@contextmanager
def _client_env(self):
orig_env = os.environ.copy()
try:
for k, v in self.env.items():
os.environ[k] = v
yield
finally:
for k, v in self.env.items():
if k in orig_env:
os.environ[k] = orig_env[k]
else:
del os.environ[k]
@contextmanager
def client(self):
with self._client_env():
if getattr(self, '_client', None) is None:
self._client = hashserv.create_client(self.server, **self.get_hashserv_creds())
yield self._client
@contextmanager
def client_pool(self):
with self._client_env():
if getattr(self, '_client_pool', None) is None:
self._client_pool = hashserv.client.ClientPool(self.server, self.max_parallel, **self.get_hashserv_creds())
yield self._client_pool
if getattr(self, '_client', None) is None:
self._client = hashserv.create_client(self.server)
return self._client
def reset(self, data):
self.__close_clients()
if getattr(self, '_client', None) is not None:
self._client.close()
self._client = None
return super().reset(data)
def exit(self):
self.__close_clients()
if getattr(self, '_client', None) is not None:
self._client.close()
self._client = None
return super().exit()
def __close_clients(self):
with self._client_env():
if getattr(self, '_client', None) is not None:
self._client.close()
self._client = None
if getattr(self, '_client_pool', None) is not None:
self._client_pool.close()
self._client_pool = None
def get_stampfile_hash(self, tid):
if tid in self.taskhash:
# If a unique hash is reported, use it as the stampfile hash. This
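# The client setup above temporarily injects proxy/SSL settings into
# os.environ while the hash equivalence client is created, then restores
# the previous values. A generic sketch of that context manager pattern;
# the variable used in the example is one of the entries listed above.
import os
from contextlib import contextmanager

@contextmanager
def temp_environ(overrides):
    saved = os.environ.copy()
    try:
        for k, v in overrides.items():
            os.environ[k] = v
        yield
    finally:
        for k in overrides:
            if k in saved:
                os.environ[k] = saved[k]
            else:
                os.environ.pop(k, None)

before = os.environ.get("HTTPS_PROXY")
with temp_environ({"HTTPS_PROXY": "http://proxy.example:8080"}):
    assert os.environ["HTTPS_PROXY"] == "http://proxy.example:8080"
assert os.environ.get("HTTPS_PROXY") == before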
@@ -650,7 +568,7 @@ class SignatureGeneratorUniHashMixIn(object):
return None
return unihash
def get_cached_unihash(self, tid):
def get_unihash(self, tid):
taskhash = self.taskhash[tid]
# If its not a setscene task we can return
@@ -665,108 +583,40 @@ class SignatureGeneratorUniHashMixIn(object):
self.unihash[tid] = unihash
return unihash
return None
# In the absence of being able to discover a unique hash from the
# server, make it be equivalent to the taskhash. The unique "hash" only
# really needs to be a unique string (not even necessarily a hash), but
# making it match the taskhash has a few advantages:
#
# 1) All of the sstate code that assumes hashes can be the same
# 2) It provides maximal compatibility with builders that don't use
# an equivalency server
# 3) The value is easy for multiple independent builders to derive the
# same unique hash from the same input. This means that if the
# independent builders find the same taskhash, but it isn't reported
# to the server, there is a better chance that they will agree on
# the unique hash.
unihash = taskhash
def _get_method(self, tid):
method = self.method
if tid in self.extramethod:
method = method + self.extramethod[tid]
return method
def unihashes_exist(self, query):
if len(query) == 0:
return {}
uncached_query = {}
result = {}
for key, unihash in query.items():
if unihash in self.unihash_exists_cache:
result[key] = True
else:
uncached_query[key] = unihash
if self.max_parallel <= 1 or len(uncached_query) <= 1:
# No parallelism required. Make the query serially with the single client
with self.client() as client:
uncached_result = {
key: client.unihash_exists(value) for key, value in uncached_query.items()
}
else:
with self.client_pool() as client_pool:
uncached_result = client_pool.unihashes_exist(uncached_query)
for key, exists in uncached_result.items():
if exists:
self.unihash_exists_cache.add(query[key])
result[key] = exists
return result
def get_unihash(self, tid):
return self.get_unihashes([tid])[tid]
def get_unihashes(self, tids):
"""
For an iterable of tids, returns a dictionary that maps each tid to a
unihash
"""
result = {}
queries = {}
query_result = {}
for tid in tids:
unihash = self.get_cached_unihash(tid)
if unihash:
result[tid] = unihash
else:
queries[tid] = (self._get_method(tid), self.taskhash[tid])
if len(queries) == 0:
return result
if self.max_parallel <= 1 or len(queries) <= 1:
# No parallelism required. Make the query using a single client
with self.client() as client:
keys = list(queries.keys())
unihashes = client.get_unihash_batch(queries[k] for k in keys)
for idx, k in enumerate(keys):
query_result[k] = unihashes[idx]
else:
with self.client_pool() as client_pool:
query_result = client_pool.get_unihashes(queries)
for tid, unihash in query_result.items():
# In the absence of being able to discover a unique hash from the
# server, make it be equivalent to the taskhash. The unique "hash" only
# really needs to be a unique string (not even necessarily a hash), but
# making it match the taskhash has a few advantages:
#
# 1) All of the sstate code that assumes hashes can be the same
# 2) It provides maximal compatibility with builders that don't use
# an equivalency server
# 3) The value is easy for multiple independent builders to derive the
# same unique hash from the same input. This means that if the
# independent builders find the same taskhash, but it isn't reported
# to the server, there is a better chance that they will agree on
# the unique hash.
taskhash = self.taskhash[tid]
if unihash:
try:
method = self.method
if tid in self.extramethod:
method = method + self.extramethod[tid]
data = self.client().get_unihash(method, self.taskhash[tid])
if data:
unihash = data
# A unique hash equal to the taskhash is not very interesting,
# so it is reported at debug level 2. If they differ, that
# is much more interesting, so it is reported at debug level 1
hashequiv_logger.bbdebug((1, 2)[unihash == taskhash], 'Found unihash %s in place of %s for %s from %s' % (unihash, taskhash, tid, self.server))
else:
hashequiv_logger.debug2('No reported unihash for %s:%s from %s' % (tid, taskhash, self.server))
unihash = taskhash
except ConnectionError as e:
bb.warn('Error contacting Hash Equivalence Server %s: %s' % (self.server, str(e)))
self.set_unihash(tid, unihash)
self.unihash[tid] = unihash
result[tid] = unihash
return result
self.set_unihash(tid, unihash)
self.unihash[tid] = unihash
return unihash
def report_unihash(self, path, task, d):
import importlib
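# A minimal sketch of the lookup-with-fallback behaviour described in the
# comments above: ask the equivalence server for a unihash and, when nothing
# is reported or the server is unreachable, fall back to the taskhash so
# builds without a server still derive a stable value. query_server is a
# stand-in callable, not the real client API.
def resolve_unihash(tid, taskhash, query_server):
    try:
        reported = query_server(taskhash)
    except ConnectionError:
        reported = None             # treat an unreachable server as "no answer"
    return reported or taskhash     # fall back to the taskhash

assert resolve_unihash("foo:do_compile", "aaaa", lambda h: None) == "aaaa"
assert resolve_unihash("foo:do_compile", "aaaa", lambda h: "bbbb") == "bbbb"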
@@ -830,9 +680,7 @@ class SignatureGeneratorUniHashMixIn(object):
if tid in self.extramethod:
method = method + self.extramethod[tid]
with self.client() as client:
data = client.report_unihash(taskhash, method, outhash, unihash, extra_data)
data = self.client().report_unihash(taskhash, method, outhash, unihash, extra_data)
new_unihash = data['unihash']
if new_unihash != unihash:
@@ -863,9 +711,7 @@ class SignatureGeneratorUniHashMixIn(object):
if tid in self.extramethod:
method = method + self.extramethod[tid]
with self.client() as client:
data = client.report_unihash_equiv(taskhash, method, wanted_unihash, extra_data)
data = self.client().report_unihash_equiv(taskhash, method, wanted_unihash, extra_data)
hashequiv_logger.verbose('Reported task %s as unihash %s to %s (%s)' % (tid, wanted_unihash, self.server, str(data)))
if data is None:
@@ -898,7 +744,6 @@ class SignatureGeneratorTestEquivHash(SignatureGeneratorUniHashMixIn, SignatureG
super().init_rundepcheck(data)
self.server = data.getVar('BB_HASHSERVE')
self.method = "sstate_output_hash"
self.max_parallel = 1
def clean_checksum_file_path(file_checksum_tuple):
f, cs = file_checksum_tuple
@@ -994,18 +839,10 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
formatparams.update(values)
return formatstr.format(**formatparams)
try:
with bb.compress.zstd.open(a, "rt", encoding="utf-8", num_threads=1) as f:
a_data = json.load(f, object_hook=SetDecoder)
except (TypeError, OSError) as err:
bb.error("Failed to open sigdata file '%s': %s" % (a, str(err)))
raise err
try:
with bb.compress.zstd.open(b, "rt", encoding="utf-8", num_threads=1) as f:
b_data = json.load(f, object_hook=SetDecoder)
except (TypeError, OSError) as err:
bb.error("Failed to open sigdata file '%s': %s" % (b, str(err)))
raise err
with bb.compress.zstd.open(a, "rt", encoding="utf-8", num_threads=1) as f:
a_data = json.load(f, object_hook=SetDecoder)
with bb.compress.zstd.open(b, "rt", encoding="utf-8", num_threads=1) as f:
b_data = json.load(f, object_hook=SetDecoder)
for data in [a_data, b_data]:
handle_renames(data)
@@ -1243,12 +1080,8 @@ def calc_taskhash(sigdata):
def dump_sigfile(a):
output = []
try:
with bb.compress.zstd.open(a, "rt", encoding="utf-8", num_threads=1) as f:
a_data = json.load(f, object_hook=SetDecoder)
except (TypeError, OSError) as err:
bb.error("Failed to open sigdata file '%s': %s" % (a, str(err)))
raise err
with bb.compress.zstd.open(a, "rt", encoding="utf-8", num_threads=1) as f:
a_data = json.load(f, object_hook=SetDecoder)
handle_renames(a_data)


@@ -467,6 +467,6 @@ esac
# self.d.setVar("oe_libinstall", "echo test")
# self.d.setVar("FOO", "foo=oe_libinstall; eval $foo")
# self.d.setVarFlag("FOO", "vardeps", "oe_*")
# self.assertEqual(deps, set(["oe_libinstall"]))
# self.assertEquals(deps, set(["oe_libinstall"]))


@@ -395,16 +395,6 @@ class TestOverrides(unittest.TestCase):
self.d.setVar("OVERRIDES", "foo:bar:some_val")
self.assertEqual(self.d.getVar("TEST"), "testvalue3")
# Test an override with _<numeric> in it based on a real world OE issue
def test_underscore_override_2(self):
self.d.setVar("TARGET_ARCH", "x86_64")
self.d.setVar("PN", "test-${TARGET_ARCH}")
self.d.setVar("VERSION", "1")
self.d.setVar("VERSION:pn-test-${TARGET_ARCH}", "2")
self.d.setVar("OVERRIDES", "pn-${PN}")
bb.data.expandKeys(self.d)
self.assertEqual(self.d.getVar("VERSION"), "2")
def test_remove_with_override(self):
self.d.setVar("TEST:bar", "testvalue2")
self.d.setVar("TEST:some_val", "testvalue3 testvalue5")
@@ -426,6 +416,16 @@ class TestOverrides(unittest.TestCase):
self.d.setVar("TEST:bar:append", "testvalue2")
self.assertEqual(self.d.getVar("TEST"), "testvalue2")
# Test an override with _<numeric> in it based on a real world OE issue
def test_underscore_override(self):
self.d.setVar("TARGET_ARCH", "x86_64")
self.d.setVar("PN", "test-${TARGET_ARCH}")
self.d.setVar("VERSION", "1")
self.d.setVar("VERSION:pn-test-${TARGET_ARCH}", "2")
self.d.setVar("OVERRIDES", "pn-${PN}")
bb.data.expandKeys(self.d)
self.assertEqual(self.d.getVar("VERSION"), "2")
def test_append_and_unused_override(self):
# Had a bug where an unused override append could return "" instead of None
self.d.setVar("BAR:append:unusedoverride", "testvalue2")


@@ -13,7 +13,6 @@ import pickle
import threading
import time
import unittest
import tempfile
from unittest.mock import Mock
from unittest.mock import call
@@ -469,8 +468,6 @@ class EventClassesTest(unittest.TestCase):
def setUp(self):
bb.event.worker_pid = EventClassesTest._worker_pid
self.d = bb.data.init()
bb.parse.siggen = bb.siggen.init(self.d)
def test_Event(self):
""" Test the Event base class """
@@ -953,24 +950,3 @@ class EventClassesTest(unittest.TestCase):
event = bb.event.FindSigInfoResult(result)
self.assertEqual(event.result, result)
self.assertEqual(event.pid, EventClassesTest._worker_pid)
def test_lineno_in_eventhandler(self):
# The error lineno is 5, not 4 since the first line is '\n'
error_line = """
# Comment line1
# Comment line2
python test_lineno_in_eventhandler() {
This is an error line
}
addhandler test_lineno_in_eventhandler
test_lineno_in_eventhandler[eventmask] = "bb.event.ConfigParsed"
"""
with self.assertLogs() as logs:
f = tempfile.NamedTemporaryFile(suffix = '.bb')
f.write(bytes(error_line, "utf-8"))
f.flush()
d = bb.parse.handle(f.name, self.d)['']
output = "".join(logs.output)
self.assertTrue(" line 5\n" in output)


@@ -6,7 +6,6 @@
# SPDX-License-Identifier: GPL-2.0-only
#
import contextlib
import unittest
import hashlib
import tempfile
@@ -308,21 +307,6 @@ class URITest(unittest.TestCase):
'params': {"someparam" : "1"},
'query': {},
'relative': True
},
"https://www.innodisk.com/Download_file?9BE0BF6657;downloadfilename=EGPL-T101.zip": {
'uri': 'https://www.innodisk.com/Download_file?9BE0BF6657;downloadfilename=EGPL-T101.zip',
'scheme': 'https',
'hostname': 'www.innodisk.com',
'port': None,
'hostport': 'www.innodisk.com',
'path': '/Download_file',
'userinfo': '',
'username': '',
'password': '',
'params': {"downloadfilename" : "EGPL-T101.zip"},
'query': {"9BE0BF6657": None},
'relative': False
}
}
@@ -432,9 +416,9 @@ class FetcherTest(unittest.TestCase):
def git(self, cmd, cwd=None):
if isinstance(cmd, str):
cmd = 'git -c safe.bareRepository=all ' + cmd
cmd = 'git ' + cmd
else:
cmd = ['git', '-c', 'safe.bareRepository=all'] + cmd
cmd = ['git'] + cmd
if cwd is None:
cwd = self.gitdir
return bb.process.run(cmd, cwd=cwd)[0]
@@ -1123,25 +1107,6 @@ class FetcherNetworkTest(FetcherTest):
if os.path.exists(os.path.join(repo_path, 'bitbake-gitsm-test1')):
self.assertTrue(os.path.exists(os.path.join(repo_path, 'bitbake-gitsm-test1', 'bitbake')), msg='submodule of submodule missing')
@skipIfNoNetwork()
def test_git_submodule_restricted_network_premirrors(self):
# this test is to ensure that premirrors will be tried in restricted network
# that is, BB_ALLOWED_NETWORKS does not contain the domain the url uses
url = "gitsm://github.com/grpc/grpc.git;protocol=https;name=grpc;branch=v1.60.x;rev=0ef13a7555dbaadd4633399242524129eef5e231"
# create a download directory to be used as premirror later
tempdir = tempfile.mkdtemp(prefix="bitbake-fetch-")
dl_premirror = os.path.join(tempdir, "download-premirror")
os.mkdir(dl_premirror)
self.d.setVar("DL_DIR", dl_premirror)
fetcher = bb.fetch.Fetch([url], self.d)
fetcher.download()
# now use the premirror in restricted network
self.d.setVar("DL_DIR", self.dldir)
self.d.setVar("PREMIRRORS", "gitsm://.*/.* gitsm://%s/git2/MIRRORNAME;protocol=file" % dl_premirror)
self.d.setVar("BB_ALLOWED_NETWORKS", "*.some.domain")
fetcher = bb.fetch.Fetch([url], self.d)
fetcher.download()
@skipIfNoNetwork()
def test_git_submodule_dbus_broker(self):
# The following external repositories have show failures in fetch and unpack operations
@@ -1282,9 +1247,8 @@ class SVNTest(FetcherTest):
cwd=repo_dir)
bb.process.run("svn co %s svnfetch_co" % self.repo_url, cwd=self.tempdir)
# Github won't emulate SVN anymore (see https://github.blog/2023-01-20-sunsetting-subversion-support/)
# Use still accessible svn repo (only trunk to avoid longer downloads)
bb.process.run("svn propset svn:externals 'bitbake https://svn.apache.org/repos/asf/serf/trunk' .",
# Github will emulate SVN. Use this to check if we're downloading...
bb.process.run("svn propset svn:externals 'bitbake https://github.com/PhilipHazel/pcre2.git' .",
cwd=os.path.join(self.tempdir, 'svnfetch_co', 'trunk'))
bb.process.run("svn commit --non-interactive -m 'Add external'",
cwd=os.path.join(self.tempdir, 'svnfetch_co', 'trunk'))
@@ -1312,8 +1276,8 @@ class SVNTest(FetcherTest):
self.assertTrue(os.path.exists(os.path.join(self.unpackdir, 'trunk')), msg="Missing trunk")
self.assertTrue(os.path.exists(os.path.join(self.unpackdir, 'trunk', 'README.md')), msg="Missing contents")
self.assertFalse(os.path.exists(os.path.join(self.unpackdir, 'trunk/bitbake/protocols')), msg="External dir should NOT exist")
self.assertFalse(os.path.exists(os.path.join(self.unpackdir, 'trunk/bitbake/protocols', 'fcgi_buckets.h')), msg="External fcgi_buckets.h should NOT exist")
self.assertFalse(os.path.exists(os.path.join(self.unpackdir, 'trunk/bitbake/trunk')), msg="External dir should NOT exist")
self.assertFalse(os.path.exists(os.path.join(self.unpackdir, 'trunk/bitbake/trunk', 'README')), msg="External README should NOT exist")
@skipIfNoSvn()
def test_external_svn(self):
@@ -1326,8 +1290,8 @@ class SVNTest(FetcherTest):
self.assertTrue(os.path.exists(os.path.join(self.unpackdir, 'trunk')), msg="Missing trunk")
self.assertTrue(os.path.exists(os.path.join(self.unpackdir, 'trunk', 'README.md')), msg="Missing contents")
self.assertTrue(os.path.exists(os.path.join(self.unpackdir, 'trunk/bitbake/protocols')), msg="External dir should exist")
self.assertTrue(os.path.exists(os.path.join(self.unpackdir, 'trunk/bitbake/protocols', 'fcgi_buckets.h')), msg="External fcgi_buckets.h should exist")
self.assertTrue(os.path.exists(os.path.join(self.unpackdir, 'trunk/bitbake/trunk')), msg="External dir should exist")
self.assertTrue(os.path.exists(os.path.join(self.unpackdir, 'trunk/bitbake/trunk', 'README')), msg="External README should exist")
class TrustedNetworksTest(FetcherTest):
def test_trusted_network(self):
@@ -1405,39 +1369,37 @@ class FetchLatestVersionTest(FetcherTest):
test_git_uris = {
# version pattern "X.Y.Z"
("mx-1.0", "git://github.com/clutter-project/mx.git;branch=mx-1.4;protocol=https", "9b1db6b8060bd00b121a692f942404a24ae2960f", "", "")
("mx-1.0", "git://github.com/clutter-project/mx.git;branch=mx-1.4;protocol=https", "9b1db6b8060bd00b121a692f942404a24ae2960f", "")
: "1.99.4",
# version pattern "vX.Y"
# mirror of git.infradead.org since network issues interfered with testing
("mtd-utils", "git://git.yoctoproject.org/mtd-utils.git;branch=master;protocol=https", "ca39eb1d98e736109c64ff9c1aa2a6ecca222d8f", "", "")
("mtd-utils", "git://git.yoctoproject.org/mtd-utils.git;branch=master;protocol=https", "ca39eb1d98e736109c64ff9c1aa2a6ecca222d8f", "")
: "1.5.0",
# version pattern "pkg_name-X.Y"
# mirror of git://anongit.freedesktop.org/git/xorg/proto/presentproto since network issues interfered with testing
("presentproto", "git://git.yoctoproject.org/bbfetchtests-presentproto;branch=master;protocol=https", "24f3a56e541b0a9e6c6ee76081f441221a120ef9", "", "")
("presentproto", "git://git.yoctoproject.org/bbfetchtests-presentproto;branch=master;protocol=https", "24f3a56e541b0a9e6c6ee76081f441221a120ef9", "")
: "1.0",
# version pattern "pkg_name-vX.Y.Z"
("dtc", "git://git.yoctoproject.org/bbfetchtests-dtc.git;branch=master;protocol=https", "65cc4d2748a2c2e6f27f1cf39e07a5dbabd80ebf", "", "")
("dtc", "git://git.yoctoproject.org/bbfetchtests-dtc.git;branch=master;protocol=https", "65cc4d2748a2c2e6f27f1cf39e07a5dbabd80ebf", "")
: "1.4.0",
# combination version pattern
("sysprof", "git://git.yoctoproject.org/sysprof.git;protocol=https;branch=master", "cd44ee6644c3641507fb53b8a2a69137f2971219", "", "")
("sysprof", "git://gitlab.gnome.org/GNOME/sysprof.git;protocol=https;branch=master", "cd44ee6644c3641507fb53b8a2a69137f2971219", "")
: "1.2.0",
("u-boot-mkimage", "git://git.yoctoproject.org/bbfetchtests-u-boot.git;branch=master;protocol=https", "62c175fbb8a0f9a926c88294ea9f7e88eb898f6c", "", "")
("u-boot-mkimage", "git://git.denx.de/u-boot.git;branch=master;protocol=git", "62c175fbb8a0f9a926c88294ea9f7e88eb898f6c", "")
: "2014.01",
# version pattern "yyyymmdd"
("mobile-broadband-provider-info", "git://git.yoctoproject.org/mobile-broadband-provider-info.git;protocol=https;branch=master", "4ed19e11c2975105b71b956440acdb25d46a347d", "", "")
("mobile-broadband-provider-info", "git://gitlab.gnome.org/GNOME/mobile-broadband-provider-info.git;protocol=https;branch=master", "4ed19e11c2975105b71b956440acdb25d46a347d", "")
: "20120614",
# packages with a valid UPSTREAM_CHECK_GITTAGREGEX
# mirror of git://anongit.freedesktop.org/xorg/driver/xf86-video-omap since network issues interfered with testing
("xf86-video-omap", "git://git.yoctoproject.org/bbfetchtests-xf86-video-omap;branch=master;protocol=https", "ae0394e687f1a77e966cf72f895da91840dffb8f", r"(?P<pver>(\d+\.(\d\.?)*))", "")
("xf86-video-omap", "git://git.yoctoproject.org/bbfetchtests-xf86-video-omap;branch=master;protocol=https", "ae0394e687f1a77e966cf72f895da91840dffb8f", r"(?P<pver>(\d+\.(\d\.?)*))")
: "0.4.3",
("build-appliance-image", "git://git.yoctoproject.org/poky;branch=master;protocol=https", "b37dd451a52622d5b570183a81583cc34c2ff555", r"(?P<pver>(([0-9][\.|_]?)+[0-9]))", "")
("build-appliance-image", "git://git.yoctoproject.org/poky;branch=master;protocol=https", "b37dd451a52622d5b570183a81583cc34c2ff555", r"(?P<pver>(([0-9][\.|_]?)+[0-9]))")
: "11.0.0",
("chkconfig-alternatives-native", "git://github.com/kergoth/chkconfig;branch=sysroot;protocol=https", "cd437ecbd8986c894442f8fce1e0061e20f04dee", r"chkconfig\-(?P<pver>((\d+[\.\-_]*)+))", "")
("chkconfig-alternatives-native", "git://github.com/kergoth/chkconfig;branch=sysroot;protocol=https", "cd437ecbd8986c894442f8fce1e0061e20f04dee", r"chkconfig\-(?P<pver>((\d+[\.\-_]*)+))")
: "1.3.59",
("remake", "git://github.com/rocky/remake.git;protocol=https;branch=master", "f05508e521987c8494c92d9c2871aec46307d51d", r"(?P<pver>(\d+\.(\d+\.)*\d*(\+dbg\d+(\.\d+)*)*))", "")
("remake", "git://github.com/rocky/remake.git;protocol=https;branch=master", "f05508e521987c8494c92d9c2871aec46307d51d", r"(?P<pver>(\d+\.(\d+\.)*\d*(\+dbg\d+(\.\d+)*)*))")
: "3.82+dbg0.9",
("sysdig", "git://github.com/draios/sysdig.git;branch=dev;protocol=https", "4fb6288275f567f63515df0ff0a6518043ecfa9b", r"^(?P<pver>\d+(\.\d+)+)", "10.0.0")
: "0.28.0",
}
test_wget_uris = {
@@ -1505,13 +1467,10 @@ class FetchLatestVersionTest(FetcherTest):
self.assertTrue(verstring, msg="Could not find upstream version for %s" % k[0])
r = bb.utils.vercmp_string(v, verstring)
self.assertTrue(r == -1 or r == 0, msg="Package %s, version: %s <= %s" % (k[0], v, verstring))
if k[4]:
r = bb.utils.vercmp_string(verstring, k[4])
self.assertTrue(r == -1 or r == 0, msg="Package %s, version: %s <= %s" % (k[0], verstring, k[4]))
def test_wget_latest_versionstring(self):
testdata = os.path.dirname(os.path.abspath(__file__)) + "/fetch-testdata"
server = HTTPService(testdata, host="127.0.0.1")
server = HTTPService(testdata)
server.start()
port = server.port
try:
@@ -1519,10 +1478,10 @@ class FetchLatestVersionTest(FetcherTest):
self.d.setVar("PN", k[0])
checkuri = ""
if k[2]:
checkuri = "http://127.0.0.1:%s/" % port + k[2]
checkuri = "http://localhost:%s/" % port + k[2]
self.d.setVar("UPSTREAM_CHECK_URI", checkuri)
self.d.setVar("UPSTREAM_CHECK_REGEX", k[3])
url = "http://127.0.0.1:%s/" % port + k[1]
url = "http://localhost:%s/" % port + k[1]
ud = bb.fetch2.FetchData(url, self.d)
pupver = ud.method.latest_versionstring(ud, self.d)
verstring = pupver[0]
@@ -1715,8 +1674,6 @@ class GitShallowTest(FetcherTest):
if cwd is None:
cwd = self.gitdir
actual_refs = self.git(['for-each-ref', '--format=%(refname)'], cwd=cwd).splitlines()
# Resolve references into the same format as the comparison (needed by git 2.48 onwards)
actual_refs = self.git(['rev-parse', '--symbolic-full-name'] + actual_refs, cwd=cwd).splitlines()
full_expected = self.git(['rev-parse', '--symbolic-full-name'] + expected_refs, cwd=cwd).splitlines()
self.assertEqual(sorted(set(full_expected)), sorted(set(actual_refs)))
@@ -2279,14 +2236,10 @@ class GitLfsTest(FetcherTest):
bb.utils.mkdirhier(self.srcdir)
self.git_init(cwd=self.srcdir)
self.commit_file('.gitattributes', '*.mp3 filter=lfs -text')
def commit_file(self, filename, content):
with open(os.path.join(self.srcdir, filename), "w") as f:
f.write(content)
self.git(["add", filename], cwd=self.srcdir)
self.git(["commit", "-m", "Change"], cwd=self.srcdir)
return self.git(["rev-parse", "HEAD"], cwd=self.srcdir).strip()
with open(os.path.join(self.srcdir, '.gitattributes'), 'wt') as attrs:
attrs.write('*.mp3 filter=lfs -text')
self.git(['add', '.gitattributes'], cwd=self.srcdir)
self.git(['commit', '-m', "attributes", '.gitattributes'], cwd=self.srcdir)
def fetch(self, uri=None, download=True):
uris = self.d.getVar('SRC_URI').split()
@@ -2306,44 +2259,6 @@ class GitLfsTest(FetcherTest):
unpacked_lfs_file = os.path.join(self.d.getVar('WORKDIR'), 'git', "Cat_poster_1.jpg")
return unpacked_lfs_file
@skipIfNoGitLFS()
def test_fetch_lfs_on_srcrev_change(self):
"""Test if fetch downloads missing LFS objects when a different revision within an existing repository is requested"""
self.git(["lfs", "install", "--local"], cwd=self.srcdir)
@contextlib.contextmanager
def hide_upstream_repository():
"""Hide the upstream repository to make sure that git lfs cannot pull from it"""
temp_name = self.srcdir + ".bak"
os.rename(self.srcdir, temp_name)
try:
yield
finally:
os.rename(temp_name, self.srcdir)
def fetch_and_verify(revision, filename, content):
self.d.setVar('SRCREV', revision)
fetcher, ud = self.fetch()
with hide_upstream_repository():
workdir = self.d.getVar('WORKDIR')
fetcher.unpack(workdir)
with open(os.path.join(workdir, "git", filename)) as f:
self.assertEqual(f.read(), content)
commit_1 = self.commit_file("a.mp3", "version 1")
commit_2 = self.commit_file("a.mp3", "version 2")
self.d.setVar('SRC_URI', "git://%s;protocol=file;lfs=1;branch=master" % self.srcdir)
# Seed the local download folder by fetching the latest commit and verifying that the LFS contents are
# available even when the upstream repository disappears.
fetch_and_verify(commit_2, "a.mp3", "version 2")
# Verify that even when an older revision is fetched, the needed LFS objects are fetched into the download
# folder.
fetch_and_verify(commit_1, "a.mp3", "version 1")
@skipIfNoGitLFS()
@skipIfNoNetwork()
def test_real_git_lfs_repo_succeeds_without_lfs_param(self):
@@ -2362,7 +2277,7 @@ class GitLfsTest(FetcherTest):
@skipIfNoGitLFS()
@skipIfNoNetwork()
def test_real_git_lfs_repo_skips(self):
def test_real_git_lfs_repo_succeeds(self):
self.d.setVar('SRC_URI', "git://gitlab.com/gitlab-examples/lfs.git;protocol=https;branch=master;lfs=0")
f = self.get_real_git_lfs_file()
# This is the actual non-smudged placeholder file on the repo if git-lfs does not run
@@ -2375,41 +2290,24 @@ class GitLfsTest(FetcherTest):
with open(f) as fh:
self.assertEqual(lfs_file, fh.read())
@skipIfNoGitLFS()
def test_lfs_enabled(self):
import shutil
uri = 'git://%s;protocol=file;lfs=1;branch=master' % self.srcdir
self.d.setVar('SRC_URI', uri)
# With git-lfs installed, test that we can fetch and unpack
fetcher, ud = self.fetch()
shutil.rmtree(self.gitdir, ignore_errors=True)
fetcher.unpack(self.d.getVar('WORKDIR'))
@skipIfNoGitLFS()
def test_lfs_disabled(self):
import shutil
uri = 'git://%s;protocol=file;lfs=0;branch=master' % self.srcdir
self.d.setVar('SRC_URI', uri)
# Verify that the fetcher can survive even if the source
# repository has Git LFS usage configured.
fetcher, ud = self.fetch()
fetcher.unpack(self.d.getVar('WORKDIR'))
def test_lfs_enabled_not_installed(self):
import shutil
uri = 'git://%s;protocol=file;lfs=1;branch=master' % self.srcdir
self.d.setVar('SRC_URI', uri)
# Careful: suppress initial attempt at downloading
# Careful: suppress initial attempt at downloading until
# we know whether git-lfs is installed.
fetcher, ud = self.fetch(uri=None, download=False)
self.assertIsNotNone(ud.method._find_git_lfs)
# If git-lfs can be found, the unpack should be successful. Only
# attempt this with the real live copy of git-lfs installed.
if ud.method._find_git_lfs(self.d):
fetcher.download()
shutil.rmtree(self.gitdir, ignore_errors=True)
fetcher.unpack(self.d.getVar('WORKDIR'))
# Artificially assert that git-lfs is not installed, so
# we can verify a failure to unpack in its absence.
old_find_git_lfs = ud.method._find_git_lfs
try:
# If git-lfs cannot be found, the unpack should throw an error
@@ -2421,21 +2319,29 @@ class GitLfsTest(FetcherTest):
finally:
ud.method._find_git_lfs = old_find_git_lfs
def test_lfs_disabled_not_installed(self):
def test_lfs_disabled(self):
import shutil
uri = 'git://%s;protocol=file;lfs=0;branch=master' % self.srcdir
self.d.setVar('SRC_URI', uri)
# Careful: suppress initial attempt at downloading
fetcher, ud = self.fetch(uri=None, download=False)
# In contrast to test_lfs_enabled(), allow the implicit download
# done by self.fetch() to occur here. The point of this test case
# is to verify that the fetcher can survive even if the source
# repository has Git LFS usage configured.
fetcher, ud = self.fetch()
self.assertIsNotNone(ud.method._find_git_lfs)
# Artificially assert that git-lfs is not installed, so
# we can verify a failure to unpack in its absence.
old_find_git_lfs = ud.method._find_git_lfs
try:
# Even if git-lfs cannot be found, the unpack should be successful
fetcher.download()
# If git-lfs can be found, the unpack should be successful. A
# live copy of git-lfs is not required for this case, so
# unconditionally forge its presence.
ud.method._find_git_lfs = lambda d: True
shutil.rmtree(self.gitdir, ignore_errors=True)
fetcher.unpack(self.d.getVar('WORKDIR'))
# If git-lfs cannot be found, the unpack should be successful
ud.method._find_git_lfs = lambda d: False
shutil.rmtree(self.gitdir, ignore_errors=True)
fetcher.unpack(self.d.getVar('WORKDIR'))
@@ -3136,11 +3042,9 @@ class FetchPremirroronlyLocalTest(FetcherTest):
self.d.setVar("BB_FETCH_PREMIRRORONLY", "1")
self.d.setVar("BB_NO_NETWORK", "1")
self.d.setVar("PREMIRRORS", self.recipe_url + " " + "file://{}".format(self.mirrordir) + " \n")
self.mirrorname = "git2_git.fake.repo.bitbake.tar.gz"
self.mirrorfile = os.path.join(self.mirrordir, self.mirrorname)
self.testfilename = "bitbake-fetch.test"
def make_git_repo(self):
self.mirrorname = "git2_git.fake.repo.bitbake.tar.gz"
recipeurl = "git:/git.fake.repo/bitbake"
os.makedirs(self.gitdir)
self.git_init(cwd=self.gitdir)
@@ -3150,41 +3054,15 @@ class FetchPremirroronlyLocalTest(FetcherTest):
def git_new_commit(self):
import random
testfilename = "bibake-fetch.test"
os.unlink(os.path.join(self.mirrordir, self.mirrorname))
branch = self.git("branch --show-current", self.gitdir).split()
with open(os.path.join(self.gitdir, self.testfilename), "w") as testfile:
testfile.write("File {} from branch {}; Useless random data {}".format(self.testfilename, branch, random.random()))
self.git("add {}".format(self.testfilename), self.gitdir)
self.git("commit -a -m \"This random commit {} in branch {}. I'm useless.\"".format(random.random(), branch), self.gitdir)
with open(os.path.join(self.gitdir, testfilename), "w") as testfile:
testfile.write("Useless random data {}".format(random.random()))
self.git("add {}".format(testfilename), self.gitdir)
self.git("commit -a -m \"This random commit {}. I'm useless.\"".format(random.random()), self.gitdir)
bb.process.run('tar -czvf {} .'.format(os.path.join(self.mirrordir, self.mirrorname)), cwd = self.gitdir)
return self.git("rev-parse HEAD", self.gitdir).strip()
def git_new_branch(self, name):
self.git_new_commit()
head = self.git("rev-parse HEAD", self.gitdir).strip()
self.git("checkout -b {}".format(name), self.gitdir)
newrev = self.git_new_commit()
self.git("checkout {}".format(head), self.gitdir)
return newrev
def test_mirror_multiple_fetches(self):
self.make_git_repo()
self.d.setVar("SRCREV", self.git_new_commit())
fetcher = bb.fetch.Fetch([self.recipe_url], self.d)
fetcher.download()
fetcher.unpack(self.unpackdir)
## New commit in premirror. it's not in the download_dir
self.d.setVar("SRCREV", self.git_new_commit())
fetcher2 = bb.fetch.Fetch([self.recipe_url], self.d)
fetcher2.download()
fetcher2.unpack(self.unpackdir)
## New commit in premirror. it's not in the download_dir
self.d.setVar("SRCREV", self.git_new_commit())
fetcher3 = bb.fetch.Fetch([self.recipe_url], self.d)
fetcher3.download()
fetcher3.unpack(self.unpackdir)
def test_mirror_commit_nonexistent(self):
self.make_git_repo()
self.d.setVar("SRCREV", "0"*40)
@@ -3205,59 +3083,6 @@ class FetchPremirroronlyLocalTest(FetcherTest):
with self.assertRaises(bb.fetch2.NetworkAccess):
fetcher.download()
def test_mirror_tarball_multiple_branches(self):
"""
test if PREMIRRORS can handle multiple name/branches correctly
both branches have required revisions
"""
self.make_git_repo()
branch1rev = self.git_new_branch("testbranch1")
branch2rev = self.git_new_branch("testbranch2")
self.recipe_url = "git://git.fake.repo/bitbake;branch=testbranch1,testbranch2;protocol=https;name=branch1,branch2"
self.d.setVar("SRCREV_branch1", branch1rev)
self.d.setVar("SRCREV_branch2", branch2rev)
fetcher = bb.fetch.Fetch([self.recipe_url], self.d)
self.assertTrue(os.path.exists(self.mirrorfile), "Mirror file doesn't exist")
fetcher.download()
fetcher.unpack(os.path.join(self.tempdir, "unpacked"))
unpacked = os.path.join(self.tempdir, "unpacked", "git", self.testfilename)
self.assertTrue(os.path.exists(unpacked), "Repo has not been unpackaged properly!")
with open(unpacked, 'r') as f:
content = f.read()
## We expect to see testbranch1 in the file, not master, not testbranch2
self.assertTrue(content.find("testbranch1") != -1, "Wrong branch has been checked out!")
def test_mirror_tarball_multiple_branches_nobranch(self):
"""
test if PREMIRRORS can handle multiple name/branches correctly
Unbalanced name/branches raises ParameterError
"""
self.make_git_repo()
branch1rev = self.git_new_branch("testbranch1")
branch2rev = self.git_new_branch("testbranch2")
self.recipe_url = "git://git.fake.repo/bitbake;branch=testbranch1;protocol=https;name=branch1,branch2"
self.d.setVar("SRCREV_branch1", branch1rev)
self.d.setVar("SRCREV_branch2", branch2rev)
with self.assertRaises(bb.fetch2.ParameterError):
fetcher = bb.fetch.Fetch([self.recipe_url], self.d)
def test_mirror_tarball_multiple_branches_norev(self):
"""
test if PREMIRRORS can handle multiple name/branches correctly
one of the branches specifies non existing SRCREV
"""
self.make_git_repo()
branch1rev = self.git_new_branch("testbranch1")
branch2rev = self.git_new_branch("testbranch2")
self.recipe_url = "git://git.fake.repo/bitbake;branch=testbranch1,testbranch2;protocol=https;name=branch1,branch2"
self.d.setVar("SRCREV_branch1", branch1rev)
self.d.setVar("SRCREV_branch2", "0"*40)
fetcher = bb.fetch.Fetch([self.recipe_url], self.d)
self.assertTrue(os.path.exists(self.mirrorfile), "Mirror file doesn't exist")
with self.assertRaises(bb.fetch2.NetworkAccess):
fetcher.download()
class FetchPremirroronlyNetworkTest(FetcherTest):
def setUp(self):


@@ -243,101 +243,3 @@ unset A[flag@.service]
with self.assertRaises(bb.parse.ParseError):
d = bb.parse.handle(f.name, self.d)['']
export_function_recipe = """
inherit someclass
"""
export_function_recipe2 = """
inherit someclass
do_compile () {
false
}
python do_compilepython () {
bb.note("Something else")
}
"""
export_function_class = """
someclass_do_compile() {
true
}
python someclass_do_compilepython () {
bb.note("Something")
}
EXPORT_FUNCTIONS do_compile do_compilepython
"""
export_function_class2 = """
secondclass_do_compile() {
true
}
python secondclass_do_compilepython () {
bb.note("Something")
}
EXPORT_FUNCTIONS do_compile do_compilepython
"""
def test_parse_export_functions(self):
def check_function_flags(d):
self.assertEqual(d.getVarFlag("do_compile", "func"), 1)
self.assertEqual(d.getVarFlag("do_compilepython", "func"), 1)
self.assertEqual(d.getVarFlag("do_compile", "python"), None)
self.assertEqual(d.getVarFlag("do_compilepython", "python"), "1")
with tempfile.TemporaryDirectory() as tempdir:
self.d.setVar("__bbclasstype", "recipe")
recipename = tempdir + "/recipe.bb"
os.makedirs(tempdir + "/classes")
with open(tempdir + "/classes/someclass.bbclass", "w") as f:
f.write(self.export_function_class)
f.flush()
with open(tempdir + "/classes/secondclass.bbclass", "w") as f:
f.write(self.export_function_class2)
f.flush()
with open(recipename, "w") as f:
f.write(self.export_function_recipe)
f.flush()
os.chdir(tempdir)
d = bb.parse.handle(recipename, bb.data.createCopy(self.d))['']
self.assertIn("someclass_do_compile", d.getVar("do_compile"))
self.assertIn("someclass_do_compilepython", d.getVar("do_compilepython"))
check_function_flags(d)
recipename2 = tempdir + "/recipe2.bb"
with open(recipename2, "w") as f:
f.write(self.export_function_recipe2)
f.flush()
d = bb.parse.handle(recipename2, bb.data.createCopy(self.d))['']
self.assertNotIn("someclass_do_compile", d.getVar("do_compile"))
self.assertNotIn("someclass_do_compilepython", d.getVar("do_compilepython"))
self.assertIn("false", d.getVar("do_compile"))
self.assertIn("else", d.getVar("do_compilepython"))
check_function_flags(d)
with open(recipename, "a+") as f:
f.write("\ninherit secondclass\n")
f.flush()
with open(recipename2, "a+") as f:
f.write("\ninherit secondclass\n")
f.flush()
d = bb.parse.handle(recipename, bb.data.createCopy(self.d))['']
self.assertIn("secondclass_do_compile", d.getVar("do_compile"))
self.assertIn("secondclass_do_compilepython", d.getVar("do_compilepython"))
check_function_flags(d)
d = bb.parse.handle(recipename2, bb.data.createCopy(self.d))['']
self.assertNotIn("someclass_do_compile", d.getVar("do_compile"))
self.assertNotIn("someclass_do_compilepython", d.getVar("do_compilepython"))
self.assertIn("false", d.getVar("do_compile"))
self.assertIn("else", d.getVar("do_compilepython"))
check_function_flags(d)


@@ -1,2 +0,0 @@
do_build[mcdepends] = "mc::mc-1:h1:do_invalid"


@@ -26,7 +26,7 @@ class RunQueueTests(unittest.TestCase):
a1_sstatevalid = "a1:do_package a1:do_package_qa a1:do_packagedata a1:do_package_write_ipk a1:do_package_write_rpm a1:do_populate_lic a1:do_populate_sysroot"
b1_sstatevalid = "b1:do_package b1:do_package_qa b1:do_packagedata b1:do_package_write_ipk b1:do_package_write_rpm b1:do_populate_lic b1:do_populate_sysroot"
def run_bitbakecmd(self, cmd, builddir, sstatevalid="", slowtasks="", extraenv=None, cleanup=False, allowfailure=False):
def run_bitbakecmd(self, cmd, builddir, sstatevalid="", slowtasks="", extraenv=None, cleanup=False):
env = os.environ.copy()
env["BBPATH"] = os.path.realpath(os.path.join(os.path.dirname(__file__), "runqueue-tests"))
env["BB_ENV_PASSTHROUGH_ADDITIONS"] = "SSTATEVALID SLOWTASKS TOPDIR"
@@ -41,8 +41,6 @@ class RunQueueTests(unittest.TestCase):
output = subprocess.check_output(cmd, env=env, stderr=subprocess.STDOUT,universal_newlines=True, cwd=builddir)
print(output)
except subprocess.CalledProcessError as e:
if allowfailure:
return e.output
self.fail("Command %s failed with %s" % (cmd, e.output))
tasks = []
tasklog = builddir + "/task.log"
@@ -316,13 +314,6 @@ class RunQueueTests(unittest.TestCase):
["mc_2:a1:%s" % t for t in rerun_tasks]
self.assertEqual(set(tasks), set(expected))
# Check that a multiconfig that doesn't exist raises a correct error message
error_output = self.run_bitbakecmd(["bitbake", "g1"], tempdir, "", extraenv=extraenv, cleanup=True, allowfailure=True)
self.assertIn("non-existent task", error_output)
# If the word 'Traceback' or 'KeyError' is in the output we've regressed
self.assertNotIn("Traceback", error_output)
self.assertNotIn("KeyError", error_output)
self.shutdown(tempdir)
def test_hashserv_single(self):


@@ -3,7 +3,7 @@
#
import http.server
from bb import multiprocessing
import multiprocessing
import os
import traceback
import signal
@@ -43,7 +43,7 @@ class HTTPService(object):
self.process = multiprocessing.Process(target=self.server.server_start, args=[self.root_dir, self.logger])
# The signal handler from testimage.bbclass can cause deadlocks here
# if the HTTPServer is terminated before it can restore the standard
# if the HTTPServer is terminated before it can restore the standard
#signal behaviour
orig = signal.getsignal(signal.SIGTERM)
signal.signal(signal.SIGTERM, signal.SIG_DFL)


@@ -188,19 +188,11 @@ class TinfoilCookerAdapter:
self._cache[name] = attrvalue
return attrvalue
class TinfoilSkiplistByMcAdapter:
def __init__(self, tinfoil):
self.tinfoil = tinfoil
def __getitem__(self, mc):
return self.tinfoil.get_skipped_recipes(mc)
def __init__(self, tinfoil):
self.tinfoil = tinfoil
self.multiconfigs = [''] + (tinfoil.config_data.getVar('BBMULTICONFIG') or '').split()
self.collections = {}
self.recipecaches = {}
self.skiplist_by_mc = self.TinfoilSkiplistByMcAdapter(tinfoil)
for mc in self.multiconfigs:
self.collections[mc] = self.TinfoilCookerCollectionAdapter(tinfoil, mc)
self.recipecaches[mc] = self.TinfoilRecipeCacheAdapter(tinfoil, mc)
@@ -209,6 +201,8 @@ class TinfoilCookerAdapter:
# Grab these only when they are requested since they aren't always used
if name in self._cache:
return self._cache[name]
elif name == 'skiplist':
attrvalue = self.tinfoil.get_skipped_recipes()
elif name == 'bbfile_config_priorities':
ret = self.tinfoil.run_command('getLayerPriorities')
bbfile_config_priorities = []
@@ -520,12 +514,12 @@ class Tinfoil:
"""
return defaultdict(list, self.run_command('getOverlayedRecipes', mc))
def get_skipped_recipes(self, mc=''):
def get_skipped_recipes(self):
"""
Find recipes which were skipped (i.e. SkipRecipe was raised
during parsing).
"""
return OrderedDict(self.run_command('getSkippedRecipes', mc))
return OrderedDict(self.run_command('getSkippedRecipes'))
def get_all_providers(self, mc=''):
return defaultdict(list, self.run_command('allProviders', mc))
@@ -539,7 +533,6 @@ class Tinfoil:
def get_runtime_providers(self, rdep):
return self.run_command('getRuntimeProviders', rdep)
# TODO: teach this method about mc
def get_recipe_file(self, pn):
"""
Get the file name for the specified recipe/target. Raises
@@ -548,7 +541,6 @@ class Tinfoil:
"""
best = self.find_best_provider(pn)
if not best or (len(best) > 3 and not best[3]):
# TODO: pass down mc
skiplist = self.get_skipped_recipes()
taskdata = bb.taskdata.TaskData(None, skiplist=skiplist)
skipreasons = taskdata.get_reasons(pn)


@@ -1,86 +0,0 @@
#!/usr/bin/env python3
#
# SPDX-License-Identifier: GPL-2.0-only
#
# This file re-uses code spread throughout other Bitbake source files.
# As such, all other copyrights belong to their own right holders.
#
import os
import sys
import json
import pickle
import codecs
class EventPlayer:
"""Emulate a connection to a bitbake server."""
def __init__(self, eventfile, variables):
self.eventfile = eventfile
self.variables = variables
self.eventmask = []
def waitEvent(self, _timeout):
"""Read event from the file."""
line = self.eventfile.readline().strip()
if not line:
return
try:
decodedline = json.loads(line)
if 'allvariables' in decodedline:
self.variables = decodedline['allvariables']
return
if not 'vars' in decodedline:
raise ValueError
event_str = decodedline['vars'].encode('utf-8')
event = pickle.loads(codecs.decode(event_str, 'base64'))
event_name = "%s.%s" % (event.__module__, event.__class__.__name__)
if event_name not in self.eventmask:
return
return event
except ValueError as err:
print("Failed loading ", line)
raise err
def runCommand(self, command_line):
"""Emulate running a command on the server."""
name = command_line[0]
if name == "getVariable":
var_name = command_line[1]
variable = self.variables.get(var_name)
if variable:
return variable['v'], None
return None, "Missing variable %s" % var_name
elif name == "getAllKeysWithFlags":
dump = {}
flaglist = command_line[1]
for key, val in self.variables.items():
try:
if not key.startswith("__"):
dump[key] = {
'v': val['v'],
'history' : val['history'],
}
for flag in flaglist:
dump[key][flag] = val[flag]
except Exception as err:
print(err)
return (dump, None)
elif name == 'setEventMask':
self.eventmask = command_line[-1]
return True, None
else:
raise Exception("Command %s not implemented" % command_line[0])
def getEventHandle(self):
"""
This method is called by toasterui.
The return value is passed to self.runCommand but not used there.
"""
pass

View File

@@ -131,7 +131,7 @@ class TerminalFilter(object):
def getTerminalColumns(self):
def ioctl_GWINSZ(fd):
try:
cr = struct.unpack('hhhh', fcntl.ioctl(fd, self.termios.TIOCGWINSZ, b'12345678'))[0:2]
cr = struct.unpack('hh', fcntl.ioctl(fd, self.termios.TIOCGWINSZ, '1234'))
except:
return None
return cr
@@ -145,7 +145,7 @@ class TerminalFilter(object):
pass
if not cr:
try:
cr = (int(os.environ['LINES']), int(os.environ['COLUMNS']))
cr = (os.environ['LINES'], os.environ['COLUMNS'])
except:
cr = (25, 80)
return cr
@@ -179,7 +179,7 @@ class TerminalFilter(object):
new[3] = new[3] & ~termios.ECHO
termios.tcsetattr(fd, termios.TCSADRAIN, new)
curses.setupterm()
if curses.tigetnum("colors") > 2 and os.environ.get('NO_COLOR', '') == '':
if curses.tigetnum("colors") > 2:
for h in handlers:
try:
h.formatter.enable_color()
@@ -420,11 +420,6 @@ def main(server, eventHandler, params, tf = TerminalFilter):
except bb.BBHandledException:
drain_events_errorhandling(eventHandler)
return 1
except Exception as e:
# bitbake-server comms failure
early_logger = bb.msg.logger_create('bitbake', sys.stdout)
early_logger.fatal("Attempting to set server environment: %s", e)
return 1
if params.options.quiet == 0:
console_loglevel = loglevel
@@ -577,8 +572,6 @@ def main(server, eventHandler, params, tf = TerminalFilter):
else:
log_exec_tty = False
should_print_hyperlinks = sys.stdout.isatty() and os.environ.get('NO_COLOR', '') == ''
helper = uihelper.BBUIHelper()
# Look for the specially designated handlers which need to be passed to the
@@ -592,12 +585,7 @@ def main(server, eventHandler, params, tf = TerminalFilter):
return
llevel, debug_domains = bb.msg.constructLogOptions()
try:
server.runCommand(["setEventMask", server.getEventHandle(), llevel, debug_domains, _evt_list])
except (BrokenPipeError, EOFError) as e:
# bitbake-server comms failure
logger.fatal("Attempting to set event mask: %s", e)
return 1
server.runCommand(["setEventMask", server.getEventHandle(), llevel, debug_domains, _evt_list])
# The logging_tree module is *extremely* helpful in debugging logging
# domains. Uncomment here to dump the logging tree when bitbake starts
@@ -606,11 +594,7 @@ def main(server, eventHandler, params, tf = TerminalFilter):
universe = False
if not params.observe_only:
try:
params.updateFromServer(server)
except Exception as e:
logger.fatal("Fetching command line: %s", e)
return 1
params.updateFromServer(server)
cmdline = params.parseActions()
if not cmdline:
print("Nothing to do. Use 'bitbake world' to build everything, or run 'bitbake --help' for usage information.")
@@ -621,12 +605,7 @@ def main(server, eventHandler, params, tf = TerminalFilter):
if cmdline['action'][0] == "buildTargets" and "universe" in cmdline['action'][1]:
universe = True
try:
ret, error = server.runCommand(cmdline['action'])
except (BrokenPipeError, EOFError) as e:
# bitbake-server comms failure
logger.fatal("Command '{}' failed: %s".format(cmdline), e)
return 1
ret, error = server.runCommand(cmdline['action'])
if error:
logger.error("Command '%s' failed: %s" % (cmdline, error))
return 1
@@ -642,7 +621,7 @@ def main(server, eventHandler, params, tf = TerminalFilter):
return_value = 0
errors = 0
warnings = 0
taskfailures = {}
taskfailures = []
printintervaldelta = 10 * 60 # 10 minutes
printinterval = printintervaldelta
@@ -728,8 +707,6 @@ def main(server, eventHandler, params, tf = TerminalFilter):
if isinstance(event, bb.build.TaskFailed):
return_value = 1
print_event_log(event, includelogs, loglines, termfilter)
k = "{}:{}".format(event._fn, event._task)
taskfailures[k] = event.logfile
if isinstance(event, bb.build.TaskBase):
logger.info(event._message)
continue
@@ -825,7 +802,7 @@ def main(server, eventHandler, params, tf = TerminalFilter):
if isinstance(event, bb.runqueue.runQueueTaskFailed):
return_value = 1
taskfailures.setdefault(event.taskstring)
taskfailures.append(event.taskstring)
logger.error(str(event))
continue
@@ -877,26 +854,15 @@ def main(server, eventHandler, params, tf = TerminalFilter):
logger.error("Unknown event: %s", event)
except (BrokenPipeError, EOFError) as e:
# bitbake-server comms failure, don't attempt further comms and exit
logger.fatal("Executing event: %s", e)
return_value = 1
errors = errors + 1
main.shutdown = 3
except EnvironmentError as ioerror:
termfilter.clearFooter()
# ignore interrupted io
if ioerror.args[0] == 4:
continue
sys.stderr.write(str(ioerror))
main.shutdown = 2
if not params.observe_only:
try:
_, error = server.runCommand(["stateForceShutdown"])
except (BrokenPipeError, EOFError) as e:
# bitbake-server comms failure, don't attempt further comms and exit
logger.fatal("Unable to force shutdown: %s", e)
main.shutdown = 3
_, error = server.runCommand(["stateForceShutdown"])
main.shutdown = 2
except KeyboardInterrupt:
termfilter.clearFooter()
if params.observe_only:
@@ -905,13 +871,9 @@ def main(server, eventHandler, params, tf = TerminalFilter):
def state_force_shutdown():
print("\nSecond Keyboard Interrupt, stopping...\n")
try:
_, error = server.runCommand(["stateForceShutdown"])
if error:
logger.error("Unable to cleanly stop: %s" % error)
except (BrokenPipeError, EOFError) as e:
# bitbake-server comms failure
logger.fatal("Unable to cleanly stop: %s", e)
_, error = server.runCommand(["stateForceShutdown"])
if error:
logger.error("Unable to cleanly stop: %s" % error)
if not params.observe_only and main.shutdown == 1:
state_force_shutdown()
@@ -924,9 +886,6 @@ def main(server, eventHandler, params, tf = TerminalFilter):
_, error = server.runCommand(["stateShutdown"])
if error:
logger.error("Unable to cleanly shutdown: %s" % error)
except (BrokenPipeError, EOFError) as e:
# bitbake-server comms failure
logger.fatal("Unable to cleanly shutdown: %s", e)
except KeyboardInterrupt:
state_force_shutdown()
@@ -934,33 +893,18 @@ def main(server, eventHandler, params, tf = TerminalFilter):
except Exception as e:
import traceback
sys.stderr.write(traceback.format_exc())
main.shutdown = 2
if not params.observe_only:
try:
_, error = server.runCommand(["stateForceShutdown"])
except (BrokenPipeError, EOFError) as e:
# bitbake-server comms failure, don't attempt further comms and exit
logger.fatal("Unable to force shutdown: %s", e)
main.shutdown = 3
_, error = server.runCommand(["stateForceShutdown"])
main.shutdown = 2
return_value = 1
try:
termfilter.clearFooter()
summary = ""
def format_hyperlink(url, link_text):
if should_print_hyperlinks:
start = f'\033]8;;{url}\033\\'
end = '\033]8;;\033\\'
return f'{start}{link_text}{end}'
return link_text
if taskfailures:
summary += pluralise("\nSummary: %s task failed:",
"\nSummary: %s tasks failed:", len(taskfailures))
for (failure, log_file) in taskfailures.items():
for failure in taskfailures:
summary += "\n %s" % failure
if log_file:
hyperlink = format_hyperlink(f"file://{log_file}", log_file)
summary += "\n log: {}".format(hyperlink)
if warnings:
summary += pluralise("\nSummary: There was %s WARNING message.",
"\nSummary: There were %s WARNING messages.", warnings)

View File

@@ -227,9 +227,6 @@ class NCursesUI:
shutdown = 0
try:
if not params.observe_only:
params.updateToServer(server, os.environ.copy())
params.updateFromServer(server)
cmdline = params.parseActions()
if not cmdline:

File diff suppressed because it is too large

View File

@@ -30,6 +30,7 @@ import bb.build
import bb.command
import bb.cooker
import bb.event
import bb.exceptions
import bb.runqueue
from bb.ui import uihelper
@@ -101,6 +102,10 @@ class TeamcityLogFormatter(logging.Formatter):
details = ""
if hasattr(record, 'bb_exc_formatted'):
details = ''.join(record.bb_exc_formatted)
elif hasattr(record, 'bb_exc_info'):
etype, value, tb = record.bb_exc_info
formatted = bb.exceptions.format_exception(etype, value, tb, limit=5)
details = ''.join(formatted)
if record.levelno in [bb.msg.BBLogFormatter.ERROR, bb.msg.BBLogFormatter.CRITICAL]:
# ERROR gets a separate errorDetails field

View File

@@ -385,7 +385,7 @@ def main(server, eventHandler, params):
main.shutdown = 1
logger.info("ToasterUI build done, brbe: %s", brbe)
break
continue
if isinstance(event, (bb.command.CommandCompleted,
bb.command.CommandFailed,

View File

@@ -14,7 +14,7 @@ import logging
import bb
import bb.msg
import locale
from bb import multiprocessing
import multiprocessing
import fcntl
import importlib
import importlib.machinery
@@ -50,7 +50,7 @@ def clean_context():
def get_context():
return _context
def set_context(ctx):
_context = ctx
@@ -212,8 +212,8 @@ def explode_dep_versions2(s, *, sort=True):
inversion = True
# This list is based on behavior and supported comparisons from deb, opkg and rpm.
#
# Even though =<, <<, ==, !=, =>, and >> may not be supported,
# we list each possibly valid item.
# Even though =<, <<, ==, !=, =>, and >> may not be supported,
# we list each possibly valid item.
# The build system is responsible for validation of what it supports.
if i.startswith(('<=', '=<', '<<', '==', '!=', '>=', '=>', '>>')):
lastcmp = i[0:2]
@@ -347,7 +347,7 @@ def _print_exception(t, value, tb, realfile, text, context):
exception = traceback.format_exception_only(t, value)
error.append('Error executing a python function in %s:\n' % realfile)
# Strip 'us' from the stack (better_exec call) unless that was where the
# Strip 'us' from the stack (better_exec call) unless that was where the
# error came from
if tb.tb_next is not None:
tb = tb.tb_next
@@ -604,6 +604,7 @@ def preserved_envvars():
v = [
'BBPATH',
'BB_PRESERVE_ENV',
'BB_ENV_PASSTHROUGH',
'BB_ENV_PASSTHROUGH_ADDITIONS',
]
return v + preserved_envvars_exported()
@@ -745,9 +746,9 @@ def prunedir(topdir, ionice=False):
# but that's possibly insane and suffixes is probably going to be small
#
def prune_suffix(var, suffixes, d):
"""
"""
See if var ends with any of the suffixes listed and
remove it if found
remove it if found
"""
for suffix in suffixes:
if suffix and var.endswith(suffix):
@@ -758,8 +759,7 @@ def mkdirhier(directory):
"""Create a directory like 'mkdir -p', but does not complain if
directory already exists like os.makedirs
"""
if '${' in str(directory):
bb.fatal("Directory name {} contains unexpanded bitbake variable. This may cause build failures and WORKDIR polution.".format(directory))
try:
os.makedirs(directory)
except OSError as e:
@@ -1001,9 +1001,9 @@ def umask(new_mask):
os.umask(current_mask)
def to_boolean(string, default=None):
"""
"""
Check input string and return boolean value True/False/None
depending upon the checks
depending upon the checks
"""
if not string:
return default
@@ -1142,10 +1142,7 @@ def get_referenced_vars(start_expr, d):
def cpu_count():
try:
return len(os.sched_getaffinity(0))
except OSError:
return multiprocessing.cpu_count()
return multiprocessing.cpu_count()
def nonblockingfd(fd):
fcntl.fcntl(fd, fcntl.F_SETFL, fcntl.fcntl(fd, fcntl.F_GETFL) | os.O_NONBLOCK)
@@ -1174,6 +1171,8 @@ def process_profilelog(fn, pout = None):
#
def multiprocessingpool(*args, **kwargs):
import multiprocessing.pool
#import multiprocessing.util
#multiprocessing.util.log_to_stderr(10)
# Deal with a multiprocessing bug where signals to the processes would be delayed until the work
# completes. Putting in a timeout means the signals (like SIGINT/SIGTERM) get processed.
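A minimal sketch of the timeout-based workaround described in the comment above (illustrative only, not bitbake's exact Pool subclass):

    def wait_with_timeout(async_result, poll=0.1):
        # Waking up every 'poll' seconds returns control to the interpreter,
        # so pending SIGINT/SIGTERM handlers run instead of being deferred
        # until the worker finishes.
        while not async_result.ready():
            async_result.wait(poll)
        return async_result.get()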
@@ -1852,42 +1851,15 @@ def path_is_descendant(descendant, ancestor):
return False
# Recomputing the sets in signal.py is expensive (bitbake -pP idle)
# so try and use _signal directly to avoid it
valid_signals = signal.valid_signals()
try:
import _signal
sigmask = _signal.pthread_sigmask
except ImportError:
sigmask = signal.pthread_sigmask
# If we don't have a timeout of some kind and a process/thread exits badly (for example
# OOM killed) and held a lock, we'd just hang in the lock futex forever. It is better
# we exit at some point than hang. 5 minutes with no progress means we're probably deadlocked.
# This function can still deadlock python since it can't signal the other threads to exit
# (signals are handled in the main thread) and even os._exit() will wait on non-daemon threads
# to exit.
@contextmanager
def lock_timeout(lock):
held = lock.acquire(timeout=5*60)
try:
s = sigmask(signal.SIG_BLOCK, valid_signals)
held = lock.acquire(timeout=5*60)
if not held:
bb.server.process.serverlog("Couldn't get the lock for 5 mins, timed out, exiting.\n%s" % traceback.format_stack())
os._exit(1)
yield held
finally:
lock.release()
sigmask(signal.SIG_SETMASK, s)
# A version of lock_timeout without the check that the lock was locked and a shorter timeout
@contextmanager
def lock_timeout_nocheck(lock):
try:
s = sigmask(signal.SIG_BLOCK, valid_signals)
l = lock.acquire(timeout=10)
yield l
finally:
if l:
lock.release()
sigmask(signal.SIG_SETMASK, s)
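A short usage sketch for lock_timeout() above (the lock and the update_cache() call are hypothetical):

    import threading
    import bb.utils

    cache_lock = threading.Lock()

    with bb.utils.lock_timeout(cache_lock):
        # Signals are blocked while the lock is held; if the lock cannot be
        # acquired within five minutes the helper logs a traceback and exits
        # instead of deadlocking.
        update_cache()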

View File

@@ -142,11 +142,10 @@ skipped recipes will also be listed, with a " (skipped)" suffix.
# Ensure we list skipped recipes
# We are largely guessing about PN, PV and the preferred version here,
# but we have no choice since skipped recipes are not fully parsed
skiplist = list(self.tinfoil.cooker.skiplist_by_mc[mc].keys())
skiplist = list(self.tinfoil.cooker.skiplist.keys())
mcspec = 'mc:%s:' % mc
if mc:
mcspec = f'mc:{mc}:'
skiplist = [s[len(mcspec):] if s.startswith(mcspec) else s for s in skiplist]
skiplist = [s[len(mcspec):] for s in skiplist if s.startswith(mcspec)]
for fn in skiplist:
recipe_parts = os.path.splitext(os.path.basename(fn))[0].split('_')
@@ -163,7 +162,7 @@ skipped recipes will also be listed, with a " (skipped)" suffix.
def print_item(f, pn, ver, layer, ispref):
if not selected_layer or layer == selected_layer:
if not bare and f in skiplist:
skipped = ' (skipped: %s)' % self.tinfoil.cooker.skiplist_by_mc[mc][f].skipreason
skipped = ' (skipped: %s)' % self.tinfoil.cooker.skiplist[f].skipreason
else:
skipped = ''
if show_filenames:
@@ -302,7 +301,7 @@ Lists recipes with the bbappends that apply to them as subitems.
if self.show_appends_for_pn(pn, cooker_data, args.mc):
appends = True
if not args.pnspec and self.show_appends_for_skipped(args.mc):
if not args.pnspec and self.show_appends_for_skipped():
appends = True
if not appends:
@@ -318,9 +317,9 @@ Lists recipes with the bbappends that apply to them as subitems.
return self.show_appends_output(filenames, best_filename)
def show_appends_for_skipped(self, mc):
def show_appends_for_skipped(self):
filenames = [os.path.basename(f)
for f in self.tinfoil.cooker.skiplist_by_mc[mc].keys()]
for f in self.tinfoil.cooker.skiplist.keys()]
return self.show_appends_output(filenames, None, " (skipped)")
def show_appends_output(self, filenames, best_filename, name_suffix = ''):

View File

@@ -585,7 +585,7 @@ class SiblingTest(TreeTest):
</html>'''
# All that whitespace looks good but makes the tests more
# difficult. Get rid of it.
markup = re.compile(r"\n\s*").sub("", markup)
markup = re.compile("\n\s*").sub("", markup)
self.tree = self.soup(markup)

View File

@@ -392,7 +392,19 @@ class SourceGenerator(NodeVisitor):
def visit_Name(self, node):
self.write(node.id)
def visit_Str(self, node):
self.write(repr(node.s))
def visit_Bytes(self, node):
self.write(repr(node.s))
def visit_Num(self, node):
self.write(repr(node.n))
def visit_Constant(self, node):
# Python 3.8 deprecated visit_Num(), visit_Str(), visit_Bytes(),
# visit_NameConstant() and visit_Ellipsis(). They can be removed once we
# require 3.8+.
self.write(repr(node.value))
def visit_Tuple(self, node):

View File

@@ -5,102 +5,151 @@
import asyncio
from contextlib import closing
import re
import sqlite3
import itertools
import json
from collections import namedtuple
from urllib.parse import urlparse
from bb.asyncrpc.client import parse_address, ADDR_TYPE_UNIX, ADDR_TYPE_WS
User = namedtuple("User", ("username", "permissions"))
UNIX_PREFIX = "unix://"
def create_server(
addr,
dbname,
*,
sync=True,
upstream=None,
read_only=False,
db_username=None,
db_password=None,
anon_perms=None,
admin_username=None,
admin_password=None,
):
def sqlite_engine():
from .sqlite import DatabaseEngine
ADDR_TYPE_UNIX = 0
ADDR_TYPE_TCP = 1
return DatabaseEngine(dbname, sync)
# The Python async server defaults to a 64K receive buffer, so we hardcode our
# maximum chunk size. It would be better if the client and server reported to
# each other what the maximum chunk sizes were, but that will slow down the
# connection setup with a round trip delay so I'd rather not do that unless it
# is necessary
DEFAULT_MAX_CHUNK = 32 * 1024
def sqlalchemy_engine():
from .sqlalchemy import DatabaseEngine
UNIHASH_TABLE_DEFINITION = (
("method", "TEXT NOT NULL", "UNIQUE"),
("taskhash", "TEXT NOT NULL", "UNIQUE"),
("unihash", "TEXT NOT NULL", ""),
)
return DatabaseEngine(dbname, db_username, db_password)
UNIHASH_TABLE_COLUMNS = tuple(name for name, _, _ in UNIHASH_TABLE_DEFINITION)
from . import server
OUTHASH_TABLE_DEFINITION = (
("method", "TEXT NOT NULL", "UNIQUE"),
("taskhash", "TEXT NOT NULL", "UNIQUE"),
("outhash", "TEXT NOT NULL", "UNIQUE"),
("created", "DATETIME", ""),
if "://" in dbname:
db_engine = sqlalchemy_engine()
# Optional fields
("owner", "TEXT", ""),
("PN", "TEXT", ""),
("PV", "TEXT", ""),
("PR", "TEXT", ""),
("task", "TEXT", ""),
("outhash_siginfo", "TEXT", ""),
)
OUTHASH_TABLE_COLUMNS = tuple(name for name, _, _ in OUTHASH_TABLE_DEFINITION)
def _make_table(cursor, name, definition):
cursor.execute('''
CREATE TABLE IF NOT EXISTS {name} (
id INTEGER PRIMARY KEY AUTOINCREMENT,
{fields}
UNIQUE({unique})
)
'''.format(
name=name,
fields=" ".join("%s %s," % (name, typ) for name, typ, _ in definition),
unique=", ".join(name for name, _, flags in definition if "UNIQUE" in flags)
))
def setup_database(database, sync=True):
db = sqlite3.connect(database)
db.row_factory = sqlite3.Row
with closing(db.cursor()) as cursor:
_make_table(cursor, "unihashes_v2", UNIHASH_TABLE_DEFINITION)
_make_table(cursor, "outhashes_v2", OUTHASH_TABLE_DEFINITION)
cursor.execute('PRAGMA journal_mode = WAL')
cursor.execute('PRAGMA synchronous = %s' % ('NORMAL' if sync else 'OFF'))
# Drop old indexes
cursor.execute('DROP INDEX IF EXISTS taskhash_lookup')
cursor.execute('DROP INDEX IF EXISTS outhash_lookup')
cursor.execute('DROP INDEX IF EXISTS taskhash_lookup_v2')
cursor.execute('DROP INDEX IF EXISTS outhash_lookup_v2')
# TODO: Upgrade from tasks_v2?
cursor.execute('DROP TABLE IF EXISTS tasks_v2')
# Create new indexes
cursor.execute('CREATE INDEX IF NOT EXISTS taskhash_lookup_v3 ON unihashes_v2 (method, taskhash)')
cursor.execute('CREATE INDEX IF NOT EXISTS outhash_lookup_v3 ON outhashes_v2 (method, outhash)')
return db
def parse_address(addr):
if addr.startswith(UNIX_PREFIX):
return (ADDR_TYPE_UNIX, (addr[len(UNIX_PREFIX):],))
else:
db_engine = sqlite_engine()
m = re.match(r'\[(?P<host>[^\]]*)\]:(?P<port>\d+)$', addr)
if m is not None:
host = m.group('host')
port = m.group('port')
else:
host, port = addr.split(':')
if anon_perms is None:
anon_perms = server.DEFAULT_ANON_PERMS
return (ADDR_TYPE_TCP, (host, int(port)))
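For reference, the address formats handled by parse_address() above (results shown as comments, values illustrative):

    parse_address("unix:///run/hashserve.sock")  # (ADDR_TYPE_UNIX, ("/run/hashserve.sock",))
    parse_address("[::1]:8686")                  # (ADDR_TYPE_TCP, ("::1", 8686))
    parse_address("localhost:8686")              # (ADDR_TYPE_TCP, ("localhost", 8686))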
s = server.Server(
db_engine,
upstream=upstream,
read_only=read_only,
anon_perms=anon_perms,
admin_username=admin_username,
admin_password=admin_password,
)
def chunkify(msg, max_chunk):
if len(msg) < max_chunk - 1:
yield ''.join((msg, "\n"))
else:
yield ''.join((json.dumps({
'chunk-stream': None
}), "\n"))
args = [iter(msg)] * (max_chunk - 1)
for m in map(''.join, itertools.zip_longest(*args, fillvalue='')):
yield ''.join(itertools.chain(m, "\n"))
yield "\n"
def create_server(addr, dbname, *, sync=True, upstream=None, read_only=False):
from . import server
db = setup_database(dbname, sync=sync)
s = server.Server(db, upstream=upstream, read_only=read_only)
(typ, a) = parse_address(addr)
if typ == ADDR_TYPE_UNIX:
s.start_unix_server(*a)
elif typ == ADDR_TYPE_WS:
url = urlparse(a[0])
s.start_websocket_server(url.hostname, url.port)
else:
s.start_tcp_server(*a)
return s
def create_client(addr, username=None, password=None):
def create_client(addr):
from . import client
c = client.Client()
c = client.Client(username, password)
(typ, a) = parse_address(addr)
if typ == ADDR_TYPE_UNIX:
c.connect_unix(*a)
else:
c.connect_tcp(*a)
try:
(typ, a) = parse_address(addr)
if typ == ADDR_TYPE_UNIX:
c.connect_unix(*a)
elif typ == ADDR_TYPE_WS:
c.connect_websocket(*a)
else:
c.connect_tcp(*a)
return c
except Exception as e:
c.close()
raise e
return c
async def create_async_client(addr, username=None, password=None):
async def create_async_client(addr):
from . import client
c = client.AsyncClient()
c = client.AsyncClient(username, password)
(typ, a) = parse_address(addr)
if typ == ADDR_TYPE_UNIX:
await c.connect_unix(*a)
else:
await c.connect_tcp(*a)
try:
(typ, a) = parse_address(addr)
if typ == ADDR_TYPE_UNIX:
await c.connect_unix(*a)
elif typ == ADDR_TYPE_WS:
await c.connect_websocket(*a)
else:
await c.connect_tcp(*a)
return c
except Exception as e:
await c.close()
raise e
return c

View File

@@ -5,430 +5,127 @@
import logging
import socket
import asyncio
import bb.asyncrpc
import json
from . import create_async_client
logger = logging.getLogger("hashserv.client")
class Batch(object):
def __init__(self):
self.done = False
self.cond = asyncio.Condition()
self.pending = []
self.results = []
self.sent_count = 0
async def recv(self, socket):
while True:
async with self.cond:
await self.cond.wait_for(lambda: self.pending or self.done)
if not self.pending:
if self.done:
return
continue
r = await socket.recv()
self.results.append(r)
async with self.cond:
self.pending.pop(0)
async def send(self, socket, msgs):
try:
# In the event of a restart due to a reconnect, all in-flight
# messages need to be resent first to keep the result count in sync
for m in self.pending:
await socket.send(m)
for m in msgs:
# Add the message to the pending list before attempting to send
# it so that if the send fails it will be retried
async with self.cond:
self.pending.append(m)
self.cond.notify()
self.sent_count += 1
await socket.send(m)
finally:
async with self.cond:
self.done = True
self.cond.notify()
async def process(self, socket, msgs):
await asyncio.gather(
self.recv(socket),
self.send(socket, msgs),
)
if len(self.results) != self.sent_count:
raise ValueError(
f"Expected result count {len(self.results)}. Expected {self.sent_count}"
)
return self.results
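A rough sketch of how the Batch helper above is driven (the socket object and the query values are illustrative):

    async def query_unihashes(socket, method, taskhashes):
        batch = Batch()
        # Queries are written as fast as possible while replies are read
        # concurrently, so several requests are in flight at once.
        return await batch.process(
            socket,
            (f"{method} {taskhash}" for taskhash in taskhashes),
        )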
class AsyncClient(bb.asyncrpc.AsyncClient):
MODE_NORMAL = 0
MODE_GET_STREAM = 1
MODE_EXIST_STREAM = 2
def __init__(self, username=None, password=None):
super().__init__("OEHASHEQUIV", "1.1", logger)
def __init__(self):
super().__init__('OEHASHEQUIV', '1.1', logger)
self.mode = self.MODE_NORMAL
self.username = username
self.password = password
self.saved_become_user = None
async def setup_connection(self):
await super().setup_connection()
cur_mode = self.mode
self.mode = self.MODE_NORMAL
if self.username:
# Save off become user temporarily because auth() resets it
become = self.saved_become_user
await self.auth(self.username, self.password)
if become:
await self.become_user(become)
async def send_stream_batch(self, mode, msgs):
"""
Does a "batch" process of stream messages. This sends the query
messages as fast as possible, and simultaneously attempts to read the
messages back. This helps to mitigate the effects of latency to the
hash equivalence server by allowing multiple queries to be "in-flight"
at once
The implementation does more complicated tracking using a count of sent
messages so that `msgs` can be a generator function (i.e. its length is
unknown)
"""
b = Batch()
await self._set_mode(cur_mode)
async def send_stream(self, msg):
async def proc():
nonlocal b
await self._set_mode(mode)
return await b.process(self.socket, msgs)
self.writer.write(("%s\n" % msg).encode("utf-8"))
await self.writer.drain()
l = await self.reader.readline()
if not l:
raise ConnectionError("Connection closed")
return l.decode("utf-8").rstrip()
return await self._send_wrapper(proc)
async def invoke(self, *args, **kwargs):
# It's OK if connection errors cause a failure here, because the mode
# is also reset to normal on a new connection
await self._set_mode(self.MODE_NORMAL)
return await super().invoke(*args, **kwargs)
async def _set_mode(self, new_mode):
async def stream_to_normal():
await self.socket.send("END")
return await self.socket.recv()
async def normal_to_stream(command):
r = await self.invoke({command: None})
if new_mode == self.MODE_NORMAL and self.mode == self.MODE_GET_STREAM:
r = await self.send_stream("END")
if r != "ok":
raise ConnectionError(
f"Unable to transition to stream mode: Bad response from server {r!r}"
)
self.logger.debug("Mode is now %s", command)
if new_mode == self.mode:
return
self.logger.debug("Transitioning mode %s -> %s", self.mode, new_mode)
# Always transition to normal mode before switching to any other mode
if self.mode != self.MODE_NORMAL:
r = await self._send_wrapper(stream_to_normal)
raise ConnectionError("Bad response from server %r" % r)
elif new_mode == self.MODE_GET_STREAM and self.mode == self.MODE_NORMAL:
r = await self.send_message({"get-stream": None})
if r != "ok":
self.check_invoke_error(r)
raise ConnectionError(
f"Unable to transition to normal mode: Bad response from server {r!r}"
)
self.logger.debug("Mode is now normal")
if new_mode == self.MODE_GET_STREAM:
await normal_to_stream("get-stream")
elif new_mode == self.MODE_EXIST_STREAM:
await normal_to_stream("exists-stream")
elif new_mode != self.MODE_NORMAL:
raise Exception("Undefined mode transition {self.mode!r} -> {new_mode!r}")
raise ConnectionError("Bad response from server %r" % r)
elif new_mode != self.mode:
raise Exception(
"Undefined mode transition %r -> %r" % (self.mode, new_mode)
)
self.mode = new_mode
async def get_unihash(self, method, taskhash):
r = await self.get_unihash_batch([(method, taskhash)])
return r[0]
async def get_unihash_batch(self, args):
result = await self.send_stream_batch(
self.MODE_GET_STREAM,
(f"{method} {taskhash}" for method, taskhash in args),
)
return [r if r else None for r in result]
await self._set_mode(self.MODE_GET_STREAM)
r = await self.send_stream("%s %s" % (method, taskhash))
if not r:
return None
return r
async def report_unihash(self, taskhash, method, outhash, unihash, extra={}):
await self._set_mode(self.MODE_NORMAL)
m = extra.copy()
m["taskhash"] = taskhash
m["method"] = method
m["outhash"] = outhash
m["unihash"] = unihash
return await self.invoke({"report": m})
return await self.send_message({"report": m})
async def report_unihash_equiv(self, taskhash, method, unihash, extra={}):
await self._set_mode(self.MODE_NORMAL)
m = extra.copy()
m["taskhash"] = taskhash
m["method"] = method
m["unihash"] = unihash
return await self.invoke({"report-equiv": m})
return await self.send_message({"report-equiv": m})
async def get_taskhash(self, method, taskhash, all_properties=False):
return await self.invoke(
await self._set_mode(self.MODE_NORMAL)
return await self.send_message(
{"get": {"taskhash": taskhash, "method": method, "all": all_properties}}
)
async def unihash_exists(self, unihash):
r = await self.unihash_exists_batch([unihash])
return r[0]
async def unihash_exists_batch(self, unihashes):
result = await self.send_stream_batch(self.MODE_EXIST_STREAM, unihashes)
return [r == "true" for r in result]
async def get_outhash(self, method, outhash, taskhash, with_unihash=True):
return await self.invoke(
{
"get-outhash": {
"outhash": outhash,
"taskhash": taskhash,
"method": method,
"with_unihash": with_unihash,
}
}
await self._set_mode(self.MODE_NORMAL)
return await self.send_message(
{"get-outhash": {"outhash": outhash, "taskhash": taskhash, "method": method, "with_unihash": with_unihash}}
)
async def get_stats(self):
return await self.invoke({"get-stats": None})
await self._set_mode(self.MODE_NORMAL)
return await self.send_message({"get-stats": None})
async def reset_stats(self):
return await self.invoke({"reset-stats": None})
await self._set_mode(self.MODE_NORMAL)
return await self.send_message({"reset-stats": None})
async def backfill_wait(self):
return (await self.invoke({"backfill-wait": None}))["tasks"]
await self._set_mode(self.MODE_NORMAL)
return (await self.send_message({"backfill-wait": None}))["tasks"]
async def remove(self, where):
return await self.invoke({"remove": {"where": where}})
await self._set_mode(self.MODE_NORMAL)
return await self.send_message({"remove": {"where": where}})
async def clean_unused(self, max_age):
return await self.invoke({"clean-unused": {"max_age_seconds": max_age}})
async def auth(self, username, token):
result = await self.invoke({"auth": {"username": username, "token": token}})
self.username = username
self.password = token
self.saved_become_user = None
return result
async def refresh_token(self, username=None):
m = {}
if username:
m["username"] = username
result = await self.invoke({"refresh-token": m})
if (
self.username
and not self.saved_become_user
and result["username"] == self.username
):
self.password = result["token"]
return result
async def set_user_perms(self, username, permissions):
return await self.invoke(
{"set-user-perms": {"username": username, "permissions": permissions}}
)
async def get_user(self, username=None):
m = {}
if username:
m["username"] = username
return await self.invoke({"get-user": m})
async def get_all_users(self):
return (await self.invoke({"get-all-users": {}}))["users"]
async def new_user(self, username, permissions):
return await self.invoke(
{"new-user": {"username": username, "permissions": permissions}}
)
async def delete_user(self, username):
return await self.invoke({"delete-user": {"username": username}})
async def become_user(self, username):
result = await self.invoke({"become-user": {"username": username}})
if username == self.username:
self.saved_become_user = None
else:
self.saved_become_user = username
return result
async def get_db_usage(self):
return (await self.invoke({"get-db-usage": {}}))["usage"]
async def get_db_query_columns(self):
return (await self.invoke({"get-db-query-columns": {}}))["columns"]
async def gc_status(self):
return await self.invoke({"gc-status": {}})
async def gc_mark(self, mark, where):
"""
Starts a new garbage collection operation identified by "mark". If
garbage collection is already in progress with "mark", the collection
is continued.
All unihash entries that match the "where" clause are marked to be
kept. In addition, any new entries added to the database after this
command will be automatically marked with "mark"
"""
return await self.invoke({"gc-mark": {"mark": mark, "where": where}})
async def gc_sweep(self, mark):
"""
Finishes garbage collection for "mark". All unihash entries that have
not been marked will be deleted.
It is recommended to clean unused outhash entries after running this to
clean up any dangling outhashes
"""
return await self.invoke({"gc-sweep": {"mark": mark}})
await self._set_mode(self.MODE_NORMAL)
return await self.send_message({"clean-unused": {"max_age_seconds": max_age}})
class Client(bb.asyncrpc.Client):
def __init__(self, username=None, password=None):
self.username = username
self.password = password
def __init__(self):
super().__init__()
self._add_methods(
"connect_tcp",
"connect_websocket",
"get_unihash",
"get_unihash_batch",
"report_unihash",
"report_unihash_equiv",
"get_taskhash",
"unihash_exists",
"unihash_exists_batch",
"get_outhash",
"get_stats",
"reset_stats",
"backfill_wait",
"remove",
"clean_unused",
"auth",
"refresh_token",
"set_user_perms",
"get_user",
"get_all_users",
"new_user",
"delete_user",
"become_user",
"get_db_usage",
"get_db_query_columns",
"gc_status",
"gc_mark",
"gc_sweep",
)
def _get_async_client(self):
return AsyncClient(self.username, self.password)
class ClientPool(bb.asyncrpc.ClientPool):
def __init__(
self,
address,
max_clients,
*,
username=None,
password=None,
become=None,
):
super().__init__(max_clients)
self.address = address
self.username = username
self.password = password
self.become = become
async def _new_client(self):
client = await create_async_client(
self.address,
username=self.username,
password=self.password,
)
if self.become:
await client.become_user(self.become)
return client
def _run_key_tasks(self, queries, call):
results = {key: None for key in queries.keys()}
def make_task(key, args):
async def task(client):
nonlocal results
unihash = await call(client, args)
results[key] = unihash
return task
def gen_tasks():
for key, args in queries.items():
yield make_task(key, args)
self.run_tasks(gen_tasks())
return results
def get_unihashes(self, queries):
"""
Query multiple unihashes in parallel.
The queries argument is a dictionary with arbitrary key. The values
must be a tuple of (method, taskhash).
Returns a dictionary with a corresponding key for each input key, and
the value is the queried unihash (which might be None if the query
failed)
"""
async def call(client, args):
method, taskhash = args
return await client.get_unihash(method, taskhash)
return self._run_key_tasks(queries, call)
def unihashes_exist(self, queries):
"""
Query multiple unihash existence checks in parallel.
The queries argument is a dictionary with arbitrary key. The values
must be a unihash.
Returns a dictionary with a corresponding key for each input key, and
the value is True or False if the unihash is known by the server (or
None if there was a failure)
"""
async def call(client, unihash):
return await client.unihash_exists(unihash)
return self._run_key_tasks(queries, call)
return AsyncClient()
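A usage sketch for ClientPool above (address, keys and hashes are illustrative):

    pool = ClientPool("unix:///run/hashserve.sock", max_clients=4)
    queries = {
        "task-a": ("sstate_output_hash", "aa" * 32),
        "task-b": ("sstate_output_hash", "bb" * 32),
    }
    unihashes = pool.get_unihashes(queries)        # {key: unihash or None}
    known = pool.unihashes_exist({k: v for k, v in unihashes.items() if v})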

File diff suppressed because it is too large

View File

@@ -1,598 +0,0 @@
#! /usr/bin/env python3
#
# Copyright (C) 2023 Garmin Ltd.
#
# SPDX-License-Identifier: GPL-2.0-only
#
import logging
from datetime import datetime
from . import User
from sqlalchemy.ext.asyncio import create_async_engine
from sqlalchemy.pool import NullPool
from sqlalchemy import (
MetaData,
Column,
Table,
Text,
Integer,
UniqueConstraint,
DateTime,
Index,
select,
insert,
exists,
literal,
and_,
delete,
update,
func,
inspect,
)
import sqlalchemy.engine
from sqlalchemy.orm import declarative_base
from sqlalchemy.exc import IntegrityError
from sqlalchemy.dialects.postgresql import insert as postgres_insert
Base = declarative_base()
class UnihashesV3(Base):
__tablename__ = "unihashes_v3"
id = Column(Integer, primary_key=True, autoincrement=True)
method = Column(Text, nullable=False)
taskhash = Column(Text, nullable=False)
unihash = Column(Text, nullable=False)
gc_mark = Column(Text, nullable=False)
__table_args__ = (
UniqueConstraint("method", "taskhash"),
Index("taskhash_lookup_v4", "method", "taskhash"),
Index("unihash_lookup_v1", "unihash"),
)
class OuthashesV2(Base):
__tablename__ = "outhashes_v2"
id = Column(Integer, primary_key=True, autoincrement=True)
method = Column(Text, nullable=False)
taskhash = Column(Text, nullable=False)
outhash = Column(Text, nullable=False)
created = Column(DateTime)
owner = Column(Text)
PN = Column(Text)
PV = Column(Text)
PR = Column(Text)
task = Column(Text)
outhash_siginfo = Column(Text)
__table_args__ = (
UniqueConstraint("method", "taskhash", "outhash"),
Index("outhash_lookup_v3", "method", "outhash"),
)
class Users(Base):
__tablename__ = "users"
id = Column(Integer, primary_key=True, autoincrement=True)
username = Column(Text, nullable=False)
token = Column(Text, nullable=False)
permissions = Column(Text)
__table_args__ = (UniqueConstraint("username"),)
class Config(Base):
__tablename__ = "config"
id = Column(Integer, primary_key=True, autoincrement=True)
name = Column(Text, nullable=False)
value = Column(Text)
__table_args__ = (
UniqueConstraint("name"),
Index("config_lookup", "name"),
)
#
# Old table versions
#
DeprecatedBase = declarative_base()
class UnihashesV2(DeprecatedBase):
__tablename__ = "unihashes_v2"
id = Column(Integer, primary_key=True, autoincrement=True)
method = Column(Text, nullable=False)
taskhash = Column(Text, nullable=False)
unihash = Column(Text, nullable=False)
__table_args__ = (
UniqueConstraint("method", "taskhash"),
Index("taskhash_lookup_v3", "method", "taskhash"),
)
class DatabaseEngine(object):
def __init__(self, url, username=None, password=None):
self.logger = logging.getLogger("hashserv.sqlalchemy")
self.url = sqlalchemy.engine.make_url(url)
if username is not None:
self.url = self.url.set(username=username)
if password is not None:
self.url = self.url.set(password=password)
async def create(self):
def check_table_exists(conn, name):
return inspect(conn).has_table(name)
self.logger.info("Using database %s", self.url)
if self.url.drivername == 'postgresql+psycopg':
# Psycopg 3 (psycopg) driver can handle async connection pooling
self.engine = create_async_engine(self.url, max_overflow=-1)
else:
self.engine = create_async_engine(self.url, poolclass=NullPool)
async with self.engine.begin() as conn:
# Create tables
self.logger.info("Creating tables...")
await conn.run_sync(Base.metadata.create_all)
if await conn.run_sync(check_table_exists, UnihashesV2.__tablename__):
self.logger.info("Upgrading Unihashes V2 -> V3...")
statement = insert(UnihashesV3).from_select(
["id", "method", "unihash", "taskhash", "gc_mark"],
select(
UnihashesV2.id,
UnihashesV2.method,
UnihashesV2.unihash,
UnihashesV2.taskhash,
literal("").label("gc_mark"),
),
)
self.logger.debug("%s", statement)
await conn.execute(statement)
await conn.run_sync(Base.metadata.drop_all, [UnihashesV2.__table__])
self.logger.info("Upgrade complete")
def connect(self, logger):
return Database(self.engine, logger)
def map_row(row):
if row is None:
return None
return dict(**row._mapping)
def map_user(row):
if row is None:
return None
return User(
username=row.username,
permissions=set(row.permissions.split()),
)
def _make_condition_statement(table, condition):
where = {}
for c in table.__table__.columns:
if c.key in condition and condition[c.key] is not None:
where[c] = condition[c.key]
return [(k == v) for k, v in where.items()]
class Database(object):
def __init__(self, engine, logger):
self.engine = engine
self.db = None
self.logger = logger
async def __aenter__(self):
self.db = await self.engine.connect()
return self
async def __aexit__(self, exc_type, exc_value, traceback):
await self.close()
async def close(self):
await self.db.close()
self.db = None
async def _execute(self, statement):
self.logger.debug("%s", statement)
return await self.db.execute(statement)
async def _set_config(self, name, value):
while True:
result = await self._execute(
update(Config).where(Config.name == name).values(value=value)
)
if result.rowcount == 0:
self.logger.debug("Config '%s' not found. Adding it", name)
try:
await self._execute(insert(Config).values(name=name, value=value))
except IntegrityError:
# Race. Try again
continue
break
def _get_config_subquery(self, name, default=None):
if default is not None:
return func.coalesce(
select(Config.value).where(Config.name == name).scalar_subquery(),
default,
)
return select(Config.value).where(Config.name == name).scalar_subquery()
async def _get_config(self, name):
result = await self._execute(select(Config.value).where(Config.name == name))
row = result.first()
if row is None:
return None
return row.value
async def get_unihash_by_taskhash_full(self, method, taskhash):
async with self.db.begin():
result = await self._execute(
select(
OuthashesV2,
UnihashesV3.unihash.label("unihash"),
)
.join(
UnihashesV3,
and_(
UnihashesV3.method == OuthashesV2.method,
UnihashesV3.taskhash == OuthashesV2.taskhash,
),
)
.where(
OuthashesV2.method == method,
OuthashesV2.taskhash == taskhash,
)
.order_by(
OuthashesV2.created.asc(),
)
.limit(1)
)
return map_row(result.first())
async def get_unihash_by_outhash(self, method, outhash):
async with self.db.begin():
result = await self._execute(
select(OuthashesV2, UnihashesV3.unihash.label("unihash"))
.join(
UnihashesV3,
and_(
UnihashesV3.method == OuthashesV2.method,
UnihashesV3.taskhash == OuthashesV2.taskhash,
),
)
.where(
OuthashesV2.method == method,
OuthashesV2.outhash == outhash,
)
.order_by(
OuthashesV2.created.asc(),
)
.limit(1)
)
return map_row(result.first())
async def unihash_exists(self, unihash):
async with self.db.begin():
result = await self._execute(
select(UnihashesV3).where(UnihashesV3.unihash == unihash).limit(1)
)
return result.first() is not None
async def get_outhash(self, method, outhash):
async with self.db.begin():
result = await self._execute(
select(OuthashesV2)
.where(
OuthashesV2.method == method,
OuthashesV2.outhash == outhash,
)
.order_by(
OuthashesV2.created.asc(),
)
.limit(1)
)
return map_row(result.first())
async def get_equivalent_for_outhash(self, method, outhash, taskhash):
async with self.db.begin():
result = await self._execute(
select(
OuthashesV2.taskhash.label("taskhash"),
UnihashesV3.unihash.label("unihash"),
)
.join(
UnihashesV3,
and_(
UnihashesV3.method == OuthashesV2.method,
UnihashesV3.taskhash == OuthashesV2.taskhash,
),
)
.where(
OuthashesV2.method == method,
OuthashesV2.outhash == outhash,
OuthashesV2.taskhash != taskhash,
)
.order_by(
OuthashesV2.created.asc(),
)
.limit(1)
)
return map_row(result.first())
async def get_equivalent(self, method, taskhash):
async with self.db.begin():
result = await self._execute(
select(
UnihashesV3.unihash,
UnihashesV3.method,
UnihashesV3.taskhash,
).where(
UnihashesV3.method == method,
UnihashesV3.taskhash == taskhash,
)
)
return map_row(result.first())
async def remove(self, condition):
async def do_remove(table):
where = _make_condition_statement(table, condition)
if where:
async with self.db.begin():
result = await self._execute(delete(table).where(*where))
return result.rowcount
return 0
count = 0
count += await do_remove(UnihashesV3)
count += await do_remove(OuthashesV2)
return count
async def get_current_gc_mark(self):
async with self.db.begin():
return await self._get_config("gc-mark")
async def gc_status(self):
async with self.db.begin():
gc_mark_subquery = self._get_config_subquery("gc-mark", "")
result = await self._execute(
select(func.count())
.select_from(UnihashesV3)
.where(UnihashesV3.gc_mark == gc_mark_subquery)
)
keep_rows = result.scalar()
result = await self._execute(
select(func.count())
.select_from(UnihashesV3)
.where(UnihashesV3.gc_mark != gc_mark_subquery)
)
remove_rows = result.scalar()
return (keep_rows, remove_rows, await self._get_config("gc-mark"))
async def gc_mark(self, mark, condition):
async with self.db.begin():
await self._set_config("gc-mark", mark)
where = _make_condition_statement(UnihashesV3, condition)
if not where:
return 0
result = await self._execute(
update(UnihashesV3)
.values(gc_mark=self._get_config_subquery("gc-mark", ""))
.where(*where)
)
return result.rowcount
async def gc_sweep(self):
async with self.db.begin():
result = await self._execute(
delete(UnihashesV3).where(
# A sneaky conditional that provides some errant use
# protection: If the config mark is NULL, this will not
# match any rows because no default is specified in the
# select statement
UnihashesV3.gc_mark
!= self._get_config_subquery("gc-mark")
)
)
await self._set_config("gc-mark", None)
return result.rowcount
async def clean_unused(self, oldest):
async with self.db.begin():
result = await self._execute(
delete(OuthashesV2).where(
OuthashesV2.created < oldest,
~(
select(UnihashesV3.id)
.where(
UnihashesV3.method == OuthashesV2.method,
UnihashesV3.taskhash == OuthashesV2.taskhash,
)
.limit(1)
.exists()
),
)
)
return result.rowcount
async def insert_unihash(self, method, taskhash, unihash):
# Postgres specific ignore on insert duplicate
if self.engine.name == "postgresql":
statement = (
postgres_insert(UnihashesV3)
.values(
method=method,
taskhash=taskhash,
unihash=unihash,
gc_mark=self._get_config_subquery("gc-mark", ""),
)
.on_conflict_do_nothing(index_elements=("method", "taskhash"))
)
else:
statement = insert(UnihashesV3).values(
method=method,
taskhash=taskhash,
unihash=unihash,
gc_mark=self._get_config_subquery("gc-mark", ""),
)
try:
async with self.db.begin():
result = await self._execute(statement)
return result.rowcount != 0
except IntegrityError:
self.logger.debug(
"%s, %s, %s already in unihash database", method, taskhash, unihash
)
return False
async def insert_outhash(self, data):
outhash_columns = set(c.key for c in OuthashesV2.__table__.columns)
data = {k: v for k, v in data.items() if k in outhash_columns}
if "created" in data and not isinstance(data["created"], datetime):
data["created"] = datetime.fromisoformat(data["created"])
# Postgres specific ignore on insert duplicate
if self.engine.name == "postgresql":
statement = (
postgres_insert(OuthashesV2)
.values(**data)
.on_conflict_do_nothing(
index_elements=("method", "taskhash", "outhash")
)
)
else:
statement = insert(OuthashesV2).values(**data)
try:
async with self.db.begin():
result = await self._execute(statement)
return result.rowcount != 0
except IntegrityError:
self.logger.debug(
"%s, %s already in outhash database", data["method"], data["outhash"]
)
return False
async def _get_user(self, username):
async with self.db.begin():
result = await self._execute(
select(
Users.username,
Users.permissions,
Users.token,
).where(
Users.username == username,
)
)
return result.first()
async def lookup_user_token(self, username):
row = await self._get_user(username)
if not row:
return None, None
return map_user(row), row.token
async def lookup_user(self, username):
return map_user(await self._get_user(username))
async def set_user_token(self, username, token):
async with self.db.begin():
result = await self._execute(
update(Users)
.where(
Users.username == username,
)
.values(
token=token,
)
)
return result.rowcount != 0
async def set_user_perms(self, username, permissions):
async with self.db.begin():
result = await self._execute(
update(Users)
.where(Users.username == username)
.values(permissions=" ".join(permissions))
)
return result.rowcount != 0
async def get_all_users(self):
async with self.db.begin():
result = await self._execute(
select(
Users.username,
Users.permissions,
)
)
return [map_user(row) for row in result]
async def new_user(self, username, permissions, token):
try:
async with self.db.begin():
await self._execute(
insert(Users).values(
username=username,
permissions=" ".join(permissions),
token=token,
)
)
return True
except IntegrityError as e:
self.logger.debug("Cannot create new user %s: %s", username, e)
return False
async def delete_user(self, username):
async with self.db.begin():
result = await self._execute(
delete(Users).where(Users.username == username)
)
return result.rowcount != 0
async def get_usage(self):
usage = {}
async with self.db.begin() as session:
for name, table in Base.metadata.tables.items():
result = await self._execute(
statement=select(func.count()).select_from(table)
)
usage[name] = {
"rows": result.scalar(),
}
return usage
async def get_query_columns(self):
columns = set()
for table in (UnihashesV3, OuthashesV2):
for c in table.__table__.columns:
if not isinstance(c.type, Text):
continue
columns.add(c.key)
return list(columns)

View File

@@ -1,562 +0,0 @@
#! /usr/bin/env python3
#
# Copyright (C) 2023 Garmin Ltd.
#
# SPDX-License-Identifier: GPL-2.0-only
#
import sqlite3
import logging
from contextlib import closing
from . import User
logger = logging.getLogger("hashserv.sqlite")
UNIHASH_TABLE_DEFINITION = (
("method", "TEXT NOT NULL", "UNIQUE"),
("taskhash", "TEXT NOT NULL", "UNIQUE"),
("unihash", "TEXT NOT NULL", ""),
("gc_mark", "TEXT NOT NULL", ""),
)
UNIHASH_TABLE_COLUMNS = tuple(name for name, _, _ in UNIHASH_TABLE_DEFINITION)
OUTHASH_TABLE_DEFINITION = (
("method", "TEXT NOT NULL", "UNIQUE"),
("taskhash", "TEXT NOT NULL", "UNIQUE"),
("outhash", "TEXT NOT NULL", "UNIQUE"),
("created", "DATETIME", ""),
# Optional fields
("owner", "TEXT", ""),
("PN", "TEXT", ""),
("PV", "TEXT", ""),
("PR", "TEXT", ""),
("task", "TEXT", ""),
("outhash_siginfo", "TEXT", ""),
)
OUTHASH_TABLE_COLUMNS = tuple(name for name, _, _ in OUTHASH_TABLE_DEFINITION)
USERS_TABLE_DEFINITION = (
("username", "TEXT NOT NULL", "UNIQUE"),
("token", "TEXT NOT NULL", ""),
("permissions", "TEXT NOT NULL", ""),
)
USERS_TABLE_COLUMNS = tuple(name for name, _, _ in USERS_TABLE_DEFINITION)
CONFIG_TABLE_DEFINITION = (
("name", "TEXT NOT NULL", "UNIQUE"),
("value", "TEXT", ""),
)
CONFIG_TABLE_COLUMNS = tuple(name for name, _, _ in CONFIG_TABLE_DEFINITION)
def _make_table(cursor, name, definition):
cursor.execute(
"""
CREATE TABLE IF NOT EXISTS {name} (
id INTEGER PRIMARY KEY AUTOINCREMENT,
{fields}
UNIQUE({unique})
)
""".format(
name=name,
fields=" ".join("%s %s," % (name, typ) for name, typ, _ in definition),
unique=", ".join(
name for name, _, flags in definition if "UNIQUE" in flags
),
)
)
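As a quick illustration of the SQL that _make_table() above generates, here is what it produces for CONFIG_TABLE_DEFINITION when run against a throwaway in-memory database (example only):

    import sqlite3
    from contextlib import closing

    db = sqlite3.connect(":memory:")
    with closing(db.cursor()) as cursor:
        _make_table(cursor, "config", CONFIG_TABLE_DEFINITION)
        # Executes roughly:
        #   CREATE TABLE IF NOT EXISTS config (
        #       id INTEGER PRIMARY KEY AUTOINCREMENT,
        #       name TEXT NOT NULL, value TEXT,
        #       UNIQUE(name)
        #   )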
def map_user(row):
if row is None:
return None
return User(
username=row["username"],
permissions=set(row["permissions"].split()),
)
def _make_condition_statement(columns, condition):
where = {}
for c in columns:
if c in condition and condition[c] is not None:
where[c] = condition[c]
return where, " AND ".join("%s=:%s" % (k, k) for k in where.keys())
def _get_sqlite_version(cursor):
cursor.execute("SELECT sqlite_version()")
version = []
for v in cursor.fetchone()[0].split("."):
try:
version.append(int(v))
except ValueError:
version.append(v)
return tuple(version)
def _schema_table_name(version):
if version >= (3, 33):
return "sqlite_schema"
return "sqlite_master"
class DatabaseEngine(object):
def __init__(self, dbname, sync):
self.dbname = dbname
self.logger = logger
self.sync = sync
async def create(self):
db = sqlite3.connect(self.dbname)
db.row_factory = sqlite3.Row
with closing(db.cursor()) as cursor:
_make_table(cursor, "unihashes_v3", UNIHASH_TABLE_DEFINITION)
_make_table(cursor, "outhashes_v2", OUTHASH_TABLE_DEFINITION)
_make_table(cursor, "users", USERS_TABLE_DEFINITION)
_make_table(cursor, "config", CONFIG_TABLE_DEFINITION)
cursor.execute("PRAGMA journal_mode = WAL")
cursor.execute(
"PRAGMA synchronous = %s" % ("NORMAL" if self.sync else "OFF")
)
# Drop old indexes
cursor.execute("DROP INDEX IF EXISTS taskhash_lookup")
cursor.execute("DROP INDEX IF EXISTS outhash_lookup")
cursor.execute("DROP INDEX IF EXISTS taskhash_lookup_v2")
cursor.execute("DROP INDEX IF EXISTS outhash_lookup_v2")
cursor.execute("DROP INDEX IF EXISTS taskhash_lookup_v3")
# TODO: Upgrade from tasks_v2?
cursor.execute("DROP TABLE IF EXISTS tasks_v2")
# Create new indexes
cursor.execute(
"CREATE INDEX IF NOT EXISTS taskhash_lookup_v4 ON unihashes_v3 (method, taskhash)"
)
cursor.execute(
"CREATE INDEX IF NOT EXISTS unihash_lookup_v1 ON unihashes_v3 (unihash)"
)
cursor.execute(
"CREATE INDEX IF NOT EXISTS outhash_lookup_v3 ON outhashes_v2 (method, outhash)"
)
cursor.execute("CREATE INDEX IF NOT EXISTS config_lookup ON config (name)")
sqlite_version = _get_sqlite_version(cursor)
cursor.execute(
f"""
SELECT name FROM {_schema_table_name(sqlite_version)} WHERE type = 'table' AND name = 'unihashes_v2'
"""
)
if cursor.fetchone():
self.logger.info("Upgrading Unihashes V2 -> V3...")
cursor.execute(
"""
INSERT INTO unihashes_v3 (id, method, unihash, taskhash, gc_mark)
SELECT id, method, unihash, taskhash, '' FROM unihashes_v2
"""
)
cursor.execute("DROP TABLE unihashes_v2")
db.commit()
self.logger.info("Upgrade complete")
def connect(self, logger):
return Database(logger, self.dbname, self.sync)
class Database(object):
def __init__(self, logger, dbname, sync):
self.dbname = dbname
self.logger = logger
self.db = sqlite3.connect(self.dbname)
self.db.row_factory = sqlite3.Row
with closing(self.db.cursor()) as cursor:
cursor.execute("PRAGMA journal_mode = WAL")
cursor.execute(
"PRAGMA synchronous = %s" % ("NORMAL" if sync else "OFF")
)
self.sqlite_version = _get_sqlite_version(cursor)
async def __aenter__(self):
return self
async def __aexit__(self, exc_type, exc_value, traceback):
await self.close()
async def _set_config(self, cursor, name, value):
cursor.execute(
"""
INSERT OR REPLACE INTO config (id, name, value) VALUES
((SELECT id FROM config WHERE name=:name), :name, :value)
""",
{
"name": name,
"value": value,
},
)
async def _get_config(self, cursor, name):
cursor.execute(
"SELECT value FROM config WHERE name=:name",
{
"name": name,
},
)
row = cursor.fetchone()
if row is None:
return None
return row["value"]
async def close(self):
self.db.close()
async def get_unihash_by_taskhash_full(self, method, taskhash):
with closing(self.db.cursor()) as cursor:
cursor.execute(
"""
SELECT *, unihashes_v3.unihash AS unihash FROM outhashes_v2
INNER JOIN unihashes_v3 ON unihashes_v3.method=outhashes_v2.method AND unihashes_v3.taskhash=outhashes_v2.taskhash
WHERE outhashes_v2.method=:method AND outhashes_v2.taskhash=:taskhash
ORDER BY outhashes_v2.created ASC
LIMIT 1
""",
{
"method": method,
"taskhash": taskhash,
},
)
return cursor.fetchone()
async def get_unihash_by_outhash(self, method, outhash):
with closing(self.db.cursor()) as cursor:
cursor.execute(
"""
SELECT *, unihashes_v3.unihash AS unihash FROM outhashes_v2
INNER JOIN unihashes_v3 ON unihashes_v3.method=outhashes_v2.method AND unihashes_v3.taskhash=outhashes_v2.taskhash
WHERE outhashes_v2.method=:method AND outhashes_v2.outhash=:outhash
ORDER BY outhashes_v2.created ASC
LIMIT 1
""",
{
"method": method,
"outhash": outhash,
},
)
return cursor.fetchone()
async def unihash_exists(self, unihash):
with closing(self.db.cursor()) as cursor:
cursor.execute(
"""
SELECT * FROM unihashes_v3 WHERE unihash=:unihash
LIMIT 1
""",
{
"unihash": unihash,
},
)
return cursor.fetchone() is not None
async def get_outhash(self, method, outhash):
with closing(self.db.cursor()) as cursor:
cursor.execute(
"""
SELECT * FROM outhashes_v2
WHERE outhashes_v2.method=:method AND outhashes_v2.outhash=:outhash
ORDER BY outhashes_v2.created ASC
LIMIT 1
""",
{
"method": method,
"outhash": outhash,
},
)
return cursor.fetchone()
async def get_equivalent_for_outhash(self, method, outhash, taskhash):
with closing(self.db.cursor()) as cursor:
cursor.execute(
"""
SELECT outhashes_v2.taskhash AS taskhash, unihashes_v3.unihash AS unihash FROM outhashes_v2
INNER JOIN unihashes_v3 ON unihashes_v3.method=outhashes_v2.method AND unihashes_v3.taskhash=outhashes_v2.taskhash
-- Select any matching output hash except the one we just inserted
WHERE outhashes_v2.method=:method AND outhashes_v2.outhash=:outhash AND outhashes_v2.taskhash!=:taskhash
-- Pick the oldest hash
ORDER BY outhashes_v2.created ASC
LIMIT 1
""",
{
"method": method,
"outhash": outhash,
"taskhash": taskhash,
},
)
return cursor.fetchone()
async def get_equivalent(self, method, taskhash):
with closing(self.db.cursor()) as cursor:
cursor.execute(
"SELECT taskhash, method, unihash FROM unihashes_v3 WHERE method=:method AND taskhash=:taskhash",
{
"method": method,
"taskhash": taskhash,
},
)
return cursor.fetchone()
async def remove(self, condition):
def do_remove(columns, table_name, cursor):
where, clause = _make_condition_statement(columns, condition)
if where:
query = f"DELETE FROM {table_name} WHERE {clause}"
cursor.execute(query, where)
return cursor.rowcount
return 0
count = 0
with closing(self.db.cursor()) as cursor:
count += do_remove(OUTHASH_TABLE_COLUMNS, "outhashes_v2", cursor)
count += do_remove(UNIHASH_TABLE_COLUMNS, "unihashes_v3", cursor)
self.db.commit()
return count
async def get_current_gc_mark(self):
with closing(self.db.cursor()) as cursor:
return await self._get_config(cursor, "gc-mark")
async def gc_status(self):
with closing(self.db.cursor()) as cursor:
cursor.execute(
"""
SELECT COUNT() FROM unihashes_v3 WHERE
gc_mark=COALESCE((SELECT value FROM config WHERE name='gc-mark'), '')
"""
)
keep_rows = cursor.fetchone()[0]
cursor.execute(
"""
SELECT COUNT() FROM unihashes_v3 WHERE
gc_mark!=COALESCE((SELECT value FROM config WHERE name='gc-mark'), '')
"""
)
remove_rows = cursor.fetchone()[0]
current_mark = await self._get_config(cursor, "gc-mark")
return (keep_rows, remove_rows, current_mark)
async def gc_mark(self, mark, condition):
with closing(self.db.cursor()) as cursor:
await self._set_config(cursor, "gc-mark", mark)
where, clause = _make_condition_statement(UNIHASH_TABLE_COLUMNS, condition)
new_rows = 0
if where:
cursor.execute(
f"""
UPDATE unihashes_v3 SET
gc_mark=COALESCE((SELECT value FROM config WHERE name='gc-mark'), '')
WHERE {clause}
""",
where,
)
new_rows = cursor.rowcount
self.db.commit()
return new_rows
async def gc_sweep(self):
with closing(self.db.cursor()) as cursor:
# NOTE: COALESCE is not used in this query so that if the current
# mark is NULL, nothing will happen
cursor.execute(
"""
DELETE FROM unihashes_v3 WHERE
gc_mark!=(SELECT value FROM config WHERE name='gc-mark')
"""
)
count = cursor.rowcount
await self._set_config(cursor, "gc-mark", None)
self.db.commit()
return count
async def clean_unused(self, oldest):
with closing(self.db.cursor()) as cursor:
cursor.execute(
"""
DELETE FROM outhashes_v2 WHERE created<:oldest AND NOT EXISTS (
SELECT unihashes_v3.id FROM unihashes_v3 WHERE unihashes_v3.method=outhashes_v2.method AND unihashes_v3.taskhash=outhashes_v2.taskhash LIMIT 1
)
""",
{
"oldest": oldest,
},
)
self.db.commit()
return cursor.rowcount
async def insert_unihash(self, method, taskhash, unihash):
with closing(self.db.cursor()) as cursor:
prevrowid = cursor.lastrowid
cursor.execute(
"""
INSERT OR IGNORE INTO unihashes_v3 (method, taskhash, unihash, gc_mark) VALUES
(
:method,
:taskhash,
:unihash,
COALESCE((SELECT value FROM config WHERE name='gc-mark'), '')
)
""",
{
"method": method,
"taskhash": taskhash,
"unihash": unihash,
},
)
self.db.commit()
return cursor.lastrowid != prevrowid
async def insert_outhash(self, data):
data = {k: v for k, v in data.items() if k in OUTHASH_TABLE_COLUMNS}
keys = sorted(data.keys())
query = "INSERT OR IGNORE INTO outhashes_v2 ({fields}) VALUES({values})".format(
fields=", ".join(keys),
values=", ".join(":" + k for k in keys),
)
with closing(self.db.cursor()) as cursor:
prevrowid = cursor.lastrowid
cursor.execute(query, data)
self.db.commit()
return cursor.lastrowid != prevrowid
def _get_user(self, username):
with closing(self.db.cursor()) as cursor:
cursor.execute(
"""
SELECT username, permissions, token FROM users WHERE username=:username
""",
{
"username": username,
},
)
return cursor.fetchone()
async def lookup_user_token(self, username):
row = self._get_user(username)
if row is None:
return None, None
return map_user(row), row["token"]
async def lookup_user(self, username):
return map_user(self._get_user(username))
async def set_user_token(self, username, token):
with closing(self.db.cursor()) as cursor:
cursor.execute(
"""
UPDATE users SET token=:token WHERE username=:username
""",
{
"username": username,
"token": token,
},
)
self.db.commit()
return cursor.rowcount != 0
async def set_user_perms(self, username, permissions):
with closing(self.db.cursor()) as cursor:
cursor.execute(
"""
UPDATE users SET permissions=:permissions WHERE username=:username
""",
{
"username": username,
"permissions": " ".join(permissions),
},
)
self.db.commit()
return cursor.rowcount != 0
async def get_all_users(self):
with closing(self.db.cursor()) as cursor:
cursor.execute("SELECT username, permissions FROM users")
return [map_user(r) for r in cursor.fetchall()]
async def new_user(self, username, permissions, token):
with closing(self.db.cursor()) as cursor:
try:
cursor.execute(
"""
INSERT INTO users (username, token, permissions) VALUES (:username, :token, :permissions)
""",
{
"username": username,
"token": token,
"permissions": " ".join(permissions),
},
)
self.db.commit()
return True
except sqlite3.IntegrityError:
return False
async def delete_user(self, username):
with closing(self.db.cursor()) as cursor:
cursor.execute(
"""
DELETE FROM users WHERE username=:username
""",
{
"username": username,
},
)
self.db.commit()
return cursor.rowcount != 0
async def get_usage(self):
usage = {}
with closing(self.db.cursor()) as cursor:
cursor.execute(
f"""
SELECT name FROM {_schema_table_name(self.sqlite_version)} WHERE type = 'table' AND name NOT LIKE 'sqlite_%'
"""
)
for row in cursor.fetchall():
cursor.execute(
"""
SELECT COUNT() FROM %s
"""
% row["name"],
)
usage[row["name"]] = {
"rows": cursor.fetchone()[0],
}
return usage
async def get_query_columns(self):
columns = set()
for name, typ, _ in UNIHASH_TABLE_DEFINITION + OUTHASH_TABLE_DEFINITION:
if typ.startswith("TEXT"):
columns.add(name)
return list(columns)
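The gc_mark()/gc_sweep() pair above implements a straightforward mark-and-sweep collection over unihashes_v3: a fresh mark value is written to the config table, rows to keep are re-stamped with it, and the sweep deletes every row still carrying a different (or empty) mark. The standalone sketch below only illustrates those three phases with a throwaway in-memory database; the table contents and the uuid-based mark are hypothetical and mirror the real schema loosely.

import sqlite3
import uuid

# Hypothetical in-memory miniature of the tables used above.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE config (name TEXT PRIMARY KEY, value TEXT)")
db.execute("CREATE TABLE unihashes_v3 (taskhash TEXT, unihash TEXT, gc_mark TEXT)")
db.executemany(
    "INSERT INTO unihashes_v3 VALUES (?, ?, '')",
    [("t1", "u1"), ("t2", "u2"), ("t3", "u3")],
)

# Phase 1: record a fresh mark in the config table.
mark = uuid.uuid4().hex
db.execute("INSERT OR REPLACE INTO config (name, value) VALUES ('gc-mark', ?)", (mark,))

# Phase 2: re-stamp every row that should survive (here: t1 and t2).
db.execute(
    """
    UPDATE unihashes_v3 SET
        gc_mark=COALESCE((SELECT value FROM config WHERE name='gc-mark'), '')
    WHERE taskhash IN ('t1', 't2')
    """
)

# Phase 3: sweep anything still carrying an old (or empty) mark.
swept = db.execute(
    "DELETE FROM unihashes_v3 WHERE gc_mark!=(SELECT value FROM config WHERE name='gc-mark')"
).rowcount
print("swept rows:", swept)  # 1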

File diff suppressed because it is too large

View File

@@ -178,9 +178,9 @@ class LayerIndex():
'''Load the layerindex.
indexURI - An index to load. (Use multiple calls to load multiple indexes)
reload - If reload is True, then any previously loaded indexes will be forgotten.
load - List of elements to load. Default loads all items.
Note: plugins may ignore this.
@@ -383,14 +383,7 @@ layerBranches set. If not, they are effectively blank.'''
# Get a list of dependencies and then recursively process them
for layerdependency in layerbranch.index.layerDependencies_layerBranchId[layerbranch.id]:
try:
deplayerbranch = layerdependency.dependency_layerBranch
except AttributeError as e:
logger.error('LayerBranch does not exist for dependent layer {}:{}\n' \
' Cannot continue successfully.\n' \
' You might be able to resolve this by checking out the layer locally.\n' \
' Consider reaching out to the layer maintainers or the layerindex admins' \
.format(layerdependency.dependency.name, layerbranch.branch.name))
deplayerbranch = layerdependency.dependency_layerBranch
if ignores and deplayerbranch.layer.name in ignores:
continue
@@ -853,7 +846,7 @@ class LayerIndexObj():
continue
for layerdependency in layerbranch.index.layerDependencies_layerBranchId[layerbranch.id]:
deplayerbranch = layerdependency.dependency_layerBranch or None
deplayerbranch = layerdependency.dependency_layerBranch
if ignores and deplayerbranch.layer.name in ignores:
continue

View File

@@ -253,7 +253,7 @@ class ProgressBar(object):
if (self.maxval is not UnknownLength
and not 0 <= value <= self.maxval):
self.maxval = value
raise ValueError('Value out of range')
self.currval = value

View File

@@ -7,13 +7,13 @@
__version__ = "1.0.0"
import os, time
import sys, logging
import sys,logging
def init_logger(logfile, loglevel):
numeric_level = getattr(logging, loglevel.upper(), None)
if not isinstance(numeric_level, int):
raise ValueError("Invalid log level: %s" % loglevel)
FORMAT = "%(asctime)-15s %(message)s"
raise ValueError('Invalid log level: %s' % loglevel)
FORMAT = '%(asctime)-15s %(message)s'
logging.basicConfig(level=numeric_level, filename=logfile, format=FORMAT)
class NotFoundError(Exception):

View File

@@ -11,61 +11,40 @@ logger = logging.getLogger("BitBake.PRserv")
class PRAsyncClient(bb.asyncrpc.AsyncClient):
def __init__(self):
super().__init__("PRSERVICE", "1.0", logger)
super().__init__('PRSERVICE', '1.0', logger)
async def getPR(self, version, pkgarch, checksum):
response = await self.invoke(
{"get-pr": {"version": version, "pkgarch": pkgarch, "checksum": checksum}}
response = await self.send_message(
{'get-pr': {'version': version, 'pkgarch': pkgarch, 'checksum': checksum}}
)
if response:
return response["value"]
async def test_pr(self, version, pkgarch, checksum):
response = await self.invoke(
{"test-pr": {"version": version, "pkgarch": pkgarch, "checksum": checksum}}
)
if response:
return response["value"]
async def test_package(self, version, pkgarch):
response = await self.invoke(
{"test-package": {"version": version, "pkgarch": pkgarch}}
)
if response:
return response["value"]
async def max_package_pr(self, version, pkgarch):
response = await self.invoke(
{"max-package-pr": {"version": version, "pkgarch": pkgarch}}
)
if response:
return response["value"]
return response['value']
async def importone(self, version, pkgarch, checksum, value):
response = await self.invoke(
{"import-one": {"version": version, "pkgarch": pkgarch, "checksum": checksum, "value": value}}
response = await self.send_message(
{'import-one': {'version': version, 'pkgarch': pkgarch, 'checksum': checksum, 'value': value}}
)
if response:
return response["value"]
return response['value']
async def export(self, version, pkgarch, checksum, colinfo):
response = await self.invoke(
{"export": {"version": version, "pkgarch": pkgarch, "checksum": checksum, "colinfo": colinfo}}
response = await self.send_message(
{'export': {'version': version, 'pkgarch': pkgarch, 'checksum': checksum, 'colinfo': colinfo}}
)
if response:
return (response["metainfo"], response["datainfo"])
return (response['metainfo'], response['datainfo'])
async def is_readonly(self):
response = await self.invoke(
{"is-readonly": {}}
response = await self.send_message(
{'is-readonly': {}}
)
if response:
return response["readonly"]
return response['readonly']
class PRClient(bb.asyncrpc.Client):
def __init__(self):
super().__init__()
self._add_methods("getPR", "test_pr", "test_package", "importone", "export", "is_readonly")
self._add_methods('getPR', 'importone', 'export', 'is_readonly')
def _get_async_client(self):
return PRAsyncClient()
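For orientation, a caller drives the synchronous PRClient wrapper above roughly as follows. This is only a hedged sketch: the host, port, and task values are made up, and it assumes the connect_tcp()/close() helpers provided by the bb.asyncrpc.Client base class, as used by ping() later in this diff.

from prserv.client import PRClient

conn = PRClient()
conn.connect_tcp("localhost", 8585)  # hypothetical PR server address
try:
    # Ask the server for (or allocate) the PR value of one task signature.
    value = conn.getPR("1.0-r0", "qemux86_64", "0123abcdef")  # made-up inputs
    print("PR value:", value)
finally:
    conn.close()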

View File

@@ -38,9 +38,9 @@ class PRTable(object):
self.read_only = read_only
self.dirty = False
if nohist:
self.table = "%s_nohist" % table
self.table = "%s_nohist" % table
else:
self.table = "%s_hist" % table
self.table = "%s_hist" % table
if self.read_only:
table_exists = self._execute(
@@ -64,7 +64,7 @@ class PRTable(object):
try:
return self.conn.execute(*query)
except sqlite3.OperationalError as exc:
if "is locked" in str(exc) and end > time.time():
if 'is locked' in str(exc) and end > time.time():
continue
raise exc
@@ -78,53 +78,7 @@ class PRTable(object):
self.sync()
self.dirty = False
def test_package(self, version, pkgarch):
"""Returns whether the specified package version is found in the database for the specified architecture"""
# Returns True if any row exists for (version, pkgarch), False otherwise
data=self._execute("SELECT value FROM %s WHERE version=? AND pkgarch=?;" % self.table,
(version, pkgarch))
row=data.fetchone()
if row is not None:
return True
else:
return False
def test_value(self, version, pkgarch, value):
"""Returns whether the specified value is found in the database for the specified package and architecture"""
# Returns True if a matching row exists, False otherwise
data=self._execute("SELECT value FROM %s WHERE version=? AND pkgarch=? and value=?;" % self.table,
(version, pkgarch, value))
row=data.fetchone()
if row is not None:
return True
else:
return False
def find_value(self, version, pkgarch, checksum):
"""Returns the value for the specified checksum if found or None otherwise."""
data=self._execute("SELECT value FROM %s WHERE version=? AND pkgarch=? AND checksum=?;" % self.table,
(version, pkgarch, checksum))
row=data.fetchone()
if row is not None:
return row[0]
else:
return None
def find_max_value(self, version, pkgarch):
"""Returns the greatest value for (version, pkgarch), or None if not found. Doesn't create a new value"""
data = self._execute("SELECT max(value) FROM %s where version=? AND pkgarch=?;" % (self.table),
(version, pkgarch))
row = data.fetchone()
if row is not None:
return row[0]
else:
return None
def _get_value_hist(self, version, pkgarch, checksum):
def _getValueHist(self, version, pkgarch, checksum):
data=self._execute("SELECT value FROM %s WHERE version=? AND pkgarch=? AND checksum=?;" % self.table,
(version, pkgarch, checksum))
row=data.fetchone()
@@ -133,7 +87,7 @@ class PRTable(object):
else:
#no value found, try to insert
if self.read_only:
data = self._execute("SELECT ifnull(max(value)+1, 0) FROM %s where version=? AND pkgarch=?;" % (self.table),
data = self._execute("SELECT ifnull(max(value)+1,0) FROM %s where version=? AND pkgarch=?;" % (self.table),
(version, pkgarch))
row = data.fetchone()
if row is not None:
@@ -142,9 +96,9 @@ class PRTable(object):
return 0
try:
self._execute("INSERT INTO %s VALUES (?, ?, ?, (select ifnull(max(value)+1, 0) from %s where version=? AND pkgarch=?));"
% (self.table, self.table),
(version, pkgarch, checksum, version, pkgarch))
self._execute("INSERT INTO %s VALUES (?, ?, ?, (select ifnull(max(value)+1,0) from %s where version=? AND pkgarch=?));"
% (self.table,self.table),
(version,pkgarch, checksum,version, pkgarch))
except sqlite3.IntegrityError as exc:
logger.error(str(exc))
@@ -158,10 +112,10 @@ class PRTable(object):
else:
raise prserv.NotFoundError
def _get_value_no_hist(self, version, pkgarch, checksum):
def _getValueNohist(self, version, pkgarch, checksum):
data=self._execute("SELECT value FROM %s \
WHERE version=? AND pkgarch=? AND checksum=? AND \
value >= (select max(value) from %s where version=? AND pkgarch=?);"
value >= (select max(value) from %s where version=? AND pkgarch=?);"
% (self.table, self.table),
(version, pkgarch, checksum, version, pkgarch))
row=data.fetchone()
@@ -170,13 +124,17 @@ class PRTable(object):
else:
#no value found, try to insert
if self.read_only:
data = self._execute("SELECT ifnull(max(value)+1, 0) FROM %s where version=? AND pkgarch=?;" % (self.table),
data = self._execute("SELECT ifnull(max(value)+1,0) FROM %s where version=? AND pkgarch=?;" % (self.table),
(version, pkgarch))
return data.fetchone()[0]
row = data.fetchone()
if row is not None:
return row[0]
else:
return 0
try:
self._execute("INSERT OR REPLACE INTO %s VALUES (?, ?, ?, (select ifnull(max(value)+1, 0) from %s where version=? AND pkgarch=?));"
% (self.table, self.table),
self._execute("INSERT OR REPLACE INTO %s VALUES (?, ?, ?, (select ifnull(max(value)+1,0) from %s where version=? AND pkgarch=?));"
% (self.table,self.table),
(version, pkgarch, checksum, version, pkgarch))
except sqlite3.IntegrityError as exc:
logger.error(str(exc))
@@ -192,17 +150,17 @@ class PRTable(object):
else:
raise prserv.NotFoundError
def get_value(self, version, pkgarch, checksum):
def getValue(self, version, pkgarch, checksum):
if self.nohist:
return self._get_value_no_hist(version, pkgarch, checksum)
return self._getValueNohist(version, pkgarch, checksum)
else:
return self._get_value_hist(version, pkgarch, checksum)
return self._getValueHist(version, pkgarch, checksum)
def _import_hist(self, version, pkgarch, checksum, value):
def _importHist(self, version, pkgarch, checksum, value):
if self.read_only:
return None
val = None
val = None
data = self._execute("SELECT value FROM %s WHERE version=? AND pkgarch=? AND checksum=?;" % self.table,
(version, pkgarch, checksum))
row = data.fetchone()
@@ -225,27 +183,27 @@ class PRTable(object):
val = row[0]
return val
def _import_no_hist(self, version, pkgarch, checksum, value):
def _importNohist(self, version, pkgarch, checksum, value):
if self.read_only:
return None
try:
#try to insert
self._execute("INSERT INTO %s VALUES (?, ?, ?, ?);" % (self.table),
(version, pkgarch, checksum, value))
(version, pkgarch, checksum,value))
except sqlite3.IntegrityError as exc:
#already have the record, try to update
try:
self._execute("UPDATE %s SET value=? WHERE version=? AND pkgarch=? AND checksum=? AND value<?"
self._execute("UPDATE %s SET value=? WHERE version=? AND pkgarch=? AND checksum=? AND value<?"
% (self.table),
(value, version, pkgarch, checksum, value))
(value,version,pkgarch,checksum,value))
except sqlite3.IntegrityError as exc:
logger.error(str(exc))
self.dirty = True
data = self._execute("SELECT value FROM %s WHERE version=? AND pkgarch=? AND checksum=? AND value>=?;" % self.table,
(version, pkgarch, checksum, value))
(version,pkgarch,checksum,value))
row=data.fetchone()
if row is not None:
return row[0]
@@ -254,33 +212,33 @@ class PRTable(object):
def importone(self, version, pkgarch, checksum, value):
if self.nohist:
return self._import_no_hist(version, pkgarch, checksum, value)
return self._importNohist(version, pkgarch, checksum, value)
else:
return self._import_hist(version, pkgarch, checksum, value)
return self._importHist(version, pkgarch, checksum, value)
def export(self, version, pkgarch, checksum, colinfo):
metainfo = {}
#column info
#column info
if colinfo:
metainfo["tbl_name"] = self.table
metainfo["core_ver"] = prserv.__version__
metainfo["col_info"] = []
metainfo['tbl_name'] = self.table
metainfo['core_ver'] = prserv.__version__
metainfo['col_info'] = []
data = self._execute("PRAGMA table_info(%s);" % self.table)
for row in data:
col = {}
col["name"] = row["name"]
col["type"] = row["type"]
col["notnull"] = row["notnull"]
col["dflt_value"] = row["dflt_value"]
col["pk"] = row["pk"]
metainfo["col_info"].append(col)
col['name'] = row['name']
col['type'] = row['type']
col['notnull'] = row['notnull']
col['dflt_value'] = row['dflt_value']
col['pk'] = row['pk']
metainfo['col_info'].append(col)
#data info
datainfo = []
if self.nohist:
sqlstmt = "SELECT T1.version, T1.pkgarch, T1.checksum, T1.value FROM %s as T1, \
(SELECT version, pkgarch, max(value) as maxvalue FROM %s GROUP BY version, pkgarch) as T2 \
(SELECT version,pkgarch,max(value) as maxvalue FROM %s GROUP BY version,pkgarch) as T2 \
WHERE T1.version=T2.version AND T1.pkgarch=T2.pkgarch AND T1.value=T2.maxvalue " % (self.table, self.table)
else:
sqlstmt = "SELECT * FROM %s as T1 WHERE 1=1 " % self.table
@@ -303,12 +261,12 @@ class PRTable(object):
else:
data = self._execute(sqlstmt)
for row in data:
if row["version"]:
if row['version']:
col = {}
col["version"] = row["version"]
col["pkgarch"] = row["pkgarch"]
col["checksum"] = row["checksum"]
col["value"] = row["value"]
col['version'] = row['version']
col['pkgarch'] = row['pkgarch']
col['checksum'] = row['checksum']
col['value'] = row['value']
datainfo.append(col)
return (metainfo, datainfo)
@@ -317,7 +275,7 @@ class PRTable(object):
for line in self.conn.iterdump():
writeCount = writeCount + len(line) + 1
fd.write(line)
fd.write("\n")
fd.write('\n')
return writeCount
class PRData(object):
@@ -344,7 +302,7 @@ class PRData(object):
def disconnect(self):
self.connection.close()
def __getitem__(self, tblname):
def __getitem__(self,tblname):
if not isinstance(tblname, str):
raise TypeError("tblname argument must be a string, not '%s'" %
type(tblname))
@@ -358,4 +316,4 @@ class PRData(object):
if tblname in self._tables:
del self._tables[tblname]
logger.info("drop table %s" % (tblname))
self.connection.execute("DROP TABLE IF EXISTS %s;" % tblname)
self.connection.execute("DROP TABLE IF EXISTS %s;" % tblname)
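The *_hist and *_nohist variants above differ mainly in how a value is allocated: both compute ifnull(max(value)+1, 0) per (version, pkgarch), but the nohist path uses INSERT OR REPLACE so a checksum is always bumped to the newest value. Below is a minimal, self-contained sketch of that allocation; the table name and inputs are hypothetical and it is not the PRTable class itself.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE prmain_nohist ("
    " version TEXT, pkgarch TEXT, checksum TEXT, value INTEGER,"
    " UNIQUE(version, pkgarch, checksum))"
)

def get_pr(version, pkgarch, checksum):
    # Allocate max(value)+1 for this (version, pkgarch), starting at 0.
    db.execute(
        "INSERT OR REPLACE INTO prmain_nohist VALUES (?, ?, ?,"
        " (SELECT ifnull(max(value)+1, 0) FROM prmain_nohist WHERE version=? AND pkgarch=?))",
        (version, pkgarch, checksum, version, pkgarch),
    )
    return db.execute(
        "SELECT value FROM prmain_nohist WHERE version=? AND pkgarch=? AND checksum=?",
        (version, pkgarch, checksum),
    ).fetchone()[0]

print(get_pr("1.0-r0", "allarch", "aaaa"))  # 0
print(get_pr("1.0-r0", "allarch", "bbbb"))  # 1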

View File

@@ -20,101 +20,77 @@ PIDPREFIX = "/tmp/PRServer_%s_%s.pid"
singleton = None
class PRServerClient(bb.asyncrpc.AsyncServerConnection):
def __init__(self, socket, server):
super().__init__(socket, "PRSERVICE", server.logger)
self.server = server
def __init__(self, reader, writer, table, read_only):
super().__init__(reader, writer, 'PRSERVICE', logger)
self.handlers.update({
"get-pr": self.handle_get_pr,
"test-pr": self.handle_test_pr,
"test-package": self.handle_test_package,
"max-package-pr": self.handle_max_package_pr,
"import-one": self.handle_import_one,
"export": self.handle_export,
"is-readonly": self.handle_is_readonly,
'get-pr': self.handle_get_pr,
'import-one': self.handle_import_one,
'export': self.handle_export,
'is-readonly': self.handle_is_readonly,
})
self.table = table
self.read_only = read_only
def validate_proto_version(self):
return (self.proto_version == (1, 0))
async def dispatch_message(self, msg):
try:
return await super().dispatch_message(msg)
await super().dispatch_message(msg)
except:
self.server.table.sync()
self.table.sync()
raise
else:
self.server.table.sync_if_dirty()
async def handle_test_pr(self, request):
'''Finds the PR value corresponding to the request. If not found, returns None and doesn't insert a new value'''
version = request["version"]
pkgarch = request["pkgarch"]
checksum = request["checksum"]
value = self.server.table.find_value(version, pkgarch, checksum)
return {"value": value}
async def handle_test_package(self, request):
'''Tells whether there are entries for (version, pkgarch) in the db. Returns True or False'''
version = request["version"]
pkgarch = request["pkgarch"]
value = self.server.table.test_package(version, pkgarch)
return {"value": value}
async def handle_max_package_pr(self, request):
'''Finds the greatest PR value for (version, pkgarch) in the db. Returns None if no entry was found'''
version = request["version"]
pkgarch = request["pkgarch"]
value = self.server.table.find_max_value(version, pkgarch)
return {"value": value}
self.table.sync_if_dirty()
async def handle_get_pr(self, request):
version = request["version"]
pkgarch = request["pkgarch"]
checksum = request["checksum"]
version = request['version']
pkgarch = request['pkgarch']
checksum = request['checksum']
response = None
try:
value = self.server.table.get_value(version, pkgarch, checksum)
response = {"value": value}
value = self.table.getValue(version, pkgarch, checksum)
response = {'value': value}
except prserv.NotFoundError:
self.logger.error("failure storing value in database for (%s, %s)",version, checksum)
logger.error("can not find value for (%s, %s)",version, checksum)
except sqlite3.Error as exc:
logger.error(str(exc))
return response
self.write_message(response)
async def handle_import_one(self, request):
response = None
if not self.server.read_only:
version = request["version"]
pkgarch = request["pkgarch"]
checksum = request["checksum"]
value = request["value"]
if not self.read_only:
version = request['version']
pkgarch = request['pkgarch']
checksum = request['checksum']
value = request['value']
value = self.server.table.importone(version, pkgarch, checksum, value)
value = self.table.importone(version, pkgarch, checksum, value)
if value is not None:
response = {"value": value}
response = {'value': value}
return response
self.write_message(response)
async def handle_export(self, request):
version = request["version"]
pkgarch = request["pkgarch"]
checksum = request["checksum"]
colinfo = request["colinfo"]
version = request['version']
pkgarch = request['pkgarch']
checksum = request['checksum']
colinfo = request['colinfo']
try:
(metainfo, datainfo) = self.server.table.export(version, pkgarch, checksum, colinfo)
(metainfo, datainfo) = self.table.export(version, pkgarch, checksum, colinfo)
except sqlite3.Error as exc:
self.logger.error(str(exc))
logger.error(str(exc))
metainfo = datainfo = None
return {"metainfo": metainfo, "datainfo": datainfo}
response = {'metainfo': metainfo, 'datainfo': datainfo}
self.write_message(response)
async def handle_is_readonly(self, request):
return {"readonly": self.server.read_only}
response = {'readonly': self.read_only}
self.write_message(response)
class PRServer(bb.asyncrpc.AsyncServer):
def __init__(self, dbfile, read_only=False):
@@ -123,23 +99,20 @@ class PRServer(bb.asyncrpc.AsyncServer):
self.table = None
self.read_only = read_only
def accept_client(self, socket):
return PRServerClient(socket, self)
def accept_client(self, reader, writer):
return PRServerClient(reader, writer, self.table, self.read_only)
def start(self):
tasks = super().start()
def _serve_forever(self):
self.db = prserv.db.PRData(self.dbfile, read_only=self.read_only)
self.table = self.db["PRMAIN"]
self.logger.info("Started PRServer with DBfile: %s, Address: %s, PID: %s" %
logger.info("Started PRServer with DBfile: %s, Address: %s, PID: %s" %
(self.dbfile, self.address, str(os.getpid())))
return tasks
super()._serve_forever()
async def stop(self):
self.table.sync_if_dirty()
self.db.disconnect()
await super().stop()
def signal_handler(self):
super().signal_handler()
@@ -156,12 +129,12 @@ class PRServSingleton(object):
def start(self):
self.prserv = PRServer(self.dbfile)
self.prserv.start_tcp_server(socket.gethostbyname(self.host), self.port)
self.process = self.prserv.serve_as_process(log_level=logging.WARNING)
self.process = self.prserv.serve_as_process()
if not self.prserv.address:
raise PRServiceConfigError
if not self.port:
self.port = int(self.prserv.address.rsplit(":", 1)[1])
self.port = int(self.prserv.address.rsplit(':', 1)[1])
def run_as_daemon(func, pidfile, logfile):
"""
@@ -197,18 +170,18 @@ def run_as_daemon(func, pidfile, logfile):
# stdout/stderr or it could be 'real' unix fd forking where we need
# to physically close the fds to prevent the program launching us from
# potentially hanging on a pipe. Handle both cases.
si = open("/dev/null", "r")
si = open('/dev/null', 'r')
try:
os.dup2(si.fileno(), sys.stdin.fileno())
os.dup2(si.fileno(),sys.stdin.fileno())
except (AttributeError, io.UnsupportedOperation):
sys.stdin = si
so = open(logfile, "a+")
so = open(logfile, 'a+')
try:
os.dup2(so.fileno(), sys.stdout.fileno())
os.dup2(so.fileno(),sys.stdout.fileno())
except (AttributeError, io.UnsupportedOperation):
sys.stdout = so
try:
os.dup2(so.fileno(), sys.stderr.fileno())
os.dup2(so.fileno(),sys.stderr.fileno())
except (AttributeError, io.UnsupportedOperation):
sys.stderr = so
@@ -226,7 +199,7 @@ def run_as_daemon(func, pidfile, logfile):
# write pidfile
pid = str(os.getpid())
with open(pidfile, "w") as pf:
with open(pidfile, 'w') as pf:
pf.write("%s\n" % pid)
func()
@@ -271,15 +244,15 @@ def stop_daemon(host, port):
# so at least advise the user which ports the corresponding server is listening on
ports = []
portstr = ""
for pf in glob.glob(PIDPREFIX % (ip, "*")):
for pf in glob.glob(PIDPREFIX % (ip,'*')):
bn = os.path.basename(pf)
root, _ = os.path.splitext(bn)
ports.append(root.split("_")[-1])
ports.append(root.split('_')[-1])
if len(ports):
portstr = "Wrong port? Other ports listening at %s: %s" % (host, " ".join(ports))
portstr = "Wrong port? Other ports listening at %s: %s" % (host, ' '.join(ports))
sys.stderr.write("pidfile %s does not exist. Daemon not running? %s\n"
% (pidfile, portstr))
% (pidfile,portstr))
return 1
try:
@@ -288,11 +261,8 @@ def stop_daemon(host, port):
os.kill(pid, signal.SIGTERM)
time.sleep(0.1)
try:
if os.path.exists(pidfile):
os.remove(pidfile)
except FileNotFoundError:
# The PID file might have been removed by the exiting process
pass
except OSError as e:
err = str(e)
@@ -310,7 +280,7 @@ def is_running(pid):
return True
def is_local_special(host, port):
if (host == "localhost" or host == "127.0.0.1") and not port:
if (host == 'localhost' or host == '127.0.0.1') and not port:
return True
else:
return False
@@ -321,7 +291,7 @@ class PRServiceConfigError(Exception):
def auto_start(d):
global singleton
host_params = list(filter(None, (d.getVar("PRSERV_HOST") or "").split(":")))
host_params = list(filter(None, (d.getVar('PRSERV_HOST') or '').split(':')))
if not host_params:
# Shutdown any existing PR Server
auto_shutdown()
@@ -330,7 +300,7 @@ def auto_start(d):
if len(host_params) != 2:
# Shutdown any existing PR Server
auto_shutdown()
logger.critical("\n".join(["PRSERV_HOST: incorrect format",
logger.critical('\n'.join(['PRSERV_HOST: incorrect format',
'Usage: PRSERV_HOST = "<hostname>:<port>"']))
raise PRServiceConfigError
@@ -374,17 +344,17 @@ def auto_shutdown():
def ping(host, port):
from . import client
with client.PRClient() as conn:
conn.connect_tcp(host, port)
return conn.ping()
conn = client.PRClient()
conn.connect_tcp(host, port)
return conn.ping()
def connect(host, port):
from . import client
global singleton
if host.strip().lower() == "localhost" and not port:
host = "localhost"
if host.strip().lower() == 'localhost' and not port:
host = 'localhost'
port = singleton.port
conn = client.PRClient()

View File

@@ -12,7 +12,7 @@
</object>
<object model="orm.toastersetting" pk="4">
<field type="CharField" name="name">DEFCONF_MACHINE</field>
<field type="CharField" name="value">qemux86-64</field>
<field type="CharField" name="value">qemux86</field>
</object>
<object model="orm.toastersetting" pk="5">
<field type="CharField" name="name">DEFCONF_SSTATE_DIR</field>

View File

@@ -1,22 +0,0 @@
# Generated by Django 4.2.5 on 2023-11-23 18:44
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('orm', '0020_models_bigautofield'),
]
operations = [
migrations.CreateModel(
name='EventLogsImports',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=255)),
('imported', models.BooleanField(default=False)),
('build_id', models.IntegerField(blank=True, null=True)),
],
),
]
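The deleted migration above pairs with the EventLogsImports model removed from orm/models.py further down; it simply tracked which command-line event logs had already been imported into Toaster. A hedged sketch of how such a record would be used through the standard Django ORM follows; the file name and build id are hypothetical.

from orm.models import EventLogsImports

# Mark a command-line event log as imported and tie it to a build.
entry, _ = EventLogsImports.objects.get_or_create(name="eventlog-2023-11-23.json")
entry.imported = True
entry.build_id = 42  # hypothetical Build primary key
entry.save()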

View File

@@ -1389,6 +1389,9 @@ class Machine(models.Model):
return "Machine " + self.name + "(" + self.description + ")"
class BitbakeVersion(models.Model):
name = models.CharField(max_length=32, unique = True)
@@ -1850,8 +1853,6 @@ def signal_runbuilds():
os.kill(int(pidf.read()), SIGUSR1)
except FileNotFoundError:
logger.info("Stopping existing runbuilds: no current process found")
except ProcessLookupError:
logger.warning("Stopping existing runbuilds: process lookup not found")
class Distro(models.Model):
search_allowed_fields = ["name", "description", "layer_version__layer__name"]
@@ -1868,15 +1869,6 @@ class Distro(models.Model):
def __unicode__(self):
return "Distro " + self.name + "(" + self.description + ")"
class EventLogsImports(models.Model):
name = models.CharField(max_length=255)
imported = models.BooleanField(default=False)
build_id = models.IntegerField(blank=True, null=True)
def __str__(self):
return self.name
django.db.models.signals.post_save.connect(invalidate_cache)
django.db.models.signals.post_delete.connect(invalidate_cache)
django.db.models.signals.m2m_changed.connect(invalidate_cache)

View File

@@ -1,16 +0,0 @@
# -- FILE: pytest.ini (or tox.ini)
[pytest]
# --create-db - force re creation of the test database
# https://pytest-django.readthedocs.io/en/latest/database.html#create-db-force-re-creation-of-the-test-database
# --html=report.html --self-contained-html
# https://docs.pytest.org/en/latest/usage.html#creating-html-reports
# https://pytest-html.readthedocs.io/en/latest/user_guide.html#creating-a-self-contained-report
addopts = --create-db --html="Toaster Tests Report.html" --self-contained-html
# Define environment variables using pytest-env
# A pytest plugin that enables you to set environment variables in the pytest.ini file.
# https://pypi.org/project/pytest-env/
env =
TOASTER_BUILDSERVER=1
DJANGO_SETTINGS_MODULE=toastermain.settings_test

View File

@@ -19,15 +19,12 @@ import os
import time
import unittest
import pytest
from selenium import webdriver
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
from selenium.common.exceptions import NoSuchElementException, \
StaleElementReferenceException, TimeoutException, \
SessionNotCreatedException
StaleElementReferenceException, TimeoutException
def create_selenium_driver(cls,browser='chrome'):
# set default browser string based on env (if available)
@@ -36,32 +33,7 @@ def create_selenium_driver(cls,browser='chrome'):
browser = env_browser
if browser == 'chrome':
options = webdriver.ChromeOptions()
options.add_argument('--headless')
options.add_argument('--disable-infobars')
options.add_argument('--disable-dev-shm-usage')
options.add_argument('--no-sandbox')
options.add_argument('--remote-debugging-port=9222')
try:
return webdriver.Chrome(options=options)
except SessionNotCreatedException as e:
exit_message = "Halting tests prematurely to avoid cascading errors."
# check if chrome / chromedriver exists
chrome_path = os.popen("find ~/.cache/selenium/chrome/ -name 'chrome' -type f -print -quit").read().strip()
if not chrome_path:
pytest.exit(f"Failed to install/find chrome.\n{exit_message}")
chromedriver_path = os.popen("find ~/.cache/selenium/chromedriver/ -name 'chromedriver' -type f -print -quit").read().strip()
if not chromedriver_path:
pytest.exit(f"Failed to install/find chromedriver.\n{exit_message}")
# check whether the dependencies of each are satisfied
depends_chrome = os.popen(f"ldd {chrome_path} | grep 'not found'").read().strip()
if depends_chrome:
pytest.exit(f"Missing chrome dependencies.\n{depends_chrome}\n{exit_message}")
depends_chromedriver = os.popen(f"ldd {chromedriver_path} | grep 'not found'").read().strip()
if depends_chromedriver:
pytest.exit(f"Missing chromedriver dependencies.\n{depends_chromedriver}\n{exit_message}")
# print original error otherwise
pytest.exit(f"Failed to start chromedriver.\n{e}\n{exit_message}")
return webdriver.Chrome()
elif browser == 'firefox':
return webdriver.Firefox()
elif browser == 'marionette':
@@ -93,9 +65,7 @@ class Wait(WebDriverWait):
_TIMEOUT = 10
_POLL_FREQUENCY = 0.5
def __init__(self, driver, timeout=_TIMEOUT, poll=_POLL_FREQUENCY):
self._TIMEOUT = timeout
self._POLL_FREQUENCY = poll
def __init__(self, driver):
super(Wait, self).__init__(driver, self._TIMEOUT, self._POLL_FREQUENCY)
def until(self, method, message=''):
@@ -167,8 +137,6 @@ class SeleniumTestCaseBase(unittest.TestCase):
""" Clean up webdriver driver """
cls.driver.quit()
# Allow driver resources to be properly freed before proceeding with further tests
time.sleep(5)
super(SeleniumTestCaseBase, cls).tearDownClass()
def get(self, url):
@@ -182,13 +150,6 @@ class SeleniumTestCaseBase(unittest.TestCase):
abs_url = '%s%s' % (self.live_server_url, url)
self.driver.get(abs_url)
try: # Ensure page is loaded before proceeding
self.wait_until_visible("#global-nav", poll=3)
except NoSuchElementException:
self.driver.implicitly_wait(3)
except TimeoutException:
self.driver.implicitly_wait(3)
def find(self, selector):
""" Find single element by CSS selector """
return self.driver.find_element(By.CSS_SELECTOR, selector)
@@ -208,34 +169,18 @@ class SeleniumTestCaseBase(unittest.TestCase):
""" Return the element which currently has focus on the page """
return self.driver.switch_to.active_element
def wait_until_present(self, selector, poll=0.5):
def wait_until_present(self, selector):
""" Wait until element matching CSS selector is on the page """
is_present = lambda driver: self.find(selector)
msg = 'An element matching "%s" should be on the page' % selector
element = Wait(self.driver, poll=poll).until(is_present, msg)
if poll > 2:
time.sleep(poll) # the element needs extra time to become present
element = Wait(self.driver).until(is_present, msg)
return element
def wait_until_visible(self, selector, poll=1):
def wait_until_visible(self, selector):
""" Wait until element matching CSS selector is visible on the page """
is_visible = lambda driver: self.find(selector).is_displayed()
msg = 'An element matching "%s" should be visible' % selector
Wait(self.driver, poll=poll).until(is_visible, msg)
time.sleep(poll) # wait for visibility to settle
return self.find(selector)
def wait_until_clickable(self, selector, poll=1):
""" Wait until element matching CSS selector is visible on the page """
WebDriverWait(
self.driver,
Wait._TIMEOUT,
poll_frequency=poll
).until(
EC.element_to_be_clickable((By.ID, selector.removeprefix('#')
)
)
)
Wait(self.driver).until(is_visible, msg)
return self.find(selector)
def wait_until_focused(self, selector):

View File

@@ -7,16 +7,13 @@
# SPDX-License-Identifier: GPL-2.0-only
#
import os
import re
import re, time
from django.urls import reverse
from selenium.webdriver.support.select import Select
from django.utils import timezone
from bldcontrol.models import BuildRequest
from tests.browser.selenium_helpers import SeleniumTestCase
from orm.models import BitbakeVersion, Layer, Layer_Version, Recipe, Release, Project, Build, Target, Task
from orm.models import BitbakeVersion, Release, Project, Build, Target
from selenium.webdriver.common.by import By
@@ -28,8 +25,7 @@ class TestAllBuildsPage(SeleniumTestCase):
CLI_BUILDS_PROJECT_NAME = 'command line builds'
def setUp(self):
builldir = os.environ.get('BUILDDIR', './')
bbv = BitbakeVersion.objects.create(name='bbv1', giturl=f'{builldir}/',
bbv = BitbakeVersion.objects.create(name='bbv1', giturl='/tmp/',
branch='master', dirpath='')
release = Release.objects.create(name='release1',
bitbake_version=bbv)
@@ -75,7 +71,7 @@ class TestAllBuildsPage(SeleniumTestCase):
'[data-role="data-recent-build-buildtime-field"]' % build.id
# because this loads via Ajax, wait for it to be visible
self.wait_until_visible(selector)
self.wait_until_present(selector)
build_time_spans = self.find_all(selector)
@@ -85,7 +81,7 @@ class TestAllBuildsPage(SeleniumTestCase):
def _get_row_for_build(self, build):
""" Get the table row for the build from the all builds table """
self.wait_until_visible('#allbuildstable')
self.wait_until_present('#allbuildstable')
rows = self.find_all('#allbuildstable tr')
@@ -106,66 +102,6 @@ class TestAllBuildsPage(SeleniumTestCase):
return found_row
def _get_create_builds(self, **kwargs):
""" Create a build and return the build object """
build1 = Build.objects.create(**self.project1_build_success)
build2 = Build.objects.create(**self.project1_build_failure)
# add some targets to these builds so they have recipe links
# (and so we can find the row in the ToasterTable corresponding to
# a particular build)
Target.objects.create(build=build1, target='foo')
Target.objects.create(build=build2, target='bar')
if kwargs:
# Create kwargs.get('success') successful builds and kwargs.get('failure')
# failed builds, each with its own target
for i in range(kwargs.get('success', 0)):
now = timezone.now()
self.project1_build_success['started_on'] = now
self.project1_build_success[
'completed_on'] = now - timezone.timedelta(days=i)
build = Build.objects.create(**self.project1_build_success)
Target.objects.create(build=build,
target=f'{i}_success_recipe',
task=f'{i}_success_task')
self._set_buildRequest_and_task_on_build(build)
for i in range(kwargs.get('failure', 0)):
now = timezone.now()
self.project1_build_failure['started_on'] = now
self.project1_build_failure[
'completed_on'] = now - timezone.timedelta(days=i)
build = Build.objects.create(**self.project1_build_failure)
Target.objects.create(build=build,
target=f'{i}_fail_recipe',
task=f'{i}_fail_task')
self._set_buildRequest_and_task_on_build(build)
return build1, build2
def _create_recipe(self):
""" Add a recipe to the database and return it """
layer = Layer.objects.create()
layer_version = Layer_Version.objects.create(layer=layer)
return Recipe.objects.create(name='recipe_foo', layer_version=layer_version)
def _set_buildRequest_and_task_on_build(self, build):
""" Set buildRequest and task on build """
build.recipes_parsed = 1
build.save()
buildRequest = BuildRequest.objects.create(
build=build,
project=self.project1,
state=BuildRequest.REQ_COMPLETED)
build.build_request = buildRequest
recipe = self._create_recipe()
task = Task.objects.create(build=build,
recipe=recipe,
task_name='task',
outcome=Task.OUTCOME_SUCCESS)
task.save()
build.save()
def test_show_tasks_with_suffix(self):
""" Task should be shown as suffix on build name """
build = Build.objects.create(**self.project1_build_success)
@@ -175,7 +111,7 @@ class TestAllBuildsPage(SeleniumTestCase):
url = reverse('all-builds')
self.get(url)
self.wait_until_visible('td[class="target"]')
self.wait_until_present('td[class="target"]')
cell = self.find('td[class="target"]')
content = cell.get_attribute('innerHTML')
@@ -192,15 +128,14 @@ class TestAllBuildsPage(SeleniumTestCase):
but should be shown for other builds
"""
build1 = Build.objects.create(**self.project1_build_success)
default_build = Build.objects.create(
**self.default_project_build_success)
default_build = Build.objects.create(**self.default_project_build_success)
url = reverse('all-builds')
self.get(url)
# should see a rebuild button for non-command-line builds
self.wait_until_visible('#allbuildstable tbody tr')
selector = 'div[data-latest-build-result="%s"] .rebuild-btn' % build1.id
time.sleep(2)
run_again_button = self.find_all(selector)
self.assertEqual(len(run_again_button), 1,
'should see a rebuild button for non-cli builds')
@@ -211,6 +146,7 @@ class TestAllBuildsPage(SeleniumTestCase):
self.assertEqual(len(run_again_button), 0,
'should not see a rebuild button for cli builds')
def test_tooltips_on_project_name(self):
"""
Test tooltips shown next to project name in the main table
@@ -224,7 +160,6 @@ class TestAllBuildsPage(SeleniumTestCase):
url = reverse('all-builds')
self.get(url)
self.wait_until_visible('#allbuildstable', poll=3)
# get the project name cells from the table
cells = self.find_all('#allbuildstable td[class="project"]')
@@ -233,7 +168,7 @@ class TestAllBuildsPage(SeleniumTestCase):
for cell in cells:
content = cell.get_attribute('innerHTML')
help_icons = cell.find_elements(By.CSS_SELECTOR, selector)
help_icons = cell.find_elements_by_css_selector(selector)
if re.search(self.PROJECT_NAME, content):
# no help icon next to non-cli project name
@@ -253,224 +188,38 @@ class TestAllBuildsPage(SeleniumTestCase):
recent builds area; failed builds should not have links on the time column,
or in the recent builds area
"""
build1, build2 = self._get_create_builds()
build1 = Build.objects.create(**self.project1_build_success)
build2 = Build.objects.create(**self.project1_build_failure)
# add some targets to these builds so they have recipe links
# (and so we can find the row in the ToasterTable corresponding to
# a particular build)
Target.objects.create(build=build1, target='foo')
Target.objects.create(build=build2, target='bar')
url = reverse('all-builds')
self.get(url)
self.wait_until_visible('#allbuildstable', poll=3)
# test recent builds area for successful build
element = self._get_build_time_element(build1)
links = element.find_elements(By.CSS_SELECTOR, 'a')
msg = 'should be a link on the build time for a successful recent build'
self.assertEqual(len(links), 1, msg)
self.assertEquals(len(links), 1, msg)
# test recent builds area for failed build
element = self._get_build_time_element(build2)
links = element.find_elements(By.CSS_SELECTOR, 'a')
msg = 'should not be a link on the build time for a failed recent build'
self.assertEqual(len(links), 0, msg)
self.assertEquals(len(links), 0, msg)
# test the time column for successful build
build1_row = self._get_row_for_build(build1)
links = build1_row.find_elements(By.CSS_SELECTOR, 'td.time a')
msg = 'should be a link on the build time for a successful build'
self.assertEqual(len(links), 1, msg)
self.assertEquals(len(links), 1, msg)
# test the time column for failed build
build2_row = self._get_row_for_build(build2)
links = build2_row.find_elements(By.CSS_SELECTOR, 'td.time a')
msg = 'should not be a link on the build time for a failed build'
self.assertEqual(len(links), 0, msg)
def test_builds_table_search_box(self):
""" Test the search box in the builds table on the all builds page """
self._get_create_builds()
url = reverse('all-builds')
self.get(url)
# Check search box is present and works
self.wait_until_visible('#allbuildstable tbody tr')
search_box = self.find('#search-input-allbuildstable')
self.assertTrue(search_box.is_displayed())
# Check that we can search for a build by recipe name
search_box.send_keys('foo')
search_btn = self.find('#search-submit-allbuildstable')
search_btn.click()
self.wait_until_visible('#allbuildstable tbody tr')
rows = self.find_all('#allbuildstable tbody tr')
self.assertTrue(len(rows) >= 1)
def test_filtering_on_failure_tasks_column(self):
""" Test the filtering on failure tasks column in the builds table on the all builds page """
def _check_if_filter_failed_tasks_column_is_visible():
# check if failed tasks filter column is visible, if not click on it
# Check edit column
edit_column = self.find('#edit-columns-button')
self.assertTrue(edit_column.is_displayed())
edit_column.click()
# Check dropdown is visible
self.wait_until_visible('ul.dropdown-menu.editcol')
filter_fails_task_checkbox = self.find('#checkbox-failed_tasks')
if not filter_fails_task_checkbox.is_selected():
filter_fails_task_checkbox.click()
edit_column.click()
self._get_create_builds(success=10, failure=10)
url = reverse('all-builds')
self.get(url)
# Check filtering on failure tasks column
self.wait_until_visible('#allbuildstable tbody tr')
_check_if_filter_failed_tasks_column_is_visible()
failed_tasks_filter = self.find('#failed_tasks_filter')
failed_tasks_filter.click()
# Check popup is visible
self.wait_until_visible('#filter-modal-allbuildstable')
self.assertTrue(
self.find('#filter-modal-allbuildstable').is_displayed())
# Check that we can filter by failure tasks
build_without_failure_tasks = self.find(
'#failed_tasks_filter\\:without_failed_tasks')
build_without_failure_tasks.click()
# click on apply button
self.find('#filter-modal-allbuildstable .btn-primary').click()
self.wait_until_visible('#allbuildstable tbody tr')
# Check if filter is applied, by checking if failed_tasks_filter has btn-primary class
self.assertTrue(self.find('#failed_tasks_filter').get_attribute(
'class').find('btn-primary') != -1)
def test_filtering_on_completedOn_column(self):
""" Test the filtering on completed_on column in the builds table on the all builds page """
self._get_create_builds(success=10, failure=10)
url = reverse('all-builds')
self.get(url)
# Check filtering on completed_on column
self.wait_until_visible('#allbuildstable tbody tr')
completed_on_filter = self.find('#completed_on_filter')
completed_on_filter.click()
# Check popup is visible
self.wait_until_visible('#filter-modal-allbuildstable')
self.assertTrue(
self.find('#filter-modal-allbuildstable').is_displayed())
# Check that we can filter by completed_on date range
build_without_failure_tasks = self.find(
'#completed_on_filter\\:date_range')
build_without_failure_tasks.click()
# click on apply button
self.find('#filter-modal-allbuildstable .btn-primary').click()
self.wait_until_visible('#allbuildstable tbody tr')
# Check if filter is applied, by checking if completed_on_filter has btn-primary class
self.assertTrue(self.find('#completed_on_filter').get_attribute(
'class').find('btn-primary') != -1)
# Filter by date range
self.find('#completed_on_filter').click()
self.wait_until_visible('#filter-modal-allbuildstable')
date_ranges = self.driver.find_elements(
By.XPATH, '//input[@class="form-control hasDatepicker"]')
today = timezone.now()
yesterday = today - timezone.timedelta(days=1)
date_ranges[0].send_keys(yesterday.strftime('%Y-%m-%d'))
date_ranges[1].send_keys(today.strftime('%Y-%m-%d'))
self.find('#filter-modal-allbuildstable .btn-primary').click()
self.wait_until_visible('#allbuildstable tbody tr')
self.assertTrue(self.find('#completed_on_filter').get_attribute(
'class').find('btn-primary') != -1)
# Check if filter is applied: at least 4 builds should be displayed
self.assertTrue(len(self.find_all('#allbuildstable tbody tr')) >= 4)
def test_builds_table_editColumn(self):
""" Test the edit column feature in the builds table on the all builds page """
self._get_create_builds(success=10, failure=10)
def test_edit_column(check_box_id):
# Check that we can hide/show table column
check_box = self.find(f'#{check_box_id}')
th_class = str(check_box_id).replace('checkbox-', '')
if check_box.is_selected():
# check if column is visible in table
self.assertTrue(
self.find(
f'#allbuildstable thead th.{th_class}'
).is_displayed(),
f"The {th_class} column is checked in EditColumn dropdown, but it's not visible in table"
)
check_box.click()
# check if column is hidden in table
self.assertFalse(
self.find(
f'#allbuildstable thead th.{th_class}'
).is_displayed(),
f"The {th_class} column is unchecked in EditColumn dropdown, but it's visible in table"
)
else:
# check if column is hidden in table
self.assertFalse(
self.find(
f'#allbuildstable thead th.{th_class}'
).is_displayed(),
f"The {th_class} column is unchecked in EditColumn dropdown, but it's visible in table"
)
check_box.click()
# check if column is visible in table
self.assertTrue(
self.find(
f'#allbuildstable thead th.{th_class}'
).is_displayed(),
f"The {th_class} column is checked in EditColumn dropdown, but it's not visible in table"
)
url = reverse('all-builds')
self.get(url)
self.wait_until_visible('#allbuildstable tbody tr')
# Check edit column
edit_column = self.find('#edit-columns-button')
self.assertTrue(edit_column.is_displayed())
edit_column.click()
# Check dropdown is visible
self.wait_until_visible('ul.dropdown-menu.editcol')
# Check that we can hide the edit column
test_edit_column('checkbox-errors_no')
test_edit_column('checkbox-failed_tasks')
test_edit_column('checkbox-image_files')
test_edit_column('checkbox-project')
test_edit_column('checkbox-started_on')
test_edit_column('checkbox-time')
test_edit_column('checkbox-warnings_no')
def test_builds_table_show_rows(self):
""" Test the show rows feature in the builds table on the all builds page """
self._get_create_builds(success=100, failure=100)
def test_show_rows(row_to_show, show_row_link):
# Check that we can show rows == row_to_show
show_row_link.select_by_value(str(row_to_show))
self.wait_until_visible('#allbuildstable tbody tr', poll=3)
# check at least some rows are visible
self.assertTrue(
len(self.find_all('#allbuildstable tbody tr')) > 0
)
url = reverse('all-builds')
self.get(url)
self.wait_until_visible('#allbuildstable tbody tr')
show_rows = self.driver.find_elements(
By.XPATH,
'//select[@class="form-control pagesize-allbuildstable"]'
)
# Check show rows
for show_row_link in show_rows:
show_row_link = Select(show_row_link)
test_show_rows(10, show_row_link)
test_show_rows(25, show_row_link)
test_show_rows(50, show_row_link)
test_show_rows(100, show_row_link)
test_show_rows(150, show_row_link)
self.assertEquals(len(links), 0, msg)

View File

@@ -7,12 +7,10 @@
# SPDX-License-Identifier: GPL-2.0-only
#
import os
import re
from django.urls import reverse
from django.utils import timezone
from selenium.webdriver.support.select import Select
from tests.browser.selenium_helpers import SeleniumTestCase
from orm.models import BitbakeVersion, Release, Project, Build
@@ -20,7 +18,6 @@ from orm.models import ProjectVariable
from selenium.webdriver.common.by import By
class TestAllProjectsPage(SeleniumTestCase):
""" Browser tests for projects page /projects/ """
@@ -30,8 +27,7 @@ class TestAllProjectsPage(SeleniumTestCase):
def setUp(self):
""" Add default project manually """
project = Project.objects.create_project(
self.CLI_BUILDS_PROJECT_NAME, None)
project = Project.objects.create_project(self.CLI_BUILDS_PROJECT_NAME, None)
self.default_project = project
self.default_project.is_default = True
self.default_project.save()
@@ -41,17 +37,6 @@ class TestAllProjectsPage(SeleniumTestCase):
self.release = None
def _create_projects(self, nb_project=10):
projects = []
for i in range(1, nb_project + 1):
projects.append(
Project(
name='test project {}'.format(i),
release=self.release,
)
)
Project.objects.bulk_create(projects)
def _add_build_to_default_project(self):
""" Add a build to the default project (not used in all tests) """
now = timezone.now()
@@ -62,14 +47,12 @@ class TestAllProjectsPage(SeleniumTestCase):
def _add_non_default_project(self):
""" Add another project """
builldir = os.environ.get('BUILDDIR', './')
bbv = BitbakeVersion.objects.create(name='test bbv', giturl=f'{builldir}/',
bbv = BitbakeVersion.objects.create(name='test bbv', giturl='/tmp/',
branch='master', dirpath='')
self.release = Release.objects.create(name='test release',
branch_name='master',
bitbake_version=bbv)
self.project = Project.objects.create_project(
self.PROJECT_NAME, self.release)
self.project = Project.objects.create_project(self.PROJECT_NAME, self.release)
self.project.is_default = False
self.project.save()
@@ -81,7 +64,7 @@ class TestAllProjectsPage(SeleniumTestCase):
def _get_row_for_project(self, project_name):
""" Get the HTML row for a project, or None if not found """
self.wait_until_visible('#projectstable tbody tr', poll=3)
self.wait_until_present('#projectstable tbody tr')
rows = self.find_all('#projectstable tbody tr')
# find the row with a project name matching the one supplied
@@ -112,8 +95,7 @@ class TestAllProjectsPage(SeleniumTestCase):
url = reverse('all-projects')
self.get(url)
default_project_row = self._get_row_for_project(
self.default_project.name)
default_project_row = self._get_row_for_project(self.default_project.name)
self.assertNotEqual(default_project_row, None,
'default project "cli builds" should be in page')
@@ -133,8 +115,7 @@ class TestAllProjectsPage(SeleniumTestCase):
self.wait_until_visible("#projectstable tr")
# find the row for the default project
default_project_row = self._get_row_for_project(
self.default_project.name)
default_project_row = self._get_row_for_project(self.default_project.name)
# check the release text for the default project
selector = 'span[data-project-field="release"] span.text-muted'
@@ -169,8 +150,7 @@ class TestAllProjectsPage(SeleniumTestCase):
self.wait_until_visible("#projectstable tr")
# find the row for the default project
default_project_row = self._get_row_for_project(
self.default_project.name)
default_project_row = self._get_row_for_project(self.default_project.name)
# check the machine cell for the default project
selector = 'span[data-project-field="machine"] span.text-muted'
@@ -205,15 +185,13 @@ class TestAllProjectsPage(SeleniumTestCase):
self.get(reverse('all-projects'))
# find the row for the default project
default_project_row = self._get_row_for_project(
self.default_project.name)
default_project_row = self._get_row_for_project(self.default_project.name)
# check the link on the name field
selector = 'span[data-project-field="name"] a'
element = default_project_row.find_element(By.CSS_SELECTOR, selector)
link_url = element.get_attribute('href').strip()
expected_url = reverse(
'projectbuilds', args=(self.default_project.id,))
expected_url = reverse('projectbuilds', args=(self.default_project.id,))
msg = 'link on default project name should point to builds but was %s' % link_url
self.assertTrue(link_url.endswith(expected_url), msg)
@@ -227,111 +205,3 @@ class TestAllProjectsPage(SeleniumTestCase):
expected_url = reverse('project', args=(self.project.id,))
msg = 'link on project name should point to configuration but was %s' % link_url
self.assertTrue(link_url.endswith(expected_url), msg)
def test_allProject_table_search_box(self):
""" Test the search box in the all project table on the all projects page """
self._create_projects()
url = reverse('all-projects')
self.get(url)
# Check search box is present and works
self.wait_until_visible('#projectstable tbody tr', poll=3)
search_box = self.find('#search-input-projectstable')
self.assertTrue(search_box.is_displayed())
# Check that we can search for a project by project name
search_box.send_keys('test project 10')
search_btn = self.find('#search-submit-projectstable')
search_btn.click()
self.wait_until_visible('#projectstable tbody tr', poll=3)
rows = self.find_all('#projectstable tbody tr')
self.assertTrue(len(rows) == 1)
def test_allProject_table_editColumn(self):
""" Test the edit column feature in the projects table on the all projects page """
self._create_projects()
def test_edit_column(check_box_id):
# Check that we can hide/show table column
check_box = self.find(f'#{check_box_id}')
th_class = str(check_box_id).replace('checkbox-', '')
if check_box.is_selected():
# check if column is visible in table
self.assertTrue(
self.find(
f'#projectstable thead th.{th_class}'
).is_displayed(),
f"The {th_class} column is checked in EditColumn dropdown, but it's not visible in table"
)
check_box.click()
# check if column is hidden in table
self.assertFalse(
self.find(
f'#projectstable thead th.{th_class}'
).is_displayed(),
f"The {th_class} column is unchecked in EditColumn dropdown, but it's visible in table"
)
else:
# check if column is hidden in table
self.assertFalse(
self.find(
f'#projectstable thead th.{th_class}'
).is_displayed(),
f"The {th_class} column is unchecked in EditColumn dropdown, but it's visible in table"
)
check_box.click()
# check if column is visible in table
self.assertTrue(
self.find(
f'#projectstable thead th.{th_class}'
).is_displayed(),
f"The {th_class} column is checked in EditColumn dropdown, but it's not visible in table"
)
url = reverse('all-projects')
self.get(url)
self.wait_until_visible('#projectstable tbody tr', poll=3)
# Check edit column
edit_column = self.find('#edit-columns-button')
self.assertTrue(edit_column.is_displayed())
edit_column.click()
# Check dropdown is visible
self.wait_until_visible('ul.dropdown-menu.editcol')
# Check that we can hide the edit column
test_edit_column('checkbox-errors')
test_edit_column('checkbox-image_files')
test_edit_column('checkbox-last_build_outcome')
test_edit_column('checkbox-recipe_name')
test_edit_column('checkbox-warnings')
def test_allProject_table_show_rows(self):
""" Test the show rows feature in the projects table on the all projects page """
self._create_projects(nb_project=200)
def test_show_rows(row_to_show, show_row_link):
# Check that we can show rows == row_to_show
show_row_link.select_by_value(str(row_to_show))
self.wait_until_visible('#projectstable tbody tr', poll=3)
# check at least some rows are visible
self.assertTrue(
len(self.find_all('#projectstable tbody tr')) > 0
)
url = reverse('all-projects')
self.get(url)
self.wait_until_visible('#projectstable tbody tr', poll=3)
show_rows = self.driver.find_elements(
By.XPATH,
'//select[@class="form-control pagesize-projectstable"]'
)
# Check show rows
for show_row_link in show_rows:
show_row_link = Select(show_row_link)
test_show_rows(10, show_row_link)
test_show_rows(25, show_row_link)
test_show_rows(50, show_row_link)
test_show_rows(100, show_row_link)
test_show_rows(150, show_row_link)


@@ -7,7 +7,6 @@
# SPDX-License-Identifier: GPL-2.0-only
#
import os
from django.urls import reverse
from django.utils import timezone
@@ -22,8 +21,7 @@ class TestBuildDashboardPage(SeleniumTestCase):
""" Tests for the build dashboard /build/X """
def setUp(self):
builldir = os.environ.get('BUILDDIR', './')
bbv = BitbakeVersion.objects.create(name='bbv1', giturl=f'{builldir}/',
bbv = BitbakeVersion.objects.create(name='bbv1', giturl='/tmp/',
branch='master', dirpath="")
release = Release.objects.create(name='release1',
bitbake_version=bbv)
@@ -162,7 +160,6 @@ class TestBuildDashboardPage(SeleniumTestCase):
"""
url = reverse('builddashboard', args=(build.id,))
self.get(url)
self.wait_until_visible('#global-nav', poll=3)
def _get_build_dashboard_errors(self, build):
"""


@@ -7,7 +7,6 @@
# SPDX-License-Identifier: GPL-2.0-only
#
import os
from django.urls import reverse
from django.utils import timezone
@@ -21,8 +20,7 @@ class TestBuildDashboardPageArtifacts(SeleniumTestCase):
""" Tests for artifacts on the build dashboard /build/X """
def setUp(self):
builldir = os.environ.get('BUILDDIR', './')
bbv = BitbakeVersion.objects.create(name='bbv1', giturl=f'{builldir}/',
bbv = BitbakeVersion.objects.create(name='bbv1', giturl='/tmp/',
branch='master', dirpath="")
release = Release.objects.create(name='release1',
bitbake_version=bbv)
@@ -199,12 +197,12 @@ class TestBuildDashboardPageArtifacts(SeleniumTestCase):
# check package count and size, link on target name
selector = '[data-value="target-package-count"]'
element = self.find(selector)
self.assertEqual(element.text, '1',
self.assertEquals(element.text, '1',
'package count should be shown for image builds')
selector = '[data-value="target-package-size"]'
element = self.find(selector)
self.assertEqual(element.text, '1.0 KB',
self.assertEquals(element.text, '1.0 KB',
'package size should be shown for image builds')
selector = '[data-link="target-packages"]'


@@ -1,103 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# BitBake Toaster UI tests implementation
#
# Copyright (C) 2023 Savoir-faire Linux Inc
#
# SPDX-License-Identifier: GPL-2.0-only
import pytest
from django.urls import reverse
from selenium.webdriver.support.ui import Select
from tests.browser.selenium_helpers import SeleniumTestCase
from orm.models import BitbakeVersion, Project, Release
from selenium.webdriver.common.by import By
class TestDeleteProject(SeleniumTestCase):
def setUp(self):
bitbake, _ = BitbakeVersion.objects.get_or_create(
name="master",
giturl="git://master",
branch="master",
dirpath="master")
self.release, _ = Release.objects.get_or_create(
name="master",
description="Yocto Project master",
branch_name="master",
helptext="latest",
bitbake_version=bitbake)
Release.objects.get_or_create(
name="foo",
description="Yocto Project foo",
branch_name="foo",
helptext="latest",
bitbake_version=bitbake)
@pytest.mark.django_db
def test_delete_project(self):
""" Test delete a project
- Check delete modal is visible
- Check delete modal has right text
- Confirm delete
- Check project is deleted
"""
project_name = "project_to_delete"
url = reverse('newproject')
self.get(url)
self.enter_text('#new-project-name', project_name)
select = Select(self.find('#projectversion'))
select.select_by_value(str(self.release.pk))
self.click("#create-project-button")
# We should get redirected to the new project's page with the
# notification at the top
element = self.wait_until_visible('#project-created-notification')
self.assertTrue(project_name in element.text,
"New project name not in new project notification")
self.assertTrue(Project.objects.filter(name=project_name).count(),
"New project not found in database")
# Delete project
delete_project_link = self.driver.find_element(
By.XPATH, '//a[@href="#delete-project-modal"]')
delete_project_link.click()
# Check delete modal is visible
self.wait_until_visible('#delete-project-modal')
# Check delete modal has right text
modal_header_text = self.find('#delete-project-modal .modal-header').text
self.assertTrue(
"Are you sure you want to delete this project?" in modal_header_text,
"Delete project modal header text is wrong")
modal_body_text = self.find('#delete-project-modal .modal-body').text
self.assertTrue(
"Cancel its builds currently in progress" in modal_body_text,
"Modal body doesn't contain: Cancel its builds currently in progress")
self.assertTrue(
"Remove its configuration information" in modal_body_text,
"Modal body doesn't contain: Remove its configuration information")
self.assertTrue(
"Remove its imported layers" in modal_body_text,
"Modal body doesn't contain: Remove its imported layers")
self.assertTrue(
"Remove its custom images" in modal_body_text,
"Modal body doesn't contain: Remove its custom images")
self.assertTrue(
"Remove all its build information" in modal_body_text,
"Modal body doesn't contain: Remove all its build information")
# Confirm delete
delete_btn = self.find('#delete-project-confirmed')
delete_btn.click()
# Check project is deleted
self.wait_until_visible('#change-notification')
delete_notification = self.find('#change-notification-msg')
self.assertTrue("You have deleted 1 project:" in delete_notification.text)
self.assertTrue(project_name in delete_notification.text)
self.assertFalse(Project.objects.filter(name=project_name).exists(),
"Project not deleted from database")


@@ -10,10 +10,8 @@
from django.urls import reverse
from django.utils import timezone
from tests.browser.selenium_helpers import SeleniumTestCase
from selenium.webdriver.common.by import By
from orm.models import Layer, Layer_Version, Project, Build
from orm.models import Project, Build
class TestLandingPage(SeleniumTestCase):
""" Tests for redirects on the landing page """
@@ -31,130 +29,6 @@ class TestLandingPage(SeleniumTestCase):
self.project.is_default = True
self.project.save()
def test_icon_info_visible_and_clickable(self):
""" Test that the information icon is visible and clickable """
self.get(reverse('landing'))
info_sign = self.find('#toaster-version-info-sign')
# check that the info sign is visible
self.assertTrue(info_sign.is_displayed())
# check that the info sign is clickable
# and info modal is appearing when clicking on the info sign
info_sign.click() # clicking the info sign makes the 'aria-describedby' attribute visible
info_model_id = info_sign.get_attribute('aria-describedby')
info_modal = self.find(f'#{info_model_id}')
self.assertTrue(info_modal.is_displayed())
self.assertTrue("Toaster version information" in info_modal.text)
def test_documentation_link_displayed(self):
""" Test that the documentation link is displayed """
self.get(reverse('landing'))
documentation_link = self.find('#navbar-docs > a')
# check that the documentation link is visible
self.assertTrue(documentation_link.is_displayed())
# check that the browser opens the Toaster manual in a new tab when clicking on the documentation link
self.assertEqual(documentation_link.get_attribute('target'), '_blank')
self.assertEqual(
documentation_link.get_attribute('href'),
'http://docs.yoctoproject.org/toaster-manual/index.html#toaster-user-manual')
self.assertTrue("Documentation" in documentation_link.text)
def test_openembedded_jumbotron_link_visible_and_clickable(self):
""" Test OpenEmbedded link jumbotron is visible and clickable: """
self.get(reverse('landing'))
jumbotron = self.find('.jumbotron')
# check OpenEmbedded
openembedded = jumbotron.find_element(By.LINK_TEXT, 'OpenEmbedded')
self.assertTrue(openembedded.is_displayed())
openembedded.click()
self.assertTrue("openembedded.org" in self.driver.current_url)
def test_bitbake_jumbotron_link_visible_and_clickable(self):
""" Test BitBake link jumbotron is visible and clickable: """
self.get(reverse('landing'))
jumbotron = self.find('.jumbotron')
# check BitBake
bitbake = jumbotron.find_element(By.LINK_TEXT, 'BitBake')
self.assertTrue(bitbake.is_displayed())
bitbake.click()
self.assertTrue(
"docs.yoctoproject.org/bitbake.html" in self.driver.current_url)
def test_yoctoproject_jumbotron_link_visible_and_clickable(self):
""" Test Yocto Project link jumbotron is visible and clickable: """
self.get(reverse('landing'))
jumbotron = self.find('.jumbotron')
# check Yocto Project
yoctoproject = jumbotron.find_element(By.LINK_TEXT, 'Yocto Project')
self.assertTrue(yoctoproject.is_displayed())
yoctoproject.click()
self.assertTrue("yoctoproject.org" in self.driver.current_url)
def test_link_setup_using_toaster_visible_and_clickable(self):
""" Test big magenta button setting up and using toaster link in jumbotron
if visible and clickable
"""
self.get(reverse('landing'))
jumbotron = self.find('.jumbotron')
# check Big magenta button
big_magenta_button = jumbotron.find_element(By.LINK_TEXT,
'Toaster is ready to capture your command line builds'
)
self.assertTrue(big_magenta_button.is_displayed())
big_magenta_button.click()
self.assertTrue(
"docs.yoctoproject.org/toaster-manual/setup-and-use.html#setting-up-and-using-toaster" in self.driver.current_url)
def test_link_create_new_project_in_jumbotron_visible_and_clickable(self):
""" Test big blue button create new project jumbotron if visible and clickable """
# Create a layer and a layer version to make visible the big blue button
layer = Layer.objects.create(name='bar')
Layer_Version.objects.create(layer=layer)
self.get(reverse('landing'))
jumbotron = self.find('.jumbotron')
# check Big Blue button
big_blue_button = jumbotron.find_element(By.LINK_TEXT,
'Create your first Toaster project to run manage builds'
)
self.assertTrue(big_blue_button.is_displayed())
big_blue_button.click()
self.assertTrue("toastergui/newproject/" in self.driver.current_url)
def test_toaster_manual_link_visible_and_clickable(self):
""" Test Read the Toaster manual link jumbotron is visible and clickable: """
self.get(reverse('landing'))
jumbotron = self.find('.jumbotron')
# check Read the Toaster manual
toaster_manual = jumbotron.find_element(
By.LINK_TEXT, 'Read the Toaster manual')
self.assertTrue(toaster_manual.is_displayed())
toaster_manual.click()
self.assertTrue(
"https://docs.yoctoproject.org/toaster-manual/index.html#toaster-user-manual" in self.driver.current_url)
def test_contrib_to_toaster_link_visible_and_clickable(self):
""" Test Contribute to Toaster link jumbotron is visible and clickable: """
self.get(reverse('landing'))
jumbotron = self.find('.jumbotron')
# check Contribute to Toaster
contribute_to_toaster = jumbotron.find_element(
By.LINK_TEXT, 'Contribute to Toaster')
self.assertTrue(contribute_to_toaster.is_displayed())
contribute_to_toaster.click()
self.assertTrue(
"wiki.yoctoproject.org/wiki/contribute_to_toaster" in str(self.driver.current_url).lower())
def test_only_default_project(self):
"""
No projects except default
@@ -213,9 +87,10 @@ class TestLandingPage(SeleniumTestCase):
self.get(reverse('landing'))
self.wait_until_visible("#latest-builds", poll=3)
elements = self.find_all('#allbuildstable')
self.assertEqual(len(elements), 1, 'should redirect to builds')
content = self.get_page_source()
self.assertTrue(self.PROJECT_NAME in content,
'should show builds for project %s' % self.PROJECT_NAME)
self.assertFalse(self.CLI_BUILDS_PROJECT_NAME in content,
'should not show builds for cli project')


@@ -8,7 +8,6 @@
#
from django.urls import reverse
from selenium.common.exceptions import ElementClickInterceptedException, TimeoutException
from tests.browser.selenium_helpers import SeleniumTestCase
from orm.models import Layer, Layer_Version, Project, LayerSource, Release
@@ -64,12 +63,11 @@ class TestLayerDetailsPage(SeleniumTestCase):
args=(self.project.pk,
self.imported_layer_version.pk))
def _edit_layerdetails(self):
def test_edit_layerdetails(self):
""" Edit all the editable fields for the layer refresh the page and
check that the new values exist"""
self.get(self.url)
self.wait_until_visible("#add-remove-layer-btn")
self.click("#add-remove-layer-btn")
self.click("#edit-layer-source")
@@ -107,18 +105,7 @@ class TestLayerDetailsPage(SeleniumTestCase):
for save_btn in self.find_all(".change-btn"):
save_btn.click()
try:
self.wait_until_visible("#save-changes-for-switch", poll=3)
btn_save_chg_for_switch = self.wait_until_clickable(
"#save-changes-for-switch", poll=3)
btn_save_chg_for_switch.click()
except ElementClickInterceptedException:
self.skipTest(
"save-changes-for-switch click intercepted. Element not visible or maybe covered by another element.")
except TimeoutException:
self.skipTest(
"save-changes-for-switch is not clickable within the specified timeout.")
self.click("#save-changes-for-switch")
self.wait_until_visible("#edit-layer-source")
# Refresh the page to see if the new values are returned
@@ -147,18 +134,7 @@ class TestLayerDetailsPage(SeleniumTestCase):
new_dir = "/home/test/my-meta-dir"
dir_input.send_keys(new_dir)
try:
self.wait_until_visible("#save-changes-for-switch", poll=3)
btn_save_chg_for_switch = self.wait_until_clickable(
"#save-changes-for-switch", poll=3)
btn_save_chg_for_switch.click()
except ElementClickInterceptedException:
self.skipTest(
"save-changes-for-switch click intercepted. Element not properly visible or maybe behind another element.")
except TimeoutException:
self.skipTest(
"save-changes-for-switch is not clickable within the specified timeout.")
self.click("#save-changes-for-switch")
self.wait_until_visible("#edit-layer-source")
# Refresh the page to see if the new values are returned
@@ -168,13 +144,6 @@ class TestLayerDetailsPage(SeleniumTestCase):
"Expected %s in the dir value for layer directory" %
new_dir)
def test_edit_layerdetails_page(self):
try:
self._edit_layerdetails()
except ElementClickInterceptedException:
self.skipTest(
"ElementClickInterceptedException occured. Element not visible or maybe covered by another element.")
def test_delete_layer(self):
""" Delete the layer """


@@ -6,6 +6,7 @@
#
# Copyright (C) 2013-2016 Intel Corporation
#
import time
from django.urls import reverse
from django.utils import timezone
from tests.browser.selenium_helpers import SeleniumTestCase
@@ -46,7 +47,7 @@ class TestMostRecentBuildsStates(SeleniumTestCase):
# build queued; check shown as queued
selector = base_selector + '[data-build-state="Queued"]'
element = self.wait_until_visible(selector)
self.assertRegex(element.get_attribute('innerHTML'),
self.assertRegexpMatches(element.get_attribute('innerHTML'),
'Build queued', 'build should show queued status')
# waiting for recipes to be parsed
@@ -96,7 +97,7 @@ class TestMostRecentBuildsStates(SeleniumTestCase):
selector = base_selector + '[data-build-state="Starting"]'
element = self.wait_until_visible(selector)
self.assertRegex(element.get_attribute('innerHTML'),
self.assertRegexpMatches(element.get_attribute('innerHTML'),
'Tasks starting', 'build should show "tasks starting" status')
# first task finished; check tasks progress bar
@@ -185,7 +186,7 @@ class TestMostRecentBuildsStates(SeleniumTestCase):
selector = '[data-latest-build-result="%s"] ' \
'[data-build-state="Cancelling"]' % build.id
element = self.wait_until_visible(selector)
self.assertRegex(element.get_attribute('innerHTML'),
self.assertRegexpMatches(element.get_attribute('innerHTML'),
'Cancelling the build', 'build should show "cancelling" status')
# check cancelled state
@@ -197,5 +198,5 @@ class TestMostRecentBuildsStates(SeleniumTestCase):
selector = '[data-latest-build-result="%s"] ' \
'[data-build-state="Cancelled"]' % build.id
element = self.wait_until_visible(selector)
self.assertRegex(element.get_attribute('innerHTML'),
self.assertRegexpMatches(element.get_attribute('innerHTML'),
'Build cancelled', 'build should show "cancelled" status')


@@ -45,16 +45,11 @@ class TestNewCustomImagePage(SeleniumTestCase):
)
# add a fake image recipe to the layer that can be customised
builldir = os.environ.get('BUILDDIR', './')
self.recipe = Recipe.objects.create(
name='core-image-minimal',
layer_version=layer_version,
file_path=f'{builldir}/core-image-minimal.bb',
is_image=True
)
# create a tmp file for the recipe
with open(self.recipe.file_path, 'w') as f:
f.write('foo')
# another project with a custom image already in it
project2 = Project.objects.create(name='whoop', release=release)
@@ -90,7 +85,6 @@ class TestNewCustomImagePage(SeleniumTestCase):
"""
url = reverse('newcustomimage', args=(self.project.id,))
self.get(url)
self.wait_until_visible('#global-nav', poll=3)
self.click('button[data-recipe="%s"]' % self.recipe.id)
@@ -138,7 +132,7 @@ class TestNewCustomImagePage(SeleniumTestCase):
"""
self._create_custom_image(self.recipe.name)
element = self.wait_until_visible('#invalid-name-help')
self.assertRegex(element.text.strip(),
self.assertRegexpMatches(element.text.strip(),
'image with this name already exists')
def test_new_duplicates_project_image(self):
@@ -156,4 +150,4 @@ class TestNewCustomImagePage(SeleniumTestCase):
self._create_custom_image(custom_image_name)
element = self.wait_until_visible('#invalid-name-help')
expected = 'An image with this name already exists in this project'
self.assertRegex(element.text.strip(), expected)
self.assertRegexpMatches(element.text.strip(), expected)


@@ -6,6 +6,8 @@
#
# SPDX-License-Identifier: GPL-2.0-only
#
import time
from django.urls import reverse
from tests.browser.selenium_helpers import SeleniumTestCase
from selenium.webdriver.support.ui import Select
@@ -47,18 +49,18 @@ class TestNewProjectPage(SeleniumTestCase):
url = reverse('newproject')
self.get(url)
self.wait_until_visible('#new-project-name', poll=3)
self.enter_text('#new-project-name', project_name)
select = Select(self.find('#projectversion'))
select.select_by_value(str(self.release.pk))
time.sleep(1)
self.click("#create-project-button")
time.sleep(2)
# We should get redirected to the new project's page with the
# notification at the top
element = self.wait_until_visible(
'#project-created-notification', poll=3)
element = self.wait_until_visible('#project-created-notification')
self.assertTrue(project_name in element.text,
"New project name not in new project notification")
@@ -79,7 +81,6 @@ class TestNewProjectPage(SeleniumTestCase):
url = reverse('newproject')
self.get(url)
self.wait_until_visible('#new-project-name', poll=3)
self.enter_text('#new-project-name', project_name)
@@ -90,9 +91,9 @@ class TestNewProjectPage(SeleniumTestCase):
radio.click()
self.click("#create-project-button")
time.sleep(2)
self.wait_until_present('#hint-error-project-name', poll=3)
element = self.find('#hint-error-project-name')
element = self.wait_until_visible('#hint-error-project-name')
self.assertTrue(("Project names must be unique" in element.text),
"Did not find unique project name error message")
@@ -104,6 +105,7 @@ class TestNewProjectPage(SeleniumTestCase):
except InvalidElementStateException:
pass
time.sleep(2)
self.assertTrue(
(Project.objects.filter(name=project_name).count() == 1),
"New project not found in database")


@@ -7,7 +7,6 @@
# SPDX-License-Identifier: GPL-2.0-only
#
import os
import re
from django.urls import reverse
@@ -23,8 +22,7 @@ class TestProjectBuildsPage(SeleniumTestCase):
CLI_BUILDS_PROJECT_NAME = 'command line builds'
def setUp(self):
builldir = os.environ.get('BUILDDIR', './')
bbv = BitbakeVersion.objects.create(name='bbv1', giturl=f'{builldir}/',
bbv = BitbakeVersion.objects.create(name='bbv1', giturl='/tmp/',
branch='master', dirpath='')
release = Release.objects.create(name='release1',
bitbake_version=bbv)


@@ -7,7 +7,6 @@
# SPDX-License-Identifier: GPL-2.0-only
#
import os
from django.urls import reverse
from tests.browser.selenium_helpers import SeleniumTestCase
@@ -23,8 +22,7 @@ class TestProjectConfigsPage(SeleniumTestCase):
'any of these characters'
def setUp(self):
builldir = os.environ.get('BUILDDIR', './')
bbv = BitbakeVersion.objects.create(name='bbv1', giturl=f'{builldir}/',
bbv = BitbakeVersion.objects.create(name='bbv1', giturl='/tmp/',
branch='master', dirpath='')
release = Release.objects.create(name='release1',
bitbake_version=bbv)


@@ -27,13 +27,3 @@ class TestSample(SeleniumTestCase):
self.get(url)
brand_link = self.find('.toaster-navbar-brand a.brand')
self.assertEqual(brand_link.text.strip(), 'Toaster')
def test_no_builds_message(self):
""" Test that a message is shown when there are no builds """
url = reverse('all-builds')
self.get(url)
self.wait_until_visible('#empty-state-allbuildstable') # wait for the empty state div to appear
div_msg = self.find('#empty-state-allbuildstable .alert-info')
msg = 'Sorry - no data found'
self.assertEqual(div_msg.text, msg)


@@ -8,7 +8,6 @@
#
from datetime import datetime
import os
from django.urls import reverse
from django.utils import timezone
@@ -60,8 +59,7 @@ class TestToasterTableUI(SeleniumTestCase):
later = now + timezone.timedelta(hours=1)
even_later = later + timezone.timedelta(hours=1)
builldir = os.environ.get('BUILDDIR', './')
bbv = BitbakeVersion.objects.create(name='test bbv', giturl=f'{builldir}/',
bbv = BitbakeVersion.objects.create(name='test bbv', giturl='/tmp/',
branch='master', dirpath='')
release = Release.objects.create(name='test release',
branch_name='master',


@@ -88,7 +88,7 @@ def load_build_environment():
class BuildTest(unittest.TestCase):
PROJECT_NAME = "Testbuild"
BUILDDIR = os.environ.get("BUILDDIR")
BUILDDIR = "/tmp/build/"
def build(self, target):
# So that the buildinfo helper uses the test database
@@ -116,19 +116,10 @@ class BuildTest(unittest.TestCase):
project = Project.objects.create_project(name=BuildTest.PROJECT_NAME,
release=release)
passthrough_variable_names = ["SSTATE_DIR", "DL_DIR", "SSTATE_MIRRORS", "BB_HASHSERVE", "BB_HASHSERVE_UPSTREAM"]
for variable_name in passthrough_variable_names:
current_variable = os.environ.get(variable_name)
if current_variable:
ProjectVariable.objects.get_or_create(
name=variable_name,
value=current_variable,
project=project)
if os.environ.get("TOASTER_TEST_USE_SSTATE_MIRROR"):
ProjectVariable.objects.get_or_create(
name="SSTATE_MIRRORS",
value="file://.* http://sstate.yoctoproject.org/all/PATH;downloadfilename=PATH",
value="file://.* http://sstate.yoctoproject.org/PATH;downloadfilename=PATH",
project=project)
ProjectTarget.objects.create(project=project,


@@ -10,7 +10,6 @@
# Ionut Chisanovici, Paul Eggleton and Cristian Iorga
import os
import pytest
from django.db.models import Q
@@ -21,13 +20,13 @@ from orm.models import CustomImagePackage
from tests.builds.buildtest import BuildTest
@pytest.mark.order(4)
@pytest.mark.django_db(True)
class BuildCoreImageMinimal(BuildTest):
"""Build core-image-minimal and test the results"""
def setUp(self):
self.completed_build = self.target_already_built("core-image-minimal")
self.completed_build = self.build("core-image-minimal")
self.built = self.target_already_built("core-image-minimal")
# Check if build name is unique - tc_id=795
def test_Build_Unique_Name(self):
@@ -46,6 +45,17 @@ class BuildCoreImageMinimal(BuildTest):
total_builds,
msg='Build cooker log path is not unique')
# Check if task order is unique for one build - tc=824
def test_Task_Unique_Order(self):
total_task_order = Task.objects.filter(
build=self.built).values('order').count()
distinct_task_order = Task.objects.filter(
build=self.completed_build).values('order').distinct().count()
self.assertEqual(total_task_order,
distinct_task_order,
msg='Error: task order is not unique')
# Check task order sequence for one build - tc=825
def test_Task_Order_Sequence(self):
cnt_err = []
@@ -89,6 +99,7 @@ class BuildCoreImageMinimal(BuildTest):
'task_name',
'sstate_result')
cnt_err = []
for task in tasks:
if (task['sstate_result'] != Task.SSTATE_NA and
task['sstate_result'] != Task.SSTATE_MISS):
@@ -211,7 +222,6 @@ class BuildCoreImageMinimal(BuildTest):
# orm_build.outcome=0 then if the file exists and its size matches
# the file_size value. Need to add the tc in the test run
def test_Target_File_Name_Populated(self):
cnt_err = []
builds = Build.objects.filter(outcome=0).values('id')
for build in builds:
targets = Target.objects.filter(
@@ -221,6 +231,7 @@ class BuildCoreImageMinimal(BuildTest):
target_id=target['id']).values('id',
'file_name',
'file_size')
cnt_err = []
for file_info in target_files:
target_id = file_info['id']
target_file_name = file_info['file_name']

Some files were not shown because too many files have changed in this diff.