= OEQA (v2) Framework =
== Introduction ==
This is version 2 of the OEQA framework. Base classes are located in the 'oeqa/core' directory and subsequent components must extend from these.
The main design consideration was to implement the needed functionality on top of the Python unittest framework. To achieve this goal, the following modules are used:
* oeqa/core/runner.py: Provides OETestResult and OETestRunner base
classes extending the unittest class. These classes support exporting
results to different formats; currently RAW and XML are supported.
* oeqa/core/loader.py: Provides OETestLoader extending the unittest class.
It also features a unified implementation of decorator support and
test case filtering.
* oeqa/core/case.py: Provides the OETestCase base class extending
unittest.TestCase, which gives test cases access to the Test data (td),
Test context and Logger functionality (see the sketch after this list).
* oeqa/core/decorator: Provides OETestDecorator, a new class to implement
decorators for Test cases.
* oeqa/core/context: Provides OETestContext, a high-level API for
loadTests and runTests of a certain Test component, and
OETestContextExecutor, a base class that enables oe-test to
discover/use the Test component.
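As a minimal, indicative sketch (not part of the framework itself), a test case built on these classes could look like the following; the class name, test method and the 'MACHINE' key are illustrative assumptions, while self.td and self.logger are the facilities described above:

    from oeqa.core.case import OETestCase

    class ExampleTest(OETestCase):
        def test_td_access(self):
            # self.td is the Test data dictionary supplied by the Test
            # context; 'MACHINE' is only an example key and may not be set.
            machine = self.td.get('MACHINE', '')
            # self.logger is the Logger instance mentioned above.
            self.logger.info('Running against MACHINE=%s' % machine)
            self.assertIsInstance(machine, str)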
Also, a new 'oe-test' runner is located under 'scripts'; it scans for components that support OETestContextExecutor (see below).
== Terminology ==
* Test component: The area of testing in the Project, for example: runtime, SDK, eSDK, selftest.
* Test data: Data associated with the Test component. Currently we use the bitbake datastore as
the Test data input.
* Test context: A context of what tests need to be run and how to do it; this additionally
provides access to the Test data and may have custom methods and/or attributes.
== oe-test ==
The new tool, oe-test, has the ability to scan the code base for test components and provide a unified way to run test cases. Internally it scans folders inside the oeqa module in order to find specific classes that implement a test component.
== Usage ==
Executing the example test component
$ source oe-init-build-env
$ oe-test core
Getting help
$ oe-test -h
== Creating new Test Component ==
To add a new test component, the developer needs to extend OETestContext/OETestContextExecutor (from context.py) and OETestCase (from case.py); a minimal sketch is shown below.
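The following sketch is illustrative only: the component name 'mycomp' and the exact attribute set are assumptions, and the module-level '_executor_class' hook simply mirrors the pattern of existing components, so compare with an existing context.py before relying on the details.

    # Hypothetical file: oeqa/mycomp/context.py
    from oeqa.core.context import OETestContext, OETestContextExecutor

    class MyCompTestContext(OETestContext):
        # Custom methods and/or attrs for this Test component would go here.
        pass

    class MyCompTestContextExecutor(OETestContextExecutor):
        _context_class = MyCompTestContext
        name = 'mycomp'
        help = 'mycomp test component'
        description = 'Executes mycomp test suites'

    # Assumption: oe-test discovers the component through this
    # module-level attribute, as existing components do.
    _executor_class = MyCompTestContextExecutor

Test cases extending OETestCase would then be added for the component and run with '$ oe-test mycomp'.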
== Selftesting the framework ==
Run all tests:
$ PATH=$PATH:../../ python3 -m unittest discover -s tests
Run some test:
$ cd tests/
$ ./test_data.py