Mirror of https://git.yoctoproject.org/poky
Synced 2026-02-20 08:29:42 +01:00

Compare commits: yocto-2.0...fido-13.0 (395 commits)
.gitignore (vendored): 3 lines changed
@@ -21,6 +21,3 @@ documentation/user-manual/user-manual.html
documentation/user-manual/user-manual.pdf
documentation/user-manual/user-manual.tgz
pull-*/
bitbake/lib/toaster/contrib/tts/backlog.txt
bitbake/lib/toaster/contrib/tts/log/*
bitbake/lib/toaster/contrib/tts/.cache/*
@@ -105,7 +105,9 @@ Intel Atom platforms:

and is likely to work on many unlisted Atom/Core/Xeon based devices. The MACHINE
type supports ethernet, wifi, sound, and Intel/vesa graphics by default in
addition to common PC input devices, busses, and so on.
addition to common PC input devices, busses, and so on. Note that it does not
include the binary-only graphic drivers used on some Atom platforms; for
accelerated graphics on these machines please refer to meta-intel.

Depending on the device, it can boot from a traditional hard-disk, a USB device,
or over the network. Writing generated images to physical media is
@@ -33,21 +33,17 @@ except RuntimeError as exc:
    sys.exit(str(exc))

from bb import cookerdata
from bb.main import bitbake_main, BitBakeConfigParameters, BBMainException

__version__ = "1.28.0"
from bb.main import bitbake_main, BitBakeConfigParameters

if __name__ == "__main__":
    if __version__ != bb.__version__:
        sys.exit("Bitbake core version and program version mismatch!")
    try:
        sys.exit(bitbake_main(BitBakeConfigParameters(sys.argv),
                              cookerdata.CookerConfiguration()))
    except BBMainException as err:
        sys.exit(err)
        ret = bitbake_main(BitBakeConfigParameters(sys.argv),
                           cookerdata.CookerConfiguration())
    except bb.BBHandledException:
        sys.exit(1)
        ret = 1
    except Exception:
        ret = 1
        import traceback
        traceback.print_exc()
        sys.exit(1)
    sys.exit(ret)
@@ -54,8 +54,6 @@ def logger_create(name, output=sys.stderr):

logger = logger_create('bitbake-layers', sys.stdout)

class UserError(Exception):
    pass

class Commands():
    def __init__(self):
@@ -64,22 +62,24 @@ class Commands():

    def init_bbhandler(self, config_only = False):
        if not self.bbhandler:
            self.bbhandler = bb.tinfoil.Tinfoil(tracking=True)
            self.bbhandler = bb.tinfoil.Tinfoil()
            self.bblayers = (self.bbhandler.config_data.getVar('BBLAYERS', True) or "").split()
            self.bbhandler.prepare(config_only)
            layerconfs = self.bbhandler.config_data.varhistory.get_variable_items_files('BBFILE_COLLECTIONS', self.bbhandler.config_data)
            self.bbfile_collections = {layer: os.path.dirname(os.path.dirname(path)) for layer, path in layerconfs.iteritems()}

    def do_show_layers(self, args):
        """show current configured layers"""
        self.init_bbhandler(config_only = True)
        logger.plain("%s %s %s" % ("layer".ljust(20), "path".ljust(40), "priority"))
        logger.plain('=' * 74)
        for layer, _, regex, pri in self.bbhandler.cooker.recipecache.bbfile_config_priorities:
            layerdir = self.bbfile_collections.get(layer, None)
        for layerdir in self.bblayers:
            layername = self.get_layer_name(layerdir)
            logger.plain("%s %s %d" % (layername.ljust(20), layerdir.ljust(40), pri))
            layerpri = 0
            for layer, _, regex, pri in self.bbhandler.cooker.recipecache.bbfile_config_priorities:
                if regex.match(os.path.join(layerdir, 'test')):
                    layerpri = pri
                    break

            logger.plain("%s %s %d" % (layername.ljust(20), layerdir.ljust(40), layerpri))

    def do_add_layer(self, args):
@@ -120,8 +120,6 @@ Removes the specified layer from bblayers.conf

        if args.layerdir.startswith('*'):
            layerdir = args.layerdir
        elif not '/' in args.layerdir:
            layerdir = '*/%s' % args.layerdir
        else:
            layerdir = os.path.abspath(args.layerdir)
        (_, notremoved) = bb.utils.edit_bblayers_conf(bblayers_conf, None, layerdir)
@@ -267,7 +265,6 @@ Removes the specified layer from bblayers.conf
        apiurl = self.bbhandler.config_data.getVar('BBLAYERS_LAYERINDEX_URL', True)
        if not apiurl:
            logger.error("Cannot get BBLAYERS_LAYERINDEX_URL")
            return 1
        else:
            if apiurl[-1] != '/':
                apiurl += '/'
@@ -390,7 +387,7 @@ are overlayed will also be listed, with a " (skipped)" suffix.
        """
        self.init_bbhandler()

        items_listed = self.list_recipes('Overlayed recipes', None, True, args.same_version, args.filenames, True, None)
        items_listed = self.list_recipes('Overlayed recipes', None, True, args.same_version, args.filenames, True)

        # Check for overlayed .bbclass files
        classes = defaultdict(list)
@@ -453,22 +450,11 @@ skipped recipes will also be listed, with a " (skipped)" suffix.
        """
        self.init_bbhandler()

        inheritlist = args.inherits.split(',') if args.inherits else []
        if inheritlist or args.pnspec or args.multiple:
            title = 'Matching recipes:'
        else:
            title = 'Available recipes:'
        self.list_recipes(title, args.pnspec, False, False, args.filenames, args.multiple, inheritlist)
        title = 'Available recipes:'
        self.list_recipes(title, args.pnspec, False, False, args.filenames, args.multiple)

    def list_recipes(self, title, pnspec, show_overlayed_only, show_same_ver_only, show_filenames, show_multi_provider_only, inherits):
        if inherits:
            bbpath = str(self.bbhandler.config_data.getVar('BBPATH', True))
            for classname in inherits:
                classfile = 'classes/%s.bbclass' % classname
                if not bb.utils.which(bbpath, classfile, history=False):
                    raise UserError('No class named %s found in BBPATH' % classfile)

    def list_recipes(self, title, pnspec, show_overlayed_only, show_same_ver_only, show_filenames, show_multi_provider_only):
        pkg_pn = self.bbhandler.cooker.recipecache.pkg_pn
        (latest_versions, preferred_versions) = bb.providers.findProviders(self.bbhandler.config_data, self.bbhandler.cooker.recipecache, pkg_pn)
        allproviders = bb.providers.allProviders(self.bbhandler.cooker.recipecache)
@@ -506,9 +492,6 @@ skipped recipes will also be listed, with a " (skipped)" suffix.
                logger.plain("%s:", pn)
                logger.plain("  %s %s%s", layer.ljust(20), ver, skipped)

        global_inherit = (self.bbhandler.config_data.getVar('INHERIT', True) or "").split()
        cls_re = re.compile('classes/')

        preffiles = []
        items_listed = False
        for p in sorted(pkg_pn):
@@ -520,28 +503,11 @@ skipped recipes will also be listed, with a " (skipped)" suffix.
            pref = preferred_versions[p]
            realfn = bb.cache.Cache.virtualfn2realfn(pref[1])
            preffile = realfn[0]

            # We only display once per recipe, we should prefer non extended versions of the
            # recipe if present (so e.g. in OpenEmbedded, openssl rather than nativesdk-openssl
            # which would otherwise sort first).
            if realfn[1] and realfn[0] in self.bbhandler.cooker.recipecache.pkg_fn:
                continue

            if inherits:
                matchcount = 0
                recipe_inherits = self.bbhandler.cooker_data.inherits.get(preffile, [])
                for cls in recipe_inherits:
                    if cls_re.match(cls):
                        continue
                    classname = os.path.splitext(os.path.basename(cls))[0]
                    if classname in global_inherit:
                        continue
                    elif classname in inherits:
                        matchcount += 1
                if matchcount != len(inherits):
                    # No match - skip this recipe
                    continue

            if preffile not in preffiles:
                preflayer = self.get_file_layer(preffile)
                multilayer = False
@@ -629,7 +595,7 @@ build results (as the layer priority order has effectively changed).
            return layerdir
        return None

        applied_appends = []
        appended_recipes = []
        for layer in layers:
            overlayed = []
            for f in self.bbhandler.cooker.collection.overlayed.iterkeys():
@@ -657,27 +623,31 @@ build results (as the layer priority order has effectively changed).
                logger.warn('Overwriting file %s', fdest)
            bb.utils.copyfile(f1full, fdest)
            if ext == '.bb':
                for append in self.bbhandler.cooker.collection.get_file_appends(f1full):
                    if layer_path_match(append):
                        logger.plain('  Applying append %s to %s' % (append, fdest))
                        self.apply_append(append, fdest)
                        applied_appends.append(append)
                if f1 in self.bbhandler.cooker.collection.appendlist:
                    appends = self.bbhandler.cooker.collection.appendlist[f1]
                    if appends:
                        logger.plain('  Applying appends to %s' % fdest )
                        for appendname in appends:
                            if layer_path_match(appendname):
                                self.apply_append(appendname, fdest)
                    appended_recipes.append(f1)

        # Take care of when some layers are excluded and yet we have included bbappends for those recipes
        for b in self.bbhandler.cooker.collection.bbappends:
            (recipename, appendname) = b
            if appendname not in applied_appends:
        for recipename in self.bbhandler.cooker.collection.appendlist.iterkeys():
            if recipename not in appended_recipes:
                appends = self.bbhandler.cooker.collection.appendlist[recipename]
                first_append = None
                layer = layer_path_match(appendname)
                if layer:
                    if first_append:
                        self.apply_append(appendname, first_append)
                    else:
                        fdest = appendname[len(layer):]
                        fdest = os.path.normpath(os.sep.join([outputdir,fdest]))
                        bb.utils.mkdirhier(os.path.dirname(fdest))
                        bb.utils.copyfile(appendname, fdest)
                        first_append = fdest
                for appendname in appends:
                    layer = layer_path_match(appendname)
                    if layer:
                        if first_append:
                            self.apply_append(appendname, first_append)
                        else:
                            fdest = appendname[len(layer):]
                            fdest = os.path.normpath(os.sep.join([outputdir,fdest]))
                            bb.utils.mkdirhier(os.path.dirname(fdest))
                            bb.utils.copyfile(appendname, fdest)
                            first_append = fdest

        # Get the regex for the first layer in our list (which is where the conf/layer.conf file will
        # have come from)
@@ -713,22 +683,25 @@ build results (as the layer priority order has effectively changed).
            logger.warning("File %s does not match the flattened layer's BBFILES setting, you may need to edit conf/layer.conf or move the file elsewhere" % f1full)

    def get_file_layer(self, filename):
        layerdir = self.get_file_layerdir(filename)
        if layerdir:
            return self.get_layer_name(layerdir)
        else:
            return '?'
        for layer, _, regex, _ in self.bbhandler.cooker.recipecache.bbfile_config_priorities:
            if regex.match(filename):
                for layerdir in self.bblayers:
                    if regex.match(os.path.join(layerdir, 'test')) and re.match(layerdir, filename):
                        return self.get_layer_name(layerdir)
        return "?"

    def get_file_layerdir(self, filename):
        layer = bb.utils.get_file_layer(filename, self.bbhandler.config_data)
        return self.bbfile_collections.get(layer, None)
        for layer, _, regex, _ in self.bbhandler.cooker.recipecache.bbfile_config_priorities:
            if regex.match(filename):
                for layerdir in self.bblayers:
                    if regex.match(os.path.join(layerdir, 'test')) and re.match(layerdir, filename):
                        return layerdir
        return "?"

    def remove_layer_prefix(self, f):
        """Remove the layer_dir prefix, e.g., f = /path/to/layer_dir/foo/blah, the
        return value will be: layer_dir/foo/blah"""
        f_layerdir = self.get_file_layerdir(f)
        if not f_layerdir:
            return f
        prefix = os.path.join(os.path.dirname(f_layerdir), '')
        return f[len(prefix):] if f.startswith(prefix) else f
@@ -736,11 +709,13 @@ build results (as the layer priority order has effectively changed).
        return os.path.basename(layerdir.rstrip(os.sep))

    def apply_append(self, appendname, recipename):
        with open(appendname, 'r') as appendfile:
            with open(recipename, 'a') as recipefile:
                recipefile.write('\n')
                recipefile.write('##### bbappended from %s #####\n' % self.get_file_layer(appendname))
                recipefile.writelines(appendfile.readlines())
        appendfile = open(appendname, 'r')
        recipefile = open(recipename, 'a')
        recipefile.write('\n')
        recipefile.write('##### bbappended from %s #####\n' % self.get_file_layer(appendname))
        recipefile.writelines(appendfile.readlines())
        recipefile.close()
        appendfile.close()

    def do_show_appends(self, args):
        """list bbappend files and recipe files they apply to
@@ -748,21 +723,18 @@ build results (as the layer priority order has effectively changed).
Lists recipes with the bbappends that apply to them as subitems.
        """
        self.init_bbhandler()
        if not self.bbhandler.cooker.collection.appendlist:
            logger.plain('No append files found')
            return 0

        logger.plain('=== Appended recipes ===')

        pnlist = list(self.bbhandler.cooker_data.pkg_pn.keys())
        pnlist.sort()
        appends = False
        for pn in pnlist:
            if self.show_appends_for_pn(pn):
                appends = True
            self.show_appends_for_pn(pn)

        if self.show_appends_for_skipped():
            appends = True

        if not appends:
            logger.plain('No append files found')
        self.show_appends_for_skipped()

    def show_appends_for_pn(self, pn):
        filenames = self.bbhandler.cooker_data.pkg_pn[pn]
@@ -773,12 +745,12 @@ Lists recipes with the bbappends that apply to them as subitems.
            self.bbhandler.cooker_data.pkg_pn)
        best_filename = os.path.basename(best[3])

        return self.show_appends_output(filenames, best_filename)
        self.show_appends_output(filenames, best_filename)

    def show_appends_for_skipped(self):
        filenames = [os.path.basename(f)
                     for f in self.bbhandler.cooker.skiplist.iterkeys()]
        return self.show_appends_output(filenames, None, " (skipped)")
        self.show_appends_output(filenames, None, " (skipped)")

    def show_appends_output(self, filenames, best_filename, name_suffix = ''):
        appended, missing = self.get_appends_for_files(filenames)
@@ -792,9 +764,7 @@ Lists recipes with the bbappends that apply to them as subitems.
        if best_filename in missing:
            logger.warn('%s: missing append for preferred version',
                        best_filename)
            return True
        else:
            return False

    def get_appends_for_files(self, filenames):
        appended, notappended = [], []
@@ -908,19 +878,20 @@ NOTE: .bbappend files can impact the dependencies.

        # The 'require/include xxx' in the bb file
        pv_re = re.compile(r"\${PV}")
        with open(f, 'r') as fnfile:
        fnfile = open(f, 'r')
            line = fnfile.readline()
            while line:
                m, keyword = self.match_require_include(line)
                # Found the 'require/include xxxx'
                if m:
                    needed_file = m.group(1)
                    # Replace the ${PV} with the real PV
                    if pv_re.search(needed_file) and f in self.bbhandler.cooker_data.pkg_pepvpr:
                        pv = self.bbhandler.cooker_data.pkg_pepvpr[f][1]
                        needed_file = re.sub(r"\${PV}", pv, needed_file)
                    self.print_cross_files(bbpath, keyword, layername, f, needed_file, args.filenames, ignore_layers)
                line = fnfile.readline()
        while line:
            m, keyword = self.match_require_include(line)
            # Found the 'require/include xxxx'
            if m:
                needed_file = m.group(1)
                # Replace the ${PV} with the real PV
                if pv_re.search(needed_file) and f in self.bbhandler.cooker_data.pkg_pepvpr:
                    pv = self.bbhandler.cooker_data.pkg_pepvpr[f][1]
                    needed_file = re.sub(r"\${PV}", pv, needed_file)
                self.print_cross_files(bbpath, keyword, layername, f, needed_file, args.filenames, ignore_layers)
            line = fnfile.readline()
        fnfile.close()

        # The "require/include xxx" in conf/machine/*.conf, .inc and .bbclass
        conf_re = re.compile(".*/conf/machine/[^\/]*\.conf$")
@@ -934,19 +905,20 @@ NOTE: .bbappend files can impact the dependencies.
            f = os.path.join(dirpath, name)
            s = conf_re.match(f) or inc_re.match(f) or bbclass_re.match(f)
            if s:
                with open(f, 'r') as ffile:
                ffile = open(f, 'r')
                    line = ffile.readline()
                    while line:
                        m, keyword = self.match_require_include(line)
                        # Only bbclass has the "inherit xxx" here.
                        bbclass=""
                        if not m and f.endswith(".bbclass"):
                            m, keyword = self.match_inherit(line)
                            bbclass=".bbclass"
                        # Find a 'require/include xxxx'
                        if m:
                            self.print_cross_files(bbpath, keyword, layername, f, m.group(1) + bbclass, args.filenames, ignore_layers)
                        line = ffile.readline()
                while line:
                    m, keyword = self.match_require_include(line)
                    # Only bbclass has the "inherit xxx" here.
                    bbclass=""
                    if not m and f.endswith(".bbclass"):
                        m, keyword = self.match_inherit(line)
                        bbclass=".bbclass"
                    # Find a 'require/include xxxx'
                    if m:
                        self.print_cross_files(bbpath, keyword, layername, f, m.group(1) + bbclass, args.filenames, ignore_layers)
                    line = ffile.readline()
                ffile.close()

    def print_cross_files(self, bbpath, keyword, layername, f, needed_filename, show_filenames, ignore_layers):
        """Print the depends that crosses a layer boundary"""
@@ -1023,7 +995,6 @@ def main():
    parser_show_recipes = add_command('show-recipes', cmds.do_show_recipes)
    parser_show_recipes.add_argument('-f', '--filenames', help='instead of the default formatting, list filenames of higher priority recipes with the ones they overlay indented underneath', action='store_true')
    parser_show_recipes.add_argument('-m', '--multiple', help='only list where multiple recipes (in the same layer or different layers) exist for the same recipe name', action='store_true')
    parser_show_recipes.add_argument('-i', '--inherits', help='only list recipes that inherit the named class', metavar='CLASS', default='')
    parser_show_recipes.add_argument('pnspec', nargs='?', help='optional recipe name specification (wildcards allowed, enclose in quotes to avoid shell expansion)')

    parser_show_appends = add_command('show-appends', cmds.do_show_appends)
@@ -1053,11 +1024,7 @@ def main():
    elif args.quiet:
        logger.setLevel(logging.ERROR)

    try:
        ret = args.func(args)
    except UserError as err:
        logger.error(str(err))
        ret = 1
    ret = args.func(args)

    return ret
@@ -26,30 +26,24 @@ except RuntimeError as exc:
    sys.exit(str(exc))

def usage():
    print('usage: [BB_SKIP_NETTESTS=yes] %s [-v] [testname1 [testname2]...]' % os.path.basename(sys.argv[0]))
    print('usage: %s [testname1 [testname2]...]' % os.path.basename(sys.argv[0]))

verbosity = 1

tests = sys.argv[1:]
if '-v' in sys.argv:
    tests.remove('-v')
    verbosity = 2

if tests:
if len(sys.argv) > 1:
    if '--help' in sys.argv[1:]:
        usage()
        sys.exit(0)

    tests = sys.argv[1:]
else:
    tests = ["bb.tests.codeparser",
             "bb.tests.cow",
             "bb.tests.data",
             "bb.tests.fetch",
             "bb.tests.parse",
             "bb.tests.utils"]

for t in tests:
    t = '.'.join(t.split('.')[:3])
    __import__(t)

unittest.main(argv=["bitbake-selftest"] + tests, verbosity=verbosity)
unittest.main(argv=["bitbake-selftest"] + tests)
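Both versions of the usage string above describe the same test runner; a couple of representative invocations (paths relative to a poky checkout are an assumption) would be:

    # run the default suites, skipping the network-dependent fetch tests
    $ BB_SKIP_NETTESTS=yes bitbake/bin/bitbake-selftest

    # run a single suite verbosely (-v maps to unittest verbosity=2)
    $ bitbake/bin/bitbake-selftest -v bb.tests.utils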
@@ -10,7 +10,6 @@ import bb
import select
import errno
import signal
from multiprocessing import Lock

# Users shouldn't be running this code directly
if len(sys.argv) != 2 or not sys.argv[1].startswith("decafbad"):
@@ -25,15 +24,6 @@ if sys.argv[1] == "decafbadbad":
    except:
        import profile

# Unbuffer stdout to avoid log truncation in the event
# of an unorderly exit as well as to provide timely
# updates to log files for use with tail
try:
    if sys.stdout.name == '<stdout>':
        sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
except:
    pass

logger = logging.getLogger("BitBake")

try:
@@ -45,9 +35,6 @@ except ImportError:

worker_pipe = sys.stdout.fileno()
bb.utils.nonblockingfd(worker_pipe)
# Need to guard against multiprocessing being used in child processes
# and multiple processes trying to write to the parent at the same time
worker_pipe_lock = None

handler = bb.event.LogHandler()
logger.addHandler(handler)
@@ -84,21 +71,14 @@ def worker_flush():
        written = os.write(worker_pipe, worker_queue)
        worker_queue = worker_queue[written:]
    except (IOError, OSError) as e:
        if e.errno != errno.EAGAIN and e.errno != errno.EPIPE:
        if e.errno != errno.EAGAIN:
            raise

def worker_child_fire(event, d):
    global worker_pipe
    global worker_pipe_lock

    data = "<event>" + pickle.dumps(event) + "</event>"
    try:
        worker_pipe_lock.acquire()
        worker_pipe.write(data)
        worker_pipe_lock.release()
    except IOError:
        sigterm_handler(None, None)
        raise
    worker_pipe.write(data)

bb.event.worker_fire = worker_fire

@@ -164,20 +144,17 @@ def fork_off_task(cfg, data, workerdata, fn, task, taskname, appends, taskdepdat
    if pid == 0:
        def child():
            global worker_pipe
            global worker_pipe_lock
            pipein.close()

            signal.signal(signal.SIGTERM, sigterm_handler)
            # Let SIGHUP exit as SIGTERM
            signal.signal(signal.SIGHUP, sigterm_handler)
            bb.utils.signal_on_parent_exit("SIGTERM")

            # Save out the PID so that the event can include it the
            # events
            bb.event.worker_pid = os.getpid()
            bb.event.worker_fire = worker_child_fire
            worker_pipe = pipeout
            worker_pipe_lock = Lock()

            # Make the child the process group leader and ensure no
            # child process will be controlled by the current terminal
@@ -309,16 +286,13 @@ class BitbakeWorker(object):
    def serve(self):
        while True:
            (ready, _, _) = select.select([self.input] + [i.input for i in self.build_pipes.values()], [] , [], 1)
            if self.input in ready:
            if self.input in ready or len(self.queue):
                start = len(self.queue)
                try:
                    r = self.input.read()
                    if len(r) == 0:
                        # EOF on pipe, server must have terminated
                        self.sigterm_exception(signal.SIGTERM, None)
                    self.queue = self.queue + r
                    self.queue = self.queue + self.input.read()
                except (OSError, IOError):
                    pass
            if len(self.queue):
                end = len(self.queue)
                self.handle_item("cookerconfig", self.handle_cookercfg)
                self.handle_item("workerdata", self.handle_workerdata)
                self.handle_item("runtask", self.handle_runtask)
@@ -1,4 +1,4 @@
#!/bin/sh
#!/bin/bash
# (c) 2013 Intel Corp.

# This program is free software; you can redistribute it and/or modify
@@ -28,104 +28,79 @@

# Helper function to kill a background toaster development server

webserverKillAll()
function webserverKillAll()
{
    local pidfile
    for pidfile in ${BUILDDIR}/.toastermain.pid; do
        if [ -f ${pidfile} ]; then
            pid=`cat ${pidfile}`
            while kill -0 $pid 2>/dev/null; do
                kill -SIGTERM -$pid 2>/dev/null
                sleep 1
                # Kill processes if they are still running - may happen in interactive shells
                ps fux | grep "python.*manage.py runserver" | awk '{print $2}' | xargs kill
            done
            rm ${pidfile}
        fi
    done
    local pidfile
    for pidfile in ${BUILDDIR}/.toastermain.pid; do
        if [ -f ${pidfile} ]; then
            while kill -0 $(< ${pidfile}) 2>/dev/null; do
                kill -SIGTERM -$(< ${pidfile}) 2>/dev/null
                sleep 1;
                # Kill processes if they are still running - may happen in interactive shells
                ps fux | grep "python.*manage.py runserver" | awk '{print $2}' | xargs kill
            done;
            rm ${pidfile}
        fi
    done
}

webserverStartAll()
function webserverStartAll()
{
    # do not start if toastermain points to a valid process
    if ! cat "${BUILDDIR}/.toastermain.pid" 2>/dev/null | xargs -I{} kill -0 {} ; then
        retval=1
        rm "${BUILDDIR}/.toastermain.pid"
    fi
    # do not start if toastermain points to a valid process
    if ! cat "${BUILDDIR}/.toastermain.pid" 2>/dev/null | xargs -I{} kill -0 {} ; then
        retval=1
        rm "${BUILDDIR}/.toastermain.pid"
    fi

    retval=0
    # you can always add a superuser later via
    # python bitbake/lib/toaster/manage.py python manage.py createsuperuser --username=<ME>
    python $BBBASEDIR/lib/toaster/manage.py syncdb --noinput || retval=1

    python $BBBASEDIR/lib/toaster/manage.py migrate orm || retval=2

    if [ $retval -eq 1 ]; then
        echo "Failed db sync, aborting system start" 1>&2
    retval=0
    python $BBBASEDIR/lib/toaster/manage.py syncdb || retval=1
    python $BBBASEDIR/lib/toaster/manage.py migrate orm || retval=2
    if [ $retval -eq 1 ]; then
        echo "Failed db sync, stopping system start" 1>&2
    elif [ $retval -eq 2 ]; then
        echo -e "\nError on migration, trying to recover... \n"
        python $BBBASEDIR/lib/toaster/manage.py migrate orm 0001_initial --fake
        retval=0
        python $BBBASEDIR/lib/toaster/manage.py migrate orm || retval=1
    fi
    if [ "x$TOASTER_MANAGED" == "x1" ]; then
        python $BBBASEDIR/lib/toaster/manage.py migrate bldcontrol || retval=1
        python $BBBASEDIR/lib/toaster/manage.py checksettings --traceback || retval=1
    fi
    if [ $retval -eq 0 ]; then
        echo "Starting webserver..."
        python $BBBASEDIR/lib/toaster/manage.py runserver "0.0.0.0:$WEB_PORT" </dev/null >>${BUILDDIR}/toaster_web.log 2>&1 & echo $! >${BUILDDIR}/.toastermain.pid
        sleep 1
        if ! cat "${BUILDDIR}/.toastermain.pid" | xargs -I{} kill -0 {} ; then
            retval=1
            rm "${BUILDDIR}/.toastermain.pid"
        else
            echo "Webserver address: http://0.0.0.0:$WEB_PORT/"
        fi
    fi
    return $retval
    fi

    python $BBBASEDIR/lib/toaster/manage.py migrate orm || retval=1

    if [ $retval -eq 1 ]; then
        printf "\nError on orm migration, rolling back...\n"
        python $BBBASEDIR/lib/toaster/manage.py migrate orm 0001_initial --fake
        return $retval
    fi

    python $BBBASEDIR/lib/toaster/manage.py migrate bldcontrol || retval=1

    if [ $retval -eq 1 ]; then
        printf "\nError on bldcontrol migration, rolling back...\n"
        python $BBBASEDIR/lib/toaster/manage.py migrate bldcontrol 0001_initial --fake
        return $retval
    fi

    if [ "$TOASTER_MANAGED" = '1' ]; then
        python $BBBASEDIR/lib/toaster/manage.py checksettings --traceback || retval=1
    fi

    if [ $retval -eq 1 ]; then
        printf "\nError while checking settings; aborting\n"
        return $retval
    fi

    echo "Starting webserver..."

    python $BBBASEDIR/lib/toaster/manage.py runserver "0.0.0.0:$WEB_PORT" </dev/null >>${BUILDDIR}/toaster_web.log 2>&1 & echo $! >${BUILDDIR}/.toastermain.pid

    sleep 1

    if ! cat "${BUILDDIR}/.toastermain.pid" | xargs -I{} kill -0 {} ; then
        retval=1
        rm "${BUILDDIR}/.toastermain.pid"
    else
        echo "Webserver address: http://0.0.0.0:$WEB_PORT/"
    fi

    return $retval
}

# Helper functions to add a special configuration file

addtoConfiguration()
function addtoConfiguration()
{
    file=$1
    shift
    echo "#Created by toaster start script" > ${BUILDDIR}/conf/$file
    for var in "$@"; do echo $var >> ${BUILDDIR}/conf/$file; done
    file=$1
    shift
    echo "#Created by toaster start script" > ${BUILDDIR}/conf/$file
    for var in "$@"; do echo $var >> ${BUILDDIR}/conf/$file; done
}

INSTOPSYSTEM=0

# define the stop command
stop_system()
function stop_system()
{
    # prevent reentry
    if [ $INSTOPSYSTEM -eq 1 ]; then return; fi
    if [ $INSTOPSYSTEM == 1 ]; then return; fi
    INSTOPSYSTEM=1
    if [ -f ${BUILDDIR}/.toasterui.pid ]; then
        kill `cat ${BUILDDIR}/.toasterui.pid` 2>/dev/null
        kill $(< ${BUILDDIR}/.toasterui.pid ) 2>/dev/null
        rm ${BUILDDIR}/.toasterui.pid
    fi
    BBSERVER=0.0.0.0:-1 bitbake -m
@@ -138,29 +113,29 @@ stop_system()
    INSTOPSYSTEM=0
}

check_pidbyfile() {
    [ -e $1 ] && kill -0 `cat $1` 2>/dev/null
function check_pidbyfile() {
    [ -e $1 ] && kill -0 $(< $1) 2>/dev/null
}

notify_chldexit() {
    if [ $NOTOASTERUI -eq 0 ]; then
function notify_chldexit() {
    if [ $NOTOASTERUI == 0 ]; then
        check_pidbyfile ${BUILDDIR}/.toasterui.pid && return
        stop_system
    fi
}

verify_prereq() {
    # Verify prerequisites
function verify_prereq() {
    # Verify prerequisites

    if ! echo "import django; print (1,) == django.VERSION[0:1] and django.VERSION[1:2][0] in (6,)" | python 2>/dev/null | grep True >/dev/null; then
        printf "This program needs Django 1.6. Please install with\n\npip install django==1.6\n"
        echo -e "This program needs Django 1.6. Please install with\n\npip install django==1.6\n"
        return 2
    fi

    if ! echo "import south; print reduce(lambda x, y: 2 if x==2 else 0 if x == 0 else y, map(lambda x: 1+cmp(x[1]-x[0],0), zip([0,8,4], map(int,south.__version__.split(\".\"))))) > 0" | python 2>/dev/null | grep True >/dev/null; then
        printf "This program needs South 0.8.4. Please install with\n\npip install south==0.8.4\n"
        echo -e "This program needs South 0.8.4. Please install with\n\npip install south==0.8.4\n"
        return 2
    fi
    return 0
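The embedded Django version check above is dense; an equivalent standalone probe (a sketch, assuming "python" resolves to the Python 2 interpreter this script targets) is:

    python -c "import django, sys; sys.exit(0 if django.VERSION[:2] == (1, 6) else 1)" \
        || echo "need Django 1.6"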
@@ -168,52 +143,14 @@ verify_prereq() {

# read command line parameters
if [ -n "$BASH_SOURCE" ] ; then
    TOASTER=${BASH_SOURCE}
elif [ -n "$ZSH_NAME" ] ; then
    TOASTER=${(%):-%x}
else
    TOASTER=$0
fi

[ `basename \"$0\"` = `basename \"${TOASTER}\"` ] && TOASTER_MANAGED=1

BBBASEDIR=`dirname $TOASTER`/..

BBBASEDIR=`dirname ${BASH_SOURCE}`/..
RUNNING=0

NOTOASTERUI=0
WEBSERVER=1
TOASTER_BRBE=""
if [ "$WEB_PORT" = "" ]; then
    WEB_PORT="8000"
fi
# this is the configuraton file we are using for toaster
# note default is assuming yocto. Override this if you are
# running in a pure OE environment and use the toasterconf.json
# in meta/conf/toasterconf.json
# note: for future there are a number of relative path assumptions
# in the local layers that currently prevent using an arbitrary
# toasterconf.json
if [ "$TOASTER_CONF" = "" ]; then
    TOASTER_CONF="$(dirname $TOASTER)/../../meta-yocto/conf/toasterconf.json"
    export TOASTER_CONF=$(python -c "import os; print os.path.realpath('$TOASTER_CONF')")
fi
if [ ! -f $TOASTER_CONF ]; then
    echo "$TOASTER_CONF configuration file not found. set TOASTER_CONF to specify a path"
    [ "$TOASTER_MANAGED" = '1' ] && exit 1 || return 1
fi
# this defines the dir toaster will use for
# 1) clones of layers (in _toaster_clones )
# 2) the build dir (in build)
# 3) the sqlite db if that is being used.
# 4) pid's we need to clean up on exit/shutdown
# note: for future. in order to make this an arbitrary directory, we need to
# make sure that the toaster.sqlite file doesn't default to `pwd` like it currently does.
export TOASTER_DIR=`pwd`

NOBROWSER=0
WEB_PORT="8000"

for param in $*; do
    case $param in
@@ -223,9 +160,6 @@ for param in $*; do
    noweb )
        WEBSERVER=0
        ;;
    nobrowser )
        NOBROWSER=1
        ;;
    brbe=* )
        TOASTER_BRBE=$'\n'"TOASTER_BRBE=\""${param#*=}"\""
        ;;
@@ -234,64 +168,71 @@
    esac
done

if [ "$TOASTER_MANAGED" = '1' ]; then

if [ -z "$ZSH_NAME" ] && [ `basename \"$0\"` = `basename \"$BASH_SOURCE\"` ]; then
    # We are called as standalone. We refuse to run in a build environment - we need the interactive mode for that.
    # Start just the web server, point the web browser to the interface, and start any Django services.

    if ! verify_prereq; then
        echo "Error: Could not verify that the needed dependencies are installed. Please use virtualenv and pip to install dependencies listed in toaster-requirements.txt" 1>&2
        exit 1
        echo -e "Error: Could not verify that the needed dependencies are installed. Please use virtualenv and pip to install dependencies listed in toaster-requirements.txt" 1>&2;
        exit 1;
    fi

    if [ -n "$BUILDDIR" ]; then
        printf "Error: It looks like you sourced oe-init-build-env. Toaster cannot start in build mode from an oe-core build environment.\n You should be starting Toaster from a new terminal window.\n" 1>&2
        exit 1
        echo -e "Error: It looks like you sourced oe-init-build-env. Toaster cannot start in build mode from an oe-core build environment.\n You should be starting Toaster from a new terminal window." 1>&2;
        exit 1;
    fi

    if [ "x`which daemon`" == "x" ]; then
        echo -e "Failed dependency; toaster needs the 'daemon' program in order to be able to start builds'. Please install the 'daemon' program from your distribution repositories or http://www.libslack.org/daemon/" 1>&2;
        exit 1;
    fi

    # Define a fake builddir where only the pid files are actually created. No real builds will take place here.
    BUILDDIR=/tmp/toaster_$$
    if [ -d "$BUILDDIR" ]; then
        echo "Previous toaster run directory $BUILDDIR found, cowardly refusing to start. Please remove the directory when that toaster instance is over" 2>&1
        exit 1
        echo -e "Previous toaster run directory $BUILDDIR found, cowardly refusing to start. Please remove the directory when that toaster instance is over" 2>&1
        exit 1;
    fi

    mkdir -p "$BUILDDIR"

    RUNNING=1
    trap_ctrlc() {
    function trap_ctrlc() {
        echo "** Stopping system"
        webserverKillAll
        RUNNING=0
    }

    do_cleanup() {
    function do_cleanup() {
        find "$BUILDDIR" -type f | xargs rm
        rmdir "$BUILDDIR"
    }
    cleanup() {
    function cleanup() {
        if grep -ir error "$BUILDDIR" >/dev/null; then
            if grep -irn "That port is already in use" "$BUILDDIR"; then
                echo "You can use the \"webport=PORTNUMBER\" parameter to start Toaster on a different port (port $WEB_PORT is already in use)"
                do_cleanup
            else
                printf "\nErrors found in the Toaster log files present in '$BUILDDIR'. Directory will not be cleaned.\n Please review the errors and notify toaster@yoctoproject.org or submit a bug https://bugzilla.yoctoproject.org/enter_bug.cgi?product=Toaster"
                echo -e "\nErrors found in the Toaster log files present in '$BUILDDIR'. Directory will not be cleaned.\n Please review the errors and notify toaster@yoctoproject.org or submit a bug https://bugzilla.yoctoproject.org/enter_bug.cgi?product=Toaster"
            fi
        else
            echo "No errors found, removing the run directory '$BUILDDIR'"
            do_cleanup
        fi
    fi;
    }
    TOASTER_MANAGED=1
    export TOASTER_MANAGED=1
    if [ $WEBSERVER -gt 0 ] && ! webserverStartAll; then
        echo "Failed to start the web server, stopping" 1>&2
        echo "Failed to start the web server, stopping" 1>&2;
        cleanup
        exit 1
        exit 1;
    fi
    if [ $WEBSERVER -gt 0 ] && [ $NOBROWSER -eq 0 ] ; then
    if [ $WEBSERVER -gt 0 ]; then
        echo "Starting browser..."
        xdg-open http://127.0.0.1:$WEB_PORT/ >/dev/null 2>&1 &
    fi
    trap trap_ctrlc 2
    trap trap_ctrlc SIGINT
    echo "Toaster is now running. You can stop it with Ctrl-C"
    while [ $RUNNING -gt 0 ]; do
        python $BBBASEDIR/lib/toaster/manage.py runbuilds 2>&1 | tee -a "$BUILDDIR/toaster.log"
@@ -304,27 +245,27 @@ fi

if ! verify_prereq; then
    echo "Error: Could not verify that the needed dependencies are installed. Please use virtualenv and pip to install dependencies listed in toaster-requirements.txt" 1>&2
    return 1
    echo -e "Error: Could not verify that the needed dependencies are installed. Please use virtualenv and pip to install dependencies listed in toaster-requirements.txt" 1>&2;
    return 1;
fi

# We make sure we're running in the current shell and in a good environment
if [ -z "$BUILDDIR" ] || ! which bitbake >/dev/null 2>&1 ; then
    echo "Error: Build environment is not setup or bitbake is not in path." 1>&2
if [ -z "$BUILDDIR" ] || [ -z `which bitbake` ]; then
    echo "Error: Build environment is not setup or bitbake is not in path." 1>&2;
    return 2
fi

# Determine the action. If specified by arguments, fine, if not, toggle it
if [ "$1" = 'start' ] || [ "$1" = 'stop' ]; then
if [ "x$1" == "xstart" ] || [ "x$1" == "xstop" ]; then
    CMD="$1"
else
    if [ -z "$BBSERVER" ]; then
        CMD="start"
    else
        CMD="stop"
    fi
fi;
fi

echo "The system will $CMD."
@@ -333,16 +274,16 @@ echo "The system will $CMD."

lock=1
if [ -e $BUILDDIR/bitbake.lock ]; then
    python -c "import fcntl; fcntl.flock(open(\"$BUILDDIR/bitbake.lock\"), fcntl.LOCK_EX|fcntl.LOCK_NB)" 2>/dev/null || lock=0
    (flock -n 200 ) 200<$BUILDDIR/bitbake.lock || lock=0
fi

if [ ${CMD} = 'start' ] && [ $lock -eq 0 ]; then
if [ ${CMD} == "start" ] && [ $lock -eq 0 ]; then
    echo "Error: bitbake lock state error. File locks show that the system is on." 1>&2
    echo "Please wait for the current build to finish, stop and then start the system again." 1>&2
    return 3
fi

if [ ${CMD} = 'start' ] && [ -e $BUILDDIR/.toastermain.pid ] && kill -0 `cat $BUILDDIR/.toastermain.pid`; then
if [ ${CMD} == "start" ] && [ -e $BUILDDIR/.toastermain.pid ] && kill -0 `cat $BUILDDIR/.toastermain.pid`; then
    echo "Warning: bitbake appears to be dead, but the Toaster web server is running. Something fishy is going on." 1>&2
    echo "Cleaning up the web server to start from a clean slate."
    webserverKillAll
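Both sides of the lock hunk above probe the same bitbake server lock, one through Python's fcntl module and one through flock(1); a minimal side-by-side sketch (lock path assumed) is:

    # fcntl probe: the command fails if another process holds the lock
    python -c "import fcntl; fcntl.flock(open('bitbake.lock'), fcntl.LOCK_EX|fcntl.LOCK_NB)" 2>/dev/null || echo locked

    # flock(1) probe: the same test without spawning Python
    ( flock -n 200 ) 200<bitbake.lock || echo locked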
@@ -362,7 +303,7 @@ case $CMD in
        unset BBSERVER
        PREREAD=""
        if [ -e ${BUILDDIR}/conf/toaster-pre.conf ]; then
            rm ${BUILDDIR}/conf/toaster-pre.conf
            rm ${BUILDDIR}/conf/toaster-pre.conf
        fi
        bitbake $PREREAD --postread conf/toaster.conf --server-only -t xmlrpc -B 0.0.0.0:0
        if [ $? -ne 0 ]; then
@@ -370,7 +311,7 @@ case $CMD in
            echo "Bitbake server start failed"
        else
            export BBSERVER=0.0.0.0:-1
            if [ $NOTOASTERUI -eq 0 ]; then # we start the TOASTERUI only if not inhibited
            if [ $NOTOASTERUI == 0 ]; then # we start the TOASTERUI only if not inhibited
                bitbake --observe-only -u toasterui >>${BUILDDIR}/toaster_ui.log 2>&1 & echo $! >${BUILDDIR}/.toasterui.pid
            fi
        fi
@@ -396,3 +337,4 @@ case $CMD in
        ;;
esac
@@ -26,7 +26,6 @@
# as a build eventlog, and the ToasterUI is used to process events in the file
# and log data in the database

from __future__ import print_function
import os
import sys, logging

@@ -40,6 +39,12 @@ from bb.ui import toasterui
import sys
import logging

logger = logging.getLogger(__name__)
console = logging.StreamHandler(sys.stdout)
format_str = "%(levelname)s: %(message)s"
logging.basicConfig(format=format_str)

import json, pickle

@@ -163,12 +168,12 @@ class MockConfigParameters():
# run toaster ui on our mock bitbake class
if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("Usage: %s event.log " % sys.argv[0])
        logger.error("Usage: %s event.log " % sys.argv[0])
        sys.exit(1)

    file_name = sys.argv[-1]
    mock_connection = FileReadEventsServerConnection(file_name)
    configParams = MockConfigParameters()

    # run the main program and set exit code to the returned value
    sys.exit(toasterui.main(mock_connection.connection, mock_connection.events, configParams))
    # run the main program
    toasterui.main(mock_connection.connection, mock_connection.events, configParams)
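Both sides keep the same command-line shape for the event replay tool; a typical run (event log filename assumed) looks like:

    $ bitbake/bin/toaster-eventreplay event.log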
@@ -1,15 +1,7 @@
<?xml version='1.0'?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns="http://www.w3.org/1999/xhtml" xmlns:fo="http://www.w3.org/1999/XSL/Format" version="1.0">

<xsl:import href="http://downloads.yoctoproject.org/mirror/docbook-mirror/docbook-xsl-1.76.1/xhtml/docbook.xsl" />

<!--

<xsl:import href="../template/1.76.1/docbook-xsl-1.76.1/xhtml/docbook.xsl" />

<xsl:import href="http://docbook.sourceforge.net/release/xsl/1.76.1/xhtml/docbook.xsl" />

-->
<xsl:import href="http://docbook.sourceforge.net/release/xsl/current/xhtml/docbook.xsl" />

<xsl:include href="../template/permalinks.xsl"/>
<xsl:include href="../template/section.title.xsl"/>
@@ -21,7 +21,7 @@
The execution process is launched using the following command
form:
<literallayout class='monospaced'>
$ bitbake <replaceable>target</replaceable>
$ bitbake <target>
</literallayout>
For information on the BitBake command and its options,
see
@@ -37,16 +37,14 @@
</para>

<para>
A common method to determine this value for your build host is to run
the following:
A common way to determine this value for your build host is to run:
<literallayout class='monospaced'>
$ grep processor /proc/cpuinfo
</literallayout>
This command returns the number of processors, which takes into
account hyper-threading.
Thus, a quad-core build host with hyper-threading most likely
shows eight processors, which is the value you would then assign to
<filename>BB_NUMBER_THREADS</filename>.
and count the number of processors displayed. Note that the number of
processors will take into account hyper-threading, so that a quad-core
build host with hyper-threading will most likely show eight processors,
which is the value you would then assign to that variable.
</para>

<para>
@@ -287,8 +285,8 @@
<link linkend='var-PN'><filename>PN</filename></link> and
<link linkend='var-PV'><filename>PV</filename></link>:
<literallayout class='monospaced'>
PN = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE', False),d)[0] or 'defaultpkgname'}"
PV = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE', False),d)[1] or '1.0'}"
PN = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE'),d)[0] or 'defaultpkgname'}"
PV = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE'),d)[1] or '1.0'}"
</literallayout>
In this example, a recipe called "something_1.2.3.bb" would set
<filename>PN</filename> to "something" and
@@ -784,13 +782,13 @@
make some dependency and hash information available to the build.
This information includes:
<itemizedlist>
<listitem><para><filename>BB_BASEHASH_task-</filename><replaceable>taskname</replaceable>:
<listitem><para><filename>BB_BASEHASH_task-<taskname></filename>:
The base hashes for each task in the recipe.
</para></listitem>
<listitem><para><filename>BB_BASEHASH_</filename><replaceable>filename</replaceable><filename>:</filename><replaceable>taskname</replaceable>:
<listitem><para><filename>BB_BASEHASH_<filename:taskname></filename>:
The base hashes for each dependent task.
</para></listitem>
<listitem><para><filename>BBHASHDEPS_</filename><replaceable>filename</replaceable><filename>:</filename><replaceable>taskname</replaceable>:
<listitem><para><filename>BBHASHDEPS_<filename:taskname></filename>:
The task dependencies for each task.
</para></listitem>
<listitem><para><filename>BB_TASKHASH</filename>:

@@ -157,8 +157,8 @@
<filename>SRC_URI</filename> variable with the appropriate
varflags as follows:
<literallayout class='monospaced'>
SRC_URI[md5sum] = "<replaceable>value</replaceable>"
SRC_URI[sha256sum] = "<replaceable>value</replaceable>"
SRC_URI[md5sum] = "value"
SRC_URI[sha256sum] = "value"
</literallayout>
You can also specify the checksums as parameters on the
<filename>SRC_URI</filename> as shown below:
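The inline-parameter form that the last sentence refers to falls outside this hunk; a representative instance (URL and checksum values are placeholders) would be:

    SRC_URI = "http://example.com/foo.tar.gz;md5sum=value;sha256sum=value"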
@@ -325,25 +325,6 @@
SRC_URI = "ftp://you@oe.handhelds.org/home/you/secret.plan"
</literallayout>
</para>
<note>
Because URL parameters are delimited by semi-colons, this can
introduce ambiguity when parsing URLs that also contain semi-colons,
for example:
<literallayout class='monospaced'>
SRC_URI = "http://abc123.org/git/?p=gcc/gcc.git;a=snapshot;h=a5dd47"
</literallayout>
Such URLs should be modified by replacing semi-colons with '&' characters:
<literallayout class='monospaced'>
SRC_URI = "http://abc123.org/git/?p=gcc/gcc.git&a=snapshot&h=a5dd47"
</literallayout>
In most cases this should work. Treating semi-colons and '&' in queries
identically is recommended by the World Wide Web Consortium (W3C).
Note that due to the nature of the URL, you may have to specify the name
of the downloaded file as well:
<literallayout class='monospaced'>
SRC_URI = "http://abc123.org/git/?p=gcc/gcc.git&a=snapshot&h=a5dd47;downloadfilename=myfile.bz2"
</literallayout>
</note>
</section>

<section id='cvs-fetcher'>
@@ -647,7 +628,7 @@
<literallayout class='monospaced'>
SRC_URI = "ccrc://cc.example.org/ccrc;vob=/example_vob;module=/example_module"
SRCREV = "EXAMPLE_CLEARCASE_TAG"
PV = "${@d.getVar("SRCREV", False).replace("/", "+")}"
PV = "${@d.getVar("SRCREV").replace("/", "+")}"
</literallayout>
The fetcher uses the <filename>rcleartool</filename> or
<filename>cleartool</filename> remote client, depending on
@@ -665,19 +646,13 @@
</para></listitem>
<listitem><para><emphasis><filename>module</filename></emphasis>:
The module, which must include the
prepended "/" character, in the selected VOB.
<note>
The <filename>module</filename> and <filename>vob</filename>
options are combined to create the <filename>load</filename> rule in
the view config spec.
As an example, consider the <filename>vob</filename> and
<filename>module</filename> values from the
<filename>SRC_URI</filename> statement at the start of this section.
Combining those values results in the following:
<literallayout class='monospaced'>
load /example_vob/example_module
</literallayout>
</note>
prepended "/" character, in the selected VOB
The <filename>module</filename> and <filename>vob</filename>
options are combined to create the following load rule in
the view config spec:
<literallayout class='monospaced'>
load <vob><module>
</literallayout>
</para></listitem>
<listitem><para><emphasis><filename>proto</filename></emphasis>:
The protocol, which can be either <filename>http</filename> or

@@ -221,7 +221,7 @@
<para>From your shell, enter the following commands to set and
export the <filename>BBPATH</filename> variable:
<literallayout class='monospaced'>
$ BBPATH="<replaceable>projectdirectory</replaceable>"
$ BBPATH="<projectdirectory>"
$ export BBPATH
</literallayout>
Use your actual project directory in the command.

@@ -327,8 +327,8 @@
The following lines select the values of a package name
and its version number, respectively:
<literallayout class='monospaced'>
PN = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE', False),d)[0] or 'defaultpkgname'}"
PV = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE', False),d)[1] or '1.0'}"
PN = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE'),d)[0] or 'defaultpkgname'}"
PV = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE'),d)[1] or '1.0'}"
</literallayout>
</para>
</section>
@@ -952,7 +952,7 @@
<listitem><para>
The class needs to define the function as follows:
<literallayout class='monospaced'>
<replaceable>classname</replaceable><filename>_</filename><replaceable>functionname</replaceable>
<classname>_<functionname>
</literallayout>
For example, if you have a class file
<filename>bar.bbclass</filename> and a function named
@@ -966,7 +966,7 @@
The class needs to contain the <filename>EXPORT_FUNCTIONS</filename>
statement as follows:
<literallayout class='monospaced'>
EXPORT_FUNCTIONS <replaceable>functionname</replaceable>
EXPORT_FUNCTIONS <functionname>
</literallayout>
For example, continuing with the same example, the
statement in the <filename>bar.bbclass</filename> would be
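A minimal sketch of the pattern described by the two hunks above, assuming a class file <filename>bar.bbclass</filename> and a function <filename>do_foo</filename> (names chosen for illustration):
<literallayout class='monospaced'>
     # bar.bbclass
     bar_do_foo() {
         bbnote "class implementation of do_foo"
     }
     EXPORT_FUNCTIONS do_foo
</literallayout>
A recipe inheriting <filename>bar.bbclass</filename> can then override <filename>do_foo</filename> while still being able to call the class version as <filename>bar_do_foo</filename>.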
@@ -1065,41 +1065,13 @@
<title>Deleting a Task</title>

<para>
As well as being able to add tasks, you can delete them.
Simply use the <filename>deltask</filename> command to
delete a task.
As well as being able to add tasks, tasks can also be deleted.
This is done simply with the <filename>deltask</filename> command.
For example, to delete the example task used in the previous
sections, you would use:
<literallayout class='monospaced'>
deltask printdate
</literallayout>
If you delete a task using the <filename>deltask</filename>
command and the task has dependencies, the dependencies are
not reconnected.
For example, suppose you have three tasks named
<filename>do_a</filename>, <filename>do_b</filename>, and
<filename>do_c</filename>.
Furthermore, <filename>do_c</filename> is dependent on
<filename>do_b</filename>, which in turn is dependent on
<filename>do_a</filename>.
Given this scenario, if you use <filename>deltask</filename>
to delete <filename>do_b</filename>, the implicit dependency
relationship between <filename>do_c</filename> and
<filename>do_a</filename> through <filename>do_b</filename>
no longer exists, and <filename>do_c</filename> dependencies
are not updated to include <filename>do_a</filename>.
Thus, <filename>do_c</filename> is free to run before
<filename>do_a</filename>.
</para>

<para>
If you want dependencies such as these to remain intact, use
the <filename>noexec</filename> varflag to disable the task
instead of using the <filename>deltask</filename> command to
delete it:
<literallayout class='monospaced'>
do_b[noexec] = "1"
</literallayout>
</para>
</section>

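If you do delete <filename>do_b</filename> and still need the original ordering, one option is to restate the dependency explicitly; a sketch using the example task names above:
<literallayout class='monospaced'>
     deltask do_b
     addtask do_c after do_a
</literallayout>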
@@ -1163,7 +1135,7 @@
<para>
The <filename>BB_ORIGENV</filename> variable returns a datastore
object that can be queried using the standard datastore operators
such as <filename>getVar(, False)</filename>.
such as <filename>getVar()</filename>.
The datastore object is useful, for example, to find the original
<filename>DISPLAY</filename> variable.
Here is an example:
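A sketch of such a query from a Python task (the task name and the <filename>BAR</filename> variable are placeholders, not from the original example):
<literallayout class='monospaced'>
     python do_showenv() {
         origenv = d.getVar("BB_ORIGENV", False)
         bb.note("original BAR was: %s" % origenv.getVar("BAR", False))
     }
     addtask showenv
</literallayout>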
@@ -1192,7 +1164,7 @@
BitBake reads and writes varflags to the datastore using the following
command forms:
<literallayout class='monospaced'>
<replaceable>variable</replaceable> = d.getVarFlags("<replaceable>variable</replaceable>")
<variable> = d.getVarFlags("<variable>")
self.d.setVarFlags("FOO", {"func": True})
</literallayout>
</para>
@@ -1213,36 +1185,11 @@
Tasks support a number of these flags which control various
functionality of the task:
<itemizedlist>
<listitem><para><emphasis>cleandirs:</emphasis>
Empty directories that should be created before the task runs.
</para></listitem>
<listitem><para><emphasis>depends:</emphasis>
Controls inter-task dependencies.
See the
<link linkend='var-DEPENDS'><filename>DEPENDS</filename></link>
variable and the
"<link linkend='inter-task-dependencies'>Inter-Task Dependencies</link>"
section for more information.
</para></listitem>
<listitem><para><emphasis>deptask:</emphasis>
Controls task build-time dependencies.
See the
<link linkend='var-DEPENDS'><filename>DEPENDS</filename></link>
variable and the
"<link linkend='build-dependencies'>Build Dependencies</link>"
section for more information.
</para></listitem>
<listitem><para><emphasis>dirs:</emphasis>
Directories that should be created before the task runs.
</para></listitem>
<listitem><para><emphasis>lockfiles:</emphasis>
Specifies one or more lockfiles to lock while the task
executes.
Only one task may hold a lockfile, and any task that
attempts to lock an already locked file will block until
the lock is released.
You can use this variable flag to accomplish mutual
exclusion.
<listitem><para><emphasis>cleandirs:</emphasis>
Empty directories that should be created before the task runs.
</para></listitem>
<listitem><para><emphasis>noexec:</emphasis>
Marks the task as being empty, with no execution required.
@@ -1254,20 +1201,15 @@
Tells BitBake to not generate a stamp file for a task,
which implies the task should always be executed.
</para></listitem>
<listitem><para><emphasis>postfuncs:</emphasis>
List of functions to call after the completion of the task.
<listitem><para><emphasis>umask:</emphasis>
The umask to run the task under.
</para></listitem>
<listitem><para><emphasis>prefuncs:</emphasis>
List of functions to call before the task executes.
</para></listitem>
<listitem><para><emphasis>rdepends:</emphasis>
Controls inter-task runtime dependencies.
<listitem><para><emphasis>deptask:</emphasis>
Controls task build-time dependencies.
See the
<link linkend='var-RDEPENDS'><filename>RDEPENDS</filename></link>
variable, the
<link linkend='var-RRECOMMENDS'><filename>RRECOMMENDS</filename></link>
variable, and the
"<link linkend='inter-task-dependencies'>Inter-Task Dependencies</link>"
<link linkend='var-DEPENDS'><filename>DEPENDS</filename></link>
variable and the
"<link linkend='build-dependencies'>Build Dependencies</link>"
section for more information.
</para></listitem>
<listitem><para><emphasis>rdeptask:</emphasis>
@@ -1280,11 +1222,6 @@
"<link linkend='runtime-dependencies'>Runtime Dependencies</link>"
section for more information.
</para></listitem>
<listitem><para><emphasis>recideptask:</emphasis>
When set in conjunction with
<filename>recrdeptask</filename>, specifies a task that
should be inspected for additional dependencies.
</para></listitem>
<listitem><para><emphasis>recrdeptask:</emphasis>
Controls task recursive runtime dependencies.
See the
@@ -1295,14 +1232,35 @@
"<link linkend='recursive-dependencies'>Recursive Dependencies</link>"
section for more information.
</para></listitem>
<listitem><para><emphasis>depends:</emphasis>
Controls inter-task dependencies.
See the
<link linkend='var-DEPENDS'><filename>DEPENDS</filename></link>
variable and the
"<link linkend='inter-task-dependencies'>Inter-Task Dependencies</link>"
section for more information.
</para></listitem>
<listitem><para><emphasis>rdepends:</emphasis>
Controls inter-task runtime dependencies.
See the
<link linkend='var-RDEPENDS'><filename>RDEPENDS</filename></link>
variable, the
<link linkend='var-RRECOMMENDS'><filename>RRECOMMENDS</filename></link>
variable, and the
"<link linkend='inter-task-dependencies'>Inter-Task Dependencies</link>"
section for more information.
</para></listitem>
<listitem><para><emphasis>postfuncs:</emphasis>
List of functions to call after the completion of the task.
</para></listitem>
<listitem><para><emphasis>prefuncs:</emphasis>
List of functions to call before the task executes.
</para></listitem>
<listitem><para><emphasis>stamp-extra-info:</emphasis>
Extra stamp information to append to the task's stamp.
As an example, OpenEmbedded uses this flag to allow
machine-specific tasks.
</para></listitem>
<listitem><para><emphasis>umask:</emphasis>
The umask to run the task under.
</para></listitem>
</itemizedlist>
</para>

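A short sketch showing how several of the flags above are set in practice (the <filename>do_deploy</filename> task and the paths are illustrative assumptions):
<literallayout class='monospaced'>
     do_deploy[dirs] = "${B} ${DEPLOYDIR}"
     do_deploy[cleandirs] = "${DEPLOYDIR}"
     do_deploy[lockfiles] = "${DEPLOYDIR}/deploy.lock"
     do_deploy[umask] = "022"
</literallayout>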
@@ -1322,16 +1280,16 @@
does not allow BitBake to automatically determine
that the variable is referred to.
</para></listitem>
<listitem><para><emphasis>vardepsexclude:</emphasis>
Specifies a space-separated list of variables
that should be excluded from a variable's dependencies
for the purposes of calculating its signature.
</para></listitem>
<listitem><para><emphasis>vardepvalue:</emphasis>
If set, instructs BitBake to ignore the actual
value of the variable and instead use the specified
value when calculating the variable's signature.
</para></listitem>
<listitem><para><emphasis>vardepsexclude:</emphasis>
Specifies a space-separated list of variables
that should be excluded from a variable's dependencies
for the purposes of calculating its signature.
</para></listitem>
<listitem><para><emphasis>vardepvalueexclude:</emphasis>
Specifies a pipe-separated list of strings to exclude
from the variable's value when calculating the

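A sketch of the signature-related flags described here (variable and value names are hypothetical):
<literallayout class='monospaced'>
     # Force an indirectly referenced variable into do_compile's signature
     do_compile[vardeps] += "EXTRA_CONFIG"
     # Exclude a volatile variable from another variable's dependencies
     CACHE_PATH[vardepsexclude] = "DATETIME"
     # Ignore the real value when hashing; use a fixed stand-in instead
     BUILD_TIMESTAMP[vardepvalue] = ""
</literallayout>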
@@ -102,56 +102,6 @@
</glossdef>
</glossentry>

<glossentry id='var-BB_ALLOWED_NETWORKS'><glossterm>BB_ALLOWED_NETWORKS</glossterm>
<glossdef>
<para>
Specifies a space-delimited list of hosts that the fetcher
is allowed to use to obtain the required source code.
Following are considerations surrounding this variable:
<itemizedlist>
<listitem><para>
This host list is only used if
<link linkend='var-BB_NO_NETWORK'><filename>BB_NO_NETWORK</filename></link>
is either not set or set to "0".
</para></listitem>
<listitem><para>
Limited support for wildcard matching against the
beginning of host names exists.
For example, the following setting matches
<filename>git.gnu.org</filename>,
<filename>ftp.gnu.org</filename>, and
<filename>foo.git.gnu.org</filename>.
<literallayout class='monospaced'>
BB_ALLOWED_NETWORKS = "*.gnu.org"
</literallayout>
</para></listitem>
<listitem><para>
Mirrors not in the host list are skipped and
logged in debug.
</para></listitem>
<listitem><para>
Attempts to access networks not in the host list
cause a failure.
</para></listitem>
</itemizedlist>
Using <filename>BB_ALLOWED_NETWORKS</filename> in
conjunction with
<link linkend='var-PREMIRRORS'><filename>PREMIRRORS</filename></link>
is very useful.
Adding the host you want to use to
<filename>PREMIRRORS</filename> results in the source code
being fetched from an allowed location and avoids raising
an error when a host that is not allowed is in a
<link linkend='var-SRC_URI'><filename>SRC_URI</filename></link>
statement.
This is because the fetcher does not attempt to use the
host listed in <filename>SRC_URI</filename> after a
successful fetch from the
<filename>PREMIRRORS</filename> occurs.
</para>
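A configuration sketch combining the two variables as suggested (the mirror line is an illustrative assumption):
<literallayout class='monospaced'>
     BB_ALLOWED_NETWORKS = "*.gnu.org"
     PREMIRRORS_prepend = "ftp://.*/.* http://ftp.gnu.org/gnu/ \n"
</literallayout>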
</glossdef>
</glossentry>

<glossentry id='var-BB_CONSOLELOG'><glossterm>BB_CONSOLELOG</glossterm>
<glossdef>
<para>
@@ -856,56 +806,6 @@
</glossdef>
</glossentry>

<glossentry id='var-BB_TASK_IONICE_LEVEL'><glossterm>BB_TASK_IONICE_LEVEL</glossterm>
<glossdef>
<para>
Allows adjustment of a task's Input/Output priority.
During Autobuilder testing, random failures can occur
for tasks due to I/O starvation.
These failures occur during various QEMU runtime timeouts.
You can use the <filename>BB_TASK_IONICE_LEVEL</filename>
variable to adjust the I/O priority of these tasks.
<note>
This variable works similarly to the
<link linkend='var-BB_TASK_NICE_LEVEL'><filename>BB_TASK_NICE_LEVEL</filename></link>
variable except with a task's I/O priorities.
</note>
</para>

<para>
Set the variable as follows:
<literallayout class='monospaced'>
BB_TASK_IONICE_LEVEL = "<replaceable>class</replaceable>.<replaceable>prio</replaceable>"
</literallayout>
For <replaceable>class</replaceable>, the default value is
"2", which is best effort.
You can use "1" for realtime and "3" for idle.
If you want to use realtime, you must have superuser
privileges.
</para>

<para>
For <replaceable>prio</replaceable>, you can use any
value from "0", which is the highest priority, to "7",
which is the lowest.
The default value is "4".
You do not need any special privileges to use this range
of priority values.
<note>
In order for your I/O priority settings to take effect,
you need the Completely Fair Queuing (CFQ) Scheduler
selected for the backing block device.
To select the scheduler, use the following command form
where <replaceable>device</replaceable> is the device
(e.g. sda, sdb, and so forth):
<literallayout class='monospaced'>
$ sudo sh -c "echo cfq > /sys/block/<replaceable>device</replaceable>/queue/scheduler"
</literallayout>
</note>
</para>
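For instance, a <filename>local.conf</filename> sketch selecting the default best-effort class at the highest non-privileged priority:
<literallayout class='monospaced'>
     BB_TASK_IONICE_LEVEL = "2.0"
</literallayout>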
</glossdef>
</glossentry>

<glossentry id='var-BB_TASK_NICE_LEVEL'><glossterm>BB_TASK_NICE_LEVEL</glossterm>
<glossdef>
<para>
@@ -972,7 +872,7 @@
that run on the target <filename>MACHINE</filename>;
"nativesdk", which targets the SDK machine instead of
<filename>MACHINE</filename>; and "multilibs" in the form
"<filename>multilib:</filename><replaceable>multilib_name</replaceable>".
"<filename>multilib:<multilib_name></filename>".
</para>

<para>
@@ -984,7 +884,7 @@
metadata:
<literallayout class='monospaced'>
BBCLASSEXTEND =+ "native nativesdk"
BBCLASSEXTEND =+ "multilib:<replaceable>multilib_name</replaceable>"
BBCLASSEXTEND =+ "multilib:<multilib_name>"
</literallayout>
</para>
</glossdef>
@@ -1116,20 +1016,6 @@
</glossdef>
</glossentry>

<glossentry id='var-BBLAYERS_FETCH_DIR'><glossterm>BBLAYERS_FETCH_DIR</glossterm>
<glossdef>
<para>
Sets the base location where layers are stored.
By default, this location is set to
<filename>${COREBASE}</filename>.
This setting is used in conjunction with
<filename>bitbake-layers layerindex-fetch</filename> and
tells <filename>bitbake-layers</filename> where to place
the fetched layers.
</para>
</glossdef>
</glossentry>

<glossentry id='var-BBMASK'><glossterm>BBMASK</glossterm>
<glossdef>
<para>
@@ -1205,9 +1091,9 @@
Set the variable as you would any environment variable
and then run BitBake:
<literallayout class='monospaced'>
$ BBPATH="<replaceable>build_directory</replaceable>"
$ BBPATH="<build_directory>"
$ export BBPATH
$ bitbake <replaceable>target</replaceable>
$ bitbake <target>
</literallayout>
</para>
</glossdef>
@@ -1223,15 +1109,6 @@
</glossdef>
</glossentry>

<glossentry id='var-BBTARGETS'><glossterm>BBTARGETS</glossterm>
<glossdef>
<para>
Allows you to use a configuration file to add to the list
of command-line target recipes you want to build.
</para>
</glossdef>
</glossentry>

<glossentry id='var-BBVERSIONS'><glossterm>BBVERSIONS</glossterm>
<glossdef>
<para>
@@ -2011,7 +1888,7 @@
Here is the general syntax to specify versions with
the <filename>RDEPENDS</filename> variable:
<literallayout class='monospaced'>
RDEPENDS_${PN} = "<replaceable>package</replaceable> (<replaceable>operator</replaceable> <replaceable>version</replaceable>)"
RDEPENDS_${PN} = "<package> (<operator> <version>)"
</literallayout>
For <filename>operator</filename>, you can specify the
following:
@@ -2077,7 +1954,7 @@
Here is the general syntax to specify versions with
the <filename>RRECOMMENDS</filename> variable:
<literallayout class='monospaced'>
RRECOMMENDS_${PN} = "<replaceable>package</replaceable> (<replaceable>operator</replaceable> <replaceable>version</replaceable>)"
RRECOMMENDS_${PN} = "<package> (<operator> <version>)"
</literallayout>
For <filename>operator</filename>, you can specify the
following:

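For instance, a versioned runtime dependency and recommendation on a hypothetical package <filename>foo</filename> would look like this:
<literallayout class='monospaced'>
     RDEPENDS_${PN} = "foo (>= 1.2)"
     RRECOMMENDS_${PN} = "foo (>= 1.2)"
</literallayout>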
@@ -21,7 +21,7 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

__version__ = "1.28.0"
__version__ = "1.26.0"

import sys
if sys.version_info < (2, 7, 3):
@@ -94,11 +94,11 @@ def note(*args):
def warn(*args):
logger.warn(''.join(args))

def error(*args, **kwargs):
logger.error(''.join(args), extra=kwargs)
def error(*args):
logger.error(''.join(args))

def fatal(*args, **kwargs):
logger.critical(''.join(args), extra=kwargs)
def fatal(*args):
logger.critical(''.join(args))
raise BBHandledException()

def deprecated(func, name=None, advice=""):

@@ -31,7 +31,6 @@ import logging
import shlex
import glob
import time
import stat
import bb
import bb.msg
import bb.process
@@ -43,20 +42,6 @@ logger = logging.getLogger('BitBake.Build')

NULL = open(os.devnull, 'r+')

__mtime_cache = {}

def cached_mtime_noerror(f):
if f not in __mtime_cache:
try:
__mtime_cache[f] = os.stat(f)[stat.ST_MTIME]
except OSError:
return 0
return __mtime_cache[f]

def reset_cache():
global __mtime_cache
__mtime_cache = {}

# When we execute a Python function, we'd like certain things
# in all namespaces, hence we add them to __builtins__.
# If we do not do this and use the exec globals, they will
@@ -159,7 +144,7 @@ class LogTee(object):
def exec_func(func, d, dirs = None):
"""Execute a BB 'function'"""

body = d.getVar(func, False)
body = d.getVar(func)
if not body:
if body is None:
logger.warn("Function %s doesn't exist", func)
@@ -313,7 +298,7 @@ def exec_func_shell(func, d, runfile, cwd=None):
# cleanup
ret=$?
trap '' 0
exit $ret
exit $?
''')

os.chmod(runfile, 0775)
@@ -329,52 +314,14 @@ exit $ret
else:
logfile = sys.stdout

def readfifo(data):
lines = data.split('\0')
for line in lines:
splitval = line.split(' ', 1)
cmd = splitval[0]
if len(splitval) > 1:
value = splitval[1]
else:
value = ''
if cmd == 'bbplain':
bb.plain(value)
elif cmd == 'bbnote':
bb.note(value)
elif cmd == 'bbwarn':
bb.warn(value)
elif cmd == 'bberror':
bb.error(value)
elif cmd == 'bbfatal':
# The caller will call exit themselves, so bb.error() is
# what we want here rather than bb.fatal()
bb.error(value)
elif cmd == 'bbfatal_log':
bb.error(value, forcelog=True)
elif cmd == 'bbdebug':
splitval = value.split(' ', 1)
level = int(splitval[0])
value = splitval[1]
bb.debug(level, value)
bb.debug(2, "Executing shell function %s" % func)

tempdir = d.getVar('T', True)
fifopath = os.path.join(tempdir, 'fifo.%s' % os.getpid())
if os.path.exists(fifopath):
os.unlink(fifopath)
os.mkfifo(fifopath)
with open(fifopath, 'r+') as fifo:
try:
bb.debug(2, "Executing shell function %s" % func)

try:
with open(os.devnull, 'r+') as stdin:
bb.process.run(cmd, shell=False, stdin=stdin, log=logfile, extrafiles=[(fifo,readfifo)])
except bb.process.CmdError:
logfn = d.getVar('BB_LOGFILE', True)
raise FuncFailed(func, logfn)
finally:
os.unlink(fifopath)
try:
with open(os.devnull, 'r+') as stdin:
bb.process.run(cmd, shell=False, stdin=stdin, log=logfile)
except bb.process.CmdError:
logfn = d.getVar('BB_LOGFILE', True)
raise FuncFailed(func, logfn)

bb.debug(2, "Shell function %s finished" % func)

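For context, the readfifo handler above is the consumer side of the logging helpers available to shell tasks; a shell function produces those messages simply by calling them, for example (a sketch with a hypothetical task name):

    do_report() {
        bbnote "forwarded to the cooker as 'bbnote <message>'"
        bbwarn "forwarded as 'bbwarn <message>'"
        bbdebug 2 "forwarded as 'bbdebug 2 <message>'"
    }
    addtask report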
@@ -413,13 +360,6 @@ def _exec_task(fn, task, d, quieterr):
nice = int(nice) - curnice
newnice = os.nice(nice)
logger.debug(1, "Renice to %s " % newnice)
ionice = localdata.getVar("BB_TASK_IONICE_LEVEL", True)
if ionice:
try:
cls, prio = ionice.split(".", 1)
bb.utils.ioprio_set(os.getpid(), int(cls), int(prio))
except:
bb.warn("Invalid ionice level %s" % ionice)

bb.utils.mkdirhier(tempdir)

@@ -455,10 +395,7 @@ def _exec_task(fn, task, d, quieterr):
self.triggered = False
logging.Handler.__init__(self, logging.ERROR)
def emit(self, record):
if getattr(record, 'forcelog', False):
self.triggered = False
else:
self.triggered = True
self.triggered = True

# Handle logfiles
si = open('/dev/null', 'r')
@@ -598,7 +535,7 @@ def stamp_internal(taskname, d, file_name, baseonly=False):
stamp = bb.parse.siggen.stampfile(stamp, file_name, taskname, extrainfo)

stampdir = os.path.dirname(stamp)
if cached_mtime_noerror(stampdir) == 0:
if bb.parse.cached_mtime_noerror(stampdir) == 0:
bb.utils.mkdirhier(stampdir)

return stamp
@@ -693,8 +630,8 @@ def stampfile(taskname, d, file_name = None):
"""
return stamp_internal(taskname, d, file_name)

def add_tasks(tasklist, d):
task_deps = d.getVar('_task_deps', False)
def add_tasks(tasklist, deltasklist, d):
task_deps = d.getVar('_task_deps')
if not task_deps:
task_deps = {}
if not 'tasks' in task_deps:
@@ -705,6 +642,9 @@ def add_tasks(tasklist, d):
for task in tasklist:
task = d.expand(task)

if task in deltasklist:
continue

d.setVarFlag(task, 'task', 1)

if not task in task_deps['tasks']:
@@ -741,8 +681,8 @@ def addtask(task, before, after, d):
task = "do_" + task

d.setVarFlag(task, "task", 1)
bbtasks = d.getVar('__BBTASKS', False) or []
if task not in bbtasks:
bbtasks = d.getVar('__BBTASKS') or []
if not task in bbtasks:
bbtasks.append(task)
d.setVar('__BBTASKS', bbtasks)

@@ -764,14 +704,8 @@ def deltask(task, d):
if task[:3] != "do_":
task = "do_" + task

bbtasks = d.getVar('__BBTASKS', False) or []
if task in bbtasks:
bbtasks.remove(task)
d.setVar('__BBTASKS', bbtasks)
bbtasks = d.getVar('__BBDELTASKS') or []
if not task in bbtasks:
bbtasks.append(task)
d.setVar('__BBDELTASKS', bbtasks)

d.delVarFlag(task, 'deps')
for bbtask in d.getVar('__BBTASKS', False) or []:
deps = d.getVarFlag(bbtask, 'deps') or []
if task in deps:
deps.remove(task)
d.setVarFlag(bbtask, 'deps', deps)

@@ -528,20 +528,7 @@ class Cache(object):

if hasattr(info_array[0], 'file_checksums'):
for _, fl in info_array[0].file_checksums.items():
fl = fl.strip()
while fl:
# A .split() would be simpler but means spaces or colons in filenames would break
a = fl.find(":True")
b = fl.find(":False")
if ((a < 0) and b) or ((b > 0) and (b < a)):
f = fl[:b+6]
fl = fl[b+7:]
elif ((b < 0) and a) or ((a > 0) and (a < b)):
f = fl[:a+5]
fl = fl[a+6:]
else:
break
fl = fl.strip()
for f in fl.split():
if "*" in f:
continue
f, exist = f.split(":")
@@ -672,25 +659,25 @@ class Cache(object):
"""
chdir_back = False

from bb import parse
from bb import data, parse

# expand tmpdir to include this topdir
config.setVar('TMPDIR', config.getVar('TMPDIR', True) or "")
data.setVar('TMPDIR', data.getVar('TMPDIR', config, 1) or "", config)
bbfile_loc = os.path.abspath(os.path.dirname(bbfile))
oldpath = os.path.abspath(os.getcwd())
parse.cached_mtime_noerror(bbfile_loc)
bb_data = config.createCopy()
bb_data = data.init_db(config)
# The ConfHandler first looks if there is a TOPDIR and if not
# then it would call getcwd().
# Previously, we chdir()ed to bbfile_loc, called the handler
# and finally chdir()ed back, a couple of thousand times. We now
# just fill in TOPDIR to point to bbfile_loc if there is no TOPDIR yet.
if not bb_data.getVar('TOPDIR', False):
if not data.getVar('TOPDIR', bb_data):
chdir_back = True
bb_data.setVar('TOPDIR', bbfile_loc)
data.setVar('TOPDIR', bbfile_loc, bb_data)
try:
if appends:
bb_data.setVar('__BBAPPEND', " ".join(appends))
data.setVar('__BBAPPEND', " ".join(appends), bb_data)
bb_data = parse.handle(bbfile, bb_data)
if chdir_back:
os.chdir(oldpath)

@@ -92,9 +92,6 @@ class pythonCacheLine(object):
for c in sorted(self.contains.keys()):
l = l + (c, hash(self.contains[c]))
return hash(l)
def __repr__(self):
return " ".join([str(self.refs), str(self.execs), str(self.contains)])

class shellCacheLine(object):
def __init__(self, execs):
@@ -108,8 +105,6 @@ class shellCacheLine(object):
self.__init__(execs)
def __hash__(self):
return hash(self.execs)
def __repr__(self):
return str(self.execs)

class CodeParserCache(MultiProcessCache):
cache_file_name = "bb_codeparser.dat"
@@ -245,9 +240,6 @@ class PythonParser():
self.unhandled_message = "while parsing %s, %s" % (name, self.unhandled_message)

def parse_python(self, node):
if not node or not node.strip():
return

h = hash(str(node))

if h in codeparsercache.pythoncache:

@@ -68,12 +68,10 @@ class Command:
if not hasattr(command_method, 'readonly') or False == getattr(command_method, 'readonly'):
return None, "Not able to execute not readonly commands in readonly mode"
try:
if getattr(command_method, 'needconfig', False):
self.cooker.updateCacheSync()
result = command_method(self, commandline)
except CommandError as exc:
return None, exc.args[0]
except (Exception, SystemExit):
except Exception:
import traceback
return None, traceback.format_exc()
else:
@@ -181,16 +179,6 @@ class CommandsSync:
value = str(params[1])
command.cooker.data.setVar(varname, value)

def getSetVariable(self, command, params):
"""
Read the value of a variable from data and set it into the datastore
which effectively expands and locks the value.
"""
varname = params[0]
result = self.getVariable(command, params)
command.cooker.data.setVar(varname, result)
return result

def setConfig(self, command, params):
"""
Set the value of variable in configuration
@@ -216,7 +204,7 @@ class CommandsSync:
postfiles = params[1].split()
command.cooker.configuration.prefile = prefiles
command.cooker.configuration.postfile = postfiles
setPrePostConfFiles.needconfig = False

def getCpuCount(self, command, params):
"""
@@ -224,12 +211,10 @@ class CommandsSync:
"""
return bb.utils.cpu_count()
getCpuCount.readonly = True
getCpuCount.needconfig = False

def matchFile(self, command, params):
fMatch = params[0]
return command.cooker.matchFile(fMatch)
matchFile.needconfig = False

def generateNewImage(self, command, params):
image = params[0]
@@ -243,7 +228,6 @@ class CommandsSync:
def ensureDir(self, command, params):
directory = params[0]
bb.utils.mkdirhier(directory)
ensureDir.needconfig = False

def setVarFile(self, command, params):
"""
@@ -254,7 +238,6 @@ class CommandsSync:
default_file = params[2]
op = params[3]
command.cooker.modifyConfigurationVar(var, val, default_file, op)
setVarFile.needconfig = False

def removeVarFile(self, command, params):
"""
@@ -262,7 +245,6 @@ class CommandsSync:
"""
var = params[0]
command.cooker.removeConfigurationVar(var)
removeVarFile.needconfig = False

def createConfigFile(self, command, params):
"""
@@ -270,7 +252,6 @@ class CommandsSync:
"""
name = params[0]
command.cooker.createConfigFile(name)
createConfigFile.needconfig = False

def setEventMask(self, command, params):
handlerNum = params[0]
@@ -278,7 +259,6 @@ class CommandsSync:
debug_domains = params[2]
mask = params[3]
return bb.event.set_UIHmask(handlerNum, llevel, debug_domains, mask)
setEventMask.needconfig = False

def setFeatures(self, command, params):
"""
@@ -286,7 +266,7 @@ class CommandsSync:
"""
features = params[0]
command.cooker.setFeatures(features)
setFeatures.needconfig = False

# although we change the internal state of the cooker, this is transparent since
# we always take and leave the cooker in state.initial
setFeatures.readonly = True
@@ -295,7 +275,6 @@ class CommandsSync:
options = params[0]
environment = params[1]
command.cooker.updateConfigOpts(options, environment)
updateConfig.needconfig = False

class CommandsAsync:
"""

@@ -35,7 +35,7 @@ from contextlib import closing
from functools import wraps
from collections import defaultdict
import bb, bb.exceptions, bb.command
from bb import utils, data, parse, event, cache, providers, taskdata, runqueue, build
from bb import utils, data, parse, event, cache, providers, taskdata, runqueue
import Queue
import signal
import subprocess
@@ -114,26 +114,23 @@ class BBCooker:
Manages one bitbake build run
"""

def __init__(self, configuration, featureSet=None):
def __init__(self, configuration, featureSet = []):
self.recipecache = None
self.skiplist = {}
self.featureset = CookerFeatures()
if featureSet:
for f in featureSet:
self.featureset.setFeature(f)
for f in featureSet:
self.featureset.setFeature(f)

self.configuration = configuration

self.configwatcher = pyinotify.WatchManager()
self.configwatcher.bbseen = []
self.configwatcher.bbwatchedfiles = []
self.confignotifier = pyinotify.Notifier(self.configwatcher, self.config_notifications)
self.watchmask = pyinotify.IN_CLOSE_WRITE | pyinotify.IN_CREATE | pyinotify.IN_DELETE | \
pyinotify.IN_DELETE_SELF | pyinotify.IN_MODIFY | pyinotify.IN_MOVE_SELF | \
pyinotify.IN_MOVED_FROM | pyinotify.IN_MOVED_TO
self.watcher = pyinotify.WatchManager()
self.watcher.bbseen = []
self.watcher.bbwatchedfiles = []
self.notifier = pyinotify.Notifier(self.watcher, self.notifications)

@@ -156,7 +153,9 @@ class BBCooker:

# Take a lock so only one copy of bitbake can run against a given build
# directory at a time
if not self.lockBitbake():
lockfile = self.data.expand("${TOPDIR}/bitbake.lock")
self.lock = bb.utils.lockfile(lockfile, False, False)
if not self.lock:
bb.fatal("Only one copy of bitbake should be run against a build directory")
try:
self.lock.seek(0)
@@ -187,8 +186,6 @@ class BBCooker:
signal.signal(signal.SIGHUP, self.sigterm_exception)

def config_notifications(self, event):
if not event.pathname in self.configwatcher.bbwatchedfiles:
return
if not event.path in self.inotify_modified_files:
self.inotify_modified_files.append(event.path)
self.baseconfig_valid = False
@@ -202,27 +199,20 @@ class BBCooker:
if not watcher:
watcher = self.watcher
for i in deps:
watcher.bbwatchedfiles.append(i[0])
f = os.path.dirname(i[0])
if f in watcher.bbseen:
continue
watcher.bbseen.append(f)
watchtarget = None
while True:
# We try and add watches for files that don't exist but if they did, would influence
# the parser. The parent directory of these files may not exist, in which case we need
# to watch any parent that does exist for changes.
try:
watcher.add_watch(f, self.watchmask, quiet=False)
if watchtarget:
watcher.bbwatchedfiles.append(watchtarget)
break
except pyinotify.WatchManagerError as e:
if 'ENOENT' in str(e):
watchtarget = f
f = os.path.dirname(f)
if f in watcher.bbseen:
break
watcher.bbseen.append(f)
continue
if 'ENOSPC' in str(e):
@@ -255,11 +245,6 @@ class BBCooker:
self.state = state.initial
self.caches_array = []

# Need to preserve BB_CONSOLELOG over resets
consolelog = None
if hasattr(self, "data"):
consolelog = self.data.getVar("BB_CONSOLELOG", True)

if CookerFeatures.BASEDATASTORE_TRACKING in self.featureset:
self.enableDataTracking()

@@ -286,8 +271,6 @@ class BBCooker:
self.data = self.databuilder.data
self.data_hash = self.databuilder.data_hash

if consolelog:
self.data.setVar("BB_CONSOLELOG", consolelog)

# we log all events to a file if so directed
if self.configuration.writeeventlog:
@@ -368,10 +351,6 @@ class BBCooker:
if CookerFeatures.BASEDATASTORE_TRACKING in self.featureset:
self.disableDataTracking()

self.data.renameVar("__depends", "__base_depends")
self.add_filewatch(self.data.getVar("__base_depends", False), self.configwatcher)

def enableDataTracking(self):
self.configuration.tracking = True
if hasattr(self, "data"):
@@ -409,7 +388,7 @@ class BBCooker:

replaced = False
#do not save if nothing changed
if str(val) == self.data.getVar(var, False):
if str(val) == self.data.getVar(var):
return

conf_files = self.data.varhistory.get_variable_files(var)
@@ -421,7 +400,7 @@ class BBCooker:
listval += "%s " % value
val = listval

topdir = self.data.getVar("TOPDIR", False)
topdir = self.data.getVar("TOPDIR")

#comment or replace operations made on var
for conf_file in conf_files:
@@ -476,7 +455,7 @@ class BBCooker:

def removeConfigurationVar(self, var):
conf_files = self.data.varhistory.get_variable_files(var)
topdir = self.data.getVar("TOPDIR", False)
topdir = self.data.getVar("TOPDIR")

for conf_file in conf_files:
if topdir in conf_file:
@@ -516,7 +495,7 @@ class BBCooker:

def parseConfiguration(self):
# Set log file verbosity
verboselogs = bb.utils.to_boolean(self.data.getVar("BB_VERBOSE_LOGS", False))
verboselogs = bb.utils.to_boolean(self.data.getVar("BB_VERBOSE_LOGS", "0"))
if verboselogs:
bb.msg.loggerVerboseLogs = True

@@ -534,16 +513,9 @@ class BBCooker:
self.handleCollections( self.data.getVar("BBFILE_COLLECTIONS", True) )

def updateConfigOpts(self, options, environment):
clean = True
for o in options:
if o in ['prefile', 'postfile']:
clean = False
server_val = getattr(self.configuration, "%s_server" % o)
if not options[o] and server_val:
# restore value provided on server start
setattr(self.configuration, o, server_val)
continue
setattr(self.configuration, o, options[o])
clean = True
for k in bb.utils.approved_variables():
if k in environment and k not in self.configuration.env:
logger.debug(1, "Updating environment variable %s to %s" % (k, environment[k]))
@@ -593,14 +565,12 @@ class BBCooker:

logger.plain("%-35s %25s %25s", p, lateststr, prefstr)

def showEnvironment(self, buildfile=None, pkgs_to_build=None):
def showEnvironment(self, buildfile = None, pkgs_to_build = []):
"""
Show the outer or per-recipe environment
"""
fn = None
envdata = None
if not pkgs_to_build:
pkgs_to_build = []

if buildfile:
# Parse the configuration here. We need to do it explicitly here since
@@ -615,7 +585,7 @@ class BBCooker:
if pkgs_to_build[0] in set(ignore.split()):
bb.fatal("%s is in ASSUME_PROVIDED" % pkgs_to_build[0])

taskdata, runlist, pkgs_to_build = self.buildTaskData(pkgs_to_build, None, self.configuration.abort, allowincomplete=True)
taskdata, runlist, pkgs_to_build = self.buildTaskData(pkgs_to_build, None, self.configuration.abort)

targetid = taskdata.getbuild_id(pkgs_to_build[0])
fnid = taskdata.build_targets[targetid][0]
@@ -645,10 +615,10 @@ class BBCooker:
data.expandKeys(envdata)
for e in envdata.keys():
if data.getVarFlag( e, 'python', envdata ):
logger.plain("\npython %s () {\n%s}\n", e, envdata.getVar(e, True))
logger.plain("\npython %s () {\n%s}\n", e, data.getVar(e, envdata, 1))

def buildTaskData(self, pkgs_to_build, task, abort, allowincomplete=False):
def buildTaskData(self, pkgs_to_build, task, abort):
"""
Prepare a runqueue and taskdata object for iteration over pkgs_to_build
"""
@@ -663,7 +633,7 @@ class BBCooker:
localdata = data.createCopy(self.data)
bb.data.update_data(localdata)
bb.data.expandKeys(localdata)
taskdata = bb.taskdata.TaskData(abort, skiplist=self.skiplist, allowincomplete=allowincomplete)
taskdata = bb.taskdata.TaskData(abort, skiplist=self.skiplist)

current = 0
runlist = []
@@ -675,9 +645,7 @@ class BBCooker:
ktask = k2[1]
taskdata.add_provider(localdata, self.recipecache, k)
current += 1
if not ktask.startswith("do_"):
ktask = "do_%s" % ktask
runlist.append([k, ktask])
runlist.append([k, "do_%s" % ktask])
bb.event.fire(bb.event.TreeDataPreparationProgress(current, len(fulltargetlist)), self.data)
taskdata.add_unresolved(localdata, self.recipecache)
bb.event.fire(bb.event.TreeDataPreparationCompleted(len(fulltargetlist)), self.data)
@@ -690,7 +658,7 @@ class BBCooker:

# We set abort to False here to prevent unbuildable targets raising
# an exception when we're just generating data
taskdata, runlist, pkgs_to_build = self.buildTaskData(pkgs_to_build, task, False, allowincomplete=True)
taskdata, runlist, pkgs_to_build = self.buildTaskData(pkgs_to_build, task, False)

return runlist, taskdata

@@ -933,27 +901,17 @@ class BBCooker:
print("}", file=tdepends_file)
logger.info("Task dependencies saved to 'task-depends.dot'")

def show_appends_with_no_recipes(self):
# Determine which bbappends haven't been applied

# First get list of recipes, including skipped
recipefns = self.recipecache.pkg_fn.keys()
recipefns.extend(self.skiplist.keys())

# Work out list of bbappends that have been applied
applied_appends = []
for fn in recipefns:
applied_appends.extend(self.collection.get_file_appends(fn))

appends_without_recipes = []
for _, appendfn in self.collection.bbappends:
if not appendfn in applied_appends:
appends_without_recipes.append(appendfn)

def show_appends_with_no_recipes( self ):
appends_without_recipes = [self.collection.appendlist[recipe]
for recipe in self.collection.appendlist
if recipe not in self.collection.appliedappendlist]
if appends_without_recipes:
msg = 'No recipes available for:\n %s' % '\n '.join(appends_without_recipes)
warn_only = self.data.getVar("BB_DANGLINGAPPENDS_WARNONLY", \
False) or "no"
appendlines = (' %s' % append
for appends in appends_without_recipes
for append in appends)
msg = 'No recipes available for:\n%s' % '\n'.join(appendlines)
warn_only = data.getVar("BB_DANGLINGAPPENDS_WARNONLY", \
self.data, False) or "no"
if warn_only.lower() in ("1", "yes", "true"):
bb.warn(msg)
else:
@@ -1000,8 +958,8 @@ class BBCooker:
# Generate a list of parsed configuration files by searching the files
# listed in the __depends and __base_depends variables with a .conf suffix.
conffiles = []
dep_files = self.data.getVar('__base_depends', False) or []
dep_files = dep_files + (self.data.getVar('__depends', False) or [])
dep_files = self.data.getVar('__base_depends') or []
dep_files = dep_files + (self.data.getVar('__depends') or [])

for f in dep_files:
if f[0].endswith(".conf"):
@@ -1077,13 +1035,13 @@ class BBCooker:

return pkg_list

def generateTargetsTree(self, klass=None, pkgs=None):
def generateTargetsTree(self, klass=None, pkgs=[]):
"""
Generate a dependency tree of buildable targets
Generate an event with the result
"""
# if the caller hasn't specified a pkgs list default to universe
if not pkgs:
if not len(pkgs):
pkgs = ['universe']
# if inherited_class passed ensure all recipes which inherit the
# specified class are included in pkgs
@@ -1218,12 +1176,9 @@ class BBCooker:
"""
Setup any variables needed before starting a build
"""
t = time.gmtime()
if not self.data.getVar("BUILDNAME", False):
self.data.setVar("BUILDNAME", "${DATE}${TIME}")
self.data.setVar("BUILDSTART", time.strftime('%m/%d/%Y %H:%M:%S', t))
self.data.setVar("DATE", time.strftime('%Y%m%d', t))
self.data.setVar("TIME", time.strftime('%H%M%S', t))
if not self.data.getVar("BUILDNAME"):
self.data.setVar("BUILDNAME", time.strftime('%Y%m%d%H%M'))
self.data.setVar("BUILDSTART", time.strftime('%m/%d/%Y %H:%M:%S', time.gmtime()))

def matchFiles(self, bf):
"""
@@ -1316,36 +1271,29 @@ class BBCooker:
# Invalidate task for target if force mode active
if self.configuration.force:
logger.verbose("Invalidate task %s, %s", task, fn)
if not task.startswith("do_"):
task = "do_%s" % task
bb.parse.siggen.invalidate_task(task, self.recipecache, fn)
bb.parse.siggen.invalidate_task('do_%s' % task, self.recipecache, fn)

# Setup taskdata structure
taskdata = bb.taskdata.TaskData(self.configuration.abort)
taskdata.add_provider(self.data, self.recipecache, item)

buildname = self.data.getVar("BUILDNAME", True)
buildname = self.data.getVar("BUILDNAME")
bb.event.fire(bb.event.BuildStarted(buildname, [item]), self.expanded_data)

# Execute the runqueue
if not task.startswith("do_"):
task = "do_%s" % task
runlist = [[item, task]]
runlist = [[item, "do_%s" % task]]

rq = bb.runqueue.RunQueue(self, self.data, self.recipecache, taskdata, runlist)

def buildFileIdle(server, rq, abort):

msg = None
interrupted = 0
if abort or self.state == state.forceshutdown:
rq.finish_runqueue(True)
msg = "Forced shutdown"
interrupted = 2
elif self.state == state.shutdown:
rq.finish_runqueue(False)
msg = "Stopped build"
interrupted = 1
failures = 0
try:
retval = rq.execute_runqueue()
@@ -1357,7 +1305,7 @@ class BBCooker:
return False

if not retval:
bb.event.fire(bb.event.BuildCompleted(len(rq.rqdata.runq_fnid), buildname, item, failures, interrupted), self.expanded_data)
bb.event.fire(bb.event.BuildCompleted(len(rq.rqdata.runq_fnid), buildname, item, failures), self.expanded_data)
self.command.finishAsyncCommand(msg)
return False
if retval is True:
@@ -1373,15 +1321,12 @@ class BBCooker:

def buildTargetsIdle(server, rq, abort):
msg = None
interrupted = 0
if abort or self.state == state.forceshutdown:
rq.finish_runqueue(True)
msg = "Forced shutdown"
interrupted = 2
elif self.state == state.shutdown:
rq.finish_runqueue(False)
msg = "Stopped build"
interrupted = 1
failures = 0
try:
retval = rq.execute_runqueue()
@@ -1393,38 +1338,19 @@ class BBCooker:
return False

if not retval:
bb.event.fire(bb.event.BuildCompleted(len(rq.rqdata.runq_fnid), buildname, targets, failures, interrupted), self.data)
bb.event.fire(bb.event.BuildCompleted(len(rq.rqdata.runq_fnid), buildname, targets, failures), self.data)
self.command.finishAsyncCommand(msg)
return False
if retval is True:
return True
return retval

build.reset_cache()
self.buildSetVars()

# If we are told to do the None task then query the default task
if (task == None):
task = self.configuration.cmd

if not task.startswith("do_"):
task = "do_%s" % task

taskdata, runlist, fulltargetlist = self.buildTaskData(targets, task, self.configuration.abort)

buildname = self.data.getVar("BUILDNAME", False)

# make targets to always look as <target>:do_<task>
ntargets = []
for target in fulltargetlist:
if ":" in target:
if ":do_" not in target:
target = "%s:do_%s" % tuple(target.split(":", 1))
else:
target = "%s:%s" % (target, task)
ntargets.append(target)

bb.event.fire(bb.event.BuildStarted(buildname, ntargets), self.data)
buildname = self.data.getVar("BUILDNAME")
bb.event.fire(bb.event.BuildStarted(buildname, fulltargetlist), self.data)

rq = bb.runqueue.RunQueue(self, self.data, self.recipecache, taskdata, runlist)
if 'universe' in targets:
@@ -1477,7 +1403,7 @@ class BBCooker:
if base_image is None:
imagefile.write("inherit core-image\n")
else:
topdir = self.data.getVar("TOPDIR", False)
topdir = self.data.getVar("TOPDIR")
if topdir in base_image:
base_image = require_line.split()[1]
imagefile.write("require " + base_image + "\n")
@@ -1499,21 +1425,6 @@ class BBCooker:
if timestamp:
return timestr

def updateCacheSync(self):
if self.state == state.running:
return

# reload files for which we got notifications
for p in self.inotify_modified_files:
bb.parse.update_cache(p)
self.inotify_modified_files = []

if not self.baseconfig_valid:
logger.debug(1, "Reloading base configuration data")
self.initConfigurationData()
self.baseconfig_valid = True
self.parsecache_valid = False

# This is called for all async commands when self.state != running
def updateCache(self):
if self.state == state.running:
@@ -1525,7 +1436,17 @@ class BBCooker:
raise bb.BBHandledException()

if self.state != state.parsing:
self.updateCacheSync()

# reload files for which we got notifications
for p in self.inotify_modified_files:
bb.parse.update_cache(p)
self.inotify_modified_files = []

if not self.baseconfig_valid:
logger.debug(1, "Reloading base configuration data")
self.initConfigurationData()
self.baseconfig_valid = True
self.parsecache_valid = False

if self.state != state.parsing and not self.parsecache_valid:
self.parseConfiguration ()
@@ -1541,6 +1462,9 @@ class BBCooker:
self.collection = CookerCollectFiles(self.recipecache.bbfile_config_priorities)
(filelist, masked) = self.collection.collect_bbfiles(self.data, self.expanded_data)

self.data.renameVar("__depends", "__base_depends")
self.add_filewatch(self.data.getVar("__base_depends"), self.configwatcher)

self.parser = CookerParser(self, filelist, masked)
self.parsecache_valid = True

@@ -1554,11 +1478,6 @@ class BBCooker:
self.handlePrefProviders()
self.recipecache.bbfile_priority = self.collection.collection_priorities(self.recipecache.pkg_fn, self.data)
self.state = state.running

# Send an event listing all stamps reachable after parsing
# which the metadata may use to clean up stale data
event = bb.event.ReachableStamps(self.recipecache.stamp)
bb.event.fire(event, self.expanded_data)
return None

return True
@@ -1649,19 +1568,6 @@ class BBCooker:
def reset(self):
self.initConfigurationData()

def lockBitbake(self):
if not hasattr(self, 'lock'):
self.lock = None
if self.data:
lockfile = self.data.expand("${TOPDIR}/bitbake.lock")
if lockfile:
self.lock = bb.utils.lockfile(lockfile, False, False)
return self.lock

def unlockBitbake(self):
if hasattr(self, 'lock') and self.lock:
bb.utils.unlockfile(self.lock)

def server_main(cooker, func, *args):
cooker.pre_serve()

@@ -1696,7 +1602,9 @@ class CookerExit(bb.event.Event):

class CookerCollectFiles(object):
def __init__(self, priorities):
self.appendlist = {}
self.bbappends = []
self.appliedappendlist = []
self.bbfile_config_priorities = priorities

def calc_bbfile_priority( self, filename, matched = None ):
@@ -1791,6 +1699,10 @@ class CookerCollectFiles(object):
for f in bbappend:
base = os.path.basename(f).replace('.bbappend', '.bb')
self.bbappends.append((base, f))
if not base in self.appendlist:
self.appendlist[base] = []
if f not in self.appendlist[base]:
self.appendlist[base].append(f)

# Find overlayed recipes
# bbfiles will be in priority order which makes this easy
@@ -1815,6 +1727,7 @@ class CookerCollectFiles(object):
for b in self.bbappends:
(bbappend, filename) = b
if (bbappend == f) or ('%' in bbappend and bbappend.startswith(f[:bbappend.index('%')])):
self.appliedappendlist.append(bbappend)
filelist.append(filename)
return filelist

@@ -1914,6 +1827,8 @@ class Parser(multiprocessing.Process):
finally:
logfile = "profile-parse-%s.log" % multiprocessing.current_process().name
prof.dump_stats(logfile)
bb.utils.process_profilelog(logfile)
print("Raw profiling information saved to %s and processed statistics to %s.processed" % (logfile, logfile))

def realrun(self):
if self.init:
@@ -1983,7 +1898,6 @@ class CookerParser(object):
self.current = 0
self.num_processes = int(self.cfgdata.getVar("BB_NUMBER_PARSE_THREADS", True) or
multiprocessing.cpu_count())
self.process_names = []

self.bb_cache = bb.cache.Cache(self.cfgdata, self.cfghash, cooker.caches_array)
self.fromcache = []
@@ -2019,7 +1933,6 @@ class CookerParser(object):
for i in range(0, self.num_processes):
parser = Parser(self.jobs, self.result_queue, self.parser_quit, init, self.cooker.configuration.profile)
parser.start()
self.process_names.append(parser.name)
self.processes.append(parser)

self.results = itertools.chain(self.results, self.parse_generator())
@@ -2063,16 +1976,6 @@ class CookerParser(object):
multiprocessing.util.Finalize(None, sync.join, exitpriority=-100)
bb.codeparser.parser_cache_savemerge(self.cooker.data)
bb.fetch.fetcher_parse_done(self.cooker.data)
if self.cooker.configuration.profile:
profiles = []
for i in self.process_names:
logfile = "profile-parse-%s.log" % i
if os.path.exists(logfile):
profiles.append(logfile)

pout = "profile-parse.log.processed"
bb.utils.process_profilelog(profiles, pout = pout)
print("Processed parsing statistics saved to %s" % (pout))

def load_cached(self):
for filename, appends in self.fromcache:

@@ -63,9 +63,9 @@ class ConfigParameters(object):
|
||||
raise Exception("Unable to set configuration option 'cmd' on the server: %s" % error)
|
||||
|
||||
if not self.options.pkgs_to_build:
|
||||
bbpkgs, error = server.runCommand(["getVariable", "BBTARGETS"])
|
||||
bbpkgs, error = server.runCommand(["getVariable", "BBPKGS"])
|
||||
if error:
|
||||
raise Exception("Unable to get the value of BBTARGETS from the server: %s" % error)
|
||||
raise Exception("Unable to get the value of BBPKGS from the server: %s" % error)
|
||||
if bbpkgs:
|
||||
self.options.pkgs_to_build.extend(bbpkgs.split())
|
||||
|
||||
@@ -73,8 +73,7 @@ class ConfigParameters(object):
|
||||
options = {}
|
||||
for o in ["abort", "tryaltconfigs", "force", "invalidate_stamp",
|
||||
"verbose", "debug", "dry_run", "dump_signatures",
|
||||
"debug_domains", "extra_assume_provided", "profile",
|
||||
"prefile", "postfile"]:
|
||||
"debug_domains", "extra_assume_provided", "profile"]:
|
||||
options[o] = getattr(self.options, o)
|
||||
|
||||
ret, error = server.runCommand(["updateConfig", options, environment])
|
||||
@@ -129,8 +128,6 @@ class CookerConfiguration(object):
|
||||
self.extra_assume_provided = []
|
||||
self.prefile = []
|
||||
self.postfile = []
|
||||
self.prefile_server = []
|
||||
self.postfile_server = []
|
||||
self.debug = 0
|
||||
self.cmd = None
|
||||
self.abort = True
|
||||
@@ -176,23 +173,11 @@ def catch_parse_error(func):
|
||||
def wrapped(fn, *args):
|
||||
try:
|
||||
return func(fn, *args)
|
||||
except IOError as exc:
|
||||
except (IOError, bb.parse.ParseError, bb.data_smart.ExpansionError) as exc:
|
||||
import traceback
|
||||
parselog.critical(traceback.format_exc())
|
||||
parselog.critical( traceback.format_exc())
|
||||
parselog.critical("Unable to parse %s: %s" % (fn, exc))
|
||||
sys.exit(1)
|
||||
except (bb.parse.ParseError, bb.data_smart.ExpansionError) as exc:
|
||||
import traceback
|
||||
|
||||
bbdir = os.path.dirname(__file__) + os.sep
|
||||
exc_class, exc, tb = sys.exc_info()
|
||||
for tb in iter(lambda: tb.tb_next, None):
|
||||
# Skip frames in bitbake itself, we only want the metadata
|
||||
fn, _, _, _ = traceback.extract_tb(tb, 1)[0]
|
||||
if not fn.startswith(bbdir):
|
||||
break
|
||||
parselog.critical("Unable to parse %s", fn, exc_info=(exc_class, exc, tb))
|
||||
sys.exit(1)
|
||||
return wrapped
|
||||
|
||||
@catch_parse_error
|
||||
@@ -284,11 +269,8 @@ class CookerDataBuilder(object):
|
||||
layers = (data.getVar('BBLAYERS', True) or "").split()
|
||||
|
||||
data = bb.data.createCopy(data)
|
||||
approved = bb.utils.approved_variables()
|
||||
for layer in layers:
|
||||
parselog.debug(2, "Adding layer %s", layer)
|
||||
if 'HOME' in approved and '~' in layer:
|
||||
layer = os.path.expanduser(layer)
|
||||
data.setVar('LAYERDIR', layer)
|
||||
data = parse_config_file(os.path.join(layer, "conf", "layer.conf"), data)
|
||||
data.expandVarref('LAYERDIR')
|
||||
@@ -316,15 +298,15 @@ class CookerDataBuilder(object):
|
||||
|
||||
# Nomally we only register event handlers at the end of parsing .bb files
|
||||
# We register any handlers we've found so far here...
|
||||
for var in data.getVar('__BBHANDLERS', False) or []:
|
||||
bb.event.register(var, data.getVar(var, False), (data.getVarFlag(var, "eventmask", True) or "").split())
|
||||
for var in data.getVar('__BBHANDLERS') or []:
|
||||
bb.event.register(var, data.getVar(var), (data.getVarFlag(var, "eventmask", True) or "").split())
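        # Editor's illustration (assumed typical metadata, not from this
        # diff): a handler name found in __BBHANDLERS is usually declared as
        #   addhandler my_handler
        #   my_handler[eventmask] = "bb.event.BuildStarted"
        # and the eventmask flag read above limits which events reach it.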

        if data.getVar("BB_WORKERCONTEXT", False) is None:
            bb.fetch.fetcher_init(data)
        bb.codeparser.parser_cache_init(data)
        bb.event.fire(bb.event.ConfigParsed(), data)

        if data.getVar("BB_INVALIDCONF", False) is True:
        if data.getVar("BB_INVALIDCONF") is True:
            data.setVar("BB_INVALIDCONF", False)
            self.parseConfigurationFiles(self.prefiles, self.postfiles)
            return

@@ -84,7 +84,7 @@ def setVar(var, value, d):
    d.setVar(var, value)


def getVar(var, d, exp = False):
def getVar(var, d, exp = 0):
    """Gets the value of a variable"""
    return d.getVar(var, exp)

@@ -159,12 +159,13 @@ def expandKeys(alterdata, readdata = None):

    # These two for loops are split for performance to maximise the
    # usefulness of the expand cache
    for key in sorted(todolist):

    for key in todolist:
        ekey = todolist[key]
        newval = alterdata.getVar(ekey, False)
        if newval is not None:
            val = alterdata.getVar(key, False)
            if val is not None:
        newval = alterdata.getVar(ekey, 0)
        if newval:
            val = alterdata.getVar(key, 0)
            if val is not None and newval is not None:
                bb.warn("Variable key %s (%s) replaces original key %s (%s)." % (key, val, ekey, newval))
        alterdata.renameVar(key, ekey)

@@ -174,7 +175,7 @@ def inheritFromOS(d, savedenv, permitted):
    for s in savedenv.keys():
        if s in permitted:
            try:
                d.setVar(s, savedenv.getVar(s, True), op = 'from env')
                d.setVar(s, getVar(s, savedenv, True), op = 'from env')
                if s in exportlist:
                    d.setVarFlag(s, "export", True, op = 'auto env export')
            except TypeError:

@@ -182,39 +183,39 @@ def inheritFromOS(d, savedenv, permitted):

def emit_var(var, o=sys.__stdout__, d = init(), all=False):
    """Emit a variable to be sourced by a shell."""
    if d.getVarFlag(var, "python"):
        return False
    if getVarFlag(var, "python", d):
        return 0

    export = d.getVarFlag(var, "export")
    unexport = d.getVarFlag(var, "unexport")
    func = d.getVarFlag(var, "func")
    export = getVarFlag(var, "export", d)
    unexport = getVarFlag(var, "unexport", d)
    func = getVarFlag(var, "func", d)
    if not all and not export and not unexport and not func:
        return False
        return 0

    try:
        if all:
            oval = d.getVar(var, False)
        val = d.getVar(var, True)
            oval = getVar(var, d, 0)
        val = getVar(var, d, 1)
    except (KeyboardInterrupt, bb.build.FuncFailed):
        raise
    except Exception as exc:
        o.write('# expansion of %s threw %s: %s\n' % (var, exc.__class__.__name__, str(exc)))
        return False
        return 0

    if all:
        d.varhistory.emit(var, oval, val, o, d)
        d.varhistory.emit(var, oval, val, o)

    if (var.find("-") != -1 or var.find(".") != -1 or var.find('{') != -1 or var.find('}') != -1 or var.find('+') != -1) and not all:
        return False
        return 0

    varExpanded = d.expand(var)
    varExpanded = expand(var, d)

    if unexport:
        o.write('unset %s\n' % varExpanded)
        return False
        return 0

    if val is None:
        return False
        return 0

    val = str(val)

@@ -223,7 +224,7 @@ def emit_var(var, o=sys.__stdout__, d = init(), all=False):
        val = val[3:] # Strip off "() "
        o.write("%s() %s\n" % (varExpanded, val))
        o.write("export -f %s\n" % (varExpanded))
        return True
        return 1

    if func:
        # NOTE: should probably check for unbalanced {} within the var

@@ -239,7 +240,7 @@ def emit_var(var, o=sys.__stdout__, d = init(), all=False):
    alter = re.sub('\n', ' \\\n', alter)
    alter = re.sub('\\$', '\\\\$', alter)
    o.write('%s="%s"\n' % (varExpanded, alter))
    return False
    return 0

def emit_env(o=sys.__stdout__, d = init(), all=False):
    """Emits all items in the data store in a format such that it can be sourced by a shell."""

@@ -271,9 +272,8 @@ def emit_func(func, o=sys.__stdout__, d = init()):

    keys = (key for key in d.keys() if not key.startswith("__") and not d.getVarFlag(key, "func"))
    for key in keys:
        emit_var(key, o, d, False)
        emit_var(key, o, d, False) and o.write('\n')

    o.write('\n')
    emit_var(func, o, d, False) and o.write('\n')
    newdeps = bb.codeparser.ShellParser(func, logger).parse_shell(d.getVar(func, True))
    newdeps |= set((d.getVarFlag(func, "vardeps", True) or "").split())

@@ -420,7 +420,7 @@ def generate_dependencies(d):
    deps = {}
    values = {}

    tasklist = d.getVar('__BBTASKS', False) or []
    tasklist = d.getVar('__BBTASKS') or []
    for task in tasklist:
        deps[task], values[task] = build_dependencies(task, keys, shelldeps, varflagsexcl, d)
        newdeps = deps[task]

@@ -438,7 +438,7 @@ def generate_dependencies(d):
    return tasklist, deps, values

def inherits_class(klass, d):
    val = d.getVar('__inherit_cache', False) or []
    val = getVar('__inherit_cache', d) or []
    needle = os.path.join('classes', '%s.bbclass' % klass)
    for v in val:
        if v.endswith(needle):

@@ -54,36 +54,27 @@ def infer_caller_details(loginfo, parent = False, varval = True):
        return
    # Infer caller's likely values for variable (var) and value (value),
    # to reduce clutter in the rest of the code.
    above = None
    def set_above():
    if varval and ('variable' not in loginfo or 'detail' not in loginfo):
        try:
            raise Exception
        except Exception:
            tb = sys.exc_info()[2]
            if parent:
                return tb.tb_frame.f_back.f_back.f_back
                above = tb.tb_frame.f_back.f_back
            else:
                return tb.tb_frame.f_back.f_back

    if varval and ('variable' not in loginfo or 'detail' not in loginfo):
        if not above:
            above = set_above()
        lcls = above.f_locals.items()
                above = tb.tb_frame.f_back
        lcls = above.f_locals.items()
        for k, v in lcls:
            if k == 'value' and 'detail' not in loginfo:
                loginfo['detail'] = v
            if k == 'var' and 'variable' not in loginfo:
                loginfo['variable'] = v
    # Infer file/line/function from traceback
    # Don't use traceback.extract_stack() since it fills the line contents which
    # we don't need and that hits stat syscalls
    if 'file' not in loginfo:
        if not above:
            above = set_above()
        f = above.f_back
        line = f.f_lineno
        file = f.f_code.co_filename
        func = f.f_code.co_name
        depth = 3
        if parent:
            depth = 4
        file, line, func, text = traceback.extract_stack(limit = depth)[0]
        loginfo['file'] = file
        loginfo['line'] = line
        if func not in loginfo:

@@ -240,10 +231,6 @@ class VariableHistory(object):

        if var not in self.variables:
            self.variables[var] = []
        if not isinstance(self.variables[var], list):
            return
        if 'nodups' in loginfo and loginfo in self.variables[var]:
            return
        self.variables[var].append(loginfo.copy())

    def variable(self, var):

@@ -252,20 +239,8 @@ class VariableHistory(object):
        else:
            return []

    def emit(self, var, oval, val, o, d):
    def emit(self, var, oval, val, o):
        history = self.variable(var)

        # Append override history
        if var in d.overridedata:
            for (r, override) in d.overridedata[var]:
                for event in self.variable(r):
                    loginfo = event.copy()
                    if 'flag' in loginfo and not loginfo['flag'].startswith("_"):
                        continue
                    loginfo['variable'] = var
                    loginfo['op'] = 'override[%s]:%s' % (override, loginfo['op'])
                    history.append(loginfo)

        commentVal = re.sub('\n', '\n#', str(oval))
        if history:
            if len(history) == 1:

@@ -312,31 +287,6 @@ class VariableHistory(object):
                lines.append(line)
        return lines

    def get_variable_items_files(self, var, d):
        """
        Use variable history to map items added to a list variable and
        the files in which they were added.
        """
        history = self.variable(var)
        finalitems = (d.getVar(var, True) or '').split()
        filemap = {}
        isset = False
        for event in history:
            if 'flag' in event:
                continue
            if event['op'] == '_remove':
                continue
            if isset and event['op'] == 'set?':
                continue
            isset = True
            items = d.expand(event['detail']).split()
            for item in items:
                # This is a little crude but is belt-and-braces to avoid us
                # having to handle every possible operation type specifically
                if item in finalitems and not item in filemap:
                    filemap[item] = event['file']
        return filemap

    def del_var_history(self, var, f=None, line=None):
        """If file f and line are not given, the entire history of var is deleted"""
        if var in self.variables:

@@ -346,23 +296,23 @@ class VariableHistory(object):
            self.variables[var] = []

class DataSmart(MutableMapping):
    def __init__(self):
    def __init__(self, special = None, seen = None ):
        self.dict = {}

        if special is None:
            special = COWDictBase.copy()
        if seen is None:
            seen = COWDictBase.copy()

        self.inchistory = IncludeHistory()
        self.varhistory = VariableHistory(self)
        self._tracking = False

        self.expand_cache = {}

        # cookie monster tribute
        # Need to be careful about writes to overridedata as
        # it's only a shallow copy, could influence other data store
        # copies!
        self.overridedata = {}
        self.overrides = None
        self.overridevars = set(["OVERRIDES", "FILE"])
        self.inoverride = False
        self._special_values = special
        self._seen_overrides = seen

        self.expand_cache = {}

    def enableTracking(self):
        self._tracking = True

@@ -392,8 +342,7 @@ class DataSmart(MutableMapping):
        except bb.parse.SkipRecipe:
            raise
        except Exception as exc:
            exc_class, exc, tb = sys.exc_info()
            raise ExpansionError, ExpansionError(varname, s, exc), tb
            raise ExpansionError(varname, s, exc)

        varparse.value = s

@@ -405,34 +354,97 @@ class DataSmart(MutableMapping):
    def expand(self, s, varname = None):
        return self.expandWithRefs(s, varname).value


    def finalize(self, parent = False):
        return

    def internal_finalize(self, parent = False):
        """Performs final steps upon the datastore, including application of overrides"""
        self.overrides = None

    def need_overrides(self):
        if self.overrides is not None:
            return
        if self.inoverride:
            return
        for count in range(5):
            self.inoverride = True
            # Can end up here recursively so setup dummy values
            self.overrides = []
            self.overridesset = set()
            self.overrides = (self.getVar("OVERRIDES", True) or "").split(":") or []
            self.overridesset = set(self.overrides)
            self.inoverride = False
            self.expand_cache = {}
            newoverrides = (self.getVar("OVERRIDES", True) or "").split(":") or []
            if newoverrides == self.overrides:
                break
            self.overrides = newoverrides
            self.overridesset = set(self.overrides)
        else:
            bb.fatal("Overrides could not be expanded into a stable state after 5 iterations, overrides must be being referenced by other overridden variables in some recursive fashion. Please provide your configuration to bitbake-devel so we can laugh, er, I mean try and understand how to make it work.")
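        # Editor's sketch of the fixpoint above (hypothetical values): with
        # OVERRIDES = "arm:qemuarm" the first pass already stabilises, giving
        # overridesset = {"arm", "qemuarm"}; if expanding OVERRIDES itself
        # changes OVERRIDES (because it references an overridden variable),
        # the loop re-expands until two consecutive passes agree, and gives
        # up after five rounds with the fatal error above.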
        overrides = (self.getVar("OVERRIDES", True) or "").split(":") or []
        finalize_caller = {
            'op': 'finalize',
        }
        infer_caller_details(finalize_caller, parent = parent, varval = False)

        #
        # Well let us see what breaks here. We used to iterate
        # over each variable and apply the override and then
        # do the line expanding.
        # If we have bad luck - which we will have - the keys
        # were in some order that is so important for this
        # method which we don't have anymore.
        # Anyway we will fix that and write test cases this
        # time.

        #
        # First we apply all overrides
        # Then we will handle _append and _prepend and store the _remove
        # information for later.
        #

        # We only want to report finalization once per variable overridden.
        finalizes_reported = {}

        for o in overrides:
            # calculate '_'+override
            l = len(o) + 1

            # see if one should even try
            if o not in self._seen_overrides:
                continue

            vars = self._seen_overrides[o].copy()
            for var in vars:
                name = var[:-l]
                try:
                    # Report only once, even if multiple changes.
                    if name not in finalizes_reported:
                        finalizes_reported[name] = True
                        finalize_caller['variable'] = name
                        finalize_caller['detail'] = 'was: ' + str(self.getVar(name, False))
                        self.varhistory.record(**finalize_caller)
                    # Copy history of the override over.
                    for event in self.varhistory.variable(var):
                        loginfo = event.copy()
                        loginfo['variable'] = name
                        loginfo['op'] = 'override[%s]:%s' % (o, loginfo['op'])
                        self.varhistory.record(**loginfo)
                    self.setVar(name, self.getVar(var, False), op = 'finalize', file = 'override[%s]' % o, line = '')
                    self.delVar(var)
                except Exception:
                    logger.info("Untracked delVar")

        # now on to the appends and prepends, and stashing the removes
        for op in __setvar_keyword__:
            if op in self._special_values:
                appends = self._special_values[op] or []
                for append in appends:
                    keep = []
                    for (a, o) in self.getVarFlag(append, op) or []:
                        match = True
                        if o:
                            for o2 in o.split("_"):
                                if not o2 in overrides:
                                    match = False
                        if not match:
                            keep.append((a, o))
                            continue

                        if op == "_append":
                            sval = self.getVar(append, False) or ""
                            sval += a
                            self.setVar(append, sval)
                        elif op == "_prepend":
                            sval = a + (self.getVar(append, False) or "")
                            self.setVar(append, sval)
                        elif op == "_remove":
                            removes = self.getVarFlag(append, "_removeactive", False) or []
                            removes.extend(a.split())
                            self.setVarFlag(append, "_removeactive", removes, ignore=True)

                    # We save overrides that may be applied at some later stage
                    if keep:
                        self.setVarFlag(append, op, keep, ignore=True)
                    else:
                        self.delVarFlag(append, op, ignore=True)
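
    # Editor's illustration (metadata examples, not from this diff) of the
    # three keywords handled above:
    #   SRC_URI_append = " file://extra.patch"     -> stored under "_append"
    #   PATH_prepend = "${COREBASE}/scripts:"      -> stored under "_prepend"
    #   IMAGE_FEATURES_remove = "debug-tweaks"     -> stored under "_remove"
    # A conditional form such as SRC_URI_append_qemuarm only applies when
    # every component of its suffix is present in OVERRIDES, which is what
    # the o.split("_") check implements.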

    def initVar(self, var):
        self.expand_cache = {}

@@ -463,10 +475,6 @@ class DataSmart(MutableMapping):

    def setVar(self, var, value, **loginfo):
        #print("var=" + str(var) + " val=" + str(value))
        parsing=False
        if 'parsing' in loginfo:
            parsing=True

        if 'op' not in loginfo:
            loginfo['op'] = "set"
        self.expand_cache = {}

@@ -488,93 +496,52 @@ class DataSmart(MutableMapping):
            self.varhistory.record(**loginfo)
            # todo make sure keyword is not __doc__ or __module__
            # pay the cookie monster
            try:
                self._special_values[keyword].add(base)
            except KeyError:
                self._special_values[keyword] = set()
                self._special_values[keyword].add(base)

            # more cookies for the cookie monster
            if '_' in var:
                self._setvar_update_overrides(base, **loginfo)

            if base in self.overridevars:
                self._setvar_update_overridevars(var, value)
            return

        if not var in self.dict:
            self._makeShadowCopy(var)

        if not parsing:
            if "_append" in self.dict[var]:
                del self.dict[var]["_append"]
            if "_prepend" in self.dict[var]:
                del self.dict[var]["_prepend"]
            if var in self.overridedata:
                active = []
                self.need_overrides()
                for (r, o) in self.overridedata[var]:
                    if o in self.overridesset:
                        active.append(r)
                    elif "_" in o:
                        if set(o.split("_")).issubset(self.overridesset):
                            active.append(r)
                for a in active:
                    self.delVar(a)
                del self.overridedata[var]

        # more cookies for the cookie monster
        if '_' in var:
            self._setvar_update_overrides(var, **loginfo)
            self._setvar_update_overrides(var)

        # setting var
        self.dict[var]["_content"] = value
        self.varhistory.record(**loginfo)

        if var in self.overridevars:
            self._setvar_update_overridevars(var, value)

    def _setvar_update_overridevars(self, var, value):
        vardata = self.expandWithRefs(value, var)
        new = vardata.references
        new.update(vardata.contains.keys())
        while not new.issubset(self.overridevars):
            nextnew = set()
            self.overridevars.update(new)
            for i in new:
                vardata = self.expandWithRefs(self.getVar(i, True), i)
                nextnew.update(vardata.references)
                nextnew.update(vardata.contains.keys())
            new = nextnew
        self.internal_finalize(True)

    def _setvar_update_overrides(self, var, **loginfo):
    def _setvar_update_overrides(self, var):
        # aka pay the cookie monster
        override = var[var.rfind('_')+1:]
        shortvar = var[:var.rfind('_')]
        while override:
            if shortvar not in self.overridedata:
                self.overridedata[shortvar] = []
            if [var, override] not in self.overridedata[shortvar]:
                # Force CoW by recreating the list first
                self.overridedata[shortvar] = list(self.overridedata[shortvar])
                self.overridedata[shortvar].append([var, override])
            if override not in self._seen_overrides:
                self._seen_overrides[override] = set()
            self._seen_overrides[override].add(var)
            override = None
            if "_" in shortvar:
                override = var[shortvar.rfind('_')+1:]
                shortvar = var[:shortvar.rfind('_')]
                if len(shortvar) == 0:
                    override = None
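
    # Editor's walk-through (hypothetical variable name): for
    # var = "FOO_libc_x86" the loop above first records
    # ("FOO_libc_x86", "x86") under shortvar "FOO_libc", then peels a second
    # suffix to record ("FOO_libc_x86", "libc_x86") under "FOO", so a later
    # lookup of FOO can match either the single or the combined override.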

    def getVar(self, var, expand=False, noweakdefault=False, parsing=False):
        return self.getVarFlag(var, "_content", expand, noweakdefault, parsing)
    def getVar(self, var, expand=False, noweakdefault=False):
        return self.getVarFlag(var, "_content", expand, noweakdefault)

    def renameVar(self, key, newkey, **loginfo):
        """
        Rename the variable key to newkey
        """
        val = self.getVar(key, 0, parsing=True)
        val = self.getVar(key, 0)
        if val is not None:
            loginfo['variable'] = newkey
            loginfo['op'] = 'rename from %s' % key
            loginfo['detail'] = val
            self.varhistory.record(**loginfo)
            self.setVar(newkey, val, ignore=True, parsing=True)
            self.setVar(newkey, val, ignore=True)

        for i in (__setvar_keyword__):
            src = self.getVarFlag(key, i)

@@ -585,14 +552,9 @@ class DataSmart(MutableMapping):
                dest.extend(src)
                self.setVarFlag(newkey, i, dest, ignore=True)

        if key in self.overridedata:
            self.overridedata[newkey] = []
            for (v, o) in self.overridedata[key]:
                self.overridedata[newkey].append([v.replace(key, newkey), o])
                self.renameVar(v, v.replace(key, newkey))

        if '_' in newkey and val is None:
            self._setvar_update_overrides(newkey, **loginfo)
            if i in self._special_values and key in self._special_values[i]:
                self._special_values[i].remove(key)
                self._special_values[i].add(newkey)

        loginfo['variable'] = key
        loginfo['op'] = 'rename (to)'

@@ -603,12 +565,14 @@ class DataSmart(MutableMapping):
    def appendVar(self, var, value, **loginfo):
        loginfo['op'] = 'append'
        self.varhistory.record(**loginfo)
        self.setVar(var + "_append", value, ignore=True, parsing=True)
        newvalue = (self.getVar(var, False) or "") + value
        self.setVar(var, newvalue, ignore=True)

    def prependVar(self, var, value, **loginfo):
        loginfo['op'] = 'prepend'
        self.varhistory.record(**loginfo)
        self.setVar(var + "_prepend", value, ignore=True, parsing=True)
        newvalue = value + (self.getVar(var, False) or "")
        self.setVar(var, newvalue, ignore=True)

    def delVar(self, var, **loginfo):
        loginfo['detail'] = ""

@@ -616,28 +580,12 @@ class DataSmart(MutableMapping):
        self.varhistory.record(**loginfo)
        self.expand_cache = {}
        self.dict[var] = {}
        if var in self.overridedata:
            del self.overridedata[var]
        if '_' in var:
            override = var[var.rfind('_')+1:]
            shortvar = var[:var.rfind('_')]
            while override:
                try:
                    if shortvar in self.overridedata:
                        # Force CoW by recreating the list first
                        self.overridedata[shortvar] = list(self.overridedata[shortvar])
                        self.overridedata[shortvar].remove([var, override])
                except ValueError as e:
                    pass
                override = None
                if "_" in shortvar:
                    override = var[shortvar.rfind('_')+1:]
                    shortvar = var[:shortvar.rfind('_')]
                    if len(shortvar) == 0:
                        override = None
            if override and override in self._seen_overrides and var in self._seen_overrides[override]:
                self._seen_overrides[override].remove(var)

    def setVarFlag(self, var, flag, value, **loginfo):
        self.expand_cache = {}
        if 'op' not in loginfo:
            loginfo['op'] = "set"
        loginfo['flag'] = flag

@@ -647,9 +595,7 @@ class DataSmart(MutableMapping):
        self.dict[var][flag] = value

        if flag == "_defaultval" and '_' in var:
            self._setvar_update_overrides(var, **loginfo)
        if flag == "_defaultval" and var in self.overridevars:
            self._setvar_update_overridevars(var, value)
            self._setvar_update_overrides(var)

        if flag == "unexport" or flag == "export":
            if not "__exportlist" in self.dict:

@@ -658,71 +604,14 @@ class DataSmart(MutableMapping):
                self.dict["__exportlist"]["_content"] = set()
            self.dict["__exportlist"]["_content"].add(var)

    def getVarFlag(self, var, flag, expand=False, noweakdefault=False, parsing=False):
    def getVarFlag(self, var, flag, expand=False, noweakdefault=False):
        local_var = self._findVar(var)
        value = None
        if flag == "_content" and var in self.overridedata and not parsing:
            match = False
            active = {}
            self.need_overrides()
            for (r, o) in self.overridedata[var]:
                # What about double overrides both with "_" in the name?
                if o in self.overridesset:
                    active[o] = r
                elif "_" in o:
                    if set(o.split("_")).issubset(self.overridesset):
                        active[o] = r

            mod = True
            while mod:
                mod = False
                for o in self.overrides:
                    for a in active.copy():
                        if a.endswith("_" + o):
                            t = active[a]
                            del active[a]
                            active[a.replace("_" + o, "")] = t
                            mod = True
                        elif a == o:
                            match = active[a]
                            del active[a]
            if match:
                value = self.getVar(match)

        if local_var is not None and value is None:
        if local_var is not None:
            if flag in local_var:
                value = copy.copy(local_var[flag])
            elif flag == "_content" and "_defaultval" in local_var and not noweakdefault:
                value = copy.copy(local_var["_defaultval"])

        if flag == "_content" and local_var is not None and "_append" in local_var and not parsing:
            if not value:
                value = ""
            self.need_overrides()
            for (r, o) in local_var["_append"]:
                match = True
                if o:
                    for o2 in o.split("_"):
                        if not o2 in self.overrides:
                            match = False
                if match:
                    value = value + r

        if flag == "_content" and local_var is not None and "_prepend" in local_var and not parsing:
            if not value:
                value = ""
            self.need_overrides()
            for (r, o) in local_var["_prepend"]:

                match = True
                if o:
                    for o2 in o.split("_"):
                        if not o2 in self.overrides:
                            match = False
                if match:
                    value = r + value

        if expand and value:
            # Only getvar (flag == _content) hits the expand cache
            cachename = None

@@ -731,30 +620,19 @@ class DataSmart(MutableMapping):
            else:
                cachename = var + "[" + flag + "]"
            value = self.expand(value, cachename)

        if value and flag == "_content" and local_var is not None and "_remove" in local_var:
            removes = []
            self.need_overrides()
            for (r, o) in local_var["_remove"]:
                match = True
                if o:
                    for o2 in o.split("_"):
                        if not o2 in self.overrides:
                            match = False
                if match:
                    removes.extend(self.expand(r).split())

        if value and flag == "_content" and local_var is not None and "_removeactive" in local_var:
            removes = [self.expand(r).split() for r in local_var["_removeactive"]]
            removes = reduce(lambda a, b: a+b, removes, [])
            filtered = filter(lambda v: v not in removes,
                              value.split())
            value = " ".join(filtered)
            if expand and var in self.expand_cache:
            if expand:
                # We need to ensure the expand cache has the correct value
                # flag == "_content" here
                self.expand_cache[var].value = value
        return value

    def delVarFlag(self, var, flag, **loginfo):
        self.expand_cache = {}
        local_var = self._findVar(var)
        if not local_var:
            return

@@ -784,7 +662,6 @@ class DataSmart(MutableMapping):
        self.setVarFlag(var, flag, newvalue, ignore=True)

    def setVarFlags(self, var, flags, **loginfo):
        self.expand_cache = {}
        infer_caller_details(loginfo)
        if not var in self.dict:
            self._makeShadowCopy(var)

@@ -814,7 +691,6 @@ class DataSmart(MutableMapping):


    def delVarFlags(self, var, **loginfo):
        self.expand_cache = {}
        if not var in self.dict:
            self._makeShadowCopy(var)

@@ -832,12 +708,13 @@ class DataSmart(MutableMapping):
        else:
            del self.dict[var]


    def createCopy(self):
        """
        Create a copy of self by setting _data to self
        """
        # we really want this to be a DataSmart...
        data = DataSmart()
        data = DataSmart(seen=self._seen_overrides.copy(), special=self._special_values.copy())
        data.dict["_data"] = self.dict
        data.varhistory = self.varhistory.copy()
        data.varhistory.datasmart = data

@@ -845,12 +722,6 @@ class DataSmart(MutableMapping):

        data._tracking = self._tracking

        data.overrides = None
        data.overridevars = copy.copy(self.overridevars)
        # Should really be a deepcopy but has heavy overhead.
        # Instead, we're careful with writes.
        data.overridedata = copy.copy(self.overridedata)

        return data

    def expandVarref(self, variable, parents=False):

@@ -876,7 +747,6 @@ class DataSmart(MutableMapping):

    def __iter__(self):
        deleted = set()
        overrides = set()
        def keylist(d):
            klist = set()
            for key in d:

@@ -884,8 +754,6 @@ class DataSmart(MutableMapping):
                    continue
                if key in deleted:
                    continue
                if key in overrides:
                    continue
                if not d[key]:
                    deleted.add(key)
                    continue

@@ -896,21 +764,9 @@ class DataSmart(MutableMapping):

            return klist

        self.need_overrides()
        for var in self.overridedata:
            for (r, o) in self.overridedata[var]:
                if o in self.overridesset:
                    overrides.add(var)
                elif "_" in o:
                    if set(o.split("_")).issubset(self.overridesset):
                        overrides.add(var)

        for k in keylist(self.dict):
            yield k

        for k in overrides:
            yield k

    def __len__(self):
        return len(frozenset(self))

@@ -69,14 +69,9 @@ _ui_handler_seq = 0
_event_handler_map = {}
_catchall_handlers = {}
_eventfilter = None
_uiready = False

def execute_handler(name, handler, event, d):
    event.data = d
    addedd = False
    if 'd' not in __builtins__:
        __builtins__['d'] = d
        addedd = True
    try:
        ret = handler(event)
    except (bb.parse.SkipRecipe, bb.BBHandledException):

@@ -92,8 +87,6 @@ def execute_handler(name, handler, event, d):
        raise
    finally:
        del event.data
        if addedd:
            del __builtins__['d']

def fire_class_handlers(event, d):
    if isinstance(event, logging.LogRecord):

@@ -114,7 +107,7 @@ def print_ui_queue():
    """If we're exiting before a UI has been spawned, display any queued
    LogRecords to the console."""
    logger = logging.getLogger("BitBake")
    if not _uiready:
    if not _ui_handlers:
        from bb.msg import BBLogFormatter
        console = logging.StreamHandler(sys.stdout)
        console.setFormatter(BBLogFormatter("%(levelname)s: %(message)s"))

@@ -136,7 +129,7 @@ def print_ui_queue():
            logger.handle(event)

def fire_ui_handlers(event, d):
    if not _uiready:
    if not _ui_handlers:
        # No UI handlers registered yet, queue up the messages
        ui_queue.append(event)
        return

@@ -177,7 +170,7 @@ def fire_from_worker(event, d):
    fire_ui_handlers(event, d)

noop = lambda _: None
def register(name, handler, mask=None):
def register(name, handler, mask=[]):
    """Register an Event handler"""

    # already registered

@@ -220,10 +213,7 @@ def set_eventfilter(func):
    global _eventfilter
    _eventfilter = func

def register_UIHhandler(handler, mainui=False):
    if mainui:
        global _uiready
        _uiready = True
def register_UIHhandler(handler):
    bb.event._ui_handler_seq = bb.event._ui_handler_seq + 1
    _ui_handlers[_ui_handler_seq] = handler
    level, debug_domains = bb.msg.constructLogOptions()

@@ -374,12 +364,11 @@ class BuildStarted(BuildBase, OperationStarted):

class BuildCompleted(BuildBase, OperationCompleted):
    """bbmake build run completed"""
    def __init__(self, total, n, p, failures=0, interrupted=0):
    def __init__(self, total, n, p, failures = 0):
        if not failures:
            OperationCompleted.__init__(self, total, "Building Succeeded")
        else:
            OperationCompleted.__init__(self, total, "Building Failed")
        self._interrupted = interrupted
        BuildBase.__init__(self, n, p, failures)

class DiskFull(Event):

@@ -394,7 +383,7 @@ class DiskFull(Event):
class NoProvider(Event):
    """No Provider for an Event"""

    def __init__(self, item, runtime=False, dependees=None, reasons=None, close_matches=None):
    def __init__(self, item, runtime=False, dependees=None, reasons=[], close_matches=[]):
        Event.__init__(self)
        self._item = item
        self._runtime = runtime

@@ -508,16 +497,6 @@ class TargetsTreeGenerated(Event):
        Event.__init__(self)
        self._model = model

class ReachableStamps(Event):
    """
    An event listing all stamps reachable after parsing
    which the metadata may use to clean up stale data
    """

    def __init__(self, stamps):
        Event.__init__(self)
        self.stamps = stamps

class FilesMatchingFound(Event):
    """
    Event when a list of files matching the supplied pattern has

@@ -61,17 +61,6 @@ class BBFetchException(Exception):
    def __str__(self):
        return self.msg

class UntrustedUrl(BBFetchException):
    """Exception raised when encountering a host not listed in BB_ALLOWED_NETWORKS"""
    def __init__(self, url, message=''):
        if message:
            msg = message
        else:
            msg = "The URL: '%s' is not trusted and cannot be used" % url
        self.url = url
        BBFetchException.__init__(self, msg)
        self.args = (url,)

class MalformedUrl(BBFetchException):
    """Exception raised when encountering an invalid url"""
    def __init__(self, url, message=''):

@@ -464,7 +453,7 @@ def uri_replace(ud, uri_find, uri_replace, replacements, d):
            for k in replacements:
                uri_replace_decoded[loc] = uri_replace_decoded[loc].replace(k, replacements[k])
            #bb.note("%s %s %s" % (regexp, uri_replace_decoded[loc], uri_decoded[loc]))
            result_decoded[loc] = re.sub(regexp, uri_replace_decoded[loc], uri_decoded[loc], 1)
            result_decoded[loc] = re.sub(regexp, uri_replace_decoded[loc], uri_decoded[loc])
            if loc == 2:
                # Handle path manipulations
                basename = None

@@ -628,7 +617,7 @@ def verify_checksum(ud, d, precomputed={}):
    }


def verify_donestamp(ud, d, origud=None):
def verify_donestamp(ud, d):
    """
    Check whether the done stamp file has the right checksums (if the fetch
    method supports them). If it doesn't, delete the done stamp and force

@@ -640,8 +629,7 @@ def verify_donestamp(ud, d):
    if not os.path.exists(ud.donestamp):
        return False

    if (not ud.method.supports_checksum(ud) or
            (origud and not origud.method.supports_checksum(origud))):
    if not ud.method.supports_checksum(ud):
        # done stamp exists, checksums not supported; assume the local file is
        # current
        return True

@@ -721,7 +709,7 @@ def get_autorev(d):
    d.setVar('__BB_DONT_CACHE', '1')
    return "AUTOINC"

def get_srcrev(d, method_name='sortable_revision'):
def get_srcrev(d):
    """
    Return the revision string, usually for use in the version string (PV) of the current package
    Most packages usually only have one SCM so we just pass on the call.

@@ -730,9 +718,6 @@ def get_srcrev(d, method_name='sortable_revision'):

    The idea here is that we put the string "AUTOINC+" into return value if the revisions are not
    incremental, other code is then responsible for turning that into an increasing value (if needed)

    A method_name can be supplied to retrieve an alternatively formatted revision from a fetcher, if
    that fetcher provides a method with the given name and the same signature as sortable_revision.
    """

    scms = []

@@ -746,7 +731,7 @@ def get_srcrev(d, method_name='sortable_revision'):
        raise FetchError("SRCREV was used yet no valid SCM was found in SRC_URI")

    if len(scms) == 1 and len(urldata[scms[0]].names) == 1:
        autoinc, rev = getattr(urldata[scms[0]].method, method_name)(urldata[scms[0]], d, urldata[scms[0]].names[0])
        autoinc, rev = urldata[scms[0]].method.sortable_revision(urldata[scms[0]], d, urldata[scms[0]].names[0])
        if len(rev) > 10:
            rev = rev[:10]
        if autoinc:

@@ -764,7 +749,7 @@ def get_srcrev(d, method_name='sortable_revision'):
    for scm in scms:
        ud = urldata[scm]
        for name in ud.names:
            autoinc, rev = getattr(ud.method, method_name)(ud, d, name)
            autoinc, rev = ud.method.sortable_revision(ud, d, name)
            seenautoinc = seenautoinc or autoinc
            if len(rev) > 10:
                rev = rev[:10]

@@ -778,7 +763,7 @@ def localpath(url, d):
    fetcher = bb.fetch2.Fetch([url], d)
    return fetcher.localpath(url)

def runfetchcmd(cmd, d, quiet=False, cleanup=None):
def runfetchcmd(cmd, d, quiet = False, cleanup = []):
    """
    Run cmd returning the command output
    Raise an error if interrupted or cmd fails

@@ -798,14 +783,9 @@ def runfetchcmd(cmd, d, quiet=False, cleanup=None):
                  'NO_PROXY', 'no_proxy',
                  'ALL_PROXY', 'all_proxy',
                  'GIT_PROXY_COMMAND',
                  'GIT_SSL_CAINFO',
                  'GIT_SMART_HTTP',
                  'SSH_AUTH_SOCK', 'SSH_AGENT_PID',
                  'SOCKS5_USER', 'SOCKS5_PASSWD']

    if not cleanup:
        cleanup = []

    for var in exportvars:
        val = d.getVar(var, True)
        if val:

@@ -862,7 +842,7 @@ def build_mirroruris(origud, mirrors, ld):
    replacements["BASENAME"] = origud.path.split("/")[-1]
    replacements["MIRRORNAME"] = origud.host.replace(':','.') + origud.path.replace('/', '.').replace('*', '.')

    def adduri(ud, uris, uds, mirrors):
    def adduri(ud, uris, uds):
        for line in mirrors:
            try:
                (find, replace) = line

@@ -871,17 +851,6 @@ def build_mirroruris(origud, mirrors, ld):
            newuri = uri_replace(ud, find, replace, replacements, ld)
            if not newuri or newuri in uris or newuri == origud.url:
                continue

            if not trusted_network(ld, newuri):
                logger.debug(1, "Mirror %s not in the list of trusted networks, skipping" % (newuri))
                continue

            # Create a local copy of the mirrors minus the current line
            # this will prevent us from recursively processing the same line
            # as well as indirect recursion A -> B -> C -> A
            localmirrors = list(mirrors)
            localmirrors.remove(line)

            try:
                newud = FetchData(newuri, ld)
                newud.setup_localpath(ld)

@@ -889,18 +858,16 @@ def build_mirroruris(origud, mirrors, ld):
                logger.debug(1, "Mirror fetch failure for url %s (original url: %s)" % (newuri, origud.url))
                logger.debug(1, str(e))
                try:
                    # setup_localpath of file:// urls may fail, we should still see
                    # if mirrors of the url exist
                    adduri(newud, uris, uds, localmirrors)
                    ud.method.clean(ud, ld)
                except UnboundLocalError:
                    pass
                continue
            uris.append(newuri)
            uds.append(newud)

            adduri(newud, uris, uds, localmirrors)
            adduri(newud, uris, uds)

    adduri(origud, uris, uds, mirrors)
    adduri(origud, uris, uds)

    return uris, uds
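
# Editor's illustration (example mirror configuration, not from this diff):
# the mirrors argument is a list of (find, replace) regex pairs built from
# metadata such as
#   PREMIRRORS_prepend = "git://.*/.* http://downloads.example.com/mirror/ \n"
# and each rewritten URI produced by uri_replace() above is tried in turn.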

@@ -917,19 +884,19 @@ def rename_bad_checksum(ud, suffix):
    bb.utils.movefile(ud.localpath, new_localpath)


def try_mirror_url(fetch, origud, ud, ld, check = False):
def try_mirror_url(origud, ud, ld, check = False):
    # Return of None or a value means we're finished
    # False means try another url
    try:
        if check:
            found = ud.method.checkstatus(fetch, ud, ld)
            found = ud.method.checkstatus(ud, ld)
            if found:
                return found
            return False

        os.chdir(ld.getVar("DL_DIR", True))

        if not verify_donestamp(ud, ld, origud) or ud.method.need_update(ud, ld):
        if not verify_donestamp(ud, ld) or ud.method.need_update(ud, ld):
            ud.method.download(ud, ld)
            if hasattr(ud.method,"build_mirror_data"):
                ud.method.build_mirror_data(ud, ld)

@@ -955,7 +922,7 @@ def try_mirror_url(fetch, origud, ud, ld, check = False):
                    origud.method.download(origud, ld)
                    if hasattr(origud.method,"build_mirror_data"):
                        origud.method.build_mirror_data(origud, ld)
                return origud.localpath
                return ud.localpath
            # Otherwise the result is a local file:// and we symlink to it
            if not os.path.exists(origud.localpath):
                if os.path.islink(origud.localpath):

@@ -985,7 +952,7 @@ def try_mirror_url(fetch, origud, ud, ld, check = False):
            pass
        return False

def try_mirrors(fetch, d, origud, mirrors, check = False):
def try_mirrors(d, origud, mirrors, check = False):
    """
    Try to use a mirrored version of the sources.
    This method will be automatically called before the fetchers go.

@@ -999,46 +966,11 @@ def try_mirrors(fetch, d, origud, mirrors, check = False):
    uris, uds = build_mirroruris(origud, mirrors, ld)

    for index, uri in enumerate(uris):
        ret = try_mirror_url(fetch, origud, uds[index], ld, check)
        ret = try_mirror_url(origud, uds[index], ld, check)
        if ret != False:
            return ret
    return None

def trusted_network(d, url):
    """
    Use a trusted url during download if networking is enabled and
    BB_ALLOWED_NETWORKS is set globally or for a specific recipe.
    Note: modifies SRC_URI & mirrors.
    """
    if d.getVar('BB_NO_NETWORK', True) == "1":
        return True

    pkgname = d.expand(d.getVar('PN', False))
    trusted_hosts = d.getVarFlag('BB_ALLOWED_NETWORKS', pkgname)

    if not trusted_hosts:
        trusted_hosts = d.getVar('BB_ALLOWED_NETWORKS', True)

    # Not enabled.
    if not trusted_hosts:
        return True

    scheme, network, path, user, passwd, param = decodeurl(url)

    if not network:
        return True

    network = network.lower()

    for host in trusted_hosts.split(" "):
        host = host.lower()
        if host.startswith("*.") and ("." + network).endswith(host[1:]):
            return True
        if host == network:
            return True

    return False
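
# Example usage (editor's sketch, not from this diff): with
#   BB_ALLOWED_NETWORKS = "*.gnu.org git.yoctoproject.org"
# ftp.gnu.org and git.yoctoproject.org pass the checks above, while any other
# host fails them and the caller raises UntrustedUrl for that fetch.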

def srcrev_internal_helper(ud, d, name):
    """
    Return:

@@ -1277,7 +1209,7 @@ class FetchData(object):
class FetchMethod(object):
    """Base class for 'fetch'ing data"""

    def __init__(self, urls=None):
    def __init__(self, urls = []):
        self.urls = []

    def supports(self, urldata, d):

@@ -1483,7 +1415,7 @@ class FetchMethod(object):
        """
        return True

    def checkstatus(self, fetch, urldata, d):
    def checkstatus(self, urldata, d):
        """
        Check the status of a URL
        Assumes localpath was called first

@@ -1515,7 +1447,7 @@ class FetchMethod(object):
        return "%s-%s" % (key, d.getVar("PN", True) or "")

class Fetch(object):
    def __init__(self, urls, d, cache = True, localonly = False, connection_cache = None):
    def __init__(self, urls, d, cache = True, localonly = False):
        if localonly and cache:
            raise Exception("bb.fetch2.Fetch.__init__: cannot set cache and localonly at same time")

@@ -1524,7 +1456,6 @@ class Fetch(object):
        self.urls = urls
        self.d = d
        self.ud = {}
        self.connection_cache = connection_cache

        fn = d.getVar('FILE', True)
        if cache and fn and fn in urldata_cache:

@@ -1562,11 +1493,11 @@ class Fetch(object):

        return local

    def download(self, urls=None):
    def download(self, urls = []):
        """
        Fetch all urls
        """
        if not urls:
        if len(urls) == 0:
            urls = self.urls

        network = self.d.getVar("BB_NO_NETWORK", True)

@@ -1588,7 +1519,7 @@ class Fetch(object):
                elif m.try_premirror(ud, self.d):
                    logger.debug(1, "Trying PREMIRRORS")
                    mirrors = mirror_from_string(self.d.getVar('PREMIRRORS', True))
                    localpath = try_mirrors(self, self.d, ud, mirrors, False)
                    localpath = try_mirrors(self.d, ud, mirrors, False)

                if premirroronly:
                    self.d.setVar("BB_NO_NETWORK", "1")

@@ -1596,11 +1527,8 @@ class Fetch(object):
                os.chdir(self.d.getVar("DL_DIR", True))

                firsterr = None
                verified_stamp = verify_donestamp(ud, self.d)
                if not localpath and (not verified_stamp or m.need_update(ud, self.d)):
                if not localpath and ((not verify_donestamp(ud, self.d)) or m.need_update(ud, self.d)):
                    try:
                        if not trusted_network(self.d, ud.url):
                            raise UntrustedUrl(ud.url)
                        logger.debug(1, "Trying Upstream")
                        m.download(ud, self.d)
                        if hasattr(m, "build_mirror_data"):

@@ -1625,11 +1553,10 @@ class Fetch(object):
                        logger.debug(1, str(e))
                        firsterr = e
                        # Remove any incomplete fetch
                        if not verified_stamp:
                            m.clean(ud, self.d)
                        m.clean(ud, self.d)
                        logger.debug(1, "Trying MIRRORS")
                        mirrors = mirror_from_string(self.d.getVar('MIRRORS', True))
                        localpath = try_mirrors(self, self.d, ud, mirrors)
                        localpath = try_mirrors(self.d, ud, mirrors)

                if not localpath or ((not os.path.exists(localpath)) and localpath.find("*") == -1):
                    if firsterr:

@@ -1646,12 +1573,12 @@ class Fetch(object):
            finally:
                bb.utils.unlockfile(lf)

    def checkstatus(self, urls=None):
    def checkstatus(self, urls = []):
        """
        Check all urls exist upstream
        """

        if not urls:
        if len(urls) == 0:
            urls = self.urls

        for u in urls:

@@ -1661,25 +1588,25 @@ class Fetch(object):
            logger.debug(1, "Testing URL %s", u)
            # First try checking uri, u, from PREMIRRORS
            mirrors = mirror_from_string(self.d.getVar('PREMIRRORS', True))
            ret = try_mirrors(self, self.d, ud, mirrors, True)
            ret = try_mirrors(self.d, ud, mirrors, True)
            if not ret:
                # Next try checking from the original uri, u
                try:
                    ret = m.checkstatus(self, ud, self.d)
                    ret = m.checkstatus(ud, self.d)
                except:
                    # Finally, try checking uri, u, from MIRRORS
                    mirrors = mirror_from_string(self.d.getVar('MIRRORS', True))
                    ret = try_mirrors(self, self.d, ud, mirrors, True)
                    ret = try_mirrors(self.d, ud, mirrors, True)

            if not ret:
                raise FetchError("URL %s doesn't work" % u, u)

    def unpack(self, root, urls=None):
    def unpack(self, root, urls = []):
        """
        Check all urls exist upstream
        """

        if not urls:
        if len(urls) == 0:
            urls = self.urls

        for u in urls:

@@ -1697,12 +1624,12 @@ class Fetch(object):
            if ud.lockfile:
                bb.utils.unlockfile(lf)

    def clean(self, urls=None):
    def clean(self, urls = []):
        """
        Clean files that the fetcher gets or places
        """

        if not urls:
        if len(urls) == 0:
            urls = self.urls

        for url in urls:

@@ -1724,42 +1651,6 @@ class Fetch(object):
            if ud.lockfile:
                bb.utils.unlockfile(lf)

class FetchConnectionCache(object):
    """
    A class which represents a container for socket connections.
    """
    def __init__(self):
        self.cache = {}

    def get_connection_name(self, host, port):
        return host + ':' + str(port)

    def add_connection(self, host, port, connection):
        cn = self.get_connection_name(host, port)

        if cn not in self.cache:
            self.cache[cn] = connection

    def get_connection(self, host, port):
        connection = None

        cn = self.get_connection_name(host, port)
        if cn in self.cache:
            connection = self.cache[cn]

        return connection

    def remove_connection(self, host, port):
        cn = self.get_connection_name(host, port)
        if cn in self.cache:
            self.cache[cn].close()
            del self.cache[cn]

    def close_connections(self):
        for cn in self.cache.keys():
            self.cache[cn].close()
            del self.cache[cn]

from . import cvs
from . import git
from . import gitsm

@@ -9,7 +9,7 @@ Usage in the recipe:

    SRC_URI = "ccrc://cc.example.org/ccrc;vob=/example_vob;module=/example_module"
    SRCREV = "EXAMPLE_CLEARCASE_TAG"
    PV = "${@d.getVar("SRCREV", False).replace("/", "+")}"
    PV = "${@d.getVar("SRCREV").replace("/", "+")}"

The fetcher uses the rcleartool or cleartool remote client, depending on which one is available.

@@ -113,7 +113,7 @@ class ClearCase(FetchMethod):
        if data.getVar("SRCREV", d, True) == "INVALID":
            raise FetchError("Set a valid SRCREV for the clearcase fetcher in your recipe, e.g. SRCREV = \"/main/LATEST\" or any other label of your choice.")

        ud.label = d.getVar("SRCREV", False)
        ud.label = d.getVar("SRCREV")
        ud.customspec = d.getVar("CCASE_CUSTOM_CONFIG_SPEC", True)

        ud.server = "%s://%s%s" % (ud.proto, ud.host, ud.path)

@@ -66,7 +66,6 @@ Supported SRC_URI options are:
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import errno
import os
import re
import bb

@@ -138,10 +137,7 @@ class Git(FetchMethod):
                ud.unresolvedrev[name] = ud.revisions[name]
                ud.revisions[name] = self.latest_revision(ud, d, name)

        gitsrcname = '%s%s' % (ud.host.replace(':', '.'), ud.path.replace('/', '.').replace('*', '.'))
        if gitsrcname.startswith('.'):
            gitsrcname = gitsrcname[1:]

        gitsrcname = '%s%s' % (ud.host.replace(':','.'), ud.path.replace('/', '.').replace('*', '.'))
        # for rebaseable git repo, it is necessary to keep mirror tar ball
        # per revision, so that even the revision disappears from the
        # upstream repo in the future, the mirror will remain intact and still

@@ -182,13 +178,20 @@ class Git(FetchMethod):
    def download(self, ud, d):
        """Fetch url"""

        if ud.user:
            username = ud.user + '@'
        else:
            username = ""

        ud.repochanged = not os.path.exists(ud.fullmirror)

        # If the checkout doesn't exist and the mirror tarball does, extract it
        if not os.path.exists(ud.clonedir) and os.path.exists(ud.fullmirror):
            bb.utils.mkdirhier(ud.clonedir)
            os.chdir(ud.clonedir)
            runfetchcmd("tar -xzf %s" % (ud.fullmirror), d)

        repourl = self._get_repo_url(ud)
        repourl = "%s://%s%s%s" % (ud.proto, username, ud.host, ud.path)

        # If the repo still doesn't exist, fallback to cloning it
        if not os.path.exists(ud.clonedir):

@@ -219,11 +222,7 @@ class Git(FetchMethod):
            runfetchcmd(fetch_cmd, d)
            runfetchcmd("%s prune-packed" % ud.basecmd, d)
            runfetchcmd("%s pack-redundant --all | xargs -r rm" % ud.basecmd, d)
            try:
                os.unlink(ud.fullmirror)
            except OSError as exc:
                if exc.errno != errno.ENOENT:
                    raise
            ud.repochanged = True
        os.chdir(ud.clonedir)
        for name in ud.names:
            if not self._contains_ref(ud, d, name):

@@ -231,7 +230,7 @@ class Git(FetchMethod):

    def build_mirror_data(self, ud, d):
        # Generate a mirror tarball if needed
        if ud.write_tarballs and not os.path.exists(ud.fullmirror):
        if ud.write_tarballs and (ud.repochanged or not os.path.exists(ud.fullmirror)):
            # it's possible that this symlink points to read-only filesystem with PREMIRROR
            if os.path.islink(ud.fullmirror):
                os.unlink(ud.fullmirror)

@@ -278,22 +277,13 @@ class Git(FetchMethod):
            clonedir = indirectiondir

        runfetchcmd("%s clone %s %s/ %s" % (ud.basecmd, cloneflags, clonedir, destdir), d)
        os.chdir(destdir)
        repourl = self._get_repo_url(ud)
        runfetchcmd("%s remote set-url origin %s" % (ud.basecmd, repourl), d)
        if not ud.nocheckout:
            os.chdir(destdir)
            if subdir != "":
                runfetchcmd("%s read-tree %s%s" % (ud.basecmd, ud.revisions[ud.names[0]], readpathspec), d)
                runfetchcmd("%s checkout-index -q -f -a" % ud.basecmd, d)
            elif not ud.nobranch:
                branchname = ud.branches[ud.names[0]]
                runfetchcmd("%s checkout -B %s %s" % (ud.basecmd, branchname, \
                            ud.revisions[ud.names[0]]), d)
                runfetchcmd("%s branch --set-upstream %s origin/%s" % (ud.basecmd, branchname, \
                            branchname), d)
            else:
                runfetchcmd("%s checkout %s" % (ud.basecmd, ud.revisions[ud.names[0]]), d)

        return True

    def clean(self, ud, d):

@@ -322,16 +312,6 @@ class Git(FetchMethod):
            raise bb.fetch2.FetchError("The command '%s' gave output with more than one line unexpectedly, output: '%s'" % (cmd, output))
        return output.split()[0] != "0"

    def _get_repo_url(self, ud):
        """
        Return the repository URL
        """
        if ud.user:
            username = ud.user + '@'
        else:
            username = ""
        return "%s://%s%s%s" % (ud.proto, username, ud.host, ud.path)

    def _revision_key(self, ud, d, name):
        """
        Return a unique key for the url

@@ -342,9 +322,13 @@ class Git(FetchMethod):
        """
        Run git ls-remote with the specified search string
        """
        repourl = self._get_repo_url(ud)
        cmd = "%s ls-remote %s %s" % \
              (ud.basecmd, repourl, search)
        if ud.user:
            username = ud.user + '@'
        else:
            username = ""

        cmd = "%s ls-remote %s://%s%s%s %s" % \
              (ud.basecmd, ud.proto, username, ud.host, ud.path, search)
        if ud.proto.lower() != 'file':
            bb.fetch2.check_network_access(d, cmd)
        output = runfetchcmd(cmd, d, True)

@@ -368,8 +352,7 @@ class Git(FetchMethod):
        for l in output.split('\n'):
            if s in l:
                return l.split()[0]
        raise bb.fetch2.FetchError("Unable to resolve '%s' in upstream git repository in git ls-remote output for %s" % \
                (ud.unresolvedrev[name], ud.host+ud.path))
        raise bb.fetch2.FetchError("Unable to resolve '%s' in upstream git repository in git ls-remote output" % ud.unresolvedrev[name])

    def latest_versionstring(self, ud, d):
        """

@@ -377,28 +360,25 @@ class Git(FetchMethod):
        by searching through the tags output of ls-remote, comparing
        versions and returning the highest match.
        """
        pupver = ('', '')

        verstring = ""
        tagregex = re.compile(d.getVar('GITTAGREGEX', True) or "(?P<pver>([0-9][\.|_]?)+)")
        try:
            output = self._lsremote(ud, d, "refs/tags/*")
            output = self._lsremote(ud, d, "refs/tags/*^{}")
        except bb.fetch2.FetchError or bb.fetch2.NetworkAccess:
            return pupver
            return ""

        verstring = ""
        revision = ""
        for line in output.split("\n"):
            if not line:
                break

            tag_head = line.split("/")[-1]
            line = line.split("/")[-1]
            # Ignore non-released branches
            m = re.search("(alpha|beta|rc|final)+", tag_head)
            m = re.search("(alpha|beta|rc|final)+", line)
            if m:
                continue

            # search for version in the line
            tag = tagregex.search(tag_head)
            tag = tagregex.search(line)
            if tag == None:
                continue

@@ -407,42 +387,14 @@ class Git(FetchMethod):

            if verstring and bb.utils.vercmp(("0", tag, ""), ("0", verstring, "")) < 0:
                continue

            verstring = tag
            revision = line.split()[0]
            pupver = (verstring, revision)

        return pupver
        return verstring
|
||||
|
||||
def _build_revision(self, ud, d, name):
|
||||
return ud.revisions[name]
|
||||
|
||||
def gitpkgv_revision(self, ud, d, name):
|
||||
"""
|
||||
Return a sortable revision number by counting commits in the history
|
||||
Based on gitpkgv.bblass in meta-openembedded
|
||||
"""
|
||||
rev = self._build_revision(ud, d, name)
|
||||
localpath = ud.localpath
|
||||
rev_file = os.path.join(localpath, "oe-gitpkgv_" + rev)
|
||||
if not os.path.exists(localpath):
|
||||
commits = None
|
||||
else:
|
||||
if not os.path.exists(rev_file) or not os.path.getsize(rev_file):
|
||||
from pipes import quote
|
||||
commits = bb.fetch2.runfetchcmd(
|
||||
"git rev-list %s -- | wc -l" % (quote(rev)),
|
||||
d, quiet=True).strip().lstrip('0')
|
||||
if commits:
|
||||
open(rev_file, "w").write("%d\n" % int(commits))
|
||||
else:
|
||||
commits = open(rev_file, "r").readline(128).strip()
|
||||
if commits:
|
||||
return False, "%s+%s" % (commits, rev[:7])
|
||||
else:
|
||||
return True, str(rev)
|
||||
|
||||
def checkstatus(self, fetch, ud, d):
|
||||
def checkstatus(self, ud, d):
|
||||
try:
|
||||
self._lsremote(ud, d, "")
|
||||
return True
|
||||
|
||||
@@ -109,7 +109,6 @@ class GitSM(Git):
runfetchcmd("sed " + gitdir + "/config -i -e 's/bare.*=.*true/bare = false/'", d)
os.chdir(tmpclonedir)
runfetchcmd(ud.basecmd + " reset --hard", d)
runfetchcmd(ud.basecmd + " checkout " + ud.revisions[ud.names[0]], d)
runfetchcmd(ud.basecmd + " submodule init", d)
runfetchcmd(ud.basecmd + " submodule update", d)
self._set_relative_paths(tmpclonedir)

@@ -28,7 +28,6 @@ import os
import sys
import logging
import bb
import errno
from bb import data
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
@@ -44,13 +43,6 @@ class Hg(FetchMethod):
"""
return ud.type in ['hg']

def supports_checksum(self, urldata):
"""
Don't require checksums for local archives created from
repository checkouts.
"""
return False

def urldata_init(self, ud, d):
"""
init hg specific variable within url data
@@ -60,12 +52,10 @@ class Hg(FetchMethod):

ud.module = ud.parm["module"]

if 'protocol' in ud.parm:
ud.proto = ud.parm['protocol']
elif not ud.host:
ud.proto = 'file'
else:
ud.proto = "hg"
# Create paths to mercurial checkouts
relpath = self._strip_leading_slashes(ud.path)
ud.pkgdir = os.path.join(data.expand('${HGDIR}', d), ud.host, relpath)
ud.moddir = os.path.join(ud.pkgdir, ud.module)

ud.setup_revisons(d)

@@ -74,19 +64,7 @@ class Hg(FetchMethod):
elif not ud.revision:
ud.revision = self.latest_revision(ud, d)

# Create paths to mercurial checkouts
hgsrcname = '%s_%s_%s' % (ud.module.replace('/', '.'), \
ud.host, ud.path.replace('/', '.'))
ud.mirrortarball = 'hg_%s.tar.gz' % hgsrcname
ud.fullmirror = os.path.join(d.getVar("DL_DIR", True), ud.mirrortarball)

hgdir = d.getVar("HGDIR", True) or (d.getVar("DL_DIR", True) + "/hg/")
ud.pkgdir = os.path.join(hgdir, hgsrcname)
ud.moddir = os.path.join(ud.pkgdir, ud.module)
ud.localfile = ud.moddir
ud.basecmd = data.getVar("FETCHCMD_hg", d, True) or "/usr/bin/env hg"

ud.write_tarballs = d.getVar("BB_GENERATE_MIRROR_TARBALLS", True)
ud.localfile = data.expand('%s_%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision), d)

def need_update(self, ud, d):
revTag = ud.parm.get('rev', 'tip')
@@ -96,21 +74,14 @@ class Hg(FetchMethod):
return True
return False

def try_premirror(self, ud, d):
# If we don't do this, updating an existing checkout with only premirrors
# is not possible
if d.getVar("BB_FETCH_PREMIRRORONLY", True) is not None:
return True
if os.path.exists(ud.moddir):
return False
return True

def _buildhgcommand(self, ud, d, command):
"""
Build up an hg commandline based on ud
command is "fetch", "update", "info"
"""

basecmd = data.expand('${FETCHCMD_hg}', d)

proto = ud.parm.get('protocol', 'http')

host = ud.host
@@ -127,7 +98,7 @@ class Hg(FetchMethod):
hgroot = ud.user + "@" + host + ud.path

if command == "info":
return "%s identify -i %s://%s/%s" % (ud.basecmd, proto, hgroot, ud.module)
return "%s identify -i %s://%s/%s" % (basecmd, proto, hgroot, ud.module)

options = [];

@@ -140,22 +111,22 @@ class Hg(FetchMethod):

if command == "fetch":
if ud.user and ud.pswd:
cmd = "%s --config auth.default.prefix=* --config auth.default.username=%s --config auth.default.password=%s --config \"auth.default.schemes=%s\" clone %s %s://%s/%s %s" % (ud.basecmd, ud.user, ud.pswd, proto, " ".join(options), proto, hgroot, ud.module, ud.module)
cmd = "%s --config auth.default.prefix=* --config auth.default.username=%s --config auth.default.password=%s --config \"auth.default.schemes=%s\" clone %s %s://%s/%s %s" % (basecmd, ud.user, ud.pswd, proto, " ".join(options), proto, hgroot, ud.module, ud.module)
else:
cmd = "%s clone %s %s://%s/%s %s" % (ud.basecmd, " ".join(options), proto, hgroot, ud.module, ud.module)
cmd = "%s clone %s %s://%s/%s %s" % (basecmd, " ".join(options), proto, hgroot, ud.module, ud.module)
elif command == "pull":
# do not pass options list; limiting pull to rev causes the local
# repo not to contain it and immediately following "update" command
# will crash
if ud.user and ud.pswd:
cmd = "%s --config auth.default.prefix=* --config auth.default.username=%s --config auth.default.password=%s --config \"auth.default.schemes=%s\" pull" % (ud.basecmd, ud.user, ud.pswd, proto)
cmd = "%s --config auth.default.prefix=* --config auth.default.username=%s --config auth.default.password=%s --config \"auth.default.schemes=%s\" pull" % (basecmd, ud.user, ud.pswd, proto)
else:
cmd = "%s pull" % (ud.basecmd)
cmd = "%s pull" % (basecmd)
elif command == "update":
if ud.user and ud.pswd:
cmd = "%s --config auth.default.prefix=* --config auth.default.username=%s --config auth.default.password=%s --config \"auth.default.schemes=%s\" update -C %s" % (ud.basecmd, ud.user, ud.pswd, proto, " ".join(options))
cmd = "%s --config auth.default.prefix=* --config auth.default.username=%s --config auth.default.password=%s --config \"auth.default.schemes=%s\" update -C %s" % (basecmd, ud.user, ud.pswd, proto, " ".join(options))
else:
cmd = "%s update -C %s" % (ud.basecmd, " ".join(options))
cmd = "%s update -C %s" % (basecmd, " ".join(options))
else:
raise FetchError("Invalid hg command %s" % command, ud.url)

@@ -166,36 +137,16 @@ class Hg(FetchMethod):

logger.debug(2, "Fetch: checking for module directory '" + ud.moddir + "'")

# If the checkout doesn't exist and the mirror tarball does, extract it
if not os.path.exists(ud.pkgdir) and os.path.exists(ud.fullmirror):
bb.utils.mkdirhier(ud.pkgdir)
os.chdir(ud.pkgdir)
runfetchcmd("tar -xzf %s" % (ud.fullmirror), d)

if os.access(os.path.join(ud.moddir, '.hg'), os.R_OK):
# Found the source, check whether need pull
updatecmd = self._buildhgcommand(ud, d, "update")
updatecmd = self._buildhgcommand(ud, d, "pull")
logger.info("Update " + ud.url)
# update sources there
os.chdir(ud.moddir)
logger.debug(1, "Running %s", updatecmd)
try:
runfetchcmd(updatecmd, d)
except bb.fetch2.FetchError:
# Running pull in the repo
pullcmd = self._buildhgcommand(ud, d, "pull")
logger.info("Pulling " + ud.url)
# update sources there
os.chdir(ud.moddir)
logger.debug(1, "Running %s", pullcmd)
bb.fetch2.check_network_access(d, pullcmd, ud.url)
runfetchcmd(pullcmd, d)
try:
os.unlink(ud.fullmirror)
except OSError as exc:
if exc.errno != errno.ENOENT:
raise
bb.fetch2.check_network_access(d, updatecmd, ud.url)
runfetchcmd(updatecmd, d)

# No source found, clone it.
if not os.path.exists(ud.moddir):
else:
fetchcmd = self._buildhgcommand(ud, d, "fetch")
logger.info("Fetch " + ud.url)
# check out sources there
@@ -212,12 +163,14 @@ class Hg(FetchMethod):
logger.debug(1, "Running %s", updatecmd)
runfetchcmd(updatecmd, d)

def clean(self, ud, d):
""" Clean the hg dir """
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
tar_flags = ""
else:
tar_flags = "--exclude '.hg' --exclude '.hgrags'"

bb.utils.remove(ud.localpath, True)
bb.utils.remove(ud.fullmirror)
bb.utils.remove(ud.fullmirror + ".done")
os.chdir(ud.pkgdir)
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, ud.module), d, cleanup = [ud.localpath])

def supports_srcrev(self):
return True
@@ -238,41 +191,3 @@ class Hg(FetchMethod):
Return a unique key for the url
"""
return "hg:" + ud.moddir

def build_mirror_data(self, ud, d):
# Generate a mirror tarball if needed
if ud.write_tarballs == "1" and not os.path.exists(ud.fullmirror):
# it's possible that this symlink points to read-only filesystem with PREMIRROR
if os.path.islink(ud.fullmirror):
os.unlink(ud.fullmirror)

os.chdir(ud.pkgdir)
logger.info("Creating tarball of hg repository")
runfetchcmd("tar -czf %s %s" % (ud.fullmirror, ud.module), d)
runfetchcmd("touch %s.done" % (ud.fullmirror), d)

def localpath(self, ud, d):
return ud.pkgdir

def unpack(self, ud, destdir, d):
"""
Make a local clone or export for the url
"""

revflag = "-r %s" % ud.revision
subdir = ud.parm.get("destsuffix", ud.module)
codir = "%s/%s" % (destdir, subdir)

scmdata = ud.parm.get("scmdata", "")
if scmdata != "nokeep":
if not os.access(os.path.join(codir, '.hg'), os.R_OK):
logger.debug(2, "Unpack: creating new hg repository in '" + codir + "'")
runfetchcmd("%s init %s" % (ud.basecmd, codir), d)
logger.debug(2, "Unpack: updating source in '" + codir + "'")
os.chdir(codir)
runfetchcmd("%s pull %s" % (ud.basecmd, ud.moddir), d)
runfetchcmd("%s up -C %s" % (ud.basecmd, revflag), d)
else:
logger.debug(2, "Unpack: extracting source to '" + codir + "'")
os.chdir(ud.moddir)
runfetchcmd("%s archive -t files %s %s" % (ud.basecmd, revflag, codir), d)

@@ -112,7 +112,7 @@ class Local(FetchMethod):

return True

def checkstatus(self, fetch, urldata, d):
def checkstatus(self, urldata, d):
"""
Check the status of the url
"""

@@ -48,7 +48,7 @@ class Perforce(FetchMethod):
(user, pswd, host, port) = path.split('@')[0].split(":")
path = path.split('@')[1]
else:
(host, port) = d.getVar('P4PORT', False).split(':')
(host, port) = d.getVar('P4PORT').split(':')
user = ""
pswd = ""

@@ -123,7 +123,7 @@ class Perforce(FetchMethod):
if depot.find('/...') != -1:
path = depot[:depot.find('/...')]
else:
path = depot[:depot.rfind('/')]
path = depot

module = parm.get('module', os.path.basename(path))

@@ -99,7 +99,7 @@ class SFTP(FetchMethod):
"""Fetch urls"""

urlo = URI(ud.url)
basecmd = 'sftp -oBatchMode=yes'
basecmd = 'sftp -oPasswordAuthentication=no'
port = ''
if urlo.port:
port = '-P %d' % urlo.port

@@ -54,11 +54,6 @@ class Svn(FetchMethod):

ud.module = ud.parm["module"]

if not "path_spec" in ud.parm:
ud.path_spec = ud.module
else:
ud.path_spec = ud.parm["path_spec"]

# Create paths to svn checkouts
relpath = self._strip_leading_slashes(ud.path)
ud.pkgdir = os.path.join(data.expand('${SVNDIR}', d), ud.host, relpath)
@@ -107,7 +102,7 @@ class Svn(FetchMethod):

if command == "fetch":
transportuser = ud.parm.get("transportuser", "")
svncmd = "%s co %s %s://%s%s/%s%s %s" % (ud.basecmd, " ".join(options), proto, transportuser, svnroot, ud.module, suffix, ud.path_spec)
svncmd = "%s co %s %s://%s%s/%s%s %s" % (ud.basecmd, " ".join(options), proto, transportuser, svnroot, ud.module, suffix, ud.module)
elif command == "update":
svncmd = "%s update %s" % (ud.basecmd, " ".join(options))
else:
@@ -154,7 +149,7 @@ class Svn(FetchMethod):

os.chdir(ud.pkgdir)
# tar them up to a defined filename
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, ud.path_spec), d, cleanup = [ud.localpath])
runfetchcmd("tar %s -czf %s %s" % (tar_flags, ud.localpath, ud.module), d, cleanup = [ud.localpath])

def clean(self, ud, d):
""" Clean SVN specific files and dirs """

@@ -100,173 +100,13 @@ class Wget(FetchMethod):

return True

def checkstatus(self, fetch, ud, d):
import urllib2, socket, httplib
from urllib import addinfourl
from bb.fetch2 import FetchConnectionCache

class HTTPConnectionCache(httplib.HTTPConnection):
if fetch.connection_cache:
def connect(self):
"""Connect to the host and port specified in __init__."""

sock = fetch.connection_cache.get_connection(self.host, self.port)
if sock:
self.sock = sock
else:
self.sock = socket.create_connection((self.host, self.port),
self.timeout, self.source_address)
fetch.connection_cache.add_connection(self.host, self.port, self.sock)

if self._tunnel_host:
self._tunnel()

class CacheHTTPHandler(urllib2.HTTPHandler):
def http_open(self, req):
return self.do_open(HTTPConnectionCache, req)

def do_open(self, http_class, req):
"""Return an addinfourl object for the request, using http_class.

http_class must implement the HTTPConnection API from httplib.
The addinfourl return value is a file-like object. It also
has methods and attributes including:
- info(): return a mimetools.Message object for the headers
- geturl(): return the original request URL
- code: HTTP status code
"""
host = req.get_host()
if not host:
raise urllib2.URLError('no host given')

h = http_class(host, timeout=req.timeout) # will parse host:port
h.set_debuglevel(self._debuglevel)

headers = dict(req.unredirected_hdrs)
headers.update(dict((k, v) for k, v in req.headers.items()
if k not in headers))

# We want to make an HTTP/1.1 request, but the addinfourl
# class isn't prepared to deal with a persistent connection.
# It will try to read all remaining data from the socket,
# which will block while the server waits for the next request.
# So make sure the connection gets closed after the (only)
# request.

# Don't close connection when connection_cache is enabled,
if fetch.connection_cache is None:
headers["Connection"] = "close"
else:
headers["Connection"] = "Keep-Alive" # Works for HTTP/1.0

headers = dict(
(name.title(), val) for name, val in headers.items())

if req._tunnel_host:
tunnel_headers = {}
proxy_auth_hdr = "Proxy-Authorization"
if proxy_auth_hdr in headers:
tunnel_headers[proxy_auth_hdr] = headers[proxy_auth_hdr]
# Proxy-Authorization should not be sent to origin
# server.
del headers[proxy_auth_hdr]
h.set_tunnel(req._tunnel_host, headers=tunnel_headers)

try:
h.request(req.get_method(), req.get_selector(), req.data, headers)
except socket.error, err: # XXX what error?
# Don't close connection when cache is enabled.
if fetch.connection_cache is None:
h.close()
raise urllib2.URLError(err)
else:
try:
r = h.getresponse(buffering=True)
except TypeError: # buffering kw not supported
r = h.getresponse()

# Pick apart the HTTPResponse object to get the addinfourl
# object initialized properly.

# Wrap the HTTPResponse object in socket's file object adapter
# for Windows. That adapter calls recv(), so delegate recv()
# to read(). This weird wrapping allows the returned object to
# have readline() and readlines() methods.

# XXX It might be better to extract the read buffering code
# out of socket._fileobject() and into a base class.
r.recv = r.read

# no data, just have to read
r.read()
class fp_dummy(object):
def read(self):
return ""
def readline(self):
return ""
def close(self):
pass

resp = addinfourl(fp_dummy(), r.msg, req.get_full_url())
resp.code = r.status
resp.msg = r.reason

# Close connection when the server requests it.
if fetch.connection_cache is not None:
if 'Connection' in r.msg and r.msg['Connection'] == 'close':
fetch.connection_cache.remove_connection(h.host, h.port)

return resp

def export_proxies(d):
variables = ['http_proxy', 'HTTP_PROXY', 'https_proxy', 'HTTPS_PROXY',
'ftp_proxy', 'FTP_PROXY', 'no_proxy', 'NO_PROXY']
exported = False

for v in variables:
if v in os.environ.keys():
exported = True
else:
v_proxy = d.getVar(v, True)
if v_proxy is not None:
os.environ[v] = v_proxy
exported = True

return exported

def head_method(self):
return "HEAD"

exported_proxies = export_proxies(d)

# XXX: Since Python 2.7.9 ssl cert validation is enabled by default
# see PEP-0476, this causes verification errors on some https servers
# so disable by default.
import ssl
ssl_context = None
if hasattr(ssl, '_create_unverified_context'):
ssl_context = ssl._create_unverified_context()

if exported_proxies == True and ssl_context is not None:
opener = urllib2.build_opener(urllib2.ProxyHandler, CacheHTTPHandler,
urllib2.HTTPSHandler(context=ssl_context))
elif exported_proxies == False and ssl_context is not None:
opener = urllib2.build_opener(CacheHTTPHandler,
urllib2.HTTPSHandler(context=ssl_context))
elif exported_proxies == True and ssl_context is None:
opener = urllib2.build_opener(urllib2.ProxyHandler, CacheHTTPHandler)
else:
opener = urllib2.build_opener(CacheHTTPHandler)

urllib2.Request.get_method = head_method
urllib2.install_opener(opener)
def checkstatus(self, ud, d):

uri = ud.url.split(";")[0]
fetchcmd = self.basecmd + " --spider '%s'" % uri

self._runwget(ud, d, fetchcmd, True)

try:
urllib2.urlopen(uri)
except:
return False
return True

def _parse_path(self, regex, s):
@@ -406,7 +246,7 @@ class Wget(FetchMethod):
version_dir = ['', '', '']
version = ['', '', '']

dirver_regex = re.compile("(\D*)((\d+[\.\-_])+(\d+))")
dirver_regex = re.compile("(\D*)((\d+[\.-_])+(\d+))")
s = dirver_regex.search(dirver)
if s:
version_dir[1] = s.group(2)
@@ -465,7 +305,7 @@ class Wget(FetchMethod):
pn_regex = "(%s|%s|%s)" % (pn_prefix1, pn_prefix2, pn_prefix3)

# match version
pver_regex = "(([A-Z]*\d+[a-zA-Z]*[\.\-_]*)+)"
pver_regex = "(([A-Z]*\d+[a-zA-Z]*[\.-_]*)+)"

# match arch
parch_regex = "-source|_all_"
@@ -507,12 +347,12 @@ class Wget(FetchMethod):
if not re.search("\d+", package):
current_version[1] = re.sub('_', '.', current_version[1])
current_version[1] = re.sub('-', '.', current_version[1])
return (current_version[1], '')
return current_version[1]

package_regex = self._init_regexes(package, ud, d)
if package_regex is None:
bb.warn("latest_versionstring: package %s don't match pattern" % (package))
return ('', '')
return ""
bb.debug(3, "latest_versionstring, regex: %s" % (package_regex.pattern))

uri = ""
@@ -530,12 +370,12 @@ class Wget(FetchMethod):

dirver_pn_regex = re.compile("%s\d?" % (re.escape(pn)))
if not dirver_pn_regex.search(dirver):
return (self._check_latest_version_by_dir(dirver,
package, package_regex, current_version, ud, d), '')
return self._check_latest_version_by_dir(dirver,
package, package_regex, current_version, ud, d)

uri = bb.fetch.encodeurl([ud.type, ud.host, path, ud.user, ud.pswd, {}])
else:
uri = regex_uri

return (self._check_latest_version(uri, package, package_regex,
current_version, ud, d), '')
return self._check_latest_version(uri, package, package_regex,
current_version, ud, d)

@@ -36,74 +36,28 @@ from bb import ui
from bb import server
from bb import cookerdata

__version__ = "1.26.0"
logger = logging.getLogger("BitBake")

class BBMainException(Exception):
class BBMainException(bb.BBHandledException):
pass

def present_options(optionlist):
if len(optionlist) > 1:
return ' or '.join([', '.join(optionlist[:-1]), optionlist[-1]])
else:
return optionlist[0]
def get_ui(config):
if not config.ui:
# modify 'ui' attribute because it is also read by cooker
config.ui = os.environ.get('BITBAKE_UI', 'knotty')

class BitbakeHelpFormatter(optparse.IndentedHelpFormatter):
def format_option(self, option):
# We need to do this here rather than in the text we supply to
# add_option() because we don't want to call list_extension_modules()
# on every execution (since it imports all of the modules)
# Note also that we modify option.help rather than the returned text
# - this is so that we don't have to re-format the text ourselves
if option.dest == 'ui':
valid_uis = list_extension_modules(bb.ui, 'main')
option.help = option.help.replace('@CHOICES@', present_options(valid_uis))
elif option.dest == 'servertype':
valid_server_types = list_extension_modules(bb.server, 'BitBakeServer')
option.help = option.help.replace('@CHOICES@', present_options(valid_server_types))
interface = config.ui

return optparse.IndentedHelpFormatter.format_option(self, option)

def list_extension_modules(pkg, checkattr):
"""
Lists extension modules in a specific Python package
(e.g. UIs, servers). NOTE: Calling this function will import all of the
submodules of the specified module in order to check for the specified
attribute; this can have unusual side-effects. As a result, this should
only be called when displaying help text or error messages.
Parameters:
pkg: previously imported Python package to list
checkattr: attribute to look for in module to determine if it's valid
as the type of extension you are looking for
"""
import pkgutil
pkgdir = os.path.dirname(pkg.__file__)

modules = []
for _, modulename, _ in pkgutil.iter_modules([pkgdir]):
if os.path.isdir(os.path.join(pkgdir, modulename)):
# ignore directories
continue
try:
module = __import__(pkg.__name__, fromlist=[modulename])
except:
# If we can't import it, it's not valid
continue
module_if = getattr(module, modulename)
if getattr(module_if, 'hidden_extension', False):
continue
if not checkattr or hasattr(module_if, checkattr):
modules.append(modulename)
return modules

def import_extension_module(pkg, modulename, checkattr):
try:
# Dynamically load the UI based on the ui name. Although we
# suggest a fixed set this allows you to have flexibility in which
# ones are available.
module = __import__(pkg.__name__, fromlist = [modulename])
return getattr(module, modulename)
module = __import__("bb.ui", fromlist = [interface])
return getattr(module, interface)
except AttributeError:
raise BBMainException('FATAL: Unable to import extension module "%s" from %s. Valid extension modules: %s' % (modulename, pkg.__name__, present_options(list_extension_modules(pkg, checkattr))))
raise BBMainException("FATAL: Invalid user interface '%s' specified.\n"
"Valid interfaces: depexp, goggle, ncurses, hob, knotty [default]." % interface)

# Display bitbake/OE warnings via the BitBake.Warnings logger, ignoring others"""
@@ -129,9 +83,8 @@ class BitBakeConfigParameters(cookerdata.ConfigParameters):

def parseCommandLine(self, argv=sys.argv):
parser = optparse.OptionParser(
formatter = BitbakeHelpFormatter(),
version = "BitBake Build Tool Core version %s" % bb.__version__,
usage = """%prog [options] [recipename/target recipe:do_task ...]
version = "BitBake Build Tool Core version %s, %%prog version %s" % (bb.__version__, __version__),
usage = """%prog [options] [recipename/target ...]

Executes the specified task (default is 'build') for a given set of target recipes (.bb files).
It is assumed there is a conf/bblayers.conf available in cwd or in BBPATH which
@@ -194,15 +147,11 @@ class BitBakeConfigParameters(cookerdata.ConfigParameters):
parser.add_option("-P", "--profile", help = "Profile the command and save reports.",
action = "store_true", dest = "profile", default = False)

env_ui = os.environ.get('BITBAKE_UI', None)
default_ui = env_ui or 'knotty'
# @CHOICES@ is substituted out by BitbakeHelpFormatter above
parser.add_option("-u", "--ui", help = "The user interface to use (@CHOICES@ - default %default).",
action="store", dest="ui", default=default_ui)
parser.add_option("-u", "--ui", help = "The user interface to use (e.g. knotty, hob, depexp).",
action = "store", dest = "ui")

# @CHOICES@ is substituted out by BitbakeHelpFormatter above
parser.add_option("-t", "--servertype", help = "Choose which server type to use (@CHOICES@ - default %default).",
action = "store", dest = "servertype", default = "process")
parser.add_option("-t", "--servertype", help = "Choose which server to use, process or xmlrpc.",
action = "store", dest = "servertype")

parser.add_option("", "--token", help = "Specify the connection token to be used when connecting to a remote server.",
action = "store", dest = "xmlrpctoken")
@@ -331,8 +280,21 @@ def bitbake_main(configParams, configuration):

configuration.setConfigParameters(configParams)

ui_module = import_extension_module(bb.ui, configParams.ui, 'main')
servermodule = import_extension_module(bb.server, configParams.servertype, 'BitBakeServer')
ui_module = get_ui(configParams)

# Server type can be xmlrpc or process currently, if nothing is specified,
# the default server is process
if configParams.servertype:
server_type = configParams.servertype
else:
server_type = 'process'

try:
module = __import__("bb.server", fromlist = [server_type])
servermodule = getattr(module, server_type)
except AttributeError:
raise BBMainException("FATAL: Invalid server type '%s' specified.\n"
"Valid interfaces: xmlrpc, process [default]." % server_type)

if configParams.server_only:
if configParams.servertype != "xmlrpc":
@@ -383,13 +345,6 @@ def bitbake_main(configParams, configuration):
# Collect the feature set for the UI
featureset = getattr(ui_module, "featureSet", [])

if configParams.server_only:
for param in ('prefile', 'postfile'):
value = getattr(configParams, param)
if value:
setattr(configuration, "%s_server" % param, value)
param = "%s_server" % param

if not configParams.remote_server:
# we start a server with a given configuration
server = start_server(servermodule, configParams, configuration, featureset)
@@ -404,6 +359,8 @@ def bitbake_main(configParams, configuration):
try:
server_connection = server.establishConnection(featureset)
except Exception as e:
if configParams.kill_server:
return 0
bb.fatal("Could not connect to server %s: %s" % (configParams.remote_server, str(e)))

# Restore the environment in case the UI needs it

@@ -150,7 +150,7 @@ loggerDefaultVerbose = False
loggerVerboseLogs = False
loggerDefaultDomains = []

def init_msgconfig(verbose, debug, debug_domains=None):
def init_msgconfig(verbose, debug, debug_domains = []):
"""
Set default verbosity and debug levels config the logger
"""
@@ -158,10 +158,7 @@ def init_msgconfig(verbose, debug, debug_domains=None):
bb.msg.loggerDefaultVerbose = verbose
if verbose:
bb.msg.loggerVerboseLogs = True
if debug_domains:
bb.msg.loggerDefaultDomains = debug_domains
else:
bb.msg.loggerDefaultDomains = []
bb.msg.loggerDefaultDomains = debug_domains

def constructLogOptions():
debug = loggerDefaultDebugLevel

@@ -26,10 +26,9 @@ File parsers for the BitBake build tools.

handlers = []

import errno
import logging
import os
import stat
import logging
import bb
import bb.utils
import bb.siggen
@@ -71,12 +70,7 @@ def cached_mtime_noerror(f):
return __mtime_cache[f]

def update_mtime(f):
try:
__mtime_cache[f] = os.stat(f)[stat.ST_MTIME]
except OSError:
if f in __mtime_cache:
del __mtime_cache[f]
return 0
__mtime_cache[f] = os.stat(f)[stat.ST_MTIME]
return __mtime_cache[f]

def update_cache(f):
@@ -87,7 +81,7 @@ def update_cache(f):
def mark_dependency(d, f):
if f.startswith('./'):
f = "%s/%s" % (os.getcwd(), f[2:])
deps = (d.getVar('__depends', False) or [])
deps = (d.getVar('__depends') or [])
s = (f, cached_mtime_noerror(f))
if s not in deps:
deps.append(s)
@@ -95,7 +89,7 @@ def mark_dependency(d, f):

def check_dependency(d, f):
s = (f, cached_mtime_noerror(f))
deps = (d.getVar('__depends', False) or [])
deps = (d.getVar('__depends') or [])
return s in deps

def supports(fn, data):
@@ -128,12 +122,12 @@ def resolve_file(fn, d):
for af in attempts:
mark_dependency(d, af)
if not newfn:
raise IOError(errno.ENOENT, "file %s not found in %s" % (fn, bbpath))
raise IOError("file %s not found in %s" % (fn, bbpath))
fn = newfn

mark_dependency(d, fn)
if not os.path.isfile(fn):
raise IOError(errno.ENOENT, "file %s not found" % fn)
raise IOError("file %s not found" % fn)

return fn

@@ -85,7 +85,7 @@ class DataNode(AstNode):
if 'flag' in self.groupd and self.groupd['flag'] != None:
return data.getVarFlag(key, self.groupd['flag'], noweakdefault=True)
else:
return data.getVar(key, False, noweakdefault=True, parsing=True)
return data.getVar(key, noweakdefault=True)

def eval(self, data):
groupd = self.groupd
@@ -136,7 +136,7 @@ class DataNode(AstNode):
if flag:
data.setVarFlag(key, flag, val, **loginfo)
else:
data.setVar(key, val, parsing=True, **loginfo)
data.setVar(key, val, **loginfo)

class MethodNode(AstNode):
tr_tbl = string.maketrans('/.+-@%&', '_______')
@@ -152,13 +152,13 @@ class MethodNode(AstNode):
funcname = ("__anon_%s_%s" % (self.lineno, self.filename.translate(MethodNode.tr_tbl)))
text = "def %s(d):\n" % (funcname) + text
bb.methodpool.insert_method(funcname, text, self.filename)
anonfuncs = data.getVar('__BBANONFUNCS', False) or []
anonfuncs = data.getVar('__BBANONFUNCS') or []
anonfuncs.append(funcname)
data.setVar('__BBANONFUNCS', anonfuncs)
data.setVar(funcname, text, parsing=True)
data.setVar(funcname, text)
else:
data.setVarFlag(self.func_name, "func", 1)
data.setVar(self.func_name, text, parsing=True)
data.setVar(self.func_name, text)

class PythonMethodNode(AstNode):
def __init__(self, filename, lineno, function, modulename, body):
@@ -175,7 +175,7 @@ class PythonMethodNode(AstNode):
bb.methodpool.insert_method(self.modulename, text, self.filename)
data.setVarFlag(self.function, "func", 1)
data.setVarFlag(self.function, "python", 1)
data.setVar(self.function, text, parsing=True)
data.setVar(self.function, text)

class MethodFlagsNode(AstNode):
def __init__(self, filename, lineno, key, m):
@@ -184,7 +184,7 @@ class MethodFlagsNode(AstNode):
self.m = m

def eval(self, data):
if data.getVar(self.key, False):
if data.getVar(self.key):
# clean up old version of this piece of metadata, as its
# flags could cause problems
data.setVarFlag(self.key, 'python', None)
@@ -209,10 +209,10 @@ class ExportFuncsNode(AstNode):
for func in self.n:
calledfunc = self.classname + "_" + func

if data.getVar(func, False) and not data.getVarFlag(func, 'export_func'):
if data.getVar(func) and not data.getVarFlag(func, 'export_func'):
continue

if data.getVar(func, False):
if data.getVar(func):
data.setVarFlag(func, 'python', None)
data.setVarFlag(func, 'func', None)

@@ -224,11 +224,11 @@ class ExportFuncsNode(AstNode):
data.setVarFlag(calledfunc, flag, data.getVarFlag(func, flag))

if data.getVarFlag(calledfunc, "python"):
data.setVar(func, " bb.build.exec_func('" + calledfunc + "', d)\n", parsing=True)
data.setVar(func, " bb.build.exec_func('" + calledfunc + "', d)\n")
else:
if "-" in self.classname:
bb.fatal("The classname %s contains a dash character and is calling an sh function %s using EXPORT_FUNCTIONS. Since a dash is illegal in sh function names, this cannot work, please rename the class or don't use EXPORT_FUNCTIONS." % (self.classname, calledfunc))
data.setVar(func, " " + calledfunc + "\n", parsing=True)
data.setVar(func, " " + calledfunc + "\n")
data.setVarFlag(func, 'export_func', '1')

class AddTaskNode(AstNode):
@@ -255,7 +255,7 @@ class BBHandlerNode(AstNode):
self.hs = fns.split()

def eval(self, data):
bbhands = data.getVar('__BBHANDLERS', False) or []
bbhands = data.getVar('__BBHANDLERS') or []
for h in self.hs:
bbhands.append(h)
data.setVarFlag(h, "handler", 1)
@@ -315,22 +315,23 @@ def handleInherit(statements, filename, lineno, m):

def finalize(fn, d, variant = None):
all_handlers = {}
for var in d.getVar('__BBHANDLERS', False) or []:
for var in d.getVar('__BBHANDLERS') or []:
# try to add the handler
bb.event.register(var, d.getVar(var, False), (d.getVarFlag(var, "eventmask", True) or "").split())
bb.event.register(var, d.getVar(var), (d.getVarFlag(var, "eventmask", True) or "").split())

bb.event.fire(bb.event.RecipePreFinalise(fn), d)

bb.data.expandKeys(d)
bb.data.update_data(d)
code = []
for funcname in d.getVar("__BBANONFUNCS", False) or []:
for funcname in d.getVar("__BBANONFUNCS") or []:
code.append("%s(d)" % funcname)
bb.utils.better_exec("\n".join(code), {"d": d})
bb.data.update_data(d)

tasklist = d.getVar('__BBTASKS', False) or []
bb.build.add_tasks(tasklist, d)
tasklist = d.getVar('__BBTASKS') or []
deltasklist = d.getVar('__BBDELTASKS') or []
bb.build.add_tasks(tasklist, deltasklist, d)

bb.parse.siggen.finalise(fn, d, variant)

@@ -32,7 +32,7 @@ import bb.build, bb.utils
from bb import data

from . import ConfHandler
from .. import resolve_file, ast, logger, ParseError
from .. import resolve_file, ast, logger
from .ConfHandler import include, init

# For compatibility
@@ -48,7 +48,7 @@ __def_regexp__ = re.compile( r"def\s+(\w+).*:" )
__python_func_regexp__ = re.compile( r"(\s+.*)|(^$)" )

__infunc__ = []
__infunc__ = ""
__inpython__ = False
__body__ = []
__classname__ = ""
@@ -69,14 +69,15 @@ def supports(fn, d):
return os.path.splitext(fn)[-1] in [".bb", ".bbclass", ".inc"]

def inherit(files, fn, lineno, d):
__inherit_cache = d.getVar('__inherit_cache', False) or []
__inherit_cache = d.getVar('__inherit_cache') or []
files = d.expand(files).split()
for file in files:
if not os.path.isabs(file) and not file.endswith(".bbclass"):
file = os.path.join('classes', '%s.bbclass' % file)

if not os.path.isabs(file):
bbpath = d.getVar("BBPATH", True)
dname = os.path.dirname(fn)
bbpath = "%s:%s" % (dname, d.getVar("BBPATH", True))
abs_fn, attempts = bb.utils.which(bbpath, file, history=True)
for af in attempts:
if af != abs_fn:
@@ -89,7 +90,7 @@ def inherit(files, fn, lineno, d):
__inherit_cache.append( file )
d.setVar('__inherit_cache', __inherit_cache)
include(fn, file, lineno, d, "inherit")
__inherit_cache = d.getVar('__inherit_cache', False) or []
__inherit_cache = d.getVar('__inherit_cache') or []

def get_statements(filename, absolute_filename, base_name):
global cached_statements
@@ -119,7 +120,7 @@ def get_statements(filename, absolute_filename, base_name):
def handle(fn, d, include):
global __func_start_regexp__, __inherit_regexp__, __export_func_regexp__, __addtask_regexp__, __addhandler_regexp__, __infunc__, __body__, __residue__, __classname__
__body__ = []
__infunc__ = []
__infunc__ = ""
__classname__ = ""
__residue__ = []

@@ -129,13 +130,13 @@ def handle(fn, d, include):

if ext == ".bbclass":
__classname__ = root
__inherit_cache = d.getVar('__inherit_cache', False) or []
__inherit_cache = d.getVar('__inherit_cache') or []
if not fn in __inherit_cache:
__inherit_cache.append(fn)
d.setVar('__inherit_cache', __inherit_cache)

if include != 0:
oldfile = d.getVar('FILE', False)
oldfile = d.getVar('FILE')
else:
oldfile = None

@@ -148,7 +149,7 @@ def handle(fn, d, include):
statements = get_statements(fn, abs_fn, base_name)

# DONE WITH PARSING... time to evaluate
if ext != ".bbclass" and abs_fn != oldfile:
if ext != ".bbclass":
d.setVar('FILE', abs_fn)

try:
@@ -158,15 +159,10 @@ def handle(fn, d, include):
if include == 0:
return { "" : d }

if __infunc__:
raise ParseError("Shell function %s is never closed" % __infunc__[0], __infunc__[1], __infunc__[2])
if __residue__:
raise ParseError("Leftover unparsed (incomplete?) data %s from %s" % __residue__, fn)

if ext != ".bbclass" and include == 0:
return ast.multi_finalize(fn, d)

if ext != ".bbclass" and oldfile and abs_fn != oldfile:
if oldfile:
d.setVar("FILE", oldfile)

return d
@@ -176,8 +172,8 @@ def feeder(lineno, s, fn, root, statements):
if __infunc__:
if s == '}':
__body__.append('')
ast.handleMethod(statements, fn, lineno, __infunc__[0], __body__)
__infunc__ = []
ast.handleMethod(statements, fn, lineno, __infunc__, __body__)
__infunc__ = ""
__body__ = []
else:
__body__.append(s)
@@ -221,8 +217,8 @@ def feeder(lineno, s, fn, root, statements):

m = __func_start_regexp__.match(s)
if m:
__infunc__ = [m.group("func") or "__anonymous", fn, lineno]
ast.handleMethodFlags(statements, fn, lineno, __infunc__[0], m)
__infunc__ = m.group("func") or "__anonymous"
ast.handleMethodFlags(statements, fn, lineno, __infunc__, m)
return

m = __def_regexp__.match(s)

@@ -24,9 +24,8 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import errno
import re
import os
import re, os
import logging
import bb.utils
from bb.parse import ParseError, resolve_file, ast, logger, handle

@@ -59,7 +58,7 @@ __require_regexp__ = re.compile( r"require\s+(.+)" )
__export_regexp__ = re.compile( r"export\s+([a-zA-Z0-9\-_+.${}/]+)$" )

def init(data):
topdir = data.getVar('TOPDIR', False)
topdir = data.getVar('TOPDIR')
if not topdir:
data.setVar('TOPDIR', os.getcwd())

@@ -93,17 +92,12 @@ def include(parentfn, fn, lineno, data, error_out):
logger.warn("Duplicate inclusion for %s in %s" % (fn, data.getVar('FILE', True)))

try:
bb.parse.handle(fn, data, True)
except (IOError, OSError) as exc:
if exc.errno == errno.ENOENT:
if error_out:
raise ParseError("Could not %s file %s" % (error_out, fn), parentfn, lineno)
logger.debug(2, "CONF file '%s' not found", fn)
else:
if error_out:
raise ParseError("Could not %s file %s: %s" % (error_out, fn, exc.strerror), parentfn, lineno)
else:
raise ParseError("Error parsing %s: %s" % (fn, exc.strerror), parentfn, lineno)
ret = bb.parse.handle(fn, data, True)
except (IOError, OSError):
if error_out:
raise ParseError("Could not %(error_out)s file %(fn)s" % vars(), parentfn, lineno)
logger.debug(2, "CONF file '%s' not found", fn)
bb.parse.mark_dependency(data, fn)

# We have an issue where a UI might want to enforce particular settings such as
# an empty DISTRO variable. If configuration files do something like assigning
@@ -119,7 +113,7 @@ def handle(fn, data, include):
if include == 0:
oldfile = None
else:
oldfile = data.getVar('FILE', False)
oldfile = data.getVar('FILE')

abs_fn = resolve_file(fn, data)
f = open(abs_fn, 'r')

@@ -64,7 +64,7 @@ class Popen(subprocess.Popen):
options.update(kwargs)
subprocess.Popen.__init__(self, *args, **options)

def _logged_communicate(pipe, log, input, extrafiles):
def _logged_communicate(pipe, log, input):
if pipe.stdin:
if input is not None:
pipe.stdin.write(input)
@@ -79,20 +79,6 @@ def _logged_communicate(pipe, log, input):
if pipe.stderr is not None:
bb.utils.nonblockingfd(pipe.stderr.fileno())
rin.append(pipe.stderr)
for fobj, _ in extrafiles:
bb.utils.nonblockingfd(fobj.fileno())
rin.append(fobj)

def readextras(selected):
for fobj, func in extrafiles:
if fobj in selected:
try:
data = fobj.read()
except IOError as err:
if err.errno == errno.EAGAIN or err.errno == errno.EWOULDBLOCK:
data = None
if data is not None:
func(data)

try:
while pipe.poll() is None:
@@ -114,27 +100,18 @@ def _logged_communicate(pipe, log, input, extrafiles):
if data is not None:
errdata.append(data)
log.write(data)

readextras(r)

finally:
log.flush()

readextras([fobj for fobj, _ in extrafiles])

if pipe.stdout is not None:
pipe.stdout.close()
if pipe.stderr is not None:
pipe.stderr.close()
return ''.join(outdata), ''.join(errdata)

def run(cmd, input=None, log=None, extrafiles=None, **options):
def run(cmd, input=None, log=None, **options):
"""Convenience function to run a command and return its output, raising an
exception when the command fails"""

if not extrafiles:
extrafiles = []

if isinstance(cmd, basestring) and not "shell" in options:
options["shell"] = True

@@ -147,7 +124,7 @@ def run(cmd, input=None, log=None, extrafiles=None, **options):
raise CmdError(cmd, exc)

if log:
stdout, stderr = _logged_communicate(pipe, log, input, extrafiles)
stdout, stderr = _logged_communicate(pipe, log, input)
else:
stdout, stderr = pipe.communicate(input)

@@ -793,18 +793,7 @@ class RunQueueData:
|
||||
if self.cooker.configuration.invalidate_stamp:
|
||||
for (fn, target) in self.target_pairs:
|
||||
for st in self.cooker.configuration.invalidate_stamp.split(','):
|
||||
if not st.startswith("do_"):
|
||||
st = "do_%s" % st
|
||||
invalidate_task(fn, st, True)
|
||||
|
||||
# Create and print to the logs a virtual/xxxx -> PN (fn) table
|
||||
virtmap = taskData.get_providermap()
|
||||
virtpnmap = {}
|
||||
for v in virtmap:
|
||||
virtpnmap[v] = self.dataCache.pkg_fn[virtmap[v]]
|
||||
bb.debug(2, "%s resolved to: %s (%s)" % (v, virtpnmap[v], virtmap[v]))
|
||||
if hasattr(bb.parse.siggen, "tasks_resolved"):
|
||||
bb.parse.siggen.tasks_resolved(virtmap, virtpnmap, self.dataCache)
|
||||
invalidate_task(fn, "do_%s" % st, True)
|
||||
|
||||
# Iterate over the task list and call into the siggen code
|
||||
dealtwith = set()
|
||||
@@ -1107,13 +1096,6 @@ class RunQueue:
|
||||
raise
|
||||
except SystemExit:
|
||||
raise
|
||||
except bb.BBHandledException:
|
||||
try:
|
||||
self.teardown_workers()
|
||||
except:
|
||||
pass
|
||||
self.state = runQueueComplete
|
||||
raise
|
||||
except:
|
||||
logger.error("An uncaught exception occured in runqueue, please see the failure below:")
|
||||
try:
|
||||
@@ -1172,14 +1154,9 @@ class RunQueue:
|
||||
sq_hash.append(self.rqdata.runq_hash[task])
|
||||
sq_taskname.append(taskname)
|
||||
sq_task.append(task)
|
||||
call = self.hashvalidate + "(sq_fn, sq_task, sq_hash, sq_hashfn, d)"
|
||||
locs = { "sq_fn" : sq_fn, "sq_task" : sq_taskname, "sq_hash" : sq_hash, "sq_hashfn" : sq_hashfn, "d" : self.cooker.expanded_data }
|
||||
try:
|
||||
call = self.hashvalidate + "(sq_fn, sq_task, sq_hash, sq_hashfn, d, siginfo=True)"
|
||||
valid = bb.utils.better_eval(call, locs)
|
||||
# Handle version with no siginfo parameter
|
||||
except TypeError:
|
||||
call = self.hashvalidate + "(sq_fn, sq_task, sq_hash, sq_hashfn, d)"
|
||||
valid = bb.utils.better_eval(call, locs)
|
||||
valid = bb.utils.better_eval(call, locs)
|
||||
for v in valid:
|
||||
valid_new.add(sq_task[v])
|
||||
|
||||
@@ -1265,6 +1242,8 @@ class RunQueue:
|
||||
prevh = __find_md5__.search(latestmatch).group(0)
|
||||
output = bb.siggen.compare_sigfiles(latestmatch, match, recursecb)
|
||||
bb.plain("\nTask %s:%s couldn't be used from the cache because:\n We need hash %s, closest matching task was %s\n " % (pn, taskname, h, prevh) + '\n '.join(output))
|
||||
else:
|
||||
bb.plain("Error, can't find multiple tasks at divergence point? Was there a previously run task?")
|
||||
|
||||
class RunQueueExecute:
|
||||
|
||||
@@ -1291,9 +1270,6 @@ class RunQueueExecute:
|
||||
if rq.fakeworkerpipe:
|
||||
rq.fakeworkerpipe.setrunqueueexec(self)
|
||||
|
||||
if self.number_tasks <= 0:
|
||||
bb.fatal("Invalid BB_NUMBER_THREADS %s" % self.number_tasks)
|
||||
|
||||
def runqueue_process_waitpid(self, task, status):
|
||||
|
||||
# self.build_stamps[pid] may not exist when use shared work directory.
|
||||
@@ -1591,12 +1567,7 @@ class RunQueueExecuteTasks(RunQueueExecute):
|
||||
taskdep = self.rqdata.dataCache.task_deps[fn]
|
||||
if 'fakeroot' in taskdep and taskname in taskdep['fakeroot'] and not self.cooker.configuration.dry_run:
|
||||
if not self.rq.fakeworker:
|
||||
try:
|
||||
self.rq.start_fakeworker(self)
|
||||
except OSError as exc:
|
||||
logger.critical("Failed to spawn fakeroot worker to run %s:%s: %s" % (fn, taskname, str(exc)))
|
||||
self.rq.state = runQueueFailed
|
||||
return True
|
||||
self.rq.start_fakeworker(self)
|
||||
self.rq.fakeworker.stdin.write("<runtask>" + pickle.dumps((fn, task, taskname, False, self.cooker.collection.get_file_appends(fn), taskdepdata)) + "</runtask>")
|
||||
self.rq.fakeworker.stdin.flush()
|
||||
else:
|
||||
@@ -1641,8 +1612,7 @@ class RunQueueExecuteTasks(RunQueueExecute):
|
||||
pn = self.rqdata.dataCache.pkg_fn[fn]
|
||||
taskname = self.rqdata.runq_task[revdep]
|
||||
deps = self.rqdata.runq_depends[revdep]
|
||||
provides = self.rqdata.dataCache.fn_provides[fn]
|
||||
taskdepdata[revdep] = [pn, taskname, fn, deps, provides]
taskdepdata[revdep] = [pn, taskname, fn, deps]
for revdep2 in deps:
if revdep2 not in taskdepdata:
additional.append(revdep2)

@@ -97,7 +97,7 @@ class ProcessServer(Process, BaseImplServer):
def run(self):
for event in bb.event.ui_queue:
self.event_queue.put(event)
self.event_handle.value = bb.event.register_UIHhandler(self, True)
self.event_handle.value = bb.event.register_UIHhandler(self)

bb.cooker.server_main(self.cooker, self.main)

@@ -114,10 +114,6 @@ class ProcessServer(Process, BaseImplServer):
if self.quitout.poll():
self.quitout.recv()
self.quit = True
try:
self.runCommand(["stateForceShutdown"])
except:
pass

self.idle_commands(.1, [self.command_channel, self.quitout])
except Exception:
@@ -127,12 +123,9 @@ class ProcessServer(Process, BaseImplServer):
bb.event.unregister_UIHhandler(self.event_handle.value)
self.command_channel.close()
self.cooker.shutdown(True)
self.quitout.close()

def idle_commands(self, delay, fds=None):
def idle_commands(self, delay, fds = []):
nextsleep = delay
if not fds:
fds = []

for function, data in self._idlefuns.items():
try:
@@ -151,9 +144,8 @@ class ProcessServer(Process, BaseImplServer):
fds = fds + retval
except SystemExit:
raise
except Exception as exc:
if not isinstance(exc, bb.BBHandledException):
logger.exception('Running idle function')
except Exception:
logger.exception('Running idle function')
del self._idlefuns[function]
self.quit = True

@@ -177,16 +169,12 @@ class BitBakeProcessServerConnection(BitBakeBaseServerConnection):
self.event_queue = event_queue
self.connection = ServerCommunicator(self.ui_channel, self.procserver.event_handle, self.procserver)
self.events = self.event_queue
self.terminated = False

def sigterm_terminate(self):
bb.error("UI received SIGTERM")
self.terminate()

def terminate(self):
if self.terminated:
return
self.terminated = True
def flushevents():
while True:
try:

@@ -99,7 +99,7 @@ class BitBakeServerCommands():
if (self.cooker.state in [bb.cooker.state.parsing, bb.cooker.state.running]):
return None

self.event_handle = bb.event.register_UIHhandler(s, True)
self.event_handle = bb.event.register_UIHhandler(s)
return self.event_handle

def unregisterEventHandler(self, handlerNum):
@@ -281,15 +281,12 @@ class XMLRPCServer(SimpleXMLRPCServer, BaseImplServer):
self.connection_token = token

class BitBakeXMLRPCServerConnection(BitBakeBaseServerConnection):
def __init__(self, serverImpl, clientinfo=("localhost", 0), observer_only = False, featureset = None):
def __init__(self, serverImpl, clientinfo=("localhost", 0), observer_only = False, featureset = []):
self.connection, self.transport = _create_server(serverImpl.host, serverImpl.port)
self.clientinfo = clientinfo
self.serverImpl = serverImpl
self.observer_only = observer_only
if featureset:
self.featureset = featureset
else:
self.featureset = []
self.featureset = featureset

def connect(self, token = None):
if token is None:

@@ -80,7 +80,6 @@ class SignatureGeneratorBasic(SignatureGenerator):
self.taskdeps = {}
self.runtaskdeps = {}
self.file_checksum_values = {}
self.taints = {}
self.gendeps = {}
self.lookupcache = {}
self.pkgnameextract = re.compile("(?P<fn>.*)\..*")
@@ -200,14 +199,11 @@ class SignatureGeneratorBasic(SignatureGenerator):
if 'nostamp' in taskdep and task in taskdep['nostamp']:
# Nostamp tasks need an implicit taint so that they force any dependent tasks to run
import uuid
taint = str(uuid.uuid4())
data = data + taint
self.taints[k] = "nostamp:" + taint
data = data + str(uuid.uuid4())

taint = self.read_taint(fn, task, dataCache.stamp[fn])
if taint:
data = data + taint
self.taints[k] = taint
logger.warn("%s is tainted from a forced run" % k)

h = hashlib.md5(data).hexdigest()
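The nostamp hunk above changes the taint from a throwaway `str(uuid.uuid4())` to one that is recorded in `self.taints` before being mixed into the hash. A minimal sketch of that pattern (names and data are illustrative, not BitBake's internals):

```python
import hashlib
import uuid

taints = {}

def task_hash(basedata, key, nostamp=False):
    data = basedata
    if nostamp:
        # A fresh UUID perturbs the hash on every run, forcing dependent
        # tasks to re-execute; recording it preserves *why* the hash moved.
        taint = str(uuid.uuid4())
        data += taint
        taints[key] = "nostamp:" + taint
    return hashlib.md5(data.encode()).hexdigest()

print(task_hash("recipe:do_image", "k1", nostamp=True))
print(taints)
```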
@@ -251,10 +247,6 @@ class SignatureGeneratorBasic(SignatureGenerator):
if taint:
data['taint'] = taint

if runtime and k in self.taints:
if 'nostamp:' in self.taints[k]:
data['taint'] = self.taints[k]

fd, tmpfile = tempfile.mkstemp(dir=os.path.dirname(sigfile), prefix="sigtask.")
try:
with os.fdopen(fd, "wb") as stream:
@@ -427,16 +419,12 @@ def compare_sigfiles(a, b, recursecb = None):
for f in removed:
output.append("Dependency on checksum of file %s was removed" % (f))


if len(a_data['runtaskdeps']) != len(b_data['runtaskdeps']):
changed = ["Number of task dependencies changed"]
else:
changed = []
for idx, task in enumerate(a_data['runtaskdeps']):
a = a_data['runtaskdeps'][idx]
b = b_data['runtaskdeps'][idx]
if a_data['runtaskhashes'][a] != b_data['runtaskhashes'][b]:
changed.append("%s with hash %s\n changed to\n%s with hash %s" % (a, a_data['runtaskhashes'][a], b, b_data['runtaskhashes'][b]))
changed = []
for idx, task in enumerate(a_data['runtaskdeps']):
a = a_data['runtaskdeps'][idx]
b = b_data['runtaskdeps'][idx]
if a_data['runtaskhashes'][a] != b_data['runtaskhashes'][b]:
changed.append("%s with hash %s\n changed to\n%s with hash %s" % (a, a_data['runtaskhashes'][a], b, b_data['runtaskhashes'][b]))

if changed:
output.append("runtaskdeps changed from %s to %s" % (clean_basepaths_list(a_data['runtaskdeps']), clean_basepaths_list(b_data['runtaskdeps'])))

@@ -41,7 +41,7 @@ class TaskData:
"""
BitBake Task Data implementation
"""
def __init__(self, abort = True, tryaltconfigs = False, skiplist = None, allowincomplete = False):
def __init__(self, abort = True, tryaltconfigs = False, skiplist = None):
self.build_names_index = []
self.run_names_index = []
self.fn_index = []
@@ -70,7 +70,6 @@ class TaskData:

self.abort = abort
self.tryaltconfigs = tryaltconfigs
self.allowincomplete = allowincomplete

self.skiplist = skiplist

@@ -514,7 +513,7 @@ class TaskData:
self.add_runtime_target(fn, item)
self.add_tasks(fn, dataCache)

def fail_fnid(self, fnid, missing_list=None):
def fail_fnid(self, fnid, missing_list = []):
"""
Mark a file as failed (unbuildable)
Remove any references from build and runtime provider lists
@@ -523,8 +522,6 @@ class TaskData:
"""
if fnid in self.failed_fnids:
return
if not missing_list:
missing_list = []
logger.debug(1, "File '%s' is unbuildable, removing...", self.fn_index[fnid])
self.failed_fnids.append(fnid)
for target in self.build_targets:
@@ -538,7 +535,7 @@ class TaskData:
if len(self.run_targets[target]) == 0:
self.remove_runtarget(target, missing_list)

def remove_buildtarget(self, targetid, missing_list=None):
def remove_buildtarget(self, targetid, missing_list = []):
"""
Mark a build target as failed (unbuildable)
Trigger removal of any files that have this as a dependency
@@ -563,7 +560,7 @@ class TaskData:
logger.error("Required build target '%s' has no buildable providers.\nMissing or unbuildable dependency chain was: %s", target, missing_list)
raise bb.providers.NoProvider(target)

def remove_runtarget(self, targetid, missing_list=None):
def remove_runtarget(self, targetid, missing_list = []):
"""
Mark a run target as failed (unbuildable)
Trigger removal of any files that have this as a dependency
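The `fail_fnid`, `remove_buildtarget` and `remove_runtarget` hunks above all make the same change: a `missing_list = []` default becomes `missing_list=None` with an in-body fallback. A short sketch of the bug being avoided (illustrative functions, not the TaskData methods themselves):

```python
def fail_bad(fnid, missing_list=[]):
    # The [] is evaluated once, at definition time, and shared by all calls.
    missing_list.append(fnid)
    return missing_list

def fail_good(fnid, missing_list=None):
    if missing_list is None:
        missing_list = []   # a fresh list on every call
    missing_list.append(fnid)
    return missing_list

print(fail_bad(1))   # [1]
print(fail_bad(2))   # [1, 2] - state leaked from the first call
print(fail_good(1))  # [1]
print(fail_good(2))  # [2]
```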
@@ -597,10 +594,9 @@ class TaskData:
added = added + 1
except bb.providers.NoProvider:
targetid = self.getbuild_id(target)
if self.abort and targetid in self.external_targets and not self.allowincomplete:
if self.abort and targetid in self.external_targets:
raise
if not self.allowincomplete:
self.remove_buildtarget(targetid)
self.remove_buildtarget(targetid)
for target in self.get_unresolved_run_targets(dataCache):
try:
self.add_rprovider(cfgData, dataCache, target)
@@ -612,18 +608,6 @@ class TaskData:
break
# self.dump_data()

def get_providermap(self):
virts = []
virtmap = {}

for name in self.build_names_index:
if name.startswith("virtual/"):
virts.append(name)
for v in virts:
if self.have_build_target(v):
virtmap[v] = self.fn_index[self.get_provider(v)[0]]
return virtmap

def dump_data(self):
"""
Dump some debug information on the internal data structures

@@ -24,30 +24,6 @@ import unittest
import bb
import bb.data
import bb.parse
import logging

class LogRecord():
def __enter__(self):
logs = []
class LogHandler(logging.Handler):
def emit(self, record):
logs.append(record)
logger = logging.getLogger("BitBake")
handler = LogHandler()
self.handler = handler
logger.addHandler(handler)
return logs
def __exit__(self, type, value, traceback):
logger = logging.getLogger("BitBake")
logger.removeHandler(self.handler)
return

def logContains(item, logs):
for l in logs:
m = l.getMessage()
if item in m:
return True
return False
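The `LogRecord` helper in the hunk above captures records emitted on the BitBake logger so tests can assert on them. The same capture pattern, as a self-contained sketch:

```python
import logging

captured = []

class ListHandler(logging.Handler):
    def emit(self, record):
        captured.append(record)

log = logging.getLogger("BitBake")
handler = ListHandler()
log.addHandler(handler)
try:
    log.warning("variable key replaced")
finally:
    log.removeHandler(handler)   # always detach, as __exit__ does above

assert any("replaced" in r.getMessage() for r in captured)
```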
class DataExpansions(unittest.TestCase):
def setUp(self):
@@ -134,12 +110,12 @@ class DataExpansions(unittest.TestCase):

def test_rename(self):
self.d.renameVar("foo", "newfoo")
self.assertEqual(self.d.getVar("newfoo", False), "value_of_foo")
self.assertEqual(self.d.getVar("foo", False), None)
self.assertEqual(self.d.getVar("newfoo"), "value_of_foo")
self.assertEqual(self.d.getVar("foo"), None)

def test_deletion(self):
self.d.delVar("foo")
self.assertEqual(self.d.getVar("foo", False), None)
self.assertEqual(self.d.getVar("foo"), None)

def test_keys(self):
keys = self.d.keys()
@@ -196,28 +172,28 @@ class TestMemoize(unittest.TestCase):
def test_memoized(self):
d = bb.data.init()
d.setVar("FOO", "bar")
self.assertTrue(d.getVar("FOO", False) is d.getVar("FOO", False))
self.assertTrue(d.getVar("FOO") is d.getVar("FOO"))

def test_not_memoized(self):
d1 = bb.data.init()
d2 = bb.data.init()
d1.setVar("FOO", "bar")
d2.setVar("FOO", "bar2")
self.assertTrue(d1.getVar("FOO", False) is not d2.getVar("FOO", False))
self.assertTrue(d1.getVar("FOO") is not d2.getVar("FOO"))

def test_changed_after_memoized(self):
d = bb.data.init()
d.setVar("foo", "value of foo")
self.assertEqual(str(d.getVar("foo", False)), "value of foo")
self.assertEqual(str(d.getVar("foo")), "value of foo")
d.setVar("foo", "second value of foo")
self.assertEqual(str(d.getVar("foo", False)), "second value of foo")
self.assertEqual(str(d.getVar("foo")), "second value of foo")

def test_same_value(self):
d = bb.data.init()
d.setVar("foo", "value of")
d.setVar("bar", "value of")
self.assertEqual(d.getVar("foo", False),
d.getVar("bar", False))
self.assertEqual(d.getVar("foo"),
d.getVar("bar"))
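These test hunks pair `getVar(name, False)` with a bare `getVar(name)`, apparently tracking a change in the default of the `expand` argument. What that flag controls, as a sketch assuming a BitBake checkout on `sys.path`:

```python
import bb.data

d = bb.data.init()
d.setVar("BAR", "world")
d.setVar("FOO", "hello ${BAR}")
print(d.getVar("FOO", False))  # unexpanded: 'hello ${BAR}'
print(d.getVar("FOO", True))   # expanded:   'hello world'
```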
class TestConcat(unittest.TestCase):
def setUp(self):
@@ -270,13 +246,6 @@ class TestConcatOverride(unittest.TestCase):
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST", True), "foo:val:val2:bar")

def test_append_unset(self):
self.d.setVar("TEST_prepend", "${FOO}:")
self.d.setVar("TEST_append", ":val2")
self.d.setVar("TEST_append", ":${BAR}")
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST", True), "foo::val2:bar")

def test_remove(self):
self.d.setVar("TEST", "${VAL} ${BAR}")
self.d.setVar("TEST_remove", "val")
@@ -325,66 +294,13 @@ class TestOverrides(unittest.TestCase):
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST", True), "testvalue2")

def test_one_override_unset(self):
self.d.setVar("TEST2_bar", "testvalue2")
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST2", True), "testvalue2")
self.assertItemsEqual(self.d.keys(), ['TEST', 'TEST2', 'OVERRIDES', 'TEST2_bar'])

def test_multiple_override(self):
self.d.setVar("TEST_bar", "testvalue2")
self.d.setVar("TEST_local", "testvalue3")
self.d.setVar("TEST_foo", "testvalue4")
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST", True), "testvalue3")
self.assertItemsEqual(self.d.keys(), ['TEST', 'TEST_foo', 'OVERRIDES', 'TEST_bar', 'TEST_local'])

def test_multiple_combined_overrides(self):
self.d.setVar("TEST_local_foo_bar", "testvalue3")
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST", True), "testvalue3")

def test_multiple_overrides_unset(self):
self.d.setVar("TEST2_local_foo_bar", "testvalue3")
bb.data.update_data(self.d)
self.assertEqual(self.d.getVar("TEST2", True), "testvalue3")

def test_keyexpansion_override(self):
self.d.setVar("LOCAL", "local")
self.d.setVar("TEST_bar", "testvalue2")
self.d.setVar("TEST_${LOCAL}", "testvalue3")
self.d.setVar("TEST_foo", "testvalue4")
bb.data.update_data(self.d)
bb.data.expandKeys(self.d)
self.assertEqual(self.d.getVar("TEST", True), "testvalue3")

def test_rename_override(self):
self.d.setVar("ALTERNATIVE_ncurses-tools_class-target", "a")
self.d.setVar("OVERRIDES", "class-target")
bb.data.update_data(self.d)
self.d.renameVar("ALTERNATIVE_ncurses-tools", "ALTERNATIVE_lib32-ncurses-tools")
self.assertEqual(self.d.getVar("ALTERNATIVE_lib32-ncurses-tools", True), "a")

def test_underscore_override(self):
self.d.setVar("TEST_bar", "testvalue2")
self.d.setVar("TEST_some_val", "testvalue3")
self.d.setVar("TEST_foo", "testvalue4")
self.d.setVar("OVERRIDES", "foo:bar:some_val")
self.assertEqual(self.d.getVar("TEST", True), "testvalue3")

class TestKeyExpansion(unittest.TestCase):
def setUp(self):
self.d = bb.data.init()
self.d.setVar("FOO", "foo")
self.d.setVar("BAR", "foo")

def test_keyexpand(self):
self.d.setVar("VAL_${FOO}", "A")
self.d.setVar("VAL_${BAR}", "B")
with LogRecord() as logs:
bb.data.expandKeys(self.d)
self.assertTrue(logContains("Variable key VAL_${FOO} (A) replaces original key VAL_foo (B)", logs))
self.assertEqual(self.d.getVar("VAL_foo", True), "A")

class TestFlags(unittest.TestCase):
def setUp(self):

@@ -315,7 +315,6 @@ class URITest(unittest.TestCase):
class FetcherTest(unittest.TestCase):

def setUp(self):
self.origdir = os.getcwd()
self.d = bb.data.init()
self.tempdir = tempfile.mkdtemp()
self.dldir = os.path.join(self.tempdir, "download")
@@ -327,7 +326,6 @@ class FetcherTest(unittest.TestCase):
self.d.setVar("PERSISTENT_DIR", persistdir)

def tearDown(self):
os.chdir(self.origdir)
bb.utils.prunedir(self.tempdir)

class MirrorUriTest(FetcherTest):
@@ -393,28 +391,6 @@ class MirrorUriTest(FetcherTest):
uris, uds = bb.fetch2.build_mirroruris(fetcher, mirrors, self.d)
self.assertEqual(uris, ['file:///someotherpath/downloads/bitbake-1.0.tar.gz'])

def test_mirror_of_mirror(self):
# Test if mirror of a mirror works
mirrorvar = self.mirrorvar + " http://.*/.* http://otherdownloads.yoctoproject.org/downloads/ \n"
mirrorvar = mirrorvar + " http://otherdownloads.yoctoproject.org/.* http://downloads2.yoctoproject.org/downloads/ \n"
fetcher = bb.fetch.FetchData("http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz", self.d)
mirrors = bb.fetch2.mirror_from_string(mirrorvar)
uris, uds = bb.fetch2.build_mirroruris(fetcher, mirrors, self.d)
self.assertEqual(uris, ['file:///somepath/downloads/bitbake-1.0.tar.gz',
'file:///someotherpath/downloads/bitbake-1.0.tar.gz',
'http://otherdownloads.yoctoproject.org/downloads/bitbake-1.0.tar.gz',
'http://downloads2.yoctoproject.org/downloads/bitbake-1.0.tar.gz'])

recmirrorvar = "https://.*/[^/]* http://AAAA/A/A/A/ \n" \
"https://.*/[^/]* https://BBBB/B/B/B/ \n"

def test_recursive(self):
fetcher = bb.fetch.FetchData("https://downloads.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz", self.d)
mirrors = bb.fetch2.mirror_from_string(self.recmirrorvar)
uris, uds = bb.fetch2.build_mirroruris(fetcher, mirrors, self.d)
self.assertEqual(uris, ['http://AAAA/A/A/A/bitbake/bitbake-1.0.tar.gz',
'https://BBBB/B/B/B/bitbake/bitbake-1.0.tar.gz',
'http://AAAA/A/A/A/B/B/bitbake/bitbake-1.0.tar.gz'])
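The mirror tests above build `MIRRORS`/`PREMIRRORS` values as flat strings of whitespace-separated regex/replacement pairs. An illustrative decoder for that shape (not `bb.fetch2.mirror_from_string` itself):

```python
mirrorvar = ("http://.*/.* http://otherdownloads.yoctoproject.org/downloads/ \n"
             " http://otherdownloads.yoctoproject.org/.* http://downloads2.yoctoproject.org/downloads/ \n")

# Split on any whitespace, then pair up (pattern, replacement) tokens.
tokens = mirrorvar.split()
mirrors = list(zip(tokens[0::2], tokens[1::2]))
for pattern, replacement in mirrors:
    print(pattern, "->", replacement)
```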
class FetcherLocalTest(FetcherTest):
def setUp(self):
@@ -500,19 +476,6 @@ class FetcherNetworkTest(FetcherTest):
fetcher.download()
self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)

def test_fetch_mirror_of_mirror(self):
self.d.setVar("MIRRORS", "http://.*/.* http://invalid2.yoctoproject.org/ \n http://invalid2.yoctoproject.org/.* http://downloads.yoctoproject.org/releases/bitbake")
fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz"], self.d)
fetcher.download()
self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)

def test_fetch_file_mirror_of_mirror(self):
self.d.setVar("MIRRORS", "http://.*/.* file:///some1where/ \n file:///some1where/.* file://some2where/ \n file://some2where/.* http://downloads.yoctoproject.org/releases/bitbake")
fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz"], self.d)
os.mkdir(self.dldir + "/some2where")
fetcher.download()
self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)

def test_fetch_premirror(self):
self.d.setVar("PREMIRRORS", "http://.*/.* http://downloads.yoctoproject.org/releases/bitbake")
fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz"], self.d)
@@ -584,43 +547,6 @@ class FetcherNetworkTest(FetcherTest):
os.chdir(os.path.dirname(self.unpackdir))
fetcher.unpack(self.unpackdir)

def test_trusted_network(self):
# Ensure trusted_network returns False when the host IS in the list.
url = "git://Someserver.org/foo;rev=1"
self.d.setVar("BB_ALLOWED_NETWORKS", "server1.org someserver.org server2.org server3.org")
self.assertTrue(bb.fetch.trusted_network(self.d, url))

def test_wild_trusted_network(self):
# Ensure trusted_network returns true when the *.host IS in the list.
url = "git://Someserver.org/foo;rev=1"
self.d.setVar("BB_ALLOWED_NETWORKS", "server1.org *.someserver.org server2.org server3.org")
self.assertTrue(bb.fetch.trusted_network(self.d, url))

def test_prefix_wild_trusted_network(self):
# Ensure trusted_network returns true when the prefix matches *.host.
url = "git://git.Someserver.org/foo;rev=1"
self.d.setVar("BB_ALLOWED_NETWORKS", "server1.org *.someserver.org server2.org server3.org")
self.assertTrue(bb.fetch.trusted_network(self.d, url))

def test_two_prefix_wild_trusted_network(self):
# Ensure trusted_network returns true when the prefix matches *.host.
url = "git://something.git.Someserver.org/foo;rev=1"
self.d.setVar("BB_ALLOWED_NETWORKS", "server1.org *.someserver.org server2.org server3.org")
self.assertTrue(bb.fetch.trusted_network(self.d, url))

def test_untrusted_network(self):
# Ensure trusted_network returns False when the host is NOT in the list.
url = "git://someserver.org/foo;rev=1"
self.d.setVar("BB_ALLOWED_NETWORKS", "server1.org server2.org server3.org")
self.assertFalse(bb.fetch.trusted_network(self.d, url))

def test_wild_untrusted_network(self):
# Ensure trusted_network returns False when the host is NOT in the list.
url = "git://*.someserver.org/foo;rev=1"
self.d.setVar("BB_ALLOWED_NETWORKS", "server1.org server2.org server3.org")
self.assertFalse(bb.fetch.trusted_network(self.d, url))

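The assertions above pin down the semantics the tests expect from `bb.fetch.trusted_network`: matching is case-insensitive and a `*.host` entry covers both the bare host and any subdomain. An illustrative reimplementation of just the host check, not BitBake's own code:

```python
def host_trusted(host, allowed):
    host = host.lower()
    for entry in allowed.lower().split():
        if entry.startswith("*."):
            # "*.someserver.org" matches the bare host and any prefix of it
            if host == entry[2:] or host.endswith(entry[1:]):
                return True
        elif host == entry:
            return True
    return False

allowed = "server1.org *.someserver.org server2.org server3.org"
print(host_trusted("someserver.org", allowed))                # True
print(host_trusted("git.someserver.org", allowed))            # True
print(host_trusted("something.git.someserver.org", allowed))  # True
print(host_trusted("evil.org", allowed))                      # False
```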
class URLHandle(unittest.TestCase):

datatable = {
@@ -640,7 +566,7 @@ class URLHandle(unittest.TestCase):
result = bb.fetch.encodeurl(v)
self.assertEqual(result, k)

class FetchLatestVersionTest(FetcherTest):
class FetchMethodTest(FetcherTest):

test_git_uris = {
# version pattern "X.Y.Z"
@@ -707,8 +633,7 @@ class FetchLatestVersionTest(FetcherTest):
self.d.setVar("SRCREV", k[2])
self.d.setVar("GITTAGREGEX", k[3])
ud = bb.fetch2.FetchData(k[1], self.d)
pupver= ud.method.latest_versionstring(ud, self.d)
verstring = pupver[0]
verstring = ud.method.latest_versionstring(ud, self.d)
r = bb.utils.vercmp_string(v, verstring)
self.assertTrue(r == -1 or r == 0, msg="Package %s, version: %s <= %s" % (k[0], v, verstring))
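The hunk above reflects `latest_versionstring()` changing from returning a bare version string to a tuple whose first element is the version, hence the `pupver[0]` unpacking. Illustrative values only:

```python
# Hypothetical (version, revision) return shape for the tuple form.
pupver = ("1.26.0", "4ab0953")
verstring = pupver[0]
print(verstring)   # '1.26.0'
```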
@@ -718,52 +643,6 @@ class FetchLatestVersionTest(FetcherTest):
self.d.setVar("REGEX_URI", k[2])
self.d.setVar("REGEX", k[3])
ud = bb.fetch2.FetchData(k[1], self.d)
pupver = ud.method.latest_versionstring(ud, self.d)
verstring = pupver[0]
verstring = ud.method.latest_versionstring(ud, self.d)
r = bb.utils.vercmp_string(v, verstring)
self.assertTrue(r == -1 or r == 0, msg="Package %s, version: %s <= %s" % (k[0], v, verstring))


class FetchCheckStatusTest(FetcherTest):
test_wget_uris = ["http://www.cups.org/software/1.7.2/cups-1.7.2-source.tar.bz2",
"http://www.cups.org/software/ipptool/ipptool-20130731-linux-ubuntu-i686.tar.gz",
"http://www.cups.org/",
"http://downloads.yoctoproject.org/releases/sato/sato-engine-0.1.tar.gz",
"http://downloads.yoctoproject.org/releases/sato/sato-engine-0.2.tar.gz",
"http://downloads.yoctoproject.org/releases/sato/sato-engine-0.3.tar.gz",
"https://yoctoproject.org/",
"https://yoctoproject.org/documentation",
"http://downloads.yoctoproject.org/releases/opkg/opkg-0.1.7.tar.gz",
"http://downloads.yoctoproject.org/releases/opkg/opkg-0.3.0.tar.gz",
"ftp://ftp.gnu.org/gnu/autoconf/autoconf-2.60.tar.gz",
"ftp://ftp.gnu.org/gnu/chess/gnuchess-5.08.tar.gz",
"ftp://ftp.gnu.org/gnu/gmp/gmp-4.0.tar.gz",
]

if os.environ.get("BB_SKIP_NETTESTS") == "yes":
print("Unset BB_SKIP_NETTESTS to run network tests")
else:

def test_wget_checkstatus(self):
fetch = bb.fetch2.Fetch(self.test_wget_uris, self.d)
for u in self.test_wget_uris:
ud = fetch.ud[u]
m = ud.method
ret = m.checkstatus(fetch, ud, self.d)
self.assertTrue(ret, msg="URI %s, can't check status" % (u))


def test_wget_checkstatus_connection_cache(self):
from bb.fetch2 import FetchConnectionCache

connection_cache = FetchConnectionCache()
fetch = bb.fetch2.Fetch(self.test_wget_uris, self.d,
connection_cache = connection_cache)

for u in self.test_wget_uris:
ud = fetch.ud[u]
m = ud.method
ret = m.checkstatus(fetch, ud, self.d)
self.assertTrue(ret, msg="URI %s, can't check status" % (u))

connection_cache.close_connections()

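A usage sketch of the connection-cache variant tested above, assuming a BitBake checkout on `sys.path`, a configured datastore, and network access; the URI is illustrative:

```python
import bb.data
from bb.fetch2 import Fetch, FetchConnectionCache

d = bb.data.init()
uris = ["http://downloads.yoctoproject.org/releases/sato/sato-engine-0.1.tar.gz"]

connection_cache = FetchConnectionCache()
fetch = Fetch(uris, d, connection_cache=connection_cache)
for u in uris:
    ud = fetch.ud[u]
    print(ud.method.checkstatus(fetch, ud, d))  # True when the URI answers
connection_cache.close_connections()            # release the pooled sockets
```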
@@ -1,147 +0,0 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# BitBake Test for lib/bb/parse/
#
# Copyright (C) 2015 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#

import unittest
import tempfile
import logging
import bb
import os

logger = logging.getLogger('BitBake.TestParse')

import bb.parse
import bb.data
import bb.siggen

class ParseTest(unittest.TestCase):

testfile = """
A = "1"
B = "2"
do_install() {
echo "hello"
}

C = "3"
"""

def setUp(self):
self.d = bb.data.init()
bb.parse.siggen = bb.siggen.init(self.d)

def parsehelper(self, content, suffix = ".bb"):

f = tempfile.NamedTemporaryFile(suffix = suffix)
f.write(content)
f.flush()
os.chdir(os.path.dirname(f.name))
return f

def test_parse_simple(self):
f = self.parsehelper(self.testfile)
d = bb.parse.handle(f.name, self.d)['']
self.assertEqual(d.getVar("A", True), "1")
self.assertEqual(d.getVar("B", True), "2")
self.assertEqual(d.getVar("C", True), "3")

def test_parse_incomplete_function(self):
testfileB = self.testfile.replace("}", "")
f = self.parsehelper(testfileB)
with self.assertRaises(bb.parse.ParseError):
d = bb.parse.handle(f.name, self.d)['']

overridetest = """
RRECOMMENDS_${PN} = "a"
RRECOMMENDS_${PN}_libc = "b"
OVERRIDES = "libc:${PN}"
PN = "gtk+"
"""

def test_parse_overrides(self):
f = self.parsehelper(self.overridetest)
d = bb.parse.handle(f.name, self.d)['']
self.assertEqual(d.getVar("RRECOMMENDS", True), "b")
bb.data.expandKeys(d)
self.assertEqual(d.getVar("RRECOMMENDS", True), "b")
d.setVar("RRECOMMENDS_gtk+", "c")
self.assertEqual(d.getVar("RRECOMMENDS", True), "c")

overridetest2 = """
EXTRA_OECONF = ""
EXTRA_OECONF_class-target = "b"
EXTRA_OECONF_append = " c"
"""

def test_parse_overrides(self):
f = self.parsehelper(self.overridetest2)
d = bb.parse.handle(f.name, self.d)['']
d.appendVar("EXTRA_OECONF", " d")
d.setVar("OVERRIDES", "class-target")
self.assertEqual(d.getVar("EXTRA_OECONF", True), "b c d")

overridetest3 = """
DESCRIPTION = "A"
DESCRIPTION_${PN}-dev = "${DESCRIPTION} B"
PN = "bc"
"""

def test_parse_combinations(self):
f = self.parsehelper(self.overridetest3)
d = bb.parse.handle(f.name, self.d)['']
bb.data.expandKeys(d)
self.assertEqual(d.getVar("DESCRIPTION_bc-dev", True), "A B")
d.setVar("DESCRIPTION", "E")
d.setVar("DESCRIPTION_bc-dev", "C D")
d.setVar("OVERRIDES", "bc-dev")
self.assertEqual(d.getVar("DESCRIPTION", True), "C D")


classextend = """
VAR_var_override1 = "B"
EXTRA = ":override1"
OVERRIDES = "nothing${EXTRA}"

BBCLASSEXTEND = "###CLASS###"
"""
classextend_bbclass = """
EXTRA = ""
python () {
d.renameVar("VAR_var", "VAR_var2")
}
"""

#
# Test based upon a real world data corruption issue. One
# data store changing a variable poked through into a different data
# store. This test case replicates that issue where the value 'B' would
# become unset/disappear.
#
def test_parse_classextend_contamination(self):
cls = self.parsehelper(self.classextend_bbclass, suffix=".bbclass")
#clsname = os.path.basename(cls.name).replace(".bbclass", "")
self.classextend = self.classextend.replace("###CLASS###", cls.name)
f = self.parsehelper(self.classextend)
alldata = bb.parse.handle(f.name, self.d)
d1 = alldata['']
d2 = alldata[cls.name]
self.assertEqual(d1.getVar("VAR_var", True), "B")
self.assertEqual(d2.getVar("VAR_var", True), None)

@@ -22,7 +22,6 @@
import unittest
import bb
import os
import tempfile

class VerCmpString(unittest.TestCase):

@@ -106,476 +105,3 @@ class Path(unittest.TestCase):
for arg1, correctresult in checkitems:
result = bb.utils._check_unsafe_delete_path(arg1)
self.assertEqual(result, correctresult, '_check_unsafe_delete_path("%s") != %s' % (arg1, correctresult))


class EditMetadataFile(unittest.TestCase):
_origfile = """
# A comment
HELLO = "oldvalue"

THIS = "that"

# Another comment
NOCHANGE = "samevalue"
OTHER = 'anothervalue'

MULTILINE = "a1 \\
a2 \\
a3"

MULTILINE2 := " \\
b1 \\
b2 \\
b3 \\
"


MULTILINE3 = " \\
c1 \\
c2 \\
c3 \\
"

do_functionname() {
command1 ${VAL1} ${VAL2}
command2 ${VAL3} ${VAL4}
}
"""
def _testeditfile(self, varvalues, compareto, dummyvars=None):
if dummyvars is None:
dummyvars = []
with tempfile.NamedTemporaryFile('w', delete=False) as tf:
tf.write(self._origfile)
tf.close()
try:
varcalls = []
def handle_file(varname, origvalue, op, newlines):
self.assertIn(varname, varvalues, 'Callback called for variable %s not in the list!' % varname)
self.assertNotIn(varname, dummyvars, 'Callback called for variable %s in dummy list!' % varname)
varcalls.append(varname)
return varvalues[varname]

bb.utils.edit_metadata_file(tf.name, varvalues.keys(), handle_file)
with open(tf.name) as f:
modfile = f.readlines()
# Ensure the output matches the expected output
self.assertEqual(compareto.splitlines(True), modfile)
# Ensure the callback function was called for every variable we asked for
# (plus allow testing behaviour when a requested variable is not present)
self.assertEqual(sorted(varvalues.keys()), sorted(varcalls + dummyvars))
finally:
os.remove(tf.name)


def test_edit_metadata_file_nochange(self):
# Test file doesn't get modified with nothing to do
self._testeditfile({}, self._origfile)
# Test file doesn't get modified with only dummy variables
self._testeditfile({'DUMMY1': ('should_not_set', None, 0, True),
'DUMMY2': ('should_not_set_again', None, 0, True)}, self._origfile, dummyvars=['DUMMY1', 'DUMMY2'])
# Test file doesn't get modified with some the same values
self._testeditfile({'THIS': ('that', None, 0, True),
'OTHER': ('anothervalue', None, 0, True),
'MULTILINE3': (' c1 c2 c3', None, 4, False)}, self._origfile)

def test_edit_metadata_file_1(self):

newfile1 = """
# A comment
HELLO = "newvalue"

THIS = "that"

# Another comment
NOCHANGE = "samevalue"
OTHER = 'anothervalue'

MULTILINE = "a1 \\
a2 \\
a3"

MULTILINE2 := " \\
b1 \\
b2 \\
b3 \\
"


MULTILINE3 = " \\
c1 \\
c2 \\
c3 \\
"

do_functionname() {
command1 ${VAL1} ${VAL2}
command2 ${VAL3} ${VAL4}
}
"""
self._testeditfile({'HELLO': ('newvalue', None, 4, True)}, newfile1)


def test_edit_metadata_file_2(self):

newfile2 = """
# A comment
HELLO = "oldvalue"

THIS = "that"

# Another comment
NOCHANGE = "samevalue"
OTHER = 'anothervalue'

MULTILINE = " \\
d1 \\
d2 \\
d3 \\
"

MULTILINE2 := " \\
b1 \\
b2 \\
b3 \\
"


MULTILINE3 = "nowsingle"

do_functionname() {
command1 ${VAL1} ${VAL2}
command2 ${VAL3} ${VAL4}
}
"""
self._testeditfile({'MULTILINE': (['d1','d2','d3'], None, 4, False),
'MULTILINE3': ('nowsingle', None, 4, True),
'NOTPRESENT': (['a', 'b'], None, 4, False)}, newfile2, dummyvars=['NOTPRESENT'])


def test_edit_metadata_file_3(self):

newfile3 = """
# A comment
HELLO = "oldvalue"

# Another comment
NOCHANGE = "samevalue"
OTHER = "yetanothervalue"

MULTILINE = "e1 \\
e2 \\
e3 \\
"

MULTILINE2 := "f1 \\
\tf2 \\
\t"


MULTILINE3 = " \\
c1 \\
c2 \\
c3 \\
"

do_functionname() {
othercommand_one a b c
othercommand_two d e f
}
"""

self._testeditfile({'do_functionname()': (['othercommand_one a b c', 'othercommand_two d e f'], None, 4, False),
'MULTILINE2': (['f1', 'f2'], None, '\t', True),
'MULTILINE': (['e1', 'e2', 'e3'], None, -1, True),
'THIS': (None, None, 0, False),
'OTHER': ('yetanothervalue', None, 0, True)}, newfile3)


def test_edit_metadata_file_4(self):

newfile4 = """
# A comment
HELLO = "oldvalue"

THIS = "that"

# Another comment
OTHER = 'anothervalue'

MULTILINE = "a1 \\
a2 \\
a3"

MULTILINE2 := " \\
b1 \\
b2 \\
b3 \\
"


"""

self._testeditfile({'NOCHANGE': (None, None, 0, False),
'MULTILINE3': (None, None, 0, False),
'THIS': ('that', None, 0, False),
'do_functionname()': (None, None, 0, False)}, newfile4)


def test_edit_metadata(self):
newfile5 = """
# A comment
HELLO = "hithere"

# A new comment
THIS += "that"

# Another comment
NOCHANGE = "samevalue"
OTHER = 'anothervalue'

MULTILINE = "a1 \\
a2 \\
a3"

MULTILINE2 := " \\
b1 \\
b2 \\
b3 \\
"


MULTILINE3 = " \\
c1 \\
c2 \\
c3 \\
"

NEWVAR = "value"

do_functionname() {
command1 ${VAL1} ${VAL2}
command2 ${VAL3} ${VAL4}
}
"""


def handle_var(varname, origvalue, op, newlines):
if varname == 'THIS':
newlines.append('# A new comment\n')
elif varname == 'do_functionname()':
newlines.append('NEWVAR = "value"\n')
newlines.append('\n')
valueitem = varvalues.get(varname, None)
if valueitem:
return valueitem
else:
return (origvalue, op, 0, True)

varvalues = {'HELLO': ('hithere', None, 0, True), 'THIS': ('that', '+=', 0, True)}
varlist = ['HELLO', 'THIS', 'do_functionname()']
(updated, newlines) = bb.utils.edit_metadata(self._origfile.splitlines(True), varlist, handle_var)
self.assertTrue(updated, 'List should be updated but isn\'t')
self.assertEqual(newlines, newfile5.splitlines(True))

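The tests above exercise the `bb.utils.edit_metadata()` callback protocol: the callback receives `(varname, origvalue, op, newlines)` and returns a `(value, op, indent, minbreak)` tuple describing how to rewrite the variable. A minimal standalone call, assuming a BitBake checkout on `sys.path`:

```python
import bb.utils

lines = ['HELLO = "oldvalue"\n', 'THIS = "that"\n']

def handle_var(varname, origvalue, op, newlines):
    if varname == 'HELLO':
        return ('newvalue', None, 0, True)   # rewrite HELLO in place
    return (origvalue, None, 0, True)        # leave anything else alone

updated, newlines = bb.utils.edit_metadata(lines, ['HELLO'], handle_var)
print(updated)           # True - a value changed
print(''.join(newlines))
```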
class EditBbLayersConf(unittest.TestCase):

def _test_bblayers_edit(self, before, after, add, remove, notadded, notremoved):
with tempfile.NamedTemporaryFile('w', delete=False) as tf:
tf.write(before)
tf.close()
try:
actual_notadded, actual_notremoved = bb.utils.edit_bblayers_conf(tf.name, add, remove)
with open(tf.name) as f:
actual_after = f.readlines()
self.assertEqual(after.splitlines(True), actual_after)
self.assertEqual(notadded, actual_notadded)
self.assertEqual(notremoved, actual_notremoved)
finally:
os.remove(tf.name)


def test_bblayers_remove(self):
before = r"""
# A comment

BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS = " \
/home/user/path/layer1 \
/home/user/path/layer2 \
/home/user/path/subpath/layer3 \
/home/user/path/layer4 \
"
"""
after = r"""
# A comment

BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS = " \
/home/user/path/layer1 \
/home/user/path/subpath/layer3 \
/home/user/path/layer4 \
"
"""
self._test_bblayers_edit(before, after,
None,
'/home/user/path/layer2',
[],
[])


def test_bblayers_add(self):
before = r"""
# A comment

BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS = " \
/home/user/path/layer1 \
/home/user/path/layer2 \
/home/user/path/subpath/layer3 \
/home/user/path/layer4 \
"
"""
after = r"""
# A comment

BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS = " \
/home/user/path/layer1 \
/home/user/path/layer2 \
/home/user/path/subpath/layer3 \
/home/user/path/layer4 \
/other/path/to/layer5 \
"
"""
self._test_bblayers_edit(before, after,
'/other/path/to/layer5/',
None,
[],
[])


def test_bblayers_add_remove(self):
before = r"""
# A comment

BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS = " \
/home/user/path/layer1 \
/home/user/path/layer2 \
/home/user/path/subpath/layer3 \
/home/user/path/layer4 \
"
"""
after = r"""
# A comment

BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS = " \
/home/user/path/layer1 \
/home/user/path/layer2 \
/home/user/path/layer4 \
/other/path/to/layer5 \
"
"""
self._test_bblayers_edit(before, after,
['/other/path/to/layer5', '/home/user/path/layer2/'], '/home/user/path/subpath/layer3/',
['/home/user/path/layer2'],
[])


def test_bblayers_add_remove_home(self):
before = r"""
# A comment

BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS = " \
~/path/layer1 \
~/path/layer2 \
~/otherpath/layer3 \
~/path/layer4 \
"
"""
after = r"""
# A comment

BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS = " \
~/path/layer2 \
~/path/layer4 \
~/path2/layer5 \
"
"""
self._test_bblayers_edit(before, after,
[os.environ['HOME'] + '/path/layer4', '~/path2/layer5'],
[os.environ['HOME'] + '/otherpath/layer3', '~/path/layer1', '~/path/notinlist'],
[os.environ['HOME'] + '/path/layer4'],
['~/path/notinlist'])


def test_bblayers_add_remove_plusequals(self):
before = r"""
# A comment

BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS += " \
/home/user/path/layer1 \
/home/user/path/layer2 \
"
"""
after = r"""
# A comment

BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS += " \
/home/user/path/layer2 \
/home/user/path/layer3 \
"
"""
self._test_bblayers_edit(before, after,
'/home/user/path/layer3',
'/home/user/path/layer1',
[],
[])


def test_bblayers_add_remove_plusequals2(self):
before = r"""
# A comment

BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS += " \
/home/user/path/layer1 \
/home/user/path/layer2 \
/home/user/path/layer3 \
"
BBLAYERS += "/home/user/path/layer4"
BBLAYERS += "/home/user/path/layer5"
"""
after = r"""
# A comment

BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS += " \
/home/user/path/layer2 \
/home/user/path/layer3 \
"
BBLAYERS += "/home/user/path/layer5"
BBLAYERS += "/home/user/otherpath/layer6"
"""
self._test_bblayers_edit(before, after,
['/home/user/otherpath/layer6', '/home/user/path/layer3'], ['/home/user/path/layer1', '/home/user/path/layer4', '/home/user/path/layer7'],
['/home/user/path/layer3'],
['/home/user/path/layer7'])
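A usage sketch for the behaviour the `EditBbLayersConf` tests pin down: paths are normalised (trailing slashes, `~`) before matching, and the function returns what it could not add or remove. The conf path is illustrative:

```python
import bb.utils

notadded, notremoved = bb.utils.edit_bblayers_conf(
    'conf/bblayers.conf',
    add='/other/path/to/layer5',
    remove='/home/user/path/layer2')
print(notadded, notremoved)   # ([], []) when both operations succeeded
```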
@@ -36,13 +36,13 @@ class Tinfoil:

# Set up logging
self.logger = logging.getLogger('BitBake')
self._log_hdlr = logging.StreamHandler(output)
bb.msg.addDefaultlogFilter(self._log_hdlr)
console = logging.StreamHandler(output)
bb.msg.addDefaultlogFilter(console)
format = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
if output.isatty():
format.enable_color()
self._log_hdlr.setFormatter(format)
self.logger.addHandler(self._log_hdlr)
console.setFormatter(format)
self.logger.addHandler(console)

self.config = CookerConfiguration()
configparams = TinfoilConfigParameters(parse_only=True)
@@ -88,7 +88,6 @@ class Tinfoil:
self.cooker.shutdown(force=True)
self.cooker.post_serve()
self.cooker.unlockBitbake()
self.logger.removeHandler(self._log_hdlr)

class TinfoilConfigParameters(ConfigParameters):
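Why the Tinfoil hunk keeps the handler on `self`: storing the `StreamHandler` as an attribute lets `shutdown()` detach exactly the handler it attached, instead of leaking one handler per instance. A self-contained sketch of the pattern:

```python
import logging
import sys

class Session:
    def __init__(self, output=sys.stdout):
        self.logger = logging.getLogger('BitBake')
        self._log_hdlr = logging.StreamHandler(output)  # kept for later removal
        self.logger.addHandler(self._log_hdlr)

    def shutdown(self):
        # Detaches only our handler; other users of the logger are untouched.
        self.logger.removeHandler(self._log_hdlr)

s = Session()
s.logger.warning("hello")
s.shutdown()
```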
@@ -16,43 +16,29 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import datetime
import sys
import bb
import re
import os
import ast

os.environ["DJANGO_SETTINGS_MODULE"] = "toaster.toastermain.settings"


from django.utils import timezone


def _configure_toaster():
""" Add toaster to sys path for importing modules
"""
sys.path.append(os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(__file__))), 'toaster'))
_configure_toaster()

import toaster.toastermain.settings as toaster_django_settings
from toaster.orm.models import Build, Task, Recipe, Layer_Version, Layer, Target, LogMessage, HelpText
from toaster.orm.models import Target_Image_File, BuildArtifact
from toaster.orm.models import Variable, VariableHistory
from toaster.orm.models import Package, Package_File, Target_Installed_Package, Target_File
from toaster.orm.models import Task_Dependency, Package_Dependency
from toaster.orm.models import Recipe_Dependency

from toaster.orm.models import Project
from bldcontrol.models import BuildEnvironment, BuildRequest

from bb.msg import BBLogFormatter as formatter
from bb.msg import BBLogFormatter as format
from django.db import models
from pprint import pformat
import logging

from django.db import transaction, connection

# pylint: disable=invalid-name
# the logger name is standard throughout BitBake
logger = logging.getLogger("ToasterLogger")
logger = logging.getLogger("BitBake")


class NotExisting(Exception):
@@ -66,9 +52,9 @@ class ORMWrapper(object):

def __init__(self):
self.layer_version_objects = []
self.layer_version_built = []
self.task_objects = {}
self.recipe_objects = {}
pass

@staticmethod
def _build_key(**kwargs):
@@ -95,8 +81,8 @@ class ORMWrapper(object):

created = False
if not key in vars(self)[dictname].keys():
vars(self)[dictname][key], created = \
clazz.objects.get_or_create(**kwargs)
vars(self)[dictname][key] = clazz.objects.create(**kwargs)
created = True

return (vars(self)[dictname][key], created)

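The `_cached_get_or_create` hunk above swaps `objects.create()` for Django's `get_or_create()`, which makes filling the cache idempotent: a second call with the same key finds the existing row instead of raising or duplicating. The shape of the helper in isolation (the model class is passed in; names are illustrative):

```python
def cached_get_or_create(cache, clazz, **kwargs):
    key = tuple(sorted(kwargs.items()))
    created = False
    if key not in cache:
        # get_or_create() is safe to call repeatedly; create() is not.
        cache[key], created = clazz.objects.get_or_create(**kwargs)
    return cache[key], created
```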
@@ -117,14 +103,7 @@ class ORMWrapper(object):

return vars(self)[dictname][key]

# pylint: disable=no-self-use
# we disable detection of no self use in functions because the methods actually work on the object
# even if they don't touch self anywhere

# pylint: disable=bad-continuation
# we do not follow the python conventions for continuation indentation due to long lines here

def create_build_object(self, build_info, brbe, project_id):
def create_build_object(self, build_info, brbe):
assert 'machine' in build_info
assert 'distro' in build_info
assert 'distro_version' in build_info
@@ -133,38 +112,7 @@ class ORMWrapper(object):
assert 'build_name' in build_info
assert 'bitbake_version' in build_info

prj = None
buildrequest = None
if brbe is not None: # this build was triggered by a request from a user
logger.debug(1, "buildinfohelper: brbe is %s" % brbe)
br, _ = brbe.split(":")
buildrequest = BuildRequest.objects.get(pk = br)
prj = buildrequest.project

elif project_id is not None: # this build was triggered by an external system for a specific project
logger.debug(1, "buildinfohelper: project is %s" % prj)
prj = Project.objects.get(pk = project_id)

else: # this build was triggered by a legacy system, or command line interactive mode
prj = Project.objects.get_default_project()
logger.debug(1, "buildinfohelper: project is not specified, defaulting to %s" % prj)


if buildrequest is not None:
build = buildrequest.build
logger.info("Updating existing build, with %s", build_info)
build.project = prj
build.machine=build_info['machine']
build.distro=build_info['distro']
build.distro_version=build_info['distro_version']
build.cooker_log_path=build_info['cooker_log_path']
build.build_name=build_info['build_name']
build.bitbake_version=build_info['bitbake_version']
build.save()

else:
build = Build.objects.create(
project = prj,
build = Build.objects.create(
machine=build_info['machine'],
distro=build_info['distro'],
distro_version=build_info['distro_version'],
@@ -175,33 +123,31 @@ class ORMWrapper(object):
bitbake_version=build_info['bitbake_version'])

logger.debug(1, "buildinfohelper: build is created %s" % build)
if brbe is not None:
logger.debug(1, "buildinfohelper: brbe is %s" % brbe)
from bldcontrol.models import BuildEnvironment, BuildRequest
br, be = brbe.split(":")

if buildrequest is not None:
buildrequest = BuildRequest.objects.get(pk = br)
buildrequest.build = build
buildrequest.save()

build.project_id = buildrequest.project_id
build.save()
return build

@staticmethod
def get_or_create_targets(target_info):
result = []
for target in target_info['targets']:
task = ''
if ':' in target:
target, task = target.split(':', 1)
if task.startswith('do_'):
task = task[3:]
if task == 'build':
task = ''
obj, created = Target.objects.get_or_create(build=target_info['build'],
target=target)
if created:
obj.is_image = False
if task:
obj.task = task
obj.save()
result.append(obj)
return result
def create_target_objects(self, target_info):
assert 'build' in target_info
assert 'targets' in target_info

targets = []
for tgt_name in target_info['targets']:
tgt_object = Target.objects.create( build = target_info['build'],
target = tgt_name,
is_image = False,
)
targets.append(tgt_object)
return targets

def update_build_object(self, build, errors, warnings, taskfailures):
assert isinstance(build,Build)
@@ -212,7 +158,10 @@ class ORMWrapper(object):
if errors or taskfailures:
outcome = Build.FAILED

build.completed_on = timezone.now()
build.completed_on = datetime.datetime.now()
build.timespent = int((build.completed_on - build.started_on).total_seconds())
build.errors_no = errors
build.warnings_no = warnings
build.outcome = outcome
build.save()

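The `completed_on` hunk trades `datetime.datetime.now()` for `timezone.now()`. With `USE_TZ` enabled Django stores timezone-aware datetimes, and subtracting a naive "now" from an aware `started_on` raises `TypeError`; aware-minus-aware is safe. A standalone illustration (the minimal `settings.configure` call exists only to make the snippet runnable outside a project):

```python
from django.conf import settings
settings.configure(USE_TZ=True)   # stand-in for a real Django settings module

from django.utils import timezone

started_on = timezone.now()
completed_on = timezone.now()
timespent = int((completed_on - started_on).total_seconds())
print(timespent >= 0)   # True; both datetimes carry tzinfo
```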
@@ -232,8 +181,8 @@ class ORMWrapper(object):
task_name=task_information['task_name']
)
if created and must_exist:
task_information['debug'] = "build id %d, recipe id %d" % (task_information['build'].pk, task_information['recipe'].pk)
raise NotExisting("Task object created when expected to exist", task_information)
task_information['debug'] = "build id %d, recipe id %d" % (task_information['build'].pk, task_information['recipe'].pk)
raise NotExisting("Task object created when expected to exist", task_information)

object_changed = False
for v in vars(task_object):
@@ -272,91 +221,40 @@ class ORMWrapper(object):
def get_update_recipe_object(self, recipe_information, must_exist = False):
assert 'layer_version' in recipe_information
assert 'file_path' in recipe_information
assert 'pathflags' in recipe_information

assert not recipe_information['file_path'].startswith("/") # we should have layer-relative paths at all times
if recipe_information['file_path'].startswith(recipe_information['layer_version'].layer.local_path):
recipe_information['file_path'] = recipe_information['file_path'][len(recipe_information['layer_version'].layer.local_path):].lstrip("/")

recipe_object, created = self._cached_get_or_create(Recipe, layer_version=recipe_information['layer_version'],
file_path=recipe_information['file_path'])
if created and must_exist:
raise NotExisting("Recipe object created when expected to exist", recipe_information)

def update_recipe_obj(recipe_object):
object_changed = False
for v in vars(recipe_object):
if v in recipe_information.keys():
object_changed = True
vars(recipe_object)[v] = recipe_information[v]
object_changed = False
for v in vars(recipe_object):
if v in recipe_information.keys():
object_changed = True
vars(recipe_object)[v] = recipe_information[v]

if object_changed:
recipe_object.save()
if object_changed:
recipe_object.save()

recipe, created = self._cached_get_or_create(Recipe, layer_version=recipe_information['layer_version'],
file_path=recipe_information['file_path'], pathflags = recipe_information['pathflags'])

update_recipe_obj(recipe)

built_recipe = None
# Create a copy of the recipe for historical puposes and update it
for built_layer in self.layer_version_built:
if built_layer.layer == recipe_information['layer_version'].layer:
built_recipe, c = self._cached_get_or_create(Recipe,
layer_version=built_layer,
file_path=recipe_information['file_path'],
pathflags = recipe_information['pathflags'])
update_recipe_obj(built_recipe)
break


# If we're in analysis mode then we are wholly responsible for the data
# and therefore we return the 'real' recipe rather than the build
# history copy of the recipe.
if recipe_information['layer_version'].build is not None and \
recipe_information['layer_version'].build.project == \
Project.objects.get_default_project():
return recipe

return built_recipe
return recipe_object

def get_update_layer_version_object(self, build_obj, layer_obj, layer_version_information):
if isinstance(layer_obj, Layer_Version):
# We already found our layer version for this build so just
# update it with the new build information
logger.debug("We found our layer from toaster")
layer_obj.local_path = layer_version_information['local_path']
layer_obj.save()
self.layer_version_objects.append(layer_obj)

# create a new copy of this layer version as a snapshot for
# historical purposes
layer_copy, c = Layer_Version.objects.get_or_create(build=build_obj,
layer=layer_obj.layer,
commit=layer_version_information['commit'],
local_path = layer_version_information['local_path'],
)
logger.info("created new historical layer version %d", layer_copy.pk)

self.layer_version_built.append(layer_copy)

return layer_obj

assert isinstance(build_obj, Build)
assert isinstance(layer_obj, Layer)
assert 'branch' in layer_version_information
assert 'commit' in layer_version_information
assert 'priority' in layer_version_information
assert 'local_path' in layer_version_information

# If we're doing a command line build then associate this new layer with the
# project to avoid it 'contaminating' toaster data
project = None
if build_obj.project == Project.objects.get_default_project():
project = build_obj.project

layer_version_object, _ = Layer_Version.objects.get_or_create(
build = build_obj,
layer = layer_obj,
branch = layer_version_information['branch'],
commit = layer_version_information['commit'],
priority = layer_version_information['priority'],
local_path = layer_version_information['local_path'],
project=project)
layer_version_object, created = Layer_Version.objects.get_or_create(
build = build_obj,
layer = layer_obj,
branch = layer_version_information['branch'],
commit = layer_version_information['commit'],
priority = layer_version_information['priority']
)

self.layer_version_objects.append(layer_version_object)

@@ -364,15 +262,18 @@

def get_update_layer_object(self, layer_information, brbe):
assert 'name' in layer_information
assert 'local_path' in layer_information
assert 'layer_index_url' in layer_information

if brbe is None:
layer_object, _ = Layer.objects.get_or_create(
layer_object, created = Layer.objects.get_or_create(
name=layer_information['name'],
local_path=layer_information['local_path'],
layer_index_url=layer_information['layer_index_url'])
return layer_object
else:
# we are under managed mode; we must match the layer used in the Project Layer
from bldcontrol.models import BuildEnvironment, BuildRequest
br_id, be_id = brbe.split(":")

# find layer by checkout path;
@@ -391,18 +292,12 @@ class ORMWrapper(object):
localdirname = os.path.join(bc.be.sourcedir, localdirname)
#logger.debug(1, "Localdirname %s lcal_path %s" % (localdirname, layer_information['local_path']))
if localdirname.startswith(layer_information['local_path']):
# If the build request came from toaster this field
# should contain the information from the layer_version
# That created this build request.
if brl.layer_version:
return brl.layer_version

# we matched the BRLayer, but we need the layer_version that generated this BR; reverse of the Project.schedule_build()
#logger.debug(1, "Matched %s to BRlayer %s" % (pformat(layer_information["local_path"]), localdirname))

for pl in buildrequest.project.projectlayer_set.filter(layercommit__layer__name = brl.name):
if pl.layercommit.layer.vcs_url == brl.giturl :
layer = pl.layercommit.layer
layer.local_path = layer_information['local_path']
layer.save()
return layer

@@ -416,29 +311,26 @@ class ORMWrapper(object):
files = filedata['files']
syms = filedata['syms']

# always create the root directory as a special case;
# note that this is never displayed, so the owner, group,
# size, permission are irrelevant
tf_obj = Target_File.objects.create(target = target_obj,
path = '/',
size = 0,
owner = '',
group = '',
permission = '',
inodetype = Target_File.ITYPE_DIRECTORY)
tf_obj.save()

# insert directories, ordered by name depth
# we insert directories, ordered by name depth
for d in sorted(dirs, key=lambda x:len(x[-1].split("/"))):
(user, group, size) = d[1:4]
permission = d[0][1:]
path = d[4].lstrip(".")

# we already created the root directory, so ignore any
# entry for it
if len(path) == 0:
# we create the root directory as a special case
path = "/"
tf_obj = Target_File.objects.create(
target = target_obj,
path = path,
size = size,
inodetype = Target_File.ITYPE_DIRECTORY,
permission = permission,
owner = user,
group = group,
)
tf_obj.directory = tf_obj
tf_obj.save()
continue

parent_path = "/".join(path.split("/")[:len(path.split("/")) - 1])
if len(parent_path) == 0:
parent_path = "/"
@@ -502,7 +394,7 @@ class ORMWrapper(object):

try:
filetarget_obj = Target_File.objects.get(target = target_obj, path = filetarget_path)
except Target_File.DoesNotExist:
except Exception as e:
# we might have an invalid link; no way to detect this. just set it to None
filetarget_obj = None

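The symlink hunk narrows a bare `except Exception` to the model's `DoesNotExist`, so that only the expected "dangling link target" case is tolerated while database errors stay visible. The pattern in isolation (`Target_File` is the Toaster model referenced above):

```python
def lookup_symlink_target(target_obj, path):
    try:
        return Target_File.objects.get(target=target_obj, path=path)
    except Target_File.DoesNotExist:
        return None   # invalid link: nothing was recorded for that path
```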
@@ -527,12 +419,6 @@ class ORMWrapper(object):
|
||||
errormsg = ""
|
||||
for p in packagedict:
|
||||
searchname = p
|
||||
if p not in pkgpnmap:
|
||||
logger.warning("Image packages list contains %p, but is"
|
||||
" missing from all packages list where the"
|
||||
" metadata comes from. Skipping...", p)
|
||||
continue
|
||||
|
||||
if 'OPKGN' in pkgpnmap[p].keys():
|
||||
searchname = pkgpnmap[p]['OPKGN']
|
||||
|
||||
@@ -576,26 +462,19 @@ class ORMWrapper(object):
|
||||
elif deptype == 'recommends':
|
||||
tdeptype = Package_Dependency.TYPE_TRECOMMENDS
|
||||
|
||||
try:
|
||||
packagedeps_objs.append(Package_Dependency(
|
||||
package = packagedict[p]['object'],
|
||||
depends_on = packagedict[px]['object'],
|
||||
dep_type = tdeptype,
|
||||
target = target_obj))
|
||||
except KeyError as e:
|
||||
logger.warn("Could not add dependency to the package %s "
|
||||
"because %s is an unknown package", p, px)
|
||||
packagedeps_objs.append(Package_Dependency( package = packagedict[p]['object'],
|
||||
depends_on = packagedict[px]['object'],
|
||||
dep_type = tdeptype,
|
||||
target = target_obj))
|
||||
|
||||
if len(packagedeps_objs) > 0:
|
||||
Package_Dependency.objects.bulk_create(packagedeps_objs)
|
||||
else:
|
||||
logger.info("No package dependencies created")
|
||||
|
||||
if len(errormsg) > 0:
|
||||
logger.warn("buildinfohelper: target_package_info could not identify recipes: \n%s", errormsg)
|
||||
if (len(errormsg) > 0):
|
||||
logger.warn("buildinfohelper: target_package_info could not identify recipes: \n%s" % errormsg)
|
||||
|
||||
def save_target_image_file_information(self, target_obj, file_name, file_size):
|
||||
Target_Image_File.objects.create( target = target_obj,
|
||||
target_image_file = Target_Image_File.objects.create( target = target_obj,
|
||||
file_name = file_name,
|
||||
file_size = file_size)
|
||||
|
||||
@@ -635,7 +514,7 @@ class ORMWrapper(object):
|
||||
if 'OPKGN' in package_info.keys():
|
||||
pname = package_info['OPKGN']
|
||||
|
||||
bp_object, _ = Package.objects.get_or_create( build = build_obj,
|
||||
bp_object, created = Package.objects.get_or_create( build = build_obj,
|
||||
name = pname )
|
||||
|
||||
bp_object.installed_name = package_info['PKG']
|
||||
@@ -652,7 +531,7 @@ class ORMWrapper(object):
|
||||
# save any attached file information
|
||||
packagefile_objects = []
|
||||
for path in package_info['FILES_INFO']:
|
||||
packagefile_objects.append(Package_File( package = bp_object,
|
||||
packagefile_objects.append(Package_File( package = bp_object,
|
||||
path = path,
|
||||
size = package_info['FILES_INFO'][path] ))
|
||||
if len(packagefile_objects):
|
||||
@@ -736,19 +615,7 @@ class ORMWrapper(object):
|
||||
|
||||
HelpText.objects.bulk_create(helptext_objects)
|
||||
|
||||
|
||||
class MockEvent(object):
|
||||
""" This object is used to create event, for which normal event-processing methods can
|
||||
be used, out of data that is not coming via an actual event
|
||||
"""
|
||||
def __init__(self):
|
||||
self.msg = None
|
||||
self.levelno = None
|
||||
self.taskname = None
|
||||
self.taskhash = None
|
||||
self.pathname = None
|
||||
self.lineno = None
|
||||
|
||||
class MockEvent: pass # sometimes we mock an event, declare it here
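
For orientation (not part of the diff), a minimal sketch of how such a mock event is fed through the normal event-processing path; the attribute names follow the fuller MockEvent definition above, and "helper" is an assumed BuildInfoHelper instance:

import logging

# hypothetical usage: fabricate an error record and reuse store_log_event()
mock = MockEvent()
mock.levelno = logging.ERROR   # stdlib level; the diff's store_log_error uses formatter.ERROR
mock.msg = "example error text"
mock.pathname = "-- None"
mock.lineno = -1
helper.store_log_event(mock)   # 'helper' is an assumed BuildInfoHelper instance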

class BuildInfoHelper(object):
""" This class gathers the build information from the server and sends it
@@ -757,15 +624,11 @@ class BuildInfoHelper(object):
Keeps in memory all data that needs matching before writing it to the database
"""

# pylint: disable=protected-access
# the code will look into the protected variables of the event; no easy way around this
# pylint: disable=bad-continuation
# we do not follow the python conventions for continuation indentation due to long lines here

def __init__(self, server, has_build_history = False):
self._configure_django()
self.internal_state = {}
self.internal_state['taskdata'] = {}
self.internal_state['targets'] = []
self.task_order = 0
self.autocommit_step = 1
self.server = server
@@ -776,24 +639,27 @@ class BuildInfoHelper(object):
self.has_build_history = has_build_history
self.tmp_dir = self.server.runCommand(["getVariable", "TMPDIR"])[0]
self.brbe = self.server.runCommand(["getVariable", "TOASTER_BRBE"])[0]
self.project = self.server.runCommand(["getVariable", "TOASTER_PROJECT"])[0]
logger.debug(1, "buildinfohelper: Build info helper inited %s" % vars(self))


def _configure_django(self):
# Add toaster to sys path for importing modules
sys.path.append(os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(__file__))), 'toaster'))

###################
## methods to convert event/external info into objects that the ORM layer uses


def _get_build_information(self, build_log_path):
def _get_build_information(self):
build_info = {}
# Generate an identifier for each new build

build_info['machine'] = self.server.runCommand(["getVariable", "MACHINE"])[0]
build_info['distro'] = self.server.runCommand(["getVariable", "DISTRO"])[0]
build_info['distro_version'] = self.server.runCommand(["getVariable", "DISTRO_VERSION"])[0]
build_info['started_on'] = timezone.now()
build_info['completed_on'] = timezone.now()
build_info['cooker_log_path'] = build_log_path
build_info['started_on'] = datetime.datetime.now()
build_info['completed_on'] = datetime.datetime.now()
build_info['cooker_log_path'] = self.server.runCommand(["getVariable", "BB_CONSOLELOG"])[0]
build_info['build_name'] = self.server.runCommand(["getVariable", "BUILDNAME"])[0]
build_info['bitbake_version'] = self.server.runCommand(["getVariable", "BB_VERSION"])[0]

@@ -821,17 +687,18 @@ class BuildInfoHelper(object):
if self.brbe is None:
def _slkey_interactive(layer_version):
assert isinstance(layer_version, Layer_Version)
return len(layer_version.local_path)
return len(layer_version.layer.local_path)

# Heuristics: we always match recipe to the deepest layer path in the discovered layers
for lvo in sorted(self.orm_wrapper.layer_version_objects, reverse=True, key=_slkey_interactive):
# we can match to the recipe file path
if path.startswith(lvo.local_path):
if path.startswith(lvo.layer.local_path):
return lvo

else:
br_id, be_id = self.brbe.split(":")
from bldcontrol.bbcontroller import getBuildEnvironmentController
from bldcontrol.models import BuildRequest
bc = getBuildEnvironmentController(pk = be_id)

def _slkey_managed(layer_version):
@@ -844,47 +711,28 @@ class BuildInfoHelper(object):
if not localdirname.startswith("/"):
localdirname = os.path.join(bc.be.sourcedir, localdirname)
if path.startswith(localdirname):
# If the build request came from toaster this field
# should contain the information from the layer_version
# That created this build request.
if brl.layer_version:
return brl.layer_version

#logger.warn("-- managed: matched path %s with layer %s " % (path, localdirname))
# we matched the BRLayer, but we need the layer_version that generated this br

for lvo in self.orm_wrapper.layer_version_objects:
if brl.name == lvo.layer.name:
return lvo

#if we get here, we didn't read layers correctly; dump whatever information we have on the error log
logger.warn("Could not match layer version for recipe path %s : %s", path, self.orm_wrapper.layer_version_objects)
logger.error("Could not match layer version for recipe path %s : %s" % (path, self.orm_wrapper.layer_version_objects))

#mockup the new layer
unknown_layer, _ = Layer.objects.get_or_create(name="Unidentified layer", layer_index_url="")
unknown_layer_version_obj, _ = Layer_Version.objects.get_or_create(layer = unknown_layer, build = self.internal_state['build'])

# append it so we don't run into this error again and again
self.orm_wrapper.layer_version_objects.append(unknown_layer_version_obj)
unknown_layer, created = Layer.objects.get_or_create(name="__FIXME__unidentified_layer", local_path="/", layer_index_url="")
unknown_layer_version_obj, created = Layer_Version.objects.get_or_create(layer = unknown_layer, build = self.internal_state['build'])

return unknown_layer_version_obj

def _get_recipe_information_from_taskfile(self, taskfile):
localfilepath = taskfile.split(":")[-1]
filepath_flags = ":".join(sorted(taskfile.split(":")[:-1]))
layer_version_obj = self._get_layer_version_for_path(localfilepath)



recipe_info = {}
recipe_info['layer_version'] = layer_version_obj
recipe_info['file_path'] = localfilepath
recipe_info['pathflags'] = filepath_flags

if recipe_info['file_path'].startswith(recipe_info['layer_version'].local_path):
recipe_info['file_path'] = recipe_info['file_path'][len(recipe_info['layer_version'].local_path):].lstrip("/")
else:
raise RuntimeError("Recipe file path %s is not under layer version at %s" % (recipe_info['file_path'], recipe_info['layer_version'].local_path))
recipe_info['file_path'] = taskfile

return recipe_info

@@ -909,6 +757,13 @@ class BuildInfoHelper(object):

return build_stats_path

def _remove_redundant(self, string):
ret = []
for i in string.split():
if i not in ret:
ret.append(i)
return " ".join(sorted(ret))


################################
## externally available methods to store information
@@ -929,16 +784,15 @@ class BuildInfoHelper(object):
for layer in layerinfos:
try:
self.internal_state['lvs'][self.orm_wrapper.get_update_layer_object(layerinfos[layer], self.brbe)] = layerinfos[layer]['version']
self.internal_state['lvs'][self.orm_wrapper.get_update_layer_object(layerinfos[layer], self.brbe)]['local_path'] = layerinfos[layer]['local_path']
except NotExisting as nee:
logger.warn("buildinfohelper: cannot identify layer exception:%s ", nee)
logger.warn("buildinfohelper: cannot identify layer exception:%s " % nee)


def store_started_build(self, event, build_log_path):
def store_started_build(self, event):
assert '_pkgs' in vars(event)
build_information = self._get_build_information(build_log_path)
build_information = self._get_build_information()

build_obj = self.orm_wrapper.create_build_object(build_information, self.brbe, self.project)
build_obj = self.orm_wrapper.create_build_object(build_information, self.brbe)

self.internal_state['build'] = build_obj

@@ -956,40 +810,17 @@ class BuildInfoHelper(object):
target_information['targets'] = event._pkgs
target_information['build'] = build_obj

self.internal_state['targets'] = self.orm_wrapper.get_or_create_targets(target_information)
self.internal_state['targets'] = self.orm_wrapper.create_target_objects(target_information)

# Save build configuration
data = self.server.runCommand(["getAllKeysWithFlags", ["doc", "func"]])[0]

# convert the paths from absolute to relative to either the build directory or layer checkouts
path_prefixes = []

if self.brbe is not None:
_, be_id = self.brbe.split(":")
be = BuildEnvironment.objects.get(pk = be_id)
path_prefixes.append(be.builddir)

for layer in sorted(self.orm_wrapper.layer_version_objects, key = lambda x:len(x.local_path), reverse=True):
path_prefixes.append(layer.local_path)

# we strip the prefixes
for k in data:
if not bool(data[k]['func']):
for vh in data[k]['history']:
if not 'documentation.conf' in vh['file']:
abs_file_name = vh['file']
for pp in path_prefixes:
if abs_file_name.startswith(pp + "/"):
vh['file']=abs_file_name[len(pp + "/"):]
break

# save the variables
self.orm_wrapper.save_build_variables(build_obj, data)

return self.brbe


def update_target_image_file(self, event):
image_fstypes = self.server.runCommand(["getVariable", "IMAGE_FSTYPES"])[0]
evdata = BuildInfoHelper._get_data_from_event(event)

for t in self.internal_state['targets']:
@@ -1051,7 +882,7 @@ class BuildInfoHelper(object):
self.task_order += 1
task_information['order'] = self.task_order

self.orm_wrapper.get_update_task_object(task_information)
task_obj = self.orm_wrapper.get_update_task_object(task_information)

self.internal_state['taskdata'][identifier] = {
'outcome': task_information['outcome'],
@@ -1065,14 +896,14 @@ class BuildInfoHelper(object):

recipe_information = self._get_recipe_information_from_taskfile(taskfile)
try:
if recipe_information['file_path'].startswith(recipe_information['layer_version'].local_path):
recipe_information['file_path'] = recipe_information['file_path'][len(recipe_information['layer_version'].local_path):].lstrip("/")
if recipe_information['file_path'].startswith(recipe_information['layer_version'].layer.local_path):
recipe_information['file_path'] = recipe_information['file_path'][len(recipe_information['layer_version'].layer.local_path):].lstrip("/")

recipe_object = Recipe.objects.get(layer_version = recipe_information['layer_version'],
file_path__endswith = recipe_information['file_path'],
name = recipename)
except Recipe.DoesNotExist:
logger.error("Could not find recipe for recipe_information %s name %s" , pformat(recipe_information), recipename)
logger.error("Could not find recipe for recipe_information %s name %s" % (pformat(recipe_information), recipename))
raise

task_information = {}
@@ -1083,7 +914,7 @@ class BuildInfoHelper(object):
task_information['disk_io'] = taskstats['disk_io']
if 'elapsed_time' in taskstats:
task_information['elapsed_time'] = taskstats['elapsed_time']
self.orm_wrapper.get_update_task_object(task_information)
task_obj = self.orm_wrapper.get_update_task_object(task_information, True) # must exist

def update_and_store_task(self, event):
assert 'taskfile' in vars(event)
@@ -1149,7 +980,7 @@ class BuildInfoHelper(object):
def store_missed_state_tasks(self, event):
for (fn, taskname, taskhash, sstatefile) in BuildInfoHelper._get_data_from_event(event)['missed']:

# identifier = fn + taskname + "_setscene"
identifier = fn + taskname + "_setscene"
recipe_information = self._get_recipe_information_from_taskfile(fn)
recipe = self.orm_wrapper.get_update_recipe_object(recipe_information)
mevent = MockEvent()
@@ -1157,7 +988,7 @@ class BuildInfoHelper(object):
mevent.taskhash = taskhash
task_information = self._get_task_information(mevent,recipe)

task_information['start_time'] = timezone.now()
task_information['start_time'] = datetime.datetime.now()
task_information['outcome'] = Task.OUTCOME_NA
task_information['sstate_checksum'] = taskhash
task_information['sstate_result'] = Task.SSTATE_MISS
@@ -1167,7 +998,7 @@ class BuildInfoHelper(object):

for (fn, taskname, taskhash, sstatefile) in BuildInfoHelper._get_data_from_event(event)['found']:

# identifier = fn + taskname + "_setscene"
identifier = fn + taskname + "_setscene"
recipe_information = self._get_recipe_information_from_taskfile(fn)
recipe = self.orm_wrapper.get_update_recipe_object(recipe_information)
mevent = MockEvent()
@@ -1184,22 +1015,15 @@ class BuildInfoHelper(object):
# for all image targets
for target in self.internal_state['targets']:
if target.is_image:
pkgdata = BuildInfoHelper._get_data_from_event(event)['pkgdata']
imgdata = BuildInfoHelper._get_data_from_event(event)['imgdata'][target.target]
filedata = BuildInfoHelper._get_data_from_event(event)['filedata'][target.target]

try:
pkgdata = BuildInfoHelper._get_data_from_event(event)['pkgdata']
imgdata = BuildInfoHelper._get_data_from_event(event)['imgdata'][target.target]
self.orm_wrapper.save_target_package_information(self.internal_state['build'], target, imgdata, pkgdata, self.internal_state['recipes'])
except KeyError as e:
logger.warn("KeyError in save_target_package_information"
"%s ", e)

try:
filedata = BuildInfoHelper._get_data_from_event(event)['filedata'][target.target]
self.orm_wrapper.save_target_file_information(self.internal_state['build'], target, filedata)
except KeyError as e:
logger.warn("KeyError in save_target_file_information"
"%s ", e)

except KeyError:
# we must have not got the data for this image, nothing to save
pass



@@ -1214,7 +1038,7 @@ class BuildInfoHelper(object):
# save layer version priorities
if 'layer-priorities' in event._depgraph.keys():
for lv in event._depgraph['layer-priorities']:
(_, path, _, priority) = lv
(name, path, regexp, priority) = lv
layer_version_obj = self._get_layer_version_for_path(path[1:]) # paths start with a ^
assert layer_version_obj is not None
layer_version_obj.priority = priority
@@ -1224,9 +1048,8 @@ class BuildInfoHelper(object):
self.internal_state['recipes'] = {}
for pn in event._depgraph['pn']:

file_name = event._depgraph['pn'][pn]['filename'].split(":")[-1]
pathflags = ":".join(sorted(event._depgraph['pn'][pn]['filename'].split(":")[:-1]))
layer_version_obj = self._get_layer_version_for_path(file_name)
file_name = event._depgraph['pn'][pn]['filename']
layer_version_obj = self._get_layer_version_for_path(file_name.split(":")[-1])

assert layer_version_obj is not None

@@ -1256,13 +1079,6 @@ class BuildInfoHelper(object):
recipe_info['bugtracker'] = event._depgraph['pn'][pn]['bugtracker']

recipe_info['file_path'] = file_name
recipe_info['pathflags'] = pathflags

if recipe_info['file_path'].startswith(recipe_info['layer_version'].local_path):
recipe_info['file_path'] = recipe_info['file_path'][len(recipe_info['layer_version'].local_path):].lstrip("/")
else:
raise RuntimeError("Recipe file path %s is not under layer version at %s" % (recipe_info['file_path'], recipe_info['layer_version'].local_path))

recipe = self.orm_wrapper.get_update_recipe_object(recipe_info)
recipe.is_image = False
if 'inherits' in event._depgraph['pn'][pn].keys():
@@ -1327,8 +1143,8 @@ class BuildInfoHelper(object):
taskdeps_objects.append(Task_Dependency( task = target, depends_on = dep ))
Task_Dependency.objects.bulk_create(taskdeps_objects)

if len(errormsg) > 0:
logger.warn("buildinfohelper: dependency info could not identify recipes: \n%s", errormsg)
if (len(errormsg) > 0):
logger.warn("buildinfohelper: dependency info could not identify recipes: \n%s" % errormsg)


def store_build_package_information(self, event):
@@ -1339,8 +1155,8 @@ class BuildInfoHelper(object):
)

def _store_build_done(self, errorcode):
logger.info("Build exited with errorcode %d", errorcode)
br_id, be_id = self.brbe.split(":")
from bldcontrol.models import BuildEnvironment, BuildRequest
be = BuildEnvironment.objects.get(pk = be_id)
be.lock = BuildEnvironment.LOCK_LOCK
be.save()
@@ -1355,10 +1171,10 @@ class BuildInfoHelper(object):

def store_log_error(self, text):
mockevent = MockEvent()
mockevent.levelno = formatter.ERROR
mockevent.levelno = format.ERROR
mockevent.msg = text
mockevent.pathname = '-- None'
mockevent.lineno = LogMessage.ERROR
mockevent.lineno = -1
self.store_log_event(mockevent)

def store_log_exception(self, text, backtrace = ""):
@@ -1371,7 +1187,7 @@ class BuildInfoHelper(object):


def store_log_event(self, event):
if event.levelno < formatter.WARNING:
if event.levelno < format.WARNING:
return

if 'args' in vars(event):
@@ -1382,11 +1198,13 @@ class BuildInfoHelper(object):
if not 'backlog' in self.internal_state:
self.internal_state['backlog'] = []
self.internal_state['backlog'].append(event)
return
else: # we're under Toaster control, the build is already created
br, _ = self.brbe.split(":")
else: # we're under Toaster control, post the errors to the build request
from bldcontrol.models import BuildRequest, BRError
br, be = self.brbe.split(":")
buildrequest = BuildRequest.objects.get(pk = br)
self.internal_state['build'] = buildrequest.build
brerror = BRError.objects.create(req = buildrequest, errtype="build", errmsg = event.msg)

return

if 'build' in self.internal_state and 'backlog' in self.internal_state:
# if we have a backlog of events, do our best to save them here
@@ -1395,27 +1213,23 @@ class BuildInfoHelper(object):
logger.debug(1, "buildinfohelper: Saving stored event %s " % tempevent)
self.store_log_event(tempevent)
else:
logger.info("buildinfohelper: All events saved")
logger.error("buildinfohelper: Events not saved: %s" % self.internal_state['backlog'])
del self.internal_state['backlog']

log_information = {}
log_information['build'] = self.internal_state['build']
if event.levelno == formatter.CRITICAL:
log_information['level'] = LogMessage.CRITICAL
elif event.levelno == formatter.ERROR:
if event.levelno == format.ERROR:
log_information['level'] = LogMessage.ERROR
elif event.levelno == formatter.WARNING:
elif event.levelno == format.WARNING:
log_information['level'] = LogMessage.WARNING
elif event.levelno == -2: # toaster self-logging
log_information['level'] = -2
elif event.levelno == -1: # toaster self-logging
log_information['level'] = -1
else:
log_information['level'] = LogMessage.INFO

log_information['message'] = event.msg
log_information['pathname'] = event.pathname
log_information['lineno'] = event.lineno
logger.info("Logging error 2: %s", log_information)

self.orm_wrapper.create_logmessage(log_information)

def close(self, errorcode):
@@ -1430,7 +1244,7 @@ class BuildInfoHelper(object):
else:
# we have no build, and we still have events; something amazingly wrong happened
for event in self.internal_state['backlog']:
logger.error("UNSAVED log: %s", event.msg)
logger.error("UNSAVED log: %s", event.msg)

if not connection.features.autocommits_when_autocommit_is_off:
transaction.set_autocommit(True)

@@ -309,7 +309,7 @@ class Parameters:

def hob_conf_filter(fn, data):
if fn.endswith("/local.conf"):
distro = data.getVar("DISTRO_HOB", False)
distro = data.getVar("DISTRO_HOB")
if distro:
if distro != "defaultsetup":
data.setVar("DISTRO", distro)
@@ -320,13 +320,13 @@ def hob_conf_filter(fn, data):
"BB_NUMBER_THREADS_HOB", "PARALLEL_MAKE_HOB", "DL_DIR_HOB", \
"SSTATE_DIR_HOB", "SSTATE_MIRRORS_HOB", "INCOMPATIBLE_LICENSE_HOB"]
for key in keys:
var_hob = data.getVar(key, False)
var_hob = data.getVar(key)
if var_hob:
data.setVar(key.split("_HOB")[0], var_hob)
return

if fn.endswith("/bblayers.conf"):
layers = data.getVar("BBLAYERS_HOB", False)
layers = data.getVar("BBLAYERS_HOB")
if layers:
data.setVar("BBLAYERS", layers)
return

@@ -440,17 +440,11 @@ class HobHandler(gobject.GObject):
self.commands_async.append(self.SUB_BUILD_RECIPES)
self.run_next_command(self.GENERATE_PACKAGES)

def generate_image(self, image, base_image, image_packages=None, toolchain_packages=None, default_task="build"):
def generate_image(self, image, base_image, image_packages=[], toolchain_packages=[], default_task="build"):
self.image = image
self.base_image = base_image
if image_packages:
self.package_queue = image_packages
else:
self.package_queue = []
if toolchain_packages:
self.toolchain_packages = toolchain_packages
else:
self.toolchain_packages = []
self.package_queue = image_packages
self.toolchain_packages = toolchain_packages
self.default_task = default_task
self.runCommand(["setPrePostConfFiles", "conf/.hob.conf", ""])
self.commands_async.append(self.SUB_PARSE_CONFIG)

@@ -230,7 +230,7 @@ def _log_settings_from_server(server):
if error:
logger.error("Unable to get the value of BBINCLUDELOGS_LINES variable: %s" % error)
raise BaseException(error)
consolelogfile, error = server.runCommand(["getSetVariable", "BB_CONSOLELOG"])
consolelogfile, error = server.runCommand(["getVariable", "BB_CONSOLELOG"])
if error:
logger.error("Unable to get the value of BB_CONSOLELOG variable: %s" % error)
raise BaseException(error)

@@ -21,8 +21,6 @@
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

from __future__ import division
import time
import sys
try:
import bb
except RuntimeError as exc:
@@ -32,93 +30,57 @@ from bb.ui import uihelper
from bb.ui.buildinfohelper import BuildInfoHelper

import bb.msg
import copy
import fcntl
import logging
import os

# pylint: disable=invalid-name
# module properties for UI modules are read by bitbake and the contract should not be broken

import progressbar
import signal
import struct
import sys
import time
import xmlrpclib

featureSet = [bb.cooker.CookerFeatures.HOB_EXTRA_CACHES, bb.cooker.CookerFeatures.SEND_DEPENDS_TREE, bb.cooker.CookerFeatures.BASEDATASTORE_TRACKING, bb.cooker.CookerFeatures.SEND_SANITYEVENTS]

logger = logging.getLogger("ToasterLogger")
logger = logging.getLogger("BitBake")
interactive = sys.stdout.isatty()



def _log_settings_from_server(server):
# Get values of variables which control our output
includelogs, error = server.runCommand(["getVariable", "BBINCLUDELOGS"])
if error:
logger.error("Unable to get the value of BBINCLUDELOGS variable: %s", error)
logger.error("Unable to get the value of BBINCLUDELOGS variable: %s" % error)
raise BaseException(error)
loglines, error = server.runCommand(["getVariable", "BBINCLUDELOGS_LINES"])
if error:
logger.error("Unable to get the value of BBINCLUDELOGS_LINES variable: %s", error)
logger.error("Unable to get the value of BBINCLUDELOGS_LINES variable: %s" % error)
raise BaseException(error)
consolelogfile, error = server.runCommand(["getVariable", "BB_CONSOLELOG"])
if error:
logger.error("Unable to get the value of BB_CONSOLELOG variable: %s", error)
logger.error("Unable to get the value of BB_CONSOLELOG variable: %s" % error)
raise BaseException(error)
return consolelogfile
return includelogs, loglines, consolelogfile

# create a log file for a single build and direct the logger at it;
# log file name is timestamped to the millisecond (depending
# on system clock accuracy) to ensure it doesn't overlap with
# other log file names
#
# returns (log file, path to log file) for a build
def _open_build_log(log_dir):
format_str = "%(levelname)s: %(message)s"

now = time.time()
now_ms = int((now - int(now)) * 1000)
time_str = time.strftime('build_%Y%m%d_%H%M%S', time.localtime(now))
log_file_name = time_str + ('.%d.log' % now_ms)
build_log_file_path = os.path.join(log_dir, log_file_name)

build_log = logging.FileHandler(build_log_file_path)

logformat = bb.msg.BBLogFormatter(format_str)
build_log.setFormatter(logformat)

bb.msg.addDefaultlogFilter(build_log)
logger.addHandler(build_log)

return (build_log, build_log_file_path)
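
To make the naming scheme concrete (assumed clock values, not from the diff), a build started at 2015-06-01 12:00:00 with a 123 ms fractional part would log to:

# <log_dir>/build_20150601_120000.123.log
# _open_build_log returns this path together with the logging.FileHandler attached to it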

# stop logging to the build log if it exists
def _close_build_log(build_log):
if build_log:
build_log.flush()
build_log.close()
logger.removeHandler(build_log)

def main(server, eventHandler, params):
# set to a logging.FileHandler instance when a build starts;
# see _open_build_log()
build_log = None

# set to the log path when a build starts
build_log_file_path = None
def main(server, eventHandler, params ):

helper = uihelper.BBUIHelper()

# TODO don't use log output to determine when bitbake has started
#
# WARNING: this log handler cannot be removed, as localhostbecontroller
# relies on output in the toaster_ui.log file to determine whether
# the bitbake server has started, which only happens if
# this logger is setup here (see the TODO in the loop below)
console = logging.StreamHandler(sys.stdout)
format_str = "%(levelname)s: %(message)s"
formatter = bb.msg.BBLogFormatter(format_str)
format = bb.msg.BBLogFormatter(format_str)
bb.msg.addDefaultlogFilter(console)
console.setFormatter(formatter)
console.setFormatter(format)
logger.addHandler(console)
logger.setLevel(logging.INFO)

includelogs, loglines, consolelogfile = _log_settings_from_server(server)

# verify and warn
build_history_enabled = True
inheritlist, _ = server.runCommand(["getVariable", "INHERIT"])
inheritlist, error = server.runCommand(["getVariable", "INHERIT"])

if not "buildhistory" in inheritlist.split(" "):
logger.warn("buildhistory is not enabled. Please enable INHERIT += \"buildhistory\" to see image details.")
@@ -126,11 +88,10 @@ def main(server, eventHandler, params):

if not params.observe_only:
logger.error("ToasterUI can only work in observer mode")
return 1
return


# set to 1 when toasterui needs to shut down
main.shutdown = 0

interrupted = False
return_value = 0
errors = 0
@@ -140,82 +101,56 @@ def main(server, eventHandler, params):

buildinfohelper = BuildInfoHelper(server, build_history_enabled)

# write our own log files into bitbake's log directory;
# we're only interested in the path to the parent directory of
# this file, as we're writing our own logs into the same directory
consolelogfile = _log_settings_from_server(server)
log_dir = os.path.dirname(consolelogfile)
bb.utils.mkdirhier(log_dir)
if buildinfohelper.brbe is not None and consolelogfile:
# if we are under managed mode we have no other UI and we need to write our own file
bb.utils.mkdirhier(os.path.dirname(consolelogfile))
conlogformat = bb.msg.BBLogFormatter(format_str)
consolelog = logging.FileHandler(consolelogfile)
bb.msg.addDefaultlogFilter(consolelog)
consolelog.setFormatter(conlogformat)
logger.addHandler(consolelog)


while True:
try:
event = eventHandler.waitEvent(0.25)
if first:
first = False

# TODO don't use log output to determine when bitbake has started
#
# this is the line localhostbecontroller needs to
# see in toaster_ui.log which it uses to decide whether
# the bitbake server has started...
logger.info("ToasterUI waiting for events")

if event is None:
if main.shutdown > 0:
# if shutting down, close any open build log first
_close_build_log(build_log)

break
continue

helper.eventHandler(event)

# pylint: disable=protected-access
# the code will look into the protected variables of the event; no easy way around this

# we treat ParseStarted as the first event of toaster-triggered
# builds; that way we get the Build Configuration included in the log
# and any errors that occur before BuildStarted is fired
if isinstance(event, bb.event.ParseStarted):
if not (build_log and build_log_file_path):
build_log, build_log_file_path = _open_build_log(log_dir)
continue

if isinstance(event, bb.event.BuildStarted):
# command-line builds don't fire a ParseStarted event,
# so we have to start the log file for those on BuildStarted instead
if not (build_log and build_log_file_path):
build_log, build_log_file_path = _open_build_log(log_dir)

buildinfohelper.store_started_build(event, build_log_file_path)
buildinfohelper.store_started_build(event)

if isinstance(event, (bb.build.TaskStarted, bb.build.TaskSucceeded, bb.build.TaskFailedSilent)):
buildinfohelper.update_and_store_task(event)
logger.info("Logfile for task %s", event.logfile)
logger.warn("Logfile for task %s" % event.logfile)
continue

if isinstance(event, bb.build.TaskBase):
logger.info(event._message)

if isinstance(event, bb.event.LogExecTTY):
logger.info(event.msg)
logger.warn(event.msg)
continue

if isinstance(event, logging.LogRecord):
if event.levelno == -1:
event.levelno = formatter.ERROR

buildinfohelper.store_log_event(event)

if event.levelno >= formatter.ERROR:
if event.levelno >= format.ERROR:
errors = errors + 1
elif event.levelno == formatter.WARNING:
return_value = 1
elif event.levelno == format.WARNING:
warnings = warnings + 1

# For "normal" logging conditions, don't show note logs from tasks
# but do show them if the user has changed the default log level to
# include verbose/debug messages
if event.taskpid != 0 and event.levelno <= formatter.NOTE:
if event.taskpid != 0 and event.levelno <= format.NOTE:
continue

logger.handle(event)
@@ -223,6 +158,7 @@ def main(server, eventHandler, params):

if isinstance(event, bb.build.TaskFailed):
buildinfohelper.update_and_store_task(event)
return_value = 1
logfile = event.logfile
if logfile and os.path.exists(logfile):
bb.error("Logfile of failure stored in: %s" % logfile)
@@ -232,6 +168,8 @@ def main(server, eventHandler, params):
# timing and error information from the parsing phase in Toaster
if isinstance(event, (bb.event.SanityCheckPassed, bb.event.SanityCheck)):
continue
if isinstance(event, bb.event.ParseStarted):
continue
if isinstance(event, bb.event.ParseProgress):
continue
if isinstance(event, bb.event.ParseCompleted):
@@ -250,6 +188,7 @@ def main(server, eventHandler, params):
continue

if isinstance(event, bb.event.NoProvider):
return_value = 1
errors = errors + 1
if event._runtime:
r = "R"
@@ -299,44 +238,39 @@ def main(server, eventHandler, params):
if isinstance(event, (bb.event.TreeDataPreparationStarted, bb.event.TreeDataPreparationCompleted)):
continue

if isinstance(event, (bb.event.BuildCompleted, bb.command.CommandFailed)):

errorcode = 0
if isinstance(event, bb.command.CommandFailed):
errors += 1
errorcode = 1
logger.error("Command execution failed: %s", event.error)

# turn off logging to the current build log
_close_build_log(build_log)

# reset ready for next BuildStarted
build_log = None

# update the build info helper on BuildCompleted, not on CommandXXX
buildinfohelper.update_build_information(event, errors, warnings, taskfailures)
buildinfohelper.close(errorcode)
# mark the log output; controllers may kill the toasterUI after seeing this log
logger.info("ToasterUI build done 1, brbe: %s", buildinfohelper.brbe )

# we start a new build info
if buildinfohelper.brbe is not None:
logger.debug("ToasterUI under BuildEnvironment management - exiting after the build")
server.terminateServer()
else:
logger.debug("ToasterUI prepared for new build")
errors = 0
warnings = 0
taskfailures = []
buildinfohelper = BuildInfoHelper(server, build_history_enabled)

logger.info("ToasterUI build done 2")
if isinstance(event, (bb.event.BuildCompleted)):
continue

if isinstance(event, (bb.command.CommandCompleted,
bb.command.CommandFailed,
bb.command.CommandExit)):
errorcode = 0
if (isinstance(event, bb.command.CommandFailed)):
event.levelno = format.ERROR
event.msg = "Command Failed " + event.error
event.pathname = ""
event.lineno = 0
buildinfohelper.store_log_event(event)
errors += 1
errorcode = 1
logger.error("Command execution failed: %s", event.error)

buildinfohelper.update_build_information(event, errors, warnings, taskfailures)
buildinfohelper.close(errorcode)
# mark the log output; controllers may kill the toasterUI after seeing this log
logger.info("ToasterUI build done")

# we start a new build info
if buildinfohelper.brbe is not None:

logger.debug(1, "ToasterUI under BuildEnvironment management - exiting after the build")
server.terminateServer()
else:
logger.debug(1, "ToasterUI prepared for new build")
errors = 0
warnings = 0
taskfailures = []
buildinfohelper = BuildInfoHelper(server, build_history_enabled)

continue

@@ -358,13 +292,12 @@ def main(server, eventHandler, params):
elif event.type == "LicenseManifestPath":
buildinfohelper.store_license_manifest_path(event)
else:
logger.error("Unprocessed MetadataEvent %s ", str(event))
logger.error("Unprocessed MetadataEvent %s " % str(event))
continue

if isinstance(event, bb.cooker.CookerExit):
# shutdown when bitbake server shuts down
main.shutdown = 1
continue
# exit when the server exits
break

# ignore
if isinstance(event, (bb.event.BuildBase,
@@ -375,16 +308,14 @@ def main(server, eventHandler, params):
bb.event.OperationProgress,
bb.command.CommandFailed,
bb.command.CommandExit,
bb.command.CommandCompleted,
bb.event.ReachableStamps)):
bb.command.CommandCompleted)):
continue

if isinstance(event, bb.event.DepTreeGenerated):
buildinfohelper.store_dependency_information(event)
continue

logger.warn("Unknown event: %s", event)
return_value += 1
logger.error("Unknown event: %s", event)

except EnvironmentError as ioerror:
# ignore interrupted io
@@ -392,31 +323,31 @@ def main(server, eventHandler, params):
pass
except KeyboardInterrupt:
main.shutdown = 1
pass
except Exception as e:
# print errors to log
import traceback
from pprint import pformat
exception_data = traceback.format_exc()
logger.error("%s\n%s" , e, exception_data)
logger.error("%s\n%s" % (e, exception_data))

_, _, tb = sys.exc_info()
exc_type, exc_value, tb = sys.exc_info()
if tb is not None:
curr = tb
while curr is not None:
logger.error("Error data dump %s\n%s\n" , traceback.format_tb(curr,1), pformat(curr.tb_frame.f_locals))
logger.warn("Error data dump %s\n%s\n" % (traceback.format_tb(curr,1), pformat(curr.tb_frame.f_locals)))
curr = curr.tb_next

# save them to database, if possible; if it fails, we already logged to console.
try:
buildinfohelper.store_log_exception("%s\n%s" % (str(e), exception_data))
except Exception as ce:
logger.error("CRITICAL - Failed to save toaster exception to the database: %s", str(ce))
logger.error("CRITICAL - Failed to save toaster exception to the database: %s" % str(ce))

# make sure we return with an error
return_value += 1
pass

if interrupted and return_value == 0:
return_value += 1
if interrupted:
if return_value == 0:
return_value = 1

logger.warn("Return value is %d", return_value)
return return_value

@@ -29,14 +29,11 @@ import multiprocessing
import fcntl
import subprocess
import glob
import fnmatch
import traceback
import errno
import signal
from commands import getstatusoutput
from contextlib import contextmanager
from ctypes import cdll


logger = logging.getLogger("BitBake.Util")

@@ -743,12 +740,7 @@ def movefile(src, dest, newmtime = None, sstat = None):
renamefailed = 1
if sstat[stat.ST_DEV] == dstat[stat.ST_DEV]:
try:
# os.rename needs to know the dest path ending with file name
# so append the file name to a path only if it's a dir specified
srcfname = os.path.basename(src)
destpath = os.path.join(dest, srcfname) if os.path.isdir(dest) \
else dest
os.rename(src, destpath)
os.rename(src, dest)
renamefailed = 0
except Exception as e:
if e[0] != errno.EXDEV:
@@ -937,17 +929,11 @@ def cpu_count():
def nonblockingfd(fd):
fcntl.fcntl(fd, fcntl.F_SETFL, fcntl.fcntl(fd, fcntl.F_GETFL) | os.O_NONBLOCK)

def process_profilelog(fn, pout = None):
# Either call with a list of filenames and set pout or a filename and optionally pout.
if not pout:
pout = fn + '.processed'
pout = open(pout, 'w')
def process_profilelog(fn):
pout = open(fn + '.processed', 'w')

import pstats
if isinstance(fn, list):
p = pstats.Stats(*fn, stream=pout)
else:
p = pstats.Stats(fn, stream=pout)
p = pstats.Stats(fn, stream=pout)
p.sort_stats('time')
p.print_stats()
p.print_callers()
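
A minimal usage sketch of the reworked profile helper (hypothetical file names): a single profile file gets a default '<fn>.processed' output, while a list of profiles is merged into one explicitly named report:

process_profilelog("profile.log")                        # writes profile.log.processed
process_profilelog(["a.prof", "b.prof"], pout="all.txt") # merged pstats report, explicit output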
@@ -998,62 +984,14 @@ def exec_flat_python_func(func, *args, **kwargs):
bb.utils.better_exec(comp, context, code, '<string>')
return context['retval']

def edit_metadata(meta_lines, variables, varfunc, match_overrides=False):
"""Edit lines from a recipe or config file and modify one or more
specified variable values set in the file using a specified callback
function. Lines are expected to have trailing newlines.
Parameters:
meta_lines: lines from the file; can be a list or an iterable
(e.g. file pointer)
variables: a list of variable names to look for. Functions
may also be specified, but must be specified with '()' at
the end of the name. Note that the function doesn't have
any intrinsic understanding of _append, _prepend, _remove,
or overrides, so these are considered as part of the name.
These values go into a regular expression, so regular
expression syntax is allowed.
varfunc: callback function called for every variable matching
one of the entries in the variables parameter. The function
should take four arguments:
varname: name of variable matched
origvalue: current value in file
op: the operator (e.g. '+=')
newlines: list of lines up to this point. You can use
this to prepend lines before this variable setting
if you wish.
and should return a four-element tuple:
newvalue: new value to substitute in, or None to drop
the variable setting entirely. (If the removal
results in two consecutive blank lines, one of the
blank lines will also be dropped).
newop: the operator to use - if you specify None here,
the original operation will be used.
indent: number of spaces to indent multi-line entries,
or -1 to indent up to the level of the assignment
and opening quote, or a string to use as the indent.
minbreak: True to allow the first element of a
multi-line value to continue on the same line as
the assignment, False to indent before the first
element.
match_overrides: True to match items with _overrides on the end,
False otherwise
Returns a tuple:
updated:
True if changes were made, False otherwise.
newlines:
Lines after processing
def edit_metadata_file(meta_file, variables, func):
"""Edit a recipe or config file and modify one or more specified
variable values set in the file using a specified callback function.
The file is only written to if the value(s) actually change.
"""

var_res = {}
if match_overrides:
override_re = '(_[a-zA-Z0-9-_$(){}]+)?'
else:
override_re = ''
for var in variables:
if var.endswith('()'):
var_res[var] = re.compile('^(%s%s)[ \\t]*\([ \\t]*\)[ \\t]*{' % (var[:-2].rstrip(), override_re))
else:
var_res[var] = re.compile('^(%s%s)[ \\t]*[?+:.]*=[+.]*[ \\t]*(["\'])' % (var, override_re))
var_res[var] = re.compile(r'^%s[ \t]*[?+]*=' % var)

updated = False
varset_start = ''
@@ -1061,159 +999,73 @@ def edit_metadata(meta_lines, variables, varfunc, match_overrides=False):
newlines = []
in_var = None
full_value = ''
var_end = ''

def handle_var_end():
prerun_newlines = newlines[:]
op = varset_start[len(in_var):].strip()
(newvalue, newop, indent, minbreak) = varfunc(in_var, full_value, op, newlines)
changed = (prerun_newlines != newlines)

if newvalue is None:
# Drop the value
return True
elif newvalue != full_value or (newop not in [None, op]):
if newop not in [None, op]:
# Callback changed the operator
varset_new = "%s %s" % (in_var, newop)
else:
varset_new = varset_start

if isinstance(indent, (int, long)):
if indent == -1:
indentspc = ' ' * (len(varset_new) + 2)
else:
indentspc = ' ' * indent
else:
indentspc = indent
if in_var.endswith('()'):
# A function definition
if isinstance(newvalue, list):
newlines.append('%s {\n%s%s\n}\n' % (varset_new, indentspc, ('\n%s' % indentspc).join(newvalue)))
else:
if not newvalue.startswith('\n'):
newvalue = '\n' + newvalue
if not newvalue.endswith('\n'):
newvalue = newvalue + '\n'
newlines.append('%s {%s}\n' % (varset_new, newvalue))
else:
# Normal variable
if isinstance(newvalue, list):
if not newvalue:
# Empty list -> empty string
newlines.append('%s ""\n' % varset_new)
elif minbreak:
# First item on first line
if len(newvalue) == 1:
newlines.append('%s "%s"\n' % (varset_new, newvalue[0]))
else:
newlines.append('%s "%s \\\n' % (varset_new, newvalue[0]))
for item in newvalue[1:]:
newlines.append('%s%s \\\n' % (indentspc, item))
newlines.append('%s"\n' % indentspc)
(newvalue, indent, minbreak) = func(in_var, full_value)
if newvalue != full_value:
if isinstance(newvalue, list):
intentspc = ' ' * indent
if minbreak:
# First item on first line
if len(newvalue) == 1:
newlines.append('%s "%s"\n' % (varset_start, newvalue[0]))
else:
# No item on first line
newlines.append('%s " \\\n' % varset_new)
for item in newvalue:
newlines.append('%s%s \\\n' % (indentspc, item))
newlines.append('%s "%s\\\n' % (varset_start, newvalue[0]))
for item in newvalue[1:]:
newlines.append('%s%s \\\n' % (intentspc, item))
newlines.append('%s"\n' % indentspc)
else:
newlines.append('%s "%s"\n' % (varset_new, newvalue))
# No item on first line
newlines.append('%s " \\\n' % varset_start)
for item in newvalue:
newlines.append('%s%s \\\n' % (intentspc, item))
newlines.append('%s"\n' % intentspc)
else:
newlines.append('%s "%s"\n' % (varset_start, newvalue))
return True
else:
# Put the old lines back where they were
newlines.extend(varlines)
# If newlines was touched by the function, we'll need to return True
return changed
return False

checkspc = False

for line in meta_lines:
if in_var:
value = line.rstrip()
varlines.append(line)
if in_var.endswith('()'):
full_value += '\n' + value
else:
full_value += value[:-1]
if value.endswith(var_end):
if in_var.endswith('()'):
if full_value.count('{') - full_value.count('}') >= 0:
continue
full_value = full_value[:-1]
if handle_var_end():
updated = True
checkspc = True
in_var = None
else:
skip = False
for (varname, var_re) in var_res.iteritems():
res = var_re.match(line)
if res:
isfunc = varname.endswith('()')
if isfunc:
splitvalue = line.split('{', 1)
var_end = '}'
else:
var_end = res.groups()[-1]
splitvalue = line.split(var_end, 1)
varset_start = splitvalue[0].rstrip()
value = splitvalue[1].rstrip()
if not isfunc and value.endswith('\\'):
value = value[:-1]
full_value = value
varlines = [line]
in_var = res.group(1)
if isfunc:
in_var += '()'
if value.endswith(var_end):
full_value = full_value[:-1]
if handle_var_end():
updated = True
checkspc = True
in_var = None
skip = True
break
if not skip:
if checkspc:
checkspc = False
if newlines and newlines[-1] == '\n' and line == '\n':
# Squash blank line if there are two consecutive blanks after a removal
continue
newlines.append(line)
return (updated, newlines)
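
As a usage sketch of the expanded API (hypothetical file and variable), the callback receives four arguments and returns the four-element tuple described in the docstring above:

# force DESCRIPTION to a fixed value, keeping the original operator
def set_desc(varname, origvalue, op, newlines):
    return ('An example description', None, -1, True)

with open('example.bb', 'r') as f:
    updated, newlines = edit_metadata(f, ['DESCRIPTION'], set_desc)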


def edit_metadata_file(meta_file, variables, varfunc):
"""Edit a recipe or config file and modify one or more specified
variable values set in the file using a specified callback function.
The file is only written to if the value(s) actually change.
This is basically the file version of edit_metadata(), see that
function's description for parameter/usage information.
Returns True if the file was written to, False otherwise.
"""
with open(meta_file, 'r') as f:
(updated, newlines) = edit_metadata(f, variables, varfunc)
for line in f:
if in_var:
value = line.rstrip()
varlines.append(line)
full_value += value[:-1]
if value.endswith('"') or value.endswith("'"):
full_value = full_value[:-1]
if handle_var_end():
updated = True
in_var = None
else:
matched = False
for (varname, var_re) in var_res.iteritems():
if var_re.match(line):
splitvalue = line.split('"', 1)
varset_start = splitvalue[0].rstrip()
value = splitvalue[1].rstrip()
if value.endswith('\\'):
value = value[:-1]
full_value = value
varlines = [line]
in_var = varname
if value.endswith('"') or value.endswith("'"):
full_value = full_value[:-1]
if handle_var_end():
updated = True
in_var = None
matched = True
break
if not matched:
newlines.append(line)
if updated:
with open(meta_file, 'w') as f:
f.writelines(newlines)
return updated


def edit_bblayers_conf(bblayers_conf, add, remove):
"""Edit bblayers.conf, adding and/or removing layers
Parameters:
bblayers_conf: path to bblayers.conf file to edit
add: layer path (or list of layer paths) to add; None or empty
list to add nothing
remove: layer path (or list of layer paths) to remove; None or
empty list to remove nothing
Returns a tuple:
notadded: list of layers specified to be added but weren't
(because they were already in the list)
notremoved: list of layers that were specified to be removed
but weren't (because they weren't in the list)
"""
"""Edit bblayers.conf, adding and/or removing layers"""

import fnmatch

@@ -1222,13 +1074,6 @@ def edit_bblayers_conf(bblayers_conf, add, remove):
pth = pth[:-1]
return pth

approved = bb.utils.approved_variables()
def canonicalise_path(pth):
pth = remove_trailing_sep(pth)
if 'HOME' in approved and '~' in pth:
pth = os.path.expanduser(pth)
return pth

def layerlist_param(value):
if not value:
return []
@@ -1237,80 +1082,48 @@ def edit_bblayers_conf(bblayers_conf, add, remove):
else:
return [remove_trailing_sep(value)]

notadded = []
notremoved = []

addlayers = layerlist_param(add)
removelayers = layerlist_param(remove)

# Need to use a list here because we can't set non-local variables from a callback in python 2.x
bblayercalls = []
removed = []
plusequals = False
orig_bblayers = []

def handle_bblayers_firstpass(varname, origvalue, op, newlines):
bblayercalls.append(op)
if op == '=':
del orig_bblayers[:]
orig_bblayers.extend([canonicalise_path(x) for x in origvalue.split()])
return (origvalue, None, 2, False)

def handle_bblayers(varname, origvalue, op, newlines):
def handle_bblayers(varname, origvalue):
bblayercalls.append(varname)
updated = False
bblayers = [remove_trailing_sep(x) for x in origvalue.split()]
if removelayers:
for removelayer in removelayers:
matched = False
for layer in bblayers:
if fnmatch.fnmatch(canonicalise_path(layer), canonicalise_path(removelayer)):
if fnmatch.fnmatch(layer, removelayer):
updated = True
matched = True
bblayers.remove(layer)
removed.append(removelayer)
break
if addlayers and not plusequals:
if not matched:
notremoved.append(removelayer)
if addlayers:
for addlayer in addlayers:
if addlayer not in bblayers:
updated = True
bblayers.append(addlayer)
del addlayers[:]
else:
notadded.append(addlayer)

if updated:
if op == '+=' and not bblayers:
bblayers = None
return (bblayers, None, 2, False)
return (bblayers, 2, False)
else:
return (origvalue, None, 2, False)
return (origvalue, 2, False)

with open(bblayers_conf, 'r') as f:
(_, newlines) = edit_metadata(f, ['BBLAYERS'], handle_bblayers_firstpass)
edit_metadata_file(bblayers_conf, ['BBLAYERS'], handle_bblayers)

if not bblayercalls:
raise Exception('Unable to find BBLAYERS in %s' % bblayers_conf)

# Try to do the "smart" thing depending on how the user has laid out
# their bblayers.conf file
if bblayercalls.count('+=') > 1:
plusequals = True

removelayers_canon = [canonicalise_path(layer) for layer in removelayers]
notadded = []
for layer in addlayers:
layer_canon = canonicalise_path(layer)
if layer_canon in orig_bblayers and not layer_canon in removelayers_canon:
notadded.append(layer)
notadded_canon = [canonicalise_path(layer) for layer in notadded]
addlayers[:] = [layer for layer in addlayers if canonicalise_path(layer) not in notadded_canon]

(updated, newlines) = edit_metadata(newlines, ['BBLAYERS'], handle_bblayers)
if addlayers:
# Still need to add these
for addlayer in addlayers:
newlines.append('BBLAYERS += "%s"\n' % addlayer)
updated = True

if updated:
with open(bblayers_conf, 'w') as f:
f.writelines(newlines)

notremoved = list(set(removelayers) - set(removed))

return (notadded, notremoved)
|
||||
|
||||
|
||||
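The docstring and diff above describe a two-pass edit: a first pass records the original BBLAYERS value and the operators used, then a second pass applies the adds and removes. A minimal usage sketch of the function, assuming bitbake's lib/ directory is on sys.path; the conf path and layer paths are hypothetical:

    import bb.utils

    # Add one layer and remove another in a single edit of bblayers.conf.
    notadded, notremoved = bb.utils.edit_bblayers_conf(
        'build/conf/bblayers.conf',
        add='/home/user/poky/meta-mylayer',      # or a list of paths, or None
        remove='/home/user/poky/meta-oldlayer')  # or a list of paths, or None

    if notadded:
        print('Already in BBLAYERS: %s' % ', '.join(notadded))
    if notremoved:
        print('Not found in BBLAYERS: %s' % ', '.join(notremoved))
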
@@ -1321,67 +1134,11 @@ def get_file_layer(filename, d):
    for collection in collections:
        collection_res[collection] = d.getVar('BBFILE_PATTERN_%s' % collection, True) or ''

    def path_to_layer(path):
        # Use longest path so we handle nested layers
        matchlen = 0
        match = None
        for collection, regex in collection_res.iteritems():
            if len(regex) > matchlen and re.match(regex, path):
                matchlen = len(regex)
                match = collection
        return match

    result = None
    bbfiles = (d.getVar('BBFILES', True) or '').split()
    bbfilesmatch = False
    for bbfilesentry in bbfiles:
        if fnmatch.fnmatch(filename, bbfilesentry):
            bbfilesmatch = True
            result = path_to_layer(bbfilesentry)

    if not bbfilesmatch:
        # Probably a bbclass
        result = path_to_layer(filename)

    return result


# Constant taken from http://linux.die.net/include/linux/prctl.h
PR_SET_PDEATHSIG = 1

class PrCtlError(Exception):
    pass

def signal_on_parent_exit(signame):
    """
    Trigger signame to be sent when the parent process dies
    """
    signum = getattr(signal, signame)
    # http://linux.die.net/man/2/prctl
    result = cdll['libc.so.6'].prctl(PR_SET_PDEATHSIG, signum)
    if result != 0:
        raise PrCtlError('prctl failed with error code %s' % result)

#
# Manually call the ioprio syscall. We could depend on other libs like psutil
# however this gets us enough of what we need to bitbake for now without the
# dependency
#
_unamearch = os.uname()[4]
IOPRIO_WHO_PROCESS = 1
IOPRIO_CLASS_SHIFT = 13

def ioprio_set(who, cls, value):
    NR_ioprio_set = None
    if _unamearch == "x86_64":
        NR_ioprio_set = 251
    elif _unamearch[0] == "i" and _unamearch[2:3] == "86":
        NR_ioprio_set = 289

    if NR_ioprio_set:
        ioprio = value | (cls << IOPRIO_CLASS_SHIFT)
        rc = cdll['libc.so.6'].syscall(NR_ioprio_set, IOPRIO_WHO_PROCESS, who, ioprio)
        if rc != 0:
            raise ValueError("Unable to set ioprio, syscall returned %s" % rc)
    else:
        bb.warn("Unable to set IO Prio for arch %s" % _unamearch)
    # Use longest path so we handle nested layers
    matchlen = 0
    match = None
    for collection, regex in collection_res.iteritems():
        if len(regex) > matchlen and re.match(regex, filename):
            matchlen = len(regex)
            match = collection
    return match

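The ioprio_set() helper above packs the scheduling class and the per-class priority into a single value before invoking the raw syscall. A short illustrative sketch of that encoding; IOPRIO_CLASS_BE = 2 is the Linux best-effort class, and the priority value 4 is an arbitrary example:

    import os

    IOPRIO_CLASS_SHIFT = 13
    IOPRIO_CLASS_BE = 2   # Linux best-effort scheduling class
    priority = 4          # 0 (highest) .. 7 (lowest) within the class

    # Same packing as in ioprio_set() above: class in the high bits,
    # priority in the low bits.
    ioprio = priority | (IOPRIO_CLASS_BE << IOPRIO_CLASS_SHIFT)

    # ioprio_set(os.getpid(), IOPRIO_CLASS_BE, priority) would hand this
    # packed value to the ioprio_set syscall for the current process
    # (IOPRIO_WHO_PROCESS = 1).
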
@@ -15,16 +15,6 @@ sqlversion = sqlite3.sqlite_version_info
if sqlversion[0] < 3 or (sqlversion[0] == 3 and sqlversion[1] < 3):
    raise Exception("sqlite3 version 3.3.0 or later is required.")

#
# "No History" mode - for a given query tuple (version, pkgarch, checksum),
# the returned value will be the largest among all the values of the same
# (version, pkgarch). This means the PR value returned can NOT be decremented.
#
# "History" mode - Return a new higher value for previously unseen query
# tuple (version, pkgarch, checksum), otherwise return historical value.
# Value can decrement if returning to a previous build.
#

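The comment block above defines the two lookup policies the PR tables implement. A toy, in-memory illustration of how the two modes differ; this is not the sqlite-backed PRTable itself, just a sketch of the stated semantics:

    values = {}  # (version, pkgarch, checksum) -> PR value

    def get_pr_history(version, pkgarch, checksum):
        # "History" mode: an unseen checksum is assigned a new, higher value;
        # a checksum seen before returns its stored (possibly lower) value.
        key = (version, pkgarch, checksum)
        if key not in values:
            same = [v for k, v in values.items() if k[:2] == (version, pkgarch)]
            values[key] = max(same) + 1 if same else 0
        return values[key]

    def get_pr_nohist(version, pkgarch, checksum):
        # "No history" mode: record the checksum, then return the largest
        # value for this (version, pkgarch), so the result never decrements.
        get_pr_history(version, pkgarch, checksum)
        return max(v for k, v in values.items() if k[:2] == (version, pkgarch))
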
class PRTable(object):
    def __init__(self, conn, table, nohist):
        self.conn = conn
@@ -248,7 +238,7 @@ class PRData(object):
        self.connection.execute("PRAGMA journal_mode = WAL;")
        self._tables={}

    def disconnect(self):
    def __del__(self):
        self.connection.close()

    def __getitem__(self,tblname):

@@ -1,9 +1,9 @@
import os,sys,logging
import signal, time
import signal, time, atexit, threading
from SimpleXMLRPCServer import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler
import xmlrpclib
import threading
import Queue
import socket

try:
    import sqlite3
@@ -38,6 +38,7 @@ singleton = None
class PRServer(SimpleXMLRPCServer):
    def __init__(self, dbfile, logfile, interface, daemon=True):
        ''' constructor '''
        import socket
        try:
            SimpleXMLRPCServer.__init__(self, interface,
                                        logRequests=False, allow_none=True)
@@ -97,13 +98,6 @@ class PRServer(SimpleXMLRPCServer):
            self.table.sync()
        self.table.sync_if_dirty()

    def sigint_handler(self, signum, stack):
        self.table.sync()

    def sigterm_handler(self, signum, stack):
        self.table.sync()
        raise SystemExit

    def process_request(self, request, client_address):
        self.requestqueue.put((request, client_address))

@@ -148,17 +142,13 @@ class PRServer(SimpleXMLRPCServer):
        while not self.quit:
            self.handle_request()
        self.handlerthread.join()
        self.db.disconnect()
        self.table.sync()
        logger.info("PRServer: stopping...")
        self.server_close()
        return

    def start(self):
        if self.daemon:
            pid = self.daemonize()
        else:
            pid = self.fork()

        pid = self.daemonize()
        # Ensure both the parent sees this and the child from the work_forever log entry above
        logger.info("Started PRServer with DBfile: %s, IP: %s, PORT: %s, PID: %s" %
                    (self.dbfile, self.host, self.port, str(pid)))
@@ -191,24 +181,6 @@ class PRServer(SimpleXMLRPCServer):
        except OSError as e:
            raise Exception("%s [%d]" % (e.strerror, e.errno))

        self.cleanup_handles()
        os._exit(0)

    def fork(self):
        try:
            pid = os.fork()
            if pid > 0:
                return pid
        except OSError as e:
            raise Exception("%s [%d]" % (e.strerror, e.errno))

        bb.utils.signal_on_parent_exit("SIGTERM")
        self.cleanup_handles()
        os._exit(0)

    def cleanup_handles(self):
        signal.signal(signal.SIGINT, self.sigint_handler)
        signal.signal(signal.SIGTERM, self.sigterm_handler)
        os.umask(0)
        os.chdir("/")

@@ -241,6 +213,7 @@ class PRServer(SimpleXMLRPCServer):

        self.work_forever()
        self.delpid()
        os._exit(0)

class PRServSingleton(object):
    def __init__(self, dbfile, logfile, interface):
@@ -251,7 +224,7 @@ class PRServSingleton(object):
        self.port = None

    def start(self):
        self.prserv = PRServer(self.dbfile, self.logfile, self.interface, daemon=False)
        self.prserv = PRServer(self.dbfile, self.logfile, self.interface)
        self.prserv.start()
        self.host, self.port = self.prserv.getinfo()

@@ -289,8 +262,7 @@ class PRServerConnection(object):
        return self.host, self.port

def start_daemon(dbfile, host, port, logfile):
    ip = socket.gethostbyname(host)
    pidfile = PIDPREFIX % (ip, port)
    pidfile = PIDPREFIX % (host, port)
    try:
        pf = file(pidfile,'r')
        pid = int(pf.readline().strip())
@@ -303,21 +275,12 @@ def start_daemon(dbfile, host, port, logfile):
                         % pidfile)
        return 1

    server = PRServer(os.path.abspath(dbfile), os.path.abspath(logfile), (ip,port))
    server = PRServer(os.path.abspath(dbfile), os.path.abspath(logfile), (host,port))
    server.start()

    # Sometimes, the port (i.e. localhost:0) indicated by the user does not match with
    # the one the server actually is listening, so at least warn the user about it
    _,rport = server.getinfo()
    if port != rport:
        sys.stdout.write("Server is listening at port %s instead of %s\n"
                         % (rport,port))
    return 0

def stop_daemon(host, port):
    import glob
    ip = socket.gethostbyname(host)
    pidfile = PIDPREFIX % (ip, port)
    pidfile = PIDPREFIX % (host, port)
    try:
        pf = file(pidfile,'r')
        pid = int(pf.readline().strip())
@@ -326,23 +289,11 @@ def stop_daemon(host, port):
        pid = None

    if not pid:
        # when server starts at port=0 (i.e. localhost:0), server actually takes another port,
        # so at least advise the user which ports the corresponding server is listening
        ports = []
        portstr = ""
        for pf in glob.glob(PIDPREFIX % (ip,'*')):
            bn = os.path.basename(pf)
            root, _ = os.path.splitext(bn)
            ports.append(root.split('_')[-1])
        if len(ports):
            portstr = "Wrong port? Other ports listening at %s: %s" % (host, ' '.join(ports))

        sys.stderr.write("pidfile %s does not exist. Daemon not running? %s\n"
                         % (pidfile,portstr))
        return 1
        sys.stderr.write("pidfile %s does not exist. Daemon not running?\n"
                         % pidfile)

    try:
        PRServerConnection(ip, port).terminate()
        PRServerConnection(host, port).terminate()
    except:
        logger.critical("Stop PRService %s:%d failed" % (host,port))

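For context, a minimal sketch of driving the two daemon helpers above from Python. The module path and file names are assumptions; as the warnings in the diff note, starting on port 0 makes the server pick its own port, so a fixed port is used here so that stop_daemon finds the matching pidfile:

    # Hypothetical driver; assumes bitbake's lib/ directory is on sys.path
    # so that prserv.serv is importable.
    from prserv.serv import start_daemon, stop_daemon

    if start_daemon("cache/prserv.sqlite3", "localhost", 8585, "prserv.log") == 0:
        # ... builds use the PR service here ...
        stop_daemon("localhost", 8585)
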
@@ -177,7 +177,7 @@ class BuildEnvironmentController(object):

        return BitbakeController(self.connection)

    def getArtifact(self, path):
    def getArtifact(path):
        """ This call returns an artifact identified by the 'path'. How 'path' is interpreted is
        up to the implementing BEC. The return MUST be a REST URL where a GET will actually return
        the content of the artifact, e.g. for use as a "download link" in a web UI.
@@ -190,9 +190,6 @@ class BuildEnvironmentController(object):
        """
        raise Exception("Must override BE release")

    def triggerBuild(self, bitbake, layers, variables, targets):
        raise Exception("Must override BE release")

class ShellCmdException(Exception):
    pass

@@ -23,11 +23,9 @@
import os
import sys
import re
import shutil
from django.db import transaction
from django.db.models import Q
from bldcontrol.models import BuildEnvironment, BRLayer, BRVariable, BRTarget, BRBitbake
from orm.models import CustomImageRecipe, Layer, Layer_Version, ProjectLayer
import subprocess

from toastermain import settings
@@ -56,7 +54,7 @@ class LocalhostBEController(BuildEnvironmentController):
        if cwd is None:
            cwd = self.be.sourcedir

        logger.debug("lbc_shellcmmd: (%s) %s" % (cwd, command))
        #logger.debug("lbc_shellcmmd: (%s) %s" % (cwd, command))
        p = subprocess.Popen(command, cwd = cwd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        (out,err) = p.communicate()
        p.wait()
@@ -65,22 +63,27 @@ class LocalhostBEController(BuildEnvironmentController):
                err = "command: %s \n%s" % (command, out)
            else:
                err = "command: %s \n%s" % (command, err)
            logger.warn("localhostbecontroller: shellcmd error %s" % err)
            #logger.warn("localhostbecontroller: shellcmd error %s" % err)
            raise ShellCmdException(err)
        else:
            logger.debug("localhostbecontroller: shellcmd success")
            #logger.debug("localhostbecontroller: shellcmd success")
            return out

    def _createdirpath(self, path):
        from os.path import dirname as DN
        if path == "":
            raise Exception("Invalid path creation specified.")
        if not os.path.exists(DN(path)):
            self._createdirpath(DN(path))
        if not os.path.exists(path):
            os.mkdir(path, 0755)

    def _setupBE(self):
        assert self.pokydirname and os.path.exists(self.pokydirname)
        path = self.be.builddir
        if not path:
            raise Exception("Invalid path creation specified.")
        if not os.path.exists(path):
            os.makedirs(path, 0755)
        self._shellcmd("bash -c \"source %s/oe-init-build-env %s\"" % (self.pokydirname, path))
        self._createdirpath(self.be.builddir)
        self._shellcmd("bash -c \"source %s/oe-init-build-env %s\"" % (self.pokydirname, self.be.builddir))
        # delete the templateconf.cfg; it may come from an unsupported layer configuration
        os.remove(os.path.join(path, "conf/templateconf.cfg"))
        os.remove(os.path.join(self.be.builddir, "conf/templateconf.cfg"))


    def writeConfFile(self, file_name, variable_list = None, raw = None):
@@ -110,25 +113,19 @@ class LocalhostBEController(BuildEnvironmentController):
        # get the file length; we need to detect the _last_ start of the toaster UI, not the first
        toaster_ui_log_filelength = 0
        if os.path.exists(toaster_ui_log_filepath):
            with open(toaster_ui_log_filepath, "w") as f:
            with open(toaster_ui_log_filepath, "r") as f:
                f.seek(0, 2)    # jump to the end
                toaster_ui_log_filelength = f.tell()

        cmd = "bash -c \"source %s/oe-init-build-env %s 2>&1 >toaster_server.log && bitbake --read %s/conf/toaster-pre.conf --postread %s/conf/toaster.conf --server-only -t xmlrpc -B 0.0.0.0:0 2>&1 >>toaster_server.log \"" % (self.pokydirname, self.be.builddir, self.be.builddir, self.be.builddir)

        cmd = "bash -c \"source %s/oe-init-build-env %s 2>&1 >toaster_server.log && bitbake --read conf/toaster-pre.conf --postread conf/toaster.conf --server-only -t xmlrpc -B 0.0.0.0:0 2>&1 >toaster_server.log && DATABASE_URL=%s BBSERVER=0.0.0.0:-1 daemon -d -i -D %s -o toaster_ui.log -- %s --observe-only -u toasterui &\"" % (self.pokydirname, self.be.builddir,
            self.dburl, self.be.builddir, own_bitbake)
        port = "-1"
        logger.debug("localhostbecontroller: starting builder \n%s\n" % cmd)

        cmdoutput = self._shellcmd(cmd)
        with open(self.be.builddir + "/toaster_server.log", "r") as f:
            for i in f.readlines():
                if i.startswith("Bitbake server address"):
                    port = i.split(" ")[-1]
                    logger.debug("localhostbecontroller: Found bitbake server port %s" % port)

        cmd = "bash -c \"source %s/oe-init-build-env-memres -1 %s && DATABASE_URL=%s %s --observe-only -u toasterui --remote-server=0.0.0.0:-1 -t xmlrpc\"" % (self.pokydirname, self.be.builddir, self.dburl, own_bitbake)
        with open(toaster_ui_log_filepath, "a+") as f:
            p = subprocess.Popen(cmd, cwd = self.be.builddir, shell=True, stdout=f, stderr=f)
        for i in cmdoutput.split("\n"):
            if i.startswith("Bitbake server address"):
                port = i.split(" ")[-1]
                logger.debug("localhostbecontroller: Found bitbake server port %s" % port)

        def _toaster_ui_started(filepath, filepos = 0):
            if not os.path.exists(filepath):
@@ -142,7 +139,7 @@ class LocalhostBEController(BuildEnvironmentController):

        retries = 0
        started = False
        while not started and retries < 50:
        while not started and retries < 10:
            started = _toaster_ui_started(toaster_ui_log_filepath, toaster_ui_log_filelength)
            import time
            logger.debug("localhostbecontroller: Waiting bitbake server to start")
@@ -150,9 +147,8 @@ class LocalhostBEController(BuildEnvironmentController):
            retries += 1

        if not started:
            toaster_ui_log = open(os.path.join(self.be.builddir, "toaster_ui.log"), "r").read()
            toaster_server_log = open(os.path.join(self.be.builddir, "toaster_server.log"), "r").read()
            raise BuildSetupException("localhostbecontroller: Bitbake server did not start in 25 seconds, aborting (Error: '%s' '%s')" % (toaster_ui_log, toaster_server_log))
            raise BuildSetupException("localhostbecontroller: Bitbake server did not start in 5 seconds, aborting (Error: '%s' '%s')" % (cmdoutput, toaster_server_log))

        logger.debug("localhostbecontroller: Started bitbake server")

@@ -181,9 +177,15 @@ class LocalhostBEController(BuildEnvironmentController):
        logger.debug("localhostbecontroller: Stopped bitbake server")

    def getGitCloneDirectory(self, url, branch):
        """Construct unique clone directory name out of url and branch."""
        """ Utility that returns the last component of a git path as directory
        """
        import re
        components = re.split(r'[:\.\/]', url)
        base = components[-2] if components[-1] == "git" else components[-1]

        if branch != "HEAD":
            return "_toaster_clones/_%s_%s" % (re.sub('[:/@%]', '_', url), branch)
        return "_%s_%s.toaster_cloned" % (base, branch)

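The two naming schemes in getGitCloneDirectory() above can be checked by hand; for example, with url = "git://git.yoctoproject.org/poky" and branch = "fido":

    import re

    url, branch = "git://git.yoctoproject.org/poky", "fido"

    # New scheme from the diff: unique per url and branch, grouped
    # under _toaster_clones/.
    new = "_toaster_clones/_%s_%s" % (re.sub('[:/@%]', '_', url), branch)
    # -> "_toaster_clones/_git___git.yoctoproject.org_poky_fido"

    # Old scheme from the diff: keyed on the last path component only.
    components = re.split(r'[:\.\/]', url)
    base = components[-2] if components[-1] == "git" else components[-1]
    old = "_%s_%s.toaster_cloned" % (base, branch)
    # -> "_poky_fido.toaster_cloned"
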
    # word of attention; this is a localhost-specific issue; only on the localhost we expect to have "HEAD" releases
    # which _ALWAYS_ means the current poky checkout
@@ -193,7 +195,7 @@ class LocalhostBEController(BuildEnvironmentController):
            return local_checkout_path

    def setLayers(self, bitbakes, layers, targets):
    def setLayers(self, bitbakes, layers):
        """ a word of attention: by convention, the first layer for any build will be poky! """

        assert self.be.sourcedir is not None
@@ -218,26 +220,23 @@ class LocalhostBEController(BuildEnvironmentController):
        logger.debug("localhostbecontroller, our git repos are %s" % pformat(gitrepos))


        # 2. Note for future use if the current source directory is a
        # checked-out git repos that could match a layer's vcs_url and therefore
        # be used to speed up cloning (rather than fetching it again).
        # 2. find checked-out git repos in the sourcedir directory that may help faster cloning

        cached_layers = {}

        try:
            for remotes in self._shellcmd("git remote -v", self.be.sourcedir).split("\n"):
            for ldir in os.listdir(self.be.sourcedir):
                fldir = os.path.join(self.be.sourcedir, ldir)
                if os.path.isdir(fldir):
                    try:
                        remote = remotes.split("\t")[1].split(" ")[0]
                        if remote not in cached_layers:
                            cached_layers[remote] = self.be.sourcedir
                    except IndexError:
                        for line in self._shellcmd("git remote -v", fldir).split("\n"):
                            try:
                                remote = line.split("\t")[1].split(" ")[0]
                                if remote not in cached_layers:
                                    cached_layers[remote] = fldir
                            except IndexError:
                                pass
        except ShellCmdException:
            # ignore any errors in collecting git remotes
            pass
                    except ShellCmdException:
                        # ignore any errors in collecting git remotes; this is an
                        # optional step
                        pass

        logger.info("Using pre-checked out source for layer %s", cached_layers)

        layerlist = []

@@ -259,14 +258,13 @@ class LocalhostBEController(BuildEnvironmentController):
                    self._shellcmd("git remote remove origin", localdirname)
                    self._shellcmd("git remote add origin \"%s\"" % giturl, localdirname)
                else:
                    logger.debug("localhostbecontroller: cloning %s in %s" % (giturl, localdirname))
                    self._shellcmd('git clone "%s" "%s"' % (giturl, localdirname))
                    logger.debug("localhostbecontroller: cloning %s:%s in %s" % (giturl, commit, localdirname))
                    self._shellcmd("git clone \"%s\" --single-branch --branch \"%s\" \"%s\"" % (giturl, commit, localdirname))

                # branch magic name "HEAD" will inhibit checkout
                if commit != "HEAD":
                    logger.debug("localhostbecontroller: checking out commit %s to %s " % (commit, localdirname))
                    ref = commit if re.match('^[a-fA-F0-9]+$', commit) else 'origin/%s' % commit
                    self._shellcmd('git fetch --all && git reset --hard "%s"' % ref, localdirname)
                    self._shellcmd("git fetch --all && git checkout \"%s\" && git rebase \"origin/%s\"" % (commit, commit) , localdirname)

                # take the localdirname as poky dir if we can find the oe-init-build-env
                if self.pokydirname is None and os.path.exists(os.path.join(localdirname, "oe-init-build-env")):
@@ -299,51 +297,6 @@ class LocalhostBEController(BuildEnvironmentController):
        if not os.path.exists(bblayerconf):
            raise BuildSetupException("BE is not consistent: bblayers.conf file missing at %s" % bblayerconf)

        # 6. create custom layer and add custom recipes to it
        layerpath = os.path.join(self.be.sourcedir, "_meta-toaster-custom")
        if os.path.isdir(layerpath):
            shutil.rmtree(layerpath) # remove leftovers from previous builds
        for target in targets:
            try:
                customrecipe = CustomImageRecipe.objects.get(name=target.target,
                                                             project=bitbakes[0].req.project)
            except CustomImageRecipe.DoesNotExist:
                continue # not a custom recipe, skip

            # create directory structure
            for name in ("conf", "recipes"):
                path = os.path.join(layerpath, name)
                if not os.path.isdir(path):
                    os.makedirs(path)

            # create layer.conf
            config = os.path.join(layerpath, "conf", "layer.conf")
            if not os.path.isfile(config):
                with open(config, "w") as conf:
                    conf.write('BBPATH .= ":${LAYERDIR}"\nBBFILES += "${LAYERDIR}/recipes/*.bb"\n')

            # create recipe
            recipe = os.path.join(layerpath, "recipes", "%s.bb" % target.target)
            with open(recipe, "w") as recipef:
                recipef.write("require %s\n" % customrecipe.base_recipe.recipe.file_path)
                packages = [pkg.name for pkg in customrecipe.packages.all()]
                if packages:
                    recipef.write('IMAGE_INSTALL = "%s"\n' % ' '.join(packages))

            # create *Layer* objects needed for build machinery to work
            layer = Layer.objects.get_or_create(name="Toaster Custom layer",
                                                summary="Layer for custom recipes",
                                                vcs_url="file://%s" % layerpath)[0]
            breq = target.req
            lver = Layer_Version.objects.get_or_create(project=breq.project, layer=layer,
                                                       dirpath=layerpath, build=breq.build)[0]
            ProjectLayer.objects.get_or_create(project=breq.project, layercommit=lver,
                                               optional=False)
            BRLayer.objects.get_or_create(req=breq, name=layer.name, dirpath=layerpath,
                                          giturl="file://%s" % layerpath)
        if os.path.isdir(layerpath):
            layerlist.append(layerpath)

        BuildEnvironmentController._updateBBLayers(bblayerconf, layerlist)

        self.islayerset = True
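For reference, the files the custom-layer step above writes out would look roughly like this for a hypothetical custom image recipe named "my-image" with two packages selected (the require path comes from the base recipe's file_path field):

    # _meta-toaster-custom/conf/layer.conf
    #     BBPATH .= ":${LAYERDIR}"
    #     BBFILES += "${LAYERDIR}/recipes/*.bb"
    #
    # _meta-toaster-custom/recipes/my-image.bb
    #     require /path/to/base/recipes-core/images/core-image-minimal.bb
    #     IMAGE_INSTALL = "busybox dropbear"
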
@@ -357,25 +310,3 @@ class LocalhostBEController(BuildEnvironmentController):
        import shutil
        shutil.rmtree(os.path.join(self.be.sourcedir, "build"))
        assert not os.path.exists(self.be.builddir)


    def triggerBuild(self, bitbake, layers, variables, targets):
        # set up the build environment with the needed layers
        self.setLayers(bitbake, layers, targets)
        self.writeConfFile("conf/toaster-pre.conf", variables)
        self.writeConfFile("conf/toaster.conf", raw = "INHERIT+=\"toaster buildhistory\"")

        # get the bb server running with the build req id and build env id
        bbctrl = self.getBBController()

        # trigger the build command
        task = reduce(lambda x, y: x if len(y)== 0 else y, map(lambda y: y.task, targets))
        if len(task) == 0:
            task = None

        bbctrl.build(list(map(lambda x:x.target, targets)), task)

        logger.debug("localhostbecontroller: Build launched, exiting. Follow build logs at %s/toaster_ui.log" % self.be.builddir)

        # disconnect from the server
        bbctrl.disconnect()

@@ -2,9 +2,8 @@ from django.core.management.base import NoArgsCommand, CommandError
from django.db import transaction
from bldcontrol.bbcontroller import getBuildEnvironmentController, ShellCmdException
from bldcontrol.models import BuildRequest, BuildEnvironment, BRError
from orm.models import ToasterSetting, Build
from orm.models import ToasterSetting
import os
import traceback

def DN(path):
    if path is None:
@@ -17,11 +16,8 @@ class Command(NoArgsCommand):
    args = ""
    help = "Verifies that the configured settings are valid and usable, or prompts the user to fix the settings."

    def __init__(self, *args, **kwargs):
        super(Command, self).__init__(*args, **kwargs)
        self.guesspath = DN(DN(DN(DN(DN(DN(DN(__file__)))))))

    def _find_first_path_for_file(self, startdirectory, filename, level=0):
    def _find_first_path_for_file(self, startdirectory, filename, level = 0):
        if level < 0:
            return None
        dirs = []
@@ -38,7 +34,7 @@ class Command(NoArgsCommand):
                return ret
        return None

    def _recursive_list_directories(self, startdirectory, level=0):
    def _recursive_list_directories(self, startdirectory, level = 0):
        if level < 0:
            return []
        dirs = []
@@ -50,31 +46,80 @@ class Command(NoArgsCommand):
        except OSError:
            pass
        for j in dirs:
            dirs = dirs + self._recursive_list_directories(j, level - 1)
            dirs = dirs + self._recursive_list_directories(j, level - 1)
        return dirs


    def _verify_build_environment(self):
        # provide a local build env. This will be extended later to include non local
        if BuildEnvironment.objects.count() == 0:
            BuildEnvironment.objects.create(betype=BuildEnvironment.TYPE_LOCAL)
    def _get_suggested_sourcedir(self, be):
        if be.betype != BuildEnvironment.TYPE_LOCAL:
            return ""
        return DN(DN(DN(self._find_first_path_for_file(self.guesspath, "toasterconf.json", 4))))

    def _get_suggested_builddir(self, be):
        if be.betype != BuildEnvironment.TYPE_LOCAL:
            return ""
        return DN(self._find_first_path_for_file(DN(self.guesspath), "bblayers.conf", 4))


    def handle(self, **options):
        # verify that we have a setting for downloading artifacts
        while ToasterSetting.objects.filter(name="ARTIFACTS_STORAGE_DIR").count() == 0:
            guessedpath = os.getcwd() + "/toaster_build_artifacts/"
            print("Toaster needs to know in which directory it can download build log files and other artifacts.\n Toaster suggests \"%s\"." % guessedpath)
            artifacts_storage_dir = raw_input(" Press Enter to select \"%s\" or type the full path to a different directory: " % guessedpath)
            if len(artifacts_storage_dir) == 0:
                artifacts_storage_dir = guessedpath
            if len(artifacts_storage_dir) > 0 and artifacts_storage_dir.startswith("/"):
                try:
                    os.makedirs(artifacts_storage_dir)
                except OSError as ose:
                    if "File exists" in str(ose):
                        pass
                    else:
                        raise ose
                ToasterSetting.objects.create(name="ARTIFACTS_STORAGE_DIR", value=artifacts_storage_dir)

        self.guesspath = DN(DN(DN(DN(DN(DN(DN(__file__)))))))
        # refuse to start if we have no build environments
        while BuildEnvironment.objects.count() == 0:
            print(" !! No build environments found. Toaster needs at least one build environment in order to be able to run builds.\n" +
                  "You can manually define build environments in the database table bldcontrol_buildenvironment.\n" +
                  "Or Toaster can define a simple localhost-based build environment for you.")

            i = raw_input(" -- Do you want to create a basic localhost build environment ? (Y/n) ");
            if not len(i) or i.startswith("y") or i.startswith("Y"):
                BuildEnvironment.objects.create(pk = 1, betype = 0)
            else:
                raise Exception("Toaster cannot start without build environments. Aborting.")


        # we make sure we have builddir and sourcedir for all defined build environments
        for be in BuildEnvironment.objects.all():
            be.needs_import = False
            def _verify_be():
                is_changed = False
                print("Verifying the Build Environment. If the local Build Environment is not properly configured, you will be asked to configure it.")

                def _update_sourcedir():
                    be.sourcedir = os.environ.get('TOASTER_DIR')
                    suggesteddir = self._get_suggested_sourcedir(be)
                    if len(suggesteddir) > 0:
                        be.sourcedir = raw_input("Toaster needs to know in which directory it should check out the layers that will be needed for your builds.\n Toaster suggests \"%s\". If you select this directory, a layer like \"meta-intel\" will end up in \"%s/meta-intel\".\n Press Enter to select \"%s\" or type the full path to a different directory (must be a parent of current checkout directory): " % (suggesteddir, suggesteddir, suggesteddir))
                    else:
                        be.sourcedir = raw_input("Toaster needs to know in which directory it should check out the layers that will be needed for your builds. Type the full path to the directory (for example: \"%s\"): " % os.environ.get('HOME', '/tmp/'))
                    if len(be.sourcedir) == 0 and len(suggesteddir) > 0:
                        be.sourcedir = suggesteddir
                    return True

                if len(be.sourcedir) == 0:
                    print "\n -- Validation: The layers checkout directory must be set."
                    print "\n -- Validation: The checkout directory must be set."
                    is_changed = _update_sourcedir()

                if not be.sourcedir.startswith("/"):
                    print "\n -- Validation: The layers checkout directory must be set to an absolute path."
                    print "\n -- Validation: The checkout directory must be set to an absolute path."
                    is_changed = _update_sourcedir()

                if not be.sourcedir in DN(__file__):
                    print "\n -- Validation: The checkout directory must be a parent of the current checkout."
                    is_changed = _update_sourcedir()

                if is_changed:
@@ -83,7 +128,13 @@ class Command(NoArgsCommand):
                    return True

                def _update_builddir():
                    be.builddir = os.environ.get('TOASTER_DIR')+"/build"
                    suggesteddir = self._get_suggested_builddir(be)
                    if len(suggesteddir) > 0:
                        be.builddir = raw_input("Toaster needs to know where your build directory is located.\n The build directory is where all the artifacts created by your builds will be stored. Toaster suggests \"%s\".\n Press Enter to select \"%s\" or type the full path to a different directory: " % (suggesteddir, suggesteddir))
                    else:
                        be.builddir = raw_input("Toaster needs to know where your build directory is.\n The build directory is where all the artifacts created by your builds will be stored. Type the full path to the directory (for example: \"%s/build\"): " % os.environ.get('HOME','/tmp/'))
                    if len(be.builddir) == 0 and len(suggesteddir) > 0:
                        be.builddir = suggesteddir
                    return True

                if len(be.builddir) == 0:
@@ -96,66 +147,68 @@ class Command(NoArgsCommand):


                if is_changed:
                    print "\nBuild configuration saved"
                    print "Build configuration saved"
                    be.save()
                    return True


                if be.needs_import:
                    try:
                        config_file = os.environ.get('TOASTER_CONF')
                        print "\nImporting file: %s" % config_file
                        from loadconf import Command as LoadConfigCommand
                        print "\nToaster can use a SINGLE predefined configuration file to set up default project settings and layer information sources.\n"

                        # find configuration files
                        config_files = []
                        for dirname in self._recursive_list_directories(be.sourcedir,2):
                            if os.path.exists(os.path.join(dirname, ".templateconf")):
                                import subprocess
                                conffilepath, error = subprocess.Popen('bash -c ". '+os.path.join(dirname, ".templateconf")+'; echo \"\$TEMPLATECONF\""', shell=True, stdout=subprocess.PIPE).communicate()
                                conffilepath = os.path.join(conffilepath.strip(), "toasterconf.json")
                                candidatefilepath = os.path.join(dirname, conffilepath)
                                if os.path.exists(candidatefilepath):
                                    config_files.append(candidatefilepath)

                        if len(config_files) > 0:
                            print " Toaster will now list the configuration files that it found. Select the number to use the desired configuration file."
                            for cf in config_files:
                                print " [%d] - %s" % (config_files.index(cf) + 1, cf)
                            print "\n [0] - Exit without importing any file"
                            try:
                                i = raw_input("\n Enter your option: ")
                                if len(i) and (int(i) - 1 >= 0 and int(i) - 1 < len(config_files)):
                                    print "Importing file: %s" % config_files[int(i)-1]
                                    from loadconf import Command as LoadConfigCommand

                                    LoadConfigCommand()._import_layer_config(config_files[int(i)-1])
                                    # we run lsupdates after config update
                                    print "Layer configuration imported. Updating information from the layer sources, please wait.\n You can re-update any time later by running bitbake/lib/toaster/manage.py lsupdates"
                                    from django.core.management import call_command
                                    call_command("lsupdates")

                                    # we don't look for any other config files
                                    return is_changed
                            except Exception as e:
                                print "Failure while trying to import the toaster config file: %s" % e
                        else:
                            print "\n Toaster could not find a configuration file. You need to configure Toaster manually using the web interface, or create a configuration file and use\n bitbake/lib/toaster/managepy.py loadconf [filename]\n command to load it. You can use https://wiki.yoctoproject.org/wiki/File:Toasterconf.json.txt.patch as a starting point."


                        LoadConfigCommand()._import_layer_config(config_file)
                        # we run lsupdates after config update
                        print "\nLayer configuration imported. Updating information from the layer sources, please wait.\nYou can re-update any time later by running bitbake/lib/toaster/manage.py lsupdates"
                        from django.core.management import call_command
                        call_command("lsupdates")

                        # we don't look for any other config files
                        return is_changed
                    except Exception as e:
                        print "Failure while trying to import the toaster config file %s: %s" %\
                            (config_file, e)
                        traceback.print_exc(e)

                return is_changed

            while _verify_be():
            while (_verify_be()):
                pass
        return 0

    def _verify_default_settings(self):
        # verify that default settings are there
        if ToasterSetting.objects.filter(name='DEFAULT_RELEASE').count() != 1:
            ToasterSetting.objects.filter(name='DEFAULT_RELEASE').delete()
            ToasterSetting.objects.get_or_create(name='DEFAULT_RELEASE', value='')
        return 0
        if ToasterSetting.objects.filter(name = 'DEFAULT_RELEASE').count() != 1:
            ToasterSetting.objects.filter(name = 'DEFAULT_RELEASE').delete()
            ToasterSetting.objects.get_or_create(name = 'DEFAULT_RELEASE', value = '')

    def _verify_builds_in_progress(self):
        # we are just starting up. we must not have any builds in progress, or build environments taken
        for b in BuildRequest.objects.filter(state=BuildRequest.REQ_INPROGRESS):
            BRError.objects.create(req=b, errtype="toaster",
                                   errmsg=
                                   "Toaster found this build IN PROGRESS while Toaster started up. This is an inconsistent state, and the build was marked as failed")
        for b in BuildRequest.objects.filter(state = BuildRequest.REQ_INPROGRESS):
            BRError.objects.create(req = b, errtype = "toaster", errmsg = "Toaster found this build IN PROGRESS while Toaster started up. This is an inconsistent state, and the build was marked as failed")

        BuildRequest.objects.filter(state=BuildRequest.REQ_INPROGRESS).update(state=BuildRequest.REQ_FAILED)
        BuildRequest.objects.filter(state = BuildRequest.REQ_INPROGRESS).update(state = BuildRequest.REQ_FAILED)

        BuildEnvironment.objects.update(lock=BuildEnvironment.LOCK_FREE)

        # also mark "In Progress" builds as failures
        from django.utils import timezone
        Build.objects.filter(outcome=Build.IN_PROGRESS).update(outcome=Build.FAILED, completed_on=timezone.now())
        BuildEnvironment.objects.update(lock = BuildEnvironment.LOCK_FREE)

        return 0



    def handle_noargs(self, **options):
        retval = 0
        retval += self._verify_build_environment()
        retval += self._verify_default_settings()
        retval += self._verify_builds_in_progress()

        return retval

@@ -1,14 +1,10 @@
from django.core.management.base import BaseCommand, CommandError
from orm.models import LayerSource, ToasterSetting, Branch, Layer, Layer_Version
from orm.models import BitbakeVersion, Release, ReleaseDefaultLayer, ReleaseLayerSourcePriority
from django.db import IntegrityError
import os

from checksettings import DN

import logging
logger = logging.getLogger("toaster")

def _reduce_canon_path(path):
    components = []
    for c in path.split("/"):
@@ -38,7 +34,7 @@ class Command(BaseCommand):
        if not os.path.exists(filepath) or not os.path.isfile(filepath):
            raise Exception("Failed to find toaster config file %s." % filepath)

        import json
        import json, pprint
        data = json.loads(open(filepath, "r").read())

        # verify config file validity before updating settings
@@ -53,7 +49,7 @@ class Command(BaseCommand):
            cmd = subprocess.Popen("git remote -v", shell=True, cwd = os.path.dirname(filepath), stdout=subprocess.PIPE, stderr = subprocess.PIPE)
            (out,err) = cmd.communicate()
            if cmd.returncode != 0:
                logging.warning("Error while importing layer vcs_url: git error: %s" % err)
                raise Exception("Error while importing layer vcs_url: git error: %s" % err)
            for line in out.split("\n"):
                try:
                    (name, path) = line.split("\t", 1)
@@ -63,20 +59,16 @@ class Command(BaseCommand):
                except ValueError:
                    pass
            if url == None:
                logging.warning("Error while looking for remote \"%s\" in \"%s\"" % (remote_name, out))
                raise Exception("Error while looking for remote \"%s\" in \"%s\"" % (remote_name, out))
            return url


        # import bitbake data
        for bvi in data['bitbake']:
            bvo, created = BitbakeVersion.objects.get_or_create(name=bvi['name'])
            bvo.giturl = bvi['giturl']
            if bvi['giturl'].startswith("remote:"):
                bvo.giturl = _read_git_url_from_local_repository(bvi['giturl'])
                if bvo.giturl is None:
                    logger.error("The toaster config file references the local git repo, but Toaster cannot detect it.\nYour local configuration for bitbake version %s is invalid. Make sure that the toasterconf.json file is correct." % bvi['name'])

            if bvo.giturl is None:
                bvo.giturl = bvi['giturl']
            bvo.branch = bvi['branch']
            bvo.dirpath = bvi['dirpath']
            bvo.save()
@@ -97,12 +89,13 @@ class Command(BaseCommand):
            assert ((_get_id_for_sourcetype(lsi['sourcetype']) == LayerSource.TYPE_LAYERINDEX) or apiurl.startswith("/")), (lsi['sourcetype'],apiurl)

            try:
                ls, created = LayerSource.objects.get_or_create(sourcetype = _get_id_for_sourcetype(lsi['sourcetype']), apiurl = apiurl)
                ls.name = lsi['name']
                ls.save()
            except IntegrityError as e:
                logger.warning("IntegrityError %s \nWhile setting name %s for layer source %s " % (e, lsi['name'], ls))

                ls = LayerSource.objects.get(sourcetype = _get_id_for_sourcetype(lsi['sourcetype']), apiurl = apiurl)
            except LayerSource.DoesNotExist:
                ls = LayerSource.objects.create(
                    name = lsi['name'],
                    sourcetype = _get_id_for_sourcetype(lsi['sourcetype']),
                    apiurl = apiurl
                )

            layerbranches = []
            for branchname in lsi['branches']:
@@ -118,14 +111,12 @@ class Command(BaseCommand):
                lo.local_path = _reduce_canon_path(os.path.join(ls.apiurl, layerinfo['local_path']))

                if not os.path.exists(lo.local_path):
                    logger.error("Local layer path %s must exist. Are you trying to import a layer that does not exist? Check your local toasterconf.json" % lo.local_path)
                    raise Exception("Local layer path %s must exist." % lo.local_path)

                lo.vcs_url = layerinfo['vcs_url']
                if layerinfo['vcs_url'].startswith("remote:"):
                    lo.vcs_url = _read_git_url_from_local_repository(layerinfo['vcs_url'])
                    if lo.vcs_url is None:
                        logger.error("The toaster config file references the local git repo, but Toaster cannot detect it.\nYour local configuration for layer %s is invalid. Make sure that the toasterconf.json file is correct." % layerinfo['name'])

                if lo.vcs_url is None:
                else:
                    lo.vcs_url = layerinfo['vcs_url']

                if 'layer_index_url' in layerinfo:

@@ -1,13 +1,12 @@
from django.core.management.base import NoArgsCommand, CommandError
from django.db import transaction
from orm.models import Build, ToasterSetting, LogMessage, Target
from orm.models import Build, ToasterSetting
from bldcontrol.bbcontroller import getBuildEnvironmentController, ShellCmdException, BuildSetupException
from bldcontrol.models import BuildRequest, BuildEnvironment, BRError, BRVariable
import os
import logging
import time

logger = logging.getLogger("ToasterScheduler")
logger = logging.getLogger("toaster")

class Command(NoArgsCommand):
    args = ""
@@ -36,7 +35,7 @@ class Command(NoArgsCommand):
            # select the build environment and the request to build
            br = self._selectBuildRequest()
        except IndexError as e:
            #logger.debug("runbuilds: No build request")
            # logger.debug("runbuilds: No build request")
            return
        try:
            bec = self._selectBuildEnvironment()
@@ -51,16 +50,33 @@ class Command(NoArgsCommand):

            # write the build identification variable
            BRVariable.objects.create(req = br, name="TOASTER_BRBE", value="%d:%d" % (br.pk, bec.be.pk))

            # let the build request know where it is being executed
            br.environment = bec.be
            br.save()

            # this triggers an async build
            bec.triggerBuild(br.brbitbake_set.all(), br.brlayer_set.all(), br.brvariable_set.all(), br.brtarget_set.all())
            # set up the build environment with the needed layers
            bec.setLayers(br.brbitbake_set.all(), br.brlayer_set.all())
            bec.writeConfFile("conf/toaster-pre.conf", br.brvariable_set.all())
            bec.writeConfFile("conf/toaster.conf", raw = "INHERIT+=\"toaster buildhistory\"")

            # get the bb server running with the build req id and build env id
            bbctrl = bec.getBBController()

            # trigger the build command
            task = reduce(lambda x, y: x if len(y)== 0 else y, map(lambda y: y.task, br.brtarget_set.all()))
            if len(task) == 0:
                task = None
            bbctrl.build(list(map(lambda x:x.target, br.brtarget_set.all())), task)

            logger.debug("runbuilds: Build launched, exiting. Follow build logs at %s/toaster_ui.log" % bec.be.builddir)
            # disconnect from the server
            bbctrl.disconnect()

            # cleanup to be performed by toaster when the deed is done


        except Exception as e:
            logger.error("runbuilds: Error launching build %s" % e)
            logger.error("runbuilds: Error executing shell command %s" % e)
            traceback.print_exc(e)
            if "[Errno 111] Connection refused" in str(e):
                # Connection refused, read toaster_server.out
@@ -78,63 +94,41 @@ class Command(NoArgsCommand):
            bec.be.save()

    def archive(self):
        ''' archives data from the builds '''
        artifact_storage_dir = ToasterSetting.objects.get(name="ARTIFACTS_STORAGE_DIR").value
        for br in BuildRequest.objects.filter(state = BuildRequest.REQ_ARCHIVE):
            # save cooker log
            if br.build == None:
                br.state = BuildRequest.REQ_FAILED
            else:
                br.state = BuildRequest.REQ_COMPLETED
                br.save()
                continue
            build_artifact_storage_dir = os.path.join(artifact_storage_dir, "%d" % br.build.pk)
            try:
                os.makedirs(build_artifact_storage_dir)
            except OSError as ose:
                if "File exists" in str(ose):
                    pass
                else:
                    raise ose

            file_name = os.path.join(build_artifact_storage_dir, "cooker_log.txt")
            try:
                with open(file_name, "w") as f:
                    f.write(br.environment.get_artifact(br.build.cooker_log_path).read())
            except IOError:
                os.unlink(file_name)

            br.state = BuildRequest.REQ_COMPLETED
            br.save()

    def cleanup(self):
        from django.utils import timezone
        from datetime import timedelta
        # environments locked for more than 30 seconds - they should be unlocked
        BuildEnvironment.objects.filter(buildrequest__state__in=[BuildRequest.REQ_FAILED, BuildRequest.REQ_COMPLETED]).filter(lock=BuildEnvironment.LOCK_LOCK).filter(updated__lt = timezone.now() - timedelta(seconds = 30)).update(lock = BuildEnvironment.LOCK_FREE)


        # update all Builds that failed to start

        for br in BuildRequest.objects.filter(state = BuildRequest.REQ_FAILED, build__outcome = Build.IN_PROGRESS):
            # transpose the launch errors in ToasterExceptions
            br.build.outcome = Build.FAILED
            for brerror in br.brerror_set.all():
                logger.debug("Saving error %s" % brerror)
                LogMessage.objects.create(build = br.build, level = LogMessage.EXCEPTION, message = brerror.errmsg)
            br.build.save()

            # we don't have a true build object here; hence, toasterui didn't have a chance to release the BE lock
            br.environment.lock = BuildEnvironment.LOCK_FREE
            br.environment.save()



        # update all BuildRequests without a build created
        for br in BuildRequest.objects.filter(build = None):
            br.build = Build.objects.create(project = br.project, completed_on = br.updated, started_on = br.created)
            br.build.outcome = Build.FAILED
            try:
                br.build.machine = br.brvariable_set.get(name='MACHINE').value
            except BRVariable.DoesNotExist:
                pass
            br.save()
            # transpose target information
            for brtarget in br.brtarget_set.all():
                Target.objects.create(build=br.build, target=brtarget.target, task=brtarget.task)
            # transpose the launch errors in ToasterExceptions
            for brerror in br.brerror_set.all():
                LogMessage.objects.create(build = br.build, level = LogMessage.EXCEPTION, message = brerror.errmsg)

            br.build.save()
            pass
        BuildEnvironment.objects.filter(lock=BuildEnvironment.LOCK_LOCK).filter(updated__lt = timezone.now() - timedelta(seconds = 30)).update(lock = BuildEnvironment.LOCK_FREE)


    def handle_noargs(self, **options):
        while True:
            try:
                self.cleanup()
                self.archive()
                self.schedule()
            except:
                pass

            time.sleep(1)
        self.cleanup()
        self.archive()
        self.schedule()

@@ -1,180 +0,0 @@
# -*- coding: utf-8 -*-
from south.utils import datetime_utils as datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models


class Migration(SchemaMigration):

    def forwards(self, orm):
        # Adding field 'BRLayer.layer_version'
        db.add_column(u'bldcontrol_brlayer', 'layer_version',
                      self.gf('django.db.models.fields.related.ForeignKey')(to=orm['orm.Layer_Version'], null=True),
                      keep_default=False)


    def backwards(self, orm):
        # Deleting field 'BRLayer.layer_version'
        db.delete_column(u'bldcontrol_brlayer', 'layer_version_id')


    models = {
        u'bldcontrol.brbitbake': {
            'Meta': {'object_name': 'BRBitbake'},
            'commit': ('django.db.models.fields.CharField', [], {'max_length': '254'}),
            'dirpath': ('django.db.models.fields.CharField', [], {'max_length': '254'}),
            'giturl': ('django.db.models.fields.CharField', [], {'max_length': '254'}),
            u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
            'req': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['bldcontrol.BuildRequest']", 'unique': 'True'})
        },
        u'bldcontrol.brerror': {
            'Meta': {'object_name': 'BRError'},
            'errmsg': ('django.db.models.fields.TextField', [], {}),
            'errtype': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
            u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
            'req': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['bldcontrol.BuildRequest']"}),
            'traceback': ('django.db.models.fields.TextField', [], {})
        },
        u'bldcontrol.brlayer': {
            'Meta': {'object_name': 'BRLayer'},
            'commit': ('django.db.models.fields.CharField', [], {'max_length': '254'}),
            'dirpath': ('django.db.models.fields.CharField', [], {'max_length': '254'}),
            'giturl': ('django.db.models.fields.CharField', [], {'max_length': '254'}),
            u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
            'layer_version': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Layer_Version']", 'null': 'True'}),
            'name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
            'req': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['bldcontrol.BuildRequest']"})
        },
        u'bldcontrol.brtarget': {
            'Meta': {'object_name': 'BRTarget'},
            u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
            'req': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['bldcontrol.BuildRequest']"}),
            'target': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
            'task': ('django.db.models.fields.CharField', [], {'max_length': '100', 'null': 'True'})
        },
        u'bldcontrol.brvariable': {
            'Meta': {'object_name': 'BRVariable'},
            u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
            'name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
            'req': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['bldcontrol.BuildRequest']"}),
            'value': ('django.db.models.fields.TextField', [], {'blank': 'True'})
        },
        u'bldcontrol.buildenvironment': {
            'Meta': {'object_name': 'BuildEnvironment'},
            'address': ('django.db.models.fields.CharField', [], {'max_length': '254'}),
            'bbaddress': ('django.db.models.fields.CharField', [], {'max_length': '254', 'blank': 'True'}),
            'bbport': ('django.db.models.fields.IntegerField', [], {'default': '-1'}),
            'bbstate': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
            'bbtoken': ('django.db.models.fields.CharField', [], {'max_length': '126', 'blank': 'True'}),
            'betype': ('django.db.models.fields.IntegerField', [], {}),
            'builddir': ('django.db.models.fields.CharField', [], {'max_length': '512', 'blank': 'True'}),
            'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
            u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
            'lock': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
            'sourcedir': ('django.db.models.fields.CharField', [], {'max_length': '512', 'blank': 'True'}),
            'updated': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'})
        },
        u'bldcontrol.buildrequest': {
            'Meta': {'object_name': 'BuildRequest'},
            'build': ('django.db.models.fields.related.OneToOneField', [], {'to': u"orm['orm.Build']", 'unique': 'True', 'null': 'True'}),
            'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
            'environment': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['bldcontrol.BuildEnvironment']", 'null': 'True'}),
            u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
            'project': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Project']"}),
            'state': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
            'updated': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'})
        },
        u'orm.bitbakeversion': {
            'Meta': {'object_name': 'BitbakeVersion'},
            'branch': ('django.db.models.fields.CharField', [], {'max_length': '32'}),
            'dirpath': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
            'giturl': ('django.db.models.fields.URLField', [], {'max_length': '200'}),
            u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
            'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '32'})
        },
        u'orm.branch': {
            'Meta': {'unique_together': "(('layer_source', 'name'), ('layer_source', 'up_id'))", 'object_name': 'Branch'},
            u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
            'layer_source': ('django.db.models.fields.related.ForeignKey', [], {'default': 'True', 'to': u"orm['orm.LayerSource']", 'null': 'True'}),
            'name': ('django.db.models.fields.CharField', [], {'max_length': '50'}),
            'short_description': ('django.db.models.fields.CharField', [], {'max_length': '50', 'blank': 'True'}),
            'up_date': ('django.db.models.fields.DateTimeField', [], {'default': 'None', 'null': 'True'}),
            'up_id': ('django.db.models.fields.IntegerField', [], {'default': 'None', 'null': 'True'})
        },
        u'orm.build': {
            'Meta': {'object_name': 'Build'},
            'bitbake_version': ('django.db.models.fields.CharField', [], {'max_length': '50'}),
            'build_name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
            'completed_on': ('django.db.models.fields.DateTimeField', [], {}),
            'cooker_log_path': ('django.db.models.fields.CharField', [], {'max_length': '500'}),
            'distro': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
            'distro_version': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
            u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
            'machine': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
            'outcome': ('django.db.models.fields.IntegerField', [], {'default': '2'}),
            'project': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Project']"}),
            'started_on': ('django.db.models.fields.DateTimeField', [], {})
        },
        u'orm.layer': {
            'Meta': {'unique_together': "(('layer_source', 'up_id'), ('layer_source', 'name'))", 'object_name': 'Layer'},
            'description': ('django.db.models.fields.TextField', [], {'default': 'None', 'null': 'True'}),
            u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
            'layer_index_url': ('django.db.models.fields.URLField', [], {'max_length': '200'}),
            'layer_source': ('django.db.models.fields.related.ForeignKey', [], {'default': 'None', 'to': u"orm['orm.LayerSource']", 'null': 'True'}),
            'name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
            'summary': ('django.db.models.fields.TextField', [], {'default': 'None', 'null': 'True'}),
            'up_date': ('django.db.models.fields.DateTimeField', [], {'default': 'None', 'null': 'True'}),
            'up_id': ('django.db.models.fields.IntegerField', [], {'default': 'None', 'null': 'True'}),
            'vcs_url': ('django.db.models.fields.URLField', [], {'default': 'None', 'max_length': '200', 'null': 'True'}),
            'vcs_web_file_base_url': ('django.db.models.fields.URLField', [], {'default': 'None', 'max_length': '200', 'null': 'True'}),
            'vcs_web_tree_base_url': ('django.db.models.fields.URLField', [], {'default': 'None', 'max_length': '200', 'null': 'True'}),
            'vcs_web_url': ('django.db.models.fields.URLField', [], {'default': 'None', 'max_length': '200', 'null': 'True'})
        },
        u'orm.layer_version': {
            'Meta': {'unique_together': "(('layer_source', 'up_id'),)", 'object_name': 'Layer_Version'},
            'branch': ('django.db.models.fields.CharField', [], {'max_length': '80'}),
            'build': ('django.db.models.fields.related.ForeignKey', [], {'default': 'None', 'related_name': "'layer_version_build'", 'null': 'True', 'to': u"orm['orm.Build']"}),
            'commit': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
            'dirpath': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '255', 'null': 'True'}),
            u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
            'layer': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'layer_version_layer'", 'to': u"orm['orm.Layer']"}),
            'layer_source': ('django.db.models.fields.related.ForeignKey', [], {'default': 'None', 'to': u"orm['orm.LayerSource']", 'null': 'True'}),
            'local_path': ('django.db.models.fields.FilePathField', [], {'default': "'/'", 'max_length': '1024'}),
            'priority': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
            'project': ('django.db.models.fields.related.ForeignKey', [], {'default': 'None', 'to': u"orm['orm.Project']", 'null': 'True'}),
            'up_branch': ('django.db.models.fields.related.ForeignKey', [], {'default': 'None', 'to': u"orm['orm.Branch']", 'null': 'True'}),
            'up_date': ('django.db.models.fields.DateTimeField', [], {'default': 'None', 'null': 'True'}),
            'up_id': ('django.db.models.fields.IntegerField', [], {'default': 'None', 'null': 'True'})
        },
        u'orm.layersource': {
            'Meta': {'unique_together': "(('sourcetype', 'apiurl'),)", 'object_name': 'LayerSource'},
            'apiurl': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '255', 'null': 'True'}),
            u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
            'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '63'}),
            'sourcetype': ('django.db.models.fields.IntegerField', [], {})
        },
        u'orm.project': {
            'Meta': {'object_name': 'Project'},
            'bitbake_version': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.BitbakeVersion']", 'null': 'True'}),
            'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
            u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
            'is_default': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
            'name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
            'release': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Release']", 'null': 'True'}),
            'short_description': ('django.db.models.fields.CharField', [], {'max_length': '50', 'blank': 'True'}),
            'updated': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
            'user_id': ('django.db.models.fields.IntegerField', [], {'null': 'True'})
        },
        u'orm.release': {
            'Meta': {'object_name': 'Release'},
|
||||
'bitbake_version': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.BitbakeVersion']"}),
|
||||
'branch_name': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '50'}),
|
||||
'description': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
|
||||
'helptext': ('django.db.models.fields.TextField', [], {'null': 'True'}),
|
||||
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
|
||||
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '32'})
|
||||
}
|
||||
}
|
||||
|
||||
complete_apps = ['bldcontrol']
@@ -1,6 +1,6 @@
from django.db import models
from django.core.validators import MaxValueValidator, MinValueValidator
from orm.models import Project, ProjectLayer, ProjectVariable, ProjectTarget, Build, Layer_Version
from orm.models import Project, ProjectLayer, ProjectVariable, ProjectTarget, Build

# a BuildEnvironment is the equivalent of the "build/" directory on the localhost
class BuildEnvironment(models.Model):
@@ -39,16 +39,50 @@ class BuildEnvironment(models.Model):
    created = models.DateTimeField(auto_now_add = True)
    updated = models.DateTimeField(auto_now = True)


    def get_artifact_type(self, path):
        if self.betype == BuildEnvironment.TYPE_LOCAL:
            try:
                import magic

                # fair warning: this is a mess; there are multiple competing and incompatible
                # magic modules floating around, so we try some of the most common combinations

                try: # we try ubuntu's python-magic 5.4
                    m = magic.open(magic.MAGIC_MIME_TYPE)
                    m.load()
                    return m.file(path)
                except AttributeError:
                    pass

                try: # we try python-magic 0.4.6
                    m = magic.Magic(magic.MAGIC_MIME)
                    return m.from_file(path)
                except AttributeError:
                    pass

                try: # we try pip filemagic 1.6
                    m = magic.Magic(flags=magic.MAGIC_MIME_TYPE)
                    return m.id_filename(path)
                except AttributeError:
                    pass

                return "binary/octet-stream"
            except ImportError:
                return "binary/octet-stream"
        raise Exception("FIXME: artifact type not implemented for build environment type %s" % be.get_betype_display())
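Because several packages all claim the "magic" module name, the method above has to feel its way through three mutually incompatible APIs. A rough, untested probe in the same spirit, useful for checking which variant a host actually has (not part of Toaster itself, and the attribute checks are assumptions matching the calls used above):

import magic  # whichever package currently owns the "magic" name

def magic_flavour():
    # ubuntu's python-magic 5.x exposes a libmagic-style open()/load() pair
    if hasattr(magic, "open"):
        return "python-magic 5.x (libmagic bindings)"
    # pip's python-magic 0.4.x builds Magic() from MAGIC_* constants
    if hasattr(magic, "Magic") and hasattr(magic, "MAGIC_MIME"):
        return "python-magic 0.4.x"
    # pip's filemagic 1.6 takes keyword flags and offers id_filename()
    if hasattr(magic, "Magic") and hasattr(magic.Magic, "id_filename"):
        return "filemagic 1.6"
    return "unknown"

print magic_flavour()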

    def get_artifact(self, path):
        if self.betype == BuildEnvironment.TYPE_LOCAL:
            return open(path, "r")
        raise Exception("FIXME: artifact download not implemented for build environment type %s" % self.get_betype_display())
        raise Exception("FIXME: artifact download not implemented for build environment type %s" % be.get_betype_display())

    def has_artifact(self, path):
        import os
        if self.betype == BuildEnvironment.TYPE_LOCAL:
        if self.betype == BuildRequest.TYPE_LOCAL:
            return os.path.exists(path)
        raise Exception("FIXME: has artifact not implemented for build environment type %s" % self.get_betype_display())
        raise Exception("FIXME: has artifact not implemented for build environment type %s" % be.get_betype_display())

# a BuildRequest is a request that the scheduler will build using a BuildEnvironment
# the build request queue is the table itself, ordered by state
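Since the comment above defines the queue purely in table terms, dequeueing reduces to a filtered, ordered query. A minimal sketch, assuming the Django ORM models from this diff (BuildRequest.REQ_QUEUED appears in the tests further down):

from bldcontrol.models import BuildRequest

def next_build_request():
    # the queue is the BuildRequest table itself: take the oldest queued row
    queued = BuildRequest.objects.filter(state = BuildRequest.REQ_QUEUED).order_by('id')
    return queued[0] if queued.count() > 0 else None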
@@ -91,9 +125,6 @@ class BuildRequest(models.Model):
    def get_machine(self):
        return self.brvariable_set.get(name="MACHINE").value

    def __str__(self):
        return "%s %s" % (self.project, self.get_state_display())

# These tables specify the settings for running an actual build.
# They MUST be kept in sync with the tables in orm.models.Project*

@@ -103,7 +134,6 @@ class BRLayer(models.Model):
    giturl = models.CharField(max_length = 254)
    commit = models.CharField(max_length = 254)
    dirpath = models.CharField(max_length = 254)
    layer_version = models.ForeignKey(Layer_Version, null=True)

class BRBitbake(models.Model):
    req = models.ForeignKey(BuildRequest, unique = True)    # only one bitbake for a request
@@ -126,6 +156,3 @@ class BRError(models.Model):
    errtype = models.CharField(max_length=100)
    errmsg = models.TextField()
    traceback = models.TextField()

    def __str__(self):
        return "%s (%s)" % (self.errmsg, self.req)

@@ -31,9 +31,6 @@ from toastermain import settings

from bbcontroller import BuildEnvironmentController, ShellCmdException, BuildSetupException

class NotImplementedException(Exception):
    pass

def DN(path):
    return "/".join(path.split("/")[0:-1])

@@ -128,7 +125,7 @@ class SSHBEController(BuildEnvironmentController):
        # set layers in the layersource


        raise NotImplementedException("Not implemented: SSH setLayers")
        raise Exception("Not implemented: SSH setLayers")
        # 3. configure the build environment, so we have a conf/bblayers.conf
        assert self.pokydirname is not None
        self._setupBE()
@@ -156,24 +153,3 @@ class SSHBEController(BuildEnvironmentController):
        import shutil
        shutil.rmtree(os.path.join(self.be.sourcedir, "build"))
        assert not self._pathexists(self.be.builddir)

    def triggerBuild(self, bitbake, layers, variables, targets):
        # set up the build environment with the needed layers
        self.setLayers(bitbake, layers)
        self.writeConfFile("conf/toaster-pre.conf", )
        self.writeConfFile("conf/toaster.conf", raw = "INHERIT+=\"toaster buildhistory\"")

        # get the bb server running with the build req id and build env id
        bbctrl = self.getBBController()

        # trigger the build command
        task = reduce(lambda x, y: x if len(y)== 0 else y, map(lambda y: y.task, targets))
        if len(task) == 0:
            task = None

        bbctrl.build(list(map(lambda x:x.target, targets)), task)

        logger.debug("localhostbecontroller: Build launched, exiting. Follow build logs at %s/toaster_ui.log" % self.be.builddir)

        # disconnect from the server
        bbctrl.disconnect()
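The task-selection reduce in triggerBuild() is terse, so here is a small worked illustration of what it computes, using throwaway stand-ins for the target objects (not real Toaster types): the last non-empty per-target task wins, and an all-empty result falls back to the default task.

# hypothetical stand-ins for target objects with .target and .task attributes
T = lambda target, task: type('T', (object,), {'target': target, 'task': task})
targets = [T('core-image-minimal', ''), T('quilt-native', 'clean')]

task = reduce(lambda x, y: x if len(y) == 0 else y, map(lambda t: t.task, targets))
if len(task) == 0:
    task = None
print task   # -> 'clean'; with every task empty it would print None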
@@ -7,7 +7,7 @@ Replace this with more appropriate tests for your application.

from django.test import TestCase

from bldcontrol.bbcontroller import BitbakeController, BuildSetupException
from bldcontrol.bbcontroller import BitbakeController
from bldcontrol.localhostbecontroller import LocalhostBEController
from bldcontrol.sshbecontroller import SSHBEController
from bldcontrol.models import BuildEnvironment, BuildRequest
@@ -15,7 +15,6 @@ from bldcontrol.management.commands.runbuilds import Command

import socket
import subprocess
import os

# standard poky data hardcoded for testing
BITBAKE_LAYERS = [type('bitbake_info', (object,), { "giturl": "git://git.yoctoproject.org/poky.git", "dirpath": "", "commit": "HEAD"})]
@@ -30,17 +29,6 @@ POKY_LAYERS = [

# we have an abstract test class designed to ensure that the controllers use a single interface
# specific controller tests only need to override the _getBuildEnvironment() method

test_sourcedir = os.getenv("TTS_SOURCE_DIR")
test_builddir = os.getenv("TTS_BUILD_DIR")
test_address = os.getenv("TTS_TEST_ADDRESS", "localhost")

if test_sourcedir == None or test_builddir == None or test_address == None:
    raise Exception("Please set TTS_SOURCE_DIR, TTS_BUILD_DIR and TTS_TEST_ADDRESS")

# The bb server will expect a toaster-pre.conf file to exist. If it doesn't exist then we make
# an empty one here.
open(test_builddir + 'conf/toaster-pre.conf', 'a').close()
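The three TTS_* variables gate the whole suite. A compact equivalent of the guard above that also names exactly what is missing (note that TTS_TEST_ADDRESS defaults to "localhost" in the original, so its None check there could never fire):

import os

required = ("TTS_SOURCE_DIR", "TTS_BUILD_DIR", "TTS_TEST_ADDRESS")
missing = [name for name in required if os.getenv(name) is None]
if missing:
    raise Exception("Please set %s" % ", ".join(missing))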

class BEControllerTests(object):

    def _serverForceStop(self, bc):
@@ -48,53 +36,28 @@ class BEControllerTests(object):
        self.assertTrue(err == '', "bitbake server pid %s not stopped" % err)

    def test_serverStartAndStop(self):
        from bldcontrol.sshbecontroller import NotImplementedException
        obe = self._getBuildEnvironment()
        bc = self._getBEController(obe)
        try:
            # setting layers, skip any layer info
            bc.setLayers(BITBAKE_LAYERS, POKY_LAYERS)
        except NotImplementedException, e:
            print "Test skipped due to command not implemented yet"
            return True
        # We are ok with the exception as we're handling the git already exists
        except BuildSetupException:
            pass
        bc.setLayers(BITBAKE_LAYERS, POKY_LAYERS) # setting layers, skip any layer info

        bc.pokydirname = test_sourcedir
        bc.islayerset = True

        hostname = test_address.split("@")[-1]
        hostname = self.test_address.split("@")[-1]

        # test start server and stop
        bc.startBBServer()

        self.assertFalse(socket.socket(socket.AF_INET, socket.SOCK_STREAM).connect_ex((hostname, int(bc.be.bbport))), "Server not answering")
        self.assertTrue(socket.socket(socket.AF_INET, socket.SOCK_STREAM).connect_ex((hostname, 8200)), "Port already occupied")
        bc.startBBServer("0:0")
        self.assertFalse(socket.socket(socket.AF_INET, socket.SOCK_STREAM).connect_ex((hostname, 8200)), "Server not answering")

        bc.stopBBServer()
        self.assertTrue(socket.socket(socket.AF_INET, socket.SOCK_STREAM).connect_ex((hostname, int(bc.be.bbport))), "Server not stopped")
        self.assertTrue(socket.socket(socket.AF_INET, socket.SOCK_STREAM).connect_ex((hostname, 8200)), "Server not stopped")

        self._serverForceStop(bc)

    def test_getBBController(self):
        from bldcontrol.sshbecontroller import NotImplementedException
        obe = self._getBuildEnvironment()
        bc = self._getBEController(obe)
        layerSet = False
        try:
            # setting layers, skip any layer info
            layerSet = bc.setLayers(BITBAKE_LAYERS, POKY_LAYERS)
        except NotImplementedException:
            print "Test skipped due to command not implemented yet"
            return True
        # We are ok with the exception as we're handling the git already exists
        except BuildSetupException:
            pass
        bc.setLayers(BITBAKE_LAYERS, POKY_LAYERS) # setting layers, skip any layer info

        bc.pokydirname = test_sourcedir
        bc.islayerset = True

        bbc = bc.getBBController()
        bbc = bc.getBBController("%d:%d" % (-1, obe.pk))
        self.assertTrue(isinstance(bbc, BitbakeController))
        bc.stopBBServer()

@@ -103,15 +66,19 @@
class LocalhostBEControllerTests(TestCase, BEControllerTests):
    def __init__(self, *args):
        super(LocalhostBEControllerTests, self).__init__(*args)

        # hardcoded for Alex's machine; since the localhost BE is machine-dependent,
        # I found no good way to abstract this
        self.test_sourcedir = "/home/ddalex/ssd/yocto"
        self.test_builddir = "/home/ddalex/ssd/yocto/build"
        self.test_address = "localhost"

    def _getBuildEnvironment(self):
        return BuildEnvironment.objects.create(
            lock = BuildEnvironment.LOCK_FREE,
            betype = BuildEnvironment.TYPE_LOCAL,
            address = test_address,
            sourcedir = test_sourcedir,
            builddir = test_builddir )
            address = self.test_address,
            sourcedir = self.test_sourcedir,
            builddir = self.test_builddir )

    def _getBEController(self, obe):
        return LocalhostBEController(obe)
@@ -119,20 +86,25 @@ class LocalhostBEControllerTests(TestCase, BEControllerTests):
class SSHBEControllerTests(TestCase, BEControllerTests):
    def __init__(self, *args):
        super(SSHBEControllerTests, self).__init__(*args)
        self.test_address = "ddalex-desktop.local"
        # hardcoded for ddalex-desktop.local machine; since the localhost BE is machine-dependent,
        # I found no good way to abstract this
        self.test_sourcedir = "/home/ddalex/ssd/yocto"
        self.test_builddir = "/home/ddalex/ssd/yocto/build"

    def _getBuildEnvironment(self):
        return BuildEnvironment.objects.create(
            lock = BuildEnvironment.LOCK_FREE,
            betype = BuildEnvironment.TYPE_SSH,
            address = test_address,
            sourcedir = test_sourcedir,
            builddir = test_builddir )
            address = self.test_address,
            sourcedir = self.test_sourcedir,
            builddir = self.test_builddir )

    def _getBEController(self, obe):
        return SSHBEController(obe)

    def test_pathExists(self):
        obe = BuildEnvironment.objects.create(betype = BuildEnvironment.TYPE_SSH, address= test_address)
        obe = BuildEnvironment.objects.create(betype = BuildEnvironment.TYPE_SSH, address= self.test_address)
        sbc = SSHBEController(obe)
        self.assertTrue(sbc._pathexists("/"))
        self.assertFalse(sbc._pathexists("/.deadbeef"))
@@ -157,7 +129,7 @@ class RunBuildsCommandTests(TestCase):
        self.assertRaises(IndexError, command._selectBuildEnvironment)

    def test_br_select(self):
        from orm.models import Project, Release, BitbakeVersion, Branch
        from orm.models import Project, Release, BitbakeVersion
        p = Project.objects.create_project("test", Release.objects.get_or_create(name = "HEAD", bitbake_version = BitbakeVersion.objects.get_or_create(name="HEAD", branch=Branch.objects.get_or_create(name="HEAD"))[0])[0])
        obr = BuildRequest.objects.create(state = BuildRequest.REQ_QUEUED, project = p)
        command = Command()

44
bitbake/lib/toaster/bldviewer/api.py
Normal file
@@ -0,0 +1,44 @@
#
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# BitBake Toaster Implementation
#
# Copyright (C) 2013 Intel Corporation
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

from django.conf.urls import patterns, include, url


urlpatterns = patterns('bldviewer.views',
    url(r'^builds$', 'model_explorer', {'model_name':'build'}, name='builds'),
    url(r'^targets$', 'model_explorer', {'model_name':'target'}, name='targets'),
    url(r'^target_files$', 'model_explorer', {'model_name':'target_file'}, name='target_file'),
    url(r'^target_image_file$', 'model_explorer', {'model_name':'target_image_file'}, name='target_image_file'),
    url(r'^tasks$', 'model_explorer', {'model_name':'task'}, name='task'),
    url(r'^task_dependencies$', 'model_explorer', {'model_name':'task_dependency'}, name='task_dependencies'),
    url(r'^packages$', 'model_explorer', {'model_name':'package'}, name='package'),
    url(r'^package_dependencies$', 'model_explorer', {'model_name':'package_dependency'}, name='package_dependency'),
    url(r'^target_packages$', 'model_explorer', {'model_name':'target_installed_package'}, name='target_packages'),
    url(r'^target_installed_packages$', 'model_explorer', {'model_name':'target_installed_package'}, name='target_installed_package'),
    url(r'^package_files$', 'model_explorer', {'model_name':'build_file'}, name='build_file'),
    url(r'^layers$', 'model_explorer', {'model_name':'layer'}, name='layer'),
    url(r'^layerversions$', 'model_explorer', {'model_name':'layerversion'}, name='layerversion'),
    url(r'^recipes$', 'model_explorer', {'model_name':'recipe'}, name='recipe'),
    url(r'^recipe_dependencies$', 'model_explorer', {'model_name':'recipe_dependency'}, name='recipe_dependencies'),
    url(r'^variables$', 'model_explorer', {'model_name':'variable'}, name='variables'),
    url(r'^variableshistory$', 'model_explorer', {'model_name':'variablehistory'}, name='variablehistory'),
    url(r'^logmessages$', 'model_explorer', {'model_name':'logmessage'}, name='logmessages'),
)
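Every route above funnels into the same model_explorer view, so exercising the API is just a loop over endpoint names. A hypothetical probe against a locally running instance (the host and port mirror the ones used by the tests later in this diff):

import urllib

for endpoint in ("builds", "tasks", "packages", "layers", "recipes"):
    url = "http://localhost:8000/api/1.0/%s" % endpoint
    print endpoint, urllib.urlopen(url).getcode()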
4797
bitbake/lib/toaster/bldviewer/static/css/bootstrap.css
vendored
Normal file
File diff suppressed because it is too large
1982
bitbake/lib/toaster/bldviewer/static/js/bootstrap.js
vendored
Normal file
File diff suppressed because it is too large
6
bitbake/lib/toaster/bldviewer/static/js/jquery-2.0.3.js
vendored
Normal file
File diff suppressed because one or more lines are too long
30
bitbake/lib/toaster/bldviewer/templates/simple_base.html
Normal file
@@ -0,0 +1,30 @@
<!DOCTYPE html>
{% load static %}
<html>
  <head>
    <title>Toaster Simple Explorer</title>
    <script src="{% static 'js/jquery-2.0.3.js' %}">
    </script>
    <script src="{% static 'js/bootstrap.js' %}">
    </script>
    <link href="{% static 'css/bootstrap.css' %}" rel="stylesheet" type="text/css">
  </head>

  <body style="height: 100%">
    <div style="width:100%; height: 100%; position:absolute">
      <div style="width: 100%; height: 3em" class="nav">
        <ul class="nav nav-tabs">
          <li><a href="{% url "simple-all-builds" %}">All Builds</a></li>
          <li><a href="{% url "simple-all-layers" %}">All Layers</a></li>
        </ul>
      </div>

      <div style="overflow-y:scroll; width: 100%; position: absolute; top: 3em; bottom:70px ">
        {% block pagecontent %}
        {% endblock %}
      </div>
      <div class="navbar" style="position: absolute; bottom: 0; width:100%"><br/>About Toaster | Yocto Project </div>
    </div>
  </body>
</html>

@@ -0,0 +1,17 @@
{% extends "simple_basetable.html" %}

{% block pagename %}
<ul class="nav nav-tabs" style="display: inline-block">
  <li><a>Build {{build.target_set.all|join:" "}} at {{build.started_on}} : </a></li>
  <li><a href="{% url "simple-task" build.id %}"> Tasks </a></li>
  <li><a href="{% url "simple-bpackage" build.id %}"> Build Packages </a></li>
  {% for t in build.target_set.all %}
    {% if t.is_image %}
      <li><a href="{% url "simple-tpackage" build.id t.pk %}"> Packages for {{t.target}} </a> </li>
    {% endif %}
  {% endfor %}
  <li><a href="{% url "simple-configuration" build.id %}"> Configuration </a> </li>
</ul>
<h1>Toaster - Build {% block pagetitle %} {% endblock %}</h1>
{% endblock %}

@@ -0,0 +1,64 @@
{% extends "simple_base.html" %}

{% block pagecontent %}
<script>
function showhideTableColumn(i, sh) {
    if (sh)
        $('td:nth-child('+i+'),th:nth-child('+i+')').show();
    else
        $('td:nth-child('+i+'),th:nth-child('+i+')').hide();
}


function filterTableRows(test) {
    if (test.length > 0) {
        var r = test.split(/[ ,]+/).map(function (e) { return new RegExp(e, 'i') });
        $('tr.data').map( function (i, el) {
            (! r.map(function (j) { return j.test($(el).html())}).reduce(function (c, p) { return c && p;} )) ? $(el).hide() : $(el).show();
        });
    } else
    {
        $('tr.data').show();
    }
}
</script>
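filterTableRows() hides a row unless every whitespace- or comma-separated term matches its HTML as a case-insensitive regex. The same predicate rewritten in Python purely for clarity (a sketch, not code shipped with Toaster):

import re

def row_visible(row_html, query):
    # an empty query shows everything, matching the else branch above
    terms = [t for t in re.split(r'[ ,]+', query) if t]
    return all(re.search(t, row_html, re.I) for t in terms)

print row_visible("<td>busybox 1.21.1</td>", "busy 1.21")      # True
print row_visible("<td>busybox 1.21.1</td>", "busy, ncurses")  # False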
<div style="margin-bottom: 0.5em">
|
||||
|
||||
{% block pagename %}
|
||||
{% endblock %}
|
||||
<div align="left" style="display:inline-block; width: 40%; margin-left: 2em"> Filter: <input type="search" id="filterstring" style="width: 80%" onkeyup="filterTableRows($('#filterstring').val())" autocomplete="off">
|
||||
</div>
|
||||
{% if hideshowcols %}
|
||||
<div align="right" style="display: inline-block; width: 40%">Show/Hide columns:
|
||||
{% for i in hideshowcols %}
|
||||
<span>{{i.name}} <input type="checkbox" id="ct{{i.name}}" onchange="showhideTableColumn({{i.order}}, $('#ct{{i.name}}').is(':checked'))" checked autocomplete="off"></span> |
|
||||
{% endfor %}
|
||||
</div>
|
||||
{% endif %}
|
||||
</div>
|
||||
|
||||
<div style="display: block; float:right; margin-left: auto; margin-right:5em"><span class="pagination" style="vertical-align: top; margin-right: 3em">Showing {{objects.start_index}} to {{objects.end_index}} out of {{objects.paginator.count}} entries. </span>
|
||||
<ul class="pagination" style="display: block-inline">
|
||||
{%if objects.has_previous %}
|
||||
<li><a href="?page={{objects.previous_page_number}}">«</a></li>
|
||||
{%else%}
|
||||
<li class="disabled"><a href="#">«</a></li>
|
||||
{%endif%}
|
||||
{% for i in objects.page_range %}
|
||||
<li{%if i == objects.number %} class="active" {%endif%}><a href="?page={{i}}">{{i}}</a></li>
|
||||
{% endfor %}
|
||||
{%if objects.has_next%}
|
||||
<li><a href="?page={{objects.next_page_number}}">»</a></li>
|
||||
{%else%}
|
||||
<li class="disabled"><a href="#">»</a></li>
|
||||
{%endif%}
|
||||
</ul>
|
||||
</div>
|
||||
|
||||
<table class="table table-striped table-condensed" style="width:95%">
|
||||
{% block pagetable %}
|
||||
{% endblock %}
|
||||
</table>
|
||||
</div>
|
||||
|
||||
{% endblock %}
|
||||
24
bitbake/lib/toaster/bldviewer/templates/simple_bfile.html
Normal file
@@ -0,0 +1,24 @@
{% extends "simple_basebuildpage.html" %}

{% block pagetitle %}Files for package {{objects.0.bpackage.name}} {% endblock %}
{% block pagetable %}
{% if not objects %}
  <p>No files were recorded for this package!</p>
{% else %}

  <tr>
    <th>Name</th>
    <th>Size (Bytes)</th>
  </tr>

  {% for file in objects %}

  <tr class="data">
    <td>{{file.path}}</td>
    <td>{{file.size}}</td>

  {% endfor %}

{% endif %}

{% endblock %}
44
bitbake/lib/toaster/bldviewer/templates/simple_bpackage.html
Normal file
@@ -0,0 +1,44 @@
{% extends "simple_basebuildpage.html" %}

{% block pagetitle %}Packages{% endblock %}
{% block pagetable %}
{% if not objects %}
  <p>No packages were recorded for this target!</p>
{% else %}

  <tr>
    <th>Name</th>
    <th>Version</th>
    <th>Recipe</th>
    <th>Summary</th>
    <th>Section</th>
    <th>Description</th>
    <th>Size on host disk (Bytes)</th>
    <th>License</th>
    <th>Dependencies List (all)</th>
  </tr>

  {% for package in objects %}

  <tr class="data">
    <td><a name="#{{package.name}}" href="{% url "simple-bfile" build.pk package.pk %}">{{package.name}} ({{package.filelist_bpackage.count}} files)</a></td>
    <td>{{package.version}}-{{package.revision}}</td>
    <td>{%if package.recipe%}<a href="{% url "simple-layer_versions_recipes" package.recipe.layer_version_id %}#{{package.recipe.name}}">{{package.recipe.name}}</a>{{package.package_name}}</a>{%endif%}</td>

    <td>{{package.summary}}</td>
    <td>{{package.section}}</td>
    <td>{{package.description}}</td>
    <td>{{package.size}}</td>
    <td>{{package.license}}</td>
    <td>
      <div style="height: 3em; overflow:auto">
        {% for bpd in package.package_dependencies_source.all %}
          {{bpd.dep_type}}: {{bpd.depends_on.name}} <br/>
        {% endfor %}
      </div>
    </td>
  {% endfor %}

{% endif %}

{% endblock %}
43
bitbake/lib/toaster/bldviewer/templates/simple_build.html
Normal file
@@ -0,0 +1,43 @@
{% extends "simple_basetable.html" %}

{% block pagename %}
<h1>Toaster - Builds</h1>
{% endblock %}

{% block pagetable %}

{% load simple_projecttags %}
<tr>
  <th>Outcome</th>
  <th>Started On</th>
  <th>Completed On</th>
  <th>Target</th>
  <th>Machine</th>
  <th>Time</th>
  <th>Errors</th>
  <th>Warnings</th>
  <th>Output</th>
  <th>Log</th>
  <th>Bitbake Version</th>
  <th>Build Name</th>
</tr>
{% for build in objects %}
<tr class="data">
  <td><a href="{% url "simple-configuration" build.id %}">{{build.get_outcome_display}}</a></td>
  <td>{{build.started_on}}</td>
  <td>{{build.completed_on}}</td>
  <td>{% for t in build.target_set.all %}{%if t.is_image %}<a href="{% url "simple-tpackage" build.id t.id %}">{% endif %}{{t.target}}{% if t.is_image %}</a>{% endif %}<br/>{% endfor %}</td>
  <td>{{build.machine}}</td>
  <td>{% time_difference build.started_on build.completed_on %}</td>
  <td>{{build.errors_no}}:{% if build.errors_no %}{% for error in logs %}{% if error.build == build %}{% if error.level == 2 %}<p>{{error.message}}</p>{% endif %}{% endif %}{% endfor %}{% else %}None{% endif %}</td>
  <td>{{build.warnings_no}}:{% if build.warnings_no %}{% for warning in logs %}{% if warning.build == build %}{% if warning.level == 1 %}<p>{{warning.message}}</p>{% endif %}{% endif %}{% endfor %}{% else %}None{% endif %}</td>
  <td>TBD: determine image file list</td>
  <td>{{build.cooker_log_path}}</td>
  <td>{{build.bitbake_version}}</td>
  <td>{{build.build_name}}</td>
</tr>

{% endfor %}
{% endblock %}

@@ -0,0 +1,22 @@
{% extends "simple_basebuildpage.html" %}

{% block pagetitle %}Configuration{% endblock %}
{% block pagetable %}

<tr>
  <th>Name</th>
  <th>Description</th>
  <th>Definition history</th>
  <th>Value</th>
</tr>

{% for variable in objects %}

<tr class="data">
  <td>{{variable.variable_name}}</td>
  <td>{% if variable.description %}{{variable.description}}{% endif %}</td>
  <td>{% for vh in variable.variablehistory_set.all %}{{vh.operation}} in {{vh.file_name}}:{{vh.line_number}}<br/>{%endfor%}</td>
  <td>{{variable.variable_value}}</td>
{% endfor %}

{% endblock %}
34
bitbake/lib/toaster/bldviewer/templates/simple_layer.html
Normal file
@@ -0,0 +1,34 @@
{% extends "simple_basetable.html" %}

{% block pagename %}
<h1>Toaster - Layers</h1>
{% endblock %}

{% block pagetable %}
{% load simple_projecttags %}

<tr>
  <th>Name</th>
  <th>Local Path</th>
  <th>Layer Index URL</th>
  <th>Known Versions</th>
</tr>

{% for layer in objects %}

<tr class="data">
  <td>{{layer.name}}</td>
  <td>{{layer.local_path}}</td>
  <td><a href='{{layer.layer_index_url}}'>{{layer.layer_index_url}}</a></td>
  <td><table>
    {% for lv in layer.versions %}
    <tr><td>
      <a href="{% url "simple-layer_versions_recipes" lv.id %}">({{lv.priority}}){{lv.branch}}:{{lv.commit}} ({{lv.count}} recipes)</a>
    </td></tr>
    {% endfor %}
  </table></td>
</tr>

{% endfor %}

{% endblock %}
36
bitbake/lib/toaster/bldviewer/templates/simple_package.html
Normal file
@@ -0,0 +1,36 @@
{% extends "simple_basebuildpage.html" %}

{% block pagetable %}
{% if not objects %}
  <p>No packages were recorded for this target!</p>
{% else %}

  <tr>
    <th>Name</th>
    <th>Version</th>
    <th>Size (Bytes)</th>
    <th>Recipe</th>
    <th>Depends on</th>
  </tr>

  {% for package in objects %}

  <tr class="data">
    <td><a name="#{{package.name}}">{{package.name}}</a></td>
    <td>{{package.version}}</td>
    <td>{{package.size}}</td>
    <td>{%if package.recipe %}<a name="{{package.recipe.name}}.{{package.package_name}}">
      <a href="{% url "simple-layer_versions_recipes" package.recipe.layer_version_id %}#{{package.recipe.name}}">{{package.recipe.name}}</a>{{package.package_name}}</a>{%endif%}</td>
    <td>
      <div style="height: 4em; overflow:auto">
        {% for d in package.package_dependencies_source.all %}
          <a href="#{{d.name}}">{{d.depends_on.name}}</a><br/>
        {% endfor %}
      </div>
    </td>

  {% endfor %}

{% endif %}

{% endblock %}
50
bitbake/lib/toaster/bldviewer/templates/simple_recipe.html
Normal file
@@ -0,0 +1,50 @@
{% extends "simple_basetable.html" %}

{% block pagename %}
<ul class="nav nav-tabs" style="display: inline-block">
  <li><a>Layer {{layer_version.layer.name}} : {{layer_version.branch}} : {{layer_version.commit}} : {{layer_version.priority}}</a></li>
</ul>
<h1>Toaster - Recipes for a Layer</h1>
{% endblock %}

{% block pagetable %}
{% load simple_projecttags %}

<tr>
</tr>
<th>Name</th>
<th>Version</th>
<th>Summary</th>
<th>Description</th>
<th>Section</th>
<th>License</th>
<th>Homepage</th>
<th>Bugtracker</th>
<th>File_path</th>
<th style="width: 30em">Recipe Dependency</th>


{% for recipe in objects %}

<tr class="data">
  <td><a name="{{recipe.name}}">{{recipe.name}}</a></td>
  <td>{{recipe.version}}</td>
  <td>{{recipe.summary}}</td>
  <td>{{recipe.description}}</td>
  <td>{{recipe.section}}</td>
  <td>{{recipe.license}}</td>
  <td>{{recipe.homepage}}</td>
  <td>{{recipe.bugtracker}}</td>
  <td>{{recipe.file_path}}</td>
  <td>
    <div style="height: 5em; overflow:auto">
      {% for rr in recipe.r_dependencies_recipe.all %}
        <a href="#{{rr.depends_on.name}}">{{rr.depends_on.name}}</a><br/>
      {% endfor %}
    </div>
  </td>
</tr>

{% endfor %}

{% endblock %}
71
bitbake/lib/toaster/bldviewer/templates/simple_task.html
Normal file
@@ -0,0 +1,71 @@
{% extends "simple_basebuildpage.html" %}

{% block pagetitle %}Tasks{% endblock %}
{% block pagetable %}
{% if not objects %}
  <p>No tasks were executed in this build!</p>
{% else %}

  <tr>
    <th>Order</th>
    <th>Task</th>
    <th>Recipe Version</th>
    <th>Task Type</th>
    <th>Checksum</th>
    <th>Outcome</th>
    <th>Message</th>
    <th>Time</th>
    <th>CPU usage</th>
    <th>Disk I/O</th>
    <th>Script type</th>
    <th>Filesystem</th>
    <th>Depends</th>
  </tr>

  {% for task in objects %}

  <tr class="data">
    <td>{{task.order}}</td>
    <td><a name="{{task.recipe.name}}.{{task.task_name}}">
      <a href="{% url "simple-layer_versions_recipes" task.recipe.layer_version_id %}#{{task.recipe.name}}">{{task.recipe.name}}</a>.{{task.task_name}}</a></td>
    <td>{{task.recipe.version}}</td>

    {% if task.task_executed %}
      <td>Executed</td>
    {% else %}
      <td>Not Executed</td>
    {% endif %}

    <td>{{task.sstate_checksum}}</td>
    <td>{{task.get_outcome_display}}{% if task.provider %}<br/>(by <a href="#{{task.provider.recipe.name}}.{{task.provider.task_name}}">{{task.provider.recipe.name}}.{{task.provider.task_name}}</a>){% endif %}
      {% if task.outcome == task.OUTCOME_CACHED %}{% for t in task.get_related_setscene %}
        <br/>({{t.task_name}} {{t.get_outcome_display}})
      {% endfor %}{%endif%}
    </td>
    <td><p>{{task.message}}</td>
    <td>{{task.elapsed_time}}</td>
    <td>{{task.cpu_usage}}</td>
    <td>{{task.disk_io}}</td>
    <td>{{task.get_script_type_display}}</td>
    <td> <table>
      <tr><td> Recipe</td><td><a target="_fileview" href="file:///{{task.recipe.file_path}}">{{task.recipe.file_path}}</a></td></tr>
      <tr><td> Source</td><td><a target="_fileview" href="file:///{{task.file_name}}">{{task.file_name}}:{{task.line_number}}</a></td></tr>
      <tr><td> Workdir</td><td><a target="_fileview" href="file:///{{task.work_directory}}">{{task.work_directory}}</a></td></tr>
      <tr><td> Log</td><td><a target="_fileview" href="file:///{{task.logfile}}">{{task.logfile}}</a><br/></td></tr>
    </table>
    </td>
    <td>
      <div style="height: 3em; overflow:auto">
        {% for tt in task.task_dependencies_task.all %}
          <a href="#{{tt.depends_on.recipe.name}}.{{tt.depends_on.task_name}}">
          {{tt.depends_on.recipe.name}}.{{tt.depends_on.task_name}}</a><br/>
        {% endfor %}
      </div>
    </td>
  </tr>

  {% endfor %}

{% endif %}

{% endblock %}
@@ -0,0 +1,29 @@
#
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# BitBake Toaster Implementation
#
# Copyright (C) 2013 Intel Corporation
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

from datetime import datetime
from django import template

register = template.Library()

@register.simple_tag
def time_difference(start_time, end_time):
    return end_time - start_time
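The tag simply subtracts two datetimes, so templates such as simple_build.html above render a timedelta. A quick illustration in plain Python (dates are made up):

from datetime import datetime

print time_difference(datetime(2013, 9, 10, 12, 0, 0),
                      datetime(2013, 9, 10, 12, 2, 41))   # -> 0:02:41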
345
bitbake/lib/toaster/bldviewer/tests.py
Normal file
@@ -0,0 +1,345 @@
"""
This file demonstrates writing tests using the unittest module. These will pass
when you run "manage.py test".

Replace this with more appropriate tests for your application.
"""
from django.test import TestCase
from django.test.client import Client
from django.db.models import Count, Q
from orm.models import Target, Recipe, Recipe_Dependency, Layer_Version, Target_Installed_Package
from orm.models import Build, Task, Layer, Package, Package_File, LogMessage, Variable, VariableHistory
import json, os, re, urllib, shlex


class Tests(TestCase):
    # fixtures = ['orm_views_testdata.json']

    def setUp(self):
        raise Exception("The %s test data is no longer valid, tests disabled" % __name__)

    def test_builds(self):
        client = Client()
        resp = client.get('http://localhost:8000/api/1.0/builds')
        self.assertEqual(resp.status_code, 200)
        response = json.loads(resp.content)
        self.assertTrue(len(json.loads(response['list'])) > 0)
        for item in json.loads(response['list']):
            fields = item['fields']
            self.assertTrue(fields["machine"] == "qemux86")
            self.assertTrue(fields["distro"] == "poky")
            self.assertTrue(fields["image_fstypes"] == "tar.bz2 ext3")
            self.assertTrue(fields["bitbake_version"] == "1.21.1")
            self.assertTrue("1.5+snapshot-" in fields["distro_version"])
            self.assertEqual(fields["outcome"], 0)
            self.assertEqual(fields["errors_no"], 0)
            log_path = "/tmp/log/cooker/qemux86/"
            self.assertTrue(log_path in fields["cooker_log_path"])
            self.assertTrue(".log" in fields["cooker_log_path"])

    def test_targets(self):
        client = Client()
        resp = client.get('http://localhost:8000/api/1.0/targets')
        self.assertEqual(resp.status_code, 200)
        response = json.loads(resp.content)
        self.assertTrue(len(json.loads(response['list'])) > 0)
        for item in json.loads(response['list']):
            fields = item['fields']
            self.assertTrue(fields["is_image"] == True)
            self.assertTrue(fields["target"] == "core-image-minimal")

    def test_tasks(self):
        client = Client()
        resp = client.get('http://localhost:8000/api/1.0/tasks')
        self.assertEqual(resp.status_code, 200)
        response = json.loads(resp.content)
        self.assertTrue(len(json.loads(response['list'])) > 0)
        recipe_id = self.get_recipes_id("pseudo-native")
        print recipe_id
        for item in json.loads(response['list']):
            fields = item['fields']
            if fields["build"] == 1 and fields["task_name"] == "do_populate_lic_setscene" and fields["recipe"] == recipe_id and fields["task_executed"] == True:
                self.assertTrue(fields["message"] == "recipe pseudo-native-1.5.1-r4: task do_populate_lic_setscene: Succeeded")
                self.assertTrue(fields["cpu_usage"] == "6.3")
                self.assertTrue(fields["disk_io"] == 124)
                self.assertTrue(fields["script_type"] == 2)
                self.assertTrue(fields["path_to_sstate_obj"] == "")
                self.assertTrue(fields["elapsed_time"] == "0.103494")
                self.assertTrue("tmp/work/i686-linux/pseudo-native/1.5.1-r4/temp/log.do_populate_lic_setscene.5867" in fields["logfile"])
                self.assertTrue(fields["sstate_result"] == 0)
                self.assertTrue(fields["outcome"] == 0)
            if fields["build"] == 1 and fields["task_name"] == "do_populate_lic" and fields["recipe"] == recipe_id and fields["task_executed"] == True:
                self.assertTrue(fields["cpu_usage"] == None)
                self.assertTrue(fields["disk_io"] == None)
                self.assertTrue(fields["script_type"] == 2)
                self.assertTrue(fields["path_to_sstate_obj"] == "")
                self.assertTrue(fields["elapsed_time"] == "0")
                self.assertTrue(fields["logfile"], None)
                self.assertTrue(fields["sstate_result"] == 3)
                self.assertTrue(fields["outcome"] == 2)

    def test_layers(self):
        client = Client()
        resp = client.get('http://localhost:8000/api/1.0/layers')
        self.assertEqual(resp.status_code, 200)
        response = json.loads(resp.content)
        self.assertTrue(len(json.loads(response['list'])) > 0)
        for item in json.loads(response['list']):
            fields = item['fields']
            if fields["name"] == "meta-yocto-bsp":
                self.assertTrue(fields["local_path"].endswith("meta-yocto-bsp"))
                self.assertTrue(fields["layer_index_url"] == "http://layers.openembedded.org/layerindex/layer/meta-yocto-bsp/")
            elif fields["name"] == "meta":
                self.assertTrue(fields["local_path"].endswith("/meta"))
                self.assertTrue(fields["layer_index_url"] == "http://layers.openembedded.org/layerindex/layer/openembedded-core/")
            elif fields["name"] == "meta-yocto":
                self.assertTrue(fields["local_path"].endswith("/meta-yocto"))
                self.assertTrue(fields["layer_index_url"] == "http://layers.openembedded.org/layerindex/layer/meta-yocto/")

    def test_layerversions(self):
        client = Client()
        resp = client.get('http://localhost:8000/api/1.0/layerversions')
        self.assertEqual(resp.status_code, 200)
        response = json.loads(resp.content)
        self.assertTrue(len(json.loads(response['list'])) > 0)
        layer_id = self.get_layer_id("meta")
        find = False
        for item in json.loads(response['list']):
            fields = item['fields']
            if fields["layer"] == layer_id:
                find = True
                self.assertTrue(fields["build"] == 1)
                self.assertTrue(fields["priority"] == 5)
                self.assertTrue(fields["branch"] == "master")
        self.assertTrue(find == True)

    def test_recipes(self):
        client = Client()
        resp = client.get('http://localhost:8000/api/1.0/recipes')
        self.assertEqual(resp.status_code, 200)
        response = json.loads(resp.content)
        self.assertTrue(len(json.loads(response['list'])) > 0)
        find = False
        for item in json.loads(response['list']):
            fields = item['fields']
            if fields["name"] == "busybox":
                find = True
                self.assertTrue(fields["version"] == "1.21.1-r0")
                self.assertTrue(fields["license"] == "GPLv2 & bzip2")
                self.assertTrue(fields["file_path"].endswith("/meta/recipes-core/busybox/busybox_1.21.1.bb"))
                self.assertTrue(fields["summary"] == "Tiny versions of many common UNIX utilities in a single small executable.")
                self.assertTrue(fields["description"] == "BusyBox combines tiny versions of many common UNIX utilities into a single small executable. It provides minimalist replacements for most of the utilities you usually find in GNU fileutils, shellutils, etc. The utilities in BusyBox generally have fewer options than their full-featured GNU cousins; however, the options that are included provide the expected functionality and behave very much like their GNU counterparts. BusyBox provides a fairly complete POSIX environment for any small or embedded system.")
                self.assertTrue(fields["bugtracker"] == "https://bugs.busybox.net/")
                self.assertTrue(fields["homepage"] == "http://www.busybox.net")
                self.assertTrue(fields["section"] == "base")
        self.assertTrue(find == True)

    def test_task_dependencies(self):
        client = Client()
        resp = client.get('http://localhost:8000/api/1.0/task_dependencies')
        self.assertEqual(resp.status_code, 200)
        response = json.loads(resp.content)
        self.assertTrue(len(json.loads(response['list'])) > 0)
        ids = self.get_task_id()
        do_install = ids["do_install"]
        do_compile = ids["do_compile"]
        entry = False
        for item in json.loads(response['list']):
            fields = item['fields']
            if fields["task"] == do_install and fields["depends_on"] == do_compile:
                entry = True
        self.assertTrue(entry == True)

    def test_target_installed_package(self):
        client = Client()
        resp = client.get('http://localhost:8000/api/1.0/target_installed_packages')
        self.assertEqual(resp.status_code, 200)
        response = json.loads(resp.content)
        self.assertTrue(len(json.loads(response['list'])) > 0)
        package = self.get_package_id("udev-utils")
        find = False
        for item in json.loads(response['list']):
            fields = item['fields']
            if fields["package"] == package:
                self.assertTrue(fields["target"], 1)
                find = True
        self.assertTrue(find, True)

    def test_packages(self):
        client = Client()
        resp = client.get('http://localhost:8000/api/1.0/packages')
        self.assertEqual(resp.status_code, 200)
        response = json.loads(resp.content)
        self.assertTrue(response['count'] > 0)
        for item in json.loads(response['list']):
            fields = item['fields']
            if fields["name"] == "base-files-dev":
                self.assertTrue(fields["license"] == "GPLv2")
                self.assertTrue(fields["description"] == "The base-files package creates the basic system directory structure and provides a small set of key configuration files for the system. This package contains symbolic links, header files, and related items necessary for software development.")
                self.assertTrue(fields["summary"] == "Miscellaneous files for the base system. - Development files")
                self.assertTrue(fields["version"] == "3.0.14")
                self.assertTrue(fields["build"] == 1)
                self.assertTrue(fields["section"] == "devel")
                self.assertTrue(fields["revision"] == "r73")
                self.assertTrue(fields["size"] == 0)
                self.assertTrue(fields["installed_size"] == 0)
                self.assertTrue(self.get_recipe_name(fields["recipe"]) == "base-files")

    def test_package_dependencies(self):
        client = Client()
        resp = client.get('http://localhost:8000/api/1.0/package_dependencies')
        self.assertEqual(resp.status_code, 200)
        response = json.loads(resp.content)
        self.assertTrue(len(json.loads(response['list'])) > 0)
        build_package = self.get_package_id("busybox")
        build_package_id = self.get_package_id("busybox-syslog")
        entry = False
        for item in json.loads(response['list']):
            fields = item['fields']
            self.assertTrue(fields["target"] == 1)
            if fields["package"] == build_package and fields["dep_type"] == 7 and fields["depends_on"] == build_package_id:
                entry = True
        self.assertTrue(entry == True)

    def test_recipe_dependencies(self):
        client = Client()
        resp = client.get('http://localhost:8000/api/1.0/recipe_dependencies')
        self.assertEqual(resp.status_code, 200)
        response = json.loads(resp.content)
        self.assertTrue(len(json.loads(response['list'])) > 0)
        depends_on = self.get_recipes_id("autoconf-native")
        recipe = self.get_recipes_id("ncurses")
        entry = False
        for item in json.loads(response['list']):
            fields = item['fields']
            if fields["recipe"] == recipe and fields["depends_on"] == depends_on and fields["dep_type"] == 0:
                entry = True
        self.assertTrue(entry == True)

    def test_package_files(self):
        client = Client()
        resp = client.get('http://localhost:8000/api/1.0/package_files')
        self.assertEqual(resp.status_code, 200)
        response = json.loads(resp.content)
        self.assertTrue(len(json.loads(response['list'])) > 0)
        build_package = self.get_package_id("base-files")
        entry = False
        for item in json.loads(response['list']):
            fields = item['fields']
            if fields["path"] == "/etc/motd" and fields["package"] == build_package and fields["size"] == 0:
                entry = True
        self.assertTrue(entry == True)

    def test_Variable(self):
        client = Client()
        resp = client.get('http://localhost:8000/api/1.0/variables')
        self.assertEqual(resp.status_code, 200)
        response = json.loads(resp.content)
        self.assertTrue(len(json.loads(response['list'])) > 0)
        for item in json.loads(response['list']):
            fields = item['fields']
            self.assertTrue(fields["build"] == 1)
            if fields["variable_name"] == "USRBINPATH":
                self.assertTrue(fields["variable_value"] == "/usr/bin")
                self.assertTrue(fields["changed"] == False)
                self.assertTrue(fields["description"] == "")
            if fields["variable_name"] == "PREFERRED_PROVIDER_virtual/libx11":
                self.assertTrue(fields["variable_value"] == "libx11")
                self.assertTrue(fields["changed"] == False)
                self.assertTrue(fields["description"] == "If multiple recipes provide an item, this variable determines which recipe should be given preference.")
            if fields["variable_name"] == "base_libdir_nativesdk":
                self.assertTrue(fields["variable_value"] == "/lib")

    def test_VariableHistory(self):
        client = Client()
        resp = client.get('http://localhost:8000/api/1.0/variableshistory')
        self.assertEqual(resp.status_code, 200)
        response = json.loads(resp.content)
        self.assertTrue(len(json.loads(response['list'])) > 0)
        variable_id = self.get_variable_id("STAGING_INCDIR_NATIVE")
        find = False
        for item in json.loads(response['list']):
            fields = item['fields']
            if fields["variable"] == variable_id:
                find = True
                self.assertTrue(fields["file_name"] == "conf/bitbake.conf")
                self.assertTrue(fields["operation"] == "set")
                self.assertTrue(fields["line_number"] == 358)
        self.assertTrue(find == True)

    def get_task_id(self):
        client = Client()
        resp = client.get('http://localhost:8000/api/1.0/tasks')
        self.assertEqual(resp.status_code, 200)
        response = json.loads(resp.content)
        self.assertTrue(len(json.loads(response['list'])) > 0)
        for item in json.loads(response['list']):
            fields = item['fields']
            if fields["recipe"] == 7 and fields["task_name"] == "do_install":
                do_install = item["pk"]
            if fields["recipe"] == 7 and fields["task_name"] == "do_compile":
                do_compile = item["pk"]
        result = {}
        result["do_install"] = do_install
        result["do_compile"] = do_compile
        return result

    def get_recipes_id(self, value):
        client = Client()
        resp = client.get('http://localhost:8000/api/1.0/recipes')
        self.assertEqual(resp.status_code, 200)
        response = json.loads(resp.content)
        self.assertTrue(len(json.loads(response['list'])) > 0)
        for item in json.loads(response['list']):
            fields = item['fields']
            if fields["name"] == value:
                return item["pk"]
        return None

    def get_recipe_name(self, value):
        client = Client()
        resp = client.get('http://localhost:8000/api/1.0/recipes')
        self.assertEqual(resp.status_code, 200)
        response = json.loads(resp.content)
        self.assertTrue(len(json.loads(response['list'])) > 0)
        for item in json.loads(response['list']):
            fields = item['fields']
            if item["pk"] == value:
                return fields["name"]
        return None

    def get_layer_id(self, value):
        client = Client()
        resp = client.get('http://localhost:8000/api/1.0/layers')
        self.assertEqual(resp.status_code, 200)
        response = json.loads(resp.content)
        self.assertTrue(len(json.loads(response['list'])) > 0)
        for item in json.loads(response['list']):
            fields = item['fields']
            if fields["name"] == value:
                return item["pk"]
        return None

    def get_package_id(self, field):
        client = Client()
        resp = client.get('http://localhost:8000/api/1.0/packages')
        self.assertEqual(resp.status_code, 200)
        response = json.loads(resp.content)
        self.assertTrue(response['count'] > 0)
        for item in json.loads(response['list']):
            fields = item['fields']
            if fields["name"] == field:
                return item["pk"]
        return None

    def get_variable_id(self, field):
        client = Client()
        resp = client.get('http://localhost:8000/api/1.0/variables')
        self.assertEqual(resp.status_code, 200)
        response = json.loads(resp.content)
        self.assertTrue(len(json.loads(response['list'])) > 0)
        for item in json.loads(response['list']):
            fields = item['fields']
            if fields["variable_name"] == field:
                return item["pk"]
        return None
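The six lookup helpers above all repeat one pattern: GET an endpoint, decode the double-encoded 'list' payload, then scan for a field value. A hedged consolidation sketch, not part of the original tests:

def _lookup_pk(self, endpoint, key, value):
    resp = Client().get('http://localhost:8000/api/1.0/%s' % endpoint)
    self.assertEqual(resp.status_code, 200)
    # 'list' is itself a JSON string inside the JSON response, hence the double loads
    for item in json.loads(json.loads(resp.content)['list']):
        if item['fields'].get(key) == value:
            return item['pk']
    return None

# e.g. self._lookup_pk('recipes', 'name', 'busybox') would replace get_recipes_id('busybox')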
35
bitbake/lib/toaster/bldviewer/urls.py
Normal file
@@ -0,0 +1,35 @@
#
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# BitBake Toaster Implementation
#
# Copyright (C) 2013 Intel Corporation
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

from django.conf.urls import patterns, include, url
from django.views.generic import RedirectView

urlpatterns = patterns('bldviewer.views',
    url(r'^builds/$', 'build', name='simple-all-builds'),
    url(r'^build/(?P<build_id>\d+)/task/$', 'task', name='simple-task'),
    url(r'^build/(?P<build_id>\d+)/packages/$', 'bpackage', name='simple-bpackage'),
    url(r'^build/(?P<build_id>\d+)/package/(?P<package_id>\d+)/files/$', 'bfile', name='simple-bfile'),
    url(r'^build/(?P<build_id>\d+)/target/(?P<target_id>\d+)/packages/$', 'tpackage', name='simple-tpackage'),
    url(r'^build/(?P<build_id>\d+)/configuration/$', 'configuration', name='simple-configuration'),
    url(r'^layers/$', 'layer', name='simple-all-layers'),
    url(r'^layerversions/(?P<layerversion_id>\d+)/recipes/.*$', 'layer_versions_recipes', name='simple-layer_versions_recipes'),
    url(r'^$', RedirectView.as_view( url= 'builds/')),
)
287  bitbake/lib/toaster/bldviewer/views.py  Normal file
@@ -0,0 +1,287 @@
#
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# BitBake Toaster Implementation
#
# Copyright (C) 2013 Intel Corporation
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

import operator

from django.db.models import Q
from django.shortcuts import render
from orm.models import Build, Target, Task, Layer, Layer_Version, Recipe, LogMessage, Variable, Target_Installed_Package
from orm.models import Task_Dependency, Recipe_Dependency, Package, Package_File, Package_Dependency
from orm.models import VariableHistory, Target_Image_File, Target_File
from django.views.decorators.cache import cache_control
from django.core.paginator import Paginator, EmptyPage, PageNotAnInteger
def _build_page_range(paginator, index=1):
    try:
        page = paginator.page(index)
    except PageNotAnInteger:
        page = paginator.page(1)
    except EmptyPage:
        page = paginator.page(paginator.num_pages)

    page.page_range = [page.number]
    crt_range = 0
    for i in range(1, 5):
        if (page.number + i) <= paginator.num_pages:
            page.page_range = page.page_range + [page.number + i]
            crt_range += 1
        if (page.number - i) > 0:
            page.page_range = [page.number - i] + page.page_range
            crt_range += 1
        if crt_range == 4:
            break
    return page
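# Illustration of the window built above (hypothetical sizes): with 100
# pages and index=50, page.page_range becomes [48, 49, 50, 51, 52]; near
# the edges the window slides, e.g. index=1 yields [1, 2, 3, 4, 5].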
@cache_control(no_store=True)
def build(request):
    template = 'simple_build.html'
    logs = LogMessage.objects.all()

    build_info = _build_page_range(Paginator(Build.objects.order_by("-id"), 10), request.GET.get('page', 1))

    context = {'objects': build_info, 'logs': logs,
               'hideshowcols': [
                   {'name': 'Output', 'order': 10},
                   {'name': 'Log', 'order': 11},
               ]}

    return render(request, template, context)


def _find_task_revdep(task):
    # Tasks that depend on this task, i.e. its reverse dependencies.
    tp = []
    for p in Task_Dependency.objects.filter(depends_on=task):
        tp.append(p.task)
    return tp


def _find_task_provider(task):
    # Walk the reverse dependencies until a task that actually ran (was
    # not "covered") is found; that task is reported as the provider.
    task_revdeps = _find_task_revdep(task)
    for tr in task_revdeps:
        if tr.outcome != Task.OUTCOME_COVERED:
            return tr
    for tr in task_revdeps:
        trc = _find_task_provider(tr)
        if trc is not None:
            return trc
    return None
def task(request, build_id):
    template = 'simple_task.html'

    tasks = _build_page_range(Paginator(Task.objects.filter(build=build_id, order__gt=0), 100), request.GET.get('page', 1))

    for t in tasks:
        if t.outcome == Task.OUTCOME_COVERED:
            t.provider = _find_task_provider(t)

    context = {'build': Build.objects.filter(pk=build_id)[0], 'objects': tasks}

    return render(request, template, context)


def configuration(request, build_id):
    template = 'simple_configuration.html'
    variables = _build_page_range(Paginator(Variable.objects.filter(build=build_id), 50), request.GET.get('page', 1))
    context = {'build': Build.objects.filter(pk=build_id)[0], 'objects': variables}
    return render(request, template, context)


def bpackage(request, build_id):
    template = 'simple_bpackage.html'
    packages = Package.objects.filter(build=build_id)
    context = {'build': Build.objects.filter(pk=build_id)[0], 'objects': packages}
    return render(request, template, context)


def bfile(request, build_id, package_id):
    template = 'simple_bfile.html'
    files = Package_File.objects.filter(package=package_id)
    context = {'build': Build.objects.filter(pk=build_id)[0], 'objects': files}
    return render(request, template, context)


def tpackage(request, build_id, target_id):
    template = 'simple_package.html'
    packages = map(lambda x: x.package, list(Target_Installed_Package.objects.filter(target=target_id)))
    context = {'build': Build.objects.filter(pk=build_id)[0], 'objects': packages}
    return render(request, template, context)


def layer(request):
    template = 'simple_layer.html'
    layer_info = Layer.objects.all()

    for li in layer_info:
        li.versions = Layer_Version.objects.filter(layer=li)
        for liv in li.versions:
            liv.count = Recipe.objects.filter(layer_version__id=liv.id).count()

    context = {'objects': layer_info}

    return render(request, template, context)


def layer_versions_recipes(request, layerversion_id):
    template = 'simple_recipe.html'
    recipes = Recipe.objects.filter(layer_version__id=layerversion_id)

    context = {'objects': recipes,
               'layer_version': Layer_Version.objects.filter(id=layerversion_id)[0]
               }

    return render(request, template, context)
#### API

import json
from django.core import serializers
from django.http import HttpResponse, HttpResponseBadRequest


def model_explorer(request, model_name):

    DESCENDING = 'desc'
    response_data = {}
    model_mapping = {
        'build': Build,
        'target': Target,
        'target_file': Target_File,
        'target_image_file': Target_Image_File,
        'task': Task,
        'task_dependency': Task_Dependency,
        'package': Package,
        'layer': Layer,
        'layerversion': Layer_Version,
        'recipe': Recipe,
        'recipe_dependency': Recipe_Dependency,
        'package_dependency': Package_Dependency,
        'target_installed_package': Target_Installed_Package,
        'build_file': Package_File,
        'variable': Variable,
        'variablehistory': VariableHistory,
        'logmessage': LogMessage,
    }

    if model_name not in model_mapping:
        return HttpResponseBadRequest()

    model = model_mapping[model_name]

    try:
        limit = int(request.GET.get('limit', 0))
    except ValueError:
        limit = 0

    try:
        offset = int(request.GET.get('offset', 0))
    except ValueError:
        offset = 0

    ordering_string, invalid = _validate_input(request.GET.get('orderby', ''), model)
    if invalid:
        return HttpResponseBadRequest()

    filter_string, invalid = _validate_input(request.GET.get('filter', ''), model)
    if invalid:
        return HttpResponseBadRequest()

    search_term = request.GET.get('search', '')

    if filter_string:
        filter_terms = _get_filtering_terms(filter_string)
        try:
            queryset = model.objects.filter(**filter_terms)
        except ValueError:
            queryset = []
    else:
        queryset = model.objects.all()

    if search_term:
        queryset = _get_search_results(search_term, queryset, model)

    if ordering_string and queryset:
        column, order = ordering_string.split(':')
        if order.lower() == DESCENDING:
            queryset = queryset.order_by('-' + column)
        else:
            queryset = queryset.order_by(column)

    if offset and limit:
        queryset = queryset[offset:(offset + limit)]
    elif offset:
        queryset = queryset[offset:]
    elif limit:
        queryset = queryset[:limit]

    if queryset:
        response_data['count'] = queryset.count()
    else:
        response_data['count'] = 0

    response_data['list'] = serializers.serialize('json', queryset)

    return HttpResponse(json.dumps(response_data),
                        content_type='application/json')
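# A hypothetical request against this endpoint (field names and values are
# illustrative, and the URL prefix depends on how the view is wired up):
#
#   GET /api/1.0/build?filter=machine:qemux86&orderby=completed_on:desc&limit=10
#
# answers with {"count": <number of rows>, "list": <serialized objects>}.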
def _get_filtering_terms(filter_string):

    search_terms = filter_string.split(":")
    keys = search_terms[0].split(',')
    values = search_terms[1].split(',')

    return dict(zip(keys, values))
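# For example, a filter string of "name,version:bash,4.2" (illustrative
# values) becomes {'name': 'bash', 'version': '4.2'}, which is passed
# straight to model.objects.filter(**filter_terms) above.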
def _validate_input(input, model):

    invalid = 0

    if input:
        input_list = input.split(":")

        # Check we have only one colon
        if len(input_list) != 2:
            invalid = 1
            return None, invalid

        # Check we have an equal number of terms both sides of the colon
        if len(input_list[0].split(',')) != len(input_list[1].split(',')):
            invalid = 1
            return None, invalid

        # Check we are looking for a valid field
        valid_fields = model._meta.get_all_field_names()
        for field in input_list[0].split(','):
            if field not in valid_fields:
                invalid = 1
                return None, invalid

    return input, invalid
def _get_search_results(search_term, queryset, model):
    # Every space-separated term must match (AND) at least one of the
    # model's search_allowed_fields (OR), case-insensitively.
    search_objects = []
    for st in search_term.split(" "):
        q_map = map(lambda x: Q(**{x + '__icontains': st}),
                    model.search_allowed_fields)

        search_objects.append(reduce(operator.or_, q_map))
    search_object = reduce(operator.and_, search_objects)
    queryset = queryset.filter(search_object)

    return queryset
@@ -1,6 +0,0 @@
contrib directory for toaster

This directory holds code that works with Toaster, without being an integral part of the Toaster project.
It is intended for testing code, testing fixtures, tools for Toaster, etc.

NOTE: This directory is NOT a Python module.
@@ -1,10 +0,0 @@
*.pyc
*.swp
*.swo
*.kpf
*.egg-info/
.idea
.tox
tmp/
dist/
.DS_Store
@@ -1,50 +0,0 @@
language: python
python:
  - "2.7"
  - "3.4"
services:
  - mysql
  - postgresql
env:
  - DJANGO=1.4 DB=sqlite
  - DJANGO=1.4 DB=mysql
  - DJANGO=1.4 DB=postgres
  - DJANGO=1.5 DB=sqlite
  - DJANGO=1.5 DB=mysql
  - DJANGO=1.5 DB=postgres
  - DJANGO=1.6 DB=sqlite
  - DJANGO=1.6 DB=mysql
  - DJANGO=1.6 DB=postgres
  - DJANGO=1.7 DB=sqlite
  - DJANGO=1.7 DB=mysql
  - DJANGO=1.7 DB=postgres

matrix:
  exclude:
    - python: "3.4"
      env: DJANGO=1.4 DB=sqlite
    - python: "3.4"
      env: DJANGO=1.4 DB=mysql
    - python: "3.4"
      env: DJANGO=1.4 DB=postgres
    - python: "3.4"
      env: DJANGO=1.5 DB=sqlite
    - python: "3.4"
      env: DJANGO=1.5 DB=mysql
    - python: "3.4"
      env: DJANGO=1.5 DB=postgres
    - python: "3.4"
      env: DJANGO=1.6 DB=mysql
    - python: "3.4"
      env: DJANGO=1.7 DB=mysql

before_script:
  - mysql -e 'create database aggregation;'
  - psql -c 'create database aggregation;' -U postgres
install:
  - pip install six
  - if [ "$DB" == "mysql" ]; then pip install mysql-python; fi
  - if [ "$DB" == "postgres" ]; then pip install psycopg2; fi
  - pip install -q Django==$DJANGO --use-mirrors
script:
  - ./runtests.py --settings=tests.test_$DB
@@ -1,21 +0,0 @@
The MIT License

Copyright (c) 2012 Henrique Bastos

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
@@ -1,156 +0,0 @@
Django Aggregate If: Condition aggregates for Django
====================================================

.. image:: https://travis-ci.org/henriquebastos/django-aggregate-if.png?branch=master
    :target: https://travis-ci.org/henriquebastos/django-aggregate-if
    :alt: Test Status

.. image:: https://landscape.io/github/henriquebastos/django-aggregate-if/master/landscape.png
    :target: https://landscape.io/github/henriquebastos/django-aggregate-if/master
    :alt: Code Health

.. image:: https://pypip.in/v/django-aggregate-if/badge.png
    :target: https://crate.io/packages/django-aggregate-if/
    :alt: Latest PyPI version

.. image:: https://pypip.in/d/django-aggregate-if/badge.png
    :target: https://crate.io/packages/django-aggregate-if/
    :alt: Number of PyPI downloads

*Aggregate-if* adds conditional aggregates to Django.

Conditional aggregates can help you reduce the amount of queries needed to obtain
aggregated information, such as statistics.

Imagine you have a model ``Offer`` like this one:

.. code-block:: python

    class Offer(models.Model):
        sponsor = models.ForeignKey(User)
        price = models.DecimalField(max_digits=9, decimal_places=2)
        status = models.CharField(max_length=30)
        expire_at = models.DateField(null=True, blank=True)
        created_at = models.DateTimeField(auto_now_add=True)
        updated_at = models.DateTimeField(auto_now=True)

        OPEN = "OPEN"
        REVOKED = "REVOKED"
        PAID = "PAID"

Let's say you want to know:

#. How many offers exist in total;
#. How many of them are OPEN, REVOKED or PAID;
#. How much money was offered in total;
#. How much money is in OPEN, REVOKED and PAID offers.

To get this information, you could query:

.. code-block:: python

    from django.db.models import Count, Sum

    Offer.objects.count()
    Offer.objects.filter(status=Offer.OPEN).aggregate(Count('pk'))
    Offer.objects.filter(status=Offer.REVOKED).aggregate(Count('pk'))
    Offer.objects.filter(status=Offer.PAID).aggregate(Count('pk'))
    Offer.objects.aggregate(Sum('price'))
    Offer.objects.filter(status=Offer.OPEN).aggregate(Sum('price'))
    Offer.objects.filter(status=Offer.REVOKED).aggregate(Sum('price'))
    Offer.objects.filter(status=Offer.PAID).aggregate(Sum('price'))

In this case, **8 queries** were needed to retrieve the desired information.

With conditional aggregates you can get it all with only **1 query**:

.. code-block:: python

    from django.db.models import Q
    from aggregate_if import Count, Sum

    Offer.objects.aggregate(
        pk__count=Count('pk'),
        pk__open__count=Count('pk', only=Q(status=Offer.OPEN)),
        pk__revoked__count=Count('pk', only=Q(status=Offer.REVOKED)),
        pk__paid__count=Count('pk', only=Q(status=Offer.PAID)),
        pk__sum=Sum('price'),
        pk__open__sum=Sum('price', only=Q(status=Offer.OPEN)),
        pk__revoked__sum=Sum('price', only=Q(status=Offer.REVOKED)),
        pk__paid__sum=Sum('price', only=Q(status=Offer.PAID))
    )
Installation
------------

*Aggregate-if* works with Django 1.4, 1.5, 1.6 and 1.7.

To install it, simply:

.. code-block:: bash

    $ pip install django-aggregate-if

Inspiration
-----------

There is a five-year-old `ticket 11305`_ that will (*hopefully*) implement this feature in
Django 1.8.

Using Django 1.6, I still wanted to avoid creating custom queries for very simple
conditional aggregations. So I cherry-picked those ideas and others from around the
internet and built this library.

This library uses the same API and tests proposed in `ticket 11305`_, so when the
new feature is available you can easily replace ``django-aggregate-if``.

Limitations
-----------

Conditions involving joins with aliases are not supported yet. If you want to
help add this feature, you're welcome to check the `first issue`_.

Contributors
------------

* `Henrique Bastos <http://github.com/henriquebastos>`_
* `Iuri de Silvio <https://github.com/iurisilvio>`_
* `Hampus Stjernhav <https://github.com/champ>`_
* `Bradley Martsberger <https://github.com/martsberger>`_
* `Markus Bertheau <https://github.com/mbertheau>`_
* `end0 <https://github.com/end0>`_
* `Scott Sexton <https://github.com/scottsexton>`_
* `Mauler <https://github.com/mauler>`_
* `trbs <https://github.com/trbs>`_

Changelog
---------

0.5
- Support for Django 1.7.

0.4
- Use tox to run tests.
- Add support for Django 1.6.
- Add support for Python 3.
- The ``only`` parameter now freely supports joins independent of the main query.
- Add support for alias relabeling, permitting excludes and updates with aggregates filtered on remote foreign key relations.

0.3.1
- Fix quotation escaping.
- Fix boolean casts on Postgres.

0.2
- Fix Postgres issue with LIKE conditions.

0.1
- Initial release.


License
=======

The MIT License.

.. _ticket 11305: https://code.djangoproject.com/ticket/11305
.. _first issue: https://github.com/henriquebastos/django-aggregate-if/issues/1
@@ -1,164 +0,0 @@
# coding: utf-8
'''
Implements conditional aggregates.

This code was based on the work of others found on the internet:

1. http://web.archive.org/web/20101115170804/http://www.voteruniverse.com/Members/jlantz/blog/conditional-aggregates-in-django
2. https://code.djangoproject.com/ticket/11305
3. https://groups.google.com/forum/?fromgroups=#!topic/django-users/cjzloTUwmS0
4. https://groups.google.com/forum/?fromgroups=#!topic/django-users/vVprMpsAnPo
'''
from __future__ import unicode_literals
from django.utils import six
import django
from django.db.models.aggregates import Aggregate as DjangoAggregate
from django.db.models.sql.aggregates import Aggregate as DjangoSqlAggregate


VERSION = django.VERSION[:2]


class SqlAggregate(DjangoSqlAggregate):
    conditional_template = '%(function)s(CASE WHEN %(condition)s THEN %(field)s ELSE null END)'

    def __init__(self, col, source=None, is_summary=False, condition=None, **extra):
        super(SqlAggregate, self).__init__(col, source, is_summary, **extra)
        self.condition = condition

    def relabel_aliases(self, change_map):
        if VERSION < (1, 7):
            super(SqlAggregate, self).relabel_aliases(change_map)
        if self.has_condition:
            condition_change_map = dict(
                (k, v) for k, v in change_map.items() if k in self.condition.query.alias_map
            )
            self.condition.query.change_aliases(condition_change_map)

    def relabeled_clone(self, change_map):
        self.relabel_aliases(change_map)
        return super(SqlAggregate, self).relabeled_clone(change_map)
    def as_sql(self, qn, connection):
        if self.has_condition:
            self.sql_template = self.conditional_template
            self.extra['condition'] = self._condition_as_sql(qn, connection)

        return super(SqlAggregate, self).as_sql(qn, connection)

    @property
    def has_condition(self):
        # Warning: bool(QuerySet) will hit the database
        return self.condition is not None

    def _condition_as_sql(self, qn, connection):
        '''
        Return sql for condition.
        '''
        def escape(value):
            if isinstance(value, bool):
                value = str(int(value))
            if isinstance(value, six.string_types):
                # Escape params used with LIKE
                if '%' in value:
                    value = value.replace('%', '%%')
                # Escape single quotes
                if "'" in value:
                    value = value.replace("'", "''")
                # Add single quote to text values
                value = "'" + value + "'"
            return value

        sql, param = self.condition.query.where.as_sql(qn, connection)
        param = map(escape, param)

        return sql % tuple(param)
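# A rough sketch of what the conditional template renders to, for
# Sum('price', only=Q(status='OPEN')) on the Offer model from the README
# (illustrative SQL; exact identifier quoting depends on the backend):
#
#   SUM(CASE WHEN "offer"."status" = 'OPEN' THEN "offer"."price" ELSE null END)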
class SqlSum(SqlAggregate):
    sql_function = 'SUM'


class SqlCount(SqlAggregate):
    is_ordinal = True
    sql_function = 'COUNT'
    sql_template = '%(function)s(%(distinct)s%(field)s)'
    conditional_template = '%(function)s(%(distinct)sCASE WHEN %(condition)s THEN %(field)s ELSE null END)'

    def __init__(self, col, distinct=False, **extra):
        super(SqlCount, self).__init__(col, distinct=distinct and 'DISTINCT ' or '', **extra)


class SqlAvg(SqlAggregate):
    is_computed = True
    sql_function = 'AVG'


class SqlMax(SqlAggregate):
    sql_function = 'MAX'


class SqlMin(SqlAggregate):
    sql_function = 'MIN'


class Aggregate(DjangoAggregate):
    def __init__(self, lookup, only=None, **extra):
        super(Aggregate, self).__init__(lookup, **extra)
        self.only = only
        self.condition = None

    def _get_fields_from_Q(self, q):
        fields = []
        for child in q.children:
            if hasattr(child, 'children'):
                fields.extend(self._get_fields_from_Q(child))
            else:
                fields.append(child)
        return fields
    def add_to_query(self, query, alias, col, source, is_summary):
        if self.only:
            self.condition = query.model._default_manager.filter(self.only)
            for child in self._get_fields_from_Q(self.only):
                field_list = child[0].split('__')
                # Pop off the last field if it's a query term ('gte', 'contains', 'isnull', etc.)
                if field_list[-1] in query.query_terms:
                    field_list.pop()
                # setup_joins has different returns in Django 1.5 and 1.6, but the order of what we need remains.
                result = query.setup_joins(field_list, query.model._meta, query.get_initial_alias(), None)
                join_list = result[3]

                fname = 'promote_alias_chain' if VERSION < (1, 5) else 'promote_joins'
                args = (join_list, True) if VERSION < (1, 7) else (join_list,)

                promote = getattr(query, fname)
                promote(*args)

        aggregate = self.sql_klass(col, source=source, is_summary=is_summary, condition=self.condition, **self.extra)
        query.aggregates[alias] = aggregate


class Sum(Aggregate):
    name = 'Sum'
    sql_klass = SqlSum


class Count(Aggregate):
    name = 'Count'
    sql_klass = SqlCount


class Avg(Aggregate):
    name = 'Avg'
    sql_klass = SqlAvg


class Max(Aggregate):
    name = 'Max'
    sql_klass = SqlMax


class Min(Aggregate):
    name = 'Min'
    sql_klass = SqlMin
@@ -1,48 +0,0 @@
#!/usr/bin/env python

import os
import sys
from optparse import OptionParser


def parse_args():
    parser = OptionParser()
    parser.add_option('-s', '--settings', help='Define settings.')
    parser.add_option('-t', '--unittest', help='Define which test to run. Default all.')
    options, args = parser.parse_args()

    if not options.settings:
        parser.print_help()
        sys.exit(1)

    if not options.unittest:
        options.unittest = ['aggregation']

    return options


def get_runner(settings_module):
    '''
    Asks Django for the TestRunner defined in settings or the default one.
    '''
    os.environ['DJANGO_SETTINGS_MODULE'] = settings_module

    import django
    from django.test.utils import get_runner
    from django.conf import settings

    if hasattr(django, 'setup'):
        django.setup()

    return get_runner(settings)


def runtests():
    options = parse_args()
    TestRunner = get_runner(options.settings)
    runner = TestRunner(verbosity=1, interactive=True, failfast=False)
    sys.exit(runner.run_tests([]))


if __name__ == '__main__':
    runtests()
@@ -1,33 +0,0 @@
# coding: utf-8
from setuptools import setup
import os


setup(name='django-aggregate-if',
      version='0.5',
      description='Conditional aggregates for Django, just like the famous SumIf in Excel.',
      long_description=open(os.path.join(os.path.dirname(__file__), "README.rst")).read(),
      author="Henrique Bastos", author_email="henrique@bastos.net",
      license="MIT",
      py_modules=['aggregate_if'],
      install_requires=[
          'six>=1.6.1',
      ],
      zip_safe=False,
      platforms='any',
      include_package_data=True,
      classifiers=[
          'Development Status :: 5 - Production/Stable',
          'Framework :: Django',
          'Intended Audience :: Developers',
          'License :: OSI Approved :: MIT License',
          'Natural Language :: English',
          'Operating System :: OS Independent',
          'Programming Language :: Python',
          'Programming Language :: Python :: 2.7',
          'Programming Language :: Python :: 3',
          'Topic :: Database',
          'Topic :: Software Development :: Libraries',
      ],
      url='http://github.com/henriquebastos/django-aggregate-if/',
)
@@ -1,198 +0,0 @@
[tox]
envlist =
    py27-django1.4-sqlite,
    py27-django1.4-postgres,
    py27-django1.4-mysql,

    py27-django1.5-sqlite,
    py27-django1.5-postgres,
    py27-django1.5-mysql,

    py27-django1.6-sqlite,
    py27-django1.6-postgres,
    py27-django1.6-mysql,

    py27-django1.7-sqlite,
    py27-django1.7-postgres,
    py27-django1.7-mysql,

    py34-django1.6-sqlite,
    py34-django1.6-postgres,
    #py34-django1.6-mysql

    py34-django1.7-sqlite,
    py34-django1.7-postgres,
    #py34-django1.7-mysql

[testenv]
whitelist_externals =
    mysql
    psql

# Python 2.7
# Django 1.4
[testenv:py27-django1.4-sqlite]
basepython = python2.7
deps =
    django==1.4
commands = python runtests.py --settings tests.test_sqlite

[testenv:py27-django1.4-postgres]
basepython = python2.7
deps =
    django==1.4
    psycopg2
commands =
    psql -c 'create database aggregation;' postgres
    python runtests.py --settings tests.test_postgres
    psql -c 'drop database aggregation;' postgres

[testenv:py27-django1.4-mysql]
basepython = python2.7
deps =
    django==1.4
    mysql-python
commands =
    mysql -e 'create database aggregation;'
    python runtests.py --settings tests.test_mysql
    mysql -e 'drop database aggregation;'

# Django 1.5
[testenv:py27-django1.5-sqlite]
basepython = python2.7
deps =
    django==1.5
commands = python runtests.py --settings tests.test_sqlite

[testenv:py27-django1.5-postgres]
basepython = python2.7
deps =
    django==1.5
    psycopg2
commands =
    psql -c 'create database aggregation;' postgres
    python runtests.py --settings tests.test_postgres
    psql -c 'drop database aggregation;' postgres

[testenv:py27-django1.5-mysql]
basepython = python2.7
deps =
    django==1.5
    mysql-python
commands =
    mysql -e 'create database aggregation;'
    python runtests.py --settings tests.test_mysql
    mysql -e 'drop database aggregation;'
# Django 1.6
[testenv:py27-django1.6-sqlite]
basepython = python2.7
deps =
    django==1.6
commands = python runtests.py --settings tests.test_sqlite

[testenv:py27-django1.6-postgres]
basepython = python2.7
deps =
    django==1.6
    psycopg2
commands =
    psql -c 'create database aggregation;' postgres
    python runtests.py --settings tests.test_postgres
    psql -c 'drop database aggregation;' postgres

[testenv:py27-django1.6-mysql]
basepython = python2.7
deps =
    django==1.6
    mysql-python
commands =
    mysql -e 'create database aggregation;'
    python runtests.py --settings tests.test_mysql
    mysql -e 'drop database aggregation;'


# Python 2.7 and Django 1.7
[testenv:py27-django1.7-sqlite]
basepython = python2.7
deps =
    django==1.7
commands = python runtests.py --settings tests.test_sqlite

[testenv:py27-django1.7-postgres]
basepython = python2.7
deps =
    django==1.7
    psycopg2
commands =
    psql -c 'create database aggregation;' postgres
    python runtests.py --settings tests.test_postgres
    psql -c 'drop database aggregation;' postgres

[testenv:py27-django1.7-mysql]
basepython = python2.7
deps =
    django==1.7
    mysql-python
commands =
    mysql -e 'create database aggregation;'
    python runtests.py --settings tests.test_mysql
    mysql -e 'drop database aggregation;'


# Python 3.4
# Django 1.6
[testenv:py34-django1.6-sqlite]
basepython = python3.4
deps =
    django==1.6
commands = python runtests.py --settings tests.test_sqlite

[testenv:py34-django1.6-postgres]
basepython = python3.4
deps =
    django==1.6
    psycopg2
commands =
    psql -c 'create database aggregation;' postgres
    python runtests.py --settings tests.test_postgres
    psql -c 'drop database aggregation;' postgres

[testenv:py34-django1.6-mysql]
basepython = python3.4
deps =
    django==1.6
    mysql-python3
commands =
    mysql -e 'create database aggregation;'
    python runtests.py --settings tests.test_mysql
    mysql -e 'drop database aggregation;'


# Python 3.4
# Django 1.7
[testenv:py34-django1.7-sqlite]
basepython = python3.4
deps =
    django==1.7
commands = python runtests.py --settings tests.test_sqlite

[testenv:py34-django1.7-postgres]
basepython = python3.4
deps =
    django==1.7
    psycopg2
commands =
    psql -c 'create database aggregation;' postgres
    python runtests.py --settings tests.test_postgres
    psql -c 'drop database aggregation;' postgres

[testenv:py34-django1.7-mysql]
basepython = python3.4
deps =
    django==1.7
    mysql-python3
commands =
    mysql -e 'create database aggregation;'
    python runtests.py --settings tests.test_mysql
    mysql -e 'drop database aggregation;'
@@ -1,41 +0,0 @@
Toaster Testing Framework
Yocto Project


Rationale
------------
As Toaster contributions grow with the number of people that contribute code, verifying each patch prior to submitting upstream becomes a hard-to-scale problem for humans. We devised this system in order to run patch-level validation, trying to eliminate common problems from submitted patches, in an automated fashion.

The Toaster Testing Framework is a set of Python scripts that provides an extensible way to write smoke and regression tests that will be run on each patch set sent for review on the toaster mailing list.


Usage
------------
There are three main executable scripts in this directory.
* runner.py is designed to be run from the command line. It requires, as a mandatory parameter, the name of a branch on poky-contrib containing the patches to be tested. The program auto-discovers the available tests residing in this directory by looking for unittest classes, and runs the tests on the branch, dumping the output to standard output. Optionally, it can take parameters inhibiting the branch checkout, or specifying a single test to be run, for debugging purposes.
* launcher.py is designed to be run from a crontab or similar scheduling mechanism. It looks up a backlog file containing branches-to-test (named tasks in the source code), selects the first one in FIFO manner, and launches runner.py on it. It waits for completion, and emails the standard output and standard error dumps from the runner.py execution.
* recv.py is an email receiver, designed to be called as a pipe from a .forward file. It is used to monitor a mailing list, for example, and add tasks to the backlog based on review requests coming in on the mailing list.


Installation
------------
As a prerequisite, we expect a functioning email system on a machine with Python 2.

The broad steps to installation:
* set up the .forward on the receiving email account to pipe to the recv.py file
* edit config.py and settings.json to alter for local installation settings
* on email receive, verify backlog.txt to see that the tasks are received and marked for processing
* execute launcher.py in command line to verify that a test occurs with no problems, and that the outgoing email is delivered
* add launcher.py



Contribute
------------
What we need are tests. Add your own tests to either the tests.py file, or to a new file; a minimal sketch follows below.
Use "config.logger" to write logs that will make it to email.
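A minimal sketch of a new test (the class name and assertion are hypothetical; the framework discovers unittest classes in this directory, and config.logger is the logging hook mentioned above):

    import unittest
    import config

    class ExampleSmokeTest(unittest.TestCase):
        def test_example(self):
            # log lines written here end up in the report email
            config.logger.info("running example smoke test")
            self.assertTrue(True)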
Commonly used code should go to shellutils, and configuration to config.py.

Contribute code by emailing patches to the list: toaster@yoctoproject.org (membership required)
@@ -1,9 +0,0 @@
We need to implement tests:

* automated link checker; currently:
  $ linkchecker -t 1000 -F csv http://localhost:8000/

* integrate the w3c-validation service; currently:
  $ python urlcheck.py
Some files were not shown because too many files have changed in this diff