wic: Add mic w/pykickstart

This is the starting point for the implementation described in [YOCTO
3847] which came to the conclusion that it would make sense to use
kickstart syntax to implement image creation in OpenEmbedded.  I
subsequently realized that there was an existing tool that already
implemented image creation using kickstart syntax, the Tizen/Meego mic
tool.  As such, it made sense to use that as a starting point - this
commit essentially just copies the relevant Python code from the MIC
tool to the scripts/lib dir, where it can be accessed by the
previously created wic tool.

Most of this will be removed or renamed by later commits, since we're
initially focusing on partitioning only.  Care should be taken so that
we can easily add back any additional functionality should we decide
later to expand the tool, though (we may also want to contribute our
local changes to the mic tool to the Tizen project if it makes sense,
and therefore should avoid gratuitous changes to the original code if
possible).

Added the /mic subdir from Tizen mic repo as a starting point:

 git clone git://review.tizen.org/tools/mic.git

 For reference, the top commit:

 commit 20164175ddc234a17b8a12c33d04b012347b1530
 Author: Gui Chen <gui.chen@intel.com>
 Date:   Sun Jun 30 22:32:16 2013 -0400

    bump up to 0.19.2

Also added the /plugins subdir, moved under the /mic subdir to match the
default plugin_dir location in mic.conf.in, which was renamed to
yocto-image.conf (moved and renamed by later patches) and put into
/scripts.

(From OE-Core rev: 31f0360f1fd4ebc9dfcaed42d1c50d2448b4632e)

Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Saul Wold <sgw@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
This commit is contained in:
Tom Zanussi, 2013-08-24 15:31:34 +00:00, committed by Richard Purdie
commit 9fc88f96d4 (parent 53a1d9a788)
135 changed files with 29216 additions and 0 deletions


@@ -0,0 +1,298 @@
""" This module implements the block map (bmap) creation functionality and
provides the corresponding API in form of the 'BmapCreate' class.
The idea is that while image files may generally be very large (e.g., 4GiB),
they may nevertheless contain only little real data, e.g., 512MiB. This data
consists of files, directories, file-system meta-data, the partition table,
etc. When copying the image to the target device, you do not have to copy all
the 4GiB of data, you can copy only the 512MiB of it, which is 8 times less,
so copying should presumably be about 8 times faster.
The block map file is an XML file which contains a list of blocks which have to
be copied to the target device. The other blocks are not used and there is no
need to copy them. The XML file also contains some additional information like
block size, image size, count of mapped blocks, etc. There are also many
commentaries, so it is human-readable.
The image has to be a sparse file. Generally, this means that when you generate
this image file, you should start with a huge sparse file which contains a
single hole spanning the entire file. Then you should partition it, write all
the data (probably by means of loop-back mounting the image or parts of it),
etc. The end result should be a sparse file where mapped areas represent useful
parts of the image and holes represent useless parts of the image, which do not
have to be copied when copying the image to the target device.
This module uses the FIEMAP ioctl to detect holes. """
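The sparse-file premise of the docstring above can be demonstrated with a short Python 3 sketch (a sketch only, and filesystem-dependent: most Linux filesystems allocate no blocks for a hole, but exact `st_blocks` values vary):

```python
import os
import tempfile

# Create a 4 MiB file consisting of a single hole, then compare its
# apparent size with the bytes actually allocated on disk
# (st_blocks counts 512-byte units on Linux).
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
    f.truncate(4 * 1024 * 1024)   # extends the file without writing any data

st = os.stat(path)
apparent = st.st_size             # apparent size: 4 MiB
allocated = st.st_blocks * 512    # bytes backed by real disk blocks
os.remove(path)
```

Only the `allocated` bytes carry data worth copying; the rest is the hole the bmap lets you skip.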
# Disable the following pylint recommendations:
# * Too many instance attributes - R0902
# * Too few public methods - R0903
# pylint: disable=R0902,R0903
import hashlib
from mic.utils.misc import human_size
from mic.utils import Fiemap
# The bmap format version we generate
SUPPORTED_BMAP_VERSION = "1.3"
_BMAP_START_TEMPLATE = \
"""<?xml version="1.0" ?>
<!-- This file contains the block map for an image file, which is basically
a list of useful (mapped) block numbers in the image file. In other words,
it lists only those blocks which contain data (boot sector, partition
table, file-system metadata, files, directories, extents, etc). These
blocks have to be copied to the target device. The other blocks do not
contain any useful data and do not have to be copied to the target
device.
The block map is an optimization which allows copying or flashing the image
to the target device quicker than copying or flashing the entire image. This is
because with bmap less data is copied: <MappedBlocksCount> blocks instead
of <BlocksCount> blocks.
Besides the machine-readable data, this file contains useful commentaries
which contain human-readable information like image size, percentage of
mapped data, etc.
The 'version' attribute is the block map file format version in the
'major.minor' format. The version major number is increased whenever an
incompatible block map format change is made. The minor number changes
in case of minor backward-compatible changes. -->
<bmap version="%s">
<!-- Image size in bytes: %s -->
<ImageSize> %u </ImageSize>
<!-- Size of a block in bytes -->
<BlockSize> %u </BlockSize>
<!-- Count of blocks in the image file -->
<BlocksCount> %u </BlocksCount>
"""
class Error(Exception):
""" A class for exceptions generated by this module. We currently support
only one type of exceptions, and we basically throw human-readable problem
description in case of errors. """
pass
class BmapCreate:
""" This class implements the bmap creation functionality. To generate a
bmap for an image (which is supposedly a sparse file), you should first
create an instance of 'BmapCreate' and provide:
* full path or a file-like object of the image to create bmap for
* full path or a file object to use for writing the results to
Then you should invoke the 'generate()' method of this class. It will use
the FIEMAP ioctl to generate the bmap. """
def _open_image_file(self):
""" Open the image file. """
try:
self._f_image = open(self._image_path, 'rb')
except IOError as err:
raise Error("cannot open image file '%s': %s" \
% (self._image_path, err))
self._f_image_needs_close = True
def _open_bmap_file(self):
""" Open the bmap file. """
try:
self._f_bmap = open(self._bmap_path, 'w+')
except IOError as err:
raise Error("cannot open bmap file '%s': %s" \
% (self._bmap_path, err))
self._f_bmap_needs_close = True
def __init__(self, image, bmap):
""" Initialize a class instance:
* image - full path or a file-like object of the image to create bmap
for
* bmap - full path or a file object to use for writing the resulting
bmap to """
self.image_size = None
self.image_size_human = None
self.block_size = None
self.blocks_cnt = None
self.mapped_cnt = None
self.mapped_size = None
self.mapped_size_human = None
self.mapped_percent = None
self._mapped_count_pos1 = None
self._mapped_count_pos2 = None
self._sha1_pos = None
self._f_image_needs_close = False
self._f_bmap_needs_close = False
if hasattr(image, "read"):
self._f_image = image
self._image_path = image.name
else:
self._image_path = image
self._open_image_file()
if hasattr(bmap, "read"):
self._f_bmap = bmap
self._bmap_path = bmap.name
else:
self._bmap_path = bmap
self._open_bmap_file()
self.fiemap = Fiemap.Fiemap(self._f_image)
self.image_size = self.fiemap.image_size
self.image_size_human = human_size(self.image_size)
if self.image_size == 0:
raise Error("cannot generate bmap for zero-sized image file '%s'" \
% self._image_path)
self.block_size = self.fiemap.block_size
self.blocks_cnt = self.fiemap.blocks_cnt
def _bmap_file_start(self):
""" A helper function which generates the starting contents of the
block map file: the header comment, image size, block size, etc. """
# We do not know the amount of mapped blocks at the moment, so just put
# whitespaces instead of real numbers. Assume the longest possible
# numbers.
mapped_count = ' ' * len(str(self.image_size))
mapped_size_human = ' ' * len(self.image_size_human)
xml = _BMAP_START_TEMPLATE \
% (SUPPORTED_BMAP_VERSION, self.image_size_human,
self.image_size, self.block_size, self.blocks_cnt)
xml += " <!-- Count of mapped blocks: "
self._f_bmap.write(xml)
self._mapped_count_pos1 = self._f_bmap.tell()
# Just put white-spaces instead of real information about mapped blocks
xml = "%s or %.1f -->\n" % (mapped_size_human, 100.0)
xml += " <MappedBlocksCount> "
self._f_bmap.write(xml)
self._mapped_count_pos2 = self._f_bmap.tell()
xml = "%s </MappedBlocksCount>\n\n" % mapped_count
# pylint: disable=C0301
xml += " <!-- The checksum of this bmap file. When it is calculated, the value of\n"
xml += " the SHA1 checksum has be zeoro (40 ASCII \"0\" symbols). -->\n"
xml += " <BmapFileSHA1> "
self._f_bmap.write(xml)
self._sha1_pos = self._f_bmap.tell()
xml = "0" * 40 + " </BmapFileSHA1>\n\n"
xml += " <!-- The block map which consists of elements which may either be a\n"
xml += " range of blocks or a single block. The 'sha1' attribute (if present)\n"
xml += " is the SHA1 checksum of this blocks range. -->\n"
xml += " <BlockMap>\n"
# pylint: enable=C0301
self._f_bmap.write(xml)
def _bmap_file_end(self):
""" A helper function which generates the final parts of the block map
file: the ending tags and the information about the amount of mapped
blocks. """
xml = " </BlockMap>\n"
xml += "</bmap>\n"
self._f_bmap.write(xml)
self._f_bmap.seek(self._mapped_count_pos1)
self._f_bmap.write("%s or %.1f%%" % \
(self.mapped_size_human, self.mapped_percent))
self._f_bmap.seek(self._mapped_count_pos2)
self._f_bmap.write("%u" % self.mapped_cnt)
self._f_bmap.seek(0)
sha1 = hashlib.sha1(self._f_bmap.read()).hexdigest()
self._f_bmap.seek(self._sha1_pos)
self._f_bmap.write("%s" % sha1)
def _calculate_sha1(self, first, last):
""" A helper function which calculates SHA1 checksum for the range of
blocks of the image file: from block 'first' to block 'last'. """
start = first * self.block_size
end = (last + 1) * self.block_size
self._f_image.seek(start)
hash_obj = hashlib.new("sha1")
chunk_size = 1024*1024
to_read = end - start
read = 0
while read < to_read:
if read + chunk_size > to_read:
chunk_size = to_read - read
chunk = self._f_image.read(chunk_size)
hash_obj.update(chunk)
read += chunk_size
return hash_obj.hexdigest()
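The bounded-memory hashing loop in '_calculate_sha1()' has the same shape as this self-contained Python 3 sketch (operating on an in-memory buffer here instead of a file, purely for illustration):

```python
import hashlib

def chunked_sha1(data: bytes, chunk_size: int = 1024 * 1024) -> str:
    # Hash a region in bounded chunks so a multi-GiB range never has
    # to fit in memory at once.
    hash_obj = hashlib.sha1()
    for off in range(0, len(data), chunk_size):
        hash_obj.update(data[off:off + chunk_size])
    return hash_obj.hexdigest()
```

Feeding the data in chunks yields the same digest as hashing it in one call, which is what makes the streaming form safe.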
def generate(self, include_checksums = True):
""" Generate bmap for the image file. If 'include_checksums' is 'True',
also generate SHA1 checksums for block ranges. """
# Save image file position in order to restore it at the end
image_pos = self._f_image.tell()
self._bmap_file_start()
# Generate the block map and write it to the XML block map
# file as we go.
self.mapped_cnt = 0
for first, last in self.fiemap.get_mapped_ranges(0, self.blocks_cnt):
self.mapped_cnt += last - first + 1
if include_checksums:
sha1 = self._calculate_sha1(first, last)
sha1 = " sha1=\"%s\"" % sha1
else:
sha1 = ""
if first != last:
self._f_bmap.write(" <Range%s> %s-%s </Range>\n" \
% (sha1, first, last))
else:
self._f_bmap.write(" <Range%s> %s </Range>\n" \
% (sha1, first))
self.mapped_size = self.mapped_cnt * self.block_size
self.mapped_size_human = human_size(self.mapped_size)
self.mapped_percent = (self.mapped_cnt * 100.0) / self.blocks_cnt
self._bmap_file_end()
try:
self._f_bmap.flush()
except IOError as err:
raise Error("cannot flush the bmap file '%s': %s" \
% (self._bmap_path, err))
self._f_image.seek(image_pos)
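The `<BlockMap>` body that 'generate()' emits follows a simple rule that can be sketched on its own in Python 3 (checksum attributes omitted for brevity): a span of blocks becomes `<Range> first-last </Range>`, a single block just `<Range> first </Range>`.

```python
def ranges_to_xml(ranges):
    # Render (first, last) block ranges the way generate() writes them.
    lines = []
    for first, last in ranges:
        if first != last:
            lines.append("        <Range> %s-%s </Range>" % (first, last))
        else:
            lines.append("        <Range> %s </Range>" % first)
    return "\n".join(lines)
```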
def __del__(self):
""" The class destructor which closes the opened files. """
if self._f_image_needs_close:
self._f_image.close()
if self._f_bmap_needs_close:
self._f_bmap.close()


@@ -0,0 +1,252 @@
""" This module implements python API for the FIEMAP ioctl. The FIEMAP ioctl
allows to find holes and mapped areas in a file. """
# Note, a lot of code in this module is not very readable, because it deals
# with the rather complex FIEMAP ioctl. To understand the code, you need to
# know the FIEMAP interface, which is documented in the
# Documentation/filesystems/fiemap.txt file in the Linux kernel sources.
# Disable the following pylint recommendations:
# * Too many instance attributes (R0902)
# pylint: disable=R0902
import os
import struct
import array
import fcntl
from mic.utils.misc import get_block_size
# Format string for 'struct fiemap'
_FIEMAP_FORMAT = "=QQLLLL"
# sizeof(struct fiemap)
_FIEMAP_SIZE = struct.calcsize(_FIEMAP_FORMAT)
# Format string for 'struct fiemap_extent'
_FIEMAP_EXTENT_FORMAT = "=QQQQQLLLL"
# sizeof(struct fiemap_extent)
_FIEMAP_EXTENT_SIZE = struct.calcsize(_FIEMAP_EXTENT_FORMAT)
# The FIEMAP ioctl number
_FIEMAP_IOCTL = 0xC020660B
# Minimum buffer which is required for 'class Fiemap' to operate
MIN_BUFFER_SIZE = _FIEMAP_SIZE + _FIEMAP_EXTENT_SIZE
# The default buffer size for 'class Fiemap'
DEFAULT_BUFFER_SIZE = 256 * 1024
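The format strings above use the '=' prefix, meaning native byte order with standard sizes and no padding, which matches the kernel's packed FIEMAP structures. The resulting sizes can be verified directly:

```python
import struct

# 'struct fiemap' fixed header: two u64 and four u32 fields.
FIEMAP_FORMAT = "=QQLLLL"
# 'struct fiemap_extent': five u64 and four u32 fields.
FIEMAP_EXTENT_FORMAT = "=QQQQQLLLL"

print(struct.calcsize(FIEMAP_FORMAT))         # 32
print(struct.calcsize(FIEMAP_EXTENT_FORMAT))  # 56
```

These sizes (32 and 56 bytes) are what the buffer-capacity arithmetic in 'Fiemap.__init__()' relies on.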
class Error(Exception):
""" A class for exceptions generated by this module. We currently support
only one type of exceptions, and we basically throw human-readable problem
description in case of errors. """
pass
class Fiemap:
""" This class provides API to the FIEMAP ioctl. Namely, it allows to
iterate over all mapped blocks and over all holes. """
def _open_image_file(self):
""" Open the image file. """
try:
self._f_image = open(self._image_path, 'rb')
except IOError as err:
raise Error("cannot open image file '%s': %s" \
% (self._image_path, err))
self._f_image_needs_close = True
def __init__(self, image, buf_size = DEFAULT_BUFFER_SIZE):
""" Initialize a class instance. The 'image' argument is full path to
the file to operate on, or a file object to operate on.
The 'buf_size' argument is the size of the buffer for 'struct
fiemap_extent' elements which will be used when invoking the FIEMAP
ioctl. The larger the buffer, the fewer times the FIEMAP ioctl will
be invoked. """
self._f_image_needs_close = False
if hasattr(image, "fileno"):
self._f_image = image
self._image_path = image.name
else:
self._image_path = image
self._open_image_file()
# Validate 'buf_size'
if buf_size < MIN_BUFFER_SIZE:
raise Error("too small buffer (%d bytes), minimum is %d bytes" \
% (buf_size, MIN_BUFFER_SIZE))
# How many 'struct fiemap_extent' elements fit the buffer
buf_size -= _FIEMAP_SIZE
self._fiemap_extent_cnt = buf_size / _FIEMAP_EXTENT_SIZE
self._buf_size = self._fiemap_extent_cnt * _FIEMAP_EXTENT_SIZE
self._buf_size += _FIEMAP_SIZE
# Allocate a mutable buffer for the FIEMAP ioctl
self._buf = array.array('B', [0] * self._buf_size)
self.image_size = os.fstat(self._f_image.fileno()).st_size
try:
self.block_size = get_block_size(self._f_image)
except IOError as err:
raise Error("cannot get block size for '%s': %s" \
% (self._image_path, err))
self.blocks_cnt = self.image_size + self.block_size - 1
self.blocks_cnt /= self.block_size
# Synchronize the image file to make sure FIEMAP returns correct values
try:
self._f_image.flush()
except IOError as err:
raise Error("cannot flush image file '%s': %s" \
% (self._image_path, err))
try:
os.fsync(self._f_image.fileno())
except OSError as err:
raise Error("cannot synchronize image file '%s': %s " \
% (self._image_path, err.strerror))
# Check if the FIEMAP ioctl is supported
self.block_is_mapped(0)
def __del__(self):
""" The class destructor which closes the opened files. """
if self._f_image_needs_close:
self._f_image.close()
def _invoke_fiemap(self, block, count):
""" Invoke the FIEMAP ioctl for 'count' blocks of the file starting from
block number 'block'.
The full result of the operation is stored in 'self._buf' on exit.
Returns the unpacked 'struct fiemap' data structure in form of a python
list (just like 'struct.unpack()'). """
if block < 0 or block >= self.blocks_cnt:
raise Error("bad block number %d, should be within [0, %d]" \
% (block, self.blocks_cnt))
# Initialize the 'struct fiemap' part of the buffer
struct.pack_into(_FIEMAP_FORMAT, self._buf, 0, block * self.block_size,
count * self.block_size, 0, 0,
self._fiemap_extent_cnt, 0)
try:
fcntl.ioctl(self._f_image, _FIEMAP_IOCTL, self._buf, 1)
except IOError as err:
error_msg = "the FIEMAP ioctl failed for '%s': %s" \
% (self._image_path, err)
if err.errno == os.errno.EPERM or err.errno == os.errno.EACCES:
# The FIEMAP ioctl was added in kernel version 2.6.28 in 2008
error_msg += " (looks like your kernel does not support FIEMAP)"
raise Error(error_msg)
return struct.unpack(_FIEMAP_FORMAT, self._buf[:_FIEMAP_SIZE])
def block_is_mapped(self, block):
""" This function returns 'True' if block number 'block' of the image
file is mapped and 'False' otherwise. """
struct_fiemap = self._invoke_fiemap(block, 1)
# The 3rd element of 'struct_fiemap' is the 'fm_mapped_extents' field.
# If it contains zero, the block is not mapped, otherwise it is
# mapped.
return bool(struct_fiemap[3])
def block_is_unmapped(self, block):
""" This function returns 'True' if block number 'block' of the image
file is not mapped (hole) and 'False' otherwise. """
return not self.block_is_mapped(block)
def _unpack_fiemap_extent(self, index):
""" Unpack a 'struct fiemap_extent' structure object number 'index'
from the internal 'self._buf' buffer. """
offset = _FIEMAP_SIZE + _FIEMAP_EXTENT_SIZE * index
return struct.unpack(_FIEMAP_EXTENT_FORMAT,
self._buf[offset : offset + _FIEMAP_EXTENT_SIZE])
def _do_get_mapped_ranges(self, start, count):
""" Implements most the functionality for the 'get_mapped_ranges()'
generator: invokes the FIEMAP ioctl, walks through the mapped
extents and yields mapped block ranges. However, the ranges may be
consecutive (e.g., (1, 100), (100, 200)) and 'get_mapped_ranges()'
simply merges them. """
block = start
while block < start + count:
struct_fiemap = self._invoke_fiemap(block, count)
mapped_extents = struct_fiemap[3]
if mapped_extents == 0:
# No more mapped blocks
return
extent = 0
while extent < mapped_extents:
fiemap_extent = self._unpack_fiemap_extent(extent)
# Start of the extent
extent_start = fiemap_extent[0]
# Starting block number of the extent
extent_block = extent_start / self.block_size
# Length of the extent
extent_len = fiemap_extent[2]
# Count of blocks in the extent
extent_count = extent_len / self.block_size
# Extent length and offset have to be block-aligned
assert extent_start % self.block_size == 0
assert extent_len % self.block_size == 0
if extent_block > start + count - 1:
return
first = max(extent_block, block)
last = min(extent_block + extent_count, start + count) - 1
yield (first, last)
extent += 1
block = extent_block + extent_count
def get_mapped_ranges(self, start, count):
""" A generator which yields ranges of mapped blocks in the file. The
ranges are tuples of 2 elements: [first, last], where 'first' is the
first mapped block and 'last' is the last mapped block.
The ranges are yielded for the area of the file of size 'count' blocks,
starting from block 'start'. """
iterator = self._do_get_mapped_ranges(start, count)
first_prev, last_prev = iterator.next()
for first, last in iterator:
if last_prev == first - 1:
last_prev = last
else:
yield (first_prev, last_prev)
first_prev, last_prev = first, last
yield (first_prev, last_prev)
def get_unmapped_ranges(self, start, count):
""" Just like 'get_mapped_ranges()', but yields unmapped block ranges
instead (holes). """
hole_first = start
for first, last in self._do_get_mapped_ranges(start, count):
if first > hole_first:
yield (hole_first, first - 1)
hole_first = last + 1
if hole_first < start + count:
yield (hole_first, start + count - 1)
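The merging that 'get_mapped_ranges()' applies on top of '_do_get_mapped_ranges()' is self-contained enough to sketch in Python 3: adjacent inclusive (first, last) pairs collapse into one range.

```python
def merge_ranges(ranges):
    # Collapse consecutive inclusive ranges, e.g. (0,4),(5,9) -> (0,9),
    # mirroring the loop in get_mapped_ranges().
    iterator = iter(ranges)
    first_prev, last_prev = next(iterator)
    for first, last in iterator:
        if last_prev == first - 1:
            last_prev = last            # extend the current run
        else:
            yield (first_prev, last_prev)
            first_prev, last_prev = first, last
    yield (first_prev, last_prev)       # flush the final run
```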


(File diff suppressed because it is too large.)


@@ -0,0 +1,71 @@
#!/usr/bin/python -tt
#
# Copyright (c) 2007 Red Hat, Inc.
# Copyright (c) 2011 Intel, Inc.
#
# This program is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the Free
# Software Foundation; version 2 of the License
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
# for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc., 59
# Temple Place - Suite 330, Boston, MA 02111-1307, USA.
class CreatorError(Exception):
"""An exception base class for all imgcreate errors."""
keyword = '<creator>'
def __init__(self, msg):
self.msg = msg
def __str__(self):
if isinstance(self.msg, unicode):
self.msg = self.msg.encode('utf-8', 'ignore')
else:
self.msg = str(self.msg)
return self.keyword + self.msg
class Usage(CreatorError):
keyword = '<usage>'
def __str__(self):
if isinstance(self.msg, unicode):
self.msg = self.msg.encode('utf-8', 'ignore')
else:
self.msg = str(self.msg)
return self.keyword + self.msg + ', please use "--help" for more info'
class Abort(CreatorError):
keyword = ''
class ConfigError(CreatorError):
keyword = '<config>'
class KsError(CreatorError):
keyword = '<kickstart>'
class RepoError(CreatorError):
keyword = '<repo>'
class RpmError(CreatorError):
keyword = '<rpm>'
class MountError(CreatorError):
keyword = '<mount>'
class SnapshotError(CreatorError):
keyword = '<snapshot>'
class SquashfsError(CreatorError):
keyword = '<squashfs>'
class BootstrapError(CreatorError):
keyword = '<bootstrap>'
class RuntimeError(CreatorError):
keyword = '<runtime>'

(File diff suppressed because it is too large.)


@@ -0,0 +1,331 @@
#!/usr/bin/python -tt
#
# Copyright (c) 2013 Intel, Inc.
#
# This program is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the Free
# Software Foundation; version 2 of the License
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
# for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc., 59
# Temple Place - Suite 330, Boston, MA 02111-1307, USA.
""" This module implements a simple GPT partitions parser which can read the
GPT header and the GPT partition table. """
import struct
import uuid
import binascii
from mic.utils.errors import MountError
_GPT_HEADER_FORMAT = "<8s4sIIIQQQQ16sQIII"
_GPT_HEADER_SIZE = struct.calcsize(_GPT_HEADER_FORMAT)
_GPT_ENTRY_FORMAT = "<16s16sQQQ72s"
_GPT_ENTRY_SIZE = struct.calcsize(_GPT_ENTRY_FORMAT)
_SUPPORTED_GPT_REVISION = '\x00\x00\x01\x00'
def _stringify_uuid(binary_uuid):
""" A small helper function to transform a binary UUID into a string
format. """
uuid_str = str(uuid.UUID(bytes_le = binary_uuid))
return uuid_str.upper()
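What '_stringify_uuid()' does can be seen with a concrete Python 3 example: the first three GUID fields are stored little-endian on disk, which 'bytes_le' decodes. The bytes below are the on-disk form of the well-known EFI System Partition type GUID.

```python
import uuid

# On-disk (little-endian) bytes of C12A7328-F81F-11D2-BA4B-00A0C93EC93B.
raw = bytes.fromhex("28732ac11ff8d211ba4b00a0c93ec93b")
uuid_str = str(uuid.UUID(bytes_le=raw)).upper()
print(uuid_str)  # C12A7328-F81F-11D2-BA4B-00A0C93EC93B
```

Note how the first eight bytes are byte-swapped within their fields while the trailing eight are not; that asymmetry is exactly why 'bytes_le' (not 'bytes') is the right constructor here.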
def _calc_header_crc(raw_hdr):
""" Calculate GPT header CRC32 checksum. The 'raw_hdr' parameter has to
be a list or a tuple containing all the elements of the GPT header in a
"raw" form, meaning that it should simply contain "unpacked" disk data.
"""
raw_hdr = list(raw_hdr)
raw_hdr[3] = 0
raw_hdr = struct.pack(_GPT_HEADER_FORMAT, *raw_hdr)
return binascii.crc32(raw_hdr) & 0xFFFFFFFF
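The convention '_calc_header_crc()' follows, zero the CRC field, re-pack, then CRC32 with an unsigned mask, can be sketched in Python 3 with a simplified three-field layout (a stand-in for illustration, not the real GPT header format):

```python
import binascii
import struct

FMT = "<8sI4s"                                  # signature, crc, revision
fields = [b"EFI PART", 0x12345678, b"\x00\x00\x01\x00"]
fields[1] = 0                                   # CRC field zeroed before hashing
raw = struct.pack(FMT, *fields)
crc = binascii.crc32(raw) & 0xFFFFFFFF          # mask forces an unsigned value
```

The `& 0xFFFFFFFF` mask matters because 'binascii.crc32()' historically returned signed values; masking guarantees the unsigned form the on-disk field expects.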
def _validate_header(raw_hdr):
""" Validate the GPT header. The 'raw_hdr' parameter has to be a list or a
tuple containing all the elements of the GPT header in a "raw" form,
meaning that it should simply contain "unpacked" disk data. """
# Validate the signature
if raw_hdr[0] != 'EFI PART':
raise MountError("GPT partition table not found")
# Validate the revision
if raw_hdr[1] != _SUPPORTED_GPT_REVISION:
raise MountError("Unsupported GPT revision '%s', supported revision " \
"is '%s'" % \
(binascii.hexlify(raw_hdr[1]),
binascii.hexlify(_SUPPORTED_GPT_REVISION)))
# Validate header size
if raw_hdr[2] != _GPT_HEADER_SIZE:
raise MountError("Bad GPT header size: %d bytes, expected %d" % \
(raw_hdr[2], _GPT_HEADER_SIZE))
crc = _calc_header_crc(raw_hdr)
if raw_hdr[3] != crc:
raise MountError("GPT header crc mismatch: %#x, should be %#x" % \
(crc, raw_hdr[3]))
class GptParser:
""" GPT partition table parser. Allows reading the GPT header and the
partition table, as well as modifying the partition table records. """
def __init__(self, disk_path, sector_size = 512):
""" The class constructor which accepts the following parameters:
* disk_path - full path to the disk image or device node
* sector_size - size of a disk sector in bytes """
self.sector_size = sector_size
self.disk_path = disk_path
try:
self._disk_obj = open(disk_path, 'r+b')
except IOError as err:
raise MountError("Cannot open file '%s' for reading GPT " \
"partitions: %s" % (disk_path, err))
def __del__(self):
""" The class destructor. """
self._disk_obj.close()
def _read_disk(self, offset, size):
""" A helper function which reads 'size' bytes from offset 'offset' of
the disk and checks all the error conditions. """
self._disk_obj.seek(offset)
try:
data = self._disk_obj.read(size)
except IOError as err:
raise MountError("cannot read from '%s': %s" % \
(self.disk_path, err))
if len(data) != size:
raise MountError("cannot read %d bytes from offset '%d' of '%s', " \
"read only %d bytes" % \
(size, offset, self.disk_path, len(data)))
return data
def _write_disk(self, offset, buf):
""" A helper function which writes buffer 'buf' to offset 'offset' of
the disk. This function takes care of unaligned writes and checks all
the error conditions. """
# Since we may be dealing with a block device, we only can write in
# 'self.sector_size' chunks. Find the aligned starting and ending
# disk offsets to read.
start = (offset / self.sector_size) * self.sector_size
end = ((start + len(buf)) / self.sector_size + 1) * self.sector_size
data = self._read_disk(start, end - start)
off = offset - start
data = data[:off] + buf + data[off + len(buf):]
self._disk_obj.seek(start)
try:
self._disk_obj.write(data)
except IOError as err:
raise MountError("cannot write to '%s': %s" % (self.disk_path, err))
def read_header(self, primary = True):
""" Read and verify the GPT header and return a dictionary containing
the following elements:
'signature' : header signature
'revision' : header revision
'hdr_size' : header size in bytes
'hdr_crc' : header CRC32
'hdr_lba' : LBA of this header
'hdr_offs' : byte disk offset of this header
'backup_lba' : backup header LBA
'backup_offs' : byte disk offset of backup header
'first_lba' : first usable LBA for partitions
'first_offs' : first usable byte disk offset for partitions
'last_lba' : last usable LBA for partitions
'last_offs' : last usable byte disk offset for partitions
'disk_uuid' : UUID of the disk
'ptable_lba' : starting LBA of array of partition entries
'ptable_offs' : disk byte offset of the start of the partition table
'ptable_size' : partition table size in bytes
'entries_cnt' : number of available partition table entries
'entry_size' : size of a single partition entry
'ptable_crc' : CRC32 of the partition table
'primary' : a boolean, if 'True', this is the primary GPT header,
if 'False' - the secondary
'primary_str' : contains string "primary" if this is the primary GPT
header, and "backup" otherwise
This dictionary corresponds to the GPT header format. Please, see the
UEFI standard for the description of these fields.
If the 'primary' parameter is 'True', the primary GPT header is read,
otherwise the backup GPT header is read instead. """
# Read and validate the primary GPT header
raw_hdr = self._read_disk(self.sector_size, _GPT_HEADER_SIZE)
raw_hdr = struct.unpack(_GPT_HEADER_FORMAT, raw_hdr)
_validate_header(raw_hdr)
primary_str = "primary"
if not primary:
# Read and validate the backup GPT header
raw_hdr = self._read_disk(raw_hdr[6] * self.sector_size, _GPT_HEADER_SIZE)
raw_hdr = struct.unpack(_GPT_HEADER_FORMAT, raw_hdr)
_validate_header(raw_hdr)
primary_str = "backup"
return { 'signature' : raw_hdr[0],
'revision' : raw_hdr[1],
'hdr_size' : raw_hdr[2],
'hdr_crc' : raw_hdr[3],
'hdr_lba' : raw_hdr[5],
'hdr_offs' : raw_hdr[5] * self.sector_size,
'backup_lba' : raw_hdr[6],
'backup_offs' : raw_hdr[6] * self.sector_size,
'first_lba' : raw_hdr[7],
'first_offs' : raw_hdr[7] * self.sector_size,
'last_lba' : raw_hdr[8],
'last_offs' : raw_hdr[8] * self.sector_size,
'disk_uuid' :_stringify_uuid(raw_hdr[9]),
'ptable_lba' : raw_hdr[10],
'ptable_offs' : raw_hdr[10] * self.sector_size,
'ptable_size' : raw_hdr[11] * raw_hdr[12],
'entries_cnt' : raw_hdr[11],
'entry_size' : raw_hdr[12],
'ptable_crc' : raw_hdr[13],
'primary' : primary,
'primary_str' : primary_str }
def _read_raw_ptable(self, header):
""" Read and validate primary or backup partition table. The 'header'
argument is the GPT header. If it is the primary GPT header, then the
primary partition table is read and validated, otherwise - the backup
one. The 'header' argument is a dictionary which is returned by the
'read_header()' method. """
raw_ptable = self._read_disk(header['ptable_offs'],
header['ptable_size'])
crc = binascii.crc32(raw_ptable) & 0xFFFFFFFF
if crc != header['ptable_crc']:
raise MountError("Partition table at LBA %d (%s) is corrupted" % \
(header['ptable_lba'], header['primary_str']))
return raw_ptable
def get_partitions(self, primary = True):
""" This is a generator which parses the GPT partition table and
generates the following dictionary for each partition:
'index' : the index of the partition table entry
'offs' : byte disk offset of the partition table entry
'type_uuid' : partition type UUID
'part_uuid' : partition UUID
'first_lba' : the first LBA
'last_lba' : the last LBA
'flags' : attribute flags
'name' : partition name
'primary' : a boolean, if 'True', this is the primary partition
table, if 'False' - the secondary
'primary_str' : contains string "primary" if this is the primary GPT
header, and "backup" otherwise
This dictionary corresponds to the GPT header format. Please, see the
UEFI standard for the description of these fields.
If the 'primary' parameter is 'True', partitions from the primary GPT
partition table are generated, otherwise partitions from the backup GPT
partition table are generated. """
if primary:
primary_str = "primary"
else:
primary_str = "backup"
header = self.read_header(primary)
raw_ptable = self._read_raw_ptable(header)
for index in xrange(0, header['entries_cnt']):
start = header['entry_size'] * index
end = start + header['entry_size']
raw_entry = struct.unpack(_GPT_ENTRY_FORMAT, raw_ptable[start:end])
if raw_entry[2] == 0 or raw_entry[3] == 0:
continue
part_name = str(raw_entry[5].decode('UTF-16').split('\0', 1)[0])
yield { 'index' : index,
'offs' : header['ptable_offs'] + start,
'type_uuid' : _stringify_uuid(raw_entry[0]),
'part_uuid' : _stringify_uuid(raw_entry[1]),
'first_lba' : raw_entry[2],
'last_lba' : raw_entry[3],
'flags' : raw_entry[4],
'name' : part_name,
'primary' : primary,
'primary_str' : primary_str }
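The name recovery in 'get_partitions()' can be shown with a small Python 3 example: the 72-byte on-disk field is NUL-padded UTF-16 (little-endian in practice; the code's plain 'UTF-16' decode assumes the same when no BOM is present, made explicit below), so decode and cut at the first NUL.

```python
# Build a 72-byte on-disk name field for a partition called "rootfs".
raw_name = "rootfs".encode("utf-16-le").ljust(72, b"\x00")

# Decode and strip the NUL padding, as get_partitions() does.
name = raw_name.decode("utf-16-le").split("\x00", 1)[0]
print(name)  # rootfs
```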
def _change_partition(self, header, entry):
""" A helper function for 'change_partitions()' which changes a
a paricular instance of the partition table (primary or backup). """
if entry['index'] >= header['entries_cnt']:
raise MountError("Partition table at LBA %d has only %d " \
"records cannot change record number %d" % \
(header['entries_cnt'], entry['index']))
# Read raw GPT header
raw_hdr = self._read_disk(header['hdr_offs'], _GPT_HEADER_SIZE)
raw_hdr = list(struct.unpack(_GPT_HEADER_FORMAT, raw_hdr))
_validate_header(raw_hdr)
# Prepare the new partition table entry
raw_entry = struct.pack(_GPT_ENTRY_FORMAT,
uuid.UUID(entry['type_uuid']).bytes_le,
uuid.UUID(entry['part_uuid']).bytes_le,
entry['first_lba'],
entry['last_lba'],
entry['flags'],
entry['name'].encode('UTF-16'))
# Write the updated entry to the disk
entry_offs = header['ptable_offs'] + \
header['entry_size'] * entry['index']
self._write_disk(entry_offs, raw_entry)
# Calculate and update partition table CRC32
raw_ptable = self._read_disk(header['ptable_offs'],
header['ptable_size'])
raw_hdr[13] = binascii.crc32(raw_ptable) & 0xFFFFFFFF
# Calculate and update the GPT header CRC
raw_hdr[3] = _calc_header_crc(raw_hdr)
# Write the updated header to the disk
raw_hdr = struct.pack(_GPT_HEADER_FORMAT, *raw_hdr)
self._write_disk(header['hdr_offs'], raw_hdr)
def change_partition(self, entry):
""" Change a GPT partition. The 'entry' argument has the same format as
'get_partitions()' returns. This function simply changes the partition
table record corresponding to 'entry' in both, the primary and the
backup GPT partition tables. The parition table CRC is re-calculated
and the GPT headers are modified accordingly. """
# Change the primary partition table
header = self.read_header(True)
self._change_partition(header, entry)
# Change the backup partition table
header = self.read_header(False)
self._change_partition(header, entry)
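Both CRCs written back by `_change_partition()` are plain CRC-32 values; the `& 0xFFFFFFFF` mask is needed because Python's `binascii.crc32()` can return a signed (negative) value. A minimal standalone sketch of that step (the helper name is illustrative, not part of mic):

```python
import binascii

def table_crc32(raw_ptable):
    # binascii.crc32() may return a signed value (notably on
    # Python 2); mask to the unsigned 32-bit form GPT stores
    return binascii.crc32(raw_ptable) & 0xFFFFFFFF
```

The same masking is what the `raw_hdr[13] = binascii.crc32(raw_ptable) & 0xFFFFFFFF` line above performs on the partition-table blob before the header CRC is recomputed.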


@@ -0,0 +1,97 @@
#!/usr/bin/python
import os
import sys
import rpm
import fcntl
import struct
import termios
from mic import msger
from mic.utils import runner
from mic.utils.errors import CreatorError
from urlgrabber import grabber
from urlgrabber import __version__ as grabber_version
if rpm.labelCompare(grabber_version.split('.'), '3.9.0'.split('.')) == -1:
msger.warning("Version of python-urlgrabber is %s, lower than '3.9.0', "
"you may encounter some network issues" % grabber_version)
def myurlgrab(url, filename, proxies, progress_obj = None):
g = grabber.URLGrabber()
if progress_obj is None:
progress_obj = TextProgress()
if url.startswith("file:/"):
filepath = "/%s" % url.replace("file:", "").lstrip('/')
if not os.path.exists(filepath):
raise CreatorError("URLGrabber error: can't find file %s" % url)
if url.endswith('.rpm'):
return filepath
else:
# copy the file; leave the repo metadata in the source path untouched
runner.show(['cp', '-f', filepath, filename])
else:
try:
filename = g.urlgrab(url=str(url),
filename=filename,
ssl_verify_host=False,
ssl_verify_peer=False,
proxies=proxies,
http_headers=(('Pragma', 'no-cache'),),
quote=0,
progress_obj=progress_obj)
except grabber.URLGrabError, err:
msg = str(err)
if msg.find(url) < 0:
msg += ' on %s' % url
raise CreatorError(msg)
return filename
def terminal_width(fd=1):
""" Get the real terminal width """
try:
buf = 'abcdefgh'
buf = fcntl.ioctl(fd, termios.TIOCGWINSZ, buf)
return struct.unpack('hhhh', buf)[1]
except: # IOError
return 80
def truncate_url(url, width):
return os.path.basename(url)[0:width]
class TextProgress(object):
# make the class a singleton
_instance = None
def __new__(cls, *args, **kwargs):
if not cls._instance:
cls._instance = super(TextProgress, cls).__new__(cls, *args, **kwargs)
return cls._instance
def __init__(self, totalnum = None):
self.total = totalnum
self.counter = 1
def start(self, filename, url, *args, **kwargs):
self.url = url
self.termwidth = terminal_width()
msger.info("\r%-*s" % (self.termwidth, " "))
if self.total is None:
msger.info("\rRetrieving %s ..." % truncate_url(self.url, self.termwidth - 15))
else:
msger.info("\rRetrieving %s [%d/%d] ..." % (truncate_url(self.url, self.termwidth - 25), self.counter, self.total))
def update(self, *args):
pass
def end(self, *args):
if self.counter == self.total:
msger.raw("\n")
if self.total is not None:
self.counter += 1
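`TextProgress` makes itself a singleton by caching the instance in `__new__`. The same pattern in isolation (note that, unlike the class above, this sketch does not forward `*args` to the parent `__new__`, which also keeps it working on Python 3):

```python
class SingletonProgress(object):
    # Cache the single instance on the class itself; every
    # construction returns the same object
    _instance = None

    def __new__(cls, *args, **kwargs):
        if cls._instance is None:
            cls._instance = super(SingletonProgress, cls).__new__(cls)
        return cls._instance
```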

File diff suppressed because it is too large.


@@ -0,0 +1,790 @@
#!/usr/bin/python -tt
#
# Copyright (c) 2009, 2010, 2011 Intel, Inc.
# Copyright (c) 2007, 2008 Red Hat, Inc.
# Copyright (c) 2008 Daniel P. Berrange
# Copyright (c) 2008 David P. Huff
#
# This program is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the Free
# Software Foundation; version 2 of the License
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
# for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc., 59
# Temple Place - Suite 330, Boston, MA 02111-1307, USA.
import os
from mic import msger
from mic.utils import runner
from mic.utils.errors import MountError
from mic.utils.fs_related import *
from mic.utils.gpt_parser import GptParser
# Overhead of the MBR partitioning scheme (just one sector)
MBR_OVERHEAD = 1
# Overhead of the GPT partitioning scheme
GPT_OVERHEAD = 34
# Size of a sector in bytes
SECTOR_SIZE = 512
class PartitionedMount(Mount):
def __init__(self, mountdir, skipformat = False):
Mount.__init__(self, mountdir)
self.disks = {}
self.partitions = []
self.subvolumes = []
self.mapped = False
self.mountOrder = []
self.unmountOrder = []
self.parted = find_binary_path("parted")
self.kpartx = find_binary_path("kpartx")
self.mkswap = find_binary_path("mkswap")
self.btrfscmd=None
self.mountcmd = find_binary_path("mount")
self.umountcmd = find_binary_path("umount")
self.skipformat = skipformat
self.snapshot_created = self.skipformat
# Size of a sector used in calculations
self.sector_size = SECTOR_SIZE
self._partitions_layed_out = False
def __add_disk(self, disk_name):
""" Add a disk 'disk_name' to the internal list of disks. Note,
'disk_name' is the name of the disk in the target system
(e.g., sdb). """
if disk_name in self.disks:
# We already have this disk
return
assert not self._partitions_layed_out
self.disks[disk_name] = \
{ 'disk': None, # Disk object
'mapped': False, # True if kpartx mapping exists
'numpart': 0, # Number of allocated partitions
'partitions': [], # Indexes to self.partitions
'offset': 0, # Offset of next partition (in sectors)
# Minimum required disk size to fit all partitions (in bytes)
'min_size': 0,
'ptable_format': "msdos" } # Partition table format
def add_disk(self, disk_name, disk_obj):
""" Add a disk object which have to be partitioned. More than one disk
can be added. In case of multiple disks, disk partitions have to be
added for each disk separately with 'add_partition()". """
self.__add_disk(disk_name)
self.disks[disk_name]['disk'] = disk_obj
def __add_partition(self, part):
""" This is a helper function for 'add_partition()' which adds a
partition to the internal list of partitions. """
assert not self._partitions_layed_out
self.partitions.append(part)
self.__add_disk(part['disk_name'])
def add_partition(self, size, disk_name, mountpoint, fstype = None,
label=None, fsopts = None, boot = False, align = None,
part_type = None):
""" Add the next partition. Prtitions have to be added in the
first-to-last order. """
ks_pnum = len(self.partitions)
# Converting MB to sectors for parted
size = size * 1024 * 1024 / self.sector_size
# We need to handle subvolumes for btrfs
if fstype == "btrfs" and fsopts and fsopts.find("subvol=") != -1:
self.btrfscmd=find_binary_path("btrfs")
subvol = None
opts = fsopts.split(",")
for opt in opts:
if opt.find("subvol=") != -1:
subvol = opt.replace("subvol=", "").strip()
break
if not subvol:
raise MountError("No subvolume: %s" % fsopts)
self.subvolumes.append({'size': size, # In sectors
'mountpoint': mountpoint, # Mount relative to chroot
'fstype': fstype, # Filesystem type
'fsopts': fsopts, # Filesystem mount options
'disk_name': disk_name, # physical disk name holding partition
'device': None, # kpartx device node for partition
'mount': None, # Mount object
'subvol': subvol, # Subvolume name
'boot': boot, # Bootable flag
'mounted': False # Mount flag
})
# We still need partition for "/" or non-subvolume
if mountpoint == "/" or not fsopts or fsopts.find("subvol=") == -1:
# Don't need subvolume for "/" because it will be set as default subvolume
if fsopts and fsopts.find("subvol=") != -1:
opts = fsopts.split(",")
for opt in opts:
if opt.strip().startswith("subvol="):
opts.remove(opt)
break
fsopts = ",".join(opts)
part = { 'ks_pnum' : ks_pnum, # Partition number in the KS file
'size': size, # In sectors
'mountpoint': mountpoint, # Mount relative to chroot
'fstype': fstype, # Filesystem type
'fsopts': fsopts, # Filesystem mount options
'label': label, # Partition label
'disk_name': disk_name, # physical disk name holding partition
'device': None, # kpartx device node for partition
'mount': None, # Mount object
'num': None, # Partition number
'boot': boot, # Bootable flag
'align': align, # Partition alignment
'part_type' : part_type, # Partition type
'partuuid': None } # Partition UUID (GPT-only)
self.__add_partition(part)
def layout_partitions(self, ptable_format = "msdos"):
""" Layout the partitions, meaning calculate the position of every
partition on the disk. The 'ptable_format' parameter defines the
partition table format, and may be either "msdos" or "gpt". """
msger.debug("Assigning %s partitions to disks" % ptable_format)
if ptable_format not in ('msdos', 'gpt'):
raise MountError("Unknown partition table format '%s', supported " \
"formats are: 'msdos' and 'gpt'" % ptable_format)
if self._partitions_layed_out:
return
self._partitions_layed_out = True
# Go through partitions in the order they are added in .ks file
for n in range(len(self.partitions)):
p = self.partitions[n]
if not self.disks.has_key(p['disk_name']):
raise MountError("No disk %s for partition %s" \
% (p['disk_name'], p['mountpoint']))
if p['part_type'] and ptable_format != 'gpt':
# The --part-type option could also be implemented for MBR
# partitions, in which case it would map to the 1-byte "partition
# type" field at offset 3 of the partition entry.
raise MountError("setting custom partition type is only " \
"implemented for GPT partitions")
# Get the disk where the partition is located
d = self.disks[p['disk_name']]
d['numpart'] += 1
d['ptable_format'] = ptable_format
if d['numpart'] == 1:
if ptable_format == "msdos":
overhead = MBR_OVERHEAD
else:
overhead = GPT_OVERHEAD
# Skip one sector required for the partitioning scheme overhead
d['offset'] += overhead
# Steal a few sectors from the first partition to compensate
# for the partitioning overhead
p['size'] -= overhead
if p['align']:
# If this is not the first partition and alignment is set, we
# need to align the partition start.
# FIXME: This leaves empty space on the disk. To fill the
# gaps we could enlarge the previous partition.
# Calculate how far the offset is from the alignment boundary.
align_sectors = d['offset'] % (p['align'] * 1024 / self.sector_size)
# We need to move forward to the next alignment point
align_sectors = (p['align'] * 1024 / self.sector_size) - align_sectors
msger.debug("Realignment for %s%s with %s sectors, original"
" offset %s, target alignment is %sK." %
(p['disk_name'], d['numpart'], align_sectors,
d['offset'], p['align']))
# Increase the offset so the partition actually starts on the
# alignment boundary
d['offset'] += align_sectors
p['start'] = d['offset']
d['offset'] += p['size']
p['type'] = 'primary'
p['num'] = d['numpart']
if d['ptable_format'] == "msdos":
if d['numpart'] > 2:
# Every logical partition requires an additional sector for
# the EBR, so steal the last sector from the end of each
# partition starting from the 3rd one for the EBR. This
# will make sure the logical partitions are aligned
# correctly.
p['size'] -= 1
if d['numpart'] > 3:
p['type'] = 'logical'
p['num'] = d['numpart'] + 1
d['partitions'].append(n)
msger.debug("Assigned %s to %s%d, sectors range %d-%d size %d "
"sectors (%d bytes)." \
% (p['mountpoint'], p['disk_name'], p['num'],
p['start'], p['start'] + p['size'] - 1,
p['size'], p['size'] * self.sector_size))
# Once all the partitions have been laid out, we can calculate the
# minimum disk sizes.
for disk_name, d in self.disks.items():
d['min_size'] = d['offset']
if d['ptable_format'] == 'gpt':
# Account for the backup partition table at the end of the disk
d['min_size'] += GPT_OVERHEAD
d['min_size'] *= self.sector_size
def __run_parted(self, args):
""" Run parted with arguments specified in the 'args' list. """
args.insert(0, self.parted)
msger.debug(args)
rc, out = runner.runtool(args, catch = 3)
out = out.strip()
if out:
msger.debug('"parted" output: %s' % out)
if rc != 0:
# We don't throw exception when return code is not 0, because
# parted always fails to reload part table with loop devices. This
# prevents us from distinguishing real errors based on return
# code.
msger.debug("WARNING: parted returned '%s' instead of 0" % rc)
def __create_partition(self, device, parttype, fstype, start, size):
""" Create a partition on an image described by the 'device' object. """
# The start sector is included in the size, so subtract one to get the end.
end = start + size - 1
msger.debug("Added '%s' partition, sectors %d-%d, size %d sectors" %
(parttype, start, end, size))
args = ["-s", device, "unit", "s", "mkpart", parttype]
if fstype:
args.extend([fstype])
args.extend(["%d" % start, "%d" % end])
return self.__run_parted(args)
def __format_disks(self):
self.layout_partitions()
if self.skipformat:
msger.debug("Skipping disk format, because skipformat flag is set.")
return
for dev in self.disks.keys():
d = self.disks[dev]
msger.debug("Initializing partition table for %s" % \
(d['disk'].device))
self.__run_parted(["-s", d['disk'].device, "mklabel",
d['ptable_format']])
msger.debug("Creating partitions")
for p in self.partitions:
d = self.disks[p['disk_name']]
if d['ptable_format'] == "msdos" and p['num'] == 5:
# The last sector of the 3rd partition was reserved for the EBR
# of the first _logical_ partition. This is why the extended
# partition should start one sector before the first logical
# partition.
self.__create_partition(d['disk'].device, "extended",
None, p['start'] - 1,
d['offset'] - p['start'])
if p['fstype'] == "swap":
parted_fs_type = "linux-swap"
elif p['fstype'] == "vfat":
parted_fs_type = "fat32"
elif p['fstype'] == "msdos":
parted_fs_type = "fat16"
else:
# Type for ext2/ext3/ext4/btrfs
parted_fs_type = "ext2"
# The boot ROM of OMAP boards requires the vfat boot partition to
# have an even number of sectors.
if p['mountpoint'] == "/boot" and p['fstype'] in ["vfat", "msdos"] \
and p['size'] % 2:
msger.debug("Substracting one sector from '%s' partition to " \
"get even number of sectors for the partition" % \
p['mountpoint'])
p['size'] -= 1
self.__create_partition(d['disk'].device, p['type'],
parted_fs_type, p['start'], p['size'])
if p['boot']:
if d['ptable_format'] == 'gpt':
flag_name = "legacy_boot"
else:
flag_name = "boot"
msger.debug("Set '%s' flag for partition '%s' on disk '%s'" % \
(flag_name, p['num'], d['disk'].device))
self.__run_parted(["-s", d['disk'].device, "set",
"%d" % p['num'], flag_name, "on"])
# If the partition table format is "gpt", find out PARTUUIDs for all
# the partitions. And if users specified custom parition type UUIDs,
# set them.
for disk_name, disk in self.disks.items():
if disk['ptable_format'] != 'gpt':
continue
pnum = 0
gpt_parser = GptParser(disk['disk'].device, SECTOR_SIZE)
# Iterate over all GPT partitions on this disk
for entry in gpt_parser.get_partitions():
pnum += 1
# Find the matching partition in the 'self.partitions' list
for n in disk['partitions']:
p = self.partitions[n]
if p['num'] == pnum:
# Found, fetch PARTUUID (partition's unique ID)
p['partuuid'] = entry['part_uuid']
msger.debug("PARTUUID for partition %d on disk '%s' " \
"(mount point '%s') is '%s'" % (pnum, \
disk_name, p['mountpoint'], p['partuuid']))
if p['part_type']:
entry['type_uuid'] = p['part_type']
msger.debug("Change type of partition %d on disk " \
"'%s' (mount point '%s') to '%s'" % \
(pnum, disk_name, p['mountpoint'],
p['part_type']))
gpt_parser.change_partition(entry)
del gpt_parser
def __map_partitions(self):
"""Load it if dm_snapshot isn't loaded. """
load_module("dm_snapshot")
for dev in self.disks.keys():
d = self.disks[dev]
if d['mapped']:
continue
msger.debug("Running kpartx on %s" % d['disk'].device )
rc, kpartxOutput = runner.runtool([self.kpartx, "-l", "-v", d['disk'].device])
kpartxOutput = kpartxOutput.splitlines()
if rc != 0:
raise MountError("Failed to query partition mapping for '%s'" %
d['disk'].device)
# Strip trailing blank and mask verbose output
i = 0
while i < len(kpartxOutput) and kpartxOutput[i][0:4] != "loop":
i = i + 1
kpartxOutput = kpartxOutput[i:]
# Make sure kpartx reported the right count of partitions
if len(kpartxOutput) != d['numpart']:
# If this disk has more than 3 partitions, then in case of MBR
# partitions there is an extended partition. Different versions
# of kpartx behave differently WRT the extended partition -
# some map it, some ignore it. This is why we do the below hack
# - if kpartx reported one more partition and the partition
# table type is "msdos" and the amount of partitions is more
# than 3, we just assume kpartx mapped the extended partition
# and we remove it.
if len(kpartxOutput) == d['numpart'] + 1 \
and d['ptable_format'] == 'msdos' and len(kpartxOutput) > 3:
kpartxOutput.pop(3)
else:
raise MountError("Unexpected number of partitions from " \
"kpartx: %d != %d" % \
(len(kpartxOutput), d['numpart']))
for i in range(len(kpartxOutput)):
line = kpartxOutput[i]
newdev = line.split()[0]
mapperdev = "/dev/mapper/" + newdev
loopdev = d['disk'].device + newdev[-1]
msger.debug("Dev %s: %s -> %s" % (newdev, loopdev, mapperdev))
pnum = d['partitions'][i]
self.partitions[pnum]['device'] = loopdev
# grub's install wants partitions to be named
# to match their parent device + partition num
# kpartx doesn't work like this, so we add compat
# symlinks to point to /dev/mapper
if os.path.lexists(loopdev):
os.unlink(loopdev)
os.symlink(mapperdev, loopdev)
msger.debug("Adding partx mapping for %s" % d['disk'].device)
rc = runner.show([self.kpartx, "-v", "-a", d['disk'].device])
if rc != 0:
# Make sure that the device maps are also removed on error case.
# The d['mapped'] isn't set to True if the kpartx fails so
# failed mapping will not be cleaned on cleanup either.
runner.quiet([self.kpartx, "-d", d['disk'].device])
raise MountError("Failed to map partitions for '%s'" %
d['disk'].device)
# FIXME: there is a bit of delay for multipath device setup;
# wait 10 seconds for the setup
import time
time.sleep(10)
d['mapped'] = True
def __unmap_partitions(self):
for dev in self.disks.keys():
d = self.disks[dev]
if not d['mapped']:
continue
msger.debug("Removing compat symlinks")
for pnum in d['partitions']:
if self.partitions[pnum]['device'] != None:
os.unlink(self.partitions[pnum]['device'])
self.partitions[pnum]['device'] = None
msger.debug("Unmapping %s" % d['disk'].device)
rc = runner.quiet([self.kpartx, "-d", d['disk'].device])
if rc != 0:
raise MountError("Failed to unmap partitions for '%s'" %
d['disk'].device)
d['mapped'] = False
def __calculate_mountorder(self):
msger.debug("Calculating mount order")
for p in self.partitions:
if p['mountpoint']:
self.mountOrder.append(p['mountpoint'])
self.unmountOrder.append(p['mountpoint'])
self.mountOrder.sort()
self.unmountOrder.sort()
self.unmountOrder.reverse()
def cleanup(self):
Mount.cleanup(self)
if self.disks:
self.__unmap_partitions()
for dev in self.disks.keys():
d = self.disks[dev]
try:
d['disk'].cleanup()
except:
pass
def unmount(self):
self.__unmount_subvolumes()
for mp in self.unmountOrder:
if mp == 'swap':
continue
p = None
for p1 in self.partitions:
if p1['mountpoint'] == mp:
p = p1
break
if p['mount'] != None:
try:
# Create subvolume snapshot here
if p['fstype'] == "btrfs" and p['mountpoint'] == "/" and not self.snapshot_created:
self.__create_subvolume_snapshots(p, p["mount"])
p['mount'].cleanup()
except:
pass
p['mount'] = None
# Only for btrfs
def __get_subvolume_id(self, rootpath, subvol):
if not self.btrfscmd:
self.btrfscmd=find_binary_path("btrfs")
argv = [ self.btrfscmd, "subvolume", "list", rootpath ]
rc, out = runner.runtool(argv)
msger.debug(out)
if rc != 0:
raise MountError("Failed to get subvolume id from %s', return code: %d." % (rootpath, rc))
subvolid = -1
for line in out.splitlines():
if line.endswith(" path %s" % subvol):
subvolid = line.split()[1]
if not subvolid.isdigit():
raise MountError("Invalid subvolume id: %s" % subvolid)
subvolid = int(subvolid)
break
return subvolid
def __create_subvolume_metadata(self, p, pdisk):
if len(self.subvolumes) == 0:
return
argv = [ self.btrfscmd, "subvolume", "list", pdisk.mountdir ]
rc, out = runner.runtool(argv)
msger.debug(out)
if rc != 0:
raise MountError("Failed to get subvolume id from %s', return code: %d." % (pdisk.mountdir, rc))
subvolid_items = out.splitlines()
subvolume_metadata = ""
for subvol in self.subvolumes:
for line in subvolid_items:
if line.endswith(" path %s" % subvol["subvol"]):
subvolid = line.split()[1]
if not subvolid.isdigit():
raise MountError("Invalid subvolume id: %s" % subvolid)
subvolid = int(subvolid)
opts = subvol["fsopts"].split(",")
for opt in opts:
if opt.strip().startswith("subvol="):
opts.remove(opt)
break
fsopts = ",".join(opts)
subvolume_metadata += "%d\t%s\t%s\t%s\n" % (subvolid, subvol["subvol"], subvol['mountpoint'], fsopts)
if subvolume_metadata:
fd = open("%s/.subvolume_metadata" % pdisk.mountdir, "w")
fd.write(subvolume_metadata)
fd.close()
def __get_subvolume_metadata(self, p, pdisk):
subvolume_metadata_file = "%s/.subvolume_metadata" % pdisk.mountdir
if not os.path.exists(subvolume_metadata_file):
return
fd = open(subvolume_metadata_file, "r")
content = fd.read()
fd.close()
for line in content.splitlines():
items = line.split("\t")
if items and len(items) == 4:
self.subvolumes.append({'size': 0, # In sectors
'mountpoint': items[2], # Mount relative to chroot
'fstype': "btrfs", # Filesystem type
'fsopts': items[3] + ",subvol=%s" % items[1], # Filesystem mount options
'disk_name': p['disk_name'], # physical disk name holding partition
'device': None, # kpartx device node for partition
'mount': None, # Mount object
'subvol': items[1], # Subvolume name
'boot': False, # Bootable flag
'mounted': False # Mount flag
})
def __create_subvolumes(self, p, pdisk):
""" Create all the subvolumes. """
for subvol in self.subvolumes:
argv = [ self.btrfscmd, "subvolume", "create", pdisk.mountdir + "/" + subvol["subvol"]]
rc = runner.show(argv)
if rc != 0:
raise MountError("Failed to create subvolume '%s', return code: %d." % (subvol["subvol"], rc))
# Set default subvolume, subvolume for "/" is default
subvol = None
for subvolume in self.subvolumes:
if subvolume["mountpoint"] == "/" and p['disk_name'] == subvolume['disk_name']:
subvol = subvolume
break
if subvol:
# Get default subvolume id
subvolid = self.__get_subvolume_id(pdisk.mountdir, subvol["subvol"])
# Set default subvolume
if subvolid != -1:
rc = runner.show([ self.btrfscmd, "subvolume", "set-default", "%d" % subvolid, pdisk.mountdir])
if rc != 0:
raise MountError("Failed to set default subvolume id: %d', return code: %d." % (subvolid, rc))
self.__create_subvolume_metadata(p, pdisk)
def __mount_subvolumes(self, p, pdisk):
if self.skipformat:
# Get subvolume info
self.__get_subvolume_metadata(p, pdisk)
# Set default mount options
if len(self.subvolumes) != 0:
for subvol in self.subvolumes:
if subvol["mountpoint"] == p["mountpoint"] == "/":
opts = subvol["fsopts"].split(",")
for opt in opts:
if opt.strip().startswith("subvol="):
opts.remove(opt)
break
pdisk.fsopts = ",".join(opts)
break
if len(self.subvolumes) == 0:
# Return directly if no subvolumes
return
# Remount to make default subvolume mounted
rc = runner.show([self.umountcmd, pdisk.mountdir])
if rc != 0:
raise MountError("Failed to umount %s" % pdisk.mountdir)
rc = runner.show([self.mountcmd, "-o", pdisk.fsopts, pdisk.disk.device, pdisk.mountdir])
if rc != 0:
raise MountError("Failed to umount %s" % pdisk.mountdir)
for subvol in self.subvolumes:
if subvol["mountpoint"] == "/":
continue
subvolid = self.__get_subvolume_id(pdisk.mountdir, subvol["subvol"])
if subvolid == -1:
msger.debug("WARNING: invalid subvolume %s" % subvol["subvol"])
continue
# Replace subvolume name with subvolume ID
opts = subvol["fsopts"].split(",")
for opt in opts:
if opt.strip().startswith("subvol="):
opts.remove(opt)
break
opts.extend(["subvolrootid=0", "subvol=%s" % subvol["subvol"]])
fsopts = ",".join(opts)
subvol['fsopts'] = fsopts
mountpoint = self.mountdir + subvol['mountpoint']
makedirs(mountpoint)
rc = runner.show([self.mountcmd, "-o", fsopts, pdisk.disk.device, mountpoint])
if rc != 0:
raise MountError("Failed to mount subvolume %s to %s" % (subvol["subvol"], mountpoint))
subvol["mounted"] = True
def __unmount_subvolumes(self):
""" It may be called multiple times, so we need to chekc if it is still mounted. """
for subvol in self.subvolumes:
if subvol["mountpoint"] == "/":
continue
if not subvol["mounted"]:
continue
mountpoint = self.mountdir + subvol['mountpoint']
rc = runner.show([self.umountcmd, mountpoint])
if rc != 0:
raise MountError("Failed to unmount subvolume %s from %s" % (subvol["subvol"], mountpoint))
subvol["mounted"] = False
def __create_subvolume_snapshots(self, p, pdisk):
import time
if self.snapshot_created:
return
# Remount with subvolid=0
rc = runner.show([self.umountcmd, pdisk.mountdir])
if rc != 0:
raise MountError("Failed to umount %s" % pdisk.mountdir)
if pdisk.fsopts:
mountopts = pdisk.fsopts + ",subvolid=0"
else:
mountopts = "subvolid=0"
rc = runner.show([self.mountcmd, "-o", mountopts, pdisk.disk.device, pdisk.mountdir])
if rc != 0:
raise MountError("Failed to umount %s" % pdisk.mountdir)
# Create all the subvolume snapshots
snapshotts = time.strftime("%Y%m%d-%H%M")
for subvol in self.subvolumes:
subvolpath = pdisk.mountdir + "/" + subvol["subvol"]
snapshotpath = subvolpath + "_%s-1" % snapshotts
rc = runner.show([ self.btrfscmd, "subvolume", "snapshot", subvolpath, snapshotpath ])
if rc != 0:
raise MountError("Failed to create subvolume snapshot '%s' for '%s', return code: %d." % (snapshotpath, subvolpath, rc))
self.snapshot_created = True
def mount(self):
for dev in self.disks.keys():
d = self.disks[dev]
d['disk'].create()
self.__format_disks()
self.__map_partitions()
self.__calculate_mountorder()
for mp in self.mountOrder:
p = None
for p1 in self.partitions:
if p1['mountpoint'] == mp:
p = p1
break
if not p['label']:
if p['mountpoint'] == "/":
p['label'] = 'platform'
else:
p['label'] = mp.split('/')[-1]
if mp == 'swap':
import uuid
p['uuid'] = str(uuid.uuid1())
runner.show([self.mkswap,
'-L', p['label'],
'-U', p['uuid'],
p['device']])
continue
rmmountdir = False
if p['mountpoint'] == "/":
rmmountdir = True
if p['fstype'] == "vfat" or p['fstype'] == "msdos":
myDiskMount = VfatDiskMount
elif p['fstype'] in ("ext2", "ext3", "ext4"):
myDiskMount = ExtDiskMount
elif p['fstype'] == "btrfs":
myDiskMount = BtrfsDiskMount
else:
raise MountError("Fail to support file system " + p['fstype'])
if p['fstype'] == "btrfs" and not p['fsopts']:
p['fsopts'] = "subvolid=0"
pdisk = myDiskMount(RawDisk(p['size'] * self.sector_size, p['device']),
self.mountdir + p['mountpoint'],
p['fstype'],
4096,
p['label'],
rmmountdir,
self.skipformat,
fsopts = p['fsopts'])
pdisk.mount(pdisk.fsopts)
if p['fstype'] == "btrfs" and p['mountpoint'] == "/":
if not self.skipformat:
self.__create_subvolumes(p, pdisk)
self.__mount_subvolumes(p, pdisk)
p['mount'] = pdisk
p['uuid'] = pdisk.uuid
def resparse(self, size = None):
# Can't re-sparse a disk image - too hard
pass
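The sector arithmetic used by `add_partition()` and `layout_partitions()` above can be sketched standalone. The helper names are illustrative, and unlike the code above this sketch skips the alignment advance when the offset already sits on a boundary:

```python
SECTOR_SIZE = 512  # bytes, matching the module constant above

def size_mb_to_sectors(size_mb, sector_size=SECTOR_SIZE):
    # The same MB -> sectors conversion add_partition() applies
    # before handing sizes to parted
    return size_mb * 1024 * 1024 // sector_size

def align_up(offset, align_kb, sector_size=SECTOR_SIZE):
    # Advance 'offset' (in sectors) to the next align_kb boundary
    align = align_kb * 1024 // sector_size
    rem = offset % align
    return offset if rem == 0 else offset + (align - rem)
```

With 512-byte sectors a 4K alignment is 8 sectors, so an offset of 1 sector is rounded up to 8 before the partition start is assigned.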


@@ -0,0 +1,183 @@
#!/usr/bin/python -tt
#
# Copyright (c) 2010, 2011 Intel, Inc.
#
# This program is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the Free
# Software Foundation; version 2 of the License
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
# for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc., 59
# Temple Place - Suite 330, Boston, MA 02111-1307, USA.
import os
import urlparse
_my_proxies = {}
_my_noproxy = None
_my_noproxy_list = []
def set_proxy_environ():
global _my_noproxy, _my_proxies
if not _my_proxies:
return
for key in _my_proxies.keys():
os.environ[key + "_proxy"] = _my_proxies[key]
if not _my_noproxy:
return
os.environ["no_proxy"] = _my_noproxy
def unset_proxy_environ():
for env in ('http_proxy',
'https_proxy',
'ftp_proxy',
'all_proxy'):
if env in os.environ:
del os.environ[env]
ENV=env.upper()
if ENV in os.environ:
del os.environ[ENV]
def _set_proxies(proxy = None, no_proxy = None):
"""Return a dictionary of scheme -> proxy server URL mappings.
"""
global _my_noproxy, _my_proxies
_my_proxies = {}
_my_noproxy = None
proxies = []
if proxy:
proxies.append(("http_proxy", proxy))
if no_proxy:
proxies.append(("no_proxy", no_proxy))
# Get proxy settings from environment if not provided
if not proxy and not no_proxy:
proxies = os.environ.items()
# Remove proxy env variables, urllib2 can't handle them correctly
unset_proxy_environ()
for name, value in proxies:
name = name.lower()
if value and name[-6:] == '_proxy':
if name[0:2] != "no":
_my_proxies[name[:-6]] = value
else:
_my_noproxy = value
def _ip_to_int(ip):
ipint=0
shift=24
for dec in ip.split("."):
ipint |= int(dec) << shift
shift -= 8
return ipint
def _int_to_ip(val):
ipaddr=""
shift=0
for i in range(4):
dec = val >> shift
dec &= 0xff
ipaddr = ".%d%s" % (dec, ipaddr)
shift += 8
return ipaddr[1:]
def _isip(host):
if host.replace(".", "").isdigit():
return True
return False
def _set_noproxy_list():
global _my_noproxy, _my_noproxy_list
_my_noproxy_list = []
if not _my_noproxy:
return
for item in _my_noproxy.split(","):
item = item.strip()
if not item:
continue
if item[0] != '.' and item.find("/") == -1:
# Plain host name - must match exactly
_my_noproxy_list.append({"match":0,"needle":item})
elif item[0] == '.':
# Domain suffix - must match the tail of the host name
_my_noproxy_list.append({"match":1,"needle":item})
elif item.find("/") > 3:
# IP/MASK - must match the masked head of the address
needle = item[0:item.find("/")].strip()
ip = _ip_to_int(needle)
netmask = 0
mask = item[item.find("/")+1:].strip()
if mask.isdigit():
netmask = int(mask)
netmask = ~((1<<(32-netmask)) - 1)
ip &= netmask
else:
shift=24
netmask=0
for dec in mask.split("."):
netmask |= int(dec) << shift
shift -= 8
ip &= netmask
_my_noproxy_list.append({"match":2,"needle":ip,"netmask":netmask})
def _isnoproxy(url):
(scheme, host, path, parm, query, frag) = urlparse.urlparse(url)
if '@' in host:
user_pass, host = host.split('@', 1)
if ':' in host:
host, port = host.split(':', 1)
hostisip = _isip(host)
for item in _my_noproxy_list:
if hostisip and item["match"] <= 1:
continue
if item["match"] == 2 and hostisip:
if (_ip_to_int(host) & item["netmask"]) == item["needle"]:
return True
if item["match"] == 0:
if host == item["needle"]:
return True
if item["match"] == 1:
if host.rfind(item["needle"]) > 0:
return True
return False
def set_proxies(proxy = None, no_proxy = None):
_set_proxies(proxy, no_proxy)
_set_noproxy_list()
set_proxy_environ()
def get_proxy_for(url):
if url.startswith('file:') or _isnoproxy(url):
return None
type = url[0:url.index(":")]
proxy = None
if _my_proxies.has_key(type):
proxy = _my_proxies[type]
elif _my_proxies.has_key("http"):
proxy = _my_proxies["http"]
else:
proxy = None
return proxy
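The IP/netmask branch of `_set_noproxy_list()` and `_isnoproxy()` boils down to masking both addresses with the same netmask and comparing. A standalone sketch (the names are illustrative, not part of mic):

```python
def ip_to_int(ip):
    # "10.0.0.1" -> 0x0A000001, equivalent to _ip_to_int() above
    val = 0
    for dec in ip.split('.'):
        val = (val << 8) | int(dec)
    return val

def in_network(host, network, prefix):
    # True if dotted-quad 'host' falls inside 'network'/'prefix'
    mask = ~((1 << (32 - prefix)) - 1) & 0xFFFFFFFF
    return (ip_to_int(host) & mask) == (ip_to_int(network) & mask)
```

This is the same test `_isnoproxy()` performs when a no_proxy entry was stored with `"match": 2`: the host address ANDed with the stored netmask is compared against the pre-masked network "needle".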


@@ -0,0 +1,600 @@
#!/usr/bin/python -tt
#
# Copyright (c) 2008, 2009, 2010, 2011 Intel, Inc.
#
# This program is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the Free
# Software Foundation; version 2 of the License
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
# for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc., 59
# Temple Place - Suite 330, Boston, MA 02111-1307, USA.
import os
import sys
import re
import rpm
from mic import msger
from mic.utils.errors import CreatorError
from mic.utils.proxy import get_proxy_for
from mic.utils import runner
class RPMInstallCallback:
""" Command line callback class for callbacks from the RPM library.
"""
def __init__(self, ts, output=1):
self.output = output
self.callbackfilehandles = {}
self.total_actions = 0
self.total_installed = 0
self.installed_pkg_names = []
self.total_removed = 0
self.mark = "+"
self.marks = 40
self.lastmsg = None
self.tsInfo = None # this needs to be set for anything else to work
self.ts = ts
self.filelog = False
self.logString = []
self.headmsg = "Installing"
def _dopkgtup(self, hdr):
tmpepoch = hdr['epoch']
if tmpepoch is None: epoch = '0'
else: epoch = str(tmpepoch)
return (hdr['name'], hdr['arch'], epoch, hdr['version'], hdr['release'])
def _makeHandle(self, hdr):
handle = '%s:%s.%s-%s-%s' % (hdr['epoch'], hdr['name'], hdr['version'],
hdr['release'], hdr['arch'])
return handle
def _localprint(self, msg):
if self.output:
msger.info(msg)
def _makefmt(self, percent, progress = True):
l = len(str(self.total_actions))
size = "%s.%s" % (l, l)
fmt_done = "[%" + size + "s/%" + size + "s]"
done = fmt_done % (self.total_installed + self.total_removed,
self.total_actions)
marks = self.marks - (2 * l)
width = "%s.%s" % (marks, marks)
fmt_bar = "%-" + width + "s"
if progress:
bar = fmt_bar % (self.mark * int(marks * (percent / 100.0)), )
fmt = "\r %-10.10s: %-20.20s " + bar + " " + done
else:
bar = fmt_bar % (self.mark * marks, )
fmt = " %-10.10s: %-20.20s " + bar + " " + done
return fmt
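_makefmt builds a fixed-width progress line: a [done/total] counter sized to the digit count of total_actions, plus a bar whose width shrinks to compensate so the whole line stays the same length. A standalone restatement of the same layout (function and argument names are illustrative, not the class's):

```python
def make_progress_line(done, total, percent, mark='+', marks=40):
    l = len(str(total))                       # digits needed for the counter
    counter = '[%*s/%*s]' % (l, done, l, total)
    width = marks - 2 * l                     # bar shrinks as counter widens
    bar = (mark * int(width * percent / 100.0)).ljust(width)
    return '\r %-10.10s: %-20.20s %s %s' % ('Installing', 'example-pkg',
                                            bar, counter)

line = make_progress_line(3, 10, 50)
print(repr(line))
```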
def _logPkgString(self, hdr):
"""return nice representation of the package for the log"""
(n,a,e,v,r) = self._dopkgtup(hdr)
if e == '0':
pkg = '%s.%s %s-%s' % (n, a, v, r)
else:
pkg = '%s.%s %s:%s-%s' % (n, a, e, v, r)
return pkg
def callback(self, what, bytes, total, h, user):
if what == rpm.RPMCALLBACK_TRANS_START:
if bytes == 6:
self.total_actions = total
elif what == rpm.RPMCALLBACK_TRANS_PROGRESS:
pass
elif what == rpm.RPMCALLBACK_TRANS_STOP:
pass
elif what == rpm.RPMCALLBACK_INST_OPEN_FILE:
self.lastmsg = None
hdr = None
if h is not None:
try:
hdr, rpmloc = h
except:
rpmloc = h
hdr = readRpmHeader(self.ts, h)
handle = self._makeHandle(hdr)
fd = os.open(rpmloc, os.O_RDONLY)
self.callbackfilehandles[handle]=fd
if hdr['name'] not in self.installed_pkg_names:
self.installed_pkg_names.append(hdr['name'])
self.total_installed += 1
return fd
else:
self._localprint("No header - huh?")
elif what == rpm.RPMCALLBACK_INST_CLOSE_FILE:
hdr = None
if h is not None:
try:
hdr, rpmloc = h
except:
rpmloc = h
hdr = readRpmHeader(self.ts, h)
handle = self._makeHandle(hdr)
os.close(self.callbackfilehandles[handle])
fd = 0
# log stuff
#pkgtup = self._dopkgtup(hdr)
self.logString.append(self._logPkgString(hdr))
elif what == rpm.RPMCALLBACK_INST_PROGRESS:
if h is not None:
percent = (self.total_installed*100L)/self.total_actions
if total > 0:
try:
hdr, rpmloc = h
except:
rpmloc = h
m = re.match("(.*)-(\d+.*)-(\d+\.\d+)\.(.+)\.rpm", os.path.basename(rpmloc))
if m:
pkgname = m.group(1)
else:
pkgname = os.path.basename(rpmloc)
if self.output:
fmt = self._makefmt(percent)
msg = fmt % (self.headmsg, pkgname)
if msg != self.lastmsg:
self.lastmsg = msg
msger.info(msg)
if self.total_installed == self.total_actions:
msger.raw('')
msger.verbose('\n'.join(self.logString))
elif what == rpm.RPMCALLBACK_UNINST_START:
pass
elif what == rpm.RPMCALLBACK_UNINST_PROGRESS:
pass
elif what == rpm.RPMCALLBACK_UNINST_STOP:
self.total_removed += 1
elif what == rpm.RPMCALLBACK_REPACKAGE_START:
pass
elif what == rpm.RPMCALLBACK_REPACKAGE_STOP:
pass
elif what == rpm.RPMCALLBACK_REPACKAGE_PROGRESS:
pass
def readRpmHeader(ts, filename):
""" Read an rpm header. """
fd = os.open(filename, os.O_RDONLY)
h = ts.hdrFromFdno(fd)
os.close(fd)
return h
def splitFilename(filename):
""" Pass in a standard style rpm fullname
Return a name, version, release, epoch, arch, e.g.::
foo-1.0-1.i386.rpm returns foo, 1.0, 1, '', i386
1:bar-9-123a.ia64.rpm returns bar, 9, 123a, 1, ia64
"""
if filename[-4:] == '.rpm':
filename = filename[:-4]
archIndex = filename.rfind('.')
arch = filename[archIndex+1:]
relIndex = filename[:archIndex].rfind('-')
rel = filename[relIndex+1:archIndex]
verIndex = filename[:relIndex].rfind('-')
ver = filename[verIndex+1:relIndex]
epochIndex = filename.find(':')
if epochIndex == -1:
epoch = ''
else:
epoch = filename[:epochIndex]
name = filename[epochIndex + 1:verIndex]
return name, ver, rel, epoch, arch
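splitFilename works purely by slicing from the right: the arch comes after the last dot, the release after the last dash before it, the version after the next dash, with an optional `epoch:` prefix up front. A Python 3 restatement of the same slicing (this mirrors, rather than imports, the function above):

```python
def split_rpm_filename(filename):
    # mirror of splitFilename's right-to-left slicing, restated for Python 3
    if filename.endswith('.rpm'):
        filename = filename[:-4]
    arch_idx = filename.rfind('.')
    arch = filename[arch_idx + 1:]
    rel_idx = filename[:arch_idx].rfind('-')
    rel = filename[rel_idx + 1:arch_idx]
    ver_idx = filename[:rel_idx].rfind('-')
    ver = filename[ver_idx + 1:rel_idx]
    epoch_idx = filename.find(':')
    epoch = filename[:epoch_idx] if epoch_idx != -1 else ''
    name = filename[epoch_idx + 1:ver_idx]
    return name, ver, rel, epoch, arch

print(split_rpm_filename('foo-1.0-1.i386.rpm'))      # ('foo', '1.0', '1', '', 'i386')
print(split_rpm_filename('1:bar-9-123a.ia64.rpm'))   # ('bar', '9', '123a', '1', 'ia64')
```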
def getCanonX86Arch(arch):
#
if arch == "i586":
f = open("/proc/cpuinfo", "r")
lines = f.readlines()
f.close()
for line in lines:
if line.startswith("model name") and line.find("Geode(TM)") != -1:
return "geode"
return arch
# only athlon vs i686 isn't handled with uname currently
if arch != "i686":
return arch
# if we're i686 and AuthenticAMD, then we should be an athlon
f = open("/proc/cpuinfo", "r")
lines = f.readlines()
f.close()
for line in lines:
if line.startswith("vendor") and line.find("AuthenticAMD") != -1:
return "athlon"
# i686 doesn't guarantee cmov, but we depend on it
elif line.startswith("flags") and line.find("cmov") == -1:
return "i586"
return arch
def getCanonX86_64Arch(arch):
if arch != "x86_64":
return arch
vendor = None
f = open("/proc/cpuinfo", "r")
lines = f.readlines()
f.close()
for line in lines:
if line.startswith("vendor_id"):
vendor = line.split(':')[1]
break
if vendor is None:
return arch
if vendor.find("Authentic AMD") != -1 or vendor.find("AuthenticAMD") != -1:
return "amd64"
if vendor.find("GenuineIntel") != -1:
return "ia32e"
return arch
def getCanonArch():
arch = os.uname()[4]
if (len(arch) == 4 and arch[0] == "i" and arch[2:4] == "86"):
return getCanonX86Arch(arch)
if arch == "x86_64":
return getCanonX86_64Arch(arch)
return arch
# Copy from libsatsolver:poolarch.c, with cleanup
archPolicies = {
"x86_64": "x86_64:i686:i586:i486:i386",
"i686": "i686:i586:i486:i386",
"i586": "i586:i486:i386",
"ia64": "ia64:i686:i586:i486:i386",
"armv7tnhl": "armv7tnhl:armv7thl:armv7nhl:armv7hl",
"armv7thl": "armv7thl:armv7hl",
"armv7nhl": "armv7nhl:armv7hl",
"armv7hl": "armv7hl",
"armv7l": "armv7l:armv6l:armv5tejl:armv5tel:armv5l:armv4tl:armv4l:armv3l",
"armv6l": "armv6l:armv5tejl:armv5tel:armv5l:armv4tl:armv4l:armv3l",
"armv5tejl": "armv5tejl:armv5tel:armv5l:armv4tl:armv4l:armv3l",
"armv5tel": "armv5tel:armv5l:armv4tl:armv4l:armv3l",
"armv5l": "armv5l:armv4tl:armv4l:armv3l",
}
# dict mapping arch -> ( multicompat, best personality, biarch personality )
multilibArches = {
"x86_64": ( "athlon", "x86_64", "athlon" ),
}
# from yumUtils.py
arches = {
# ia32
"athlon": "i686",
"i686": "i586",
"geode": "i586",
"i586": "i486",
"i486": "i386",
"i386": "noarch",
# amd64
"x86_64": "athlon",
"amd64": "x86_64",
"ia32e": "x86_64",
# arm
"armv7tnhl": "armv7nhl",
"armv7nhl": "armv7hl",
"armv7hl": "noarch",
"armv7l": "armv6l",
"armv6l": "armv5tejl",
"armv5tejl": "armv5tel",
"armv5tel": "noarch",
#itanium
"ia64": "noarch",
}
def isMultiLibArch(arch=None):
"""returns true if arch is a multilib arch, false if not"""
if arch is None:
arch = getCanonArch()
if not arches.has_key(arch): # or we could check if it is noarch
return False
if multilibArches.has_key(arch):
return True
if multilibArches.has_key(arches[arch]):
return True
return False
def getBaseArch():
myarch = getCanonArch()
if not arches.has_key(myarch):
return myarch
if isMultiLibArch(arch=myarch):
if multilibArches.has_key(myarch):
return myarch
else:
return arches[myarch]
if arches.has_key(myarch):
basearch = myarch
value = arches[basearch]
while value != 'noarch':
basearch = value
value = arches[basearch]
return basearch
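getBaseArch walks the `arches` compatibility chain downward until the parent would be `noarch`, so for example i686 → i586 → i486 → i386. A minimal sketch of that walk over an excerpt of the same table (note it omits the multilib special case above, which keeps x86_64 as its own base arch):

```python
# excerpt of the arches compatibility table from above
arch_parents = {
    "athlon": "i686", "i686": "i586", "geode": "i586",
    "i586": "i486", "i486": "i386", "i386": "noarch",
}

def base_arch(arch):
    # follow parent links until the next step would leave the family
    if arch not in arch_parents:
        return arch
    while arch_parents[arch] != 'noarch':
        arch = arch_parents[arch]
    return arch

print(base_arch('i686'))    # i386
print(base_arch('geode'))   # i386
```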
def checkRpmIntegrity(bin_rpm, package):
return runner.quiet([bin_rpm, "-K", "--nosignature", package])
def checkSig(ts, package):
""" Takes a transaction set and a package, check it's sigs,
return 0 if they are all fine
return 1 if the gpg key can't be found
return 2 if the header is in someway damaged
return 3 if the key is not trusted
return 4 if the pkg is not gpg or pgp signed
"""
value = 0
currentflags = ts.setVSFlags(0)
fdno = os.open(package, os.O_RDONLY)
try:
hdr = ts.hdrFromFdno(fdno)
except rpm.error, e:
if str(e) == "public key not availaiable":
value = 1
if str(e) == "public key not available":
value = 1
if str(e) == "public key not trusted":
value = 3
if str(e) == "error reading package header":
value = 2
else:
error, siginfo = getSigInfo(hdr)
if error == 101:
os.close(fdno)
del hdr
value = 4
else:
del hdr
try:
os.close(fdno)
except OSError:
pass
ts.setVSFlags(currentflags) # put things back like they were before
return value
def getSigInfo(hdr):
""" checks signature from an hdr hand back signature information and/or
an error code
"""
import locale
locale.setlocale(locale.LC_ALL, 'C')
string = '%|DSAHEADER?{%{DSAHEADER:pgpsig}}:{%|RSAHEADER?{%{RSAHEADER:pgpsig}}:{%|SIGGPG?{%{SIGGPG:pgpsig}}:{%|SIGPGP?{%{SIGPGP:pgpsig}}:{(none)}|}|}|}|'
siginfo = hdr.sprintf(string)
if siginfo != '(none)':
error = 0
sigtype, sigdate, sigid = siginfo.split(',')
else:
error = 101
sigtype = 'MD5'
sigdate = 'None'
sigid = 'None'
infotuple = (sigtype, sigdate, sigid)
return error, infotuple
def checkRepositoryEULA(name, repo):
""" This function is to check the EULA file if provided.
return True: no EULA or accepted
return False: user declined the EULA
"""
import tempfile
import shutil
import urlparse
import urllib2 as u2
import httplib
from mic.utils.errors import CreatorError
def _check_and_download_url(u2opener, url, savepath):
try:
if u2opener:
f = u2opener.open(url)
else:
f = u2.urlopen(url)
except u2.HTTPError, httperror:
if httperror.code in (404, 503):
return None
else:
raise CreatorError(httperror)
except OSError, oserr:
if oserr.errno == 2:
return None
else:
raise CreatorError(oserr)
except IOError, oserr:
if hasattr(oserr, "reason") and oserr.reason.errno == 2:
return None
else:
raise CreatorError(oserr)
except u2.URLError, err:
raise CreatorError(err)
except httplib.HTTPException, e:
raise CreatorError(e)
# save to file
licf = open(savepath, "w")
licf.write(f.read())
licf.close()
f.close()
return savepath
def _pager_file(savepath):
if os.path.splitext(savepath)[1].upper() in ('.HTM', '.HTML'):
pagers = ('w3m', 'links', 'lynx', 'less', 'more')
else:
pagers = ('less', 'more')
file_showed = False
for pager in pagers:
cmd = "%s %s" % (pager, savepath)
try:
os.system(cmd)
except OSError:
continue
else:
file_showed = True
break
if not file_showed:
f = open(savepath)
msger.raw(f.read())
f.close()
msger.pause()
# when proxy needed, make urllib2 follow it
proxy = repo.proxy
proxy_username = repo.proxy_username
proxy_password = repo.proxy_password
if not proxy:
proxy = get_proxy_for(repo.baseurl[0])
handlers = []
auth_handler = u2.HTTPBasicAuthHandler(u2.HTTPPasswordMgrWithDefaultRealm())
u2opener = None
if proxy:
if proxy_username:
proxy_netloc = urlparse.urlsplit(proxy).netloc
if proxy_password:
proxy_url = 'http://%s:%s@%s' % (proxy_username, proxy_password, proxy_netloc)
else:
proxy_url = 'http://%s@%s' % (proxy_username, proxy_netloc)
else:
proxy_url = proxy
proxy_support = u2.ProxyHandler({'http': proxy_url,
'https': proxy_url,
'ftp': proxy_url})
handlers.append(proxy_support)
# download all remote files to one temp dir
baseurl = None
repo_lic_dir = tempfile.mkdtemp(prefix = 'repolic')
for url in repo.baseurl:
tmphandlers = handlers[:]
(scheme, host, path, parm, query, frag) = urlparse.urlparse(url.rstrip('/') + '/')
if scheme not in ("http", "https", "ftp", "ftps", "file"):
raise CreatorError("Error: invalid url %s" % url)
if '@' in host:
try:
user_pass, host = host.split('@', 1)
if ':' in user_pass:
user, password = user_pass.split(':', 1)
except ValueError, e:
raise CreatorError('Bad URL: %s' % url)
msger.verbose("adding HTTP auth: %s, XXXXXXXX" %(user))
auth_handler.add_password(None, host, user, password)
tmphandlers.append(auth_handler)
url = scheme + "://" + host + path + parm + query + frag
if tmphandlers:
u2opener = u2.build_opener(*tmphandlers)
# try to download
repo_eula_url = urlparse.urljoin(url, "LICENSE.txt")
repo_eula_path = _check_and_download_url(
u2opener,
repo_eula_url,
os.path.join(repo_lic_dir, repo.id + '_LICENSE.txt'))
if repo_eula_path:
# found
baseurl = url
break
if not baseurl:
shutil.rmtree(repo_lic_dir) #cleanup
return True
# show the license file
msger.info('For the software packages in this yum repo:')
msger.info(' %s: %s' % (name, baseurl))
msger.info('There is an "End User License Agreement" file that needs to be checked.')
msger.info('Please read the terms and conditions outlined in it and answer the following questions.')
msger.pause()
_pager_file(repo_eula_path)
# Asking for the "Accept/Decline"
if not msger.ask('Would you agree to the terms and conditions outlined in the above End User License Agreement?'):
msger.warning('Will not install pkgs from this repo.')
shutil.rmtree(repo_lic_dir) #cleanup
return False
# try to find support_info.html for extra information
repo_info_url = urlparse.urljoin(baseurl, "support_info.html")
repo_info_path = _check_and_download_url(
u2opener,
repo_info_url,
os.path.join(repo_lic_dir, repo.id + '_support_info.html'))
if repo_info_path:
msger.info('There is one more file in the repo for additional support information, please read it')
msger.pause()
_pager_file(repo_info_path)
#cleanup
shutil.rmtree(repo_lic_dir)
return True


@@ -0,0 +1,109 @@
#!/usr/bin/python -tt
#
# Copyright (c) 2011 Intel, Inc.
#
# This program is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the Free
# Software Foundation; version 2 of the License
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
# or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
# for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc., 59
# Temple Place - Suite 330, Boston, MA 02111-1307, USA.
import os
import subprocess
from mic import msger
def runtool(cmdln_or_args, catch=1):
""" wrapper for most of the subprocess calls
input:
cmdln_or_args: can be both args and cmdln str (shell=True)
catch: 0, quietly run
1, only STDOUT
2, only STDERR
3, both STDOUT and STDERR
return:
(rc, output)
if catch==0: the output will always be None
"""
if catch not in (0, 1, 2, 3):
# invalid catch selection, will cause exception, that's good
return None
if isinstance(cmdln_or_args, list):
cmd = cmdln_or_args[0]
shell = False
else:
import shlex
cmd = shlex.split(cmdln_or_args)[0]
shell = True
if catch != 3:
dev_null = os.open("/dev/null", os.O_WRONLY)
if catch == 0:
sout = dev_null
serr = dev_null
elif catch == 1:
sout = subprocess.PIPE
serr = dev_null
elif catch == 2:
sout = dev_null
serr = subprocess.PIPE
elif catch == 3:
sout = subprocess.PIPE
serr = subprocess.STDOUT
try:
p = subprocess.Popen(cmdln_or_args, stdout=sout,
stderr=serr, shell=shell)
(sout, serr) = p.communicate()
# combine stdout and stderr, filter None out
out = ''.join(filter(None, [sout, serr]))
except OSError, e:
if e.errno == 2:
# [Errno 2] No such file or directory
msger.error('Cannot run command: %s, lost dependency?' % cmd)
else:
raise # relay
finally:
if catch != 3:
os.close(dev_null)
return (p.returncode, out)
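The `catch` flag selects which streams are captured: 0 discards both to /dev/null, 1 pipes stdout, 2 pipes stderr, 3 merges stderr into stdout. In Python 3 the same four modes map naturally onto `subprocess.run` stream arguments; a hedged sketch (`run_catch` is an illustrative name, not mic's runner API):

```python
import subprocess
import sys

def run_catch(args, catch=1):
    # map mic's catch modes onto subprocess.run stream settings
    streams = {
        0: dict(stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL),
        1: dict(stdout=subprocess.PIPE, stderr=subprocess.DEVNULL),
        2: dict(stdout=subprocess.DEVNULL, stderr=subprocess.PIPE),
        3: dict(stdout=subprocess.PIPE, stderr=subprocess.STDOUT),
    }[catch]
    p = subprocess.run(args, text=True, **streams)
    # combine whichever streams were captured, filtering out None
    out = ''.join(filter(None, [p.stdout, p.stderr]))
    return p.returncode, out

rc, out = run_catch([sys.executable, '-c', 'print("hello")'], catch=1)
print(rc, out.strip())
```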
def show(cmdln_or_args):
# show all the messages using msger.verbose
rc, out = runtool(cmdln_or_args, catch=3)
if isinstance(cmdln_or_args, list):
cmd = ' '.join(cmdln_or_args)
else:
cmd = cmdln_or_args
msg = 'running command: "%s"' % cmd
if out: out = out.strip()
if out:
msg += ', with output::'
msg += '\n +----------------'
for line in out.splitlines():
msg += '\n | %s' % line
msg += '\n +----------------'
msger.verbose(msg)
return rc
def outs(cmdln_or_args, catch=1):
# get the outputs of tools
return runtool(cmdln_or_args, catch)[1].strip()
def quiet(cmdln_or_args):
return runtool(cmdln_or_args, catch=0)[0]