María del Carmen Quintana Ponce
The main objects of study in this PhD thesis are rational matrices. A rational matrix R(z) is a matrix whose entries are quotients of polynomials in the scalar variable z, i.e., rational functions. Rational matrices have received a lot of attention since the 1950s, as a consequence of their fundamental role in linear systems and control theory.
Rational matrices can have poles and zeros and have rational right and left null spaces, which can be trivial, i.e., equal to {0}. Via the notion of the Smith-McMillan form, one can associate partial multiplicities to the poles and zeros, and via the notion of minimal polynomial bases for rational vector spaces, one can associate the so-called right and left minimal indices to the right and left null spaces. Nontrivial null spaces exist only when the rational matrix is singular, i.e., rectangular or square with identically zero determinant. All these quantities are among the most relevant structural data of a rational matrix.
Many classic problems in linear systems and control theory can be posed in terms of rational matrices and are related to the computation of their zeros and poles. Here, the key concept of polynomial system matrices of rational matrices, introduced by Rosenbrock in 1970, is fundamental. This notion allows us, among other things, to encode simultaneously all the information about the zeros and poles of a rational matrix in a polynomial matrix, provided certain minimality conditions are satisfied.
In the 1970s the first numerical algorithms for computing the structural data of rational matrices were developed. The most reliable of these algorithms were based on constructing a matrix pencil, i.e., a matrix polynomial of degree 1, containing the information about the structural data of the considered rational matrix. These pencils are among the first examples of linearizations of rational matrices and are, in fact, particular instances of minimal polynomial system matrices. Backward stable algorithms for computing the eigenvalues and/or other structural data of general pencils, also developed in the 1970s, were then applied to these matrix pencils. Nowadays, given a matrix pencil linearizing a rational matrix, one can apply to it those backward stable eigenvalue algorithms for problems of moderate size, or Krylov methods adapted to the structure of the pencil in the large-scale setting.
As we explained in the previous paragraph, the approach of constructing a linear polynomial matrix containing information about the structural data of rational matrices was first introduced in the 1970s. However, a formal definition of linearization of rational matrices was not given at that time.
Currently, the computation of the zeros of rational matrices also plays a fundamental role in the very active area of Nonlinear Eigenvalue Problems (NLEPs), either because they appear directly in rational eigenvalue problems (REPs) modeling real-life problems or because other NLEPs are approximated by REPs. Given a rational matrix R(z), the REP consists of finding scalars z_0 such that z_0 is not a pole of R(z), i.e., R(z_0) has finite entries, and there exist nonzero constant vectors x and y satisfying R(z_0)x=0 and y^{T}R(z_0)=0. The scalar z_0 is said to be an eigenvalue of R(z), and the vectors x and y are called, respectively, right and left eigenvectors associated with z_0. Thus, the problem of finding the eigenvalues of a rational matrix can also be seen as the problem of finding the zeros of R(z) that are not poles.
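As a toy illustration of these notions (the matrix below is a hypothetical example, not taken from the thesis), one can compute the zeros, poles and eigenvalues of a small regular rational matrix symbolically. For this simple example the zeros and poles can be read off the numerator and denominator of the determinant; in general, cancellations between entries can occur and the Smith-McMillan form is required.

```python
import sympy as sp

z = sp.symbols('z')
# Hypothetical 2x2 regular rational matrix (for illustration only):
R = sp.Matrix([[z - 2, 1],
               [0, (z - 3)/(z - 1)]])
# det R(z) = (z - 2)(z - 3)/(z - 1); for this simple regular example its
# numerator and denominator give the zeros and poles of R(z).
d = sp.cancel(R.det())
num, den = sp.fraction(sp.together(d))
zeros = sp.solve(num, z)             # zeros of R(z): [2, 3]
poles = sp.solve(den, z)             # poles of R(z): [1]
# Eigenvalues of the REP: the zeros of R(z) that are not poles.
eigs = sorted(s for s in zeros if s not in poles)
print(eigs)                          # [2, 3]
```

Here z = 1 is a pole, so R(z_0) has finite entries only at z_0 = 2 and z_0 = 3, which are the eigenvalues.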
The term linearization of a rational matrix was also used in works on solving REPs and NLEPs numerically without reference to a formal definition of linearization. A first formal definition of linearization of a rational matrix was proposed by Alam and Behera (2016). A different definition was then introduced by Amparan et al. (2018), together with the first formal definition of strong linearization, i.e., a pencil that allows one to recover both the finite and infinite pole and zero structure of the corresponding rational matrix. However, the pencils considered for linearizing NLEPs do not satisfy these earlier definitions of linearization. Motivated by this fact, we develop a more general theory of linearizations of rational matrices in this thesis. In particular, we introduce a definition of local linearization of rational matrices that unifies most of the linearizations for rational matrices in the literature. To this end, we present a rigorous local theory of linearizations of rational matrices based on local equivalences, and we extend the concept of Rosenbrock's minimal polynomial system matrices to a local scenario.
In addition to formal definitions, some works on linearizations of rational matrices have introduced families of strong linearizations that are constructed from the fact that any rational matrix can be written as the sum of its polynomial and strictly proper parts, and that the strictly proper part can be expressed via minimal state-space realizations. Thanks to this property, strong linearizations of rational matrices are constructed from strong linearizations of the polynomial part combined with minimal state-space realizations of the strictly proper part. In addition, the recovery properties of these families of linearizations have received considerable attention. Among the new classes of strong linearizations, we mention the family of "strong block minimal bases linearizations" of rational matrices introduced by Amparan et al. (2018), a wide family of strong linearizations constructed by considering "strong block minimal bases pencils" associated with their polynomial parts. They include as particular cases the Fiedler-like linearizations (modulo permutations) and are valid for general rectangular rational matrices. In this thesis we analyse the backward stability of running a backward stable algorithm for computing the eigenvalues of a particular type of strong block minimal bases linearizations of rational matrices, called "block Kronecker linearizations".
The question whether or not other strong linearizations of rational matrices can be constructed based on other kinds of strong linearizations of the polynomial parts arises naturally. To answer this question, we construct strong linearizations of a rational matrix by using strong linearizations of its polynomial part D(z) that belong to another important family of strong linearizations of polynomial matrices, the so-called vector spaces of linearizations. In particular, we consider strong linearizations of D(z) that belong to the ansatz spaces M_{1}(D) or M_{2}(D), developed by Fassbender and Saltenberger (2017), in which polynomial matrices are expressed in terms of polynomial bases other than the monomial basis. Another motivation for developing these results is that, when computing the eigenvalues of polynomial matrices from linearizations, it is known that for polynomial matrices of large degree the use of the monomial basis to express the matrix can lead to numerical instabilities due to the ill-conditioning of the eigenvalues in certain situations. This instability is expected to appear also when computing eigenvalues of REPs whose polynomial part has large degree and is expressed in terms of the monomial basis. For that reason, it is of interest to consider rational matrices with polynomial parts expressed in other bases, such as the Chebyshev basis. Combining the results obtained, we can construct very easily infinitely many strong linearizations of rational matrices via the following three-step strategy: (1) express the rational matrix as the sum of its polynomial and strictly proper parts; (2) construct any of the strong linearizations of the polynomial part known so far; and (3) combine adequately that strong linearization with a minimal state-space realization of the strictly proper part.
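The first step of this strategy, and the state-space realization needed in the third step, can be sketched symbolically in the scalar case. The data below are a toy example chosen for illustration, not taken from the thesis; the realization uses the standard controllable companion form.

```python
import sympy as sp

z = sp.symbols('z')
# Step (1): split a (scalar, toy) rational function into its polynomial
# and strictly proper parts by dividing the numerator by the denominator.
r = (z**3 + 2*z + 5)/(z**2 + 1)
num, den = sp.fraction(r)
q, rem = sp.div(num, den, z)         # quotient z, remainder z + 5
poly_part = q                        # polynomial part: z
sp_part = rem/den                    # strictly proper part: (z + 5)/(z**2 + 1)
assert sp.simplify(poly_part + sp_part - r) == 0

# Step (3) ingredient: a minimal state-space realization C (zI - A)^{-1} B of
# the strictly proper part, via the companion form of the denominator z**2 + 1.
A = sp.Matrix([[0, 1], [-1, 0]])     # companion matrix of z**2 + 1
B = sp.Matrix([[0], [1]])
C = sp.Matrix([[5, 1]])              # coefficients of the remainder 5 + z
realized = (C*(z*sp.eye(2) - A).inv()*B)[0, 0]
assert sp.simplify(realized - sp_part) == 0
```

Step (2), choosing a strong linearization of the polynomial part, then combines these two pieces as described above.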
As we explained in the previous paragraphs, there exist different methods for constructing linearizations of rational matrices when the corresponding rational matrix is expressed as the sum of its polynomial and strictly proper parts and a minimal state-space realization of the strictly proper part is available. Furthermore, if the rational matrix is not in that form, there exist procedures in the literature for obtaining such a representation. However, these procedures are not simple and may introduce errors that were not present in the original problem. Motivated by this fact, we also construct linearizations for rational matrices from more general representations. More precisely, we consider rational matrices R(z) expressed in the general form R(z)=D(z)+C(z)A(z)^{-1}B(z), where A(z), B(z), C(z) and D(z) are polynomial matrices with arbitrary degrees. For any rational matrix R(z), a representation of this type always exists and is not unique. Representations of this form arise naturally, for example, when solving REPs or in linear systems and control theory. The new linearizations are constructed from linearizations of the polynomial matrices D(z) and A(z), where each of them can be represented in terms of any polynomial basis. We emphasize that, in contrast to the construction of other families of linearizations for rational matrices, the construction of these linearizations requires neither writing the corresponding rational matrix R(z) as the sum of its polynomial and strictly proper parts nor expressing the strictly proper part in state-space form, steps which can introduce errors that were not in the original problem.
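To make the degree-one case of such a representation concrete, here is a numeric sketch with assumed scalar toy data (not from the thesis). For D(z) = D_0 + z D_1 and A(z) = zI - A, the linear system matrix S(z) = [[zI - A, B], [-C, D(z)]] has transfer function (Schur complement) R(z) = D(z) + C(zI - A)^{-1}B, and det S(z) = det(zI - A) det R(z), so under minimality the finite eigenvalues of the pencil are the zeros of R(z).

```python
import numpy as np

# Toy data (assumed for illustration): R(z) = D(z) + C (zI - A)^{-1} B with
#   D(z) = z - 3,  A = [[1]],  B = [[-1]],  C = [[1]],
# so R(z) = z - 3 - 1/(z - 1) = (z**2 - 4*z + 2)/(z - 1).
# The linear system matrix S(z) = [[zI - A, B], [-C, D(z)]] = S0 + z*S1:
S0 = np.array([[-1.0, -1.0],
               [-1.0, -3.0]])            # constant coefficient
S1 = np.eye(2)                           # coefficient of z
# Since S1 = I, det(S0 + z*S1) = 0 reduces to a standard eigenvalue problem:
eigs = np.sort(np.linalg.eigvals(-S0))
# eigs are the roots of z**2 - 4*z + 2, i.e. 2 -/+ sqrt(2): the zeros of R(z).
# Note that the pole z = 1 of R(z) is not an eigenvalue of the pencil.
```

For general S1 one would instead solve the generalized eigenvalue problem for the pair (S0, -S1), e.g. with scipy.linalg.eig.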
Despite the intense activity described in the previous paragraph, as we mentioned before, there are pencils used in influential references for solving numerically REPs that approximate NLEPs which do not satisfy the definitions of linearization of rational matrices given by Alam and Behera (2016) and Amparan et al. (2018). The reason is that these definitions focus on pencils that allow one to recover the complete pole and zero structure of rational matrices, while for linearizations of rational approximations of NLEPs only the eigenvalue information in a certain subset of the complex plane is needed. To explain their properties, we use our new theory of linearizations of rational matrices in a local sense. In general, this new theory of local linearizations captures and explains rigorously the properties of all the different pencils that have been constructed in the literature since the 1970s for computing zeros, poles and eigenvalues of rational matrices. Local linearizations are pencils associated with a rational matrix that preserve its structure of zeros and poles in subsets of any algebraically closed field, in the whole underlying field and also at infinity. In practice, one is often interested in studying the pole and zero structure of rational matrices in a particular region. For instance, this happens when a REP arises from approximating a NLEP, since the approximation is usually reliable only in a target region. As a consequence, the eigenvalues (those zeros that are not poles) of the approximating REP need to be computed only in that region. In this scenario, one can use local linearizations of the corresponding rational matrix that contain the information about the poles and zeros in the target region, but possibly not in the whole field.
In this thesis the theory of local linearizations is applied to a number of pencils that have appeared in influential papers on solving NLEPs numerically by combining rational approximations with linearizations of the resulting rational matrices. This theory allows us to view these pencils, and to explain their properties, from rather different perspectives. Apart from the new definition of local linearization, a specific family of local linearizations of rational matrices, called block full rank linearizations, is also introduced in this thesis as a template that covers many of the pencils available in the literature for linearizing rational matrices. We also study the properties of the linearizations for rational approximations of NLEPs by using the new theory of block full rank linearizations.
When a local linearization preserves the structure of zeros and poles in the whole underlying field and also at infinity, we say it is strong. This definition of strong linearization is different from the one introduced by Amparan et al. (2018). We pay special attention to a particular type of the strong linearizations for rational matrices defined in this thesis, which we call strongly minimal linearizations. Strongly minimal linearizations are linear polynomial system matrices, minimal in the whole underlying field and also at infinity, whose transfer function matrix is the desired rational matrix. We will see that the strong minimality conditions imply the strong irreducibility conditions of Verghese (1979), and that the former are easier to test. In addition, we will show that when the strong minimality conditions are not satisfied, the system matrix can be reduced to one where they are satisfied without modifying the corresponding transfer function matrix.
One important property of strongly minimal linearizations is that they can preserve many different structures of the original rational matrix without imposing any restriction on such a matrix. This result is in stark contrast with previous results in the literature on linearizations that preserve structures of rational matrices, which impose conditions on the corresponding rational matrix as a consequence of using other definitions of linearization. This is one of the most important results in this thesis.
It is known that structured polynomial and rational matrices have symmetries in their spectra, and these spectral symmetries reflect specific physical properties, as they usually originate from the physical symmetries of the underlying applications. Such special structures occur in numerous applications in engineering, mechanics, control, and linear systems theory. Some of the most common algebraic structures that appear in applications are the (skew-)symmetric and alternating structures. Symmetric (or Hermitian) matrix polynomials arise in the classical problem of vibration analysis, and alternating matrix polynomials find applications, for instance, in the study of corner singularities in anisotropic elastic materials, in the study of gyroscopic systems, in the continuous-time linear-quadratic optimal control problem and in the spectral factorization problem.
Because of the numerous applications where structured polynomial and rational matrices occur, there have been several attempts to construct linearizations for them that display the same structure. But these earlier attempts impose certain conditions on the corresponding polynomial or rational matrix for the construction of the linearization to apply, such as regularity, strict properness or invertibility of certain matrix coefficients. In this thesis we give a construction of structured linearizations for structured polynomial and rational matrices without imposing any conditions, by using the notion of strongly minimal linearization. Moreover, the proof used for this construction is different from those in earlier works, and we claim it to be simpler as well. As far as we know, this is the first time that such a completely restriction-free construction of structured linearizations for structured polynomial and rational matrices has been achieved.
All the results in this thesis are made possible, in part, by the new treatment of the minimality of polynomial system matrices at infinity that we introduce within the new theory of local linearizations.
Finally, in this thesis we study the backward stability of running a backward stable algorithm to compute the eigenvalues of a pencil that is a strong linearization of a rational matrix of block Kronecker type. We describe how to restore the structure of block Kronecker linearizations of rational matrices after they suffer sufficiently small perturbations. We then give sufficient conditions on the pencil and on the corresponding rational matrix that guarantee structural backward stability for (regular or singular) REPs solved via block Kronecker linearizations. In addition, we derive a scaling technique that allows one to guarantee structural backward stability. We conclude by presenting a number of numerical results illustrating our theoretical bounds. The results show that solving REPs numerically via block Kronecker linearizations is a backward stable process in a global sense, under certain conditions involving both the representation of the corresponding rational matrix and the choice of the linearization. An analysis of this type had not been developed before in the literature.