Projection operators. Linear operators in Euclidean space. Finding the matrix of a projection onto a plane

The matrix of a linear operator

Let A : V → W be a linear operator, and let both spaces V and W be finite-dimensional.

Fix arbitrary bases: e = (e_1, …, e_n) in V and f = (f_1, …, f_m) in W.

We pose the problem: for an arbitrary vector x ∈ V, compute the coordinates of the vector Ax in the basis f.

Writing the images of the basis vectors as the matrix-row (Ae_1, …, Ae_n), we obtain:

Ax = x_1·Ae_1 + … + x_n·Ae_n.

Note that this equality holds precisely because of the linearity of the operator.

Let us decompose the system of vectors Ae_1, …, Ae_n with respect to the basis f:

Ae_j = a_1j f_1 + … + a_mj f_m,   j = 1, …, n.

The columns of the matrix A = (a_ij) are the coordinate columns of the vectors Ae_j in the basis f.

Finally we obtain:

Y = A X, where X is the coordinate column of x in the basis e and Y is the coordinate column of Ax in the basis f.

Thus, to compute the coordinate column of the vector Ax in the chosen basis of the second space, it suffices to multiply the coordinate column of the vector x in the chosen basis of the first space by the matrix formed from the coordinate columns of the images of the basis vectors of the first space in the basis of the second space.

The matrix A is called the matrix of the linear operator in the given pair of bases.

The matrix of a linear operator is usually denoted by the same letter as the operator itself, but set upright rather than in italics. Sometimes we will write A = [A], most often omitting the bases (when this does not impair precision).
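The multiplication rule just stated can be sketched numerically. A minimal example (the matrix and vector below are invented for illustration; numpy is assumed available):

```python
import numpy as np

# Hypothetical operator A : V -> W with dim V = 3, dim W = 2.
# Column j holds the coordinates of A(e_j) in the basis f_1, f_2 of W.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 1.0]])

X = np.array([1.0, 1.0, 1.0])   # coordinate column of x in the basis e_1, e_2, e_3

# Coordinate column of Ax in the basis f_1, f_2: just the product A X.
Y = A @ X
print(Y)   # [3. 4.]
```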

For a linear transformation (the case V = W) we may speak of its matrix in a single basis.

As an example, let us consider the matrix of the projection operator from Section 1.7 (a transformation of the space of geometric vectors). As the basis we choose the standard basis i, j, k.

Thus the matrix of the operator of projection onto the plane of the vectors i, j, in the basis i, j, k, looks like this:

A = ( 1 0 0
      0 1 0
      0 0 0 ).

Note that if we had instead considered the projection operator as a map from the space of all geometric vectors onto the space of the geometric vectors lying in that plane, then, taking i, j, k and i, j as the pair of bases, we would have obtained the matrix:

A = ( 1 0 0
      0 1 0 ).

If we regard an arbitrary m×n matrix as a linear operator mapping the arithmetic space R^n into the arithmetic space R^m, and choose the canonical basis in each of these spaces, then the matrix of this linear operator in that pair of bases is exactly the matrix that defines the operator. In this case, then, the matrix and the linear operator can be identified (just as, when the canonical basis is chosen in an arithmetic vector space, a vector can be identified with its coordinate column in that basis).

It would be a crude error, however, to identify a vector as such, or a linear operator as such, with their representations in one basis or another (coordinate columns and matrices, respectively). Both the vector and the linear operator are geometric, invariant objects, defined independently of any basis. If, for instance, we have in mind a geometric vector, a directed segment, it is a completely invariant object: when we draw anything, we have no need whatsoever for bases, coordinate systems and the like, and we can operate with vectors purely geometrically. It is quite another matter that, for convenience of computation with vectors, we employ an algebraic apparatus, introducing coordinate systems, bases, and the associated algebraic technique of calculating with vectors.

Figuratively speaking, a vector, as a "naked" geometric object, "dresses up" in different coordinate garments depending on the choice of basis. A person can put on very different clothes, and his essence does not change, yet if the clothes do not suit the occasion (one does not go to the beach in a tailcoat), things go badly, and walking around naked is no option either. Likewise, a basis ill-suited to a particular problem can make even a purely geometric solution needlessly complicated.

It will be important in our course that a seemingly purely geometric problem, the classification of surfaces of the second order, is solved by the compact and beautiful apparatus of linear algebra.

The independence of a geometric object from the choice of basis underlies the applications of linear algebra, and the geometric vector is by no means the only geometric object. Thus, when we consider an arithmetic vector, it can be identified with the column of its coordinates in the canonical basis, since (see the first semester):

Now let us introduce another basis (check that it is indeed a basis!) and, using the transition matrix, recompute the coordinates of our vector:

We obtain a completely different column, but it represents the same arithmetic vector, just in a different basis.
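This "same vector, different column" effect is easy to reproduce numerically. A sketch (the basis below is made up for the example):

```python
import numpy as np

v = np.array([2.0, 3.0])            # coordinates in the canonical basis

# A hypothetical new basis: the columns of C are the new basis vectors
# written in the canonical basis (det(C) != 0, so it really is a basis).
C = np.array([[1.0, 1.0],
              [0.0, 1.0]])
assert np.linalg.det(C) != 0

# The new coordinates v_new satisfy v = C v_new, hence v_new = C^{-1} v.
v_new = np.linalg.solve(C, v)
print(v_new)        # [-1.  3.]  -- a different column...
print(C @ v_new)    # [ 2.  3.]  -- ...but the same vector
```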

What has been said about vectors extends to linear operators: what the coordinate column is to a vector, the matrix is to a linear operator.

Thus (to repeat once more), one must clearly distinguish between the invariant, geometric objects themselves, such as vectors and linear operators, and their representations in one basis or another (here, of course, we are speaking of finite-dimensional linear spaces).

Let us now take up the natural question of how the matrix of a linear operator transforms when we pass from one pair of bases to another.

Let e′ = eC, f′ = fD be a new pair of bases, where C and D are the corresponding transition matrices.

Then (denoting by A′ the matrix of the operator in the pair of primed bases) we obtain:

A e′ = f′ A′ = (f D) A′ = f (D A′).

But on the other hand,

A e′ = A (e C) = (f A) C = f (A C),

whence, by the uniqueness of the decomposition of a vector with respect to a basis,

D A′ = A C,   i.e.   A′ = D⁻¹ A C.

For a linear transformation (a single basis, so D = C) the formula looks simpler:

A′ = C⁻¹ A C.

Matrices related in this way are called similar.

It is easy to see that the determinants of similar matrices coincide: det A′ = det(C⁻¹) · det A · det C = det A.
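The invariance of the determinant under similarity can be checked numerically; a quick sketch with a random matrix and a random (almost surely invertible) transition matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))      # matrix of the operator in the old basis
C = rng.standard_normal((3, 3))      # transition matrix to the new basis
assert abs(np.linalg.det(C)) > 1e-12 # make sure C is invertible

A_prime = np.linalg.inv(C) @ A @ C   # matrix of the same operator in the new basis

same_det = np.isclose(np.linalg.det(A), np.linalg.det(A_prime))
print(same_det)    # True
```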

Let us now introduce the important notion of the rank of a linear operator.

By definition, this number is the dimension of the image of the operator:

rank A = dim Im A.

Let us prove the following important assertion:

Proposition 1.10. The rank of a linear operator equals the rank of its matrix, regardless of the choice of bases.

Proof. First of all, note that the image of a linear operator is the linear span of the system Ae_1, …, Ae_n, where e_1, …, e_n is a basis of the space V.

Indeed,

Ax = x_1·Ae_1 + … + x_n·Ae_n

for arbitrary numbers x_1, …, x_n, and this means that the image is precisely the linear span indicated.

The dimension of a linear span, as we know (Section 1.2), equals the rank of the generating system of vectors.

It was shown earlier (Section 1.3) that when a system of vectors is decomposed with respect to some basis, the system is linearly independent exactly when the corresponding coordinate columns are. One can make a stronger assertion (we omit the proof): the rank of the system equals the rank of the matrix of its coordinate columns. Moreover, this result does not depend on the choice of basis, since multiplying a matrix by a nonsingular transition matrix does not change its rank.

Since

A′ = D⁻¹ A C,

the ranks of similar matrices coincide, and the result does not depend on the choice of a particular basis.

The proof is complete.

For a linear transformation of any finite-dimensional linear space we may therefore introduce the notion of the determinant of the transformation, defined as the determinant of its matrix in an arbitrarily fixed basis: the matrices of a linear transformation in different bases are similar and hence have equal determinants.

Using the notion of the matrix of a linear operator, let us derive the following important relation: for any linear transformation A of an n-dimensional linear space,

dim Ker A + rank A = n.

Choose an arbitrary basis of the space. Then the kernel consists of exactly those vectors whose coordinate columns are solutions of the homogeneous system

A X = 0,   (1)

and a vector belongs to the kernel if and only if its coordinate column solves system (1).

In other words, the kernel is isomorphic to the solution space of system (1), so the dimensions of these spaces coincide. But the dimension of the solution space of system (1) equals, as we know, n minus the rank of the matrix. And we have just proved that the rank of the matrix equals the rank of the operator, which yields the relation.
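The rank plus kernel-dimension relation is easy to verify numerically; a sketch (the matrix is made up, and the singular-value threshold is a common numerical convention):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],    # = 2 * first row, so the rank drops
              [1.0, 0.0, 1.0]])
n = A.shape[1]

rank = np.linalg.matrix_rank(A)

# dim Ker A: dimension of the solution space of AX = 0,
# counted via the (near-)zero singular values of A.
s = np.linalg.svd(A, compute_uv=False)
null_dim = n - int(np.sum(s > 1e-10))

print(rank, null_dim)          # 2 1
print(rank + null_dim == n)    # True
```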

Dirac's bra- and ket-vectors are remarkable in that they allow one to write down products of several different kinds.

Multiplying a bra-vector by a ket-vector is called the scalar, or inner, product. In essence this is standard matrix multiplication by the "row times column" rule. The result is a complex number.

Multiplying a ket-vector by a ket-vector gives not a number but another ket-vector. It, too, is represented by a column vector, but its number of components equals the product of the dimensions of the original vectors. Such a product is called the tensor, or Kronecker, product.

The same holds for the product of two bra-vectors: we obtain a long row vector.

The remaining option is to multiply a ket-vector by a bra-vector. Here a column is multiplied by a row. Such a product is also called the tensor, or outer, product. The result is a matrix, that is, an operator.
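All three kinds of products can be sketched with numpy (a made-up two-component ket is used; in numpy the Hermitian conjugate is `.conj().T`):

```python
import numpy as np

ket = np.array([[1.0], [2.0j]])    # ket-vector: a column
bra = ket.conj().T                 # bra-vector: the conjugated row

inner = (bra @ ket)[0, 0]          # scalar (inner) product: a complex number
outer = ket @ bra                  # outer product: a 2x2 matrix (an operator)
tensor = np.kron(ket, ket)         # tensor (Kronecker) product: a 4-component ket

print(inner)                       # (5+0j)
print(outer.shape, tensor.shape)   # (2, 2) (4, 1)
```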

Let us look at an example of how such operators arise.

Take an arbitrary Hermitian operator A. By the postulates of quantum mechanics, it corresponds to some observable quantity. The eigenvectors of a Hermitian operator form a basis, so an arbitrary state vector can be expanded with respect to this basis, that is, represented as a sum of basis vectors with complex coefficients. This fact is known as the superposition principle. Rewriting the expansion with the summation sign:

|ψ⟩ = Σ_i c_i |i⟩.

The coefficients in the expansion of a vector over an orthonormal basis are the scalar products of the vector with the corresponding basis vectors: c_i = ⟨i|ψ⟩. Let us write this probability amplitude to the right of the ket-vector:

|ψ⟩ = Σ_i |i⟩⟨i|ψ⟩.

The expression under the summation sign can be read as multiplying the ket-vector |i⟩ by the complex number ⟨i|ψ⟩, the amplitude. On the other hand, it can be viewed as the product of the matrix |i⟩⟨i|, obtained by multiplying a ket-vector by a bra-vector, with the original ket-vector |ψ⟩. The ket-vector |ψ⟩ can then be taken outside the summation sign to the right:

|ψ⟩ = ( Σ_i |i⟩⟨i| ) |ψ⟩.

On the left and on the right of the equality sign stands the same vector ψ. This means that the sum as a whole does nothing to the vector: it is the identity matrix,

Σ_i |i⟩⟨i| = I.

This formula is extremely useful when manipulating expressions built from products of bra- and ket-vectors. Indeed, the identity can be inserted at any place in a product.

It is interesting to ask what the matrices entering the sum are: each is the tensor product of a basis ket-vector with its Hermitian conjugate. Once again, for clarity, let us draw an analogy with ordinary vectors in three-dimensional space.

Choose unit basis vectors e_x, e_y, e_z directed along the coordinate axes. The tensor product of the vector e_x with its conjugate is represented by the matrix

P_x = ( 1 0 0
        0 0 0
        0 0 0 ).

Take an arbitrary vector v. What happens when this matrix is multiplied by the vector? The matrix simply resets all components of the vector except x to zero. The result is a vector directed along the x axis, namely the projection of the original vector onto the basis vector e_x. It turns out that our matrix is nothing other than a projection operator.

The two remaining projection operators onto the basis vectors e_y and e_z are represented by analogous matrices and perform an analogous function: they zero out all but one component of the vector.

What does a sum of projection operators yield? Let us add, for example, the operators P_x and P_y. Such a matrix zeroes out the z-component of a vector, so the resulting vector always lies in the x-y plane. Thus we obtain the projection operator onto the x-y plane.

Now it is clear why the sum of all the projection operators onto the basis vectors equals the identity matrix: in that case we obtain the projection of a three-dimensional vector onto the whole three-dimensional space, and the identity matrix is essentially the operator projecting a vector onto itself.

The image of a projection operator is a subspace of the original space. In three-dimensional Euclidean space this can be a one-dimensional line, specified by a single vector, or a two-dimensional plane, specified by a pair of vectors.

Returning to quantum mechanics, with its state vectors in Hilbert space, we can say that projection operators single out a subspace and project a state vector onto this subspace of Hilbert space.

Let us list the main properties of projection operators.

  1. Applying the same projection operator twice in succession is equivalent to applying it once. This property is written as P² = P. Indeed, once the first operator has projected the vector onto the subspace, the second can do nothing more with it: the vector already lies in that subspace.
  2. Projection operators are Hermitian operators, as quantum mechanics requires of operators representing observables.
  3. The eigenvalues of projection operators, in any dimension, are just the numbers one and zero: a vector either lies in the subspace or it does not. Because of this binarity, the observable described by a projection operator can be formulated as a question admitting the answers "yes" or "no". For example: is the spin of the first electron in the singlet state directed along the z axis? This question corresponds to a certain projection operator, and quantum mechanics allows us to compute the probability of both the "yes" and the "no" outcome.
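These properties, together with the completeness of the basis projectors, can be verified directly for the coordinate projectors of the three-dimensional example (a numpy sketch):

```python
import numpy as np

# Projectors onto the coordinate axes:
Px = np.diag([1.0, 0.0, 0.0])
Py = np.diag([0.0, 1.0, 0.0])
Pz = np.diag([0.0, 0.0, 1.0])

# 1. Idempotence: P^2 = P
assert np.allclose(Px @ Px, Px)
# 2. Hermiticity (here: real symmetry): P = P^T
assert np.allclose(Px, Px.T)
# 3. The eigenvalues are only 0 and 1:
print(np.linalg.eigvalsh(Px))              # [0. 0. 1.]
# The sum of all projectors onto the basis vectors is the identity:
assert np.allclose(Px + Py + Pz, np.eye(3))
```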

Let us say a little more about projection operators.

1. Projection operators and idempotents of a ring

Let the vector space V be the direct sum of subspaces W and L: V = W ⊕ L. By the definition of a direct sum, every vector v ∈ V is uniquely representable as v = w + l, with w ∈ W and l ∈ L.

Definition 1. Given v = w + l, the map P that sends each vector v ∈ V to its component (projection) w ∈ W is called the projector of the space V onto the subspace W. It is also called the projection operator.

Obviously, if w ∈ W, then P(w) = w. From this it follows that the projector has the remarkable property P² = P.

Definition 2. An element e of a ring K is called an idempotent (that is, "like the identity") if e² = e.

The ring of integers has only two idempotents, 1 and 0. In matrix rings the situation is different: there are many idempotent matrices. The matrices of projection operators are idempotent, and for this reason projection operators are also called idempotent operators.

Let us now consider a decomposition of the space V into a direct sum of n subspaces:

V = W_1 ⊕ W_2 ⊕ … ⊕ W_n.

Then, by analogy with the direct sum of two subspaces, we obtain n projection operators P_1, …, P_n. They have the property P_i P_j = 0 for i ≠ j.

Definition 3. Idempotents e_i and e_j (i ≠ j) are called orthogonal if e_i e_j = e_j e_i = 0. Thus P_1, …, P_n are pairwise orthogonal idempotents.

From the fact that I v = v for every v ∈ V, and from the rules for adding linear operators, it follows that

P_1 + P_2 + … + P_n = I.

This decomposition is called a decomposition of the identity into a sum of idempotents.

Definition 4. An idempotent e is called minimal if it cannot be represented as a sum of idempotents different from 0.
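A numpy sketch of the direct-sum construction. The two subspaces below are deliberately not orthogonal to each other, so the resulting projector is idempotent without being symmetric (an "oblique" projector); the decomposition of the identity still holds:

```python
import numpy as np

# V = R^2 = W + L with W = span{(1,0)}, L = span{(1,1)} (not orthogonal!).
# The columns of B form a basis of V adapted to the direct sum.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# The projector onto W along L keeps the W-coordinate and kills the L-coordinate.
P = B @ np.diag([1.0, 0.0]) @ np.linalg.inv(B)
Q = np.eye(2) - P                    # complementary projector onto L along W

assert np.allclose(P @ P, P)                    # idempotent
assert np.allclose(P @ Q, np.zeros((2, 2)))     # orthogonal idempotents: PQ = 0
assert np.allclose(Q @ P, np.zeros((2, 2)))     # ... and QP = 0
assert np.allclose(P + Q, np.eye(2))            # decomposition of the identity
print(P)   # idempotent but not symmetric: an oblique projector
```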

2. Canonical decomposition of representations

Definition 5. The canonical decomposition of a representation T(g) is its decomposition of the form T(g) = n_1 T_1(g) + n_2 T_2(g) + … + n_t T_t(g), in which equivalent irreducible representations T_i(g) are grouped together, and n_i is the multiplicity with which the irreducible representation T_i(g) enters the decomposition of T(g).

Theorem 1. The canonical decomposition of a representation is found with the help of projection operators of the form

P_i = (m_i / |G|) · Σ_{g ∈ G} χ_i(g)* T(g),   i = 1, 2, …, t,   (31)

where |G| is the order of the group G; m_i is the degree of the representation T_i(g), i = 1, 2, …, t; and χ_i(g), i = 1, 2, …, t, are the characters of the irreducible representations T_i(g). The multiplicity n_i is determined by the formula

3. Projection operators associated with the matrices of irreducible representations of groups

With the help of the operators (31) one can decompose representations that are not yet in canonical form. In general, however, it is necessary first to compute the matrices of the irreducible representations, which make it possible to construct finer projection operators.

Theorem 2. Let t_jk^(r)(g) be the matrix elements of the irreducible representation T_r(g) of the group G. Then the operator

P_jk^(r) = (m_r / |G|) · Σ_{g ∈ G} t_jk^(r)(g)* T(g)   (33)

is a projection operator, called the Wigner operator. In expression (33), m_r is the degree of the representation T_r(g).

4. Decomposition of a representation into a direct sum of irreducible representations with the help of the Wigner operator

Denote by M the module associated with the representation T. Let the irreducible representations T_1, T_2, …, T_t of the canonical decomposition of the representation be found by the method described earlier (Section 4), and let M_1, M_2, …, M_t be the corresponding irreducible submodules. The decomposition of the module M of the form

M = n_1 M_1 ⊕ n_2 M_2 ⊕ … ⊕ n_t M_t

is called the canonical decomposition of the module M. Denote n_i M_i = L_i, so that

M = L_1 ⊕ L_2 ⊕ … ⊕ L_t.

The irreducible submodules of the modules L_i will be denoted M_i^(s):

L_i = M_i^(1) ⊕ M_i^(2) ⊕ … ⊕ M_i^(n_i);   i = 1, 2, …, t.   (36)

We must find these modules.

Suppose this has been done. Then for each submodule M_i^(s) (s = 1, 2, …, n_i) an orthonormal basis has been found in which the operators of the representation are given by the matrices T_i(g) of the irreducible representation T_i; applying (following the rule of § 3) a representation operator to a basis element gives the formula

T(g) e_ij^(s) = Σ_k t_kj^(i)(g) e_ik^(s),   j = 1, 2, …, m_i.   (37)

Recall that m_i is the dimension of the irreducible representation T_i (i = 1, 2, …, t), and that the basis elements carrying the superscript s belong to the irreducible submodule M_i^(s). We can now arrange the basis elements of L_i (with i fixed) in the following order:

e_i1^(1), e_i2^(1), …, e_im_i^(1)
e_i1^(2), e_i2^(2), …, e_im_i^(2)
…
e_i1^(n_i), e_i2^(n_i), …, e_im_i^(n_i)   (38)

In the rows of table (38) stand the bases of the modules M_i^(1), M_i^(2), …, M_i^(n_i). As i runs from 1 to t, we obtain a basis of the entire module M, consisting of m_1 n_1 + m_2 n_2 + … + m_t n_t elements.

Let us now consider the operator

P_jj^(i) = (m_i / |G|) · Σ_{g ∈ G} t_jj^(i)(g)* T(g)   (39)

acting on the module M (j fixed). By Theorem 2 it is a projection operator. Hence this operator leaves unchanged all the basis elements (s = 1, 2, …, n_i) standing in the j-th column of table (38), and sends all the other basis vectors to zero. Denote by M_ij the vector space spanned by the orthogonal system of vectors standing in the j-th column of table (38). We can then say that the operator projects onto the space M_ij. This operator is known, since the diagonal matrix elements of the irreducible representations of the group are known, as is the operator T(g).

Now we can complete our task.

Choose n_i arbitrary basis vectors of M and apply the projection operator (39) to them. The resulting vectors lie in the space M_ij and are linearly independent, though not necessarily orthogonal and normalized. We orthonormalize this system of vectors according to the rule of § 2 and denote the resulting system by e_ij^(s). As noted, here j is fixed and s = 1, 2, …, n_i. Denote by e_if^(s) (f = 1, 2, …, j−1, j+1, …, m_i) the remaining elements of the basis of the module M_i of dimension n_i m_i. Introduce the following operator:

P_fj^(i) = (m_i / |G|) · Σ_{g ∈ G} t_fj^(i)(g)* T(g).   (40)

The orthogonality relations for the matrices of irreducible representations show that this operator makes it possible to obtain the elements e_if^(s) by the formula

e_if^(s) = P_fj^(i) e_ij^(s),   i = 1, 2, …, t.   (41)

All of the above can be organized into an algorithm.

To find a basis of the module M consisting of elements that transform according to the irreducible representations T_i contained in the representation T associated with the module M, one must:

1. Using formula (32), compute the dimensions of the subspaces M_ij corresponding to the j-th components of the irreducible representation T_i.

2. Find all the subspaces M_ij with the help of the projection operator (39).

3. Choose an arbitrary orthonormal basis in each subspace M_ij.

4. Using formula (41), find all the elements of the basis that transform according to the remaining components of the irreducible representation T_i.
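The first ingredient of the algorithm, the character projectors (31), can be sketched for a small concrete case: the permutation representation of the group S3 on R^3. The character table used below is the standard one for S3 (all characters real); the code exercises only formula (31), not the finer operators (39) and (41):

```python
import numpy as np
from itertools import permutations

# Permutation representation T of S3 on R^3: T(g) permutes the coordinates.
group = list(permutations(range(3)))           # all 6 elements of S3

def T(g):
    M = np.zeros((3, 3))
    for i, gi in enumerate(g):
        M[gi, i] = 1.0                         # T(g) sends e_i to e_{g(i)}
    return M

def fixed_points(g):
    return sum(1 for i, gi in enumerate(g) if gi == i)

# Character table of S3, indexed by the number of fixed points (3: identity,
# 1: transposition, 0: 3-cycle); columns: trivial, sign, 2-dim standard irrep.
chars = {3: (1, 1, 2), 1: (1, -1, 0), 0: (1, 1, -1)}
degrees = (1, 1, 2)

def projector(i):
    # Formula (31): P_i = (m_i / |G|) * sum_g chi_i(g)* T(g); characters are real here.
    return degrees[i] / len(group) * sum(chars[fixed_points(g)][i] * T(g) for g in group)

P = [projector(i) for i in range(3)]
for Pi in P:
    assert np.allclose(Pi @ Pi, Pi)                 # each P_i is idempotent
assert np.allclose(P[0] + P[1] + P[2], np.eye(3))   # decomposition of the identity
print([int(round(np.trace(Pi))) for Pi in P])       # [1, 0, 2]
```

The traces show that the permutation representation contains the trivial representation once, the sign representation not at all, and the two-dimensional standard representation once.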

Let a linear operator A act in the Euclidean space E^n, transforming this space into itself.

We introduce a definition: the operator A* is called adjoint to the operator A if for any two vectors x, y ∈ E^n the following equality of scalar products holds:

(Ax,y) = (x,A*y)

Another definition: a linear operator is called self-adjoint if it coincides with its adjoint operator, that is, if the following equality holds:

(Ax,y) = (x,Ay)

or, in particular, (Ax,x) = (x,Ax).

A self-adjoint operator has a number of properties. Let us recall some of them:

    The eigenvalues of a self-adjoint operator are real (we omit the proof);

    The eigenvectors of a self-adjoint operator corresponding to distinct eigenvalues are orthogonal. Indeed, if x_1 and x_2 are eigenvectors and λ_1, λ_2 are their eigenvalues, then Ax_1 = λ_1 x_1; Ax_2 = λ_2 x_2; (Ax_1,x_2) = (x_1,Ax_2), or λ_1(x_1,x_2) = λ_2(x_1,x_2). Since λ_1 and λ_2 are distinct, it follows that (x_1,x_2) = 0, which was to be proved.

    In Euclidean space there exists an orthonormal basis consisting of eigenvectors of the self-adjoint operator A. Consequently, the matrix of a self-adjoint operator reduces to diagonal form in the orthonormal basis composed of its eigenvectors.

One more definition: a self-adjoint operator acting in Euclidean space is called a symmetric operator. Let us examine the matrix of a symmetric operator. We will prove the following assertion: for an operator to be symmetric, it is necessary and sufficient that its matrix in an orthonormal basis be a symmetric matrix.

Let A be a symmetric operator; then:

(Ax,y) = (x,Ay)

If A is the matrix of the operator A, and x and y are some vectors, then we write:

X and Y are the coordinate columns of x and y in some orthonormal basis.

Then (x,y) = X^T Y = Y^T X, and we have (Ax,y) = (AX)^T Y = X^T A^T Y,

(x,Ay) = X^T (AY) = X^T A Y,

that is, X^T A^T Y = X^T A Y. Since the columns X and Y are arbitrary, this equality is possible only when A^T = A, which means that the matrix A is symmetric.
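A numerical sketch of both the criterion and the diagonalization property from the list above (the matrix is made up; `eigh` is numpy's routine for symmetric/Hermitian matrices):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # symmetric: A^T = A

# The defining identity (Ax, y) = (x, Ay) in an orthonormal basis:
x = np.array([1.0, -2.0])
y = np.array([4.0, 0.5])
assert np.isclose((A @ x) @ y, x @ (A @ y))

# An orthonormal eigenbasis exists and diagonalizes A:
w, Q = np.linalg.eigh(A)            # real eigenvalues, orthonormal eigenvectors
assert np.allclose(Q.T @ Q, np.eye(2))       # the eigenbasis is orthonormal
assert np.allclose(Q.T @ A @ Q, np.diag(w))  # diagonal form
print(np.isreal(w).all())           # True
```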

Let us now look at examples of the action of linear operators.

The projection operator. Let us find the matrix of the linear operator that projects three-dimensional space onto the coordinate axis of e 1, in the basis e 1, e 2, e 3. The matrix of a linear operator is the matrix whose columns are the images of the basis vectors e 1 = (1,0,0), e 2 = (0,1,0), e 3 = (0,0,1). These images are obviously: Ae 1 = (1,0,0)

Ae 2 = (0,0,0)

Ae 3 = (0,0,0)

Thus, in the basis e 1, e 2, e 3, the matrix of the sought linear operator is:

A = ( 1 0 0
      0 0 0
      0 0 0 ).

Let us find the kernel of this operator. By definition, the kernel is the set of vectors X for which AX = 0, that is,

x 1 = 0.

So the kernel of the operator is the set of vectors lying in the plane of e 2, e 3. The dimension of the kernel equals n − rank A = 2.

The image of this operator is obviously the set of vectors collinear with e 1. The dimension of the image space equals the rank of the linear operator; here it equals 1, which is less than the dimension of the space of preimages. Hence the operator A is degenerate, and the matrix A is singular as well.
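The kernel and image computed above can be confirmed numerically:

```python
import numpy as np

A = np.diag([1.0, 0.0, 0.0])        # matrix of the projection onto the e1 axis

rank = np.linalg.matrix_rank(A)     # dimension of the image
print(rank)                          # 1
degenerate = np.isclose(np.linalg.det(A), 0.0)
print(degenerate)                    # True: the operator is degenerate

# Any vector of the form (0, x2, x3) lies in the kernel:
x = np.array([0.0, 5.0, -2.0])
print(A @ x)                         # the zero vector
```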

Another example: let us find the matrix of the linear operator that performs, in the space V 3 (basis i, j, k), the linear transformation of symmetry about the origin of coordinates.

We have: Ai = -i, Aj = -j, Ak = -k.

That is, the sought matrix is

A = ( -1  0  0
       0 -1  0
       0  0 -1 ).

Let us also consider the linear transformation of symmetry about the plane y = x. Here

Ai = j = (0,1,0)

Aj = i = (1,0,0)

Ak = k = (0,0,1)

The matrix of the operator will be:

A = ( 0 1 0
      1 0 0
      0 0 1 ).

Another example is the already familiar matrix relating the coordinates of a vector under a rotation of the coordinate axes. Let us call the operator that performs a rotation of the coordinate axes the rotation operator. Suppose the axes are rotated through an angle φ:

Ai = cos φ · i + sin φ · j

Aj = -sin φ · i + cos φ · j

The matrix of the rotation operator:

A = ( cos φ  -sin φ
      sin φ   cos φ ).

These are the formulas for transforming the coordinates of a point under a rotation of the basis, i.e., the change of coordinates in the plane under a change of basis:

x = x′ cos φ − y′ sin φ,   y = x′ sin φ + y′ cos φ.

These formulas can be viewed in two ways. Earlier we interpreted them as saying that the point stays in place while the coordinate system rotates. They can equally be interpreted as saying that the coordinate system remains unchanged while the point moves from position M* to position M, the coordinates of M and M* being referred to the same coordinate system.

All of the above leads to a problem faced by programmers who work with computer graphics. On the screen, a certain plane figure (for example, a triangle) must be rotated about a point O with coordinates (a, b) through some angle φ. A rotation of coordinates is described by the formulas:

x′ = x cos φ − y sin φ,   y′ = x sin φ + y cos φ.

A parallel translation, in turn, is given by:

x′ = x + a,   y′ = y + b.

To solve this problem a simple device is used: introduce homogeneous coordinates of a point in the XOY plane: (x, y, 1). Then the matrix performing a parallel translation can be written as:

T = ( 1 0 a
      0 1 b
      0 0 1 ).

Indeed:

T · (x, y, 1)^T = (x + a, y + b, 1)^T.

And the rotation matrix:

R = ( cos φ  -sin φ  0
      sin φ   cos φ  0
      0       0      1 ).

The sought transformation can be carried out in three steps:

Step 1: a parallel translation by the vector A(-a, -b), moving the center of rotation to the origin of coordinates:

( 1 0 -a
  0 1 -b
  0 0  1 ).

Step 2: a rotation through the angle φ:

( cos φ  -sin φ  0
  sin φ   cos φ  0
  0       0      1 ).

Step 3: a parallel translation by the vector A(a, b), returning the center of rotation to its original position:

( 1 0 a
  0 1 b
  0 0 1 ).

The sought linear transformation in matrix form is the product of these three matrices, applied from right to left:

(**)
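The three-step matrix (**) can be assembled and tested in a few lines (a numpy sketch; the helper name `rotate_about` is invented for the example):

```python
import numpy as np

def rotate_about(point, a, b, phi):
    """Rotate a plane point about (a, b) through the angle phi, using
    homogeneous coordinates (x, y, 1) and the three-step matrix (**)."""
    T_minus = np.array([[1, 0, -a], [0, 1, -b], [0, 0, 1]], dtype=float)  # step 1
    R = np.array([[np.cos(phi), -np.sin(phi), 0],
                  [np.sin(phi),  np.cos(phi), 0],
                  [0,            0,           1]])                         # step 2
    T_plus = np.array([[1, 0, a], [0, 1, b], [0, 0, 1]], dtype=float)      # step 3
    M = T_plus @ R @ T_minus                   # the composite matrix (**)
    x, y, _ = M @ np.array([point[0], point[1], 1.0])
    return x, y

# A quarter-turn about (1, 1) takes the point (2, 1) to (1, 2):
print(rotate_about((2, 1), 1, 1, np.pi / 2))
```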