Homework 2

Due Oct 11th at 11:59pm. Submit on Canvas at https://canvas.uw.edu/courses/1828831/assignments/10766564.

Problem 1

Let \(A\) and \(B\) be real \(n\) by \(n\) matrices. Suppose both are nonsingular.

a) Show that \(AB\) is nonsingular.

b) Find an example where \(A+B\) is singular.

Solution.

a) There are many ways to do this. One could use the identity \(\det(AB)=\det(A)\det(B)\). Alternatively, suppose \(AB\) is singular. Then there exists some nonzero vector \(u\) such that \(ABu=0\). But \(Bu\) cannot be \(0\) since \(B\) is nonsingular, and then \(A(Bu)\) cannot be \(0\) since \(A\) is nonsingular, giving a contradiction.

b) Take \(B=-A\).
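Both parts are easy to sanity-check numerically. A quick NumPy sketch (the matrices below are hypothetical examples; any nonsingular pair works):

```python
import numpy as np

# Hypothetical example matrices; any nonsingular pair works.
A = np.array([[2.0, 1.0], [0.0, 3.0]])
B = np.array([[1.0, -1.0], [2.0, 0.0]])

# a) det(AB) = det(A) det(B) = 6 * 2 = 12, nonzero, so AB is nonsingular.
print(np.linalg.det(A @ B))

# b) Taking B = -A makes A + B the zero matrix, which is singular.
print(np.linalg.det(A + (-A)))
```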

Problem 2

We say a real symmetric matrix \(A\) is positive semidefinite if \(x^T A x \geq 0\) for all \(x\).

a) Find a matrix that is positive semidefinite but not positive definite.

b) Show that for an arbitrary matrix \(B\), the matrix \(B^T B\) is positive semidefinite.

Solution.

a) Any positive semidefinite matrix that is not full rank works. For example, \(\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}\). Note that \(\begin{bmatrix} 1 \\ -1 \end{bmatrix}\) is in the null space, so the quadratic form is \(0\) there and the matrix is not positive definite.

b) Let \(u\) be some arbitrary vector. We have \(u^T (B^T B) u= (Bu)^T Bu = \Vert Bu \Vert^2 \geq 0\).
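Both parts can be checked with NumPy by inspecting eigenvalues (the rectangular matrix below is a hypothetical example):

```python
import numpy as np

# An arbitrary rectangular matrix (hypothetical example values).
B = np.array([[1.0, 2.0], [0.0, 1.0], [3.0, -1.0]])

# All eigenvalues of the symmetric matrix B^T B are nonnegative.
print(np.linalg.eigvalsh(B.T @ B))

# The matrix from part a) is PSD but singular: eigenvalues 0 and 2.
M = np.array([[1.0, 1.0], [1.0, 1.0]])
print(np.linalg.eigvalsh(M))
```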

Problem 3

An extremely useful fact about symmetric matrices is that their eigenvectors always form an orthogonal basis. That is, if an \(n\) by \(n\) real matrix \(A\) is symmetric, it has \(n\) eigenvectors \(u_1, \dots, u_n\) with \(u_i^T u_j =0\) whenever \(i \neq j\). This holds for complex matrices if we replace symmetric by Hermitian. We won’t prove this statement here; it can also be generalized significantly to include infinite dimensional operators. In the following problem, we look at how it can be used.

a) Show that all eigenvalues of a positive definite matrix are positive and all eigenvalues of a positive semidefinite matrix are nonnegative.

b) Let \(A\) be a symmetric matrix. What is the largest \(\lambda\) such that \(A - \lambda I \succ 0\)? Recall that \(\succ 0\) means positive definite and \(I\) is the identity matrix.

Solution.

a) We will do this for positive definite matrices; the positive semidefinite case follows similarly. Suppose an eigenvalue \(\lambda\) satisfies \(\lambda \leq 0\), and let \(u\) be an associated eigenvector (so \(u \neq 0\)). We have \(u^T A u=\lambda \Vert u \Vert^2 \leq 0\), which contradicts \(A\) being positive definite. For the semidefinite case, \(\lambda < 0\) would give \(u^T A u < 0\).

b) Note any eigenvector of \(A\) is an eigenvector of \(\lambda I\) with eigenvalue \(\lambda\). If the eigenvalues of \(A\) are \(\mu_1,\dots,\mu_n\), ordered as \(\mu_1 \geq \mu_2 \geq \dots \geq \mu_n\), the eigenvalues of \(A-\lambda I\) are \(\mu_1-\lambda, \dots, \mu_n -\lambda\). To have \(A - \lambda I \succ 0\), we need \(\mu_1-\lambda, \dots, \mu_n -\lambda\) to all be positive, i.e. \(\lambda < \mu_n\). So the largest \(\lambda\) is \(\mu_n\), the smallest eigenvalue of \(A\) (strictly speaking, \(A-\mu_n I\) is only positive semidefinite, so \(\mu_n\) is the supremum of the admissible \(\lambda\)).
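The eigenvalue-shift argument is easy to see numerically (the symmetric matrix below is a hypothetical example):

```python
import numpy as np

# Hypothetical symmetric test matrix.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
mu = np.linalg.eigvalsh(A)   # eigenvalues, sorted ascending

# Subtracting lambda * I shifts every eigenvalue down by lambda.
lam = 0.5
print(np.linalg.eigvalsh(A - lam * np.eye(2)))  # equals mu - lam

# At lambda = mu_min, the smallest eigenvalue of A - lambda I becomes 0,
# so A - lambda I is only positive semidefinite, not positive definite.
print(np.linalg.eigvalsh(A - mu[0] * np.eye(2))[0])
```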

Problem 4

The notion of positive definiteness is usually defined for symmetric matrices. We can extend it to nonsymmetric square matrices using the same definition as before: given an \(n\) by \(n\) matrix \(A\) (not necessarily symmetric), we say \(A\) is positive definite if \(x^T A x >0\) for all nonzero vectors \(x\).

a) Show that \(A\) is positive definite if and only if \(A+A^T\) is positive definite.

b) Suppose \(A\) and \(B\) are symmetric positive definite matrices. Is \(AB\) positive semidefinite?

Solution.

a) This follows from the fact that \(x^T A x = (x^T A x)^T = x^T A^T x\) (a \(1 \times 1\) matrix equals its own transpose), hence \(x^T A x = x^T \frac{A+A^T}{2} x\) for every \(x\), so one quadratic form is positive for all nonzero \(x\) exactly when the other is.

b) No. For example, take \(A=\begin{bmatrix} 5 & 2 \\ 2 & 1 \end{bmatrix}\) and \(B=\begin{bmatrix} 2 & -1 \\ -1 & 2 \end{bmatrix}\).
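The counterexample can be verified by checking the symmetric part of \(AB\), which governs its quadratic form and turns out to have a negative eigenvalue:

```python
import numpy as np

A = np.array([[5.0, 2.0], [2.0, 1.0]])
B = np.array([[2.0, -1.0], [-1.0, 2.0]])

# Both A and B are symmetric positive definite (all eigenvalues > 0).
print(np.linalg.eigvalsh(A), np.linalg.eigvalsh(B))

# x^T (AB) x = x^T S x, where S is the symmetric part of AB.
# S has a negative eigenvalue, so AB is not positive semidefinite.
S = (A @ B + (A @ B).T) / 2
print(np.linalg.eigvalsh(S))
```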

Problem 5

Consider the following network.

Circuit for problem 5

a) Suppose the voltages are \(V_1=1+0.1j, V_2=0.9+0.5j, V_3=-0.8+0.2j, V_4=-0.3+j, V_5=1-0.2j\). Find the current injections.

b) Suppose the current injections are \(I_1=2, I_2=-j, I_3=0.5-2j, I_4=0.5+0.5j, I_5=1-j\). Find the bus voltages. Hint: check whether the \(Y\)-bus matrix is invertible.

Solution.

a) Straightforward calculation. We have

\[I=YV= \begin{bmatrix} 5.6 - 1j & -2 + j & -1 + 0.1j & -0.6 + 0.9j & 0 \\ -2 + j & 2.5 - 3j & -0.5 + 2j & 0 & 0 \\ -1 + 0.1j & -0.5 + 2j & 1.5 - 2.1j & 0 & 0 \\ -0.6 + 0.9j & 0 & 0 & 1.7 - 1.4j & -0.1 + j \\ 0 & 0 & 0 & -0.1 + j & 0.1 - j \end{bmatrix} \begin{bmatrix} 1 + 0.1j \\ 0.9 + 0.5j \\ -0.8 + 0.2j \\ -0.3 + 1j \\ 1 - 0.2j \end{bmatrix}= \begin{bmatrix} 3.46 - 1.69j \\ 1.65 - 2.35j \\ -3.24 + 3.53j \\ 0.30 + 3.98j \\ -1.07 - 1.42j \end{bmatrix}\]
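The product above can be reproduced directly in NumPy (note that Python uses `1j` for the imaginary unit):

```python
import numpy as np

# Y-bus matrix and voltage vector from the problem.
Y = np.array([
    [5.6 - 1.0j, -2 + 1j, -1 + 0.1j, -0.6 + 0.9j, 0],
    [-2 + 1j, 2.5 - 3j, -0.5 + 2j, 0, 0],
    [-1 + 0.1j, -0.5 + 2j, 1.5 - 2.1j, 0, 0],
    [-0.6 + 0.9j, 0, 0, 1.7 - 1.4j, -0.1 + 1j],
    [0, 0, 0, -0.1 + 1j, 0.1 - 1j],
])
V = np.array([1 + 0.1j, 0.9 + 0.5j, -0.8 + 0.2j, -0.3 + 1j, 1 - 0.2j])

I = Y @ V
print(np.round(I, 2))  # first entry 3.46 - 1.69j
```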

b) The \(Y\) matrix is invertible, so we have

\[V=Y^{-1} I = \begin{bmatrix} 0.6289 - 1.6292j \\ 0.9557 - 2.4347j \\ 1.3792 - 2.6164j \\ 0.5423 - 1.1416j \\ 1.6314 - 0.2505j \end{bmatrix}\]
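Numerically, it is preferable to solve the linear system rather than form \(Y^{-1}\) explicitly; a NumPy sketch:

```python
import numpy as np

# Same Y-bus matrix as in part a).
Y = np.array([
    [5.6 - 1.0j, -2 + 1j, -1 + 0.1j, -0.6 + 0.9j, 0],
    [-2 + 1j, 2.5 - 3j, -0.5 + 2j, 0, 0],
    [-1 + 0.1j, -0.5 + 2j, 1.5 - 2.1j, 0, 0],
    [-0.6 + 0.9j, 0, 0, 1.7 - 1.4j, -0.1 + 1j],
    [0, 0, 0, -0.1 + 1j, 0.1 - 1j],
])
I_inj = np.array([2, -1j, 0.5 - 2j, 0.5 + 0.5j, 1 - 1j])

# Y is invertible here (its rows do not all sum to zero, indicating
# shunt elements), so the system Y V = I has a unique solution.
V = np.linalg.solve(Y, I_inj)
print(np.round(V, 4))
```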

Problem 6

The Schur complement can be used to reduce the size of circuits by removing internal buses; this is called Kron reduction. We will work through it for the following circuit, with the goal of finding an equivalent circuit that “gets rid of” the internal nodes between buses 1 and 2.

Circuit for problem 6

a) First, find the \(Y\)-bus matrix.

b) The Schur complement can be used to partially solve linear equations. Namely, given \(M=\begin{bmatrix} A & B \\ D & C \end{bmatrix}\), suppose we want to solve

\[\begin{bmatrix} A & B \\ D & C \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} a \\ b \end{bmatrix}.\]

Performing the operations used in Gaussian elimination, we can write \(x\) as

\[x = (M/C)^{-1} \hat{a}, \quad \hat{a}=a-BC^{-1}b,\]

where \(M/C = A - BC^{-1}D\) denotes the Schur complement of \(C\) in \(M\).
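As a sanity check, the partial-solve formula can be verified on a small system (the block values below are hypothetical examples):

```python
import numpy as np

# Small partitioned system with hypothetical block values.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0, 2.0], [0.0, 1.0]])
D = np.array([[2.0, 0.0], [1.0, 1.0]])
C = np.array([[3.0, 1.0], [1.0, 2.0]])
a = np.array([1.0, 2.0])
b = np.array([0.0, 1.0])

# Full solve of the block system, keeping only the x part.
M = np.block([[A, B], [D, C]])
x_full = np.linalg.solve(M, np.concatenate([a, b]))[:2]

# Partial solve: x = (M/C)^{-1} (a - B C^{-1} b), with M/C = A - B C^{-1} D.
a_hat = a - B @ np.linalg.solve(C, b)
x_schur = np.linalg.solve(A - B @ np.linalg.solve(C, D), a_hat)
print(np.allclose(x_full, x_schur))  # True
```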

Applying this to our circuit, and noticing that the current injections at the internal nodes are all \(0\), find a relationship between just \(\begin{bmatrix} I_1 \\ I_2 \end{bmatrix}\) and \(\begin{bmatrix} V_1 \\ V_2 \end{bmatrix}\).

c) Draw the resulting equivalent circuit. Note this circuit might have shunt elements.

Solution.

a) The \(Y\)-bus matrix is

\[Y = \begin{bmatrix} 3 - 1.5j & 0 & -1 + j & -2 + 0.5j & 0 & 0 & 0 \\ 0 & 2.2 - 5j & 0 & 0 & 0 & -1.5 + 2j & -0.7 + 3j \\ -1 + j & 0 & 1.5 - 1.7j & 0 & -0.5 + 0.7j & 0 & 0 \\ -2 + 0.5j & 0 & 0 & 2.2 - 1.5j & -0.2 + j & 0 & 0 \\ 0 & 0 & -0.5 + 0.7j & -0.2 + j & 1.1 - 2.3j & -0.4 + 0.6j & 0 \\ 0 & -1.5 + 2j & 0 & 0 & -0.4 + 0.6j & 2.4 - 3.2j & -0.5 + 0.6j \\ 0 & -0.7 + 3j & 0 & 0 & 0 & -0.5 + 0.6j & 1.2 - 3.6j \end{bmatrix}\]

b) The idea is to partition \(Y\) as

\(Y = \begin{bmatrix} Y_{ee} & Y_{ei} \\ Y_{ei}^T & Y_{ii} \end{bmatrix}\) where

  • \(Y_{ee}\) is the admittance submatrix corresponding to buses 1 and 2 (the external buses)
  • \(Y_{ii}\) is the admittance submatrix corresponding to the other buses (the internal buses)
  • \(Y_{ei}\) represents the coupling between external and internal buses.

Using the Schur complement formula and the fact that the internal buses have zero net current injection, the reduced admittance matrix \(Y^{\text{Kron}}\) is given by:

\[Y^{\text{Kron}} = Y_{ee} - Y_{ei} Y_{ii}^{-1} Y_{ei}^T\]

Numerically, this is

\[\begin{bmatrix} 0.2291 - 0.3356j & -0.2291 + 0.3356j \\ -0.2291 + 0.3356j & 0.2291 - 0.3356j \end{bmatrix}.\]
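The reduction is a few lines in NumPy (using `np.linalg.solve` in place of the explicit inverse \(Y_{ii}^{-1}\)):

```python
import numpy as np

# Full 7-bus admittance matrix from part a); buses 1-2 are external.
Y = np.array([
    [3 - 1.5j, 0, -1 + 1j, -2 + 0.5j, 0, 0, 0],
    [0, 2.2 - 5j, 0, 0, 0, -1.5 + 2j, -0.7 + 3j],
    [-1 + 1j, 0, 1.5 - 1.7j, 0, -0.5 + 0.7j, 0, 0],
    [-2 + 0.5j, 0, 0, 2.2 - 1.5j, -0.2 + 1j, 0, 0],
    [0, 0, -0.5 + 0.7j, -0.2 + 1j, 1.1 - 2.3j, -0.4 + 0.6j, 0],
    [0, -1.5 + 2j, 0, 0, -0.4 + 0.6j, 2.4 - 3.2j, -0.5 + 0.6j],
    [0, -0.7 + 3j, 0, 0, 0, -0.5 + 0.6j, 1.2 - 3.6j],
])

ext = [0, 1]             # external buses 1, 2
internal = [2, 3, 4, 5, 6]  # internal buses 3-7
Yee = Y[np.ix_(ext, ext)]
Yei = Y[np.ix_(ext, internal)]
Yii = Y[np.ix_(internal, internal)]

# Kron reduction: Y_kron = Yee - Yei Yii^{-1} Yei^T.
Y_kron = Yee - Yei @ np.linalg.solve(Yii, Yei.T)
print(np.round(Y_kron, 4))
```

Since every row of the full \(Y\) sums to zero (no shunts), the rows of the reduced matrix also sum to zero.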

c) This is just a two-bus network with no shunt elements: a single line of admittance \(0.2291 - 0.3356j\) connecting buses 1 and 2. (The rows of \(Y^{\text{Kron}}\) sum to zero, which is why no shunts appear.)