EE 554 Large-Scale Electric Systems Analysis Homework Assignments
Homework 3
Due Oct 18th at 11:59pm. Submit to CANVAS at https://canvas.uw.edu/courses/1828831/assignments/10779006.
Problem 1
Consider the following system of nonlinear equations
\[\begin{align} x_1^2-x_1 x_2+x_2 &= 1 \\ x_2^3-x_1^2 x_2 &=6 \end{align}\]Solve it using the Newton-Raphson method. Plot the trajectory of the iterations; that is, start at your chosen initial point and plot the subsequent values of \(x_1\) and \(x_2\) at each iteration until they converge.
Solution.
What the algorithm does depends on the starting point. Note that this problem is actually simple enough that we can compute the solutions: they are \((1,2)\) and \((-2.5,-1.5)\). The following figures show three possible behaviors.

The algorithm converges for the first two starting points, although to different values. It does not converge for the third starting point. This example illustrates some of the difficulties of using the Newton method: it is really not obvious that there are two solutions, or that \((-0.5,-1.5)\) is a bad starting point.
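For reference, a minimal NumPy sketch of the Newton-Raphson iteration for this system is shown below. The starting points are purely illustrative (not necessarily the ones used to generate the figures), and the returned trajectory can be plotted directly to produce figures like the ones above.

```python
import numpy as np

def F(x):
    """Residuals of the system; both entries are zero at a solution."""
    x1, x2 = x
    return np.array([x1**2 - x1 * x2 + x2 - 1,
                     x2**3 - x1**2 * x2 - 6])

def J(x):
    """Jacobian of F."""
    x1, x2 = x
    return np.array([[2 * x1 - x2, 1 - x1],
                     [-2 * x1 * x2, 3 * x2**2 - x1**2]])

def newton_raphson(x0, tol=1e-10, max_iter=50):
    """Run Newton-Raphson from x0 and return the array of iterates."""
    x = np.array(x0, dtype=float)
    trajectory = [x.copy()]
    for _ in range(max_iter):
        try:
            dx = np.linalg.solve(J(x), -F(x))
        except np.linalg.LinAlgError:
            break  # singular Jacobian: the update is not defined
        x = x + dx
        trajectory.append(x.copy())
        if not np.all(np.isfinite(x)) or np.linalg.norm(dx) < tol:
            break
    return np.array(trajectory)

# Illustrative starting points; the behavior depends strongly on this choice.
# A trajectory can be plotted with, e.g., plt.plot(traj[:, 0], traj[:, 1], "o-").
for x0 in [(2.0, 3.0), (-3.0, -2.0), (-0.5, -1.5)]:
    traj = newton_raphson(x0)
    print(x0, "->", traj[-1], f"({len(traj) - 1} steps)")
```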
Problem 2
Consider the following function \(f(\theta_1, \theta_2)=\begin{bmatrix} f_1(\theta_1,\theta_2) \\ f_2(\theta_1,\theta_2) \end{bmatrix} = \begin{bmatrix} \cos(\theta_1-\theta_2)+\sin(\theta_1) \\ \sin(\theta_1-\theta_2)+\cos(\theta_2) \end{bmatrix}.\)
a) Plot all possible values of \(f_1\) and \(f_2\) by varying \(\theta_1\) and \(\theta_2\).
b) Say we want to solve
\[f=\begin{bmatrix} 3 \\ 1 \end{bmatrix}.\]This equation doesn’t have a solution (the sum of a sine and a cosine cannot exceed \(2\), so \(f_1=3\) is unreachable). If we tried to use Newton-Raphson, the algorithm would diverge or oscillate. But suppose we use the regularized version. That is, at each step \(k\), we solve (with some positive \(\lambda\))
\[\min_{\Delta \theta} \; \left\Vert f (\theta^{(k)}) + \nabla f (\theta^{(k)}) \Delta \theta - \begin{bmatrix} 3 \\ 1 \end{bmatrix} \right\Vert ^2+\lambda \Vert \Delta \theta \Vert^2\]and update the solution as \(\theta^{(k+1)}=\theta^{(k)}+\Delta \theta\).
Would this converge? If so, what does it converge to? If not, why not?
Solution.
The feasible region (the achievable values of \(f_1\) and \(f_2\)) can be found by iterating over all possible values of \(\theta_1\) and \(\theta_2\).
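A minimal sketch of this sampling approach (the grid resolution is an arbitrary choice; one period of each angle suffices):

```python
import numpy as np
import matplotlib.pyplot as plt

# Evaluate f on a grid of (theta_1, theta_2) values covering one full period each.
theta1, theta2 = np.meshgrid(np.linspace(-np.pi, np.pi, 400),
                             np.linspace(-np.pi, np.pi, 400))
f1 = np.cos(theta1 - theta2) + np.sin(theta1)
f2 = np.sin(theta1 - theta2) + np.cos(theta2)

plt.scatter(f1.ravel(), f2.ravel(), s=1)
plt.plot(3, 1, "r.", markersize=12)  # the unachievable target (3, 1)
plt.xlabel(r"$f_1$")
plt.ylabel(r"$f_2$")
plt.show()
```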

The red point is \((3,1)\), which is not achievable. The algorithm could settle at a point like the black one, where it gets “close” to the target; exactly where it ends up is controlled by \(\lambda\). Note that the feasible region is actually convex (by inspection), but it is nontrivial to show this by looking at the equations that define it. This is part of the challenge of working with the power flow equations: the geometry is not obvious from the algebraic equations.
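For concreteness, a sketch of the regularized iteration is given below. Each step uses the closed-form minimizer \(\Delta\theta = (J^\top J + \lambda I)^{-1} J^\top \big(y - f(\theta^{(k)})\big)\) of the objective above; the choices of \(\lambda\), the initial point, and the iteration count are arbitrary, and (as noted) they influence where the iterates settle.

```python
import numpy as np

def f(theta):
    t1, t2 = theta
    return np.array([np.cos(t1 - t2) + np.sin(t1),
                     np.sin(t1 - t2) + np.cos(t2)])

def jacobian(theta):
    t1, t2 = theta
    return np.array([[-np.sin(t1 - t2) + np.cos(t1), np.sin(t1 - t2)],
                     [np.cos(t1 - t2), -np.cos(t1 - t2) - np.sin(t2)]])

def regularized_newton(theta0, y, lam=1.0, iters=500):
    """Damped Gauss-Newton (Levenberg-Marquardt-style) iteration toward target y."""
    theta = np.array(theta0, dtype=float)
    for _ in range(iters):
        Jk = jacobian(theta)
        r = f(theta) - y
        # Minimizer of ||r + Jk @ dtheta||^2 + lam * ||dtheta||^2
        dtheta = np.linalg.solve(Jk.T @ Jk + lam * np.eye(2), -Jk.T @ r)
        theta += dtheta
    return theta

y = np.array([3.0, 1.0])
theta_hat = regularized_newton([0.5, 0.5], y)
# If the iterates settle, the limit satisfies Jk.T @ (f(theta) - y) = 0: a stationary
# point of the squared distance to the target, where the Jacobian is singular.
print(theta_hat, f(theta_hat))
```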
Problem 3
A fundamental problem in power system operations is that, given a set of active and reactive injections, we want to know whether they are feasible; that is, whether they can be achieved by some set of complex voltages. Currently, we don’t really have an algorithm that’s better than just solving the power flow and checking. Note that the feasibility problem looks like the easier problem, since it only asks for a yes/no answer, whereas solving the power flow returns the actual voltages. Later in the course we will look at cases where answering a yes/no question is easier than finding the solution. For this problem, we look at a necessary condition on the active powers.
For an \((n+1)\)-bus system where bus \(0\) is the slack bus, show that if an active power injection vector \((P_0,\dots,P_n)\) is feasible, then it must satisfy \(\sum_{i=0}^{n} P_i \geq 0\).
Solution.
There are many ways to show this. One way is to decompose the sum of nodal power injections into power flow on the edges:
\[\sum_{i=0}^n P_i = \sum_{i=0}^n \sum_{j \sim i} P_{ij} = \sum_{(i,j) \text{ is an edge}} \left(P_{ij}+P_{ji}\right).\]We now show that \(P_{ij}+P_{ji} \geq 0\) for all \((i,j)\). Using \(\theta_{ij}\) as a shorthand for \(\theta_i-\theta_j\), we have:
\[P_{ij}+P_{ji}= g_{ij} V_i^2 -g_{ij} V_i V_j \cos(\theta_{ij})+b_{ij} V_i V_j \sin(\theta_{ij})+ g_{ij} V_j^2 -g_{ij} V_i V_j \cos(\theta_{ij})-b_{ij} V_i V_j \sin(\theta_{ij}) = g_{ij} \left(V_i^2-2V_i V_j \cos(\theta_{ij})+V_j^2\right).\]Since \(g_{ij}>0\) (resistances are positive) and, because \(\cos(\theta_{ij}) \leq 1\), \(V_i^2-2V_i V_j \cos(\theta_{ij})+V_j^2 \geq V_i^2-2V_i V_j +V_j^2=(V_i-V_j)^2 \geq 0\), we can conclude that the original sum is nonnegative. In fact, \(g_{ij} \left(V_i^2-2V_i V_j \cos(\theta_{ij})+V_j^2\right)\) is the thermal loss on that line, and \(\sum_i P_i\) is the total thermal loss in the system. So this problem verifies the first law of thermodynamics (losses must be nonnegative) in the power system context.
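As a quick numerical sanity check, the identity \(P_{ij}+P_{ji}=g_{ij}\left(V_i^2-2V_iV_j\cos(\theta_{ij})+V_j^2\right)\geq 0\) can be verified by sampling random line parameters and operating points (the sampling ranges below are arbitrary, and shunt elements are ignored):

```python
import numpy as np

rng = np.random.default_rng(0)

def line_flows(g, b, Vi, Vj, tij):
    """Active power flows at the two ends of a line, using the expressions above."""
    Pij = g * Vi**2 - g * Vi * Vj * np.cos(tij) + b * Vi * Vj * np.sin(tij)
    Pji = g * Vj**2 - g * Vi * Vj * np.cos(tij) - b * Vi * Vj * np.sin(tij)
    return Pij, Pji

for _ in range(10000):
    g, b = rng.uniform(0.1, 10.0), rng.uniform(-20.0, 20.0)
    Vi, Vj = rng.uniform(0.8, 1.2, size=2)
    tij = rng.uniform(-np.pi, np.pi)
    Pij, Pji = line_flows(g, b, Vi, Vj, tij)
    loss = g * (Vi**2 - 2 * Vi * Vj * np.cos(tij) + Vj**2)
    assert Pij + Pji >= -1e-12                 # loss is nonnegative
    assert abs((Pij + Pji) - loss) < 1e-9      # matches the closed-form loss

print("all samples satisfied Pij + Pji >= 0")
```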
Problem 4
It is often useful to have a softer measure of how “feasible” a power injection is. More precisely, we say a power injection is more feasible if it remains feasible after a large perturbation. Conversely, we say a power injection is on the feasibility boundary if it is feasible, but can become infeasible after a very small perturbation. Find a metric that quantifies how feasible a solution is. Hint: this is related to question 2b).
Solution.
Here we provide a metric based on the inverse function theorem. Consider a continuous function \(f\) from \(\mathbb{R}^n\) to \(\mathbb{R}^n\). We want to know whether an inverse function \(f^{-1}\) exists. Note that it is often too much to ask for \(f^{-1}\) to exist at every point. For example, \(\sin^{-1}\) typically maps \([-1,1]\) to only \([-\frac{\pi}{2},\frac{\pi}{2}]\), even though \(\sin\) is defined for all real numbers. For multivariate functions, we typically cannot write down these kinds of restricted ranges. So the question becomes: given \(f(x)=y\), does there exist an open set \(\mathcal{X}\) containing \(x\) and an open set \(\mathcal{Y}\) containing \(y\) such that \(f^{-1}\) exists from \(\mathcal{Y}\) to \(\mathcal{X}\)? This is answered by the inverse function theorem, which says that if \(f\) is continuously differentiable at \(x\) and its Jacobian \(J(x)\) is nonsingular, then \(f^{-1}\) exists for some open sets \(\mathcal{X}\) and \(\mathcal{Y}\).
The way this is used in power flow is to let \(f\) be the power flow equations, \(x\) be \((V,\theta)\), and \(y\) be the active and reactive powers. Then if we can find an \(x\) that solves \(f(x)=y\) and \(J(x)\) is nonsingular, we know that \(f^{-1}\) exists on some set around \(y\), so for a small enough \(\Delta y\), \(f^{-1}(y+\Delta y)\) makes sense. In other words, loads around \(y\) are feasible.
Of course, this theorem doesn’t say anything about how large \(\mathcal{X}\) and \(\mathcal{Y}\) are. A heuristic is to look at how close \(J(x)\) is to being singular, with the idea that a \(J\) far from singular should be able to tolerate larger changes in the load, i.e., be “more” feasible. As covered in class, this can be done by looking at the condition number of \(J\).
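As a small illustration, and using the Problem 2 map in place of a full power flow Jacobian (the hint in the problem points in this direction), one could report the inverse condition number of the Jacobian as the metric: values near \(1\) indicate a comfortably feasible point, while values near \(0\) indicate a point at or near the boundary. The test points below are hand-picked; \((\theta_1,\theta_2)=(\pi/2,0)\) maps to \((1,2)\), which attains the maximum possible value of \(f_2\) and therefore sits on the boundary.

```python
import numpy as np

def jacobian(theta):
    """Analytical Jacobian of the Problem 2 map f(theta_1, theta_2)."""
    t1, t2 = theta
    return np.array([[-np.sin(t1 - t2) + np.cos(t1), np.sin(t1 - t2)],
                     [np.cos(t1 - t2), -np.cos(t1 - t2) - np.sin(t2)]])

def feasibility_margin(theta):
    """Inverse condition number of the Jacobian: ~1 is well conditioned
    (comfortably feasible), ~0 means a nearly singular Jacobian (near the boundary)."""
    return 1.0 / np.linalg.cond(jacobian(theta))

print(feasibility_margin([0.3, -0.2]))       # an interior point: margin well above zero
print(feasibility_margin([np.pi / 2, 0.0]))  # maps to (1, 2) on the boundary: margin ~ 0
```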
Note that the inverse function theorem is not directly computationally useful, since it doesn’t give a way to compute \(f^{-1}\) (just that it exists). Also, the heuristic of looking at the condition number of \(J\) doesn’t directly give a quantitative measure of how large \(\Delta y\) can be. Finding a more explicit metric, of the form “for any \(\Delta y\) with \(\Vert \Delta y \Vert<0.1\) (with 0.1 replaced by any easily computable number), \(y+\Delta y\) is feasible,” is an open problem.
Problem 5
Consider the following system. For simplicity, we fix all the voltages to be \(1\) and only consider active power. Find the feasible set of all possible \((P_1,P_2)\) that are achievable by varying the angles at buses 1 and 2. Show that the condition you derived in Problem 4 makes sense for this example. That is, if your condition decides a point is on the boundary, is it actually on the boundary? You don’t need to prove things analytically and can do this through plotting.

Solution.
See below for the feasible region. Every point on the boundary of the region has a singular Jacobian. However, there are also points in the interior that have a singular Jacobian. Therefore, nonsingularity of the Jacobian implies that a point is not on the boundary, but the converse is not necessarily true.
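A sketch for reproducing this kind of plot is given below. Since the exact line data come from the figure, the sketch assumes a hypothetical three-bus ring (slack bus \(0\), buses \(1\) and \(2\), lossless lines with susceptance \(b=1\)); the actual topology and parameters of the pictured system should be substituted in.

```python
import numpy as np
import matplotlib.pyplot as plt

b = 1.0  # hypothetical susceptance of every line

def injections(t1, t2):
    """P1 and P2 with V = 1, lossless lines, slack angle 0,
    and an assumed ring topology: lines 0-1, 0-2 and 1-2."""
    P1 = b * np.sin(t1) + b * np.sin(t1 - t2)
    P2 = b * np.sin(t2) + b * np.sin(t2 - t1)
    return P1, P2

def jac_det(t1, t2):
    """Determinant of d(P1, P2)/d(theta_1, theta_2)."""
    a11 = b * np.cos(t1) + b * np.cos(t1 - t2)
    a12 = -b * np.cos(t1 - t2)
    a21 = -b * np.cos(t1 - t2)
    a22 = b * np.cos(t2) + b * np.cos(t1 - t2)
    return a11 * a22 - a12 * a21

t1, t2 = np.meshgrid(np.linspace(-np.pi, np.pi, 600),
                     np.linspace(-np.pi, np.pi, 600))
P1, P2 = injections(t1, t2)
near_singular = np.abs(jac_det(t1, t2)) < 0.05  # loose tolerance, for visualization

plt.scatter(P1.ravel(), P2.ravel(), s=1, color="lightgray", label="feasible region")
plt.scatter(P1[near_singular], P2[near_singular], s=1, color="black",
            label="near-singular Jacobian")
plt.xlabel(r"$P_1$")
plt.ylabel(r"$P_2$")
plt.legend()
plt.show()
```

Points flagged as near-singular should trace the boundary of the gray region, and, consistent with the note above, they may also appear in the interior.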
