For the real-time simulation, the DC formulation is run within the testbed's Matlab programming environment, providing real-time control actions to a test case provided by an industry partner, which is modeled and simulated in RTDS. The modified IEEE 14-bus system is shown in Figure \ref{fig:SimulatedSystem}. It includes three wind farms, on Bus 2, Bus 9, and Bus 11. An overload in the transmission line from Bus 7 to Bus 9 was created by increasing the generation of the wind farm at Bus 9 from 60 MW to 100 MW.
The advantage of dynamically calculated curtailment values is that they can react in real time to line-rating changes caused by changing weather conditions. In the online test system, we use a single static set of line ratings to demonstrate a simple case of exceeding the transmission line limits. These ratings could instead be recalculated every cycle, derived from weather conditions (predominantly temperature), or fed by power donuts mounted on the transmission lines for even more precise real-time line ratings \cite{singh_2014}.
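As an illustration of how such ratings could be refreshed every cycle, the sketch below recomputes a simplified thermal MW limit from the ambient temperature. It is a minimal sketch only: the thermal coefficients, conductor parameters, and the helper name dynamic_rating_mw are illustrative assumptions, not values or interfaces from the deployed test system.

\begin{verbatim}
import math

def dynamic_rating_mw(ambient_c, conductor_max_c=75.0, wind_mps=0.6,
                      r_ohm_per_m=7.3e-5, kv=138.0, solar_w_per_m=15.0):
    """Very simplified steady-state thermal rating (IEEE 738 flavored).

    All parameters are illustrative placeholders, not values from the
    deployed test system."""
    dt = conductor_max_c - ambient_c              # allowable temperature rise
    # crude convective + radiative heat loss per metre of conductor
    q_loss = (1.0 + 1.5 * wind_mps) * dt          # W/m, simplified coefficients
    q_net = max(q_loss - solar_w_per_m, 0.0)      # subtract solar heating
    i_max = math.sqrt(q_net / r_ohm_per_m)        # A, from I^2 * R = q_net
    # convert ampacity to a three-phase MW limit at nominal voltage
    return math.sqrt(3) * kv * 1e3 * i_max / 1e6

# Example: recompute the limit each cycle as temperature readings arrive.
for ambient in (10.0, 25.0, 40.0):
    print(f"{ambient:4.1f} C -> {dynamic_rating_mw(ambient):6.1f} MW")
\end{verbatim}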
Using the established system and line ratings, an overload condition occurs at the transmission line connected to generator 6, and curtailment action is taken. Generation is curtailed from its previous value down to 81.57\% of the maximum generation available from the wind resource. In larger interconnected systems, the result may instead be partial curtailment of multiple wind farms, but in the current test system the optimal curtailment requires only minor generation shedding, which is sufficient to protect the transmission lines.
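A minimal sketch of this kind of curtailment decision is shown below, assuming a DC power-flow sensitivity (PTDF) of the overloaded line with respect to the wind-farm injection. The function name curtail_to_limit and all numeric values are illustrative only and do not reproduce the exact formulation used in the testbed.

\begin{verbatim}
def curtail_to_limit(current_output_mw, line_flow_mw, line_limit_mw, ptdf):
    """Return the curtailed set-point that just relieves the overload.

    ptdf is the DC-flow sensitivity of the monitored line to the wind-farm
    injection (MW of line flow per MW injected); values here are
    illustrative."""
    overload = line_flow_mw - line_limit_mw
    if overload <= 0.0 or ptdf <= 0.0:
        return current_output_mw              # no action needed
    reduction = overload / ptdf               # MW that must be curtailed
    return max(current_output_mw - reduction, 0.0)

# Illustrative numbers only: a 100 MW wind farm, a line loaded to 111 MW
# against a 95 MW rating, and an assumed PTDF of 0.87.
setpoint = curtail_to_limit(current_output_mw=100.0, line_flow_mw=111.0,
                            line_limit_mw=95.0, ptdf=0.87)
print(f"new wind set-point: {setpoint:.1f} MW")
\end{verbatim}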
The following cases have been studied using the developed cyber-physical testbed.
- Case A: No RAS actions.
- Case B: All computing nodes are healthy and the communication link is intact.
- Case C: The failure of the leader computing node.
- Case D: The failure of the networking interface.
Case C, which depicts a scenario in which the primary leader node fails before the control action is taken, shows that the flow is still maintained within the limit, the same as in Case B. In this case, the backup node quickly comes into action, providing resiliency to the node failure.
Case A
The voltage and current measurements are obtained from the RTDS using GTNET PMUs. The PMU data is sent to the SEL PDC, and the PDC sends the synchrophasor data, based on the C37.118 protocol, to the NS-3 simulated communication network. Without any RAS deployed, the transmission line from Bus 6 to Bus 7 carries a real power of 111 MW, whereas the capacity of the line is 95 MW, resulting in an overload condition. Under the traditional protection method, the wind farm generation on Bus 6, which is 180 MW, would be shed in order to resolve this overload. This protects the transmission line from the overload condition, but it sheds a large amount of renewable energy. In order to maximize the usage of the wind power, the proposed RAS is implemented.
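To make the trade-off concrete, the short sketch below contrasts the traditional trip, which sheds the entire 180 MW farm, with a curtailment that removes only the overload margin. The assumption of one MW of line relief per MW curtailed is a simplification for illustration, not the testbed's sensitivity model.

\begin{verbatim}
LINE_LIMIT_MW = 95.0      # rating of the Bus 6 - Bus 7 line
LINE_FLOW_MW = 111.0      # measured real power without any RAS
WIND_FARM_MW = 180.0      # wind generation on Bus 6

overload_mw = LINE_FLOW_MW - LINE_LIMIT_MW

# Traditional protection: trip the whole wind farm.
shed_traditional = WIND_FARM_MW

# Proposed RAS: curtail only enough to relieve the overload
# (assuming, for illustration, 1 MW of relief per MW curtailed).
shed_ras = overload_mw

print(f"overload: {overload_mw:.0f} MW")
print(f"traditional shed: {shed_traditional:.0f} MW, "
      f"RAS shed: {shed_ras:.0f} MW")
\end{verbatim}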
Case B
Since both the primary leader and the backup leader are healthy in this case, the measurements are received by both nodes. The primary leader runs the RAS algorithm and, using this data, calculates the curtailment if an overload occurs in the system. The resulting control action is sent through the NS-3 simulated communication network back to the master PC. On the master PC, the self-designed communication program receives the control action and sends it to RSCAD to control the breaker or wind-farm output in the simulated power system.
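A sketch of the leader's receive-compute-send loop is given below. The port numbers, JSON message format, endpoint name, and helper run_ras are assumptions made for illustration; they are not the testbed's actual interfaces, and run_ras stands in for the DC-formulation curtailment calculation described above.

\begin{verbatim}
import json
import socket

MEAS_PORT = 4713                          # illustrative measurement port
CTRL_ADDR = ("master-pc.local", 4850)     # illustrative master-PC endpoint

def run_ras(meas):
    """Placeholder for the curtailment calculation. Returns a new
    wind-farm set-point in MW, or None if no action is needed."""
    if meas["line_flow_mw"] <= meas["line_limit_mw"]:
        return None
    overload = meas["line_flow_mw"] - meas["line_limit_mw"]
    return max(meas["wind_mw"] - overload, 0.0)

def leader_loop():
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("", MEAS_PORT))
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        frame, _ = rx.recvfrom(4096)             # one measurement snapshot
        meas = json.loads(frame)                 # illustrative JSON payload
        setpoint = run_ras(meas)
        if setpoint is not None:                 # only send on overload
            cmd = json.dumps({"wind_setpoint_mw": setpoint})
            tx.sendto(cmd.encode(), CTRL_ADDR)   # control action to master PC
\end{verbatim}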
Case C
In this simulation case, the primary computational leader node fails, and the backup leader detects this failure. For this simple case, both leaders run in the Matlab parallel cluster, and failures are detected directly from its interface. Upon the death of the primary process, the backup begins sending the control signals back to the power system, which in this case is the RTDS.
In tandem with a primary leader, backups can be selected to receive the same set of data and process it at the same time as the leader. Results from the backup leaders are ignored unless a fault is detected in the primary leader.
This is important for quick recovery in a failure condition: since the backup already has a complete set of data and a calculated result, no rollback or redelivery of data is needed to continue operations. In addition, when network latency is sufficiently low and there is excess time before new sensor data arrives, backup leaders can compare their results with the primary leader, creating a form of triple modular redundancy, reporting the leader's result only if all backups reach the same conclusion. This also protects against specific link faults, where a node may be unable to deliver its data to the primary leader but can still send it to one or more backup leader nodes.
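A minimal sketch of that comparison step is shown below, assuming each node exposes its computed curtailment value. The tolerance value and the function name vote_on_result are hypothetical choices for illustration.

\begin{verbatim}
def vote_on_result(primary_mw, backup_mws, tol_mw=0.5):
    """Report the primary leader's result only if every backup agrees
    within a small tolerance (a simple TMR-style check).

    Returns the value to act on, or None if the nodes disagree and the
    result should be withheld."""
    if all(abs(primary_mw - b) <= tol_mw for b in backup_mws):
        return primary_mw
    return None

# Example: the primary and two backups computed nearly identical values.
print(vote_on_result(81.6, [81.6, 81.5]))   # -> 81.6
print(vote_on_result(81.6, [81.6, 60.0]))   # -> None (disagreement)
\end{verbatim}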
Thus, even in the case of a node failure, the proposed scheme is capable of ensuring resiliency, avoiding catastrophic effects on the power system.
Case D
In this case, processes run on standalone machines, and the communication link at the primary leader fails. Instead of relying on the parallel-processing code, liveness is detected using a heartbeat process, in which a call and response occurs continually between the backup and primary nodes. Upon failed communication, using reasonable timeout thresholds, if the backup node determines that control action should be taken, it begins communicating with the simulated system as the primary control node.
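The heartbeat check could look like the sketch below, in which the backup periodically pings the primary and promotes itself after a number of consecutive timeouts. The interval, miss threshold, and primary address are illustrative assumptions rather than the testbed's actual settings.

\begin{verbatim}
import socket
import time

PRIMARY_ADDR = ("primary-leader.local", 5005)   # illustrative address
HEARTBEAT_PERIOD_S = 0.5
MISSED_LIMIT = 3     # consecutive misses before the backup takes over

def backup_heartbeat_monitor():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(HEARTBEAT_PERIOD_S)
    missed = 0
    while True:
        try:
            sock.sendto(b"ping", PRIMARY_ADDR)      # call
            reply, _ = sock.recvfrom(64)            # response
            missed = 0 if reply == b"pong" else missed + 1
        except socket.timeout:
            missed += 1                             # no response this period
        if missed >= MISSED_LIMIT:
            return True     # primary presumed dead; assume the leader role
        time.sleep(HEARTBEAT_PERIOD_S)
\end{verbatim}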
Since a process cannot be certain whether and where a fault has occurred, in a larger network of collaborating nodes a Phi Accrual Failure Detector \cite{phi} is used to provide every node with a suspicion level for the primary and backup leaders. When this failure-likelihood value passes a defined threshold in any one node, a vote is initiated to decide on switching to the backup, and potentially to designate a new backup at the same time.
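A compact sketch of the phi calculation follows, assuming heartbeat inter-arrival times are modelled with a normal distribution as in the original phi accrual proposal \cite{phi}. The window size, standard-deviation floor, and suspicion threshold are illustrative choices.

\begin{verbatim}
import math
from collections import deque

class PhiAccrualDetector:
    """Minimal phi accrual failure detector sketch.

    Heartbeat inter-arrival times are modelled with a normal distribution;
    phi grows as the time since the last heartbeat becomes increasingly
    unlikely under that model."""

    def __init__(self, window=100, min_std=0.05):
        self.intervals = deque(maxlen=window)
        self.last_heartbeat = None
        self.min_std = min_std          # floor on std dev, in seconds

    def heartbeat(self, now):
        if self.last_heartbeat is not None:
            self.intervals.append(now - self.last_heartbeat)
        self.last_heartbeat = now

    def phi(self, now):
        if self.last_heartbeat is None or not self.intervals:
            return 0.0
        mean = sum(self.intervals) / len(self.intervals)
        var = sum((x - mean) ** 2 for x in self.intervals) / len(self.intervals)
        std = max(math.sqrt(var), self.min_std)
        elapsed = now - self.last_heartbeat
        # P(next heartbeat arrives later than 'elapsed'), normal model
        p_later = 0.5 * math.erfc((elapsed - mean) / (std * math.sqrt(2)))
        p_later = max(p_later, 1e-12)   # avoid log(0)
        return -math.log10(p_later)

# Example policy: suspect the primary once phi exceeds a chosen threshold,
# e.g. 8, and then initiate the vote to switch to the backup.
\end{verbatim}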