Automated Testing of Load Shedding Logic
By Keith Gray, PE and Jared Mraz, PE
A recent project involved adding a load shedding scheme to the medium voltage electrical distribution system of an oil refinery. The scheme leverages an existing power system protection network that already uses IEC 61850 GOOSE messaging.
The scheme was built on redundant controllers, each consisting of two logic processors. The controllers continually monitor the status of the breakers feeding five substations and the load on each of forty-six feeders, and determine which feeder(s) should be shed when a system event dictates. The calculation is based on the system configuration, the amount of power imported from the utility, and the load on each feeder. The controllers must also handle various failure modes without undershedding load or causing a refinery outage.
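To illustrate the kind of calculation involved, the sketch below shows one simple way a shed selection could be made. The function name, data structure, and greedy strategy are illustrative assumptions, not the actual controller logic described in the paper.

```python
# Hypothetical sketch of a load shedding selection calculation.
# All names and the greedy strategy are illustrative, not the real scheme.

def select_feeders_to_shed(feeder_loads, deficit_mw):
    """Pick feeders whose combined load covers the generation deficit.

    feeder_loads: dict mapping feeder name -> present load in MW
    deficit_mw: amount of load (MW) that must be shed

    Greedy approach: shed the most heavily loaded feeders first so the
    fewest feeders are interrupted, and the scheme never undersheds.
    """
    shed, remaining = [], deficit_mw
    # Walk feeders from highest to lowest load.
    for name, load in sorted(feeder_loads.items(), key=lambda kv: -kv[1]):
        if remaining <= 0:
            break
        shed.append(name)
        remaining -= load
    return shed

loads = {"FDR-01": 4.2, "FDR-02": 1.1, "FDR-03": 2.7}
print(select_feeders_to_shed(loads, 5.0))  # → ['FDR-01', 'FDR-03']
```

A real scheme would also weigh system configuration and breaker status, but even this toy version shows why the state space grows quickly: every combination of loads and breaker positions can change the answer.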
One challenge of this project was validating the logic in all of the numerous possible system states. Validating each state manually would have been tedious, error-prone, and would have consumed more schedule than the project allowed. Automation was therefore used to test a reasonable number of the possible states.
The scheme was designed to use GOOSE messaging for high-speed IED-to-IED communications and DNP for SCADA communications. A power system test set was used to send GOOSE messages to the controllers, and the test set's software interface allowed the data in the simulated GOOSE messages to be modified at runtime. A DNP OPC server was used to poll information from the controllers. This input/output configuration made it possible to vary the inputs and confirm that the outputs matched expectations, treating the controller logic as a black box.
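The vary-inputs/check-outputs loop described above can be sketched as follows. The `send_goose` and `poll_dnp` callables are hypothetical placeholders standing in for the real test-set and OPC interfaces, which the paper does not detail.

```python
# Illustrative sketch of the black-box sweep: drive simulated inputs,
# poll the outputs, and compare against expectations. The send/poll
# callables are hypothetical stand-ins for the test-set and OPC calls.
import itertools

def run_state_sweep(send_goose, poll_dnp, expected_output):
    """Exercise every combination of five breaker states (open/closed).

    send_goose: callable taking a dict of simulated GOOSE point values
    poll_dnp: callable returning the controller output read over DNP/OPC
    expected_output: callable mapping a breaker-state tuple to the
        output the controller should produce for that state
    """
    failures = []
    for states in itertools.product([0, 1], repeat=5):
        # Publish the simulated breaker statuses to the controllers.
        send_goose({"breaker_%d" % i: s for i, s in enumerate(states, 1)})
        # Read back the controller's response and compare.
        actual = poll_dnp()
        if actual != expected_output(states):
            failures.append((states, actual))
    return failures
```

A sweep like this covers all 32 breaker-status combinations in one run; in practice a settle delay between publishing and polling would be needed, which is omitted here for brevity.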
The authors' familiarity with the Python programming language made it a good choice for this automation. This paper describes the publicly available libraries used in this application. One of those libraries was "unittest" (http://docs.python.org/3.5/library/unittest.html), which provides a framework for setting up, running, and cleaning up after a number of automated tests, then reports the pass/fail/error counts at the end of the run. Stack traces are provided for any errors, allowing the user to easily locate the code that caused the error, and detailed messages can be provided to describe why a test failed. The "PyWin32" library was also used on this project; it gives Python the ability to interface with the Win32 API provided by Microsoft Windows operating systems, which was required to drive the test set interface and to communicate with the DNP OPC server.
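A minimal example of the "unittest" pattern described above is shown below. The `ControllerHarness` class is a hypothetical stand-in for the real black-box interfaces (GOOSE publishing via the test set, DNP polling via OPC); its names and threshold behavior are invented for illustration.

```python
# Minimal sketch of the black-box test pattern using the standard
# "unittest" library. ControllerHarness is a hypothetical stand-in for
# the real GOOSE/DNP interfaces, with invented names and behavior.
import unittest

class ControllerHarness:
    """Fake controller: sheds the largest feeder when import exceeds a limit."""
    IMPORT_LIMIT_MW = 10.0

    def __init__(self):
        self.inputs = {}

    def set_input(self, name, value):     # stands in for a GOOSE publish
        self.inputs[name] = value

    def read_shed_command(self):          # stands in for a DNP/OPC poll
        if self.inputs.get("utility_import_mw", 0.0) <= self.IMPORT_LIMIT_MW:
            return None
        feeders = self.inputs.get("feeder_loads", {})
        return max(feeders, key=feeders.get) if feeders else None

class TestLoadShedding(unittest.TestCase):
    def setUp(self):
        # Fresh harness and a known feeder loading before every test.
        self.ctl = ControllerHarness()
        self.ctl.set_input("feeder_loads", {"FDR-01": 4.2, "FDR-02": 1.1})

    def test_no_shed_below_import_limit(self):
        self.ctl.set_input("utility_import_mw", 8.0)
        self.assertIsNone(self.ctl.read_shed_command())

    def test_sheds_largest_feeder_above_limit(self):
        self.ctl.set_input("utility_import_mw", 12.0)
        self.assertEqual(self.ctl.read_shed_command(), "FDR-01")
```

Running this module with `python -m unittest` executes both tests; the framework's setUp/teardown hooks and pass/fail/error reporting are what made it practical to sweep many system states in one run.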
Due to the compact project schedule, much of the logic validation occurred while parts of the project were still under development. In addition to allowing many more states to be tested than could be tested manually, automation allowed the system to be re-tested each time a change occurred. The ability to quickly retest logic changes confirmed that no adverse changes had been made to other areas of the logic. This paper describes the process used to determine which tests to run; the architecture of the tests is also described, along with code snippets and examples.
To request a copy of this paper, send your name, company name and e-mail address to TandD@powereng.com. Please include the title of the paper you are requesting.