References of "Glavic, Mevludin"
Approximate value iteration in the reinforcement learning context. Application to electrical power system control
Ernst, Damien; Glavic, Mevludin; Geurts, Pierre et al.

in International Journal of Emerging Electrical Power Systems (2005), 3(1)

In this paper we explain how to design intelligent agents able to process the information acquired from interaction with a system to learn a good control policy, and show how the methodology can be applied to control devices aimed at damping electrical power oscillations. The control problem is formalized as a discrete-time optimal control problem, and the information acquired from interaction with the system is a set of samples, each composed of four elements: a state, the action taken in that state, the instantaneous reward observed, and the successor state of the system. To process this information we consider reinforcement learning algorithms that determine an approximation of the so-called Q-function by mimicking the behavior of the value iteration algorithm. Simulations are first carried out on a benchmark power system modeled with two state variables. We then present a more complex case study on a four-machine power system where the reinforcement learning algorithm controls a Thyristor Controlled Series Capacitor (TCSC) aimed at damping power system oscillations.
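
The value-iteration-mimicking scheme described here is essentially fitted Q iteration, which these authors developed with tree-based regressors. A minimal sketch under that reading — the scikit-learn regressor, the finite action set, and all names are assumptions, not the paper's code:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def fitted_q_iteration(samples, actions, gamma=0.98, n_iterations=50):
    """samples: list of (state, action, reward, next_state) four-tuples,
    with each state a 1-D feature vector and each action a scalar."""
    s = np.array([x[0] for x in samples])        # (n, d) states
    a = np.array([[x[1]] for x in samples])      # (n, 1) actions
    r = np.array([x[2] for x in samples])        # (n,) rewards
    s_next = np.array([x[3] for x in samples])   # (n, d) successor states
    X = np.hstack([s, a])                        # regressor input: (state, action)
    q = None
    for _ in range(n_iterations):
        if q is None:
            y = r                                # Q_1 = immediate reward
        else:
            # Bellman backup: y = r + gamma * max_u Q_{N-1}(s', u)
            q_next = np.column_stack([
                q.predict(np.hstack([s_next, np.full((len(r), 1), u)]))
                for u in actions])
            y = r + gamma * q_next.max(axis=1)
        q = ExtraTreesRegressor(n_estimators=50).fit(X, y)
    return q   # greedy policy: argmax over actions of q.predict([s, u])
```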

New developments in the application of automatic learning to power system control
Wehenkel, Louis; Glavic, Mevludin; Ernst, Damien

in Proceedings of the 15th Power System Computation Conference (PSCC 2005)

In this paper we present the basic principles of supervised learning and reinforcement learning as two complementary frameworks for designing control laws or decision policies in the context of power system control. We also review recent developments in the realm of automatic learning methods and discuss their applicability to power system decision and control problems. Simulation results illustrating the potential of the recently introduced fitted Q iteration learning algorithm in controlling a TCSC device aimed at damping electro-mechanical oscillations in a synthetic four-machine system are included in the paper.
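
To make the complementarity concrete: where the fitted Q iteration sketch above learns from rewards alone, the supervised framework fits a control law directly to examples labeled with good actions (e.g. produced offline by an expert or an optimizer). A minimal sketch, assuming such labeled data and the scikit-learn API; all names are illustrative:

```python
from sklearn.ensemble import ExtraTreesRegressor

def learn_control_law(states, expert_actions):
    """Supervised side: imitate a known-good policy from labeled examples.
    states: (n, d) array; expert_actions: (n,) array of control values."""
    policy = ExtraTreesRegressor(n_estimators=100)
    policy.fit(states, expert_actions)
    return policy   # policy.predict(s) maps a state to a control action
```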

Power systems stability control: Reinforcement learning framework
Ernst, Damien; Glavic, Mevludin; Wehenkel, Louis

in IEEE Transactions on Power Systems (2004), 19(1), 427-435

In this paper, we explore how a computational approach to learning from interactions, called reinforcement learning (RL), can be applied to control power systems. We describe some challenges in power system control and discuss how some of them could be met by using RL methods. The difficulties associated with applying these methods to power system control are described and discussed, as well as strategies that can be adopted to overcome them. Two reinforcement learning modes are considered: the online mode, in which the interaction occurs with the real power system, and the offline mode, in which the interaction occurs with a simulation model of the real power system. We present two case studies carried out on a four-machine power system model. The first concerns the design, by means of RL algorithms used in offline mode, of a dynamic brake controller. The second concerns RL methods used in online mode to control a thyristor controlled series capacitor (TCSC) aimed at damping power system oscillations.
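
In the online mode the agent improves its value estimates after every interaction step. A minimal tabular Q-learning update illustrates the idea; the function and its parameters are illustrative assumptions, not the paper's exact algorithm:

```python
def q_learning_step(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.95):
    """One temporal-difference update of a Q-table (a dict keyed by (s, a)),
    applied after observing transition (s, a) -> (r, s_next)."""
    best_next = max(Q.get((s_next, u), 0.0) for u in actions)
    td_error = r + gamma * best_next - Q.get((s, a), 0.0)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * td_error
    return Q
```

The only difference between the two modes in this view is the source of the transitions: the real system (online) or a simulation model (offline).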

A reinforcement learning based discrete supplementary control for power system transient stability enhancement
Glavic, Mevludin; Ernst, Damien; Wehenkel, Louis

in Proceedings of the 12th Intelligent Systems Application to Power Systems Conference (ISAP 2003)

This paper proposes an application of a Reinforcement Learning (RL) method to the control of a dynamic brake aimed at enhancing power system transient stability. The control law of the resistive brake takes the form of switching strategies. In particular, the paper focuses on the application of a model-based RL method, known as prioritized sweeping, which is well suited to applications in which computation is cheap. The curse of dimensionality is addressed by reducing the dimensionality of the system state via the One Machine Infinite Bus (OMIB) transformation. Results obtained on a synthetic four-machine power system are given to illustrate the performance of the proposed methodology.
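
A minimal sketch of a prioritized sweeping pass, assuming a learned deterministic model over the reduced (OMIB-based) state space with discretized, hashable states; all names are illustrative. The key idea is to replay Bellman backups in order of expected value change rather than sweeping the whole state space uniformly:

```python
import heapq
from collections import defaultdict

def sweep(transitions, predecessors, Q, start, gamma=0.95, theta=1e-3, n_backups=20):
    """transitions: (s, a) -> (r, s_next), the learned model;
    predecessors: s -> set of (s_pre, a_pre) pairs leading into s;
    Q: defaultdict(dict) of action values; start: the (s, a) pair just visited."""
    pq = [(0.0, start)]                 # (negative priority, (s, a))
    done = 0
    while pq and done < n_backups:
        _, (s, a) = heapq.heappop(pq)
        r, s_next = transitions[(s, a)]
        Q[s][a] = r + gamma * max(Q[s_next].values(), default=0.0)
        done += 1
        # Re-prioritize predecessors whose estimates may now be stale.
        for (sp, ap) in predecessors.get(s, ()):
            rp, _ = transitions[(sp, ap)]
            p = abs(rp + gamma * max(Q[s].values(), default=0.0) - Q[sp].get(ap, 0.0))
            if p > theta:
                heapq.heappush(pq, (-p, (sp, ap)))
    return Q
```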

Transient stability emergency control combining open-loop and closed-loop technique
Ruiz-Vega, Daniel; Glavic, Mevludin; Ernst, Damien

in Proceedings of the IEEE Power Engineering Society General Meeting (2003)

An on-line transient stability emergency control approach is proposed, which couples an open-loop and a closed-loop emergency control technique. The open-loop technique uses on-line transient stability assessment to adapt the settings of automatic system protection schemes to the current operating conditions. The closed-loop technique, on the other hand, uses measurements to design and trigger countermeasures after the contingency has actually happened, and then continues monitoring in a closed-loop fashion. The approach aims at combining the advantages of event-based and measurement-based system protection schemes, namely speed of action and robustness with respect to uncertainties in system modeling. It can also comply with economic criteria.
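
A purely conceptual sketch of how the two techniques interlock; the paper specifies no code, so every function, signal, and threshold below is a hypothetical placeholder:

```python
def arm_open_loop_protection(operating_point):
    """Open-loop part: on-line TSA adapts protection settings pre-contingency."""
    return {"shedding_level": 0.2 if operating_point["stressed"] else 0.1}

def closed_loop_countermeasure(measurements):
    """Closed-loop part: decide an action from post-contingency measurements."""
    return "shed_generation" if measurements["stability_margin"] < 0 else None

def emergency_control(read_measurements, apply_action, operating_point):
    # Before any event: adapt protection settings to current conditions.
    apply_action(("arm", arm_open_loop_protection(operating_point)))
    while True:                                  # after the contingency, act on
        m = read_measurements()                  # measurements and keep
        action = closed_loop_countermeasure(m)   # monitoring in a closed loop
        if action is None:
            break                                # stability regained
        apply_action(action)
```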

Using artificial neural networks to estimate rotor angles and speeds from phasor measurements
Del Angel, Alberto; Glavic, Mevludin; Wehenkel, Louis

(2003)

This paper deals with an improved use of phasor measurements. In particular, it focuses on the development of a technique for estimating generator rotor angle and speed from phasor measurement units, for real-time transient stability assessment and control. Two multilayered feed-forward artificial neural networks are used for this purpose: one for the estimation of the rotor angle and another for the estimation of the rotor speed. Validation was carried out by simulation, because techniques for direct measurement were not available. Results obtained on a simple one-machine infinite-bus system are presented and compared against those obtained using analytical formulas derived from the generator classical model.
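
A minimal sketch of the two-network setup, assuming input features derived from PMU phasors; the paper predates scikit-learn, so the regressor choice, architecture, and all names here are assumptions rather than the authors' implementation:

```python
from sklearn.neural_network import MLPRegressor

def train_estimators(phasor_features, rotor_angles, rotor_speeds):
    """phasor_features: (n, d) array of quantities derived from PMU phasors;
    one feed-forward network per target, mirroring the paper's setup."""
    angle_net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000)
    speed_net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000)
    angle_net.fit(phasor_features, rotor_angles)
    speed_net.fit(phasor_features, rotor_speeds)
    return angle_net, speed_net   # angle_net.predict(x), speed_net.predict(x)
```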
