References of "Ernst, Damien"
Full Text
Peer Reviewed
Optimal Assignment of Off-Peak Hours to Lower Curtailments in the Distribution Network
Merciadri, Luca ULg; Mathieu, Sébastien ULg; Ernst, Damien ULg et al

in Proceedings of the 5th European Innovative Smart Grid Technologies (ISGT) (2014, October)

We consider a price signal with two settings: an off-peak tariff and an on-peak tariff. Some loads are connected to specific electricity meters which allow the consumption of power only during off-peak periods. Historically, off-peak periods were located during the night and on-peak periods during the day. Changing the assignment of off-peak periods is an easy way for distribution system operators to access the flexibility of small consumers. This solution can be implemented quickly, as the necessary infrastructure already exists in some countries. We propose a mixed-integer linear model to optimally assign the off-peak hours so as to minimize a societal cost. This cost aggregates the cost of electricity, the financial losses due to energy curtailment of photovoltaic installations, and the loads' well-being. Our model considers automatic tripping of inverters and the constraints of the electrical distribution network. Simulation results show that the new disposition of off-peak hours could significantly reduce the photovoltaic energy curtailed in the summer.
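
For intuition, the combinatorial core of such an assignment can be written as a very small mixed-integer program. The sketch below is a toy placement problem, not the authors' model (which also includes network constraints, automatic inverter tripping and the loads' well-being): it simply places an assumed number of off-peak hours where a hypothetical PV surplus profile is largest.

```python
# Toy sketch of the combinatorial core of the off-peak assignment problem.
# All data below are illustrative assumptions, not taken from the paper.
import pulp

HOURS = range(24)
N_OFF_PEAK = 8  # assumed number of off-peak hours granted per day
# Hypothetical forecast of the midday PV surplus that would otherwise be curtailed (kWh).
surplus_kwh = [max(0.0, 5.0 - abs(h - 13)) for h in HOURS]

prob = pulp.LpProblem("off_peak_assignment", pulp.LpMaximize)
x = pulp.LpVariable.dicts("off_peak", HOURS, cat="Binary")  # 1 if hour h is off-peak

# Place the off-peak hours where the PV surplus is largest, so that the loads
# wired to off-peak meters absorb it instead of it being curtailed.
prob += pulp.lpSum(surplus_kwh[h] * x[h] for h in HOURS)
prob += pulp.lpSum(x[h] for h in HOURS) == N_OFF_PEAK

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(sorted(h for h in HOURS if x[h].value() == 1))
```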

Full Text
Peer Reviewed
Mathematical modeling of HIV dynamics after antiretroviral therapy initiation: A clinical research study
Moog, Claude; Rivadeneira, Pablo; Stan, Guy-Bart et al

in AIDS Research and Human Retroviruses (2014), 30(9), 831-834

Immunological failure is identified from the estimation of certain parameters of a mathematical model of HIV infection dynamics. This identification is supported by clinical research results from an original clinical trial. Standard clinical data, collected from infected patients starting Highly Active Anti-Retroviral Therapy (HAART) just after the first month following therapy initiation, were used to carry out the model identification. The early diagnosis is shown to be consistent with the patients' monitoring after six months.
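
For context, the kind of mathematical model referred to is typically a small system of ordinary differential equations for target cells, infected cells and free virus. The sketch below simulates such a standard three-compartment model under therapy with scipy; the equations are the classical ones from the HIV modelling literature, and the parameter values (including the drug efficacy) are purely illustrative, not those identified in the study.

```python
# Standard three-compartment HIV dynamics model (target cells T, infected cells I,
# free virus V). Parameter values are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

lam, d, beta, delta, p, c = 10.0, 0.01, 2e-5, 0.5, 100.0, 3.0  # assumed values
eps = 0.7  # assumed drug efficacy after therapy initiation

def hiv(t, y):
    T, I, V = y
    dT = lam - d * T - (1 - eps) * beta * T * V
    dI = (1 - eps) * beta * T * V - delta * I
    dV = p * I - c * V
    return [dT, dI, dV]

sol = solve_ivp(hiv, (0.0, 180.0), [1000.0, 10.0, 1e4], t_eval=np.linspace(0, 180, 181))
# sol.y[2] is the simulated viral load; fitting (lam, d, beta, ...) to measured
# CD4 counts and viral loads is what the identification step amounts to.
```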

Full Text
Peer Reviewed
Global power grids for harnessing world renewable energy
Chatzivasileiadis, Spyros; Ernst, Damien ULg; Andersson, Göran

in Jones, Lawrence (Ed.) Renewable Energy Integration: Practical Management of Variability, Uncertainty and Flexibility in Power Grids (2014)

The Global Grid advocates the connection of all regional power systems into one electricity transmission system spanning the whole globe. Power systems are currently forming larger and larger interconnections. Environmental awareness and increased electricity consumption lead to more investment in renewable energy sources, which are abundant in remote locations (offshore or in deserts). The Global Grid will facilitate the transmission of this “green” electricity to load centers, serving as a backbone. This chapter elaborates on the concept, presenting four stages that could gradually lead to the development of a globally interconnected power network. Quantitative analyses are carried out for all stages, demonstrating that a Global Grid is both technically feasible and economically competitive. Real price data from Europe and the USA are used to identify the potential of intercontinental electricity trade, showing that substantial profits can be generated through such interconnections.
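
The intercontinental trade potential mentioned above can be illustrated with a back-of-the-envelope calculation: given two hourly price series and an HVDC link of fixed capacity and losses, power flows each hour from the cheaper to the more expensive region. The sketch below uses synthetic prices, and the capacity and loss figures are assumptions, not values from the chapter.

```python
# Back-of-the-envelope arbitrage over a hypothetical intercontinental HVDC link.
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(8760)
price_eu = 50 + 15 * np.sin(hours * 2 * np.pi / 24) + rng.normal(0, 5, 8760)
price_us = 45 + 15 * np.sin((hours - 7) * 2 * np.pi / 24) + rng.normal(0, 5, 8760)

capacity_mw = 3000.0   # assumed link capacity
loss = 0.10            # assumed transmission losses

# Each hour, ship from the cheap side to the expensive side; the profit per MWh
# is the delivered (loss-adjusted) price minus the purchase price, if positive.
hourly_profit = capacity_mw * np.maximum(
    0.0, np.maximum(price_eu, price_us) * (1 - loss) - np.minimum(price_eu, price_us))
print(f"annual arbitrage value: {hourly_profit.sum() / 1e6:.1f} M EUR (toy numbers)")
```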

Full Text
Peer Reviewed
A quantitative analysis of the effect of flexible loads on reserve markets
Mathieu, Sébastien ULg; Louveaux, Quentin ULg; Ernst, Damien ULg et al

in Proceedings of the 18th Power Systems Computation Conference (PSCC) (2014, August)

We propose and analyze a day-ahead reserve market model that handles bids from flexible loads. This pool market model takes into account the fact that a load modulation in one direction must usually be compensated later by a modulation of the same magnitude in the opposite direction. Our analysis takes into account the gaming possibilities of producers and retailers controlling load flexibility in the day-ahead energy and reserve markets and in the imbalance settlement. This analysis is carried out by an agent-based approach where, for every round, each actor uses linear programs to maximize its profit according to forecasts of the prices. The procurement of reserves is assumed to be determined, for each period, as a fixed percentage of the total consumption cleared in the energy market for the same period. The results show that the provision of reserves by flexible loads has a negligible impact on energy market prices but markedly decreases the cost of reserve procurement. However, as the rate of flexible loads increases, the system operator has to rely more and more on non-contracted reserves, which may cancel out the benefits made in the procurement of reserves.
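
The per-round optimization of each actor mentioned above is a linear program. The sketch below is a heavily simplified stand-in, not the paper's agent model: a single flexible load splits its modulation into upward and downward parts, earns an assumed reserve price on the offered volume, pays or receives an assumed energy price for the deviation, and must be energy-neutral over the day (a crude version of the compensation constraint).

```python
# Simplified profit maximisation of one flexible load over 24 periods (toy LP).
import numpy as np
from scipy.optimize import linprog

T = 24
energy_price = 40 + 20 * np.sin(np.arange(T) * 2 * np.pi / T)   # assumed forecast (EUR/MWh)
reserve_price = np.full(T, 8.0)                                 # assumed forecast (EUR/MW)
m_max = 5.0                                                     # maximum modulation (MW)

# Decision variables x = [u_1..u_T, d_1..d_T]: upward and downward modulation.
# Maximise reserve revenue on u+d minus net energy cost on u-d -> minimise the negative.
c = np.concatenate([-(reserve_price - energy_price), -(reserve_price + energy_price)])
# Energy neutrality: total upward modulation equals total downward modulation.
A_eq = np.concatenate([np.ones(T), -np.ones(T)]).reshape(1, -1)
b_eq = [0.0]
# A real model would also forbid offering up and down in the same period (binaries).
A_ub = np.hstack([np.eye(T), np.eye(T)])   # u_t + d_t <= m_max
b_ub = np.full(T, m_max)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, m_max))
print(f"forecast profit: {-res.fun:.1f} EUR (toy data)")
```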

Full Text
Peer Reviewed
Relaxations for multi-period optimal power flow problems with discrete decision variables
Gemine, Quentin ULg; Ernst, Damien ULg; Louveaux, Quentin ULg et al

in Proceedings of the 18th Power Systems Computation Conference (PSCC'14) (2014, August)

We consider a class of optimal power flow (OPF) applications where some loads offer a modulation service in exchange for an activation fee. These applications can be modeled as multi-period formulations of the OPF with discrete variables that define mixed-integer non-convex mathematical programs. We propose two types of relaxations to tackle these problems. One is based on a Lagrangian relaxation and the other is based on a network flow relaxation. Both relaxations are tested on several benchmarks and, although they provide a comparable dual bound, the solutions derived from the network flow relaxation violate the constraints significantly less.
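
As a reminder of what the first type of relaxation looks like in general, the sketch below is a generic subgradient scheme for a Lagrangian relaxation (coupling constraints dualized, multipliers updated from the constraint violations of the relaxed solutions); it is not the paper's specific decomposition, and the subproblem solver is a placeholder callable.

```python
# Generic subgradient scheme for a Lagrangian relaxation of
#   min f(x)  s.t.  g(x) <= 0,  x in X,
# where solve_relaxed(lam) minimises f(x) + lam @ g(x) over the "easy" set X
# (e.g. period-by-period subproblems once the coupling constraints are dualised).
import numpy as np

def lagrangian_relaxation(solve_relaxed, g, n_constraints, n_iters=100, step0=1.0):
    lam = np.zeros(n_constraints)            # dual multipliers (>= 0 for inequalities)
    best_dual = -np.inf
    for k in range(1, n_iters + 1):
        x, dual_value = solve_relaxed(lam)   # relaxed minimiser and its objective value
        best_dual = max(best_dual, dual_value)   # every dual value is a lower bound
        subgrad = g(x)                           # constraint violations of the relaxed solution
        lam = np.maximum(0.0, lam + (step0 / k) * subgrad)  # diminishing-step update
    return best_dual, lam
```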

Full Text
Peer Reviewed
Simple connectome inference from partial correlation statistics in calcium imaging
Sutera, Antonio ULg; Joly, Arnaud ULg; François-Lavet, Vincent ULg et al

in Proceedings of Connectomics 2014 (ECML 2014) (2014, June)

In this work, we propose a simple yet effective solution to the problem of connectome inference in calcium imaging data. The proposed algorithm consists of two steps. First, the raw signals are processed to detect neural peak activities. Second, the degree of association between neurons is inferred from partial correlation statistics. This paper summarises the methodology that led us to win the Connectomics Challenge, proposes a simplified version of our method, and finally compares our results with those of other inference methods.
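
The two steps can be sketched in a few lines of numpy. This is a simplified rendering of the approach, not the exact challenge-winning pipeline (whose signal processing is more elaborate): hard-threshold the increments of the fluorescence signals to get peak activities, then read pairwise association strengths off the inverse covariance (precision) matrix.

```python
# Simplified two-step connectome inference: peak detection + partial correlations.
import numpy as np

def infer_connectome(fluorescence, peak_threshold=0.1, ridge=1e-3):
    # fluorescence: array of shape (n_timesteps, n_neurons)
    activity = (np.diff(fluorescence, axis=0) > peak_threshold).astype(float)  # crude peak detection
    cov = np.cov(activity, rowvar=False) + ridge * np.eye(activity.shape[1])   # regularised covariance
    precision = np.linalg.inv(cov)
    d = np.sqrt(np.diag(precision))
    partial_corr = -precision / np.outer(d, d)        # partial correlation formula
    np.fill_diagonal(partial_corr, 0.0)
    return np.abs(partial_corr)   # symmetric connection score for each pair of neurons
```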

Full Text
Peer Reviewed
Distributed Model-free Control of Photovoltaic Units for Mitigating Overvoltages in Low-Voltage Networks
Aristidou, Petros ULg; Olivier, Frédéric ULg; Hervas, Maria Emilia et al

in Proc. of CIRED 2014 workshop (2014, June)

In this paper, a distributed model-free control scheme is proposed to mitigate overvoltage problems caused by high photovoltaic generation in low-voltage feeders. The distributed controllers are implemented on the photovoltaic inverters and modulate the active and reactive power injected into the network. In particular, they direct photovoltaic units first to consume reactive power and then, if necessary, to curtail active power generation in order to reduce high voltages in the feeder.
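
The per-inverter behaviour can be pictured with the schematic rule below; thresholds, gains and limits are made up, and the actual controllers of the paper operate iteratively along the feeder, so this is only an illustration of the "reactive power first, then active power curtailment" logic.

```python
# Schematic local overvoltage rule for one PV inverter (all thresholds assumed).
def inverter_setpoints(v_pu, p_avail_kw, q_max_kvar, v_max=1.05, k_q=50.0, k_p=100.0):
    """Return (active power setpoint kW, reactive power setpoint kvar) for one inverter."""
    if v_pu <= v_max:
        return p_avail_kw, 0.0                  # no overvoltage: inject all available power
    overshoot = v_pu - v_max
    q = -min(q_max_kvar, k_q * overshoot)       # step 1: absorb reactive power
    p = p_avail_kw
    if k_q * overshoot > q_max_kvar:            # step 2: reactive capability exhausted,
        p = max(0.0, p_avail_kw - k_p * (overshoot - q_max_kvar / k_q))  # curtail active power
    return p, q

# Example: a feeder-end inverter seeing 1.08 pu with 5 kW available and 3 kvar capability.
print(inverter_setpoints(1.08, 5.0, 3.0))
```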

Full Text
Peer Reviewed
Bayes Adaptive Reinforcement Learning versus Off-line Prior-based Policy Search: an Empirical Comparison
Castronovo, Michaël ULg; Ernst, Damien ULg; Fonteneau, Raphaël ULg

in Proceedings of the 23rd annual machine learning conference of Belgium and the Netherlands (BENELEARN 2014) (2014, June)

This paper addresses the problem of decision making in unknown finite Markov decision processes (MDPs). The uncertainty about the MDPs is modeled using a prior distribution over a set of candidate MDPs. The performance criterion is the expected sum of discounted rewards collected over an infinite-length trajectory. Time constraints are defined as follows: (i) an off-line phase with a given time budget can be used to exploit the prior distribution and (ii) at every time step of the on-line phase, decisions have to be computed within a given time budget. In this setting, we compare two decision-making strategies: OPPS, a recently proposed meta-learning scheme which mainly exploits the off-line phase to perform policy search, and BAMCP, a state-of-the-art model-based Bayesian reinforcement learning algorithm, which mainly exploits the on-line time budget. We empirically compare these approaches in a real Bayesian setting by computing their performances over a large set of problems. To the best of our knowledge, this is the first time that such a comparison has been carried out in the reinforcement learning literature. Several settings are considered by varying the prior distribution and the distribution from which test problems are drawn. The main finding of these experiments is that there may be a significant benefit to having an off-line prior-based optimization phase in the case of informative and accurate priors, especially when on-line time constraints are tight.
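
The "real Bayesian setting" evaluation amounts to averaging discounted returns over many MDPs drawn from a test distribution. A bare-bones version of that protocol is sketched below; the agent and MDP interfaces (act, observe, reset, step) are placeholders, not the BAMCP or OPPS implementations.

```python
# Bare-bones Bayesian evaluation protocol: average discounted return over test MDPs.
import numpy as np

def evaluate(agent_factory, draw_test_mdp, n_mdps=500, horizon=200, gamma=0.95, seed=0):
    rng = np.random.default_rng(seed)
    returns = []
    for _ in range(n_mdps):
        mdp = draw_test_mdp(rng)       # sample an MDP from the test distribution
        agent = agent_factory()        # fresh agent (its prior may differ from the test distribution)
        state = mdp.reset()
        ret = 0.0
        for t in range(horizon):       # truncated surrogate for the infinite horizon
            action = agent.act(state)  # must respect the on-line time budget
            state, reward = mdp.step(action)
            agent.observe(state, reward)
            ret += gamma ** t * reward
        returns.append(ret)
    return np.mean(returns), np.std(returns) / np.sqrt(n_mdps)
```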

Full Text
Peer Reviewed
Apprentissage par renforcement bayésien versus recherche directe de politique hors-ligne en utilisant une distribution a priori: comparaison empirique
Castronovo, Michaël ULg; Ernst, Damien ULg; Fonteneau, Raphaël ULg

in Proceedings des 9èmes Journées Francophones de Planification, Décision et Apprentissage (2014, May)

This paper addresses the problem of sequential decision making in finite, unknown Markov decision processes (MDPs). The lack of knowledge about the MDP is modeled as a probability distribution over a set of candidate MDPs, known a priori. The performance criterion is the expected sum of discounted rewards over an infinite trajectory. Alongside the optimality criterion, the computation-time constraints are formalized rigorously. First, an "off-line" phase preceding the interaction with the unknown MDP gives the agent the opportunity to exploit the prior distribution for a limited time. Then, during the interaction with the MDP, at each time step, the agent must take a decision within a fixed, constrained amount of time. In this context, we compare two decision-making strategies: OPPS, a recent approach that essentially exploits the off-line phase to select a policy from a set of candidate policies, and BAMCP, a recent Bayesian on-line planning approach. We compare these approaches empirically in a Bayesian setting, in the sense that we evaluate their performance over a large set of problems drawn according to a test distribution. To our knowledge, these are the first experimental tests of this kind in reinforcement learning. We study several scenarios by considering various distributions that can be used both as prior distributions and as test distributions. The results obtained suggest that exploiting a prior distribution during an off-line optimization phase is a non-negligible advantage for accurate prior distributions and/or when constrained to small on-line time budgets.

Full Text
Peer Reviewed
Gestion active d’un réseau de distribution d’électricité : formulation du problème et benchmark
Gemine, Quentin ULg; Ernst, Damien ULg; Cornélusse, Bertrand ULg

in Proceedings des 9èmes Journées Francophones de Planification, Décision et Apprentissage (2014, May)

In order to operate an electricity distribution network reliably and efficiently, that is, to respect the physical constraints while avoiding prohibitive reinforcement costs, it becomes necessary to resort to active network management strategies. These strategies, made necessary in particular by the rise of distributed generation, rely on short-term control policies acting on the power level of devices that produce or consume electricity. While a simple solution would consist of modulating the output of the generators downwards, it nevertheless seems more interesting to shift consumption to the appropriate moments in order to make the best use of the renewable energy sources on which these generators generally rely. Such a means of control, however, introduces a temporal coupling into the problem, leading to a non-linear, sequential optimization problem under uncertainty with mixed variables. To foster research in this very complex domain, we propose a generic formalization of the active management problem of a medium-voltage (MV) distribution network. More specifically, this formalization takes the form of a Markov decision process. In this article, we also present a specification of this decision model for a 75-node network and a given set of modulation services. The resulting test instance is available at http://www.montefiore.ulg.ac.be/~anm/ and aims to measure and compare the performance of the solution techniques that will be developed.
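
As an indication of what such a formalization implies in practice, a Markov-decision-process view of active network management can be skeletonized as below. The class and attribute names are hypothetical; the actual benchmark (75-node network, given modulation services) is the test instance distributed at the URL above.

```python
# Hypothetical skeleton of an active-network-management decision process.
# Names and fields are illustrative; the real test instance is the one
# published at http://www.montefiore.ulg.ac.be/~anm/.
from dataclasses import dataclass

@dataclass
class AnmState:
    period: int              # time-step index within the horizon
    injections_mw: list      # current production/consumption at each network node
    active_services: list    # modulation services already activated (temporal coupling)

class AnmProcess:
    def actions(self, state):
        """Available controls: curtailment instructions and load modulation services."""
        raise NotImplementedError

    def step(self, state, action, rng):
        """Sample the stochastic next state (load/PV realisations) and return
        (next_state, cost), where cost aggregates curtailment and activation fees."""
        raise NotImplementedError
```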

Full Text
Peer Reviewed
Toggling a genetic switch using reinforcement learning
Sootla, Aivar; Strelkowa, Natalja; Ernst, Damien ULg et al

in Proceedings of the 9th French Meeting on Planning, Decision Making and Learning (2014, May)

In this paper, we consider the problem of optimal exogenous control of gene regulatory networks. Our approach consists in adapting an established reinforcement learning algorithm called fitted Q iteration. This algorithm infers the control law directly from measurements of the system's response to external control inputs, without the use of a mathematical model of the system. The measurement data set can either be collected from wet-lab experiments or be artificially created by computer simulations of dynamical models of the system. The algorithm is applicable to a wide range of biological systems due to its ability to deal with nonlinear and stochastic system dynamics. To illustrate the application of the algorithm to a gene regulatory network, the regulation of the toggle switch system is considered. The control objective of this problem is to drive the concentrations of two specific proteins to a target region of the state space.
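
Fitted Q iteration itself is easy to sketch: from a batch of one-step transitions it solves a sequence of regression problems whose targets bootstrap on the previous Q estimate. The sketch below uses extra-trees regression, as in the original fitted Q iteration work; the toggle-switch specifics (protein concentrations, control inputs) enter only through the transition tuples, and the hyperparameters are illustrative.

```python
# Fitted Q iteration from a batch of transitions (state, action, reward, next_state).
# Each state is assumed to be a vector (e.g. the two protein concentrations).
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def fitted_q_iteration(transitions, actions, n_iterations=50, gamma=0.95):
    S = np.array([s for s, a, r, s2 in transitions], dtype=float)
    A = np.array([a for s, a, r, s2 in transitions], dtype=float).reshape(-1, 1)
    R = np.array([r for s, a, r, s2 in transitions], dtype=float)
    S2 = np.array([s2 for s, a, r, s2 in transitions], dtype=float)
    X = np.hstack([S, A])
    q = None
    for _ in range(n_iterations):
        if q is None:
            y = R                                      # Q_1 = immediate reward
        else:
            q_next = np.column_stack([                 # max_a' Q_{N-1}(s', a')
                q.predict(np.hstack([S2, np.full((len(S2), 1), a)])) for a in actions])
            y = R + gamma * q_next.max(axis=1)
        q = ExtraTreesRegressor(n_estimators=50, random_state=0).fit(X, y)
    return q   # greedy policy: argmax over a of q.predict([state, a])
```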

Full Text
Peer Reviewed
Estimating the revenues of a hydrogen-based high-capacity storage device: methodology and results
François-Lavet, Vincent ULg; Fonteneau, Raphaël ULg; Ernst, Damien ULg

in Proceedings des 9èmes Journées Francophones de Planification, Décision et Apprentissage (2014, May)

This paper proposes a methodology to estimate the maximum revenue that can be generated by a company that operates a high-capacity storage device to buy or sell electricity on the day-ahead electricity market. The methodology exploits the Dynamic Programming (DP) principle and is specified for hydrogen-based storage devices that use electrolysis to produce hydrogen and fuel cells to generate electricity from hydrogen. Experimental results are generated using historical energy price data from the Belgian market. They show how the storage capacity and other parameters of the storage device influence the optimal revenue. The main conclusion drawn from the experiments is that it may be interesting to invest in large storage tanks to exploit the inter-seasonal price fluctuations of electricity.
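
The dynamic-programming backbone of such a revenue estimate can be sketched with a discretized storage level and hourly day-ahead prices. The sketch below is a generic storage-arbitrage DP with assumed efficiencies, power limit and discretization, not the paper's detailed electrolyser/fuel-cell model.

```python
# Generic storage-arbitrage value estimation by backward dynamic programming.
# Efficiencies, power limit and discretisation are illustrative assumptions.
import numpy as np

def max_revenue(prices, capacity_mwh, power_mw, eta_in=0.7, eta_out=0.5, n_levels=21):
    levels = np.linspace(0.0, capacity_mwh, n_levels)   # discretised stored energy
    value = np.zeros(n_levels)                          # value-to-go after the last hour
    for price in reversed(prices):                      # backward in time
        new_value = np.full(n_levels, -np.inf)
        for i, s in enumerate(levels):
            for j, s2 in enumerate(levels):
                delta = s2 - s                          # energy added to the tank this hour
                if delta >= 0.0:                        # charge: buy delta/eta_in on the market
                    grid = delta / eta_in
                    cash = -price * grid
                else:                                   # discharge: sell -delta*eta_out
                    grid = -delta * eta_out
                    cash = price * grid
                if grid > power_mw:                     # electrolyser / fuel-cell power limit
                    continue
                new_value[i] = max(new_value[i], cash + value[j])
        value = new_value
    return value[0]   # maximum revenue starting from an empty tank (any final level allowed)
```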

Full Text
Peer Reviewed
Optimized look-ahead tree policies: a bridge between look-ahead tree policies and direct policy search
Jung, Tobias ULg; Wehenkel, Louis ULg; Ernst, Damien ULg et al

in International Journal of Adaptive Control and Signal Processing (2014), 28(3-5), 255-289

Direct policy search (DPS) and look-ahead tree (LT) policies are two popular techniques for solving difficult sequential decision-making problems. They are both simple to implement, widely applicable without making strong assumptions on the structure of the problem, and capable of producing high-performance control policies. However, computationally both of them are, each in their own way, very expensive. DPS can require huge offline resources (effort required to obtain the policy) to first select an appropriate space of parameterized policies that works well for the targeted problem, and then to determine the best values of the parameters via global optimization. LT policies do not require any offline resources; however, they typically require huge online resources (effort required to calculate the best decision at each step) in order to grow trees of sufficient depth. In this paper, we propose optimized look-ahead trees (OLT), a model-based policy learning scheme that lies at the intersection of DPS and LT. In OLT, the control policy is represented indirectly through an algorithm that at each decision step develops, as in LT and using a model of the dynamics, a small look-ahead tree until a prespecified online budget is exhausted. Unlike LT, the development of the tree is not driven by a generic heuristic; rather, the heuristic is optimized for the target problem and implemented as a parameterized node scoring function learned offline via DPS. We experimentally compare OLT with pure DPS and pure LT variants on optimal control benchmark domains. The results show that the LT-based representation is a versatile way of compactly representing policies in a DPS scheme (which results in OLT being easier to tune and having lower offline complexity than pure DPS); while at the same time, DPS helps to significantly reduce the size of the look-ahead trees that are required to take high-quality decisions (which results in OLT having lower online complexity than pure LT). Moreover, OLT produces overall better performing policies than pure DPS and pure LT and also results in policies that are robust with respect to perturbations of the initial conditions.
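
The online part of OLT can be sketched as follows. The offline optimization of the scoring weights (the DPS part) is not shown, and the node features, simulator interface and weights are placeholders, so this is an illustration of the idea rather than the paper's implementation.

```python
# One decision of an optimized look-ahead tree policy (sketch).
import numpy as np

def olt_decision(state, model_step, actions, theta, features, budget=100, gamma=0.95):
    """model_step(s, a) -> (next_state, reward) simulates the dynamics;
    features(node) maps a leaf node to a feature vector; the linear scoring
    weights `theta` are assumed to have been optimised offline (e.g. by DPS)."""
    # A node is (state, discounted_return_so_far, depth, first_action_on_the_path).
    leaves = [(state, 0.0, 0, None)]
    best_return, best_first_action = -np.inf, actions[0]
    for _ in range(budget):
        # Expand the leaf that the learned heuristic scores highest.
        scores = [theta @ features(n) for n in leaves]
        s, ret, depth, first = leaves.pop(int(np.argmax(scores)))
        for a in actions:
            s2, r = model_step(s, a)
            ret2 = ret + (gamma ** depth) * r
            child = (s2, ret2, depth + 1, a if first is None else first)
            leaves.append(child)
            if ret2 > best_return:
                best_return, best_first_action = ret2, child[3]
    return best_first_action   # apply this action, then re-plan at the next step
```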

Full Text
Peer Reviewed
A learning procedure for sampling semantically different valid expressions
St-Pierre, David Lupien; Maes, Francis; Ernst, Damien ULg et al

in International Journal of Artificial Intelligence (2014), 12(1), 18-35

A large number of problems can be formalized as finding the best symbolic expression to maximize a given numerical objective. Most approaches to approximately solving such problems rely on random exploration of the search space. This paper focuses on how this random exploration should be performed to take into account expression redundancy and invalid expressions. We propose a learning algorithm that, given the set of available constants, variables and operators and given the target finite number of trials, computes a probability distribution that maximizes the expected number of semantically different, valid, generated expressions. We illustrate the use of our approach on both medium-scale and large-scale expression spaces, and empirically show that such optimized distributions significantly outperform the uniform distribution in terms of the diversity of generated expressions. We further test the method in combination with the recently proposed nested Monte-Carlo algorithm on a set of benchmark symbolic regression problems and demonstrate its benefit in terms of reducing the number of required calls to the objective function.
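
The notion of "semantically different, valid expressions" can be made concrete by fingerprinting each sampled expression with its values on a fixed set of test points: two expressions with the same fingerprint are redundant, and non-finite or failing evaluations are invalid. The sketch below measures how many distinct valid expressions a given sampling distribution yields, using hypothetical primitives; it is not the learning algorithm of the paper, which optimizes the distribution itself.

```python
# Count semantically distinct, valid expressions produced by a sampling distribution.
import random
import numpy as np

OPS = ["+", "-", "*", "/"]
TERMINALS = ["x", "1.0", "2.0"]
X = np.linspace(-1.0, 1.0, 32)       # evaluation points used as a semantic fingerprint

def sample_expr(dist, depth=3):
    # dist["expand"]: probability of expanding a node; dist[op]: weight of each operator.
    if depth == 0 or random.random() > dist["expand"]:
        return random.choice(TERMINALS)
    op = random.choices(OPS, weights=[dist[o] for o in OPS])[0]
    return f"({sample_expr(dist, depth - 1)} {op} {sample_expr(dist, depth - 1)})"

def distinct_valid(dist, n_trials=2000, seed=0):
    random.seed(seed)
    fingerprints = set()
    for _ in range(n_trials):
        expr = sample_expr(dist)
        try:
            with np.errstate(all="ignore"):
                y = np.broadcast_to(eval(expr, {"x": X}), X.shape)
        except ZeroDivisionError:
            continue                                  # invalid expression
        if np.all(np.isfinite(y)):                    # valid expression
            fingerprints.add(tuple(np.round(y, 6)))   # semantic fingerprint
    return len(fingerprints)

uniform = {"expand": 0.6, "+": 1.0, "-": 1.0, "*": 1.0, "/": 1.0}
print(distinct_valid(uniform))
```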

Full Text
L'invité - Damien Ernst - "Nous allons vers une globalisation du marché de l'électricité"
Ernst, Damien ULg

Article for general public (2014)

In December 2013, Damien Ernst, professor at ULG, gave a talk at the CESW entitled "Vers une globalisation du marché de l'électricité. Quel rôle pour les acteurs du secteur belge de l'électricité?" ("Towards a globalization of the electricity market. What role for the actors of the Belgian electricity sector?"). Damien Ernst is a privileged observer of the Belgian energy sector, and more particularly of everything concerning the electricity sector. The author of numerous publications and studies, he has notably examined the prospects of renewable energy in Belgium. Damien Ernst is the guest of issue 120 of the magazine Wallonie. In his interview, he explains why the globalization of the electricity market is inevitable and what its consequences will be, for the companies of the sector and for Wallonia.

Full Text
Peer Reviewed
Lipschitz robust control from off-policy trajectories
Fonteneau, Raphaël ULg; Ernst, Damien ULg; Boigelot, Bernard ULg et al

in Proceedings of the 53rd IEEE Conference on Decision and Control (IEEE CDC 2014) (2014)

We study the minmax optimization problem introduced in [Fonteneau et al. (2011), "Towards min max reinforcement learning", Springer CCIS, vol. 129, pp. 61-77] for computing control policies for batch mode reinforcement learning in a deterministic setting with a fixed, finite optimization horizon. First, we state that the min part of this problem is NP-hard. We then provide two relaxation schemes. The first relaxation scheme works by dropping some constraints in order to obtain a problem that is solvable in polynomial time. The second relaxation scheme, based on a Lagrangian relaxation where all constraints are dualized, can also be solved in polynomial time. We theoretically show that both relaxation schemes provide better results than those given in [Fonteneau et al. (2011)].

Full Text
Power system transient stability preventive and emergency control
Ruiz-Vega, Daniel; Wehenkel, Louis ULg; Ernst, Damien ULg et al

in Savulescu, Savu (Ed.) Real-Time Stability in Power Systems 2nd Edition (2014)

A general approach to real-time transient stability control is described, yielding various complementary techniques: pure preventive, open-loop emergency, and closed-loop emergency controls. Recent progress on a global transient stability constrained optimal power flow is presented, yielding a scalable nonlinear programming formulation which makes it possible to take near-optimal decisions for preventive control with a computing budget corresponding to only a few runs of standard optimal power flow and time-domain simulations. These complementary techniques meet the stringent conditions imposed by real-life applications.
