References of "Brankart, Jean-Michel"
Comparison of different assimilation schemes in an operational assimilation system with Ensemble Kalman Filter
Yan, Yajing; Barth, Alexander; Beckers, Jean-Marie et al

Poster (2016)

In this paper, four assimilation schemes, an intermittent assimilation scheme (INT) and three incremental assimilation schemes (IAU 0, IAU 50 and IAU 100), are compared in the same assimilation experiments with a realistic eddy-permitting primitive-equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. The three IAU schemes differ in the position of the increment update window, which has the same size as the assimilation window; 0, 50 and 100 denote the degree of overlap between the increment update window and the current assimilation window. Sea surface height, sea surface temperature, and temperature profiles at depth collected between January and December 2005 are assimilated. Sixty ensemble members are generated by adding realistic noise to the forcing parameters related to temperature. Before the assimilation experiments, the ensemble is diagnosed and validated by comparing the ensemble spread with the model/observation difference and by rank histograms. The relevance of each assimilation scheme is evaluated through analyses of thermohaline variables and current velocities. The assimilation results are assessed against independent and semi-independent observations using both deterministic and probabilistic metrics. For deterministic validation, the ensemble means and ensemble spreads are compared to the observations in order to diagnose the ensemble distribution properties. For probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system in terms of reliability and resolution; the reliability is further decomposed into bias and dispersion by the reduced centered random variable (RCRV) score in order to investigate the reliability properties of the ensemble forecast system.
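
The two probabilistic scores mentioned above have standard empirical definitions. As a minimal illustration (not the code used in the study), the sketch below computes the RCRV bias and dispersion and an ensemble CRPS for synthetic data; the 60-member size mirrors the experiment, but all numbers and names are illustrative assumptions.

```python
import numpy as np

def rcrv(obs, ens, obs_err_std):
    """Reduced centered random variable (RCRV) diagnostics.

    obs         : (n_obs,) observed values
    ens         : (n_members, n_obs) ensemble of model equivalents
    obs_err_std : (n_obs,) observation error standard deviation

    Returns the bias (mean of y, ~0 for an unbiased ensemble) and the
    dispersion (std of y, ~1 for a correctly dispersed ensemble).
    """
    ens_mean = ens.mean(axis=0)
    ens_std = ens.std(axis=0, ddof=1)
    y = (obs - ens_mean) / np.sqrt(obs_err_std**2 + ens_std**2)
    return y.mean(), y.std(ddof=1)

def crps_ensemble(obs, ens):
    """Continuous ranked probability score averaged over observations,
    using the empirical estimator CRPS = E|X - o| - 0.5 E|X - X'|."""
    term1 = np.abs(ens - obs).mean(axis=0)
    term2 = 0.5 * np.abs(ens[:, None, :] - ens[None, :, :]).mean(axis=(0, 1))
    return (term1 - term2).mean()

# Synthetic example: 60 members, 50 observation points
rng = np.random.default_rng(0)
obs = rng.normal(size=50)
ens = obs + rng.normal(scale=0.5, size=(60, 50))
print(rcrv(obs, ens, obs_err_std=np.full(50, 0.5)))
print(crps_ensemble(obs, ens))
```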

The SANGOMA Tools for Data Assimilation
Nerger, Lars; Altaf, Umer; Barth, Alexander et al

Poster (2015)

The EU-funded project SANGOMA (Stochastic Assimilation of the Next Generation Ocean Model Applications) provides new developments in data assimilation to ensure that future operational systems can make use of state-of-the-art data-assimilation methods and related analysis tools. One task of SANGOMA is to develop a collection of common data-assimilation tools with a uniform interface, so that the tools are usable from different data assimilation systems. The development mainly targets tools that support ensemble-based data assimilation applications, such as the generation of perturbations, ensemble transformations, diagnostics, and further utilities. In addition, a selection of ensemble filter analysis steps is included. The tools are implemented in Fortran and as scripts for Matlab or Octave, and are provided as free open-source programs via the project web site [http://www.data-assimilation.net]. This contribution gives an overview of the tools available in the latest release (V1) of the SANGOMA tools as well as the plans for the next release.
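
As an illustration of the kind of utility the toolbox covers, the generic sketch below draws correlated ensemble perturbations from a prescribed covariance. It does not use or reproduce the SANGOMA interface; the function name and parameters are hypothetical.

```python
import numpy as np

def perturb_from_covariance(mean_state, cov, n_members, seed=None):
    """Draw n_members states ~ N(mean_state, cov) via an eigendecomposition.
    Generic illustration only; not the SANGOMA API."""
    rng = np.random.default_rng(seed)
    eigval, eigvec = np.linalg.eigh(cov)
    eigval = np.clip(eigval, 0.0, None)          # guard against round-off negatives
    noise = rng.standard_normal((n_members, len(mean_state)))
    return mean_state + noise * np.sqrt(eigval) @ eigvec.T

# 1-D example: Gaussian-shaped covariance, correlation length ~10 grid points
n = 100
idx = np.arange(n)
cov = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 10.0) ** 2)
ensemble = perturb_from_covariance(np.zeros(n), cov, n_members=60)
print(ensemble.shape)                            # (60, 100)
```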

Generation of analysis and consistent error fields using the Data Interpolating Variational Analysis (Diva)
Troupin, Charles; Barth, Alexander; Sirjacobs, Damien et al

in Ocean Modelling (2012), 52-53

The Data Interpolating Variational Analysis (Diva) is a method designed to interpolate irregularly spaced, noisy data onto any desired location, in most cases on regular grids. It combines a particular methodology, based on the minimisation of a cost function, with a numerically efficient method based on a finite-element solver. The cost function penalises both the misfit between the observations and the reconstructed field and the lack of regularity or smoothness of the field. The intrinsic advantages of the method are its natural way of taking into account topographic and dynamic constraints (coasts, advection, ...) and its capacity to handle the large data sets frequently encountered in oceanography. The method provides gridded fields in two dimensions, usually in horizontal layers; three-dimensional fields are obtained by stacking horizontal layers. In the present work, we summarise the background of the method and describe the possible methods to compute the error field associated with the analysis. In particular, we present new developments leading to a more consistent error estimation, by determining numerically the real covariance function in Diva, which is never formulated explicitly, contrary to Optimal Interpolation. The real covariance function is obtained by two concurrent executions of Diva, the first providing the covariance for the second. With this improvement, the error field is now fully consistent with the inherent background covariance in all cases. A two-dimensional application using salinity measurements in the Mediterranean Sea is presented. Applied to these measurements, Optimal Interpolation and Diva provided very similar gridded fields (correlation: 98.6%, RMS of the difference: 0.02). The method using the real covariance produces an error field similar to that of OI, except in the coastal areas.
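
For reference, the cost function referred to in the abstract is commonly written in the schematic form below, where the data weights mu_i and coefficients alpha_0, alpha_1, alpha_2 balance data misfit against field magnitude, gradients and curvature over the domain D; the notation is assumed here and not taken from the abstract itself.

```latex
% Schematic Diva cost function: misfit term plus smoothness (regularisation) term
J[\varphi] = \sum_{i=1}^{N_d} \mu_i \left[ d_i - \varphi(x_i, y_i) \right]^2
           + \int_D \left( \alpha_2 \, \nabla\nabla\varphi : \nabla\nabla\varphi
                          + \alpha_1 \, \nabla\varphi \cdot \nabla\varphi
                          + \alpha_0 \, \varphi^2 \right) \, dD
```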

Advanced Data Interpolating Variational Analysis. Application to climatological data
Troupin, Charles; Sirjacobs, Damien; Rixen, Michel et al

Poster (2011, April)

DIVA (Data Interpolating Variational Analysis) is a variational analysis tool designed to interpolate irregularly spaced, noisy data onto any desired location, in most cases on regular grids. It combines a particular methodology, based on the minimization of a functional, with a numerically efficient resolution method based on a finite-element solver. The intrinsic advantages of DIVA are its natural way of taking into account topographic and dynamic constraints (coasts, advection, ...) and its capacity to handle the large data sets frequently encountered in oceanography. In the present work, we describe various improvements to the variational analysis tool. The most significant advance is the development of a full error calculation, whereas until now only an approximate error-field estimate was available. The key issue is the numerical determination of the real covariance function in DIVA, which is not formulated explicitly. This is solved by two concurrent executions of DIVA, one providing the covariance for the other. The new calculation of the error field is now fully consistent with the inherent background covariance in all cases. The correlation length, which was previously set uniform over the computational domain, is now allowed to vary spatially. The efficiency of the tools for estimating the signal-to-noise ratio through generalized cross-validation has also been improved. Finally, a data quality-control method is implemented that allows possible outliers to be detected, based on statistics of the data-reconstruction misfit. The added value of these features is illustrated in the case of a large data set of salinity measurements in the Mediterranean Sea. Several analyses are performed with different parameters in order to demonstrate their influence on the interpolated fields; in particular, we examine the benefits of using the parameter optimization tools and the advection constraint. The results are validated by means of a subset of the data set aside for independent validation. The corresponding error fields are estimated using different methods and underline the role of the data coverage.
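
The misfit-based quality control mentioned above can be illustrated with a minimal sketch (not the actual DIVA implementation): observations whose data-reconstruction misfit falls far outside the misfit statistics are flagged as possible outliers. Function names and the threshold below are assumptions.

```python
import numpy as np

def flag_outliers(obs, analysis_at_obs, threshold=3.0):
    """Flag observations whose data-reconstruction misfit is anomalously large.

    Generic illustration of misfit-based quality control: misfits are
    standardised by their own spread, and points beyond `threshold`
    standard deviations are flagged as suspect.
    """
    misfit = obs - analysis_at_obs
    z = (misfit - misfit.mean()) / misfit.std(ddof=1)
    return np.abs(z) > threshold

# Synthetic example: two gross errors injected among otherwise good data
rng = np.random.default_rng(1)
truth = rng.normal(size=200)
obs = truth + rng.normal(scale=0.1, size=200)
obs[[5, 77]] += 4.0
suspect = flag_outliers(obs, truth)
print(np.where(suspect)[0])          # expected to include indices 5 and 77
```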

Advanced Data Interpolating Variational Analysis. Application to climatological data.
Troupin, Charles; Sirjacobs, Damien; Rixen, Michel et al

Poster (2011, March 21)
