Peer Reviewed
Modeling contamination of clay.
Boukpeti, N.; Charlier, Robert ULg; Hueckel, T.

in Proc. Int. Conf. on Coupled T-H-M-C Processes in Geosystems, GeoproC 2003 (2003, October)

Peer Reviewed
Modeling Daily Traffic Counts: Analyzing the Effects of Holidays
Cools, Mario ULg; Moons, Elke; Wets, Geert

in Sloboda, Brian (Ed.) Transportation Statistics (2009)

Modeling external and internal drug exposure: assessing their causal biomedical consequences
Comté, Laetitia ULg

Doctoral thesis (2012)

It is a well-known fact that many patients do not follow the treatment prescribed by their physician to the letter, but allow themselves considerable deviations. Yet at a time when therapies are increasingly sophisticated and when, for some diseases, strict compliance with the therapeutic regimen is essential, any departure from that regimen can compromise the efficacy of the treatment. Non-adherence to prescribed treatments has been studied scientifically since the second half of the twentieth century, and especially over the last thirty years. It has been shown to be one of the primary causes of the diversity of clinical responses to treatment. As the U.S. Surgeon General Everett Koop put it, "Drugs don't work in patients who don't take them." The advent of electronic pill dispensers has revolutionized research on patient adherence to prescribed regimens: these dispensers record the dosing history of each patient thanks to an electronic circuit in their cap that logs the date and time of each opening. Such new data had to be exploited optimally. How? To answer this question, it is first necessary to specify exactly what the concept of 'adherence to treatment' means. A patient's adherence to a treatment has been defined as a global measure of the conformity of that patient's dosing history with the history that would have been observed had the prescribed regimen been followed (Sackett and Haynes 1976). A global measure is most often insufficient for an in-depth study of cases of non-adherence.
To see why, consider a patient who must take a drug twice a day for 4 months. If he takes only the morning dose for the 4 months, his adherence will be evaluated at 50%. However, if he takes both doses correctly but only for 2 months and then stops the treatment, his adherence will also be evaluated at 50%. Yet these two types of behavior are likely to have very different pharmacological consequences. The concept of adherence was therefore separated into three components: initiation, execution of the therapeutic regimen, and persistence with the treatment. Thus, in the first example above, the execution of the regimen is incorrect but persistence is maximal (4 months), while in the second, the execution of the regimen is correct but persistence is not (2 months). Persistence is the total duration from initiation, corresponding to the first dose, to the discontinuation of the treatment. The execution of the therapeutic regimen is a more delicate notion to characterize because it can vary in different ways and throughout the treatment; it is therefore a multidimensional variable. It results from comparing the patient's dosing history, as long as the patient is engaged in the treatment, with the dosing history expected under the prescribed regimen. Unfortunately, the measurement of execution is too often reduced to a simple percentage (for example, the percentage of prescribed doses actually taken over a given time interval, or the percentage of days on which the prescribed number of doses was respected, etc.), which some researchers exploit by arbitrarily fixing a cut-off value that splits patients into two groups: 'good' or 'bad' executors.
This arbitrarily chosen cut-off, however, takes little account of an essential and ever-present question: 'How can one judge whether adherence is sufficient?'. To illustrate the weakness of this approach, Harrigan (2005) showed that, for HIV-positive patients, a high but imperfect drug exposure, which one might tend to judge sufficient, can prove more harmful in terms of treatment resistance than a lower exposure. Indeed, on the basis of prescription refill data, he showed that the risk of developing resistance to the treatment is highest when 80% to 90% of the prescribed doses are purchased. This is why, in the first part of this thesis, our goal was to study a whole series of ways of measuring exposure to treatment, in order to determine which one(s) would capture the most characteristics of the dosing history. After presenting the concept of adherence in the first chapter, we examined in the following chapter not only the classical percentages of doses taken, but also the variability of the timing of doses (taking care to distinguish the study of morning doses from that of evening doses), the distribution of the time intervals between successive doses, the occurrence of consecutive missed doses, etc. We thus obtained 26 variables summarizing the dosing history. We then sought to identify 3 groups of patients via Hartigan's K-means clustering method.
Since our goal was to characterize the execution of the treatment for the patients of each group by the most relevant of the 26 variables obtained, we proposed an algorithm based on the theory of multidimensional scaling that allows us to keep the main characteristics of the data of each group despite the reduction of the set of variables. This allowed us to show that the variables that are the principal sources of discrimination into patient groups relate to the quantity of doses taken. These same variables turned out to also explain the non-persistence of patients, as did some variables relating to the variability in the timing of doses. This supports the idea that poor execution could lead to discontinuation of the drug (non-persistence). These classification efforts were nevertheless not sufficient to obtain a clear characterization of the patient groups. Indeed, each patient can deviate from the prescribed treatment in multiple and diverse ways during his treatment (for example, missing the evening dose once, the morning dose another time, or the Saturday or Wednesday dose, or showing a large variability in dose timing over a certain period). In a second step, still looking for a measure of exposure to treatment capturing as many characteristics of the dosing history as possible, we investigated a measure of exposure via the drug concentration in the blood. This is therefore an 'internal' measure of the patient's exposure to treatment. It combines the knowledge of pharmacokinetics with the patient's dosing history (Vrijens et al. 2005b). Since this internal measure requires the use of a pharmacokinetic model, this type of model is presented in Chapter 3.
Then, in Chapter 4, we used this internal measure of exposure to compare two dosing regimens, once daily (QD) and twice daily (BID), for HIV-positive patients. We were able to show that the QD regimen leads to fewer missed doses than the BID regimen, but that any missed dose affects the blood drug concentration more severely than under the BID regimen (L. Comté et al. 2007). The second part of this thesis aims to integrate the measures of exposure into the statistical models used to evaluate the efficacy of a treatment. Classical statistical methods do not take into account the way the treatment was taken (exposure to treatment) and therefore only allow studying the average effect of the treatment as it was prescribed to the patients. Yet in some cases, and especially with long-term treatments (chronic diseases: HIV infection, diabetes, etc.), it becomes worthwhile, with a view to evaluating the effect of a treatment, to integrate into the models the way the patient complied with the drug regimen prescribed to him. A difficulty appears, however, since classical statistical regression models do not allow a direct causal interpretation of the effect of the measure of exposure on the clinical response to the treatment. Indeed, since exposure is only measured after randomization, it can itself be influenced by the clinical response (Lee et al. 1991; Goetghebeur and Pocock 1993). In the presence of such possible interactions, special models ('causal models') are necessary for an unbiased estimation of the effect of the dose taken.
To date, the methods based on structural mean models developed by Robins (Robins 1994; Fisher-Lapp and Goetghebeur 1999) are those that allow the most flexibility, but they are nevertheless little used in practice because of their greater complexity compared with classical statistical models. For this reason, they are also less often implemented in statistical software. Thus, in Chapter 5, we detailed the log-linear structural mean model and introduced a new diagnostic tool for linear and log-linear structural mean models. We then applied these models to a data set concerning patients suffering from gastric problems, randomized between a treatment and a placebo, both prescribed 'on demand'. This regimen, as its name indicates, asks patients to take the drug when symptoms appear, which constitutes a particularly representative case of the need to use a special model to avoid bias, since exposure depends on the clinical state of the patient (L. Comté et al. 2009). To complete this work, we combined, in Chapter 6, the internal measure of exposure to treatment (via pharmacokinetics) and the structural models, to measure the evolution of the viral load as a function of this internal exposure for HIV-positive patients who had never received treatment before. For this type of patient, the decrease in viral load is indeed a good indicator of the success of the treatment. It therefore becomes worthwhile to quantify the causal relationship between the internal exposure (a pharmacokinetic measure) and the decrease in viral load. Since we are interested in the evolution of this viral load over the course of the visits, the internal exposure is measured between each visit, that is, over time intervals.
Structural nested mean models are then detailed and used to estimate this causal relationship between the different sequences of internal exposures and the evolution of the viral load. We were thus able to show a substantial and significant reduction of the viral load attributable to a patient's internal exposure to the drug as long as the viral load is in its declining phase. This reduction is all the larger the higher the viral load was at the previous visit (L. Comté et al. 2011). In summary, we believe that our work helps to better target the challenge of adequately modeling exposure to treatment, and hence the usefulness of a measure internal to the patient. Moreover, it improves the understanding of the impact of exposure to treatment on the efficacy of a treatment.
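The two dosing histories contrasted earlier in this abstract (morning dose only for 4 months versus both daily doses for only 2 months) both score 50% on a global adherence percentage, which is exactly why execution and persistence are separated. A minimal sketch of that distinction, with made-up dosing histories:

```python
def summarize(history, doses_per_day=2):
    # history: doses actually taken on each prescribed day.
    # Execution is reduced here to the classical percentage of prescribed
    # doses taken; persistence is the day of the last recorded dose.
    prescribed = doses_per_day * len(history)
    execution_pct = 100.0 * sum(history) / prescribed
    persistence_days = max((i + 1 for i, d in enumerate(history) if d > 0), default=0)
    return execution_pct, persistence_days

morning_only = [1] * 120               # one of two daily doses, full 4 months
stops_halfway = [2] * 60 + [0] * 60    # perfect execution, then discontinuation
```

Both histories yield 50% execution, but persistence differs (120 versus 60 days), matching the two contrasted behaviors.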

Modeling frictional contact conditions with the penalty method in the extended finite element framework
Biotteau, Ewen ULg; Ponthot, Jean-Philippe ULg

Scientific conference (2012, September 12)

This paper introduces an application of the eXtended Finite Element Method (X-FEM) to model metal forming processes. The X-FEM is used to account for material interfaces and to reduce the meshing constraints due to the shape of the tools and the evolving configuration of the structures. Large deformations and non-linear behaviors are also accounted for, but this contribution focuses on the modeling of frictional conditions on the interface. In X-FEM simulations, the impenetrability constraint is usually imposed with Lagrange multiplier methods. Such strategies require stabilisation algorithms to prevent the instabilities introduced by the dual unknowns. The strategy presented here manages the contact with a penalty approach: since it requires no additional variables, it is not subject to the same kind of instabilities. The contact problem is modeled using integration sub-elements, defined on the boundary of the structure, on which the contact constraints are enforced.
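The penalty regularization this abstract contrasts with Lagrange multipliers reduces to a one-line constitutive rule: no force while the gap is open, a stiff restoring force proportional to the penetration once surfaces overlap. A scalar sketch (the penalty stiffness and sign convention are illustrative, not taken from the paper):

```python
def contact_force(gap, penalty=1e6):
    # gap > 0: surfaces separated, no contact force.
    # gap < 0: penetration; the penalty term pushes the surfaces apart with a
    # force proportional to the penetration depth. No dual unknowns are added,
    # which is why no stabilisation of Lagrange multipliers is needed.
    penetration = max(0.0, -gap)
    return penalty * penetration
```

The price of this simplicity is a small residual penetration, which shrinks as the penalty stiffness grows.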

Peer Reviewed
Modeling groundwater with ocean and river interaction
Carabin, Guy; Dassargues, Alain ULg

in Water Resources Research (1999), 35(8), 2347-2358

We develop and implement the groundwater model, Saturated/Unsaturated Flow and Transport in 3D (SUFT3D), to integrate water quantity/quality data and simulations with models of other hydrologic cycle components, namely, rivers and the ocean. This work was done as part of the Sea Air Land Modeling Operational Network (SALMON) project supported by the IBM International Foundation through its Environmental Research Program. The first research steps, presented here, address the simulation of typical hydrologic conditions to demonstrate SUFT3D's effectiveness and accuracy. The theory behind the modeling of seawater intrusion and groundwater-river interaction is summarized along with the numerical methods and characteristics of SUFT3D. The code was applied to different, increasingly complex scenarios: confined to unconfined conditions, local to regional scale, homogeneous to increasing heterogeneity, two- to three-dimensional. Of particular interest were the impacts of different boundary conditions and the influence of river interactions on seawater intrusion. Results are illustrated, discussed, and compared, when possible, to those in the literature. Simulating groundwater exchange with both the river and the ocean has provided interesting results that better depict the dynamics of flow and transport in coastal zone groundwater systems.

Peer Reviewed
Modeling heat stress under different environmental conditions
Carabano, Maria-Jesus; Logar, Betka; Bormann, Jeanne et al

in Journal of Dairy Science (2016)

Renewed interest in heat stress effects on livestock productivity derives from climate change, which is expected to increase temperatures and the frequency of extreme weather events. This study aimed at evaluating the effect of temperature and humidity on milk production in highly selected dairy cattle populations across three European regions differing in climate and production systems, to detect differences and similarities that can be used to optimize heat stress (HS) effect modeling. Milk, fat and protein test day data from official milk recording for years 1999 to 2010 in four Holstein populations located in the Walloon Region of Belgium (BEL), Luxembourg (LUX), Slovenia (SLO) and Southern Spain (SPA) were merged with temperature and humidity data provided by the state meteorological agencies. After merging, the number of test day records/cows per trait ranged from 686,726/49,655 in SLO to 1,982,047/136,746 in BEL. The ranges of the daily average and maximum temperature and humidity indexes (THIavg/THImax) were widest in SLO (22-74/28-84) and narrowest in SPA (39-76/46-83). Change point techniques were used to determine comfort thresholds, which differed across traits and climatic regions. Milk yield showed an inverted U-shaped pattern of response across the THI scale with a HS threshold around 73 THImax units. For fat and protein, thresholds were lower than for milk yield and were shifted around 6 THI units towards larger values in SPA compared with the other countries. Fat showed lower HS thresholds than protein in all countries. The traditional broken line model was compared to quadratic and cubic fits of the pattern of response in production to increasing heat loads. A cubic polynomial model allowing for individual variation in patterns of response, with THIavg as heat load measure, showed the best statistical features.
Higher producing animals showed less persistent production (quantity and quality) across the THI scale, and lower producing animals more persistent production. The estimated correlations between comfort and a THIavg value of 70 (the upper end of the THIavg scale in BEL-LUX) were lower for BEL-LUX (0.70-0.80) than for SPA (0.83-0.85). Overall, animals producing in the more temperate climates and semi-extensive grazing systems of BEL and LUX showed HS at lower heat loads and more re-ranking across the THI scale than animals producing in the warmer climate and intensive indoor system of SPA.
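The broken line (change point) model mentioned in this abstract has a simple closed form: production is flat up to the comfort threshold and declines linearly beyond it. A sketch with illustrative plateau and slope values, using the THImax threshold of about 73 reported for milk yield:

```python
def broken_line(thi, plateau=30.0, threshold=73.0, slope=-0.4):
    # Milk yield (kg/day, illustrative) as a function of the temperature and
    # humidity index: constant in the comfort zone, linear decline under heat
    # stress beyond the change point.
    if thi <= threshold:
        return plateau
    return plateau + slope * (thi - threshold)
```

The cubic polynomial preferred in the study replaces the two straight segments with a smooth curve and lets the pattern of response vary between animals.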

Peer Reviewed
Modeling in Air Transportation: Cargo Loading and Itinerary Choice
Lurkin, Virginie ULg

in 4OR : Quarterly Journal of the Belgian, French and Italian Operations Research Societies (2016)

This is a summary of the author's PhD thesis supervised by Michael Schyns and defended on April 29, 2016 at the University of Liège, Belgium. We examine two problems as part of this dissertation. The first is a cargo loading problem. The second involves the estimation of itinerary choice models that include price variables and correct for price endogeneity using a control function based on several types of instrumental variables.

Modeling in Air Transportation: Cargo Loading and Itinerary Choice
Lurkin, Virginie ULg

Doctoral thesis (2016)

We examine two problems as part of this dissertation. The first is a cargo loading problem. The aim is to load a set of containers and pallets into a cargo aircraft that serves multiple airports. Our work is the first to model cargo transport as a series of trips consisting of several legs, at the end of which pickup and delivery operations might occur. This problem is crucial for airlines: in an attempt to reduce their costs, most airlines prefer to load as many containers as possible, even if the loaded containers do not all have the same final destination. Our results demonstrate that it is possible to quickly find near-optimal or excellent feasible loading plans, and that our approach leads to substantial savings with respect to the typical manual approaches currently used in practice. The second problem we examine involves the estimation of itinerary choice models that include price variables and correct for price endogeneity using a control function based on several types of instrumental variables. The motivation for developing these models is to demonstrate the importance of accounting for price endogeneity and to estimate different price sensitivities as a function of advance purchase periods. This is important because the airline industry can use our results to incorporate different customer segments, as revealed through high-yield and low-yield booking curves, when evaluating the profitability of airline schedules. Results based on Continental U.S. markets for May 2013 departures showed that models that fail to account for price endogeneity overestimate customers' value of time and result in biased price estimates and incorrect pricing recommendations. The advanced models estimated (nested logit and ordered generalized extreme value (OGEV) models) are shown to outperform the baseline multinomial logit model with regard to statistical tests and behavioral interpretations.
Additionally, results show that price sensitivities vary as a function of advance purchase periods: those purchasing high-yield products are less price sensitive than those purchasing low-yield products (across all advance purchase periods), and those purchasing closer to departure are less price sensitive. Results also indicate that inter-alternative competition is strong for itineraries that share similar departure times. Finally, as part of the itinerary choice model developed in this dissertation, we estimate highly refined departure time of day preferences. Results are intuitive and show that departure time of day preferences vary across many dimensions, including the length of haul, direction of travel, number of time zones crossed, departure day of week, and itinerary type (i.e., outbound, inbound, and one-way itineraries). To the best of our knowledge, these curves represent the most refined publicly available estimates of airline passengers' time of day preferences.
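The control-function correction for price endogeneity described in this abstract can be sketched numerically. The toy below uses ordinary least squares in both stages instead of the nested logit and OGEV models of the dissertation, and every number (instrument, unobserved quality, price equation) is invented for illustration: price loads on an unobserved quality that also drives demand, so a naive regression is biased; regressing price on an instrument and adding the first-stage residual to the second stage recovers the true price effect.

```python
def ols(X, y):
    # Solve the normal equations (X'X) b = X'y by Gauss-Jordan elimination.
    n, k = len(X), len(X[0])
    A = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)]
         + [sum(X[i][a] * y[i] for i in range(n))] for a in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(k):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [A[r][c] - f * A[col][c] for c in range(k + 1)]
    return [A[i][k] / A[i][i] for i in range(k)]

z = list(range(1, 9))                      # instrument (e.g. a cost shifter)
q = [1, -1, -1, 1, 1, -1, -1, 1]           # unobserved quality (endogenous part)
price = [zi + qi for zi, qi in zip(z, q)]  # price loads on the quality too
share = [-2.0 * p + 5.0 * qi for p, qi in zip(price, q)]  # true price effect: -2

naive = ols([[1.0, p] for p in price], share)    # biased: quality omitted
stage1 = ols([[1.0, zi] for zi in z], price)     # first stage: price ~ instrument
resid = [p - (stage1[0] + stage1[1] * zi) for p, zi in zip(price, z)]
cf = ols([[1.0, p, r] for p, r in zip(price, resid)], share)  # control function
```

Here the naive price coefficient comes out at -1.2, while the control-function estimate recovers the true -2, because the first-stage residual proxies the omitted quality.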

Modeling inflorescence development in tomato
Périlleux, Claire ULg; Lobet, Guillaume ULg; Tocquin, Pierre ULg

Conference (2015, June)

Peer Reviewed
Modeling information sharing in animal health surveillance with social network analysis
Delabouglise, Alexis; Dao Thi, Hiep; Nguyen Tien, Thanh et al

Poster (2014, May 08)

Peer Reviewed
Modeling inter-laminar failure in composite structures: illustration on an industrial case study
Bruyneel, Michaël ULg; Delsemme, Jean-Pierre; Jetteur, Philippe et al

in Applied Composite Materials (2009), 16

Peer Reviewed
Modeling lactation curves and estimation of genetic parameters for first lactation test-day records of French Holstein cows.
Druet, Tom ULg; Jaffrezic, F.; Boichard, D. et al

in Journal of Dairy Science (2003), 86(7), 2480-90

Several functions were used to model the fixed part of the lactation curve, and genetic parameters of milk test-day records were estimated using French Holstein data. Parametric curves (Legendre polynomials, Ali-Schaeffer curve, Wilmink curve), fixed classes curves (5-d classes), and regression splines were tested. The latter were appealing because they adjusted the data well, were relatively insensitive to outliers, were flexible, and resulted in smooth curves without requiring the estimation of a large number of parameters. Genetic parameters were estimated with an Average Information REML algorithm in which the average information matrix and the first derivatives of the likelihood function were pooled over 10 samples. This approach made it possible to handle larger data sets. The residual variance was modeled as a quadratic function of days in milk. Quartic Legendre polynomials were used to estimate (co)variances of random effects. The estimates were within the range of most other studies. The greatest genetic variance was in the middle of the lactation, while residual and permanent environmental variances mostly decreased during the lactation. The resulting heritability ranged from 0.15 to 0.40. The genetic correlation between the extreme parts of the lactation was 0.35, but genetic correlations were higher than 0.90 for a large part of the lactation. The pooling approach resulted in smaller standard errors for the genetic parameters than those obtained with a single sample.
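Among the parametric curves listed in this abstract, the Wilmink curve has a convenient closed form, and its peak day follows analytically from the derivative. A sketch with illustrative parameter values (not the estimates of the paper):

```python
import math

def wilmink(dim, a=30.0, b=-15.0, c=-0.05, k=0.05):
    # Wilmink curve: a + b*exp(-k*dim) + c*dim, a classical parametric shape
    # for the fixed part of the lactation curve (dim = days in milk).
    return a + b * math.exp(-k * dim) + c * dim

def peak_dim(b=-15.0, c=-0.05, k=0.05):
    # Setting the derivative -k*b*exp(-k*dim) + c to zero gives the days in
    # milk at peak yield.
    return -math.log(c / (k * b)) / k
```

With these values the curve starts at 15 kg, peaks around day 54, and then declines, the typical lactation shape the fixed curves are meant to capture.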

Peer Reviewed
Modeling leucine's metabolic pathway and knockout prediction improving the production of surfactin, a biosurfactant from Bacillus subtilis
Coutte, François; Niehren, Joachim; Dhali, Debarun et al

in Biotechnology journal (2015), 10(8), 1216-1234

Peer Reviewed
Modeling Lymphangiogenesis in a three-dimensional culture system
Bruyere, Françoise; Melen-Lamalle, Laurence; Blacher, Silvia ULg et al

in Nature Methods (2008), 5(5), 431-437

Unraveling the molecular mechanisms of lymphangiogenesis is hampered by the lack of appropriate in vitro models of three-dimensional (3D) lymph vessel growth that can be used to exploit the potential of available transgenic mice. We developed a potent, reproducible and quantifiable 3D culture system of lymphatic endothelial cells, the lymphatic ring assay, bridging the gap between 2D in vitro and in vivo models of lymphangiogenesis. Mouse thoracic duct fragments are embedded in a collagen gel, leading to the formation of lymphatic capillaries containing a lumen, as assessed by electron microscopy and immunostaining. This assay phenocopies the different steps of lymphangiogenesis, including spreading from a preexisting vessel, cell proliferation, migration and differentiation into capillaries. Our study provides evidence for the implication of an individual matrix metalloproteinase, MMP-2, in lymphangiogenesis. The lymphatic ring assay is a robust, quantifiable and reproducible system that offers new opportunities for rapid identification of unknown regulators of lymphangiogenesis.

Peer Reviewed
Modeling medium-scale TEC structures observed by Belgian GPS receivers network
Kutiev, Ivan; Marinov, Pencho; Fidanova, Stefka et al

in Advances in Space Research (2009), 43

Peer Reviewed
Modeling Microbial Cross-contamination in Quick Service Restaurants by Means of Experimental Simulations With Bacillus Spores
Baptista Rodrigues, Ana Lúcia ULg; Crevecoeur, Sébastien ULg; Dure, Remi et al

Poster (2010)

Cross contamination has frequently been mentioned as being at the origin of a wide range of foodborne outbreaks. Handling of food is one of the ways through which cross contamination may occur, and for many different reasons, quick service restaurants are particularly at risk. Due to its importance, cross contamination via the hands should be taken into consideration when carrying out a quantitative risk assessment. The main goal of this study was to determine transfer rates of bacteria to and via the hands, to measure reduction rates of two hand sanitizing procedures, and to apply the results to a quantitative microbial risk assessment model. According to our results, handling a portion of raw minced meat contaminated at 4·10⁴ cfu leads to the presence of 24 cfu on both hands, 3 cfu on a ready-to-eat (RTE) product manipulated with unwashed hands, 1 cfu on a RTE product manipulated with wiped hands, and absence on a RTE product manipulated with washed hands. This study provides adequate quantitative data for quantitative microbial risk assessment.
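The counts quoted in this abstract translate directly into per-step transfer rates; a sketch using only those numbers (4·10⁴ cfu on the meat, 24 cfu on the hands, 3 cfu on the RTE product handled with unwashed hands):

```python
def transfer_rate(cfu_before, cfu_after):
    # Fraction of colony-forming units carried over at one handling step.
    return cfu_after / cfu_before

meat_to_hands = transfer_rate(4e4, 24)  # about 0.06% of the inoculum
hands_to_rte = transfer_rate(24, 3)     # 12.5% of the hand load
```

Chaining the two rates reproduces the 3 cfu expected on the RTE product after both handling steps, which is the kind of multiplicative chain a quantitative microbial risk assessment model builds on.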

Peer Reviewed
Modeling microdamage behavior of cortical bone
Donaldson, Finn; Ruffoni, Davide ULg; Schneider, Philipp et al

in BIOMECHANICS AND MODELING IN MECHANOBIOLOGY (2014), 13(6), 1227-1242

Bone is a complex material which exhibits several hierarchical levels of structural organization. At the submicron-scale, the local tissue porosity gives rise to discontinuities in the bone matrix which have been shown to influence damage behavior. Computational tools to model the damage behavior of bone at different length scales are mostly based on finite element (FE) analysis, with a range of algorithms developed for this purpose. Although the local mechanical behavior of bone tissue is influenced by microstructural features such as bone canals and osteocyte lacunae, they are often not considered in FE damage models due to the high computational cost required to simulate across several length scales, i.e., from the loads applied at the organ level down to the stresses and strains around bone canals and osteocyte lacunae. Hence, the aim of the current study was twofold: first, a multilevel FE framework was developed to compute, starting from the loads applied at the whole bone scale, the local mechanical forces acting at the micrometer and submicrometer level. Second, three simple microdamage simulation procedures based on element removal were developed and applied to bone samples at the submicrometer scale, where cortical microporosity is included. The present microdamage algorithm produced a qualitatively analogous behavior to previous experimental tests based on stepwise mechanical compression combined with in situ synchrotron radiation computed tomography. Our results demonstrate the feasibility of simulating microdamage at a physiologically relevant scale using an image-based meshing technique and multilevel FE analysis; this allows relating microdamage behavior to intracortical bone microstructure.
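The element-removal idea behind the microdamage procedures described in this abstract can be caricatured with a parallel fiber bundle: all elements share one strain, and an element is deleted once its stress exceeds its strength. This is only a toy analogue of the paper's multilevel FE procedure, with invented moduli and strengths:

```python
def damage_steps(strengths, modulus, strains):
    # Element-removal damage loop: at each prescribed strain step, delete the
    # elements whose stress (modulus * strain) exceeds their strength, then
    # record (strain, surviving elements, total force of the bundle).
    alive = list(strengths)
    history = []
    for eps in strains:
        alive = [s for s in alive if modulus * eps <= s]
        history.append((eps, len(alive), modulus * eps * len(alive)))
    return history

hist = damage_steps([1.0, 2.0, 3.0, 4.0], modulus=100.0,
                    strains=[0.005, 0.015, 0.025, 0.045])
```

The recorded force first rises with strain, then drops as elements are removed, the softening signature such element-removal schemes are designed to reproduce.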

Peer Reviewed
Modeling milk urea of Walloon dairy cows in management perspectives.
Bastin, Catherine ULg; Laloux, Laurent; Gillon, Alain ULg et al

in Journal of Dairy Science (2009), 92(7), 3529-40

The aim of this study was to develop an adapted random regression test-day model for milk urea (MU) and to study the possibility of using predictions and solutions given by the model for management purposes. Data included 607,416 MU test-day records of first-lactation cows from 632 dairy herds in the Walloon Region of Belgium. Several advanced features were used. First, to detect the herd influence, the classical herd × test-day effect was split into 3 new effects: a fixed herd × year effect, a fixed herd × month-period effect, and a random herd × test-day effect. A fixed time-period regression was added to the model to take into account the yearly oscillations of MU on a population scale. Moreover, first-order autoregressive processes were introduced, which allowed us to consider the link between successive test-day records. The variance component estimation indicated that a large variance was associated with the random herd × test-day effect (48% of the total variance), suggesting a strong influence of herd management on the MU level. The heritability estimate was 0.13. By comparing observed and predicted MU levels at both the individual and herd levels, target ranges for MU concentrations were defined that take into account features of each cow and each herd. At the cow level, an MU record was considered deviant if it was <200 or >400 mg/L (the target range used in the field) and if the prediction error was >50 mg/L (indicating a significant deviation from the expected level). Approximately 7.5% of the MU records collected between June 2007 and May 2008 were beyond these thresholds. This combination allowed for the detection of potentially suspicious cows. At the herd level, the expected MU level was considered as the sum of the solutions for specific herd effects. A herd was considered deviant from its target range when the prediction error was greater than the standard deviation of MU averaged by herd test day. Results showed that 6.7% of the herd test-day MU levels between June 2007 and May 2008 were considered deviant. These deviations seemed to occur more often during the grazing period. Although the theoretical considerations developed in this study should be validated in the field, this research showed the potential use of a test-day model for analyzing functional traits to advise dairy farmers.
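The two cow-level rules stated in this abstract (outside the 200-400 mg/L field range, and a prediction error above 50 mg/L) combine into a simple flag; a minimal sketch:

```python
def deviant_mu(observed, predicted):
    # Flag a cow-level milk urea (MU) record as deviant using both rules from
    # the abstract: outside the 200-400 mg/L field target range AND a
    # prediction error above 50 mg/L (values in mg/L).
    out_of_range = observed < 200 or observed > 400
    large_error = abs(observed - predicted) > 50
    return out_of_range and large_error
```

Requiring both conditions is what lets the model separate a record that is merely high from one that is high and unexpected given the cow and herd effects.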
