
Evolutions from Monolix2016R1 to Monolix2018R1

Monolix has undergone a complete transformation: a redesigned interface, improved plots, better performance, and easier use.

Monolix Interface

The Monolix user interface has been completely rebuilt with JavaScript technology. It is no longer a single frame; it is now organized into several frames.

Welcome frame

In this frame, it is possible to
– create a new project
– load a project
– load a recent project
– load a demo
– look at Monolix web documentation

Data frame

In this frame, the user defines the data set and tags each column. The possible column types are the same as before, but many names were changed to be more intuitive. Notice that when the user defines the observation column, its type must be specified (continuous/discrete/event).
Clicking OK validates the data set and determines its possible uses. Once the data set is validated, a DATA VIEWER button appears, making it possible to explore the data set in parallel with the project.
Data frame enhancements
– error messages pop up when there is an error in the data set or an inconsistency between the data set and the header definition.
– warning messages pop up when the interpretation of the data set raises a warning.
– it is possible to scroll down the data set while keeping the header visible
– it is possible to sort the data set by any column
– loading a large data set is much faster
– it is possible to visualize the whole data set
– the number of doses can be chosen in case of steady state

Structural model frame

In this frame, the user defines the structural model. The user can
– browse for a file in any folder
– load a file from the library
– open the model in MlxEditor
– reload the model (if it has been changed in the editor, for example)
In addition,
– error messages pop up when there is an error in the model or an inconsistency between the data set and the proposed model
– a custom Lixoft library browser makes it easy to choose a model

Initial estimates frame

There are two possibilities

CHECK INITIAL ESTIMATES to see how the structural model fits each individual.

Enhancements
– it is possible to define the number of individuals displayed and the associated layout
– it is possible to use the same x-axis and/or y-axis for all individuals
– in case of bsmm (between-subject model mixture), the two models are plotted as solid and dotted red lines respectively
– the calculation is much faster and dedicated to this frame.
Evolution
– it is no longer possible to check the initial values of the betas
– the grid for the prediction takes doses into account

Set values of the INITIAL ESTIMATES.

Enhancements
– there is a new link to fix all parameters.
– there is a new link to estimate all parameters (the error-model parameter ‘c’ is not affected by the ‘estimate all’ feature if its value is 1).
– there is a new link to use the last estimated values as initial estimates. Notice that this link is usable only if the project has not been modified.
– there is a new link to use only the last estimated fixed-effect values as initial estimates.
– the estimation method is no longer defined with a right click; the user clicks on the wheel next to the parameter value.
– when the user clicks on the value, the associated constraint (typically: “Value must be >0”) is displayed to define the domain of definition of the parameter.
– there are error messages when the initial values are not valid for the associated distribution.
– in case of IOV, all the random effects are on the same frame.
– in case of a categorical covariate with several modalities in the statistical model, the user can initialize all associated betas independently
– in case of a categorical covariate with several modalities in the statistical model, the user can define the estimation method for all associated betas independently.
Evolution
– it is no longer possible to use the last estimates if the project has been modified
– for Bayesian estimation, only the MAP option is available.
– the method colors have changed: black for MLE, orange for fixed and purple for MAP

Statistical model and tasks frame

Tasks

– The task for the calculation of the individual parameters was split into two tasks (EBEs, referring to the conditional mode, and conditional distribution, which provides the conditional mean).
– The individual parameter calculation task is displayed before the other tasks, to be consistent with a typical scenario usage.
– Use of the linearization method is now shared between the standard error calculation and the log-likelihood calculation.
– The convergence assessment now uses a user-defined scenario and not the current one. Three scenarios are proposed (whether to compute the standard errors and the log-likelihood, and whether the linearization method is used). Notice that the plots are not run.
– Assessment: new plot of the last convergence value (dot) for each run.
– Assessment: the ‘Stop’ button stops the current run and keeps only the previous ones.
– Assessment: interactivity with graphs in real time (zoom, layout, selected subplots).
– Assessment: a summary is provided in the Assessment folder in the project result folder.
– Assessment: the scenario of the assessment is now independent of the scenario of the project. The user can choose between three scenarios.
– The settings for each task are now available via a button next to the task.
– It is not possible to reload a previous convergence assessment using the interface. However, all the results are in an Assessment folder inside the result folder.
– The list of plots is now arranged in categories to increase readability
– Plots can be selected (all, none) by category or globally.

Observation model

– A FORMULA button was added to show, in real time, the formula associated with the error model in case of a continuous error model
– Additional and customizable error models are proposed. The user can now choose from a list both the distribution (normal/lognormal/logitnormal) and the error model (constant/proportional/combined1/combined2)
– Generalization of error models: parameter c is now always a parameter of the proportional and combined1/combined2 models (fixed to 1 by default)
– it is possible to choose the minimum and the maximum of the logit function when the logitnormal distribution is chosen.
– there are error messages when the minimum and maximum values are not set to correct values
– there is an error message if the user tries to set the distribution to lognormal when it is not possible (in case of negative observations, for example)
– The type of discrete model is displayed
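As an illustration of the error models above, the residual standard deviation of the four continuous error models, including the exponent parameter c (fixed to 1 by default), can be sketched as follows. These are the usual textbook definitions, written here for illustration; they are not taken from the Monolix source.

```python
import math

def error_sd(f, model, a=0.0, b=0.0, c=1.0):
    """Residual standard deviation for a prediction f under the
    standard continuous error models; c defaults to 1."""
    if model == "constant":
        return a
    if model == "proportional":
        return b * f ** c
    if model == "combined1":
        return a + b * f ** c                             # additive combination
    if model == "combined2":
        return math.sqrt(a ** 2 + b ** 2 * f ** (2 * c))  # quadratic combination
    raise ValueError(f"unknown error model: {model}")

# with c left at its default value 1:
print(error_sd(10.0, "constant", a=0.5))          # 0.5
print(error_sd(10.0, "proportional", b=0.1))      # 1.0
print(error_sd(10.0, "combined1", a=0.5, b=0.1))  # 1.5
```

With c fixed to 1, combined1 reduces to the familiar additive-plus-proportional model a + b*f.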

Individual model

– The display is very different and more synthetic
– A FORMULA button was added to show, in real time, the formula associated with the individual parameter model
– The names of the parameters and the covariates are displayed
– In case of IOV, all levels are displayed in the same frame
– The parameter distribution is chosen from a list
– Variability is added or removed by clicking in the “Random Effects” column
– There are two buttons to add or remove variability on all parameters at the same time
– The correlation is no longer defined as a matrix; the user defines groups and adds parameters to those groups
– A covariate is added to an individual parameter by clicking in the covariate name column
– In case of IOV, the covariates are arranged by level of variability
– There are dedicated buttons to add a transformed continuous covariate, a transformed categorical covariate, and a mixture
– To add a transformed continuous covariate, the user clicks on the CONTINUOUS button and can define a Name and a Formula.
— a Name is proposed
— the list of available covariates is proposed
— clicking on an available covariate writes it into the Formula
— hovering over an available covariate shows the min, mean, median, and max of the covariate
— the formula can be any Mlxtran-compatible expression (for example, log(WT/70) for a weight covariate WT)
— the Formula can contain several covariates
– To add a transformed categorical covariate, the user clicks on the CATEGORICAL button and can define a Name and groups.
— a Name is proposed
— the list of modalities is proposed
— one can allocate, reset an allocation, and modify the allocation
— the user can choose the reference category
— the user can choose the names of the groups
– To add a mixture, the user clicks on the MIXTURE button and can define the name and the number of modalities
– a magnifying-glass icon makes it possible to locate a covariate when there are several covariates
– each transformed covariate can be edited or removed
– errors are displayed explaining why an action is not possible

Results

– New section to see all the task results
– Better presentation of the results
– It contains a section for the population parameter estimates
– It contains a section for the individual parameter estimates, with the conditional mode and the conditional mean
– It contains a section for the correlation matrix of the estimates (and RSEs), with the linearization method or the stochastic approximation
– The values of the correlation matrix and the RSEs are colored to improve readability and speed up diagnosis
– Selecting a correlation in the matrix sets the focus on both associated population parameters
– It contains a section for the estimated log-likelihood and information criteria, with the linearization method or the importance sampling method
– It contains a section for all the statistical tests
– The values of the tests are colored to improve readability and speed up diagnosis
– It is possible to open the output folder directly from here
– The results display is loaded if the project has results

Monolix calculation engine

Better performance thanks to the parallelization

It is now possible to parallelize Monolix computations over several machines using Open MPI.

Better performance in structural model evaluation

– Faster analytical solutions
– Faster calculation for ODEs
– No restriction on using analytical solutions when regressors are constant over the subject's time course. Sequential models (using a PK model and its associated analytical solution) will be much faster.
– Less restrictive conditions to use analytical solutions when IOV occurs
Bugs fixed:
– A time-varying initial condition (for DDE models) is now well taken into account
– A regressor used as an initial condition is now well taken into account

Algorithms settings

– Constraints on setting values
– Names and organization were modified for better comprehension
– All the settings are now available through a button next to each task.

SAEM algorithm

– Addition of new error models. The user can now define both the distribution and the error model.
– Optimization of the SAEM strategy when the error model has several parameters (typically for the combined1 and combined2 models).
– Strategy with simulated annealing for combined1 and combined2 (improves convergence)
– Evolution of the SAEM strategy when the error model is proportional (there were issues when the prediction was very close to zero).
– CPU time optimization of the SAEM strategy.
– When latent covariates are used, the probabilities and the associated betas are now estimated based on the mixture law and not on individual probability draws. This allows a better evaluation of the log-likelihood and better convergence properties.
– When there are parameters without variability,
— With the no-variability method, the maximum number of iterations depends on the number of parameters without variability
— With the no-variability method, the optimization is much faster.
— With the decreasing-variability methods, the artificial variance decreases more slowly
— For the normal law, a better strategy to initialize the variance (more consistent)
— When there is a latent covariate on the parameter, all methods can be used.
– When no parameter has variability and the no-variability method is used, only one iteration of SAEM is performed.
– Two settings of SAEM were updated to provide a better convergence
— The minimum number of iterations in the exploratory phase is now 150 (it was 100 previously)
— The step-size exponent in the smoothing phase is now 0.7 (it was 1 previously)
– Constraints on setting values
– If SAEM reaches the maximum number of iterations during the exploratory phase, a warning is raised.

Removed features
– We removed the possibility to define different variances depending on the modality of a categorical covariate
– We removed the possibility to choose between standard errors and variances. Only standard errors are proposed. However, variance-based projects can still be loaded.
– We removed the possibility to have a Bayesian posterior distribution
– We removed the possibility to have a custom distribution for the individual parameters
– Autocorrelation can no longer be added in the graphical interface. However, it can be loaded or added via the connectors

Conditional distribution

– The conditional distribution can now be computed for discrete and event models.
– New setting: number of simulations per individual parameter
– The number of simulations adapts to the data size
– If the Fisher information matrix by stochastic approximation has already been computed, all the MCMC draws are reused, providing a much faster calculation.

Conditional mode

– The calculation is now dramatically faster (between 20 and 100 times faster).

Standard error calculation

– The Fisher information matrix can now be computed with discrete and event models and with IOV
– Improvement of the calculation for the linearization
– Improvement of the calculations for the stochastic approximation when there are NaNs
– Faster calculation for the linearization
– Decrease of the maximum number of iterations to 200 (it was 300 previously)
– Settings were modified for the stochastic approximation: min and max numbers of iterations
– Warnings if there are numerical issues with the linearization
– If the conditional distribution has already been computed, all the MCMC draws are reused, providing a much faster calculation.

Log-likelihood calculation

– Improvement of the calculation for the linearization
– Faster calculation in case of censored data
– Faster calculation in case of importance sampling
– When the calculation by linearization has issues, a warning is provided to the user.
– The Monte-Carlo size in the importance sampling is now 10000 (it was previously 20000)
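As a reminder of what the importance-sampling method computes, here is a toy one-dimensional sketch of the estimator that the Monte-Carlo size controls. The model, the proposal, and the sizes are illustrative assumptions, not Monolix's implementation.

```python
import math
import random

def normal_pdf(x, mean, var):
    """Density of a normal distribution N(mean, var)."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def importance_sampling_ll(y, mc_size=10000, seed=0):
    """Estimate log p(y) = log integral of p(y|psi) p(psi) dpsi for a toy
    model y|psi ~ N(psi, 1), psi ~ N(0, 1), using draws from a wider
    proposal q = N(0, 2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(mc_size):
        psi = rng.gauss(0.0, math.sqrt(2.0))       # draw from the proposal q
        weight = normal_pdf(psi, 0.0, 1.0) / normal_pdf(psi, 0.0, 2.0)
        total += normal_pdf(y, psi, 1.0) * weight  # p(y|psi) * p(psi)/q(psi)
    return math.log(total / mc_size)

# the exact marginal here is N(0, 2), so log p(1) = -0.5*log(4*pi) - 0.25
print(importance_sampling_ll(1.0, mc_size=100000))
```

Increasing the Monte-Carlo size reduces the variance of the estimate at the cost of computation time, which is the trade-off behind the default size mentioned above.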

Simulation computation for plots

The simulations are much faster than in the previous version. This greatly reduces the time needed to generate the VPCs and the prediction intervals, for example.
In addition, a major effort was made on the discrete and event models, where the simulation is now dramatically faster. A progress bar is displayed as well.
For the simulation on a grid, the doses and the regressors were added.

Plots during algorithms

– large interactivity (zoom, layout, coordinates)
– Possibility to switch between different frames during the algorithms calculation
– List of elements to compute for plots (‘Stop’ button keeps the done computations)

Tests computation

Tests are computed when the conditional distribution task has been performed and the plots are launched. The following tests are computed:
– Pearson correlation test on the individual parameters and the covariates used in the statistical model
– Pearson correlation test on the individual random effects and the covariates
– Fisher test for discrete covariates
– Shapiro-Wilk test on the random effects
– Pearson correlation test on the random effects
– Shapiro-Wilk test on the individual parameters that have no covariate
– Kolmogorov-Smirnov adequacy test on the individual parameters that have covariates
– Van der Waerden test on the residuals
– Shapiro-Wilk test on the residuals
– For all tests associated with individuals (parameters, random effects, NPDEs), the Benjamini-Hochberg procedure is applied to correct for multiple testing
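The Benjamini-Hochberg procedure mentioned above is a standard step-up adjustment of p-values; a minimal sketch of the adjustment itself:

```python
def benjamini_hochberg(pvalues):
    """Return Benjamini-Hochberg-adjusted p-values (step-up procedure
    controlling the false discovery rate)."""
    m = len(pvalues)
    # sort indices by p-value while remembering original positions
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # walk from the largest p-value down, taking cumulative minima
    for rank in range(m - 1, -1, -1):
        i = order[rank]
        value = min(pvalues[i] * m / (rank + 1), 1.0)
        running_min = min(running_min, value)
        adjusted[i] = running_min
    return adjusted

# adjusted p-values, in the original order
print(benjamini_hochberg([0.01, 0.04, 0.03, 0.20]))
```

A test is declared significant at level alpha if its adjusted p-value is below alpha, which keeps the expected proportion of false discoveries below alpha across the whole battery of tests.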

Monolix plots

All the plots were updated with a new technology and new features. In addition, all the colors and graphical settings can be changed in the Preferences frame.
Notice that
– when you save the project, your current graphical settings are preserved
– you can export your settings as your defaults in the Export menu.

Stratify

The user can now define all the needed stratifications in a Stratify frame in a very easy way, and can split, color and filter by any defined covariate.
Enhancements
– Large simplification of the usage
– For a continuous covariate, possibility to define groups with either an equal number of individuals or intervals of equal size
– Possibility to change all the colors
– Possibility to highlight a full group by clicking on the covariate category
– Buttons to add and remove categories
– Better performance

Observed data enhancements

This plot contains all the observations and can be used with all types of observations. It produces
– the spaghetti plot for continuous observations.
– the spaghetti plot or a histogram for discrete observations (the user can switch between the two).
– the Kaplan-Meier plot for event observations, along with the mean number of events per individual
Enhancements
– When hovering over a curve, the ID is displayed and all the points of the subject are highlighted.
– When splitting, the information for each group is computed.
– When splitting, the user can choose to adapt the x-axis and/or y-axis to each group or to share the same axes for all groups.
– Possibility to display the dosing times when hovering over an individual.

Individual fits enhancements

– It is possible to sort the individuals by individual parameter values.
– When there are censored data, the full censoring interval is displayed
– the y-scale is managed better.
– Possibility to display the dosing times.
– The user can choose to share the x-axis and/or y-axis.
– Possibility to zoom on all the individuals at the same time with a linked zoom.
– Population fits (based on the population parameters and individual covariates) can be displayed
– The grid takes doses (and regressors) into account
– Color is added for a better representation of IOV when occasions are joined (according to the presence of washout or not)

Observation vs Prediction enhancements

– The conditional distribution can be used for this plot.
– A 90% prediction interval is now available.
– Information on the proportion of outliers.
– Hovering over a point displays both the ID and the time of the point (and its replicates if the conditional distribution is chosen). In addition, the other points corresponding to the same ID are highlighted.
– The log-log scale is managed more efficiently.

Scatter plots of the residuals enhancements

– Possibility to have a scatter plot for event models.
– IWRES can be computed with the conditional distribution.
– Hovering over a point displays both the ID and the time of the point (and its replicates if the conditional distribution is chosen). In addition,
— the other points corresponding to the same ID are highlighted.
— the same points are highlighted on the other plots.
– 2 predefined configurations (VPC and scatter).
– In case of discrete models, the scatter plot w.r.t. time was added.

Distribution of the residuals enhancements

– Hovering over a bar in the pdf plot displays the percentage of individuals in this bar.
– Hovering over the cdf plot displays the theoretical and empirical cdf values along with the x-axis value.
– The qqplot representation was replaced by a cdf representation.
– The empirical pdf is not computed anymore.

Distribution of the individual parameters enhancements

– The non-parametric pdf is not proposed anymore.
– Hovering over a bar in the pdf plot displays the percentage of individuals in this bar.
– The empirical and theoretical cdf of the individual parameters are now computed.
– Hovering over the cdf plot displays the theoretical and empirical cdf values along with the x-axis value.
– When splitting, the shrinkage information is computed.

Distribution of the random effects enhancements

– The empirical and theoretical pdf of the random effects are now computed.
– Hovering over a bar in the pdf plot displays the percentage of individuals in this bar.
– The empirical and theoretical cdf of the random effects are now computed.
– Hovering over the cdf plot displays the theoretical and empirical cdf values along with the x-axis value.

Correlation between random effects enhancements

– Correlation information is proposed.
– Hovering over a point displays the ID of the point (and its replicates if the conditional distribution is chosen). In addition, the same ID is highlighted in the other figures.
– Possibility to select the parameters to look at.
– Possibility to split the graphic.
– Optimized layout: at first, a maximum of 6 random effects is displayed; the user can then choose any number.

Individual parameters vs covariates

– Hovering over a point displays the ID of the point (and its replicates if the conditional distribution is chosen). In addition, the same ID is highlighted in the other figures.
– Possibility to split and display all figures
– Possibility to select the parameters to look at.
– Possibility to select the covariates to look at.
– Possibility to split and color at the same time.

Visual Predictive Checks enhancements

– This plot contains all the observations and can be used with all types of observations.
– In case of categorical projects, there is no bin management on the y-axis. All the categories are displayed with the correct y-label
– In case of count projects, the y-label is well defined

Prediction distribution

– Possibility to color the observations
– Possibility to differentiate the censored and non-censored data
– Hovering over a point displays the ID of the point. In addition, the other points of the same ID are highlighted.
– Hovering over a band displays the range of the band.

Loglikelihood contribution

– Possibility to zoom on a subset of the individuals

New plots

– Standard errors of the estimates
– MCMC convergence plot
– Importance Sampling convergence plot

Monolix project definition, settings, and outputs

Project evolution

In terms of the project file, there are only a few modifications
– The definition of the number of doses in the STEADY STATE definition is now in the project and not in the user configuration
– In case of several outputs in the data set, the name of the output type described in the observation is now yname instead of ytype. Backward compatibility is ensured.
– In case of a single output in the data set, the output type described in the observation was ytype=1. It is now removed as it was useless. Backward compatibility is ensured.
– The list of plots is now defined in the project file and no longer in the associated .xmlx file.
– The names of the tasks in the Mlxtran project evolved slightly to be more consistent with the user interface.

Settings

In terms of project settings, there are only a few modifications
– The flexibility to use or not the analytical solutions is now defined in the Mlxtran structural model and not in the user configuration
– The project settings are now available via the menu Settings/Project settings
– The user has the possibility to save the data and the model next to the project
– The preferences interface has evolved to JavaScript
– The working directory is no longer available through the interface, only via the user configuration file
– Changing the number of threads no longer requires restarting Monolix
– An option is proposed to automatically export all the charts data after the run
– The chart export formats are now .svg and .png
– The timestamping option is now called ‘Save History’. The project and its results are now saved after each run.

Configuration of the plots

The plot configuration is now saved in a .properties file associated with the project. It is no longer an .xmlx file, but it is still readable. Retrocompatibility is only ensured for the list of plots. This .properties file
– overrides the default settings (default.settings in the user/lixoft folder)
– contains all the information for the display of the plots in terms of what is displayed
– contains all the information for the display of the plots in terms of the covariate stratification in the plot
– contains all the information for the display of the plots in terms of the colors and preferences for each plot
When saving a project, a .properties file is generated, ensuring that exactly the same figures are replotted after a reload.
It is possible to export all the settings to define them as the global settings.

Outputs

In terms of outputs, all the files and folders are reorganized. We now have
– summary.txt, providing a summary of the run
– populationparameter.txt, with all the estimated population parameters
– the output predictions
– all the files concerning the Fisher information matrix in a FisherInformation folder
– all the files concerning the individual parameters and the random effects in an IndividualParameters folder
– all the files concerning the log-likelihood in a logLikelihood folder
– all the files concerning the results of the tests in a Tests folder
– a part of the Lixoft files needed to reload the project in a private .Internals folder
– when the charts data are exported, the data in a ChartsData folder
– when the figures are exported, the figures in a ChartsFigures folder
– all figures can be exported independently

In terms of export, we can
– export all the charts data via Settings/Export charts data
– export all the figures via Settings/Export plots
– export the project to Mlxplore via Settings/Export in Mlxplore

Monolix Connectors

There is an R package associated with Monolix where the user has access to all the functions available through the interface. The following functions are available:
– abort: Stop the current task run
– addCategoricalTransformedCovariate: Add Categorical Transformed Covariate
– addContinuousTransformedCovariate: Add Continuous Transformed Covariate
– addMixture: Add a new latent covariate to the current model, giving its name and its number of modalities
– computePredictions: Compute predictions from the structural model
– getConditionalDistributionSamplingSettings: Get conditional distribution sampling settings
– getConditionalModeEstimationSettings: Get conditional mode estimation settings
– getContinuousObservationModel: Get continuous observation models information
– getCorrelationOfEstimates: Get the inverse of the Fisher Matrix
– getCovariateInformation: Get Covariates Information
– getData: Get project data
– getEstimatedIndividualParameters: Get last estimated individual parameter values
– getEstimatedLogLikelihood: Get Log-Likelihood Values
– getEstimatedPopulationParameters: Get last estimated population parameter value
– getEstimatedRandomEffects: Get the estimated random effects
– getEstimatedStandardErrors: Get standard errors of population parameters
– getGeneralSettings: Get project general settings
– getIndividualParameterModel: Get Individual Parameter Model
– getLastRunStatus: Get last run status
– getLaunchedTasks: Get tasks with results
– getLogLikelihoodEstimationSettings: Get LogLikelihood algorithm settings
– getMCMCSettings: Get MCMC algorithm settings
– getMlxEnvInfo: Get information about MlxEnvironment object
– getObservationInformation: Get observations information
– getPopulationParameterEstimationSettings: Get population parameter estimation settings
– getPopulationParameterInformation: Get Population Parameters Information
– getPreferences: Get project preferences
– getProjectSettings: Get project settings
– getSAEMiterations: Get SAEM algorithm iterations
– getScenario: Get current scenario
– getSimulatedIndividualParameters: Get simulated individual parameters
– getSimulatedRandomEffects: Get simulated random effects
– getStandardErrorEstimationSettings: Get standard error estimation settings
– getStructuralModel: Get structural model file
– getVariabilityLevels: Get Variability Levels
– initializeMlxConnectors: Initialize MlxConnectors API
– isRunning: Get current scenario state
– loadProject: Load project from file
– mlxDisplay: Display Mlx API Structures
– newProject: Create new project
– removeCovariate: Remove Covariate
– runConditionalDistributionSampling: Sampling from the conditional distribution
– runConditionalModeEstimation: Estimation of the conditional modes (EBEs)
– runLogLikelihoodEstimation: Log-Likelihood estimation
– runPopulationParameterEstimation: Population parameter estimation
– runScenario: Run Current Scenario
– runStandardErrorEstimation: Standard error estimation
– saveProject: Save current project
– setAutocorrelation: Set auto-correlation
– setConditionalDistributionSamplingSettings: Set conditional distribution sampling settings
– setConditionalModeEstimationSettings: Set conditional mode estimation settings
– setCorrelationBlocks: Set Correlation Block Structure
– setCovariateModel: Set Covariate Model
– setData: Set project data
– setErrorModel: Set error model
– setGeneralSettings: Set common settings for algorithms
– setIndividualParameterDistribution: Set Individual Parameter Distribution
– setIndividualParameterVariability: Individual Variability Management
– setInitialEstimatesToLastEstimates: Initialize population parameters with the last estimated ones
– setLogLikelihoodEstimationSettings: Set loglikelihood estimation settings
– setMCMCSettings: Set settings associated to the MCMC algorithm
– setObservationDistribution: Set observation model distribution
– setObservationLimits: Set observation model distribution limits
– setPopulationParameterEstimationSettings: Set population parameter estimation settings
– setPopulationParameterInformation: Population Parameters Initialization and Estimation Method
– setPreferences: Set preferences
– setProjectSettings: Set project settings
– setScenario: Set scenario
– setStandardErrorEstimationSettings: Set standard error estimation settings
– setStructuralModel: Set structural model file
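Using the functions listed above, a typical workflow can be sketched as follows. The project path is a placeholder, and the initialization argument is an assumption to check against the package documentation; only function names from the list above are used.

```r
library(MlxConnectors)

# initialize the API (the 'software' argument is an assumption; check the
# package documentation for your installation)
initializeMlxConnectors(software = "monolix")

# load an existing Monolix project (placeholder path)
loadProject("/path/to/project.mlxtran")

# run the estimation tasks one by one...
runPopulationParameterEstimation()
runConditionalModeEstimation()
runConditionalDistributionSampling()
runStandardErrorEstimation()
runLogLikelihoodEstimation()

# ...or simply run the current scenario in one call
# runScenario()

# retrieve the results
pop   <- getEstimatedPopulationParameters()
indiv <- getEstimatedIndividualParameters()
ll    <- getEstimatedLogLikelihood()

saveProject()
```

This mirrors the interface workflow: load, run the tasks of the scenario, then collect the estimates, so batch runs can be scripted without opening the GUI.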