# FAQ

#### Resolution and display

• OpenGL technology impact on remote access: The Monolix and Datxplore interfaces were updated with OpenGL technology. Unfortunately, remote access using direct rendering is not compatible with OpenGL, because an OpenGL application sends instructions directly to the local hardware, bypassing the target X server. As a consequence, MonolixSuite cannot be used with X11 forwarding. Instead, indirect rendering should be used, where the remote application sends instructions to the X server, which transfers them to the graphics card. This is possible over ssh, but it requires a dedicated configuration depending on the machine and the operating system. Other applications such as VNC or Remmina can also be used for indirect rendering.
• If the graphical user interface appears with too high or too low a resolution, follow these steps:
• open and close Datxplore
• open Monolix
• load any project from the demos
• in the menu, go to Settings > Preferences and disable the “High dpi scaling” in the Options.
• close Monolix
• restart Monolix

#### Regulatory

• What is needed for a regulatory submission using Monolix2018? Monolix is used for regulatory submissions (including the FDA and the EMA) of population PK and PK/PD analyses. The summary of elements needed for submission can be found here.
• How to cite Monolix2018R1? To cite Monolix, please reference it as here
Monolix version 2018R1. Antony, France: Lixoft SAS, 2018.
http://lixoft.com/products/monolix/

#### Running Monolix

• On what operating systems does Monolix run? MonolixSuite runs on Windows, Linux and macOS.
• Is it possible to run Monolix using a simple command line? Yes, see here. In addition, a full R API provides complete flexibility for running and modifying a Monolix project, as shown here.

#### Initialization

• How to initialize my parameters? There are several ways to initialize your parameters and visualize the impact. See here for the different possibilities.

#### Results

• Can I define the result folder myself? By default, the result folder name corresponds to the project name. However, you can set it yourself. See here for how to define it in the user interface.
• What result files are generated by Monolix? Monolix generates many different output files depending on the tasks run by the user. A complete listing of the files, along with the conditions under which each is created, is available here.
• Can I replot the plots using another plotting software? Yes. If you go to the Export menu and click “Export charts data”, all the data needed to reproduce the plots are stored in text files. See here for a description of all the files generated along with the plots.
• When I open a project, my results are not loaded (message “Results have not been loaded due to an old inconsistent project”). Why? When loading a project, Monolix checks that the project being loaded (i.e., all the information saved in the .mlxtran file) and the project that was used to generate the results are the same. If not, this error message is shown. For instance, if one runs a project, then uses “use last estimates”, saves, and reloads the project, the saved project has the “last estimates” as initial values, which differ from the initial values used to run the project and generate the results. In that case the results are not loaded, because they are inconsistent with the loaded project.
It is possible to check what prevents the results from loading by comparing the content of the .mlxtran file being loaded with the .mlxtran file located in the hidden .Internals folder inside the result folder. To see the .Internals folder, “show hidden files/folders” must be activated on the machine.
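The comparison above can be automated with a small script. The sketch below uses Python's standard `difflib` to print a unified diff of two .mlxtran files; the paths in the usage comment are hypothetical and must be adapted to your own project layout.

```python
import difflib
from pathlib import Path

def mlxtran_diff(saved_project: str, internal_copy: str) -> str:
    """Unified diff between two .mlxtran files; an empty string means they match."""
    a, b = Path(saved_project), Path(internal_copy)
    return "".join(difflib.unified_diff(
        a.read_text().splitlines(keepends=True),
        b.read_text().splitlines(keepends=True),
        fromfile=str(a), tofile=str(b),
    ))

# Hypothetical layout -- adapt the paths to your own project and result folder:
# print(mlxtran_diff("project.mlxtran", "project/.Internals/project.mlxtran"))
```

Any line shown in the diff (for instance, differing initial values) is what makes Monolix consider the projects inconsistent.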

• How are the censored data handled? The handling of censored data is described here.
• How are the parameters without variability handled? The different methods for parameters without variability are explained here.
• What is the convergence indicator displayed during SAEM? The convergence indicator is the complete log-likelihood. It can help to follow convergence. Note that the complete log-likelihood is not the same as the log-likelihood computed as a separate task. Indeed, the log-likelihood is defined as $$\sum_{i=1}^{N_{\text{ind}}}\log\left(p(y_i; \theta)\right)$$. It is the relevant quantity to compare models, but unfortunately it cannot be computed in closed form because the individual parameters $$\phi_i$$ are unobserved. Thus, the log-likelihood is estimated in a separate task by an importance sampling Monte Carlo method (or approximated via linearization of the model). To learn more about the log-likelihood calculation using linearization or importance sampling, see here. The complete log-likelihood, on the contrary, refers to the joint distribution: $$\sum_{i=1}^{N_{\text{ind}}}\log\left(p(y_i, \phi_i; \theta)\right)$$. By decomposing each term as $$p(y_i, \phi_i; \theta)=p(y_i| \phi_i; \theta)p(\phi_i; \theta)$$, we see that this quantity can easily be computed using, for $$\phi_i$$, the individual parameters drawn by MCMC at the current iteration of SAEM. This quantity is calculated at each SAEM step and is useful to assess the convergence of the SAEM algorithm.
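The decomposition above can be illustrated on a toy Gaussian population model (a sketch only, not Monolix's internals): once individual parameters $$\phi_i$$ are available (here simulated as a stand-in for the MCMC draws), the complete log-likelihood is just the sum of the conditional and prior log-densities.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Toy population model (illustration only, not Monolix's internals):
#   individual parameter  phi_i  ~ N(mu, omega^2)
#   observation           y_ij   ~ N(phi_i, sigma^2)
mu, omega, sigma = 1.0, 0.5, 0.2
n_ind, n_obs = 5, 10

phi = rng.normal(mu, omega, size=n_ind)              # stand-in for the MCMC draws
y = rng.normal(phi[:, None], sigma, size=(n_ind, n_obs))

# Complete log-likelihood: sum_i log p(y_i, phi_i; theta)
#   = sum_i [ log p(y_i | phi_i; theta) + log p(phi_i; theta) ]
log_cond = norm.logpdf(y, loc=phi[:, None], scale=sigma).sum()
log_prior = norm.logpdf(phi, loc=mu, scale=omega).sum()
complete_ll = log_cond + log_prior
print(complete_ll)
```

Unlike the marginal log-likelihood, no integral over $$\phi_i$$ is needed, which is why this quantity is cheap to evaluate at every SAEM iteration.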
• When estimating the log-likelihood via importance sampling, the log-likelihood does not seem to stabilize. What can I do? The log-likelihood estimator obtained by importance sampling is biased by construction (see here for details). To reduce the bias, the conditional distribution $$p_{\phi_i|y_i}$$ should be approximated as well as possible. To do so, run the “conditional distribution” task before estimating the log-likelihood.
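The effect of a good proposal can be seen on a toy conjugate Gaussian model (a sketch with assumed example values, not Monolix's estimator): when the proposal is close to the conditional distribution $$p_{\phi|y}$$, the importance weights are nearly constant and the estimate is much more stable than with a crude proposal.

```python
import numpy as np
from scipy.stats import norm
from scipy.special import logsumexp

rng = np.random.default_rng(1)

# Toy conjugate model so the exact log-likelihood is known:
#   phi ~ N(mu, omega^2),  y | phi ~ N(phi, sigma^2)  =>  y ~ N(mu, omega^2 + sigma^2)
mu, omega, sigma = 0.0, 1.0, 0.5
y = 1.3
exact = norm.logpdf(y, mu, np.sqrt(omega**2 + sigma**2))

def is_loglik(prop_mean, prop_sd, n=100_000):
    """Importance-sampling estimate of log p(y) with proposal N(prop_mean, prop_sd^2)."""
    phi = rng.normal(prop_mean, prop_sd, size=n)
    log_w = (norm.logpdf(y, phi, sigma) + norm.logpdf(phi, mu, omega)
             - norm.logpdf(phi, prop_mean, prop_sd))
    return logsumexp(log_w) - np.log(n)

# Crude proposal (the prior) vs a proposal matching the conditional p(phi|y)
post_var = 1.0 / (1.0 / omega**2 + 1.0 / sigma**2)
post_mean = post_var * (mu / omega**2 + y / sigma**2)
crude = is_loglik(mu, omega)
good = is_loglik(post_mean, np.sqrt(post_var))
print(exact, crude, good)
```

In Monolix, running the “conditional distribution” task plays the role of building that good proposal.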

#### Model definition

• Is it possible to use time-varying covariates? Yes, however the covariates relationship must be defined in the model instead of the GUI. See here how to do that.
• Is it possible to define complex covariate-parameter relationships such as Michaelis-Menten for instance? Yes, this can be done directly in the model file. See here how to do it.
• Is it possible to define a categorical covariate effect on the standard deviation of a parameter? Yes, this can be done directly in the model file. See here how to do it.
• Is it possible to define mixtures of structural models? Yes, in some situations it may be necessary to introduce diversity into the structural models themselves, using between-subject model mixtures (BSMM) or within-subject model mixtures (WSMM). The handling of mixtures of structural models is described here. Notice that in the case of a BSMM, the proportion p between groups is a population parameter of the model to estimate. There is no inter-patient variability on p: all the subjects have the same probability, and a logit-normal distribution without variability must be used for p to constrain it between 0 and 1.
• Is it possible to define mixtures of distributions? Yes, the handling of mixtures of distributions is described here.
• Can I set bounds on the population parameters, for example between a and b? It is not possible to set bounds directly on the estimated population parameters. However, it is possible to define bounded parameter distributions, which as a consequence also bound the estimated fixed effect. See here for how to do it.
• Can I put any distribution on the population parameters? Not directly through the interface: only normal, log-normal, logit-normal and probit-normal distributions are available there. However, you can apply any transformation to your parameter in the EQUATION: section of the structural model.
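The bounding idea behind the two answers above can be checked numerically. The sketch below (Python for illustration, not Mlxtran syntax; the bounds and variances are assumed example values) maps a normally distributed latent parameter through a scaled logistic, so the resulting parameter stays strictly inside (a, b):

```python
import numpy as np

rng = np.random.default_rng(2)

# Latent normal random effect mapped through a scaled logistic -- the same
# idea as writing the transform in the EQUATION: block of the structural model.
a, b = 2.0, 10.0          # desired bounds (example values)
mu, omega = 0.0, 1.0      # latent normal: eta ~ N(mu, omega^2)

eta = rng.normal(mu, omega, size=10_000)
param = a + (b - a) / (1.0 + np.exp(-eta))   # logit-normal scaled to (a, b)

print(param.min(), param.max())
```

Because the logistic maps the whole real line into (0, 1), any estimate of the latent fixed effect automatically yields a parameter within the chosen bounds.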
• Can I set a custom error model? No, this is not possible. It may however be possible to transform the data such that the error model can be picked from the list. For an example with a model-based meta-analysis project, see here.

#### Tricks

• How to compute AUC, time-interval AUC, etc. using Mlxtran in a Monolix project? See here.
• How can I calculate the coefficient of variation? The coefficient of variation is not output by Monolix but can easily be calculated manually. It is defined as the ratio of the standard deviation to the mean. It is often reported for log-normally distributed parameters, where it can be calculated as $$\textrm{CV}=\sqrt{e^{\omega^2}-1}$$ with $$\omega$$ the estimated standard deviation of the random effect.
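As a quick worked example of the formula above (the value of $$\omega$$ is an assumed example, e.g. taken from the population parameter table):

```python
import numpy as np

# CV of a log-normally distributed parameter from the estimated standard
# deviation omega of its random effect:  CV = sqrt(exp(omega^2) - 1)
omega = 0.3                                # example value
cv = np.sqrt(np.exp(omega**2) - 1.0)
print(f"CV = {100 * cv:.1f}%")             # ≈ 30.7%
```

For small $$\omega$$, the CV is close to $$\omega$$ itself, as seen here (0.3 vs. 0.307).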