MonolixSuite 2018R1 Release Notes
February 15, 2018

This document contains the release notes for MonolixSuite2018R1 and describes the main evolutions of the software, including data set and Mlxtran management. You can download the PDF here: (ReleaseNotes2018R1).

Data set

To provide simpler and more effective data management across all MonolixSuite applications, data management has evolved since the previous MonolixSuite versions. We describe below the differences for each column type.

Evolution for each column type

Column-types used to identify subject-occasions
– [ID] Several ID columns in a data set are no longer supported
– [OCCASION] Sequences of covariates are no longer generated

Column-types used to time-stamp data
– [TIME] Time is converted into relative hours based on the smallest value per individual. If both DATE and TIME are present, time is defined in hours starting from the individual's first day.
– [DATE/DAT1/DAT2/DAT3] The separator “-” is now allowed.
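As an illustration of the time conversion described above (the column names and values below are hypothetical), consider an individual with DATE and TIME columns:

```
ID  DATE        TIME
1   2018-01-05  08:00
1   2018-01-06  20:00
```

Under this reading of the rule, time is expressed in hours starting at the individual's first day, so these two lines would correspond to 8 h and 44 h respectively.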

Column-types used to define responses
– The Y column has been renamed OBSERVATION and the YTYPE column has been renamed OBSERVATION ID
– [OBSERVATION] If a line contains both a non-null dose and a value in the response column, it is now considered as both a dose and a response. It was formerly considered only as a response.
– A regressor value is taken into account even if the measurement on the same line is not (due to an MDV or EVID column).
– [OBSERVATION ID] Several OBSERVATION ID columns in a data set are no longer supported.

Column-types used to define a dose regimen
– The ADM column has been renamed ADMINISTRATION ID.
– The TINF and RATE columns have been renamed INFUSION DURATION and INFUSION RATE respectively.
– The SS, II, and ADDL columns have been renamed STEADY STATE, INTERDOSE INTERVAL, and ADDITIONAL DOSES respectively.
– [AMOUNT] If a line contains both a non-null dose and a value in the response column, it is now considered as both a dose and a response. It was formerly considered only as a response.
– [ADMINISTRATION ID] Several ADMINISTRATION ID columns in a data set are no longer supported.
– [ADMINISTRATION ID] ADMINISTRATION ID is used only for dose lines. It is no longer possible to use ADMINISTRATION ID as both a dose identifier and an observation identifier.
– [STEADY STATE] The steady state column no longer creates a new occasion.
– [STEADY STATE] When the implied doses would overlap another dose or measurement, no further doses are added.
– [STEADY STATE] The number of doses applied for steady state is no longer managed in the configuration file; it is now defined in the project and through the user interface.
– [STEADY STATE] STEADY STATE = 2 and STEADY STATE = 3 are now possible. These generate doses without any washout (contrary to STEADY STATE=1).
– [STEADY STATE/INTERDOSE INTERVAL/ADDITIONAL DOSES] Previously, when a data set contained an INTERDOSE INTERVAL column and an ADDITIONAL DOSES column but no STEADY STATE column, a STEADY STATE column was artificially created. This is no longer the case.
– [INTERDOSE INTERVAL] An INTERDOSE INTERVAL column alone is no longer allowed.
– [ADDITIONAL DOSES/INTERDOSE INTERVAL] If ADDITIONAL DOSES equals 0 and INTERDOSE INTERVAL > 0, occasions are no longer created.
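As a hypothetical illustration of these dose-regimen column-types (the names follow the new tags; the values are illustrative):

```
ID  TIME  AMOUNT  STEADY STATE  INTERDOSE INTERVAL  OBSERVATION
1   0     100     1             12                  .
1   4     .       .             .                   8.5
```

Here the dose at time 0 is flagged as a steady-state dose repeated every 12 hours; the number of implied doses actually applied is now set in the project and through the user interface. With STEADY STATE = 2 or 3, the implied doses would be generated without any washout.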

Column-types used to define covariates
– The CAT and COV columns have been renamed CATEGORICAL COVARIATE and CONTINUOUS COVARIATE respectively
– [CONTINUOUS COVARIATE] If a covariate varies within a subject-occasion, only the first value (by time ordering) is taken into account. The covariate is still considered constant within the subject-occasion
– [CATEGORICAL COVARIATE] ‘.’ is no longer a valid category; it is interpreted as a repetition of the previous valid category, consistent with the CONTINUOUS COVARIATE handling.
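As a hypothetical example of the ‘.’ handling for a categorical covariate (SEX is an illustrative column name):

```
ID  TIME  OBSERVATION  SEX
1   0     10.2         F
1   12    8.1          .
```

The ‘.’ on the second line is now interpreted as a repetition of the previous valid category, so SEX remains F at time 12.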

Column-types used to define regressions
– The X column has been renamed REGRESSOR
Column-types used to define controls and events
– The EVID column has been renamed EVENT ID
– EVENT ID=2 and EVENT ID=3 are now managed
In addition, the data management system was rewritten in full C++, so loading a data set is much faster.

Evolution of the data set in the Mlxtran

There were some evolutions in the Mlxtran associated with the data set, corresponding to the <DATAFILE> section of the Mlxtran project, used both in Datxplore (with the .datxplore extension) and Monolix (with the .mlxtran extension).
– the number of doses is indicated in the definition of the SS column,
– in the definition of the observation, ytype is removed when there is only one measurement, and is changed to yname when there are several.
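As a sketch of what the corresponding <DATAFILE> section can look like (the file name and column names are illustrative, and the exact keywords are reconstructed from a typical 2018R1 project so they may differ slightly):

```mlxtran
<DATAFILE>

[FILEINFO]
file = 'data/pk_data.txt'
delimiter = tab
header = {ID, TIME, AMT, SS, II, DV, DVID}

[CONTENT]
ID = {use=identifier}
TIME = {use=time}
AMT = {use=amount}
SS = {use=steadystate, nbdoses=5}   ; number of doses now indicated in the SS definition
II = {use=interdoseinterval}
DV = {use=observation, name={y1, y2}, yname={'1', '2'},
      type={continuous, continuous}}   ; yname replaces ytype when there are several outputs
DVID = {use=observationtype}
```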

Mlxtran

There is no evolution of the Mlxtran language. However, three additional libraries were added
– The TMDD (Target Mediated Drug Disposition) library contains a large number of TMDD models corresponding to different approximations, different administration routes, different parameterizations, and different outputs.
– The TTE (Time To Event) library contains a large number of TTE models corresponding to most used hazard functions.
– a PKPD library is now proposed.

Several enhancements were made
– The computation of the analytical solutions was improved to decrease the CPU time
– Analytical solution management: the usage of analytical solutions is no longer set in the preferences. The default setting is TRUE. If a user does not want to use analytical solutions, write “useAnalyticalSolution = no” in the EQUATION: or PK: section of the Mlxtran model file
– All the models of the PK and PKe libraries with nonlinear elimination now use a stiff ODE solver
– The graphical interface to choose a model in the library was largely improved so that the user has simple and comprehensive access to the requested family of models.
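For example, disabling analytical solutions as described above can be done directly in the structural model; a minimal one-compartment sketch (the parameter names are illustrative):

```mlxtran
[LONGITUDINAL]
input = {V, Cl}

PK:
; force the ODE solver instead of the analytical solution (the default is TRUE)
useAnalyticalSolution = no
Cc = pkmodel(V, Cl)

OUTPUT:
output = {Cc}
```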

Datxplore

Datxplore has been completely transformed to provide a better interface and be easier to use. The main enhancements are
– a better and more consistent data management
– a new interface with a new technology and more functionalities.

Data management

  • The rules to accept a data set are exactly the same as in Monolix and are defined in the data set documentation
  • The following column-types are now accepted: OCCASION, all the dose-related columns (AMOUNT, INFUSION RATE, INFUSION DURATION, ADDITIONAL DOSES, STEADY STATE, INTERDOSE INTERVAL), EVENT ID, and MDV
  • Each data definition has its own frame
  • It is possible to load a Monolix project and explore its data set
  • There is a complete and efficient error and warning management, providing all the information on why a data set cannot be loaded and what interpretations were made. It is a great help for proper data set management.
  • Column tags were renamed to clarify their meaning for the users.

Data representation

The representation of the plots is nicer thanks to the JavaScript technology. In addition, plots can be customized through the preferences.
The new features are
– All the split/color/filter/select functionalities are simpler to use and more efficient.
– Hovering over an individual on a spaghetti plot highlights the ID and all the points associated with that ID.
– Hovering over an individual on a spaghetti plot displays the dosing times.
– Information is provided (number of subjects, …)
– It is possible to constrain the zoom in order to zoom only on the x-axis or the y-axis
– It is possible to color groups based on covariate information
– It is possible to stratify and select several individuals
– When a continuous covariate is plotted with respect to another continuous covariate, it is possible to display the regression line and the split. In addition, the correlation value is provided as information.
– When a categorical covariate is represented, it can be displayed either grouped or stacked. In addition, the number of points per group is shown.
– In case of several observations, it is possible to display one output with respect to another. In addition, a red arrow indicates the direction in which time increases.
– In case of discrete observations, it is possible to display them as a spaghetti plot or a histogram.
Evolution
– The extension for Datxplore projects is now .datxplore
– The demos were largely updated
– Observation names can no longer be changed in the user interface. It is still possible to do so in the project file.
– There is a custom error and warning management. Error messages pop up when there is an error in the data set or in the consistency between the data set and the header definition. Warning messages pop up when there is a warning in the interpretation of the data set.
– The stratification is faster
– Loading a data set is much faster, making it easy to load large data sets.
Bug fix:
– Stratification of groups now works when modalities from different covariates have the same name

Mlxplore

There were only a few enhancements in Mlxplore, mainly bug fixes

Evolution
– The extension for Mlxplore projects is now .mlxplore
– It is possible to load Monolix projects
– When loading a Monolix project, treatment indices are now individual IDs, not internal indices
Bug fix:
– When loading a Monolix project, all individuals are now available for selection
– When loading a Monolix project, infusion time and rate are now well computed
– When loading a Monolix project, the number of doses related to ADDITIONAL DOSES and STEADY STATE columns is now correct.
– Correction of the percentile function

Minor enhancements in the editor

Monolix

Monolix has been completely transformed to provide a better interface and plots, better performance, and easier use.

Monolix Interface

The Monolix user interface is brand new, built with JavaScript technology. It is no longer a single frame. There are now several frames

Welcome frame
In this frame, it is possible to
– create a new project
– load a project
– load a recent project
– load a demo
– look at Monolix web documentation

Data frame
In this frame, the user defines the data set and tags each of its columns. The possible columns are the same, but many names were changed to be more intuitive. Notice that, when the user defines the observation column, its type (continuous/discrete/event) must also be defined.
When clicking on OK, the data set is validated and its possible uses are provided. When the data set is validated, a DATA VIEWER button appears, providing the possibility to explore the data set in parallel with the project.

Data frame enhancements
– error messages pop up when there is an error in the data set or in the consistency between the data set and the header definition.
– warning messages pop up when there is a warning in the interpretation of the data set.
– it is possible to scroll down the data set while keeping the header visible
– it is possible to sort the data set by any column
– loading a large data set is much more efficient
– it is possible to visualize the whole data set
– the number of doses can be chosen if there is a steady state

Structural model frame
In this frame, the user defines the structural model. The user can
– browse for a model file from any folder
– load a file from the library, using a custom Lixoft library browser that makes choosing a model easy
– open the model in MlxEditor
– reload it (if it has been changed in the editor, for example)
In addition, error messages pop up when there is an error in the model or in the consistency between the data set and the proposed model.

Initial estimates frame
There are two possibilities
CHECK INITIAL ESTIMATES to see how the structural model fits each individual.
Enhancements
– it is possible to define the number of individuals shown and the associated layout
– it is possible to use the same x-axis and/or y-axis for all individuals
– in case of bsmm, the two models are plotted in solid and dotted red respectively
– the calculation is much faster and dedicated to the considered frame.
Evolution
– it is not possible to check the initial values of the betas anymore
– the grid for the prediction takes doses into account

Set values of the INITIAL ESTIMATES.
Enhancements
– there is a new link to fix all parameters.
– there is a new link to estimate all parameters (the error model parameter ‘c’ is not affected by the ‘estimate all’ feature if its value is 1).
– there is a new link to use the last estimated values as initial estimates. Notice that this link is usable only if there has been no modification of the project.
– there is a new link to use only the last estimated fixed-effect values as initial estimates.
– defining the estimation method is no longer done with a right click; the user clicks on the wheel next to the parameter value.
– when the user clicks on the value, the associated constraint (typically: “Value must be >0”) is displayed to define the domain of definition of the parameter.
– there are error messages when the initial values are not valid for the associated distribution.
– in case of IOV, all the random effects are on the same frame.
– in case of a categorical covariate with several modalities in the statistical model, the user can initialize all associated betas independently
– in case of a categorical covariate with several modalities in the statistical model, the user can define the estimation method on all associated betas independently.

Evolution
– it is not possible to use the last estimates if there has been any modification of the project
– for Bayesian estimation, only the MAP option is available.
– the method colors evolved: black for MLE, orange for fixed, and purple for MAP

Statistical model and tasks frame
Tasks
– The task for the calculation of the individual parameters was split into two tasks (EBEs, referring to the conditional mode, and conditional distribution, allowing the conditional mean).
– The task for the calculation of the individual parameters is displayed before the others to be consistent with a scenario usage.
– Use of the linearization method is now shared between the standard error calculation and the log-likelihood calculation.
– The convergence assessment now uses a user-defined scenario and not the current one. Three scenarios are proposed (whether to compute the standard errors and the log-likelihood, and whether the linearization method is used). Notice that the plots are not run.
– Assessment: a new plot displays the last convergence values (dots) for each run.
– Assessment: the ‘Stop’ button stops the current run and keeps only the previous ones.
– Assessment: graphs are interactive in real time (zoom, layout, selected subplots).
– Assessment: a summary is provided in the Assessment folder of the project result folder.
– Assessment: the scenario of the assessment is now independent of the scenario of the project. The user can choose between three scenarios.
– The settings for each task are now available via a button next to the task.
– It is not possible to reload a previous convergence assessment using the interface. However, all the results are stored in an Assessment folder within the result folder.
– The list of plots is now arranged in categories to increase readability
– Lists of plots can be selected (all, none) by category or for all the plots.

Observation model
– A FORMULA button was added to show, in real time, the formula associated with the error model in case of a continuous error model
– Additional and customizable error models are proposed. The user can now choose from a list of both distributions (normal/lognormal/logitnormal) and error models (constant/proportional/combined1/combined2)
– Generalization of error models: parameter c is always a parameter of the proportional and combined1/combined2 models (fixed to 1 by default)
– when the logit distribution is chosen, it is possible to set the minimum and the maximum of the logit function.
– there are error messages when the minimum and maximum values are not set to correct values
– there is an error message if the user tries to set the distribution to lognormal when it is not possible (in case of negative observations, for example)
– the type of discrete model is displayed

Individual model
– The display is very different and more synthetic
– A FORMULA button was added to show, in real time, the formula associated with the individual model
– The names of the parameters and the covariates are displayed
– In case of IOV, all levels are displayed in the same frame
– The parameter distribution is chosen from a list
– Variability is added or removed by clicking in the “Random Effects” column
– There are two buttons to add and remove variability on all parameters at the same time
– The correlation is no longer defined as a matrix; the user must define groups and add parameters to those groups
– A covariate is added on an individual parameter by clicking in the covariate name column
– In case of IOV, the covariates are arranged by level of variability
– There are dedicated buttons to add a transformed continuous covariate, a transformed categorical covariate, and a mixture
– To add a transformed continuous covariate, the user clicks on the CONTINUOUS button and can define a Name and a Formula.
— a Name is proposed
— the list of available covariates is proposed
— clicking on an available covariate writes it into the Formula
— hovering over an available covariate shows the min, mean, median, and max of the covariate
— the formula can be any Mlxtran-compatible expression
— the Formula can contain several covariates
– To add a transformed categorical covariate, the user clicks on the CATEGORICAL button and can define a Name and groups.
— a Name is proposed
— the list of modalities is proposed
— modalities can be allocated to groups, and allocations can be reset or modified
— the user can choose the reference category
— the user can choose the names of the groups
– To add a mixture, the user clicks on the MIXTURE button and can define the name and the number of modalities
– A magnifying glass icon is provided to locate a covariate when there are several covariates
– For each transformed covariate, it is possible to edit or remove it
– If an action is not possible, an error message explaining the reason is displayed

Results
– New section to see all the task results
– Better representation
– It contains a section for the population parameter estimates
– It contains a section for the individual parameter estimates, with the conditional mode and conditional mean
– It contains a section for the correlation matrix of the estimates (and RSEs), with the linearization method or the stochastic approximation
– The values of the correlation matrix elements and the RSEs are colored to improve readability and speed up diagnosis
– Selecting a correlation in the matrix sets the focus on both associated population parameters
– It contains a section for the estimated log-likelihood and information criteria, with the linearization method or the importance sampling method
– It contains a section for all the statistical tests
– The test values are colored to improve readability and speed up diagnosis
– It is possible to open the output folder directly from here
– The results display is loaded if the project has results

Monolix calculation engine

Better performance thanks to parallelization
It is now possible to parallelize the Monolix calculations over several machines using Open MPI.

Better performance in structural model evaluation
– Faster analytical solutions
– Faster calculation for ODEs
– There are no longer restrictions on using analytical solutions when regressors are constant over the subject's time span. Sequential models (using a PK model and its associated analytical solution) will be much faster.
– Less restrictive conditions to use analytical solutions when IOV occurs

Bug fixed:
– A time varying initial condition (for DDE models) is now well taken into account
– A regressor as initial condition is now well taken into account

Algorithms settings
– Constraints were added for settings
– Settings were renamed and reorganized for better comprehension
– All the settings are now available through a button next to the task.

SAEM algorithm
– Addition of new error models. The user can now define both the distribution and the error model.
– Optimization of the SAEM strategy when the error model has several parameters (typically for the combined1 and combined2 models).
– Strategy with simulated annealing for combined1 and combined2 (improves convergence)
– Evolution of the SAEM strategy when the error model is proportional (there were issues when the prediction was very close to zero).
– CPU time optimization of the SAEM strategy.
– When latent covariates are used, the methodology to estimate the probabilities and the associated betas is now based on the mixing law and not on individual probability draws. This allows a better evaluation of the log-likelihood and better convergence properties.
– When there are parameters without variability,
— With the no-variability method, the maximum number of iterations depends on the number of parameters without variability
— With the no-variability method, the optimization is much faster.
— With the decreasing-variability methods, the artificial variance decreases more slowly
— For the normal law, a better strategy to initialize the variance (more consistent)
— When there is a latent covariate on the parameter, all methods can be used.
– When no parameter has variability and the no-variability method is used, only one iteration of SAEM is performed.
– Two settings of SAEM were updated to provide better convergence
— The minimum number of iterations in the exploratory phase is now 150 (it was 100 previously)
— The step-size exponent in the smoothing phase is now 0.7 (it was 1 previously)
– Constraints were added for settings
– If SAEM reaches the maximum number of iterations during the exploratory phase, a warning is raised.
Removed feature
– We removed the possibility to add different variances depending on the modality of a categorical covariate
– We removed the possibility to choose to work either with standard errors or variances. Only standard errors are proposed. However, projects using variances can still be loaded.
– We removed the possibility to have a Bayesian posterior distribution
– We removed the possibility to have a custom distribution of the individual parameters
– Autocorrelation can no longer be added in the graphical interface. However, it can be loaded or added via the connectors

Conditional distribution
– The conditional distribution can now be computed for discrete and event models.
– New setting: number of simulations per individual parameter
– The number of simulations adapts to the data size
– If the Fisher information matrix by stochastic approximation has already been computed, all the MCMC draws are reused, providing a much faster calculation.

Conditional mode
– The calculation is now tremendously faster (between 20 and 100 times faster)
Standard error calculation
– The Fisher information matrix can now be computed with discrete and event models and with IOV
– Improvement of the calculation for the linearization
– Improvement of the calculations for the stochastic approximation when there are NaNs
– Faster calculation for the linearization
– The maximum number of iterations was decreased to 200 (it was 300 previously)
– The settings for the stochastic approximation were modified: min and max iterations
– Warnings are raised if there are numerical issues for the linearization
– If the conditional distribution has already been computed, all the MCMC draws are reused, providing a much faster calculation.

Log-likelihood calculation
– Improvement of the calculation for the linearization
– Faster calculation in case of censored data
– Faster calculation in case of importance sampling
– When the calculation by linearization has issues, a warning is provided to the user.
– The Monte Carlo size in the importance sampling is now 10000 (it was previously 20000)

Simulation computation for plots
The simulations are much faster than in the previous version. This greatly reduces the time needed to generate the VPCs and the prediction intervals, for example.
In addition, a deep effort was made on discrete and event models, where simulation is now tremendously faster. A progress bar is provided too.
For the simulation on a grid, the doses and the regressors were added.

Plots during algorithms
– Large interactivity (zoom, layout, coordinates)
– Possibility to switch between different frames during the algorithm calculations
– List of elements to compute for plots (the ‘Stop’ button keeps the computations already done)

Tests computation
Tests are computed when the conditional distribution task is performed and the plots are launched. The following tests are computed:
– Pearson’s correlation test on the individual parameter and the covariates used in the statistical model
– Pearson’s correlation test on the individual random effects and the covariates
– Fisher test for discrete covariates
– Shapiro Wilk test on the random effects
– Pearson’s correlation test on the random effects
– Shapiro Wilk test on the individual parameters that have no covariate
– Kolmogorov Smirnov adequacy test on the individual parameters that have covariates
– Van Der Waerden test on the residuals
– Shapiro Wilk test on the residuals
– For all tests associated to individuals (parameters, random effects, NPDEs), the Benjamini-Hochberg procedure is applied to correct for multiple testing

Monolix plots

All the plots were updated with a new technology and new features. In addition, all the colors and graphical settings can be changed in the Preferences frame.

Notice that
– When you save the project, your current graphical settings are saved with it
– you can export your settings to be your defaults in the Export menu.

Stratify
The user can now define all the needed stratifications in a Stratify frame in a very easy way, and can split, color, and filter by any defined covariate.

Enhancements
– Large simplification of the usage
– For a continuous covariate, possibility to define groups with either an equal number of individuals or an equal size
– Possibility to change all the colors
– Possibility to highlight a full group by clicking on the covariate category
– Buttons to add and remove categories
– Better performance

Observed data enhancements
This plot contains all the observations and can be used with all types of observations. It produces
– the spaghetti plot for continuous observations.
– the spaghetti plot or histogram for discrete observations (the user can switch between them).
– the Kaplan-Meier plot for event observations, along with the mean number of events per individual

Enhancements
– When hovering over a curve, the ID is displayed and all the points of the subject are highlighted.
– When splitting, the information for each group is computed.
– When splitting, the user can choose to adapt the x-axis and/or y-axis to each group or to have the same for all groups.
– Possibility to display the dosing times when hovering over an individual.

Individual fits enhancements
– It is possible to sort the individuals by individual parameter values.
– When there are censored data, the full interval is displayed
– the y-scale is better managed.
– Possibility to display the dosing times.
– The user can choose to share the x-axis and/or y-axis.
– Possibility to zoom on all the individuals at the same time with a linked zoom.
– Population fits (population covariates)
– The grid takes doses (and regressors) into account
– Color is added for a better representation of IOV when occasions are joined (according to the presence or absence of washout)

Observation vs Prediction enhancements
– The conditional distribution can be used for this plot.
– A 90% prediction interval is now available.
– Information on the proportion of outliers.
– Hovering over a point displays both the ID and the time of the point (and its replicates if the conditional distribution is chosen). In addition, the other points corresponding to the same ID are highlighted.
– The log-log scale management is more efficient.

Scatter plots of the residuals enhancements
– Possibility to have a scatter plot for events.
– IWRES can be computed with the conditional distribution.
– Hovering over a point displays both the ID and the time of the point (and its replicates if the conditional distribution is chosen). In addition,
— the other points corresponding to the same ID are highlighted.
— the same points are highlighted on the other plots.
– 2 predefined configurations (VPC and scatter).
– In case of discrete models, the scatter plot w.r.t. time was added.

Distribution of the residuals enhancements
– Hovering over a bar in the pdf plot shows the percentage of individuals in that bar.
– Hovering over the cdf plot displays the theoretical and empirical cdf along with the x-axis value.
– The qqplot representation was replaced by a cdf representation.
– The empirical pdf is not computed anymore.

Distribution of the individual parameters enhancements
– The non-parametric pdf is no longer proposed.
– Hovering over a bar in the pdf plot shows the percentage of individuals in that bar.
– The empirical and theoretical cdf of the individual parameters are now computed.
– Hovering over the cdf plot displays the theoretical and empirical cdf along with the x-axis value.
– When splitting, the shrinkage information is computed.

Distribution of the random effects enhancements
– The empirical and theoretical pdf of the random effects are now computed.
– Hovering over a bar in the pdf plot shows the percentage of individuals in that bar.
– The empirical and theoretical cdf of the random effects are now computed.
– Hovering over the cdf plot displays the theoretical and empirical cdf along with the x-axis value.

Correlation between random effects enhancements
– Correlation information is proposed.
– Hovering over a point displays the ID of the point (and its replicates if the conditional distribution is chosen). In addition, the same ID is highlighted in the other figures.
– Possibility to select the parameters to look at.
– Possibility to split the graphic.
– Optimized layout: at the beginning, a maximum of 6 random effects is displayed, but the user can choose any number afterward.

Individual parameters vs covariates enhancements
– Hovering over a point displays the ID of the point (and its replicates if the conditional distribution is chosen). In addition, the same ID is highlighted in the other figures.
– Possibility to split and display all the figures
– Possibility to select the parameters to look at.
– Possibility to select the covariates to look at.
– Possibility to split and color at the same time.

Visual Predictive Checks enhancements
– This plot contains all the observations and can be used with all types of observations.
– In case of categorical projects, there is no bin management on the y-axis. All the categories are displayed with the correct y-label
– In case of count projects, the y-label is well defined

Prediction distribution
– Possibility to color the observations
– Possibility to differentiate the censored data from the non-censored data
– Hovering over a point displays the ID of the point. In addition, the other points of the same ID are highlighted.
– Hovering over a band displays the range of the band.

Log-likelihood contribution
– Possibility to zoom on a subset of the individuals

New plots
– Standard errors of the estimates
– MCMC convergence plot
– Importance Sampling convergence plot

Monolix project definition, settings, and outputs

Project evolution
In terms of the project, there are only a few modifications
– The definition of the number of doses in the STEADY STATE definition is now in the project and not in the user configuration
– In case of several outputs in the data set, the name of the type of output described in the observation is now yname instead of ytype. Backward compatibility is ensured.
– In case of a single output in the data set, the type of output described in the observation was ytype=1. It is now removed as it is useless. Backward compatibility is ensured.
– The list of graphics is now defined in the project file and not in the associated .xmlx anymore.
– The names of the tasks in the Mlxtran project evolved a little to be more consistent with the user interface.

Settings
In terms of project settings, there are only a few modifications:
– Whether or not to use the analytical solutions is now defined in the Mlxtran structural model and no longer in the user configuration
– The project settings are now available via the menu Settings/Project settings
– The user has the possibility to save the data and the model next to the project
– The preferences interface has been rewritten in JavaScript
– The working directory is no longer available through the interface, only via the user configuration file
– Changing the number of threads no longer requires restarting Monolix
– An option is provided to automatically export all the charts data after the run
– The chart export formats are now .svg and .png
– The timestamping option is now called ‘Save History’: the project and its results are saved after each run.

Configuration of the plots
The plot configuration is now saved in a .properties file associated with the project; it is no longer an .xmlx file, but it remains human-readable. Backward compatibility is only ensured for the list of graphics. This .properties file
– overlays the default settings (default.settings in the user/lixoft folder)
– contains all the information for the display of the graphics in terms of what is displayed
– contains all the information for the display of the graphics in terms of the covariate stratification
– contains all the information for the display of the graphics in terms of the colors and preferences for each graphic
When saving a project, a .properties file is generated, ensuring that exactly the same figures are replotted after a reload.
It is possible to export all the settings to define them as the global settings.

Outputs
In terms of outputs, all the files and folders have been reorganized. We now have
– summary.txt: a summary of the run
– populationparameter.txt: all the estimated population parameters
– the output predictions
– all the files concerning the Fisher Information Matrix, in a folder FisherInformation
– all the files concerning the individual parameters and the random effects, in a folder IndividualParameters
– all the files concerning the log-likelihood, in a folder logLikelihood
– all the files concerning the results of the tests, in a folder Tests
– the internal Lixoft files needed to reload the project, in a private folder .Internals
– when the charts data are exported, the data in a folder ChartsData
– when the figures are exported, the figures in a folder ChartsFigures
– all figures can be exported independently
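Based on the folders listed above, a typical result directory could look like the following (an illustrative sketch; exact file names for the prediction outputs may differ):

```text
<project results>/
├── summary.txt                (summary of the run)
├── populationparameter.txt    (estimated population parameters)
├── <output prediction files>
├── FisherInformation/
├── IndividualParameters/
├── logLikelihood/
├── Tests/
├── ChartsData/                (when the charts data are exported)
├── ChartsFigures/             (when the figures are exported)
└── .Internals/                (private Lixoft files needed to reload)
```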
In terms of export, we can
– export all the charts data in Settings/Export charts data
– export all the figures in Settings/Export plots
– export the project to Mlxplore in Settings/Export in Mlxplore

Monolix Connectors

An R package is associated with Monolix, giving the user access to all the functions available through the interface. The following functions are available:
– abort: Stop the current task run
– addCategoricalTransformedCovariate: Add Categorical Transformed Covariate
– addContinuousTransformedCovariate: Add Continuous Transformed Covariate
– addMixture: Add a new latent covariate to the current model, giving its name and its number of modalities
– computePredictions: Compute predictions from the structural model
– getConditionalDistributionSamplingSettings: Get conditional distribution sampling settings
– getConditionalModeEstimationSettings: Get conditional mode estimation settings
– getContinuousObservationModel: Get continuous observation models information
– getCorrelationOfEstimates: Get the inverse of the Fisher Matrix
– getCovariateInformation: Get Covariates Information
– getData: Get project data
– getEstimatedIndividualParameters: Get last estimated individual parameter values
– getEstimatedLogLikelihood: Get Log-Likelihood Values
– getEstimatedPopulationParameters: Get last estimated population parameter values
– getEstimatedRandomEffects: Get the estimated random effects
– getEstimatedStandardErrors: Get standard errors of population parameters
– getGeneralSettings: Get project general settings
– getIndividualParameterModel: Get Individual Parameter Model
– getLastRunStatus: Get last run status
– getLaunchedTasks: Get tasks with results
– getLogLikelihoodEstimationSettings: Get LogLikelihood algorithm settings
– getMCMCSettings: Get MCMC algorithm settings
– getMlxEnvInfo: Get information about MlxEnvironment object
– getObservationInformation: Get observations information
– getPopulationParameterEstimationSettings: Get population parameter estimation settings
– getPopulationParameterInformation: Get Population Parameters Information
– getPreferences: Get project preferences
– getProjectSettings: Get project settings
– getSAEMiterations: Get SAEM algorithm iterations
– getScenario: Get current scenario
– getSimulatedIndividualParameters: Get simulated individual parameters
– getSimulatedRandomEffects: Get simulated random effects
– getStandardErrorEstimationSettings: Get standard error estimation settings
– getStructuralModel: Get structural model file
– getVariabilityLevels: Get Variability Levels
– initializeMlxConnectors: Initialize MlxConnectors API
– isRunning: Get current scenario state
– loadProject: Load project from file
– mlxDisplay: Display Mlx API Structures
– newProject: Create new project
– removeCovariate: Remove Covariate
– runConditionalDistributionSampling: Sampling from the conditional distribution
– runConditionalModeEstimation: Estimation of the conditional modes (EBEs)
– runLogLikelihoodEstimation: Log-Likelihood estimation
– runPopulationParameterEstimation: Population parameter estimation
– runScenario: Run Current Scenario
– runStandardErrorEstimation: Standard error estimation
– saveProject: Save current project
– setAutocorrelation: Set auto-correlation
– setConditionalDistributionSamplingSettings: Set conditional distribution sampling settings
– setConditionalModeEstimationSettings: Set conditional mode estimation settings
– setCorrelationBlocks: Set Correlation Block Structure
– setCovariateModel: Set Covariate Model
– setData: Set project data
– setErrorModel: Set error model
– setGeneralSettings: Set common settings for algorithms
– setIndividualParameterDistribution: Set Individual Parameter Distribution
– setIndividualParameterVariability: Individual Variability Management
– setInitialEstimatesToLastEstimates: Initialize population parameters with the last estimated ones
– setLogLikelihoodEstimationSettings: Set loglikelihood estimation settings
– setMCMCSettings: Set settings associated to the MCMC algorithm
– setObservationDistribution: Set observation model distribution
– setObservationLimits: Set observation model distribution limits
– setPopulationParameterEstimationSettings: Set population parameter estimation settings
– setPopulationParameterInformation: Population Parameters Initialization and Estimation Method
– setPreferences: Set preferences
– setProjectSettings: Set project settings
– setScenario: Set scenario
– setStandardErrorEstimationSettings: Set standard error estimation settings
– setStructuralModel: Set structural model file
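As an illustration, a minimal workflow with these connectors might look as follows. This is a sketch based only on the function names listed above; the project path is hypothetical, and the exact argument names should be checked against the package documentation.

```r
library(MlxConnectors)

# Initialize the connectors API for Monolix
initializeMlxConnectors(software = "monolix")

# Load an existing Monolix project (hypothetical path)
loadProject("path/to/project.mlxtran")

# Run the current scenario (estimation tasks defined in the project)
runScenario()

# Retrieve some results
pop   <- getEstimatedPopulationParameters()
indiv <- getEstimatedIndividualParameters()
ll    <- getEstimatedLogLikelihood()

# Save the project together with its results
saveProject()
```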

Installer and configuration

Installer

  • Possibility to give an activation key during the installation
  • No more installation choice, as there are no longer both a standalone and a Matlab version
  • The requirements are specified at the beginning of the installation (Windows redistributable for Windows, gcc 4.8 for Linux, …)

System configuration

  • The xmlx files managing the default graphics configuration (the list, the settings, and the preferences) no longer exist. All the default configuration is internal to the software.
  • The xmlx files managing the default scenario configuration no longer exist. All the default configuration is internal to the software.
  • The xmlx files managing the default algorithm settings no longer exist. All the default configuration is internal to the software.
  • system.ini and system.xmlx have been merged into a simplified system.ini. The [PlugInCodeGeneration] option is now always set to TRUE, and it is possible to change it directly in the model file. All the enforcement possibilities are maintained.

User configuration

  • The configuration folder is now common to the applications: there is a single config folder for all applications.
  • Folders older than a month in the module and tmp folders are removed at each start of any application.
  • All paths now share the same root, defined in the config.ini file
  • There is no longer a perlScripts folder

License

License management has been updated to make cloud-based license server management possible. Therefore, all floating license servers should be updated too.

Jonathan CHAUVIN