An experiment is a collection of comparable model candidates. Experiments can be long-lived (for example, when they represent
a use case) or short-lived (for example, results from hyperparameter tuning triggered by a merge request), but they usually hold
model candidates that share a similar set of parameters and metrics.
## Model candidate
A model candidate is a variation of the training of a machine learning model that can eventually be promoted to a version
of the model. The goal of a data scientist is to find the model candidate whose parameter values lead to the best model
performance, as indicated by the given metrics.
Example parameters:
- Algorithm (linear regression, decision tree, and so on).
- Hyperparameters for the algorithm (learning rate, tree depth, number of epochs).
- Features included.
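As a sketch, each combination of such parameters defines one candidate to train and compare. The names and values below are hypothetical, chosen only to illustrate how a search space expands into candidates:

```python
from itertools import product

# Hypothetical search space: every combination is one model candidate.
algorithms = ["linear_regression", "decision_tree"]
learning_rates = [0.01, 0.1]

candidates = [
    {"algorithm": algo, "learning_rate": lr}
    for algo, lr in product(algorithms, learning_rates)
]
print(len(candidates))  # 4 candidates to compare
```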
## Usage
### User access management
An experiment is always associated with a project. Only users with access to that project
can view the experiment's data.
### Tracking new experiments and trials
Experiments and trials can only be tracked through the [MLflow](https://www.mlflow.org/docs/latest/tracking.html) client
integration. For more information on how to use GitLab as a backend for the MLflow client, see the [documentation page](../../integrations/mlflow_client.md).
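As a minimal sketch of that integration, the MLflow client can be pointed at a GitLab project by setting the tracking URI and token, after which runs are logged as usual. The instance URL, project ID, and token below are placeholders you must replace, and the experiment and metric names are hypothetical:

```python
import os
import mlflow

# Placeholder values: substitute your own instance URL, project ID, and token.
os.environ["MLFLOW_TRACKING_URI"] = (
    "https://gitlab.example.com/api/v4/projects/<project_id>/ml/mlflow"
)
os.environ["MLFLOW_TRACKING_TOKEN"] = "<personal_access_token>"

mlflow.set_experiment("iris-classifier")   # creates or reuses an experiment
with mlflow.start_run():                   # each run is tracked as a candidate
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_metric("accuracy", 0.93)
```

This sketch assumes a reachable GitLab instance with the feature enabled; without one, the client calls fail at connection time.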
### Exploring model candidates
To list the current active experiments, navigate to `https/-/ml/experiments`. Selecting an experiment displays all of its trials.

Searching experiments, searching trials, visual comparison of trials, and creating, deleting, and updating experiments and trials through the GitLab UI are under development.
## Disabling or enabling the feature
On self-managed GitLab, ML Experiment Tracking is disabled by default. To enable the feature, ask an administrator to [enable the feature flag](../../../../administration/feature_flags.md) named `ml_experiment_tracking`.
On GitLab.com, this feature is currently in private testing.