RandomForestClassifier
Random forest classifier that aggregates predictions from many decision trees.
Random Forest is a bagging ensemble that fits n_estimators decision trees,
each on a bootstrap sample of the training data. At each split only a random
subset of features is evaluated, which decorrelates the trees and reduces
variance. The final class prediction is determined by majority vote across all
trees.
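To make the voting scheme concrete, here is a minimal standalone sketch of bagging with majority vote, written with plain scikit-learn and NumPy. It is illustrative only, not DashAI's implementation: each tree is fit on a bootstrap sample, a random feature subset is evaluated at each split, and the ensemble predicts by majority vote.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
rng = np.random.default_rng(0)

trees = []
for i in range(25):  # n_estimators
    # Bootstrap sample: draw len(X) rows with replacement.
    idx = rng.integers(0, len(X), size=len(X))
    # max_features="sqrt": evaluate only a random subset of features at
    # each split, which decorrelates the trees.
    tree = DecisionTreeClassifier(max_features="sqrt", random_state=i)
    trees.append(tree.fit(X[idx], y[idx]))

# Majority vote across all trees.
votes = np.stack([t.predict(X) for t in trees])  # shape (n_trees, n_samples)
majority = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
print("training accuracy:", (majority == y).mean())
```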
Key hyperparameters include n_estimators (number of trees), max_depth
(maximum tree depth), min_samples_split, min_samples_leaf,
max_leaf_nodes, and random_state. The implementation wraps
scikit-learn's RandomForestClassifier.
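As a reference point, the same hyperparameters can be set directly on the wrapped scikit-learn estimator; the values below mirror the defaults listed in the Parameters section that follows.

```python
from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(
    n_estimators=100,     # number of trees in the forest
    max_depth=2,          # maximum depth of each tree
    min_samples_split=2,  # minimum samples to split an internal node
    min_samples_leaf=1,   # minimum samples required at a leaf node
    max_leaf_nodes=2,     # maximum number of leaf nodes per tree
    random_state=0,       # seed for reproducibility
)
```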
References
- [1] Breiman, L. (2001). "Random Forests." Machine Learning, 45(1), 5-32. https://doi.org/10.1023/A:1010933404324
- [2] https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html
Parameters
- n_estimators : integer, default=100
- The number of decision trees in the forest. It must be an integer greater than or equal to 1.
- max_depth : integer, default=2
- The maximum depth of each tree. It must be an integer greater than or equal to 1.
- min_samples_split : integer, default=2
- The minimum number of samples required to split an internal node. It must be an integer greater than or equal to 2.
- min_samples_leaf : integer, default=1
- The minimum number of samples required to be at a leaf node. It must be an integer greater than or equal to 1.
- max_leaf_nodes : integer, default=2
- The maximum number of leaf nodes. It must be an integer greater than or equal to 2.
- random_state : integer, default=0
- The seed used by the random number generator, for reproducibility. It must be an integer greater than or equal to 0.
Methods
calculate_metrics(self, split: DashAI.back.core.enums.metrics.SplitEnum = SplitEnum.VALIDATION, level: DashAI.back.core.enums.metrics.LevelEnum = LevelEnum.LAST, log_index: int = None, x_data: 'DashAIDataset' = None, y_data: 'DashAIDataset' = None)
Inherited from BaseModel. Calculate and save metrics for a given data split and level.
Parameters
- split : SplitEnum
- The data split to evaluate (TRAIN, VALIDATION, or TEST). Defaults to SplitEnum.VALIDATION.
- level : LevelEnum
- The metric granularity level (LAST, TRIAL, STEP, or BATCH). Defaults to LevelEnum.LAST.
- log_index : int, optional
- Explicit step index for the metric entry. If None, the next step index is computed automatically. Defaults to None.
- x_data : DashAIDataset, optional
- Input features. If None, the dataset stored in the model for the given split is used. Defaults to None.
- y_data : DashAIDataset, optional
- Target labels. If None, the labels stored in the model for the given split are used. Defaults to None.
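A hedged usage sketch, assuming model is an already trained instance of this class; the enum import path is taken from the signature above.

```python
from DashAI.back.core.enums.metrics import SplitEnum, LevelEnum

# Default: evaluate the validation split at the LAST granularity level.
model.calculate_metrics()

# Explicit: evaluate the test split, logging at trial-level granularity.
model.calculate_metrics(split=SplitEnum.TEST, level=LevelEnum.TRIAL)
```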
get_metadata(cls) -> Dict[str, Any]
Inherited from BaseModel. Get metadata values for the current model.
Returns
- Dict[str, Any]
- Dictionary containing UI metadata such as the model icon used in the DashAI frontend.
get_schema(cls) -> dict
Inherited from ConfigObject. Generates the JSON Schema of the component.
Returns
- dict
- Dictionary representing the JSON Schema of the component.
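Both accessors are class-level, so no fitted instance is needed. A brief sketch; the import path for the component class is omitted and depends on your DashAI installation.

```python
schema = RandomForestClassifier.get_schema()      # JSON Schema as a dict
metadata = RandomForestClassifier.get_metadata()  # UI metadata (e.g. the icon)
print(sorted(schema))
```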
load(filename: str) -> 'SklearnLikeModel'
Inherited from SklearnLikeModel. Deserialise a model from disk using joblib.
Parameters
- filename : str
- Path to the file previously written by save().
Returns
- SklearnLikeModel
- The loaded model instance.
predict(self, x_pred: 'DashAIDataset') -> 'ndarray'
Inherited from SklearnLikeClassifier. Make a prediction with the model.
Parameters
- x_pred : DashAIDataset
- Dataset with the input data columns.
Returns
- np.ndarray
- Array with the predicted target values for x_pred.
prepare_dataset(self, dataset: 'DashAIDataset', is_fit: bool = False) -> 'DashAIDataset'
Inherited from SklearnLikeModel. Apply the model transformations to the dataset.
Parameters
- dataset : DashAIDataset
- The dataset to be transformed.
- is_fit : bool, optional
- If True, fit encoders on the data; if False, apply previously fitted encoders. Defaults to False.
Returns
- DashAIDataset
- The prepared dataset, ready to be converted to a format accepted by the model.
prepare_output(self, dataset: 'DashAIDataset', is_fit: bool = False) -> 'DashAIDataset'
Inherited from SklearnLikeModel. Prepare output targets using label encoding.
Parameters
- dataset : DashAIDataset
- The output dataset to be transformed.
- is_fit : bool, optional
- If True, fit the encoder. If False, use existing encodings.
Returns
- DashAIDataset
- Dataset with categorical columns converted to integers.
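A hedged sketch of the fit-then-apply pattern shared by both prepare_* methods, assuming x_train, x_test, y_train, and y_test are DashAIDataset splits.

```python
# Fit the feature encoders on the training split, then reuse them on test data.
x_train_prep = model.prepare_dataset(x_train, is_fit=True)
x_test_prep = model.prepare_dataset(x_test)

# Same pattern for the label encoder on the targets.
y_train_enc = model.prepare_output(y_train, is_fit=True)
y_test_enc = model.prepare_output(y_test)
```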
save(self, filename: str) -> None
Inherited from SklearnLikeModel. Serialise the model to disk using joblib.
Parameters
- filename : str
- Destination file path where the model will be written.
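A minimal persistence roundtrip, assuming load is exposed at class level (its signature above takes no self).

```python
model.save("random_forest.joblib")  # serialised with joblib
restored = RandomForestClassifier.load("random_forest.joblib")
```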
train(self, x_train, y_train, x_validation=None, y_validation=None)
Inherited from SklearnLikeModel. Train the sklearn model on the provided dataset.
Parameters
- x_train : DashAIDataset
- The input features for training.
- y_train : DashAIDataset
- The target labels for training.
- x_validation : DashAIDataset, optional
- Validation input features (unused in sklearn models). Defaults to None.
- y_validation : DashAIDataset, optional
- Validation target labels (unused in sklearn models). Defaults to None.
Returns
- BaseModel
- The fitted scikit-learn estimator (self).
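Putting the pieces together, a hedged end-to-end sketch. Whether DashAI's experiment runner chains these calls for you is not covered here, so the sketch wires them manually; x_train, y_train, and x_validation are assumed DashAIDataset splits.

```python
# Encode inputs and targets, fitting the encoders on the training split.
x_prep = model.prepare_dataset(x_train, is_fit=True)
y_prep = model.prepare_output(y_train, is_fit=True)

model.train(x_prep, y_prep)  # validation arguments are unused for sklearn models

# Predict on new data using the already fitted encoders.
x_val_prep = model.prepare_dataset(x_validation)
y_hat = model.predict(x_val_prep)  # np.ndarray of predicted labels
```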
validate_and_transform(self, raw_data: dict) -> dict
Inherited from ConfigObject. Validates the data provided by the user to initialize the model and returns it with all the objects the model needs to work.
Parameters
- raw_data : dict
- A dictionary with the data provided by the user to initialize the model.
Returns
- dict
- A validated dictionary with the necessary objects.
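A hedged sketch of validating user-supplied hyperparameters before building the model; the class-level call style and the keyword-argument construction are assumptions, since the rendered signature above shows self.

```python
raw = {"n_estimators": 200, "max_depth": 5, "random_state": 0}
params = RandomForestClassifier.validate_and_transform(raw)  # assumed class-level call
model = RandomForestClassifier(**params)                     # assumed kwargs init
```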