SVC
Support vector machine classifier that maximises the margin between classes.
SVC constructs a maximum-margin hyperplane in a (possibly kernel-transformed) feature space. Training data points that lie on or inside the margin are called support vectors; they fully define the decision boundary. Non-linearly separable problems are addressed by mapping the input space into a higher-dimensional space via kernel functions (linear, polynomial, RBF, or sigmoid).
Regularisation is controlled by C: smaller values allow more misclassified
training points in exchange for a wider margin, while larger values enforce a
harder margin. The kernel, gamma, degree, and coef0 parameters
configure the kernel function. The implementation wraps scikit-learn's SVC.
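To illustrate the effect of C, the sketch below uses scikit-learn's SVC directly (the estimator this component wraps); the dataset and parameter values are arbitrary examples, not DashAI defaults.

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Toy binary classification problem (arbitrary example data).
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# A small C tolerates more margin violations (softer margin, usually more
# support vectors); a large C penalises violations heavily (harder margin).
soft = SVC(C=0.01, kernel="rbf", gamma="scale").fit(X, y)
hard = SVC(C=100.0, kernel="rbf", gamma="scale").fit(X, y)

print("support vectors (C=0.01):", soft.n_support_.sum())
print("support vectors (C=100):", hard.n_support_.sum())
```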
References
- [1] Cortes, C. & Vapnik, V. (1995). "Support-vector networks." Machine Learning, 20(3), 273-297. https://doi.org/10.1007/BF00994018
- [2] https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html
Parameters
- C : number, default=1.0
- Regularisation parameter. The strength of the regularisation is inversely proportional to C.
- coef0 : number, default=1.0
- Independent term in the kernel function. Only significant for the 'poly' and 'sigmoid' kernels.
- degree : number, default=1.0
- Degree of the polynomial kernel. Only significant for the 'poly' kernel.
- gamma : string, default=scale
- Kernel coefficient for the 'rbf', 'poly' and 'sigmoid' kernels.
- kernel : string, default=rbf
- Kernel type used by the model.
- max_iter : integer, default=-1
- Iteration limit for the solver. Must be a positive integer, or -1 for no limit.
- shrinking : boolean, default=True
- Whether the shrinking heuristic is used.
- tol : number, default=1.0
- Tolerance for the stopping criterion.
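Assuming the wrapper forwards these options unchanged to sklearn.svm.SVC (an assumption, not stated on this page), the parameters map onto the underlying estimator roughly as follows; the values are illustrative, not the documented defaults.

```python
from sklearn.svm import SVC

# Hypothetical configuration; keys mirror the parameters listed above.
params = {
    "C": 1.0,
    "coef0": 1.0,
    "degree": 3,        # only used by the 'poly' kernel
    "gamma": "scale",
    "kernel": "rbf",
    "max_iter": -1,     # -1 means no iteration limit
    "shrinking": True,
    "tol": 1e-3,
}

clf = SVC(**params)  # the DashAI wrapper is assumed to do something equivalent
```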
Methods
calculate_metrics(self, split: SplitEnum = SplitEnum.VALIDATION, level: LevelEnum = LevelEnum.LAST, log_index: int = None, x_data: 'DashAIDataset' = None, y_data: 'DashAIDataset' = None)
Inherited from BaseModel. Calculate and save metrics for a given data split and level.
Parameters
- split : SplitEnum
- The data split to evaluate (TRAIN, VALIDATION, or TEST). Defaults to SplitEnum.VALIDATION.
- level : LevelEnum
- The metric granularity level (LAST, TRIAL, STEP, or BATCH). Defaults to LevelEnum.LAST.
- log_index : int, optional
- Explicit step index for the metric entry. If None, the next step index is computed automatically. Defaults to None.
- x_data : DashAIDataset, optional
- Input features. If None, the dataset stored in the model for the given split is used. Defaults to None.
- y_data : DashAIDataset, optional
- Target labels. If None, the labels stored in the model for the given split are used. Defaults to None.
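A hedged usage sketch, assuming a trained DashAI model instance named model and that the enums are importable from the module path shown in the original signature (DashAI.back.core.enums.metrics):

```python
from DashAI.back.core.enums.metrics import LevelEnum, SplitEnum

# model is a trained DashAI SVC instance (construction omitted).
# Evaluate the validation split at the default 'last' granularity;
# x_data / y_data fall back to the datasets already stored on the model.
model.calculate_metrics(split=SplitEnum.VALIDATION, level=LevelEnum.LAST)

# Evaluate the test split instead (SplitEnum.TEST per the description above).
model.calculate_metrics(split=SplitEnum.TEST)
```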
get_metadata(cls) -> Dict[str, Any]
Inherited from BaseModel. Get metadata values for the current model.
Returns
- Dict[str, Any]
- Dictionary containing UI metadata such as the model icon used in the DashAI frontend.
get_schema(cls) -> dict
Inherited from ConfigObject. Generates the component's JSON Schema.
Returns
- dict
- Dictionary representing the JSON Schema of the component.
load(filename: str) -> 'SklearnLikeModel'
Inherited from SklearnLikeModel. Deserialise a model from disk using joblib.
Parameters
- filename : str
- Path to the file previously written by save().
Returns
- SklearnLikeModel
- The loaded model instance.
predict(self, x_pred: 'DashAIDataset') -> 'ndarray'
Inherited from SklearnLikeClassifier. Make a prediction with the model.
Parameters
- x_pred : DashAIDataset
- Dataset with the input data columns.
Returns
- np.ndarray
- Array with the predicted target values for x_pred.
prepare_dataset(self, dataset: 'DashAIDataset', is_fit: bool = False) -> 'DashAIDataset'
Inherited from SklearnLikeModel. Apply the model's transformations to the dataset.
Parameters
- dataset : DashAIDataset
- The dataset to be transformed.
- is_fit : bool, optional
- If True, the method fits encoders on the data; if False, it applies previously fitted encoders.
Returns
- DashAIDataset
- The prepared dataset ready to be converted to an accepted format in the model.
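The exact encoders DashAI fits here are not documented on this page; as a rough analogy, the is_fit split behaves like scikit-learn's fit/transform pattern for feature encoders:

```python
from sklearn.preprocessing import OrdinalEncoder

enc = OrdinalEncoder()

# is_fit=True: learn the encoding from the training split.
train_features = [["red"], ["green"], ["red"]]
enc.fit(train_features)

# is_fit=False: reuse the fitted encoding on new data without refitting.
print(enc.transform([["green"], ["red"]]))
```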
prepare_output(self, dataset: 'DashAIDataset', is_fit: bool = False) -> 'DashAIDataset'
Inherited from SklearnLikeModel. Prepare output targets using label encoding.
Parameters
- dataset : DashAIDataset
- The output dataset to be transformed.
- is_fit : bool, optional
- If True, fit the encoder. If False, use existing encodings.
Returns
- DashAIDataset
- Dataset with categorical columns converted to integers.
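Since the targets are label-encoded, a minimal sketch of that idea using scikit-learn's LabelEncoder (DashAI's internal encoder is assumed to behave similarly):

```python
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()

# is_fit=True: learn the label -> integer mapping from the training targets.
y_train = ["cat", "dog", "cat", "bird"]
y_train_encoded = le.fit_transform(y_train)   # [1, 2, 1, 0]

# is_fit=False: reuse the fitted mapping for validation/test targets.
y_test_encoded = le.transform(["dog", "bird"])  # [2, 0]
```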
save(self, filename: str) -> None
Inherited from SklearnLikeModel. Serialise the model to disk using joblib.
Parameters
- filename : str
- Destination file path where the model will be written.
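Since both save and load are documented as joblib-based, a plain joblib round-trip on a fitted scikit-learn estimator illustrates the mechanics (the exact file contents DashAI writes are not specified here):

```python
import joblib
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=100, random_state=0)
clf = SVC(C=1.0, kernel="rbf").fit(X, y)

# Serialise the fitted estimator to disk ...
joblib.dump(clf, "svc_model.joblib")

# ... and restore it later, ready for prediction.
restored = joblib.load("svc_model.joblib")
print(restored.predict(X[:5]))
```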
train(self, x_train, y_train, x_validation=None, y_validation=None)
Inherited from SklearnLikeModel. Train the sklearn model on the provided dataset.
Parameters
- x_train : DashAIDataset
- The input features for training.
- y_train : DashAIDataset
- The target labels for training.
- x_validation : DashAIDataset, optional
- Validation input features (unused in sklearn models). Defaults to None.
- y_validation : DashAIDataset, optional
- Validation target labels (unused in sklearn models). Defaults to None.
Returns
- BaseModel
- The fitted scikit-learn estimator (self).
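Putting train and predict together, a hedged end-to-end sketch; it assumes x_train, y_train and x_test are DashAIDataset objects already passed through prepare_dataset/prepare_output, and that the wrapper's constructor accepts the parameters listed above (both assumptions):

```python
# Hypothetical end-to-end flow; dataset construction is DashAI-specific and omitted.
model = SVC(C=1.0, kernel="rbf", gamma="scale")

model.train(x_train, y_train)        # fits the underlying sklearn estimator
predictions = model.predict(x_test)  # np.ndarray of predicted target values
```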
validate_and_transform(self, raw_data: dict) -> dict
Inherited from ConfigObject. Takes the data provided by the user to initialize the model and returns it with all the objects the model needs to work.
Parameters
- raw_data : dict
- A dictionary with the data provided by the user to initialize the model.
Returns
- dict
- A validated dictionary with the necessary objects.
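A hedged sketch of how the validated configuration might feed model construction; whether the method is called at class level and whether the constructor accepts the validated dictionary as keyword arguments are assumptions, not confirmed by this page.

```python
# Configuration as it might arrive from the DashAI frontend form (hypothetical values).
raw_data = {"C": 10.0, "kernel": "poly", "degree": 3, "gamma": "scale"}

# Validate the form data against the component's JSON Schema, resolve any nested
# objects, then build the model from the result (assumed usage pattern).
params = SVC.validate_and_transform(raw_data)
model = SVC(**params)
```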