OrdinalEncoder
Encode categorical feature columns as integer ordinal codes.
For each input feature column, every unique category value is mapped to a
contiguous integer starting at 0. Given categories ["cold", "warm", "hot"],
the default (alphabetically sorted) mapping is cold -> 0, hot -> 1,
warm -> 2; a custom category list can be supplied to impose a
domain-specific order.
Unlike OneHotEncoder, ordinal encoding produces a single output
column per input column and implicitly encodes a numerical order between
the categories. This makes it appropriate when:
- The categories have a meaningful rank (e.g. education level, severity score, shirt size).
- The downstream model can exploit ordinal structure (e.g. tree-based models such as gradient-boosted trees or random forests).
For unordered nominal categories, OneHotEncoder is typically
preferred because ordinal codes introduce a spurious ordering.
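The default rule can be sketched in plain Python. This is a minimal illustration of the mapping described above, not DashAI's implementation, which wraps scikit-learn's OrdinalEncoder:

```python
def ordinal_codes(column):
    """Map each value to the index of its category in the sorted
    list of unique values (the default, alphabetical ordering)."""
    mapping = {cat: code for code, cat in enumerate(sorted(set(column)))}
    return [mapping[v] for v in column]

temps = ["cold", "warm", "hot", "cold"]
print(ordinal_codes(temps))  # [0, 2, 1, 0]  (cold -> 0, hot -> 1, warm -> 2)
```

Note that alphabetical order puts "hot" before "warm", which is why a custom category list is needed when the domain order matters.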
Parameters
- categories : string, default="auto"
  Categories (unique values) per feature.
- dtype : string, default=np.float64
  Desired dtype of output.
- handle_unknown : string, default="error"
  Whether to raise an error or use a specific encoded value when an unknown category is seen.
- unknown_value : default=None
  The value to use for unknown categories.
- min_frequency : default=None
  Minimum frequency of a category to be considered as frequent.
- max_categories : default=None
  Maximum number of categories to encode.
Methods
get_output_type(self, column_name: str = None) -> DashAI.back.types.dashai_data_type.DashAIDataType
Defined in OrdinalEncoder. Return the DashAI data type produced by this converter for a column.
Parameters
- column_name : str, optional
- Not used; all output columns share the same type. Defaults to None.
Returns
- DashAIDataType
- A placeholder Categorical type with two string values ("0" and "1"). The actual category values are not reflected at schema-declaration time; the real categories are available after fitting via self.categories_.
changes_row_count(self) -> 'bool'
Inherited from BaseConverter. Indicate whether this converter changes the number of dataset rows.
Returns
- bool
- True if the converter may add or remove rows, False otherwise.
fit(self, x: 'DashAIDataset', y: Optional[ForwardRef('DashAIDataset')] = None) -> DashAI.back.converters.base_converter.BaseConverter
Inherited from SklearnWrapper. Fit the scikit-learn transformer to the data.
Parameters
- x : DashAIDataset
- The input dataset to fit the transformer on.
- y : DashAIDataset, optional
- Target values for supervised transformers. Defaults to None.
Returns
- BaseConverter
- The fitted transformer instance (self).
get_metadata(cls) -> 'Dict[str, Any]'
Inherited from BaseConverter. Get metadata for the converter, used by the DashAI frontend.
Parameters
- cls : type
- The converter class (injected automatically by Python for classmethods).
Returns
- Dict[str, Any]
- Dictionary containing display name, short description, image preview path, category, icon, color, and whether the converter is supervised.
get_schema(cls) -> dict
Inherited from ConfigObject. Generate the component's JSON Schema.
Returns
- dict
- Dictionary representing the JSON Schema of the component.
transform(self, x: 'DashAIDataset', y: Optional[ForwardRef('DashAIDataset')] = None) -> 'DashAIDataset'
Inherited from SklearnWrapper. Transform the data using the fitted scikit-learn transformer.
Parameters
- x : DashAIDataset
- The input dataset to transform.
- y : DashAIDataset, optional
- Not used. Present for API consistency. Defaults to None.
Returns
- DashAIDataset
- The transformed dataset with updated DashAI column types.
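The fit/transform pattern above can be illustrated with a minimal stand-in class. The class name and plain-list inputs are hypothetical; DashAI's SklearnWrapper operates on DashAIDataset objects:

```python
class ToyOrdinalEncoder:
    """Minimal fit/transform converter: fit learns the sorted categories,
    transform replaces each value with its learned integer code."""

    def fit(self, x, y=None):
        self.categories_ = sorted(set(x))
        return self  # returning self enables fit(...).transform(...) chaining

    def transform(self, x, y=None):
        mapping = {c: i for i, c in enumerate(self.categories_)}
        return [mapping[v] for v in x]

enc = ToyOrdinalEncoder()
codes = enc.fit(["low", "high", "mid"]).transform(["high", "low"])
print(codes)  # [0, 1]  ("high" sorts first alphabetically)
```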
validate_and_transform(self, raw_data: dict) -> dict
Inherited from ConfigObject. Validate the configuration data given by the user and return it with all the objects that the model needs to work.
Parameters
- raw_data : dict
- A dictionary with the data provided by the user to initialize the model.
Returns
- dict
- A validated dictionary with the necessary objects.