StableDiffusionXLV1ControlNet
DashAI.back.models.hugging_face.StableDiffusionXLV1ControlNet
A wrapper implementation of ControlNet with depth-map preprocessing, using Stable Diffusion XL 1.0 as the base pipeline.
Parameters
- num_inference_steps : integer, default=15 - Number of denoising steps to run. More steps refine the image but increase generation time. Typical range: 20-30 for fast results, 40-50 for higher quality. Values above 100 rarely improve output.
- controlnet_conditioning_scale : number, default=1.0 - Weight of the ControlNet depth conditioning relative to the base diffusion pipeline. Valid range is 0.0-2.0. At 0.0 the depth map has no effect; at 1.0 (default) the output closely follows the input image structure; above 1.5 the depth constraint dominates and may produce overly rigid results.
- device : string, default=CPU - Hardware device for inference. Select a GPU option for hardware acceleration, which is strongly recommended for diffusion models. Select 'CPU' on systems without a compatible GPU, but expect significantly longer generation times.
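The three parameters above can be collected into a plain configuration dictionary with simple range checks. The sketch below is illustrative only: the parameter names, defaults, and the 0.0-2.0 conditioning range come from this page, while the `check_params` helper and its validation behavior are assumptions, not part of the DashAI API.

```python
# Hypothetical helper: merges user values with the documented defaults
# and enforces the ranges described above. Not part of DashAI itself.
DEFAULTS = {
    "num_inference_steps": 15,
    "controlnet_conditioning_scale": 1.0,
    "device": "CPU",
}

def check_params(params: dict) -> dict:
    merged = {**DEFAULTS, **params}
    steps = merged["num_inference_steps"]
    if not (isinstance(steps, int) and steps >= 1):
        raise ValueError("num_inference_steps must be a positive integer")
    scale = merged["controlnet_conditioning_scale"]
    if not 0.0 <= scale <= 2.0:  # documented valid range
        raise ValueError("controlnet_conditioning_scale must be in [0.0, 2.0]")
    return merged
```

For example, `check_params({"num_inference_steps": 30})` would return the merged dictionary with the remaining two defaults filled in, while an out-of-range conditioning scale raises an error before any generation starts.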
Methods
generate(self, input: Tuple[Any, str]) -> List[Any]
Generate output from the generative model.
Parameters
- input : Tuple[Any, str]
- Input data used to generate the output
Returns
- List[Any]
- Generated output data in a list
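The call pattern of `generate` can be sketched with a stand-in class that mirrors the documented signature. Everything here is hypothetical: the assumption that the tuple carries a conditioning image and a text prompt, and the echoed string output, are illustrative only; the real model returns generated images.

```python
from typing import Any, List, Tuple

class FakeControlNetModel:
    """Stand-in with the same generate() signature as documented above.
    The (image, prompt) tuple layout is an assumption for illustration."""

    def generate(self, input: Tuple[Any, str]) -> List[Any]:
        conditioning_image, prompt = input  # assumed tuple layout
        # A real implementation would run the diffusion pipeline here.
        return [f"generated for prompt: {prompt}"]

model = FakeControlNetModel()
outputs = model.generate(("depth_map_placeholder", "a castle at dusk"))
```

Note that the return value is always a list, even for a single generated item, matching the `List[Any]` annotation above.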
get_schema(cls) -> dict
Generates the component's JSON Schema.
Returns
- dict
- Dictionary representing the JSON Schema of the component.
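Based on the parameters documented above, the returned schema plausibly resembles the dictionary below. This is a hand-written approximation for orientation, not the actual output of `get_schema`; the property names, defaults, and the conditioning-scale bounds come from this page, the rest is assumed.

```python
# Approximate shape of the schema get_schema() might return.
# Property names and defaults mirror this page; structure is assumed.
schema = {
    "type": "object",
    "properties": {
        "num_inference_steps": {"type": "integer", "default": 15},
        "controlnet_conditioning_scale": {
            "type": "number",
            "default": 1.0,
            "minimum": 0.0,
            "maximum": 2.0,
        },
        "device": {"type": "string", "default": "CPU"},
    },
}
```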
validate_and_transform(self, raw_data: dict) -> dict
Takes the user-provided initialization data and returns it augmented with all the objects the model needs to work.
Parameters
- raw_data : dict
- A dictionary with the data provided by the user to initialize the model.
Returns
- dict
- A validated dictionary with the necessary objects.
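A minimal sketch of what such a validate-and-transform step could look like, under stated assumptions: filling in the documented defaults and normalizing the device label are plausible behaviors, but the real method also constructs the pipeline objects the model needs, which is omitted here. The device-string mapping is an assumption, not the actual DashAI logic.

```python
def validate_and_transform(raw_data: dict) -> dict:
    # Hypothetical sketch: apply the documented defaults, then normalize
    # the device label. The real method additionally builds the objects
    # the model needs (pipelines, preprocessors), omitted here.
    defaults = {
        "num_inference_steps": 15,
        "controlnet_conditioning_scale": 1.0,
        "device": "CPU",
    }
    data = {**defaults, **raw_data}
    # Map the UI device label to a torch-style device string (assumption).
    data["device"] = "cpu" if data["device"].upper() == "CPU" else "cuda"
    return data
```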