Generative AI

The Generative module lets you interact with generative AI models directly inside DashAI — without writing any code. You can generate text or images, adjust model parameters in real time, and track every configuration change in a session history.

Hardware requirement

Generative models are computationally intensive. An NVIDIA GPU with CUDA support is strongly recommended. Running these models on the CPU is possible but significantly slower, and some larger models may fail to load due to memory constraints.


Available Tasks

When you open the Generative module, you select a task type that determines which models are available:

  • TextToText — Generate text from a text prompt. Includes LLMs such as Qwen, Llama, and others.
  • TextToImage — Generate images from a text description using models like Stable Diffusion.
  • ImageToImage — Transform or modify an existing image, guided by a text prompt and an input image.

Step-by-Step Guide

1. Select a Task

Navigate to the GENERATIVE section in the top navigation bar. Click on the task type you want to use (e.g., TextToText).

2. Select a Model

A list of available models for the selected task is shown. Click on a model to select it.

3. Configure Model Parameters

Each model exposes a set of parameters that control its behavior. These appear in a panel on the right side of the screen. Common parameters include:

  • Temperature — Controls the randomness of the output. Lower values (e.g., 0.1) produce more deterministic, focused responses; higher values (e.g., 1.0+) produce more varied and creative outputs.
  • Max tokens — The maximum number of tokens (roughly words or word pieces) the model will generate in a single response.
  • Top-p — Nucleus sampling: limits generation to the smallest set of tokens whose cumulative probability exceeds this value. Works together with temperature to control output diversity.
  • Seed — A fixed random seed for reproducibility. The same seed with the same parameters produces the same output.

Parameters vary by model — not all models expose all of the above. Each parameter has a ? help icon that shows its description on hover.
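To build intuition for how Temperature, Top-p, and Seed interact, here is a small illustrative sketch of temperature-scaled softmax sampling with a nucleus (top-p) filter. It is not DashAI's implementation — `sample_token` and its logit-dictionary input are hypothetical — but the mechanics match the parameter descriptions above.

```python
import math
import random

def sample_token(logits, temperature=1.0, top_p=1.0, seed=None):
    """Draw one token from `logits` (token -> raw score).

    Temperature rescales scores before the softmax: low values sharpen
    the distribution (more deterministic), high values flatten it.
    Top-p then keeps only the smallest set of tokens whose cumulative
    probability reaches `top_p`. A fixed seed makes the draw repeatable.
    """
    rng = random.Random(seed)
    # Softmax with temperature (subtract the max for numerical stability).
    scaled = {t: s / temperature for t, s in logits.items()}
    m = max(scaled.values())
    exps = {t: math.exp(s - m) for t, s in scaled.items()}
    total = sum(exps.values())
    probs = sorted(((t, e / total) for t, e in exps.items()),
                   key=lambda kv: kv[1], reverse=True)
    # Nucleus filter: keep high-probability tokens until their mass >= top_p.
    kept, cum = [], 0.0
    for tok, p in probs:
        kept.append((tok, p))
        cum += p
        if cum >= top_p:
            break
    # Renormalize the kept tokens and sample from them.
    norm = sum(p for _, p in kept)
    r = rng.random() * norm
    for tok, p in kept:
        r -= p
        if r <= 0:
            return tok
    return kept[-1][0]
```

With a very low temperature the highest-scoring token is chosen almost every time; with a small top-p only the most likely tokens remain candidates; and the same seed reproduces the same choice.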

For image generation tasks, additional parameters are available such as:

  • Width / Height — output image dimensions. Both must be divisible by 8.
  • Inference steps — number of denoising steps. More steps generally produce higher quality images but take longer.
  • Guidance scale — how closely the model follows the text prompt.
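The divisible-by-8 rule comes from how latent diffusion models downsample images by a factor of 8 internally. A small hypothetical helper (not part of DashAI) shows how to snap requested dimensions to valid values before submitting them:

```python
def snap_to_multiple_of_8(width, height):
    """Round image dimensions to the nearest multiple of 8.

    Diffusion models such as Stable Diffusion operate on latents
    downsampled by a factor of 8, so both dimensions must be
    divisible by 8 or generation fails with a dimensions error.
    """
    snap = lambda v: max(8, round(v / 8) * 8)
    return snap(width), snap(height)
```

For example, a requested 515 x 770 image would be snapped to 512 x 768, both valid multiples of 8.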

4. Configure Session Parameters

Before creating the session you can optionally set:

  • Session name — a label to identify this session in the session list.
  • Session description — an optional note about the purpose of this session.

Both fields are optional. If left empty, DashAI assigns a default name.

5. Create the Session

Click "Create a session" to initialize the session. DashAI will load the selected model — this may take a moment, especially on first use when model weights need to be downloaded.

Once the session is ready, the interaction interface opens.

6. Interact with the Model

The main area of the session is the interaction interface:

  • TextToText — an input field where you type a prompt and receive the model's text response. Each exchange is shown as a conversation thread.
  • TextToImage / ImageToImage — an input field for the text prompt and, where applicable, an image upload area. The generated image is displayed inline.

Submit your input and wait for the model to generate a response. Generation time depends on the model size, the parameter configuration, and your hardware.

7. Adjust Parameters During the Session

You can change any model parameter at any time during an active session using the parameter panel on the right. Changes take effect on the next generation — you do not need to create a new session.

8. View the Session History

Click the "History" button to open a log of all parameter changes made during the session. Each entry shows:

  • The parameter that was changed.
  • The previous and new values.
  • The date and time of the change.

This is useful for tracking what configuration produced a particular output and for reverting to a previous state.
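Conceptually, each history entry is a record of (parameter, previous value, new value, timestamp). A minimal sketch of such a change log — illustrative only, DashAI's actual storage may differ — looks like this:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ParameterChange:
    """One entry in a session's parameter-change history."""
    parameter: str
    old_value: object
    new_value: object
    changed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

class SessionHistory:
    """Append-only log of parameter changes within one session."""

    def __init__(self):
        self._entries = []

    def record(self, parameter, old_value, new_value):
        """Log a change with the current UTC timestamp."""
        self._entries.append(ParameterChange(parameter, old_value, new_value))

    def entries(self):
        return list(self._entries)

    def value_at(self, parameter):
        """Most recently recorded value for a parameter, or None."""
        for entry in reversed(self._entries):
            if entry.parameter == parameter:
                return entry.new_value
        return None
```

Scanning the log from newest to oldest is what makes it possible to answer "which configuration produced this output?" and to restore an earlier value.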

9. Access Previous Sessions

All sessions you have created are listed on the left side of the Generative section, organized by task type. Click any session to reopen it and continue interacting with the model using the same configuration.


Tips

  • Start with default parameter values and adjust incrementally — changing multiple parameters at once makes it hard to understand which change affected the output.
  • For image generation, width and height must be divisible by 8. Values that don't meet this requirement will cause an error.
  • Use a fixed Seed when experimenting with parameters — it lets you compare outputs from different configurations while holding randomness constant.
  • If the model runs out of memory during generation, try reducing Max tokens (for text) or Width/Height and Inference steps (for images).
  • Session history is particularly useful in a teaching context: it creates a visible record of how parameter changes affect model behavior.
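The fixed-seed tip rests on a general property of pseudo-random generators: seeding fixes the entire random sequence. This toy stand-in for a model's sampling step (a hypothetical helper, not DashAI code) demonstrates the idea:

```python
import random

def generate_with_seed(seed, n=5):
    """Draw n pseudo-random values from a generator seeded with `seed`.

    Stands in for a model's sampling step: the same seed yields the
    same sequence, so with identical parameters the output is
    reproducible, while a different seed yields a different sequence.
    """
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]
```

Holding the seed constant while varying one parameter at a time is what lets you attribute an output change to that parameter rather than to sampling noise.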

Troubleshooting

  • Generation fails immediately — Likely cause: insufficient GPU memory. Solution: reduce image dimensions or max tokens, and close other GPU-intensive applications.
  • Image dimensions error — Likely cause: width or height not divisible by 8. Solution: adjust dimensions to the nearest multiple of 8 (e.g., 512, 768, 1024).
  • Model takes very long to load — Likely cause: first-time use; model weights are being downloaded. Solution: wait for the download to complete; subsequent loads will be faster.
  • Output is always identical — Likely cause: the seed is fixed and parameters haven't changed. Solution: change the seed or increase the temperature to get varied outputs.
  • Error details are not visible in the UI — Likely cause: the error modal shows only a generic message. Solution: open the browser developer console (F12) and check the console logs for the full error.