Chat Tab

When you open the Chat tab, the chat screen is displayed. To start a chat, you first need to select the model you want to use.

To select a model, click or tap the model switch button in the top right corner. This displays a list of the models available on the currently selected Ollama server, from which you can choose the one you want to use (to switch the selected server, please refer to the Server Tab documentation).
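Behind this button, the model list corresponds to what the Ollama server reports from its GET /api/tags endpoint. The following is a minimal Swift sketch of fetching such a list; the `listModels` helper and its networking details are illustrative assumptions, not the app's actual code.

```swift
import Foundation

// Minimal sketch: fetch the names of the models available on an Ollama
// server via GET /api/tags. `listModels` is a hypothetical helper.
struct TagsResponse: Decodable {
    struct Model: Decodable { let name: String }
    let models: [Model]
}

func listModels(server: URL) async throws -> [String] {
    let url = server.appendingPathComponent("api/tags")
    let (data, _) = try await URLSession.shared.data(from: url)
    return try JSONDecoder().decode(TagsResponse.self, from: data).models.map(\.name)
}

// Usage, assuming Ollama's default address:
// let names = try await listModels(server: URL(string: "http://localhost:11434")!)
```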

Once a model is selected, you can enter a message. Type the text you want to send to the AI model and click or tap the Send button.
To insert a line break on macOS, press ⇧ (Shift) + ↩︎ (Return) simultaneously.

When you send a message, a chat request is sent to the Ollama server, and the model is loaded first if necessary.
Loading may take time depending on the model and the type of storage that holds it on the Ollama server.
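For reference, the chat request maps onto Ollama's POST /api/chat endpoint, which streams its reply back as one JSON object per line. The Swift sketch below illustrates that exchange; `streamChat` and its error handling are hypothetical, not this app's implementation.

```swift
import Foundation

// Minimal sketch: send a chat request to POST /api/chat and consume
// the streamed reply, one JSON chunk per line.
struct ChatChunk: Decodable {
    struct Message: Decodable { let role: String; let content: String }
    let message: Message?
    let done: Bool
}

func streamChat(server: URL, model: String, prompt: String) async throws {
    var request = URLRequest(url: server.appendingPathComponent("api/chat"))
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    let payload: [String: Any] = [
        "model": model,
        "messages": [["role": "user", "content": prompt]],
        "stream": true
    ]
    request.httpBody = try JSONSerialization.data(withJSONObject: payload)

    let (bytes, _) = try await URLSession.shared.bytes(for: request)
    for try await line in bytes.lines {              // one JSON object per line
        let chunk = try JSONDecoder().decode(ChatChunk.self, from: Data(line.utf8))
        if let text = chunk.message?.content {
            print(text, terminator: "")              // append to the visible reply
        }
        if chunk.done { break }                      // final chunk of the response
    }
}
```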
Tip
The API timeout is set to 30 seconds by default, so a timeout error may occur if model loading takes a long time.
If you know that model loading will take time, I recommend increasing the API timeout duration in Settings or setting it to unlimited.
For instructions on how to set the API timeout duration, please refer to the Settings documentation.
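As an aside on what this setting controls: in a URLSession-based client, the request timeout could be raised as in the sketch below. Whether the app configures it exactly this way is an assumption; only the 30-second default comes from this page.

```swift
import Foundation

// Minimal sketch: raise the per-request timeout so that a slow model
// load does not abort the chat request. The values are illustrative.
let configuration = URLSessionConfiguration.default
configuration.timeoutIntervalForRequest = 300          // e.g. 5 minutes instead of 30 s
// configuration.timeoutIntervalForRequest = .infinity // effectively unlimited
let session = URLSession(configuration: configuration)
```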

After a while, you will receive a response from the AI.
Once the response has fully completed, you can perform operations on the message.
Information
To prevent performance degradation, Markdown text is processed line by line while the AI's response is streaming in.
As a result, the display may appear garbled temporarily, but it should quickly settle into the correct rendering.
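To illustrate why line-by-line processing helps, here is a sketch of the idea: completed lines are parsed as Markdown once and cached, and only the trailing, still-incomplete line is redrawn as each chunk arrives. All names below are illustrative, not the app's actual code.

```swift
import Foundation

// Minimal sketch of incremental, line-by-line Markdown rendering.
var renderedLines: [AttributedString] = []   // lines already parsed as Markdown
var pendingLine = ""                         // the line still being streamed

func renderMarkdown(_ line: String) -> AttributedString {
    (try? AttributedString(markdown: line)) ?? AttributedString(line)
}

func append(chunk: String) {
    pendingLine += chunk
    // Parse every newly completed line exactly once and cache the result.
    while let newline = pendingLine.firstIndex(of: "\n") {
        renderedLines.append(renderMarkdown(String(pendingLine[..<newline])))
        pendingLine.removeSubrange(...newline)
    }
    // `pendingLine` is shown as-is until its newline arrives, which is why
    // the display can briefly look wrong before settling.
}
```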
To clear the chat history and start a new chat, click the New Chat button in the top right corner or press ⌥ (Option) + ⌘ (Command) + N simultaneously.

By opening the Inspector, you can customize the chat settings.
To open the Inspector, click or tap the sidebar toggle button in the top right corner.
From the Inspector, you can customize the following settings:
- Temperature: a value between 0.0 and 2.0. Lowering the temperature makes the output more accurate, while raising it makes it more creative (not all models follow this setting, and incorrect output may result depending on the value).
- Context length: a value between 512 and the model's context length. To check the model's context length, please refer to the Model Tab documentation.

The settings configured here will be reflected from the next message you send onward.
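These two values correspond to Ollama's temperature and num_ctx request options. The sketch below shows how they could be attached to a chat request body; apart from those two option names, the structure is illustrative.

```swift
import Foundation

// Minimal sketch: attaching the Inspector values to a chat request.
// `temperature` and `num_ctx` are Ollama option names; the model name
// and message are placeholders.
let body: [String: Any] = [
    "model": "llama3",
    "messages": [["role": "user", "content": "Hello"]],
    "options": [
        "temperature": 0.8,  // 0.0–2.0: lower is more precise, higher more creative
        "num_ctx": 4096      // between 512 and the model's context length
    ]
]
let payload = try! JSONSerialization.data(withJSONObject: body) // body is valid JSON
```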




