Conversation

POST /v1/conversation

Overview

Have natural multi-turn conversations using DeepSeek v3.1 via Fireworks. This endpoint is optimized for conversational AI without the overhead of tool calling or full agentic capabilities. Use this when you need:
  • Fast, cost-effective conversations
  • Multi-turn dialogue with context
  • Simple chat without function calling
  • Streaming or non-streaming responses

Use cases

  • Customer support chatbots
  • Interactive Q&A sessions
  • Conversational interfaces
  • Multi-step explanations
  • Context-aware dialogue

Model details

  • Model: DeepSeek v3.1 (via Fireworks)
  • Fast, cost-effective inference
  • Excellent for conversational tasks
  • No tool calling capabilities

Request example

curl -X POST "https://api.incredible.one/v1/conversation" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Hello!"},
      {"role": "assistant", "content": "Hi there!"},
      {"role": "user", "content": "How are you?"}
    ],
    "system_prompt": "You are a helpful assistant."
  }'
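
The same request can be made from Python with only the standard library. This is an illustrative sketch, not an official client: `build_payload` and `converse` are hypothetical helper names, and `YOUR_API_KEY` is a placeholder you must replace before running.

```python
import json
import urllib.request

API_URL = "https://api.incredible.one/v1/conversation"


def build_payload(messages, system_prompt="You are a helpful assistant."):
    """Assemble the JSON body expected by /v1/conversation."""
    return {"messages": messages, "system_prompt": system_prompt}


def converse(messages, api_key, system_prompt="You are a helpful assistant."):
    """POST the conversation and return the assistant's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(messages, system_prompt)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

Usage mirrors the curl example: `converse([{"role": "user", "content": "Hello!"}], "YOUR_API_KEY")` returns the `response` string from the JSON body.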

Request Body

  • messages array (required) — Conversation history with alternating user and assistant messages
    • role string — Either "user" or "assistant"
    • content string — Message content
  • system_prompt string (optional) — System prompt to guide the conversation (default: "You are a helpful assistant.")
  • stream boolean (optional) — Enable streaming response via Server-Sent Events (default: false)
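
The constraints above (non-empty messages array, user/assistant roles, alternating turns) can be checked client-side before sending a request. A minimal sketch — `validate_messages` is a hypothetical helper, and the strict-alternation check mirrors the description above rather than documented server-side enforcement:

```python
def validate_messages(messages):
    """Check the /v1/conversation message constraints described above."""
    if not messages:
        raise ValueError("messages must contain at least one entry")
    for i, msg in enumerate(messages):
        if msg.get("role") not in ("user", "assistant"):
            raise ValueError(f"message {i}: role must be 'user' or 'assistant'")
        if not isinstance(msg.get("content"), str):
            raise ValueError(f"message {i}: content must be a string")
        # Alternating turns: no two consecutive messages share a role.
        if i > 0 and msg["role"] == messages[i - 1]["role"]:
            raise ValueError(f"message {i}: roles must alternate")
    return True
```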

Response

Non-streaming (default)

{
  "success": true,
  "response": "I'm doing well, thank you for asking! How can I help you today?"
}

Streaming (stream=true)

Server-Sent Events format:
data: {"thinking": "User is greeting me..."}
data: {"content": "I'm doing"}
data: {"content": " well"}
data: {"content": ", thank you!"}
data: {"tokens": 156}
data: {"done": true}
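
One way to consume this stream is to parse each `data:` line as JSON and concatenate the `content` chunks, ignoring `thinking` events. A sketch assuming each event arrives as a single `data:` line exactly as in the sample above (`parse_sse_events` is a hypothetical helper):

```python
import json


def parse_sse_events(lines):
    """Reassemble the assistant's reply from SSE lines like the sample above.

    Returns (reply_text, token_count, done). Events with keys other than
    "content", "tokens", or "done" (e.g. "thinking") are skipped.
    """
    reply, tokens, done = [], None, False
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank lines and SSE comments
        event = json.loads(line[len("data: "):])
        if "content" in event:
            reply.append(event["content"])
        elif "tokens" in event:
            tokens = event["tokens"]
        elif "done" in event:
            done = event["done"]
    return "".join(reply), tokens, done
```

Fed the six sample lines above, this yields the full reply text, the token count 156, and done=True.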

vs Other Endpoints

Feature        /v1/conversation     /v1/agent           /v1/chat-completion
Multi-turn     ✅ Yes               ✅ Yes              ✅ Yes
Tool calling   ❌ No                ✅ Yes              ✅ Yes
Streaming      ✅ Yes               ✅ Yes              ✅ Yes
Model          DeepSeek v3.1        Kimi K2 Thinking    Configurable
Best for       Fast conversations   Reasoning + tools   Full agentic

Body

application/json

  • messages object[] (required) — Conversation history with user and assistant messages. Minimum length: 1
  • system_prompt string (optional) — System prompt to guide the conversation. Defaults to "You are a helpful assistant."
  • stream boolean (optional, default: false) — Enable streaming response via Server-Sent Events

Response

Successful conversation response

  • success boolean (required) — Whether the request was successful
  • response string (required) — The assistant's response