Conversations in Opentrace are threaded dialogues within a project where you interact with the AI assistant. Every response is grounded in your documents, with citations linking back to the exact source.
Each project can have multiple conversations, allowing you to explore different topics without losing context.
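A minimal sketch of the data model this implies, with a project owning several independent conversations (class and field names are illustrative, not Opentrace's actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    role: str      # "user" or "assistant"
    content: str

@dataclass
class Conversation:
    title: str
    messages: list[Message] = field(default_factory=list)

@dataclass
class Project:
    name: str
    conversations: list[Conversation] = field(default_factory=list)

project = Project(name="Research")
project.conversations.append(Conversation(title="Pricing questions"))
project.conversations.append(Conversation(title="Contract terms"))
# Each conversation keeps its own history, so topics stay separate.
```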
Within a conversation, the system maintains a rolling chat history. The last N messages (default: 10) are included in the system prompt to give the AI contextual awareness for follow-up questions like “Tell me more about that” or “What was the second point?”
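The rolling-window behavior can be sketched as a simple slice over the message list (a minimal illustration; the real prompt assembly in Opentrace may differ):

```python
def rolling_history(messages, n=10):
    """Return the last n messages to include in the system prompt."""
    return messages[-n:]

history = [{"role": "user", "content": f"msg {i}"} for i in range(25)]
window = rolling_history(history)   # default N = 10
# Only the 10 most recent messages reach the prompt, so follow-ups
# like "Tell me more about that" still have the needed context.
```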
Citations are the cornerstone of Opentrace's trustworthiness. Every piece of information in the AI's response can be traced back to a specific document chunk.
Each citation links the response back to its source document and the specific chunk the information came from.
Citations accumulate in the agent state across tool calls, so even multi-step agent interactions (in the Supervisor agent) properly track their sources.
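Accumulating citations across tool calls can be sketched as an append-and-deduplicate step on the agent state (class, method, and field names here are hypothetical, not Opentrace's actual API):

```python
class AgentState:
    def __init__(self):
        self.citations = []   # grows across tool calls

    def record(self, new_citations):
        # Deduplicate by (document_id, chunk_id) while preserving order,
        # so a chunk cited by several tool calls appears only once.
        seen = {(c["document_id"], c["chunk_id"]) for c in self.citations}
        for c in new_citations:
            key = (c["document_id"], c["chunk_id"])
            if key not in seen:
                self.citations.append(c)
                seen.add(key)

state = AgentState()
state.record([{"document_id": "doc1", "chunk_id": 3}])
state.record([{"document_id": "doc1", "chunk_id": 3},   # duplicate, skipped
              {"document_id": "doc2", "chunk_id": 7}])
```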
Responses are streamed token-by-token from the backend to the frontend. You see the answer appear in real-time as the LLM generates it, with citations displayed alongside or below the response once complete.
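Token streaming can be modeled as a generator that yields chunks as they arrive, with the client appending each one (a sketch only; the actual transport Opentrace uses, e.g. SSE or WebSockets, is not specified here):

```python
def stream_tokens(tokens):
    # Yield each token as soon as it is produced rather than
    # waiting for the full response.
    for token in tokens:
        yield token

answer = []
for chunk in stream_tokens(["The ", "contract ", "expires ", "in ", "2026."]):
    answer.append(chunk)          # the frontend appends chunks in real time

full_response = "".join(answer)
# Citations are rendered once the stream completes.
```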
Each AI response includes a thumbs-up/thumbs-down feedback mechanism. This feedback is stored in the database and can be used to evaluate and improve the system's performance over time.
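Storing that feedback can be as simple as one row per rated message; the sketch below uses SQLite with an illustrative table and column layout (not Opentrace's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE feedback (
        message_id TEXT PRIMARY KEY,
        rating INTEGER CHECK (rating IN (1, -1))  -- 1 = up, -1 = down
    )
""")
# Record a thumbs-up for one assistant response.
conn.execute("INSERT INTO feedback VALUES (?, ?)", ("msg-123", 1))
conn.commit()

(rating,) = conn.execute(
    "SELECT rating FROM feedback WHERE message_id = ?", ("msg-123",)
).fetchone()
```

Aggregating these rows over time gives a per-response quality signal for evaluating the system.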