Ask Your Documents

Get direct answers to questions using Ducky's intelligent Ask endpoint

The Ask endpoint is Ducky's intelligent question-answering interface that provides direct, synthesized answers from your indexed documents. Instead of retrieving raw documents and wiring up your own LLM integration, you ask a natural language question and get back a direct answer with a confidence score.

Why Use the Ask Endpoint

  1. Simplified Integration: No need to manage top_k parameters, result parsing, or prompt engineering. Ask a question, get an answer.
  2. Intelligent Agent: Powered by an internal agent that can extend search, reason across multiple documents, and synthesize comprehensive responses.
  3. Direct Answers: Like Google's answer boxes, you get the information you need without having to sift through search results.
  4. Source Attribution: Every answer includes confidence scoring and source documents for verification and transparency.

How It Works

The Ask endpoint uses an internal Agent-of-Thought (AoT) system that:

  1. Understands your question using natural language processing
  2. Searches your index intelligently, extending search as needed
  3. Reasons across documents to synthesize comprehensive answers
  4. Returns confident responses with source attribution and confidence scoring

This eliminates the complexity of the traditional retrieval → LLM workflow, giving you a direct question-to-answer interface.
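
For comparison, here is a rough sketch of the retrieval → LLM workflow that the Ask endpoint replaces. The retrieve call, field names, and call_llm helper below are illustrative placeholders, not exact Ducky or LLM APIs:

# Traditional workflow: retrieve documents, then prompt an LLM yourself.
results = ducky.indexes.retrieve(          # hypothetical retrieval call
    index_name="medical-knowledge",
    query="What is basal cell carcinoma?",
    top_k=5,                               # tuned by you
)
context = "\n".join(doc.content for doc in results.documents)  # assumed field names
answer = call_llm(                         # your own LLM integration, stubbed here
    f"Answer from this context only:\n{context}\n\nQ: What is basal cell carcinoma?"
)

# With the Ask endpoint, the same flow collapses into a single call:
response = ducky.indexes.ask(
    index_name="medical-knowledge",
    question="What is basal cell carcinoma?",
)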

Code Examples

Python SDK

from duckyai import DuckyAI

ducky = DuckyAI(api_key="your-api-key")

# Ask a question
response = ducky.indexes.ask(
    index_name="medical-knowledge",
    question="What is basal cell carcinoma?"
)

print(f"Answer: {response.answer}")
print(f"Confidence: {response.confidence}/1000")
print(f"Sources: {len(response.sources)} documents")

for source in response.sources:
    print(f"  - Document {source.doc_id} (relevance: {source.relevance_score})")
TypeScript SDK

import { Ducky } from "duckyai-ts";

const ducky = new Ducky({
  apiKey: process.env.DUCKY_API_KEY ?? "",
});

// Ask a question
const response = await ducky.indexes.ask({
  indexName: "medical-knowledge",
  question: "What is basal cell carcinoma?"
});

console.log(`Answer: ${response.answer}`);
console.log(`Confidence: ${response.confidence}/1000`);
console.log(`Sources: ${response.sources.length} documents`);

response.sources.forEach(source => {
  console.log(`  - Document ${source.docId} (relevance: ${source.relevanceScore})`);
});
cURL

curl -X POST "https://api.ducky.ai/v1/indexes/medical-knowledge/ask" \
  -H "Authorization: Bearer your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "question": "What is basal cell carcinoma?"
  }'
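
The REST endpoint returns JSON. An illustrative response body is sketched below, assuming the field names mirror the SDK attributes used above; the values are made up:

{
  "answer": "Basal cell carcinoma is the most common form of skin cancer...",
  "confidence": 870,
  "sources": [
    { "doc_id": "doc-123", "relevance_score": 0.93 },
    { "doc_id": "doc-456", "relevance_score": 0.81 }
  ]
}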

Complex Medical Case Example

The Ask endpoint can handle sophisticated diagnostic scenarios:

# Complex diagnostic case
response = ducky.indexes.ask(
    index_name="medical-knowledge",
    question="""A 23-year-old man presented with a week's history of sudden-onset skin lesions. 
    Two days before the lesions appeared, he had a sore throat, dry cough, and fever. 
    Examination showed multiple 5–15 mm erythematous papules and small nodules with 
    necrotic ulcerated centres and hemorrhagic crusting on his trunk, arms, thighs, and groins; 
    some papulovesicles with purpura were noted. Palms, soles, face, scalp, and oral mucosa 
    were spared. Soft non-tender 1 cm axillary and inguinal lymph nodes were palpable. 
    Labs showed normal blood counts, liver and renal function; CRP was 47 mg/L. 
    Skin biopsy revealed marked spongiosis, intraepidermal erythrocytes, ulceration with 
    keratinocyte necrosis, upper dermal edema, a wedge-shaped lymphohistiocytic infiltrate 
    with occasional neutrophils extending to the deeper dermis, and endothelial swelling 
    without fibrinoid necrosis. Immunohistochemistry was CD8+ (CD30–), and T-cell clone 
    rearrangement studies were positive. What is the most appropriate diagnosis?"""
)

print(f"Diagnosis: {response.answer}")
# Expected response: "Pityriasis lichenoides et varioliformis acuta..."

Understanding Responses

Confidence Scores

Confidence is returned as an integer on a 0-1000 scale; a small routing sketch follows the list.

  • 800-1000: High confidence - the agent found clear, relevant information
  • 500-799: Medium confidence - good information but some uncertainty
  • 200-499: Low confidence - limited or unclear information found
  • 0-199: Very low confidence - the agent struggled to find relevant information
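
For example, you might gate downstream behavior on these bands. A minimal sketch in Python, reusing the response object from the earlier example; the thresholds simply mirror the table above:

# Route the answer based on the documented confidence bands.
if response.confidence >= 800:
    print(response.answer)                                # high confidence: show as-is
elif response.confidence >= 500:
    print(f"{response.answer} (verify against sources)")  # medium confidence
else:
    print("Low confidence - review the source documents first.")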

Source Attribution

Every answer includes the source documents that contributed to the response. Use these to:

  • Verify information by checking the original documents
  • Understand context by reviewing the source material
  • Trace reasoning by seeing which documents influenced the answer (see the sketch below)
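
A minimal verification pass, using only the source fields shown in the earlier examples (doc_id and relevance_score):

# Review the highest-relevance sources first when verifying an answer.
top_sources = sorted(response.sources, key=lambda s: s.relevance_score, reverse=True)
for source in top_sources[:3]:
    print(f"Check document {source.doc_id} (relevance: {source.relevance_score})")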

Best Practices

Effective Questions

  • Be specific: "What are the symptoms of basal cell carcinoma?" works better than "Tell me about BCC" (see the sketch after this list)
  • Provide context: Include relevant details that help the agent understand what you're looking for
  • Control the level of detail: You can ask for brief answers or comprehensive explanations
    • For names only: "Give me the names of diseases caused by mutations of the K86 and K81 genes."
    • For a detailed explanation: "What diseases are caused by mutations of the K86 or K81 genes, and why?"
  • Ask focused questions: Break complex queries into smaller, specific questions for better results
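
The first bullet is easy to see side by side. Both calls below use the same ask API shown earlier; the confidence comparison is the expected tendency, not a guarantee:

# A vague question forces the agent to guess what you want.
vague = ducky.indexes.ask(
    index_name="medical-knowledge",
    question="Tell me about BCC",
)

# A specific question scopes the search and typically raises confidence.
specific = ducky.indexes.ask(
    index_name="medical-knowledge",
    question="What are the symptoms of basal cell carcinoma?",
)

print(vague.confidence, specific.confidence)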

Interpreting Results

  • Check confidence scores: Scores below 500 may indicate the agent had difficulty finding relevant information
  • Review sources: Always check the source documents, especially for critical information
  • Iterate on questions: If confidence is low, try rephrasing or being more specific, as sketched below
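
Putting these checks together, a simple retry loop that rephrases while confidence stays low; the 500 cutoff comes from the bands above, and the phrasings are your own:

# Try progressively more specific phrasings until confidence clears 500.
phrasings = [
    "Tell me about BCC",
    "What is basal cell carcinoma?",
    "What are the symptoms and causes of basal cell carcinoma?",
]

for question in phrasings:
    response = ducky.indexes.ask(index_name="medical-knowledge", question=question)
    if response.confidence >= 500:
        break

if response.confidence < 500:
    print("Still low confidence - review the sources manually.")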

Next Steps

Questions about the Ask endpoint? Get in touch or check our roadmap for upcoming features.