How to Use Cortexa
A complete guide to building grounded marketing content — from knowledge ingestion and language control through to publishing in your marketing stack.
Key Concepts
Workspace
Top-level container for a team or brand. Holds projects and can optionally connect to a dedicated PostgreSQL + pgvector runtime database that Cortexa manages for that workspace's data plane. Each workspace has its own isolated encryption boundary and platform-managed encryption keys by default. Members are managed at the workspace level.
Project
Lives inside a workspace. Groups a set of knowledge documents under a shared vector index. Language settings (glossary, brand voice) are configured per project.
Document
A piece of knowledge ingested into a project — from a file upload (TXT, PDF, image) or a connected source (web, S3, Notion, Google Drive). After indexing it becomes searchable.
Index
A versioned snapshot of all embedded document chunks. Indexes can be promoted to production or archived. Retrieval always targets the active production index.
Glossary Term
An approved term with its definition, preferred expression, forbidden alternatives, and usage notes. Cortexa injects matched terms into the LLM prompt to enforce consistent language.
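Cortexa's actual prompt format is internal, but the injection idea can be sketched: scan the brief for approved terms and append their usage rules to the system prompt. Everything below (the `GLOSSARY` shape, the rule wording) is an illustrative assumption, not the product's real data model.

```python
# Hypothetical sketch: Cortexa's real prompt format is not public.
# Match glossary terms mentioned in the brief and inject their rules.

GLOSSARY = {
    "CortexaOS": {
        "preferred": "CortexaOS",
        "forbidden": ["the operating system", "Cortexa OS"],
    },
}

def inject_glossary(brief: str, base_prompt: str) -> str:
    """Append usage rules for every glossary term found in the brief."""
    rules = []
    for term, entry in GLOSSARY.items():
        if term.lower() in brief.lower():
            rules.append(
                f'- Always write "{entry["preferred"]}"; '
                f'never use: {", ".join(entry["forbidden"])}.'
            )
    if not rules:
        return base_prompt
    return base_prompt + "\n\nLanguage rules:\n" + "\n".join(rules)

prompt = inject_glossary(
    "Draft a launch email about CortexaOS 2.0",
    "You are a marketing copywriter.",
)
```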
Brand Voice Profile
A named set of tone, target audience, style rules, and formatting preferences. Activate a profile per project to shape the style of every generated output.
Playground
The interactive content generation workspace. Ask questions or give content briefs — Cortexa retrieves relevant knowledge, applies language settings, and generates a grounded answer.
Evidence Search
A retrieval-only view for finding the most relevant chunks and source documents without generating an LLM answer. Use it to validate retrieval quality before running full RAG answers.
Publish Job
A record of a content generation and delivery event. Tracks the generated content, target connector, delivery status, and any provider error details.
Security Model
Cortexa encrypts customer content in transit and at rest using platform-managed keys. Default plans do not currently include customer-managed keys or bring-your-own-key (BYOK) support.
Cortexa personnel do not routinely inspect customer content. Access is restricted to documented support, security, abuse-prevention, or legal/compliance workflows when operationally necessary.
If your organization requires customer-controlled key management or custom security terms, contact us before rollout.
Step-by-Step Guide
Project Setup — Create a Workspace and Project
Workspaces isolate teams or brands. Projects hold a specific knowledge base and its settings.
- Go to the Console from the sidebar.
- Click "New Workspace" and enter a name (e.g., "ACME Marketing").
- Expand the workspace and click "New Project" (e.g., "Product Launch Q3").
- Each project maintains an independent knowledge index and language configuration.
- Invite teammates into the workspace when you are ready to collaborate.
- Use separate projects when you want different knowledge bases, language settings, or campaign contexts.
Knowledge Ingestion — Upload Documents and Connect Sources
Populate your project's knowledge base from multiple sources. Everything is chunked, embedded, and indexed automatically.
- Open a project → Documents tab → click "Upload" to add TXT, PDF, or image files.
- For web content, use "Sources" → Add Connector → Web URL. Enter URLs to crawl and extract.
- For cloud storage, connect Amazon S3 (with bucket and credentials) or Google Drive (service account).
- For internal wikis, connect Notion using an integration token and page IDs.
- After syncing, all source content appears as Document records in the Documents tab.
- Documents with status "uploaded" are ready to be indexed.
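The "chunked, embedded, and indexed" pipeline can be sketched generically. This is not Cortexa's implementation: the chunk size, overlap, and hash-based stand-in embedding are all assumptions, and a real system would call an embedding model instead.

```python
# Generic ingestion sketch (not Cortexa's implementation):
# split text into overlapping chunks, then embed each chunk.
import hashlib

def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Fixed-size character chunks with overlap so context spans boundaries."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(chunk: str, dims: int = 8) -> list[float]:
    """Stand-in embedding: a real pipeline calls an embedding model here."""
    digest = hashlib.sha256(chunk.encode()).digest()
    return [b / 255 for b in digest[:dims]]

doc = "Cortexa grounds every answer in your indexed knowledge. " * 20
index = [{"chunk": c, "vector": embed(c)} for c in chunk_text(doc)]
```

The overlap means each chunk repeats the tail of the previous one, so a sentence cut at a chunk boundary is still retrievable in one piece.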
Indexing — Build and Version Your Knowledge Index
Building the index embeds your documents into vectors and stores them for retrieval. Indexes are versioned so you can safely iterate.
- In the Documents tab, click "Build Index All" to index all uploaded documents, or "Build Index" per document.
- Each document status changes to "indexing" while the background worker processes it, then to "indexed".
- In the Version Control tab, each build creates a new versioned index (v1, v2, …).
- Promote a staging index to production when ready — retrieval will switch immediately.
- Archive old indexes to keep the version history clean.
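The promote/archive lifecycle above can be sketched as a small state machine. Cortexa's actual data model is internal; this sketch only assumes that exactly one version is "production" at a time and that promotion switches retrieval atomically.

```python
# Hypothetical sketch of index version control.
# Each build creates a new version; promoting one archives the old
# production version, and retrieval always reads the production one.

class IndexRegistry:
    def __init__(self):
        self.versions = {}      # version number -> status
        self.production = None  # currently active version

    def build(self) -> int:
        v = max(self.versions, default=0) + 1
        self.versions[v] = "staging"
        return v

    def promote(self, v: int) -> None:
        """Make v the production index; archive the previous one."""
        if self.production is not None:
            self.versions[self.production] = "archived"
        self.versions[v] = "production"
        self.production = v

reg = IndexRegistry()
v1 = reg.build()
reg.promote(v1)
v2 = reg.build()
reg.promote(v2)  # retrieval now targets v2; v1 is archived
```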
Glossary & Language Settings — Define Your Brand Language
Before generating content, configure the language rules that Cortexa will enforce in every output for this project.
- Open a project → Glossary tab → enable the glossary toggle in project settings.
- Add terms with definitions, preferred expressions (e.g., "CortexaOS" not "the operating system"), and forbidden alternatives.
- Use "Extract Suggestions" to automatically identify candidate terms from your indexed documents.
- Review, approve, or reject suggestions before they become active glossary entries.
- Open Brand Voice → create a profile with tone (e.g., "professional, direct"), target audience, style rules, and formatting preferences.
- Activate the brand voice profile — it will apply to all Playground queries and content generation in this project.
Evidence Search — Inspect Retrieval Before Generation
Use Evidence Search when you want to verify whether the project actually has strong references before asking the model to generate an answer.
- Open a project → Search tab.
- Enter a topic or question, then choose Hybrid or Vector retrieval mode.
- Adjust Top-K, index version, reranker, and query rewrite settings just like in the Playground retrieval stack.
- Review the returned chunks, document names, scores, and deep links before generating an answer.
- If no evidence is returned, broaden the query, switch retrieval mode, or index more relevant source material first.
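Hybrid mode merges keyword and vector rankings. Cortexa's fusion method is not documented; reciprocal rank fusion (RRF) is one common way such merging is done, shown here as an illustrative sketch with made-up chunk IDs.

```python
# Sketch of hybrid retrieval via reciprocal rank fusion (RRF).
# An item ranked highly in either the keyword or the vector list
# ends up near the top of the fused ranking.

def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Merge several ranked lists into one fused ranking."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["chunk-12", "chunk-07", "chunk-33"]
vector_hits = ["chunk-07", "chunk-21", "chunk-12"]
fused = rrf([keyword_hits, vector_hits])
```

"chunk-07" wins the fused ranking because it appears near the top of both lists, even though neither list ranks it first.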
Content Generation — Use the Playground
The Playground is where knowledge, language settings, and AI generation come together. Use it to draft any marketing content grounded in your knowledge base.
- Open a project → Playground tab.
- Type a question or content brief (use the Email or Landing Page quick-fill buttons as a starting point).
- Adjust Top-K to control how many knowledge chunks are retrieved (higher = broader context).
- Enable Query Rewrite to let Cortexa rephrase ambiguous questions for better retrieval.
- Enable Reranker to reorder retrieved chunks by relevance before generation.
- Click "Run Query" — the answer is generated strictly from retrieved chunks with inline citations.
- Expand "Prompt Snapshot" to inspect the exact system prompt, retrieved context, and citations sent to the model.
- If a section is not covered by your knowledge base, Cortexa will write [Reference not available] rather than fabricate content.
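The grounding contract above can be expressed as a small sketch: the prompt contains only numbered retrieved chunks, and an empty retrieval result yields the placeholder instead of free generation. The prompt wording is an assumption, not Cortexa's actual system prompt.

```python
# Illustrative sketch of the grounding contract, not Cortexa's code.

PLACEHOLDER = "[Reference not available]"

def build_context(chunks: list[dict]) -> str:
    """Number each retrieved chunk so the model can cite it as [n]."""
    return "\n".join(f"[{i + 1}] {c['text']}" for i, c in enumerate(chunks))

def answer(question: str, chunks: list[dict]) -> str:
    if not chunks:
        # nothing retrieved: emit the placeholder rather than fabricate
        return PLACEHOLDER
    prompt = (
        "Answer ONLY from the numbered context; cite like [1].\n"
        f"Context:\n{build_context(chunks)}\n\nQuestion: {question}"
    )
    return prompt  # a real system would send this prompt to the LLM

grounded = answer("What is CortexaOS?", [{"text": "CortexaOS is the flagship OS."}])
empty = answer("What is CortexaOS?", [])
```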
Output Review — Check, Edit, and Validate
Before publishing, review the generated content for accuracy, completeness, and brand alignment.
- Read the inline citations — each [1], [2] reference links back to a specific document chunk.
- In the Queries tab, open the trace for any query to inspect retrieved chunks, latency, and token usage.
- If sections are marked [Reference not available], add the missing knowledge and re-index, then regenerate.
- Edit the content in the Publish panel before sending — the content field is fully editable.
- Check that tone, terminology, and formatting match your active brand voice profile.
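Part of this review can be automated: flag any citation marker in the draft that does not map to a retrieved chunk. The helper below is a hypothetical sketch, not a Cortexa feature.

```python
# Hypothetical review helper: find citation markers in generated
# content that have no corresponding retrieved chunk.
import re

def unmatched_citations(content: str, num_chunks: int) -> list[int]:
    """Return cited numbers outside the range 1..num_chunks."""
    cited = {int(m) for m in re.findall(r"\[(\d+)\]", content)}
    return sorted(n for n in cited if n < 1 or n > num_chunks)

draft = "CortexaOS ships quarterly [1] and supports SSO [4]."
bad = unmatched_citations(draft, num_chunks=3)  # [4] has no source chunk
```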
Publishing — Send Content to Your Marketing Stack
Once content is ready, publish directly to your configured marketing platform without leaving Cortexa.
- Set up a connector under Marketing → New Connector (Marketo, HubSpot, Mailchimp, Outlook, or Salesforce).
- In the Playground, scroll to "Publish as Marketing Content" below the query panel.
- Select the connector and content type (Email or Landing Page).
- For Marketo: enter the Program ID and Template ID. The editable sections are loaded automatically — select the section to replace.
- Fill in metadata: email title, subject line, CTA text, and CTA link URL.
- Click "Publish" — a job is created, queued, and executed. Job status updates every 2 seconds.
- Check the Marketing tab to review past publish jobs and their delivery status.
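The 2-second status polling can be sketched client-side. The job states here ("queued" → "running" → "delivered") and the status-fetching function are assumptions; check the Marketing tab for the actual statuses your jobs report.

```python
# Sketch of client-side publish-job polling; states are assumed.
import itertools

def make_fake_fetch():
    """Stand-in for an API call that reports the job's current status."""
    states = itertools.chain(["queued", "running"], itertools.repeat("delivered"))
    return lambda: next(states)

def wait_for_job(fetch_status, max_polls: int = 10) -> str:
    """Poll until the job reaches a terminal state or we give up."""
    for _ in range(max_polls):
        status = fetch_status()
        if status in ("delivered", "failed"):
            return status
        # a real client would sleep ~2 seconds between polls here
    return "timeout"

final = wait_for_job(make_fake_fetch())
```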
You're ready to go
Your knowledge operations workflow is live. Explore the Console to manage projects, review your plan and usage, or read the best practices below.
Best Practices
Knowledge quality drives output quality
- Ingest authoritative, up-to-date source documents — the AI can only cite what is indexed.
- Prefer structured content (clear headings, defined sections) for better chunking results.
- Use the Structured or Semantic chunking strategy for long-form product documentation.
- Re-index documents whenever the source content changes.
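Why structured content chunks better: a heading-aware splitter can keep each section intact instead of cutting mid-paragraph. The sketch below assumes markdown-style `#` headings; Cortexa's actual chunking strategies are internal.

```python
# Sketch of heading-aware ("structured") chunking: start a new chunk
# at each markdown heading so every chunk is one coherent section.

def structured_chunks(text: str) -> list[str]:
    chunks, current = [], []
    for line in text.splitlines():
        if line.startswith("#") and current:
            chunks.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current))
    return chunks

doc = "# Overview\nCortexaOS basics.\n# Pricing\nPlans and tiers."
sections = structured_chunks(doc)
```

A retrieval hit on "Pricing" now returns the whole pricing section, not an arbitrary 200-character window that may straddle two topics.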
Build your glossary before generating
- Run "Extract Suggestions" on your indexed documents to discover candidate terms automatically.
- Define preferred expressions for product names, features, and industry terms before your first content run.
- Forbidden alternatives prevent the LLM from using outdated or off-brand terms even when they appear in source documents.
- Review and approve suggestions regularly as your knowledge base grows.
Review before publishing
- Always check cited references — if a citation is missing, the claim may not be grounded.
- Sections marked [Reference not available] indicate a knowledge gap, not an AI error. Fill the gap in your knowledge base.
- Use the Queries tab trace view to diagnose why a particular chunk was or was not retrieved.
- Test publish jobs with a staging program or template before pushing to production campaigns.
Ready to try the full workflow?
Start with the free Starter plan, upload a few source documents, and run your first grounded content workflow inside the Playground.