Configuring LLMs in StackAI
Learn how to configure and manage LLMs in StackAI—from provider selection to advanced memory and citation settings.
LLMs (Large Language Models) are the core engine behind StackAI agents. In this tutorial, you’ll learn how to select providers, configure system prompts and inputs, and activate advanced features like memory, citations, and fallback logic. Whether you use OpenAI, Anthropic, or local models, and whether you rely on StackAI’s enterprise keys or bring your own, StackAI gives you the flexibility to build AI apps that are secure, compliant, and tailored to your organization’s needs.
Summary
Choose from major LLM providers (OpenAI, Anthropic, Google, etc.) or connect custom/local models
Drag and drop model nodes and switch models with one click
Use StackAI’s enterprise keys or connect your own API keys for full control
Configure two key fields (see the prompt sketch after this list):
  Instructions (system prompt) – defines the model’s role and behavior
  Prompt – combines user inputs, files, and knowledge bases
Use the expanded editor for easier prompt writing
Connect knowledge bases and private data sources directly into prompts
Enable memory settings to control how the model remembers prior interactions
Turn on citations to show which docs or files the model referenced (both are illustrated in the settings sketch after this list)
Set up guardrails like PII filtering, fallback models, and automatic retries (see the fallback sketch after this list)
Chain multiple LLMs for specialized steps within one workflow (see the chaining sketch after this list)
Manage which models appear in the sidebar via feature access control
Assign default API keys per provider for centralized billing and usage tracking
Build workflows with the right model for every task, securely and efficiently
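To make the two prompt fields concrete, here is a minimal sketch of how an LLM node’s configuration could be expressed in code. The field names, placeholder syntax, and helper function are illustrative assumptions, not StackAI’s actual schema; in practice you fill these fields in the node’s settings panel.

```python
# Illustrative sketch only: field names and {placeholder} syntax are
# assumptions for this example, not StackAI's actual node schema.

# "Instructions" (system prompt): defines the model's role and behavior.
instructions = (
    "You are a support assistant for Acme Corp. "
    "Answer only from the provided documents and cite your sources."
)

# "Prompt": combines user inputs, files, and knowledge-base results.
# Each placeholder stands for a value wired in from an upstream node.
prompt_template = (
    "User question: {user_input}\n"
    "Attached files: {file_summaries}\n"
    "Relevant knowledge-base passages:\n{kb_results}"
)

def build_prompt(user_input: str, file_summaries: str, kb_results: str) -> str:
    """Fill the template with values from input and knowledge-base nodes."""
    return prompt_template.format(
        user_input=user_input,
        file_summaries=file_summaries,
        kb_results=kb_results,
    )
```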
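Memory and citations are toggled in the node’s settings panel. The dictionary below is a hypothetical representation of those toggles, included only to show the kinds of knobs involved; none of these key names are confirmed StackAI identifiers.

```python
# Hypothetical settings dictionary: key names are illustrative, not StackAI's API.
llm_node_settings = {
    "memory": {
        "enabled": True,        # remember prior turns in the conversation
        "window_messages": 10,  # how many past messages to include as context
    },
    "citations": {
        "enabled": True,  # annotate answers with the docs/files they draw on
    },
}

# With citations on, an answer might carry source references alongside its text:
example_response = {
    "text": "Refunds are processed within 5 business days.",
    "citations": [
        {"source": "refund_policy.pdf", "page": 2},
    ],
}
```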
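Fallback models and automatic retries are configured in the UI, but the control flow they imply looks roughly like the sketch below: try the primary model a few times with backoff, then route the same prompt to a secondary model. `call_model` is a hypothetical stand-in for whatever provider call your workflow makes, and the model names are just examples.

```python
import random
import time

def call_model(model: str, prompt: str) -> str:
    """Hypothetical stand-in for a provider call; fails randomly to simulate outages."""
    if random.random() < 0.5:
        raise TimeoutError(f"{model} did not respond")
    return f"[{model}] answer to: {prompt}"

def generate_with_fallback(
    prompt: str,
    primary: str = "gpt-4o",
    fallback: str = "claude-3-5-sonnet",
    retries: int = 3,
) -> str:
    # PII filtering (not shown) would scrub `prompt` before any model call.
    # Retry the primary model with exponential backoff before giving up on it.
    for attempt in range(retries):
        try:
            return call_model(primary, prompt)
        except TimeoutError:
            time.sleep(2 ** attempt)
    # All retries failed: route the same prompt to the fallback model.
    return call_model(fallback, prompt)

print(generate_with_fallback("What is our refund policy?"))
```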
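Chaining simply means the output of one model node becomes the input of the next, with each step using the model best suited to it. The sketch below reuses the hypothetical `call_model` helper from the fallback example: a fast, inexpensive model condenses a document, then a stronger model drafts the final reply.

```python
# Reuses the hypothetical call_model helper from the fallback sketch above.

def summarize_then_draft(document: str) -> str:
    # Step 1: a fast, inexpensive model condenses the raw document.
    summary = call_model("gpt-4o-mini", f"Summarize the key points:\n{document}")
    # Step 2: a stronger model drafts a customer-facing reply from that summary.
    return call_model(
        "claude-3-5-sonnet", f"Write a customer-facing reply based on:\n{summary}"
    )
```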
Next up: explore chaining tools and models for complex agent behavior