Paul Omenaca
@houmland

January has been quite a ride! Reaching 80,000 users, onboarding new team members, and expanding into new geographies.
StackAI is going international, with deployments across the globe. Get in touch if you’d like to have StackAI deployed closer to you.
This sprint's product focus has been on governance, ease of use, and agentic tools. We’ve updated models along the way, on the day they became available (OpenAI o3-mini, DeepSeek on U.S. servers).
Let’s check out the new features.
We just released Quick Start, a faster and easier way to build AI agents in StackAI! These agents are deployed as Chat Assistants: imagine a custom ChatGPT or Perplexity, powered by any LLM (OpenAI, Anthropic, DeepSeek, Llama), your knowledge base, and tools, built in seconds.
Easily customize the title, description, LLM, knowledge base, and agentic tools to trigger — then go live in minutes.
This lets you launch AI agents immediately, and you can still expand an agent's functionality in the workflow builder by adding more nodes. Either way, you can deploy AI agents much faster.
Check out our newly redesigned chat assistant interface! We've enhanced the look and feel to be more intuitive and seamless for your end-users.
We recently released the Feature Access Control Tab: admins can now decide which features are accessible to users in the workflow builder and which appear in the sidebar.
Moreover, admins can select how citations are rendered to end users. Admins can choose to:
• Allow users to download source documents
• Allow users to access source documents via a cloud provider (e.g. SharePoint, GDrive)
• Block source documents from being downloaded
This empowers admins to control who has access to the underlying source documents. When links are provided to the cloud provider, users must have login access to that provider in order to retrieve the documents.
We’re excited to announce the release of StackAI’s own Code Interpreter!
As a tool in StackAI, Code Interpreter enables you to analyze data, create charts, and translate natural language queries into Python code. It is equivalent to the Code Interpreter feature in ChatGPT.
It’s under ‘Analysis Tool’ in the ‘Add Tools’ section.
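To give a sense of what this looks like in practice, here is a rough sketch (not StackAI's actual output) of the kind of Python a Code Interpreter-style tool might generate for a request like "summarize revenue by month and chart it"; the file name and column names are hypothetical:

```python
# Hypothetical example of generated analysis code; the file "sales.csv" and
# the "date"/"revenue" columns are assumptions, not real StackAI output.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("sales.csv", parse_dates=["date"])

# Aggregate revenue by calendar month
monthly = df.groupby(df["date"].dt.to_period("M"))["revenue"].sum()

# Chart the monthly totals
monthly.plot(kind="bar", title="Revenue by Month")
plt.tight_layout()
plt.show()
```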
Scroll through the list of available tools to discover exciting new additions like Stripe, Knowledge Base, and more.
Forget typing! You can now speak instructions directly to your LLM. This works in two ways. End users can use the voice interface to dictate prompts to the LLM.
App builders can dictate instructions for the model instead of typing out long passages in the workflow builder. This saves time for both users and creators.
January was a big month for new LLMs.
DeepSeek became available in StackAI the same day it was available on U.S. servers. Now you can build your AI agents with DeepSeek’s low-cost, high-performance LLM at the core, and you can host your DeepSeek model with a U.S. provider (TogetherAI or Groq).
We also recently released OpenAI o3-mini, Gemini 2.0 Flash, and Perplexity Sonar Pro, among several other models.
Check out our latest pieces of content: