How to Build AI Agents with Semantic Kernel Step by Step
Semantic Kernel lets you build AI Agents that do more than chat or look up information. This guide walks through the process step by step, with real code examples you can run as you go, so you can see how everything works and then build something of your own.
Key Takeaways
Semantic Kernel lets you build smart AI Agents that handle many kinds of tasks, connect to different AI models, and extend their skills with plugins.
Set up your machine first, then install Semantic Kernel for Python, .NET, or Java so you can start building your agent quickly.
Build your AI Agent one step at a time: configure the kernel, add plugins and prompts, set up planners and memory, define personas, and run workflows.
Plugins give your agent new skills, planners break big jobs into small steps, and memory lets your agent recall what it did before, so it improves over time.
Semantic Kernel agents can do work for you: they automate tasks, help in chats, and connect to business workflows, which saves time and makes jobs easier.
AI Agents and Semantic Kernel
Semantic Kernel Overview
Semantic Kernel is Microsoft's open-source SDK for building AI Agents for many kinds of jobs. It works with different AI models and supports C#, Python, and Java, so you can pick what fits your project best. What makes Semantic Kernel special is that instructions are a first-class building block: you tell your agent what to do in normal language, which makes it simple to build smart tools.
Here are some main features that make Semantic Kernel different:
It can respect user identity and follow safety rules.
It supports multi-step planning for your agent.
Every action can be tracked, so you can check what happened.
It is designed to be safe for use in large companies.
You can use C#, Python, or Java to build your AI Agent.
Tip: You can swap AI models or add plugins without rewriting the rest of your code. This keeps your projects up to date and easy to maintain.
Agent Capabilities
Semantic Kernel lets your AI Agents do many things. You can combine skills, use prompts, and connect to APIs and large language models. For example, a plugin can search for jobs online, and your agent can then send the results to the user. Semantic Kernel acts as the hub: it manages prompts, routes requests to the right AI service, and handles the answers.
You can set up your agent to:
Share results from one skill to the next.
Work through complex tasks that have many parts.
The plugin system and memory help your agent remember context and use many services. This makes Semantic Kernel a good choice for building AI Agents that automate jobs, talk to people, and work with business tools.
Environment Setup
Prerequisites
You need to get your computer ready before you build your AI agent. Semantic Kernel runs on Windows, macOS, and Linux, and you can build with Python, .NET, or Java. Make sure a recent runtime for the language you choose is installed (Python, the .NET SDK, or a Java JDK).
For .NET projects, Visual Studio Code with the .NET Extension Pack is a good choice. If you want to run AI models locally, LM Studio works on all three systems and can use your GPU if you have suitable hardware. For advanced features such as vector storage, Docker makes it easy to run services like Qdrant.
Tip: You do not need a GPU, but one makes your AI agent run faster, especially with larger models.
Installation
How you install Semantic Kernel depends on the programming language you pick: pip for Python, NuGet for .NET, and a library dependency for Java. Here are the main steps:
Python:
pip install semantic-kernel
.NET:
dotnet add package Microsoft.SemanticKernel
dotnet add package Microsoft.SemanticKernel.Connectors.OpenAI
Java:
Add the latest semantic-kernel release to your project as a dependency (for example, via Maven or Gradle), following the instructions on the official site.
Semantic Kernel works on Windows, Linux, and macOS. If you use Python, install with pip rather than conda for now; the package is not published to conda channels yet.
Project Initialization
Start by making a new folder for your project. Then set up your codebase. For .NET, use the console template:
dotnet new console -n MySemanticKernelAgent
cd MySemanticKernelAgent
Add the packages shown above, then create a Kernel object. The kernel is the central host for your AI services; you can register connectors for OpenAI, Azure OpenAI, or Ollama. In Python, import and construct the Kernel, then add your AI services and plugins.
It is good practice to keep configuration, kernel initialization, and agent setup in separate parts of your code. Use environment variables or a .env file for settings. This makes it easier to grow your agent and add new features later.
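For example, in Python you might load those settings at startup. This is a minimal sketch that assumes the python-dotenv package is installed; Semantic Kernel's connectors can also read provider settings straight from environment variables.
import os
from dotenv import load_dotenv  # assumes the python-dotenv package is installed

load_dotenv()  # reads OPENAI_API_KEY and similar settings from a local .env file
api_key = os.getenv("OPENAI_API_KEY")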
Building AI Agents Step by Step
Building AI Agents with Semantic Kernel follows a clear sequence: configure the kernel, add plugins and prompts, set up planners and memory, define personas, and finally run workflows. Each step makes your agent smarter and more helpful.
Kernel Configuration
Start by configuring the kernel. The kernel is the core of your AI Agent: it connects to language models and hosts your plugins. You can pick providers such as Azure OpenAI, OpenAI, Ollama, or ONNX, and set model IDs, API keys, and endpoints in your settings. Keep your API keys in environment variables or a .env file, not in your code. This keeps your project safe.
Depending on your setup, you can also adjust the agent's persona (its name, prompt, or display color), set time limits and maximum steps, and tune the quality checks the agent uses to review its own work.
Here is a simple Python example:
import os
import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion

kernel = sk.Kernel()
# Register an OpenAI chat completion service; the key comes from an environment variable.
kernel.add_service(
    OpenAIChatCompletion(
        service_id="openai",
        ai_model_id="gpt-3.5-turbo",
        api_key=os.getenv("OPENAI_API_KEY"),
    )
)
Tip: Use secret tools or environment variables for your API keys. This keeps your AI Agents safe and ready to use.
You can set options such as FunctionChoiceBehavior to control how your agent picks functions: let the model choose any registered function, require a specific one, or turn function calling off. Adding chat history helps your agent remember past turns and give better answers.
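Here is a minimal sketch of both ideas together, assuming the kernel and the "openai" service registered above and an async context:
from semantic_kernel.connectors.ai.function_choice_behavior import FunctionChoiceBehavior
from semantic_kernel.connectors.ai.open_ai import OpenAIChatPromptExecutionSettings
from semantic_kernel.contents import ChatHistory

# Let the model pick any registered plugin function automatically.
settings = OpenAIChatPromptExecutionSettings(function_choice_behavior=FunctionChoiceBehavior.Auto())

history = ChatHistory()
history.add_user_message("Find Python developer jobs in Berlin.")

chat_service = kernel.get_service("openai")  # the service_id used at registration
reply = await chat_service.get_chat_message_content(chat_history=history, settings=settings, kernel=kernel)
history.add_message(reply)  # keep the answer so the next turn has context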
Plugins and Prompts
Plugins give your AI Agent new skills. Each plugin groups related functions, and you can write plugins in code or as prompt templates (a minimal code plugin sketch follows the tips below). When you build plugins, describe each function clearly, use simple names, and keep the number of parameters small. This helps the AI pick the right function.
Here are some good tips for plugins:
Write clear descriptions for each function and what it needs.
Only add plugins your agent will use. Too many plugins can confuse the AI.
Use simple parameter types and avoid abbreviations.
Group similar functions together so you can reuse them.
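Here is a minimal code plugin sketch in Python, assuming the kernel created earlier; the JobSearchPlugin class, its function, and the fake result are illustrative, not part of Semantic Kernel itself:
from typing import Annotated
from semantic_kernel.functions import kernel_function

class JobSearchPlugin:
    @kernel_function(name="find_jobs", description="Find job listings for a job title in a location.")
    def find_jobs(
        self,
        job_title: Annotated[str, "The job title to search for"],
        location: Annotated[str, "The city or region to search in"],
    ) -> Annotated[str, "A short, human-readable list of matching jobs"]:
        # Real code would call a job-board API here; this stub just returns a placeholder.
        return f"3 {job_title} openings found in {location}."

kernel.add_plugin(JobSearchPlugin(), plugin_name="JobSearch")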
You can build plugins from code or from prompts. For prompt plugins, give each function its own folder inside the plugin folder, with a prompt file and a config file. Use variables in your prompts to make them flexible. For example:
/plugins/JobSearch/FindJobs/
├─ skprompt.txt
└─ config.json
In skprompt.txt:
Find job listings for {{$jobTitle}} in {{$location}}.
In config.json:
{
  "schema": 1,
  "type": "completion",
  "description": "Find job listings for a job title and location.",
  "input_variables": [{ "name": "jobTitle" }, { "name": "location" }]
}
You can call one function from another inside a prompt using the {{PluginName.FunctionName}} syntax. This lets you compose larger skills from small parts.
Planners and Memory
Planners help your AI Agent break big jobs into small steps. Semantic Kernel ships several planners (for example, sequential and Handlebars-based planners; which ones are available depends on the language you use). A planner turns your instructions into a plan with steps, checks, and loops, and the kernel runs each step by calling the right plugin.
For example, if your agent needs to process a loan application, the planner can:
Get customer information
Read the application
Fill out forms
Check for missing data
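Here is a minimal sketch of handing such a goal to a planner, assuming the FunctionCallingStepwisePlanner available in recent Python releases, the "openai" service registered earlier, and an async context:
from semantic_kernel.planners import FunctionCallingStepwisePlanner

# The planner decides which registered plugin functions to call, and in what order.
planner = FunctionCallingStepwisePlanner(service_id="openai")
result = await planner.invoke(kernel, "Process the loan application for customer John Doe.")
print(result.final_answer)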
You can set up memory for your agent. Memory lets your agent remember facts, past actions, or files. Semantic Kernel supports two types of memory:
Content Storage: Stores files and raw data, like documents.
Memory Storage: Stores processed data and facts for semantic search and reasoning.
You can use Azure Blob Storage, local disk, or vector databases for storage. For simple projects, the built-in volatile (in-memory) store is enough to get started.
Here is a Python example that wires up a simple in-memory store (the embedding model name is just an example):
from semantic_kernel.connectors.ai.open_ai import OpenAITextEmbedding
from semantic_kernel.core_plugins import TextMemoryPlugin
from semantic_kernel.memory import SemanticTextMemory, VolatileMemoryStore

memory = SemanticTextMemory(storage=VolatileMemoryStore(), embeddings_generator=OpenAITextEmbedding(ai_model_id="text-embedding-3-small"))
kernel.add_plugin(TextMemoryPlugin(memory), plugin_name="memory")
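As a usage sketch (inside an async function, with the memory object above; the collection name and text are illustrative):
await memory.save_information(collection="loans", id="note-1", text="John Doe applied for a home loan.")
hits = await memory.search(collection="loans", query="What did John Doe apply for?")
print(hits[0].text if hits else "No match found.")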
Note: Memory helps your AI Agents remember what happened before. This makes them smarter and more helpful.
Personas
A persona gives your AI Agent a distinct role and style. You set the agent's name, tone, and instructions; for example, it can act like a careful advisor or a friendly teacher. Keep the instructions in templates, separate from your code, so it is easy to change how your agent acts.
When you make a persona, think about:
The agent’s role and style (like “Fact-checking assistant”)
The tone of voice (formal, casual, friendly)
The agent’s skills (memory, planning, tool use)
You can use YAML or prompt templates to set personas. Mix memory, planning, and plugins to give your agent purpose and context. For advanced use, you can make teams of AI Agents with different personas working together.
Here is a simple persona prompt:
You are a helpful assistant who always checks facts before answering.
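If you use the agent framework that ships with recent Semantic Kernel releases, you can attach a persona like this directly to an agent. A minimal Python sketch, assuming the kernel configured earlier; the agent name is illustrative:
from semantic_kernel.agents import ChatCompletionAgent

# The name and instructions define the persona.
fact_checker = ChatCompletionAgent(
    kernel=kernel,
    name="FactChecker",
    instructions="You are a helpful assistant who always checks facts before answering.",
)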
Workflow Execution
Now you can run your AI Agent's workflow. With the Process Framework, each step is a class or function, and you connect steps using events. The kernel manages the flow, calls plugins, and handles results. You start the workflow by sending an event.
Here is a C# example:
ProcessBuilder processBuilder = new(nameof(LoanApplicationWorkflow));

var gatherStep = processBuilder.AddStepFromType<GatherCustomerInformationStep>();
var parseStep = processBuilder.AddStepFromType<ParseLoanApplicationStep>();
var fillStep = processBuilder.AddStepFromType<FillOutApplicationStep>();

// The "Start" event fans out to the first two steps.
processBuilder.OnInputEvent("Start")
    .SendEventTo(new(gatherStep))
    .SendEventTo(new(parseStep));

// Both steps feed the form-filling step when they finish.
gatherStep.OnEvent(ProcessEvents.CustomerInfoGatheringCompleted)
    .SendEventTo(new ProcessFunctionTargetBuilder(fillStep));

parseStep.OnEvent(ProcessEvents.ParsingApplicationFormCompleted)
    .SendEventTo(new ProcessFunctionTargetBuilder(fillStep));

// The process ends when the form-filling step returns its result.
fillStep.OnFunctionResult()
    .StopProcess();

var process = processBuilder.Build();
await process.StartAsync(kernel, new KernelProcessEvent { Id = "Start", Data = "Customer Name: John Doe" });
This example shows how to define steps, link them with events, and start the workflow. The kernel handles errors and keeps your workflow running; if something goes wrong, it can retry, stop, or report the failure.
Tip: Use planners and memory to help your AI Agents handle hard jobs and fix mistakes. This makes your agents reliable and ready for real work.
By following these steps, you can build AI Agents that complete tasks, remember important context, and act with a consistent personality. You can connect them to APIs, use powerful language models, and build workflows that save time and effort.
Use Cases
Task Automation
Semantic Kernel helps you automate many jobs at work. AI Agents can take over repetitive tasks such as document processing, Level 1 support, and data cleanup, so you save time and make fewer mistakes. For example, Microsoft reports that companies get $3.70 to $10 back for every $1 spent on automation. You also launch products faster and improve data quality. In healthcare, AI systems have cut paperwork time by 80%, letting doctors spend more time with patients.
Here are ways Semantic Kernel agents help you work better:
They learn and improve as they work.
They fix errors and keep things running.
They follow rules like HIPAA and PCI-DSS.
Tip: Start with simple jobs, then add more steps later. You can connect agents to tools like Azure AI Search for smarter retrieval.
Conversational Assistant
Semantic Kernel also lets you build smart chat assistants that answer questions, guide users, and help with tasks. You pick the language model and plugins, then define the agent's style. Because Semantic Kernel supports C#, Python, and Java, you can choose what fits and build assistants that run anywhere and connect to your APIs.
Semantic Kernel agents stand out because they combine skills, remember conversations, and adapt to new needs. You can use them for customer support, HR help desks, or company knowledge bases; they give clear answers by searching several sources and refining the user's question. In one engineering case, AI assistants improved search results by 74% and raised sales by 23%.
Workflow Integration
You can link Semantic Kernel agents to external workflow systems. This lets you automate business processes and connect AI Agents to the tools you already use. For example, Microsoft Logic Apps offers more than 1,400 connectors: you import the workflows, and the agent picks the right one for your input.
Note: You can start workflows with events, link to APIs, and follow business rules. This makes your automation safe and able to grow.
You have now learned how to build AI Agents with Semantic Kernel. The toolkit lets you connect language models, add plugins, and set up planners and memory, and it works for many real jobs.
You can have many agents talk together in one chat.
You can add your own plugins to automate tasks.
You can mix AI with normal code for strong workflows.
You can build your agent in modular parts that scale to large companies.
Try advanced features like multi-agent collaboration and workflow integration, and see how Semantic Kernel can help with your next project.
FAQ
How do you add a new skill to your AI agent?
You add a new skill by creating a plugin. Write the plugin in code or as a prompt template. Register it with your kernel. Your agent can now use this skill in workflows.
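For example, registering the hypothetical JobSearchPlugin class sketched earlier takes one call:
kernel.add_plugin(JobSearchPlugin(), plugin_name="JobSearch")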
Can you use Semantic Kernel with different AI models?
Yes, you can connect Semantic Kernel to OpenAI, Azure OpenAI, Ollama, or ONNX models. Change the model by updating your kernel configuration. You do not need to rewrite your agent.
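For example, here is a sketch of registering Azure OpenAI instead of OpenAI; the deployment name is a placeholder and the endpoint and key come from environment variables:
import os
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion

kernel.add_service(
    AzureChatCompletion(
        deployment_name="my-gpt-4o-deployment",  # placeholder deployment name
        endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
        api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    )
)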
What is the best way to keep your API keys safe?
Store your API keys in environment variables or a .env file. Do not put keys in your code. This keeps your project secure.
How do you make your agent remember past actions?
Add memory to your kernel. Use the built-in volatile (in-memory) store for quick experiments, or connect to a vector database for persistence. Your agent can then recall facts and past steps.
Can you run multiple agents together?
Yes! You can set up teams of agents with different personas. Each agent can handle a part of the workflow. Use events to let them work together and share results.
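A minimal sketch of this idea in Python, assuming the agent framework in recent releases and two ChatCompletionAgent instances (writer and reviewer) built like the FactChecker example earlier; the roles and prompt are illustrative:
from semantic_kernel.agents import AgentGroupChat
from semantic_kernel.contents import AuthorRole, ChatMessageContent

# writer and reviewer are agents with different personas.
chat = AgentGroupChat(agents=[writer, reviewer])
await chat.add_chat_message(ChatMessageContent(role=AuthorRole.USER, content="Draft a short product announcement and review it for factual errors."))

async for message in chat.invoke():  # agents take turns until the chat's strategy stops them
    print(f"{message.name}: {message.content}")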