AI Agents with OpenAI
Building intelligent AI agents using OpenAI's GPT-4 and function calling capabilities for real-world applications.
Overview
This experiment explores:
- Function calling patterns
- Multi-step agent workflows
- Tool integration
- Production considerations
Basic Agent Setup
TypeScript
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

const tools = [
  {
    type: "function",
    function: {
      name: "get_weather",
      description: "Get current weather for a location",
      parameters: {
        type: "object",
        properties: {
          location: { type: "string" },
          unit: { type: "string", enum: ["celsius", "fahrenheit"] },
        },
        required: ["location"],
      },
    },
  },
];
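The agent code below delegates execution to an `executeFunction` helper. One minimal way to sketch that helper, assuming a stubbed `getWeather` in place of a real weather API, is a simple dispatcher over the declared tool names:

```typescript
// Hypothetical dispatcher for the tools declared above. The weather
// lookup is a stub; a real implementation would call a weather API.
type WeatherArgs = { location: string; unit?: "celsius" | "fahrenheit" };

async function getWeather({ location, unit = "celsius" }: WeatherArgs) {
  // Stubbed response; replace with a real API call in production
  return { location, temperature: 21, unit };
}

async function executeFunction(name: string, args: Record<string, unknown>) {
  switch (name) {
    case "get_weather":
      return getWeather(args as WeatherArgs);
    default:
      // Returning an error payload lets the model recover gracefully
      return { error: `Unknown function: ${name}` };
  }
}
```

Returning an error object for unknown names, rather than throwing, keeps the tool result serializable so it can be fed back to the model as a `tool` message.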
Implementing Function Calls
Function calling allows the model to produce structured arguments and request tool execution.
TypeScript
async function runAgent(userMessage: string) {
  const messages = [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: userMessage },
  ];

  const response = await openai.chat.completions.create({
    model: "gpt-4-turbo-preview",
    messages,
    tools,
    tool_choice: "auto",
  });

  const assistantMessage = response.choices[0].message;
  const toolCalls = assistantMessage.tool_calls;

  if (toolCalls) {
    // The assistant message containing the tool calls must precede
    // the tool responses in the conversation history
    messages.push(assistantMessage);

    for (const toolCall of toolCalls) {
      const functionName = toolCall.function.name;
      const functionArgs = JSON.parse(toolCall.function.arguments);

      // Execute the function and get results
      const result = await executeFunction(functionName, functionArgs);

      // Add function response to messages
      messages.push({
        role: "tool",
        tool_call_id: toolCall.id,
        content: JSON.stringify(result),
      });
    }

    // Send the tool results back so the model can produce a final answer
    return openai.chat.completions.create({
      model: "gpt-4-turbo-preview",
      messages,
      tools,
    });
  }

  return response;
}
Multi-Agent Patterns
Building complex workflows with multiple specialized agents:
- Router Agent: Determines which specialist agent to use
- Specialist Agents: Handle specific tasks
- Coordinator: Combines results
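The router pattern above can be sketched as a small dispatch function. This is a minimal illustration, not a full framework: in practice `classify` would itself be an LLM call (for example, a cheap model with a routing prompt), but keeping it pluggable makes the pattern easy to test, and all names here are illustrative:

```typescript
// Minimal router-agent sketch: classify the input, then hand it to
// the matching specialist agent, falling back when nothing matches.
type Specialist = (input: string) => Promise<string>;

interface Router {
  classify: (input: string) => Promise<string>; // picks a specialist key
  specialists: Record<string, Specialist>;      // task-specific agents
  fallback: Specialist;                         // used when no key matches
}

async function route(router: Router, input: string): Promise<string> {
  const key = await router.classify(input);
  const specialist = router.specialists[key] ?? router.fallback;
  return specialist(input);
}
```

A coordinator can then be layered on top by calling `route` for each sub-task and merging the specialist outputs.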
Cost Consideration
Multi-agent systems can increase API costs significantly. Implement caching and optimize prompts.
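One way to sketch the caching idea, assuming an exact-match in-memory cache keyed by the prompt (production systems typically add TTLs, size bounds, and embedding-based semantic keys):

```typescript
// Simple in-memory cache for agent responses, keyed by the exact
// prompt string. Repeated prompts skip the API call entirely.
const responseCache = new Map<string, string>();

async function cachedCompletion(
  prompt: string,
  complete: (p: string) => Promise<string>,
): Promise<string> {
  const hit = responseCache.get(prompt);
  if (hit !== undefined) return hit; // cache hit: no API cost
  const result = await complete(prompt);
  responseCache.set(prompt, result);
  return result;
}
```

Passing the completion function as a parameter keeps the cache independent of any particular model or SDK.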
Production Lessons
Key insights from deploying agents in production:
Rate Limiting
TypeScript
import pLimit from 'p-limit';

const limit = pLimit(10); // Max 10 concurrent requests

const results = await Promise.all(
  tasks.map(task => limit(() => processWithAgent(task)))
);
Error Handling
Always implement robust retry logic:
TypeScript
// Small helper used for backoff delays
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function callWithRetry<T>(fn: () => Promise<T>, maxRetries = 3): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (i === maxRetries - 1) throw error;
      await sleep(Math.pow(2, i) * 1000); // Exponential backoff: 1s, 2s, 4s
    }
  }
  throw lastError;
}
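Fixed exponential backoff can cause retry storms when many clients fail and retry in lockstep. A common refinement, sketched here as a standalone helper rather than part of the retry code above, is full-jitter backoff:

```typescript
// Full-jitter backoff: pick a random delay in [0, base * 2^attempt),
// capped at maxDelayMs, so simultaneous retries spread apart.
function backoffDelay(attempt: number, baseMs = 1000, maxDelayMs = 30000): number {
  const ceiling = Math.min(maxDelayMs, baseMs * Math.pow(2, attempt));
  return Math.floor(Math.random() * ceiling);
}
```

Substituting `backoffDelay(i)` for the fixed `Math.pow(2, i) * 1000` delay keeps the same average wait while decorrelating concurrent retries.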
Performance Metrics
From our production deployment:
- Avg Response Time: 2.3s
- Success Rate: 97.8%
- Cost per Request: $0.012
- User Satisfaction: 4.6/5
Success
AI agents reduced manual processing time by 85% while maintaining quality.