Transform your web development skills by creating a cutting-edge AI chat platform that integrates ChatGPT, Claude, Grok, and Gemini in one sleek application.
🎥 Watch the Complete Tutorial
Before diving into the written guide, check out my comprehensive video tutorial where I build this entire project from scratch:
▶️ Multi-AI Chat Platform Tutorial - Next.js 15 Complete Guide
If this tutorial helps you, please consider buying me a coffee ☕ to support more content like this!
🚀 What We're Building
Imagine having ChatGPT, Claude AI, Grok, and Google Gemini all in one platform where users can seamlessly switch between AI models with a simple dropdown. This isn't just another chatbot clone; it's a comprehensive multi-AI platform with:
4 AI Model Integrations: OpenAI GPT, Anthropic Claude, xAI Grok, and Google Gemini
Real-time Chat Storage: Firebase Firestore for instant message sync
Modern UI/UX: Sleek animations and responsive design
AI Model Switching: Dynamic dropdown selection
Production Ready: Built with Next.js 15 and deployed on Vercel
🛠️ Tech Stack & Architecture
Core Technologies
Next.js 15 - React framework with App Router
TypeScript - Type safety and better development experience
Tailwind CSS - Utility-first CSS framework
Framer Motion - Smooth animations and transitions
AI Integrations
OpenAI API - GPT-4 and GPT-3.5 models
Anthropic Claude - Claude-3 Sonnet and Haiku
xAI Grok - Grok model (served as grok-beta in the API)
Google Gemini - Gemini Pro model
Database & Storage
Firebase Firestore - Real-time NoSQL database
Alternative: Supabase (PostgreSQL with real-time subscriptions)
Alternative: PlanetScale (MySQL with edge functions)
Why These Choices?
Next.js 15: Latest features, server components, and excellent performance
Firebase: Free tier includes 50K reads and 20K writes per day (perfect for testing)
TypeScript: Essential for managing multiple API interfaces safely
🚀 Step 1: Project Setup
Initialize Next.js 15 Project
npx create-next-app@latest multi-ai-chat --typescript --tailwind --eslint --app
cd multi-ai-chat
Install Dependencies
npm install firebase framer-motion lucide-react
npm install -D @types/node
Environment Variables Setup
Create .env.local:
# AI API Keys
OPENAI_API_KEY=your_openai_key
ANTHROPIC_API_KEY=your_anthropic_key
XAI_API_KEY=your_xai_key
GOOGLE_API_KEY=your_google_key
# Firebase Configuration
NEXT_PUBLIC_FIREBASE_API_KEY=your_firebase_key
NEXT_PUBLIC_FIREBASE_AUTH_DOMAIN=your_project.firebaseapp.com
NEXT_PUBLIC_FIREBASE_PROJECT_ID=your_project_id
NEXT_PUBLIC_FIREBASE_STORAGE_BUCKET=your_project.appspot.com
NEXT_PUBLIC_FIREBASE_MESSAGING_SENDER_ID=123456789
NEXT_PUBLIC_FIREBASE_APP_ID=your_app_id
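A missing key is the most common cause of silent failures in a project like this, so it's worth failing fast at startup. Here's a minimal sketch of such a check; `missingEnvVars` and `REQUIRED_SERVER_KEYS` are illustrative names, not part of the project files above:

```typescript
// lib/env-check.ts (hypothetical helper)
// Returns the names of required variables that are absent or blank.
export function missingEnvVars(
  env: Record<string, string | undefined>,
  required: string[]
): string[] {
  return required.filter((key) => !env[key] || env[key]!.trim() === '');
}

const REQUIRED_SERVER_KEYS = [
  'OPENAI_API_KEY',
  'ANTHROPIC_API_KEY',
  'XAI_API_KEY',
  'GOOGLE_API_KEY',
];

// Call once from server-side code (e.g. at the top of an API route module):
const missing = missingEnvVars(process.env, REQUIRED_SERVER_KEYS);
if (missing.length > 0) {
  console.warn(`Missing environment variables: ${missing.join(', ')}`);
}
```
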
🔥 Step 2: Firebase Configuration
Firebase Setup
// lib/firebase.ts
import { initializeApp } from 'firebase/app';
import { getFirestore } from 'firebase/firestore';
const firebaseConfig = {
apiKey: process.env.NEXT_PUBLIC_FIREBASE_API_KEY,
authDomain: process.env.NEXT_PUBLIC_FIREBASE_AUTH_DOMAIN,
projectId: process.env.NEXT_PUBLIC_FIREBASE_PROJECT_ID,
storageBucket: process.env.NEXT_PUBLIC_FIREBASE_STORAGE_BUCKET,
messagingSenderId: process.env.NEXT_PUBLIC_FIREBASE_MESSAGING_SENDER_ID,
appId: process.env.NEXT_PUBLIC_FIREBASE_APP_ID
};
const app = initializeApp(firebaseConfig);
export const db = getFirestore(app);
Database Service
// lib/database.ts
import { db } from './firebase';
import { collection, addDoc, query, where, orderBy, onSnapshot, serverTimestamp, Timestamp } from 'firebase/firestore';
export interface Message {
id?: string;
content: string;
role: 'user' | 'assistant';
aiModel: string;
timestamp: Timestamp | null; // null locally until serverTimestamp() resolves
sessionId: string;
}
export const saveMessage = async (message: Omit<Message, 'id' | 'timestamp'>) => {
try {
await addDoc(collection(db, 'messages'), {
...message,
timestamp: serverTimestamp()
});
} catch (error) {
console.error('Error saving message:', error);
}
};
export const subscribeToMessages = (sessionId: string, callback: (messages: Message[]) => void) => {
// Filter on the server rather than downloading every session's messages.
// Combining where() with orderBy() requires a composite index; Firestore
// logs a link to create it the first time the query runs.
const q = query(
collection(db, 'messages'),
where('sessionId', '==', sessionId),
orderBy('timestamp', 'asc')
);
return onSnapshot(q, (snapshot) => {
callback(snapshot.docs.map(doc => ({ id: doc.id, ...doc.data() } as Message)));
});
};
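One Firestore quirk worth knowing: `serverTimestamp()` shows up as `null` in the local snapshot until the write is confirmed, so a freshly sent message can momentarily sort oddly. A pure helper that keeps unresolved messages at the end can smooth this over (a sketch; the simplified `Msg` shape stands in for the `Message` interface above):

```typescript
interface Msg {
  content: string;
  timestamp: { toMillis(): number } | null; // Firestore Timestamp-like
}

// Sort ascending by timestamp; unresolved (null) timestamps go last,
// preserving their relative order (Array.prototype.sort is stable).
export function sortByTimestamp<T extends Msg>(messages: T[]): T[] {
  return [...messages].sort((a, b) => {
    if (a.timestamp === null && b.timestamp === null) return 0;
    if (a.timestamp === null) return 1;
    if (b.timestamp === null) return -1;
    return a.timestamp.toMillis() - b.timestamp.toMillis();
  });
}
```
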
🤖 Step 3: AI Integration Layer
AI Service Abstraction
// lib/ai-service.ts
export type AIModel = 'gpt-4' | 'claude-3-sonnet' | 'grok-1' | 'gemini-pro';
export interface AIResponse {
content: string;
model: AIModel;
usage?: {
promptTokens: number;
completionTokens: number;
};
}
export class AIService {
static async sendMessage(model: AIModel, message: string): Promise<AIResponse> {
const response = await fetch('/api/chat', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ model, message })
});
if (!response.ok) {
throw new Error(`AI request failed: ${response.statusText}`);
}
return response.json();
}
}
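The API route will receive `model` from the request body as an untyped string, so it pays to validate it before the switch statement. A minimal type guard (a sketch; `isAIModel` is a name I'm introducing, and the union is restated here so the snippet is self-contained):

```typescript
export type AIModel = 'gpt-4' | 'claude-3-sonnet' | 'grok-1' | 'gemini-pro';

const SUPPORTED_MODELS: readonly AIModel[] = [
  'gpt-4',
  'claude-3-sonnet',
  'grok-1',
  'gemini-pro',
];

// Narrow an arbitrary value to the AIModel union.
export function isAIModel(value: unknown): value is AIModel {
  return (
    typeof value === 'string' &&
    (SUPPORTED_MODELS as readonly string[]).includes(value)
  );
}
```

In the route handler, `if (!isAIModel(model)) return NextResponse.json({ error: 'Unsupported AI model' }, { status: 400 });` turns bad input into a 400 instead of a 500.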
API Route Implementation
// app/api/chat/route.ts
import { NextRequest, NextResponse } from 'next/server';
import OpenAI from 'openai';
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
export async function POST(request: NextRequest) {
try {
const { model, message } = await request.json();
let response: string;
switch (model) {
case 'gpt-4':
response = await handleOpenAI(message);
break;
case 'claude-3-sonnet':
response = await handleClaude(message);
break;
case 'grok-1':
response = await handleGrok(message);
break;
case 'gemini-pro':
response = await handleGemini(message);
break;
default:
throw new Error('Unsupported AI model');
}
return NextResponse.json({ content: response, model });
} catch (error) {
console.error('AI API Error:', error);
return NextResponse.json(
{ error: 'Failed to process AI request' },
{ status: 500 }
);
}
}
async function handleOpenAI(message: string): Promise<string> {
const completion = await openai.chat.completions.create({
model: 'gpt-4',
messages: [{ role: 'user', content: message }],
max_tokens: 1000
});
return completion.choices[0]?.message?.content || 'No response';
}
async function handleClaude(message: string): Promise<string> {
const response = await fetch('https://api.anthropic.com/v1/messages', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'x-api-key': process.env.ANTHROPIC_API_KEY!,
'anthropic-version': '2023-06-01'
},
body: JSON.stringify({
model: 'claude-3-sonnet-20240229',
max_tokens: 1000,
messages: [{ role: 'user', content: message }]
})
});
const data = await response.json();
return data.content?.[0]?.text || 'No response';
}
async function handleGrok(message: string): Promise<string> {
const response = await fetch('https://api.x.ai/v1/chat/completions', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${process.env.XAI_API_KEY}`
},
body: JSON.stringify({
model: 'grok-beta',
messages: [{ role: 'user', content: message }],
max_tokens: 1000
})
});
const data = await response.json();
return data.choices?.[0]?.message?.content || 'No response';
}
async function handleGemini(message: string): Promise<string> {
const response = await fetch(`https://generativelanguage.googleapis.com/v1/models/gemini-pro:generateContent?key=${process.env.GOOGLE_API_KEY}`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
contents: [{ parts: [{ text: message }] }]
})
});
const data = await response.json();
return data.candidates?.[0]?.content?.parts?.[0]?.text || 'No response';
}
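Each provider nests the reply text differently, and an error payload (for example, a bad API key) can make naive property access throw. Pulling the extraction into pure helpers with defensive chaining makes the handlers easier to test; this is a sketch based on the response shapes used above, and field layouts may change between API versions:

```typescript
// Extract the reply text from each provider's JSON, falling back to null.
export const extractOpenAIText = (data: any): string | null =>
  data?.choices?.[0]?.message?.content ?? null;

export const extractClaudeText = (data: any): string | null =>
  data?.content?.[0]?.text ?? null;

export const extractGeminiText = (data: any): string | null =>
  data?.candidates?.[0]?.content?.parts?.[0]?.text ?? null;
```

A handler can then end with `return extractClaudeText(data) ?? 'No response';`, and the extractors can be unit-tested with canned payloads without any network calls.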
🎨 Step 4: Modern UI Components
Main Chat Interface
// components/ChatInterface.tsx
'use client';
import React, { useState, useEffect, useRef } from 'react';
import { motion, AnimatePresence } from 'framer-motion';
import { Send, Bot, User, Sparkles } from 'lucide-react';
import { AIService, AIModel } from '@/lib/ai-service';
import { saveMessage, subscribeToMessages, Message } from '@/lib/database';
const AI_MODELS: { value: AIModel; label: string; icon: string }[] = [
{ value: 'gpt-4', label: 'ChatGPT-4', icon: '🤖' },
{ value: 'claude-3-sonnet', label: 'Claude 3', icon: '🧠' },
{ value: 'grok-1', label: 'Grok', icon: '⚡' },
{ value: 'gemini-pro', label: 'Gemini Pro', icon: '🌟' }
];
export default function ChatInterface() {
const [messages, setMessages] = useState<Message[]>([]);
const [input, setInput] = useState('');
const [selectedModel, setSelectedModel] = useState<AIModel>('gpt-4');
const [isLoading, setIsLoading] = useState(false);
const [sessionId] = useState(() => crypto.randomUUID()); // collision-safe, unlike Math.random()
const messagesEndRef = useRef<HTMLDivElement>(null);
useEffect(() => {
const unsubscribe = subscribeToMessages(sessionId, setMessages);
return () => unsubscribe();
}, [sessionId]);
useEffect(() => {
messagesEndRef.current?.scrollIntoView({ behavior: 'smooth' });
}, [messages]);
const sendMessage = async () => {
if (!input.trim() || isLoading) return;
const userMessage: Omit<Message, 'id' | 'timestamp'> = {
content: input,
role: 'user',
aiModel: selectedModel,
sessionId
};
await saveMessage(userMessage);
setInput('');
setIsLoading(true);
try {
const response = await AIService.sendMessage(selectedModel, input);
const aiMessage: Omit<Message, 'id' | 'timestamp'> = {
content: response.content,
role: 'assistant',
aiModel: selectedModel,
sessionId
};
await saveMessage(aiMessage);
} catch (error) {
console.error('Error sending message:', error);
} finally {
setIsLoading(false);
}
};
return (
<div className="flex flex-col h-screen bg-gradient-to-br from-slate-900 via-purple-900 to-slate-900">
{/* Header */}
<motion.header
initial={{ y: -50, opacity: 0 }}
animate={{ y: 0, opacity: 1 }}
className="bg-black/20 backdrop-blur-lg border-b border-white/10 p-4"
>
<div className="max-w-4xl mx-auto flex items-center justify-between">
<div className="flex items-center space-x-3">
<Sparkles className="w-8 h-8 text-purple-400" />
<h1 className="text-2xl font-bold text-white">Multi-AI Chat</h1>
</div>
<select
value={selectedModel}
onChange={(e) => setSelectedModel(e.target.value as AIModel)}
className="bg-white/10 backdrop-blur-lg text-white px-4 py-2 rounded-lg border border-white/20 focus:outline-none focus:ring-2 focus:ring-purple-400"
>
{AI_MODELS.map((model) => (
<option key={model.value} value={model.value} className="bg-slate-800">
{model.icon} {model.label}
</option>
))}
</select>
</div>
</motion.header>
{/* Messages */}
<div className="flex-1 overflow-y-auto p-4">
<div className="max-w-4xl mx-auto space-y-4">
<AnimatePresence>
{messages.map((message, index) => (
<motion.div
key={message.id || index}
initial={{ opacity: 0, y: 20 }}
animate={{ opacity: 1, y: 0 }}
exit={{ opacity: 0, y: -20 }}
className={`flex ${message.role === 'user' ? 'justify-end' : 'justify-start'}`}
>
<div className={`flex items-start space-x-3 max-w-3xl ${message.role === 'user' ? 'flex-row-reverse space-x-reverse' : ''}`}>
<div className={`w-8 h-8 rounded-full flex items-center justify-center ${
message.role === 'user' ? 'bg-purple-600' : 'bg-slate-700'
}`}>
{message.role === 'user' ? (
<User className="w-4 h-4 text-white" />
) : (
<Bot className="w-4 h-4 text-white" />
)}
</div>
<div className={`px-4 py-3 rounded-2xl ${
message.role === 'user'
? 'bg-purple-600 text-white'
: 'bg-white/10 backdrop-blur-lg text-white border border-white/20'
}`}>
<p className="whitespace-pre-wrap">{message.content}</p>
{message.role === 'assistant' && (
<p className="text-xs text-gray-400 mt-2">
{AI_MODELS.find(m => m.value === message.aiModel)?.label}
</p>
)}
</div>
</div>
</motion.div>
))}
</AnimatePresence>
{isLoading && (
<motion.div
initial={{ opacity: 0 }}
animate={{ opacity: 1 }}
className="flex justify-start"
>
<div className="bg-white/10 backdrop-blur-lg rounded-2xl px-4 py-3 border border-white/20">
<div className="flex space-x-2">
<div className="w-2 h-2 bg-purple-400 rounded-full animate-bounce"></div>
<div className="w-2 h-2 bg-purple-400 rounded-full animate-bounce" style={{ animationDelay: '0.1s' }}></div>
<div className="w-2 h-2 bg-purple-400 rounded-full animate-bounce" style={{ animationDelay: '0.2s' }}></div>
</div>
</div>
</motion.div>
)}
<div ref={messagesEndRef} />
</div>
</div>
{/* Input */}
<motion.div
initial={{ y: 50, opacity: 0 }}
animate={{ y: 0, opacity: 1 }}
className="bg-black/20 backdrop-blur-lg border-t border-white/10 p-4"
>
<div className="max-w-4xl mx-auto">
<div className="flex space-x-4">
<input
type="text"
value={input}
onChange={(e) => setInput(e.target.value)}
onKeyDown={(e) => e.key === 'Enter' && sendMessage()}
placeholder="Type your message..."
className="flex-1 bg-white/10 backdrop-blur-lg text-white placeholder-gray-400 px-4 py-3 rounded-lg border border-white/20 focus:outline-none focus:ring-2 focus:ring-purple-400"
disabled={isLoading}
/>
<motion.button
whileHover={{ scale: 1.05 }}
whileTap={{ scale: 0.95 }}
onClick={sendMessage}
disabled={isLoading || !input.trim()}
className="bg-purple-600 hover:bg-purple-700 disabled:bg-gray-600 text-white px-6 py-3 rounded-lg transition-colors"
>
<Send className="w-5 h-5" />
</motion.button>
</div>
</div>
</motion.div>
</div>
);
}
Main App Layout
// app/page.tsx
import ChatInterface from '@/components/ChatInterface';
export default function Home() {
return (
<main className="min-h-screen">
<ChatInterface />
</main>
);
}
🗄️ Database Alternatives (Free Options)
1. Firebase Firestore (Recommended)
Free Tier: 50K reads, 20K writes per day
Real-time: Built-in real-time listeners
Setup: Easiest to implement
2. Supabase
// Alternative setup for Supabase
import { createClient } from '@supabase/supabase-js';
const supabase = createClient(
process.env.NEXT_PUBLIC_SUPABASE_URL!,
process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);
export const saveMessage = async (message: Message) => {
const { error } = await supabase
.from('messages')
.insert([message]);
if (error) console.error('Error:', error);
};
3. PlanetScale (MySQL)
Free Tier: previously 1 database with free storage (PlanetScale's free plan has changed over time; check current pricing)
Edge Functions: Serverless MySQL
Global: Low latency worldwide
🚀 Step 5: Deployment
Vercel Deployment
# Install Vercel CLI
npm i -g vercel
# Deploy
vercel
# Add environment variables in Vercel dashboard
# - All your API keys
# - Firebase configuration
Environment Variables Checklist
✅ OpenAI API Key
✅ Anthropic API Key
✅ xAI API Key
✅ Google API Key
✅ Firebase Configuration
🎯 Advanced Features & Optimizations
Performance Optimizations
// lib/ai-cache.ts
// Simple in-memory cache. Note: in serverless deployments (e.g. Vercel),
// each instance keeps its own Map, so hit rates will be lower than locally.
const cache = new Map<string, { response: string; timestamp: number }>();
const CACHE_DURATION = 5 * 60 * 1000; // 5 minutes
export const getCachedResponse = (key: string) => {
const cached = cache.get(key);
if (cached && Date.now() - cached.timestamp < CACHE_DURATION) {
return cached.response;
}
return null;
};
export const setCachedResponse = (key: string, response: string) => {
cache.set(key, { response, timestamp: Date.now() });
};
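The cache above needs a key, and a reasonable one combines the model with a normalized prompt so trivially different inputs ("Hi" vs. "hi ") hit the same entry. A sketch (`makeCacheKey` is an illustrative name, not part of the code above):

```typescript
// Build a cache key from the model and a normalized prompt.
export function makeCacheKey(model: string, message: string): string {
  // Lowercase and collapse whitespace so near-identical prompts share a key.
  const normalized = message.trim().toLowerCase().replace(/\s+/g, ' ');
  return `${model}:${normalized}`;
}
```

Usage in the route: `const key = makeCacheKey(model, message);` then check `getCachedResponse(key)` before calling the provider, and `setCachedResponse(key, response)` afterward.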
Error Handling & Retry Logic
// lib/retry-logic.ts
export const withRetry = async <T>(
fn: () => Promise<T>,
maxRetries = 3,
delay = 1000
): Promise<T> => {
for (let i = 0; i < maxRetries; i++) {
try {
return await fn();
} catch (error) {
if (i === maxRetries - 1) throw error;
await new Promise(resolve => setTimeout(resolve, delay * Math.pow(2, i)));
}
}
throw new Error('Max retries exceeded');
};
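To see the exponential backoff in action, the helper can be exercised against a stub that fails twice before succeeding (`withRetry` is reproduced here with a short delay so the snippet runs standalone):

```typescript
const withRetry = async <T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  delay = 10
): Promise<T> => {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await fn();
    } catch (error) {
      if (i === maxRetries - 1) throw error;
      await new Promise((resolve) => setTimeout(resolve, delay * Math.pow(2, i)));
    }
  }
  throw new Error('Max retries exceeded');
};

// Stub that fails twice, then succeeds on the third attempt.
let attempts = 0;
const flaky = async (): Promise<string> => {
  attempts++;
  if (attempts < 3) throw new Error(`attempt ${attempts} failed`);
  return 'ok';
};

withRetry(flaky).then((result) => {
  console.log(result, 'after', attempts, 'attempts'); // logs: ok after 3 attempts
});
```

In the app, you would wrap the provider calls, e.g. `withRetry(() => AIService.sendMessage(model, message))`.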
💡 Pro Tips & Best Practices
1. API Key Security
Never expose API keys in client-side code
Use server-side API routes for all AI calls
Implement rate limiting to prevent abuse
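The rate-limiting point deserves code. Below is a minimal in-memory sliding-window limiter (a sketch; `RateLimiter` is an illustrative name, and this only works per server instance — in serverless deployments, where instances don't share memory, back it with Redis or a hosted limiter instead):

```typescript
// Allow `limit` requests per `windowMs` per client key (e.g. IP address).
export class RateLimiter {
  private hits = new Map<string, number[]>();

  constructor(
    private limit: number,
    private windowMs: number,
    private now: () => number = Date.now // injectable clock for testing
  ) {}

  allow(key: string): boolean {
    const cutoff = this.now() - this.windowMs;
    // Keep only hits inside the current window.
    const recent = (this.hits.get(key) ?? []).filter((t) => t > cutoff);
    if (recent.length >= this.limit) {
      this.hits.set(key, recent);
      return false;
    }
    recent.push(this.now());
    this.hits.set(key, recent);
    return true;
  }
}
```

In the route handler: `if (!limiter.allow(ip)) return NextResponse.json({ error: 'Too many requests' }, { status: 429 });`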
2. Cost Management
Implement token usage tracking
Set maximum token limits per request
Cache similar responses
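Token tracking can start as a simple in-memory accumulator keyed by model, fed from the `usage` field the providers return (a sketch; `UsageTracker` is an illustrative name, and in production you would persist these numbers to your database):

```typescript
interface Usage {
  promptTokens: number;
  completionTokens: number;
}

// Accumulates token usage per model for cost reporting.
export class UsageTracker {
  private totals = new Map<string, Usage>();

  record(model: string, usage: Usage): void {
    const t = this.totals.get(model) ?? { promptTokens: 0, completionTokens: 0 };
    t.promptTokens += usage.promptTokens;
    t.completionTokens += usage.completionTokens;
    this.totals.set(model, t);
  }

  totalTokens(model: string): number {
    const t = this.totals.get(model);
    return t ? t.promptTokens + t.completionTokens : 0;
  }
}
```
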
3. User Experience
Add typing indicators
Implement message status (sending, sent, failed)
Add copy/share functionality
4. Performance
Use React.memo for message components
Implement virtual scrolling for long conversations
Optimize image/asset loading
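Virtual scrolling ultimately boils down to computing which slice of messages is on screen. Here's a pure sketch of that window math, assuming fixed-height rows (`visibleRange` is an illustrative helper, not tied to any particular library):

```typescript
// Given scroll position and a fixed row height, return the index range
// [start, end) of rows to render, padded by `overscan` rows on each side.
export function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  rowHeight: number,
  total: number,
  overscan = 3
): { start: number; end: number } {
  const first = Math.floor(scrollTop / rowHeight);
  const count = Math.ceil(viewportHeight / rowHeight);
  const start = Math.max(0, first - overscan);
  const end = Math.min(total, first + count + overscan);
  return { start, end };
}
```

The component then renders only `messages.slice(start, end)` inside a container whose height is `total * rowHeight`, with the slice offset by `start * rowHeight`.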
🚀 What's Next?
Now you have a fully functional multi-AI chat platform! Here are some ideas to extend it:
User Authentication: Add user accounts and conversation history
File Uploads: Support image/document analysis
Voice Input: Add speech-to-text functionality
AI Model Comparison: Side-by-side responses
Custom Prompts: Pre-built prompt templates
Export Conversations: PDF/markdown export
💰 Support This Content
If this tutorial helped you build something amazing, consider buying me a coffee ☕! Your support helps me create more in-depth tutorials like this.
What You Get by Supporting:
🚀 More advanced tutorials
📦 Source code access
💬 Direct support in comments
🎯 Tutorial requests priority
📚 Resources & Links
📦 GitHub Repository
🌐 Live Demo
🔥 Firebase Setup Guide
Happy coding! 🎉
Built with ❤️ using Next.js 15, TypeScript, and the power of multiple AI models.