- Live Demo: Kirimin Messenger
- Subscribe: YouTube Devanka 761
- Join Discord: Codemate Resort
- Private, group, and global chats
- Support for text, image, video, audio, and other documents
- Video Call & Voice Call
- Friend list for private calls
- Chat reply embed
- User profile (username, display name, bio, and profile picture)
- Public posts
- In-App Notifications
- Web Push Notifications
- Extract the archive and change directory to the `chat-app-main` folder
- Open a terminal in the `chat-app-main` folder
- Install all dependencies with NPM: `npm install`
- Copy `.env.example` to `.env`
- Edit `.env` based on your preferences
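A minimal `.env` sketch covering only the keys referenced by the config comments below; the authoritative list lives in `.env.example`, and every value here is a placeholder:

```env
# Google Generative AI (only needed when GEN_AI_FEATURE is true)
GENAI_API_KEY=your-genai-api-key

# OAuth providers (only needed when the matching USE_OAUTH_* flag is true)
GOOGLE_CLIENT_ID=your-google-client-id
GOOGLE_CLIENT_SECRET=your-google-client-secret
GITHUB_CLIENT_ID=your-github-client-id
GITHUB_CLIENT_SECRET=your-github-client-secret
DISCORD_CLIENT_ID=your-discord-client-id
DISCORD_CLIENT_SECRET=your-discord-client-secret

# Discord webhook logging (only needed when `webhook` is true)
DISCORD_BOT_TOKEN=your-discord-bot-token
```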
`src/config/public.config.json`:

```jsonc
{
  // Enable the AI Chat feature (powered by Google Generative AI)
  // If true, edit GENAI_API_KEY inside `.env`
  "GEN_AI_FEATURE": false,
  // AI model to use (only applies when GEN_AI_FEATURE is true)
  // See all models: https://ai.google.dev/gemini-api/docs/models
  "AI_MODEL": "gemini-2.5-pro",
  // Enable the Google OAuth login method
  // If true, edit GOOGLE_CLIENT_ID and GOOGLE_CLIENT_SECRET inside `.env`
  "USE_OAUTH_GOOGLE": true,
  // Enable the GitHub OAuth login method
  // If true, edit GITHUB_CLIENT_ID and GITHUB_CLIENT_SECRET inside `.env`
  "USE_OAUTH_GITHUB": false,
  // Enable the Discord OAuth login method
  // If true, edit DISCORD_CLIENT_ID and DISCORD_CLIENT_SECRET inside `.env`
  "USE_OAUTH_DISCORD": false,
  // Latest stable save version for users' localStorage data.
  // If a stored save is outdated, it is destroyed and a new one is generated.
  "SAVE_VERSION": "Kirimin20250726"
}
```
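For illustration only (this is not the app's actual code), a version check like the one `SAVE_VERSION` describes could work roughly as follows, using a stand-in for the browser's `localStorage`:

```javascript
const publicConfig = { SAVE_VERSION: "Kirimin20250726" };

// Minimal stand-in for the browser's localStorage.
const localStorage = {
  store: {},
  getItem(k) { return this.store[k] ?? null; },
  setItem(k, v) { this.store[k] = String(v); },
  removeItem(k) { delete this.store[k]; },
};

function syncSave() {
  if (localStorage.getItem("saveVersion") !== publicConfig.SAVE_VERSION) {
    localStorage.removeItem("save");                  // destroy the outdated save
    localStorage.setItem("save", JSON.stringify({})); // generate a fresh one
    localStorage.setItem("saveVersion", publicConfig.SAVE_VERSION);
  }
}

localStorage.setItem("saveVersion", "Kirimin20240101"); // outdated version
localStorage.setItem("save", JSON.stringify({ theme: "dark" }));
syncSave();
console.log(localStorage.getItem("saveVersion")); // "Kirimin20250726"
```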
`src/config/server.config.json`:

```jsonc
{
  // Send webhook logs about the website to Discord
  // If true: (1) edit DISCORD_BOT_TOKEN inside `.env`, and
  // (2) set up `src/config/discord.config.json`
  "webhook": false,
  // Bump the app version and force users to reload the page after a server restart
  "update": false,
  // Generation config
  "GenAIConfig": {
    // Controls the degree of randomness in token selection
    "temperature": 1,
    // Tokens are selected from most to least probable until the sum of their probabilities equals this value
    "topP": 0.95,
    // At each step, the `topK` most probable tokens are sampled, then filtered by `topP`, with the final token selected using temperature sampling
    "topK": 50,
    // Maximum number of tokens that can be generated in the response
    "maxOutputTokens": 65536,
    // Instructions that steer the model toward better performance
    "systemInstruction": "You are a friendly and helpful assistant. Ensure your answers are complete, unless the user requests a more concise approach."
  }
}
```
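As a rough sketch of how these values map onto a Google Generative AI call (assuming the `@google/generative-ai` SDK; the server's actual wiring may differ), the `systemInstruction` is passed separately while the rest form the generation config:

```javascript
// Values copied from server.config.json above.
const serverConfig = {
  GenAIConfig: {
    temperature: 1,
    topP: 0.95,
    topK: 50,
    maxOutputTokens: 65536,
    systemInstruction: "You are a friendly and helpful assistant.",
  },
};

// Split the system instruction from the sampling parameters.
const { systemInstruction, ...generationConfig } = serverConfig.GenAIConfig;

// With the SDK this would look roughly like:
// const { GoogleGenerativeAI } = require("@google/generative-ai");
// const genAI = new GoogleGenerativeAI(process.env.GENAI_API_KEY);
// const model = genAI.getGenerativeModel({
//   model: "gemini-2.5-pro", // AI_MODEL from public.config.json
//   systemInstruction,
//   generationConfig,        // temperature, topP, topK, maxOutputTokens
// });

console.log(generationConfig);
```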
`src/config/discord.config.json`:

```jsonc
{
  // If `webhook` is true, fill these with channel IDs from your Discord server
  // Monitors how the AI answers user input
  "AI_LEARN": "00000000000000",
  // Monitors user activity (online/offline)
  "USER_LOG": "00000000000000"
}
```
Edit `src/config/peer.config.json` with your RTCConfiguration; see the example in `src/config/peer.example.config.json`.
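For reference, a typical RTCConfiguration lists the ICE servers the peers use; the servers below are placeholders (swap in your own STUN/TURN infrastructure, and consult `src/config/peer.example.config.json` for the exact format this app expects):

```json
{
  "iceServers": [
    { "urls": "stun:stun.l.google.com:19302" },
    {
      "urls": "turn:turn.example.com:3478",
      "username": "your-turn-username",
      "credential": "your-turn-password"
    }
  ]
}
```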
Open two terminals (or one terminal with two tabs):

- Watch the client build: `npm run dev:build`
- Watch the server start: `npm run dev:start`
Build and start for production:

```sh
npm run build
npm run start
```
```sh
pm2 start npm --name "my-chat-app" -- start && pm2 restart "my-chat-app" --max-memory-restart 8G
```
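If you prefer a config file over the one-liner, PM2 also accepts an ecosystem file; a sketch with the same app name and memory limit (adjust to taste) might look like:

```javascript
// ecosystem.config.js — start with `pm2 start ecosystem.config.js`
module.exports = {
  apps: [
    {
      name: "my-chat-app",
      script: "npm",
      args: "start",
      max_memory_restart: "8G",
    },
  ],
};
```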
> **Tip:** Units can be `K` (kilobytes), `M` (megabytes), or `G` (gigabytes).