Generated: October 21, 2025
Analyst: AI Code Reviewer
This is a multi-technology chatbot platform combining Laravel (PHP backend), Python (Streamlit UI), LangChain (AI framework), Pinecone (vector database), and OpenAI (LLM). The system allows creation of multiple chatbots with custom training data, deployable via an embed widget.
Key dependency versions:
- langchain-openai: 0.3.18
- langchain-pinecone: 0.2.6
- langchain-community: 0.3.24
- langchain-core: 0.3.63

┌─────────────────────────────────────────────────────────────┐
│ ADMIN INTERFACE │
│ (Filament @ lumi-public/index.php) │
│ │
│ • Create/Edit Bots (slug, prompts, namespace) │
│ • Upload Training Documents (files/URLs) │
│ • Manage bot configurations │
└────────────────────┬────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ LARAVEL BACKEND │
│ (lumi-backend/) │
│ │
│ Models: │
│ • Bot (name, slug, role_prompt, system_prompt_template, │
│ pinecone_namespace) │
│ • TrainingDocument (bot_id, type, source, status) │
│ │
│ API Routes (/api): │
│ • GET /bots/{slug} - Fetch bot configuration │
│ • GET /bots - List all bots (route exists but no impl) │
│ │
│ Storage: │
│ • SQLite/MySQL database │
└────────────────────┬────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ PYTHON CHATBOT SERVICE │
│ (lumi-backend/chatbot/app.py) │
│ Run via: run_chatbot_fixed.sh │
│ │
│ 1. Fetch bot config from Laravel API │
│ 2. Initialize Pinecone with bot's namespace │
│ 3. Accept user messages via Streamlit UI │
│ 4. Retrieve relevant context from Pinecone │
│ 5. Build prompt with system template + context │
│ 6. Call OpenAI GPT-4 for response │
│ 7. Display response in Streamlit chat │
└────────────────────┬────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ VECTOR DATABASE │
│ (Pinecone) │
│ │
│ • Stores embeddings per bot (via namespace) │
│ • Each namespace = one bot's knowledge base │
│ • Embeddings created via text-embedding-3-small │
│ • Similarity search for context retrieval │
└─────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│ EMBED WIDGET │
│ (chatbot/public/chatbot-embed.js) │
│ │
│ • Lightweight JavaScript widget │
│ • Creates floating chat button │
│ • Opens iframe pointing to Streamlit app with bot slug │
│ • Customizable position, color, size │
└─────────────────────────────────────────────────────────────┘
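The similarity-search step in the Vector Database box can be sketched in plain Python (illustrative only — the real system performs this search server-side in Pinecone; `cosine_similarity` and `top_k` are hypothetical helpers):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec: list[float], stored: list[tuple[str, list[float]]], k: int = 3) -> list[str]:
    """Return the ids of the k stored vectors most similar to query_vec."""
    ranked = sorted(stored, key=lambda item: cosine_similarity(query_vec, item[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]
```

Pinecone does the equivalent ranking inside the index, scoped to the bot's namespace, so only the query embedding crosses the wire.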
The embed widget opens the Streamlit app with ?bot=<slug>; the app then calls /api/bots/{slug} to get the bot config.

Manual Process (Current):
- Admin uploads training documents via Filament
- Documents stored in Laravel database with status: pending
- Manual step: Admin must run manage_documents.py CLI script to upload to Pinecone
- Script chunks text (1000 chars, 200 overlap) and creates embeddings
- Embeddings uploaded to Pinecone namespace
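The chunking step above (1000-character chunks, 200-character overlap) can be sketched as follows — a simplified stand-in for the CLI script's logic; the real script also embeds each chunk before upload:

```python
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 200) -> list[str]:
    """Split text into fixed-size chunks, with each chunk overlapping its neighbour."""
    chunks = []
    start = 0
    step = chunk_size - overlap  # advance by 800 chars per chunk with defaults
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # final chunk already reached the end of the text
        start += step
    return chunks
```

The overlap keeps sentences that straddle a chunk boundary retrievable from both sides.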
Intended Automated Process (Incomplete):
- TrainDocumentJob.php_ (note: disabled with underscore suffix) was meant to:
- Automatically process uploaded documents
- Call Python script from Laravel queue
- Update document status to trained or error
- Status: Not currently active/working
Files: lumi-backend/app/Jobs/TrainDocumentJob.php_, scripts/train_document.py
Recommendation: Remove entirely OR implement properly (see Section 5)
lumi-backend/scripts/ directory:
- Contains only __pycache__; the job references base_path('scripts/train_document.py'), which doesn't exist
- Recommendation: Remove directory or create proper training script
lumi-backend/chatbot/passenger_wsgi.py:
- Unused (the app is started via streamlit run)
- Recommendation: Remove if not using Passenger WSGI, OR implement properly for Streamlit
lumi-backend/chatbot/public/serve-embed.py
Recommendation: Safe to remove in production; useful for local testing
lumi-backend/chatbot/public/examples.html (likely exists but not read)
Recommendation: Keep for documentation, remove for production
lumi-backend/chatbot/public/test.html
Recommendation: Remove in production deployment
lumi-backend/test (empty file in root)
Route: Route::get('/bots', [BotController::class, 'index']);
- File: routes/api.php:24
- Issue: BotController has no index() method
- Result: Would cause 500 error if called
- Recommendation: Remove route OR implement index() method
training_documents table:
- pinecone_id column (nullable, never populated)
- pinecone_metadata column (JSON, nullable, never populated)
- Recommendation: Remove in a migration if not planning to use, OR implement usage
Recommendation: Remove dependency if not planning auth, OR implement auth
/api/user route:
- Protected by auth:sanctum, but Sanctum is not configured
- Recommendation: Remove
Welcome Blade View:
- resources/views/welcome.blade.php
- Recommendation: Remove
Frontend Assets:
- resources/js/app.js, resources/js/bootstrap.js, resources/css/app.css

From requirements.txt:
- pytest, pytest-asyncio, pytest-socket, syrupy: Testing frameworks (no tests are present)
- langchain-tests: Testing utilities for LangChain
- SQLAlchemy: Database ORM (not used, Laravel handles DB)
- FastAPI, uvicorn (if in full requirements): API framework (Streamlit used instead)
- Recommendation: Remove testing dependencies in production, keep in development requirements
app.py - Streamlit Chatbot

Current Issues:

1. Pydantic Model Rebuild Hack (Lines 6-10)

```python
from langchain.schema import BaseCache
from langchain.callbacks.manager import Callbacks
from langchain_openai import ChatOpenAI

ChatOpenAI.model_rebuild()
```

- Issue: Workaround for Pydantic forward-reference issues
- Simplification: Update to LangChain 0.4+ or use proper imports
```python
llm = ChatOpenAI(model="gpt-4o", temperature=1, cache=None)
```

- Simplification: Add model_name and temperature to the Bot model
Redundant Session Key

```python
session_key = f"messages_{bot_slug}"
```

- Simplification: Use a single key if multi-bot sessions are not needed
Mixing SystemMessage Types
Simplified app.py Structure:

```python
# Remove Pydantic hack when upgraded
# Make model/temperature configurable
# Use clearer message structure
# Add error handling for API calls
# Cache bot config to reduce API calls
```
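The last point — caching the bot config — can be sketched framework-agnostically (a minimal, hypothetical `TTLCache` stand-in; in Streamlit itself, `st.cache_data(ttl=...)` plays this role):

```python
import time

class TTLCache:
    """Minimal time-based cache so repeated reruns don't re-fetch the bot config."""

    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (stored_at, value)

    def get(self, key, fetch_fn):
        """Return a cached value, calling fetch_fn only when the entry is missing or stale."""
        now = time.monotonic()
        entry = self._store.get(key)
        if entry is not None and now - entry[0] < self.ttl:
            return entry[1]
        value = fetch_fn()
        self._store[key] = (now, value)
        return value
```

With a 5-minute TTL, config edits in Filament still propagate, while each chat turn stops costing an extra Laravel round trip.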
manage_documents.py

Simplification Opportunities:
Simplification: Create validate_env() helper
Unused Function
- chunk_text() (lines 29-31) is defined but never called
- Simplification: Remove
ASCII Namespace Sanitization
Simplification: Pinecone supports UTF-8, remove or simplify
Argument Handling
- --filter-value and --source serve the same purpose (lines 111, 122)

Simplified Structure:
```python
# Remove unused chunk_text
# Simplify CLI args
# Add batch upload support
# Add progress indicators
# Better error messages
```
BotController.php

Current:

```php
public function show($slug)
{
    $bot = \App\Models\Bot::where('slug', $slug)->firstOrFail();
    return response()->json([
        'name' => $bot->name,
        'role_prompt' => $bot->role_prompt,
        'pinecone_namespace' => $bot->pinecone_namespace,
    ]);
}
```
Issues:
1. Hardcoded response structure (doesn't include system_prompt_template)
2. system_prompt_template is used in app.py (line 45) but missing from the JSON response
3. Fully qualified \App\Models\Bot instead of a use import
Simplified:

```php
use App\Models\Bot;

public function show(string $slug): JsonResponse
{
    $bot = Bot::where('slug', $slug)->firstOrFail();
    return response()->json([
        'name' => $bot->name,
        'role_prompt' => $bot->role_prompt,
        'system_prompt_template' => $bot->system_prompt_template,
        'pinecone_namespace' => $bot->pinecone_namespace,
    ]);
}
```
BotResource.php:
- Line 44: default(Bot::DEFAULT_SYSTEM_PROMPT) - This default is already in the model accessor
- Simplification: Remove redundant default
TrainingDocumentResource.php:
- Lines 37-47: Two separate fields for same column based on type
- Complex reactive logic
- Simplification: Use single polymorphic field or custom field type
run_chatbot_fixed.sh vs run_chatbot.sh:
- Only difference is parameters to streamlit run
- Simplification: Keep one script, use environment variables for parameters
app.py: Missing system_prompt_template in Response

File: lumi-backend/chatbot/app.py:45

Current Code:

```python
system_prompt_template = bot.get("system_prompt_template")
```
Issue:
- API endpoint doesn't return this field!
- Will always be None
- Falls back to hardcoded prompt (lines 98-103)
Impact: Bot's custom system prompt template is ignored!
Fix Required in BotController.php:

```php
return response()->json([
    'name' => $bot->name,
    'role_prompt' => $bot->role_prompt,
    'system_prompt_template' => $bot->system_prompt_template, // ADD THIS
    'pinecone_namespace' => $bot->pinecone_namespace,
]);
```
File: chatbot-embed.js

```javascript
chatUrl: 'http://151.106.62.241:8501', // Line 23
```
File: QUICK-START.md, test.html
http://151.106.62.241:8501
Issues:
- Hardcoded IP address
- HTTP (not HTTPS)
- Won't work if the IP changes

Fix:
- Use an environment variable or relative URL
- Configure via embed init options
- Use HTTPS in production
File: app.py
- No CORS headers
- Accepts requests from any origin
- Fix: Add Streamlit CORS config or use reverse proxy
File: routes/api.php
- All endpoints public
- Anyone can query bot configs
- Fix: Implement API key or token authentication
Impact: Deleting bots permanently removes training data
Fix: Add soft deletes to bots and training_documents
Tables: bots, training_documents
Missing indexes on:
- bots.slug (frequently queried, should be indexed)
- training_documents.bot_id (foreign key, auto-indexed)
- training_documents.status (for filtering)
Fix:

```php
$table->string('slug')->unique()->index();
$table->string('status')->index();
```
app.py - No API Error Handling

Lines 33-36:

```python
resp = requests.get(f"{API_BASE}/bots/{bot_slug}")
if resp.status_code != 200:
    st.error(f"Bot '{bot_slug}' not found (HTTP {resp.status_code}).")
    st.stop()
```
Missing:
- Network error handling (timeout, connection error)
- JSON decode errors
- Pinecone connection errors
- OpenAI API errors
Fix:

```python
try:
    resp = requests.get(f"{API_BASE}/bots/{bot_slug}", timeout=5)
    resp.raise_for_status()
    bot = resp.json()
except requests.RequestException as e:
    st.error(f"Failed to connect to bot service: {e}")
    st.stop()
except ValueError:
    st.error("Invalid response from bot service")
    st.stop()
```
manage_documents.py - Generic Exception Handling

Lines 135-142:
- Catches all exceptions
- Prints traceback
- Not actionable for users

Fix:
- Specific exception types
- User-friendly messages
- Proper exit codes
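That pattern can be sketched as follows (the exception types and exit codes here are hypothetical, not the script's actual error cases):

```python
import sys

def run_cli(main_fn) -> int:
    """Run main_fn, translating exceptions into friendly messages and distinct exit codes."""
    try:
        main_fn()
        return 0
    except FileNotFoundError as e:
        print(f"File not found: {e.filename}", file=sys.stderr)
        return 2
    except KeyError as e:
        print(f"Missing environment variable: {e}", file=sys.stderr)
        return 3
    except ConnectionError as e:
        print(f"Service unreachable: {e}", file=sys.stderr)
        return 4
```

Distinct exit codes make the script usable from shell scripts and CI, where `$?` is the only signal available.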
File: BotController.php:12

```php
$bot = \App\Models\Bot::where('slug', $slug)->firstOrFail();
```
Issue: Not following repository pattern per Laravel rules
Fix:

```php
// Create app/Repositories/BotRepository.php
class BotRepository {
    public function findBySlug(string $slug): Bot {
        return Bot::where('slug', $slug)->firstOrFail();
    }
}

// Inject in controller
public function __construct(private BotRepository $botRepository) {}

public function show(string $slug) {
    $bot = $this->botRepository->findBySlug($slug);
    // ...
}
```
File: BotController.php

Current:

```php
public function show($slug)
```

Should be (PHP 8.2+):

```php
public function show(string $slug): JsonResponse
```
Missing declare(strict_types=1)

All PHP files should start with:

```php
<?php

declare(strict_types=1);

namespace App\...
```

Per PSR-12 and Laravel best practices.
Current: Manual message construction and LLM calls
Better: Use LangChain's ConversationalRetrievalChain

```python
from langchain.chains import ConversationalRetrievalChain

chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=auto_retriever,
    return_source_documents=True,
)
result = chain({"question": user_input, "chat_history": chat_history})
```
Current: Storing full message list in Streamlit session
Better: Use LangChain memory with summarization

```python
from langchain.memory import ConversationSummaryMemory

memory = ConversationSummaryMemory(llm=llm)
```
Issue: Every retrieval creates new embedding of query
Fix: Cache embeddings or use LangChain's built-in caching
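One in-process approach is to memoize the query embedder (a sketch: `embed_fn` stands in for an embedder such as `OpenAIEmbeddings.embed_query`; LangChain's `CacheBackedEmbeddings` is the persistent alternative):

```python
from functools import lru_cache

def make_cached_embedder(embed_fn, maxsize: int = 256):
    """Wrap an embedding function so repeated queries reuse the cached vector."""
    @lru_cache(maxsize=maxsize)
    def cached_embed(query: str) -> tuple:
        # Tuples are hashable and immutable, so results are safe to cache.
        return tuple(embed_fn(query))
    return cached_embed
```

For a chatbot that sees many near-duplicate questions, this removes one paid embedding call per repeated query.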
Given your cPanel deployment and the hybrid PHP/Python stack, here are options from simplest to most robust:
Reasoning:
- Lando is primarily for local development
- Not designed for production deployment
- Doesn't work well with cPanel
- Overkill for this project
Verdict: Skip Lando.
Why This is the Simplest:
- Single docker-compose.yml file
- Works with cPanel if Docker is available
- Easy to manage both PHP and Python
- Can run on any VPS
Structure:

```yaml
# docker-compose.yml
version: '3.8'

services:
  # Laravel Backend
  laravel:
    build:
      context: ./lumi-backend
      dockerfile: Dockerfile.laravel
    ports:
      - "8000:8000"
    volumes:
      - ./lumi-backend:/var/www/html
    environment:
      - DB_CONNECTION=sqlite
      - DB_DATABASE=/var/www/html/database/database.sqlite
    depends_on:
      - chatbot

  # Python Chatbot
  chatbot:
    build:
      context: ./lumi-backend/chatbot
      dockerfile: Dockerfile.chatbot
    ports:
      - "8501:8501"
    environment:
      - PINECONE_API_KEY=${PINECONE_API_KEY}
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - PINECONE_INDEX_NAME=${PINECONE_INDEX_NAME}
      - LARAVEL_API_BASE_URL=http://laravel:8000/api
    volumes:
      - ./lumi-backend/chatbot:/app

  # Optional: Nginx reverse proxy
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    depends_on:
      - laravel
      - chatbot
```
Dockerfiles:

lumi-backend/Dockerfile.laravel:

```dockerfile
FROM php:8.2-fpm

# Install dependencies
RUN apt-get update && apt-get install -y \
    git \
    curl \
    libpng-dev \
    libonig-dev \
    libxml2-dev \
    zip \
    unzip \
    sqlite3 \
    libsqlite3-dev

# Install PHP extensions
RUN docker-php-ext-install pdo pdo_sqlite mbstring exif pcntl bcmath gd

# Install Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer

# Set working directory
WORKDIR /var/www/html

# Copy application
COPY . .

# Install PHP dependencies
RUN composer install --no-dev --optimize-autoloader

# Set permissions
RUN chown -R www-data:www-data /var/www/html

# Expose port
EXPOSE 8000

# Run Laravel
CMD php artisan serve --host=0.0.0.0 --port=8000
```
lumi-backend/chatbot/Dockerfile.chatbot:

```dockerfile
FROM python:3.11-slim

# Set working directory
WORKDIR /app

# Copy requirements
COPY requirements.txt .

# Install dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy application
COPY . .

# Expose Streamlit port
EXPOSE 8501

# Run Streamlit
CMD ["streamlit", "run", "app.py", "--server.port=8501", "--server.address=0.0.0.0", "--server.headless=true"]
```
Pros:
- ✅ Simple to understand
- ✅ Single command: docker-compose up
- ✅ Consistent environments
- ✅ Easy to scale
- ✅ Works on any host with Docker
Cons:
- ❌ cPanel may not support Docker (depends on host)
- ❌ Need a VPS or dedicated server
If you must stay on cPanel without Docker:
Create deployment script:
deploy.sh:

```bash
#!/bin/bash

# Laravel deployment
cd lumi-backend
composer install --no-dev --optimize-autoloader
php artisan migrate --force
php artisan config:cache
php artisan route:cache

# Python deployment
cd chatbot
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

# Restart services
touch tmp/restart.txt  # For Passenger
```
Document environment variables in .env.example
Even simpler - automate deployment via GitHub:
.github/workflows/deploy.yml:

```yaml
name: Deploy to cPanel
on:
  push:
    branches: [ main ]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Deploy to cPanel via FTP
        uses: SamKirkland/FTP-Deploy-Action@4.3.0
        with:
          server: ${{ secrets.FTP_SERVER }}
          username: ${{ secrets.FTP_USERNAME }}
          password: ${{ secrets.FTP_PASSWORD }}
          local-dir: ./lumi-backend/
          server-dir: /public_html/
```
If you have Docker access: → Use Docker Compose (Option 2)
If stuck on cPanel without Docker: → Document deployment process and create deployment scripts (Option 3)
For future: → Consider migrating to a VPS (DigitalOcean, Linode, AWS) for better control
Missing declare(strict_types=1)

Violates: Laravel rules - "Use strict typing"
Files affected:
- app/Models/Bot.php
- app/Models/TrainingDocument.php
- app/Http/Controllers/Api/BotController.php
- All Filament resources
Fix for each file:

```php
<?php

declare(strict_types=1);

namespace App\...
```
Violates: Laravel best practices - "Implement Repository pattern for data access layer"
File: BotController.php

Current:

```php
$bot = \App\Models\Bot::where('slug', $slug)->firstOrFail();
```
Fix:

Create app/Repositories/BotRepository.php:

```php
<?php

declare(strict_types=1);

namespace App\Repositories;

use App\Models\Bot;

class BotRepository
{
    public function findBySlug(string $slug): Bot
    {
        return Bot::where('slug', $slug)->firstOrFail();
    }

    public function all()
    {
        return Bot::all();
    }
}
```
Update BotController.php:

```php
<?php

declare(strict_types=1);

namespace App\Http\Controllers\Api;

use App\Http\Controllers\Controller;
use App\Repositories\BotRepository;
use Illuminate\Http\JsonResponse;

class BotController extends Controller
{
    public function __construct(
        private BotRepository $botRepository
    ) {}

    public function show(string $slug): JsonResponse
    {
        $bot = $this->botRepository->findBySlug($slug);

        return response()->json([
            'name' => $bot->name,
            'role_prompt' => $bot->role_prompt,
            'system_prompt_template' => $bot->system_prompt_template,
            'pinecone_namespace' => $bot->pinecone_namespace,
        ]);
    }

    public function index(): JsonResponse
    {
        $bots = $this->botRepository->all();

        return response()->json($bots);
    }
}
```
Violates: "Use descriptive variable and method names" + strict typing
Examples:
- BotController::show($slug) → should be show(string $slug): JsonResponse
- Model properties should use typed properties (PHP 8.1+)
Fix for Bot model:

```php
<?php

declare(strict_types=1);

namespace App\Models;

use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Database\Eloquent\Model;
use Illuminate\Database\Eloquent\Relations\HasMany;

class Bot extends Model
{
    use HasFactory;

    protected $fillable = [
        'name',
        'slug',
        'role_prompt',
        'system_prompt_template',
        'pinecone_namespace',
    ];

    public const DEFAULT_SYSTEM_PROMPT = "Use the following pieces of retrieved context to answer the question.\n" .
        "If you don't know the answer, just say that you don't know.\n" .
        "Keep responses concise (three sentences max).\n\n" .
        "Context:\n{context}";

    public function getSystemPromptTemplateAttribute(?string $value): string
    {
        return $value ?? self::DEFAULT_SYSTEM_PROMPT;
    }

    public function trainingDocuments(): HasMany
    {
        return $this->hasMany(TrainingDocument::class);
    }
}
```
Violates: "Use Laravel's validation features for form and request validation"
Issue: No validation on API endpoints
Fix: Create Form Request:

```php
<?php

declare(strict_types=1);

namespace App\Http\Requests;

use Illuminate\Foundation\Http\FormRequest;

class ShowBotRequest extends FormRequest
{
    public function authorize(): bool
    {
        return true; // Or implement auth logic
    }

    public function rules(): array
    {
        return [
            'slug' => ['required', 'string', 'exists:bots,slug'],
        ];
    }
}
```
Use in controller:

```php
public function show(ShowBotRequest $request, string $slug): JsonResponse
{
    $bot = $this->botRepository->findBySlug($slug);
    // ...
}
```
Violates: "Implement API versioning for public APIs"
Current: /api/bots/{slug}
Should be: /api/v1/bots/{slug}
Fix in routes/api.php:

```php
Route::prefix('v1')->group(function () {
    Route::get('/bots', [BotController::class, 'index']);
    Route::get('/bots/{slug}', [BotController::class, 'show']);
});
```
Violates: "Implement proper error logging and monitoring"
Fix: Add logging to critical operations:

```php
use Illuminate\Support\Facades\Log;

public function show(string $slug): JsonResponse
{
    try {
        $bot = $this->botRepository->findBySlug($slug);
        Log::info("Bot accessed", ['slug' => $slug]);
        return response()->json([...]);
    } catch (\Exception $e) {
        Log::error("Failed to fetch bot", [
            'slug' => $slug,
            'error' => $e->getMessage()
        ]);
        throw $e;
    }
}
```
Violates: "Implement proper CSRF protection"
Current: API routes don't use CSRF (standard for APIs)
If needed: Add Sanctum token authentication
Violates: "Follow PEP8 with docstrings"
Files: app.py, manage_documents.py
Fix for app.py:

```python
#!/usr/bin/env python3
"""
Lumi Chatbot - Streamlit Application

This module provides the web interface for the Lumi chatbot system.
It integrates with the Laravel backend for bot configuration and uses
LangChain + Pinecone for retrieval-augmented generation.

Environment Variables:
    PINECONE_API_KEY: Pinecone authentication key
    OPENAI_API_KEY: OpenAI API key
    PINECONE_INDEX_NAME: Target Pinecone index
    LARAVEL_API_BASE_URL: Laravel backend URL (default: http://localhost:8000/api)
"""
import os
from typing import Any, Dict, List

import requests
import streamlit as st
from dotenv import load_dotenv

# ...rest of imports
```
Violates: "Use PEP8 and type hints in Python"
Current functions have no type hints
Fix:

```python
from typing import Any, Dict

import requests

def fetch_bot_config(bot_slug: str, api_base: str) -> Dict[str, Any]:
    """
    Fetch bot configuration from the Laravel API.

    Args:
        bot_slug: Unique identifier for the bot
        api_base: Base URL of the Laravel API

    Returns:
        Dictionary containing bot configuration

    Raises:
        requests.RequestException: If the API call fails
        ValueError: If the response is invalid
    """
    resp = requests.get(f"{api_base}/bots/{bot_slug}", timeout=5)
    resp.raise_for_status()
    return resp.json()
```
Violates: "Never expose secrets; use .env files"
Issue: While using os.environ, there's no .env.example in chatbot directory
Fix: Create lumi-backend/chatbot/.env.example:

```
# OpenAI Configuration
OPENAI_API_KEY=sk-...

# Pinecone Configuration
PINECONE_API_KEY=pc-...
PINECONE_INDEX_NAME=lumi-chatbot
PINECONE_ENVIRONMENT=us-east-1-aws

# Laravel Backend
LARAVEL_API_BASE_URL=http://localhost:8000/api
```
Violates: Custom rules - "Keep chain creation abstracted in chains.py"
Issue: All LangChain logic in app.py
Fix: Create modular structure:
lumi-backend/chatbot/langchain_logic/chains.py:

```python
"""LangChain chain configurations."""
from typing import List

from langchain_core.messages import BaseMessage
from langchain_openai import ChatOpenAI

def create_chat_llm(model: str = "gpt-4o", temperature: float = 1.0) -> ChatOpenAI:
    """Create and configure a ChatOpenAI instance."""
    return ChatOpenAI(
        model=model,
        temperature=temperature,
        cache=None,
    )

def generate_response(llm: ChatOpenAI, messages: List[BaseMessage]) -> str:
    """
    Generate a response from the LLM.

    Args:
        llm: Configured ChatOpenAI instance
        messages: List of conversation messages

    Returns:
        Generated response text
    """
    return llm.invoke(messages).content
```
lumi-backend/chatbot/langchain_logic/pinecone_client.py:

```python
"""Pinecone vector store client."""
import os

from langchain_core.vectorstores import VectorStoreRetriever
from langchain_openai import OpenAIEmbeddings
from langchain_pinecone import PineconeVectorStore
from pinecone import Pinecone

def initialize_pinecone_store(namespace: str) -> PineconeVectorStore:
    """
    Initialize a Pinecone vector store for the given namespace.

    Args:
        namespace: Pinecone namespace identifier

    Returns:
        Configured PineconeVectorStore instance
    """
    pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
    index = pc.Index(os.environ["PINECONE_INDEX_NAME"])
    emb = OpenAIEmbeddings(
        model="text-embedding-3-small",
        api_key=os.environ.get("OPENAI_API_KEY"),
    )
    return PineconeVectorStore(
        index=index,
        embedding=emb,
        namespace=namespace,
    )

def create_retriever(
    vector_store: PineconeVectorStore,
    k: int = 3,
    score_threshold: float = 0.5,
) -> VectorStoreRetriever:
    """
    Create a retriever from the vector store.

    Args:
        vector_store: Pinecone vector store instance
        k: Number of documents to retrieve
        score_threshold: Minimum similarity score

    Returns:
        Configured retriever
    """
    return vector_store.as_retriever(
        search_type="similarity_score_threshold",
        search_kwargs={"k": k, "score_threshold": score_threshold},
    )
```
Then simplify app.py:

```python
from langchain_logic.chains import create_chat_llm, generate_response
from langchain_logic.pinecone_client import initialize_pinecone_store, create_retriever
```
Violates: "Log all interactions" (from LangChain rules)
Fix:

```python
import logging

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

# In code:
logger.info(f"Bot '{bot_slug}' loaded successfully")
logger.debug(f"Retrieved {len(docs)} documents from Pinecone")
logger.error(f"Failed to fetch bot config: {e}")
```
Violates: LangChain rules - "Use async methods when available"
Current: Synchronous calls blocking Streamlit
Fix: Use async for API calls and LangChain:

```python
from typing import Dict, List

from langchain_openai import ChatOpenAI

async def afetch_bot_config(bot_slug: str) -> Dict:
    """Async fetch of bot config."""
    # Use aiohttp instead of requests
    pass

async def agenerate_response(llm: ChatOpenAI, messages: List) -> str:
    """Async response generation."""
    result = await llm.ainvoke(messages)
    return result.content
```
Violates: "Keep backend (Laravel), AI logic (LangChain), and UI (Streamlit) modular and independent"
Issue:
- Streamlit directly calls the Laravel API
- No abstraction layer
- Hard to test
Fix: Create API client abstraction:
lumi-backend/chatbot/api/laravel_client.py:

```python
"""Laravel API client."""
import os
from typing import Dict, Optional

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

class LaravelAPIClient:
    """Client for interacting with the Laravel backend."""

    def __init__(self, base_url: Optional[str] = None):
        """
        Initialize the API client.

        Args:
            base_url: Laravel API base URL (default from env)
        """
        self.base_url = base_url or os.environ.get(
            "LARAVEL_API_BASE_URL",
            "http://localhost:8000/api",
        )
        self.session = self._create_session()

    def _create_session(self) -> requests.Session:
        """Create a requests session with retry logic."""
        session = requests.Session()
        retry = Retry(
            total=3,
            backoff_factor=0.3,
            status_forcelist=[500, 502, 503, 504],
        )
        adapter = HTTPAdapter(max_retries=retry)
        session.mount("http://", adapter)
        session.mount("https://", adapter)
        return session

    def get_bot(self, slug: str) -> Dict:
        """
        Fetch bot configuration.

        Args:
            slug: Bot slug identifier

        Returns:
            Bot configuration dictionary

        Raises:
            requests.RequestException: If the request fails
        """
        url = f"{self.base_url}/bots/{slug}"
        response = self.session.get(url, timeout=5)
        response.raise_for_status()
        return response.json()
```
Use in app.py:

```python
from api.laravel_client import LaravelAPIClient

api_client = LaravelAPIClient()
bot = api_client.get_bot(bot_slug)
```
Violates: "Validate Pinecone init on startup" (Pinecone rules)
Fix: Create startup validation:

```python
def validate_environment() -> None:
    """Validate required environment variables and connections."""
    required_vars = [
        "PINECONE_API_KEY",
        "OPENAI_API_KEY",
        "PINECONE_INDEX_NAME",
    ]
    missing = [var for var in required_vars if not os.getenv(var)]
    if missing:
        raise EnvironmentError(
            f"Missing required environment variables: {', '.join(missing)}"
        )

    # Test Pinecone connection
    try:
        pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
        index = pc.Index(os.environ["PINECONE_INDEX_NAME"])
        index.describe_index_stats()
        logger.info("Pinecone connection validated")
    except Exception as e:
        raise ConnectionError(f"Failed to connect to Pinecone: {e}")

# Call on startup
validate_environment()
```
Critical fixes:
- Add system_prompt_template to the BotController response - currently broken
- Add error handling to app.py - will crash on network errors
- Make the chat URL configurable in chatbot-embed.js - won't work when the IP changes
- Fix or remove the /api/bots route - causes a 500 error
- Add declare(strict_types=1) to all PHP files - PSR-12 compliance
- Add .env.example files - documentation

This chatbot system is functionally working but has significant technical debt and architectural issues that should be addressed.
Recommended Next Steps:
1. Fix the critical API bug
2. Implement proper error handling
3. Add type declarations across the codebase
4. Refactor to a modular architecture
5. Containerize with Docker Compose
6. Add authentication and rate limiting
The codebase shows good potential but needs significant refactoring to meet production quality standards and follow Laravel/Python best practices.
End of Report