# Troubleshooting Guide

Common issues and solutions for CodeGraph.

## Table of Contents
- Installation Issues
  - CUDA Not Found
  - DuckDB Connection Failed
  - ChromaDB Initialization Failed
  - Import Errors
- LLM Provider Issues
  - GigaChat Authentication Failed
  - Local LLM Out of Memory
  - LLM Response Timeout
- Query Issues
  - No Results Found
  - Slow Query Performance
  - Incorrect Results
- Joern Server Issues
  - Server Won't Start
  - SQL/PGQ Query Timeout
- Memory Issues
  - Out of Memory During Processing
  - High Memory Usage
- Debugging
  - Enable Debug Logging
  - Check Component Status
  - Generate Debug Report
- Getting Help
- Next Steps
## Installation Issues

### CUDA Not Found

**Symptom:**

```
RuntimeError: CUDA not available
```

**Solution:**

```bash
# Check CUDA installation
nvidia-smi
nvcc --version

# Reinstall PyTorch with CUDA support
pip uninstall torch
pip install torch --index-url https://download.pytorch.org/whl/cu118
```
### DuckDB Connection Failed

**Symptom:**

```
duckdb.IOException: Could not open file 'cpg.duckdb'
```

**Solution:**

```bash
# Check the file exists
ls -la cpg.duckdb

# Check permissions
chmod 644 cpg.duckdb

# Check it is not locked by another process
lsof cpg.duckdb  # Linux/macOS
```
### ChromaDB Initialization Failed

**Symptom:**

```
chromadb.errors.ChromaDBError: Collection not found
```

**Solution:**

```bash
# Verify chromadb_storage exists
ls -la chromadb_storage/

# Reinitialize if needed
python scripts/init_vector_store.py
```
### Import Errors

**Symptom:**

```
ModuleNotFoundError: No module named 'src'
```

**Solution:**

```bash
# Ensure you're in the project root
cd /path/to/codegraph

# Add it to PYTHONPATH
export PYTHONPATH="${PYTHONPATH}:$(pwd)"

# Or install the package in development mode
pip install -e .
```
## LLM Provider Issues

### GigaChat Authentication Failed

**Symptom:**

```
401 Unauthorized: Invalid credentials
```

**Solution:**

```bash
# Check the environment variable
echo $GIGACHAT_AUTH_KEY

# Set it if missing
export GIGACHAT_AUTH_KEY="your_key"

# Verify from Python
python -c "import os; print(os.environ.get('GIGACHAT_AUTH_KEY', 'NOT SET'))"
```
### Local LLM Out of Memory

**Symptom:**

```
CUDA out of memory
```

**Solution:**

```yaml
# Reduce model layers offloaded to the GPU in config.yaml
llm:
  n_gpu_layers: 20  # Reduce from -1 (all layers)
  n_ctx: 4096       # Reduce context window
```

Or use a smaller quantization, e.g. Q4_K_M instead of Q5_K_M.
### LLM Response Timeout

**Symptom:**

```
TimeoutError: LLM did not respond within timeout
```

**Solution:**

```yaml
# Increase the timeout in config.yaml
llm:
  timeout: 120  # seconds
  max_retries: 3
```
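If you are calling the LLM from your own scripts, the same `timeout`/`max_retries` policy can be applied in code. Below is a minimal stdlib-only sketch of a retry-with-timeout wrapper (`call_with_retries` is a hypothetical helper, not part of CodeGraph's API):

```python
import time
from concurrent.futures import ThreadPoolExecutor


def call_with_retries(fn, *, timeout=120.0, max_retries=3, backoff=1.0):
    """Run fn() with a per-attempt timeout, retrying up to max_retries times."""
    last_exc = None
    for attempt in range(1, max_retries + 1):
        pool = ThreadPoolExecutor(max_workers=1)
        try:
            # result() raises concurrent.futures.TimeoutError if fn hangs,
            # or re-raises whatever exception fn itself raised.
            return pool.submit(fn).result(timeout=timeout)
        except Exception as exc:
            last_exc = exc
            time.sleep(backoff * attempt)  # simple linear backoff between attempts
        finally:
            pool.shutdown(wait=False)  # don't block on a hung call
    raise TimeoutError(f"no response after {max_retries} attempts") from last_exc
```

A hung attempt's worker thread is abandoned rather than killed (Python cannot forcibly stop threads), so this is a sketch of the policy, not a hard cancellation mechanism.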
## Query Issues

### No Results Found

**Symptom:**

```
No methods found matching query
```

**Solutions:**

1. **Check spelling** - method names are case-sensitive
2. **Use partial match** - try `*Transaction*` instead of `CommitTransaction`
3. **Check the database** - verify the data exists:

```sql
SELECT COUNT(*) FROM nodes_method WHERE full_name LIKE '%Transaction%';
```
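When wiring such a partial match into a query of your own, the `*` wildcard a user types has to become SQL's `%` (and literal `%`/`_` must be escaped). A small sketch of that translation - both helpers are hypothetical; only the `nodes_method` table and `full_name` column come from the query above:

```python
def glob_to_like(pattern: str) -> str:
    """Translate a shell-style pattern (* and ?) into a SQL LIKE pattern."""
    # Escape LIKE's own wildcards first so literal % and _ still match literally.
    escaped = pattern.replace("%", "\\%").replace("_", "\\_")
    return escaped.replace("*", "%").replace("?", "_")


def method_count_query(pattern: str):
    """Build a parameterised count query for a partial method-name lookup."""
    sql = (
        "SELECT COUNT(*) FROM nodes_method "
        "WHERE full_name LIKE ? ESCAPE '\\'"
    )
    return sql, [glob_to_like(pattern)]
```

Passing the pattern as a bound parameter (rather than interpolating it into the SQL string) also avoids injection issues.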
### Slow Query Performance

**Symptom:** Query takes more than 10 seconds.

**Solutions:**

```yaml
# Reduce search scope in config.yaml
retrieval:
  top_k_qa: 3  # Reduce from 10
```

```yaml
# Disable hybrid mode for speed
retrieval:
  hybrid:
    enabled: false
```
### Incorrect Results

**Symptom:** Answers don't match expected results.

**Solutions:**

1. **Refine the question** - be more specific
2. **Check the domain** - ensure the correct domain is set
3. **Verify embeddings** - re-generate them if corrupted:

```bash
python src/cpg_export/add_vector_embeddings.py --force
```
## Joern Server Issues

### Server Won't Start

**Symptom:**

```
Connection refused on port 8080
```

**Solution:**

```powershell
# Check if the port is in use
netstat -ano | findstr :8080

# Kill the existing process if needed
taskkill /F /PID <pid>

# Restart Joern
powershell -ExecutionPolicy Bypass -File scripts/bootstrap_joern.ps1
```
## Memory Issues

### Out of Memory During Processing

**Symptom:**

```
MemoryError: Unable to allocate
```

**Solutions:**

```yaml
# Reduce batch sizes
retrieval:
  batch_size: 25  # Reduce from 100

# Enable incremental processing
processing:
  streaming: true
  chunk_size: 1000
```
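The `streaming`/`chunk_size` idea boils down to iterating the data in fixed-size slices instead of materialising everything at once. A minimal stdlib sketch of that pattern (`chunks` is a hypothetical helper; CodeGraph's own implementation may differ):

```python
from itertools import islice
from typing import Iterable, Iterator, List, TypeVar

T = TypeVar("T")


def chunks(items: Iterable[T], chunk_size: int = 1000) -> Iterator[List[T]]:
    """Yield successive lists of at most chunk_size items.

    Only one chunk is held in memory at a time, so the source iterable
    can be arbitrarily large (e.g. a streamed database cursor).
    """
    it = iter(items)
    while batch := list(islice(it, chunk_size)):
        yield batch


# Usage sketch: process a large result set chunk by chunk.
# for batch in chunks(load_methods(), chunk_size=1000):
#     index(batch)
```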
### High Memory Usage

**Symptom:** System becomes unresponsive.

**Solutions:**

```bash
# Monitor memory usage
watch -n 1 'free -h'

# Clear caches
python -c "from src.optimization.query_cache import QueryCache; QueryCache().clear()"
```

To reduce the vector store's memory footprint, use disk-backed ChromaDB instead of keeping the collections in memory.
## Debugging

### Enable Debug Logging

```yaml
# In config.yaml
logging:
  level: DEBUG
```

Or via the environment:

```bash
export LOG_LEVEL=DEBUG
python examples/demo_simple.py
```
### Check Component Status

```python
# Diagnostic script
from src.services.cpg_query_service import CPGQueryService
from src.retrieval.vector_store_real import VectorStoreReal
from src.llm.llm_interface_compat import get_llm

# Check DuckDB
cpg = CPGQueryService()
print(f"Methods: {cpg.count_methods()}")

# Check the vector store
vs = VectorStoreReal()
print(f"QA docs: {vs.qa_collection.count()}")

# Check the LLM
llm = get_llm()
print(f"LLM: {type(llm).__name__}")
```
### Generate Debug Report

```bash
python -c "
import sys
import platform

print('=== System Info ===')
print(f'Python: {sys.version}')
print(f'Platform: {platform.platform()}')

print('\n=== CUDA ===')
try:
    import torch
    print(f'PyTorch: {torch.__version__}')
    print(f'CUDA available: {torch.cuda.is_available()}')
    if torch.cuda.is_available():
        print(f'CUDA version: {torch.version.cuda}')
        print(f'GPU: {torch.cuda.get_device_name(0)}')
except ImportError:
    print('PyTorch not installed')

print('\n=== Dependencies ===')
import duckdb
print(f'DuckDB: {duckdb.__version__}')
import chromadb
print(f'ChromaDB: {chromadb.__version__}')
"
```
## Getting Help

If issues persist:

- Check the logs in `logs/codegraph.log`
- Search existing issues in the repository
- Create a new issue with:
  - The error message
  - Steps to reproduce
  - Debug report output
  - Your `config.yaml` (sensitive values removed)
## Next Steps

- **Installation** - setup guide
- **Configuration** - config options
- **TUI User Guide** - usage instructions