Performance Monitoring - nself-org/nchat GitHub Wiki
Complete guide to the performance monitoring system in nself-chat.
- Overview
- Web Vitals Tracking
- Custom Metrics
- Performance Hooks
- Admin Dashboard
- API Integration
- Best Practices
The performance monitoring system provides comprehensive tracking of:
- Web Vitals: LCP, CLS, TTFB, FCP, INP (FID deprecated)
- Custom Metrics: API response times, WebSocket latency, render times, memory usage
- Warnings: Automatic detection of performance issues
- Analytics: Statistical analysis and trend detection
- Sentry Integration: Metrics sent to Sentry for long-term tracking
```
┌─────────────────────────────────────────────┐
│            Performance Monitor              │
│  - Web Vitals Collection                    │
│  - Custom Metrics Recording                 │
│  - Warning Detection                        │
│  - LocalStorage Persistence                 │
└─────────────────────────────────────────────┘
                      │
     ┌────────────────┼────────────────┐
     ▼                ▼                ▼
┌────────────┐  ┌────────────┐  ┌─────────────┐
│ React Hooks│  │   Sentry   │  │ Admin Panel │
│ - usePerf..│  │ - Metrics  │  │ - Dashboard │
│ - useRender│  │ - Tracking │  │ - Charts    │
└────────────┘  └────────────┘  └─────────────┘
```
Web Vitals are automatically collected on every page load:

```typescript
// Automatically tracked:
// - LCP (Largest Contentful Paint)
// - CLS (Cumulative Layout Shift)
// - TTFB (Time to First Byte)
// - FCP (First Contentful Paint)
// - INP (Interaction to Next Paint)
// - FID (First Input Delay, deprecated in favor of INP)
```

| Metric | Good | Needs Improvement | Poor |
|---|---|---|---|
| LCP | ≤2500ms | ≤4000ms | >4000ms |
| FID | ≤100ms | ≤300ms | >300ms |
| CLS | ≤0.1 | ≤0.25 | >0.25 |
| TTFB | ≤800ms | ≤1800ms | >1800ms |
| FCP | ≤1800ms | ≤3000ms | >3000ms |
| INP | ≤200ms | ≤500ms | >500ms |
Each metric receives a rating:
- Good: Green, score 90-100
- Needs Improvement: Yellow, score 50-89
- Poor: Red, score 0-49
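As a sketch, the thresholds table above can be expressed as a small rating helper. `rateWebVital` and `THRESHOLDS` are illustrative names for this example, not part of the nself-chat API:

```typescript
type Rating = 'good' | 'needs-improvement' | 'poor';

// [good, needs-improvement] upper bounds per metric, taken from the table above.
// CLS is unitless; everything else is milliseconds.
const THRESHOLDS: Record<string, [number, number]> = {
  LCP: [2500, 4000],
  FID: [100, 300],
  CLS: [0.1, 0.25],
  TTFB: [800, 1800],
  FCP: [1800, 3000],
  INP: [200, 500],
};

function rateWebVital(name: string, value: number): Rating {
  const [good, needsImprovement] = THRESHOLDS[name];
  if (value <= good) return 'good';
  if (value <= needsImprovement) return 'needs-improvement';
  return 'poor';
}
```

A value exactly on a boundary counts as the better rating, matching the ≤ comparisons in the table.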
```typescript
import { performanceMonitor } from '@/lib/performance/monitor'

// Record a custom metric
performanceMonitor.recordCustomMetric({
  name: 'api-response-time',
  value: 250,
  unit: 'ms',
  tags: {
    endpoint: '/api/messages',
    method: 'POST',
  },
})
```

Track API call timings with the `useApiPerformance` hook:

```tsx
import { useApiPerformance } from '@/hooks/use-performance'

function MyComponent() {
  const { recordApiCall } = useApiPerformance()

  const fetchData = async () => {
    const start = performance.now()
    try {
      const response = await fetch('/api/data')
      const duration = performance.now() - start
      recordApiCall('/api/data', duration, response.ok)
    } catch (error) {
      const duration = performance.now() - start
      recordApiCall('/api/data', duration, false)
    }
  }
}
```

Track WebSocket latency and message volume with `useWebSocketPerformance`:

```tsx
import { useWebSocketPerformance } from '@/hooks/use-performance'

function WebSocketComponent() {
  const { recordLatency, recordMessage } = useWebSocketPerformance()

  const sendMessage = (message: string) => {
    const start = performance.now()
    socket.emit('message', message, () => {
      const latency = performance.now() - start
      recordLatency(latency)
    })
    recordMessage('sent', message.length)
  }
}
```

Count component re-renders with `useRenderPerformance`:

```tsx
import { useRenderPerformance } from '@/hooks/use-performance';

function MyComponent() {
  const { renderCount } = useRenderPerformance('MyComponent');
  return <div>Rendered {renderCount} times</div>;
}
```

Memory usage needs no instrumentation:

```typescript
// Automatically tracked every 10 seconds
// No manual code required
```

Main hook for accessing all performance data:
```tsx
import { usePerformance } from '@/hooks/use-performance';

function PerformanceDashboard() {
  const {
    snapshot, // Current performance snapshot
    score, // Performance scores
    metrics, // Web Vitals metrics
    customMetrics, // Custom metrics
    warnings, // Performance warnings
    stats, // Statistical analysis
    trends, // Trend analysis
    refresh, // Manual refresh
    reset, // Reset all data
    clearWarning, // Clear a warning
    clearAllWarnings,
    recordCustomMetric,
  } = usePerformance();

  return (
    <div>
      <h1>Performance Score: {score.overall}</h1>
      <p>LCP: {snapshot.webVitals.lcp}ms</p>
      <p>API Avg: {stats.apiResponseTime.avg}ms</p>
    </div>
  );
}
```

Specialized hook for monitoring warnings:
```tsx
import { usePerformanceWarnings } from '@/hooks/use-performance';

function WarningsBanner() {
  const {
    warnings,
    criticalWarnings,
    activeWarnings,
    clearWarning,
    clearAllWarnings,
  } = usePerformanceWarnings();

  if (activeWarnings.length === 0) return null;

  return (
    <div className="warnings">
      {activeWarnings.map((warning) => (
        <div key={warning.id} className={warning.severity}>
          <p>{warning.message}</p>
          <button onClick={() => clearWarning(warning.id)}>Dismiss</button>
        </div>
      ))}
    </div>
  );
}
```

Hook for time-series data visualization:
```tsx
import { usePerformanceTimeSeries } from '@/hooks/use-performance';

function PerformanceChart() {
  const timeSeries = usePerformanceTimeSeries(
    'api-response-time',
    3600000 // 1 hour
  );

  return (
    <LineChart
      data={timeSeries.map((point) => ({
        time: new Date(point.timestamp),
        value: point.value,
      }))}
    />
  );
}
```

The admin dashboard is available as a drop-in component:

```tsx
import PerformanceMonitor from '@/components/admin/PerformanceMonitor';

export default function AdminPerformancePage() {
  return (
    <div className="admin-layout">
      <PerformanceMonitor />
    </div>
  );
}
```

The dashboard includes:

- Performance Score
  - Overall score (0-100)
  - Category scores (Web Vitals, API, Rendering, Memory, Errors)
  - Circular progress indicators
- Web Vitals Cards
  - Individual cards for each vital
  - Color-coded ratings
  - Threshold indicators
- Custom Metrics
  - Statistical analysis (min, max, avg, median, P75, P95, P99)
  - Trend indicators (improving, stable, degrading)
  - Comparison with previous hour
- Warnings Panel
  - Critical and warning alerts
  - Dismiss individual warnings
  - Clear all warnings
- Recent Activity
  - Web Vitals table
  - Custom metrics table
  - Timestamp tracking
- Export
  - Export to CSV
  - Export to JSON
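The statistical summary shown in the Custom Metrics panel (min, max, avg, median, P75, P95, P99) can be computed along these lines. This is an illustrative sketch using nearest-rank percentiles, not the dashboard's actual implementation:

```typescript
interface MetricStats {
  min: number;
  max: number;
  avg: number;
  median: number;
  p75: number;
  p95: number;
  p99: number;
}

// Nearest-rank percentile over an already-sorted array of samples.
function percentile(sorted: number[], p: number): number {
  const idx = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[Math.max(0, idx)];
}

function summarize(values: number[]): MetricStats {
  const sorted = [...values].sort((a, b) => a - b);
  const sum = sorted.reduce((acc, v) => acc + v, 0);
  return {
    min: sorted[0],
    max: sorted[sorted.length - 1],
    avg: sum / sorted.length,
    median: percentile(sorted, 50),
    p75: percentile(sorted, 75),
    p95: percentile(sorted, 95),
    p99: percentile(sorted, 99),
  };
}
```

P95/P99 matter more than the average for user-facing latency, since they capture the slow tail that averages hide.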
The performance monitor automatically tracks all API calls via `PerformanceObserver`:

```typescript
// No code needed - automatic tracking for:
// - /api/* routes
// - /v1/graphql endpoints
```

For more detailed tracking:
```typescript
import { measurePerformanceAsync } from '@/lib/performance/monitor'

async function fetchData() {
  return measurePerformanceAsync(
    'fetch-messages',
    async () => {
      const response = await fetch('/api/messages')
      return response.json()
    },
    { endpoint: '/api/messages' }
  )
}
```

Record GraphQL query times with an Apollo link:

```typescript
import { ApolloLink } from '@apollo/client'
import { performanceMonitor } from '@/lib/performance/monitor'

const performanceLink = new ApolloLink((operation, forward) => {
  const start = performance.now()
  return forward(operation).map((response) => {
    const duration = performance.now() - start
    performanceMonitor.recordCustomMetric({
      name: 'graphql-query-time',
      value: duration,
      unit: 'ms',
      tags: {
        operation: operation.operationName,
        type: operation.query.definitions[0]?.operation || 'unknown',
      },
    })
    return response
  })
})
```

Memoize expensive components and callbacks:

```tsx
import { memo, useCallback } from 'react';

// Memoize expensive components
const ExpensiveComponent = memo(({ data }) => {
  return <div>{/* ... */}</div>;
});

// Use useCallback for event handlers
function Parent() {
  const handleClick = useCallback(() => {
    // ...
  }, []);
  return <ExpensiveComponent onClick={handleClick} />;
}
```

Lazy-load heavy components:

```tsx
import dynamic from 'next/dynamic';

// Lazy load heavy components
const HeavyComponent = dynamic(() => import('./HeavyComponent'), {
  loading: () => <Spinner />,
  ssr: false,
});
```

Optimize images:

```tsx
import Image from 'next/image';

// Use Next.js Image component
<Image
  src="/hero.jpg"
  width={1200}
  height={600}
  priority // for LCP images
  alt="Hero"
/>
```

Cache API responses:

```tsx
import useSWR from 'swr'

function Messages() {
  // SWR automatically caches responses
  const { data } = useSWR('/api/messages', fetcher, {
    revalidateOnFocus: false,
    dedupingInterval: 2000,
  })
}
```

Debounce expensive operations:

```tsx
import { useDebouncedCallback } from 'use-debounce';

function SearchInput() {
  const debouncedSearch = useDebouncedCallback(
    (value) => {
      // Expensive search operation
      performSearch(value);
    },
    300
  );
  return <input onChange={(e) => debouncedSearch(e.target.value)} />;
}
```

Virtualize long lists:

```tsx
import { useRef } from 'react';
import { useVirtualizer } from '@tanstack/react-virtual';

function LongList({ items }) {
  const parentRef = useRef();
  const virtualizer = useVirtualizer({
    count: items.length,
    getScrollElement: () => parentRef.current,
    estimateSize: () => 50,
  });

  return (
    <div ref={parentRef} style={{ height: '500px', overflow: 'auto' }}>
      <div style={{ height: virtualizer.getTotalSize() }}>
        {virtualizer.getVirtualItems().map((virtualRow) => (
          <div key={virtualRow.index}>{items[virtualRow.index]}</div>
        ))}
      </div>
    </div>
  );
}
```

Clean up subscriptions in effects:

```tsx
import { useEffect } from 'react'

function Component() {
  useEffect(() => {
    const subscription = observable.subscribe(/* ... */)
    // Always clean up!
    return () => {
      subscription.unsubscribe()
    }
  }, [])
}
```

Record render times with React's Profiler:

```tsx
import { Profiler } from 'react';
import { recordRenderTime } from '@/lib/performance/monitor';

function App() {
  return (
    <Profiler
      id="App"
      onRender={(id, phase, actualDuration) => {
        recordRenderTime(id, phase, actualDuration);
      }}
    >
      <YourApp />
    </Profiler>
  );
}
```

Recommended targets for production:
| Metric | Target | Alert Threshold |
|---|---|---|
| LCP | <2.5s | >4s |
| FID | <100ms | >300ms |
| CLS | <0.1 | >0.25 |
| TTFB | <800ms | >1.8s |
| API Response | <500ms | >2s |
| WebSocket Latency | <100ms | >500ms |
| Render Time | <16ms (60fps) | >50ms |
| Memory Usage | <50% | >80% |
| Error Rate | <1% | >5% |
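The budget table above can be encoded as a simple check against target and alert thresholds. `checkBudget` and `BUDGETS` are hypothetical names for illustration, not part of the monitoring API:

```typescript
type BudgetStatus = 'ok' | 'over-target' | 'alert';

interface Budget {
  target: number; // stay under this in production
  alert: number;  // fire an alert past this
}

// Targets and alert thresholds from the table above (ms unless noted; CLS is unitless).
const BUDGETS: Record<string, Budget> = {
  lcp: { target: 2500, alert: 4000 },
  fid: { target: 100, alert: 300 },
  cls: { target: 0.1, alert: 0.25 },
  ttfb: { target: 800, alert: 1800 },
  apiResponse: { target: 500, alert: 2000 },
  wsLatency: { target: 100, alert: 500 },
  renderTime: { target: 16, alert: 50 },
};

function checkBudget(metric: string, value: number): BudgetStatus {
  const budget = BUDGETS[metric];
  if (value > budget.alert) return 'alert';
  if (value > budget.target) return 'over-target';
  return 'ok';
}
```

A check like this can run in CI against Lighthouse output, or at runtime to decide when to surface a warning.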
To improve LCP:

- Optimize images (use Next.js Image)
- Preload critical resources
- Reduce server response time
- Use CDN for static assets

To improve INP:

- Break up long tasks
- Use web workers for heavy computations
- Defer non-critical JavaScript
- Optimize event handlers

To improve CLS:

- Set dimensions on images and videos
- Avoid inserting content above existing content
- Use CSS containment
- Reserve space for ads and embeds

To speed up API responses:

- Add database indexes
- Implement caching (Redis)
- Use connection pooling
- Optimize queries

To avoid memory leaks:

- Clean up event listeners
- Cancel pending requests
- Unsubscribe from observables
- Clear timeouts/intervals
All metrics are automatically sent to Sentry when configured:

```shell
# .env.local
NEXT_PUBLIC_SENTRY_DSN=https://...
```

You can monitor performance via:

- Sentry Dashboard: View trends and alerts
- Admin Panel: Real-time monitoring
- Browser DevTools: Performance tab
- Lighthouse: Audit reports
Configure Sentry alerts for:
- Poor Web Vitals (score < 50)
- High error rate (>5%)
- Slow API responses (>2s)
- Memory warnings
Q: Do metrics impact performance?
A: Minimal impact. The monitor uses passive observers and async operations.

Q: How long are metrics stored?
A: The last 1000 metrics are kept in localStorage; the full history lives in Sentry.

Q: Can I disable monitoring?
A: Yes. Don't call `performanceMonitor.initialize()`, or disable it via the feature flag.

Q: What browsers are supported?
A: All modern browsers. The monitor degrades gracefully on older browsers.

Q: How do I export data?
A: Use the Export buttons in the Admin Panel or call `exportToCSV(metrics)`.
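For reference, a minimal CSV exporter in the spirit of `exportToCSV` might look like the sketch below. The metric row shape and the function body are assumptions for illustration, not the actual implementation:

```typescript
interface MetricRow {
  name: string;
  value: number;
  unit: string;
  timestamp: number; // epoch milliseconds
}

// Quote fields that contain commas, quotes, or newlines (RFC 4180 style).
function csvField(field: string): string {
  return /[",\n]/.test(field) ? `"${field.replace(/"/g, '""')}"` : field;
}

function exportToCSV(metrics: MetricRow[]): string {
  const header = 'name,value,unit,timestamp';
  const rows = metrics.map((m) =>
    [m.name, String(m.value), m.unit, new Date(m.timestamp).toISOString()]
      .map(csvField)
      .join(',')
  );
  return [header, ...rows].join('\n');
}
```

In the browser, the resulting string can be wrapped in a `Blob` and handed to a download link.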