Frontend Structured Logging Implementation Guide

This guide explains how to implement structured logging for production in the PRS Frontend application, providing better error tracking, user-action monitoring, and performance insights.

Overview

Structured logging in the frontend provides:

  1. Centralized Logging: Consistent logging interface across the application
  2. Production Monitoring: Real-time error tracking and performance monitoring
  3. User Analytics: Track user interactions and workflows
  4. Debug Information: Detailed context for troubleshooting
  5. Performance Insights: Monitor API response times and component performance

Current State vs. Target State

Current State

  • ✅ Basic error handling with react-error-boundary
  • ✅ API error formatting in apiClient.js
  • ✅ Console.error logging for API errors
  • ✅ Toast notifications for user feedback
  • ❌ No structured logging system
  • ❌ No production log aggregation
  • ❌ No centralized error tracking

Target State

  • ✅ Structured logging service with consistent format
  • ✅ Production log transport to external services
  • ✅ Enhanced error tracking with context
  • ✅ User action and performance monitoring
  • ✅ Environment-based logging configuration

Implementation Plan

Phase 1: Core Logging Infrastructure

1.1 Logger Service (src/lib/logger/index.js)

JavaScript
import { LOG_LEVELS, LOG_CATEGORIES } from './constants';
import { formatLog, formatError } from './formatters';
import { getTransports } from './transports';

class LoggerService {
  constructor() {
    this.transports = getTransports();
    this.context = this.getGlobalContext();
  }

  generateSessionId() {
    return `sess_${Date.now()}_${Math.random().toString(36).slice(2, 11)}`;
  }

  getCurrentUserId() {
    // Wire this to your auth store; defaults to 'anonymous' until a user logs in
    return 'anonymous';
  }

  getGlobalContext() {
    return {
      sessionId: this.generateSessionId(),
      userId: this.getCurrentUserId(),
      userAgent: navigator.userAgent,
      url: window.location.href,
      timestamp: new Date().toISOString(),
      environment: import.meta.env.MODE,
    };
  }

  log(level, message, data = {}, category = LOG_CATEGORIES.GENERAL) {
    const logEntry = formatLog({
      level,
      message,
      category,
      data,
      context: { ...this.context, ...data.context },
    });

    // Update Prometheus metrics
    if (typeof window !== 'undefined' && window.prometheusMetrics) {
      window.prometheusMetrics.incrementCounter('frontend_logs_total', {
        level,
        category,
      });
    }

    this.transports.forEach(transport => {
      if (transport.shouldLog(level)) {
        transport.log(logEntry);
      }
    });
  }

  debug(message, data = {}) {
    this.log(LOG_LEVELS.DEBUG, message, data, LOG_CATEGORIES.DEBUG);
  }

  info(message, data = {}) {
    this.log(LOG_LEVELS.INFO, message, data, LOG_CATEGORIES.INFO);
  }

  warn(message, data = {}) {
    this.log(LOG_LEVELS.WARN, message, data, LOG_CATEGORIES.WARNING);
  }

  error(message, error = null, data = {}) {
    const errorData = error ? formatError(error) : {};
    this.log(LOG_LEVELS.ERROR, message, { ...data, error: errorData }, LOG_CATEGORIES.ERROR);
  }

  // Specialized logging methods
  logUserAction(action, data = {}) {
    this.log(LOG_LEVELS.INFO, `User action: ${action}`, data, LOG_CATEGORIES.USER_ACTION);
  }

  logApiCall(method, url, duration, status, data = {}) {
    this.log(LOG_LEVELS.INFO, `API ${method} ${url}`, {
      ...data,
      api: { method, url, duration, status }
    }, LOG_CATEGORIES.API);

    // Update Prometheus metrics
    if (typeof window !== 'undefined' && window.prometheusMetrics) {
      window.prometheusMetrics.incrementCounter('frontend_api_requests_total', {
        method: method.toUpperCase(),
        status: status.toString(),
        endpoint: this.sanitizeUrl(url),
      });

      window.prometheusMetrics.observeHistogram('frontend_api_duration_seconds', duration / 1000, {
        method: method.toUpperCase(),
        endpoint: this.sanitizeUrl(url),
      });
    }
  }

  sanitizeUrl(url) {
    // Remove IDs and sensitive data from URLs for metrics
    return url.replace(/\/\d+/g, '/:id').replace(/\?.*/, '');
  }

  logPerformance(metric, value, data = {}) {
    this.log(LOG_LEVELS.INFO, `Performance: ${metric}`, {
      ...data,
      performance: { metric, value, timestamp: Date.now() }
    }, LOG_CATEGORIES.PERFORMANCE);
  }

  logNavigation(from, to, data = {}) {
    this.log(LOG_LEVELS.INFO, `Navigation: ${from} -> ${to}`, {
      ...data,
      navigation: { from, to }
    }, LOG_CATEGORIES.NAVIGATION);
  }
}

export const logger = new LoggerService();
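The sanitizeUrl method above keeps metric cardinality bounded by collapsing numeric path segments into a placeholder and dropping query strings. Its behaviour can be checked as a standalone function:

```javascript
// Same logic as LoggerService.sanitizeUrl, as a free function
const sanitizeUrl = (url) =>
  url.replace(/\/\d+/g, '/:id').replace(/\?.*/, '');

sanitizeUrl('/api/requisitions/123/items/45?page=2'); // '/api/requisitions/:id/items/:id'
```

Without this step, every distinct requisition ID would create a new Prometheus label value, which quickly exhausts the server's series limit.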

1.2 Constants (src/lib/logger/constants.js)

JavaScript
export const LOG_LEVELS = {
  DEBUG: 'debug',
  INFO: 'info',
  WARN: 'warn',
  ERROR: 'error',
};

export const LOG_CATEGORIES = {
  GENERAL: 'general',
  ERROR: 'error',
  API: 'api',
  USER_ACTION: 'user_action',
  PERFORMANCE: 'performance',
  NAVIGATION: 'navigation',
  AUTHENTICATION: 'authentication',
  BUSINESS_LOGIC: 'business_logic',
  DEBUG: 'debug',
  WARNING: 'warning',
  INFO: 'info',
};

export const LOG_PRIORITIES = {
  [LOG_LEVELS.DEBUG]: 0,
  [LOG_LEVELS.INFO]: 1,
  [LOG_LEVELS.WARN]: 2,
  [LOG_LEVELS.ERROR]: 3,
};

export const SENSITIVE_FIELDS = [
  'password',
  'token',
  'secret',
  'key',
  'authorization',
  'cookie',
  'session',
];
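Because the priorities are plain integers, level filtering reduces to a single comparison. This standalone sketch mirrors the check each transport performs:

```javascript
const LOG_PRIORITIES = { debug: 0, info: 1, warn: 2, error: 3 };

// A transport logs an entry only when its level meets the transport's minimum
const shouldLog = (level, minLevel) =>
  LOG_PRIORITIES[level] >= LOG_PRIORITIES[minLevel];

shouldLog('error', 'warn'); // true  — errors always pass
shouldLog('info', 'warn');  // false — info is filtered out at a warn threshold
```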

1.3 Formatters (src/lib/logger/formatters.js)

JavaScript
import { SENSITIVE_FIELDS } from './constants';

export const formatLog = ({ level, message, category, data, context }) => {
  return {
    level,
    message,
    category,
    timestamp: new Date().toISOString(),
    context: sanitizeData(context),
    data: sanitizeData(data),
    correlationId: generateCorrelationId(),
  };
};

export const formatError = (error) => {
  if (!error) return null;

  return {
    name: error.name,
    message: error.message,
    stack: error.stack,
    cause: error.cause,
    // React-specific error properties
    componentStack: error.componentStack,
    errorBoundary: error.errorBoundary,
  };
};

export const formatApiError = (error, requestConfig) => {
  const baseError = formatError(error);

  return {
    ...baseError,
    request: {
      method: requestConfig?.method,
      url: requestConfig?.url,
      headers: sanitizeData(requestConfig?.headers),
    },
    response: {
      status: error.response?.status,
      statusText: error.response?.statusText,
      data: sanitizeData(error.response?.data),
    },
    network: {
      timeout: error.code === 'ECONNABORTED',
      offline: !navigator.onLine,
    },
  };
};

const sanitizeData = (data) => {
  if (!data || typeof data !== 'object') return data;

  const sanitized = { ...data };

  SENSITIVE_FIELDS.forEach(field => {
    if (sanitized[field]) {
      sanitized[field] = '[REDACTED]';
    }
  });

  return sanitized;
};

const generateCorrelationId = () => {
  return `${Date.now()}-${Math.random().toString(36).slice(2, 11)}`;
};
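Note that sanitizeData above only redacts top-level keys. If request or response bodies nest credentials deeper, a recursive variant is safer; this is a sketch, not part of the original formatters.js:

```javascript
const SENSITIVE_FIELDS = ['password', 'token', 'secret', 'key', 'authorization', 'cookie', 'session'];

// Walks objects and arrays, redacting any key that matches the blocklist
const deepSanitize = (value) => {
  if (Array.isArray(value)) return value.map(deepSanitize);
  if (value && typeof value === 'object') {
    return Object.fromEntries(
      Object.entries(value).map(([key, val]) =>
        SENSITIVE_FIELDS.includes(key.toLowerCase())
          ? [key, '[REDACTED]']
          : [key, deepSanitize(val)]
      )
    );
  }
  return value;
};
```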

Phase 2: Transport Layer

2.1 Transports (src/lib/logger/transports.js)

JavaScript
import { LOG_LEVELS, LOG_PRIORITIES } from './constants';
import { env } from '@config/env';

class ConsoleTransport {
  constructor(minLevel = LOG_LEVELS.DEBUG) {
    this.minLevel = minLevel;
  }

  shouldLog(level) {
    return LOG_PRIORITIES[level] >= LOG_PRIORITIES[this.minLevel];
  }

  log(logEntry) {
    const { level, message, data } = logEntry;

    switch (level) {
      case LOG_LEVELS.ERROR:
        console.error(message, data);
        break;
      case LOG_LEVELS.WARN:
        console.warn(message, data);
        break;
      case LOG_LEVELS.INFO:
        console.info(message, data);
        break;
      default:
        console.log(message, data);
    }
  }
}

class HttpTransport {
  constructor(endpoint, options = {}) {
    this.endpoint = endpoint;
    this.options = {
      batchSize: 10,
      flushInterval: 5000,
      maxRetries: 3,
      ...options,
    };
    this.buffer = [];
    this.setupBatching();
  }

  shouldLog(level) {
    // Only send INFO and above to external services in production
    return LOG_PRIORITIES[level] >= LOG_PRIORITIES[LOG_LEVELS.INFO];
  }

  log(logEntry) {
    this.buffer.push(logEntry);

    if (this.buffer.length >= this.options.batchSize) {
      this.flush();
    }
  }

  setupBatching() {
    setInterval(() => {
      if (this.buffer.length > 0) {
        this.flush();
      }
    }, this.options.flushInterval);

    // Flush on page unload
    window.addEventListener('beforeunload', () => {
      this.flush(true);
    });
  }

  async flush(sync = false) {
    if (this.buffer.length === 0) return;

    const logs = [...this.buffer];
    this.buffer = [];

    const payload = {
      logs,
      metadata: {
        userAgent: navigator.userAgent,
        url: window.location.href,
        timestamp: new Date().toISOString(),
      },
    };

    try {
      if (sync) {
        // sendBeacon survives page unload; wrap the payload in a Blob so it is
        // sent as application/json rather than text/plain
        navigator.sendBeacon(
          this.endpoint,
          new Blob([JSON.stringify(payload)], { type: 'application/json' })
        );
      } else {
        await fetch(this.endpoint, {
          method: 'POST',
          headers: {
            'Content-Type': 'application/json',
          },
          body: JSON.stringify(payload),
        });
      }
    } catch (error) {
      console.error('Failed to send logs:', error);
      // Re-add logs to buffer for retry
      this.buffer.unshift(...logs);
    }
  }
}

class LocalStorageTransport {
  constructor(maxEntries = 100) {
    this.maxEntries = maxEntries;
    this.storageKey = 'prs_frontend_logs';
  }

  shouldLog(level) {
    return LOG_PRIORITIES[level] >= LOG_PRIORITIES[LOG_LEVELS.WARN];
  }

  log(logEntry) {
    try {
      const existingLogs = this.getLogs();
      const updatedLogs = [logEntry, ...existingLogs].slice(0, this.maxEntries);

      localStorage.setItem(this.storageKey, JSON.stringify(updatedLogs));
    } catch (error) {
      console.error('Failed to store log in localStorage:', error);
    }
  }

  getLogs() {
    try {
      const logs = localStorage.getItem(this.storageKey);
      return logs ? JSON.parse(logs) : [];
    } catch (error) {
      return [];
    }
  }

  clearLogs() {
    localStorage.removeItem(this.storageKey);
  }
}

export const getTransports = () => {
  const transports = [];

  // Always include console transport
  const consoleLevel = env.NODE_ENV === 'production' ? LOG_LEVELS.WARN : LOG_LEVELS.DEBUG;
  transports.push(new ConsoleTransport(consoleLevel));

  // Add Loki transport for production (the LokiTransport class is defined in
  // the Loki Integration section below and must be imported here)
  if (env.NODE_ENV === 'production' && env.LOKI_ENDPOINT) {
    transports.push(new LokiTransport(env.LOKI_ENDPOINT, {
      labels: {
        service: 'prs-frontend',
        environment: env.NODE_ENV,
        version: env.APP_VERSION || '1.0.0',
      },
    }));
  }

  // Add localStorage transport for offline scenarios
  transports.push(new LocalStorageTransport());

  return transports;
};
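The size-triggered flush in HttpTransport can be verified in isolation. This minimal sketch injects a fake sender in place of fetch, so it exercises only the buffering logic:

```javascript
// Test double for the batching behaviour: entries accumulate until
// batchSize is reached, then go out as one batch
class BatchBuffer {
  constructor(batchSize, send) {
    this.batchSize = batchSize;
    this.send = send;
    this.buffer = [];
  }

  log(entry) {
    this.buffer.push(entry);
    if (this.buffer.length >= this.batchSize) this.flush();
  }

  flush() {
    if (this.buffer.length === 0) return;
    this.send(this.buffer.splice(0, this.buffer.length));
  }
}

const batches = [];
const buffer = new BatchBuffer(3, logs => batches.push(logs));
buffer.log({ msg: 'a' });
buffer.log({ msg: 'b' }); // still buffered — nothing sent yet
buffer.log({ msg: 'c' }); // third entry reaches batchSize and flushes
```

Injecting the sender also makes the real transport easier to unit-test, since no network stubbing is required.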

Phase 3: Enhanced Error Tracking

3.1 Enhanced API Client (src/lib/apiClient.js - Updates)

JavaScript
// Add to existing apiClient.js
import { logger } from '@lib/logger';

// Update the response interceptor
api.interceptors.response.use(
  response => {
    // Log successful API calls
    const duration = Date.now() - response.config.metadata?.startTime;
    logger.logApiCall(
      response.config.method?.toUpperCase(),
      response.config.url,
      duration,
      response.status,
      {
        requestId: response.config.metadata?.requestId,
        responseSize: JSON.stringify(response.data).length,
      }
    );

    if (response.config.responseType === 'blob') {
      return response;
    }

    return response.data;
  },
  async error => {
    // Enhanced error logging with structured data
    const formattedError = formatApiError(error, error.config);
    const duration = Date.now() - error.config?.metadata?.startTime;

    logger.error('API Request Failed', error, {
      api: {
        method: error.config?.method?.toUpperCase(),
        url: error.config?.url,
        duration,
        status: error.response?.status,
        requestId: error.config?.metadata?.requestId,
      },
      errorType: formattedError.type,
      retryCount: error.config?.retryCount || 0,
    });

    // Handle unauthorized errors (401)
    if (formattedError.type === API_ERROR_TYPES.UNAUTHORIZED) {
      logger.logUserAction('session_expired', {
        reason: 'api_unauthorized',
        url: error.config?.url,
      });

      setState({
        token: null,
        type: null,
        refreshToken: null,
        expiredAt: null,
      });
      setUserState({ user: null, otp: null, secret: null, currentRoute: null });
      setPermissionState({ permissions: null });
      sessionStorage.removeItem('timeoutState');
    }

    // Handle network errors with retry logic
    else if (
      formattedError.type === API_ERROR_TYPES.NETWORK_ERROR ||
      formattedError.type === API_ERROR_TYPES.TIMEOUT
    ) {
      const config = error.config;

      if (!config.retryCount) {
        config.retryCount = 0;
      }

      if (config.retryCount < MAX_RETRIES) {
        config.retryCount += 1;
        const delay = getRetryDelay(config.retryCount);

        logger.info(`Retrying API request`, {
          api: {
            method: config.method?.toUpperCase(),
            url: config.url,
            retryCount: config.retryCount,
            maxRetries: MAX_RETRIES,
            delay,
          },
        });

        await new Promise(resolve => setTimeout(resolve, delay));
        return api(config);
      } else {
        logger.error('API request failed after max retries', error, {
          api: {
            method: config.method?.toUpperCase(),
            url: config.url,
            maxRetries: MAX_RETRIES,
          },
        });
      }
    }

    error.formattedError = formattedError;
    return Promise.reject(error);
  },
);

// Add request interceptor for timing and correlation
api.interceptors.request.use(config => {
  const requestId = `req_${Date.now()}_${Math.random().toString(36).slice(2, 11)}`;

  config.metadata = {
    startTime: Date.now(),
    requestId,
  };

  // Add correlation ID header
  config.headers['X-Correlation-ID'] = requestId;

  const { getState } = useTokenStore;
  const token = getState().token;

  if (!config.headers.has('Authorization') && token) {
    config.headers.Authorization = `Bearer ${token}`;
  }

  if (!config.headers.Accept) {
    config.headers.Accept = 'application/json';
  }

  logger.debug('API Request Started', {
    api: {
      method: config.method?.toUpperCase(),
      url: config.url,
      requestId,
      hasAuth: !!token,
    },
  });

  return config;
});
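getRetryDelay is assumed to come from the existing apiClient.js and is not shown above. A typical exponential-backoff-with-jitter implementation (an illustrative sketch; the base delay and cap are assumed values, not the project's) looks like:

```javascript
const BASE_DELAY_MS = 500;   // assumed first-retry delay
const MAX_DELAY_MS = 10000;  // assumed cap

// Doubles the delay on each retry (500ms, 1s, 2s, ...), adds up to 100ms of
// jitter to avoid synchronized retry storms, and caps the total
const getRetryDelay = (retryCount) => {
  const exponential = BASE_DELAY_MS * 2 ** (retryCount - 1);
  const jitter = Math.random() * 100;
  return Math.min(exponential + jitter, MAX_DELAY_MS);
};
```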

Usage Examples

Basic Logging

JavaScript
import { logger } from '@lib/logger';

// Simple logging
logger.info('User logged in successfully');
logger.error('Failed to load data', error);

// With additional context
logger.logUserAction('form_submitted', {
  formType: 'requisition',
  formId: 'req_123',
  fields: ['title', 'description', 'amount'],
});

// Performance logging
logger.logPerformance('page_load_time', 1250, {
  page: 'dashboard',
  route: '/app/dashboard',
});

Component Integration

JavaScript
import { useLogger } from '@hooks/useLogger';

const MyComponent = () => {
  const { logUserAction, logError } = useLogger('MyComponent');

  const handleSubmit = async (data) => {
    try {
      logUserAction('form_submit_started', { formType: 'requisition' });

      const result = await submitForm(data);

      logUserAction('form_submit_success', {
        formType: 'requisition',
        resultId: result.id
      });
    } catch (error) {
      logError('Form submission failed', error, {
        formType: 'requisition',
        formData: data,
      });
    }
  };

  return (
    <form onSubmit={handleSubmit}>
      {/* form content */}
    </form>
  );
};

Environment Configuration

Update src/config/env.js:

JavaScript
const EnvSchema = z.object({
  API_URL: z.string().default('http://localhost:4000'),
  UPLOAD_URL: z.string().default('http://localhost:4000/upload'),

  // PLG Stack Configuration
  LOKI_ENDPOINT: z.string().optional(),
  PROMETHEUS_GATEWAY: z.string().optional(),
  GRAFANA_URL: z.string().optional(),

  // Logging Configuration
  LOG_LEVEL: z.enum(['debug', 'info', 'warn', 'error']).default('info'),
  LOG_BATCH_SIZE: z.string().transform(Number).default('10'),
  LOG_FLUSH_INTERVAL: z.string().transform(Number).default('5000'),

  // Application Metadata
  APP_VERSION: z.string().default('1.0.0'),
  BUILD_ID: z.string().optional(),

  ENABLE_API_MOCKING: z
    .string()
    .refine(s => s === 'true' || s === 'false')
    .transform(s => s === 'true')
    .optional(),
});
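The schema keys carry no VITE_APP_ prefix, so env.js presumably strips it when collecting values from import.meta.env before validation. A sketch of that mapping (an assumption about the existing file, not its actual code):

```javascript
// Collect VITE_APP_* variables and strip the prefix before schema validation
const collectEnv = (rawEnv) =>
  Object.fromEntries(
    Object.entries(rawEnv)
      .filter(([key]) => key.startsWith('VITE_APP_'))
      .map(([key, value]) => [key.replace('VITE_APP_', ''), value])
  );

collectEnv({ VITE_APP_LOG_LEVEL: 'info', UNRELATED: 'x' }); // { LOG_LEVEL: 'info' }
```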

Production Deployment

Environment Variables

Bash
# .env.production
# PLG Stack Endpoints
VITE_APP_LOKI_ENDPOINT=http://loki:3100
VITE_APP_PROMETHEUS_GATEWAY=http://prometheus-pushgateway:9091
VITE_APP_GRAFANA_URL=http://grafana:3000

# Logging Configuration
VITE_APP_LOG_LEVEL=info
VITE_APP_LOG_BATCH_SIZE=20
VITE_APP_LOG_FLUSH_INTERVAL=10000

# Application Metadata
VITE_APP_APP_VERSION=1.2.0
VITE_APP_BUILD_ID=${CI_COMMIT_SHA}

Docker Compose Integration

Add frontend logging to your PLG stack:

YAML
# docker-compose.yml (excerpt)
version: '3.8'

services:
  # Frontend Application
  frontend:
    build: ./prs-frontend
    environment:
      - VITE_APP_LOKI_ENDPOINT=http://loki:3100
      - VITE_APP_PROMETHEUS_GATEWAY=http://prometheus-pushgateway:9091
      - VITE_APP_LOG_LEVEL=info
    depends_on:
      - loki
      - prometheus-pushgateway
    networks:
      - monitoring

  # Loki for log aggregation
  loki:
    image: grafana/loki:2.9.0
    ports:
      - "3100:3100"
    command: -config.file=/etc/loki/local-config.yaml
    volumes:
      - ./loki-config.yaml:/etc/loki/local-config.yaml
      - loki-data:/loki
    networks:
      - monitoring

  # Prometheus for metrics
  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus-data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/etc/prometheus/console_libraries'
      - '--web.console.templates=/etc/prometheus/consoles'
    networks:
      - monitoring

  # Prometheus Push Gateway for frontend metrics
  prometheus-pushgateway:
    image: prom/pushgateway:latest
    ports:
      - "9091:9091"
    networks:
      - monitoring

  # Grafana for visualization
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
    volumes:
      - grafana-data:/var/lib/grafana
      - ./grafana/provisioning:/etc/grafana/provisioning
    networks:
      - monitoring

volumes:
  loki-data:
  prometheus-data:
  grafana-data:

networks:
  monitoring:
    driver: bridge

PLG Stack Integration (Prometheus, Loki, Grafana)

The PRS system uses the PLG stack for comprehensive observability:

  1. Prometheus - Metrics collection and alerting
  2. Loki - Log aggregation and querying
  3. Grafana - Visualization and dashboards

Loki Integration

Frontend logs are sent to Loki via HTTP API with proper labels for efficient querying:

JavaScript
// Loki-specific transport configuration
class LokiTransport {
  constructor(lokiEndpoint, options = {}) {
    this.lokiEndpoint = lokiEndpoint;
    this.options = {
      batchSize: 10,
      flushInterval: 5000,
      maxRetries: 3,
      labels: {
        service: 'prs-frontend',
        environment: import.meta.env.MODE,
        ...options.labels,
      },
      ...options,
    };
    this.buffer = [];
    this.setupBatching();
  }

  shouldLog(level) {
    // Same policy as HttpTransport: only ship INFO and above
    const priorities = { debug: 0, info: 1, warn: 2, error: 3 };
    return priorities[level] >= priorities.info;
  }

  setupBatching() {
    setInterval(() => this.flush(), this.options.flushInterval);
    window.addEventListener('beforeunload', () => this.flush(true));
  }

  log(logEntry) {
    // Format for Loki
    const lokiEntry = this.formatForLoki(logEntry);
    this.buffer.push(lokiEntry);

    if (this.buffer.length >= this.options.batchSize) {
      this.flush();
    }
  }

  formatForLoki(logEntry) {
    const timestamp = new Date(logEntry.timestamp).getTime() * 1000000; // nanoseconds

    return {
      stream: {
        ...this.options.labels,
        level: logEntry.level,
        category: logEntry.category,
        component: logEntry.data?.component || 'unknown',
        route: logEntry.context?.route || 'unknown',
      },
      values: [[timestamp.toString(), JSON.stringify(logEntry)]],
    };
  }

  async flush(sync = false) {
    if (this.buffer.length === 0) return;

    const streams = this.groupByStream(this.buffer);
    this.buffer = [];

    const payload = { streams };

    try {
      if (sync) {
        // Wrap in a Blob so sendBeacon posts application/json, not text/plain
        navigator.sendBeacon(
          `${this.lokiEndpoint}/loki/api/v1/push`,
          new Blob([JSON.stringify(payload)], { type: 'application/json' })
        );
      } else {
        await fetch(`${this.lokiEndpoint}/loki/api/v1/push`, {
          method: 'POST',
          headers: {
            'Content-Type': 'application/json',
          },
          body: JSON.stringify(payload),
        });
      }
    } catch (error) {
      console.error('Failed to send logs to Loki:', error);
      this.buffer.unshift(...this.ungroupStreams(streams));
    }
  }

  groupByStream(entries) {
    const streamMap = new Map();

    entries.forEach(entry => {
      const streamKey = JSON.stringify(entry.stream);
      if (!streamMap.has(streamKey)) {
        streamMap.set(streamKey, {
          stream: entry.stream,
          values: [],
        });
      }
      streamMap.get(streamKey).values.push(...entry.values);
    });

    return Array.from(streamMap.values());
  }

  ungroupStreams(streams) {
    // Inverse of groupByStream: used to requeue entries after a failed push
    return streams.flatMap(({ stream, values }) =>
      values.map(value => ({ stream, values: [value] }))
    );
  }
}
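groupByStream is what keeps the Loki payload compact: entries sharing an identical label set collapse into a single stream. The behaviour can be checked with a standalone copy of the function:

```javascript
// Same grouping logic as LokiTransport.groupByStream, as a free function
const groupByStream = (entries) => {
  const streamMap = new Map();
  entries.forEach(entry => {
    const key = JSON.stringify(entry.stream);
    if (!streamMap.has(key)) {
      streamMap.set(key, { stream: entry.stream, values: [] });
    }
    streamMap.get(key).values.push(...entry.values);
  });
  return Array.from(streamMap.values());
};

const grouped = groupByStream([
  { stream: { level: 'error' }, values: [['1000', '{"msg":"a"}']] },
  { stream: { level: 'error' }, values: [['2000', '{"msg":"b"}']] },
  { stream: { level: 'info' },  values: [['3000', '{"msg":"c"}']] },
]);
// Two streams: the error stream carries both of its values
```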

Prometheus Metrics Integration

Export frontend metrics to Prometheus for monitoring:

JavaScript
// src/lib/logger/metrics.js
class PrometheusMetrics {
  constructor() {
    this.metrics = new Map();
    this.setupMetrics();
  }

  setupMetrics() {
    // Counter for different log levels
    this.registerCounter('frontend_logs_total', 'Total number of frontend logs', ['level', 'category']);

    // Counter for API calls
    this.registerCounter('frontend_api_requests_total', 'Total API requests', ['method', 'status', 'endpoint']);

    // Histogram for API response times
    this.registerHistogram('frontend_api_duration_seconds', 'API request duration', ['method', 'endpoint']);

    // Counter for user actions
    this.registerCounter('frontend_user_actions_total', 'Total user actions', ['action', 'component']);

    // Counter for errors
    this.registerCounter('frontend_errors_total', 'Total frontend errors', ['type', 'component']);

    // Gauge for performance metrics
    this.registerGauge('frontend_performance_score', 'Performance scores', ['metric']);
  }

  registerCounter(name, help, labels = []) {
    this.metrics.set(name, {
      type: 'counter',
      help,
      labels,
      values: new Map(),
    });
  }

  registerHistogram(name, help, labels = []) {
    this.metrics.set(name, {
      type: 'histogram',
      help,
      labels,
      buckets: [0.1, 0.25, 0.5, 1, 2.5, 5, 10],
      values: new Map(),
    });
  }

  registerGauge(name, help, labels = []) {
    this.metrics.set(name, {
      type: 'gauge',
      help,
      labels,
      values: new Map(),
    });
  }

  incrementCounter(name, labels = {}, value = 1) {
    const metric = this.metrics.get(name);
    if (!metric || metric.type !== 'counter') return;

    const labelKey = this.getLabelKey(labels);
    const current = metric.values.get(labelKey) || 0;
    metric.values.set(labelKey, current + value);
  }

  observeHistogram(name, value, labels = {}) {
    const metric = this.metrics.get(name);
    if (!metric || metric.type !== 'histogram') return;

    const labelKey = this.getLabelKey(labels);
    if (!metric.values.has(labelKey)) {
      metric.values.set(labelKey, {
        count: 0,
        sum: 0,
        buckets: new Map(metric.buckets.map(b => [b, 0])),
      });
    }

    const histogram = metric.values.get(labelKey);
    histogram.count++;
    histogram.sum += value;

    // Update buckets
    metric.buckets.forEach(bucket => {
      if (value <= bucket) {
        histogram.buckets.set(bucket, histogram.buckets.get(bucket) + 1);
      }
    });
  }

  setGauge(name, value, labels = {}) {
    const metric = this.metrics.get(name);
    if (!metric || metric.type !== 'gauge') return;

    const labelKey = this.getLabelKey(labels);
    metric.values.set(labelKey, value);
  }

  getLabelKey(labels) {
    return Object.keys(labels)
      .sort()
      .map(key => `${key}="${labels[key]}"`)
      .join(',');
  }

  // Export metrics in Prometheus format
  exportMetrics() {
    let output = '';

    this.metrics.forEach((metric, name) => {
      output += `# HELP ${name} ${metric.help}\n`;
      output += `# TYPE ${name} ${metric.type}\n`;

      metric.values.forEach((value, labelKey) => {
        const labels = labelKey ? `{${labelKey}}` : '';

        if (metric.type === 'histogram') {
          // Export per-series bucket counts (stored on the value, not the metric)
          value.buckets.forEach((count, bucket) => {
            const bucketLabels = labelKey ? `{${labelKey},le="${bucket}"}` : `{le="${bucket}"}`;
            output += `${name}_bucket${bucketLabels} ${count}\n`;
          });
          const infLabels = labelKey ? `{${labelKey},le="+Inf"}` : '{le="+Inf"}';
          output += `${name}_bucket${infLabels} ${value.count}\n`;
          output += `${name}_count${labels} ${value.count}\n`;
          output += `${name}_sum${labels} ${value.sum}\n`;
        } else {
          output += `${name}${labels} ${value}\n`;
        }
      });

      output += '\n';
    });

    return output;
  }

  // Expose metrics endpoint for Prometheus scraping
  setupMetricsEndpoint() {
    // This would typically be handled by a service worker or backend endpoint
    // For frontend, we'll send metrics via HTTP push
    setInterval(() => {
      this.pushMetrics();
    }, 15000); // Push every 15 seconds
  }

  async pushMetrics() {
    try {
      const metrics = this.exportMetrics();
      await fetch('/api/metrics', {
        method: 'POST',
        headers: {
          'Content-Type': 'text/plain',
        },
        body: metrics,
      });
    } catch (error) {
      console.error('Failed to push metrics:', error);
    }
  }
}

export const prometheusMetrics = new PrometheusMetrics();
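getLabelKey sorts label names before joining, so the same label set always maps to the same time series no matter what order callers build their label objects in:

```javascript
// Same logic as PrometheusMetrics.getLabelKey, as a free function
const getLabelKey = (labels) =>
  Object.keys(labels)
    .sort()
    .map(key => `${key}="${labels[key]}"`)
    .join(',');

getLabelKey({ status: '200', method: 'GET' }); // 'method="GET",status="200"'
```

Without the sort, `{method, status}` and `{status, method}` would create two separate counters for what is logically one series.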

Grafana Dashboard Configuration

Create comprehensive dashboards for frontend monitoring:

JSON
{
  "dashboard": {
    "title": "PRS Frontend Monitoring",
    "panels": [
      {
        "title": "Error Rate",
        "type": "stat",
        "targets": [
          {
            "expr": "rate(frontend_logs_total{level=\"error\"}[5m])",
            "legendFormat": "Error Rate"
          }
        ]
      },
      {
        "title": "API Response Times",
        "type": "graph",
        "targets": [
          {
            "expr": "histogram_quantile(0.95, rate(frontend_api_duration_seconds_bucket[5m]))",
            "legendFormat": "95th percentile"
          },
          {
            "expr": "histogram_quantile(0.50, rate(frontend_api_duration_seconds_bucket[5m]))",
            "legendFormat": "50th percentile"
          }
        ]
      },
      {
        "title": "User Actions",
        "type": "graph",
        "targets": [
          {
            "expr": "rate(frontend_user_actions_total[5m])",
            "legendFormat": "{{action}}"
          }
        ]
      },
      {
        "title": "Recent Errors",
        "type": "logs",
        "targets": [
          {
            "expr": "{service=\"prs-frontend\",level=\"error\"} |= \"\"",
            "refId": "A"
          }
        ]
      }
    ]
  }
}

LogQL Queries for Frontend Monitoring

Essential LogQL queries for monitoring frontend applications:

Text Only
# Error logs in the last hour
{service="prs-frontend",level="error"} | json | line_format "{{.timestamp}} [{{.level}}] {{.message}}"

# API errors by endpoint
{service="prs-frontend",category="api"} | json | __error__ = "" | line_format "{{.data.api.method}} {{.data.api.url}} - {{.data.api.status}}"

# User actions by component
{service="prs-frontend",category="user_action"} | json | line_format "{{.data.component}}: {{.message}}"

# Performance issues (slow API calls)
{service="prs-frontend",category="api"} | json | data_api_duration > 2000

# Authentication failures
{service="prs-frontend",category="authentication"} | json | line_format "Auth failure: {{.message}}"

# Component errors with stack traces
{service="prs-frontend",category="error"} | json | data_component != "" | line_format "{{.data.component}}: {{.message}}"

# Network connectivity issues
{service="prs-frontend"} | json | message =~ ".*network.*|.*offline.*|.*connection.*"

# High-frequency errors (potential issues)
sum by (message) (count_over_time({service="prs-frontend",level="error"}[1h])) > 10

Prometheus Alerting Rules

Configure alerts for critical frontend issues:

YAML
# prometheus-alerts.yml
groups:
  - name: frontend-alerts
    rules:
      - alert: HighFrontendErrorRate
        expr: rate(frontend_logs_total{level="error"}[5m]) > 0.1
        for: 2m
        labels:
          severity: warning
          service: prs-frontend
        annotations:
          summary: "High error rate in frontend application"
          description: "Frontend error rate is {{ $value }} errors per second"

      - alert: SlowAPIResponses
        expr: histogram_quantile(0.95, rate(frontend_api_duration_seconds_bucket[5m])) > 5
        for: 5m
        labels:
          severity: warning
          service: prs-frontend
        annotations:
          summary: "Slow API responses detected"
          description: "95th percentile API response time is {{ $value }} seconds"

      - alert: FrontendDown
        expr: up{job="frontend-metrics"} == 0
        for: 1m
        labels:
          severity: critical
          service: prs-frontend
        annotations:
          summary: "Frontend application is down"
          description: "Frontend metrics endpoint is not responding"

      - alert: HighJavaScriptErrors
        expr: rate(frontend_errors_total{type="javascript"}[5m]) > 0.05
        for: 3m
        labels:
          severity: warning
          service: prs-frontend
        annotations:
          summary: "High JavaScript error rate"
          description: "JavaScript error rate is {{ $value }} errors per second"

Best Practices

  1. Log Levels: Use appropriate log levels (debug for development, info/warn/error for production)
  2. Sensitive Data: Always sanitize sensitive information before logging
  3. Performance: Use batching and sampling to avoid performance impact
  4. Context: Include relevant context (user ID, session ID, correlation IDs)
  5. Structured Data: Use consistent data structures for easier analysis
  6. Error Boundaries: Implement error boundaries with logging
  7. User Privacy: Respect user privacy and comply with data protection regulations
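Point 3 mentions sampling, which the logger above does not implement. A thin probabilistic sampler could sit in front of the transports; this is a sketch with an assumed sampleRate parameter, not part of the files above:

```javascript
// Keeps roughly sampleRate of low-severity logs; never drops warnings or errors
const createSampler = (sampleRate) => (level) =>
  level === 'warn' || level === 'error' || Math.random() < sampleRate;

const shouldSample = createSampler(0.1); // ship ~10% of debug/info logs
shouldSample('error'); // always true
```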

Testing

JavaScript
// Test logging functionality
import { logger } from '@lib/logger';

describe('Logger Service', () => {
  it('should log user actions with correct format', () => {
    const consoleSpy = jest.spyOn(console, 'info');

    logger.logUserAction('test_action', { testData: 'value' });

    expect(consoleSpy).toHaveBeenCalledWith(
      expect.stringContaining('User action: test_action'),
      expect.objectContaining({
        category: 'user_action',
        data: expect.objectContaining({ testData: 'value' }),
      })
    );
  });
});

Monitoring and Alerts

Set up alerts for:

  • High error rates
  • Performance degradation
  • Authentication failures
  • API timeout increases
  • Critical business workflow failures

This structured logging system provides comprehensive visibility into your frontend application's behavior, enabling better debugging, monitoring, and user experience optimization.

Advanced Features

React Hooks Integration

Create a custom hook for easy logging integration:

JavaScript
// src/hooks/useLogger.js
import { useCallback, useEffect, useRef } from 'react';
import { logger } from '@lib/logger';
import { useLocation } from 'react-router-dom';

export const useLogger = (componentName) => {
  const location = useLocation();
  const mountTimeRef = useRef(Date.now());

  useEffect(() => {
    // Log component mount
    logger.debug(`Component mounted: ${componentName}`, {
      component: componentName,
      route: location.pathname,
      mountTime: mountTimeRef.current,
    });

    return () => {
      // Log component unmount with duration
      const duration = Date.now() - mountTimeRef.current;
      logger.debug(`Component unmounted: ${componentName}`, {
        component: componentName,
        route: location.pathname,
        duration,
      });
    };
  }, [componentName, location.pathname]);

  const logUserAction = useCallback((action, data = {}) => {
    logger.logUserAction(action, {
      ...data,
      component: componentName,
      route: location.pathname,
    });
  }, [componentName, location.pathname]);

  const logError = useCallback((message, error, data = {}) => {
    logger.error(message, error, {
      ...data,
      component: componentName,
      route: location.pathname,
    });
  }, [componentName, location.pathname]);

  const logPerformance = useCallback((metric, value, data = {}) => {
    logger.logPerformance(metric, value, {
      ...data,
      component: componentName,
      route: location.pathname,
    });
  }, [componentName, location.pathname]);

  return {
    logUserAction,
    logError,
    logPerformance,
    logger,
  };
};

Higher-Order Component for Automatic Logging

JavaScript
// src/hoc/withLogging.js
import React, { Component } from 'react';
import { logger } from '@lib/logger';

export const withLogging = (WrappedComponent, options = {}) => {
  const {
    logMount = true,
    logUnmount = true,
    logErrors = true,
    logProps = false,
  } = options;

  return class LoggingWrapper extends Component {
    constructor(props) {
      super(props);
      this.componentName = WrappedComponent.displayName || WrappedComponent.name;
      this.mountTime = Date.now();
    }

    componentDidMount() {
      if (logMount) {
        logger.debug(`Component mounted: ${this.componentName}`, {
          component: this.componentName,
          props: logProps ? this.props : undefined,
        });
      }
    }

    componentWillUnmount() {
      if (logUnmount) {
        const duration = Date.now() - this.mountTime;
        logger.debug(`Component unmounted: ${this.componentName}`, {
          component: this.componentName,
          duration,
        });
      }
    }

    componentDidCatch(error, errorInfo) {
      // Logging alone does not recover the tree; pair this with
      // static getDerivedStateFromError to render a fallback UI.
      if (logErrors) {
        logger.error(`Component error: ${this.componentName}`, error, {
          component: this.componentName,
          errorInfo,
          props: this.props,
        });
      }
    }

    render() {
      return <WrappedComponent {...this.props} />;
    }
  };
};

Performance Monitoring

JavaScript
// src/lib/logger/performance.js
import { logger } from './index';

class PerformanceMonitor {
  constructor() {
    this.observers = new Map();
    this.setupObservers();
  }

  setupObservers() {
    // Core Web Vitals monitoring
    if ('PerformanceObserver' in window) {
      this.observeWebVitals();
      this.observeNavigation();
      this.observeResources();
    }
  }

  observeWebVitals() {
    const observer = new PerformanceObserver((list) => {
      list.getEntries().forEach((entry) => {
        // Each entry type reports its measurement differently: LCP in
        // startTime, first input as an input delay, and layout shift as
        // a unitless score in entry.value.
        let metric = entry.entryType;
        let value = entry.value;
        if (entry.entryType === 'largest-contentful-paint') {
          value = entry.startTime;
        } else if (entry.entryType === 'first-input') {
          value = entry.processingStart - entry.startTime;
        } else if (entry.entryType === 'layout-shift') {
          metric = 'cumulative-layout-shift';
        }

        logger.logPerformance('web_vital', value, {
          metric,
          rating: this.getVitalRating(metric, value),
          entryType: entry.entryType,
        });
      });
    });

    observer.observe({ entryTypes: ['largest-contentful-paint', 'first-input', 'layout-shift'] });
    this.observers.set('webVitals', observer);
  }

  observeNavigation() {
    const observer = new PerformanceObserver((list) => {
      list.getEntries().forEach((entry) => {
        logger.logPerformance('navigation', entry.duration, {
          type: entry.type,
          redirectCount: entry.redirectCount,
          domContentLoaded: entry.domContentLoadedEventEnd - entry.domContentLoadedEventStart,
          loadComplete: entry.loadEventEnd - entry.loadEventStart,
        });
      });
    });

    observer.observe({ entryTypes: ['navigation'] });
    this.observers.set('navigation', observer);
  }

  observeResources() {
    const observer = new PerformanceObserver((list) => {
      list.getEntries().forEach((entry) => {
        if (entry.duration > 1000) { // Log slow resources
          logger.logPerformance('slow_resource', entry.duration, {
            name: entry.name,
            type: entry.initiatorType,
            size: entry.transferSize,
          });
        }
      });
    });

    observer.observe({ entryTypes: ['resource'] });
    this.observers.set('resources', observer);
  }

  getVitalRating(metric, value) {
    const thresholds = {
      'largest-contentful-paint': { good: 2500, poor: 4000 },
      'first-input': { good: 100, poor: 300 },
      'cumulative-layout-shift': { good: 0.1, poor: 0.25 },
    };

    const threshold = thresholds[metric];
    if (!threshold) return 'unknown';

    if (value <= threshold.good) return 'good';
    if (value <= threshold.poor) return 'needs-improvement';
    return 'poor';
  }

  disconnect() {
    this.observers.forEach(observer => observer.disconnect());
    this.observers.clear();
  }
}

export const performanceMonitor = new PerformanceMonitor();

Global Error Handler

JavaScript
// src/lib/logger/errorHandler.js
import { logger } from './index';

class GlobalErrorHandler {
  constructor() {
    this.setupErrorHandlers();
  }

  setupErrorHandlers() {
    // Handle unhandled JavaScript errors
    window.addEventListener('error', (event) => {
      logger.error('Unhandled JavaScript error', event.error, {
        filename: event.filename,
        lineno: event.lineno,
        colno: event.colno,
        message: event.message,
        stack: event.error?.stack,
      });
    });

    // Handle unhandled promise rejections
    window.addEventListener('unhandledrejection', (event) => {
      // event.promise is not serializable, so log only the reason
      logger.error('Unhandled promise rejection', event.reason, {
        reason: String(event.reason),
      });
    });

    // Handle network errors
    window.addEventListener('offline', () => {
      logger.warn('Network connection lost', {
        online: navigator.onLine,
        connectionType: navigator.connection?.effectiveType,
      });
    });

    window.addEventListener('online', () => {
      logger.info('Network connection restored', {
        online: navigator.onLine,
        connectionType: navigator.connection?.effectiveType,
      });
    });
  }
}

export const globalErrorHandler = new GlobalErrorHandler();

Security Considerations

Data Privacy and GDPR Compliance

JavaScript
// src/lib/logger/privacy.js
export const PRIVACY_LEVELS = {
  PUBLIC: 'public',
  INTERNAL: 'internal',
  CONFIDENTIAL: 'confidential',
  RESTRICTED: 'restricted',
};

export const sanitizeForPrivacy = (data, privacyLevel = PRIVACY_LEVELS.INTERNAL) => {
  if (!data || typeof data !== 'object') return data;

  const sanitized = { ...data };

  const piiFields = ['email', 'phone', 'address', 'ssn', 'creditCard'];
  const sensitiveFields = ['password', 'token', 'secret', 'key'];

  // Credentials must never be logged, regardless of privacy level
  sensitiveFields.forEach(field => {
    if (sanitized[field] !== undefined) {
      sanitized[field] = '[REDACTED]';
    }
  });

  // PII is additionally redacted when logs may leave the organization
  if (privacyLevel === PRIVACY_LEVELS.PUBLIC) {
    piiFields.forEach(field => {
      if (sanitized[field] !== undefined) {
        sanitized[field] = '[REDACTED]';
      }
    });
  }

  return sanitized;
};
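Note that this sanitizer is shallow: a sensitive key nested inside an object (e.g. `data.user.token`) would still leak. A recursive variant, sketched here over the same field lists (`deepSanitize` is a hypothetical name), closes that gap:

JavaScript
```javascript
// Recursive sanitizer sketch: walks nested objects and arrays so
// sensitive keys are redacted at any depth.
const REDACTED_FIELDS = [
  'password', 'token', 'secret', 'key',
  'email', 'phone', 'address', 'ssn', 'creditCard',
];

const deepSanitize = (value) => {
  if (Array.isArray(value)) return value.map(deepSanitize);
  if (!value || typeof value !== 'object') return value;
  return Object.fromEntries(
    Object.entries(value).map(([k, v]) =>
      REDACTED_FIELDS.includes(k) ? [k, '[REDACTED]'] : [k, deepSanitize(v)]
    )
  );
};

const result = deepSanitize({
  user: { email: 'a@b.c', prefs: { theme: 'dark' } },
  items: [{ token: 't' }],
});
console.log(result.user.email, result.items[0].token); // [REDACTED] [REDACTED]
```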

Rate Limiting and Sampling

JavaScript
// src/lib/logger/rateLimiter.js
class LogRateLimiter {
  constructor() {
    this.buckets = new Map();
    this.windowSize = 60000; // 1 minute
    this.maxLogsPerWindow = 100;
  }

  shouldLog(category, level) {
    const key = `${category}:${level}`;
    const now = Date.now();

    if (!this.buckets.has(key)) {
      this.buckets.set(key, { count: 0, windowStart: now });
    }

    const bucket = this.buckets.get(key);

    // Reset window if expired
    if (now - bucket.windowStart > this.windowSize) {
      bucket.count = 0;
      bucket.windowStart = now;
    }

    // Check if under limit
    if (bucket.count < this.maxLogsPerWindow) {
      bucket.count++;
      return true;
    }

    return false;
  }
}

export const rateLimiter = new LogRateLimiter();
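Rate limiting caps bursts but always passes the first N logs in each window; probabilistic sampling, mentioned in the best practices, is a useful complement for very chatty levels. A minimal sketch (the sampler is hypothetical, not part of the logger service):

JavaScript
```javascript
// Hypothetical per-level log sampler: always keeps warnings and errors,
// and a configurable fraction of lower-severity logs.
class LogSampler {
  constructor(rates = {}) {
    // Default keep-rates per level; callers can override any of them
    this.rates = { debug: 0.01, info: 0.1, warn: 1, error: 1, ...rates };
    this.random = Math.random; // injectable for deterministic tests
  }

  shouldLog(level) {
    const rate = this.rates[level] ?? 1; // unknown levels always pass
    return rate >= 1 || this.random() < rate;
  }
}

const sampler = new LogSampler();
sampler.random = () => 0.5;              // deterministic for illustration
console.log(sampler.shouldLog('error')); // true  (rate 1 always passes)
console.log(sampler.shouldLog('debug')); // false (0.5 >= 0.01)
```

In practice the sampler would run before the rate limiter, so the per-window budget is spent on a representative slice of traffic rather than the first burst.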

Troubleshooting

Common Issues

  1. Logs not appearing in production
     • Check environment variables
     • Verify transport configuration
     • Check network connectivity to the log endpoint
  2. Performance impact
     • Implement log sampling
     • Use batching for the HTTP transport
     • Reduce log verbosity in production
  3. Storage quota exceeded
     • Implement log rotation
     • Reduce localStorage retention
     • Use compression for stored logs
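For the storage-quota issue, a bounded buffer that evicts the oldest entries keeps localStorage usage predictable. The sketch below is a hypothetical wrapper (not the transport shipped with the logger); it takes the storage object as a parameter so it can be exercised with an in-memory stand-in:

JavaScript
```javascript
// Hypothetical bounded log store: keeps at most maxEntries logs under
// one storage key, evicting the oldest entries first.
class BoundedLogStore {
  constructor(storage, { key = 'prs_frontend_logs', maxEntries = 200 } = {}) {
    this.storage = storage; // e.g. window.localStorage, or a stub in tests
    this.key = key;
    this.maxEntries = maxEntries;
  }

  append(entry) {
    const logs = JSON.parse(this.storage.getItem(this.key) || '[]');
    logs.push(entry);
    // Keep only the newest maxEntries entries
    const trimmed = logs.slice(-this.maxEntries);
    this.storage.setItem(this.key, JSON.stringify(trimmed));
    return trimmed.length;
  }
}

// In-memory stand-in for localStorage, for illustration
const memoryStorage = (() => {
  const m = new Map();
  return { getItem: k => m.get(k) ?? null, setItem: (k, v) => m.set(k, v) };
})();

const store = new BoundedLogStore(memoryStorage, { maxEntries: 2 });
store.append({ id: 1 });
store.append({ id: 2 });
store.append({ id: 3 }); // evicts { id: 1 }
```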

Debug Mode

JavaScript
// Enable debug mode
localStorage.setItem('prs_debug_logging', 'true');

// Check debug logs
const debugLogs = JSON.parse(localStorage.getItem('prs_frontend_logs') || '[]');
console.table(debugLogs);

Migration Guide

From Console Logging

JavaScript
// Before
console.log('User clicked button');
console.error('API call failed', error);

// After
import { logger } from '@lib/logger';

logger.logUserAction('button_clicked');
logger.error('API call failed', error);

From Basic Error Handling

JavaScript
// Before
try {
  await apiCall();
} catch (error) {
  toast.error(error.message);
}

// After
try {
  await apiCall();
} catch (error) {
  logger.error('API call failed', error, {
    operation: 'user_data_fetch',
    userId: currentUser.id,
  });
  toast.error(error.message);
}

Integration with PRS Backend Logging

The frontend logging system is designed to complement the existing PRS backend structured logging:

Correlation IDs

Ensure frontend and backend logs can be correlated:

JavaScript
// Enhanced API client with correlation ID propagation
api.interceptors.request.use(config => {
  const correlationId = `${Date.now()}_${Math.random().toString(36).slice(2, 11)}`;

  // Add to headers for backend correlation
  config.headers['X-Correlation-ID'] = correlationId;
  config.headers['X-Request-Source'] = 'prs-frontend';

  // Store for frontend logging
  config.metadata = {
    ...config.metadata,
    correlationId,
    startTime: Date.now(),
  };

  logger.debug('API Request Initiated', {
    correlationId,
    api: {
      method: config.method?.toUpperCase(),
      url: config.url,
    },
  });

  return config;
});
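The log formatter and the end-to-end tracker below both call `generateCorrelationId()`, which this guide does not define. One reasonable sketch (an assumption, not the project's actual helper) prefers the native `crypto.randomUUID()` where available and falls back to the timestamp-plus-random scheme used by the interceptor above:

JavaScript
```javascript
// Sketch of a generateCorrelationId() helper; the UUID branch is
// preferred, the fallback mirrors the interceptor's ad-hoc scheme.
const generateCorrelationId = () => {
  if (typeof crypto !== 'undefined' && typeof crypto.randomUUID === 'function') {
    return crypto.randomUUID();
  }
  return `${Date.now()}_${Math.random().toString(36).slice(2, 11)}`;
};

const id = generateCorrelationId();
console.log(typeof id, id.length > 0); // string true
```

Whatever scheme is used, frontend and backend must agree only that the ID is an opaque string carried in the `X-Correlation-ID` header.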

Unified Log Format

Align frontend logs with backend log structure:

JavaScript
// Enhanced log formatter for backend compatibility
export const formatLog = ({ level, message, category, data, context }) => {
  return {
    // Standard fields matching backend
    timestamp: new Date().toISOString(),
    level: level.toUpperCase(),
    message,
    service: 'prs-frontend',

    // PRS-specific fields
    category,
    correlationId: data.correlationId || generateCorrelationId(),
    userId: context.userId,
    sessionId: context.sessionId,

    // Frontend-specific context
    context: {
      userAgent: navigator.userAgent,
      url: window.location.href,
      route: context.route,
      component: data.component,
      environment: import.meta.env.MODE,
    },

    // Sanitized data
    data: sanitizeData(data),

    // Error details (if applicable)
    error: data.error ? formatError(data.error) : undefined,
  };
};

Cross-Service Monitoring

Monitor end-to-end request flows:

JavaScript
// Track requests across frontend and backend
export const trackEndToEndRequest = (operation, requestData) => {
  const correlationId = generateCorrelationId();
  const startTime = Date.now();

  logger.info(`Starting ${operation}`, {
    correlationId,
    operation,
    requestData: sanitizeData(requestData),
    phase: 'frontend_start',
  });

  return {
    correlationId,
    complete: (result, error = null) => {
      const duration = Date.now() - startTime;

      if (error) {
        logger.error(`${operation} failed`, error, {
          correlationId,
          operation,
          duration,
          phase: 'frontend_complete',
          success: false,
        });
      } else {
        logger.info(`${operation} completed`, {
          correlationId,
          operation,
          duration,
          phase: 'frontend_complete',
          success: true,
          result: sanitizeData(result),
        });
      }
    },
  };
};

// Usage example
const tracker = trackEndToEndRequest('create_requisition', formData);
try {
  const result = await createRequisition(formData);
  tracker.complete(result);
} catch (error) {
  tracker.complete(null, error);
}

Shared Monitoring Dashboards

Create unified dashboards showing both frontend and backend metrics:

JSON
{
  "dashboard": {
    "title": "PRS End-to-End Monitoring",
    "panels": [
      {
        "title": "Request Flow",
        "type": "graph",
        "targets": [
          {
            "expr": "rate(frontend_api_requests_total[5m])",
            "legendFormat": "Frontend Requests"
          },
          {
            "expr": "rate(backend_requests_total[5m])",
            "legendFormat": "Backend Requests"
          }
        ]
      },
      {
        "title": "Error Correlation",
        "type": "logs",
        "targets": [
          {
            "expr": "{service=~\"prs-frontend|prs-backend\",level=\"error\"} | json | line_format \"{{.service}}: {{.message}} ({{.correlationId}})\"",
            "refId": "A"
          }
        ]
      }
    ]
  }
}

This comprehensive guide provides everything needed to implement production-ready structured logging in your frontend application that seamlessly integrates with the existing PRS backend logging infrastructure and PLG stack.