Getting Started with Logging - j-fischer/rflib GitHub Wiki
Before you start logging with RFLIB, ensure you have:
- Installed RFLIB in your org (see Installation Guide)
- Basic understanding of logger levels: TRACE, DEBUG, INFO, WARN, ERROR, FATAL
- Access to the Ops Center for monitoring logs (requires appropriate permissions)
- Step 1: Add logging to your code using the examples below
- Step 2: Configure your Logger Settings to control what gets logged and where
- Step 3: Monitor your logs in the Ops Center Log Monitor
- General Log Level: Controls what messages are cached locally (in memory)
- Log Event Reporting Level: Controls what messages are sent as Platform Events (visible in Ops Center)
- System Debug Log Level: Controls what appears in Salesforce Debug Logs
- Archive Log Level: Controls what gets stored in Big Objects for long-term retention
Example: If you set:
- General Log Level = INFO
- Log Event Reporting Level = ERROR
Then INFO level messages will still be captured and cached locally, but only ERROR and FATAL messages will appear in the Ops Center dashboard. This means when an ERROR occurs, you'll see ALL the cached messages (including INFO) in the log event details, providing full context for debugging.
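The interplay between the two levels can be sketched as follows. This is a conceptual illustration only: the level names match RFLIB, but `log`, `cache`, `generalLevel`, and `reportingLevel` are hypothetical stand-ins, not part of RFLIB's implementation:

```javascript
// Conceptual sketch of General vs. Log Event Reporting levels.
// Names below (log, cache, generalLevel, reportingLevel) are illustrative,
// not part of the RFLIB API.
const LEVELS = ['TRACE', 'DEBUG', 'INFO', 'WARN', 'ERROR', 'FATAL'];
const meets = (level, threshold) =>
    LEVELS.indexOf(level) >= LEVELS.indexOf(threshold);

const generalLevel = 'INFO';    // General Log Level: what gets cached
const reportingLevel = 'ERROR'; // Reporting Level: what publishes an event
const cache = [];

function log(level, message) {
    if (meets(level, generalLevel)) {
        cache.push(`${level}: ${message}`);
    }
    if (meets(level, reportingLevel)) {
        // The published event carries all cached messages as context.
        return { level, message, context: [...cache] };
    }
    return null; // cached (or dropped), but not published
}

log('INFO', 'doSomething() invoked');         // cached only
const event = log('ERROR', 'Callout failed'); // published with full context
```

Note how the published ERROR event's context includes the earlier INFO message, matching the behavior described above.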
Import the logger factory function:

```js
import { createLogger } from 'c/rflibLogger';
```
Then declare a property in your module for the actual logger.
```js
export default class MyModule extends LightningElement {
    logger = createLogger('MyModule');
    ...
}
```
Lastly, use the logger to record log statements.

```js
handleSomeEvent(event) {
    // Note the variable number of arguments
    this.logger.info('Event occurred, {0} - {1}', 'foo', 'bar');

    // Log different severity levels
    this.logger.debug('Detailed debug information');
    this.logger.warn('Something unexpected happened');
    this.logger.error('An error occurred but was handled');
}
```
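The `{0}`-style placeholders are positional. The following is a minimal sketch of how such substitution behaves; it is illustrative only, and the actual `c/rflibLogger` implementation may differ:

```javascript
// Hypothetical stand-in for the positional "{0}" substitution used in
// log messages; not RFLIB source code.
function format(template, args) {
    return template.replace(/\{(\d+)\}/g, (match, index) =>
        index < args.length ? String(args[index]) : match
    );
}

const line = format('Event occurred, {0} - {1}', ['foo', 'bar']);
// line === 'Event occurred, foo - bar'
```

Placeholders without a matching argument are left untouched, which makes mismatched templates easy to spot in the log output.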
Insert the wrapper component into any Lightning Component, preferably at the top of the markup.
```html
<c:rflibLoggerCmp aura:id="logger" name="MyCustomContext" appendComponentId="false" />
```
Then retrieve the logger from your controller or helper code.
```js
({
    doInit: function(component, event, helper) {
        var logger = component.find('logger');

        // Note that the second argument has to be a list
        logger.debug('This is a test > {0}-{1}', ['foo', 'bar']);
    }
})
```
Create a logger in your Apex class using one of the following commands:
```apex
rflib_Logger logger = rflib_LoggerUtil.getFactory().createLogger('MyContext'); // generally used to create a logger
rflib_Logger logger = rflib_LoggerUtil.getFactory().createBatchedLogger('MyContext'); // used to create a logger in specific situations
```
Then call the log functions.
```apex
logger.debug('This is a test -> {0}: {1}', new List<Object> { 'foo', 'bar' });
```
Here is a full example for a controller class.
```apex
public with sharing class MyController {
    // This will log all Log Events immediately after their creation, depending on their settings.
    private static final rflib_Logger LOGGER = rflib_LoggerUtil.getFactory().createLogger('MyController');

    @AuraEnabled
    public static String doSomething() {
        try {
            LOGGER.info('doSomething() invoked');
            // Application logic here
            return 'Result';
        } catch (Exception ex) {
            LOGGER.fatal('DoSomething threw an exception', ex);
            // Rethrow so the client receives a handled error
            throw new AuraHandledException(ex.getMessage());
        }
    }
}
```
One of RFLIB's most powerful features is the ability to pass Exception objects directly to ERROR and FATAL logging methods. This provides comprehensive error context including stack traces, cause chains, and exception details.
✨ Key Benefits:
- Automatic stack trace capture
- Exception cause chain analysis
- Detailed error context preservation
- No manual string conversion needed
```apex
public class OrderService {
    private static final rflib_Logger LOGGER = rflib_LoggerUtil.getFactory().createLogger('OrderService');

    public void processOrder(Order__c order) {
        try {
            // Business logic that might fail
            validateOrder(order);
            calculateTotals(order);
            submitToExternalSystem(order);
        } catch (ValidationException ex) {
            // WARN level - recoverable issue (no Exception object support)
            LOGGER.warn('Order validation failed for Order {0}: {1}', new List<Object>{ order.Id, ex.getMessage() });
            throw ex;
        } catch (CalloutException ex) {
            // ERROR level - integration failure
            LOGGER.error('External system integration failed for Order {0}', new List<Object>{ order.Id }, ex);
            // Could retry or handle gracefully
        } catch (Exception ex) {
            // FATAL level - unexpected critical error
            LOGGER.fatal('Unexpected error processing Order {0}', new List<Object>{ order.Id }, ex);
            throw ex;
        }
    }
}
```

```apex
public class IntegrationService {
    private static final rflib_Logger LOGGER = rflib_LoggerUtil.getFactory().createLogger('IntegrationService');

    public void syncData(List<Account> accounts) {
        LOGGER.info('Starting data sync for {0} accounts', new List<Object>{ accounts.size() });

        for (Account acc : accounts) {
            try {
                syncSingleAccount(acc);
            } catch (HttpException ex) {
                // Log with both context and exception details
                LOGGER.error('HTTP integration failed for Account {0} - {1}',
                    new List<Object>{ acc.Id, acc.Name }, ex);
                // Continue processing other accounts
                continue;
            } catch (Exception ex) {
                // Critical error - log with full context and stop processing
                LOGGER.fatal('Critical error during account sync. Account: {0}, remaining: {1}',
                    new List<Object>{ acc.Id, accounts.size() }, ex);
                throw ex;
            }
        }

        LOGGER.info('Data sync completed successfully');
    }
}
```

When you log an Exception object, RFLIB automatically captures:
📋 Exception Details:
- Exception type and message
- Complete stack trace
- Line numbers and method names
- Cause chain (if nested exceptions exist)
🎯 Context Information:
- Custom message with variable substitution
- Timestamp and user context
- Transaction and session details
- Custom logger context name
✅ DO:
```apex
// Use appropriate log levels with Exception objects (ERROR and FATAL only)
LOGGER.warn('Recoverable issue occurred: {0}', new List<Object>{ ex.getMessage() }); // WARN doesn't support Exception objects
LOGGER.error('Integration failed', ex); // ERROR supports Exception objects
LOGGER.fatal('Critical system error', ex); // FATAL supports Exception objects

// Include relevant context with Exception objects
LOGGER.error('Payment processing failed for Order {0}, Amount: {1}',
    new List<Object>{ orderId, amount }, ex);

// Log at the right abstraction level
LOGGER.error('User authentication failed for email {0}',
    new List<Object>{ userEmail }, ex);
```

❌ DON'T:
```apex
// Don't manually convert exceptions to strings for ERROR/FATAL levels
LOGGER.error('Error occurred: ' + ex.getMessage()); // ❌ Loses stack trace, use Exception object instead

// Don't try to pass Exception objects to WARN level
LOGGER.warn('Recoverable issue', ex); // ❌ TRACE, DEBUG, INFO, and WARN don't support Exception objects
```

The same pattern applies in trigger handlers built on rflib_TriggerManager:

```apex
public void run(rflib_TriggerManager.Args args) {
    try {
        processRecords(args.newRecords);
    } catch (Exception ex) {
        LOGGER.fatal('Trigger execution failed for {0} records',
            new List<Object>{ args.newRecords.size() }, ex);
        // Decide whether to rethrow or handle gracefully
    }
}
```

In Lightning Web Components, error objects can be logged as well:

```js
handleError(error) {
    // Log client-side errors (limited exception detail)
    logger.error('Component error occurred: {0}', JSON.stringify(error));
}
```

When to use batch logging:
- High-volume operations where DML limits might be exceeded
- Situations where immediate log publishing isn't critical
- Testing scenarios where you need precise control over when logs are published
When NOT to use batch logging:
- Normal application logging (use standard logger)
- Error handling scenarios (immediate visibility is important)
- Low-volume operations
When using the batched logging pattern, your code is responsible for publishing the log events. This means you must call the rflib_Logger.publishBatchedLogEvents() method at the end of every transaction. It does not matter which logger it is called on, as all loggers manage batched log events globally. Batched logging reduces the number of DML statements, especially for low log level configurations.
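Conceptually, the pattern works like a queue that is flushed once per transaction. The following JavaScript sketch illustrates the queue-and-flush idea only; the class and property names are made up and do not reflect RFLIB's Apex implementation:

```javascript
// Conceptual queue-and-flush sketch of batched logging; names are illustrative.
class BatchedLogger {
    constructor() {
        this.queue = [];       // events accumulated during the transaction
        this.publishCalls = 0; // stands in for publish/DML statements used
    }
    log(level, message) {
        this.queue.push({ level, message }); // no publish yet
    }
    publishBatchedLogEvents() {
        if (this.queue.length > 0) {
            this.publishCalls += 1; // one publish for the whole batch
            this.queue = [];
        }
    }
}

const logger = new BatchedLogger();
logger.log('INFO', 'step 1');
logger.log('INFO', 'step 2');
logger.log('ERROR', 'step 3 failed');
logger.publishBatchedLogEvents(); // single publish instead of three
```

The trade-off is visibility: if the transaction aborts before the flush, the queued events are lost, which is why immediate logging is preferred for error handling.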
Following is an example of an Aura controller using the batch pattern:
```apex
public with sharing class MyController {
    // This will log all Log Events as batched events, independent of the settings.
    private static final rflib_Logger LOGGER = rflib_LoggerUtil.getFactory().createBatchedLogger('MyController');

    @AuraEnabled
    public static String doSomething() {
        try {
            // Application logic here
            return 'Result';
        } catch (Exception ex) {
            LOGGER.fatal('DoSomething threw an exception', ex);
            throw new AuraHandledException(ex.getMessage());
        } finally {
            // IMPORTANT: This method must be invoked to trigger the publishing of any queued events.
            LOGGER.publishBatchedLogEvents();
        }
    }
}
```
Similar to the Logger, the rflib_LogFinalizer implementation is based on dependency injection and comes with a default implementation. Therefore, an instance can be instantiated using the rflib_LoggerUtil class. The following sample code highlights the key elements of a Finalizer implementation.
The rflib_DefaultLogFinalizer will check the ParentJobResult and log an INFO statement if the transaction was successful. If the transaction failed, a FATAL statement with more details will be logged. All log statements that were generated in the Queueable will be included in the Log Event.
```apex
public class FinalizerExample implements Queueable {
    private static final rflib_Logger LOGGER = rflib_LoggerUtil.getFactory().createLogger('FinalizerExample');

    public void execute(QueueableContext ctx) {
        rflib_LogFinalizer logFinalizer = rflib_LoggerUtil.createLogFinalizer(LOGGER);
        System.attachFinalizer(logFinalizer);

        LOGGER.info('Foo bar');
    }
}
```
If you need to do more work in the transaction finalizer than logging, consider using the Finalplexer created by Chris Peterson, which allows you to attach multiple finalizers to a single transaction.
Logging is also supported in Process Builder and Flow using Apex Actions.
In Process Builder, define an Action with the Action Type Apex, give it a unique name, and then select the Apex Class Log Message. Fill out the fields to log a message during the Process Builder execution. Please note that there are two optional parameters that can be configured by clicking the Add Row button at the bottom of the form. The screenshot below illustrates the configuration.

In Flow Builder, add an Action element to your Flow. When the New Action modal appears, select RFLIB as the category on the left and search for the Log Message action in the right column. Once selected, fill out the standard action fields by giving it a unique name and define your log parameters. Please note that there are two optional parameters that can be configured by enabling the toggle at the bottom of the form. The screenshot below illustrates the configuration.

Unlike what is shown in the screenshots, please avoid using batched logging; it should only be set to true in very rare cases.
Logging is a bit of an art. Here are best practices based on real-world experience:
📋 General Guidelines:
- Generally use INFO statements for normal flow tracking
- Try to create a log "stacktrace": almost every function should log its entry along with its arguments
- Use TRACE statements within loops or for extremely large payloads
- Every class or Lightning component should have a logger instance
- Reduce log statements by using the formatting feature to print multiple variables efficiently
🚨 Exception Logging (Key Feature):
- Pass Exception objects directly to ERROR/FATAL methods for full stack traces
- WARN level does not support Exception objects - use `ex.getMessage()` if needed
- Use FATAL for unexpected exceptions in critical paths (controllers, integrations)
- Use ERROR for service failures (callouts, external integrations)
- Use WARN for recoverable issues that shouldn't happen in normal operation
Example of proper exception logging:
```apex
// ✅ DO - Pass exception object directly to ERROR/FATAL
LOGGER.error('Integration failed for Account {0}', new List<Object>{ accountId }, ex);

// ✅ DO - For WARN level, include exception message manually
LOGGER.warn('Validation issue for Account {0}: {1}', new List<Object>{ accountId, ex.getMessage() });

// ❌ DON'T - Lose valuable stack trace information for ERROR/FATAL
LOGGER.error('Integration failed: ' + ex.getMessage());
```

⚡ Performance & Integration:
- Use `rflib_HttpRequest` instead of the Apex platform class for debugging integration issues
- Consider using the batch logger for high-volume callout operations to avoid DML limit issues
- Avoid batched logging unless absolutely necessary (Platform Events have dedicated governor limits)
🎯 Context & Clarity:
- Include relevant business context in log messages
- Use consistent logger names (typically class name)
- Log at appropriate levels - don't spam with overly verbose DEBUG messages
The RFLIB Plugin for Salesforce CLI simplifies logging integration by automating the process of instrumenting your Salesforce codebase with RFLIB's logging framework. It supports Apex classes, Lightning Web Components (LWC), and Aura components, ensuring consistency and saving time during the setup.
- Automatically adds logging statements for method entries, parameter values, and error handling.
- Supports a preview mode (`dry-run`) to review changes before applying them.
- Offers an option to format code using Prettier for consistent styling.
- Allows selective integration, skipping conditional structures like `if/else`.
Run the following command to install the plugin:
```shell
sf plugins install rflib-plugin
```

For a detailed guide on installation and commands, see the RFLIB Plugin for Salesforce CLI page.
Solution: Check your Logger Settings:
- Verify Log Event Reporting Level is set to include your message level
- Confirm General Log Level allows your messages to be cached
- Check if the user has access to the Ops Center

Solution: Adjust System Debug Log Level to INFO or WARN in production environments
Solution: Use user-specific Logger Settings to temporarily lower log levels for your user only
Example Configuration:
- General Log Level = DEBUG (captures everything)
- Log Event Reporting Level = ERROR (only errors go to Ops Center)
- System Debug Log Level = INFO (moderate Salesforce log output)
- Archive Log Level = WARN (long-term storage for warnings and above)
Result: When an ERROR occurs, you'll see:
- All DEBUG/INFO/WARN messages in the log event details (full context)
- Only the ERROR in the Ops Center dashboard
- INFO and above in Salesforce Debug Logs
- WARN and above stored for long-term analysis
- Configure Logger Settings: Start with the recommended values for your environment
- Monitor Your Logs: Access the Ops Center to view and analyze your logs
- Set Up Alerts: Configure email notifications for critical errors
- Explore Advanced Features: Consider Log Aggregation for trend analysis