Code Quality Implementation Examples
This document provides practical examples of code quality implementations across different contexts. Each example showcases specific approaches, tools, and practices that organizations have used to improve code quality.
This example involves a mid-sized fintech organization running a microservices architecture of 30+ services developed by six teams. The teams defined shared quality standards in a central repository:
```yaml
# quality-standards.yml - Central repository defining shared quality standards
version: 1.0

code_style:
  javascript:
    standard: airbnb
    config_file: .eslintrc.standard.js
  java:
    standard: google
    config_file: checkstyle.xml
  python:
    standard: black
    config_file: pyproject.toml

testing:
  unit_test:
    coverage_threshold: 80%
    critical_paths_threshold: 90%
  integration_test:
    required: true
    coverage_approach: "critical paths"
  contract_testing:
    required: true
    tool: pact
```
A shared, quality-focused CI pipeline enforced these standards on every push and pull request:

```yaml
# Quality-focused CI pipeline example
name: Quality Pipeline

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main, develop ]

jobs:
  quality_checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up environment
        uses: actions/setup-node@v2
        with:
          node-version: '16'
      - name: Install dependencies
        run: npm ci
      - name: Lint
        run: npm run lint
      - name: Security scan
        run: npm run security-scan
      - name: Unit tests
        run: npm run test:unit
      - name: Integration tests
        run: npm run test:integration
      - name: Build
        run: npm run build
      - name: Contract tests
        run: npm run test:contract
      - name: SonarQube analysis
        uses: SonarSource/sonarqube-scan-action@v1
        env:
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
          SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}
```
The organization implemented a centralized quality dashboard showing key metrics for all services:
- Test coverage by service
- Linting conformance
- Security vulnerabilities
- Technical debt metrics
- Build stability
- API contract conformance
Key practices:

- Standardized quality configurations across services, while allowing teams flexibility in implementation
- Centralized quality metrics reporting
- Quality gates in the CI/CD pipeline that block deployment when quality thresholds are not met (see the sketch after this list)
- Monthly quality review meetings with representatives from all teams
- Quality champions in each team responsible for maintaining standards
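The quality-gate practice can be enforced with a small check that runs late in the pipeline and fails the build when a threshold from quality-standards.yml is not met. A minimal sketch, assuming a hypothetical `build/quality/metrics.properties` file produced by earlier pipeline steps; in practice this is usually delegated to SonarQube quality gates or the coverage tool's own threshold settings:

```java
import java.io.FileReader;
import java.io.IOException;
import java.util.Properties;

// Hypothetical quality-gate check: file path and property names are illustrative.
public class CoverageGate {

    private static final double THRESHOLD = 80.0; // unit_test coverage_threshold from quality-standards.yml

    public static void main(String[] args) throws IOException {
        Properties metrics = new Properties();
        try (FileReader reader = new FileReader("build/quality/metrics.properties")) {
            metrics.load(reader);
        }

        double coverage = Double.parseDouble(metrics.getProperty("line.coverage", "0"));
        if (coverage < THRESHOLD) {
            System.err.printf("Quality gate failed: coverage %.1f%% < %.1f%%%n", coverage, THRESHOLD);
            System.exit(1); // non-zero exit blocks the deployment stage
        }
        System.out.printf("Quality gate passed: coverage %.1f%%%n", coverage);
    }
}
```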
Results:

- 40% reduction in production incidents
- Faster onboarding for new developers
- Improved cross-team collaboration
- Consistent quality across all services
The next example comes from a large insurance company with a 15-year-old Java monolith that needed modernization while maintaining quality. The team started by pinning down the existing behavior with characterization tests:
```java
// Example of adding characterization tests to legacy code
@Test
public void testLegacyPolicyCalculationFunctionality() {
    // GIVEN: Setup with production-like test data
    PolicyCalculationRequest request = TestDataFactory.createRealWorldPolicyRequest();

    // WHEN: The legacy method is called
    PolicyCalculationResult result = legacyPolicyCalculator.calculate(request);

    // THEN: Document the current behavior (expected values are captured from
    // the system's existing output, not from a specification)
    assertThat(result.getPremium()).isEqualTo(expectedPremium);
    assertThat(result.getDiscounts()).hasSize(expectedDiscountCount);
    assertThat(result.getTaxes()).containsExactly(expectedTaxes);

    // Store this test case for regression testing during modernization
    TestCaseRegistry.registerTestCase("POLICY-CALC-1", request, result);
}
```
The team implemented a multi-phase quality monitoring approach:

- Pre-Modernization:
  - Established baseline quality metrics
  - Added logging for key business transactions
  - Implemented monitoring for critical user journeys
- During Modernization:
  - Parallel deployment of legacy and modernized components
  - Comparison testing between old and new implementations (see the sketch after this list)
  - Expanded test coverage for rewritten modules
- Post-Modernization:
  - Comprehensive regression testing
  - Performance benchmarking against baseline
  - Business outcome validation
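The comparison testing called out above can be as simple as replaying the same recorded requests through both implementations and asserting identical results. A minimal sketch, reusing the request/result types from the characterization test; `LegacyPolicyCalculator`, `ModernPolicyCalculator`, and the plural `createRealWorldPolicyRequests()` factory are illustrative names, not from the original codebase:

```java
import static org.assertj.core.api.Assertions.assertThat;

import org.junit.Test;

// Hypothetical parallel-run comparison test between legacy and modernized code paths.
public class PolicyCalculationParityTest {

    private final LegacyPolicyCalculator legacy = new LegacyPolicyCalculator();
    private final ModernPolicyCalculator modern = new ModernPolicyCalculator();

    @Test
    public void modernCalculatorMatchesLegacyForRealWorldRequests() {
        for (PolicyCalculationRequest request : TestDataFactory.createRealWorldPolicyRequests()) {
            PolicyCalculationResult legacyResult = legacy.calculate(request);
            PolicyCalculationResult modernResult = modern.calculate(request);

            // Premiums, discounts, and taxes must all match the legacy output exactly.
            assertThat(modernResult.getPremium()).isEqualTo(legacyResult.getPremium());
            assertThat(modernResult.getDiscounts()).isEqualTo(legacyResult.getDiscounts());
            assertThat(modernResult.getTaxes()).isEqualTo(legacyResult.getTaxes());
        }
    }
}
```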
The team progressively tightened quality standards:
Phase 1 (Initial):
```xml
<!-- Initial PMD ruleset for legacy code -->
<ruleset name="Legacy Compatibility Rules">
  <description>Basic rules that won't break existing patterns</description>

  <!-- Include only critical rules -->
  <rule ref="category/java/errorprone/AvoidBranchingStatementAsLastInLoop"/>
  <rule ref="category/java/errorprone/AvoidDecimalLiteralsInBigDecimalConstructor"/>
  <rule ref="category/java/errorprone/AvoidMultipleUnaryOperators"/>
  <rule ref="category/java/errorprone/AvoidUsingOctalValues"/>
  <rule ref="category/java/errorprone/BrokenNullCheck"/>

  <!-- Exclude rules that would flag most legacy code -->
  <exclude-pattern>.*/legacy/.*</exclude-pattern>
</ruleset>
```
Phase 3 (Modernized):
```xml
<!-- Modern PMD ruleset for new/rewritten code -->
<ruleset name="Modern Java Standards">
  <description>Comprehensive rules for modernized codebase</description>

  <!-- Include all categories -->
  <rule ref="category/java/bestpractices/"/>
  <rule ref="category/java/codestyle/"/>
  <rule ref="category/java/design/"/>
  <rule ref="category/java/documentation/"/>
  <rule ref="category/java/errorprone/"/>
  <rule ref="category/java/multithreading/"/>
  <rule ref="category/java/performance/"/>
  <rule ref="category/java/security/"/>

  <!-- Customize specific rules -->
  <rule ref="category/java/design/CyclomaticComplexity">
    <properties>
      <property name="methodReportLevel" value="10"/>
      <property name="classReportLevel" value="80"/>
    </properties>
  </rule>

  <!-- Still exclude truly legacy areas -->
  <exclude-pattern>.*/legacy-core/.*</exclude-pattern>
</ruleset>
```
Key practices:

- Established a two-track approach: strict standards for new code, pragmatic standards for legacy code
- Created a dedicated quality engineering team to support the modernization effort
- Implemented automated comparison testing between legacy and modernized components
- Used feature flags to gradually roll out modernized components (see the sketch below)
- Prioritized testability in the modernization roadmap
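For the feature-flag rollout noted above, a thin routing layer can send a growing share of traffic to the modernized component while keeping the legacy path as a fallback. A sketch under the assumption of a hypothetical `FeatureFlags` abstraction and a `getCustomerId()` accessor on the request; a real system would back this with a flag platform or configuration service:

```java
// Hypothetical rollout wrapper: FeatureFlags, LegacyPolicyCalculator and
// ModernPolicyCalculator are illustrative names, not from the original codebase.
interface FeatureFlags {
    // Decides per request (e.g. by hashing the rollout key) whether a flag is on.
    boolean isEnabled(String flagName, String rolloutKey);
}

public class PolicyCalculatorRouter {

    private final FeatureFlags flags;
    private final LegacyPolicyCalculator legacy;
    private final ModernPolicyCalculator modern;

    public PolicyCalculatorRouter(FeatureFlags flags,
                                  LegacyPolicyCalculator legacy,
                                  ModernPolicyCalculator modern) {
        this.flags = flags;
        this.legacy = legacy;
        this.modern = modern;
    }

    public PolicyCalculationResult calculate(PolicyCalculationRequest request) {
        if (flags.isEnabled("modern-policy-calculator", request.getCustomerId())) {
            try {
                return modern.calculate(request);
            } catch (RuntimeException e) {
                // Fall back to the legacy path so a defect in the new code
                // degrades gracefully instead of failing the request.
                return legacy.calculate(request);
            }
        }
        return legacy.calculate(request);
    }
}
```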
Results:

- Successfully modernized 70% of the codebase within 18 months
- Improved test coverage from 35% to 75%
- Zero critical production issues during transition
- Reduced maintenance cost by 40%
A startup with a team of 7 developers building a SaaS product needed to implement quality practices that wouldn't slow down development.
The team implemented a lightweight but effective quality process:
- "Just Enough" Quality Checklist:
```markdown
# Pre-commit Quality Checklist

## Code Changes
- [ ] Code follows team style guide
- [ ] No commented-out code
- [ ] No debug/console statements
- [ ] Error handling implemented
- [ ] No security issues introduced

## Tests
- [ ] Unit tests for new functionality
- [ ] Existing tests still pass
- [ ] Edge cases considered

## Documentation
- [ ] Inline comments for complex logic
- [ ] API documentation updated (if applicable)
- [ ] README updated (if applicable)

## Security & Performance
- [ ] No obvious security vulnerabilities
- [ ] No significant performance issues introduced
```
- Git Hooks Implementation:
```bash
#!/bin/bash
# pre-commit hook to enforce basic quality checks

echo "Running pre-commit quality checks..."

# Check for console.log statements
if grep -r "console.log" --include="*.js" --include="*.jsx" --include="*.ts" --include="*.tsx" ./src; then
  echo "ERROR: console.log statements found. Please remove them before committing."
  exit 1
fi

# Check for TODO comments
if grep -r "TODO" --include="*.js" --include="*.jsx" --include="*.ts" --include="*.tsx" ./src; then
  echo "WARNING: TODO comments found. Consider resolving them before committing."
fi

# Run linting
npm run lint
if [ $? -ne 0 ]; then
  echo "ERROR: Linting failed. Please fix the issues before committing."
  exit 1
fi

# Run tests
npm run test
if [ $? -ne 0 ]; then
  echo "ERROR: Tests failed. Please fix the issues before committing."
  exit 1
fi

echo "All quality checks passed!"
exit 0
```
- Streamlined Code Review Process: a lightweight review process focused on key quality aspects:
  - Limited Scope: Reviews limited to roughly 300 lines of code
  - Quick Turnaround: 24-hour SLA for initial review feedback
  - Focus Areas: Security, error handling, and edge cases prioritized
  - Async Reviews: Leveraged GitHub PR reviews with screenshots and screen recordings for complex changes
  - Pair Programming: Used for complex features instead of heavy review processes
The team carefully selected tools that provided maximum value with minimal overhead:
```json
// package.json excerpt showing quality tooling
{
  "scripts": {
    "lint": "eslint --ext .js,.jsx,.ts,.tsx src",
    "lint:fix": "eslint --ext .js,.jsx,.ts,.tsx src --fix",
    "format": "prettier --write 'src/**/*.{js,jsx,ts,tsx,css,md}'",
    "test": "jest",
    "test:watch": "jest --watch",
    "validate": "npm-run-all --parallel lint test",
    "prepare": "husky install"
  },
  "devDependencies": {
    "@typescript-eslint/eslint-plugin": "^5.0.0",
    "@typescript-eslint/parser": "^5.0.0",
    "eslint": "^8.0.0",
    "eslint-config-prettier": "^8.3.0",
    "eslint-plugin-react": "^7.26.0",
    "eslint-plugin-react-hooks": "^4.2.0",
    "eslint-plugin-security": "^1.4.0",
    "husky": "^7.0.0",
    "jest": "^27.2.4",
    "lint-staged": "^11.2.0",
    "npm-run-all": "^4.1.5",
    "prettier": "^2.4.1"
  },
  "lint-staged": {
    "*.{js,jsx,ts,tsx}": [
      "eslint --fix",
      "prettier --write"
    ],
    "*.{json,css,md}": [
      "prettier --write"
    ]
  }
}
```
The team implemented a CI pipeline focused on speed:
```yaml
# .github/workflows/quality.yml
name: Quality Checks

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  quality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '16'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Lint
        run: npm run lint
      - name: Test
        run: npm run test
      - name: Build
        run: npm run build
      - name: Security scan
        run: npx audit-ci --moderate
```
Key practices:

- Automated as much as possible to reduce manual effort
- Used lightweight tools suitable for a small team
- Focused on high-impact quality issues rather than perfection
- Integrated quality checks into existing workflows
- Prioritized rapid feedback over comprehensive documentation
Results:

- Maintained high velocity while ensuring quality
- Reduced customer-reported bugs by 60%
- Onboarded new developers quickly with minimal quality dips
- Established a quality culture without bureaucracy
A mobile app development team creating a consumer app available on iOS and Android needed to implement quality practices specific to mobile development.
The team implemented platform-specific quality standards alongside cross-platform ones:
Cross-Platform Standards:
```yaml
# mobile-quality-standards.yml
version: 1.0

common:
  # Code standards common to all platforms
  naming_convention: camelCase
  file_organization: feature_based
  comment_requirements:
    - public_apis
    - complex_algorithms
    - non_obvious_behaviors

  # Test requirements
  unit_test_coverage: 70%
  ui_test_coverage: key_flows_only

  # Performance requirements
  cold_start: "< 2s"
  hot_start: "< 1s"
  memory_leak: none
  animation_fps: ">= 58fps"

ios:
  # iOS specific standards
  swift_version: 5.5+
  min_ios_version: 13.0
  ui_framework: uikit_and_swiftui
  static_analysis: swiftlint
  crash_monitoring: firebase_crashlytics
  accessibility: required

android:
  # Android specific standards
  kotlin_version: 1.5+
  min_android_api: 23
  target_android_api: 31
  ui_framework: jetpack_compose
  static_analysis: detekt
  crash_monitoring: firebase_crashlytics
  accessibility: required
```
The team implemented a comprehensive testing pyramid approach:
- Unit Tests: Platform-specific using XCTest and JUnit
- Integration Tests: Platform-specific for native components
- UI Tests: Using Appium for cross-platform consistency (a hedged Appium sketch follows the native example below)
- Manual Testing: Focused on usability and complex interactions
```kotlin
// Example of Android UI test for login
import androidx.test.core.app.ActivityScenario
import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.action.ViewActions.click
import androidx.test.espresso.action.ViewActions.closeSoftKeyboard
import androidx.test.espresso.action.ViewActions.typeText
import androidx.test.espresso.assertion.ViewAssertions.matches
import androidx.test.espresso.matcher.ViewMatchers.isDisplayed
import androidx.test.espresso.matcher.ViewMatchers.withId
import androidx.test.espresso.matcher.ViewMatchers.withText
import org.hamcrest.CoreMatchers.containsString
import org.junit.Test

@Test
fun loginFlowCompletesSuccessfully() {
    // Launch app
    ActivityScenario.launch(MainActivity::class.java)

    // Enter credentials and login
    onView(withId(R.id.email_input))
        .perform(typeText("[email protected]"), closeSoftKeyboard())
    onView(withId(R.id.password_input))
        .perform(typeText("password123"), closeSoftKeyboard())
    onView(withId(R.id.login_button)).perform(click())

    // Verify navigation to home screen
    onView(withId(R.id.home_screen_container)).check(matches(isDisplayed()))

    // Verify welcome message
    onView(withId(R.id.welcome_message))
        .check(matches(withText(containsString("Welcome"))))
}
```
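For the Appium-based cross-platform UI tests in the pyramid above, the same login flow can be driven through accessibility IDs shared by both platforms. A rough sketch with the Appium Java client; the capabilities, element IDs, and server URL are illustrative assumptions:

```java
import java.net.URL;

import io.appium.java_client.AppiumBy;
import io.appium.java_client.android.AndroidDriver;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.remote.DesiredCapabilities;

import static org.junit.Assert.assertTrue;

public class LoginFlowAppiumTest {

    private AndroidDriver driver;

    @Before
    public void setUp() throws Exception {
        // Illustrative capabilities; a real suite would parameterize these per platform.
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");
        caps.setCapability("appium:automationName", "UiAutomator2");
        caps.setCapability("appium:app", "/path/to/app-debug.apk");
        driver = new AndroidDriver(new URL("http://127.0.0.1:4723"), caps);
    }

    @Test
    public void loginFlowCompletesSuccessfully() {
        // Accessibility IDs are assumed to be identical on iOS and Android.
        driver.findElement(AppiumBy.accessibilityId("email_input")).sendKeys("test@example.com");
        driver.findElement(AppiumBy.accessibilityId("password_input")).sendKeys("password123");
        driver.findElement(AppiumBy.accessibilityId("login_button")).click();

        assertTrue(driver.findElement(AppiumBy.accessibilityId("home_screen")).isDisplayed());
    }

    @After
    public void tearDown() {
        if (driver != null) {
            driver.quit();
        }
    }
}
```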
Quality checks for both platforms ran in a shared CI workflow:

```yaml
# .github/workflows/mobile-ci.yml
name: Mobile App CI

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main, develop ]

jobs:
  android_quality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up JDK
        uses: actions/setup-java@v2
        with:
          distribution: 'adopt'
          java-version: '11'
      - name: Lint check
        run: ./gradlew lintDebug
      - name: Run unit tests
        run: ./gradlew testDebugUnitTest
      - name: Static analysis
        run: ./gradlew detekt
      - name: Build debug APK
        run: ./gradlew assembleDebug
      - name: Archive debug APK
        uses: actions/upload-artifact@v2
        with:
          name: app-debug
          path: app/build/outputs/apk/debug/app-debug.apk

  ios_quality:
    runs-on: macos-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Xcode
        uses: maxim-lobanov/setup-xcode@v1
        with:
          xcode-version: latest-stable
      - name: Install dependencies
        run: |
          cd ios
          pod install
      - name: Run SwiftLint
        run: |
          cd ios
          swiftlint
      - name: Run unit tests
        run: |
          cd ios
          xcodebuild test -workspace MyApp.xcworkspace -scheme MyApp -destination 'platform=iOS Simulator,name=iPhone 13'
      - name: Build app
        run: |
          cd ios
          xcodebuild -workspace MyApp.xcworkspace -scheme MyApp -destination 'platform=iOS Simulator,name=iPhone 13' -configuration Debug build
```
The team implemented comprehensive monitoring:
- Crash Reporting:
```kotlin
// Kotlin implementation for crash reporting setup
import android.app.Application
import com.google.firebase.crashlytics.FirebaseCrashlytics
import com.google.firebase.ktx.Firebase
import com.google.firebase.ktx.initialize

class MyApplication : Application() {

    override fun onCreate() {
        super.onCreate()

        // Initialize Firebase (Crashlytics installs its own crash handler)
        Firebase.initialize(this)

        val crashlytics = FirebaseCrashlytics.getInstance()

        // Set user identifier for better crash analysis
        // (getUserId() and getDeviceType() are app-specific helpers)
        crashlytics.setUserId(getUserId())

        // Log app startup
        crashlytics.log("App started")

        // Custom keys for filtering
        crashlytics.setCustomKey("build_type", BuildConfig.BUILD_TYPE)
        crashlytics.setCustomKey("device_type", getDeviceType())

        // Record uncaught exceptions, then delegate to the previously installed
        // handler so Crashlytics' fatal reporting and normal crash behavior are preserved
        val previousHandler = Thread.getDefaultUncaughtExceptionHandler()
        Thread.setDefaultUncaughtExceptionHandler { thread, throwable ->
            crashlytics.recordException(throwable)
            previousHandler?.uncaughtException(thread, throwable)
        }
    }
}
```
- Performance Monitoring (see the trace sketch after this list):
  - Application startup time
  - Screen rendering time
  - Network request latency
  - Memory usage
  - Battery consumption
- User Analytics:
  - Feature usage tracking
  - User flow completion rates
  - UI interaction metrics
  - Session duration and frequency
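Screen rendering and similar timings can be captured with custom traces. The sketch below assumes Firebase Performance Monitoring, which is not named in the standards above (only Crashlytics is); the trace and metric names are illustrative:

```java
import com.google.firebase.perf.FirebasePerformance;
import com.google.firebase.perf.metrics.Trace;

// Hedged sketch of a custom screen-render trace; names are illustrative.
public class CheckoutScreenTimer {

    private Trace renderTrace;

    // Call when the screen starts loading.
    public void onScreenLoadStarted() {
        renderTrace = FirebasePerformance.getInstance().newTrace("checkout_screen_render");
        renderTrace.start();
    }

    // Call when the first meaningful frame has been drawn.
    public void onScreenRendered(int itemCount) {
        if (renderTrace != null) {
            renderTrace.putMetric("item_count", itemCount);
            renderTrace.stop();
            renderTrace = null;
        }
    }
}
```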
Key practices:

- Used platform-specific tooling for platform-specific quality concerns
- Implemented shared quality standards across platforms
- Focused heavily on UI quality and performance
- Prioritized accessibility as a core quality metric
- Implemented extensive production monitoring
Results:

- App store rating improved from 3.7 to 4.6
- Crash-free sessions increased to 99.8%
- App size reduced by 18%
- Performance improved across all target devices
- Accessibility compliance achieved
An enterprise organization developing internal APIs that serve multiple downstream applications needed a robust quality implementation to ensure reliability and consistency.
The team implemented contract testing to ensure API stability:
```java
// Example of Pact provider-side verification of consumer-driven contracts
// (a consumer-side test that generates these pacts is sketched below)
@RunWith(SpringRestPactRunner.class) // Spring-aware Pact runner from pact-jvm's Spring support
@Provider("user-service")
@PactFolder("src/test/resources/pacts")
public class UserServiceProviderTest {

    @MockBean
    private UserRepository userRepository;

    @TestTarget
    public final MockMvcTarget target = new MockMvcTarget();

    @Before
    public void setUp() {
        // Set up the mock MVC target
        target.setControllerAdvice(new GlobalExceptionHandler());
        target.setMessageConverters(new MappingJackson2HttpMessageConverter());
        target.setControllers(new UserController(userRepository));

        // Set up mock repository responses
        Mockito.when(userRepository.findById(42L))
            .thenReturn(Optional.of(new User(42L, "John", "Doe", "[email protected]")));
    }

    @State("User with ID 42 exists")
    public void userExists() {
        // State already set up in the setUp method
    }

    @State("User with ID 99 does not exist")
    public void userDoesNotExist() {
        Mockito.when(userRepository.findById(99L))
            .thenReturn(Optional.empty());
    }
}
```
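The pacts verified above are generated by consumer-side tests. A rough sketch of such a test against the same provider states; class and package names vary across pact-jvm versions, and `UserClient` is a hypothetical consumer HTTP client:

```java
// Hedged consumer-side Pact sketch; the provider name and state strings
// match the provider verification test above.
public class UserServiceConsumerPactTest {

    @Rule
    public PactProviderRule mockProvider = new PactProviderRule("user-service", this);

    @Pact(consumer = "web-frontend", provider = "user-service")
    public RequestResponsePact userExistsPact(PactDslWithProvider builder) {
        return builder
            .given("User with ID 42 exists")
            .uponReceiving("a request for user 42")
                .path("/users/42")
                .method("GET")
            .willRespondWith()
                .status(200)
                .body(new PactDslJsonBody()
                    .integerType("id", 42)
                    .stringType("firstName", "John")
                    .stringType("lastName", "Doe"))
            .toPact();
    }

    @Test
    @PactVerification("user-service")
    public void fetchesExistingUser() {
        // The hypothetical client under test is pointed at the mock provider started by the rule.
        UserClient client = new UserClient(mockProvider.getUrl());
        User user = client.getUser(42L);
        assertThat(user.getFirstName()).isEqualTo("John");
    }
}
```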
The team used OpenAPI/Swagger specifications with enhanced documentation requirements:
```yaml
# openapi.yaml
openapi: 3.0.3
info:
  title: User Service API
  description: |
    API for managing user accounts and profiles.

    ## Usage Guidelines
    * All endpoints require authentication except /health
    * Rate limited to 100 requests per minute
    * Pagination parameters are consistent across all collection endpoints

    ## Error Handling
    All errors follow a standard format with error codes and messages.
  version: 1.2.0
  contact:
    name: API Support Team
    email: [email protected]
servers:
  - url: https://api.example.com/v1
    description: Production server
  - url: https://staging-api.example.com/v1
    description: Staging server
paths:
  /users:
    get:
      summary: Get all users
      description: |
        Returns a paginated list of users.
        Results can be filtered by name, email, or status.
      parameters:
        - name: page
          in: query
          description: Page number (0-based)
          schema:
            type: integer
            minimum: 0
            default: 0
        - name: size
          in: query
          description: Number of items per page
          schema:
            type: integer
            minimum: 1
            maximum: 100
            default: 20
      responses:
        '200':
          description: Successful operation
          content:
            application/json:
              schema:
                type: object
                properties:
                  content:
                    type: array
                    items:
                      $ref: '#/components/schemas/User'
                  pagination:
                    $ref: '#/components/schemas/Pagination'
        '400':
          $ref: '#/components/responses/BadRequest'
        '401':
          $ref: '#/components/responses/Unauthorized'
        '429':
          $ref: '#/components/responses/TooManyRequests'
```
The team implemented comprehensive API quality gates in their CI pipeline:

- Schema Validation:
  - OpenAPI linting
  - Schema compatibility checks
  - Breaking change detection
- Functional Validation:
  - Unit tests (90% coverage requirement)
  - Integration tests
  - Contract tests with consumers
- Performance Validation (see the sketch after this list):
  - Response time tests (P95 < 300ms)
  - Throughput tests (min 100 req/sec)
  - Resource usage tests (max 250MB heap)
- Security Validation:
  - Authentication tests
  - Authorization tests
  - Input validation tests
  - OWASP dependency checks
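The response-time gate can be approximated in CI with a lightweight smoke check ahead of a full load test. A simplified sketch using only the JDK HTTP client; the endpoint, sample size, and lack of warm-up handling are illustrative simplifications of what a tool such as Gatling or JMeter would provide:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hedged sketch of a P95 latency gate; not a substitute for a proper load test.
public class ResponseTimeGate {

    private static final int SAMPLES = 100;
    private static final long P95_BUDGET_MS = 300;

    public static void main(String[] args) throws Exception {
        String url = args.length > 0 ? args[0] : "http://localhost:8080/users?page=0&size=20";
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();

        List<Long> latencies = new ArrayList<>();
        for (int i = 0; i < SAMPLES; i++) {
            long start = System.nanoTime();
            client.send(request, HttpResponse.BodyHandlers.discarding());
            latencies.add((System.nanoTime() - start) / 1_000_000);
        }

        Collections.sort(latencies);
        long p95 = latencies.get((int) Math.ceil(SAMPLES * 0.95) - 1);
        System.out.println("P95 latency: " + p95 + " ms");

        if (p95 > P95_BUDGET_MS) {
            System.err.println("Performance gate failed: P95 " + p95 + " ms > " + P95_BUDGET_MS + " ms");
            System.exit(1); // fail the pipeline stage
        }
    }
}
```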
The team implemented comprehensive API metrics collection:
```java
// Example Micrometer + Prometheus metrics configuration
@Configuration
public class MetricsConfig {

    @Bean
    public MeterRegistryCustomizer<MeterRegistry> metricsCommonTags() {
        // getCurrentEnvironment() is an application-specific helper
        return registry -> registry.config()
            .commonTags("application", "user-service", "environment", getCurrentEnvironment());
    }

    // Enables @Timed annotations on controller methods
    @Bean
    public TimedAspect timedAspect(MeterRegistry registry) {
        return new TimedAspect(registry);
    }

    // Spring Boot's actuator auto-configuration already registers the
    // http.server.requests metrics filter, so no explicit filter bean is needed here.
}

// Controller with metrics
@RestController
@RequestMapping("/users")
public class UserController {

    private final UserService userService;
    private final MeterRegistry meterRegistry;

    @Autowired
    public UserController(UserService userService, MeterRegistry meterRegistry) {
        this.userService = userService;
        this.meterRegistry = meterRegistry;
    }

    @GetMapping("/{id}")
    @Timed(value = "user.lookup", description = "Time taken to return user by ID")
    public ResponseEntity<User> getUserById(@PathVariable Long id) {
        try {
            return userService.findById(id)
                .map(ResponseEntity::ok)
                .orElseGet(() -> {
                    meterRegistry.counter("user.not.found").increment();
                    return ResponseEntity.notFound().build();
                });
        } catch (Exception e) {
            meterRegistry.counter("user.lookup.error", "exception", e.getClass().getSimpleName()).increment();
            throw e;
        }
    }
}
```
Key practices:

- Treated the API as a product with clear versioning and documentation
- Implemented consumer-driven contract testing to detect breaking changes
- Focused on comprehensive metrics collection for both technical and business insights
- Established clear performance SLAs and monitored them continuously
- Automated compatibility checks for API changes
Results:

- 99.99% API uptime
- Zero breaking changes introduced to consumers
- Average response time reduced by 35%
- Developer onboarding time to use the APIs reduced from weeks to days
- Reuse of APIs increased by 40% across the organization