Temporal Analysis Module Documentation
Table of Contents
- Overview
- Key Features
- How It Works
- Temporal Metrics
- Optimization Methods
- Quick Start
- Advanced Usage
- Configuration Options
- Output Formats
- Performance Guidelines
- Examples
- API Reference
- Troubleshooting
Overview
The Temporal Analysis Module provides grid-based temporal pattern analysis for PlanetScope satellite imagery. It analyzes acquisition patterns and identifies coverage gaps using the same robust grid-based approach as the spatial density analysis, ensuring consistent coordinate system handling and professional output quality.
What It Does
- Temporal Pattern Analysis: Understand how satellite coverage varies over time and space
- Coverage Gap Detection: Identify areas with poor temporal coverage
- Acquisition Planning: Optimize future data acquisition strategies
- Time Series Preparation: Assess data availability for temporal analysis workflows
Key Features
Grid-Based Analysis
- Same grid creation approach as spatial density analysis
- Coordinate system fixes applied for proper geographic alignment
- ROI clipping support (clip_to_roi parameter)
- Multiple spatial resolutions (3 m to 1000 m+)
Comprehensive Temporal Metrics
- Coverage Days: Number of unique dates with scene coverage per grid cell
- Mean/Median Intervals: Days between consecutive scene acquisitions
- Temporal Density: Scenes per day over the analysis period
- Min/Max Intervals: Range of temporal gaps
- Coverage Frequency: Fraction of analysis days with coverage (0-1)
Performance Optimization
- FAST Method: Vectorized operations (10-50x faster)
- ACCURATE Method: Cell-by-cell processing (slower but precise)
- AUTO Selection: Automatically chooses based on grid size
Professional Outputs
- Multiple GeoTIFF files with proper styling
- QML style files for QGIS visualization
- Comprehensive summary plots
- JSON metadata with detailed statistics
How It Works
1. Grid Creation
The temporal analysis uses the same grid creation approach as spatial density analysis:
from affine import Affine  # the affine package is a rasterio dependency

# Convert spatial resolution to degrees (approximate; 1 degree ≈ 111 km)
resolution_deg = spatial_resolution / 111000  # meters to degrees

# Calculate grid dimensions
width = int((bounds[2] - bounds[0]) / resolution_deg)
height = int((bounds[3] - bounds[1]) / resolution_deg)

# Create corrected transform (north-to-south orientation)
pixel_width = (bounds[2] - bounds[0]) / width     # Positive (west to east)
pixel_height = -(bounds[3] - bounds[1]) / height  # Negative (north to south)

transform = Affine(
    pixel_width, 0.0, bounds[0],   # X: west to east, origin at the western edge
    0.0, pixel_height, bounds[3],  # Y: north to south, origin at the northern edge
    0.0, 0.0, 1.0
)
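With the negative pixel height and the origin at the northern edge, row 0 corresponds to the top of the ROI. A quick sanity check of the transform built above (bounds is the ROI bounding box in degrees; this snippet is illustrative, not part of the module):
# The grid's top-left corner should coincide with the ROI's north-west corner
x0, y0 = transform * (0, 0)
assert (x0, y0) == (bounds[0], bounds[3])
# Moving one pixel down decreases latitude by |pixel_height|
_, y1 = transform * (0, 1)
assert y1 < y0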
2. Scene Data Preparation
For each scene in the analysis period:
from datetime import datetime
import pandas as pd
from shapely.geometry import Polygon

# 'acquired', 'coords', and 'properties' come from each scene's API feature.
# Extract the acquisition date from the ISO timestamp
acq_date = datetime.strptime(acquired.split("T")[0], "%Y-%m-%d")

# Filter by date range and collect scene records into a DataFrame
scene_data = pd.DataFrame([{
    'scene_id': properties.get('id'),
    'geometry': Polygon(coords),
    'acquired_date': acq_date,
    'acquired_str': acq_date.strftime("%Y-%m-%d"),
    'cloud_cover': properties.get('cloud_cover', 0),
    'properties': properties
}])
3. Temporal Metrics Calculation
The module offers two optimization methods:
FAST Method (Vectorized Operations)
import numpy as np
from rasterio.features import rasterize

# Group scenes by acquisition date
scenes_by_date = scene_data.groupby('date_only')
unique_dates = sorted(scenes_by_date.groups.keys())

# Create a binary coverage mask for each date
date_coverage_masks = {}
for date, scenes_group in scenes_by_date:
    date_mask = np.zeros((height, width), dtype=np.uint8)
    for _, scene in scenes_group.iterrows():
        scene_mask = rasterize([(scene['geometry'], 1)],
                               out_shape=(height, width),
                               transform=transform)
        date_mask = np.maximum(date_mask, scene_mask)
    date_coverage_masks[date] = date_mask

# Calculate metrics using array operations
coverage_days = sum(date_coverage_masks.values())  # Vectorized per-cell count of covered dates
temporal_density = total_scenes / analysis_days
ACCURATE Method (Cell-by-Cell)
# Process each grid cell individually (find_scenes_for_cell and
# calculate_temporal_metrics are internal helpers, shown schematically)
for i in range(height):
    for j in range(width):
        # Derive the cell bounds (x_min, y_min, x_max, y_max) from the grid
        # transform, then create the cell geometry
        cell = box(x_min, y_min, x_max, y_max)
        # Find scenes whose footprints intersect this cell
        intersecting_scenes = find_scenes_for_cell(cell)
        # Calculate temporal metrics for this cell
        cell_metrics = calculate_temporal_metrics(intersecting_scenes)
        # Store in output arrays
        for metric in metrics:
            metric_arrays[metric][i, j] = cell_metrics[metric]
Temporal Metrics
Coverage Days
Definition: Number of unique dates with satellite scene coverage per grid cell
Calculation:
# Collect the unique acquisition dates covering this cell
unique_dates = sorted(set(scene_dates))
coverage_days = len(unique_dates)
Interpretation:
- Higher values = More frequent coverage
- Range: 1 to maximum possible days in analysis period
- Useful for: Identifying well-monitored areas
Mean Interval
Definition: Average number of days between consecutive satellite acquisitions
Calculation:
# Calculate intervals (in days) between consecutive dates
intervals = []
for i in range(1, len(unique_dates)):
    interval_days = (unique_dates[i] - unique_dates[i-1]).days
    intervals.append(interval_days)
mean_interval = np.mean(intervals)
Interpretation:
- Lower values = More frequent acquisitions (better)
- Range: 1 day (daily coverage) to analysis period length
- Useful for: Time series analysis planning
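For intuition, here is a tiny worked example with three hypothetical acquisition dates (standard library only):
from datetime import date

unique_dates = [date(2025, 1, 1), date(2025, 1, 4), date(2025, 1, 10)]
intervals = [(unique_dates[i] - unique_dates[i - 1]).days
             for i in range(1, len(unique_dates))]  # [3, 6]
mean_interval = sum(intervals) / len(intervals)     # 4.5 days between acquisitions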
Temporal Density
Definition: Average scenes per day over the analysis period
Calculation:
date_range_days = (end_date - start_date).days + 1
temporal_density = total_scenes / date_range_days
Interpretation:
- Higher values = More scenes per unit time
- Range: 0 to theoretical maximum based on constellation
- Useful for: Data availability assessment
Coverage Frequency
Definition: Fraction of days in the analysis period with satellite coverage (0.0-1.0)
Calculation:
analysis_days = (analysis_end - analysis_start).days + 1
coverage_frequency = len(unique_dates) / analysis_days
Interpretation:
- Range: 0.0 (no coverage) to 1.0 (daily coverage)
- Useful for: Monitoring capability assessment
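A quick worked example that ties the two ratio metrics together (the scene counts are hypothetical):
from datetime import date

analysis_start, analysis_end = date(2025, 1, 1), date(2025, 3, 31)
analysis_days = (analysis_end - analysis_start).days + 1   # 90 days
total_scenes = 45           # scenes intersecting the cell
unique_coverage_dates = 30  # distinct days with at least one scene

temporal_density = total_scenes / analysis_days             # 0.5 scenes per day
coverage_frequency = unique_coverage_dates / analysis_days  # ~0.33 -> coverage on 1 day in 3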
Optimization Methods
AUTO Selection (Recommended)
The system automatically chooses the best method based on grid size:
total_cells = width * height
if total_cells > 500_000:
    method = "fast"      # Use vectorized operations
else:
    method = "accurate"  # Use cell-by-cell processing
FAST Method
Best for: Large grids (>500k cells), quick analysis
Performance: 10-50x faster than ACCURATE method
Trade-offs: Slight differences possible in interval calculations
ACCURATE Method
Best for: Small to medium grids (<500k cells), maximum precision
Performance: Slower but potentially more precise
Trade-offs: Significantly longer processing time for large areas
Quick Start
Basic Temporal Analysis
from planetscope_py import analyze_roi_temporal_patterns
from shapely.geometry import Polygon

# Define your region of interest
milan_roi = Polygon([
    [8.7, 45.1], [9.8, 44.9], [10.3, 45.3], [10.1, 45.9],
    [9.5, 46.2], [8.9, 46.0], [8.5, 45.6], [8.7, 45.1]
])

# Run temporal analysis
result = analyze_roi_temporal_patterns(
    milan_roi,
    "2025-01-01/2025-03-31",
    spatial_resolution=100,
    clip_to_roi=True
)

# Access results
print(f"Found {result['scenes_found']} scenes")
print(f"Mean coverage days: {result['temporal_result'].temporal_stats['mean_coverage_days']:.1f}")
print(f"Output directory: {result['output_directory']}")
Simple Workflow
from planetscope_py import quick_temporal_analysis

# Ultra-simple temporal analysis
result = quick_temporal_analysis(milan_roi, "last_3_months")

# Access temporal metrics
temporal_result = result['temporal_result']
for metric, array in temporal_result.metric_arrays.items():
    print(f"{metric.value}: {array.shape} grid")
Advanced Usage
Custom Configuration
from planetscope_py import TemporalAnalyzer, TemporalConfig, TemporalMetric

# Configure specific metrics and optimization
config = TemporalConfig(
    spatial_resolution=50,
    metrics=[
        TemporalMetric.COVERAGE_DAYS,
        TemporalMetric.MEAN_INTERVAL,
        TemporalMetric.TEMPORAL_DENSITY
    ],
    min_scenes_per_cell=3,
    optimization_method="fast",
    coordinate_system_fixes=True
)

# Create analyzer and run analysis
analyzer = TemporalAnalyzer(config)
result = analyzer.analyze_temporal_patterns(
    scene_footprints=scenes,
    roi_geometry=roi,
    start_date="2025-01-01",
    end_date="2025-03-31",
    clip_to_roi=True
)
Export Options
# Export GeoTIFF files
exported_files = analyzer.export_temporal_geotiffs(
    result,
    output_dir="./temporal_output",
    roi_polygon=roi,
    clip_to_roi=True
)

print(f"Exported files: {list(exported_files.keys())}")
# Output: ['coverage_days', 'mean_interval', 'temporal_density', 'metadata']
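To inspect an exported raster outside of QGIS, you can open it with rasterio (a minimal sketch; it assumes exported_files['coverage_days'] is the GeoTIFF path returned by export_temporal_geotiffs above):
import rasterio

with rasterio.open(exported_files['coverage_days']) as src:
    data = src.read(1)
    valid = data[data != src.nodata]   # drop the -9999.0 no-data cells
    print(f"CRS: {src.crs}, shape: {data.shape}")
    print(f"Coverage days: min={valid.min():.0f}, max={valid.max():.0f}, mean={valid.mean():.1f}")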
Performance Optimization
# For large areas - use fast method with coarser resolution
result = analyze_roi_temporal_patterns(
    large_roi,
    "2025-01-01/2025-06-30",
    spatial_resolution=500,      # Larger cells = faster processing
    optimization_level="fast",   # Force fast method
    cloud_cover_max=0.3
)

# For detailed studies - use accurate method with fine resolution
result = analyze_roi_temporal_patterns(
    small_roi,
    "2025-01-01/2025-01-31",
    spatial_resolution=30,         # Fine resolution
    optimization_level="accurate"  # Maximum precision
)
Configuration Options
TemporalConfig Parameters
Parameter | Type | Default | Description |
---|---|---|---|
spatial_resolution | float | 30.0 | Grid cell size in meters |
temporal_resolution | TemporalResolution | DAILY | Temporal resolution for analysis |
metrics | List[TemporalMetric] | None | Metrics to calculate (None = default set) |
min_scenes_per_cell | int | 2 | Minimum scenes required per cell |
optimization_method | str | "auto" | "fast", "accurate", or "auto" |
coordinate_system_fixes | bool | True | Enable coordinate system corrections |
no_data_value | float | -9999.0 | NoData value for output rasters |
validate_geometries | bool | True | Validate input geometries |
High-Level Function Parameters
Parameter | Type | Default | Description |
---|---|---|---|
roi_polygon | Polygon/list/dict | Required | Region of interest geometry |
time_period | str | Required | "YYYY-MM-DD/YYYY-MM-DD" format |
spatial_resolution | float | 30.0 | Grid cell size in meters |
cloud_cover_max | float | 0.3 | Maximum cloud cover threshold |
clip_to_roi | bool | True | Clip analysis to ROI shape |
optimization_level | str | "auto" | Performance optimization level |
create_visualizations | bool | True | Generate summary plots |
export_geotiffs | bool | True | Export GeoTIFF files |
show_plots | bool | True | Display plots in notebooks |
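Most of these parameters can be combined in a single call. For example, using the milan_roi polygon from the Quick Start:
result = analyze_roi_temporal_patterns(
    milan_roi,
    "2025-01-01/2025-03-31",
    spatial_resolution=100,
    cloud_cover_max=0.2,
    clip_to_roi=True,
    optimization_level="auto",
    create_visualizations=True,
    export_geotiffs=True,
    show_plots=False            # Useful when running outside a notebook
)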
Output Formats
GeoTIFF Files
Each temporal metric is exported as a separate GeoTIFF file, together with styling and metadata files:
- temporal_coverage_days_clipped.tif: Coverage days raster
- temporal_mean_interval_clipped.tif: Mean interval raster
- temporal_temporal_density_clipped.tif: Temporal density raster
- temporal_coverage_days_clipped.qml: QGIS style file
- temporal_analysis_metadata.json: Analysis metadata
Summary Visualizations
Comprehensive 4-panel summary plot:
- Coverage Days Map: Spatial distribution of coverage frequency
- Mean Interval Map: Spatial distribution of acquisition intervals
- Interval Histogram: Distribution of mean intervals with statistics
- Statistics Table: Comprehensive analysis summary
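If you want a quick custom view of one of the metric grids (independent of the plots the module generates), a minimal matplotlib sketch that masks the no-data value looks like this:
import matplotlib.pyplot as plt
import numpy as np
from planetscope_py import TemporalMetric

temporal_result = result['temporal_result']
coverage = temporal_result.metric_arrays[TemporalMetric.COVERAGE_DAYS].astype(float)
coverage[coverage == temporal_result.no_data_value] = np.nan   # hide no-data cells

plt.imshow(coverage, cmap='viridis')
plt.colorbar(label='Coverage days')
plt.title('Coverage days per grid cell')
plt.show()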
Metadata JSON
Complete analysis information:
{
  "temporal_analysis_info": {
    "analysis_type": "grid_based_temporal_patterns",
    "computation_time_seconds": 45.2,
    "optimization_method": "fast"
  },
  "temporal_parameters": {
    "date_range": {
      "start_date": "2025-01-01",
      "end_date": "2025-03-31",
      "total_days": 90
    },
    "metrics_calculated": ["coverage_days", "mean_interval", "temporal_density"]
  },
  "spatial_parameters": {
    "spatial_resolution_meters": 100.0,
    "bounds": [-71.8, -4.0, -69.6, -2.2]
  }
}
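The metadata file can be read back with the standard library, for example to log the runtime and the metrics that were calculated (the path assumes the default output layout described above):
import json
from pathlib import Path

meta_path = Path(result['output_directory']) / 'temporal_analysis_metadata.json'
with open(meta_path) as f:
    metadata = json.load(f)

print(metadata['temporal_analysis_info']['computation_time_seconds'], 'seconds')
print(metadata['temporal_parameters']['metrics_calculated'])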
Performance Guidelines
Grid Size Recommendations
ROI Size | Spatial Resolution | Grid Cells | Recommended Method | Expected Time |
---|---|---|---|---|
Small (<50 km²) | 30m | <50k | AUTO/ACCURATE | 30 seconds - 2 minutes |
Medium (50-500 km²) | 50-100m | 50k-500k | AUTO | 2-10 minutes |
Large (500-2000 km²) | 100-300m | 500k-1M | FAST | 5-20 minutes |
Very Large (>2000 km²) | 300-1000m | >1M | FAST | 10-60 minutes |
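A simple way to pick a spatial resolution that keeps the grid within a comfortable cell budget is to solve for it from the ROI area (a rough heuristic, not part of the library):
import math

roi_area_m2 = 800 * 1e6    # e.g. an 800 km² ROI
target_cells = 200_000     # comfortably within AUTO/ACCURATE territory

spatial_resolution = math.sqrt(roi_area_m2 / target_cells)
print(f"Use roughly {spatial_resolution:.0f} m cells")   # ~63 m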
Memory Usage
Estimated memory requirements:
# Memory calculation
cells = (roi_width_m / spatial_resolution) * (roi_height_m / spatial_resolution)
metrics_count = len(selected_metrics)
memory_mb = (cells * metrics_count * 4) / 1024 / 1024 # 4 bytes per float32
# Example: 1000x1000 grid with 3 metrics = ~12 MB
Optimization Tips
- Use an appropriate spatial resolution:
  spatial_resolution=500   # Quick overview (500 m cells)
  spatial_resolution=30    # Detailed analysis (30 m cells)
- Force the fast method for large areas:
  optimization_level="fast"   # Override auto-selection
- Limit metrics for speed:
  metrics=[TemporalMetric.COVERAGE_DAYS]   # Only essential metrics
Examples
Example 1: Amazon Deforestation Monitoring
from planetscope_py import analyze_roi_temporal_patterns
from shapely.geometry import Polygon

# Define Amazon study area
amazon_roi = Polygon([
    [-71.5, -3.8], [-70.2, -4.1], [-69.6, -3.5], [-69.9, -2.7],
    [-70.7, -2.2], [-71.4, -2.4], [-71.8, -3.1], [-71.5, -3.8]
])

# Analyze temporal patterns for change detection
result = analyze_roi_temporal_patterns(
    amazon_roi,
    "2025-01-01/2025-06-30",    # 6 months
    spatial_resolution=300,     # 300m for large area
    cloud_cover_max=0.4,        # Higher threshold for tropics
    optimization_level="fast",  # Fast processing
    clip_to_roi=True
)

# Check temporal coverage
stats = result['temporal_result'].temporal_stats
print(f"Mean interval between acquisitions: {stats['mean_interval_stats']['mean']:.1f} days")
print(f"Areas with good coverage: {stats['coverage_days_stats']['count']} pixels")
Example 2: Urban Development Monitoring
from planetscope_py import analyze_roi_temporal_patterns, TemporalMetric
from shapely.geometry import Polygon

# Define urban area
city_roi = Polygon([
    [8.7, 45.1], [9.8, 44.9], [10.3, 45.3], [10.1, 45.9],
    [9.5, 46.2], [8.9, 46.0], [8.5, 45.6], [8.7, 45.1]
])

# High-resolution temporal analysis
result = analyze_roi_temporal_patterns(
    city_roi,
    "2025-01-01/2025-12-31",        # Full year
    spatial_resolution=30,          # Fine resolution for urban area
    cloud_cover_max=0.2,            # Strict cloud threshold
    optimization_level="accurate",  # Maximum precision
    metrics=[
        TemporalMetric.COVERAGE_DAYS,
        TemporalMetric.MEAN_INTERVAL,
        TemporalMetric.TEMPORAL_DENSITY
    ]
)

# Export for GIS analysis
files = result['exports']
print(f"GeoTIFF files created: {len(files)} files")
print(f"Use {files['coverage_days']} in QGIS with {files['coverage_days_qml']}")
Example 3: Agricultural Monitoring
# Agricultural region
farm_roi = create_bounding_box(center_lat=42.5, center_lon=-85.0, size_km=20)

# Optimize for seasonal analysis
result = analyze_roi_temporal_patterns(
    farm_roi,
    "2025-03-01/2025-10-31",   # Growing season
    spatial_resolution=100,    # 100m for field-level analysis
    cloud_cover_max=0.3,
    min_scenes_per_cell=5,     # Require more scenes for reliable intervals
    output_dir="./farm_temporal_analysis"
)

# Analyze seasonal patterns
temporal_result = result['temporal_result']
mean_intervals = temporal_result.metric_arrays[TemporalMetric.MEAN_INTERVAL]

# Find areas with frequent coverage (good for time series), ignoring no-data cells
valid = mean_intervals != -9999.0
frequent_coverage = mean_intervals[valid & (mean_intervals <= 7)]  # Weekly or better
print(f"Areas with weekly coverage: {len(frequent_coverage)} pixels")
API Reference
High-Level Functions
analyze_roi_temporal_patterns()
def analyze_roi_temporal_patterns(
    roi_polygon: Union[Polygon, list, dict],
    time_period: str,
    spatial_resolution: float = 30.0,
    cloud_cover_max: float = 0.3,
    output_dir: str = "./temporal_analysis",
    clip_to_roi: bool = True,
    metrics: Optional[List[TemporalMetric]] = None,
    create_visualizations: bool = True,
    export_geotiffs: bool = True,
    show_plots: bool = True,
    optimization_level: str = "auto",
    **kwargs
) -> Dict[str, Any]
quick_temporal_analysis()
def quick_temporal_analysis(
    roi: Union[Polygon, list, dict],
    period: str = "last_3_months",
    output_dir: str = "./temporal_output",
    spatial_resolution: float = 100.0,
    **kwargs
) -> Dict[str, Any]
Core Classes
TemporalAnalyzer
class TemporalAnalyzer:
    def __init__(self, config: Optional[TemporalConfig] = None)

    def analyze_temporal_patterns(
        self,
        scene_footprints: List[Dict],
        roi_geometry: Union[Dict, Polygon],
        start_date: str,
        end_date: str,
        clip_to_roi: bool = True,
        **kwargs
    ) -> TemporalResult

    def export_temporal_geotiffs(
        self,
        result: TemporalResult,
        output_dir: str,
        roi_polygon: Optional[Polygon] = None,
        clip_to_roi: bool = True,
        compress: str = "lzw"
    ) -> Dict[str, str]
TemporalConfig
@dataclass
class TemporalConfig:
    spatial_resolution: float = 30.0
    temporal_resolution: TemporalResolution = TemporalResolution.DAILY
    metrics: List[TemporalMetric] = None
    min_scenes_per_cell: int = 2
    optimization_method: str = "auto"
    coordinate_system_fixes: bool = True
    no_data_value: float = -9999.0
    validate_geometries: bool = True
TemporalResult
@dataclass
class TemporalResult:
    metric_arrays: Dict[TemporalMetric, np.ndarray]
    transform: rasterio.Affine
    crs: str
    bounds: Tuple[float, float, float, float]
    temporal_stats: Dict[str, Any]
    computation_time: float
    config: TemporalConfig
    grid_info: Dict[str, Any]
    date_range: Tuple[str, str]
    coordinate_system_corrected: bool = True
    no_data_value: float = -9999.0
Enums
TemporalMetric
class TemporalMetric(Enum):
    COVERAGE_DAYS = "coverage_days"
    MEAN_INTERVAL = "mean_interval"
    MEDIAN_INTERVAL = "median_interval"
    MIN_INTERVAL = "min_interval"
    MAX_INTERVAL = "max_interval"
    TEMPORAL_DENSITY = "temporal_density"
    COVERAGE_FREQUENCY = "coverage_frequency"
TemporalResolution
class TemporalResolution(Enum):
    DAILY = "daily"
    WEEKLY = "weekly"
    MONTHLY = "monthly"
Troubleshooting
Common Issues
Performance Issues
Problem: Analysis taking too long for large areas
Solutions:
# 1. Increase spatial resolution
spatial_resolution=500 # Use larger grid cells
# 2. Force fast method
optimization_level="fast"
# 3. Limit metrics
metrics=[TemporalMetric.COVERAGE_DAYS] # Only essential metrics
# 4. Check grid size
total_cells = width * height
if total_cells > 1_000_000:
    print("Consider increasing spatial_resolution")
Memory Issues
Problem: Out of memory errors
Solutions:
# 1. Reduce grid resolution
spatial_resolution=200 # Larger cells = less memory
# 2. Enable chunking (automatic for large areas)
force_single_chunk=False
# 3. Limit metrics
metrics=[TemporalMetric.COVERAGE_DAYS, TemporalMetric.MEAN_INTERVAL]
No Valid Data
Problem: All pixels show no-data values
Solutions:
# 1. Check minimum scenes requirement
min_scenes_per_cell=1 # Reduce requirement
# 2. Verify date range and scene availability
print(f"Scenes found: {len(scene_footprints)}")
# 3. Check cloud cover threshold
cloud_cover_max=0.5 # Increase threshold
# 4. Verify ROI geometry
print(f"ROI valid: {roi_polygon.is_valid}")
Coordinate Issues
Problem: Output appears flipped or misaligned
Solutions:
# Ensure coordinate fixes are enabled (default)
coordinate_system_fixes=True
# Check CRS in output
print(f"Output CRS: {result.crs}")
# Verify transform
print(f"Transform: {result.transform}")
Error Messages
"No valid scenes found in date range"
- Check date format: "YYYY-MM-DD/YYYY-MM-DD"
- Verify scene availability for your ROI and time period
- Check cloud cover threshold
"Spatial resolution must be positive"
- Ensure spatial_resolution > 0
- Typical range: 3.0 to 1000.0 meters
"Could not fix invalid ROI geometry"
- Verify the ROI polygon is valid
- Check for self-intersections or malformed coordinates
- Use roi_polygon.buffer(0) to fix minor issues
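A short shapely-based check you can run before the analysis (explain_validity comes from shapely.validation):
from shapely.validation import explain_validity

if not roi_polygon.is_valid:
    print(f"Invalid ROI: {explain_validity(roi_polygon)}")
    roi_polygon = roi_polygon.buffer(0)   # Often repairs self-intersections
    print(f"Repaired: {roi_polygon.is_valid}")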
Performance Debugging
Check Grid Size
# Calculate expected grid size
roi_width_deg = roi_bounds[2] - roi_bounds[0]
roi_height_deg = roi_bounds[3] - roi_bounds[1]
resolution_deg = spatial_resolution / 111000

width = int(roi_width_deg / resolution_deg)
height = int(roi_height_deg / resolution_deg)
total_cells = width * height

print(f"Grid size: {width} x {height} = {total_cells:,} cells")
if total_cells > 1_000_000:
    print("Consider using optimization_level='fast'")
Monitor Progress
# Enable verbose logging
import logging
logging.getLogger('planetscope_py.temporal_analysis').setLevel(logging.INFO)
# Check intermediate results
print(f"Scenes prepared: {len(scene_data)}")
print(f"Date range: {scene_data['acquired_str'].min()} to {scene_data['acquired_str'].max()}")
Getting Help
For additional support:
- Check logs: Enable INFO level logging for detailed progress
- Validate inputs: Ensure ROI and date ranges are correct
- Test with smaller areas: Use subset for debugging
- Check dependencies: Ensure all required packages are installed
- Report issues: Include error messages, grid size, and optimization method used
Best Practices
1. Resolution Selection
# Urban areas (detailed analysis)
spatial_resolution=30 # 30m
# Regional studies (overview)
spatial_resolution=100 # 100m
# Continental analysis (broad patterns)
spatial_resolution=1000 # 1km
2. Time Period Selection
# Change detection
time_period="2024-01-01/2025-01-01" # Annual comparison
# Seasonal analysis
time_period="2025-03-01/2025-09-30" # Growing season
# Event monitoring
time_period="2025-06-01/2025-06-30" # Monthly detail
3. Output Management
# Organize outputs by analysis type
output_dir=f"./analysis_{datetime.now().strftime('%Y%m%d')}"
# Include metadata in filenames
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
output_dir=f"./temporal_analysis_{timestamp}"
4. Quality Control
# Verify results
stats = result['temporal_result'].temporal_stats
print(f"Valid pixels: {stats['coverage_days_stats']['count']:,}")
print(f"Mean coverage: {stats['coverage_days_stats']['mean']:.1f} days")
# Check data distribution
mean_intervals = result['temporal_result'].metric_arrays[TemporalMetric.MEAN_INTERVAL]
valid_intervals = mean_intervals[mean_intervals != -9999.0]
print(f"Interval range: {np.min(valid_intervals):.1f} - {np.max(valid_intervals):.1f} days")
This documentation provides comprehensive guidance for using the Temporal Analysis Module effectively. For the most up-to-date information, refer to the source code and inline documentation.