Guide to bombcell's GUI - Julie-Fabre/bombcell GitHub Wiki

Bombcell GUI Guide

The Bombcell GUI allows you to interactively explore units and their quality metrics, manually classify units, and fine-tune classification thresholds.

Python GUI

Launching the GUI

import bombcell as bc

# Option 1: Quick launch from kilosort path (auto-loads everything)
gui = bc.unit_quality_gui(ks_dir='/path/to/kilosort/output', save_path='/path/to/bombcell/output')

# Option 2: After running bombcell
quality_metrics, param, unit_type, unit_type_string = bc.run_bombcell(ks_dir, save_path, param)
gui = bc.unit_quality_gui(ks_dir, quality_metrics=quality_metrics, unit_types=unit_type,
                          param=param, save_path=save_path)

# Option 3: With pre-computed data for faster loading
gui_data = bc.precompute_gui_data(ephys_data, quality_metrics, param, save_path)
gui = bc.InteractiveUnitQualityGUI(ephys_data, quality_metrics, param=param,
                                    gui_data=gui_data, save_path=save_path)

Note: The Python GUI requires ipywidgets and runs in Jupyter notebooks. Install with pip install ipywidgets.

Manual classification system

The Python GUI includes a manual classification system that is separate from BombCell's automatic classifications. This allows you to:

  1. Manually classify units as Noise, Good, MUA, or Non-somatic
  2. Compare your classifications with BombCell's automatic results
  3. Get parameter suggestions to improve BombCell's thresholds

Classifying units

Use the classification buttons in the GUI:

  • Noise (red) - Classify current unit as noise
  • Good (green) - Classify current unit as good single unit
  • MUA (orange) - Classify current unit as multi-unit activity
  • Non-somatic (blue) - Classify current unit as non-somatic

Manual classifications are saved automatically to:

  • manual_unit_classifications.csv - Your manual labels only
  • manual_vs_bombcell_classifications.csv - Comparison with BombCell
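
These CSVs are plain tables you can inspect with pandas. A minimal sketch of pulling out your "Good" labels — the column names below are illustrative assumptions, so check the actual header of your manual_unit_classifications.csv:

```python
import io
import pandas as pd

# Illustrative stand-in for manual_unit_classifications.csv
# (column names are assumptions; inspect the real file's header)
csv_text = """unit_id,manual_classification
0,Good
1,MUA
2,Noise
3,Good
"""
manual_df = pd.read_csv(io.StringIO(csv_text))

# e.g. collect the unit IDs you labeled "Good"
good_units = manual_df.loc[
    manual_df["manual_classification"] == "Good", "unit_id"
].tolist()
print(good_units)  # [0, 3]
```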

Auto-advance

By default, the GUI automatically advances to the next unit after you classify one. Disable with:

gui = bc.unit_quality_gui(ks_dir, save_path=save_path, auto_advance=False)

Jump to unclassified

Use the "Next unclassified" button to quickly find units that haven't been manually classified yet.

Navigation

  • ◀ / ▶ - Previous / Next unit
  • ◀ good / good ▶ - Previous / Next good unit
  • ◀ mua / mua ▶ - Previous / Next MUA unit
  • ◀ noise / noise ▶ - Previous / Next noise unit
  • Unit slider - Jump to a specific unit index
  • Go to unit # - Jump to a specific unit ID

Comparing manual vs BombCell classifications

After manually classifying some units, you can analyze how well BombCell's automatic classification matches your labels and get suggestions for improving the thresholds.

Running the analysis

import bombcell as bc

# Simple one-liner analysis
bc.compare_manual_vs_bombcell('/path/to/bombcell/output')

This prints:

  • Concordance statistics - Overall agreement rate
  • Confusion matrix - BombCell vs Manual classification breakdown
  • Per-class precision - When BombCell says "Good", how often do you agree?
  • Per-class recall - Of all units you labeled "Good", how many did BombCell catch?
  • Parameter suggestions - Threshold adjustments to improve agreement

Example output

📊 Classification Concordance Analysis
==================================================
Total classified units: 50
Concordant classifications: 44
Overall concordance: 88.0%
==================================================

Confusion Matrix (rows=BombCell, columns=Manual):
manual_type_name   Good  MUA  Noise  All
Bombcell_unit_type
Good                 25    3      0   28
MUA                   2   12      1   15
Noise                 0    0      7    7
All                  27   15      8   50
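
The headline concordance is simply the diagonal of the confusion matrix (agreements) divided by the total unit count. A quick sanity check on the matrix above:

```python
import numpy as np

# Confusion matrix from the example output (rows = BombCell, cols = manual)
matrix = np.array([
    [25,  3, 0],   # BombCell: Good
    [ 2, 12, 1],   # BombCell: MUA
    [ 0,  0, 7],   # BombCell: Noise
])

concordant = int(np.trace(matrix))   # agreements lie on the diagonal
total = int(matrix.sum())
print(concordant, total, concordant / total)
```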

🔧 Parameter Threshold Suggestions
============================================================
📊 fractionRPVs_estimatedTauR: Current=0.1, Optimal≈0.05
   Good units passing: 25/27 → 27/27
1. maxRPVviolations: 0.1 → 0.05

💡 To apply suggestions:
  1. Load your parameters: param, _, _ = bc.load_bc_results(save_path)
  2. Update param with suggested values (e.g., param['maxRPVviolations'] = 0.05)
  3. Re-run bc.run_bombcell(ks_dir, save_path, param)

How parameter suggestions work

For each quality metric, the algorithm uses a simple min/max approach:

  1. Get the metric values for all units you labeled "Good" vs "Noise"
  2. For minimum thresholds (e.g., minNumSpikes): suggest the minimum value among your good units
  3. For maximum thresholds (e.g., maxRPVviolations): suggest the minimum value among your noise units (to exclude them)
  4. Only suggest a change if it would improve classification

No regression or machine learning - just finding the decision boundary that best separates your labeled groups.
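
The min/max rule above can be sketched in a few lines. The function names and data here are illustrative, not BombCell's internal API:

```python
def suggest_min_threshold(good_values):
    """Minimum-style threshold (e.g. minNumSpikes): a unit passes if its
    value is at least the threshold, so the loosest threshold that still
    passes every manually-good unit is the smallest good value."""
    return min(good_values)

def suggest_max_threshold(noise_values):
    """Maximum-style threshold (e.g. maxRPVviolations): a unit passes if
    its value is below the threshold, so tightening it to the smallest
    noise value excludes every manually-noise unit."""
    return min(noise_values)

# Example mirroring the fractionRPVs suggestion above:
# made-up RPV fractions of the units you labeled noise
rpv_noise = [0.05, 0.18, 0.31]
print(suggest_max_threshold(rpv_noise))  # 0.05
```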

Detailed analysis

For more control, use the individual analysis functions:

from bombcell import (
    load_manual_classifications,
    analyze_classification_concordance,
    suggest_parameter_adjustments,
    analyze_manual_vs_bombcell
)

# Load manual classifications
manual_df = load_manual_classifications(save_path)

# Full analysis with all results
results = analyze_manual_vs_bombcell(save_path, quality_metrics_df, param)

# Access individual components
confusion_matrix = results['confusion_matrix']
suggestions = results['parameter_suggestions']
stats = results['concordance_stats']
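
To fold the suggestions back into your parameters programmatically, a sketch — assuming `parameter_suggestions` is a dict mapping parameter names to suggested values; verify the actual return structure on your install:

```python
# Hypothetical suggestions dict, shaped like the example output above
suggestions = {"maxRPVviolations": 0.05, "minAmplitude": 30}

param = {"maxRPVviolations": 0.1, "minAmplitude": 20, "minNumSpikes": 300}
for name, value in suggestions.items():
    if name in param:        # only touch parameters that actually exist
        param[name] = value

print(param["maxRPVviolations"])  # 0.05
```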

Applying parameter suggestions

After getting suggestions, update your parameters and re-run BombCell:

import bombcell as bc

# Load existing parameters
param, quality_metrics, _ = bc.load_bc_results(save_path)

# Apply suggestions (example)
param['maxRPVviolations'] = 0.05
param['minAmplitude'] = 30

# Re-run bombcell with updated parameters
quality_metrics, param, unit_type, unit_type_string = bc.run_bombcell(
    ks_dir, save_path, param
)

# Compare again to see improvement
bc.compare_manual_vs_bombcell(save_path)

Exporting classifications

Export manual classifications for external analysis:

# From GUI object
gui.export_manual_classifications('/path/to/export')

# Compare statistics
stats = gui.compare_classifications()
print(f"Agreement rate: {stats['agreement_rate']:.1%}")

MATLAB GUI

Launch the GUI with:

bc.viz.unitQualityGUI_synced(memMapData, ephysData, qMetrics, param, probeLocation, unitType, plotRaw)

This opens two synced windows:

  1. Histogram window - Distribution of all quality metrics with the current unit's position marked
  2. Unit window - Waveforms, ACG, amplitude over time, and quality metrics for the current unit

Keyboard shortcuts

  • ← / → - Previous / Next unit
  • g - Next good unit
  • m - Next multi-unit
  • n - Next noise unit
  • a - Next non-somatic unit
  • u - Go to a specific unit
  • ↑ / ↓ - Navigate through raw data time

GUI views (both Python and MATLAB)

Unit location view

Plots the depth of each unit on the probe (y-axis) vs log-normalized firing rate (x-axis). Colors indicate classification: green = good, orange = MUA, red = noise. Current unit is circled in black.

Template waveform view

Template waveform on the peak channel (blue) with detected peaks overlaid.

Raw waveform view

Mean raw waveforms on the peak channel (blue) with detected peaks overlaid.

ACG view

Auto-correlogram for the current unit. Horizontal red line = firing rate asymptote. Vertical red line = refractory period.

Raw data view

Raw data trace (black) with detected spikes for this unit (blue).

Amplitude view

Spike amplitudes over time (black). Currently displayed spikes in blue. ISI violations (< refractory period) in purple.

Histogram panel

Distribution histograms for each quality metric with the current unit's value marked. Colored lines show classification thresholds.

Workflow: Fine-tuning parameters with the GUI

  1. Run BombCell with default parameters
  2. Open the GUI and browse through units
  3. Manually classify a representative sample of units (e.g., 50-100 units)
    • Focus on units where you disagree with BombCell's classification
  4. Run concordance analysis to compare your labels with BombCell
  5. Apply suggested parameter changes based on disagreements
  6. Re-run BombCell with updated parameters
  7. Repeat until satisfied with classification accuracy