Profile Performance Dashboard - nonmodernist/magic-lantern GitHub Wiki
# 📊 Profile Performance Dashboard: Measuring Success in Context

## 🎯 Overview
The Profile Performance Dashboard aims to measure how well different Magic Lantern research profiles serve specific research questions. Rather than using universal "good/bad" metrics, this approach recognizes that success is contextual - what's valuable for one research question might be noise for another.
## 💡 Core Philosophy

> "The best research tools help discover the unexpected."
Success isn't just about finding what you're looking for - it's about discovering what you didn't know you were looking for.
## ⚠️ The Challenge with Universal Metrics
Traditional metrics fail because research needs vary:
- High result count: Could mean thorough coverage OR lots of duplicate fluff
- Advertising content: Essential for marketing history, noise for narrative analysis
- Brief mentions: Perfect for tracking distribution, useless for deep critique
- Syndicated content: Valuable for media network analysis, repetitive for content analysis
- Long articles: Often higher quality, but not always
## 📚 The Reality of Archival Research
Most importantly, we're working with incomplete historical collections. The Media History Digital Library, while extensive, represents a fraction of what was originally published. Many publications are lost, others are partially digitized, and some have poor OCR quality.
This means:
- 🎉 Finding even one relevant article can be a major research victory
- 📈 A search returning "only" 3 results might actually be finding 100% of what exists in the digitized record
- 📊 "Low" result counts often reflect archival gaps, not search failure
- 🔬 The absence of results is itself valuable data about preservation and historical memory
"Success" must be measured against what's actually available, not an imaginary complete archive.
## 🔍 Let Research Questions Drive Development

### What Are Research Questions?
Research questions are the specific inquiries that drive scholarly investigation. In film history, these might include:
- Reception Studies: "How did rural audiences respond differently than urban audiences to this film?"
- Industrial History: "What distribution strategies did RKO use for horror films in 1933?"
- Cultural Analysis: "How did trade publications discuss women's roles in pre-Code films?"
- Censorship History: "Which specific scenes were cut in different states?"
- Economic History: "How did the Depression affect double feature programming?"
### Why Research Questions Matter for Tool Development
Traditional search tools assume all researchers want the same thing: "relevant" results. But relevance is entirely dependent on the research question:
- A marketing historian researching "King Kong" (1933) needs:
  - Every advertisement variant
  - Promotional tie-ins
  - Exhibitor ballyhoo suggestions
  - Brief mentions showing geographic reach
- A censorship scholar researching the same film needs:
  - State board rulings
  - Specific scene objections
  - Church group responses
  - Editorial debates about appropriateness
- A reception studies scholar needs:
  - Detailed reviews
  - Audience response articles
  - Regional variation in reception
  - Comparative reviews with other films
The same search result - say, a brief news item mentioning "King Kong" was "held over for a second week" - is crucial data for researcher #1, somewhat useful for #3, and probably irrelevant noise for #2.
### How This Shapes Development
By acknowledging that different research questions need different approaches, we can:
- Create profiles that embody specific research methodologies
- Measure success based on research goals, not universal metrics
- Help researchers discover unexpected evidence that serves their particular inquiry
- Avoid the fallacy of one-size-fits-all "precision" scores
## 🎯 Proposed Solution: Research-Goal-Oriented Profiles

Instead of measuring universal "precision," we can create additional research profiles that are designed around specific research questions. These aren't new functionality - they're simply new profile configurations (like `censorship.profile.js` or `boxoffice.profile.js`) that join the existing profiles in the system.
### Example Research Profiles

#### 🎬 "First Week Reception" Profile
- Goal: Understand immediate audience and critical response
- Approach:
  - Tight date ranges (1-2 months after release)
  - Prioritize reviews and audience reactions
  - High weight on "opening," "premiere," "first night"
- Success metrics: Number of opening week reviews found, geographic diversity of reactions
#### 🚫 "Censorship Tracker" Profile
- Goal: Document censorship, controversy, and moral debates
- Approach:
  - Wider date ranges to catch delayed reactions
  - High weight on censorship themes and keywords
  - Search for board rulings, cuts, protests
- Success metrics: Instances of censorship documented, variety of objections found
#### 💰 "Box Office Archaeology" Profile
- Goal: Reconstruct financial performance
- Approach:
  - Focus on trade publications
  - Prioritize exhibitor reports
  - Weight "gross," "receipts," "holds over," "solid," "weak"
- Success metrics: Number of box office data points, weekly performance tracking
#### 🌟 "Cultural Impact" Profile
- Goal: Trace long-term influence and memory
- Approach:
  - Very wide date ranges (years after release)
  - Look for retrospectives, influence mentions
  - Weight "remembered," "influenced," "inspired," "recalls"
- Success metrics: Temporal spread of mentions, variety of impact types
#### 🎭 "Production History" Profile
- Goal: Document behind-the-scenes development
- Approach:
  - Dates before and during production
  - Weight cast/crew names, studio terminology
  - Find trade announcements, gossip columns
- Success metrics: Production timeline coverage, key decisions documented
## 🔧 Implementation in the Search Matrix

These research-goal-oriented profiles are simply additional profiles that can be selected, just like the existing `default.profile.js`, `exhibition.profile.js`, or `adaptation-studies.profile.js`. They work within the current system:
- Corpus size: small/medium/large (determines search breadth)
- Film list: User's CSV (determines period/genre/scope)
- Research profile: Choose from expanded set including research-question-specific profiles
For example, a user could run:
`node magic-lantern.js films.csv --corpus=large --profile=censorship`

Each new profile is just another `.profile.js` file with weights and strategies optimized for that research question.
## 📈 Success Metrics by Context

### 🎯 Goal Achievement View
Rather than universal quality scores, measure achievement of stated goals:
| Research Goal       | Achievement     |
|---------------------|-----------------|
| Find period reviews | ████████░░ 78%  |
| Geographic coverage | ██████░░░░ 62%  |
| Unique perspectives | █████████░ 94%  |
| Avoid syndicated    | ███████░░░ 71%  |
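The bar display above is straightforward to generate once achievement is tracked as a 0-1 ratio; a small formatting sketch:

```javascript
// Render a goal-achievement bar: a 10-cell block bar plus a percentage.
// Purely illustrative display code, not part of the existing tool.
function achievementBar(ratio, width = 10) {
  const clamped = Math.min(1, Math.max(0, ratio)); // guard bad input
  const filled = Math.round(clamped * width);
  return '█'.repeat(filled) + '░'.repeat(width - filled) +
    ` ${Math.round(clamped * 100)}%`;
}

console.log('Find period reviews | ' + achievementBar(0.78));
console.log('Geographic coverage | ' + achievementBar(0.62));
```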
### 📊 Key Metrics to Track

- 🎯 Goal-Specific Success
  - Reviews found (for review-focused research)
  - Geographic spread (for distribution research)
  - Unique vs. syndicated content ratio
  - Temporal coverage
- 🔮 Discovery Metrics
  - Unexpected content types found
  - Surprising date ranges that yielded results
  - Cross-references to other films/topics
- 📚 Archival Reality Metrics
  - Coverage assessment: "Found 3 of likely 3-5 existing digitized articles" is better than "Found 3 articles"
  - Rarity indicators: Flag when results come from rare/unique publications
  - Gap documentation: When searches find nothing, note which publications/dates were searched
  - Victory highlights: Celebrate single finds from hard-to-search publications or poor OCR conditions
- ⚡ Efficiency Metrics
  - Search strategies that worked best for each goal
  - Publication sources most valuable for each research type
  - Optimal date ranges for different research questions
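The "archival reality" framing could be produced by a small helper that reports finds against an estimate of what survives in the digitized record. A sketch, where the estimate is supplied by the caller (in practice it would come from collection metadata; the function and its parameters are hypothetical):

```javascript
// Frame a raw result count against an estimated range of surviving
// digitized articles, rather than against an imaginary complete archive.
function coverageReport({ found, estimatedLow, estimatedHigh }) {
  if (estimatedHigh === 0) {
    // Absence of results is itself data about preservation
    return `Found ${found} articles; no digitized coverage expected - absence is itself data`;
  }
  const note = found >= estimatedLow
    ? 'likely near-complete coverage of the digitized record'
    : 'possible gaps, or estimate too high';
  return `Found ${found} of likely ${estimatedLow}-${estimatedHigh} ` +
    `existing digitized articles (${note})`;
}

console.log(coverageReport({ found: 3, estimatedLow: 3, estimatedHigh: 5 }));
```

Under this framing, 3 results can register as a victory instead of a "low" count.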
## 🎯 Example: Same Film, Different Discoveries

### Researching "Baby Face" (1933)
- 🚫 Censorship Tracker: Finds pre-Code controversy, state-by-state bans, moral objections
- 💰 Box Office Archaeology: Finds week-by-week performance, comparison to other pre-Code films
- 🌟 Cultural Impact: Finds 1960s feminist film retrospectives, influence on women's pictures
- 🎬 First Week Reception: Finds opening night reactions, early critical divisions
Each profile helps discover different unexpected aspects of the same film.
## 🚀 Next Steps for Development

- Define Core Research Profiles
  - Identify 5-10 common research approaches
  - Create profile configurations for each
  - Set goal-specific success metrics
- Build Measurement Framework
  - Track which content serves which research goals
  - Measure "surprise" factor - unexpected valuable finds
  - Create feedback mechanism for researchers
- Visualization Design
  - Goal achievement displays
  - Comparative profile performance
  - Discovery highlights
  - Research path visualization
- Iterate Based on Use
  - Gather researcher feedback
  - Identify new research patterns
  - Create new profiles as needed
## ❓ Open Questions
- Should profiles be mixable? (e.g., 70% reviews + 30% box office)
- How do we measure "unexpected discoveries" systematically?
- Can we auto-suggest profiles based on the film list provided? Or based on how a researcher answers questions?
- Should profiles adapt based on what they're finding?
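On the first question, mixing could be as simple as a weighted blend of two profiles' keyword weights. A sketch, assuming profiles expose an illustrative `keywordWeights` map (not the actual schema):

```javascript
// Blend two profiles' keyword weights: ratioA of profile a,
// (1 - ratioA) of profile b. Hypothetical sketch only.
function mixProfiles(a, b, ratioA = 0.7) {
  const mixed = {};
  const keys = new Set([
    ...Object.keys(a.keywordWeights),
    ...Object.keys(b.keywordWeights),
  ]);
  for (const key of keys) {
    mixed[key] = (a.keywordWeights[key] || 0) * ratioA +
                 (b.keywordWeights[key] || 0) * (1 - ratioA);
  }
  return { name: `${a.name}+${b.name}`, keywordWeights: mixed };
}

const reviews = { name: 'reviews', keywordWeights: { review: 3, critic: 2 } };
const boxoffice = { name: 'boxoffice', keywordWeights: { gross: 3, review: 1 } };
// 70% reviews + 30% box office, as in the open question above
const blend = mixProfiles(reviews, boxoffice, 0.7);
```

A linear blend keeps mixed profiles inside the same `.profile.js` shape, so the rest of the system would not need to know a profile was mixed.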
## 🌟 The Vision
A research tool that doesn't just find what you're looking for, but helps you discover what you didn't know to look for - with success measured not by universal metrics, but by how well it serves your specific research question.