Reference: Ecosystem Static Analysis Tool Comparison - Criterion Evaluation Template (rpapub/WatchfulAnvil GitHub Wiki)

UiPath Workflow Analyzer – Criterion-Level Evaluation

Functional Categories

Rule Coverage

Detects style violations

Score: [1–5] Weight: 20 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Evaluates whether the tool enforces consistent code formatting, naming conventions, and layout structure.
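
To make the criterion concrete, the hypothetical helper below shows the kind of convention a style rule encodes (here: camelCase variable names of at least three characters). The pattern and threshold are illustrative assumptions, not UiPath defaults.

```csharp
using System;
using System.Text.RegularExpressions;

// Hypothetical helper: the kind of convention a style rule encodes.
// The camelCase pattern and minimum length are assumptions for illustration,
// not a UiPath default.
public static class NamingConvention
{
    private static readonly Regex CamelCase = new Regex("^[a-z][A-Za-z0-9]{2,}$");

    public static bool IsValidVariableName(string name) =>
        name != null && CamelCase.IsMatch(name);

    public static void Main()
    {
        Console.WriteLine(IsValidVariableName("invoiceTotal")); // True
        Console.WriteLine(IsValidVariableName("X"));            // False
    }
}
```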

πŸ“Š Scoring Rubric

  • 5: Enforces naming, spacing, casing, layout, and consistency
  • 4: Detects most common style violations with some customization
  • 3: Detects basic formatting/naming only
  • 2: Few hardcoded checks; little customization
  • 1: No style checks at all

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Detects security vulnerabilities

Score: [1–5] Weight: 20 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Assesses the tool’s ability to detect insecure code patterns, such as injection risks, weak cryptography, or unvalidated input.
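
As a minimal illustration of the lower end of this rubric, the hypothetical helper below flags hardcoded credentials in an expression string by pattern matching. The keyword list and regex are assumptions for illustration; anything scoring 4–5 requires flow or taint analysis rather than string matching.

```csharp
using System;
using System.Text.RegularExpressions;

// Hypothetical helper: a crude pattern check for hardcoded credentials in an
// expression string. The keyword list and regex are illustrative assumptions;
// a rule scoring 4-5 on this rubric would need flow/taint analysis, not this.
public static class SecretScan
{
    private static readonly Regex HardcodedSecret =
        new Regex("(?i)(password|apikey|secret|token)\\s*[:=]\\s*\"[^\"]+\"");

    public static bool LooksLikeHardcodedSecret(string expression) =>
        expression != null && HardcodedSecret.IsMatch(expression);

    public static void Main()
    {
        Console.WriteLine(LooksLikeHardcodedSecret("password = \"hunter2\"")); // True
        Console.WriteLine(LooksLikeHardcodedSecret("in_UserName"));            // False
    }
}
```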

πŸ“Š Scoring Rubric

  • 5: Covers OWASP/common vulnerabilities; includes taint/flow analysis
  • 4: Detects key security issues (e.g., injection, weak crypto)
  • 3: Some hardcoded security checks
  • 2: Very limited or outdated checks
  • 1: No security detection

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Detects performance issues

Score: [1–5] Weight: 20 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Determines whether the tool can identify inefficient patterns like nested loops, redundant operations, or high memory usage.

πŸ“Š Scoring Rubric

  • 5: Identifies inefficient patterns, memory usage, loops, database calls
  • 4: Detects common performance smells (e.g., nested loops, recursion)
  • 3: Some performance hints only
  • 2: Very basic or occasional warnings
  • 1: No performance detection

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Architecture/design pattern validation

Score: [1–5] Weight: 20 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Examines whether the tool can check for adherence to architectural boundaries or design principles, such as layering, decoupling, or naming contracts.

πŸ“Š Scoring Rubric

  • 5: Supports rule-based architecture enforcement (layering, dependencies)
  • 4: Can enforce modularity, naming conventions for architecture
  • 3: Some convention checks that suggest design intent
  • 2: Only ad hoc or implicit design validation
  • 1: No architectural understanding

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Severity levels / categories supported

Score: [1–5] Weight: 20 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Checks if the tool allows classification of rule violations by severity or category, helping teams prioritize issues.

πŸ“Š Scoring Rubric

  • 5: Full control over rule severity, categories, and groups
  • 4: Severity can be configured or mapped
  • 3: Basic default severities exist
  • 2: Fixed severity only (e.g., everything is a warning)
  • 1: No severity levels or classification

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Distinguishes warnings vs errors

Score: [1–5] Weight: 20 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Looks at whether the tool separates minor issues (warnings) from critical ones (errors) and whether this is configurable.

πŸ“Š Scoring Rubric

  • 5: Clear separation between info, warning, and error; configurable
  • 4: Warnings vs errors are separate, but not customizable
  • 3: Warnings and errors shown, but system-defined only
  • 2: Only one violation level (e.g., just β€œerror”)
  • 1: No distinction made; all rules treated equally

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Custom Rule Support

SDK or API for rule development

Score: [1–5] Weight: 15 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Evaluates whether the tool provides an official, documented interface for creating custom rules programmatically.
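
For orientation, below is a minimal sketch of a custom rule written against the publicly documented Workflow Analyzer SDK pattern (IRegisterAnalyzerConfiguration plus Rule<IActivityModel> from UiPath.Activities.Api). Type and member names follow the public documentation but should be verified against the SDK version referenced by the project; the rule ID and the naming check itself are placeholders.

```csharp
using System.Collections.Generic;
using System.Diagnostics;
using UiPath.Studio.Activities.Api;
using UiPath.Studio.Activities.Api.Analyzer;
using UiPath.Studio.Activities.Api.Analyzer.Rules;
using UiPath.Studio.Analyzer.Models;

// Entry point discovered by Studio: registers one or more custom rules.
public class RegisterAnalyzerConfiguration : IRegisterAnalyzerConfiguration
{
    public void Initialize(IAnalyzerConfigurationService workflowAnalyzerConfigService)
    {
        workflowAnalyzerConfigService.AddRule(DefaultNameRule.Get());
    }
}

internal static class DefaultNameRule
{
    private const string RuleId = "ORG-NAM-001"; // placeholder rule ID

    internal static Rule<IActivityModel> Get()
    {
        var rule = new Rule<IActivityModel>("Activities should be renamed", RuleId, Inspect);
        rule.DefaultErrorLevel = TraceLevel.Warning;
        return rule;
    }

    // Called for each activity in the inspected project.
    private static InspectionResult Inspect(IActivityModel activity, Rule configuredRule)
    {
        // Placeholder check: flag activities that still carry a generic display name.
        bool flagged = activity.DisplayName != null && activity.DisplayName.StartsWith("Assign");

        return new InspectionResult
        {
            HasErrors = flagged,
            ErrorLevel = configuredRule.DefaultErrorLevel,
            RecommendationMessage = "Rename the activity to describe what it does.",
            Messages = flagged
                ? new List<string> { $"'{activity.DisplayName}' still uses a default name." }
                : new List<string>()
        };
    }
}
```

Deployment specifics (compiling to a class library and making it available to Studio, e.g. as a NuGet package or via a rules folder) vary by Studio version and are evaluated separately under Deployment instructions.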

πŸ“Š Scoring Rubric

  • 5: Well-documented SDK or public API; officially supported and stable
  • 4: Usable SDK/API with partial documentation or limited support
  • 3: Community-supported SDK or unofficial APIs
  • 2: Some extension points exist but no SDK
  • 1: No way to create custom rules

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Rules written in standard language

Score: [1–5] Weight: 15 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Assesses if rules can be authored using general-purpose programming languages (e.g., C#, Python) instead of proprietary formats.

πŸ“Š Scoring Rubric

  • 5: Rules authored in well-known languages (e.g., C#, JS, Python)
  • 4: Rules use structured scripting with familiar syntax
  • 3: Rules use custom DSL or config, but expressive
  • 2: Rules defined via rigid config/GUI with limited logic
  • 1: No control over rule logic

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Rule metadata/tags

Score: [1–5] Weight: 15 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Checks if rules can include structured metadata such as descriptions, severity levels, categories, or tags for classification.
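
The hypothetical record below lists the kind of structured metadata this criterion asks about, as a checklist while testing. Field names are illustrative, not part of the UiPath SDK; which of them the SDK's Rule object actually exposes (beyond ID, display name, and default error level) should be confirmed and noted in the findings.

```csharp
using System.Collections.Generic;

// Hypothetical shape of the metadata worth capturing per rule while filling in
// this template. Field names are illustrative and not part of the UiPath SDK.
public record RuleMetadata(
    string Id,                 // e.g. "ORG-NAM-001"
    string DisplayName,
    string Description,
    string Category,           // e.g. "Naming", "Security"
    string DefaultSeverity,    // e.g. "Warning"
    IReadOnlyList<string> Tags);
```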

πŸ“Š Scoring Rubric

  • 5: Rules support custom metadata, tags, categories, severity, descriptions
  • 4: Supports some metadata (e.g., severity or category)
  • 3: Rules include basic descriptors only (e.g., name, ID)
  • 2: Very limited metadata (e.g., label only)
  • 1: No metadata associated

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Ability to share rule packages

Score: [1–5] Weight: 15 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Determines whether custom rules can be bundled and distributed as reusable packages or plugins.

πŸ“Š Scoring Rubric

  • 5: Rules can be exported/imported as packages or plugins
  • 4: Rules shareable via structured export or file copy
  • 3: Manual copying possible; no packaging mechanism
  • 2: Sharing requires source-level access or rebuild
  • 1: No practical way to share rules

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Versioning / backward compatibility

Score: [1–5] Weight: 15 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Looks at whether the tool supports version control of rules and maintains compatibility across versions.

πŸ“Š Scoring Rubric

  • 5: Rules support versioning; backward-compatible changes are respected
  • 4: Rules have version tags but partial compatibility support
  • 3: Versioning must be managed manually
  • 2: Rule changes can break usage; no guidance
  • 1: No versioning or compatibility concept

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Rule Granularity

Fine-tuned configuration (per rule)

Score: [1–5] Weight: 10 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Evaluates if individual rules can be enabled, disabled, or configured independently.

πŸ“Š Scoring Rubric

  • 5: Each rule can be configured individually (thresholds, conditions, scope)
  • 4: Rules have individual toggles and basic settings
  • 3: Some rules configurable, others hardcoded
  • 2: Global settings only; little rule-level control
  • 1: No per-rule configuration

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Scoped rule targeting (file/module/etc.)

Score: [1–5] Weight: 10 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Assesses whether rules can be applied selectively based on project structure, such as specific files, folders, or components.

πŸ“Š Scoring Rubric

  • 5: Rules can be scoped to project, folder, file, or specific modules
  • 4: Rules can be grouped or filtered by component or type
  • 3: Some high-level targeting (e.g., by project)
  • 2: Rules apply globally or only at app-level
  • 1: No scoping; all rules apply everywhere equally

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Parameterized rules

Score: [1–5] Weight: 10 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Determines whether rules accept configurable inputs or thresholds (e.g., max line length, naming prefix).
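
The sketch below shows a parameterized rule following the Parameters pattern from the public Workflow Analyzer SDK examples. The rule ID, parameter key, default value, and the model members used (Variables, DisplayName) are assumptions to be verified against the installed UiPath.Activities.Api version.

```csharp
using System.Collections.Generic;
using System.Diagnostics;
using UiPath.Studio.Activities.Api.Analyzer.Rules;
using UiPath.Studio.Analyzer.Models;

// Sketch of a parameterized rule. Property names (Key, Value, DefaultValue,
// LocalizedDisplayName) mirror the public SDK examples but should be verified
// against the SDK version in use; the threshold and IDs are placeholders.
internal static class MaxVariableLengthRule
{
    private const string RuleId = "ORG-VAR-001";      // placeholder rule ID
    private const string MaxLengthKey = "MaxLength";  // parameter key

    internal static Rule<IActivityModel> Get()
    {
        var rule = new Rule<IActivityModel>("Variable name length", RuleId, Inspect)
        {
            DefaultErrorLevel = TraceLevel.Warning
        };
        rule.Parameters.Add(MaxLengthKey, new Parameter
        {
            Key = MaxLengthKey,
            DefaultValue = "20",
            LocalizedDisplayName = "Maximum variable name length"
        });
        return rule;
    }

    private static InspectionResult Inspect(IActivityModel activity, Rule configuredRule)
    {
        // The user-configured value arrives as a string and is parsed here.
        int maxLength = int.TryParse(configuredRule.Parameters[MaxLengthKey]?.Value, out var n) ? n : 20;

        var offenders = new List<string>();
        foreach (var variable in activity.Variables)
        {
            if (variable.DisplayName != null && variable.DisplayName.Length > maxLength)
                offenders.Add(variable.DisplayName);
        }

        return new InspectionResult
        {
            HasErrors = offenders.Count > 0,
            ErrorLevel = configuredRule.DefaultErrorLevel,
            RecommendationMessage = $"Keep variable names at or below {maxLength} characters.",
            Messages = offenders
        };
    }
}
```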

πŸ“Š Scoring Rubric

  • 5: Rules accept parameters (e.g., max complexity = 10), configurable by user
  • 4: Parameters can be set globally or by rule set
  • 3: A few parameters exposed for tuning
  • 2: Parameters exist but are hidden/hardcoded
  • 1: No parameterization possible

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Static Validation Scope

Validates executable code

Score: [1–5] Weight: 10 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Checks if the tool analyzes the code that will be executed (source code or compiled output).
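
A quick, SDK-independent way to probe this criterion is to treat a workflow .xaml file as plain XML, which corresponds to the "surface-level checks" end of the rubric. The sketch below does exactly that; the file path is a placeholder, and semantic analysis of the embedded VB/C# expressions would require much more.

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

// Minimal, SDK-independent probe: treat a workflow .xaml file as plain XML and
// enumerate its elements. This is the "surface-level check" end of the rubric.
public static class XamlProbe
{
    public static void Main()
    {
        var doc = XDocument.Load("Main.xaml"); // placeholder path

        var namedElements = doc.Descendants()
            .Where(e => e.Attribute("DisplayName") != null)
            .ToList();

        Console.WriteLine($"Elements with a DisplayName attribute: {namedElements.Count}");
        foreach (var e in namedElements.Take(10))
            Console.WriteLine($"  {e.Name.LocalName}: {e.Attribute("DisplayName")?.Value}");
    }
}
```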

πŸ“Š Scoring Rubric

  • 5: Fully analyzes source or compiled code (AST, IL, or similar)
  • 4: Parses and evaluates source code with semantic awareness
  • 3: Performs basic static checks on code (e.g., patterns, syntax)
  • 2: Surface-level checks only; no deep analysis
  • 1: Does not validate code at all

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Validates config/workflow artifacts

Score: [1–5] Weight: 10 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Evaluates whether non-code components like configuration files or workflow models are also validated.
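
For the config side, the sketch below reads project.json and checks that a field is present, again independent of any analyzer SDK. The "description" field and the path are assumptions based on the usual project.json layout and should be confirmed for the Studio version under test.

```csharp
using System;
using System.IO;
using System.Text.Json;

// Sketch of a basic config-artifact check: read project.json and verify a
// field is present. The field name and path are placeholders; confirm the
// actual schema for the Studio version under test.
public static class ProjectJsonCheck
{
    public static void Main()
    {
        using var doc = JsonDocument.Parse(File.ReadAllText("project.json")); // placeholder path
        var root = doc.RootElement;

        bool hasDescription = root.TryGetProperty("description", out var desc)
                              && !string.IsNullOrWhiteSpace(desc.GetString());

        Console.WriteLine(hasDescription
            ? "project.json has a description."
            : "Violation: project.json is missing a description.");
    }
}
```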

πŸ“Š Scoring Rubric

  • 5: Deep analysis of config/workflow files with structural awareness
  • 4: Parses workflows/configs with schema and rule logic
  • 3: Basic checks like key/value or required fields
  • 2: Validates structure only (e.g., syntax)
  • 1: Does not validate non-code artifacts

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Can analyze dependencies

Score: [1–5] Weight: 10 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Assesses whether the tool can inspect referenced libraries or modules and consider their impact.

πŸ“Š Scoring Rubric

  • 5: Analyzes transitive dependencies and their impact on behavior
  • 4: Detects direct dependencies and usage issues
  • 3: Lists or references dependencies without analysis
  • 2: Acknowledges dependencies but does not inspect them
  • 1: Ignores all dependencies

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Third-party file type support

Score: [1–5] Weight: 10 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Determines whether the tool supports analysis of formats outside its core language or platform (e.g., JSON, YAML, XML).

πŸ“Š Scoring Rubric

  • 5: Supports custom or external file types (e.g., JSON, YAML, XAML, .ruleset)
  • 4: Handles common config or integration formats
  • 3: Supports limited structured formats
  • 2: Only supports proprietary or internal formats
  • 1: No external file support

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Test Support

Framework for rule unit testing

Score: [1–5] Weight: 2.5 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Checks if there is a structured way to write and run automated tests for custom rule logic.
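
No official test harness is assumed here; the sketch below shows the rubric-level-3 workaround of keeping the decision logic in a pure, SDK-free method and covering it with an ordinary xUnit test. All names are illustrative.

```csharp
using Xunit;

// Rubric-level-3 approach: keep the decision logic in a pure, SDK-free method
// and cover it with an ordinary xUnit test. Names are illustrative.
public static class VariableNamePolicy
{
    public static bool IsTooLong(string displayName, int maxLength) =>
        displayName != null && displayName.Length > maxLength;
}

public class VariableNamePolicyTests
{
    [Theory]
    [InlineData("dt_invoices", 20, false)]
    [InlineData("dt_invoicesFilteredByDueDateAndStatus", 20, true)]
    public void FlagsNamesOverTheConfiguredLength(string name, int max, bool expected)
    {
        Assert.Equal(expected, VariableNamePolicy.IsTooLong(name, max));
    }
}
```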

πŸ“Š Scoring Rubric

  • 5: Official or community-supported test framework for authoring unit tests against custom rules
  • 4: Framework available but limited in documentation or flexibility
  • 3: Testable via general-purpose unit test frameworks (e.g., xUnit, JUnit), no dedicated support
  • 2: Possible to test rules manually or with effort, no structure
  • 1: No support for testing custom rules

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

CI feedback for test failure

Score: [1–5] Weight: 2.5 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Evaluates whether rule violations or test failures can be surfaced directly in CI pipelines with actionable results.

πŸ“Š Scoring Rubric

  • 5: Full CI integration with pass/fail status based on rule test results or violations
  • 4: CI output includes rule violation info or logs clearly
  • 3: CI shows results but not actionable (e.g., no failure gating)
  • 2: Rule results available post-build only, not integrated
  • 1: No CI feedback or integration available

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Integration Points

IDE integration

Score: [1–5] Weight: 2.5 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Assesses if the tool runs within or alongside the IDE, providing real-time feedback to developers.

πŸ“Š Scoring Rubric

  • 5: Native, real-time feedback with inline annotations in major IDEs
  • 4: Plugin or extension available; good user experience
  • 3: Basic support via output or manual refresh
  • 2: Integration possible but manual setup or poor UX
  • 1: No IDE integration at all

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

CLI support

Score: [1–5] Weight: 2.5 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Determines whether the tool can be executed via command line with configurable options.

πŸ“Š Scoring Rubric

  • 5: Full-featured CLI with rule execution, config, output control
  • 4: CLI exists and can run scans with basic options
  • 3: CLI available but limited or undocumented
  • 2: CLI possible through wrappers or scripts
  • 1: No CLI interface

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Git hook support

Score: [1–5] Weight: 2.5 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Checks whether the tool can be integrated into Git workflows via pre-commit or pre-push hooks.

πŸ“Š Scoring Rubric

  • 5: Built-in support for pre-commit, pre-push hooks with docs
  • 4: Easy to configure via CLI or plugins
  • 3: Git hook integration possible with effort
  • 2: Only achievable with custom scripting
  • 1: No Git integration support

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

CI/CD pipelines

Score: [1–5] Weight: 2.5 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Evaluates whether the tool integrates cleanly into continuous integration and deployment workflows.

πŸ“Š Scoring Rubric

  • 5: First-class integration with major CI tools (e.g., GitHub Actions, Azure DevOps, GitLab CI)
  • 4: CI support available via plugins or templates
  • 3: Integration through custom scripts or wrappers
  • 2: Basic support; manual steps required
  • 1: No CI/CD support

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

SCM integration (Git, SVN)

Score: [1–5] Weight: 2.5 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Looks at whether the tool can read source control metadata or interact directly with version control systems.

πŸ“Š Scoring Rubric

  • 5: Native support for SCM metadata, blame/annotate, branching context
  • 4: SCM-aware checks or rule filtering by commit context
  • 3: Reads files from SCM but not SCM-aware
  • 2: Can be run in SCM context manually
  • 1: No integration or SCM-awareness

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Non-Functional Categories

Usability

UI or dashboard for violations

Score: [1–5] Weight: 10 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Evaluates whether the tool provides a graphical interface or dashboard to review, filter, and understand rule violations.

πŸ“Š Scoring Rubric

  • 5: Rich, interactive UI/dashboard with filtering, grouping, and detail views
  • 4: Clean UI with basic violation summaries and navigation
  • 3: Violations shown in output/log or simple list
  • 2: Minimal UI; basic tables or text files only
  • 1: No UI; raw output or CLI-only

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Rule suppression w/ comments

Score: [1–5] Weight: 10 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Checks if developers can suppress specific rule violations inline, ideally with explanatory comments.

πŸ“Š Scoring Rubric

  • 5: Fine-grained suppression (e.g., inline, per rule) with required comment support
  • 4: Supports suppression with optional comments
  • 3: Allows suppression globally or via config
  • 2: Manual workaround required for suppression
  • 1: No suppression support

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Inline suggestions or autofix

Score: [1–5] Weight: 10 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Determines whether the tool offers actionable suggestions or automatic code fixes where applicable.

πŸ“Š Scoring Rubric

  • 5: Suggests fixes and can apply them automatically in the IDE
  • 4: Suggests fixes inline; manual fix required
  • 3: Suggests fixes outside the editor (e.g., report)
  • 2: Hints only; no suggestion system
  • 1: No suggestions or fix info

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Navigation to offending location

Score: [1–5] Weight: 10 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Assesses whether users can jump directly to the location of a violation within the code or visual editor.

πŸ“Š Scoring Rubric

  • 5: Click-through or jump-to-source from report or UI
  • 4: Integrated editor navigation support
  • 3: File/line info shown; user navigates manually
  • 2: Location only shown in logs
  • 1: No location info provided

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Onboarding support / ease of use

Score: [1–5] Weight: 10 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Evaluates how easily a new user can begin using the tool, including setup guides, templates, and UI clarity.

πŸ“Š Scoring Rubric

  • 5: Tutorials, templates, sample rules, guided setup
  • 4: Docs and examples available; moderate learning curve
  • 3: Basic docs; requires experimentation
  • 2: Sparse docs or hard to follow
  • 1: No guidance or support materials

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Performance

Fast scan time on large projects

Score: [1–5] Weight: 5 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Measures the tool’s performance when scanning large codebases, with emphasis on responsiveness.

πŸ“Š Scoring Rubric

  • 5: Scans large codebases (>10k files) in under a minute; optimized indexing
  • 4: Fast on typical projects; acceptable performance on large ones
  • 3: Moderate scan time, noticeable delays at larger scale
  • 2: Slow scans even on medium-sized projects
  • 1: Very slow or impractical for large projects

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Incremental scanning

Score: [1–5] Weight: 5 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Determines whether the tool supports scanning only modified files or sections to improve performance.

πŸ“Š Scoring Rubric

  • 5: Only changed files re-scanned; instant feedback
  • 4: Partial incremental support; fast rebuilds
  • 3: Detects changes but still re-scans broadly
  • 2: Scans everything every time
  • 1: No incremental support at all

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Low IDE lag

Score: [1–5] Weight: 5 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Assesses whether the tool introduces noticeable delays or UI sluggishness during development.

πŸ“Š Scoring Rubric

  • 5: Real-time feedback with no UI delays or blocking
  • 4: Light performance impact in IDE
  • 3: Noticeable delay but tolerable
  • 2: Frequent UI lag or sluggishness during scan
  • 1: Heavy IDE slowdown or freeze during analysis

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Extensibility

Plugin support

Score: [1–5] Weight: 10 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Checks if the tool can be extended via plugins, with a defined API and loading mechanism.

πŸ“Š Scoring Rubric

  • 5: Robust plugin architecture with full lifecycle support and documentation
  • 4: Plugin interface exists and is stable
  • 3: Partial or community-driven plugin system
  • 2: Limited extension through scripts or injection
  • 1: No plugin or extension support

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Scripting/custom logic

Score: [1–5] Weight: 10 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Evaluates whether users can define custom behavior or rules using scripting or embedded logic.

πŸ“Š Scoring Rubric

  • 5: Supports scripting (e.g., Python, JS, C#) to define behavior/rules
  • 4: Rules can use expressions or light scripting
  • 3: Custom logic possible but awkward
  • 2: Only hardcoded extensions
  • 1: No scripting or logic customization

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Output format customization

Score: [1–5] Weight: 10 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Assesses whether the tool allows configuration of output format (e.g., JSON, XML, custom reports).

πŸ“Š Scoring Rubric

  • 5: Fully customizable output (e.g., SARIF, JSON, HTML, XML)
  • 4: Multiple output formats with limited control
  • 3: One or two formats with minor formatting options
  • 2: Output in plain text or CLI log only
  • 1: No control over output

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Hooking into analysis pipeline

Score: [1–5] Weight: 10 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Checks if the tool supports custom processing steps before, during, or after analysis runs.

πŸ“Š Scoring Rubric

  • 5: Supports lifecycle hooks (before, after, on event)
  • 4: Allows limited hook points (e.g., pre/post scan)
  • 3: Can wrap core commands externally
  • 2: Needs manual scripting or automation
  • 1: No pipeline interaction possible

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Documentation

SDK docs

Score: [1–5] Weight: 2.5 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Evaluates the availability and quality of documentation specifically for developers using the SDK.

πŸ“Š Scoring Rubric

  • 5: Complete, well-structured API/SDK documentation with examples
  • 4: Mostly complete SDK docs; some gaps
  • 3: Basic reference only; few examples
  • 2: Sparse or outdated SDK docs
  • 1: No SDK documentation

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Rule authoring guides

Score: [1–5] Weight: 2.5 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Assesses whether there are practical guides or examples for writing custom rules.

πŸ“Š Scoring Rubric

  • 5: Step-by-step guides, tutorials, and samples for writing rules
  • 4: Detailed documentation but missing walkthroughs
  • 3: Overview + some samples, no deep explanation
  • 2: Minimal instructions for authoring
  • 1: No guidance at all

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Deployment instructions

Score: [1–5] Weight: 2.5 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Checks for clear documentation on how to deploy, distribute, or activate custom rules within the tool or platform.

πŸ“Š Scoring Rubric

  • 5: Clear, tested deployment process with examples
  • 4: Mostly clear; a few assumptions
  • 3: Generic guidance only
  • 2: Sparse, user must experiment
  • 1: No deployment help provided

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Maintainability

Changelog & versioning

Score: [1–5] Weight: 2.5 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Assesses whether the tool provides clear release notes and follows a consistent versioning scheme.

πŸ“Š Scoring Rubric

  • 5: Versioned releases with clear changelogs and upgrade notes
  • 4: Changelog exists, mostly up to date
  • 3: Tags/releases but minimal detail
  • 2: Infrequent or unclear updates
  • 1: No visible versioning or changelog

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Active maintenance

Score: [1–5] Weight: 2.5 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Evaluates the frequency of updates and the presence of ongoing development or vendor engagement.

πŸ“Š Scoring Rubric

  • 5: Regular updates, active development repo or release cadence
  • 4: Frequent updates but sometimes sporadic
  • 3: Occasional updates (e.g., yearly)
  • 2: Very slow development
  • 1: Abandoned or legacy-only

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Support/contact available

Score: [1–5] Weight: 2.5 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Checks whether users can access support through official channels (e.g., tickets, email, chat).

πŸ“Š Scoring Rubric

  • 5: Multiple support channels (tickets, chat, forums, GitHub)
  • 4: Official support or strong community
  • 3: Community forum with moderate activity
  • 2: Hard to get help or slow response
  • 1: No support options available

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Ecosystem

Community-contributed rules

Score: [1–5] Weight: 2.5 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Determines whether there is a visible ecosystem of publicly shared, reusable rules from external contributors.

πŸ“Š Scoring Rubric

  • 5: Large ecosystem of reusable rules, actively maintained by users
  • 4: Several rule sets available; some community involvement
  • 3: A few rules shared informally
  • 2: Rare contributions; hard to find
  • 1: No community rules available

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Marketplace or registry

Score: [1–5] Weight: 2.5 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Assesses whether the tool has a central location to browse, search, or install rules or plugins.

πŸ“Š Scoring Rubric

  • 5: Well-organized marketplace or registry with search, categories, versioning
  • 4: Exists and usable but limited UX/features
  • 3: Informal listing (e.g., GitHub repo, doc page)
  • 2: Occasional plugins shared but no official channel
  • 1: No registry or listing mechanism

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Active discussion channels

Score: [1–5] Weight: 2.5 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Evaluates whether there are active forums, chats, or community spaces for user questions and support.

πŸ“Š Scoring Rubric

  • 5: Multiple active forums, Discord/Slack, GitHub Issues, StackOverflow
  • 4: One or two active, responsive channels
  • 3: Exists but low activity or engagement
  • 2: Sparse or archived discussions only
  • 1: No discussion/support presence outside vendor docs

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Governance & Reporting

Policy enforcement (block builds)

Score: [1–5] Weight: 2.5 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Checks if the tool can enforce quality gates by stopping builds or deploys when rule violations are detected.

πŸ“Š Scoring Rubric

  • 5: Violations can fail builds or merge requests based on rules
  • 4: Warnings configurable into errors or blockers
  • 3: Integration with approval gates but manual
  • 2: Only flagging, no enforcement
  • 1: Cannot influence delivery lifecycle

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Dashboards & metrics

Score: [1–5] Weight: 2.5 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Evaluates the availability of summary views, charts, or metrics that show rule violations over time or by category.

πŸ“Š Scoring Rubric

  • 5: Interactive dashboards with filtering, timelines, severity breakdowns
  • 4: Good charts/metrics, less interactive
  • 3: Basic dashboards only
  • 2: CSV or manual reporting only
  • 1: No dashboards or visualization

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Exportable reports

Score: [1–5] Weight: 2.5 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Determines whether reports can be exported in common formats (e.g., PDF, CSV, SARIF) for sharing or archiving.

πŸ“Š Scoring Rubric

  • 5: Multiple formats (SARIF, PDF, CSV, JSON), customizable exports
  • 4: Common formats available
  • 3: Basic exports, minimal customization
  • 2: Only raw output logs
  • 1: No reporting/export feature

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Audit trail

Score: [1–5] Weight: 2.5 Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Assesses whether the tool records violation history, rule execution, and developer actions over time.

πŸ“Š Scoring Rubric

  • 5: Full violation history, timestamps, user actions
  • 4: Basic history with timestamps
  • 3: Tracks violations but not historical state
  • 2: Manual logging or partial metadata
  • 1: No tracking/auditing support

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Cost & Licensing

Free or OSS Available

Score: [1–5] Weight: X Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Evaluates whether the tool or its core features are freely available or open source.

πŸ“Š Scoring Rubric

  • 5: Fully open source or completely free for all use cases, including commercial
  • 4: Free for commercial use, but not open source
  • 3: Free for personal or non-commercial use only (e.g., Community Edition)
  • 2: Free only for evaluation or heavily limited usage
  • 1: No free or OSS version available

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Enterprise Cost Level

Score: [1–5] Weight: X Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Assesses the typical cost of using the tool at scale, including licensing or platform dependencies.

πŸ“Š Scoring Rubric

  • 5: Very low or flat-rate cost; inexpensive at scale
  • 4: Reasonable per-user or usage-based cost
  • 3: Mid-range pricing, acceptable for SMBs
  • 2: High cost for scale or bundled with expensive platforms
  • 1: Premium pricing, cost-prohibitive for many

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Pricing Transparency

Score: [1–5] Weight: X Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Determines whether pricing is publicly documented and understandable without sales contact.

πŸ“Š Scoring Rubric

  • 5: Free or clearly published pricing structure
  • 4: Mostly public pricing with minor enterprise gaps
  • 3: Some pricing info available, but incomplete or unclear at scale
  • 2: Vague pricing or hidden TCO (total cost of ownership)
  • 1: No pricing published; completely opaque

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

Licensing Flexibility

Score: [1–5] Weight: X Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Evaluates how many licensing models are available and how easily teams can choose or switch between them.

πŸ“Š Scoring Rubric

  • 5: Multiple models (flat, seat, usage-based); flexible licensing
  • 4: Predictable licensing, though tied to per-seat or usage
  • 3: Some flexibility but bound to specific tiers or accounts
  • 2: Rigid licensing with few/no options
  • 1: Fully vendor-locked; no flexibility or choice

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]

OSS vs Paid Feature Gap

Score: [1–5] Weight: X Normalized Contribution: [auto-calculated if needed]

πŸ” Description

Assesses how much functionality is lost or restricted in the free/open version compared to the commercial offering.

πŸ“Š Scoring Rubric

  • 5: No meaningful difference; OSS = enterprise edition
  • 4: Minor limitations in OSS version
  • 3: OSS version usable, but missing major features
  • 2: OSS limited to evaluation or basic use
  • 1: No OSS or free equivalent; enterprise version required

πŸ§ͺ Tested Scenarios

  • Describe a test case: what you tried, how it behaved
  • Include relevant screenshots or test files

πŸ“ Findings

Summarize how UiPath behaves for this criterion:

  • What's supported
  • Any undocumented limitations
  • Real-world relevance

🧩 Workarounds / Extensions

  • Can custom rules address the gaps?
  • SDK behavior or constraints

πŸ“Ž Artifacts & References

  • Screenshot:
  • Sample Rule: [link-to-rule.cs]
  • Test Workflow: [link-to-test.xaml]