Hierarchical-Compensatory, Effect-Size-Driven Ranking Algorithm

Made for Scientific Benchmarking


Overview

HERA (Hierarchical-Compensatory, Effect-Size-Driven Ranking Algorithm) is a MATLAB-based scientific ranking framework for paired benchmarking, designed to automate the objective comparison of algorithms, experimental conditions, or other methods with repeated measurements across up to three quality metrics. Unlike traditional ranking methods that rely solely on mean values or p-values, HERA employs a hierarchical-compensatory logic that integrates:

  • Significance Testing: Wilcoxon signed-rank tests for paired data.
  • Effect Sizes: Cliff's Delta and Relative Mean Difference for practical relevance.
  • Bootstrapping: Data-driven thresholds and confidence intervals.

This ensures that a "win" is only counted if it is both statistically significant and practically relevant, providing a robust and nuanced ranking system.

For more information please refer to the Project Website.
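To make the win rule concrete, here is a minimal MATLAB sketch. It is illustrative only, not HERA's internal implementation: the function names `is_win` and `cliffs_delta` are hypothetical, fixed `alpha`/`deltaThreshold` stand in for HERA's bootstrapped, data-driven thresholds, and higher metric values are assumed to be better.

```matlab
function win = is_win(a, b, alpha, deltaThreshold)
    % a, b: paired measurements (column vectors) of methods A and B on the
    % same cases; higher values assumed better. A "win" requires the paired
    % difference to be statistically significant AND practically relevant.
    p = signrank(a, b);             % Wilcoxon signed-rank test (Statistics Toolbox)
    d = cliffs_delta(a, b);         % effect size in [-1, 1]
    win = (p < alpha) && (d > deltaThreshold);
end

function d = cliffs_delta(a, b)
    % Cliff's delta: P(A > B) - P(A < B) over all cross-pairs
    diffs = a(:) - b(:).';          % pairwise differences via implicit expansion (R2016b+)
    d = (nnz(diffs > 0) - nnz(diffs < 0)) / numel(diffs);
end
```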


Key Features

  • Hierarchical-Compensatory Logic: Define primary, secondary, and tertiary metrics for the sequential comparison. Secondary and tertiary metrics act as iterative rank correctors or tie-breakers (e.g., M1_M2, M1_M3A). Overall, the primary metric sorts, the secondary metric corrects, and the tertiary metric finalizes the ranking (e.g., M1_M2_M3).
  • Data-Driven Thresholds: Automatically calculates adaptive effect size thresholds via Percentile Bootstrapping, utilizing a dynamic SEM guardrail to maintain practical relevance relative to measurement noise.
  • Robustness: Utilizes Bias-Corrected and Accelerated (BCa) confidence intervals for effect sizes and Cluster Bootstrapping using the percentile method for rank stability.
  • Automated Reporting: Generates PDF reports with high-resolution graphics (e.g., Win-Loss matrices, Sankey diagrams) and machine-readable JSON/CSV exports including the complete analysis and statistics.
  • Reproducibility: Features automated convergence control to determine optimal bootstrap iterations (B) without guesswork, alongside fixed-seed execution and configuration-based workflows.
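The data-driven threshold idea can be sketched in a few lines of MATLAB. This is a simplified illustration under stated assumptions: the statistic (relative mean difference), the 95th percentile, and the SEM guardrail formula below are placeholders, not HERA's exact procedure.

```matlab
% Percentile-bootstrap sketch of an adaptive effect-size threshold
rng(1);                                   % fixed seed, mirroring HERA's reproducibility
x = randn(30, 1);                         % synthetic paired measurements
y = x + 0.05 * randn(30, 1);
n = numel(x);
B = 2000;                                 % bootstrap iterations
relDiff = zeros(B, 1);
for b = 1:B
    idx = randi(n, n, 1);                 % resample pairs with replacement
    relDiff(b) = mean(x(idx) - y(idx)) / mean(abs(y(idx)));  % relative mean difference
end
threshold = prctile(abs(relDiff), 95);    % adaptive threshold from resampling noise
sem = std(x - y) / sqrt(n);               % a simple SEM guardrail floors the threshold
threshold = max(threshold, sem / mean(abs(y)));
```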

Installation

Requirements

  • MATLAB R2020a or later (Required)
  • Statistics and Machine Learning Toolbox (Required)
  • Parallel Computing Toolbox (Required for performance)

Setup

Option A: MATLAB Toolbox (Recommended)

  1. Download the latest HERA_v1.4.2.mltbx from the Releases page.
  2. Double-click the file to install it.
  3. Done! HERA is now available as a command (HERA.start_ranking) in MATLAB.

Option B: Git Clone (for Developers)

  1. Clone the repository:

    git clone https://github.com/lerdmann1601/HERA-Matlab.git
    
  2. Install/Configure Path:

    Navigate to the repository folder and run the setup script to add HERA to your MATLAB path.

    cd HERA-Matlab
    setup_HERA
    

Option C: Standalone Runtime and Python Integration

👉 Standalone Runtime

👉 Python Integration


Quick Start

1. Interactive Mode (Recommended)

The interactive command-line interface guides you through every step of the configuration, from data selection to statistical parameters. If you are new to HERA, this is the recommended mode. You can exit the interface at any point by typing exit, quit, or q.

HERA.start_ranking()

2. Batch Mode (Reproducible / Server)

For automated analysis or reproducible research, use a JSON configuration file. For more details on configuration parameters, see Configuration & Parameters.

HERA.start_ranking('configFile', 'config.json')
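To illustrate the idea, a batch configuration might look like the fragment below. Note that every parameter name here is a hypothetical placeholder; the actual schema is defined in Configuration & Parameters.

```json
{
  "dataFile": "data/examples/benchmark.csv",
  "metrics": ["M1", "M2", "M3"],
  "alpha": 0.05,
  "bootstrapIterations": 10000,
  "seed": 42
}
```

HERA also writes a reusable configuration.json into every run directory (see Outputs), which can serve as a template for subsequent batch runs.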

3. Unit Test Mode

Run the built-in validation suite to ensure HERA is working correctly on your system. For more details, see the Testing section.

% Run tests and save log to default location
HERA.start_ranking('runtest', 'true')

% Run tests and save log to a specific folder
HERA.start_ranking('runtest', 'true', 'logPath', '/path/to/logs')

4. Convergence Analysis

Perform a robust scientific validation of the default convergence parameters. For more details, see Convergence Analysis.

% Run analysis and save log to default location
HERA.start_ranking('convergence', 'true')

% Run analysis and save log to a specific folder
HERA.start_ranking('convergence', 'true', 'logPath', '/path/to/logs')

% Run analysis using a JSON configuration file
HERA.start_ranking('convergence', 'path/to/config.json')

Note

Example use cases with synthetic datasets and results are provided in the data/examples directory. See Example Analysis for a walkthrough of the example use cases and visual examples of the ranking outputs.


Note

HERA is designed for high-performance scientific computing, featuring fully parallelized bootstrap procedures and automatic memory management. Due to its extensive use of bootstrapping, however, it remains a CPU-intensive application; ensure you have access to enough CPU cores for reasonable performance.


Documentation

👉 Version History (Changelog)

👉 Repository Structure

👉 Theoretical Background

👉 Ranking Modes Explained

👉 Convergence Modes Explained

👉 Methodological Guidelines & Limitations

👉 Example Analysis

👉 Input Data Specification

👉 Configuration & Parameters

👉 Bootstrap Configuration

👉 Convergence Analysis

👉 Advanced Usage (MATLAB Users)

👉 Results Structure Reference


Outputs

HERA generates a timestamped directory containing:

Ranking_<Timestamp>/
├── Output/
│   ├── results_*.csv                 % Final ranking table (Mean ± SD of metrics and rank CI)
│   ├── data_*.json                   % Complete analysis record (Inputs, Config, Stats, Results)
│   ├── log_*.csv                     % Detailed log of pairwise comparisons and logic
│   ├── sensitivity_details_*.csv     % Results of the Borda sensitivity analysis
│   ├── BCa_Correction_Factors_*.csv  % Correction factors (Bias/Skewness) for BCa CIs
│   └── bootstrap_rank_*.csv          % Complete distribution of bootstrapped ranks
├── Graphics/                         % High-res PNGs organized in subfolders
│   ├── Ranking/
│   ├── Detail_Comparison/
│   ├── CI_Histograms/
│   └── Threshold_Analysis/
├── PDF/                              % Specialized reports
│   ├── Ranking_Report.pdf
│   ├── Convergence_Report.pdf
│   └── Bootstrap_Report.pdf
├── Final_Ranking_*.png               % Summary graphic of ranking result
├── Final_Report_*.pdf                % Consolidated graphical report of the main results
├── Ranking_*.txt                     % Complete console log of the session
└── configuration.json                % Reusable configuration file to reproduce the run
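The machine-readable exports can be post-processed directly in MATLAB. A minimal sketch (the run-folder name is hypothetical; file names follow the patterns above, with timestamps resolved via dir):

```matlab
% Load the final ranking table and the complete analysis record of a run
runDir  = 'Ranking_2026-01-15_120000';                  % hypothetical run folder
csvInfo = dir(fullfile(runDir, 'Output', 'results_*.csv'));
ranking = readtable(fullfile(csvInfo(1).folder, csvInfo(1).name));

jsonInfo = dir(fullfile(runDir, 'Output', 'data_*.json'));
raw      = fileread(fullfile(jsonInfo(1).folder, jsonInfo(1).name));
record   = jsondecode(raw);                              % analysis record as a struct
```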

Testing

HERA includes a comprehensive validation framework (run_unit_test.m) comprising 46 test cases organized into four suites:

  1. Unit Tests (19 cases): Checks individual components, helper functions, and execution logic (Run/Start packages) to ensure specific parts of the code work correctly.
  2. Statistical Tests (5 cases): Verifies the core mathematical functions (e.g., Jackknife, Cliff's Delta) and ensures the performance optimizations (hybrid switching) work as intended.
  3. Scientific Tests (19 cases): Comprehensive validation of ranking logic, statistical accuracy, and robustness against edge cases (e.g., zero variance, outliers).
  4. System Tests (3 cases): Runs the entire HERA pipeline from start to finish to ensure that the JSON configuration (batch mode), the Developer API, and NaN data handling work correctly.

Running Tests

You can run the test suite in three ways:

  1. Auto-Log Mode (default): automatically finds a writable folder (e.g., Documents) to save the log file.

    HERA.run_unit_test()
    
  2. Interactive Mode: opens a dialog to select where to save the log file.

    HERA.run_unit_test('interactive')
    
  3. Custom Path Mode: saves the log file to a specific directory.

    HERA.run_unit_test('/path/to/my/logs')
    

GitHub Actions (Cloud Testing)

Reviewers and users without a local MATLAB license can run the test suite directly on GitHub:

  1. Go to the Actions tab in this repository.
  2. Select Testing HERA from the left sidebar.
  3. Click Run workflow.

Contributing

We welcome contributions! Please see CONTRIBUTING.md for details.

  1. Fork the repository.
  2. Create a feature branch.
  3. Commit your changes.
  4. Open a Pull Request.

Citation

If you use HERA in your research, please cite:

@software{HERA_Matlab,
  author  = {von Erdmannsdorff, Lukas},
  title   = {HERA: Hierarchical-Compensatory, Effect-Size-Driven Ranking Algorithm},
  url     = {https://github.com/lerdmann1601/HERA-Matlab},
  version = {1.4.2},
  doi     = {10.5281/zenodo.18274870},
  year    = {2026}
}

License

This project is licensed under the MIT License; see the LICENSE file for details.