HERA: Hierarchical-Compensatory, Effect-Size-Driven and Non-Parametric Ranking Algorithm
Made for Scientific Benchmarking
Key Features • Installation • Quick Start • Documentation • Citation
Overview
HERA is a MATLAB toolbox designed to automate the objective comparison of algorithms, experimental conditions, or datasets across multiple quality metrics. Unlike traditional ranking methods that rely solely on mean values or p-values, HERA employs a hierarchical-compensatory logic that integrates:
- Significance Testing: Wilcoxon signed-rank tests for paired data.
- Effect Sizes: Cliff's Delta and Relative Mean Difference for practical relevance.
- Bootstrapping: Data-driven thresholds and BCa confidence intervals.
This ensures that a "win" is only counted if it is both statistically significant and practically relevant, providing a robust and nuanced ranking system.
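To make this criterion concrete, the following MATLAB sketch shows how such a combined decision could look for a single paired comparison. The variable names and the fixed relevance threshold are illustrative assumptions, not HERA's internal API (HERA derives its thresholds from the data, as described under Key Features).

% Illustrative sketch only: combines a Wilcoxon signed-rank test with Cliff's Delta.
% The fixed deltaThreshold is a placeholder; HERA derives adaptive thresholds from the data.
a = randn(30, 1);            % paired metric values for algorithm A (higher = better assumed)
b = randn(30, 1) - 0.4;      % paired metric values for algorithm B
alpha = 0.05;
deltaThreshold = 0.33;       % placeholder practical-relevance threshold

p = signrank(a, b);          % Wilcoxon signed-rank test for paired samples (Statistics Toolbox)

% Cliff's Delta: P(a > b) - P(a < b) over all cross-pairs
[A, B] = meshgrid(a, b);
cliffsDelta = (sum(A(:) > B(:)) - sum(A(:) < B(:))) / numel(A);

% A "win" for A over B requires both statistical significance and practical relevance.
isWin = (p < alpha) && (cliffsDelta > deltaThreshold);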
Key Features
- Hierarchical Logic: Define primary and secondary metrics. Secondary metrics can act as tie-breakers or rank correctors (e.g., M1_M2, M1_M2_M3).
- Data-Driven Thresholds: Automatically calculates adaptive effect-size thresholds using Percentile Bootstrapping (see the sketch after this list).
- Robustness: Utilizes Bias-Corrected and Accelerated (BCa) confidence intervals and Cluster Bootstrapping for rank stability.
- Automated Reporting: Generates PDF reports, Win-Loss Matrices, Sankey Diagrams, and machine-readable JSON/CSV exports.
- Reproducibility: Supports fixed-seed execution and configuration file-based workflows.
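As a rough illustration of the data-driven thresholds and BCa intervals listed above, the sketch below applies the Statistics Toolbox functions bootstrp and bootci to a vector of paired differences. The statistic, the resample count, and the percentile used for the threshold are assumptions for illustration only and not HERA's exact procedure.

% Illustrative sketch: percentile bootstrap of an effect-size statistic and a BCa CI.
rng(1601);                                 % fixed seed for reproducibility
pairedDiff = randn(30, 1) + 0.2;           % paired differences for one metric
nBoot = 2000;

% Dominance statistic on each bootstrap resample (a simple Cliff's-Delta-style measure)
effectFun = @(d) (sum(d > 0) - sum(d < 0)) / numel(d);
bootStat  = bootstrp(nBoot, effectFun, pairedDiff);

% Percentile of the bootstrap distribution as an adaptive relevance threshold (placeholder percentile)
adaptiveThreshold = prctile(abs(bootStat), 50);

% Bias-Corrected and Accelerated (BCa) confidence interval for the same statistic
ciBCa = bootci(nBoot, {effectFun, pairedDiff}, 'Type', 'bca');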
Installation
Requirements
- MATLAB R2024a or later (Required)
- Statistics and Machine Learning Toolbox (Required)
- Parallel Computing Toolbox (Required for performance)
Setup
Option A: MATLAB Toolbox (Recommended)
- Download the latest HERA_vX.Y.Z.mltbx from the Releases page.
- Double-click the file to install it.
- Done! HERA is now available as a command (HERA.start_ranking) in MATLAB.
Option B: Git Clone (for Developers)
- Clone the repository:
  git clone https://github.com/lerdmann1601/HERA-Matlab.git
- Install/configure the path: navigate to the repository folder and run the setup script to add HERA to your MATLAB path.
  cd HERA-Matlab
  setup_HERA
👉 Automated Build (GitHub Actions)
Quick Start
1. Interactive Mode (Recommended for Beginners)
The interactive command-line interface guides you through every step of the configuration,
from data selection to statistical parameters.
If you are new to HERA, this is the recommended mode.
At any point, you can exit the interface by typing exit, quit, or q.
HERA.start_ranking()
2. Batch Mode (Reproducible / Server)
For automated analysis or reproducible research, use a JSON configuration file.
HERA.start_ranking('configFile', 'config.json')
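The easiest way to obtain a valid configuration is to reuse the configuration.json that HERA writes into each results directory (see Outputs below). If you prefer to generate one programmatically, the sketch below shows the general idea; the field names (dataFile, metrics, nBootstrap, seed) are hypothetical placeholders, so check a generated configuration.json for the actual schema.

% Hypothetical sketch: the struct fields below are placeholders, not HERA's schema.
cfg = struct();
cfg.dataFile   = 'my_metrics.csv';
cfg.metrics    = {'M1', 'M2', 'M3'};
cfg.nBootstrap = 10000;
cfg.seed       = 42;

fid = fopen('config.json', 'w');
fwrite(fid, jsonencode(cfg, 'PrettyPrint', true));
fclose(fid);

HERA.start_ranking('configFile', 'config.json')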
3. Unit Test Mode
Run the built-in validation suite to ensure HERA is working correctly on your system.
% Run tests and save log to default location
HERA.start_ranking('runtest', 'true')
% Run tests and save log to a specific folder
HERA.start_ranking('runtest', 'true', 'logPath', '/path/to/logs')
Note: Example use cases with synthetic datasets and results are provided in the data/examples directory. See docs/Example_Analysis.md for a walkthrough of these examples and visual examples of the ranking outputs.
Note: HERA is designed for high-performance scientific computing, featuring fully parallelized bootstrap procedures and automatic memory management. However, due to the extensive use of bootstrapping, it remains a CPU-intensive application; please ensure you have access to enough CPU cores for reasonable performance (a sketch for starting a parallel pool manually follows below).
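If you want to control the worker count yourself (for example on a shared server), you can open a parallel pool before starting the analysis. This is only a sketch; HERA may open or size the pool automatically.

% Optional sketch: open a parallel pool manually before a large run.
if isempty(gcp('nocreate'))            % reuse an existing pool if one is open
    parpool('Processes');              % default profile and worker count
end
HERA.start_ranking('configFile', 'config.json')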
Documentation
👉 Methodological Guidelines & Limitations
👉 Advanced Usage (Developer Mode)
Outputs
HERA generates a timestamped directory containing:
Ranking_<Timestamp>/
├── Output/
│ ├── results_*.csv % Final ranking table (Mean ± SD of metrics and rank CI)
│ ├── data_*.json % Complete analysis record (Inputs, Config, Stats, Results)
│ ├── log_*.csv % Detailed log of pairwise comparisons and logic
│ ├── sensitivity_details_*.csv % Results of the Borda sensitivity analysis
│ ├── BCa_Correction_Factors_*.csv % Correction factors (Bias/Skewness) for BCa CIs
│ └── bootstrap_rank_*.csv % Complete distribution of bootstrapped ranks
├── Graphics/ % High-res PNGs organized in subfolders
│ ├── Ranking/
│ ├── Detail_Comparison/
│ ├── CI_Histograms/
│ └── Threshold_Analysis/
├── PDF/ % Specialized reports
│ ├── Ranking_Report.pdf
│ ├── Convergence_Report.pdf
│ └── Bootstrap_Report.pdf
├── Final_Ranking_*.png % Summary graphic of ranking result
├── Final_Report_*.pdf % Consolidated graphical report of the main results
├── Ranking_*.txt % Complete console log of the session
└── configuration.json % Reusable configuration file to reproduce the run
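The CSV and JSON exports can be post-processed directly in MATLAB. The sketch below loads the final ranking table and the complete analysis record; the folder name is a placeholder for your run's timestamp, and the column and field names depend on your configuration.

% Sketch: load the machine-readable outputs of a finished run.
outDir = fullfile('Ranking_20250101_120000', 'Output');   % placeholder: use your run's folder

resFile = dir(fullfile(outDir, 'results_*.csv'));
ranking = readtable(fullfile(resFile(1).folder, resFile(1).name));

jsonFile = dir(fullfile(outDir, 'data_*.json'));
record   = jsondecode(fileread(fullfile(jsonFile(1).folder, jsonFile(1).name)));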
Testing
HERA includes a comprehensive validation framework (run_unit_test.m)
comprising 46 test cases organized into four suites:
- Unit Tests (19 cases): Checks individual components, helper functions, and execution logic (Run/Start packages) to ensure specific parts of the code work correctly.
- Statistical Tests (5 cases): Verifies the core mathematical functions (e.g., Jackknife, Cliff's Delta) and ensures the performance optimizations (hybrid switching) work as intended.
- Scientific Tests (19 cases): Comprehensive validation of ranking logic, statistical accuracy, and robustness against edge cases (e.g., zero variance, outliers).
- System Tests (3 cases): Runs the entire HERA pipeline from start to finish to ensure that the JSON configuration (batch), Developer API and NaN Data handling are working correctly.
Running Tests
You can run the test suite in three ways:
- Auto-Log Mode (Default): automatically finds a writable folder (e.g., Documents) to save the log file.
  HERA.run_unit_test()
- Interactive Mode: opens a dialog to select where to save the log file.
  HERA.run_unit_test('interactive')
- Custom Path Mode: saves the log file to a specific directory.
  HERA.run_unit_test('/path/to/my/logs')
GitHub Actions (Cloud Testing)
Reviewers and users without a local MATLAB license can run the test suite directly on GitHub:
- Go to the Actions tab in this repository.
- Select Testing HERA from the left sidebar.
- Click Run workflow.
Contributing
We welcome contributions! Please see CONTRIBUTING.md for details.
- Fork the repository.
- Create a feature branch.
- Commit your changes.
- Open a Pull Request.
Citation
If you use HERA in your research, please cite:
@software{HERA_Matlab,
author = {von Erdmannsdorff, Lukas},
title = {HERA: A Hierarchical-Compensatory, Effect-Size Driven and Non-parametric
Ranking Algorithm using Data-Driven Thresholds and Bootstrap Validation},
url = {https://github.com/lerdmann1601/HERA-Matlab},
version = {1.1.1},
doi = {10.5281/zenodo.18274871},
year = {2026}
}
License
This project is licensed under the MIT License - see the LICENSE file for details.