
Correlation Coefficient Calculator

This toolkit computes multiple correlation measures to quantify association between variables or classifier outputs. Choose the appropriate method for your data: Pearson for linear continuous relationships, Spearman or Kendall for ordinal or non-linear monotonic relationships, and Matthews for binary classification confusion matrices.

The interface accepts paired lists for rank- and value-based measures and integer confusion-matrix counts for MCC. Results include the coefficient and an approximate p-value where applicable, plus guidance about assumptions and limitations.

Updated Nov 5, 2025 • QA PASS — golden 25 / edge 120 • Run: golden-edge-2026-01-23


The default method, Pearson, measures linear association between two continuous variables; it is appropriate when the relationship is approximately linear and the data are on an interval or ratio scale.

Inputs

  • Paired data (Pearson)
  • Paired data (Spearman)
  • Paired data (Kendall)
  • Confusion matrix counts (MCC)

Results

  • Pearson correlation coefficient (r)
  • Two-tailed p-value (approx.)

Methodology

Pearson r is computed as covariance divided by the product of standard deviations. A t-test approximation provides a two-tailed p-value when sample size is sufficient (at least three observations) and assumptions hold.
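For readers who want to check results independently, the sketch below shows one way this could be implemented in Python. It is illustrative only and is not the calculator's own code; the function names are hypothetical, and scipy is assumed solely for the Student-t tail probability.

  import math
  from scipy.stats import t as t_dist  # assumed available for the t-distribution tail

  def pearson_r(x, y):
      # Covariance of x and y divided by the product of their standard deviations.
      n = len(x)
      mx, my = sum(x) / n, sum(y) / n
      cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
      sx = math.sqrt(sum((a - mx) ** 2 for a in x))
      sy = math.sqrt(sum((b - my) ** 2 for b in y))
      return cov / (sx * sy)

  def pearson_p_two_tailed(r, n):
      # t approximation; requires n >= 3 and |r| < 1 (a perfect correlation gives p ~ 0).
      t_stat = r * math.sqrt((n - 2) / (1 - r * r))
      return 2 * t_dist.sf(abs(t_stat), df=n - 2)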

Spearman rho is computed by converting values to ranks and then applying the Pearson formula to those ranks; p-values use standard approximations for moderate to large samples and exact methods for small samples.
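A corresponding sketch for the rank step, under the same caveats (average ranks are assigned to ties; spearman_rho reuses the hypothetical pearson_r above):

  def average_ranks(values):
      # 1-based ranks; tied values share the mean of the positions they occupy.
      order = sorted(range(len(values)), key=lambda i: values[i])
      r = [0.0] * len(values)
      i = 0
      while i < len(order):
          j = i
          while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
              j += 1
          for k in range(i, j + 1):
              r[order[k]] = (i + j) / 2 + 1
          i = j + 1
      return r

  def spearman_rho(x, y):
      # Pearson r applied to the ranks of x and y.
      return pearson_r(average_ranks(x), average_ranks(y))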

Kendall Tau is computed by counting concordant and discordant pairs; exact or asymptotic p-values are reported depending on sample size and ties.
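A pair-counting sketch is shown below; note that it computes tau-a, whereas implementations that correct for ties typically report tau-b.

  def kendall_tau_a(x, y):
      # (concordant - discordant) / total number of pairs; tied pairs count as neither.
      n = len(x)
      concordant = discordant = 0
      for i in range(n):
          for j in range(i + 1, n):
              s = (x[i] - x[j]) * (y[i] - y[j])
              if s > 0:
                  concordant += 1
              elif s < 0:
                  discordant += 1
      return (concordant - discordant) / (n * (n - 1) / 2)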

Matthews correlation coefficient (MCC) is computed from confusion-matrix counts using the standard formula and is appropriate for evaluating binary classifiers, especially when the classes are imbalanced.
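The standard formula and its zero-denominator guard are small enough to show directly; again, a sketch rather than the calculator's code:

  import math

  def mcc(tp, fp, fn, tn):
      # (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)); undefined if any sum is zero.
      denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
      if denom == 0:
          return None  # the calculator reports this case as invalid/undefined
      return (tp * tn - fp * fn) / denom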

Key takeaways

Select Pearson for linear associations, Spearman or Kendall when data are ordinal or assumptions are violated, and Matthews for binary classification evaluation.

Always inspect scatterplots or contingency tables and check assumptions before interpreting correlation coefficients. Correlation does not imply causation.

Worked examples

Example 1: Pearson — X: 1,2,3,4,5 and Y: 2,4,6,8,10 returns r = 1.0 and p ≈ 0 (perfect linear relationship).
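Using the hypothetical pearson_r sketch from the methodology section, this example reproduces directly:

  x = [1, 2, 3, 4, 5]
  y = [2, 4, 6, 8, 10]
  print(pearson_r(x, y))  # 1.0 — every point lies exactly on the line y = 2x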

Example 2: Spearman — X and Y with monotonic but non-linear relationship will show high rho even if Pearson r is lower.

Example 3: Matthews — For TP=50, FP=10, FN=5, TN=35, MCC summarizes classifier quality in a single value between -1 and 1.
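Carrying Example 3 through the formula above: MCC = (50×35 − 10×5) / √((50+10)(50+5)(35+10)(35+5)) = 1700 / √5,940,000 ≈ 1700 / 2437.2 ≈ 0.70.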

F.A.Q.

Which correlation should I use if data are not normally distributed?

Use Spearman or Kendall Tau, which are nonparametric and rely on ranks rather than raw values; they are more robust to non-normal distributions and outliers.

Can I use Pearson with tied or duplicated values?

Tied or duplicated values do not themselves invalidate Pearson, but Pearson is sensitive to outliers and to departures from linearity. If the relationship is monotonic but non-linear, prefer Spearman or Kendall.

How reliable are p-values reported here?

P-values are approximate when asymptotic formulas are used. For small samples or data with many ties, prefer exact methods. See the methodology section for guidance on sample-size thresholds and exact tests.

What input formats are accepted for paired lists?

Paste comma-, space-, or newline-separated numeric lists into the X and Y fields. Lists must match in length and represent paired observations in order.
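One way such input might be normalized is sketched below (a hypothetical helper, not the calculator's actual parser):

  import re

  def parse_list(text):
      # Accept comma-, space-, or newline-separated numbers.
      tokens = [t for t in re.split(r"[,\s]+", text.strip()) if t]
      return [float(t) for t in tokens]

  parse_list("1, 2 3\n4,5")  # -> [1.0, 2.0, 3.0, 4.0, 5.0]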

Are negative or zero denominators possible in MCC?

The MCC denominator is the square root of the product (TP+FP)(TP+FN)(TN+FP)(TN+FN), so it is zero whenever any of those four sums is zero (that is, when a row or column of the confusion matrix is empty); it is never negative. The calculator reports MCC as invalid or undefined in that case.

Sources & citations

Further resources

Versioning & Change Control

Audit record (versions, QA runs, reviewer sign-off, and evidence).

Record ID: 7ae89ccbb97b

What changed (latest)

v1.0.0 • 2025-11-05 • MINOR

Initial publication and governance baseline.

Why: Published with reviewed formulas, unit definitions, and UX controls.

Public QA status

PASS — golden 25 + edge 120

Last run: 2026-01-23 • Run: golden-edge-2026-01-23

  • Engine: v1.0.0
  • Data: Baseline (no external datasets)
  • Content: v1.0.0
  • UI: v1.0.0

Governance

Last updated: Nov 5, 2025

Reviewed by: Fidamen Standards Committee (Review board)

Credentials: Internal QA

Risk level: low

Reviewer profile (entity)

  • Name: Fidamen Standards Committee
  • Role: Review board
  • Credentials: Internal QA
  • Entity ID: https://fidamen.com/reviewers/fidamen-standards-committee#person

Semantic versioning

  • MAJOR: Calculation outputs can change for the same inputs (formula, rounding policy, assumptions).
  • MINOR: New features or fields that do not change existing outputs for the same inputs.
  • PATCH: Bug fixes, copy edits, or accessibility changes that do not change intended outputs except for previously incorrect cases.

Review protocol

  • Verify formulas and unit definitions against primary standards or datasets.
  • Run golden-case regression suite and edge-case suite.
  • Record reviewer sign-off with credentials and scope.
  • Document assumptions, limitations, and jurisdiction applicability.

Assumptions & limitations

  • Uses exact unit definitions from the Fidamen conversion library.
  • Internal calculations use double precision; display rounding follows the unit's configured decimal places.
  • Not a substitute for calibrated instruments in regulated contexts.
  • Jurisdiction-specific rules may require official guidance.

Change log

v1.0.0 • 2025-11-05 • MINOR

Initial publication and governance baseline.

Why: Published with reviewed formulas, unit definitions, and UX controls.

Areas: engine, content, ui • Reviewer: Fidamen Standards Committee • Entry ID: 1bfa53f8f229