Experimental Psychology Toolkit

Stimulus Norming Studio

Upload a CSV, map columns, inspect quality, test whether stimulus sets are matched, and export randomized lists for a new experiment.

Session ready

  • What you do: upload, map, compare, export
  • Best for: stimulus matching before running a study
  • Quick tip: review QC warnings before trusting the statistics

Step 1

Upload a stimulus CSV

Start with the file you want to inspect. The app previews the data, guesses likely column types, and flags import problems before any statistics run.
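The column-type guessing described above could work along these lines. This is a minimal sketch, not the app's actual implementation; the function name and the 90%/distinct-value thresholds are assumptions:

```python
import pandas as pd

def guess_column_type(series: pd.Series) -> str:
    """Guess whether an uploaded column is numeric, categorical, or free text.

    Hypothetical heuristic: try numeric coercion first, then fall back on
    how many distinct values the column has relative to its length.
    """
    coerced = pd.to_numeric(series, errors="coerce")
    # Treat the column as numeric if most values survive coercion.
    if coerced.notna().mean() >= 0.9:
        return "numeric"
    # Few distinct values suggests a grouping / categorical column.
    if series.nunique(dropna=True) <= max(10, len(series) // 10):
        return "categorical"
    return "text"

df = pd.DataFrame({
    "word": ["cat", "dog", "house", "tree", "lamp", "book",
             "road", "fish", "door", "cloud", "stone", "glass"],
    "group": ["A"] * 6 + ["B"] * 6,
    "frequency": ["5.2", "3.1", "4.8", "2.9", "6.0", "1.7",
                  "3.3", "4.4", "2.2", "5.5", "3.9", "4.1"],
})
types = {col: guess_column_type(df[col]) for col in df.columns}
```

Numbers stored as strings (a common CSV artifact) still land in the numeric pool here, which is why a manual override step is needed for the cases the heuristic gets wrong.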

Included examples

Use two_group.csv first for a simple matching task, then three_group.csv if you want to inspect the multi-group workflow.

  • two_group.csv for 2-group matching
  • three_group.csv for 3-group ANOVA checks
  • no_group.csv for QC without group tests

Preview

No dataset loaded

Check whether the parser split the columns correctly and whether the first rows look like the original spreadsheet.

QC warnings

Early import checks

This block highlights file-shape problems such as width mismatches, renamed duplicate headers, or other issues seen during upload.
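The file-shape checks mentioned here can be sketched with the standard `csv` module. The function name and message wording are assumptions; the two checks (ragged rows, duplicate headers) are the ones the text describes:

```python
import csv
import io

def import_warnings(csv_text: str) -> list[str]:
    """Hypothetical early import checks: flag duplicate headers and
    rows whose width does not match the header."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    warnings = []
    header = rows[0]
    # Duplicate headers are usually renamed silently by parsers
    # (e.g. "rating", "rating.1"), so surface them explicitly.
    dupes = {name for name in header if header.count(name) > 1}
    if dupes:
        warnings.append(f"duplicate header(s): {sorted(dupes)}")
    # A width mismatch means a row gained or lost cells on import.
    for i, row in enumerate(rows[1:], start=2):
        if len(row) != len(header):
            warnings.append(f"row {i} has {len(row)} cells, expected {len(header)}")
    return warnings

sample = "word,rating,rating\ncat,3.1,2.9\ndog,4.0\n"
found = import_warnings(sample)
```

On this sample the checks report the duplicated `rating` header and the short third row, which is exactly the kind of problem worth fixing before any statistics run.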

Step 2

Map columns and confirm the dataset

Tell the app which column contains the stimulus text, which column defines the comparison groups, and which columns should become numeric metrics.

Numeric metrics

Uploaded quantitative columns

Choose the numeric columns that should be compared across groups, such as ratings, lengths, frequencies, or any precomputed item property.

Ignored columns

Kept in exports but excluded from analysis

Use this for notes, labels, bookkeeping fields, or anything that should remain attached to each row but not drive the statistics.

Type overrides

Manual fixes when auto-detection is wrong

If a numeric column was read as text, or a note column was mistaken for data, correct it here before saving the mapping.

Column | Inferred type | Example | Override
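Applying an override before saving the mapping could look like this sketch. The override labels and function name are assumptions; the key point is that forcing a column to numeric turns unparseable cells into NaN, so QC can flag them instead of silently dropping them:

```python
import pandas as pd

def apply_overrides(df: pd.DataFrame, overrides: dict[str, str]) -> pd.DataFrame:
    """Hypothetical manual type overrides, applied before the mapping is saved.

    "numeric" forces coercion (bad cells become NaN for QC to catch);
    "text" keeps a mis-detected column out of the metric pool.
    """
    out = df.copy()
    for col, target in overrides.items():
        if target == "numeric":
            out[col] = pd.to_numeric(out[col], errors="coerce")
        elif target == "text":
            out[col] = out[col].astype("string")
    return out

df = pd.DataFrame({"length": ["3", "5", "n/a"], "note": [1, 2, 3]})
fixed = apply_overrides(df, {"length": "numeric", "note": "text"})
```

Here `length` becomes numeric with one NaN (the `n/a` cell), and `note` is demoted to text so it no longer drives the statistics.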

Step 3

Run the norming analysis

Set the matching criteria here. The results table shows whether each metric passes the current rule and which rows look risky.

Waiting for dataset

Metric overview

Group means for the selected metric

Use the chart for a quick visual comparison only. The decision still comes from the metric table and its pass/fail checks.

Metric results

Pass/fail, p-values, thresholds

Each row summarizes one metric. Start with the status column, then inspect the test, effect size, and notes if a metric fails.
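One plausible shape for a metric-table row, sketched with a Welch t-test for the two-group case. The matching rule (pass when the test is non-significant), the alpha default, and the rough effect size against the combined SD are all assumptions, not the app's documented rule:

```python
import pandas as pd
from scipy import stats

def metric_result(df, metric, group_col, alpha=0.05):
    """Summarize one metric under a hypothetical matching rule:
    two groups count as matched when a Welch t-test is non-significant."""
    groups = [g[metric].dropna() for _, g in df.groupby(group_col)]
    t, p = stats.ttest_ind(*groups, equal_var=False)
    # Rough effect size: mean difference against the combined SD.
    d = abs(groups[0].mean() - groups[1].mean()) / df[metric].std(ddof=1)
    return {"metric": metric, "p": round(p, 4), "d": round(d, 2),
            "status": "pass" if p >= alpha else "fail"}

df = pd.DataFrame({
    "group": ["A"] * 5 + ["B"] * 5,
    "frequency": [4.1, 3.9, 4.0, 4.2, 3.8, 4.0, 4.1, 3.9, 4.3, 3.7],
})
result = metric_result(df, "frequency", "group")
```

Note the caveat built into this rule: a non-significant p-value is weak evidence of matching on its own, which is why the effect size and notes columns matter when a metric is borderline.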

Row annotations

Inspect missing values, duplicates, parse failures, and outliers

This block explains why individual rows may be risky to keep, for example because a value is missing, duplicated, or unusually extreme.
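The three row-level flags named above can be sketched as one annotation pass. The z-score cutoff of 3 and the semicolon-joined note format are assumptions for illustration:

```python
import pandas as pd

def annotate_rows(df: pd.DataFrame, metric: str, item_col: str, z_cut: float = 3.0):
    """Hypothetical row annotations: flag missing values, duplicated items,
    and extreme values (|z| above z_cut) for one numeric metric."""
    notes = pd.Series([""] * len(df), index=df.index)
    notes[df[metric].isna()] += "missing;"
    notes[df.duplicated(subset=[item_col], keep=False)] += "duplicate;"
    z = (df[metric] - df[metric].mean()) / df[metric].std(ddof=0)
    notes[z.abs() > z_cut] += "outlier;"
    return notes

df = pd.DataFrame({
    "word": ["w01", "w02", "w03", "w04", "w05", "w06",
             "w07", "w08", "w09", "w10", "w11", "w01"],  # w01 appears twice
    "rating": [3.0, 3.1, 2.9, 3.0, 3.2, 2.8,
               3.0, 3.1, 2.9, 3.0, None, 30.0],
})
notes = annotate_rows(df, "rating", "word")
```

A row can collect several flags at once (here the duplicated `w01` also carries the extreme rating), which is why per-row notes are more useful than a single drop/keep bit.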

Step 4

Generate counterbalanced lists and export files

After QC and analysis, create balanced list assignments and export the exact rows you want to carry into the experiment build.

List balance

Assignments by group and list

Check that each list receives a balanced number of rows before exporting the randomized file.

Assignments

Per-row seeded output

The seed makes the assignment reproducible. Keep it if you want to rebuild the same lists later.
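Seeded, balanced list assignment can be sketched as a shuffle-then-deal: shuffle within each group using a fixed seed, then deal rows round-robin across lists. The function name and round-robin strategy are assumptions, not the app's actual algorithm:

```python
import random
from collections import Counter

def assign_lists(rows, group_col, n_lists, seed):
    """Hypothetical seeded counterbalancing: shuffle each group with a fixed
    seed, then deal rows round-robin so every list gets a near-equal share
    of every group."""
    rng = random.Random(seed)
    by_group = {}
    for row in rows:
        by_group.setdefault(row[group_col], []).append(row)
    assignments = []
    for group_rows in by_group.values():
        rng.shuffle(group_rows)
        for i, row in enumerate(group_rows):
            assignments.append({**row, "list": i % n_lists + 1})
    return assignments

rows = [{"word": f"w{i}", "group": "A" if i < 8 else "B"} for i in range(16)]
out = assign_lists(rows, "group", n_lists=2, seed=42)
# Cross-tabulate group x list to confirm the balance before exporting.
balance = Counter((r["group"], r["list"]) for r in out)
```

Rerunning with the same seed reproduces the identical assignment, which is the property the "per-row seeded output" panel relies on when you rebuild lists later.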