Step 1
Upload a stimulus CSV
Start with the file you want to inspect. The app previews the data, guesses likely column types, and flags import problems before any statistics run.
Preview
No dataset loaded
Check whether the parser split the columns correctly and whether the first rows look like the original spreadsheet.
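If you want to re-run the same preview and type guess outside the app, here is a minimal pandas sketch. The filename and the 80% parse-rate threshold are illustrative assumptions, not the app's actual rule:

```python
import pandas as pd

# Load only the first rows, the same way a preview pane would.
preview = pd.read_csv("stimuli.csv", nrows=10)  # hypothetical filename
print(preview.shape)   # the column count should match the original sheet
print(preview.head())

# Guess "numeric" for any column where most values parse as numbers.
for col in preview.columns:
    parsed = pd.to_numeric(preview[col], errors="coerce")
    if parsed.notna().mean() >= 0.8:  # assumed threshold, not the app's rule
        print(f"{col}: looks numeric")
    else:
        print(f"{col}: looks like text")
```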
QC warnings
Early import checks
This block highlights file-shape problems such as width mismatches, renamed duplicate headers, or other issues seen during upload.
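A rough equivalent of these checks with the standard csv module, assuming the same hypothetical filename and a header in the first row:

```python
import csv
from collections import Counter

with open("stimuli.csv", newline="") as f:  # hypothetical filename
    rows = list(csv.reader(f))

header, body = rows[0], rows[1:]

# Width mismatch: any data row with a different column count than the header.
bad_widths = [i for i, row in enumerate(body, start=2) if len(row) != len(header)]
if bad_widths:
    print("width mismatch on lines:", bad_widths)

# Duplicate headers: names that appear more than once and would get renamed.
dupes = [name for name, n in Counter(header).items() if n > 1]
if dupes:
    print("duplicate headers:", dupes)
```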
Step 2
Map columns and confirm the dataset
Tell the app which column contains the stimulus text, which column defines the comparison groups, and which columns should become numeric metrics.
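Conceptually, the mapping is a small record that names each role. Here is a sketch of applying one with pandas; the column names are placeholders, and coercing metrics with errors="coerce" turns unparsable values into NaN instead of failing the import:

```python
import pandas as pd

# Hypothetical role assignments; in the app you pick these from the UI.
mapping = {
    "stimulus": "word",                  # column holding the stimulus text
    "group": "condition",                # column defining comparison groups
    "metrics": ["frequency", "length"],  # columns to treat as numeric metrics
}

df = pd.read_csv("stimuli.csv")          # hypothetical filename
for col in mapping["metrics"]:
    # Invalid entries become NaN instead of aborting the import.
    df[col] = pd.to_numeric(df[col], errors="coerce")
```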
Step 3
Run normalization analysis
Set the matching criteria here. The results table shows whether each metric passes the current rule and which rows look risky.
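The exact rule is configurable, but a common convention in stimulus matching is that a metric passes when the groups do not differ significantly. As a sketch (the alpha default is an assumption, not the app's built-in rule):

```python
def metric_passes(p_value: float, alpha: float = 0.05) -> bool:
    # For matched stimuli the groups should NOT differ, so a metric
    # "passes" when the test fails to find a difference (p >= alpha).
    return p_value >= alpha
```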
Metric overview
Group means for the selected metric
Use the chart for a quick visual comparison only. The decision still comes from the metric table and its pass/fail checks.
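To reproduce the chart's numbers yourself, a one-liner with placeholder column names:

```python
import pandas as pd

df = pd.read_csv("stimuli.csv")  # hypothetical filename
# Mean of one metric per comparison group; "condition" and "frequency"
# are placeholder column names.
print(df.groupby("condition")["frequency"].mean())
```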
Metric results
Pass/fail, p-values, thresholds
Each row summarizes one metric. Start with the status column, then inspect the test, effect size, and notes if a metric fails.
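One plausible way to reproduce these columns is a one-way ANOVA with eta squared as the effect size; the app may run different tests per metric, so treat this as a sketch under that assumption:

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("stimuli.csv")               # hypothetical filename
metric, group_col = "frequency", "condition"  # placeholder column names

samples = [g[metric].dropna() for _, g in df.groupby(group_col)]
f, p = stats.f_oneway(*samples)               # one-way ANOVA across groups

# Eta squared as a simple effect size: between-group share of total variance.
grand = df[metric].mean()
ss_between = sum(len(s) * (s.mean() - grand) ** 2 for s in samples)
ss_total = ((df[metric] - grand) ** 2).sum()
eta_sq = ss_between / ss_total

status = "pass" if p >= 0.05 else "fail"      # assumed rule: groups should not differ
print(f"{metric}: {status}  p={p:.3f}  eta^2={eta_sq:.3f}")
```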
Row annotations
Inspect missing values, duplicates, parse failures, and outliers
This block explains why individual rows may be risky to keep, for example because a value is missing, duplicated, or unusually extreme.
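A sketch of the three most common flags with pandas; the column names and the 3-standard-deviation outlier cutoff are assumptions:

```python
import pandas as pd

df = pd.read_csv("stimuli.csv")  # hypothetical filename
metric = "frequency"             # placeholder column name

flags = pd.DataFrame(index=df.index)
flags["missing"] = df[metric].isna()
flags["duplicate"] = df.duplicated(subset=["word"], keep=False)  # placeholder key column

# Outliers: more than 3 standard deviations from the metric's mean
# (the 3-SD cutoff is an assumption, not necessarily the app's rule).
z = (df[metric] - df[metric].mean()) / df[metric].std()
flags["outlier"] = z.abs() > 3

print(df[flags.any(axis=1)])     # the rows the annotations would mark
```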
Step 4
Generate counterbalanced lists and export files
After QC and analysis, create balanced list assignments and export the exact rows you want to carry into the experiment build.
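One standard way to build such assignments, and a plausible reading of what the app does (not confirmed): a seeded shuffle within each group, then round-robin dealing across lists. Column names and constants are placeholders:

```python
import random
import pandas as pd

df = pd.read_csv("stimuli.csv")                  # hypothetical filename
N_LISTS, SEED = 4, 42                            # illustrative values

# Within each group, shuffle the rows with a seeded RNG, then deal them
# round-robin across lists so every list gets a near-equal share per group.
rng = random.Random(SEED)
df["list"] = -1
for _, idx in df.groupby("condition").groups.items():  # placeholder group column
    order = list(idx)
    rng.shuffle(order)
    for i, row in enumerate(order):
        df.loc[row, "list"] = i % N_LISTS

df.to_csv("stimuli_assigned.csv", index=False)   # export the assigned rows
```

Dealing round-robin after a seeded shuffle keeps each group's rows spread evenly across lists while staying fully reproducible from the seed.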
List balance
Assignments by group and list
Check that each list receives a balanced number of rows before exporting the randomized file.
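The table amounts to a group-by-list cross-tabulation; with pandas:

```python
import pandas as pd

df = pd.read_csv("stimuli_assigned.csv")  # hypothetical export from the previous step
# Rows per group x list; the counts in each column should be roughly equal.
print(pd.crosstab(df["condition"], df["list"]))
```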
Assignments
Per-row seeded output
The seed makes the assignment reproducible. Keep it if you want to rebuild the same lists later.
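The reproducibility guarantee is just the usual seeded-RNG property, demonstrated in miniature:

```python
import random

# Two RNGs built from the same seed produce the same shuffle, which is
# why keeping the seed lets you rebuild identical list assignments later.
a = random.Random(42).sample(range(100), 100)
b = random.Random(42).sample(range(100), 100)
assert a == b
```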