METRICS EXPLAINED
Your complete reference for understanding predictive analytics
Evaluate Your Content in 30 Seconds
Check these four numbers in order. If all four pass, your content is ready.
Impact Score
≥ 7.0
Overall creative effectiveness
Cognitive Demand
25–50
Not too simple, not too complex
Memory
≥ 65
Viewers will remember your brand
Avoidance
< 30
Viewers will not skip your content
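The four checks above can be sketched as a single pass/fail gate. The function and argument names below are illustrative only, not part of the product's API:

```python
def preflight_ready(impact: float, cognitive_demand: float,
                    memory: float, avoidance: float) -> bool:
    """Return True when all four quick checks pass."""
    return (impact >= 7.0                      # Impact Score
            and 25 <= cognitive_demand <= 50   # mid-range is best
            and memory >= 65                   # brand recall
            and avoidance < 30)                # viewers won't skip

# A strong creative passes; one failing metric blocks launch
print(preflight_ready(7.4, 38, 70, 22))  # True
print(preflight_ready(7.4, 38, 70, 35))  # False (Avoidance too high)
```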
What to improve first?
Impact Score (NIS)
A single 1–10 score summarizing your creative's overall predicted effectiveness. NIS is calculated from all underlying metrics. It is your "pre-flight check" before spending media budget.
Optimized
Ready to launch
Needs refinement
Optimize before launch
Needs improvement
Significant changes required
Performance Badges
Badges are earned automatically when your content meets the threshold. The more badges, the stronger your content. Badges directly determine your Campaign Readiness verdict.
BLS Ready
Memory ≥ 65
Content has a strong Brand Linkage Score — viewers will remember your brand

Intent Impact
Intent ≥ 65
Content drives strong action intent — viewers are motivated to click or buy
Trusted
Trust ≥ 65
Content appears credible and trustworthy
Simple
Cognitive Demand 25–50
Content is easy to process without cognitive overload
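The four badge thresholds listed above can be collected in one place. This is a sketch using the thresholds as documented; the names are illustrative:

```python
def earned_badges(memory: float, intent: float, trust: float,
                  cognitive_demand: float) -> set[str]:
    """Collect the badges a piece of content qualifies for."""
    badges = set()
    if memory >= 65:
        badges.add("BLS Ready")
    if intent >= 65:
        badges.add("Intent Impact")
    if trust >= 65:
        badges.add("Trusted")
    if 25 <= cognitive_demand <= 50:
        badges.add("Simple")
    return badges

# Intent at 60 misses its 65 threshold, so no Intent Impact badge
print(sorted(earned_badges(70, 60, 68, 40)))
# ['BLS Ready', 'Simple', 'Trusted']
```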
Campaign Readiness
The verdict is automatically calculated based on which badges your content earned and the campaign objective. The objective is set when you create a test.
Ready for Campaign
Requires: BLS Ready + (Trusted OR Simple)
Content is strong enough for brand awareness campaigns
Review Before Use
Requires: BLS Ready only
Good recall, but trust or simplicity needs improvement
Needs Optimization
Requires: No BLS Ready badge
Brand recall is below threshold — optimize before spending budget
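The verdict rules above reduce to a short decision chain. A minimal sketch, assuming the badge names are passed in as strings:

```python
def campaign_readiness(badges: set[str]) -> str:
    """Map earned badges to the readiness verdict described above."""
    if "BLS Ready" not in badges:
        return "Needs Optimization"          # recall below threshold
    if "Trusted" in badges or "Simple" in badges:
        return "Ready for Campaign"          # BLS Ready + (Trusted OR Simple)
    return "Review Before Use"               # BLS Ready only

print(campaign_readiness({"BLS Ready", "Trusted"}))  # Ready for Campaign
print(campaign_readiness({"BLS Ready"}))             # Review Before Use
print(campaign_readiness({"Simple"}))                # Needs Optimization
```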
Not sure which objective to choose?
If the campaign goal is brand recall and recognition, choose Brand Building. If the goal is driving purchases or clicks, choose Conversion. If unsure, ask your Project Manager or check the FAQ.
Detailed Metrics
Each metric answers a specific question about your content. Click any metric to see score ranges, tips, and connections to other metrics.
How to read the metrics
All metrics are "higher is better" except two special cases: Cognitive Demand (middle range 25–50 is best) and Avoidance (lower is better).
ACEI and ECEI
Composite effectiveness indices that combine multiple metrics into a single score. Both are displayed on every test result.
ACEI
Ad Creative Effectiveness Index
Predicts how effective your content will be for awareness campaigns. Scale: 0–100 (higher is better).
ECEI
E-commerce Creative Effectiveness Index
Predicts how effective your content will be for conversion campaigns. Scale: 0–100 (higher is better).
Cognitive Demand is inverted
In the ACEI and ECEI formulas, Cognitive Demand is inverted: lower complexity raises both indices. This differs from the standalone metric, where the 25–50 mid-range, not the minimum, is ideal.
Weights are adjustable by your administrator in the Admin panel.
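The exact formulas and weights behind ACEI and ECEI are not published here, and the weights are configurable in the Admin panel. The sketch below only illustrates the general shape of a weighted composite with an inverted Cognitive Demand term; the weights and metric names are hypothetical:

```python
def composite_index(metrics: dict[str, float],
                    weights: dict[str, float]) -> float:
    """Weighted 0-100 composite. Cognitive Demand is inverted so
    that lower complexity raises the index."""
    adjusted = dict(metrics)
    adjusted["cognitive_demand"] = 100 - metrics["cognitive_demand"]
    total = sum(weights.values())
    return sum(adjusted[k] * w for k, w in weights.items()) / total

# Hypothetical weights -- in the product these are set by an administrator
weights = {"memory": 0.4, "engagement": 0.3, "cognitive_demand": 0.3}
scores = {"memory": 70, "engagement": 60, "cognitive_demand": 30}
print(round(composite_index(scores, weights), 1))  # 67.0
```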
Heatmaps — How to Read Them
Visual overlays that show where viewers focus their attention and what they might miss.
Attention Heatmap
Shows where viewers' eyes are predicted to look in the first 2–3 seconds.
What to look for: Is the product, logo, or CTA in the red zone? If not, viewers may miss your key message.
Fog Heatmap
Shows what viewers will NOT see clearly — areas that are "foggy" to the brain.
What to look for: Is your key message in a clear area? If it is in the fog, consider repositioning.
Video heatmaps
For video tests, heatmaps are generated as video overlays showing attention flow frame-by-frame. Pay attention to the first 2 seconds — that is when most viewers decide to watch or skip.
Benchmarks
Industry averages from our database of tested content, filtered by category.
How to read them
Your score is compared to the industry average. Above benchmark = outperforming, below = underperforming.
Percentile buckets
Extreme Low (0–20th percentile), Low (20th–40th), Medium (40th–60th), High (60th–80th), Extreme High (80th–100th).
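The five buckets can be sketched as a simple lookup. How exact boundary values (20, 40, ...) are assigned is an assumption here; this version places each boundary in the lower bucket:

```python
def percentile_bucket(pct: float) -> str:
    """Assign a 0-100 percentile to its benchmark bucket.
    Boundary handling (<=) is an assumption, not documented."""
    bounds = [(20, "Extreme Low"), (40, "Low"), (60, "Medium"),
              (80, "High"), (100, "Extreme High")]
    for upper, label in bounds:
        if pct <= upper:
            return label
    raise ValueError("percentile must be between 0 and 100")

print(percentile_bucket(55))  # Medium
print(percentile_bucket(85))  # Extreme High
```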
Category-specific
A Memory score of 60 might be above average for Beauty but below average for FMCG. Benchmarks use the most specific data available (precise category → industry → global).
View benchmarks on the test results page or visit the Benchmarks page for a full overview.
FRT (Fast Response Test)
The methodology behind Intent, Trust, Engagement, and Avoidance metrics.
Step 1
View the ad briefly (~5 seconds)
Step 2
Quick yes/no word associations
Step 3
Score calculated from reaction time
FRT measures instinctive reactions, not conscious opinions — making it harder to "fake" and more predictive of real behavior. Faster answers indicate stronger, automatic associations.
| FRT Metric | AI Accuracy |
|---|---|
| Engagement | 91% |
| Avoidance | 89% |
| Intent | 92% |
| Trust | 93% |
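The scoring step can be illustrated with a toy model: faster answers map to higher association scores. The linear mapping and the 300 ms / 1500 ms cutoffs below are assumptions for illustration, not the product's actual model:

```python
def frt_score(reaction_times_ms: list[float],
              fast_ms: float = 300, slow_ms: float = 1500) -> float:
    """Illustrative FRT scoring: the average reaction time is mapped
    linearly to 0-100, with faster (more automatic) responses scoring
    higher. Cutoff values are assumptions."""
    avg = sum(reaction_times_ms) / len(reaction_times_ms)
    clipped = min(max(avg, fast_ms), slow_ms)   # clamp to the scoring window
    return 100 * (slow_ms - clipped) / (slow_ms - fast_ms)

print(round(frt_score([400, 500, 600]), 1))  # 83.3 (fast, strong association)
print(round(frt_score([1400, 1500]), 1))     # 4.2 (slow, weak association)
```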
Video vs Image Testing
Key differences between testing static images and video content.
| Aspect | Image Test | Video Test |
|---|---|---|
| Processing time | ~30 seconds | 2–10 minutes |
| Heatmaps | Static overlay | Video overlay (frame-by-frame) |
| Metrics | Single set of scores | Aggregated from all frames |
| File size limit | 20 MB | 30 MB |
| Best for | Static ads, social posts, banners | Reels, TikToks, Stories, video ads |
Video tests analyze every frame and aggregate the results. The heatmap video shows how attention moves through your content over time — useful for identifying the exact moment viewers lose interest.
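The per-frame aggregation can be sketched as follows. The actual aggregation method is not specified here; a plain average across frames is used purely for illustration:

```python
def aggregate_video_scores(
        frame_scores: list[dict[str, float]]) -> dict[str, float]:
    """Illustrative aggregation: average each metric across all frames."""
    keys = frame_scores[0].keys()
    n = len(frame_scores)
    return {k: sum(frame[k] for frame in frame_scores) / n for k in keys}

frames = [{"memory": 60, "engagement": 70},   # per-frame scores
          {"memory": 70, "engagement": 80}]
print(aggregate_video_scores(frames))
# {'memory': 65.0, 'engagement': 75.0}
```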
Note
Interpretations and recommendations are AI-generated and may not reflect reality. Always verify results with actual campaign data.
