Resources
UX Analytics Glossary
Quick, plain-English definitions for UX analytics and behavior data, plus platform-specific terms you’ll see in FullSession.
Understand terms like session replay, funnels, and rage clicks
Decode privacy, masking, RBAC, and data governance language
Learn Lift AI concepts like goals, opportunities, and evidence
Explore
Go deeper than definitions
Prefer examples and workflows? These pages show how the concepts look in practice.
Session Replay
Watch real user journeys to see where people hesitate, fail, or abandon—then share clear evidence with your team.
Heatmaps
Visualize clicks and scroll depth to spot weak CTAs, dead zones, and attention hot spots—then confirm with replay.
Funnels & Conversions
Map journeys into steps, see drop-offs, segment by audience, and jump straight into the sessions behind the numbers.
Safety & Security
Understand masking, blocking, access controls, and performance safeguards—so stakeholders can approve replay confidently.
Conversion Rate Optimization (CRO)
Definitions for common CRO terms
Lift (uplift)
The change in a metric after an intervention; often reported as absolute (percentage points, pp) or relative (%).
Absolute vs. relative lift
Absolute lift = post − pre, in percentage points (pp); relative lift = post/pre − 1, as a percentage.
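To make the distinction concrete, here is a small sketch with hypothetical numbers:

```ts
// Hypothetical numbers: conversion moves from 4.0% before a change to 5.0% after.
const pre = 0.04;
const post = 0.05;

const absoluteLiftPp = (post - pre) * 100; // +1.0 percentage points (pp)
const relativeLift = post / pre - 1;       // +0.25, i.e. a +25% relative lift
```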
Baseline
The “before” value used as the comparison reference (often the pre-period average).
Confidence interval (CI)
A range of plausible values for the true lift (not just a single point estimate).
Statistical significance
Whether the observed change is unlikely under “no real effect” (usually via CI/p-value).
Sample size (N)
The number of sessions/users/conversions behind a metric; drives uncertainty.
Minimum detectable effect (MDE)
The smallest lift you can reliably detect given traffic + variance.
Power
The probability you’ll detect a real effect of a given size (ties to MDE + duration).
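For intuition on how MDE, power, and sample size relate, here is a rough planning sketch using a standard two-proportion normal approximation. The baseline, MDE, and thresholds below are hypothetical; use a dedicated power calculator for real test planning.

```ts
// Rough sample-size estimate per variant for detecting an absolute lift (MDE) on a baseline rate.
// Uses the common (zAlpha + zBeta)^2 * variance / effect^2 approximation.
function sampleSizePerVariant(
  baseline: number,    // e.g. 0.04 = 4% conversion
  mdeAbsolute: number, // smallest lift worth detecting, e.g. 0.005 = +0.5 pp
  zAlpha = 1.96,       // two-sided 95% confidence
  zBeta = 0.84         // 80% power
): number {
  const p1 = baseline;
  const p2 = baseline + mdeAbsolute;
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / (p2 - p1) ** 2);
}

// Detecting a +0.5 pp lift on a 4% baseline needs on the order of 25k users per variant.
console.log(sampleSizePerVariant(0.04, 0.005));
```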
Seasonality / day-of-week effects
Repeating patterns that can distort pre/post comparisons unless windows match.
Confounder
A concurrent change (campaign, pricing, release) that can “fake” lift in pre/post.
Primary metric vs. guardrail metric
Primary = the metric you optimize; guardrail = a metric you refuse to harm (e.g., protecting AOV while improving checkout completion).
Session replay & behavior terms
Definitions for replay, signals, and how teams move from “what happened” to “what to fix.”
Session Replay
Session replay is a privacy-conscious way to reconstruct a user’s visit so you can watch clicks, scrolls, and navigation like a video. In FullSession, replays are searchable and help teams see where users struggle and why drop-offs happen.
Event-based replay (event data)
Event-based replay reconstructs a session from interaction events (like clicks, scrolls, and inputs) instead of storing raw screen video. This approach is typically easier to filter and search, and it supports stronger privacy controls when configured correctly.
Searchable sessions
Searchable sessions are replays you can filter by device, browser, pages, or behaviors so you can find patterns fast. Instead of watching random recordings, teams narrow to the journeys most likely to reveal the root cause of a UX or technical issue.
Rage click / rage tap
A rage click (or rage tap) is rapid repeated clicking on the same area, often signaling frustration, confusion, or a broken interaction. Teams use it to spot misleading UI affordances, unresponsive elements, or unclear next steps in key flows.
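As a rough illustration, the sketch below flags repeated clicks near the same point within a short window. The thresholds are illustrative and this is not FullSession's production logic.

```ts
// Simplified rage-click heuristic: several clicks on roughly the same spot within one second.
const CLICK_WINDOW_MS = 1000;
const MIN_CLICKS = 3;
const MAX_RADIUS_PX = 30;

let recentClicks: { x: number; y: number; t: number }[] = [];

document.addEventListener("click", (e) => {
  const now = performance.now();
  recentClicks = recentClicks.filter((c) => now - c.t < CLICK_WINDOW_MS);
  recentClicks.push({ x: e.clientX, y: e.clientY, t: now });

  const nearby = recentClicks.filter(
    (c) => Math.hypot(c.x - e.clientX, c.y - e.clientY) <= MAX_RADIUS_PX
  );
  if (nearby.length >= MIN_CLICKS) {
    console.log("possible rage click at", e.clientX, e.clientY);
    recentClicks = []; // reset so one burst is only reported once
  }
});
```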
Dead click / dead tap
A dead click (or dead tap) happens when users click something that looks interactive but doesn’t respond. It’s a strong indicator of UX mismatch, like non-clickable styling, missing links, blocked buttons, or elements that fail on certain devices.
Friction signal
A friction signal is a behavioral pattern that suggests struggle—like repeated clicks, excessive scrolling, backtracking, or errors during a task. These signals help teams prioritize which sessions to watch and which parts of a journey need investigation.
Evidence link (shareable replay link)
An evidence link is a shareable pointer to the exact session moment that demonstrates an issue. Teams use it to align product, UX, and engineering on what’s happening—without long meetings or subjective interpretations of “what users said.”
Masking at capture vs masking at playback
Masking at capture hides sensitive values before they leave the browser, so they don’t appear in stored events or replays. Masking at playback adds an extra viewing layer so specific teams or workflows can see less, even when underlying events exist.
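A minimal sketch of capture-side masking, assuming a hypothetical data-mask attribute (in practice, use your tool's documented masking selectors): the real value is replaced before anything is recorded.

```ts
// Capture-side masking: the recorded value never contains the sensitive input.
// The data-mask attribute is hypothetical, not a FullSession API.
function maskAtCapture(target: HTMLElement, rawValue: string): string {
  if (target.closest("[data-mask]")) {
    return "*".repeat(rawValue.length); // stored events only ever see the masked value
  }
  return rawValue;
}

document.addEventListener("input", (e) => {
  const el = e.target as HTMLInputElement;
  const recordedValue = maskAtCapture(el, el.value);
  console.log("value that would be recorded:", recordedValue);
});
```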
Heatmaps & page interaction terms
Definitions for click/scroll behavior, attention patterns, and how to interpret heatmaps responsibly.
Website heatmap
A website heatmap is a visual overlay that shows how visitors interact with a page—where they click, how far they scroll, and which areas get attention. It turns raw interaction data into patterns you can interpret quickly without digging through reports.
Click heatmap
A click heatmap highlights where users click (or tap) across a page. Teams use it to validate CTA placement, spot distracting hotspots, and find non-obvious dead zones—then pair findings with replay to understand the intent behind the clicks.
Scroll depth
Scroll depth shows how far users scroll down a page and where most people stop. It’s commonly used to validate whether key content and CTAs are actually being seen, and to decide what should move above the fold for higher visibility.
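For intuition, scroll depth is usually the farthest point the bottom of the viewport reaches, relative to total page height. A minimal browser-side sketch (not FullSession's implementation):

```ts
// Scroll depth as a percentage of total page height reached by the bottom of the viewport.
function currentScrollDepth(): number {
  const scrolled = window.scrollY + window.innerHeight;
  const total = document.documentElement.scrollHeight;
  return Math.min(100, Math.round((scrolled / total) * 100));
}

let maxDepth = 0;
window.addEventListener(
  "scroll",
  () => {
    maxDepth = Math.max(maxDepth, currentScrollDepth()); // track the deepest point reached
  },
  { passive: true }
);
```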
Attention hot spots
Attention hot spots are areas that consistently attract clicks, engagement, or interaction. They can indicate strong interest, or confusion if users repeatedly click the wrong element. Hot spots are most useful when you compare them across devices and segments.
Dead zone
A dead zone is a page area that gets little to no interaction, suggesting it’s ignored, unclear, or buried. Teams use dead zones to simplify layouts, reduce visual noise, and move high-value content toward areas users naturally engage with.
Fold line / above the fold
The fold line is the approximate point where content drops below the initially visible area of the screen. “Above the fold” content is what users see without scrolling. Heatmaps help validate whether important messaging and CTAs land above common scroll drop-off.
Device segmentation (mobile vs desktop)
Device segmentation compares interaction patterns across mobile and desktop (and sometimes breakpoint buckets). Because screen size, scrolling behavior, and tap targets differ, segmentation helps prevent false conclusions—like redesigning for desktop patterns when mobile users behave differently.
Traffic source segmentation
Traffic source segmentation splits heatmap data by channels (ads, email, organic, referrals) to uncover intent mismatch. If one source clicks differently or ignores key sections, it often points to misaligned messaging, landing page relevance, or audience expectations.
Funnels & conversion terms
Definitions for funnel steps, drop-offs, windows, and how to measure improvement.
Funnel analysis
Funnel analysis maps a journey into steps and measures how many users move from one step to the next. It highlights conversion and drop-off at each stage so you can see where users get stuck or disappear, and where fixes will most likely improve outcomes.
Funnel step
A funnel step is a defined stage in a journey, often represented by a page, screen, or event (like “Signup started” or “Checkout complete”). Clear steps make funnels interpretable, so teams can isolate the exact point where intent turns into abandonment.
Conversion rate
Conversion rate is the percentage of users who complete the desired action, either for a single step or for the full funnel. Teams use it to compare performance across segments and time periods, and to validate whether a change improved outcomes.
Drop-off rate
Drop-off rate is the share of users who leave between two funnel steps. High drop-off points are where teams focus investigation, often by jumping into session replay for affected users to see whether the cause is UX confusion, friction, or technical failure.
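An illustrative calculation with hypothetical counts, tying together funnel steps, conversion rate, and drop-off rate:

```ts
// Hypothetical funnel counts: conversion and drop-off between consecutive steps.
const steps = [
  { name: "Viewed cart", users: 10_000 },
  { name: "Started checkout", users: 6_200 },
  { name: "Completed purchase", users: 2_480 },
];

steps.forEach((step, i) => {
  if (i === 0) return;
  const prev = steps[i - 1];
  const conversion = (step.users / prev.users) * 100; // step conversion rate
  const dropOff = 100 - conversion;                   // share lost between the two steps
  console.log(
    `${prev.name} → ${step.name}: ${conversion.toFixed(1)}% convert, ${dropOff.toFixed(1)}% drop off`
  );
});

// Full-funnel conversion: 2,480 / 10,000 = 24.8%
```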
Conversion window
A conversion window is the time period you allow for users to complete a funnel (for example, within minutes, hours, or days). The window should match how the journey actually behaves; otherwise you may undercount conversions or misread where users truly abandon.
Segment
A segment is a subset of users grouped by attributes like device, country, traffic source, plan, or behavior. Segmenting funnels helps you find problems that only affect certain audiences—like mobile-only drop-offs, browser-specific errors, or low-intent campaign traffic.
Pre/post measurement
Pre/post measurement compares funnel performance before and after a change (a release, UX tweak, or campaign). It’s a fast way to estimate impact, but it can be influenced by seasonality and traffic mix, so teams often use experiments for higher confidence.
Experiment / holdout
An experiment (often A/B) compares variants to isolate cause and effect. A holdout keeps a control group unchanged. These approaches help validate whether a fix truly improved conversion, rather than coinciding with traffic shifts or external changes.
Errors, alerts & debugging terms
Definitions for monitoring issues, routing alerts, and debugging with session context.
Error tracking
Error tracking monitors your application for errors, groups similar issues, and helps your team understand what’s breaking in production. FullSession ties errors to real sessions so teams can see what the user did, what broke, and how it affected key journeys.
Error spike
An error spike is a sudden increase in errors, often correlated with a release, a third-party outage, or a broken flow. Spikes are high priority because they frequently affect many users at once and can create immediate conversion drops or support ticket surges.
Alert
An alert notifies your team when something crosses a threshold—like an error spike, a high-impact issue on a specific page, or failures in checkout. Good alerts include enough context to act quickly, without digging through logs to find basic details.
Frontend JavaScript error
A frontend JavaScript error occurs in the browser runtime and can break UI interactions, prevent submissions, or degrade performance. When connected to replay, teams can reproduce the failure in context—device, browser, route, and user actions leading up to the error.
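Independent of any particular tool, frontend errors typically surface through the browser's global error hooks. A minimal sketch:

```ts
// Uncaught runtime errors: message, source file, and position.
window.addEventListener("error", (event) => {
  console.log("JS error:", event.message, event.filename, event.lineno, event.colno);
});

// Promise rejections with no .catch() handler, a common source of silent UI failures.
window.addEventListener("unhandledrejection", (event) => {
  console.log("Unhandled rejection:", event.reason);
});
```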
Console context
Console context is the set of console messages and signals associated with a session at the time an issue occurs. It helps engineers see warnings and errors that might not show up in backend logs, especially for client-side failures and third-party script problems.
Network trace context
Network trace context links errors to the requests around them—failed APIs, slow responses, or blocked calls. This is especially useful when a UI failure is caused by backend or third-party issues, because the replay shows the user impact while traces suggest the cause.
Release version segmentation
Release version segmentation filters issues by the version of your app or deploy. It’s a fast way to confirm whether a new release introduced a problem, and it helps teams roll back or patch with confidence while measuring recovery of key funnels.
Privacy, security & governance terms
Definitions for masking, access control, retention, and legal/security review language.
Data masking
Data masking hides sensitive information so it doesn’t appear in analytics or replay. In FullSession, masking can be configured with fine-grained rules, and masking at capture helps ensure sensitive values never leave the browser in the first place.
Blocking (elements/pages)
Blocking prevents specific page elements—or entire pages—from being recorded. Teams use blocking for high-risk areas like payment, health, or private messaging. Done well, it reduces exposure risk while still letting teams capture surrounding behavior and technical signals.
RBAC (Role-Based Access Control)
RBAC controls who can see what by assigning roles and permissions. It supports least-privilege access so teams only view the data they need. RBAC is a key requirement for security-conscious environments where replay and behavior data must be tightly governed.
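A toy role-to-permission map makes the idea concrete; the roles and permissions below are hypothetical, not FullSession's actual model.

```ts
// Least-privilege by default: a role only gets the permissions explicitly granted to it.
type Permission = "view_replays" | "export_data" | "manage_projects" | "view_audit_logs";

const roles: Record<string, Permission[]> = {
  viewer: ["view_replays"],
  analyst: ["view_replays", "export_data"],
  admin: ["view_replays", "export_data", "manage_projects", "view_audit_logs"],
};

function can(role: string, permission: Permission): boolean {
  return roles[role]?.includes(permission) ?? false;
}

console.log(can("viewer", "view_audit_logs")); // false
```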
Audit logs
Audit logs record who accessed what and when—projects, sessions, or settings. They’re used for governance, internal investigations, and compliance workflows. Audit logs provide accountability and help security teams validate that access controls are actually being followed.
Retention controls
Retention controls define how long data is stored before deletion, helping limit risk and cost. Security and legal teams often require retention settings that match internal policies. For behavior data, retention also affects how far back teams can investigate issues.
Keystroke capture
Keystroke capture refers to recording individual key presses. Many teams avoid it due to sensitivity risk. Tools can reduce risk by limiting what’s captured and by using masking and blocking rules to prevent sensitive inputs from being recorded.
Lift AI & prioritization terms
Definitions for goal-based recommendations, confidence, and evidence-backed opportunities.
Lift AI
FullSession Lift AI turns real user sessions and behavior into a ranked list of opportunities tied to a goal (like checkout completion or signup conversion). It’s designed to reduce dashboard time by surfacing what to fix next, with evidence linked to each recommendation.
Goal (goal/funnel)
A goal is the outcome you want to improve, typically represented by a funnel or conversion event. Lift AI uses the goal to focus analysis on the sessions and signals most relevant to that outcome, so recommendations stay aligned to revenue-critical journeys.
Opportunity
An opportunity is a recommended improvement area, often a friction point, failure mode, or slowdown—ranked based on expected goal impact. Opportunities are most actionable when they include a clear description, affected steps/pages, and links to example sessions.
Expected improvement
Expected improvement is Lift AI’s directional estimate of how much a recommendation could improve the chosen goal if addressed. It helps teams prioritize. Measurement (pre/post or experiment) is still the source of truth for what actually changed after shipping.
Attribution time window
The attribution time window is the period Lift AI uses to relate behavior signals to a goal outcome. Choosing an appropriate window helps avoid misleading conclusions, especially for journeys where conversion happens hours or days after the first visit.
Validate (pre/post or experiment)
Validation is the step where you confirm whether a shipped fix moved the goal. Teams validate through pre/post comparisons or controlled experiments. This keeps prioritization honest: AI can suggest what to fix, but measurement confirms what actually improved.
Evidence links
Evidence links connect an opportunity to the proof behind it—specific steps, impacted pages, and example sessions. This reduces debate and accelerates shipping, because stakeholders can see the exact user experiences that motivated the recommendation.
Turn confusion into clarity
See how replay, funnels, heatmaps, errors, and Lift AI work together, on your own journeys.