
Mobile UX Audit Checklist: 20 Critical Issues Most Teams Overlook

March 4, 2026 · 18 min read

Mobile accounts for over 60% of global web traffic, yet conversion rates on mobile devices consistently lag behind desktop by 30–50%. The gap isn't a mystery — it's a symptom. Most products are audited for desktop-first problems and then adapted for mobile as an afterthought. The result is a surface that looks responsive but behaves like a compromise.


A proper mobile UX audit checklist doesn't ask whether your layout reflows at 375px. It asks whether someone can complete a task with one thumb, on a slow network, in a distracting environment, without making an error they can't recover from. These are fundamentally different questions — and they require a fundamentally different audit process.


This guide covers 20 specific, high-impact issues grouped into six categories. For each item you'll find what to check, why it matters, and how to fix it. At the end, you'll find a priority scoring framework to help your team triage findings and ship improvements in the right order.


Mobile UX isn't a resized version of desktop UX. It's a different interaction model operating under different physical, cognitive, and network constraints.

Why Mobile UX Audits Are Different

Before running through the checklist, it's worth establishing what makes mobile audits distinct. The usual responsive design checklist — does it wrap, does it scroll, does the text truncate — addresses layout concerns, not usability concerns. The problems that actually break mobile experiences fall into four categories:

• Physical constraints: Users interact with a finger, not a cursor. Thumb reach is limited. One-handed use is common. Input precision is lower.
• Network constraints: Mobile users are more likely to be on cellular connections with variable latency. Even on WiFi, mobile devices process and render differently than desktop hardware.
• Cognitive constraints: Mobile usage happens in fragmented sessions — commuting, waiting, multitasking. Attention is split. Memory load is higher.
• Context constraints: Lighting conditions, ambient noise, screen glare, and interruptions all affect how a user perceives and interacts with your product.

If your current audit process doesn't account for these dimensions, you're measuring the wrong things. For a broader view of how surface-level metrics can mask deeper product issues, see UX Signals vs UX Metrics.

Category 1: Touch and Interaction

1. Tap Target Size

What to check: Every interactive element — buttons, links, icons, form inputs — should have a tap target of at least 44x44px. This applies to the tappable area, not just the visual element. A 16px icon can have a 44px tap target through padding.


Why it matters: Undersized tap targets cause mis-taps, accidental navigation, and repeated failed attempts. Research from MIT Touch Lab found that the average adult fingertip covers 10–14mm of screen area. A 20px button is physically smaller than the contact zone of a fingertip.


How to fix: Use browser DevTools in device emulation to inspect computed sizes. Add padding or use min-height/min-width declarations. Pay particular attention to icon-only buttons (close, share, filter) which are frequently undersized.
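A minimal CSS sketch of the fix (the class name is illustrative):

```css
/* Guarantee a 44x44px tappable area even when the visible icon is small.
   The .icon-button class name is an assumption for illustration. */
.icon-button {
  min-width: 44px;
  min-height: 44px;
  display: inline-flex;
  align-items: center;
  justify-content: center;
  padding: 12px;          /* extends the hit area around a 16-20px icon */
  box-sizing: border-box; /* padding counts toward the 44px minimum */
}
```

The visual icon stays small; only the interactive surface grows.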

2. Tap Target Spacing

What to check: Adjacent interactive elements should have at least 8px of non-interactive space between them. Closely packed links (especially in navigation or tag lists) are a common failure point.


Why it matters: Even if individual elements are adequately sized, when they're packed together the effective precision required to hit the right target increases significantly. This is especially problematic for users with motor impairments or large fingers.


How to fix: Audit navigation menus, footer links, social icon rows, and tag/category filters. Increase margin or padding between elements. Consider a bottom sheet or modal for dense link clusters.

3. Gesture Conflicts

What to check: Identify any horizontal swipe carousels, pull-to-refresh, or custom swipe gestures. Test them alongside native browser gestures (back navigation, tab switching, scroll).


Why it matters: When a custom gesture conflicts with a system gesture, users either trigger the wrong action or become unable to navigate naturally. Horizontal scroll carousels that start near the left edge of the screen frequently hijack the back swipe on iOS Safari.


How to fix: Avoid gesture interactions near screen edges. Provide always-visible UI controls (arrows, indicators) as alternatives to gesture-only interactions. Never rely on gestures as the only way to perform an action.

4. Thumb Zone Coverage

What to check: Map your most critical actions (primary CTA, submit, back, confirm) against the thumb zone model. The natural thumb reach covers the lower two-thirds of the screen; the top third is the stretch zone.


Why it matters: Observational research on mobile grip has found that roughly 49% of users hold their phone with one hand. Placing primary actions in the stretch zone increases both effort and error rate. This directly affects completion rates on conversion flows.


How to fix: Move primary CTAs to the lower half of the screen. Consider bottom navigation over top navigation for high-frequency actions. Use sticky bottom bars for checkout, submit, and confirm actions.
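A sticky bottom bar takes only a few lines of CSS (the class name is illustrative; the safe-area inset keeps the bar clear of the iPhone home indicator):

```css
/* Keep the primary CTA pinned in the natural thumb zone */
.checkout-bar {
  position: sticky;
  bottom: 0;
  padding: 12px 16px;
  /* respect the home-indicator area on notched devices */
  padding-bottom: calc(12px + env(safe-area-inset-bottom));
  background: #fff;
  box-shadow: 0 -2px 8px rgba(0, 0, 0, 0.08);
}
```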

5. Feedback and Response Latency

What to check: Tap any interactive element and measure the time until visible feedback appears. Users expect visual feedback within 100ms of a tap. Check both the feedback timing and the feedback type (color change, ripple, opacity shift).


Why it matters: When feedback is delayed or absent, users tap again. Double submissions, double navigations, and duplicate form entries are almost always caused by missing or delayed tap feedback, not user error. This is a critical source of invisible friction — see Invisible Friction in Multi-Step Flows for more on how this breaks completion rates.


How to fix: Implement :active states with immediate CSS transitions. Use optimistic UI for actions that trigger network requests. Disable buttons immediately after tap and show a loading state.
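The "disable after tap" advice can be implemented as a small wrapper that ignores repeat taps while a request is still pending. A minimal sketch, with `singleFlight` as an illustrative name rather than a library API:

```javascript
// Wrap an async submit handler so taps that arrive while the first
// call is still pending are ignored — this prevents double submissions.
function singleFlight(handler) {
  let inFlight = false;
  return async function (...args) {
    if (inFlight) return;      // swallow repeat taps
    inFlight = true;
    try {
      return await handler.apply(this, args);
    } finally {
      inFlight = false;        // re-arm once the request settles
    }
  };
}
```

Pair this with an immediate `:active` style and a visible loading state so the ignored taps don't feel like a frozen UI.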

Category 2: Performance and Speed

6. Mobile-Specific Core Web Vitals

What to check: Run Google PageSpeed Insights on your most critical pages using the mobile tab. Focus on Largest Contentful Paint (LCP should be under 2.5s), Interaction to Next Paint (INP under 200ms), and Cumulative Layout Shift (CLS under 0.1).


Why it matters: Google's field data shows that mobile CWV scores are significantly lower than desktop scores for most sites, often by 2–3x. A 1-second improvement in mobile load time has been linked to 3.5–8% improvement in conversion rates across commerce benchmarks.


How to fix: Prioritize above-the-fold resource loading, preload LCP images, and defer non-critical scripts. Reserve space for dynamic content (ads, embeds) to prevent layout shift. For a deeper treatment of why speed perception matters beyond raw metrics, see How Performance Delays Break Perceived Usability.
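A sketch of these fixes in markup (file names are illustrative):

```html
<!-- Preload the LCP hero image so it starts downloading immediately -->
<link rel="preload" as="image" href="/img/hero-mobile.webp">

<!-- Defer scripts that are not needed for first render -->
<script src="/js/analytics.js" defer></script>

<!-- Reserve space for dynamic embeds so they cannot shift the layout -->
<div style="aspect-ratio: 16 / 9;">
  <iframe src="https://example.com/embed" title="Embed" loading="lazy"></iframe>
</div>
```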

7. Image Optimization for Mobile

What to check: Inspect images being loaded on mobile viewports. Are they using srcset or picture elements to serve appropriately sized images? Are hero images and product photos compressed for mobile bandwidth?


Why it matters: Serving a 2400px image to a 390px screen wastes 30–40x the bandwidth required. On a 4G connection with variable signal, this can add 2–4 seconds to perceived load time for image-heavy pages.


How to fix: Implement responsive images with srcset. Use modern formats (WebP, AVIF). Lazy load below-the-fold images. Consider Content Delivery Networks with on-the-fly image transformation.
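A responsive-image sketch combining srcset, modern formats, and lazy loading (file names and widths are illustrative):

```html
<picture>
  <!-- Modern formats first; the browser picks the first type it supports -->
  <source type="image/avif" srcset="/img/product-480.avif 480w, /img/product-960.avif 960w">
  <source type="image/webp" srcset="/img/product-480.webp 480w, /img/product-960.webp 960w">
  <!-- JPEG fallback; sizes tells the browser how wide the slot will render -->
  <img src="/img/product-960.jpg"
       srcset="/img/product-480.jpg 480w, /img/product-960.jpg 960w"
       sizes="(max-width: 480px) 100vw, 50vw"
       width="960" height="720" loading="lazy" alt="Product photo">
</picture>
```

The explicit `width`/`height` attributes also prevent layout shift while the image loads.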

8. JavaScript Bundle Impact

What to check: Use Chrome DevTools Coverage tab on a mobile-emulated view to identify JavaScript that is loaded but never executed on the initial page. Check for third-party scripts (analytics, chat widgets, A/B testing) that block rendering.


Why it matters: Mobile devices have significantly less CPU performance than desktop. JavaScript parsing and execution time is 3–5x longer on mid-range Android devices. A 500KB JS bundle that parses in 200ms on a MacBook may take 900ms on a Pixel 4a.


How to fix: Code-split aggressively. Defer or async non-critical scripts. Audit and prune third-party scripts — each one adds network round trips and parsing overhead. Prioritize script loading order to unblock first render.

Category 3: Navigation and Information Architecture

9. Hamburger Menu Usability

What to check: If you use a hamburger menu, test the full navigation flow: icon visibility, tap target, menu open animation, menu items legibility, close behavior, and scroll behavior within the menu on long nav structures.


Why it matters: Hamburger menus hide navigation, which reduces discoverability for secondary pages. Eye-tracking studies show that visible navigation items receive 3x more interaction than hidden ones. If your product's core value is distributed across multiple sections, hiding the navigation is costing you engagement.


How to fix: For products with 4–5 core destinations, switch to a bottom navigation bar. Reserve hamburger menus for secondary navigation. If you keep the hamburger, ensure the icon is labeled "Menu," placed in a predictable location, and has at least a 44x44px tap target.

10. Bottom Navigation Hierarchy

What to check: If you use a bottom navigation bar, audit the items for frequency of use vs. prominence. Are the 3–5 items shown the ones users access most? Are labels visible (not icon-only)? Does it stay visible during scroll?


Why it matters: Bottom navigation is in the natural thumb zone and should contain only the highest-frequency destinations. Icon-only navigation without labels increases identification time by 28% on average. The wrong items in the bottom nav can cause users to miss entire feature areas.


How to fix: Validate bottom nav items against your analytics (which pages are visited most from mobile). Add text labels. Keep the bar visible on scroll unless screen real estate is critically limited, in which case hide it only on downscroll and reveal on upscroll.

11. Search Experience

What to check: Test the search trigger (icon vs. persistent bar), keyboard behavior on search focus, result display on mobile, empty state messaging, and error handling for no results.


Why it matters: Mobile search sessions are intent-driven and high-value. Users who search convert at 2–5x the rate of browsers. But a search experience that triggers the wrong keyboard type, overlays critical content, or returns unparseable results on a small screen destroys that intent.


How to fix: For content-heavy products, use a persistent search bar above the fold on key landing pages. Use type="search" (adding enterkeyhint="search" where supported) so the keyboard shows a search return key. Show results progressively, not all at once. Prioritize result legibility in a single-column layout.
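A minimal search field sketch that triggers the correct keyboard and return key:

```html
<!-- type="search" selects the search keyboard; enterkeyhint labels
     the return key "search" on supporting mobile browsers -->
<form role="search" action="/search">
  <input type="search" name="q" enterkeyhint="search"
         aria-label="Search" placeholder="Search articles">
</form>
```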

12. Back Navigation Predictability

What to check: Test the Android back button and iOS back swipe across all flows — especially modals, drawers, multi-step forms, and deep navigation paths. Does back always go where the user expects?


Why it matters: Unexpected back navigation behavior is a top source of rage taps on mobile. If a modal closes unexpectedly, if back exits the app instead of going up a level, or if a multi-step form loses data on back, users often abandon. This maps directly to the drop-off patterns described in Why Users Abandon Between Steps 2 and 3.


How to fix: Map every screen's expected back behavior before building. Implement consistent back stack management. For multi-step flows, test back navigation explicitly and confirm data persistence. Warn users before losing unsaved state.

Category 4: Content and Readability

13. Font Size and Line Length

What to check: Measure body text size on a real device (not emulator). It should be at least 16px to avoid iOS auto-zoom on input focus and to maintain readability in varied lighting. Line length should be 45–75 characters per line on mobile.


Why it matters: Text below 16px on mobile inputs triggers automatic zoom in iOS Safari, breaking layout and disrupting user flow. Text that spans the full width of a phone screen without line-length control becomes difficult to track across lines, increasing reading errors and fatigue.


How to fix: Set base font size to 16px minimum. Use max-width on text containers rather than letting content span edge-to-edge. Increase line-height to at least 1.5 for body text on mobile.
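These rules fit in a few lines of CSS (the container class name is illustrative):

```css
/* 16px base avoids iOS auto-zoom; 1.5 line-height aids mobile reading */
body {
  font-size: 16px;
  line-height: 1.5;
}

/* Cap line length near the 45-75 character range instead of
   letting text run edge-to-edge */
.article-body {
  max-width: 65ch;
  margin-inline: auto;
  padding-inline: 16px;
}

/* Inputs below 16px trigger automatic zoom in iOS Safari */
input, select, textarea {
  font-size: 16px;
}
```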

14. Color Contrast in Variable Lighting

What to check: Test text contrast not just against WCAG minimums (4.5:1 for normal text) but against the experience of viewing your product in bright sunlight or in dark mode. Check placeholder text, secondary labels, and disabled states specifically.


Why it matters: Mobile devices are used outdoors frequently, where screen glare dramatically reduces perceived contrast. A 4.5:1 ratio that passes in a lab may fail in real-world conditions. Low-contrast placeholder text — a common pattern — is especially problematic: users routinely mistake it for pre-filled data.


How to fix: Target 7:1 for critical body text on mobile. Never use placeholder text as a substitute for labels. If you support dark mode, audit every text/background combination in both themes. For a full view of how these patterns compound into broken experiences, see 7 Core UX Signals of a Broken Product Experience.

15. Content Priority and Progressive Disclosure

What to check: On key pages (homepage, product/service page, landing page), identify what is visible in the first viewport on mobile without scrolling. Does the most important content and action appear above the fold?


Why it matters: Mobile users scroll less than desktop users. Studies from NN/g show that content below the fold receives 84% less attention than above-the-fold content on mobile. If your primary value proposition and CTA are buried below a hero image, a navigation bar, and a trust strip, you're losing users before they engage.


How to fix: Audit the first viewport on a 375px and 390px screen (the two most common iPhone widths). Everything non-essential — decorative imagery, secondary social proof, partner logos — moves below the CTA. Use progressive disclosure (accordions, "read more") for supporting detail.
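Native details/summary elements give you progressive disclosure without any JavaScript; a minimal sketch:

```html
<!-- Collapsed by default, keyboard accessible, no script required -->
<details>
  <summary>Read more about our guarantee</summary>
  <p>Supporting detail that would otherwise push the CTA below the fold.</p>
</details>
```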

Category 5: Forms and Input on Mobile

16. Keyboard Type Optimization

What to check: For every input field, verify that the correct keyboard type is triggered. Phone number fields should trigger the numeric pad (inputmode="tel"), email fields should show the email keyboard (type="email"), and numeric fields should use inputmode="numeric".


Why it matters: Wrong keyboard types force users to switch keyboard modes manually — an extra 3–5 taps per field. In a checkout form with phone, email, and card number fields, misconfigured keyboards can add 15–20 unnecessary interactions. This is a high-friction, low-effort fix with immediate conversion impact.


How to fix: Audit every input in every form. Map the input type and inputmode attribute against the expected keyboard. Test on both iOS and Android, as keyboard behavior differs between platforms. These small friction points compound into the pattern analyzed in UI Patterns That Slow Down User Actions.
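A sketch of the mapping (field names are illustrative):

```html
<!-- Each field declares the keyboard it needs -->
<input type="email" name="email">     <!-- email keyboard with @ key -->
<input type="tel" name="phone">       <!-- telephone pad -->
<input type="text" inputmode="numeric" pattern="[0-9]*"
       name="card-number">            <!-- digit pad on iOS and Android -->
```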

17. Autofill and Autocomplete Support

What to check: Test whether browser autofill and password managers work correctly on all forms. Check that autocomplete attributes are set correctly: name, email, tel, street-address, postal-code, cc-number, etc.


Why it matters: Mobile users rely on autofill more than desktop users because typing on a mobile keyboard is slower and more error-prone. Forms that block or break autofill effectively force manual entry for every field, dramatically increasing completion time and abandonment rate. A checkout form that takes 30 seconds on desktop can take 4–5 minutes when autofill is broken on mobile.


How to fix: Add correct autocomplete attributes to all form fields. Never use autocomplete="off" on forms that users are expected to complete repeatedly (login, checkout, profile). Test with iCloud Keychain on iOS and Google Autofill on Android.
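A checkout fragment with the relevant autocomplete tokens (field names are illustrative):

```html
<!-- These tokens let iCloud Keychain and Google Autofill fill the form -->
<input type="text"  name="name"    autocomplete="name">
<input type="email" name="email"   autocomplete="email">
<input type="tel"   name="phone"   autocomplete="tel">
<input type="text"  name="address" autocomplete="street-address">
<input type="text"  name="zip"     autocomplete="postal-code">
<input type="text"  name="card"    autocomplete="cc-number" inputmode="numeric">
```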

18. Error State and Recovery

What to check: Submit each form with intentional errors. Verify that error messages appear inline next to the specific field, that the error persists until corrected (not on re-focus), that the keyboard does not dismiss unexpectedly, and that previously entered data is preserved.


Why it matters: Form error recovery on mobile is significantly harder than on desktop. When an error triggers a page scroll to the top, the user loses their place. When valid data is wiped from all fields on validation failure, the user faces the entire input challenge again. Form abandonment after a first error is 27% higher on mobile than desktop for this reason.


How to fix: Validate inline on blur (not on submit). Scroll to the first error field precisely, not to the top of the page. Never clear valid field data during validation. Show errors in clear, non-technical language with specific correction instructions. For a full audit methodology on these failure patterns, see Mastering UX Analysis: Avoiding Common Pitfalls.
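The "never clear valid data" rule can be made concrete with a validator that returns plain-language messages alongside the untouched input. A minimal sketch with illustrative field rules:

```javascript
// Validate submitted values without discarding them. Returns the original
// values untouched plus specific, non-technical messages for invalid
// fields only. Field names and rules are illustrative.
function validate(values) {
  const errors = {};
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(values.email || '')) {
    errors.email = 'Enter an email like name@example.com';
  }
  if (!/^\d{5}$/.test(values.zip || '')) {
    errors.zip = 'Enter a 5-digit ZIP code';
  }
  // Valid data is returned as-is so the form re-renders without wiping it;
  // firstError tells the UI which field to scroll to precisely.
  return { values, errors, firstError: Object.keys(errors)[0] || null };
}
```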

Category 6: Conversion Path on Mobile

19. Checkout and High-Stakes Flow Optimization

What to check: Walk through your primary conversion flow end-to-end on a real mobile device. Count the total number of taps required. Identify any steps that require leaving the flow (email verification, external authentication, redirects). Check whether Apple Pay, Google Pay, or other one-tap payment options are offered.


Why it matters: Baymard Institute research shows that 26% of mobile checkout abandonment is caused by "too long/complicated checkout process." Every additional step in a mobile flow has a measurable toll on completion — often 5–15% drop per step. One-tap payment options like Apple Pay reduce checkout time by 60–80% for returning users.


How to fix: Eliminate any step that does not collect absolutely necessary information. Offer guest checkout without forced account creation. Implement digital wallet payment options. Keep the user in the same context — avoid redirects to payment processors that look visually discontinuous. For frameworks on managing cognitive load across these steps, see Cognitive Load Is Decision Pressure.

20. Trust Signals and Social Proof on Mobile

What to check: Identify where trust signals appear in the mobile layout (security badges, testimonials, review counts, guarantees). Are they visible near the primary CTA or buried below a long scroll? Are they legible without zooming?


Why it matters: Trust is contextually critical at the moment of conversion. On desktop, users can see the CTA and nearby trust signals simultaneously. On mobile, if trust signals are two scroll-lengths away from the CTA, they provide no conversion support at the decision moment. This is especially important for first-time users on mobile where brand recognition is lower.


How to fix: Place 1–2 compact trust signals (star rating, "30-day guarantee," SSL badge) immediately adjacent to the primary CTA — not in a separate section. Use compact formats that don't break flow: a single line of social proof ("Trusted by 10,000+ teams") outperforms a full testimonial block placed below the fold.

Priority Scoring Framework

Not all 20 issues carry equal weight. After completing your audit, score each finding on two dimensions: impact (how directly does this affect conversion, retention, or task completion?) and effort (how hard is it to implement a fix?). Use this three-tier framework:


Critical — Fix in the Current Sprint

• Tap targets under 44px on primary CTAs, submit buttons, or navigation
• Form fields triggering wrong keyboard types in checkout flows
• LCP over 4 seconds on mobile (field data)
• Back navigation that loses form data in multi-step flows
• Error states that wipe previously entered valid data

Important — Address in the Next 30 Days

• Tap target spacing issues in navigation and tag systems
• Missing autofill attributes on checkout and login forms
• Primary CTA not visible in first viewport on mobile
• Bottom navigation with icon-only labels
• Images not optimized for mobile bandwidth
• Trust signals misplaced relative to conversion CTA

Nice to Have — Backlog for Systematic Improvement

• Thumb zone optimization for secondary actions
• Gesture conflict resolution for non-critical carousels
• Progressive disclosure on content-heavy landing pages
• Dark mode contrast audit
• JavaScript bundle optimization beyond Core Web Vitals threshold

A mobile UX audit is only valuable if it generates a ranked action list that a team can actually execute. Treat it as a scored backlog input, not a compliance report.
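The impact/effort scoring above can be sketched as a small triage function; the numeric thresholds here are illustrative, not part of the framework:

```javascript
// Score each finding 1 (low) to 5 (high) on impact and effort,
// then bucket it into the three tiers described above.
// Thresholds are an assumption — tune them to your team's capacity.
function triage(finding) {
  const { impact, effort } = finding;
  if (impact >= 4 && effort <= 3) return 'Critical';
  if (impact >= 3) return 'Important';
  return 'Nice to Have';
}

// Example: rank audit findings into a scored backlog
const findings = [
  { name: 'CTA tap target 32px', impact: 5, effort: 1 },
  { name: 'Icon-only bottom nav', impact: 3, effort: 2 },
  { name: 'Dark mode contrast', impact: 2, effort: 4 },
];
const ranked = findings.map((f) => ({ ...f, tier: triage(f) }));
```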


For teams that want to run this audit on a recurring schedule rather than as a one-time exercise, the approach described in Building a Repeatable UX Audit System provides a practical framework for embedding these checks into product cycles.


Running the Audit: Practical Setup


To execute this checklist effectively, you need three things: a real device (not just emulator), a slow network simulation (Chrome DevTools allows throttling to Fast 3G and Slow 3G), and a structured observation protocol.


Emulators miss real-world rendering behavior, actual finger interaction dynamics, and OS-level behaviors like keyboard autocorrect and autofill. Always validate your highest-priority findings on at least one real iOS and one real Android device before prioritizing fixes.


Run the checklist on a fresh session with no cookies, no login state, and no pre-filled data. This replicates the new user experience — the one most likely to convert or not. Once you have findings, map them against your funnel analytics to identify which issues correlate with your highest drop-off points. The intersection of audit findings and funnel data is where your highest-ROI fixes live.


For broader context on how mobile audit findings fit within a full product UX assessment, the signal-based framework in UX Signals vs UX Metrics helps frame findings as product health indicators rather than isolated technical issues — making them easier to prioritize and communicate to stakeholders.


Conclusion


The 20 issues in this mobile UX audit checklist share a common thread: they are all invisible in standard QA processes, invisible in desktop testing, and invisible in aggregate analytics — but they are highly visible to users trying to complete a task with one thumb on a 6-inch screen in a coffee shop.


The most important shift in mobile UX auditing is moving from "does it work?" to "does it work well, under real conditions, for real people with real constraints?" That shift requires a specific checklist, a specific testing setup, and a prioritization framework that connects findings to business outcomes.


Run this audit on your most important mobile flows. Score the findings. Put the Critical issues in the next sprint. Revisit in 30 days. The conversion gap between mobile and desktop doesn't close from a single release — it closes from a sustained, systematic commitment to the details that most teams skip.
