Apps, Games & ecommerce – we accelerate your business with AI‑powered creative and performance marketing.
Every term defined the way practitioners actually use it inside live UA programmes. No recycled textbook definitions. Built by the team that has managed €500M+ in mobile ad spend across Meta, TikTok, Google, and programmatic DSPs.
Comparing two or more variants of an ad, creative, or landing page to identify which drives better performance against a defined metric. In mobile UA, meaningful A/B tests require statistical significance before you act on results. Most teams move too quickly, reading signals from insufficient impression or install volume. Test one variable at a time: the hook or format in isolation, not full creative swaps that change multiple elements simultaneously.
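To make "statistical significance" concrete, here is a minimal sketch of a two-proportion z-test comparing install rates for two creative variants. The impression and install counts are illustrative, not benchmarks:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test: is variant B's conversion rate different from A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))      # two-sided p-value
    return z, p_value

# Illustrative: variant A converts 120/10,000 impressions, variant B 160/10,000
z, p = two_proportion_z(120, 10_000, 160, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # act only if p is below your threshold (e.g. 0.05)
```

At low install volumes the same absolute difference would not clear the threshold, which is exactly why reading early signals misleads.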
A synthetically generated or AI-driven digital persona used in ad creatives, app onboarding, or brand content. AI characters range from fully animated 3D avatars to stylised 2D spokespersons rendered via generative models. In mobile UA, AI characters reduce production costs by 60-80% compared to live-action talent while enabling rapid localisation across markets. Performance data shows AI characters match or exceed human talent CTR in direct-response formats, particularly in gaming and fintech verticals.
End-to-end software environments that bundle generative AI models, prompt management, fine-tuning capabilities, and deployment infrastructure into a single product. For mobile marketing teams, AI platforms consolidate creative generation, copy variation, audience insight extraction, and performance prediction into unified workflows. Key evaluation criteria: model quality for your vertical, API reliability, data privacy controls, and cost per generation at the volume your creative velocity demands.
Video ad creatives produced partially or entirely using generative AI tools, from script generation and voiceover synthesis to full scene rendering. AI video ads compress the production cycle from weeks to hours, enabling the creative velocity (4-6 new variants per week) that high-performing UA programmes require. Current best practice: use AI for rapid concept iteration and first-draft production, then refine winning concepts with human creative direction. Pure AI-generated video already outperforms static ads on engagement metrics across most mobile verticals.
Automated sequences that chain multiple AI operations together to complete complex marketing tasks without manual intervention. A typical mobile marketing AI workflow might: pull performance data from your MMP, identify fatigued creatives, generate replacement concepts, produce ad copy variations, and queue assets for review. The shift from individual AI tool usage to orchestrated workflows is what separates teams experimenting with AI from those achieving compounding efficiency gains. Well-designed workflows reduce human touchpoints by 70% while maintaining creative quality gates.
Apple’s successor framework to SKAdNetwork, introduced with iOS 17.4 and expanded in iOS 18. AdAttributionKit extends privacy-preserving attribution beyond the App Store to alternative app marketplaces and web-to-app flows. It supports re-engagement attribution, developer postbacks, and multiple attribution windows. For UA teams, this means more attribution signal without sacrificing user privacy, but requires MMP integration updates and new postback configurations.
Meta’s automated campaign type for mobile app installs that uses machine learning to optimise ad delivery, creative selection, and audience targeting simultaneously. It typically outperforms manual structures at scale once sufficient conversion signal has accumulated (50+ in-app events per week). Performance is highly sensitive to the quality of in-app event postbacks fed back to Meta via the MMP.
Average Revenue Per User (ARPU) and Average Revenue Per Paying User (ARPPU). ARPU measures total revenue divided by all active users; ARPPU isolates only paying users, revealing monetisation depth. ARPU is meaningful only within a cohort window: D7 ARPU and D30 ARPU for the same campaign can differ by 2-4x. Segment by acquisition channel to surface the per-channel LTV differences that aggregate ARPU conceals.
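The two metrics are the same revenue figure divided by different denominators; a quick sketch with illustrative cohort numbers:

```python
def arpu_and_arppu(total_revenue, active_users, paying_users):
    """ARPU averages over all active users; ARPPU over payers only."""
    return total_revenue / active_users, total_revenue / paying_users

# Illustrative D30 cohort: $12,000 revenue, 10,000 active users, 400 payers
arpu, arppu = arpu_and_arppu(12_000, 10_000, 400)
print(f"ARPU ${arpu:.2f}, ARPPU ${arppu:.2f}")  # ARPU $1.20, ARPPU $30.00
```

The large gap between the two numbers is the point: a 4% payer rate with deep monetisation looks thin on ARPU alone.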
Apple’s native paid app promotion platform within the App Store, where advertisers bid on keywords to appear at the top of search results on a CPT (cost per tap) auction model. Because users are actively searching at the point of exposure, ASA consistently delivers higher purchase intent and stronger downstream retention than interruption-based channels. Placement types include Search Results, Today Tab, Search Tab, and Product Page Promotions.
The practice of optimising an app’s App Store listing (title, subtitle, keyword field, screenshots, icon, ratings, and reviews) to improve organic search ranking and tap-to-install conversion rate. ASO directly affects paid UA economics: a well-optimised listing improves CVR on Apple Search Ads, reducing effective CPI without changing a single bid. ASO and paid UA are the same cost lever at scale, not separate disciplines.
Apple’s iOS 14.5 framework requiring apps to request explicit user permission before tracking activity across apps and websites via IDFA. Users who decline ATT (typically 60-75% on consumer apps) cannot be tracked deterministically, limiting paid channel attribution. ATT accelerated adoption of SKAdNetwork, probabilistic modelling, and first-party data strategies across the entire iOS UA ecosystem.
Part of Google’s Privacy Sandbox for Android, replacing the advertising ID (GAID) for attribution with privacy-preserving event-level and aggregatable reports. Unlike SKAN’s single postback, it supports multiple conversion windows and cross-network attribution. Android UA teams should be testing now. GAID deprecation is underway, and early adoption gives a competitive edge in optimising bid strategies around the new signal structure.
A bid strategy that sets a hard ceiling on the maximum amount an advertiser will pay per optimisation event. Unlike cost caps, bid caps enforce a strict limit. The algorithm will not bid above the cap, even if doing so would win high-value impressions. Effective for cost control but can severely limit delivery volume if set too aggressively. Use when CPI/CPA discipline is more important than scale.
A targeting approach that removes demographic and interest restrictions, allowing the ad platform’s algorithm to find the best users within the entire available audience pool. On Meta and TikTok, broad audiences often outperform narrowly targeted segments because the algorithm has more room to optimise. The creative itself becomes the targeting mechanism. Different hooks attract different user segments organically.
A keyword match type in Apple Search Ads where your ad may appear for close variants, synonyms, and related searches, not just the exact keyword you bid on. Broad match enables discovery of high-performing queries you haven’t explicitly targeted, but requires aggressive negative keyword management to prevent budget waste on irrelevant traffic. Use broad match for keyword discovery, then migrate winners to exact match campaigns.
Meta’s server-side event tracking interface that sends conversion data directly from your server to Meta, bypassing browser-based pixel limitations and ad blockers. In mobile UA, CAPI typically works alongside your MMP to ensure event signals reach Meta reliably. Proper CAPI implementation can improve campaign delivery optimisation by 10-20% through more complete conversion data.
A campaign setting on Meta (and similar platforms) that distributes budget dynamically across ad sets based on real-time performance, rather than allocating fixed budgets to each ad set. CBO lets the algorithm shift spend to the best-performing audiences and creatives throughout the day. Works best when ad sets have similar audience sizes and comparable bid ceilings.
The percentage of users who stop using an app within a given time period, expressed as the complement of the retention rate. A D7 retention rate of 25% means D7 churn is 75%. Churn is most actionable when segmented by acquisition source: high-churn channels may deliver cheap installs but destroy unit economics. Reducing churn by even 5 percentage points often has a larger LTV impact than reducing CPI by 20%.
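Why a small churn reduction moves LTV so much can be seen with a simple geometric retention model, where LTV equals revenue per period divided by churn. This is a deliberate simplification (real retention curves are not geometric), but the direction of the effect holds:

```python
def ltv_geometric(revenue_per_period, retention):
    """LTV under a simple geometric retention model: revenue / churn."""
    churn = 1 - retention
    return revenue_per_period / churn

base   = ltv_geometric(1.0, 0.80)   # 20% periodic churn -> LTV 5.0
better = ltv_geometric(1.0, 0.85)   # churn cut by 5pp   -> LTV ~6.67
print(f"LTV uplift: {better / base - 1:.0%}")  # ~33%, versus the 20% a CPI cut saves
```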
Credits an install to the last ad a user clicked before installing. Click-through attribution is deterministic and high-confidence: the user actively engaged with the ad. Standard windows range from 7 to 30 days depending on the platform. Compare with view-through attribution (VTA), which credits impressions the user saw but did not click.
Grouping users who share a common characteristic (typically install date) and tracking their metrics over time. Cohort analysis reveals how retention, revenue, and engagement evolve, surfacing patterns that aggregate metrics conceal. The most common approach: daily install cohorts tracked at D1, D7, D14, D30, and D90 retention windows.
The cost to acquire a specific in-app action beyond the install: a registration, purchase, subscription start, or level completion. CPA is calculated as total spend divided by the number of target actions. CPA-optimised bidding requires sufficient post-install event volume (typically 50+ events per week per campaign) for the algorithm to learn effectively. CPA is a more meaningful efficiency metric than CPI for apps with complex monetisation funnels.
The total spend required to acquire one app install, calculated as spend divided by installs. CPI varies widely by genre and platform: hypercasual games see $0.20-$0.80, casual puzzle $1-$3, midcore $3-$8, and hardcore/strategy $5-$15+. iOS CPIs are generally 2-3x Android on comparable targeting. CPI alone is a misleading metric. Always evaluate alongside D7 retention and LTV to determine whether installs are actually profitable.
The cost per 1,000 ad impressions served. CPM is the base unit of media pricing across all platforms. Higher CPMs don’t always mean worse performance: a $15 CPM campaign that converts at 2 IPM yields a $7.50 CPI, lower than a $5 CPM campaign converting at 0.5 IPM ($10 CPI). CPM trends serve as a market barometer: rising CPMs during the Q4 holiday season or post-IDFA indicate increased competition for the same inventory.
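The relationship between CPM, IPM, and effective CPI is a single division, since both numerator and denominator are expressed per 1,000 impressions:

```python
def effective_cpi(cpm, ipm):
    """CPI = cost per 1,000 impressions / installs per 1,000 impressions."""
    return cpm / ipm

print(effective_cpi(15.0, 2.0))  # 7.5  -> "expensive" CPM, cheap installs
print(effective_cpi(5.0, 0.5))   # 10.0 -> cheap CPM, expensive installs
```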
The auction pricing model used by Apple Search Ads, where advertisers pay only when a user taps their ad in the App Store search results, equivalent to CPC in traditional search advertising. Effective CPT management requires setting keyword-level caps back-calculated from target CPI: if target CPI is $3.00 and expected CVR is 60%, the maximum CPT should be approximately $1.80.
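The back-calculation is a one-liner: since CPI = CPT / CVR, the keyword-level cap is CPT = CPI × CVR:

```python
def max_cpt(target_cpi, expected_cvr):
    """Since CPI = CPT / CVR, the keyword-level cap is CPT = CPI * CVR."""
    return target_cpi * expected_cvr

print(f"${max_cpt(3.00, 0.60):.2f}")  # $1.80
```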
The decline in ad performance that occurs when a target audience has been exposed to the same creative too many times, resulting in falling CTR and IPM. Key fatigue signals: CTR drop of 25%+ week-over-week on a previously stable creative, or IPM decline of 20%+ without corresponding changes in bid or placement. Fatigue should be monitored at the creative level, not the campaign level. A campaign average can look acceptable while 40% of budget runs against fatigued assets.
The rate at which new creative variants are produced, tested, and rotated into active campaigns. High creative velocity (4-6 new variants per week per active account) is the primary driver of sustained performance in mature UA programmes, continuously refreshing the winning creative pool and preventing fatigue compounding. Most in-house teams operate at low velocity (1-2 variants per month), which is the main reason UA performance plateaus rather than compounds over time.
The percentage of users who click or tap an ad after seeing it, calculated as clicks divided by impressions. CTR measures creative relevance to the audience (whether the hook and format attract attention) but not whether the resulting installs are high quality. A high CTR with low downstream retention signals a misleading creative that over-promises. Benchmark: 1-3% for in-feed social ads, 5-15% for Apple Search Ads.
Connected TV and over-the-top advertising delivers ads through streaming platforms (Roku, Fire TV, Apple TV, Samsung TV Plus) that can drive app installs via QR codes, companion banners, or second-screen retargeting. CTV is an emerging UA channel for mobile apps because it reaches users in lean-back, high-attention environments. Attribution remains challenging: most installs are measured through probabilistic matching or incrementality studies rather than deterministic click-through paths.
The percentage of users who complete a desired action out of those who had the opportunity. In mobile UA, CVR most commonly refers to tap-to-install rate on app store pages or click-to-install rate from ad impressions. CVR is the bridge between traffic quality and install efficiency. Improving CVR by 10% has the same economic impact as reducing CPI by 10%, but is often cheaper to achieve through CRO and ASO improvements.
The percentage of users who return to an app 1, 7, or 30 days after installing. Retention is the single most important quality signal in mobile UA. It determines whether installs convert into revenue. Benchmarks vary by genre: casual games see 35-45% D1, 15-20% D7, 5-10% D30. Subscription apps with strong onboarding can hit 50%+ D1. If D1 is below 25%, fix the product before scaling spend.
Daily Active Users and Monthly Active Users, the count of unique users who open the app in a given day or month. The DAU/MAU ratio (stickiness) indicates engagement quality: social apps target 50%+, gaming 20-30%, utility apps 15-25%. For UA, DAU/MAU by acquisition source reveals which channels bring engaged users vs. one-time installers.
Technology that automatically assembles and optimises ad creative elements (headlines, images, CTAs, backgrounds) in real time based on user signals. DCO shifts creative production from manual iteration to algorithmic combination, testing thousands of permutations simultaneously. On Meta, Advantage+ Creative performs a form of DCO. The key limitation: DCO optimises the arrangement of existing assets, not the conceptual quality of the creative itself.
A URL that routes users directly to a specific screen or content within an app rather than the home screen. Deep links reduce friction and improve conversion by delivering users to the exact experience promised by the ad. Implementation requires Universal Links (iOS) or App Links (Android), plus fallback handling for users who don’t have the app installed. Proper deep linking can improve retargeting CVR by 2-3x versus generic app open links.
A deep link that persists through the app install process: the user clicks an ad, is routed to the App Store, installs, and on first open is taken to the specific in-app content referenced in the original link rather than the default home screen. Deferred deep links are essential for performance campaigns where the ad promises a specific product, offer, or content piece. Most MMPs support deferred deep linking natively.
A platform that allows advertisers to buy mobile ad inventory programmatically across multiple ad exchanges and supply sources through real-time bidding (RTB). Leading mobile DSPs include Moloco, AppLovin (AXON), Unity, and Digital Turbine. DSPs access inventory that walled gardens (Meta, Google, TikTok) don’t reach, making them essential for scaling beyond self-serve platforms. Performance depends heavily on first-party data signals fed into the DSP’s ML models.
The effective revenue earned per 1,000 ad impressions shown to your users, calculated as (total ad revenue / total impressions) x 1,000. eCPM is the primary metric for apps monetised through in-app advertising. Higher eCPMs come from premium placements (rewarded video yields 3-10x higher eCPM than banners), strong user engagement, and optimised waterfall or bidding configurations in your ad mediation stack.
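The formula from the definition, with illustrative numbers:

```python
def ecpm(ad_revenue, impressions):
    """Effective revenue per 1,000 impressions served."""
    return ad_revenue * 1_000 / impressions

print(ecpm(450.0, 30_000))  # 15.0 -> $15 eCPM from $450 over 30,000 impressions
```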
Measures how actively users interact with an app: session frequency, session duration, feature usage depth, and content consumption. The exact definition varies by team. Some measure sessions per DAU; others track specific feature adoption. For UA, segment engagement rate by acquisition source to identify which channels deliver users who actually use the product, not just install and leave.
A keyword match type in Apple Search Ads where your ad only appears when the user’s search query matches your keyword precisely or with very close variants (plural/singular). Exact match delivers the highest relevance and typically the best CVR, but limits discovery of new queries. Best practice: run broad match campaigns for keyword discovery, then migrate proven winners into exact match campaigns with higher bids for maximum efficiency.
A deprecated attribution method that matched ad clicks with installs by combining device signals: IP address, device model, OS version, and screen resolution. Apple has prohibited fingerprinting since ATT and tightened enforcement through privacy manifests in 2024, and Google is following suit through Privacy Sandbox. Any attribution strategy still relying on fingerprinting is on borrowed time.
Data collected directly from your users through their interactions with your app, website, or CRM, as distinct from third-party data purchased externally. Post-ATT, first-party data is the most valuable signal for building seed audiences, lookalike targeting, and feeding platform algorithms. Apps with strong registration flows and rich in-app event taxonomies hold a significant competitive advantage in the privacy-first era.
A monetisation model where the app is free to download, with revenue generated through in-app purchases (IAP), subscriptions, or advertising. F2P dominates mobile gaming; freemium is the standard for SaaS and subscription apps. The UA implication is fundamental: no upfront revenue means the entire business model depends on acquiring users whose LTV exceeds their CPI. Cohort-level LTV tracking is not optional.
Google’s automated app install campaign types that distribute ads across Search, Play Store, YouTube, Display, and Discover. UAC (since rebranded as App Campaigns) uses machine learning to optimise bids, placements, and creative combinations; Performance Max extends similar automation to additional Google surfaces. Both require high-quality creative assets and sufficient conversion volume (50+ events/week) for effective algorithm training.
A self-reinforcing cycle where user actions attract new users, who repeat the cycle. Growth loops compound over time, unlike linear funnels. Common examples: referral loops (user invites friends), content loops (user-created content drives organic discovery), and performance loops (paid UA revenue funds more spend). The most defensible mobile businesses run at least one organic growth loop alongside paid UA.
The opening 1-3 seconds of a video ad designed to stop the user from scrolling. The hook is the single most important variable in mobile creative performance. It determines whether the rest of the ad is ever seen. Effective hooks use pattern interrupts, curiosity gaps, or immediate problem recognition. Testing should isolate hooks from body content: the same body with different hooks can see 3-5x differences in IPM.
A mobile game genre characterised by ultra-simple mechanics, minimal onboarding, and short session times, monetised primarily through in-app advertising. Hypercasual games rely on extremely low CPI ($0.10-$0.50) and high creative velocity to maintain profitability against thin eCPM margins. The genre’s UA playbook (massive scale, rapid creative iteration, aggressive IPM testing) has influenced UA strategies across all mobile categories.
A monetisation model where revenue is generated by showing ads to users within the app. IAA formats include rewarded video (highest eCPM, best user experience), interstitial (high eCPM, but can damage retention if overused), and banner (lowest eCPM, minimal disruption). Ad mediation platforms (MAX, ironSource, AdMob) optimise which ad network fills each impression. For UA, IAA-monetised apps must evaluate ROAS on ad revenue, not just IAP.
Transactions where users purchase virtual goods, premium features, currency, or subscriptions within an app. IAP is the primary revenue driver for most mobile games and subscription apps. Apple and Google take a 15-30% commission on IAP revenue. For UA optimisation, feeding IAP events back to ad platforms via your MMP enables value-based bidding, letting algorithms target users likely to spend, not just install.
Isolates the true causal impact of advertising by comparing outcomes between a test group exposed to ads and a holdout control group that sees none. Incrementality answers the question deterministic attribution cannot: “Would these users have installed anyway without the ad?” Post-ATT, incrementality testing is essential for validating channel efficiency. Run geo-based or audience-based holdout tests on each major channel quarterly.
A full-screen ad format displayed at natural transition points within an app (between game levels, after completing an action). Interstitials deliver high eCPMs ($10-30 in Tier 1 markets) but can damage retention if shown too frequently or at poor transition points. Best practice: cap at 2-3 interstitials per session, never on first session, and always at natural content breaks. Never mid-action.
The number of installs generated per 1,000 ad impressions. The primary creative performance metric on TikTok and in programmatic UA. IPM combines CTR and CVR into a single number, making it the best apples-to-apples comparison metric for creative variants. Higher IPM directly reduces effective CPI. Benchmark: 1-3 IPM is average for social ads, 5+ is strong, 10+ is exceptional and typically short-lived before fatigue sets in.
A metric measuring organic virality: the average number of new users each existing user generates. K-Factor = (invitations per user) x (conversion rate of invitations). K > 1 means exponential organic growth; K < 1 means paid acquisition is required to grow. Most apps have K-factors of 0.1-0.3. Even a low K-factor significantly reduces blended CPI by subsidising paid installs with organic ones.
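A sketch of the K-factor formula and its effect on blended CPI, using illustrative numbers. The blended-CPI function counts only one generation of virality; a full viral loop compounds geometrically toward a 1 / (1 − K) multiplier for K < 1:

```python
def k_factor(invites_per_user, invite_cvr):
    """K = invitations per user x conversion rate of invitations."""
    return invites_per_user * invite_cvr

def blended_cpi(spend, paid_installs, k):
    """One generation of virality: each paid install brings k organic installs."""
    return spend / (paid_installs * (1 + k))

k = k_factor(2.0, 0.10)                         # K = 0.2
print(f"${blended_cpi(10_000, 5_000, k):.2f}")  # $1.67 blended vs $2.00 paid-only CPI
```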
A targeting method where the platform identifies users who resemble a seed audience of your existing high-value users. Lookalike quality depends entirely on the seed: a 1% lookalike built from D30-retained payers outperforms one built from all installers. Post-ATT, lookalike effectiveness on iOS has diminished. Meta’s Advantage+ broad targeting often outperforms narrow lookalikes because it gives the algorithm more room to explore.
A neural network trained on massive text datasets that can generate, summarise, and reason about natural language. Models like GPT, Claude, and Gemini power a growing share of mobile marketing operations: ad copy generation, ASO keyword research, creative briefing, competitor analysis, and automated reporting. For UA teams, LLMs are most valuable when integrated into existing workflows rather than used as standalone chat tools. Fine-tuning or prompt engineering with your own performance data produces dramatically better output than generic prompts. The cost per generation has dropped 90%+ since 2023, making high-volume creative testing via LLMs economically viable for teams of any size.
The total net revenue generated by a single user over their entire relationship with the app. LTV is the north star of mobile UA economics: every bidding, budget, and channel allocation decision ultimately depends on it. LTV must be calculated per cohort and per channel, not as a blended average. Most teams track LTV at D7, D30, D90, and D180 windows, using early cohort data to predict long-term value via LTV curves.
A chart plotting cumulative revenue per user over time, revealing the shape of monetisation: whether revenue front-loads (IAP-heavy games) or accrues linearly (subscription apps). LTV curves enable extrapolation: if D7 LTV predicts 35% of D180 LTV for your genre, you can project long-term ROAS from early data. Accurate LTV curves are the foundation of predictive bidding and payback period calculations.
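The extrapolation in the example is a single division; the D7 LTV, curve share, and CPI below are illustrative:

```python
def project_d180_ltv(d7_ltv, d7_share_of_d180):
    """If D7 captures a known share of D180 LTV, divide through by that share."""
    return d7_ltv / d7_share_of_d180

ltv_d180 = project_d180_ltv(1.40, 0.35)  # projected $4.00 D180 LTV
print(f"projected D180 ROAS at $3.00 CPI: {ltv_d180 / 3.00:.2f}")  # 1.33
```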
An open protocol that standardises how AI models connect to external tools, data sources, and APIs. MCP enables LLMs to pull live data from your MMP, ad platforms, analytics dashboards, and creative repositories during a single conversation or workflow. For mobile marketing teams, MCP turns AI assistants from static knowledge tools into operational agents that can query your Adjust dashboard, pull Google Ads performance, and draft optimisation recommendations grounded in real data. The protocol is gaining rapid adoption across AI platforms and is becoming the standard integration layer for AI-powered marketing operations.
Uses regression analysis on aggregate spending and outcome data to determine each marketing channel’s contribution to business results, without requiring user-level tracking. MMM has resurged post-ATT as a privacy-safe complement to deterministic attribution. Meta’s open-source Robyn and Google’s Meridian make MMM accessible to teams without dedicated data science resources. Best used for strategic budget allocation across channels, not daily optimisation.
Meta’s advertising platform spanning Facebook, Instagram, Messenger, and Audience Network, and the largest single source of mobile app installs globally. Meta’s algorithm excels at identifying high-value users from broad audiences when fed sufficient conversion data. Key formats: Reels (highest engagement, growing inventory), Stories (strong for app installs), and Feed (largest reach). Post-ATT performance has recovered through Advantage+ automation and improved modelling.
A third-party attribution platform that tracks where app installs originate by matching click and install signals across multiple ad networks. Leading MMPs include Adjust, AppsFlyer, Singular, and Branch. MMPs provide the single source of truth for UA spend efficiency, deduplicating installs across channels and handling SKAN/AdAttributionKit postback decoding. Choosing and properly configuring your MMP is the single most important infrastructure decision in mobile UA.
A new generation of demand-side platforms that use proprietary machine learning models trained on advertiser first-party data to optimise programmatic bidding. Moloco, AppLovin’s AXON engine, and Unity’s ad network represent a shift from rule-based to ML-driven programmatic buying. These platforms perform best when fed rich post-install event data and given sufficient budget to train their models (typically $5-10K+ per week).
Keywords explicitly excluded from an Apple Search Ads campaign to prevent ads from appearing on irrelevant queries. Negative keyword management is critical when running broad match campaigns: without it, 30-50% of broad match spend can go to irrelevant queries. Review search term reports weekly and add negatives aggressively. Common negatives: competitor brand names, unrelated app categories, and queries with commercial intent mismatches.
The single metric that best captures the core value your product delivers to users and correlates most strongly with long-term business success. For mobile games: D7 retention or sessions per user. For subscription apps: trial-to-paid conversion rate. For e-commerce: orders per MAU. UA strategy should ultimately optimise for the north star metric, not vanity metrics like total installs or raw CPI.
Acquiring users through unpaid channels: App Store search (ASO-driven), word of mouth, press coverage, social media, and web SEO. Organic UA is not truly “free.” It requires investment in ASO, content, and product quality, but carries zero marginal cost per install. Paid and organic UA are interdependent: paid campaigns boost category ranking, which lifts organic visibility, creating a multiplier effect measured by the organic uplift ratio.
The practice of acquiring app users through paid advertising across channels like Meta, TikTok, Google, Apple Search Ads, and programmatic DSPs. Paid UA is the primary growth engine for most mobile apps because organic growth alone rarely achieves the scale venture-backed businesses require. Effective paid UA is a system: creative, bidding, attribution, and LTV analytics must work together. Optimising any single component in isolation produces diminishing returns.
The time required for the cumulative revenue generated by an acquired user cohort to equal or exceed the cost of acquiring it. A D90 payback period means the average user generates enough revenue by day 90 to cover their CPI. Shorter payback periods allow faster reinvestment of UA budgets. VC-backed apps typically target 6-12 month payback; bootstrapped apps need 30-90 day payback to maintain cash flow.
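Payback falls out of the LTV curve directly: find the first tracked day on which cumulative revenue per user covers CPI. The curve values below are illustrative:

```python
def payback_day(cpi, cumulative_revenue):
    """First tracked day on which cumulative revenue per user covers CPI.

    cumulative_revenue maps day -> cumulative net revenue per user.
    """
    for day, revenue in sorted(cumulative_revenue.items()):
        if revenue >= cpi:
            return day
    return None  # cohort has not paid back within the observed window

curve = {7: 0.90, 30: 2.10, 60: 3.20, 90: 4.10}  # illustrative LTV curve
print(payback_day(3.00, curve))  # 60
```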
An interactive ad format that lets users experience a simplified version of a game or app before installing. Playable ads typically deliver higher-quality installs with better D1 retention because users self-select: only those who enjoy the gameplay convert. Production cost is higher than static or video, but the install quality premium often justifies the investment. Best for games and interactive apps; less relevant for utility or e-commerce.
A machine learning model that predicts a user’s long-term lifetime value from early behavioural signals observed in the first 24-72 hours post-install. pLTV enables real-time bid optimisation toward high-value users before actual revenue data is available. Platforms like Meta, Google, and Moloco can ingest pLTV scores as value signals for value-based bidding. Building accurate pLTV models requires clean event data, sufficient training volume, and continuous recalibration as user behaviour shifts.
A server-to-server notification sent from an MMP or attribution framework to an ad network confirming that an install or in-app event occurred. Postbacks are the plumbing of mobile attribution. They carry the data that ad platforms use to optimise delivery algorithms. SKAdNetwork postbacks are delayed (24-48 hours) and contain limited conversion value data. Proper postback configuration directly affects campaign optimisation quality.
Google’s initiative to replace the Android advertising ID (GAID) with privacy-preserving APIs for targeting and measurement. Key APIs include Topics (interest-based targeting without tracking), Protected Audiences (on-device remarketing auctions), and Attribution Reporting (conversion measurement without user-level data). Privacy Sandbox represents the Android equivalent of Apple’s ATT. UA teams must prepare for a world without GAID by investing in first-party data and contextual strategies.
The automated buying and selling of digital ad inventory through real-time auctions, connecting demand-side platforms (DSPs) with supply-side platforms (SSPs). Programmatic reach extends beyond walled gardens to thousands of apps and mobile web publishers. Performance depends on data quality, creative variety, and algorithm training time. Programmatic is essential for scaling UA beyond Meta and Google, particularly for apps spending $100K+/month.
Targeting users who have never interacted with your app or brand, the top-of-funnel acquisition motion, as distinct from retargeting existing users. Prospecting campaigns prioritise reach and install volume over personalisation. On Meta, prospecting is typically run through broad or lookalike targeting; on programmatic channels, through contextual and behavioural targeting. Prospecting efficiency is measured by CPI and D1/D7 retention of acquired cohorts.
Messages sent directly to a user’s device lock screen or notification center, even when the app is not open. Push notifications are the highest-impact retention and re-engagement tool in mobile marketing: a well-timed push can recover 10-20% of lapsing users. Rich push adds images, action buttons, or deep links. Over-sending (more than 3-5 per week for most categories) drives opt-outs and uninstalls. Personalise timing and content based on user behaviour segments.
An emerging channel where retailers (Amazon, Walmart, Instacart) offer ad placements within their apps and platforms that can drive installs or actions in third-party apps. Retail media networks leverage high-intent purchase data for targeting: a user browsing fitness equipment on Amazon is a strong prospect for a fitness app ad. Early-stage for app UA but growing rapidly as retailers expand their ad businesses beyond on-platform product promotion.
Serving ads to users who have already installed your app but lapsed in engagement, aiming to bring them back. Retargeting delivers higher CVR and lower CPA than prospecting because users already know the product. Post-ATT, retargeting on iOS has become more challenging; on Android, Privacy Sandbox's Protected Audiences API is the intended replacement mechanism. Deep links are critical for retargeting: send users to the exact content or offer that's most likely to re-engage them.
The percentage of users who return to the app after a specific number of days. Retention is the single best predictor of long-term app value and the most important quality signal for UA. Industry benchmarks: D1 retention of 25-40% is average, 40%+ is strong. D30 retention above 10% indicates a sticky product. Always segment retention by acquisition source: a 30% D1 average can hide channels delivering 50% alongside channels delivering 10%.
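The segmentation point can be made concrete with a quick calculation. The cohort numbers below are hypothetical, chosen to show how a healthy blended D1 figure can mask a weak channel:

```python
# Hypothetical install cohorts: (source, installs, users active on day 1)
cohorts = [
    ("meta",         10_000, 5_000),  # 50% D1 retention
    ("programmatic", 10_000, 1_000),  # 10% D1 retention
]

def d1_retention(installs: int, returned: int) -> float:
    """D1 retention = users active on day 1 / installs in the cohort."""
    return returned / installs

blended = sum(r for _, _, r in cohorts) / sum(i for _, i, _ in cohorts)
print(f"blended D1: {blended:.0%}")  # 30% looks average...
for source, installs, returned in cohorts:
    # ...but the per-source split tells a very different story
    print(f"{source}: {d1_retention(installs, returned):.0%}")
```

The blended 30% sits squarely in the "average" benchmark band while one channel is actually delivering users that churn almost immediately.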
An ad format where users voluntarily watch a video ad in exchange for an in-app reward (extra lives, virtual currency, premium content access). Rewarded video delivers the highest eCPMs ($15-60 in Tier 1 markets) and best user experience of any ad format because the exchange is transparent and opt-in. Completion rates average 90%+, driving strong advertiser demand. The key to implementation: rewards must feel valuable but not undermine the core monetisation loop.
Revenue generated per dollar of advertising spend, expressed as a percentage or ratio. A D7 ROAS of 30% means every $1 spent generated $0.30 in revenue by day 7. ROAS is the primary profitability metric for mobile UA, but it must be evaluated at the right time window. D7 ROAS predicts long-term profitability only when correlated with established LTV curves. A campaign with 20% D7 ROAS can be highly profitable if the app’s D7-to-D365 revenue multiplier is 5x+.
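The projection logic in that last sentence can be sketched in a few lines. The spend, revenue, and multiplier figures are illustrative assumptions, not benchmarks:

```python
def d7_roas(spend: float, d7_revenue: float) -> float:
    """ROAS as a ratio: revenue per dollar of spend by day 7."""
    return d7_revenue / spend

def projected_long_run_roas(d7: float, d7_to_d365_multiplier: float) -> float:
    """Project D365 ROAS from D7 using an established revenue multiplier."""
    return d7 * d7_to_d365_multiplier

# $1,000 spent, $200 back by day 7 -> 20% D7 ROAS
roas = d7_roas(spend=1_000, d7_revenue=200)
# With a 6x D7-to-D365 multiplier, the same campaign projects to 120% ROAS
print(f"{projected_long_run_roas(roas, d7_to_d365_multiplier=6):.0%}")
```

The same 20% D7 figure would be a hard loss for an app whose revenue curve flattens after the first week, which is why the multiplier must come from your own historical LTV curves.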
An Apple Search Ads feature that automatically matches your ad to relevant search queries based on your app’s metadata, category, and related searches, without requiring explicit keyword bids. Search Match is useful for keyword discovery but delivers inconsistent quality without negative keyword pruning. Best practice: run Search Match in a separate campaign with a lower CPA ceiling, harvest high-performing queries, and migrate them to dedicated exact match campaigns.
Apple’s privacy-preserving attribution framework providing aggregated, delayed install and conversion data without exposing individual user data. SKAN delivers postbacks 24-48 hours after install with a single conversion value (0-63) that must encode all post-install quality signals. SKAN 4.0 introduced multiple conversion windows, coarse values, and crowd anonymity levels. It is being replaced by AdAttributionKit in iOS 18+, but understanding SKAN remains essential as the transition continues.
Google’s automated bidding strategies that use machine learning to optimise bids in real-time for each auction. Smart Bidding options include Target CPA (optimise for action cost), Target ROAS (optimise for revenue efficiency), and Maximise Conversions (maximise volume within budget). Smart Bidding requires 2-4 weeks of learning phase with consistent budgets and sufficient conversion volume. Changing targets or budgets by more than 20% resets the learning phase.
An audience exclusion list uploaded to ad platforms containing users who have already installed or converted, preventing wasted spend on re-acquisition. On Meta and TikTok, suppression lists should be updated daily from your MMP. Post-ATT, iOS suppression is imperfect (device-level matching has gaps), so some budget inevitably reaches existing users. Android suppression remains more effective with GAID matching (until Privacy Sandbox rolls out fully).
A bid strategy that tells the platform to optimise bids to achieve a target cost per action (install, registration, purchase). Unlike bid caps, Target CPA allows individual bids to exceed the target as long as the average CPA across the campaign stays at or below the goal. This flexibility delivers more consistent volume than strict caps. Set Target CPA based on your LTV-to-CPA ratio: if target LTV is $10, a Target CPA of $3-4 gives a 2.5-3x return.
A bid strategy that optimises for revenue efficiency, targeting a specific return on ad spend ratio. Target ROAS bidding requires purchase or revenue events to be passed back to the platform, making MMP-to-platform integration critical. Works best for IAP-heavy apps with sufficient purchase volume (50+ purchases/week per campaign). For apps with long monetisation tails, feed pLTV scores as value events rather than waiting for actual revenue.
TikTok’s advertising platform offering in-feed video, TopView, Spark Ads, and branded effects for mobile app install campaigns. TikTok’s algorithm excels at finding niche audiences through content-based matching rather than demographic targeting. Creative must feel native to the platform, polished ads underperform vs. UGC-style content. TikTok’s younger demographic skews toward casual and social apps but is expanding rapidly across all categories including finance and health.
Part of Google’s Privacy Sandbox, the Topics API infers a user’s interests from their recent app usage and shares a limited set of coarse topics (e.g., “Sports,” “Travel”) with advertisers, without revealing browsing history or enabling cross-app tracking. Topics replaces Google’s earlier FLoC proposal with a simpler, more privacy-preserving approach. For mobile UA, Topics enables interest-based targeting on Android after GAID deprecation, but with significantly less granularity than current ID-based methods.
The practice of driving new users to download and install a mobile app through paid, organic, or owned channels. UA is the central discipline in mobile marketing. It encompasses media buying, creative production, attribution, analytics, and LTV modelling. A mature UA operation optimises the entire system, not individual components. The best UA teams treat creative as their primary competitive advantage, not just a production task.
An ad designed to mimic user-generated content (handheld camera, natural lighting, conversational voiceover, minimal branding) so it feels native in social feeds. UGC-style ads consistently outperform polished brand ads on TikTok, Instagram Reels, and Shorts because they bypass ad fatigue patterns users have learned. They can be produced by real creators (best for authenticity) or AI-generated (best for velocity and cost). Combine UGC hooks with product demonstrations for highest IPM.
A bidding strategy that optimises for revenue value rather than install or event volume, targeting users predicted to generate the highest lifetime value. VBO requires sending purchase value data or pLTV scores to the ad platform. On Meta, VBO consistently delivers 20-40% higher ROAS than install-optimised campaigns for apps with sufficient purchase volume. The key constraint: you need enough purchase events (typically 100+/week) for the algorithm to optimise effectively.
Credits an install to an ad impression the user saw but did not click, provided the install occurs within a defined window (typically 1-24 hours). VTA captures the influence of visual exposure, particularly important for video and display campaigns where users install after seeing an ad without clicking. Keep VTA windows short (1-6 hours) to avoid over-counting. Self-reported networks often inflate VTA claims.
A user journey that starts on mobile web and converts to an app install, with attribution maintained across the transition. Web-to-app flows are growing because landing pages let advertisers pre-qualify users before sending them to the App Store, improving install quality and enabling richer tracking. Smart banners, deferred deep links, and MMP web SDK integrations handle measurement. Critical for subscription apps running Google Search or web-based paid campaigns.
A user who spends significantly more than average, typically the top 1-2% of payers generating 30-50% of total IAP revenue. Whale identification and acquisition are core objectives of value-optimised UA: targeting campaigns toward user profiles likely to become whales delivers outsized ROAS. Whales are identified through early behavioural signals (first purchase timing, session depth, feature engagement) and can be used as pLTV seed audiences.
We audit your attribution setup, creative pipeline, and LTV models for free. You get a clear diagnosis of where performance is leaking. No commitment, no sales pitch.
Straight answers from UA practitioners who manage live campaigns daily.
CPI (Cost Per Install) is total ad spend divided by the number of installs a paid campaign drives. CPI varies widely by platform, genre, and geography: iOS typically costs 2-3x more than Android, and competitive categories like fintech or RPG games command CPIs of $5-$15+. CPI alone says nothing about campaign quality. Always evaluate it alongside retention and LTV.
ROAS (Return on Ad Spend) measures gross revenue per dollar of ad spend: 150% ROAS means $1.50 back for every $1 spent. ROI (Return on Investment) accounts for all costs, including production, agency fees, platform commissions, and operational overhead, making it a net profitability metric. Use ROAS for daily campaign-level decisions. Use ROI for quarterly strategic budget allocation. A campaign can show positive ROAS but negative ROI when non-media costs are high.
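The "positive ROAS but negative ROI" case is easy to miss without running the numbers. A minimal sketch, with illustrative figures chosen to trigger exactly that divergence:

```python
def roas(revenue: float, ad_spend: float) -> float:
    """Gross revenue per dollar of media spend."""
    return revenue / ad_spend

def roi(revenue: float, ad_spend: float, other_costs: float) -> float:
    """Net profitability: all costs, not just media."""
    total_cost = ad_spend + other_costs
    return (revenue - total_cost) / total_cost

# $15k revenue, $10k media spend, $10k in production/agency/overhead costs
rev, spend, overhead = 15_000, 10_000, 10_000
print(f"ROAS: {roas(rev, spend):.0%}")          # 150% -- looks healthy
print(f"ROI:  {roi(rev, spend, overhead):.0%}")  # -25% -- losing money overall
```

The campaign clears 150% ROAS, which would pass most daily dashboards, while the programme as a whole loses a quarter of every dollar invested.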
SKAdNetwork is Apple’s privacy-preserving framework that attributes installs without exposing user-level data. When a user sees an ad and installs an app, Apple’s servers (not the ad network) generate an attribution postback sent 24-48 hours later. The postback includes a conversion value (0-63) that the app sets based on post-install behaviour. SKAN 4.0 introduced multiple conversion windows and coarse/fine value tiers based on crowd anonymity. The system is being succeeded by AdAttributionKit in iOS 18+.
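Because all post-install quality must fit into that single 0-63 value, teams typically pack it as a 6-bit field. The bit layout below is one common illustrative scheme, not an Apple requirement; the real mapping is whatever your MMP or in-app logic defines:

```python
def conversion_value(revenue_bucket: int, session_bucket: int, registered: bool) -> int:
    """Pack post-install signals into SKAN's single 6-bit conversion value.
    Illustrative layout: bits 0-2 revenue bucket (0-7),
    bits 3-4 session-count bucket (0-3), bit 5 registration flag."""
    assert 0 <= revenue_bucket <= 7 and 0 <= session_bucket <= 3
    return (int(registered) << 5) | (session_bucket << 3) | revenue_bucket

def decode(value: int) -> dict:
    """Recover the packed signals from a postback's conversion value."""
    return {
        "revenue_bucket": value & 0b111,
        "session_bucket": (value >> 3) & 0b11,
        "registered": bool(value >> 5),
    }

cv = conversion_value(revenue_bucket=5, session_bucket=2, registered=True)
print(cv, decode(cv))  # 53 {'revenue_bucket': 5, 'session_bucket': 2, 'registered': True}
```

The design trade-off is real: every bit spent on one signal is a bit unavailable for another, which is why conversion value schemas deserve as much planning as campaign structure.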
CPI measures the cost to acquire an install; CPA measures the cost to acquire a specific post-install action (registration, purchase, subscription). CPI is a top-of-funnel metric. It tells you acquisition cost but nothing about user quality. CPA goes deeper, measuring cost per valuable user action. For apps with complex funnels, CPA-optimised bidding is more effective because it targets users likely to complete meaningful actions, not just download the app.
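The two metrics fall out of the same funnel data. The spend and funnel numbers here are hypothetical:

```python
def cpi(spend: float, installs: int) -> float:
    """Cost per install: total spend / installs driven."""
    return spend / installs

def cpa(spend: float, actions: int) -> float:
    """Cost per post-install action (registration, purchase, subscription)."""
    return spend / actions

# $5,000 spend -> 1,000 installs, of which 100 complete a purchase
spend, installs, purchases = 5_000, 1_000, 100
print(f"CPI: ${cpi(spend, installs):.2f}")    # $5.00 per install
print(f"CPA: ${cpa(spend, purchases):.2f}")   # $50.00 per purchase
```

A $5 CPI looks cheap in isolation; the $50 cost per purchase is the number that actually has to clear your LTV.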
Predictive LTV (pLTV) uses machine learning to forecast a user’s long-term lifetime value from early post-install behaviour, typically within the first 24-72 hours. The model analyses signals like session frequency, feature engagement, and early monetisation events to predict D30, D90, or D365 revenue. pLTV scores can be fed to ad platforms (Meta, Google, Moloco) as value signals for value-based bidding, enabling algorithms to target users predicted to spend the most rather than just any user likely to install.
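In production, pLTV is a trained model (gradient boosting or similar) fitted on historical cohorts with observed D90/D365 revenue as the label. As a minimal sketch of the scoring step only, with made-up weights standing in for learned parameters:

```python
# Illustrative weights over early (first-72h) signals; a real model
# learns these from historical cohorts rather than hand-picking them.
WEIGHTS = {"sessions_72h": 0.4, "purchased_early": 25.0, "features_used": 1.5}

def pltv_score(sessions_72h: int, purchased_early: bool, features_used: int) -> float:
    """Linear score over early behavioural signals, a stand-in for a
    trained model's predicted D365 revenue."""
    return (WEIGHTS["sessions_72h"] * sessions_72h
            + WEIGHTS["purchased_early"] * int(purchased_early)
            + WEIGHTS["features_used"] * features_used)

# A user with 8 sessions, an early purchase, and 4 features explored
score = pltv_score(sessions_72h=8, purchased_early=True, features_used=4)
print(round(score, 1))  # this score is what gets sent to platforms as a value signal
```

The point is the plumbing: whatever produces the score, it must reach the ad platform as a value event quickly enough to steer bidding while the cohort is still in its learning window.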
Google’s Privacy Sandbox replaces the Android advertising ID (GAID) with privacy-preserving APIs: Topics API for interest-based targeting, Protected Audiences for remarketing, and Attribution Reporting API for measurement. This means Android UA will eventually face similar constraints to post-ATT iOS: less deterministic tracking, more reliance on aggregated data, and greater importance of first-party data strategies. UA teams should begin testing Privacy Sandbox APIs now and invest in pLTV models and MMM as complementary measurement approaches.
Incrementality testing measures the true causal impact of advertising by comparing a group exposed to ads against a control group that sees no ads (or a placebo). This isolates the “incremental” lift, installs or revenue that would not have occurred without the ad. Unlike attribution, which assigns credit, incrementality answers “did the ad actually cause the outcome?” Run geo-holdout or audience-holdout tests on each major channel quarterly to validate that attributed performance reflects genuine incremental value.
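The lift calculation itself is simple; the hard part is the experiment design. A sketch with hypothetical geo-holdout numbers:

```python
def incremental_lift(treatment_rate: float, control_rate: float) -> float:
    """Relative lift: the share of the treatment outcome the ads caused,
    over and above the organic baseline the control group reveals."""
    return (treatment_rate - control_rate) / control_rate

# Hypothetical geo-holdout: installs per 1,000 eligible users
treatment = 12 / 1000   # geos with ads running
control = 8 / 1000      # holdout geos, no ads
print(f"lift: {incremental_lift(treatment, control):.0%}")  # lift: 50%
```

Here a third of the treatment-group installs (4 of 12 per 1,000) would have happened anyway, which is exactly the gap attribution models cannot see and a quarterly holdout test can.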
An MMP (Mobile Measurement Partner) is a third-party platform that tracks where your app installs come from by matching ad clicks/views with installs across all advertising channels. You need one because without an MMP, you’re relying on each ad network’s self-reported data, and every network claims credit for as many installs as possible. An MMP deduplicates, providing a single source of truth. Leading MMPs include Adjust, AppsFlyer, Singular, and Branch.
Admiral embeds alongside in-house teams to build the systems that scale: attribution architecture, creative velocity pipelines, and LTV-driven bidding. You keep the knowledge. We accelerate the build.
Most mobile marketing glossaries recycle the same surface-level definitions you could find on Wikipedia. This one is built differently. Every term reflects how practitioners actually talk inside UA programmes managing six- and seven-figure monthly budgets.
The glossary spans foundational metrics (CPI, ROAS, LTV), attribution frameworks (SKAdNetwork, AdAttributionKit, Privacy Sandbox), AI-powered creative production, and emerging measurement approaches like predictive LTV and incrementality testing. Whether you run UA, interpret cohort data as a product manager, or allocate budget as a founder, you will find operational precision here instead of marketing fluff.
Maintained by the team at Admiral Media, a mobile performance marketing agency specialising in user acquisition, AI creative, and app growth across Meta, TikTok, Google, Apple Search Ads, and programmatic DSPs.
We review your attribution setup, campaign structure, creative output, and LTV models, then hand you a prioritised action plan. Free. No strings attached.