Leading AI Agency

Creative Performance Agency

Apps, games & e-commerce: we accelerate your business with AI-powered creative and performance marketing.

[Live reporting dashboard preview, Admiral Media performance account: ROAS (7 days) 4.8x, +23% vs prev. 7 days; CPA (last 30 days) €21.92, −18% vs baseline; ad spend (7 days) €127K, +8% vs prev. 7 days. CPA dropped from €26.80 to €21.92 in 7 days after creative v3 went live. Subscription app: ROAS up 48% in 7 days.]

The Admiral Performance Loop™

Most mobile UA agencies manage creative testing, audience modeling, and ROAS forecasting as three separate workstreams with three separate owners and three separate reporting cycles. The insight from Phase 1 rarely reaches the team running Phase 3. The payback model sitting in a spreadsheet rarely influences how bids are set at the campaign level. Each discipline operates with partial information, and the gaps between them are where performance bleeds out.

The Admiral Performance Loop™ is a closed-loop UA framework that connects these disciplines into a single compounding system. Signal informs creative. Creative output feeds audience modeling. Audience quality drives forecast accuracy. Forecasts govern scale decisions. And scale decisions generate new signals that restart the loop. Each cycle produces better data than the last, which is why Admiral campaigns tend to improve efficiency over time rather than decay.

The Loop consists of five phases:

Phase 1: Signal Capture

You cannot optimise a UA campaign against a signal you are not accurately measuring. This is not a novel observation, but it is one that a majority of growth teams violate in practice — either because attribution is misconfigured, postback windows are set to defaults rather than calibrated to the app’s monetisation curve, or the events flowing back to ad platforms are too shallow to be useful optimisation targets.

Signal Capture is the foundation phase. The work here happens before any campaign goes live, and revisiting it mid-flight is expensive. Getting it right at the start determines the quality ceiling for every phase that follows.

Attribution Setup

Admiral installs attribution via AppsFlyer, Adjust, or Singular depending on the client’s existing stack. The critical configuration decision is postback window length. Default windows (typically D7 for revenue events) do not match the monetisation curves of most mobile games or subscription apps, where meaningful revenue signals may not appear until D14 or D30. We set postback windows to match the app’s actual monetisation curve: a hypercasual game monetising on Day 1 ad revenue gets different windows than a midcore RPG where first IAP conversion peaks at D10–D14.
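The window-selection logic above can be sketched as a simple rule: pick the shortest standard postback window that covers most of the app's cumulative revenue curve. The coverage threshold and the curve values below are illustrative assumptions, not Admiral benchmarks.

```python
def postback_window(cum_rev_share, coverage=0.8):
    """Shortest standard window (days) whose cumulative revenue share
    meets the coverage target for this app's monetisation curve."""
    for day in (1, 7, 14, 30):
        if cum_rev_share.get(day, 0.0) >= coverage:
            return day
    return 30

# Illustrative curves: share of D30 revenue realised by each day.
hypercasual = {1: 0.85, 7: 0.95, 14: 0.98, 30: 1.0}   # Day-1 ad revenue
midcore_rpg = {1: 0.10, 7: 0.45, 14: 0.82, 30: 1.0}   # first IAP peaks D10-D14

print(postback_window(hypercasual))  # 1
print(postback_window(midcore_rpg))  # 14
```

A default D7 window would capture only 45% of the midcore curve here, which is the mismatch the paragraph above describes.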

Cohort Events as Optimisation Targets

Raw installs are a proxy, not a signal. Admiral connects D1, D7, and D30 retention events and revenue milestones as optimisation targets flowing back to Meta, Google, TikTok, and Apple Search Ads. This means the algorithms are learning against user quality, not install volume. The difference in downstream audience quality is substantial: campaigns optimised against D7 retention on iOS can produce 30–50% higher D30 ROAS than equivalent campaigns optimised against install volume alone.

Creative Tagging Taxonomy

Every creative asset is tagged at upload against a standardised taxonomy: format (static, video, playable, UGC-style), hook type (gameplay reveal, social proof, problem–solution, character-led, offer-first), and audience angle (core demographic, genre crossover, lapsed-user re-engagement). This taxonomy makes learnings portable. When a hook type consistently outperforms across multiple campaigns, that signal is identifiable in the data — not buried in ad naming conventions that differ by account manager.
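A minimal version of such a taxonomy can be enforced at upload time. The tag values mirror the categories named above; the validation code itself is an illustrative sketch, not Admiral's internal tooling.

```python
from dataclasses import dataclass

FORMATS = {"static", "video", "playable", "ugc_style"}
HOOKS = {"gameplay_reveal", "social_proof", "problem_solution",
         "character_led", "offer_first"}
ANGLES = {"core_demographic", "genre_crossover", "lapsed_reengagement"}

@dataclass(frozen=True)
class CreativeTag:
    asset_id: str
    fmt: str
    hook: str
    angle: str

    def __post_init__(self):
        # Reject off-taxonomy tags at upload so learnings stay portable
        # across accounts instead of hiding in ad-naming conventions.
        if (self.fmt not in FORMATS or self.hook not in HOOKS
                or self.angle not in ANGLES):
            raise ValueError(f"off-taxonomy tag on asset {self.asset_id}")

tag = CreativeTag("cr_0142", "video", "social_proof", "core_demographic")
```

Because every asset carries the same three fields, "which hook type wins across campaigns" becomes a group-by over tags rather than a parse of inconsistent ad names.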

Garbage-in, garbage-out is a cliché because it is consistently true. The most common failure mode in mobile UA is not bad creative or poor audience strategy — it is optimising against the wrong signal from day one and spending months scaling a campaign that is generating cheap installs from low-LTV users.

Phase 2: Creative Velocity

The algorithm is not the primary variable in mobile UA performance. The creative is. Ad platforms have largely converged on similar optimisation logic — what differentiates campaign performance at scale is the quality and volume of creative inputs. An account running 20 tested creative variants will outperform an account running 4, all else being equal, because the better variant wins more auctions at a lower effective CPM.

Creative Velocity is about building the systems to generate, test, and scale creative at a pace that most in-house teams cannot match and most agencies do not attempt.

Testing Volume and Structure

Admiral maintains a minimum output of 4–6 new creative variants per week per active account. This is not ad hoc production — it is structured testing against a defined matrix: Hook (what happens in the first 3 seconds) × Format (the creative container: video length, static ratio, playable, etc.) × CTA (the action being prompted and its framing). Each axis can be tested independently. A single winning hook can be developed into 8 format variants before the hook concept is retired, generating a high-quality testing queue without requiring 8 separate conceptual briefs.
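The Hook × Format × CTA matrix can be enumerated mechanically. The axis values here are placeholders standing in for an account's real taxonomy:

```python
from itertools import product

hooks   = ["gameplay_reveal", "social_proof", "offer_first"]
formats = ["video_15s", "video_30s", "static_1x1", "playable"]
ctas    = ["install_now", "play_free"]

# Full matrix: each axis varies independently.
matrix = [{"hook": h, "format": f, "cta": c}
          for h, f, c in product(hooks, formats, ctas)]
print(len(matrix))  # 3 x 4 x 2 = 24 variants

# One winning hook developed across every format/CTA combination:
winner_family = [v for v in matrix if v["hook"] == "social_proof"]
print(len(winner_family))  # 8 variants from a single conceptual brief
```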

Kill Threshold

Any creative that fails to hit the account’s IPM (Installs Per Mille impressions) benchmark within 5–7 days of delivery is paused. Not iterated. Not given another week. Paused. Iteration before proof of concept is one of the most common ways UA teams burn production capacity. The IPM threshold is set per account based on genre benchmarks and current account baseline — typically 1.5–3.0 IPM for casual games, 0.5–1.5 IPM for midcore, with significant variance by geo and placement.
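The kill rule is deliberately mechanical, which makes it easy to express as code. The numbers below are example inputs, with the benchmark set per account as described above:

```python
def ipm(installs, impressions):
    """Installs per mille (thousand) impressions."""
    return 1000 * installs / impressions

def kill_decision(installs, impressions, benchmark_ipm, days_live):
    if days_live < 5:
        return "keep"  # still inside the 5-7 day proof window
    # Below benchmark after the window: paused, not iterated.
    return "keep" if ipm(installs, impressions) >= benchmark_ipm else "pause"

print(kill_decision(180, 100_000, benchmark_ipm=2.0, days_live=6))  # pause (IPM 1.8)
print(kill_decision(250, 100_000, benchmark_ipm=2.0, days_live=6))  # keep (IPM 2.5)
```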

Concept Families

When a creative concept clears the IPM threshold and demonstrates positive D7 downstream signals, it is developed into a concept family: 5–8 variants that systematically explore format, length, CTA framing, and audience angle. This protects against fatigue while extracting maximum learning from a proven concept. A single winning hook angle, fully exploited across a concept family before retirement, can sustain 6–10 weeks of efficient delivery.

Fatigue Signals

Creative fatigue is monitored at the creative level, not the campaign level. Key signals: CTR drop greater than 25% week-over-week on a creative that previously held a stable CTR, and IPM decline greater than 20% week-over-week without corresponding changes in bid, audience, or placement. When either signal triggers, the creative enters a review queue rather than continued delivery.
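The two fatigue triggers reduce to a week-over-week comparison per creative. A minimal sketch, with example metric values:

```python
def fatigue_flags(ctr_prev, ctr_cur, ipm_prev, ipm_cur):
    """Week-over-week creative-level fatigue check; any flag -> review queue."""
    flags = []
    if (ctr_prev - ctr_cur) / ctr_prev > 0.25:   # CTR drop > 25% WoW
        flags.append("ctr_fatigue")
    if (ipm_prev - ipm_cur) / ipm_prev > 0.20:   # IPM decline > 20% WoW
        flags.append("ipm_fatigue")
    return flags

print(fatigue_flags(0.020, 0.013, 3.0, 2.8))  # ['ctr_fatigue']: CTR fell 35%
```

The check assumes bid, audience, and placement were held constant over the week, per the caveat above; a change on any of those axes invalidates the comparison.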

Most agencies fail at creative not because they lack design talent, but because they are too slow (waiting for monthly creative reviews), too conservative (running 2 variants when 6 is the correct number), and too reactive (responding to fatigue 2 weeks after it starts rather than monitoring for it weekly).

Phase 3: Audience Modeling

Audience quality in prospecting is a direct function of the seed audience quality used to build lookalikes. This is where most UA programs introduce a structural flaw that is invisible in early metrics but compounds over time: seeding lookalike audiences from all installs rather than from high-value cohorts.

When you seed a lookalike from all installs, you are asking the algorithm to find more users who look like your entire install base — including the users who installed from poorly-targeted creative, churned on Day 1, and generated zero revenue. The resulting lookalike imports the full quality distribution of your existing user base, including its worst segments.

High-Value Cohort Seeding

Admiral seeds lookalike audiences from D7 payers and D30 retained users exclusively. On Meta, this means Custom Audiences built from mobile app events filtered to purchase events with a D7 attribution window. On TikTok, equivalent event-based audiences. The seed size targets a minimum of 1,000 qualifying users before a lookalike is launched — below this threshold, platform signal quality degrades and the lookalike loses specificity.
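The seeding rule, filter to high-value cohorts and refuse to launch below the minimum seed size, can be sketched on synthetic user records:

```python
MIN_SEED = 1000  # below this, platform signal quality degrades

def lookalike_seed(users):
    """Seed exclusively from D7 payers and D30 retained users."""
    seed = [u["id"] for u in users if u["d7_payer"] or u["d30_retained"]]
    return seed if len(seed) >= MIN_SEED else None  # too small: don't launch

# Synthetic install base for illustration.
users = [{"id": i, "d7_payer": i % 3 == 0, "d30_retained": i % 5 == 0}
         for i in range(3000)]
seed = lookalike_seed(users)
print(len(seed))  # 1400 qualifying users -> lookalike can launch
```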

Exclusion and Suppression Logic

Prospecting campaigns run with active exclusion lists covering three segments: users who already have the app installed (suppression list via SKAdNetwork or MMP integration), churned users who previously installed and uninstalled, and low-LTV cohorts identified from historical data. Re-acquiring a user who already has the app is pure waste. Re-acquiring a low-LTV user at prospecting CPIs is negative-value activity dressed up as scale.

Broad Audience in Parallel

Narrow 1–2% lookalikes run in parallel with broad audience delivery on Meta and TikTok. Broad audience serves two functions: scale beyond the LAL size ceiling, and discovery — broad delivery regularly surfaces user segments that no structured lookalike would have identified. The budget split between broad and LAL is calibrated per account, typically 60–70% broad once a campaign has 14+ days of cohort data feeding the lookalike seed.

Phase 4: ROAS Forecasting

UA without LTV modeling is budgeting blind. A campaign reporting 40% D7 ROAS can be highly profitable, marginally profitable, or deeply unprofitable depending on the shape of the LTV curve for users acquired via that channel — a variable that differs significantly by genre, platform, and geo.

ROAS Forecasting is how Admiral translates early campaign signals into forward-looking budget and bid decisions rather than reacting to D7 performance snapshots.

LTV Curve Calibration

For new accounts, Admiral deploys LTV curves calibrated to genre benchmarks: D7, D14, D30, D60, and D90 revenue multiples by category (casual, midcore, hardcore, subscription, utility). These benchmarks are updated with account-specific cohort data as it matures. A campaign running for 60 days has enough D30 cohort data to replace benchmark assumptions with observed curves for that account. The model updates continuously rather than remaining static.
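The calibration step can be sketched as benchmark multiples that get blended toward observed cohort data as it matures. The multiples and blend weight below are assumed numbers for the sketch, not Admiral's tables:

```python
# Illustrative genre benchmarks: revenue as a multiple of D7 revenue.
BENCHMARKS = {
    "casual":       {14: 1.3, 30: 1.8, 60: 2.2, 90: 2.5},
    "subscription": {14: 1.6, 30: 2.5, 60: 3.4, 90: 4.0},
}

def project(d7_revenue, genre, day):
    """Forward revenue projection from the genre curve."""
    return d7_revenue * BENCHMARKS[genre][day]

def calibrate(benchmark_multiple, observed_multiple, observed_weight):
    """Blend toward observed cohort curves as account data matures."""
    return ((1 - observed_weight) * benchmark_multiple
            + observed_weight * observed_multiple)

print(project(10_000, "subscription", 30))               # 25000.0
print(round(calibrate(2.5, 2.1, observed_weight=0.75), 3))  # 2.2
```

As `observed_weight` rises with cohort maturity, the model converges on the account's own curve, which is the continuous-update behaviour described above.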

Per-Channel Payback Modeling

Each channel attracts users with different LTV profiles, so a single payback target applied across all channels misprices almost every channel. Meta prospecting, ASA exact match, and Google UAC broad match can produce users from the same app with D30 LTV multiples that vary by 40–60%. Applying the same D7 ROAS target to all three will over-spend on the channel with the flattest LTV curve and under-invest in the channel with the steepest one. Admiral builds per-channel payback models and sets channel-specific ROAS targets accordingly.
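The mispricing argument is arithmetic: a common D30 payback goal implies a different D7 target on each channel. The per-channel multiples below are assumptions for the sketch:

```python
def d7_target(payback_d30_roas, d30_over_d7_multiple):
    """D7 ROAS target implied by a shared D30 payback goal."""
    return payback_d30_roas / d30_over_d7_multiple

# Assumed D30/D7 LTV multiples per channel (illustrative only).
multiples = {"meta_prospecting": 2.5, "asa_exact": 1.8, "uac_broad": 2.1}
targets = {ch: round(d7_target(1.0, m), 3) for ch, m in multiples.items()}
print(targets)
# A flat D7 target of, say, 0.45 would over-spend on the flattest curve
# (asa_exact needs 0.556) and under-invest in the steepest (meta needs 0.4).
```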

Predictive Bidding

Where platform automation supports it — Meta Value Optimisation, Google tROAS — Admiral bids toward predicted D30 revenue events rather than D7 ROAS. This requires feeding high-quality revenue postbacks at the event level (not just aggregated conversions) so the platform model has sufficient data to bid against predicted value rather than volume. The setup work in Phase 1 (postback configuration and cohort event mapping) is what makes this viable.

A campaign showing 40% D7 ROAS in a genre where D30 LTV is typically 2.5× D7 LTV is tracking toward 100% D30 ROAS — a strong outcome. The same 40% D7 ROAS in a genre where D30 LTV is typically 1.1× D7 LTV signals a campaign that will not reach payback. The D7 number alone cannot tell you which situation you are in.
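The two scenarios above reduce to one multiplication plus a payback check:

```python
def d30_projection(d7_roas, d30_over_d7):
    """Project D30 ROAS from the D7 snapshot and the genre's LTV multiple."""
    projected = d7_roas * d30_over_d7
    verdict = "on track for payback" if projected >= 1.0 else "below payback"
    return projected, verdict

print(d30_projection(0.40, 2.5))  # (1.0, 'on track for payback')
print(d30_projection(0.40, 1.1))  # ~0.44: below payback
```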

Phase 5: Scale Architecture

Scaling a mobile UA campaign without structure produces predictable outcomes: CPI inflation as the primary channel reaches frequency saturation, creative fatigue accelerating beyond the team’s replacement rate, and spend concentration in a single platform that creates fragile dependency on algorithm changes. Scale Architecture is the operational framework that prevents these failure modes.

Channel Diversification Triggers

When the primary channel’s CPI rises more than 20% week-over-week without a corresponding improvement in downstream LTV signals, Admiral triggers budget redistribution to secondary channels. This is a rules-based trigger, not a discretionary judgment call. The threshold prevents the common pattern of continuing to scale a channel past its efficient range because the D7 numbers are still acceptable — by which point the damage to blended CPI is already significant.
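One way to encode the trigger, under the reading that "no corresponding improvement" means LTV gains failing to keep pace with CPI inflation, is:

```python
def redistribute_budget(cpi_prev, cpi_cur, ltv_prev, ltv_cur, threshold=0.20):
    """Rules-based trigger: fire when week-over-week CPI inflation exceeds
    the threshold without a matching improvement in downstream LTV signals."""
    cpi_rise = (cpi_cur - cpi_prev) / cpi_prev
    ltv_gain = (ltv_cur - ltv_prev) / ltv_prev
    return cpi_rise > threshold and ltv_gain < cpi_rise

print(redistribute_budget(3.00, 3.90, 5.0, 5.0))  # True: +30% CPI, flat LTV
print(redistribute_budget(3.00, 3.30, 5.0, 5.0))  # False: +10% CPI tolerated
print(redistribute_budget(3.00, 3.90, 5.0, 7.0))  # False: LTV rose faster
```

Because the function takes no discretionary inputs, the decision cannot drift with whoever happens to be reviewing the account that week.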

Platform Sequencing

Admiral follows a structured diversification sequence calibrated to data volume requirements. Meta and TikTok launch first — both platforms respond quickly to event signals and can achieve meaningful scale with modest seed data. Apple Search Ads (ASA) launches for iOS accounts once brand search volume and category benchmarks are validated. Google UAC for Android scales once the D30 cohort data from Meta is sufficient to calibrate tROAS targets. DSPs — Moloco, Unity Ads, Digital Turbine — are added as incremental reach channels once the primary mix is efficient, not as substitutes for it.

Geo Layering

Geos are managed in three tiers with separate CPI and LTV models for each: Tier 1 (US, UK, AU), Tier 2 (DE, FR, JP, CA), and Tier 3 (LATAM, SEA, MEA). Blending tiers in a single campaign produces averaged signals that are actionable for neither. A Tier-1 CPI target applied to a blended global campaign will either cap Tier-3 scale (if the target is US-calibrated) or overspend in Tier-1 (if the target is global-averaged). Separated tier campaigns allow appropriate bid levels, creative customisation, and payback modeling per market.
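A tier lookup with separate per-tier targets might look like the sketch below. The CPI figures are placeholder assumptions; real values are modelled per account and per market:

```python
GEO_TIERS = {
    "tier1": ["US", "UK", "AU"],
    "tier2": ["DE", "FR", "JP", "CA"],
    "tier3": ["LATAM", "SEA", "MEA"],
}

# Assumed per-tier CPI targets for the sketch (EUR).
CPI_TARGETS = {"tier1": 4.50, "tier2": 2.80, "tier3": 0.90}

def tier_of(geo):
    for tier, geos in GEO_TIERS.items():
        if geo in geos:
            return tier
    raise KeyError(f"unmapped geo: {geo}")

print(tier_of("JP"), CPI_TARGETS[tier_of("JP")])  # tier2 2.8
```

Keeping targets keyed by tier rather than averaged globally is exactly what prevents the blended-signal problem the paragraph describes.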

Budget Allocation at the Creative Level

Budget reallocation decisions happen at the creative level, not the campaign level. High-performing creatives receive incremental budget within 48–72 hours of reaching statistical significance. Underperformers are paused on the same timeline. Campaign-level budget decisions that lag behind creative-level performance signals waste significant spend on mixed portfolios — a campaign average can look acceptable while 40% of the budget runs against creatives that cleared the kill threshold days ago.
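The per-creative decision rule can be sketched directly; the budget step and the input records are illustrative:

```python
def reallocation_plan(creatives, step=0.15):
    """Per-creative decisions on a 48-72h cadence; campaign averages ignored."""
    plan = {}
    for c in creatives:
        if not c["significant"]:
            plan[c["id"]] = "hold"                    # not enough data yet
        elif c["ipm"] >= c["benchmark_ipm"]:
            plan[c["id"]] = f"budget +{step:.0%}"     # proven winner
        else:
            plan[c["id"]] = "pause"                   # cleared kill threshold
    return plan

creatives = [
    {"id": "cr_1", "ipm": 2.6, "benchmark_ipm": 2.0, "significant": True},
    {"id": "cr_2", "ipm": 1.4, "benchmark_ipm": 2.0, "significant": True},
    {"id": "cr_3", "ipm": 2.2, "benchmark_ipm": 2.0, "significant": False},
]
print(reallocation_plan(creatives))
# {'cr_1': 'budget +15%', 'cr_2': 'pause', 'cr_3': 'hold'}
```

Note that a campaign-level average of these three creatives could look healthy while cr_2 keeps spending, which is the mixed-portfolio waste described above.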

How the Loop Creates Compounding Returns

The five phases are not a linear sequence — they are a continuous loop. Phase 5 output feeds Phase 1 input. Every scale decision generates new cohort data that sharpens the LTV curves in Phase 4. Better LTV curves produce more accurate audience seeds for Phase 3. Better audience seeds improve the quality signal against which Phase 2 creative is being optimised. And Phase 2 learnings — which hook types, formats, and audience angles drive high-LTV installs — feed back into the creative tagging taxonomy that makes Phase 1 signals interpretable.

The loop closes continuously, not just at campaign launch. This is why Admiral campaigns tend to become more efficient over the first 60–90 days rather than plateauing at launch efficiency levels: the system is learning faster than it is fatiguing.

Standard agency practice runs these phases independently. Creative is briefed by one team, audience strategy is set by another, and ROAS analysis is delivered in a monthly report that is already 30 days stale. The structural gap between these workstreams means the loop never closes — learnings from scale decisions do not inform creative strategy, and creative performance does not influence audience seeding. Each campaign launch starts with the same assumptions the previous campaign had, rather than building on what the previous campaign learned.

Where the Loop Applies

The Admiral Performance Loop™ is the operating methodology across all Admiral engagements. The specific configuration of each phase varies by app category, but the underlying structure is consistent.

Apply the Loop to Your App

The Admiral Performance Loop™ is the operational framework behind every Admiral engagement. The five phases are not a pitch deck concept — they are the actual sequence of work that runs from onboarding through to steady-state campaign management.

If you want to understand how the Loop would apply to your specific app — which phases are weakest in your current setup, where the measurement gaps are, and what the realistic improvement trajectory looks like — the starting point is a UA Audit. We review your attribution configuration, current creative output, audience structure, and LTV modeling and return a prioritised gap analysis within five business days.

There is no cost and no obligation. The audit either confirms your current setup is sound or identifies the specific changes that would move your key metrics. Either outcome is useful.

Request a Free UA Audit

Get in touch with us