Practical, repeatable operating procedure for taking a ClickBank offer from first test to controlled scale
This SOP turns Jordan’s ClickBank process into a repeatable operating system for beginners. The sequence is simple on purpose: qualify the offer, launch 3 initial campaigns, find 2-3 winning ads, test primary text, test landing pages, and only then add custom VSSLs and monetization layers, once a profitable baseline exists.[1][2][3]
The biggest theme across the project files is simplification. Don’t optimize for CTR, CPC, or button clicks first. Optimize for purchases, use initiate checkout as your faster internal read, and avoid multiplying variables too early.[1][2] Jordan’s own unlocks came from removing complexity: one winning landing page across most ads, aggregate landing-page reads instead of ad-to-page matching, and adding advanced layers only after baseline profitability.[1][2]
If you follow this SOP, you should always know:
- what phase an offer is in
- what metric decides whether it advances
- what gets killed
- what gets scaled
- what gets added next
Jordan is explicit: purchases are the real optimization target. He does not optimize for CPC, CTR, button clicks, or headline tests.[1][2]
What this means operationally:
- Your ad account optimization goal should stay tied to purchases whenever possible.
- Your internal decision-making can use initiate checkout as a faster signal.
- CTR and CPC are diagnostics, not decision-makers.
Jordan’s data point: on ClickBank, roughly 25% of initiate checkouts become purchases.[1][2]
That means:
- 4 initiate checkouts ≈ 1 purchase
- cost per initiate checkout gives you signal faster than waiting for purchases
- you can speed up decisions without changing the campaign’s real optimization goal
The project files repeat this lesson over and over:
- one landing page usually wins across most ads[1][2]
- don’t pair specific ads to specific landing pages[2]
- don’t split-test everything at once[1][2]
- don’t add custom VSSLs or monetization layers before the base funnel works[1][2]
The winning order is:
1. offer validation
2. simple bridge page + vendor VSSL
3. 3 initial campaigns
4. ad winners
5. primary text winners
6. landing page winner
7. custom VSSL
8. monetization layers[1][2]
If you skip steps, you create false positives and false negatives.
| Phase | Goal | Main Variable | Advance When | Kill When |
|---|---|---|---|---|
| 0. Qualification | Choose a viable offer | Offer quality | Offer passes validation checklist | Offer fails core validation |
| 1. Launch Prep | Build clean test environment | Tracking + base assets | Tracker, bridge page, swipe file ready | Tracking broken / assets incomplete |
| 2. 3 Initial Campaigns | Find raw creative signal | Hooks + images | 2-3 ads show real promise | No useful signal after full test battery |
| 3. Winner Consolidation | Isolate best creatives | Creative selection | 2-3 ads keep hitting KPI | Winners collapse after consolidation |
| 4. Primary Text Testing | Improve message around winning creatives | Above-the-fold, then below-the-fold text | A clear message winner emerges | Text tests add no lift |
| 5. Landing Pages | Improve click-to-purchase economics | Page format | One page wins in aggregate | No page reaches target economics |
| 6. Custom VSSL | Create differentiated sales asset | VSSL sections | New open/body/close beats control | Custom version loses to vendor control |
| 7. Monetization Layers | Increase value per click | Pop-unders / push | Base funnel stable and profitable | Core funnel not yet stable |
Do not test random products. Use the project’s validation framework first.[3][4]
An offer should pass all of these before it goes into paid testing:
- Gravity / sales proof: 20+ gravity on ClickBank or equivalent evidence of active affiliates[3]
- EPC: must be higher than your expected CPC[3]
- Commission: ideally $30+ if you’re buying paid traffic[3]
- VSSL quality: watch the full VSSL yourself[3]
- Refund rate: below 10% if possible[3]
- Upsells: at least one meaningful upsell[3]
- Mobile experience: must look clean on mobile[3]
- Ethical threshold: if you’d be embarrassed to promote it, walk away[3]
Beginner version:
- 1 offer in active testing at a time
- 1 backup offer queued
- 3-5 more offers in watchlist
Jordan-style advanced version:
- 3 new offers per week[1][3]
Before an offer reaches launch, you should have:
- vendor name
- offer link
- avg payout
- target ROAS
- max allowable purchase CPA
- target cost per initiate checkout
- 10-20 hook ideas
- 20-50 competitor creative references
Use these formulas before spending money:
Target ROAS (ClickBank affiliate offer): 1.5[2]
Max purchase CPA:
Avg commission per sale ÷ target ROAS
Example:
- Avg commission = $45
- Target ROAS = 1.5
- Max purchase CPA = $30
Target initiate checkout CPA:
Max purchase CPA × 0.25
Example:
- Max purchase CPA = $30
- Target initiate checkout CPA = $7.50
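The two formulas above can be wrapped in a short helper for the pre-launch scoreboard. This is a minimal sketch: the function name and defaults are illustrative, using the SOP's 1.5 ROAS target and the roughly 25% initiate-checkout-to-purchase rate.

```python
def offer_targets(avg_commission, target_roas=1.5, ic_to_purchase_rate=0.25):
    """Return (max purchase CPA, target initiate-checkout CPA) for an offer.

    avg_commission: average commission per sale, in dollars.
    target_roas: revenue / spend target (1.5 for ClickBank affiliate offers per the SOP).
    ic_to_purchase_rate: share of initiate checkouts that become purchases (~25%).
    """
    max_purchase_cpa = avg_commission / target_roas
    target_ic_cpa = max_purchase_cpa * ic_to_purchase_rate
    return max_purchase_cpa, target_ic_cpa

# Worked example from the SOP: $45 commission at 1.5 target ROAS.
print(offer_targets(45.0))  # (30.0, 7.5)
```

Run this once per offer before spending money, and record both numbers on the offer scorecard.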
This becomes your operational scoreboard.
This phase exists so you don’t waste money on broken plumbing.
Before launch, have these ready:
- 1 tracker account (RedTrack or equivalent — do not build your own tracker)[1][2]
- 1 simple bridge page live
- 1 vendor VSSL path confirmed
- 1 naming convention for campaigns / ads / landing pages
- 1 spreadsheet or dashboard for daily reads
- 1 swipe file of competitor hooks and creatives
For first launch, keep the path simple:
Meta ad -> bridge/splash page -> vendor VSSL -> ClickBank checkout
Do not start with:
- custom VSSL
- quiz + email + push + pop-under stack
- multiple landers mapped to multiple ad sets
- exotic monetization tricks
Jordan starts with simple splash / bridge pages before layering complexity.[1][2]
Your first page should usually have:
- one strong headline
- one visual
- one CTA button
- optional short proof / framing copy
Before launch, prepare:
- 20 inspired ad variations
- 20 red-square/yellow-text hook cards
- 20 AI-generated images
- 10 primary text ideas held for later testing
That gives you the full 60-ad starter battery Jordan uses for a new offer.[1][3]
This is the first true test phase.
Find raw signal from creatives fast.
Jordan’s sequence is clear.[1]
An ad becomes a signal ad when it does one or more of these:
- gets cheaper purchases than account average
- gets initiate checkouts at or below target IC CPA
- keeps showing promise after initial spend instead of collapsing
- shows up as a clear leader on end-to-end economics
You want to exit with:
- 2-3 candidate winner ads[1]
- 1-2 losing hook clusters to stop pursuing
- 1-2 promising creative themes to drill into
Now you move from wide testing to controlled concentration.
Extract the 2-3 best ads from the initial battery and verify they still work in a more focused environment.[1]
A candidate winner becomes a confirmed winner when it:
- stays at or under target purchase CPA or target IC CPA
- continues producing after being moved into the focused campaign
- doesn’t rely on a weird one-off spike
- looks repeatable enough to build more around
Soft winner:
- one purchase or a few cheap ICs
- promising, but not ready for aggressive scale
Hard winner:
- repeated signal over multiple days
- stable enough to justify more copy and page testing
You want:
- 1-3 hard winners
- 1 dominant angle cluster
- 1 clear next step: primary text testing
Jordan’s flow after finding ad winners is to test primary text, not headlines.[1]
Improve the messaging around winning creatives without changing the creative itself.
Jordan’s testing order:[1]
1. test above-the-fold primary text first
2. then test below-the-fold copy
3. do about 10 variations of each stage
4. do not waste time on headline split tests
Above-the-fold text tests cover:
- angle framing
- pain statement
- curiosity hook
- mechanism lead-in
- emotional opener
Below-the-fold text tests cover:
- supporting proof
- elaboration
- CTA framing
- social proof language
- urgency / payoff clarification
When doing text testing:
- keep the winning image fixed
- keep the landing path fixed
- keep offer fixed
- only change the text variable being tested
Advance the text winner when it improves either:
- purchase CPA
- initiate checkout CPA
- total purchase volume at similar efficiency
You want:
- 1 winning above-the-fold text
- 1 winning below-the-fold text
- 1 final ad package worth moving into landing page testing
This is where many affiliates overcomplicate the system. Jordan’s simplification here is one of the most important project lessons.[1][2]
Find the best page in aggregate for the offer, not per ad.[1][2]
Jordan’s standard page menu:[1]
1. Bridge / Splash Page
2. Listicle
3. Quiz Funnel
4. Scientific Advertorial
5. Granny Blog
Do not put landing pages into separate campaigns just to test them.[2]
Instead:
- keep the campaign structure stable
- use your tracker to split traffic randomly across pages[1][2]
- read the page performance in aggregate
Bridge / Splash Page is best for:
- quickest baseline test
- fastest build
- low-friction offers
Listicle is best for:
- curiosity + recommendation structure
- “Top 5 remedies / options / fixes” framing[1][2]
Quiz Funnel is best for:
- emotional progression from problem -> desire -> hope[1][2]
Scientific Advertorial is best for:
- mechanism-heavy offers
- users who need explanation
- highly qualified clicks, even if CTR is lower[1][2]
Granny Blog is best for:
- emotional trust
- personal-story angles
- cases where lower click-through can still lead to better downstream conversion[1][2]
Jordan’s method:[1][2]
- winner gets 90% of traffic
- new challenger gets 10%
- if challenger holds up, move to 50/50 for confirmation
- if it wins again, promote it to the new 90% winner
This is the standard loop.
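The 90/10 loop can be written down as a single decision step. The function and its return labels are illustrative assumptions; only the 90/10 to 50/50 to promote sequence comes from the source.

```python
def next_split(current_split, challenger_won):
    """Advance the landing-page challenger loop one step.

    current_split: challenger's current share of traffic (0.10 or 0.50).
    challenger_won: True if the challenger met or beat the control on
        purchases, cost per purchase, and cost per IC at this stage.
    Returns the challenger's next traffic share, "promote" when it
    becomes the new 90% control, or None when it is killed.
    """
    if not challenger_won:
        return None          # kill the challenger, launch the next variant
    if current_split == 0.10:
        return 0.50          # held at 10% of traffic: confirm at 50/50
    return "promote"         # won at 50/50: becomes the new 90% winner
```

A losing challenger at any stage goes back to the page queue; the control page never leaves the rotation until a challenger survives both stages.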
Landing-page CTR by itself does not decide the winner. The winner is decided by:
1. purchases
2. cost per purchase
3. cost per initiate checkout
4. total end-to-end economics
Jordan specifically notes that a page with terrible CTR can still be the true winner if the traffic it sends converts much better downstream.[2]
You want:
- 1 dominant page format
- 1-2 page variants worth future 10% tests
- 1 stable baseline funnel
This is the point where the offer has a real operating baseline.
Do not do this first. Do it after baseline profitability exists.[1][2]
Create a differentiated sales asset after the base system is already working.
Jordan’s method:[1]
1. start with vendor’s VSSL as the control
2. try to beat the opening first
3. then try to beat the body
4. then try to beat the close
Start with custom VSSL opens only.
That means:
- keep the vendor’s proven body and close for now
- replace only the intro or first emotional frame
- use your best-performing ad as the VSSL open if possible[1]
Only begin when:
- you have a winning landing page
- you have repeatable winners from the ad layer
- you understand the vendor’s current sales story well
- the offer is already proving profitable or close enough to justify deeper work
You want:
- one custom open control test
- maybe a second open angle
- only after that, body and close revisions
Jordan adds these after scale, not before.[2]
Increase value per click after the base funnel is already stable.
Jordan’s logic:[2]
- someone who doesn’t buy the main offer may still buy a complementary offer
- pop-unders create extra revenue from traffic you already paid for
Rules:
- only after core funnel is profitable
- use complementary offers
- don’t distract from the main funnel prematurely
Jordan prefers push over email because deliverability is easier.[2]
Rules:
- collect after the base path is proven
- use urgency / news / opportunity style messaging
- treat it as backend monetization, not primary monetization
Add monetization layers only after:
- the base funnel is hitting KPI consistently
- you know your true purchase CPA
- the winning lander is stable
- the ops stack is not already overloaded
Use this order:
1. ROAS
2. Cost per purchase
3. Cost per initiate checkout
4. Supporting diagnostics (CTR, CPC, CPM, LP CTR)
From the project notes:[2]
- ClickBank affiliate offers: target 1.5 ROAS
- Recurring app offers: target 6.0 ROAS
| KPI | What it means | Use it for |
|---|---|---|
| ROAS | Final business result | Main scale / kill decision |
| Cost per Purchase | True acquisition cost | Main ad / page / offer decision |
| Cost per Initiate Checkout | Fast proxy for purchase | Faster internal read |
| Purchase Volume | Stability | Distinguish one-off from repeatable |
| CPM | Market / creative friction signal | Diagnose weak creative or policy friction |
| CTR / CPC | Engagement cost only | Diagnostic, not final decision |
| Landing Page CTR | Page click-through | Useful only with downstream conversion data |
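The KPI hierarchy in the table above can be applied as a daily read on one day's numbers. This is a sketch, not Jordan's exact rule set: the verdict labels are illustrative, and the default thresholds are taken from the SOP's worked example ($30 max purchase CPA, $7.50 target IC CPA, 1.5 ROAS).

```python
def daily_read(spend, revenue, purchases, ics,
               target_roas=1.5, max_purchase_cpa=30.0, target_ic_cpa=7.5):
    """Apply the KPI order: ROAS first, then cost per purchase,
    then cost per initiate checkout as the fast internal proxy.
    Assumes spend > 0 for the day being read.
    """
    if purchases and revenue / spend >= target_roas:
        return "scale"       # ROAS is the final business result
    if purchases and spend / purchases <= max_purchase_cpa:
        return "hold"        # acquisition cost still on target
    if ics and spend / ics <= target_ic_cpa:
        return "hold"        # cheap ICs: purchases may simply be lagging
    return "kill-watch"      # no KPI on target today
```

CTR, CPC, and CPM stay out of the verdict on purpose; per the SOP they are diagnostics for explaining a bad read, not for making the call.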
These are operating rules, not laws of physics. But beginners need rules.
Kill the offer if, after the full sequence below, there is still no stable path to KPI:
- 3 initial campaigns
- winner consolidation
- primary text testing
- at least 2-3 serious landing page attempts
At that point, move on.
From the project notes:[1][2]
- new ads launch under max conversion
- scaling campaigns transition to bid cap later
- budget scheduling is used reactively for winners
- many low-volume, high-ROAS ads can run together in a larger CBO and produce strong total economics
Beginners should not copy the most advanced scale behavior immediately.
Use this order instead:
1. increase spend on confirmed winners only
2. add close creative variants around proven themes
3. keep the winning page stable
4. only then test broader CBO / larger creative pools
5. only then think about bid-cap transition
Offer:
Vendor:
Network:
Gravity:
Avg $/Sale:
EPC:
Refund rate:
Target ROAS:
Max Purchase CPA:
Target IC CPA:
Status: Queued / Launching / Testing / Scaling / Killed
Notes:
Offer: __________
Base funnel: Meta ad -> bridge page -> vendor VSSL
Campaign 1: Inspired Variations
- 20 ads
- Goal: find proven market angles worth adapting
Campaign 2: Red Square Hook Testing
- 20 ads
- Goal: isolate hooks/messages cheaply
Campaign 3: AI Image Testing
- 20 ads
- Goal: discover visual concepts and emotional frames
Candidate winners:
1.
2.
3.
Why they are winning:
- cheaper purchases
- cheaper initiate checkouts
- stronger end-to-end economics
Next action:
- consolidate into winner campaign
- hold creative fixed
- begin primary text testing
Winning creative: __________
Above-the-fold tests (10):
1.
2.
3.
...
Below-the-fold tests (10):
1.
2.
3.
...
Headline testing: SKIP unless there is a very specific reason
Current control page: __________
Challenger page: __________
Traffic split: 90/10
Decision metric: purchases + cost per purchase + cost per IC
If challenger holds: move to 50/50
If challenger loses: kill and launch next variant
Date:
Offer:
Phase:
Spend:
Purchases:
ICs:
ROAS:
Purchase CPA:
IC CPA:
Best ad:
Worst ad:
Best page:
Decision for tomorrow:
Use these exact labels so nothing gets fuzzy:
- Queued
- Qualified
- Launching
- Creative Testing
- Winner Consolidation
- Text Testing
- Landing Page Testing
- Baseline Profitable
- Custom VSSL Testing
- Monetization Layers Added
- Scaling
- Killed
If you change creative, copy, lander, and VSSL at the same time, you learn nothing.
Jordan’s notes are blunt: CTR and CPC do not predict long-term winners well enough to drive decisions.[2]
Pairing specific ads to specific landing pages creates complexity with little payoff. Test pages in aggregate instead.[2]
You earn the right to build advanced assets after the simple funnel works.
Pop-unders and push are accelerants, not life support.[2]
Jordan tests fast and kills fast. Dead offers steal time from future winners.[3]
[1] ./jordan-interview-report.md — campaign structure, 3 initial campaigns, winner flow, landing page testing, custom VSSL order, scaling notes.
[2] ./jordan-gold-nuggets.md — operational details on landing page simplification, initiate checkout usage, KPI targets, bid-cap transition, Andromeda/CBO, funnel sequence, monetization layers.
[3] ./product-finding-playbook.md — validation framework, Jordan method, offer-selection standards, kill-fast orientation.
[4] ./free-research-methods-guide.md — research workflow, competitor tracking structure, repeatable documentation system.
Prepared for the ClickBank Affiliate project — 2026-03-24