There is no rulebook that says which advertising practices drive the most revenue. So, to find out, publishers switch between ad types or sizes to see which ones get more impressions, or they simply go with their gut. The result? They never gather empirical data on what really works and what doesn't.
Ad testing is a form of A/B testing, i.e., comparing the performance of two variations. But it isn't limited to just the ad type (for the publisher) or, say, the color of the CTA button (for the advertiser). In advertising tech, the scope of testing has expanded far beyond simple comparisons. Testing conventions have evolved, and publishers now have granular data to make informed decisions.
Different types of testing, on different elements and parts of a site, give publishers the data they need to grow. Each test has a different aim, but a common goal: to increase ad revenue. For instance, an A/B test between two ad sizes may aim for more ad impressions, while an A/B test between dynamic and static ad types may target a better CTR.
There are plenty of testing ideas publishers can execute to discover what works best for them. But whatever the case, the rule of thumb is that publishers have to test. Let's walk through some ad testing ideas that can help publishers get started.
1. Ad Format Testing
What to test: Which ad type/format gets a better CTR.
How to test: Once you're sure which page and ad format/type you want to experiment on, start by creating variations. For instance, if your page has a display ad placed in the sidebar, try placing a text ad variation in the same spot to see if it makes a difference to the click-through rate.
If you are new to testing, it is advisable to experiment on a page with relatively little traffic. That way you can get the insight you need to make a decision without disrupting anything. For example, a month-old website is a much better option than an older, high-traffic website that's already generating revenue.
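As a minimal sketch of how such a 50/50 split might be implemented, the snippet below buckets visitors into two format variants deterministically, so a returning user always sees the same variant. The variant names, experiment label, and bucketing approach are all illustrative assumptions, not a reference to any specific testing tool:

```python
import hashlib

# Hypothetical variant names for the sidebar format experiment.
VARIANTS = ["display_300x250", "text_300x250"]

def assign_variant(user_id: str, experiment: str = "sidebar-format-test") -> str:
    """Hash the user id with the experiment label and bucket 50/50.

    Hash-based bucketing keeps assignment stable across visits without
    storing any per-user state.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

print(assign_variant("visitor-42"))
```

Over enough visitors, the hash distributes traffic roughly evenly between the two variants, and you can then compare CTR per variant.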
2. Ad Size Testing
What to test: Which ad size/dimension gets the most impressions.
How to test: Certain ad sizes are proven to get good results, like the leaderboard (728×90) and small rectangle (300×250). However, depending on the website layout, the performance of these sizes may vary. For instance, a clean site with more emphasis on white space may suit a bigger ad size like the 970×250, placed just below the navigation. On the other hand, for a site with more click points and attractions, a 728×90 placed above or between the content might work.
Ad size testing depends heavily on the website layout. The existing spacing on your site should give you a visual sense of which ad size can fit where without hampering the UX.
3. Ad Placement Testing
What to test: Which ad placement/spot gets a better CTR.
How to test: The human eye naturally scans from left to right. Based on this understanding, content is typically placed on the left side of the page and ads on the right. The drawback is that users can easily predict where the ads are and mentally tune them out, which hurts CTR. To test, you can place the ads towards the left, in alignment with the primary content. Make sure they don't overlap the content.
Another example: if you're experimenting with above-the-fold ads, make sure the primary intent of the page also stays above the fold. For instance, if a page offers a resource to download, keep the download CTA above the fold. Bad ad placements can be tricky and hazardous to ad revenue, so consider referring to resource guides that cover the do's and don'ts of ad placements.
4. Ad Layout Testing
What to test: Which ad combinations generate better earnings.
How to test: Testing one ad size vs. another, or one ad format vs. another, is simple. But what about the entire layout? Using ad layout testing tools, it is possible to create variations of entire ad layouts with specific combinations of different ad formats, sizes, and placements.
A layout testing tool gives you the ability to create variations and compare performance at the site level. This is just like running tests between versions of webpages, email sign-up bars, or exit-intent pop-ups, but on a bigger scale. The winning ad combination gets the majority of the traffic, driving up earnings.
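To make the combinatorial nature of layout testing concrete, here is a small sketch that enumerates candidate layouts from assumed lists of formats, sizes, and placements (the values are made up; a real tool would also prune combinations that don't fit the page):

```python
from itertools import product

# Illustrative options only; real inventories differ per site.
formats = ["display", "text"]
sizes = ["300x250", "728x90", "300x600"]
placements = ["sidebar", "below_nav", "in_content"]

# Every combination of format x size x placement is one candidate layout.
layouts = [
    {"format": f, "size": s, "placement": p}
    for f, s, p in product(formats, sizes, placements)
]
print(len(layouts))  # 2 * 3 * 3 = 18 candidate layouts
```

Even with just two or three options per dimension, the candidate space grows quickly, which is why automated layout tools, rather than manual one-vs-one tests, are the practical way to explore it.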
5. User Device Testing
What to test: Which ad variation gets better impressions/CTR on which device.
How to test: It is healthy to keep track of device-wise traffic, owing to changing trends in device usage. In 2018, 52.2% of all website traffic worldwide was generated through mobile. By 2019, 63.4% of mobile phone users were expected to access the internet from their phones. Tools such as Google Analytics help keep track of the device/traffic split. Here's where testing begins. For example, you might receive 10K users from mobile in two weeks and 8K users from desktop in the same duration. This means your mobile traffic has surpassed your desktop traffic, and hence your mobile ads need closer attention.
Focusing on device-wise ad testing also relates to the targeting you do. When you target an audience on a particular device (e.g., mobile), you'll naturally get more traffic from there, so experimenting with ad formats and sizes makes more sense on that device. Going deeper, there is further segmentation within devices, like device manufacturer, viewport size, OS, and browser, which makes for a broader scope of testing.
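A quick sketch of the kind of device-wise comparison described above, using made-up aggregate counts (in practice these would come from an analytics export, not be hard-coded):

```python
# Hypothetical aggregated counts per device; numbers are illustrative only.
stats = {
    "mobile":  {"impressions": 10_000, "clicks": 55},
    "desktop": {"impressions": 8_000,  "clicks": 36},
}

def ctr(s: dict) -> float:
    """Click-through rate = clicks / impressions."""
    return s["clicks"] / s["impressions"]

for device, s in stats.items():
    print(f"{device}: CTR = {ctr(s):.2%}")
```

With these assumed numbers, mobile CTR (0.55%) edges out desktop (0.45%), which would justify prioritizing mobile ad experiments first.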
6. Ad Color Testing
What to test: Which text ad presentation gets a better CTR.
How to test: Text ads are plain ads that blend in with the page and are known to achieve better click-through rates than standard display ads, especially for content-rich websites/blogs. If you're an AdSense publisher, the dashboard gives you the option to choose and set the style for text ads, including the colors and font family used.
Text ads are known to be subtle. They aren't flashy, yet they are able to catch user attention thanks to their text-like appearance. To make the most of text ads, play with the color and presentation to find out what works best for your site. Don't forget to test variants.
Some More Testing Considerations
Ads prove effective when everything works well together, including design, content, and UX. Here are a few more things publishers should consider to make their ad experience more successful.
1. Webpage Layout
You may have conducted tests on ad formats, sizes, placements, and layouts. But is your site ready to benefit from ad testing? Tools like Optimizely and VWO can help you run efficient A/B tests on page content, with features that help you create webpage variants.
This is similar to using ad layout testing tools to create ad layout variations. These tools automatically send traffic to the more effective variations, giving you the insight you need to make on-page improvements.
2. Content Type
Users visit a site in search of helpful content, so it's no news to say "you need to optimize your page content." Here's how the content-and-ad cycle runs: users come to the website, engage with it while reading, ads load and receive impressions while the user stays on the page, and those impressions (and clicks) add to your ad revenue.
You can test which content type works better for you, e.g., listicles, long-form content, resources/guides, or examples. The aim of testing content is to attain a better engagement rate, gauged through metrics like session duration or time on site. This method is also effective in planning out your long-term content strategy.
3. Ads in the UI
Sometimes, testing the website layout or content type may not be your immediate priority. In such circumstances, you can test ad type, size, and placement based on the present UI. For example, if your site gets more engagement in its above-the-fold section due to videos, you can try using pre-roll or rich media ads in that area.
Similarly, if your website has infinite scroll enabled, a sticky ad would do wonders. Spend some time analyzing the features of your site to find the ideal ad spots and ad types you can experiment with, if testing the whole UI is not a possibility.
4. Browsers

According to Wikipedia, the global browser market share in May 2019 for Chrome stood at 69.09%, followed by Firefox at 10.01% and Safari at 7.25%. Your users could be anywhere. Tools such as Google Analytics provide a browser-wise traffic split for your visitors, telling you which browser is used most by your users.
Using that data, you can conduct browser-wise ad testing to maximize performance. You can also check whether all of your ads are rendering properly across browsers. For instance, your website and ads may appear distorted on Chrome but look fine on Mozilla Firefox.
5. Ad Networks
Testing among ad networks or demand sources may not sound as creative as experimenting with ad sizes, formats, or placements. However, with demand being among the most important factors in revenue, it is wise to test which ad network generates the most revenue for your inventory.
Ad networks like AdSense, among many others, give you the option to use multiple ad networks concurrently. Network-wise yield reports give you clear insight into which ad network is worth investing in, and which is not.
Types of Testing
A/B testing is the first thing that comes to many people's minds when they hear "testing." But there's also multivariate testing, which allows publishers to test more than two variants at once. Here are the two types of testing along with their pros and cons:
A/B testing: Also called split testing, it lets you create a Control and a Variation to test against it. An existing control may already have a statistical track record; that data can serve as a benchmark against which you run the variation and compare performance. In A/B testing, just one variation is analyzed at a time, which provides a clearer understanding of how changes affect outcomes.
Pro: Once you have some tests running, you can expand to multiple tests. For example, after control A (300×250) vs. variation B (300×600), you can further compare variation B (300×600) with variation C (728×90), and so on.
Con: As a continuous process, manual A vs. B testing can be laborious and slow. That is where automated A/B testing and optimization tools come to the rescue.
Multivariate testing: As the name implies, multivariate testing lets you experiment with more than two variants simultaneously. For instance, with ad size as the parameter, A/B testing would compare A vs. B, and then B vs. C. In multivariate testing, you can test A vs. B vs. C with multiple parameters, like ad size, placement, and type, at the same time.
Pro: The ability to run several tests at the same time saves effort.
Con: With numerous testing parameters, there may sometimes be too much data at your disposal, making it difficult to draw conclusions.
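Whichever type of test you run, you need a way to decide whether the winner actually won or just got lucky. One common approach, shown here as a sketch with made-up numbers, is a two-proportion z-test on the CTRs of the control and the variation:

```python
import math

def z_score(clicks_a: int, imps_a: int, clicks_b: int, imps_b: int) -> float:
    """Two-proportion z-test statistic for CTR of control (a) vs variation (b).

    Uses the pooled proportion for the standard error, per the standard
    normal-approximation test for two proportions.
    """
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    p_pool = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / imps_a + 1 / imps_b))
    return (p_b - p_a) / se

# Illustrative numbers: 0.50% CTR for control vs 0.80% for the variation.
z = z_score(clicks_a=50, imps_a=10_000, clicks_b=80, imps_b=10_000)
print(f"z = {z:.2f}, significant at 95%: {abs(z) > 1.96}")
```

With these assumed counts the z-score is about 2.6, above the 1.96 threshold for 95% confidence, so the variation's higher CTR would be unlikely to be noise. Most testing tools run an equivalent check for you under the hood.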
Why Testing is Advantageous
Continuous testing expedites performance results and solves some common publisher issues. Here are some of them:
Everything publishers do is ultimately aimed at increasing ad revenue. Successful testing helps publishers grow CPM/CTR, which directly affects ad revenue.
Comparison data between variations gives publishers the insights they need to build a better website and run better ads, thereby improving user experience.
Testing ad formats and ad placements helps combat banner blindness, which occurs when an existing ad layout gets predictable and users consciously or unconsciously ignore ads, leading to CPM and CTR decline for publishers.
Testing gives access to actionable data and insights. This helps publishers make informed decisions instead of relying on hit-and-trial.
Publishers seek better control over the ads they run and the optimizations they make. Different kinds of ad testing give them proper monitoring and control over data and decisions.
Manual testing can be effective, but it's also time-consuming. With the growth in ad technology, publishers now have access to tools that automate ad testing, making it more efficient.
Bad Testing Habits to Avoid
Testing can be fun, effective, and potentially profitable. But overdoing it can backfire. If you have already sketched a rough ad testing plan for yourself, here are some things to remember first:
Keep calm and give testing the time it needs. It would be wrong to expect significant data after two days of testing.
Don't try to over-optimize. You may have any number of things to test, but testing all of them at once will make it harder to ascertain which test or parameter actually led to success.
Run tests, but not at the cost of user experience. It's fine to go through a design overhaul or a complete revamp, but always keep user experience in mind.
What worked then may not work now or later. Testing results are based on users' responses to changes, but because traffic volume and user behavior keep changing, the ad test that worked wonders for you three months ago may not deliver the same results today.
Don't run ad tests randomly and blindly. After reading a few posts (including this one), you may have a couple of ad testing ideas. But it is wise to first look into your historical data to see whether such tests have been performed before, and if so, what impact they had.
Run the right tests at the right time. Testing might sound fun, but you wouldn't want to run a test at a time when it might backfire. For example, if December is your highest-revenue month, you would probably want to stick to your current setup to avoid risks.
Ad testing has helped publishers increase page RPM and CPM by as much as 41% and 77.4% respectively, though results vary from publisher to publisher depending on the use case. Whether you choose to test manually or with automation tools, regularly or occasionally, the conclusion is that testing should be the rule, not the exception.
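To put an uplift figure like that in perspective, here is a back-of-the-envelope sketch using an assumed baseline of a $2.00 page RPM on one million monthly pageviews (both numbers are hypothetical, only the 41% uplift comes from the figures above):

```python
# Assumed baseline; substitute your own RPM and traffic numbers.
baseline_rpm = 2.00       # dollars earned per 1,000 pageviews
pageviews = 1_000_000     # monthly pageviews

revenue_before = baseline_rpm * pageviews / 1000
revenue_after = revenue_before * 1.41  # a 41% page RPM uplift
print(f"${revenue_before:,.0f} -> ${revenue_after:,.0f} per month")
```

Under those assumptions, the same traffic would go from $2,000 to $2,820 a month, which is the kind of difference that makes systematic testing worth the effort.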
For instance, without testing you might never discover that a small rectangle could bring you a better CTR than a leaderboard. Moreover, testing helps you define your long-term plans as a publisher.
Similarly, if you have proven data that X ad type drives the highest attention and engagement for you, you can adjust your inventory accordingly to receive better bids from advertisers. These days, getting started with an ad testing plan is significantly easier. There are tools and ad ops professionals who do the legwork on your behalf, while you track performance and focus on your growth.