Marketers should spend their budget where the absolute ROI is greatest or the return-to-cost ratio is best. This can be achieved with the help of data-based budget allocation.
Budget allocation generally means allocating a budget for a certain period (typically a year or a quarter) to different investment alternatives. In the media context, investment alternatives range from the communication channel level (TV, OOH, radio, digital social, digital video, digital display, etc.) down to individual publishers and placements. We can start, for example, by asking what share of the media budget should be allocated to which channel.
But why do we even bother so much with how to allocate budgets instead of just spreading them randomly across every available option? Because the point is to get as much back for your investment as possible (mostly profit or, in the media context, attention). That means we don't want to invest randomly, but rather to spend every euro planned for media where the absolute return (mostly sales) is highest or the cost-return ratio is best.
This is illustrated by the following simplified example with three investment alternatives. Although each alternative leads to more "impact," any marketing manager would, on the basis of this information, most likely distribute the media budget according to planning scenario 3, because the incremental impact return is highest in that scenario.
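The decision rule behind such a comparison can be sketched in a few lines. The spend and impact figures below are invented placeholders, not the numbers from the example; the point is only the rule of picking the scenario with the best impact return per euro.

```python
# Hypothetical illustration: three planning scenarios, each with an
# additional media spend and the incremental "impact" it is expected
# to produce. All numbers are invented for the sketch.
scenarios = {
    "scenario 1": {"extra_spend": 100_000, "extra_impact": 120_000},
    "scenario 2": {"extra_spend": 100_000, "extra_impact": 150_000},
    "scenario 3": {"extra_spend": 100_000, "extra_impact": 210_000},
}

def best_scenario(scenarios):
    """Pick the scenario with the highest incremental impact per euro spent."""
    return max(
        scenarios,
        key=lambda s: scenarios[s]["extra_impact"] / scenarios[s]["extra_spend"],
    )

print(best_scenario(scenarios))  # → scenario 3
```

With equal extra spend in every scenario, ranking by the impact-to-spend ratio is the same as ranking by absolute incremental impact; with unequal spends the ratio is the fairer comparison.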
An investment decision (budget allocation) relating to media is a typical economic optimization problem, which is simple in terms of the basic idea. Assuming given means, the return should be maximized by distributing the budget between action alternatives.
The three decisive questions that solve the optimization problem of budget allocation are:
If you can answer these questions affirmatively, you can build an optimization system on this basis that will create target-optimized budget allocation.
Budget allocation should be target-optimized. Budget allocation is only effective if I know how a change in budget allocation will affect my target and thus my return causally and incrementally. If I reduce the TV share in my media mix from 80% of the total budget to 60% and at the same time increase the digital share to 40%, how will my target/return change?
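A question like the 80/20 versus 60/40 mix above can only be answered if a response model per channel exists. The sketch below uses invented square-root response curves (the coefficients `3.0` and `4.5` are assumptions for illustration, not attribution results) to show how two mix scenarios would be compared once such curves are known.

```python
import math

# Assumed, purely illustrative response curves with diminishing returns:
# incremental return as a concave function of spend per channel.
# In practice these curves come out of the attribution process.
def tv_response(spend):
    return 3.0 * math.sqrt(spend)

def digital_response(spend):
    return 4.5 * math.sqrt(spend)

def mix_return(total_budget, tv_share):
    """Total modeled return for a given TV/digital split of the budget."""
    tv = total_budget * tv_share
    digital = total_budget * (1.0 - tv_share)
    return tv_response(tv) + digital_response(digital)

budget = 1_000_000
for tv_share in (0.8, 0.6):
    print(f"TV share {tv_share:.0%}: modeled return {mix_return(budget, tv_share):,.0f}")
```

Under these particular (invented) curves the 60/40 mix beats the 80/20 mix; with different curve parameters the comparison could just as well flip, which is exactly why the causal, incremental effect per channel has to be measured first.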
To understand this better, we can look at the course of past investment decisions and targets. This will show you, for example, that "...when we added social to the media mix, sales of our product actually fell slightly." Social was therefore perhaps banned from the media mix for the time being. But did you also take into account that, parallel to the social test campaign, a new competitive product with massive media spending came onto the market? Of course, you can observe and classify this as well, but in order to get a truly holistic, clear picture of the actual incremental, causal effect of different investment alternatives, statistical models and machine learning algorithms need to replace human observation and the attribution (assignment) of effects (changes in the target value) based on those observations.
Attribution is the central concept of budget allocation and media optimization. Effects need to be attributed (which part of a change in the target is due to which measure) and quantified (by how many units does the target change when a measure changes by one unit, e.g. one impression), so that it becomes truly possible to control the investment.
In principle, attribution works by predicting the change in the target as accurately as possible with a mathematical model of the underlying relationship between investment and target (prediction).
Thus, if the course of the target curve can be traced as well as possible, the role played by different events (e.g. advertising, weather, natural demand) can be attributed and quantified on the basis of further mathematical models. Using such a model, we can then predict different budget allocation scenarios with regard to their target effectiveness, thereby making it possible to compare them.
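A minimal version of this idea is an ordinary least-squares fit of the target against media spend, after which each period's target can be decomposed into a baseline part and a media-attributed part. The weekly figures below are invented; a real model would include many more influence factors (weather, seasonality, competition, etc.).

```python
# Minimal attribution sketch: fit the straight-line model
#   sales ≈ baseline + beta * media_spend
# via closed-form ordinary least squares, then attribute each week's
# sales to "media" vs. "baseline". All data points are invented.
weeks_spend = [0, 10, 20, 30, 40, 50]          # media spend per week (k EUR)
weeks_sales = [100, 118, 142, 160, 181, 205]   # sales per week (k units)

n = len(weeks_spend)
mean_x = sum(weeks_spend) / n
mean_y = sum(weeks_sales) / n
beta = sum((x - mean_x) * (y - mean_y)
           for x, y in zip(weeks_spend, weeks_sales)) \
       / sum((x - mean_x) ** 2 for x in weeks_spend)
baseline = mean_y - beta * mean_x

for x in weeks_spend:
    print(f"spend {x:>2}: media part {beta * x:6.1f}, baseline {baseline:6.1f}")
```

Here `beta` answers the quantification question (units of target per unit of spend) and `baseline` the attribution question (what would have happened without media); comparing budget scenarios then means evaluating the fitted model at different spend levels.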
Even more important than the mathematical models for mapping relationships is the data that flows into these models.
After all, it is quite simple: no matter how good a machine learning algorithm is, if there is too little data or the data structure (distribution, granularity and linkability of the data) is not right, the attribution results will be limited. That is why the most successful companies of our time are those that collect the most structured data. The more granular and numerous the data, the better.
The first step is to create a (data) basis. It makes no sense to develop the most complex machine learning algorithm and then throw it at an unstructured Excel file with 100 rows. This does not mean that only companies with a wealth of data like Google, Amazon and Facebook can make budget allocations, but it does mean that the first step is to make sure that
Although these requirements may appear self-evident, many companies do not even own some of the relevant data themselves (but rather access it through service providers or agencies), or assume they will only need data going back three years, usually with the argument that current data better reflects current market developments. The temporal context of data must always be taken into account, but a long data history is of great importance for attribution: the more data is available, the better the mathematical model "understands" how the relevant relationships work. Therefore, when in doubt, keep data for a long time and track everything you can, because you can always make adjustments later.
Real availability of data means that data is consistently structured in a database system and can be used by other systems (e.g. dashboards and analysis systems) without further fundamental adjustments. Often the reality is that people who want to work with data at a company first need to check and structure the data themselves. Such a procedure is inefficient, not standardized and leads to inconsistent results. The goal should always be to have a "single point of truth" for your data.
In particular, data needs to be available to the departments and systems that work with it and that are responsible for developing optimization systems. Data should be available to everyone at the company, because the whole company needs to be data-driven.
Data-driven marketing means not only using data that is available anyway but also producing specific data. On a higher level, this means tracking everything that is technically trackable. But it also means, for example, establishing a higher-level testing process and structuring your media campaigns in such a way that specific data points are generated (e.g. not always playing all media channels in parallel and not always advertising at the absolute peak of demand). Of course, the entire optimization process cannot be geared solely towards measurability. The primary goal is always to maximize the target, but without attribution knowledge regarding your own or third party data, this is simply not possible.
Data and its structured availability (single point of truth) are the basic requirements. Setting up a clean data process is already a major challenge for many companies that are still working in the Excel age. Nowadays, the data process needs to have the highest priority and be thoroughly completed before the actual work with the data begins.
Everything that is done in a business context is actually an optimization process. There is a target parameter (profit) that needs to be maximized (not always in the short term, but always in the long term) by optimally distributing limited resources (budget) to investment alternatives (control parameters). In the context of media budget allocation, this means that budget is optimally allocated to media investment alternatives in order to maximize the profit of a brand or product.
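Once per-channel response curves are known, the allocation itself can be computed. One simple, transparent approach (a sketch under the assumption of concave, diminishing-returns curves; the channel names and coefficients are invented) is greedy marginal allocation: spend the budget in small steps, always on the channel whose next euro adds the most return.

```python
import math

# Greedy marginal allocation sketch. With concave response curves
# (diminishing returns), always funding the channel with the highest
# marginal gain approximates the optimal split of a fixed budget.
# The coefficients below are invented for illustration.
coeff = {"tv": 3.0, "digital_video": 4.0, "social": 2.0}

def response(channel, spend):
    """Illustrative concave response: return = coeff * sqrt(spend)."""
    return coeff[channel] * math.sqrt(spend)

def allocate(total_budget, step=1_000):
    spend = {c: 0.0 for c in coeff}
    for _ in range(int(total_budget / step)):
        # channel whose next step of budget adds the most return
        best = max(coeff,
                   key=lambda c: response(c, spend[c] + step) - response(c, spend[c]))
        spend[best] += step
    return spend

plan = allocate(1_000_000)
for channel, amount in plan.items():
    print(f"{channel}: {amount:,.0f}")
```

For square-root curves the resulting shares converge toward spend proportional to the squared coefficients, which is the analytical optimum; for real, estimated curves the same greedy loop still works as long as marginal returns decrease with spend.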
This is clear in terms of the basic idea. The challenge, then, is that the relationship between media investment and the goal is not independent of other influential factors.
It is therefore tremendously important that the optimization system with all its parameters and (presumed) correlations is clearly defined. The result is a cycle that begins and ends with the goal and in between represents control parameters, influential factors and their interrelationships.
The design of the optimization process serves as the basis for the actual attribution process and technical implementation of the optimization process (investment decisions, investment, control). Here, too, it is important that we have a single point of truth. Of course, there are different hypotheses and opinions on such systems, but it is critical for companies that only one optimization process is valid at any given time. Otherwise, contradictory optimizations can occur, which is always bad when it comes to achieving objectives. An optimization process is adaptable and even needs to be able to continuously adapt to new insights and to do so in the same direction for everyone involved at the company.
90% of attribution quality (the accuracy and granularity of attribution) depends on the underlying data. Large amounts of high-quality data (complete, granular, linked, variation-rich) lead to high attribution quality. The remaining 10% is accounted for by the optimization process concept, the correct mathematical model (statistical model, machine learning algorithm) and consistent application of attribution results.
In the actual attribution process, then, you should always move from large correlations to small ones, a process referred to as macro-micro optimization. The first step is to understand how media as a whole causally influences the goal: what part of the change in the target is due to media and what is not (baseline)?
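The macro-micro idea itself is just a staged decomposition, which can be sketched as follows. The media share and the channel split here are invented placeholders standing in for attribution results, not real figures.

```python
# Macro-to-micro sketch: first attribute the target at the top level
# (media vs. baseline), then split the media-driven part across
# channels. All shares are invented placeholders for illustration.
total_sales = 1_000_000
media_share = 0.35                                   # macro step (assumed)
channel_split = {"tv": 0.5, "digital_video": 0.3, "social": 0.2}  # micro step (assumed)

media_sales = total_sales * media_share
baseline_sales = total_sales - media_sales
by_channel = {c: media_sales * share for c, share in channel_split.items()}

print(f"baseline: {baseline_sales:,.0f}")
for channel, sales in by_channel.items():
    print(f"{channel}: {sales:,.0f}")
```

Each micro level (channels, then publishers, then placements) only distributes the effect that the level above has already attributed, so errors at the macro level propagate downward; that is why the macro split has to be established first.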
You should always start with infrastructure. The dream of having an all-encompassing machine learning budget allocation is fine, but it only works if the data infrastructure works.
Like any optimization process, budget allocation works as an iterative cycle. Decisions are made based on the analysis of past data, then evaluated ex post, and the cycle starts again from the beginning. Each budget allocation decision generates new data points, on the basis of which the effect on the target can be tested in order to adjust the budget allocation once more.
Being able to use a given budget as effectively and efficiently as possible has always been the goal. It is now time to create the best conditions by establishing a data, attribution and optimization infrastructure. It is particularly important to create a sufficient data infrastructure for this purpose. Attribution can only work on this basis, giving you target-optimized budget allocation within a holistic optimization process.