Start an A/A test and check the tracking


    In short

    To check your tracking setup in GA4, start an A/A test, then go to the real-time report and search for the "Abtesting" event. Click on it and check whether the parameter "varify_abTestShort" is present. If the event is missing, check the integration of the Varify snippet and the tracking settings.

    Step-by-Step Tutorial
    Create A/A test and test tracking

    1. Create A/A test

    An A/A test works like an A/B test and assigns users to two variants - without any visible changes. This makes it an ideal way to test your tracking setup.

    Prerequisite:

    The Varify code snippet must already be integrated into your website and the tracking set up.

    1. Create a new variation/experiment by opening, for example, the home page of your website via the Varify.io dashboard.
    2. Click on the code symbol at the top right and select CSS.
    3. Insert a few spaces in the window and click on Save (this way there are no visible changes for users).
    4. Save the variant at the top right under "Finish" and name it A/A test.
    5. Click on the page targeting of the A/A test in the dashboard and then on "Edit Conditions".
    6. Enter your domain and set the Matching Type to "Contains" (this gives you the most data for your A/A test, as it is delivered to all visitors of your website).
    7. Start the A/A test.
    8. Go to any page on your website and check whether the A/A test is being delivered. The easiest way to do this is to open the Varify Google Chrome browser extension and check that the A/A test is listed under "Active Experiments". (It may take up to 2 minutes after the test goes live before you see the experiment; reload the page if necessary.)

    2. Check live events in GA4

    Once the Varify snippet has been integrated, tracking has been set up and an A/A test has been started, you can check in the GA4 Realtime Overview whether the events of your A/A test are arriving.

    1. In GA4, go to the Realtime Overview under Reports.
    2. Search for the widget "Event count by Event Name".
    3. Check whether an event called "Abtesting" appears (you may have to click through the pages of the widget).
    4. Click on this event to open the parameters.
    5. Check whether the parameter "varify_abTestShort" is available (you may have to click through the pages of the widget).
    6. Click on the parameter and check whether its value, which consists of the Experiment ID and Variant ID, is displayed correctly.

    If this parameter is visible and correct, the tracking is set up correctly.

    (Optional) Check Varify events in the DataLayer

    If no live events are visible in GA4, you can check in the developer console of your browser whether the Varify tracking events are being pushed correctly into the DataLayer.

    This is how you proceed:

    1. Open a page on which your A/A test is active.
    2. Open the developer console of your browser: In Chrome: Right-click on the page → "Inspect" → select the "Console" tab.
    3. Type dataLayer into the console and press Enter (see the optional filter snippet after this list).
    4. Search the DataLayer for an event with the name "varify".
    5. Expand the event; here you will find all relevant parameters and information about the experiment, e.g.:
      • Name of the experiment
      • Variant you are in
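
    If you do not want to scroll through the entire DataLayer, you can also filter it directly in the console. The snippet below is only a minimal sketch: it assumes the standard Google Tag Manager DataLayer structure (an array of objects with an "event" key); the exact field names inside the Varify entry may differ depending on your setup.

    // Minimal console sketch, assuming the standard GTM dataLayer structure:
    // list all entries whose event name is "varify".
    var varifyEvents = (window.dataLayer || []).filter(function (entry) {
      return entry && entry.event === 'varify';
    });
    console.log('Varify events found:', varifyEvents.length);
    console.table(varifyEvents);
    // Expand the logged objects to see the experiment name and the variant
    // you were assigned to.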

    3. Important notes for evaluating A/A tests

    How to carry out an A/A test correctly

    An A/A test should be planned and carried out just as carefully as a regular A/B test. This means: a test duration of at least one week or approx. 1,000 conversions per variant, whichever is reached first. Focus on one main metric and only evaluate the test at the planned time. Only if there is no significant difference (p > 0.05) at the end can you assume that your setup is working correctly.

    Less is more: choose metrics wisely

    Avoid evaluating too many metrics at the same time. The more metrics you look at, the higher the probability of so-called false positives, i.e. seemingly significant differences that do not actually exist. Example: with 14 metrics, the probability of at least one false positive is around 51 %, even if you assume a significance level of 5 % for each individual metric. In other words, measuring at least one false positive is more likely than not!
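
    The 51 % figure follows from a simple independence assumption: with k metrics, each tested at a 5 % significance level, the probability of at least one false positive is 1 - 0.95^k. The small sketch below just evaluates this formula; the same simplified calculation also underlies the interim-look figures in the next section.

    // Simplified sketch: probability of at least one false positive when
    // k independent comparisons are each tested at significance level alpha.
    function familyWiseFalsePositiveRisk(k, alpha) {
      return 1 - Math.pow(1 - alpha, k);
    }

    console.log(familyWiseFalsePositiveRisk(14, 0.05)); // ≈ 0.512, i.e. ~51 % for 14 metrics
    console.log(familyWiseFalsePositiveRisk(7, 0.05));  // ≈ 0.302, i.e. ~30 % for 7 interim looks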

    Important: Don't peek too early!

    Repeated interim analyses can also greatly distort the significance of your test: each additional look at the results increases the probability of incorrectly interpreting a random result as significant.

    • 7 interim looks (e.g. daily over a week): around 30 % risk of a false positive

    • 14 interim looks (e.g. daily over two weeks): around 51 % risk of a false positive

    Conclusion:
    Only end your test at the planned time - even if "something seems to happen" beforehand. If you stop too early, you risk making the wrong decisions based on a false positive.
