Commerce Grid Reporting API

The Commerce Grid Reporting API is a new RESTful service that gives programmatic access to detailed reporting data. Built as a replacement for the legacy PMC API, the new solution features a more modern structure with a focus on simplifying publishers' interactions with reporting endpoints.

Customers who use u-Slicer should continue to use the associated u-Slicer API.

The Commerce Grid Reporting API is distinct from other Criteo reporting APIs in that it has its own request and response formats, in addition to a unique structure.

Authentication

Access to the CGrid API is secured using OAuth 2.0, managed through the CGrid IAM service.

Before making any requests to the API, your application needs to obtain an access token. This token must be included in the Authorization header of each request.
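
For example, in Python the token can be attached to a request like this (a minimal sketch; YOUR_ACCESS_TOKEN is a placeholder for the token you obtain as described below):

# Minimal sketch: every call carries the token in the Authorization header
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder; obtain a real token as described below
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}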

Authentication: Login & Check Tokens Section

To get a token, first log in to u-Auth (https://uauth.iponweb.com/uauth/settings/#access) with your Commerce Grid account credentials.

After logging in, go to the “Tokens” section. If this is your first time getting a token, the list will be empty.

Authentication: Add A New Token

To start a new token process, click the “New Permanent Token” button on the right.

You can change the autogenerated token name to something more meaningful, for example, “Commerce Grid Reporting API”.

Type “themediagrid.com” in the "Scope" field and then press the “Add” button on the right.

Once all fields have been updated, click "Create" to create the new token.

Authentication: Receive Your Token & Save It

Following the "New Permanent Token" process, you will receive the token in the next window that appears. (In the associated screenshot to the left, a grey box is indicated; your token will appear where this grey box is located.)

Press “Copy token value” and save the token somewhere secure and accessible. The token value is only shown here once; if you lose your token or exit the window before saving it, you can repeat the process to generate a replacement token.

Once done, press “Done” to return to the token list.
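
One common way to keep the saved token out of your source code (a suggestion, not a requirement of the API) is to store it in an environment variable and read it at runtime, for example:

# Read the token from an environment variable instead of hardcoding it.
# CGRID_UAUTH_TOKEN is a hypothetical variable name; use whichever name suits your setup.
import os

UAUTH_TOKEN = os.environ.get("CGRID_UAUTH_TOKEN")
if not UAUTH_TOKEN:
    raise RuntimeError("Set the CGRID_UAUTH_TOKEN environment variable first")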

Authentication: Revoking A Token

Should you need to revoke a token you have created, select the “Revoke” button on the right side or “Revoke All” above the token list:

Getting Started: Querying the API

Base URL

https://pub.themediagrid.com/api/uslicer/reporting/

Required Headers

Make sure to include the following headers in every request:

  • Authorization: Bearer <access_token> (OAuth 2.0 access token)

  • Content-Type: application/json (request body format)

  • Accept: application/json (response format)

Here’s a simple example using curl:

curl -X POST https://pub.themediagrid.com/api/uslicer/reporting/ \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -d '{
      "start_date": "2025-05-15",
      "end_date": "2025-05-22",
      "split_by": [
        "granularity_day"
      ],
      "timezone": 0,
      "order_by": [
        {
          "name": "granularity_day",
          "direction": "ASC"
        }
      ],
      "add_keys": [
        "granularity_hour"
      ],
      "data_fields": [
        "pub_payout"
      ],
      "limit": 100,
      "offset": 0,
      "include_others": true
    }'
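
The same request can be sent from Python using the requests library; the following is a minimal sketch that reuses the payload above (YOUR_ACCESS_TOKEN is a placeholder):

import requests

API_URL = "https://pub.themediagrid.com/api/uslicer/reporting/"

headers = {
    "Authorization": "Bearer YOUR_ACCESS_TOKEN",  # placeholder token
    "Content-Type": "application/json",
    "Accept": "application/json",
}

payload = {
    "start_date": "2025-05-15",
    "end_date": "2025-05-22",
    "split_by": ["granularity_day"],
    "timezone": 0,
    "order_by": [{"name": "granularity_day", "direction": "ASC"}],
    "add_keys": ["granularity_hour"],
    "data_fields": ["pub_payout"],
    "limit": 100,
    "offset": 0,
    "include_others": True,
}

# The request body is sent as JSON via the json= argument
response = requests.post(API_URL, headers=headers, json=payload)
response.raise_for_status()
print(response.json()["status"])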

Response Format

{
  "status": "success",
  "uslicer-spark.version": "4.19.0",
  "rows": [
    {
      "data": [
        {
          "name": "pub_payout",
          "value": 7095.69,
          "percent": "24.52106"
        }
      ],
      "name": "2025-05-15",
      "confidence_range": null,
      "mapping": null
    },
    {
      "data": [
        {
          "name": "pub_payout",
          "value": 7225.84,
          "percent": "24.97080"
        }
      ],
      "name": "2025-05-16",
      "confidence_range": null,
      "mapping": null
    },
    {
      "data": [
        {
          "name": "pub_payout",
          "value": 8233.03,
          "percent": "28.45142"
        }
      ],
      "name": "2025-05-17",
      "confidence_range": null,
      "mapping": null
    },
    {
      "data": [
        {
          "name": "pub_payout",
          "value": 6382.58,
          "percent": "22.05672"
        }
      ],
      "name": "2025-05-18",
      "confidence_range": null,
      "mapping": null
    }
  ],
  "total": {
    "data": [
      {
        "name": "pub_payout",
        "value": 28937.15
      }
    ],
    "dates": [
      "2025-05-15",
      "2025-05-16",
      "2025-05-17",
      "2025-05-18"
    ],
    "records_found": 4,
    "confidence_range": null
  },
  "others": {
    "data": [
      {
        "name": "pub_payout",
        "value": 0,
        "percent": "0.00000"
      }
    ],
    "confidence_range": null,
    "key_count": 0
  }
}

Endpoint Detailed Description: Request

Request

POST arguments*:
(required unless marked optional)

New fields, such as currencies, may appear here. A combined example payload is shown after this list.

  • limit (optional): the maximum number of returned rows. Possible values: any integer in the range from -1 to 10,000. -1 means all rows. The default value is -1.

  • offset (optional): skip the specified number of rows from the result. Together with the limit POST argument, it allows paging through large datasets. The default value is 0. Can only be used if the limit POST argument is in the range from 0 to 10,000.

  • add_keys (optional): the array of key fields to add. Format: ["key_field1", ... ].

  • split_by: an array of the key fields to split data by, in the form ["key_field1", ...].

  • start_date: the start of the date range to gather data for. Supported formats:

    • Absolute dates: YYYY-MM-DD.

    • Relative dates:

      • today and yesterday

      • -Nd and -Ndays for the number of days before the current date, where N is any positive integer or zero.

      • -Nm and -Nmonths for the number of months before the current date, where N is any positive integer or zero.

  • end_date: the end of the date range to gather data for. Supported formats:

    • Absolute dates: YYYY-MM-DD.

    • Relative dates:

      • today and yesterday

      • -Nd and -Ndays for the number of days before the current date, where N is any positive integer or zero.

      • -Nm and -Nmonths for the number of months before the current date, where N is any positive integer or zero.

  • timezone (optional): time zone UTC offset in hours. Format: N, where N is any integer in the range -12 <= N <= +12.

  • order_by (optional): defines sorting rules.

    • "name": "field", the name of the key or data field to sort the results by. Note that this field can be any key field, specified in the split_by POST argument, or one of the data fields specified by the data_fields POST argument (or any of the data fields if the  data_fields POST argument is not specified). The default value is the first column with enabled percent.

    • "mapping": defines whether to sort by key field ID or by key field mapping. Possible values: (0 - sort by key field ID, 1 - sort by key field mapping). The default value is 0.

    • "direction": sort in the ascending or descending order, either ASC or DESC. The default value is DESC.

  • data_fields (optional): the array of data fields to be returned in the form of ["data_field1", ....].

  • include_others (optional): return the summary of data outside the specified limit.

  • include_mappings (optional): defines whether key field value mappings are returned by the API request. The default value is 1 (mappings are returned) if only 1 key field is used in the split_by POST argument. The default value is 0 (mappings are not returned) if 2 or more key fields are used in the split_by POST argument.

  • filters (optional):

    • name": "key_field",

    • "case_insensitive": (optional), defines whether search should be case-insensitive. Possible values: (1 - case-insensitive, 0 - case-sensitive). The default value is 0.

    • "search_mappings": (optional), defines whether the search should be performed in mappings. Possible values: (1 - search should be performed in both key field values and their mappings, 0 - search should be performed in key field values only). The default value is 0.

    • "value": ["value1", ... ], where the value of the "value" field should be an array.

    • "match": "equals|not equals|contains|not contains|beginswith|endswith|not beginswith|not endswith"

    • if "search_mappings" is set to 1, the search will be performed for both specified key field values and their mappings (descriptions). For example, if you have the creative_id key field with some value of "1" and its mapping of "First Creative", you can search for it as follows:

      {"name": "creative_id", "value": ["First Creative"],"match":"equals", "search_mappings": 1}

      OR

      {"name": "creative_id", "value": [1],"match": "equals"}

    • Regarding the account filters: if the account is not explicitly specified in the filters, it's taken from the user (based on the account selected by the user). If no account is selected (this case should only occur for superusers), an error is raised. Account filter example:

      "filters": [
        {
          "match": "equals",
          "value": [
              "211"
          ],
          "case_insensitive": 1,
          "search_mappings": 1,
          "name": "publisher.account_id"
        }
      ]
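
As a combined reference, here is what a request body using several of the arguments above might look like. This is a sketch only: the argument names come from this page, while the specific values (dates, domain, limits) are illustrative.

{
  "start_date": "-7d",
  "end_date": "yesterday",
  "split_by": ["publisher.domain", "granularity_day"],
  "timezone": 0,
  "data_fields": ["pub_payout"],
  "order_by": [
    {"name": "granularity_day", "direction": "ASC"}
  ],
  "limit": 100,
  "offset": 0,
  "include_others": true,
  "include_mappings": 1,
  "filters": [
    {
      "name": "publisher.domain",
      "match": "contains",
      "value": ["forbes.com"],
      "case_insensitive": 1
    }
  ]
}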

Endpoint Detailed Description: Available Dimensions

List of available reporting dimensions and data_fields

Below is a list of the available reporting dimensions and fields that can be requested.

For each entry, the UI name is how the report or field is labeled in the Reporting Tab of the Commerce Grid UI, the API name (shown in parentheses) is how the same report or field should be referenced when calling the API, and the description explains the field.

  • Inventory Type (inventory_type): Indicates whether the impression was app or site; returns the following values: app, web, or ctv.

  • Ad Unit ID (publisher.ad_unit_id): Name and ID (UID) of an ad unit.

  • Media Type (publisher.media_type): Indicates the inventory content type, i.e. video, banner, or native.

  • Device Type (user.agent.device.type): Specifies the device type, e.g. Phone, PC, Tablet.

  • OS (user.agent.os.name): Specifies the Operating System, e.g. Android, iOS, Linux.

  • Country (user.geo.country): Specifies the country in which the impression was displayed.

  • Day (granularity_day): Returns data broken down by day.

  • DSP (grid_or_verona_dsp_id): Indicates the DSP that purchased the inventory.

  • CGrid UI Account ID (publisher.account_id): Indicates the Publisher Account ID.

  • Network ID (publisher.network_id): Indicates the Publisher Network ID.

  • Inventory Group (publisher.id): Specifies the Publisher Inventory Group.

  • Publisher Domain (publisher.domain): Specifies the domain that the publisher represents, e.g. forbes.com.

  • App Bundle (app.bundle): Specifies the app bundle.

  • Publisher Sub ID (publisher.publisher_sub_id): Indicates the Publisher Sub ID.

  • Creative Size (creative_size): Indicates data broken down by the selected creative sizes, e.g. 300x250.

  • Browser (user.agent.browser.name): Specifies the browser name, e.g. Firefox.
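
For instance, to break pub_payout down by country, the dimension's API name goes into the split_by POST argument (a sketch; the values shown are illustrative):

{
  "start_date": "-7d",
  "end_date": "yesterday",
  "split_by": ["user.geo.country"],
  "data_fields": ["pub_payout"],
  "order_by": [
    {"name": "pub_payout", "direction": "DESC"}
  ],
  "limit": 10
}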

Endpoint Detailed Description: Response

Response

The response may vary slightly due to additional aggregations.

  • status: the status of the request: success if the request was processed successfully, or an error code if an error occurred. If the status is not success, then the response contains the status and reason fields only. Possible values:

    • success: the request was processed successfully.

    • bad_request: invalid request parameters, please see the reason field for more details.

    • timeout: the request took too long to complete.

    • access_error: the user doesn't have access to the specified project/slicer, or a wrong token was used.

    • internal_error: the request failed due to an unknown problem.

  • reason: a user-friendly description of the error that occurred. This field is present for failed requests only.

  • total: this section contains information about the entire dataset returned by the query.

    • data: the summary values for each data column found in the result dataset. It's an array of elements with the following fields:

      • name: data field name.

      • value: data field value.

      • comment (optional): calculation comment (only for custom data columns); can be inf or -inf in case of stack overflow, "Division by zero", or ERROR! for any other calculation error. In all of these cases, the value is displayed as 0.

    • records_found: the total number of found records.

    • confidence_range: the confidence range for the data (in percent), presented in this section if the returned dataset is compressed. Available for the Total rows. Contains “N/A” if the compression state is unknown and 0 for uncompressed rows.

    • dates: the array of all dates for which data exists in the period from start_date to end_date.

  • rows: this section contains query data results. It is an array of data rows, each containing the following fields:

    • data: the list of items with the following field names and values:

      • name: data field name.

      • value: data field value.

      • percent (optional): percent of the total value (if applicable).

      • comment (optional): calculation comment (only for custom data columns); can be inf or -inf in case of stack overflow, "Division by zero", or ERROR! for any other calculation error. In all of these cases, the value is displayed as 0.

    • name: the value of the split_by key field for this row, including six specific time-related fields:

      • granularity_hour: data, aggregated by day+hour, where each key field value contains 1 item (date and hour) like:
        "name": [ "2013-09-30 19:00" ]

      • granularity_day: data, aggregated by day, where each key field value contains 1 item (date) like:
        "name": "2013-09-30"

      • granularity_week: data, aggregated by week, where each key field value contains 1 item (week) like:
        "name": "2013-W48"

      • granularity_month: data aggregated by month, where each key field value contains 1 item (month) like:
        "name": "2013-09"

      • granularity_quarter: data aggregated by quarter, where each key field value contains 1 item (quarter) like:
        "name": "2013 Q4"

      • granularity_year: data aggregated by year, where each key field value contains 1 item (year) like:
        "name": "2013"

Note: If the split_by POST argument contains several key fields, then the name parameter contains a value for each of them.

    • mapping (optional): the mappings of the current row key field values for the key fields specified in the split_by POST argument. Mapping display is defined by the include_mappings POST argument.

    • confidence_range: the confidence range for data (in percent), presented in this section if the returned dataset is compressed. Available for every data row in the resulting dataset.

  • others (optional): the summary of data rows beyond the range defined by limit + offset, if the include_others POST argument was set to 1. It's an object with the following fields:

    • data: the list of Total values of data rows beyond the range defined by limit + offset for the data fields specified in the data_fields POST argument. It's an array of elements with the following fields:

      • name: data field name.

      • value: data field Total value for data rows beyond the range defined by limit + offset.

      • percent (optional): percent of the data field Total value for data rows beyond the range defined by limit + offset (if applicable).

    • confidence_range: the confidence range for data (in percent), presented in this section if the returned dataset is compressed. Available for the Total values matching data rows beyond the range defined by limit + offset. Contains “N/A” if the compression state is unknown and 0 for uncompressed rows.

    • key_count: the number of data rows beyond the range defined by limit + offset.
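
As a reference, here is a small Python sketch of how a client might walk the response structure described above (it assumes the request succeeded and uses only the field names documented on this page):

# Minimal response-handling sketch based on the fields described above
def print_report(response: dict) -> None:
    if response.get("status") != "success":
        # Failed requests contain only the status and reason fields
        raise RuntimeError(f'{response["status"]}: {response.get("reason")}')

    for row in response["rows"]:
        # "name" holds the split_by key value, e.g. "2025-05-15" for granularity_day
        key = row["name"]
        for field in row["data"]:
            print(key, field["name"], field["value"], field.get("percent"))

    print("records found:", response["total"]["records_found"])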

Example Script

Below is a template script that retrieves data from the CGrid API; use it as a reference when setting up your own API calls. It uses optional environment variables for configuration.

You can run the script with custom request parameters from a JSON file and save the results to a specified location, for example:

python reporting-uauth.py --payload custom_payload.json --output my_output.json

#reporting-uauth.py
import json
import os
import sys
import argparse
from typing import Any
from requests import post

# Hardcoded uAuth token (replace with the token obtained from u-Auth)
UAUTH_TOKEN = "your_uauth_token"

def get_config() -> dict[str, str]:
    """Get configuration from environment variables or use defaults"""
    return {
        'reporting_api_url': os.environ.get('CGRID_REPORTING_API_URL','https://pub.themediagrid.com/api/uslicer/reporting/')
    }

def create_reporting_payload(payload_path: str | None = None) -> dict[str, Any]:
    """
    Create the payload for Reporting API request

    If payload_path is provided and exists, use it.
    Otherwise, use the default payload.
    """
    if payload_path and os.path.isfile(payload_path):
        try:
            with open(payload_path, 'r') as f:
                print(f'Using payload from {payload_path}')
                return json.load(f)
        except json.JSONDecodeError:
            print(f'Error parsing {payload_path}, falling back to default payload')
        except Exception as e:
            print(f'Error reading {payload_path}: {e}, falling back to default payload')

    return {
        'start_date': '-7d',
        'end_date': '-1d',
        'split_by': [
            'grid_or_verona_dsp_id',
            'granularity_day'
        ],
        'timezone': 0,
        'filters': [
            {
                'match': 'equals',
                'value': [
                    'publisher first, inc. d/b/a freestar - cgrid'
                ],
                'case_insensitive': 1,
                'search_mappings': 1,
                'name': 'publisher.account_id'
            },
            {
                'match': 'not equals',
                'value': [
                    '(empty value)'
                ],
                'case_insensitive': 1,
                'search_mappings': 1,
                'name': 'grid_or_verona_dsp_id'
            }
        ],
        'order_by': [
            {
                'name': 'granularity_day',
                'direction': 'ASC'
            }
        ],
        'include_mappings': 1,
        'add_keys': [
            'granularity_hour'
        ],
        'data_fields': [
            'pub_payout',
            'verona_fee',
            'bid_offers'
        ],
        'offset': 0
    }

def fetch_reporting_data(api_url: str, payload: dict[str, Any]) -> dict[str, Any]:
    """
    Request data from Reporting API using uAuth token

    Args:
        api_url: API URL
        payload: Request payload

    Returns:
        API response data

    Raises:
        Exception: If the API request fails
    """
    headers = {
        'Authorization': f'Bearer {UAUTH_TOKEN}'
    }

    response = post(
        url=api_url,
        headers=headers,
        json=payload
    )

    if response.status_code != 200:
        raise Exception(f'API request failed: {response.status_code}: {response.text}')

    return response.json()

def save_data_to_file(data: dict[str, Any], filename: str) -> None:
    """
    Save data to a JSON file

    Args:
        data: The data to save
        filename: The file to save to
    """
    with open(filename, 'w') as f:
        json.dump(data, f, indent=4)

def main():
    """Main function to orchestrate the workflow"""
    parser = argparse.ArgumentParser(description="Fetch reporting data using uAuth token")
    parser.add_argument('--payload', type=str, default=None, help='Path to input payload JSON file')
    parser.add_argument('--output', type=str, default='reporting_data.json', help='Output file name')
    args = parser.parse_args()

    try:
        config = get_config()

        payload = create_reporting_payload(args.payload)

        print('Requesting reporting data...')
        result = fetch_reporting_data(
            api_url=config['reporting_api_url'],
            payload=payload
        )
        print('Data received successfully')

        save_data_to_file(result, args.output)
        print(f'Result saved to file. Contains data for {result["total"].get("records_found")} records')

    except Exception as e:
        print(f'Error: {e}')
        sys.exit(1)

if __name__ == '__main__':
    main()