Loading data into a TM1 cube with TM1py

This article explains three methods of loading data into a TM1 cube with TM1py.

Method 1 – Writing directly to a cube

The first method is to create a dictionary of cells and assign a value to each one:

cells[element_name, "Numeric"] = 1

Once the cells dictionary has been created, you just need to use the TM1py function cells.write_values as shown below:

from TM1py import TM1Service

with TM1Service(address=ADDRESS, port=PORT, user=USER, password=PWD, ssl=SSL) as tm1:
    cells = dict()

    for i in range(500_000):
        element_name = str(i).zfill(6)
        cells[element_name, "Numeric"] = 1

    tm1.cells.write_values(
        cube_name="Big Cube",
        cellset_as_dict=cells,
        dimensions=["Big Dimension", "Big Cube Measure"],
        deactivate_transaction_log=True,
        reactivate_transaction_log=True)
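
The keys of the cells dictionary are tuples of element names, ordered the same way as the dimensions argument. A minimal standalone sketch (no TM1 connection required) of how the coordinates map to values:

```python
# Sketch: how the cellset dictionary maps cube coordinates to values.
# Each key is a tuple of element names, one per dimension of the cube,
# in the same order as the `dimensions` argument passed to write_values.
cells = dict()
for i in range(3):
    element_name = str(i).zfill(6)  # "000000", "000001", "000002"
    cells[element_name, "Numeric"] = 1

print(sorted(cells))
# [('000000', 'Numeric'), ('000001', 'Numeric'), ('000002', 'Numeric')]
```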

Method 2 – Updating a TM1 cellset

Another way to update cells in a cube with the TM1 REST API is to create a TM1 cellset and then update that cellset. To do this with TM1py, use the function cells.write_values_through_cellset:

from TM1py.Services import TM1Service
from mdxpy import MdxBuilder, Member

with TM1Service(address=ADDRESS, port=PORT, user=USER, password=PWD, ssl=SSL) as tm1:
    query = MdxBuilder.from_cube("Big Cube")

    for i in range(500_000):
        element_name = str(i).zfill(6)
        query.add_member_tuple_to_rows(Member.of("Big Dimension", element_name))

    query.add_member_tuple_to_columns(Member.of("Big Cube Measure", "Numeric"))

    tm1.cells.write_values_through_cellset(
        mdx=query.to_mdx(),
        values=[1] * 500_000)

Instead of writing the MDX query by hand, this script leverages the MDXpy package.
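
For a small case, the MDX the builder assembles is equivalent to a hand-written SELECT statement along these lines (illustrative only; MDXpy's actual output may differ slightly in formatting):

```python
# Hand-rolled equivalent of the MDX that MdxBuilder assembles, for 3 elements.
row_tuples = ",".join(
    f"([Big Dimension].[{str(i).zfill(6)}])" for i in range(3)
)
mdx = (
    f"SELECT {{{row_tuples}}} ON ROWS, "
    "{([Big Cube Measure].[Numeric])} ON COLUMNS "
    "FROM [Big Cube]"
)
print(mdx)
```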

Method 3 – Hybrid approach (TM1py + TM1 process)

The third method is a hybrid approach: use TM1py to export the data into a CSV file, then execute a TM1 process to load the data into the cube:

from TM1py.Services import TM1Service
from mdxpy import MdxBuilder, Member

CSV_FILE = r"C:\Projects\TM1py_Champions\Session5\data.csv"

with TM1Service(address=ADDRESS, port=PORT, user=USER, password=PWD, ssl=SSL) as tm1:
    query = MdxBuilder.from_cube("Big Cube")

    with open(CSV_FILE, "w") as file:
        lines = list()
        for i in range(500_000):
            element_name = str(i).zfill(6)
            # one "element,measure,value" record per cell, newline-terminated
            lines.append(",".join([element_name, "Numeric", "1"]) + "\n")
        file.writelines(lines)

    process_name = "import_csv"
    success, status, error_log_file = tm1.processes.execute_with_return(process_name)
    if not success:
        raise RuntimeError(f"Process '{process_name}' failed with status: {status}")
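
The TM1 process (named import_csv here) is assumed to read records shaped as element, measure, value. A hedged sketch of the rows the export step produces:

```python
# Sketch of the CSV records the export step is assumed to write:
# one "element,measure,value" line per cell, newline-terminated.
rows = [",".join([str(i).zfill(6), "Numeric", "1"]) + "\n" for i in range(3)]
print(rows[0], end="")  # 000000,Numeric,1
```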

When working with large dimensions (100,000+ elements), this approach is recommended as it is the fastest. However, its main downside is that it cannot be used if TM1 is hosted on the IBM Cloud.
