Intro to the core interfaces of mloda¶
mloda is a robust and flexible data framework that helps professionals manage data and feature engineering efficiently. It lets users abstract processes away from the data itself, in contrast to the common industry setup where processes are bound to specific data sets.
This introductory notebook gives a practical demonstration of how mloda streamlines machine learning data workflows by emphasizing data processes over raw data manipulation.
- It begins by loading data from various sources, such as order, payment, location, and categorical datasets.
- Next, we showcase mloda's versatility in handling diverse compute frameworks, including PyArrow tables and Pandas DataFrames.
- Then we leverage mloda's capabilities to integrate data from different sources into unified feature sets (feature sets are covered in detail in chapter 3).
Finally, we discuss the broader implications of this approach.
# Load all available plugins into the python environment
from mloda.user import PluginLoader
plugin_loader = PluginLoader.all()
# Since there are potentially many plugins loaded, we'll focus on specific categories for clarity.
# Here, we demonstrate by listing the available 'read' and 'sql' plugins.
print(plugin_loader.list_loaded_modules("read"))
print(plugin_loader.list_loaded_modules("sql"))
['mloda_plugins.feature_group.experimental.llm.tools.available.read_file_tool', 'mloda_plugins.feature_group.input_data.read_context_files', 'mloda_plugins.feature_group.input_data.read_file', 'mloda_plugins.feature_group.input_data.read_db_feature', 'mloda_plugins.feature_group.input_data.read_db', 'mloda_plugins.feature_group.input_data.read_file_feature', 'mloda_plugins.feature_group.input_data.read_files.json', 'mloda_plugins.feature_group.input_data.read_files.csv', 'mloda_plugins.feature_group.input_data.read_files.parquet', 'mloda_plugins.feature_group.input_data.read_files.feather', 'mloda_plugins.feature_group.input_data.read_files.text_file_reader', 'mloda_plugins.feature_group.input_data.read_files.orc', 'mloda_plugins.feature_group.input_data.read_dbs.sqlite']
['mloda_plugins.feature_group.input_data.read_dbs.sqlite']
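The same filter-based lookup works for other categories. A small optional sketch; the category strings below ("csv", "json") are assumptions inferred from the module names printed above:
# Optional: inspect further plugin categories using the same filter string.
# The filter strings here are assumptions based on the module names listed above.
print(plugin_loader.list_loaded_modules("csv"))
print(plugin_loader.list_loaded_modules("json"))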
# Optional!
# We use synthetic dummy data to demonstrate the basic usage.
# You can run this cell in your own Jupyter notebook.
# The generated data itself is not relevant for understanding the rest of the notebook.
#
# from examples.mloda_basics import create_synthetic_data
# create_synthetic_data.create_ml_lifecylce_data()
After running it, the base_data folder should contain four files: one SQLite database and three data files in different formats.
Next, we load the data so we can inspect its content.
# Step 1: We want to load typical order information like order_id, product_id, quantity, and item_price.
from typing import List
from mloda.user import Feature
order_features: List[str | Feature] = ["order_id", "product_id", "quantity", "item_price"]
payment_features: List[str | Feature] = ["payment_id", "payment_type", "payment_status", "valid_datetime"]
location_features: List[str | Feature] = ["user_location", "merchant_location", "update_date"]
categorical_features: List[str | Feature] = ["user_age_group", "product_category", "transaction_type"]
# Step 2: We specify the data sources to load
import os
from mloda.user import DataAccessCollection
from mloda_plugins.feature_group.input_data.read_dbs.sqlite import SQLITEReader
# Initialize a DataAccessCollection object
data_access_collection = DataAccessCollection()
# Define the folders containing the data
# Note: We check two candidate paths because the working directory depends on where the code is executed.
base_data_path = os.path.join(os.getcwd(), "docs", "docs", "examples", "mloda_basics", "base_data")
if not os.path.exists(base_data_path):
base_data_path = os.path.join(os.getcwd(), "base_data")
# Add the folder to the DataAccessCollection
data_access_collection.add_folder(base_data_path)
# A database is not discovered via a folder, so we register its connection explicitly.
data_access_collection.add_credential_dict(
credential_dict={SQLITEReader.db_path(): os.path.join(base_data_path, "example.sqlite")}
)
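As a quick sanity check, you can list the contents of the resolved data folder. This is a minimal sketch using only the standard library; the exact file names depend on the synthetic data step:
# Optional sanity check: the folder should contain the SQLite database
# and three data files in different formats.
print(sorted(os.listdir(base_data_path)))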
# Step 3: Request Data Using the Defined Access Collection and Desired Features
from mloda.user import mloda
from mloda_plugins.compute_framework.base_implementations.pyarrow.table import PyArrowTable
all_features = order_features + payment_features + location_features + categorical_features
# Retrieve data based on the specified feature list and access collection
result = mloda.run_all(all_features, data_access_collection=data_access_collection, compute_frameworks={PyArrowTable})
# Display the first two entries of each result table and its type
for data in result:
print(data[:2], type(data))
pyarrow.Table
quantity: int64
product_id: int64
order_id: int64
item_price: double
----
quantity: [[6,2]]
product_id: [[282,355]]
order_id: [[1,2]]
item_price: [[74.86,154.56]] <class 'pyarrow.lib.Table'>
pyarrow.Table
valid_datetime: timestamp[ns, tz=UTC]
payment_type: string
payment_id: int64
payment_status: string
----
valid_datetime: [[2024-01-11 23:01:49.090909090Z,2024-01-15 09:41:49.090909090Z]]
payment_type: [["debit card","debit card"]]
payment_id: [[1,2]]
payment_status: [["failed","pending"]] <class 'pyarrow.lib.Table'>
pyarrow.Table
update_date: int64
merchant_location: string
user_location: string
----
update_date: [[1640995200000,1641632290909]]
merchant_location: [["North","East"]]
user_location: [["East","West"]] <class 'pyarrow.lib.Table'>
pyarrow.Table
user_age_group: string
product_category: string
transaction_type: string
----
user_age_group: [["26-35","26-35"]]
product_category: [["clothing","home"]]
transaction_type: [["online","online"]] <class 'pyarrow.lib.Table'>
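Each entry in the result is a plain pyarrow.Table, so the regular PyArrow API applies directly. A minimal sketch:
# Inspect the schema (column names and types) of each returned table.
for data in result:
    print(data.schema)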
# The data was loaded as a PyArrow table. However, we can just as easily load it as a Pandas DataFrame.
from mloda_plugins.compute_framework.base_implementations.pandas.dataframe import PandasDataFrame
# Request data using the Pandas compute framework
result = mloda.run_all(
all_features, data_access_collection=data_access_collection, compute_frameworks={PandasDataFrame}
)
# Display the first two entries of each result table and its type
for data in result:
print(data[:2], type(data))
quantity product_id order_id item_price
0 6 282 1 74.86
1 2 355 2 154.56 <class 'pandas.core.frame.DataFrame'>
valid_datetime payment_type payment_id payment_status
0 2024-01-11 23:01:49.090909090+00:00 debit card 1 failed
1 2024-01-15 09:41:49.090909090+00:00 debit card 2 pending <class 'pandas.core.frame.DataFrame'>
update_date merchant_location user_location
0 1640995200000 North East
1 1641632290909 East West <class 'pandas.core.frame.DataFrame'>
user_age_group product_category transaction_type
0 26-35 clothing online
1 26-35 home online <class 'pandas.core.frame.DataFrame'>
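Likewise, each entry is now a regular pandas DataFrame, so the familiar pandas API is available. A minimal sketch:
# Inspect the column dtypes of each returned DataFrame.
for df in result:
    print(df.dtypes, "\n")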
# Define features with specific compute frameworks
order_id = Feature(name="order_id", compute_framework="PandasDataFrame")
product_id = Feature(name="product_id", compute_framework="PyArrowTable")
specific_framework_feature_list: List[Feature | str] = [order_id, product_id]
# Request data for the defined features
result = mloda.run_all(specific_framework_feature_list, data_access_collection=data_access_collection)
# Display the first three rows and data types of the results
for res in result:
print("The resulting data structure differs based on the compute framework:")
print("\n", res[:3], type(res))
The resulting data structure differs based on the compute framework:
order_id
0 1
1 2
2 3 <class 'pandas.core.frame.DataFrame'>
The resulting data structure differs based on the compute framework:
pyarrow.Table
product_id: int64
----
product_id: [[282,355,395]] <class 'pyarrow.lib.Table'>
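So far, each result came from a single source and lived in a single compute framework. The next cell goes one step further: it joins data that resides in two different compute frameworks (Pandas and PyArrow) into one feature, using an Index to declare the join key and a Link to declare how the two sides are combined.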
# Demonstrating mloda's Flexibility with Different Data Technologies
# Import required modules
from typing import Any, List, Optional, Set
from mloda.provider import FeatureGroup, FeatureSet
from mloda.user import Feature, FeatureName, Options, Index, Link, JoinSpec
from mloda_plugins.feature_group.input_data.read_file_feature import ReadFileFeature
# Define the index for the join
index = Index(("order_id",))
# Extend ReadFileFeature to provide index columns
class ReadFileFeatureJoin(ReadFileFeature):
@classmethod
def index_columns(cls) -> Optional[List[Index]]:
return [index]
# Define the link between the features using JoinSpec
link = Link.inner(JoinSpec(ReadFileFeatureJoin, index), JoinSpec(ReadFileFeatureJoin, index))
# Create an example feature group to demonstrate joining
class ExampleMlLifeCycleJoin(FeatureGroup):
# Define input features with different compute frameworks
def input_features(self, options: Options, feature_name: FeatureName) -> Optional[Set[Feature]]:
quantity = Feature(name="quantity", compute_framework="PandasDataFrame")
product_id = Feature(name="product_id", compute_framework="PyArrowTable")
return {product_id, quantity}
# Perform calculations on the joined data
@classmethod
def calculate_feature(cls, data: Any, features: FeatureSet) -> Any:
print(
"Data from two different sources is now combined into one feature within one data technology: \n",
data,
type(data),
"\n",
)
return {"ExampleMlLifeCycleJoin": [1, 2, 3]}
# Run the pipeline
result = mloda.run_all(["ExampleMlLifeCycleJoin"], data_access_collection=data_access_collection, links={link})
# Display the final result
print(
"Final result: ",
result[0],
"\nNote: As no specific compute framework was defined for the result, the output could be in either format.",
)
# Summary: mloda's abstraction layer enables complex process pipelines that handle different data technologies.
# This decouples processes from the underlying data structure, ensuring flexibility and scalability.
Data from two different sources is now combined into one feature within one data technology:
pyarrow.Table
product_id: int64
order_id: int64
quantity: int64
----
product_id: [[282,355,395,319,275,...,170,328,361,192,271]]
order_id: [[1,2,3,4,5,...,96,97,98,99,100]]
quantity: [[6,2,4,9,5,...,4,3,5,5,6]] <class 'pyarrow.lib.Table'>
Final result: pyarrow.Table
ExampleMlLifeCycleJoin: int64
----
ExampleMlLifeCycleJoin: [[1,2,3]]
Note: As no specific compute framework was defined for the result, the output could be in either format.
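Note what happened here: quantity was requested via the Pandas framework and product_id via PyArrow, yet calculate_feature received a single joined table. mloda resolved the inner Link on order_id and moved the data into one technology before handing it over, so the feature logic never had to deal with the mismatch.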
What Have We Observed So Far?¶
mloda unifies the interfaces for defining processes and applying them to data across various sources, formats, and technologies. We used the FeatureGroup, the ComputeFramework, and the mlodaAPI as interfaces.
It integrates with multiple data technologies, e.g. PyArrow and Pandas, enabling flexible tool choices for data processing.
mloda combines data access and computation, reducing complexity and providing a reusable approach to ML workflows. Data access can be controlled centrally for different data sources; here, we showed folder-based access and a database connection.
The next notebook explores the advantages of this approach in more depth.
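To recap the core interfaces in one place, here is a minimal sketch that reuses the data_access_collection and imports defined above: you request features by name and choose the technology of the result.
# Minimal recap: request features by name and receive results in a chosen technology.
# Reuses data_access_collection and PandasDataFrame from the cells above.
for df in mloda.run_all(
    ["order_id", "quantity"],
    data_access_collection=data_access_collection,
    compute_frameworks={PandasDataFrame},
):
    print(df[:2])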