Ploutos Experiment Tracking

DequeAI Python SDK Documentation

DequeAI Python SDK is a library that helps you to easily track and manage your experiments, log artifacts, and monitor resources in your machine learning projects. With DequeAI, you can keep your experiments organized and collaborate with your teammates seamlessly.

Getting Started

This guide will walk you through the basic usage of the DequeAI Python SDK.

Installation

To use the DequeAI Python SDK, you will need to install it first. You can install it using pip:

pip install dequeai

Initialization

To start using the SDK, you need to import it in your Python script:

import dequeai

Before logging any experiments or artifacts, you need to initialize the Run instance:

dequeai.init(user_name="your_username", project_name="your_project_name", api_key="your_api_key")

Logging Data

You can log experiment data by calling the log() function:

data = { "accuracy": 0.95, "loss": 0.1 } 
dequeai.log(data)

You can also log a nested dictionary as follows:

data = { "training":{"accuracy": 0.95, "loss": 0.1}} 
dequeai.log(data)

You can also log hyperparameters using the log_hyperparams() function:

hyperparams = { "learning_rate": 0.001, "batch_size": 32 }
dequeai.log_hyperparams(hyperparams)

Logging Images

The same log() function can be used to log images with bounding boxes. The number of bounding box lists must match the number of images: each image needs an associated bounding box list, and if no bounding box data is applicable or available for an image, log None in its place (see the sketch after the example below).

# numpy is needed for the example arrays; Image and BoundingBox2D come from the dequeai SDK (import them per your SDK version)
import numpy as np

# Create an array of images
images = [np.random.randint(0, 255, (100, 100, 3)) for _ in range(3)]

# Create bounding boxes for each image
boxes = [
    [BoundingBox2D(coordinates=(10, 10, 20, 20), caption="Object1"), BoundingBox2D(coordinates=(30, 30, 40, 40), caption="Object2")],
    [BoundingBox2D(coordinates=(15, 15, 25, 25), caption="Object3"), BoundingBox2D(coordinates=(35, 35, 45, 45), caption="Object4")],
    [BoundingBox2D(coordinates=(20, 20, 30, 30), caption="Object5"), BoundingBox2D(coordinates=(40, 40, 50, 50), caption="Object6")]
]
train_image = Image(data=images,box_data=boxes)

# Log the images
dequeai.log({"training":{"loss":0.2,"images":train_image}})

Logging (Automatic) System Metrics

When a user logs a dictionary using the dequeai.log() function, the platform automatically logs system metrics at the same point in time. This enables users to monitor system utilization relative to model training.
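For example, because system metrics are captured at each dequeai.log() call, logging once per epoch inside your training loop gives you a system-utilization timeline that lines up with your training curve. A minimal sketch (the loop body is a placeholder for your own training code):

for epoch in range(10):
    # ... run one epoch of training here (placeholder) ...
    train_loss = 0.1 / (epoch + 1)  # dummy value standing in for your real training loss

    # Each call records the values below and, at the same point in time,
    # the system metrics (GPU, CPU, memory, disk) automatically.
    dequeai.log({"training": {"loss": train_loss, "epoch": epoch}})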

In the UI, users can click on "Edit settings" and add dot notation-based attributes to monitor specific system metrics from the following list:

  • System.GPU.0.utilization

  • System.GPU.0.memory.total

  • System.GPU.0.memory.free

  • System.CPU.utilization

  • System.CPU.count

  • System.CPU.cores

  • System.memory.utilization

  • System.disk.free

To add a metric to monitor, simply enter the dot notation-based attribute in the "Edit settings" panel. For example, to track GPU utilization, add System.GPU.0.utilization.

Logging Artifacts

Artifacts such as models, code, and environment can be logged using the log_artifact() function:

# Log model 
dequeai.log_artifact(artifact_type="model", path="path/to/model/file") 
# Log code
dequeai.log_artifact(artifact_type="code", path="path/to/code/file") 
# Log environment 
dequeai.log_artifact(artifact_type="environment", path="path/to/environment/file")

Registering Artifacts

You can register the artifacts you have logged by calling the register_artifacts() function:

dequeai.register_artifacts(latest=True, label="v1.0", tags=["tag1", "tag2"])

Loading Artifacts

You can load artifacts by calling the load_artifact() function:

dequeai.load_artifact(artifact_type="model", run_id="your_run_id")

The loaded artifact is downloaded into the current directory.
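Once the download finishes, you work with the file directly from the current directory. A minimal sketch, assuming the model artifact is a PyTorch checkpoint downloaded under a known filename (both the format and the filename model.pt are assumptions; adapt this to however your model was actually serialized):

import torch

dequeai.load_artifact(artifact_type="model", run_id="your_run_id")

# "model.pt" is a hypothetical filename used for illustration; check the actual file downloaded for your run.
state_dict = torch.load("model.pt", map_location="cpu")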

Finishing the Run

After logging your experiments and artifacts, you can finish the run by calling the finish() function:

dequeai.finish()

Example

Here is a complete example of using the DequeAI Python SDK:

import dequeai 
# Initialize the Run
dequeai.init(user_name="your_username", project_name="your_project_name", api_key="your_api_key") 
# Log experiment data 
data = { "accuracy": 0.95,"loss": 0.1 } 
dequeai.log(data) 
# Log hyperparameters 
hyperparams = {"learning_rate": 0.001, "batch_size": 32 } 
dequeai.log_hyperparams(hyperparams) 
# Log artifacts
dequeai.log_artifact(artifact_type="model", path="path/to/model/file") 
dequeai.log_artifact(artifact_type="code", path="path/to/code/file") 
dequeai.log_artifact(artifact_type="environment", path="path/to/environment/file") 
# Register artifacts
dequeai.register_artifacts(latest=True, label="v1.0", tags=["tag1", "tag2"])
# Load artifacts 
dequeai.load_artifact(artifact_type="model", run_id="your_run_id") 
# Finish the Run 
dequeai.finish()

With this getting started guide, you should now be able to set up and use the DequeAI Python SDK in your machine learning projects. You can now track and manage your experiments, log artifacts, and monitor resources effectively. Furthermore, you can collaborate with your teammates and maintain a well-organized project structure.

Advanced Usage

For more advanced usage, the DequeAI Python SDK provides additional features that can help you to monitor and manage your machine learning projects more efficiently.

Comparing all Runs within the Project

You can compare different runs of your experiments using the compare_runs() function:

dequeai.compare_runs(project_name="836b72e0-95a6-c84d95a960b2",metric_key="Inference Accuracy.Average Portfolio Value")

This will return a table comparing various metrics (best recorded for each run) and their corresponding hyperparameters.
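A minimal usage sketch; the exact return type of compare_runs() is not documented here, so this simply captures the comparison and prints it:

comparison = dequeai.compare_runs(
    project_name="836b72e0-95a6-c84d95a960b2",
    metric_key="Inference Accuracy.Average Portfolio Value",
)
# One row per run: the best recorded value of the metric plus that run's hyperparameters.
print(comparison)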

Reading the best Run

You can retrieve the best run using the read_best_run() function:

dequeai.read_best_run(project_name="836b72e0-1a5b-c84d95a960b2",metric_key="Inference Accuracy.Average Portfolio Value")

This will return the run with the best value of the specified metric across the project.
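A common follow-up is to pull the best run's artifacts back down. The sketch below assumes the object returned by read_best_run() exposes the run's id as a run_id attribute; that attribute name is an assumption, so inspect the returned object in your SDK version:

best_run = dequeai.read_best_run(
    project_name="836b72e0-1a5b-c84d95a960b2",
    metric_key="Inference Accuracy.Average Portfolio Value",
)

# "run_id" is an assumed attribute name; adjust it to match the actual returned object.
dequeai.load_artifact(artifact_type="model", run_id=best_run.run_id)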

Updating Run Metadata (Coming Soon...)

To update the metadata of a run, you can use the update_run_metadata() function:

dequeai.update_run_metadata(run_id="your_run_id", metadata={"new_key": "new_value"})

This will update the specified run's metadata with the new key-value pair.

Creating Reports (Coming Soon...)

You can create reports of your experiments using the create_report() function:

dequeai.create_report(run_ids=["run_id1", "run_id2", "run_id3"], file_format="pdf", output_path="path/to/report/file")

This will generate a report in the specified file format containing the details of the selected runs.

With these advanced features, you can gain more insights into your experiments, manage your project resources effectively, and make informed decisions to improve your machine learning models.

If you have any suggestions, feature requests, or issues, please log them here: https://github.com/orgs/deque-inc/projects/1 or email us at team@deque.ai.

Happy experimenting!
