Extend model lineage to include ML features using Amazon SageMaker Feature Store

Feature engineering is expensive and time-consuming, which may lead you to adopt a feature store for managing features across teams and models. Machine learning (ML) lineage solutions have yet to adapt to this new concept of feature management. To achieve the full benefits of a feature store by enabling feature reuse, you need to be able to answer key questions about features. How were these features built? What models are using these features? What features does my model depend on? What features are built from this data source?
Amazon SageMaker provides two key building blocks for answering these feature lineage questions:

SageMaker Feature Store is a purpose-built solution for ML feature management. It helps data science teams reuse ML features across teams and models, serve features for model predictions at scale with low latency, and train and deploy new models more quickly and effectively.
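To make the record model behind this concrete, here is a minimal conceptual sketch in plain Python. This is a stand-in for illustration only, not the Feature Store API: the class and field names are invented. It shows the core idea that each record in a feature group is keyed by a record identifier and an event time, the offline store keeps full history for training datasets, and the online store serves only the latest value per identifier for low-latency predictions:

```python
class FeatureGroupSketch:
    """Conceptual stand-in for a feature group (not the real service API)."""

    def __init__(self, name, record_id_key, event_time_key):
        self.name = name
        self.record_id_key = record_id_key
        self.event_time_key = event_time_key
        self.offline = []   # full history, used to build training datasets
        self.online = {}    # latest record per identifier, used at inference

    def ingest(self, record):
        # Offline store keeps every version of the record
        self.offline.append(record)
        rid = record[self.record_id_key]
        current = self.online.get(rid)
        # Online store keeps only the most recent event time per identifier
        if current is None or record[self.event_time_key] >= current[self.event_time_key]:
            self.online[rid] = record

    def get_record(self, rid):
        return self.online.get(rid)


fg = FeatureGroupSketch('customers', 'customer_id', 'event_time')
fg.ingest({'customer_id': 'c1', 'event_time': 1, 'num_claims': 0})
fg.ingest({'customer_id': 'c1', 'event_time': 2, 'num_claims': 1})

print(fg.get_record('c1')['num_claims'])  # latest value wins for serving
print(len(fg.offline))                    # full history retained offline
```

The split between a history-preserving offline store and a latest-value online store is what lets the same feature definitions serve both training and real-time inference.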

SageMaker ML Lineage Tracking lets you create and store information about the steps of an ML workflow, from data preparation to model deployment. With lineage tracking information, you can reproduce the workflow steps, track model and dataset lineage, and establish model governance and audit standards.
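The underlying idea can be sketched in a few lines of plain Python (this is a conceptual illustration, not the SageMaker API; the artifact names are invented): artifacts such as datasets, code, training jobs, and models are nodes, and typed associations such as ContributedTo or Produced are directed edges between them. Reproducing how a model was built is then a walk over its upstream artifacts:

```python
# Typed associations recorded during the workflow: (source, type, destination)
associations = [
    ('raw-data',         'ContributedTo', 'processing-job'),
    ('processing.py',    'ContributedTo', 'processing-job'),
    ('processing-job',   'Produced',      'training-dataset'),
    ('training-dataset', 'ContributedTo', 'training-job'),
    ('training-job',     'Produced',      'fraud-model'),
]

def upstream(artifact):
    """Return every artifact that directly or indirectly led to `artifact`."""
    parents = {src for src, _, dst in associations if dst == artifact}
    for p in set(parents):
        parents |= upstream(p)
    return parents

# Everything that contributed to the model, back to raw data and code
print(sorted(upstream('fraud-model')))
```

A real lineage store records the same kind of typed graph, with Amazon Resource Names (ARNs) identifying each node so queries can cross services and accounts.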

In this post, we discuss how to extend ML lineage to include ML features and feature processing, which can help data science teams move to proactive management of features. We provide a complete sample notebook demonstrating how to easily add lineage tracking to your workflow. You then use that lineage to answer key questions about how models and features are built and which models and endpoints are consuming them.
Why is feature lineage important?
Feature lineage plays an important role in helping organizations scale their ML practice beyond the first few successful models, to meet the needs that arise when multiple data science teams are building and deploying hundreds or thousands of models. Consider the following diagram, which shows a simplified view of the key artifacts and associations for a small set of models.

Imagine trying to track all of this manually for a large team, multiple teams, or even multiple business units. Lineage tracking and querying helps make this more manageable and helps organizations move to ML at scale. The following are four examples of how feature lineage helps scale the ML process:

Troubleshoot and audit models and model predictions – Incorrect predictions, or biased predictions, may occur in production, and teams need answers about how this happened. This troubleshooting might also occur when a regulator asks for evidence of how models were built, including all the features driving the predictions.

Build confidence for reuse of existing features – A data scientist might search for existing features, but won't use them if they can't easily determine the raw data sources, the transformations that have been performed, and who else is currently using the features in other models.

Manage features proactively – As more and more reusable features become available in centralized feature stores, owners of specific features need to plan for the evolution of feature groups, and eventually even deprecation of old features. These feature owners need to know which models are using their features in order to understand the impact and who they need to work with.

Avoid creating features that are based on the same raw data as existing features – Let's say a data scientist is planning to build new features based on a specific data source. Feature lineage can help them easily discover all the features that already depend on the same data source and are used by production models. Instead of building and maintaining yet another feature, they can find features to reuse immediately.

What relationships are important to track?
The following diagram shows a sample set of ML lifecycle steps, artifacts, and associations that are typically needed for model lineage when using a feature store.
These components include the following:

Data source – ML features depend on raw data sources like an operational data store, or a set of CSV files in Amazon Simple Storage Service (Amazon S3).

Feature pipeline – Production-worthy features are typically built using a feature pipeline that takes a set of raw data sources, performs feature transformations, and ingests the resulting features into the feature store. Lineage tracking can help by associating those pipelines with their data sources and their target feature groups.

Feature sets – Once features are in a feature store, data scientists query it to retrieve data for model training and validation. You can use lineage tracking to associate the feature store query with the resulting dataset. This provides granular detail into which features were used and what feature history was selected across multiple feature groups.

Training job – As the ML lifecycle matures to embrace the use of a feature store, model lineage can associate training jobs with specific features and feature groups.

Model – In addition to relating models to their hosting endpoints, you can link them to their corresponding training job, and indirectly to feature groups.

Endpoint – Finally, for online models, you can associate specific endpoints with the models they're hosting, completing the end-to-end chain from data sources to endpoints delivering predictions.
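As a rough illustration of how these components chain together, the associations can be viewed as a directed graph and queried in the downstream direction for impact analysis. This is a conceptual sketch only, with invented artifact names, not a real lineage API:

```python
# data source -> feature pipeline -> feature group -> dataset -> training job
# -> model -> endpoint, stored as parent -> children edges (names invented)
edges = {
    's3://raw/claims.csv':  ['claims-pipeline'],
    'claims-pipeline':      ['claims-feature-group'],
    'claims-feature-group': ['train-dataset-v1', 'train-dataset-v2'],
    'train-dataset-v1':     ['training-job-1'],
    'train-dataset-v2':     ['training-job-2'],
    'training-job-1':       ['fraud-model-a'],
    'training-job-2':       ['fraud-model-b'],
    'fraud-model-a':        ['prod-endpoint'],
}

def downstream(artifact):
    """All artifacts reachable from `artifact`, e.g. for impact analysis."""
    out = set()
    for child in edges.get(artifact, []):
        out.add(child)
        out |= downstream(child)
    return out

# A feature-group owner planning a breaking change can see every consumer:
models = {a for a in downstream('claims-feature-group') if a.startswith('fraud-model')}
print(sorted(models))
```

Walking the same edges in the opposite direction answers the reverse questions, such as which data sources a given model ultimately depends on.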

There is no "one size fits all" approach to a complete model pipeline. This is simply an example, and you can adapt it to match how your teams operate and to meet your specific lineage requirements. The underlying APIs are flexible enough to cover a broad range of approaches.
Establish lineage tracking
Let's walk through how to instrument your code to easily capture these associations. Our example uses a custom wrapper library that we built around SageMaker ML Lineage Tracking. This library wraps the SageMaker SDK to simplify lineage tracking across the ML lifecycle. Lineage artifacts include data, code, feature groups, features in a feature group, feature group queries, training jobs, and models.
First, we import the library:

from ml_lineage_helper import *

Next, ideally you want your lineage to also track the code you used to process your data with SageMaker Processing jobs, as well as the code used to train your model in SageMaker. If this code is version controlled (which we highly recommend!), we can reconstruct the URL links in your chosen Git hosting platform, such as GitHub or GitLab:

processing_code_repo_url = get_repo_link(os.getcwd(), 'processing.py')
training_code_repo_url = get_repo_link(os.getcwd(), 'pytorch-model/train_deploy.py', processing_code=False)
repo_links = [processing_code_repo_url, training_code_repo_url]

Finally, we create the lineage. Several of the inputs are optional, but in this example, we assume the following:

You started with a raw data source.
You used SageMaker Processing to process the raw data and ingest it into two different feature groups.
You queried the Feature Store to create training and test datasets.
You trained a model in SageMaker on your training and test datasets.

ml_lineage = MLLineageHelper()
lineage = ml_lineage.create_ml_lineage(estimator,
    model_name=model_name, query=query,
    sagemaker_processing_job_description=preprocessing_job_description,
    feature_group_names=['customers', 'claims'],
    repo_links=repo_links)
lineage

The call returns a pandas DataFrame representing the lineage graph of artifacts that were created and associated on your behalf. It provides names, associations (such as Produced or ContributedTo), and ARNs that uniquely identify resources.

Now that the lineage is in place, you can use it to answer key questions about your models and features. Keep in mind that the full benefit of this lineage tracking comes when the practice is adopted across many data scientists working with large numbers of features and models.

Use lineage to answer key questions and gain insights

Let's look at some examples of what you can do with the lineage information now that lineage tracking is in place.

As a data scientist, you might be planning to use a specific data source. To avoid creating features that are based on the same raw data as existing features, you want to look at all the features that have already been built and are in production using that same data source. A simple call can get you that insight:

from ml_lineage_helper.query_lineage import QueryLineage

query_lineage = QueryLineage()
query_lineage.get_feature_groups_from_data_source(artifact_arn_or_s3_uri)

The following screenshot shows our results.

Or maybe you're considering using a specific feature group, and you want to know what data sources are associated with it:

query_lineage.get_data_sources_from_feature_group(artifact_or_fg_arn, max_depth=3)

The following screenshot shows our results.

As more and more features are made available in a centralized feature store, owners of specific features need to plan for the evolution of feature groups, and eventually even deprecation of old features. These feature owners need to understand which models are using their features in order to understand the impact and who they need to work with. You can do this with the following code:

query_lineage.get_models_from_feature_group(artifact_or_fg_arn)

We get the following results.

You can also reverse the question and find out which feature groups are associated with a given model:

query_lineage.get_feature_groups_from_model(artifact_arn_or_model_name)

We get the following results.

You might also need to troubleshoot and audit a model or a set of model predictions. If incorrect predictions, or biased predictions, occurred in production, your team needs answers about how this happened. Given a model, you can query lineage to see all the steps used in the ML lifecycle to produce the model:

ml_lineage = MLLineageHelper(sagemaker_model_name_or_model_s3_uri='my-sagemaker-model-name')
ml_lineage.df

The following screenshot shows our results.

Conclusion

In this post, we discussed the importance of tracking ML lineage, the components of the ML lifecycle that you should track and include in the lineage, and how to use SageMaker to provide end-to-end ML lineage. We also covered how to incorporate Feature Store as you move toward reusable features across models and teams, and finally how to use the helper library to achieve end-to-end ML lineage tracking.

About the Authors

He's been in technology for over a decade, covering multiple roles and numerous technologies. He is currently focused on combining his background in software engineering, DevOps, and machine learning to help customers deliver machine learning workflows at scale.

Mark Roy is a Principal Machine Learning Architect for AWS, helping customers design and build AI/ML solutions. Mark's work covers a wide range of ML use cases, with a primary interest in computer vision, deep learning, and scaling ML across the enterprise. He has helped companies in many industries, including insurance, financial services, media and entertainment, healthcare, utilities, and manufacturing. Mark holds six AWS certifications, including the ML Specialty Certification. Prior to joining AWS, Mark was an architect, developer, and technology leader for over 25 years, including 19 years in financial services.

Mohan Pasappulatti is a Senior Solutions Architect at AWS, based in San Francisco, USA. Mohan helps high-profile strategic customers and disruptive startups architect and deploy distributed applications, including machine learning workloads in production on AWS. He has more than 20 years of work experience in roles such as engineering leader, chief architect, and principal engineer. In his spare time, Mohan likes to cheer on his college football team (LSU Tigers!), play poker, ski, watch the financial markets, play volleyball, and spend time outdoors.