
Build, Train, and Deploy ML Pipelines using BERT

Description

In the second course of the Practical Data Science Specialization, you will learn to automate a natural language processing task by building an end-to-end machine learning pipeline using Hugging Face's highly optimized implementation of the state-of-the-art BERT algorithm with Amazon SageMaker Pipelines.

Your pipeline will first transform the dataset into BERT-readable features and store them in the Amazon SageMaker Feature Store. It will then fine-tune a text classification model on the dataset using a pre-trained Hugging Face model, which has learned to understand human language from millions of Wikipedia documents. Finally, your pipeline will evaluate the model's accuracy and deploy the model only if the accuracy exceeds a given threshold.
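The final step described above, deploying only when accuracy clears a threshold, can be sketched in plain Python. This is a minimal, library-free illustration of the conditional gate; the `evaluate` and `deploy_if_accurate` names are hypothetical stand-ins, and in the course this logic is expressed with SageMaker Pipelines' condition steps rather than hand-written code.

```python
# Sketch of the pipeline's conditional deployment gate (assumed interface):
# compute classification accuracy, then "deploy" only above a threshold.

def evaluate(predictions, labels):
    """Return simple classification accuracy: fraction of correct predictions."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def deploy_if_accurate(predictions, labels, threshold=0.9):
    """Gate deployment on accuracy; returns (deployed?, accuracy)."""
    accuracy = evaluate(predictions, labels)
    return accuracy >= threshold, accuracy

if __name__ == "__main__":
    preds  = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]
    labels = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
    deployed, acc = deploy_if_accurate(preds, labels, threshold=0.85)
    print(deployed, acc)  # True 0.9
```

In the actual SageMaker pipeline, the same comparison is declared as a pipeline condition so that the deployment step is skipped automatically when the evaluation metric falls short.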

This resource is offered by an affiliate partner. If you pay for training, we may earn a commission to support this site.

Career Relevance by Data Role

The techniques and tools covered in Build, Train, and Deploy ML Pipelines using BERT are most similar to the requirements found in Data Scientist job advertisements.

Similarity Scores (Out of 100)

Learning Sequence

Build, Train, and Deploy ML Pipelines using BERT is part of one structured learning path.

Practical Data Science (Coursera, DeepLearning.AI): 3 courses, 3 months