Every enterprise today wants to accelerate innovation by building Data and ML into its business. However, most companies struggle with preparing large datasets for analytics, managing the proliferation of Data and ML frameworks, and moving models from development to production.
In this virtual workshop, we’ll cover best practices for using powerful open source technologies to simplify and scale your Data and ML efforts. We’ll discuss how to leverage Apache Spark™, the de facto data processing and analytics engine in enterprises today, for data preparation, since it unifies data at massive scale across a variety of sources. You’ll also learn how to use Data and ML frameworks (e.g. TensorFlow, XGBoost, Scikit-Learn) to train models for different requirements. Finally, you’ll learn how to use MLflow to track experiment runs across multiple users within a reproducible environment, and to manage the deployment of models to production on Amazon SageMaker.
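To give a flavor of the data preparation portion, here is a minimal PySpark sketch. The file paths, column names, and output location are illustrative assumptions rather than workshop materials, and Parquet is used for the output to keep the sketch dependency-free (the workshop itself covers writing to Delta Lake).

```python
# Minimal PySpark data-preparation sketch (illustrative only).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("data-prep-demo").getOrCreate()

# Read a raw CSV file (the path and schema are assumptions for this example).
raw = spark.read.csv("s3://my-bucket/raw/events.csv", header=True, inferSchema=True)

# Basic cleanup: drop incomplete rows and derive a date column from a timestamp.
prepared = (
    raw.dropna()
       .withColumn("event_date", F.to_date("event_timestamp"))
)

# Persist the prepared data for downstream analytics and model training.
prepared.write.mode("overwrite").parquet("s3://my-bucket/prepared/events")
```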
Join this virtual workshop to learn how Unified Data Analytics can bring Data Science, Business Analytics, and Engineering together to accelerate your Data and ML efforts. This virtual workshop will give you the opportunity to:
- Learn how to build highly scalable and reliable pipelines for analytics
- Gain deeper insight into Apache Spark and Databricks, including the latest updates to Delta Lake
- Train a model against data and learn best practices for working with ML frameworks (e.g. TensorFlow, XGBoost, Scikit-Learn)
- Learn how to use MLflow to track experiments, share projects, and deploy models to the cloud with Amazon SageMaker (see the sketch after this list)
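As a taste of the MLflow portion, the sketch below trains a small scikit-learn model on synthetic data and logs the run's parameters, metric, and model artifact. The run name, model choice, and dataset are illustrative assumptions; deployment to Amazon SageMaker, which the workshop covers through MLflow's deployment tooling, is omitted here.

```python
# Minimal MLflow experiment-tracking sketch (illustrative, not the workshop notebook).
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data stands in for a prepared dataset from the Spark step.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="workshop-demo"):
    n_estimators = 100
    model = RandomForestClassifier(n_estimators=n_estimators, random_state=42)
    model.fit(X_train, y_train)

    accuracy = accuracy_score(y_test, model.predict(X_test))

    # Log parameters, metrics, and the model artifact so runs can be compared later.
    mlflow.log_param("n_estimators", n_estimators)
    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(model, "model")
```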
We will use Zoom as the virtual meeting environment. Your Zoom link will be sent to you upon registration.
We look forward to seeing you on January 13th at 9:00 AM PT.