Ingest data with Spark and Microsoft Fabric notebooks
Discover how to use Apache Spark and Python for data ingestion into a Microsoft Fabric lakehouse. Fabric notebooks provide a scalable and systematic solution.
Roles: Data Analyst, Data Engineer, Data Scientist
Products: Microsoft Fabric
Module Objectives
By the end of this module, you’ll be able to:
- Ingest external data into Fabric lakehouses using Spark
- Configure external source authentication and optimization
- Load data into the lakehouse as files or as Delta tables (illustrated in the sketch after this list)
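The pattern behind these objectives can be sketched in a few lines of PySpark. The snippet below is a minimal illustration only, assuming it runs in a Fabric notebook where a SparkSession named `spark` is already available and a default lakehouse is attached; the storage account, container, SAS token, file path, and table name are hypothetical placeholders rather than values from this module.

```python
# Minimal sketch, assuming a Fabric notebook where `spark` is already provided
# and a default lakehouse is attached. Account, container, SAS token, path,
# and table names below are hypothetical placeholders.

# Authenticate to an external ADLS Gen2 account with a fixed SAS token.
spark.conf.set(
    "fs.azure.account.auth.type.mystorageacct.dfs.core.windows.net", "SAS"
)
spark.conf.set(
    "fs.azure.sas.token.provider.type.mystorageacct.dfs.core.windows.net",
    "org.apache.hadoop.fs.azurebfs.sas.FixedSASTokenProvider",
)
spark.conf.set(
    "fs.azure.sas.fixed.token.mystorageacct.dfs.core.windows.net",
    "<sas-token>",  # keep real secrets in a key vault, not in the notebook
)

# Read the external CSV into a Spark DataFrame.
df = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("abfss://mycontainer@mystorageacct.dfs.core.windows.net/sales/orders.csv")
)

# Option 1: land the data as Parquet files in the lakehouse Files area.
df.write.mode("overwrite").parquet("Files/raw/orders")

# Option 2: save it as a managed Delta table in the lakehouse Tables area.
df.write.mode("overwrite").format("delta").saveAsTable("orders")
```

Writing under `Files/` keeps the data as raw files in the lakehouse, while `saveAsTable` registers a managed Delta table that can be queried directly from SQL or other Fabric experiences.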
Prerequisites
- Experience with Apache Spark and Python
- Basic understanding of extracting, transforming, and loading data