This ADF pipeline has a simple structure: it reads data from Blob storage, transforms it, and writes the transformed data to an RDBMS table for reporting. Connection strings, container names and output table names have been parameterized, so the notebook can support almost any execution. The remaining gap is the output table's DDL query, which still needs to be parameterized so that the table is built from the schema of the output file.
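One way to close that gap is to derive the CREATE TABLE statement from the output file's schema rather than hard-coding it. The sketch below is a minimal, hypothetical illustration (not code from this repository): it assumes the transformed DataFrame's schema is available as (column name, Spark type name) pairs, and the type mapping shown is an assumption that would need to match the target RDBMS.

```scala
// Hypothetical sketch: build the output table's DDL from the schema of
// the transformed output, so the CREATE TABLE query can be parameterized
// per run. The Spark-type-to-SQL-type mapping below is an assumption.
object DdlBuilder {
  // Map a Spark type name to an RDBMS column type (illustrative only).
  private def toSqlType(sparkType: String): String = sparkType match {
    case "StringType"  => "VARCHAR(255)"
    case "IntegerType" => "INT"
    case "LongType"    => "BIGINT"
    case "DoubleType"  => "DOUBLE PRECISION"
    case other         => sys.error(s"Unmapped Spark type: $other")
  }

  // Build a CREATE TABLE statement from (columnName, sparkTypeName) pairs,
  // e.g. as obtained from df.schema.fields in the notebook.
  def createTableDdl(table: String, schema: Seq[(String, String)]): String = {
    val cols = schema.map { case (name, tpe) => s"$name ${toSqlType(tpe)}" }
    s"CREATE TABLE $table (${cols.mkString(", ")})"
  }
}
```

In the notebook, the resulting string could be passed to the JDBC connection (or back to ADF as a pipeline parameter) before the write step, so each execution creates a table shaped by that run's output file.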
Soumyadeep-github/ADF-Spark-Notebook-pipeline
About
A simple pipeline to transform data within Azure Data Factory using Azure Databricks. Although it is written in Scala, the same can be replicated in Python.