Category: PySpark

Send file attachments via Email using Python in Databricks

In this blog, you will learn how to send data files as email attachments using Python in Databricks. Sometimes we get requirements to send data files that are saved in external locations such as ADLS, Blob Storage, an S3 bucket, or DBFS via email to stakeholders and other downstream users. You might be thinking […]
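
For example, here is a minimal sketch of how such an attachment email might be sent with Python's standard smtplib and email libraries; the SMTP host, credentials, addresses, and DBFS file path are placeholder assumptions, not values from the post:

```python
import smtplib
from email.message import EmailMessage

# Hypothetical DBFS path; on Databricks, DBFS files are exposed under /dbfs.
file_path = "/dbfs/tmp/report.csv"

msg = EmailMessage()
msg["Subject"] = "Daily data file"
msg["From"] = "sender@example.com"       # placeholder sender
msg["To"] = "stakeholder@example.com"    # placeholder recipient
msg.set_content("Please find the data file attached.")

# Attach the file as text/csv.
with open(file_path, "rb") as f:
    msg.add_attachment(
        f.read(),
        maintype="text",
        subtype="csv",
        filename="report.csv",
    )

# Placeholder SMTP server and credentials; real setups often use an app password.
with smtplib.SMTP("smtp.example.com", 587) as server:
    server.starttls()
    server.login("sender@example.com", "app-password")
    server.send_message(msg)
```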

How to read and write CSV file in PySpark using Databricks

Geeks, in this tutorial you will learn how data stored in a CSV file is read in PySpark. You will also learn how multiple CSV files can be read and written to a location or table. Note: PySpark supports reading a CSV file with a pipe, comma, tab, space, or any other […]
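
As a quick preview, here is a short PySpark sketch of reading one or several CSV files and writing the result back out; the paths and the pipe delimiter are illustrative assumptions:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read a single CSV file with a header row, inferring column types.
df = spark.read.csv("/tmp/input/data.csv", header=True, inferSchema=True)

# Read multiple CSV files in one call, here with a pipe delimiter.
df_multi = (
    spark.read
    .option("header", True)
    .option("sep", "|")
    .csv(["/tmp/input/part1.csv", "/tmp/input/part2.csv"])
)

# Write the DataFrame back out as CSV.
df.write.mode("overwrite").option("header", True).csv("/tmp/output/")
```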

Table Batch Reads and Writes

In this tutorial, I will explain how data is read from and written to Delta Lake. I will also cover related table read and write operations, such as partitionBy. Create a table: Delta Lake supports creating two types of tables: tables defined in the metastore (managed tables) and […]
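
Below is a minimal sketch of Delta table batch reads and writes, assuming a cluster where Delta Lake is available (as on Databricks); the table name and storage path are examples:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.getOrCreate()
df = spark.range(0, 100).withColumn("bucket", col("id") % 10)

# Managed table: defined in the metastore, with storage handled for you.
df.write.format("delta").mode("overwrite").saveAsTable("events_managed")

# External table data written to an explicit path, partitioned by a column.
(df.write.format("delta")
   .mode("overwrite")
   .partitionBy("bucket")
   .save("/tmp/delta/events"))

# Batch reads: by table name or by path.
spark.table("events_managed").show(5)
spark.read.format("delta").load("/tmp/delta/events").show(5)
```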

What is Delta Lake?

Delta Lake is an open-source project that enables building a Lakehouse architecture on top of data lakes. It provides ACID transactions, scalable metadata handling, and unified streaming and batch data processing on top of existing data lakes such as S3, ADLS Gen1, ADLS Gen2, GCS, and HDFS. Features of Delta Lake: ACID transactions: readers never encounter inconsistent data, thanks to serializable isolation levels. […]
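
As a brief illustration of the transaction log behind these guarantees (assuming Delta Lake is installed; the path is an example), every write becomes a versioned commit that readers see atomically:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Two writes, each recorded as a separate atomic commit in the Delta log.
spark.range(0, 5).write.format("delta").mode("overwrite").save("/tmp/delta/demo")
spark.range(5, 10).write.format("delta").mode("append").save("/tmp/delta/demo")

# Inspect the commit history; readers always see a complete, consistent version.
spark.sql("DESCRIBE HISTORY delta.`/tmp/delta/demo`") \
    .select("version", "operation").show()

# Time travel: read the table as of an earlier version.
spark.read.format("delta").option("versionAsOf", 0).load("/tmp/delta/demo").show()
```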
