How to connect to data on S3 using Spark
In this guide we will demonstrate how to use Spark to connect to data stored on AWS S3. In our examples, we will specifically be connecting to CSV files.
Prerequisites
- An installation of GX set up to work with S3
- Access to data in an S3 bucket
Steps
1. Import GX and instantiate a Data Context
The code to import Great Expectations and instantiate a Data Context is:
import great_expectations as gx
context = gx.get_context()
2. Create a Datasource
We can define an S3 Datasource by providing three pieces of information:
- name: In our example, we will name our Datasource "my_s3_datasource"
- bucket_name: The name of our S3 bucket
- boto3_options: We can provide various additional options here, but in this example we will leave this empty and use the default values.
datasource_name = "version-0.16.16 my_s3_datasource"
bucket_name = "version-0.16.16 my_bucket"
boto3_options = {}
What options does boto3_options specify?
The boto3_options parameter allows you to pass additional settings such as:
- endpoint_url: Specifies an S3 endpoint. You can use an environment variable such as "${S3_ENDPOINT}" to securely include this in your code. The string "${S3_ENDPOINT}" will be replaced with the value of the corresponding environment variable.
- region_name: Your AWS region name.
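For example, a Datasource that connects to a custom S3 endpoint might pass options like the following. This is a minimal sketch: the "${S3_ENDPOINT}" environment variable comes from the description above, while the region value "us-east-1" is an illustrative placeholder.
# A sketch of non-empty boto3_options pointing at a custom endpoint and region.
# "${S3_ENDPOINT}" is resolved from the corresponding environment variable at runtime.
boto3_options = {
    "endpoint_url": "${S3_ENDPOINT}",
    "region_name": "us-east-1",
}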
Once we have those three elements, we can define our Datasource like so:
datasource = context.sources.add_spark_s3(
name=datasource_name, bucket=bucket_name, boto3_options=boto3_options
)
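If you want to confirm that the new Datasource was registered with your Data Context, one quick check (a sketch using the standard Data Context API) is to list the configured Datasources:
# Optional check: list the Datasources currently registered with the Data Context.
print(context.list_datasources())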
3. Add S3 data to the Datasource as a Data Asset
asset_name = "version-0.16.16 my_taxi_data_asset"
s3_prefix = "data/taxi_yellow_tripdata_samples/"
batching_regex = r"yellow_tripdata_sample_(?P<year>\d{4})-(?P<month>\d{2})\.csv"
data_asset = datasource.add_csv_asset(
name=asset_name,
batching_regex=batching_regex,
s3_prefix=s3_prefix,
header=True,
infer_schema=True,
)
Your Data Asset will connect to all files that match the regex that you provide. Each matched file will become a Batch inside your Data Asset.
For example:
Let's say that your S3 bucket has the following files:
- "yellow_tripdata_sample_2021-11.csv"
- "yellow_tripdata_sample_2021-12.csv"
- "yellow_tripdata_sample_2023-01.csv"
If you define a Data Asset using a full file name with no regex groups, such as "yellow_tripdata_sample_2023-01\.csv", your Data Asset will contain only one Batch, which will correspond to that file.
However, if you define a partial file name with a regex group, such as "yellow_tripdata_sample_(?P<year>\d{4})-(?P<month>\d{2})\.csv", your Data Asset will contain 3 Batches, one corresponding to each matched file. You can then use the keys year and month to indicate exactly which file you want to request from the available Batches.
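For example, here is a minimal sketch of requesting a single Batch from the Data Asset defined above by using the year and month keys. The option values "2021" and "11" are illustrative and assume the example files listed above exist in your bucket:
# A sketch: use the regex group keys to request one specific Batch.
# The values "2021" and "11" are illustrative; substitute the file you want.
batch_request = data_asset.build_batch_request(options={"year": "2021", "month": "11"})
validator = context.get_validator(batch_request=batch_request)
print(validator.head())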
Next steps
- How to organize Batches in a file-based Data Asset
- How to request Data from a Data Asset
- How to create Expectations while interactively evaluating a set of data
- How to use the Onboarding Data Assistant to evaluate data and create Expectations
Additional information
Related reading
For more details regarding storing credentials for use with GX, please see our guide: How to configure credentials