kedro_datasets.pandas.ParquetDataset

class kedro_datasets.pandas.ParquetDataset(filepath, load_args=None, save_args=None, version=None, credentials=None, fs_args=None, metadata=None)[source]

ParquetDataset loads/saves data from/to a Parquet file using an underlying filesystem (e.g.: local, S3, GCS). It uses pandas to handle the Parquet file.

Example usage for the YAML API:

boats:
  type: pandas.ParquetDataset
  filepath: data/01_raw/boats.parquet
  load_args:
    engine: pyarrow
    use_nullable_dtypes: True
  save_args:
    file_scheme: hive
    has_nulls: False
    engine: pyarrow

trucks:
  type: pandas.ParquetDataset
  filepath: abfs://container/02_intermediate/trucks.parquet
  credentials: dev_abs
  load_args:
    columns: [name, gear, disp, wt]
    index: name
  save_args:
    compression: GZIP
    partition_on: [name]
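
A catalog entry like boats above can also be exercised programmatically through Kedro's DataCatalog. The following is a minimal sketch, assuming kedro is installed alongside kedro_datasets and that data/01_raw/boats.parquet exists:

import yaml
from kedro.io import DataCatalog

# The YAML shown above, repeated inline so the sketch is self-contained.
catalog_config = yaml.safe_load('''
boats:
  type: pandas.ParquetDataset
  filepath: data/01_raw/boats.parquet
  load_args:
    engine: pyarrow
''')

catalog = DataCatalog.from_config(catalog_config)
boats = catalog.load("boats")  # returns a pandas.DataFrame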

Example usage for the Python API:

from kedro_datasets.pandas import ParquetDataset
import pandas as pd

data = pd.DataFrame({'col1': [1, 2], 'col2': [4, 5], 'col3': [5, 6]})

dataset = ParquetDataset(filepath="test.parquet")
dataset.save(data)
reloaded = dataset.load()
assert data.equals(reloaded)

Attributes

DEFAULT_LOAD_ARGS

DEFAULT_SAVE_ARGS

Methods

exists()

Checks whether a data set's output already exists by calling the provided _exists() method.

from_config(name, config[, load_version, ...])

Create a data set instance using the configuration provided.

load()

Loads data by delegation to the provided load method.

release()

Release any cached data.

resolve_load_version()

Compute the version the dataset should be loaded with.

resolve_save_version()

Compute the version the dataset should be saved with.

save(data)

Saves data by delegation to the provided save method.

DEFAULT_LOAD_ARGS: Dict[str, Any] = {}
DEFAULT_SAVE_ARGS: Dict[str, Any] = {}
__init__(filepath, load_args=None, save_args=None, version=None, credentials=None, fs_args=None, metadata=None)[source]

Creates a new instance of ParquetDataset pointing to a concrete Parquet file on a specific filesystem.

Parameters:
  • filepath (str) – Filepath in POSIX format to a Parquet file, prefixed with a protocol like s3://. If no protocol prefix is given, the file protocol (local filesystem) is used. The prefix can be any protocol supported by fsspec. The filepath can also point to a directory, in which case the dataset reads it as partitioned Parquet files. Note: http(s) doesn't support versioning.

  • load_args (Optional[Dict[str, Any]]) – Additional options for loading Parquet file(s). All available arguments for reading a single file are documented at https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_parquet.html, and those for reading partitioned datasets at https://arrow.apache.org/docs/python/generated/pyarrow.parquet.ParquetDataset.html#pyarrow.parquet.ParquetDataset.read. All defaults are preserved.

  • save_args (Optional[Dict[str, Any]]) – Additional options for saving Parquet file(s). All available arguments are documented at https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_parquet.html. All defaults are preserved. partition_cols is not supported.

  • version (Optional[Version]) – If specified, should be an instance of kedro.io.core.Version. If its load attribute is None, the latest version will be loaded. If its save attribute is None, save version will be autogenerated.

  • credentials (Optional[Dict[str, Any]]) – Credentials required to get access to the underlying filesystem. E.g. for GCSFileSystem it should look like {"token": None}.

  • fs_args (Optional[Dict[str, Any]]) – Extra arguments to pass into the underlying filesystem class constructor (e.g. {"project": "my-project"} for GCSFileSystem).

  • metadata (Optional[Dict[str, Any]]) – Any arbitrary metadata. This is ignored by Kedro, but may be consumed by users or external plugins.
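
As an illustration of how these parameters combine, here is a hedged sketch of constructing the dataset directly. The bucket name and credential keys are placeholders (s3fs-style key/secret is an assumption), not values prescribed by this class:

from kedro_datasets.pandas import ParquetDataset

dataset = ParquetDataset(
    filepath="s3://my-bucket/02_intermediate/trucks.parquet",  # hypothetical bucket
    load_args={"columns": ["name", "gear"]},   # forwarded to pandas.read_parquet
    save_args={"compression": "GZIP"},         # forwarded to DataFrame.to_parquet
    credentials={"key": "YOUR_KEY", "secret": "YOUR_SECRET"},  # placeholder s3fs credentials
)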

exists()

Checks whether a data set’s output already exists by calling the provided _exists() method.

Return type:

bool

Returns:

Flag indicating whether the output already exists.

Raises:

DatasetError – when the underlying exists method raises an error.
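
For example, exists() can guard a load against missing data (a minimal sketch; test.parquet is a placeholder path):

from kedro_datasets.pandas import ParquetDataset

dataset = ParquetDataset(filepath="test.parquet")
if dataset.exists():
    reloaded = dataset.load()
else:
    print("Nothing has been saved to test.parquet yet")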

classmethod from_config(name, config, load_version=None, save_version=None)

Create a data set instance using the configuration provided.

Parameters:
  • name – Data set name.

  • config – Data set config dictionary.

  • load_version – Version string to be used for load operation if the data set is versioned. Has no effect on the data set if versioning was not enabled.

  • save_version – Version string to be used for save operation if the data set is versioned. Has no effect on the data set if versioning was not enabled.

Returns:

An instance of an AbstractDataset subclass.

Raises:

DatasetError – When the function fails to create the data set from its config.
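
A minimal sketch of building the dataset from a plain config dictionary, equivalent to the boats catalog entry above (the type key inside config selects the dataset class):

from kedro_datasets.pandas import ParquetDataset

dataset = ParquetDataset.from_config(
    name="boats",
    config={
        "type": "pandas.ParquetDataset",
        "filepath": "data/01_raw/boats.parquet",
    },
)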

load()

Loads data by delegation to the provided load method.

Return type:

TypeVar(_DO)

Returns:

Data returned by the provided load method.

Raises:

DatasetError – When the underlying load method raises an error.

release()

Release any cached data.

Raises:

DatasetError – when the underlying release method raises an error.

Return type:

None

resolve_load_version()

Compute the version the dataset should be loaded with.

Return type:

str | None

resolve_save_version()

Compute the version the dataset should be saved with.

Return type:

str | None
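
To see both resolvers in context, here is a hedged sketch of a versioned dataset, using kedro.io.core.Version as described under __init__ above:

from kedro.io.core import Version
from kedro_datasets.pandas import ParquetDataset

# load=None -> resolve_load_version() picks the latest saved version;
# save=None -> resolve_save_version() autogenerates a new save version.
dataset = ParquetDataset(
    filepath="data/01_raw/boats.parquet",
    version=Version(load=None, save=None),
)
print(dataset.resolve_save_version())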

save(data)

Saves data by delegation to the provided save method.

Parameters:

data (TypeVar(_DI)) – the value to be saved by the provided save method.

Raises:
  • DatasetError – when the underlying save method raises an error.

  • FileNotFoundError – when the save method receives a file instead of a directory, on Windows.

  • NotADirectoryError – when the save method receives a file instead of a directory, on Unix.

Return type:

None