kedro_datasets.polars.GenericDataset

class kedro_datasets.polars.GenericDataset(filepath, file_format, load_args=None, save_args=None, version=None, credentials=None, fs_args=None)[source]

polars.GenericDataset loads/saves data from/to a data file using an underlying filesystem (e.g.: local, S3, GCS). It uses polars to dynamically select the appropriate type of read/write operation on a best-effort basis.

Example usage for the YAML API:

cars:
  type: polars.GenericDataset
  file_format: parquet
  filepath: s3://data/01_raw/company/cars.parquet
  load_args:
    low_memory: True
  save_args:
    compression: "snappy"
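
The same entry style works for any other supported format; for example, a CSV variant might look like this (dataset name, path, and options are illustrative):

```yaml
motorbikes:
  type: polars.GenericDataset
  file_format: csv
  filepath: data/01_raw/company/motorbikes.csv
  load_args:
    separator: ","
```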

Example using Python API:

 from kedro_datasets.polars import GenericDataset
 import polars as pl

 data = pl.DataFrame({'col1': [1, 2], 'col2': [4, 5], 'col3': [5, 6]})

 dataset = GenericDataset(filepath='test.parquet', file_format='parquet')
 dataset.save(data)
 reloaded = dataset.load()
 assert data.frame_equal(reloaded)

Attributes

DEFAULT_LOAD_ARGS

DEFAULT_SAVE_ARGS

Methods

exists()

Checks whether a data set's output already exists by calling the provided _exists() method.

from_config(name, config[, load_version, ...])

Create a data set instance using the configuration provided.

load()

Loads data by delegation to the provided load method.

release()

Release any cached data.

resolve_load_version()

Compute the version the dataset should be loaded with.

resolve_save_version()

Compute the version the dataset should be saved with.

save(data)

Saves data by delegation to the provided save method.

DEFAULT_LOAD_ARGS: Dict[str, Any] = {}
DEFAULT_SAVE_ARGS: Dict[str, Any] = {}
__init__(filepath, file_format, load_args=None, save_args=None, version=None, credentials=None, fs_args=None)[source]

Creates a new instance of GenericDataset pointing to a concrete data file on a specific filesystem. The appropriate polars load/save methods are identified dynamically by string matching on a best-effort basis.

Parameters:
  • filepath (str) – Filepath in POSIX format to a file prefixed with a protocol like s3://. If no prefix is provided, the file protocol (local filesystem) will be used. The prefix can be any protocol supported by fsspec. Key assumption: the first argument of the matched load/save method points to a filepath/buffer/io-type location. Some read/write targets, such as ‘clipboard’ or ‘records’, will fail because they do not take a filepath-like argument.

  • file_format (str) – String which is used to match the appropriate load/save method on a best effort basis. For example, if ‘csv’ is passed, the polars.read_csv and polars.DataFrame.write_csv methods will be identified. An error will be raised unless at least one matching read_<file_format> or write_<file_format> method is found.

  • load_args (Optional[Dict[str, Any]]) – Polars options for loading files. Here you can find all available arguments: https://pola-rs.github.io/polars/py-polars/html/reference/io.html All defaults are preserved.

  • save_args (Optional[Dict[str, Any]]) – Polars options for saving files. Here you can find all available arguments: https://pola-rs.github.io/polars/py-polars/html/reference/io.html All defaults are preserved.

  • version (Optional[Version]) – If specified, should be an instance of kedro.io.core.Version. If its load attribute is None, the latest version will be loaded. If its save attribute is None, save version will be autogenerated.

  • credentials (Optional[Dict[str, Any]]) – Credentials required to get access to the underlying filesystem. E.g. for GCSFileSystem it should look like {“token”: None}.

  • fs_args (Optional[Dict[str, Any]]) – Extra arguments to pass into underlying filesystem class constructor (e.g. {“project”: “my-project”} for GCSFileSystem).

  • metadata – Any arbitrary metadata. This is ignored by Kedro, but may be consumed by users or external plugins.

Raises:

DatasetError – Will be raised if no appropriate read or write method is identified.

exists()

Checks whether a data set’s output already exists by calling the provided _exists() method.

Return type:

bool

Returns:

Flag indicating whether the output already exists.

Raises:

DatasetError – when underlying exists method raises error.

classmethod from_config(name, config, load_version=None, save_version=None)

Create a data set instance using the configuration provided.

Parameters:
  • name – Data set name.

  • config – Data set config dictionary.

  • load_version – Version string to be used for load operation if the data set is versioned. Has no effect on the data set if versioning was not enabled.

  • save_version – Version string to be used for save operation if the data set is versioned. Has no effect on the data set if versioning was not enabled.

Returns:

An instance of an AbstractDataset subclass.

Raises:

DatasetError – When the function fails to create the data set from its config.

load()

Loads data by delegation to the provided load method.

Return type:

TypeVar(_DO)

Returns:

Data returned by the provided load method.

Raises:

DatasetError – When underlying load method raises error.

release()

Release any cached data.

Raises:

DatasetError – when underlying release method raises error.

Return type:

None

resolve_load_version()

Compute the version the dataset should be loaded with.

Return type:

str | None

resolve_save_version()

Compute the version the dataset should be saved with.

Return type:

str | None

save(data)

Saves data by delegation to the provided save method.

Parameters:

data (TypeVar(_DI)) – the value to be saved by the provided save method.

Raises:
  • DatasetError – when the underlying save method raises an error.

  • FileNotFoundError – when the save method receives a file instead of a directory, on Windows.

  • NotADirectoryError – when the save method receives a file instead of a directory, on Unix.

Return type:

None