feldera package

feldera.sql_context module

class feldera.sql_context.SQLContext(pipeline_name: str, client: FelderaClient, pipeline_description: str | None = None, program_name: str | None = None, program_description: str | None = None, storage: bool = False, workers: int = 8, resources: Resources | None = None, compilation_profile: CompilationProfile = CompilationProfile.OPTIMIZED)[source]

Bases: object

The SQLContext is the main entry point for the Feldera SQL API. It abstracts the interaction with the Feldera API and provides a high-level interface for SQL pipelines.

Parameters:
  • pipeline_name – The name of the pipeline.

  • client – The FelderaClient instance to use.

  • pipeline_description – The description of the pipeline.

  • program_name – The name of the program. Defaults to the pipeline name.

  • program_description – The description of the program. Defaults to an empty string.

  • storage – Set True to use storage with this pipeline. Defaults to False.

  • workers – The number of workers to use with this pipeline. Defaults to 8.

  • resources – The PipelineResourceConfig for the pipeline. Defaults to None.

  • compilation_profile – The compilation profile to use when compiling the program. Defaults to CompilationProfile.OPTIMIZED.
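
A minimal sketch of creating a context; the top-level imports, the FelderaClient constructor, and the URL are illustrative assumptions, not taken verbatim from this reference:

from feldera import FelderaClient, SQLContext  # assumed top-level exports

client = FelderaClient("http://localhost:8080")  # placeholder Feldera instance URL
sql = SQLContext("my_pipeline", client, workers=4).get_or_create()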

add_lateness(view: str, timestamp_column: str, lateness_expr: str)[source]

Add a lateness annotation to a view. Lateness annotations are SQL statements of the form

LATENESS <view>.<timestamp_column> <lateness_expr>;
-- example:
LATENESS V.COL1 INTERVAL '1' HOUR;

Parameters:
  • view – View name.

  • timestamp_column – Timestamp column to associate lateness with.

  • lateness_expr – SQL expression that defines lateness.
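
For example, the LATENESS statement above corresponds to a call like this (view and column names are illustrative):

sql.add_lateness(view="V", timestamp_column="COL1", lateness_expr="INTERVAL '1' HOUR")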

connect_sink_delta_table(view_name: str, connector_name: str, config: dict)[source]

Tell Feldera to write the data to the specified Delta Lake table.

Parameters:
  • view_name – The name of the view whose output is sent to the Delta Lake table.

  • connector_name – The unique name for this connector.

  • config – The configuration for the Delta Lake connector.
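
A sketch of attaching a Delta Lake sink; the config keys and values below ("uri", "mode") are assumptions, so consult the Feldera connector documentation for the exact configuration schema:

sql.connect_sink_delta_table(
    view_name="my_view",
    connector_name="my_delta_sink",
    config={"uri": "s3://my-bucket/my-table", "mode": "append"},  # illustrative keys and values
)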

connect_sink_kafka(view_name: str, connector_name: str, config: dict, fmt: JSONFormat | CSVFormat | AvroFormat)[source]

Associate the specified Kafka topic on the specified Kafka server as an output sink for the given view in Feldera. The topic is populated with changes to the view.

Parameters:
  • view_name – The name of the view whose changes are sent to the Kafka topic.

  • connector_name – The unique name for this connector.

  • config – The configuration for the Kafka connector.

  • fmt – The format of the data in the Kafka topic.
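
A sketch of wiring a view to a Kafka topic; the config entries are illustrative librdkafka-style options, not a verified configuration:

from feldera.formats import JSONFormat, JSONUpdateFormat

sql.connect_sink_kafka(
    view_name="my_view",
    connector_name="my_kafka_sink",
    config={"topic": "output-topic", "bootstrap.servers": "localhost:9092"},  # illustrative
    fmt=JSONFormat().with_update_format(JSONUpdateFormat.InsertDelete),
)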

connect_source_delta_table(table_name: str, connector_name: str, config: dict)[source]

Tell Feldera to read the data from the specified Delta Lake table.

Parameters:
  • table_name – The name of the table.

  • connector_name – The unique name for this connector.

  • config – The configuration for the Delta Lake connector.
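
A sketch mirroring the sink example above; the config keys are again assumptions to be checked against the connector documentation:

sql.connect_source_delta_table(
    table_name="my_table",
    connector_name="my_delta_source",
    config={"uri": "s3://my-bucket/my-table", "mode": "snapshot"},  # illustrative keys and values
)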

connect_source_kafka(table_name: str, connector_name: str, config: dict, fmt: JSONFormat | CSVFormat, max_queued_records: int | None = None)[source]

Associate the specified Kafka topics on the specified Kafka server as an input source for the given table in Feldera. The table is populated with changes from these Kafka topics.

Parameters:
  • table_name – The name of the table.

  • connector_name – The unique name for this connector.

  • config – The configuration for the Kafka connector.

  • fmt – The format of the data in the Kafka topic.

  • max_queued_records – Maximum number of records queued by the endpoint before the endpoint is paused by the backpressure mechanism.
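
A sketch of a Kafka source; the "topics" key and the server address are illustrative assumptions:

from feldera.formats import JSONFormat, JSONUpdateFormat

sql.connect_source_kafka(
    table_name="my_table",
    connector_name="my_kafka_source",
    config={"topics": ["input-topic"], "bootstrap.servers": "localhost:9092"},  # illustrative
    fmt=JSONFormat().with_update_format(JSONUpdateFormat.InsertDelete),
    max_queued_records=100_000,
)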

connect_source_url(table_name: str, connector_name: str, path: str, fmt: JSONFormat | CSVFormat)[source]

Associate the specified URL as input source for the specified table in Feldera. Feldera will make a GET request to the specified URL to read the data and populate the table.

Parameters:
  • table_name – The name of the table.

  • connector_name – The unique name for this connector.

  • path – The URL to read the data from.

  • fmt – The format of the data in the URL.
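
For instance, to populate a table from a JSON document served over HTTP (the URL is a placeholder):

from feldera.formats import JSONFormat, JSONUpdateFormat

sql.connect_source_url(
    table_name="my_table",
    connector_name="my_url_source",
    path="https://example.com/data.json",
    fmt=JSONFormat().with_update_format(JSONUpdateFormat.InsertDelete),
)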

create() Self[source]

Set the build mode to CREATE, meaning that the pipeline will be created from scratch.

delete(delete_program: bool = True, delete_connectors: bool = False)[source]

Delete the pipeline.

Parameters:
  • delete_program – If True, also deletes the program associated with the pipeline. True by default.

  • delete_connectors – If True, also deletes the connectors associated with the pipeline. False by default.

foreach_chunk(view_name: str, callback: Callable[[DataFrame, int], None])[source]

Run the given callback on each chunk of the output of the specified view.

Parameters:
  • view_name – The name of the view.

  • callback

    The callback to run on each chunk. The callback should take two arguments:

    • chunk -> The chunk as a pandas DataFrame

    • seq_no -> The sequence number. The sequence number is a monotonically increasing integer that starts from 0. Note that the sequence number is unique for each chunk, but not necessarily contiguous.

Note

  • The callback runs in a separate thread, so it must be thread-safe.

  • The callback should not block for long: backpressure is enabled by default, so a slow callback will stall the pipeline.
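
A minimal callback sketch (the view name is illustrative):

from pandas import DataFrame

def on_chunk(chunk: DataFrame, seq_no: int) -> None:
    # Runs on the handler thread: keep this fast and thread-safe.
    print(f"chunk {seq_no}: {len(chunk)} rows")

sql.foreach_chunk("my_view", on_chunk)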

get() Self[source]

Set the build mode to GET, meaning that an existing pipeline will be used.

get_or_create() Self[source]

Set the build mode to GET_OR_CREATE, meaning that an existing pipeline will be used if it exists, else a new one will be created.

input_pandas(table_name: str, df: DataFrame, force: bool = False)[source]

Push all rows in a pandas DataFrame to the pipeline.

Parameters:
  • table_name – The name of the table to insert data into.

  • df – The pandas DataFrame to be pushed to the pipeline.

  • force – True to push data even if the pipeline is paused. False by default.
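
A sketch, assuming a table has been registered whose columns match the DataFrame:

import pandas as pd

df = pd.DataFrame({"id": [1, 2], "name": ["Alice", "Bob"]})
sql.input_pandas("my_table", df)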

listen(view_name: str) OutputHandler[source]

Listen to the output of the provided view so that it is available in the notebook / Python code.

Parameters:

view_name – The name of the view to listen to.
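
A sketch of the listen/consume pattern; registering the handler before starting the pipeline is assumed here so that no early output is missed:

handler = sql.listen("my_view")
sql.start()
# ... push inputs and let the pipeline process them ...
df = handler.to_pandas()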

pause()[source]

Pause the pipeline.

pipeline_status() PipelineStatus[source]

Return the current state of the pipeline.

register_local_view(name: str, query: str)[source]

Register a local view with the SQLContext. Local views are not exposed to the outside world as an output of the computation; they serve to modularize the SQL code as intermediate views used in the implementation of other views.

Marking a view as local means it is not materialized, potentially yielding a performance benefit over a regular view at the cost of not being observable (e.g., connectors cannot be attached to it).

Auto inserts the trailing semicolon if not present.

Parameters:
  • name – The name of the view.

  • query – The query to be used to create the view.
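
A sketch of a local intermediate view feeding a regular view (table and column names are illustrative):

sql.register_local_view("adults", "SELECT * FROM users WHERE age >= 18")
sql.register_view("adult_names", "SELECT name FROM adults")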

register_materialized_view(name: str, query: str)[source]

Register a Feldera materialized View based on the provided query. Auto inserts the trailing semicolon if not present.

Parameters:
  • name – The name of the view.

  • query – The query to be used to create the view.

register_table(table_name: str, schema: SQLSchema | None = None, ddl: str | None = None)[source]

Register a table with the SQLContext. The table can be registered with a schema or with SQL DDL; one of the two must be provided, but not both. Auto inserts the trailing semicolon if not present. In the future, the schema will be inferred from the data provided by applicable sources.

Parameters:
  • table_name – The name of the table.

  • schema – The schema of the table.

  • ddl – The SQL DDL of the table.
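
Both registration styles, sketched with an illustrative table; the schema mapping follows the feldera.sql_schema.SQLSchema signature documented below:

from feldera.sql_schema import SQLSchema

# Either with a schema mapping ...
sql.register_table("users", schema=SQLSchema({"id": "INT", "name": "VARCHAR"}))

# ... or with explicit DDL (use one or the other, not both):
sql.register_table("users", ddl="CREATE TABLE users (id INT, name VARCHAR)")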

register_table_from_sql(ddl: str)[source]

Register a table with the provided SQL DDL. Auto inserts the trailing semicolon if not present.

Parameters:

ddl – The SQL DDL of the table.

register_type(name: str, spec: str)[source]

Register a SQL type. Auto inserts the trailing semicolon if not present.

Parameters:
  • name – The name of the type.

  • spec – Type definition.

register_view(name: str, query: str)[source]

Register a Feldera View based on the provided query. Auto inserts the trailing semicolon if not present.

Parameters:
  • name – The name of the view.

  • query – The query to be used to create the view.

resume()[source]

Resume the pipeline.

shutdown()[source]

Shut down the pipeline.

start()[source]

Start the pipeline.

Raises:

RuntimeError – If the pipeline returns unknown metrics.

wait_for_completion(shutdown: bool = False)[source]

Block until the pipeline has completed processing all input records.

This method blocks until (1) all input connectors attached to the pipeline have finished reading their input data sources and issued end-of-input notifications to the pipeline, and (2) all inputs received from these connectors have been fully processed and corresponding outputs have been sent out through the output connectors.

This method will block indefinitely if at least one of the input connectors attached to the pipeline is a streaming connector, such as Kafka, that does not issue the end-of-input notification.

Parameters:

shutdown – If True, the pipeline will be shut down after completion. False by default.
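
A typical finite-input run, assuming all attached sources are bounded:

sql.start()
sql.wait_for_completion(shutdown=True)  # shut the pipeline down once all input is processed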

Raises:

RuntimeError – If the pipeline returns unknown metrics.

wait_for_idle(idle_interval_s: float = 5.0, timeout_s: float = 600.0, poll_interval_s: float = 0.2)[source]

Wait for the pipeline to become idle, then return.

Idle is defined as a sufficiently long interval during which the numbers of input and processed records reported by the pipeline do not change and equal each other (thus, all input records received by the pipeline have been processed).

Parameters:
  • idle_interval_s – Idle interval duration (default is 5.0 seconds).

  • timeout_s – Timeout waiting for idle (default is 600.0 seconds).

  • poll_interval_s – Polling interval, should be set substantially smaller than the idle interval (default is 0.2 seconds).

Raises:
  • ValueError – If idle interval is larger than timeout, poll interval is larger than timeout, or poll interval is larger than idle interval.

  • RuntimeError – If the metrics are missing or the timeout was reached.

feldera.enums module

class feldera.enums.BuildMode(value)[source]

Bases: Enum

The build mode of a pipeline: CREATE builds it from scratch, GET uses an existing pipeline, and GET_OR_CREATE uses an existing pipeline if one exists and creates it otherwise.

CREATE = 1
GET = 2
GET_OR_CREATE = 3

class feldera.enums.CompilationProfile(value)[source]

Bases: Enum

The compilation profile to use when compiling the program.

DEV = 'dev'

The development compilation profile.

OPTIMIZED = 'optimized'

The optimized compilation profile, the default for this API.

SERVER_DEFAULT = None

The compiler server default compilation profile.

UNOPTIMIZED = 'unoptimized'

The unoptimized compilation profile.

class feldera.enums.PipelineStatus(value)[source]

Bases: Enum

Represents the state that this pipeline is currently in.

Shutdown ◄───────────────────┐
   │                         │
   │ /deploy          ⌛ShuttingDown
   ▼                         ▲
⌛Provisioning                │
   │ Provisioned             │
   ▼                         │ /shutdown
⌛Initializing                │
   │                         │
┌──┴─────────────────────────┴──┐
│  ▼                            │
│  Paused ──/start──► Running   │
│   ▲                    │      │
│   └───────/pause───────┘      │
└───────────────┬───────────────┘
                │
                ▼
             Failed

FAILED = 8

The pipeline has failed.

The pipeline remains in this state until the user acknowledges the failure by issuing a call to shut down the pipeline; transitions to the PipelineStatus.SHUTDOWN state.

INITIALIZING = 4

The pipeline is initializing its internal state and connectors.

This state is part of the pipeline’s deployment process. In this state, the pipeline’s HTTP server is up and running, but its query engine and input and output connectors are still initializing.

The pipeline remains in this state until:

  1. Initialization completes successfully; the pipeline transitions to the PipelineStatus.PAUSED state.

  2. Initialization fails; transitions to the PipelineStatus.FAILED state.

  3. A pre-defined timeout has passed. The runner performs forced shutdown of the pipeline; returns to the PipelineStatus.SHUTDOWN state.

  4. The user cancels the pipeline by invoking the /shutdown endpoint. The manager performs forced shutdown of the pipeline; returns to the PipelineStatus.SHUTDOWN state.

NOT_FOUND = 1

The pipeline has not been created yet.

PAUSED = 5

The pipeline is fully initialized, but data processing has been paused.

The pipeline remains in this state until:

  1. The user starts the pipeline by invoking the /start endpoint. The manager passes the request to the pipeline; transitions to the PipelineStatus.RUNNING state.

  2. The user cancels the pipeline by invoking the /shutdown endpoint. The manager passes the shutdown request to the pipeline to perform a graceful shutdown; transitions to the PipelineStatus.SHUTTING_DOWN state.

  3. An unexpected runtime error renders the pipeline PipelineStatus.FAILED.

PROVISIONING = 3

The runner triggered a deployment of the pipeline and is waiting for the pipeline HTTP server to come up.

In this state, the runner provisions a runtime for the pipeline, starts the pipeline within this runtime and waits for it to start accepting HTTP requests.

The user is unable to communicate with the pipeline during this time. The pipeline remains in this state until:

  1. Its HTTP server is up and running; the pipeline transitions to the PipelineStatus.INITIALIZING state.

  2. A pre-defined timeout has passed. The runner performs forced shutdown of the pipeline; returns to the PipelineStatus.SHUTDOWN state.

  3. The user cancels the pipeline by invoking the /shutdown endpoint. The manager performs forced shutdown of the pipeline; returns to the PipelineStatus.SHUTDOWN state.

RUNNING = 6

The pipeline is processing data.

The pipeline remains in this state until:

  1. The user pauses the pipeline by invoking the /pause endpoint. The manager passes the request to the pipeline; transitions to the PipelineStatus.PAUSED state.

  2. The user cancels the pipeline by invoking the /shutdown endpoint. The runner passes the shutdown request to the pipeline to perform a graceful shutdown; transitions to the PipelineStatus.SHUTTING_DOWN state.

  3. An unexpected runtime error renders the pipeline PipelineStatus.FAILED.

SHUTDOWN = 2

Pipeline has not been started or has been shut down.

The pipeline remains in this state until the user triggers a deployment by invoking the /deploy endpoint.

SHUTTING_DOWN = 7

Graceful shutdown in progress.

In this state, the pipeline finishes any ongoing data processing, produces final outputs, shuts down input/output connectors and terminates.

The pipeline remains in this state until:

  1. Shutdown completes successfully; transitions to the PipelineStatus.SHUTDOWN state.

  2. A pre-defined timeout has passed. The manager performs forced shutdown of the pipeline; returns to the PipelineStatus.SHUTDOWN state.

static from_str(value)[source]

feldera.formats module

class feldera.formats.AvroFormat(config: dict | None = None, schema: str | None = None, skip_schema_id: bool | None = False, registry_urls: list[str] | None = None, registry_headers: Mapping[str, str] | None = None, registry_proxy: str | None = None, registry_timeout_secs: int | None = None, registry_username: str | None = None, registry_password: str | None = None, registry_authorization_token: str | None = None)[source]

Bases: Format

Avro output format configuration.

Parameters:
  • config – A dictionary that contains the entire configuration for the Avro format.

  • schema – Avro schema used to encode output records. Specified as a string containing schema definition in JSON format. This schema must match precisely the SQL view definition, including nullability of columns.

  • skip_schema_id – Set True if the serialized message should only contain the data and not contain the magic byte + schema ID. False by default. The first 5 bytes of the Avro message are the magic byte and 4-byte schema ID. https://docs.confluent.io/platform/current/schema-registry/fundamentals/serdes-develop/index.html#wire-format

  • registry_urls – List of schema registry URLs. When non-empty, the connector will post the schema to the registry and use the schema id returned by the registry. Otherwise, schema id 0 is used.

  • registry_headers – Custom headers that will be added to every call to the schema registry. This option requires registry_urls to be set.

  • registry_proxy – Proxy that will be used to access the schema registry. Requires registry_urls to be set.

  • registry_timeout_secs – Timeout in seconds used to connect to the registry. Requires registry_urls to be set.

  • registry_username – Username used to authenticate with the registry. Requires registry_urls to be set. This option is mutually exclusive with token-based authentication (see registry_authorization_token).

  • registry_password – Password used to authenticate with the registry. Requires registry_urls to be set.

  • registry_authorization_token – Token used to authenticate with the registry. Requires registry_urls to be set. This option is mutually exclusive with password-based authentication (see registry_username and registry_password).
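
A sketch of configuring Avro output against a schema registry; the URL and credentials are placeholders:

from feldera.formats import AvroFormat

fmt = (
    AvroFormat()
    .with_registry_urls(["http://localhost:8081"])  # placeholder registry URL
    .with_registry_username("user")                 # placeholder credentials
    .with_registry_password("secret")
)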

with_registry_authorization_token(registry_authorization_token: str) Self[source]

Token used to authenticate with the registry.

Requires registry_urls to be set. This option is mutually exclusive with password-based authentication (see registry_username and registry_password).

with_registry_headers(registry_headers: Mapping[str, str]) Self[source]

Custom headers that will be added to every call to the schema registry.

This option requires registry_urls to be set.

with_registry_password(registry_password: str) Self[source]

Password used to authenticate with the registry.

Requires registry_urls to be set. This option is mutually exclusive with token-based authentication (see registry_authorization_token).

with_registry_proxy(registry_proxy: str) Self[source]

Proxy that will be used to access the schema registry.

Requires registry_urls to be set.

with_registry_timeout_secs(registry_timeout_secs: int) Self[source]

Timeout in seconds used to connect to the registry.

Requires registry_urls to be set.

with_registry_urls(registry_urls: list[str]) Self[source]

List of schema registry URLs.

When non-empty, the connector will post the schema to the registry and use the schema id returned by the registry. Otherwise, schema id 0 is used.

with_registry_username(registry_username: str) Self[source]

Username used to authenticate with the registry.

Requires registry_urls to be set. This option is mutually exclusive with token-based authentication (see registry_authorization_token).

with_schema(schema: str | dict) Self[source]

Avro schema used to encode output records.

Specified as a string containing schema definition in JSON format. This schema must match precisely the SQL view definition, including nullability of columns.

with_skip_schema_id(skip_schema_id: bool) Self[source]

Set True if the serialized message should only contain the data and not contain the magic byte + schema ID. False by default.

The first 5 bytes of the Avro message are the magic byte and 4-byte schema ID. https://docs.confluent.io/platform/current/schema-registry/fundamentals/serdes-develop/index.html#wire-format

class feldera.formats.CSVFormat[source]

Bases: Format

Used to represent data ingested and output from Feldera in the CSV format.

https://www.feldera.com/docs/api/csv

class feldera.formats.Format[source]

Bases: ABC

Base class for all data formats.

class feldera.formats.JSONFormat(config: dict | None = None)[source]

Bases: Format

Used to represent data ingested and output from Feldera in the JSON format.

with_array(array: bool) Self[source]

Set to True if updates in this stream are packaged into JSON arrays.

Example: [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]

with_update_format(update_format: JSONUpdateFormat) Self[source]

Specifies the format of the data change events in the JSON data stream.
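
For example, a JSON format for insert/delete change events, with one update per element rather than JSON arrays:

from feldera.formats import JSONFormat, JSONUpdateFormat

fmt = JSONFormat().with_update_format(JSONUpdateFormat.InsertDelete).with_array(False)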

class feldera.formats.JSONUpdateFormat(value)[source]

Bases: Enum

Supported JSON data change event formats.

Each element in a JSON-formatted input stream specifies an update to one or more records in an input table. We support several different ways to represent such updates.

https://www.feldera.com/docs/api/json/#the-insertdelete-format

InsertDelete = 1

Insert/delete format.

Each element in the input stream consists of an “insert” or “delete” command and a record to be inserted to or deleted from the input table.

Example: {"insert": {"id": 1, "name": "Alice"}, "delete": {"id": 2, "name": "Bob"}}

Here, id and name are the columns in the table.

Raw = 2

Raw input format.

This format is suitable for insert-only streams (no deletions). Each element in the input stream contains a record, without any additional envelope, that gets inserted into the input table.

Example: {"id": 1, "name": "Alice"}

Here, id and name are the columns in the table.

feldera.output_handler module

class feldera.output_handler.OutputHandler(client: FelderaClient, pipeline_name: str, view_name: str, queue: Queue | None)[source]

Bases: object

start()[source]

Start the output handler in a separate thread.

to_pandas(clear_buffer: bool = True)[source]

Return the output of the pipeline as a pandas DataFrame.

Parameters:

clear_buffer – Whether to clear the buffer after getting the output.

feldera.resources module

class feldera.resources.Resources(config: Mapping[str, Any] | None = None, cpu_cores_max: int | None = None, cpu_cores_min: int | None = None, memory_mb_max: int | None = None, memory_mb_min: int | None = None, storage_class: str | None = None, storage_mb_max: int | None = None)[source]

Bases: object

Class used to specify the resource configuration for a pipeline.

Parameters:
  • config – A dictionary containing all the configuration values.

  • cpu_cores_max – The maximum number of CPU cores to reserve for an instance of the pipeline.

  • cpu_cores_min – The minimum number of CPU cores to reserve for an instance of the pipeline.

  • memory_mb_max – The maximum memory in Megabytes to reserve for an instance of the pipeline.

  • memory_mb_min – The minimum memory in Megabytes to reserve for an instance of the pipeline.

  • storage_class – The storage class to use for the pipeline. The class determines storage performance such as IOPS and throughput.

  • storage_mb_max – The storage in Megabytes to reserve for an instance of the pipeline.
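
A sketch of capping pipeline resources and handing them to a SQLContext (the limits are illustrative; client is a FelderaClient as in the SQLContext example above):

from feldera.resources import Resources

resources = Resources(cpu_cores_max=4, memory_mb_max=8192, storage_mb_max=10240)
sql = SQLContext("my_pipeline", client, storage=True, resources=resources)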

feldera.sql_schema module

class feldera.sql_schema.SQLSchema(schema: Mapping[str, str])[source]

Bases: object

build_ddl(table_name: str) str[source]
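
A sketch; the exact shape of the generated DDL string is an assumption inferred from the schema mapping:

from feldera.sql_schema import SQLSchema

schema = SQLSchema({"id": "INT", "name": "VARCHAR"})
ddl = schema.build_ddl("users")  # presumably along the lines of "CREATE TABLE users (id INT, name VARCHAR)"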

Subpackages