This is a data integration functionality that enables the end-user to connect to and ingest data from SFTP servers, and that gives the platform access to the destinations where data is written after it has been transformed or cleaned on the platform.
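As a rough illustration, the sketch below uses the paramiko library to pull a single file from an SFTP server. The host, credentials, and paths are placeholders, not anything the platform prescribes.

```python
# Minimal sketch: download one file from an SFTP server with paramiko.
# Host, credentials, and paths below are illustrative placeholders.
import paramiko

def fetch_from_sftp(host: str, user: str, password: str,
                    remote_path: str, local_path: str) -> None:
    """Download a single file from an SFTP server."""
    transport = paramiko.Transport((host, 22))
    try:
        transport.connect(username=user, password=password)
        sftp = paramiko.SFTPClient.from_transport(transport)
        sftp.get(remote_path, local_path)  # copy the remote file locally
        sftp.close()
    finally:
        transport.close()

fetch_from_sftp("sftp.example.com", "ingest_user", "s3cret",
                "/exports/orders.csv", "orders.csv")
```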
This is a data integration functionality that enables the end-user to connect to and ingest data from Amazon S3, and that gives the platform access to the destinations where data is written after it has been transformed or cleaned on the platform.
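For S3, a comparable sketch with boto3 might read a raw object, apply a trivial cleanup, and write the result to a destination bucket. The bucket and key names here are illustrative assumptions.

```python
# Minimal sketch: ingest an object from S3, clean it, write it back out.
# Bucket and key names are illustrative, not platform defaults.
import boto3

s3 = boto3.client("s3")

# Ingest: read the raw object into memory.
raw = s3.get_object(Bucket="source-bucket", Key="raw/orders.csv")
body = raw["Body"].read().decode("utf-8")

# Transform: trivial cleanup standing in for real pipeline logic.
cleaned = "\n".join(line.strip() for line in body.splitlines() if line.strip())

# Sink: write the result to the destination bucket.
s3.put_object(Bucket="dest-bucket", Key="clean/orders.csv",
              Body=cleaned.encode("utf-8"))
```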
Jobs is a distributed workflow orchestration engine that takes a designed transformation and orchestrates it on a provisioned cluster of machines, enabling the end-user to automate the processing of data pipelines without manual execution.
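The sketch below is a deliberately simplified, hypothetical stand-in for such a job run: a pipeline of steps executed on a timer rather than by hand. The step functions and interval are assumptions, not the Jobs API; a real engine would also distribute work across the provisioned cluster and handle retries.

```python
# Hypothetical sketch of an automated job run: steps fire on a schedule,
# with no manual triggering. Not the platform's actual Jobs API.
import time
from typing import Callable, List

def run_pipeline(steps: List[Callable[[], None]]) -> None:
    """Execute each pipeline step in order, as one job run would."""
    for step in steps:
        step()

def ingest() -> None:
    print("ingesting from source...")

def transform() -> None:
    print("transforming records...")

def sink() -> None:
    print("writing to destination...")

if __name__ == "__main__":
    # Re-run the pipeline every hour; a real engine would distribute
    # this across the cluster and recover from failures.
    while True:
        run_pipeline([ingest, transform, sink])
        time.sleep(3600)
```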
This feature allows end-users to design, test, and execute data integration, wrangling, transformation, and cleaning pipelines, and to sink data into destinations of their choice, all from an interactive canvas-based user interface without the need for extensive code implementation.
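A plain-code equivalent of such a canvas pipeline, sketched here with pandas (an assumption for illustration, not the platform's engine), follows the same shape: ingest, clean, transform, sink.

```python
# Sketch of the ingest -> clean -> transform -> sink flow a canvas
# pipeline expresses visually. File and column names are illustrative.
import pandas as pd

df = pd.read_csv("orders.csv")                    # ingest
df = df.dropna(subset=["customer_id"])            # clean: drop incomplete rows
df["total"] = df["quantity"] * df["unit_price"]   # transform: derive a column
df.to_csv("clean_orders.csv", index=False)        # sink to destination
```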
Cluster provisioner: This feature allows end-users to provision compute machines, enabling the deployment of data engineering workloads to a specific cluster of machines of their choice.
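A hypothetical sketch of what a provisioning request could look like is below; the ClusterSpec fields, machine type name, and provision function are illustrative assumptions, not the platform's actual API.

```python
# Hypothetical provisioning request: the class, fields, and machine type
# below are illustrative assumptions, not the platform's real interface.
from dataclasses import dataclass

@dataclass
class ClusterSpec:
    name: str
    worker_count: int
    machine_type: str

def provision(spec: ClusterSpec) -> None:
    """Stand-in for the call that requests compute for a workload."""
    print(f"Provisioning '{spec.name}': "
          f"{spec.worker_count} x {spec.machine_type}")

provision(ClusterSpec(name="etl-cluster", worker_count=4,
                      machine_type="standard-8cpu-32gb"))
```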