
avn service flink beta

Here you’ll find the full list of commands for avn service flink.

Warning

As with many beta products, the Aiven for Apache Flink® experience, APIs, and CLI calls are currently being redesigned; you might get error messages when using the commands documented here.

We will be working to update all the examples in the documentation.

Manage an Apache Flink® table

avn service flink table create

Creates a new Aiven for Apache Flink® table.

  • service_name: The name of the service

  • table_properties: The table properties definition, as a JSON string or a path (preceded by ‘@’) to a JSON configuration file

The table_properties parameter should contain the following common properties in JSON format

  • name: Logical table name

  • integration_id: The ID of the integration used to locate the source/sink table or topic; the integration ID can be found with the integration-list command

  • schema_sql: The Flink table SQL schema definition

Additionally, include one property identifying the type of Flink table connector, chosen from the supported ones listed below.

For the Aiven for Apache Kafka® insert-only mode, add a JSON field named kafka containing a JSON object with the following fields:

  • scan_startup_mode: The Apache Kafka consumer starting offset; possible values are earliest-offset (start from the beginning of the topic) and latest-offset (start from the last message)

  • topic: The name of the source or target Aiven for Apache Kafka topic

  • value_fields_include: Defines whether the message key fields are included in the value; possible values are ALL (include the key fields in the value) and EXCEPT_KEY (remove them)

  • key_format: Defines the format used to convert the message key; possible values are json or avro; if the key is not used, key_format can be omitted

  • value_format: Defines the format used to convert the message value; possible values are json or avro

For the Aiven for Apache Kafka® upsert mode, add a JSON field named upsert_kafka containing a JSON object with the following fields:

  • topic: The name of the source or target Aiven for Apache Kafka topic

  • value_fields_include: Defines whether the message key fields are included in the value; possible values are ALL (include the key fields in the value) and EXCEPT_KEY (remove them)

  • key_format: Defines the format used to convert the message key; possible values are json or avro; if the key is not used, key_format can be omitted

  • value_format: Defines the format used to convert the message value; possible values are json or avro

For the Aiven for PostgreSQL® JDBC query mode, add a JSON field named jdbc with the following fields included in a JSON object:

  • table_name: The name of the Aiven for PostgreSQL® database table

For the Aiven for OpenSearch® index integration, add a JSON field named opensearch with the following fields included in a JSON object:

  • index: The name of the Aiven for OpenSearch® index to use
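Instead of passing table_properties as an inline JSON string, the definition can be stored in a file and referenced with the ‘@’ prefix. A minimal sketch, reusing the Apache Kafka® insert mode fields described above; the file name table.json is arbitrary:

```shell
# Write the table definition to a file (the name "table.json" is arbitrary).
cat > table.json <<'EOF'
{
    "name": "KAlert",
    "integration_id": "ab8dd446-c46e-4979-b6c0-1aad932440c9",
    "kafka": {
        "scan_startup_mode": "earliest-offset",
        "topic": "alert",
        "value_fields_include": "ALL",
        "value_format": "json",
        "key_format": "json"
    },
    "schema_sql": "cpu FLOAT, node INT, cpu_percent INT, occurred_at TIMESTAMP_LTZ(3)"
}
EOF

# Validate the file locally before submitting it:
python3 -m json.tool table.json > /dev/null && echo "table.json is valid JSON"

# Reference the file with the '@' prefix instead of an inline JSON string:
#   avn service flink table create flink-devportal-demo @table.json
```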

Example: Create an Apache Flink® table named KAlert on top of an Aiven for Apache Kafka® topic in insert mode with:

  • alert as source Apache Kafka topic

  • kafka as connector type

  • json as value and key data format

  • earliest-offset as starting offset

  • cpu FLOAT, node INT, cpu_percent INT, occurred_at TIMESTAMP_LTZ(3) as SQL schema

  • ab8dd446-c46e-4979-b6c0-1aad932440c9 as integration ID

  • flink-devportal-demo as service name

avn service flink table create flink-devportal-demo \
  '{
      "name": "KAlert",
      "integration_id": "ab8dd446-c46e-4979-b6c0-1aad932440c9",
      "kafka": {
          "scan_startup_mode": "earliest-offset",
          "topic": "alert",
          "value_fields_include": "ALL",
          "value_format": "json",
          "key_format": "json"
      },
      "schema_sql": "cpu FLOAT, node INT, cpu_percent INT, occurred_at TIMESTAMP_LTZ(3)"
  }'

Example: Create an Apache Flink® table named KAlert on top of an Aiven for Apache Kafka® topic in upsert mode with:

  • alert as source Apache Kafka topic

  • upsert-kafka as connector type

  • json as value and key data format

  • cpu FLOAT, node INT PRIMARY KEY, cpu_percent INT, occurred_at TIMESTAMP_LTZ(3) as SQL schema

  • ab8dd446-c46e-4979-b6c0-1aad932440c9 as integration ID

  • flink-devportal-demo as service name

avn service flink table create flink-devportal-demo \
  '{
      "name": "KAlert",
      "integration_id": "ab8dd446-c46e-4979-b6c0-1aad932440c9",
      "upsert_kafka": {
          "key_format": "json",
          "topic": "alert",
          "value_fields_include": "ALL",
          "value_format": "json"
      },
      "schema_sql": "cpu FLOAT, node INT PRIMARY KEY, cpu_percent INT, occurred_at TIMESTAMP_LTZ(3)"
  }'

Example: Create an Apache Flink® table named KAlert on top of an Aiven for PostgreSQL® table with:

  • alert as source PostgreSQL® table

  • jdbc as connector type

  • cpu FLOAT, node INT PRIMARY KEY, cpu_percent INT, occurred_at TIMESTAMP(3) as SQL schema

  • ab8dd446-c46e-4979-b6c0-1aad932440c9 as integration ID

  • flink-devportal-demo as service name

avn service flink table create flink-devportal-demo \
  '{
      "name": "KAlert",
      "integration_id": "ab8dd446-c46e-4979-b6c0-1aad932440c9",
      "jdbc": {
          "table_name": "alert"
      },
      "schema_sql": "cpu FLOAT, node INT PRIMARY KEY, cpu_percent INT, occurred_at TIMESTAMP(3)"
  }'

Example: Create an Apache Flink® table named KAlert on top of an Aiven for OpenSearch® index with:

  • alert as source OpenSearch® index

  • opensearch as connector type

  • cpu FLOAT, node INT PRIMARY KEY, cpu_percent INT, occurred_at TIMESTAMP(3) as SQL schema

  • ab8dd446-c46e-4979-b6c0-1aad932440c9 as integration ID

  • flink-devportal-demo as service name

avn service flink table create flink-devportal-demo \
  '{
      "name": "KAlert",
      "integration_id": "ab8dd446-c46e-4979-b6c0-1aad932440c9",
      "opensearch": {
          "index": "alert"
      },
      "schema_sql": "cpu FLOAT, node INT PRIMARY KEY, cpu_percent INT, occurred_at TIMESTAMP(3)"
  }'

avn service flink table delete

Deletes an existing Aiven for Apache Flink® table.

  • service_name: The name of the service

  • table_id: The ID of the table to delete

Example: Delete the Apache Flink® table with ID 8b8ac2fe-b6eb-46bc-b327-fb4b84d27276 belonging to the Aiven for Flink service flink-devportal-demo.

avn service flink table delete flink-devportal-demo 8b8ac2fe-b6eb-46bc-b327-fb4b84d27276

avn service flink table get

Retrieves the definition of an existing Aiven for Apache Flink® table.

  • service_name: The name of the service

  • table_id: The ID of the table to retrieve

Example: Retrieve the definition of the Apache Flink® table with ID 8b8ac2fe-b6eb-46bc-b327-fb4b84d27276 belonging to the Aiven for Flink service flink-devportal-demo.

avn service flink table get flink-devportal-demo 8b8ac2fe-b6eb-46bc-b327-fb4b84d27276

An example of avn service flink table get output:

INTEGRATION_ID                        TABLE_ID                              TABLE_NAME   SCHEMA_SQL             COLUMNS
====================================  ====================================  ===========  =====================  ===============================================================================================================
77741d89-71f1-4de6-897a-fd83bce0ee62  f7bbe17b-ab47-46fd-83cb-2f5d23656018  mytablename  "id INT,name string"   {"data_type": "INT", "name": "id", "nullable": true}, {"data_type": "STRING", "name": "name", "nullable": true}

Tip

Adding the --json flag retrieves the table information in a richer JSON format

[
    {
        "columns": [
            {
                "data_type": "INT",
                "name": "id",
                "nullable": true
            },
            {
                "data_type": "STRING",
                "name": "name",
                "nullable": true
            }
        ],
        "integration_id": "77741d89-71f1-4de6-897a-fd83bce0ee62",
        "jdbc": {
            "table_name": "mysourcetablename"
        },
        "schema_sql": "id INT,name string",
        "table_id": "f7bbe17b-ab47-46fd-83cb-2f5d23656018",
        "table_name": "mytablename"
    }
]

avn service flink table list

Lists all the Aiven for Apache Flink® tables in a selected service.

  • service_name: The name of the service

Example: List all the Apache Flink® tables available in the Aiven for Flink service flink-devportal-demo.

avn service flink table list flink-devportal-demo

An example of avn service flink table list output:

INTEGRATION_ID                        TABLE_ID                              TABLE_NAME   SCHEMA_SQL
====================================  ====================================  ===========  ======================
315fe8af-34d9-4d7e-8711-bc7b6841dc55  882ee0be-cb0b-4ccf-b4d1-89d2e4a34260  ttt5         "id INT,\nage int"
77741d89-71f1-4de6-897a-fd83bce0ee62  f7bbe17b-ab47-46fd-83cb-2f5d23656018  testname445  "id INT,\nname string"
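The SCHEMA_SQL column stores each table's column definitions as a single comma-separated string (with the line breaks shown as literal \n). A small Python helper, written for this page rather than part of the CLI, that splits such a string back into (name, type) pairs:

```python
def parse_schema_sql(schema_sql: str) -> list[tuple[str, str]]:
    """Split a Flink schema string such as "id INT,\\nname string"
    into (column_name, data_type) pairs."""
    columns = []
    for part in schema_sql.split(","):
        # Each part looks like "id INT"; split once on whitespace
        name, data_type = part.strip().split(None, 1)
        columns.append((name, data_type))
    return columns

print(parse_schema_sql("id INT,\nage int"))      # schema of table ttt5 above
print(parse_schema_sql("id INT,\nname string"))  # schema of table testname445 above
```

For anything beyond a quick inspection, the --json output's columns array (shown in the table get section) already provides this structure without string parsing.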

Manage an Apache Flink® job#

avn service flink job create#

Creates a new Aiven for Apache Flink® job.

Parameter     Information
============  ================================================================================================
service_name  The name of the service
job_name      Name of the Flink job
--table-ids   List of Flink table IDs to use as source/sink. Table IDs can be found with avn service flink table list
--statement   Flink job SQL statement

Example: Create an Apache Flink® job named JobExample with:

  • KCpuIn (with id cac53785-d1b5-4856-90c8-7cbcc3efb2b6) and KAlert (with id 54c2f4e6-a446-4d62-8dc9-2b81179c6f43) as source/sink tables

  • INSERT INTO KAlert SELECT * FROM KCpuIn WHERE cpu_percent > 70 as SQL statement

  • flink-devportal-demo as service name

avn service flink job create flink-devportal-demo JobExample                        \
  --table-ids cac53785-d1b5-4856-90c8-7cbcc3efb2b6 54c2f4e6-a446-4d62-8dc9-2b81179c6f43 \
  --statement "INSERT INTO KAlert SELECT * FROM KCpuIn WHERE cpu_percent > 70"

avn service flink job cancel#

Cancels an existing Aiven for Apache Flink® job.

Parameter     Information
============  ===========================
service_name  The name of the service
job_id        The ID of the job to cancel

Example: Cancel the Apache Flink® job with ID 8b8ac2fe-b6eb-46bc-b327-fb4b84d27276 belonging to the Aiven for Flink service flink-devportal-demo.

avn service flink job cancel flink-devportal-demo 8b8ac2fe-b6eb-46bc-b327-fb4b84d27276

avn service flink job get#

Retrieves the definition of an existing Aiven for Apache Flink® job.

Parameter     Information
============  =============================
service_name  The name of the service
job_id        The ID of the job to retrieve

Example: Retrieve the definition of the Apache Flink® job with ID 8b8ac2fe-b6eb-46bc-b327-fb4b84d27276 belonging to the Aiven for Flink service flink-devportal-demo.

avn service flink job get flink-devportal-demo 8b8ac2fe-b6eb-46bc-b327-fb4b84d27276

An example of avn service flink job get output:

JID                               NAME        STATE    START-TIME     END-TIME  DURATION  ISSTOPPABLE  MAXPARALLELISM
================================  ==========  =======  =============  ========  ========  ===========  ==============
b63c78c70033e00afa84de9029257e31  JobExample  RUNNING  1633336792083  -1        423503    false        96
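In this output START-TIME and END-TIME are Unix epoch timestamps in milliseconds (END-TIME is -1 while the job is still running), and DURATION also appears to be in milliseconds, matching the underlying Flink REST API. A short Python sketch converting the values shown above:

```python
from datetime import datetime, timedelta, timezone

start_ms = 1633336792083  # START-TIME from the output above (epoch milliseconds)
end_ms = -1               # END-TIME of -1 indicates the job is still running
duration_ms = 423503      # DURATION in milliseconds (roughly 7 minutes here)

# Add the millisecond offset to the epoch explicitly to avoid float rounding
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
start = epoch + timedelta(milliseconds=start_ms)

print(start.isoformat())                         # 2021-10-04T08:39:52.083000+00:00
print(str(timedelta(milliseconds=duration_ms)))  # 0:07:03.503000
if end_ms == -1:
    print("job still running")
```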

avn service flink job list#

Lists all the Aiven for Apache Flink® jobs in a selected service.

Parameter     Information
============  =======================
service_name  The name of the service

Example: List all the Apache Flink® jobs available in the Aiven for Flink service flink-devportal-demo.

avn service flink job list flink-devportal-demo

An example of avn service flink job list output:

ID                                STATUS
================================  =======
b63c78c70033e00afa84de9029257e31  RUNNING
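If you need the job IDs in a script, a small Python sketch (written for this page) can scrape the plain-text output above into (id, status) pairs by skipping the header and separator rows; for anything more robust, prefer the --json flag:

```python
# Sample output copied from the `avn service flink job list` example above
output = """\
ID                                STATUS
================================  =======
b63c78c70033e00afa84de9029257e31  RUNNING
"""

jobs = []
for line in output.splitlines()[2:]:  # skip the header and ==== separator rows
    job_id, status = line.split()
    jobs.append((job_id, status))

print(jobs)  # [('b63c78c70033e00afa84de9029257e31', 'RUNNING')]
```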


Copyright © 2022, Aiven Team | Last updated: January 2023