
Get started

You can get started for free using the Conduktor Community version.

info

Pre-requisite: Docker Compose

Get started in a few minutes with the latest Conduktor Console Docker image.

note

As the Conduktor Playground (also known as Conduktor Cloud) no longer exists, we recommend using the embedded Kafka option to get started quickly.

Simple Setup

When launching Conduktor Console for the first time, an onboarding guide will walk you through configuring your environment.

Step 1: Start the Console

Let's start by running one of the commands below to launch Conduktor. Choose the option that best fits your setup: use an embedded Kafka cluster, or connect to your own Kafka cluster.

Option 1: Start with an embedded Kafka cluster

Start Conduktor Console with 2 clusters pre-configured:

  • a Redpanda Kafka cluster and Schema Registry
  • a Conduktor Gateway connected to the Redpanda cluster
curl -L https://releases.conduktor.io/quick-start -o docker-compose.yml && docker compose up -d --wait && echo "Conduktor started on http://localhost:8080"
info

If you have an M4 Mac, the above command will fail because of a JDK/Docker interoperability bug. Use the following workaround until a JDK fix is released in April:

curl -L https://releases.conduktor.io/quick-start-m4 -o docker-compose.yml && docker compose up -d --wait && echo "Conduktor started on http://localhost:8080"

Option 2: Use your existing Kafka cluster

Start Conduktor Console without any cluster pre-configured.

curl -L https://releases.conduktor.io/console -o docker-compose.yml && docker compose up -d --wait && echo "Conduktor started on http://localhost:8080"

Step 2: Complete the onboarding wizard

After a few seconds, the onboarding wizard will be available at http://localhost:8080. Here, you can set the admin credentials you'll use to log in.

Step 3: Connect to your existing Kafka cluster

Conduktor Console is compatible with all major Kafka providers, such as Confluent, Aiven, MSK and Redpanda. To see the full value of Conduktor, we recommend configuring it against your own Kafka data.

To do so, after completing the onboarding wizard, go to the Clusters page and click Add cluster.

note

Use our interactive guide to learn how to connect your Kafka cluster, Schema Registry and Kafka Connect!

From within the cluster configuration screen, fill in the:

  • Bootstrap servers
  • Authentication details
  • Additional properties
note

Configuring an SSL/TLS cluster? Use the Conduktor Certificates Store.
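Clusters can also be declared at startup in the configuration file instead of through the UI. The fragment below is a sketch; the field names (`id`, `name`, `bootstrapServers`, `properties`) are assumptions to verify against the configuration properties reference:

```yaml
# console-config.yaml fragment (sketch): declares a cluster at startup.
# Field names are assumptions -- check the configuration reference.
clusters:
  - id: my-kafka            # internal identifier
    name: My Kafka Cluster  # display name in the Console
    bootstrapServers: "broker-1:9092,broker-2:9092"
    properties: |
      security.protocol=SASL_SSL
      sasl.mechanism=PLAIN
      sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="user" password="password";
```

Declaring clusters this way keeps environments reproducible, since the same file can be deployed alongside the container.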

How to connect to Kafka running on localhost:9092?

Add the below to your Kafka server.properties file:

listeners=EXTERNAL://0.0.0.0:19092,PLAINTEXT://0.0.0.0:9092
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT
advertised.listeners=PLAINTEXT://127.0.0.1:9092,EXTERNAL://host.docker.internal:19092

If running Kafka in KRaft mode, add the below to your Kafka config/kraft/server.properties file:

listeners=EXTERNAL://0.0.0.0:19092,PLAINTEXT://0.0.0.0:9092,CONTROLLER://:9093
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT,CONTROLLER:PLAINTEXT
advertised.listeners=PLAINTEXT://127.0.0.1:9092,EXTERNAL://host.docker.internal:19092
inter.broker.listener.name=PLAINTEXT

From within the Conduktor interface, connect using the bootstrap server: host.docker.internal:19092

Step 4: Add additional users

If you have deployed Conduktor on a central server, you can add new users to collaborate with you inside the Console.

For that, go to the Users screen and select Create Members to set the credentials of a new local user.

info

You can configure your SSO using the free Console!
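As a sketch, an OAuth2 SSO block in the configuration file might look like the following. The field names and structure are assumptions to verify against the SSO configuration reference; the provider name, issuer URL, and credentials are placeholders:

```yaml
# console-config.yaml fragment (sketch): OAuth2/OIDC SSO.
# Field names are assumptions -- check the SSO configuration reference.
sso:
  oauth2:
    - name: "auth0"                  # label shown on the login page
      client-id: "<client_id>"
      client-secret: "<client_secret>"
      openid:
        issuer: "https://<tenant>.auth0.com"
```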

Advanced Setup

warning

For production deployments, please make sure you meet the production requirements.

Step 1: Configure the Console

To configure the Conduktor Console during deployment, you have two options:

important

If both methods are used, environment variables will take precedence over the configuration file.

Here’s what you can configure:

  • External database (required)
  • User authentication (Local or SSO/LDAP)
  • Kafka clusters configurations
  • Conduktor enterprise license key
note

Some objects, such as groups or Self-service resources, can't be created before the Console has started. To automate their creation, you can use our API, CLI or Terraform provider.
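For example, a group could be created with the Conduktor CLI once the Console is up. The resource shape below is a sketch; the `apiVersion`, field names and member format are assumptions to verify against the CLI and API reference:

```yaml
# group.yaml (sketch): a Group resource applied after startup with
#   conduktor apply -f group.yaml
# Field names are assumptions -- check the CLI/API reference.
apiVersion: v2
kind: Group
metadata:
  name: developers
spec:
  displayName: "Developers"
  description: "Application developers"
  members:
    - name@your_company.io
```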

Option 1: Using a configuration file

Create a configuration file

The below example shows how to configure Conduktor with the following configuration:

  • The external database configuration
  • The local administrator credentials
  • The connection to the Monitoring container called conduktor-console-cortex

You can add more configuration snippets, such as SSO or a license key. See the full list of supported properties here.

console-config.yaml
database:         # External database configuration
  hosts:
    - host: 'postgresql'
      port: 5432
  name: 'conduktor-console'
  username: 'conduktor'
  password: 'change_me'
  connection_timeout: 30 # in seconds

admin: # Local admin credentials
  email: "<name@your_company.io>"
  password: "adminP4ss!"

monitoring: # Connection to the Cortex Monitoring container
  cortex-url: http://conduktor-monitoring:9009/
  alert-manager-url: http://conduktor-monitoring:9010/
  callback-url: http://conduktor-console:8080/monitoring/api/
  notifications-callback-url: http://localhost:8080

# license: "" # Enterprise license key

Bind the file to the Console container

The below docker-compose shows how to bind your console-config.yaml file.

Note that the environment variable CDK_IN_CONF_FILE indicates that a configuration file is being used and where to find it. The file is also mounted into the container.

docker-compose.yaml
services:
  postgresql:
    image: postgres:14
    hostname: postgresql
    environment:
      POSTGRES_DB: "conduktor-console"
      POSTGRES_USER: "conduktor"
      POSTGRES_PASSWORD: "change_me"

  conduktor-console:
    image: conduktor/conduktor-console:1.30.0
    depends_on:
      - postgresql
    ports:
      - "8080:8080"
    volumes:
      - type: bind
        source: "./console-config.yaml"
        target: /opt/conduktor/console-config.yaml
        read_only: true
    environment:
      CDK_IN_CONF_FILE: /opt/conduktor/console-config.yaml

  conduktor-monitoring:
    image: conduktor/conduktor-console-cortex:1.30.0
    environment:
      CDK_CONSOLE-URL: "http://conduktor-console:8080" # Connection to the Console container

Option 2: Using environment variables

The same configuration can be achieved using environment variables.

You can use our YAML to ENV converter to easily convert the configuration file into environment variables.
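The conversion follows a mechanical naming rule, visible in the variables below: uppercase the dotted key path, turn the YAML nesting dots into underscores, and prefix the result with CDK_ (dashes inside a key are kept as-is). A small shell sketch of that rule:

```shell
# Sketch of the YAML-key -> environment-variable naming rule:
# uppercase, dots become underscores, CDK_ prefix, dashes untouched.
to_env() {
  printf 'CDK_%s\n' "$(printf '%s' "$1" | tr 'a-z.' 'A-Z_')"
}

to_env admin.email              # CDK_ADMIN_EMAIL
to_env monitoring.cortex-url    # CDK_MONITORING_CORTEX-URL
```

This is only an illustration of the convention; use the converter for the authoritative mapping.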

docker-compose.yaml
services:
  postgresql:
    image: postgres:14
    hostname: postgresql
    environment:
      POSTGRES_DB: "conduktor-console"
      POSTGRES_USER: "conduktor"
      POSTGRES_PASSWORD: "change_me"

  conduktor-console:
    image: conduktor/conduktor-console:1.30.0
    depends_on:
      - postgresql
    ports:
      - "8080:8080"
    environment:
      # Enterprise license key
      # CDK_LICENSE: ""
      # External database configuration
      CDK_DATABASE_URL: "postgresql://conduktor:change_me@postgresql:5432/conduktor-console"
      # Local admin credentials
      CDK_ADMIN_EMAIL: "<name@your_company.io>"
      CDK_ADMIN_PASSWORD: "adminP4ss!"
      # Connection to the Cortex Monitoring container
      CDK_MONITORING_CORTEX-URL: http://conduktor-monitoring:9009/
      CDK_MONITORING_ALERT-MANAGER-URL: http://conduktor-monitoring:9010/
      CDK_MONITORING_CALLBACK-URL: http://conduktor-console:8080/monitoring/api/
      CDK_MONITORING_NOTIFICATIONS-CALLBACK-URL: http://localhost:8080

  conduktor-monitoring:
    image: conduktor/conduktor-console-cortex:1.30.0
    environment:
      # Connection to the Console container
      CDK_CONSOLE-URL: "http://conduktor-console:8080"

volumes:
  pg_data: {}
  conduktor_data: {}
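The CDK_DATABASE_URL above follows the standard PostgreSQL connection URI shape. As a sketch, assembling it from the same values used in the configuration-file option:

```shell
# Builds the connection URI used by CDK_DATABASE_URL from its parts.
DB_USER=conduktor
DB_PASSWORD=change_me
DB_HOST=postgresql
DB_PORT=5432
DB_NAME=conduktor-console

CDK_DATABASE_URL="postgresql://${DB_USER}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_NAME}"
echo "$CDK_DATABASE_URL"   # postgresql://conduktor:change_me@postgresql:5432/conduktor-console
```

Note that if the password contains characters such as @ or :, it would need to be URL-encoded in the URI.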

Step 2: Deploy the Console

The last step is to run the following command to start the containers. It will start:

  • An external PostgreSQL database
  • The Conduktor Console and Cortex containers
docker compose up

After a few minutes, Conduktor will be available at http://localhost:8080

You can use the admin email and password to log in.

If using SSO, you will see an option to log in via the relevant identity provider.

Step 3: Connect to your existing Kafka cluster

See connecting to your existing Kafka cluster

Step 4: Add additional users

See adding additional users


Conduktor Gateway is provided as a Docker image and Helm chart.

It should be deployed and managed in the way that best fits your organization and use case(s). This could be a single container but, more likely, multiple Gateway instances deployed and scaled to meet your needs, optionally behind a load balancer.

Use this quick start guide to help you get started.


Running the Gateway

info

For a fully self-contained quick-start, see the Docker Compose.

In its simplest form, Gateway can be run with a single docker run command, passing the credentials to connect to your existing Kafka cluster.

Start a local Kafka

Create a new directory (note: the Docker network name will be derived from the directory name):

mkdir gateway-quick-start && cd gateway-quick-start

Run the below command to start a single-node Kafka and ZooKeeper:

curl -L https://releases.conduktor.io/single-kafka -o docker-compose.yml && docker compose up -d 

Start Conduktor Gateway

Run the below command to start Conduktor Gateway and configure Docker networking between the two containers:

docker run \
  --network gateway-quick-start_default \
  -e KAFKA_BOOTSTRAP_SERVERS=kafka1:29092 \
  -e GATEWAY_ADVERTISED_HOST=localhost \
  -e GATEWAY_PORT_START=9099 \
  -p 9099:9099 \
  -d \
  conduktor/conduktor-gateway:3.5.1

By default, the Gateway uses port-based routing and listens on as many ports as there are Kafka brokers. In this case, we started a single-node Kafka cluster and opened 1 port.

At this stage you have:

  • Kafka running and its brokers available on localhost:9092
  • Gateway acting as a proxy to the backing Kafka cluster, accessible at localhost:9099

Connecting your clients

Your clients can now interact with Conduktor Gateway like any other Kafka cluster.

Example: creating a topic via Gateway using the Apache Kafka command line client:

bin/kafka-topics.sh --create --topic orders --bootstrap-server localhost:9099

Next Steps

This quick start shows the basics, demonstrating Conduktor Gateway acting as a network proxy for Kafka. However, the real value comes with configuring interceptors, which are pluggable components that augment Kafka by intercepting specific requests of the Kafka protocol and applying operations to them.

View demos that demonstrate how interceptors are used to satisfy specific use cases such as encryption, data quality and safeguarding your cluster with technical and business rules.

Connecting to secured Kafka

Your Kafka bootstrap server, along with its authentication method, should be configured using environment variables.

Conduktor Gateway connects to Kafka just like any other client.

Security properties are mapped to environment variables using the following scheme: uppercase the property name, replace dots with underscores, and prefix with KAFKA_. For example:

ssl.truststore.location

becomes:

KAFKA_SSL_TRUSTSTORE_LOCATION
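The mapping shown above is mechanical, so it can be sketched as a small shell function for any Kafka client property:

```shell
# Sketch of the Kafka-property -> Gateway environment-variable mapping:
# uppercase, dots become underscores, KAFKA_ prefix.
kafka_prop_to_env() {
  printf 'KAFKA_%s\n' "$(printf '%s' "$1" | tr 'a-z.' 'A-Z_')"
}

kafka_prop_to_env ssl.truststore.location   # KAFKA_SSL_TRUSTSTORE_LOCATION
kafka_prop_to_env sasl.jaas.config          # KAFKA_SASL_JAAS_CONFIG
```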

Confluent Cloud Example

Below is the simplest way to get started with Confluent Cloud.

info

By default, Gateway assumes you want the same security protocol between your clients and Gateway, as between your Gateway and Kafka.

However, this example uses DELEGATED_SASL_PLAINTEXT for the GATEWAY_SECURITY_PROTOCOL. For quick start purposes, this avoids needing to configure SSL certificates when connecting to Conduktor Gateway.

docker run \
  -e KAFKA_BOOTSTRAP_SERVERS=$CONFLUENT_CLOUD_KAFKA_BOOTSTRAP_SERVER \
  -e KAFKA_SASL_MECHANISM=PLAIN \
  -e KAFKA_SECURITY_PROTOCOL=SASL_SSL \
  -e KAFKA_SASL_JAAS_CONFIG="org.apache.kafka.common.security.plain.PlainLoginModule required username=\"$CONFLUENT_CLOUD_API_KEY\" password=\"$CONFLUENT_CLOUD_API_SECRET\";" \
  -e GATEWAY_SECURITY_PROTOCOL=DELEGATED_SASL_PLAINTEXT \
  -e GATEWAY_ADVERTISED_HOST=localhost \
  -e GATEWAY_CLUSTER_ID=test \
  -p 6969-6999:6969-6999 \
  -d \
  conduktor/conduktor-gateway:3.5.1

Note that if you wish to maintain an SSL/TLS connection between clients and Conduktor Gateway, see Client to Gateway Configuration.

By default, the Gateway uses port-based routing and listens on as many ports as there are Kafka brokers. In this case, we open 30 ports to account for Confluent Cloud clusters with many brokers; you may need to open more depending on the size of your cluster.

If you need support with your configuration, please contact us for support.

Docker Compose

The below example demonstrates a self-contained environment. To inspect the services it defines:

cat docker-compose.yaml

1. Start the Docker environment

Start all your Docker services, wait for them to be up and ready, and run them in the background:

  • --wait: Wait for services to be running|healthy. Implies detached mode.
  • --detach: Detached mode: Run containers in the background
docker compose up --detach --wait

2. Create a topic via Conduktor Gateway

kafka-topics \
  --bootstrap-server localhost:6969 \
  --replication-factor 1 \
  --partitions 1 \
  --create --if-not-exists \
  --topic orders

3. Produce a message to your topic

echo '{"orderId":"12345","customerId":"67890","price":10000}' | \
  kafka-console-producer \
  --bootstrap-server localhost:6969 \
  --topic orders

4. Consume a message from your topic

kafka-console-consumer \
  --bootstrap-server localhost:6969 \
  --topic orders \
  --from-beginning \
  --max-messages 1 \
  --timeout-ms 10000 | jq

5. Next Steps: Configure an interceptor

This quick start shows the basics, demonstrating Conduktor Gateway can be interacted with like any other Kafka cluster.

However, the real value comes with configuring interceptors, which are pluggable components that augment Kafka by intercepting specific requests of the Kafka protocol and applying operations to them.

View demos that demonstrate how interceptors are used to satisfy specific use cases such as encryption, data quality and safeguarding your cluster with technical and business rules.