Sim IDX can stream data from your listener’s events in real time to an external data sink. This allows you to integrate your indexed blockchain data directly into your own infrastructure for real-time analytics, monitoring, or complex event processing. This feature currently supports Apache Kafka as a sink type, enabling you to send your data to managed services like Confluent Cloud and Redpanda Cloud, or to your own self-hosted Kafka cluster.
Configuring a sink sends data to your Kafka topic instead of the default Postgres database.

sim.toml Configuration

To enable streaming to an external sink, you must define it within your sim.toml file under an [[env.<name>.sinks]] array of tables. You can define multiple sinks for different environments. For example, you might have one sink for dev and another for prod. The configuration maps specific events from your listener to specific topics in your Kafka cluster. Here is an example configuration for a development environment named dev:
sim.toml
[app]
name = "my-uniswap-indexer"

# Define a development environment named "dev"
[[env.dev.sinks]]
type = "kafka"
name = "dev_kafka_cluster"
brokers = ["pkc-00000.us-central1.gcp.confluent.cloud:9092"]
username = "YOUR_KAFKA_API_KEY"
password = "YOUR_KAFKA_API_SECRET"
sasl_mechanism = "PLAIN"
event_to_topic = { "PoolCreated" = "uniswap_pool_created_dev" }

Sink Parameters

Parameter | Required | Description
type | Yes | The type of sink. Currently, the only supported value is "kafka".
name | Yes | A unique name for this sink configuration within the environment.
brokers | Yes | An array of broker addresses for your Kafka cluster.
username | Yes | The username for authenticating with your Kafka cluster.
password | Yes | The password for authenticating with your Kafka cluster.
sasl_mechanism | Yes | The SASL mechanism to use. Supported values are "PLAIN", "SCRAM_SHA_256", and "SCRAM_SHA_512".
event_to_topic | Yes | A map where the key is the name of the event emitted by your listener and the value is the destination Kafka topic name. For example, "PoolCreated" might map to "uniswap_pool_created_dev".
compression_type | No | The compression codec to use for compressing messages. Supported values are "GZIP", "SNAPPY", "LZ4", and "ZSTD".
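For instance, a sink that compresses messages and routes two listener events to separate topics might include the fields below. This is a sketch: the second event name and both topic names are illustrative, not values required by Sim IDX.
sim.toml
# Within a [[env.<name>.sinks]] table:
compression_type = "ZSTD" # optional; omit to send uncompressed messages
event_to_topic = { "PoolCreated" = "uniswap_pool_created_dev", "OwnerChanged" = "uniswap_owner_changed_dev" }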

Store Credentials in .env

To avoid committing your username and password, you can reference environment variables in your sim.toml file. Place credentials in a .env file and use ${VAR_NAME} placeholders in sim.toml.
.env
KAFKA_PROD_PASSWORD=MyPassword
KAFKA_PROD_USERNAME=MyUsername
sim.toml
[[env.prod.sinks]]
type = "kafka"
name = "prod"
password = "${KAFKA_PROD_PASSWORD}" # Read from .env file
username = "${KAFKA_PROD_USERNAME}"
During deployment, Sim IDX loads these variables from your environment and substitutes them into sim.toml.
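The snippet above shows only the credential fields; the remaining sink settings are supplied as usual. A fuller sketch of the prod sink, with an illustrative broker address and topic name, might look like:
sim.toml
[[env.prod.sinks]]
type = "kafka"
name = "prod"
brokers = ["pkc-00000.us-central1.gcp.confluent.cloud:9092"] # Illustrative broker address
username = "${KAFKA_PROD_USERNAME}" # Read from .env file
password = "${KAFKA_PROD_PASSWORD}" # Read from .env file
sasl_mechanism = "PLAIN"
event_to_topic = { "PoolCreated" = "uniswap_pool_created_prod" } # Illustrative topic name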

Deploy with a Sink Environment

After you have configured your sink in sim.toml, you must deploy your application explicitly with the sim deploy CLI command and specify which environment to use via the --environment flag. This is different from the automatic deployment that occurs when you push changes to your repository. The name you provide to the --environment flag must exactly match the environment name defined in your sim.toml file. For example, if you defined [[env.dev.sinks]], you would use dev as the environment name.
sim deploy --environment dev
The --environment flag is required to activate your sink configuration. If you run sim deploy without the --environment flag, your sink configuration will be ignored. The deployment will succeed, but it will only write data to the default Neon Postgres database, and no data will be sent to your Kafka topics.

Deployment Output and Tracking

When you successfully deploy with a sink environment, the command will output deployment details including a unique Deployment ID. Here’s an example of what the output looks like:
sim deploy --environment dev
2025-09-18T16:13:26.816281Z  INFO build_probe: deploy::listeners: Building all contracts
2025-09-18T16:13:35.322414Z  INFO build_probe: deploy::listeners: Calculating triggers
2025-09-18T16:13:35.624276Z  INFO build_probe:create_probes_from_listeners: deploy::listeners: Building listeners
2025-09-18T16:13:35.707338Z  INFO deploy::api: Building APIs...
2025-09-18T16:13:54.344313Z  INFO deploy: Submitting listener
2025-09-18T16:14:14.317367Z  INFO sim::deploy: Deployed:
Deployment {
    deployment_id: "ff27355a-a1a9-4b43-9a01-75a6060f3dfa",
    api_url: Some(
        "https://a4d175722c-457c96ab-1.idx.sim.io",
    ),
    connection_string: Some(
        "postgres://firstly-as-E6uSqoeXzc:ddiH%29w%28tUUKCDilY@ep-yellow-haze-a4wva18x.us-east-1.aws.neon.tech/agreeable-care-HxcJ3J2yLJ?sslmode=require&options=-c%20search_path%3D%22who-here-UBOQMdSdNP%22%2Cpublic",
    ),
    // ... additional deployment details
}
Keep track of your deployment ID, API URL, and connection string from the output above. You can also find this information in the developer portal.

Delete a Sink Environment Deployment

To stop data streaming to your Kafka topics, you need to delete the specific deployment that was created with the sink environment. This is the only way to stop writes to the sink from the Sim side. Use the sim deploy delete command with the deployment ID:
sim deploy delete --deployment-id <deployment-id>
Your deployment ID is printed in the output of the sim deploy command. If necessary, you can also retrieve it from the developer portal, where it’s displayed in the Current Deployment section. The deployment is deleted almost immediately upon successful execution, which stops all existing writes to your Kafka topics.
Additional Kafka Cleanup: After deleting the deployment, you may also need to clean up resources on the Kafka side (topics, ACLs, etc.) depending on your setup and requirements. This cleanup should be done directly in your Kafka provider’s console or CLI.
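For example, if your topics live in Redpanda and you manage them with the rpk CLI, removing the dev topic from the earlier example might look like the command below. This assumes rpk is already authenticated against your cluster, and the topic name is illustrative.
rpk topic delete uniswap_pool_created_dev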

Redeploy Without Sinks

After deleting a deployment that was using sinks, you can redeploy without sinks by using the standard Git deployment method. Push changes to your repository, which will create a deployment that writes only to the default Postgres database. For more information on standard deployments, see the Deployment guide.
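For example, assuming your repository deploys from the main branch, a standard redeploy is triggered by an ordinary push; an empty commit is one way to trigger it without changing any files:
git commit --allow-empty -m "Redeploy without sinks"
git push origin main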

Set up Redpanda Cloud

1

Create a Cluster

First, create a new serverless cluster from the Clusters dashboard. Click Create cluster, select the Serverless type, provide a Cluster name, choose your cloud provider and region, and then click Create.
[Image: Redpanda Cloud Create Serverless cluster page]
2

Get the Bootstrap Server URL

Once the cluster is running, you can get the connection URL. On the cluster’s Overview page, select the Kafka API tab and copy the Bootstrap server URL. This value is used for the brokers field in your sim.toml file.
[Image: Redpanda Cloud Kafka API tab showing Bootstrap server URL]
3

Create a Kafka Topic

Next, create the topic where your listener events will be sent. From the cluster’s navigation menu, select Topics and click Create topic. Enter a Topic Name, which is the name you will use in your sim.toml file. For example, you might name it my_topic.
[Image: Redpanda Cloud Create topic dialog]
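If you prefer the command line over the console, the same topic can be created with Redpanda’s rpk CLI, assuming rpk is already authenticated against your cluster:
rpk topic create my_topic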
4

Create a User and Set Permissions

Finally, create a user and grant it permissions to access your topic. Navigate to the Security section and create a new user, providing a Username and saving the generated Password. Then, go to the ACLs tab to create an ACL for that user. Configure it to Allow all operations for your Prefixed topic name. The Username and Password you created correspond to the username and password fields in sim.toml.
[Image: Redpanda Cloud ACL creation dialog showing user and topic permissions]
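Putting these values together, a sketch of the resulting sink configuration might look like the following. The broker address and credentials are placeholders, and the SASL mechanism shown assumes the user was created with SCRAM-SHA-256; match it to whatever mechanism you selected.
sim.toml
[[env.dev.sinks]]
type = "kafka"
name = "redpanda_dev"
# Bootstrap server URL from the Kafka API tab (placeholder value)
brokers = ["xxxxxxxx.any.us-east-1.mpx.prd.cloud.redpanda.com:9092"]
# User and password created in the Security section (placeholders)
username = "my_user"
password = "MY_REDPANDA_PASSWORD"
# Assumes the user was created with SCRAM-SHA-256
sasl_mechanism = "SCRAM_SHA_256"
# Topic created in step 3
event_to_topic = { "PoolCreated" = "my_topic" }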

Set up Confluent Cloud

While the sim.toml configuration is standardized, setting up credentials and permissions can vary between different managed Kafka providers.
When configuring your sim.toml file for a Confluent Cloud cluster, you must use the following settings:
  • sasl_mechanism: Set this to "PLAIN".
  • username: Use the API Key generated from the Confluent Cloud dashboard.
  • password: Use the API Secret associated with that key.
1

Create an Environment

First, create a new cloud environment to house your cluster. In the Confluent Cloud dashboard, navigate to Environments and click Add cloud environment. Provide an Environment name such as new_cluster, select the Essentials Stream Governance package, and click Create.
2

Create a Kafka Cluster

Next, launch a new Kafka cluster within your environment. On the “Create cluster” page, select the Standard cluster type, give it a Cluster name such as cluster_0, choose a cloud provider and region, then click Launch cluster.
3

Create a Kafka Topic

Now you will need to create a topic where your listener events will be sent. From your cluster’s navigation menu, select Topics, then click Create topic. Enter a Topic name, which is the name you will use in your sim.toml file, and click Create with defaults. For example, you might name it topic_0.
[Image: Confluent Cloud New topic creation dialog]
4

Create an API Key and Set Permissions

To allow Sim IDX to connect, generate an API key and grant it the necessary permissions. From the cluster’s navigation menu, select API Keys, then click Create key. Choose to create a new Service account and give it a descriptive name like my_account. You must then add an Access Control List (ACL) to grant WRITE and CREATE permissions for your PREFIXED topic (topic_0).
[Image: Confluent Cloud ACL configuration screen]
5

Save Your Credentials

After creation, Confluent will display your API Key and Secret. Copy and save these credentials securely, as the Secret will not be shown again. The Key corresponds to the username field and the Secret corresponds to the password field in your sim.toml file.
[Image: Confluent Cloud screen showing the generated API Key and Secret]
6

Get the Bootstrap Server Address

Finally, retrieve the broker address for your cluster. Navigate to Cluster settings and, under the Endpoints section, copy the Bootstrap server address. This value is used for the brokers field in your sim.toml file.
[Image: Confluent Cloud Cluster settings showing Bootstrap server address]
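Putting these values together, a sketch of the resulting sink configuration might look like the following. The broker address is a placeholder; substitute your own Bootstrap server address, API Key, and API Secret.
sim.toml
[[env.prod.sinks]]
type = "kafka"
name = "confluent_prod"
# Bootstrap server address from Cluster settings (placeholder value)
brokers = ["pkc-00000.us-central1.gcp.confluent.cloud:9092"]
# API Key and Secret generated in steps 4 and 5
username = "YOUR_CONFLUENT_API_KEY"
password = "YOUR_CONFLUENT_API_SECRET"
sasl_mechanism = "PLAIN"
# Topic created in step 3
event_to_topic = { "PoolCreated" = "topic_0" }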