Notes from building an Apache Kafka environment with podman

These are my notes from building an Apache Kafka environment with podman, using the sample environment below.

https://github.com/hguerrero/amq-examples

Prerequisites


  • Clone the sample environment locally
$ git clone https://github.com/hguerrero/amq-examples.git  
Cloning into 'amq-examples'...
remote: Enumerating objects: 983, done.
remote: Counting objects: 100% (116/116), done.
remote: Compressing objects: 100% (92/92), done.
remote: Total 983 (delta 49), reused 71 (delta 21), pack-reused 867
Receiving objects: 100% (983/983), 2.86 MiB | 2.41 MiB/s, done.
Resolving deltas: 100% (387/387), done.
$ 


$ cd amq-examples/strimzi-all-in-one/ 
$ 
$ ls -la
合計 12
drwxr-xr-x. 1 demo_user demo_user   88  9月 29 11:36 .
drwxr-xr-x. 1 demo_user demo_user  672  9月 29 11:36 ..
-rw-r--r--. 1 demo_user demo_user  499  9月 29 11:36 README.md
-rw-r--r--. 1 demo_user demo_user 2021  9月 29 11:36 docker-compose.yaml
-rw-r--r--. 1 demo_user demo_user  601  9月 29 11:36 log4j.properties
$ 
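
The steps that follow rely on podman-compose. If it is not installed yet, one common way to get it is either the distribution package or PyPI (a hedged example; the dnf package applies to Fedora-family distributions, adjust to your environment):

$ sudo dnf install podman-compose
$ pip install --user podman-compose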


Starting Apache Kafka

  • Run podman-compose (this single command brings up Apache Kafka!)
$ podman-compose up
['podman', '--version', '']
using podman version: 4.2.0
** excluding:  set()
['podman', 'network', 'exists', 'strimzi-all-in-one_default']
podman create --name=zookeeper --label io.podman.compose.config-hash=123 --label io.podman.compose.project=strimzi-all-in-one --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=strimzi-all-in-one --label com.docker.compose.project.working_dir=/home/demo_user/opt/training/amq-examples/strimzi-all-in-one --label com.docker.compose.project.config_files=docker-compose.yaml --label com.docker.compose.container-number=1 --label com.docker.compose.service=zookeeper -e LOG_DIR=/tmp/logs --net strimzi-all-in-one_default --network-alias zookeeper -p 2181:2181 strimzi/kafka:0.18.0-kafka-2.5.0 sh -c bin/zookeeper-server-start.sh config/zookeeper.properties
a8da3abb34cc182a4e0f1095027a9633557775e7794b91ab5b6564684e242bdd
exit code: 0
['podman', 'network', 'exists', 'strimzi-all-in-one_default']
podman create --name=kafka --label io.podman.compose.config-hash=123 --label io.podman.compose.project=strimzi-all-in-one --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=strimzi-all-in-one --label com.docker.compose.project.working_dir=/home/demo_user/opt/training/amq-examples/strimzi-all-in-one --label com.docker.compose.project.config_files=docker-compose.yaml --label com.docker.compose.container-number=1 --label com.docker.compose.service=kafka -e LOG_DIR=/tmp/logs -e KAFKA_BROKER_ID=1 -e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT -e KAFKA_LISTENERS=PLAINTEXT://:29092,PLAINTEXT_HOST://:9092 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092 -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 --net strimzi-all-in-one_default --network-alias kafka -p 9092:9092 strimzi/kafka:0.18.0-kafka-2.5.0 sh -c bin/kafka-server-start.sh config/server.properties --override listener.security.protocol.map=${KAFKA_LISTENER_SECURITY_PROTOCOL_MAP} --override listeners=${KAFKA_LISTENERS} --override advertised.listeners=${KAFKA_ADVERTISED_LISTENERS} --override zookeeper.connect=${KAFKA_ZOOKEEPER_CONNECT}
f297c2f7d54e84da5f04ad5ea434e8a23379d263c5fee51b2894df9f4db59610
exit code: 0
['podman', 'network', 'exists', 'strimzi-all-in-one_default']
podman create --name=registry --label io.podman.compose.config-hash=123 --label io.podman.compose.project=strimzi-all-in-one --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=strimzi-all-in-one --label com.docker.compose.project.working_dir=/home/demo_user/opt/training/amq-examples/strimzi-all-in-one --label com.docker.compose.project.config_files=docker-compose.yaml --label com.docker.compose.container-number=1 --label com.docker.compose.service=registry -e QUARKUS_PROFILE=prod -e KAFKA_BOOTSTRAP_SERVERS=kafka:29092 -e APPLICATION_ID=registry_id -e APPLICATION_SERVER=localhost:9000 --net strimzi-all-in-one_default --network-alias registry -p 8081:8080 apicurio/apicurio-registry-mem:1.2.2.Final
39e7dfe472d5241b896a88378b371f0f38f8fe4330993ced612f9038fd279cb7
exit code: 0
['podman', 'network', 'exists', 'strimzi-all-in-one_default']
podman create --name=bridge --label io.podman.compose.config-hash=123 --label io.podman.compose.project=strimzi-all-in-one --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=strimzi-all-in-one --label com.docker.compose.project.working_dir=/home/demo_user/opt/training/amq-examples/strimzi-all-in-one --label com.docker.compose.project.config_files=docker-compose.yaml --label com.docker.compose.container-number=1 --label com.docker.compose.service=bridge -e KAFKA_BRIDGE_BOOTSTRAP_SERVERS=kafka:29092 -e KAFKA_BRIDGE_ID=bridge1 -e KAFKA_BRIDGE_HTTP_ENABLED=true -e KAFKA_BRIDGE_HTTP_HOST=0.0.0.0 -e KAFKA_BRIDGE_HTTP_PORT=8080 -v /home/demo_user/opt/training/amq-examples/strimzi-all-in-one/log4j.properties:/opt/strimzi/custom-config/log4j.properties --net strimzi-all-in-one_default --network-alias bridge -p 8082:8080 strimzi/kafka-bridge:0.16.0 sh -c /opt/strimzi/bin/docker/kafka_bridge_run.sh
1c54856ff14631532458e1ee9b12d1712a665e5fbfae9b79da152b2119e89122
exit code: 0
podman start -a zookeeper
[2022-09-29 02:36:56,576] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2022-09-29 02:36:56,577] WARN config/zookeeper.properties is relative. Prepend ./ to indicate that you're sure! (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2022-09-29 02:36:56,580] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2022-09-29 02:36:56,580] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2022-09-29 02:36:56,582] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
[2022-09-29 02:36:56,582] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
[2022-09-29 02:36:56,582] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
[2022-09-29 02:36:56,582] WARN Either no config or no quorum defined in config, running  in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
[2022-09-29 02:36:56,583] INFO Log4j found with jmx enabled. (org.apache.zookeeper.jmx.ManagedUtil)
[2022-09-29 02:36:56,593] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2022-09-29 02:36:56,593] WARN config/zookeeper.properties is relative. Prepend ./ to indicate that you're sure! (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2022-09-29 02:36:56,593] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2022-09-29 02:36:56,593] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2022-09-29 02:36:56,593] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
[2022-09-29 02:36:56,595] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
[2022-09-29 02:36:56,604] INFO Server environment:zookeeper.version=3.5.7-f0fdd52973d373ffd9c86b81d99842dc2c7f660e, built on 02/10/2020 11:30 GMT (org.apache.zookeeper.server.ZooKeeperServer)
[2022-09-29 02:36:56,604] INFO Server environment:host.name=a8da3abb34cc (org.apache.zookeeper.server.ZooKeeperServer)
[2022-09-29 02:36:56,604] INFO Server environment:java.version=1.8.0_252 (org.apache.zookeeper.server.ZooKeeperServer)
[2022-09-29 02:36:56,604] INFO Server environment:java.vendor=Oracle Corporation (org.apache.zookeeper.server.ZooKeeperServer)
[2022-09-29 02:36:56,605] INFO Server environment:java.home=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.252.b09-2.el7_8.x86_64/jre (org.apache.zookeeper.server.ZooKeeperServer)
[2022-09-29 02:36:56,605] INFO Server environment:java.class.path=/opt/kafka/bin/../libs/activation-1.1.1.jar:/opt/kafka/bin/../libs/annotations-13.0.jar:/opt/kafka/bin/../libs/aopalliance-repackaged-2.5.0.jar:/opt/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/kafka/bin/../libs/audience-annotations-0.5.0.jar:/opt/kafka/bin/../libs/bcpkix-jdk15on-1.62.jar:/opt/kafka/bin/../libs/bcprov-jdk15on-1.60.jar:/opt/kafka/bin/../libs/commons-cli-1.4.jar:/opt/kafka/bin/../libs/commons-lang-2.6.jar:/opt/kafka/bin/../libs/commons-lang3-3.8.1.jar:/opt/kafka/bin/../libs/connect-api-2.5.0.jar:/opt/kafka/bin/../libs/connect-basic-auth-extension-2.5.0.jar:/opt/kafka/bin/../libs/connect-file-2.5.0.jar:/opt/kafka/bin/../libs/connect-json-2.5.0.jar:/opt/kafka/bin/../libs/connect-mirror-2.5.0.jar:/opt/kafka/bin/../libs/connect-mirror-client-2.5.0.jar:/opt/kafka/bin/../libs/connect-runtime-2.5.0.jar:/opt/kafka/bin/../libs/connect-transforms-2.5.0.jar:/opt/kafka/bin/../libs/cruise-control-metrics-reporter-2.0.103.jar:/opt/kafka/bin/../libs/gson-2.8.6.jar:/opt/kafka/bin/../libs/hk2-api-2.5.0.jar:/opt/kafka/bin/../libs/hk2-locator-2.5.0.jar:/opt/kafka/bin/../libs/hk2-utils-2.5.0.jar:/opt/kafka/bin/../libs/jackson-annotations-2.10.2.jar:/opt/kafka/bin/../libs/jackson-core-2.10.2.jar:/opt/kafka/bin/../libs/jackson-databind-2.10.2.jar:/opt/kafka/bin/../libs/jackson-dataformat-csv-2.10.2.jar:/opt/kafka/bin/../libs/jackson-datatype-jdk8-2.10.2.jar:/opt/kafka/bin/../libs/jackson-jaxrs-base-2.10.2.jar:/opt/kafka/bin/../libs/jackson-jaxrs-json-provider-2.10.2.jar:/opt/kafka/bin/../libs/jackson-module-jaxb-annotations-2.10.2.jar:/opt/kafka/bin/../libs/jackson-module-paranamer-2.10.2.jar:/opt/kafka/bin/../libs/jackson-module-scala_2.12-2.10.2.jar:/opt/kafka/bin/../libs/jaeger-client-1.1.0.jar:/opt/kafka/bin/../libs/jaeger-core-1.1.0.jar:/opt/kafka/bin/../libs/jaeger-thrift-1.1.0.jar:/opt/kafka/bin/../libs/jaeger-tracerresolver-1.1.0.jar:/opt/kafka/bin/../libs/jakarta.activation-api-1.2.1.jar:/opt/kafka/bin/../libs/jakarta.annotation-api-1.3.4.jar:/opt/kafka/bin/../libs/jakarta.inject-2.5.0.jar:/opt/kafka/bin/../libs/jakarta.ws.rs-api-2.1.5.jar:/opt/kafka/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/opt/kafka/bin/../libs/javassist-3.22.0-CR2.jar:/opt/kafka/bin/../libs/javassist-3.26.0-GA.jar:/opt/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/kafka/bin/../libs/javax.ws.rs-api-2.1.1.jar:/opt/kafka/bin/../libs/jaxb-api-2.3.0.jar:/opt/kafka/bin/../libs/jersey-client-2.28.jar:/opt/kafka/bin/../libs/jersey-common-2.28.jar:/opt/kafka/bin/../libs/jersey-container-servlet-2.28.jar:/opt/kafka/bin/../libs/jersey-container-servlet-core-2.28.jar:/opt/kafka/bin/../libs/jersey-hk2-2.28.jar:/opt/kafka/bin/../libs/jersey-media-jaxb-2.28.jar:/opt/kafka/bin/../libs/jersey-server-2.28.jar:/opt/kafka/bin/../libs/jetty-client-9.4.24.v20191120.jar:/opt/kafka/bin/../libs/jetty-continuation-9.4.24.v20191120.jar:/opt/kafka/bin/../libs/jetty-http-9.4.24.v20191120.jar:/opt/kafka/bin/../libs/jetty-io-9.4.24.v20191120.jar:/opt/kafka/bin/../libs/jetty-security-9.4.24.v20191120.jar:/opt/kafka/bin/../libs/jetty-server-9.4.24.v20191120.jar:/opt/kafka/bin/../libs/jetty-servlet-9.4.24.v20191120.jar:/opt/kafka/bin/../libs/jetty-servlets-9.4.24.v20191120.jar:/opt/kafka/bin/../libs/jetty-util-9.4.24.v20191120.jar:/opt/kafka/bin/../libs/jmx_prometheus_javaagent-0.12.0.jar:/opt/kafka/bin/../libs/jopt-simple-5.0.4.jar:/opt/kafka/bin/../libs/json-smart-1.1.1.jar:/opt/kafka/bin/../libs/jsonevent-layout-1.7.jar:/opt/kafka/bin/../libs/kafka-agent.jar:/opt/kafka/bin/../li
bs/kafka-clients-2.5.0.jar:/opt/kafka/bin/../libs/kafka-log4j-appender-2.5.0.jar:/opt/kafka/bin/../libs/kafka-oauth-client-0.5.0.jar:/opt/kafka/bin/../libs/kafka-oauth-common-0.5.0.jar:/opt/kafka/bin/../libs/kafka-oauth-keycloak-authorizer-0.5.0.jar:/opt/kafka/bin/../libs/kafka-oauth-server-0.5.0.jar:/opt/kafka/bin/../libs/kafka-streams-2.5.0.jar:/opt/kafka/bin/../libs/kafka-streams-examples-2.5.0.jar:/opt/kafka/bin/../libs/kafka-streams-scala_2.12-2.5.0.jar:/opt/kafka/bin/../libs/kafka-streams-test-utils-2.5.0.jar:/opt/kafka/bin/../libs/kafka-tools-2.5.0.jar:/opt/kafka/bin/../libs/kafka_2.12-2.5.0-sources.jar:/opt/kafka/bin/../libs/kafka_2.12-2.5.0.jar:/opt/kafka/bin/../libs/keycloak-common-10.0.0.jar:/opt/kafka/bin/../libs/keycloak-core-10.0.0.jar:/opt/kafka/bin/../libs/kotlin-stdlib-1.3.50.jar:/opt/kafka/bin/../libs/kotlin-stdlib-common-1.3.50.jar:/opt/kafka/bin/../libs/libthrift-0.13.0.jar:/opt/kafka/bin/../libs/log4j-1.2.17.jar:/opt/kafka/bin/../libs/lz4-java-1.7.1.jar:/opt/kafka/bin/../libs/maven-artifact-3.6.3.jar:/opt/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/kafka/bin/../libs/mirror-maker-agent.jar:/opt/kafka/bin/../libs/netty-buffer-4.1.45.Final.jar:/opt/kafka/bin/../libs/netty-codec-4.1.45.Final.jar:/opt/kafka/bin/../libs/netty-common-4.1.45.Final.jar:/opt/kafka/bin/../libs/netty-handler-4.1.45.Final.jar:/opt/kafka/bin/../libs/netty-resolver-4.1.45.Final.jar:/opt/kafka/bin/../libs/netty-transport-4.1.45.Final.jar:/opt/kafka/bin/../libs/netty-transport-native-epoll-4.1.45.Final.jar:/opt/kafka/bin/../libs/netty-transport-native-unix-common-4.1.45.Final.jar:/opt/kafka/bin/../libs/okhttp-4.2.2.jar:/opt/kafka/bin/../libs/okio-2.2.2.jar:/opt/kafka/bin/../libs/opentracing-api-0.33.0.jar:/opt/kafka/bin/../libs/opentracing-kafka-client-0.1.12.jar:/opt/kafka/bin/../libs/opentracing-noop-0.33.0.jar:/opt/kafka/bin/../libs/opentracing-tracerresolver-0.1.8.jar:/opt/kafka/bin/../libs/opentracing-util-0.33.0.jar:/opt/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/opt/kafka/bin/../libs/paranamer-2.8.jar:/opt/kafka/bin/../libs/plexus-utils-3.2.1.jar:/opt/kafka/bin/../libs/reflections-0.9.12.jar:/opt/kafka/bin/../libs/rocksdbjni-5.18.3.jar:/opt/kafka/bin/../libs/scala-collection-compat_2.12-2.1.3.jar:/opt/kafka/bin/../libs/scala-java8-compat_2.12-0.9.0.jar:/opt/kafka/bin/../libs/scala-library-2.12.10.jar:/opt/kafka/bin/../libs/scala-logging_2.12-3.9.2.jar:/opt/kafka/bin/../libs/scala-reflect-2.12.10.jar:/opt/kafka/bin/../libs/slf4j-api-1.7.30.jar:/opt/kafka/bin/../libs/slf4j-log4j12-1.7.30.jar:/opt/kafka/bin/../libs/snappy-java-1.1.7.3.jar:/opt/kafka/bin/../libs/tracing-agent.jar:/opt/kafka/bin/../libs/validation-api-2.0.1.Final.jar:/opt/kafka/bin/../libs/zookeeper-3.5.7.jar:/opt/kafka/bin/../libs/zookeeper-jute-3.5.7.jar:/opt/kafka/bin/../libs/zstd-jni-1.4.4-7.jar (org.apache.zookeeper.server.ZooKeeperServer)
[2022-09-29 02:36:56,605] INFO Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
[2022-09-29 02:36:56,605] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
[2022-09-29 02:36:56,605] INFO Server environment:java.compiler=<NA> (org.apache.zookeeper.server.ZooKeeperServer)
[2022-09-29 02:36:56,605] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
[2022-09-29 02:36:56,605] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
[2022-09-29 02:36:56,605] INFO Server environment:os.version=5.19.10-200.fc36.x86_64 (org.apache.zookeeper.server.ZooKeeperServer)
[2022-09-29 02:36:56,605] INFO Server environment:user.name=kafka (org.apache.zookeeper.server.ZooKeeperServer)
[2022-09-29 02:36:56,606] INFO Server environment:user.home=/home/kafka (org.apache.zookeeper.server.ZooKeeperServer)
[2022-09-29 02:36:56,606] INFO Server environment:user.dir=/opt/kafka (org.apache.zookeeper.server.ZooKeeperServer)

   -- output omitted --

#Apache Kafka Producer

#Apache Kafka Consumer

#HTTP configuration
http.enabled=true
http.host=0.0.0.0
http.port=8080
http.cors.enabled=
http.cors.allowedOrigins=
http.cors.allowedMethods=

cat: /sys/fs/cgroup/memory/memory.limit_in_bytes: No such file or directory
/opt/strimzi/bin/docker/dynamic_resources.sh: line 7: [: : integer expression expected
log4j:ERROR Could not read configuration file from URL [file:/opt/strimzi/custom-config/log4j.properties].
java.io.FileNotFoundException: /opt/strimzi/custom-config/log4j.properties (Permission denied)
    at java.io.FileInputStream.open0(Native Method)
    at java.io.FileInputStream.open(FileInputStream.java:195)
    at java.io.FileInputStream.<init>(FileInputStream.java:138)
    at java.io.FileInputStream.<init>(FileInputStream.java:93)
    at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90)
    at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188)
    at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:557)
    at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
    at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
    at org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:81)
    at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:358)
    at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383)
    at io.strimzi.kafka.bridge.Application.<clinit>(Application.java:34)
log4j:ERROR Ignoring configuration file [file:/opt/strimzi/custom-config/log4j.properties].
log4j:WARN No appenders could be found for logger (io.strimzi.kafka.bridge.Application).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
__  ____  __  _____   ___  __ ____  ______ 
 --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ 
 -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \   
--\___\_\____/_/ |_/_/|_/_/|_|\____/___/   
2022-09-29 02:36:58,878 WARN  [io.qua.config] (main) Unrecognized configuration key "quarkus.datasource.username" was provided; it will be ignored
2022-09-29 02:36:58,879 WARN  [io.qua.config] (main) Unrecognized configuration key "quarkus.datasource.driver" was provided; it will be ignored
2022-09-29 02:36:58,879 WARN  [io.qua.config] (main) Unrecognized configuration key "quarkus.datasource.url" was provided; it will be ignored
2022-09-29 02:36:58,879 WARN  [io.qua.config] (main) Unrecognized configuration key "quarkus.hibernate-orm.database.generation" was provided; it will be ignored
2022-09-29 02:36:58,879 WARN  [io.qua.config] (main) Unrecognized configuration key "quarkus.datasource.password" was provided; it will be ignored
2022-09-29 02:36:59,962 INFO  [io.quarkus] (main) apicurio-registry-app 1.2.2.Final (powered by Quarkus 1.3.3.Final) started in 1.251s. Listening on: http://0.0.0.0:8080
2022-09-29 02:36:59,962 INFO  [io.quarkus] (main) Profile prod activated. 
2022-09-29 02:36:59,963 INFO  [io.quarkus] (main) Installed features: [cdi, resteasy, resteasy-jackson, servlet, smallrye-health, smallrye-metrics, smallrye-openapi]
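
At this point four containers should be up: ZooKeeper on host port 2181, Kafka on 9092, the Apicurio registry on 8081, and the Kafka Bridge on 8082, as reflected in the podman create commands above. One quick way to check from another terminal (a minimal sketch; output omitted):

$ podman ps --format "{{.Names}}\t{{.Ports}}\t{{.Status}}"

Incidentally, the log4j "Permission denied" messages from the bridge container look like typical SELinux labeling trouble when bind-mounting a file on a Fedora host; the bridge still starts and simply falls back to default logging. Adding a :Z relabel option to the log4j.properties volume entry in docker-compose.yaml is one likely fix, though I have not verified it here.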


Configuring Apache Kafka

  • In another terminal, change to the strimzi-all-in-one directory and create a topic with the following steps
    (Note) The steps below use demo-topic as the topic name.
$ pwd
/home/demo_user/opt/training/amq-examples/strimzi-all-in-one
$ 
$ podman-compose exec kafka bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic demo-topic
['podman', '--version', '']
using podman version: 4.2.0
podman exec --interactive --tty --env LOG_DIR=/tmp/logs --env KAFKA_BROKER_ID=1 --env KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT --env KAFKA_LISTENERS=PLAINTEXT://:29092,PLAINTEXT_HOST://:9092 --env KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092 --env KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 kafka bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic demo-topic
Created topic demo-topic.
exit code: 0
$ 
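
To double-check the topic that was just created, the same podman-compose exec pattern can be used with the --describe option of kafka-topics.sh (a hedged example; output omitted):

$ podman-compose exec kafka bin/kafka-topics.sh --describe --bootstrap-server localhost:9092 --topic demo-topic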


Verifying the setup

  • In another terminal, change to the strimzi-all-in-one directory and start consumer 1 with the following command to begin receiving messages
$ podman-compose exec kafka bin/kafka-console-consumer.sh --topic demo-topic  --bootstrap-server localhost:9092
['podman', '--version', '']
using podman version: 4.2.0
podman exec --interactive --tty --env LOG_DIR=/tmp/logs --env KAFKA_BROKER_ID=1 --env KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT --env KAFKA_LISTENERS=PLAINTEXT://:29092,PLAINTEXT_HOST://:9092 --env KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092 --env KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 kafka bin/kafka-console-consumer.sh --topic demo-topic --bootstrap-server localhost:9092


  • In another terminal, change to the strimzi-all-in-one directory and start consumer 2 with the following command to begin receiving messages (a note on consumer groups follows below)
$ podman-compose exec kafka bin/kafka-console-consumer.sh --topic demo-topic  --bootstrap-server localhost:9092
['podman', '--version', '']
using podman version: 4.2.0
podman exec --interactive --tty --env LOG_DIR=/tmp/logs --env KAFKA_BROKER_ID=1 --env KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT --env KAFKA_LISTENERS=PLAINTEXT://:29092,PLAINTEXT_HOST://:9092 --env KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092 --env KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 kafka bin/kafka-console-consumer.sh --topic demo-topic --bootstrap-server localhost:9092
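
A short note on why both consumers will receive every message: each kafka-console-consumer.sh started without --group joins its own auto-generated consumer group, and every group independently receives the full stream. To have two consumers share one group instead, the same --group value could be passed to both, e.g. (a hedged sketch; the group name demo-group is made up here):

$ podman-compose exec kafka bin/kafka-console-consumer.sh --topic demo-topic --bootstrap-server localhost:9092 --group demo-group

Since demo-topic was created with a single partition, only one member of such a shared group would actually receive messages at a time.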


  • In another terminal, change to the strimzi-all-in-one directory and start the producer with the following command
$ podman-compose exec kafka bin/kafka-console-producer.sh --topic demo-topic --bootstrap-server localhost:9092
['podman', '--version', '']
using podman version: 4.2.0
podman exec --interactive --tty --env LOG_DIR=/tmp/logs --env KAFKA_BROKER_ID=1 --env KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT --env KAFKA_LISTENERS=PLAINTEXT://:29092,PLAINTEXT_HOST://:9092 --env KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092 --env KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 kafka bin/kafka-console-producer.sh --topic demo-topic --bootstrap-server localhost:9092


  • Send three or so messages from the producer
{"data" : {"id": 1, "status": "Gold", "score": 200}}
{"data" : {"id": 2, "status": "Red", "score": 5}}       
{"data" : {"id": 3, "status": "Silver", "score": 170}}


  • Confirm that each consumer receives the same messages (an optional HTTP-based way to produce messages is sketched after the output below)
{"data" : {"id": 1, "status": "Gold", "score": 200}}
{"data" : {"id": 2, "status": "Red", "score": 5}}
{"data" : {"id": 3, "status": "Silver", "score": 170}}


Stopping Apache Kafka

  • In another terminal, change to the strimzi-all-in-one directory and shut down Apache Kafka as follows
$ podman-compose down
['podman', '--version', '']
using podman version: 4.2.0
** excluding:  set()
podman stop -t 10 bridge
WARN[0010] StopSignal SIGTERM failed to stop container bridge in 10 seconds, resorting to SIGKILL 
bridge
exit code: 0
podman stop -t 10 registry
registry
exit code: 0
podman stop -t 10 kafka
kafka
exit code: 0
podman stop -t 10 zookeeper
zookeeper
exit code: 0
podman rm bridge
1c54856ff14631532458e1ee9b12d1712a665e5fbfae9b79da152b2119e89122
exit code: 0
podman rm registry
39e7dfe472d5241b896a88378b371f0f38f8fe4330993ced612f9038fd279cb7
exit code: 0
podman rm kafka
f297c2f7d54e84da5f04ad5ea434e8a23379d263c5fee51b2894df9f4db59610
exit code: 0
podman rm zookeeper
a8da3abb34cc182a4e0f1095027a9633557775e7794b91ab5b6564684e242bdd
exit code: 0
$ 
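
To confirm that nothing is left behind, the containers can be listed once more, filtering by the compose project label seen in the create commands earlier (a hedged example; output omitted):

$ podman ps -a --filter label=io.podman.compose.project=strimzi-all-in-one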


Summary

With the steps above, an Apache Kafka environment was up and running in no time. Verifying behavior such as what happens when some nodes go down would be difficult with this single-broker setup, but it is a convenient way to get an Apache Kafka environment quickly.