Chapter 1.11, Kafka

Related links

Installation

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
kubectl create namespace kafka
helm install kafka bitnami/kafka --namespace kafka
 
# create the persistent volumes (three will be needed)
kubectl apply -f pv1.yaml   # likewise for pv2.yaml and pv3.yaml
kubectl edit pvc data-kafka-controller-0 -n kafka   # repeat for -1 and -2

Persistent volume descriptor (three will be needed)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: 8gb-pv1
spec:
  capacity:
    storage: 8Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain  # local PVs have no provisioner, so Delete cannot be honoured
  storageClassName: local-storage
  local:
    path: /data/pv1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - vroctopus
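Since three of these volumes are needed, the descriptor above can be stamped out for pv1–pv3 with a small loop instead of copying the file by hand (a sketch; it reuses the node name vroctopus and the /data/pvN paths from the example):

```shell
# Generate pv1.yaml .. pv3.yaml from the descriptor above, substituting
# the volume name and the host path for each instance.
for i in 1 2 3; do
  cat > "pv$i.yaml" <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: 8gb-pv$i
spec:
  capacity:
    storage: 8Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data/pv$i
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - vroctopus
EOF
done
# then: kubectl apply -f pv1.yaml -f pv2.yaml -f pv3.yaml
```

Note that the /data/pvN directories must already exist on the node, otherwise the pods will fail to mount the local volumes.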

Add the "storageClassName: local-storage" entry to the PVC's spec:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
  creationTimestamp: "2025-04-26T07:21:04Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app.kubernetes.io/component: controller-eligible
    app.kubernetes.io/instance: kafka
    app.kubernetes.io/name: kafka
    app.kubernetes.io/part-of: kafka
  name: data-kafka-controller-0
  namespace: kafka
  resourceVersion: "1079881"
  uid: 62f0470b-e870-43a7-b338-713eae165ace
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: local-storage
  volumeMode: Filesystem
  volumeName: 8gb-pv1

Installation output

NAME: kafka
LAST DEPLOYED: Thu Apr 24 08:52:02 2025
NAMESPACE: kafka
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: kafka
CHART VERSION: 32.2.0
APP VERSION: 4.0.0
 
Did you know there are enterprise versions of the Bitnami catalog? For enhanced secure software supply chain features, unlimited pulls from Docker, LTS support, or application customization, see Bitnami Premium or Tanzu Application Catalog. See https://www.arrow.com/globalecs/na/vendors/bitnami for more information.
 
** Please be patient while the chart is being deployed **
 
Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:
 
    kafka.kafka.svc.cluster.local
 
Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:
 
    kafka-controller-0.kafka-controller-headless.kafka.svc.cluster.local:9092
    kafka-controller-1.kafka-controller-headless.kafka.svc.cluster.local:9092
    kafka-controller-2.kafka-controller-headless.kafka.svc.cluster.local:9092
 
The CLIENT listener for Kafka client connections from within your cluster have been configured with the following security settings:
    - SASL authentication
 
To connect a client to your Kafka, you need to create the 'client.properties' configuration files with the content below:
 
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="user1" \
    password="$(kubectl get secret kafka-user-passwords --namespace kafka -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1)";
 
To create a pod that you can use as a Kafka client run the following commands:
 
    kubectl run kafka-client --restart='Never' --image docker.io/bitnami/kafka:4.0.0-debian-12-r0 --namespace kafka --command -- sleep infinity
    kubectl cp --namespace kafka /path/to/client.properties kafka-client:/tmp/client.properties
    kubectl exec --tty -i kafka-client --namespace kafka -- bash
 
    PRODUCER:
        kafka-console-producer.sh \
            --producer.config /tmp/client.properties \
            --bootstrap-server kafka.kafka.svc.cluster.local:9092 \
            --topic test
 
    CONSUMER:
        kafka-console-consumer.sh \
            --consumer.config /tmp/client.properties \
            --bootstrap-server kafka.kafka.svc.cluster.local:9092 \
            --topic test \
            --from-beginning
 
WARNING: There are "resources" sections in the chart not set. Using "resourcesPreset" is not recommended for production. For production installations, please set the following values according to your workload needs:
  - controller.resources
  - defaultInitContainers.prepareConfig.resources
+info https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
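The JAAS line in the NOTES above extracts user1's password from the kafka-user-passwords secret with a base64 / cut pipeline. A minimal local demonstration of that extraction logic, with a made-up comma-separated password list standing in for the real secret value:

```shell
# The chart stores all client passwords in the secret as a single
# base64-encoded, comma-separated string; 'cut -d , -f 1' picks the
# first user's password.
encoded=$(printf 'pw-user1,pw-user2' | base64)   # stand-in for the secret data
printf '%s' "$encoded" | base64 -d | cut -d , -f 1
# prints pw-user1
```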
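To silence the resources warning, explicit requests/limits can be supplied for the two values the chart names and applied with helm upgrade (a sketch; the sizes below are illustrative placeholders, not recommendations):

```shell
# Write an override file with explicit resources instead of "resourcesPreset",
# covering controller.resources and defaultInitContainers.prepareConfig.resources.
cat > kafka-resources.yaml <<'EOF'
controller:
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: "1"
      memory: 2Gi
defaultInitContainers:
  prepareConfig:
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 250m
        memory: 256Mi
EOF
# then: helm upgrade kafka bitnami/kafka --namespace kafka -f kafka-resources.yaml
```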

Testing

# change the Kafka service type to LoadBalancer
 
# add the external IP (e.g. 192.168.1.15) to /etc/hosts as follows
192.168.1.15 kafka-controller-0.kafka-controller-headless.kafka.svc.cluster.local kafka-controller-1.kafka-controller-headless.kafka.svc.cluster.local kafka-controller-2.kafka-controller-headless.kafka.svc.cluster.local
 
# the bootstrap server is the port exposed towards the controller machine
export KAFKA_BOOTSTRAP_SERVER=master.me.local:31710
 
# start the consumer
java -jar kafkaconsumer-0.0.1-SNAPSHOT.jar
 
# start the producer
java -jar kafkaproducer-0.0.1-SNAPSHOT.jar
# open the Swagger page in a browser: http://localhost:8080/swagger-ui/index.html