Kafka standalone setup without container
Kafka Setup: standalone
1. cd /opt
2. git clone https://github.com/mahuadasgupta/kafkasetup.git (this command will create a kafkasetup folder)
3. cd kafkasetup
4. download Kafka 2.12: wget http://download.nextag.com/apache/kafka/1.0.0/kafka_2.12-1.0.0.tgz
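If the nextag mirror above no longer serves this release, the same tarball should also be available from the Apache archive (an assumption based on the standard archive layout for old Kafka releases):
wget https://archive.apache.org/dist/kafka/1.0.0/kafka_2.12-1.0.0.tgz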
Extract the Kafka tarfile:
1. tar -zxvf kafka_2.12-1.0.0.tgz
2. rename the extracted folder: mv kafka_2.12-1.0.0 kafka_2.12-1.0.0_standalone
Configure ZooKeeper:
1. cd /opt/kafkasetup/software/kafka_2.12-1.0.0_standalone/config
2. edit the zookeeper.properties file with the parameters below:
dataDir=/opt/kafkasetup/zookeeper
clientPort=2181
maxClientCnxns=0
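Optionally, before starting ZooKeeper, the data directory named in dataDir can be created and the edits double-checked (a small sketch; ZooKeeper will normally create the directory itself, so the mkdir is just a precaution):
mkdir -p /opt/kafkasetup/zookeeper                                   # directory referenced by dataDir
grep -E '^(dataDir|clientPort|maxClientCnxns)' zookeeper.properties  # confirm the three settings above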
Start ZooKeeper:
1. cd /opt/kafkasetup/software/kafka_2.12-1.0.0_standalone
2. bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
Verify ZooKeeper:
1. netstat -anp | grep 2181 (it should produce output like the line below)
tcp6 0 0 :::2181 :::* LISTEN 633/java
If the above output is not visible, run ZooKeeper in non-daemon mode (i.e. in the foreground) and check the issue:
bin/zookeeper-server-start.sh config/zookeeper.properties
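As an extra liveness check (a sketch, assuming ZooKeeper's four-letter commands are enabled, which is the default in the 3.4.x version bundled with Kafka 1.0.0), the ruok command and the bundled ZooKeeper shell can both confirm the service answers on 2181:
echo ruok | nc localhost 2181                  # a healthy ZooKeeper replies imok
bin/zookeeper-shell.sh localhost:2181 ls /     # lists the root znodes, e.g. [zookeeper]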
Configure Kafka:
1. cd /opt/kafkasetup/software/kafka_2.12-1.0.0_standalone/config
2. edit server.properties with the parameters below:
broker.id=0
log.dirs=/opt/app/kafkasetup/kafka-logs
zookeeper.connect=localhost:2181
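For reference, the relevant part of server.properties would then read as below; the listeners line is an assumption (it is not mentioned above, but 9092 is Kafka's default port and matches the netstat check later), and the broker creates log.dirs on first start if it does not exist:
# config/server.properties (excerpt)
broker.id=0
listeners=PLAINTEXT://:9092          # assumed; 9092 is the default port verified below
log.dirs=/opt/app/kafkasetup/kafka-logs
zookeeper.connect=localhost:2181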
Start Kafka:
1. cd /opt/kafkasetup/software/kafka_2.12-1.0.0_standalone
2. bin/kafka-server-start.sh -daemon config/server.properties
Verify Kafka:
1. netstat -anp | grep 9092 (it should produce output like the line below)
tcp6 0 0 :::9092 :::* LISTEN 1233/java
If the above output is not visible, run Kafka in non-daemon mode (i.e. in the foreground) and check the issue:
bin/kafka-server-start.sh config/server.properties
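Two further checks that the broker actually came up (a sketch, assuming the default logs/ directory under the Kafka folder and that the broker registered itself under /brokers/ids in ZooKeeper):
tail -n 50 logs/server.log                               # look for a line ending in "started (kafka.server.KafkaServer)"
bin/zookeeper-shell.sh localhost:2181 ls /brokers/ids    # should list the configured broker.id, e.g. [0]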
Kafka producer and consumer testing:
single producer vs single consumer/subscriber
1. set up the kafkasetup docker container in a terminal
1.1 log in to a putty session on the machine where Kafka is installed (192.168.2.217)
1.2 log in to the running kafkasetup docker container (if the container is not running, use the command
docker rm kafkamahua ; docker run -it --hostname kafkamahua --name kafkamahua kafkamahua /bin/bash)
1.3 grab the IP of the container using the ifconfig command (172.17.0.3)
1.4 start the sshd server: (/sbin/sshd)
1.5 create a kafkaproducer user and a kafkaconsumer user (adduser kafkaproducer ; passwd kafkaproducer) and (adduser kafkaconsumer ; passwd kafkaconsumer)
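Before moving on, it is worth confirming the container is reachable over ssh (a sketch in the same netstat style used above; 172.17.0.3 is whatever IP ifconfig reported in step 1.3):
netstat -anp | grep :22            # inside the container: sshd should show up as LISTEN
ssh [email protected]       # from the host: should prompt for the password set in step 1.5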
2. start the producer in a terminal
2.1 open up a new putty terminal (log in to a putty session on the machine where Kafka is installed, 192.168.2.217)
2.2 log in to the docker container using ssh [email protected] and the password
2.3 cd /opt/kafkasetup/software/kafka_2.12-1.0.0_standalone
2.4 create a topic: bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic mahuatopic
2.5 verify the topic: bin/kafka-topics.sh --list --zookeeper localhost:2181
2.6 start the producer: bin/kafka-console-producer.sh --broker-list localhost:9092 --topic mahuatopic
>hi this is kafka setup test
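Optionally, the newly created topic can be inspected with the same kafka-topics.sh tool using --describe, which prints its partition count, replication factor and leader:
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic mahuatopic
(expected output, roughly: Topic:mahuatopic PartitionCount:1 ReplicationFactor:1 ...)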
3. start the consumer in a terminal
3.1 open up a new putty terminal (log in to a putty session on the machine where Kafka is installed, 192.168.2.217)
3.2 log in to the docker container using ssh [email protected] and the password
3.3 cd /opt/kafkasetup/software/kafka_2.12-1.0.0_standalone/
3.4 message consumption from the beginning: bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic mahuatopic --from-beginning
hi this is kafka setup test
3.5 go to the producer terminal and append a line
>hi this is kafka setting test
3.6 look at the consumer terminal; it will show
hi this is kafka setup test
hi this is kafka setting test
3.7 incremental message consumption: bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic mahuatopic (it will appear to hang, waiting for new messages)
3.8 go to the producer terminal and append a line
testing
3.9 look at the consumer terminal; it will show
testing
this means it only consumes upcoming messages, not from the beginning
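Behind the scenes each console consumer joins its own consumer group (an auto-generated one named console-consumer-<number> unless a group id is given). The kafka-consumer-groups.sh tool from the same distribution can list these groups and show how far each has read; the group name below is only illustrative:
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group console-consumer-12345
(the describe output shows TOPIC, PARTITION, CURRENT-OFFSET, LOG-END-OFFSET and LAG for the group)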
single producer vs multi consumer/subscriber
1. keep the above setup as it is, i.e. keep the previous producer and consumer terminals open with their commands still running
2. add a user kafkaconsumer1 in the docker terminal
3. open up a new putty terminal (log in to a putty session on the machine where Kafka is installed, 192.168.2.217)
4. log in to the docker container using ssh [email protected]
5. cd /opt/kafkasetup/software/kafka_2.12-1.0.0_standalone/
6. incremental message consumption: bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic mahuatopic
(it will appear to hang, waiting for new messages)
7. go to the producer terminal and append a line
testing2
8. go to consumer terminal 1; it will show
testing
testing2
9. go to consumer terminal 2; it will show
testing2
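Both consumers above receive every message because each console consumer is in its own auto-generated group. If the goal were instead to split the messages between the two consumers, they could share a group id via --consumer-property (a sketch; the group name mahuagroup is arbitrary, and with a single-partition topic one member of the group would simply sit idle):
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic mahuatopic --consumer-property group.id=mahuagroup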
multi producer vs multi consumer/subscriber
1. keep the above setup as it is, i.e. keep the previous producer and consumer terminals open with their commands still running
2. add a user kafkaproducer1 in the docker terminal
3. open up a new putty terminal (log in to a putty session on the machine where Kafka is installed, 192.168.2.217)
4. log in to the docker container using ssh [email protected]
5. cd /opt/kafkasetup/software/kafka_2.12-1.0.0_standalone/
6. start the 2nd producer: bin/kafka-console-producer.sh --broker-list localhost:9092 --topic mahuatopic
append a line:
testing 2nd producer
7. go to consumer terminal 1; it will show
testing
testing2
testing 2nd producer
8. go to consumer terminal 2; it will show
testing2
testing 2nd producer
9. go to producer terminal 1 and append a line
testing first producer again
10. go to consumer terminal 1; it will show
testing
testing2
testing 2nd producer
testing first producer again
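As a final sanity check, the total number of messages written to mahuatopic can be read with GetOffsetShell, which ships in the same distribution (--time -1 asks for the latest offset of each partition; if every line in this walkthrough was sent, partition 0 should report 6):
bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list localhost:9092 --topic mahuatopic --time -1
(expected output, roughly: mahuatopic:0:6)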