
Load Testing Apache Kafka Using Kafkameter

You can refer to the Kafka overview here. For installing and configuring Kafka, refer to the previous post: Configuring Apache Kafka.

Step 1

Go to GitHub and download kafkameter:
https://github.com/BrightTag/kafkameter
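
For example, assuming git is installed, you can clone the repository from the command line:
git clone https://github.com/BrightTag/kafkameter.git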

Step 2

Follow the project's build steps to produce kafkameter.jar.
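A minimal build sketch, assuming Maven is installed and the project uses its standard Maven layout:
cd kafkameter
mvn clean package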

Step 3

Place the generated jar in the $JMETER_HOME/lib/ext folder.
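Assuming the default Maven output directory and jar name, the copy looks something like:
cp target/kafkameter-*.jar $JMETER_HOME/lib/ext/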

Step 4

Restart JMeter.

Step 5

Add Thread Group -> Sampler -> Java Request.

Step 6

Configure the input parameters, such as the broker host name and the Kafka topic you created earlier.
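As an illustration, a typical set of values might look like the lines below; the parameter names follow the kafkameter sampler's defaults and may differ between versions, so check the Java Request panel in JMeter for the exact names:
kafka_brokers = localhost:9092
kafka_topic = kafkatopic2
kafka_key = ${__time()}
kafka_message = test message from JMeter
kafka_message_serializer = kafka.serializer.StringEncoder
kafka_key_serializer = kafka.serializer.StringEncoder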

Step 7

Run the JMeter test and verify that the same message is consumed by the Kafka consumer.
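To verify consumption, you can watch the topic with the console consumer from the previous post:
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic kafkatopic2 --from-beginning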

Refer to the screenshots below.

Kafka parameters

Sending a message to the Kafka consumer

Message consumed


Configuring Apache Kafka Environment

Kafka is widely used by social networking websites because of its performance. Refer to the Kafka introduction for more information. Here I'm going to show how to set up Kafka in eight simple steps.

Step 1

Download Kafka from the Apache website:
https://www.apache.org/dyn/closer.cgi?path=/kafka/0.8.2.2/kafka_2.9.1-0.8.2.2.tgz
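
From the command line, the same archive can be fetched with wget, assuming the Apache archive mirror URL below is still valid:
wget https://archive.apache.org/dist/kafka/0.8.2.2/kafka_2.9.1-0.8.2.2.tgz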

Step 2

Untar the downloaded archive.
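For example:
tar -xzf kafka_2.9.1-0.8.2.2.tgz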

Step 3

Change into the extracted directory.
cd kafka_2.9.1-0.8.2.2

Step 4

Start ZooKeeper, which Kafka uses to coordinate brokers and synchronize producers and consumers.
bin/zookeeper-server-start.sh config/zookeeper.properties

Step 5

Start the Kafka server.
bin/kafka-server-start.sh config/server.properties

Step 6

Create a topic for the Kafka producer and consumer to use.
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic kafkatopic2

Step 7

Start the Kafka console producer and type some messages.
bin/kafka-console-producer.sh --broker-list localhost:9092 --sync --topic kafkatopic2

Step 8

Start the Kafka console consumer and observe that the messages you typed in Step 7 are being consumed.
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic kafkatopic2 --from-beginning

You have now successfully set up Kafka. We will try adding more producers and consumers in the next post.

 


Apache Kafka – Introduction

Apache Kafka is a distributed messaging system that serves as a substitute for traditional JMS messaging systems in the big-data world. Another way to describe Kafka, as the Apache website puts it, is "publish-subscribe messaging rethought as a distributed commit log". It was originally developed at LinkedIn and later became part of the Apache project.

The key features of Kafka are:
  • Fast
  • Scalable
  • Durable
  • Distributed by design

Kafka differs in several ways from other message brokers such as RabbitMQ and WebSphere MQ. One of its best-known advantages is performance: messages are written to an append-only log on disk, and consumers pull data at their own pace while tracking their own position in the log.

Components of Apache Kafka

There are five important components in Kafka, as given below:
  • Topics
  • Producers
  • Consumers
  • Brokers
  • ZooKeeper

Kafka can have multiple producers and consumers and work as a cluster in a distributed model.

Kafka works on a publish-subscribe mechanism. Kafka maintains feeds of messages in "topics"; processes that publish messages to a Kafka topic are called "producers", and processes that subscribe to topics and process the published messages are called "consumers". Kafka runs as a cluster of one or more servers, each of which is called a "broker".

Kafka use cases
  • Messaging: As a replacement for a more traditional message broker, Kafka offers better throughput, built-in partitioning, replication, and fault tolerance.
  • Website activity tracking: Real-time processing, real-time monitoring, and loading into Hadoop or offline data warehousing systems for offline processing and reporting.
  • Metrics: Aggregating statistics from distributed applications to produce centralized feeds of operational data.
  • Log aggregation: Collecting physical log files from servers and putting them in a central place (a file server or HDFS, perhaps) for processing.

This is just an introduction to Kafka; please refer to the Apache documentation for more detail.