How to change Google Drive folder location?

By default the Google Drive installer creates a folder here: C:\Users\USERNAME\AppData\Local\Google (replace ‘USERNAME’ with your computer username). Considering the storage options available with Google Drive, this might end up filling your default drive space if all folders are allowed to sync.

We can change the Google Drive installation location in two ways.

During installation, we can change it as follows:
  • Download the latest Drive installer
  • Begin the installation, then click “Get started”
  • Log in with your account
  • Click Next three times after logging in; you should see a window that says “You’re all set”, along with an Advanced Setup button
  • Click that button and choose your installation location.
After installation:
  • Click the Google Drive icon in your system tray
  • Choose Disconnect account… under Preferences
  • Sign in again; you can change the folder when you click the Advanced setup button.


PETYA Crypto-ransomware

Until now, the ransomware we have heard of encrypted files on targeted computers: users could still log in to the operating system but could not open the encrypted files. The newly discovered PETYA crypto-ransomware goes one step further and overwrites the MBR itself to lock users out of their computers.

Petya is distributed via email. Victims receive a message tailored to look and read like an application for a job in the company. It presents users with a hyperlink to a Dropbox storage location, which supposedly lets them download the applicant’s CV.

The downloaded file is actually a self-extracting executable that unleashes the trojan onto the system.

Once executed, Petya overwrites the MBR of the hard drive, causing Windows to crash and display a blue screen. When the user tries to reboot, the modified MBR prevents booting into the operating system; instead the user is greeted with an ASCII skull and an ultimatum: pay a certain amount in bitcoins or lose access to your files and computer.

Fig 1: Petya’s red skull-and-crossbones warning


The modified MBR also disables booting into Safe Mode. The user is then given explicit instructions on how to pay, just like with any crypto-ransomware currently making the rounds: a list of demands, a link to the Tor Project and how to reach the payment page through it, and a personal decryption code.

Fig 2: Petya’s decryption and ransom payment instructions


DROWN vulnerability

DROWN stands for Decrypting RSA with Obsolete and Weakened eNcryption, and it provides a way for attackers to decrypt HTTPS communications from servers that still support SSLv2. Most of us assume that supporting the SSLv2 protocol on a server is not a problem, because modern client software simply doesn’t use it.

But it turns out the mere existence of SSLv2 support helps attackers crack a connection’s encryption and mount what is effectively a MITM attack.


According to the researchers, a server is vulnerable to the DROWN vulnerability (also known as CVE-2016-0800) if:

It allows SSLv2 connections. This is surprisingly common, due to misconfiguration and inappropriate default settings. Our measurements show that 17% of HTTPS servers still allow SSLv2 connections.


Its private key is used on any other server that allows SSLv2 connections, even for another protocol. Many companies reuse the same certificate and key on their web and email servers, for instance. In this case, if the email server supports SSLv2 and the web server does not, an attacker can take advantage of the email server to break TLS connections to the web server. When taking key reuse into account, an additional 16% of HTTPS servers are vulnerable, putting 33% of HTTPS servers at risk.


If you want to check whether a particular site is vulnerable, the researchers have helpfully provided an online tool.

Gatling: the Beginning

We are living in a world of technology; it changes the way we work and the way we look at things. We search queries in Google, shop online, share moments on Facebook, and chat with childhood friends no matter where they live. Everyone appreciates it when tasks are simplified or automated by online solutions. Hence it is important to design web applications with performance factors in mind, such as concurrency, response time, fault tolerance, and scalability. Though there are many load-testing tools on the market, JMeter and Gatling are among the most widely used. Both are open source and offer support for protocol testing.
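
As a side note on response time as a metric: load tools typically report percentiles rather than averages. Here is a minimal Python sketch of a nearest-rank percentile; the sample values below are made up for illustration, not taken from any real test:

```python
def percentile(samples: list, pct: float) -> float:
    """Nearest-rank percentile: the value below which roughly pct% of samples fall."""
    ranked = sorted(samples)
    # Rank of the requested percentile, clamped to a valid index
    k = max(0, int(round(pct / 100.0 * len(ranked))) - 1)
    return ranked[k]

# Hypothetical response times in milliseconds
response_times_ms = [120, 95, 310, 101, 99, 87, 450, 130, 105, 98]
print(percentile(response_times_ms, 50))   # 101 -- the median
print(percentile(response_times_ms, 95))   # 450 -- the slow tail
```

A high p95 with a healthy median is exactly the kind of bottleneck these tools are meant to expose.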

We have already seen JMeter and how it helps QA identify performance bottlenecks without worrying much about coding, as JMeter offers built-in components: logical controllers; samplers for protocols like HTTP, FTP, SOAP, JMS, etc.; listeners for reports; config elements like CSV Data Set Config for customizing user data; pre-processors and post-processors such as the Regex Extractor; and JMXMon for monitoring the heap parameters of the web application. I like JMeter very much and have used it for a long time; it serves the purpose of finding performance bottlenecks. Just for a change, I would really like to try Gatling, and hereafter I will post more about my Gatling experience.

Introduction to Gatling
Gatling is developed and maintained by Stéphane Landelle. It is well known for its speed and performance, as it is built on the Scala-based Akka engine. Gatling works on an asynchronous model: it does not lock a thread at the JVM level, so there is no one-user-per-thread concept. Once a thread completes a task, it is released and picks up the next one. Gatling currently focuses on the HTTP protocol, and the JMS protocol is supported as well.
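
Gatling’s actual engine is Akka on the JVM, but the asynchronous, no-thread-per-user idea can be sketched with Python’s asyncio (an analogy only, not Gatling code): a thousand simulated users all “wait” concurrently on a single thread instead of occupying a thread each.

```python
import asyncio
import threading
import time

async def virtual_user(user_id: int, results: list) -> None:
    # Simulate a request/response wait; while this user "waits",
    # the event loop is free to run all the other users.
    await asyncio.sleep(0.01)
    results.append(user_id)

async def run_load(n_users: int) -> list:
    results: list = []
    # All users run concurrently on the event loop, not one thread each
    await asyncio.gather(*(virtual_user(i, results) for i in range(n_users)))
    return results

if __name__ == "__main__":
    start = time.time()
    done = asyncio.run(run_load(1000))
    print(f"{len(done)} users in {time.time() - start:.2f}s "
          f"on {threading.active_count()} thread(s)")
```

With a thread-per-user model, 1000 users would need 1000 threads; here they share one event loop, which is the essence of why Gatling scales to large user counts.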

Download Gatling from the following location; the latest version is Gatling 2.1.7.

When you open the Gatling bundle you can see the following folders:

  • launcher scripts for the Gatling recorder and for running simulations, on both Windows and Linux
  • conf – the configuration files of Gatling are placed here
  • lib – Gatling jar files, Scala libraries, and Netty libraries
  • results – basically consists of simulation results, as HTML files
  • a folder where compiled simulation class files can be found
  • user-files – user data files are placed under the data folder of this directory; simulation files are inside /user-files/simulation

Load Testing Apache kafka using Kafkameter

You can refer to an overview of Kafka here. For installing and configuring Kafka, refer to the previous post: Configuring Apache Kafka

Step 1

Go to GitHub and download kafkameter

Step 2

Follow the steps there for building kafkameter.jar

Step 3

Place the generated jar in $JMETER_HOME/lib/ext folder

Step 4

Restart JMeter

Step 5

Add Thread Group -> Sampler -> Java Request

Step 6

Configure the input parameters, like the hostname and the Kafka topic that you created already

Step 7

Run the JMeter test and verify that the same message is consumed by the Kafka consumer.

Refer to the screenshots below


Adding a Java Request Sampler


Kafka Parameters


Sending message to Kafka Consumer


Message Consumed

Configuring Apache Kafka Environment

Kafka is used widely in social-networking websites because of its performance. Refer to the Kafka introduction for more information. Here I’m going to show how to set up Kafka in 8 simple steps.

Step 1

Download Kafka from the Apache website.

Step 2

Untar the downloaded archive.

Step 3

cd kafka_2.9.2-

Step 4

Start ZooKeeper, which is used for synchronizing producers and consumers.
bin/zookeeper-server-start.sh config/zookeeper.properties

Step 5

Start the Kafka server
bin/kafka-server-start.sh config/server.properties

Step 6

Create a topic for the Kafka producer and consumer
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic kafkatopic2

Step 7

Start a Kafka producer and type some messages
bin/kafka-console-producer.sh --broker-list localhost:9092 --sync --topic kafkatopic2

Step 8

Start a Kafka consumer and observe that the messages you typed in Step 7 get consumed.
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic kafkatopic2 --from-beginning

So you have successfully set up Kafka. We will try to add more consumers and producers in the next post.


Apache Kafka – Introduction

Apache Kafka is a distributed messaging system which serves as a substitute for traditional JMS messaging systems in the world of big data. Another way to describe Kafka, as per the Apache website, is: “Apache Kafka is publish-subscribe messaging rethought as a distributed commit log”. It was originally developed by LinkedIn and later became part of the Apache project.

Features of Kafka are
  • Faster
  • Scalable
  • Durable and
  • Distributed by Design.

Kafka has some differences compared to other message brokers like RabbitMQ and WebSphere MQ. One of its best-known advantages is performance; consumers pull data at their own pace rather than having the broker push it to them.

Components of Apache Kafka

There are five important components in Kafka, as given below:



Kafka can have multiple producers and consumers and work as a cluster in distributed model.


Kafka Cluster diagram

Kafka works on a publish-subscribe mechanism. Kafka maintains feeds of messages in “topics”; processes that publish messages to a Kafka topic are called “producers”, and processes that subscribe to topics and process the published messages are called “consumers”. Kafka runs as a cluster of one or more servers, each of which is called a broker.
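
The topic/producer/consumer mechanism above can be sketched as a toy, single-process broker in Python (an illustration of the model only, nothing like Kafka’s real implementation; names such as ToyBroker are invented):

```python
from collections import defaultdict

class ToyBroker:
    """A single-process sketch of Kafka's topic model: producers append to a
    topic's log; each consumer reads from the log at its own offset."""
    def __init__(self):
        self.topics = defaultdict(list)   # topic -> append-only message log
        self.offsets = defaultdict(int)   # (consumer, topic) -> next offset to read

    def publish(self, topic: str, message: str) -> None:
        self.topics[topic].append(message)   # producers only ever append

    def consume(self, consumer: str, topic: str) -> list:
        log = self.topics[topic]
        start = self.offsets[(consumer, topic)]
        self.offsets[(consumer, topic)] = len(log)
        return log[start:]                   # each consumer tracks its own position

broker = ToyBroker()
broker.publish("kafkatopic2", "hello")
broker.publish("kafkatopic2", "world")
print(broker.consume("c1", "kafkatopic2"))  # ['hello', 'world']
print(broker.consume("c1", "kafkatopic2"))  # [] -- c1 is caught up
print(broker.consume("c2", "kafkatopic2"))  # ['hello', 'world'] -- independent offset
```

Note that publishing does not remove messages: the log stays intact and every consumer keeps its own offset, which is the “distributed commit log” idea in miniature.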

Kafka Use cases
  • Messaging : Replacement for a more traditional message broker, Kafka has better throughput, built-in partitioning, replication, and fault-tolerance.
  • Website Activity Tracking : Real-time processing, real-time monitoring, and loading into Hadoop or offline data warehousing systems for offline processing and reporting.
  • Metrics : Aggregating statistics from distributed applications to produce centralized feeds of operational data.
  • Log Aggregation : Collects physical log files off servers and puts them in a central place (a file server or HDFS perhaps) for processing

This is just an introduction to Kafka; please refer to the Apache site for detailed documentation.

How to Transfer/Refer JMeter Variables from one Thread Group to another?


In many situations we need to refer to variables from one thread group in another. Even I found myself searching for the solution every time, so I finally decided to write it down here. There are many ways to transfer variables; I prefer the following method.

1) Set up the Thread Groups
2) Add a “BeanShell Assertion” to the sampler where you are extracting the value
3) In the Script area of the BeanShell Assertion, specify
${__setProperty(SG_Id2,${SG_Id1})}, where SG_Id2 is the property in which we store the value of
SG_Id1, so that we can refer to this value anywhere in the test plan, across all thread groups.
4) Now go to the sampler in the next Thread Group where you want to pass the value, and reference it as ${__property(SG_Id2)}
5) Now run the test and check that the value is passed from one thread group to the other
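
The reason this works: JMeter variables are local to a single thread, while properties are global to the whole JVM/test plan. As a rough analogy (not JMeter code; all names here are invented), compare thread-local storage to a shared map in Python:

```python
import threading

thread_vars = threading.local()  # like a JMeter variable: visible to one thread only
shared_props = {}                # like a JMeter property: global to the whole test plan

def thread_group_1() -> None:
    thread_vars.sg_id = "12345"                  # value "extracted" in this thread
    shared_props["SG_Id2"] = thread_vars.sg_id   # __setProperty: promote to global

def thread_group_2(out: list) -> None:
    # This thread never saw the original variable; it reads the property instead
    out.append(shared_props.get("SG_Id2"))

t1 = threading.Thread(target=thread_group_1)
t1.start(); t1.join()            # run the groups consecutively, as JMeter would
out: list = []
t2 = threading.Thread(target=thread_group_2, args=(out,))
t2.start(); t2.join()
print(out[0])  # 12345
```

Reading thread_vars.sg_id directly from the second thread would fail, exactly as ${SG_Id1} is empty in another thread group.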

Introduction to Elasticsearch

Elasticsearch is an open-source (Apache 2), distributed search engine built on top of Apache Lucene. It allows you to start with one machine and scale to ‘n’ servers with high availability.
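
As background, the core data structure behind Lucene-style full-text search is the inverted index, which maps each term to the documents containing it. A minimal Python sketch (illustrative only, far simpler than what Elasticsearch actually does; the sample documents are made up):

```python
from collections import defaultdict

def build_index(docs: dict) -> dict:
    """Map each lowercase term to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index: dict, query: str) -> set:
    # AND-match: return only documents containing every query term
    terms = query.lower().split()
    if not terms:
        return set()
    result = set(index.get(terms[0], set()))
    for term in terms[1:]:
        result &= index.get(term, set())
    return result

docs = {1: "trying out Elasticsearch",
        2: "Elasticsearch scales to many servers",
        3: "Lucene powers full text search"}
index = build_index(docs)
print(search(index, "elasticsearch"))   # {1, 2}
print(search(index, "full text"))       # {3}
```

Looking up a term is a dictionary access rather than a scan of every document, which is why this structure scales to very large corpora.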

Elasticsearch makes it easy to run a full-featured search server. It can be set up in less than five minutes. In this blog I’ll show how to:

  • Install and run Elasticsearch
  • Index data
  • Search
Installing and running Elasticsearch
  1. Download and unzip the latest version of Elasticsearch from the website.
  2. Go to the extracted folder and run bin/elasticsearch on Unix or bin/elasticsearch.bat on Windows; your terminal will show something like below.


3. Open a web browser and point it to http://localhost:9200/; you should see something like below


Indexing data

There are two main ways of adding data to Elasticsearch:

  1. JSON over HTTP
  2. Native client.

Here we use the curl command to insert data into Elasticsearch:

$ curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '{
    "user" : "kimchy",
    "post_date" : "2009-11-15T14:12:12",
    "message" : "trying out Elasticsearch"
}'

The result of the above index operation is:


{
    "_index" : "twitter",
    "_type" : "tweet",
    "_id" : "1",
    "_version" : 1,
    "created" : true
}



MITM illustrated

© 2016 Technix
