Scalafmt – Styling The Beast

Knoldus

“I have never seen elegance go out of style” – SONYA TECLAI

Scala (The Beast) has brought efficiency and elegance to the programming world by combining the coziness of object-oriented programming with the magic of functional programming. This blog highlights the work of Ólafur Páll Geirsson, who has done a great job of styling the beast. Below I will discuss my experience of using Scalafmt, its installation process, and some of its cool code-styling features. Here is the stuff we will be using in this blog:

  • Editor -> IntelliJ IDEA (v 2017)
  • Build Tool -> sbt (v 0.13.15)
  • Language -> Scala (v 2.12.1)

One of the most important aspects of good code is its readability, which comes with good, standard formatting. Ever wondered how an entire project with around 1000 Scala files of poorly formatted code could be formatted without getting a headache? Well, going Shift + Ctrl + Alt…
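The excerpt cuts off here; as a rough sketch of what the sbt route can look like (the plugin coordinates and version below are assumptions, and may differ for the sbt 0.13-era setup the post uses):

// project/plugins.sbt: pulls in the Scalafmt sbt plugin (hypothetical version)
addSbtPlugin("org.scalameta" % "sbt-scalafmt" % "2.4.6")

With the plugin enabled, a single sbt scalafmtAll run (in the 2.x plugin) reformats every tracked Scala source according to the .scalafmt.conf at the project root, no keyboard-shortcut marathon required.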

View original post 634 more words


Tutorial 3: Monitor CPU Utilization with Dynatrace

Knoldus

This is the last blog of this series; in it, we will look at how to monitor CPU utilization with Dynatrace.

Why do we always need memory analysis?

We need memory analysis to optimize garbage collection (GC) in such a way that its impact on application response time or CPU usage is minimized. If garbage collection has a negative impact on response time, our goal must be to optimize the configuration.

In Dynatrace, we have full analysis of memory utilization. A healthy system performs better. Dynatrace uses defined parameters to monitor health. These parameters use metrics such as CPU, memory, network, and disk.

CPU Profiler

Here you can see these values in the Transaction Flow on the respective agent node. Use this to identify the impact of an unhealthy host on your business transactions.

[Screenshot: CPU values on the agent node in the Transaction Flow]

We can easily go through the execution of each thread, as shown in the figure below.

[Screenshot: per-thread execution view in the CPU profiler]

You can use the filter list…

View original post 129 more words


Basics of the Gherkin Language

Knoldus

Hello everyone,

In this blog, we will discuss the Gherkin language, which we use in BDD for writing test cases. We will take a look at the topics below.

Introduction:

Gherkin’s grammar is defined as a parsing expression grammar. It is a business-readable DSL created specifically for describing behaviour without explaining how that behaviour is implemented. Gherkin is a plain-English text language.

Gherkin serves two purposes — documentation and automated tests. It is a whitespace-oriented language that uses indentation to define structure.

Gherkin includes 60 different spoken languages, so we can easily use our own language. The parser divides the input into features, scenarios, and steps.

Here is a simple example of Gherkin:

[Screenshot: a simple Gherkin feature file]

When we run this feature, it gives us step definitions. In Gherkin, each line starts with a Gherkin keyword, followed by any text you like.
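As a rough illustration of what such a step definition can look like once filled in with Scala (the class name, step text, and cucumber-scala import path below are assumptions made for the example, not taken from the post):

import io.cucumber.scala.{EN, ScalaDsl}

class CalculatorSteps extends ScalaDsl with EN {

  private var a, b, result: Int = 0

  // Matches a step such as: Given the numbers 2 and 3
  Given("""the numbers {int} and {int}""") { (x: Int, y: Int) =>
    a = x
    b = y
  }

  // Matches a step such as: When they are added
  When("""they are added""") { () =>
    result = a + b
  }

  // Matches a step such as: Then the result is 5
  Then("""the result is {int}""") { (expected: Int) =>
    assert(result == expected)
  }
}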

The main keywords are:

  • Feature
  • Scenario
  • Given
  • When
  • Then

Feature:

View original post 198 more words


Setting Up a Multi-Node Hadoop Cluster Just Got Easy!

Knoldus

In this blog, we are going to embark on the journey of setting up a multi-node Hadoop cluster in a distributed environment.

So let's not waste any time and get started.
Here are the steps you need to perform.

Prerequisites:

  1. Download and install Hadoop (version 2.7.3) on your local machine (single-node setup) from http://hadoop.apache.org/releases.html. Use Java jdk1.8.0_111.
  2. Download Apache Spark (release 1.6.2) from http://spark.apache.org/downloads.html.

1. Mapping the nodes

First of all, we have to edit the hosts file in the /etc/ folder on all nodes, specifying the IP address of each system followed by its host name.

# vi /etc/hosts

Enter the following lines in the /etc/hosts file:

192.168.1.xxx hadoop-master
192.168.1.xxx hadoop-slave-1
192.168.56.xxx hadoop-slave-2

View original post 687 more words


Creating a DSL (Domain Specific Language) using ANTLR (Part-II): Writing the Grammar File

Knoldus

Earlier, we discussed in our blog how to configure the ANTLR plugin for IntelliJ to get started with our language.

In this post, we will discuss the basics of ANTLR and exactly how we can get started with our main goal: what the lexer and parser are, what their roles are, and many other things. So let's get started.

ANTLR stands for ANother Tool for Language Recognition. The tool is able to generate a compiler or interpreter for any computer language. If you need to parse languages like Java, Scala, or PHP, then this is the thing you are looking for.
Here is a list of some projects that use ANTLR.
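That list is in the full post; in the meantime, here is a rough sketch of how a grammar's generated lexer and parser are typically driven from Scala (the Hello grammar, its greeting rule, and the HelloLexer/HelloParser classes are made-up placeholders that ANTLR would generate from your own grammar file):

import org.antlr.v4.runtime.{CharStreams, CommonTokenStream}

object ParseExample extends App {
  // Wrap the raw input text in a character stream
  val input  = CharStreams.fromString("hello world")
  // The lexer splits the character stream into tokens
  val lexer  = new HelloLexer(input)
  val tokens = new CommonTokenStream(lexer)
  // The parser assembles the tokens into a parse tree, starting from the greeting rule
  val parser = new HelloParser(tokens)
  val tree   = parser.greeting()
  println(tree.toStringTree(parser))
}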

View original post 611 more words


Boost Factorial Calculation with Spark

Knoldus

We all know that Apache Spark is a fast and general engine for large-scale data processing. It can process data up to 100x faster than Hadoop MapReduce in memory, or 10x faster on disk.

But is that (i.e., MapReduce) the only task for which Spark can be used? The answer is no. Spark is not only a big data processing engine; it is a framework that provides a distributed environment for processing data. This means we can perform any type of task using Spark.

For example, let's take factorial. We all know that calculating the factorial of large numbers is cumbersome in any programming language, and on top of that, the CPU takes a lot of time to complete the calculations. So, what can be the solution?

Well, Spark can be the solution to this problem. Let's see that in the form of code.

First, we will try to implement factorial using only Scala in a tail…
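The excerpt cuts off here, but as a rough sketch of the two approaches being compared (illustrative code, not the original post's), a tail-recursive factorial and a Spark-parallelized variant might look like this:

import scala.annotation.tailrec
import org.apache.spark.{SparkConf, SparkContext}

object Factorial {

  // Plain Scala: tail-recursive factorial accumulating into a BigInt
  @tailrec
  def factorial(n: Int, acc: BigInt = 1): BigInt =
    if (n <= 1) acc else factorial(n - 1, acc * n)

  // Spark: spread the multiplications over the cluster and combine with reduce
  def sparkFactorial(sc: SparkContext, n: Int): BigInt =
    sc.parallelize(1 to n).map(BigInt(_)).reduce(_ * _)

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("factorial").setMaster("local[*]"))
    println(factorial(100))
    println(sparkFactorial(sc, 100))
    sc.stop()
  }
}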

View original post 97 more words


Logging a Spark Application on a Standalone Cluster

Knoldus

Logging is very important for debugging an application, and logging a Spark application on a standalone cluster is a little bit different. A Spark application has two components: the driver and the executors. Spark by default uses the log4j logger to log the application, so whenever we use Spark on a local machine or in spark-shell, it uses the default log4j.properties from /spark/conf/log4j.properties, where the default logging is rootCategory=INFO, console. But when we deploy our application on a Spark standalone cluster it is different: we need to write the executor and driver logs to specific files.

So, to log a Spark application on a standalone cluster, we don't need to add log4j.properties to the application jar; instead, we should create log4j.properties files for the driver and the executor.

We need to create a separate log4j.properties file for both the executor and the driver, like the one below:

# Log everything at INFO level to a rolling file
log4j.rootCategory=INFO, FILE
log4j.appender.FILE=org.apache.log4j.RollingFileAppender
log4j.appender.FILE.File={Enter path of the file}
log4j.appender.FILE.MaxFileSize=10MB
log4j.appender.FILE.MaxBackupIndex=10
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p…
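The excerpt ends mid-pattern; one common way (an assumption here, not necessarily what the original post goes on to describe) to point the driver and executors at such files is through spark-submit's extra Java options, where the master URL, file paths, class, and jar name are placeholders:

spark-submit \
  --master spark://master-host:7077 \
  --conf "spark.driver.extraJavaOptions=-Dlog4j.configuration=file:/path/to/driver-log4j.properties" \
  --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=file:/path/to/executor-log4j.properties" \
  --class com.example.MyApp my-app.jar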

View original post 127 more words
