Continuous Delivery (CD)

Continuous Delivery (CD) expands upon continuous integration by deploying all code changes to a testing environment after the build stage. When continuous delivery is implemented properly, we will always have a deployment-ready build artifact that has passed through a standardized test process.

I am going to give a generic logical flow of a delivery pipeline in which everything is automated for seamless delivery. However, this flow may vary from organisation to organisation as per requirements.

Let’s assume that we have 3 deployment environments (Development, Staging/QA and Production), then a typical delivery pipeline would have the following steps:

  1. Developers develop and commit code changes to a version control system such as Git.
  2. A Continuous Integration/Delivery automation server such as Jenkins pulls this code from the repository using the Git plugin, builds it using a build tool such as Ant, Maven, or MSBuild, and runs the unit tests.
  3. The same code base then passes through static code analysis; an artifact is created with a unique version number and published to an artifact repository tool such as Nexus.
  4. A configuration management tool such as Chef provisions the testing environment and deploys the app; Jenkins and Chef together then release the same code onto the different test environments, where testing is done using test automation tools such as Selenium.
  5. Once the code is tested, the same code is sent for deployment to the production server on a defined schedule.
  6. After deployment, the application is continuously monitored by tools such as Nagios.
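The six steps above can also be sketched as a declarative Jenkins pipeline. This is only an illustration: the repository URL, stage names, and the deploy script below are hypothetical placeholders, not part of any particular setup.

```groovy
pipeline {
    agent any
    stages {
        stage('Build & Unit Test') {
            steps {
                git 'https://example.com/your-repo.git'   // placeholder repository URL
                sh 'mvn clean install'                    // build and run unit tests
            }
        }
        stage('Static Code Analysis') {
            steps { sh 'mvn sonar:sonar' }                // assumes a configured SonarQube server
        }
        stage('Publish Artifact') {
            steps { sh 'mvn deploy' }                     // publish versioned artifact to Nexus
        }
        stage('Deploy to Test') {
            steps { sh './deploy-to-test.sh' }            // e.g. driven by Chef, as in the text above
        }
    }
}
```

In this sketch each stage maps to one of the numbered steps; the freestyle jobs built later in this series implement the same flow job by job.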

Achieving Continuous Deployment (CD) as part of DevOps Practices:
We can achieve CD in two ways, depending on the requirement.
1. Using Jenkins alone without using any Configuration Management Tools like Puppet/Chef/Ansible.
2. Using Jenkins and Configuration Management Tools like Puppet/Chef/Ansible.

Let’s see both, one by one, with a Java application built with Maven (you may use .NET with MSBuild)…

For the next steps, I assume you have a Jenkins instance and a remote Tomcat server up and running, with administrator privileges.
1. Continuous Deployment with Jenkins & without using Chef: by deploying a WAR file from Jenkins to the remote Tomcat server of the Staging/QA environment.

Step 1. Generate & Archive the Artifacts in a Package job to produce a war/ear file:
First create a job called ‘Package’. In ‘Post-build Actions’ of your build job > select ‘Archive Artifacts’ and provide ‘Files to archive’ as **/*.war > Save > Build.

Step 2. Install required plugins: Install the Copy Artifact and Deploy to Container plugins. Also install and configure Tomcat on the desired QA/Stage environment, and edit the tomcat-users file in Tomcat’s conf folder to add a user with the role ‘manager-script’ in order to accept remote deployments.
Ex: <role rolename="manager-script"/>
<user username="admin" password="tomcat" roles="manager-gui,manager-script"/>
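For reference, the same deployment that the Deploy to Container plugin performs can be sketched by hand against Tomcat’s manager-script API. The host, credentials, and context path below are hypothetical placeholders, and the command is only composed and echoed here, not executed:

```shell
# Compose (but do not run) a manual WAR deployment against Tomcat's
# manager-script API -- the API that the 'manager-script' role above enables.
TOMCAT="http://qa-host:8080/manager/text"   # hypothetical QA/Stage host
WAR="target/myapp.war"                      # hypothetical artifact path
# -u user:password must match the user defined in tomcat-users.xml
DEPLOY_CMD="curl -u admin:tomcat -T ${WAR} '${TOMCAT}/deploy?path=/myapp&update=true'"
echo "${DEPLOY_CMD}"
```

Running the echoed command on a real host would upload the WAR and deploy it at the /myapp context.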

Step 3. Deploy the archived artifact to the defined environment: in the next job, named DeployToStage, pick up the archived WAR artifact and deploy it to the QA/Stage environment.
a). First create a job called DeployToStage, then add a build step: select ‘Copy artifacts from another project’ and provide the project name and the artifacts to copy.
b). In ‘Post-build Actions’, select ‘Deploy war/ear to a container’ from the ‘Add post-build action’ dropdown. Fill in the WAR file, context path, and Tomcat details > Save > Run.

Now, you can browse to your application in the Staging environment via its URL.

In the next episode, we are going to see how we can achieve Continuous Deployment (CD) using Jenkins and a configuration management tool called Chef.


Spring Security

Spring Security provides comprehensive security services for Java EE-based enterprise software applications. There is a particular emphasis on supporting projects built using the Spring Framework, which is the leading Java EE solution for enterprise software development.

As you probably know, the two major areas of application security are “authentication” and “authorization” (or “access-control”). These are the two main areas that Spring Security targets. “Authentication” is the process of establishing that a principal is who they claim to be (a “principal” generally means a user, device or some other system which can perform an action in your application). “Authorization” refers to the process of deciding whether a principal is allowed to perform an action within your application. To arrive at the point where an authorization decision is needed, the identity of the principal has already been established by the authentication process. These concepts are common, and not at all specific to Spring Security.

At an authentication level, Spring Security supports a wide range of authentication models. Most of these authentication models are either provided by third parties, or are developed by relevant standards bodies such as the Internet Engineering Task Force. In addition, Spring Security provides its own set of authentication features.

Java Configuration
General support for Java Configuration was added to Spring Framework in Spring 3.1. Since Spring Security 3.2 there has been Spring Security Java Configuration support which enables users to easily configure Spring Security without the use of any XML.

Technologies used :

  1. Spring 3.2.2.RELEASE
  2. Spring Security 3.2.2.RELEASE
  3. Hibernate 4.2.1.Final
  4. MySQL Server 5.1.25
  5. Tomcat 7 (Servlet 3.x container)

Note: Add all these dependencies in pom.xml

Web/Spring Security Java Configuration
The first step is to create our Spring Security Java Configuration. The configuration creates a Servlet Filter known as the springSecurityFilterChain, which is responsible for all the security (protecting the application URLs, validating submitted usernames and passwords, redirecting to the login form, etc.) within your application. You can find the most basic example of a Spring Security Java Configuration below:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.method.configuration.EnableGlobalMethodSecurity;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
import org.springframework.security.core.userdetails.UserDetailsService;

@Configuration
@EnableWebSecurity
@EnableGlobalMethodSecurity(securedEnabled = true)
public class AppSecurityConfig extends WebSecurityConfigurerAdapter {

  @Autowired UserDetailsService userDetailsService;
  @Autowired CustomSuccessHandler customSuccessHandler;

  @Autowired
  public void configureGlobalSecurity(AuthenticationManagerBuilder auth) throws Exception {
      auth.userDetailsService(userDetailsService);   // authenticate against our user store
  }

  @Override
  protected void configure(HttpSecurity http) throws Exception {
      http.authorizeRequests()
          .antMatchers("/index**", "/home**", "/login**", "/resources**", "/pages**").permitAll()
          .anyRequest().authenticated()
          .and().formLogin().successHandler(customSuccessHandler);
  }
}

AbstractSecurityWebApplicationInitializer with Spring MVC
If you are using Spring elsewhere in your application, you probably already have a WebApplicationInitializer that loads your Spring configuration. If we used the previous configuration we would get an error. Instead, we should register Spring Security with the existing ApplicationContext. For example, if we were using Spring MVC, our SecurityWebApplicationInitializer would look something like the following:

public class SecurityWebApplicationInitializer extends AbstractSecurityWebApplicationInitializer {
}
This simply registers the springSecurityFilterChain filter for every URL in your application.

Authorize Requests
Our examples have only required users to be authenticated and have done so for every URL in our application. We can specify custom requirements for our URLs by adding multiple children to our http.authorizeRequests() method. For example:
protected void configure(HttpSecurity http) throws Exception {
    http.authorizeRequests()
        .antMatchers("/index**", "/home**", "/login**", "/resources**", "/pages**").permitAll()
        .antMatchers("/admin/**").hasRole("ADMIN")
        .anyRequest().authenticated();
}

Create a new custom class that implements AuthenticationSuccessHandler, then add your logic for how you want to handle a successful login. In this example, whenever the user successfully logs in, we add the username and roles to the session and redirect the user to the appropriate home page.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.authority.AuthorityUtils;
import org.springframework.security.core.context.SecurityContextHolder;
import org.springframework.security.core.userdetails.UserDetails;
import org.springframework.security.web.DefaultRedirectStrategy;
import org.springframework.security.web.RedirectStrategy;
import org.springframework.security.web.WebAttributes;
import org.springframework.security.web.authentication.AuthenticationSuccessHandler;
import org.springframework.stereotype.Component;

@Component
public class CustomSuccessHandler implements AuthenticationSuccessHandler {

  private RedirectStrategy redirectStrategy = new DefaultRedirectStrategy();

  @Autowired PermissionsRepository permissionsRepository;
  @Autowired PageRepository pageRepository;

  @Override
  public void onAuthenticationSuccess(HttpServletRequest request, HttpServletResponse response, Authentication authentication) throws IOException, ServletException {
      handle(request, response, authentication);
      HttpSession session = request.getSession(false);
      if (session != null) {
          session.setMaxInactiveInterval(10 * 60);   // 10-minute session timeout
          UserDetails authUser = (UserDetails) SecurityContextHolder.getContext().getAuthentication().getPrincipal();
          session.setAttribute("userName", authUser.getUsername());
          session.setAttribute("authorities", authentication.getAuthorities());
      }
      clearAuthenticationAttributes(request);
  }

  protected void handle(final HttpServletRequest request, final HttpServletResponse response, final Authentication authentication) throws IOException {
      final String targetUrl = determineTargetUrl(authentication);
      if (response.isCommitted()) {
          System.out.println("Response already committed; unable to redirect to " + targetUrl);
          return;
      }
      redirectStrategy.sendRedirect(request, response, targetUrl);
  }

  protected String determineTargetUrl(Authentication authentication) {
      Set<String> roles = AuthorityUtils.authorityListToSet(authentication.getAuthorities());
      List<Permissions> availablePermissions = permissionsRepository.findAll();
      List<String> permissionsList = new ArrayList<>();
      for (Permissions avlblPermissions : availablePermissions) {
          permissionsList.add(avlblPermissions.getName());   // assumes a getName() accessor on the entity
      }
      System.out.println("List of Permissions are " + permissionsList);
      List<Page> pagesURL = pageRepository.findAll();
      List<String> availablePageUrlList = new ArrayList<>();
      for (Page pageUrl : pagesURL) {
          availablePageUrlList.add(pageUrl.getUrl());        // assumes a getUrl() accessor on the entity
      }
      System.out.println("List of available Pages are " + availablePageUrlList);

      if (roles.contains("ROLE_ADMIN") && permissionsList.contains("DELETE_PRIVILEGE") && availablePageUrlList.contains("http://localhost:8080/admin")) {
          return "/admin";
      } else if (roles.contains("ROLE_MANAGER") && permissionsList.contains("WRITE_PRIVILEGE") && availablePageUrlList.contains("http://localhost:8080/manager")) {
          return "/manager";
      } else if (roles.contains("ROLE_USER") && permissionsList.contains("READ_PRIVILEGE") && availablePageUrlList.contains("http://localhost:8080/user")) {
          return "/user";
      } else {
          return "/403";
      }
  }

  protected void clearAuthenticationAttributes(HttpServletRequest request) {
      HttpSession session = request.getSession(false);
      if (session == null) {
          return;
      }
      session.removeAttribute(WebAttributes.AUTHENTICATION_EXCEPTION);
  }

  public RedirectStrategy getRedirectStrategy() {
      return redirectStrategy;
  }

  public void setRedirectStrategy(RedirectStrategy redirectStrategy) {
      this.redirectStrategy = redirectStrategy;
  }
}
Note: In this example, we are fetching and checking all the users, roles, privileges and page URLs dynamically from the database.

I have implemented user-specific controllers, as below for ‘admin’; the same pattern applies to Manager/DBA/User.

@Controller
@RequestMapping(value = "/admin")
public class AdminController extends SecurityLoginController {
  private static final String viewPrefix = "security/Pages/admin";
  private static final String accessDeniedViewPrefix = "security/AccessDenied/";
  @Autowired
  private UsersRepository userRepository;
  @Autowired
  private UsersService userService;

  @RequestMapping(value = "", method = RequestMethod.GET)
  public String adminPage(ModelMap model) {
      model.addAttribute("user", getPrincipal());
      return viewPrefix;
  }
}

Case 1. Open the browser with the URL http://localhost:8080/login, enter the user “” and password “admin123”, and click on Login.


You will see the Home screen below, as the user is only a normal user.

Case 2. Try to access a password-protected, unauthorized page: http://localhost:8080/admin. An access-denied page is displayed, as the user is a normal user and is not authorized to access the admin page.


Similarly, if you try to access the http://localhost:8080/manager page, the screen below is displayed.


Case 3. If you try to log in with wrong credentials, the screen below is displayed.



DevOps is a set of practices that emphasizes the communication and collaboration of software Developers, Testers, Operations professionals and other stakeholders while automating the process of software delivery and infrastructure changes. It aims at establishing a culture and environment where building, testing, and releasing software can happen quickly, frequently, and more reliably.


DevOps Tool-chain

Because DevOps is a cultural shift and collaboration between development, operations and testing, there is no single DevOps tool, rather a set or “DevOps toolchain” consisting of multiple tools. Generally, DevOps tools fit into one or more of these categories, which is reflective of the software development and delivery process.

Code – Code development and review, Version control tools, code merging

Build – Continuous integration tools, build status

Test – Continuous testing, test results that determine performance and quality

Package – Artifact repository, application pre-deployment staging

Release – Change management, release approvals, release automation

Configure – Infrastructure configuration and management, Infrastructure as Code tools

Monitor – Applications performance monitoring, end user experience

Though there are many tools available, certain categories of them are essential in the DevOps tool chain setup for use in an organization.

Tools such as Docker (containerization), Jenkins (Continuous Integration), Chef (Infrastructure as Code) and Vagrant (Virtualization Platform) among many others are often used and discussed.

In DevOps, Continuous Integration (CI), Continuous Delivery (CD) and Continuous Testing (CT) are 3 key aspects which are briefed below.

Continuous Integration (CI): Is a software development practice where developers regularly merge their code changes into a central repository, after which automated builds and tests are run. The key goals of continuous integration are to find and address bugs quicker, improve software quality, and reduce the time it takes to validate and release new software updates.

Continuous Delivery (CD): Is a software development practice where code changes are automatically built, tested, and prepared for a release to production. It expands upon continuous integration by deploying all code changes to a testing environment and/or a production environment after the build stage. When continuous delivery is implemented properly, developers will always have a Deployment-ready build artifact that has passed through a standardized test process.

Continuous Testing (CT): Is the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with a software release candidate.

What is the difference between Continuous Delivery and Continuous Deployment?


Continuous Delivery (CD): Automates the software release process up to production. Every revision that is committed triggers an automated flow that builds, tests, and then stages the update. The final decision to deploy to a live production environment is made by a developer/release manager.

Continuous Deployment: Goes a step further than continuous delivery: revisions are deployed to a production environment automatically, without explicit approval from a developer, making the entire software release process automated.

Advantages of DevOps:
  • Quick to Market.
  • Reliability in Delivery (no human errors)
  • Scale at ease (via configuration management tools).
  • Improved collaboration (sharing work between dev & ops reduces risk).
  • Secure.


Docker Containers enclose(wrap) a piece of software in a complete file-system that contains everything needed to run: code, run-time, system tools, system libraries – anything that can be installed on a server. This guarantees that the software will always run the same, regardless of its environment.

What drove Docker’s adoption?
Docker is a tool that can package an application and its dependencies in a virtual container that can run on any Linux server (not Windows, since containers rely on the host’s Linux kernel), irrespective of language. This enables flexibility and portability in where the application can run, whether on premises, public cloud, private cloud, bare metal, etc.
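As a minimal sketch of that packaging idea, a Dockerfile bundles an application with its runtime. The base image and file names below are hypothetical placeholders, not from any particular project:

```dockerfile
# Hypothetical example: ship a Java app together with its JRE.
FROM openjdk:8-jre
COPY target/myapp.jar /opt/myapp/myapp.jar
CMD ["java", "-jar", "/opt/myapp/myapp.jar"]
```

Building this produces an image that runs identically on any Docker host, which is exactly the portability claim above.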

What is the difference between VM’s and Docker Containers?

  1. A Docker container, unlike a virtual machine, does not require a separate operating system. Instead, it relies on the kernel’s functionality and uses resource isolation (CPU, memory, block I/O, network, etc.). Docker accesses the Linux kernel’s virtualization features either directly using the libcontainer library, which is available as of Docker 0.9, or indirectly via libvirt, LXC (Linux Containers) or systemd-nspawn.
  2. Size: VMs are very large, which makes them impractical to store and transfer.
  3. Performance: running VMs consumes significant CPU and memory.
  4. Portability: containers can be moved to any Linux VM/machine.

Which one to Use?
In reality, both are complementary technologies(VMs and Containers are better together) for achieving maximum agility. (***Docker Containers can run inside Virtual Machines).

****Both VM’s and containers are IaaS solutions.

For application/software portability, Docker is your safest bet. For machine portability and greater (hardware-level) isolation, go with a VM.

IMP Note:
Docker containers are open source, secure (isolated from each other) and so lightweight that a single server or virtual machine can run several containers simultaneously. A 2016 analysis found that a typical Docker use case involves running five containers per host, but that many organizations run 10 or more.

Docker can be integrated into various infrastructure tools, including Amazon Web Services, Microsoft Azure, Ansible, Chef, Jenkins, Puppet, Salt, Vagrant, Google Cloud Platform, IBM Bluemix, Jelastic, OpenStack Nova, HPE Helion Stackato, and VMware vSphere Integrated Containers.

Docker in Details – briefly
Docker builds upon Linux Containers (LXC) and consists of three parts: the Docker daemon, Docker images, and Docker repositories, which together make Linux containers easy and fun to use.

Docker Daemon: runs as root and orchestrates all running containers.

Docker images: Just as virtual machines are based on images, Docker containers are based on Docker images, which are tiny compared to virtual machine images and are stackable.

Registry: A service responsible for hosting and distributing images. The default registry is the Docker Hub.

Repository: A Docker repository is a collection of Docker images with the same name but different tags.

Tag: An alphanumeric identifier attached to images within a repository (e.g., 14.04 or stable ).
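Putting the terms together, a full image reference has the form [registry/]repository[:tag]. A small illustration (the image name below is just an example) splitting such a reference with shell parameter expansion:

```shell
# Split an image reference into its repository and tag parts.
IMAGE="docker.io/library/ubuntu:14.04"   # example reference: registry/repository:tag
REPO="${IMAGE%:*}"    # strip the trailing :tag
TAG="${IMAGE##*:}"    # keep only the tag
echo "repository=${REPO} tag=${TAG}"
```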

Use Case: Spinning up a Docker Container on Ubuntu(14.04)
Docker has two important installation requirements:

  • Docker only works on a 64-bit Linux installation.
  • Docker requires version 3.10 or higher of the Linux kernel.

To check the Ubuntu version, run:    # cat /etc/lsb-release                   // o/p:  14.04.4 LTS
To check your current kernel version, open a terminal and use   # sudo uname -r          //  o/p:  3.13

Installation of Docker
Step 1: Ensure the list of available packages is up to date before installing anything new. Log in as the root user, then:
# apt-get update
Let’s install Docker by installing the docker.io package (the Ubuntu package name for Docker):
# apt-get install docker.io
Now check the docker version using    # docker version
Optionally, we can configure Docker to start when the server boots:
# update-rc.d docker defaults
And then we’ll start the docker service:
# service docker restart

Step 2: Download a Docker Container
There are many community containers already available, which can be found through a search. In the command below I am searching for the keyword ubuntu:
# docker search ubuntu      // displays the list of available images
Let’s begin using Docker! Download the ubuntu Docker image:
# docker pull ubuntu
Now you can see all downloaded images by using the command:    # docker images

Step 3: Create & Run a Docker Container
Now, to set up a basic ubuntu container with a bash shell, we just run one command. docker run will run a command in a new container, -i attaches stdin and stdout, -t allocates a tty, and we’re using the standard ubuntu container.
# docker run -i -t ubuntu /bin/bash
That’s it! You’re now using a bash shell inside of a ubuntu docker container.
To disconnect, or detach, from the shell without exiting, use the escape sequence Ctrl-p + Ctrl-q.
But the container will stop when you leave it with the command exit.
### If you would like a container running in the background like a daemon, just add the -d option to the command, and optionally give it something to print.
# docker run -d ubuntu /bin/sh -c "while true; do echo Hello Ram Howdy?; sleep 1; done"
### Use below command to see all the containers that are  running in the background.
# docker ps    
Now you can check the logs with this command:   
# docker logs 68a29978b064  //ContainerId – take 1st 12 digits of the long form Id
#### If you would like to remove the container, first stop it and then remove it with the commands:
# docker stop 68a29978b064               // in place of stop you can also use keywords like start/restart
# docker rm 68a29978b064              // removes the container

Install & Run Jenkins 2.0 in “Docker Container”
Step 1: First, pull the official jenkins image from Docker repository.
# docker pull jenkins
Step 2: As Jenkins’ default plugin capabilities won’t be sufficient to build a DevOps pipeline, we should implement a data volume container to provide simple backup capabilities and to extend the official image to include some plugins via the core-support plugin format.
# docker create -v /var/jenkins_home --name jenkins-dv jenkins
This command uses the ‘/var/jenkins_home’ directory volume as per the official image and provides a name, ‘jenkins-dv’, to identify the data volume container.
Step 3: To use the data volume container with an image, you use the ‘--volumes-from’ flag to mount the ‘/var/jenkins_home’ volume in another container:
# docker run -d -p 8080:8080 --volumes-from jenkins-dv --name jenkins-master jenkins
Step 4: Once you have the docker container running you can go to http://IP:8080 to see the Jenkins instance running. This instance is storing data in the volume container you set up in step 2, so if you set up a job and stop the container the data is persisted.
Upon hitting the URL http://IP:8080, it asks for a password; run the command below, then copy and paste the password into the Jenkins login screen >
# docker exec jenkins-master cat /var/jenkins_home/secrets/initialAdminPassword
Next-Install Plugins >Provide login credentials >start jenkins

Backing up the data from the volume container is very simple. Just run:
# docker cp jenkins-dv:/var/jenkins_home /opt/jenkins-backup
Once this operation is complete on your local machine in ‘/opt/jenkins-backup’ you will find a ‘jenkins_home’ directory backup. You could now use this to populate a new data volume container.


Docker is a virtualization platform which helps developers deploy their applications, and system administrators manage applications, in a safe virtual container environment. Docker runs on 64-bit architectures and the kernel should be version 3.10 or higher. With Docker, you can build and run your application inside a container and then move your containers to other machines running Docker without any worries.

Continuous Integration (CI)

Continuous Integration(CI) is a software development practice where developers regularly merge their code changes into a central repository, after which automated builds and tests are run. The key goals of continuous integration are to find and address bugs quicker, improve software quality, and reduce the time it takes to validate and release new software updates.

I will walk you through the baby steps of implementing Continuous Integration (CI) for a Java EE, Maven-based Spring Petclinic project, as follows.

Prerequisites

  • Linux Ubuntu Operating System 14.04 LTS.
  • Java JDK/JRE
  • DBMS (MySQL in my case)

Step 1: Install Jenkins
# wget -q -O - | sudo apt-key add -
# sudo sh -c 'echo deb binary/ > /etc/apt/sources.list.d/jenkins.list'
# sudo apt-get update
# sudo apt-get install jenkins

By default, Jenkins listens on port 8080. Access this port with a browser to start configuration.
If we start it locally, we can see it running at http://localhost:8080 (or http://IP:8080).

Just follow the instructions…

Open the file /var/lib/jenkins/secrets/initialAdminPassword shown above, copy the password and paste it here > click Continue.

Click the ‘Install suggested plugins’ option and wait a couple of minutes, which takes you to the ‘Create First Admin User’ page.

Provide all the required user details > click ‘Save and Finish’ > ‘Start using Jenkins’. This takes you to the welcome page below.

Step 2: Configure Jenkins System
Install Git and Maven on Ubuntu and Configure these two along with JDK in Jenkins

Step (a). Install Git:
# apt-get install git
Verify git:  # git --version                  // gives you output like ‘git version 2.7.4’

Step (b). Install Maven:
# apt-get install maven          
Verify maven:  # mvn --version      // gives you output like Apache Maven version 3.3.9 along with the Maven & Java paths.

Step (c). Configure Git, Maven and JDK in Jenkins
Now Go to Jenkins>Manage jenkins>Global Tool Configuration and provide the tools paths.

Step (d). Install Few more Plugins in Jenkins (configure them if require):
Manage Jenkins>Manage Plugins>Available>select the Sonar integration plugin, the Role-based Authorization Strategy plugin, and the Mask Passwords plugin, then restart Jenkins.

Step 3: Install MySQL, SonarQube, and Sonar-runner on Ubuntu and configure them in Jenkins:

a). Installation of Mysql:
# sudo apt-get -y install mysql-server-5.6

Now, login to MySQL through terminal to create Sonar Database:
# mysql -u root -p
Create the database and a user with permissions:
CREATE DATABASE sonar CHARACTER SET utf8;
GRANT ALL ON sonar.* TO 'sonar'@'%' IDENTIFIED BY 'sonar';
GRANT ALL ON sonar.* TO 'sonar'@'localhost' IDENTIFIED BY 'sonar';

Note: Post-installation of MySQL, if you want to change settings such as the bind address in Ubuntu 15.x/16.x onwards, edit and save the file below:

sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf

b). Download & Install Sonarqube:
# wget
# unzip

Edit (if you wish to use a custom database for consistency)
nano /opt/sonarqube-6.1/conf/ with your favorite text editor, and modify it.

#MySQL DB settings:

#Web Server settings    // required irrespective of DB
The following settings allow you to run the server on page http://localhost:9000/sonar
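For reference, the entries typically modified in SonarQube’s configuration are its standard JDBC and web-server properties. The values below are illustrative; adjust them to your own database and context path:

```properties
# MySQL DB settings
sonar.jdbc.username=sonar
sonar.jdbc.password=sonar
sonar.jdbc.url=jdbc:mysql://localhost:3306/sonar?useUnicode=true&characterEncoding=utf8

# Web Server settings (required irrespective of DB)
sonar.web.context=/sonar
sonar.web.port=9000
```

With these values, the JDBC settings point SonarQube at the ‘sonar’ database and user created above, and the web settings serve it at http://localhost:9000/sonar.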

Step c). Download & Install sonar-runner
# wget
# unzip

Open /opt/sonar-runner-2.4/conf/ in a text editor, and modify it as below.            // save > exit and restart the sonar service

Configure Sonarqube and runner paths in Ubuntu
#nano /etc/environment

Restart and Verify Sonarqube Installation:
# sudo /opt/sonar/bin/linux-x86-64/ restart

Browse the url http://IP:9000/sonar to confirm its installation.

Now, Configure both Sonarqube & Sonar-runner in Jenkins: Manage Jenkins>Configure System and provide details as in below>save.

Also, Configure Sonar-Runner in Manage Jenkins>Global Tool Configuration.

Step 4:  Secure Jenkins (add users for access, if required)
The easiest way is to use Jenkins’ own user database. Create at least the user “Anonymous” with read access. Also create entries for the users you want to add, or allow users to sign up on the login page via Manage Jenkins>Configure Global Security: select ‘Allow users to sign up’ as well as Role-Based Strategy (a plugin already installed)>save.

Create users: A Returning user/admin can create users from Manage Jenkins>Manage Users>Create User

First-time users will be redirected to the create-user screen after clicking “Create an account”.

Manage & Assign Roles:
Go to Manage Jenkins>Manage & Assign Roles>create the roles and assign the permissions from ‘Manage & Assign Roles’ as below.

Step 5. Download & Install Nexus Repository & Configure
Nexus Setup:
Step a). Add a user for nexus
sudo adduser --home /opt/nexus --disabled-login --disabled-password nexus

Step b). Change into that user, move to the home directory, and unpack your Nexus download (fetched via a link or with wget):
#  wget
sudo su – nexus
tar -xzf /home/USER/Downloads/nexus-2.14.1-01-bundle.tar.gz
Then switch back to our normal user again, using the ‘exit’ command.

Step c). Now we setup the init script:
sudo ln -s /opt/nexus/nexus-2.14.1-01/bin/nexus /etc/init.d/nexus
In the init script, make sure the following variable is changed to the right value (it sits a few lines down, commented out by default):
RUN_AS_USER=nexus
##make the above file executable
# cd /opt/nexus/nexus-2.14.1-01/bin
# sudo chmod a+x nexus
Next, change ownership of the extracted Nexus OSS folder recursively to the nexus user and nexus group with the command:
# sudo chown -R nexus:nexus /opt/nexus/nexus-2.14.1-01/
Now you can change to user nexus and start the nexus oss
su – nexus
cd /opt/nexus/nexus-2.14.1-01/bin
./nexus start
Type the URL http://IP:8081/nexus/ and verify that the application is live.

Login: with admin/admin123 credentials

Step d ). Create a Repository:
Click the ‘+ Add’ dropdown at the top>select ‘Hosted Repository’, provide the Repository ID (in my case ‘PC’) and the Repository Name ‘Petclinic’, leave the rest at defaults, and save.

Artifact Upload:
Click ‘Artifact Upload’ in the Petclinic window>select GAV Parameters, provide the Group ID (Dev), Artifact (spring-petclinic), Version (1.1), and Packaging (zip/war/ear)>click ‘Select Artifacts to Upload’, browse to any text file and open it>click Add Artifact>Upload Artifact>ok.

Now you can check the updated folder structure by clicking any button in the Petclinic window.

Step 6: Now, it is time to create jobs in Jenkins
Create a Folder:
First, create a folder named PetClinic, for instance, which will be specific to your project (you may have more projects in future).
Jenkins>New Item>Enter Name and Select Folder>ok>Save.

Similarly, create a Delivery Pipeline view by clicking the + button on the Jenkins dashboard.

Job1:  Build:
Step a). Now click the PetClinic folder>New Item>Freestyle project>ok.

Step b). On the page that opens, under General, select ‘Delivery Pipeline configuration’ and provide the Stage Name ‘Build’ and Task Name ‘Compile’. Then select ‘This project is parameterized’, add two String Parameters one after another from the ‘Add Parameter’ dropdown, and name them BUILD_LABEL and COMMIT_ID.

Step c). Under Source Code Management, select Git and provide your repository URL; click the Add button, provide your Git credentials, and select them>save.

Step d). In the Build section>‘Add build step’>select ‘Invoke top-level Maven targets’ and provide the goal as
install -Dmaven.test.skip=true

Step e). Select ‘Trigger parameterized build on other projects’ from the ‘Add post-build action’ dropdown at the bottom of the page, and provide the info as below.

Create the Build job:
Save>click ‘Build Now’; the Build job will compile the source code successfully.

Job2: UnitTests
Step a). Click ‘New Item’ inside the folder, name it UnitTests>scroll to the bottom and provide the ‘Build’ job name in the ‘Copy from’ textbox>ok.

**** Below are the changes from the Build job, as we created this UnitTests job by copying its configuration.

Step b). In General-Delivery Pipeline configuration, provide Stage Name as ‘Build’ and Task Name as ‘Unit Test’.

Step c). Click Advanced at the bottom right of General>select ‘Use custom workspace’ and provide the Build job path, i.e. /var/lib/jenkins/workspace/PetClinic/Build, and select nothing under Source Code Management.

Step d). Under Build, click ‘Add build step’, select ‘Invoke top-level Maven targets’ and provide the goal as
test

Step e). Select ‘Publish JUnit test result report’ from ‘Add post-build action’ and provide the Test report XMLs as
target/surefire-reports/*.xml

Step f). In the post-build actions (copied from ‘Build’) at the bottom of the page, change ‘Projects to build’ to Static Code Analysis>Save.

Trigger a build; after the UnitTests job succeeds, click Test Result to see the unit test results.

Job 3: Static Code Analysis
Step a). Click New Item inside the folder, name it Static Code Analysis>scroll to the bottom and provide the ‘UnitTests’ job name in the ‘Copy from’ textbox>OK.

****Below are the changes from the UnitTests job, since we created this job by copying its configuration.

Step b). In General-Delivery Pipeline configuration, provide Stage Name as ‘Static Code Analysis’ and Task Name as ‘Code Quality Check’.

Step c). Under Build, click ‘Add build step’, select ‘Execute shell’ and provide the commands below to remove the project’s existing sonar properties file.
# Generally this two-line script is not needed; it is specific to Petclinic, and we are removing the unwanted file.
cd /var/lib/jenkins/workspace/PetClinic/Build
rm -f sonar-project.properties   # file name assumed; adjust to your project

Step d). Again under Build, click ‘Add build step’, select ‘Execute SonarQube Scanner’ and provide the analysis properties below to run static code analysis on the project.
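For reference, the analysis properties might look like the sketch below; the project key, name, version and paths are illustrative and must match your own project layout:

```properties
# Illustrative SonarQube analysis properties for a Maven project
sonar.projectKey=PetClinic
sonar.projectName=PetClinic
sonar.projectVersion=1.0
sonar.sources=src/main/java
sonar.java.binaries=target/classes
sonar.sourceEncoding=UTF-8
```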

Step e). In the post-build actions (copied from ‘Build’) at the bottom of the page, change ‘Projects to build’ to Package>Save.
On success, this job triggers the ‘Package’ job.
Browse the URL http://IP:9000/sonar/ to observe the code quality metrics.

Job 4: Package
Step a). Click New Item inside the folder, name it Package>scroll to the bottom and provide the ‘UnitTests’ job name in the ‘Copy from’ textbox>OK.

****Below are the changes from the ‘UnitTests’ job, since we created this job by copying its configuration.

Step b). In General-Delivery Pipeline configuration, provide Stage Name as ‘Package’ and Task Name as ‘Packaging Source Code’.

Step c). Select ‘Mask passwords’>click Add and provide the Name as NexusUserName and the password as your admin user name (in my case, admin). Click Add again and provide the Name as NexusPassword and the password as your admin user password (in my case, admin123).

Step d). Under Build, click ‘Add build step’, select ‘Execute shell’ and provide the command below to zip the packaged source code (the archive name is illustrative; note that with zip the archive name comes first):
zip -r spring-petclinic.zip .

Step e). Under Build, click ‘Add build step’, select ‘Execute shell’ and provide the command below to upload the package to the Nexus artifact repository (the endpoint shown is the standard Nexus 2 artifact-upload REST API; adjust the host/IP and archive name to your setup).

curl -F "r=PC" -F "hasPom=false" -F "e=zip" -F "g=Dev" -F "a=spring-petclinic" -F "v=${BUILD_LABEL}" -F "p=zip" -F "file=@spring-petclinic.zip" -u ${NexusUserName}:${NexusPassword} http://IP:8081/nexus/service/local/artifact/maven/content

Note: Make sure you install zip on your Ubuntu machine before running the Package job.

Step f). In the post-build actions at the bottom of the page, change ‘Projects to build’ to the next job in your pipeline (here, the ‘Approval’ job)>Save.
Trigger a build; after the Package job succeeds,
browse the Nexus URL http://IP:8081/nexus/ to find the package.

Finally, observe Continuous Integration in the delivery pipeline below on the Jenkins Dashboard.

Delivery Pipeline Full Screen view of CI:


Note: Here I just added an ‘Approval’ job to the delivery pipeline to keep readers interested in our next DevOps article on ‘Continuous Delivery (CD)’.

Build and Run a simple Angular2 Application

Angular 2 is a structural framework for building dynamic, client-side web applications for desktop and mobile. It lets you use HTML as your template language and extend HTML’s syntax to express your application’s components clearly and succinctly. In this article, I’ll show you how to create a simple application that has the structure of a real-world Angular application and displays a simple message.

This application uses Angular 2, TypeScript, Angular CLI, Node, NPM, Java EE, Eclipse Neon and Google Chrome.


  • Download and install the latest version of the Java JDK & Eclipse Neon as the IDE.

Environment Setup & Steps To Run Angular2 Application in ‘Eclipse’ IDE:

  1. Install the latest versions of Node.js & NPM, which act as the JavaScript runtime environment and its package manager.
  2. Install Angular CLI which acts as a command line interface for Angular.
  3. Drag & drop the Angular2 Eclipse plugin (which is internally packed with the required plugins) into Eclipse from the Eclipse Marketplace.

  1. File>New>Dynamic Web Project>Name it and click Finish.
  2. Now Angular CLI comes into the picture: it generates Angular 2 components and services and allows you to work with an Angular 2 application out of the box.
  3. Right click on the Project>select Show in Local Terminal>Terminal.

Inside the terminal, execute  ng init , which takes a couple of minutes to complete. After completion, refresh the project in Eclipse (observe that the src folder is updated with an app folder and several other files like app.component.ts, index.html, main.ts and many other dependent files).

Observe the final project folder structure here.

Running the App

If you want to customize the default welcome message, just open app.component.ts and write whatever you wish…

It’s time to run the app! Run the command  npm start , or run it directly from the UI via right-click on the project>Run As>Run on Server (if configured), or simply run  ng serve .

Observe the terminal for the build logs, which display as follows.

Finally, open the Chrome browser and go to the URL:  http://localhost:4200/

You’ll see a “Loading…” message for a second and then the customized welcome message shown below.

JIRA, a Bug Tracking, Issue Tracking and Project Management Tool: Overview

JIRA is a commercial issue tracking product, developed in Java by Atlassian. It provides bug tracking, issue tracking, and project management functions.

Who uses JIRA?
Software project development teams, help desk systems, leave request systems etc.

Coming to its applicability to QA teams, it is widely used for bug tracking, tracking project-level issues like documentation completion, and tracking environmental issues. A working knowledge of this tool is highly desirable across the industry.

JIRA is offered in three packages:

  • JIRA Core includes the base software.
  • JIRA Software is intended for use by software development teams and includes JIRA Core and JIRA Agile.
  • JIRA Service Desk is intended for use by IT or business service desks.

Base Concepts in JIRA @ High level:
JIRA has a 4 level hierarchy: Project > Components (logical subsections) and Versions (phases/milestones) > Issues > Subtasks.

Backup and Restore Mechanism in JIRA:
Go to Admin>System>Backup System>provide the name. The backup file will be placed in /var/atlassian/application-data/jira6.4/export (the default path) with a .zip extension.

After that, copy the .zip to the import location below.

Now click Restore/Project Import and enter the .zip filename to restore project data from. Files will be loaded from /var/atlassian/application-data/jira6.4/import/
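The copy step above can be sketched as follows. The script simulates it in a scratch directory; on a real server, the export and import directories would be JIRA’s defaults noted above, and the backup file name is hypothetical:

```shell
#!/bin/sh
# Simulate moving a JIRA backup from the export dir to the import dir.
WORK=$(mktemp -d)
EXPORT_DIR="$WORK/export"   # real default: /var/atlassian/application-data/jira6.4/export
IMPORT_DIR="$WORK/import"   # real default: /var/atlassian/application-data/jira6.4/import
mkdir -p "$EXPORT_DIR" "$IMPORT_DIR"
echo "backup data" > "$EXPORT_DIR/jira-backup.zip"   # stand-in for the real backup
cp "$EXPORT_DIR/jira-backup.zip" "$IMPORT_DIR/"
ls "$IMPORT_DIR"
```

Once the .zip is in the import directory, JIRA’s Restore/Project Import screen can pick it up by filename.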