Spring Security

Spring Security provides comprehensive security services for Java EE-based enterprise software applications. There is a particular emphasis on supporting projects built using the Spring Framework, which is the leading Java EE solution for enterprise software development.

As you probably know, the two major areas of application security are “authentication” and “authorization” (or “access-control”), and these are the two main areas that Spring Security targets. “Authentication” is the process of establishing that a principal is who they claim to be (a “principal” generally means a user, device or some other system which can perform an action in your application). “Authorization” refers to the process of deciding whether a principal is allowed to perform an action within your application. By the time an authorization decision is needed, the identity of the principal has already been established by the authentication process. These concepts are common, and not at all specific to Spring Security.

At an authentication level, Spring Security supports a wide range of authentication models. Most of these authentication models are either provided by third parties, or are developed by relevant standards bodies such as the Internet Engineering Task Force. In addition, Spring Security provides its own set of authentication features.

Java Configuration
General support for Java Configuration was added to Spring Framework in Spring 3.1. Since Spring Security 3.2 there has been Spring Security Java Configuration support which enables users to easily configure Spring Security without the use of any XML.

Technologies used:

  1. Spring 3.2.2.RELEASE
  2. Spring Security 3.2.2.RELEASE
  3. Hibernate 4.2.1.Final
  4. MySQL Server 5.1.25
  5. Tomcat 7 (Servlet 3.x container)

Note: Add all of these dependencies to your pom.xml.
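For reference, the corresponding pom.xml entries might look like the following. This is a sketch assuming the standard Maven coordinates for the versions listed above; adjust artifact IDs to match your project:

```xml
<dependencies>
  <!-- Spring MVC (pulls in core Spring modules transitively) -->
  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-webmvc</artifactId>
    <version>3.2.2.RELEASE</version>
  </dependency>
  <!-- Spring Security web and Java-config support -->
  <dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-web</artifactId>
    <version>3.2.2.RELEASE</version>
  </dependency>
  <dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-config</artifactId>
    <version>3.2.2.RELEASE</version>
  </dependency>
  <!-- Hibernate ORM -->
  <dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-core</artifactId>
    <version>4.2.1.Final</version>
  </dependency>
  <!-- MySQL JDBC driver -->
  <dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.25</version>
  </dependency>
</dependencies>
```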

Web/Spring Security Java Configuration
The first step is to create our Spring Security Java configuration. The configuration creates a Servlet Filter known as the springSecurityFilterChain, which is responsible for all the security within your application (protecting application URLs, validating submitted usernames and passwords, redirecting to the login form, and so on). You can find the most basic example of a Spring Security Java configuration below:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.method.configuration.EnableGlobalMethodSecurity;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
import org.springframework.security.core.userdetails.UserDetailsService;
import org.springframework.security.web.authentication.AuthenticationSuccessHandler;
@Configuration
@EnableWebSecurity
@EnableGlobalMethodSecurity(securedEnabled = true)
public class AppSecurityConfig extends WebSecurityConfigurerAdapter {

  @Autowired
  @Qualifier("customUserDetailsService")
  UserDetailsService userDetailsService;

  @Autowired
  CustomSuccessHandler customSuccessHandler;

  @Autowired
  AuthenticationSuccessHandler authenticationSuccessHandler;

  @Autowired
  public void configureGlobalSecurity(AuthenticationManagerBuilder auth) throws Exception {
      auth.userDetailsService(userDetailsService);
  }

  @Override
  protected void configure(HttpSecurity http) throws Exception {
      http
              .authorizeRequests()
              .antMatchers("/index**", "/home**", "/login**", "/resources**", "/pages**").permitAll()
              .antMatchers("/admin/**").hasRole("ADMIN")
              .antMatchers("/manager/**").hasAnyRole("ADMIN", "MANAGER")
              .anyRequest().authenticated()
              .and()
              .formLogin()
              .loginPage("/login")
              .failureUrl("/login?error=true")
              .defaultSuccessUrl("/indexHome")
              .usernameParameter("email").passwordParameter("password").permitAll()
              .successHandler(customSuccessHandler)
              .and()
              .logout()
              .logoutSuccessUrl("/login?logout")
              .invalidateHttpSession(true).deleteCookies("JSESSIONID").permitAll()
              .and()
              .exceptionHandling().accessDeniedPage("/403")
              .and().csrf().disable();
  }
}

AbstractSecurityWebApplicationInitializer with Spring MVC
If you are using Spring elsewhere in your application, you probably already have a WebApplicationInitializer that loads your Spring configuration. If we used the previous configuration we would get an error; instead, we should register Spring Security with the existing ApplicationContext. For example, if we were using Spring MVC, our SecurityWebApplicationInitializer would look something like the following:

import org.springframework.security.web.context.AbstractSecurityWebApplicationInitializer;
public class SecurityWebApplicationInitializer extends AbstractSecurityWebApplicationInitializer {
}
This simply registers the springSecurityFilterChain filter for every URL in your application.
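The rest of the Spring configuration is then typically registered by a separate initializer. A minimal sketch follows; it assumes the AppSecurityConfig class from this example and a hypothetical WebMvcConfig class holding your MVC configuration:

```java
import org.springframework.web.servlet.support.AbstractAnnotationConfigDispatcherServletInitializer;

public class MvcWebApplicationInitializer
        extends AbstractAnnotationConfigDispatcherServletInitializer {

    @Override
    protected Class<?>[] getRootConfigClasses() {
        // Root context: security (and service-layer) configuration
        return new Class[] { AppSecurityConfig.class };
    }

    @Override
    protected Class<?>[] getServletConfigClasses() {
        // DispatcherServlet context: MVC configuration (assumed class name)
        return new Class[] { WebMvcConfig.class };
    }

    @Override
    protected String[] getServletMappings() {
        return new String[] { "/" };
    }
}
```

With both initializers present, SecurityWebApplicationInitializer registers the filter chain and this class bootstraps the application contexts.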

Authorize Requests
Our examples have only required users to be authenticated and have done so for every URL in our application. We can specify custom requirements for our URLs by adding multiple children to our http.authorizeRequests() method. For example:
@Override
protected void configure(HttpSecurity http) throws Exception {
  http
          .authorizeRequests()
          .antMatchers("/index**", "/home**", "/login**", "/resources**", "/pages**").permitAll()
          .antMatchers("/admin/**").hasRole("ADMIN")
          .antMatchers("/manager/**").hasAnyRole("ADMIN", "MANAGER")
          .anyRequest().authenticated();
}

Authentication
Create a new custom class implementing AuthenticationSuccessHandler, then add your logic for handling a successful login. In this example, whenever a user logs in successfully, we add the username and roles to the session and redirect to the appropriate home page.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.authority.AuthorityUtils;
import org.springframework.security.core.context.SecurityContextHolder;
import org.springframework.security.web.DefaultRedirectStrategy;
import org.springframework.security.web.RedirectStrategy;
import org.springframework.security.web.WebAttributes;
import org.springframework.security.web.authentication.AuthenticationSuccessHandler;
import org.springframework.stereotype.Component;

@Component
public class CustomSuccessHandler implements AuthenticationSuccessHandler {

  private RedirectStrategy redirectStrategy = new DefaultRedirectStrategy();

  @Autowired
  PermissionsRepository permissionsRepository;

  @Autowired
  PageRepository pageRepository;

  @Override
  public void onAuthenticationSuccess(HttpServletRequest request, HttpServletResponse response, Authentication authentication) throws IOException, ServletException {
      handle(request, response, authentication);
      HttpSession session = request.getSession(false);
      if (session != null) {
          session.setMaxInactiveInterval(10 * 60); // 10-minute session timeout
          org.springframework.security.core.userdetails.User authUser = (org.springframework.security.core.userdetails.User) SecurityContextHolder.getContext().getAuthentication().getPrincipal();
          session.setAttribute("userName", authUser.getUsername());
          session.setAttribute("authorities", authentication.getAuthorities());
      }
      clearAuthenticationAttributes(request);
  }

  protected void handle(final HttpServletRequest request, final HttpServletResponse response, final Authentication authentication) throws IOException {
      final String targetUrl = determineTargetUrl(authentication);
      if (response.isCommitted()) {
          System.out.println("Response already committed; cannot redirect to " + targetUrl);
          return;
      }
      redirectStrategy.sendRedirect(request, response, targetUrl);
  }

  protected String determineTargetUrl(Authentication authentication) {
      Set<String> roles = AuthorityUtils.authorityListToSet(authentication.getAuthorities());
      List<Permissions> availablePermissions = permissionsRepository.findAll();
      List<String> permissionsList = new ArrayList<>();
      List<Page> pages = pageRepository.findAll();
      List<String> availablePageUrlList = new ArrayList<>();
      for (Permissions permission : availablePermissions) {
          permissionsList.add(permission.getPermission());
      }
      System.out.println("List of permissions: " + permissionsList);
      for (Page page : pages) {
          availablePageUrlList.add(page.getPageUrl());
      }
      System.out.println("List of available pages: " + availablePageUrlList);

      if (roles.contains("ROLE_ADMIN") && permissionsList.contains("DELETE_PRIVILEGE") && availablePageUrlList.contains("http://localhost:8080/admin")) {
          return "/admin";
      } else if (roles.contains("ROLE_MANAGER") && permissionsList.contains("WRITE_PRIVILEGE") && availablePageUrlList.contains("http://localhost:8080/manager")) {
          return "/manager";
      } else if (roles.contains("ROLE_USER") && permissionsList.contains("READ_PRIVILEGE") && availablePageUrlList.contains("http://localhost:8080/user")) {
          return "/user";
      } else {
          return "/403";
      }
  }

  protected void clearAuthenticationAttributes(HttpServletRequest request) {
      HttpSession session = request.getSession(false);
      if (session == null) {
          return;
      }
      session.removeAttribute(WebAttributes.AUTHENTICATION_EXCEPTION);
  }

  public RedirectStrategy getRedirectStrategy() {
      return redirectStrategy;
  }

  public void setRedirectStrategy(RedirectStrategy redirectStrategy) {
      this.redirectStrategy = redirectStrategy;
  }
}
Note: In this example we fetch and check all the users, roles, privileges and page URLs dynamically from the database.
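The routing decision in determineTargetUrl boils down to a pure function of the user’s roles and permissions, which is easy to unit test in isolation. A simplified, hypothetical sketch (page-URL checks omitted; role and privilege names taken from the code above):

```java
import java.util.Set;

public class TargetUrlResolver {

    // Maps the authenticated user's roles and permissions to a landing page,
    // mirroring the checks in CustomSuccessHandler.determineTargetUrl.
    public static String resolve(Set<String> roles, Set<String> permissions) {
        if (roles.contains("ROLE_ADMIN") && permissions.contains("DELETE_PRIVILEGE")) {
            return "/admin";
        } else if (roles.contains("ROLE_MANAGER") && permissions.contains("WRITE_PRIVILEGE")) {
            return "/manager";
        } else if (roles.contains("ROLE_USER") && permissions.contains("READ_PRIVILEGE")) {
            return "/user";
        }
        return "/403"; // access denied
    }
}
```

Keeping this logic free of servlet and Spring Security types lets you assert, for example, that an admin with DELETE_PRIVILEGE lands on /admin while everyone else falls through to /403.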

Controllers
I have implemented role-specific controllers, shown below for ‘admin’; the Manager/DBA/User controllers follow the same pattern.

@Controller
@RequestMapping(value = "/admin")
public class AdminController extends SecurityLoginController {

  private static final String viewPrefix = "security/Pages/admin";
  private static final String accessDeniedViewPrefix = "security/AccessDenied/";

  @Autowired
  private UsersRepository userRepository;

  @Autowired
  private UsersService userService;

  @RequestMapping(value = "", method = RequestMethod.GET)
  public String adminPage(ModelMap model) {
      model.addAttribute("user", getPrincipal());
      return viewPrefix;
  }
}

Case 1. Open a browser at http://localhost:8080/login, enter the user “ram@gmail.com” with password “admin123”, and click Login.

[Screenshot: normal user redirected to the home page]

You will see the Home screen below, as ram@gmail.com is only a normal user.

Case 2. Try to access a protected page without authorization, e.g. http://localhost:8080/admin; an access-denied page is displayed because a normal user is not authorized to access the admin page.

[Screenshot: access denied for a normal user on the admin page]

Similarly, if you try to access the http://localhost:8080/manager page, you will see the screen below.

[Screenshot: access denied for a normal user on the manager page]

Case 3. If you try to log in with wrong credentials, you will see the screen below.

[Screenshot: invalid credentials error on the login page]

DevOps

DevOps is a set of practices that emphasizes the communication and collaboration of software developers, testers, operations professionals and other stakeholders while automating the process of software delivery and infrastructure changes. It aims to establish a culture and environment where building, testing, and releasing software can happen quickly, frequently, and more reliably.

[Diagram: DevOps overview]

DevOps Tool-chain

Because DevOps is a cultural shift and collaboration between development, operations and testing, there is no single DevOps tool, rather a set or “DevOps toolchain” consisting of multiple tools. Generally, DevOps tools fit into one or more of these categories, which is reflective of the software development and delivery process.

Code – Code development and review, Version control tools, code merging

Build – Continuous integration tools, build status

Test – Continuous testing tools that provide quick feedback on quality and performance

Package – Artifact repository, application pre-deployment staging

Release – Change management, release approvals, release automation

Configure – Infrastructure configuration and management, Infrastructure as Code tools

Monitor – Applications performance monitoring, end user experience

Though there are many tools available, certain categories of them are essential in the DevOps tool chain setup for use in an organization.

Tools such as Docker (containerization), Jenkins (Continuous Integration), Chef (Infrastructure as Code) and Vagrant (Virtualization Platform) among many others are often used and discussed.

In DevOps, Continuous Integration (CI), Continuous Delivery (CD) and Continuous Testing (CT) are three key aspects, outlined below.

Continuous Integration (CI): Is a software development practice where developers regularly merge their code changes into a central repository, after which automated builds and tests are run. The key goals of continuous integration are to find and address bugs quicker, improve software quality, and reduce the time it takes to validate and release new software updates.

Continuous Delivery (CD): Is a software development practice where code changes are automatically built, tested, and prepared for a release to production. It expands upon continuous integration by deploying all code changes to a testing environment and/or a production environment after the build stage. When continuous delivery is implemented properly, developers will always have a Deployment-ready build artifact that has passed through a standardized test process.

Continuous Testing (CT): Is the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with a software release candidate.

What is the difference between Continuous Delivery and Continuous Deployment?

[Diagram: continuous delivery vs. continuous deployment]

Continuous Delivery (CD): Automates the entire software release process. Every revision that is committed triggers an automated flow that builds, tests, and then stages the update. The final decision to deploy to a live production environment is made by a developer or release manager.

Continuous Deployment: Goes a step further than continuous delivery: revisions are deployed to a production environment automatically, without explicit approval from a developer, making the entire software release process automated.

Advantages of DevOps:
  • Faster time to market.
  • Reliable delivery (fewer human errors).
  • Scaling at ease (via configuration management tools).
  • Improved collaboration (sharing work between dev and ops reduces risk).
  • Improved security.

Docker

Docker containers enclose (wrap) a piece of software in a complete filesystem that contains everything needed to run: code, runtime, system tools, system libraries, anything that can be installed on a server. This guarantees that the software will always run the same, regardless of its environment.

What drove Docker’s adoption?
Docker is a tool that can package an application and its dependencies in a virtual container that can run on any Linux server (not Windows, since containers rely on the Linux kernel), irrespective of the language used. This enables flexibility and portability in where the application can run: on premises, public cloud, private cloud, bare metal, etc.

What is the difference between VM’s and Docker Containers?

  1. A Docker container, unlike a virtual machine, does not require a separate operating system. Instead, it relies on the kernel’s functionality and uses resource isolation (CPU, memory, block I/O, network, etc.). Docker accesses the Linux kernel’s virtualization features either directly using the libcontainer library (available as of Docker 0.9) or indirectly via libvirt, LXC (Linux Containers) or systemd-nspawn.
  2. Size: VMs are very large which makes them impractical to store and transfer.
  3. Performance: running VMs consumes significant CPU and memory.
  4. Portability: a container image runs on any Linux machine or VM with Docker installed.

Which one to Use?
In reality, both are complementary technologies (VMs and containers work better together) for achieving maximum agility; Docker containers can run inside virtual machines.

Both VMs and containers are IaaS-level solutions.

For application/software portability, Docker is your safest bet. For machine portability and stronger (hardware-level) isolation, go with a VM.

Important note:
Docker containers are open source, secure (isolated from each other) and so lightweight that a single server or virtual machine can run several containers simultaneously. A 2016 analysis found that a typical Docker use case involves running five containers per host, but that many organizations run 10 or more.

Integration
Docker can be integrated into various infrastructure tools, including Amazon Web Services, Microsoft Azure, Ansible, Chef, Jenkins, Puppet, Salt, Vagrant, Google Cloud Platform, IBM Bluemix, Jelastic, OpenStack Nova, HPE Helion Stackato, and VMware vSphere Integrated Containers.

Docker in detail, briefly
Docker builds upon Linux Containers (LXC) and consists of three main parts: the Docker daemon, Docker images, and Docker registries/repositories, which together make Linux containers easy and fun to use.

Docker Daemon: runs as root and orchestrates all running containers.

Docker images: Just as virtual machines are based on images, Docker containers are based on Docker images, which are tiny compared to virtual machine images and are stackable.

Registry: A service responsible for hosting and distributing images. The default registry is Docker Hub.

Repository: A collection of different Docker images with the same name but different tags.

Tag: An alphanumeric identifier attached to images within a repository (e.g., 14.04 or stable).

Use Case: Spinning up a Docker Container on Ubuntu(14.04)
Prerequisites
Docker has two important installation requirements:

  • Docker only works on a 64-bit Linux installation.
  • Docker requires version 3.10 or higher of the Linux kernel.

To check the Ubuntu version, run:    # cat /etc/lsb-release        // output: 14.04.4 LTS
To check your current kernel version, open a terminal and run:    # uname -r        // output: 3.13

Installation of Docker
Step 1: Ensure the list of available packages is up to date before installing anything new. Log in as root, then:
# apt-get update
Install Docker via the docker.io package:
# apt-get install docker.io
Now check the Docker version with:    # docker version
Optionally, we can configure Docker to start when the server boots:
# update-rc.d docker defaults
And then we start the Docker service:
# service docker restart

Step 2: Download a Docker Container
There are many community containers already available, which can be found through a search. In the command below I am searching for the keyword ubuntu:
# docker search ubuntu      // displays the list of available images
Let’s begin using Docker! Download the ubuntu Docker image:
# docker.io pull ubuntu
Now you can list all downloaded images with:    # docker images

Step 3: Create & Run a Docker Container
Now, to set up a basic ubuntu container with a bash shell, we just run one command. docker run runs a command in a new container, -i attaches stdin and stdout, and -t allocates a tty; we are using the standard ubuntu image.
# docker.io run -i -t ubuntu /bin/bash
That’s it! You’re now using a bash shell inside of an ubuntu Docker container.
To disconnect, or detach, from the shell without exiting, use the escape sequence Ctrl-p + Ctrl-q.
The container will stop, however, when you leave it with the command exit.
If you would like a container running in the background like a daemon, just add the -d option, optionally with a command for it to run:
# docker run -d ubuntu /bin/sh -c "while true; do echo Hello Ram Howdy?; sleep 1; done"
Use the command below to see all the containers running in the background:
# docker ps
Now you can check the logs with this command:
# docker logs 68a29978b064   // container ID: the first 12 characters of the long-form ID
If you would like to remove the container, stop it first and then remove it:
# docker stop 68a29978b064   // in place of stop you can also use start/restart
# docker rm 68a29978b064     // removes the container

Install & Run Jenkins 2.0 in “Docker Container”
Step 1: First, pull the official jenkins image from the Docker repository.
# docker pull jenkins
Step 2: As Jenkins’ default plugin set won’t be sufficient for our DevOps setup, we implement a data volume container to provide simple backup capabilities and to extend the official image with some plugins.
# docker create -v /var/jenkins_home --name jenkins-dv jenkins
This command uses the ‘/var/jenkins_home’ directory volume as per the official image and names the data volume container ‘jenkins-dv’.
Step 3: To use the data volume container with an image, use the ‘--volumes-from’ flag to mount the ‘/var/jenkins_home’ volume in another container:
# docker run -d -p 8080:8080 --volumes-from jenkins-dv --name jenkins-master jenkins
Step 4: Once the container is running, go to http://IP:8080 to see the Jenkins instance. This instance stores its data in the volume container set up in step 2, so if you set up a job and stop the container, the data is persisted.
When you hit http://IP:8080 it asks for a password; run the command below, then copy and paste the password into the Jenkins login screen:
# docker exec jenkins-master cat /var/jenkins_home/secrets/initialAdminPassword
Next: install plugins > provide login credentials > start Jenkins.

BACKING UP
To back up the data from the volume container, just run:
# docker cp jenkins-dv:/var/jenkins_home /opt/jenkins-backup
Once this operation completes, you will find a ‘jenkins_home’ directory backup in ‘/opt/jenkins-backup’ on your local machine. You could now use it to populate a new data volume container.

Conclusion

Docker is a virtualization platform which helps developers deploy their applications, and system administrators manage them, in a safe container environment. Docker requires a 64-bit architecture and a kernel of version 3.10 or higher. With Docker, you can build and run your application inside a container and then move the container to other machines running Docker without any worries.

Continuous Integration (CI)

Continuous Integration(CI) is a software development practice where developers regularly merge their code changes into a central repository, after which automated builds and tests are run. The key goals of continuous integration are to find and address bugs quicker, improve software quality, and reduce the time it takes to validate and release new software updates.

I will walk you through the first steps of implementing Continuous Integration (CI) for the Java EE, Maven-based Spring PetClinic project, as follows.

Pre requisites

  • Linux Ubuntu Operating System 14.04 LTS.
  • Java JDK/JRE
  • DBMS (MySQL in my case)

Step 1: Install Jenkins
# wget -q -O - https://pkg.jenkins.io/debian/jenkins-ci.org.key | sudo apt-key add -
# sudo sh -c 'echo deb http://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'
# sudo apt-get update
# sudo apt-get install jenkins

By default, Jenkins listens on port 8080. Open this port in a browser to start configuration.
If we start it locally, we can see it running at http://localhost:8080 (or http://IP:8080).

Just follow the instructions…

Open /var/lib/jenkins/secrets/initialAdminPassword, copy the password, paste it here > click Continue.

Click the ‘Install suggested plugins’ option and wait a couple of minutes; this takes you to the ‘Create First Admin User’ page.

Provide all the required user details > click Save and Finish > Start using Jenkins. This takes you to the welcome page below.

Step 2: Configure Jenkins System
Install Git and Maven on Ubuntu and configure them, along with the JDK, in Jenkins.

Step (a). Install Git:
# apt-get install git
Verify git:  # git --version                  // gives output like ‘git version 2.7.4’

Step (b). Install Maven:
# apt-get install maven          
Verify maven:  # mvn --version      // gives output like Apache Maven 3.3.9, along with the Maven and Java paths.

Step (c). Configure Git, Maven and JDK in Jenkins
Now go to Jenkins > Manage Jenkins > Global Tool Configuration and provide the tool paths.

Step (d). Install Few more Plugins in Jenkins (configure them if require):
Manage Jenkins > Manage Plugins > Available > select the Sonar Integration plugin, Role-Based Strategy plugin and Mask Passwords plugin, then restart Jenkins.

Step 3: Install MySql, Sonarqube, Sonar-runner in Ubuntu and Configure them to Jenkins:

a). Installation of Mysql:
# sudo apt-get -y install mysql-server-5.6

Now, login to MySQL through terminal to create Sonar Database:
# mysql -u root -p
Create the database and a user with permissions:
CREATE DATABASE sonar CHARACTER SET utf8 COLLATE utf8_general_ci;
CREATE USER 'sonar' IDENTIFIED BY 'sonar';
GRANT ALL ON sonar.* TO 'sonar'@'%' IDENTIFIED BY 'sonar';
GRANT ALL ON sonar.* TO 'sonar'@'localhost' IDENTIFIED BY 'sonar';
FLUSH PRIVILEGES;

Note: After installing MySQL, if you want to make changes such as the bind address (Ubuntu 15.x/16.x onwards), edit and save the config file:

cd /etc/mysql/mysql.conf.d/
sudo nano mysqld.cnf

b). Download & Install Sonarqube:
# wget https://sonarsource.bintray.com/Distribution/sonarqube/sonarqube-6.1.zip
# unzip sonarqube-6.1.zip

Edit sonar.properties (if you wish to use a custom database for persistence):
open /opt/sonarqube-6.1/conf/sonar.properties with your favorite text editor and modify it.

#MySQL DB settings:
sonar.jdbc.username=sonar
sonar.jdbc.password=sonar
sonar.jdbc.url=jdbc:mysql://localhost:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance

#Web server settings    // required irrespective of DB
#The following settings make the server available at http://localhost:9000/sonar
sonar.web.host=0.0.0.0    # or 127.0.0.1 for local-only access
sonar.web.context=/sonar
sonar.web.port=9000

Step c). Download & Install sonar-runner
# wget http://repo1.maven.org/maven2/org/codehaus/sonar/runner/sonar-runner-dist/2.4/sonar-runner-dist-2.4.zip
# unzip sonar-runner-dist-2.4.zip

Edit sonar-runner.properties:
open /opt/sonar-runner-2.4/conf/sonar-runner.properties in a text editor and modify it as below.

sonar.host.url=http://localhost:9000/sonar            // save > exit, then restart the Sonar service

Configure Sonarqube and runner paths in Ubuntu
#nano /etc/environment
SONAR_HOME=/opt/sonarqube-6.1
SONAR_RUNNER_HOME=/opt/sonar-runner-2.4

Restart and Verify Sonarqube Installation:
# sudo /opt/sonarqube-6.1/bin/linux-x86-64/sonar.sh restart

Browse the url http://IP:9000/sonar to confirm its installation.

Now configure both SonarQube and Sonar-Runner in Jenkins: Manage Jenkins > Configure System, provide the details as below > Save.

Also, Configure Sonar-Runner in Manage Jenkins>Global Tool Configuration.

Step 4: Secure Jenkins (add users for access, if required)
The easiest way is to use Jenkins’ own user database. Create at least the user ‘Anonymous’ with read access. Also create entries for the users you want to allow, or let users sign up on the login page: Manage Jenkins > Configure Global Security > select ‘Allow users to sign up’ as well as ‘Role-Based Strategy’ (a plugin already installed) > Save.

Create users: A Returning user/admin can create users from Manage Jenkins>Manage Users>Create User

A first-time user will be redirected to the create-user screen after clicking ‘create an account’.

Manage & Assign Roles:
Go to Manage Jenkins > Manage & Assign Roles, then create the roles and assign permissions as below.

Step 5. Download, install and configure the Nexus repository
Nexus setup:
Step a). Add a user for nexus
sudo adduser --home /opt/nexus --disabled-login --disabled-password nexus

Step b). Switch to that user, move to the home directory and unpack your Nexus download (obtained via a link or via
#  wget http://www.sonatype.org/downloads/nexus-2.14.1-01-bundle.tar.gz   )
sudo su - nexus
cd
tar -xzf /home/USER/Downloads/nexus-2.14.1-01-bundle.tar.gz
Then switch back to your normal user with the ‘exit’ command.

Step c). Now we set up the init script:
sudo ln -s /opt/nexus/nexus-2.14.1-01/bin/nexus /etc/init.d/nexus
In the init script, make sure the following variables are changed to the right values:
NEXUS_HOME="/opt/nexus/nexus-2.14.1-01"
RUN_AS_USER=nexus   // about four lines below NEXUS_HOME, commented out; uncomment and set it
## make the script executable
# cd /opt/nexus/nexus-2.14.1-01/bin
# sudo chmod a+x nexus
Next, change ownership of the extracted Nexus OSS folder recursively to the nexus user and group:
# sudo chown -R nexus:nexus /opt/nexus/nexus-2.14.1-01/
Now you can switch to the nexus user and start Nexus OSS:
su - nexus
cd /opt/nexus/nexus-2.14.1-01/bin
./nexus start
Open the URL http://IP:8081/nexus/ and verify that the application is live.

Login: with admin/admin123 credentials

Step d ). Create a Repository:
Click the ‘+ Add’ dropdown at the top > select ‘Hosted Repository’, provide a Repository ID (in my case ‘PC’) and the Repository Name ‘Petclinic’, leave the rest at defaults, and Save.

Artifact Upload:
Click Artifact Upload in the Petclinic window > select GAV Parameters, provide the Group ID (Dev), Artifact (spring-petclinic), Version (1.1) and Packaging (zip/war/ear) > click ‘Select Artifacts to Upload’, browse to any file and open it > click Add Artifact > Upload Artifact > OK.

Now you can check the updated folder structure by clicking any of the buttons in the Petclinic window.

Step 6: Now it is time to create jobs in Jenkins
Create a Folder:
First create a folder, named PetClinic for instance, which will be specific to your project (you may have more projects in the future).
Jenkins > New Item > enter the name and select Folder > OK > Save.

Similarly, create a delivery pipeline view by clicking the + button on the Jenkins dashboard.

Job1:  Build:
Step a). Click the PetClinic folder > New Item > Freestyle project > OK.

Step b). On the page that opens, under General, select ‘Delivery Pipeline configuration’ and provide the Stage Name ‘Build’ and Task Name ‘Compile’. Then select ‘This project is parameterized’, add two String Parameters from the ‘Add Parameter’ dropdown, and name them BUILD_LABEL and COMMIT_ID.

Step c). Under Source Code Management select Git and provide your repository URL, https://github.com/Rammohanrmc/spring-petclinic.git; click the Add button, provide your Git credentials and select them > Save.

Step d). Under Build, ‘Add build step’ > ‘Invoke top-level Maven targets’, with the goal
install -Dmaven.test.skip=true

Step e). Select ‘Trigger parameterized build on other projects’ from the ‘Add post-build action’ dropdown at the bottom of the page and provide the info as below.

Run the Build job:
Save > click Build Now; the Build job will compile the source code successfully.

Job2: UnitTests
Step a). Click New Item inside the folder, name it UnitTests, scroll to the bottom and provide the ‘Build’ job name in the ‘Copy from’ textbox > OK.

Below are the changes relative to the Build job, since this UnitTests job was created by copying its configuration.

Step b). In General-Delivery Pipeline configuration, provide Stage Name as ‘Build’ and Task Name as ‘Unit Test’.

Step c). Click Advanced (bottom right of General) > select ‘Use custom workspace’ and provide the Build job path, i.e. /var/lib/jenkins/workspace/PetClinic/Build, and select nothing under Source Code Management.

Step d). Under Build, ‘Add build step’ > ‘Invoke top-level Maven targets’, with the goal
test

Step e). Select ‘Publish JUnit test result report’ from ‘Add post-build action’ and provide the info as below.

Provide Test report XMLs as target/surefire-reports/*.xml

Step f). Inpost-build actions’ of ‘Build’ at the bottom of page, change the ‘Projects to Build’ to  Static Code Analysis>save.

The UnitTests job is now triggered after the Build job succeeds; once it has run, click Test Result to see the unit-test results.

Job3: Static Code Analysis
Step a). Click New Item inside the folder, name it Static Code Analysis, scroll to the bottom and provide the ‘UnitTests’ job name in the ‘Copy from’ textbox > OK.

Below are the changes relative to the UnitTests job, since this job was created by copying its configuration.

Step b). In General-Delivery Pipeline configuration, provide Stage Name as ‘Static Code Analysis’ and Task Name as ‘Code Quality Check’.

Step c). Under Build, ‘Add build step’ > ‘Execute shell’, and provide the commands below to remove the existing sonar properties file from the project.
# Generally this two-line script is not needed; it is specific to PetClinic, whose bundled sonar-project.properties we want removed.
cd /var/lib/jenkins/workspace/PetClinic/Build
rm -rf sonar-project.properties

Step d). Again under Build, ‘Add build step’ > ‘Execute SonarQube Scanner’, and provide the analysis properties below to run static code analysis on the project:
sonar.projectKey=PetClinic
sonar.projectName=PetClinic
sonar.projectVersion=1.0
sonar.sources=.
sonar.verbose=true

Step e). Inpost-build actions’ of ‘Build’ at the bottom of page, change the ‘Projects to Build’ to  Package>save.
Trigger the build of ‘Package’ job post job success,
Browse the URL: http://IP:9000/sonar/  to observe code quality metrics.

Job4: Package
Step a). Click New Item inside the folder, name it Package, scroll to the bottom and provide the ‘UnitTests’ job name in the ‘Copy from’ textbox > OK.

Below are the changes relative to the ‘UnitTests’ job, since this job was created by copying its configuration.

Step b). In General-Delivery Pipeline configuration, provide Stage Name as ‘Package’ and Task Name as ‘Packaging Source Code’.

Step c). Select ‘Mask passwords’ > click Add and provide the Name NexusUserName with your Nexus admin username as the value (in my case admin); click Add again and provide the Name NexusPassword with your Nexus admin password as the value (in my case admin123).

Step d). Under Build, ‘Add build step’ > ‘Execute shell’, and provide the command below to package the workspace:
zip -r PetClinic-1.1.zip .

Step e). Again under Build, ‘Add build step’ > ‘Execute shell’, and provide the command below to store the package in the Nexus repository:

curl -F "r=PC" -F "e=zip" -F "hasPom=false" -F "g=Dev" -F "a=spring-petclinic" -F "v=${BUILD_LABEL}" -F "p=zip" -F "file=@PetClinic-1.1.zip" -u ${NexusUserName}:${NexusPassword} http://172.25.151.173:8081/nexus/service/local/artifact/maven/content

Note: Make sure zip is installed on your Ubuntu machine before running the Package job.

Step f). Inpost-build actions’ of ‘Build’ at the bottom of page, change the ‘Projects to Build’ to  Package>save.
Trigger the build of ‘Package’ job post job success,
Browse for package in nexus url http://IP:8081/nexus/ 

Finally, observe Continuous Integration in the delivery pipeline below on the Jenkins dashboard.

Delivery Pipeline Full Screen view of CI:

[Screenshot: delivery pipeline view of CI]

Note: Here I added an ‘Approval’ job to the delivery pipeline as a teaser for our next article in this DevOps series, on Continuous Delivery (CD).