  1. In the page below, the checked values of the checkboxes should be maintained as comma-separated values.
  2. If the Select All checkbox at the top is clicked, every checkbox in that column should be checked.
  3. Likewise, if all the individual checkboxes are selected, the Select All checkbox should be checked.

function changeInvestmentIds(pThis)
{
    var arrInvstIds = [];

    if($(pThis).is(":checked"))
    {
        //Hidden field which stores the CSV values of the checked checkboxes
        if($('#csvSelectedInvestmentIds').val() != '')
            arrInvstIds = $('#csvSelectedInvestmentIds').val().split(',');

        arrInvstIds.push($(pThis).val());

        $('#csvSelectedInvestmentIds').val(arrInvstIds.join(","));
    }
    else
    {
        //In case the checkbox is unchecked, its value should be removed from the hidden field
        arrInvstIds = $('#csvSelectedInvestmentIds').val().split(',');

        for(var i = arrInvstIds.length; i--;) {
            if(arrInvstIds[i] === $(pThis).val()) {
                arrInvstIds.splice(i, 1);
            }
        }

        $('#csvSelectedInvestmentIds').val(arrInvstIds.join(","));
    }

    //Called here so that when every checkbox is checked,
    //the Select All checkbox gets checked as well
    chkIsAllChecked();

    setParentWindowVal();
}

//When the Select All checkbox is ticked, every checkbox should be marked as checked.
function toggleSelectAll(source)
{
    var isChecked = $(source).prop('checked');
    var arrInvIds = [];

    $('.chkInvestmentClass').prop('checked', isChecked);

    if(isChecked)
    {
        //The values of all the checkboxes are collected into an array
        //and moved to the CSV hidden field
        $(".chkInvestmentClass").each(function() {
            arrInvIds.push($(this).val());
        });

        $('#csvSelectedInvestmentIds').val(arrInvIds.join(","));
    }
    else
    {
        $('#csvSelectedInvestmentIds').val('');
    }

    setParentWindowVal();
}


//If all the checkboxes are checked individually,
//the Select All checkbox should be checked too
function chkIsAllChecked()
{
 if($('.chkInvestmentClass:checked').length == $('.chkInvestmentClass').length)
  $('#chkSelectAll')[0].checked = true;
 else
  $('#chkSelectAll')[0].checked = false;
}

HTML Code

  1. changeInvestmentIds(pThis) is called from each checkbox's click event.
  2. toggleSelectAll(source) is called when the Select All checkbox is clicked.
  3. chkIsAllChecked() is called internally by changeInvestmentIds(pThis).
<table>
<thead>
<tr>
<th>
Locations
</th>
<th>
<input type="checkbox" name="chkSelectAll" id="chkSelectAll" value="SelectAll" onclick="toggleSelectAll(this);"/>
</th>
</tr>
</thead>
<tbody>
<tr>
<td>Chennai</td>
<td>
<input type="checkbox" name="chkInvestmentIds" value="Chennai" class="chkInvestmentClass" onclick="changeInvestmentIds(this)"/>
</td>
</tr>
<tr>
<td>Bangalore</td>
<td>
<input type="checkbox" name="chkInvestmentIds" value="Bangalore" class="chkInvestmentClass" onclick="changeInvestmentIds(this)"/>
</td>
</tr>
<tr>
<td>Mumbai</td>
<td>
<input type="checkbox" name="chkInvestmentIds" value="Mumbai" class="chkInvestmentClass" onclick="changeInvestmentIds(this)"/>
</td>
</tr>
</tbody>
</table>
<input type="hidden" id="csvSelectedInvestmentIds" value=""/>

A beautiful little story every husband and wife should read..

A man would pick a fight with his wife over every little thing. "Go to the office for one day and do the work.. then you'll understand how hard it is to earn money," he would often challenge her..

One day she lost her patience and threw the challenge back: "Then you stay home for a day and look after the children.. bathe them in the morning, feed them, help with their homework, put them in their uniforms and send them to school.. and do the cooking and the washing too, and then we'll see.." The husband accepted it..

He stayed home.. she went to the office.. The office was lying in a complete mess. Without standing on being the owner's wife, she swept and cleaned it herself. She checked the attendance register and reprimanded the latecomers.. She went through the accounts. At 5 in the evening, as she was about to leave for home, an assistant reminded her of an employee's daughter's wedding reception, so she bought a gift and went to the wedding hall.

She gave a made-up excuse for her husband's absence and, at the newlyweds' insistence, went in to eat.. Seated at the banquet, her thoughts were all of home! She slipped the jangiri from her leaf into her handbag because her eldest son liked it.. the murukku went in too, because her husband was fond of it.. She tucked away more for her children and husband than she ate herself.

When she finally got home, she found her husband pacing up and down angrily with a cane in his hand. The moment he saw her he burst out: "Are these children you've raised..? The whole lot of them scream and screech like monkeys! They won't listen.. tell them to study, they won't study.. tell them to eat, they won't eat.. I've given every one of them a beating and put them to bed in that room.. with all your pampering you've thoroughly spoiled these children…"

"Aiyo, you beat the children?" she cried, running inside and opening the door. Inside, the children were all sobs and sniffles! Switching on the light, she gasped in shock: "Hey.. why did you beat this boy and put him to bed..? He's the boy from the house opposite!" "Oh, so that's why he kept trying to run away..!" the husband stood stunned…

In that moment both of them understood something.. It is not for nothing that, from the Sangam age onwards, our ancestors have called the wife "illaal" (the one of the house), the one to whom the home belongs.

A woman's role is paramount in keeping the home and in building a prosperous life for the children. Equally, the role of the men who earn the living is immense.. But in these days when both go out to work, life has become so equal that family responsibilities can no longer be sorted into "this is the man's, this is the woman's"..

In such a situation, if a family is to be happy, neither should the wife dominate the husband nor the husband the wife; only with the mindset of winning each other over with love can they gain every prosperity and live long together…

Step 1: Initialize the working directory. This is the directory where your project resides.

 [cloudera@localhost MapReduce]$ git init

Step 2: Add the files to the staging area as below

 [cloudera@localhost MapReduce1]$ git add *

Step 3: Commit to the Local Repository

[cloudera@localhost MapReduce1]$ git commit -am "Initial Commit"

Step 4: Run this on the terminal to generate an SSH key

ssh-keygen -t rsa -C "mugil.cse@gmail.com" 

Step 5: The generated SSH key will be in the user's home directory. It contains both the private and the public key.

cd /home/cloudera/.ssh
[cloudera@localhost .ssh]$ ll
total 12
-rw------- 1 cloudera cloudera 1671 May  7 21:11 id_rsa
-rw-r--r-- 1 cloudera cloudera  401 May  7 21:11 id_rsa.pub
-rw-rw-r-- 1 cloudera cloudera  395 May  7 21:01 known_hosts

id_rsa contains the private key and id_rsa.pub contains the public key. The public key is added to the remote repository to which the files are pushed at the end of the day. Every time a connection is made, the private key is verified against the public key by the SSH authentication algorithm. A separate key pair can be issued to a user who should be given limited access to the repository, instead of sharing account credentials such as an email address and password; the private key itself should never be shared.

Step 6: Open id_rsa.pub, which contains the public key that needs to be added to the repository

[cloudera@localhost .ssh]$ cat id_rsa.pub 
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAymFMhm8XM8NzuGMNrBzybMhdwkFZtWGSqo2zwkS4tWQGKQDLODD9tV+gXZeqiGWh/JkVq6P+QvwsMdl5kOTg+M33UDtP/dydUQ+eWNE0HoKRypldS5qvUtt/Y0+rOg/cy3U2tN4EWaY3oer0lFEY+esOc2tAogUcqCuyN37ywLB6wa23XKxgPFNpgxGM+rf3r2gVbV81hdHJ7RSTVpsS/BaetZZlvFAgNSo2qbVJlhpTY/GrF1Nhtz3q8oGfoxsUGtU+12JFJXphRQnYO0EJhZLxYSZvIcqb5YWmUZLVOg+HTsncnH1T5/l9tx/AT6IjqIo5ZbV+NxQ6R2F4fD0wEQ== mugil@gmail.com

Step 7: Add the Git repository URL as a remote, as below

[cloudera@localhost MapReduce]$ git remote add origin git@bitbucket.org:Mugil/hadoop.git

Step 8: Now we need to push the committed files.

[cloudera@localhost MapReduce]$ git push origin master

Sometimes pushing might fail with a message like the one below

conq: repository access denied. deployment key is not associated with the requested repository. fatal: The remote end hung up unexpectedly

For that use

 ssh-add ~/.ssh/id_rsa

This adds the private key to the SSH agent so that it is offered for authentication whenever a connection is made.

Checking SSH Status

>>service sshd status
>>service sshd start
>>chkconfig sshd on

1. Compile the files which should be added.

javac Test.java

In case you are working in Eclipse, the same class files can be seen under the bin directory in the Navigator tab. You need to navigate to that folder to access the class files.

2. Manifest.txt
If you are going to run the jar file, the main class to be invoked should be added to the JAR as metadata through META-INF/MANIFEST.MF. There should be an empty line (carriage return) at the end of Manifest.txt for it to be read properly.

Manifest.txt

Main-Class: com.mugil.util.Test

3. Create the JAR file using the below command

jar cfm Test.jar manifest.txt com/mugil/util/*

The above command should be executed from the bin directory in our case.

* specifies all the files in that folder

4. To run the JAR file

 java -jar Test.jar

When you run the created jar file, it should be placed exactly in the folder from where the com folder starts.

jar cfm Test.jar manifest.txt com/mugil/util/*

The letters “m” and “f” must appear in the same order that “manifest” and “jarfile” appear.

It is a metadata file that contains name-value pairs organized in different sections.

If a JAR file is intended to be used as an executable file, the manifest file specifies the main class of the application. The manifest file is named MANIFEST.MF

Other uses are:

  1. store hashes of stored files for signature validation
  2. sealing jar files (i.e. ensure that only classes from this jar file are loaded in the packages defined in this jar file).
  3. store version/product/producer information to be readable at runtime
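The third use above can be seen in action: manifest attributes can be read back at runtime through java.util.jar.Manifest. A minimal sketch, parsing the manifest text from a byte array for illustration (normally you would read META-INF/MANIFEST.MF out of the jar itself):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.jar.Manifest;

public class ManifestDemo {
    public static void main(String[] args) throws Exception {
        // Same content as the manifest shown in this post
        String mf = "Manifest-Version: 1.0\r\nMain-Class: com.mugil.util.Test\r\n\r\n";
        Manifest manifest = new Manifest(new ByteArrayInputStream(mf.getBytes(StandardCharsets.UTF_8)));
        // Look up a main-section attribute by name
        String mainClass = manifest.getMainAttributes().getValue("Main-Class");
        System.out.println(mainClass); // com.mugil.util.Test
    }
}
```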

The Main class to be called once the JAR is executed is defined as below

If an application is bundled in a JAR file, the Java Virtual Machine needs to be told what the entry point to the application is. An entry point is any class with a public static void main(String[] args) method

Manifest-Version: 1.0
Main-Class: com.mugil.util.Test
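For instance, an entry point matching the Main-Class header above could look like the following sketch (the body is hypothetical; the real class would hold the application logic):

```java
// In the layout shown earlier this file would begin with
// "package com.mugil.util;" -- the package line is omitted here
// so the snippet compiles standalone.
public class Test {
    public static String greeting() {
        // Hypothetical body for illustration
        return "Running com.mugil.util.Test";
    }

    // The JVM looks for exactly this signature when the jar is executed
    public static void main(String[] args) {
        System.out.println(greeting());
    }
}
```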

While using a JSON array for holding details, the required filter parameter may appear in any part of the array. To filter an array which holds multiple data types, i.e. String, Integer, boolean, you need to do an instanceof comparison.

The JSON array below holds a list of employee details, from which the employees of a particular location are taken out by filtering.

 
package com.mugil.json;

import java.util.Iterator;

import org.json.JSONArray;
import org.json.JSONException;

public class Employee {

	public static void main(String[] args) {
		String Location = "Chennai";
		JSONArray arrEmpDetails = getEmpList();

		System.out.println("*-----------------Before Filter-----------------------------*");
		displayEmpDetails(arrEmpDetails);
		arrEmpDetails = filterEmpByLocation(arrEmpDetails, Location);

		System.out.println("*-----------------After Filter-----------------------------*");
		displayEmpDetails(arrEmpDetails);
		System.out.println("*-----------------------------------------------------*");

	}

	private static void displayEmpDetails(JSONArray arrEmpDetails) {
		JSONArray arrEmpDet = new JSONArray();

		for (int i = 0; i < arrEmpDetails.length(); i++) {
			try {
				arrEmpDet = arrEmpDetails.getJSONArray(i);
				System.out.println(arrEmpDet);
			} catch (JSONException e) {
				e.printStackTrace();
			}
		}

	}

	private static JSONArray filterEmpByLocation(JSONArray arrEmpDetails,
			String location) {
		JSONArray arrEmpByLoc = new JSONArray();
		String strToComp = null;
		boolean isSameLoc = false;

		for (int i = 0; i < arrEmpDetails.length(); i++) {
			try {
				JSONArray type = arrEmpDetails.getJSONArray(i);

				for (int j = 0; j < type.length(); j++) {
					Object strString = type.get(j);
					if (strString instanceof java.lang.String) {
						strToComp = (String) strString;
						isSameLoc = strToComp.equals(location);
					}
				}

				if (isSameLoc) {
					arrEmpByLoc.put(type);
					isSameLoc = false;
				}

			} catch (JSONException e) {
				e.printStackTrace();
			}
		}

		return arrEmpByLoc;
	}

	public static JSONArray getEmpList() {
		JSONArray arrEmpDetails = new JSONArray();
		JSONArray arrPerDetails = new JSONArray();

		arrPerDetails.put("Empl1");
		arrPerDetails.put(23);
		arrPerDetails.put("Address1");
		arrPerDetails.put("Chennai");
		arrEmpDetails.put(arrPerDetails);

		arrPerDetails = new JSONArray();
		arrPerDetails.put("Empl2");
		arrPerDetails.put(23);
		arrPerDetails.put("Address2");
		arrPerDetails.put("Coimbatore");
		arrEmpDetails.put(arrPerDetails);

		arrPerDetails = new JSONArray();
		arrPerDetails.put("Empl3");
		arrPerDetails.put(24);
		arrPerDetails.put("Address3");
		arrPerDetails.put("Tirchy");
		arrEmpDetails.put(arrPerDetails);

		arrPerDetails = new JSONArray();
		arrPerDetails.put("Empl4");
		arrPerDetails.put(26);
		arrPerDetails.put("Address4");
		arrPerDetails.put("Chennai");
		arrEmpDetails.put(arrPerDetails);

		return arrEmpDetails;
	}

}

View list of Hadoop Files

>>hadoop fs -ls ..

Creating new Folder

>>hadoop fs -mkdir test

The folder created above can be viewed in Hue

Adding Files to Hadoop File System

>>hadoop fs -put Test.txt test

In case more than one file needs to be copied, pass them all to the put command as below

>>hadoop fs -put Test1 Test2 Test

Getting Files from the Hadoop File System

>>hadoop fs -get Test.txt Test1

Deleting a File from Hadoop File System

>>hadoop fs -rm Test1/Test.txt

In the above case the file will be moved to the Trash

Deleting a File permanently (skipping the Trash)

>>hadoop fs -rm -skipTrash Test1/Test.txt

Deleting a Folder – Recursive Remove

>>hadoop fs -rmr -skipTrash Test1

View part of Data file

>> hadoop fs -cat /user/training/shakespeare.txt | tail -n5

Hadoop – Map Reduce

>> hadoop jar Test.jar T1 output

hadoop jar MapReduce.jar InputFile OutputFolder

Start hdfs daemons

>>  start-dfs.sh

Start MapReduce daemons:

>>  start-yarn.sh

Verify Hadoop daemons:

>>  jps

Each daemon runs in its own JVM (isolated process). There will be:
Job Tracker – one per cluster (controller and scheduler)
Task Tracker – one per node (monitors tasks)

Map Reduce consists of two parts:

The Map part
The Reduce part

Map Part

  1. A function in Java which performs some action on some data. Map Reduce is run as a job; during this run the Java function gets called on each node where the data lives.
  2. Map Reduce runs on 3 nodes (by default an HDFS block is replicated 3 times).
  3. HDFS is self-healing: if one node goes down, another will be used.
  4. Once the Map phase has run, the output will be (key, value) pairs.
  5. The second part, the Reduce phase, aggregates these (key, value) pairs.
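The two phases above can be sketched in plain Java in a single JVM (a word count, purely illustrative; real Hadoop distributes the map calls across the nodes holding the data):

```java
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MapReduceSketch {

    // Map phase: emit a (word, 1) pair for every word in the line
    static List<Map.Entry<String, Integer>> map(String line) {
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (String word : line.split("\\s+"))
            pairs.add(new AbstractMap.SimpleEntry<>(word, 1));
        return pairs;
    }

    // Reduce phase: aggregate the pairs by summing the values per key
    static Map<String, Integer> reduce(List<Map.Entry<String, Integer>> pairs) {
        Map<String, Integer> counts = new HashMap<>();
        for (Map.Entry<String, Integer> p : pairs)
            counts.merge(p.getKey(), p.getValue(), Integer::sum);
        return counts;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = reduce(map("to be or not to be"));
        System.out.println(counts); // e.g. {not=1, or=1, to=2, be=2}
    }
}
```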

2 Versions of Map Reduce

Map Reduce Version 1

  1. As given by Google
  2. HDFS Triple Replicated
  3. Parallel Processing via Map and Reduce(aggregated)

Coding Steps

  1. Create a Class
  2. Create a static Map class
  3. Create a static Reduce class
  4. Create a Main Function
    1. Create a Job
    2. Job calls the Map and Reduce Classes

Java Coding for MapReduce

  public class MapReduce {
    public static void main(String[] args)
    {
      //Create Job Runner Instance
      //Call Map Instance on Job Instance
      //Call Reduce Instance on Job Instance
    }

    public void map()
    {
       //write Mapper
    }

    public void reduce()
    {
       //write Reducer
    }
  }

  1. In MapReduce, state should not be shared between tasks
  2. Top-down programming: one entry point, one exit point

Aspects of MapReduce

  1. Job – the unit of MapReduce work
  2. Map Task – runs on each node
  3. Reduce Task – runs on some nodes
  4. Source data – HDFS or another location (Amazon S3)

In Java, values transferred over the network must be serialized on the sending side and deserialized on the receiving side. In MapReduce the Map output is serialized, and it is deserialized again as the Reduce input. These serialized/deserialized values are called Writables in MapReduce. To achieve this, String in Java is replaced with Text and int with IntWritable, which handle the serialization on their own.
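What a Writable does under the hood can be imitated with plain java.io: IntWritable writes its int to a binary stream and Text writes its string (Hadoop's actual classes live in org.apache.hadoop.io and Text uses a slightly different encoding, so this is only a sketch):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class WritableSketch {
    static byte[] serialize(int count, String word) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeInt(count);   // what IntWritable.write() boils down to
        out.writeUTF(word);    // roughly what Text.write() does
        return bytes.toByteArray();
    }

    static Object[] deserialize(byte[] data) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        return new Object[] { in.readInt(), in.readUTF() };
    }

    public static void main(String[] args) throws IOException {
        // Round-trip a (count, word) pair the way a Writable would
        Object[] fields = deserialize(serialize(42, "hadoop"));
        System.out.println(fields[0] + " " + fields[1]); // 42 hadoop
    }
}
```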


An aspect is a software entity implementing a specific non-functional part of the application.

Using AOP has 2 Benefits

  1. The logic for each concern is now in one place, as opposed to being scattered all over the code base.
  2. Classes are cleaner since they only contain code for their primary concern (or core functionality) and secondary concerns have been moved to aspects.

OOP and AOP are not mutually exclusive. AOP can be good addition to OOP. AOP is especially handy for adding standard code like logging, performance tracking, etc. to methods without clogging up the method code with this standard code.

Assume you have a graphical class with many “set…()” methods. After each set method the data of the graphics has changed, so the graphics have changed and need to be updated on screen. Assume that to repaint the graphics you must call “Display.update()”. The classical approach solves this by adding more code: at the end of each set method you write

 void set...(...) {
    :
    :
    Display.update();
}

If you have 3 set-methods, that is not a problem. If you have 200 (hypothetically), it gets really painful to add this everywhere. Also, whenever you add a new set-method, you must be sure not to forget adding this at the end, otherwise you have just created a bug.

AOP solves this without adding tons of code, instead you add an aspect:

after() : set() {
   Display.update();
}

And that’s it! Instead of writing the update code yourself, you just tell the system that after a set() pointcut has been reached, it must run this code and it will run this code. No need to update 200 methods, no need to make sure you don’t forget to add this code on a new set-method. Additionally you just need a pointcut:

pointcut set() : execution(* set*(*) ) && this(MyGraphicsClass) && within(com.company.*);

What does that mean? That means if a method is named “set*” (* means any name might follow after set), regardless of what the method returns (first asterisk) or what parameters it takes (third asterisk) and it is a method of MyGraphicsClass and this class is part of the package “com.company.*”, then this is a set() pointcut. And our first code says “after running any method that is a set pointcut, run the following code”.

See how AOP elegantly solves the problem here? Actually, everything described here can be done at compile time: an AOP preprocessor can modify your source (e.g. adding Display.update() to the end of every set-pointcut method) before the class is even compiled.
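Without AspectJ, the same "after(): set()" advice can also be imitated at runtime in plain Java with a JDK dynamic proxy. A sketch with a hypothetical Shape interface (AspectJ weaves this in at compile time rather than via reflection):

```java
import java.lang.reflect.Proxy;

interface Shape {
    void setWidth(int w);
    int getWidth();
}

class ShapeImpl implements Shape {
    private int width;
    public void setWidth(int w) { this.width = w; }
    public int getWidth() { return width; }
}

public class AfterSetAdvice {
    static int updates = 0; // stands in for Display.update()

    static Shape wrap(Shape target) {
        return (Shape) Proxy.newProxyInstance(
            Shape.class.getClassLoader(),
            new Class<?>[] { Shape.class },
            (proxy, method, args) -> {
                Object result = method.invoke(target, args);
                if (method.getName().startsWith("set"))
                    updates++; // the advice: runs after every set* method
                return result;
            });
    }

    public static void main(String[] args) {
        Shape s = wrap(new ShapeImpl());
        s.setWidth(10); // triggers the advice
        s.getWidth();   // not a setter: no update
        System.out.println(updates); // 1
    }
}
```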

However, this example also shows one of the big downsides of AOP. AOP is actually doing something that many programmers consider an “Anti-Pattern”. The exact pattern is called “Action at a distance”.

Action at a distance is an anti-pattern (a recognized common error) in which behavior in one part of a program varies wildly based on difficult or impossible to identify operations in another part of the program.

As a newbie to a project, I might just read the code of any set-method and consider it broken, as it seems to not update the display. I don’t see by just looking at the code of a set-method, that after it is executed, some other code will “magically” be executed to update the display. I consider this a serious downside! By making changes to a method, strange bugs might be introduced. Further understanding the code flow of code where certain things seem to work correctly, but are not obvious (as I said, they just magically work… somehow), is really hard.


AOP addresses the problem of cross-cutting concerns, which would be any kind of code that is repeated in different methods and can’t normally be completely refactored into its own module, like with logging or verification

function mainProgram()
{  
   var x =  foo();
   doSomethingWith(x);
   return x;
 }

 aspect logging
 { 
    before (mainProgram is called):
    { 
       log.Write("entering mainProgram");
    }

     after (mainProgram is called):
    {  
      log.Write(  "exiting mainProgram with return value of "
                + mainProgram.returnValue);
    }
 } 

 aspect verification
 { 
  before (doSomethingWith is called):
  { 
       if (doSomethingWith.arguments[0] == null) 
       { 
         throw NullArgumentException();
       }

      if (!doSomethingWith.caller.isAuthenticated)
      { 
         throw Securityexception();
      }
    }
 }

And then an aspect-weaver is used to compile the code into this:

function mainProgram()
 { 
   log.Write("entering mainProgram");

   var x = foo();   

   if (x == null) throw NullArgumentException();
   if (!mainProgramIsAuthenticated()) throw Securityexception();
   doSomethingWith(x);   

   log.Write("exiting mainProgram with return value of "+ x);
   return x;
 }

Cross Cutting Concerns

  1. Database Access
  2. Data Entities
  3. Email/Notification
  4. Error Handling
  5. Logging

A wrapper class is any class which “wraps” or “encapsulates” the functionality of another class or component. Wrappers are useful because they provide a level of abstraction from the implementation of the underlying class or component.

A wrapper class is a class that “wraps” around something else, just like its name.

In general a wrapper is going to expand on what the wrappee does, without being concerned about the implementation of the wrappee, otherwise there’s no point of wrapping versus extending the wrapped class. A typical example is to add timing information or logging functionality around some other service interface, as opposed to adding it to every implementation of that interface.

For example, wrapper classes provide a way to use the primitive types as objects. For each primitive we have a wrapper class, such as:

int → Integer
byte → Byte

Integer and Byte are the wrapper classes of the primitives int and byte. There are times/restrictions when you need to use the primitives as objects, so wrapper classes provide a mechanism called boxing/unboxing.

The concept can be well understood by the following example:

double d = 135.0;

Double doubleWrapper = new Double(d);

int integerValue = doubleWrapper.intValue();
byte byteValue = doubleWrapper.byteValue();
String stringValue = doubleWrapper.toString();

In this way we can use a wrapper class to convert a value into other primitive types as well. This kind of conversion is used when you need to convert a primitive to an object and then use it to obtain other primitives. This approach requires a fair amount of code, however; the same result can be achieved with a simple cast, as in the snippet below:

double d = 135.0;
int integerValue = (int) d;

Wrapping in order to add timing or logging around a service interface ends up being a typical example for aspect programming. Rather than going through an interface function by function and adding boilerplate logging, in aspect programming you define a pointcut, which is a kind of regular expression for methods, and then declare methods that you want to have executed before, after or around all methods matching the pointcut. It is probably fair to say that aspect programming is a use of the Decorator pattern, which wrapper classes can also be used for, but both technologies have other uses.
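A timing wrapper of this kind can be sketched with a hypothetical Service interface; the wrapper adds behaviour without touching the wrapped implementation:

```java
interface Service {
    String fetch();
}

class RealService implements Service {
    public String fetch() { return "data"; }
}

// The decorator: implements the same interface and delegates,
// adding timing information around the call
class TimedService implements Service {
    private final Service wrappee;
    long lastNanos; // timing info added by the wrapper

    TimedService(Service wrappee) { this.wrappee = wrappee; }

    public String fetch() {
        long start = System.nanoTime();
        String result = wrappee.fetch(); // delegate to the wrapped object
        lastNanos = System.nanoTime() - start;
        return result;
    }
}

public class WrapperDemo {
    public static void main(String[] args) {
        TimedService service = new TimedService(new RealService());
        System.out.println(service.fetch() + " took " + service.lastNanos + " ns");
    }
}
```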

A boilerplate is a unit of writing that can be reused over and over without change.

A pointcut is a set of join points.