Does the MapReduce framework (for example, the Hadoop implementation) assign the inputs for mappers before the mapper job starts, or is it done at runtime?
In the MapReduce framework, Mapper tasks are assigned to machines based on the data locality concept. This means that the DataNodes which store a block of data are assigned to execute the mapper task for that block of data.

The data is split into blocks when you store it in HDFS, according to the configured replication factor and block size. So if the original file is, say, 128 MB and the block size is 64 MB, the file will be split into two blocks, and these blocks will be stored on two different machines.

A typical block size used by HDFS is 64 MB. Thus, an HDFS file is chopped up into 64 MB chunks, and if possible, each chunk will reside on a different DataNode.

Now, when you run the MapReduce job on that particular file, two Mapper tasks will be launched on these two machines.

So the data split and the launching of mappers are two completely independent things: the first is handled by the HDFS framework and the second by the MapReduce framework.

The inputs for the Map tasks are prepared before the Mapper phase starts in Hadoop. The number of mappers is decided by the number of InputSplits calculated for the given input file before the Mapper phase starts.

Here an InputSplit is a logical block of the given input file; by default, one InputSplit is prepared for every block of the file, and one mapper task is dispatched for every InputSplit.

You can control the number of InputSplits by controlling the mapreduce.input.fileinputformat.split.maxsize and mapreduce.input.fileinputformat.split.minsize properties.
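For instance (a minimal sketch; these FileInputFormat helper methods set the same split-size properties from the driver, and the exact property names can differ slightly between Hadoop versions):

	Job job = new Job(new Configuration(), "split-size-demo"); // old-style Job constructor, as in the WordCount example later in this post
	// Equivalent to setting the split.maxsize / split.minsize properties mentioned above
	FileInputFormat.setMaxInputSplitSize(job, 64 * 1024 * 1024);  // at most 64 MB per split
	FileInputFormat.setMinInputSplitSize(job, 32 * 1024 * 1024);  // at least 32 MB per split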

How many of the calculated map tasks can execute at the same time depends on the capacity of your cluster.

For example, say your input file is about 100 GB (102,400 MB) in size, the block size is 100 MB, and the InputSplit size equals the block size (the default); then 1,024 Map tasks will be calculated. Now assume that your cluster can provide at most 500 containers for map/reduce tasks across the cluster: in the best case only 500 mappers will run in parallel. Whichever machine finishes its Map task container sooner picks the next Map task from the queue, and so on until all mappers have completed.

Do map & reduce tasks run on the same thread?

map() is called sequentially, not in parallel.

At a high level, you absolutely cannot expect these to run in the same thread. They often run on separate machines, which is what makes MapReduce attractive (the ability to run the job on lots of hardware in parallel).

Even if you have a single-machine Hadoop cluster, or your map & reduce tasks happen to run on the same node, you still won't share threads, because the task node daemon will generally create a new JVM for each new task (unless JVM reuse has been configured).

So in general you have to expect that your map and reduce functions run in isolation from each other, with any data exchange occurring only through input and output values.

The second piece of the puzzle is thread safety between different invocations within a single task. There is always a single Mapper or Reducer instance in existence for each task, so there is no complexity there to think about. Within a single instance, execution is controlled by the run() method that is part of the Mapper/Reducer API.

  • By default, map() calls are made in sequence on a single thread (see the run() sketch below).
  • Your implementation of Mapper is free to introduce multithreading to enable fancy execution orders.
  • You are free to introduce shared state on a single instance of Mapper if it helps you process the batch of maps that run within a single task.
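For reference, the default run() loop in org.apache.hadoop.mapreduce.Mapper looks roughly like this (paraphrased from the Hadoop source; newer releases additionally wrap the loop in try/finally):

	public void run(Context context) throws IOException, InterruptedException {
		setup(context);
		// One map() call per record, strictly in sequence on the task's thread
		while (context.nextKeyValue()) {
			map(context.getCurrentKey(), context.getCurrentValue(), context);
		}
		cleanup(context);
	}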

How is the run() method of mapper or reducer class called by the Hadoop framework?

The run() method is called using Java runtime polymorphism (i.e. method overriding). As you can see at line #569 of MapTask.java in the Hadoop source, the extended mapper/reducer is instantiated using the Java Reflection API. The MapTask class gets the name of the extended mapper/reducer from the Job configuration object, which the client program would have set using job.setMapperClass().

The following is code taken from the Hadoop source, MapTask.java:

mapperContext = contextConstructor.newInstance(mapper, job, getTaskID(),
                                               input, output, committer,
                                               reporter, split);

input.initialize(split, mapperContext);
mapper.run(mapperContext);
input.close();

Line #621 is an example of runtime polymorphism. On this line, MapTask calls the run() method of the configured mapper with the mapper Context as the parameter. If run() is not overridden, it calls the run() method of org.apache.hadoop.mapreduce.Mapper, which in turn calls the map() method of the configured mapper.

On line #616, MapTask creates the context object with all the details of the job configuration, etc., as mentioned by @harpun, and then passes it to the run() method on line #621.

The above explanation holds good for the reduce task as well, with the corresponding ReduceTask class being the main entry class.

When configuring a Hadoop cluster, how do you set the number of mappers/reducers for the cluster?

It depends on how many cores and how much memory you have. In general, the number of mapper slots plus the number of reducer slots should not exceed the number of cores. Keep in mind that the machine is also running the TaskTracker and DataNode daemons. One general suggestion is to configure more mappers than reducers.
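As a rough sketch only (the property names below are the classic MR1 TaskTracker settings, normally placed in mapred-site.xml; verify them against your Hadoop version), an 8-core slave node might be sized like this:

	Configuration conf = new Configuration();
	conf.setInt("mapred.tasktracker.map.tasks.maximum", 4);    // map slots on this node
	conf.setInt("mapred.tasktracker.reduce.tasks.maximum", 2); // reduce slots on this node
	// the remaining cores are left for the DataNode and TaskTracker daemons themselves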

How to Chain multiple MapReduce jobs in Hadoop?
Answer
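A minimal sketch of the simplest approach: run the jobs sequentially from the driver so that the second job reads the first job's output. FirstMapper, SecondMapper, etc. and the intermediate path are placeholders, not classes from this post; for more involved pipelines Hadoop also provides JobControl and ChainMapper.

	public static void main(String[] args) throws Exception {
		Configuration conf = new Configuration();
		Path intermediate = new Path("/tmp/chain-intermediate"); // hypothetical intermediate directory

		// Job 1 writes its output to the intermediate directory
		Job job1 = new Job(conf, "first-job");
		job1.setMapperClass(FirstMapper.class);
		job1.setReducerClass(FirstReducer.class);
		FileInputFormat.addInputPath(job1, new Path(args[0]));
		FileOutputFormat.setOutputPath(job1, intermediate);
		if (!job1.waitForCompletion(true)) {
			System.exit(1); // stop the chain if the first job fails
		}

		// Job 2 reads Job 1's output as its input
		Job job2 = new Job(conf, "second-job");
		job2.setMapperClass(SecondMapper.class);
		job2.setReducerClass(SecondReducer.class);
		FileInputFormat.addInputPath(job2, intermediate);
		FileOutputFormat.setOutputPath(job2, new Path(args[1]));
		System.exit(job2.waitForCompletion(true) ? 0 : 1);
	}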

Use of setup() and cleanup() methods
As already mentioned, setup() and cleanup() are methods you can override, if you choose, and they are there for you to initialize and clean up your map/reduce tasks. You actually don’t have access to any data from the input split directly during these phases. The lifecycle of a map/reduce task is (from a programmer’s point of view):

setup -> map -> cleanup

setup -> reduce -> cleanup

What typically happens during setup() is that you may read parameters from the configuration object to customize your processing logic.

What typically happens during cleanup() is that you clean up any resources you may have allocated. There are other uses too, such as flushing out any accumulated aggregate results (see the sketch below).

The setup() and cleanup() methods are simply “hooks” for you, the developer/programmer, to have a chance to do something before and after your map/reduce tasks.
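As an example of the "flush accumulated results" use of cleanup() mentioned above, here is a minimal in-mapper combining sketch (the class name and accumulator are illustrative; it assumes the same org.apache.hadoop.* and java.util imports as the WordCount example below):

	public static class CountingMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
		private Map<String, Integer> counts;

		@Override
		protected void setup(Context context) {
			counts = new HashMap<String, Integer>(); // per-task accumulator, initialized once
		}

		@Override
		public void map(LongWritable key, Text value, Context context) {
			// accumulate partial counts in memory instead of writing one pair per token
			for (String token : value.toString().split("\\s+")) {
				Integer current = counts.get(token);
				counts.put(token, current == null ? 1 : current + 1);
			}
		}

		@Override
		protected void cleanup(Context context) throws IOException, InterruptedException {
			// flush the accumulated counts once, at the end of the task
			for (Map.Entry<String, Integer> e : counts.entrySet()) {
				context.write(new Text(e.getKey()), new IntWritable(e.getValue()));
			}
		}
	}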

For example, in the canonical word count example, let's say you want to exclude certain words from being counted (e.g. stop words such as "the", "a", "be", etc.). When you configure your MapReduce job, you can pass a comma-delimited list of these words as a parameter (key-value pair) into the configuration object. Then in your map code, during setup(), you can acquire the stop words, store them in a variable that is global to the map task, and exclude those words from being counted in your map logic:

public class WordCount {

 public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();
    private Set<String> stopWords;

    protected void setup(Context context) throws IOException, InterruptedException {
        Configuration conf = context.getConfiguration();

        stopWords = new HashSet<String>();
        for(String word : conf.get("stop.words").split(",")) {
            stopWords.add(word.trim()); // trim so entries from a list like "the, a, an" still match
        }
    }

    public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String line = value.toString();
        StringTokenizer tokenizer = new StringTokenizer(line);
        while (tokenizer.hasMoreTokens()) {
            String token = tokenizer.nextToken();
            if(stopWords.contains(token)) {
                continue;
            }
            word.set(token); // use the token already read; calling nextToken() again would skip words
            context.write(word, one);
        }
    }
 } 

 public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {

    public void reduce(Text key, Iterable<IntWritable> values, Context context) 
      throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        context.write(key, new IntWritable(sum));
    }
 }

 public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("stop.words", "the, a, an, be, but, can");

    Job job = new Job(conf, "wordcount");

    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);

    job.setMapperClass(Map.class);
    job.setReducerClass(Reduce.class);

    job.setInputFormatClass(TextInputFormat.class);
    job.setOutputFormatClass(TextOutputFormat.class);

    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    job.waitForCompletion(true);
 }
}

Different modes of Hadoop
Standalone Mode
This is the default mode of Hadoop. HDFS is not utilized in this mode; the local file system is used for input and output. It is used for debugging purposes, and no custom configuration is required in the three Hadoop configuration files (mapred-site.xml, core-site.xml, hdfs-site.xml). Standalone mode is much faster than pseudo-distributed mode.

Pseudo-Distributed Mode (Single-Node Cluster)
Configuration is required in the three files above for this mode. The replication factor is one for HDFS.
Here one node is used as the Master Node / Data Node / Job Tracker / Task Tracker.
It is used to test real code against HDFS. A pseudo-distributed cluster is a cluster where all daemons run on a single node.

Fully Distributed Mode (Multi-Node Cluster)
This is the production phase. Data is used and distributed across many nodes, and different nodes are used as the Master Node / Data Node / Job Tracker / Task Tracker.

How many datanodes can run on a single Hadoop cluster?
A cluster can have many DataNodes, but each Hadoop slave node runs only one DataNode process.

How many job tracker processes can run on a single Hadoop cluster?
There can be only one JobTracker process running on a single Hadoop cluster. The JobTracker runs in its own Java virtual machine process. If the JobTracker goes down, all currently active jobs stop.

What sorts of actions does the job tracker process perform?

Client applications submit jobs to the JobTracker.
The JobTracker determines the location of the data by communicating with the NameNode.
The JobTracker finds TaskTracker nodes with open slots for the data.
The JobTracker submits the work to the chosen TaskTracker nodes.
The JobTracker monitors the TaskTracker nodes for signs of activity. If there is not enough activity, the JobTracker transfers the work to a different TaskTracker node.
The JobTracker receives a notification from a TaskTracker if a task has failed. From there, the JobTracker may resubmit the task elsewhere, as described above; if it doesn't, it may blacklist either the job or the TaskTracker.

How does job tracker schedule a job for the task tracker?

When a client application submits a job to the JobTracker, the JobTracker searches for an available TaskTracker slot, preferring the server that contains the relevant DataNode.

What does the mapred.job.tracker property do?

mapred.job.tracker is a configuration property rather than a command: it specifies the host and port at which the JobTracker runs, and it is how client applications and TaskTrackers locate the JobTracker process.

What is “jps”?
jps – Java Virtual Machine Process Status
jps is similar to the ps command. ps on Linux is one of the most basic commands for viewing the processes running on the system, while jps is a standard command-line utility that comes with the JDK and is useful for viewing information about running Java processes. It's a little annoying that jps includes itself in the output.

public class MapReduce {
	public static void main(String[] args) {
		// Load the cluster configuration files and print every key/value pair
		Configuration conf = new Configuration();
		conf.addResource(new Path("/usr/lib/hadoop/etc/hadoop/core-site.xml"));
		conf.addResource(new Path("/usr/lib/hadoop/etc/hadoop/hdfs-site.xml"));

		for (Map.Entry<String, String> entry : conf) {
			System.out.println("Key " + entry.getKey());
			System.out.println("Value " + entry.getValue());
		}
	}
}

CreationScript.java

	sessionFactory = createSessionFactory();
	Session objSession = sessionFactory.openSession();
	objSession.beginTransaction();

	Criteria crt = objSession.createCriteria(Users.class);
	crt.add(Restrictions.eq("UserName", "UserName 9"));


	List<Users> arrUsers = (List<Users>)crt.list();

	for (Users users : arrUsers) {
		System.out.println(users.getUserName());
	}

AND Restrictions

Criteria crt = objSession.createCriteria(Users.class);
	crt.add(Restrictions.eq("UserName", "UserName 9")).
		add(Restrictions.gt("UserId", 5));

Criteria crt = objSession.createCriteria(Users.class);
	crt.add(Restrictions.eq("UserName", "UserName 9")).
		add(Restrictions.between("UserId", 5, 10));

OR Restrictions

Criteria crt = objSession.createCriteria(Users.class);
		crt.add(Restrictions.or(Restrictions.between("UserId", 0, 5), Restrictions.like("UserName", "Updated %")));

Getting list of Users from users Table

     sessionFactory = createSessionFactory();
     Session objSession = sessionFactory.openSession();
     objSession.beginTransaction();
		
     Query objQuery = objSession.createQuery("from Users");
     List<Users> arrUsers = objQuery.list();
		
     objSession.getTransaction().commit();
     objSession.close();
		
     System.out.println(arrUsers.size());

     for (Users users : arrUsers) {
	System.out.println(users.getUserName());
     }

Pagination Using HQL

    Query objQuery = objSession.createQuery("from Users");		
    objQuery.setFirstResult(5);
    objQuery.setMaxResults(2);
    List<Users> arrUsers = objQuery.list();
				
    objSession.getTransaction().commit();
    objSession.close();
		
    System.out.println(arrUsers.size());
		
    for (Users users : arrUsers) {
	System.out.println(users.getUserName());
    }

Note: In pagination, the first record to fetch is specified by setFirstResult, and the maximum number of records to fetch is specified by setMaxResults.

Taking a Specific Column from an Entity

	Query objQuery = objSession.createQuery("select UserName from Users");		
	objQuery.setFirstResult(5);
	objQuery.setMaxResults(2);
	List<String> arrUsers = (List<String>)objQuery.list();

	objSession.getTransaction().commit();
	objSession.close();

	System.out.println(arrUsers.size());

	for (String users : arrUsers) {
		System.out.println(users);
	}

Note:
The property name in the HQL query should match the one declared in the entity class exactly, including case. username will not work in the select query, but UserName does.

Parameter Binding in Hibernate
Method 1

  Query objQuery = objSession.createQuery("from Users where UserId >?");
  objQuery.setParameter(0, 5);		
  List<Users> arrUsers = (List<Users>)objQuery.list();

  for (Users users : arrUsers) {
	System.out.println(users.getUserName());
  }

Method 2

     Query objQuery = objSession.createQuery("from Users where UserId > :limit");
     objQuery.setInteger("limit", 5);
		
     List<Users> arrUsers = (List<Users>)objQuery.list();
				
     objSession.getTransaction().commit();
     objSession.close();
	
     for (Users users : arrUsers) {
	System.out.println(users.getUserName());
     }

NamedQuery vs NamedNativeQuery
NamedQuery helps to consolidate all queries in one particular place.

Users.java

@Entity
@NamedQuery(name="Users.byUserId", query="from Users where UserId=?")
public class Users {
	@Id @GeneratedValue(strategy=GenerationType.IDENTITY)
	private int UserId;
	private String UserName;
	
	public int getUserId() {
		return UserId;
	}
	public void setUserId(int userId) {
		UserId = userId;
	}
	public String getUserName() {
		return UserName;
	}
	public void setUserName(String userName) {
		UserName = userName;
	}	
}

CreationScript.java

	sessionFactory = createSessionFactory();
	Session objSession = sessionFactory.openSession();
	objSession.beginTransaction();
		
	Query objQuery = objSession.getNamedQuery("Users.byUserId");
        objQuery.setInteger(0, 5);
		
	List<Users> arrUsers = (List<Users>)objQuery.list();
	
	for (Users users : arrUsers) {
	   System.out.println(users.getUserName());
	}

NamedNativeQuery lets us query the table directly using the table name, instead of querying through the entity as with NamedQuery. This is useful when we use stored procedures to fetch our result sets.

Users.java

@Entity
@NamedNativeQuery(name="Users.byUserId", query="SELECT * from Users where UserId=?", resultClass=Users.class)
public class Users {
	@Id @GeneratedValue(strategy=GenerationType.IDENTITY)
	private int UserId;
	private String UserName;
	
	public int getUserId() {
		return UserId;
	}
	public void setUserId(int userId) {
		UserId = userId;
	}
	public String getUserName() {
		return UserName;
	}
	public void setUserName(String userName) {
		UserName = userName;
	}	
} 

Note: resultClass=Users.class should be specified; otherwise a class cast exception will be thrown when the results are read.

There may be times when you want to retrieve data from the DB, make some changes to it, and save the data back. In such cases the connection cannot be kept open for the whole period, since DB connections are resource intensive.

So I will fetch the object -> close the Session -> Do some operations -> Open new session again -> Update the Object -> Close the Session.

	sessionFactory = createSessionFactory();
	Session objSession = sessionFactory.openSession();
	objSession.beginTransaction();		
	Users objUser = (Users) objSession.get(com.mugil.user.Users.class, 11);
	objSession.getTransaction().commit();
	objSession.close();

	objUser.setUserName("Updated");

	objSession = sessionFactory.openSession();
	objSession.beginTransaction();
	objSession.update(objUser);
	objSession.getTransaction().commit();
	objSession.close();

Now, the way Hibernate works is that it first runs a select query for the row to be changed and then updates it, so two queries for the whole process. The value fetched from the DB may or may not have changed, so we can have Hibernate compare the fetched and updated values before running the update query: if the fetched data has not changed, the update query won't be run.

The Annotation for that is as below.

 @Entity
 @org.hibernate.annotations.Entity(selectBeforeUpdate=true)
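Applied to the Users entity from earlier, this looks roughly as follows (a sketch; @org.hibernate.annotations.Entity comes from older Hibernate versions, and newer versions expose the same switch as @SelectBeforeUpdate):

	@Entity
	@org.hibernate.annotations.Entity(selectBeforeUpdate = true) // legacy Hibernate annotation
	public class Users {
		@Id @GeneratedValue(strategy = GenerationType.IDENTITY)
		private int UserId;
		private String UserName;
		// getters and setters as shown earlier
	}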

 	objSession.beginTransaction();		
	Users objUser = new Users();
	objUser.setUserName("Max");
	.
	.
	.
	----Transient State----
	.
	.		
	objSession.save(objUser);
	.
	.
	.
	----Persistent State----
	.
	.		
	objUser.setUserName("Max Muller");
	.
	.
	.
	----Persistent State----
	.
	.
	objSession.getTransaction().commit();
	objSession.close();
	.
	.
	.
	----Detached State----
	.
	.
	//Does not get reflected
	objUser.setUserName("Max Muller");
	.
	.
  • The object is in the Transient state until it is saved. An object is in the transient state if it has just been instantiated using the new operator and there is no reference to it in the database, i.e. it does not represent any row in the database.
  • The object is in the Persistent state until the session is closed. An object is in the persistent state if it has a reference in the database, i.e. it represents some row in the database and an identifier value has been assigned to it. If any changes are made to the object, Hibernate will detect those changes and they will take effect in the database, which is why it is called persistent; the changes are written when the session is flushed/closed. A persistent object is within the session scope.
  • The object is in the Detached state once the session is closed. A detached object is one that has been persistent and is no longer in the session scope. Hibernate will not detect any changes made to this object. It can be attached to a session again to make it persistent once more.

Changes made after the session is closed will not be reflected in the database.

[Image: values in the table with and without a DiscriminatorColumn defined]

  • By Default Hibernate follows Single Table Inheritance
  • @DiscriminatorColumn tells Hibernate the column name in which the discriminator value should be saved; otherwise it is saved in a column named DTYPE
@Inheritance(strategy=InheritanceType.SINGLE_TABLE)
@DiscriminatorColumn(name="VEHICLE_TYPE",
		     discriminatorType=DiscriminatorType.STRING)

Vehicles.java

@Entity
@Inheritance(strategy=InheritanceType.SINGLE_TABLE)
@DiscriminatorColumn(name="VEHICLE_TYPE",
		     discriminatorType=DiscriminatorType.STRING)
public class Vehicles {
	@Id @GeneratedValue(strategy=GenerationType.IDENTITY)
	private int VehicleId;	
	private String name;
		
	public int getVehicleId() {
		return VehicleId;
	}
	public void setVehicleId(int vehicleId) {
		VehicleId = vehicleId;
	}
	public String getName() {
		return name;
	}
	public void setName(String name) {
		this.name = name;
	}	
}

TwoWheelers.java

@Entity
@DiscriminatorValue("Bike")
public class TwoWheelers extends Vehicles{
	private String steeringHolder;

	public String getSteeringHolder() {
		return steeringHolder;
	}

	public void setSteeringHolder(String steeringHolder) {
		this.steeringHolder = steeringHolder;
	}
}

FourWheelers.java

@Entity
@DiscriminatorValue("Car")
public class FourWheelers extends Vehicles{
	
	private String steeringWheel;

	public String getSteeringWheel() {
		return steeringWheel;
	}

	public void setSteeringWheel(String steeringWheel) {
		this.steeringWheel = steeringWheel;
	}
	
}

By using the annotation below, individual tables are created for each subclass, instead of all the values being placed in a single table.

InheritanceType.TABLE_PER_CLASS

@Inheritance(strategy=InheritanceType.TABLE_PER_CLASS)

InheritanceType.JOINED

@Inheritance(strategy=InheritanceType.JOINED)

Using InheritanceType.JOINED gives more normalized tables compared to InheritanceType.TABLE_PER_CLASS.

Om should be split into A, U, and M.

That is, when the letters A, U, and M are joined, they form Om. A denotes creation, U denotes protection, and M denotes destruction.

A is the first letter; it signifies the beginning of life.

U comes fifth in the sequence of vowels. It is well known that if people keep the five organs, namely the body, mouth, eyes, nose and ears, under control, their lifespan increases, and as the lifespan grows, whatever they have begun proceeds without obstruction.

Moreover, since U is the letter of protection, it signifies God protecting us. For our actions to proceed without obstruction, we need protection; this is why we write U.

Cascade is used when multiple associated entity objects need to be saved in a single shot.

For Example

 session.save(objUserDetail1);
 session.save(objVeh1);
 session.save(objVeh2);
 session.save(objVeh3);

can be replaced by

 session.persist(objUserDetail1);

UserDetails.java

@Entity
@Table(name="USER_DETAIL")
public class UserDetails 
{	
	@Id @GeneratedValue(strategy = GenerationType.AUTO)	
	private int UserId;	
	private String UserName;
	
	@OneToMany(cascade=CascadeType.PERSIST)
	private List<Vehicles> arrVehicles = new ArrayList<Vehicles>();
	  
	public List<Vehicles> getArrVehicles() {
		return arrVehicles;
	}
	public void setArrVehicles(List<Vehicles> arrVehicles) {
		this.arrVehicles = arrVehicles;
	}
	
	public int getUserId() {
		return UserId;
	}
	public void setUserId(int userId) {
		UserId = userId;
	}
	public String getUserName() {
		return UserName;
	}
	public void setUserName(String userName) {
		UserName = userName;
	}
}

CreateTables.java

public static void main(String[] args) 
  {	
	UserDetails objUserDetail1 =  new UserDetails();
	objUserDetail1.setUserName("Mugil");

	Vehicles objVeh1 = new Vehicles();
	objVeh1.setVehicleName("Suzuki");
	objUserDetail1.getArrVehicles().add(objVeh1);

	Vehicles objVeh2 = new Vehicles();
	objVeh2.setVehicleName("Maruthi");
	objUserDetail1.getArrVehicles().add(objVeh2);

	Vehicles objVeh3 = new Vehicles();
	objVeh3.setVehicleName("Volkswagon");
	objUserDetail1.getArrVehicles().add(objVeh3);
							
	SessionFactory sessionFact = createSessionFactory();
	Session session = sessionFact.openSession();

	session.beginTransaction();				
	session.persist(objUserDetail1);		

	session.getTransaction().commit();
	session.close();  
  }

1. What is the difference between the annotations @Id and @GeneratedValue?

@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
@Column(name="id") 
private Integer id;

@Id
In an Object-Relational Mapping context, every object needs to have a unique identifier. You use the @Id annotation to specify the primary key of an entity.

@GeneratedValue
The @GeneratedValue annotation is used to specify how the primary key should be generated. In the example above, the Identity strategy indicates that the persistence provider must assign primary keys for the entity using a database identity column.

Notes

  • The difference between @Id and @GeneratedValue can be clearly observed when switching between OneToOne and OneToMany mappings: a OneToOne mapping requires only the ID to insert values into both tables, whereas with a OneToMany mapping the insertion into one table depends on the values inserted into the other.
  • @GeneratedValue delegates key generation to the database, for example an identity column or a database-maintained sequence, depending on the strategy

2. Sequence vs Identity
Sequence and identity are both used to generate auto numbers, but the major difference is that an identity is tied to a particular table while a sequence is independent of any table.

If you have a scenario where you need to maintain an auto number globally (across multiple tables), need to restart the interval after a particular number, or need to cache values for performance, that is where you need a sequence rather than an identity.

When only @Id is used, the application must assign the identifier value itself before saving; when @GeneratedValue is added, the persistence provider generates the value (identity columns and sequences typically start counting from 1).
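As a hedged illustration (the field and generator names here are made up, and the two fields belong to two different entities), the two strategies look like this with JPA annotations:

	// Identity: the value comes from an auto-increment column of this entity's own table
	@Id
	@GeneratedValue(strategy = GenerationType.IDENTITY)
	private int orderId;

	// Sequence: the value comes from a named database sequence that is independent of the
	// table, can be shared across tables, and can be cached via allocationSize
	@Id
	@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "order_seq_gen")
	@SequenceGenerator(name = "order_seq_gen", sequenceName = "ORDER_SEQ", allocationSize = 20)
	private int invoiceId;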

3. What is the difference between OneToMany and ManyToOne mapping?
For example, if a user, a company, and a provider all have many addresses, it would make sense to have a unidirectional association from each of them to Address, and have Address not know about its owner.

Suppose you have a User and a Message, where a user can have thousands of messages. It could make sense to model this only as a ManyToOne from Message to User, because you'll rarely ask for all the messages of a user anyway.

In one-to-many you keep references to the many associated objects (via a Set or List). You may not access the parent object from the items it is associated with. E.g. a person has many skills; if you go to a particular skill you may not access the persons possessing that skill. This means that, given a Skill s, you will not be able to do s.persons.

In many-to-one, many items/objects hold a reference to one particular object. E.g. users x and y apply to some job k. Both User objects will have their attribute Job job set to k, but given a reference to the job k you may not access the objects that have it as their job attribute. So to answer the question "Which users have applied to the job k?", you have to go through the list of Users.
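A minimal sketch of the Message to User case described above (the Message class and its column names are illustrative, reusing the Users entity from this post):

	@Entity
	public class Message {
		@Id @GeneratedValue(strategy = GenerationType.IDENTITY)
		private int messageId;
		private String body;

		// many messages point to one user; the Users side knows nothing about messages
		@ManyToOne
		@JoinColumn(name = "USER_ID")
		private Users author;

		// getters and setters omitted
	}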

One-to-Many: One Person Has Many Skills, a Skill is not reused between Person(s)
Unidirectional: A Person can directly reference Skills via its Set
Bidirectional: Each “child” Skill has a single pointer back up to the Person (not shown in the code here)


Many-to-Many: One Person Has Many Skills, a Skill is reused between Person(s)
Unidirectional: A Person can directly reference Skills via its Set
Bidirectional: A Skill has a Set of Person(s) which relate to it.

4. What is the difference between unidirectional and bidirectional associations?
A bidirectional relationship provides navigational access in both directions, so that you can access the other side without explicit queries. It also allows you to apply cascading options in both directions.

When we have a bidirectional relationship between objects, it means that we are able to access Object A from Object B, and Object B from Object A.

Unidirectional means navigation is allowed only from one side of the mapping to the other; for example, in a one-to-many mapping, only from the one side to the many side. Bidirectional means navigation is allowed both ways.
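A hedged sketch of making the Person/Skill association from above bidirectional (class and column names are illustrative; mappedBy marks Person as the inverse side while Skill owns the foreign key):

	@Entity
	public class Person {
		@Id @GeneratedValue(strategy = GenerationType.IDENTITY)
		private int personId;

		// Person -> Skill navigation
		@OneToMany(mappedBy = "person", cascade = CascadeType.ALL)
		private Set<Skill> skills = new HashSet<Skill>();
		// getters and setters omitted
	}

	@Entity
	public class Skill {
		@Id @GeneratedValue(strategy = GenerationType.IDENTITY)
		private int skillId;

		// Skill -> Person back-reference that makes the association bidirectional
		@ManyToOne
		@JoinColumn(name = "PERSON_ID")
		private Person person;
		// getters and setters omitted
	}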
