Q1.What is the Difference between Filters and Interceptors?
Filter: – A filter, as the name suggests, is a Java class executed by the servlet container for each incoming HTTP request and each HTTP response. This way, it is possible to manage incoming HTTP requests before they reach the resource, such as a JSP page, a servlet or a simple static page; in the same way, it is possible to manage the outbound HTTP response after resource execution.

  1. A filter dynamically intercepts requests and responses to transform or use the information contained in the requests or responses.
  2. Filters typically do not themselves create a response, but instead provide universal functions that can be “attached” to any type of servlet or JSP page.
  3. They provide the ability to encapsulate recurring tasks in reusable units. Organized developers are constantly on the lookout for ways to modularize their code.
  4. Modular code is more manageable and documentable, is easier to debug, and if done well, can be reused in another setting.

Interceptor: – Spring Interceptors are similar to Servlet Filters, but they act in the Spring context, so they are more powerful for managing the HTTP request and response and can implement more sophisticated behaviour because they have access to the whole Spring context. Interceptors are used in conjunction with Java EE managed classes to allow developers to invoke interceptor methods in conjunction with method invocations or lifecycle events on an associated target class. Common uses of interceptors are logging, auditing, or profiling.

  1. Interceptor can be defined within a target class as an interceptor method, or in an associated class called an interceptor class.
  2. Interceptor classes contain methods that are invoked in conjunction with the methods or lifecycle events of the target class.
  3. Interceptor classes and methods are defined using metadata annotations, or in the deployment descriptor of the application containing the interceptors and target classes.
  4. Interceptor classes may be targets of dependency injection. Dependency injection occurs when the interceptor class instance is created, using the naming context of the associated target class, and before any @PostConstruct callbacks are invoked.


Q2.Spring interceptor vs servlet filter?

  1. Using an interceptor we can inject other beans into the interceptor
  2. We can use more advanced mapping patterns (Ant-style)
  3. We have the target handler object (controller) available, as well as the resulting ModelAndView
  4. It is a bean, so we can use AOP with it (although that would be rare)

Q3.How to avoid multiple submission of Form to Server?

  1. Use JavaScript to disable the button a few milliseconds after the click. This avoids multiple submits caused by impatient users clicking the button several times.
  2. Send a redirect after the submit; this is known as the Post-Redirect-Get (PRG) pattern. This avoids multiple submits caused by users pressing F5 on the result page and ignoring the browser warning that the data will be resent, or navigating back and forth with the browser back/forward buttons and ignoring the same warning.
  3. Generate a unique token when the page is requested and put it both in the session scope and in a hidden field of the form. During processing, check that the token is there, remove it from the session immediately and continue processing. If the token is not there, block processing. This avoids the aforementioned kinds of problems (see the sketch below).
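
A minimal sketch of the token approach in a plain servlet (the class name, FORM_TOKEN attribute and redirect target are illustrative assumptions, not from the original answer):

import java.io.IOException;
import java.util.UUID;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class OrderServlet extends HttpServlet {

    // GET: render the form and store a one-time token in the session
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String token = UUID.randomUUID().toString();
        req.getSession().setAttribute("FORM_TOKEN", token);
        resp.getWriter().println(
            "<form method='post'>" +
            "<input type='hidden' name='token' value='" + token + "'/>" +
            "<input type='submit'/></form>");
    }

    // POST: process only if the submitted token matches, and consume it immediately
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        HttpSession session = req.getSession();
        String sessionToken = (String) session.getAttribute("FORM_TOKEN");

        if (sessionToken != null && sessionToken.equals(req.getParameter("token"))) {
            session.removeAttribute("FORM_TOKEN"); // consume the token so a resubmit is rejected
            // ... save the submitted data ...
            resp.sendRedirect("confirmation");     // PRG: redirect after the POST
        } else {
            resp.sendError(HttpServletResponse.SC_CONFLICT, "Duplicate or missing form token");
        }
    }
}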

Q4.What is POST-REDIRECT-GET Pattern?
The client gets a page with a form. The form POSTs to the server. The server performs the action and then redirects to another page. The client follows the redirect.
For example, say the website has the structure below:

  1. /posts (shows a list of posts and a link to “add post”) and /posts/{id} (view a particular post)
  2. /posts/create (if requested with the GET method, returns a form posting to itself; if it’s a POST request, creates the post and redirects to the /posts/{id} endpoint)

For retrieving posts, /posts/{id} might be implemented like this:

  1. Find the post with that ID in the database.
  2. Render a template with the content of that post.

For creating posts, /posts/create might be implemented like this (a sketch follows the list):

  1. If the request is a GET request for the insert-post page, show an empty form with the target set to itself and the method set to POST.
  2. If the request is a POST request:
    • Validate the fields.
    • If there are invalid fields, show the form again with the errors indicated.
  3. Otherwise, if all fields are valid:
    • Add the post to the database.
    • Redirect to /posts/{id} (where {id} is returned from the call to the database).
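
A minimal Spring MVC sketch of this flow (the PostService collaborator, its methods and the view names are illustrative assumptions, not part of the original post):

import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;

// Hypothetical collaborator, assumed to exist for the sketch
interface PostService {
    Object findById(Long id);
    Long save(String title, String body);
}

@Controller
public class PostController {

    private final PostService postService;

    public PostController(PostService postService) {
        this.postService = postService;
    }

    // GET /posts/{id}: find the post in the database and render a template with its content
    @GetMapping("/posts/{id}")
    public String viewPost(@PathVariable Long id, Model model) {
        model.addAttribute("post", postService.findById(id));
        return "post";
    }

    // GET /posts/create: show an empty form posting to itself
    @GetMapping("/posts/create")
    public String showForm() {
        return "createPost";
    }

    // POST /posts/create: create the post, then redirect (the "R" and "G" of PRG)
    @PostMapping("/posts/create")
    public String createPost(@RequestParam String title, @RequestParam String body) {
        Long id = postService.save(title, body);
        return "redirect:/posts/" + id; // the browser issues a fresh GET, so F5 cannot resubmit the POST
    }
}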


Q1.What is the Difference between Stack and Heap?
Stack vs Heap

Q2.Are Wrapper Classes immutable, similar to String?
Yes, wrapper classes are immutable, similar to String.

Q3.Are Wrapper Classes cached, similar to the String pool for Strings?
Yes. Java has an Integer pool for small integers between -128 and 127, so Integer behaves similarly to the String constant pool.
java.lang.Boolean stores two built-in instances, TRUE and FALSE, and returns their references if the new keyword is not used.
java.lang.Character has a cache for chars between Unicode 0 and 127 (ASCII-7 / US-ASCII).
java.lang.Long has a cache for longs between -128 and +127.
java.lang.String has a whole separate concept, the string pool.

Q4.How does a String behave in memory management in the case of a String literal vs a String object?
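
The answer is not written out above; as a minimal sketch of the usual behaviour (string literals are interned in the string pool, while new String(...) always creates a fresh heap object):

public class StringMemoryDemo {
    public static void main(String[] args) {
        String a = "abc";               // literal, placed in (or reused from) the string pool
        String b = "abc";               // reuses the same pooled instance
        String c = new String("abc");   // new object on the heap, distinct from the pooled one

        System.out.println(a == b);            // true  - same pooled reference
        System.out.println(a == c);            // false - different objects
        System.out.println(a == c.intern());   // true  - intern() returns the pooled instance
    }
}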

Q5.See the Below Code

class D {
    public static void main(String args[]) {
        Integer b1 = 127;   // autoboxed; values in -128..127 come from the Integer cache
        Integer b2 = 127;   // same cached instance as b1
        Integer b3 = 128;   // outside the cache range, so a new Integer object
        Integer b4 = 128;   // another new Integer object
        System.out.println(b1 == b2);
        System.out.println(b3 == b4);
    }
}
true 
false

Why is it so?
If the value p being boxed is true, false, a byte, a char in the range \u0000 to \u007f, or an int or short number between -128 and 127, then let r1 and r2 be the results of any two boxing conversions of p. It is always the case that r1 == r2.

Q6.In which memory would the following be created?

int a = 0; 
Integer b = 0;

It depends on whether the variables a and b are local variables or fields (static or instance) of an object.

If they are local variables:
a is on the stack.
b is on the stack (a reference) and it refers to an object in the heap.

If they are fields of an instance or class:
a is on the heap (as part of the instance or the class).
b is on the heap (as above) and it refers to an object in the heap.

Q7.Why didn't the value of i change after modify was called?

class Demo 
{ 
    public static void main(String[] args) 
    { 
        Integer i = new Integer(12); 
        System.out.println(i); 
        modify(i); 
        System.out.println(i); 
    } 
  
    private static void modify(Integer i) 
    { 
        i = i + 1; 
    } 
} 

Output

12
12

The reason again traces back to the Immutability of wrapper class.

i = i + 1;

It does the following:

  1. Unbox i to an int value
  2. Add 1 to that value
  3. Box the result into another Integer object
  4. Assign the resulting Integer to i (thus changing what object i references)

Since object references are passed by value, the assignment made inside the modify method does not change the i that was used as an argument in the call to modify. Thus the main routine still prints 12 after the method returns.

Q8.How is the array stored in memory?

Object[] arr = new Object[2];
arr[0] = new String("abc");
arr[1] = new ArrayList<String>();

The stack has a single reference to a location in the heap that contains the array itself. The array itself is just an array of references, which in turn point to locations in the heap that contain the objects you reference.

Q9.What is a contiguous memory block?
Arrays are “contiguous”. That means the elements are laid out end-to-end, with no discontinuities and no padding between them (there may be padding inside each element, but not between elements). So an array of 5 4-byte elements looks like this (1 underscore character per byte; the | symbols don’t represent memory): |____|____|____|____|____|. Arrays and ArrayList use contiguous memory, whereas LinkedList uses non-contiguous memory.

Contiguous

Non-Contiguous

Type inference is a feature of Java that gives the compiler the ability to look at each method invocation and corresponding declaration to determine the type of the arguments.
Java 8 provides an improved version of type inference.
Here, we are creating an ArrayList by mentioning the Integer type explicitly on both sides. The following approach was used in earlier versions of Java.

List<Integer> list = new ArrayList<Integer>();  

In the following declaration, we mention the type of the ArrayList on one side only. This approach was introduced in Java 7. Here, you can leave the right-hand side as an empty diamond and the compiler will infer its type from the type of the reference variable.

List<Integer> list2 = new ArrayList<>();   

Improved Type Inference
In Java 8, you can call a generic method without explicitly mentioning the type arguments.

showList(new ArrayList<>());  

Example
You can use type inference with generic classes and methods.

import java.util.ArrayList;
import java.util.List;

public class TypeInferenceExample {
    public static void showList(List<Integer> list) {
        if (!list.isEmpty()) {
            list.forEach(System.out::println);
        } else {
            System.out.println("list is empty");
        }
    }

    public static void main(String[] args) {
        // An old approach (prior to Java 7) to create a list
        List<Integer> list1 = new ArrayList<Integer>();
        list1.add(11);
        showList(list1);

        // Java 7: the diamond can be left empty, the compiler infers the type
        List<Integer> list2 = new ArrayList<>();
        list2.add(12);
        showList(list2);

        // Java 8: the compiler infers the type of the ArrayList passed as an argument
        showList(new ArrayList<>());
    }
}

Output

11
12
list is empty

Type inference for Custom Classes

class GenericClass<X> {
    X name;

    public void setName(X name) {
        this.name = name;
    }

    public X getName() {
        return name;
    }

    public String genericMethod(GenericClass<String> x) {
        x.setName("John");
        return x.name;
    }
}

public class TypeInferenceExample {
    public static void main(String[] args) {
        GenericClass<String> genericClass = new GenericClass<String>();
        genericClass.setName("Peter");
        System.out.println(genericClass.getName());

        GenericClass<String> genericClass2 = new GenericClass<>();
        genericClass2.setName("peter");
        System.out.println(genericClass2.getName());

        // New improved type inference
        System.out.println(genericClass2.genericMethod(new GenericClass<>()));
    }
}

Output

Peter
peter
John

Lambdas implement a functional interface; anonymous inner classes can extend a class or implement an interface with any number of methods.
Variables – Lambdas can only access final or effectively final local variables.
State – Anonymous inner classes can declare instance variables and thus can have state; lambdas cannot.
Scope – Lambdas can’t define a variable with the same name as a variable in the enclosing scope.
Compilation – An anonymous class compiles to a class, while a lambda is an invokedynamic instruction.

Syntax
Lambda expressions look neater compared to an Anonymous Inner Class (AIC).

public static void main(String[] args) {
    Runnable r = new Runnable() {
        @Override
        public void run() {
            System.out.println("in run");
        }
    };

    Thread t = new Thread(r);
    t.start(); 
}

//syntax of lambda expression 
public static void main(String[] args) {
    Runnable r = ()->{System.out.println("in run");};
    Thread t = new Thread(r);
    t.start();
}

Scope
An anonymous inner class is a class, which means it has its own scope for variables defined inside it.

A lambda expression, by contrast, is not a scope of its own but part of the enclosing scope.

A similar rule applies to the super and this keywords when used inside an anonymous inner class and a lambda expression. In an anonymous inner class, this refers to the anonymous class instance itself and super refers to the anonymous class’s superclass. In a lambda expression, this refers to the object of the enclosing type and super refers to the enclosing class’s superclass.

//AIC
    public static void main(String[] args) {
        final int cnt = 0; 
        Runnable r = new Runnable() {
            @Override
            public void run() {
                int cnt = 5;    
                System.out.println("in run" + cnt);
            }
        };

        Thread t = new Thread(r);
        t.start();
    }

//Lambda
    public static void main(String[] args) {
        final int cnt = 0; 
        Runnable r = ()->{
            int cnt = 5; //compilation error
            System.out.println("in run"+cnt);};
        Thread t = new Thread(r);
        t.start();
    }
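
The this difference described above can be shown with a minimal sketch (Greeter is an illustrative class name, not part of the original examples):

public class Greeter {
    private final String name = "enclosing Greeter";

    public void demo() {
        // In an anonymous inner class, 'this' is the anonymous instance itself
        Runnable aic = new Runnable() {
            @Override
            public void run() {
                System.out.println(this.getClass().getName()); // prints something like Greeter$1
            }
        };

        // In a lambda, 'this' is the enclosing Greeter instance
        Runnable lambda = () -> System.out.println(this.name); // prints "enclosing Greeter"

        aic.run();
        lambda.run();
    }

    public static void main(String[] args) {
        new Greeter().demo();
    }
}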

Performance
At runtime, anonymous inner classes require class loading, memory allocation, object initialization and the invocation of a non-static method, while a lambda expression is compiled to an invokedynamic call site and does not load a separate class per use, so it avoids most of that extra cost. So the performance of a lambda expression is generally as good as or better than that of an anonymous inner class.

Reading a File by extending the Thread API

  1. ReadFile.java has a run() method which implements the file-reading code within a try-with-resources block
  2. In the main method the start() method is called on each ReadFile instance
  3. The threads we have coded run asynchronously (the order of execution cannot be guaranteed), which we can see from the output below

TestThread.java

package com.mugil.test;

import com.mugil.runnables.ReadFile;

public class TestThread {
	public static void main(String[] args) {
		ReadFile objReadFileThread1 = new ReadFile();
		ReadFile objReadFileThread2 = new ReadFile();
		ReadFile objReadFileThread3 = new ReadFile();
				
		objReadFileThread1.start();
		objReadFileThread2.start();
		objReadFileThread3.start();
	}
}

ReadFile.java

package com.mugil.runnables;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;

public class ReadFile extends Thread {

 public void run() {

  try (BufferedReader reader = new BufferedReader(new FileReader(new File("E:\\JavaProjects\\JavaThreads\\src\\Sample.txt")))) {
   String line = null;

   while ((line = reader.readLine()) != null) {
    System.out.println(Thread.currentThread().getName() + " reading line " + line);
   }

  } catch (IOException e) {
   // TODO Auto-generated catch block
   e.printStackTrace();
  }

 }
}

Output

Thread-2 reading line Line1
Thread-0 reading line Line1
Thread-0 reading line Line2
Thread-0 reading line Line3
Thread-1 reading line Line1
Thread-1 reading line Line2
Thread-2 reading line Line2
Thread-1 reading line Line3
Thread-1 reading line Line4
Thread-1 reading line Line5
Thread-0 reading line Line4
Thread-0 reading line Line5
Thread-2 reading line Line3
Thread-2 reading line Line4
Thread-2 reading line Line5

Reading a File implementing the Runnable API

  1. In the code below the Runnable interface is implemented rather than extending Thread
  2. The run() method is called on the instance of ReadFile rather than the start() method
  3. Calling the run() method directly executes the code in the currently running thread rather than creating a new thread for execution, which can be seen in the output as main reading line rather than Thread-N reading line

TestThread.java

package com.mugil.test;

import com.mugil.runnables.ReadFile;

public class TestThread {
	public static void main(String[] args) {
		ReadFile objReadFileThread1 = new ReadFile();
		ReadFile objReadFileThread2 = new ReadFile();
		ReadFile objReadFileThread3 = new ReadFile();
				
		objReadFileThread1.run();
		objReadFileThread2.run();
		objReadFileThread3.run();
	}
}

ReadFile.java

package com.mugil.runnables;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;

public class ReadFile implements Runnable {

 public void run() {

  try (BufferedReader reader = new BufferedReader(new FileReader(new File("E:\\JavaProjects\\JavaThreads\\src\\Sample.txt")))) {
   String line = null;

   while ((line = reader.readLine()) != null) {
    System.out.println(Thread.currentThread().getName() + " reading line " + line);
   }

  } catch (IOException e) {
   // TODO Auto-generated catch block
   e.printStackTrace();
  }
 }
}

Output

main reading line Line1
main reading line Line2
main reading line Line3
main reading line Line4
main reading line Line5
main reading line Line1
main reading line Line2
main reading line Line3
main reading line Line4
main reading line Line5
main reading line Line1
main reading line Line2
main reading line Line3
main reading line Line4
main reading line Line5

Methods to manage a thread are available on the Thread class, not on Runnable. So we pass the Runnable instance as a constructor parameter, like the one below.
TestThread.java

.
.
.
Thread objThread = new Thread(runObj);
objThread.start();
.
.

Q1.Optimistic vs. Pessimistic locking
Optimistic Locking is a strategy where you read a record, take note of a version number (other approaches use dates, timestamps or checksums/hashes) and check that the version hasn’t changed before you write the record back. When you write the record back, you filter the update on the version to make sure it’s atomic (i.e. the record hasn’t been updated between when you read the version and when you write the record to disk) and update the version in one hit. If the record is dirty (i.e. it has a different version to yours), you abort the transaction and the user can restart it.

This strategy is most applicable to high-volume systems and three-tier architectures where you do not necessarily maintain a connection to the database for your session. In this situation the client cannot actually maintain database locks, as the connections are taken from a pool and you may not be using the same connection from one access to the next. Optimistic locking doesn’t necessarily use a version number: other strategies include using (a) a timestamp or (b) the entire state of the row itself. The latter strategy is ugly but avoids the need for a dedicated version column in cases where you aren’t able to modify the schema.

Pessimistic Locking is when you lock the record for your exclusive use until you have finished with it. It has much better integrity than optimistic locking but requires you to be careful with your application design to avoid Deadlocks. To use pessimistic locking you need either a direct connection to the database (as would typically be the case in a two tier client server application) or an externally available transaction ID that can be used independently of the connection.

In the latter case you open the transaction with the TxID and then reconnect using that ID. The DBMS maintains the locks and allows you to pick the session back up through the TxID.

Optimistic locking is used when you don’t expect many collisions. It costs less to do a normal operation, but if a collision DOES occur you pay a higher price to resolve it, as the transaction is aborted. Pessimistic locking is used when a collision is anticipated: the transactions which would violate synchronization are simply blocked.
To select the proper locking mechanism you have to estimate the relative number of reads and writes and plan accordingly.

In short: optimistic locking suits three-tier architectures where you do not necessarily maintain a database connection for your session, whereas pessimistic locking locks the record for your exclusive use until you have finished with it, gives much better integrity, and needs either a direct connection to the database or an external transaction ID. Optimistic (versioning) is faster because there is no locking, but pessimistic locking performs better when contention is high and it is better to prevent the conflicting work than to discard it and start over. Optimistic locking works best when collisions are rare. A sketch of version-based optimistic locking follows.
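
A minimal JDBC sketch of version-based optimistic locking (the accounts table, its version column and the method signature are illustrative assumptions):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class OptimisticLockingSketch {

    // Returns true if the update won the race, false if someone changed the row in between
    static boolean updateBalance(Connection con, long accountId, long newBalance) throws SQLException {
        long version;
        try (PreparedStatement read = con.prepareStatement(
                "SELECT version FROM accounts WHERE id = ?")) {
            read.setLong(1, accountId);
            try (ResultSet rs = read.executeQuery()) {
                if (!rs.next()) return false;
                version = rs.getLong("version");          // remember the version we read
            }
        }

        try (PreparedStatement write = con.prepareStatement(
                "UPDATE accounts SET balance = ?, version = version + 1 " +
                "WHERE id = ? AND version = ?")) {        // filter the update on that version
            write.setLong(1, newBalance);
            write.setLong(2, accountId);
            write.setLong(3, version);
            return write.executeUpdate() == 1;            // 0 rows => the record was dirty, abort or retry
        }
    }
}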

Q2.What is the Need for Indexing in Database Tables?
An index can be used to efficiently find all rows matching some column in your query and then walk through only that subset of the table to find exact matches. If you don’t have an index on any column in the WHERE clause, the SQL server has to walk through the whole table and check every row to see if it matches, which may be a slow operation on big tables. The index can also be a UNIQUE index, which means that you cannot have duplicate values in that column, or a PRIMARY KEY, which in some storage engines defines where in the database file the value is stored.

Q3.Clustered and Non Clustered Index
A clustered index determines the physical order of data in a table. There can be only one clustered index per table (the clustered index IS the table); all other indexes on a table are termed non-clustered. A clustered index means you are telling the database to store close values physically close to one another on disk. This has the benefit of rapid scan/retrieval of records falling into some range of clustered index values.

For example, you have two tables, Customer and Order:

Customer
----------
ID
Name
Address

Order
----------
ID
CustomerID
Price

If you wish to quickly retrieve all orders of one particular customer, you may wish to create a clustered index on the “CustomerID” column of the Order table. This way the records with the same CustomerID will be physically stored close to each other on disk (clustered), which speeds up their retrieval. The index on CustomerID will obviously not be unique, so you either need to add a second field to “uniquify” the index or let the database handle that for you, but that’s another story.

Since the clustered index is actually related to how the data is stored, there is only one of them possible per table (although you can cheat to simulate multiple clustered indexes).

A non-clustered index is different in that you can have many of them and they then point at the data in the clustered index. You could have e.g. a non-clustered index at the back of a phone book which is keyed on (town, address)

You can have only one clustered index per table because this defines how the data is physically arranged. If you wish an analogy, imagine a big room with many tables in it. You can either put these tables to form several rows or pull them all together to form a big conference table, but not both ways at the same time. A table can have other indexes, they will then point to the entries in the clustered index which in its turn will finally say where to find the actual data.

Clustered Index

  1. Only one clustered index can be there in a table
  2. Sort the records and store them physically according to the order
  3. Data retrieval is faster than non-clustered indexes
  4. Do not need extra space to store logical structure

Non Clustered Index

  1. There can be any number of non-clustered indexes in a table
  2. Do not affect the physical order. Create a logical order for data rows and use pointers to physical data files
  3. Data insertion/update is faster than clustered index
  4. Use extra space to store logical structure

Q4.What is a Staging/Factory table?
A staging table is a temporary table used to stage data just before loading it into the target table from the source table. As the data resides there only temporarily, you can do various things with it, such as
de-duping, cleansing, normalizing to multiple tables, de-normalizing from multiple tables into a single table, and extrapolating.

Q5.Staging vs Temp table?
Staging tables are permanent tables, just database tables containing your business data in some form or other; staging is the process of preparing your business data, usually taken from some business application. Temporary tables can be created at runtime and support all the operations that a normal table can do, but, based on the table type, their scope is limited; these tables are created inside the tempdb database. Temp tables are useful when doing a large number of row manipulations in stored procedures, for example to replace a cursor: we can store a result set in a temp table and then manipulate the data from there. They are also useful when we have a complex join operation.

Permanent table is faster if the table structure is to be 100% the same since there’s no overhead for allocating space and building the table.

Temp table is faster in certain cases (e.g. when you don’t need indexes that are present on permanent table which would slow down inserts/updates)

Q6.What are different types of Tables?
Normal tables are exactly that, physical tables defined in your database.

Local temporary tables are temporary tables that are available only to the session that created them. These tables are automatically destroyed at the termination of the procedure or session that created them.

Global temporary tables are temporary tables that are available to all sessions and all users. They are dropped automatically when the last session using the temporary table has completed. Both local temporary tables and global temporary tables are physical tables created within the tempdb database.

Table variables are laid out like a table but are partially stored on disk and partially stored in memory; it’s a common misconception that table variables are stored only in memory. Because they are partially stored in memory, the access time for a table variable can be faster than the time it takes to access a temporary table.

Q7. Procedures vs Functions?

  1. A stored procedure (SP) can return zero, a single or multiple values; a function (UDF – user-defined function) must return a single value (which may be a scalar or a table).
  2. We can use transactions in an SP; we can’t use transactions in a UDF.
  3. An SP can have input and output parameters; a UDF can have only input parameters.
  4. We can call a function from an SP; we can’t call an SP from a function.
  5. We can’t use an SP in a SELECT/WHERE/HAVING statement; we can use a UDF in a SELECT/WHERE/HAVING statement.
  6. We can use exception handling with a Try-Catch block in an SP; we can’t use a Try-Catch block in a UDF.

Q8.Table vs View?
Table is a preliminary storage for storing data and information in RDBMS. A table is a collection of related data entries and it consists of columns and rows.

A view is a virtual table whose contents are defined by a query. Unless indexed, a view does not exist as a stored set of data values in a database. The advantage of a view is that it can join data from several tables thus creating a new view of it

Advantages over table are

  1. We can combine columns/rows from multiple table or another view and have a consolidated view.
  2. Views can be used as security mechanisms by letting users access data through the view, without granting the users permissions to directly access the underlying base tables of the view
  3. It acts as an abstraction layer for downstream systems, so any change in schema is not exposed and hence the downstream systems don’t get affected.
  4. Instead of sending the complex query to the database all the time, you can save the query as a view and then SELECT * FROM view

Q9.Can a View be indexed?
Yes, a view can be indexed. The big disadvantage of indexed views is that they are recreated every time the underlying table data changes. That restricts the use of indexed views to data that does not change often, typically in a data warehouse or business intelligence environment.

Q10.Which one is Faster Optimistic or Pessimistic?
Optimistic locking assumes concurrent transactions can complete without affecting each other, so it is faster because no locks are enforced while doing transactions. Optimistic locking does not cause transactions to wait for each other; it may cause a transaction to fail, but it does so without any lock ever having been taken.
The word “optimistic” reflects exactly this attitude: “I will not be taking actual locks because I hope they won’t be needed anyway. If it turns out I was wrong about that, I will accept the inevitable failure.”

In a real world analogy. Let’s say you have to get done 2 very important tasks in one day:

  • Get a passport
  • Get a presentation done

Now, the problem is that task-1 requires you to go to an extremely bureaucratic government office that makes you wait for 4 hours in a line to get your passport. Meanwhile, task-2 is required by your office, and it is a critical task. Both must be finished on a specific day.

Case 1: Sequential Execution
Ordinarily, you will drive to passport office for 2 hours, wait in the line for 4 hours, get the task done, drive back two hours, go home, stay awake 5 more hours and get presentation done.

Case 2: Concurrent Execution
But you’re smart. You plan ahead. You carry a laptop with you, and while waiting in the line, you start working on your presentation. This way, once you get back at home, you just need to work 1 extra hour instead of 5.

In this case, both tasks are done by you, just in pieces. You interrupted the passport task while waiting in the line and worked on presentation. When your number was called, you interrupted presentation task and switched to passport task. The saving in time was essentially possible due to interruptability of both the tasks.

Concurrency, IMO, can be understood as the “isolation” property in ACID. Two database transactions are considered isolated if sub-transactions can be performed in any interleaved way and the final result is the same as if the two tasks were done sequentially. Remember that for both the passport and presentation tasks, you are the sole executor.

Case 3: Parallel Execution
Now, since you are such a smart fella, you’re obviously a higher-up, and you have got an assistant. So, before you leave to start the passport task, you call him and tell him to prepare first draft of the presentation. You spend your entire day and finish passport task, come back and see your mails, and you find the presentation draft. He has done a pretty solid job and with some edits in 2 more hours, you finalize it.

Now since, your assistant is just as smart as you, he was able to work on it independently, without needing to constantly ask you for clarifications. Thus, due to the independentability of the tasks, they were performed at the same time by two different executioners.

Still with me? Alright…

Case 4: Concurrent But Not Parallel
Remember your passport task, where you have to wait in the line? Since it is your passport, your assistant cannot wait in line for you. Thus, the passport task has interruptability (you can stop it while waiting in the line, and resume it later when your number is called), but no independentability (your assistant cannot wait in your stead).

Case 5: Parallel But Not Concurrent
Suppose the government office has a security check to enter the premises. Here, you must remove all electronic devices and submit them to the officers, and they only return your devices after you complete your task.

In this, case, the passport task is neither independentable nor interruptible. Even if you are waiting in the line, you cannot work on something else because you do not have necessary equipment.

Similarly, say the presentation is so highly mathematical in nature that you require 100% concentration for at least 5 hours. You cannot do it while waiting in line for passport task, even if you have your laptop with you.

In this case, the presentation task is independentable (either you or your assistant can put in 5 hours of focused effort), but not interruptible.

Case 6: Concurrent and Parallel Execution
Now, say that in addition to assigning your assistant to the presentation, you also carry a laptop with you to passport task. While waiting in the line, you see that your assistant has created the first 10 slides in a shared deck. You send comments on his work with some corrections. Later, when you arrive back home, instead of 2 hours to finalize the draft, you just need 15 minutes.

This was possible because presentation task has independentability (either one of you can do it) and interruptability (you can stop it and resume it later). So you concurrently executed both tasks, and executed the presentation task in parallel.

Let’s say that, in addition to being overly bureaucratic, the government office is corrupt. Thus, you can show your identification, enter it, start waiting in line for your number to be called, bribe a guard and someone else to hold your position in the line, sneak out, come back before your number is called, and resume waiting yourself.

In this case, you can perform both the passport and presentation tasks concurrently and in parallel. You can sneak out, and your position is held by your assistant. Both of you can then work on the presentation, etc.

Back to Computer Science
In computing world, here are example scenarios typical of each of these cases:

Case 1: Interrupt processing.
Case 2: When there is only one processor, but all executing tasks have wait times due to I/O.
Case 3: Often seen when we are talking about map-reduce or hadoop clusters.
Case 4: I think Case 4 is rare. It’s uncommon for a task to be concurrent but not parallel. But it could happen. For example, suppose your task requires access to a special computational chip which can be accessed through only processor-1. Thus, even if processor-2 is free and processor-1 is performing some other task, the special computation task cannot proceed on processor-2.
Case 5: also rare, but not quite as rare as Case 4. A non-concurrent code can be a critical region protected by mutexes. Once it is started, it must execute to completion. However, two different critical regions can progress simultaneously on two different processors.
Case 6: IMO, most discussions about parallel or concurrent programming are basically talking about Case 6. This is a mix and match of both parallel and concurrent executions.

1 server , 1 job queue (with 5 jobs) -> no concurrency, no parallelism (Only one job is being serviced to completion, the next job in the queue has to wait till the serviced job is done and there is no other server to service it)

1 server, 2 or more different queues (with 5 jobs per queue) -> concurrency (since server is sharing time with all the 1st jobs in queues, equally or weighted) , still no parallelism since at any instant, there is one and only job being serviced.

2 or more servers , one Queue -> parallelism ( 2 jobs done at the same instant) but no concurrency ( server is not sharing time, the 3rd job has to wait till one of the server completes.)

2 or more servers, 2 or more different queues -> concurrency and parallelism

In other words, concurrency is sharing time to complete a job; it MAY take the same amount of time to complete the job, but at least it gets started early. The important thing is that jobs can be sliced into smaller jobs, which allows interleaving. Parallelism is achieved with simply more CPUs, servers, people etc. running in parallel. If the resources are shared, pure parallelism cannot be achieved, but this is where concurrency has its best practical use: taking up another job that doesn’t need that resource.

What is the difference between Multithreading vs Multiprocessing?
Multiprocessing is more than one process in execution, whereas multithreading is executing multiple threads within the same process. One of the main requirements for multiprocessing is a multi-core processor.

Multiprocessing: In a cinema, multiple movies are screened simultaneously in different theaters. Each screening represents a separate process. For instance, one theater might be showing an action movie, another might be showing a romantic comedy, and a third might be screening a documentary. Each screening operates independently, with its own audience and projection equipment.

Multithreading: Within a single screening, there are different tasks being performed concurrently to ensure a smooth movie-watching experience. For example, while the movie is playing, the cinema staff might be selling tickets at the box office, preparing popcorn at the concession stand, and monitoring the theater for any disturbances. These tasks can be seen as threads within the same process (screening). They share resources such as the cinema lobby, staff members, and facilities.

What is Context Switching?
A context switch (also sometimes referred to as a process switch or a task switch) is the switching of the CPU (central processing unit) from one process or thread to another.
Context switching happens irrespective of whether the CPU is single-core or multi-core.

  1. Suspending the progression of one process and storing the CPU’s state (i.e., the context) for that process somewhere in memory
  2. Retrieving the context of the next process from memory and restoring it in the CPU’s registers
  3. Returning to the location indicated by the program counter (i.e., returning to the line of code at which the process was interrupted) in order to resume the process.

Should Context Switching Happen More Frequently or Less?
Neither. If it happens too frequently, it consumes resources and no task gets completed; if it happens too rarely, processes appear to hang. How often to context-switch should always be decided by the operating system, taking the number of threads into account.

Difference between Concurrency and Parallelism
You and your friend have visited a restaurant and are seated at a table.

You (the processor) have been tasked to eat and sing at the same time. If you take a bite – stop eating – start singing – sing a few lines – stop singing – resume eating, this is concurrency in action.
You (processor 1) and your friend (processor 2) have been tasked to eat and sing, where one person sings and the other eats at the same time. This is parallelism.

Concurrency is when two or more tasks can start, run, and complete in overlapping time periods. It doesn’t necessarily mean they’ll ever both be running at the same instant. Context switching is a key part of enabling concurrency on a single-core system, for example multitasking on a single-core machine. In concurrency, interruptability exists.

Parallelism is when tasks literally run at the same time, e.g., on a multicore processor. In parallelism, independability exists.

Concurrency                 Concurrency + parallelism
(Single-Core CPU)           (Multi-Core CPU)
 ___                         ___ ___
|th1|                       |th1|th2|
|   |                       |   |___|
|___|___                    |   |___
    |th2|                   |___|th2|
 ___|___|                    ___|___|
|th1|                       |th1|
|___|___                    |   |___
    |th2|                   |   |th2|

In both cases we have concurrency from the mere fact that we have more than one thread running. If we ran this program on a computer with a single CPU core, the OS would be switching between the two threads, allowing one thread to run at a time. If we ran this program on a computer with a multi-core CPU, we would be able to run the two threads in parallel – side by side at the exact same time.

Concurrency is about dealing with lots of things at once. Parallelism is about doing lots of things at once.

How do you explain the following Scenarios?

Scenario 1:

 
                     Completed               Progressing
Timeline    <----------------------->|<-------------------------------->
      P1    |-----------|--------|------------|------|-----------------|
                 T1         T2         T3        T4         T5   

Scenario 2:

 
                Completed                    Progressing
Timeline    <---------------->|<----------------------------------------->
      P1    |--------|--------|---------|-----|--------|--------|--------|
                 T1      T2       T3       T1      T3      T4       T5

Scenario 3:

 
                     Completed                            Progressing
Timeline    <-------------------------------------->|<------------------->
      P1    |--------|--------|---------|-----|--------|--------|--------|
                 T1      T2       T3       T1      T6      T4       T2
      P2    |--------|--------|---------|-----|--------|--------|--------|
                 T2      T1       T1       T2      T3      T6       T3

Scenario 4:

 
                     Completed                            Progressing
Timeline    <-------------------------------------->|<------------------->
      P1    |--------|--------|---------|-----|--------|--------|--------|
                 T1      T2       T3       T1      T6      T2       T2
      P2    |--------|--------|---------|-----|--------|--------|--------|
                 T4      T5       T4       T7      T5      T4       T8

Scenario 1:
Completed Thread: T1, T2 (2)
Progressing Thread: T3 (1)

Neither Concurrent Nor Parallel – Sequential Execution

Scenario 2:
Completed Thread: T2 (1)
Progressing Thread: T1,T3 (>2)

Concurrent Not Parallel

Scenario 3:
Completed Thread: T1,T2 (2)
Progressing Thread: T3,T4,T6 (>3)

Concurrent and Parallel Execution

Scenario 4:
Completed Thread P1: T1,T3
Progressing Thread P1: T2,T6

Completed Thread P2: T8
Progressing Thread P2: T4,T5

In the above, keeping the status of the completed and progressing threads aside, since this is a multi-core processor the threads are not shared
between the processors. This is an example of

Parallel Not concurrent

  1. An application can be concurrent but not parallel, which means that it processes more than one task at the same time, but no two tasks are executing at the same time instant.
  2. An application can be parallel — but not concurrent, which means that it processes multiple sub-tasks of a task in multi-core CPU at the same time.
  3. An application can be neither parallel — nor concurrent, which means that it processes all tasks one at a time, sequentially.
  4. An application can be both parallel — and concurrent, which means that it processes multiple tasks concurrently in multi-core CPU at the same time.

What is a Thread in Java?
A thread is an independent path of execution.

How Program, Process, Thread and Task are related?

  1. A program can give rise to multiple processes.
  2. A process is an instance of a program in execution. When a process starts, it always starts with a single thread; from there, the number of threads grows depending on how the program is written.
  3. A thread is a path of execution within a process. Threads within a process share the same memory as the process. A thread is a basic unit of CPU utilization, consisting of a program counter, a stack, and a set of registers. A thread of execution results from a fork of a computer program into two or more concurrently running tasks.
  4. A task is a set of program instructions that are loaded in memory – the piece of runnable code, or set of instructions, to be processed. A “task” is a piece of work that will execute and complete at some point in the future.

Two processes run in different memory spaces unless forked, but all threads of a process share the same memory space.

Difference between Thread and Task
Suppose you are running a restaurant. You have four orders and four chefs. An order is a thread, a chef is a processor, and cooking is a task. The problem you face is how to efficiently schedule the chefs and orders so that the tasks get done as quickly as possible.

A Task means an action or work you want to do. A Thread may be one of the doer or worker performing that work.

Why should I prefer Thread over process?
Inter-thread communication (sharing data etc.) is significantly simpler to program than inter-process communication.
Context switches between threads are faster than between processes. That is, it’s quicker for the OS to stop one thread and start running another than do the same with two processes.

Example:
Applications with GUIs typically use one thread for the GUI and others for background computation. The spellchecker in MS Office, for example, is a separate thread from the one running the Office user interface. In such applications, using multiple processes instead would result in slower performance and code that’s tough to write and maintain.

It entirely depends on the design whether to go for a thread or a process: threads make sense when you want a set of logically related operations to be carried out in parallel. For example, when you run Notepad++ there is one thread running in the foreground as the editor and another thread running in the background auto-saving the document at regular intervals; no one would design a separate process to do that auto-saving task.

What is the difference between Asynchronous vs synchronous execution?
Synchronous – When you execute something synchronously, you wait for it to finish before moving on to another task. For example, you are in a queue to get a movie ticket: you cannot get one until everybody in front of you gets theirs, and the same applies to the people queued behind you.

Asynchronous – When you execute something asynchronously, you can move on to another task before it finishes. For example, you are in a restaurant with many other people. You order your food; other people can also order theirs, and they don’t have to wait for your food to be cooked and served to you before they can order. In the kitchen, restaurant workers are continuously cooking, serving, and taking orders. People get their food served as soon as it is cooked.

Synchronous (one thread):

Single Thread  |--------A--------||--------B--------|                  

Synchronous (Multi-Threaded):

Thread A |--------A--------|
Thread B                   |--------B--------|
Thread C                                     |--------C--------|

ASynchronous (One thread):

           A-Start ------------------------------------------ A-End   
               | B-Start -----------------------------------------|--- B-End   
	       |    |      C-Start ------------------- C-End      |      |   
	       |    |       |                           |         |      |
  	       V    V       V                           V         V      V      
Single thread->|--A-|---B---|--C-|-A-|-C-|--A--|-B-|--C---|---A---|--B-->|

Asynchronous (Multi-Threaded):

Thread A ->     |----A-----|
Thread B ----->     |-----B-----------| 
Thread C --------->     |-------C----------|
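
A minimal Java sketch of the asynchronous style, where the caller submits work and only blocks when it finally needs the results (the task names and sleep time are illustrative):

import java.util.concurrent.CompletableFuture;

public class AsyncDemo {
    public static void main(String[] args) {
        // Kick off order A without waiting for it to finish
        CompletableFuture<String> orderA = CompletableFuture.supplyAsync(() -> cook("A"));
        // The caller is free to start order B immediately
        CompletableFuture<String> orderB = CompletableFuture.supplyAsync(() -> cook("B"));

        System.out.println("orders placed, caller keeps working...");

        // join() blocks only at the point where we actually need the results
        System.out.println(orderA.join());
        System.out.println(orderB.join());
    }

    private static String cook(String name) {
        try {
            Thread.sleep(500); // simulate cooking time
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "order " + name + " served by " + Thread.currentThread().getName();
    }
}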

Thread Implementation using Runnable vs Callable
A Callable needs to implement the call() method while a Runnable needs to implement the run() method. A Callable can return a value and throw a checked exception; the Runnable interface suits fire-and-forget calls, especially when you are not interested in the result of the task execution. A Callable can be used with the ExecutorService invokeAll/invokeAny(Collection<? extends Callable<T>> tasks) methods, but a Runnable cannot be.

One important difference: the run() method in the Runnable interface returns void; the call() method in the Callable interface returns an object of type T. This allows you to access a response object easily.

public interface Runnable {
    void run();
}

public interface Callable<V> {
    V call() throws Exception;
}
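
A minimal sketch contrasting the two when used with an ExecutorService (the task bodies are illustrative):

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class RunnableVsCallable {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // Runnable: fire and forget, no result, cannot throw a checked exception
        Runnable logTask = () -> System.out.println("logging something");
        pool.execute(logTask);

        // Callable: returns a value and may throw a checked exception
        Callable<Integer> sumTask = () -> 10 + 20;
        Future<Integer> result = pool.submit(sumTask);
        System.out.println("sum = " + result.get()); // blocks until the value is ready

        pool.shutdown();
    }
}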

What is ExecutorService?
It manages a pool of worker threads and allows you to submit tasks for execution. ExecutorService abstracts away many of the complexities associated with lower-level abstractions like raw Thread. It provides mechanisms for safely starting, closing down, submitting, executing, and blocking on the successful or abrupt termination of tasks (expressed as Runnable or Callable). ExecutorService handles the creation, management, and reuse of threads, making it easier to handle concurrent tasks in multithreaded applications.

An ExecutorService is a utility in Java that provides a way to execute tasks concurrently and hides the complexities of the underlying threads.

Below are some benefits:

  1. The executor service manages threads in an asynchronous way
  2. Use a Future to get the return result after thread completion
  3. It allocates work to free threads and automatically reuses threads that have completed their work for new work
  4. The fork-join framework supports parallel processing
  5. Better communication between threads
  6. invokeAll and invokeAny give more control to run any or all tasks at once
  7. shutdown provides the capability to let all assigned work complete
  8. Scheduled executor services provide methods for producing repeating invocations of runnables and callables

What is Future?
The Future interface represents the result of an asynchronous computation. It provides methods to check if the computation is complete, wait for the result, and retrieve the result.

What is difference between calling submit and execute in executorService?
execute: Use it for fire-and-forget calls.
submit: The submit method extends the base method Executor.execute(Runnable) by creating and returning a Future that can be used to cancel execution and/or wait for completion. In a nutshell, submit is a wrapper around execute.

void execute(Runnable command) : Executes the given command at some time in the future. The command may execute in a new thread, in a pooled thread, or in the calling thread, at the discretion of the Executor implementation.

submit can take both a Runnable and a Callable as an argument:

<T> Future<T> submit(Callable<T> task) : Submits a value-returning task for execution and returns a Future representing the pending result of the task.
Future<?> submit(Runnable task) : Submits a Runnable task for execution and returns a Future representing that task.

submit() is a wrapper around execute and hides the exception inside the framework unless you wrap your task code in a try/catch block.

execute() surfaces the exception (the stack trace is printed by the thread’s uncaught exception handler) when the Runnable code actually throws one:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class Main {
    public static void main(String[] args) throws Exception {
        ExecutorService objExecService = Executors.newFixedThreadPool(2);

        objExecService.execute(new Runnable() {
            @Override
            public void run() {
                int num = 5 / 0;
                System.out.println("Division by zero successful");
            }
        });

        objExecService.shutdown();
    }
}

Output

Exception in thread "pool-1-thread-1" java.lang.ArithmeticException: / by zero
	at Main$1.run(Main.java:13)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
	at java.base/java.lang.Thread.run(Thread.java:832)

submit() produces no output when the Runnable code actually throws an exception; the exception is captured in the returned Future and only surfaces if Future.get() is called:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class Main {
    public static void main(String[] args) throws Exception {
        ExecutorService objExecService = Executors.newFixedThreadPool(2);

        objExecService.submit(new Runnable() {
            @Override
            public void run() {
                int num = 5 / 0;
                System.out.println("Division by zero successful");
            }
        });

        objExecService.shutdown();
    }
}

Output

(no output is produced; the exception is swallowed and would only be seen by calling get() on the returned Future)
Why are thread pools needed?
Thread objects use a significant amount of memory, and in a large-scale application, allocating and deallocating many thread objects creates significant memory-management overhead. A thread pool is a pool of already created worker threads ready to do the job; it creates threads and manages them. Instead of creating a thread and discarding it once the task is done, a thread pool reuses threads in the form of worker threads.

Thread creation is a time-consuming process and delays request processing.

A thread pool addresses the issues below:

  • Run time latency for thread creation
  • Uncontrolled use of System Resources

What is Executor? What are different ways of Creating Thread using Executors?
It provides a way to separate the task execution logic from the application code, allowing developers to focus on business logic rather than thread management.

The Executor framework consists of two main components:

  1. The Executor interface, which defines a single method, execute(Runnable), used to submit tasks for execution.
  2. The ExecutorService interface, which extends the Executor interface and provides additional methods for managing the execution of tasks, such as the ability to submit callables and the ability to shut down the executor.

The Executor framework also provides a static utility class called Executors (similar to Collections) which provides several static factory methods to create various types of thread-pool implementations in Java, e.g. fixed-size thread pool, cached thread pool and scheduled thread pool. The best way to get an executor is to use one of these static factory methods. Some of the available factory methods in the Executors class are:

  1. static ExecutorService newCachedThreadPool() : Creates a thread pool that creates new threads as needed, but will reuse previously constructed threads when they are available.
  2. static ExecutorService newFixedThreadPool(int numThreads) : Creates a thread pool that reuses a fixed number of threads.
  3. static ScheduledExecutorService newScheduledThreadPool(int numThreads) : Creates a thread pool that can schedule commands to run after a given delay, or to execute periodically.
  4. newSingleThreadExecutor() : Creates an Executor that uses a single worker thread.

Executor framework is used for creating threadpool

 ExecutorService service = Executors.newFixedThreadPool(10);

Why do we need shutdown in ExecutorService?
The shutdown() method does one thing: it prevents clients from sending more work to the executor service. This means all the existing tasks will still run to completion unless other actions are taken. This is true even for scheduled tasks, e.g., for a ScheduledExecutorService: new instances of a scheduled task won’t run. It also frees up the background thread resources.

shutdown() provides graceful application shutdown: it prevents your application from submitting new tasks and lets all the existing tasks complete before the JVM shuts down.

shutdown() vs shutdownNow()?
shutdown() – Initiates an orderly shutdown in which previously submitted tasks are executed, but no new tasks will be accepted.
shutdownNow() – Attempts to stop all actively executing tasks, halts the processing of waiting tasks, and returns a list of the tasks that were awaiting execution.
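
A minimal sketch of the usual graceful-shutdown idiom combining the two (the pool size, task count and timeout are illustrative):

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class GracefulShutdown {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        for (int i = 0; i < 5; i++) {
            int id = i;
            pool.submit(() -> System.out.println("task " + id + " on " + Thread.currentThread().getName()));
        }

        pool.shutdown();                                      // stop accepting new tasks
        if (!pool.awaitTermination(5, TimeUnit.SECONDS)) {    // wait for the submitted tasks to finish
            List<Runnable> neverStarted = pool.shutdownNow(); // give up and cancel whatever is still queued
            System.out.println(neverStarted.size() + " tasks never started");
        }
    }
}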

What is the Java Fork-Join pool?
Fork-Join pools allow you to break down a larger task into smaller subtasks that can be executed concurrently. This is particularly valuable for tasks that can be divided into independent parts, such as recursive algorithms, matrix multiplication, sorting, and searching. The framework is well suited to tasks that follow a recursive structure: a task is divided into smaller tasks until it reaches the base case, at which point the results are computed. Fork-Join pools employ work-stealing, enabling idle threads to ‘steal’ tasks from other threads’ task queues when they have completed their own work.
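
A minimal sketch of this divide-and-conquer style: a RecursiveTask that sums an array (the threshold value is an illustrative choice):

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000;
    private final long[] numbers;
    private final int start, end;

    SumTask(long[] numbers, int start, int end) {
        this.numbers = numbers;
        this.start = start;
        this.end = end;
    }

    @Override
    protected Long compute() {
        if (end - start <= THRESHOLD) {            // base case: small enough, sum it directly
            long sum = 0;
            for (int i = start; i < end; i++) sum += numbers[i];
            return sum;
        }
        int mid = (start + end) / 2;               // otherwise split into two subtasks
        SumTask left = new SumTask(numbers, start, mid);
        SumTask right = new SumTask(numbers, mid, end);
        left.fork();                               // run the left half asynchronously
        return right.compute() + left.join();      // compute the right half, then combine
    }

    public static void main(String[] args) {
        long[] numbers = new long[1_000_000];
        for (int i = 0; i < numbers.length; i++) numbers[i] = i;
        long total = new ForkJoinPool().invoke(new SumTask(numbers, 0, numbers.length));
        System.out.println("sum = " + total);
    }
}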

What is the Reactor Pattern?
The Reactor pattern efficiently handles multiple concurrent service requests by dispatching them to appropriate event handlers using a single thread or a limited number of threads. The idea is that you create a lot of threads which don’t do anything at first; instead, they “wait for work”. When work arrives (in the form of code), some kind of executor service (the reactor) identifies idle threads from the pool and assigns them work to do. Use it when low latency and high throughput are required in server-side applications, which makes it an essential strategy for modern networking frameworks and web servers.

What is Future?
A Future is used when you have code that performs some long-running operation and only then returns a result. The Future is a placeholder: it doesn’t contain any value as long as the new thread hasn’t finished its work.

Future<String> objFuture = objExecService.submit(() ->{
              Thread.sleep(3000);
              return Thread.currentThread().getName();
});
System.out.println(objFuture.get());

While the separate thread is calculating something, the main thread continues its work. When you think the value has finally been calculated, you call future.get() and obtain the actual value. But be careful: if the value hasn’t yet been assigned and the Future is still empty, the main thread will have to wait until it is.

What is a Worker Thread?
The idea is that you create a lot of threads which don’t do anything at first; instead, they “wait for work”. When work arrives (in the form of code), some kind of executor service (the reactor) identifies idle threads from the pool and assigns them work to do. The worker-thread idea fits naturally with the reactor pattern, where different types of events are run by the handler threads: a thread is not tied to a single event class but will run any number of different events as they occur.

What is Interprocess Communication?
Interprocess communication (IPC) is a set of programming interfaces that allow a programmer to coordinate activities among different program processes that can run concurrently in an operating system. This allows a program to handle many user requests at the same time. Since even a single user request may result in multiple processes running in the operating system on the user’s behalf, the processes need to communicate with each other. The IPC interfaces make this possible. Each IPC method has its own advantages and limitations so it is not unusual for a single program to use all of the IPC methods.

Java interprocess communication is based, at the lowest level, on turning state, requests, and so on into sequences of bytes that can be sent as messages or as a stream to another Java process. You can do this work yourself, or you can use a variety of “middleware” technologies of varying complexity to abstract away the implementation details. Technologies that may be used include Java object serialization, XML, JSON, RMI, CORBA, SOAP / “web services”, message queuing, and so on.

Interprocess Communication vs Inter-Thread Communication?
The fundamental difference is that threads live in the same address space, while processes live in different address spaces. This means that inter-thread communication is about passing references to objects and changing shared objects, whereas inter-process communication is about passing serialized copies of objects. In practice, Java inter-thread communication can be implemented as plain Java method calls on a shared object with appropriate synchronization thrown in.

Inter-Thread Communication = threads inside the same JVM talking to each other. Threads inside the same JVM can use pipelining through lock-free queues to talk to each other with nanosecond latency.

Inter-Process Communication (IPC) = threads inside the same machine but running in different JVMs talking to each other. Threads in different JVMs can use off-heap shared memory (usually acquired through the same memory-mapped file) to talk to each other with nanosecond latency.
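
A minimal sketch of the inter-thread case: two threads in one JVM communicating through a shared queue. The queue capacity, the messages, and the class name InterThreadDemo are assumptions (ArrayBlockingQueue is lock-based; lock-free queues such as ConcurrentLinkedQueue are an alternative when blocking is not needed):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// A producer thread hands messages to a consumer thread through a shared queue.
public class InterThreadDemo {
    public static void main(String[] args) {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);

        Thread producer = new Thread(() -> {
            try {
                queue.put("hello");                   // hand a message to the consumer
                queue.put("world");
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        Thread consumer = new Thread(() -> {
            try {
                System.out.println(queue.take());     // blocks until a message is available
                System.out.println(queue.take());
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        producer.start();
        consumer.start();
    }
}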

What is Starvation?
Starvation describes a situation where a thread is unable to gain regular access to shared resources and is unable to make progress. This happens when shared resources are made unavailable for long periods by “greedy” threads. For example, suppose an object provides a synchronized method that often takes a long time to return. If one thread invokes this method frequently, other threads that also need frequent synchronized access to the same object will often be blocked.

What is Livelock?
A thread often acts in response to the action of another thread. If the other thread’s action is also a response to the action of another thread, then livelock may result. As with deadlock, livelocked threads are unable to make further progress. However, the threads are not blocked; they are simply too busy responding to each other to resume work. This is comparable to two people attempting to pass each other in a corridor: Alphonse moves to his left to let Gaston pass, while Gaston moves to his right to let Alphonse pass. Seeing that they are still blocking each other, Alphonse moves to his right, while Gaston moves to his left. They are still blocking each other, and so on.

Preemptive vs Non-Preemptive Scheduling
Scheduling is the order of execution of threads. The JVM simply uses the underlying threading mechanism provided by the OS.
Non-preemptive (cooperative) Scheduling: the current process releases the CPU voluntarily, either by terminating or by switching to the waiting state (used, for example, in early cooperative-multitasking versions of MS Windows).

Advantages: decreases turnaround time and does not require special hardware (e.g., a timer).
Disadvantages: limited choice of scheduling algorithms.

Preemptive Scheduling: the current process must involuntarily release the CPU when a more important process is inserted into the ready queue or once its allocated CPU time slice has elapsed (used in Unix and Unix-like systems, as well as modern Windows). In the JVM, scheduling is influenced by the priority assigned to each thread, but the JVM may still decide to execute a lower-priority thread in order to avoid starvation.

Advantages: no limitation on the choice of scheduling algorithm.
Disadvantages: additional overhead (e.g., more frequent context switching, a hardware timer, coordinated access to shared data).

How do you implement Thread in Java?
By extending java.lang.Thread class
By implementing java.lang.Runnable interface.
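
Both options in one small sketch; the class names MyThread, MyRunnable, and ThreadCreationDemo are illustrative assumptions:

// Option 1: extend Thread and override run()
class MyThread extends Thread {
    @Override
    public void run() {
        System.out.println("Running in " + Thread.currentThread().getName());
    }
}

// Option 2: implement Runnable and pass it to a Thread
class MyRunnable implements Runnable {
    @Override
    public void run() {
        System.out.println("Running in " + Thread.currentThread().getName());
    }
}

public class ThreadCreationDemo {
    public static void main(String[] args) {
        new MyThread().start();
        new Thread(new MyRunnable()).start();
        new Thread(() -> System.out.println("Runnable as a lambda")).start();
    }
}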

Which way of implementing Thread is better? Extending Thread class or implementing Runnable method?
Implementing Runnable is generally better. Java allows a class to extend only one class, so if we extend Thread we cannot extend anything else; by implementing the Runnable interface we keep that option open. It also separates the task (the Runnable) from the thread that executes it.

What is the difference between start() and run() method of Thread class?
The start() method is used to start a newly created thread; the new thread then internally calls the run() method.

When you invoke run() as a normal method, it is executed in the calling thread; no new thread is started.
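
A small sketch of the difference; the class name StartVsRunDemo is an illustrative assumption:

public class StartVsRunDemo {
    public static void main(String[] args) {
        Runnable task = () -> System.out.println("Executed by " + Thread.currentThread().getName());

        new Thread(task).run();    // plain method call: prints "Executed by main", no new thread
        new Thread(task).start();  // a new thread is created and calls run(): prints "Executed by Thread-1"
    }
}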

What is HttpMessageConverter in Spring MVC?
Spring MVC uses HttpMessageConverter to convert the HTTP request to an object representation and back. Spring then uses one of the Jackson message converters to marshal and unmarshal Java objects to and from JSON over HTTP. Spring uses the “Accept” header to determine the media type it needs to respond with, and the “Content-Type” header to determine the media type of the request body.

Default Message Converters in Spring MVC
StringHttpMessageConverter: it converts Strings from the HTTP request and response.
FormHttpMessageConverter: it converts form data to/from a MultiValueMap.
ByteArrayHttpMessageConverter: it converts byte arrays from the HTTP request and response.
MappingJackson2HttpMessageConverter: it converts JSON from the HTTP request and response.
Jaxb2RootElementHttpMessageConverter: it converts Java objects to/from XML.
SourceHttpMessageConverter: it converts javax.xml.transform.Source from the HTTP request and response.
AtomFeedHttpMessageConverter: it converts Atom feeds.
RssChannelHttpMessageConverter: it converts RSS feeds.

Customizing HttpMessageConverters with Spring MVC
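A minimal sketch of one way to customize the converter list in Java configuration, assuming a Spring 5.x WebMvcConfigurer and jackson-dataformat-xml on the classpath; the class name WebConfig is an illustrative assumption:

import java.util.List;

import org.springframework.context.annotation.Configuration;
import org.springframework.http.converter.HttpMessageConverter;
import org.springframework.http.converter.xml.MappingJackson2XmlHttpMessageConverter;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
@EnableWebMvc
public class WebConfig implements WebMvcConfigurer {

    // extendMessageConverters modifies the default converter list instead of replacing it;
    // overriding configureMessageConverters with a non-empty list would replace the defaults.
    @Override
    public void extendMessageConverters(List<HttpMessageConverter<?>> converters) {
        converters.add(new MappingJackson2XmlHttpMessageConverter());
    }
}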

Annotation Usage
@RequestMapping
@RequestMapping(value = "/{name}", 
                method = RequestMethod.GET, 
                consumes="application/json"
                produces ="application/json",
                headers={"name=pankaj", "id=1"})
path, name, and value (aliases of one another): the URL(s) the method is mapped to
method: compatible HTTP methods
params: filters requests based on presence, absence, or value of HTTP parameters
headers: filters requests based on presence, absence, or value of HTTP headers
consumes: which media types the method can consume in the HTTP request body
produces: which media types the method can produce in the HTTP response body
@RequestBody
@RequestMapping(method = RequestMethod.POST)
@ResponseBody
public HttpStatus something(@RequestBody MyModel myModel) 
{
    return HttpStatus.OK;
}
With @RequestBody, Spring will bind the incoming HTTP request body (for the URL mapped in the method’s @RequestMapping) to that parameter. While doing that, Spring will, behind the scenes, use HTTP message converters to convert the HTTP request body into the domain object (deserialize the request body to a domain object), based on the Content-Type header of the request.
@ResponseBody
@RequestMapping(value = "/user/all", method = RequestMethod.GET)
public @ResponseBody List<User> listAllUsers() {
    return userService.findAllUsers();
}
With @ResponseBody, Spring will bind the return value to the outgoing HTTP response body. While doing that, Spring will, behind the scenes, use HTTP message converters to convert the return value into the HTTP response body (serialize the object to the response body), based on the Accept header of the request.
@RequestParam
http://localhost:8080/springmvc/hello/101?param1=10&param2=20

public String getDetails(
        @RequestParam(value = "param1", required = true) String param1,
        @RequestParam(value = "param2", required = false) String param2) {
    ...
}
@RequestParam is used to obtain a parameter from the URI as well; the @RequestParam annotation is used for accessing query parameter values from the request.
defaultValue – the fallback value used if the request does not contain the parameter or its value is empty (see the snippet after this list)
name – the name of the parameter to bind
required – whether the parameter is mandatory; if true, a request that omits the parameter fails
value – an alias for the name attribute
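
A small sketch of defaultValue in action; the /items path, the page parameter, and the handler name are assumptions for illustration:

@GetMapping("/items")
@ResponseBody
public String listItems(@RequestParam(name = "page", defaultValue = "1") int page) {
    return "Showing page " + page;   // both /items and /items?page=3 work
}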
@PathVariable
http://localhost:8080/springmvc/hello/101?param1=10&param2=20

@RequestMapping("/hello/{id}")    public String getDetails(@PathVariable(value="id") String id,
    @RequestParam(value="param1", required=true) String param1,
    @RequestParam(value="param2", required=false) String param2){
.......
}

@GetMapping("/user/{firstName}/{lastName}")
   @ResponseBody
   public String handler(@MatrixVariable("firstName") String firstName,
         @MatrixVariable("lastName") String lastName
         ) {

      return "<br>Matxrix variable <br> "
            + "firstName =" + firstName +"<br>"
            + "lastName =" + lastName;
   }
@PathVariable is to obtain some placeholder from the URI
@MatrixVariable – a name-value pair within a path segment is referred to as a matrix variable. Matrix variables can appear in any path segment; each variable is separated by a semicolon (;) and multiple values are separated by commas (,). For example:
http://www.example.com/employee/Mike;salary=45000;dept=HR
http://www.example.com/car/Audi;color=RED,BLACK,WHITE
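
By default Spring MVC removes semicolon content from request URLs, so matrix variables must be enabled explicitly. A minimal sketch of that configuration, assuming Spring 5.x Java config; the class name MatrixVariableConfig is an illustrative assumption:

import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.PathMatchConfigurer;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;
import org.springframework.web.util.UrlPathHelper;

@Configuration
public class MatrixVariableConfig implements WebMvcConfigurer {

    // Keep semicolon content in the URL so that @MatrixVariable values can be resolved.
    @Override
    public void configurePathMatch(PathMatchConfigurer configurer) {
        UrlPathHelper urlPathHelper = new UrlPathHelper();
        urlPathHelper.setRemoveSemicolonContent(false);
        configurer.setUrlPathHelper(urlPathHelper);
    }
}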

@RequestHeader
@Controller
public class HelloController {
 @RequestMapping(value = "/hello.htm")
 public String hello(
   @RequestHeader(value="Accept") String accept,
   @RequestHeader(value="Accept-Language") String acceptLanguage,
   @RequestHeader(value="User-Agent", defaultValue="foo") String userAgent,
   HttpServletResponse response) {

  System.out.println("accept: " + accept);
  System.out.println("acceptLanguage: " + acceptLanguage);
  System.out.println("userAgent: " + userAgent);
  
  return null;
 }
}
Reading HTTP request headers is demonstrated in the HelloController above.
The advantage of using Spring’s @RequestHeader is that it will automatically raise an error such as HTTP Status 400 – Missing request header ‘X’ for method parameter, if the header is not sent in the request (when required=true, which is the default).

@RequestHeader makes it easy to obtain header details in our controller class.