@Configuration – @Configuration is a replacement for XML-based configuration of Spring beans. Instead of an XML file we write a class, annotate it with @Configuration, and define the beans in it using the @Bean annotation on methods. It is just another way of configuring the container: it indicates that a class declares one or more @Bean methods and may be processed by the Spring container to generate bean definitions and service requests for those beans at runtime.

Use @Configuration annotation on top of any class to declare that this class provides one or more @Bean methods and may be processed by the Spring container to generate bean definitions and service requests for those beans at runtime.

AppConfig.java

@Configuration
public class AppConfig {

    // Exposes a bean named "demoService"; assumes DemoClass implements the DemoManager interface used below
    @Bean(name = "demoService")
    public DemoManager demoService()
    {
        return new DemoClass();
    }
}

pom.xml
The following dependency should be added to pom.xml before using the @Configuration annotation to get the bean from the context

<dependency>
		<groupId>org.springframework</groupId>
		<artifactId>spring-context</artifactId>
		<version>5.0.6.RELEASE</version>
</dependency>
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;

public class VerifySpringCoreFeature
{
    public static void main(String[] args)
    {
        ApplicationContext context = new AnnotationConfigApplicationContext(AppConfig.class);
        DemoManager obj = (DemoManager) context.getBean("demoService");
        System.out.println(obj.getServiceName());
    }
}

What if I want to ensure the beans are loaded even before they are requested, so the application fails fast?

import org.springframework.context.annotation.AnnotationConfigApplicationContext;

public class MySpringApp {
	public static void main(String[] args) {
		AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext();
		ctx.register(MyConfiguration.class);
		ctx.refresh();          // all singleton beans are created eagerly here, before any getBean() call

		MyBean mb1 = ctx.getBean(MyBean.class);
		MyBean mb2 = ctx.getBean(MyBean.class);  // returns the same singleton instance as mb1

		ctx.close();
	}
}

In the above example Spring loads beans into its context before we have even requested them. This is to make sure all the beans are properly configured, and the application fails fast if something goes wrong.

Now what will happen if we remove the @Configuration annotation from the above configuration class? In that case a call from one @Bean method to myBean() is a plain Java method call, so we get a new instance of MyBean each time and it no longer remains a singleton.
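A minimal sketch of the MyConfiguration class assumed above (MyBean and MyBeanConsumer are placeholder names) that illustrates why the annotation matters:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration   // remove this and myBean() below becomes a plain method call that returns a new instance every time
public class MyConfiguration {

    @Bean
    public MyBean myBean() {
        return new MyBean();
    }

    @Bean
    public MyBeanConsumer myBeanConsumer() {
        // With @Configuration present Spring proxies this call and hands back the singleton MyBean;
        // without it, this is an ordinary Java call that creates a second MyBean instance.
        return new MyBeanConsumer(myBean());
    }
}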

@Configurable – How to inject dependencies into objects that are not created by Spring

public class CarSalon {
    //...
    public void testDrive() {
        Car car = new Car();
        car.startCar();
    }
}
 
@Component
public class Car {
    @Autowired
    private Engine engine;
    @Autowired
    private Transmission transmission;
 
    public void startCar() {
        transmission.setGear(1);
        engine.engineOn();
        System.out.println("Car started");
    }
}
 
@Component
public class Engine {
//...
}
 
@Component
public class Transmission {
//...
}

In the above example

  1. We try to create an object of the Car class using the new operator
  2. However, objects created using the new operator are not container managed; they are managed by the Java runtime
  3. Trying to access the startCar method on that car object will throw a NullPointerException, because engine and transmission were never injected
  4. Using the @Configurable annotation tells Spring to inject dependencies into the object, even before the constructor is run when preConstruction = true is set
  5. You need these JAR files in pom.xml in order to make it work: aspectj-x.x.x.jar, aspectjrt.jar, aspectjweaver-x.x.x.jar
@Configurable(preConstruction = true)
@Component
public class Car {
 
    @Autowired
    private Engine engine;
    @Autowired
    private Transmission transmission;
 
    public void startCar() {
        transmission.setGear(1);
        engine.engineOn();
 
        System.out.println("Car started");
    }
}

Q1.What is the use of @SpringBootApplication?

 @SpringBootApplication =  @Configuration + @EnableAutoConfiguration + @ComponentScan

@Configuration – The Spring @Configuration annotation indicates that the class has @Bean definition methods, so the Spring container can process the class and generate Spring beans to be used in the application. Refer here

@EnableAutoConfiguration – @EnableAutoConfiguration automatically configures the Spring application based on the JAR files on the classpath; it sets up defaults and helpers based on the dependencies in pom.xml. Auto-configuration is applied based on the classpath and the beans already defined. Therefore, we do not need to define the DataSource, EntityManagerFactory, TransactionManager and so on ourselves; based on the classpath, Spring Boot automatically creates the proper beans and registers them for us. For example, when there is a tomcat-embedded.jar on your classpath you likely need a TomcatEmbeddedServletContainerFactory (unless you have defined your own EmbeddedServletContainerFactory bean).

@EnableAutoConfiguration has an exclude attribute to disable an auto-configuration explicitly; otherwise we can simply exclude the dependency from pom.xml. For example, if we do not want Spring to configure Tomcat, then exclude spring-boot-starter-tomcat from spring-boot-starter-web.
@ComponentScan – @ComponentScan defines the scope of the Spring component scan: it goes through the provided base package and picks up the dependencies required by @Bean or @Autowired methods. In a typical Spring application, @ComponentScan is used on configuration classes, the ones annotated with @Configuration. Configuration classes contain methods annotated with @Bean; these @Bean annotated methods generate beans managed by the Spring container. Classes annotated with @Repository, @Service, @Controller, @Configuration or @Component are auto-detected by the component scan. In the code below Spring starts scanning from the package containing the BeanA class.

@Configuration
@ComponentScan(basePackageClasses = BeanA.class)
@EnableAutoConfiguration(exclude = {DataSourceAutoConfiguration.class})
public class Config {

  @Bean
  public BeanA beanA(){
    return new BeanA();
  }

  @Bean
  public BeanB beanB() {
    return new BeanB();
  }

}

Q2.What does the @Configurable annotation do?
@Configurable – @Configurable is an annotation that injects dependencies into objects that are not managed by Spring, using the AspectJ libraries. You still instantiate objects the old way with the plain new operator, but Spring takes care of injecting the dependencies into those objects automatically for you.
Refer here

Q3.What is a Starter Project? What are the starter projects offered?
Starter POMs are a set of convenient dependency descriptors that you can include in your application. Some commonly used starters are:
spring-boot-starter-web-services – SOAP Web Services
spring-boot-starter-web – Web & RESTful applications
spring-boot-starter-test – Unit testing and Integration Testing
spring-boot-starter-jdbc – Traditional JDBC
spring-boot-starter-hateoas – Add HATEOAS features to your services
spring-boot-starter-security – Authentication and Authorization using Spring Security
spring-boot-starter-data-jpa – Spring Data JPA with Hibernate
spring-boot-starter-data-rest – Expose Simple REST Services using Spring Data REST

Q4.What does the SpringApplication.run() method do?

@SpringBootApplication
public class EmployeeManagementApplication 
{
 public static void main(String[] args) 
 {
  SpringApplication.run(EmployeeManagementApplication.class, args);
 }
}

– The SpringApplication class is used to bootstrap and launch a Spring application from a Java main method.
– It creates the ApplicationContext, scans the configuration classes and launches the application.
– You can find the list of beans loaded by changing the code as below. Since Spring Boot provides auto-configuration, there are a lot of beans configured by it.

  
@SpringBootApplication(scanBasePackages = "com.mugil.org.beans")
public class EmployeeManagementApplication
{
	public static void main(String[] args)
	{
		ApplicationContext ctx = SpringApplication.run(EmployeeManagementApplication.class, args);
		String[] beans = ctx.getBeanDefinitionNames();
		for(String s : beans) System.out.println(s);
	}
}

@Component
public class Employee 
{
.
.
.
}

In the console you can see employeeManagementApplication and employee printed with their first letter in lower case.

Q5.Why was Spring Boot created?
There was a lot of difficulty in setting up the Hibernate DataSource, EntityManager, SessionFactory and transaction management, and it took a lot of time for a developer to set up a basic Spring project with minimum functionality. Spring Boot does all of this through auto-configuration and takes care of all the internal dependencies that your application needs; all you need to do is run your application. It follows an "Opinionated Defaults Configuration" approach to reduce developer effort. Spring Boot looks at a) the frameworks available on the classpath and b) the existing configuration of the application. Based on these, Spring Boot provides the basic configuration needed to wire the application with those frameworks. This is called auto-configuration.

Q6.
Q7.

A Stream is a sequence of data that you can process in a declarative and functional style. The Stream interface is located in the java.util.stream package. It represents a sequence of objects somewhat like the Iterator interface; however, unlike an Iterator, it supports parallel execution. The Stream interface supports the map/filter/reduce pattern and executes lazily, forming the basis (along with lambdas) for functional-style programming in Java 8.
There are also corresponding primitive streams (IntStream, DoubleStream and LongStream) for performance reasons. Laziness is achieved by separating the operations that can be executed on streams into two types: intermediate and terminal operations. Let's take a simple example of iterating through a List with the aim of summing up the numbers above 10.

private static int sumIterator(List<Integer> list) {
	Iterator<Integer> it = list.iterator();
	int sum = 0;
	while (it.hasNext()) {
		int num = it.next();
		if (num > 10) {
			sum += num;
		}
	}
	return sum;
}

The disadvantages of the above method are

  1. The program is sequential in nature; there is no easy way to run it in parallel.
  2. We have to write the logic for how the iteration takes place and how the integers are summed. This is also called external iteration, because the client program handles the algorithm for iterating over the list.

To overcome the above issues, Java 8 introduced the Stream API, which implements internal iteration. This is better because the Java framework is in control of the iteration. Internal iteration provides several features such as sequential and parallel execution, filtering based on given criteria, mapping and so on. Java 8 Stream API method arguments are functional interfaces, so lambda expressions work very well with them. Using Stream, the same code turns out to be

private static int sumStream(List<Integer> list) {
	return list.stream().filter(i -> i > 10).mapToInt(i -> i).sum();
}

Streams are lazy because intermediate operations are not evaluated until a terminal operation is invoked. Each intermediate operation creates a new stream, stores the provided operation/function and returns the new stream; the pipeline accumulates these newly created streams. When the terminal operation is called, traversal of the streams begins and the associated functions are performed one by one. Parallel streams don't evaluate elements one by one at the terminal point; the operations are rather performed simultaneously, depending on the available cores.

To perform a sequence of operations over the elements of a data source and aggregate their results, three parts are needed:

  1. Source
  2. Intermediate operation(s)
  3. Terminal operation
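A small sketch labelling the three parts on the sum example used earlier:

import java.util.Arrays;
import java.util.List;

public class StreamPartsDemo {
    public static void main(String[] args) {
        List<Integer> list = Arrays.asList(5, 12, 7, 30);

        int sum = list.stream()                    // 1. source – the collection
                      .filter(i -> i > 10)         // 2. intermediate operation – lazy, returns a new Stream
                      .mapToInt(Integer::intValue) //    another intermediate operation – converts to an IntStream
                      .sum();                      // 3. terminal operation – triggers the actual traversal

        System.out.println(sum);                   // prints 42
    }
}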

How to create a simple stream

Collection<String> collection = Arrays.asList("a", "b", "c");
Stream<String> streamOfCollection = collection.stream();
Stream<String> streamOfArray = Stream.of("a", "b", "c");
String[] arr = new String[]{"a", "b", "c"};
Stream<String> streamOfArrayFull = Arrays.stream(arr);
Stream<String> streamOfArrayPart = Arrays.stream(arr, 1, 3);

Using Stream.builder(): when the builder is used, the desired type should additionally be specified on the right-hand side of the statement, otherwise the build() method will create an instance of Stream<Object>.

Stream<String> streamBuilder =
  Stream.<String>builder().add("a").add("b").add("c").build();

Using Stream.generate()
The generate() method accepts a Supplier for element generation. As the resulting stream is infinite, the developer should limit it to the desired size, otherwise generate() will keep producing elements until it reaches the memory limit.

Stream<String> streamGenerated =
  Stream.generate(() -> "element").limit(10);

The code above creates a sequence of ten strings with the value – “element”.

Stream.iterate()
Another way of creating an infinite stream is by using the iterate() method:

 Stream<Integer> streamIterated = Stream.iterate(40, n -> n + 2).limit(20);

The first element of the resulting stream is the first parameter of the iterate() method. For every following element the specified function is applied to the previous element. In the example above the second element will be 42.

A stream by itself is worthless, the real thing a user is interested in is a result of the terminal operation, which can be a value of some type or action applied to every element of the stream. Only one terminal operation can be used per stream.

For More details on streams refer here

An ArrayList is not synchronized. That means sharing an instance of ArrayList among many threads, where those threads modify the collection (by adding or removing values), may result in unpredictable behavior. CopyOnWriteArrayList is a thread-safe variant of ArrayList in which all mutative operations (e.g. add, set, remove) are implemented by creating a separate copy of the underlying array. It achieves thread-safety by copying the list on every write, which is a different approach than the one Vector or other collections use to provide thread-safety. Its iterator does not throw ConcurrentModificationException even if the CopyOnWriteArrayList is modified after the iterator is created, because the iterator iterates over its own snapshot of the array while write operations happen on a new copy.

There are two ways to Synchronize ArrayList

  1. Collections.synchronizedList() method – It returns synchronized list backed by the specified list.
  2. CopyOnWriteArrayList class – It is a thread-safe variant of ArrayList.

Collections.synchronizedList()

import java.util.*; 
  
class GFG 
{ 
    public static void main (String[] args) 
    { 
        List<String> list = 
           Collections.synchronizedList(new ArrayList<String>()); 
  
        list.add("practice"); 
        list.add("code"); 
        list.add("quiz"); 
  
        synchronized(list) 
        { 
            // must be in synchronized block 
            Iterator it = list.iterator(); 
  
            while (it.hasNext()) 
                System.out.println(it.next()); 
        } 
    } 
} 

CopyOnWriteArrayList

import java.io.*; 
import java.util.Iterator; 
import java.util.concurrent.CopyOnWriteArrayList; 
  
class GFG 
{ 
    public static void main (String[] args) 
    { 
        // creating a thread-safe Arraylist. 
        CopyOnWriteArrayList<String> threadSafeList 
            = new CopyOnWriteArrayList<String>(); 
  
        // Adding elements to synchronized ArrayList 
        threadSafeList.add("geek"); 
        threadSafeList.add("code"); 
        threadSafeList.add("practice"); 
  
        System.out.println("Elements of synchronized ArrayList :"); 
  
        // Iterating on the synchronized ArrayList using iterator. 
        Iterator<String> it = threadSafeList.iterator(); 
  
        while (it.hasNext()) 
            System.out.println(it.next()); 
    } 
} 

1.What is the Default Size and Capacity of ArrayList in Java 8? What is the Maximum Size of ArrayList?
Size is the number of elements you have placed into the ArrayList, while capacity is the number of elements the backing array can currently hold. Once the capacity is reached, it grows by a factor of about 1.5 (see below). The initial list size is zero (unless you specify otherwise); however, the initial capacity of an ArrayList is 10. The size will change as elements are added to or removed from the list. The capacity will change when the implementation of the list you're using needs it to. (The size, of course, will never be bigger than the capacity.)

When it has to grow, this is used:

 int newCapacity = oldCapacity + (oldCapacity >> 1)

oldCapacity >> 1 is division by two, so the new capacity is 1.5 times the old one:

int newCapacity = oldCapacity + (oldCapacity >> 1);
int newCapacity = oldCapacity + 0.5*oldCapacity; 
int newCapacity = 1.5*oldCapacity ;

Maximum Size of ArrayList
It depends on the implementation; the limit is not defined by the List interface. An ArrayList cannot hold more than Integer.MAX_VALUE elements, since it is backed by an array indexed by an int.

2.What is the difference between a fixed-size container (array) and a variable-size container (list)?
An array is a fixed size container, the number of elements it holds is established when the array is created and never changes. (When the array is created all of those elements will have some default value, e.g., null for reference types or 0 for ints, but they’ll all be there in the array: you can index each and every one.)

A list is a variable size container, the number of elements in it can change, ranging from 0 to as many as you want (subject to implementation limits). After creation the number of elements can either grow or shrink. At all times you can retrieve any element by its index.

List is actually an interface and it can be implemented in many different ways, thus ArrayList, LinkedList, etc. There is a data structure "behind" the list to actually hold the elements, and that data structure itself might be fixed size or variable size; at any given time it might have exactly the size of the number of elements in the list, or it might have some extra "buffer" space. The LinkedList, for example, always has in its underlying data structure exactly the same number of "places for elements" as are in the list it represents. But the ArrayList uses a fixed-length array as its backing store, which is replaced by a larger array when more capacity is needed.

3.How to create a Synchronized ArrayList
There are two ways to Synchronize ArrayList

  1. Collections.synchronizedList() method – It returns synchronized list backed by the specified list.
  2. CopyOnWriteArrayList class – It is a thread-safe variant of ArrayList. It achieves thread-safety by creating a separate copy of the list on every write, which is a different approach than the one Vector or other collections use to provide thread-safety.

More here

4.Why use ArrayList when Vector is synchronized?
Vector synchronizes each individual operation, whereas a programmer generally wants to synchronize a whole sequence of operations. Synchronizing individual operations is both less safe and slower. Vectors are considered obsolete and unofficially deprecated in Java.

5.Difference between CopyOnWriteArrayList and synchronizedList
Both a synchronizedList and a CopyOnWriteArrayList take a lock on the entire array during write operations. The difference emerges if you look at other operations, such as iterating over every element of the collection. The documentation for Collections.synchronizedList says: "It is imperative that the user manually synchronize on the returned list when iterating over it. Failure to follow this advice may result in non-deterministic behavior."

 List list = Collections.synchronizedList(new ArrayList());
    ...
    synchronized (list) {
        Iterator i = list.iterator(); // Must be in synchronized block
        while (i.hasNext())
            foo(i.next());
    }

Iterating over a synchronizedList is not thread-safe unless you do locking manually. Note that when using this technique, all operations by other threads on this list, including iterations, gets, sets, adds, and removals, are blocked. Only one thread at a time can do anything with this collection.

CopyOnWriteArrayList uses a "snapshot"-style iterator: the iterator holds a reference to the state of the array at the point the iterator was created. This array never changes during the lifetime of the iterator, so interference is impossible and the iterator is guaranteed not to throw ConcurrentModificationException. The iterator will not reflect additions, removals, or changes made to the list after the iterator was created.

Operations by other threads on this list can proceed concurrently, but the iteration isn’t affected by changes made by any other threads. So, even though write operations lock the entire list, CopyOnWriteArrayList still can provide higher throughput than an ordinary synchronizedList.

6.What is a Functional Interface? What are the rules to define a Functional Interface? Is the @FunctionalInterface annotation mandatory?
A functional interface, also known as a Single Abstract Method (SAM) interface, contains one and only one abstract method. @FunctionalInterface is not mandatory, but it tells other developers that the interface is functional and prevents them from adding any more abstract methods to it. We can have any number of default methods and static methods. Overriding methods of java.lang.Object such as equals and hashCode does not count as an abstract method. More here
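A hedged sketch of these rules (Calculator and CalculatorDemo are made-up names):

@FunctionalInterface
interface Calculator {
    int calculate(int a, int b);           // the single abstract method (SAM)

    default Calculator andThenDouble() {   // any number of default methods is allowed
        return (a, b) -> 2 * calculate(a, b);
    }

    static Calculator addition() {         // static methods are allowed too
        return (a, b) -> a + b;
    }

    boolean equals(Object other);          // redeclaring a java.lang.Object method does not count as abstract
}

class CalculatorDemo {
    public static void main(String[] args) {
        Calculator add = (a, b) -> a + b;                          // a lambda implements the SAM
        System.out.println(add.calculate(2, 3));                   // 5
        System.out.println(add.andThenDouble().calculate(2, 3));   // 10
    }
}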

7.Difference between Streams and Collections?

Stream: A stream is not a data structure that stores elements; instead, it conveys elements from a source such as a data structure, an array, a generator function, or an I/O channel, through a pipeline of computational operations.
Collection: A collection is a data structure that stores its elements.

Stream: An operation on a stream produces a result, but does not modify its source. For example, filtering a stream obtained from a collection produces a new stream without the filtered elements, rather than removing elements from the source collection.
Collection: An operation on a collection has a direct impact on the collection object itself.

Stream: Streams are based on a 'process-only, on-demand' strategy. Many stream operations, such as filtering, mapping, or duplicate removal, can be implemented lazily, exposing opportunities for optimization. Stream operations are divided into intermediate (stream-producing) operations and terminal (value- or side-effect-producing) operations; intermediate operations are always lazy.
Collection: All data values in a collection are processed in a single shot.

Stream: A stream can act upon an infinite set of values, i.e. an infinite stream.
Collection: Collections always act upon a finite set of data.

Stream: The elements of a stream are only visited once during the life of a stream. Like an Iterator, a new stream must be generated to revisit the same elements of the source.
Collection: Collections can be iterated any number of times.

8.How do I read / convert an InputStream into a String in Java?
Using Apache commons IOUtils to copy the InputStream into a StringWriter

StringWriter writer = new StringWriter();
IOUtils.copy(inputStream, writer, encoding);
String theString = writer.toString();

// or, equivalently, in a single call
String theString = IOUtils.toString(inputStream, encoding);

Using only the standard Java library

static String convertStreamToString(java.io.InputStream is) {
    java.util.Scanner s = new java.util.Scanner(is).useDelimiter("\\A");
    return s.hasNext() ? s.next() : "";
}

Scanner iterates over tokens in the stream, and in this case we separate tokens using “beginning of the input boundary” (\A), thus giving us only one token for the entire contents of the stream.

9.How do I convert a String to an InputStream in Java?

InputStream stream = new ByteArrayInputStream(exampleString.getBytes(StandardCharsets.UTF_8));

Using Apache Commons IO

String source = "This is the source of my input stream";
InputStream in = org.apache.commons.io.IOUtils.toInputStream(source, "UTF-8");

Using a ByteArrayInputStream with an InputStreamReader

String charset = ...; // your charset
byte[] bytes = string.getBytes(charset);
ByteArrayInputStream bais = new ByteArrayInputStream(bytes);
InputStreamReader isr = new InputStreamReader(bais);

10.Difference between hashtable and hashmap?
Click here

11.What is exception masking?
When code in a try block throws an exception, and the close method in the finally block also throws an exception, the exception thrown by the try block gets lost and the exception thrown in the finally block gets propagated. This is usually unfortunate, since the exception thrown on close is typically unhelpful while the original exception is the informative one. Using try-with-resources to close your resources prevents this exception masking from taking place.
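A contrived sketch of the masking behaviour using a hand-rolled resource (DemoResource and MaskingDemo are made-up names):

class DemoResource implements AutoCloseable {
    void work() { throw new IllegalStateException("useful exception from the try block"); }
    @Override
    public void close() { throw new RuntimeException("unhelpful exception from close()"); }
}

public class MaskingDemo {
    public static void main(String[] args) {
        // try/finally: the exception from close() replaces (masks) the one from work()
        try {
            DemoResource r = new DemoResource();
            try {
                r.work();
            } finally {
                r.close();
            }
        } catch (Exception e) {
            System.out.println("try/finally propagated: " + e.getMessage());
        }

        // try-with-resources: the exception from work() is propagated, the close() failure is suppressed
        try (DemoResource r = new DemoResource()) {
            r.work();
        } catch (Exception e) {
            System.out.println("try-with-resources propagated: " + e.getMessage());
            System.out.println("suppressed: " + e.getSuppressed()[0].getMessage());
        }
    }
}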

11.Try With Resources vs Try-Catch

  1. The main point of try-with-resources is to make sure resources are closed, without requiring the application code to do it.
  2. There are situations where two independent exceptions can be thrown in sibling code blocks, in particular in the try block of a try-with-resources statement and in the compiler-generated finally block which closes the resource. In these situations, only one of the thrown exceptions can be propagated. In the try-with-resources statement, when there are two such exceptions, the exception originating from the try block is propagated and the exception from the finally block is added to the list of exceptions suppressed by the exception from the try block. As an exception unwinds the stack, it can accumulate multiple suppressed exceptions.
  3. On the other hand, if your code completes normally but the resource you're using throws an exception on close, that exception (which would have been suppressed if the code in the try block had thrown anything) gets thrown. That means that if you have some JDBC code where a ResultSet or PreparedStatement is closed by try-with-resources, an exception resulting from some infrastructure glitch when a JDBC object gets closed can be thrown and can roll back an operation that otherwise would have completed successfully.

12.How do you get suppressed exceptions?
Only one exception can be thrown by a method (per execution), but in the case of a try-with-resources it is possible for multiple exceptions to be thrown. For instance, one might be thrown in the block and another might be thrown from the implicit finally provided by the try-with-resources. The compiler has to determine which of these to "really" throw. It chooses to throw the exception raised in the explicit code (the code in the try block) rather than the one thrown by the implicit code (the finally block). Therefore the exception(s) thrown in the implicit block are suppressed (ignored). This only occurs in the case of multiple exceptions.

The try-with-resources block does expose the suppressed exceptions using the getSuppressed() method (new since Java 1.7). This method returns all of the exceptions suppressed by the try-with-resources block (notice that it returns ALL of the suppressed exceptions if more than one occurred). A caller might use the following structure to reconcile with existing behavior:

try { 
  testJava7TryCatchWithExceptionOnFinally(); //Method throws exception in both try and finally block
} catch (IOException e) {   
  Throwable[] suppressed = e.getSuppressed();
    for (Throwable t : suppressed) {
    // Check T's type and decide on action to be taken
  }
}

13.How do you avoid fuzzy try-catch blocks in code like the one below?

try{ 
     ...
     stmts
     ...
} 
catch(Exception ex) {
     ... 
     stmts
     ... 
} finally {
     connection.close // throws an exception
}

Write a SQLUtils class that contains static closeQuietly methods that catch and log such exceptions, then use them as appropriate.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class SQLUtils 
{
  private static Log log = LogFactory.getLog(SQLUtils.class);

  public static void closeQuietly(Connection connection)
  {
    try
    {
      if (connection != null)
      {
        connection.close();
      }
    }
    catch (SQLException e)
    {
      log.error("An error occurred closing connection.", e);
    }
  }

  public static void closeQuietly(Statement statement)
  {
    try
    {
      if (statement != null)
      {
        statement.close();
      }
    }
    catch (SQLException e)
    {
      log.error("An error occurred closing statement.", e);
    }
  }

  public static void closeQuietly(ResultSet resultSet)
  {
    try
    {
      if (resultSet != null)
      {
        resultSet.close();
      }
    }
    catch (SQLException e)
    {
      log.error("An error occurred closing result set.", e);
    }
  }
}

and

Connection connection = null;
Statement statement = null;
ResultSet resultSet = null;
try 
{
  connection = getConnection();
  statement = connection.prepareStatement(...);
  resultSet = statement.executeQuery();

  ...
}
finally
{
  SQLUtils.closeQuietly(resultSet);
  SQLUtils.closeQuietly(statement);
  SQLUtils.closeQuietly(connection);
}
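On Java 7 and later, the same cleanup can also be expressed with try-with-resources, since Connection, Statement and ResultSet all implement AutoCloseable. A sketch mirroring the fragment above (the query string and the getConnection() helper are hypothetical):

try (Connection connection = getConnection();
     PreparedStatement statement = connection.prepareStatement("SELECT name FROM employee");
     ResultSet resultSet = statement.executeQuery())
{
  while (resultSet.next())
  {
    System.out.println(resultSet.getString("name"));
  }
}
// The resources are closed automatically in reverse order; any close() failure
// is attached to the primary exception as a suppressed exception.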

14.What is the difference between Iterator and Spliterator?
A Spliterator can be used to split a given element set into multiple sets so that we can perform operations/calculations on each set in a different thread independently, possibly taking advantage of parallelism. It is designed as a parallel analogue of Iterator. Other than collections, the source of elements covered by a Spliterator could be, for example, an array, an I/O channel, or a generator function.

There are 2 main methods in the Spliterator interface.

  1. tryAdvance() – With tryAdvance(), we can traverse the underlying elements one by one (just like Iterator.next()). If a remaining element exists, this method performs the consumer action on it and returns true; otherwise it returns false.
  2. forEachRemaining() – For sequential bulk traversal we can use forEachRemaining().

A Spliterator is also a "smarter" Iterator via its internal characteristics such as DISTINCT or SORTED (which you need to provide correctly when implementing your own Spliterator). These flags are used internally to skip unnecessary operations as an optimization, like this one for example:

 someStream().map(x -> y).count();

Because size does not change in case of the stream, the map can be skipped entirely, since all we do is counting.

You can create a Spliterator around an Iterator if you need to, via:

Spliterators.spliteratorUnknownSize(yourIterator, properties)
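A runnable sketch of both traversal styles (the sample data is made up):

import java.util.Arrays;
import java.util.List;
import java.util.Spliterator;

public class SpliteratorDemo {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Alpha", "Beta", "Gamma", "Delta");

        Spliterator<String> first = names.spliterator();
        Spliterator<String> second = first.trySplit();   // splits off roughly the first half for another thread

        // tryAdvance() processes a single element, like hasNext() + next() combined
        first.tryAdvance(n -> System.out.println("first advanced to: " + n));

        // forEachRemaining() bulk-processes whatever is left in each half
        first.forEachRemaining(n -> System.out.println("first: " + n));
        second.forEachRemaining(n -> System.out.println("second: " + n));

        // characteristics such as SIZED or ORDERED drive the internal optimizations mentioned above
        System.out.println("SIZED? " + names.spliterator().hasCharacteristics(Spliterator.SIZED));
    }
}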

15.What is Type Inference?
Type inference means the compiler determines the type at compile time. It is not a new feature in Java SE 8; it was available in Java 7 and even earlier. Java 8 extends type inference to lambda expressions. Refer here
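A small sketch of both flavours of type inference (names are made up):

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class TypeInferenceDemo {
    public static void main(String[] args) {
        // Java 7 diamond operator: the element type on the right is inferred from the left-hand side
        List<String> names = new ArrayList<>();
        names.add("Mugil");
        names.add("Jo");

        // Java 8 target typing: the lambda parameter types are inferred from the
        // Comparator<String> context, no explicit (String a, String b) is needed
        Comparator<String> byLength = (a, b) -> Integer.compare(a.length(), b.length());

        names.sort(byLength);
        System.out.println(names);   // [Jo, Mugil]
    }
}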

16.What is Optional in Java 8? What is the use of Optional? What are the advantages of Java 8 Optional?
Optional is a final class introduced as part of Java SE 8, defined in the java.util package. It is used to represent an optional value that either exists or does not exist. It can contain either one value or zero values: if it contains a value we can get it, otherwise we get nothing. You can think of it as a bounded collection containing at most one element. It is an alternative to using "null" values.
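A minimal sketch of the basic factory methods and accessors (the values are made up):

import java.util.Optional;

public class OptionalDemo {
    public static void main(String[] args) {
        Optional<String> present = Optional.of("Mugil");       // contains exactly one value
        Optional<String> absent  = Optional.empty();           // contains no value
        Optional<String> maybe   = Optional.ofNullable(null);  // a null becomes an empty Optional

        System.out.println(present.isPresent());                               // true
        System.out.println(present.get());                                     // Mugil
        System.out.println(absent.orElse("default"));                          // default
        System.out.println(maybe.map(String::toUpperCase).orElse("nothing"));  // nothing
    }
}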

17.What is the difference between initialization and instantiation?
Instantiation – This is when memory is allocated for an object. This is what the new keyword does; a reference to the object that was created is returned from the new expression.
Initialization – This is when values are put into the memory that was allocated. This is what the constructor of a class does when using the new keyword. A variable is also initialized by having a reference to some object in memory assigned to it.
Refer here

18.What are the different method references in Java?

  1. Reference to a static method – ClassName::methodName
  2. Reference to an instance method – object::methodName
  3. Reference to a constructor – ClassName::new

Refer Here
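A small sketch of the three kinds of method references (the sample values are made up):

import java.util.Arrays;
import java.util.List;
import java.util.function.Function;
import java.util.function.Supplier;

public class MethodReferenceDemo {
    public static void main(String[] args) {
        // 1. Reference to a static method: ClassName::methodName
        Function<String, Integer> parse = Integer::parseInt;
        System.out.println(parse.apply("42"));                  // 42

        // 2. Reference to an instance method of a particular object: object::methodName
        String greeting = "hello";
        Supplier<String> shout = greeting::toUpperCase;
        System.out.println(shout.get());                        // HELLO

        //    (or of an arbitrary object of a type: ClassName::methodName)
        List<String> names = Arrays.asList("b", "a");
        names.sort(String::compareTo);
        System.out.println(names);                              // [a, b]

        // 3. Reference to a constructor: ClassName::new
        Supplier<StringBuilder> builder = StringBuilder::new;
        System.out.println(builder.get().append("built"));      // built
    }
}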

19.What is the difference between Predicate and Function?
The Predicate interface has an abstract method test(T t) with a boolean return type. Use it when you need to check a condition and return true or false.

The Function interface has an abstract method apply which takes an argument of type T and returns a result of type R. Here, R is simply the type of result the user wants to return; it may be Integer, String, Boolean, Double, Long and so on.
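A short sketch contrasting the two (the age example is made up):

import java.util.function.Function;
import java.util.function.Predicate;

public class PredicateVsFunctionDemo {
    public static void main(String[] args) {
        // Predicate<T>: test(T) always returns a boolean – use it for checks and conditions
        Predicate<Integer> isAdult = age -> age >= 18;
        System.out.println(isAdult.test(21));    // true
        System.out.println(isAdult.test(12));    // false

        // Function<T, R>: apply(T) returns a result of any type R – use it for transformations
        Function<Integer, String> describe = age -> age >= 18 ? "adult" : "minor";
        System.out.println(describe.apply(21));  // adult
    }
}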

20.Why go for Optional instead of a null check?
The effectiveness of Optional is best seen when chaining calls in streams, or when accessing multiple getters at once like the one below.

// Without Optional, any getter in the chain returning null causes a NullPointerException
computer.getSoundcard().getUSB().getVersion();

// With Optional, the chain short-circuits safely when a value is absent
Optional.ofNullable(modem2)
       .map(Modem::getPrice)
       .filter(p -> p >= 10)
       .filter(p -> p <= 15)
       .isPresent();

21.What is a Lambda Expression?
A lambda expression is used to provide the implementation of the abstract method of a functional interface. There is no need to define a named class or method for the implementation; we just write the implementation code inline.

@FunctionalInterface
interface Drawable {
 public void draw();
}

public class LambdaExpressionExample2 {
 public static void main(String[] args) {
  int width = 10;

  //with lambda  
  Drawable d2 = () -> {
   System.out.println("Drawing " + width);
  };
  d2.draw();
 }
}

22.How do you handle checked exceptions in lambda expressions?
To handle a checked exception we use a wrapper around the lambda that translates the checked exception into an unchecked one, as sketched below. Refer here
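A sketch of such a wrapper (ThrowingFunction and wrap() are made-up names, not a standard API):

import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

public class CheckedLambdaDemo {

    // A functional interface whose single method is allowed to throw a checked exception
    @FunctionalInterface
    interface ThrowingFunction<T, R> {
        R apply(T t) throws Exception;
    }

    // The wrapper: adapts a throwing lambda to an ordinary Function
    // by translating the checked exception into an unchecked one
    static <T, R> Function<T, R> wrap(ThrowingFunction<T, R> f) {
        return t -> {
            try {
                return f.apply(t);
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        };
    }

    public static void main(String[] args) {
        // Class.forName throws the checked ClassNotFoundException, so it cannot be used
        // directly where a Function is expected – wrap() makes it fit
        Function<String, Class<?>> loadClass = wrap(Class::forName);

        List<String> classNames = Arrays.asList("java.lang.String", "java.util.List");
        classNames.stream()
                  .map(loadClass)
                  .forEach(c -> System.out.println(c.getSimpleName()));
    }
}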

23.What is the difference between a Lambda Expression and an Anonymous Inner Class?
The key difference between an anonymous class and a lambda expression is the meaning of the 'this' keyword. In an anonymous class, 'this' resolves to the anonymous class instance itself, whereas in a lambda expression 'this' resolves to the enclosing class in which the lambda expression is written.
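A tiny sketch of that difference (EnclosingDemo and Task are made-up names):

public class EnclosingDemo {

    interface Task {
        void run();
    }

    public void demo() {
        Task anonymous = new Task() {
            @Override
            public void run() {
                // 'this' is the anonymous class instance
                System.out.println("anonymous: " + this.getClass().getName());
            }
        };

        Task lambda = () -> {
            // 'this' is the enclosing EnclosingDemo instance
            System.out.println("lambda: " + this.getClass().getName());
        };

        anonymous.run();   // prints something like EnclosingDemo$1
        lambda.run();      // prints EnclosingDemo
    }

    public static void main(String[] args) {
        new EnclosingDemo().demo();
    }
}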

Another difference between a lambda expression and an anonymous class is in the way the two are compiled. The Java compiler compiles a lambda expression into a private method of the enclosing class and uses the invokedynamic instruction, added in Java 7, to bind this method dynamically.

Because of this, a lambda does not produce an extra class file that has to be loaded, whereas every anonymous class is compiled to its own class file that is loaded on demand.
A lambda is essentially a function acting on the data handed to it, whereas an anonymous class is an object that can carry its own state.

Refer here

24.Why were static methods not allowed in interfaces prior to Java 8?
Prior to Java 8 an interface could only have abstract methods. If interfaces could define (and classes could inherit) static method implementations, the definitions could differ between the interfaces a class implements. Since the purpose of interfaces is to provide a form of multiple inheritance, a class implementing two interfaces that both define the same static method would face an ambiguity similar to the "diamond of death" problem.
This is the same issue Java 8 had to address for default methods; for static methods it is avoided simply by not inheriting them, as the examples below show.

This Works

class Animal {
    public static void identify() {
        System.out.println("This is an animal");
    }
}
class Cat extends Animal {}

public static void main(String[] args) {
    Animal.identify();
    Cat.identify(); // This compiles, even though it is not redefined in Cat.
}

This Does Not Work

interface Animal {
    public static void identify() {
        System.out.println("This is an animal");
    }
}
class Cat implements Animal {}

public static void main(String[] args) {
    Animal.identify();
    Cat.identify(); // This does not compile, because interface static methods do not inherit. (Why?)
}

Cat can only extend one class, so if Cat extends Animal, Cat.identify has only one meaning. Cat can, however, implement multiple interfaces, each of which could have its own static implementation, so the Java compiler would not know which implementation to call; that is why interface static methods are not inherited.

25.Explain different memory Allocation in JVM?
Memory in Java is divided into two portions

Stack: One stack is created per thread and it stores stack frames which again stores local variables and if a variable is a reference type then that variable refers to a memory location in heap for the actual object.

Heap: All kinds of objects will be created in heap only.

Heap memory is again divided into 3 portions
Young Generation: Stores objects which have a short life, Young Generation itself can be divided into two categories Eden Space and Survivor Space.
Old Generation: Store objects which have survived many garbage collection cycles and still being referenced.
Permanent Generation: Stores metadata about the program, e.g. the runtime constant pool. (Since Java 8 the permanent generation has been replaced by Metaspace, which is allocated in native memory.)
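A tiny sketch relating the two areas to code (the names are made up):

public class MemoryDemo {
    public static void main(String[] args) {
        int count = 5;                          // a primitive local variable lives in the thread's stack frame

        StringBuilder sb = new StringBuilder(); // the reference 'sb' is on the stack,
                                                // the StringBuilder object itself is allocated on the heap
        sb.append("hello ").append(count);      // newly created objects start in the young generation (Eden)

        System.out.println(sb);
    }
}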

Interpolation
app.component.ts

import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent {
  title = 'schoolApp';
}

registration.component.html

<div style="text-align:center">
  <h1>
    Welcome to {{ title }}!
  </h1>
</div>

Looping through Model Object set in component and Printing in HTML Page
Student.ts

export class Student {
   public isPresent:boolean=true;

   constructor(public studentId:number,
               public studentName:string,
               public studentAge:number,
               public studentGender:boolean){}
               

    getAttendance():string{
       if(this.isPresent==true)
        return "P";
       else 
        return "A";
    }           
}

registration.component.ts

import { Component, OnInit } from '@angular/core';
import {Student} from '../model/student';

@Component({
  selector: 'app-registration',
  templateUrl: './registration.component.html',
  styleUrls: ['./registration.component.css']
})
export class RegistrationComponent implements OnInit {
  public arrStudent:Array<Student>;
  constructor() { }

  ngOnInit() {
     this.arrStudent = [new Student(101, 'Mugil', 31, true),
                        new Student(102, 'Manasa', 26, false),
                        new Student(103, 'Kavitha', 27, false),
                        new Student(104, 'Renu', 28, true),
                        new Student(105, 'Joseph', 23, true)];
  }

  getGender(student: Student): string
  {
    if (student.studentGender == true)
      return "M";
    else
      return "F";
  }
}

registration.component.html

<style>
 .even
    {
        background-color: rgb(235, 235, 235);
    }

    .odd
    {
        background-color : #ffffff;
    }
</style>
<table cellpadding="5">
    <thead>
  <tr>
    <th>Sno</th>
    <th>Student ID</th>
    <th>Name</th>
    <th>Age</th>
    <th>Gender</th>
    <th>Attendance</th>
  </tr>
</thead>
<tbody>
  <tr *ngFor="let objStudent of arrStudent; index as i;even as isEven;odd as isOdd" [class]="isEven?'even':'odd'">
     <td>{{i+1}}</td>
     <td>{{objStudent.studentId}}</td>
     <td>{{objStudent.studentName}}</td>
     <td>{{objStudent.studentAge}}</td>
     <td>{{ getGender(objStudent) }}</td>
     <td [class]="objStudent.isPresent?'present':'absent'">{{objStudent.getAttendance()}}</td> 
  </tr>
</tbody>
</table>


	

Bootstrapping Angular Application

  1. The entry point to every Angular application is the main.ts file, which contains this last line:
     
     platformBrowserDynamic().bootstrapModule(AppModule); 
    
  2. The platformBrowserDynamic() part of this line of code indicates that we are about to boot Angular in a browser environment. As Angular can be used in JavaScript host environments aside from the browser (e.g. on the server or in a web worker), it is thus imperative that we specify the environment in which our app is to be booted.
  3. The bootstrapModule() function helps bootstrap our root module taking in the root module as its argument.
  4. AppModule is our root module which is the entry module for our application, this can actually be any of the modules in our application but by convention AppModule is used as the root module.
  5. In our AppModule, we then need to specify the component that will serve as the entry point component for our application. This happens in our app.module.ts file where we import the entry component (conventionally AppComponent) and supply it as the only item in our bootstrap array inside the NgModule configuration object.
     
     bootstrap:[AppComponent]
    

To put it briefly:

  1. platformBrowserDynamic() determines the browser/platform in which your Angular app is about to run
  2. The bootstrapModule() function boots your entry module (app.module.ts) by supplying the module as an argument.
  3. app.module.ts is the root module, which specifies the entry point component in its module configuration object.

How angular Works Internally

1.What Is an Angular Component?
An Angular application is a tree of Angular components. A component contains the data and user-interaction logic that defines how the view looks and behaves; a view in Angular refers to a template (HTML). Components are the basic building blocks of an Angular application. Components are defined using the @Component decorator. A component has a selector, template, style and other properties, through which it specifies the metadata required to process the component. Angular components are a subset of directives; unlike directives, components always have a template, and only one component can be instantiated per element in a template.

Components consist of three main building block

  1. Template
  2. Class
  3. MetaData

2.AngularJS vs Angular
AngularJS and Angular should be treated as two different frameworks. Here are a few comparisons:

AngularJS: Controllers
Angular: Web Components

AngularJS: AngularJS is written in JavaScript.
Angular: Angular uses Microsoft's TypeScript language, which is a superset of ECMAScript 6 (ES6). This has the combined advantages of the TypeScript features, like type declarations, and the benefits of ES6, like iterators and lambdas.

AngularJS: AngularJS is based on the model-view-controller (MVC) design and MVVM (model-view-view-model) via two-way data binding. In AngularJS the MVC pattern is implemented in JavaScript and HTML: the view is defined in HTML, while the model and controller are implemented in JavaScript. The controller accepts input, converts it into commands and sends the commands to the model and the view.
Angular: Angular 2 has more of a component-based architecture. You can think of everything as a component: directives, services and so on. While directives and services actually exist to support base components, they are defined in a similar fashion. A component consists of its dependencies, its view details and a class declaration which may be considered the controller, so a well-defined component consists of an individual set of MVC pieces. In Angular 2, controllers and $scope were replaced by components and directives; components are directives with a template that deal with a view of the application and the logic on the page.

AngularJS: The HTML markup is the view, the controller is the controller, and the service (when used to retrieve data) is the model.
Angular: The template is the view, the class is the controller, and the service (when used to retrieve data) is the model.

AngularJS: To bind an image/property or an event, you have to remember the right ng directive.
Angular: Angular uses "( )" for event binding and "[ ]" for property binding.

AngularJS: No mobile support.
Angular: Angular 2 and 4 both feature mobile support.

AngularJS: With two-way binding, AngularJS reduced development effort and time; however, by creating more processing on the client side, page load took considerable time.
Angular: Angular implements unidirectional tree-based change detection and uses a hierarchical dependency injection system, which significantly boosts performance. Two-way data binding in Angular 2 is still supported by combining event and property binding; we can use the ngModel directive for it.

3.What are Directives?
Directives are constructs that introduce new syntax/markup. They are markers on a DOM element that give some special behavior to that element and tell Angular's HTML compiler to attach that behavior to it.

There are three kinds of directives in an Angular 2 application.
Components
Angular Component also refers to a directive with a template which deals with View of the Application and also contains the business logic. It is very useful to divide your Application into smaller parts. In other words, we can say that Components are directives that are always associated with the template directly.

Structural directives
Structural directives are able to change the structure of the DOM by adding and removing DOM elements. The NgFor, NgSwitch and NgIf directives are the best examples of structural directives.

Attribute directives
Attribute directives are able to change the appearance or behavior of a DOM element. The NgStyle directive is an example of an attribute directive; it can be used to change several element styles at the same time.

4.Difference between Component and Directive
Directives
Directives add behaviour to an existing DOM element or an existing component instance. One example use case for a directive would be to log a click on an element.

import {Directive, HostListener} from '@angular/core';

@Directive({
    selector: '[logOnClick]'
})
class LogOnClick {
    constructor() {}

    @HostListener('click')
    onClick() { console.log('Element clicked!'); }
}
<button logOnClick>I log when clicked!</button>

Components
A component, rather than adding/modifying behaviour, actually creates its own view (hierarchy of DOM elements) with attached behaviour. An example use case for this might be a contact card component:

import {Component, Input} from '@angular/core';

@Component({
  selector: 'contact-card',
  template: `
    <div>
      <h1>{{name}}</h1>
      <p>{{city}}</p>
    </div>
  `
})
class ContactCard {
  @Input() name: string
  @Input() city: string
  constructor() {}
}
<contact-card [name]="'foo'" [city]="'bar'"></contact-card>

Directive: Directives are used to add behavior to an existing DOM element.
Component: A component creates its own view with attached behavior; it is a directive with a template, and the @Component decorator is actually a @Directive decorator extended with template-oriented features. It uses (emulated) shadow DOM to encapsulate its visual behavior and is used to create UI widgets.

Directive: Directives help us create reusable behaviors.
Component: Components help us break the application up into smaller parts.

Directive: We cannot create pipes using an attribute/structural directive.
Component: Pipes can be defined and used by a component.

Directive: We can apply many directives per DOM element.
Component: We can have only one component per DOM element.

Directive: The @Directive decorator is used to define the metadata.
Component: The @Component decorator is used to define the metadata.

5.Directive Lifecycle

For directives there are four hooks provided for different events, based on which we can take action:
ngOnChanges – It is called when Angular sets a data-bound property. Here we can get the current and previous value of the changed object. It is raised before the initialization event for the directive.

ngOnInit – It occurs after Angular initializes the data-bound input properties.

ngDoCheck – It is raised every time when Angular detects any change.

ngOnDestroy – It is used for cleanup purposes and it is raised just before Angular destroys the directive. It is very much important in memory leak issues by un-subscribing observables and detaching event handlers.

6.What are Types of Databinding in Angular

  1. Interpolation / String Interpolation (one-way data binding)
  2. Property Binding (one-way data binding)
  3. Event Binding (one-way data binding)
  4. Two-Way Binding

7.What is Interpolation?
Interpolation (one-way data binding) allows you to define properties in a component class and communicate these properties to the template.
Interpolation is a technique that allows the user to bind a value to a UI element. Interpolation binds the data one way: when the value of the bound field changes, the page is updated as well, but the view cannot change the value of the field. An object of the component class is used as the data context for the template of the component, so the value to be bound in the view has to be assigned to a field in the component class. String interpolation uses template expressions in double curly braces {{ }} to display data from the component; this special {{ }} syntax is also known as moustache syntax. The {{ }} contains a JavaScript expression which is evaluated by Angular, and the output is inserted into the HTML.

          {{value}}  
Component----------->DOM

app.component.ts

import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent {
  title = 'stockMarket';
}

app.component.html

<h1>
    Welcome to {{ title }}!
</h1>

8.What is Property Binding (one-way data binding)?
Property binding is used to bind values to the DOM properties of HTML elements. Like interpolation, property binding is a one-way binding technique. Property bindings are evaluated on every browser event, and any changes made to the objects in the event are applied to the properties. There are 3 types of property binding, listed after the example below.

app.component.ts

import {Component} from '@angular/core';

@Component({
selector: 'my-app',
template: `
<h1>My First Angular App</h1>
<img [src]="imageUrl">
<img bind-src="imageUrl">`
})
export class AppComponent { 
imageUrl = 'http://codethataint.com/images/sample.jpg';
}
  1. Component property binding works between component elements to bind a parent component property to a child component property.
  2. Element property binding works on an HTML element and binds a component property to a DOM property.
  3. Directive property binding works on an HTML element together with Angular directives such as NgClass and NgStyle. In directive property binding a component property, or any Angular expression, is bound to an Angular directive.

9.What is Event Binding (one-way data binding)?
Event binding works in the reverse direction from property binding: we can send information from the view to the component class. Such information usually involves a click, hover or typing.

import {Component} from '@angular/core';

@Component({
selector: 'my-app',
template: `
<h1>My First Angular App</h1>
<img [src]="imageUrl" (click)="myMethod()">
<img [src]="imageUrl" on-click="myMethod()">`
})
export class AppComponent {
imageUrl = 'http://codethataint.com/images/sample.jpg';

myMethod() {
   console.log('Hey!');
}
}

10.What is Two way Binding?
Two-way data binding really just boils down to event binding and property binding.

<input [(ngModel)]="username">
<p>Hello {{username}}!</p>

The above code turns out to be

<input [value]="username" (input)="username = $event.target.value">
<p>Hello {{username}}!</p>
  1. [value]=”username” – Binds the expression username to the input element’s value property
  2. (input)=”expression” – Is a declarative way of binding an expression to the input element’s input event (yes there’s such event)
  3. username = $event.target.value – The expression that gets executed when the input event is fired
  4. $event – Is an expression exposed in event bindings by Angular, which has the value of the event’s payload

11.What is the Difference between One way Data Binding and Two Way Data Binding

One-way data binding: The value of the model is inserted into an HTML (DOM) element and there is no way to update the model from the view.
Two-way data binding: Automatic synchronization of data happens between the model and the view (whenever the model changes it is reflected in the view and vice versa).

One-way data binding flow: Model ($scope) –> View
Two-way data binding flow: Model ($scope) –> View and View –> Model ($scope)

One-way binding, Angular 1 and Angular 2:

<h2 ng-bind="student.name"></h2>
<h2 [innerText]="student.name"></h2>

Two-way binding, Angular 1 and Angular 2:

<input ng-model="student.name"/>
<input [(ngModel)]="student.name"/>

12.What is Dirty Checking?
Angular(JS) checks whether a value has changed in the view and has not yet been synchronized across the app. The app keeps track of the values of the current watches: Angular walks down the $watch list and, if the updated value has not changed from the old value, it continues down the list. If the value has changed, the app records the new value and continues down the $watch list.

Angular has the concept of a 'digest cycle'. You can consider it a loop in which Angular checks whether there are any changes to the variables watched by all the $scopes (internally, $watch() and $apply() are bound to each variable defined on $scope). So if you have $scope.myVar defined in your controller (which marks 'myVar' to be watched), you are explicitly telling Angular to monitor changes to 'myVar' in each iteration of the loop. When the value of 'myVar' changes, $watch() notices and $apply() is executed to apply the changes to the DOM.

How it Works
First Cycle

  1. Get a watch from list
  2. Check whether item has been changed
  3. If there is no change in item then
  4. No Action taken, move to next item in watch list

Second Cycle

  1. Get a watch from list
  2. Check whether item has been changed
  3. If there is Change in an item
  4. DOM needs to be updated, return to digest loop

14.What is the difference between [ngClass] and [class]?
When multiple classes should potentially be added, prefer NgClass. NgClass should receive an object with class names as keys and expressions that evaluate to true or false as values.

<div [ngClass]="myClasses">
  ...
</div>
myClasses = {
  important: this.isImportant,
  inactive: !this.isActive,
  saved: this.isSaved,
  long: this.name.length > 6
}

If you want to apply a single class to DOM element rather than multiple classes go for [Class]

<div [class]="isGreen?'green':'cyan'">
  ...
</div>

The other way of applying a class is using the singular version of class binding. In this method either the class is applied or the element is left without it.

<div [class.green]="expression()">
  ...
</div>

The expression is a function (or expression) which returns a true or false value. If it is true the green class is applied; otherwise no class is applied.

15.What is the difference between ngStyle and ngClass?
ngStyle is used to interpolate a JavaScript object into the style attribute (inline styles, not CSS classes), while the ngClass directive translates your object into the class attribute.

[ngStyle]="{'font-size.px': 24}"
[ngClass]="{'text-success': true}"

15.What is Service?

16.What is Provider?

17.What is difference between Factory vs Service vs Provider?

18.Difference between @Inject vs @Injectable

19.What are Reactive Forms

20.How Dependency Injection is done in Angular?

21.Difference between Subscribe, Transform, Map and Filter?

22.Navigator vs Router

23.What is Lazy Loading in Angular

24.What is Difference between Subscribe and Promise?

25.Lifecycle of Angular App

26.How to Create Custom Event in Angular?

27.What are the different style encapsulation modes in Angular?
ViewEncapsulation.Emulated – This is the default: styles are kept scoped to the components where they are added, even though all the styles are collected in the head of the page when the components are loaded.
ViewEncapsulation.None – Uses global CSS without any encapsulation. When this value is specified, Angular simply adds the unmodified CSS styles to the head section of the HTML document and lets the browser figure out how to apply the styles using the normal CSS precedence rules.
ViewEncapsulation.Native – The browser's native shadow DOM implementation ensures the style scoping. If the browser doesn't support shadow DOM natively, the web-components polyfills are required to shim the behavior, so check for browser support before enabling it. This is similar to ViewEncapsulation.Emulated, but the polyfills are more expensive because they polyfill lots of browser APIs even when most of them are never used.

The Native and None values should be used with caution. Browser support for the shadow DOM feature is so limited that using the Native option is sensible only if you are using a polyfill library that provides compatibility for other browsers.

The None option adds all the styles defined by components to the head section of the HTML document and lets the browser figure out how to apply them. This has the benefit of working in all browsers, but the results are unpredictable, and there is no isolation between the styles defined by different components.

28.What is View Projection?
Projection is useful when we want the user of a component to decide the content of the component, something like a carousel where the input could be an image, text or a page which does not have much to do with the component's logic.
Let's say we have a button whose text needs to be decided dynamically by the parent component. In such a case we pass the text of the button between the tags of the child component's selector, like the one below.

parent.component.html

<app-parentcounter>Increment Counter From Parent</app-parentcounter>

child.component.html

<button (click)="increaseCounter()">
  <ng-content></ng-content>
 </button>

The ng-content element in the child marks the place where the content passed from the parent component is projected. There are two advantages of using ng-content.

  1. The value can be decided at runtime by the parent and passed in like a parameter
  2. The button can be reused multiple times with different text

Spring Version 4

  1. Spring Framework 4.0 provides support for several Java 8 features
  2. Java EE version 6 or above with the JPA 2.0 and Servlet 3.0 specifications
  3. Groovy Bean Definition DSL- external bean configuration using a Groovy DSL
  4. Core Container Improvements
    1. The @Lazy annotation can now be used on injection points, as well as on @Bean definitions.
    2. The @Description annotation has been introduced for developers using Java-based configuration
    3. Using generics as autowiring qualifiers
    4. Beans can now be ordered when they are autowired into lists and arrays. Both the @Order annotation and Ordered interface are supported.
    5. A generalized model for conditionally filtering beans has been added via the @Conditional annotation

Spring Version 5

  1. Functional programming with Kotlin
  2. Reactive programming model. The Reactive Streams API is officially part of Java 9; in Java 8 you need to include a dependency on the Reactive Streams API specification.
  3. @Nullable and @NotNull annotations explicitly mark nullable arguments and return values. This enables dealing with null values at compile time rather than throwing NullPointerExceptions at runtime.
  4. Spring Framework 5.0 supports a candidate component index as an alternative to classpath scanning: candidate components are read from the index rather than found by scanning the classpath. Loading the component index is cheap, so the startup time with the index remains nearly constant as the number of classes increases, while for a component scan the startup time increases significantly.
  5. Requires Java 8 as the minimum JDK version; Spring 5 is fully compatible with Java 9.
  6. Servlet 3.1, JMS 2.0, JPA 2.1, Hibernate 5, JAX-RS 2.0, Bean Validation 1.1, JUnit 5

Java 7 Features:

  1. Usage of Strings in Switch Statement
  2. Diamond Operator – the diamond operator allows you to write more compact (and readable) code by saving repeated type arguments
  3. Try with Resources
  4. Multiple Exception Handling
  5. Suppressed Exceptions
  6. Allows binary literals – binary literals express integer values in binary by adding the prefix 0b or 0B to the value. For more on binary literals click here
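A hedged sketch touching items 1, 2, 4 and 6 from the list above:

import java.util.ArrayList;
import java.util.List;

public class Java7FeaturesDemo {
    public static void main(String[] args) {
        // 1. Strings in switch
        String day = "MONDAY";
        switch (day) {
            case "MONDAY": System.out.println("Start of week"); break;
            default:       System.out.println("Some other day");
        }

        // 2. Diamond operator – the repeated type argument on the right is inferred
        List<String> names = new ArrayList<>();
        names.add("Mugil");

        // 4. Multiple exception handling – one catch block for several unrelated exception types
        try {
            Integer.parseInt("not a number");
        } catch (NumberFormatException | NullPointerException e) {
            System.out.println("caught: " + e.getClass().getSimpleName());
        }

        // 6. Binary literals with the 0b prefix
        int five = 0b101;
        System.out.println(five);   // 5
    }
}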

Java 8 Features:

  1. Lambda Expressions
  2. Java Stream API for Bulk Data Operations on Collections.
  3. Static and Default method in Functional Interfaces
  4. forEach() method in Iterable interface
  5. Functional Interfaces
  6. Collection API improvements

Java 9 Features:

  1. Factory Methods for Immutable List, Set, Map and Map.Entry
  2. Private methods in Interfaces
  3. Reactive Streams
  4. JShell: the interactive Java REPL

Java 10 Features:

  1. Local-Variable Type Inference
  2. Application Class-Data Sharing
  3. default set of root Certification Authority (CA) certificates in the JDK
  4. Garbage Collector Interface

Java 11 Features:

  1. The Oracle JDK 11 build is no longer free for commercial production use (the OpenJDK builds remain free)
  2. No separate compile step is needed for single-file programs: typing java MyFile.java at the command prompt compiles and runs the file in one step
  3. The Java EE and CORBA modules were removed
  4. New String methods – isBlank(), lines(), strip(), stripLeading(), stripTrailing()
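A short sketch of the new String methods (the values are made up); with Java 11 it can even be run directly with "java Java11StringDemo.java", without a separate compile step:

public class Java11StringDemo {
    public static void main(String[] args) {
        String text = "  Mugil  ";

        System.out.println("".isBlank());          // true – empty or whitespace-only
        System.out.println(text.strip());          // "Mugil" – Unicode-aware trim
        System.out.println(text.stripLeading());   // "Mugil  "
        System.out.println(text.stripTrailing());  // "  Mugil"

        "line1\nline2".lines()                     // a Stream<String> of the individual lines
                      .forEach(System.out::println);
    }
}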

Java 17 Features:

  1. LTS support and licensing – Java 17 is a long-term support (LTS) release of the Java SE platform
  2. Pattern matching for switch statements and expressions (a preview feature in Java 17)
  3. Sealed classes and interfaces. Sealed classes and interfaces restrict which other classes or interfaces may extend or implement them.
    sealed class Human permits Manish, Vartika, Anjali 
    {    
        public void printName() 
        { 
            System.out.println("Default"); 
        } 
    } 
    
    non-sealed class Manish extends Human 
    { 
        public void printName() 
        { 
            System.out.println("Manish Sharma"); 
        } 
    }
    // Vartika and Anjali would be declared similarly (sealed, non-sealed or final) to satisfy the permits clause