How to Use the tar Command in Linux

Author Piyush Gupta


GNU tar (Tape Archive) combines multiple files into a single tape or disk archive and can restore individual files from that archive. Here are some useful practical examples of the Linux tar command.

Some useful command line switches are given below, which are used in this article.
  • -c => create a new archive file
  • -v => show detailed output (progress) of the command
  • -x => extract an archive file
  • -f => specify the file name of an archive
  • -z => filter the archive through gzip
  • -j => filter the archive through bzip2
  • -J => filter the archive through xz
  • -t => list the contents of an archive
  • -O => extract file content to stdout
  • -r => append files to an existing archive
  • -C => change to the given directory (e.g. the extraction destination)
  • -W => verify an archive file

Creating Archive File

Use the following examples to create a new archive file in different formats: .tar (a plain, uncompressed archive), .tar.gz (gzip-compressed), .tar.bz2 (bzip2-compressed), and .tar.xz (xz-compressed).
1. Create .tar archive file – archive all content of the /var/www directory, including all subdirectories, into archive.tar.
tar -cvf archive.tar /var/www
2. Create .tar.gz archive file – same as above, but compressed with gzip. This produces a smaller file than the plain .tar above.
tar -zcvf archive.tar.gz /var/www
3. Create .tar.bz2 archive file – compressed with bzip2, which usually compresses better than gzip but takes longer.
tar -jcvf archive.tar.bz2 /var/www
4. Create .tar.xz archive file – compressed with xz, which generally gives the highest compression of the four but is the slowest.
tar -Jcvf archive.tar.xz /var/www
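The creation commands above can be sketched end to end. This is a minimal, runnable example; /tmp/tar-demo and its contents are made-up placeholders standing in for /var/www, so it can run as any user:

```shell
# Create .tar and .tar.gz archives from a scratch directory.
set -e
rm -rf /tmp/tar-demo
mkdir -p /tmp/tar-demo/site
echo "hello" > /tmp/tar-demo/site/index.html

# Plain .tar archive
tar -cvf /tmp/tar-demo/archive.tar -C /tmp/tar-demo site

# gzip-compressed .tar.gz archive of the same tree
tar -zcvf /tmp/tar-demo/archive.tar.gz -C /tmp/tar-demo site

# Compare sizes; on tiny inputs the .gz may not be much smaller
ls -l /tmp/tar-demo/archive.tar /tmp/tar-demo/archive.tar.gz
```

The -C option makes tar change into /tmp/tar-demo first, so the archive stores the relative path site/ instead of the full absolute path.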

Extract Archive File

Use the following example commands to extract archive files. Each example in this section has two commands: the first extracts the content into the current directory, and the second extracts it into a specified directory using the -C option.
5. Extract .tar archive file –
tar -xvf archive.tar
tar -xvf archive.tar -C /tmp/
6. Extract .tar.gz archive file –
tar -zxvf archive.tar.gz
tar -zxvf archive.tar.gz -C /tmp/
7. Extract .tar.bz2 archive file –
tar -jxvf archive.tar.bz2
tar -jxvf archive.tar.bz2 -C /tmp/
8. Extract .tar.xz archive file –
tar -Jxvf archive.tar.xz
tar -Jxvf archive.tar.xz -C /tmp/
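A self-contained sketch of extraction with -C; all paths under /tmp/extract-demo are placeholders, and the example builds its own small archive first:

```shell
# Build a small archive, then extract it into a chosen directory with -C.
set -e
rm -rf /tmp/extract-demo
mkdir -p /tmp/extract-demo/data /tmp/extract-demo/restore
echo "sample" > /tmp/extract-demo/data/file.txt
tar -cf /tmp/extract-demo/archive.tar -C /tmp/extract-demo data

# Extract into /tmp/extract-demo/restore; the target directory must exist
tar -xvf /tmp/extract-demo/archive.tar -C /tmp/extract-demo/restore

cat /tmp/extract-demo/restore/data/file.txt
```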

List Archive File Content

You can list the contents of a tar (Tape ARchive) file without extracting it. This lets you check which files an archive contains and saves time.
9. List .tar archive file content –
tar -tvf archive.tar
10. List .tar.gz archive file content –
tar -ztvf archive.tar.gz
11. List .tar.bz2 archive file content –
tar -jtvf archive.tar.bz2
12. List .tar.xz archive file content –
tar -Jtvf archive.tar.xz
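Listing never writes anything to disk. This sketch (placeholder paths under /tmp/list-demo) builds a .tar.gz and lists it:

```shell
# Create a small .tar.gz, then list its members without extracting.
set -e
rm -rf /tmp/list-demo
mkdir -p /tmp/list-demo/logs
echo "entry" > /tmp/list-demo/logs/app.log
tar -zcf /tmp/list-demo/logs.tar.gz -C /tmp/list-demo logs

# Long listing of members; no files are written to disk
tar -ztvf /tmp/list-demo/logs.tar.gz
```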

Update Archive File

You can use the -u option to update an archive file. It appends only those files that are newer than the copy already in the archive.
13. Update .tar archive file –
tar -uvf archive.tar /var/www
Note that GNU tar cannot update compressed archives: tar -zuvf, -juvf, and -Juvf all fail with "tar: Cannot update compressed archives". To update a .tar.gz, .tar.bz2, or .tar.xz file, decompress it first (for example with gunzip), update the plain .tar, and recompress it.
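The append-only-if-newer behaviour of -u can be seen in a short sketch (placeholder paths; plain .tar only, since compressed archives cannot be updated):

```shell
# Demonstrate -u: only files newer than the archived copy are appended.
set -e
rm -rf /tmp/update-demo
mkdir -p /tmp/update-demo/www
echo "v1" > /tmp/update-demo/www/page.html
tar -cf /tmp/update-demo/site.tar -C /tmp/update-demo www

sleep 1
echo "v2" > /tmp/update-demo/www/page.html   # now newer than the archived copy
tar -uvf /tmp/update-demo/site.tar -C /tmp/update-demo www

# The archive keeps both entries; extraction restores the newest one
mkdir -p /tmp/update-demo/out
tar -xf /tmp/update-demo/site.tar -C /tmp/update-demo/out
cat /tmp/update-demo/out/www/page.html
```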

Other Useful Archive File Commands

Below are some more useful options for handling tar archives. The examples use a plain .tar file. The -O option also works with compressed archives (add -z, -j, or -J), but -r, like -u, works only on uncompressed .tar files.
17. Display file content – use -x together with -O and a member name; the file is written to stdout instead of the disk.
tar -xf archive.tar -O backup/index.html
18. Adding files to an archive – use -r followed by a filename to append more files to an existing archive.
tar -rvf archive.tar add_new_file.txt
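Both options combined into one runnable sketch (the file names are placeholders):

```shell
# Append a file with -r, then print one member to stdout with -O.
set -e
rm -rf /tmp/append-demo
mkdir -p /tmp/append-demo
cd /tmp/append-demo
echo "first"  > a.txt
echo "second" > b.txt

tar -cf archive.tar a.txt
tar -rvf archive.tar b.txt       # append b.txt to the existing archive

# Print one member to stdout without extracting it to disk
tar -xf archive.tar -O b.txt
```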

Wget Linux Command to Download Files

Author Piyush Gupta

wget is a Linux command-line utility, widely used for downloading files from the command line. It provides many options for downloading a file from a remote server, and behaves much like opening the URL in a browser.

1: Download File using wget

The example below downloads a file from the server into the current local directory.
$ wget https://example.com/file.zip

2: Download File & Save to Specific Location

The command below downloads the zip file to /opt/file.zip. The -O option specifies the output file name (including its path).

# wget https://example.com/file.zip -O /opt/file.zip

3: Download File from FTP

Sometimes you need to download a file from an FTP server; wget can fetch an FTP URL directly, as below.

# wget ftp://ftp.example.com/file.zip

4: Download File from Password Protected URLs

Sometimes we need to supply a username and password to download a file. A browser prompts for credentials, but on the command line wget does not, so they must be passed as options. The examples below show how to supply a username and password when downloading from password-protected sources.

4.1: Download file from Password protected ftp server.

$ wget --ftp-user=username --ftp-password=secretpassword ftp://ftp.example.com/file.zip

or

$ wget ftp://username:secretpassword@ftp.example.com/file.zip

4.2: Download file from password protected http server.

# wget --http-user=username --http-password=secretpassword https://example.com/file.zip

or

# wget --user=username --password=secretpassword https://example.com/file.zip

4.3: Download file behind password protected proxy server.

$ wget --proxy-user=username --proxy-password=secretpassword https://example.com/file.zip

5: Download File from Untrusted Secure URL.

If a download URL uses an untrusted SSL certificate, wget refuses to download the file. You can override this check with the --no-check-certificate option.
$ wget https://example.com/file.zip --no-check-certificate

How to Use zip Command in Linux

Author Piyush Gupta


The zip command is used for compression and file packaging under Linux/Unix operating systems; unzip decompresses an archive. See the examples below for some typical uses of zip and unzip.

1 – Zip All Files in Directory

This command creates a zip of all files in the /backup directory. It does not recurse into subdirectories.
$ zip backup.zip /backup/*
Sample Output:
adding: backup/anaconda.ifcfg.log (deflated 47%)
adding: backup/anaconda.log (deflated 78%)
adding: backup/anaconda.program.log (deflated 84%)
adding: backup/anaconda.storage.log (deflated 90%)
adding: backup/boot.log (deflated 72%)
adding: backup/dracut.log (deflated 92%)
adding: backup/httpd/ (stored 0%)
adding: backup/kadmind.log (deflated 74%)
adding: backup/krb5kdc.log (deflated 71%)
adding: backup/mysqld.log (deflated 82%)
 

2 – Zip files with Wildcard

Use Linux wildcards to archive only files with a specific extension, for example only the .log files in a directory.

$ zip backup.zip /backup/*.log
Sample Output:
adding: backup/anaconda.ifcfg.log (deflated 47%)
adding: backup/anaconda.log (deflated 78%)
adding: backup/anaconda.program.log (deflated 84%)
adding: backup/anaconda.storage.log (deflated 90%)
adding: backup/boot.log (deflated 72%)
adding: backup/dracut.log (deflated 92%)
adding: backup/kadmind.log (deflated 74%)
adding: backup/krb5kdc.log (deflated 71%)
adding: backup/mysqld.log (deflated 82%)
adding: backup/pm-powersave.log (deflated 15%)
adding: backup/wpa_supplicant.log (stored 0%)
adding: backup/Xorg.0.log (deflated 83%)
adding: backup/Xorg.9.log (deflated 83%)
adding: backup/yum.log (deflated 77%)
 

3 – Zip files Recursively

The command below creates an archive recursively, including files in subdirectories.

$ zip -r backup.zip /backup 

4 – Create Password Protected Zip

Sometimes we need to create a password-protected archive. Use -P with the password as its argument, or -e to be prompted for it interactively (which avoids leaving the password in your shell history).

$ zip -P secretpassword backup.zip /backup/*.log

5 – Zip with Compression Levels

The zip command provides 10 levels of compression (0-9).

  • -6 is the default compression level.
  • -0 stores files without any compression.
  • -9 gives the highest (but slowest) compression.
$ zip -9 high-compressed-file.zip /backup/*
$ zip -0 lowest-compressed-file.zip /backup/*
Check the size difference between the two compressed files:
$ ls -lh lowest-compressed-file.zip high-compressed-file.zip

-rw-r--r--. 1 root root 50K Apr 11 14:14 high-compressed-file.zip
-rw-r--r--. 1 root root 447K Apr 11 14:14 lowest-compressed-file.zip
You can see the difference between the two file sizes.
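The size gap can be reproduced with a small sketch; it is guarded with command -v in case the zip package is not installed, and all paths are placeholders:

```shell
# Compare -0 (store) and -9 (maximum compression) on the same input.
set -e
if command -v zip >/dev/null 2>&1; then
    rm -rf /tmp/zip-demo
    mkdir -p /tmp/zip-demo
    # Highly compressible input: the same line repeated 5000 times
    yes "log line" | head -n 5000 > /tmp/zip-demo/sample.log

    zip -q -0 /tmp/zip-demo/stored.zip /tmp/zip-demo/sample.log
    zip -q -9 /tmp/zip-demo/best.zip   /tmp/zip-demo/sample.log

    ls -l /tmp/zip-demo/stored.zip /tmp/zip-demo/best.zip
fi
```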

6 – List content of zip File

Use the -l switch with the unzip command to list the files inside a zip archive without decompressing it.

$ unzip -l backup.zip
Sample Output:
Archive: backup.zip
Length Date Time Name
--------- ---------- ----- ----
140 04-11-2013 14:07 backup/anaconda.ifcfg.log
11153 04-11-2013 14:07 backup/anaconda.log
15446 04-11-2013 14:07 backup/anaconda.program.log
136167 04-11-2013 14:07 backup/anaconda.storage.log
2722 04-11-2013 14:07 backup/boot.log
211614 04-11-2013 14:07 backup/dracut.log
0 04-11-2013 14:08 backup/httpd/
1382 04-11-2013 14:07 backup/kadmind.log
1248 04-11-2013 14:07 backup/krb5kdc.log
6485 04-11-2013 14:07 backup/mysqld.log
87 04-11-2013 14:07 backup/pm-powersave.log
0 04-11-2013 14:07 backup/wpa_supplicant.log
30186 04-11-2013 14:07 backup/Xorg.0.log
31094 04-11-2013 14:07 backup/Xorg.9.log
6739 04-11-2013 14:07 backup/yum.log
--------- -------
454463 15 files
 

7 – Extract a Zip File.

The unzip command extracts a zip file. Use the command below to simply extract an archive.

$ unzip backup.zip

8 – Check an archive file

Use -t to test archive files. This option extracts each specified file in memory and verifies its CRC (cyclic redundancy check, an enhanced checksum).

$ unzip -t backup.zip
Sample Output:
 Archive: backup.zip
testing: backup/anaconda.ifcfg.log OK
testing: backup/anaconda.log OK
testing: backup/anaconda.program.log OK
testing: backup/anaconda.storage.log OK
testing: backup/boot.log OK
testing: backup/dracut.log OK
testing: backup/httpd/ OK
testing: backup/kadmind.log OK
testing: backup/krb5kdc.log OK
testing: backup/mysqld.log OK
testing: backup/pm-powersave.log OK
testing: backup/wpa_supplicant.log OK
testing: backup/Xorg.0.log OK
testing: backup/Xorg.9.log OK
testing: backup/yum.log OK
No errors detected in compressed data of backup.zip.

Linux Find Command Examples

Author Piyush Gupta

find is a Linux command-line tool to search for files and directories in the file system. It is fast, provides a large number of options for very specific searches, and supports wildcard characters.


Every Linux user should understand the uses of the find command; it is very helpful in daily tasks. This article will help you understand the find command and its uses on a Linux system.

Syntax: To search a file or directory under specified filesystem.

find /search/in/dir -name filename

Explanation:
find => the command-line tool
/search/in/dir => directory where the search starts
-name => switch to specify the name to search for
filename => file or directory name

Find files by Name

Use the -name option to find a file named "hello.txt" under the root (/) file system.

find / -name hello.txt

Find files by Type

Search for a file (not a directory) named "backup.zip" in the entire file system. Use -type f to restrict the search to files and ignore directories.

find / -type f -name backup.zip

Search directory only

Search for a directory (not a file) named "backup" in the entire file system. Use -type d to restrict the search to directories and ignore files.

find / -type d -name backup

Find files by Size

Search the whole system for files larger than 10MB with the find command.

find / -type f -size +10M

And this command will search for all files on the system that are smaller than 10MB.

find / -type f -size -10M

The -size switch searches for files based on their size. A plus (+) sign means larger than the given size and a minus (-) sign means smaller than it.
e.g.: +100M, -50k, +1G

Find files by Time

Search for all files modified more than 30 days ago.

find / -type f -mtime +30

Search for all files modified less than 30 days ago.

find / -type f -mtime -30
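This can be tried without waiting a month by backdating a file with GNU touch -d; the scratch paths are placeholders:

```shell
# Find by modification time using a backdated file (GNU touch -d).
set -e
rm -rf /tmp/mtime-demo
mkdir -p /tmp/mtime-demo
touch -d "40 days ago" /tmp/mtime-demo/old.log   # modified >30 days ago
touch /tmp/mtime-demo/new.log                    # modified just now

find /tmp/mtime-demo -type f -mtime +30   # matches old.log only
find /tmp/mtime-demo -type f -mtime -30   # matches new.log only
```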

Find files by User/Group

The find command also supports searching by user and group ownership. For example:

Search for all .txt files owned by user bob.

find  / -user bob -name "*.txt"

Search all .txt files with group ownership of root.

find  / -group root -name "*.txt"

You can combine both for a more specific search: files owned by bob with group ownership root.

find  / -user bob -group root -name "*.txt"

Find files by Permissions

Search for all files with 777 permissions under the /var/www directory tree. This can be helpful for a security audit.

find /var/www -perm 777
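A sketch of such a permission audit on a scratch directory (placeholder paths):

```shell
# Flag world-writable files; chmod sets up a known 777 file first.
set -e
rm -rf /tmp/perm-demo
mkdir -p /tmp/perm-demo
touch /tmp/perm-demo/safe.txt /tmp/perm-demo/open.txt
chmod 644 /tmp/perm-demo/safe.txt
chmod 777 /tmp/perm-demo/open.txt

# Only the world-writable file is reported
find /tmp/perm-demo -type f -perm 777
```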

Find files by Inode Number

You can also search for files by their inode number; the -inum switch is used for this.

find / -inum 1532

To check the inode number of a file, use the command below. The first field of the output is the inode number.

ls -li piyush.txt

30878 -rw-r--r--. 1 root root 0 Mar 22 17:20 piyush.txt

Find Empty files

This command is very useful for finding empty files and directories, for example when cleaning up a system.

$ find / -empty
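A sketch showing exactly which entries -empty reports (placeholder paths):

```shell
# -empty matches zero-byte files and empty directories, nothing else.
set -e
rm -rf /tmp/empty-demo
mkdir -p /tmp/empty-demo/emptydir
touch /tmp/empty-demo/empty.txt          # zero-byte file
echo "data" > /tmp/empty-demo/full.txt   # non-empty file

# Reports empty.txt and emptydir, but not full.txt
find /tmp/empty-demo -empty
```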

Find files by File Types

Search for all block special files under the / filesystem.

find / -type b

Other file type options are as below:

b – block (buffered) special
c – character (unbuffered) special
d – directory
p – named pipe (FIFO)
f – regular file
s – socket
l – symbolic link; this is never true if the -L option or the -follow option is in effect unless the symbolic link is broken. If you want to search for symbolic links when -L is in effect, use -xtype.

One Time Task Scheduling using at Command in Linux

Author Piyush Gupta

While working on Linux systems, we generally prefer crontab for scheduling jobs. Another utility, the at command, is very useful for scheduling one-time tasks: it reads commands from standard input or from a script/file and executes them once at a later time. at cannot be used for recurring tasks; use crontab for those.

The at command can be useful for shutting down the system at a specified time, taking a one-time backup, sending a reminder email at a specified time, and so on. This article will help you understand how the at command works, with useful examples.

Commands used with at:

  • at : execute commands at specified time.
  • atq : lists the pending jobs of users.
  • atrm : delete jobs by their job number.

1. Schedule first job using at command

The example below schedules "sh backup.sh" to be executed once, at the next 9:00 AM.

at 9:00 AM
at> sh backup.sh
at> ^d
job 3 at 2019-03-23 09:00

Use Ctrl+D (^d) to exit the at prompt.

You can also use the following option to schedule a job. The below command will run “sh backup.sh” at 9:00 in the morning.

echo "sh backup.sh" | at 9:00 AM

2. List the scheduled jobs using atq

When jobs are listed from the root account using atq, it shows all users' jobs. Executed from a non-root account, it shows only that user's jobs.

atq
3 2019-03-23 09:00 a root
5 2019-03-23 10:00 a piyush
1 2019-03-23 12:00 a root

Field description:

First field: job id
Second field: job execution date
Third field: job execution time
Fourth field: queue name ('a' is the default queue)
Last field: user name under which the job is scheduled

3. Remove scheduled job using atrm

You can remove any at job using atrm with its job id.

atrm 3
atq
5 2019-03-23 10:00 a piyush
1 2019-03-23 12:00 a root

4. Check the content of scheduled at job

The atq command only shows the list of jobs. To check which script/commands are scheduled in a job, use -c with the job id, as in the example below.

at -c 5

In the above example, 5 is the job id.

Examples of at Command:

Example 1: Schedule task at coming 10:00 AM.

# at 10:00 AM

Example 2: Schedule task at 10:00 AM on coming Sunday.

at 10:00 AM Sun

Example 3: Schedule task at 10:00 AM on the coming 25th of July.

at 10:00 AM July 25

Example 4: Schedule task at 10:00 AM on the coming 22nd of June 2015. Both date formats below are accepted.

at 10:00 AM 6/22/2015
at 10:00 AM 6.22.2015

Example 5: Schedule task at 10:00 AM on the same date at next month.

at 10:00 AM next month

Example 6: Schedule task at 10:00 AM tomorrow.

at 10:00 AM tomorrow

Example 7: Schedule task at 10:00 AM on the same day next week.

at 10:00 AM next week

Example 8: Schedule task to execute just after 1 hour.

at now + 1 hour

Example 9: Schedule task to execute just after 30 minutes.

at now + 30 minutes

Example 10: Schedule task to execute after 1 or 2 weeks.

at now + 1 week
at now + 2 weeks

Example 11: Schedule task to execute after 1 or 2 years.

at now + 1 year
at now + 2 years

Example 12: Schedule task to execute at midnight.

at midnight

The above job will execute at the next 12:00 AM.

Thanks for reading this article. We hope it has helped you understand how to use the 'at' command in Linux.

How to Delete Files Older than 30 days in Linux

Author Piyush Gupta

It is best practice to remove old, unused files from your server. For example, if you run daily/hourly backups of files or databases, a lot of junk accumulates on the server, so clean it up regularly: find the older files in the backup directory and remove them. This article will help you find and delete files older than 30 days.

1. Delete Files Older Than 30 Days

This command deletes all files older than 30 days in the /opt/backup directory.
find /opt/backup -type f -mtime +30 -exec rm -f {} \;

2. Delete Files Older Than 30 Days with .log Extension

If you want to delete only files with a specific extension, use the following command.
find /var/log -name "*.log" -type f -mtime +30 -exec rm -f {} \;
The above command deletes only files having the .log extension.
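The deletion can be rehearsed safely on a scratch directory by backdating a file with GNU touch -d and printing the matches before removing them (all paths are placeholders):

```shell
# Rehearse a 30-day cleanup: review matches first, then delete them.
set -e
rm -rf /tmp/cleanup-demo
mkdir -p /tmp/cleanup-demo
touch -d "45 days ago" /tmp/cleanup-demo/old-backup.sql
touch /tmp/cleanup-demo/fresh-backup.sql

# Review what would be deleted first...
find /tmp/cleanup-demo -type f -mtime +30 -print

# ...then delete it
find /tmp/cleanup-demo -type f -mtime +30 -exec rm -f {} \;

ls /tmp/cleanup-demo
```

Running the find with -print alone first is a cheap dry run before attaching -exec rm.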

Spring Framework Interview Questions

Author Piyush Gupta

1) What is Spring Framework?

Spring is a lightweight inversion-of-control and aspect-oriented container framework. The Spring Framework's contribution to the Java community is immense, and the Spring community is among the largest and most innovative. It has numerous projects in its portfolio and its own Spring dm Server for running Spring applications. The community was acquired by VMware, a leading cloud-computing company, to enable Java applications in the cloud using Spring stacks. To read more about the Spring Framework and its products, see the official SpringSource site.

2) Explain Spring?

  • Lightweight : Spring is lightweight in terms of both size and overhead. The basic version of the Spring framework is around 1MB, and the processing overhead is negligible.
  • Inversion of control (IoC) : Loose coupling is achieved in Spring using Inversion of Control: objects are given their dependencies instead of creating or looking up dependent objects themselves.
  • Aspect oriented (AOP) : Spring supports aspect-oriented programming and enables cohesive development by separating application business logic from system services.
  • Container : Spring contains and manages the life cycle and configuration of application objects.
  • Framework : Spring provides most of the infrastructure functionality, leaving the rest of the coding to the developer.

3) What are the different modules in Spring framework?

  • The Core container module
  • Application context module
  • AOP module (Aspect Oriented Programming)
  • JDBC abstraction and DAO module
  • O/R mapping integration module (Object/Relational)
  • Web module
  • MVC framework module

4) What is the Core container module?

This module provides the fundamental functionality of the Spring framework. In this module, BeanFactory is the heart of any Spring-based application. The entire framework is built on top of this module, which makes Spring a container.

5) What is Application context module?

The Application context module makes Spring a framework. It extends the concept of BeanFactory, providing support for internationalization (I18N) messages, application lifecycle events, and validation. This module also supplies many enterprise services such as JNDI access, EJB integration, remoting, and scheduling, and provides support for integration with other frameworks.

6) What is AOP module?

The AOP module is used for developing aspects for our Spring-enabled application. Much of the support has been provided by the AOP Alliance in order to ensure the interoperability between Spring and other AOP frameworks. This module also introduces metadata programming to Spring. Using Spring’s metadata support, we will be able to add annotations to our source code that instruct Spring on where and how to apply aspects.

7)What is JDBC abstraction and DAO module?

Using this module we can keep the database code clean and simple and prevent problems that result from a failure to close database resources. It brings in a new layer of meaningful exceptions on top of the error messages returned by various database servers. In addition, this module uses Spring's AOP module to provide transaction management services for objects in a Spring application.

8) What are object/relational mapping integration module?

Spring also supports the use of an object/relational mapping (ORM) tool over straight JDBC by providing the ORM module. Spring provides support for tying into several popular ORM frameworks, including Hibernate, JDO, and iBATIS SQL Maps. Spring's transaction management supports each of these ORM frameworks as well as JDBC.

9) What is web module?

This module is built on the application context module, providing a context that is appropriate for web-based applications. This module also contains support for several web-oriented tasks such as transparently handling multipart requests for file uploads and programmatic binding of request parameters to your business objects. It also contains integration support with Jakarta Struts.

10) What is the MVC framework module?

Spring comes with a full-featured MVC framework for building web applications. Although Spring can easily be integrated with other MVC frameworks, such as Struts, Spring's MVC framework uses IoC to provide a clean separation of controller logic from business objects. It also allows you to declaratively bind request parameters to your business objects, and it can take advantage of any of Spring's other services, such as I18N messaging and validation.

12) What is a BeanFactory?

A BeanFactory is an implementation of the factory pattern that applies Inversion of Control to separate the application’s configuration and dependencies from the actual application code.

13) What is AOP Alliance?

AOP Alliance is an open-source project whose goal is to promote adoption of AOP and interoperability among different AOP implementations by defining a common set of interfaces and components.

14) What is Spring configuration file?

Spring configuration file is an XML file. This file contains the classes information and describes how these classes are configured and introduced to each other.

15) What does a simple spring application contain?

These applications are like any Java application. They are made up of several classes, each performing a specific purpose within the application. But these classes are configured and introduced to each other through an XML file. This XML file describes how to configure the classes, known as the Spring configuration file.

16) What is XMLBeanFactory?

BeanFactory has many implementations in Spring, but one of the most useful is org.springframework.beans.factory.xml.XmlBeanFactory, which loads its beans based on the definitions contained in an XML file. To create an XmlBeanFactory, pass a java.io.InputStream to the constructor; the InputStream provides the XML to the factory. For example, the following code snippet uses a java.io.FileInputStream to provide a bean definition XML file to XmlBeanFactory.

BeanFactory factory = new XmlBeanFactory(new FileInputStream("beans.xml"));

To retrieve a bean from the BeanFactory, call the getBean() method, passing the name of the bean you want to retrieve.

MyBean myBean = (MyBean) factory.getBean("myBean");

17) What are important ApplicationContext implementations in spring framework?

  • ClassPathXmlApplicationContext – This context loads a context definition from an XML file located in the class path, treating context definition files as class path resources.
  • FileSystemXmlApplicationContext – This context loads a context definition from an XML file in the filesystem.
  • XmlWebApplicationContext – This context loads the context definitions from an XML file contained within a web application.

18) Explain Bean lifecycle in Spring framework?

  1. The spring container finds the bean’s definition from the XML file and instantiates the bean.
  2. Using the dependency injection, spring populates all of the properties as specified in the bean definition.
  3. If the bean implements the BeanNameAware interface, the factory calls setBeanName() passing the bean’s ID.
  4. If the bean implements the BeanFactoryAware interface, the factory calls setBeanFactory(), passing an instance of itself.
  5. If there are any BeanPostProcessors associated with the bean, their postProcessBeforeInitialization() methods will be called.
  6. If an init-method is specified for the bean, it will be called.
  7. Finally, if there are any BeanPostProcessors associated with the bean, their postProcessAfterInitialization() methods will be called.

19) What is bean wiring?

Combining beans within the Spring container is known as bean wiring, or simply wiring. When wiring beans, you tell the container which beans are needed and how the container should use dependency injection to tie them together.

20) How do you add a bean to a Spring application?


<!DOCTYPE beans PUBLIC '-//SPRING//DTD BEAN//EN'
'http://www.springframework.org/dtd/spring-beans.dtd'>

<beans>
  <bean id='bar' class='com.act.Bar'/>
</beans>

In the bean tag, the id attribute specifies the bean name and the class attribute specifies the fully qualified class name.

21) What are singleton beans and how can you create prototype beans?

Beans defined in the Spring framework are singleton beans by default. The bean tag has an attribute named 'singleton': if it is set to true the bean is a singleton, and if it is set to false the bean becomes a prototype bean. It defaults to true, so all beans in the Spring framework are singletons unless configured otherwise.


22) What are the important beans lifecycle methods?

There are two important bean lifecycle methods. The first is setup, which is called when the bean is loaded into the container. The second is teardown, which is called when the bean is unloaded from the container.

23) How can you override beans default lifecycle methods?

The bean tag has two more important attributes, init-method and destroy-method, with which you can define your own custom initialization and destroy methods. As a small demonstration, suppose two new methods, fooSetup and fooTeardown, are added to your Foo class; they are wired in like this:

<bean id='foo' class='com.act.Foo' init-method='fooSetup' destroy-method='fooTeardown'/>

24) What are Inner Beans?

When wiring beans, if a bean element is embedded directly inside a property tag, that bean is said to be an inner bean. The drawback of such a bean is that it cannot be reused anywhere else.

25) What are the different types of bean injections?

There are two types of bean injections.
  1. By setter
  2. By constructor

26) What is Auto wiring?

You can wire the beans as you wish. But spring framework also does this work for you. It can auto wire the related beans together. All you have to do is just set the autowire attribute of bean tag to an autowire type.


27) What are different types of Autowire types?

There are four different types by which autowiring can be done.
    • byName
    • byType
    • constructor
    • autodetect

28) What are the different types of events related to Listeners?

There are a lot of events related to the ApplicationContext of the Spring framework. All the events are subclasses of org.springframework.context.ApplicationEvent. They are:
  • ContextClosedEvent – This is fired when the context is closed.
  • ContextRefreshedEvent – This is fired when the context is initialized or refreshed.
  • RequestHandledEvent – This is fired when the web context handles any request.

29) What is an Aspect?

An aspect is the cross-cutting functionality that you are implementing. It is the aspect of your application you are modularizing. An example of an aspect is logging. Logging is something that is required throughout an application. However, because applications tend to be broken down into layers based on functionality, reusing a logging module through inheritance does not make sense. However, you can create a logging aspect and apply it throughout your application using AOP.

30) What is a Jointpoint?

A joinpoint is a point in the execution of the application where an aspect can be plugged in. This point could be a method being called, an exception being thrown, or even a field being modified. These are the points where your aspect’s code can be inserted into the normal flow of your application to add new behavior.

31) What is an Advice?

Advice is the implementation of an aspect; it is something like telling your application about a new behavior. Generally, an advice is inserted into an application at joinpoints.

32) What is a Pointcut?

A pointcut is something that defines at what joinpoints an advice should be applied. Advices can be applied at any joinpoint that is supported by the AOP framework. These Pointcuts allow you to specify where the advice can be applied.

33) What is an Introduction in AOP?

An introduction allows the user to add new methods or attributes to an existing class. This can then be introduced to an existing class without having to change the structure of the class, but give them the new behavior and state.

34) What is a Target?

A target is the class that is being advised. It can be a third-party class or your own class to which you want to add custom behavior. By using the concepts of AOP, the target class is free to concentrate on its major concern, unaware of any advice that is being applied.

35) What is a Proxy?

A proxy is an object that is created after applying advice to a target object. From the client's point of view, the target object and the proxy object are the same.

36) What is meant by Weaving?

The process of applying aspects to a target object to create a new proxy object is called as Weaving. The aspects are woven into the target object at the specified joinpoints.

37) What are the different points where weaving can be applied?

  • Compile Time
  • Classload Time
  • Runtime

38) What are the different advice types in spring?

    • Around : intercepts calls to the target method (interface: org.aopalliance.intercept.MethodInterceptor)
    • Before : called before the target method is invoked (interface: org.springframework.aop.BeforeAdvice)
    • After : called after the target method returns (interface: org.springframework.aop.AfterReturningAdvice)
    • Throws : called when the target method throws an exception (interface: org.springframework.aop.ThrowsAdvice)

39) What are the different types of AutoProxying?

  • BeanNameAutoProxyCreator
  • DefaultAdvisorAutoProxyCreator
  • Metadata autoproxying

40) What is the Exception class related to all the exceptions that are thrown in spring applications?

DataAccessException - org.springframework.dao.DataAccessException

41) What kind of exceptions those spring DAO classes throw?

Spring's DAO classes do not throw technology-specific exceptions such as SQLException; they throw exceptions that are subclasses of DataAccessException.

42) What is DataAccessException?

DataAccessException is a RuntimeException, i.e. an unchecked exception. The user is not forced to handle these kinds of exceptions.
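The practical difference can be shown in plain Java; MyDataAccessException below is an invented name standing in for Spring's hierarchy. Because it extends RuntimeException, callers compile without a throws clause or try/catch:

```java
public class UncheckedDemo {
    // A hypothetical unchecked exception, playing the role of
    // Spring's DataAccessException: it extends RuntimeException,
    // so callers are not forced to handle it.
    static class MyDataAccessException extends RuntimeException {
        MyDataAccessException(String msg) { super(msg); }
    }

    // Compiles with no throws clause, because the exception is unchecked.
    static void findRow(boolean fail) {
        if (fail) {
            throw new MyDataAccessException("row not found");
        }
    }

    public static void main(String[] args) {
        findRow(false);              // no try/catch required
        try {
            findRow(true);           // handling is optional, not mandatory
        } catch (MyDataAccessException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```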

43) How can you configure a bean to get DataSource from JNDI?

A JndiObjectFactoryBean can look up the DataSource from JNDI, for example:

<bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
    <property name="jndiName">
        <value>java:comp/env/jdbc/myDatasource</value>
    </property>
</bean>

44) How can you create a DataSource connection pool?

The connection pool is configured as a bean; for example, using Apache Commons DBCP's BasicDataSource:

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource">
    <property name="driverClassName" value="${db.driver}"/>
    <property name="url" value="${db.url}"/>
    <property name="username" value="${db.username}"/>
    <property name="password" value="${db.password}"/>
</bean>

45) How JDBC can be used more efficiently in spring framework?

JDBC can be used more efficiently with the help of a template class provided by spring framework called as JdbcTemplate.

46) How JdbcTemplate can be used?

With the use of the Spring JDBC framework, the burden of resource management and error handling is reduced a lot, leaving developers free to write the statements and queries that get data to and from the database.
JdbcTemplate template = new JdbcTemplate(myDataSource);

A simple DAO class looks like this.
public class StudentDaoJdbc implements StudentDao {
    private JdbcTemplate jdbcTemplate;

    public void setJdbcTemplate(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }
    // ... DAO methods that use jdbcTemplate ...
}

The configuration is shown below (bean names are illustrative):

<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
    <property name="dataSource" ref="dataSource"/>
</bean>

<bean id="studentDao" class="StudentDaoJdbc">
    <property name="jdbcTemplate" ref="jdbcTemplate"/>
</bean>

47) How do you write data to backend in spring using JdbcTemplate?

The JdbcTemplate uses several callback interfaces when writing data to the database. How useful each of these interfaces is will vary. Two simple ones are PreparedStatementCreator and BatchPreparedStatementSetter.

48) Explain about PreparedStatementCreator?

PreparedStatementCreator is one of the most commonly used interfaces for writing data to the database. The interface has one method, createPreparedStatement():
PreparedStatement createPreparedStatement (Connection conn) throws SQLException;

When this interface is implemented, we should create and return a PreparedStatement from the Connection argument; exception handling is automatically taken care of. When this interface is implemented, another interface, SqlProvider, is often implemented as well; it has a method called getSql(), which is used to provide SQL strings to the JdbcTemplate.

49) Explain about BatchPreparedStatementSetter?

If the user wants to update more than one row in one shot, then he can go for BatchPreparedStatementSetter. This interface provides two methods:
setValues(PreparedStatement ps, int i) throws SQLException;
int getBatchSize();

The getBatchSize() tells the JdbcTemplate class how many statements to create. And this also determines how many times setValues() will be called.
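The contract can be sketched in plain Java without a database. The BatchSetter interface and the driving loop below are simplified stand-ins (invented for this sketch) for Spring's BatchPreparedStatementSetter and JdbcTemplate's batch update, written only to show how getBatchSize() controls how many times setValues() is called:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchContractDemo {
    // Simplified stand-in for Spring's BatchPreparedStatementSetter.
    interface BatchSetter {
        void setValues(List<String> row, int i); // stands in for setValues(PreparedStatement, int)
        int getBatchSize();
    }

    // Simplified stand-in for the template's batch update: the template,
    // not the caller, drives the loop using getBatchSize().
    static List<List<String>> batchUpdate(BatchSetter setter) {
        List<List<String>> batch = new ArrayList<>();
        for (int i = 0; i < setter.getBatchSize(); i++) {
            List<String> row = new ArrayList<>();
            setter.setValues(row, i);   // called once per statement
            batch.add(row);
        }
        return batch;
    }

    public static void main(String[] args) {
        String[] names = { "Alice", "Bob", "Carol" };
        List<List<String>> batch = batchUpdate(new BatchSetter() {
            public void setValues(List<String> row, int i) {
                row.add(names[i]);      // one statement's parameters per call
            }
            public int getBatchSize() {
                return names.length;    // how many statements to create
            }
        });
        System.out.println(batch.size() + " statements prepared");
    }
}
```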

50) Explain about RowCallbackHandler and why it is used?

In order to navigate through the records, we generally go for ResultSet. But Spring provides an interface that handles this entire burden and leaves the user to decide what to do with each row. The interface provided by Spring is RowCallbackHandler. There is a method, processRow(), which needs to be implemented so that it is applied to each and every row:

void processRow(java.sql.ResultSet rs);
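The inversion can be sketched without JDBC: instead of the caller pulling rows from a ResultSet, the template pushes each row to a handler. RowHandler and the query() method below are simplified stand-ins, invented for this sketch, for Spring's RowCallbackHandler and a template-driven query:

```java
import java.util.List;

public class RowCallbackDemo {
    // Simplified stand-in for RowCallbackHandler.processRow(ResultSet).
    interface RowHandler {
        void processRow(String row);
    }

    // The "template" owns the iteration; the caller only decides
    // what to do with each row.
    static void query(List<String> rows, RowHandler handler) {
        for (String row : rows) {
            handler.processRow(row);
        }
    }

    public static void main(String[] args) {
        List<String> rows = List.of("row-1", "row-2");
        query(rows, row -> System.out.println("handled " + row));
    }
}
```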

HTTP response status codes

Author Piyush Gupta

HTTP response status codes indicate whether a specific HTTP request has been successfully completed. Responses are grouped in five classes: informational responses, successful responses, redirects, client errors, and server errors. Status codes are defined by section 10 of RFC 2616.
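The five classes correspond to the first digit of the status code, so a response can be classified programmatically. The helper below is a hypothetical sketch, not part of any standard API:

```java
public class StatusClassDemo {
    // Map an HTTP status code to its class, based on its hundreds digit.
    static String statusClass(int code) {
        if (code >= 100 && code < 200) return "informational";
        if (code >= 200 && code < 300) return "successful";
        if (code >= 300 && code < 400) return "redirection";
        if (code >= 400 && code < 500) return "client error";
        if (code >= 500 && code < 600) return "server error";
        return "unknown";
    }

    public static void main(String[] args) {
        System.out.println(200 + " -> " + statusClass(200));
        System.out.println(404 + " -> " + statusClass(404));
        System.out.println(503 + " -> " + statusClass(503));
    }
}
```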


Informational responses

100 Continue

This interim response indicates that everything so far is OK and that the client should continue with the request or ignore it if it is already finished.

101 Switching Protocol

This code is sent in response to an Upgrade request header by the client, and indicates the protocol the server is switching to.

102 Processing (WebDAV)

This code indicates that the server has received and is processing the request, but no response is available yet.

Successful responses

200 OK

The request has succeeded. The meaning of a success varies depending on the HTTP method:
GET: The resource has been fetched and is transmitted in the message body.
HEAD: The entity headers are in the response, without any message body.
PUT or POST: The resource describing the result of the action is transmitted in the message body.
TRACE: The message body contains the request message as received by the server

201 Created

The request has succeeded and a new resource has been created as a result of it. This is typically the response sent after a POST request, or after some PUT requests.

202 Accepted

The request has been received but not yet acted upon. It is non-committal, meaning that there is no way in HTTP to later send an asynchronous response indicating the outcome of processing the request. It is intended for cases where another process or server handles the request, or for batch processing.

203 Non-Authoritative Information

The 203 Non-Authoritative Information response code means the returned meta-information is not exactly the same as is available from the origin server, but is collected from a local or a third-party copy. Except for this condition, the 200 OK response should be preferred instead of this response.

204 No Content

There is no content to send for this request, but the headers may be useful. The user agent may update its cached headers for this resource with the new ones.

205 Reset Content

The 205 Reset Content response code is sent after fulfilling the request, to tell the user agent to reset the document view that sent the request.

206 Partial Content

The 206 Partial Content response code is used in response to a Range header sent by the client, for example to separate a download into multiple streams.

207 Multi-Status (WebDAV)

A Multi-Status response conveys information about multiple resources in situations where multiple status codes might be appropriate.

208 Already Reported (WebDAV)

Used inside a DAV: propstat response element to avoid enumerating the internal members of multiple bindings to the same collection repeatedly.

226 IM Used (HTTP Delta encoding)

The server has fulfilled a GET request for the resource, and the response is a representation of the result of one or more instance-manipulations applied to the current instance.

Redirection messages

300 Multiple Choice

The request has more than one possible response. The user-agent or user should choose one of them. There is no standardized way of choosing one of the responses.

301 Moved Permanently

301 Moved Permanently response code means that the URI of the requested resource has been changed. Probably, the new URI would be given in the response.

302 Found

The 302 Found response code means that the URI of the requested resource has been changed temporarily. Further changes in the URI might be made in the future. Therefore, this same URI should be used by the client in future requests.

303 See Other

The server sent this response to direct the client to get the requested resource at another URI with a GET request.

304 Not Modified

This is used for caching purposes. It tells the client that the response has not been modified, so the client can continue to use the same cached version of the response.

305 Use Proxy

Was defined in a previous version of the HTTP specification to indicate that a requested response must be accessed by a proxy. It has been deprecated due to security concerns regarding in-band configuration of a proxy.

306 unused

The 306 response code is no longer used; it is currently just reserved. It was used in a previous version of the HTTP/1.1 specification.

307 Temporary Redirect

The server sends this response to direct the client to get the requested resource at another URI with same method that was used in the prior request. This has the same semantics as the 302 Found HTTP response code, with the exception that the user agent must not change the HTTP method used: If a POST was used in the first request, a POST must be used in the second request.

308 Permanent Redirect

This means that the resource is now permanently located at another URI, specified by the Location: HTTP response header. This has the same semantics as the 301 Moved Permanently HTTP response code, with the exception that the user agent must not change the HTTP method used: If a POST was used in the first request, a POST must be used in the second request.

Client error responses

400 Bad Request

The 400 Bad Request response means that the server could not understand the request due to invalid syntax.

401 Unauthorized

Although the HTTP standard specifies “unauthorized”, semantically this response means “unauthenticated”. That is, the client must authenticate itself to get the requested response.

402 Payment Required

The 402 Payment Required response code is reserved for future use. The initial aim in creating this code was to use it for digital payment systems; however, it is not currently used.

403 Forbidden

The client does not have access rights to the content, i.e. they are unauthorized, so the server is refusing to give the requested resource. Unlike 401, the client’s identity is known to the server.

404 Not Found

The server cannot find the requested resource. In the browser, this means the URL is not recognized. In an API, this can also mean that the endpoint is valid but the resource itself does not exist. Servers may also send this response instead of 403 to hide the existence of a resource from an unauthorized client. This response code is probably the most famous one due to its frequent occurrence on the web.

405 Method Not Allowed

The request method is known by the server but has been disabled and cannot be used. For example, an API may forbid DELETE-ing a resource. The two mandatory methods, GET and HEAD, must never be disabled and should not return this error code.

406 Not Acceptable

406 Not Acceptable response is sent when the web server, after performing server-driven content negotiation, doesn’t find any content following the criteria given by the user agent.

407 Proxy Authentication Required

The 407 Proxy Authentication Required response is similar to 401, but authentication needs to be done by a proxy.

408 Request Timeout

The 408 Request Timeout response is sent on an idle connection by some servers, even without any previous request by the client. It means that the server would like to shut down this unused connection. This response is used much more often since some browsers, like Chrome, Firefox 27+, or IE9, use HTTP pre-connection mechanisms to speed up surfing. Also note that some servers merely shut down the connection without sending this message.

409 Conflict

409 Conflict response is sent when a request conflicts with the current state of the server.

410 Gone

410 Gone response would be sent when the requested content has been permanently deleted from server, with no forwarding address. Clients are expected to remove their caches and links to the resource. The HTTP specification intends this status code to be used for “limited-time, promotional services”. APIs should not feel compelled to indicate resources that have been deleted with this status code.

411 Length Required

Server rejected the request because the Content-Length header field is not defined and the server requires it.

412 Precondition Failed

The client has indicated preconditions in its headers which the server does not meet.

413 Payload Too Large

The request entity is larger than limits defined by the server; the server might close the connection or return a Retry-After header field.

414 URI Too Long

The URI requested by the client is longer than the server is willing to interpret.

415 Unsupported Media Type

The media format of the requested data is not supported by the server, so the server is rejecting the request.

416 Requested Range Not Satisfiable

The range specified by the Range header field in the request can’t be fulfilled; it’s possible that the range is outside the size of the target URI’s data.

417 Expectation Failed

417 Expectation Failed response code means the expectation indicated by the Expect request header field can’t be met by the server.

418 I’m a teapot

The server refuses the attempt to brew coffee with a teapot.

421 Misdirected Request

The request was directed at a server that is not able to produce a response. This can be sent by a server that is not configured to produce responses for the combination of scheme and authority that are included in the request URI.

422 Unprocessable Entity (WebDAV)

The request was well-formed but was unable to be followed due to semantic errors.

423 Locked (WebDAV)

The resource that is being accessed is locked.

424 Failed Dependency (WebDAV)

The request failed due to failure of a previous request.

426 Upgrade Required

The server refuses to perform the request using the current protocol but might be willing to do so after the client upgrades to a different protocol. The server sends an Upgrade header in a 426 response to indicate the required protocol(s).

428 Precondition Required

The origin server requires the request to be conditional. Intended to prevent the ‘lost update’ problem, where a client GETs a resource’s state, modifies it, and PUTs it back to the server, when meanwhile a third party has modified the state on the server, leading to a conflict.

429 Too Many Requests

The user has sent too many requests in a given amount of time (“rate limiting”).

431 Request Header Fields Too Large

The server is unwilling to process the request because its header fields are too large. The request MAY be resubmitted after reducing the size of the request header fields.

451 Unavailable For Legal Reasons

The user agent requested a resource that cannot legally be provided, such as a web page censored by a government.

Server error responses

500 Internal Server Error

The server has encountered a situation it doesn’t know how to handle.

501 Not Implemented

The request method is not supported by the server and cannot be handled. The only methods that servers are required to support (and therefore that must not return this code) are GET and HEAD.

502 Bad Gateway

502 Bad Gateway error response means that the server, while working as a gateway to get a response needed to handle the request, got an invalid response.

503 Service Unavailable

The server is not ready to handle the request. Common causes are a server that is down for maintenance or that is overloaded. Note that together with this response, a user-friendly page explaining the problem should be sent. This response should be used for temporary conditions, and the Retry-After: HTTP header should, if possible, contain the estimated time before the recovery of the service. The webmaster must also take care with the caching-related headers that are sent along with this response, as these temporary-condition responses should usually not be cached.

504 Gateway Timeout

504 Gateway Timeout error response is given when the server is acting as a gateway and cannot get a response in time.

505 HTTP Version Not Supported

The HTTP version used in the request is not supported by the server.

506 Variant Also Negotiates

The server has an internal configuration error: transparent content negotiation for the request results in a circular reference.

507 Insufficient Storage (WebDAV)

The server is unable to store the representation needed to successfully complete the request.

508 Loop Detected (WebDAV)

The server detected an infinite loop while processing the request.

510 Not Extended

Further extensions to the request are required for the server to fulfill it.

511 Network Authentication Required

The 511 status code indicates that the client needs to authenticate to gain network access.

Why is Windows always installed on drive C by default?

Author Piyush Gupta

Many users, developers and programmers use computers and know how to format a system or install an OS on it, but very few know why Windows is always installed on drive C by default.



Here is the answer to this question, which comes to the mind of many computer users.
The default drive letter C is a leftover of the traditional floppy drives. Before hard drives became standard (around 1980), floppy disks were used for booting computers. They were available in two sizes at that point, 5¼″ and 3½″, and the two floppy disk drives were labelled Local Disk (A) and Local Disk (B). When the hard disk arrived, it was assigned the next letter, C. Once hard disks became widespread and floppy disks became obsolete, the drive names A and B vanished, but the hard drive kept its letter C.
Thanks for reading this post. If you want to know more hidden facts about computers or programming, please stay with us and enjoy our blogs.

Basic Linux Commands for daily usages

Author Piyush Gupta


This blog will explore the basic Linux commands and how to use them.

  1. ls: How would we know what a folder contains? With a graphical interface, you’d do this by opening a folder and inspecting its contents. From the command line, you use the command ls instead to list a folder’s contents.
    By default, ls will use a very compact output format. Many terminals show the files and subdirectories in different colors that represent different file types. Regular files don’t have special coloring applied to their names. Some file types, like JPEG or PNG images, or tar and ZIP files, are usually colored differently, and the same is true for programs that you can run and for directories. Try ls for yourself and compare the icons and emblems your graphical file manager uses with the colors that ls applies on the command line. If the output isn’t colored, you can call ls with the option --color:
    $ ls --color
  2. man: You can learn about the options and arguments to be used with any command in Linux. man (short for manual) is used to display the description of any Linux command, like this:
    $ man ls
    Here, man is being asked to bring up the manual page for ls. You can use the arrow keys to scroll up and down in the screen that appears and you can close it using the q key (for quit).
  3. info: An alternative to obtain a comprehensive user documentation for a given program is to invoke info instead of man:
    $ info ls
    This is particularly effective to learn how to use complex GNU programs. You can also browse the info documentation inside the editor Emacs, which greatly improves its readability. But you should be ready to take your first step into the larger world of Emacs. You may do so by invoking:
    $ emacs -f info-standalone 
    that should display the Info main menu inside Emacs (if this does not work, try invoking emacs without arguments and then type Alt + x info, i.e. by pressing the Alt key, then pressing the x key, then releasing both keys and finally typing info followed by the Return or Enter key). If you type then m ls, the interactive Info documentation for ls will be loaded inside Emacs. In the standalone mode, the q key will quit the documentation, as usual with man and info.
  4. apropos: If you don’t know what something is or how to use it, the first place to look is its manual and information pages. If you don’t know the name of what you want to do, the apropos command can help. Let’s say you want to rename files but you don’t know what command does that. Try apropos with some word that is related to what you want, like this:
    $ apropos rename
    ...
    mv (1) - move (rename) files
    prename (1) - renames multiple files
    rename (2) - change the name or location of a file
    ...
    Here, apropos searches the manual pages that man knows about and prints commands it thinks are related to renaming. On your computer this command might (and probably will) display more information but it’s very likely to include the entries shown.
  5. mv: The mv command is used to move or rename files.
    $ mv oldname newname
    Depending on your system configuration, you may not be warned when renaming a file will overwrite an existing file whose name happens to be newname. So, as a safeguard, always use the -i option when issuing mv, like this:
    $ mv -i oldname newname
    If the last argument happens to be an existing directory, mv will move the file to that directory instead of renaming it. Because of this, you can provide mv more than two arguments:
    $ mv first_file second_file third_file ~/stuff 
    If ~/stuff exists, then mv will move the files there. If it doesn’t exist, it will produce an error message, like this:
    $ mv first_file second_file third_file ~/stuff 
    mv: target 'stuff' is not a directory
  6. mkdir: The mkdir command is used to create a subdirectory in your current working directory:
    $ mkdir practice
    To see the directory practice you have just created, type ls. If you wish to create a subdirectory (say the directory bar) inside another directory (say the directory foo) but you are not sure whether this one exists or not, you can ensure to create the subdirectory and (if needed) its parent directory without raising errors by typing:
    $ mkdir -p ~/foo/bar  
    This will work even for nested sub-sub-…-directories.
  7. cd: The command cd directory means change the current working directory to ‘directory’. The current working directory may be thought of as the directory you are in, i.e. your current position in the file-system tree.
    To change to the directory you have just made, type:
    $ cd practice
    Now, if you go back to your home directory, type
    $ cd ..
    NOTE: there is a space between cd and the dots
  8. rmdir: Now that you are in the home directory, try to remove the directory called practice; rmdir will produce an error message:
    $ cd ..
    $ rmdir practice
    rmdir: failed to remove 'practice': Directory not empty
    If the directory you wish to remove is not empty, rmdir will produce an error message and will not remove it. If you want to remove a directory that contains files, you have to empty it.
  9. rm: rm removes each specified file like this:
    $ rm practice/fstab practice/hosts practice/issue practice/mod
    And now you can try removing the directory again:
    $ rmdir practice
    And now it works, without showing any output.
    But what happens if your directories have directories inside that also have files? You could be there for weeks making sure each folder is empty! The rm command solves this problem through the option -R, which as usual stands for “recursive”. In the following example, the command fails because foo is not a plain file:
    $ rm ~/foo/ 
    rm: cannot remove '~/foo/': Is a directory
    So maybe you try rmdir, but that fails because foo has something else under it:
    $ rmdir ~/foo
    rmdir: ~/foo: Directory not empty
    So you use rm -R, which succeeds and does not produce a message.
    $ rm -R ~/foo/
    So when you have a big directory, you don’t have to go and empty every subdirectory.
    But be warned that -R is a very powerful argument and you may lose data you wanted to keep!
  10. cat: You don’t need an editor to view the contents of a file. What you need is just to display it. The cat program fits the bill here:
    $ cat myspeech.txt
    Friends, Coders, Linux-lovers! This is an article in GeeksForGeeks.
  11. less: Here, cat just opens the file myspeech.txt and prints the entire file to your screen, as fast as it can. However if the file is really long, the contents will go by very quickly, and when cat is done, all you will see are the last few lines of the file. To just view the contents of a long file (or any text file) you can use the less program:
    $ less myspeech.txt
    Just as with using man, use the arrow keys to navigate, and press q to quit.
Note: we took references from linuxcommand.org