Ans. - It gives the status of the daemons which run the Hadoop cluster. The output mentions the status of the Namenode, Datanode, Secondary Namenode, JobTracker and TaskTracker.
Q. - How to restart Namenode?
Ans. - Option 1 − Run stop-all.sh and then start-all.sh. Option 2 − Switch to the hdfs user (sudo su - hdfs), go to /etc/init.d, and run /etc/init.d/hadoop-0.20-namenode start.
Q. - Which are the three modes in which Hadoop can be run?
Ans. - The three modes in which Hadoop can be run are − 1. Standalone (local) mode, 2. Pseudo-distributed mode, 3. Fully distributed mode.
Q. - What does /etc /init.d do?
Ans. - /etc/init.d specifies where daemons (services) are placed and lets you check the status of those daemons. It is Linux-specific and has nothing to do with Hadoop.
Q. - What if a Namenode has no data?
Ans. - It cannot be part of the Hadoop cluster.
Q. - What happens to job tracker when Namenode is down?
Ans. - When the Namenode is down, your cluster is OFF, because the Namenode is the single point of failure in HDFS.
Q. - What is Big Data?
Ans. - Big Data is an assortment of data so huge and complex that it becomes very tedious to capture, store, process, retrieve and analyze it with on-hand database management tools or traditional data processing techniques.
Q. - What are the four characteristics of Big Data?
Ans. - The four characteristics of Big Data are − 1. Volume − e.g. Facebook generating 500+ terabytes of data per day. 2. Velocity − e.g. analyzing 2 million records each day to identify the reason for losses. 3. Variety − images, audio, video, sensor data, log files, etc. 4. Veracity − biases, noise and abnormality in data.
Q. - How is analysis of Big Data useful for organizations?
Ans. - Effective analysis of Big Data provides a lot of business advantage, as organizations learn which areas to focus on and which areas are less important. Big Data analysis provides early key indicators that can prevent the company from a huge loss or help it grasp a great opportunity. A precise analysis of Big Data helps in decision making. For instance, nowadays people rely heavily on Facebook and Twitter before buying any product or service, thanks to the Big Data explosion.
Q. - Why do we need Hadoop?
Ans. - Every day a large amount of unstructured data is dumped into our machines. The major challenge is not storing large data sets in our systems but retrieving and analyzing the big data, especially data present in different machines at different locations. This is where Hadoop comes in. Hadoop has the ability to analyze data present in different machines at different locations very quickly and in a very cost-effective way. It uses the concept of MapReduce, which enables it to divide a query into small parts and process them in parallel; this is also known as parallel computing.
Q. - What is the basic difference between traditional RDBMS and Hadoop?
Ans. - Traditional RDBMS is used for transactional systems to report and archive data, whereas Hadoop is an approach for storing huge amounts of data in a distributed file system and processing it. RDBMS is useful when you want to seek one record from Big Data, whereas Hadoop is useful when you want the Big Data in one shot and perform analysis on it later.
Q. - What is Fault Tolerance?
Ans. - Suppose you have a file stored in a system, and due to some technical problem that file gets destroyed. Then there is no chance of getting the data back present in that file. To avoid such situations, Hadoop has introduced the feature of fault tolerance in HDFS. In Hadoop, when we store a file, it automatically gets replicated at two other locations also. So even if one or two of the systems collapse, the file is still available on the third system.
Q. - Replication causes data redundancy, then why is it pursued in HDFS?
Ans. - HDFS works with commodity hardware (systems with average configurations) that has a high chance of crashing at any time. Thus, to make the entire system highly fault-tolerant, HDFS replicates and stores data in different places. Any data on HDFS gets stored at (by default) three different locations. So, even if one of them is corrupted and another is unavailable for some time for any reason, the data can still be accessed from the third one. Hence, there is little chance of losing the data. This replication factor helps us attain the Hadoop feature called fault tolerance.
Q. - Since the data is replicated thrice in HDFS, does it mean that any calculation done on one node will also be replicated on the other two?
Ans. - No, calculations will be done only on the original data. The master node knows which node exactly has that particular data. If one of the nodes is not responding, it is assumed to have failed. Only then will the required calculation be done on the second replica.
Q. - What is a Namenode?
Ans. - Namenode is the master node on which job tracker runs and consists of the metadata. It maintains and manages the blocks which are present on the datanodes. It is a high-availability machine and single point of failure in HDFS.
Q. - Is Namenode also a commodity hardware?
Ans. - No. The Namenode can never be commodity hardware because the entire HDFS relies on it. It is the single point of failure in HDFS. The Namenode has to be a high-availability machine.
Q. - What is a Datanode?
Ans. - Datanodes are the slaves which are deployed on each machine and provide the actual storage. These are responsible for serving read and write requests for the clients.
Q. - Why do we use HDFS for applications having large data sets and not when there are a lot of small files?
Ans. - HDFS is more suitable for a large amount of data in a single file than for small amounts of data spread across multiple files. This is because the Namenode is a very expensive, high-performance system, so it is not prudent to fill its memory with the unnecessary metadata generated for a multitude of small files. When there is a large amount of data in a single file, the Namenode occupies less space. Hence, for optimized performance, HDFS supports large data sets instead of multiple small files.
Q. - What is a job tracker?
Ans. - The job tracker is a daemon that runs on a namenode for submitting and tracking MapReduce jobs in Hadoop. It assigns tasks to the different task trackers. In a Hadoop cluster, there will be only one job tracker but many task trackers. It is the single point of failure for the Hadoop MapReduce service; if the job tracker goes down, all the running jobs are halted. It receives heartbeats from the task trackers, based on which the job tracker decides whether an assigned task is completed or not.
Q. - What is a task tracker?
Ans. - Task tracker is also a daemon that runs on datanodes. Task Trackers manage the execution of individual tasks on slave node. When a client submits a job, the job tracker will initialize the job and divide the work and assign them to different task trackers to perform MapReduce tasks. While performing this action, the task tracker will be simultaneously communicating with job tracker by sending heartbeat. If the job tracker does not receive heartbeat from task tracker within specified time, then it will assume that task tracker has crashed and assign that task to another task tracker in the cluster.
Q. - What is a heartbeat in HDFS?
Ans. - A heartbeat is a signal indicating that a node is alive. A datanode sends heartbeats to the Namenode, and a task tracker sends its heartbeats to the job tracker. If the Namenode or job tracker does not receive a heartbeat, it will decide that there is some problem in the datanode, or that the task tracker is unable to perform the assigned task.
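The timeout logic described above can be sketched in plain Python (this is an illustration, not Hadoop's actual implementation; the class name `HeartbeatMonitor` and the 10-second timeout are made up for this sketch):

```python
TIMEOUT = 10  # seconds without a heartbeat before a node is presumed dead (hypothetical value)

class HeartbeatMonitor:
    """Toy monitor in the style of the Namenode/JobTracker: track the last
    heartbeat time per node and flag any node that has gone quiet."""
    def __init__(self):
        self.last_seen = {}

    def heartbeat(self, node_id, now):
        self.last_seen[node_id] = now

    def dead_nodes(self, now):
        return [n for n, t in self.last_seen.items() if now - t > TIMEOUT]

monitor = HeartbeatMonitor()
monitor.heartbeat("datanode-1", now=0)
monitor.heartbeat("datanode-2", now=0)
monitor.heartbeat("datanode-1", now=8)   # datanode-2 stops reporting
print(monitor.dead_nodes(now=12))        # ['datanode-2']
```

The same pattern applies to both pairs: datanode/Namenode and task tracker/job tracker.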
Q. - What is a 'block' in HDFS?
Ans. - A 'block' is the minimum amount of data that can be read or written. In HDFS, the default block size is 64 MB, in contrast to the block size of 8192 bytes in Unix/Linux. Files in HDFS are broken down into block-sized chunks, which are stored as independent units. HDFS blocks are large compared to disk blocks, mainly to minimize the cost of seeks. If a particular file is 50 MB, will the HDFS block still consume 64 MB as the default size? No, not at all! 64 MB is just the unit in which the data is stored. In this particular situation, only 50 MB will be consumed by the HDFS block and 14 MB will be free to store something else. It is the master node that does data allocation in an efficient manner.
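The point that a block never consumes more disk space than the data in it can be shown with a small Python sketch (illustrative only; `blocks_for` is a made-up helper, not a Hadoop API):

```python
BLOCK_SIZE = 64 * 1024 * 1024  # default HDFS block size used in this document (64 MB)

def blocks_for(file_size):
    """Return the sizes of the HDFS blocks a file of file_size bytes occupies.
    Only the last block may be smaller than BLOCK_SIZE; a block never takes
    more space on disk than the data actually stored in it."""
    sizes = []
    remaining = file_size
    while remaining > 0:
        sizes.append(min(remaining, BLOCK_SIZE))
        remaining -= BLOCK_SIZE
    return sizes

mb = 1024 * 1024
print(blocks_for(50 * mb))    # one 50 MB block, not a full 64 MB
print(blocks_for(130 * mb))   # two full 64 MB blocks plus a 2 MB tail
```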
Q. - What are the benefits of block transfer?
Ans. - A file can be larger than any single disk in the network. There is nothing that requires the blocks from a file to be stored on the same disk, so they can take advantage of any of the disks in the cluster. Making the unit of abstraction a block rather than a file simplifies the storage subsystem. Blocks also provide fault tolerance and availability: to ensure against corrupted blocks and disk and machine failure, each block is replicated to a small number of physically separate machines (typically three). If a block becomes unavailable, a copy can be read from another location in a way that is transparent to the client.
Q. - How indexing is done in HDFS?
Ans. - Hadoop has its own way of indexing. Depending upon the block size, once the data is stored, HDFS stores the last part of the data as a pointer indicating where the next part of the data will be.
Q. - Are job tracker and task trackers present in separate machines?
Ans. - Yes, the job tracker and the task trackers are present on different machines. The reason is that the job tracker is a single point of failure for the Hadoop MapReduce service. If it goes down, all running jobs are halted.
Q. - What is the communication channel between client and namenode/datanode?
Ans. - The mode of communication is SSH.
Q. - What is a rack?
Ans. - A rack is a storage area with all the datanodes put together; it is a physical collection of datanodes stored at a single location. There can be multiple racks in a single location.
Q. - What is a Secondary Namenode? Is it a substitute to the Namenode?
Ans. - The Secondary Namenode periodically reads the filesystem metadata from the RAM of the Namenode and writes it into the hard disk or the file system. It is not a substitute for the Namenode, so if the Namenode fails, the entire Hadoop system goes down.
Q. - Explain how do 'map' and 'reduce' works.
Ans. - The input is divided into parts (splits), which are assigned to the datanodes. The datanodes process the tasks assigned to them, produce key-value pairs, and return this intermediate output to the reducer. The reducer collects the key-value pairs from all the datanodes, combines them, and generates the final output.
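The map and reduce phases above can be sketched with the classic word-count example in plain Python (illustrative only; real Hadoop runs these functions distributed across nodes):

```python
from collections import defaultdict

def mapper(text):
    """Map step: emit a (key, value) pair for every word in the split."""
    for word in text.split():
        yield (word.lower(), 1)

def reducer(pairs):
    """Reduce step: combine the values of each key into a final count."""
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

# Two "datanodes" each process their own split in the map phase ...
split1 = list(mapper("big data needs hadoop"))
split2 = list(mapper("hadoop processes big data"))
# ... and the reducer merges all intermediate pairs into the final output.
print(reducer(split1 + split2))
```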
Q. - Why 'Reading' is done in parallel and 'Writing' is not in HDFS?
Ans. - When reading, a file's blocks are already known, so a MapReduce program can read them by splitting the file and processing the splits in parallel. When writing, the incoming values are not yet known to the system, so MapReduce cannot be applied and parallel writing is not possible.
Q. - Copy a directory from one node in the cluster to another
Ans. - Use the 'distcp' (distributed copy) command − hadoop distcp <source> <destination>
Q. - The default replication factor for a file is 3. Use the '-setrep' command to change the replication factor of a file to 2.
Ans. - hadoop fs -setrep -w 2 apache_hadoop/sample.txt
Q. - What is rack awareness?
Ans. - Rack awareness is the way in which the namenode decides how to place blocks based on the rack definitions. Hadoop will try to minimize the network traffic between datanodes within the same rack and will only contact remote racks if it has to. The namenode is able to control this due to rack awareness.
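A minimal sketch of rack-aware replica placement, assuming the common default policy for three replicas (first replica on the writer's rack, second on a node in a remote rack, third on a different node in that same remote rack). This is plain Python for illustration; `place_replicas` and the topology are made up, not a Hadoop API:

```python
def place_replicas(racks, writer_rack):
    """Pick 3 replica locations: one on the writer's rack, two on
    different nodes of one remote rack (keeps cross-rack traffic low
    while surviving a whole-rack failure).
    `racks` maps rack name -> list of node names (hypothetical topology)."""
    first = racks[writer_rack][0]
    remote_rack = next(r for r in racks if r != writer_rack)
    second, third = racks[remote_rack][0], racks[remote_rack][1]
    return [first, second, third]

topology = {"rack1": ["n1", "n2"], "rack2": ["n3", "n4"]}
print(place_replicas(topology, writer_rack="rack1"))  # ['n1', 'n3', 'n4']
```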
Q. - Which file holds the Hadoop core configuration?
Ans. - core-default.xml (site-specific overrides are placed in core-site.xml)
Q. - Is there an HDFS command to see the available free space in HDFS?
Ans. - hadoop dfsadmin -report
Q. - The requirement is to add a new data node to a running Hadoop cluster; how do I start services on just one data node?
Ans. - You do not need to shutdown and/or restart the entire cluster in this case.
Q. - First, add the new node's DNS name to the conf/slaves file on the master node.
Ans. - Then log in to the new slave node and execute − $ cd path/to/hadoop, $ bin/hadoop-daemon.sh start datanode, $ bin/hadoop-daemon.sh start tasktracker. Then issue hadoop dfsadmin -refreshNodes and hadoop mradmin -refreshNodes so that the NameNode and JobTracker know of the additional node that has been added.
Q. - How do you gracefully stop a running job?
Ans. - hadoop job -kill <job-id>
Q. - Does the name-node stay in safe mode till all under-replicated files are fully replicated?
Ans. - No. During safe mode, replication of blocks is prohibited. The name-node waits until all or a majority of data-nodes report their blocks.
Q. - What happens if one Hadoop client renames a file or a directory containing this file while another client is still writing into it?
Ans. - A file will appear in the name space as soon as it is created. If a writer is writing to a file and another client renames either the file itself or any of its path components, then the original writer will get an IOException either when it finishes writing to the current block or when it closes the file.
Q. - How to make a large cluster smaller by taking out some of the nodes?
Ans. - Hadoop offers the decommission feature to retire a set of existing data-nodes. The nodes to be retired should be included in the exclude file, and the exclude file name should be specified as the configuration parameter dfs.hosts.exclude. The decommission process can be terminated at any time by editing the configuration or the exclude files and repeating the -refreshNodes command.
Q. - Can we search for files using wildcards?
Ans. - Yes. For example, to list all the files which begin with the letter a, you could use the ls command with the * wildcard − hdfs dfs -ls a*
Q. - What happens when two clients try to write into the same HDFS file?
Ans. - HDFS supports exclusive writes only. When the first client contacts the name-node to open the file for writing, the name-node grants a lease to the client to create this file. When the second client tries to open the same file for writing, the name-node will see that the lease for the file is already granted to another client, and will reject the open request for the second client
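The lease mechanism described above can be sketched in plain Python (an illustration of the idea, not HDFS's actual implementation; `NameNodeLeases` is a made-up class):

```python
class NameNodeLeases:
    """Sketch of exclusive writes: the first client to open a file for
    writing is granted the lease; later clients are rejected until the
    holder closes the file and releases it."""
    def __init__(self):
        self.leases = {}  # path -> client currently holding the write lease

    def open_for_write(self, path, client):
        if path in self.leases:
            raise IOError(f"{path} already leased to {self.leases[path]}")
        self.leases[path] = client

    def close(self, path, client):
        if self.leases.get(path) == client:
            del self.leases[path]

nn = NameNodeLeases()
nn.open_for_write("/data/log.txt", "client-A")   # first client succeeds
try:
    nn.open_for_write("/data/log.txt", "client-B")
except IOError as e:
    print("rejected:", e)                        # second client is refused
```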
Q. - What does 'file could only be replicated to 0 nodes, instead of 1' mean?
Ans. - The namenode does not have any available DataNodes.
Q. - What is a Combiner?
Ans. - The Combiner is a 'mini-reduce' process which operates only on data generated by a mapper. The Combiner receives as input all the data emitted by the Mapper instances on a given node. The output from the Combiner, instead of the output from the Mappers, is then sent to the Reducers.
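A small Python sketch of what a combiner buys you: pre-summing on one node shrinks the intermediate data before it crosses the network (illustrative only; `combine` is a made-up helper, not a Hadoop API):

```python
from collections import defaultdict

def combine(pairs):
    """'Mini-reduce' on a single node: pre-sum counts locally so fewer
    intermediate pairs are shipped to the reducers."""
    local = defaultdict(int)
    for key, value in pairs:
        local[key] += value
    return list(local.items())

# Mapper output on one node, before and after combining:
mapper_output = [("hadoop", 1), ("data", 1), ("hadoop", 1), ("hadoop", 1)]
combined = combine(mapper_output)
print(combined)   # 2 pairs to transfer instead of 4
```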
Q. - Consider case scenario: In M/R system, - HDFS block size is 64 MB - Input format is FileInputFormat – We have 3 files of size 64K, 65Mb and 127Mb How many input splits will be made by Hadoop framework?
Ans. - Hadoop will make 5 splits as follows − 1 split for the 64K file, 2 splits for the 65MB file, and 2 splits for the 127MB file.
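The split count above follows from a simplified model where the split size equals the block size and any remainder gets its own split (a sketch; real FileInputFormat has extra details such as a slop factor):

```python
import math

BLOCK_SIZE_MB = 64

def split_count(file_size_mb):
    """Input splits for one file, assuming split size == block size and
    each file always yields at least one split."""
    return max(1, math.ceil(file_size_mb / BLOCK_SIZE_MB))

files_mb = [64 / 1024, 65, 127]          # the 64K, 65 MB and 127 MB files
splits = [split_count(size) for size in files_mb]
print(splits, "total:", sum(splits))     # [1, 2, 2] total: 5
```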
Q. - Suppose Hadoop spawned 100 tasks for a job and one of the task failed. What will Hadoop do?
Ans. - It will restart the task on some other TaskTracker; only if the task fails more than four times (the default setting, which can be changed) will it kill the job.
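The retry policy can be sketched as follows (plain Python for illustration; `run_job` and the simulated outcomes are made up, and 4 is the default maximum number of attempts per task):

```python
MAX_ATTEMPTS = 4  # default maximum attempts per task before the job fails

def run_job(tasks, attempt_outcomes):
    """Re-run each failed task (conceptually, on another TaskTracker);
    fail the whole job once any task exhausts its attempts.
    `attempt_outcomes[task]` is a list of True/False results per attempt."""
    for task in tasks:
        for attempt, ok in enumerate(attempt_outcomes[task], start=1):
            if ok:
                break
            if attempt >= MAX_ATTEMPTS:
                return f"job failed: {task} failed {attempt} times"
    return "job succeeded"

outcomes = {"task-1": [True], "task-2": [False, False, True]}
print(run_job(["task-1", "task-2"], outcomes))       # succeeds after retries
print(run_job(["task-3"], {"task-3": [False] * 4}))  # job failed
```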
Q. - What are Problems with small files and HDFS?
Ans. - HDFS is not good at handling a large number of small files. Every file, directory and block in HDFS is represented as an object in the namenode's memory, each of which occupies approximately 150 bytes. So 10 million files, each using a block, would use about 3 gigabytes of memory. When we go for a billion files, the memory requirement in the namenode cannot be met.
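The arithmetic behind the 3 GB figure can be checked directly (a rough estimate counting one namenode object per file plus one per block, ignoring directories; `namenode_memory_gb` is a made-up helper):

```python
OBJECT_BYTES = 150  # approx. namenode memory per file/directory/block object

def namenode_memory_gb(num_files, blocks_per_file=1):
    """Rough namenode heap estimate: one object per file plus one per
    block (directory objects ignored for simplicity)."""
    objects = num_files * (1 + blocks_per_file)
    return objects * OBJECT_BYTES / 1024 ** 3

print(round(namenode_memory_gb(10_000_000), 2))   # about 2.79 GB for 10M files
print(round(namenode_memory_gb(1_000_000_000)))   # hundreds of GB for 1B files
```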
Q. - What is speculative execution in Hadoop?
Ans. - If a node appears to be running slowly, the master node can redundantly execute another instance of the same task, and the first output produced is taken. This process is called speculative execution.
Q. - Can Hadoop handle streaming data?
Ans. - Yes. Through technologies like Apache Kafka, Apache Flume, and Apache Spark it is possible to do large-scale streaming.
Q. - Why is Checkpointing Important in Hadoop?
Ans. - As more and more files are added, the namenode creates large edit logs, which can substantially delay NameNode startup as the NameNode reapplies all the edits. Checkpointing is a process that takes an fsimage and edit log and compacts them into a new fsimage. This way, instead of replaying a potentially unbounded edit log, the NameNode can load the final in-memory state directly from the fsimage. This is a far more efficient operation and reduces NameNode startup time.
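The compaction idea can be sketched in a few lines of Python, modelling the fsimage as a dict of path-to-metadata and the edit log as a list of operations (illustrative only; `apply_edits` and `checkpoint` are made-up helpers, not HDFS code):

```python
def apply_edits(fsimage, edit_log):
    """Replay edit-log records onto an fsimage (path -> metadata dict)."""
    for op, path, meta in edit_log:
        if op == "create":
            fsimage[path] = meta
        elif op == "delete":
            fsimage.pop(path, None)
    return fsimage

def checkpoint(fsimage, edit_log):
    """Fold the edit log into a new fsimage so startup can load final
    state directly instead of replaying every edit; the log is emptied."""
    new_fsimage = apply_edits(dict(fsimage), edit_log)
    return new_fsimage, []

image = {"/a": "file"}
edits = [("create", "/b", "file"), ("delete", "/a", None)]
image, edits = checkpoint(image, edits)
print(image, edits)   # {'/b': 'file'} []
```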
Q. - What is Twitter Bootstrap?
Ans. - Bootstrap is a sleek, intuitive, and powerful mobile-first front-end framework for faster and easier web development. It uses HTML, CSS, and JavaScript.
Q. - Why use Bootstrap?
Ans. - Bootstrap can be used because of the following −
Mobile-first approach − Since Bootstrap 3, the framework consists of mobile-first styles throughout the entire library instead of in separate files.
Browser support − It is supported by all popular browsers.
Easy to get started − With just the knowledge of HTML and CSS anyone can get started with Bootstrap. The Bootstrap official site also has good documentation.
Responsive design − Bootstrap's responsive CSS adjusts to desktops, tablets and mobiles.
It provides a clean and uniform solution for building an interface for developers, contains beautiful and functional built-in components which are easy to customize, provides web-based customization, and, best of all, it is open source.
Q. - What does Bootstrap package includes?
Ans. - The Bootstrap package includes − Scaffolding − a basic structure with Grid System, link styles and background (covered in detail in the section Bootstrap Basic Structure); CSS − global CSS settings and styles for fundamental HTML elements; Components − reusable components such as iconography, dropdowns and navigation; JavaScript Plugins − custom jQuery plugins; and Customize − options to customize components and variables to get your own version of the framework.
Q. - What is Contextual classes of table in Bootstrap?
Ans. - The Contextual classes allow you to change the background color of your table rows or individual cells.
Ans. - Class − Description
.active − Applies the hover color to a particular row or cell
.success − Indicates a successful or positive action
.warning − Indicates a warning that might need attention
.danger − Indicates a dangerous or potentially negative action
Q. - What is Bootstrap Grid System?
Ans. - Bootstrap includes a responsive, mobile first fluid grid system that appropriately scales up to 12 columns as the device or viewport size increases. It includes predefined classes for easy layout options, as well as powerful mixins for generating more semantic layouts.
Q. - What are Bootstrap media queries?
Ans. - Media Queries in Bootstrap allow you to move, show and hide content based on viewport size.
Q. - Show a basic grid structure in Bootstrap.
Ans. - The basic structure of the Bootstrap grid is − rows must be placed within a .container for proper alignment and padding; use rows to create horizontal groups of columns; content should be placed within columns, and only columns may be the immediate children of rows.
Q. - What are Offset columns?
Ans. - Offsets are a useful feature for more specialized layouts. They can be used to push columns over for more spacing, for example. The .col-xs-* classes don't support offsets, but they are easily replicated by using an empty cell.
Q. - How can you order columns in Bootstrap?
Ans. - You can easily change the order of built-in grid columns with .col-md-push-* and .col-md-pull-* modifier classes where * range from 1 to 11.
Q. - How do you make images responsive?
Ans. - Bootstrap 3 allows you to make images responsive by adding the class .img-responsive to the <img> tag. This class applies max-width: 100%; and height: auto; to the image so that it scales nicely to the parent element.
Q. - Explain the typography and links in Bootstrap.
Ans. - Bootstrap sets a basic global display (background), typography, and link styles −
Basic global display − Sets background-color: #fff; on the <body> element.
Typography − Uses the @font-family-base, @font-size-base, and @line-height-base attributes as the typographic base.
Link styles − Sets the global link color via the attribute @link-color and applies link underlines only on :hover.
Q. - What is Normalize in Bootstrap?
Ans. - Bootstrap uses Normalize to establish cross browser consistency.
Normalize.css is a modern, HTML5-ready alternative to CSS resets. It is a small CSS file that provides better cross-browser consistency in the default styling of HTML elements.
Q. - What is Lead Body Copy?
Ans. - To add some emphasis to a paragraph, add class="lead". This will give you a larger font size, lighter weight, and a taller line height.
Q. - Explain types of lists supported by Bootstrap.
Ans. - Bootstrap supports ordered lists, unordered lists, and definition lists.
Ordered lists − An ordered list is a list that falls in some sort of sequential order and is prefaced by numbers.
Unordered lists − An unordered list is a list that doesn't have any particular order and is traditionally styled with bullets. If you do not want the bullets to appear then you can remove the styling by using the class .list-unstyled. You can also place all list items on a single line using the class .list-inline.
Definition lists − In this type of list, each list item can consist of both the <dt> and <dd> elements. <dt> stands for definition term, and like a dictionary, this is the term (or phrase) that is being defined. Subsequently, the <dd> is the definition of the <dt>. You can make terms and descriptions in <dl> line up side-by-side using the class .dl-horizontal.
Q. - What are glyphicons?
Ans. - Glyphicons are icon fonts which you can use in your web projects. Glyphicons Halflings are not free and require licensing; however, their creator has made them available for Bootstrap projects free of cost.
Q. - How do you use Glyphicons?
Ans. - To use the icons, simply use code such as <span class="glyphicon glyphicon-search"></span> just about anywhere in your code. Leave a space between the icon and text for proper padding.
Q. - What is a transition plugin?
Ans. - The transition plugin provides simple transition effects, such as sliding or fading in modals.
Q. - What is a Modal Plugin?
Ans. - A modal is a child window that is layered over its parent window. Typically, the purpose is to display content from a separate source that can have some interaction without leaving the parent window. Child windows can provide information, interaction, or more.
Q. - How do you use the Dropdown plugin?
Ans. - You can toggle the dropdown plugin's hidden content in two ways −
Via data attributes − Add data-toggle="dropdown" to a link or button to toggle a dropdown.
Via JavaScript − Call the dropdown toggle directly with $('.dropdown-toggle').dropdown()
Q. - What is the Bootstrap carousel?
Ans. - The Bootstrap carousel is a flexible, responsive way to add a slider to your site. In addition to being responsive, the content is flexible enough to allow images, iframes, videos, or just about any type of content that you might want.
Q. - What is a button group?
Ans. - Button groups allow multiple buttons to be stacked together on a single line. This is useful when you want to place items like alignment buttons together.
Q. - Which class is used for a basic button group?
Ans. - The .btn-group class is used for a basic button group. Wrap a series of buttons with class .btn in .btn-group.
Q. - Which class is used to draw a toolbar of buttons?
Ans. - .btn-toolbar helps to combine sets of <div class="btn-group"> elements for more complex components.
Q. - Which classes can be applied to a button group instead of resizing each button?
Ans. - The .btn-group-lg, .btn-group-sm and .btn-group-xs classes can be applied to a button group instead of resizing each button.
Q. - Which class makes a set of buttons appear vertically stacked rather than horizontally?
Ans. - The .btn-group-vertical class makes a set of buttons appear vertically stacked rather than horizontally.
Q. - What are input groups?
Ans. - Input groups are extended Form Controls. Using input groups you can easily prepend and append text or buttons to the text-based inputs.
By adding prepended and appended content to an input field, you can add common elements to the user's input. For example, you can add the dollar symbol, the @ for a Twitter username, or anything else that might be common for your application interface.
To prepend or append elements to a .form-control −
Wrap it in a <div> with the class .input-group.
As a next step, within that same <div>, place your extra content inside a <span> with the class .input-group-addon.
Now place this <span> either before or after the <input> element.
Q. - How will you create a tabbed navigation menu?
Ans. - To create a tabbed navigation menu −
Start with a basic unordered list with the base class of .nav.
Add class .nav-tabs.
Q. - How will you create a pills navigation menu?
Ans. - To create a pills navigation menu −
Start with a basic unordered list with the base class of .nav.
Add class .nav-pills.
Q. - How will you create a vertical pills navigation menu?
Ans. - You can stack the pills vertically using the class .nav-stacked along with the classes: .nav, .nav-pills.
Q. - What is the Bootstrap navbar?
Ans. - The navbar is one of the prominent features of Bootstrap sites. Navbars are responsive 'meta' components that serve as navigation headers for your application or site. Navbars collapse in mobile views and become horizontal as the available viewport width increases. At its core, the navbar includes styling for site names and basic navigation.
Q. - How do you create a navbar in Bootstrap?
Ans. - To create a default navbar −
Add the classes .navbar and .navbar-default to the <nav> tag.