1. What are the best practices for data cleaning?

Ans. There are 5 basic best practices for data cleaning:

  • Make a data cleaning plan by understanding where the common errors take place and keep communications open.
  • Standardize the data at the point of entry. This way it is less chaotic and you will be able to ensure that all information is standardized, leading to fewer errors on entry.
  • Focus on the accuracy of the data. Maintain the value types of data, provide mandatory constraints, and set cross-field validation.
  • Identify and remove duplicates before working with the data. This will lead to an effective data analysis process.
  • Create a set of utility tools/functions/scripts to handle common data cleaning tasks (a minimal pandas sketch follows below).
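
As an illustration of the last point, here is a minimal, hypothetical pandas sketch of reusable cleaning helpers; the column names and rules are assumptions for illustration, not part of the original answer.

```python
import pandas as pd

def standardize_strings(df, columns):
    """Trim whitespace and normalize case so the same value is always entered one way."""
    for col in columns:
        df[col] = df[col].str.strip().str.lower()
    return df

def drop_duplicate_records(df, key_columns):
    """Remove duplicate rows based on the columns that identify a record."""
    return df.drop_duplicates(subset=key_columns)

raw = pd.DataFrame({"email": [" A@x.com", "a@x.com ", "b@y.com"],
                    "amount": [10, 10, 25]})
clean = drop_duplicate_records(standardize_strings(raw, ["email"]), ["email"])
print(clean)   # the second row is dropped once the emails are standardized
```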
  2. What is the difference between data mining and data profiling?

Ans.

Data Mining

  • Refers to the process of identifying patterns in a pre-built database
  • Turns raw data into useful information by evaluating the existing database and large datasets
  • Identifies hidden patterns and searches for new, valuable, and non-trivial knowledge to generate useful information
  • Cannot identify incorrect or inaccurate data values

Data Profiling

  • Analyses raw data from existing datasets
  • Collects statistics or informative summaries about the data
  • Helps evaluate data sets for consistency, uniqueness, and logic
  • Identifies wrong data at the initial stage of data analysis

 

  3. Which data validation methods are used in data analytics?

Ans. The various types of data validation methods used are:

  • Field Level Validation – Validation is done on each field as the user enters the data, to avoid errors caused by human interaction (a minimal sketch of such checks follows after this list).
  • Form Level Validation – In this method, validation is done once the user completes the form, before the information is saved.
  • Data Saving Validation – This type of validation is performed while the actual file or database record is saved. It is usually needed when there are multiple data entry forms.
  • Search Criteria Validation – This type of validation ensures that the search criteria entered by the user return relevant and accurate results.
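
The following is a minimal, hypothetical Python sketch of field-level and cross-field checks; the field names (email, age, dates) and the rules are assumptions made purely for illustration.

```python
import re

def validate_record(record):
    """Field-level and cross-field checks for a single data-entry record (hypothetical rules)."""
    errors = []
    # Field-level: mandatory constraint and value-type check
    if not record.get("email") or not re.match(r"[^@\s]+@[^@\s]+\.[^@\s]+$", record["email"]):
        errors.append("email is missing or malformed")
    if not isinstance(record.get("age"), int) or not (0 < record["age"] < 120):
        errors.append("age must be an integer between 1 and 119")
    # Cross-field: the end date cannot precede the start date (ISO strings compare correctly)
    if record.get("end_date") and record.get("start_date") and record["end_date"] < record["start_date"]:
        errors.append("end_date precedes start_date")
    return errors

print(validate_record({"email": "user@example.com", "age": 34,
                       "start_date": "2023-01-01", "end_date": "2022-12-31"}))
# ['end_date precedes start_date']
```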
  4. Which challenges are usually faced by data analysts? Share your opinion.

Ans. There are various ways to answer this question. Common challenges include badly formatted data, data that is not sufficient to work with, data that clients have supposedly cleaned but actually made worse, not receiving updated data, and factual or data-entry errors.

  5. How often should a data model be retrained?

Ans. A good data analyst would be able to understand the market dynamics and act accordingly to retrain the working data model so that it adjusts to the new environment.

  6. Explain what you do with suspicious or missing data.

Ans. When there is doubt about the data or data is missing:

  • Prepare a validation report that provides information on the suspected data.
  • Have experienced personnel examine it so that its acceptability can be determined.
  • Update invalid data with a validation code.
  • Use the best analysis strategy to handle the missing data, such as simple imputation, a deletion method, or casewise imputation (a minimal pandas sketch follows below).
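
As a hedged illustration of the deletion and simple-imputation strategies mentioned above, a minimal pandas sketch on a made-up DataFrame:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"age": [25, np.nan, 31, 47],
                   "income": [52000, 61000, np.nan, np.nan]})

# Deletion method: drop rows that contain any missing value
dropped = df.dropna()

# Simple imputation: replace missing values with a column statistic (here, the mean)
imputed = df.fillna(df.mean(numeric_only=True))

print(dropped)
print(imputed)
```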
  7. What is the difference between the true positive rate and recall?

Ans. There is no difference; the true positive rate and recall are the same metric, given by the formula:

True Positive Rate = Recall = TP / (TP + FN)

where TP is the number of true positives and FN is the number of false negatives.
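
A quick sanity check in Python, with made-up confusion-matrix counts:

```python
tp, fn = 90, 10          # hypothetical true-positive and false-negative counts
recall = tp / (tp + fn)  # identical to the true positive rate
print(recall)            # 0.9
```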

  8. What is a good data model?

Ans. The criteria that define a good data model are:

  • It is intuitive.
  • Its data can be easily consumed.
  • The data changes in it are scalable.
  • It can evolve and support new business cases.

 

  9. Estimate the number of weddings that take place in a year in India.

Ans. To answer this type of guesstimation question, one should always follow four steps:

Step 1:

Start with the right proxy – here, the right proxy is the total population. We know that India has more than 1 billion people; to be a bit more precise, around 1.2 billion.

Step 2:

Segment and filter – the next step is to find the right segments and filter out the ones that do not apply. You will have a tree-like structure, with branches for each segment and sub-branches that filter each segment further. In this question, we will filter out the population above 35 years of age and below 15 years (rural) or below 20 years (urban).

Step 3:

Always round off the proxy to zero or one decimal place so that the calculation stays simple. Instead of calculating 1488/5, go for 1500/5.

Step 4:

Validate each number using common sense to check whether it is reasonable, then add up all the numbers you arrive at after filtering to get the required guesstimate. For example, we will restrict the guesstimate to first-time marriages only at the end.

Let’s do it:

Total population – 1.2 billion

Two main population segments – Rural (70%) and Urban (30%)

Now, filtering age group and sex ratio:

The average marriage age in rural – 15 to 35 years

The average marriage age in urban – 20 to 35 years

Assuming 65% of the total population is within 0-35 years,

Percentage of the population which has the probability of getting married in the rural area ≈ (35-15)/35*65 ≈ 40%

Percentage of the population which has the probability of getting married in the urban area ≈ (35-20)/35*65 ≈ 30%

Assuming the sex ratio to be 50% male and 50% female,

Total number of marriages in rural area ≈ .70*.40*1.2 billion/2 ≈ 170 million

Considering only first-time marriages in the rural area ≈ 170 million/20 ≈ 8.5 million

Total number of marriages in urban area ≈ .30*.30*1.2 billion/2 ≈ 50 million

Considering only first-time marriages in the urban area ≈ 50 million/15 ≈ 3.3 million

Thus, the total number of marriages in India in a year ≈ 8.5 + 3.3 ≈ 11–12 million
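
The arithmetic above can be reproduced in a few lines of Python; every number is simply the rough assumption stated in the answer, not real demographic data.

```python
population = 1.2e9                   # proxy: total population of India
rural, urban = 0.70, 0.30            # assumed population split
marriageable_rural = 0.40            # share of rural population aged 15-35 (from the filter above)
marriageable_urban = 0.30            # share of urban population aged 20-35 (from the filter above)

rural_marriages = rural * marriageable_rural * population / 2   # divide by 2: one marriage per couple
urban_marriages = urban * marriageable_urban * population / 2

# Divide by the span of marriageable years to keep only this year's first-time marriages
yearly = rural_marriages / 20 + urban_marriages / 15
print(round(yearly / 1e6, 1), "million weddings per year")      # ~12 million
```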

  10. What are the steps involved in a data analytics project?

Ans. This is one of the most commonly asked data analyst interview questions, so go prepared for this one.

The fundamental steps involved in a data analysis project are –

  • Understand the Business
  • Get the data
  • Explore and clean the data
  • Validate the data
  • Implement and track the data sets
  • Make predictions
  • Iterate
  11. What do you do for data preparation?

Ans. Since data preparation is a critical approach to data analytics, the interviewer might be interested in knowing what path you will take up to clean and transform raw data before processing and analysis. As an answer to such data analyst interview questions, you should discuss the model you will be using, along with logical reasoning for it. In addition, you should also discuss how your steps would help you to ensure superior scalability and accelerated data usage.

  12. What are some of the most popular tools used in data analytics?

Ans. The most popular tools used in data analytics are:

  • Tableau
  • Google Fusion Tables
  • Google Search Operators
  • Konstanz Information Miner (KNIME)
  • RapidMiner
  • Solver
  • OpenRefine
  • NodeXL
  • Io
  • Pentaho
  • SQL Server Reporting Services (SSRS)
  • Microsoft data management stack
  13. What are the benefits of using version control?

Ans. The primary benefits of version control are –

  • Enables comparing files, identifying differences, and merging the changes
  • Allows keeping track of application builds by identifying which version is under development, QA, and production
  • Helps to improve the collaborative work culture
  • Keeps different versions and variants of code files secure
  • Allows seeing the changes made in the file’s content
  • Keeps a complete history of the project files in case of central server breakdown
  14. Do you have any idea about the job profile of a data analyst?

Ans. Yes, I have a fair idea of the job responsibilities of a data analyst. Their primary responsibilities are –

  • To work in collaboration with IT, management, and/or data scientist teams to determine organizational goals
  • Dig data from primary and secondary sources
  • Clean the data and discard irrelevant information
  • Perform data analysis and interpret results using standard statistical methodologies
  • Highlight changing trends, correlations, and patterns in complicated data sets
  • Strategize process improvement
  • Ensure clear data visualizations for management
  15. What is a data collection plan?

Ans. A data collection plan is used to collect all the critical data in a system. It covers –

  • Type of data that needs to be collected or gathered
  • Different data sources for analyzing a data set
  16. What is an Affinity Diagram?

Ans. An Affinity Diagram is an analytical tool used to cluster or organize data into subgroups based on their relationships. These data or ideas are mostly generated during discussions or brainstorming sessions and are used to analyze complex issues.

  17. Name some of the essential tools useful for Big Data analytics.

Ans. The important Big Data analytics tools are –

  • NodeXL
  • KNIME
  • Tableau
  • Solver
  • OpenRefine
  • Rattle GUI
  • Qlikview 
  18. What is data visualization?

Ans. In simple terms, data visualization is the graphical representation of information and data. It enables users to view and analyze data more effectively by presenting it as diagrams and charts.

  19. Why should you choose data visualization?

Ans. Since it is easier to view and understand complex data in the form of charts or graphs, the trend of data visualization has picked up rapidly. 

  20. What is metadata?

Ans. Metadata refers to the detailed information about the data system and its contents. It helps to define the type of data or information that will be sorted. 

  21. What are some Python libraries used in Data Analysis?

Ans. Some of the vital Python libraries used in Data Analysis include –

  • Bokeh
  • Matplotlib
  • NumPy
  • Pandas
  • Scikit-learn
  • SciPy
  • Seaborn
  • TensorFlow
  • Keras
  22. Explain the essential steps in the data validation process.

Ans. There are two steps involved in the data validation process, namely, data screening and data verification.

Data Screening: In this step, different algorithms are used to screen the entire data to find any inaccurate values. It is the process of ensuring that the data is clean and ready for analysis.

Data Verification: In the data verification step, the accuracy and quality of source data are checked before using it. Every suspected value is evaluated on various use-cases, and then a final decision is taken on whether the value has to be included in the data or not. Data validation is a form of data cleansing.

  23. Mention some problems that data analysts face while performing data analysis.

Ans. The problems that data analysts face while performing data analysis are:

  • Inconsistent and incomplete data
  • Duplicate entries and spelling mistakes
  • Poorly formatted data file
  • Different value representations and misclassified data
  • Overlapping data
  24. Explain how to deal with multi-source problems.

Ans. To deal with the multi-source problems, one can:

  • Restructure the schemas to accomplish a schema integration
  • Identify similar records and merge them into a single record containing all relevant attributes
  25. Explain KPI, the design of experiments, and the 80/20 rule.

Ans. KPI stands for Key Performance Indicator. It is a measurable metric used to evaluate business performance, and it can be reported through any combination of spreadsheets, reports, or charts about the business process.

Also known as experimental design, the design of experiments is the initial process that is used before data is collected. It is used to split the data, sample, and set up a data set for statistical analysis.

The 80/20 rule means that 80 percent of your income (or results) comes from 20 percent of your clients (or efforts).

  26. What do you mean by the Hadoop Ecosystem?

Ans. The Hadoop Ecosystem is a framework developed by Apache that processes large datasets for applications in a distributed computing environment. It consists of the following Hadoop components:

  • HDFS
  • YARN
  • MapReduce
  • Spark
  • PIG
  • HIVE
  • HBase
  • Oozie
  • Mahout
  • Spark MLlib
  • Apache Drill
  • Zookeeper
  • Flume
  • Sqoop
  • Ambari
  • Solr
  • Lucene

 

  27. What is MapReduce?

Ans. MapReduce is a framework that enables you to write applications that process large data sets by splitting them into subsets, processing each subset on a different server, and then blending the results obtained from each. It consists of two tasks, namely Map and Reduce. The Map task performs filtering and sorting, while the Reduce task performs a summary operation. As the name suggests, the Reduce task always takes place after the Map task.
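
To make the two phases concrete, here is a toy, single-process word count in Python that mimics the Map step (emit key-value pairs) and the Reduce step (summarize per key); it illustrates the idea only and is not Hadoop's actual API.

```python
from collections import defaultdict

documents = ["big data is big", "data analysis needs data"]

# Map phase: emit a (word, 1) pair for every word in every document
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle phase: group the pairs by key
grouped = defaultdict(list)
for word, count in mapped:
    grouped[word].append(count)

# Reduce phase: summarize each group (here, by summing the counts)
reduced = {word: sum(counts) for word, counts in grouped.items()}
print(reduced)   # {'big': 2, 'data': 3, 'is': 1, 'analysis': 1, 'needs': 1}
```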

  28. What is imputation? Explain the different types of imputation techniques.

Ans. Imputation is the process of replacing missing data with substituted values. There are two types of imputation – single and multiple.

Single Imputation: In this, you find a single estimate of the missing value. The following are the single imputation techniques:

Mean imputation: Replace the missing value with the mean of that variable for all other cases.

Hot deck imputation: Identify all the sample subjects who are similar on other variables, then randomly choose one of their values on the missing variable.

Cold deck imputation: It works like hot deck imputation but in a more systematic manner – a systematically chosen value is taken from an individual who has similar values on the other variables.

Regression imputation: Replace the missing value with the predicted value obtained by regressing the missing variable on the other variables.

Stochastic regression: It works like the regression imputation and adds the average regression variance to regression imputation.

Substitution: Impute the value from a new variable that was not selected to be in the sample.

Multiple Imputation: In the Multiple Imputation technique, the values are estimated multiple times.
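
As a hedged illustration of single imputation, here is a minimal scikit-learn sketch using mean imputation; the toy DataFrame and column names are made up.

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

df = pd.DataFrame({"salary": [40000, np.nan, 55000, 61000],
                   "age": [25, 32, np.nan, 41]})

# Mean imputation: each missing value is replaced by the mean of its column
imputer = SimpleImputer(strategy="mean")
filled = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
print(filled)
```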

  29. What is the basic syntax style of writing code in SAS?

Ans. The basic syntax style of writing code in SAS is:

  • Write the DATA statement to name the dataset.
  • Write the INPUT statement to name the variables in the data set.
  • End all statements with a semi-colon.
  • Every SAS program must end with a RUN statement.
  • Use proper spacing to separate the components in a SAS program statement.
  30. What is the ANYDIGIT function in SAS?

Ans. The ANYDIGIT function searches a character string for the first occurrence of a digit. If a digit is found, the function returns its position in the string; if no digit is found, it returns 0.

  31. What is interleaving in SAS?

Ans. In SAS, Interleaving means combining individual sorted SAS data sets into one big sorted data set. Data sets can be interleaved by using a SET statement and a BY statement.

  32. Which questions should you ask the user/client before you create a dashboard?

Ans. Though this depends on the user’s requirements, some of the common questions that I would ask the client before creating a dashboard are –

  • What is the purpose of the dashboard?
  • Should the dashboard be retrospective or real-time?
  • How detailed the dashboard should be?
  • How tech and data-savvy is the end-user?
  • Does the data need to be segmented?
  • Should I explain the dashboard design to you?
  33. What is Clustering? What are the properties of clustering algorithms?

Ans. Clustering is a classification method that is applied to data. Clustering or cluster analysis is the process of grouping a set of objects in such a manner that the objects in the same cluster are more similar to each other than to those in other clusters.

Properties of the clustering algorithm are as follows:

  • Hierarchical or flat
  • Hard and soft
  • Iterative
  • Disjunctive
  34. What is the KNN imputation method?

Ans. KNN (K-nearest neighbor) is an algorithm that is used for matching a point with its closest k neighbors in a multi-dimensional space.
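
scikit-learn also provides a KNN-based imputer; a minimal sketch on a made-up matrix:

```python
import numpy as np
from sklearn.impute import KNNImputer

X = np.array([[1.0, 2.0], [3.0, 4.0], [np.nan, 6.0], [8.0, 8.0]])

# Each missing entry is replaced using the values of its 2 nearest neighbours
imputer = KNNImputer(n_neighbors=2)
print(imputer.fit_transform(X))
```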

  35. Why is KNN used to determine missing numbers?

Ans. KNN is used for missing values under the assumption that a point value can be approximated by the values of the points that are closest to it, based on other variables.

  36. What is the k-means algorithm?

Ans. The k-means algorithm partitions a data set into k clusters such that each cluster is homogeneous and the points within a cluster are close to each other, while the algorithm tries to maintain enough separation between the clusters. Because it is unsupervised, the resulting clusters have no labels.
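
A minimal scikit-learn sketch of k-means on toy 2-D points; the data and the choice of k = 2 are arbitrary, for illustration only.

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[1, 2], [1, 4], [1, 0], [10, 2], [10, 4], [10, 0]])

# Partition the points into k = 2 clusters
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)           # cluster index assigned to each point
print(kmeans.cluster_centers_)  # coordinates of the two centroids
```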

  37. What is the difference between linear regression and logistic regression?

Ans. The differences between linear regression and logistic regression are:

Linear Regression

  • Requires the independent variables to be continuous
  • Based on least-squares estimation
  • Requires at least 5 cases per independent variable
  • Aims at finding the best-fitting straight line, where the distances between the points and the regression line are the errors

Logistic Regression

  • Can have dependent variables with more than two categories
  • Based on maximum likelihood estimation
  • Requires at least 10 events per independent variable
  • Used to predict a binary outcome, so the resultant graph is S-shaped
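
To make the contrast concrete, here is a hedged scikit-learn sketch on made-up data – a continuous target for linear regression and a binary target for logistic regression:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

X = np.array([[1], [2], [3], [4], [5], [6]])

# Linear regression: continuous target, fitted by least squares
y_cont = np.array([1.1, 1.9, 3.2, 3.9, 5.1, 6.0])
lin = LinearRegression().fit(X, y_cont)
print(lin.predict([[7]]))          # a continuous prediction

# Logistic regression: binary target, fitted by maximum likelihood
y_bin = np.array([0, 0, 0, 1, 1, 1])
log = LogisticRegression().fit(X, y_bin)
print(log.predict_proba([[3.5]]))  # S-shaped probability of each class
```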

  38. What is the condition for using a t-test or a z-test?

Ans. A t-test is usually used when we have a sample size of less than 30, while a z-test is used when the sample size is greater than 30.
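
For example, a one-sample t-test on a small sample (n < 30) can be run with SciPy; the sample values and the hypothesized mean below are made up.

```python
import numpy as np
from scipy import stats

# A small sample (n < 30), so a one-sample t-test is appropriate
sample = np.array([12.1, 11.8, 12.5, 12.0, 11.6, 12.3, 12.2, 11.9])

# H0: the population mean equals 12.0
t_stat, p_value = stats.ttest_1samp(sample, popmean=12.0)
print(t_stat, p_value)
```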

  39. What is the difference between standardized and unstandardized coefficients?

Ans. The standardized coefficient is interpreted in terms of standard deviations, while the unstandardized coefficient is measured in the actual units of the variables.

  40. What is the difference between R-squared and adjusted R-squared?

Ans. R-squared measures the proportion of variation in the dependent variables explained by the independent variables.

Adjusted R-squared gives the percentage of variation explained by only those independent variables that actually affect the dependent variable, penalizing the addition of predictors that do not improve the model.
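
The adjustment can be computed directly from the standard formula; the R-squared, sample size, and predictor count below are hypothetical:

```python
# Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1)
r2 = 0.83   # ordinary R-squared (hypothetical)
n = 50      # number of observations
p = 4       # number of independent variables

adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
print(round(adj_r2, 3))   # 0.815 -- always <= R-squared
```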

  41. What are the most popular statistical methods used when analyzing data?

Ans. The most popular statistical methods used in data analytics are –

  • Linear Regression
  • Classification
  • Resampling Methods
  • Subset Selection
  • Shrinkage
  • Dimension Reduction
  • Nonlinear Models
  • Tree-Based Methods
  • Support Vector Machines
  • Unsupervised Learning
  42. What is Collaborative Filtering?

Ans. Collaborative filtering is a technique used by recommender systems to make automatic predictions (filtering) about a user’s interests by collecting preference information from many users.

  43. What is a Pivot Table?

Ans. A Pivot Table is a Microsoft Excel feature used to summarize huge datasets quickly. It sorts, reorganizes, counts, or groups data stored in a database. This data summarization includes sums, averages, or other statistics.
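
The same idea is available outside Excel as well; a minimal pandas sketch with a made-up sales table:

```python
import pandas as pd

sales = pd.DataFrame({"region": ["North", "North", "South", "South"],
                      "product": ["A", "B", "A", "B"],
                      "revenue": [100, 150, 200, 130]})

# Summarize revenue by region (rows) and product (columns), like an Excel pivot table
pivot = pd.pivot_table(sales, values="revenue", index="region",
                       columns="product", aggfunc="sum")
print(pivot)
```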

  44. Name different sections of a Pivot Table.

Ans. A Pivot table has four different sections, which include –

  • Values Area
  • Rows Area
  • Column Area
  • Filter Area
  45. What is Standard Deviation?

Ans. Standard deviation is a widely used measure of the variation in a data set. It is the square root of the variance and measures how far, on average, the data points lie from the mean.
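
A quick illustration with NumPy (the data are arbitrary):

```python
import numpy as np

data = [2, 4, 4, 4, 5, 5, 7, 9]
print(np.std(data))          # population standard deviation -> 2.0
print(np.std(data, ddof=1))  # sample standard deviation (n - 1 in the denominator)
```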

  46. What is a Truth Table?

Ans. A Truth Table is a collection of facts that determines the truth or falsity of a proposition. It works as a complete theorem-prover and is of three types –

  • Accumulative truth Table
  • Photograph truth Table
  • Truthless Fact Table
  47. What is the main difference between overfitting and underfitting?

Ans. Overfitting – In overfitting, a statistical model describes random error or noise instead of the underlying relationship, which occurs when a model is excessively complex. An overfit model has poor predictive performance because it overreacts to minor fluctuations in the training data.

Underfitting – In underfitting, a statistical model is unable to capture the underlying data trend. This type of model also shows poor predictive performance.

  48. Explain time series analysis.

Ans. Time series analysis is a statistical technique that analyzes time-series data to extract meaningful statistics and other characteristics of the data. It can be carried out in two domains, namely the frequency domain and the time domain. Methods such as exponential smoothing and log-linear regression help in forecasting the output of a particular process by analyzing previous data.

  49. What do you mean by a hash table?

Ans. A hash table is a data structure that stores data in an associative manner, as a map of keys to values. It stores data in an array format where each value has its own unique index. A hash table uses a hash function to compute an index into an array of slots, from which the desired value can be fetched.

 

  50. What are collisions in hash tables? How do you deal with them?

Ans. A hash table collision occurs when two different keys are hashed to the same index in a hash table. In simple terms, it happens when two different keys hash to the same value. Collisions, thus, create a problem as two elements cannot be stored in the same slot in an array.

There are two common techniques to handle hash table collisions (the first is sketched after this list):

  1. Separate Chaining technique
  2. Open Addressing
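
As an illustration of separate chaining, here is a toy Python sketch in which each slot holds a list of key-value pairs; it is a teaching aid, not a production implementation.

```python
class ChainedHashTable:
    """Toy hash table that resolves collisions by separate chaining (a list per slot)."""

    def __init__(self, size=8):
        self.buckets = [[] for _ in range(size)]

    def put(self, key, value):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for i, (k, _) in enumerate(bucket):
            if k == key:              # key already present: overwrite its value
                bucket[i] = (key, value)
                return
        bucket.append((key, value))   # colliding keys simply share the bucket's list

    def get(self, key):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for k, v in bucket:
            if k == key:
                return v
        raise KeyError(key)

table = ChainedHashTable(size=2)      # tiny size to force collisions
table.put("alpha", 1)
table.put("beta", 2)
table.put("gamma", 3)
print(table.get("beta"))              # 2, even though several keys share a bucket
```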
  51. What is an n-gram?

Ans. An n-gram is a contiguous sequence of n items from a given sequence of text or speech – a combination of adjacent words or letters of length n found in the source text. An n-gram language model is a probabilistic model that predicts the next item in such a sequence based on the previous n − 1 items.
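
A minimal Python sketch that extracts n-grams from a token list (the sentence is made up):

```python
def ngrams(tokens, n):
    """Return the contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

words = "data analysts love clean data".split()
print(ngrams(words, 2))   # bigrams
# [('data', 'analysts'), ('analysts', 'love'), ('love', 'clean'), ('clean', 'data')]
```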

  52. How do you highlight cells with negative values in Excel?

Ans. We can highlight cells with negative values through conditional formatting, the steps include –

  • Select the required cells
  • Go to the Home tab and click on Conditional Formatting
  • Go to the Highlighted Cell Rules and click on the Less Than option
  • In the dialog box of Less Than, specify the value as 0
  53. How will you handle slow Excel workbooks?

Ans. To speed up slow Excel workbooks, we should –

  • Avoid volatile functions
  • Use manual calculation mode
  • Avoid array formulas; use helper columns instead
  • Keep all the referenced data in a single sheet
  • Convert unused formulas to values
  • Use conditional formatting with caution
  • Use Excel Tables and named ranges
  54. What is Normal Distribution?

Ans. The normal distribution is a probability function that explains how values of a variable are distributed. It is a symmetric distribution where –

  • Maximum observations cluster around the central peak
  • Probabilities for values further away from the mean taper off equally in both directions

For example, the bell curve is a normal distribution.

  55. What is an outlier?

Ans. Any observation that lies at an abnormal distance from other observations is known as an outlier. It indicates either variability in the measurement or an experimental error.

  56. What are the two main methods to detect outliers?

Ans. Box plot method: if a value is more than 1.5 × IQR (interquartile range) above the upper quartile (Q3) or more than 1.5 × IQR below the lower quartile (Q1), it is considered an outlier.

Standard deviation method: if a value is higher or lower than the mean ± (3 × standard deviation), it is considered an outlier.
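
Both methods can be applied in a few lines of NumPy; the data below are made up, with 120 as the planted outlier.

```python
import numpy as np

# 20 typical values plus one obvious outlier (120)
data = np.array([12, 13, 11, 12, 14, 13, 12, 11, 13, 12,
                 14, 12, 13, 11, 12, 13, 12, 14, 11, 12, 120])

# Box plot (IQR) method: flag values beyond 1.5 * IQR from the quartiles
q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1
iqr_outliers = data[(data < q1 - 1.5 * iqr) | (data > q3 + 1.5 * iqr)]

# Standard deviation method: flag values more than 3 standard deviations from the mean
mean, std = data.mean(), data.std()
sd_outliers = data[np.abs(data - mean) > 3 * std]

print(iqr_outliers, sd_outliers)   # both flag 120
```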

  57. Why is ‘naïve Bayes’ naïve?

Ans. It is naïve because it assumes that all variables (features) in the dataset are equally important and independent of one another, which is rarely the case in a real-world scenario.

  58. What is the difference between factor analysis and principal component analysis?

Ans. The aim of principal component analysis is to explain as much of the total variance in the variables as possible, while the aim of factor analysis is to explain the covariances or correlations between the variables.

  59. What is imputation?

Ans. Missing data may lead to some critical issues; hence, imputation is the methodology that can help to avoid pitfalls. It is the process of replacing missing data with substituted values. Imputation helps in preventing list-wise deletion of cases with missing values.

  60. What is A/B Testing?

Ans. A/B testing is a randomized experiment with two variants, A and B. It is mainly a comparative study in which the two variants of a page are shown to random users and their responses are statistically analyzed to determine which variant performs better. A/B testing is also known as split testing.

 
