
Variable Overhead

Overheads are the non-labour expenditure that is required for a business to function.

Variable overheads refer to those expenses that vary from month to month depending on sales and other factors such as promotional activity, seasonal changes, and changes in the prices of supplies and services.

Absorption costing calculates the cost of a product or an enterprise based on both the direct costs and the indirect costs (overheads) involved. Methods of absorption costing include the direct labour cost percentage rate, direct material cost percentage rate, labour hour rate, prime cost percentage rate and machine hour rate.
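As an illustration of one of these methods, the machine hour rate, the short sketch below works through the arithmetic; all figures are hypothetical and chosen only to show the calculation.

```python
# Minimal sketch of absorption costing using the machine hour rate method.
# All figures are hypothetical and used only to illustrate the arithmetic.
budgeted_overheads = 120_000.0    # total budgeted production overheads for the period
budgeted_machine_hours = 8_000.0  # machine hours expected for the period

# Overhead absorbed per machine hour worked.
machine_hour_rate = budgeted_overheads / budgeted_machine_hours

# Overhead absorbed by one unit that takes 2.5 machine hours to produce.
machine_hours_per_unit = 2.5
overhead_per_unit = machine_hour_rate * machine_hours_per_unit

print(f"Machine hour rate: {machine_hour_rate:.2f} per hour")
print(f"Overhead absorbed per unit: {overhead_per_unit:.2f}")
```

The same structure applies to the other methods; only the denominator (labour cost, material cost, labour hours or prime cost) changes.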

What is Data Mining?

Data mining is the process of extracting relevant information from a large dataset. It helps discover new, accurate and useful patterns in the data and turn them into relevant information for the organization or individual that requires it.

Key features of data mining include:

- Predicting patterns automatically based on trend and behaviour analysis.
- Predicting possible outcomes.
- Helping to create decision-oriented information.
- Focusing on large datasets and databases for analysis.
- Clustering based on findings, producing a visually documented group of facts that were previously hidden.

How does data mining work?

The first step of the data mining process is collecting the data and loading it into a data warehouse. Next, the data is stored and managed on cloud or in-house servers. Business analysts, data miners, IT professionals or the management team then extract the data from these sources and decide how they want to organize it. The application software sorts the data based on the results the user wants. In the final step, the user presents the data in a presentable format, such as a graph or a table.

Image Source: © Kalkine Group 2020

What is the process of data mining?

Multiple processes are involved before mining itself happens. These include:

Business research: Before beginning the data mining process, we must have a complete understanding of the business problem, the business objectives, the resources available and the existing scenario, so that a detailed data mining plan can be created to meet the goals set by the business.

Data quality checks: Once the data is collected, it must be checked so that there are no blockages in the data integration process. Quality assurance helps to detect core irregularities in the data, such as missing data interpolation.

Data cleaning: A vital process, data cleaning consumes a considerable amount of time in the selection, formatting and anonymization of data.

Data transformation: Once data cleaning is complete, the next process is data transformation. It comprises five stages: data smoothing, data summary, data generalization, data normalization and data attribute construction (a short sketch follows this list).

Data modelling: In this process, several mathematical models are applied to the dataset.
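As a rough illustration of the data transformation stage described above, the sketch below applies simple smoothing and normalization to a toy dataset, assuming pandas is available; the column names and values are made up for the example.

```python
# Minimal sketch of the data transformation stage (smoothing + normalization).
# The column names and values are hypothetical, used only for illustration.
import pandas as pd

raw = pd.DataFrame({
    "monthly_sales": [12000, 18500, 9500, 22000],
    "store_visits": [340, 510, 280, 620],
})

# Data smoothing: a simple rolling mean to dampen month-to-month noise.
smoothed = raw.rolling(window=2, min_periods=1).mean()

# Data normalization: rescale every column to the 0-1 range so attributes are comparable.
normalised = (smoothed - smoothed.min()) / (smoothed.max() - smoothed.min())

print(normalised)
```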
What are the techniques of data mining?

Association: Association (or the relation technique) is the most widely used data mining technique. It uses transactions and the relationships between items to discover patterns. Association is used for market basket analysis, which identifies the products that customers buy together. A common example is a department store placing goods that customers generally buy together, such as bread, butter, jam and eggs, close to each other (a small sketch of this idea appears after the tools list below).

Clustering: The clustering technique involves creating meaningful groups of objects that share common characteristics. An example is placing books in a library so that books of a similar category sit on the same shelf.

Classification: As the name suggests, the classification technique helps the user classify each variable in the dataset into pre-defined groups and classes. It uses linear programming, statistics, decision trees and artificial neural networks. Through the classification technique, software can be developed that classifies data into different classes.

Prediction: Prediction techniques help to identify the relationship between dependent and independent variables. Based on past sales data, a business can use this technique to estimate how it will perform in the future, for example whether it is likely to make a profit.

Sequential pattern: In this technique, transaction data is used to identify similar trends, patterns and events over a period. An example is a department store pulling historical sales data to identify the items that customers purchase together at different times of the year.

Applications of data mining

Data mining techniques find applications across a broad range of industries, including:

- Healthcare
- Education
- Customer Relationship Management
- Manufacturing
- Market Basket Analysis
- Finance and Banking
- Insurance
- Fraud Detection
- Monitoring
- Pattern Classification

Data Mining Tools

Data mining aims to find the hidden, valid and potentially useful patterns in a large dataset, and several tools are available in the market to help. Ten of the most widely used data mining tools are:

- SAS Data Mining
- Teradata
- R Programming
- Board
- Dundas
- Inetsoft
- H2O
- Qlik
- RapidMiner
- Oracle BI
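As an illustration of the association (market basket) technique described earlier, the minimal sketch below counts which items appear together in a set of invented transaction baskets; it uses only the Python standard library and the items are purely illustrative.

```python
from itertools import combinations
from collections import Counter

# Hypothetical transaction data: each list is one customer's basket.
transactions = [
    ["bread", "butter", "jam"],
    ["bread", "butter", "eggs"],
    ["bread", "jam"],
    ["butter", "eggs"],
    ["bread", "butter", "jam", "eggs"],
]

# Count how often each pair of items appears in the same basket.
pair_counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(set(basket)), 2):
        pair_counts[pair] += 1

# Report the pairs most frequently bought together (the core of market basket analysis).
for pair, count in pair_counts.most_common(3):
    support = count / len(transactions)
    print(f"{pair[0]} + {pair[1]}: together in {count} of {len(transactions)} baskets (support {support:.0%})")
```

In practice the same idea is scaled up with dedicated association-rule algorithms, but the pair counts above capture the underlying intuition.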

The high-low method is a simple procedure for estimating the variable cost rate and the total fixed costs that make up a mixed cost.
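The sketch below applies the high-low method to hypothetical monthly activity and cost figures; the numbers are invented solely to illustrate the calculation.

```python
# Minimal sketch of the high-low method; the (activity, cost) pairs are made up for illustration.
observations = [
    (1200, 26_000),  # (machine hours, total mixed cost) for each month
    (900, 21_500),
    (1500, 30_500),
    (1100, 24_800),
]

high = max(observations, key=lambda point: point[0])  # month with the highest activity
low = min(observations, key=lambda point: point[0])   # month with the lowest activity

# Variable rate = change in cost / change in activity between the high and low points.
variable_rate = (high[1] - low[1]) / (high[0] - low[0])

# Fixed cost = total cost at either point minus its variable portion.
fixed_cost = high[1] - variable_rate * high[0]

print(f"Variable cost per machine hour: {variable_rate:.2f}")
print(f"Estimated fixed cost per month: {fixed_cost:.2f}")
```

Applying the same variable rate at the low point should return the same fixed cost, which is a quick sanity check on the data.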

What is the Capital Asset Pricing Model (CAPM)?

The capital asset pricing model (CAPM) calculates the expected rate of return on an asset by considering the risk associated with it relative to a theoretical risk-free asset. CAPM builds on modern portfolio theory, which evaluates the relationship between the expected return and the risk associated with securities through measures of correlation.

The idea behind the model is the risk-reward relationship: riskier investments should offer scope for greater gains, called a risk premium, which denotes a rate of return higher than the risk-free rate. The model therefore helps investors add assets to build a well-diversified portfolio, taking into account both systematic risk, or market risk (associated with the overall market), and specific risk (related to the individual asset).

What are the Components of the CAPM formula?

The CAPM takes into account the time value of money and the expected risk to estimate the expected return, which is calculated as:

Expected Return (Ri) = Risk-Free Rate (Rf) + Beta (β) × Market Risk Premium (Rm − Rf)

Expected Return (Ri)

The expected return of an investment, after considering various market-dependent variables, highlights the potential payback an investor would achieve in the long term through a particular security or investment.

Risk-Free Rate (Rf)

The risk-free rate of return is a theoretical concept describing the return offered by a security with zero risk. In real terms, the risk-free rate is generally calculated by deducting inflation from the yield on Treasury bonds for a specific period (typically ten years).

Beta

Beta gauges the volatility of a security's returns relative to the overall market. Securities with a beta greater than 1 fluctuate more than the market average, while securities with a beta less than 1 are less volatile than the market. A beta of 1 represents a perfect correlation with market risk, while -1 indicates a perfect negative correlation with the market.

Market Risk Premium

The market risk premium is the return earned in excess of what a zero-risk security would provide, compensating the investor for the additional risk taken on a particular investment. The riskier the security, the higher the market risk premium. Growth stocks, or shares of companies with low market capitalisation, offer a comparatively higher rate of return in exchange for elevated risk compared with the shares of large, established companies.
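To show how the components above fit together, here is a minimal sketch of the CAPM calculation in Python; the risk-free rate, beta and market return are hypothetical inputs chosen only to demonstrate the formula.

```python
# Minimal sketch of the CAPM formula using the components described above.
def capm_expected_return(risk_free_rate: float, beta: float, market_return: float) -> float:
    """Expected return = risk-free rate + beta * market risk premium."""
    market_risk_premium = market_return - risk_free_rate
    return risk_free_rate + beta * market_risk_premium

# Hypothetical example: 2% risk-free rate, beta of 1.3, 8% expected market return.
expected = capm_expected_return(risk_free_rate=0.02, beta=1.3, market_return=0.08)
print(f"Expected return: {expected:.2%}")  # 9.80%
```

A beta above 1, as in this example, pushes the expected return above the market's 8% because the security is assumed to be more volatile than the market as a whole.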
What are the assumptions used in the CAPM?

The CAPM represents how financial markets behave and how this affects the returns on investment. However, in analysing the return it relies on several assumptions:

- Asset quantities are fixed, and the securities market is ideally competitive and efficient, with the necessary company and market information disseminated in a timely manner, so all investors have homogeneous information.
- Trading expenses, such as transaction costs and taxes, do not exist.
- The market is perfect, with no restrictions on borrowing or short selling.
- Investors seeking to maximise their returns are rational and risk-averse.
- The risk-free rate remains constant over the discounting period.
- Returns from securities follow a normal distribution.

What are the uses of the CAPM?

Because of these assumptions, the CAPM appears better suited to an ideal world than to real markets, where various imperfections are present. Despite the assumptions it uses in calculating expected returns, the model has several advantages:

- CAPM offers a simplified measure of the cost of equity, guiding investors to evaluate the required return conveniently.
- The concepts and relationships used in the model highlight how market risk, return and volatility determine the overall return.
- CAPM concepts can be used as a toolkit to form a big-picture view of future returns; investors can compare the results against their expectations for better decision-making.
- The model provides a measure for comparing the expected performance of different financial securities, and returns can also be compared with the rest of the market.
- It helps in building a well-diversified portfolio by combining risky stocks in a way that reduces the portfolio's overall volatility relative to its component securities.
