Theoretical framework
As stated earlier, there are many ways to deal with multi-surrogate dependent variables (DVs) when analysing cause-and-effect relationships in studies that are concerned more with behavioural phenomena than with real-life business problems. This is premised on the fact that business and applied economic problems demand the use of past transaction data, not assumptions or mere conjecture, in setting up relational models that can be used to direct the affairs of an organization. To deal with behavioural phenomena that have more than one DV surrogate, one suggestion raised in a community of online discussants is to run a separate regression equation for each DV surrogate; the general concern, however, is that such treatment is unlikely to capture the interrelationships among the DV surrogates. Moreover, fitting the surrogates' regressions separately is in effect equivalent to formulating a multivariate relationship with a matrix of dependent variables (Transaction Processing Performance Council, 2011). If one is interested in describing a two-block structure, this can be done using partial least squares (PLS) regression. PLS is a regression framework that builds successive (orthogonal) linear combinations of the variables belonging to each block such that their covariance is maximal. It relates two data matrices, X and Y, by a linear multivariate model, but goes beyond traditional regression in that it also models the structure of X and Y; it derives its usefulness from its ability to analyse data with many noisy, collinear, and even incomplete variables in both X and Y (Wold, Sjostrom, & Erikkson, 2001).
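The covariance-maximising idea behind two-block PLS can be illustrated with a minimal sketch of the first NIPALS component. This is an illustrative implementation under simplifying assumptions, not the procedure of any particular package; the toy data and all variable names are hypothetical.

```python
import numpy as np

def pls_first_component(X, Y, n_iter=100, tol=1e-10):
    """One NIPALS round: find weight vectors w (for the X block) and c
    (for the Y block) whose score vectors t = Xw and u = Yc have maximal
    covariance -- the two-block structure described by Wold et al. (2001)."""
    # Centre both blocks so that covariance between scores is meaningful.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    u = Y[:, [0]]                          # start from the first Y column
    for _ in range(n_iter):
        w = X.T @ u / (u.T @ u)            # X-weights
        w /= np.linalg.norm(w)
        t = X @ w                          # X-scores
        c = Y.T @ t / (t.T @ t)            # Y-weights
        u_new = Y @ c / (c.T @ c)          # Y-scores
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return w, c, t, u

# Hypothetical toy data: two DV surrogates driven by three predictors.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
Y = X @ np.array([[1.0, 0.5], [0.5, 1.0], [0.2, 0.1]]) \
    + 0.1 * rng.normal(size=(50, 2))
w, c, t, u = pls_first_component(X, Y)
```

Because the blocks here are strongly related, the first pair of score vectors t and u come out highly correlated, which is what "maximal covariance between linear combinations of the two blocks" means in practice.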
Many social scientists, on the other hand, prefer the GLM multivariate and repeated-measures ANOVA procedures, which fit the multiple equations resulting from the use of more than one dependent variable into a single model so as to obtain a unified analytical result; a number of others prefer more exotic methods, such as the binary response model (BRM), multiple classification analysis (MCA), and canonical correlation, among others, to obtain the same effect. In particular, the GLM multivariate procedure allows the analyst to model the values of multiple dependent scale variables based on their relationships to categorical and scale predictors. In the ordinary GLM there is always a single dependent variable, with a prediction mean error of zero and a variance that can be computed after the GLM is fitted; but when there are multiple dependent variables, each dependent variable has its own prediction error (Helwig, 2017; NCSS, 1989; Steiger, n.d.).
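The point that each dependent variable retains its own prediction error in a multivariate GLM can be sketched by fitting Y = XB + E where Y is a matrix of responses. This is a minimal least-squares illustration with hypothetical toy data, not the full GLM machinery of any statistical package.

```python
import numpy as np

# Multivariate linear model Y = X B + E: one design matrix X predicts a
# matrix Y holding two hypothetical DV surrogates; the single fit yields
# a matrix of coefficients and a separate error variance per DV.
rng = np.random.default_rng(1)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # intercept + 2 predictors
B_true = np.array([[0.5, 1.0],
                   [1.2, -0.3],
                   [0.0, 0.8]])
Y = X @ B_true + rng.normal(scale=0.2, size=(n, 2))         # two DV surrogates

B_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)               # one fit, matrix of coefficients
residuals = Y - X @ B_hat
error_var = residuals.var(axis=0, ddof=X.shape[1])          # one prediction-error variance per DV
```

Fitting each column of Y separately would give the same coefficient estimates; what the joint formulation adds is a single model object in which the residuals of all the DV surrogates can be examined together.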
In chemometric analysis, the use of PLS is favoured because of the multiplicity of the inputs and outputs of most chemical processes. Chemometrics is the use of mathematical and statistical methods to improve the understanding of chemical information and to correlate quality parameters or physical properties with analytical instrument data (Bu, 2007). Chemometric analysis is fascinating because it is interdisciplinary and makes extensive use of such tools as principal components analysis (PCA), multivariate statistics, three-pass regression, LPLS regression, latent structure regression, partial least squares structural equation modeling (PLS-SEM), covariance-based structural equation modeling (CB-SEM), and shrinkage structure analysis (Abdi, 2010; Afthanorhan, 2013; Helland, 1990; Kelly & Pruitt, 2015; Lingjaerde & Christophersen, 2000; Saeboa, Almoya, Flatbergb, Aastveita, & Martens, 2008). Chemometrics also employs total least squares (TLS) and Deming regression in analysing multiple dependent variables, for the reasons adduced earlier. TLS is a fitting method that is appropriate when there are errors in both the observation vector and the data matrix (Golub & Van Loan, 1980). Deming regression, on the other hand, is a special case of TLS that allows any number of predictors and complicated error structures to be analysed (Jensen, 2007).
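The distinctive feature of TLS, allowing error in the data matrix A as well as in the observation vector b, can be sketched with the classical SVD solution associated with Golub & Van Loan. Again this is an illustrative sketch on hypothetical noisy data, not a production routine.

```python
import numpy as np

def tls_fit(A, b):
    """Total least squares via the SVD: perturb both the data matrix A
    and the observation vector b as little as possible so that the
    perturbed system is exactly solvable, then return that solution x."""
    n = A.shape[1]
    C = np.hstack([A, b.reshape(-1, 1)])   # augmented matrix [A | b]
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                             # right singular vector of the smallest singular value
    return -v[:n] / v[n]

# Hypothetical data with noise in BOTH the predictors and the response.
rng = np.random.default_rng(2)
x_true = np.array([2.0, -1.0])
A_clean = rng.normal(size=(200, 2))
b = A_clean @ x_true + 0.05 * rng.normal(size=200)
A = A_clean + 0.05 * rng.normal(size=(200, 2))   # measurement error in A itself
x_tls = tls_fit(A, b)
```

Ordinary least squares attributes all error to b and is biased toward zero when A is noisy; the TLS estimate treats the two error sources symmetrically, which is the setting Deming regression generalises with unequal error variances.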
Although most of the analytical tools enunciated above employ techniques that eventually produce the mean values used to fit the final model of the intended research relationship, such techniques may not readily or necessarily suit the secondary nature of the data usually extracted for financial performance analysis. Besides, the average accountant or financial analyst is not expected to acquire the deep knowledge of econometric and statistical analysis necessary to undertake such intricate computations in the absence of computer software. These are, however, the least of the problems.
In accounting and finance, ratios are used to convey performance information to stakeholders in a business setting. These ratios are often relational in nature, meaning that they tell us what fraction, level, or percentage of efficiency was achieved in the use of certain resources; in other cases, the ratios may be used for comparative or differential analysis between one period's transactions and another's, or even to compare the performance of different projects or activities. These are the kinds of information that investors and management need to guide their daily decisions and divisional performance evaluation exercises, not the abstract thinking involved in advanced econometric measurements, which have no legal substance in business and commercial transactions. In addition, the measurements used in arriving at financial and accounting ratios differ markedly from the means, errors, variations, and covariations produced and fitted into most econometric and statistical models of measurement by other social science researchers. Although means and averages can be employed in financial and accounting performance measurement, the way and mode of their employment vary significantly from those used in pure econometric studies.