To Infinity and Beyond: Dealing with the Mathematical Oddities of Ratio Analysis
<p>Among the few things I detest more than reality-based TV shows are
denominators that approach zero. The former are insufferable; the latter are
inscrutable. In the course of formulating or reviewing a disclosure statement
or business plan, restructuring professionals invariably carry out some form of
benchmarking analysis, typically to test the reasonableness of
a debtor's operating projections or capital structure relative to those
of designated peers. Much of this effort boils down to ratio analysis—a
useful tool because financial ratios are unaffected by size discrepancies among
firms or across time.
</p><p>Thanks
to providers of financial statement data in electronic form and products such as
Standard & Poor’s Compustat, the restructuring professional can download
enormous amounts of financial and market-based data directly into spreadsheets
within a matter of seconds. This empowers the analyst to choose dozens of peer
companies from among hundreds of candidates based on user-specified selection
criteria and then perform relevant financial ratio benchmarking. Anyone who
has ever done this work is no doubt familiar with those annoying, exceedingly
large, approaching-infinity calculated values (resulting from extremely small,
approaching-zero denominators) that wreak havoc on summary statistics.
Furthermore, there are other mathematical oddities, such as ratios with
negative values, ratios with positive values resulting from a negative
numerator and denominator, or other calculated ratio values that have no
obvious meaning or interpretation. The analyst must decide how to effectively
deal with all these quirks without compromising the integrity of the analysis.
</p><h3>Near-Zero Denominators</h3>
<p>How
should the analyst best deal with extreme ratio values caused by denominators
approaching zero? Deleting the particular observation or company from the
analysis is the obvious and most tempting option, but may produce summary
statistics that are incomplete or non-representative of the peer group. In
Exhibit 1, we calculated two coverage ratios, EBITDA-to-interest expense and
EBIT-to-interest expense, for a swath of manufacturing companies with issuer
credit ratings of single-A. Companies DNA and COL have negligible
interest expense and, consequently, pull up the average for the entire
group—unfairly so. Removing these two companies from the data sets would
result in coverage ratios of 13.4 and 9.2, respectively—certainly more
realistic values than the unadjusted arithmetic means in Exhibit 1. However, by
omitting these four extreme observations, we are effectively removing two
companies from the group that choose to employ minimal leverage. This seems
somewhat arbitrary, as we are depriving the group of two representative
companies for no other reason than difficulty in interpreting their ratio
values. Generally speaking, deleting an extreme observation is considered an
appropriate measure only when it represents a true outlier—a value that
is wholly inconsistent with other data points, is not representative of the
underlying characteristic and cannot be explained in any logical way. Is there
a meaningful way to include these minimally leveraged companies in the
calculation of the group's coverage ratios without using their distorted
(and distorting) calculated values?
</p><center><img src="/AM/images/journal/03marturnchart1.gif" alt="Exhibit 1: EBITDA-to-interest and EBIT-to-interest coverage ratios for single-A rated manufacturers" height="324" width="600"></center>
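<p>To see the arithmetic in miniature, here is a minimal Python sketch using made-up coverage figures (not the Exhibit 1 data) that shows how a single near-zero denominator drags the group average far away from the rest of the pack:</p>
<pre>
# Hypothetical EBITDA and interest expense figures (not the Exhibit 1 data)
companies = {
    "AAA": (900.0, 75.0),   # (EBITDA, interest expense)
    "BBB": (650.0, 60.0),
    "CCC": (480.0, 45.0),
    "DDD": (700.0, 0.5),    # negligible interest expense -> runaway coverage ratio
}

coverage = {name: ebitda / interest for name, (ebitda, interest) in companies.items()}

group_mean = sum(coverage.values()) / len(coverage)
without_ddd = [r for name, r in coverage.items() if name != "DDD"]

print({k: round(v, 1) for k, v in coverage.items()})  # DDD's ratio is 1400.0
print(round(group_mean, 1))                           # 358.4 -- dragged up by one company
print(round(sum(without_ddd) / len(without_ddd), 1))  # 11.2 -- representative of the rest
</pre>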
<h3>Winsorizing</h3>
<p>The
analyst can "winsorize" these calculated values—that is,
adjust the computed value of the four extreme observations to the next closest
"reasonable value," thereby reining in these runaway values. For
example, the analyst can use Microsoft Excel's <i>If</i> function to create a formula that calculates a ratio but caps the
ratio value at <i>n</i> if the calculated value exceeds
<i>n.</i> In this instance, we could have specified
that the calculated EBITDA coverage ratio not exceed, say, 25. This ensures
that our two companies with extreme values are reasonably represented in the
group. Lastly, we could have extended this formula to companies in the group
with no leverage at all—that is, zero interest expense. (There was one in
our group, HDI, whose calculated coverage ratios in Exhibit 1—#DIV/0! due
to the zero denominator—were omitted.) Whether to delete or winsorize,
particularly in the last instance, is a judgment call by the analyst: To omit
minimally leveraged and unleveraged companies from our ratio calculations (due
to near-zero or zero denominators) would be to overlook those companies with
the most conservative capital structures, but imposing a subjective adjustment
to a calculated financial ratio whose computed value cannot be easily
interpreted might appear too manipulative. (There are statistical software
packages that winsorize a data set more rigidly, such as by taking those
observations in the bottom and top deciles and changing their computed values
to the decile values immediately above and below them, respectively.) If too
many data points in a data set require winsorizing due to denominator issues,
the analyst should consider an alternative ratio that measures a similar
characteristic. In this case, EBITDAR-to-Fixed Charges would likely have been a
fine substitute since broadening the definition of the denominator to include rent
expense lessens the likelihood that it will contain a zero or near-zero value
for any company.
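</p><p>For analysts who script this step rather than build it cell by cell in Excel, here is a minimal Python sketch of the same capping logic (the figures, the cap of 25 and the helper name are illustrative only):
</p><pre>
# Winsorizing by capping: the same idea as an Excel IF formula that returns
# the lesser of the calculated ratio and a chosen ceiling. Figures are hypothetical.
CAP = 25.0  # ceiling chosen by the analyst; 25x is the illustrative value from the text

def winsorized_coverage(ebitda, interest, cap=CAP):
    """Return EBITDA-to-interest coverage, but never more than cap.
    Companies with zero interest expense get the cap instead of a #DIV/0! error."""
    if interest == 0:
        return cap
    return min(ebitda / interest, cap)

# Two ordinary companies, one near-zero denominator and one zero denominator
observations = [(900.0, 75.0), (650.0, 60.0), (700.0, 0.5), (550.0, 0.0)]
print([winsorized_coverage(e, i) for e, i in observations])
# -> [12.0, 10.833333333333334, 25.0, 25.0]
</pre><p>The decile-based winsorization mentioned in the parenthetical follows the same logic, except that the floor and cap come from the data set's own decile values rather than from the analyst's judgment.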
</p><h3>Trimming the Data Set</h3>
<p>An
alternative to winsorizing individual data points in order to control the
impact of extreme values is to trim the data set. Trimming a data set involves
omitting <i>n</i> percent of the calculated values and then computing the mean
of the remaining (100-<i>n</i>) percent of the data set. For example, if the analyst decides an
appropriate trim percentage (<i>n</i>) is 20 percent
for a data set consisting of 200 data points, then 40 data points are omitted
from the set—the top 20 and bottom 20 calculated ratio values—and
the mean of the remaining 160 values is computed. The TRIMMEAN calculation is a
standard Excel function. Trimming a data set with the TRIMMEAN function
effectively removes the most extreme values <i>at each end of a data set</i> when calculating summary statistics. It spares the analyst the
bother of having to explicitly scan data sets and delete individual data points
or companies from the sets. If we apply the TRIMMEAN function to our two
coverage ratios in Exhibit 1, we get ratio values of 17.2 and 12.5,
respectively, for the group. Any time the analyst uses Excel's AVERAGE
function, the TRIMMEAN function should be run on the data set as well, and the
analyst should note the degree to which these two averages agree or diverge.
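</p><p>A minimal Python sketch of the same trimming logic, using 200 randomly generated (and therefore hypothetical) ratio values, mirrors what TRIMMEAN does:
</p><pre>
import random
from statistics import mean

def trimmed_mean(values, trim_pct):
    """Rough equivalent of Excel's TRIMMEAN: drop trim_pct of the observations
    in total (half from each end of the sorted data), then average the rest."""
    drop_each_end = int(len(values) * trim_pct) // 2
    ordered = sorted(values)
    kept = ordered[drop_each_end: len(ordered) - drop_each_end]
    return mean(kept)

# 200 hypothetical ratio values; with trim_pct = 0.20, the top 20 and bottom 20 are dropped
random.seed(1)
sample = [random.uniform(2, 30) for _ in range(200)]
print(round(mean(sample), 1))                # plain average of all 200 values
print(round(trimmed_mean(sample, 0.20), 1))  # mean of the middle 160 values
</pre><p>Running the plain average and the trimmed mean side by side, as suggested above, is a quick gauge of how much the extreme values are driving the summary statistic.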
</p><h3>Medians</h3>
<p>The
median is probably the most common, if simplistic, summary statistic used to
manage the impact of extreme values on a data set. It is simply the middle value
in a data set and is completely unaffected by extreme values. The median may be
a better indicator of central tendency than the arithmetic average, even if
only a couple of extreme values remain in a data set. The median values of our
two coverage ratios were 12.7 and 7.3, respectively, in each case smaller than
the adjusted arithmetic average (after deleting the extreme values) and the
trimmed mean. In its periodic reports on key industrial financial ratios,
S&P presents only median ratio values for each debt-rating category.
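</p><p>A short, hypothetical illustration of that insensitivity:
</p><pre>
from statistics import mean, median

ratios = [7.1, 9.8, 11.5, 12.7, 13.9]                    # hypothetical coverage ratios
print(round(mean(ratios), 1), round(median(ratios), 1))  # 11.0 11.5

ratios.append(310.0)                                     # one near-zero-denominator observation
print(round(mean(ratios), 1), round(median(ratios), 1))  # 60.8 12.1 -- the median barely moves
</pre><p>That robustness is what makes the median such a common, if blunt, summary statistic for peer groups.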
</p><h3>Negative Ratios and Other Oddities</h3>
<p>Another
common problem encountered in ratio analysis is how to deal with negative-value
ratios. We see in Exhibit 2 that TXN's debt-to-EBIT ratio
is a negative value. Deleting a negative data point is the most common remedy,
but once again, this should not be done automatically. First, try to make sure
the figure in question is "clean." In this instance, the difference
between TXN's EBITDA and EBIT values is unusually large. Perhaps goodwill
was deemed impaired and written off, or some other non-cash, non-recurring
charge hit the P&L. The analyst is encouraged to investigate these types of
discrepancies and normalize the financial statement data if the information
required to do so is available. Retrieving financial statement data for several
surrounding quarters allows the analyst to eyeball numbers, establish some
informal "range of normalcy" and quickly spot suspicious figures.
However, this data-scrubbing might be impractical or overly time-consuming if a
data set comprises dozens of companies. When using ratios that require
income-statement data, it's best to use trailing four-quarter P&L
data rather than quarterly data, so as to remove the impact of seasonality on
the computed ratio values.
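</p><p>The trailing four-quarter adjustment is just a rolling sum; here is a minimal Python sketch with hypothetical quarterly EBIT figures:
</p><pre>
# Trailing four-quarter (LTM) totals from hypothetical, strongly seasonal quarterly EBIT
quarterly_ebit = [40.0, 95.0, 30.0, 55.0, 42.0, 101.0, 28.0, 60.0]

ltm_ebit = [
    sum(quarterly_ebit[i - 3: i + 1])  # most recent quarter plus the prior three
    for i in range(3, len(quarterly_ebit))
]
print(ltm_ebit)  # [220.0, 222.0, 228.0, 226.0, 231.0] -- the seasonal swings wash out
</pre><p>Ratios built on these trailing totals are far less sensitive to which quarter happens to be the most recent one.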
</p><center><img src="/AM/images/journal/03marturnchart2.gif" alt="Exhibit 2: Debt-to-EBIT ratios for the peer group" height="289" width="600"></center>
<p>For
certain ratios, a negative value may be an accurate (albeit undesirable)
measurement of the characteristic of interest. For example, negative operating
margins and the resulting negative returns on equity are not highly unusual
observations for distressed companies or industries, and are perfectly
explainable within the conceptual definition of the ratio. While negative
values for these two ratios cannot persist indefinitely without eventually
resulting in financial ruination, they can endure for several quarters and
should be left intact within a data set if they are reflective of underlying
business conditions during that time period. For other financial ratios, a
negative value has no discernible meaning, and the data point should be deleted.
If return on equity were negative due to negative shareholders' equity
(as opposed to a net loss), then this ratio value would have no obvious
interpretation and should be deleted from the data set. As with ratio values
involving zero and near-zero denominators, it might be preferable to identify
an alternative ratio that measures a related characteristic but avoids the math
quirk, such as return on total assets instead of return on equity.
</p><p>Unfortunately,
it isn't always obvious whether a negative value is acceptable for a
particular ratio. The analyst must first discern whether a negative value
is consistent or inconsistent with the natural direction of the ratio. For
example, the larger the debt-to-EBIT ratio, the more leveraged a firm is considered
to be. However, by including a company with negative EBIT, as we did with TXN
in Exhibit 2, we (improperly) reduced the group's average leverage ratio.
By this logic, a group that contains some firms with operating losses would
have a lower debt-to-EBIT ratio than if those firms had operating income, <i>ceteris
paribus</i>. This conclusion is counterintuitive and nonsensical. Therefore, any
negative-value data points should be excluded from the group for this particular
ratio. (Winsorizing these negative data points
would be a large and subjective adjustment here.) Conversely, the analyst might
decide to include a negative value data point for the coverage ratio
EBIT-to-interest expense, since such a reading is not inconsistent with the
normal direction of the ratio (<i>i.e.,</i> smaller is
"worse" and negative means smaller). In Exhibit 1, leaving
TXN's negative EBIT-to-interest expense ratio value in the data set does
not distort the group average as its negative debt-to-EBIT ratio value does in
Exhibit 2.
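</p><p>A sketch, with hypothetical debt and EBIT figures, of how a single loss-making firm drags the group's debt-to-EBIT average the wrong way, and of the simple filter that excludes it:
</p><pre>
from statistics import mean

# Hypothetical (total debt, EBIT) pairs; the last firm has an operating loss
firms = [(1200.0, 400.0), (900.0, 250.0), (1500.0, 300.0), (800.0, -200.0)]

all_ratios = [debt / ebit for debt, ebit in firms]
clean_ratios = [debt / ebit for debt, ebit in firms if ebit > 0]  # drop negative-EBIT firms

print(round(mean(all_ratios), 2))    # 1.9  -- the loss-maker lowers apparent leverage
print(round(mean(clean_ratios), 2))  # 3.87 -- averaged only where the ratio is meaningful
</pre><p>As noted above, winsorizing these negative points instead would be a large and subjective adjustment, so exclusion is the cleaner remedy for this particular ratio.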
</p><p>Another
mathematical subtlety is a positive ratio caused by two negative numbers, such
as a positive return on equity resulting from a net loss and negative
shareholders' equity. Without question, it would be improper to allow
this calculation to remain in the group. The trick here is simply to identify
these instances, which are easy to miss if the data service provider's
software calculates the ratio directly. As a general rule, it's best to
download the raw financial statement data that underlie a ratio and scan them
to ensure that two negatives don't inadvertently produce a positive-value
ratio that stays in the data set. Excel's MAX and MIN functions are
extremely helpful in quickly locating oddball numbers within a large data
series.
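</p><p>A minimal screening sketch, with hypothetical figures, of the two-negatives check and the MAX/MIN-style scan described above:
</p><pre>
# Screen the raw numerator and denominator before trusting a pre-computed ratio.
# Hypothetical rows of (company, net income, shareholders' equity).
rows = [
    ("AAA", 120.0, 800.0),
    ("BBB", -45.0, 300.0),
    ("CCC", -60.0, -150.0),  # net loss AND negative equity -> deceptively positive ROE
]

for name, numerator, denominator in rows:
    if numerator < 0 and denominator < 0:
        print(f"{name}: numerator and denominator are both negative; drop this ratio")

# In the spirit of Excel's MAX and MIN check, scan the extremes of each raw series
incomes = [n for _, n, _ in rows]
equities = [e for _, _, e in rows]
print(max(incomes), min(incomes), max(equities), min(equities))
</pre><p>Once flagged, these rows can be removed before any summary statistics are run.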
</p><p>Hopefully
it is clear by now that ratio analysis, when carried out thoughtfully, can be
tedious work that requires lots of attention to detail. The second-worst sin
the analyst could commit in this exercise (first place goes to deliberate
selection bias—that is, picking a sample that will produce a desired
outcome) is to ignore the insidious subtleties of working with fractions. With
ratio analysis, the formulas may all be fine, strictly speaking, but the
conclusions can still be way off the mark.