Standard Regression Error

I just learned something that I didn't know. I happened to be browsing and ...

>I suspect there's lots of stuff you don't know. I remember when ...
Pay attention!
I want to go over regression analysis:
  • We have a bunch of observed values: y1, y2 ... yn.
    Example: n successive daily returns for GE stock.
  • We suspect there may be a relationship between these numbers and another set: x1, x2 ... xn.
    Example: n successive daily returns for the DOW Index.
  • We plot the points (x1, y1), (x2, y2) ... (xn, yn) giving a "scatter plot", like so:
  • We try to find the "best" straight line approximation to the n points.
    That'll depend upon our definition of "best".
  • We consider a line described by the equation: y = α + β x.
    For x = xk, the y-coordinate on the line is: y = α + β xk.
    The observed value, for that particular x-value, is: yk.
  • For each point we calculate the error:
    ek = yk - (α + β xk).
  • We then calculate the sum of the squares of these errors:   e1² + e2² + ... + en² = Σek².
    For sanitary reasons, we'll use the notations:
    Σxk = x1+x2+ ... +xn
    Σxkyk = x1y1+x2y2+ ... +xnyn   etc.
    ... or, sometimes, simply: Σx or Σxy.
  • Our definition of "best" is to choose the 2 parameters α and β so that:
    Σek² = Σ{yk - (α + β xk)}² is a minimum.
  • The resultant "best" line is called the "regression line" and the "best" parameter values are:
    [1]
    α = {(Σx²/n)(Σy/n) - (Σx/n)(Σxy/n)} / {Σx²/n - (Σx/n)²}

    β = {Σxy/n - (Σx/n)(Σy/n)} / {Σx²/n - (Σx/n)²}

  • Recall that the Standard Deviation of a set of numbers such as x1, x2 ... xn is calculated according to:
    [2]
    SD²[x] = (1/n) Σ(xk - M[x])²
    where M[x] = (1/n) Σxk, the Mean (or Average) of the x-values.
  • Further, it can be shown that this is the same as:
    [3]
    SD²[x] = (1/n) Σxk² - M[x]² = Σx²/n - (Σx/n)² = (the Average of the squares) - (the square of the Average)
  • Then the denominators in the magic equations for α and β are just SD²[x].
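If you'd rather let a computer do the arithmetic, here's a minimal Python sketch of the formulas in [1] and [3]. The numbers are made-up (they're not the GE or DOW returns); it just checks that the recipe agrees with an off-the-shelf least-squares fit.

    import numpy as np

    # made-up daily returns, just for illustration (NOT the GE/DOW data)
    x = np.array([0.010, -0.004, 0.007, 0.012, -0.009, 0.003])   # the x-values
    y = np.array([0.008, -0.006, 0.010, 0.015, -0.011, 0.002])   # the y-values

    Mx, My   = x.mean(), y.mean()            # M[x], M[y]
    Mx2, Mxy = (x*x).mean(), (x*y).mean()    # M[x²], M[xy]

    SD2x  = Mx2 - Mx**2                      # [3]: (Average of squares) - (square of Average)
    alpha = (Mx2*My - Mx*Mxy) / SD2x         # [1]: the intercept
    beta  = (Mxy - Mx*My) / SD2x             # [1]: the slope

    slope, intercept = np.polyfit(x, y, 1)   # numpy's own least-squares line
    print(alpha, beta)                       # should match ...
    print(intercept, slope)                  # ... these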
>Okay, but what's that "standard regression error"? When do you intend to ... ?
Patience!
Continuing, we note that:
  • The numerator of the equation for α is: (Mean of x²)(Mean of y) - (Mean of x)(Mean of xy) = M[x²]M[y] - M[x]M[xy]
  • The numerator of the equation for β is: (Mean of xy) - (Mean of x)(Mean of y) = M[xy] - M[x]M[y]
  • By definition, the Covariance of x and y is: COVAR[x,y] = Mean[xy] - Mean[x]Mean[y]
  • Hence we can write:
    [4]
    β = COVAR[x,y] / SD²[x]
  • There's also this thing called the Pearson product-moment Correlation coefficient, defined by:
    [5]
    (Pearson) Correlation = r = COVAR[x,y] / ( SD[x] SD[y] )
Remember those errors? They were calculated as:   ek = yk - (α + β xk).
  • For the regression line, it can be shown that the average squared error is given by:
    [6]
    Error² = Σek² / n = SD²[y] (1 - r²)
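Here's a small Python check of [4], [5] and [6], again with made-up numbers rather than real stock returns. It computes β as COVAR/SD², the Pearson r, and verifies that the average squared error equals SD²[y](1 - r²).

    import numpy as np

    x = np.array([0.010, -0.004, 0.007, 0.012, -0.009, 0.003])   # made-up data
    y = np.array([0.008, -0.006, 0.010, 0.015, -0.011, 0.002])

    covar = (x*y).mean() - x.mean()*y.mean()    # COVAR[x,y] = M[xy] - M[x]M[y]
    SD2x  = (x*x).mean() - x.mean()**2          # SD²[x]
    SD2y  = (y*y).mean() - y.mean()**2          # SD²[y]

    beta  = covar / SD2x                        # [4]
    alpha = y.mean() - beta*x.mean()            # the regression line passes through (M[x], M[y])
    r     = covar / np.sqrt(SD2x*SD2y)          # [5]: Pearson correlation

    e = y - (alpha + beta*x)                    # the errors e_k
    print((e*e).mean(), SD2y*(1 - r**2))        # [6]: the two sides should agree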
>"It can be shown this" and "It can be shown that". Shown ... where?
Check things out here.
Okay, we're almost there!
    [7]
    (Standard Regression Error)² = Σek² / (n-2) = { n / (n-2) } SD²[y] (1 - r²)
>Huh? Isn't that what you got before? Why did you say it's something you didn't know?
I calculated the Error before, but the "standard" error ... that's a horse of a diff'runt hue, eh?

>But it's just that n/(n-2) guy out front. If you had a hundred stock returns, that'd be 100/98 and that's hardly ...
Don't argue. That's what they call the Standard Error. When you take the square root (for 100 daily stock returns), it'll differ from the garden variety Error by just a schnitzel.

For the GE vs DOW example we considered above (with just 21 points) we'd get:
Error = 2.39%   and   Standard Error = 2.51%.

If'n we look at a year's worth of returns (that's 252 points) we'd get:
Error = 2.07%   and   Standard Error = 2.08%.

Note #1:
The Standard Regression Error is available in Excel as:   STEYX(yk-values, xk-values)
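If Excel isn't handy, a little Python function does the same job (made-up returns below, just to exercise it):

    import numpy as np

    def standard_regression_error(x, y):
        """sqrt( Σe²/(n-2) ), the same quantity Excel's STEYX reports."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        slope, intercept = np.polyfit(x, y, 1)   # the regression line
        e = y - (intercept + slope*x)            # the errors e_k
        return np.sqrt((e*e).sum() / (len(x) - 2))

    # made-up daily returns (not the actual GE/DOW numbers)
    dow = [0.004, -0.010, 0.006, 0.012, -0.003, 0.001, 0.008]
    ge  = [0.006, -0.012, 0.004, 0.015, -0.005, 0.000, 0.010]
    print(standard_regression_error(dow, ge))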


Self Regression

Above, we talked about regression between two sets of numbers, xk and yk.
Now we suppose we have a single set of monthly returns for some stock, namely: r1, r2 ... rn.
Indeed, we consider the evolution of some monthly stock prices: P1, P2 ... Pn where Pk+1 = Pk (1+rk).
In particular:
[A]       Pk = P0 (1 + r)^k   where k is the number of months that have passed since the price was P0 (assuming, for the moment, a constant monthly return r).

This monthly chart displays the Value Added Monthly Index (VAMI)  
It's actually XOM (Exxon) stock, month-to-month.

Now consider the logarithm of that VAMI. It'll look like this:  

If the monthly returns were constant, the log[VAMI] graph would be a straight line.
That's because:
[B]       log[Pk] = log[P0] + k log[1+r]   where k is the number of months passed. [B] follows from [A].
See? For constant r, the plot of log[P] vs k is linear.
The slope is log[1+r] which, for smallish values of r, is approximately r itself.

>Huh?
That's because log[1+r] ≈ r for small values of r.
In fact, log[Pk] - log[Pk-1] = log[ Pk/Pk-1 ] is what's used to define the logarithmic return ... and, from [B], that's just log[1+r] ≈ r.
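Here's a tiny Python sketch of [A], [B] and the log-return business, using invented monthly returns (not XOM's). The differences of log[VAMI] come out as the logarithmic returns log[1+r].

    import numpy as np

    r  = np.array([0.012, -0.008, 0.020, 0.004, -0.015, 0.010])  # invented monthly returns
    P0 = 100.0
    P  = P0 * np.cumprod(1 + r)                 # P_{k+1} = P_k (1 + r_k): the VAMI
    logP = np.log(np.concatenate(([P0], P)))    # natural log, as in the text

    print(np.diff(logP))      # log[P_k] - log[P_{k-1}] = log[P_k / P_{k-1}]
    print(np.log(1 + r))      # the same thing, and ≈ r for smallish r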


>Are you using log10?
No, I'll be using the natural log, to base e ... because it's much sexier.

>I have to tell you that the log[VAMI] chart don't look nothing like a straight line.
Uh ... yes, that's true. What we'd like to do is see how close it is to a straight line. To that end we ...
>You get yourself a regression line and calculate the error, right?
Sort of. We plot log[VAMI] vs the number of the month (that's k), getting a scatter plot like so:  
It's XOM (Exxon) over 120 months or 10 years.

The slope of the regression line gives our monthly return. That's 0.0097 ... a "kind-of-average" monthly return of 0.97%.
The deviation of points from the regression line (measured by the Error) tells us how wild and wooly the returns really are.
Large Error means you can't count on getting similar returns.
The Error is then a measure of the risk associated with the stock.
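A minimal Python version of that regression, on simulated (not actual XOM) monthly returns, so the slope and Error will differ from the numbers above:

    import numpy as np

    np.random.seed(1)
    r = np.random.normal(0.008, 0.05, 120)       # 120 simulated monthly returns
    logVAMI = np.cumsum(np.log(1 + r))           # log[VAMI], month by month
    k = np.arange(1, len(r) + 1)                 # the month number

    slope, intercept = np.polyfit(k, logVAMI, 1)
    e = logVAMI - (intercept + slope*k)          # deviations from the regression line

    error     = np.sqrt((e*e).mean())            # garden-variety Error, as in [6]
    std_error = np.sqrt((e*e).sum()/(len(r)-2))  # Standard Regression Error, as in [7]
    print(slope, error, std_error)               # slope ≈ monthly return; the errors ≈ the "risk"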

>Don't tell me! You got Return and you got Risk so you got ...
You got a ratio: Return / Risk.

That sounds like a Sharpe Ratio, right?
[8]
Sharpe Ratio = [ (Average Return) - (risk-free Return) ] / (Standard Deviation of Returns)
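In Python terms (with invented returns and an assumed risk-free rate, just for illustration):

    import numpy as np

    r  = np.array([0.012, -0.008, 0.020, 0.004, -0.015, 0.010])  # invented monthly returns
    rf = 0.003                                                   # assumed monthly risk-free rate
    print((r.mean() - rf) / r.std())                             # [8]: the Sharpe Ratio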

>So you're gonna generate another kind of Sharpe Ratio?
I think there are lots of alternatives to Sharpe ... the Sortino Ratio, for example.
We want to talk about yet another ...


K-Ratio ... what K-Ratio?
motivated by e-mail from Shaun C.

Lars Kestner is an Equity Derivatives Trader and author.
In a book called Quantitative Trading Strategies, he introduced the K-ratio as an alternative to the Sharpe Ratio.
It goes something like this:
  • We generate a scatter plot of log[VAMI] versus (Number of periods).
  • The return per period (days, weeks, months, whatever) is measured by the Slope.
    For example, using monthly returns of XOM stock over 10 years, we'd get this:
  • The deviation of points from the regression line is measured by the Standard Error.
    This Error is a measure of the "risk" associated with the stock.
  • The ratio: Slope / Standard Error may be interpreted as: Return / Risk ... like the Sharpe Ratio.
  • That would give:
    [9]
    k-Ratio = (Slope of logVAMI regression line) / (Standard Regression Error)
Example:
For the 10-year XOM stock, considering 120 monthly returns:

  • The Slope of the logVAMI regression line is 0.0097 (corresponding to a 0.97% monthly return).
  • The (Standard Regression Error) is 0.158.
    The ratio is then 0.0097 / 0.158 = 0.0614.
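Here's a hedged Python sketch of that [9] ratio. It uses simulated returns (not the real XOM data), so don't expect to reproduce the 0.0614.

    import numpy as np

    def k_ratio_9(returns):
        """[9]: (slope of log[VAMI] regression) / (Standard Regression Error)."""
        returns = np.asarray(returns, float)
        logVAMI = np.cumsum(np.log(1 + returns))
        k = np.arange(1, len(returns) + 1)
        slope, intercept = np.polyfit(k, logVAMI, 1)
        e = logVAMI - (intercept + slope*k)
        std_err = np.sqrt((e*e).sum() / (len(returns) - 2))
        return slope / std_err

    np.random.seed(2)
    print(k_ratio_9(np.random.normal(0.0097, 0.05, 120)))   # 120 simulated monthly returns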
>And that's the Kestner K-Ratio?
Uh ... no.
When he first mentioned the K-Ratio he defined it as:
[1]       K-Ratio = (Slope of logVAMI regression line) / [ (Standard Regression Error) sqrt(n) ] where n = number of return periods being considered.
Somebody pointed out some error in his logic and he changed it to:
[2]       K-Ratio = (Slope of logVAMI regression line) / [ (Standard Regression Error) (n) ] where n = number of return periods being considered.

For our 10-year XOM example (where we used 120 monthly returns), we'd get:
[1a]       K-Ratio = 0.0097 / [ 0.158*sqrt(120) ] = 0.00561   ... using definition [1].
[2a]       K-Ratio = 0.0097 / [ 0.158*120 ] = 0.000512   ... using definition [2].

According to Kestner, the [2a] equation (the one that divides by n) is the correct one.
However, neither agrees with our k-ratio, above.
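Just to check the arithmetic on those two numbers, using the slope and Standard Error quoted above:

    from math import sqrt
    slope, se, n = 0.0097, 0.158, 120
    print(slope / (se*sqrt(n)))   # definition [1], with sqrt(n): about 0.00561
    print(slope / (se*n))         # definition [2], with n: about 0.000512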

>So why does Kestner divide by n?
I have no idea ... yet.
However, one would expect that the K-Ratio should be defined so that it'll give (roughly) the same number regardless of whether one uses daily, weekly, or monthly returns.
That's similar to calculating the Standard Deviation of monthly returns, then multiplying by sqrt(12) to annualize.
Annualizing the SD of weekly returns, by multiplying by sqrt(52), gives (roughly) the same number.
That explains why people get all excited about annualizing.
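A quick sanity check of that sqrt-of-time business, with simulated independent daily returns (purely illustrative); the three annualized SDs come out roughly equal:

    import numpy as np

    np.random.seed(3)
    daily = np.random.normal(0.0004, 0.01, 2520)            # ten years of simulated daily returns
    weekly  = (1 + daily).reshape(-1, 5).prod(axis=1) - 1   # compound 5 days into a week
    monthly = (1 + daily).reshape(-1, 21).prod(axis=1) - 1  # compound 21 days into a month

    print(daily.std()*np.sqrt(252), weekly.std()*np.sqrt(52), monthly.std()*np.sqrt(12))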

>So are you excited about annualizing that Kestner ratio?
Yes ... but I'd like to understand why Kestner sticks an "n" in the divisor.
Note that if one uses monthly data in the Sharpe Ratio, then annualizes, that Ratio gets multiplied by sqrt(12).
You'd think that, if one wanted to provide an alternate to the Sharpe ratio, that alternate should have similar properties, eh?

>I think you're confused. Maybe you should get some sleep.
zzzZZZ


Standard Error of the Slope

Okay, let's try to make sense of the Kestner K-Ratio.
Taking the SLOPE of the log[VAMI] regression (against time) as the numerator of the Ratio seems reasonable, 'cause it provides a measure of the monthly returns.
But that denominator!
The denominator should measure, somehow, the volatility of the monthly returns, their deviation from the slope, their lack of consistency over time, their ...

>So what do you suggest?
Well, we started by associating the denominator with the "Standard Error" associated with the regression line.
That Error measures the deviations of the points on the log[VAMI] plot from the regression line.
But we're really interested in the slopes, so maybe we should be measuring the deviations of the monthly returns from that SLOPE.
To this end, let's do this:
  • Consider successive points on the log[VAMI] plot:   (k-1, log[Pk-1]) and (k, log[Pk]).
  • The slope of the line joining these two points is:   log[Pk] - log[Pk-1] = log[Pk/Pk-1].
  • That's our (monthly) logarithmic return, so we take that mini-slope as the measure of the return for the kth month.    
  • To measure the deviations of these mini-returns from the slope of the regression line (that's β), we ...
>We calculate the standard error!!
Excellent idea. We calculate the so-called Standard Error of the Slope.
Let's do that. To make the equations look neater, we'll set yk = log[Pk].
In fact, let's put aside this K-Ratio stuff and just consider an arbitrary scatter plot:   (xk, yk).

>Huh?
Don't worry about it. We'll talk about the Standard Error of the Slope later. In the meantime, we'll use it to calculate another k-Ratio.
Note that: (Standard Error of the Slope)² = {Σ(y - yk)² / (n-2)} / Σ(xk - M[x])², where y = α + β xk is the value on the regression line.
That curious (n-2) reappears. For large n, it's indistinguishable from n.
If we replace (n-2) by n, this is like calculating:
(Standard Error of the Slope)² = (1/n) {(1/n)Σ(y - yk)²} / {(1/n)Σ(xk - M[x])²} = (1/n) (Mean Square Error in the yk) / (Standard Deviation of the xk)².
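Here's that formula in Python, cross-checked against scipy's linregress, whose stderr field reports exactly this Standard Error of the Slope. (Made-up data, as usual.)

    import numpy as np
    from scipy.stats import linregress

    x = np.arange(1.0, 9.0)                                         # 1, 2, ..., 8
    y = np.array([0.02, 0.01, 0.05, 0.04, 0.08, 0.07, 0.09, 0.12])  # made-up y-values

    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (intercept + slope*x)
    n = len(x)

    se_slope = np.sqrt( (resid**2).sum()/(n-2) / ((x - x.mean())**2).sum() )
    print(se_slope, linregress(x, y).stderr)    # the two should agree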

Note #2:
I might mention that statisticians usually regard a set of numbers (like the xk) as a sample taken from some larger universe of x-values.
In attempting to estimate the SD of the entire x-population, they calculate the (squared) SD of the sample values, xk, as:
    Σ(xk - M[x])² / (n-1)   rather than   Σ(xk - M[x])² / n, as we've been doing.
Dividing by (n-1) instead of n is not as popular in financial circles as in statistical circles.
In fact, for large values of n there's little difference and ...
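In numpy terms, that's just the ddof argument:

    import numpy as np
    x = np.array([0.010, -0.004, 0.007, 0.012, -0.009, 0.003])
    print(np.std(x, ddof=0))   # divide by n     (what we've been doing)
    print(np.std(x, ddof=1))   # divide by n-1   (the statisticians' sample SD)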

>Please continue.
We now have another k-Ratio:

    [11]
    k-Ratio = (Slope of logVAMI regression line) / (Standard Error of the Slope)
>Why the lower-case "k"?
Because it's not Kestner's K-ratio. In fact, in [1] and [2], above, we got it wrong.
Kestner did NOT use the (Standard Regression Error) in the denominator.
He used the (Standard Error of the Slope), as in [11]
... but he divided by n.
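Putting the pieces together, here's a hedged sketch that computes both the [11] k-Ratio and a Kestner-style version that also divides by n, as just described. Simulated returns again, so the numbers are only illustrative.

    import numpy as np

    def slope_and_se(returns):
        """Regress log[VAMI] on the period number; return (slope, Standard Error of the Slope)."""
        y = np.cumsum(np.log(1 + np.asarray(returns, float)))   # log[VAMI]
        k = np.arange(1, len(y) + 1)
        slope, intercept = np.polyfit(k, y, 1)
        resid = y - (intercept + slope*k)
        se = np.sqrt((resid**2).sum()/(len(y)-2) / ((k - k.mean())**2).sum())
        return slope, se

    np.random.seed(4)
    r = np.random.normal(0.0097, 0.05, 120)   # 120 simulated monthly returns
    slope, se = slope_and_se(r)
    print(slope/se)                # [11]: slope / (Standard Error of the Slope)
    print(slope/(se*len(r)))       # Kestner-style: the same thing, divided by n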

>So you're getting closer to Kestner, right?
It seems that way.
