
Estimation of the Peak in Quadratic Regression

Problem:  You are running a standard quadratic (polynomial) regression analysis and are specifically interested in the X and Y values at the peak.  Standard regression software typically offers no option for estimating the peak, let alone its standard errors.

Example:  You are studying Growth as a function of Age.  Of particular interest are the Age at which maximum Growth occurs and the Growth value at that peak.

SAS code to generate artificial data and run the analysis is:

data one;
  do Age=1 to 20;
    Growth=95 + 2.7*Age - .3*Age*Age + 5*rannor(22);  /* true curve peaks at Age = 2.7/(2*0.3) = 4.5 */
    output;                                           /* write one observation per Age */
  end;
run;

proc nlin data=one plots=fit;
  parms int=2 lin=1 quad=1;
  model Growth = int + lin*Age + quad*Age*Age;
  estimate 'Age at peak' -lin/(2*quad);
  estimate 'Growth at peak' int + lin*(-lin/(2*quad)) + quad*(-lin/(2*quad))*(-lin/(2*quad));
run;


The standard quadratic regression model, with intercept, linear, and quadratic terms, is coded in Proc NLIN, which can estimate any function of the parameters.  The peak estimates come from standard calculus: set the first derivative to zero to obtain the Age at the peak, then substitute that Age back into the quadratic regression equation to obtain the Growth at the peak.
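In symbols, this is a short derivation matching the ESTIMATE statements above, with int, lin, and quad playing the roles of $b_0$, $b_1$, and $b_2$:

$$
Y = b_0 + b_1 X + b_2 X^2, \qquad
\frac{dY}{dX} = b_1 + 2 b_2 X = 0
\;\Rightarrow\;
X_{\text{peak}} = \frac{-b_1}{2 b_2}, \qquad
Y_{\text{peak}} = b_0 - \frac{b_1^2}{4 b_2}.
$$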
Output from the example is copied below, and you can visually verify that the peak estimates match the fitted curve.  Without a statistically based procedure like this, you would have no standard errors or associated confidence intervals, which help you assess how much confidence to place in the estimated values.  Here we conclude that maximum growth occurs at 5.4 years, and that maximum growth is 97.1 mm.


Additional Estimates

Label            Estimate   Standard Error   DF   t Value   Approx Pr > |t|   Alpha   Approximate Confidence Limits
Age at peak        5.4310           0.7727   17      7.03            <.0001    0.05        3.8007      7.0613
Growth at peak    97.1044           1.7731   17     54.76            <.0001    0.05       93.3634      100.8
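For contrast, here is a minimal sketch (using the same artificial dataset one created above) of an ordinary quadratic fit with PROC GLM.  It reproduces the same fitted curve, but provides no direct estimate of the peak or its standard error, which is exactly the gap the PROC NLIN approach fills:

proc glm data=one;
  model Growth = Age Age*Age;  /* ordinary quadratic fit: curve only, no peak estimate */
run;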
