Description:
How much statistics does a clinician, surgeon or nurse need to know?
This book is an essential handbook for appraising the evidence in a scientific paper, designing research and interpreting its results correctly, guiding our students, and reviewing the work of our colleagues. It is written by a clinician exclusively for fellow clinicians, in their own language rather than in statistical or epidemiological terms.
When clinicians discuss probability, the focus is on how it applies to real patients and their management in a clinical setting. Statistics for Clinicians does not overlook the foundations of statistics, but reviews the techniques specific to medicine with an emphasis on their application. It puts the right tools in readers' hands, including worked examples, guides, and links to online calculators and free software, enabling them to carry out most statistical calculations. The book will therefore be enormously helpful to those working across all fields of medicine, at any stage of their career.
Preface
“Finally, the work is done. Let us look for a statistician to analyze the data.” This everyday, apparently benign, phrase jeopardizes the credibility of any clinical research, for several reasons.
To increase the chance of reaching dependable results, the number of patients needed for the study has to be calculated before it begins, using well-known mathematical equations and with an acceptable probability of finding what the researcher is looking for. Choosing this number empirically is the leading cause of missing a statistically significant result, known as a Type II error or a false-negative study. The question is not just about finding evidence per se, as indicated by a statistically significant P value. It is about evaluating that finding to decide whether it was obtained by a serious researcher who prepared a sufficient sample to detect the evidence, or whether it was just a matter of good luck.
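As a minimal sketch of the kind of pre-study calculation meant here, the snippet below applies the common normal-approximation formula for comparing two proportions; the event rates, alpha and power are hypothetical values chosen only for illustration, not figures taken from the book.

```python
# Sample size per group for comparing two proportions (normal approximation).
# All numbers below are hypothetical and serve only to illustrate the formula.
import math
from scipy.stats import norm

p1, p2 = 0.60, 0.75          # anticipated event rates in the two groups
alpha, power = 0.05, 0.80    # primary risk of error (two-sided) and desired power (1 - beta)

z_alpha = norm.ppf(1 - alpha / 2)   # 1.96 for a two-sided 5% risk
z_beta = norm.ppf(power)            # 0.84 for 80% power

n_per_group = ((z_alpha + z_beta) ** 2
               * (p1 * (1 - p1) + p2 * (1 - p2))
               / (p1 - p2) ** 2)

print(math.ceil(n_per_group))   # 150 patients per group for these inputs
```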
Data are usually analyzed at the end of the study. However, the conditions necessary for data analysis must be verified before data collection. The type of variable, its distribution, and the mathematical form in which it is expressed all have to fit the statistical test used for the analysis. The researcher has to choose between pre-planning, a careful match between the data and the statistical test during the preparation of the study, when everything is still possible, and reckless decision-making at the end, when little can be changed. The statistical test must be planted in fertile land that has been prepared to receive it; doing otherwise will only guarantee a poor crop.
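One condition that is easy to verify in advance is whether a quantitative variable is distributed normally enough for a parametric test. The sketch below, with hypothetical measurements, shows one common way to check; it is an illustration, not the book's own procedure.

```python
# Checking normality of a quantitative variable before choosing a test.
# The values are hypothetical; 0.05 is the conventional significance threshold.
from scipy import stats

values = [5.1, 4.8, 5.6, 5.0, 4.9, 5.3, 5.2, 4.7, 5.4, 5.0]

w_stat, p_value = stats.shapiro(values)   # Shapiro-Wilk test of normality
if p_value > 0.05:
    # Note: a non-significant result does not prove normality, especially in small samples.
    print("No evidence against normality; a parametric test (e.g. a t-test) is tenable.")
else:
    print("Normality is doubtful; consider a rank-based test or a transformation.")
```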
Common knowledge holds that randomization creates comparable groups at the beginning of the study, so that any differences observed by the end of the study can be attributed to the treatment effect. Unfortunately, many researchers do not appreciate that randomization is just an implant that has to be cared for throughout the study. Comparability can easily be lost in various situations, such as breaking the blinding, neglecting patients in the placebo group, or any other condition that favors one of the study groups, usually the treatment group. Concluding that a treatment effect exists when it does not is a false-positive result, known as a Type I error.
The role of statistics does not end with producing P-values and confidence intervals. It must begin by verifying the conditions of application of the statistical tests used to produce those results. A critical step is correct interpretation, which requires a clear understanding of the meaning of the underlying equations. For example, a statistically non-significant difference between the effects of two treatments does not mean that the two treatments are equal, because strict equality does not exist in biology and hence cannot be proven by an experiment.
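A small numerical sketch, using hypothetical summary data, may make this interpretation point concrete: the 95% confidence interval of the difference between two means can include zero (a non-significant result) while remaining far too wide to support any claim of equivalence.

```python
# 95% CI of the difference between two hypothetical group means.
import math
from scipy.stats import t

mean_a, sd_a, n_a = 12.0, 4.0, 20   # hypothetical summary of group A
mean_b, sd_b, n_b = 10.5, 4.5, 20   # hypothetical summary of group B

diff = mean_a - mean_b
se_diff = math.sqrt(sd_a**2 / n_a + sd_b**2 / n_b)   # standard error of the difference
df = n_a + n_b - 2                                   # simple approximation for the degrees of freedom
t_crit = t.ppf(0.975, df)                            # two-sided 95% critical value

low, high = diff - t_crit * se_diff, diff + t_crit * se_diff
print(f"difference = {diff:.1f}, 95% CI {low:.1f} to {high:.1f}")
# The interval spans zero (non-significant) yet also covers differences large
# enough to matter, so it cannot establish that the two treatments are equal.
```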
Moreover, statistical consultation must extend to reviewing the manuscript, to ensure that correct statistical terms are used in the discussion section, and to answering the statistical queries raised by editors and reviewers. Consequently, limiting the statistician’s role to data analysis at the end of the study is simply wrong. The solution is simple: the researcher has to lead a research team that manages the study from “Protocol to Publisher,” with the statistician as a primary, indispensable member.
On the other hand, our understanding of biostatistics has to be clear-cut. Although we do not need to be involved in every mathematical detail, it is dangerous not to understand the fundamental idea, the assumptions, and, most importantly, the correct interpretation of each statistical analysis we use. It is just like prescribing a treatment to a patient without knowing how it works, when it should be used, and what its drawbacks and limitations are. The researcher does not have to be involved in the details of complicated statistical equations any more than a physician needs to go into the depths of every complicated biochemical reaction.
The main barrier is the difficulty of gaining statistical knowledge from textbooks, which is the very reason I wrote this book. My aim is to explain to lay biologists, like me, the basic statistical ideas in our everyday language, without distorting their mathematical and statistical basis. In other words, “Statistics for Clinicians” is not a textbook of biostatistics but an attempt to answer the fundamental question raised by every biologist: how much statistics do I need to know? We need basic statistical knowledge to keep up with the “exploding” medical literature while reading a manuscript or attending a conference. We need it to become more involved, whether as a member of a research team, as a reviewer, or as a member of an evaluating scientific committee.
In this book, I have tried to convey correct statistical reasoning and sound judgment, which are all a biologist needs. All statistical tests are nowadays executed by computer software, which unfortunately does not tell us which test to use. Software may indicate whether the data satisfy the conditions of a test, but it rarely states this clearly for the lay researcher. The large amount of statistical output generated by those packages is sometimes more confusing than informative. Most importantly, the software offers no suggestion on how to interpret the results correctly. My aim is to help fellow biologists know which test can answer a specific research question, to ensure that its conditions of application are verified, to interpret the results correctly, and to report them fully.
“People never learn anything by being told; they have to find out for themselves” (Paulo Coelho). I have included 697 equations; the vast majority can be worked by hand and require no statistical background. In order to understand the output of a test, one must know which inputs were introduced in the first place. For example, a researcher who knows the five primary inputs of sample size calculation will be able to reduce the size of his study by manipulating those inputs. To keep the examples understandable and easy to execute, I insisted on using small ones, sometimes too small to satisfy the conditions of application of some statistical tests. I chose to present these user-friendly examples and, at the same time, to note their limitations clearly. I advise the reader to follow each example carefully to understand this input-output relation; he can then run the analysis in statistical software with confidence and report the results with understanding.
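As an illustration of this input-output relation, the sketch below uses the common normal-approximation formula for comparing two means. The inputs shown (alpha, power, the expected difference, its standard deviation, and a one- or two-sided design) are the usual ones; whether they correspond exactly to the book's five primary inputs is an assumption, and all the numbers are purely illustrative.

```python
# How the usual inputs of a sample size calculation drive the number of
# patients per group (comparing two means, normal approximation).
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80, two_sided=True):
    """Approximate patients per group needed to detect a mean difference `delta`."""
    z_alpha = norm.ppf(1 - alpha / 2) if two_sided else norm.ppf(1 - alpha)
    z_beta = norm.ppf(power)
    return 2 * ((z_alpha + z_beta) * sd / delta) ** 2

# Relaxing the detectable difference shrinks the study; asking for more power enlarges it.
print(round(n_per_group(delta=5, sd=10)))               # about 63 per group
print(round(n_per_group(delta=8, sd=10)))               # about 25 per group
print(round(n_per_group(delta=5, sd=10, power=0.90)))   # about 84 per group
```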
Prof. Ahmed Hassouna
ChD, MCFCV, DU Microsurgery, Biostatistics Diploma (STARC)
Professor of Cardiothoracic Surgery, Ain-Shams University
Cairo, Egypt
Table of contents:
Foreword: The Man and His Dream
Preface: Statistics for Clinicians: How Much Should a Doctor Know?
Acknowledgments: The Payoff
Contents
List of Figures
List of Tables
1 Expressing and Analyzing Variability
Abstract
1.1 Introduction
1.2 Variables: Types, Measurement and Role
1.2.1 Variable Conversion
1.3 Summarizing Data
1.3.1 The Qualitative Variable
1.3.2 The Quantitative Variable
1.3.3 Measures of Central Tendency
1.3.4 Measures of Dispersion
1.3.4.1 The Variance (S²)
1.3.4.2 Standard Deviation (Sd)
1.3.4.3 Standard Error of Mean (Se)
1.3.4.4 Extension to the Qualitative Variable
1.3.4.5 Minimum, Maximum and Interquartile Range
1.4 The Normal Distribution
1.4.1 The Standardized Normal Distribution and the Z Score
1.4.2 The Confidence Interval (CI)
1.4.2.1 The Confidence Interval of Subjects
1.4.2.2 The Confidence Interval of Mean
1.4.2.3 The Confidence Interval of the Small Study
1.4.2.4 The Confidence Interval of a Proportion
1.4.2.5 The Confidence Interval of a Unilateral Versus a Bilateral Study Design
1.4.2.6 The Confidence Interval of the Difference Between Two Means
1.4.2.7 The Confidence Interval of the Difference Between 2 Proportions
1.4.2.9 The Confidence Interval of Variance
1.4.2.10 The Confidence Interval of an Event that Has Never Happened
1.4.3 Verifying Normality
1.4.3.1 Visual Check: The Histogram, Normal Q-Q Plot
1.4.3.2 Calculation of Skewness and Kurtosis
1.4.3.3 Tests of Normality
1.4.4 Normalization of Data
1.5 The P-value
1.5.1 The Primary Risk of Error (α)
1.6 The Null and Alternative Hypothesis
1.6.1 Statistical Significance and the Degree of Significance
1.7 Testing Hypothesis
1.7.1 A Simple Parametric Test
1.7.2 Unilateral Study Design
1.7.4 The Secondary Risk of Error
1.7.4.1 The Power of a Study
1.8 Common Indices of Clinical Outcomes
1.8.1 The Risk and the Odds
1.8.2 The Relative Risks and the Odds Ratio
1.8.2.1 The Two Relative Risks
1.8.2.2 One Odds Ratio
1.8.3 The Hazard Ratio
1.8.3.1 The Hazard Ratio Versus the Relative Risk
1.8.3.2 The Hazard Function in Time to Event Analysis
1.8.4 Relative Risk Increase (RRI) and Relative Risk Reduction (RRR)
1.8.5 Absolute Risk Increase (ARI) and Reduction (ARR)
1.8.6 Number Needed to Treat Benefit (NNTB) and Number Needed to Harm (NNTH)
1.8.7 Calculation of the 95% Confidence Interval and Testing Statistical Significance
1.8.7.1 The 95% CI of the Relative Risk
1.8.7.2 The 95% CI of the Odds Ratio
1.8.7.4 The 95% CI of the Absolute Risk Difference and the Number Needed to Treat
1.9 Diagnostic Accuracy
1.9.1 The Discriminative Measures
1.9.1.1 Sensitivity of the Test (Sn)
1.9.1.2 Specificity of the Test (Sp)
1.9.1.3 Calculation and Interpretation: Choosing the Appropriate Cut-Off Point
1.9.1.4 What Should Be Reported
1.9.2 The Predictive Values
1.9.2.1 Positive Predictive Value (PPV)
1.9.2.2 Negative Predictive Value (NPV)
1.9.2.3 Calculation and Interpretation: The Role of Prevalence
1.9.2.4 What Should Be Reported
1.9.3 The Likelihood Ratios
1.9.3.1 The Positive Likelihood Ratio
1.9.3.2 The Negative Likelihood Ratio
1.9.3.3 Calculation and Interpretation: The Pre and Post-Test Odds
1.9.3.4 What Should Be Reported
1.9.4 Single Indicators of Test Performance
……………..