The (Central) Cauchy distribution

The core of this post comes from Mathematical Statistics and Data Analysis by John A. Rice, which is a useful resource for courses such as UCT’s STA2004F.

Introduction

The Cauchy distribution has a number of interesting properties and is considered a pathological (badly behaved) distribution. What is interesting about it is that we can think about it in a number of different ways*, and we can formulate its probability density function from each of them. This post covers the derivation of the Cauchy distribution as a ratio of independent standard normals and as a special case of the Student’s t-distribution.

Like the normal and t-distributions, the standard form is centred on, and symmetric about, 0. But unlike those distributions, it is known for its very heavy (fat) tails. Whereas you are unlikely to see values far from 0 coming from a normal distribution, this is just not the case for the Cauchy distribution.…
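As a quick illustration of the ratio-of-normals construction and of the heavy tails, here is a minimal simulation sketch (the seed, sample size, and thresholds are my own choices, not from the post):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# A standard Cauchy variable can be simulated as the ratio of two
# independent standard normals.
z1 = rng.standard_normal(n)
z2 = rng.standard_normal(n)
cauchy_samples = z1 / z2

normal_samples = rng.standard_normal(n)

# The heavy tails show up immediately: extreme values are routine for the
# Cauchy samples but essentially never occur for the normal ones.
for k in (5, 20, 100):
    print(f"P(|X| > {k}):",
          f"normal ~ {np.mean(np.abs(normal_samples) > k):.5f},",
          f"Cauchy ~ {np.mean(np.abs(cauchy_samples) > k):.5f}")
```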

By | September 17th, 2019|English, Level: intermediate|0 Comments

p-values (part 3): the meta-distribution of p-values

Introduction

So far we have discussed what p-values are and how they are calculated, as well as how bad experiments can lead to artificially small p-values. The next thing we will look at comes from a paper by N.N. Taleb (1), in which he derives the meta-distribution of p-values, i.e. what range of p-values we might expect if we repeatedly ran an experiment sampling from the same underlying distribution.

The derivations are fairly involved, and both the content and the implications of the results are quite new to me, so please point out and/or discuss any discrepancies or misinterpretations you find.

Thankfully, this video (2) gives an explanation that covers some of what the paper says, as well as some Monte-Carlo simulations. My discussion will focus on some simulations of my own, based on those done in the video.
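To make the idea concrete before getting into the derivations, here is a rough Monte-Carlo sketch in the same spirit (the effect size, sample size, and use of a one-sample t-test are my own illustrative choices, not Taleb’s or the video’s): repeated experiments sampling from one and the same distribution still produce widely varying p-values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

true_mean, sd, n, n_experiments = 0.3, 1.0, 30, 10_000

p_values = []
for _ in range(n_experiments):
    # Each "experiment" samples from the same underlying distribution...
    sample = rng.normal(true_mean, sd, size=n)
    # ...and tests H0: mean = 0 with a one-sample t-test.
    _, p = stats.ttest_1samp(sample, popmean=0.0)
    p_values.append(p)

p_values = np.array(p_values)
print("median p-value:", np.median(p_values))
print("fraction of p-values below 0.05:", np.mean(p_values < 0.05))
print("fraction of p-values above 0.20:", np.mean(p_values > 0.20))
```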

What we are talking about

We have already discussed what p-values mean and how they can go wrong.…

By | September 5th, 2019|English, Level: intermediate|1 Comment

p-values (part 2): p-Hacking, or why drinking red wine is not the same as exercising

What is p-hacking?

You might have heard about a reproducibility problem with scientific studies. Or you might have heard that drinking a glass of red wine every evening is equivalent to an hour’s worth of exercise.

Part of the reason that you might have heard about these things is p-hacking: ‘torturing the data until it confesses’. The reason for doing this is mostly pressure on researchers to find positive results (as these are more likely to be published), but it may also arise from the misapplication of statistical procedures or from bad experimental design.

Some of the content here is based on a more serious video from Veritasium: https://www.youtube.com/watch?v=42QuXLucH3Q. John Oliver has also spoken about this on Last Week Tonight, for those who are interested in some more examples of science that makes its way onto morning talk shows.

p-hacking can be done in a number of ways: basically, anything that is done, either consciously or unconsciously, to produce statistically significant results where there aren’t any.…
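One simple version of this is running many comparisons on pure noise and reporting whichever one happens to clear the 5% threshold. A minimal sketch of that idea (entirely my own illustration, with made-up group sizes):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

n_hypotheses, n = 100, 50
significant = 0
for _ in range(n_hypotheses):
    # Two groups drawn from the SAME distribution: there is no real effect.
    group_a = rng.normal(0.0, 1.0, size=n)
    group_b = rng.normal(0.0, 1.0, size=n)
    _, p = stats.ttest_ind(group_a, group_b)
    if p < 0.05:
        significant += 1

# With 100 tests at the 5% level we expect roughly 5 "discoveries",
# even though every null hypothesis is true by construction.
print("spurious significant results:", significant)
```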

By | September 2nd, 2019|English, Undergraduate|1 Comment

A quick argument for why we don’t accept the null hypothesis

Introduction

When doing hypothesis testing, an often-repeated rule is ‘never accept the null hypothesis’. The reason for this is that we aren’t making probability statements about the true underlying quantities; rather, we are making statements about the observed data, given a hypothesis.

We reject the null hypothesis if the observed data would be unlikely under the null hypothesis. In a sense we are trying to disprove the null hypothesis, and the strongest thing we can say about it is that we fail to reject it.

That is because observing data that is not unlikely, given that a hypothesis is true, does not make that hypothesis true. That is a bit of a mouthful, but basically what we are saying is that if we make some claim about the world and then see some data that does not disprove this claim, we cannot conclude that the claim is true.…
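A tiny simulation makes the asymmetry concrete (my own illustration, with made-up numbers): data generated with a small but nonzero mean will often fail to reject H0: mean = 0, and that failure clearly does not make H0 true.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# The true mean is 0.1, so H0: mean = 0 is false by construction.
sample = rng.normal(loc=0.1, scale=1.0, size=20)

_, p = stats.ttest_1samp(sample, popmean=0.0)
print("p-value:", p)
# With such a small sample the p-value is usually well above 0.05, so we
# fail to reject H0 -- but we have not shown that the mean is 0.
```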

By | August 28th, 2019|English, Level: Simple, Uncategorized, Undergraduate|0 Comments

p-values: an introduction (Part 1)

The starting point

This is the first of (at least) three posts on p-values. p-values are everywhere in statistics, especially in fields that require experimental design.

They are also pretty tricky to get your head around at first, because of the nature of classical (frequentist) statistics. So, to motivate this, I am going to talk about a non-statistical situation that will hopefully give some intuition about how to think when interpreting p-values and doing hypothesis testing.

My New Car

I want to buy a car. So I go down to the second-hand car dealership to get one. I walk around a bit until I find one that I like.

I think to myself: ‘this is a good car’. 

Now because I am at a second-hand car dealership I find it appropriate to gather some data. So I chat to the lady there (looks like a bit of a scammer, but I am here for a deal) about the car.…

By | August 21st, 2019|English, Level: Simple, Undergraduate|0 Comments

R-squared values for linear regression

What we are talking about

Linear regression is a common and useful statistical tool. You will almost certainly have come across it if your studies have presented you with any sort of statistical problem.

The pros of regression are that it is relatively easy to implement and that the relationship between inputs and outputs is linear (it’s in the name, but this simplifies the interpretation of the relationship significantly). On the downside, it relies fairly heavily on the frequentist interpretation of probability (which is a little counterintuitive), and it is very easy to draw erroneous conclusions from different models.

This post will deal with a measure of how good a model is: R^2. First, I will go through what this value means and what it measures. Then, I will discuss an example of how reliance on R^2 is a dangerous game when it comes to linear models.
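For reference, R^2 compares the residual sum of squares of the fitted model to the total sum of squares around the mean of the response. A minimal sketch of that computation on toy data (the data and fitting choices here are my own, not the post’s example):

```python
import numpy as np

# Toy data: y is roughly linear in x, with some noise.
rng = np.random.default_rng(4)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 2.0, size=x.size)

# Ordinary least squares fit of a straight line.
slope, intercept = np.polyfit(x, y, deg=1)
y_hat = slope * x + intercept

# R^2 = 1 - SS_residual / SS_total.
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - np.mean(y)) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print("R^2:", r_squared)
```

An R^2 close to 1 only says that the fitted line tracks this particular response closely; it says nothing about whether a straight line was the right model in the first place.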

What you should know

Firstly, let’s establish a bit of context.…

By | August 18th, 2019|English, Undergraduate|1 Comment

Ten Great Ideas about Chance – By Persi Diaconis & Brian Skyrms, a review

NB. I was sent this book as a review copy.


From Princeton University Press

This book straddles a tricky middle ground, given that it introduces topics from scratch and then goes into some very specific details of each in relatively few pages, before jumping on to the next. On starting to read it, I was skeptical of how this could possibly work, but by the end I believe that I saw the real utility of a book like this. The audience is quite specific, but for that audience it will be a gem.

The book covers a huge range of ideas related to chance, from the underlying mathematics of probability, to the psychology of decision making, the physics of chaos and quantum mechanics, the problems inherent in induction and inference and much more besides.

The book is taken from a long-running course at Stanford which the authors taught for a number of years, and they have tried to condense its most important aspects into a relatively light book.…

By | December 31st, 2017|Uncategorized|1 Comment

The Probability Lifesaver – by Steven J. Miller, a review

NB. I was sent this book as a review copy. In addition, I lent this book to a student studying statistics, as I thought it would be more interesting for them to let me know how much they got out of it. This is the review by Singalakha Menziwa, one of our extremely bright first-year students.


From Princeton University Press

This book gives you all the tools you need to understand chance and the insights of statistics, at both a basic level and at more complex levels. Statistics is not just about substituting into the correct formulae; it requires an understanding of what the numbers mean. Counting rules and statistical inference were two of the topics I struggled with, especially the logic behind statistical inference, but this book provided great insight and explanations regarding these topics, with step-by-step procedures and plenty of interesting exercises. Miller’s goal when writing the book was to introduce students to the material through lots of accurately done, in-depth worked examples, along with some fascinating coding for those who want to get more practical, and to have a lot of conversations not just about why equations and theorems are true, but about why they have the form they do.…

By | October 20th, 2017|Book reviews, Reviews|1 Comment