Do Plants Know Math: Unwinding the Story of Plant Spirals, from Leonardo da Vinci to Now, a Review

NB. I was sent this book as a review copy.

From Princeton University Press

Take a pinecone and look at it from above. You will find that there are two ways of following the spirals on the pinecone: one family winds clockwise and the other anticlockwise.

Count the number of spirals in each family, and most often these counts will be consecutive Fibonacci numbers. This spiral formation pattern, known as parastichy, repeats widely across the botanical world and remained beyond human understanding for millennia.
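How such patterns arise is the book's central question, but the basic generative recipe is strikingly short. Here is a minimal sketch of the standard golden-angle model of phyllotaxis (a textbook illustration in Python, not something taken from the book):

```python
# Minimal golden-angle phyllotaxis model: place seed k at angle
# k * 137.5 degrees and radius sqrt(k). Fibonacci families of spirals
# emerge in the resulting point cloud.
import numpy as np
import matplotlib.pyplot as plt

golden_angle = np.pi * (3 - np.sqrt(5))   # about 137.5 degrees, in radians
k = np.arange(1, 500)
theta, r = k * golden_angle, np.sqrt(k)

plt.scatter(r * np.cos(theta), r * np.sin(theta), s=10)
plt.gca().set_aspect("equal")
plt.show()   # the visible clockwise/anticlockwise arm counts are Fibonacci numbers
```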

Though not structured as such, this book feels like a mystery novel, in a way I hadn’t expected but thoroughly enjoyed. The story explores the appearance of Fibonacci numbers in the patterns of many plants, raising the question of how these numbers come about. The title, of course, hints that plants, in displaying mathematical structure, might seem to “know” math. The punchline arrives at the end, but the journey toward it is a beautiful exploration of research spanning the last two thousand years—from the first studies by the ancient Greeks on phyllotaxis (the arrangement of leaves on a plant stem), through China, Rome, India, Italy, and onward into the modern era, where complex systems science, genetics, information theory, computational modeling, and more bring us closer to understanding how spiral formations encode this mathematics.…

By | October 27th, 2024|Book reviews, Reviews, Uncategorized|0 Comments

Thoughts on thesis structure

I have now graduated around 30 Master's and PhD students, and with most of them I have discussed my thoughts on the structure of a thesis and why it matters. It seemed an opportune moment to save myself some time and write this down, so that future students can read it instead of listening to me banging on about a topic that I think is very important, even if others may not (yet) see why.

In high school and university, I had to write reports for science projects. The structure was generally Introduction, Methods, Results, Discussion, and Conclusions, which seemed reasonable at the time, but the flow often felt linear and disjointed. By the time of my PhD thesis, I knew there had to be a literature review, which made sense, but I didn’t deeply appreciate it beyond ensuring the reader understood my work. Now, I see the literature review in a quite different light.…

By | July 17th, 2024|Uncategorized|0 Comments

When Animals Dream: The Hidden World of Animal Consciousness, by David M. Peña-Guzmán – A review

NB. I was sent this book as a review copy.

From Princeton University Press

The links between dreams, consciousness and memory are absolutely fascinating. I had imagined that the insight we could get into animal dreaming was limited to dogs running in their sleep and waking themselves up by banging violently into a wall. However, there is far more research on this topic than I was aware of. The signals that octopuses can give us about their thoughts through their colouration, shape and texture are incredibly rich, and an entire narrative can seemingly be read off from these visual clues while they sleep. David clearly has some serious frustration with researchers who don't want to make the leap to the conclusion that this is really dreaming in the way that we know it, rather than simply the animal running through stereotyped behavioural simulations in an unconscious way.…

By | December 13th, 2023|Book reviews, Reviews, Uncategorized|2 Comments

CRL Task 5: Learning Causal Models

We’ve now come to one of the most vital aspects of this theory – how can we learn causal models? Learning models is often an exceptionally computationally intensive process, so getting this right is crucial. We now develop some mathematical results which guarantee bounds on our learning. We’ll start by discussing the current state of this field in relation to causal inference and reinforcement learning.

This Series

  1. Causal Reinforcement Learning
  2. Preliminaries for CRL
  3. CRL Task 1: Generalised Policy Learning
  4. CRL Task 2: Interventions – When and Where?
  5. CRL Task 3: Counterfactual Decision Making
  6. CRL Task 4: Generalisability and Robustness
  7. Task 5: Learning Causal Models
  8. (Coming soon) Task 6: Causal Imitation Learning
  9. (Coming soon) Wrapping Up: Where To From Here?

Learning Causal Models

Perhaps one of the most computationally difficult processes in the field of causal inference is learning the underlying causal structure by algorithmically identifying cause-effect relationships. In recent years, there has been a surge of interest in learning such relationships in the fields of machine learning and artificial intelligence, though it has been relatively prevalent in the social sciences for many years now (e.g.…
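To make the flavour of this concrete before diving in, here is a minimal sketch of the constraint-based idea behind classical structure-learning algorithms such as PC (my own toy illustration, not an algorithm from this post): conditional independence tests reveal which direct edges a causal graph cannot contain.

```python
# Toy constraint-based causal discovery: data come from the chain
# X -> Y -> Z, so X and Z are strongly correlated marginally but
# independent given Y; a partial-correlation test detects this,
# ruling out a direct X -> Z edge.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=n)
Y = 2.0 * X + rng.normal(size=n)
Z = -1.5 * Y + rng.normal(size=n)

def partial_corr(a, b, c):
    """Correlation between a and b after linearly regressing out c."""
    res_a = a - np.polyval(np.polyfit(c, a, 1), c)
    res_b = b - np.polyval(np.polyfit(c, b, 1), c)
    return np.corrcoef(res_a, res_b)[0, 1]

print(np.corrcoef(X, Z)[0, 1])   # large in magnitude: marginally dependent
print(partial_corr(X, Z, Y))     # near zero: X independent of Z given Y
```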

By | September 19th, 2021|Uncategorized|0 Comments

CRL Task 3: Counterfactual Decision Making

In the previous blog post we discussed some theory of how to select optimal and possibly optimal interventions in a causal framework. For those interested in decision science, this post may be more inspiring. This next task involves applying counterfactual quantities to boost learning performance. This is clearly very important for an RL agent, whose entire learning mechanism is based on interventions in a system. But what if intervention isn't possible? Let's begin!

This Series

  1. Causal Reinforcement Learning
  2. Preliminaries for CRL
  3. CRL Task 1: Generalised Policy Learning
  4. CRL Task 2: Interventions – When and Where?
  5. CRL Task 3: Counterfactual Decision Making
  6. CRL Task 4: Generalisability and Robustness
  7. Task 5: Learning Causal Models
  8. (Coming soon) Task 6: Causal Imitation Learning
  9. (Coming soon) Wrapping Up: Where To From Here?

Counterfactual Decision Making

A key feature of causal inference is its ability to deal with counterfactual queries. Reinforcement learning, by its nature, deals with interventional quantities in a trial-and-error style of learning.…
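To fix ideas, here is a toy counterfactual query worked through the standard abduction-action-prediction recipe (the structural causal model below is my own illustrative choice, not one from this post):

```python
# Toy SCM: X = U_x, Y = 2*X + U_y. We observe (X=1, Y=3) and ask the
# counterfactual: what would Y have been had X been 0?

# Step 1, abduction: infer the latent noise consistent with the evidence.
x_obs, y_obs = 1.0, 3.0
u_y = y_obs - 2.0 * x_obs   # u_y = 1

# Step 2, action: intervene, setting X to its counterfactual value.
x_cf = 0.0

# Step 3, prediction: re-run the mechanism with the inferred noise.
y_cf = 2.0 * x_cf + u_y
print(y_cf)   # 1.0: Y would have been 1
```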

By | July 10th, 2021|English, Level: intermediate, Uncategorized|6 Comments

A challenging limit

This post comes mostly from the YouTube video by BlackPenRedPen found here: https://www.youtube.com/watch?v=89d5f8WUf1Y&t=3s

This in turn comes from Brilliant.com. Details and links can be found in the original video.

In this post we will have a look at a complicated-looking limit that has an interesting solution. Here it is:

\lim_{n \rightarrow \infty} ( \frac{n!}{n^n})^{\frac{1}{n}}

This looks pretty daunting – but we will break the solution down into sections:

  • taking the logarithms and rearranging
  • recognising something familiar
  • finding the numerical value


Step 1: Taking the Logarithm

The first step here is to take the logarithm, a generally useful trick when evaluating limits. First we assign the variable L to the limit (so that we can solve for it at the end). Now let's do some algebra:

L = \lim_{n \rightarrow \infty} ( \frac{n!}{n^n})^{\frac{1}{n}}

\ln(L) = \ln(\lim_{n \rightarrow \infty} ( \frac{n!}{n^n})^{\frac{1}{n}})

Noting that the natural logarithm \ln is a continuous function, we can take the limit outside of the function:

\ln(L) =  \lim_{n \rightarrow \infty} \ln( (\frac{n!}{n^n})^{\frac{1}{n}})

Next we can use the logarithm laws to bring down the exponent:

\ln(L) =  \lim_{n \rightarrow \infty}  \frac{1}{n} \ln(\frac{n!}{n^n})

Alright: now that we have taken the logarithm, step 1 is complete.…
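It is worth sanity-checking numerically where this is heading. The remaining steps (recognising the sum as a Riemann integral) give \ln(L) = \int_0^1 \ln(x) \, dx = -1, so L = 1/e. A few lines of Python (my addition, not from the original video) confirm the convergence:

```python
# Numerical check that (n!/n^n)^(1/n) tends to 1/e. Using
# lgamma(n + 1) = ln(n!) avoids overflowing n! for large n.
import math

def term(n):
    return math.exp((math.lgamma(n + 1) - n * math.log(n)) / n)

for n in (10, 1000, 100000):
    print(n, term(n))
print("1/e =", 1 / math.e)
```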

By | November 29th, 2020|MAM1000, Uncategorized|0 Comments

Parrondo's Paradox

Introduction

In this post we will have a look at Parrondo's paradox. In a paper* entitled “Information Entropy and Parrondo’s Discrete-Time Ratchet”**, the authors demonstrate a situation where, by switching between two losing strategies, we can create a winning strategy.

Setup

The setup to this paradox is as follows:

We have two games that we can play: if we win, we gain 1 unit of wealth; if we lose, it costs us 1 unit of wealth. Game A gives us a payout of 1 with a probability of slightly less than 0.5. Clearly, if we play this game for long enough, we will end up losing.

Game B is a little more complicated in that it is defined with reference to our existing winnings. If our current level of wealth is a multiple of M we play a game where the probability of winning is slightly less than 0.1. If it is not a multiple of M, the probability of winning is slightly less than 0.75.…
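A quick simulation shows the effect. The sketch below assumes the standard parameter values from the literature (M = 3 and a bias of EPS = 0.005); these are my assumptions, since the excerpt only says "slightly less than":

```python
# Simulate games A and B, each losing on its own, and a random mix of
# the two, which (paradoxically) drifts upwards.
import random

EPS, M, STEPS, RUNS = 0.005, 3, 10_000, 200

def play_A(wealth):
    return 1 if random.random() < 0.5 - EPS else -1

def play_B(wealth):
    p = (0.1 - EPS) if wealth % M == 0 else (0.75 - EPS)
    return 1 if random.random() < p else -1

def average_final_wealth(strategy):
    total = 0
    for _ in range(RUNS):
        w = 0
        for _ in range(STEPS):
            w += strategy(w)
        total += w
    return total / RUNS

print("A alone:   ", average_final_wealth(play_A))   # negative on average
print("B alone:   ", average_final_wealth(play_B))   # negative on average
print("random mix:", average_final_wealth(
    lambda w: play_A(w) if random.random() < 0.5 else play_B(w)))  # positive
```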

By | November 11th, 2020|Uncategorized|0 Comments

Basic Reverse Image Search Using an Autoencoder

Introduction

In this post we are going to create a simple reverse image search on the MNIST handwritten image dataset. That is to say, given any image, we want to return the images that look most similar to it. To do this, we will use an autoencoder, trained using TensorFlow 2.

The dataset

The MNIST dataset is a commonly-used machine learning dataset consisting of 28-by-28 images of handwritten digits from 0 to 9. For our purposes, we want our image searcher to return images of the same digit as the query image, i.e. if we input a 3, we want the returned images to all be 3s. However, if we had, say, four 3s and one 2, that mightn't be too bad, considering how similar 2 and 3 look. But if we had three 3s, a 1 and a 7, we might say that the performance is not up to standard.…
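As a taste of where this is going, here is a minimal end-to-end sketch of such a pipeline (the architecture, the code size of 32 and the training settings are my own illustrative choices, not necessarily those used later in the post):

```python
# Train a small dense autoencoder on MNIST, then answer a query by
# returning the training images whose latent codes are nearest to the
# query's code.
import numpy as np
import tensorflow as tf

(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

inputs = tf.keras.Input(shape=(784,))
code = tf.keras.layers.Dense(32, activation="relu")(inputs)      # bottleneck
outputs = tf.keras.layers.Dense(784, activation="sigmoid")(code)
autoencoder = tf.keras.Model(inputs, outputs)
encoder = tf.keras.Model(inputs, code)

autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.fit(x_train, x_train, epochs=5, batch_size=256, verbose=0)

# Reverse image search: embed everything once, then rank by distance.
codes = encoder.predict(x_train, verbose=0)
query = encoder.predict(x_test[:1], verbose=0)
dists = np.linalg.norm(codes - query, axis=1)
print(np.argsort(dists)[:5])   # indices of the 5 most similar images
```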

By | October 21st, 2020|Uncategorized|0 Comments

A simple introduction to causal inference


Introduction

Causal inference is a branch of statistics that is growing in popularity. This is because it allows us to answer questions more directly than other methods do. Usually, we can make inferences about the association or correlation between a variable and an outcome of interest, but these are often subject to outside influences and may not help us answer the questions in which we are most interested.

Causal inference seeks to remedy this by measuring the effect on the outcome (or response variable) that we see when we change another variable (the ‘treatment’). In a sense, we are looking to reproduce the situation that we have when we run a designed experiment (with a ‘treated’ and a ‘control’ group). The goal here is to have groups that are otherwise the same (with regard to factors that might influence the outcome) but where one is ‘treated’ and the other is not.…
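A small simulation (my own toy example, not from this post) illustrates why the naive comparison fails and what adjusting for such factors buys us:

```python
# A confounder (baseline health) drives both treatment uptake and the
# outcome, so the naive treated-vs-control difference overstates the
# true effect of 1.0; comparing within narrow bands of the confounder
# approximately recovers it.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
health = rng.normal(size=n)                                    # confounder
treated = rng.normal(size=n) + health > 0                      # healthier people opt in
outcome = 1.0 * treated + 2.0 * health + rng.normal(size=n)    # true effect = 1.0

print("naive:", outcome[treated].mean() - outcome[~treated].mean())  # well above 1.0

bands = np.digitize(health, np.linspace(-2, 2, 21))
effects, weights = [], []
for b in np.unique(bands):
    m = bands == b
    if treated[m].any() and (~treated[m]).any():
        effects.append(outcome[m & treated].mean() - outcome[m & ~treated].mean())
        weights.append(m.sum())
print("adjusted:", np.average(effects, weights=weights))       # close to 1.0
```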

By | August 20th, 2020|English, Uncategorized|0 Comments

Inverse Reinforcement Learning: Guided Cost Learning and Links to Generative Adversarial Networks

Recap

In the first post we introduced inverse reinforcement learning and stated some results on the characterisation of admissible reward functions (i.e. reward functions that solve the inverse reinforcement learning problem). In the second post we saw a way to proceed with solving such problems, more or less, using a maximum entropy framework, and we encountered two problems:
1. It would be hard to use the method introduced if we did not already know the dynamics of the system, and
2. We have to solve the MDP in the inner loop, which may be an expensive process.

Here, we shall attempt to mitigate the challenges that we have encountered, and we shall give a rather beautiful closing which links concepts in this space of inverse reinforcement learning to ‘general’ machine learning structures, in particular generative adversarial networks.

Inverse Reinforcement Learning with Unknown Dynamics and Possibly Higher Dimensional Spaces

As we saw previously, the maximum entropy inverse reinforcement learning approach proceeds by defining the probability of a certain trajectory under the expert as being,

p(\tau)=\dfrac{1}{Z}e^{R_\psi (\tau)},

where

Z=\int e^{R_\psi(\tau)}d \tau.

We mentioned that this is hard to compute in higher dimensional spaces.…
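To make the difficulty (and the shape of the eventual fix) concrete, here is a toy one-dimensional illustration of the importance-sampling idea that guided cost learning builds on; the reward function and the proposal distribution are my own stand-ins, not anything from the papers discussed:

```python
# Estimate Z = integral of exp(R(tau)) d tau by sampling from a
# tractable proposal q and reweighting, instead of integrating over all
# trajectories: Z ~ (1/N) * sum_i exp(R(tau_i)) / q(tau_i), tau_i ~ q.
import numpy as np

rng = np.random.default_rng(0)

def reward(tau):
    return -tau**2                      # stand-in for R_psi(tau)

taus = rng.normal(size=100_000)         # samples from q = N(0, 1)
q_density = np.exp(-taus**2 / 2) / np.sqrt(2 * np.pi)

Z_hat = np.mean(np.exp(reward(taus)) / q_density)
print(Z_hat)    # true value: integral of exp(-tau^2) = sqrt(pi) ~ 1.7725
```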

By | May 28th, 2020|Uncategorized|0 Comments