
Sampling Algorithms

Published January 17, 2021

There are two popular strategies to sample from a dataset in reinforcement learning. The first is uniform sampling, where each data point has the same probability. The second one is prioritised sampling (Schaul et al., 2015), where each data point has a different probability of getting sampled, e.g. weighted by its training error.

The aim of this post is to outline how to implement these sampling strategies as efficient algorithms.


Uniform Distribution

The default strategy in stochastic gradient descent is sampling batches uniformly. All data points have the same probability.

$$P(x_i) = \frac{1}{n} \quad \text{where } i \in \{0, 1, \dots, n-1\}$$

Here $x_i$ is a data point from our dataset $\mathcal{D} = \{x_0, x_1, \dots, x_{n-1}\}$.

Our goal here is to sample $k$ items from $\mathcal{D}$. Throughout the post we will be sampling $k$ indices $\{i_0, i_1, \dots, i_{k-1}\}$ that refer to the actual data points.

With Replacement

Sampling with replacement means you can pick items multiple times. This can easily be accomplished by picking $k$ random integers in the range $[0; n-1]$ using your favorite random number generator.
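For concreteness, here is a minimal standalone sketch using the C++ standard library. The function name and the use of std::vector are mine and not part of the sampler class introduced later.

#include <random>
#include <vector>

// Sample k indices uniformly with replacement from {0, ..., n-1}.
std::vector<unsigned int> UniformWithReplacement(unsigned int n, unsigned int k,
                                                 std::mt19937 &engine) {
   std::uniform_int_distribution<unsigned int> dist(0, n - 1); // inclusive bounds
   std::vector<unsigned int> result(k);
   for (unsigned int i = 0; i < k; ++i) {
      result[i] = dist(engine);
   }
   return result;
}

Batches produced this way may contain duplicate indices, which is exactly what distinguishes this variant from the one below.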

Without Replacement

We call it sampling without replacement if items can only be picked once per batch.

To get to an algorithm, let’s think of a physical analogy first: picking cards. If you were asked to pick $k$ random cards out of a poker deck without replacement, you might come up with the following strategy:

1. Pick a random card out of one stack of cards.
2. Put that card on a different stack.
3. Repeat.

If we replace stacks with arrays in this description, we already have a quite convincing algorithm, although a few details are missing. Here is one way to do it:

Much like with a stack of cards, we keep two arrays. The first array stores the result (result array), the second array stores the integers which we can still choose from (open array). Initially, there are zero integers in the result array and $n$ integers in the open array. On each iteration we then randomly pick one integer in the open array, remove it and put it into the result array. After doing this $k$ times, we have a batch of unique samples.

To make this more efficient, we can store both arrays in one “super” array which we divide into two at index $r$. The result array goes from index $0$ to index $r$ (exclusive), while the open array goes from index $r$ to $n-1$ (inclusive). When we want to transfer a sample from the open to the result array, we simply swap the picked integer with the integer at index $r$ and increase $r$ by one. Figure 1 visualises this algorithm.

[Figure 1: array states after each swap, with k = 3: (0 1 2 3 4 5 6) → (3 1 2 0 4 5 6) → (3 6 2 0 4 5 1) → (3 6 5 0 4 2 1)]

Figure 1: Illustration of the uniform algorithm without replacement. First, we pick a random element of the range and swap it with the first element. From now on, ignore the first element and only sample beginning with the second integer. Keep swapping the samples and shrinking the sampling range until we have $k$ integers. The first $k = 3$ integers represent the final sample.

Implementation

Let’s implement this algorithm using C++. As we need to keep an array of size $n$ in memory, it makes sense to reuse it across multiple samples. Therefore we create a class for the sampler.

#include <stdio.h>
#include <utility> // std::swap
#include <random>

typedef std::mt19937 RandomEngine; // or any other PRNG

class UniformSampler {
public:

   UniformSampler(unsigned int N);
   ~UniformSampler();

   unsigned int *SampleWithReplacement(unsigned int k);
   unsigned int *SampleWithoutReplacement(unsigned int k);

   unsigned int N_;
   unsigned int *array_;

   RandomEngine engine_;

};

UniformSampler::UniformSampler(unsigned int N)
      : N_(N), engine_(RandomEngine { std::random_device{}() }) {
   // Initialize the array to the range [0, N-1]
   array_ = new unsigned int[N_];
   for (unsigned int j = 0; j < N_; ++j) {
      array_[j] = j;
   }
}

UniformSampler::~UniformSampler() {
   delete [] array_;
}
Figure 2: Class for the uniform sampler. Make sure to free the array in the destructor!

Now that we have our class, we can implement the algorithm. On each iteration, we sample $j \sim U\{r, N-1\}$ and swap the $r$-th with the $j$-th integer as previously discussed.

// Sample k integers and return pointer to result array
unsigned int *UniformSampler::SampleWithoutReplacement(unsigned int k) {
   for (unsigned int r = 0, j; r < k; ++r) {
      // Sample random int in [r; N)
      j = engine_() % (N_ - r) + r;
      // Swap r-th with j-th integer
      std::swap(array_[r], array_[j]);
   }
   // Return pointer to result
   return array_;
}
Figure 3: Implementation of uniform algorithm without replacement.
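For completeness, a minimal usage sketch of the class above; the dataset size and batch size are arbitrary.

int main() {
   UniformSampler sampler(1000);                          // dataset with n = 1000 points
   unsigned int *batch = sampler.SampleWithoutReplacement(32);
   for (unsigned int i = 0; i < 32; ++i) {
      printf("%u ", batch[i]);                            // 32 unique indices
   }
   printf("\n");
   return 0;
}

Note that the returned pointer refers to the sampler’s internal array, so a batch should be consumed (or copied) before the next call overwrites it.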

The runtime of this algorithm is $O(k)$. The memory requirement is $O(n)$. In contrast to other algorithms, the runtime does not depend on $n$. More analysis of the runtime can be found in Figure 9.

Unnormalised Categorical Distribution

As mentioned in the introduction, there are other ways to sample from a dataset besides uniform sampling. In the following, I will be focusing on the categorical distribution where each data point has its own probability of getting sampled.

$$P(x_i) = p_i$$

Most of the time, however, we do not have probabilities. Instead we are given weights $w_i$ which indicate the relative importance of a data point. In machine learning, this could be the loss of the model. A set of weights might be $\{1, 3, 8, 1, 3, 2, 1, 4\}$. In this case, the third data point has the highest chance of getting sampled with a weight of $w_2 = 8$.

However, a valid probability distribution requires that $\sum_i p_i = 1$. Therefore we need to normalise the weights before we can sample from this distribution.

$$P(x_i) = \frac{w_i}{\sum_j w_j}$$

Returning to the example from above, we would have a probability distribution of $\{\frac{1}{23}, \frac{3}{23}, \frac{8}{23}, \frac{1}{23}, \frac{3}{23}, \frac{2}{23}, \frac{1}{23}, \frac{4}{23}\}$. For larger datasets, sampling from this distribution is tricky. Summing over all elements has linear complexity. Moreover, we cannot cache the sum, because in reinforcement learning, for example, the weights change after each iteration. Is there a way to improve over linear complexity?
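To see what we are improving on, here is what the naive linear-time approach might look like, as a sketch of my own (it is not part of the classes below): sum the weights, draw a value between $0$ and the total, and scan for the interval the value falls into.

#include <random>

// Naive O(n) categorical sampling from unnormalised weights w[0..n-1].
unsigned int SampleLinear(const float *w, unsigned int n, std::mt19937 &engine) {
   float total = 0.0f;
   for (unsigned int i = 0; i < n; ++i) {
      total += w[i];
   }
   float v = std::uniform_real_distribution<float>(0.0f, total)(engine);
   for (unsigned int i = 0; i < n; ++i) {
      if (v < w[i]) {
         return i;
      }
      v -= w[i];
   }
   return n - 1; // guard against floating-point round-off
}

Both the summation and the scan touch every weight, so each sample costs $O(n)$.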

Segment Tree

The answer is yes. We can make use of a binary tree structure to efficiently compute the sum over all elements. The idea is to arrange all data points as leaves of a tree. The key of each leaf is the weight $w_i$ of its data point. To maintain a sum over all weights, the key of a higher-level node is defined as the sum of its child nodes.

$$\mathrm{key}(i) = \sum_{j \in \mathrm{children}(i)} \mathrm{key}(j)$$

This is known as a segment tree or sum heap. Take a look at Figure 4 for an illustration of this data structure.

[Figure 4: segment tree with root 23; inner nodes 13, 10 and 4, 9, 5, 5; leaf weights 1, 3, 8, 1, 3, 2, 1, 4]

Figure 4: Illustration of a ‘sum’ segment tree. The leaves of the tree represent the weight of each data point. Every node is equal to the sum of its child nodes. Dividing by the value of the root gives normalised probabilities (shown in green).

Given a segment tree of the weights, it is straightforward to sample from a categorical distribution. Take a random number $u \in \mathbb{R}$ between $0$ and $1$ sampled from a uniform distribution $u \sim U(0, 1)$. Let $v = \hat{w} \cdot u$ where $\hat{w}$ is the value of the root node. Start at the root node and descend left if $v < \mathrm{key}(\mathrm{left\ node})$, otherwise set $v = v - \mathrm{key}(\mathrm{left\ node})$ and descend right. Once you reach a leaf node, you have your sample.

Essentially, we partition $[0, \hat{w})$ into $n$ intervals of size $w_i$, sample from $[0, \hat{w})$ uniformly and then find the interval in which that number falls.
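For example, with the weights from Figure 4 ($\hat{w} = 23$), drawing $u = 0.6$ gives $v = 13.8$. At the root, $13.8 \geq 13$, so we descend right with $v = 0.8$; at the node with key $10$, $0.8 < 5$, so we descend left; at the node with key $5$, $0.8 < 3$, so we descend left again and arrive at the leaf with weight $w_4 = 3$.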

To sample without replacement, simply remove the weight of each sampled item from the tree. This has $O(\log n)$ computational cost per removal.

Implementation

We start implementing this algorithm by designing a class that can hold the tree. To keep it simple, we use an array representation for the segment tree: the two children of node $j$ are stored at positions $2j$ and $2j + 1$, the root sits at position $1$, and leaf $i$ is stored at position $N + 1 + i$.

#include <cassert>
#include <stdio.h>
#include <algorithm> // std::fill
#include <stdexcept> // std::invalid_argument
#include <random>

typedef std::mt19937 RandomEngine;

class CategoricalSampler {
public:

   CategoricalSampler(unsigned int N);
   ~CategoricalSampler();

   void SampleWithReplacement(unsigned int k, unsigned int *result);
   void SampleWithoutReplacement(unsigned int k, unsigned int *result);

   void SetWeight(unsigned int j, float weight);
   const float& GetWeight(unsigned int j);

   unsigned int N_; // Dataset size
   float *w_; // Tree array
   RandomEngine engine_;

};

CategoricalSampler::CategoricalSampler(unsigned int N)
      : N_(N), engine_(RandomEngine { std::random_device{}() }) {
   // Create tree structure using array representation.
   // Leaves live at positions N+1 .. 2N; position 2N+1 is a zero sentinel
   // so that the last inner node also has two valid children.
   w_ = new float[2 * N + 2]();
   // Initialize the weights to w_i = 1
   std::fill(w_ + N + 1, w_ + 2 * N + 1, 1.0f);
   // Heapify: every inner node becomes the sum of its children
   for (unsigned int i = N; i > 0; i -= 1) {
      w_[i] = w_[2 * i] + w_[2 * i + 1];
   }
}

CategoricalSampler::~CategoricalSampler() {
   delete [] w_;
}

Figure 5: Categorical class using an array representation for the segment tree.

Next, we implement some basic functions to maintain the sum tree. Heapify walks up the tree and restores the sum condition. SetWeight and GetWeight are for users who wish to update the distribution.

inline void Heapify(float *w, int j) {
   // Walk up tree and restore sum condition
   while (j > 1) {
      j /= 2;
      w[j] = w[2 * j] + w[2 * j + 1];
   }
}

void CategoricalSampler::SetWeight(unsigned int i, float weight) {
   if (weight < 0.0F) {
      throw std::invalid_argument("Weight must be greater or equal to zero.");
   }
   unsigned int j = N_ + 1 + i; // Position of w_j in array
   w_[j] = weight;
   Heapify(w_, j); // Maintain sum condition
}

const float& CategoricalSampler::GetWeight(unsigned int i) {
   return w_[N_ + 1 + i];
}
Figure 6: Helper functions for maintaining the segment tree.

With the setup done, we can now implement the sampling algorithm described earlier.

void CategoricalSampler::
     SampleWithoutReplacement(unsigned int k, unsigned int *result) {
   float v, w; // v: sampled value, w: weight removed from the tree
   for (unsigned int i = 0, j; i < k; i += 1) {
      // Pick random number between [0; w_[1])
      v = std::uniform_real_distribution<>(0.0f, w_[1])(engine_);
      // Binary search through segment tree
      j = 1;
      do {
         j *= 2;
         // Go right if w_[j] < v, or if w_[j] is marked invalid
         if (w_[j] < v || (j > N_ && w_[j] < 0.0F)) {
            v -= w_[j];
            j += 1;
         }
      } while (j <= N_);
      // Set output index
      result[i] = j - N_ - 1;
      // Remove weight from tree
      w = w_[j];
      w_[j] *= -1.F; // Mark as invalid
      while (j > 1) {
         j /= 2;
         w_[j] -= w;
      }
   }
   // Restore original state
   for (unsigned int i = 0, j; i < k; i += 1) {
      j = N_ + 1 + result[i];
      w_[j] *= -1.F; // Mark as valid
      w = w_[j];
      while (j > 1) {
         j /= 2;
         w_[j] += w;
      }
   }
}

Figure 7: Sampling algorithm for a categorical distribution without replacement.

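Sampling with replacement works the same way, except that no weights are removed from the tree, so there is no state to restore afterwards. A minimal sketch of SampleWithReplacement under that assumption (not necessarily the original implementation):

void CategoricalSampler::
     SampleWithReplacement(unsigned int k, unsigned int *result) {
   float v; // Sampled value
   for (unsigned int i = 0, j; i < k; i += 1) {
      // Pick random number between [0; w_[1])
      v = std::uniform_real_distribution<>(0.0f, w_[1])(engine_);
      // Binary search through segment tree
      j = 1;
      do {
         j *= 2;
         if (w_[j] < v) {
            v -= w_[j];
            j += 1;
         }
      } while (j <= N_);
      // Set output index
      result[i] = j - N_ - 1;
   }
}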

This segment tree-based algorithm has runtime complexity $O(k \log n)$. Its memory requirement is $O(n)$: the tree stores roughly $2n$ floats.
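To tie things together, a small usage sketch of the class; the weight value is arbitrary.

int main() {
   CategoricalSampler sampler(8);   // n = 8 data points, all weights start at 1
   sampler.SetWeight(2, 8.0f);      // give the third data point a larger weight
   unsigned int batch[4];
   sampler.SampleWithoutReplacement(4, batch);
   for (unsigned int i = 0; i < 4; ++i) {
      printf("%u ", batch[i]);      // four unique indices, index 2 being the most likely
   }
   printf("\n");
   return 0;
}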

Runtime Analysis

You might wonder how well these algorithms perform for bigger datasets in practice. To find that out, I ran some performance tests. Take a look at Figure 9.


Figure 9: Runtime analysis of the presented algorithms. Evaluations were conducted on an Intel Core i9 using a single thread and the -O3 optimization flag. (1) Runtime of categorical sampling w.r.t. $n$ with $k$ fixed to $256$. (2) Runtime of categorical sampling w.r.t. $k$ with $n$ fixed to $65\,536$. (3) Runtime of the uniform algorithm w.r.t. $k$ with $n$ fixed to $65\,536$. (4) Ratio of categorical runtime divided by uniform runtime with $n = 65\,536$.

The log complexity of categorical sampling w.r.t. $n$ is visible in plot (1). It takes around $100\,\mu s$ to sample a batch of size $256$ from a dataset with $10$ million data points. Surprisingly, sampling without replacement is faster for the categorical distribution. Compared with uniform sampling, categorical sampling is around $15$ to $30$ times slower for $n = 65\,536$, as seen in plot (4).

Effect on Training Neural Nets

We discussed sampling without replacement throughout this post. It might not be obvious what the effect on training a neural net is. From a theoretical standpoint, it reduces the variance of an estimator. Take for example a mean estimator $Y = \frac{1}{k}\sum_{i=1}^k X_i$ with $X_i \sim \mathcal{D}$. Using sampling without replacement, the variance of this estimator reduces to

$$\hat{\sigma}^2 = \frac{n-k}{n-1} \cdot \sigma^2$$

where $\sigma^2$ is the variance of the same estimator but with replacement sampling.

In domains where $n$ is small, this can make a difference: for $n = 10$ and $k = 5$, the factor is $\frac{5}{9} \approx 0.56$. If $n$ is large, however, the factor $\frac{n-k}{n-1}$ gets closer and closer to $1$. For example, with a batch size of $k = b \cdot n$ where $b = 1\%$ of the dataset size, we get a factor of $\frac{n - bn}{n-1} = n \cdot \frac{1 - b}{n-1} \approx 1 - b = 99\%$. Sampling without replacement reduces the variance by only $1\%$ in this case.

Although this applies only to mean estimators, the analogy might give you some intuition about the effect of sampling without replacement on neural network training.

Conclusion

In this post, I presented algorithms to sample from a dataset either uniformly or according to weights. These algorithms find application in reinforcement learning for prioritised replay buffers (Schaul et al., 2015) or in supervised learning when sampling batches. The effect of sampling without replacement is larger for small datasets.

Further research into the effect of prioritising training examples based on learning error would be interesting. The algorithms presented here could be used as a basis for ideas in that space.


If you want to read more about sampling algorithms, take a look at Tim Vieira’s blog: https://timvieira.github.io/blog/post/2019/09/16/algorithms-for-sampling-without-replacement/