Two methods for bootstrapping confidence intervals for time-varying rates.

As part of my master’s thesis, I measured microsaccades – tiny, involuntary eye movements that occur once or twice a second – to find out what sort of conscious perceptual information they index. Can I tell that you’re seeing something just by measuring tiny, invisible, involuntary eye movements? Can I tell what you’re seeing?

To investigate this, we had some patient people hold still and look at stimuli while we recorded the position of their eyes 1000 times a second. We ended up with a lot of data – approximately 15,000 trials. We were interested in the epoch within 500ms of stimulus onset. By looking at how the rate of microsaccades changed throughout this window in different stimulus conditions, we could see whether any stimulus properties were encoded in the rate itself.

Because microsaccades happen only once or twice a second, and we’re looking at a one second window per trial, we set each microsaccade relative to the stimulus onset time of its given trial:

[Figure: event onset cloud above a histogram of event onsets. X: event onset relative to stimulus onset; Y: count.]
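
In code, that alignment step might look something like this (the variable names here are hypothetical, not from the study code):

% rawOnsets = absolute event onset times (ms); trialOfEvent = trial index per event
% stimOnset = absolute stimulus onset time (ms) for each trial
eOnsets = rawOnsets - stimOnset(trialOfEvent); % Onsets relative to stimulus onset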

Run those onsets through a simple smoothing procedure (see the sketch below), and you’ve got a rate. Add the three other conditions, and you have a rate per condition.
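
A minimal smoothing sketch, assuming a Gaussian window – the study’s actual procedure may differ:

% The Gaussian window is an assumption here; any reasonable smoothing window works.
epoch = [-500 500];
tScale = epoch(1):epoch(2);
counts = histc(eOnsets(:)',tScale); % Events per 1 ms bin across all trials
sigma = 25; % Window SD in ms (assumed value)
win = exp(-(-3*sigma:3*sigma).^2/(2*sigma^2));
win = win/sum(win); % Normalize the window to sum to 1
rate = conv(counts/nTrials,win,'same')*1000; % Per-trial probability per ms -> events/s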

Here, I’ll present two different data-driven methods for bootstrapping (Efron & Tibshirani, 1993) confidence intervals for time-varying rates like the above. The critical difference between the two methods is in what we ‘sample’ – either timepoints or microsaccades (events). All example code is for MATLAB.

Method one: Draw for each time point.

Convert each timepoint in the epoch of interest into the raw probability of an event occurring:

% nTrials = number of trials in condition
% eOnsets = all event onsets (ms relative to stimulus onset)
epoch = [-500 500]; % Set epoch of interest
tScale = epoch(1):epoch(2); % Helpful for indexing our onsets later.
tPoints = length(tScale);
probs = nan(1,tPoints); % Initialize probs for speed
for t = 1:tPoints
    nEvents = sum(eOnsets == tScale(t)); % Events at this timepoint, across trials
    probs(t) = nEvents/nTrials; % Raw per-trial probability of an event
end

For each of your bootstrap samples, for each trial, draw for an event at each time point:

nBoots = 1000;
bETimes = cell(nBoots,1); % One vector of bootstrapped onsets per sample
for b = 1:nBoots
    bETimes{b} = [];
    for t = 1:nTrials
        for tp = 1:tPoints
            if rand < probs(tp) % Draw for an event at this timepoint
                bETimes{b} = [bETimes{b} tScale(tp)];
            end
        end
    end
end

Now you have a cell array of 1000 vectors of onset times. Smooth each vector with whatever procedure you used for your original data, and take the range that contains 95% of the rates for a typical confidence interval.
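
One way to build the smoothed bootstrap rates, sRates – a sketch, reusing the assumed Gaussian window from the smoothing sketch above:

sRates = nan(nBoots,tPoints); % One smoothed rate per bootstrap sample
for b = 1:nBoots
    counts = histc(bETimes{b}(:)',tScale); % Events per 1 ms bin
    sRates(b,:) = conv(counts/nTrials,win,'same')*1000; % Rate in events/s
end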

CI = prctile(sRates,[2.5 97.5],1); % 2.5th and 97.5th percentiles at each timepoint

Fill the area between the curves, plot your original rates, and check it out:
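
In MATLAB, the plotting might look something like this (a sketch; tScale, rate, and CI as above):

fill([tScale fliplr(tScale)],[CI(1,:) fliplr(CI(2,:))],[0.85 0.85 0.85],'EdgeColor','none') % Shaded CI band
hold on
plot(tScale,rate,'k','LineWidth',1.5) % Original smoothed rate
xlabel('Time relative to stimulus onset (ms)')
ylabel('Microsaccade rate (events/s)')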

While this method is heavily data-driven, it behaves questionably with sparse data. The above figure is an aggregate of 13,000 trials – all the observers that took part in the study. What if we use only one observer, with approximately 300 trials per condition?

One of the problems causing the weird peaks and oddly confident intervals (around 180ms, for example) is all the zero-probability timepoints – this method is so data-driven that it makes little allowance for sparse sampling. Let’s look at the second sampling method and see how it compares.

Method two: Draw for each event.

Sum the target smoothed rate and divide each point by that sum, normalizing the rate to sum to 1.

% rate = vector of rate values to be bootstrapped.
rSum = sum(rate);
rateNorm = rate./rSum;

Cumulatively sum the normalized rate to produce a vector that climbs from the first normalized value to 1.

cats = cumsum(rateNorm); % Cumulative distribution to draw from

For each bootstrap sample, draw an onset for every microsaccade in the target epoch from the probability distribution in cats. (Note: to prevent ‘fall offs’ at the beginning and end, expanding the epoch may be helpful.)

% nEvents = number of events in the target epoch
nBoots = 1000;
bOnsets = nan(nBoots,nEvents); % One row of bootstrapped onsets per sample
for b = 1:nBoots
    for e = 1:nEvents
        draw = rand; % Uniform draw on [0,1)
        pos = sum(draw > cats); % Number of bins the draw passes
        bOnsets(b,e) = tScale(pos+1); % Same +1 indexing correction as above
    end
end

Now smooth your 1000 rates, and take the confidence interval as above. An additional advantage of this second method is that you can further smooth the already-smoothed rate by taking the average of the bootstrap samples (a quick sketch of this follows below). Compare the noisy individual rate from above with the new smoothed rate and bootstrapped confidence interval:
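
With sRates again holding the smoothed bootstrap rates (one row per sample), the averaging is one line:

rateSmooth = mean(sRates,1); % Bootstrap-averaged, extra-smooth rate
CI = prctile(sRates,[2.5 97.5],1); % Confidence interval, as in method one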

Awesome. More reasonable smoothing, less-peaky confidence intervals, and less emphasis on incidentally-zero timepoints. Hope it helps.
Many thanks to Martin Rolfs.

Further reading/giants stood upon:

Efron B, Tibshirani R. An Introduction to the Bootstrap. New York: Chapman and Hall, 1993.
Rolfs M. Microsaccades: small steps on a long way. Vision Res 49: 2415–2441, 2009.
Rolfs M, Kliegl R, Engbert R. Toward a model of microsaccade generation: the case of microsaccadic inhibition. J Vis 8: 1–23, 2008.
Rucci M, Victor JD. The unsteady eye: an information-processing stage, not a bug. Trends Neurosci 38: 195–206, 2015.
White A, Rolfs M. Oculomotor inhibition covaries with conscious detection. J Neurophysiol 116: 1507–1521, 2016.