Expression Evaluation (On-Disk)

ironArray has transparent support for the evaluation of expressions whose operands are disk-based. The main advantage of this is that you can perform operations with data that exceeds your available memory (even in compressed state).

On the other hand, disks are much slower than memory (although SSDs have narrowed the gap considerably in recent years), so you might expect evaluation to slow down significantly. Thanks to on-the-fly compression, however, the slowdown can be smaller than you would think.
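The key intuition is that compressed data means fewer bytes have to travel from disk. As a minimal, stand-alone sketch (using only the standard-library `zlib` codec, not ironArray's internal compressor, and made-up repetitive data), here is how much a compressible buffer can shrink:

```python
import zlib

# Hypothetical illustration: repetitive numeric data (common in real-world
# datasets) compresses well, so far fewer bytes must be read from disk.
raw = bytes(range(256)) * 4096            # 1 MiB of highly repetitive data
compressed = zlib.compress(raw, level=1)  # fast compression level
ratio = len(raw) / len(compressed)
print(f"compression ratio: {ratio:.1f}x")
```

If the data compresses, say, 10x, the disk only has to deliver a tenth of the bytes, which can offset much of the disk-versus-memory speed penalty.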

In this tutorial we are going to exercise on-disk expression evaluation and compare it with in-memory evaluation.

[1]:
%load_ext memprofiler
%matplotlib inline
import matplotlib.pyplot as plt
import iarray as ia
import os

Let’s start by providing some hints on the kind of speed you can expect when using ironArray with on-disk data. To show this, we are going to open our original on-disk arrays and create an on-disk container where we will put the result of our operations. As in other tutorials, we are going to evict the files from the OS page cache so as to better assess a true out-of-core evaluation:

[2]:
%%mprof_run
precip1 = ia.open("precip1.iarr")
precip2 = ia.open("precip2.iarr")
precip3 = ia.open("precip3.iarr")
memprofiler: used 0.51 MiB RAM (peak of 0.51 MiB) in 0.0030 s, total RAM usage 225.53 MiB

In this case, we are just getting views of the larger arrays that live on disk. Remember that views do not create new containers, which is why the operation above is fast and consumes very little memory. Before building the expression for the mean values, let’s evict the files from the OS page cache:
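Why is building an expression like `(precip1 + precip2 + precip3) / 3` nearly free? Because it only records the operations to perform later, rather than computing anything. Here is a minimal, hypothetical sketch of that lazy-expression idea (this is not ironArray's actual implementation, just an illustration with scalars):

```python
# Minimal sketch of deferred evaluation: arithmetic builds a small
# expression tree; nothing is computed until eval() is called.
class Lazy:
    def __init__(self, op, *args):
        self.op, self.args = op, args

    def __add__(self, other):
        return Lazy("add", self, other)

    def __truediv__(self, k):
        return Lazy("div", self, k)

    def eval(self):
        if self.op == "leaf":
            return self.args[0]
        if self.op == "add":
            return self.args[0].eval() + self.args[1].eval()
        if self.op == "div":
            return self.args[0].eval() / self.args[1]

a, b, c = Lazy("leaf", 3.0), Lazy("leaf", 6.0), Lazy("leaf", 9.0)
mean = (a + b + c) / 3   # instantaneous: only the tree is built
print(mean.eval())       # → 6.0
```

In ironArray, the deferred `eval()` is where the chunked, compressed reads from disk actually happen.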

[3]:
!vmtouch -e "precip1.iarr" "precip2.iarr" "precip3.iarr"
           Files: 3
     Directories: 0
   Evicted Pages: 215221 (840M)
         Elapsed: 0.029921 seconds
[4]:
%%mprof_run
precip_mean = (precip1 + precip2 + precip3) / 3
memprofiler: used 0.01 MiB RAM (peak of 0.01 MiB) in 0.0010 s, total RAM usage 225.76 MiB

As usual, this is a very fast operation. And now let’s evaluate and make sure that the result is created on-disk:

[5]:
%%mprof_run mean
precip_mean_disk = precip_mean.eval(urlpath="mean-3m.iarr", mode="w")
precip_mean
[5]:
<iarray.lazy_expr.LazyExpr at 0x7f17629de070>
memprofiler: used 60.09 MiB RAM (peak of 75.96 MiB) in 2.2665 s, total RAM usage 286.00 MiB

We see that evaluation from disk takes quite a bit more time than operating in memory, but this is to be expected. What we are more interested in here is that the peak RAM needed to perform the evaluation stays below 100 MB, whereas the output array is quite a bit larger than that:
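To see why out-of-core evaluation can get away with a small, bounded amount of RAM, here is a hedged sketch of the chunk-by-chunk idea using plain NumPy memmaps (this is not ironArray's machinery, and the file names, shapes, and chunk size are made up for illustration; ironArray additionally keeps the chunks compressed):

```python
import os
import tempfile
import numpy as np

# Create three on-disk operands (raw float64 memmaps, identical data
# for simplicity so the result is easy to check).
tmpdir = tempfile.mkdtemp()
shape = (1_000_000,)
paths = [os.path.join(tmpdir, f"precip{i}.bin") for i in (1, 2, 3)]
for p in paths:
    m = np.memmap(p, dtype=np.float64, mode="w+", shape=shape)
    m[:] = np.random.default_rng(0).random(shape)
    m.flush()

sources = [np.memmap(p, dtype=np.float64, mode="r", shape=shape)
           for p in paths]
out = np.memmap(os.path.join(tmpdir, "mean.bin"), dtype=np.float64,
                mode="w+", shape=shape)

# Evaluate the mean chunk by chunk: peak RAM is bounded by the chunk
# size, not by the (much larger) total array size.
chunk = 100_000
for start in range(0, shape[0], chunk):
    sl = slice(start, start + chunk)
    out[sl] = (sources[0][sl] + sources[1][sl] + sources[2][sl]) / 3
out.flush()
```

This is why the memprofiler peak above is tens of MB even though the result file is hundreds of MB: only a few chunks are ever resident in memory at once.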

[6]:
%ls -lh mean-3m.iarr
-rw-rw-r-- 1 faltet2 faltet2 820M dic 22 09:31 mean-3m.iarr

The file is well above the memory consumed. Here is a more graphical view of the memory consumption:

[7]:
%mprof_plot mean -t "Mean computation (on-disk)"