Latest Posts

Icechunk is a brand-new, open-source transactional storage engine for tensor/ND-array data, designed for use on cloud object storage. Icechunk works together with Zarr, augmenting the Zarr core data model with features that enhance performance, collaboration, and safety in a multi-user cloud-computing context.

TLDR: We are excited to announce the release of the Icechunk storage engine, a new open-source library and specification for the storage of multidimensional array (a.k.a. tensor) data in cloud object storage. Icechunk works together with Zarr, augmenting the Zarr core data model with features that enhance performance, collaboration, and safety in a multi-user cloud-computing context. With the release of Icechunk, powerful capabilities such as isolated transactions and time travel, which were previously only available to Earthmover customers via our Arraylake platform, are now free and open source. Head over to icechunk.io to get started! We’re also hosting a webinar to present Icechunk and answer questions from the community on Tuesday, October 22 from 12-1 PM US ET. Register here to att…
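For a flavor of the workflow, here is a minimal sketch of writing, committing, and re-reading a Zarr array through Icechunk on local storage. It assumes a recent icechunk and zarr-python 3 release; exact function and parameter names may differ between versions, and the repository path is illustrative.

```python
import icechunk
import numpy as np
import zarr

# Create a fresh repository backed by local storage
# (object-store storage configs follow the same pattern).
storage = icechunk.local_filesystem_storage("/tmp/icechunk-demo")
repo = icechunk.Repository.create(storage)

# Open a writable session on the main branch and write through a Zarr store.
session = repo.writable_session("main")
group = zarr.group(store=session.store)
temps = group.create_array("temperature", shape=(4, 4), dtype="float32")
temps[:] = np.random.rand(4, 4).astype("float32")

# Commit the transaction; the returned snapshot ID enables time travel later.
snapshot_id = session.commit("add temperature array")

# Read the data back as of that snapshot.
past = repo.readonly_session(snapshot_id=snapshot_id)
print(zarr.open_group(store=past.store, mode="r")["temperature"][:])
```

Until `commit` runs, nothing written in the session is visible to other readers, which is the isolated-transaction behavior described above.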
Read More

Thanks to Xvec and developments across a number of packages, the Xarray ecosystem now supports data cubes with vector geometries as coordinate locations.

This is a blog version of a webinar that took place on April 16, 2024; a video recording of the webinar is available in the original post. Geospatial datasets representing information about real-world features such as points, lines, and polygons are increasingly large, complex, and multidimensional. They are naturally represented as vector data cubes: n-dimensional arrays where at least one dimension is a set of vector geometries. The Xarray ecosystem now supports vector data cubes thanks to Xvec, a package designed for working with vector geometries within the Xarray data model 🎉. For those familiar with GeoPandas, Xvec is to Xarray as GeoPandas is to Pandas. This blog post is geared toward analysts working with geospatial datasets. We introduce vector data cubes, discuss how they differ from raster data cubes, and de…
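To illustrate the idea, here is a minimal sketch of a vector data cube built with Xarray and Xvec, using a few made-up point geometries and a made-up temperature variable; the coordinate names and CRS are illustrative.

```python
import numpy as np
import shapely
import xarray as xr
import xvec  # registers the .xvec accessor on Xarray objects

# A handful of point geometries acting as one dimension of the cube.
geoms = shapely.points([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
times = np.array(["2024-01-01", "2024-01-02"], dtype="datetime64[ns]")

# A (geometry, time) cube: values are indexed by vector geometries, not raster cells.
cube = xr.Dataset(
    {"temperature": (("geometry", "time"), np.random.rand(3, 2))},
    coords={"geometry": geoms, "time": times},
).xvec.set_geom_indexes("geometry", crs="EPSG:4326")

print(cube)
```

The `set_geom_indexes` call attaches a geometry-aware index with a CRS to the `geometry` coordinate, which is what lets the rest of the Xarray toolchain treat the geometries as first-class coordinate locations.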
Read More

How Arraylake is enabling scientific research.

Background: The University of Wisconsin is home to a research team called Advanced Baseline Imager Live Imaging of Vegetated Ecosystems (ALIVE). The team, working remotely and led by Prof. Paul Stoy, PhD, is building a gradient-boosting regression model using geostationary satellites to estimate terrestrial carbon and water fluctuations in near real-time. The team trains its models using GOES-R and other public satellite and meteorological datasets. In trying to process this data, they ran into the central problem of working with raster data for time-series analysis: the data’s formats, mainly NetCDF and GeoTIFF, are not conducive to time-series analysis. This experience inspired them to strive to create output datasets that are analysis-ready for various applications. During AMS 2024, …
Read More

This post describes the fundamentals of Earth Observation datacubes, outlines the basic Python building blocks for creating Zarr-backed datacubes, and presents a scalable serverless approach to building large-scale datacubes that is cost-effective, reliable, and performant.

This is a blog version of a webinar that took place on April 16, 2024; a video recording of the webinar is available in the original post. Earth Observation satellites generate massive volumes of data about our planet, and these data are vital for confronting global challenges. Satellite imagery is commonly distributed as individual “scenes”: a single file consisting of a single image of a tiny part of the Earth. Popular public satellite programs such as NASA / USGS Landsat and Copernicus Sentinel produce millions of such images a year, comprising petabytes of data. Increasingly, we see organizations looking to aggregate raw satellite imagery into more analysis-ready datacubes. In contrast to millions of individual images sampled unevenly in space and time, Earth-system datacubes contain multiple variables, align…
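To make the building blocks concrete, here is a minimal sketch of assembling a small Zarr-backed datacube from public Sentinel-2 scenes using pystac-client and odc-stac. The endpoint, bounding box, date range, and band names are illustrative, and this is a simplified sketch rather than the exact serverless pipeline presented in the webinar.

```python
import odc.stac
import pystac_client

# Query a public STAC API for a month of Sentinel-2 scenes over a small bounding box.
# (Earth Search is used here as an example endpoint; any STAC API works the same way.)
catalog = pystac_client.Client.open("https://earth-search.aws.element84.com/v1")
items = catalog.search(
    collections=["sentinel-2-l2a"],
    bbox=[-105.3, 39.9, -105.1, 40.1],
    datetime="2024-06-01/2024-06-30",
).item_collection()

# Lazily assemble the individual scenes into a single (time, y, x) cube of dask arrays.
cube = odc.stac.load(
    items,
    bands=["red", "green", "blue"],
    chunks={"x": 2048, "y": 2048},
)
print(cube)

# Persist an analysis-ready copy of the cube as a Zarr store.
cube.to_zarr("sentinel2_cube.zarr", mode="w")
```

Because the load is lazy, the same pattern scales from a laptop demo like this one to a large distributed or serverless build, which is the subject of the post.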
Read More