1 Packages

library(raster)
library(rgdal)  # read/write with GDAL
library(sf)  # vector data (replaces sp)
library(mapview)  #mapView
library(RANN)  # nn2
library(data.table)  # fread
library(velox)  # alternative to raster
library(fasterize)  # fast rasterization of sf polygons
library(viridisLite)  # colors (used in mapview)

This short introduction shows how to read, write and display large geographic data sets with R. A very complete and easy-to-follow online resource is the eBook by Lovelace, Nowosad and Muenchow. Package sp has been the main package for processing vector spatial data, but it is being progressively abandoned in favor of the more recent package sf (Simple Features for R). For raster data, package raster is still widely used, although some alternatives are available.

2 Read and display big geographic data sets

2.1 Vector data: the sf package

Let’s read a reasonably large shapefile: this is a subset of classes of COS2015 (DGT) and corresponds to the flammable classes (these are the areas in Continental Portugal that are essentially labelled “floresta”, “matos” and “vegetação esparsa”, i.e. forest, shrubland and sparse vegetation). Below, we compare the elapsed time to read the shapefile with rgdal::readOGR, which returns a sp object in R, and with sf::st_read, which returns a sf object. For large data sets, st_read is much faster.

system.time({
    flam.sp <- rgdal::readOGR(dsn = file.path(getwd(), "datasets"), layer = "Flam2015v4", 
        verbose = FALSE)
})  # 21.27 
   user  system elapsed 
  21.93    4.23   29.21 
flam.sp
class       : SpatialPolygonsDataFrame 
features    : 56498 
extent      : -118698.4, 162119.9, -296662.2, 276059.5  (xmin, xmax, ymin, ymax)
coord. ref. : +proj=tmerc +lat_0=39.6682583333333 +lon_0=-8.133108333333331 +k=1 +x_0=0 +y_0=0 +ellps=GRS80 +units=m +no_defs 
variables   : 4
names       : Class,      AreaCOS, FID_COS, Group 
min values  :     3, 1.000022e+00,  100000,    NA 
max values  :     3, 9.999798e+00,   99999,    NA 
system.time({
    flam.sf <- sf::st_read(dsn = file.path(getwd(), "datasets", "Flam2015v4.shp"), 
        quiet = TRUE)
})  # 4.32
   user  system elapsed 
   2.62    0.71    4.79 
print(flam.sf, n = 5)
Simple feature collection with 56498 features and 4 fields
geometry type:  POLYGON
dimension:      XY
bbox:           xmin: -118698.4 ymin: -296662.2 xmax: 162119.9 ymax: 276059.5
epsg (SRID):    NA
proj4string:    +proj=tmerc +lat_0=39.6682583333333 +lon_0=-8.133108333333331 +k=1 +x_0=0 +y_0=0 +ellps=GRS80 +units=m +no_defs
First 5 features:
  Class   AreaCOS FID_COS Group                       geometry
1     3  7.028552   54716    NA POLYGON ((-72052.09 -294701...
2     3  3.266297   54717    NA POLYGON ((-72298.54 -294505...
3     3 14.344536   54718    NA POLYGON ((13645.62 -295011....
4     3  2.902049   54719    NA POLYGON ((-71924.96 -294350...
5     3  8.338297   54720    NA POLYGON ((-71685.64 -294333...

A sf object has a table structure. The attribute table of the data is extended with a special column called geometry. The remaining columns of the table are the features’ attributes, and each row of the table describes one feature. The main simple feature geometry types are POINT, LINESTRING, POLYGON, MULTIPOINT, MULTILINESTRING, MULTIPOLYGON, and GEOMETRYCOLLECTION (the mixed type). The geometry of the features in a sf object can be extracted with st_geometry, which returns a list of the geometries of all features (the list is an object of class sfc, for simple feature geometry list-column).
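
For instance, the geometry column of flam.sf can be inspected along these lines (a small illustration using the object read above):

geom <- sf::st_geometry(flam.sf)
class(geom)  # "sfc_POLYGON" "sfc"
geom[[1]]  # the geometry (a sfg object) of the first feature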

If one relies on packages that can only be applied to sp objects, it can be useful to convert objects from sf to sp and vice-versa. This can be done with the following commands:

as(x, "Spatial")  # x is a sf object
st_as_sf(x)  # x is a sp object

Let’s use sf functions for some data exploration. For instance, one may want to determine the largest feature in flam.sf. This can easily be done with st_area, which returns the areas of all features. Package sf supports all spatial methods of the GIS standard ISO 19125-1:2004.

areas <- st_area(flam.sf)
bigflam <- flam.sf[which.max(areas), ]
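
Other standard operations work in the same way; for instance, one could compute the centroid of that largest polygon (a small illustration):

centro <- sf::st_centroid(bigflam)  # centroid (as a sf object) of the largest flammable polygon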

There are many ways of displaying vector data sets (see here). To display geographic data sets interactively in the viewer window, and combine them with background information, one can use mapview.

mapview(bigflam)

Another common task is to combine several geographic data sets, possibly with distinct coordinate reference systems. Let’s read the administrative map of Portugal and determine the flammable areas that intersect it.

caop <- sf::st_read(dsn = file.path(getwd(), "datasets", "Cont_AAD_CAOP2015.shp"), 
    quiet = TRUE)
print(caop, n = 3)
Simple feature collection with 3223 features and 8 fields
geometry type:  POLYGON
dimension:      XY
bbox:           xmin: -119191.4 ymin: -300404.8 xmax: 162129.1 ymax: 276083.8
epsg (SRID):    NA
proj4string:    +proj=tmerc +lat_0=39.66825833333333 +lon_0=-8.133108333333332 +k=1 +x_0=0 +y_0=0 +ellps=GRS80 +units=m +no_defs
First 3 features:
  DICOFRE Freguesia      Concelho Distrito             TAA AREA_EA_Ha AREA_T_Ha
1  081504    Sagres VILA DO BISPO     FARO ÁREA SECUNDÁRIA       0.04   3436.91
2  081504    Sagres VILA DO BISPO     FARO ÁREA SECUNDÁRIA       0.17   3436.91
3  081504    Sagres VILA DO BISPO     FARO ÁREA SECUNDÁRIA       0.20   3436.91
  Des_Simpli                       geometry
1     Sagres POLYGON ((-69674.53 -294276...
2     Sagres POLYGON ((-69645.82 -294240...
3     Sagres POLYGON ((-69720.37 -294163...
# reproject caop to the CRS of flam.sf
caop.proj <- sf::st_transform(caop, crs = sf::st_crs(flam.sf))
concelho <- caop.proj[caop.proj$Concelho == "MONCHIQUE", ]
flam.concelho <- sf::st_intersection(flam.sf, concelho)
mapview(concelho, col.regions = "blue") + mapview(flam.concelho, col.regions = "green")

2.2 Raster data: the raster package

Next, we use package raster to read and process raster data. This package can deal with very large data sets since it does not need to load the whole data set into memory. If one needs to read a very large file, raster can be used to create the connection, and the data can then be loaded in blocks of rows with getValues and processed one block at a time.
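
A minimal sketch of that block-wise approach could look as follows (this assumes some large single-layer file big.tif on disk; the file name is just for illustration):

r <- raster::raster("big.tif")  # creates the connection; values stay on disk
bs <- raster::blockSize(r)  # suggested blocks of rows
blockmeans <- numeric(bs$n)
for (i in seq_len(bs$n)) {
    v <- raster::getValues(r, row = bs$row[i], nrows = bs$nrows[i])
    blockmeans[i] <- mean(v, na.rm = TRUE)  # process one block at a time
}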

In the example below the data set is not too big, and therefore it can be loaded into memory. There are three distinct files to be stacked (three bands of a Landsat 8 surface reflectance image over the Alentejo), so raster::stack is used to read the list of file names. Then, the stack is converted into a single multiband image with raster::brick. Note that building a brick (instead of reading a native multilayer tif file) creates an object “in memory”.

system.time({
    s <- raster::stack(as.list(file.path(getwd(), "datasets", c("LC82030332014151LGN00_sr_band3.tif", 
        "LC82030332014151LGN00_sr_band4.tif", "LC82030332014151LGN00_sr_band5.tif"))))
})
   user  system elapsed 
   0.11    0.07    0.23 
inMemory(s)  #FALSE
[1] FALSE
system.time({
    b <- raster::brick(s)
})  # takes some time
   user  system elapsed 
  10.90    2.61   18.13 
inMemory(b)  #TRUE
[1] TRUE
writeRaster(b, "b.tif", overwrite = TRUE)  # creates multilayer file
b2 <- brick("b.tif")
inMemory(b2)  #FALSE
[1] FALSE
b
class       : RasterBrick 
dimensions  : 7791, 7651, 59608941, 3  (nrow, ncol, ncell, nlayers)
resolution  : 30, 30  (x, y)
extent      : 529785, 759315, 4190085, 4423815  (xmin, xmax, ymin, ymax)
coord. ref. : +proj=utm +zone=29 +datum=WGS84 +units=m +no_defs +ellps=WGS84 +towgs84=0,0,0 
data source : in memory
names       : LC82030332014151LGN00_sr_band3, LC82030332014151LGN00_sr_band4, LC82030332014151LGN00_sr_band5 
min values  :                            -20,                            -88,                           -205 
max values  :                          12661,                          13515,                          12429 

The number of pixels in the image can be accessed with ncell(b); in this case it is 59608941. Bands can be combined to compute indices or can be displayed as RGB color composites (for instance with mapview::viewRGB). For this image, the Landsat 8 surface reflectance values are not supposed to be smaller than 0 or larger than 10000, so we assign NA to those pixels.

names(b) <- c("b3", "b4", "b5")
b[b <= 0 | b > 10000] <- NA  # non valid values
ndvi <- (b$b5 - b$b4)/(b$b5 + b$b4)
writeRaster(ndvi, file = "ndvi.tif", overwrite = TRUE)
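
A false colour composite of the brick could be displayed along these lines (a sketch, assuming viewRGB is available in the installed version of mapview):

mapview::viewRGB(x = b, r = 3, g = 2, b = 1)  # NIR/red/green false colour composite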

The velox package provides an alternative to package raster. It represents bands as matrices, so unlike raster it always loads all pixel values into memory.

system.time({
    s <- raster::stack(as.list(file.path(getwd(), "datasets", c("LC82030332014151LGN00_sr_band3.tif", 
        "LC82030332014151LGN00_sr_band4.tif", "LC82030332014151LGN00_sr_band5.tif"))))
})
   user  system elapsed 
   0.11    0.03    0.30 
vxs <- velox(s)
names(vxs$rasterbands) <- c("b3", "b4", "b5")  # each band is a matrix

With velox, each band needs to be accessed individually. So, for instance, if one wishes to apply some function to each band (like converting non-valid pixel values to NA), it has to be applied to one band at a time (alternatively, one could use lapply, as sketched after the example below).

v <- function(x) {
    x[x <= 0 | x > 10000] <- NA
    return(x)
}
b5 <- v(vxs$rasterbands$b5)  # assign NA to non valid values
b4 <- v(vxs$rasterbands$b4)  # idem
vals <- (b5 - b4)/(b5 + b4)
vxndvi <- velox(vals, extent = vxs$extent, res = vxs$res, crs = vxs$crs)
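
The lapply alternative mentioned above could look like this (a sketch that overwrites every band of vxs in place):

vxs$rasterbands <- lapply(vxs$rasterbands, v)  # apply v to each band matrix at once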

Available methods for VeloxRaster can be listed with ?VeloxRaster. In particular, velox objects can be converted to raster objects.

r <- vxndvi$as.RasterLayer(band = 1)
mapview(r)

The main interest of velox is to perform operations on rasters that are slow with the tools from package raster. For instance, suppose we wish to extract ndvi values at N random locations.

# extract values
ndvi <- raster("ndvi.tif")
ext <- as.vector(extent(ndvi))
N <- 10000
locs <- cbind(runif(N, min = ext[1], max = ext[2]), runif(N, min = ext[3], max = ext[4]))
system.time({
    x <- raster::extract(ndvi, y = locs)
})  # 4.94 for N=10000
   user  system elapsed 
   3.02    0.60    5.39 
sf.locs <- st_as_sf(data.frame(x = locs[, 1], y = locs[, 2]), coords = c("x", "y"), 
    crs = as.character(crs(ndvi)))
system.time({
    y <- vxndvi$extract_points(sf.locs)
})
   user  system elapsed 
      0       0       0 
n <- 4
setequal(round(x, n), round(y, n))
[1] FALSE

In the following example, we compare how long it takes to read the coordinates of the pixels and to crop a raster.

# read coordinates
system.time({
    xy <- coordinates(ndvi)
})  # 1.34
   user  system elapsed 
   1.19    0.05    1.30 
system.time({
    xy <- vxndvi$getCoordinates()
})  # 0.44
   user  system elapsed 
   0.30    0.03    0.34 
system.time({
    x <- raster::crop(ndvi, 0.5 * extent(ndvi))
})  # 2.66
   user  system elapsed 
   1.48    0.14    1.69 
system.time({
    vxndvi$crop(0.5 * extent(ndvi))
})  # 0.16 
   user  system elapsed 
   0.03    0.01    0.05 

VeloxRaster objects are ReferenceClass objects and thus mutable, as explained in the documentation. This means in particular that an object can be altered without using the <- or = assignment operators. For instance, the crop operation above re-sized vxndvi, and the initial version was lost. To make a new copy of the object, one needs to use $copy.

vxndvi$extent  # after crop
[1]  587145  701955 4248495 4365405
# original version
vxndvi <- velox(vals, extent = vxs$extent, res = vxs$res, crs = vxs$crs)
vxndvi$extent  # 
[1]  529785  759315 4190085 4423815
# make a copy that is not referenced to vxndvi
vxcopy <- vxndvi$copy()

2.3 Maps and colors

Maps, like the ones produced by plot or mapview, have default colors. Usually, one wants to choose a given palette of colors and the intervals of values that correspond to each color. Argument at of mapview is used to indicate the breaks in values that correspond to the colors. R provides different ways of defining vectors of colors: those are just strings that represent colors in hexadecimal notation (see hex colors). For instance, red corresponds to code “#ff0000”, and yellow to code “#ffff00”. There are many ways of defining vectors of colors, including the ones below. Parameter alpha in [0,1] indicates transparency, with opaque corresponding to alpha=1.

cores <- c("red", "yellow", "green", "blue")
cores <- colorRampPalette(c(rgb(0, 0, 1, alpha = 1), rgb(0, 0, 1, alpha = 0)), alpha = TRUE)(8)
cores <- viridisLite::inferno(n = 10, alpha = 1, begin = 0.2, end = 1, direction = -1)
pie(rep(1, length(cores)), col = cores)

For instance, if we want to display the ndvi map with 4 classes between 0.1 and 1, then we can set the colors and the breaks as in the following example. A description of the options for mapview can be found here. If the number of colors is lower than the number of intervals, colors are recycled.

cores <- colorRampPalette(c(rgb(1, 1, 0, 0.5), rgb(0, 1, 0, 0.5)), alpha = TRUE)(10)  # any number >=4
mapview(ndvi, col.regions = cores, at = c(0.1, 0.3, 0.5, 0.7, 1))

One can edit data interactively over mapview maps (for instance, to create new features) with mapedit::editMap.
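
A minimal sketch of such an interactive edit (assuming package mapedit is installed):

drawn <- mapedit::editMap(mapview(concelho))  # draw or edit features over the map
# the result is a list whose components (e.g. finished) hold the new features as sf objects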

2.4 Tables

The simplest structure to represent geographic data is a table: it can be used to represent a set of points, including their coordinates in some known coordinate reference system. There are different packages that allow us to manipulate tables in R. The base function data.frame and its methods are widely used but can be very slow for large tables. As an alternative, one can use tibbles, which are an updated approach to data.frame functionality (they are part of the tidyverse, a collection of R packages designed for data science). If we need to use very large tables, the most efficient approach to date in R is package data.table, as shown here.

Next we read a data set that contains global active fire data acquired by satellite. We compare the basic utility read.csv that returns a data.frame with fread of package data.table.

# system.time({df<-read.csv(file=file.path(getwd(),'datasets','viirs-fires-2017-world.csv'))})
# # 148.41
system.time({
    af <- data.table::fread(file.path(getwd(), "datasets", "viirs-fires-2017-world.csv"))
})  # 14.34 
   user  system elapsed 
   7.21    3.44   16.64 
af
           latitude  longitude bright_ti4 scan track   acq_date acq_time
       1: -27.70160   30.02300      302.4 0.46  0.47 2017-01-01     2334
       2: -26.80611   27.85621      304.9 0.52  0.41 2017-01-01     2333
       3: -26.66956   27.81071      318.2 0.51  0.41 2017-01-01     2333
       4: -26.65811   27.81250      316.2 0.51  0.41 2017-01-01     2333
       5: -26.65746   27.80734      312.9 0.51  0.41 2017-01-01     2333
      ---                                                               
19996359:  19.35031 -155.05087      367.0 0.44  0.63 2017-12-31       15
19996360:  19.35076 -155.04622      367.0 0.44  0.63 2017-12-31       15
19996361:  19.34422 -155.05305      343.9 0.44  0.63 2017-12-31       15
19996362:  19.34467 -155.04839      353.2 0.44  0.63 2017-12-31       15
19996363:  19.34509 -155.04398      348.4 0.44  0.63 2017-12-31       15
          satellite instrument confidence version bright_ti5  frp
       1:         N      VIIRS          n       1      290.2  1.7
       2:         N      VIIRS          n       1      294.3  2.1
       3:         N      VIIRS          n       1      284.4  4.4
       4:         N      VIIRS          n       1      285.0  2.9
       5:         N      VIIRS          n       1      283.7  2.9
      ---                                                        
19996359:         N      VIIRS          h       1      315.7 11.6
19996360:         N      VIIRS          h       1      318.0 17.4
19996361:         N      VIIRS          n       1      309.5 11.6
19996362:         N      VIIRS          n       1      311.3 11.6
19996363:         N      VIIRS          n       1      311.1 17.4

The output of fread is a data.table object, which can be queried and explored efficiently. For instance, to select active fires over Continental Portugal, one can use variables longitude and latitude to perform the query. Function fwrite creates a new text file.

afpt <- af[latitude > 36 & latitude < 43 & longitude < -6 & longitude > -10]
data.table::fwrite(afpt, file = file.path(getwd(), "datasets", "viirs-fires-2017-portugal.csv"))

New columns can be easily and very efficiently computed. In the example below, the footprint of each observation is computed as the product of the scan and track ground sample distances. Moreover, a data.table can be quickly ordered or grouped. An easy introduction to data.table can be found in this vignette.

af[, `:=`(footprint, scan * track)]  # create new variable 'footprint'
af[order(frp, decreasing = TRUE)]  # order table by frp value
           latitude  longitude bright_ti4 scan track   acq_date acq_time
       1:  10.73978    1.92764      343.6 0.51  0.41 2017-11-30     1326
       2:  10.62042    1.67430      351.2 0.50  0.41 2017-11-30     1326
       3:  10.58239    0.82000      343.2 0.45  0.39 2017-11-30     1326
       4:   9.74282   -4.61814      348.5 0.44  0.38 2017-11-30     1326
       5:   9.76456   -4.59276      347.6 0.44  0.38 2017-11-30     1326
      ---                                                               
19996359:  56.24148  -89.09474      324.2 0.39  0.36 2017-12-23      822
19996360: -23.58495   28.78425      297.4 0.46  0.39 2017-12-26     2301
19996361:  60.46083 -123.92152      326.2 0.49  0.48 2017-12-29      950
19996362: -22.36637   22.51753      302.4 0.47  0.40 2017-12-30     2326
19996363:   3.46376    5.57049      305.2 0.49  0.40 2017-12-30      119
          satellite instrument confidence version bright_ti5     frp footprint
       1:         N      VIIRS          n       1      313.2 16758.1    0.2091
       2:         N      VIIRS          n       1      315.2 15942.7    0.2050
       3:         N      VIIRS          l       1      314.8 13892.4    0.1755
       4:         N      VIIRS          n       1      297.4 13415.1    0.1672
       5:         N      VIIRS          n       1      297.3 13379.3    0.1672
      ---                                                                     
19996359:         N      VIIRS          n       1      244.3     0.0    0.1404
19996360:         N      VIIRS          n       1      282.7     0.0    0.1794
19996361:         N      VIIRS          n       1      241.2     0.0    0.2352
19996362:         N      VIIRS          n       1      290.0     0.0    0.1880
19996363:         N      VIIRS          n       1      289.7     0.0    0.1960
af[, sum(frp), by = acq_date]  # returns table with 365 rows
       acq_date       V1
  1: 2017-01-01 373368.2
  2: 2017-01-02 513955.2
  3: 2017-01-03 547550.3
  4: 2017-01-04 559185.6
  5: 2017-01-05 532717.4
 ---                    
361: 2017-12-27 474894.4
362: 2017-12-28 498746.5
363: 2017-12-29 466054.3
364: 2017-12-30 507827.3
365: 2017-12-31 373136.7
af[, `:=`(s, sum(frp)), by = acq_date]  # adds new column to af 
af[, .N, by = confidence]  # a bit faster than table(af$confidence)
   confidence        N
1:          n 16504189
2:          h  1256082
3:          l  2236092
af[, .(confidence, frp)]  # select columns
          confidence  frp
       1:          n  1.7
       2:          n  2.1
       3:          n  4.4
       4:          n  2.9
       5:          n  2.9
      ---                
19996359:          h 11.6
19996360:          h 17.4
19996361:          n 11.6
19996362:          n 11.6
19996363:          n 17.4

Note that data.table adds/updates/deletes columns by reference, as explained in this vignette. While this speeds up some operations considerably, in practice it means that one has to be careful when making a copy of a data.table object.

DT <- data.table(x = 1:3, y = c(4, 8, 12))
newDT <- DT
newDT[, `:=`(x, NULL)]
DT  # is modified by reference
    y
1:  4
2:  8
3: 12

To avoid this problem, use function copy, which creates a deep copy of DT and operates on that copy.

DT <- data.table(x = 1:3, y = c(4, 8, 12))
newDT <- copy(DT)
newDT[, `:=`(x, NULL)]
DT  # is not modified
   x  y
1: 1  4
2: 2  8
3: 3 12

3 Converting from vector to raster

The goal of rasterize is to convert (more precisely, re-sample) a set of georeferenced points into raster format, where the raster can have an arbitrary extent and spatial resolution. Function rasterize from the raster package works with all geometries but is slow for large data sets.

af <- data.table::fread(file.path(getwd(), "datasets", "viirs-fires-2017-world.csv"))
afpt <- af[latitude > 36 & latitude < 43 & longitude < -6 & longitude > -10]  # Continental Portugal
latlon <- CRS("+proj=longlat +datum=WGS84")
y <- raster(crs = latlon, resolution = 0.01, xmn = -10, xmx = -6, ymn = 36, ymx = 42)
r <- rasterize(x = data.frame(lon = afpt$longitude, lat = afpt$latitude), y = y, 
    field = afpt$frp, fun = max)
mapview(r)

In the example above, processing time was reasonable because the data set was rather small. If we consider a larger area (Africa), the number of points is much larger and rasterize becomes pretty slow.

afaf <- af[latitude > -35 & latitude < 0 & longitude > 0 & longitude < 40]  # S Africa
latlon <- CRS("+proj=longlat +datum=WGS84")
y <- raster(crs = latlon, resolution = 0.01, xmn = 0, xmx = 40, ymn = -35, ymx = 0)
system.time({
    r <- rasterize(x = data.frame(lon = afaf$longitude, lat = afaf$latitude), y = y, 
        field = afaf$frp, fun = max)
})  # 124.16 
   user  system elapsed 
 106.14    0.66  109.78 
mapview(r)

To speed up this operation, perhaps the best option is to run in R algorithms from the Geospatial Data Abstraction Library (GDAL), available through package gdalUtils. However, this relies on functions external to R, and installation can be tricky.
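
As a rough, untested sketch of that route: point data like the active fires would first have to be written to an OGR-readable format, so the call below rasterizes the flammable-areas shapefile from section 2.1 instead (argument names mirror the gdal_rasterize command-line flags, and the output file name is arbitrary):

r.gdal <- gdalUtils::gdal_rasterize(src_datasource = file.path(getwd(), "datasets", "Flam2015v4.shp"), 
    dst_filename = "flam_gdal.tif", a = "FID_COS", tr = c(50, 50), output_Raster = TRUE)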

As an alternative, one can write a new function that works well with large data sets. The following function relies on nn2 from package RANN. This provides a very fast way of finding nearest neighbours in N dimensions, and it is used below to determine, for each point, its closest pixel. The value at the point is then assigned to that pixel, as long as the coordinates of the point fall within the pixel boundary.

myrasterize <- function(x, y, field, background = NA, fun = max, radius = min(res(y))) {
    colnames(x) <- c("lon", "lat")
    if (fun(0:1) == max(0:1)) {
        x <- x[order(field), ]
        field <- field[order(field)]
    }
    if (fun(0:1) == min(0:1)) {
        x <- x[order(field, decreasing = TRUE), ]
        field <- field[order(field, decreasing = TRUE)]
    }
    # for each x point, look for the closest pixel, within radius
    coordy <- coordinates(y)
    knn <- nn2(data = coordy, query = x, k = 1, searchtype = "radius", radius = radius)  # 
    idys <- knn$nn.idx[, 1]  # index of the pixel closest to each point of x (0 if none within radius)
    idys[idys == 0] <- NA
    # remove x observations that do not have pixel neighbor or do not fall into the
    # pixel
    DT <- data.table(idys, idxs = 1:nrow(x), x, lony = coordy[idys, 1], laty = coordy[idys, 
        2])  # lony, laty: coordinates of the nearest pixel
    DT <- DT[!is.na(idys) & abs(lon - lony) < res(y)[1]/2 & abs(lat - laty) < res(y)[2]/2]
    # assign resampled values to y
    valsnew <- rep(background, ncell(y))  # initialize
    valsnew[DT$idys] <- field[DT$idxs]
    values(y) <- valsnew
    return(y)
}
system.time({
    r <- myrasterize(x = data.frame(lon = afaf$longitude, lat = afaf$latitude), y = y, 
        field = afaf$frp, fun = max)
})  # 26.09 

For polygons, fasterize from the package with the same name is much faster than rasterize. The last command compares the resulting raster with the original vector data over the extent of concelho.

system.time({
    flam.sf <- sf::st_read(dsn = file.path(getwd(), "datasets", "Flam2015v4.shp"))
})
Reading layer `Flam2015v4' from data source `Y:\Aulas\CURSOS_R\isa-setembro-2018-grandes-cdg\aulas\datasets\Flam2015v4.shp' using driver `ESRI Shapefile'
Simple feature collection with 56498 features and 4 fields
geometry type:  POLYGON
dimension:      XY
bbox:           xmin: -118698.4 ymin: -296662.2 xmax: 162119.9 ymax: 276059.5
epsg (SRID):    NA
proj4string:    +proj=tmerc +lat_0=39.6682583333333 +lon_0=-8.133108333333331 +k=1 +x_0=0 +y_0=0 +ellps=GRS80 +units=m +no_defs
   user  system elapsed 
   1.79    0.65    4.31 
ext <- st_bbox(flam.sf)
RES <- 50
r <- raster(xmn = ext$xmin, xmx = ext$xmax, ymn = ext$ymin, ymx = ext$ymax, crs = st_crs(flam.sf)$proj4string, 
    resolution = RES)
rflam <- fasterize(flam.sf, r, field = "FID_COS")
# 
caop <- sf::st_read(dsn = file.path(getwd(), "datasets", "Cont_AAD_CAOP2015.shp"), 
    quiet = TRUE)
caop.proj <- sf::st_transform(caop, crs = sf::st_crs(flam.sf))
concelho <- caop.proj[caop.proj$Concelho == "MONCHIQUE", ]
mapview(rflam) + mapview(sf::st_crop(flam.sf, sf::st_bbox(concelho)))