… the seedling is doing surprisingly well at the end of the summer. No rain yet, but it seems quite green, and all the parameters I have been measuring increased substantially.

located at: Longitude 35d10'18.533"E and Latitude 30d47'33.321"N

Elevation -40 m

Date visited | canopy width (cm.) | height, max (cm.) | trunk diameter (cm.) |
---|---|---|---|
2/4/2011 | 58 | 22 | 0.14 |
28/1/2012 | 68 | 30 | 0.8, 0.7, 0.63 |
23/8/2012 | 72 | 35 | 0.8, 0.76, 0.57 |
17/11/2012 | 80 | 38 | 0.91, 0.75, 0.58 |
16/8/2013 | 82 | 51 | 0.96, 0.88, 0.67 |
18/4/2014 | 93 | 54 | 1.22, 0.95, 0.69 |
11/10/2014 | 112 | 70 | 1.40, 0.98, 0.72 |


I wrote about creating isohyetal contours some years ago. This time I’ll try to improve my interpolation by using the powerful, open source R statistics program.

We begin with the total annual precipitation measurements at discrete locations – rain gauges. Using this point data we interpolate the rainfall everywhere in the region. The interpolation method I choose is kriging, which works with a variance distribution function, the variogram, that we prepare as part of the procedure. Furthermore, the geostatistical procedure called co-kriging allows us to add a second, independent variable to the analysis to predict the interpolated values even better. Co-kriging is applicable only when we know that there is some correlation between the main, dependent variable and the auxiliary variable. Intuitively, we know that in many areas rainfall is heavier in the mountains and lighter at lower elevations. So I will examine the elevation of the rain gauges as the independent auxiliary variable.

We begin by starting R, loading some required libraries, and pulling in the rain gauge data.

```r
library(sp)
library(rgdal)
library(gstat)

# Load precipitation data
rain.data <- "raindata_ttl_2014.csv"
gauges <- read.csv(rain.data, col.names=c("gauge_id","precip","x","y"))
```

Now I load a border line, just for spatial reference, and make a simple bubble plot of the data.

```r
# Put data into a Spatial Data Frame
gauges.xy <- gauges[c("x","y")]
coordinates(gauges) <- gauges.xy
border_il <- readOGR('border_il.shp', 'border_il')
# Set the correct CRS for the gauges data
itm_proj4 <- proj4string(border_il)
proj4string(gauges) <- itm_proj4
class(gauges)
summary(gauges)
# Have a look (symbol size scaled by precipitation)
plot(gauges, pch=16, col='blue', cex=gauges$precip/20)
plot(border_il, add=TRUE)
```

Now we need to determine the variogram. We start by plotting some candidate variogram models. The parameters that we need to set are “cutoff” and “width”. By default, the cutoff (how far from each prediction point to go to look for sample points) is chosen as one third of the diagonal of the whole analysis region, and the width (the size of the distance bins into which point pairs are grouped, up to the cutoff) is 1/15 of the cutoff. We can get an idea of how these parameters influence the model variograms by plotting some possible combinations. Once we have a good-looking variogram, we use those parameters in the gstat variogram() function to create the model variogram:
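To see where those defaults come from, here is a minimal Python sketch of the arithmetic (the bounding box values are made up for illustration; gstat computes this internally from the data extent):

```python
import math

# Hypothetical bounding box of the analysis region, in meters
x_min, x_max = 150000.0, 250000.0
y_min, y_max = 350000.0, 500000.0

# gstat's defaults: cutoff = diagonal / 3, width = cutoff / 15
diagonal = math.hypot(x_max - x_min, y_max - y_min)
cutoff = diagonal / 3
width = cutoff / 15

print(round(cutoff), round(width))  # 60093 4006
```

So for a region roughly 100 km by 150 km, pairs of gauges up to about 60 km apart are considered, grouped into distance bins of about 4 km.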

```r
# Create a variogram
# Display some possible models
plot(gstat::variogram(precip ~ 1, gauges, cutoff=20000, width=2000))
plot(gstat::variogram(precip ~ 1, gauges, cutoff=30000, width=4000))
plot(gstat::variogram(precip ~ 1, gauges, cutoff=70000, width=5000))
# Once we have a visually good variogram, use those parameters:
vg <- gstat::variogram(precip ~ 1, gauges, cutoff=70000)
```

Now we have a model variogram, and we want to let R actually fit a curve to the binned point pairs. The following code fits the variogram, prints the final fitted parameters, and gives us the result below:

```r
vg.fit <- gstat::fit.variogram(vg, vgm(700, "Exp", 20000, 300))
# Here are the fitted variogram parameters:
print(vg.fit)
plot(vg, vg.fit)
```

```
  model    psill    range
1   Nug 355.0346     0.00
2   Exp 421.1879 12672.45
```

With that we are now prepared to run the kriging and produce a raster of predicted rainfall throughout the region.

```r
# Create a grid onto which we will interpolate:
# First get the range of the data
x.range <- as.integer(range(gauges@coords[,1]))
y.range <- as.integer(range(gauges@coords[,2]))
# Now expand to a grid with 100 meter spacing:
grd <- expand.grid(x=seq(from=x.range[1], to=x.range[2], by=100),
                   y=seq(from=y.range[1], to=y.range[2], by=100))
# Convert to SpatialPixels class
coordinates(grd) <- ~ x+y
gridded(grd) <- TRUE
proj4string(grd) <- itm_proj4
# Perform ordinary kriging prediction:
# make a gstat object to hold the krige setup (it carries vg.fit), then predict:
g <- gstat(id="Precipitation", formula=precip ~ 1, data=gauges, model=vg.fit)
precip.krige <- predict(g, newdata=grd)
```

The predict() function chooses ordinary kriging, based on the basic parameters we passed to the function. Now we can go ahead and plot the result. I load the RColorBrewer library to take advantage of the full set of color palettes available, and I use the spplot function from the sp package for plotting, since it easily adds contours (isohyetal lines) to the plot.

```r
# Some parameters for plotting
par(font.main=2, cex.main=1.5, cex.lab=0.4, cex.sub=1)
# Use the RColorBrewer library for color ramps
library(RColorBrewer)
precip.pal <- colorRampPalette(brewer.pal(7, name="Blues"))
# Plot the krige interpolation
spplot(precip.krige, zcol='Precipitation.pred', col.regions=precip.pal,
       contour=TRUE, col='black', pretty=TRUE,
       main="Interpolated Precipitation - 2014", sub="Ordinary Kriging",
       labels=TRUE)
```

The plot of krige interpolated precipitation looks like this:

Now, as promised, let’s continue on to elevation data. We import a DEM and use the R over() function to attach an elevation value to each gauge. Then we check correlation between the annual precipitation and elevation. These are the steps:

```r
# Now start with elevation data
elev <- readGDAL('dem_negev.tif')
class(elev)
summary(elev)
# Add elevation values to each gauge
gauges.elev <- over(gauges, elev)
precip.elev <- cbind(gauges@data, gauges.elev)
coordinates(precip.elev) <- gauges.xy
proj4string(precip.elev) <- itm_proj4
# Check that we have the elevations in a spatial data frame;
# the column "band1" contains the elevation values
names(precip.elev)
# Now check for correlation between precipitation and elevation
cor.test(precip.elev$band1, precip.elev$precip)
```

Checking the output of the cor.test() function we see:

```
	Pearson's product-moment correlation

data:  precip.elev$band1 and precip.elev$precip
t = 0.331, df = 96, p-value = 0.7414
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
 -0.1657725  0.2306346
sample estimates:
       cor
0.03375868
```

Well, a correlation of 0.033 means essentially **no correlation**. Correlation values range from 1 (perfect correlation) to -1 (perfect inverse correlation), and values near 0 indicate almost no relation at all. So for this last season the spread of the total annual rainfall had **no connection to elevation** across the Negev. We will settle for the above kriging map as a good estimate of the distribution of rainfall for 2014. End of story.
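To make the interpretation concrete, here is a minimal pure-Python sketch of the Pearson correlation coefficient that cor.test() reports (the data values below are invented for illustration; the analysis itself stays in R):

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A perfectly linear relation gives r = 1 ...
print(pearson_r([0, 10, 20, 30], [100, 200, 300, 400]))  # ~1.0
# ... and a perfectly inverse relation gives r = -1
print(pearson_r([0, 10, 20, 30], [400, 300, 200, 100]))  # ~-1.0
```

Our 0.033 sits right next to zero on that scale, which is why elevation is useless here as an auxiliary variable.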

But we don’t give up so easily. Let’s go back a few years, to the winter of 2010. Anyone dealing with rainfall and flooding in the Negev desert remembers that season and the storms of January. We had once-in-50-year floods in several wadis throughout the south, due to very heavy rains in the Negev mountains. Let’s search through the archives to pull out the annual rainfall data for that year, and try to correlate it with elevation. I rerun all the above steps for the gauge data for 2010. Of course the variogram will be different, and I obtain the ordinary kriging result below:

Now I load the same elevation dataset and as above I test correlation between the precipitation data and elevation values:

```r
cor.test(precip.2010.elev$band1, precip.2010.elev$precip)
```

```
	Pearson's product-moment correlation

data:  precip.2010.elev$band1 and precip.2010.elev$precip
t = 6.6778, df = 71, p-value = 4.552e-09
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
 0.4562349 0.7447525
sample estimates:
      cor
0.6211078
```

Well, a correlation of 0.62 is nothing to write home about, but it does indicate a reasonable level of correlation. Additionally, the p-value is very low, meaning we can be quite confident that the correlation is real. So let’s forge ahead and use the elevation values to (hopefully) improve our precipitation estimate.

We first need to redo the gstat object to hold both variables:

```r
# Recreate g, the gstat object, with a second variable
rm(g)
g <- gstat(id="Precip", formula=precip ~ 1, data=precip.2010.elev,
           model=vgm(psill=2000, model="Exp", range=100000, nugget=50))
g <- gstat(g, id="Elevation", formula=band1 ~ 1, data=precip.2010.elev,
           model=vgm(psill=100000, model="Exp", range=50000, nugget=10))
vg <- gstat::variogram(g)
# Make the multivariable variogram (Linear Model of Coregionalization)
vg.fit <- fit.lmc(vg, g, vgm(psill=2000, model="Exp", range=100000, nugget=50))
# Graph the model and experimental variograms
plot(vg, vg.fit)
```

Note that we use the fit.lmc() function to create a fitted variogram with multiple variables, each variable with its own set of sill and range parameters. And we must add a non-zero nugget value to ensure that the Linear Model of Coregionalization will be chosen.

The plot() shows us three variograms: one for elevation, one for precipitation, and a third for the covariance of the two variables together.

What’s left is to rerun the predict() function, and note that it chooses co-kriging:

```r
# Now predict() should create a co-kriging interpolation
precip.2010.cokrige <- predict.gstat(vg.fit, newdata=grd)
```

```
Linear Model of Coregionalization found. Good.
[using ordinary cokriging]
```

and here’s our new estimate of distribution of precipitation for 2010. Compared to the ordinary kriging map for 2010 above, we can identify an obvious change of rainfall distribution in the western mountain regions:

Here’s the updated measurements of my acacia seedling:

located at: Longitude 35d10'18.533"E and Latitude 30d47'33.321"N

Elevation -40 m

Date visited | canopy width (cm.) | height, max (cm.) | trunk diameter (cm.) |
---|---|---|---|
2/4/2011 | 58 | 22 | 0.14 |
28/1/2012 | 68 | 30 | 0.8, 0.7, 0.63 |
23/8/2012 | 72 | 35 | 0.8, 0.76, 0.57 |
17/11/2012 | 80 | 38 | 0.91, 0.75, 0.58 |
16/8/2013 | 82 | 51 | 0.96, 0.88, 0.67 |
18/4/2014 | 93 | 54 | 1.22, 0.95, 0.69 |
11/10/2014 | 112 | 70 | 1.40, 0.98, 0.72 |


GRASS numbers the directions in a drainage (flow direction) raster from 1 to 8, starting eastward and going counterclockwise. ArcGIS also numbers the directions from the east, but clockwise, and in powers of 2: 1, 2, 4, 8, 16, etc. This allows ArcGIS to encode “in-between” directions when multiple surrounding cells have the same elevation and the flow direction is not unique. GRASS, on the other hand, handles this situation by “looking further” along the flow direction to find where the overall flow is going, and then determines which of the multiple directions to choose. This is called a least-cost search algorithm, and it should produce better results, especially in flat and nearly flat regions.

So I want to create the flow_dir raster in GRASS, but the results need to use the ArcGIS numbering scheme. This is accomplished simply with a reclass command. Here’s the reclass rules file I use:

```bash
echo "1 -1 = 128
2 -2 = 64
3 -3 = 32
4 -4 = 16
5 -5 = 8
6 -6 = 4
7 -7 = 2
8 -8 = 1
0 = 255" > f_dir_reclass
```

First, you might ask: why the negative numbers? GRASS uses a negative direction to indicate that the flow from this cell goes off the map, but the absolute value still indicates the direction. So both 4 and -4 mean west, which in Arc terms is 16. Also, GRASS uses 0 to indicate a sink – no flow at all from this cell. ArcGIS implements a method to ensure that single-cell sinks are not used, by combining values from different flow directions. So I simply set this situation to 255, the sum of all the other directions.
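The same logic as the reclass rules, written out as a small Python sketch (purely an illustration of the mapping, not part of the GRASS workflow):

```python
# GRASS drainage codes (absolute value) -> ArcGIS power-of-two codes,
# taken from the reclass table above
grass_to_arc = {1: 128, 2: 64, 3: 32, 4: 16, 5: 8, 6: 4, 7: 2, 8: 1}

def to_arc(grass_dir):
    """Convert one GRASS flow-direction value to the ArcGIS scheme."""
    if grass_dir == 0:
        # A sink: map to 255, the sum of all eight direction codes
        return sum(grass_to_arc.values())
    # Negative values mean "flow leaves the map"; the direction is abs()
    return grass_to_arc[abs(grass_dir)]

print(to_arc(-4))  # 16: west in both schemes
print(to_arc(0))   # 255
```

Note that 255 really is the sum of all eight power-of-two codes (1+2+4+...+128), so it is a value no single direction can produce.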

Now I need only do:

`r.reclass input=f_dir output=f_dir_arc rules=f_dir_reclass`

to get a reclassed raster in the Arc style of numbered directions.

located at: Longitude 35d10'18.533"E and Latitude 30d47'33.321"N

Elevation -40 m

Date visited | canopy width (cm.) | height, max (cm.) | trunk diameter (cm.) |
---|---|---|---|
2/4/2011 | 58 | 22 | 0.14 |
28/1/2012 | 68 | 30 | 0.8, 0.7, 0.63 |
23/8/2012 | 72 | 35 | 0.8, 0.76, 0.57 |
17/11/2012 | 80 | 38 | 0.91, 0.75, 0.58 |
16/8/2013 | 82 | 51 | 0.96, 0.88, 0.67 |
18/4/2014 | 93 | 54 | 1.22, 0.95, 0.69 |
11/10/2014 | 112 | 70 | 1.40, 0.98, 0.72 |

I’ll look forward to another visit through the winter – hopefully after some rainfall – when I expect to see even more growth.

The liblas set of utilities includes lasinfo, which lists all the metadata in a *.las file. Among the details is the “Bounding Box” of that file – the minimum and maximum X and Y values of the points. So by grepping for the term “Bounding Box” and filtering out only the values, I could get these coordinates into a comma-separated-values text file. Here’s the command (run in the directory where the las files are located):

```bash
for f in *.las; do
    echo -n $f, >> bboxes.csv
    echo `lasinfo $f | grep "Bounding Box" | awk '{print $3,$4,$5,$6}'` >> bboxes.csv
done
```

I use a “for” expression to loop through all the *.las files. The first echo statement puts the file name, followed by a comma, into my output file “bboxes.csv”. The “-n” flag causes echo **not** to write a newline. Then the lasinfo command outputs the bounding box row, pushes it through the “awk” expression to get the four corner values, and feeds them into the same bboxes.csv file. Here’s what the result looks like:

```
pt000001.las,215894.56, 489948.39, 216959.19, 491281.06
pt000002.las,215381.44, 490135.64, 215916.49, 491311.03
pt000003.las,218052.39, 498030.01, 220082.14, 499775.40
.....
```

For convenience, I add a row of column header names – file,min_x,min_y,max_x,max_y – to the CSV file. And I make sure there are no stray spaces between the columns, with a quick search and replace in a text editor. (Beware: spaces will cause spatialite to import the data as text, rather than real numbers!)
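That cleanup could also be scripted; here is a minimal Python sketch (the sample row is copied from the output above, and the header names are the ones chosen for the import):

```python
# Add a header row and strip the stray spaces after the commas, so
# spatialite imports the coordinate columns as numbers rather than text.
rows = ["pt000001.las,215894.56, 489948.39, 216959.19, 491281.06"]

header = "file,min_x,min_y,max_x,max_y"
cleaned = [header] + [r.replace(", ", ",") for r in rows]

print(cleaned[1])  # pt000001.las,215894.56,489948.39,216959.19,491281.06
```

In a real run, `rows` would be the lines read from bboxes.csv and `cleaned` would be written back out.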

Now I start spatialite_gui, and import that csv file into a table “bboxes” in a new database. Next I create a new table “las_tiles” for the polygon tile indices as follows:

```sql
CREATE TABLE las_tiles (
    pk_uid INTEGER PRIMARY KEY AUTOINCREMENT,
    file TEXT
);
SELECT AddGeometryColumn('las_tiles', 'geometry', 4326, 'POLYGON', 2);
```

Now I can reach for the spatialite function **BuildMBR()** to create polygons from each set of bounding box corners, as follows:

```sql
INSERT INTO las_tiles (file, geometry)
SELECT file, BuildMBR(min_x, min_y, max_x, max_y, 4326)
FROM bboxes;
```

And with that, I now have my polygon layer of the coverage of each of the *.las files. Now on to some “real” work.

I tried using, for example, GPSEssentials, an Android app with all the bells and whistles. I’m not endorsing any one location app but this one (like some others) stores all its data into sqlite files. So I plugged my phone into a USB socket and copied the “waypoints” file from the gpsessentials folder over to my computer. Then I renamed the file to add the popular *.sqlite extension, and opened it in Spatialite_gui. For our purposes there are three tables of interest: Waypoint, Track, and TrackElement. The Waypoint table contains, of course, the waypoints with Longitude and Latitude coordinates, along with altitude, description, etc. The Track table is a list of tracks, and the TrackElement table is a crumb trail of Longitude/latitude values for each location along each of the tracks.

To transform these tables into true spatial layers, for use in GIS, you must first make the sqlite DB spatial. So:

`SELECT InitSpatialMetaData();`

Now we can use the Longitude/Latitude values in the waypoints table to create actual geometries. First call the AddGeometryColumn function:

```sql
SELECT AddGeometryColumn('Waypoint', 'geometry', 4326, 'POINT', 'XY');
UPDATE Waypoint SET geometry = MakePoint(longitude, latitude, 4326);
```

and Waypoint becomes a point layer ready to be opened in QGIS, for example, or exported to a shapefile.

But what about the tracks? Here we need an additional step. We make the Track **and** TrackElement tables spatial, then use the TrackElement points to create LINESTRING features in the Track geometry column:

```sql
SELECT AddGeometryColumn('Track', 'geometry', 4326, 'LINESTRING', 'XY');
SELECT AddGeometryColumn('TrackElement', 'geometry', 4326, 'POINT', 'XY');
UPDATE TrackElement SET geometry = MakePoint(longitude, latitude, 4326);
```

and now do the update on the Track table:

```sql
UPDATE Track SET geometry = (
    SELECT MakeLine(te.geometry)
    FROM TrackElement AS te
    WHERE te.trackID = Track._id
    GROUP BY te.trackID
);
```

and you’re done. The Track table is now a true line feature, and can be exported as a shapefile, etc. The column names above are as they appear in the sqlite tables created by this particular app, so you might have to replace some of the ingredients. But the recipe should be similar. So get out your spatial mixing bowl, and your phone becomes a handy GIS tool.


However, the move raised a public controversy. All four of the poets are Ashkenazi, of Eastern European descent. Why no Sephardic poets? Silvan Shalom asked: where are Yehuda Halevi, Shabazi or Ibn Gvirol? And other MPs of both Sephardic and Ashkenazi backgrounds voiced similar objections. Of course the debate also brought on a flurry of parodies and satirical designs for the new notes, with portraits of anything from pop singers to ninja turtles…

One talk show hosted Professor Roni Reich from the Archaeology department of Hebrew University. He was a leading member of the committee appointed by the Bank of Israel, and responsible for initiating the idea of depicting well-known cultural figures rather than politicians. He explained what led him and the rest of the committee to their decision. As to the question of cultural figures of North African descent, he thought that the objections were legitimate, and that in 10 or 12 years’ time, when the next set of bills is designed, careful attention should be paid to the backgrounds of the people chosen to decorate the currency. The current designs were made public months ago, and no objections were voiced; now, with the bills ready to go to print, it’s too late for changes.

As soon as I saw Professor Reich’s face, I recognized him from 40 years ago, when he was just beginning his academic career, and heading the excavations at Tel Ashdod. With his bushy hair and sharp features, he would bound from wall to pit, giving us volunteer diggers instructions where to shovel and when to brush. He always had a joke or good word to keep our spirits up through the hot summer days. Most of his professional career he dedicated to digs in and around Jerusalem. Now, after decades of sorting through pottery shards and ancient coins, writing books, and bringing archaeology to the “masses” he found himself helping design our new currency.

Luckily, we had ordered LIDAR coverage of the whole area in advance, so I have a high resolution (1 meter per pixel) DTM of the reservoir and the streams which feed into it during flash floods. I first visually examined the elevation values at the spillover point, where the reservoir drains when it fills, and at the lowest point in the center of the depression. I then ran the GRASS module r.lake, feeding it the x,y coordinates of the deepest point as the “seed”, and the elevation just below the spillover as the water level parameter. The result gave me the lake’s depths as a raster. Here’s the result:

Now I ran r.univar on the lake raster, to find

- the maximum value = water depth
- the sum of all pixels = total volume of the reservoir
- the number of pixels = surface area covered by water

Since the region settings were 1 meter per pixel, the number of cells = area in sq.m. and the sum of all cells = volume in cubic meters.
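In other words, converting the r.univar numbers to physical units is just multiplication by the pixel area. A small Python sketch, with made-up numbers standing in for the r.univar output:

```python
# With a 1 m x 1 m region resolution, each cell covers 1 sq.m.
pixel_area = 1.0 * 1.0      # nsres * ewres, in sq.m

# Hypothetical r.univar output for a flooded lake raster:
n_cells = 52000             # number of non-null (flooded) cells
sum_depths = 98000.0        # sum of all cell values (depths in meters)

surface_area = n_cells * pixel_area   # sq.m covered by water
volume = sum_depths * pixel_area      # cubic meters of water

print(surface_area, volume)  # 52000.0 98000.0
```

With a coarser region (say 5 m pixels) the same counts would be multiplied by 25 instead of 1, which is why the script below reads the resolution from g.region rather than hard-coding it.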

Now I wanted to run the same procedure for water levels stepping down from the top spillover level to the bottom of the reservoir, as it dries up due to leaching and evaporation. A script suggested itself, and here’s the `depth_volume.sh` script that I conjured up:

```bash
#!/bin/bash
# Author: micha, 7/11/2012; copyright: GPL >= 2
# Purpose: calculate volume, area and depth of a reservoir
#   from a DEM raster and given water level using r.lake;
#   calculate volume and area with r.univar;
#   export the results to a text file for use in gnuplot
# Usage: depth_volume.sh {dem raster} {water level} {seed location as x,y}

if [ -z "$GISBASE" ] ; then
    echo "You must be in GRASS GIS to run this program." 1>&2
    exit 1
fi
if [ $# -lt 3 ] ; then
    echo "Some inputs not defined. Usage: $0 dem_raster water_level seed_x,seed_y"
    exit 1
fi

DEM=$1
LEVEL=$2
SEED=$3
g.message "Flooding lake at water level $LEVEL"

# Get resolution to calculate area and volume based on pixel size
NSRES=`g.region -g | grep nsres | cut -d= -f2`
EWRES=`g.region -g | grep ewres | cut -d= -f2`
PIXEL=`echo $NSRES*$EWRES | bc`

# Run r.lake
r.lake --quiet --o $DEM wl=$LEVEL xy=$SEED lake="$DEM"_tmp

# Collect needed numbers from r.univar
eval `r.univar -g "$DEM"_tmp`
VOLUME=`echo $sum*$PIXEL/1000 | bc`
DEPTH=$max
AREA=`echo $n*$PIXEL | bc`

# Dump results into a text file
echo "$LEVEL $DEPTH $VOLUME $AREA" >> depth_volume.txt
```

The script takes three inputs: the DEM, water level and the X-Y coordinates for seeding the lake (as mentioned I chose the lowest spot in the depression). After running r.lake, the r.univar output is parsed to collect the needed numbers, and the results are dropped into a text file.

Now I ran this script in a loop with all the water levels that interested us. We wanted water volumes for each 1/2 meter drop in the water surface, so:

```bash
for level in 27 26.5 26 25.5 25 24.5 24 23.5 23 22.5; do
    depth_volume.sh dem $level 194365,397295
done
```

The depth_volume.txt file that is created has water level, volume and surface area for each of the above elevations. Now, to create a depth-volume curve, I could pull this text file into LibreOffice calc for designing a fancy chart. But for a quick and simple display, I used gnuplot with these commands:

```
set title "Depth-Volume Curve"
set xlabel "Volume 1000 m3"
set ylabel "Depth m."
set x2label "Area sqm."
set key left box
set x2tics nomirror
set autoscale x2
set grid y
set terminal png enhanced size 800,600
set output "depth_volume.png"
plot "depth_volume.txt" using 3:2 title "Volume 1000 m3" with lines lw 3, \
     "depth_volume.txt" using 4:2 title "Area m2" with lines axes x2y1
```

The graph looks like this:

Once I have set up the GRASS script and the gnuplot command file, running the model with other parameters or in some other location is a piece of cake.

The basics of the New Labeling are already included in the QGIS Documentation. Data defined label placement has been improved and expanded compared with the old labeling. Now we can have columns in the layer’s attribute table for the rotation of the label, buffer size and color, and even text alignment.

In Nathan W’s blog he introduced expression based labeling. Now we can construct labels using combinations of math functions and string functions to manipulate any attribute column the way we want. Currently, in versions 1.8.x, find and click the “ABC” button to access the “Layer Label settings” window. Stay tuned for the next version of QGIS, when this New Labeling will become the default and be merged into the regular Layer Properties window. In the Layer Label settings window, after checking the checkbox “Label this layer with”, you can click the ellipsis button […] to open the full expression builder window, and use your imagination to create a variety of informative labels.

Here’s a classic example. If I add an Area column (using the Field calculator’s $area function), the values I get, in a projected coordinate system, are square meters. But suppose I want to label a polygon layer with hectares? The label expression will be:

**"Area" / 10000**

But that’s not enough. This expression returns a real number with a long string of digits after the decimal point. So we instead can enter into the expression calculator:

**toint( "Area" / 10000)**

to round the number to an integer. And one more improvement: what about adding the “ha” tag as units to the labels?

**toint( "Area" / 10000) || ' ha'**

We also have the ability now to create conditional labels, thanks to Martin Dobias. One example appears in Underdark’s blog. Here’s another straightforward example: I have a point layer of rain gauges. I want to label all those gauges with precipitation greater than 2 mm. No label at all should appear when the gauge shows < 2 mm.

**CASE WHEN ("precip" >= 2) THEN "precip" END**

Of course several **WHEN {condition} THEN {label}** pairs can be chained together to get several different categories of labels.
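For instance, a chained expression along these lines (a hypothetical three-tier version of the rain gauge example above, with an invented 10 mm threshold) would mark heavy rainfall with a prefix, label moderate gauges with their value, and still leave dry gauges unlabeled:

```
CASE
  WHEN "precip" >= 10 THEN 'Heavy: ' || "precip"
  WHEN "precip" >= 2 THEN "precip"
END
```

Gauges under 2 mm match no WHEN clause, so they get no label at all.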