-
R Days in Rennes, France
The R Days, the annual event dedicated to R in France, was held this year in Rennes.
You can read the abstract of the talk I gave on using AI to predict network saturation.
-
& or &&, that is the question
The R language has two AND operators: `&` and `&&`. Knowing the difference is key when working with `if` clauses. The R documentation states that:
"The longer form evaluates left to right examining only the first element of each vector. Evaluation proceeds only until the result is determined. The longer form is appropriate for programming control-flow and typically preferred in if clauses."
Consider the following variables:
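A minimal sketch; the values of `x` and `y` are assumptions, chosen to match the results described next:

```r
# Assumed example vectors (values chosen to match the results described below)
x <- c(1, 2)
y <- c(2, 2)

# Element-wise AND: every pair of elements is compared
x == 1 & y == 2
#> [1]  TRUE FALSE

# Short-circuit AND: only the first element of each operand is examined
# (note: since R 4.3 this raises an error when an operand has length > 1;
#  older versions silently used only the first elements)
x == 1 && y == 1
#> [1] FALSE

x == 1 && y == 2
#> [1] TRUE
```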
The first comparison produces a logical vector of size 2: the evaluation is done for each pair of elements.
The second comparison only tests the first element of each vector (i.e. `x[[1]] == 1 & y[[1]] == 1`) and produces a single value (`TRUE & FALSE = FALSE`).
The last comparison produces `TRUE`, as the first value of `x` is equal to 1 and the first value of `y` is equal to 2.
Note that if the comparison is done on vectors of different lengths, operator `&` will throw a warning. Operator `&&` will not complain.
-
Back-up Anki with Dropbox
Anki is one of the best applications to help structure your learning with flash cards. Anki Web comes as a complement as it backs up your decks and cards on the cloud and enables to study on multiple synchronized devices (computers and phones). Anki Web has a retention period of 3 months: if you do not use the service for more than 3 months, your decks are deleted from the cloud. You end up then relying on the local version(s) on your device(s).
In this post, I wanted to share my experience to ensure a consistent and reliable back-up of the Anki decks using dropbox on Windows.
-
Rolling median with Azure Data Lake Analytics
Azure Data Lake Analytics (ADLA in short) provides a rich set of analytics functions to compute an aggregate value over a group of rows. The typical example is the rolling average over a specific window, for instance a centered window of size 11 (5 preceding rows, the current row, and 5 following rows), with the grouping made over the `site` field. However, the median function does not support the `ROWS` option, so a rolling median cannot be computed straight out of the box with ADLA. The aim of this post is to:
- show how the rolling median can be calculated on ADLA using a mix of basic `JOIN` operations
- discuss the finer points of the median functions in ADLA
- compare with an R implementation (a sketch follows this list)
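For illustration, a minimal sketch of a grouped, centered rolling median in R; the data, column names, and edge handling are assumptions rather than the implementation discussed in the post:

```r
# Hypothetical data: one measurement per site and time step
set.seed(42)
df <- data.frame(
  site  = rep(c("site_a", "site_b"), each = 50),
  value = runif(100)
)

# Centered rolling median of size 11 (5 preceding, current row, 5 following),
# computed independently for each site. runmed() handles the window edges
# with its default "median" end rule.
df$rolling_median <- ave(df$value, df$site,
                         FUN = function(v) stats::runmed(v, k = 11))
```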
-
Custom sorting with DT
I came across a practical case a couple of days ago where the row and column ordering provided out of the box by the R DT package was not enough.
The DT package provides an R wrapper around the DataTables JavaScript library (powered by jQuery). DT can be used in R Markdown documents as well as in Shiny.
The goal was to display a table containing columns reporting the bandwidth consumption for some sites. The bandwidth is expressed in bits per second (bps). When quantifying large bit rates, decimal prefixes are used. For example:
- 1,000 bps = 1 Kbps (one kilobit or one thousand bits per second)
- 1,000,000 bps = 1 Mbps (one megabit or one million bits per second)
- 1,000,000,000 bps = 1 Gbps (one gigabit or one billion bits per second)
The first approach was to take the numeric value and convert it to a string using R, with a basic implementation along the following lines:
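A minimal sketch; the function name, thresholds, and rounding are assumptions, not the post's exact code:

```r
# Hypothetical helper: convert a value in bits per second
# into a human-readable string using decimal prefixes
format_bandwidth <- function(bps) {
  if (bps >= 1e9) {
    paste0(round(bps / 1e9, 1), "Gbps")
  } else if (bps >= 1e6) {
    paste0(round(bps / 1e6, 1), "Mbps")
  } else if (bps >= 1e3) {
    paste0(round(bps / 1e3, 1), "Kbps")
  } else {
    paste0(bps, "bps")
  }
}
```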
The underlying values (variable `v`), the converted values (variable `output`), and the data frame combining them are as follows:
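A sketch with assumed sample values and column names:

```r
# Assumed raw values, in bits per second
v <- c(980000, 1500000, 2000000000)

# Human-readable values: "980Kbps", "1.5Mbps", "2Gbps"
output <- sapply(v, format_bandwidth)

# Combine the raw and formatted values in a data frame
df <- data.frame(site = c("site_a", "site_b", "site_c"),
                 raw = v,
                 bandwidth = output)
```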
When the data frame is rendered with DT and pre-sorted on the formatted bandwidth, the ordering is done on the character values: 980Kbps will appear above 1.5Mbps despite representing a smaller amount.
One way to fix it would be to sort the dataframe in R and disable sorting with DT. This approach could be frustrating for the users if the table is being viewed through Shiny.
We could use column rendering, where the raw values are passed to the datatable and the conversion is done in JavaScript. In this approach, we basically replicate in JavaScript the formatting code already written in R (code inspired by the following gist).
This approach has some drawbacks:
- JavaScript needs to be used: implementing JavaScript functions may take some time for people used to coding in R.
- Code may be duplicated, as the R implementation is still required (for example, if I want to include …).
Another approach would be to pass two values to DT within a single cell:
- the underlying raw value (an integer)
- the formatted bandwidth value
The pair can be separated by a special character such as `*` (e.g. `1500000*1.5Mbps`).
The underlying raw value is used for sorting, while the formatted value will be used for display.
The trick here is to use the different values of the `type` parameter of the column rendering function. Based on the documentation, the value can be:
- `filter`: used by the search box. In the implementation below, the value being searched is the same as the one being displayed.
- `display`: here, the second component of the string is used as the displayed value.
- `type`: used by DataTables for automatic type detection.
- `sort`: here, the first component of the string (the raw value) is used, so rows are ordered numerically.
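A minimal sketch of this idea; the column index, the `*` separator, and the handling of the `type` case are assumptions:

```r
library(DT)

# Each bandwidth cell stores "raw*formatted", e.g. "1500000*1.5Mbps"
df <- data.frame(
  site      = c("site_a", "site_b", "site_c"),
  bandwidth = c("980000*980Kbps", "1500000*1.5Mbps", "2000000000*2Gbps")
)

datatable(
  df,
  rownames = FALSE,
  options = list(
    columnDefs = list(list(
      targets = 1,  # 0-based index of the bandwidth column (no row names)
      render = JS(
        "function(data, type, row) {",
        "  var parts = data.split('*');",
        "  if (type === 'sort' || type === 'type') {",
        "    return parseInt(parts[0]);",  # raw value drives ordering
        "  }",
        "  return parts[1];",              # formatted value for display and filter
        "}"
      )
    ))
  )
)
```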
This approach is purely client-side: it will work when the datatable is included in an R Markdown document, or generated by Shiny with server-side processing disabled.
But when the datatable is generated using Shiny such as:
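(a minimal sketch; it assumes the pair-encoded data frame `df` from the sketch above and keeps server-side processing at its default)

```r
library(shiny)
library(DT)

ui <- fluidPage(dataTableOutput("bandwidth_table"))

server <- function(input, output) {
  # server = TRUE (the default): filtering and sorting are handled in R;
  # in practice datatable(df) would carry the same columnDefs/render
  # options as in the sketch above
  output$bandwidth_table <- renderDataTable(datatable(df), server = TRUE)
}

shinyApp(ui, server)
```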
it will not work. When looking at the implementation, the filtering and the sorting are done in R. When sorting integers converted to strings, R will consider that the value `"10"` is lower than the value `"2"`. One way to fix it is to adjust the first value of the pair by padding it with zeros. This works well only for integer values; if you deal with decimal values, you need to make sure that all the numbers have the same number of characters to the left and to the right of the decimal point.
The following adjusted code should now work in a Shiny-based environment:
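A sketch of the adjustment, reusing the assumed `v` and `output` values from above (the 12-digit width is an assumption):

```r
# Zero-pad the raw value so that the character-based sorting done in R
# (server-side processing) matches the numeric order.
# A width of 12 digits is an assumption: enough for values up to 999 Gbps.
padded <- sprintf("%012.0f", v)
df$bandwidth <- paste(padded, output, sep = "*")
# e.g. "000001500000*1.5Mbps"; the JavaScript render function shown earlier
# still splits on "*" and uses each part as before.
```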
-
R Days in Anglet
The R Days took place at the end of June 2017 in Anglet, France. The slides of the various presentations have been made public on GitHub.
-
Big Data Paris 2017
The conference Big Data Paris was held on the 6th and 7th of March 2017. I thought I would share my notes on some of the talks I attended.
-
Load files in R with specific encoding
When working with flat files, encoding needs to be factored in right away to avoid issues down the line. UTF-8 (or UTF-16) is the de facto encoding you hope to get. If the encoding is different, pay attention to how you load the file into R.
Let’s take the example of a file encoded as Windows-1252. Its content is displayed below using Notepad++. The editor does a pretty good job of figuring out the encoding of the file: the encoding is displayed in the status bar, while the Encoding menu enables you to change the selected character set.
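The encoding can then be declared explicitly when loading the file into R; a minimal sketch (the file name is a placeholder):

```r
# Base R: declare the file's encoding so characters are converted correctly
df <- read.csv("data_windows1252.csv", fileEncoding = "windows-1252")

# readr alternative: the encoding is passed through the locale
library(readr)
df <- read_csv("data_windows1252.csv",
               locale = locale(encoding = "windows-1252"))
```
-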
Foundations of Service Level Management: Book Review
I wanted to share a couple of notes I have made while reading Foundations of Service Level Management by Sturm, Morris, and Jander.
The book, written in the ’00s, deals with all aspects of Service Level Management. More specifically, it covers topics such as measurement, how SLAs are defined, human challenges, best practices, etc.
The book is not technical at all and is overall an easy read. The first part is generic enough to remain relevant in years to come. Chapter 9 is worth a read, as it covers what the customer can do before contacting a service center that delivers managed services.
-
Distinguish between a base and an SPD library
In SAS, a library engine is an engine that accesses groups of files and puts them into a logical form for processing. The engine used by default is the base engine. In addition, you may come across other engines, such as the SPD engine.
The SAS Scalable Performance Data Engine (SPD Engine) provides parallel I/O, as each SAS data set is split over multiple disks. This structure allows faster processing of large data.
A common production set-up may define different libraries with different purposes, and therefore different engines. A library with the base engine may be used for ad hoc reporting and small data transformations, while a library with the SPD engine may be used to store a large data mart. From a user’s perspective, the layer provided by the SAS metadata server hides the underlying engines used by the various libraries. It is however useful to be able to check which engine a library uses without relying on the IT department.