
Sunbelt and EUSN workshops on using R and ‘igraph’ for SNA

2016 March 4
by Michał

I would like to (re)announce the workshop I will be giving on using R and ‘igraph’ for Social Network Analysis at the upcoming Sunbelt 2016 conference in Newport Beach. My goal is to provide a gentle and practical tour through the SNA functionality of the R package “igraph”. The exact date of the Sunbelt workshop is Tuesday, April 5th, 8:00am – 2:30pm. Consult the Sunbelt workshop program for further details.

Do note that March 7 is the deadline for Sunbelt workshop registrations!

If you are not attending Sunbelt, I will be very happy to meet you at the workshop at the European Social Networks Conference (EUSN 2016), which will take place in Paris (14-17th of June, 2016); see http://eusn2016.sciencesconf.org/84811 for further details.

See the bottom of this post for some more details on the workshop.

I hope to see you at Sunbelt or EUSN!

As a side note, at EUSN I will also be co-teaching two ‘statnet’ workshops, which will be announced separately.

Using R and igraph for Social Network Analysis

The workshop introduces R and the package igraph for social network data manipulation, visualization, and analysis. Package igraph is a collection of efficient tools for storing, manipulating, visualizing, and analyzing network data. Igraph is in part an alternative to, and in part a complement of, other SNA-related R packages (e.g. statnet, tnet). It is an alternative when it comes to network data manipulation and visualization. It is a complement because of a large and growing collection of algorithms, including community detection methods, unavailable elsewhere. The material will cover:

  1. Brief introduction to R.
  2. Creating and manipulating network data objects.
  3. Working with node and tie attributes.
  4. Creating network visualizations.
  5. A tour through computing selected SNA methods including: degree distribution, centrality measures, shortest paths, connected components, quantifying homophily/segregation, network community detection.
  6. Connections to other R packages for SNA, e.g.: statnet, RSiena, egonetR.
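
To give a flavour of point 5, below is a minimal sketch of the kind of computations involved (it assumes the igraph package is installed; the random network is purely illustrative, not workshop material):

library(igraph)

set.seed(1)
g <- sample_gnp(50, 0.1)     # an illustrative random (Erdos-Renyi) network

degree(g)                    # node degrees
degree_distribution(g)       # degree distribution
betweenness(g)               # betweenness centrality
distances(g)                 # matrix of shortest path lengths
components(g)                # connected components
cluster_louvain(g)           # community detection (Louvain algorithm)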

The focus is on analysis of complete network data and providing prerequisites for other workshops including two on ego-network analysis: “Introduction to ego-network analysis” by Raffaele Vacca and “Simplifying advanced ego-network analysis in R with egonetR” by Till Krenz and Andreas Herz.

The workshop has been successfully organized at earlier Sunbelt conferences (since Sunbelt 2011) and at the European Social Networks conference (EUSN 2014). The workshop attracted a lot of attention (a total of over 130 participants since 2011) and positive feedback (80% report being satisfied, 75% would recommend the workshop to a colleague). The earlier workshop title was “Introduction to Social Network Analysis with R”. The content has been updated to catch up with the newest developments in igraph and related packages.

Target audience and requirements:

The workshop is designed to be accessible to people who have limited experience with R. The participants are expected to be familiar with basic R objects (e.g. matrices and data frames) and functions (e.g., reading data, computing basic statistics, basic visualization). A brief introduction to R will be provided. To be absolutely on the safe side, we recommend taking an online course at the level of the R Programming course on Coursera (https://www.coursera.org/course/rprog), which starts every month, or skimming a book at the level of the first eight sections of Roger D. Peng’s book “R Programming” (https://leanpub.com/rprogramming). Participants are encouraged to bring their own laptops. We have prepared examples and exercises to be completed during the workshop. Detailed instructions on how to prepare will be distributed in due time.

A presentation at Data Science Warsaw

2015 November 13
by Michał

Last Tuesday (2015-11-10) I gave a presentation on social network analysis at the Data Science Warsaw meet-up. Data Science Warsaw is a series of regular meetings of people interested in data analysis, both in business and in academia. I recommend it to anyone interested.

The slides from my (short) talk can be found here.

A workshop on network analysis

2015 November 13
by Michał


Summary in English: We are organizing a two-day workshop on network analysis in R. The dates are December 2-3, 2015. The workshop will be in Polish. For more information and registration see this page.

We invite you to a workshop on network analysis in R on December 2-3, 2015.

Social Network Analysis (SNA) is an approach to studying groups of people or organizations by analyzing the relations or ties between them. These relations form complex networks. People or organizations can be connected by various kinds of ties, e.g. kinship, friendship, collaboration, information seeking, power, but also co-participation in events, and so on. The goal of SNA is to analyze the structure of the network of relations and its importance for the functioning of the group we are interested in.

The goal of the workshop is to provide an introduction to SNA and to teach basic techniques for analyzing and visualizing graphs with R. The topics covered include: methods of collecting network data, representation of network data, visualization, an overview of descriptive measures characterizing positions in a network, methods for identifying groups and positions as well as properties of the network as a whole, and analyses combining data on relations with node attributes.

More information and registration details are available at http://www.icm.edu.pl/web/guest/wprowadzenie-do-analizy-sieciowej-w-r.

There is a 20% discount for those who register before November 23!

We hope to see you there!

Some new presentations

2015 September 16
by Michał

Within the last few weeks the website of the RECON project has been updated. Among other things, we have uploaded a couple of presentations that were given in 2014 and 2015. Below is a short list. See the Publications page on RECON’s webpage for a complete list with abstracts.

  • Czerniawska D., Fenrich W., Bojanowski M. (2015), How does scholarly cooperation occur and how does it manifest itself? Evidence from Poland. Presentation at the ESA 2015 conference. PDF slides
  • Czerniawska D. (2015), Paths to interdisciplinarity: How do scholars start working on the edges of disciplines? Presentation at ‘What makes interdisciplinarity work? Crossing academic boundaries in real life’, Ustinov College, Durham University. HTML slides
  • Fenrich W., Czerniawska D., Bojanowski M. (2015), The story behind the graph: a mixed method study of scholarly collaboration networks in Poland. Presentation at Sunbelt XXXV. HTML slides

Linear models with weighted observations

2015 September 4
by Michał

In data analysis it sometimes happens that it is necessary to use weights. Contexts that come to mind include:

  • Analysis of data from complex surveys, e.g. stratified samples. Sample inclusion probabilities might have been unequal and thus observations from different strata should have different weights.
  • Application of propensity score weighting, e.g. to correct for data being Missing At Random (MAR).
  • Inverse-variance weighting (https://en.wikipedia.org/wiki/Inverse-variance_weighting), when different observations have been measured with different precision that is known a priori.
  • We are analyzing data in an aggregated form such that the weight variable encodes how many original observations each row in the aggregated data represents.
  • We are given survey data with post-stratification weights.

If you use, or have been using, SPSS, you probably know about the possibility of defining one of the variables as weights. This information is used when producing cross-tabulations (cells contain sums of weights), regression models, and so on. SPSS weights are frequency weights in the sense that $w_i$ is the number of observations that a particular case $i$ represents.

On the other hand, in R the lm and glm functions have a weights argument that serves a related purpose.

suppressMessages(local({
  library(dplyr)
  library(ggplot2)
  library(survey)
  library(knitr)
  library(tidyr)
  library(broom)
}))

Let’s compare different ways in which a linear model can be fitted to data with weights. We start by generating some artificial data:

set.seed(666)

N <- 30 # number of observations

# Aggregated data
aggregated <- data.frame(x=1:5) %>%
  mutate( y = round(2 * x + 2 + rnorm(length(x)) ),
          freq = as.numeric(table(sample(1:5, N, 
                replace=TRUE, prob=c(.3, .4, .5, .4, .3))))
          )
aggregated
##   x  y freq
## 1 1  5    4
## 2 2  8    5
## 3 3  8    8
## 4 4 12    8
## 5 5 10    5
# Disaggregated data
individuals <- aggregated[ rep(1:5, aggregated$freq) , c("x", "y") ]

Visually:

ggplot(aggregated, aes(x=x, y=y, size=freq)) + geom_point() + theme_bw()

[Figure: scatterplot of the aggregated data, with point size proportional to freq]

Let’s fit some models:

models <- list( 
               ind_lm = lm(y ~ x, data=individuals),
               raw_agg = lm( y ~ x, data=aggregated),
               ind_svy_glm = svyglm(y~x, design=svydesign(id=~1, data=individuals),
                                 family=gaussian() ),
               ind_glm = glm(y ~ x, family=gaussian(), data=individuals),
               wei_lm = lm(y ~ x, data=aggregated, weight=freq),
               wei_glm = glm(y ~ x, data=aggregated, family=gaussian(), weight=freq),
               svy_glm = svyglm(y ~ x, design=svydesign(id=~1, weights=~freq, data=aggregated),
                                family=gaussian())
               )
## Warning in svydesign.default(id = ~1, data = individuals): No weights or
## probabilities supplied, assuming equal probability

In short, we have the following linear models:

  • ind_lm is an OLS fit to the individual data (the true model).
  • raw_agg is an OLS fit to the aggregated data (definitely wrong).
  • ind_glm is an ML fit to the individual data.
  • ind_svy_glm is an ML fit to the individual data using a simple-random-sampling-with-replacement design.
  • wei_lm is an OLS fit to the aggregated data with frequencies as weights.
  • wei_glm is an ML fit to the aggregated data with frequencies as weights.
  • svy_glm is an ML fit to the aggregated data using the “survey” package, with frequencies as weights in the sampling design.

We would expect models ind_lm, ind_glm, and ind_svy_glm to give identical results.

Summarise the models and gather the results into long format:

results <- do.call("rbind", lapply( names(models), function(n) cbind(model=n, tidy(models[[n]])) )) %>%
                                      gather(stat, value, -model, -term)

Check if point estimates of model coefficients are identical:

results %>% filter(stat=="estimate") %>% 
  select(model, term, value) %>%
  spread(term, value)
##         model (Intercept)        x
## 1      ind_lm     4.33218 1.474048
## 2     raw_agg     4.40000 1.400000
## 3 ind_svy_glm     4.33218 1.474048
## 4     ind_glm     4.33218 1.474048
## 5      wei_lm     4.33218 1.474048
## 6     wei_glm     4.33218 1.474048
## 7     svy_glm     4.33218 1.474048

Apart from the “wrong” raw_agg model, the coefficients are identical across models.

Let’s check the inference:

# Standard Errors
results %>% filter(stat=="std.error") %>%
  select(model, term, value) %>%
  spread(term, value)
##         model (Intercept)         x
## 1      ind_lm    0.652395 0.1912751
## 2     raw_agg    1.669331 0.5033223
## 3 ind_svy_glm    0.500719 0.1912161
## 4     ind_glm    0.652395 0.1912751
## 5      wei_lm    1.993100 0.5843552
## 6     wei_glm    1.993100 0.5843552
## 7     svy_glm    1.221133 0.4926638
# p-values
results %>% filter(stat=="p.value") %>%
  mutate(p=format.pval(value)) %>%
  select(model, term, p) %>%
  spread(term, p)
##         model (Intercept)          x
## 1      ind_lm  3.3265e-07 2.1458e-08
## 2     raw_agg    0.077937   0.068904
## 3 ind_svy_glm  2.1244e-09 2.1330e-08
## 4     ind_glm  3.3265e-07 2.1458e-08
## 5      wei_lm    0.118057   0.085986
## 6     wei_glm    0.118057   0.085986
## 7     svy_glm    0.038154   0.058038

Recall that the correct model is ind_lm. Observations:

  • raw_agg is clearly wrong, as expected.
  • If the weights argument of lm and glm implemented frequency weights, the results for wei_lm and wei_glm would be identical to those from ind_lm. They are not: only the point estimates are correct, while the inference statistics are not.
  • The model using a design with sampling weights (svy_glm) gives correct point estimates, but incorrect inference.
  • Surprisingly, the model fit with the “survey” package to the individual data using a simple random sampling design (ind_svy_glm) does not give inference statistics identical to those from ind_lm. They are close, though.

The weights argument of the lm and glm functions implements precision weights: inverse-variance weights that can be used to model the differential precision with which the outcome variable was measured.

Functions in the “survey” package implement sampling weights: the inverse of the probability that a particular observation is selected from the population into the sample.

Frequency weights are a different animal.
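
To make the distinction concrete, here is a minimal sketch of how each kind of weight typically enters an analysis in R (the data frame d and the columns v and w are hypothetical):

# Precision (inverse-variance) weights: outcome measured with known variance v
fit_precision <- lm(y ~ x, data = d, weights = 1 / v)

# Sampling weights: w = 1 / (probability of being sampled), via the "survey" package
des <- svydesign(id = ~1, weights = ~w, data = d)
fit_sampling <- svyglm(y ~ x, design = des, family = gaussian())

# Frequency weights: each row stands for 'freq' identical observations; as shown
# below, lm(..., weights = freq) also needs a manual degrees-of-freedom correction.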

However, it is possible to get correct inference statistics for the model fitted to the aggregated data using lm with frequencies supplied as weights. What needs correcting is the residual degrees of freedom (see also http://stackoverflow.com/questions/10268689/weighted-regression-in-r).

models$wei_lm_fixed <- models$wei_lm
models$wei_lm_fixed$df.residual <- with(models$wei_lm_fixed, sum(weights) - length(coefficients))

results <- do.call("rbind", lapply( names(models), function(n) cbind(model=n, tidy(models[[n]])) )) %>%
                                      gather(stat, value, -model, -term)
## Warning in summary.lm(x): residual degrees of freedom in object suggest
## this is not an "lm" fit
# Coefficients
results %>% filter(stat=="estimate") %>% 
  select(model, term, value) %>%
  spread(term, value)
##          model (Intercept)        x
## 1       ind_lm     4.33218 1.474048
## 2      raw_agg     4.40000 1.400000
## 3  ind_svy_glm     4.33218 1.474048
## 4      ind_glm     4.33218 1.474048
## 5       wei_lm     4.33218 1.474048
## 6      wei_glm     4.33218 1.474048
## 7      svy_glm     4.33218 1.474048
## 8 wei_lm_fixed     4.33218 1.474048
# Standard Errors
results %>% filter(stat=="std.error") %>%
  select(model, term, value) %>%
  spread(term, value)
##          model (Intercept)         x
## 1       ind_lm    0.652395 0.1912751
## 2      raw_agg    1.669331 0.5033223
## 3  ind_svy_glm    0.500719 0.1912161
## 4      ind_glm    0.652395 0.1912751
## 5       wei_lm    1.993100 0.5843552
## 6      wei_glm    1.993100 0.5843552
## 7      svy_glm    1.221133 0.4926638
## 8 wei_lm_fixed    0.652395 0.1912751

See the model wei_lm_fixed: correcting the degrees of freedom manually gives correct coefficient estimates as well as correct inference statistics.

Performance

Aggregating data and using frequency weights can save quite some time. To illustrate, let’s generate a larger data set in disaggregated and aggregated forms.

N <- 10^4 # number of observations

# Aggregated data
big_aggregated <- data.frame(x=1:5) %>%
  mutate( y = round(2 * x + 2 + rnorm(length(x)) ),
          freq = as.numeric(table(sample(1:5, N, replace=TRUE, prob=c(.3, .4, .5, .4, .3))))
          )

# Disaggregated data
big_individuals <- big_aggregated[ rep(1:5, big_aggregated$freq) , c("x", "y") ]

… and fit the lm models, using frequency weights for the model on the aggregated data. Benchmarking:

library(microbenchmark)

speed <- microbenchmark(
  big_individual = lm(y ~ x, data=big_individuals),
  big_aggregated = lm(y ~ x, data=big_aggregated, weights=freq)
)

speed %>% group_by(expr) %>% summarise(median=median(time / 1000)) %>%
  mutate( ratio = median / median[1])
## Source: local data frame [2 x 3]
## 
##             expr   median     ratio
## 1 big_individual 7561.158 1.0000000
## 2 big_aggregated 1492.057 0.1973319

So, quite an improvement.

The more we are able to aggregate the data, the bigger the improvement is likely to be.
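
As a side note, if the data arrive in individual form, the aggregation itself is straightforward; a sketch using dplyr (count() is available in recent versions of dplyr):

# Collapse individual rows into unique (x, y) combinations with a frequency column
big_aggregated2 <- big_individuals %>%
  count(x, y) %>%        # adds column 'n' with the number of identical rows
  rename(freq = n)

lm(y ~ x, data = big_aggregated2, weights = freq)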

P-values deemed no longer significant

2015 March 3
by Michał

The journal Basic and Applied Social Psychology (BASP) has banned the use of statistical hypothesis testing.

The BASP editorial by Trafimow and Marks is here.

The story has also been covered by:

And discussed in/by, among others:

Where this will go, I wonder…

Word-processing Wars

2015 January 8
by Michał

Three days ago Nature published a note commenting on the recent heated social media discussion about whether MS Word is better than LaTeX for writing scientific papers. The note refers to a PLOS article by Knauf & Nejasmic reporting a study on word-processor use. The overall result of that study is that participants who used Word took less time and made fewer mistakes in reproducing the probe text than people who used LaTeX.

I find it rather funny that Nature picked up the topic. Such discussions have always seemed rather futile to me (de gustibus non disputandum est, and the fact that some solution A is better or more “efficient” than B does not necessarily lead to A becoming accepted, as is the case with the QWERTY vs Dvorak keyboard layouts) and far away from anything scientific.

As for myself, I like neither Word nor its Linux counterparts (LibreOffice, AbiWord, etc.); let’s call them WYSIWYGs. First and foremost, because I believe they are very poor text editors (as compared to Vim or Emacs): it is cumbersome to navigate and search longer texts. The fact that it is convenient to read a piece of text in, say, Times New Roman does not mean that it is convenient to write using it. Second, when writing in WYSIWYGs I always have the impression that I am handcrafting something: formatting, styles, and so on. It is like sculpting: if you don’t like the result, you need to get another piece of wood and start from the beginning. All that seems to counter the main purpose for which computers were developed in the first place, which is taking over “mechanical” tasks and leaving “creative” ones to the user.

I like that the Nature note referred to Markdown as an emerging technology for writing [scientific] texts. If you do not know it, Markdown is a lightweight plain-text format, not unlike Wikipedia markup. Texts written in Markdown can be processed to PDF, HTML, MS Word, and so on. More and more people are using it for writing articles or even books. It is simple (plain text) and allows one to focus on writing.

Lastly, the note still contains the popular misconception that one of the downsides of LaTeX is the lack of a spell checker…

Internships at ICM – an offer

2014 May 13

As every year, ICM is organizing internships for students. This year I am looking for a person who would be interested in working on an application for interactive visualization of network data.

We offer work in a young and dynamic group of network researchers and the opportunity to establish contacts with a research team abroad.

Requirements (the first is a necessary condition, the others are additional assets):

  • Programming in R
  • Programming in JavaScript
  • Building Shiny applications
  • Familiarity with the D3.js library
  • Familiarity with Social Network Analysis (SNA) methods

If you are interested, fill in the form on the ICM website! My topic is number 22.

Alluvial diagrams

2014 March 27
by Michał

A parallel coordinates plot is one of the tools for visualizing multivariate data. Every observation in a dataset is represented with a polyline that crosses a set of parallel axes corresponding to the variables in the dataset. You can create such plots in R using the function parcoord from the package MASS. For example, we can create such a plot for the built-in dataset mtcars:

library(MASS)
library(colorRamps)
 
data(mtcars)
k <- blue2red(100)
x <- cut( mtcars$mpg, 100)
 
op <- par(mar=c(3, rep(.1, 3)))
parcoord(mtcars, col=k[as.numeric(x)])
par(op)

This produces the plot below. The lines are colored using a blue-to-red color ramp according to the miles-per-gallon variable.

[Figure: parallel coordinates plot of mtcars, lines colored by mpg]

What to do if some of the variables are categorical? One approach is to use polylines of different widths. Another approach is to add some random noise (jitter) to the values. The Titanic data is a cross-classification of Titanic passengers according to class, gender, age, and survival status (survived or not). Consequently, all variables are categorical. Let’s try the jittering approach. After converting the cross-classification (an R table) to a data frame, we “blow it up” by repeating observations according to their frequency in the table.

library(RColorBrewer)  # provides brewer.pal() used below
data(Titanic)
# convert to data frame of numeric variables
titdf <- as.data.frame(lapply(as.data.frame(Titanic), as.numeric))
# repeat obs. according to their frequency
titdf2 <- titdf[ rep(1:nrow(titdf), titdf$Freq) , ]
# new columns with jittered values
titdf2[,6:9] <- lapply(titdf2[,1:4], jitter)
# colors according to survival status, with some transparency
k <- adjustcolor(brewer.pal(3, "Set1")[titdf2$Survived], alpha=.2)
op <- par(mar=c(3, 1, 1, 1))
parcoord(titdf2[,6:9], col=k)
par(op)

This produces the following (red lines are for passengers who did not survive):

[Figure: jittered parallel coordinates plot of the Titanic data, red lines for non-survivors]

It is not so easy to read, is it? Did the majority of 1st class passengers (the bottom category on the leftmost axis) survive or not? Definitely most of the women from that class did, but in aggregate?

At this point it would be nice, instead of drawing a bunch of lines, to draw segments for different groups of passengers. Later I learned that such a plot exists and even has a name: an alluvial diagram. They seem to be related to Sankey diagrams, blogged about on R-bloggers recently, e.g. here. What is more, I was not alone in thinking about how to create such a thing with R; see for example here. Later I found that what I need is a “parallel set” plot, as it was called, and implemented, on CrossValidated here. That looks terrific to me; nevertheless, I would still prefer:

  • The axes to be vertical. If the variables correspond to measurements at different points in time, then we should have nice flows from left to right.
  • The segments to be smooth curves, e.g. splines or Bezier curves…

And so I wrote a prototype function alluvial (tadaaa!), now in the package alluvial on GitHub. I strongly relied on code by Aaron from his answer on CrossValidated (hat tip).

See the following examples of using alluvial on Titanic data:

First, using just two variables, Class and Survived, with the stripes being simple polygons.

[Figure: alluvial diagram of Class by Survived]

This was produced with the code below.

# load packages and prepare data
library(alluvial)
tit <- as.data.frame(Titanic)
 
# only two variables: class and survival status
tit2d <- aggregate( Freq ~ Class + Survived, data=tit, sum)
 
alluvial( tit2d[,1:2], freq=tit2d$Freq, xw=0.0, alpha=0.8,
         gap.width=0.1, col= "steelblue", border="white",
         layer = tit2d$Survived != "Yes" )

The function accepts data as a (collection of) vectors or a data frame. The xw argument specifies the position of the knots of xspline relative to the axes. If positive, the knot is further away from the axis, which makes the stripes run horizontally for longer before turning towards the other axis. The gap.width argument specifies the distances between categories on the axes.
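
For example, reusing the tit2d data from above, increasing xw (the value 0.2 here is just an arbitrary choice for illustration) makes the stripes run horizontally for longer before bending towards the next axis:

# Same diagram as before, but with the spline knots pushed away from the axes
alluvial( tit2d[,1:2], freq=tit2d$Freq, xw=0.2, alpha=0.8,
         gap.width=0.1, col="steelblue", border="white",
         layer = tit2d$Survived != "Yes" )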

Another example shows the whole Titanic data, with red stripes for those who did not survive.

[Figure: alluvial diagram of the full Titanic data, red stripes for non-survivors]

Now it’s possible to see that, e.g.:

  • A bit more than 50% of 1st class passengers survived.
  • Women who did not survive came almost exclusively from 3rd class.
  • etc.

The plot was produced with:

alluvial( tit[,1:4], freq=tit$Freq, border=NA,
         hide = tit$Freq < quantile(tit$Freq, .50),
         col=ifelse( tit$Survived == "No", "red", "gray") )

In this variant the stripes have no borders, color transparency is at 0.5, and, for the purpose of the example, the plot shows only the “thickest” 50% of the stripes (argument hide).

As compared to the parallel set solution mentioned earlier, the main differences are:

  • Axes are vertical instead of horizontal
  • I used xspline to draw the “stripes”
  • With the argument hide you can skip plotting selected groups of cases

If you have suggestions or ideas for extensions or modifications, let me know on GitHub!

Stay tuned for more examples from panel data.

A talk at SER

2014 March 10
by Michał


These are slides from the very first SER (Spotkanie Entuzjastów R) meeting – an R user group in Warsaw – that took place on February 27, 2014 at ICM. I talked about various “lifehacking” tricks for R and focused on how to use R with GNU make effectively. I will post some detailed examples of the GNU make + R setup in forthcoming posts.