Earlier this month, I taught my two-day course on working with Twitter data in R at the University of Lucerne, as part of its Master’s Programme in Computational Social Science, LUMACSS.
This course is designed as an introduction to collecting, cleaning, and analysing Twitter data, without having to apply for a developer account. You can find the course material on GitHub; the slides are better viewed here on my website.
Click here for the slides, and here for all the course materials.
I recently organised a short course on web scraping in R, as part of a Master’s Programme in Computational Social Science, at the University of Lucerne.
I built a dedicated website and a Shiny app for this course, tailored to its exercises, to facilitate learning.
You can find the other course material on GitHub.
Click here for the slides, and here for all the workshop materials.
R Markdown has been at the centre of my research workflow for some time. It allows me to tidy and analyse data, create tables and figures, manage citations and references, and write up the results, all in a single document.
And if, say, a regression table needs a new model, it often takes only a few lines of code and a click to reproduce the output, be it a PDF, an HTML page, or a Word document.
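As a rough sketch, a minimal R Markdown file targeting several output formats at once might look like the following. The title, chunk names, and the `lm()` model are purely illustrative, not from any of my actual projects:

````markdown
---
title: "Example analysis"
output:
  pdf_document: default
  html_document: default
  word_document: default
---

```{r setup, include=FALSE}
# Packages used in the document are loaded once, up front
library(knitr)
```

```{r model}
# A toy regression; in a real write-up this chunk would feed a table
fit <- lm(mpg ~ wt, data = mtcars)
summary(fit)
```
````

Knitting this file (for example with the Knit button in RStudio, or `rmarkdown::render()`) re-runs the chunks and regenerates every table and figure in the chosen output format.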
The survey package is one of my favourites in R.
Among its many other uses, it can compute summary statistics by subgroups. For example, if you have a survey of individuals from several countries with an item on the respondents’ income, you can calculate the average income in each subgroup with the svyby() function.
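A minimal sketch of that workflow, assuming the survey package is installed; the data frame `dat`, with columns `country`, `income`, and `weight`, is hypothetical toy data:

```r
library(survey)

# Hypothetical survey data: two countries, weighted respondents
set.seed(1)
dat <- data.frame(
  country = rep(c("A", "B"), each = 50),
  income  = c(rnorm(50, 30000, 5000), rnorm(50, 40000, 8000)),
  weight  = runif(100, 0.5, 1.5)
)

# Declare the survey design (no clustering here, just weights)
des <- svydesign(ids = ~1, weights = ~weight, data = dat)

# Weighted mean income by country, with standard errors
svyby(~income, ~country, des, svymean)
```

The fourth argument to `svyby()` is the summary function to apply within each subgroup, so the same pattern works with `svytotal()`, `svyquantile()`, and so on.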
However, like many other functions in the package, svyby() returns standard errors—but not standard deviations—of the mean values.
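One workaround, sketched below with the same kind of hypothetical data, is to ask `svyby()` for the weighted variance via `svyvar()` and take its square root, which gives the standard deviation of income within each subgroup rather than the standard error of the mean:

```r
library(survey)

# Hypothetical toy data, as before
set.seed(1)
dat <- data.frame(
  country = rep(c("A", "B"), each = 50),
  income  = c(rnorm(50, 30000, 5000), rnorm(50, 40000, 8000)),
  weight  = runif(100, 0.5, 1.5)
)
des <- svydesign(ids = ~1, weights = ~weight, data = dat)

# Weighted variance of income per country...
v <- svyby(~income, ~country, des, svyvar)

# ...and its square root is the subgroup standard deviation
sqrt(v$income)
```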