How to visualize your Jira backlog using Tableau

To a person with a hammer, every problem is a nail…

Right now I’m going through a phase where my hammer is Tableau and everything can be fixed through a decent dashboard. To that end, I developed a dashboard that visualizes my team’s backlog of work.

My team isn’t strictly in ‘software development’, but we’ve come to use Agile as the foundation for managing our projects. Its flexibility allows us to manage many different types of work (e.g. scheduled reports, analytical projects, user support) using Atlassian’s Jira (with their Agile, GreenHopper and Zephyr plugins).

The challenge I’ve faced is that, because we operate in a shared-service model, we receive lots of competing requests from our customers, and we have to balance them at our monthly prioritization meetings. As pseudo-product owners, these customers determine what we do, but with so many competing agendas, how do we reach consensus on where to focus our effort?

Enter the prioritization dashboard.

The purpose of the dashboard is to answer the main questions our customers ask:

  1. What are my requests?
  2. How important are my requests compared to everyone else’s?

Generally, if these questions can be answered quickly and transparently (that is, each customer can see everyone else’s requests), it becomes very apparent which requests should be done before others. For example, should we prioritize a complicated enhancement for a report going to 5 people, or a simple bug fix going to 5,000?

What does the dashboard include?

As you can see from the dashboard (nicknamed the petri dish), you can quickly get a sense of where the more important requests sit (top right-hand corner). Using the filters on the right, you can click on your team name (‘Learning’, for example) to highlight those requests on the matrix and list their details in the table below. You can also get a feel for the size of each task and the nature of the request.

The following are the definitions for the custom fields used.

Components: the nature and type of request

  • Group initiatives are requests from senior executive stakeholders
  • Customer initiatives are requests from our stakeholders
  • Continuous improvement items are requests from the team itself
  • Scheduled reports are reports that recur regularly
  • Fast track are requests prioritized for immediate delivery

T-Shirt Size: an approximate estimate of effort based on an initial assessment of a request, before elaboration.

  • XS ≤ 1 day
  • S = 2 days
  • M = 5 days
  • L = 10 days
  • XL > 10 days

Benefits Score: a measure of benefit value from 1 to 5. Benefits may include reduced costs, reduced risks, or an improved employee value proposition.

Strategy Score: a measure of strategic alignment from 1 to 5. A strategically aligned item should fit the strategic technology stack and the team’s development road map.
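To make the matrix concrete: the petri dish is simply the Benefits Score plotted against the Strategy Score, with point size showing the t-shirt estimate. Here’s a rough ggplot2 sketch of the same idea using made-up data (the request names, teams, and scores are illustrative, not the real backlog):

library(ggplot2)

# Made-up requests: benefit and strategy scores (1-5) plus t-shirt effort in days
requests <- data.frame(
  request  = c("Bug fix", "New report", "Enhancement", "Data feed"),
  customer = c("Learning", "Finance", "Learning", "HR"),
  benefit  = c(5, 3, 4, 2),
  strategy = c(4, 2, 5, 3),
  days     = c(1, 5, 10, 2)
)

# Top right-hand corner = high benefit and high strategic alignment
ggplot(requests, aes(x = strategy, y = benefit, size = days, colour = customer)) +
  geom_point(alpha = 0.6) +
  labs(x = "Strategy Score", y = "Benefits Score",
       title = "Prioritization matrix (mock data)")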

Why build a dashboard when you could use Jira’s Agile plan view?

Despite Jira’s useful reporting functionality, it’s hard to intuitively show our less Jira-savvy customers what exactly is going on with our backlog. A list is great when you’re going through iteration planning, but there isn’t enough information available without constantly drilling down into each issue. It’s also not clear which requests came from which customers. Plus, you know, the whole hammer thing…

How does it update?

The dashboard is delivered through Tableau. Currently, to get the latest request information, I extract it from Jira manually as a CSV and refresh the dashboard. This is fine, as the dashboard only needs updating monthly and the refresh takes a few minutes. That said, if you wanted ‘live’ data, Jira has a REST API you can plug into; a rough sketch follows.
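For example, here’s a minimal sketch of pulling issues from Jira’s REST search endpoint using httr and rjson (both appear again later in this post). The host, credentials, and JQL query are placeholders; your Jira version and authentication setup may differ:

library(httr)
library(rjson)

# Query Jira's search endpoint (placeholder host, credentials, and JQL)
resp <- GET(
  "https://your-jira-host/rest/api/2/search",
  query = list(jql = "project = ABC AND status != Closed", maxResults = 100),
  authenticate("your-username", "your-password")
)

# Parse the JSON response into a list of issues
issues <- fromJSON(content(resp, as = "text"))$issues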

What’s next?

In true Agile fashion, the dashboard has some enhancements waiting in the backlog. The most important is maturing from the benefits score and t-shirt size to actual cost/benefit measured in dollars. Even though the team doesn’t currently have a charge-back model, ‘dollars’ is a language all customers understand, and it tells a compelling story (for example: your enhancement will cost $10,000 of the team’s available effort. Is that a wise investment compared to the benefits you expect?). A back-of-the-envelope version is sketched below.
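As a back-of-the-envelope illustration (the day rate and the XL duration are assumptions, not real figures), the t-shirt sizes above translate into dollars like this:

# Hypothetical translation of t-shirt sizes into dollars
day_rate    <- 1000                                      # assumed blended daily cost ($)
tshirt_days <- c(XS = 1, S = 2, M = 5, L = 10, XL = 15)  # XL assumed at 15 days
tshirt_days * day_rate
#   XS     S     M     L    XL
# 1000  2000  5000 10000 15000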

Download

I’ve added a version of the dashboard with mocked-up data to Tableau here.

UPDATED: Sentiment Analysis with “sentiment”

I was looking for a quick way to do sentiment analysis on comments from an employee survey, and came across this post by Gaston Sanchez.

The guide is a little dated now (the “sentiment” package needs to be manually downloaded, ggplot2 has been updated, setting up a Twitter API has changed, etc.). Since I found Gaston’s guide useful, I’ve included updated steps that produce essentially the same output as his original.

This example looks for the sentiment of tweets about the #UCLfinal.

NOTE: This uses R version 3.1.2 through RStudio.

Step 1 – Install packages

You will only be required to install these packages the first time.

# Required packages for the plots
install.packages(c("plyr","ggplot2","wordcloud","RColorBrewer","httr","slam","mime","R6"," Rcpp"))

# Required packages to connect to your Twitter API
install.packages(c("twitteR","bit","bit64","rjson","DBI"))

# Required packages for sentiment
install.packages(c("NLP","tm","Rstem"))

Step 2 – Install ‘sentiment’

The sentiment package is no longer available on the CRAN servers, so you need to install it manually. Download “sentiment_0.2.tar.gz” from http://cran.r-project.org/src/contrib/Archive/sentiment/

# Replace [path to file] with the full path to where you saved "sentiment_0.2.tar.gz"
install.packages("[path to file]", repos = NULL, type = "source")

Step 3 – Load all your packages

You will need to load these packages for each new session.

library(plyr)
library(ggplot2)
library(wordcloud)
library(RColorBrewer)
library(httr)
library(slam)
library(mime)
library(R6)
library(twitteR)
library(bit)
library(bit64)
library(rjson)
library(DBI)
library(tm)
library(Rstem)
library(NLP)
library(sentiment)
library(Rcpp)

Step 4 – Set up your API with Twitter

Go to https://apps.twitter.com/ and sign in (you’ll need to create a Twitter account if you haven’t already).

Click on ‘Create New App’

Complete the compulsory fields, accept the Developer Agreement (note you can enter a placeholder Website if you don’t have one) and click ‘Create your Twitter Application’.

After the application management page loads click ‘Keys and Access Tokens’ and note your consumer key and secret.

Click ‘Create my access token’ and note your access token and token secret.

Step 5 – Connect to Twitter

Enter your authentication details below:

# Authenticate with Twitter

api_key <- "[your key]"
api_secret <- "[your secret]"
token <- "[your token]"
token_secret <- "[your token secret]"
setup_twitter_oauth(api_key,api_secret,token,token_secret)

If you get the following prompt:

[1] "Using direct authentication"
Use a local file to cache OAuth access credentials between R sessions?
1: Yes
2: No

Enter 1 to save a local copy of the OAuth access credentials.

Step 6 – Harvest tweets

Now it’s time to harvest the tweets for analysis. Note: if you’re sitting behind a firewall this may not work, so you may need to tweak your firewall or proxy settings (see the proxy note after the code below). It can also take a minute to harvest the tweets.

# harvest some tweets
some_tweets = searchTwitter("uclfinal", n=1500, lang="en")

# get the text
some_txt = sapply(some_tweets, function(x) x$getText())
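If the firewall issue mentioned above is actually a corporate proxy, httr (already loaded in Step 3) can route requests through it. A minimal sketch, with placeholder proxy details:

# Route subsequent httr requests through a proxy (placeholder host, port, credentials)
set_config(use_proxy(url = "proxy.example.com", port = 8080,
                     username = "your-username", password = "your-password"))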

Step 7 – Prepare text for sentiment analysis

# remove retweet entities
some_txt = gsub("(RT|via)((?:\b\W*@\w+)+)", "", some_txt)

# remove @ mentions
some_txt = gsub("@\\w+", "", some_txt)

# remove punctuation
some_txt = gsub("[[:punct:]]", "", some_txt)

# remove numbers
some_txt = gsub("[[:digit:]]", "", some_txt)

# remove html links
some_txt = gsub("http\w+", "", some_txt)

# remove unnecessary spaces
some_txt = gsub("[ t]{2,}", "", some_txt)
some_txt = gsub("^\s+|\s+$", "", some_txt)

# define "tolower error handling" function 
try.error = function(x)
{
   # create missing value
   y = NA
   # tryCatch error
   try_error = tryCatch(tolower(x), error = function(e) e)
   # if not an error, convert to lower case
   if (!inherits(try_error, "error"))
      y = tolower(x)
   # result
   return(y)
}

# lower case using try.error with sapply 
some_txt = sapply(some_txt, try.error)

# remove NAs in some_txt
some_txt = some_txt[!is.na(some_txt)]
names(some_txt) = NULL
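Before classifying, it’s worth a quick sanity check that the cleaning worked:

# inspect the cleaned tweets
length(some_txt)
head(some_txt, 3)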

Step 8 – Perform sentiment analysis

Please note that classifying the polarity and emotion of the tweets may take a few minutes.

# classify emotion
class_emo = classify_emotion(some_txt, algorithm="bayes", prior=1.0)

# get emotion best fit (column 7 of the returned matrix holds the best-fit label)
emotion = class_emo[,7]

# substitute NAs with "unknown"
emotion[is.na(emotion)] = "unknown"

# classify polarity
class_pol = classify_polarity(some_txt, algorithm="bayes")

# get polarity best fit (column 4 of the returned matrix holds the best-fit label)
polarity = class_pol[,4]

Step 9 – Create a data frame in order to plot the results

# data frame with results
sent_df = data.frame(text=some_txt, emotion=emotion,
                     polarity=polarity, stringsAsFactors=FALSE)

# sort data frame (order emotion levels by frequency for plotting)
sent_df = within(sent_df,
   emotion <- factor(emotion, levels=names(sort(table(emotion), decreasing=TRUE))))

This is what the first 5 rows of sent_df may look like:

[Image: first five rows of sent_df]

Step 10 – Plot the emotions and polarity of the tweets

# plot distribution of emotions
ggplot(sent_df, aes(x=emotion)) +
geom_bar(aes(y=..count.., fill=emotion)) +
scale_fill_brewer(palette="Dark2") +
labs(x="emotion categories", y="number of tweets",
     title="Sentiment Analysis of Tweets about UCL Final\n(classification by emotion)") +
theme(plot.title = element_text(size=12))

[Image: bar chart of tweet counts by emotion]

# plot distribution of polarity

ggplot(sent_df, aes(x=polarity)) +
geom_bar(aes(y=..count.., fill=polarity)) +
scale_fill_brewer(palette="RdGy") +
labs(x="polarity categories", y="number of tweets",
     title="Sentiment Analysis of Tweets about UCL Final\n(classification by polarity)") +
theme(plot.title = element_text(size=12))

[Image: bar chart of tweet counts by polarity]

# separating text by emotion

emos = levels(factor(sent_df$emotion))
nemo = length(emos)
emo.docs = rep("", nemo)

for (i in 1:nemo)
{
   tmp = some_txt[emotion == emos[i]]
   emo.docs[i] = paste(tmp, collapse=" ")
}

# remove stopwords
emo.docs = removeWords(emo.docs, stopwords("english"))

# create corpus
corpus = Corpus(VectorSource(emo.docs))
tdm = TermDocumentMatrix(corpus)
tdm = as.matrix(tdm)
colnames(tdm) = emos

# comparison word cloud
comparison.cloud(tdm, colors = brewer.pal(nemo, "Dark2"), scale = c(3,.5), random.order = FALSE, title.size = 1.5)

[Image: comparison word cloud of tweet text by emotion]