The use of casual employees in Australia has been stable at about 20% of the workforce for the last two decades. These employees provide an affordable means for businesses, both small and large, to employ a contingent workforce. In turn, it provides employment opportunities to people who might not otherwise be able to work, from teenagers getting their first jobs through to stay-at-home parents wanting to supplement their household income. Casual employment is generally associated with a higher hourly rate (compared to permanently employed peers), no provision for annual or sick leave (long service leave is an exception), and no notice for termination. While casual employment has a lot of benefits, there’s a hidden cost and risk if the nature of a casual employee’s work resembles a permanent job.
If a casual employee is working regular, ‘systematic’ hours, the law may construe their employment as permanent, making the employer liable for additional costs. These could include the financial liability for annual and sick leave (despite the fact that the employee was paid a higher rate) as well as damages for unfair dismissal.
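To get a sense of the scale of that liability, here is a back-of-envelope sketch. The entitlements used (four weeks’ annual leave and ten days’ personal leave per year) reflect standard Australian full-time figures, but the calculation, rates and the assumption that they would apply retrospectively are purely illustrative, not legal advice:

```python
ANNUAL_LEAVE_WEEKS = 4   # standard full-time annual leave entitlement
SICK_LEAVE_DAYS = 10     # standard personal/carer's leave per year

def leave_liability(years_worked, weekly_hours, base_hourly_rate):
    """Rough estimate of the accrued leave liability if a casual
    employee were deemed permanent. Illustrative only."""
    annual_leave_hours = years_worked * ANNUAL_LEAVE_WEEKS * weekly_hours
    # assume a 5-day week, so a 'day' of sick leave is weekly_hours / 5
    sick_leave_hours = years_worked * SICK_LEAVE_DAYS * (weekly_hours / 5)
    return (annual_leave_hours + sick_leave_hours) * base_hourly_rate

# Two years of regular 38-hour fortnightly patterns at $25/hour
print(leave_liability(2, 38, 25.0))  # → 11400.0
```

Even for a single employee on modest hours, the exposure adds up quickly, before any unfair dismissal damages are considered.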
If you really need some of your casual employees to work like permanent employees, you can investigate alternative employment options such as part-time arrangements. There are newer types of part-time employment available these days, like flexible part-time (e.g. ordinary hours of work averaged over a period of one to four weeks) and partial part-time (e.g. working full time for nine months and having the other three months off). In combination with traditional part-time arrangements, this gives employers plenty of options to balance their resourcing requirements against the availability and desired flexibility of their staff.
While part-time arrangements provide a way to mitigate the hidden cost of a casual workforce, it helps to know which employees are at risk of working regular hours. One useful tool is a casual hours dashboard designed to 1) categorise the risk of employees based on their pattern of work, 2) allow you to drill down to the day-to-day details of their work and 3) respond by changing their work shifts or employment arrangements accordingly.
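As a sketch of how the risk categorisation in step 1 might work, the function below scores an employee’s fortnightly hours by how consistent they are: the lower the spread, the more ‘systematic’ the pattern looks. The thresholds, category names and minimum history are assumptions for illustration only; as noted below, every organisation defines ‘regular’ differently:

```python
from statistics import pstdev

def categorise_risk(fortnightly_hours, min_fortnights=6, variation_threshold=2.0):
    """Categorise a casual employee's risk of 'regular' employment based
    on how consistent their fortnightly hours have been.

    fortnightly_hours: total hours worked each fortnight, oldest first.
    Thresholds are illustrative; tune them to your own definition.
    """
    worked = [h for h in fortnightly_hours if h > 0]
    if len(worked) < min_fortnights:
        return "low"            # not enough history to look 'systematic'
    spread = pstdev(worked)     # low spread = very consistent hours
    if spread <= variation_threshold:
        return "high"           # near-identical hours fortnight after fortnight
    if spread <= variation_threshold * 2:
        return "medium"
    return "low"

print(categorise_risk([38] * 12))  # → 'high': a steady 38 hours every fortnight
```

In a real dashboard you would run this per employee over payroll data and use the category to drive the counts on the landing page.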
The image below shows the landing page for a casual hours dashboard. Across the top are the counts of casual employees that fall into the three risk categories for ‘regular’ employment. The meaning of ‘regular employment’ can change from organisation to organisation, so the definition is included below each category. Below the definitions are the filters, so that leaders from different areas of the business can confine the report to their relevant casual employees. Along the lower right is a spark line for each employee showing, at a high level, the pattern of their hours each fortnight. This provides a way to assess how regularly they have been working and for how long.
From this landing page it’s possible to ‘zoom’ into the daily work hours for each employee. By clicking on an employee’s name you can move to the drill down page pictured below. This page provides a lot more detail as to how many hours an employee works, across which days and over what duration. It’s possible to load this drill down for all the employees that work for a certain manager or belong to a certain risk category.
The final page in this dashboard is a fortnightly summary of hours for all casual employees belonging to a certain team. While dashboards are built to be interactive, it’s not always possible to work with your clients in front of a screen. This final page allows you to print out results from a page optimised for A3 printing.
If you’ve seen another way to visualise this issue, please let me know. Also let me know if you have any questions about how you might implement a similar solution for your organisation.
The world of analytics has exploded with a vast array of new technologies, tools, systems, training, opportunities and business models. Most people understand that analytics is powerful and have heard stories about how companies like Amazon and Google use it to drive innovation and grow their organisations. However, when it comes to your own life, it can be difficult to understand exactly how you can use it. For some, analytics feels like something akin to magic wielded by ‘data scientists’ with PhDs and decades of experience.
The reality is that analytics is being democratised by the very same technology that’s made it valuable. This has given rise to self-service analytics. After years of investment in centralised data, maturing data governance and user-friendly software, there are now a range of options for anyone to answer their own questions using sophisticated analytical techniques.
There are a lot of tools available for anyone to do their own analytics. Some are ‘one off’ tools like Google’s Ngram Viewer, which lets you investigate how frequently specific words have been used in books, or Twitter Analytics, which lets you look over the stats for your own account. Then there are broader tools that allow you to investigate a range of different data sources. While there are many examples, I want to focus on three across the broad spectrum of options: Watson Analytics, Tableau and Popily.
Watson Analytics is cloud-based, lets you explore your own data by typing natural language questions, and is available with tiered payment options starting from free.
Tableau has desktop, cloud and server-based options, is optimised for enterprise data sources, and has free and paid options.
Popily is a brand new offering that will continue to mature through new releases; it’s cloud-based, currently only uses publicly available data, and is free.
You may recognise the name ‘Watson’ as the artificial intelligence developed by IBM that won the quiz show Jeopardy in 2011. Watson was able to listen and respond to natural language questions beating two previous champions. Today, Watson is able to analyse large corpora of unstructured data allowing it to manage decisions in lung cancer treatment, find new food combinations for recipes and make music recommendations.
The Watson AI that is able to do all this is not necessarily the same ‘Watson’ you have access to as part of IBM’s cloud-based Watson Analytics offering. Watson Analytics allows you to ‘ask’ questions about your data sets in natural language by typing them in. It responds with options and graphs that it has determined will best answer your question.
While there appears to be no move to provide a desktop version of Watson Analytics, IBM’s enterprise-grade business intelligence offering, Cognos, is inheriting some of Watson Analytics’ natural language processing and visualisation aesthetics. For a great overview of the product, check out this video.
Tableau is best known as a visualisation tool. Its adoption within the business community continues to grow year on year. Tableau is a mature offering and recently released version 9. It can be deployed on your local machine, your server or from the cloud. It allows you to create beautiful, interactive graphs to quickly and intuitively tell a story or to provide insight into previously unintelligible data. To get a sense of the look and feel of Tableau’s visualisation check out their gallery.
Popily is a brand new offering released by the team responsible for the analytics-themed podcast Partially Derivative and the developers of CrisisNET. Popily gives non-technical people the ability to explore data without needing to know code or statistics. As a brand new offering, the cloud-based Popily can only be used to explore publicly available data sets added to their platform. I believe the release of Popily is the start of a wave of new start-ups focused on self-service analytics, leveraging the rise of technologies like software-as-a-service, machine learning and scalable analytics.
Let’s test them
I’ve reviewed these offerings against the following areas:
The data we’re looking at has been limited to what’s currently available through Popily’s public library of data sources. We’ll use Airbnb’s data set because they share their listing information through a Creative Commons license. In fact, you can explore the data through their own visualisations here (created using Leaflet and Mapbox).
All three offerings have a free option (so feel free to jump in yourself and have a play – Watson Analytics, Tableau Public and Popily). Creating accounts for all options is straightforward, although you’ll need to download software for Tableau.
For Watson Analytics, if you pay you’ll be able to analyse more data (more rows and columns), and there’s an enterprise version where you can allocate access across a tenancy. Actual prices and packages are constantly changing (at least at the time of writing), so check out the site for the latest prices.
Tableau’s paid options are designed for enterprises and structured around the number of licensed users. For companies this means you’ll be paying for both desktop licences and a server licence so that you can privately share your visualisations. Specifying users can be a bit limiting if you’re an organisation that prefers flexibility or plans on managing security access through Tableau Server.
Watson Analytics allows you to upload your own data and, if you upgrade, you can also connect automatically to the Twitter API (they’ll grab a 10% sample of tweets from the last 6 months based on keywords). Adding data is as simple as clicking the add button from the login dashboard. The free account is limited to 50,000 rows and 40 fields. Adding an abridged version of the Airbnb data set took about 6 minutes over a medium-speed NBN connection. Once uploaded, the first thing you’ll notice is that Watson Analytics has assessed the quality of your data. When you first click on your data set you’ll get a dialog box with a series of prompt questions.
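Watson’s quality assessment happens behind the scenes, but a crude version of the underlying idea (scoring each field by how complete it is) looks something like this. The field names and scoring are illustrative assumptions, not Watson Analytics’ actual checks:

```python
def quality_summary(columns):
    """Rough data-quality check: percentage of non-missing values per field.

    columns: dict mapping field name -> list of values (None = missing).
    """
    return {
        name: round(100 * sum(v is not None for v in values) / len(values), 1)
        for name, values in columns.items()
    }

# Hypothetical subset of Airbnb listing fields
listings = {
    "weekly_price": [350, None, 1200, 800],
    "bedrooms": [1, 2, None, 3],
    "property_type": ["Apartment", "House", "Boat", "House"],
}
print(quality_summary(listings))
# → {'weekly_price': 75.0, 'bedrooms': 75.0, 'property_type': 100.0}
```

Real tools go further (checking outliers, skew and cardinality as well as completeness), but the principle of flagging weak fields before analysis is the same.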
Tableau is optimised to analyse large data sets. Tableau Public can connect to Microsoft Excel, Microsoft Access and text files. While you are limited to 1 million rows of data, this limit applies per connection. There is a file size limit of 1 gigabyte for saving to the cloud. Adding data connections is easy: you select by source type (e.g. Excel file, database, etc.), view the data once connected, and choose how you want to import the fields.
There is currently no ability to load your own data sets into Popily, which is why we’re using the Airbnb public data set already added to the platform. They are now extending invitations to companies to add their data.
The focus of this section will be looking for relationships between the price of accommodation and the number of rooms.
As we saw when we first loaded our data set, Watson Analytics is already suggesting areas we might want to investigate. If you select the Explore option you’ll be able to ask natural language questions. In this instance I’ve asked ‘what is the relationship between bedrooms and weekly_price?’.
Exploring these options I found that the visualisations are not all that useful initially. Watson Analytics likes to aggregate by average, which hides a lot of the information you want to see. However, clicking on the column function on the right allows you to select exactly which fields you want and how to graph them. Using this I created the following graph.
This graph is more meaningful. I can see the relationship I’d expect to see between price and the number of rooms. But now I can also see which properties attract a higher premium per room (in this instance it’s trains and boats). You can also quickly click on the property_type field and select other relevant fields to investigate, like Country and Neighborhood. Another powerful option available through Watson Analytics is its prediction engine. To see more about this feature check out some guides here and here.
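The same aggregation is easy to reproduce outside any of these tools. The sketch below computes the average weekly price per bedroom, grouped by property type; the field names mirror the Airbnb data set, but the sample numbers are made up:

```python
from collections import defaultdict

def avg_price_per_bed(listings):
    """Average weekly price per bedroom, grouped by property type.

    listings: iterable of (property_type, bedrooms, weekly_price) tuples.
    """
    totals = defaultdict(lambda: [0.0, 0])  # type -> [sum of per-bed prices, count]
    for prop_type, bedrooms, price in listings:
        if bedrooms:                         # skip listings with no bedroom count
            totals[prop_type][0] += price / bedrooms
            totals[prop_type][1] += 1
    return {t: round(s / n, 2) for t, (s, n) in totals.items()}

sample = [
    ("Apartment", 1, 350), ("Apartment", 2, 600),
    ("Boat", 1, 900), ("House", 3, 750),
]
print(avg_price_per_bed(sample))
# → {'Apartment': 325.0, 'Boat': 900.0, 'House': 250.0}
```

Even in this toy sample, the boat attracts the highest premium per room, the same kind of insight the graph surfaced.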
Tableau is much more hands-on than Watson Analytics or Popily. This means that when you first add your data set, you’re not going to get any automatic recommendations. However, Tableau has done a lot behind the scenes. It’s categorised each of the Airbnb fields and determined whether they are dimensions or measures. This works in your favour when deciding how to visualise your information.
From this starting screen you can start to explore your data. To explore the relationship between beds and price, you grab the fields from the lists on the left and drag them across to the row and column shelves. Tableau will automatically select a scatter plot, which, for this investigation, is exactly what we want. We can now decide which detail to split the plot by. Dragging across the property type field and aggregating by average values, we can replicate a graph similar to the one we created in Watson Analytics.
From here there’s a lot of flexibility in what you can do with this information. You can add dimensions to change size, shape and colour. You can also quickly add filters and trend lines, forecast if you have time series data, or plot your data on a map.
When you first log in to Popily you’ll see a list of recent public data sources on the right. Click on Airbnb listings and you’ll immediately be presented with a set of charts. If you scroll to the bottom you’ll see that the data source has been prepopulated with 2,421 pages of charts. You can go through and explore these pages, but it makes more sense to start limiting your search to the fields you are interested in.
Let’s start our search with the relationship between cost and the number of rooms. You can search by fields within the yellow-bordered search dialog at the top of the screen. Select monthly price and number of beds. You’ll see the number of pages has been limited to 5 and you can start exploring charts more relevant to your investigation. You’ll be presented with a chart called Average monthly price by number of beds over date cost started on AirBnB. Once again, not particularly insightful. If you scroll down you’ll see Average monthly price of number of beds.
This graph is a little more useful as we can start to see the relationship – namely, more beds, more expensive. However, from the example picture above you’ll notice an immediate limitation of Popily’s visualisation. There are no axis headings, no legend and no labels. In fact, other than the heading, the only way to know what you are looking at is to mouse over the graph elements. Even more annoying, if you have multiple elements on a line graph it won’t label the values (you need to guess) and you need to be very precise with how you position your mouse to get them.
I like Tableau because it provides the most control over how you load, model and visualise insights. However, the value of self-service analytics is giving anyone the power to do meaningful analytics. From the perspective of a non-technical user I’d recommend Watson Analytics. It’s a more mature offering than Popily and doesn’t present the learning curve required for Tableau. I’m looking forward to seeing how these offerings continue to grow and evolve. If you agree or disagree, let me know below.
To a person with a hammer, every problem is a nail…
Right now I’m going through a phase where my hammer is Tableau and everything can be fixed through a decent dashboard. To that end, I developed a dashboard that visualizes my team’s backlog of work.
My team isn’t strictly in ‘software development’ but we’ve come to use Agile as the foundation for managing our projects. Its flexibility allows us to manage all different types of work (i.e. scheduled reports, analytical projects, user support, etc.) using Atlassian’s Jira (with the Agile, GreenHopper and Zephyr plugins).
The challenge I’ve faced is that, as we operate in a shared service model, we have lots of competing requests from our customers, and balancing them at our monthly prioritisation meetings is hard. As pseudo-product owners, these customers determine what we do, but with so many competing agendas, how do we get consensus on where we’ll focus our effort?
The purpose of the dashboard is to answer the main questions our customers ask:
What are my requests?
How important are my requests compared to everyone else’s?
Generally, if these questions can be answered quickly and transparently (that is, each customer can see everyone else’s requests) it becomes very apparent which requests should be done before others. For example, should we prioritise the complicated enhancement for a report going to 5 people or a simple bug fix going to 5,000?
What does the dashboard include?
As you can see from the dashboard (nicknamed the petri dish), you can quickly get a sense of where the more important requests are (top right-hand corner). Using the filters on the right you can click on your team name (‘Learning’, for example) to have those requests highlighted on the matrix and their details listed in the table below. You can also get a feel for the size of the task and the nature of the request.
The following are the definitions for the custom fields used.
Components: the nature and type of request
Group initiatives are requests from senior executive stakeholders
Customer initiatives are requests from our stakeholders
Continuous improvement are requests from the team
Scheduled reports are reports that recur regularly
Fast track are requests prioritized for immediate delivery
T-Shirt Size: an approximate estimate of effort based on an initial assessment of a request, before elaboration.
XS: 1 day or less
S: 2 days
M: 5 days
L: 10 days
XL: more than 10 days
Benefits Score: a measure of benefit value between 1 and 5. Benefits that may be identified include reduced costs, reduced risks, or an improved employee value proposition.
Strategy Score: a measure of strategic alignment between 1 and 5. A strategically aligned item should align with the strategic technology stack and with the team’s development road map.
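The dashboard itself simply plots benefit score against strategy score, but the trade-off it visualises can be sketched as a toy heuristic: higher combined scores rank first, and smaller pieces of work win ties. The weighting below is an assumption for illustration, not the dashboard’s actual logic:

```python
# Days of effort per t-shirt size, matching the definitions above
T_SHIRT_DAYS = {"XS": 1, "S": 2, "M": 5, "L": 10}

def priority(benefit_score, strategy_score, size):
    """Toy 'petri dish' ranking: combined benefit and strategy scores
    (each 1-5) per day of estimated effort. Purely illustrative."""
    return (benefit_score + strategy_score) / T_SHIRT_DAYS[size]

# A simple, widely beneficial bug fix (S) outranks a complicated
# enhancement (L) carrying the same scores
assert priority(5, 4, "S") > priority(5, 4, "L")
```

Scoring per day of effort is one way to formalise the ‘5 people vs 5,000’ conversation, though in practice the matrix view lets customers reach that consensus visually.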
Why build a dashboard when you could use Jira’s Agile plan view?
Despite Jira’s useful reporting functionality, it’s hard to intuitively show our less Jira-savvy customers what exactly is going on with our backlog. While a list is great when you’re going through iteration planning, there isn’t enough information available without constantly drilling down into each issue. It’s also not clear which requests came from which customers. Plus, you know, the whole hammer thing…
How does it update?
The dashboard is delivered through Tableau. Currently, to get the latest request information, I manually extract it from Jira as a CSV and update the dashboard. This is fine, as the dashboard only needs to be updated monthly and the refresh only takes a few minutes. That said, if you wanted ‘live’ data, Jira does have a REST API you can plug into.
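If you did want to automate the extract, a sketch of the query half is below. Jira’s search endpoint (`/rest/api/2/search`) accepts JQL; the project key, status name and custom field id here are placeholders you would swap for your own instance:

```python
def backlog_query(project="TEAM", max_results=100):
    """Build query parameters for Jira's /rest/api/2/search endpoint.

    The project key, status and field names are placeholders; check your
    own Jira instance for the real custom field ids.
    """
    return {
        "jql": f"project = {project} AND status = Backlog",
        "maxResults": max_results,
        "fields": "summary,components,customfield_10001",  # hypothetical custom field id
    }

# With the third-party requests library you would then call something like
# (not executed here):
#   requests.get(f"{base_url}/rest/api/2/search",
#                params=backlog_query(), auth=(user, api_token))
```

The JSON response could then feed Tableau directly (via an extract refresh) instead of a hand-exported CSV.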
In true Agile fashion, the dashboard has some enhancements waiting in the backlog. The most important is maturing from the benefits score and t-shirt size to actual cost/benefit measured in dollars. Even though the team doesn’t currently have a charge-back model, speaking in ‘dollars’ is something all customers understand. It can also tell a compelling story (for example: your enhancement will cost $10,000 of the team’s available effort – is that a wise investment compared to the benefits you expect?)
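The translation itself is simple arithmetic. The daily rate below is a made-up blended figure, and the XL mapping (defined above only as ‘more than 10 days’) is assumed at 15 days for illustration:

```python
DAILY_RATE = 1000  # assumed blended daily cost of team effort, not a real figure

T_SHIRT_DAYS = {"XS": 1, "S": 2, "M": 5, "L": 10, "XL": 15}  # XL assumed at 15

def effort_cost(size):
    """Translate a t-shirt size estimate into an indicative dollar cost."""
    return T_SHIRT_DAYS[size] * DAILY_RATE

print(effort_cost("L"))  # → 10000: a 10-day enhancement as $10,000 of effort
```

Even a rough conversion like this reframes prioritisation conversations in terms every customer understands.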
I’ve added a version of the dashboard with mocked up data to Tableau here.