Back in Action | December 2016


Hi All,

A quick shout out to my followers: I will be back to posting articles this December. I’ve been so quiet because A LOT has been going on in my personal life, the biggest being that I will be moving to The Netherlands (eek, exciting! :)). So until next time, keep qliking, and I look forward to sharing more interesting Qlik techniques with you.


Qlikies Newbies


For every new QlikView developer, where to begin and what to focus on is often a difficult question to answer. Just this week I was tasked with handing over my work to a colleague who had only completed the designer training. For any new developers, reading Miguel Garcia’s and Barry Harmsen’s QlikView 11 for Developers is a great start. The book, much like this post, will focus on practical steps to becoming a master QlikView practitioner.

Critical Thinking

I cannot stress enough how important this is. I often see analysts and developers failing to meet requirements because they could not properly conceptualize the problem. Many of the posts in this blog focus on constant re-evaluation of what we are doing, with emphasis on maintaining readability and comprehension.

The first step to becoming a great Business Intelligence professional is asking the right questions. No matter the problem, or the skills you may lack, when you ask the right questions you are guiding yourself to the right solution. Think logically and, most importantly, apply critical thinking to everything you do.


Your energy will determine your rate of growth. Being passionate about something means that you will want to spend copious amounts of time learning and applying your trade. If you cannot get excited about what you do, then don’t do it at all. Enthusiasm is your number one motivator; find a reason to be energized.


Before you develop in the script, start with understanding your front-end environment. At the end of the day, your users will see this part of your work, so it is important to be able to build and design dashboards that effectively communicate the information being presented.

One of the observations I have made about good compared to great developers is summed up quite nicely by Franklin Roosevelt:

It is common sense to take a method and try it. If it fails, admit it frankly and try another. But above all, try something.

Without trying, and failing, you will never learn to succeed. Asking colleagues questions is easy, and getting them to give you the answer is easy; doing it yourself is hard, but it is the only way YOU will remember it and be able to apply it going forward.

Build, and click on, every button available to you. Pause after each change and see what it is doing. You know you know your stuff when you can guide a colleague to solving a problem without even looking at your QlikView environment.

Your colleague: “Hey Future QlikView Guru, my client wants to see the months running horizontally in this straight table chart. Is there a way I can do this?” You, Future QlikView Guru: “Yes, go to the Presentation tab and tick the Horizontal check box near the bottom left.” Your colleague: “Awesome, thanks!” 🙂

So here is a practical guide to starting with the front-end.

  1. Build all the charts – Familiarize yourself with what is available and get a rough idea what each type of chart would be used for
  2. Aesthetics – Build a chart, any chart, and then try to make it look nicer. This is where critical thinking comes in. Ask yourself what you would like to change and then figure out whether that is possible. Spend time on the QlikCommunity, but don’t change your question until you have an answer as to whether it’s possible or not. Something like “can I make this bar chart have an embedded raindrop .gif” is not the kind of question you should be asking. Keep it simple and focus on clean aesthetics
  3. Repeat for all other charts
  4. Play around – Get excited about your journey of learning, see what options are available on the tabs for each object and see what happens when you apply a change

Script Replication

Building expressions is a lot simpler than creating a script to append to our data-model. For instance, if we use the data provided to us by QlikView and we wanted to show total Sales by Product, we would do this
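The original screenshot is missing here, but the front-end piece is simply a chart with Product as its dimension and a sum over Sales as its expression, along these lines:

```
// Chart dimension: Product
// Chart expression: total sales per product
Sum(Sales)
```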


The next thing all new QlikView developers should do is try to recreate this in the script. As with anything in life, there’s always more than one way to do things; applying this approach to learning at least teaches you one method for a scenario that you know works. It exposes you to techniques that strengthen both your front-end and script skills.

For the script we would create this


In the front-end we could add this to our chart to see if the values match
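The two missing screenshots aside, a minimal sketch of the script version and the front-end check might look like this (the resident table name Data is an assumption):

```
// Script: pre-aggregate Sales by Product (GROUP BY is required
// when using an aggregation function in a LOAD)
SalesByProduct:
LOAD
    Product,
    Sum(Sales) AS TotalSales
RESIDENT Data
GROUP BY Product;
```

In the chart, plotting Sum(TotalSales) next to Sum(Sales) should then return identical values per Product.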


Do this for a myriad of scenarios. You will find yourself learning at a rate that will allow you to catch up to your more tenured colleagues faster than they might have thought possible.


Before you begin designing and developing your own dashboard, get an idea of what others are doing. Qlik makes demos available that can be downloaded and viewed, most often with the scripting included.


When I started out, I was a complete noob when it came to anything technical. The QlikCommunity provided a platform where I could ask questions to my heart’s content and get replies within minutes. In the few communities I have been active on, nothing compares to QlikView. The experts and MVPs are so prompt, so consistent (Gysbert Wassenaar, Community Legend :)), that you will always find an answer to your problem.

The reason critical thinking is so important is that you must be able to apply yourself to thinking about solution delivery. If you can think like this, you will always be able to pose a question whose answer will bring you closer to your solution. Having this arsenal of questions ready will greatly help your learning.

Let’s Build a Dashboard

Applying the thought process presented in this article, let’s actually build a dashboard. Let’s see how simple questions can lead to data insights while giving you the platform to really learn the product.

1. Understand the data

The data we are working with has the following information

  • Customer – The customer that purchased the products, and in what quantity
  • Sales Order ID – Unique ID for the shipment of products to a customer
  • Ship Date – The date at which the products left the warehouse
  • Product
  • Sales – Measure
  • Quantity – Measure

The above data shows Sales and Quantity information by Product sold to each customer. Let’s create our template and then begin to build our report.

2. Template

We want to show Year, Month, and Day information. We create these in our script so we can add them as list boxes in the front-end.
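The calendar script itself is in a missing screenshot, but a minimal sketch, assuming the date field is Ship Date and the fact table is called Data, could be:

```
// Derive calendar fields from Ship Date for use as list boxes
Calendar:
LOAD DISTINCT
    [Ship Date],
    Year([Ship Date])  AS Year,
    Month([Ship Date]) AS Month,
    Day([Ship Date])   AS Day
RESIDENT Data;  // table name Data is an assumption
```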


As you will have seen with dashboards, many of the date formats are manipulated to run horizontally, and some logo information is added to the top. Our template after a few changes looks like this


The techniques we learnt

  • Text boxes
  • List boxes
  • How to load images

3. KPIs

What would the KPIs be? There are 2 measures in this data, Quantity and Sales, so let’s focus on understanding what these look like irrespective of the dimensions, and build our report to understand what drove them.


At an overview level we can see what the total sales and quantity are. When a user interacts with the dashboard, they will be presented with a visual cue as to the % contribution.
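The exact expressions are in the missing screenshot; as a sketch of the idea, a % contribution gauge typically compares the current selection against the unfiltered total via the {1} set modifier:

```
// % contribution of the current selection to all data;
// {1} ignores selections, giving the unfiltered total
Sum(Sales) / Sum({1} Sales)
```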


We learnt a few things by creating this object

  • Set Modifier
  • Text boxes
  • Gauges

4. Other Metrics

Next, we want to understand what else we can derive at a very high level from the 2 measures. Some of the additional metrics we came up with were
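The screenshot listing the metrics is missing; as illustrative examples only (not necessarily the ones used), additional metrics derivable from these two measures could include:

```
// Average sales value per order
Sum(Sales) / Count(DISTINCT [Sales Order ID])

// Average quantity per order
Sum(Quantity) / Count(DISTINCT [Sales Order ID])

// Average sales price per unit
Sum(Sales) / Sum(Quantity)
```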


5. Visualize the Information

There are many ways we can present this information. What we need to do is understand the story that the data, and we ourselves, are trying to tell through the visual display of information. On QlikView’s platform, only your creativity is your stumbling block. If you can think it and visualize it, then you most probably can create it (within reason of course :))

One of the visualizations we wanted to show was how Sales compared to Quantity by Product. We chose to use a scatter chart for various reasons; here is the visual and an explanation of what we are seeing

[Image: Scatter chart by Product (log scale)]

Each product is represented by a bubble; each bubble’s size is the number of customers that purchased that product. This is then measured against sales and quantity. When initially built, the scatter chart looked like this

[Image: Scatter chart by Product (no log scale)]

The reason for this change is the x axis. As you can see, some products generate very high sales amounts, yet most of the volume of sales is in the lower amounts. Using a log scale axis we can correctly visualize the clustering of purchases. For each of the reporting objects built, the same approach was used.

6. Order

Dashboards are stories; each snippet of information must provide some additional info to support or clarify a previous area’s value. Top down, left to right.

The rest of the dashboard was built using quite a few methods, which you can explore in the download. Here are some screenshots of our final dashboard. We certainly extracted a lot of information from a small data-set.



To download this QlikView demo, click here Qlikies Newbies.qvw



S-Curve Regression Modelling


A little while back I was working with a client in the construction project management space. One of the requirements was to build an S-Curve chart to show the performance over time of a project. A traditional S-Curve looks something like this


This chart can be easily recreated using the logistic function


In QlikView, your dimension would be


and your expression
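The screenshots are missing, but a minimal sketch might be: generate the x values in the script and use the logistic function 1 / (1 + e^-x) as the chart expression (the x range is an assumption):

```
// Dimension: x values from -6 to 6 in steps of 0.1
SCurve:
LOAD (RecNo() - 61) / 10 AS x
AUTOGENERATE 121;

// Chart expression: the logistic function
1 / (1 + Exp(-x))
```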


The main problem with the traditional S-Curve is that it does not show visually where a project or product life-cycle is within its agreed target. Let’s create some dummy data to tabulate a scenario


The above target and desired values can also be viewed as revised project timelines and/or minimum thresholds. In reality, this is not linear, and so when additional resources or inefficiencies occur, thresholds will be affected. If we display the above data in line charts, this is what we see


If you spend enough time looking at the chart you are able to make inferences about the performance of this project over time, but there is a lot that can be done by the developer to visualize this better.

Area Charts

The first apparent observation for me is that the target and desired lines represent the points at which the project either exceeds target or does not. Anything above the desired line is exceeding performance and anything below the target line is under-performing. The space in between is the ‘everything is going according to plan’ area. The next thing I wanted to do was to convert the white space into a mapped area against which the actual performance could be measured. The first challenge is that you cannot combine an area graph with a line graph, so this modification uses 2 graphs overlaid. Our area chart piece when finished looks like this


This is a nice canvas to work from. It clearly shows the area you don’t want your project to be in, the desired delivery, and when expectations have been exceeded. The trick to creating this chart is knowing that you need to use a stacked area chart and that each threshold is the summed difference from the preceding ones.

  • Red Area | Sum({1} Target)

  • Blue (Safe) Area | Sum({1} Desired) – Sum({1} Target)

  • Green Area | 1 – ([Blue (Safe) Area] + [Red Area])

In all our expressions we have used the set modifier {1}. This is to ensure that our background reference point does not change when selections in the model are applied. Another important consideration is the scale that is used. We used a min of -0.05 and a max of 1.05. This ensures that when we create the line chart, the thickness of the line is not reduced when approaching either 0% or 100%, and that both charts maintain their scales when selections are made.

Line Chart


We did not want our line chart to simply plot against our target background. If a user makes selections in the chart, then the other months’ completion % should still be visible as a reference point. The line chart, after the modifications were applied, looked like this


To achieve this the following expression and background/line style expressions were used

Chart Expression


The ‘-‘ creates a data point that cannot be interpreted by the line chart. This ensures that all periods from the dimension are shown.
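As a sketch of that technique (the field and flag names are assumptions), the expression might look like:

```
// Plot the actual completion % only where actuals exist;
// '-' produces a point the line chart cannot interpret,
// so all dimension periods still render
If(Sum(ActualFlag) = 1, Sum(Actual), '-')
```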

Chart Background Expression


In our script we created an additional field called ActualFlag
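The script snippet is in a missing screenshot; a hedged sketch of how such a flag could be created (table, field names and source path are all hypothetical) is:

```
// Preceding load: flag periods that have an actual value recorded
SCurveData:
LOAD *,
     If(Len(Actual) > 0, 1, 0) AS ActualFlag;
LOAD Period, Actual, Target, Desired
FROM [SCurveData.qvd] (qvd);  // source path is hypothetical
```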


This was created because of a regression model we are going to add further on in our S-Curve. The other 2 RGB colors handle what happens when selections are or are not made.

Chart Line Style


There are various line styles available. If this is new to you then the following styles are available

  • S1 – Connected Line
  • S2 – Dashed Line
  • S3 – Dotted Line
  • S4 – Combo Dashed and Dotted Line

Our chart now provides a visually clear picture of how actual project performance compares against the referenced target and threshold areas. When users make selections, the delivery for the other available periods is not lost and can still be viewed in relation.

An important visual key missing from our chart is a legend. As is, not everything might be clear. Before we create our legend, let’s create a regression model that will plot, for the remaining months, the forecasted values based on the linear trend of our data.

Regression Modelling

All props for this go to John Witherspoon. When researching how to create a regression model in QlikView, I came across a post that John commented on. For some reason I cannot find it now, but John saved me a lot of time figuring out how to use the regression functions and amend them to look at my data set. That aside, I wanted to provide business with a simple regression model that took the historical delivery performance and provided an inclusive forecast for the remaining periods. The expression we are about to go through could have been done in the same line chart expression, but there is one small change we wanted to make that is only possible if we split this out as a different expression. That small change is that the forecast line connecting to our history looks like it starts a month later. This is because, with connecting lines, the value only gets captured at the point of occurrence. This way, we can see clearly where the forecast is coming from and not just where it is going to. If you want to see this in action, just change this background expression from this


to this


Our regression expression is the following
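One common way to write a linear-trend forecast in a QlikView chart, and plausibly close to what the missing screenshot showed, uses the linest functions (field names assumed):

```
// Slope * Period + Intercept, fitted over the available periods
LINEST_M(TOTAL Aggr(Sum(Actual), Period), Period) * Period
  + LINEST_B(TOTAL Aggr(Sum(Actual), Period), Period)
```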


and when applied to our graph, we get this


The only thing I have not catered for is when selections are made; this renders the forecast incapable of populating. I may attempt to overcome this later on, but for now I leave this challenge to you 🙂

Final S-Curve Adjustments

To finish off our graph, I added some text boxes with colors representing what each area means on the graph


Closing Notes

The transformation value we applied above stretches beyond just the really cool graph we were able to create. Using this theme in a dashboard, we could use the text box legend to show the % of time the project spent in each state. We could essentially design an entire scorecard around this graph, showing slippage and gains through different thresholds, with each KPI providing a deeper understanding of the project’s delivery and the risk assumed at each stage in its cycle.

Whenever reviewing how to visualize the data, always ask yourself what you understand from looking at the data and what can be done to best represent the information. In our example, we clearly wanted to see, for each period, what the delivery of the project was against each of its thresholds. This was achieved, and to top things off, we added a forecast to see whether, based on historical performance, the project was on track to be completed in time.

Download: S-Curve with Regression Modelling


Diminishing Returns with Pareto Analysis


It’s been a while since my last post, and even though I have a good excuse why, it’s still an excuse 🙂 I have, for the better part of this past month, spent quite a bit of time reading articles and books on visualization. I’m currently busy with Karl Pover’s latest publication, Mastering QlikView Data Visualization. I’ve read his first book, and even though I found it very fitting and useful for beginners, Mastering QlikView Data Visualization has got me very excited.

There’s quite a bit of emphasis on statistical modelling, the QlikView Development Framework (QDF) and Rob Wunderlich’s QlikView Components. This release focuses on an exceptionally important area of BI that is often not dealt with in the detail and simplicity that Karl Pover achieves. In chapter 2, you are introduced to Pareto analysis.


This chart was put together in a very clever way, which we will explore shortly. Firstly, what is a Pareto chart and why use one? Pareto charts are used to identify the main causes or factors contributing to the overall delivery. They provide a visual way to see what the main contributors are and their relative %. Another name for this method is the 80/20 rule, because as a rule of thumb, 80% of effects occur because of 20% of causes.

How to build a Pareto Chart

There are already quite a few good blog posts and information on the web on this widely covered area. Henric Cronström (HIC) covered this very nicely in the following blog post – Recipe for a Pareto Analysis


The above chart uses the conventional method to create this type of analysis. Your primary dimension (bar chart) is measured on the left axis and your accumulation %, representing the Pareto piece (line chart), is represented on the right axis. I had always built my Pareto charts like this until I saw Karl Pover’s version. For the dimension, we use
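The screenshot of the dimension is missing; a sketch of the ranked-percentile idea (field names assumed) would be something like:

```
// Each customer's rank by sales, expressed as a fraction of all
// customers, giving a numeric 0-1 x axis
=Aggr(Rank(Sum(Sales)) / Count(TOTAL DISTINCT Customer), Customer)
```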


which ranks as a % the customer that comes 1st, 2nd and so on. Creating a numeric x axis allows us to tick Continuous on the axis settings and show the reference line for 20% of customers. The intersection of 80% and 20% gives you a clear visual area against which to measure your own data’s stats.

Pareto Modification

There are only a few things I would do slightly differently with Karl’s version, and that is to add the stratification technique he explains further on in his book to the customer values. This is the same as the ABC analysis HIC mentions, but uses a very cool technique from Christof Schwarz.


Other than that, there really isn’t much else that can or should be done. As mentioned earlier, this area has been covered quite a bit, and I don’t want to focus too much on how this can be built, but rather on a different take on Paretos and a method that I believe consistently adds more value in what it represents.

1. Buckets – Class()

If your organisation has lots of products or customers, it’s probably a good idea to group them into smaller buckets. The bucket size is important too, because you don’t want buckets so big that everything is lumped into only a few of them; this gives you fewer possible selections. We will use the following calculated dimension
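The exact expression is in the missing screenshot; a hedged sketch of such a calculated dimension (field names assumed) could be:

```
// Bucket products by sales rank into classes vClass_Intervals wide
=Class(Aggr(Rank(Sum(Sales)), Product), $(vClass_Intervals))
```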


vClass_Intervals is a front-end variable to control how many buckets we have. This is useful for 2 reasons

  1. To allow the users to determine the frequency of each bucket
  2. When a bucket is selected, namely the first or last one, and the number of products in that bucket is less than the width created by our aggregated expression, a null is shown.

Below is an example of number 2’s behavior


When bucket 1 is selected: 0 <= x < 10


When vClass_Intervals is changed from 19 to 6, for example, the products in that bucket are shown


2. Expressions

There are initially 2 very important expressions. The accumulated % and the reference line.

Accumulated % – With Relative Tick Box checked
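The expression itself is in a missing screenshot; the standard full-accumulation pattern, which this likely resembles, is:

```
// Running total over all preceding chart rows; with the Relative
// tick box checked this renders as an accumulated %
RangeSum(Above(Sum(Sales), 0, RowNo()))
```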


Reference Line


The reference line expression is a little more complicated for a host of reasons. Initially, when building this, I had the reference line start at 0%. The output looked something like this


The problem with this is that bucket 0 – the first bucket – actually contains the highest-grossing products. This makes it look as if sales are only generated from bucket 10 onward, which is incorrect. So to fix this, a reference line needs to be created at the start of the first bucket, and the remaining accumulation % split so as to create a straight reference line from the start to the end of the accumulation %.

The result is this


Another reason for the complicated reference line expression is that when a user changes the number of intervals they would like in each bucket, the number of classes changes. This is not always an even split, and so we have to correctly account for the difference through the Min and Max functions and then wrap this in a RangeSum() to accumulate. If anyone can come up with an easier way, please let me know. I struggled to even get it this far for quite some time 🙂

3. Diminishing Returns

According to Wikipedia

In economics, diminishing returns is the decrease in the incremental output of a production process as the amount of a single factor of production is incrementally increased, while the amounts of all other factors of production stay constant.

One of the main reasons we built our Pareto like this is to draw attention to that optimal point. If we subtract our accumulated % from our reference line, the point with the highest delta is our optimal point.

A Pareto’s accumulation shows that at each point on the x-axis the rate of change is different for each subsequent value. This raises the question: is there an optimal point at which, for each subsequent customer/product, the rate of change starts to decelerate?

If we add a 45° reference line, which essentially marks the same point on the x axis as on the y axis, then we can create this. The optimal point would be the delta with the greatest value


The question then gets raised: “how do we add this max value as a reference?”. The simplicity of this question will fool you. This is NOT easy. Unfortunately, there are limitations to the kind of aggregations you can do when using the Class function. Nested aggregations are not allowed; some you can work around, but not with the functions you would want to use. The second challenge is getting this to show up nicely on the Pareto chart. This fix required some out-of-the-box application of bar styles, but eventually it resembled the kind of reference I required.

Flag 1:


Flag 2:


[Image: Optimal point flag expression]

The trick here is that only at one intersection will the above and below values both evaluate to true. Once this point is identified, a 1 is allocated. Using a combo chart, we can set the optimal point as a bar with this 1 as its value.
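As a sketch of that intersection test (the real flag expressions are in the missing screenshots, and the [Delta] label is an assumption):

```
// 1 only at the row where the delta peaks: greater than or equal
// to both its neighbouring rows
If([Delta] >= Above([Delta]) AND [Delta] >= Below([Delta]), 1, 0)
```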

[Image: Optimal point line, before styling]

This doesn’t look nice, but at least we have a ‘line’ running vertically. In the style guide, choosing the shadow style and then making the bar translucent through the expression background color gives us the effect of a line. This is a trick I am quite proud of, because honestly it was by complete fluke that this dawned on me 🙂

[Image: Optimal point line, after styling]


Pareto analysis charts are really powerful, and you can get quite creative with how you choose to present the information available to you. In our scenario, we wanted a mathematical way of representing where, for every subsequent product, the change in revenue generated was less than the optimal amount. This allows organisations to plan time around activities up to their optimal generating point. The same technique can be used for Revenue and Cost of Sales figures. At the intersection of the two you have your break-even, but up until that point, which activities start to generate less profit for the time invested? Maximizing output on time utilized results in optimal efficiency, and this is valuable for any organisation to understand.

Download: Diminishing Returns with Pareto Analysis.qvw


Buffer Load


For my next post I wanted to discuss all the different options available when loading data from source or calling it internally. There are quite a few prefixes available, 22 to be exact, and when I decided to start writing on each one, I realized I was being over-ambitious.

Load Prefixes

More than that though, there are some prefixes that I have not used before, and one in particular caught my attention.


[Image: Buffer prefix in the script]

There isn’t as much information as one would think about the Buffer prefix. What is more concerning is that most of the information available is quite old, and some of it discusses various bugs, such as not being able to use a WHERE clause on a QVD (work-around: use a preceding load with buffer). It has also been noted that this prefix has known bugs, and I wanted to understand some of the limitations. Before over-analyzing, a good approach is to understand at its core what this prefix does and in what scenarios it could be used. It’s also good practice to always check the output of your logic against what you expect. So even if you are not aware of any limitations, your output should always be checked against some other source of information to validate that what you have done is correct.


QVD files can be created and maintained automatically via the buffer prefix. This prefix can be used on most LOAD and SELECT statements in script. It indicates that QVD files are used to cache/buffer the result of the statement.


Buffer [(option [ , option])] ( load statement | select statement )

option::= incremental | stale [after] amount [(days | hours)]
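For example, applying the documented options might look like this (file names are illustrative only):

```
// Re-use the buffered QVD for 12 hours before hitting the source again
Buffer (stale after 12 hours)
LOAD * FROM [Sales.csv] (txt, utf8, embedded labels, delimiter is ',');

// Read only newly appended rows of a growing log file
Buffer (incremental)
LOAD * FROM [AppEvents.log] (txt);
```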

If no option is used, the QVD buffer created by the first execution of the script will be used indefinitely. QVD buffers are stored in C:\ProgramData\QlikTech\QlikView\Buffers by default. You will see something like this

[Image: QVD buffer files in the Buffers folder]

QVD buffers will normally be removed when no longer referenced anywhere throughout a complete script execution in the document that created it or when the document that created it no longer exists.

When executing a load statement that has been buffered, you see the following

So essentially, a buffer creates a QVD so that when the model is run again, it will use an optimized file leveraging Qlik’s proprietary indexing. So how is this any different from when you already have a QVD architecture in place? What is the use case for this function, and is there any real value to be gained from it?

Use Case Scenarios

Just last week I was tasked with making some changes to some script logic in a model that had quite a bit of data, circa 185 million records. Each reload took 3 hours, and if the expression didn’t handle all the values correctly, I had to change it and wait again. This is one of those scenarios where all records need to be loaded because there were too many scenarios that needed to be validated. To add to this, the QVD architecture in place was tier 1: only the source data was used for the QVD files, and all logic was still handled during each run, which meant loading in an un-optimized environment. If I had known of this function, I could have used the buffer prefix, which would have made any future changes a lot quicker to action, as the reload time would be substantially shorter for subsequent loads. I had to make 8 changes in total, which equates to 3 days of testing and reloading. I could have done this in less than half that time if I had used this buffer prefix (won’t forget this for future reference :))

The other scenarios where this is useful are incremental loading and reading from a database. When doing incremental loads, you can state how many days/hours the cached QVD is still relevant for. This is great, as you are doing a hybrid of optimized QVD and source execution. When reading from a database, you can decrease the reload time for subsequent reloads by 5 to 10 times.

Any change to the load statement that contains the buffer will render the QVD invalid; the statement will load normally and create a new buffer*

*If you load the statement with no buffer in a future load, the file in the hidden AppData folder is removed. This is great, as some of the QVDs created can take up quite a bit of space, but QlikView manages this quite nicely.

Remove the BUFFER prefix when you move the model to production – you don’t want the model to be reloaded locally.

In Closing

This is a great time-saving prefix and something I will definitely be using in future models. Even if the environment I was in had multiple tiers in its QVD architecture, Buffer adds functionality the layers couldn’t give me: the ability to optimize a specific load statement, the resource-intensive, time-hogging statement that I need to get through to test my changes.





Mini Gauges – Linear and Friends


This is a continuation post from Mini Gauges – Circular where we explored the usefulness and viability of circular gauges inside a tabulated object within QlikView.

Linear gauges are probably the most used form of mini gauge. They are very easy to set up and provide a clean visual representation in a horizontal alignment, with no need to increase wrap text as with circular gauges. So let’s go through how to create the visual below

[Image: Linear gauge, default]

Linear Gauge

I have seen that most linear gauges use a Max(Expression) in both the expression value and the expression max gauge setting. If your data is set up like this, no worries, but most data sets have multiple dimensions, and the norm would be to use a Sum() function. If I used the same approach to create the same visual, I would get something like this

[Image: Linear gauge, incorrect max setting]

The reason for this is that the max gauge setting is taking the cumulative total of Revenue and representing that as the end point. To overcome this, we need to ensure that the max setting takes into account the max cumulative value of a single value from our dimension. The settings used to create the correct linear gauge are

[Image: Linear gauge properties]

For the Gauge Setting Max we used the following expression
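As a sketch of that expression (the screenshot is missing; Revenue and Salesperson are stand-ins for the actual field and dimension names):

```
// Largest cumulative Revenue for any one dimension value,
// so the gauge end point reflects a single row rather than the total
Max(TOTAL Aggr(Sum(Revenue), Salesperson))
```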

[Image: Linear gauge max setting expression]

Odd Behavior

I’m guilty of forgetting to take the time to set up my document and user preferences on occasion when I change the machines I work on. The reason I mention this is that the default font, Tahoma, is not my preference, and when I decided to change the font to Calibri and increase the size by 2, my straight table object produced the following.

[Image: Linear gauge odd scale behavior]

This is an interesting behavior, because at the original font size for Tahoma the scale settings were not being rendered, yet they showed when I increased the font size. What’s more interesting is that when I switched to WebView, the scales didn’t show until I increased the font size even further. So be careful to ensure that Show Scale is un-ticked, and review the display output in WebView or your AccessPoint.

[Image: Linear gauge scale settings]

Gantt Charts

Another useful feature of linear gauges is creating Gantt charts, well, sort of a Gantt chart :). A Gantt chart is a bar chart that illustrates a project’s schedule. When conventionally adding a linear gauge, you can create something that looks like this

[Image: Linear gauge Gantt chart]

So can you do something similar with the mini chart options? Unfortunately, no. You can create a series of Gantt charts using a bar chart object, but there are too many limitations in the mini gauges; the same limitations discussed in circular gauges. If you are creating a straight table object, a better approach would be to create a bar chart aligned next to the straight table, which will add more functionality.

Friends with no Benefits

The other two available gauges are Traffic Light and LED

Mini Gauge - Traffic Light and LED

There is little point adding these gauges to your reports, as image representation and font options provide more flexibility than either of them. The built-in images let you set traffic-light indicators through a nested if statement, and the LED gauge is essentially the same as a text expression with a much harder-to-read display. Linear and circular gauges definitely have their uses, but the Traffic Light and LED options can be skipped.






Mini Gauges – Circular

Once we know how to do something, we get into the habit of always doing it that way. It is a phenomenon that all of us experience, and unless there is a real need to find another way of doing things, we will always find ourselves falling into routines of completing our work.

The one aspect of writing about my experiences with QlikView that I really like is that it allows me to explore the product at a much deeper level and ask myself some very critical questions about my approach to design, data modeling and build efficiency. One of the areas of QlikView I was revisiting was the mini charts you can add as visual cues inside pivot/straight tables. Some of the questions I wanted answered were:

  1. Is there any value to be gained for the business by implementing the visual cue?
  2. What are the limitations of conventionally adding this through the chart type selections?
  3. Do I gain anything unique by building the chart inside a pivot / straight table?


There are 4 gauges available:

  • Circular
  • Linear
  • Traffic Light
  • LED

Chart Gauges

Circular Gauge

At first, when I added the chart visual, I was like, “This is not going to work, who can read this?”.

Circular Gauge Default

I’ve always built straight and pivot tables on a single line. It produces dashboards that are pleasing to the eye and don’t take up too much space. It also allows us to draw attention to the visual charts we build in the rest of the design and leave the detail there for any deep-dive questions that need to be answered. Once I increased the wrap text, I started to realize some cool things I could do with this. I’ll be posting frequently this week to cover all 4 gauges. Herewith some samples of mini gauges I built using a straight table, along with the advantages and disadvantages versus conventionally adding a gauge object

Circular Gauge 1

Circular Gauge 2

Circular Gauge 3

Circular Gauge 4


Advantages

  • Allows dimensions to be added, which in turn lets you create ‘trellis’ gauges
  • Ticking ‘Read Only’ on the General tab truly lets this act as a balanced scorecard. By default it is not selected, and even then there is a benefit, as your dimension is now selectable right from within the object, which filters the data model for the other reports
  • Wrap Cell Text lets you increase the size of the gauges
  • A balanced scorecard built from gauges is easy to create and manage
  • The default gauge style is simple – this looks cleaner than the aero-effect style
  • Straight tables allow for horizontal dimensions
  • Pivot tables let you use dimensionality() to restrict gauges to a certain level, or allow them to be created at any expanded view


Disadvantages

  • Cannot use set analysis to create cool visual effects at an attribute level within the segment color settings; these only apply when a single attribute value is selected and are not available at a dimension level
  • No pop-up labels – you cannot hover over a gauge to see its value; the workaround is to create an expression that shows the value underneath the gauge
  • No actions – standard gauge objects let you assign actions when a user clicks the gauge
  • One standard RAG scale applies across all the gauges
  • Showing expression totals for other expressions still forces the gauge to be drawn in the total row, even if the gauge expression is set to no totals

Tips and Tricks

  • Enable ‘Always Show Design Elements’ in User Preferences. This allows you to do some custom cell formatting on the fly
  • Custom Format Cell lets you use the font size to differentiate your header text from your value text. Text Format has some limitations but allows conditional flagging, which is useful in some situations, and Custom Format Cell offers other customizations not available in the dimension’s Text Format. Just be aware that Text Format overrides Custom Format Cell for bold/italics and font size
    • You can get clever by tricking the font size to allow custom cell wraps, so not every dimension / expression has the same wrap text size. This is very cool
  • Use dimensions whose metrics share the same direction for positive growth. For Revenue, higher is better, and for Costs, lower is better, so a dimension mixing these two metrics would only make sense if you are not using a RAG status and are just flagging results on a gauge
  • When adding a value expression to represent the gauge, use chr(10) to add a header without increasing the cell size. It lets you fill up those blank spaces
  • Use the image representation in another expression to add a RAG status to values that share a standard gauge scale
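As an illustration of the chr(10) tip above (the Revenue field name is an assumption):

```
// Header and value stacked inside one cell; chr(10) inserts a
// line break so no extra column or cell height is needed
='Revenue' & chr(10) & num(sum(Revenue), '#,##0')
```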


Adding a gauge via a straight table allows for quick creation of scorecards that can now be split by dimensions. Its manageability is also a lot easier if you have multiple gauges in your dashboard. The downfall, and this is a big one, is that the drawcard for gauges is the ability to set dynamic segments. This allows RAG scales to be correctly applied to each specific measure and also creates cool visual illusions. Have a look at Arturo Muñoz’s post on QlikCommunity to see some cool customization of gauges and you’ll understand what I’m saying. In closing, the use case for this type of chart is really a matter of ease over custom visualization.

Click Circular Gauges to download the example application





Part 1 – The Venice Beach of QlikView

Venice Beach, the mecca of pec popping and quad flexing in the US of A. So what do Venice Beach and QlikView have in common? Well, actually quite a bit, and just like Arnold Schwarzenegger went on to become the world’s greatest bodybuilder, so is QlikView strutting its stuff and dominating the world of BI.

So this got me thinking… is there a feature, a technique, so flexible and so powerful that it becomes the quintessential weapon in a QlikView developer’s arsenal of awesomeness? There are a lot of really cool things QlikView can do, but one feature in my opinion dominates them all. Before I get to that, I’d like to point out that QlikSense, the relatively new and well-received offering from Qlik, is based on this fundamental premise

Self Service BI

Over the years I have found that no matter how robust, informative or aesthetically pleasing a dashboard is, there will always be that one person, yes, you know that person, who would prefer to see that bar chart as a line chart, no, a pie chart, or just not like that at all; they would prefer to see those measures against those dimensions.

How can we exceed expectations when just meeting the demands of hundreds or even thousands of users seems like an impossible task? Before QlikSense, QlikView developers had to find ways of incorporating ‘additional’ dynamic capabilities into their dashboards. List boxes allowed us to filter data values in a data model, but what if we wanted to allow our users to build their own reports? Could this be done in a way similar to using list boxes?

QIY | Qlik-It Yourself

Some methods are easier to set up, others require more time but less maintenance, and another offers unparalleled flexibility but is very resource intensive. For the next month I will be exploring each method and the pros and cons of each.

So What’s First

The most common method used to build dynamic charts is a conditional show statement using SubStringCount(Concat()). QlikView 11 shipped with a dashboard that demonstrated its new capabilities, one of which is building your own dashboard.

To download the dashboard click on the image below

What's new in QlikView 11

Today’s post is not going to focus on this, even though for purely building dynamic pivot tables it is, in my opinion, one of the easiest methods to deploy. Instead, I want to discuss something I don’t see many developers really using, and to motivate why this method should be used more frequently.

Show Me The Money $($(=))

Dollar expansions are one of my favorite functions to exploit. Whether you are including one in a string search inside a set analysis statement or creating iterative calculated dimensions and calling them through a pick(match()) expression, dollar-sign expansions are really powerful and really efficient to write. Once set up, the front-end manageability is really easy.

So to kick off this pec popping edition of Qlikies, let’s discuss the scenario

The Challenge

One of the hardest challenges when designing a dashboard is deciding which charts to use, and a layout that addresses all of the stakeholders’ requirements. In come dollar-sign expansions: $(=string/variable). With some careful planning in your script, you are now able to allow your users to define their charting based on their requirements.

The particular example I will demonstrate was done for a call center business a few years ago. Before we look at what was done, I’d like to list the things I’ve done with this approach

  1. Allow dynamic selection of dimensions and measures for various chart types
    1. The dimension is a single expression – no longer do you need to build multiple expressions with a conditional show statement
  2. Comments – most applications manage comments through a show/hide feature. Using $() allows you to hide a single sheet and manage all comments with a single variable action

Note to Self – Next week’s post will be a detailed example of how comments can be managed using the above technique

Step 1 – Measures

The entire premise of this technique is to leverage off data model designs built into the script that can be called on the front-end. First up, let’s define our measures through the use of LET statements
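The screenshot that belongs here is not reproduced, but the LET statements might look something like this (the variable and field names are assumptions for illustration):

```
// Each variable stores its measure expression as a string,
// to be expanded on the front-end with $( )
LET vAgentCallsAnswered = 'sum(CallsAnswered)';
LET vAgentCallsOffered  = 'sum(CallsOffered)';
LET vAgentLoggedOnTime  = 'sum(LoggedOnTime)';
```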


Step 2 – Measure and Dimension List

Our QlikView model will be using input boxes with a predefined list for the user to select from. To define the values in the list, we need to create a data island that holds the field values to be called. For this exercise we will create an inline load
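A sketch of the two inline data islands (the values shown are illustrative assumptions; the real lists depend on your data):

```
// Island 1: friendly measure names mapped to variable names
MeasureList:
LOAD * INLINE [
AgentExpressionVar, MeasureName
vAgentCallsAnswered, Calls Answered
vAgentLoggedOnTime, Logged on Time
];

// Island 2: friendly dimension names mapped to source fields
DimensionList:
LOAD * INLINE [
AgentDimensions, DimensionName
CallDate, Date
WeekDay, Week Day
];
```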


  • AgentExpressionVar – the variable that contains the expression for our agent measures
  • MeasureName – important for the user, as they will want to select a name that is familiar to them
  • AgentDimensions – the dimensions we want the user to be able to switch between – think cyclic group
  • DimensionName – as with MeasureName, an easier-to-read description of the source field name


Note: in the above, the two tables we created aren’t linked on any common field. For this exercise they don’t need to be, as they act as lists that the $() will resolve within the scope of the data model to represent our values

Step 3 – Input Boxes

What we will be creating is something that looks like this

Example Input Box Chart

It is important to note that there are always two ways to create your variables: in script or in the front-end. It is advisable to do this in script for a few reasons, namely,

  1. -prj folder lets you get your script out with all your variables
  2. Let statements let you define a default value

THIS IS VERY IMPORTANT: our input box, not our variable, will contain an expression that returns a delimited list of values the user can select from. If we build the variable in script, we can use the LET statement to define a default value. So in your script, before the LET statements for the variables, add LET statements with a default value for MeasureName and DimensionName
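A minimal sketch of those default-value LET statements (variable names are assumptions and the values must match entries in the data islands):

```
// Defaults shown the first time the document opens
LET vMeasureList   = 'Calls Answered';
LET vDimensionList = 'Date';
```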


Below is an image with our Input box properties


Herewith a list of the properties that are important

  • Input Constraints – Tick Predefined Values Only
  • Value List – Tick Predefined Values in Drop-Down
  • Predefined Values – Listed Values | For each variable use the applicable concat expression
    • =','&concat(DISTINCT MeasureName,',')
    • =','&concat(DISTINCT DimensionName,',')


Step 4 – Dollar Sign Expansion

This is where the awesomeness of our approach comes into practice. Below is an image of the expression we will use. For our dimension we will use the variable vDimensionList, the one LET statement where we set a default value


  1. FieldValue(FieldIndex()) – references a field value based on its positional element as it appears in the data model. This function allows us to dynamically reference values in a table based on the selected value in our input box
  2. Encapsulating the expression in a $(=) forces QlikView to evaluate the output of the FieldValue(FieldIndex()) and, in our case, it picks up that vAgentCallsAnswered = sum(1)
  3. As in step 2, adding an additional $(=) forces QlikView to calculate the output of vAgentCallsAnswered, which in our case is sum(1), and so the value 1 is returned
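Putting the steps together, the measure expression might look like this (variable and field names follow the earlier sketches and are assumptions; the element numbers of MeasureName and AgentExpressionVar line up because both fields were loaded in the same island):

```
// 1. FieldIndex: position of the selected friendly name
// 2. FieldValue: the variable name at that position,
//    e.g. vAgentCallsAnswered
// 3. $($(=...)): expand that variable into its stored expression
$($(=FieldValue('AgentExpressionVar',
      FieldIndex('MeasureName', vMeasureList))))
```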

This behavior follows a sequence of events that, once understood as above, allows us to leverage it and ‘trick’ QlikView into dynamically calculating our chart based on string values selected by means of an input box

This is a very powerful technique that is resource efficient, as only a single expression is evaluated at any point in the model when the user makes a selection. Let’s take this one step further and look at what we have enabled our user to do other than build their own chart.


The business user wants to see the volume of calls that came through by Date, so they select that measure and dimension. The data is a bit erratic, and it is not easy to visually identify any patterns. The user changes the dimension to Week Day, which in a Contact Center environment can alert you to some interesting patterns. Since no months or years are selected, the output is a clustered allocation of where the call volumes have been distributed. The user selects Friday, as it received substantially fewer calls than any other day of the week. Changing the dimension to Month, it is noticed that April, May and June received far more calls than the rest of the year. The user asks, “How did the business put actions in place to manage this increased volume of calls?” The business should have increased the number of agents (logged-on time) allocated to ensure service levels would not be impacted. Changing the measure to Logged on Time, it is noted that there was no increase in logged-on time, which in turn directly impacted the organisation’s ability to deliver against its service level requirements.

The chart we provided allowed business to ask critical questions that guided their analytical findings and helped them understand how the business reacted to changes in deliverables. This insight allows the operation to pose these questions to the correct person, who can in turn put processes in place to ensure delivery requirements are met, even in cases raised by exception reporting.

I hope you enjoyed this detailed explanation of a function that I have much love for. Built to encourage exploratory comprehension of the underlying data, this technique has the power to enable organisations to really understand the drivers in their business with a single object.



Earlier this week I was asked to assist with a project that required some source transformations before the data could be used in the model. While reviewing the requirement, it dawned on me that I might need to do something I hadn’t needed to do before: reverse a cross-table. In other words, load data in a GENERIC table format.

From this

To this

Admittedly, there might be better ways to solve the use case I am about to explain, but given the fact I could try something new, I went with the GENERIC approach.

Use Case Scenario

Group reporting requires that all business units submit their annual reports. One business unit has submitted data but has not correctly reflected the split of values against the different reporting levels. Instead, they have allocated all values to one dimensional value. For reasons we won’t discuss 🙂 it is up to the person responsible for the group submission to split out these values and allocate them to their correct area.

Tools of the Trade

We have 2 vital pieces of information:

  1. The original submission in the correct format
  2. The % split to be applied to each dimensional value in the submission

Identify the Problem to be Solved

At this stage, it is critical to map out what should be done and how it should be done to ensure that at no point the process fails to correctly split the values. Our particular data-set had multiple dimensional values at a very granular level with all values allocated to a single reporting level; simplified, something like this

Single Allocation

If you refer back to the image ‘From this’, that is what the allocation should look like. The example we are using is a very simple one, as it is the technique and its application that are important to understand. The most important rule to remember is that our split-% table governs the level of granularity we can apply to our submission table.

In essence, we have a table with 1 record, and for each attribute value (let’s use 3) we create records that show the correct value. When we are done with our load and join, we should have 3 records in total.


The GENERIC Load

GENERIC is a load prefix that we can use to transform our cross-table into a generic table. The reason we went with this approach is that we want to take our submission format and multiply each attribute value by the relative %.

  • Only 3 fields should be used – you can actually load more, or *, but it is situational, as explained below
  • The GENERIC LOAD interprets the input table such that the last field read is the value, the second-to-last is the attribute, and the rest form the key. The first few fields will be treated as qualified fields. If you load more than 3 fields, you will have a synthetic key to your input table. In our particular case that wouldn’t be a problem, as we would eventually join this onto our submission report, and we would want to do so on each matching field. In the event you don’t want this, rather create a composite key on both input and output tables, with the input still containing the fields you want to have in your data model
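A minimal sketch of a three-field GENERIC load in the key / attribute / value order described above (table names and data are illustrative):

```
SourceData:
LOAD * INLINE [
TransID, Dim1, Expression1
1, A, 0.5
1, B, 0.3
1, C, 0.2
];

// GENERIC pivots each Dim1 value into its own field:
// last field = value, second-to-last = attribute, the rest = key.
// This produces one table per attribute value (GenTable.A, etc.)
GenTable:
GENERIC LOAD TransID, Dim1, Expression1
RESIDENT SourceData;
```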

To demonstrate what would happen if you keep all fields loaded in a generic load, herewith the script and output.

Script - Generic Load

Table View - Generic Load

Here’s what your generic load should look like

Script Generic Load


At this stage, we want to create a record for each key in our data set. There are many ways to do this, but a nice, clean, dynamic and easy-to-manage method is to use the script below

Generic Table CLEAN
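The clean-up script referenced above is commonly written as a loop that joins every table produced by the generic load back onto a single key table; a sketch, assuming the generic load was labelled GenTable and a SourceData table with the keys still exists:

```
// One record per key value
CombinedGenericTable:
LOAD DISTINCT TransID
RESIDENT SourceData;

// Walk the open tables backwards, join each GenTable.* table
// onto the key table, then drop it
FOR i = NoOfTables() - 1 TO 0 STEP -1
  LET vTableName = TableName($(i));
  IF WildMatch('$(vTableName)', 'GenTable.*') THEN
    LEFT JOIN (CombinedGenericTable)
    LOAD * RESIDENT [$(vTableName)];
    DROP TABLE [$(vTableName)];
  END IF
NEXT i
```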

Using our example in The GENERIC Load section, we allocated the KEY as TransID, the Attribute as Dim1 and the Value as Expression1

The output of the above combined our synthetic tables into one table that looks like this

Table View - Cleaned Generic Load

The Final Piece

Now that we have our KEY table, we are able to reference it back to our submission format as explained in the use case. We joined the tables together and multiplied the attributes by each other to correctly split the values assigned to only one attribute value. The sheer number of dimensions that had to be manipulated in our real-world problem would have been very time consuming to script individually. The above method offered a simple solution to align the value to the field we want to multiply by, and provided us with easy-to-read levels that we could check and validate against during the process run.

Food for Thought

One of the reasons I’ve never gone the route of the generic load is the ease with which you can reference attribute values in expressions and list boxes when they are contained in a single field. Having attribute values spread across columns makes the front-end build very difficult to manage and would require individual expressions to be created. It so happened that the above approach worked, based on how we needed to report the information back and the ease of implementation. I would always suggest keeping information in a cross-table format, as it will greatly simplify future development and expression manageability.


Switches, Triggers and Coolness

The one thing I really like experimenting with is ‘switches’. Having buttons, objects and images linked to actions that add to the experience of the information I can present.

I’ve come across some pretty cool techniques over the years and, from what I’ve seen, the only limitation is your creativity. This subject is quite vast, and in a future series I will focus purely on sharing the different use cases I have come across. For now, a good starting point is something all of us get asked at some point: “How can I see only this when I get that, but still see all if I don’t click that?”

A typical example would be when you have created a straight table that flags performance based on a set criteria


Let’s say our company wants to see only the good results when they click good, and vice versa for the other scenarios. We could say you can click on the status in the table and this would filter your data, or would it? Unfortunately, this is not the case: clicking a color will simply filter the table to the lowest dimensional value in the table.

The Request

The client has a straight table with multiple dimensions and 3 expressions that look at this month’s performance, last month’s and the month before’s. Create an expression that flags GREEN when 2 or more months have values, AMBER when only 1 month has values and RED when no month has any values. Additionally, find a way to allow the users to toggle between the different states.

The Challenge

This is a change request; the model is in production and the developer (us) assigned to this request is not the developer that built the model. At this point we need to set out how we are going to accomplish the request.

There is one caveat to our design: you can either see all statuses or select one. You cannot have combinations of 2 of the 3 statuses shown. Since this is not required, we do not need to cater for it.

  1. Create the expression on the straight table that flags the conditions
  2. Find a way to reduce the returned results through an action by the end user

Step 1

Fortunately, the logic we have to evaluate has the same number of discrete values as the number of columns we need to check against. This is vital, as it greatly simplifies the approach we can use to create the flags.

The Request section above contains our business rule if you need to refer back. First up, create an expression to count the number of times each month (column) has values
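A sketch of that count expression, assuming the three month columns are driven by fields named ThisMonth, LastMonth and MonthBefore (the real names are in the screenshots):

```
// Each comparison returns -1 (true) or 0 (false) in QlikView,
// so negating the sum yields a clean 0-3 count of populated months
=-((sum(ThisMonth) > 0) + (sum(LastMonth) > 0) + (sum(MonthBefore) > 0))
```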

RAG Status

Table Status Count

Visualization Note: depending on your geographical location, you become hard-wired to interpret information in a certain sequential order. For most, it is left-to-right, but in our output we are drawing attention to the right side when we eventually flag with the conditional colors. Because of this, we represent our months from right-to-left in distance from the current month. This is done for a number of reasons, which I will motivate in a future post

Step 2

There are a number of ways we can represent our values from Step 1 via visual cues; luckily for us, the Visual Cues tab lets us allocate 3 colors quite easily. For comparison, we could also have used the text color or background color in our expression to conditionally flag the statuses. We also don’t want to produce a visual cue for the total row; in the expression for RAG, select No Totals under Total Mode.

Visual Cues Tab

Table Visual Cues

Step 3

Now that we are done with the easy part, let’s get to the fun stuff :). The challenge is to create a way for users to filter on each RAG result. A method we can use is setting an action on a text object that changes a variable’s value, and calling that variable through the object in the front-end. There are multiple steps in this phase of the build

Create the variables in script – this is done so that when the document opens, a default value is always assigned to each variable

Script Variables
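The script variables might be declared along these lines (names are assumptions; per the convention described below, -1 means don’t show and 1 means show, with all three at -1 meaning no status filter is applied):

```
// Default state on document open: no status selected, show everything
SET vRed   = -1;
SET vAmber = -1;
SET vGreen = -1;
```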

Create 4 buttons – 3 buttons will contain actions for the RAG statuses and 1 button will be the reset (the same as setting all variable values back to their defaults). To further add to our visualization, we will add a saturation hue to each object so that when a status is selected, it is clearly visible to our users. This is represented in the second image

Action Boxes ALL

Action Boxes G

Action Boxes Expression

Since we have 3 states to manage, each button will have 3 actions. Each action assigns a value to one of the variables

Quite a common technique for handling the variables is multiplying by -1; this toggles the output between negative and positive 1, as seen above. The procedure is that when a user clicks RED, RED’s variable must change while the other 2 stay -1 (in this case -1 means don’t show and 1 means show). This is repeated for each of the 3 statuses
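For the RED button, the three Set Variable actions described above might therefore be (variable names assumed):

```
// Action 1: toggle RED's own state between -1 and 1
vRed   = $(vRed) * -1
// Actions 2 and 3: force the other two statuses back to -1
vAmber = -1
vGreen = -1
```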

Step 4

The above works quite nicely, and we now have a user object that behaves as we need it to, so that we can create a returned result based on the unique output of our variables. For each of the 3 states our variables will return

Variable Matrix

Clicking the same status twice acts like clicking the ‘Clear Status Selections’ button. The calculated dimension below looks like a lot to digest, but it’s actually quite simple: we create one condition that works and wrap it in an if statement to handle the other conditions.

Dimension List

Calculated Dimension
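A sketch of that calculated dimension, assuming a helper variable vMonthCount holds the Step 1 count (in a real chart the sums inside it would need to be wrapped in aggr() over the table’s dimensions):

```
// Returns 1 to keep a row, null to suppress it.
// The final 1 covers the default state where no status is selected.
=if(vRed   = 1, if($(vMonthCount) = 0,  1),
 if(vAmber = 1, if($(vMonthCount) = 1,  1),
 if(vGreen = 1, if($(vMonthCount) >= 2, 1),
 1)))
```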

The final step is to hide the dimension. Its purpose is purely to create a 1 or null value for each status and to suppress the table results when a status is selected. The image below shows where in the Presentation tab you can hide the column

Hide Column

Closing Notes

The above use case demonstrates that a solution may evolve to be complicated, yet can easily be accomplished if the steps are clearly broken down, with each step focused only on what is needed to make the end result work.

“I will be uploading the sample QVW to accompany this demonstration to a drop-box account. Once uploaded I will be replacing this comment with a link. Data used is the example data provided from Qlik in your default folder installation”