
Use case Guide - Scenario 5 - Live dashboards#

This document is meant to guide the user through Scenario 5 - Live dashboards. As discussed in the use case description, the goal is to provide an automated weather forecast dashboard that fetches data from an open API on a daily basis (for the city of Brussels). The guide is a step-by-step tutorial towards this objective, with each subsection covering one step of the approach:

  1. Step 1: Initialize the resources.
  2. Step 2: Retrieve the PostgreSQL host address.
  3. Step 3: Set up the data pipeline - Data fetching & DAG script.
  4. Step 4: Initiate the data pipeline - Airflow.
  5. Step 5: Create the dashboard.

Use case files#

Code:

  • Use-case code - Scenario 5 - Live dashboards
  • Use-case dag - Scenario 5 - Live dashboards

Step 1: Initialize the resources#

As a first step, the user should initialize the required resources. In particular, the following services should be launched (across three instances, since Jupyterlab and Airflow share a single instance):

  • Apache Airflow
  • Jupyterlab
  • PostgreSQL
  • Apache Superset

Initialize the Jupyterlab/Airflow instance#

  1. Go to the Service Catalog section of the Portal.
  2. Click on the Launch button on the Jupyterlab/Airflow badge.
  3. Assign a name to the instance and select a group from the ones available in the list.
  4. Select the Default configuration.
  5. Copy the auto-generated password. This will be needed to access the instance at a later stage. (NB: Instance credentials are automatically saved and accessible in the My Data section of the portal.)
  6. Select the NFS PVC name corresponding to the DSL group selected at point 3.
  7. Launch the instance by clicking on the launch button.

Initialize the PostgreSQL instance#

  1. Go to the Service Catalog section of the Portal.
  2. Click on the Launch button on the PostgreSQL badge.
  3. Assign a name to the instance and select a group from the ones available in the list.
  4. Select the micro configuration.
  5. Copy the auto-generated Admin password. From now on, the PostgreSQL password will be referenced as:

    <postgreSQLPassword>
    
  6. Launch the instance by clicking on the launch button.

Initialize the Apache Superset instance#

  1. Go to the Service Catalog section of the Portal.
  2. Click on the Launch button on the Apache Superset badge.
  3. Assign a name to the instance and select a group from the ones available in the list.
  4. Select the micro configuration.
  5. Set your Admin username, Admin email, Admin firstname and Admin lastname.
  6. Copy the auto-generated password. This will be needed to access the instance at a later stage. (NB: Instance credentials are automatically saved and accessible in the My Data section of the portal.)
  7. Launch the instance by clicking on the launch button.

After launching the instances, go to the My Services section of the portal to verify that all the deployed services are up and running (all three instances should be in status ACTIVE).

[Screenshot: My Services section showing the three instances in status ACTIVE]

Step 2: Retrieve the PostgreSQL host address#

  1. From the My Services section of the portal, click on the Copy button of the PostgreSQL instance to copy the host address of the instance to the clipboard.
  2. The PostgreSQL host address is in the format:
        <postgresSQLHost>:5432
    

Step 3: Set up the data pipeline - Data fetching & DAG script#

  1. Download the data fetching script and the DAG script:

    Please note that both scripts are commented for a better understanding of each step performed.

  2. Open the Data fetching script with a text editor and edit the code at point 3) Define the DB engine parameters in the following way:

    • Substitute <postgreSQLPassword> with the PostgreSQL password (see Initialize the PostgreSQL instance - point 5.);
    • Substitute <postgresSQLHost> with the PostgreSQL host address (see Step 2: Retrieve the PostgreSQL host address - point 2.).

    Please note that each line that needs to be edited is commented with "# EDIT THIS LINE". A sketch of what this part of the script may look like is given at the end of this step.

  3. From the My Services section of the portal, click on the Open button to access the Jupyterlab instance.
  4. Log in to your Jupyterlab instance with the access credentials defined in the configuration.
  5. Upload the just edited Data fetching script and the DAG script into the dags folder.

DISCLAIMER: Please note that it is important NOT to run the two scripts on Jupyterlab, but only to upload them. Running the DAG script on Jupyterlab would overwrite the configuration that connects Airflow to Jupyterlab.
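For reference, below is a minimal sketch of what the relevant part of the Data fetching script may look like. It assumes the script uses requests, pandas and SQLAlchemy (with a PostgreSQL driver such as psycopg2 installed) and uses Open-Meteo as an example of an open weather API; the script provided with the use case is the authoritative version, so treat the variable names, API call and table layout below as illustrative only.

```python
# Illustrative sketch only - the provided Data fetching script is authoritative.
# Assumptions: requests + pandas + SQLAlchemy (psycopg2 driver), Open-Meteo as the open API.
import requests
import pandas as pd
from sqlalchemy import create_engine

# 3) Define the DB engine parameters
postgres_password = "<postgreSQLPassword>"  # EDIT THIS LINE (see Initialize the PostgreSQL instance - point 5.)
postgres_host = "<postgresSQLHost>:5432"    # EDIT THIS LINE (host address copied in Step 2)
engine = create_engine(f"postgresql://postgres:{postgres_password}@{postgres_host}/postgres")

# Fetch the hourly forecast for Brussels from an open API (Open-Meteo shown as an example)
response = requests.get(
    "https://api.open-meteo.com/v1/forecast",
    params={"latitude": 50.85, "longitude": 4.35, "hourly": "temperature_2m"},
)
payload = response.json()

# Store the hourly values in the database (table and column names follow the rest of the guide)
hourly_df = pd.DataFrame(payload["hourly"]).rename(columns={"time": "hourly_time"})
hourly_df.to_sql("hourly_data", engine, if_exists="replace", index=False)
```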

Step 4: Initiate the data pipeline - Airflow#

  1. From the My Services section of the portal, click on the Open button to access the Airflow instance.
  2. Log in to your Airflow instance with the access credentials defined in the configuration.
  3. In the DAG section of Airflow, you should see the meteo_data_fetching DAG. Please note that, according to the configuration made in the DAG script, this DAG is scheduled to execute the Data fetching script daily (see the Schedule column). A sketch of what such a DAG may look like is given at the end of this step.
  4. Click on the toggle on the top-left to activate the DAG (see the image below).
  5. Click on the play button and then on Trigger DAG to manually trigger its execution (see the image below).

[Screenshot: activating and manually triggering the meteo_data_fetching DAG in Airflow]

  6. Wait a few seconds and you should see the indication that the DAG has been executed successfully:

[Screenshot: Airflow showing the successful run of the meteo_data_fetching DAG]

  7. The successful execution of this DAG implies that two new tables (hourly_data, metadata) have been created and filled in our PostgreSQL DB.
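As a reference for point 3. above, a daily-scheduled Airflow DAG that runs the Data fetching script may look roughly like the sketch below. The task layout and the script path are assumptions for illustration; the DAG script provided with the use case is the authoritative version.

```python
# Illustrative sketch of a daily-scheduled DAG - the provided DAG script is authoritative.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="meteo_data_fetching",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",  # execute the Data fetching script once per day
    catchup=False,
) as dag:
    fetch_meteo_data = BashOperator(
        task_id="fetch_meteo_data",
        # Assumed path: the Data fetching script uploaded to the dags folder in Step 3
        bash_command="python /opt/airflow/dags/data_fetching_script.py",
    )
```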

Step 5: Create the dashboard#

Add the DB and datasets#

  1. From the My Services section of the portal, click on the Open button to access the Superset instance.
  2. Log in to your Superset instance with the access credentials defined in the configuration.
  3. In the navbar, click on Data > Databases.
  4. Click on the " + DATABASE " button. This will open the Connect a database tool that will allow us to connect the Postgres SQL database to Apache Superset.
  5. Select PostgreSQL as database to connect.
  6. Enter the required PostgreSQL credentials (see also the SQLAlchemy URI note at the end of this list):
    • HOST: input the <postgresSQLHost>
    • PORT: 5432
    • DATABASE NAME: postgres
    • USERNAME: postgres
    • PASSWORD: input the <postgreSQLPassword>
  7. Click on Connect.
  8. In the navbar, click on Data > Datasets.
  9. Click on the " + DATASETS " button. This will open the Add dataset tool that will allow us to add the tables from the newly connected Postgres SQL database.
  10. In the DATABASE field, select the database that has been connected to Apache Superset in the previous step.
  11. In the SCHEMA field, select public.
  12. In the SEE TABLE SCHEMA field, select the table hourly_data.
  13. Repeat the same operation to add the metadata dataset.
  14. For the hourly_data dataset, in the Actions section press on the Edit icon.
  15. Go to COLUMNS and tick the Is temporal and Default datetime properties for the column hourly_time. This will allow Apache Superset to treat this column as a temporal dimension, which is needed for all time-related data visualizations (e.g. bar charts, time series, ...).
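As an alternative to filling in the individual connection fields at point 6., Superset can also connect to a database through a SQLAlchemy URI (the option is offered in the same Connect a database dialog, depending on the Superset version). Assuming the default postgres database and user, the URI would follow this pattern:

    postgresql://postgres:<postgreSQLPassword>@<postgresSQLHost>:5432/postgres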

Create the charts#

  1. To create a new chart, click on Charts on the Superset navbar, then click on the + CHART button.
  2. Choose the hourly_data dataset.

The proposed charts are listed below:

1. Weekly temperature forecast#

  1. Select the Time-series Line Chart and click on Create New Chart.
  2. By default, the TIME COLUMN should be hourly_time with Original Value as TIME GRAIN.
  3. In the QUERY section of the DATA tab, create a new metric as follows:

[Screenshot: metric definition for the weekly temperature forecast chart]

  4. In the CUSTOMIZE tab, apply the following settings:

[Screenshot: CUSTOMIZE tab settings for the weekly temperature forecast chart]

  5. Click on the SAVE button, assign a CHART NAME and click on SAVE.

2. Temperature vs. Perceived Temperature#

  1. Select the Bar Chart and click on Create New Chart.
  2. By default, the TIME COLUMN should be hourly_time.
  3. In the QUERY section of the DATA tab, create two new metrics as follows:

[Screenshot: metric definitions for the Temperature vs. Perceived Temperature chart]


Then, select hourly_time as SERIES.

  4. In the CUSTOMIZE tab, apply the following settings:

[Screenshot: CUSTOMIZE tab settings for the Temperature vs. Perceived Temperature chart]

  5. Click on the SAVE button, assign a CHART NAME and click on SAVE.

3. Weekly Average Temperature with trendline and Weekly Min/Max temperature, cloud coverage, pressure level, rain volume#

  1. Select the Big Number and click on Create New Chart (for the Weekly Average Temperature with trendline, select Big Number with Trendline instead).
  2. By default, the TIME COLUMN should be hourly_time with Day as TIME GRAIN.
  3. Create four different charts with the following defined metrics. Please note that you can add a sub-header as a caption of the big number shown in the chart:

[Screenshot: metric definitions for the Big Number charts]

  4. Click on the SAVE button, assign a CHART NAME and click on SAVE.

4. Heatmap - Cloud coverage vs. Rain volume#

  1. Select the Heatmap and click on Create New Chart.
  2. By default, the TIME COLUMN should be hourly_time.
  3. In the DATA tab, apply the following settings:

[Screenshot: DATA tab settings for the Heatmap chart]

  4. Click on the SAVE button, assign a CHART NAME and click on SAVE.

Create the dashboard and populate it with the charts#

  1. Click on Dashboards on the navbar and click on the "+ DASHBOARD" button to add a new dashboard. Assign the dashboard a title (e.g. Brussels - Meteo).
  2. From the Components section, drag and drop the Tabs element. Then, create two new tabs: one will contain the big number charts and the other the plot charts. Assign names to the tabs (e.g. Weather Data and Charts).
  3. In the first tab, add all Big Number charts by dragging-dropping them from the Charts section.
  4. In the second tab, add all plot charts by dragging-dropping them from the Charts section.
  5. Here is an example dashboard showing the final result:

[Screenshot: example dashboard (1/2)]

[Screenshot: example dashboard (2/2)]