Unit testing of Databricks notebooks

It is so easy to write Databricks notebooks! Let's take Azure Databricks as an example. You create a Dev workspace and just use it as your IDE: develop your code, organize everything into nice commands, verify that everything works as expected, export those notebooks into a git repo, and then promote your code through the follow-up environments, all the way to production. Job well done!

Only then there might be a nagging feeling from the good software developer in you: where is the quality? The same question can also come from a DevOps engineer, an architect, or the security team.

So you look at the produced code and think really hard: how can unit tests be added to it? Imagine you have about 3 notebooks similar to this one:

# Databricks notebook source
# MAGIC %md ### Extraction of Site table for the Source system 1
# MAGIC Reference: Jira ticket for the task
# MAGIC ##### Description
# MAGIC Nice description of what this notebook is about.
# COMMAND ----------
import ...
import ...
# COMMAND ----------
logger = logging.getLogger("extract_from_system1")
# COMMAND ----------
username = dbutils.secrets.get(scope = "system1Scope", key = "username")
password = dbutils.secrets.get(scope = "system1Scope", key = "password")
# COMMAND ----------
logger.info("About to call API")
userAndPass = b64encode(bytes(username + ':' + password, "utf-8")).decode("ascii")
listHeader = { 'Authorization' : 'Basic %s' % userAndPass }
listResponse = requests.get("https://sourcesystem1_url/site", headers=listHeader, verify=True)
# COMMAND ----------
#transformation logic

You have a nice description of what it is about, then import statements, setup of variables for this notebook, data extraction from somewhere, and then transformation activities.
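The header-building line in the extraction cell is a good first candidate for a unit-testable function: it is pure logic with no Databricks dependency. A minimal sketch of factoring it out (the function name `basic_auth_header` is my own, not from the original notebook):

```python
from base64 import b64encode


def basic_auth_header(username, password):
    """Build the Basic auth header the extraction notebook sends to the API."""
    token = b64encode(bytes(username + ':' + password, "utf-8")).decode("ascii")
    return {'Authorization': 'Basic %s' % token}


print(basic_auth_header("user", "pass"))
# {'Authorization': 'Basic dXNlcjpwYXNz'}
```

A function like this can be asserted against a known base64 value in a plain pytest test, with no cluster involved.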

One of the ways to improve the code, make it DRY, and make it testable is to introduce a separate notebook with common functions and classes:

# COMMAND ----------
# DBTITLE 1, Imports
import ...
# COMMAND ----------
# DBTITLE 1, Enums for Tables and SourceSystems
def enum(*sequential, **named):
    enums = dict(zip(sequential, range(len(sequential))), **named)
    reverse = dict((value, key) for key, value in enums.items())
    enums['reverse_mapping'] = reverse
    return type('Enum', (), enums)

Tables = enum('Site', 'Product')
SourceSystems = enum('System1', 'System2')
# COMMAND ----------
# DBTITLE 1, Variables per table and source system
class Variables(object):
    request_url = ""
    username = ""
    password = ""
    some_other_configuration = ""

    def __init__(self, table, source_system):
        table_name = Tables.reverse_mapping[table]
        if source_system == SourceSystems.System1:
            self.request_url = "https://sourcesystem1/" + table_name
        elif source_system == SourceSystems.System2:
            self.request_url = "https://sourcesystem2/" + table_name

        scope_name = SourceSystems.reverse_mapping[source_system] + "Scope"
        try:
            self.username = dbutils.secrets.get(scope=scope_name, key="username")
            self.password = dbutils.secrets.get(scope=scope_name, key="password")
        except Exception:
            print('cannot load secrets')

And then the actual notebook can simply have:

%run "./common_functions"

vars = Variables(Tables.Site, SourceSystems.System1)

Much easier to read and, more importantly — to test!
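As a quick sanity check, the `enum` helper from the common-functions notebook can be exercised entirely outside Databricks, since it has no cluster dependencies:

```python
def enum(*sequential, **named):
    """Build a simple enum type: names map to ints, plus a reverse mapping."""
    enums = dict(zip(sequential, range(len(sequential))), **named)
    reverse = dict((value, key) for key, value in enums.items())
    enums['reverse_mapping'] = reverse
    return type('Enum', (), enums)


Tables = enum('Site', 'Product')

print(Tables.Site)                              # 0
print(Tables.reverse_mapping[Tables.Product])   # 'Product'
```

Because the helper is plain Python, it can live in a regular `.py` module and be imported both by notebooks (via `%run`) and by pytest.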

You can create a separate "tests" folder in your repo with a "test_common_functions.py" file to keep all the tests for the common functions there, runnable with the default pytest commands, e.g.

def test_initVariables():
    vars = Variables(Tables.Site, SourceSystems.System1)
    assert vars.request_url == "https://...."
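One wrinkle when running such tests outside Databricks is that `dbutils` does not exist there. A common workaround is to inject a stand-in object before the common code runs; the names below (`FakeSecrets`, `FakeDbUtils`) are illustrative, not part of any library:

```python
class FakeSecrets:
    """Stand-in for dbutils.secrets backed by a plain dict."""

    def __init__(self, store):
        self._store = store

    def get(self, scope, key):
        return self._store[(scope, key)]


class FakeDbUtils:
    """Minimal dbutils replacement exposing only the .secrets attribute."""

    def __init__(self, store):
        self.secrets = FakeSecrets(store)


# In a test, make this available as `dbutils` before the common code runs.
dbutils = FakeDbUtils({
    ("System1Scope", "username"): "test-user",
    ("System1Scope", "password"): "test-pass",
})

print(dbutils.secrets.get(scope="System1Scope", key="username"))  # test-user
```

With a stub like this in place, `Variables.__init__` can be exercised end to end in pytest without touching a real secret scope.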

You can even expand the testing framework and add SonarQube into the mix with quality gates: enforce good code coverage and no code duplication. A possible build pipeline could look like this:

trigger:
- main

pool:
  vmImage: ubuntu-16.04

steps:
- task: Bash@3
  displayName: 'Install pytest and coverage'
  inputs:
    targetType: 'inline'
    script: |
      pip3 install coverage
      pip3 install pytest

- task: Bash@3
  displayName: 'Run tests and coverage'
  inputs:
    targetType: 'inline'
    script: |
      python3 -m coverage erase
      python3 -m coverage run --omit */site-packages/* -m pytest
      python3 -m coverage xml -i
      ls $(Build.Repository.LocalPath)

- task: SonarQubePrepare@4
  displayName: 'Prepare analysis on SonarQube'
  inputs:
    SonarQube: 'SonarQube Service Connection'
    scannerMode: CLI
    configMode: manual
    cliProjectKey: 'Project Key as configured in SQ'
    cliSources: '$(Build.Repository.LocalPath)/Notebooks'
    extraProperties: |

- task: SonarQubeAnalyze@4
  displayName: 'Execute SonarQube Analysis'

- task: SonarQubePublish@4
  displayName: 'Publish SonarQube Analysis'
  inputs:
    pollingTimeoutSec: '300'

- task: sonar-buildbreaker@8
  displayName: 'Break Failed SonarQube Analysis'
  inputs:
    SonarQube: 'SonarQube Service Connection'

- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.Repository.LocalPath)/Notebooks'
    ArtifactName: 'notebooks'
    publishLocation: 'Container'

DevOps consultant with Servian
