Frequently Asked Questions

Why am I unable to access JupyterLab on Safari?

Safari disables 3rd-party cookies by default. Because your Jupyter VM instance lives on a different domain than its parent frame, we currently require that 3rd-party cookies be enabled. Please enable 3rd-party cookies or switch to a different browser, such as Google Chrome.

We suggest enabling 3rd-party cookies selectively for the *.ds.adobe.net domain if your browser allows it.

Why am I seeing a '403 Forbidden' message when trying to upload or delete a file?

If you use an ad blocker such as Ghostery or AdBlock Plus, you need to whitelist "*.adobe.net" in its settings. This is because our Jupyter VMs run on a different domain than the Experience Platform domain.

Why do some parts of my Jupyter Notebook look scrambled or fail to render as code?

This can happen if the cell in question is accidentally changed from Code to Markdown. The key combination ESC+M changes a cell to Markdown (similarly, ESC+Y changes it back to Code). A cell's type can also be changed using the cell type dropdown at the top of the notebook for the selected cell(s). To change a cell's type back to Code:

  1. Select the cell(s) you want to change.
  2. Click the cell type dropdown and select Code.

How can I install custom libraries for a Python kernel?

The Python VM comes pre-installed with many popular machine learning libraries. However, you can install additional libraries from a notebook cell: !pip install <library name>
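For example, a minimal cell that installs a library and then verifies the install (using pandas purely as an illustration; substitute any library you need):

!pip install pandas

# Verify the install by importing the library and printing its version.
import pandas as pd
print(pd.__version__)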

How can I install custom libraries for a Pyspark kernel?

Unfortunately, you cannot currently self-install extra libraries the way you can for a Python kernel. You must contact your Adobe customer service representative to have them installed.

Is it possible to configure the Spark cluster resources from my JupyterLab VM, assuming I'm using a Spark or PySpark kernel?

Absolutely. You can configure resources by adding the following block to the first cell of your notebook:

%%configure -f
{
    "numExecutors": 10,
    "executorMemory": "8G",
    "executorCores": 4,
    "driverMemory": "2G",
    "driverCores": 2,
    "conf": {
        "spark.cores.max": "40"
    }
}

You may customize the values as needed. The -f flag forces any existing Spark session to be dropped and recreated with the new configuration, which is why this block should run before any other Spark code.
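Once the session has started, you can sanity-check that the settings took effect. A minimal sketch, assuming the kernel pre-defines the usual sc SparkContext (as sparkmagic-based PySpark kernels typically do):

# Print the effective configuration of the running Spark session.
for key, value in sorted(sc.getConf().getAll()):
    print(key, "=", value)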