Explore with Notebooks
Good to know: Deque Notebooks are supercharged with natural language generation features powered by OpenAI's GPT-3 and Codex models.
Deque Notebooks are a re-imagined, interactive way of doing data exploration. They are an experimentation playground where users can quickly explore and evaluate data sets collaboratively. Deque Notebooks are supercharged with features including access to a variety of GPU instances, real-time collaboration, code completion and built-in experiment tracking.
Check out this video:
Deque Notebooks are deeply integrated with the rest of the Deque components, which allows for seamless integration with data storage, experiment tracking and your choice of cloud. The user can also quickly export a notebook to a Job for large-scale training on multiple nodes. Deque Notebooks are also collaborative by design, offering features such as live editing with your team members.
Setting Up the Notebook
Compute Settings: Deque supports many instance types from the cloud providers, including most GPU instances suitable for deep learning.
Choosing an instance type is as simple as selecting your preferred instance from the instance type drop-down. Instances highlighted in blue are GPU instances and instances highlighted in grey are CPU-only instances. Each instance label clearly shows the number of GPUs, the number of CPUs, the GPU memory and the CPU memory. This transparency is a benefit of the Deque app that is not available on other platforms.
Having this information allows users to choose the optimal instance, balancing training cost against speed.
Experiment Settings: Experiment tracking allows you to track the performance of your model over many iterations of training. It shows you the accuracy of the model and the value of the loss function, both of which provide intuition for tuning your hyperparameters, a process known as hyperparameter tuning. Deque has integrated TensorBoard and MLflow (both open source) right into the app, allowing users to seamlessly and visually evaluate and monitor model performance. To turn on experiment tracking within a Deque Notebook, the user simply selects either TensorBoard or MLflow under Experiment Settings within the notebook settings.
Placement Settings: This is where the user decides which compute provider to use for each Notebook and in which geography to run it. This matters because proximity to data is important: if your data is stored on the West Coast and the Notebook runs on the East Coast, network latency makes the setup suboptimal.
Using a Notebook
Accessing Data: Deque Drives are automatically mounted in each Notebook, so your data is always available. The Drive is mounted in both read and write mode and is accessible at “/drive”. You can load data stored in the Drive into the notebook and also save any model artifacts from the Notebook back to the Drive. These model artifacts can then be used elsewhere within Deque (i.e. in a different Job or for Deployment).
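A minimal sketch of the read/write round trip through the Drive, assuming the “/drive” mount point described above (the `artifacts` subdirectory and the dict standing in for a trained model are illustrative; the snippet falls back to a local directory so it also runs outside a Deque Notebook):

```python
import pickle
from pathlib import Path

# Mount point of the Deque Drive inside a Notebook (see above); fall back
# to a local directory when running outside the Deque environment.
drive = Path("/drive") if Path("/drive").exists() else Path("./drive")
(drive / "artifacts").mkdir(parents=True, exist_ok=True)

# Save a model artifact back to the Drive (a dict stands in for a real model)
model = {"weights": [0.1, 0.2], "epochs": 10}
with open(drive / "artifacts" / "model.pkl", "wb") as f:
    pickle.dump(model, f)

# The same artifact can later be loaded elsewhere, e.g. in a different Job
with open(drive / "artifacts" / "model.pkl", "rb") as f:
    restored = pickle.load(f)
```

Because the Drive is a shared mount, anything written under it this way is immediately visible to other Deque components that mount the same Drive.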
Accessing GPUs: The Deque instrumentation service (see the Instrumentation Services section) ships with NVIDIA CUDA drivers and prerequisite libraries preinstalled on the platform. When a Notebook instance is started, it already has the necessary CUDA drivers and a set of standard deep learning packages available in different environments. These environments are published as Notebook Kernels, which the user can choose by selecting the More menu on the top left and then selecting “Change Kernel Action”.
Creating a Notebook Cell: Deque Notebooks offer four (4) different cell types: Code, English Instruction, Markdown and Raw.
A Code cell is a Python cell where the user can write Python code snippets.
An English Instruction cell is a unique Deque cell where users write code generation instructions in English.
The Notebook then parses all prior cells’ code and automatically predicts and generates the next several lines of code using the OpenAI API (https://openai.com/blog/openai-api/).
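Conceptually, the English Instruction cell might work along these lines. This is a hypothetical sketch, not Deque's actual implementation: the function name, prompt format and model name are all assumptions, and running it requires an OpenAI API key.

```python
import os

def generate_next_cell(prior_code: str, instruction: str) -> str:
    """Hypothetical sketch: send prior cells' code plus the English
    instruction to the OpenAI API and return the generated code."""
    import openai  # official OpenAI client

    openai.api_key = os.environ["OPENAI_API_KEY"]  # required to actually call the API
    completion = openai.Completion.create(
        engine="davinci-codex",          # model name is an assumption
        prompt=prior_code + "\n# " + instruction + "\n",
        max_tokens=128,
        temperature=0,                   # deterministic output for code
    )
    return completion.choices[0].text
```

The key idea is that the prompt carries the accumulated notebook context, so the generated lines stay consistent with variables and imports defined in earlier cells.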
Text can be added to Deque Notebooks using Markdown cells. You can change the cell type to Markdown via the Cell menu or the keyboard shortcut m. Markdown is a popular, lightweight markup language that also allows embedded HTML. Its specification can be found here:
The contents of a Raw cell are not evaluated by the notebook kernel. When passed through nbconvert, they are rendered as-is in the target output format. For example, if you type LaTeX in a raw cell, it is rendered only after nbconvert is applied.
Add a cell: To create additional cells, the user clicks the small plus sign located at the top of the active cell and then selects any of the four cell types.
Delete a cell: If the user would like to delete a cell, they simply press the trash can button at the top right of the active cell.
Fix Cell Code: This is a unique Deque cell action. When the user clicks the snowflake icon in the upper right corner of the active cell, Deque automatically fixes any issues with the code in the active cell using the OpenAI GPT-3 and Codex APIs. (This is not guaranteed to work at the present time; the OpenAI models are still under active development.)
Run a Cell: To run a cell, the user presses the play button to the left of the active cell. The cell's output is displayed directly underneath it. If the code is broken, the user can follow the Fix Cell Code process.
Clear a Cell: To clear the active cell's output, the user selects Clear Active Cell Output from the More menu on the top left.
Notebook Shortcuts: All Notebook actions are available by pressing Command-Shift-P. This opens a shortcut window from which the user can select any action.
Notebook Level Actions are available from the more menu at the top left. These actions include Clear All Output, Restart Kernel and Change Kernel.
Track Experiments: If the user turns on experiment tracking in the notebook settings, the Deque platform sets up the underlying services on the Notebook instance. These services are designed to work with the Deque orchestration platform to centrally manage experiments.
To visualize performance with TensorBoard, all the user has to do is write the TensorBoard logs into a directory called “/Tensorboard”.
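A minimal sketch of writing TensorBoard logs to the “/Tensorboard” directory described above, using PyTorch's bundled `SummaryWriter` (the run subdirectory name and the dummy loss values are illustrative; the snippet falls back to a local directory, and skips logging entirely, when PyTorch is not installed):

```python
from pathlib import Path

# Deque picks up TensorBoard logs written under "/Tensorboard" (see above);
# fall back to a local directory outside the Deque environment.
log_root = Path("/Tensorboard") if Path("/Tensorboard").exists() else Path("./Tensorboard")
log_dir = log_root / "run-1"  # one subdirectory per run; name is arbitrary
log_dir.mkdir(parents=True, exist_ok=True)

try:
    from torch.utils.tensorboard import SummaryWriter

    writer = SummaryWriter(log_dir=str(log_dir))
    for step in range(100):
        # A dummy decreasing loss curve stands in for real training metrics
        writer.add_scalar("train/loss", 1.0 / (step + 1), step)
    writer.close()
except ImportError:
    pass  # PyTorch not installed here; the directory layout above still applies
```

Once the event files land under “/Tensorboard”, the integrated TensorBoard instance picks them up without any further configuration.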
To use MLflow, the user simply uses MLflow’s auto-logging feature.
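Auto-logging is a single call placed before the training code; MLflow then records parameters, metrics and models from supported frameworks automatically. A minimal sketch (guarded so it also runs where MLflow is not installed):

```python
try:
    import mlflow

    # One call before training: MLflow hooks into supported frameworks
    # (scikit-learn, PyTorch Lightning, Keras, ...) and logs parameters,
    # metrics and model artifacts automatically.
    mlflow.autolog()
    tracking_enabled = True
except ImportError:
    tracking_enabled = False  # MLflow not installed outside the Deque environment
```

After this, an ordinary `model.fit(...)` call in a supported framework produces a tracked MLflow run with no further logging code.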
While you can set up tracking with TensorBoard and MLflow outside of the Deque App, it is beneficial to use the built-in feature: the performance data is centrally stored and managed, and the user does not need to worry about administering servers or managing the storage of performance data across experiments.
The user can monitor the model performance by opening the performance monitoring screen for the notebook: