# Getting Started

**Key Point:** Onna in 6 minutes.

# Requirements

Make sure you have all of the requirements installed before you begin.

This example uses Python. You will also need the `aiohttp` client library (`pip install aiohttp`), which the code below imports.

Access to the Onna API requires an active account. You can sign up for an Onna account by filling out the registration form.

You should also create a workspace in your Onna account and note its name; the code below refers to the workspace by name.

# Intro

In 6 minutes you can have a functioning Python program that authenticates against your Onna account, creates a Datasource, and retrieves data from a remote location.

You can see the output of this program when you log in to your Onna account.

Copy and paste the code blocks below into a single file, run it, and then check your Onna account.

```python
import asyncio
import aiohttp
import json

base_url = "https://enterprise.onna.com"
container = "container"
scope = account = "account"  # the OAuth scope is the account name
auth_code_url = (
    f"{base_url}/api/{container}/{account}"
    f"/@oauthgetcode?client_id=canonical&scope={account}"
)
workspace_name = "workspace"
username = "you@example.com"
password = "super-secret-password"
```

Change the values for:

- container
- scope
- account
- workspace_name
- username
- password

above to match your Onna account.
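Hard-coding credentials is fine for a quick test, but you may prefer to read them from the environment instead. A minimal sketch, assuming environment-variable names of our own choosing (the `ONNA_*` names and the `load_settings` helper are illustrative, not part of the Onna API):

```python
import os


def load_settings(env=os.environ):
    # Read connection settings from the environment, falling back to the
    # placeholder defaults used throughout this guide.
    return {
        "base_url": env.get("ONNA_BASE_URL", "https://enterprise.onna.com"),
        "container": env.get("ONNA_CONTAINER", "container"),
        "account": env.get("ONNA_ACCOUNT", "account"),
        "workspace_name": env.get("ONNA_WORKSPACE", "workspace"),
        "username": env.get("ONNA_USERNAME", "you@example.com"),
        "password": env.get("ONNA_PASSWORD", ""),
    }
```

This keeps the password out of your source file; any mapping with a `.get` method (such as a plain dict in tests) can stand in for `os.environ`.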

# Authenticate

The first step is to generate the auth token. Once you have the token, you can create the Datasource and start collecting data.

```python
async def auth():
    async with aiohttp.ClientSession() as session:
        # Step 1: request a short-lived auth code.
        resp = await session.get(auth_code_url)
        if resp.status == 200:
            data = await resp.json()
            auth_code = data.get("auth_code")

            # Step 2: exchange the code plus credentials for a JWT.
            payload = {
                "grant_type": "user",
                "code": auth_code,
                "username": username,
                "password": password,
                "scopes": [scope],
                "client_id": "canonical",
            }
            headers = {"Accept": "application/json"}
            # The token endpoint below is assumed; check it against your
            # Onna API reference if authentication fails.
            resp = await session.post(
                f"{base_url}/api/{container}/{account}/@oauthgettoken",
                data=json.dumps(payload),
                headers=headers,
            )
            if resp.status == 200:
                jwt_token = await resp.text()
                return jwt_token
        return None
```
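The returned token is a JWT, so you can inspect its claims (for example, its expiry) without another server round trip. A sketch of an unverified decode; the `jwt_payload` helper is our own, and skipping signature verification is only acceptable for local inspection, never for trusting a token:

```python
import base64
import json


def jwt_payload(token):
    """Decode a JWT's payload segment WITHOUT verifying its signature."""
    # A JWT is three base64url-encoded segments: header.payload.signature.
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```

This is handy for debugging: printing `jwt_payload(token)` shows which account and scope the token was issued for.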

# Create A Web Crawler

Next, you can create the Datasource in a workspace and start collecting data.

```python
async def create_crawler_ds():
    token = await auth()
    if not token:
        raise Exception("Authentication failed")
    # The workspace path below is assumed; adjust it if your account's
    # API layout differs.
    workspace_url = (
        f"{base_url}/api/{container}/{account}/workspaces/{workspace_name}"
    )
    headers = {"Authorization": f"Bearer {token}"}
    async with aiohttp.ClientSession() as session:
        data = {
            "@id": f"{workspace_url}/onnacrawler",
            "@type": "CrawlerDatasource",
            "title": "Onna Web Crawler",
            "urls": [
                # Replace with the site(s) you want the crawler to fetch.
                "https://example.com",
            ],
        }
        resp = await session.post(
            workspace_url, data=json.dumps(data), headers=headers
        )
        data = await resp.json()
        if resp.status == 201:
            # Trigger the crawl on the newly created Datasource.
            resp = await session.get(
                f"{data['@id']}/@sendToSpyder?force=true", headers=headers
            )


asyncio.run(create_crawler_ds())
```
Last Updated: 5/18/2020, 1:34:58 PM