Updated Pipedrive README.md #210

Merged · 2 commits · Jul 7, 2023
118 changes: 55 additions & 63 deletions sources/pipedrive/README.md
---
title: Pipedrive
description: dlt source for Pipedrive API
keywords: [pipedrive api, pipedrive source, pipedrive]
---

# Pipedrive

Here you will find a setup guide for the [Pipedrive](https://developers.pipedrive.com/docs/api/v1) source.
Pipedrive is a popular customer relationship management (CRM) software designed to help businesses
manage their sales processes and improve overall sales performance. Using this Pipedrive verified
source, you can load data from the Pipedrive API to your preferred destination.

## Set up account

**To get started:**

1. Set up a Pipedrive account.
2. Grab your Pipedrive subdomain.

Pipedrive provides a unique domain name that is generally `[company].pipedrive.com`. For example, if your company name is `dltHub`, then the subdomain is `dlthub.pipedrive.com`.

## Entity mappings

The `ENTITY_MAPPINGS` table below defines mappings between entities and their associated fields, and is taken from [settings.py](./settings.py) in the "pipedrive" folder. Each tuple represents a mapping for a particular entity:

| Entity mapping | Description |
| --- | --- |
| `("activity", "activityFields", {"user_id": 0})` | Maps the "activity" entity to its associated fields in "activityFields"; the dictionary `{"user_id": 0}` sets the value of the "user_id" field to 0. |
| `("organization", "organizationFields", None)` | Maps the "organization" entity to its associated fields in "organizationFields", with no additional settings. |
| `("person", "personFields", None)` | Maps the "person" entity to its associated fields in "personFields", with no additional settings. |
| `("product", "productFields", None)` | Maps the "product" entity to its associated fields in "productFields", with no additional settings. |
| `("deal", "dealFields", None)` | Maps the "deal" entity to its associated fields in "dealFields", with no additional settings. |
| `("pipeline", None, None)` | Maps the "pipeline" entity with no associated fields or additional settings. |
| `("stage", None, None)` | Maps the "stage" entity with no associated fields or additional settings. |
| `("user", None, None)` | Maps the "user" entity with no associated fields or additional settings. |

For more information, please read the [Pipedrive documentation](https://developers.pipedrive.com/docs/api/v1).
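The table above can be sketched as a plain Python list of tuples. This is an illustration of the structure described above, not the exact contents of `settings.py`:

```python
# Illustrative sketch of the ENTITY_MAPPINGS structure described above;
# the exact definition lives in the source's settings.py.
ENTITY_MAPPINGS = [
    # (entity name, custom-fields endpoint, extra settings)
    ("activity", "activityFields", {"user_id": 0}),
    ("organization", "organizationFields", None),
    ("person", "personFields", None),
    ("product", "productFields", None),
    ("deal", "dealFields", None),
    ("pipeline", None, None),
    ("stage", None, None),
    ("user", None, None),
]

# Entities with a fields endpoint are the ones that carry custom fields:
entities_with_custom_fields = [name for name, fields, _ in ENTITY_MAPPINGS if fields]
```

The second element tells the source which `*Fields` endpoint to query for custom-field metadata; `None` means the entity has no custom fields to resolve.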

## Initialize the pipeline

Initialize the source with an example pipeline by running the following command with your [destination](https://dlthub.com/docs/dlt-ecosystem/destinations) of choice, for example BigQuery:

```bash
dlt init pipedrive bigquery
```

This will create a directory with the following file structure:

```bash
pipedrive_pipeline
├── .dlt
│   ├── config.toml
│   └── secrets.toml
├── pipedrive
│   ├── pipedrive_docs_images
│   ├── __init__.py
│   ├── custom_fields_munger.py
│   └── README.md
├── .gitignore
├── pipedrive_pipeline.py
└── requirements.txt
```
Here, we chose BigQuery as the destination. Alternatively, you can also select Redshift, DuckDB, or any of the [other destinations](https://dlthub.com/docs/dlt-ecosystem/destinations).

## Grab Pipedrive credentials

**On Pipedrive:**

1. Click your name (in the top right corner).
2. Select company settings.
3. Go to personal preferences.
4. Select the API tab.
5. Copy your API token (to be used in the dlt configuration).

You can learn more about Pipedrive API token authentication in the [Pipedrive docs](https://pipedrive.readme.io/docs/how-to-find-the-api-token) and in the [dlt setup guide](https://dlthub.com/docs/dlt-ecosystem/verified-sources/pipedrive).
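To sanity-check a token, you can call the Pipedrive API directly; it accepts the token as an `api_token` query parameter. The sketch below only builds the request URL — the `users_me_url` helper and the `dlthub` subdomain are illustrative, not part of the source:

```python
from urllib.parse import urlencode, urlparse, parse_qs

def users_me_url(subdomain: str, api_token: str) -> str:
    """Build the URL for GET /v1/users/me, a cheap way to verify a token."""
    base = f"https://{subdomain}.pipedrive.com/v1/users/me"
    return f"{base}?{urlencode({'api_token': api_token})}"

url = users_me_url("dlthub", "please-set-me-up")
# Once a real token is in place, fetch it, e.g. with urllib.request.urlopen(url);
# a valid token returns HTTP 200 with your user record.
```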
## Add credentials

1. Open `.dlt/secrets.toml`.

2. Enter the API key:

   ```toml
   # Put your secret values and credentials here
   # Note: Do not share this file or push it to GitHub!
   pipedrive_api_key = "please set me up!" # Replace this with the Pipedrive API token you copied above.
   ```

3. Enter credentials for your chosen destination [as per the docs](https://dlthub.com/docs/dlt-ecosystem/destinations). For example, for BigQuery:

   ```toml
   [destination.bigquery.credentials] # the credentials required will change based on the destination
   project_id = "set me up"   # GCP project ID
   private_key = "set me up"  # Unique private key (including the `BEGIN` and `END PRIVATE KEY` markers)
   client_email = "set me up" # Service account email
   location = "set me up"     # Project location (e.g. "US")
   ```
## Run the pipeline

1. Install the requirements for the pipeline:

   ```bash
   pip install -r requirements.txt
   ```

2. Run the pipeline:

   ```bash
   python3 pipedrive_pipeline.py
   ```

3. To make sure that everything loaded as expected, use:

   ```bash
   dlt pipeline <pipeline_name> show
   ```

   For example, the pipeline name for the above pipeline is `pipedrive_pipeline`; you may also use any custom name instead.
💡 To explore additional customizations for this pipeline, we recommend referring to the official dlt Pipedrive documentation. It provides comprehensive guidance on further customizing and tailoring the pipeline to suit your specific needs. You can find the Pipedrive verified source documentation in the [Setup Guide: Pipedrive](https://dlthub.com/docs/dlt-ecosystem/verified-sources/pipedrive).