Intercom to Redshift

This page provides you with instructions on how to extract data from Intercom and load it into Amazon Redshift. (If this manual process sounds onerous, check out Stitch, which can do all the heavy lifting for you in just a few clicks.)

Pulling Data Out of Intercom

To get your Intercom data into Amazon Redshift, you have to start by extracting it from Intercom’s servers. You can do this using the Intercom API, which is available to all Intercom customers. Full API documentation is available on Intercom’s developer site.

Intercom’s API offers numerous endpoints that can provide information on users, tags, segments, conversations, and more. Using methods outlined in their API documentation, you can retrieve the data you’d like to place into Redshift.
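As a minimal sketch of what extraction can look like in Python, the snippet below fetches a page of conversations. It assumes you’ve stored an access token in an INTERCOM_TOKEN environment variable; consult the API documentation for the exact endpoints and response shapes in your API version.

import os

import requests

# Minimal sketch: fetch one page of conversations from the Intercom API.
# Assumes an access token is stored in the INTERCOM_TOKEN env var.
headers = {
    "Authorization": "Bearer " + os.environ["INTERCOM_TOKEN"],
    "Accept": "application/json",
}
resp = requests.get("https://api.intercom.io/conversations", headers=headers)
resp.raise_for_status()
conversations = resp.json()["conversations"]  # list of conversation objects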

Sample Intercom Data

The Intercom API returns JSON-formatted data. Below is an example of the kind of response you might see when querying for the details of a Conversation.

{
  "type": "conversation",
  "id": "147",
  "created_at": 1400850973,
  "updated_at": 1400857494,
  "conversation_message": {
    "type": "conversation_message",
    "subject": "",
    "body": "Hi Alice,\n\nWe noticed you using our Product, do you have any questions?\n- Jane",
    "author": {
      "type": "admin",
      "id": "25"
    },
    "attachments": [
      {
        "name": "signature",
        "url": "http://example.org/signature.jpg"
      }
    ]
  },
  "user": {
    "type": "user",
    "id": "536e564f316c83104c000020"
  },
  "assignee": {
    "type": "admin",
    "id": "25"
  },
  "open": true,
  "read": true,
  "conversation_parts": {
    "type": "conversation_part.list",
    "conversation_parts": [ // ... list of conversation parts ]
  },
  "tags": {
    "type": "tag.list",
    "tags": []
  }
}

Preparing Intercom Data for Redshift

Now the real fun starts. Once you’ve figured out what you want to pull down and how to pull it, you need to map the data that comes out of each Intercom API endpoint into a schema that can be inserted into a Redshift database.

This means that, for each value in the response, you need to identify a predefined datatype (INTEGER, TIMESTAMP, etc.) and build a table that can receive them. The Intercom API documentation can give you a good sense of what fields each endpoint will provide, along with their corresponding datatypes.

Complicating things is the fact that these records are not always “flat”; some values are actually lists. That means you’ll most likely need to create additional tables to capture the unpredictable cardinality in each record. (The “tags” value in the data above is an example of this.)
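As an illustration, the function below (all names hypothetical) splits one conversation record into a flat parent row plus zero or more child rows destined for a separate tags table:

def flatten_conversation(convo):
    # One flat row for the main conversations table.
    row = {
        "id": convo["id"],
        "created_at": convo["created_at"],
        "updated_at": convo["updated_at"],
        "assignee_id": convo["assignee"]["id"],
        "open": convo["open"],
    }
    # Zero or more rows for a child table, since the tags list
    # can have any length from record to record.
    tag_rows = [
        {"conversation_id": convo["id"], "tag_id": tag["id"]}
        for tag in convo["tags"]["tags"]
    ]
    return row, tag_rows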

Inserting Intercom Data into Redshift

Once you have identified all of the columns you will want to insert, you can use the CREATE TABLE statement in Redshift to create a table that can receive all of this data.
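As a sketch of what that might look like, here’s the DDL run through psycopg2. The connection details, table name, and column choices are illustrative, matching the handful of conversation fields shown earlier, not a prescribed schema.

import psycopg2

# Hypothetical connection details; replace with your own cluster's.
conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="...",
)

# A possible table for the flattened conversation rows.
with conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS intercom_conversations (
            id          VARCHAR(64),
            created_at  TIMESTAMP,
            updated_at  TIMESTAMP,
            assignee_id VARCHAR(64),
            open        BOOLEAN
        )
    """)
conn.commit()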

With a table built, it may seem like the easiest way to add your data (especially if there isn’t much of it) is to build INSERT statements that add it to your Redshift table row by row. If you have any experience with SQL, this will be your gut reaction. But beware! Redshift isn’t optimized for inserting data one row at a time. If you have any kind of high-volume data to load, you’re much better off loading it into Amazon S3 and then using the COPY command to move it into Redshift.
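Here’s a rough sketch of that S3-then-COPY pattern, reusing the conn object from the previous example. The bucket, key, and IAM role below are placeholders, and the sketch assumes you’ve written the extracted records to a newline-delimited JSON file.

import boto3

# Stage the extracted data in S3 first (bucket and key are placeholders).
boto3.client("s3").upload_file(
    "conversations.json", "my-etl-bucket", "intercom/conversations.json"
)

# Bulk-load from S3 in a single COPY rather than row-by-row INSERTs.
# conn is the psycopg2 connection from the previous sketch.
with conn.cursor() as cur:
    cur.execute("""
        COPY intercom_conversations
        FROM 's3://my-etl-bucket/intercom/conversations.json'
        IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-copy-role'
        FORMAT AS JSON 'auto'
        TIMEFORMAT 'epochsecs'
    """)
conn.commit()

The TIMEFORMAT 'epochsecs' option tells Redshift to interpret epoch-second values like created_at and updated_at as timestamps.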

Keeping Data Up-To-Date

So, now what? You’ve built a script that pulls data from Intercom and loads it into Redshift, but what happens tomorrow when you have dozens of new conversations and related data?

The key is to build your script in such a way that it can also identify incremental updates to your data. Thankfully, Intercom’s API results include updated_at fields that allow you to quickly identify records that are new since your last update (or since the newest record you’ve copied into Redshift). You can set your script up as a cron job or continuous loop to keep pulling down new data as it appears.
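Sketched in Python, the incremental logic is just a high-water mark. How you persist last_synced between runs (a state file, a bookkeeping table) is up to you, and a real script would also page through results rather than fetch a single page.

import os

import requests

# High-water mark: the newest updated_at value already loaded into Redshift.
# A real script would read this from persistent state, not hardcode it.
last_synced = 1400857494

headers = {
    "Authorization": "Bearer " + os.environ["INTERCOM_TOKEN"],
    "Accept": "application/json",
}
resp = requests.get("https://api.intercom.io/conversations", headers=headers)
resp.raise_for_status()

# Keep only the records updated since the last run.
new_records = [
    c for c in resp.json()["conversations"]
    if c["updated_at"] > last_synced
]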

Other Data Warehouse Options

Redshift is totally awesome, but sometimes you need to start smaller or optimize for different things. In this case, many people choose to get started with Postgres, an open source RDBMS that uses nearly identical SQL syntax to Redshift. If you’re interested in seeing the relevant steps for loading this data into Postgres, check out Intercom to Postgres.

Easier and Faster Alternatives

If all this sounds a bit overwhelming, don’t be alarmed. If you have all the skills necessary to go through this process, chances are building and maintaining a script like this isn’t a very high-leverage use of your time.

Thankfully, products like Stitch were built to solve this problem automatically. With just a few clicks, Stitch starts extracting your Intercom data via the API, structuring it in a way that is optimized for analysis, and inserting that data into your Amazon Redshift data warehouse.