In this post I’m going to show how to quickly and easily create a Data Connector between your ObjectRocket MongoDB instance and your ObjectRocket Elasticsearch instance. Then I show you how to view your data within a custom Kibana dashboard.
You’re going to need the following:
- An ObjectRocket MongoDB instance.
- An ObjectRocket Elasticsearch (2.x) instance.
- Both instances in the same zone.
- Existing data in your MongoDB instance. A sample data generator script can be found here.
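If you don't have data yet, the linked sample script will do the job; as a rough illustration, a generator might look something like the sketch below. The document shape here (product_name, rating, review, timestamp) is an assumption chosen to line up with the product-review examples later in this post, not necessarily what the linked script produces.

```python
import random
from datetime import datetime, timedelta

# Hypothetical product list -- the real sample script may use different values.
PRODUCTS = ["Iron Man", "Captain America", "Thor", "Hulk", "Black Widow"]

def generate_reviews(count, seed=42):
    """Build `count` fake review documents with an ISODate-style timestamp."""
    rng = random.Random(seed)
    now = datetime.utcnow()
    docs = []
    for i in range(count):
        docs.append({
            "product_name": rng.choice(PRODUCTS),
            "rating": rng.randint(1, 5),
            "review": "Sample review text #%d" % i,
            # Spread timestamps over the last ~30 days so Kibana's time
            # range selector has something to work with.
            "timestamp": now - timedelta(minutes=rng.randint(0, 60 * 24 * 30)),
        })
    return docs

docs = generate_reviews(1000)
# To load the documents into MongoDB you could use pymongo, e.g.:
#   from pymongo import MongoClient
#   MongoClient(uri)["mydb"]["product_reviews"].insert_many(docs)
print(len(docs))
```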
Select your source MongoDB collection and the target Elasticsearch index
This part is pretty straightforward with three easy steps.
- Give your Data Connector a name.
- Select the MongoDB instance, database, and collection to read from.
- Select the Elasticsearch instance and either create a new index or select an existing one.
Once you click “Add Connector” your schema analysis request will be queued up and you should expect your results back within a few moments. The workflow state is persisted, so you can close your browser or walk away between steps and resume at any time.
Now let’s choose the fields to index
This is where we’ve tried to make it a very easy and pain-free process for you. No APIs to learn or script against, no need to learn the latest pipeline tool’s syntax. We’ve abstracted that away.
By default the schema analysis checks the most recent 1000 rows within your MongoDB collection. In the coming months we will be introducing a number of options for the depth and breadth of the schema analysis, as well as the ability to schedule the scan for off peak hours.
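To give a feel for what a scan like this reports, here is a minimal, self-contained sketch of per-field type counting over a sample of documents. This is an illustration of the idea only, not ObjectRocket's actual analyzer.

```python
from collections import Counter, defaultdict

def analyze_schema(docs):
    """For each field, count how many sampled documents carry each type,
    plus total occurrences and the percentage of documents with the field."""
    type_counts = defaultdict(Counter)
    occurrences = Counter()
    for doc in docs:
        for field, value in doc.items():
            type_counts[field][type(value).__name__] += 1
            occurrences[field] += 1
    total = len(docs)
    return {
        field: {
            "types": dict(type_counts[field]),
            "occurrences": occurrences[field],
            "percent": round(100.0 * occurrences[field] / total, 1),
        }
        for field in occurrences
    }

# Tiny sample with a type mismatch in "rating" to show mixed-type reporting.
sample = [
    {"rating": 5},
    {"rating": 4},
    {"rating": "four"},
    {"name": "Thor", "rating": 3},
]
report = analyze_schema(sample)
print(report["rating"])
```

A mixed-type field like `rating` above is exactly the case where the Types column in the analysis results earns its keep.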
So what are we seeing here? This is a simple schema with no variety to the data types across the 1000 rows examined. Let’s see what these various fields mean.
- Types – the data types found within the field; the number next to each tells you how many of that type were detected. For example, a field with [String (100), Int (900)] tells us that there were 100 rows with strings and 900 rows with integers.
- Occurrences – how many times this field appeared within your collection.
- Percent – the percentage of rows in which this field appears.
- Map Key Target – the type of the field within Elasticsearch
- Analyzed – analyze the string and full-text index it.
- Not Analyzed – Index this field, so it is searchable, but index the value exactly as specified. Do not analyze it.
- No – do not index, store, or analyze this field.
- Raw – creates a second indexed field, appended with “.raw” and set to “Not Analyzed”. This is useful for display purposes within Kibana and other reporting tools.
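For reference, the four options map onto Elasticsearch 2.x string mapping settings. The sketch below shows what such a mapping could look like; the field names are illustrative, not the connector's actual output.

```python
# Illustrative Elasticsearch 2.x mapping fragment. Each option from the list
# above corresponds to a string field's "index" setting; "Raw" adds a
# not_analyzed multi-field alongside the analyzed one.
mapping = {
    "properties": {
        "review":       {"type": "string", "index": "analyzed"},      # Analyzed
        "product_code": {"type": "string", "index": "not_analyzed"},  # Not Analyzed
        "internal_id":  {"type": "string", "index": "no"},            # No
        "product_name": {                                             # Raw
            "type": "string",
            "index": "analyzed",
            "fields": {
                "raw": {"type": "string", "index": "not_analyzed"}
            },
        },
    }
}
```

The `product_name.raw` subfield is the kind of field Kibana's Terms aggregations typically target, since it keeps multi-word values in a single bucket.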
So now that we know that, go ahead and take a few moments to review the shape of your schema, make your selections and confirm your mapping.
Set it and forget it
That’s it! No, really, from this point on it’s our job at ObjectRocket to manage and monitor your Data Connector. You can now focus your time and energy on your own products, not another series of tubes.
Let’s take a look at the data! Kibana is included with each and every Elasticsearch instance and will make short work of our visualization needs.
Add a Kibana user to your Elasticsearch instance
This is super simple and will only take a moment. Let’s head over to our Elasticsearch instance and click on the Users section to expand it. Now click on ‘Add User’.
Now let’s go to the Security section and click on “Add ACL”. This is where you tell the Elasticsearch firewall which IP addresses are able to access the instance. For security reasons, this list should be kept as short as possible.
In this example I have just clicked on the “MyIP” button to quickly add my current IP to the list.
Great! Now grab the Kibana URL from the Connect section of the instance details and browse to it. Use your new user credentials when prompted.
Set up your dashboard
We’re so close, just a couple more steps to go.
Configure the index pattern and time-field
The first step inside Kibana is to set the index name you wish to work with, as well as a time-field if one is available. If you’re using the sample data generator then you have an ISODate timestamp to use.
Above you can see that I have selected to use the “product_reviews” index we specified in the mapping step, and I also selected the “timestamp” field. Clicking the Create button will take you to the index field view.
Define your default search and select the fields
Make sure that you set the time range selector in the top right to a range that contains your data. Once data is available for your date range, you can then select the fields to add to your search. This is done in the left drawer.
After adding the fields, click on the Save icon in the top right corner, just beneath the time range selector. We’re going to use this search in the next step when we set up the visualizations.
Create the visualizations
Click on the Visualize tab at the top of the page to get started on our next step. For this step, I’m going to select the Pie Chart to get an idea of how our Avengers are doing for reviews.
Next, select the Saved Search you made previously. After selecting the search, you will be taken to the pie chart configuration.
As you can see above, I’ve selected the Count aggregation and sliced on the product_name field within the Terms aggregation. Press the green play button in the left nav, just above the metrics settings, to see the resulting chart. If you have data and have followed along, you should have something similar to what you see below. Make sure to save it.
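Under the hood, a pie chart like this boils down to a search with a terms aggregation. A sketch of the equivalent raw query body is below; the index and field names assume the sample product-review data, and the `.raw` subfield assumes a “Raw” mapping was chosen so product names aren't split into individual words.

```python
# Rough equivalent of the pie chart's request: count documents per product.
query = {
    "size": 0,  # we only want aggregation buckets, not the matching hits
    "aggs": {
        "reviews_per_product": {
            "terms": {"field": "product_name.raw"}
        }
    },
}
# With the official Python client you might run something like:
#   from elasticsearch import Elasticsearch
#   es = Elasticsearch(hosts, http_auth=(user, password))
#   es.search(index="product_reviews", body=query)
```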
Lather, Rinse, Repeat! Take some time to explore and play with the other visualization widgets to create new views into your data. We’re going to use these in the next step to fill out the first dashboard.
Time to make the dashboard
This is what we’ve been working towards, a custom dashboard driven from our existing MongoDB. Click on the Dashboard tab in the top nav of Kibana. Once on the dashboard, find the Add Visualization button (it’s a plus sign in a circle) and press it. You should see something like I have shown below.
Now you can select the visualizations you previously created and drag them around the screen until you’re happy with your brand new dashboard. Don’t forget to save it! Check out the dark theme, which can be found under the gear icon in the top right toolbar.
That was pretty easy wasn’t it? And we hope to make it even easier as we continue to iterate on the Data Connectors.
Need to migrate from Parse? We got you covered!
Our combination of MongoDB, Elasticsearch, Kibana, and an easy-to-use data synchronization tool makes ObjectRocket the perfect new home for your application’s Parse data needs. Check out our guide to help migrate your Parse data to ObjectRocket MongoDB. Don’t forget that the help and expertise of our amazing customer data engineering team is included with every instance and is available 24/7/365.
Want to learn more about ObjectRocket?
Great! Head over to our pricing page for more information.
Need help with instance creation or setup? Check out our docs over here! Not cutting it for you? Then contact our best-in-the-industry MongoDB support and Elasticsearch support, all included as part of your instance.
Does this sound like the kind of platform you’d like to work on?
Do you talk ETL in your sleep?
Check out our careers page and send us your resume. We’d love to hear from you, and we’re always looking for talented engineers to add to our teams.