Use Cases

Top 10 tips for using Claude Desktop with Rill Data

Jon Walls · June 25, 2025 · 5 minutes reading time

You've probably heard – AI language models make it possible to talk directly to your data! This is particularly true when you have a metrics layer like Rill. Every metric is calculated correctly, and every field has a name and a description that helps the AI understand what data is available. Giving your favorite language model a metrics layer, instead of pointing it at raw data, reduces the errors made when asking questions in natural language.

The industry is adopting the Model Context Protocol (MCP) from Anthropic as the default interface for connecting an AI to a tool like a metrics layer. Rill's new MCP Server uses this new standard to let your AI talk directly to your metrics. Be sure to read the documentation!

To get the most out of your data conversations, we've compiled our favorite tips. Get connected, try them out, and let us know if you find any new techniques!

Top 10 tips for using Claude Desktop with Rill Data

  • Configure Claude Desktop to use Rill
  • Add ai_instructions to your model
  • If your model doesn't have AI instructions, create a custom style
  • It's like a role play – give lots of context
  • Ask specific questions
  • Sonnet for fast conversations, Opus and Extended Thinking for deeper investigations
  • Tell it what to do with the response to your question
  • Review and challenge the responses
  • The language models have industry knowledge, so ask for advice and explanations
  • If you want to build a data application, think about using a custom API (requires admin access)

Configure Claude Desktop to use Rill

The first step is to get connected! At the time of writing, we recommend using Claude Desktop. IDEs like Cursor and Windsurf either struggle to build the more complex payloads required, or over-interpret the local coding environment as part of the context. ChatGPT and Gemini have yet to introduce good support for adding MCP servers. This will probably all change – these tips should work across all chat clients using an MCP Server.

Rill makes it super easy to configure your Claude Desktop. Full instructions are available in our online docs. If you prefer a walkthrough, we have a video showing the process. Here's the overview:

  1. Go to the new AI tab on your Rill project
  2. Click Create token to update the snippet with your personal token
  3. In Claude Desktop, go to Settings -> Developer -> Edit Config
  4. Update your claude_desktop_config.json file with the snippet
  5. Restart Claude
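The resulting entry in claude_desktop_config.json will look roughly like the sketch below. This is illustrative only: the server URL and token are placeholders, and the exact command and arguments may differ, so always copy the real snippet from your project's AI tab rather than typing this by hand.

```json
{
  "mcpServers": {
    "rill": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://mcp.rilldata.com/<your-project-path>/mcp",
        "--header",
        "Authorization: Bearer <your-rill-token>"
      ]
    }
  }
}
```

If you already have other entries under `mcpServers`, add the `rill` key alongside them rather than replacing the whole object.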

You'll find your Rill AI config on the project's AI tab.

Add ai_instructions to your model

Claude gives the best experience when you provide instructions, specific to Rill, on how to use our MCP Server and how to present its outputs to the user. We've added a new ai_instructions option to make that possible. It can be applied to the top-level rill.yaml project file and to each metrics view.

In our example project, we've added instructions to rill.yaml, auction_metrics.yaml, and bids_metrics.yaml. It's important to note that you can just copy our rill.yaml example, but for the metrics views you have to provide URL examples specific to your project. 

These instructions will:

  • Give guidelines on how to handle user prompts. For example, the AI should conclude its responses with suggested next steps.
  • Teach Claude how to include Explore URLs in its responses. Important: each of your metrics views must be updated to include examples of URLs using that view.
  • Show Claude how to use text symbols to visualize data. This is a lot faster than creating React applications.
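As a rough sketch of what this looks like in a metrics view, you might add something like the following. The instruction wording, the view name, and the Explore URL pattern below are placeholders, not the contents of our example project; copy the real examples from our docs and substitute URLs from your own project.

```yaml
# bids_metrics.yaml (illustrative; adapt names and URLs to your project)
type: metrics_view
# ... your dimensions and measures ...
ai_instructions: |
  End each response with suggested next steps for the user.
  When linking to dashboards, use Explore URLs of this form:
  https://ui.rilldata.com/<org>/<project>/explore/bids_metrics
  Prefer simple text-symbol charts over generating React applications.
```

The same `ai_instructions` key in rill.yaml applies project-wide, while instructions in a metrics view apply to queries against that view.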

If your model doesn't have AI instructions, create a custom style

If you don't have access to edit the data model, you can get the same results by using a "custom style" in Claude. These are mostly intended to set Claude's writing style, with out-of-the-box examples including "Normal", "Concise", "Explanatory" and "Formal". You can include any instructions you like, though, and this means you can give Claude additional context for when you want to speak to your data. This is super useful because it saves you having to provide the same context every time you're about to use Rill in your chat.

As with configuring Claude, we've provided instructions in our online docs and also have a video showing you how to add your own custom style. You can use our template to get started, but remember to create your own Explore URL examples so that the language model can generate good links for your project.

It's like a role play – give lots of context

You're probably already familiar with AI chats. Even so, it's worth remembering that one of the best ways to get good results is to treat the chat like a real world conversation. When you're talking to Claude and expecting it to use Rill, it's like asking a work colleague a question. If you share the reason for your question or what you're investigating, the AI will have more to work with and will do a better job of querying the data. 

In the example below, letting the AI know that I'm focused on a particular industry vertical means Claude can filter the data to pharmaceuticals only. That information isn't actually in the dataset! There is no field categorising the data by industry vertical. Because the language model has been trained on the internet, it is nonetheless able to apply that context to its queries. The result combines the raw data in Rill with your prompt's shared context and the model's pretrained knowledge.

Ask specific questions

The more specific you can be, the better. If you're familiar with the Rill project, you can directly ask questions like the one below which is easy for Claude to map to the data model. 

It doesn't have to be a perfect match, though. Claude needs to figure out that "mobile" and "desktop" map to the "device_type" field, for instance. If you ask precise questions, the AI can do the rest of the work in figuring out which fields you are thinking of.

Sonnet for fast conversations, Opus and Extended Thinking for deeper investigations

You can set Claude Desktop to use one of two models: Claude Opus or Claude Sonnet. Sonnet is the faster model, and works perfectly well for many Rill conversations. Especially when using the free client, using Sonnet will allow you to ask more questions.

For deeper questions, there are two ways to tell Claude to use more resources: switching to the most advanced model, Claude Opus, and enabling Extended Thinking. This makes Claude more likely to run additional queries as part of responding to your data question. It also seems to pay more attention to the details of your custom style, such as the instructions in our template to use text symbols for data visualization.

Tell it what to do with the response to your question

This tip combines nicely with using Opus and Extended Thinking! Sometimes you know your questions will take several queries to answer, for instance when you have specific followups you want to do. You can combine an entire train of thought into a single prompt, and Claude will usually then run a series of queries that follow your instructions.

(Claude does some extended thinking before compiling a final summary.)

Review and challenge the responses

It's important to remember that the AI can be too eager to interpret the data one way or another. Pay attention to actual data returned, and correct the AI if it has over-indexed on something that you know is an outlier or incomplete information. Over the course of a full conversation, Claude is pretty smart about learning from your input, as well as the data it is collecting from Rill.

In the example below, the AI has decided that very recent, incomplete data represents a downward trend, and I am correcting its conclusions.

The language models have industry knowledge, so ask for advice and explanations

Your questions don't all have to be about data queries. Just like the AI knew above which firms are in the pharmaceutical industry, its language model probably contains a lot of knowledge about the popular metrics in your industry. That means you can treat Claude as a sounding board and even mentor, just like a work colleague.

Claude can give you not just a list of available fields, but also put some additional context around them. In the example below, the data model itself doesn't contain any groupings of the different dimensions available; Claude has presented them to me in a useful, readable format that puts related fields together. This can be really helpful when you're freely exploring the data, by giving you ideas to try out.

If you want to build a data application, think about using a custom API (requires admin access)

Let's finish with an advanced example, using the MCP Server to help you build a custom API in Rill. This is one for Rill admins only, as you'll need elevated permissions to deploy the code. Custom APIs are a separate topic: you can find the relevant documentation here.

Metrics SQL is very similar to SQL, and pretty easy to write. The nice thing about asking Claude to write it for you is that it will check the metrics definitions, selecting the right fields and using the correct time column for the query:

Want to switch context from talking to your data to building a custom integration for a data app? No problem, the AI has all the context it needs from the MCP Server to write Rill Metrics SQL for you.
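A minimal custom API definition might look like the sketch below. The file name, metrics view, and field names here are hypothetical placeholders (loosely modelled on the bids_metrics view mentioned above), so check the custom API documentation and your own metrics definitions before deploying anything.

```yaml
# apis/top_advertisers.yaml (illustrative sketch, not from the example project)
type: api
metrics_sql: |
  SELECT advertiser_name, AGGREGATE(total_bids) AS bids
  FROM bids_metrics
  GROUP BY advertiser_name
  ORDER BY bids DESC
  LIMIT 10
```

Because Claude has the metrics definitions via the MCP Server, it can generate queries like this with the correct field names for you, rather than you writing them from memory.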

Wrapping up

OK, that's our top ten prompting tips! You should experiment with your own ideas; LLMs are very flexible and sensitive to context, so you might find tips of your own.

If you aren't using Rill already – try it out with our public project! All of the examples above were written using this project. The AI tab with MCP configuration details is here. As the project is public, you don't need an access token; just copy the snippet as it is.

Ready for faster dashboards?

Try for free today.