How Cora Uses AI and InfluxDB to Deliver Personalized Health Analytics
By Charles Mahler, Developer
Aug 15, 2025
Simply collecting and storing data doesn’t create value—the key is deriving insights that tell you what actions to take. In the past, this required specialized knowledge; with advancements in AI, however, we are reaching the point where users no longer need technical expertise to derive value from their data.
In this blog, we will go over how the InfluxDB 3 Hackathon winner, Cora, uses InfluxDB and LLMs to enable anyone to generate actionable insights from their own personal health data. For more details, be sure to watch the full webinar.
What is Cora?
Cora is an application that enables users to ingest and analyze data from various health applications, including Fitbit, Oura, and Apple Health, using natural language powered by LLMs. These analyses form the basis of actionable insights toward the user’s desired health outcomes, such as improving sleep quality, losing weight, and reducing stress.
Cora is available as a mobile application or through any LLM client that can interact with MCP servers, such as Claude.
Why AI needs a time series platform
The features of a platform like Cora aren’t possible with an AI model on its own due to the following problems:
- Context window limitations - The large volumes of historical data that need analysis can’t fit into an AI model’s context window.
- Inability to do math - LLMs can’t reliably perform statistical analysis on their own; they need access to tools that do the math for them, and they then use those answers to respond to the user.
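The first limitation is easy to see with some back-of-the-envelope arithmetic. The numbers below are assumptions for illustration (readings per minute, tokens per serialized data point, and a 200K-token context window), but they show why a year of raw sensor data can’t simply be pasted into a prompt:

```python
# Rough estimate: one year of minute-resolution heart rate readings
# versus a typical LLM context window. All constants are assumptions.

POINTS_PER_DAY = 24 * 60          # one reading per minute
DAYS = 365
TOKENS_PER_POINT = 10             # assumed: timestamp + value serialized as text
CONTEXT_WINDOW = 200_000          # assumed context window size in tokens

total_points = POINTS_PER_DAY * DAYS
total_tokens = total_points * TOKENS_PER_POINT

print(f"{total_points:,} points, roughly {total_tokens:,} tokens")
print(f"Fits in the context window: {total_tokens <= CONTEXT_WINDOW}")
```

Even with these conservative assumptions, a single metric for a single user overshoots the window by more than an order of magnitude, before any other metrics or conversation history are included.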
InfluxDB provides the solution by enabling Cora to query the data relevant to a user’s question in real time with minimal latency. On the backend, Cora performs aggregations and correlations on the data stored in InfluxDB and passes the results to the LLM, which uses them to craft a helpful response to the user’s question about their data. By integrating LLMs with InfluxDB, analytics tools can sidestep context window limits, access external knowledge not in the LLM’s training data, and deliver personalized experiences to users.
The diagram below illustrates an example of how a user’s interaction with Cora might occur behind the scenes, demonstrating how Cora can interpret questions and perform aggregations and other statistical analyses to provide answers on demand.
Evolution of AI knowledge access
The AI landscape is moving fast, and best practices change rapidly. One of the major challenges when working with AI models is giving them access to data not included in their training set. In the earliest days, developers either relied entirely on the model’s internal weights or added information to a static prompt. Next came Retrieval-Augmented Generation (RAG), which finds relevant information in external sources and adds it to the context window for the LLM to use. Tool calling followed, allowing LLMs to access external tools for both inputs and outputs. The issue here, however, was that there was no standard for how various tools interacted with models and with each other.
The current best practice for allowing AI models to interact with external data and tools is MCP, a standard created by Anthropic. MCP servers expose functions that the LLM can access to accomplish tasks in a standardized way—this is what Cora uses for its platform. InfluxDB has also created an MCP server to facilitate easy interaction with your InfluxDB instance using AI models.
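To make the "standardized way" concrete, here is what an MCP tool definition looks like in the shape the MCP specification uses for `tools/list` responses (a name, a description, and a JSON Schema under `inputSchema`). The tool name and parameters below are hypothetical illustrations, not Cora’s actual API:

```python
# A hypothetical MCP tool definition in the shape the MCP spec uses.
# The tool name and parameters are illustrative, not Cora's actual API.
query_metric_tool = {
    "name": "query_metric_aggregation",
    "description": "Aggregate a health metric over a time range for the current user.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "metric": {"type": "string", "description": "e.g. 'heart_rate' or 'sleep_duration'"},
            "aggregation": {"type": "string", "enum": ["mean", "min", "max", "sum"]},
            "start": {"type": "string", "description": "ISO 8601 start time"},
            "stop": {"type": "string", "description": "ISO 8601 end time"},
        },
        "required": ["metric", "aggregation", "start", "stop"],
    },
}
```

Because every MCP server advertises its tools in this common shape, any MCP-capable client (Claude, for example) can discover and call them without custom integration code.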
Cora tech stack
The heart of Cora’s architecture is the gRPC server that acts as a coordinator between the MCP server, InfluxDB, and Firestore. InfluxDB stores all time series data, while Firestore stores basic user data, as well as any metadata relevant to specific time series metrics. Cora also uses the Firebase user ID to tag data stored in InfluxDB. InfluxDB is critical to the application because it can respond to queries fast enough to provide a real-time user experience, without requiring pre-computed statistics or other analytics.
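Tagging each point with the user’s ID might look like the following sketch, which builds an InfluxDB line protocol string with a `user_id` tag. The measurement and field names are assumptions for illustration, not Cora’s actual schema:

```python
def to_line_protocol(measurement, user_id, fields, timestamp_ns):
    """Build an InfluxDB line protocol string tagged with the user's Firebase ID.

    The measurement and field names passed in are illustrative only.
    """
    field_str = ",".join(f"{key}={value}" for key, value in fields.items())
    return f"{measurement},user_id={user_id} {field_str} {timestamp_ns}"

line = to_line_protocol("heart_rate", "firebase-uid-123", {"bpm": 72}, 1755216000000000000)
print(line)
# heart_rate,user_id=firebase-uid-123 bpm=72 1755216000000000000
```

Because `user_id` is a tag rather than a field, queries can filter to one user’s series efficiently, which is what makes per-user, on-demand aggregation fast enough for a real-time experience.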
Below is an example of the data structure returned by Cora’s MCP server when an AI model requests a metric aggregation.
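Cora’s exact response format isn’t reproduced here; as a purely illustrative stand-in, an aggregation result returned from an MCP tool might look something like the following, where every field name and value is an assumption:

```python
# A hypothetical aggregation result an MCP server might return to the model.
# Every field name and value here is an assumption for illustration.
example_response = {
    "metric": "sleep_duration",
    "aggregation": "mean",
    "window": {"start": "2025-07-01T00:00:00Z", "stop": "2025-08-01T00:00:00Z"},
    "value": 7.4,
    "unit": "hours",
    "sample_count": 31,
}
```

The key property is that the model receives a small, already-computed summary rather than raw time series data, keeping the response well within the context window.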
Next steps
There is a ton of exciting activity in the AI ecosystem, and Cora is a prime example. If you are interested in how developers are building applications that rely heavily on AI models, be sure to watch the full webinar to learn more best practices and see how to integrate InfluxDB with your application.
To try out Cora, sign up for the waitlist here.