<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>InfluxData Blog - Getting Started</title>
    <description>Posts from the Getting Started category on the InfluxData Blog</description>
    <link>https://www.influxdata.com/blog/category/getting-started/</link>
    <language>en-us</language>
    <lastBuildDate>Tue, 24 Feb 2026 08:00:00 +0000</lastBuildDate>
    <pubDate>Tue, 24 Feb 2026 08:00:00 +0000</pubDate>
    <ttl>1800</ttl>
    <item>
      <title>What Is Predictive Analytics? A Complete Guide for 2026</title>
      <description>&lt;p&gt;In simple terms, predictive analytics is a form of analytics that tries to predict future events, trends, or behaviors based on historical and present data. You can achieve this goal in different ways, each involving trade-offs between accuracy and cost.&lt;/p&gt;

&lt;h2 id="why-is-predictive-analytics-important"&gt;Why is predictive analytics important?&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/blog/predictive-analytics-pipelines-real-world-ai-predictive-maintenance-time-series-data/"&gt;Predictive analytics&lt;/a&gt; enables organizations to be more efficient and accurate in how they plan for the future. The end result of a properly implemented predictive analytics system will depend on the industry, but at a high level, here are some common benefits:&lt;/p&gt;

&lt;h4 id="improved-strategic-decision-making"&gt;Improved Strategic Decision-Making&lt;/h4&gt;

&lt;p&gt;Predictive analytics provides insight into future trends, so business leaders can make better decisions faster instead of merely reacting to events.&lt;/p&gt;

&lt;h4 id="increased-operational-efficiency"&gt;Increased Operational Efficiency&lt;/h4&gt;

&lt;p&gt;Using predictive analytics can help businesses improve their profit margins and efficiency by predicting equipment failures and reducing downtime.&lt;/p&gt;

&lt;h4 id="improved-risk-management"&gt;Improved Risk Management&lt;/h4&gt;

&lt;p&gt;By looking at historical data where things went wrong, a business can reduce its risk by finding data that correlates with negative outcomes and avoiding them proactively. An example would be a bad investment in the finance industry.&lt;/p&gt;

&lt;h4 id="happier-customers"&gt;Happier customers&lt;/h4&gt;

&lt;p&gt;Predicting potential churn so you can reach out to customers before they leave, or keeping items in stock through more accurate inventory forecasts, both enhance the customer experience.&lt;/p&gt;

&lt;h2 id="how-does-predictive-analytics-work"&gt;How does predictive analytics work?&lt;/h2&gt;

&lt;p&gt;The end goal of predictive analytics is to make accurate predictions based on historical data. Here is a general outline of the process for building a predictive analytics system:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Determine the goal for the project&lt;/strong&gt;.
The first step is to identify the problem or opportunity you are trying to address via predictive analytics. Define your goals and success metrics upfront.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Organize and collect data&lt;/strong&gt;.
The next step will be gathering the data to build your predictive analytics model, as well as the pipeline that will send fresh data to your model for generating predictions. 
This will typically be a combination of public data similar to your own, 3rd-party data relevant to your use case, and your own unique business data for fine-tuning your model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Process data&lt;/strong&gt;.
Once you have your data, one of the biggest challenges is often processing and cleaning it so it’s ready for your model. This can involve removing invalid data, filling in missing data, or transforming data into a standard format.&lt;/p&gt;
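&lt;p&gt;As a toy illustration of this step, here is one way to clean a batch of readings in plain Python; the values and the &lt;code&gt;-999.0&lt;/code&gt; invalid marker are hypothetical:&lt;/p&gt;

```python
# Hypothetical sensor readings: None marks missing data, -999.0 marks invalid data.
raw_readings = [21.5, None, 22.1, -999.0, 23.0, None, 23.8]

def clean(readings, invalid_marker=-999.0):
    """Treat invalid markers as missing, then fill gaps from the nearest neighbors."""
    vals = [None if r == invalid_marker else r for r in readings]
    cleaned = []
    for i, v in enumerate(vals):
        if v is not None:
            cleaned.append(v)
            continue
        # average the nearest known values on each side (or copy the one that exists)
        prev = next((vals[j] for j in range(i - 1, -1, -1) if vals[j] is not None), None)
        nxt = next((vals[j] for j in range(i + 1, len(vals)) if vals[j] is not None), None)
        if prev is not None and nxt is not None:
            cleaned.append((prev + nxt) / 2)
        else:
            cleaned.append(prev if prev is not None else nxt)
    return cleaned

print(clean(raw_readings))
```

&lt;p&gt;Real pipelines also standardize units and formats at this stage, but the shape of the work is the same: detect bad values, then repair or drop them.&lt;/p&gt;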

&lt;p&gt;&lt;strong&gt;4. Develop a predictive analytics model&lt;/strong&gt;.
Now that your data has been collected and cleaned, you are ready to actually develop your predictive model. The model you use will depend on your business requirements, including accuracy requirements and the type of modeling you will be doing.&lt;/p&gt;

&lt;p&gt;A predictive model can be used for trend detection, classification, clustering, and more. You can create these models using statistical methods or modern machine learning techniques.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Validate results&lt;/strong&gt;.
Creating and deploying your model is just the first step; once the model is live, you will need to validate the results to confirm it works as expected. 
This generally involves testing against a separate dataset for accuracy, as well as running the model against live production data and evaluating the results based on the output. 
If the results aren’t as good as desired, you may need to return to the previous steps and modify factors like how data is processed and the type of model used.&lt;/p&gt;
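&lt;p&gt;The core of this validation step can be sketched in a few lines: hold out the most recent data, fit on the rest, and measure the error. The series and the deliberately naive “predict the mean” baseline below are illustrative placeholders:&lt;/p&gt;

```python
# Hypothetical daily demand series; the last 25% is held out for validation.
history = [100, 102, 101, 105, 107, 106, 110, 112]

split = int(len(history) * 0.75)
train, holdout = history[:split], history[split:]

# A deliberately naive baseline model: always predict the training mean.
prediction = sum(train) / len(train)
mae = sum(abs(actual - prediction) for actual in holdout) / len(holdout)

print(f"baseline predicts {prediction:.2f}, MAE on holdout = {mae:.2f}")
```

&lt;p&gt;Any real model should beat a simple baseline like this on the holdout set; if it doesn’t, revisit the earlier steps.&lt;/p&gt;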

&lt;p&gt;&lt;strong&gt;6. Deploy to production&lt;/strong&gt;.
If your predictive analytics model produces accurate, valuable results, you can now deploy it to production, where people will actually use the results. The system may need a human to confirm the action, or it may be fully automated, taking action solely based on the model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Update and improve the model over time&lt;/strong&gt;.
Predictive analytics isn’t a one-time deal. You will want to constantly feed your model recent data so it stays up to date and can be aware of potential changes that need to be integrated. 
Typical tasks would involve retraining the model, adjusting parameters, or providing it with additional data to improve accuracy. The entire system can also be fine-tuned over time to be more efficient and affordable.&lt;/p&gt;

&lt;h2 id="predictive-analytics-use-cases"&gt;Predictive analytics use cases&lt;/h2&gt;

&lt;p&gt;Predictive analytics is useful across almost every industry, but let’s take a look at a few specific examples where it is particularly valuable. 
An ideal use case for predictive analytics is any situation where data is relatively easy to collect and more accurate predictions will generate a significant business impact, such as increased revenue or reduced costs.&lt;/p&gt;

&lt;h4 id="manufacturing"&gt;Manufacturing&lt;/h4&gt;

&lt;p&gt;In the &lt;a href="https://www.influxdata.com/resources/advanced-manufacturing-monitoring-using-ctrlxos-with-influxdb/"&gt;manufacturing&lt;/a&gt; sector, predictive analytics can be used to predict and prevent machinery malfunctions before they occur. This reduces maintenance costs and improves factory efficiency, resulting in higher profit margins.&lt;/p&gt;

&lt;h4 id="healthcare"&gt;Healthcare&lt;/h4&gt;

&lt;p&gt;Governments and businesses both use predictive analytics to improve the healthcare industry. Governments build predictive models to anticipate and prevent the spread of disease and to guide investment in healthcare programs. 
Hospitals can use predictive models on patient medical records to create personalized treatment plans.&lt;/p&gt;

&lt;h4 id="marketing"&gt;Marketing&lt;/h4&gt;

&lt;p&gt;Predictive analytics can be used for marketing purposes to predict trends in consumer demand, improve customer engagement to prevent churn, and improve sales by recommending products customers might like based on their past purchases compared to those of similar customers.&lt;/p&gt;

&lt;h4 id="supply-chain-management"&gt;Supply Chain Management&lt;/h4&gt;

&lt;p&gt;Predictive analytics can help with supply chain management by forecasting changes in product supply and demand driven by factors such as time of year or location. It can also be used to optimize logistics and manage risk.&lt;/p&gt;

&lt;h4 id="finance"&gt;Finance&lt;/h4&gt;

&lt;p&gt;The &lt;a href="https://sigmatechnology.com/articles/predictive-analytics-for-finance-insights-and-case-studies/"&gt;finance&lt;/a&gt; industry uses predictive analytics in a number of ways, ranging from predicting stock prices to detecting fraudulent transactions. Banks can use predictive analytics to assess loan applicants’ risk by comparing historical data with the applicant’s personal history.&lt;/p&gt;

&lt;h2 id="predictive-analytics-challenges"&gt;Predictive analytics challenges&lt;/h2&gt;

&lt;p&gt;While predictive analytics can offer many business benefits, implementing it can be challenging, especially if a company lacks in-house expertise or infrastructure. Here are some of the key roadblocks to consider when getting started.&lt;/p&gt;

&lt;h4 id="data-quality"&gt;Data Quality&lt;/h4&gt;
&lt;p&gt;To make accurate predictions, you will need a large volume of high-quality data relevant to your predictive analytics use case. This means you need to have a way to collect data and store it in a long-term format that is easy to access for teams creating predictive analytics models.&lt;/p&gt;

&lt;h4 id="integration-with-legacy-systems"&gt;Integration with Legacy Systems&lt;/h4&gt;
&lt;p&gt;Many established businesses will have systems that may not be seamlessly integrated. This means engineering effort will be required to ensure that data is not siloed and that the predictive analytics team can access the systems and data they require.&lt;/p&gt;

&lt;h4 id="accuracy-of-results"&gt;Accuracy of Results&lt;/h4&gt;
&lt;p&gt;The biggest challenge with predictive analytics will be creating a model that produces results accurate enough to justify the investment in creating them and that drives business value.&lt;/p&gt;

&lt;p&gt;This will require not only the initial creation of the model but also constant updates with new data to keep it accurate as conditions change.&lt;/p&gt;

&lt;h4 id="hiring-talent"&gt;Hiring Talent&lt;/h4&gt;
&lt;p&gt;All of the above problems require highly skilled employees to be solved. These skills are in demand across many industries, making it difficult to attract and retain the workers needed to implement a predictive analytics system.&lt;/p&gt;

&lt;h4 id="security"&gt;Security&lt;/h4&gt;
&lt;p&gt;Another challenge with predictive analytics is ensuring that all the new data collected and stored is secure. This data can contain sensitive information about customers or about your business, so security must be a top priority.&lt;/p&gt;

&lt;h2 id="predictive-analytics-techniques"&gt;Predictive analytics techniques&lt;/h2&gt;

&lt;p&gt;There are a number of models available for generating insights via predictive analytics. The type of model to use for your organization depends on the data you are working with, as well as factors such as the cost to develop the model and your accuracy requirements.
Let’s take a look at some of the most common predictive analytics techniques and models.&lt;/p&gt;

&lt;h4 id="machine-learningai-models"&gt;Machine Learning/AI Models&lt;/h4&gt;

&lt;p&gt;In the past, classical statistical models have dominated predictive analytics and forecasting because of their ease of interpretation, lower computational costs, and accuracy. 
However, in recent years, ML/AI-based models have begun to surpass traditional forecasting methods in accuracy. They also offer the benefit of being easier to generalize across different predictions and of requiring less fine-tuning by highly trained statisticians.&lt;/p&gt;

&lt;h4 id="time-series-models"&gt;Time Series Models&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/blog/time-series-influxdb-vector-database/"&gt;Time series models&lt;/a&gt; are used to analyze temporal data and forecast future values. They are particularly useful when data shows sequential patterns or seasonality, such as stock prices, weather patterns, or sales data.&lt;/p&gt;

&lt;p&gt;Time series models are ideal for data that has seasonal variations and time-based dependencies, making them useful for forecasting.&lt;/p&gt;

&lt;p&gt;Some downsides of time series models are that they can struggle when the data isn’t at regular intervals and may assume past trends will continue, which can make them inaccurate at predicting drastic changes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/blog/python-ARIMA-tutorial-influxDB/"&gt;ARIMA&lt;/a&gt; and exponential smoothing are examples of time series models. An easy way to start testing these models for predictive analytics is to use a library like Python Statsmodels.&lt;/p&gt;
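&lt;p&gt;As a quick illustration of the idea (a library like Statsmodels provides properly tuned implementations), simple exponential smoothing fits in a few lines of plain Python:&lt;/p&gt;

```python
def exponential_smoothing(series, alpha=0.5):
    """Blend each new observation with the running estimate; a higher alpha
    weights recent data more heavily. Returns the one-step-ahead forecast."""
    level = series[0]
    for value in series[1:]:
        level = alpha * value + (1 - alpha) * level
    return level

# Illustrative series: the forecast tracks the recent upward drift.
print(exponential_smoothing([10.0, 12.0, 11.0, 13.0], alpha=0.5))
```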

&lt;h4 id="regression-models"&gt;Regression Models&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/blog/guide-regression-analysis-time-series-data/"&gt;Regression models&lt;/a&gt; predict a continuous outcome variable based on one or more predictor variables. They are widely used in predictive analytics, from predicting house prices to estimating stock returns.&lt;/p&gt;

&lt;p&gt;Regression models are useful for providing results that are easy to interpret and for identifying clear relationships between variables. Some downsides of regression models are that they do require a decent level of statistics knowledge and can struggle with non-linear relationships and datasets with many variables.&lt;/p&gt;

&lt;p&gt;Linear and logistic regression are examples of regression models. You can get started with regression models using the Python scikit-learn library.&lt;/p&gt;
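&lt;p&gt;To make this concrete, here is a minimal least-squares fit for one predictor variable in plain Python; the house-size data is made up, and scikit-learn’s &lt;code&gt;LinearRegression&lt;/code&gt; handles the general multi-variable case:&lt;/p&gt;

```python
def fit_line(xs, ys):
    """Closed-form ordinary least squares for a single predictor."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    covariance = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    variance = sum((x - mean_x) ** 2 for x in xs)
    slope = covariance / variance
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical data: house size in square meters vs. price in thousands.
sizes = [50, 60, 80, 100]
prices = [150, 180, 240, 300]
slope, intercept = fit_line(sizes, prices)
print(f"price = {slope:.1f} * size + {intercept:.1f}")
```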

&lt;h4 id="decision-tree-models"&gt;Decision Tree Models&lt;/h4&gt;

&lt;p&gt;Decision tree models make predictions by learning simple decision rules from the data. They can be used for both regression and classification problems. 
Their results are easier to understand than those of more complex machine learning models. A challenge is that they can easily be over- or underfit and can be sensitive to small changes in the data.&lt;/p&gt;

&lt;h4 id="gradient-boosting-model"&gt;Gradient Boosting Model&lt;/h4&gt;

&lt;p&gt;Gradient boosting involves creating an ensemble of prediction models, typically from decision tree models. This method can be extremely accurate and has been used in recent years to win many machine learning competitions.&lt;/p&gt;

&lt;p&gt;Gradient boosting is good at providing accurate predictions for data with non-linear relationships between variables and datasets with high dimensionality.&lt;/p&gt;

&lt;p&gt;One weakness is that they can be overfit when they aren’t tuned properly and are more of a black box compared to traditional statistical models. XGBoost and LightGBM are libraries that can be used to create gradient boosting models.&lt;/p&gt;
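&lt;p&gt;To make the boosting idea concrete, here is a toy booster built from two-leaf decision stumps on one-dimensional data. It only sketches the sequential residual-fitting loop; XGBoost and LightGBM add regularization, sampling, and far more efficient tree construction:&lt;/p&gt;

```python
def best_stump(xs, residuals):
    """Fit a two-leaf tree: pick the split threshold with the lowest squared error."""
    candidates = []
    for threshold in sorted(set(xs))[:-1]:  # keep both sides of the split non-empty
        right = [r for x, r in zip(xs, residuals) if x > threshold]
        left = [r for x, r in zip(xs, residuals) if not x > threshold]
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - lmean) ** 2 for r in left) + sum((r - rmean) ** 2 for r in right)
        candidates.append((err, threshold, lmean, rmean))
    _, t, lm, rm = min(candidates)  # lowest squared error wins
    return lambda x: rm if x > t else lm

def boost(xs, ys, rounds=20, learning_rate=0.5):
    """Each round fits a stump to the residuals the ensemble has not yet explained."""
    stumps, pred = [], [0.0] * len(xs)
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, pred)]
        stump = best_stump(xs, residuals)
        stumps.append(stump)
        pred = [p + learning_rate * stump(x) for p, x in zip(pred, xs)]
    return lambda x: sum(learning_rate * s(x) for s in stumps)

# Illustrative step-shaped data the ensemble should recover almost exactly.
xs, ys = [1, 2, 3, 4, 5, 6], [5, 5, 5, 20, 20, 20]
model = boost(xs, ys)
print([round(model(x), 2) for x in xs])
```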

&lt;h4 id="random-forest-models"&gt;Random Forest Models&lt;/h4&gt;

&lt;p&gt;Random forests are similar to gradient boosting in that they are ensemble models that use decision trees for making predictions. The main difference is how the trees are trained: gradient boosting trains its trees sequentially, so each new tree corrects the errors of the trees before it.&lt;/p&gt;

&lt;p&gt;In comparison, random forest decision trees make predictions independently, and then the final prediction is created by aggregating those predictions. This makes the results easier to interpret because each decision tree’s prediction can be analyzed. You can test out random forest models on your data using a library like scikit-learn.&lt;/p&gt;

&lt;h4 id="clustering-models"&gt;Clustering Models&lt;/h4&gt;

&lt;p&gt;Clustering models, such as k-means clustering, can be used to group data points. While this is generally used for data analysis, these clusters can also serve as input features for predictive models like the ones mentioned above.&lt;/p&gt;

&lt;p&gt;Cluster modeling can help identify hidden patterns or relationships in your data, but it requires a way to measure how similar data points are, and the number of clusters must be chosen ahead of time.&lt;/p&gt;
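&lt;p&gt;A bare-bones version of k-means illustrates both requirements: a similarity measure (squared Euclidean distance here) and a fixed number of clusters chosen via the initial centers. The points and starting centers below are made up, and scikit-learn’s &lt;code&gt;KMeans&lt;/code&gt; handles initialization and convergence far more robustly:&lt;/p&gt;

```python
def kmeans(points, centers, iterations=10):
    """Alternate between assigning points to the nearest center and re-centering."""
    for _ in range(iterations):
        clusters = [[] for _ in centers]
        for p in points:
            distances = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centers]
            clusters[distances.index(min(distances))].append(p)
        centers = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            for cl in clusters if cl  # drop a center if its cluster ends up empty
        ]
    return centers

# Two obvious groups of 2-D points and two rough starting centers.
points = [(1, 1), (1, 2), (2, 1), (8, 8), (9, 8), (8, 9)]
print(kmeans(points, centers=[(0, 0), (10, 10)]))
```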

&lt;h2 id="future-trends-in-predictive-analytics"&gt;Future trends in predictive analytics&lt;/h2&gt;

&lt;p&gt;The predictive analytics landscape is changing rapidly as technology advances and impacts all industries. Here are a few trends to look out for in the future:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Increased demand for real-time data&lt;/strong&gt;. To get the most accurate results, models need to be updated as frequently as possible so they aren’t out of sync with reality. This means that real-time data and systems that support it will become increasingly important.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Prescriptive analytics&lt;/strong&gt;. The term prescriptive analytics refers to the next step beyond predictive analytics. This involves taking action based on a predicted outcome before it occurs to try to influence the outcome.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Synthetic data&lt;/strong&gt;. Data is the key to making accurate predictions. The problem is that many businesses haven’t collected the data they need. 
A number of tools have been created to generate “synthetic” data, which can help get a predictive analytics system off the ground using artificial data that mimics the use case.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Further adoption of machine learning and AI&lt;/strong&gt;. While most businesses still rely on traditional methods for prediction, cutting-edge practitioners are using ML/AI to win competitions because of its accuracy.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Easier to use predictive analytics tools&lt;/strong&gt;. Currently, implementing and using predictive analytics requires specialized skills. But domain knowledge is very important for making accurate predictions.&lt;/p&gt;

    &lt;p&gt;Future tools will focus on usability and enabling non-technical users to make predictions based on their data. This will make implementation more affordable and drive more business value.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="best-practices"&gt;Best practices&lt;/h2&gt;

&lt;p&gt;Here are some helpful tips for using predictive analytics.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Have a well-defined objective&lt;/strong&gt;. Predictive analytics only generates value when it influences a decision, so start from the why before choosing a model. Without a clear goal, you risk optimizing things that make no difference. State upfront what you want to predict, where you will apply the prediction, and what action you will take based on it.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Focus more on feature engineering than model complexity&lt;/strong&gt;. Features convert raw data into signals the model can learn from, and this step often matters more to success than the choice of algorithm. Design domain-aware features such as rolling averages, lagged values, and behavioral features like frequency and recency.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Measure models based on business impact&lt;/strong&gt;. Conventional measures such as accuracy can be misleading, particularly on imbalanced problems, and a technically correct model can still be expensive or risky to deploy. Use metrics that reflect the actual trade-offs, such as precision and recall for fraud detection or mean absolute error for demand forecasting.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Start simple; add complexity only when performance demands it&lt;/strong&gt;. Complex models may be appealing, but they are harder to maintain, debug, and explain, which matters in production situations where stability and interpretability are paramount. Start with baselines and simple models, and add complexity only as it measurably improves performance.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Provide quality, time-accurate data&lt;/strong&gt;. Predictive models learn patterns from past records, and poor-quality or poorly ordered records lead to misleading results. Problems such as missing values, data leakage, or irregular timestamps can inflate model performance during testing while it fails in production.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;
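&lt;p&gt;The feature-engineering tip above can be sketched in plain Python: turn a raw series into rolling-mean and lag features alongside the value to predict. The sales numbers, window, and lag are hypothetical, and each feature uses only past values so no information leaks from the target day:&lt;/p&gt;

```python
# Hypothetical daily sales series.
sales = [10, 12, 11, 15, 14, 16, 18]

def make_features(series, window=3, lag=1):
    """Build one training row per day: a rolling mean over the previous `window`
    days, a value lagged by `lag` days, and the target value for that day."""
    rows = []
    for i in range(max(window, lag), len(series)):
        rows.append({
            "rolling_mean": sum(series[i - window : i]) / window,  # past days only
            "lag": series[i - lag],
            "target": series[i],
        })
    return rows

for row in make_features(sales):
    print(row)
```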

&lt;h2 id="common-pitfalls-to-avoid-in-predictive-analytics-projects"&gt;Common pitfalls to avoid in predictive analytics projects&lt;/h2&gt;

&lt;h4 id="overfitting-the-model"&gt;Overfitting the Model&lt;/h4&gt;

&lt;p&gt;Overfitting occurs when a model fits noise rather than general patterns, usually because of too much complexity or too little data. Such models perform well on their training data but fail to generalize to new data.&lt;/p&gt;

&lt;p&gt;For example, a deep neural network trained on a small sample of customers might explain past behavior flawlessly yet fail to predict what customers will purchase in the future, whereas a simpler model would generalize better.&lt;/p&gt;

&lt;h4 id="data-leakage"&gt;Data Leakage&lt;/h4&gt;

&lt;p&gt;Data leakage occurs when information from the future accidentally influences the model during training. This happens when features contain data that cannot be known at prediction time, producing unrealistically high test performance that collapses in practice.&lt;/p&gt;

&lt;p&gt;One such example is using an account closed date or an order completion status as an input to a churn or demand prediction model, which makes the model seem very accurate but leaves it unusable in practice.&lt;/p&gt;

&lt;h4 id="using-the-wrong-evaluation-metrics"&gt;Using the Wrong Evaluation Metrics&lt;/h4&gt;

&lt;p&gt;Accuracy alone can be a poor way to measure model performance, especially for use cases where positives are rare and costly to miss. In fraud detection, for example, a model that simply classifies all transactions as legitimate would be highly accurate (since over 99% of transactions are legitimate), yet it would still miss every case of fraud. For use cases like this, teams need to use metrics that track actual business impact when evaluating their models.&lt;/p&gt;
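&lt;p&gt;The fraud example works out numerically like this (the 0.5% fraud rate and the always-legitimate model are illustrative):&lt;/p&gt;

```python
# 1000 hypothetical transactions: 1 marks fraud, roughly 0.5% of the data.
labels = [0] * 995 + [1] * 5
predictions = [0] * 1000  # a useless model that always predicts "not fraud"

correct = sum(1 for y, p in zip(labels, predictions) if y == p)
accuracy = correct / len(labels)

caught = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 1)
recall = caught / sum(labels)  # share of actual fraud the model catches

print(f"accuracy = {accuracy:.1%}, fraud recall = {recall:.0%}")
```

&lt;p&gt;The model scores 99.5% accuracy while catching 0% of fraud, which is why recall or a cost-weighted metric is the better yardstick here.&lt;/p&gt;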

&lt;h4 id="ignoring-changes-in-data-patterns"&gt;Ignoring Changes in Data Patterns&lt;/h4&gt;

&lt;p&gt;Predictive models assume that future data will behave like past data; in reality, systems continue to evolve. This is particularly problematic in areas such as retail or finance, where seasonality, promotions, and user behavior shift frequently.&lt;/p&gt;

&lt;h2 id="faqs"&gt;FAQs&lt;/h2&gt;

&lt;div id="accordion_second"&gt;
    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-1"&gt;
            &lt;div class="message-header"&gt;
                &lt;p&gt;Predictive Analytics vs Predictive Maintenance&lt;/p&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-1" class="message-body is-collapsible is-active" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                Predictive analytics is a broad field that uses statistical algorithms, machine learning, and data to anticipate future events across many domains. It identifies patterns in historical and current data to predict future trends, behaviors, and activities. Predictive analytics is used across industries such as finance, healthcare, and marketing to inform decision-making and develop proactive strategies. Predictive maintenance, on the other hand, is a specific application of predictive analytics in maintenance and asset management. It uses predictive analytics techniques to anticipate when equipment might fail or require maintenance. By analyzing data from sensors, logs, and historical maintenance records, predictive maintenance models can forecast equipment failures before they happen. The goal is to perform maintenance in time to prevent failures, improving efficiency and reducing downtime. In short, predictive maintenance is a subset of the broader predictive analytics ecosystem.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;

    &lt;article class="message"&gt;
        &lt;a href="javascript:void(0)" data-action="collapse" data-target="collapsible-message-accordion-second-2"&gt;
            &lt;div class="message-header"&gt;
                &lt;p&gt;Traditional Statistical Models vs Machine Learning and AI Models for Predictive Analytics&lt;/p&gt;
                &lt;span class="icon"&gt;
                    &lt;i class="fas fa-angle-down" aria-hidden="true"&gt;&lt;/i&gt;
                &lt;/span&gt;
            &lt;/div&gt;&lt;/a&gt;
        &lt;div id="collapsible-message-accordion-second-2" class="message-body is-collapsible" data-parent="accordion_second" data-allow-multiple="true"&gt;
            &lt;div class="message-body-content"&gt;
                More traditional techniques, such as regression models and decision trees, have been used for decades in predictive analytics. This is due to their simplicity, lower computational requirements, and ability to show the relationship between specific variables and the impact of changing them on business outcomes. In recent years, AI/ML techniques like neural networks and gradient boosting have grown in popularity for predictive analytics use cases. The primary reason is that ML techniques can perform better with higher-dimensional data, where relationships among numerous variables are harder to define. These AI/ML models can learn from data without explicit tuning and can uncover relationships between variables that aren't obvious, resulting in higher accuracy. Some downsides of AI/ML for predictive analytics are that they tend to require more hardware for computation and are harder to interpret in terms of how they produce results, in some ways acting as black boxes.
            &lt;/div&gt;
        &lt;/div&gt;
    &lt;/article&gt;
&lt;/div&gt;
</description>
      <pubDate>Tue, 24 Feb 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/predictive-analytics-guide-2026/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/predictive-analytics-guide-2026/</guid>
      <category>Getting Started</category>
      <category>Developer</category>
      <author>Company (InfluxData)</author>
    </item>
    <item>
      <title>Getting Started with InfluxDB and Pandas: A Beginner's Guide</title>
      <description>&lt;p&gt;InfluxData prides itself on prioritizing developer happiness. A key ingredient to that formula is providing client libraries that let users interact with the database in their chosen language and library. Data analysis is the task most broadly associated with Python use cases, accounting for 58% of Python tasks, so it makes sense that &lt;a href="https://www.jetbrains.com/research/python-developers-survey-2018/"&gt;Pandas is the second most popular library for Python users&lt;/a&gt;. The InfluxDB 3 Python client library supports Pandas DataFrames, making it easy for data scientists to use InfluxDB.&lt;/p&gt;

&lt;p&gt;In this tutorial, we’ll learn how to query our InfluxDB instance and return the data as a DataFrame. We’ll also explore some data science resources included in the Client &lt;a href="https://github.com/InfluxCommunity/influxdb3-python"&gt;repo&lt;/a&gt;. To learn about how to get started with the InfluxDB 3 Python client library, please take a look at this &lt;a href="https://www.youtube.com/watch?v=tpdONTm1GC8"&gt;video&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/1d5h3Jhciel0jmTuowdh41/b40b010caa9dc6b93377d04e6a41e734/pandas-influxdb.jpg" alt="Pandas Query: Getting Started with InfluxDB and Pandas | InfluxData" /&gt;&lt;/p&gt;
&lt;p style="text-align: center;"&gt;Me eagerly consuming Pandas and InfluxDB Documentation. Photo by Sid Balachandran on Unsplash.&lt;/p&gt;

&lt;h2 id="data-science-resources"&gt;Data science resources&lt;/h2&gt;

&lt;p&gt;A variety of data science resources have been included in the InfluxDB Python client repo to help you take advantage of the Pandas functionality of the client. I encourage you to take a look at the &lt;a href="https://github.com/InfluxCommunity/influxdb3-python/tree/main/Examples"&gt;example notebooks&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="dependencies"&gt;Dependencies&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;pyarrow (automatically comes with influxdb3-python installation)&lt;/li&gt;
  &lt;li&gt;pandas&lt;/li&gt;
  &lt;li&gt;certifi (if you are using Windows)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="installations"&gt;Installations&lt;/h2&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;pip install influxdb3-python pandas certifi&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id="import-dependencies"&gt;Import Dependencies&lt;/h2&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;
import pandas as pd
import certifi  # only needed on Windows
from influxdb_client_3 import InfluxDBClient3, flight_client_options&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id="initialization"&gt;Initialization&lt;/h2&gt;

&lt;h4 id="direct-initialization"&gt;Direct Initialization&lt;/h4&gt;

&lt;p&gt;Take note that the “&lt;strong&gt;database&lt;/strong&gt;” argument in the function is the bucket name if you are using InfluxDB Cloud:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;client = InfluxDBClient3(token="your-token",
                         host="your-host",
                         database="your-database or your bucket name")&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="for-windows-users"&gt;For Windows Users&lt;/h4&gt;

&lt;p&gt;Include certifi within the “&lt;strong&gt;flight_client_options&lt;/strong&gt;” argument within the client initialization to fix certificate issues:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;with open(certifi.where(), "r") as fh:
    cert = fh.read()

client = InfluxDBClient3(token="your-token",
                         host="your-host",
                         database="your-database or your bucket name",
                         flight_client_options=flight_client_options(tls_root_certs=cert))&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id="prepare-a-pandas-dataframe"&gt;Prepare a pandas Dataframe&lt;/h2&gt;

&lt;p&gt;Let’s use simple weather data:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;
# Example weather data
df = pd.DataFrame({
    "timestamp": pd.date_range("2025-09-01", periods=4, freq="h", tz="UTC"),
    "city": ["Lagos", "Illinois", "Chicago", "Abuja"],
    "temperature": [30.5, 15, 16, 32],
    "humidity": [20, 10, 10, 19]
})

# ensure timestamp dtype is datetime64[ns] and (optionally) timezone-aware
df['timestamp'] = pd.to_datetime(df['timestamp'], utc=True)
print(df.head())&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;br /&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/4DCmuV3jkjxRNPXnwnV2PL/3208f398b107a6054eacb0847c981e38/Screenshot_2026-01-16_at_11.25.42Ã___AM.png" alt="Pandas table 1" /&gt;&lt;/p&gt;
&lt;p style="text-align: center;"&gt; Weather timestamp data.&lt;/p&gt;

&lt;h2 id="write-the-pandas-dataframe-to-influxdb"&gt;Write the pandas Dataframe to InfluxDB&lt;/h2&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;
# The target database (bucket) was set when the client was initialized
client.write(
    record=df,
    data_frame_measurement_name="weather",
    data_frame_tag_columns=["city"],  # temperature is a numeric field, not a tag
    data_frame_timestamp_column="timestamp"
)
print("DataFrame written to measurement=weather")&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Below is confirmation in your InfluxDB Cloud that the Pandas DataFrame was successfully written to the bucket.
&lt;br /&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/23svEC9jVURIZ1AZR6Yjun/acc25ac179927a0f58a8649fa1d767f6/Screenshot_2026-01-16_at_10.04.51â__AM.png" alt="Pandas DataFrame" /&gt;&lt;/p&gt;
&lt;p style="text-align: center;"&gt;   Pandas Dataframe written to InfluxDB.&lt;/p&gt;

&lt;h2 id="query-influxdb-and-return-a-pandas-dataframe"&gt;Query InfluxDB and return a Pandas DataFrame&lt;/h2&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;query = "SELECT * FROM weather"
table = client.query(query=query, language="influxql")
result_df = table.to_pandas()

print("Loading from InfluxDB:")
print(result_df.head())&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;br /&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/1KQGAVtQYLnxaKnDpq5m6n/5a4e222723596e843fd31b04e729c534/Screenshot_2026-01-16_at_9.59.00_AM.png" alt="Pandas table" /&gt;&lt;/p&gt;
&lt;p style="text-align: center;"&gt;Returning a Pandas DataFrame from InfluxDB.&lt;/p&gt;
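&lt;p&gt;Once the query result is back in a DataFrame, the usual pandas toolkit applies. As a quick sketch, here is an aggregation over rows shaped like the sample weather data above (re-created locally, since the exact column layout of the returned frame may differ):&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;import pandas as pd

# Rows shaped like the sample weather data above
result_df = pd.DataFrame({
    "time": pd.date_range("2025-09-01", periods=4, freq="h", tz="UTC"),
    "city": ["Lagos", "Illinois", "Chicago", "Abuja"],
    "temperature": [30.5, 15.0, 16.0, 32.0],
    "humidity": [20, 10, 10, 19],
})

# Aggregate: mean temperature per city
mean_temp = result_df.groupby("city")["temperature"].mean()
print(mean_temp)&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;From here you can resample, join, plot, or feed the frame into a model like any other pandas data.&lt;/p&gt;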

&lt;h2 id="start-building-with-influxdb-and-pandas"&gt;Start building with InfluxDB and Pandas&lt;/h2&gt;

&lt;p&gt;InfluxDB makes it easy to integrate with your existing &lt;a href="https://www.influxdata.com/time-series-analysis-methods/"&gt;data analysis&lt;/a&gt; tools and frameworks, such as Pandas, to get insights from your time series data. Under the hood, you get the benefits of Apache Arrow for fast data transfers into Pandas DataFrames without any performance hits. Whether you’re building dashboards, machine learning models, or just exploring your metrics, combining InfluxDB 3 with Pandas gives you the best of both worlds in terms of performance and developer experience. As always, if you run into hurdles, please share them on our &lt;a href="https://community.influxdata.com/?utm_source=website&amp;amp;utm_medium=getting_started_with_influxdb_and_pandas&amp;amp;utm_content=blog"&gt;community site&lt;/a&gt; or &lt;a href="https://influxdata.com/slack/?utm_source=website&amp;amp;utm_medium=getting_started_with_influxdb_and_pandas&amp;amp;utm_content=blog"&gt;Slack&lt;/a&gt; channel. We’d love to get your feedback and help with any problems you run into.&lt;/p&gt;
</description>
      <pubDate>Tue, 27 Jan 2026 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/getting-started-with-influxdb-and-pandas/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/getting-started-with-influxdb-and-pandas/</guid>
      <category>Developer</category>
      <category>Getting Started</category>
      <author>Community (InfluxData)</author>
    </item>
    <item>
      <title>System Tables Part 1: Introduction and Best Practices</title>
      <description>&lt;p&gt;As an InfluxDB &lt;a href="https://www.influxdata.com/products/influxdb-cloud/dedicated/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=system_tables_intro_part_one_influxdb&amp;amp;utm_content=blog"&gt;Cloud Dedicated&lt;/a&gt; or &lt;a href="https://www.influxdata.com/products/influxdb-clustered/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=system_tables_intro_part_one_influxdb&amp;amp;utm_content=blog"&gt;Clustered&lt;/a&gt; user, you may want to inspect your cluster to gain a better understanding of the size of your databases, tables, partitions, and compaction status. InfluxDB stores this essential metadata in &lt;em&gt;system tables&lt;/em&gt; (described in Section 1), which help inform decisions about cluster performance and maintenance.&lt;/p&gt;

&lt;h2 id="what-are-system-tableshttpswwwgooglecomurlqhttpsdocsinfluxdatacominfluxdbcloud-dedicatedadminquery-system-dataampsadampsourcedocsampust1728593664930076ampusgaovvaw0j2nrffoeiyz9-9xzz6gum"&gt;1. What are &lt;a href="https://www.google.com/url?q=https://docs.influxdata.com/influxdb/cloud-dedicated/admin/query-system-data&amp;amp;sa=D&amp;amp;source=docs&amp;amp;ust=1728593664930076&amp;amp;usg=AOvVaw0J2NrFfOEIYz9-9XzZ6guM"&gt;system tables&lt;/a&gt;?&lt;/h2&gt;

&lt;p&gt;System tables are “virtual” tables that present metadata for a specific database and provide insights into database storage. Each system table is scoped to a particular database and is read-only, meaning it cannot be modified.&lt;/p&gt;

&lt;p&gt;System tables are hidden by default, as high-frequency access to these tables can interfere with the ongoing operations of the database. Thus, querying system tables requires a special debug header with the request. Once the debug header is added (described in Section 2), you can query system tables using SQL, similar to any other table in InfluxDB.&lt;/p&gt;

&lt;p&gt;Here are the system tables that InfluxDB provides:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;+---------------+--------------------+-------------+------------+
| table_catalog | table_schema       | table_name  | table_type |
+---------------+--------------------+-------------+------------+
| ...           | ...                | ...         | ...        |
| public        | system             | compactor   | BASE TABLE |
| public        | system             | partitions  | BASE TABLE |
| public        | system             | queries     | BASE TABLE |
| public        | system             | tables      | BASE TABLE |
| ...           | ...                | ...         | ...        |
+---------------+--------------------+-------------+------------+&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In this blog, we will focus on three tables:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;system.tables&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;system.partitions&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;&lt;code class="language-markup"&gt;system.compactor&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;table&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;Table&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;Description&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;Schema&lt;/strong&gt;&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;code class="language-markup"&gt;system.tables&lt;/code&gt;&lt;/td&gt;
      &lt;td&gt;Contains information about tables, such as table name and &lt;a href="https://docs.influxdata.com/influxdb/cloud-dedicated/admin/custom-partitions/partition-templates/"&gt;partition template&lt;/a&gt; in the specific database.&lt;/td&gt;
      &lt;td&gt;&lt;a href="https://docs.influxdata.com/influxdb/cloud-dedicated/admin/query-system-data/#view-systemtables-schema"&gt;Link&lt;/a&gt;&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;code class="language-markup"&gt;system.partitions&lt;/code&gt;&lt;/td&gt;
      &lt;td&gt;Contains information about &lt;a href="https://docs.influxdata.com/influxdb/cloud-dedicated/admin/custom-partitions/"&gt;partitions&lt;/a&gt;, partition sizes, file count, etc.&lt;/td&gt;
      &lt;td&gt;&lt;a href="https://docs.influxdata.com/influxdb/cloud-dedicated/admin/query-system-data/#view-systempartitions-schema"&gt;Link&lt;/a&gt;&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;code class="language-markup"&gt;system.compactor&lt;/code&gt;&lt;/td&gt;
      &lt;td&gt;Contains detailed information about compacted partitions at different compaction levels.&lt;/td&gt;
      &lt;td&gt;&lt;a href="https://docs.influxdata.com/influxdb/cloud-dedicated/admin/query-system-data/#view-systemcompactor-schema"&gt;Link&lt;/a&gt;&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;&lt;strong&gt;Warning:&lt;/strong&gt; System tables are not part of InfluxDB’s stable API. They are subject to change, and compatibility is not guaranteed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Warning:&lt;/strong&gt; Querying system tables may impact write and query performance. Use them only for debugging purposes and use filters to optimize queries and minimize their impact on your cluster.&lt;/p&gt;

&lt;h2 id="accessing-system-tables"&gt;2. Accessing system tables&lt;/h2&gt;

&lt;p&gt;To access system tables, you must provide a debug header with the request. The specific commands to add this header vary depending on the client you are using.&lt;/p&gt;

&lt;h4 id="influxctlhttpsdocsinfluxdatacominfluxdbcloud-dedicatedreferencecliinfluxctlquery-cli"&gt;&lt;a href="https://docs.influxdata.com/influxdb/cloud-dedicated/reference/cli/influxctl/query/"&gt;influxctl&lt;/a&gt; CLI&lt;/h4&gt;

&lt;p&gt;For &lt;code class="language-markup"&gt;influxctl&lt;/code&gt;, pass the &lt;code class="language-markup"&gt;--enable-system-tables&lt;/code&gt; flag:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxctl query \
  --enable-system-tables \
  --database DATABASE_NAME \
  --token DATABASE_TOKEN \
  "SQL_QUERY"&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="arrow-flight-sql-or-other-client-libraries"&gt;Arrow Flight SQL or other client libraries&lt;/h4&gt;

&lt;p&gt;For &lt;a href="https://docs.influxdata.com/influxdb/cloud-dedicated/reference/internals/arrow-flightsql/"&gt;Arrow Flight SQL&lt;/a&gt; or &lt;a href="https://docs.influxdata.com/influxdb/cloud-dedicated/query-data/execute-queries/client-libraries/"&gt;other client libraries&lt;/a&gt;, such as Go and Python, set the &lt;code class="language-markup"&gt;iox-debug&lt;/code&gt; header to &lt;code class="language-markup"&gt;true&lt;/code&gt;.&lt;/p&gt;
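&lt;p&gt;As a minimal sketch with the Python client (assuming the &lt;code class="language-markup"&gt;influxdb3-python&lt;/code&gt; library and placeholder credentials; check your client&amp;rsquo;s documentation for the exact keyword it uses to forward Flight call headers):&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;from influxdb_client_3 import InfluxDBClient3

client = InfluxDBClient3(host="your-host",
                         token="DATABASE_TOKEN",
                         database="DATABASE_NAME")

# Pass the iox-debug header so system tables become visible to the query
table = client.query(
    query="SELECT * FROM system.tables",
    language="sql",
    headers=[(b"iox-debug", b"true")]
)&lt;/code&gt;&lt;/pre&gt;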

&lt;h2 id="querying-system-tables-examples"&gt;3. Querying system tables: Examples&lt;/h2&gt;

&lt;h4 id="view-the-partition-template-of-a-specific-table"&gt;1. View the partition template of a specific table&lt;/h4&gt;

&lt;pre class="line-numbers"&gt;&lt;code class="language-sql"&gt;SELECT  *  FROM  system.tables  WHERE  table_name  =  'TABLE_NAME'&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Example Result:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;+-----------------+--------------------------------------------------------+
| table_name      | partition_template                                     |
+-----------------+--------------------------------------------------------+
| your_table_name | {"parts":[{"timeFormat":"%Y-%m"},{"tagValue":"col1"}]} |
+-----------------+--------------------------------------------------------+&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If a table doesn’t include a &lt;a href="https://docs.influxdata.com/influxdb/cloud-dedicated/admin/custom-partitions/partition-templates/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=system_tables_intro_part_one_influxdb&amp;amp;utm_content=blog"&gt;partition template&lt;/a&gt; in the output of this command, the table uses the default (1 day) partition strategy and doesn’t partition by tags.&lt;/p&gt;
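&lt;p&gt;To make the template output concrete, here is a small sketch (plain Python, standard library only) that decodes the &lt;code class="language-markup"&gt;partition_template&lt;/code&gt; JSON shown in the example result above:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-python"&gt;import json

template = '{"parts":[{"timeFormat":"%Y-%m"},{"tagValue":"col1"}]}'
parts = json.loads(template)["parts"]

# Each part of the partition key is either a time format or a tag value
for part in parts:
    if "timeFormat" in part:
        print(f"partitioned by time, format: {part['timeFormat']}")
    elif "tagValue" in part:
        print(f"partitioned by tag: {part['tagValue']}")&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In this example, the table&amp;rsquo;s partitions are keyed by month and by the value of the &lt;code class="language-markup"&gt;col1&lt;/code&gt; tag.&lt;/p&gt;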

&lt;h4 id="view-the-number-of-partitions-and-total-size-per-table"&gt;2. View the number of partitions and total size per table&lt;/h4&gt;

&lt;pre class="line-numbers"&gt;&lt;code class="language-sql"&gt;SELECT
  table_name,
  COUNT(*) AS partition_count,
  SUM(total_size_mb) AS total_size_mb
FROM system.partitions
WHERE table_name IN ('foo', 'bar', 'baz')
GROUP BY table_name&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Example Result:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;+------------+-----------------+---------------+
| table_name | partition_count | total_size_mb |
+------------+-----------------+---------------+
| foo        | 1               | 2             |
| bar        | 4               | 5             |
| baz        | 10              | 23            |
+------------+-----------------+---------------+&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="view-the-size-for-different-levels-of-compacted-files"&gt;3. View the size for different levels of compacted files*&lt;/h4&gt;

&lt;pre class="line-numbers"&gt;&lt;code class="language-sql"&gt;SELECT
  table_name,
  SUM(total_l0_files) AS l0_files,
  SUM(total_l1_files) AS l1_files,
  SUM(total_l2_files) AS l2_files,
  SUM(total_l0_bytes) AS l0_bytes,
  SUM(total_l1_bytes) AS l1_bytes,
  SUM(total_l2_bytes) AS l2_bytes
FROM system.compactor
WHERE table_name IN ('foo', 'bar', 'baz')
GROUP BY table_name&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;*Compacted files are compressed Parquet files processed by the &lt;a href="https://docs.influxdata.com/influxdb/cloud-dedicated/reference/internals/storage-engine/#compactor"&gt;Compactor&lt;/a&gt; to optimize storage. These files have different &lt;a href="https://www.infoworld.com/article/2337820/compactor-a-hidden-engine-of-database-performance.html#compaction-levels"&gt;compaction levels&lt;/a&gt;: L0, L1, and L2. L0, or “Level 0”, represents newly ingested, uncompacted small files, while L2, or “Level 2”, represents compacted, non-overlapping files.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Example Result:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;+------------+----------+----------+----------+----------+----------+----------+
| table_name | l0_files | l1_files | l2_files | l0_bytes | l1_bytes | l2_bytes |
+------------+----------+----------+----------+----------+----------+----------+
| foo        | 0        | 1        | 0        | 0        | 20659    | 0        |
| bar        | 0        | 1        | 0        | 0        | 7215     | 0        |
| baz        | 0        | 1        | 0        | 0        | 10784    | 0        |
+------------+----------+----------+----------+----------+----------+----------+&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id="optimize-queries-to-reduce-cluster-impact"&gt;4. Optimize queries to reduce cluster impact&lt;/h2&gt;

&lt;p&gt;Querying system tables can degrade the performance of other common queries, especially if you are trying to view every detail in clusters with hundreds of tables, hundreds of thousands of partitions, and millions of &lt;a href="https://www.influxdata.com/glossary/apache-parquet/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=system_tables_intro_part_one_influxdb&amp;amp;utm_content=blog"&gt;Parquet files&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To reduce the performance impact, we suggest selecting information for a specific table or a particular partition by adding filters as follows:&lt;/p&gt;

&lt;pre class="line-numbers"&gt;&lt;code class="language-sql"&gt;WHERE table_name = '...'
WHERE table_name = '...' AND partition_key = '...'
WHERE table_name = '...' AND partition_id = ...
WHERE partition_id = ...&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;See the documentation on how to obtain &lt;a href="https://docs.influxdata.com/influxdb/cloud-dedicated/admin/custom-partitions/#partition-keys"&gt;partition_key&lt;/a&gt; and &lt;a href="https://docs.influxdata.com/influxdb/cloud-dedicated/admin/query-system-data/#retrieve-a-partition-id"&gt;partition_id&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="use-the-most-efficient-filters"&gt;5. Use the most efficient filters&lt;/h2&gt;

&lt;p&gt;Among the above filters, the following filters are specially optimized and significantly reduce query latency to about 20 ms, even on our largest clusters:&lt;/p&gt;

&lt;pre class="line-numbers"&gt;&lt;code class="language-sql"&gt;WHERE table_name = '...' AND partition_key = '...'
WHERE table_name = '...' AND partition_id = ...&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id="bringing-it-home"&gt;6. Bringing it home&lt;/h2&gt;

&lt;p&gt;In this first post, we introduced system tables, explained how to access them, and discussed how to optimize your queries with filters. In the next post, we will explain how we improved the performance of system tables.&lt;/p&gt;

&lt;p&gt;References:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href="https://docs.influxdata.com/influxdb/cloud-dedicated/admin/query-system-data/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=system_tables_intro_part_one_influxdb&amp;amp;utm_content=blog"&gt;Query system tables in Cloud Dedicated&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://docs.influxdata.com/influxdb/clustered/admin/query-system-data/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=system_tables_intro_part_one_influxdb&amp;amp;utm_content=blog"&gt;Query system tables in Clustered&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</description>
      <pubDate>Tue, 29 Oct 2024 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/system-tables-intro-part-one-influxdb/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/system-tables-intro-part-one-influxdb/</guid>
      <category>Developer</category>
      <category>Getting Started</category>
      <author>Chunchun Ye (InfluxData)</author>
    </item>
    <item>
      <title>Getting Started with Kafka, Telegraf, and InfluxDB v3</title>
      <description>&lt;p&gt;In the world of smart gardening, keeping track of environmental conditions like humidity, temperature, wind, and soil moisture is key to ensuring your plants thrive. But how do you bring all this data together in an efficient and scalable way? Enter the powerful trio of &lt;a href="https://kafka.apache.org/"&gt;Kafka&lt;/a&gt;, &lt;a href="https://www.influxdata.com/time-series-platform/telegraf/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=kafka_telegraf_influxdb&amp;amp;utm_content=blog"&gt;Telegraf&lt;/a&gt;, and &lt;a href="https://www.influxdata.com/products/influxdb-cloud/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=kafka_telegraf_influxdb&amp;amp;utm_content=blog"&gt;InfluxDB Cloud v3&lt;/a&gt;. In this guide, we’ll walk you through setting up a seamless pipeline that collects real-time data from garden sensors, streams it through Kafka, and stores it in InfluxDB for monitoring and analysis through the use of Telegraf. Whether you’re new to these tools or looking to expand your IoT toolkit, this example will show you how to get started. The corresponding repo for this tutorial can be found &lt;a href="https://github.com/InfluxCommunity/influxdb-kafka-demo"&gt;here&lt;/a&gt;. 
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/5147836693564571a2e14d0bfe1e6e4e/222bea488ffa88ee3c3e28ca172ecbda/unnamed.png" alt="" /&gt;&lt;/p&gt;

&lt;h2 id="requirements-and-run"&gt;Requirements and Run&lt;/h2&gt;

&lt;p&gt;Before we dive into the setup, there are a few requirements to have in place. First, you’ll need Docker and Docker Compose installed on your system, as the example relies on containerized services to simplify deployment. You should also have an InfluxDB Cloud v3 account, with your URL, token, organization, and bucket information readily available. These details will be crucial for configuring Telegraf to write garden sensor data to InfluxDB. Additionally, ensure you have Python installed, as the garden sensor gateway script relies on Python’s Kafka package to simulate and send sensor data. Finally, familiarity with the basic concepts of Kafka, Telegraf, and InfluxDB will help you follow along more easily.&lt;/p&gt;

&lt;p&gt;To run this example, follow these steps:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Clone the project and navigate to the directory.&lt;/li&gt;
  &lt;li&gt;Open the &lt;a href="https://github.com/InfluxCommunity/influxdb-kafka-demo/blob/main/resources/mytelegraf.conf"&gt;resources/mytelegraf.conf&lt;/a&gt; file and insert your InfluxDB Cloud v3 URL, token, organization, and bucket name. You can also use environment files if you desire instead.&lt;/li&gt;
  &lt;li&gt;Start the containers by changing directory to &lt;a href="https://github.com/InfluxCommunity/influxdb-kafka-demo/tree/main/resources"&gt;resources&lt;/a&gt; and running the command &lt;code class="language-markup"&gt;docker-compose up --build -d&lt;/code&gt;.&lt;/li&gt;
  &lt;li&gt;Wait approximately 30 seconds for Telegraf to initialize and begin writing metrics.&lt;/li&gt;
  &lt;li&gt;Once everything is up and running, the garden sensor gateway will start generating random humidity, temperature, wind, and soil data, sending it through Kafka, and storing it in your InfluxDB Cloud v3 instance for monitoring and analysis.&lt;/li&gt;
&lt;/ol&gt;
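
&lt;p&gt;The steps above, as shell commands (the repo URL comes from the tutorial link; your clone path and InfluxDB details are your own):&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;git clone https://github.com/InfluxCommunity/influxdb-kafka-demo.git
cd influxdb-kafka-demo

# Edit resources/mytelegraf.conf with your InfluxDB Cloud v3 details first
cd resources
docker-compose up --build -d&lt;/code&gt;&lt;/pre&gt;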

&lt;h2 id="code-explained"&gt;Code Explained&lt;/h2&gt;

&lt;p&gt;In this section, we’ll break down the example’s components and explain how each piece fits together to create a seamless data pipeline for monitoring garden sensor data using Kafka, Telegraf, and InfluxDB Cloud v3.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. app/Dockerfile&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Dockerfile in the app directory creates a containerized environment to run the garden_sensor_gateway.py script.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. app/garden_sensor_gateway.py&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;import time
import json
import random

from kafka import KafkaProducer

def random_temp_cels():
    return round(random.uniform(-10, 50), 1)

def random_humidity():
    return round(random.uniform(0, 100), 1)

def random_wind():
    return round(random.uniform(0, 10), 1)

def random_soil():
    return round(random.uniform(0, 100), 1)

def get_json_data():
    data = {}

    data["temperature"] = random_temp_cels()
    data["humidity"] = random_humidity()
    data["wind"] = random_wind()
    data["soil"] = random_soil()

    return json.dumps(data) 

def main():
    producer = KafkaProducer(bootstrap_servers=['kafka:9092'])

    for _ in range(20000):
        json_data = get_json_data()
        producer.send('garden_sensor_data', json_data.encode('utf-8'))
        print(f"Sensor data is sent: {json_data}")
        time.sleep(5)

if __name__ == "__main__":
    main()&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This Python script simulates garden sensor data and sends it to a Kafka topic. Let’s look at how it works:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Importing Libraries: The script imports necessary libraries like time, json, random, and KafkaProducer from the kafka-python package.&lt;/li&gt;
  &lt;li&gt;Data Generation Functions: Functions like random_temp_cels(), random_humidity(), random_wind(), and random_soil() generate random values for temperature, humidity, wind, and soil moisture, respectively. These values are rounded to one decimal place to simulate realistic sensor readings.&lt;/li&gt;
  &lt;li&gt;Data Formatting: The get_json_data() function collects these generated values into a dictionary and converts it into a JSON string using json.dumps(data).&lt;/li&gt;
  &lt;li&gt;Kafka Producer: The main() function initializes a Kafka producer with KafkaProducer(bootstrap_servers=[‘kafka:9092’]), pointing it to the Kafka broker running in the container. It then enters a loop where it generates sensor data, sends it to the Kafka topic garden_sensor_data, and prints the data to the console. The loop runs 20,000 times, with a 5-second delay between each iteration.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="resourcesdocker-composeyml"&gt;3. resources/docker-compose.yml&lt;/h4&gt;

&lt;p&gt;The docker-compose.yml file in the resources directory defines the services required for the project, orchestrating the containers for Kafka, Zookeeper, Telegraf, and the garden sensor gateway. Here’s what each service does:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Kafka and Zookeeper: These services set up the Kafka broker and Zookeeper, which Kafka relies on for distributed coordination. Kafka is exposed on port 9092, and Zookeeper on port 2181.&lt;/li&gt;
  &lt;li&gt;Garden Sensor Gateway: This service builds the container for the garden_sensor_gateway.py script using the Dockerfile in the app directory. It depends on Kafka to ensure that Kafka is up and healthy before the script starts running.&lt;/li&gt;
  &lt;li&gt;Telegraf: The Telegraf service is configured to consume messages from the Kafka topic garden_sensor_data and write them to InfluxDB Cloud v3. The Telegraf configuration file, mytelegraf.conf, is mounted into the container to provide the necessary settings.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="resourcesmytelegrafconf"&gt;4. resources/mytelegraf.conf&lt;/h4&gt;

&lt;pre&gt;&lt;code class="language-markup"&gt;[[inputs.kafka_consumer]]
  ## Kafka brokers.
  brokers = ["kafka:9092"]
  ## Topics to consume.
  topics = ["garden_sensor_data"]
  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "json"&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This configuration file is where Telegraf is set up to process the garden sensor data:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href="https://github.com/influxdata/telegraf/blob/master/plugins/outputs/influxdb_v2/README.md"&gt;InfluxDB Output&lt;/a&gt;: The [[outputs.influxdb_v2]] section configures Telegraf to write data to InfluxDB Cloud v3. You must replace the placeholders with your InfluxDB URL, token, organization, and bucket details.&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://github.com/influxdata/telegraf/blob/master/plugins/inputs/kafka_consumer/README.md"&gt;Kafka Consumer Input&lt;/a&gt;: The [[inputs.kafka_consumer]] section configures Telegraf to subscribe to the garden_sensor_data topic on Kafka. It consumes the JSON-formatted sensor data, which is sent to InfluxDB for storage and analysis.&lt;/li&gt;
&lt;/ul&gt;
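
&lt;p&gt;For reference, an &lt;code class="language-markup"&gt;[[outputs.influxdb_v2]]&lt;/code&gt; section typically looks like the sketch below; the URL, organization, and bucket values are placeholders to replace with your own, and the token is read from an environment variable here:&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-markup"&gt;[[outputs.influxdb_v2]]
  ## InfluxDB Cloud v3 URL.
  urls = ["https://your-cloud-host"]
  ## Token for authentication.
  token = "${INFLUX_TOKEN}"
  ## Organization and destination bucket.
  organization = "your-org"
  bucket = "your-bucket"&lt;/code&gt;&lt;/pre&gt;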

&lt;p&gt;Together, these components create a robust pipeline where garden sensor data is generated, sent to Kafka, processed by Telegraf, and stored in InfluxDB Cloud v3, allowing you to monitor your garden’s environment in real-time.&lt;/p&gt;

&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;This blog post describes how to start using InfluxDB, Kafka, and Telegraf: a Python script generates garden data and sends it to a Kafka topic; Telegraf reads the data from the Kafka topic and writes it to InfluxDB. As always, get started with &lt;a href="https://cloud2.influxdata.com/signup?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=kafka_telegraf_influxdb&amp;amp;utm_content=blog"&gt;InfluxDB v3 Cloud here&lt;/a&gt;. In the next post, we’ll cover how to run the project, dive into the architecture and logic, and discuss some of the pros and cons of the selected stack. If you need help, please contact us on our &lt;a href="https://community.influxdata.com/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=kafka_telegraf_influxdb&amp;amp;utm_content=blog"&gt;community site&lt;/a&gt; or &lt;a href="https://influxcommunity.slack.com/join/shared_invite/zt-2pj7flo05-mF5J3rQZe_Y_Ws5FuJO9FQ#/shared-invite/email/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=kafka_telegraf_influxdb&amp;amp;utm_content=blog"&gt;Slack channel&lt;/a&gt;. If you are also working on a data pipelining project with InfluxDB, I’d love to hear from you!&lt;/p&gt;
</description>
      <pubDate>Tue, 15 Oct 2024 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/getting-started-kafka-telegraf-influxdb-v3-guide/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/getting-started-kafka-telegraf-influxdb-v3-guide/</guid>
      <category>Developer</category>
      <category>Getting Started</category>
      <author>Anais Dotis-Georgiou (InfluxData)</author>
    </item>
    <item>
      <title>An Introductory Guide to Cloud Security for IIoT</title>
      <description>&lt;p&gt;The state of industries has come a long way since the Industrial Revolution with new technologies such as smart devices, the internet, and the cloud. The Industrial Internet of Things (IIoT) is a network of industrial components that share and process data to gain insights. But as IIoT involves sensitive data and life-critical operations, this also comes with various IIoT cloud security challenges. Therefore, it is important to strengthen security.&lt;/p&gt;

&lt;p&gt;In this post, we’ll look into the benefits and challenges of IIoT and understand the common threats IIoT faces. Finally, we’ll discuss IIoT cloud security best practices and the tools you need to secure your IIoT infrastructure.&lt;/p&gt;

&lt;h2 id="benefits-of-iiot"&gt;&lt;strong&gt;Benefits of IIoT&lt;/strong&gt;&lt;/h2&gt;

&lt;h4 id="minimized-downtime"&gt;&lt;strong&gt;Minimized Downtime&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;Organizations can get &lt;a href="https://www.influxdata.com/blog/powering-real-time-data-processing-influxdb-aws-kinesis/?utm_source=vendor&amp;amp;utm_medium=referral&amp;amp;utm_campaign=2024-07_spnsr-ctn_cloud-security-iiot-influx-ai_iiot-world"&gt;real-time data&lt;/a&gt; on machine performance and environmental conditions and plan for maintenance. This proactive measure minimizes &lt;a href="https://www.influxdata.com/blog/minimize-downtime-in-production-with-cloud-based-distributed-load-testing/?utm_source=vendor&amp;amp;utm_medium=referral&amp;amp;utm_campaign=2024-07_spnsr-ctn_cloud-security-iiot-influx-ai_iiot-world"&gt;downtime&lt;/a&gt; and enhances operational efficiency.&lt;/p&gt;

&lt;h4 id="data-driven-decision-making"&gt;&lt;strong&gt;Data-Driven Decision-Making&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;Advanced analytics provide actionable insights for data-driven decisions, and real-time analytics help organizations respond swiftly to changing conditions and stay competitive.&lt;/p&gt;

&lt;h4 id="safety-and-compliance"&gt;&lt;strong&gt;Safety and Compliance&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;IIoT detects hazardous conditions and triggers automatic shutdowns or alerts to protect workers. It also helps organizations comply with regulatory requirements.&lt;/p&gt;

&lt;h4 id="cost-reduction-and-resource-optimization"&gt;&lt;strong&gt;Cost Reduction and Resource Optimization&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;IIoT provides insights to create strategies for more efficient use of resources, leading to cost reductions. Stakeholders can identify demand patterns and track inventory to increase resource optimization.&lt;/p&gt;

&lt;h4 id="enhanced-customer-satisfaction"&gt;&lt;strong&gt;Enhanced Customer Satisfaction&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;IIoT plays a crucial role in enhancing customer satisfaction as it leads to high-quality products and on-time delivery.&lt;/p&gt;

&lt;h4 id="innovation-and-new-business-models"&gt;&lt;strong&gt;Innovation and New Business Models&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;IIoT opens doors for innovation and the development of new business models as it helps organizations identify what upgrades will be helpful and how they can make better profits.&lt;/p&gt;

&lt;p&gt;The benefits of IIoT make it so attractive that one might want to jump right in and implement it. While IIoT has a lot of benefits, it also comes with some challenges.&lt;/p&gt;

&lt;h2 id="key-challenges-in-iiot"&gt;&lt;strong&gt;Key challenges in IIoT&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;If the challenges of IIoT are not addressed, it might turn out to be a bane more than a boon. So, let’s look into some common challenges in IIoT.
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/737422a45a8541d5b29734cd0aec5f22/04b4336b574016c728005c134976d659/unnamed.jpg" alt="" /&gt;
&lt;br /&gt;&lt;/p&gt;

&lt;h4 id="security-vulnerabilities-and-threats"&gt;&lt;strong&gt;Security Vulnerabilities and Threats&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;A significant challenge in IIoT is securing the network, connected devices, and data as each component represents a potential entry point for cyber attackers.&lt;/p&gt;

&lt;h4 id="data-privacy-and-compliance"&gt;&lt;strong&gt;Data Privacy and Compliance&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;IIoT devices generate sensitive data that are subject to strict regulatory requirements. Ensuring compliance with these regulations is a complex task.&lt;/p&gt;

&lt;h4 id="scalability-and-network-latency"&gt;&lt;strong&gt;Scalability and Network Latency&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;For large-scale industrial operations and fast-growing units, managing the increased volume of data, maintaining network performance, and ensuring real-time processing become challenging.&lt;/p&gt;

&lt;h4 id="skills-gap"&gt;&lt;strong&gt;Skills Gap&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;Successful deployment and management of IIoT solutions require skills in both IT and operational technology (OT), which can be challenging to find.&lt;/p&gt;

&lt;h4 id="legacy-systems"&gt;&lt;strong&gt;Legacy Systems&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;Many industrial facilities still rely on legacy systems, and integrating these systems with IIoT devices is a challenge.&lt;/p&gt;

&lt;p&gt;These key challenges can make or break an organization’s IIoT initiative, so it’s important to understand how to deal with them. Since the focus of this post is IIoT cloud security, let’s turn to that now.&lt;/p&gt;

&lt;h2 id="common-attack-vectors-and-types-of-threats"&gt;&lt;strong&gt;Common attack vectors and types of threats&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;IIoT is an attractive target because of the impact an attack can have on an organization and how profitable that impact can be for attackers, who can demand large ransoms and gain leverage over their victims. Poor security practices can therefore lead to catastrophic damage. Let’s look at some common attack vectors and types of threats that IIoT cloud security faces.
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/51add94504204e0298b324d7e538464e/d1e14faa981454acff3fca9249ebd4e4/unnamed.png" alt="" /&gt;
&lt;br /&gt;&lt;/p&gt;

&lt;h4 id="device-compromise-and-malware"&gt;&lt;strong&gt;Device Compromise and Malware&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;Attackers can exploit vulnerabilities in device firmware or software to install malware, which can then be used to steal data, disrupt operations, or infiltrate cloud systems.&lt;/p&gt;

&lt;h4 id="man-in-the-middle-mitmhttpswwwimpervacomlearnapplication-securityman-in-the-middle-attack-mitm"&gt;&lt;strong&gt;Man-in-the-Middle (&lt;a href="https://www.imperva.com/learn/application-security/man-in-the-middle-attack-mitm/"&gt;MITM&lt;/a&gt;)&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;Attackers can steal or manipulate sensitive data by intercepting the communication between devices and the cloud. Manipulation of critical data or commands can cause operational disruptions and unreliable data insights.&lt;/p&gt;

&lt;h4 id="denial-of-service-dos"&gt;&lt;strong&gt;Denial of Service (DoS)&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://www.cloudflare.com/learning/ddos/glossary/denial-of-service/"&gt;DoS&lt;/a&gt; and &lt;a href="https://www.fortinet.com/resources/cyberglossary/ddos-attack#:~:text=DDoS%20Attack%20means%20%22Distributed%20Denial,connected%20online%20services%20and%20sites."&gt;DDoS&lt;/a&gt; attacks aim to overwhelm a system, network, or service, making it unavailable and impacting the operations.&lt;/p&gt;

&lt;h4 id="phishing-and-social-engineering"&gt;&lt;strong&gt;Phishing and Social Engineering&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;Attackers try to trick employees into giving out sensitive information that they can use to breach the security of the IIoT infrastructure.&lt;/p&gt;

&lt;h4 id="insider-threats"&gt;&lt;strong&gt;Insider Threats&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://www.cisa.gov/topics/physical-security/insider-threat-mitigation/defining-insider-threats"&gt;Insider threats&lt;/a&gt; involve intentional or unintentional actions that cause harm by individuals having legitimate access to the organization’s IIoT infrastructure.&lt;/p&gt;

&lt;h4 id="supply-chain-attacks"&gt;&lt;strong&gt;Supply Chain Attacks&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://www.crowdstrike.com/cybersecurity-101/cyberattacks/supply-chain-attacks/"&gt;Supply chain attacks&lt;/a&gt; can involve compromising software updates, hardware components, or third-party service providers.&lt;/p&gt;

&lt;p&gt;Now that we’ve looked into the common security issues, let’s look into the best practices for IIoT &lt;a href="https://www.influxdata.com/blog/monitoring-cloud-environments-apps-influxdb/?utm_source=vendor&amp;amp;utm_medium=referral&amp;amp;utm_campaign=2024-07_spnsr-ctn_cloud-security-iiot-influx-ai_iiot-world"&gt;cloud security&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="iiot-cloud-security-best-practices"&gt;&lt;strong&gt;IIoT cloud security best practices&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;Every technology has security concerns, and following best practices can help mitigate threats and reduce risk. While every organization is different and needs a tailored security posture, here are some IIoT cloud security best practices to start with.&lt;/p&gt;

&lt;h4 id="authentication-and-access-control"&gt;&lt;strong&gt;Authentication and Access Control&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;Enforce the use of strong passwords and multi-factor authentication (MFA), and regularly review and update access using the &lt;a href="https://www.paloaltonetworks.com/cyberpedia/what-is-the-principle-of-least-privilege"&gt;principle of least privilege&lt;/a&gt;.&lt;/p&gt;
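To make the least-privilege idea concrete, here is a minimal, hypothetical sketch in Python of a deny-by-default permission check; the role and permission names are made up for illustration and are not from any particular IAM product.

```python
# Minimal sketch of least-privilege access checks for an IIoT cloud API.
# Role and permission names are hypothetical examples.
ROLE_PERMISSIONS = {
    "viewer": {"read:telemetry"},
    "operator": {"read:telemetry", "write:commands"},
    "admin": {"read:telemetry", "write:commands", "manage:devices"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Grant only permissions explicitly assigned to the role (deny by default)."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```

The key design choice is the default: an unknown role or unassigned permission is denied rather than allowed, which is what regular access reviews should preserve.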

&lt;h4 id="encrypt-data"&gt;&lt;strong&gt;Encrypt Data&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;Encrypt data at rest and in transit to prevent unauthorized access to sensitive data. Use strong and industry-standard encryption protocols such as &lt;a href="https://www.progress.com/blogs/use-aes-256-encryption-secure-data"&gt;AES-256&lt;/a&gt; and implement the latest &lt;a href="https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/"&gt;TLS&lt;/a&gt;.&lt;/p&gt;

&lt;h4 id="regularly-update-and-patch"&gt;&lt;strong&gt;Regularly Update and Patch&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;Keep track of all components, check regularly for updates, and apply them. Regular updates help protect against exploitation of known security weaknesses.&lt;/p&gt;

&lt;h4 id="network-segmentation"&gt;&lt;strong&gt;Network Segmentation&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://www.fortinet.com/resources/cyberglossary/network-segmentation#:~:text=Network%20segmentation%20is%20an%20architecture,that%20flows%20into%20their%20systems."&gt;Network segmentation&lt;/a&gt; helps contain a potential breach by limiting the spread. Segment your network based on the criticality of components and how easy they are to reach by an attacker. Enforce strong firewall and access control rules.&lt;/p&gt;

&lt;h4 id="continuous-monitoring-and-incident-response-plan"&gt;&lt;strong&gt;Continuous Monitoring and Incident Response Plan&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;Deploy IIoT cloud security &lt;a href="https://www.influxdata.com/blog/network-monitoring-tools-explained/?utm_source=vendor&amp;amp;utm_medium=referral&amp;amp;utm_campaign=2024-07_spnsr-ctn_cloud-security-iiot-influx-ai_iiot-world"&gt;monitoring tools&lt;/a&gt; to track system activity and detect suspicious, malicious, or anomalous behavior, and create detailed SOPs for active response so an attack can be remediated before it causes serious harm. Also create an &lt;a href="https://www.cynet.com/incident-response/"&gt;incident response&lt;/a&gt; plan covering roles, responsibilities, and procedures, and update it regularly.&lt;/p&gt;
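As one hedged example of what “detect anomalous activity” can mean in practice, here is a simple z-score check that a monitoring pipeline might run over a device metric such as messages per minute; the metric and threshold are illustrative, and real monitoring and SIEM tooling is far more sophisticated.

```python
import statistics

# Flag readings that deviate sharply from a device's recent history.
def is_anomalous(history: list[float], latest: float, z_threshold: float = 3.0) -> bool:
    """Return True if `latest` is more than z_threshold standard deviations from the mean."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest != mean  # any change from a perfectly flat baseline is suspect
    return abs(latest - mean) / stdev > z_threshold
```

A flagged reading would then feed the incident-response SOPs described above: alert, triage, and remediate.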

&lt;h4 id="security-awareness"&gt;&lt;strong&gt;Security Awareness&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;Hold regular security training and awareness programs for all employees. Encourage reporting of suspicious activities and provide clear guidelines on how to address potential security incidents.&lt;/p&gt;

&lt;p&gt;These best practices are a starting point. You should also evaluate your own IIoT infrastructure and implement IIoT cloud security measures accordingly. Taking a proactive security approach lets you reap the benefits of IIoT while mitigating the associated risks.&lt;/p&gt;

&lt;h2 id="useful-tools-for-iiot-cloud-security"&gt;&lt;strong&gt;Useful tools for IIoT cloud security&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;To implement a strong security posture, it is important to choose the right tools. So, let’s look into some useful tools for IIoT cloud security.&lt;/p&gt;

&lt;h4 id="endpoint-security-tools"&gt;&lt;strong&gt;Endpoint Security Tools&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/blog/influxdb-endpoint-security-state-template/?utm_source=vendor&amp;amp;utm_medium=referral&amp;amp;utm_campaign=2024-07_spnsr-ctn_cloud-security-iiot-influx-ai_iiot-world"&gt;Endpoint security&lt;/a&gt; solutions are critical for protecting IIoT devices. Deploy endpoint security solutions across all IIoT devices to ensure continuous protection and monitoring.&lt;/p&gt;

&lt;h4 id="network-security-tools"&gt;&lt;strong&gt;Network Security Tools&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;Network security tools help secure communications between IIoT devices and cloud systems, preventing unauthorized access and data breaches.&lt;/p&gt;

&lt;h4 id="identity-and-access-management-iam-solutions"&gt;&lt;strong&gt;Identity and Access Management (IAM) Solutions&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://www.cloudflare.com/learning/access-management/what-is-identity-and-access-management/"&gt;IAM&lt;/a&gt; helps enforce strong authentication protocols and access controls. These solutions help ensure that only authorized personnel can access sensitive IIoT data and cloud resources.&lt;/p&gt;

&lt;h4 id="continuous-monitoring-and-siem-solutions"&gt;&lt;strong&gt;Continuous Monitoring and SIEM Solutions&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;Continuous monitoring and Security Information and Event Management (&lt;a href="https://www.influxdata.com/blog/5-best-SIEM-tools-influxdb/?utm_source=vendor&amp;amp;utm_medium=referral&amp;amp;utm_campaign=2024-07_spnsr-ctn_cloud-security-iiot-influx-ai_iiot-world"&gt;SIEM&lt;/a&gt;) solutions provide real-time visibility into IIoT cloud security, ensuring rapid detection and response to security incidents.&lt;/p&gt;

&lt;h4 id="vulnerability-management-tools"&gt;&lt;strong&gt;Vulnerability Management Tools&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://expertinsights.com/insights/the-top-vulnerability-management-solutions/"&gt;Vulnerability management tools&lt;/a&gt; help identify and remediate security weaknesses within IIoT devices and cloud systems. Regularly scan IIoT devices and cloud systems for vulnerabilities, and promptly address identified issues.&lt;/p&gt;

&lt;h4 id="cloud-specific-security-solutions"&gt;&lt;strong&gt;Cloud-Specific Security Solutions&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;Cloud-specific security &lt;a href="https://www.spiceworks.com/tech/cloud/articles/top-10-cloud-security-software/"&gt;solutions&lt;/a&gt; provide comprehensive security controls, visibility, and compliance management tailored for cloud environments. Most cloud service providers have several security features and solutions and also allow you to integrate additional tools to enhance IIoT cloud security.
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/8e6f5a3c39fb44df867599f4af6f517a/47043892f3ea2676c8de42f98e3309c7/unnamed.jpg" alt="" /&gt;
&lt;br /&gt;&lt;/p&gt;

&lt;h4 id="secure-data-storage-solutions"&gt;&lt;strong&gt;Secure Data Storage Solutions&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;Data is the core of IIoT, and you need a tool that can handle IIoT-specific data securely. &lt;a href="https://www.influxdata.com/?utm_source=vendor&amp;amp;utm_medium=referral&amp;amp;utm_campaign=2024-07_spnsr-ctn_cloud-security-iiot-influx-ai_iiot-world"&gt;InfluxDB&lt;/a&gt; is one such solution. It provides real-time insights from any time series data with a single, purpose-built database. InfluxDB focuses on security with its &lt;a href="https://www.influxdata.com/security/?utm_source=vendor&amp;amp;utm_medium=referral&amp;amp;utm_campaign=2024-07_spnsr-ctn_cloud-security-iiot-influx-ai_iiot-world"&gt;security program&lt;/a&gt; including:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;regular third-party penetration tests&lt;/li&gt;
  &lt;li&gt;continuous security management and monitoring&lt;/li&gt;
  &lt;li&gt;industry-standard encryption&lt;/li&gt;
&lt;/ul&gt;
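To illustrate what storing IIoT readings in a time series database like InfluxDB involves at the wire level, here is a sketch that formats a sensor reading as InfluxDB line protocol (measurement, tags, fields, timestamp). The measurement and tag names are hypothetical, and a production setup would send this over TLS with an API token rather than build strings by hand.

```python
import time

# Sketch: format a sensor reading as InfluxDB line protocol,
# i.e. "measurement,tag=value field=value timestamp". Names are illustrative.
def to_line_protocol(measurement: str, tags: dict, fields: dict, ts_ns: int) -> str:
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

line = to_line_protocol(
    "machine_vibration",
    {"plant": "plant-a", "device": "press-7"},
    {"rms_mm_s": 4.2},
    time.time_ns(),
)
```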

&lt;p&gt;&lt;a href="https://www.influxdata.com/get-influxdb-start/?utm_source=vendor&amp;amp;utm_medium=referral&amp;amp;utm_campaign=2024-07_spnsr-ctn_cloud-security-iiot-influx-ai_iiot-world"&gt;Try InfluxDB for free&lt;/a&gt;&lt;em&gt;&lt;u&gt;&lt;/u&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2 id="conclusion"&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;IIoT is a powerful technology for industry, and it’s our responsibility to make the best use of it. By connecting industrial devices, machines, and systems, IIoT enables real-time data collection, analysis, and automation, but it also brings security challenges. In this post, we discussed some IIoT cloud security challenges and best practices for securing IIoT infrastructures. These best practices are a baseline, and organizations must layer additional security measures on top of them. The path to embracing the true power of IIoT runs through proactive security!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This post was written by Omkar Hiremath. &lt;a href="https://www.linkedin.com/in/omkar-hiremath-6a1729159/?originalSubdomain=in"&gt;Omkar&lt;/a&gt; is a cybersecurity team lead who is enthusiastic about cybersecurity, ethical hacking, and Python. He is keenly interested in bug bounty hunting and vulnerability analysis.&lt;/em&gt;&lt;/p&gt;
</description>
      <pubDate>Thu, 12 Sep 2024 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/iiot-cloud-security-introductory-guide-influxdb/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/iiot-cloud-security-introductory-guide-influxdb/</guid>
      <category>Developer</category>
      <category>Getting Started</category>
      <author>Community (InfluxData)</author>
    </item>
    <item>
      <title>Home Assistant Tutorial: A Beginner’s Guide to Automation</title>
      <description>&lt;p&gt;In this post, we’ll be taking a closer look at Home Assistant, an open source platform for connecting your smart devices at home. We’ll walk through every important section of Home Assistant: dashboards, &lt;a href="https://www.influxdata.com/blog/how-integrate-gafana-home-assistant/"&gt;integrations&lt;/a&gt;, add-ons, devices and entities, automation, scripts, and scenes. In addition, we’ll be walking through how to set up your Home Assistant and create &lt;a href="https://www.influxdata.com/blog/the-smart-way-to-collect-iot-data-for-home-automation/"&gt;automation&lt;/a&gt; using Home Assistant’s graphical user interface.&lt;/p&gt;

&lt;h2 id="how-does-home-assistant-work"&gt;How Does Home Assistant Work?&lt;/h2&gt;

&lt;p&gt;Home Assistant is an open source smart home platform that allows you to connect your smart home devices like your TV, fan, cameras, thermostats, lights, and sensors. As a user, you can build intricate automation using Home Assistant’s user-friendly, unified web-based user interface.&lt;/p&gt;

&lt;p&gt;With Home Assistant, you don’t need to be a programmer or a computer scientist to get a device working with your smart home. You can build and test automations without writing a single line of code. Home Assistant can also be as complex as you want it to be, depending on how much time you’re willing to put into it.&lt;/p&gt;

&lt;h2 id="important-parts-of-home-assistant"&gt;Important Parts of Home Assistant&lt;/h2&gt;

&lt;p&gt;Home Assistant has some very important features, which include the following:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Dashboard&lt;/li&gt;
  &lt;li&gt;Integrations&lt;/li&gt;
  &lt;li&gt;Add-ons&lt;/li&gt;
  &lt;li&gt;Devices and entities&lt;/li&gt;
  &lt;li&gt;Automation&lt;/li&gt;
  &lt;li&gt;Scripts&lt;/li&gt;
  &lt;li&gt;Scenes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now let’s take a closer look at each of them.&lt;/p&gt;

&lt;h3 id="home-assistant-dashboard"&gt;Home Assistant dashboard&lt;/h3&gt;

&lt;p&gt;The first thing you see once you’ve installed and logged into Home Assistant is the dashboard, a page that displays the information available in Home Assistant. The more devices and services you add, the more information is available for display. Whenever devices or services are added, Home Assistant automatically adds their information to a default dashboard, but if you’d like, you can take control and display exactly what you want.&lt;/p&gt;

&lt;p&gt;Dashboards are composed of cards that display information about the devices and services in Home Assistant. These cards can be added via the user interface or by editing YAML. Even though Home Assistant is largely configurable through the user interface, you’ll still run into YAML, so it’s worth understanding it properly.&lt;/p&gt;

&lt;h3 id="home-assistant-integrations"&gt;Home Assistant integrations&lt;/h3&gt;

&lt;p&gt;Integrations are additional software installed within Home Assistant that allow it to connect to different platforms, bringing in data and devices. When integrations are installed, their data is represented in Home Assistant as devices and entities. There are currently over 1,000 built-in integrations in Home Assistant, fully supported by the Home Assistant community. Built-in integrations are directly supported by Home Assistant and are often automatically discovered on your Wi-Fi network. For custom integrations, there is also the Home Assistant Community Store (HACS), which can itself be installed as an integration and gives you access to thousands of custom integrations.&lt;/p&gt;

&lt;h3 id="home-assistant-add-ons"&gt;Home Assistant add-ons&lt;/h3&gt;

&lt;p&gt;Sometimes integrations and add-ons are confused with each other, but they are, in fact, different from one another. Depending on your installation type, you may or may not have the ability to install add-ons. Add-ons are applications that run alongside Home Assistant on the same hardware, and they can be easily and quickly installed, configured, and run within Home Assistant.&lt;/p&gt;

&lt;p&gt;While integrations connect Home Assistant to external applications, devices, and services, add-ons provide additional functionality that runs alongside Home Assistant itself.&lt;/p&gt;

&lt;p&gt;An example of an add-on is the Z-Wave JS server, which runs alongside Home Assistant to act as a server for all Z-Wave devices. Those devices are then connected to Home Assistant using the Z-Wave JS integration. Other examples are file editors, which can allow you to edit your Home Assistant configuration files right within Home Assistant.&lt;/p&gt;

&lt;h3 id="home-assistant-devices-and-entities"&gt;Home Assistant devices and entities&lt;/h3&gt;

&lt;p&gt;Entities in Home Assistant represent logical groupings of functions within the system. A device, on the other hand, signifies a physical device connected to Home Assistant through an &lt;a href="https://www.influxdata.com/products/integrations/"&gt;integration&lt;/a&gt;. However, entities can be added independently without grouping them into devices, and they encompass not only devices but also automations, scripts, and scenes. For instance, the Philips Hue motion sensor in Home Assistant comprises temperature, illuminance, and occupancy sensors and a motion detector, each represented as a separate entity within the device grouping.&lt;/p&gt;

&lt;p&gt;Entities can have different types, such as binary sensors and motion sensors, and exhibit various states. For example, a motion sensor may have an “ON” state when motion is detected and an “OFF” state when no motion is detected. Numeric states are also possible, such as a temperature sensor with a state of 70°F. Additionally, entities can possess attributes that provide further information about their state.&lt;/p&gt;

&lt;h3 id="home-assistant-automation"&gt;Home Assistant automation&lt;/h3&gt;

&lt;p&gt;Automation is the process of making certain actions happen automatically. An automation in Home Assistant typically includes the following:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Triggers&lt;/strong&gt;: These are the events that prompt the actions to happen.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Conditions&lt;/strong&gt;: These are rules dictating whether the actions should happen.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Actions&lt;/strong&gt;: These are what Home Assistant does when the automation is triggered.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;An example of simple automation is as follows.&lt;/p&gt;

&lt;p&gt;A motion sensor device like the Philips Hue sensor has a motion sensor entity. When that entity turns on (meaning there is motion), it triggers an automation that turns the light on. The automation has a condition that it has to be after 8 pm. So the light only comes on at night.&lt;/p&gt;
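Written out as YAML, which the automation editor generates behind the scenes, the example above might look roughly like this; the entity IDs and time are placeholders, not real device names.

```yaml
# Hypothetical automation: turn a light on when motion is detected after 8 pm.
automation:
  - alias: "Motion light at night"
    trigger:
      - platform: state
        entity_id: binary_sensor.hue_motion   # placeholder entity ID
        to: "on"
    condition:
      - condition: time
        after: "20:00:00"
    action:
      - service: light.turn_on
        target:
          entity_id: light.hallway            # placeholder entity ID
```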

&lt;h3 id="home-assistant-scripts"&gt;Home Assistant scripts&lt;/h3&gt;

&lt;p&gt;Scripts are similar to automations in that they run multiple actions in a row, step by step. The main difference is that scripts don’t have triggers, so they can’t run automatically; they have to be called by an automation. For instance, if you have a series of actions you want to run in multiple automations, you can create a single script, and each automation can call it. That way, if an action needs to change, you only change one script instead of several automations. Within a script, delays can be added between steps if needed. You can also trigger scripts from your Home Assistant dashboard with cards.&lt;/p&gt;

&lt;p&gt;An example of a script is a projector setup. This works well as a script because there may be two different ways you want to trigger it: a dashboard button and a smart physical button that anyone can press. When you press either start button, it triggers an automation that calls the projector setup script. The script lowers the projector’s screen, turns on the projector, and selects HDMI as the input.&lt;/p&gt;

&lt;h3 id="home-assistant-scenes"&gt;Home Assistant scenes&lt;/h3&gt;

&lt;p&gt;Scenes are saved states of entities that can be applied by Home Assistant automations or scripts. For example, you can set up two scenes, one for the morning and one for the evening, each with specific brightness and light colors, and then run an automation that switches between them based on the time of day. It may seem like an automation or a script could do what a scene does, making scenes redundant. However, automations and scripts are for actions, while scenes are for setting the state of entities. Automations and scripts follow a sequential flow and can be interrupted if triggers are not met, whereas a scene’s states are applied simultaneously.&lt;/p&gt;

&lt;h2 id="setting-up-home-assistant"&gt;Setting Up Home Assistant&lt;/h2&gt;

&lt;p&gt;There are two steps in setting up Home Assistant:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Installation&lt;/li&gt;
  &lt;li&gt;Initial setup wizard&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id="installation"&gt;Installation&lt;/h3&gt;

&lt;p&gt;The first step in setting up Home Assistant is installing it. There are different ways of getting Home Assistant up and running on your device, including the following:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href="https://docs.influxdata.com/influxdb/cloud/monitor-alert/templates/infrastructure/raspberry-pi/"&gt;Raspberry Pi&lt;/a&gt;&lt;u&gt;&lt;/u&gt;&lt;/li&gt;
  &lt;li&gt;Virtual machine&lt;/li&gt;
  &lt;li&gt;Docker&lt;/li&gt;
  &lt;li&gt;Windows&lt;/li&gt;
  &lt;li&gt;macOS&lt;/li&gt;
  &lt;li&gt;Linux&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To get Home Assistant up and running, choose an installation option on the official Home Assistant website, &lt;a href="http://www.home-assistant.io"&gt;www.home-assistant.io&lt;/a&gt;, and follow the instructions provided for that option.&lt;/p&gt;

&lt;h3 id="initial-setup-wizard"&gt;Initial setup wizard&lt;/h3&gt;

&lt;p&gt;In the initial setup wizard, fill in the blank fields (name, username, password) with your details to create an account, then click “CREATE ACCOUNT.” You can see the details below.
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/005c8657619c4c31842f99395c2774aa/71b1e8c24cd21b31016a39b95d6f3bea/Untitled.jpg" alt="Home Assistant Initial Setup Wizard" /&gt;
&lt;br /&gt;&lt;/p&gt;

&lt;p&gt;The next page shows fields and options for setting a name for your home, your location, and your unit system. You can set your preferred location manually using the map, or simply click the “DETECT” option. You’ll then be asked to set your time zone, currency, and unit system based on your location. Afterward, click “NEXT.” You can see the details in the image below.
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/3340a9a6f8714b6490d8fc0a14f94bab/89c8b89410bee6ab773d3db5a5630ecc/Untitled.jpg" alt="Set Name, Location, and Unit System for your Home Assistant" /&gt;
&lt;br /&gt;&lt;/p&gt;

&lt;p&gt;The next screen asks you if you would like to share any data with Home Assistant. This is entirely up to you to decide. Afterward, click “NEXT.” You can see the details below.
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/2ca9f775a4e34cfdb85d1ad0421d155b/46c5fc04e3d0cde14d4bf0c31dc0bc17/Untitled.jpg" alt="Share Data With Home Assistant" /&gt;
&lt;br /&gt;&lt;/p&gt;

&lt;p&gt;Now the next screen shows a bunch of devices that we can import into Home Assistant. You have the option to set these up now or later on. Lastly, you click on “FINISH.” You can see the details in the image below.
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/604bb552c1634113be0a58d6cdf4651f/1af8291562d0c0834531b14699e9b011/Untitled.jpg" alt="Home Assistant Devices and Services" /&gt;
&lt;br /&gt;&lt;/p&gt;

&lt;p&gt;Bravo! We can now see our Home Assistant Dashboard!
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/8a555764870b4ad19e6db7e38c2a626c/b11541947ecce432ec6cde5fff870b17/Untitled.jpg" alt="Home Assistant GUI" /&gt;
&lt;br /&gt;&lt;/p&gt;

&lt;h2 id="setting-up-automation-in-home-assistant"&gt;Setting Up Automation in Home Assistant&lt;/h2&gt;

&lt;p&gt;There are two ways to configure automation in Home Assistant. You can use the built-in automation editor right in the Home Assistant user interface, or you can manually write it yourself in a YAML script. In this post, we’ll be making use of the built-in automation editor within Home Assistant.&lt;/p&gt;

&lt;p&gt;The following steps show you how to build a basic automation that sends a simple notification when the front door is open.&lt;/p&gt;

&lt;h3 id="automation-and-scenes"&gt;Automation and scenes&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;You’ll find the automation editor under &lt;strong&gt;Settings&lt;/strong&gt;. Open &lt;strong&gt;Settings&lt;/strong&gt;, then click &lt;strong&gt;Automations &amp;amp; Scenes&lt;/strong&gt;.&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/dd48160fe62140e59760d97b149e6873/2f282af91dfda6915b1cbfbbf94aaa31/Untitled.jpg" alt="" /&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Click &lt;strong&gt;Create Automation&lt;/strong&gt; in the lower right&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/eda174433aee4627bef06622b1d44c3e/cd508f5bf7b739371ed237480205cc2a/Untitled.jpg" alt="" /&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Now you have the option of using a blueprint or starting with an empty automation. Select &lt;strong&gt;Start with empty automation&lt;/strong&gt;, since our objective is to create a new automation from scratch.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Now you have the three parts of an automation: &lt;strong&gt;Triggers&lt;/strong&gt;, &lt;strong&gt;Conditions&lt;/strong&gt;, and &lt;strong&gt;Actions&lt;/strong&gt;, which were defined earlier in the post. Select &lt;strong&gt;+ ADD TRIGGER&lt;/strong&gt; under &lt;strong&gt;Trigger&lt;/strong&gt; to add a trigger.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id="triggers"&gt;Triggers&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;For &lt;strong&gt;Triggers&lt;/strong&gt;, you’ll see a lot of trigger options to pick from; we’ll use &lt;strong&gt;State&lt;/strong&gt;. Select &lt;strong&gt;State&lt;/strong&gt; from the list. In the blank space for &lt;strong&gt;Entity&lt;/strong&gt;, type &lt;strong&gt;Front Door&lt;/strong&gt; and, in the search results, select your entity for the front door. Fill in &lt;strong&gt;From&lt;/strong&gt; and &lt;strong&gt;To&lt;/strong&gt; with &lt;strong&gt;Closed&lt;/strong&gt; and &lt;strong&gt;Open&lt;/strong&gt;, respectively. That completes the &lt;strong&gt;Trigger&lt;/strong&gt;, so next we add the conditions: select &lt;strong&gt;+ ADD CONDITION&lt;/strong&gt;, found under &lt;strong&gt;Condition&lt;/strong&gt;. You can see the details below.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/10c881a350084d759161db9f113aa111/b1c60278ab3cf6ddd411149451aa7c59/Untitled.jpg" alt="" /&gt;&lt;/p&gt;

&lt;h3 id="conditions"&gt;Conditions&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Under &lt;strong&gt;Conditions&lt;/strong&gt;, you’ll find a list of options to choose from. These conditions need to be true for the automation to continue. Since, in our case, we only want the notification when people are likely to be away, we’ll use the sun as a proxy for that context. For the &lt;strong&gt;Condition&lt;/strong&gt;, we’ll use &lt;strong&gt;State&lt;/strong&gt; from the list of options provided and fill in the blanks for entity and state with &lt;strong&gt;Sun&lt;/strong&gt; and &lt;strong&gt;Above Horizon&lt;/strong&gt;, respectively. Remember that the automation will continue only if these conditions are true. Now that we’ve finished with the conditions, let’s proceed with the actions: select &lt;strong&gt;+ ADD ACTION&lt;/strong&gt;, found under &lt;strong&gt;Action&lt;/strong&gt;. You can see the details below.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/o7xu9whrs0u9/8410e6c825a3417ba27fd2d682e0a432/eb3a3fac34f0b7b0d2cdebe953ab3bc0/Untitled.jpg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3 id="actions"&gt;Actions&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Under &lt;strong&gt;Actions&lt;/strong&gt;, you’ll find a list of options to choose from. We’ll select &lt;strong&gt;Call Service&lt;/strong&gt;. To fill in the blank for &lt;strong&gt;Service&lt;/strong&gt;, type &lt;strong&gt;mobile&lt;/strong&gt; into the search bar (since we intend to receive the notification in the mobile application). You may not find a generic notification service in the search results, so select any of the services that have “mobile_app” in the name. Once you’ve made the selection, the system will prompt you with other attributes you can use. Note that the options without a check box are mandatory, whereas the ones with a check box are optional. Let’s put a check on the &lt;strong&gt;Message&lt;/strong&gt; option and give it the message &lt;strong&gt;Front door was opened&lt;/strong&gt;. You can see the details in the image below.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/075662cd593940d488b09264f8753b6b/500099c615b6b53d6279472474f5b28f/Untitled.jpg" alt="" /&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;If we want, we can also check &lt;strong&gt;Title&lt;/strong&gt; and give it the title &lt;strong&gt;Door Notification&lt;/strong&gt;. Lastly, hit save.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Bravo! The process is complete. So, whenever someone opens the front door, you’ll receive a notification like this:
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/7b2cc5e37221440a9b75b955c790ce5d/7f671b75fb559288d4d698c6468884d1/Untitled.jpg" alt="" /&gt;&lt;/p&gt;

&lt;h2 id="summary"&gt;Summary&lt;/h2&gt;

&lt;p&gt;Home Assistant enables users to automate and control home appliances like cameras, thermostats, lighting, and sensors. To get the software up and running, choose the installation option that’s most convenient for your system; the full list of options and the necessary steps are provided on the official Home Assistant website. After completing the installation, use the initial setup wizard to set up your Home Assistant and connect your smart home devices. You can then create automations using the GUI within Home Assistant, where you put in place the triggers, conditions, and actions your automation needs to run successfully.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This post was written by Theophilus Onyejiaku. &lt;a href="https://www.linkedin.com/in/theophilus-chidalu-onyejiaku"&gt;Theophilus&lt;/a&gt; has over five years of experience as a data scientist and machine learning engineer. His expertise spans data science, machine learning, computer vision, deep learning, object detection, and model development and deployment. He has written more than 660 articles on these topics, Python programming, data analytics, and much more.&lt;/em&gt;&lt;/p&gt;
</description>
      <pubDate>Fri, 05 Jan 2024 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/home-assistant-tutorial-beginners-guide-automation-influxdb/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/home-assistant-tutorial-beginners-guide-automation-influxdb/</guid>
      <category>Developer</category>
      <category>Getting Started</category>
      <author>Community (InfluxData)</author>
    </item>
    <item>
      <title>Grafana Dashboard Tutorial: How to Get Started</title>
      <description>&lt;p&gt;Grafana is an open-source web application for visualizing data. You can query your data, create visuals, and receive alerts to better understand what you have. Some people think of Grafana as a Kubernetes-only tool, but in reality, it’s simply a data visualization tool that became popular within the Kubernetes ecosystem, especially when combined with Prometheus.&lt;/p&gt;

&lt;p&gt;In this post, I’ll focus on a very specific part of Grafana: the dashboards. To do so, I’ll use Grafana’s &lt;a href="https://github.com/grafana/tutorial-environment"&gt;tutorial environment setup&lt;/a&gt; to help you get up to speed and learn about dashboards using Docker Compose.&lt;/p&gt;

&lt;p&gt;Before I get into the specifics, let’s explore what you can do with Grafana dashboards.
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/897c516967ee438e976926212a1728b0/511d2af84b443ad2cb01146f90fd12ca/Untitled.png" alt="text" /&gt;
&lt;br /&gt;&lt;/p&gt;

&lt;h2 id="what-does-a-grafana-dashboard-do"&gt;What Does a Grafana Dashboard Do?&lt;/h2&gt;

&lt;p&gt;Dashboards, in general, are a visual representation of data. Raw data itself might not tell you much, but when you use a dashboard you can glean valuable information. Say, for example, you’re hosting a site about the latest gaming industry news. From time to time, you get customer complaints that the site is not loading or is taking too much time. How would you identify the problem? One way could be logging into the servers and reading some logs. That’s doable if you only have a few servers or containers. But when you have dozens, you need a better approach.&lt;/p&gt;

&lt;p&gt;Grafana dashboards can help you spot problems before your users do. You can configure Grafana to use any data source you want, and the dashboards give you a visual representation so you can fix problems more rapidly. You can also annotate events and explain what happens after certain actions, like a recent release or a marketing campaign. I’ll show you how to do that in the next section.&lt;/p&gt;

&lt;h2 id="how-do-i-use-a-grafana-dashboard"&gt;How Do I Use a Grafana Dashboard?&lt;/h2&gt;

&lt;h3 id="setting-up"&gt;Setting up&lt;/h3&gt;

&lt;p&gt;Let’s give you some hands-on experience using the sample application from Grafana. To do so, you need to have &lt;a href="https://git-scm.com/"&gt;Git&lt;/a&gt;, &lt;a href="https://docs.docker.com/install/"&gt;Docker&lt;/a&gt;, and &lt;a href="https://docs.docker.com/compose/"&gt;Docker Compose&lt;/a&gt; installed. Then clone the repository to have all the files locally:&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;git clone github.com/grafana/tutorial-environment&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Go to the repository folder and start all the services you’re going to use. This could take a few minutes.&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;cd tutorial-environment
docker-compose up -d&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;You can now browse the news site’s sample application at &lt;a href="http://localhost:8081"&gt;localhost:8081&lt;/a&gt; and Grafana at &lt;a href="http://localhost:3000"&gt;localhost:3000&lt;/a&gt;. Grafana’s credentials are “admin” for both the username and password. The first time you log in, you’ll need to change that password. Make sure you can reach both services, as the news site is what you’ll use as the data source for the dashboard.&lt;/p&gt;

&lt;h3 id="importing-data"&gt;Importing data&lt;/h3&gt;

&lt;p&gt;Before you get started creating dashboards, you’ll need to let Grafana know which data sources you want to use. You can add multiple types of data sources, such as time series databases, logging and document databases, distributed tracing, SQL databases, and cloud providers. One common data source is &lt;a href="https://www.influxdata.com/integration/prometheus-monitoring-tool/"&gt;Prometheus&lt;/a&gt;, which is the one we’re going to use today.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: You can easily send Prometheus data to InfluxDB using &lt;a href="https://www.influxdata.com/integration/prometheus-input/"&gt;Telegraf plugins&lt;/a&gt;. This enables you to combine Prometheus data with other data sources to create a more holistic view of your data sources. InfluxDB works with &lt;a href="https://www.influxdata.com/grafana/"&gt;Grafana&lt;/a&gt; natively, enabling you to query and visualize all your data in one place.&lt;/p&gt;
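&lt;p&gt;As a rough sketch of that pipeline, a minimal Telegraf configuration might look like the following. All URLs, the token, organization, and bucket below are placeholders:&lt;/p&gt;

```toml
# Minimal sketch: scrape Prometheus-format metrics and write them
# to InfluxDB v2. All URLs and credentials here are placeholders.
[[inputs.prometheus]]
  urls = ["http://prometheus:9090/metrics"]

[[outputs.influxdb_v2]]
  urls = ["http://localhost:8086"]
  token = "my-token"
  organization = "my-org"
  bucket = "my-bucket"
```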

&lt;p&gt;In Grafana, click on the sidebar to open the settings menu, then click “Data sources” below “Connections.”&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/7c3b15f8912e4562a59d17606a6a78e2/c709a46d06f0addcbc5f30c25a088492/Untitled.png" alt="" /&gt;&lt;/p&gt;

&lt;p&gt;Next, click the “Add data source” button, then click “Prometheus.” In the “Prometheus server URL” text box, enter: http://prometheus:9090. Scroll down to the bottom, then click the “Save &amp;amp; test” button.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/8913d63f1afa444f8dd87cd551efb802/a5b8e9ac0f681bbf55b02da9919613b7/Untitled.png" alt="" /&gt;&lt;/p&gt;

&lt;p&gt;You’re going to use Prometheus to get metrics such as latency and the number of requests from the news site. To enrich your dashboards, you can add another data source to get logs from the news site.&lt;/p&gt;

&lt;p&gt;Go to the “Data sources” page again and click the “Add new data source” button, scroll down a little bit, and click “Loki” in the “Logging &amp;amp; document databases” section.&lt;/p&gt;

&lt;p&gt;Once there, enter http://loki:3100 in the URL text box. Scroll down to the bottom, then click the “Save &amp;amp; test” button.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/8b8b1633d6724c809c592a9468e31cd0/db18aad5bc8283763ee16568ed7aa400/Untitled.png" alt="" /&gt;&lt;/p&gt;

&lt;p&gt;Open the &lt;a href="http://localhost:8081/"&gt;news site’s sample application&lt;/a&gt; to simulate traffic by adding new links, voting, or simply refreshing the browser. We need to get some fresh data to see something in the dashboard.&lt;/p&gt;

&lt;h3 id="creating-a-dashboard"&gt;Creating a dashboard&lt;/h3&gt;

&lt;p&gt;Now that you have two data sources and have generated some data from the news site, you can visualize the data to get more insights about what’s happening. Go back to the &lt;a href="http://localhost:3000"&gt;Grafana site&lt;/a&gt; and click “Dashboards” in the sidebar menu. Next, click the “New” button in the top-right corner and select “New Dashboard” in the drop-down list. Now click on the “+ Add visualization” button. You should see the two data sources you added earlier, Prometheus and Loki. Add the metrics data source first by clicking the “Prometheus” option.&lt;/p&gt;

&lt;p&gt;The screen below has several buttons, text boxes, and a big section where you should see a graph. Configure this screen step by step to create your first visualization of the Prometheus data. Click the “Query” tab below the graph and then click the “Code” option on the right side of the screen. Next, in the “Metrics browser” text box, type the following query:&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;sum(rate(tns_request_duration_seconds_count[5m])) by(route)&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now click the “Run queries” button—you should see a visual representation of the query on the screen.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/4ec8e06dbae14eb48a97be85e5b37147/ded5d3190b2170c026ee376954d3df74/Untitled.png" alt="" /&gt;&lt;/p&gt;

&lt;p&gt;You can customize the title of the graph on the right side of the screen. Go to the “Panel options” and type a title like “Traffic” in the “Title” text box. Notice that the visualization title is different now. To finish creating your dashboard, click the “Apply” button in the top-right corner.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/2b56147f181a4312ba6c66b1f792a289/960dd1c553b347f400e6669be128e80f/Untitled.png" alt="" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;br /&gt;&lt;/p&gt;

&lt;h3 id="annotating-events"&gt;Annotating events&lt;/h3&gt;

&lt;p&gt;Grafana dashboards allow you to add annotations. This tool is especially useful when things go wrong. You can create annotations automatically or manually; for example, you can add one every time you release a new version of the site or start a marketing campaign. With annotations, if latency changes after one of these events, you’ll have the context to explain why.&lt;/p&gt;

&lt;p&gt;Let’s suppose you initiated a deployment a few minutes ago and want to manually add this annotation to the dashboard. Before you can annotate events, you need to save the dashboard. To do so, click on the gear icon in the dashboard header.
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/088f67c807de481e8cce1b7c11d1a163/159800d4f1a3173b78664b84cead24ef/Untitled.png" alt="" /&gt; 
Next, type a name for the dashboard in the “Name” text box, then click the “Save dashboard” button.
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/a334121ee7d645508ecd95d04f120c42/5ba7b3dd2a1091fc22eab138dc88c98b/Untitled.png" alt="" /&gt; 
Click anywhere in the graph from the dashboard, then click the “Add annotation” button.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/031bd4de6a924465bd35120653b6e2f0/447cead08e96cc1a60249d9238f8fbdb/Untitled.png" alt="" /&gt;&lt;/p&gt;

&lt;p&gt;Add a description and a tag, then click the “Save” button.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/d0ffe972b92349269a774bb50b43d851/46c08c49c0a45d3b490a459bc2b5ee25/Untitled.png" alt="" /&gt;&lt;/p&gt;

&lt;p&gt;If you hover your mouse over the base of the annotation, on the line just above the x-axis, you’ll see the annotation details. Whenever something happens near an annotated event, you’ll have extra context about what’s going on in your application. This information can help you make decisions, such as rolling back or quickly pushing a fix.&lt;/p&gt;

&lt;p&gt;When you’re done playing with Grafana, you can stop all the container services with the following command:&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;docker-compose down -v&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/237b61e632af4bef98e2161f314f1611/10c84b0883d5ba67269c038c34bc2000/Untitled.png" alt="text" /&gt;
&lt;br /&gt;&lt;/p&gt;

&lt;h2 id="wrapping-up"&gt;Wrapping Up&lt;/h2&gt;

&lt;p&gt;Creating dashboards in Grafana is straightforward. Make sure you have the data sources you need, then build queries that allow you to visualize this data. Adding multiple data sources can help you enrich your dashboards and you can store all this data in InfluxDB. Moreover, when you annotate events, it becomes easier to find the root cause of problems and resolve issues faster.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This post was written by David Snatch. David is a cloud architect focused on implementing secure continuous delivery pipelines using Terraform, Kubernetes, and any other awesome tech that helps customers deliver results.&lt;/em&gt;&lt;/p&gt;
</description>
      <pubDate>Fri, 08 Dec 2023 09:30:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/grafana-dashboard-tutorial/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/grafana-dashboard-tutorial/</guid>
      <category>Developer</category>
      <category>Getting Started</category>
      <author>Community (InfluxData)</author>
    </item>
    <item>
      <title>An Introduction to Apache Superset: An Open Source BI solution</title>
      <description>&lt;p&gt;With native SQL support coming to InfluxDB, we can broaden the scope of developer tools used to analyze and visualize our &lt;a href="https://influxdata.com/what-is-time-series-data"&gt;time series data&lt;/a&gt;. One of these tools is &lt;a href="https://www.influxdata.com/integration/apache-superset/"&gt;Apache Superset&lt;/a&gt;. So let’s break down the basics of what Superset is, look at its features and benefits, and run a quick demo of Superset in action.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/6Owclv6qrvPxVgH69nMb5D/1c69db45a5b976209d84100caa5d9f65/Superset_Community_Metrics.png" alt="Superset Community Metrics" /&gt;&lt;/p&gt;

&lt;h2 id="what-is-apache-superset"&gt;What is Apache Superset?&lt;/h2&gt;

&lt;p&gt;Apache Superset is an open-source data exploration and &lt;a href="https://www.influxdata.com/how-to-visualize-time-series-data/"&gt;visualization&lt;/a&gt; platform. Originally started as a hack-a-thon project by Maxime Beauchemin while working at Airbnb, Superset entered the Apache Incubator program in 2017. Because of its emphasis on data analytics and exploration, Apache Superset is closer to enterprise business intelligence solutions such as Power BI and Tableau than to other dashboarding software, so bear this in mind if you are looking for a simple real-time dashboard solution.&lt;/p&gt;

&lt;p&gt;Apache Superset is based on a Dataset-Centric methodology, which sits between a query-centric and a semantic-centric architecture.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/4V3KZZ3vWtRL6Q4XbR1bHx/8bc4de352152b2347fe583df9f6226c7/The_Case_for_Dataset-Centric_Visualization.png" alt="The Case for Dataset-Centric Visualization" /&gt;&lt;/p&gt;
&lt;div class="has-text-centered is-italic mb-5"&gt;
Credit to &lt;a href="https://preset.io/blog/dataset-centric-visualization/" target="_blank"&gt;The Case for Dataset-Centric Visualization&lt;/a&gt;&lt;/div&gt;

&lt;p&gt;This architecture promotes the use of datasets that are similar to a &lt;a href="https://www.tutorialspoint.com/python_pandas/python_pandas_dataframe.htm"&gt;Pandas Dataframe&lt;/a&gt; but with some further enrichment. Essentially, what we end up with is an enriched tabular structure that contains a subset of semantic characteristics:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Labels and descriptions of the dataset and columns&lt;/li&gt;
  &lt;li&gt;Metrics as aggregated SQL expressions (AVG, MAX, COUNT)&lt;/li&gt;
  &lt;li&gt;Timezone and time granularity support&lt;/li&gt;
  &lt;li&gt;Definitions for which columns can be aggregated and filtered on&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This quote from &lt;a href="https://preset.io/blog/dataset-centric-visualization/"&gt;The Case for Dataset-Centric Visualization&lt;/a&gt; ties up the dataset approach neatly:&lt;/p&gt;

&lt;p&gt;“The dataset metaphor offers a simple and safe “dimensional” playground. In the dataset-centric approach, all charts are built from these datasets that contain a comprehensive collection of relevant dimensions and metrics. This enables users to self-serve within that context. Your team members can typically slice and dice, which entails superpowers like applying arbitrary filters, drilling into dimensional details, drilling through to atomic rows, and choosing the right visualization.”&lt;/p&gt;

&lt;h2 id="apache-superset-features-and-benefits"&gt;Apache Superset features and benefits&lt;/h2&gt;

&lt;p&gt;Now that we have an understanding of what Apache Superset is, let’s take a look at some of its core features and benefits.&lt;/p&gt;

&lt;h3 id="rich-visualization-library"&gt;Rich visualization library&lt;/h3&gt;

&lt;p&gt;Apache Superset currently supports 50+ visualization types to experiment with. These cover a variety of data types and use cases. For time series use cases specifically, I would make sure to check out the following visualization types:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Basic:&lt;/strong&gt; Time series Bar Chart, Time-series Line Chart, Time-series Scatter Plot&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Intermediate:&lt;/strong&gt; Histogram, MapBox, Big Number with Trendline, Time-series Table&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Advanced:&lt;/strong&gt; Calendar Heatmap, Radar Chart, Nightingale Rose Chart&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Remember, Apache Superset’s visualizations are a “Swiss Army knife” of options, so make sure to choose the visualization that best fits your use case. We will explore how and when to use certain visualizations in another blog.&lt;/p&gt;

&lt;h3 id="versatile-backend-support"&gt;Versatile backend support&lt;/h3&gt;

&lt;p&gt;Apache Superset supports a large range of databases through SQLAlchemy (plus any required drivers). Support for Apache Superset in the new InfluxDB IOx storage engine is currently in development (specifics will be discussed in a future blog). Apache Superset also provides direct file ingest for formats such as JSON, CSV, Excel, and columnar files.&lt;/p&gt;

&lt;h3 id="customization-and-deployment"&gt;Customization and deployment&lt;/h3&gt;

&lt;p&gt;Apache Superset’s cloud-native design makes it scalable. It promotes high availability and deployment support for distributed architectures. Superset also promotes a flexible modular architecture, which lets you choose the right components for your deployment:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;webserver (Gunicorn, Nginx, Apache)&lt;/li&gt;
  &lt;li&gt;metadata database engine (MySQL, Postgres, MariaDB, etc.)&lt;/li&gt;
  &lt;li&gt;message queue (Redis, RabbitMQ, SQS, etc.)&lt;/li&gt;
  &lt;li&gt;results backend (S3, Redis, Memcached, etc.)&lt;/li&gt;
  &lt;li&gt;caching layer (Memcached, Redis, etc.)&lt;/li&gt;
&lt;/ul&gt;
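&lt;p&gt;Component choices like these are wired up in Superset’s configuration file. As a minimal sketch, the connection URLs below are placeholders, and the cache settings follow Superset’s Flask-Caching-based configuration:&lt;/p&gt;

```python
# superset_config.py -- a minimal sketch; all URLs are placeholders.
# SQLALCHEMY_DATABASE_URI selects the metadata database engine,
# and CACHE_CONFIG selects the caching layer.
SQLALCHEMY_DATABASE_URI = "postgresql://superset:superset@db:5432/superset"

CACHE_CONFIG = {
    "CACHE_TYPE": "RedisCache",
    "CACHE_REDIS_URL": "redis://redis:6379/0",
}
```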

&lt;p&gt;Apache Superset also allows users to create their own custom visualizations via its visualization plugin feature. These plugins can be written in JavaScript or TypeScript.&lt;/p&gt;

&lt;h2 id="apache-superset-tutorial"&gt;Apache Superset tutorial&lt;/h2&gt;

&lt;p&gt;With the new InfluxDB IOx storage engine quickly approaching general release, it won’t be long until you can deploy Apache Superset as part of your solution. In the meantime here is a small taste of what is possible with InfluxDB and Apache Superset:&lt;/p&gt;

&lt;p&gt;First of all, we use the integrated SQL Lab to query against InfluxDB:&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/4nKabrVEhc2gykPgIzX857/b036c413d1bbfc628ead53691380b74e/we_use_the_integrated_SQL_Lab_to_query_against_InfluxDB.png" alt="we use the integrated SQL Lab to query against InfluxDB" /&gt;&lt;/p&gt;

&lt;p&gt;Our dataset is based on an IoT simulator for Emergency Power Generators. We save this as an Apache Superset dataset for further manipulation and visualization.&lt;/p&gt;

&lt;p&gt;The graph creator provides a low-code interface for selecting and creating your visualization for your dataset. In this case, to keep it simple I am using the Mixed-Time Series visualization:&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/330tyV1v3S025c8n4F77Hm/7670feafd57f1e495f2ad345c9084892/Generator_one_-_fuel_and_load.png" alt="Generator one - fuel and load" /&gt;&lt;/p&gt;

&lt;p&gt;You can see that, based on the dataset, we can allocate fields as metrics, and tags as dimensions or filters, using a simple drag-and-drop interface. From there we can save our new visualizations directly to a dashboard:&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/5nuyC4hmnrGssLRQVgNhFz/1ab8098025946d7adc9bb37c0710dbeb/Load_Precentage.png" alt="Load Precentage" /&gt;&lt;/p&gt;

&lt;p&gt;Dashboards offer you extended functionality such as auto-refresh, advanced filtering, and email reports, but we will save that for another blog.&lt;/p&gt;

&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;I hope this blog sparks your interest to investigate Apache Superset. It is a powerful business intelligence platform that provides equivalent functionality to the industry giants (Power BI and Tableau). The best part? It’s all based on open-source technology. Are you using Apache Superset? We would love to hear from you, so come join us on &lt;a href="https://www.influxdata.com/slack"&gt;Slack&lt;/a&gt; and the &lt;a href="https://community.influxdata.com/"&gt;forums&lt;/a&gt;. Share your thoughts — I look forward to seeing you there!&lt;/p&gt;
</description>
      <pubDate>Wed, 28 Dec 2022 07:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/introduction-apache-superset/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/introduction-apache-superset/</guid>
      <category>Product</category>
      <category>Use Cases</category>
      <category>Getting Started</category>
      <author>Jay Clifford (InfluxData)</author>
    </item>
    <item>
      <title>Getting Started with InfluxDB and Grafana</title>
      <description>&lt;p&gt;At some point if you’re working with data, you’ll probably want to be able to visualize it with different types of charts and organize those charts with dashboards. You’ll also need somewhere to store that data so it can be queried efficiently.&lt;/p&gt;

&lt;p&gt;One of the most popular combinations for storing and &lt;a href="https://www.influxdata.com/how-to-visualize-time-series-data/"&gt;visualizing time series data&lt;/a&gt; is Grafana and InfluxDB. InfluxDB serves as the data store and Grafana is then used to pull data from InfluxDB (and potentially other data sources) to create dashboards to visualize the data.&lt;/p&gt;

&lt;p&gt;In this article you will learn how to:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Install Grafana and InfluxDB using Docker&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Connect Grafana to InfluxDB&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Create your first Flux query&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Visualize financial data with more advanced queries&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="influxdb-overview"&gt;InfluxDB overview&lt;/h2&gt;

&lt;p&gt;InfluxDB is an open source time series database that is optimized for fast and highly available data storage for time series data in use cases like monitoring, application metrics, IoT sensor data, real-time analytics, and more.&lt;/p&gt;

&lt;div style="padding:56.25% 0 0 0;position:relative; margin: 30px 0px;"&gt;&lt;iframe src="https://player.vimeo.com/video/766950918?h=4a058cfe60&amp;amp;badge=0&amp;amp;autopause=0&amp;amp;player_id=0&amp;amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" allowfullscreen="" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="Brian Mullen [InfluxData] | InfluxDB - The Smart Data Platform | InfluxDays 2022"&gt;&lt;/iframe&gt;&lt;/div&gt;
&lt;script src="https://player.vimeo.com/api/player.js"&gt;&lt;/script&gt;

&lt;h2 id="grafana-overview"&gt;Grafana overview&lt;/h2&gt;

&lt;p&gt;Grafana is an open source data visualization and monitoring platform. It is used to create dashboards and visualize data from a variety of sources like Prometheus or InfluxDB.&lt;/p&gt;

&lt;p&gt;Grafana allows users to quickly create visualizations of their data, such as graphs, tables, and heatmaps. It also provides alerting capabilities, allowing users to be notified when certain conditions are met.&lt;/p&gt;

&lt;h2 id="setting-up-influxdb-and-grafana"&gt;Setting up InfluxDB and Grafana&lt;/h2&gt;

&lt;p&gt;Now let’s get started with installing InfluxDB and Grafana. First you’ll need to make sure you have the following on your computer:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;a href="https://docs.influxdata.com/influxdb/cloud/tools/influx-cli/"&gt;InfluxDB CLI&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;a href="https://docs.docker.com/compose/install/compose-desktop/"&gt;Docker Desktop&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id="docker-setup"&gt;Docker setup&lt;/h3&gt;

&lt;p&gt;This tutorial will use docker-compose to manage Grafana and InfluxDB. Before running the following script make sure to navigate to the directory where you want the project to be located.&lt;/p&gt;

&lt;p&gt;Once you are in the desired folder, run the following script in the command line:&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-bash"&gt;mkdir influxdb-getting-started-with-grafana
cd influxdb-getting-started-with-grafana
cat &amp;gt; ./docker-compose.yml &amp;lt;&amp;lt;EOF
version: "3"

networks:
  monitoring:

services:
  influxdb:
    image: influxdb:2.3.0
    ports:
      - 8086:8086
    networks:
      - monitoring

  grafana:
    image: grafana/grafana:9.0.4
    ports:
      - 3000:3000
    networks:
      - monitoring
EOF&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This script will create a &lt;code class="language-bash"&gt;docker-compose.yml&lt;/code&gt; file which defines the network and images used for InfluxDB and Grafana. To start the containers you just need to run the following command:&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-bash"&gt;docker-compose up -d&lt;/code&gt;&lt;/pre&gt;

&lt;h3 id="influxdb-setup"&gt;InfluxDB setup&lt;/h3&gt;

&lt;p&gt;Next, set up the credentials required to connect to InfluxDB.&lt;/p&gt;

&lt;p&gt;Create the initial &lt;strong&gt;super-admin&lt;/strong&gt; credentials, &lt;strong&gt;organization&lt;/strong&gt;, &lt;strong&gt;bucket&lt;/strong&gt;, and &lt;strong&gt;all-access&lt;/strong&gt; security token by running the &lt;code class="language-bash"&gt;influx setup&lt;/code&gt; command:&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-bash"&gt;influx setup --name myinfluxdb2 --host http://localhost:8086 \
  -u admin -p admin54321 -o my-org \
  -b my-bucket -t my-token -r 0 -f&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The superuser, organization, bucket, and access token have been created. In addition, the &lt;code class="language-bash"&gt;influx&lt;/code&gt; command creates a new server configuration object and stores it as the active config named &lt;code class="language-bash"&gt;myinfluxdb2&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Note: You can define as many configs as you want to work with multiple InfluxDB2 servers easily. The configuration objects are stored locally in the &lt;code class="language-bash"&gt;$HOME/.influxdbv2/configs&lt;/code&gt; file on your computer. To list all available server configurations use the command &lt;code class="language-bash"&gt;influx config ls&lt;/code&gt;.&lt;/p&gt;

&lt;h2 id="grafana-and-influxdb-connection-setup"&gt;Grafana and InfluxDB connection setup&lt;/h2&gt;

&lt;h3 id="add-data-source-in-grafana-ui"&gt;Add data source in Grafana UI&lt;/h3&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;Open in browser &lt;a href="http://localhost:3000/datasources"&gt;http://localhost:3000/datasources&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Sign in as user &lt;code class="language-bash"&gt;admin&lt;/code&gt;, password &lt;code class="language-bash"&gt;admin&lt;/code&gt;.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Click on &lt;code class="language-bash"&gt;Skip&lt;/code&gt; to skip the question about the new password.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;In the left menu, click on the &lt;strong&gt;Gear&lt;/strong&gt; icon, to open Data Sources.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Click on &lt;strong&gt;Add data source&lt;/strong&gt;.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Select &lt;strong&gt;InfluxDB&lt;/strong&gt;.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Replace &lt;strong&gt;InfluxQL&lt;/strong&gt; with &lt;strong&gt;Flux&lt;/strong&gt; in the dropdown called &lt;strong&gt;Query Language&lt;/strong&gt;.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Type &lt;code class="language-bash"&gt;http://influxdb:8086/&lt;/code&gt; at the &lt;strong&gt;URL&lt;/strong&gt; field in the section called &lt;strong&gt;HTTP&lt;/strong&gt;.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Write &lt;code class="language-bash"&gt;my-org&lt;/code&gt; into the &lt;strong&gt;Organization&lt;/strong&gt; field in the &lt;strong&gt;InfluxDB Details&lt;/strong&gt; section.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Type &lt;code class="language-bash"&gt;my-token&lt;/code&gt; in the &lt;strong&gt;Token&lt;/strong&gt; field. (Once the &lt;strong&gt;Save &amp;amp; test&lt;/strong&gt; button is clicked, the token is hidden and shown as “configured”.)&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Click &lt;strong&gt;Save &amp;amp; test&lt;/strong&gt;. On success, two green notifications will display (“3 buckets found” and “Datasource updated”). Please see below.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/5bTkzeL0eGHSDnJAEep4j8/5e1e717658472b6b5e5f5f8e45734c4e/Grafana_and_InfluxDB_connection_setup.png" alt="Grafana and InfluxDB connection setup" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note:&lt;/em&gt; The address &lt;a href="http://influxdb:8086/" target="_blank"&gt;http://influxdb:8086/&lt;/a&gt; is the address visible from the Grafana container in the internal network orchestrated by docker-compose.&lt;/p&gt;

&lt;h2 id="your-first-flux-query-from-grafana"&gt;Your first Flux query from Grafana&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;(&lt;strong&gt;Update&lt;/strong&gt;: &lt;a href="https://www.influxdata.com/products/influxdb-overview/"&gt;InfluxDB 3.0&lt;/a&gt; moved away from Flux and a built-in task engine. Users can use external tools, like Python-based &lt;a href="https://www.quix.io/kapacitor-alternative"&gt;Quix&lt;/a&gt;, to create tasks in InfluxDB 3.0.)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Open &lt;a href="http://localhost:3000/explore?"&gt;Grafana Explorer&lt;/a&gt; located in the left menu of Grafana GUI as the Compass icon.&lt;/p&gt;

&lt;p&gt;Type the following simple query on line 1 of the Explorer:&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-javascript"&gt;buckets()&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Hit the &lt;strong&gt;Run Query&lt;/strong&gt; button in the top-right corner. The result is a table listing three buckets.&lt;/p&gt;

&lt;p&gt;The buckets &lt;code class="language-javascript"&gt;_monitoring&lt;/code&gt; and &lt;code class="language-javascript"&gt;_tasks&lt;/code&gt; are internal InfluxDB buckets. The third bucket, &lt;code class="language-javascript"&gt;my-bucket&lt;/code&gt;, is the one created by the influx setup command you ran earlier.&lt;/p&gt;
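&lt;p&gt;From here, a typical Flux query pulls rows out of a bucket with &lt;code class="language-javascript"&gt;from&lt;/code&gt;, &lt;code class="language-javascript"&gt;range&lt;/code&gt;, and &lt;code class="language-javascript"&gt;filter&lt;/code&gt;. A minimal sketch against &lt;code class="language-javascript"&gt;my-bucket&lt;/code&gt; might look like this; note the measurement and field names below are placeholders:&lt;/p&gt;

```flux
// Minimal Flux sketch: "stocks" and "close" are placeholder names.
from(bucket: "my-bucket")
  |> range(start: -30d)                             // last 30 days
  |> filter(fn: (r) => r._measurement == "stocks")  // one measurement
  |> filter(fn: (r) => r._field == "close")         // one field
```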

&lt;p&gt;InfluxDB is also running at &lt;a href="http://localhost:8086/"&gt;http://localhost:8086&lt;/a&gt;. You can login to the InfluxDB UI with the following credentials:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;User: admin&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Password: admin54321&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="doing-more-with-flux-and-grafana"&gt;Doing more with Flux and Grafana&lt;/h2&gt;

&lt;p&gt;In this section you will learn more about how to use Flux and how to create more advanced visualizations using Grafana Dashboards.&lt;/p&gt;

&lt;p&gt;Focus points:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Importing the CSV file of financial data into InfluxDB&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Usage of simple Flux queries to extract time series&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Data visualization in the candlestick chart saved as a Panel in Grafana Dashboard&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The data represents a historical time series of daily stock prices from selected companies, with a time range between the years 2010 and 2016.&lt;/p&gt;

&lt;p&gt;The following figure shows the head of the CSV, to preview how the spreadsheet looks:&lt;/p&gt;

&lt;p&gt;The data set is a 50MB file containing daily values (all double-precision numbers) for the &lt;em&gt;open, low, high, and close&lt;/em&gt; prices and traded volume of 502 companies.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/5Kl1y2VCMPjXPgE610dBfW/1a345dbc09197b80b5f289193fe45537/analyze_data_in_spreadsheet_processor.png" alt="analyze data in spreadsheet processor" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Analyzing data of this size in a spreadsheet application is a tedious process.&lt;/em&gt;&lt;/p&gt;
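&lt;p&gt;Based on the column types declared in the import command in the next section (a timestamp, a symbol tag, and five numeric columns), the head of the CSV looks roughly like this; the values shown are illustrative, not copied from the actual file:&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-bash"&gt;date,symbol,open,low,high,close,volume
2016-01-04,AAPL,102.61,102.00,105.37,105.35,67649400
2016-01-05,AAPL,105.75,102.41,105.85,102.71,55791000
...&lt;/code&gt;&lt;/pre&gt;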

&lt;h3 id="import-data-into-influxdb"&gt;Import data into InfluxDB&lt;/h3&gt;

&lt;p&gt;First you will need to download the &lt;a href="https://docs.google.com/spreadsheets/d/1Mq6q-bnv031UlUD61sVSjebaszeJ0NOUqQckGcxwjnE/edit?usp=sharing"&gt;financial data CSV&lt;/a&gt; onto your computer. Open your terminal and navigate to the folder where the CSV file was downloaded. Before using the &lt;code class="language-javascript"&gt;influx&lt;/code&gt; CLI, make sure you are working with the correct active configuration.&lt;/p&gt;

&lt;p&gt;Use the following command to see the available configurations:&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-bash"&gt;influx config ls&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The output should include a line with the config created during the InfluxDB setup:&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-bash"&gt;Active  Name         URL                       Org
*       myinfluxdb2  http://localhost:8086     my-org&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;Note: The asterisk marks the active config, which means the &lt;code class="language-bash"&gt;--host&lt;/code&gt; and &lt;code class="language-bash"&gt;--org&lt;/code&gt; arguments don’t need to be entered each time the &lt;code class="language-bash"&gt;influx&lt;/code&gt; command is used.&lt;/em&gt;&lt;/p&gt;

&lt;h3 id="import-csv-data-into-influxdb"&gt;Import CSV data into InfluxDB&lt;/h3&gt;

&lt;p&gt;Run the following &lt;code class="language-javascript"&gt;influx&lt;/code&gt; CLI write command, which uses annotation headers to describe the CSV columns:&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-bash"&gt;influx write -b my-bucket -f ./stock-prices-example.csv \
  --header "#constant measurement,stocks" \
  --header "#datatype dateTime:2006-01-02,tag,double,double,double,double,double"&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;Note: This influx write command takes roughly 12-15 seconds to complete. To print how long the import takes, prefix the command with &lt;code class="language-bash"&gt;time&lt;/code&gt;: time influx write -b my-bucket&lt;/em&gt;&lt;/p&gt;

&lt;h3 id="visualize-time-series-data-in-grafana-explorer"&gt;Visualize time series data in Grafana Explorer&lt;/h3&gt;

&lt;p&gt;To get started with creating your dashboard, choose InfluxDB as your data source using the dropdown selector:&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/5OXN9DIdCexYtKNagQuJcN/30ea136534b533bd1205c9ca21cebbc8/Visualize_time_series_data_in_Grafana_Explorer.png" alt="Visualize time series data in Grafana Explorer" /&gt;&lt;/p&gt;

&lt;h3 id="query-and-visualizing"&gt;Query and visualizing&lt;/h3&gt;

&lt;p&gt;Run the following Flux query:&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-javascript"&gt;from(bucket: "my-bucket")
  |&amp;gt; range(start: 2016-01-01T00:00:00Z, stop: 2016-01-31T00:00:00Z)
  |&amp;gt; filter(fn: (r) =&amp;gt; r["_measurement"] == "stocks")
  |&amp;gt; filter(fn: (r) =&amp;gt; r["symbol"] == "AAPL")
  |&amp;gt; aggregateWindow(every: 1d, fn: mean, createEmpty: false)
  |&amp;gt; yield(name: "mean")&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/kbkDQuBP8CkpuvXFOqFpu/7bf2ec8cca6d9a36c050373d6b88776b/Simple_query.png" alt="Simple query" /&gt;&lt;/p&gt;

&lt;p&gt;The graph area is still empty because the query result falls outside the currently displayed time range. Click the &lt;code class="language-javascript"&gt;Zoom to data&lt;/code&gt; button to see the AAPL raw data in the right time range, January 2016.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/5IHfrcQLw0P7iP1g3RQmzQ/134286692c635e11a6b074e0dd23b631/Adapting_time_frame_of_queries_to_user_needs.png" alt="Adapting time frame of queries to user needs" /&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note:&lt;/em&gt; Adapting the time frame of a query to your needs is streamlined in the Grafana UI; adjusting the data time range is interactive and simple (see figure below).&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/25tHreGnx9ZL9IHMZWuh25/5d846ca537d25a9c7c137ce5bb00fe69/Setting_time_range_in_Grafana_UI.png" alt="Setting time range in Grafana UI" /&gt;&lt;/p&gt;

&lt;h2 id="create-your-first-grafana-dashboard"&gt;Create your first Grafana dashboard&lt;/h2&gt;

&lt;p&gt;Click on the &lt;code class="language-javascript"&gt;Add to Dashboard&lt;/code&gt; button located at the top of the page.&lt;/p&gt;

&lt;p&gt;&lt;img style=" " src="//images.ctfassets.net/o7xu9whrs0u9/7ccpm8fXWolYtUQoQ3jYnP/b373cf1f60860c54267b5a156e13e4d7/Add_to_dashboard.png" alt="Add to dashboard" width="164" height="auto" /&gt;&lt;/p&gt;

&lt;p&gt;A message box with options appears. Select New Dashboard, and open the Grafana Dashboard in a new window so you don’t lose the Explorer tab.&lt;/p&gt;

&lt;p&gt;After the new browser tab opens:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
&lt;p&gt;Press &lt;code class="language-javascript"&gt;e&lt;/code&gt; on your keyboard; after a few seconds, the Panel’s edit mode appears on the page.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Select Candlestick type on the right side.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;img style=" " src="//images.ctfassets.net/o7xu9whrs0u9/4GsRRrEPJIhoTF47ApcizZ/5e6c9af8b1e3a560a042c6a5cae518de/Select_Candlestick_type_on_the_right_side.png" alt="Select Candlestick type on the right side" width="500" height="auto" /&gt;&lt;/p&gt;

&lt;p&gt;Then:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
&lt;p&gt;Modify the previous Flux command, extending the time range on line 2 to run from &lt;code class="language-html"&gt;2010-01-01&lt;/code&gt; to &lt;code class="language-html"&gt;2016-12-31&lt;/code&gt;.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Click the Apply button.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;
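&lt;p&gt;After these steps, the panel query should look like the earlier query with only the range line changed, roughly:&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-javascript"&gt;from(bucket: "my-bucket")
  |&amp;gt; range(start: 2010-01-01T00:00:00Z, stop: 2016-12-31T00:00:00Z)
  |&amp;gt; filter(fn: (r) =&amp;gt; r["_measurement"] == "stocks")
  |&amp;gt; filter(fn: (r) =&amp;gt; r["symbol"] == "AAPL")
  |&amp;gt; aggregateWindow(every: 1d, fn: mean, createEmpty: false)
  |&amp;gt; yield(name: "mean")&lt;/code&gt;&lt;/pre&gt;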

&lt;p&gt;To zoom out, you can use the magnifying glass button with a minus sign in the Grafana toolbar. The keyboard shortcut is &lt;code class="language-javascript"&gt;Ctrl+Z&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To zoom in, drag to select a narrower time range with your mouse, or just use the time range selector (dropdown control).&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/7bi5lpMknH88cV1dRyP2vc/1d673679e9a6de1a33aaf214a7dff943/To_zoom_in_-_select_a_bigger_time_range_with_your_mouse.png" alt="To zoom in - select a bigger time range with your mouse" /&gt;&lt;/p&gt;

&lt;p&gt;Enter edit mode and try out other possibilities. Enjoy your new dashboard.&lt;/p&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/7vZWyFj94HY38wqNEP5Boy/3b87705945c808c23050708bd49db611/Enter_edit_mode_to_try_out_other_possibilities.png" alt="Enter edit mode to try out other possibilities" /&gt;&lt;/p&gt;

&lt;h2 id="doing-more-with-grafana-dashboards-and-flux"&gt;Doing more with Grafana dashboards and Flux&lt;/h2&gt;

&lt;p&gt;Using Flux with Grafana provides a number of options and features that aren’t possible with other query languages, such as joining data from multiple sources and performing advanced manipulation and transformation of your data before returning the results.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;a href="https://docs.influxdata.com/flux/v0.x/join-data/"&gt;Joins&lt;/a&gt; - create graphs that span multiple buckets. For example, you might want a Grafana chart that displays both bytes transferred and requests per second; Flux allows you to query these two measurements and join them into a single table.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;a href="https://docs.influxdata.com/flux/v0.x/query-data/"&gt;Multiple data sources&lt;/a&gt; - enrich time series data with metadata from relational databases such as MySQL, MariaDB, Postgres, Microsoft SQL Server, Snowflake, SQLite, AWS Athena, and Google BigTable; or from CSV files. This is useful when, for example, your time series data includes customer number fields but not customer names. Flux allows you to pull in the customer name so that it can be displayed in your Grafana dashboards.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;
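&lt;p&gt;As a sketch of the bytes-and-requests example above (the bucket, measurement, and field names here are hypothetical), a Flux join might look like this:&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-javascript"&gt;// Hypothetical buckets and fields, for illustration only
bytes = from(bucket: "network-bytes")
  |&amp;gt; range(start: -1h)
  |&amp;gt; filter(fn: (r) =&amp;gt; r["_measurement"] == "net" and r["_field"] == "bytes")

reqs = from(bucket: "web-requests")
  |&amp;gt; range(start: -1h)
  |&amp;gt; filter(fn: (r) =&amp;gt; r["_measurement"] == "http" and r["_field"] == "req_per_sec")

// Combine both series into a single table on matching timestamps
join(tables: {bytes: bytes, reqs: reqs}, on: ["_time"])&lt;/code&gt;&lt;/pre&gt;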

&lt;p&gt;Some additional Flux features you might find useful:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;a href="https://docs.influxdata.com/flux/v0.x/stdlib/universe/sort/"&gt;Sort&lt;/a&gt; by tags&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;a href="https://docs.influxdata.com/flux/v0.x/stdlib/universe/pivot/"&gt;Pivot&lt;/a&gt; data&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;a href="https://docs.influxdata.com/flux/v0.x/stdlib/universe/group/"&gt;Group&lt;/a&gt; by any column&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;a href="https://docs.influxdata.com/flux/v0.x/stdlib/universe/window/"&gt;Window&lt;/a&gt; functions by date&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;
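&lt;p&gt;For example, &lt;code class="language-javascript"&gt;pivot&lt;/code&gt; can turn the per-field rows of the stocks measurement into columns of a single table, which is the shape a candlestick panel expects. A sketch, assuming the field names match the imported CSV columns:&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-javascript"&gt;from(bucket: "my-bucket")
  |&amp;gt; range(start: 2016-01-01T00:00:00Z, stop: 2016-12-31T00:00:00Z)
  |&amp;gt; filter(fn: (r) =&amp;gt; r["_measurement"] == "stocks" and r["symbol"] == "AAPL")
  // One column per field (open, low, high, close, volume), one row per timestamp
  |&amp;gt; pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")&lt;/code&gt;&lt;/pre&gt;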

&lt;p&gt;Additional resource on using InfluxDB and Grafana: &lt;a href="https://www.influxdata.com/blog/how-integrate-gafana-home-assistant/"&gt;How to Integrate Grafana with Home Assistant&lt;/a&gt;&lt;/p&gt;
</description>
      <pubDate>Tue, 20 Dec 2022 07:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/getting-started-influxdb-grafana/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/getting-started-influxdb-grafana/</guid>
      <category>Product</category>
      <category>Use Cases</category>
      <category>Getting Started</category>
      <author>Charles Mahler (InfluxData)</author>
    </item>
  </channel>
</rss>
