Smart Home Monitoring with InfluxDB 3, Google Nest, and Grafana

Your smart home devices generate vast amounts of scattered data. This tutorial shows you how to centralize it into a unified platform using InfluxDB 3 and Grafana. You’ll not only track your home’s vital signs but also learn professional software development concepts, such as time series database design and building resilient data pipelines, that apply to a wide range of monitoring and analytics systems.

Before we begin, ensure you have:

  • Basic familiarity with Python and API concepts
  • Administrative access to your router (for bandwidth monitoring)
  • At least one smart device (Nest thermostat, smart meter, etc.)
  • A computer or Raspberry Pi to run InfluxDB 3, Grafana, and Python programs

Understanding what you’re working with

Time series data differs fundamentally from traditional relational data. Instead of focusing on relationships between entities, we’re capturing how values change over time. Each data point consists of:

  • Timestamp: When the measurement was taken
  • Measurement: What we’re measuring (temperature, power, bandwidth)
  • Tags: Metadata that helps us categorize data (device_id, location, type)
  • Fields: The actual values measured

This structure makes InfluxDB handy for IoT data because it’s optimized for write-heavy workloads with time-based queries. We define this in a syntax called “line protocol,” and it looks like this:

weather,location=london,season=summer temperature=30 1465839830100400200

Understanding this syntax:

weather is the name of your database table, also known as a measurement.

location=london,season=summer is the tag set: comma-separated key-value pairs that attach metadata to the point.

temperature=30 is the field set, which holds the actual measured values.

1465839830100400200 is optional; it’s the timestamp 2016-06-13T17:43:50.1004002Z (RFC3339) expressed in Unix nanoseconds. If you don’t provide a timestamp, InfluxDB uses its server time in UTC, with nanosecond precision.
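
Before wiring up real devices, it helps to write one point by hand. Here’s a minimal sketch using the v3 Python client (installed later in this tutorial); the host, token, and database values are placeholders for the instance you’ll set up in the next section:

# write_example.py: write a single line protocol point by hand
from influxdb_client_3 import InfluxDBClient3

client = InfluxDBClient3(
    host="http://localhost:8181",   # where InfluxDB 3 will listen (next section)
    token="your-influxdb-token",    # placeholder admin token
    database="home-data",           # placeholder database name
)

# The client accepts a raw line protocol string; because the timestamp is
# omitted, the server assigns one at write time.
client.write(record="weather,location=london,season=summer temperature=30")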

Setting up InfluxDB 3

We’ll use InfluxDB 3 Enterprise’s free at-home license, which only requires you to provide your email address. It is limited to 2 CPU cores and is for personal use only. Check your inbox and follow the verification link to activate the at-home license.

# Pull image from Docker for InfluxDB 3 Enterprise
docker pull influxdb:3-enterprise

# Run InfluxDB 3 Enterprise with proper configuration
docker run -d \
  --name influxdb3-enterprise \
  -p 8181:8181 \
  -v $PWD/data:/var/lib/influxdb3/data \
  -v $PWD/plugins:/var/lib/influxdb3/plugins \
  -e INFLUXDB3_ENTERPRISE_LICENSE_EMAIL=your-email@example.com \
  influxdb:3-enterprise \
    influxdb3 serve \
      --node-id=node0 \
      --cluster-id=cluster0 \
      --object-store=file \
      --data-dir=/var/lib/influxdb3/data \
      --host=0.0.0.0 \
      --port=8181
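
To confirm the server came up, you can ping its /health endpoint (the same endpoint the optional health-check script at the end of this tutorial uses). A quick sketch:

# check_influx.py: sanity-check that InfluxDB 3 is answering
import requests

response = requests.get("http://localhost:8181/health", timeout=5)
print(response.status_code, response.text)  # expect 200 once the server is ready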

Building a robust data collector

We’ll focus on creating a single comprehensive data collector that covers all the patterns you’ll need for any IoT integration. This example uses a Nest thermostat, but the principles apply to any smart device or API. We’ll build a collector that polls the Google Nest API and writes to InfluxDB 3 Enterprise using the v3 Python client.

  1. Create a database and optionally set a retention period. You’ll also need an admin token for the Python client and Grafana later; you can generate one with influxdb3 create token --admin and save it for the .env file below.
    influxdb3 create database home-data
  2. Nest API setup. To collect data from your Nest thermostat, you need API access. To do this:
  • Go to Google Cloud Console.
  • Create a new project.
  • Save your Project ID.
  • Visit Device Access Console and follow the console instructions.
  • Create a project, link it to your Google Cloud project, and download OAuth credentials.
  • Create a Python program “get_nest_token.py” as follows:
# get_nest_token.py
import requests
import webbrowser
from urllib.parse import urlencode

CLIENT_ID = "your-client-id-here"
CLIENT_SECRET = "your-client-secret-here"

# Generate authorization URL
auth_url = f"https://accounts.google.com/o/oauth2/v2/auth?{urlencode({
    'client_id': CLIENT_ID,
    'redirect_uri': 'http://localhost',
    'response_type': 'code',
    'scope': 'https://www.googleapis.com/auth/sdm.service',
    'access_type': 'offline'
})}"

print(f"Visit: {auth_url}")
webbrowser.open(auth_url)

# Get authorization code from redirect URL
auth_code = input("Enter the code from the redirect URL: ")

# Exchange for tokens
token_response = requests.post('https://oauth2.googleapis.com/token', data={
    'client_id': CLIENT_ID,
    'client_secret': CLIENT_SECRET,
    'code': auth_code,
    'grant_type': 'authorization_code',
    'redirect_uri': 'http://localhost'
})

tokens = token_response.json()

print(f"Access Token: {tokens['access_token']}")
print(f"Refresh Token: {tokens['refresh_token']}")

Building the data collector

  1. Create a .env file and save it locally.
    NEST_ACCESS_TOKEN=your_access_token_here
    GOOGLE_CLOUD_PROJECT_ID=your_project_id
    INFLUXDB_HOST=http://localhost:8181
    INFLUXDB_TOKEN=your_influxdb_token
    INFLUXDB_DATABASE=home-data
  2. Install the Python dependencies.
    pip install influxdb3-python requests python-dotenv
  3. Create a new Python program, “nest_collector.py,” that will act as the data collector and write to InfluxDB 3.
    # nest_collector.py
    import os
    import time
    import logging
    from datetime import datetime, timezone
    from functools import wraps
    import requests
    from influxdb_client_3 import InfluxDBClient3
    from dotenv import load_dotenv
    
    load_dotenv()
    
    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s - %(levelname)s - %(message)s"
    )
    
    def retry_on_failure(max_retries=3, delay=5):
        def decorator(func):
            @wraps(func)
            def wrapper(*args, **kwargs):
                for attempt in range(max_retries):
                    try:
                        return func(*args, **kwargs)
                    except Exception as e:
                        if attempt == max_retries - 1:
                            logging.error(f"{func.__name__} failed: {e}")
                            raise
                        logging.warning(f"Retry {attempt + 1}: {e}")
                        time.sleep(delay)
            return wrapper
        return decorator
    
    class NestCollector:
        def __init__(self):
            self.access_token = os.getenv("NEST_ACCESS_TOKEN")
            self.project_id = os.getenv("GOOGLE_CLOUD_PROJECT_ID")
    
            if not self.access_token or not self.project_id:
                raise ValueError("Missing NEST_ACCESS_TOKEN or GOOGLE_CLOUD_PROJECT_ID in .env file")
    
            # Initialize InfluxDB 3 client
            self.client = InfluxDBClient3(
                host=os.getenv("INFLUXDB_HOST", "http://localhost:8181"),
                token=os.getenv("INFLUXDB_TOKEN"),
                database=os.getenv("INFLUXDB_DATABASE", "home-data"),
            )
    
            # Test connection
            try:
                self.client.query("SELECT 1", language="sql")
                logging.info("InfluxDB connection successful")
            except Exception as e:
                logging.error(f"InfluxDB connection failed: {e}")
                raise
    
        @retry_on_failure(max_retries=3, delay=5)
        def get_thermostat_data(self):
            """Fetch data from Nest API"""
            url = f"https://smartdevicemanagement.googleapis.com/v1/enterprises/{self.project_id}/devices"
            headers = {
                "Authorization": f"Bearer {self.access_token}",
                "Content-Type": "application/json"
            }
    
            response = requests.get(url, headers=headers, timeout=30)
            response.raise_for_status()
    
            devices = response.json().get("devices", [])
            data_points = []        
    
            for device in devices:
                if "THERMOSTAT" not in device.get("type", ""):
                    continue
    
                traits = device.get("traits", {})
                device_id = device.get("name", "").split("/")[-1]
    
                # Extract measurements
                temp_trait = traits.get("sdm.devices.traits.Temperature", {})
                humidity_trait = traits.get("sdm.devices.traits.Humidity", {})
                hvac_trait = traits.get("sdm.devices.traits.ThermostatHvac", {})
                mode_trait = traits.get("sdm.devices.traits.ThermostatMode", {})
                setpoint_trait = traits.get("sdm.devices.traits.ThermostatTemperatureSetpoint", {})
                info_trait = traits.get("sdm.devices.traits.Info", {})
    
                try:
                    temp_celsius = float(temp_trait.get("ambientTemperatureCelsius", 0))
                    humidity = float(humidity_trait.get("ambientHumidityPercent", 0))
                except (TypeError, ValueError):
                    continue
    
                # Build data point for InfluxDB
                point = {
                    "measurement": "nest_thermostat",
                    "tags": {
                        "device_id": device_id,
                        "room": info_trait.get("customName", "main"),
                        "device_type": "thermostat"
                    },
                    "fields": {
                        "temperature_celsius": temp_celsius,
                        "temperature_fahrenheit": temp_celsius * 9/5 + 32,
                        "humidity_percent": humidity,
                        "hvac_status": hvac_trait.get("status", "OFF"),
                        "hvac_mode": hvac_trait.get("mode", "UNKNOWN")
                    },
                    "time": int(datetime.now(timezone.utc).timestamp())
                }
    
                # Add setpoint temperatures if available
                if "heatCelsius" in setpoint_trait:
                    heat_c = float(setpoint_trait["heatCelsius"])
                    point["fields"]["heat_setpoint_celsius"] = heat_c
                    point["fields"]["heat_setpoint_fahrenheit"] = heat_c * 9/5 + 32                `
    
                if "coolCelsius" in setpoint_trait:
                    cool_c = float(setpoint_trait["coolCelsius"])
                    point["fields"]["cool_setpoint_celsius"] = cool_c
                    point["fields"]["cool_setpoint_fahrenheit"] = cool_c * 9/5 + 32
    
                data_points.append(point)
                logging.info(f"Collected {device_id}: {temp_celsius:.1f}°C, {humidity:.0f}%")           
            return data_points
    
        def write_to_influx(self, points):
            """Write data to InfluxDB"""
            if not points:
                logging.warning("No data to write")
                return
    
            success_count = 0
            for point in points:
                try:
                    self.client.write(record=point, write_precision="s")
                    success_count += 1
                except Exception as e:
                    logging.error(f"Write failed: {e}")
    
            logging.info(f"Wrote {success_count}/{len(points)} points")
    
        def run_cycle(self):
            """Run one collection cycle"""
            try:
                data = self.get_thermostat_data()
                self.write_to_influx(data)
            except Exception as e:
                logging.error(f"Cycle failed: {e}")
    
    if __name__ == "__main__":
        collector = NestCollector()
    
        try:
            while True:
                collector.run_cycle()
                time.sleep(300)  # Run every 5 minutes
        except KeyboardInterrupt:
            logging.info("Stopped by user")

Installing and configuring Grafana

# Install Grafana using Docker
docker run -d \
  --name grafana \
  -p 3000:3000 \
  -v grafana-storage:/var/lib/grafana \
  -e "GF_SECURITY_ADMIN_PASSWORD=your-secure-password" 
  grafana/grafana:latest
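
Once the container is running, Grafana listens on http://localhost:3000; the default username is admin, paired with the password you set via GF_SECURITY_ADMIN_PASSWORD.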

Essential dashboard configuration

  1. Make sure Grafana is up and running locally on port 3000.
    • Log into Grafana using your username/password at localhost:3000
    • Navigate to Connections > search for “InfluxDB” > “Add new data source”
    • Name: InfluxDB3 Enterprise Home
    • Query language: SQL
    • Database: home-data
    • URL: http://influxdb3-enterprise:8181 to connect to InfluxDB 3 Enterprise (container-name URLs resolve only when Grafana and InfluxDB share a user-defined Docker network; use http://localhost:8181 if Grafana connects from the host)
    • Token: paste the value of the INFLUXDB_TOKEN environment variable from your .env file, and toggle Insecure Connection to “ON”
  2. Create dashboards with two panels using the following SQL queries to monitor the data:

Current Temperature Panel

SELECT
  temperature_fahrenheit,
  device_id
FROM nest_thermostat
WHERE time >= now() - interval '5 minutes'
ORDER BY time DESC
LIMIT 1
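
Note that LIMIT 1 returns only the single newest row across all devices; if you run more than one thermostat, filter by device_id (or group by it) to show the latest reading per device.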

24-Hour Trend Panel

SELECT
  date_trunc('minute', time) as time,
  AVG(temperature_fahrenheit) as avg_temp
FROM nest_thermostat
WHERE time >= now() - interval '24 hours'
GROUP BY date_trunc('minute', time)
ORDER BY time
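
date_trunc('minute', time) produces one averaged point per minute; if the 24-hour panel feels too dense, truncating to 'hour' instead works the same way at a coarser grain.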

(Optional) Health Monitoring Script

Keep your systems healthy with simple checks by creating the script “health_check.py” as follows:

# health_check.py
import requests
from datetime import datetime

def check_health():
    services = {
        'InfluxDB': 'http://localhost:8181/health',
        'Grafana': 'http://localhost:3000/api/health'
    }

    print(f"\n=== Health Check - {datetime.now().strftime('%H:%M:%S')} ===")

    all_healthy = True
    for service, url in services.items():
        try:
            response = requests.get(url, timeout=5)
            healthy = response.status_code == 200
            status = "✅" if healthy else "❌"
            print(f"{service}: {status}")
            all_healthy = all_healthy and healthy
        except Exception:
            print(f"{service}: ❌ Connection failed")
            all_healthy = False

    print(f"Overall: {'✅ HEALTHY' if all_healthy else '❌ ISSUES'}\n")

if __name__ == "__main__":
    check_health()
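
Run it manually with python health_check.py whenever you want a status snapshot, or schedule it with cron or a systemd timer to check every few minutes.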

Conclusion

What you’ve built here goes far beyond monitoring a thermostat: you’ve implemented the foundational patterns that power modern observability systems at scale. The retry logic you wrote to handle flaky IoT APIs is the same resilience pattern that keeps services like Netflix running when dependencies fail, and the time series data modeling and visualization pipeline you created mirrors the monitoring infrastructure major tech companies use to track millions of metrics per second.

Most importantly, you now understand how to think about data as a stream of events over time rather than static records in tables. That mental shift will serve you well whether you’re building application monitoring dashboards, analyzing business metrics, or working with any system that generates continuous data streams.