<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>InfluxData Blog - Heather Downing</title>
    <description>Posts by Heather Downing on the InfluxData Blog</description>
    <link>https://www.influxdata.com/blog/author/heather-downing/</link>
    <language>en-us</language>
    <lastBuildDate>Tue, 29 Jul 2025 08:00:00 +0000</lastBuildDate>
    <pubDate>Tue, 29 Jul 2025 08:00:00 +0000</pubDate>
    <ttl>1800</ttl>
    <item>
      <title>Real-Time Flight Telemetry Monitoring with InfluxDB 3 Enterprise</title>
      <description>&lt;p&gt;When Microsoft Flight Simulator 2024 generates telemetry data at 30-60 FPS, capturing and processing that stream in real-time becomes a fascinating engineering challenge. We built a complete telemetry pipeline that reads over 90 flight parameters through FSUIPC, streams them to &lt;a href="https://www.influxdata.com/products/influxdb-3-enterprise/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=flight_telemetry_monitoring_influxdb_3_enterprise&amp;amp;utm_content=blog"&gt;InfluxDB 3 Enterprise&lt;/a&gt;, and displays them in real-time dashboards that respond in under 5 milliseconds.&lt;/p&gt;

&lt;div style="padding:56.25% 0 0 0;position:relative;margin-bottom:40px; margin-top:20px;"&gt;&lt;iframe src="https://player.vimeo.com/video/1105269491?h=2ecd4820aa&amp;amp;badge=0&amp;amp;autopause=0&amp;amp;player_id=0&amp;amp;app_id=58479&amp;amp;autoplay=1" frameborder="0" allow="autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share" referrerpolicy="strict-origin-when-cross-origin" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="InfluxDB3 Enterprise FlightSim Demo"&gt;&lt;/iframe&gt;&lt;/div&gt;
&lt;script src="https://player.vimeo.com/api/player.js"&gt;&lt;/script&gt;

&lt;p&gt;This isn’t just about gaming (although it IS fun)—it’s a blueprint for building enterprise-grade aviation telemetry systems that can scale from flight training simulators to operational aircraft monitoring.&lt;/p&gt;

&lt;h2 id="the-architecture-streaming-aerospace-telemetry"&gt;&lt;strong&gt;The architecture: streaming aerospace telemetry&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/bc55b5b11c0641228c62dded0c373890/ec03bf38dabd554322d7f861669b3712/unnamed.png" alt="" /&gt;Our data pipeline uses aerospace simulator data as a realistic testbed for enterprise telemetry systems.&lt;/p&gt;

&lt;h4 id="key-components"&gt;Key Components&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://www.flightsimulator.com/"&gt;Microsoft Flight Simulator 2024&lt;/a&gt;: Where the flight telemetry data points originate from&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.fsuipc.com/"&gt;FSUIPC7&lt;/a&gt;: Interface to flight simulator memory using the simulator’s SimConnect API&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;C# Data Bridge&lt;/strong&gt;: Reads memory blocks in batches using the &lt;a href="http://fsuipc.paulhenty.com/"&gt;FSUIPC Client DLL for .NET&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.influxdata.com/products/influxdb-3-enterprise/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=flight_telemetry_monitoring_influxdb_3_enterprise&amp;amp;utm_content=blog"&gt;InfluxDB 3 Enterprise&lt;/a&gt;: Time series database with last n-value caching and compaction&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Next.js Dashboard&lt;/strong&gt;: Dual-mode visual app of real-time instruments and historical analysis&lt;/p&gt;

&lt;h2 id="reading-flight-telemetry-data-efficiently"&gt;&lt;strong&gt;Reading flight telemetry data efficiently&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;Microsoft Flight Simulator exposes hundreds of data points through FSUIPC7 offsets. Reading each one individually would mean 90+ separate API calls every 16-33 milliseconds (once per frame at 30-60 FPS).&lt;/p&gt;

&lt;h4 id="solution-memory-block-strategy"&gt;Solution: Memory Block Strategy&lt;/h4&gt;

&lt;p&gt;Instead of individual reads, we organized related metrics into logical memory blocks using the &lt;a href="http://fsuipc.paulhenty.com/"&gt;FSUIPC Client DLL for .NET&lt;/a&gt;:&lt;/p&gt;

&lt;pre class="line-numbers"&gt;&lt;code class="language-c"&gt;_memoryBlocks = new Dictionary"string, MemoryBlock"
{    // Position, attitude, altitude
    { "FlightData", new MemoryBlock(0x0560, 48) },
    { "Engine1", new MemoryBlock(0x088C, 64) },
    { "Engine2", new MemoryBlock(0x0924, 64) },    // Flight controls, trim       
    { "Controls", new MemoryBlock(0x0BC0, 44) },
    { "Autopilot", new MemoryBlock(0x07BC, 96) },    // VOR, ILS, navigation
    { "Navigation", new MemoryBlock(0x085C, 32) },
    { "Fuel", new MemoryBlock(0x0B74, 24) },    // Callsign, tail number
    { "AircraftData", new MemoryBlock(0x3130, 72) }
};&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Performance Impact:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Before&lt;/strong&gt;: 90+ individual FSUIPC calls per frame = 2,700-5,400 calls/second&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;After&lt;/strong&gt;: 8 memory block reads per frame = 240-480 calls/second&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each memory block read fetches multiple related parameters in a single operation, dramatically reducing the communication overhead with the flight simulator.&lt;/p&gt;

&lt;h2 id="writing-to-the-database-without-bottlenecks"&gt;&lt;strong&gt;Writing to the database without bottlenecks&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;Writing individual telemetry points to a database would create a network bottleneck. At 60 FPS with 90+ fields, that’s thousands of individual database writes per second.&lt;/p&gt;

&lt;h4 id="solution-intelligent-batching"&gt;Solution: Intelligent Batching&lt;/h4&gt;

&lt;pre class="line-numbers"&gt;&lt;code class="language-c"&gt;// Batching configuration
MaxBatchSize: 100
MaxBatchAgeMs: 100 milliseconds&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The system buffers telemetry points and flushes when either condition is met:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Size trigger&lt;/strong&gt;: 100 rows of &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/reference/line-protocol/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=flight_telemetry_monitoring_influxdb_3_enterprise&amp;amp;utm_content=blog"&gt;line protocol&lt;/a&gt;, each row consisting of 91 data points as fields/tags&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Time trigger&lt;/strong&gt;: 100 ms elapsed since last flush&lt;/li&gt;
&lt;/ul&gt;
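&lt;p&gt;As a rough sketch of that dual-trigger logic (the names and structure below are illustrative JavaScript, not the actual C# bridge code), a batcher flushes on whichever condition fires first:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-javascript"&gt;// Sketch of the dual-trigger flush: rows buffer until maxBatchSize
// accumulate (size trigger) or maxBatchAgeMs elapse (time trigger).
class TelemetryBatcher {
  constructor(flush, maxBatchSize, maxBatchAgeMs) {
    this.flush = flush;              // e.g. an HTTP write to InfluxDB
    this.maxBatchSize = maxBatchSize || 100;
    this.maxBatchAgeMs = maxBatchAgeMs || 100;
    this.buffer = [];
    this.timer = null;
  }

  add(lineProtocolRow) {
    this.buffer.push(lineProtocolRow);
    if (this.buffer.length === this.maxBatchSize) {
      this.drain();                  // size trigger: 100 rows
    } else if (this.timer === null) {
      // time trigger: armed when the first row arrives
      this.timer = setTimeout(this.drain.bind(this), this.maxBatchAgeMs);
    }
  }

  drain() {
    if (this.timer !== null) {
      clearTimeout(this.timer);
      this.timer = null;
    }
    if (this.buffer.length === 0) return;
    const batch = this.buffer;
    this.buffer = [];
    this.flush(batch.join("\n"));    // one line-protocol payload per flush
  }
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;At 30-60 complete snapshots per second, this settles into a handful of flushes per second instead of thousands of individual writes.&lt;/p&gt;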

&lt;h4 id="real-world-performance"&gt;&lt;strong&gt;Real-World Performance&lt;/strong&gt;&lt;/h4&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Average write latency&lt;/strong&gt;: 1.3 ms per row&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Sustained throughput&lt;/strong&gt;: Easily handles thousands of metrics per second&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Data consistency&lt;/strong&gt;: Reliable data collection during extended testing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach captures approximately six complete flight data snapshots per database write operation, maintaining near real-time performance while minimizing database overhead.&lt;/p&gt;

&lt;h2 id="monitoring-data-visually-in-real-time"&gt;Monitoring data visually in real-time&lt;/h2&gt;

&lt;p&gt;Traditional time series queries for “current” values are expensive. Consider this typical query:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-sql"&gt;SELECT * FROM flight_data 
WHERE time &amp;gt;= now() - INTERVAL '1 minute' 
ORDER BY time DESC LIMIT 1&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This scans recent data, sorts by time, and returns the latest point. That may be acceptable for historical analysis, but it is too slow for real-time cockpit displays.&lt;/p&gt;

&lt;h4 id="solution-last-value-cache-lvc"&gt;Solution: Last Value Cache (LVC)&lt;/h4&gt;

&lt;p&gt;Data from InfluxDB 3’s built-in &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/admin/last-value-cache/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=flight_telemetry_monitoring_influxdb_3_enterprise&amp;amp;utm_content=blog"&gt;Last Value Cache&lt;/a&gt; (LVC) drives the Cockpit tab on the dashboard, which displays only the most recent data point. This is good for the current heading, attitude, GPS location, altitude, airspeed, vertical speed, etc.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using SQL to query the LVC:&lt;/strong&gt;&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-sql"&gt;SELECT * FROM last_cache('flight_data', 'flightsim_flight_data_lvc')&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;LVC is an in-memory cache that maintains the most recent values for each metric (at the time of the last WAL flush), enabling instant access without a storage layer round-trip.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LVC Configuration for Flight Data:&lt;/strong&gt;&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 create last_cache \
  --database flightsim \
  --table flight_data \
  --key-columns aircraft_tailnumber \
  --value-columns flight_altitude,speed_true_airspeed,flight_heading_magnetic,flight_latitude,flight_longitude \
  --count 1 \
  --ttl 10s \
  flightsim_flight_data_lvc&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Performance Transformation:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The LVC enables cockpit displays that update at 5 FPS (200 ms intervals) while users experience them as instantaneous. The database typically returns the LVC row in less than 10 milliseconds.&lt;/p&gt;
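&lt;p&gt;A minimal sketch of that polling loop (the endpoint URL, database name, and JSON output format are assumptions based on this demo; adapt them, and add authentication, for your own setup):&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-javascript"&gt;// Sketch: refresh cockpit gauges at 5 FPS from the Last Value Cache.
function buildLvcRequest(endpointUrl, db) {
  return {
    url: endpointUrl + "api/v3/query_sql",
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        db: db,
        q: "SELECT * FROM last_cache('flight_data', 'flightsim_flight_data_lvc')",
        format: "json"
      })
    }
  };
}

// Poll every 200 ms; each round trip typically finishes well before
// the next tick, so the display never backs up.
function startCockpitPolling(render) {
  const req = buildLvcRequest("http://localhost:8181/", "flightsim");
  setInterval(function () {
    fetch(req.url, req.options)
      .then(function (res) { return res.json(); })
      .then(render);
  }, 200);
}&lt;/code&gt;&lt;/pre&gt;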

&lt;h2 id="optimizing-storage-for-streaming-data"&gt;&lt;strong&gt;Optimizing storage for streaming data&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;High-frequency telemetry creates storage challenges. At 90+ fields × 60 FPS, we generate thousands of data points per second, creating hundreds of small files that hurt query performance. It’s time to save some space and optimize our query performance!&lt;/p&gt;

&lt;h4 id="flight-sim-data-compaction-strategy"&gt;Flight Sim Data Compaction Strategy&lt;/h4&gt;

&lt;p&gt;We set the following environment variables to &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/reference/config-options/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=flight_telemetry_monitoring_influxdb_3_enterprise&amp;amp;utm_content=blog"&gt;configure our InfluxDB server&lt;/a&gt;:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;# COMPACTION OPTIMIZATION
INFLUXDB3_GEN1_DURATION=5m
INFLUXDB3_ENTERPRISE_COMPACTION_GEN2_DURATION=5m
INFLUXDB3_ENTERPRISE_COMPACTION_MAX_NUM_FILES_PER_PLAN=100

# PERFORMANCE TUNING
INFLUXDB3_DATAFUSION_NUM_THREADS=16
INFLUXDB3_EXEC_MEM_POOL_BYTES=40%
INFLUXDB3_PARQUET_MEM_CACHE_SIZE=40%

# REAL-TIME DATA ACCESS
INFLUXDB3_WAL_FLUSH_INTERVAL=100ms
INFLUXDB3_WAL_MAX_WRITE_BUFFER_SIZE=200000&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="why-these-settings-work-together"&gt;Why These Settings Work Together&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Compaction Optimization&lt;/strong&gt; breaks data into smaller 5-minute files instead of the default 10-minute files. This creates more frequent but smaller cleanup operations that don’t overwhelm the system (we ran this on a Windows gaming laptop that was also running Flight Simulator). We also limit how many files get processed at once (100 vs 500 default) to prevent performance spikes. (&lt;a href="https://docs.influxdata.com/influxdb3/enterprise/reference/config-options/#compaction"&gt;Learn more about InfluxDB 3’s generation-based compaction strategy&lt;/a&gt;.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Performance Tuning&lt;/strong&gt; allocates more memory resources for both query execution and Parquet file caching, improving read performance by keeping more data in memory and allowing more parallel query processing threads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real-Time Data Access&lt;/strong&gt; reduces write-ahead log (WAL) flush interval from 1 second to 100 ms, making data available for queries much faster. The increased buffer size accommodates more writes before forcing a flush, balancing throughput with latency.&lt;/p&gt;
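&lt;p&gt;On a self-hosted install, these variables can simply be exported in the shell that launches the server. The &lt;code&gt;serve&lt;/code&gt; flags below show a typical local-disk setup, not the exact demo configuration:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;# Apply the tuning variables, then start the server (node id and
# paths are illustrative; adjust them for your install)
export INFLUXDB3_GEN1_DURATION=5m
export INFLUXDB3_WAL_FLUSH_INTERVAL=100ms
export INFLUXDB3_WAL_MAX_WRITE_BUFFER_SIZE=200000

influxdb3 serve \
  --node-id node0 \
  --object-store file \
  --data-dir ~/.influxdb3/data&lt;/code&gt;&lt;/pre&gt;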

&lt;p&gt;&lt;strong&gt;Measured Results:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;142 compaction events&lt;/strong&gt; automatically triggered over a 24-hour period&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;85% average reduction in the size of new data written&lt;/strong&gt; since the previous compaction&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;File optimization&lt;/strong&gt;: Significant reduction in file count (example: 127 small files → 18 optimized files)&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Query performance&lt;/strong&gt;: Faster historical data access due to fewer files and improved data organization&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/0703c48653334992abb0cac8894df13d/7aacc702de8ca66d1c9e26f1207b5a53/unnamed.png" alt="" /&gt;&lt;/p&gt;

&lt;p&gt;Our monitoring system polls the database directory size &lt;em&gt;every 10 seconds&lt;/em&gt;. In the graph above, you can see decreases in disk usage, which indicate WAL file deletions after the &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/reference/config-options/#snapshotted-wal-files-to-keep"&gt;snapshot&lt;/a&gt;, and &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/reference/config-options/#compaction"&gt;compaction&lt;/a&gt; events that reorganize the data into Parquet files for faster querying. This is available to view on the Data tab of the dashboard.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disk Usage Comparison: InfluxDB 3 Core vs Enterprise&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To differentiate between disk space savings due to WAL file deletion and data compaction into the Parquet files, we compared the performance of InfluxDB 3 &lt;a href="https://docs.influxdata.com/influxdb3/core/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=flight_telemetry_monitoring_influxdb_3_enterprise&amp;amp;utm_content=blog"&gt;Core&lt;/a&gt; and &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=flight_telemetry_monitoring_influxdb_3_enterprise&amp;amp;utm_content=blog"&gt;Enterprise&lt;/a&gt;; compaction is an Enterprise-only feature. While Core registered a disk usage drop every 15 minutes, Enterprise did so more often. For a fair comparison, we compared both at the 15-minute mark and found:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Core dropped from around 500MB to 230MB due to WAL file deletion&lt;/li&gt;
  &lt;li&gt;Enterprise dropped from around 160MB to 30MB due to both WAL file deletion and compaction&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Enterprise’s starting size was lower because it had already gone through previous deletion/compaction events, but we can determine from these numbers that without compaction or WAL file deletion, we would have 500MB of data. WAL file deletion reduces this by 54% to 230MB. Compaction then brings that 230MB down to 30MB—an 87% reduction from compaction, for a &lt;em&gt;94% total reduction overall&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;NOTE: These percentages represent savings of new data written since the directory size last dropped and are sequential, not compounding.&lt;/p&gt;

&lt;h2 id="two-tier-dashboard-architecture"&gt;&lt;strong&gt;Two-tier dashboard architecture&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;The Next.js dashboard demonstrates two patterns for different use cases:&lt;/p&gt;

&lt;h4 id="cockpit-tab-real-time-instruments"&gt;&lt;strong&gt;Cockpit Tab: Real-Time Instruments&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;Uses LVC for instant current values, perfect for simulating aircraft instruments:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;// Based on an LVC of the `flight_data` measurements in the
// `flightsim` bucket  called `flightsim_flight_data_lvc`
// with a key column of `aircraft_tailnumber` and the fields
// `flight_altitude`, `speed_true_airspeed`,
// `flight_heading_magnetic`, `flight_latitude`,
// `flight_longitude`, `speed_vertical`,
// `autopilot_heading_target`, `autopilot_master`,
// `autopilot_altitude_target`, `flight_bank`, `flight_pitch`,
// `aircraft_airline`, `aircraft_callsign`, and `aircraft_type`
const q = ```
  SELECT * FROM last_cache(
    'flight_data', 'flightsim_flight_data_lvc')`;

// Send the query to the InfluxDB server REST endpoint
const dataResponse =
  await fetch(`${endpointUrl}api/v3/query_sql`, {...});&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Displays:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Current heading&lt;/li&gt;
  &lt;li&gt;Altitude&lt;/li&gt;
  &lt;li&gt;Airspeed&lt;/li&gt;
  &lt;li&gt;GPS coordinates&lt;/li&gt;
  &lt;li&gt;Attitude&lt;/li&gt;
  &lt;li&gt;Autopilot settings&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Performance&lt;/strong&gt;: Sub-10 ms database response enables smooth 5 FPS updates that feel real-time to the person monitoring visually.&lt;/p&gt;

&lt;h4 id="data-tab-trends-and-historic-analysis"&gt;&lt;strong&gt;Data Tab: Trends and Historic Analysis&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;Uses traditional SQL queries for historical analysis and system monitoring:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-sql"&gt;SELECT * FROM flight_data 
WHERE time &amp;gt;= now() - INTERVAL '1 minute' 
ORDER BY time DESC LIMIT 20&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Recent measurements table with metric filtering&lt;/li&gt;
  &lt;li&gt;Database size monitoring over time&lt;/li&gt;
  &lt;li&gt;Compaction effectiveness tracking&lt;/li&gt;
  &lt;li&gt;Historical trend analysis&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Update frequency&lt;/strong&gt;: 1-second refresh cycle, appropriate for trend analysis without overwhelming the interface.&lt;/p&gt;

&lt;p&gt;NOTE: For a single-client read scenario like ours, reading data directly from object storage is quite performant. However, if your use case requires multiple users or systems to read simultaneously, consider how a &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/admin/distinct-value-cache/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=flight_telemetry_monitoring_influxdb_3_enterprise&amp;amp;utm_content=blog"&gt;Distinct Value Cache (DVC)&lt;/a&gt; can add efficiency.&lt;/p&gt;

&lt;h2 id="real-world-applications"&gt;&lt;strong&gt;Real-world applications&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;This project, while a fun challenge and excellent testbed, offers a blueprint for going beyond simulation into operational aviation systems:&lt;/p&gt;

&lt;h4 id="flight-training-organizations"&gt;&lt;strong&gt;Flight Training Organizations&lt;/strong&gt;&lt;/h4&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Real-time instructor oversight&lt;/strong&gt;: Monitor student performance during simulator sessions&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Objective assessment&lt;/strong&gt;: Data-driven evaluation of flying skills and decision-making&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Scenario replay&lt;/strong&gt;: Historical analysis for post-flight debriefing sessions&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="aircraft-manufacturers"&gt;&lt;strong&gt;Aircraft Manufacturers&lt;/strong&gt;&lt;/h4&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;System validation&lt;/strong&gt;: Test avionics behavior in controlled environments&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Performance modeling&lt;/strong&gt;: Compare simulated vs actual aircraft characteristics&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Pilot training programs&lt;/strong&gt;: Develop type-specific training curricula&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="research-and-development"&gt;&lt;strong&gt;Research and Development&lt;/strong&gt;&lt;/h4&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Human factors research&lt;/strong&gt;: Study pilot workload and performance patterns&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Safety analysis&lt;/strong&gt;: Investigate scenarios and incidents in controlled settings&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Technology integration&lt;/strong&gt;: Test new systems before expensive flight testing&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="getting-started-complete-implementation-guide"&gt;&lt;strong&gt;Getting started: complete implementation guide&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;Ready to build your own real-time flight monitoring system? The complete source code, configuration examples, and setup scripts demonstrate how. Whether you’re developing training systems, research platforms, or just enjoy MSFS, this example provides a good foundation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href="https://www.xbox.com/en-US/games/microsoft-flight-simulator-2024"&gt;Microsoft Flight Simulator 2024&lt;/a&gt; (for PC)&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://fsuipc.com/"&gt;FSUIPC7&lt;/a&gt; for MSFS 2024 (licensed version recommended for full data access)&lt;/li&gt;
  &lt;li&gt;&lt;a href="http://fsuipc.paulhenty.com/#downloads"&gt;FSUIPC Client DLL for .NET&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://docs.influxdata.com/influxdb3/enterprise/install/#download-and-install-the-latest-build-artifacts"&gt;InfluxDB 3 Enterprise v3.2&lt;/a&gt; (self-hosted)&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://dotnet.microsoft.com/en-us/download/visual-studio-sdks"&gt;.NET 8.0 SDK&lt;/a&gt; for the data bridge&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://nodejs.org/en/download/current"&gt;Node.js 22&lt;/a&gt;&lt;strong&gt;+&lt;/strong&gt; for the dashboard&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="repository-structure"&gt;&lt;strong&gt;Repository Structure&lt;/strong&gt;&lt;/h4&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href="https://github.com/Quorralyne/msfs2influxdb3-enterprise/"&gt;msfs2influxdb3-enterprise&lt;/a&gt;: C# data bridge application&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://github.com/Quorralyne/FlightSim2024-InfluxDB3Enterprise"&gt;FlightSim2024-InfluxDB3Enterprise&lt;/a&gt;: Next.js dashboard and demo setup&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id="hardware-requirements"&gt;&lt;strong&gt;Hardware Requirements&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;This demo was developed and tested on a &lt;a href="https://www.dell.com/en-us/shop/cty/pdp/spd/alienware-aurora-ac16251-gaming-laptop"&gt;2025 Alienware 16 Aurora Gaming Laptop&lt;/a&gt; running Windows (the only way MSFS is playable) with:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Intel Core i7-240H (10-core processor)&lt;/li&gt;
  &lt;li&gt;64GB DDR5 RAM&lt;/li&gt;
  &lt;li&gt;2TB NVMe SSD&lt;/li&gt;
  &lt;li&gt;NVIDIA GeForce RTX 5060 (8GB)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The memory-intensive nature of real-time telemetry processing, InfluxDB’s in-memory caching, and Microsoft Flight Simulator 2024’s requirements make adequate RAM the most critical component for smooth operation. While high-end specs aren’t strictly required, we recommend:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Minimum&lt;/strong&gt;: 16GB RAM, quad-core processor, SSD storage&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Recommended&lt;/strong&gt;: 32GB+ RAM for optimal InfluxDB 3 Enterprise performance with large datasets&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Storage&lt;/strong&gt;: SSD recommended for database performance and flight simulator load times&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Results may vary based on hardware configuration and system specifications.&lt;/em&gt;&lt;/p&gt;

&lt;h4 id="key-technical-achievements"&gt;&lt;strong&gt;Key Technical Achievements&lt;/strong&gt;&lt;/h4&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;LVC enables true real-time cockpit displays&lt;/strong&gt; with consistent sub-10 ms query performance.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Automated compaction makes continuous telemetry economically viable,&lt;/strong&gt; which results in an average of 87% storage savings.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Memory block reading strategy significantly reduces FSUIPC overhead&lt;/strong&gt; while maintaining data fidelity.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Intelligent batching delivers enterprise reliability&lt;/strong&gt; with stable operation during extended testing.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Microsoft Flight Simulator provides a realistic testbed&lt;/strong&gt; equivalent to real aircraft telemetry streams.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;From gaming to enterprise monitoring, the principles remain the same: efficient collection, smart caching, and automated optimization enable real-time insights at any scale.&lt;/p&gt;

&lt;style&gt;
span.token.variable, span.token.string {
  color: #000!important;
  }
&lt;/style&gt;

</description>
      <pubDate>Tue, 29 Jul 2025 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/flight-telemetry-monitoring-influxdb-3-enterprise/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/flight-telemetry-monitoring-influxdb-3-enterprise/</guid>
      <category>Developer</category>
      <author>Heather Downing (InfluxData)</author>
    </item>
    <item>
      <title>Moving from Relational to Time Series Databases</title>
      <description>&lt;p&gt;I’ve been building apps with SQL Server for years. Everything worked well until I started dealing with sensor data, stock trade volume, and IoT telemetry. As the volume of time-stamped records grew into the millions, I saw relational databases struggling with workloads they weren’t designed for.&lt;/p&gt;

&lt;p&gt;That’s when I explored time series databases. The performance improvements were significant, but what surprised me was the mental shift required.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Relational databases trained me to think:&lt;/strong&gt; “What objects do I need, and how are they related?”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Time series databases made me ask:&lt;/strong&gt; “What measurements am I taking and when?”&lt;/p&gt;

&lt;p&gt;This fundamental change in thinking transforms how you approach certain data problems. But when does it make sense to switch?&lt;/p&gt;

&lt;h2 id="when-relational-databases-start-to-struggle"&gt;When relational databases start to struggle&lt;/h2&gt;

&lt;p&gt;The breaking point usually isn’t query speed—it’s when your database starts experiencing lock contention because you’re hammering it with high-frequency updates while trying to read data at the same time. You’ll know you’ve hit it when:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Dashboards freeze during data ingestion spikes&lt;/li&gt;
  &lt;li&gt;Concurrent reads and writes start blocking each other&lt;/li&gt;
  &lt;li&gt;Your “last 24 hours” queries take 30+ seconds&lt;/li&gt;
  &lt;li&gt;You’re spending more time optimizing indexes than building features&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is where time series databases shine. They’re built for constant writes with occasional reads, not the balanced read/write patterns that relational databases expect. They also define the schema on write, which means you don’t have to define the table columns up front.&lt;/p&gt;
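&lt;p&gt;Schema on write is easiest to see in InfluxDB’s line protocol: a brand-new field simply appears in the next row you write, with no migration step. Here’s a hedged, illustrative row builder (the names follow the flight example used later in this post; real line protocol also has escaping and type-suffix rules this sketch skips):&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-javascript"&gt;// Illustrative line-protocol builder: measurement, tags, fields, and a
// nanosecond timestamp. A new field key (say, "heading") can just show
// up in a later row; no ALTER TABLE required.
function toLineProtocol(measurement, tags, fields, tsNanos) {
  function join(obj) {
    return Object.keys(obj).map(function (k) {
      return k + "=" + obj[k];
    }).join(",");
  }
  return measurement + "," + join(tags) + " " + join(fields) + " " + tsNanos;
}&lt;/code&gt;&lt;/pre&gt;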

&lt;p&gt;Here’s the thing about time series data: individual rows are meaningless. One temperature reading or GPS coordinate by itself tells you nothing—unless you’re dealing with real-time snapshots where that single moment actually matters (like the latest stock price or current system status). But for most time series use cases, the sheer volume of those rows starts to tell a story you never explicitly architected. That emergent structure also makes time series data a natural fit for machine learning.&lt;/p&gt;

&lt;h2 id="the-mental-and-data-model-shift"&gt;The mental and data model shift&lt;/h2&gt;

&lt;p&gt;Working with time series data means letting go of relational context. At first, this feels uncomfortable. You lose the immediate understanding of what each piece of data “belongs to” in a business sense.&lt;/p&gt;

&lt;p&gt;But something interesting happens as you adjust: patterns start emerging that you never noticed before. Time becomes your primary organizing principle, and you begin seeing trends, cycles, and anomalies that were invisible when the data was scattered across normalized tables.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Patterns emerge from the data itself rather than from the relationships you designed.&lt;/strong&gt;&lt;/p&gt;

&lt;h4 id="data-model-transformation"&gt;Data Model Transformation&lt;/h4&gt;

&lt;p&gt;This isn’t just a mental model shift—it’s a fundamental data model transformation. Let me show you what I mean:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Relational data model (SQL):&lt;/strong&gt;&lt;/p&gt;

&lt;pre class="language-bash code-toolbar"&gt;&lt;code class=" language-bash"&gt;-- Flights table
flight_id | airline | departure_time | arrival_time | origin | destination
----------|---------|----------------|--------------|--------|------------
AA1234 | American | 2024-01-15 08:00:00 | 2024-01-15 11:30:00 | JFK | LAX

-- Flight Metrics table (with foreign key)
id | flight_id | metric_type | value    | timestamp
---|-----------|-------------|----------|------------------------
1  | AA1234    | altitude    | 28500.0  | 2024-01-15 08:00:00
2  | AA1234    | speed       | 540.0    | 2024-01-15 08:00:00
3  | AA1234    | heading     | 270.0    | 2024-01-15 08:00:00
4  | AA1234    | altitude    | 29200.0  | 2024-01-15 09:00:00
5  | AA1234    | speed       | 560.0    | 2024-01-15 09:00:00
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Time series data model (SQL):&lt;/strong&gt;&lt;/p&gt;

&lt;pre class="language-bash code-toolbar"&gt;&lt;code class=" language-bash"&gt;-- Single measurement with tags and multiple fields
measurement: flight_metrics
tags: flight_id=AA1234, airline=American, origin=JFK, destination=LAX
timestamp             | altitude | speed | heading
----------------------|----------|-------|--------
2024-01-15 08:00:00   | 28500.0  | 540.0 | 270.0
2024-01-15 09:00:00   | 29200.0  | 560.0 | 268.0
2024-01-15 10:00:00   | 29800.0  | 555.0 | 269.0
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;See the difference? In the relational world, we’re building entities with attributes and relationships. In the time series world, we’re capturing measurements at specific moments. This shift in data structure changes how you interact with your data.&lt;/p&gt;

&lt;p&gt;The time series query states exactly what you mean: &lt;em&gt;give me the average altitude per minute&lt;/em&gt;. No pivoting, no CASE statements, no fighting the data model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Relational SQL:&lt;/strong&gt;&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-sql"&gt;-- Fighting to get time-based data out of a relational structure
SELECT 
    AVG(CASE WHEN metric_type = 'altitude' THEN value END) as avg_altitude,
    DATE_TRUNC('minute', timestamp) as minute
FROM flight_data 
WHERE flight_id = 'AA1234' 
GROUP BY DATE_TRUNC('minute', timestamp);

Results:
minute                  | avg_altitude
------------------------|-------------
2024-01-15 08:00:00     | 28500.0
2024-01-15 09:00:00     | 29200.0
2024-01-15 10:00:00     | 29800.0
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Time series SQL:&lt;/strong&gt;&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-sql"&gt;-- Direct expression of what you actually want
SELECT 
    time_bucket('1m', time) as minute,
    AVG(altitude) as avg_altitude
FROM flight_metrics 
WHERE flight_id = 'AA1234' 
GROUP BY minute;

Results:
minute                  | avg_altitude
------------------------|-------------
2024-01-15 08:00:00     | 28500.0
2024-01-15 09:00:00     | 29200.0
2024-01-15 10:00:00     | 29800.0
&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id="the-orm-challenge"&gt;The ORM challenge&lt;/h2&gt;

&lt;p&gt;The biggest adjustment is the ORM paradigm shift. Whatever your platform, you’re used to thinking in objects and relationships. For this example, we will use C# and Entity Framework.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The ORM way (C#):&lt;/strong&gt;&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-sql"&gt;// Think in entities and relationships
public class FlightData
{
    public int Id { get; set; }
    public string FlightId { get; set; }
    public List &amp;lt; FlightMetric &amp;gt; Metrics { get; set; }
}

// Query with navigation properties
var flightWithMetrics = context.FlightData
    .Include(f =&amp;gt; f.Metrics.Where(m =&amp;gt; m.Timestamp &amp;gt; yesterday))
    .FirstOrDefault(f =&amp;gt; f.FlightId == "AA1234");
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;The time series way (C#):&lt;/strong&gt;&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-sql"&gt;// Think in measurements at specific time points
public async Task RecordFlightMetrics(string flightId, double altitude, 
    double speed, DateTime timestamp)
{
    var point = PointData
        .Measurement("flight_metrics")
        .Tag("flight_id", flightId)
        .Field("altitude", altitude)
        .Field("speed", speed)
        .Timestamp(timestamp, WritePrecision.Ms);

    await _influxClient.GetWriteApiAsync()
        .WritePointAsync(point, "aviation", "my-org");
}
&lt;/code&gt;&lt;/pre&gt;

&lt;h4 id="can-you-use-an-orm-with-a-time-series-database"&gt;Can You Use an ORM with a Time Series Database?&lt;/h4&gt;

&lt;p&gt;Short answer: not really, and you wouldn’t want to. ORMs are designed for modeling objects and their relationships using foreign keys and navigation properties, not for how those objects evolve &lt;em&gt;over time&lt;/em&gt;. Time series data is fundamentally different—it’s measurements over time, not related entities.&lt;/p&gt;

&lt;p&gt;Instead of fighting this, embrace the directness. Time series databases give you more control and better performance by working directly with the data model rather than abstracting it through object mappings.&lt;/p&gt;
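&lt;p&gt;To make “working directly with the data model” concrete, here’s a minimal sketch of querying InfluxDB 3 from C# with plain SQL via the community &lt;a href="https://github.com/InfluxCommunity/influxdb3-csharp"&gt;InfluxDB3.Client&lt;/a&gt; package. The host URL, token, and database name below are placeholders:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-csharp"&gt;using InfluxDB3.Client;

// Connect straight to the database: no entity classes, no change tracking.
using var client = new InfluxDBClient(
    "https://localhost:8181", token: "MY_TOKEN", database: "aviation");

// Plain SQL in, rows out: each row arrives as an object array in column order.
await foreach (var row in client.Query(
    "SELECT time, altitude, speed FROM flight_metrics WHERE flight_id = 'AA1234'"))
{
    Console.WriteLine($"{row[0]}: altitude={row[1]}, speed={row[2]}");
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;No mapping layer sits between you and the query—the SQL you write is the SQL that runs.&lt;/p&gt;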

&lt;p&gt;&lt;strong&gt;The trade-offs are real, though.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What you lose moving away from ORMs:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Rich object models with navigation properties&lt;/li&gt;
  &lt;li&gt;Automatic SQL generation and change tracking&lt;/li&gt;
  &lt;li&gt;Language-integrated queries (LINQ, Criteria API, QuerySets, etc.)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What you gain with direct time series access:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Massive performance improvements for time-based queries&lt;/li&gt;
  &lt;li&gt;Schema flexibility without migrations&lt;/li&gt;
  &lt;li&gt;Purpose-built time aggregation functions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For many applications dealing with high-frequency data, the performance gains outweigh the development convenience you lose. Most time series databases also offer language-specific SDKs (like the &lt;a href="https://github.com/InfluxCommunity/influxdb3-csharp"&gt;InfluxDB 3 SDK for C#&lt;/a&gt;) and integrate with data collectors like Telegraf for simplified data ingestion.&lt;/p&gt;

&lt;h2 id="the-reality-check"&gt;The reality check&lt;/h2&gt;

&lt;p&gt;Here’s what I learned: most applications don’t need time series databases. If your primary use case is about managing the current state of an object, or you need consistent views across multiple tables, time series databases are likely not the right tools. If your data volume is manageable and you’re not seeing concurrent read/write conflicts, stick with what you know.&lt;/p&gt;

&lt;p&gt;Time series databases make sense when:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;High-frequency data ingestion is causing database locks&lt;/li&gt;
  &lt;li&gt;You’re building something that acts like a “data historian”&lt;/li&gt;
  &lt;li&gt;Patterns over time matter as much or more than the current values&lt;/li&gt;
  &lt;li&gt;Storage costs are becoming significant due to data volume&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Stick with relational databases when:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Individual records have a critical business context&lt;/li&gt;
  &lt;li&gt;You need complex queries across different data types&lt;/li&gt;
  &lt;li&gt;Data volume isn’t causing performance issues&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="start-with-a-quick-test-and-hybrid-approach"&gt;Start with a quick test and hybrid approach&lt;/h2&gt;

&lt;p&gt;My advice: don’t overthink it. Take your most demanding, high-frequency API endpoint and try routing it to a time series database instead. Set it up in parallel with your existing system and see what happens.&lt;/p&gt;

&lt;p&gt;The usefulness becomes clear quickly. Either you’ll immediately see the benefit and start thinking of other places to apply it, or you’ll realize your current approach is working fine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;High-frequency insert pattern – relational approach (C#):&lt;/strong&gt;&lt;/p&gt;
&lt;pre class=""&gt;&lt;code class="language-csharp"&gt;// Instead of this high-frequency insert pattern...
public async Task LogUserActivity(int userId, string action, DateTime timestamp)
{
    var activity = new UserActivity 
    { 
        UserId = userId, 
        Action = action, 
        Timestamp = timestamp 
    };

    _context.UserActivities.Add(activity);
    await _context.SaveChangesAsync(); // This can cause locks under load
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;High-frequency insert pattern – time series approach (C#):&lt;/strong&gt;&lt;/p&gt;
&lt;pre class=""&gt;&lt;code class="language-csharp"&gt;// Try this approach for high-frequency data
public async Task LogUserActivity(int userId, string action, DateTime timestamp)
{
    var point = PointData
        .Measurement("user_activity")
        .Tag("user_id", userId.ToString())
        .Field("action", action)
        .Timestamp(timestamp, WritePrecision.Ms);

    await _influxClient.GetWriteApiAsync()
        .WritePointAsync(point, "analytics", "my-org"); // Non-blocking writes
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Most real applications end up using both databases. Keep user accounts, orders, and business logic in your relational database; route high-frequency measurements, events, and analytics data to a time series database.&lt;/p&gt;

&lt;p&gt;This gives you the best of both worlds: rich relational context where it matters and efficient time-based storage where volume is the challenge.&lt;/p&gt;
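&lt;p&gt;As a sketch of what that split can look like in C# (the &lt;code class="language-markup"&gt;Order&lt;/code&gt; entity and the &lt;code class="language-markup"&gt;_context&lt;/code&gt;/&lt;code class="language-markup"&gt;_influxClient&lt;/code&gt; fields are hypothetical, following the earlier examples):&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-csharp"&gt;// Hybrid write path: the business record goes to the relational store,
// the high-frequency event goes to the time series store.
public async Task CompleteCheckout(Order order)
{
    // Relational: the order is a business entity with critical context.
    _context.Orders.Add(order);
    await _context.SaveChangesAsync();

    // Time series: the checkout event is a measurement to aggregate later.
    var point = PointData
        .Measurement("checkout_events")
        .Tag("region", order.Region)
        .Field("total", order.Total)
        .Timestamp(DateTime.UtcNow, WritePrecision.Ms);

    await _influxClient.GetWriteApiAsync()
        .WritePointAsync(point, "analytics", "my-org");
}
&lt;/code&gt;&lt;/pre&gt;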

&lt;h2 id="making-the-call"&gt;Making the call&lt;/h2&gt;

&lt;p&gt;After years of building with SQL databases, I can tell you there’s a clear breaking point. When you start spending more time optimizing database performance than shipping features, that’s your signal.&lt;/p&gt;

&lt;p&gt;InfluxDB 3’s SQL support eliminates the learning curve barrier that stopped many of us before. If the problems in this post sound familiar, &lt;a href="https://www.influxdata.com/products/influxdb-3-enterprise/?dl=enterprise/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=relational_vs_time_series_databases_influxdb&amp;amp;utm_content=blog"&gt;try it for free&lt;/a&gt; to see the difference immediately.&lt;/p&gt;
</description>
      <pubDate>Tue, 10 Jun 2025 08:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/relational-vs-time-series-databases-influxdb/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/relational-vs-time-series-databases-influxdb/</guid>
      <category>Developer</category>
      <author>Heather Downing (InfluxData)</author>
    </item>
    <item>
      <title>Using Azure Blob Storage for InfluxDB 3 Core and Enterprise</title>
      <description>&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/66c180882bcb4aa7becc1d4ef8119e3e/4ff71aeec4659f1dec947138437e7f82/unnamed.png" alt="" /&gt;&lt;/p&gt;

&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;

&lt;p&gt;InfluxDB 3 Core and Enterprise introduce a powerful new diskless architecture that lets you store your time series data in cloud object storage while running the database engine locally. This approach offers significant advantages: you get the performance of a local database combined with the durability, scalability, and cost-effectiveness of cloud storage.&lt;/p&gt;

&lt;p&gt;In this tutorial, I’ll show you how to set up InfluxDB 3 Core or Enterprise with &lt;a href="http://azure.microsoft.com/en-us/products/storage/blobs" target="_blank"&gt;Azure Blob Storage&lt;/a&gt; as your object store. This configuration is ideal for scenarios where you want persistent storage without managing physical disks, need to access your data from multiple locations or require a more resilient backup strategy.&lt;/p&gt;

&lt;h2 id="prerequisites"&gt;Prerequisites&lt;/h2&gt;

&lt;p&gt;Before getting started, you’ll need:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;InfluxDB 3 &lt;a href="https://www.influxdata.com/downloads/"&gt;Core or Enterprise&lt;/a&gt; installed on your local machine (but don’t run it yet)&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;An &lt;a href="https://learn.microsoft.com/en-us/azure/storage/common/storage-account-create?tabs=azure-portal" target="_blank"&gt;Azure account&lt;/a&gt; with access to create storage resources&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;A terminal or command prompt with the necessary permissions (super user/admin)&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id="set-up-azure-blob-storage"&gt;Set up Azure Blob Storage&lt;/h2&gt;

&lt;p&gt;First, we need to set up an Azure Storage account and container to store our InfluxDB data:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Create an Azure Storage account&lt;/strong&gt;:&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;
        &lt;p&gt;Navigate to the Azure Portal and sign in (requires a &lt;a href="https://azure.microsoft.com/en-us/pricing/purchase-options/azure-account" target="_blank"&gt;subscription&lt;/a&gt; and a &lt;a href="https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/manage-resource-groups-portal#create-resource-groups" target="_blank"&gt;resource group&lt;/a&gt;).&lt;/p&gt;
      &lt;/li&gt;
      &lt;li&gt;
        &lt;p&gt;Create a new Storage account with “Blob storage” as the storage type.&lt;/p&gt;
      &lt;/li&gt;
      &lt;li&gt;
        &lt;p&gt;Under Advanced, make sure a “Hot” access tier is selected for optimal performance.&lt;/p&gt;
      &lt;/li&gt;
      &lt;li&gt;
        &lt;p&gt;Note your storage account name (e.g., &lt;code class="language-markup"&gt;influxdb3blobstorage&lt;/code&gt;) and select “Go To Resource.”&lt;/p&gt;
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Create a container&lt;/strong&gt;:&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;
        &lt;p&gt;Within your storage account, create a new container (e.g., &lt;code class="language-markup"&gt;influxdb3-data&lt;/code&gt;).&lt;/p&gt;
      &lt;/li&gt;
      &lt;li&gt;
&lt;p&gt;This container will store the WAL and Parquet files that InfluxDB 3 writes.&lt;/p&gt;
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Get your access credentials&lt;/strong&gt;:&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;From your storage account, obtain the &lt;em&gt;storage access key&lt;/em&gt;, which you’ll use to authenticate InfluxDB with your Azure Blob Storage.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Expand “Security + networking” on the left-hand menu of the storage account (not the container).&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Select “Access Keys.”&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Copy one of the keys (not the connection string) to use for InfluxDB 3.
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/eac031468d0d4c728c0a2106f765b370/9a5f20cb4e6d2241d7f340f38ba28cbe/unnamed.png" alt="" /&gt;&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="start-influxdb-with-azure-blob-storage"&gt;Start InfluxDB with Azure Blob Storage&lt;/h2&gt;

&lt;p&gt;Now that you’ve set up Azure Blob Storage, you can configure InfluxDB to use Azure as an object store. You can store your access key in an environment variable for better security.&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;Open your terminal.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Navigate to where InfluxDB 3 was installed (you have to &lt;a href="https://docs.influxdata.com/influxdb3/core/#verify-the-install" target="_blank"&gt;set your source&lt;/a&gt; in order to run it).&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Replace the values in the config below with your Azure access key, storage account name and bucket (container) name and run the following:&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: The CLI serve examples below are for Enterprise and include the following parameters not needed for InfluxDB 3 Core:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;cluster-id&lt;/li&gt;
  &lt;li&gt;mode&lt;/li&gt;
&lt;/ul&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 serve \
    --object-store=azure \
    --node-id=azure01 \
    --cluster-id=cluster01
    --azure-storage-access-key="YOUR_ACCESS_KEY" \
    --azure-storage-account=influxdb3blobstorage \
    --bucket=influxdb3-data&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Replace the placeholder values with your Azure storage account name, access key, and container name. Keep the access key wrapped in quotes so the shell treats it as a single string.&lt;/p&gt;
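&lt;p&gt;For example, you can keep the key itself out of the command by exporting it into an environment variable first (the variable name &lt;code class="language-markup"&gt;AZURE_KEY&lt;/code&gt; is our own choice):&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;# Store the key once in the current shell session...
export AZURE_KEY="YOUR_ACCESS_KEY"

# ...then reference it instead of pasting the key inline.
influxdb3 serve \
    --object-store=azure \
    --node-id=azure01 \
    --cluster-id=cluster01 \
    --azure-storage-access-key="$AZURE_KEY" \
    --azure-storage-account=influxdb3blobstorage \
    --bucket=influxdb3-data&lt;/code&gt;&lt;/pre&gt;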

&lt;h2 id="write-and-query-data"&gt;Write and query data&lt;/h2&gt;

&lt;p&gt;Once your InfluxDB instance is up and running with Azure Blob Storage, you can write and query data as you normally would. You can interact with your data in several ways, including language client SDKs, the API, &lt;a href="https://docs.influxdata.com/telegraf/v1/" target="_blank"&gt;Telegraf&lt;/a&gt;, or the CLI. You can also use third-party visualization tools to read the data.&lt;/p&gt;

&lt;p&gt;The database engine runs on your local machine, while Azure provides persistent storage. Since this is a schema-on-write database, you can declare the database name when you write to it. Keep in mind the number of databases, tables and columns available for &lt;a href="https://docs.influxdata.com/influxdb3/core/admin/databases/#database-table-and-column-limits" target="_blank"&gt;Core&lt;/a&gt; and &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/admin/databases/#database-table-and-column-limits" target="_blank"&gt;Enterprise&lt;/a&gt; licenses when you craft your commands (for example, only one node is available for Core).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: Open a separate terminal window for your write/query commands so you don’t interrupt the database engine running in your original window.&lt;/p&gt;

&lt;p&gt;Here’s an example of using the CLI to write data (with &lt;a href="https://docs.influxdata.com/influxdb3/core/reference/line-protocol/" target="_blank"&gt;line protocol syntax&lt;/a&gt;):&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 write --database=testdb "
cpu,host=prod-server1,region=us-west 
usage_percent=88.2,memory_gb=31.8,disk_used_percent=71.3 1739578205959259001
cpu,host=prod-server2,region=us-east 
usage_percent=87.4,memory_gb=62.1,disk_used_percent=78.9 1739578205959259002
"&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Upon success, go to your Azure container to see the file system containing your new WAL file:&lt;/p&gt;

&lt;p&gt;&lt;code class="language-markup"&gt;influxdb3-data &amp;gt; azure01 &amp;gt; wal &amp;gt; 00000000001.wal&lt;/code&gt;
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/800ea6a548994fe080a569f726fae393/648afdd1561b7c6a581e5ff89a44bc78/unnamed.png" alt="" /&gt;&lt;/p&gt;

&lt;p&gt;And to query that data using SQL:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;influxdb3 query --database=testdb "SELECT * FROM cpu LIMIT 10"&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;You should see: 
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/dac5ba1e308649cab67a2e6b0dd0a3d0/07ddc5e9e0e2840ff3f0402acde50838/unnamed.png" alt="" /&gt;&lt;/p&gt;

&lt;h2 id="how-influxdb-uses-azure-blob-storage"&gt;How InfluxDB uses Azure Blob Storage&lt;/h2&gt;

&lt;p&gt;When InfluxDB receives write requests, it processes them through the following flow:
&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/22d6245903814e1aa800c24e614ceaf7/16c725db9e36949b5adc3353a46e854d/unnamed.png" alt="" /&gt;&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;Incoming writes are validated, then data is buffered in memory.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
&lt;p&gt;Every second (&lt;a href="https://docs.influxdata.com/influxdb3/core/reference/config-options/#wal-flush-interval" target="_blank"&gt;configurable&lt;/a&gt;), the write buffer’s contents are flushed to Write-Ahead-Log (&lt;a href="https://docs.influxdata.com/influxdb3/core/#data-durability" target="_blank"&gt;WAL&lt;/a&gt;) files in object storage (such as your Azure Blob Storage container).&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;Data moves to a queryable in-memory buffer, where it’s available to incoming query requests.&lt;/li&gt;
  &lt;li&gt;Approximately every 10 minutes, the contents of the queryable buffer are persisted to &lt;a href="https://www.influxdata.com/glossary/apache-parquet/"&gt;Parquet&lt;/a&gt; files in your Azure Blob Storage container.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This architecture means:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Your most recent data is served from memory for fast access&lt;/li&gt;
  &lt;li&gt;Data is persistently stored in Azure Blob Storage&lt;/li&gt;
  &lt;li&gt;You get durability without the overhead of managing local disks&lt;/li&gt;
  &lt;li&gt;With InfluxDB 3 Enterprise, you get compaction and lower storage costs&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="multi-node-setup-with-azure-blob-storage-enterprise"&gt;Multi-node setup with Azure Blob Storage (Enterprise)&lt;/h2&gt;

&lt;p&gt;One of the powerful features of InfluxDB 3 Enterprise is the ability to set up high-availability clusters. With Azure Blob Storage as your Object store, you can configure multiple nodes to read and write to the same storage:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE: InfluxDB 3 Enterprise Clustering&lt;/strong&gt;&lt;br /&gt;
In InfluxDB 3 Enterprise, a cluster is a group of nodes sharing the same object storage that work together to provide &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/#high-availability"&gt;high availability&lt;/a&gt; and workload distribution. Each node in the cluster requires a unique &lt;code class="language-markup"&gt;--node-id&lt;/code&gt; while sharing a common &lt;code class="language-markup"&gt;--cluster-id&lt;/code&gt;. For compaction management, run one node in compact mode to process WAL files into optimized Parquet files. Ingest nodes write data to the object store in their own directories, while compaction nodes consolidate and optimize this data for efficient querying. This separation lets you scale write operations independently from compaction processes, improving overall system performance.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The &lt;code class="language-markup"&gt;--cluster-id&lt;/code&gt; parameter is required for all new InfluxDB 3 Enterprise instances and must be different from any &lt;code class="language-markup"&gt;--node-id&lt;/code&gt; in the cluster.&lt;/li&gt;
  &lt;li&gt;Only one node can be designated as the Compactor.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src="//images.ctfassets.net/o7xu9whrs0u9/fbb01998dfb545ffb2dfce2a2386c2f3/416950abb199b2593cb0d41dd60be159/unnamed.png" alt="" /&gt;&lt;/p&gt;

&lt;p&gt;A basic Enterprise example:&lt;/p&gt;

&lt;pre class=""&gt;&lt;code class="language-bash"&gt;# Node 1
influxdb3 serve \
--node-id=azure01 \
--cluster-id=cluster01 \
--mode=ingest,query,compact \
--object-store=azure \
--azure-storage-account=influxdb3blobstorage \ 
--azure-storage-access-key="YOUR_ACCESS_KEY" \
--bucket=influxdb3-data \

# Node 2
influxdb3 serve \
--node-id=azure02 \
--cluster-id=cluster01
--mode=ingest,query \
--object-store=azure \
--azure-storage-account=influxdb3blobstorage \ 
--azure-storage-access-key="YOUR_ACCESS_KEY" \
--bucket=influxdb3-data
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This provides resilience: if one node fails, the other continues to operate with full access to all data stored in Azure.&lt;/p&gt;

&lt;h2 id="who-should-consider-this"&gt;Who should consider this&lt;/h2&gt;

&lt;p&gt;Using Azure Blob Storage with InfluxDB 3 gives you the best of both worlds—the performance of a local database engine with the durability and scalability of cloud storage. &lt;strong&gt;Your queries remain fast (especially when querying recent data) while your data stays safe in the cloud.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This configuration is particularly valuable for:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Organizations with existing Azure infrastructure&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Applications requiring high durability without complex local storage management&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Multi-region deployments where data needs to be accessible from different locations&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Cost-effective long-term storage for time series data&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We hope you’ve found this tutorial helpful! Please share your experiences using InfluxDB 3 with Azure Blob Storage in our community forums or Slack channel.&lt;/p&gt;

&lt;hr /&gt;

&lt;p&gt;Download &lt;a href="https://www.influxdata.com/downloads/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=azure_blob_storage_influxdb&amp;amp;utm_content=blog"&gt;Core or Enterprise&lt;/a&gt; to get started. Check out our &lt;a href="https://docs.influxdata.com/influxdb3/core/get-started/" target="_blank"&gt;Getting Started Guide for Core&lt;/a&gt; and &lt;a href="https://docs.influxdata.com/influxdb3/enterprise/get-started/#trigger-types/" target="_blank"&gt;Enterprise&lt;/a&gt;, and share your feedback with our development team on &lt;a href="https://discord.com/invite/vZe2w2Ds8B" target="_blank"&gt;Discord&lt;/a&gt; in the #influxdb3_core channel, &lt;a href="https://influxdata.com/slack/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=azure_blob_storage_influxdb&amp;amp;utm_content=blog"&gt;Slack&lt;/a&gt; in the #influxdb3_core channel, or our &lt;a href="https://community.influxdata.com/?utm_source=website&amp;amp;utm_medium=direct&amp;amp;utm_campaign=azure_blob_storage_influxdb&amp;amp;utm_content=blog"&gt;Community Forums&lt;/a&gt;.&lt;/p&gt;
</description>
      <pubDate>Thu, 20 Mar 2025 07:00:00 +0000</pubDate>
      <link>https://www.influxdata.com/blog/azure-blob-storage-influxdb/</link>
      <guid isPermaLink="true">https://www.influxdata.com/blog/azure-blob-storage-influxdb/</guid>
      <category>Developer</category>
      <author>Heather Downing (InfluxData)</author>
    </item>
  </channel>
</rss>
