Yesterday by 6 pm EST, all customers’ ingestion and parsing delays had been remediated and data parsing returned to a normal state. At no time during the service degradation did we experience data loss in the ingestion pipelines.
Over the next few days we’ll continue to analyze any impact to detections and provide a follow-up root cause analysis brief to share with our customers.
Blumira’s ingestion pipelines are currently significantly degraded, as inbound logs are not being parsed in a timely manner.
Customers are experiencing delays in detections and reporting capabilities.
Our operations and engineering teams are actively triaging and investigating mitigation tactics to return log parsing to normal operating conditions.
Our data warehouse service provider, Google Cloud, has resolved its outage with BigQuery services. We are no longer seeing delays in ingestion or querying due to their global outage.
The Blumira engineering team is continuing to monitor our customers’ accounts for any lingering effects caused by the outage. At this time, we do not believe there to be customer impact.
https://status.cloud.google.com/incidents/Gt6njQyniuxXViQULV2T#RP1d9aZLNFZEJmTBk8e1
Our service provider, Google Cloud, is reporting latency and failures across their global regions for BigQuery cloud storage. This is the primary data warehouse where ingested and normalized log data is stored.
We are investigating and monitoring impact with our service provider and will continue to post updates throughout the day.
You may also follow the incident directly at: https://status.cloud.google.com/incidents/Gt6njQyniuxXViQULV2T#RP1d9aZLNFZEJmTBk8e1