New Relic

Overview

Once you've set up your Firehose, connecting it to New Relic is as easy as configuring a downstream http_endpoint destination and backup s3_configuration in Terraform.

The following Kinesis Firehose setup is a Terraform version of the New Relic tutorial.

📘

Prerequisites

  1. An environment.tf file generated by symflow init
    a. If you have not run symflow init, please follow the instructions in Installing Sym
  2. A runtime_connector module defined in connectors.tf
    a. If you do not have a connectors.tf, please follow the instructions in AWS Runtime Setup
  3. A kinesis_firehose_access module defined in connectors.tf, as described in the main AWS Kinesis Firehose tutorial

Create a New Relic License Key

You will need a New Relic License Key. Create one in the New Relic API Key UI.

Save the New Relic License Key in AWS Secrets Manager

For Kinesis Firehose to stream data to New Relic, it will need your License Key. Saving it in AWS Secrets Manager makes it easily and securely accessible when you configure the delivery stream later.

resource "aws_secretsmanager_secret" "new_relic_api_key" {
  name        = "sym/main/new-relic-api-key"
  description = "New Relic API Key for Kinesis Firehose"
}

Run terraform apply to create the secret, then set its value with the AWS CLI:

aws secretsmanager put-secret-value --secret-id "sym/main/new-relic-api-key" --secret-string "YOUR-NEW-RELIC-API-KEY"
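
Because the Delivery Stream's access_key argument expects the key's value rather than the secret's ARN, you can read the stored value back into Terraform with a data source. This is a minimal sketch that assumes you have already set the value with the CLI command above:

data "aws_secretsmanager_secret_version" "new_relic_api_key" {
  secret_id = aws_secretsmanager_secret.new_relic_api_key.id
}

The secret_string attribute of this data source is referenced as the access_key when configuring the Delivery Stream below.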

Declare the Kinesis Firehose Connector

The Kinesis Firehose Connector module declares the AWS dependencies a Kinesis Firehose requires, such as the IAM role the Firehose will assume and the backup S3 bucket.

module "kinesis_firehose_connector" {
  source      = "symopsio/kinesis-firehose-connector/aws"
  version     = ">= 3.0.0"
  environment = "main"
}

Create a Delivery Stream

Declare an aws_kinesis_firehose_delivery_stream resource and set its destination to http_endpoint. This declares a Kinesis Firehose that forwards records to the configured HTTP endpoint.

📘

Configuring the Delivery Stream Buffer Size

New Relic strongly advises using a buffer size of 1 MiB and GZIP body compression.

🚧

SymEnv is a required tag!

Your Kinesis Firehose Delivery Stream must have a tag SymEnv that matches the environment input of your kinesis_firehose_access module!

resource "aws_kinesis_firehose_delivery_stream" "new_relic" {
  name = "SymNewRelicReportingLogsMain"
  destination = "http_endpoint"

  # New Relic destination via HTTP
  http_endpoint_configuration {
    url                = "https://aws-api.newrelic.com/firehose/v1"
    name               = "New Relic"

    # Your New Relic License Key, read back from AWS Secrets Manager
    access_key         = data.aws_secretsmanager_secret_version.new_relic_api_key.secret_string

    # New Relic's recommended buffering size is 1 MiB
    buffering_size     = 1 # MiB
    buffering_interval = 600 # Seconds

    # The IAM Role is declared by the kinesis_firehose_connector module
    role_arn           = module.kinesis_firehose_connector.firehose_role_arn
    s3_backup_mode     = "FailedDataOnly"

    request_configuration {
      content_encoding = "GZIP"
    }
  }

  # Backup S3 configuration
  s3_configuration {
    role_arn           = module.kinesis_firehose_connector.firehose_role_arn
    bucket_arn         = module.kinesis_firehose_connector.firehose_bucket_arn
    buffer_size        = 10
    buffer_interval    = 60
    compression_format = "GZIP"
  }

  tags = {
    # This SymEnv tag is required and MUST match the SymEnv specified in your kinesis_firehose_access module
    SymEnv = local.environment_name
  }
}

Add a Log Destination

Define a sym_log_destination resource with type = "kinesis_firehose".

  • integration_id: The integration containing the permissions to push to Kinesis Firehose. This should be set to module.runtime_connector.sym_integration.id, which has the permissions created by the kinesis_firehose_access module.
  • stream_name: The name of the Kinesis Firehose Delivery Stream.

resource "sym_log_destination" "new_relic" {
  type = "kinesis_firehose"

  # The Runtime Connector sym_integration has Kinesis Firehose permissions defined by the kinesis_firehose_access module
  integration_id = module.runtime_connector.sym_integration.id

  settings = {
    stream_name = aws_kinesis_firehose_delivery_stream.new_relic.name
  }
}

Add the Log Destination to your Environment

Each sym_environment accepts a list of Log Destinations to send reporting logs to. Add the ID of the Log Destination you just defined to the log_destination_ids list.

resource "sym_environment" "this" {
  name            = "main"
  runtime_id      = sym_runtime.this.id
  error_logger_id = sym_error_logger.slack.id
  
  # Add your log destinations here
  log_destination_ids = [sym_log_destination.new_relic.id]

  integrations = {
    slack_id = sym_integration.slack.id
  }
}

Example Configuration Snippet

# Note: This snippet is truncated. Other unrelated resources have been omitted

# The runtime_connector module creates an IAM Role that the Sym Runtime can assume to execute operations in your AWS account.
module "runtime_connector" {
  source  = "symopsio/runtime-connector/aws"
  version = "~> 2.0"

  environment = local.environment_name
}

# The kinesis_firehose_access module generates an AWS IAM Policy that grants permissions to publish to the given AWS Kinesis Firehose.
# Those permissions will be granted to the Runtime Connector IAM Role so that the Sym Runtime can publish to the Kinesis Firehose.
module "kinesis_firehose_access" {
  source  = "symopsio/kinesis-firehose-addon/aws"
  version = ">= 1.1.0"

  environment = local.environment_name
  iam_role_name = module.runtime_connector.sym_runtime_connector_role.name
}

# New Relic API Key
resource "aws_secretsmanager_secret" "new_relic_api_key" {
  name        = "sym/main/new-relic-api-key"
  description = "New Relic API Key for Kinesis Firehose"
}
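
# Read back the License Key value (set via the AWS CLI) so the Delivery Stream can reference it
data "aws_secretsmanager_secret_version" "new_relic_api_key" {
  secret_id = aws_secretsmanager_secret.new_relic_api_key.id
}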

# Kinesis Firehose Dependencies
module "kinesis_firehose_connector" {
  source  = "symopsio/kinesis-firehose-connector/aws"
  version = ">= 3.0.0"
  environment = local.environment_name
}

# Kinesis Firehose Delivery Stream to New Relic
resource "aws_kinesis_firehose_delivery_stream" "new_relic" {
  name = "SymNewRelicReportingLogsMain"
  destination = "http_endpoint"

  # New Relic destination via HTTP
  http_endpoint_configuration {
    url                = "https://aws-api.newrelic.com/firehose/v1"
    name               = "New Relic"

    # Your New Relic License Key, read back from AWS Secrets Manager
    access_key         = data.aws_secretsmanager_secret_version.new_relic_api_key.secret_string

    # New Relic's recommended buffering size is 1 MiB
    buffering_size     = 1 # MiB
    buffering_interval = 600 # Seconds

    # The IAM Role is declared by the kinesis_firehose_connector module
    role_arn           = module.kinesis_firehose_connector.firehose_role_arn
    s3_backup_mode     = "FailedDataOnly"

    request_configuration {
      content_encoding = "GZIP"
    }
  }

  # Backup S3 configuration
  s3_configuration {
    role_arn           = module.kinesis_firehose_connector.firehose_role_arn
    bucket_arn         = module.kinesis_firehose_connector.firehose_bucket_arn
    buffer_size        = 10
    buffer_interval    = 60
    compression_format = "GZIP"
  }

  tags = {
    # This SymEnv tag is required and MUST match the SymEnv specified in your kinesis_firehose_access module
    SymEnv = local.environment_name
  }
}

# The Sym Log Destination pointing to the New Relic Firehose
resource "sym_log_destination" "new_relic" {
  type = "kinesis_firehose"

  # The Runtime Connector sym_integration has Kinesis Firehose permissions defined by the kinesis_firehose_access module
  integration_id = module.runtime_connector.sym_integration.id

  settings = {
    stream_name = aws_kinesis_firehose_delivery_stream.new_relic.name
  }
}
# ... other resources omitted

# Sym Environment with New Relic as a Log Destination  
resource "sym_environment" "this" {
  # ... other attributes omitted
  
  # Add your log destinations here
  log_destination_ids = [sym_log_destination.new_relic.id]

  # ... other attributes omitted
}