Compare commits

...

9 Commits
v2.1.1 ... main

Author SHA1 Message Date
Benoit Tigeot 8e3c307fb6
FEATURE: Add new metrics busy_threads from Puma (#333)
New Puma version has a [nice addition](https://github.com/puma/puma/pull/3517) of metric on busy threads.

> busy_threads: running - how many threads are waiting to receive work + how many requests are waiting for a thread to pick them up. this is a "wholistic" stat reflecting the overall current state of work to be done and the capacity to do it.
[source](https://github.com/puma/puma/blob/master/docs/stats.md#single-mode-and-individual-workers-in-cluster-mode)
2025-05-09 14:58:52 +08:00
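The quoted description reduces to a simple formula. A minimal standalone sketch, where `running`, `waiting`, and `backlog` stand in for the corresponding counts from Puma's stats (the keyword names here are illustrative, not Puma's actual API):

```ruby
# busy_threads as described in the Puma docs quoted above:
# running threads, minus threads idle waiting for work,
# plus requests queued waiting for a thread to pick them up.
# The keyword argument names are illustrative, not Puma's API.
def busy_threads(running:, waiting:, backlog:)
  running - waiting + backlog
end

busy_threads(running: 5, waiting: 2, backlog: 3) # => 6
```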
Alan Guo Xiang Tan 74fd1f2250
Update CHANGELOG to follow https://keepachangelog.com/en/1.1.0/ (#341) 2025-04-17 14:58:34 +08:00
Alan Guo Xiang Tan 0f6d2b4c02 DEV: Move linting to a seperate workflow to fix the build
We don't need to be running rubocop and stree multiple times
2025-04-17 09:37:15 +08:00
Michael Fahmy bca2229dcc Install curl in Docker image 2025-04-16 09:58:07 +08:00
Alan Guo Xiang Tan 46e88afd23 Version bump to 2.2.0 2024-12-05 10:57:10 +08:00
Alan Guo Xiang Tan 45df3dce92 FIX: Ensure socket is closed when error is raised while opening socket 2024-12-05 10:57:10 +08:00
Alan Guo Xiang Tan cf7bf84226
DEV: Introduce syntax_tree for formatting (#329) 2024-12-05 08:42:33 +08:00
Alan Guo Xiang Tan 4e21b4c443
Fix the build (#330)
Broken by cbb669bee1
2024-12-05 08:29:53 +08:00
Subramanya-Murugesan cbb669bee1
Feature: Add Dalli::Client memcache metrics for web_collector (#307)
Added Dalli::Client in middleware
Collected memcache metrics in web_collector
2024-06-20 14:27:52 +10:00
76 changed files with 1877 additions and 1301 deletions

View File

@ -27,7 +27,7 @@ jobs:
strategy:
fail-fast: false
matrix:
ruby: ['3.1', '3.2', '3.3']
ruby: ["3.1", "3.2", "3.3"]
activerecord: [61, 70, 71]
steps:
@ -39,9 +39,6 @@ jobs:
bundler: latest
bundler-cache: true
- name: Rubocop
run: bundle exec rubocop
- name: Run tests
run: bundle exec rake

28
.github/workflows/linting.yml vendored Normal file
View File

@ -0,0 +1,28 @@
name: Linting
on:
push:
branches:
- main
pull_request:
jobs:
lint:
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- uses: actions/checkout@v4
- name: Set up Ruby
uses: ruby/setup-ruby@v1
with:
ruby-version: 3.4
bundler: latest
bundler-cache: true
- name: Rubocop
run: bundle exec rubocop
- name: Syntax tree
run: bundle exec stree check Gemfile $(git ls-files '*.rb') $(git ls-files '*.rake') $(git ls-files '*.thor')

2
.streerc Normal file
View File

@ -0,0 +1,2 @@
--print-width=100
--plugins=plugin/trailing_comma,disable_ternary

283
CHANGELOG
View File

@ -1,227 +1,398 @@
2.1.1 - 2024-06-19
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
- FEATURE: Added puma_busy_threads metric that provides a holistic view of server workload by calculating (active threads - idle threads) + queued requests
## [2.2.0] - 2024-12-05
### Added
- Feature: Add Dalli::Client memcache metrics for web_collector
### Fixed
- FIX: Ensure socket is closed when error is raised while opening socket
## [2.1.1] - 2024-06-19
### Added
- FEATURE: improve good_job instrumentation
- FIX: improve Ruby 3.X support
- FEATURE: instrumentation for malloc / oldmalloc increase in GC stats
2.1.0 - 2024-01-08
### Fixed
- FIX: improve Ruby 3.X support
## [2.1.0] - 2024-01-08
### Added
- FEATURE: good_job instrumentation
### Changed
- PERF: improve performance of histogram
- DEV: use new metric collector pattern so we reuse code between collectors
2.0.8 - 2023-01-20
## [2.0.8] - 2023-01-20
### Added
- FEATURE: attempting to make our first docker release
2.0.7 - 2023-01-13
## [2.0.7] - 2023-01-13
### Added
- FEATURE: allow binding server to both ipv4 and v6
### Fixed
- FIX: expire stale sidekiq metrics
2.0.6 - 2022-11-22
## [2.0.6] - 2022-11-22
### Fixed
- FIX: use user specified labels over default in merge conflict
- FIX: sidekiq stats collector memory leak
2.0.5 - 2022-11-15
## [2.0.5] - 2022-11-15
### Fixed
- FIX: regression :prepend style instrumentation not working correctly
2.0.4 - 2022-11-10
## [2.0.4] - 2022-11-10
- FIX/FEATURE: support for Redis 5 gem instrumentation
### Fixed
2.0.3 - 2022-05-23
- FIX support for Redis 5 gem instrumentation
## [2.0.3] - 2022-05-23
### Added
- FEATURE: new ping endpoint for keepalive checks
### Fixed
- FIX: order histogram correctly for GCP support
- FIX: improve sidekiq instrumentation
2.0.2 - 2022-02-25
## [2.0.2] - 2022-02-25
### Fixed
- FIX: runner was not requiring unicorn integration correctly leading to a crash
2.0.1 - 2022-02-24
## [2.0.1] - 2022-02-24
### Fixed
- FIX: ensure threads do not leak when calling #start repeatedly on instrumentation classes, this is an urgent patch for Puma integration
2.0.0 - 2022-02-18
## [2.0.0] - 2022-02-18
### Added
- FEATURE: Add per worker custom labels
- FEATURE: support custom histogram buckets
### Fixed
- FIX: all metrics are exposing status label, and not only `http_requests_total`
### Changed
- BREAKING: rename all `http_duration` metrics to `http_request_duration` to match prometheus official naming conventions (See https://prometheus.io/docs/practices/naming/#metric-names).
1.0.1 - 2021-12-22
## [1.0.1] - 2021-12-22
### Added
- FEATURE: add labels to preflight requests
- FEATURE: SidekiqStats metrics
### Fixed
- FIX: minor refactors to Sidekiq metrics
1.0.0 - 2021-11-23
## [1.0.0] - 2021-11-23
### Added
- BREAKING: rename metrics to match prometheus official naming conventions (See https://prometheus.io/docs/practices/naming/#metric-names)
- FEATURE: Sidekiq process metrics
- FEATURE: Allow collecting web metrics as histograms
### Fixed
- FIX: logger improved for web server
- FIX: Remove job labels from DelayedJob queues
0.8.1 - 2021-08-04
### Changed
- BREAKING: rename metrics to match prometheus official naming conventions (See https://prometheus.io/docs/practices/naming/#metric-names)
## [0.8.1] - 2021-08-04
### Added
- FEATURE: swap from hardcoded STDERR to logger pattern (see README for details)
0.8.0 - 2021-07-05
## [0.8.0] - 2021-07-05
### Added
- FIX: handle ThreadError more gracefully in cases where process shuts down
- FEATURE: add job_name and queue_name labels to delayed job metrics
- FEATURE: always scope puma metrics on hostname in collector
- FEATURE: add customizable labels option to puma collector
- FEATURE: support for Resque
- DEV: Remove support for EOL ruby 2.5
- FIX: Add source location to MethodProfiler patches
- FEATURE: Improve Active Record instrumentation
- FEATURE: Support HTTP_X_AMZN_TRACE_ID when supplied
0.7.0 - 2020-12-29
### Fixed
- FIX: handle ThreadError more gracefully in cases where process shuts down
- FIX: Add source location to MethodProfiler patches
### Removed
- DEV: Remove support for EOL ruby 2.5
## [0.7.0] - 2020-12-29
### Added
- FEATURE: clean pattern for overriding middleware labels was introduced (in README)
### Fixed
- Fix: Better support for forking
### Changed
- Dev: Removed support for EOL rubies, only 2.5, 2.6, 2.7 and 3.0 are supported now.
- Dev: Better support for Ruby 3.0, explicitly depending on webrick
- Dev: Rails 6.1 instrumentation support
- FEATURE: clean pattern for overriding middleware labels was introduced (in README)
- Fix: Better support for forking
0.6.0 - 2020-11-17
## [0.6.0] - 2020-11-17
### Added
- FEATURE: add support for basic-auth in the prometheus_exporter web server
0.5.3 - 2020-07-29
## [0.5.3] - 2020-07-29
### Added
- FEATURE: added #remove to all metric types so users can remove specific labels if needed
0.5.2 - 2020-07-01
## [0.5.2] - 2020-07-01
### Added
- FEATURE: expanded instrumentation for sidekiq
- FEATURE: configurable default labels
0.5.1 - 2020-02-25
## [0.5.1] - 2020-02-25
### Added
- FEATURE: Allow configuring the default client's host and port via environment variables
0.5.0 - 2020-02-14
## [0.5.0] - 2020-02-14
### Fixed
- Breaking change: listen only to localhost by default to prevent unintended insecure configuration
- FIX: Avoid calling `hostname` aggressively, instead cache it on the exporter instance
0.4.17 - 2020-01-13
### Changed
- Breaking change: listen only to localhost by default to prevent unintended insecure configuration
## [0.4.17] - 2020-01-13
### Added
- FEATURE: add support for `to_h` on all metrics which can be used to query existing key/values
0.4.16 - 2019-11-04
## [0.4.16] - 2019-11-04
### Added
- FEATURE: Support #reset! on all metric types to reset a metric to default
0.4.15 - 2019-11-04
## [0.4.15] - 2019-11-04
### Added
- FEATURE: Improve delayed job collector, add pending counts
- FEATURE: New ActiveRecord collector (documented in readme)
- FEATURE: Allow passing in histogram and summary options
- FEATURE: Allow custom labels for unicorn collector
0.4.14 - 2019-09-10
## [0.4.14] - 2019-09-10
### Added
- FEATURE: allow finding metrics by name RemoteMetric #find_registered_metric
### Fixed
- FIX: guard socket closing
0.4.13 - 2019-07-09
## [0.4.13] - 2019-07-09
### Fixed
- Fix: Memory leak in unicorn and puma collectors
0.4.12 - 2019-05-30
## [0.4.12] - 2019-05-30
### Fixed
- Fix: unicorn collector reporting incorrect number of unicorn workers
0.4.11 - 2019-05-15
## [0.4.11] - 2019-05-15
### Fixed
- Fix: Handle stopping nil worker_threads in Client
### Changed
- Dev: add frozen string literals
0.4.10 - 2019-04-29
## [0.4.10] - 2019-04-29
### Fixed
- Fix: Custom label support for puma collector
- Fix: Raindrops socket collector not working correctly
0.4.9 - 2019-04-11
## [0.4.9] - 2019-04-11
### Fixed
- Fix: Gem was not working correctly in Ruby 2.4 and below due to a syntax error
0.4.8 - 2019-04-10
## [0.4.8] - 2019-04-10
### Added
- Feature: added helpers for instrumenting unicorn using raindrops
0.4.7 - 2019-04-08
## [0.4.7] - 2019-04-08
### Fixed
- Fix: collector was not escaping " \ and \n correctly. This could lead
to a corrupt payload in some cases.
0.4.6 - 2019-04-02
## [0.4.6] - 2019-04-02
### Added
- Feature: Allow resetting a counter
- Feature: Add sidekiq metrics: restarted, dead jobs counters
### Fixed
- Fix: Client shutting down before sending metrics to collector
0.4.5 - 2019-02-14
## [0.4.5] - 2019-02-14
### Added
- Feature: Allow process collector to ship custom labels for all process metrics
### Fixed
- Fix: Always scope process metrics on hostname in collector
0.4.4 - 2019-02-13
## [0.4.4] - 2019-02-13
### Added
- Feature: add support for local metric collection without using HTTP
0.4.3 - 2019-02-11
## [0.4.3] - 2019-02-11
### Added
- Feature: Add alias for Gauge #observe called #set, this makes it a bit easier to migrate from prom
- Feature: Add increment and decrement to Counter
0.4.2 - 2018-11-30
## [0.4.2] - 2018-11-30
- Fix/Feature: setting a Gauge to nil will remove Gauge (setting to non numeric will raise)
### Fixed
0.4.0 - 2018-10-23
- Fix: setting a Gauge to nil will remove Gauge (setting to non numeric will raise)
## [0.4.0] - 2018-10-23
### Added
- Feature: histogram support
- Feature: custom quantile support for summary
- Feature: Puma metrics
### Fixed
- Fix: delayed job metrics
0.3.4 - 2018-10-02
## [0.3.4] - 2018-10-02
### Fixed
- Fix: custom collector via CLI was not working correctly
0.3.3
## [0.3.3]
### Added
- Feature: Add more metrics to delayed job collector
0.3.2
## [0.3.2]
### Added
- Feature: Add possibility to set custom_labels on multi process mode
0.3.1
## [0.3.1]
### Changed
- Allow runner to accept a --timeout var
- Allow runner to accept a blank prefix
0.3.0
## [0.3.0]
### Changed
- Breaking change: Follow Prometheus metric [naming conventions](https://prometheus.io/docs/practices/naming/#metric-names)
0.1.15 - 2018-02-19
## [0.1.15] - 2018-02-19
### Added
- Feature: Prefer to use oj if it is loadable
0.1.14 - 2018-02-17
## [0.1.14] - 2018-02-17
### Added
- Feature: runner was extracted so it can be reused @304
### Fixed
- Fix: error when shipping summary metric with no labels
- Feature: runner was extracted so it can be reused @304

View File

@ -3,6 +3,8 @@ ARG GEM_VERSION=
FROM ruby:${RUBY_VERSION}-slim
RUN apt update && apt install -y curl
RUN gem install --no-doc --version=${GEM_VERSION} prometheus_exporter
EXPOSE 9394

View File

@ -213,13 +213,14 @@ Rails.application.middleware.unshift PrometheusExporter::Middleware, instrument:
#### Metrics collected by Rails integration middleware
| Type | Name | Description |
| --- | --- | --- |
| Counter | `http_requests_total` | Total HTTP requests from web app |
| Summary | `http_request_duration_seconds` | Time spent in HTTP reqs in seconds |
| Summary | `http_request_redis_duration_seconds`¹ | Time spent in HTTP reqs in Redis, in seconds |
| Summary | `http_request_sql_duration_seconds`² | Time spent in HTTP reqs in SQL in seconds |
| Summary | `http_request_queue_duration_seconds`³ | Time spent queueing the request in load balancer in seconds |
| Type | Name | Description |
| --- | --- | --- |
| Counter | `http_requests_total` | Total HTTP requests from web app |
| Summary | `http_request_duration_seconds` | Time spent in HTTP reqs in seconds |
| Summary | `http_request_redis_duration_seconds`¹ | Time spent in HTTP reqs in Redis, in seconds |
| Summary | `http_request_sql_duration_seconds`² | Time spent in HTTP reqs in SQL in seconds |
| Summary | `http_request_queue_duration_seconds`³ | Time spent queueing the request in load balancer in seconds |
| Summary | `http_request_memcache_duration_seconds`⁴ | Time spent in HTTP reqs in Memcache in seconds |
All metrics have a `controller` and an `action` label.
`http_requests_total` additionally has a (HTTP response) `status` label.
@ -268,6 +269,7 @@ ruby_http_request_duration_seconds{path="/api/v1/teams/:id",method="GET",status=
¹) Only available when Redis is used.
²) Only available when Mysql or PostgreSQL are used.
³) Only available when [Instrumenting Request Queueing Time](#instrumenting-request-queueing-time) is set up.
⁴) Only available when Dalli is used.
#### Activerecord Connection Pool Metrics
@ -607,15 +609,16 @@ end
#### Metrics collected by Puma Instrumentation
| Type | Name | Description |
| --- | --- | --- |
| Gauge | `puma_workers` | Number of puma workers |
| Gauge | `puma_booted_workers` | Number of puma workers booted |
| Gauge | `puma_old_workers` | Number of old puma workers |
| Gauge | `puma_running_threads` | Number of puma threads currently running |
| Gauge | `puma_request_backlog` | Number of requests waiting to be processed by a puma thread |
| Gauge | `puma_thread_pool_capacity` | Number of puma threads available at current scale |
| Gauge | `puma_max_threads` | Number of puma threads available at max scale |
| Type | Name | Description |
| --- | --- | --- |
| Gauge | `puma_workers` | Number of puma workers |
| Gauge | `puma_booted_workers` | Number of puma workers booted |
| Gauge | `puma_old_workers` | Number of old puma workers |
| Gauge | `puma_running_threads` | How many threads are spawned. A spawned thread may be busy processing a request or waiting for a new request |
| Gauge | `puma_request_backlog` | Number of requests waiting to be processed by a puma thread |
| Gauge | `puma_thread_pool_capacity` | Number of puma threads available at current scale |
| Gauge | `puma_max_threads` | Number of puma threads available at max scale |
| Gauge | `puma_busy_threads` | Running - how many threads are waiting to receive work + how many requests are waiting for a thread to pick them up |
All metrics may have a `phase` label and all custom labels provided with the `labels` option.

View File

@ -1,8 +1,8 @@
# frozen_string_literal: true
require_relative '../lib/prometheus_exporter'
require_relative '../lib/prometheus_exporter/client'
require_relative '../lib/prometheus_exporter/server'
require_relative "../lib/prometheus_exporter"
require_relative "../lib/prometheus_exporter/client"
require_relative "../lib/prometheus_exporter/server"
# test how long it takes a custom collector to process 10k messages
@ -26,18 +26,19 @@ end
@client = nil
@runs = 1000
done = lambda do
puts "Elapsed for 10k messages is #{Time.now - @start}"
if (@runs -= 1) > 0
@start = Time.now
10_000.times { @client.send_json(hello: "world") }
done =
lambda do
puts "Elapsed for 10k messages is #{Time.now - @start}"
if (@runs -= 1) > 0
@start = Time.now
10_000.times { @client.send_json(hello: "world") }
end
end
end
collector = Collector.new(done)
server = PrometheusExporter::Server::WebServer.new port: 12349, collector: collector
server = PrometheusExporter::Server::WebServer.new port: 12_349, collector: collector
server.start
@client = PrometheusExporter::Client.new port: 12349, max_queue_size: 100_000
@client = PrometheusExporter::Client.new port: 12_349, max_queue_size: 100_000
@start = Time.now
10_000.times { @client.send_json(hello: "world") }

View File

@ -20,8 +20,6 @@ class MyCustomCollector < PrometheusExporter::Server::BaseCollector
end
def prometheus_metrics_text
@mutex.synchronize do
"#{@gauge1.to_prometheus_text}\n#{@gauge2.to_prometheus_text}"
end
@mutex.synchronize { "#{@gauge1.to_prometheus_text}\n#{@gauge2.to_prometheus_text}" }
end
end

View File

@ -17,13 +17,7 @@ module PrometheusExporter
end
def standard_values(value, keys, prometheus_exporter_action = nil)
values = {
type: @type,
help: @help,
name: @name,
keys: keys,
value: value
}
values = { type: @type, help: @help, name: @name, keys: keys, value: value }
values[
:prometheus_exporter_action
] = prometheus_exporter_action if prometheus_exporter_action
@ -59,16 +53,14 @@ module PrometheusExporter
def initialize(
host: ENV.fetch("PROMETHEUS_EXPORTER_HOST", "localhost"),
port: ENV.fetch(
"PROMETHEUS_EXPORTER_PORT",
PrometheusExporter::DEFAULT_PORT
),
port: ENV.fetch("PROMETHEUS_EXPORTER_PORT", PrometheusExporter::DEFAULT_PORT),
max_queue_size: nil,
thread_sleep: 0.5,
json_serializer: nil,
custom_labels: nil,
logger: Logger.new(STDERR),
log_level: Logger::WARN
log_level: Logger::WARN,
process_queue_once_and_stop: false
)
@logger = logger
@logger.level = log_level
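The constructor above resolves its defaults through `ENV.fetch`, which returns the variable when set and otherwise the supplied fallback. A standalone sketch of that pattern (the variable name matches the diff; the delete/set calls are just for the demo):

```ruby
# ENV.fetch returns the variable's value when set, else the given default.
ENV.delete("PROMETHEUS_EXPORTER_HOST") # ensure unset for the demo
host = ENV.fetch("PROMETHEUS_EXPORTER_HOST", "localhost")
host # => "localhost"

ENV["PROMETHEUS_EXPORTER_HOST"] = "0.0.0.0"
ENV.fetch("PROMETHEUS_EXPORTER_HOST", "localhost") # => "0.0.0.0"
```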
@ -83,9 +75,7 @@ module PrometheusExporter
max_queue_size ||= MAX_QUEUE_SIZE
max_queue_size = max_queue_size.to_i
if max_queue_size <= 0
raise ArgumentError, "max_queue_size must be larger than 0"
end
raise ArgumentError, "max_queue_size must be larger than 0" if max_queue_size <= 0
@max_queue_size = max_queue_size
@host = host
@ -94,10 +84,10 @@ module PrometheusExporter
@mutex = Mutex.new
@thread_sleep = thread_sleep
@json_serializer =
json_serializer == :oj ? PrometheusExporter::OjCompat : JSON
@json_serializer = json_serializer == :oj ? PrometheusExporter::OjCompat : JSON
@custom_labels = custom_labels
@process_queue_once_and_stop = process_queue_once_and_stop
end
def custom_labels=(custom_labels)
@ -105,14 +95,7 @@ module PrometheusExporter
end
def register(type, name, help, opts = nil)
metric =
RemoteMetric.new(
type: type,
name: name,
help: help,
client: self,
opts: opts
)
metric = RemoteMetric.new(type: type, name: name, help: help, client: self, opts: opts)
@metrics << metric
metric
end
@ -163,7 +146,7 @@ module PrometheusExporter
@socket.write("\r\n")
rescue => e
logger.warn "Prometheus Exporter is dropping a message: #{e}"
@socket = nil
close_socket!
raise
end
end
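The `close_socket!` call replaces the bare `@socket = nil`, so the failed socket is actually closed rather than just dropped for the garbage collector. A hedged standalone sketch of the pattern — class and method bodies here are stand-ins, not the gem's real implementation:

```ruby
# On a write error, close and discard the socket before re-raising,
# so the next attempt opens a fresh connection instead of reusing a dead one.
class TinySender
  def initialize(socket)
    @socket = socket
  end

  def send_line(line)
    @socket.write(line)
  rescue => e
    close_socket!
    raise
  end

  def close_socket!
    @socket&.close
  rescue StandardError
    # a socket that fails to close is still discarded
  ensure
    @socket = nil
  end
end
```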
@ -189,6 +172,11 @@ module PrometheusExporter
end
def ensure_worker_thread!
if @process_queue_once_and_stop
worker_loop
return
end
unless @worker_thread&.alive?
@mutex.synchronize do
return if @worker_thread&.alive?
@ -253,8 +241,7 @@ module PrometheusExporter
nil
rescue StandardError
@socket = nil
@socket_started = nil
close_socket!
@socket_pid = nil
raise
end
@ -262,10 +249,7 @@ module PrometheusExporter
def wait_for_empty_queue_with_timeout(timeout_seconds)
start_time = ::Process.clock_gettime(::Process::CLOCK_MONOTONIC)
while @queue.length > 0
if start_time + timeout_seconds <
::Process.clock_gettime(::Process::CLOCK_MONOTONIC)
break
end
break if start_time + timeout_seconds < ::Process.clock_gettime(::Process::CLOCK_MONOTONIC)
sleep(0.05)
end
end
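The drain loop above is a monotonic-clock timeout poll: `CLOCK_MONOTONIC` is immune to wall-clock adjustments, unlike `Time.now`. A minimal standalone sketch of the same shape (the helper name and block are illustrative):

```ruby
# Poll until the block returns true or timeout_seconds elapse,
# measuring elapsed time against the monotonic clock.
def wait_with_timeout(timeout_seconds)
  start_time = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  until yield
    break if start_time + timeout_seconds < Process.clock_gettime(Process::CLOCK_MONOTONIC)
    sleep(0.05)
  end
end
```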

View File

@ -3,14 +3,16 @@
# collects stats from currently running process
module PrometheusExporter::Instrumentation
class ActiveRecord < PeriodicStats
ALLOWED_CONFIG_LABELS = %i(database username host port)
ALLOWED_CONFIG_LABELS = %i[database username host port]
def self.start(client: nil, frequency: 30, custom_labels: {}, config_labels: [])
client ||= PrometheusExporter::Client.default
# Not all rails versions support connection pool stats
unless ::ActiveRecord::Base.connection_pool.respond_to?(:stat)
client.logger.error("ActiveRecord connection pool stats not supported in your rails version")
client.logger.error(
"ActiveRecord connection pool stats not supported in your rails version",
)
return
end
@ -29,7 +31,9 @@ module PrometheusExporter::Instrumentation
def self.validate_config_labels(config_labels)
return if config_labels.size == 0
raise "Invalid Config Labels, available options #{ALLOWED_CONFIG_LABELS}" if (config_labels - ALLOWED_CONFIG_LABELS).size > 0
if (config_labels - ALLOWED_CONFIG_LABELS).size > 0
raise "Invalid Config Labels, available options #{ALLOWED_CONFIG_LABELS}"
end
end
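The guard above is plain set-difference on symbol arrays. A standalone sketch — the constant value is copied from the diff, but the method is free-standing here for illustration:

```ruby
ALLOWED_CONFIG_LABELS = %i[database username host port]

# Array difference leaves only labels that are not in the allowed set;
# any leftovers mean the caller passed an unknown label.
def validate_config_labels(config_labels)
  return if config_labels.size == 0
  if (config_labels - ALLOWED_CONFIG_LABELS).size > 0
    raise "Invalid Config Labels, available options #{ALLOWED_CONFIG_LABELS}"
  end
end
```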
def initialize(metric_labels, config_labels)
@ -55,7 +59,7 @@ module PrometheusExporter::Instrumentation
pid: pid,
type: "active_record",
hostname: ::PrometheusExporter.hostname,
metric_labels: labels(pool)
metric_labels: labels(pool),
}
metric.merge!(pool.stat)
metrics << metric
@ -66,12 +70,20 @@ module PrometheusExporter::Instrumentation
def labels(pool)
if ::ActiveRecord.version < Gem::Version.new("6.1.0.rc1")
@metric_labels.merge(pool_name: pool.spec.name).merge(pool.spec.config
.select { |k, v| @config_labels.include? k }
.map { |k, v| [k.to_s.dup.prepend("dbconfig_"), v] }.to_h)
@metric_labels.merge(pool_name: pool.spec.name).merge(
pool
.spec
.config
.select { |k, v| @config_labels.include? k }
.map { |k, v| [k.to_s.dup.prepend("dbconfig_"), v] }
.to_h,
)
else
@metric_labels.merge(pool_name: pool.db_config.name).merge(
@config_labels.each_with_object({}) { |l, acc| acc["dbconfig_#{l}"] = pool.db_config.public_send(l) })
@config_labels.each_with_object({}) do |l, acc|
acc["dbconfig_#{l}"] = pool.db_config.public_send(l)
end,
)
end
end
end

View File

@ -2,24 +2,33 @@
module PrometheusExporter::Instrumentation
class DelayedJob
JOB_CLASS_REGEXP = %r{job_class: ((\w+:{0,2})+)}.freeze
JOB_CLASS_REGEXP = /job_class: ((\w+:{0,2})+)/.freeze
class << self
def register_plugin(client: nil, include_module_name: false)
instrumenter = self.new(client: client)
return unless defined?(Delayed::Plugin)
plugin = Class.new(Delayed::Plugin) do
callbacks do |lifecycle|
lifecycle.around(:invoke_job) do |job, *args, &block|
max_attempts = Delayed::Worker.max_attempts
enqueued_count = Delayed::Job.where(queue: job.queue).count
pending_count = Delayed::Job.where(attempts: 0, locked_at: nil, queue: job.queue).count
instrumenter.call(job, max_attempts, enqueued_count, pending_count, include_module_name,
*args, &block)
plugin =
Class.new(Delayed::Plugin) do
callbacks do |lifecycle|
lifecycle.around(:invoke_job) do |job, *args, &block|
max_attempts = Delayed::Worker.max_attempts
enqueued_count = Delayed::Job.where(queue: job.queue).count
pending_count =
Delayed::Job.where(attempts: 0, locked_at: nil, queue: job.queue).count
instrumenter.call(
job,
max_attempts,
enqueued_count,
pending_count,
include_module_name,
*args,
&block
)
end
end
end
end
Delayed::Worker.plugins << plugin
end
@ -50,7 +59,7 @@ module PrometheusExporter::Instrumentation
attempts: attempts,
max_attempts: max_attempts,
enqueued: enqueued_count,
pending: pending_count
pending: pending_count,
)
end
end

View File

@ -7,9 +7,7 @@ module PrometheusExporter::Instrumentation
good_job_collector = new
client ||= PrometheusExporter::Client.default
worker_loop do
client.send_json(good_job_collector.collect)
end
worker_loop { client.send_json(good_job_collector.collect) }
super
end
@ -23,7 +21,7 @@ module PrometheusExporter::Instrumentation
running: ::GoodJob::Job.running.size,
finished: ::GoodJob::Job.finished.size,
succeeded: ::GoodJob::Job.succeeded.size,
discarded: ::GoodJob::Job.discarded.size
discarded: ::GoodJob::Job.discarded.size,
}
end
end

View File

@ -19,7 +19,7 @@ module PrometheusExporter::Instrumentation
type: "hutch",
name: @klass.class.to_s,
success: success,
duration: duration
duration: duration,
)
end
end

View File

@ -1,7 +1,8 @@
# frozen_string_literal: true
# see https://samsaffron.com/archive/2017/10/18/fastest-way-to-profile-a-method-in-ruby
module PrometheusExporter::Instrumentation; end
module PrometheusExporter::Instrumentation
end
class PrometheusExporter::Instrumentation::MethodProfiler
def self.patch(klass, methods, name, instrument:)
@ -21,9 +22,8 @@ class PrometheusExporter::Instrumentation::MethodProfiler
end
def self.start(transfer = nil)
Thread.current[:_method_profiler] = transfer || {
__start: Process.clock_gettime(Process::CLOCK_MONOTONIC)
}
Thread.current[:_method_profiler] = transfer ||
{ __start: Process.clock_gettime(Process::CLOCK_MONOTONIC) }
end
def self.clear
@ -42,8 +42,8 @@ class PrometheusExporter::Instrumentation::MethodProfiler
def self.define_methods_on_module(klass, methods, name)
patch_source_line = __LINE__ + 3
patches = methods.map do |method_name|
<<~RUBY
patches = methods.map { |method_name| <<~RUBY }.join("\n")
def #{method_name}(...)
unless prof = Thread.current[:_method_profiler]
return super
@ -58,9 +58,8 @@ class PrometheusExporter::Instrumentation::MethodProfiler
end
end
RUBY
end.join("\n")
klass.module_eval patches, __FILE__, patch_source_line
klass.module_eval(patches, __FILE__, patch_source_line)
end
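`methods.map { |method_name| <<~RUBY }.join("\n")` is the squiggly-heredoc-inside-a-block idiom: the heredoc body follows the line that opens it, yet still becomes the block's return value, with interpolation evaluated per iteration. A small standalone sketch (names are illustrative):

```ruby
# Build one method-definition string per name; each heredoc
# interpolates its iteration's name, then the strings are joined.
names = %i[foo bar]
patches = names.map { |name| <<~RUBY }.join("\n")
  def #{name}
    :#{name}
  end
RUBY

# Evaluate the generated source and mix it into an object.
m = Module.new
m.module_eval(patches)
obj = Object.new.extend(m)
obj.foo # => :foo
```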
def self.patch_using_prepend(klass, methods, name)
@ -71,14 +70,16 @@ class PrometheusExporter::Instrumentation::MethodProfiler
def self.patch_using_alias_method(klass, methods, name)
patch_source_line = __LINE__ + 3
patches = methods.map do |method_name|
<<~RUBY
patches = methods.map { |method_name| <<~RUBY }.join("\n")
unless defined?(#{method_name}__mp_unpatched)
alias_method :#{method_name}__mp_unpatched, :#{method_name}
def #{method_name}(...)
unless prof = Thread.current[:_method_profiler]
return #{method_name}__mp_unpatched(...)
end
begin
start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
#{method_name}__mp_unpatched(...)
@ -90,8 +91,7 @@ class PrometheusExporter::Instrumentation::MethodProfiler
end
end
RUBY
end.join("\n")
klass.class_eval patches, __FILE__, patch_source_line
klass.class_eval(patches, __FILE__, patch_source_line)
end
end

View File

@ -2,21 +2,14 @@
module PrometheusExporter::Instrumentation
class PeriodicStats
def self.start(*args, frequency:, client: nil, **kwargs)
client ||= PrometheusExporter::Client.default
if !(Numeric === frequency)
raise ArgumentError.new("Expected frequency to be a number")
end
raise ArgumentError.new("Expected frequency to be a number") if !(Numeric === frequency)
if frequency < 0
raise ArgumentError.new("Expected frequency to be a positive number")
end
raise ArgumentError.new("Expected frequency to be a positive number") if frequency < 0
if !@worker_loop
raise ArgumentError.new("Worker loop was not set")
end
raise ArgumentError.new("Worker loop was not set") if !@worker_loop
klass = self
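syntax_tree collapses the three multi-line guards into one-line modifier form; behavior is unchanged. A standalone sketch of the first two checks, as a free-standing method for illustration:

```ruby
# Reject non-numeric or negative frequencies up front, as in the diff.
def validate_frequency!(frequency)
  raise ArgumentError.new("Expected frequency to be a number") if !(Numeric === frequency)
  raise ArgumentError.new("Expected frequency to be a positive number") if frequency < 0
  frequency
end
```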
@ -24,18 +17,18 @@ module PrometheusExporter::Instrumentation
@stop_thread = false
@thread = Thread.new do
while !@stop_thread
begin
@worker_loop.call
rescue => e
client.logger.error("#{klass} Prometheus Exporter Failed To Collect Stats #{e}")
ensure
sleep frequency
@thread =
Thread.new do
while !@stop_thread
begin
@worker_loop.call
rescue => e
client.logger.error("#{klass} Prometheus Exporter Failed To Collect Stats #{e}")
ensure
sleep frequency
end
end
end
end
end
def self.started?
@ -57,6 +50,5 @@ module PrometheusExporter::Instrumentation
end
@thread = nil
end
end
end

View File

@ -3,9 +3,7 @@
# collects stats from currently running process
module PrometheusExporter::Instrumentation
class Process < PeriodicStats
def self.start(client: nil, type: "ruby", frequency: 30, labels: nil)
metric_labels =
if labels && type
labels.merge(type: type)
@ -46,14 +44,22 @@ module PrometheusExporter::Instrumentation
end
def rss
@pagesize ||= `getconf PAGESIZE`.to_i rescue 4096
File.read("/proc/#{pid}/statm").split(' ')[1].to_i * @pagesize rescue 0
@pagesize ||=
begin
`getconf PAGESIZE`.to_i
rescue StandardError
4096
end
begin
File.read("/proc/#{pid}/statm").split(" ")[1].to_i * @pagesize
rescue StandardError
0
end
end
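The rewrite expands the one-line modifier `rescue` into explicit `begin/rescue StandardError` blocks, which syntax_tree prefers. A standalone sketch of the page-size half — the 4096 fallback matches the diff, and `getconf` being present is an assumption:

```ruby
# Shell out for the VM page size; fall back to 4096 if the command fails
# (a missing binary raises Errno::ENOENT, a StandardError subclass).
pagesize =
  begin
    `getconf PAGESIZE`.to_i
  rescue StandardError
    4096
  end
```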
def collect_process_stats(metric)
metric[:pid] = pid
metric[:rss] = rss
end
def collect_gc_stats(metric)
@ -68,7 +74,7 @@ module PrometheusExporter::Instrumentation
end
def collect_v8_stats(metric)
return if !defined? MiniRacer
return if !defined?(MiniRacer)
metric[:v8_heap_count] = metric[:v8_heap_size] = 0
metric[:v8_heap_size] = metric[:v8_physical_size] = 0

View File

@ -26,7 +26,7 @@ module PrometheusExporter::Instrumentation
pid: pid,
type: "puma",
hostname: ::PrometheusExporter.hostname,
metric_labels: @metric_labels
metric_labels: @metric_labels,
}
collect_puma_stats(metric)
metric
@ -61,11 +61,13 @@ module PrometheusExporter::Instrumentation
metric[:running_threads] ||= 0
metric[:thread_pool_capacity] ||= 0
metric[:max_threads] ||= 0
metric[:busy_threads] ||= 0
metric[:request_backlog] += status["backlog"]
metric[:running_threads] += status["running"]
metric[:thread_pool_capacity] += status["pool_capacity"]
metric[:max_threads] += status["max_threads"]
metric[:busy_threads] += status["busy_threads"]
end
end
end
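In cluster mode each worker reports its own status hash, so the collector seeds every counter with `||= 0` and then sums across workers. A minimal sketch of that accumulation, reduced to the new `busy_threads` field — the sample hashes are illustrative, not real Puma output:

```ruby
# Sum busy_threads across per-worker status hashes, as the diff does
# for each Puma counter: seed with ||= 0, then accumulate with +=.
def aggregate_busy_threads(statuses)
  metric = {}
  statuses.each do |status|
    metric[:busy_threads] ||= 0
    metric[:busy_threads] += status["busy_threads"]
  end
  metric
end

aggregate_busy_threads([{ "busy_threads" => 2 }, { "busy_threads" => 3 }]) # => {:busy_threads=>5}
```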

View File

@ -7,9 +7,7 @@ module PrometheusExporter::Instrumentation
resque_collector = new
client ||= PrometheusExporter::Client.default
worker_loop do
client.send_json(resque_collector.collect)
end
worker_loop { client.send_json(resque_collector.collect) }
super
end

View File

@ -2,7 +2,6 @@
module PrometheusExporter::Instrumentation
class Shoryuken
def initialize(client: nil)
@client = client || PrometheusExporter::Client.default
end
@ -19,12 +18,12 @@ module PrometheusExporter::Instrumentation
ensure
duration = ::Process.clock_gettime(::Process::CLOCK_MONOTONIC) - start
@client.send_json(
type: "shoryuken",
queue: queue,
name: worker.class.name,
success: success,
shutdown: shutdown,
duration: duration
type: "shoryuken",
queue: queue,
name: worker.class.name,
success: success,
shutdown: shutdown,
duration: duration,
)
end
end

View File

@ -3,8 +3,7 @@
require "yaml"
module PrometheusExporter::Instrumentation
JOB_WRAPPER_CLASS_NAME =
"ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper"
JOB_WRAPPER_CLASS_NAME = "ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper"
DELAYED_CLASS_NAMES = %w[
Sidekiq::Extensions::DelayedClass
Sidekiq::Extensions::DelayedModel
@ -24,7 +23,7 @@ module PrometheusExporter::Instrumentation
type: "sidekiq",
name: get_name(job["class"], job),
dead: true,
custom_labels: worker_custom_labels
custom_labels: worker_custom_labels,
)
end
end
@ -44,8 +43,7 @@ module PrometheusExporter::Instrumentation
end
def initialize(options = { client: nil })
@client =
options.fetch(:client, nil) || PrometheusExporter::Client.default
@client = options.fetch(:client, nil) || PrometheusExporter::Client.default
end
def call(worker, msg, queue)
@ -67,7 +65,7 @@ module PrometheusExporter::Instrumentation
success: success,
shutdown: shutdown,
duration: duration,
custom_labels: self.class.get_worker_custom_labels(worker.class, msg)
custom_labels: self.class.get_worker_custom_labels(worker.class, msg),
)
end

View File

@ -6,9 +6,7 @@ module PrometheusExporter::Instrumentation
client ||= PrometheusExporter::Client.default
sidekiq_process_collector = new
worker_loop do
client.send_json(sidekiq_process_collector.collect)
end
worker_loop { client.send_json(sidekiq_process_collector.collect) }
super
end
@ -19,10 +17,7 @@ module PrometheusExporter::Instrumentation
end
def collect
{
type: 'sidekiq_process',
process: collect_stats
}
{ type: "sidekiq_process", process: collect_stats }
end
def collect_stats
@ -30,23 +25,21 @@ module PrometheusExporter::Instrumentation
return {} unless process
{
busy: process['busy'],
concurrency: process['concurrency'],
busy: process["busy"],
concurrency: process["concurrency"],
labels: {
labels: process['labels'].sort.join(','),
queues: process['queues'].sort.join(','),
quiet: process['quiet'],
tag: process['tag'],
hostname: process['hostname'],
identity: process['identity'],
}
labels: process["labels"].sort.join(","),
queues: process["queues"].sort.join(","),
quiet: process["quiet"],
tag: process["tag"],
hostname: process["hostname"],
identity: process["identity"],
},
}
end
def current_process
::Sidekiq::ProcessSet.new.find do |sp|
sp['hostname'] == @hostname && sp['pid'] == @pid
end
::Sidekiq::ProcessSet.new.find { |sp| sp["hostname"] == @hostname && sp["pid"] == @pid }
end
end
end

View File

@ -6,9 +6,7 @@ module PrometheusExporter::Instrumentation
client ||= PrometheusExporter::Client.default
sidekiq_queue_collector = new(all_queues: all_queues)
worker_loop do
client.send_json(sidekiq_queue_collector.collect)
end
worker_loop { client.send_json(sidekiq_queue_collector.collect) }
super
end
@ -20,10 +18,7 @@ module PrometheusExporter::Instrumentation
end
def collect
{
type: 'sidekiq_queue',
queues: collect_queue_stats
}
{ type: "sidekiq_queue", queues: collect_queue_stats }
end
def collect_queue_stats
@ -34,13 +29,17 @@ module PrometheusExporter::Instrumentation
sidekiq_queues.select! { |sidekiq_queue| queues.include?(sidekiq_queue.name) }
end
sidekiq_queues.map do |queue|
{
backlog: queue.size,
latency_seconds: queue.latency.to_i,
labels: { queue: queue.name }
}
end.compact
sidekiq_queues
.map do |queue|
{
backlog: queue.size,
latency_seconds: queue.latency.to_i,
labels: {
queue: queue.name,
},
}
end
.compact
end
private
@ -48,11 +47,9 @@ module PrometheusExporter::Instrumentation
def collect_current_process_queues
ps = ::Sidekiq::ProcessSet.new
process = ps.find do |sp|
sp['hostname'] == @hostname && sp['pid'] == @pid
end
process = ps.find { |sp| sp["hostname"] == @hostname && sp["pid"] == @pid }
process.nil? ? [] : process['queues']
process.nil? ? [] : process["queues"]
end
end
end

View File

@ -6,31 +6,26 @@ module PrometheusExporter::Instrumentation
client ||= PrometheusExporter::Client.default
sidekiq_stats_collector = new
worker_loop do
client.send_json(sidekiq_stats_collector.collect)
end
worker_loop { client.send_json(sidekiq_stats_collector.collect) }
super
end
def collect
{
type: 'sidekiq_stats',
stats: collect_stats
}
{ type: "sidekiq_stats", stats: collect_stats }
end
def collect_stats
stats = ::Sidekiq::Stats.new
{
'dead_size' => stats.dead_size,
'enqueued' => stats.enqueued,
'failed' => stats.failed,
'processed' => stats.processed,
'processes_size' => stats.processes_size,
'retry_size' => stats.retry_size,
'scheduled_size' => stats.scheduled_size,
'workers_size' => stats.workers_size,
"dead_size" => stats.dead_size,
"enqueued" => stats.enqueued,
"failed" => stats.failed,
"processed" => stats.processed,
"processes_size" => stats.processes_size,
"retry_size" => stats.retry_size,
"scheduled_size" => stats.scheduled_size,
"workers_size" => stats.workers_size,
}
end
end

View File

@ -1,7 +1,7 @@
# frozen_string_literal: true
begin
require 'raindrops'
require "raindrops"
rescue LoadError
# No raindrops available, don't do anything
end
@ -29,7 +29,7 @@ module PrometheusExporter::Instrumentation
def collect
metric = {}
metric[:type] = 'unicorn'
metric[:type] = "unicorn"
collect_unicorn_stats(metric)
metric
end

View File

@ -2,7 +2,6 @@
module PrometheusExporter::Metric
class Base
@default_prefix = nil if !defined?(@default_prefix)
@default_labels = nil if !defined?(@default_labels)
@default_aggregation = nil if !defined?(@default_aggregation)
@ -77,11 +76,14 @@ module PrometheusExporter::Metric
def labels_text(labels)
labels = Base.default_labels.merge(labels || {})
if labels && labels.length > 0
s = labels.map do |key, value|
value = value.to_s
value = escape_value(value) if needs_escape?(value)
"#{key}=\"#{value}\""
end.join(",")
s =
labels
.map do |key, value|
value = value.to_s
value = escape_value(value) if needs_escape?(value)
"#{key}=\"#{value}\""
end
.join(",")
"{#{s}}"
end
end
@ -109,6 +111,5 @@ module PrometheusExporter::Metric
def needs_escape?(str)
str.match?(/[\n"\\]/m)
end
end
end
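The `labels_text`/`needs_escape?` logic reformatted above can be sketched standalone; this is a simplified illustration of the label rendering, not the gem's exact API:

```ruby
# Render a labels hash as Prometheus exposition-format label text.
# Values containing backslashes, quotes, or newlines are escaped first.
def escape_value(value)
  # Block form avoids backslash interpretation in gsub replacement strings.
  value.gsub("\\") { "\\\\" }.gsub("\"") { "\\\"" }.gsub("\n") { "\\n" }
end

def labels_text(labels)
  return "" if labels.nil? || labels.empty?
  body =
    labels
      .map do |key, value|
        value = value.to_s
        value = escape_value(value) if value.match?(/[\n"\\]/m)
        "#{key}=\"#{value}\""
      end
      .join(",")
  "{#{body}}"
end

labels_text(route: "home", status: 200) # => {route="home",status="200"}
```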

View File

@ -18,9 +18,7 @@ module PrometheusExporter::Metric
end
def metric_text
@data.map do |labels, value|
"#{prefix(@name)}#{labels_text(labels)} #{value}"
end.join("\n")
@data.map { |labels, value| "#{prefix(@name)}#{labels_text(labels)} #{value}" }.join("\n")
end
def to_h

View File

@ -18,9 +18,7 @@ module PrometheusExporter::Metric
end
def metric_text
@data.map do |labels, value|
"#{prefix(@name)}#{labels_text(labels)} #{value}"
end.join("\n")
@data.map { |labels, value| "#{prefix(@name)}#{labels_text(labels)} #{value}" }.join("\n")
end
def reset!
@ -39,9 +37,7 @@ module PrometheusExporter::Metric
if value.nil?
data.delete(labels)
else
if !(Numeric === value)
raise ArgumentError, 'value must be a number'
end
raise ArgumentError, "value must be a number" if !(Numeric === value)
@data[labels] = value
end
end

View File

@ -2,7 +2,6 @@
module PrometheusExporter::Metric
class Histogram < Base
DEFAULT_BUCKETS = [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5.0, 10.0].freeze
@default_buckets = nil if !defined?(@default_buckets)
@ -100,6 +99,5 @@ module PrometheusExporter::Metric
def with_bucket(labels, bucket)
labels.merge("le" => bucket)
end
end
end

View File

@ -2,7 +2,6 @@
module PrometheusExporter::Metric
class Summary < Base
DEFAULT_QUANTILES = [0.99, 0.9, 0.5, 0.1, 0.01]
ROTATE_AGE = 120
@ -49,9 +48,7 @@ module PrometheusExporter::Metric
result = {}
if length > 0
@quantiles.each do |quantile|
result[quantile] = sorted[(length * quantile).ceil - 1]
end
@quantiles.each { |quantile| result[quantile] = sorted[(length * quantile).ceil - 1] }
end
result
@ -61,12 +58,9 @@ module PrometheusExporter::Metric
buffer = @buffers[@current_buffer]
result = {}
buffer.each do |labels, raw_data|
result[labels] = calculate_quantiles(raw_data)
end
buffer.each { |labels, raw_data| result[labels] = calculate_quantiles(raw_data) }
result
end
def metric_text
@ -87,8 +81,8 @@ module PrometheusExporter::Metric
# makes sure we have storage
def ensure_summary(labels)
@buffers[0][labels] ||= []
@buffers[1][labels] ||= []
@buffers[0][labels] ||= []
@buffers[1][labels] ||= []
@sums[labels] ||= 0.0
@counts[labels] ||= 0
nil
@ -97,9 +91,7 @@ module PrometheusExporter::Metric
def rotate_if_needed
if (now = Process.clock_gettime(Process::CLOCK_MONOTONIC)) > (@last_rotated + ROTATE_AGE)
@last_rotated = now
@buffers[@current_buffer].each do |labels, raw|
raw.clear
end
@buffers[@current_buffer].each { |labels, raw| raw.clear }
@current_buffer = @current_buffer == 0 ? 1 : 0
end
nil
@ -116,6 +108,5 @@ module PrometheusExporter::Metric
@sums[labels] += value
@counts[labels] += 1
end
end
end
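The nearest-rank lookup in `calculate_quantiles` above — `sorted[(length * quantile).ceil - 1]` — can be demonstrated in isolation (the method name here is illustrative):

```ruby
# For each quantile q over n sorted samples, take sorted[(n * q).ceil - 1].
def quantiles(values, qs = [0.99, 0.9, 0.5, 0.1, 0.01])
  sorted = values.sort
  n = sorted.length
  return {} if n == 0
  qs.each_with_object({}) { |q, result| result[q] = sorted[(n * q).ceil - 1] }
end

quantiles((1..100).to_a) # => {0.99=>99, 0.9=>90, 0.5=>50, 0.1=>10, 0.01=>1}
```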

View File

@ -1,7 +1,7 @@
# frozen_string_literal: true
require 'prometheus_exporter/instrumentation/method_profiler'
require 'prometheus_exporter/client'
require "prometheus_exporter/instrumentation/method_profiler"
require "prometheus_exporter/client"
class PrometheusExporter::Middleware
MethodProfiler = PrometheusExporter::Instrumentation::MethodProfiler
@ -11,26 +11,42 @@ class PrometheusExporter::Middleware
@client = config[:client] || PrometheusExporter::Client.default
if config[:instrument]
if defined?(RedisClient)
apply_redis_client_middleware!
end
if defined?(Redis::VERSION) && (Gem::Version.new(Redis::VERSION) >= Gem::Version.new('5.0.0'))
apply_redis_client_middleware! if defined?(RedisClient)
if defined?(Redis::VERSION) && (Gem::Version.new(Redis::VERSION) >= Gem::Version.new("5.0.0"))
# redis 5 support handled via RedisClient
elsif defined? Redis::Client
MethodProfiler.patch(Redis::Client, [
:call, :call_pipeline
], :redis, instrument: config[:instrument])
elsif defined?(Redis::Client)
MethodProfiler.patch(
Redis::Client,
%i[call call_pipeline],
:redis,
instrument: config[:instrument],
)
end
if defined? PG::Connection
MethodProfiler.patch(PG::Connection, [
:exec, :async_exec, :exec_prepared, :exec_params, :send_query_prepared, :query
], :sql, instrument: config[:instrument])
if defined?(PG::Connection)
MethodProfiler.patch(
PG::Connection,
%i[exec async_exec exec_prepared exec_params send_query_prepared query],
:sql,
instrument: config[:instrument],
)
end
if defined? Mysql2::Client
if defined?(Mysql2::Client)
MethodProfiler.patch(Mysql2::Client, [:query], :sql, instrument: config[:instrument])
MethodProfiler.patch(Mysql2::Statement, [:execute], :sql, instrument: config[:instrument])
MethodProfiler.patch(Mysql2::Result, [:each], :sql, instrument: config[:instrument])
end
if defined?(Dalli::Client)
MethodProfiler.patch(
Dalli::Client,
%i[delete fetch get add set],
:memcache,
instrument: config[:instrument],
)
end
end
end
@ -49,12 +65,10 @@ class PrometheusExporter::Middleware
timings: info,
queue_time: queue_time,
status: status,
default_labels: default_labels(env, result)
default_labels: default_labels(env, result),
}
labels = custom_labels(env)
if labels
obj = obj.merge(custom_labels: labels)
end
obj = obj.merge(custom_labels: labels) if labels
@client.send_json(obj)
end
@ -72,10 +86,7 @@ class PrometheusExporter::Middleware
controller = "preflight"
end
{
action: action || "other",
controller: controller || "other"
}
{ action: action || "other", controller: controller || "other" }
end
# allows subclasses to add custom labels based on env
@ -103,32 +114,29 @@ class PrometheusExporter::Middleware
# determine queue start from well-known trace headers
def queue_start(env)
# get the content of the x-queue-start or x-request-start header
value = env['HTTP_X_REQUEST_START'] || env['HTTP_X_QUEUE_START']
unless value.nil? || value == ''
value = env["HTTP_X_REQUEST_START"] || env["HTTP_X_QUEUE_START"]
unless value.nil? || value == ""
# nginx returns time as milliseconds with 3 decimal places
# apache returns time as microseconds without decimal places
# this method takes care to convert both into a proper second + fractions timestamp
value = value.to_s.gsub(/t=|\./, '')
value = value.to_s.gsub(/t=|\./, "")
return "#{value[0, 10]}.#{value[10, 13]}".to_f
end
# get the content of the x-amzn-trace-id header
# see also: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-request-tracing.html
value = env['HTTP_X_AMZN_TRACE_ID']
value&.split('Root=')&.last&.split('-')&.fetch(1)&.to_i(16)
value = env["HTTP_X_AMZN_TRACE_ID"]
value&.split("Root=")&.last&.split("-")&.fetch(1)&.to_i(16)
end
private
module RedisInstrumenter
MethodProfiler.define_methods_on_module(self, ["call", "call_pipelined"], "redis")
MethodProfiler.define_methods_on_module(self, %w[call call_pipelined], "redis")
end
def apply_redis_client_middleware!
RedisClient.register(RedisInstrumenter)
end
end
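The queue-time header normalization in `queue_start` above can be sketched on its own; the sample header values below are made up:

```ruby
# Normalize an x-request-start / x-queue-start value to epoch seconds.
# nginx sends milliseconds with three decimals ("t=1700000000.123");
# apache sends microseconds with no decimal point ("t=1700000000123456").
def header_to_seconds(value)
  digits = value.to_s.gsub(/t=|\./, "")
  "#{digits[0, 10]}.#{digits[10, 13]}".to_f
end

header_to_seconds("t=1700000000.123")   # => 1700000000.123
header_to_seconds("t=1700000000123456") # => 1700000000.123456

# The ALB fallback parses the hex epoch after "Root=1-" (example id from
# the AWS request-tracing docs):
trace = "Root=1-5759e988-bd862e3fe1be46a994272793"
trace.split("Root=").last.split("-").fetch(1).to_i(16) # => 1465510280
```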

View File

@ -10,15 +10,15 @@ module PrometheusExporter::Server
dead: "Dead connections in pool",
idle: "Idle connections in pool",
waiting: "Connection requests waiting",
size: "Maximum allowed connection pool size"
size: "Maximum allowed connection pool size",
}
def initialize
@active_record_metrics = MetricsContainer.new(ttl: MAX_METRIC_AGE)
@active_record_metrics.filter = -> (new_metric, old_metric) do
@active_record_metrics.filter = ->(new_metric, old_metric) do
new_metric["pid"] == old_metric["pid"] &&
new_metric["hostname"] == old_metric["hostname"] &&
new_metric["metric_labels"]["pool_name"] == old_metric["metric_labels"]["pool_name"]
new_metric["hostname"] == old_metric["hostname"] &&
new_metric["metric_labels"]["pool_name"] == old_metric["metric_labels"]["pool_name"]
end
end
@ -32,13 +32,18 @@ module PrometheusExporter::Server
metrics = {}
@active_record_metrics.map do |m|
metric_key = (m["metric_labels"] || {}).merge("pid" => m["pid"], "hostname" => m["hostname"])
metric_key =
(m["metric_labels"] || {}).merge("pid" => m["pid"], "hostname" => m["hostname"])
metric_key.merge!(m["custom_labels"]) if m["custom_labels"]
ACTIVE_RECORD_GAUGES.map do |k, help|
k = k.to_s
if v = m[k]
g = metrics[k] ||= PrometheusExporter::Metric::Gauge.new("active_record_connection_pool_#{k}", help)
g =
metrics[k] ||= PrometheusExporter::Metric::Gauge.new(
"active_record_connection_pool_#{k}",
help,
)
g.observe(v, metric_key)
end
end

View File

@ -1,9 +1,7 @@
# frozen_string_literal: true
module PrometheusExporter::Server
class Collector < CollectorBase
def initialize(json_serializer: nil)
@process_metrics = []
@metrics = {}
@ -40,19 +38,15 @@ module PrometheusExporter::Server
collector.collect(obj)
else
metric = @metrics[obj["name"]]
if !metric
metric = register_metric_unsafe(obj)
end
metric = register_metric_unsafe(obj) if !metric
keys = obj["keys"] || {}
if obj["custom_labels"]
keys = obj["custom_labels"].merge(keys)
end
keys = obj["custom_labels"].merge(keys) if obj["custom_labels"]
case obj["prometheus_exporter_action"]
when 'increment'
when "increment"
metric.increment(keys, obj["value"])
when 'decrement'
when "decrement"
metric.decrement(keys, obj["value"])
else
metric.observe(obj["value"], keys)
@ -63,15 +57,14 @@ module PrometheusExporter::Server
def prometheus_metrics_text
@mutex.synchronize do
(@metrics.values + @collectors.values.map(&:metrics).flatten)
.map(&:to_prometheus_text).join("\n")
(@metrics.values + @collectors.values.map(&:metrics).flatten).map(
&:to_prometheus_text
).join("\n")
end
end
def register_metric(metric)
@mutex.synchronize do
@metrics[metric.name] = metric
end
@mutex.synchronize { @metrics[metric.name] = metric }
end
protected
@ -101,7 +94,10 @@ module PrometheusExporter::Server
end
def symbolize_keys(hash)
hash.inject({}) { |memo, k| memo[k.first.to_sym] = k.last; memo }
hash.inject({}) do |memo, k|
memo[k.first.to_sym] = k.last
memo
end
end
end
end
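`symbolize_keys` above turns JSON-parsed string keys into the symbol keys the metric constructors expect; a standalone sketch:

```ruby
# Convert string keys to symbols, as done when registering a metric
# from a JSON payload.
def symbolize_keys(hash)
  hash.inject({}) do |memo, (key, value)|
    memo[key.to_sym] = value
    memo
  end
end

symbolize_keys("name" => "jobs_total", "help" => "Total jobs")
# => {:name=>"jobs_total", :help=>"Total jobs"}
```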

View File

@ -1,10 +1,8 @@
# frozen_string_literal: true
module PrometheusExporter::Server
# minimal interface to implement a customer collector
class CollectorBase
# called each time a string is delivered from the web
def process(str)
end

View File

@ -20,19 +20,31 @@ module PrometheusExporter::Server
end
def collect(obj)
custom_labels = obj['custom_labels'] || {}
gauge_labels = { queue_name: obj['queue_name'] }.merge(custom_labels)
counter_labels = gauge_labels.merge(job_name: obj['name'])
custom_labels = obj["custom_labels"] || {}
gauge_labels = { queue_name: obj["queue_name"] }.merge(custom_labels)
counter_labels = gauge_labels.merge(job_name: obj["name"])
ensure_delayed_job_metrics
@delayed_job_duration_seconds.observe(obj["duration"], counter_labels)
@delayed_job_latency_seconds_total.observe(obj["latency"], counter_labels)
@delayed_jobs_total.observe(1, counter_labels)
@delayed_failed_jobs_total.observe(1, counter_labels) if !obj["success"]
@delayed_jobs_max_attempts_reached_total.observe(1, counter_labels) if obj["attempts"] >= obj["max_attempts"]
if obj["attempts"] >= obj["max_attempts"]
@delayed_jobs_max_attempts_reached_total.observe(1, counter_labels)
end
@delayed_job_duration_seconds_summary.observe(obj["duration"], counter_labels)
@delayed_job_duration_seconds_summary.observe(obj["duration"], counter_labels.merge(status: "success")) if obj["success"]
@delayed_job_duration_seconds_summary.observe(obj["duration"], counter_labels.merge(status: "failed")) if !obj["success"]
if obj["success"]
@delayed_job_duration_seconds_summary.observe(
obj["duration"],
counter_labels.merge(status: "success"),
)
end
if !obj["success"]
@delayed_job_duration_seconds_summary.observe(
obj["duration"],
counter_labels.merge(status: "failed"),
)
end
@delayed_job_attempts_summary.observe(obj["attempts"], counter_labels) if obj["success"]
@delayed_jobs_enqueued.observe(obj["enqueued"], gauge_labels)
@delayed_jobs_pending.observe(obj["pending"], gauge_labels)
@ -40,9 +52,17 @@ module PrometheusExporter::Server
def metrics
if @delayed_jobs_total
[@delayed_job_duration_seconds, @delayed_job_latency_seconds_total, @delayed_jobs_total, @delayed_failed_jobs_total,
@delayed_jobs_max_attempts_reached_total, @delayed_job_duration_seconds_summary, @delayed_job_attempts_summary,
@delayed_jobs_enqueued, @delayed_jobs_pending]
[
@delayed_job_duration_seconds,
@delayed_job_latency_seconds_total,
@delayed_jobs_total,
@delayed_failed_jobs_total,
@delayed_jobs_max_attempts_reached_total,
@delayed_job_duration_seconds_summary,
@delayed_job_attempts_summary,
@delayed_jobs_enqueued,
@delayed_jobs_pending,
]
else
[]
end
@ -52,42 +72,59 @@ module PrometheusExporter::Server
def ensure_delayed_job_metrics
if !@delayed_jobs_total
@delayed_job_duration_seconds =
PrometheusExporter::Metric::Counter.new(
"delayed_job_duration_seconds", "Total time spent in delayed jobs.")
PrometheusExporter::Metric::Counter.new(
"delayed_job_duration_seconds",
"Total time spent in delayed jobs.",
)
@delayed_job_latency_seconds_total =
PrometheusExporter::Metric::Counter.new(
"delayed_job_latency_seconds_total", "Total delayed jobs latency.")
PrometheusExporter::Metric::Counter.new(
"delayed_job_latency_seconds_total",
"Total delayed jobs latency.",
)
@delayed_jobs_total =
PrometheusExporter::Metric::Counter.new(
"delayed_jobs_total", "Total number of delayed jobs executed.")
PrometheusExporter::Metric::Counter.new(
"delayed_jobs_total",
"Total number of delayed jobs executed.",
)
@delayed_jobs_enqueued =
PrometheusExporter::Metric::Gauge.new(
"delayed_jobs_enqueued", "Number of enqueued delayed jobs.")
PrometheusExporter::Metric::Gauge.new(
"delayed_jobs_enqueued",
"Number of enqueued delayed jobs.",
)
@delayed_jobs_pending =
PrometheusExporter::Metric::Gauge.new(
"delayed_jobs_pending", "Number of pending delayed jobs.")
PrometheusExporter::Metric::Gauge.new(
"delayed_jobs_pending",
"Number of pending delayed jobs.",
)
@delayed_failed_jobs_total =
PrometheusExporter::Metric::Counter.new(
"delayed_failed_jobs_total", "Total number failed delayed jobs executed.")
PrometheusExporter::Metric::Counter.new(
"delayed_failed_jobs_total",
"Total number failed delayed jobs executed.",
)
@delayed_jobs_max_attempts_reached_total =
PrometheusExporter::Metric::Counter.new(
"delayed_jobs_max_attempts_reached_total", "Total number of delayed jobs that reached max attempts.")
PrometheusExporter::Metric::Counter.new(
"delayed_jobs_max_attempts_reached_total",
"Total number of delayed jobs that reached max attempts.",
)
@delayed_job_duration_seconds_summary =
PrometheusExporter::Metric::Base.default_aggregation.new("delayed_job_duration_seconds_summary",
"Summary of the time it takes jobs to execute.")
PrometheusExporter::Metric::Base.default_aggregation.new(
"delayed_job_duration_seconds_summary",
"Summary of the time it takes jobs to execute.",
)
@delayed_job_attempts_summary =
PrometheusExporter::Metric::Base.default_aggregation.new("delayed_job_attempts_summary",
"Summary of the amount of attempts it takes delayed jobs to succeed.")
PrometheusExporter::Metric::Base.default_aggregation.new(
"delayed_job_attempts_summary",
"Summary of the amount of attempts it takes delayed jobs to succeed.",
)
end
end
end

View File

@ -10,7 +10,7 @@ module PrometheusExporter::Server
running: "Total number of running GoodJob jobs.",
finished: "Total number of finished GoodJob jobs.",
succeeded: "Total number of succeeded GoodJob jobs.",
discarded: "Total number of discarded GoodJob jobs."
discarded: "Total number of discarded GoodJob jobs.",
}
def initialize

View File

@ -14,8 +14,8 @@ module PrometheusExporter::Server
end
def collect(obj)
default_labels = { job_name: obj['name'] }
custom_labels = obj['custom_labels']
default_labels = { job_name: obj["name"] }
custom_labels = obj["custom_labels"]
labels = custom_labels.nil? ? default_labels : default_labels.merge(custom_labels)
ensure_hutch_metrics
@ -36,15 +36,23 @@ module PrometheusExporter::Server
def ensure_hutch_metrics
if !@hutch_jobs_total
@hutch_job_duration_seconds =
PrometheusExporter::Metric::Counter.new(
"hutch_job_duration_seconds",
"Total time spent in hutch jobs.",
)
@hutch_job_duration_seconds = PrometheusExporter::Metric::Counter.new(
"hutch_job_duration_seconds", "Total time spent in hutch jobs.")
@hutch_jobs_total =
PrometheusExporter::Metric::Counter.new(
"hutch_jobs_total",
"Total number of hutch jobs executed.",
)
@hutch_jobs_total = PrometheusExporter::Metric::Counter.new(
"hutch_jobs_total", "Total number of hutch jobs executed.")
@hutch_failed_jobs_total = PrometheusExporter::Metric::Counter.new(
"hutch_failed_jobs_total", "Total number failed hutch jobs executed.")
@hutch_failed_jobs_total =
PrometheusExporter::Metric::Counter.new(
"hutch_failed_jobs_total",
"Total number failed hutch jobs executed.",
)
end
end
end

View File

@ -9,10 +9,10 @@ module PrometheusExporter::Server
attr_accessor :filter
def initialize(ttl: METRIC_MAX_AGE, expire_attr: METRIC_EXPIRE_ATTR, filter: nil)
@data = []
@ttl = ttl
@expire_attr = expire_attr
@filter = filter
@data = []
@ttl = ttl
@expire_attr = expire_attr
@filter = filter
end
def <<(obj)

View File

@ -1,7 +1,6 @@
# frozen_string_literal: true
module PrometheusExporter::Server
class ProcessCollector < TypeCollector
MAX_METRIC_AGE = 60
@ -13,8 +12,10 @@ module PrometheusExporter::Server
v8_physical_size: "Physical size consumed by V8 heaps.",
v8_heap_count: "Number of V8 contexts running.",
rss: "Total RSS used by process.",
malloc_increase_bytes_limit: 'Limit before Ruby triggers a GC against current objects (bytes).',
oldmalloc_increase_bytes_limit: 'Limit before Ruby triggers a major GC against old objects (bytes).'
malloc_increase_bytes_limit:
"Limit before Ruby triggers a GC against current objects (bytes).",
oldmalloc_increase_bytes_limit:
"Limit before Ruby triggers a major GC against old objects (bytes).",
}
PROCESS_COUNTERS = {
@ -25,7 +26,7 @@ module PrometheusExporter::Server
def initialize
@process_metrics = MetricsContainer.new(ttl: MAX_METRIC_AGE)
@process_metrics.filter = -> (new_metric, old_metric) do
@process_metrics.filter = ->(new_metric, old_metric) do
new_metric["pid"] == old_metric["pid"] && new_metric["hostname"] == old_metric["hostname"]
end
end
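The `filter` lambda above tells `MetricsContainer` which stored entry a new sample supersedes: two process metrics collide when pid and hostname match. A minimal illustration of that dedup rule, outside the container:

```ruby
# A new sample replaces an old one only when it comes from the same process.
same_process = ->(new_metric, old_metric) do
  new_metric["pid"] == old_metric["pid"] &&
    new_metric["hostname"] == old_metric["hostname"]
end

stored = [{ "pid" => 12, "hostname" => "web-1", "rss" => 100 }]
incoming = { "pid" => 12, "hostname" => "web-1", "rss" => 120 }

# Drop superseded entries, then append the fresh sample.
stored.reject! { |old| same_process.call(incoming, old) }
stored << incoming
stored # => [{"pid"=>12, "hostname"=>"web-1", "rss"=>120}]
```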
@ -40,7 +41,8 @@ module PrometheusExporter::Server
metrics = {}
@process_metrics.map do |m|
metric_key = (m["metric_labels"] || {}).merge("pid" => m["pid"], "hostname" => m["hostname"])
metric_key =
(m["metric_labels"] || {}).merge("pid" => m["pid"], "hostname" => m["hostname"])
metric_key.merge!(m["custom_labels"]) if m["custom_labels"]
PROCESS_GAUGES.map do |k, help|

View File

@ -13,9 +13,13 @@ module PrometheusExporter::Server
max_threads: "Number of puma threads at available at max scale.",
}
if defined?(::Puma::Const) && Gem::Version.new(::Puma::Const::VERSION) >= Gem::Version.new('6.6.0')
PUMA_GAUGES[:busy_threads] = "Wholistic stat reflecting the overall current state of work to be done and the capacity to do it"
end
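The version gate above registers the `busy_threads` gauge only when the loaded Puma is new enough to report it (>= 6.6.0). The same pattern can be checked in plain Ruby; the local variable here stands in for `::Puma::Const::VERSION`:

```ruby
require "rubygems" # Gem::Version (loaded by default in modern Ruby)

puma_gauges = { workers: "Number of puma workers." }
puma_version = "6.6.0" # stand-in for ::Puma::Const::VERSION

# Gem::Version compares semantically, so "6.10.0" > "6.6.0" as expected.
if Gem::Version.new(puma_version) >= Gem::Version.new("6.6.0")
  puma_gauges[:busy_threads] = "Wholistic stat reflecting the overall " \
    "current state of work to be done and the capacity to do it"
end

puma_gauges.key?(:busy_threads) # => true
```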
def initialize
@puma_metrics = MetricsContainer.new(ttl: MAX_PUMA_METRIC_AGE)
@puma_metrics.filter = -> (new_metric, old_metric) do
@puma_metrics.filter = ->(new_metric, old_metric) do
new_metric["pid"] == old_metric["pid"] && new_metric["hostname"] == old_metric["hostname"]
end
end
@ -31,15 +35,9 @@ module PrometheusExporter::Server
@puma_metrics.map do |m|
labels = {}
if m["phase"]
labels.merge!(phase: m["phase"])
end
if m["custom_labels"]
labels.merge!(m["custom_labels"])
end
if m["metric_labels"]
labels.merge!(m["metric_labels"])
end
labels.merge!(phase: m["phase"]) if m["phase"]
labels.merge!(m["custom_labels"]) if m["custom_labels"]
labels.merge!(m["metric_labels"]) if m["metric_labels"]
PUMA_GAUGES.map do |k, help|
k = k.to_s

View File

@ -9,7 +9,7 @@ module PrometheusExporter::Server
pending_jobs: "Total number of pending Resque jobs.",
queues: "Total number of Resque queues.",
workers: "Total number of Resque workers running.",
working: "Total number of Resque workers working."
working: "Total number of Resque workers working.",
}
def initialize

View File

@ -1,10 +1,13 @@
# frozen_string_literal: true
require_relative '../client'
require_relative "../client"
module PrometheusExporter::Server
class RunnerException < StandardError; end
class WrongInheritance < RunnerException; end
class RunnerException < StandardError
end
class WrongInheritance < RunnerException
end
class Runner
def initialize(options = {})
@ -18,9 +21,7 @@ module PrometheusExporter::Server
@realm = nil
@histogram = nil
options.each do |k, v|
send("#{k}=", v) if self.class.method_defined?("#{k}=")
end
options.each { |k, v| send("#{k}=", v) if self.class.method_defined?("#{k}=") }
end
def start
@ -34,27 +35,47 @@ module PrometheusExporter::Server
register_type_collectors
unless collector.is_a?(PrometheusExporter::Server::CollectorBase)
raise WrongInheritance, 'Collector class must be inherited from PrometheusExporter::Server::CollectorBase'
raise WrongInheritance,
"Collector class must be inherited from PrometheusExporter::Server::CollectorBase"
end
if unicorn_listen_address && unicorn_pid_file
require_relative '../instrumentation'
require_relative "../instrumentation"
local_client = PrometheusExporter::LocalClient.new(collector: collector)
PrometheusExporter::Instrumentation::Unicorn.start(
pid_file: unicorn_pid_file,
listener_address: unicorn_listen_address,
client: local_client
client: local_client,
)
end
server = server_class.new(port: port, bind: bind, collector: collector, timeout: timeout, verbose: verbose, auth: auth, realm: realm)
server =
server_class.new(
port: port,
bind: bind,
collector: collector,
timeout: timeout,
verbose: verbose,
auth: auth,
realm: realm,
)
server.start
end
attr_accessor :unicorn_listen_address, :unicorn_pid_file
attr_writer :prefix, :port, :bind, :collector_class, :type_collectors, :timeout, :verbose, :server_class, :label, :auth, :realm, :histogram
attr_writer :prefix,
:port,
:bind,
:collector_class,
:type_collectors,
:timeout,
:verbose,
:server_class,
:label,
:auth,
:realm,
:histogram
def auth
@auth || nil
@ -89,7 +110,7 @@ module PrometheusExporter::Server
end
def verbose
return @verbose if defined? @verbose
return @verbose if defined?(@verbose)
false
end

View File

@ -2,7 +2,6 @@
module PrometheusExporter::Server
class ShoryukenCollector < TypeCollector
def initialize
@shoryuken_jobs_total = nil
@shoryuken_job_duration_seconds = nil
@ -16,8 +15,8 @@ module PrometheusExporter::Server
end
def collect(obj)
default_labels = { job_name: obj['name'] , queue_name: obj['queue'] }
custom_labels = obj['custom_labels']
default_labels = { job_name: obj["name"], queue_name: obj["queue"] }
custom_labels = obj["custom_labels"]
labels = custom_labels.nil? ? default_labels : default_labels.merge(custom_labels)
ensure_shoryuken_metrics
@ -30,10 +29,10 @@ module PrometheusExporter::Server
def metrics
if @shoryuken_jobs_total
[
@shoryuken_job_duration_seconds,
@shoryuken_jobs_total,
@shoryuken_restarted_jobs_total,
@shoryuken_failed_jobs_total,
@shoryuken_job_duration_seconds,
@shoryuken_jobs_total,
@shoryuken_restarted_jobs_total,
@shoryuken_failed_jobs_total,
]
else
[]
@ -44,23 +43,29 @@ module PrometheusExporter::Server
def ensure_shoryuken_metrics
if !@shoryuken_jobs_total
@shoryuken_job_duration_seconds =
PrometheusExporter::Metric::Counter.new(
"shoryuken_job_duration_seconds", "Total time spent in shoryuken jobs.")
PrometheusExporter::Metric::Counter.new(
"shoryuken_job_duration_seconds",
"Total time spent in shoryuken jobs.",
)
@shoryuken_jobs_total =
PrometheusExporter::Metric::Counter.new(
"shoryuken_jobs_total", "Total number of shoryuken jobs executed.")
PrometheusExporter::Metric::Counter.new(
"shoryuken_jobs_total",
"Total number of shoryuken jobs executed.",
)
@shoryuken_restarted_jobs_total =
PrometheusExporter::Metric::Counter.new(
"shoryuken_restarted_jobs_total", "Total number of shoryuken jobs that we restarted because of a shoryuken shutdown.")
PrometheusExporter::Metric::Counter.new(
"shoryuken_restarted_jobs_total",
"Total number of shoryuken jobs that we restarted because of a shoryuken shutdown.",
)
@shoryuken_failed_jobs_total =
PrometheusExporter::Metric::Counter.new(
"shoryuken_failed_jobs_total", "Total number of failed shoryuken jobs.")
PrometheusExporter::Metric::Counter.new(
"shoryuken_failed_jobs_total",
"Total number of failed shoryuken jobs.",
)
end
end
end

View File

@ -2,7 +2,6 @@
module PrometheusExporter::Server
class SidekiqCollector < TypeCollector
def initialize
@sidekiq_jobs_total = nil
@sidekiq_job_duration_seconds = nil
@ -17,8 +16,8 @@ module PrometheusExporter::Server
end
def collect(obj)
default_labels = { job_name: obj['name'], queue: obj['queue'] }
custom_labels = obj['custom_labels']
default_labels = { job_name: obj["name"], queue: obj["queue"] }
custom_labels = obj["custom_labels"]
labels = custom_labels.nil? ? default_labels : default_labels.merge(custom_labels)
ensure_sidekiq_metrics
@ -50,26 +49,35 @@ module PrometheusExporter::Server
def ensure_sidekiq_metrics
if !@sidekiq_jobs_total
@sidekiq_job_duration_seconds =
PrometheusExporter::Metric::Base.default_aggregation.new(
"sidekiq_job_duration_seconds", "Total time spent in sidekiq jobs.")
PrometheusExporter::Metric::Base.default_aggregation.new(
"sidekiq_job_duration_seconds",
"Total time spent in sidekiq jobs.",
)
@sidekiq_jobs_total =
PrometheusExporter::Metric::Counter.new(
"sidekiq_jobs_total", "Total number of sidekiq jobs executed.")
PrometheusExporter::Metric::Counter.new(
"sidekiq_jobs_total",
"Total number of sidekiq jobs executed.",
)
@sidekiq_restarted_jobs_total =
PrometheusExporter::Metric::Counter.new(
"sidekiq_restarted_jobs_total", "Total number of sidekiq jobs that we restarted because of a sidekiq shutdown.")
PrometheusExporter::Metric::Counter.new(
"sidekiq_restarted_jobs_total",
"Total number of sidekiq jobs that we restarted because of a sidekiq shutdown.",
)
@sidekiq_failed_jobs_total =
PrometheusExporter::Metric::Counter.new(
"sidekiq_failed_jobs_total", "Total number of failed sidekiq jobs.")
PrometheusExporter::Metric::Counter.new(
"sidekiq_failed_jobs_total",
"Total number of failed sidekiq jobs.",
)
@sidekiq_dead_jobs_total =
PrometheusExporter::Metric::Counter.new(
"sidekiq_dead_jobs_total", "Total number of dead sidekiq jobs.")
PrometheusExporter::Metric::Counter.new(
"sidekiq_dead_jobs_total",
"Total number of dead sidekiq jobs.",
)
end
end
end

View File

@ -5,8 +5,8 @@ module PrometheusExporter::Server
MAX_METRIC_AGE = 60
SIDEKIQ_PROCESS_GAUGES = {
'busy' => 'Number of running jobs',
'concurrency' => 'Maximum concurrency',
"busy" => "Number of running jobs",
"concurrency" => "Maximum concurrency",
}.freeze
attr_reader :sidekiq_metrics, :gauges
@ -17,17 +17,21 @@ module PrometheusExporter::Server
end
def type
'sidekiq_process'
"sidekiq_process"
end
def metrics
SIDEKIQ_PROCESS_GAUGES.each_key { |name| gauges[name]&.reset! }
sidekiq_metrics.map do |metric|
labels = metric.fetch('labels', {})
labels = metric.fetch("labels", {})
SIDEKIQ_PROCESS_GAUGES.map do |name, help|
if (value = metric[name])
gauge = gauges[name] ||= PrometheusExporter::Metric::Gauge.new("sidekiq_process_#{name}", help)
gauge =
gauges[name] ||= PrometheusExporter::Metric::Gauge.new(
"sidekiq_process_#{name}",
help,
)
gauge.observe(value, labels)
end
end

View File

@ -4,8 +4,8 @@ module PrometheusExporter::Server
MAX_METRIC_AGE = 60
SIDEKIQ_QUEUE_GAUGES = {
'backlog' => 'Size of the sidekiq queue.',
'latency_seconds' => 'Latency of the sidekiq queue.',
"backlog" => "Size of the sidekiq queue.",
"latency_seconds" => "Latency of the sidekiq queue.",
}.freeze
attr_reader :sidekiq_metrics, :gauges
@ -16,7 +16,7 @@ module PrometheusExporter::Server
end
def type
'sidekiq_queue'
"sidekiq_queue"
end
def metrics
@ -26,7 +26,8 @@ module PrometheusExporter::Server
labels = metric.fetch("labels", {})
SIDEKIQ_QUEUE_GAUGES.map do |name, help|
if (value = metric[name])
gauge = gauges[name] ||= PrometheusExporter::Metric::Gauge.new("sidekiq_queue_#{name}", help)
gauge =
gauges[name] ||= PrometheusExporter::Metric::Gauge.new("sidekiq_queue_#{name}", help)
gauge.observe(value, labels)
end
end
@ -36,8 +37,8 @@ module PrometheusExporter::Server
end
def collect(object)
object['queues'].each do |queue|
queue["labels"].merge!(object['custom_labels']) if object['custom_labels']
object["queues"].each do |queue|
queue["labels"].merge!(object["custom_labels"]) if object["custom_labels"]
@sidekiq_metrics << queue
end
end


@ -5,14 +5,14 @@ module PrometheusExporter::Server
MAX_METRIC_AGE = 60
SIDEKIQ_STATS_GAUGES = {
'dead_size' => 'Size of the dead queue',
'enqueued' => 'Number of enqueued jobs',
'failed' => 'Number of failed jobs',
'processed' => 'Total number of processed jobs',
'processes_size' => 'Number of processes',
'retry_size' => 'Size of the retries queue',
'scheduled_size' => 'Size of the scheduled queue',
'workers_size' => 'Number of jobs actively being processed',
"dead_size" => "Size of the dead queue",
"enqueued" => "Number of enqueued jobs",
"failed" => "Number of failed jobs",
"processed" => "Total number of processed jobs",
"processes_size" => "Number of processes",
"retry_size" => "Size of the retries queue",
"scheduled_size" => "Size of the scheduled queue",
"workers_size" => "Number of jobs actively being processed",
}.freeze
attr_reader :sidekiq_metrics, :gauges
@ -23,7 +23,7 @@ module PrometheusExporter::Server
end
def type
'sidekiq_stats'
"sidekiq_stats"
end
def metrics
@ -31,8 +31,9 @@ module PrometheusExporter::Server
sidekiq_metrics.map do |metric|
SIDEKIQ_STATS_GAUGES.map do |name, help|
if (value = metric['stats'][name])
gauge = gauges[name] ||= PrometheusExporter::Metric::Gauge.new("sidekiq_stats_#{name}", help)
if (value = metric["stats"][name])
gauge =
gauges[name] ||= PrometheusExporter::Metric::Gauge.new("sidekiq_stats_#{name}", help)
gauge.observe(value)
end
end


@ -7,9 +7,9 @@ module PrometheusExporter::Server
MAX_METRIC_AGE = 60
UNICORN_GAUGES = {
workers: 'Number of unicorn workers.',
active_workers: 'Number of active unicorn workers',
request_backlog: 'Number of requests waiting to be processed by a unicorn worker.'
workers: "Number of unicorn workers.",
active_workers: "Number of active unicorn workers",
request_backlog: "Number of requests waiting to be processed by a unicorn worker.",
}.freeze
def initialize
@ -17,7 +17,7 @@ module PrometheusExporter::Server
end
def type
'unicorn'
"unicorn"
end
def metrics


@ -9,6 +9,7 @@ module PrometheusExporter::Server
@http_request_redis_duration_seconds = nil
@http_request_sql_duration_seconds = nil
@http_request_queue_duration_seconds = nil
@http_request_memcache_duration_seconds = nil
end
def type
@ -28,36 +29,49 @@ module PrometheusExporter::Server
def ensure_metrics
unless @http_requests_total
@metrics["http_requests_total"] = @http_requests_total = PrometheusExporter::Metric::Counter.new(
"http_requests_total",
"Total HTTP requests from web app."
)
@metrics["http_requests_total"] = @http_requests_total =
PrometheusExporter::Metric::Counter.new(
"http_requests_total",
"Total HTTP requests from web app.",
)
@metrics["http_request_duration_seconds"] = @http_request_duration_seconds = PrometheusExporter::Metric::Base.default_aggregation.new(
"http_request_duration_seconds",
"Time spent in HTTP reqs in seconds."
)
@metrics["http_request_duration_seconds"] = @http_request_duration_seconds =
PrometheusExporter::Metric::Base.default_aggregation.new(
"http_request_duration_seconds",
"Time spent in HTTP reqs in seconds.",
)
@metrics["http_request_redis_duration_seconds"] = @http_request_redis_duration_seconds = PrometheusExporter::Metric::Base.default_aggregation.new(
"http_request_redis_duration_seconds",
"Time spent in HTTP reqs in Redis, in seconds."
)
@metrics["http_request_redis_duration_seconds"] = @http_request_redis_duration_seconds =
PrometheusExporter::Metric::Base.default_aggregation.new(
"http_request_redis_duration_seconds",
"Time spent in HTTP reqs in Redis, in seconds.",
)
@metrics["http_request_sql_duration_seconds"] = @http_request_sql_duration_seconds = PrometheusExporter::Metric::Base.default_aggregation.new(
"http_request_sql_duration_seconds",
"Time spent in HTTP reqs in SQL in seconds."
)
@metrics["http_request_sql_duration_seconds"] = @http_request_sql_duration_seconds =
PrometheusExporter::Metric::Base.default_aggregation.new(
"http_request_sql_duration_seconds",
"Time spent in HTTP reqs in SQL in seconds.",
)
@metrics["http_request_queue_duration_seconds"] = @http_request_queue_duration_seconds = PrometheusExporter::Metric::Base.default_aggregation.new(
"http_request_queue_duration_seconds",
"Time spent queueing the request in load balancer in seconds."
)
@metrics[
"http_request_memcache_duration_seconds"
] = @http_request_memcache_duration_seconds =
PrometheusExporter::Metric::Base.default_aggregation.new(
"http_request_memcache_duration_seconds",
"Time spent in HTTP reqs in Memcache in seconds.",
)
@metrics["http_request_queue_duration_seconds"] = @http_request_queue_duration_seconds =
PrometheusExporter::Metric::Base.default_aggregation.new(
"http_request_queue_duration_seconds",
"Time spent queueing the request in load balancer in seconds.",
)
end
end
def observe(obj)
default_labels = obj['default_labels']
custom_labels = obj['custom_labels']
default_labels = obj["default_labels"]
custom_labels = obj["custom_labels"]
labels = custom_labels.nil? ? default_labels : default_labels.merge(custom_labels)
@http_requests_total.observe(1, labels.merge("status" => obj["status"]))
@ -70,6 +84,9 @@ module PrometheusExporter::Server
if sql = timings["sql"]
@http_request_sql_duration_seconds.observe(sql["duration"], labels)
end
if memcache = timings["memcache"]
@http_request_memcache_duration_seconds.observe(memcache["duration"], labels)
end
end
if queue_time = obj["queue_time"]
@http_request_queue_duration_seconds.observe(queue_time, labels)


@ -21,19 +21,19 @@ module PrometheusExporter::Server
@metrics_total =
PrometheusExporter::Metric::Counter.new(
"collector_metrics_total",
"Total metrics processed by exporter web."
"Total metrics processed by exporter web.",
)
@sessions_total =
PrometheusExporter::Metric::Counter.new(
"collector_sessions_total",
"Total send_metric sessions processed by exporter web."
"Total send_metric sessions processed by exporter web.",
)
@bad_metrics_total =
PrometheusExporter::Metric::Counter.new(
"collector_bad_metrics_total",
"Total mis-handled metrics by collector."
"Total mis-handled metrics by collector.",
)
@metrics_total.observe(0)
@ -46,7 +46,7 @@ module PrometheusExporter::Server
if @verbose
@access_log = [
[$stderr, WEBrick::AccessLog::COMMON_LOG_FORMAT],
[$stderr, WEBrick::AccessLog::REFERER_LOG_FORMAT]
[$stderr, WEBrick::AccessLog::REFERER_LOG_FORMAT],
]
@logger = WEBrick::Log.new(log_target || $stderr)
else
@ -54,9 +54,7 @@ module PrometheusExporter::Server
@logger = WEBrick::Log.new(log_target || "/dev/null")
end
if @verbose && @auth
@logger.info "Using Basic Authentication via #{@auth}"
end
@logger.info "Using Basic Authentication via #{@auth}" if @verbose && @auth
if %w[ALL ANY].include?(@bind)
@logger.info "Listening on both 0.0.0.0/:: network interfaces"
@ -68,7 +66,7 @@ module PrometheusExporter::Server
Port: @port,
BindAddress: @bind,
Logger: @logger,
AccessLog: @access_log
AccessLog: @access_log,
)
@server.mount_proc "/" do |req, res|
@ -140,9 +138,7 @@ module PrometheusExporter::Server
def metrics
metric_text = nil
begin
Timeout.timeout(@timeout) do
metric_text = @collector.prometheus_metrics_text
end
Timeout.timeout(@timeout) { metric_text = @collector.prometheus_metrics_text }
rescue Timeout::Error
# we timed out ... bummer
@logger.error "Generating Prometheus metrics text timed out"
@ -153,14 +149,10 @@ module PrometheusExporter::Server
metrics << add_gauge(
"collector_working",
"Is the master process collector able to collect metrics",
metric_text && metric_text.length > 0 ? 1 : 0
metric_text && metric_text.length > 0 ? 1 : 0,
)
metrics << add_gauge(
"collector_rss",
"total memory used by collector process",
get_rss
)
metrics << add_gauge("collector_rss", "total memory used by collector process", get_rss)
metrics << @metrics_total
metrics << @sessions_total
@ -196,9 +188,7 @@ module PrometheusExporter::Server
def authenticate(req, res)
htpasswd = WEBrick::HTTPAuth::Htpasswd.new(@auth)
basic_auth =
WEBrick::HTTPAuth::BasicAuth.new(
{ Realm: @realm, UserDB: htpasswd, Logger: @logger }
)
WEBrick::HTTPAuth::BasicAuth.new({ Realm: @realm, UserDB: htpasswd, Logger: @logger })
basic_auth.authenticate(req, res)
end


@ -1,5 +1,5 @@
# frozen_string_literal: true
module PrometheusExporter
VERSION = "2.1.1"
VERSION = "2.2.0"
end


@ -5,22 +5,20 @@ $LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
require "prometheus_exporter/version"
Gem::Specification.new do |spec|
spec.name = "prometheus_exporter"
spec.version = PrometheusExporter::VERSION
spec.authors = ["Sam Saffron"]
spec.email = ["sam.saffron@gmail.com"]
spec.name = "prometheus_exporter"
spec.version = PrometheusExporter::VERSION
spec.authors = ["Sam Saffron"]
spec.email = ["sam.saffron@gmail.com"]
spec.summary = %q{Prometheus Exporter}
spec.description = %q{Prometheus metric collector and exporter for Ruby}
spec.homepage = "https://github.com/discourse/prometheus_exporter"
spec.license = "MIT"
spec.summary = "Prometheus Exporter"
spec.description = "Prometheus metric collector and exporter for Ruby"
spec.homepage = "https://github.com/discourse/prometheus_exporter"
spec.license = "MIT"
spec.files = `git ls-files -z`.split("\x0").reject do |f|
f.match(%r{^(test|spec|features|bin)/})
end
spec.bindir = "bin"
spec.executables = ["prometheus_exporter"]
spec.require_paths = ["lib"]
spec.files = `git ls-files -z`.split("\x0").reject { |f| f.match(%r{^(test|spec|features|bin)/}) }
spec.bindir = "bin"
spec.executables = ["prometheus_exporter"]
spec.require_paths = ["lib"]
spec.add_dependency "webrick"
@ -36,11 +34,11 @@ Gem::Specification.new do |spec|
spec.add_development_dependency "minitest-stub-const", "~> 0.6"
spec.add_development_dependency "rubocop-discourse", ">= 3"
spec.add_development_dependency "appraisal", "~> 2.3"
spec.add_development_dependency "activerecord", "~> 6.0.0"
spec.add_development_dependency "activerecord", "~> 7.1"
spec.add_development_dependency "redis", "> 5"
spec.add_development_dependency "m"
if !RUBY_ENGINE == 'jruby'
spec.add_development_dependency "raindrops", "~> 0.19"
end
spec.required_ruby_version = '>= 3.0.0'
spec.add_development_dependency "syntax_tree"
spec.add_development_dependency "syntax_tree-disable_ternary"
spec.add_development_dependency "raindrops", "~> 0.19" if !RUBY_ENGINE == "jruby"
spec.required_ruby_version = ">= 3.0.0"
end


@ -8,58 +8,77 @@ class PrometheusExporterTest < Minitest::Test
client = PrometheusExporter::Client.new
# register a metrics for testing
counter_metric = client.register(:counter, 'counter_metric', 'helping')
counter_metric = client.register(:counter, "counter_metric", "helping")
# when the given name doesn't match any existing metric, it returns nil
result = client.find_registered_metric('not_registered')
result = client.find_registered_metric("not_registered")
assert_nil(result)
# when the given name matches an existing metric, it returns this metric
result = client.find_registered_metric('counter_metric')
result = client.find_registered_metric("counter_metric")
assert_equal(counter_metric, result)
# when the given name matches an existing metric, but the given type doesn't, it returns nil
result = client.find_registered_metric('counter_metric', type: :gauge)
result = client.find_registered_metric("counter_metric", type: :gauge)
assert_nil(result)
# when the given name and type match an existing metric, it returns the metric
result = client.find_registered_metric('counter_metric', type: :counter)
result = client.find_registered_metric("counter_metric", type: :counter)
assert_equal(counter_metric, result)
# when the given name matches an existing metric, but the given help doesn't, it returns nil
result = client.find_registered_metric('counter_metric', help: 'not helping')
result = client.find_registered_metric("counter_metric", help: "not helping")
assert_nil(result)
# when the given name and help match an existing metric, it returns the metric
result = client.find_registered_metric('counter_metric', help: 'helping')
result = client.find_registered_metric("counter_metric", help: "helping")
assert_equal(counter_metric, result)
# when the given name matches an existing metric, but the given help and type don't, it returns nil
result = client.find_registered_metric('counter_metric', type: :gauge, help: 'not helping')
result = client.find_registered_metric("counter_metric", type: :gauge, help: "not helping")
assert_nil(result)
# when the given name, type, and help all match an existing metric, it returns the metric
result = client.find_registered_metric('counter_metric', type: :counter, help: 'helping')
result = client.find_registered_metric("counter_metric", type: :counter, help: "helping")
assert_equal(counter_metric, result)
end
def test_standard_values
client = PrometheusExporter::Client.new
counter_metric = client.register(:counter, 'counter_metric', 'helping')
assert_equal(false, counter_metric.standard_values('value', 'key').has_key?(:opts))
counter_metric = client.register(:counter, "counter_metric", "helping")
assert_equal(false, counter_metric.standard_values("value", "key").has_key?(:opts))
expected_quantiles = { quantiles: [0.99, 9] }
summary_metric = client.register(:summary, 'summary_metric', 'helping', expected_quantiles)
assert_equal(expected_quantiles, summary_metric.standard_values('value', 'key')[:opts])
summary_metric = client.register(:summary, "summary_metric", "helping", expected_quantiles)
assert_equal(expected_quantiles, summary_metric.standard_values("value", "key")[:opts])
end
def test_close_socket_on_error
logs = StringIO.new
logger = Logger.new(logs)
logger.level = :error
client =
PrometheusExporter::Client.new(logger: logger, port: 321, process_queue_once_and_stop: true)
client.send("put a message in the queue")
assert_includes(
logs.string,
"Prometheus Exporter, failed to send message Connection refused - connect(2) for \"localhost\" port 321",
)
end
def test_overriding_logger
logs = StringIO.new
logger = Logger.new(logs)
logger.level = :warn
client = PrometheusExporter::Client.new(logger: logger, max_queue_size: 1)
client =
PrometheusExporter::Client.new(
logger: logger,
max_queue_size: 1,
process_queue_once_and_stop: true,
)
client.send("put a message in the queue")
client.send("put a second message in the queue to trigger the logger")


@ -1,22 +1,22 @@
# frozen_string_literal: true
require_relative '../test_helper'
require 'prometheus_exporter/instrumentation'
require 'active_record'
require_relative "../test_helper"
require "prometheus_exporter/instrumentation"
require "active_record"
class PrometheusInstrumentationActiveRecordTest < Minitest::Test
def setup
super
# With this trick this variable will be accessible with ::ObjectSpace
@pool = if active_record_version >= Gem::Version.create('6.1.0.rc1')
active_record61_pool
elsif active_record_version >= Gem::Version.create('6.0.0')
active_record60_pool
else
raise 'unsupported active_record version'
end
@pool =
if active_record_version >= Gem::Version.create("6.1.0.rc1")
active_record61_pool
elsif active_record_version >= Gem::Version.create("6.0.0")
active_record60_pool
else
raise "unsupported active_record version"
end
end
def metric_labels
@ -28,7 +28,8 @@ class PrometheusInstrumentationActiveRecordTest < Minitest::Test
end
def collector
@collector ||= PrometheusExporter::Instrumentation::ActiveRecord.new(metric_labels, config_labels)
@collector ||=
PrometheusExporter::Instrumentation::ActiveRecord.new(metric_labels, config_labels)
end
%i[size connections busy dead idle waiting checkout_timeout type metric_labels].each do |key|
@ -42,7 +43,7 @@ class PrometheusInstrumentationActiveRecordTest < Minitest::Test
end
def test_type
assert_equal collector.collect.first[:type], 'active_record'
assert_equal collector.collect.first[:type], "active_record"
end
private
@ -57,6 +58,7 @@ class PrometheusInstrumentationActiveRecordTest < Minitest::Test
def active_record61_pool
::ActiveRecord::ConnectionAdapters::ConnectionPool.new(
OpenStruct.new(db_config: OpenStruct.new(checkout_timeout: 0, idle_timeout: 0, pool: 5)))
OpenStruct.new(db_config: OpenStruct.new(checkout_timeout: 0, idle_timeout: 0, pool: 5)),
)
end
end


@ -1,7 +1,7 @@
# frozen_string_literal: true
require_relative '../test_helper'
require 'prometheus_exporter/instrumentation'
require_relative "../test_helper"
require "prometheus_exporter/instrumentation"
class PrometheusInstrumentationMethodProfilerTest < Minitest::Test
class SomeClassPatchedUsingAliasMethod
@ -16,8 +16,19 @@ class PrometheusInstrumentationMethodProfilerTest < Minitest::Test
end
end
PrometheusExporter::Instrumentation::MethodProfiler.patch SomeClassPatchedUsingAliasMethod, [:some_method], :test, instrument: :alias_method
PrometheusExporter::Instrumentation::MethodProfiler.patch SomeClassPatchedUsingPrepend, [:some_method], :test, instrument: :prepend
PrometheusExporter::Instrumentation::MethodProfiler.patch(
SomeClassPatchedUsingAliasMethod,
[:some_method],
:test,
instrument: :alias_method,
)
PrometheusExporter::Instrumentation::MethodProfiler.patch(
SomeClassPatchedUsingPrepend,
[:some_method],
:test,
instrument: :prepend,
)
def test_alias_method_source_location
file, line = SomeClassPatchedUsingAliasMethod.instance_method(:some_method).source_location
@ -26,7 +37,7 @@ class PrometheusInstrumentationMethodProfilerTest < Minitest::Test
end
def test_alias_method_preserves_behavior
assert_equal 'Hello, world', SomeClassPatchedUsingAliasMethod.new.some_method
assert_equal "Hello, world", SomeClassPatchedUsingAliasMethod.new.some_method
end
def test_prepend_source_location
@ -36,6 +47,6 @@ class PrometheusInstrumentationMethodProfilerTest < Minitest::Test
end
def test_prepend_preserves_behavior
assert_equal 'Hello, world', SomeClassPatchedUsingPrepend.new.some_method
assert_equal "Hello, world", SomeClassPatchedUsingPrepend.new.some_method
end
end


@ -1,7 +1,7 @@
# frozen_string_literal: true
require_relative '../test_helper'
require 'prometheus_exporter/metric'
require_relative "../test_helper"
require "prometheus_exporter/metric"
module PrometheusExporter::Metric
describe Base do
@ -9,20 +9,20 @@ module PrometheusExporter::Metric
Counter.new("a_counter", "my amazing counter")
end
before do
Base.default_prefix = ''
before do
Base.default_prefix = ""
Base.default_labels = {}
Base.default_aggregation = nil
end
after do
Base.default_prefix = ''
Base.default_prefix = ""
Base.default_labels = {}
Base.default_aggregation = nil
end
it "supports a dynamic prefix" do
Base.default_prefix = 'web_'
Base.default_prefix = "web_"
counter.observe
text = <<~TEXT
@ -67,7 +67,6 @@ module PrometheusExporter::Metric
end
it "supports reset! for Gauge" do
gauge = Gauge.new("test", "test")
gauge.observe(999)
@ -83,7 +82,6 @@ module PrometheusExporter::Metric
end
it "supports reset! for Counter" do
counter = Counter.new("test", "test")
counter.observe(999)
@ -99,7 +97,6 @@ module PrometheusExporter::Metric
end
it "supports reset! for Histogram" do
histogram = Histogram.new("test", "test")
histogram.observe(999)
@ -115,7 +112,6 @@ module PrometheusExporter::Metric
end
it "supports reset! for Summary" do
summary = Summary.new("test", "test")
summary.observe(999)


@ -1,7 +1,7 @@
# frozen_string_literal: true
require_relative '../test_helper'
require 'prometheus_exporter/metric'
require_relative "../test_helper"
require "prometheus_exporter/metric"
module PrometheusExporter::Metric
describe Counter do
@ -9,12 +9,10 @@ module PrometheusExporter::Metric
Counter.new("a_counter", "my amazing counter")
end
before do
Base.default_prefix = ''
end
before { Base.default_prefix = "" }
it "supports a dynamic prefix" do
Base.default_prefix = 'web_'
Base.default_prefix = "web_"
counter.observe
text = <<~TEXT
@ -24,7 +22,7 @@ module PrometheusExporter::Metric
TEXT
assert_equal(counter.to_prometheus_text, text)
Base.default_prefix = ''
Base.default_prefix = ""
end
it "can correctly increment counters with labels" do
@ -70,7 +68,6 @@ module PrometheusExporter::Metric
end
it "can correctly log multiple increments" do
counter.observe
counter.observe
counter.observe


@ -1,7 +1,7 @@
# frozen_string_literal: true
require_relative '../test_helper'
require 'prometheus_exporter/metric'
require_relative "../test_helper"
require "prometheus_exporter/metric"
module PrometheusExporter::Metric
describe Gauge do
@ -13,14 +13,10 @@ module PrometheusExporter::Metric
Gauge.new("a_gauge_total", "my amazing gauge")
end
before do
Base.default_prefix = ''
end
before { Base.default_prefix = "" }
it "should not allow observe to corrupt data" do
assert_raises do
gauge.observe("hello")
end
assert_raises { gauge.observe("hello") }
# going to special case nil here instead of adding a new API
# observing nil should set to nothing
@ -41,7 +37,7 @@ module PrometheusExporter::Metric
end
it "supports a dynamic prefix" do
Base.default_prefix = 'web_'
Base.default_prefix = "web_"
gauge.observe(400.11)
text = <<~TEXT
@ -52,7 +48,7 @@ module PrometheusExporter::Metric
assert_equal(gauge.to_prometheus_text, text)
Base.default_prefix = ''
Base.default_prefix = ""
end
it "can correctly set gauges with labels" do
@ -72,7 +68,6 @@ module PrometheusExporter::Metric
end
it "can correctly reset on change" do
gauge.observe(10)
gauge.observe(11)
@ -86,7 +81,6 @@ module PrometheusExporter::Metric
end
it "can use the set on alias" do
gauge.set(10)
gauge.set(11)


@ -1,7 +1,7 @@
# frozen_string_literal: true
require_relative '../test_helper'
require 'prometheus_exporter/metric'
require_relative "../test_helper"
require "prometheus_exporter/metric"
module PrometheusExporter::Metric
describe Histogram do
@ -9,9 +9,7 @@ module PrometheusExporter::Metric
Histogram.new("a_histogram", "my amazing histogram")
end
before do
Base.default_prefix = ''
end
before { Base.default_prefix = "" }
it "can correctly gather a histogram" do
histogram.observe(0.1)
@ -45,7 +43,6 @@ module PrometheusExporter::Metric
end
it "can correctly gather a histogram over multiple labels" do
histogram.observe(0.1, nil)
histogram.observe(0.2)
histogram.observe(0.610001)
@ -146,12 +143,15 @@ module PrometheusExporter::Metric
assert_equal(histogram.to_h, key => val)
end
it 'supports default buckets' do
assert_equal(Histogram::DEFAULT_BUCKETS, [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5.0, 10.0])
it "supports default buckets" do
assert_equal(
Histogram::DEFAULT_BUCKETS,
[0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5.0, 10.0],
)
assert_equal(Histogram::DEFAULT_BUCKETS, Histogram.default_buckets)
end
it 'allows to change default buckets' do
it "allows to change default buckets" do
custom_buckets = [0.005, 0.1, 1, 2, 5, 10]
Histogram.default_buckets = custom_buckets
@ -160,11 +160,11 @@ module PrometheusExporter::Metric
Histogram.default_buckets = Histogram::DEFAULT_BUCKETS
end
it 'uses the default buckets for instance' do
it "uses the default buckets for instance" do
assert_equal(histogram.buckets, Histogram::DEFAULT_BUCKETS)
end
it 'uses the custom default buckets for instance' do
it "uses the custom default buckets for instance" do
custom_buckets = [0.005, 0.1, 1, 2, 5, 10]
Histogram.default_buckets = custom_buckets
@ -173,9 +173,9 @@ module PrometheusExporter::Metric
Histogram.default_buckets = Histogram::DEFAULT_BUCKETS
end
it 'uses the specified buckets' do
it "uses the specified buckets" do
buckets = [0.1, 0.2, 0.3]
histogram = Histogram.new('test_buckets', 'I have specified buckets', buckets: buckets)
histogram = Histogram.new("test_buckets", "I have specified buckets", buckets: buckets)
assert_equal(histogram.buckets, buckets)
end


@ -1,7 +1,7 @@
# frozen_string_literal: true
require_relative '../test_helper'
require 'prometheus_exporter/metric'
require_relative "../test_helper"
require "prometheus_exporter/metric"
module PrometheusExporter::Metric
describe Summary do
@ -9,16 +9,12 @@ module PrometheusExporter::Metric
Summary.new("a_summary", "my amazing summary")
end
before do
Base.default_prefix = ''
end
before { Base.default_prefix = "" }
it "can correctly gather a summary with custom quantiles" do
summary = Summary.new("custom", "custom summary", quantiles: [0.4, 0.6])
(1..10).each do |i|
summary.observe(i)
end
(1..10).each { |i| summary.observe(i) }
expected = <<~TEXT
# HELP custom custom summary
@ -33,7 +29,6 @@ module PrometheusExporter::Metric
end
it "can correctly gather a summary over multiple labels" do
summary.observe(0.1, nil)
summary.observe(0.2)
summary.observe(0.610001)
@ -90,16 +85,13 @@ module PrometheusExporter::Metric
end
it "can correctly rotate quantiles" do
Process.stub(:clock_gettime, 1.0) do
summary.observe(0.1)
summary.observe(0.2)
summary.observe(0.6)
end
Process.stub(:clock_gettime, 1.0 + Summary::ROTATE_AGE + 1.0) do
summary.observe(300)
end
Process.stub(:clock_gettime, 1.0 + Summary::ROTATE_AGE + 1.0) { summary.observe(300) }
Process.stub(:clock_gettime, 1.0 + (Summary::ROTATE_AGE * 2) + 1.1) do
summary.observe(100)


@ -1,9 +1,9 @@
# frozen_string_literal: true
require 'minitest/stub_const'
require_relative 'test_helper'
require 'rack/test'
require 'prometheus_exporter/middleware'
require "minitest/stub_const"
require_relative "test_helper"
require "rack/test"
require "prometheus_exporter/middleware"
class PrometheusExporterMiddlewareTest < Minitest::Test
include Rack::Test::Methods
@ -23,9 +23,7 @@ class PrometheusExporterMiddlewareTest < Minitest::Test
end
def inner_app
Proc.new do |env|
[200, {}, "OK"]
end
Proc.new { |env| [200, {}, "OK"] }
end
def now
@ -42,7 +40,7 @@ class PrometheusExporterMiddlewareTest < Minitest::Test
def assert_valid_headers_response(delta = 0.5)
configure_middleware
get '/'
get "/"
assert last_response.ok?
refute_nil client.last_send
refute_nil client.last_send[:queue_time]
@ -51,7 +49,7 @@ class PrometheusExporterMiddlewareTest < Minitest::Test
def assert_invalid_headers_response
configure_middleware
get '/'
get "/"
assert last_response.ok?
refute_nil client.last_send
assert_nil client.last_send[:queue_time]
@ -59,34 +57,34 @@ class PrometheusExporterMiddlewareTest < Minitest::Test
def test_converting_apache_request_start
configure_middleware
now_microsec = '1234567890123456'
header 'X-Request-Start', "t=#{now_microsec}"
now_microsec = "1234567890123456"
header "X-Request-Start", "t=#{now_microsec}"
assert_valid_headers_response
end
def test_converting_nginx_request_start
configure_middleware
now = '1234567890.123'
header 'X-Request-Start', "t=#{now}"
now = "1234567890.123"
header "X-Request-Start", "t=#{now}"
assert_valid_headers_response
end
def test_request_start_in_wrong_format
configure_middleware
header 'X-Request-Start', ""
header "X-Request-Start", ""
assert_invalid_headers_response
end
def test_converting_amzn_trace_id_start
configure_middleware
now = '1234567890'
header 'X-Amzn-Trace-Id', "Root=1-#{now.to_i.to_s(16)}-abc123"
now = "1234567890"
header "X-Amzn-Trace-Id", "Root=1-#{now.to_i.to_s(16)}-abc123"
assert_valid_headers_response
end
def test_amzn_trace_id_in_wrong_format
configure_middleware
header 'X-Amzn-Trace-Id', ""
header "X-Amzn-Trace-Id", ""
assert_invalid_headers_response
end
@ -164,12 +162,23 @@ class PrometheusExporterMiddlewareTest < Minitest::Test
mock.verify
end
end
Object.stub_const(:Dalli, Module) do
::Dalli.stub_const(:Client) do
mock = Minitest::Mock.new
mock.expect :call, nil, [Dalli::Client, Array, :memcache], instrument: :prepend
::PrometheusExporter::Instrumentation::MethodProfiler.stub(:patch, mock) do
configure_middleware(instrument: :prepend)
end
mock.verify
end
end
end
def test_patch_called_with_alias_method_instrument
Object.stub_const(:Redis, Module) do
# must be less than version 5 for this instrumentation
::Redis.stub_const(:VERSION, '4.0.4') do
::Redis.stub_const(:VERSION, "4.0.4") do
::Redis.stub_const(:Client) do
mock = Minitest::Mock.new
mock.expect :call, nil, [Redis::Client, Array, :redis], instrument: :alias_method
@ -204,5 +213,16 @@ class PrometheusExporterMiddlewareTest < Minitest::Test
mock.verify
end
end
Object.stub_const(:Dalli, Module) do
::Dalli.stub_const(:Client) do
mock = Minitest::Mock.new
mock.expect :call, nil, [Dalli::Client, Array, :memcache], instrument: :alias_method
::PrometheusExporter::Instrumentation::MethodProfiler.stub(:patch, mock) do
configure_middleware(instrument: :alias_method)
end
mock.verify
end
end
end
end


@ -1,9 +1,9 @@
# frozen_string_literal: true
require_relative '../test_helper'
require 'mini_racer'
require 'prometheus_exporter/server'
require 'prometheus_exporter/instrumentation'
require_relative "../test_helper"
require "mini_racer"
require "prometheus_exporter/server"
require "prometheus_exporter/instrumentation"
class PrometheusActiveRecordCollectorTest < Minitest::Test
include CollectorHelper
@ -22,7 +22,7 @@ class PrometheusActiveRecordCollectorTest < Minitest::Test
"dead" => 10,
"idle" => 20,
"waiting" => 0,
"size" => 120
"size" => 120,
)
metrics = collector.metrics
assert_equal 6, metrics.size
@ -40,13 +40,17 @@ class PrometheusActiveRecordCollectorTest < Minitest::Test
"waiting" => 0,
"size" => 120,
"metric_labels" => {
"service" => "service1"
}
"service" => "service1",
},
)
metrics = collector.metrics
assert_equal 6, metrics.size
assert(metrics.first.metric_text.include?('active_record_connection_pool_connections{service="service1",pid="1000",hostname="localhost"} 50'))
assert(
metrics.first.metric_text.include?(
'active_record_connection_pool_connections{service="service1",pid="1000",hostname="localhost"} 50',
),
)
end
def test_collecting_metrics_with_client_default_labels
@ -61,16 +65,20 @@ class PrometheusActiveRecordCollectorTest < Minitest::Test
"waiting" => 0,
"size" => 120,
"metric_labels" => {
"service" => "service1"
"service" => "service1",
},
"custom_labels" => {
"environment" => "test"
}
"environment" => "test",
},
)
metrics = collector.metrics
assert_equal 6, metrics.size
assert(metrics.first.metric_text.include?('active_record_connection_pool_connections{service="service1",pid="1000",hostname="localhost",environment="test"} 50'))
assert(
metrics.first.metric_text.include?(
'active_record_connection_pool_connections{service="service1",pid="1000",hostname="localhost",environment="test"} 50',
),
)
end
def test_collecting_metrics_for_multiple_pools
@ -85,8 +93,8 @@ class PrometheusActiveRecordCollectorTest < Minitest::Test
"waiting" => 0,
"size" => 120,
"metric_labels" => {
"pool_name" => "primary"
}
"pool_name" => "primary",
},
)
collector.collect(
"type" => "active_record",
@ -99,22 +107,32 @@ class PrometheusActiveRecordCollectorTest < Minitest::Test
"waiting" => 0,
"size" => 12,
"metric_labels" => {
"pool_name" => "other"
}
"pool_name" => "other",
},
)
metrics = collector.metrics
assert_equal 6, metrics.size
assert(metrics.first.metric_text.include?('active_record_connection_pool_connections{pool_name="primary",pid="1000",hostname="localhost"} 50'))
assert(metrics.first.metric_text.include?('active_record_connection_pool_connections{pool_name="other",pid="1000",hostname="localhost"} 5'))
assert(
metrics.first.metric_text.include?(
'active_record_connection_pool_connections{pool_name="primary",pid="1000",hostname="localhost"} 50',
),
)
assert(
metrics.first.metric_text.include?(
'active_record_connection_pool_connections{pool_name="other",pid="1000",hostname="localhost"} 5',
),
)
end
def test_metrics_deduplication
data = {
"pid" => "1000",
"hostname" => "localhost",
"metric_labels" => { "pool_name" => "primary" },
"connections" => 100
"metric_labels" => {
"pool_name" => "primary",
},
"connections" => 100,
}
collector.collect(data)
@ -128,11 +146,12 @@ class PrometheusActiveRecordCollectorTest < Minitest::Test
assert_equal 1, metrics.size
assert_equal [
'active_record_connection_pool_connections{pool_name="primary",pid="1000",hostname="localhost"} 200',
'active_record_connection_pool_connections{pool_name="primary",pid="2000",hostname="localhost"} 300',
'active_record_connection_pool_connections{pool_name="primary",pid="3000",hostname="localhost"} 400',
'active_record_connection_pool_connections{pool_name="primary",pid="2000",hostname="localhost2"} 500'
], metrics_lines
'active_record_connection_pool_connections{pool_name="primary",pid="1000",hostname="localhost"} 200',
'active_record_connection_pool_connections{pool_name="primary",pid="2000",hostname="localhost"} 300',
'active_record_connection_pool_connections{pool_name="primary",pid="3000",hostname="localhost"} 400',
'active_record_connection_pool_connections{pool_name="primary",pid="2000",hostname="localhost2"} 500',
],
metrics_lines
end
def test_metrics_expiration
@@ -146,8 +165,8 @@ class PrometheusActiveRecordCollectorTest < Minitest::Test
"waiting" => 0,
"size" => 120,
"metric_labels" => {
"pool_name" => "primary"
}
"pool_name" => "primary",
},
}
stub_monotonic_clock(0) do
@@ -156,8 +175,6 @@ class PrometheusActiveRecordCollectorTest < Minitest::Test
assert_equal 6, collector.metrics.size
end
stub_monotonic_clock(max_metric_age + 1) do
assert_equal 0, collector.metrics.size
end
stub_monotonic_clock(max_metric_age + 1) { assert_equal 0, collector.metrics.size }
end
end
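
The expiration tests above stub a monotonic clock and assert that metrics vanish once `max_metric_age` has elapsed. The underlying pattern can be sketched in plain Ruby; the container below is illustrative (its class and method names are invented here, not the gem's `MetricsContainer` API): each metric is stamped with a monotonic expiry time on insert and filtered out once the clock passes it.

```ruby
# Illustrative sketch of monotonic-clock metric expiry (names invented here,
# not the prometheus_exporter API).
class ExpiringMetrics
  def initialize(ttl_seconds)
    @ttl = ttl_seconds
    @items = []
  end

  def now
    # Monotonic clock: immune to wall-clock jumps, suited for TTL bookkeeping.
    Process.clock_gettime(Process::CLOCK_MONOTONIC)
  end

  # Stamp each metric with the monotonic time at which it becomes stale.
  def <<(metric)
    @items << metric.merge("_expire_at" => now + @ttl)
  end

  # Drop stale entries, then return the metrics without the bookkeeping key.
  def to_a
    cutoff = now
    @items.reject! { |m| m["_expire_at"] <= cutoff }
    @items.map { |m| m.reject { |k, _| k == "_expire_at" } }
  end
end
```

Stubbing the clock in tests, as the diffs above do, simply means controlling what `now` returns so expiry can be asserted deterministically.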

File diff suppressed because it is too large


@@ -1,8 +1,8 @@
# frozen_string_literal: true
require_relative '../test_helper'
require 'prometheus_exporter/server'
require 'prometheus_exporter/instrumentation'
require_relative "../test_helper"
require "prometheus_exporter/server"
require "prometheus_exporter/instrumentation"
class PrometheusGoodJobCollectorTest < Minitest::Test
include CollectorHelper
@@ -20,8 +20,8 @@ class PrometheusGoodJobCollectorTest < Minitest::Test
"running" => 5,
"finished" => 100,
"succeeded" => 2000,
"discarded" => 9
}
"discarded" => 9,
},
)
metrics = collector.metrics
@@ -33,7 +33,7 @@ class PrometheusGoodJobCollectorTest < Minitest::Test
"good_job_running 5",
"good_job_finished 100",
"good_job_succeeded 2000",
"good_job_discarded 9"
"good_job_discarded 9",
]
assert_equal expected, metrics.map(&:metric_text)
end
@@ -48,9 +48,9 @@ class PrometheusGoodJobCollectorTest < Minitest::Test
"finished" => 100,
"succeeded" => 2000,
"discarded" => 9,
'custom_labels' => {
'hostname' => 'good_job_host'
}
"custom_labels" => {
"hostname" => "good_job_host",
},
)
metrics = collector.metrics
@@ -59,20 +59,13 @@ class PrometheusGoodJobCollectorTest < Minitest::Test
end
def test_metrics_expiration
data = {
"type" => "good_job",
"scheduled" => 3,
"retried" => 4,
"queued" => 0
}
data = { "type" => "good_job", "scheduled" => 3, "retried" => 4, "queued" => 0 }
stub_monotonic_clock(0) do
collector.collect(data)
assert_equal 3, collector.metrics.size
end
stub_monotonic_clock(max_metric_age + 1) do
assert_equal 0, collector.metrics.size
end
stub_monotonic_clock(max_metric_age + 1) { assert_equal 0, collector.metrics.size }
end
end


@@ -1,8 +1,8 @@
# frozen_string_literal: true
require_relative '../test_helper'
require 'prometheus_exporter/server'
require 'prometheus_exporter/instrumentation'
require 'prometheus_exporter/server/metrics_container'
require_relative "../test_helper"
require "prometheus_exporter/server"
require "prometheus_exporter/instrumentation"
require "prometheus_exporter/server/metrics_container"
class PrometheusMetricsContainerTest < Minitest::Test
def metrics
@@ -20,7 +20,7 @@ class PrometheusMetricsContainerTest < Minitest::Test
stub_monotonic_clock(61.0) do
metrics << { key: "value2" }
assert_equal 2, metrics.size
assert_equal ["value", "value2"], metrics.map { |v| v[:key] }
assert_equal %w[value value2], metrics.map { |v| v[:key] }
assert_equal 61.0, metrics[0]["_expire_at"]
assert_equal 121.0, metrics[1]["_expire_at"]
end
@@ -28,7 +28,7 @@ class PrometheusMetricsContainerTest < Minitest::Test
stub_monotonic_clock(62.0) do
metrics << { key: "value3" }
assert_equal 2, metrics.size
assert_equal ["value2", "value3"], metrics.map { |v| v[:key] }
assert_equal %w[value2 value3], metrics.map { |v| v[:key] }
assert_equal 121.0, metrics[0]["_expire_at"]
assert_equal 122.0, metrics[1]["_expire_at"]
end
@@ -45,9 +45,7 @@ class PrometheusMetricsContainerTest < Minitest::Test
end
def test_container_with_filter
metrics.filter = -> (new_metric, old_metric) do
new_metric[:hostname] == old_metric[:hostname]
end
metrics.filter = ->(new_metric, old_metric) { new_metric[:hostname] == old_metric[:hostname] }
stub_monotonic_clock(1.0) do
metrics << { hostname: "host1", value: 100 }


@@ -1,9 +1,9 @@
# frozen_string_literal: true
require_relative '../test_helper'
require 'mini_racer'
require 'prometheus_exporter/server'
require 'prometheus_exporter/instrumentation'
require_relative "../test_helper"
require "mini_racer"
require "prometheus_exporter/server"
require "prometheus_exporter/instrumentation"
class ProcessCollectorTest < Minitest::Test
include CollectorHelper
@@ -26,7 +26,7 @@ class ProcessCollectorTest < Minitest::Test
"rss" => 3000,
"major_gc_ops_total" => 4000,
"minor_gc_ops_total" => 4001,
"allocated_objects_total" => 4002
"allocated_objects_total" => 4002,
}
end
@@ -35,17 +35,18 @@ class ProcessCollectorTest < Minitest::Test
assert_equal 10, collector.metrics.size
assert_equal [
'heap_free_slots{pid="1000",hostname="localhost"} 1000',
'heap_live_slots{pid="1000",hostname="localhost"} 1001',
'v8_heap_size{pid="1000",hostname="localhost"} 2000',
'v8_used_heap_size{pid="1000",hostname="localhost"} 2001',
'v8_physical_size{pid="1000",hostname="localhost"} 2003',
'v8_heap_count{pid="1000",hostname="localhost"} 2004',
'rss{pid="1000",hostname="localhost"} 3000',
'major_gc_ops_total{pid="1000",hostname="localhost"} 4000',
'minor_gc_ops_total{pid="1000",hostname="localhost"} 4001',
'allocated_objects_total{pid="1000",hostname="localhost"} 4002'
], collector_metric_lines
'heap_free_slots{pid="1000",hostname="localhost"} 1000',
'heap_live_slots{pid="1000",hostname="localhost"} 1001',
'v8_heap_size{pid="1000",hostname="localhost"} 2000',
'v8_used_heap_size{pid="1000",hostname="localhost"} 2001',
'v8_physical_size{pid="1000",hostname="localhost"} 2003',
'v8_heap_count{pid="1000",hostname="localhost"} 2004',
'rss{pid="1000",hostname="localhost"} 3000',
'major_gc_ops_total{pid="1000",hostname="localhost"} 4000',
'minor_gc_ops_total{pid="1000",hostname="localhost"} 4001',
'allocated_objects_total{pid="1000",hostname="localhost"} 4002',
],
collector_metric_lines
end
def test_metrics_deduplication
@@ -68,8 +69,6 @@ class ProcessCollectorTest < Minitest::Test
assert_equal 10, collector.metrics.size
end
stub_monotonic_clock(max_metric_age + 1) do
assert_equal 0, collector.metrics.size
end
stub_monotonic_clock(max_metric_age + 1) { assert_equal 0, collector.metrics.size }
end
end


@@ -1,9 +1,9 @@
# frozen_string_literal: true
require_relative '../test_helper'
require 'mini_racer'
require 'prometheus_exporter/server'
require 'prometheus_exporter/instrumentation'
require_relative "../test_helper"
require "mini_racer"
require "prometheus_exporter/server"
require "prometheus_exporter/instrumentation"
class PrometheusPumaCollectorTest < Minitest::Test
include CollectorHelper
@@ -24,7 +24,8 @@ class PrometheusPumaCollectorTest < Minitest::Test
"request_backlog" => 0,
"running_threads" => 4,
"thread_pool_capacity" => 10,
"max_threads" => 10
"max_threads" => 10,
"busy_threads" => 2,
)
collector.collect(
@@ -38,7 +39,8 @@ class PrometheusPumaCollectorTest < Minitest::Test
"request_backlog" => 1,
"running_threads" => 9,
"thread_pool_capacity" => 10,
"max_threads" => 10
"max_threads" => 10,
"busy_threads" => 3,
)
# overwriting previous metrics from first host
@@ -53,13 +55,13 @@ class PrometheusPumaCollectorTest < Minitest::Test
"request_backlog" => 2,
"running_threads" => 8,
"thread_pool_capacity" => 10,
"max_threads" => 10
"max_threads" => 10,
"busy_threads" => 4,
)
metrics = collector.metrics
assert_equal 7, metrics.size
assert_equal "puma_workers{phase=\"0\"} 3",
metrics.first.metric_text
assert_equal 8, metrics.size
assert_equal "puma_workers{phase=\"0\"} 3", metrics.first.metric_text
end
def test_collecting_metrics_for_different_hosts_with_custom_labels
@@ -75,9 +77,10 @@ class PrometheusPumaCollectorTest < Minitest::Test
"running_threads" => 4,
"thread_pool_capacity" => 10,
"max_threads" => 10,
"busy_threads" => 2,
"custom_labels" => {
"hostname" => "test1.example.com"
}
"hostname" => "test1.example.com",
},
)
collector.collect(
@@ -92,9 +95,10 @@ class PrometheusPumaCollectorTest < Minitest::Test
"running_threads" => 9,
"thread_pool_capacity" => 10,
"max_threads" => 10,
"busy_threads" => 3,
"custom_labels" => {
"hostname" => "test2.example.com"
}
"hostname" => "test2.example.com",
},
)
# overwriting previous metrics from first host
@@ -110,15 +114,16 @@ class PrometheusPumaCollectorTest < Minitest::Test
"running_threads" => 8,
"thread_pool_capacity" => 10,
"max_threads" => 10,
"busy_threads" => 4,
"custom_labels" => {
"hostname" => "test1.example.com"
}
"hostname" => "test1.example.com",
},
)
metrics = collector.metrics
assert_equal 7, metrics.size
assert_equal 8, metrics.size
assert_equal "puma_workers{phase=\"0\",hostname=\"test2.example.com\"} 4\n" \
"puma_workers{phase=\"0\",hostname=\"test1.example.com\"} 3",
"puma_workers{phase=\"0\",hostname=\"test1.example.com\"} 3",
metrics.first.metric_text
end
end
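
The `busy_threads` values asserted in the Puma tests above come straight from Puma's stats: per Puma's docs/stats.md, busy_threads is the running threads that are not idle, plus requests still waiting for a thread to pick them up. A minimal sketch of that arithmetic (the helper name and keyword arguments are ours for illustration; Puma computes this internally):

```ruby
# Sketch of the busy_threads definition from Puma's docs/stats.md:
# running threads minus threads waiting for work, plus backlogged requests.
# Method name and parameters are illustrative, not Puma's API.
def busy_threads(running:, idle_threads:, request_backlog:)
  running - idle_threads + request_backlog
end
```

This makes it a "wholistic" load signal: it rises both when threads are doing work and when requests queue up faster than threads free up.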


@@ -1,8 +1,8 @@
# frozen_string_literal: true
require_relative '../test_helper'
require 'prometheus_exporter/server'
require 'prometheus_exporter/instrumentation'
require_relative "../test_helper"
require "prometheus_exporter/server"
require "prometheus_exporter/instrumentation"
class PrometheusResqueCollectorTest < Minitest::Test
include CollectorHelper
@@ -12,31 +12,23 @@ class PrometheusResqueCollectorTest < Minitest::Test
end
def test_collecting_metrics
collector.collect(
'pending_jobs' => 4,
'processed_jobs' => 7,
'failed_jobs' => 1
)
collector.collect("pending_jobs" => 4, "processed_jobs" => 7, "failed_jobs" => 1)
metrics = collector.metrics
expected = [
'resque_processed_jobs 7',
'resque_failed_jobs 1',
'resque_pending_jobs 4'
]
expected = ["resque_processed_jobs 7", "resque_failed_jobs 1", "resque_pending_jobs 4"]
assert_equal expected, metrics.map(&:metric_text)
end
def test_collecting_metrics_with_custom_labels
collector.collect(
'type' => 'resque',
'pending_jobs' => 1,
'processed_jobs' => 2,
'failed_jobs' => 3,
'custom_labels' => {
'hostname' => 'a323d2f681e2'
}
"type" => "resque",
"pending_jobs" => 1,
"processed_jobs" => 2,
"failed_jobs" => 3,
"custom_labels" => {
"hostname" => "a323d2f681e2",
},
)
metrics = collector.metrics
@@ -44,20 +36,13 @@ class PrometheusResqueCollectorTest < Minitest::Test
end
def test_metrics_expiration
data = {
'type' => 'resque',
'pending_jobs' => 1,
'processed_jobs' => 2,
'failed_jobs' => 3
}
data = { "type" => "resque", "pending_jobs" => 1, "processed_jobs" => 2, "failed_jobs" => 3 }
stub_monotonic_clock(0) do
collector.collect(data)
assert_equal 3, collector.metrics.size
end
stub_monotonic_clock(max_metric_age + 1) do
assert_equal 0, collector.metrics.size
end
stub_monotonic_clock(max_metric_age + 1) { assert_equal 0, collector.metrics.size }
end
end


@@ -1,7 +1,7 @@
# frozen_string_literal: true
require_relative '../test_helper'
require 'prometheus_exporter/server'
require_relative "../test_helper"
require "prometheus_exporter/server"
class PrometheusRunnerTest < Minitest::Test
class MockerWebServer < OpenStruct
@@ -29,7 +29,7 @@ class PrometheusRunnerTest < Minitest::Test
class TypeCollectorMock < PrometheusExporter::Server::TypeCollector
def type
'test'
"test"
end
def collect(_)
@@ -48,7 +48,7 @@ class PrometheusRunnerTest < Minitest::Test
def test_runner_defaults
runner = PrometheusExporter::Server::Runner.new
assert_equal(runner.prefix, 'ruby_')
assert_equal(runner.prefix, "ruby_")
assert_equal(runner.port, 9394)
assert_equal(runner.timeout, 2)
assert_equal(runner.collector_class, PrometheusExporter::Server::Collector)
@@ -56,69 +56,80 @@ class PrometheusRunnerTest < Minitest::Test
assert_equal(runner.verbose, false)
assert_empty(runner.label)
assert_nil(runner.auth)
assert_equal(runner.realm, 'Prometheus Exporter')
assert_equal(runner.realm, "Prometheus Exporter")
end
def test_runner_custom_options
runner = PrometheusExporter::Server::Runner.new(
prefix: 'new_',
port: 1234,
timeout: 1,
collector_class: CollectorMock,
type_collectors: [TypeCollectorMock],
verbose: true,
label: { environment: 'integration' },
auth: 'my_htpasswd_file',
realm: 'test realm',
histogram: true
)
runner =
PrometheusExporter::Server::Runner.new(
prefix: "new_",
port: 1234,
timeout: 1,
collector_class: CollectorMock,
type_collectors: [TypeCollectorMock],
verbose: true,
label: {
environment: "integration",
},
auth: "my_htpasswd_file",
realm: "test realm",
histogram: true,
)
assert_equal(runner.prefix, 'new_')
assert_equal(runner.prefix, "new_")
assert_equal(runner.port, 1234)
assert_equal(runner.timeout, 1)
assert_equal(runner.collector_class, CollectorMock)
assert_equal(runner.type_collectors, [TypeCollectorMock])
assert_equal(runner.verbose, true)
assert_equal(runner.label, { environment: 'integration' })
assert_equal(runner.auth, 'my_htpasswd_file')
assert_equal(runner.realm, 'test realm')
assert_equal(runner.label, { environment: "integration" })
assert_equal(runner.auth, "my_htpasswd_file")
assert_equal(runner.realm, "test realm")
assert_equal(runner.histogram, true)
reset_base_metric_label
end
def test_runner_start
runner = PrometheusExporter::Server::Runner.new(server_class: MockerWebServer, label: { environment: 'integration' })
runner =
PrometheusExporter::Server::Runner.new(
server_class: MockerWebServer,
label: {
environment: "integration",
},
)
result = runner.start
assert_equal(result, true)
assert_equal(PrometheusExporter::Metric::Base.default_prefix, 'ruby_')
assert_equal(PrometheusExporter::Metric::Base.default_prefix, "ruby_")
assert_equal(runner.port, 9394)
assert_equal(runner.timeout, 2)
assert_equal(runner.verbose, false)
assert_nil(runner.auth)
assert_equal(runner.realm, 'Prometheus Exporter')
assert_equal(PrometheusExporter::Metric::Base.default_labels, { environment: 'integration' })
assert_equal(runner.realm, "Prometheus Exporter")
assert_equal(PrometheusExporter::Metric::Base.default_labels, { environment: "integration" })
assert_instance_of(PrometheusExporter::Server::Collector, runner.collector)
reset_base_metric_label
end
def test_runner_custom_collector
runner = PrometheusExporter::Server::Runner.new(
server_class: MockerWebServer,
collector_class: CollectorMock
)
runner =
PrometheusExporter::Server::Runner.new(
server_class: MockerWebServer,
collector_class: CollectorMock,
)
runner.start
assert_equal(runner.collector_class, CollectorMock)
end
def test_runner_wrong_collector
runner = PrometheusExporter::Server::Runner.new(
server_class: MockerWebServer,
collector_class: WrongCollectorMock
)
runner =
PrometheusExporter::Server::Runner.new(
server_class: MockerWebServer,
collector_class: WrongCollectorMock,
)
assert_raises PrometheusExporter::Server::WrongInheritance do
runner.start
@@ -126,11 +137,12 @@ class PrometheusRunnerTest < Minitest::Test
end
def test_runner_custom_collector_types
runner = PrometheusExporter::Server::Runner.new(
server_class: MockerWebServer,
collector_class: CollectorMock,
type_collectors: [TypeCollectorMock]
)
runner =
PrometheusExporter::Server::Runner.new(
server_class: MockerWebServer,
collector_class: CollectorMock,
type_collectors: [TypeCollectorMock],
)
runner.start
custom_collectors = runner.collector.collectors
@@ -140,13 +152,13 @@ class PrometheusRunnerTest < Minitest::Test
end
def test_runner_histogram_mode
runner = PrometheusExporter::Server::Runner.new(
server_class: MockerWebServer,
histogram: true
)
runner = PrometheusExporter::Server::Runner.new(server_class: MockerWebServer, histogram: true)
runner.start
assert_equal(PrometheusExporter::Metric::Base.default_aggregation, PrometheusExporter::Metric::Histogram)
assert_equal(
PrometheusExporter::Metric::Base.default_aggregation,
PrometheusExporter::Metric::Histogram,
)
end
def reset_base_metric_label


@@ -1,8 +1,8 @@
# frozen_string_literal: true
require_relative '../test_helper'
require 'prometheus_exporter/server'
require 'prometheus_exporter/instrumentation'
require_relative "../test_helper"
require "prometheus_exporter/server"
require "prometheus_exporter/instrumentation"
class PrometheusSidekiqProcessCollectorTest < Minitest::Test
include CollectorHelper
@@ -13,18 +13,18 @@ class PrometheusSidekiqProcessCollectorTest < Minitest::Test
def test_collecting_metrics
collector.collect(
'process' => {
'busy' => 1,
'concurrency' => 2,
'labels' => {
'labels' => 'lab_1,lab_2',
'queues' => 'default,reliable',
'quiet' => 'false',
'tag' => 'default',
'hostname' => 'sidekiq-1234',
'identity' => 'sidekiq-1234:1',
}
}
"process" => {
"busy" => 1,
"concurrency" => 2,
"labels" => {
"labels" => "lab_1,lab_2",
"queues" => "default,reliable",
"quiet" => "false",
"tag" => "default",
"hostname" => "sidekiq-1234",
"identity" => "sidekiq-1234:1",
},
},
)
metrics = collector.metrics
@@ -38,41 +38,41 @@ class PrometheusSidekiqProcessCollectorTest < Minitest::Test
def test_only_fresh_metrics_are_collected
stub_monotonic_clock(1.0) do
collector.collect(
'process' => {
'busy' => 1,
'concurrency' => 2,
'labels' => {
'labels' => 'lab_1,lab_2',
'queues' => 'default,reliable',
'quiet' => 'false',
'tag' => 'default',
'hostname' => 'sidekiq-1234',
'identity' => 'sidekiq-1234:1',
}
}
"process" => {
"busy" => 1,
"concurrency" => 2,
"labels" => {
"labels" => "lab_1,lab_2",
"queues" => "default,reliable",
"quiet" => "false",
"tag" => "default",
"hostname" => "sidekiq-1234",
"identity" => "sidekiq-1234:1",
},
},
)
end
stub_monotonic_clock(2.0, advance: max_metric_age) do
collector.collect(
'process' => {
'busy' => 2,
'concurrency' => 2,
'labels' => {
'labels' => 'other_label',
'queues' => 'default,reliable',
'quiet' => 'true',
'tag' => 'default',
'hostname' => 'sidekiq-1234',
'identity' => 'sidekiq-1234:1',
}
}
"process" => {
"busy" => 2,
"concurrency" => 2,
"labels" => {
"labels" => "other_label",
"queues" => "default,reliable",
"quiet" => "true",
"tag" => "default",
"hostname" => "sidekiq-1234",
"identity" => "sidekiq-1234:1",
},
},
)
metrics = collector.metrics
expected = [
'sidekiq_process_busy{labels="other_label",queues="default,reliable",quiet="true",tag="default",hostname="sidekiq-1234",identity="sidekiq-1234:1"} 2',
'sidekiq_process_concurrency{labels="other_label",queues="default,reliable",quiet="true",tag="default",hostname="sidekiq-1234",identity="sidekiq-1234:1"} 2'
'sidekiq_process_concurrency{labels="other_label",queues="default,reliable",quiet="true",tag="default",hostname="sidekiq-1234",identity="sidekiq-1234:1"} 2',
]
assert_equal expected, metrics.map(&:metric_text)
end


@@ -1,8 +1,8 @@
# frozen_string_literal: true
require_relative '../test_helper'
require 'prometheus_exporter/server'
require 'prometheus_exporter/instrumentation'
require_relative "../test_helper"
require "prometheus_exporter/server"
require "prometheus_exporter/instrumentation"
class PrometheusSidekiqQueueCollectorTest < Minitest::Test
include CollectorHelper
@@ -13,11 +13,7 @@ class PrometheusSidekiqQueueCollectorTest < Minitest::Test
def test_collecting_metrics
collector.collect(
'queues' => [
'backlog' => 16,
'latency_seconds' => 7,
'labels' => { 'queue' => 'default' }
]
"queues" => ["backlog" => 16, "latency_seconds" => 7, "labels" => { "queue" => "default" }],
)
metrics = collector.metrics
@@ -31,14 +27,10 @@ class PrometheusSidekiqQueueCollectorTest < Minitest::Test
def test_collecting_metrics_with_client_default_labels
collector.collect(
'queues' => [
'backlog' => 16,
'latency_seconds' => 7,
'labels' => { 'queue' => 'default' }
],
'custom_labels' => {
'environment' => 'test'
}
"queues" => ["backlog" => 16, "latency_seconds" => 7, "labels" => { "queue" => "default" }],
"custom_labels" => {
"environment" => "test",
},
)
metrics = collector.metrics
@@ -52,27 +44,15 @@ class PrometheusSidekiqQueueCollectorTest < Minitest::Test
def test_only_fresh_metrics_are_collected
stub_monotonic_clock(1.0) do
collector.collect(
'queues' => [
'backlog' => 1,
'labels' => { 'queue' => 'default' }
]
)
collector.collect("queues" => ["backlog" => 1, "labels" => { "queue" => "default" }])
end
stub_monotonic_clock(2.0, advance: max_metric_age) do
collector.collect(
'queues' => [
'latency_seconds' => 1,
'labels' => { 'queue' => 'default' }
]
)
collector.collect("queues" => ["latency_seconds" => 1, "labels" => { "queue" => "default" }])
metrics = collector.metrics
expected = [
'sidekiq_queue_latency_seconds{queue="default"} 1',
]
expected = ['sidekiq_queue_latency_seconds{queue="default"} 1']
assert_equal expected, metrics.map(&:metric_text)
end
end


@@ -1,8 +1,8 @@
# frozen_string_literal: true
require_relative '../test_helper'
require 'prometheus_exporter/server'
require 'prometheus_exporter/instrumentation'
require_relative "../test_helper"
require "prometheus_exporter/server"
require "prometheus_exporter/instrumentation"
class PrometheusSidekiqStatsCollectorTest < Minitest::Test
include CollectorHelper
@@ -13,16 +13,16 @@ class PrometheusSidekiqStatsCollectorTest < Minitest::Test
def test_collecting_metrics
collector.collect(
'stats' => {
'dead_size' => 1,
'enqueued' => 2,
'failed' => 3,
'processed' => 4,
'processes_size' => 5,
'retry_size' => 6,
'scheduled_size' => 7,
'workers_size' => 8,
}
"stats" => {
"dead_size" => 1,
"enqueued" => 2,
"failed" => 3,
"processed" => 4,
"processes_size" => 5,
"retry_size" => 6,
"scheduled_size" => 7,
"workers_size" => 8,
},
)
metrics = collector.metrics
@@ -34,7 +34,7 @@ class PrometheusSidekiqStatsCollectorTest < Minitest::Test
"sidekiq_stats_processes_size 5",
"sidekiq_stats_retry_size 6",
"sidekiq_stats_scheduled_size 7",
"sidekiq_stats_workers_size 8"
"sidekiq_stats_workers_size 8",
]
assert_equal expected, metrics.map(&:metric_text)
end
@@ -42,31 +42,31 @@ class PrometheusSidekiqStatsCollectorTest < Minitest::Test
def test_only_fresh_metrics_are_collected
stub_monotonic_clock(1.0) do
collector.collect(
'stats' => {
'dead_size' => 1,
'enqueued' => 2,
'failed' => 3,
'processed' => 4,
'processes_size' => 5,
'retry_size' => 6,
'scheduled_size' => 7,
'workers_size' => 8,
}
"stats" => {
"dead_size" => 1,
"enqueued" => 2,
"failed" => 3,
"processed" => 4,
"processes_size" => 5,
"retry_size" => 6,
"scheduled_size" => 7,
"workers_size" => 8,
},
)
end
stub_monotonic_clock(2.0, advance: max_metric_age) do
collector.collect(
'stats' => {
'dead_size' => 2,
'enqueued' => 3,
'failed' => 4,
'processed' => 5,
'processes_size' => 6,
'retry_size' => 7,
'scheduled_size' => 8,
'workers_size' => 9,
}
"stats" => {
"dead_size" => 2,
"enqueued" => 3,
"failed" => 4,
"processed" => 5,
"processes_size" => 6,
"retry_size" => 7,
"scheduled_size" => 8,
"workers_size" => 9,
},
)
metrics = collector.metrics
@@ -78,7 +78,7 @@ class PrometheusSidekiqStatsCollectorTest < Minitest::Test
"sidekiq_stats_processes_size 6",
"sidekiq_stats_retry_size 7",
"sidekiq_stats_scheduled_size 8",
"sidekiq_stats_workers_size 9"
"sidekiq_stats_workers_size 9",
]
assert_equal expected, metrics.map(&:metric_text)


@@ -1,9 +1,9 @@
# frozen_string_literal: true
require_relative '../test_helper'
require 'mini_racer'
require 'prometheus_exporter/server'
require 'prometheus_exporter/instrumentation'
require_relative "../test_helper"
require "mini_racer"
require "prometheus_exporter/server"
require "prometheus_exporter/instrumentation"
class PrometheusUnicornCollectorTest < Minitest::Test
include CollectorHelper
@@ -13,28 +13,24 @@ class PrometheusUnicornCollectorTest < Minitest::Test
end
def test_collecting_metrics
collector.collect(
'workers' => 4,
'active_workers' => 3,
'request_backlog' => 0
)
collector.collect("workers" => 4, "active_workers" => 3, "request_backlog" => 0)
assert_collector_metric_lines [
'unicorn_workers 4',
'unicorn_active_workers 3',
'unicorn_request_backlog 0'
]
"unicorn_workers 4",
"unicorn_active_workers 3",
"unicorn_request_backlog 0",
]
end
def test_collecting_metrics_with_custom_labels
collector.collect(
'type' => 'unicorn',
'workers' => 2,
'active_workers' => 0,
'request_backlog' => 0,
'custom_labels' => {
'hostname' => 'a323d2f681e2'
}
"type" => "unicorn",
"workers" => 2,
"active_workers" => 0,
"request_backlog" => 0,
"custom_labels" => {
"hostname" => "a323d2f681e2",
},
)
metrics = collector.metrics
@@ -43,20 +39,23 @@ class PrometheusUnicornCollectorTest < Minitest::Test
end
def test_metrics_deduplication
collector.collect('workers' => 4, 'active_workers' => 3, 'request_backlog' => 0)
collector.collect('workers' => 4, 'active_workers' => 3, 'request_backlog' => 0)
collector.collect('workers' => 4, 'active_workers' => 3, 'request_backlog' => 0, 'hostname' => 'localhost2')
collector.collect("workers" => 4, "active_workers" => 3, "request_backlog" => 0)
collector.collect("workers" => 4, "active_workers" => 3, "request_backlog" => 0)
collector.collect(
"workers" => 4,
"active_workers" => 3,
"request_backlog" => 0,
"hostname" => "localhost2",
)
assert_equal 3, collector_metric_lines.size
end
def test_metrics_expiration
stub_monotonic_clock(0) do
collector.collect('workers' => 4, 'active_workers' => 3, 'request_backlog' => 0)
collector.collect("workers" => 4, "active_workers" => 3, "request_backlog" => 0)
assert_equal 3, collector.metrics.size
end
stub_monotonic_clock(max_metric_age + 1) do
assert_equal 0, collector.metrics.size
end
stub_monotonic_clock(max_metric_age + 1) { assert_equal 0, collector.metrics.size }
end
end


@@ -1,13 +1,13 @@
# frozen_string_literal: true
require_relative '../test_helper'
require 'mini_racer'
require 'prometheus_exporter/server'
require 'prometheus_exporter/instrumentation'
require_relative "../test_helper"
require "mini_racer"
require "prometheus_exporter/server"
require "prometheus_exporter/instrumentation"
class PrometheusWebCollectorTest < Minitest::Test
def setup
PrometheusExporter::Metric::Base.default_prefix = ''
PrometheusExporter::Metric::Base.default_prefix = ""
PrometheusExporter::Metric::Base.default_aggregation = nil
end
@@ -24,15 +24,15 @@ class PrometheusWebCollectorTest < Minitest::Test
"type" => "web",
"timings" => nil,
"default_labels" => {
"action" => 'index',
"controller" => 'home',
"status": 200
"action" => "index",
"controller" => "home",
:"status" => 200,
},
)
metrics = collector.metrics
assert_equal 5, metrics.size
assert_equal 6, metrics.size
end
def test_collecting_metrics
@@ -41,99 +41,121 @@ class PrometheusWebCollectorTest < Minitest::Test
"timings" => {
"sql" => {
duration: 0.5,
count: 40
count: 40,
},
"redis" => {
duration: 0.03,
count: 4
count: 4,
},
"memcache" => {
duration: 0.02,
count: 1,
},
"queue" => 0.03,
"total_duration" => 1.0
"total_duration" => 1.0,
},
'default_labels' => {
'action' => 'index',
'controller' => 'home',
"status" => 200
"default_labels" => {
"action" => "index",
"controller" => "home",
"status" => 200,
},
)
metrics = collector.metrics
assert_equal 5, metrics.size
assert_equal 6, metrics.size
end
def test_collecting_metrics_with_custom_labels
collector.collect(
'type' => 'web',
'timings' => nil,
'status' => 200,
'default_labels' => {
'controller' => 'home',
'action' => 'index'
"type" => "web",
"timings" => nil,
"status" => 200,
"default_labels" => {
"controller" => "home",
"action" => "index",
},
"custom_labels" => {
"service" => "service1",
},
'custom_labels' => {
'service' => 'service1'
}
)
metrics = collector.metrics
assert_equal 5, metrics.size
assert(metrics.first.metric_text.include?('http_requests_total{controller="home",action="index",service="service1",status="200"} 1'))
assert_equal 6, metrics.size
assert(
metrics.first.metric_text.include?(
'http_requests_total{controller="home",action="index",service="service1",status="200"} 1',
),
)
end
def test_collecting_metrics_merging_custom_labels_and_status
collector.collect(
'type' => 'web',
'timings' => nil,
'status' => 200,
'default_labels' => {
'controller' => 'home',
'action' => 'index'
"type" => "web",
"timings" => nil,
"status" => 200,
"default_labels" => {
"controller" => "home",
"action" => "index",
},
"custom_labels" => {
"service" => "service1",
"status" => 200,
},
'custom_labels' => {
'service' => 'service1',
'status' => 200
}
)
metrics = collector.metrics
assert_equal 5, metrics.size
assert(metrics.first.metric_text.include?('http_requests_total{controller="home",action="index",service="service1",status="200"} 1'))
assert_equal 6, metrics.size
assert(
metrics.first.metric_text.include?(
'http_requests_total{controller="home",action="index",service="service1",status="200"} 1',
),
)
end
def test_collecting_metrics_in_histogram_mode
PrometheusExporter::Metric::Base.default_aggregation = PrometheusExporter::Metric::Histogram
collector.collect(
'type' => 'web',
'status' => 200,
"type" => "web",
"status" => 200,
"timings" => {
"sql" => {
duration: 0.5,
count: 40
count: 40,
},
"redis" => {
duration: 0.03,
count: 4
count: 4,
},
"memcache" => {
duration: 0.02,
count: 1,
},
"queue" => 0.03,
"total_duration" => 1.0,
},
'default_labels' => {
'controller' => 'home',
'action' => 'index'
"default_labels" => {
"controller" => "home",
"action" => "index",
},
"custom_labels" => {
"service" => "service1",
},
'custom_labels' => {
'service' => 'service1'
}
)
metrics = collector.metrics
metrics_lines = metrics.map(&:metric_text).flat_map(&:lines)
assert_equal 5, metrics.size
assert_includes(metrics_lines, "http_requests_total{controller=\"home\",action=\"index\",service=\"service1\",status=\"200\"} 1")
assert_includes(metrics_lines, "http_request_duration_seconds_bucket{controller=\"home\",action=\"index\",service=\"service1\",le=\"+Inf\"} 1\n")
assert_equal 6, metrics.size
assert_includes(
metrics_lines,
"http_requests_total{controller=\"home\",action=\"index\",service=\"service1\",status=\"200\"} 1",
)
assert_includes(
metrics_lines,
"http_request_duration_seconds_bucket{controller=\"home\",action=\"index\",service=\"service1\",le=\"+Inf\"} 1\n",
)
end
end


@@ -1,39 +1,34 @@
# frozen_string_literal: true
require_relative '../test_helper'
require 'prometheus_exporter/server'
require 'prometheus_exporter/client'
require 'net/http'
require_relative "../test_helper"
require "prometheus_exporter/server"
require "prometheus_exporter/client"
require "net/http"
class DemoCollector
def initialize
@gauge = PrometheusExporter::Metric::Gauge.new "memory", "amount of memory"
end
def process(str)
obj = JSON.parse(str)
if obj["type"] == "mem metric"
@gauge.observe(obj["value"])
end
@gauge.observe(obj["value"]) if obj["type"] == "mem metric"
end
def prometheus_metrics_text
@gauge.to_prometheus_text
end
end
class PrometheusExporterTest < Minitest::Test
def setup
PrometheusExporter::Metric::Base.default_prefix = ''
PrometheusExporter::Metric::Base.default_prefix = ""
@auth_config = {
file: 'test/server/my_htpasswd_file',
realm: 'Prometheus Exporter',
user: 'test_user',
passwd: 'test_password',
file: "test/server/my_htpasswd_file",
realm: "Prometheus Exporter",
user: "test_user",
passwd: "test_password",
}
# Create an htpasswd file for basic auth
@@ -49,7 +44,7 @@ class PrometheusExporterTest < Minitest::Test
end
def find_free_port
port = 12437
port = 12_437
while port < 13_000
begin
TCPSocket.new("localhost", port).close
@@ -90,7 +85,6 @@ class PrometheusExporterTest < Minitest::Test
assert(text =~ /7/)
assert(text =~ /8/)
assert(text =~ /9/)
end
def test_it_can_collect_over_ipv6
@@ -105,9 +99,7 @@ class PrometheusExporterTest < Minitest::Test
gauge = client.register(:gauge, "my_gauge", "some gauge")
gauge.observe(99)
TestHelper.wait_for(2) do
server.collector.prometheus_metrics_text =~ /99/
end
TestHelper.wait_for(2) { server.collector.prometheus_metrics_text =~ /99/ }
expected = <<~TEXT
# HELP my_gauge some gauge
@@ -116,8 +108,16 @@ class PrometheusExporterTest < Minitest::Test
TEXT
assert_equal(expected, collector.prometheus_metrics_text)
ensure
client.stop rescue nil
server.stop rescue nil
begin
client.stop
rescue StandardError
nil
end
begin
server.stop
rescue StandardError
nil
end
end
def test_it_can_collect_metrics_from_standard
@@ -137,9 +137,7 @@ class PrometheusExporterTest < Minitest::Test
counter.observe(3)
gauge.observe(92, abcd: 1)
TestHelper.wait_for(2) { server.collector.prometheus_metrics_text =~ /92/ }
expected = <<~TEXT
# HELP my_gauge some gauge
@@ -151,10 +149,17 @@ class PrometheusExporterTest < Minitest::Test
my_counter 4
TEXT
assert_equal(expected, collector.prometheus_metrics_text)
ensure
begin
client.stop
rescue StandardError
nil
end
begin
server.stop
rescue StandardError
nil
end
end
def test_it_can_collect_metrics_from_custom
@@ -168,92 +173,121 @@ class PrometheusExporterTest < Minitest::Test
client.send_json "type" => "mem metric", "value" => 150
client.send_json "type" => "mem metric", "value" => 199
TestHelper.wait_for(2) { collector.prometheus_metrics_text =~ /199/ }
assert_match(/199/, collector.prometheus_metrics_text)
body = nil
Net::HTTP
.new("localhost", port)
.start do |http|
request = Net::HTTP::Get.new "/metrics"
http.request(request) do |response|
assert_equal(["gzip"], response.to_hash["content-encoding"])
body = response.body
end
end
assert_match(/199/, body)
one_minute = Time.now + 60
Time.stub(:now, one_minute) do
client.send_json "type" => "mem metric", "value" => 200.1
TestHelper.wait_for(2) { collector.prometheus_metrics_text =~ /200.1/ }
assert_match(/200.1/, collector.prometheus_metrics_text)
end
ensure
begin
client.stop
rescue StandardError
nil
end
begin
server.stop
rescue StandardError
nil
end
end
def test_it_can_collect_metrics_with_basic_auth
collector = DemoCollector.new
port = find_free_port
server =
PrometheusExporter::Server::WebServer.new port: port,
collector: collector,
auth: @auth_config[:file],
realm: @auth_config[:realm]
server.start
client = PrometheusExporter::Client.new host: "localhost", port: port, thread_sleep: 0.001
client.send_json "type" => "mem metric", "value" => 150
client.send_json "type" => "mem metric", "value" => 199
TestHelper.wait_for(2) { collector.prometheus_metrics_text =~ /199/ }
assert_match(/199/, collector.prometheus_metrics_text)
Net::HTTP
.new("localhost", port)
.start do |http|
request = Net::HTTP::Get.new "/metrics"
request.basic_auth @auth_config[:user], @auth_config[:passwd]
http.request(request) do |response|
assert_equal("200", response.code)
assert_equal(["gzip"], response.to_hash["content-encoding"])
assert_match(/199/, response.body)
end
end
ensure
begin
client.stop
rescue StandardError
nil
end
begin
server.stop
rescue StandardError
nil
end
end
def test_it_fails_with_invalid_auth
collector = DemoCollector.new
port = find_free_port
server =
PrometheusExporter::Server::WebServer.new port: port,
collector: collector,
auth: @auth_config[:file],
realm: @auth_config[:realm]
server.start
Net::HTTP
.new("localhost", port)
.start do |http|
request = Net::HTTP::Get.new "/metrics"
http.request(request) do |response|
assert_equal("401", response.code)
assert_match(/Unauthorized/, response.body)
end
end
ensure
begin
client.stop
rescue StandardError
nil
end
begin
server.stop
rescue StandardError
nil
end
end
def test_it_responds_to_ping
@@ -265,17 +299,26 @@ class PrometheusExporterTest < Minitest::Test
client = PrometheusExporter::Client.new host: "localhost", port: port, thread_sleep: 0.001
Net::HTTP
.new("localhost", port)
.start do |http|
request = Net::HTTP::Get.new "/ping"
http.request(request) do |response|
assert_equal("200", response.code)
assert_match(/PONG/, response.body)
end
end
ensure
begin
client.stop
rescue StandardError
nil
end
begin
server.stop
rescue StandardError
nil
end
end
end

@@ -1,11 +1,10 @@
# frozen_string_literal: true
require "minitest/stub_const"
require_relative "test_helper"
require "prometheus_exporter/instrumentation/sidekiq"
class PrometheusExporterSidekiqMiddlewareTest < Minitest::Test
class FakeClient
end
@@ -27,9 +26,11 @@ class PrometheusExporterSidekiqMiddlewareTest < Minitest::Test
end
def test_initiating_middlware
middleware_entry =
FakeSidekiqMiddlewareChainEntry.new(
PrometheusExporter::Instrumentation::Sidekiq,
{ client: client },
)
assert_instance_of PrometheusExporter::Instrumentation::Sidekiq, middleware_entry.make_new
end
end

@@ -9,10 +9,10 @@ require "redis"
module TestingMod
class FakeConnection
def call_pipelined(...)
end
def call(...)
end
def connected?
@@ -64,6 +64,18 @@ end
RedisClient::Middlewares.prepend(TestingMod)
RedisClient.register(RedisValidationMiddleware)
unless defined?(::Puma)
module Puma
module Const
VERSION = "6.6.0"
end
def self.stats
"{}"
end
end
end
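The guard above follows a common pattern: define a minimal stand-in only when the real constant is absent, so the suite runs with or without the dependency installed. A self-contained sketch of the same pattern (FakeDependency is a hypothetical name, not from this repo):

```ruby
# Define a stub only if the real library was never loaded; when it is
# installed, the real implementation is used untouched.
unless defined?(::FakeDependency)
  module FakeDependency
    module Const
      VERSION = "0.0.0"
    end

    # Mimic just enough of the real API's shape for the code under test.
    def self.stats
      "{}"
    end
  end
end
```

The Puma stub above does exactly this, so the web collector tests can call `Puma.stats` without booting a server.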
class TestHelper
def self.wait_for(time, &blk)
(time / 0.001).to_i.times do
@@ -76,12 +88,7 @@ end
module ClockHelper
def stub_monotonic_clock(at = 0.0, advance: nil, &blk)
Process.stub(:clock_gettime, at + advance.to_f, Process::CLOCK_MONOTONIC, &blk)
end
end
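As a usage sketch, minitest's `Object#stub` (which ships with Ruby) is what makes the one-liner above work: inside the block every call to `Process.clock_gettime` returns the stubbed value, so elapsed-time math becomes deterministic. The `measure_elapsed` helper below is hypothetical, standing in for whatever code the suite times:

```ruby
require "minitest/mock"

# Hypothetical code under test: measures elapsed time on the monotonic clock.
def measure_elapsed
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  # ... work being timed would happen here ...
  Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
end

# Within the block, both clock reads return 42.0, so the measured
# duration is exactly 0.0 regardless of real wall time.
elapsed = Process.stub(:clock_gettime, 42.0) { measure_elapsed }
```

`stub` restores the original method when the block exits, which is why the helper takes a block rather than toggling global state.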