remove reference folder (#326)

alrex 2021-02-09 09:54:41 -08:00 committed by GitHub
parent b0f7268fb0
commit 499899a601
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
432 changed files with 1 addition and 63994 deletions

View File

@@ -3,4 +3,3 @@ omit =
*/tests/*
*/setup.py
*/gen/*
reference/*

View File

@@ -15,7 +15,6 @@ exclude =
.venv*/
venv*/
target
reference/
__pycache__
exporter/opentelemetry-exporter-jaeger/src/opentelemetry/exporter/jaeger/gen/
exporter/opentelemetry-exporter-jaeger/build/*

View File

@@ -14,6 +14,6 @@ profile=black
; docs: https://github.com/timothycrosley/isort#multi-line-output-modes
multi_line_output=3
skip=target
skip_glob=**/gen/*,.venv*/*,venv*/*,reference*/*,opentelemetry-python-core/*,.tox/*
skip_glob=**/gen/*,.venv*/*,venv*/*,opentelemetry-python-core/*,.tox/*
known_first_party=opentelemetry
known_third_party=psutil,pytest,redis,redis_opentracing

View File

@@ -160,35 +160,3 @@ For a deeper discussion, see: https://github.com/open-telemetry/opentelemetry-sp
as specified with the [napoleon
extension](http://www.sphinx-doc.org/en/master/usage/extensions/napoleon.html#google-vs-numpy)
in [Sphinx](http://www.sphinx-doc.org/en/master/index.html).
## Porting reference/ddtrace/contrib to instrumentation
The steps below describe how to port an integration from the reference directory, which contains the originally donated code, to OpenTelemetry.
1. Move the code into the instrumentation directory
```
mkdir -p instrumentation/opentelemetry-instrumentation-jinja2/src/opentelemetry/instrumentation/jinja2
git mv reference/ddtrace/contrib/jinja2 instrumentation/opentelemetry-instrumentation-jinja2/src/opentelemetry/instrumentation/jinja2
```
2. Move the tests
```
git mv reference/tests/contrib/jinja2 instrumentation/opentelemetry-instrumentation-jinja2/tests
```
3. Add `README.rst`, `setup.cfg` and `setup.py` files and update them accordingly
```bash
cp _template/* instrumentation/opentelemetry-instrumentation-jinja2/
```
4. Add `version.py` file and update it accordingly
```bash
mv instrumentation/opentelemetry-instrumentation-jinja2/version.py instrumentation/opentelemetry-instrumentation-jinja2/src/opentelemetry/instrumentation/jinja2/version.py
```
5. Fix relative imports so the moved code imports from the `ddtrace` package instead of using relative paths
6. Update the code and tests to use the OpenTelemetry API (a rough sketch of this step follows below)
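As a rough, hypothetical illustration of steps 5 and 6, a ddtrace-style wrapper can usually be rewritten against the OpenTelemetry tracing API along these lines; the span name, attribute key, and helper function below are made up for illustration and are not taken from the finished opentelemetry-instrumentation-jinja2 package:
```python
# Sketch only: the general shape of a ddtrace -> OpenTelemetry rewrite.
# Code under reference/ddtrace/contrib typically looked like:
#     with pin.tracer.trace("jinja2.render") as span:
#         span.set_tag("jinja2.template_name", template.name)
# After porting, the equivalent code uses the OpenTelemetry API instead:
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

def render_with_span(template, *args, **kwargs):
    # span name and attribute key are illustrative placeholders
    with tracer.start_as_current_span("jinja2.render") as span:
        span.set_attribute("jinja2.template_name", getattr(template, "name", "<memory>"))
        return template.render(*args, **kwargs)
```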

View File

@@ -3,7 +3,6 @@ line-length = 79
exclude = '''
(
/(
reference| # original files from DataDog
)/
)
'''

View File

@@ -1,5 +0,0 @@
FROM node:12
RUN useradd -ms /bin/bash casper
RUN npm install ghost-cli lightstep-opentelemetry-launcher-node
USER casper

View File

@@ -1,200 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2016 Datadog, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@@ -1,24 +0,0 @@
Copyright (c) 2016, Datadog <info@datadoghq.com>
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of Datadog nor the
names of its contributors may be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL DATADOG BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

View File

@@ -1,77 +0,0 @@
desc "build the docs"
task :docs do
sh "pip install sphinx"
Dir.chdir 'docs' do
sh "make html"
end
end
# Deploy tasks
S3_DIR = ENV['S3_DIR']
S3_BUCKET = "pypi.datadoghq.com"
desc "release the a new wheel"
task :'release:wheel' do
fail "Missing environment variable S3_DIR" if !S3_DIR or S3_DIR.empty?
# Use custom `mkwheelhouse` to upload wheels and source distribution from dist/ to S3 bucket
sh "scripts/mkwheelhouse"
end
desc "release the docs website"
task :'release:docs' => :docs do
fail "Missing environment variable S3_DIR" if !S3_DIR or S3_DIR.empty?
sh "aws s3 cp --recursive docs/_build/html/ s3://#{S3_BUCKET}/#{S3_DIR}/docs/"
end
namespace :pypi do
RELEASE_DIR = './dist/'
def get_version()
return `python setup.py --version`.strip
end
def get_branch()
return `git name-rev --name-only HEAD`.strip
end
task :confirm do
ddtrace_version = get_version
if get_branch.downcase != "tags/v#{ddtrace_version}"
print "WARNING: Expected current commit to be tagged as 'tags/v#{ddtrace_version}', instead we are on '#{get_branch}', proceed anyways [y|N]? "
$stdout.flush
abort if $stdin.gets.to_s.strip.downcase != 'y'
end
puts "WARNING: This task will build and release new wheels to https://pypi.org/project/ddtrace/, this action cannot be undone"
print " To proceed please type the version '#{ddtrace_version}': "
$stdout.flush
abort if $stdin.gets.to_s.strip.downcase != ddtrace_version
end
task :clean do
FileUtils.rm_rf(RELEASE_DIR)
end
task :install do
sh 'pip install twine'
end
task :build => :clean do
puts "building release in #{RELEASE_DIR}"
sh "scripts/build-dist"
end
task :release => [:confirm, :install, :build] do
builds = Dir.entries(RELEASE_DIR).reject {|f| f == '.' || f == '..'}
if builds.length == 0
fail "no build found in #{RELEASE_DIR}"
end
puts "uploading #{RELEASE_DIR}/*"
sh "twine upload #{RELEASE_DIR}/*"
end
end

View File

@@ -1,54 +0,0 @@
"""
This file configures a local pytest plugin, which allows us to configure plugin hooks to control the
execution of our tests, for example by loading fixtures or configuring directories to ignore.
Local plugins: https://docs.pytest.org/en/3.10.1/writing_plugins.html#local-conftest-plugins
Hook reference: https://docs.pytest.org/en/3.10.1/reference.html#hook-reference
"""
import os
import re
import sys
import pytest
PY_DIR_PATTERN = re.compile(r"^py3[0-9]$")
# Determine if the folder should be ignored
# https://docs.pytest.org/en/3.10.1/reference.html#_pytest.hookspec.pytest_ignore_collect
# DEV: We can only ignore folders/modules, we cannot ignore individual files
# DEV: We must wrap with `@pytest.mark.hookwrapper` to inherit from default (e.g. honor `--ignore`)
# https://github.com/pytest-dev/pytest/issues/846#issuecomment-122129189
@pytest.mark.hookwrapper
def pytest_ignore_collect(path, config):
"""
Skip directories defining a required minimum Python version
Example::
File: tests/contrib/vertica/py35/test.py
Python 3.4: Skip
Python 3.5: Collect
Python 3.6: Collect
"""
# Execute original behavior first
# DEV: We need to set `outcome.force_result(True)` if we need to override
# these results and skip this directory
outcome = yield
# Was not ignored by default behavior
if not outcome.get_result():
# DEV: `path` is a `LocalPath`
path = str(path)
if not os.path.isdir(path):
path = os.path.dirname(path)
dirname = os.path.basename(path)
# Directory name matches `py3[0-9]`
if PY_DIR_PATTERN.match(dirname):
# Split out version numbers into a tuple: `py35` -> `(3, 5)`
min_required = tuple((int(v) for v in dirname.strip("py")))
# If the current Python version does not meet the minimum required, skip this directory
if sys.version_info[0:2] < min_required:
outcome.force_result(True)
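For clarity, the directory-name check above boils down to a small piece of tuple arithmetic; the values below are examples, not part of the original file:
```python
import sys

dirname = "py36"  # example directory name matched by PY_DIR_PATTERN
# strip the leading "py" characters and split the digits into a version tuple: "py36" -> (3, 6)
min_required = tuple(int(v) for v in dirname.strip("py"))
# the directory is ignored when the running interpreter is older than the minimum
skip = sys.version_info[0:2] < min_required
print(min_required, skip)
```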

View File

@@ -1,51 +0,0 @@
import sys
import pkg_resources
from .monkey import patch, patch_all
from .pin import Pin
from .span import Span
from .tracer import Tracer
from .settings import config
try:
__version__ = pkg_resources.get_distribution(__name__).version
except pkg_resources.DistributionNotFound:
# package is not installed
__version__ = None
# a global tracer instance with integration settings
tracer = Tracer()
__all__ = [
'patch',
'patch_all',
'Pin',
'Span',
'tracer',
'Tracer',
'config',
]
_ORIGINAL_EXCEPTHOOK = sys.excepthook
def _excepthook(tp, value, traceback):
tracer.global_excepthook(tp, value, traceback)
if _ORIGINAL_EXCEPTHOOK:
return _ORIGINAL_EXCEPTHOOK(tp, value, traceback)
def install_excepthook():
"""Install a hook that intercepts unhandled exception and send metrics about them."""
global _ORIGINAL_EXCEPTHOOK
_ORIGINAL_EXCEPTHOOK = sys.excepthook
sys.excepthook = _excepthook
def uninstall_excepthook():
"""Uninstall the global tracer except hook."""
sys.excepthook = _ORIGINAL_EXCEPTHOOK

View File

@@ -1,82 +0,0 @@
import atexit
import threading
import os
from .internal.logger import get_logger
_LOG = get_logger(__name__)
class PeriodicWorkerThread(object):
"""Periodic worker thread.
This class can be used to instantiate a worker thread that will run its `run_periodic` function every `interval`
seconds.
The method `on_shutdown` will be called on worker shutdown. The worker will be shutdown when the program exits and
can be waited for with the `exit_timeout` parameter.
"""
_DEFAULT_INTERVAL = 1.0
def __init__(self, interval=_DEFAULT_INTERVAL, exit_timeout=None, name=None, daemon=True):
"""Create a new worker thread that runs a function periodically.
:param interval: The interval in seconds to wait between calls to `run_periodic`.
:param exit_timeout: The timeout to use when exiting the program and waiting for the thread to finish.
:param name: Name of the worker.
:param daemon: Whether the worker should be a daemon.
"""
self._thread = threading.Thread(target=self._target, name=name)
self._thread.daemon = daemon
self._stop = threading.Event()
self.interval = interval
self.exit_timeout = exit_timeout
atexit.register(self._atexit)
def _atexit(self):
self.stop()
if self.exit_timeout is not None:
key = 'ctrl-break' if os.name == 'nt' else 'ctrl-c'
_LOG.debug(
'Waiting %d seconds for %s to finish. Hit %s to quit.',
self.exit_timeout, self._thread.name, key,
)
self.join(self.exit_timeout)
def start(self):
"""Start the periodic worker."""
_LOG.debug('Starting %s thread', self._thread.name)
self._thread.start()
def stop(self):
"""Stop the worker."""
_LOG.debug('Stopping %s thread', self._thread.name)
self._stop.set()
def is_alive(self):
return self._thread.is_alive()
def join(self, timeout=None):
return self._thread.join(timeout)
def _target(self):
while not self._stop.wait(self.interval):
self.run_periodic()
self._on_shutdown()
@staticmethod
def run_periodic():
"""Method executed every interval."""
pass
def _on_shutdown(self):
_LOG.debug('Shutting down %s thread', self._thread.name)
self.on_shutdown()
@staticmethod
def on_shutdown():
"""Method ran on worker shutdown."""
pass
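A minimal usage sketch of the worker above; the subclass name, interval, and printed messages are made up for illustration, and it assumes the PeriodicWorkerThread class defined in this file:
```python
class HeartbeatWorker(PeriodicWorkerThread):
    # example subclass: emits a heartbeat every few seconds

    def run_periodic(self):
        # called once per `interval` until stop() is requested
        print('heartbeat')

    def on_shutdown(self):
        print('heartbeat worker shutting down')

worker = HeartbeatWorker(interval=5.0, name='heartbeat', exit_timeout=2.0)
worker.start()
# ... later, or automatically at interpreter exit via the atexit hook:
worker.stop()
worker.join(timeout=2.0)
```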

View File

@@ -1,279 +0,0 @@
# stdlib
import ddtrace
from json import loads
import socket
# project
from .encoding import get_encoder, JSONEncoder
from .compat import httplib, PYTHON_VERSION, PYTHON_INTERPRETER, get_connection_response
from .internal.logger import get_logger
from .internal.runtime import container
from .payload import Payload, PayloadFull
from .utils.deprecation import deprecated
from .utils import time
log = get_logger(__name__)
_VERSIONS = {'v0.4': {'traces': '/v0.4/traces',
'services': '/v0.4/services',
'compatibility_mode': False,
'fallback': 'v0.3'},
'v0.3': {'traces': '/v0.3/traces',
'services': '/v0.3/services',
'compatibility_mode': False,
'fallback': 'v0.2'},
'v0.2': {'traces': '/v0.2/traces',
'services': '/v0.2/services',
'compatibility_mode': True,
'fallback': None}}
class Response(object):
"""
Custom API Response object to represent a response from calling the API.
We do this to ensure we know expected properties will exist, and so we
can call `resp.read()` and load the body once into an instance before we
close the HTTPConnection used for the request.
"""
__slots__ = ['status', 'body', 'reason', 'msg']
def __init__(self, status=None, body=None, reason=None, msg=None):
self.status = status
self.body = body
self.reason = reason
self.msg = msg
@classmethod
def from_http_response(cls, resp):
"""
Build a ``Response`` from the provided ``HTTPResponse`` object.
This function will call `.read()` to consume the body of the ``HTTPResponse`` object.
:param resp: ``HTTPResponse`` object to build the ``Response`` from
:type resp: ``HTTPResponse``
:rtype: ``Response``
:returns: A new ``Response``
"""
return cls(
status=resp.status,
body=resp.read(),
reason=getattr(resp, 'reason', None),
msg=getattr(resp, 'msg', None),
)
def get_json(self):
"""Helper to parse the body of this request as JSON"""
try:
body = self.body
if not body:
log.debug('Empty reply from Datadog Agent, %r', self)
return
if not isinstance(body, str) and hasattr(body, 'decode'):
body = body.decode('utf-8')
if hasattr(body, 'startswith') and body.startswith('OK'):
# This typically happens when using a priority-sampling enabled
# library with an outdated agent. It still works, but priority sampling
# will probably send too many traces, so the next step is to upgrade agent.
log.debug('Cannot parse Datadog Agent response, please make sure your Datadog Agent is up to date')
return
return loads(body)
except (ValueError, TypeError):
log.debug('Unable to parse Datadog Agent JSON response: %r', body, exc_info=True)
def __repr__(self):
return '{0}(status={1!r}, body={2!r}, reason={3!r}, msg={4!r})'.format(
self.__class__.__name__,
self.status,
self.body,
self.reason,
self.msg,
)
class UDSHTTPConnection(httplib.HTTPConnection):
"""An HTTP connection established over a Unix Domain Socket."""
# It's "important" to keep the hostname and port arguments here; while there are not used by the connection
# mechanism, they are actually used as HTTP headers such as `Host`.
def __init__(self, path, https, *args, **kwargs):
if https:
httplib.HTTPSConnection.__init__(self, *args, **kwargs)
else:
httplib.HTTPConnection.__init__(self, *args, **kwargs)
self.path = path
def connect(self):
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect(self.path)
self.sock = sock
class API(object):
"""
Send data to the trace agent using the HTTP protocol and JSON format
"""
TRACE_COUNT_HEADER = 'X-Datadog-Trace-Count'
# Default timeout when establishing HTTP connection and sending/receiving from socket.
# This ought to be enough as the agent is local
TIMEOUT = 2
def __init__(self, hostname, port, uds_path=None, https=False, headers=None, encoder=None, priority_sampling=False):
"""Create a new connection to the Tracer API.
:param hostname: The hostname.
:param port: The TCP port to use.
:param uds_path: The path to use if the connection is to be established with a Unix Domain Socket.
:param headers: The headers to pass along the request.
:param encoder: The encoder to use to serialize data.
:param priority_sampling: Whether to use priority sampling.
"""
self.hostname = hostname
self.port = int(port)
self.uds_path = uds_path
self.https = https
self._headers = headers or {}
self._version = None
if priority_sampling:
self._set_version('v0.4', encoder=encoder)
else:
self._set_version('v0.3', encoder=encoder)
self._headers.update({
'Datadog-Meta-Lang': 'python',
'Datadog-Meta-Lang-Version': PYTHON_VERSION,
'Datadog-Meta-Lang-Interpreter': PYTHON_INTERPRETER,
'Datadog-Meta-Tracer-Version': ddtrace.__version__,
})
# Add container information if we have it
self._container_info = container.get_container_info()
if self._container_info and self._container_info.container_id:
self._headers.update({
'Datadog-Container-Id': self._container_info.container_id,
})
def __str__(self):
if self.uds_path:
return 'unix://' + self.uds_path
if self.https:
scheme = 'https://'
else:
scheme = 'http://'
return '%s%s:%s' % (scheme, self.hostname, self.port)
def _set_version(self, version, encoder=None):
if version not in _VERSIONS:
version = 'v0.2'
if version == self._version:
return
self._version = version
self._traces = _VERSIONS[version]['traces']
self._services = _VERSIONS[version]['services']
self._fallback = _VERSIONS[version]['fallback']
self._compatibility_mode = _VERSIONS[version]['compatibility_mode']
if self._compatibility_mode:
self._encoder = JSONEncoder()
else:
self._encoder = encoder or get_encoder()
# overwrite the Content-type with the one chosen in the Encoder
self._headers.update({'Content-Type': self._encoder.content_type})
def _downgrade(self):
"""
Downgrades the used encoder and API level. This method must fall back to a safe
encoder and API so that it will succeed regardless of the user's configuration. This action
ensures that compatibility mode is activated so that the downgrade is
executed only once.
"""
self._set_version(self._fallback)
def send_traces(self, traces):
"""Send traces to the API.
:param traces: A list of traces.
:return: The list of API HTTP responses.
"""
if not traces:
return []
with time.StopWatch() as sw:
responses = []
payload = Payload(encoder=self._encoder)
for trace in traces:
try:
payload.add_trace(trace)
except PayloadFull:
# Is payload full or is the trace too big?
# If payload is not empty, then using a new Payload might allow us to fit the trace.
# Let's flush the Payload and try to put the trace in a new empty Payload.
if not payload.empty:
responses.append(self._flush(payload))
# Create a new payload
payload = Payload(encoder=self._encoder)
try:
# Add the trace that we were unable to add in that iteration
payload.add_trace(trace)
except PayloadFull:
# If the trace does not fit in a payload on its own, that's bad. Drop it.
log.warning('Trace %r is too big to fit in a payload, dropping it', trace)
# Check that the Payload is not empty:
# it could be empty if the last trace was too big to fit.
if not payload.empty:
responses.append(self._flush(payload))
log.debug('reported %d traces in %.5fs', len(traces), sw.elapsed())
return responses
def _flush(self, payload):
try:
response = self._put(self._traces, payload.get_payload(), payload.length)
except (httplib.HTTPException, OSError, IOError) as e:
return e
# the API endpoint is not available so we should downgrade the connection and re-try the call
if response.status in [404, 415] and self._fallback:
log.debug("calling endpoint '%s' but received %s; downgrading API", self._traces, response.status)
self._downgrade()
return self._flush(payload)
return response
@deprecated(message='Sending services to the API is no longer necessary', version='1.0.0')
def send_services(self, *args, **kwargs):
return
def _put(self, endpoint, data, count):
headers = self._headers.copy()
headers[self.TRACE_COUNT_HEADER] = str(count)
if self.uds_path is None:
if self.https:
conn = httplib.HTTPSConnection(self.hostname, self.port, timeout=self.TIMEOUT)
else:
conn = httplib.HTTPConnection(self.hostname, self.port, timeout=self.TIMEOUT)
else:
conn = UDSHTTPConnection(self.uds_path, self.https, self.hostname, self.port, timeout=self.TIMEOUT)
try:
conn.request('PUT', endpoint, data, headers)
# Parse the HTTPResponse into an API.Response
# DEV: This will call `resp.read()` which must happen before the `conn.close()` below,
# if we call `.close()` then all future `.read()` calls will return `b''`
resp = get_connection_response(conn)
return Response.from_http_response(resp)
finally:
conn.close()

View File

@@ -1,149 +0,0 @@
"""
Bootstrapping code that is run when using the `ddtrace-run` Python entrypoint
Add all monkey-patching that needs to run by default here
"""
import os
import imp
import sys
import logging
from ddtrace.utils.formats import asbool, get_env
from ddtrace.internal.logger import get_logger
from ddtrace import constants
logs_injection = asbool(get_env("logs", "injection"))
DD_LOG_FORMAT = "%(asctime)s %(levelname)s [%(name)s] [%(filename)s:%(lineno)d] {}- %(message)s".format(
"[dd.trace_id=%(dd.trace_id)s dd.span_id=%(dd.span_id)s] " if logs_injection else ""
)
if logs_injection:
# immediately patch logging if trace id injected
from ddtrace import patch
patch(logging=True)
debug = os.environ.get("DATADOG_TRACE_DEBUG")
# Set here a default logging format for basicConfig
# DEV: Once basicConfig is called here, future calls to it cannot be used to
# change the formatter since it applies the formatter to the root handler only
# upon initializing it the first time.
# See https://github.com/python/cpython/blob/112e4afd582515fcdcc0cde5012a4866e5cfda12/Lib/logging/__init__.py#L1550
if debug and debug.lower() == "true":
logging.basicConfig(level=logging.DEBUG, format=DD_LOG_FORMAT)
else:
logging.basicConfig(format=DD_LOG_FORMAT)
log = get_logger(__name__)
EXTRA_PATCHED_MODULES = {
"bottle": True,
"django": True,
"falcon": True,
"flask": True,
"pyramid": True,
}
def update_patched_modules():
modules_to_patch = os.environ.get("DATADOG_PATCH_MODULES")
if not modules_to_patch:
return
for patch in modules_to_patch.split(","):
if len(patch.split(":")) != 2:
log.debug("skipping malformed patch instruction")
continue
module, should_patch = patch.split(":")
if should_patch.lower() not in ["true", "false"]:
log.debug("skipping malformed patch instruction for %s", module)
continue
EXTRA_PATCHED_MODULES.update({module: should_patch.lower() == "true"})
def add_global_tags(tracer):
tags = {}
for tag in os.environ.get("DD_TRACE_GLOBAL_TAGS", "").split(","):
tag_name, _, tag_value = tag.partition(":")
if not tag_name or not tag_value:
log.debug("skipping malformed tracer tag")
continue
tags[tag_name] = tag_value
tracer.set_tags(tags)
try:
from ddtrace import tracer
patch = True
# Respect DATADOG_* environment variables in global tracer configuration
# TODO: these variables are deprecated; use utils method and update our documentation
# correct prefix should be DD_*
enabled = os.environ.get("DATADOG_TRACE_ENABLED")
hostname = os.environ.get("DD_AGENT_HOST", os.environ.get("DATADOG_TRACE_AGENT_HOSTNAME"))
port = os.environ.get("DATADOG_TRACE_AGENT_PORT")
priority_sampling = os.environ.get("DATADOG_PRIORITY_SAMPLING")
opts = {}
if enabled and enabled.lower() == "false":
opts["enabled"] = False
patch = False
if hostname:
opts["hostname"] = hostname
if port:
opts["port"] = int(port)
if priority_sampling:
opts["priority_sampling"] = asbool(priority_sampling)
opts["collect_metrics"] = asbool(get_env("runtime_metrics", "enabled"))
if opts:
tracer.configure(**opts)
if logs_injection:
EXTRA_PATCHED_MODULES.update({"logging": True})
if patch:
update_patched_modules()
from ddtrace import patch_all
patch_all(**EXTRA_PATCHED_MODULES)
if "DATADOG_ENV" in os.environ:
tracer.set_tags({constants.ENV_KEY: os.environ["DATADOG_ENV"]})
if "DD_TRACE_GLOBAL_TAGS" in os.environ:
add_global_tags(tracer)
# Ensure sitecustomize.py is properly called if available in application directories:
# * exclude `bootstrap_dir` from the search
# * find a user `sitecustomize.py` module
# * import that module via `imp`
bootstrap_dir = os.path.dirname(__file__)
path = list(sys.path)
if bootstrap_dir in path:
path.remove(bootstrap_dir)
try:
(f, path, description) = imp.find_module("sitecustomize", path)
except ImportError:
pass
else:
# `sitecustomize.py` found, load it
log.debug("sitecustomize from user found in: %s", path)
imp.load_module("sitecustomize", f, path, description)
# Loading status used in tests to detect if the `sitecustomize` has been
# properly loaded without exceptions. This must be the last action in the module
# when the execution ends with a success.
loaded = True
except Exception:
loaded = False
log.warning("error configuring Datadog tracing", exc_info=True)

View File

@@ -1,82 +0,0 @@
#!/usr/bin/env python
from distutils import spawn
import os
import sys
import logging
debug = os.environ.get('DATADOG_TRACE_DEBUG')
if debug and debug.lower() == 'true':
logging.basicConfig(level=logging.DEBUG)
# Do not use `ddtrace.internal.logger.get_logger` here
# DEV: It isn't really necessary to use `DDLogger` here so we want to
# defer importing `ddtrace` until we actually need it.
# As well, no actual rate limiting would apply here since we only
# have a few logged lines
log = logging.getLogger(__name__)
USAGE = """
Execute the given Python program after configuring it to emit Datadog traces.
Append command line arguments to your program as usual.
Usage: [ENV_VARS] ddtrace-run <my_program>
Available environment variables:
DATADOG_ENV : override an application's environment (no default)
DATADOG_TRACE_ENABLED=true|false : override the value of tracer.enabled (default: true)
DATADOG_TRACE_DEBUG=true|false : enable debug logging (default: false)
DATADOG_PATCH_MODULES=module:patch,module:patch... e.g. boto:true,redis:false : override the modules patched for this execution of the program (default: none)
DATADOG_TRACE_AGENT_HOSTNAME=localhost: override the address of the trace agent host that the default tracer will attempt to submit to (default: localhost)
DATADOG_TRACE_AGENT_PORT=8126: override the port that the default tracer will submit to (default: 8126)
DATADOG_SERVICE_NAME : override the service name to be used for this program (no default)
This value is passed through when setting up middleware for web framework integrations.
(e.g. flask, django)
For tracing without a web integration, prefer setting the service name in code.
DATADOG_PRIORITY_SAMPLING=true|false : (default: false): enables Priority Sampling.
""" # noqa: E501
def _ddtrace_root():
from ddtrace import __file__
return os.path.dirname(__file__)
def _add_bootstrap_to_pythonpath(bootstrap_dir):
"""
Add our bootstrap directory to the head of $PYTHONPATH to ensure
it is loaded before program code
"""
python_path = os.environ.get('PYTHONPATH', '')
if python_path:
new_path = '%s%s%s' % (bootstrap_dir, os.path.pathsep, os.environ['PYTHONPATH'])
os.environ['PYTHONPATH'] = new_path
else:
os.environ['PYTHONPATH'] = bootstrap_dir
def main():
if len(sys.argv) < 2 or sys.argv[1] == '-h':
print(USAGE)
return
log.debug('sys.argv: %s', sys.argv)
root_dir = _ddtrace_root()
log.debug('ddtrace root: %s', root_dir)
bootstrap_dir = os.path.join(root_dir, 'bootstrap')
log.debug('ddtrace bootstrap: %s', bootstrap_dir)
_add_bootstrap_to_pythonpath(bootstrap_dir)
log.debug('PYTHONPATH: %s', os.environ['PYTHONPATH'])
log.debug('sys.path: %s', sys.path)
executable = sys.argv[1]
# Find the executable path
executable = spawn.find_executable(executable)
log.debug('program executable: %s', executable)
os.execl(executable, executable, *sys.argv[2:])

View File

@@ -1,151 +0,0 @@
import platform
import re
import sys
import textwrap
from ddtrace.vendor import six
__all__ = [
'httplib',
'iteritems',
'PY2',
'Queue',
'stringify',
'StringIO',
'urlencode',
'parse',
'reraise',
]
PYTHON_VERSION_INFO = sys.version_info
PY2 = sys.version_info[0] == 2
PY3 = sys.version_info[0] == 3
# Infos about python passed to the trace agent through the header
PYTHON_VERSION = platform.python_version()
PYTHON_INTERPRETER = platform.python_implementation()
try:
StringIO = six.moves.cStringIO
except ImportError:
StringIO = six.StringIO
httplib = six.moves.http_client
urlencode = six.moves.urllib.parse.urlencode
parse = six.moves.urllib.parse
Queue = six.moves.queue.Queue
iteritems = six.iteritems
reraise = six.reraise
reload_module = six.moves.reload_module
stringify = six.text_type
string_type = six.string_types[0]
msgpack_type = six.binary_type
# DEV: `six` doesn't have `float` in `integer_types`
numeric_types = six.integer_types + (float, )
# Pattern class generated by `re.compile`
if PYTHON_VERSION_INFO >= (3, 7):
pattern_type = re.Pattern
else:
pattern_type = re._pattern_type
def is_integer(obj):
"""Helper to determine if the provided ``obj`` is an integer type or not"""
# DEV: We have to make sure it is an integer and not a boolean
# >>> type(True)
# <class 'bool'>
# >>> isinstance(True, int)
# True
return isinstance(obj, six.integer_types) and not isinstance(obj, bool)
try:
from time import time_ns
except ImportError:
from time import time as _time
def time_ns():
return int(_time() * 10e5) * 1000
if PYTHON_VERSION_INFO[0:2] >= (3, 4):
from asyncio import iscoroutinefunction
# Execute from a string to get around syntax errors from `yield from`
# DEV: The idea to do this was stolen from `six`
# https://github.com/benjaminp/six/blob/15e31431af97e5e64b80af0a3f598d382bcdd49a/six.py#L719-L737
six.exec_(textwrap.dedent("""
import functools
import asyncio
def make_async_decorator(tracer, coro, *params, **kw_params):
\"\"\"
Decorator factory that creates an asynchronous wrapper that yields
a coroutine result. This factory is required to handle Python 2
compatibilities.
:param object tracer: the tracer instance that is used
:param function f: the coroutine that must be executed
:param tuple params: arguments given to the Tracer.trace()
:param dict kw_params: keyword arguments given to the Tracer.trace()
\"\"\"
@functools.wraps(coro)
@asyncio.coroutine
def func_wrapper(*args, **kwargs):
with tracer.trace(*params, **kw_params):
result = yield from coro(*args, **kwargs) # noqa: E999
return result
return func_wrapper
"""))
else:
# asyncio is missing so we can't have coroutines; these
# functions are used only to ensure code executions in case
# of an unexpected behavior
def iscoroutinefunction(fn):
return False
def make_async_decorator(tracer, fn, *params, **kw_params):
return fn
# DEV: There is `six.u()` which does something similar, but doesn't have the guard around `hasattr(s, 'decode')`
def to_unicode(s):
""" Return a unicode string for the given bytes or string instance. """
# No reason to decode if we already have the unicode compatible object we expect
# DEV: `six.text_type` will be a `str` for python 3 and `unicode` for python 2
# DEV: Double decoding a `unicode` can cause a `UnicodeEncodeError`
# e.g. `'\xc3\xbf'.decode('utf-8').decode('utf-8')`
if isinstance(s, six.text_type):
return s
# If the object has a `decode` method, then decode into `utf-8`
# e.g. Python 2 `str`, Python 2/3 `bytearray`, etc
if hasattr(s, 'decode'):
return s.decode('utf-8')
# Always try to coerce the object into the `six.text_type` object we expect
# e.g. `to_unicode(1)`, `to_unicode(dict(key='value'))`
return six.text_type(s)
def get_connection_response(conn):
"""Returns the response for a connection.
If using Python 2 enable buffering.
Python 2 does not enable buffering by default resulting in many recv
syscalls.
See:
https://bugs.python.org/issue4879
https://github.com/python/cpython/commit/3c43fcba8b67ea0cec4a443c755ce5f25990a6cf
"""
if PY2:
return conn.getresponse(buffering=True)
else:
return conn.getresponse()

View File

@@ -1,15 +0,0 @@
FILTERS_KEY = 'FILTERS'
SAMPLE_RATE_METRIC_KEY = '_sample_rate'
SAMPLING_PRIORITY_KEY = '_sampling_priority_v1'
ANALYTICS_SAMPLE_RATE_KEY = '_dd1.sr.eausr'
SAMPLING_AGENT_DECISION = '_dd.agent_psr'
SAMPLING_RULE_DECISION = '_dd.rule_psr'
SAMPLING_LIMIT_DECISION = '_dd.limit_psr'
ORIGIN_KEY = '_dd.origin'
HOSTNAME_KEY = '_dd.hostname'
ENV_KEY = 'env'
NUMERIC_TAGS = (ANALYTICS_SAMPLE_RATE_KEY, )
MANUAL_DROP_KEY = 'manual.drop'
MANUAL_KEEP_KEY = 'manual.keep'

View File

@@ -1,216 +0,0 @@
import logging
import threading
from .constants import HOSTNAME_KEY, SAMPLING_PRIORITY_KEY, ORIGIN_KEY
from .internal.logger import get_logger
from .internal import hostname
from .settings import config
from .utils.formats import asbool, get_env
log = get_logger(__name__)
class Context(object):
"""
Context is used to keep track of a hierarchy of spans for the current
execution flow. During each logical execution, the same ``Context`` is
used to represent a single logical trace, even if the trace is built
asynchronously.
A single code execution may use multiple ``Context`` if part of the execution
must not be related to the current tracing. For example, a delayed job may
compose a standalone trace instead of being related to the same trace that
generates the job itself. On the other hand, if it's part of the same
``Context``, it will be related to the original trace.
This data structure is thread-safe.
"""
_partial_flush_enabled = asbool(get_env('tracer', 'partial_flush_enabled', 'false'))
_partial_flush_min_spans = int(get_env('tracer', 'partial_flush_min_spans', 500))
def __init__(self, trace_id=None, span_id=None, sampling_priority=None, _dd_origin=None):
"""
Initialize a new thread-safe ``Context``.
:param int trace_id: trace_id of parent span
:param int span_id: span_id of parent span
"""
self._trace = []
self._finished_spans = 0
self._current_span = None
self._lock = threading.Lock()
self._parent_trace_id = trace_id
self._parent_span_id = span_id
self._sampling_priority = sampling_priority
self._dd_origin = _dd_origin
@property
def trace_id(self):
"""Return current context trace_id."""
with self._lock:
return self._parent_trace_id
@property
def span_id(self):
"""Return current context span_id."""
with self._lock:
return self._parent_span_id
@property
def sampling_priority(self):
"""Return current context sampling priority."""
with self._lock:
return self._sampling_priority
@sampling_priority.setter
def sampling_priority(self, value):
"""Set sampling priority."""
with self._lock:
self._sampling_priority = value
def clone(self):
"""
Partially clones the current context.
It copies everything EXCEPT the registered and finished spans.
"""
with self._lock:
new_ctx = Context(
trace_id=self._parent_trace_id,
span_id=self._parent_span_id,
sampling_priority=self._sampling_priority,
)
new_ctx._current_span = self._current_span
return new_ctx
def get_current_root_span(self):
"""
Return the root span of the context or None if it does not exist.
"""
return self._trace[0] if len(self._trace) > 0 else None
def get_current_span(self):
"""
Return the last active span that corresponds to the last inserted
item in the trace list. This cannot be considered as the current active
span in asynchronous environments, because some spans can be closed
earlier while child spans still need to finish their traced execution.
"""
with self._lock:
return self._current_span
def _set_current_span(self, span):
"""
Set current span internally.
Non-safe if not used with a lock. For internal Context usage only.
"""
self._current_span = span
if span:
self._parent_trace_id = span.trace_id
self._parent_span_id = span.span_id
else:
self._parent_span_id = None
def add_span(self, span):
"""
Add a span to the context trace list, keeping it as the last active span.
"""
with self._lock:
self._set_current_span(span)
self._trace.append(span)
span._context = self
def close_span(self, span):
"""
Mark a span as finished, increasing the internal counter to prevent
cycles inside the _trace list.
"""
with self._lock:
self._finished_spans += 1
self._set_current_span(span._parent)
# notify if the trace is not closed properly; this check is executed only
# if the debug logging is enabled and when the root span is closed
# for an unfinished trace. This logging is meant to be used for debugging
# reasons, and it doesn't mean that the trace is wrongly generated.
# In asynchronous environments, it's legit to close the root span before
# some children. On the other hand, asynchronous web frameworks still expect
# to close the root span after all the children.
if span.tracer and span.tracer.log.isEnabledFor(logging.DEBUG) and span._parent is None:
unfinished_spans = [x for x in self._trace if not x.finished]
if unfinished_spans:
log.debug('Root span "%s" closed, but the trace has %d unfinished spans:',
span.name, len(unfinished_spans))
for wrong_span in unfinished_spans:
log.debug('\n%s', wrong_span.pprint())
def _is_sampled(self):
return any(span.sampled for span in self._trace)
def get(self):
"""
Returns a tuple containing the trace list generated in the current context and
if the context is sampled or not. It returns (None, None) if the ``Context`` is
not finished. If a trace is returned, the ``Context`` will be reset so that it
can be re-used immediately.
This operation is thread-safe.
"""
with self._lock:
# All spans are finished?
if self._finished_spans == len(self._trace):
# get the trace
trace = self._trace
sampled = self._is_sampled()
sampling_priority = self._sampling_priority
# attach the sampling priority to the context root span
if sampled and sampling_priority is not None and trace:
trace[0].set_metric(SAMPLING_PRIORITY_KEY, sampling_priority)
origin = self._dd_origin
# attach the origin to the root span tag
if sampled and origin is not None and trace:
trace[0].set_tag(ORIGIN_KEY, origin)
# Set hostname tag if they requested it
if config.report_hostname:
# DEV: `get_hostname()` value is cached
trace[0].set_tag(HOSTNAME_KEY, hostname.get_hostname())
# clean the current state
self._trace = []
self._finished_spans = 0
self._parent_trace_id = None
self._parent_span_id = None
self._sampling_priority = None
return trace, sampled
elif self._partial_flush_enabled:
finished_spans = [t for t in self._trace if t.finished]
if len(finished_spans) >= self._partial_flush_min_spans:
# partial flush when enabled and we have more than the minimal required spans
trace = self._trace
sampled = self._is_sampled()
sampling_priority = self._sampling_priority
# attach the sampling priority to the context root span
if sampled and sampling_priority is not None and trace:
trace[0].set_metric(SAMPLING_PRIORITY_KEY, sampling_priority)
origin = self._dd_origin
# attach the origin to the root span tag
if sampled and origin is not None and trace:
trace[0].set_tag(ORIGIN_KEY, origin)
# Set hostname tag if they requested it
if config.report_hostname:
# DEV: `get_hostname()` value is cached
trace[0].set_tag(HOSTNAME_KEY, hostname.get_hostname())
self._finished_spans = 0
# Any open spans will remain as `self._trace`
# Any finished spans will get returned to be flushed
self._trace = [t for t in self._trace if not t.finished]
return finished_spans, sampled
return None, None

View File

@@ -1 +0,0 @@
from ..utils.importlib import func_name, module_name, require_modules # noqa

View File

@@ -1,30 +0,0 @@
"""
The aiobotocore integration will trace all AWS calls made with the ``aiobotocore``
library. This integration isn't enabled when applying the default patching.
To enable it, you must run ``patch_all(aiobotocore=True)``
::
import aiobotocore.session
from ddtrace import patch
# If not patched yet, you can patch botocore specifically
patch(aiobotocore=True)
# This will report spans with the default instrumentation
session = aiobotocore.session.get_session()
lambda_client = session.create_client('lambda', region_name='us-east-1')
# This query generates a trace
lambda_client.list_functions()
"""
from ...utils.importlib import require_modules
required_modules = ['aiobotocore.client']
with require_modules(required_modules) as missing_modules:
if not missing_modules:
from .patch import patch
__all__ = ['patch']

View File

@@ -1,129 +0,0 @@
import asyncio
from ddtrace.vendor import wrapt
from ddtrace import config
import aiobotocore.client
from aiobotocore.endpoint import ClientResponseContentProxy
from ...constants import ANALYTICS_SAMPLE_RATE_KEY
from ...pin import Pin
from ...ext import SpanTypes, http, aws
from ...compat import PYTHON_VERSION_INFO
from ...utils.formats import deep_getattr
from ...utils.wrappers import unwrap
ARGS_NAME = ('action', 'params', 'path', 'verb')
TRACED_ARGS = ['params', 'path', 'verb']
def patch():
if getattr(aiobotocore.client, '_datadog_patch', False):
return
setattr(aiobotocore.client, '_datadog_patch', True)
wrapt.wrap_function_wrapper('aiobotocore.client', 'AioBaseClient._make_api_call', _wrapped_api_call)
Pin(service='aws', app='aws').onto(aiobotocore.client.AioBaseClient)
def unpatch():
if getattr(aiobotocore.client, '_datadog_patch', False):
setattr(aiobotocore.client, '_datadog_patch', False)
unwrap(aiobotocore.client.AioBaseClient, '_make_api_call')
class WrappedClientResponseContentProxy(wrapt.ObjectProxy):
def __init__(self, body, pin, parent_span):
super(WrappedClientResponseContentProxy, self).__init__(body)
self._self_pin = pin
self._self_parent_span = parent_span
@asyncio.coroutine
def read(self, *args, **kwargs):
# async read that must be child of the parent span operation
operation_name = '{}.read'.format(self._self_parent_span.name)
with self._self_pin.tracer.start_span(operation_name, child_of=self._self_parent_span) as span:
# inherit parent attributes
span.resource = self._self_parent_span.resource
span.span_type = self._self_parent_span.span_type
span.meta = dict(self._self_parent_span.meta)
span.metrics = dict(self._self_parent_span.metrics)
result = yield from self.__wrapped__.read(*args, **kwargs)
span.set_tag('Length', len(result))
return result
# wrapt doesn't proxy `async with` context managers
if PYTHON_VERSION_INFO >= (3, 5, 0):
@asyncio.coroutine
def __aenter__(self):
# call the wrapped method but return the object proxy
yield from self.__wrapped__.__aenter__()
return self
@asyncio.coroutine
def __aexit__(self, *args, **kwargs):
response = yield from self.__wrapped__.__aexit__(*args, **kwargs)
return response
@asyncio.coroutine
def _wrapped_api_call(original_func, instance, args, kwargs):
pin = Pin.get_from(instance)
if not pin or not pin.enabled():
result = yield from original_func(*args, **kwargs)
return result
endpoint_name = deep_getattr(instance, '_endpoint._endpoint_prefix')
with pin.tracer.trace('{}.command'.format(endpoint_name),
service='{}.{}'.format(pin.service, endpoint_name),
span_type=SpanTypes.HTTP) as span:
if len(args) > 0:
operation = args[0]
span.resource = '{}.{}'.format(endpoint_name, operation.lower())
else:
operation = None
span.resource = endpoint_name
aws.add_span_arg_tags(span, endpoint_name, args, ARGS_NAME, TRACED_ARGS)
region_name = deep_getattr(instance, 'meta.region_name')
meta = {
'aws.agent': 'aiobotocore',
'aws.operation': operation,
'aws.region': region_name,
}
span.set_tags(meta)
result = yield from original_func(*args, **kwargs)
body = result.get('Body')
if isinstance(body, ClientResponseContentProxy):
result['Body'] = WrappedClientResponseContentProxy(body, pin, span)
response_meta = result['ResponseMetadata']
response_headers = response_meta['HTTPHeaders']
span.set_tag(http.STATUS_CODE, response_meta['HTTPStatusCode'])
span.set_tag('retry_attempts', response_meta['RetryAttempts'])
request_id = response_meta.get('RequestId')
if request_id:
span.set_tag('aws.requestid', request_id)
request_id2 = response_headers.get('x-amz-id-2')
if request_id2:
span.set_tag('aws.requestid2', request_id2)
# set analytics sample rate
span.set_tag(
ANALYTICS_SAMPLE_RATE_KEY,
config.aiobotocore.get_analytics_sample_rate()
)
return result

View File

@@ -1,61 +0,0 @@
"""
The ``aiohttp`` integration traces all requests defined in the application handlers.
Auto instrumentation is available using the ``trace_app`` function::
from aiohttp import web
from ddtrace import tracer, patch
from ddtrace.contrib.aiohttp import trace_app
# patch third-party modules like aiohttp_jinja2
patch(aiohttp=True)
# create your application
app = web.Application()
app.router.add_get('/', home_handler)
# trace your application handlers
trace_app(app, tracer, service='async-api')
web.run_app(app, port=8000)
Integration settings are attached to your application under the ``datadog_trace``
namespace. You can read or update them as follows::
# disables distributed tracing for all received requests
app['datadog_trace']['distributed_tracing_enabled'] = False
Available settings are:
* ``tracer`` (default: ``ddtrace.tracer``): set the default tracer instance that is used to
trace `aiohttp` internals. By default the `ddtrace` tracer is used.
* ``service`` (default: ``aiohttp-web``): set the service name used by the tracer. Usually
this configuration must be updated with a meaningful name.
* ``distributed_tracing_enabled`` (default: ``True``): enable distributed tracing during
the middleware execution, so that a new span is created with the given ``trace_id`` and
``parent_id`` injected via request headers.
* ``analytics_enabled`` (default: ``None``): enables APM events in Trace Search & Analytics.
Third-party modules that are currently supported by the ``patch()`` method are:
* ``aiohttp_jinja2``
When a request span is created, a new ``Context`` for this logical execution is attached
to the ``request`` object, so that it can be used in the application code::
async def home_handler(request):
ctx = request['datadog_context']
# do something with the tracing Context
"""
from ...utils.importlib import require_modules
required_modules = ['aiohttp']
with require_modules(required_modules) as missing_modules:
if not missing_modules:
from .patch import patch, unpatch
from .middlewares import trace_app
__all__ = [
'patch',
'unpatch',
'trace_app',
]

View File

@@ -1,146 +0,0 @@
import asyncio
from ..asyncio import context_provider
from ...compat import stringify
from ...constants import ANALYTICS_SAMPLE_RATE_KEY
from ...ext import SpanTypes, http
from ...propagation.http import HTTPPropagator
from ...settings import config
CONFIG_KEY = 'datadog_trace'
REQUEST_CONTEXT_KEY = 'datadog_context'
REQUEST_CONFIG_KEY = '__datadog_trace_config'
REQUEST_SPAN_KEY = '__datadog_request_span'
@asyncio.coroutine
def trace_middleware(app, handler):
"""
``aiohttp`` middleware that traces the handler execution.
Because handlers are run in different tasks for each request, we attach the Context
instance both to the Task and to the Request objects. In this way:
* the Task is used by the internal automatic instrumentation
* the ``Context`` attached to the request can be freely used in the application code
"""
@asyncio.coroutine
def attach_context(request):
# application configs
tracer = app[CONFIG_KEY]['tracer']
service = app[CONFIG_KEY]['service']
distributed_tracing = app[CONFIG_KEY]['distributed_tracing_enabled']
# Create a new context based on the propagated information.
if distributed_tracing:
propagator = HTTPPropagator()
context = propagator.extract(request.headers)
# Only need to activate the new context if something was propagated
if context.trace_id:
tracer.context_provider.activate(context)
# trace the handler
request_span = tracer.trace(
'aiohttp.request',
service=service,
span_type=SpanTypes.WEB,
)
# Configure trace search sample rate
# DEV: aiohttp is a special case; it maintains separate configuration from the config api
analytics_enabled = app[CONFIG_KEY]['analytics_enabled']
if (config.analytics_enabled and analytics_enabled is not False) or analytics_enabled is True:
request_span.set_tag(
ANALYTICS_SAMPLE_RATE_KEY,
app[CONFIG_KEY].get('analytics_sample_rate', True)
)
# attach the context and the root span to the request; the Context
# may be freely used by the application code
request[REQUEST_CONTEXT_KEY] = request_span.context
request[REQUEST_SPAN_KEY] = request_span
request[REQUEST_CONFIG_KEY] = app[CONFIG_KEY]
try:
response = yield from handler(request)
return response
except Exception:
request_span.set_traceback()
raise
return attach_context
@asyncio.coroutine
def on_prepare(request, response):
"""
The on_prepare signal is used to close the request span that is created during
the trace middleware execution.
"""
# safe-guard: discard if we don't have a request span
request_span = request.get(REQUEST_SPAN_KEY, None)
if not request_span:
return
# default resource name
resource = stringify(response.status)
if request.match_info.route.resource:
# collect the resource name based on http resource type
res_info = request.match_info.route.resource.get_info()
if res_info.get('path'):
resource = res_info.get('path')
elif res_info.get('formatter'):
resource = res_info.get('formatter')
elif res_info.get('prefix'):
resource = res_info.get('prefix')
# prefix the resource name by the http method
resource = '{} {}'.format(request.method, resource)
if 500 <= response.status < 600:
request_span.error = 1
request_span.resource = resource
request_span.set_tag('http.method', request.method)
request_span.set_tag('http.status_code', response.status)
request_span.set_tag(http.URL, request.url.with_query(None))
# DEV: aiohttp is a special case that maintains separate configuration from the config api
trace_query_string = request[REQUEST_CONFIG_KEY].get('trace_query_string')
if trace_query_string is None:
trace_query_string = config._http.trace_query_string
if trace_query_string:
request_span.set_tag(http.QUERY_STRING, request.query_string)
request_span.finish()
def trace_app(app, tracer, service='aiohttp-web'):
"""
Tracing function that patches the ``aiohttp`` application so that it will be
traced using the given ``tracer``.
:param app: aiohttp application to trace
:param tracer: tracer instance to use
:param service: service name of tracer
"""
# safe-guard: don't trace an application twice
if getattr(app, '__datadog_trace', False):
return
setattr(app, '__datadog_trace', True)
# configure datadog settings
app[CONFIG_KEY] = {
'tracer': tracer,
'service': service,
'distributed_tracing_enabled': True,
'analytics_enabled': None,
'analytics_sample_rate': 1.0,
}
# the tracer must work with asynchronous Context propagation
tracer.configure(context_provider=context_provider)
# add the async tracer middleware as the first middleware
# and be sure that the on_prepare signal is the last one
app.middlewares.insert(0, trace_middleware)
app.on_response_prepare.append(on_prepare)

View File

@ -1,39 +0,0 @@
from ddtrace.vendor import wrapt
from ...pin import Pin
from ...utils.wrappers import unwrap
try:
# instrument external packages only if they're available
import aiohttp_jinja2
from .template import _trace_render_template
template_module = True
except ImportError:
template_module = False
def patch():
"""
Patch aiohttp third party modules:
* aiohttp_jinja2
"""
if template_module:
if getattr(aiohttp_jinja2, '__datadog_patch', False):
return
setattr(aiohttp_jinja2, '__datadog_patch', True)
_w = wrapt.wrap_function_wrapper
_w('aiohttp_jinja2', 'render_template', _trace_render_template)
Pin(app='aiohttp', service=None).onto(aiohttp_jinja2)
def unpatch():
"""
Remove tracing from patched modules.
"""
if template_module:
if getattr(aiohttp_jinja2, '__datadog_patch', False):
setattr(aiohttp_jinja2, '__datadog_patch', False)
unwrap(aiohttp_jinja2, 'render_template')

View File

@ -1,29 +0,0 @@
import aiohttp_jinja2
from ddtrace import Pin
from ...ext import SpanTypes
def _trace_render_template(func, module, args, kwargs):
"""
Trace the template rendering
"""
# get the module pin
pin = Pin.get_from(aiohttp_jinja2)
if not pin or not pin.enabled():
return func(*args, **kwargs)
# original signature:
# render_template(template_name, request, context, *, app_key=APP_KEY, encoding='utf-8')
template_name = args[0]
request = args[1]
env = aiohttp_jinja2.get_env(request.app)
# the prefix is available only on PackageLoader
template_prefix = getattr(env.loader, 'package_path', '')
template_meta = '{}/{}'.format(template_prefix, template_name)
with pin.tracer.trace('aiohttp.template', span_type=SpanTypes.TEMPLATE) as span:
span.set_meta('aiohttp.template', template_meta)
return func(*args, **kwargs)

View File

@ -1,32 +0,0 @@
"""
The Algoliasearch__ integration will add tracing to your Algolia searches.
::
from ddtrace import patch_all
patch_all()
from algoliasearch import algoliasearch
client = algoliasearch.Client(<ID>, <API_KEY>)
index = client.init_index(<INDEX_NAME>)
index.search("your query", args={"attributesToRetrieve": "attribute1,attribute1"})
Configuration
~~~~~~~~~~~~~
.. py:data:: ddtrace.config.algoliasearch['collect_query_text']
Whether to pass the text of your query onto Datadog. Since this may contain sensitive data, it's off by default.
Default: ``False``
.. __: https://www.algolia.com
"""
from ...utils.importlib import require_modules
with require_modules(['algoliasearch', 'algoliasearch.version']) as missing_modules:
if not missing_modules:
from .patch import patch, unpatch
__all__ = ['patch', 'unpatch']
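A short sketch (assumed setup, not part of the original tree) of opting in to query-text collection via the config entry documented above; the search code itself is unchanged:

```python
# Sketch only: enable the off-by-default query text collection.
from ddtrace import config, patch_all

patch_all()
config.algoliasearch['collect_query_text'] = True
```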

View File

@ -1,143 +0,0 @@
from ddtrace.pin import Pin
from ddtrace.settings import config
from ddtrace.utils.wrappers import unwrap as _u
from ddtrace.vendor.wrapt import wrap_function_wrapper as _w
DD_PATCH_ATTR = '_datadog_patch'
SERVICE_NAME = 'algoliasearch'
APP_NAME = 'algoliasearch'
try:
import algoliasearch
from algoliasearch.version import VERSION
algoliasearch_version = tuple([int(i) for i in VERSION.split('.')])
# Default configuration
config._add('algoliasearch', dict(
service_name=SERVICE_NAME,
collect_query_text=False
))
except ImportError:
algoliasearch_version = (0, 0)
def patch():
if algoliasearch_version == (0, 0):
return
if getattr(algoliasearch, DD_PATCH_ATTR, False):
return
setattr(algoliasearch, '_datadog_patch', True)
pin = Pin(
service=config.algoliasearch.service_name, app=APP_NAME
)
if algoliasearch_version < (2, 0) and algoliasearch_version >= (1, 0):
_w(algoliasearch.index, 'Index.search', _patched_search)
pin.onto(algoliasearch.index.Index)
elif algoliasearch_version >= (2, 0) and algoliasearch_version < (3, 0):
from algoliasearch import search_index
_w(algoliasearch, 'search_index.SearchIndex.search', _patched_search)
pin.onto(search_index.SearchIndex)
else:
return
def unpatch():
if algoliasearch_version == (0, 0):
return
if getattr(algoliasearch, DD_PATCH_ATTR, False):
setattr(algoliasearch, DD_PATCH_ATTR, False)
if algoliasearch_version < (2, 0) and algoliasearch_version >= (1, 0):
_u(algoliasearch.index.Index, 'search')
elif algoliasearch_version >= (2, 0) and algoliasearch_version < (3, 0):
from algoliasearch import search_index
_u(search_index.SearchIndex, 'search')
else:
return
# DEV: this map serves the dual purpose of enumerating the algoliasearch.search() query_args that
# will be sent along as tags, as well as converting arguments names into tag names compliant with
# tag naming recommendations set out here: https://docs.datadoghq.com/tagging/
QUERY_ARGS_DD_TAG_MAP = {
'page': 'page',
'hitsPerPage': 'hits_per_page',
'attributesToRetrieve': 'attributes_to_retrieve',
'attributesToHighlight': 'attributes_to_highlight',
'attributesToSnippet': 'attributes_to_snippet',
'minWordSizefor1Typo': 'min_word_size_for_1_typo',
'minWordSizefor2Typos': 'min_word_size_for_2_typos',
'getRankingInfo': 'get_ranking_info',
'aroundLatLng': 'around_lat_lng',
'numericFilters': 'numeric_filters',
'tagFilters': 'tag_filters',
'queryType': 'query_type',
'optionalWords': 'optional_words',
'distinct': 'distinct'
}
def _patched_search(func, instance, wrapt_args, wrapt_kwargs):
"""
wrapt_args is called the way it is to distinguish it from the 'args'
argument to the algoliasearch.index.Index.search() method.
"""
if algoliasearch_version < (2, 0) and algoliasearch_version >= (1, 0):
function_query_arg_name = 'args'
elif algoliasearch_version >= (2, 0) and algoliasearch_version < (3, 0):
function_query_arg_name = 'request_options'
else:
return func(*wrapt_args, **wrapt_kwargs)
pin = Pin.get_from(instance)
if not pin or not pin.enabled():
return func(*wrapt_args, **wrapt_kwargs)
with pin.tracer.trace('algoliasearch.search', service=pin.service) as span:
if not span.sampled:
return func(*wrapt_args, **wrapt_kwargs)
if config.algoliasearch.collect_query_text:
span.set_tag('query.text', wrapt_kwargs.get('query', wrapt_args[0]))
query_args = wrapt_kwargs.get(function_query_arg_name, wrapt_args[1] if len(wrapt_args) > 1 else None)
if query_args and isinstance(query_args, dict):
for query_arg, tag_name in QUERY_ARGS_DD_TAG_MAP.items():
value = query_args.get(query_arg)
if value is not None:
span.set_tag('query.args.{}'.format(tag_name), value)
# Result would look like this
# {
# 'hits': [
# {
# .... your search results ...
# }
# ],
# 'processingTimeMS': 1,
# 'nbHits': 1,
# 'hitsPerPage': 20,
# 'exhaustiveNbHits': true,
# 'params': 'query=xxx',
# 'nbPages': 1,
# 'query': 'xxx',
# 'page': 0
# }
result = func(*wrapt_args, **wrapt_kwargs)
if isinstance(result, dict):
if result.get('processingTimeMS', None) is not None:
span.set_metric('processing_time_ms', int(result['processingTimeMS']))
if result.get('nbHits', None) is not None:
span.set_metric('number_of_hits', int(result['nbHits']))
return result

View File

@ -1,72 +0,0 @@
"""
This integration provides the ``AsyncioContextProvider`` that follows the execution
flow of a ``Task``, making it possible to trace asynchronous code built on top
of ``asyncio``. To trace asynchronous execution, you must::
import asyncio
from ddtrace import tracer
from ddtrace.contrib.asyncio import context_provider
# enable asyncio support
tracer.configure(context_provider=context_provider)
async def some_work():
with tracer.trace('asyncio.some_work'):
# do something
# launch your coroutines as usual
loop = asyncio.get_event_loop()
loop.run_until_complete(some_work())
loop.close()
If ``contextvars`` is available, we use the
:class:`ddtrace.provider.DefaultContextProvider`, otherwise we use the legacy
:class:`ddtrace.contrib.asyncio.provider.AsyncioContextProvider`.
In addition, helpers are provided to simplify how the tracing ``Context`` is
handled between scheduled coroutines and ``Future`` invoked in separated
threads:
* ``set_call_context(task, ctx)``: attach the context to the given ``Task``
so that it will be available from the ``tracer.get_call_context()``
* ``ensure_future(coro_or_future, *, loop=None)``: wrapper for the
``asyncio.ensure_future`` that attaches the current context to a new
``Task`` instance
* ``run_in_executor(loop, executor, func, *args)``: wrapper for the
``loop.run_in_executor`` that attaches the current context to the
new thread so that the trace can be resumed regardless of when
it's executed
* ``create_task(coro)``: creates a new asyncio ``Task`` that inherits
the current active ``Context`` so that generated traces in the new task
are attached to the main trace
A ``patch(asyncio=True)`` is available if you want to automatically use the above
wrappers without changing your code. In that case, the patch method **must be
called before** importing stdlib functions.
"""
from ...utils.importlib import require_modules
required_modules = ['asyncio']
with require_modules(required_modules) as missing_modules:
if not missing_modules:
from .provider import AsyncioContextProvider
from ...internal.context_manager import CONTEXTVARS_IS_AVAILABLE
from ...provider import DefaultContextProvider
if CONTEXTVARS_IS_AVAILABLE:
context_provider = DefaultContextProvider()
else:
context_provider = AsyncioContextProvider()
from .helpers import set_call_context, ensure_future, run_in_executor
from .patch import patch
__all__ = [
'context_provider',
'set_call_context',
'ensure_future',
'run_in_executor',
'patch'
]
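A minimal sketch (span names are assumed) showing the helpers listed above in use; both the child ``Task`` and the thread-pool call continue the trace started in ``main()``:

```python
# Sketch only; span names are illustrative.
import asyncio

from ddtrace import tracer
from ddtrace.contrib.asyncio import context_provider, ensure_future, run_in_executor

tracer.configure(context_provider=context_provider)

def blocking_work():
    # resumed in the worker thread, still attached to the same trace
    with tracer.trace('example.blocking_work'):
        return 42

async def child():
    # parented to 'example.main' because the Task inherits the active Context
    with tracer.trace('example.child'):
        await asyncio.sleep(0)

async def main(loop):
    with tracer.trace('example.main'):
        await ensure_future(child())
        await run_in_executor(loop, None, blocking_work)

loop = asyncio.get_event_loop()
loop.run_until_complete(main(loop))
loop.close()
```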

View File

@ -1,9 +0,0 @@
import sys
# asyncio.Task.current_task method is deprecated and will be removed in Python
# 3.9. Instead use asyncio.current_task
if sys.version_info >= (3, 7, 0):
from asyncio import current_task as asyncio_current_task
else:
import asyncio
asyncio_current_task = asyncio.Task.current_task

View File

@ -1,83 +0,0 @@
"""
This module includes a list of convenience methods that
can be used to simplify some operations while handling
Context and Spans in instrumented ``asyncio`` code.
"""
import asyncio
import ddtrace
from .provider import CONTEXT_ATTR
from .wrappers import wrapped_create_task
from ...context import Context
def set_call_context(task, ctx):
"""
Updates the ``Context`` for the given Task. Useful when you need to
pass the context among different tasks.
This method is available for backward-compatibility. Use the
``AsyncioContextProvider`` API to set the current active ``Context``.
"""
setattr(task, CONTEXT_ATTR, ctx)
def ensure_future(coro_or_future, *, loop=None, tracer=None):
"""Wrapper that sets a context to the newly created Task.
If the current task already has a Context, it will be attached to the new Task so the Trace list will be preserved.
"""
tracer = tracer or ddtrace.tracer
current_ctx = tracer.get_call_context()
task = asyncio.ensure_future(coro_or_future, loop=loop)
set_call_context(task, current_ctx)
return task
def run_in_executor(loop, executor, func, *args, tracer=None):
"""Wrapper function that sets a context to the newly created Thread.
If the current task has a Context, it will be attached as an empty Context with the current_span activated to
inherit the ``trace_id`` and the ``parent_id``.
Because the Executor can run the Thread immediately or after the
coroutine is executed, we may have two different scenarios:
* the Context is copied in the new Thread and the trace is sent twice
* the coroutine flushes the Context and when the Thread copies the
Context it is already empty (so it will be a root Span)
To support both situations, we create a new Context that knows only what was
the latest active Span when the new thread was created. In this new thread,
we fall back to the thread-local ``Context`` storage.
"""
tracer = tracer or ddtrace.tracer
ctx = Context()
current_ctx = tracer.get_call_context()
ctx._current_span = current_ctx._current_span
# prepare the future using an executor wrapper
future = loop.run_in_executor(executor, _wrap_executor, func, args, tracer, ctx)
return future
def _wrap_executor(fn, args, tracer, ctx):
"""
This function is executed in the newly created Thread so the right
``Context`` can be set in the thread-local storage. This operation
is safe because the ``Context`` class is thread-safe and can be
updated concurrently.
"""
# the AsyncioContextProvider knows that this is a new thread
# so it is legit to pass the Context in the thread-local storage;
# fn() will be executed outside the asyncio loop as synchronous code
tracer.context_provider.activate(ctx)
return fn(*args)
def create_task(*args, **kwargs):
"""This function spawns a task with a Context that inherits the
`trace_id` and the `parent_id` from the current active one if available.
"""
loop = asyncio.get_event_loop()
return wrapped_create_task(loop.create_task, None, args, kwargs)

View File

@ -1,32 +0,0 @@
import asyncio
from ddtrace.vendor.wrapt import wrap_function_wrapper as _w
from ...internal.context_manager import CONTEXTVARS_IS_AVAILABLE
from .wrappers import wrapped_create_task, wrapped_create_task_contextvars
from ...utils.wrappers import unwrap as _u
def patch():
"""Patches current loop `create_task()` method to enable spawned tasks to
parent to the base task context.
"""
if getattr(asyncio, '_datadog_patch', False):
return
setattr(asyncio, '_datadog_patch', True)
loop = asyncio.get_event_loop()
if CONTEXTVARS_IS_AVAILABLE:
_w(loop, 'create_task', wrapped_create_task_contextvars)
else:
_w(loop, 'create_task', wrapped_create_task)
def unpatch():
"""Remove tracing from patched modules."""
if getattr(asyncio, '_datadog_patch', False):
setattr(asyncio, '_datadog_patch', False)
loop = asyncio.get_event_loop()
_u(loop, 'create_task')

View File

@ -1,86 +0,0 @@
import asyncio
from ...context import Context
from ...provider import DefaultContextProvider
# Task attribute used to set/get the Context instance
CONTEXT_ATTR = '__datadog_context'
class AsyncioContextProvider(DefaultContextProvider):
"""
Context provider that retrieves all contexts for the current asyncio
execution. It must be used in asynchronous programming that relies
on the built-in ``asyncio`` library. Framework instrumentation that
is built on top of the ``asyncio`` library can use this provider.
This Context Provider inherits from ``DefaultContextProvider`` because
it uses thread-local storage when the ``Context`` is propagated to
a different thread than the one that is running the async loop.
"""
def activate(self, context, loop=None):
"""Sets the scoped ``Context`` for the current running ``Task``.
"""
loop = self._get_loop(loop)
if not loop:
self._local.set(context)
return context
# the current unit of work (if tasks are used)
task = asyncio.Task.current_task(loop=loop)
setattr(task, CONTEXT_ATTR, context)
return context
def _get_loop(self, loop=None):
"""Helper to try and resolve the current loop"""
try:
return loop or asyncio.get_event_loop()
except RuntimeError:
# Detects if a loop is available in the current thread;
# DEV: This happens when a new thread is created from the one that is running the async loop
# DEV: It's possible that a different Executor is handling a different Thread that
# works with blocking code. In that case, we fall back to a thread-local Context.
pass
return None
def _has_active_context(self, loop=None):
"""Helper to determine if we have a currently active context"""
loop = self._get_loop(loop=loop)
if loop is None:
return self._local._has_active_context()
# the current unit of work (if tasks are used)
task = asyncio.Task.current_task(loop=loop)
if task is None:
return False
ctx = getattr(task, CONTEXT_ATTR, None)
return ctx is not None
def active(self, loop=None):
"""
Returns the scoped Context for this execution flow. The ``Context`` uses
the current task as a carrier so if a single task is used for the entire application,
the context must be handled separately.
"""
loop = self._get_loop(loop=loop)
if not loop:
return self._local.get()
# the current unit of work (if tasks are used)
task = asyncio.Task.current_task(loop=loop)
if task is None:
# providing a detached Context from the current Task may lead to
# wrong traces. This defensive behavior ensures that a trace can
# still be built without raising exceptions
return Context()
ctx = getattr(task, CONTEXT_ATTR, None)
if ctx is not None:
# return the active Context for this task (if any)
return ctx
# create a new Context using the Task as a Context carrier
ctx = Context()
setattr(task, CONTEXT_ATTR, ctx)
return ctx

View File

@ -1,58 +0,0 @@
import ddtrace
from .compat import asyncio_current_task
from .provider import CONTEXT_ATTR
from ...context import Context
def wrapped_create_task(wrapped, instance, args, kwargs):
"""Wrapper for ``create_task(coro)`` that propagates the current active
``Context`` to the new ``Task``. This function is useful to connect traces
of detached executions.
Note: we can't just link the task contexts due to the following scenario:
* begin task A
* task A starts task B1..B10
* finish task B1-B9 (B10 still on trace stack)
* task A starts task C
* now task C gets parented to task B10 since it's still on the stack,
however it was not actually triggered by B10
"""
new_task = wrapped(*args, **kwargs)
current_task = asyncio_current_task()
ctx = getattr(current_task, CONTEXT_ATTR, None)
if ctx:
# current task has a context, so parent a new context to the base context
new_ctx = Context(
trace_id=ctx.trace_id,
span_id=ctx.span_id,
sampling_priority=ctx.sampling_priority,
)
setattr(new_task, CONTEXT_ATTR, new_ctx)
return new_task
def wrapped_create_task_contextvars(wrapped, instance, args, kwargs):
"""Wrapper for ``create_task(coro)`` that propagates the current active
``Context`` to the new ``Task``. This function is useful to connect traces
of detached executions. Uses contextvars for task-local storage.
"""
current_task_ctx = ddtrace.tracer.get_call_context()
if not current_task_ctx:
# no current context exists so nothing special to be done in handling
# context for new task
return wrapped(*args, **kwargs)
# clone and activate current task's context for new task to support
# detached executions
new_task_ctx = current_task_ctx.clone()
ddtrace.tracer.context_provider.activate(new_task_ctx)
try:
# activated context will now be copied to new task
return wrapped(*args, **kwargs)
finally:
# reactivate current task context
ddtrace.tracer.context_provider.activate(current_task_ctx)

View File

@ -1,23 +0,0 @@
"""
The bottle integration traces the Bottle web framework. Add the following
plugin to your app::
import bottle
from ddtrace import tracer
from ddtrace.contrib.bottle import TracePlugin
app = bottle.Bottle()
plugin = TracePlugin(service="my-web-app")
app.install(plugin)
"""
from ...utils.importlib import require_modules
required_modules = ['bottle']
with require_modules(required_modules) as missing_modules:
if not missing_modules:
from .trace import TracePlugin
from .patch import patch
__all__ = ['TracePlugin', 'patch']

View File

@ -1,26 +0,0 @@
import os
from .trace import TracePlugin
import bottle
from ddtrace.vendor import wrapt
def patch():
"""Patch the bottle.Bottle class
"""
if getattr(bottle, '_datadog_patch', False):
return
setattr(bottle, '_datadog_patch', True)
wrapt.wrap_function_wrapper('bottle', 'Bottle.__init__', traced_init)
def traced_init(wrapped, instance, args, kwargs):
wrapped(*args, **kwargs)
service = os.environ.get('DATADOG_SERVICE_NAME') or 'bottle'
plugin = TracePlugin(service=service)
instance.install(plugin)

View File

@ -1,83 +0,0 @@
# 3p
from bottle import response, request, HTTPError, HTTPResponse
# stdlib
import ddtrace
# project
from ...constants import ANALYTICS_SAMPLE_RATE_KEY
from ...ext import SpanTypes, http
from ...propagation.http import HTTPPropagator
from ...settings import config
class TracePlugin(object):
name = 'trace'
api = 2
def __init__(self, service='bottle', tracer=None, distributed_tracing=True):
self.service = service
self.tracer = tracer or ddtrace.tracer
self.distributed_tracing = distributed_tracing
def apply(self, callback, route):
def wrapped(*args, **kwargs):
if not self.tracer or not self.tracer.enabled:
return callback(*args, **kwargs)
resource = '{} {}'.format(request.method, route.rule)
# Propagate headers such as x-datadog-trace-id.
if self.distributed_tracing:
propagator = HTTPPropagator()
context = propagator.extract(request.headers)
if context.trace_id:
self.tracer.context_provider.activate(context)
with self.tracer.trace(
'bottle.request', service=self.service, resource=resource, span_type=SpanTypes.WEB
) as s:
# set analytics sample rate with global config enabled
s.set_tag(
ANALYTICS_SAMPLE_RATE_KEY,
config.bottle.get_analytics_sample_rate(use_global_config=True)
)
code = None
result = None
try:
result = callback(*args, **kwargs)
return result
except (HTTPError, HTTPResponse) as e:
# you can interrupt flows using abort(status_code, 'message')...
# we need to respect the defined status_code.
# we also need to handle when response is raised as is the
# case with a 4xx status
code = e.status_code
raise
except Exception:
# bottle doesn't always translate unhandled exceptions, so
# we mark it here.
code = 500
raise
finally:
if isinstance(result, HTTPResponse):
response_code = result.status_code
elif code:
response_code = code
else:
# bottle local response has not yet been updated so this
# will be default
response_code = response.status_code
if 500 <= response_code < 600:
s.error = 1
s.set_tag(http.STATUS_CODE, response_code)
s.set_tag(http.URL, request.urlparts._replace(query='').geturl())
s.set_tag(http.METHOD, request.method)
if config.bottle.trace_query_string:
s.set_tag(http.QUERY_STRING, request.query_string)
return wrapped
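A brief sketch (app and service name assumed) of installing the plugin with the constructor arguments shown above, for example with distributed tracing turned off:

```python
# Sketch only; service name is illustrative.
import bottle
from ddtrace import tracer
from ddtrace.contrib.bottle import TracePlugin

app = bottle.Bottle()
app.install(TracePlugin(service='my-web-app', tracer=tracer, distributed_tracing=False))
```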

View File

@ -1,35 +0,0 @@
"""Instrument Cassandra to report Cassandra queries.
``patch_all`` will automatically patch your Cluster instance to make it work.
::
from ddtrace import Pin, patch
from cassandra.cluster import Cluster
# If not patched yet, you can patch cassandra specifically
patch(cassandra=True)
# This will report spans with the default instrumentation
cluster = Cluster(contact_points=["127.0.0.1"], port=9042)
session = cluster.connect("my_keyspace")
# Example of instrumented query
session.execute("select id from my_table limit 10;")
# Use a pin to specify metadata related to this cluster
cluster = Cluster(contact_points=['10.1.1.3', '10.1.1.4', '10.1.1.5'], port=9042)
Pin.override(cluster, service='cassandra-backend')
session = cluster.connect("my_keyspace")
session.execute("select id from my_table limit 10;")
"""
from ...utils.importlib import require_modules
required_modules = ['cassandra.cluster']
with require_modules(required_modules) as missing_modules:
if not missing_modules:
from .session import get_traced_cassandra, patch
__all__ = [
'get_traced_cassandra',
'patch',
]

View File

@ -1,3 +0,0 @@
from .session import patch, unpatch
__all__ = ['patch', 'unpatch']

View File

@ -1,297 +0,0 @@
"""
Trace queries made through a session to a Cassandra cluster
"""
import sys
# 3p
import cassandra.cluster
# project
from ...compat import stringify
from ...constants import ANALYTICS_SAMPLE_RATE_KEY
from ...ext import SpanTypes, net, cassandra as cassx, errors
from ...internal.logger import get_logger
from ...pin import Pin
from ...settings import config
from ...utils.deprecation import deprecated
from ...utils.formats import deep_getattr
from ...vendor import wrapt
log = get_logger(__name__)
RESOURCE_MAX_LENGTH = 5000
SERVICE = 'cassandra'
CURRENT_SPAN = '_ddtrace_current_span'
PAGE_NUMBER = '_ddtrace_page_number'
# Original connect function
_connect = cassandra.cluster.Cluster.connect
def patch():
""" patch will add tracing to the cassandra library. """
setattr(cassandra.cluster.Cluster, 'connect',
wrapt.FunctionWrapper(_connect, traced_connect))
Pin(service=SERVICE, app=SERVICE).onto(cassandra.cluster.Cluster)
def unpatch():
cassandra.cluster.Cluster.connect = _connect
def traced_connect(func, instance, args, kwargs):
session = func(*args, **kwargs)
if not isinstance(session.execute, wrapt.FunctionWrapper):
# FIXME[matt] this should probably be private.
setattr(session, 'execute_async', wrapt.FunctionWrapper(session.execute_async, traced_execute_async))
return session
def _close_span_on_success(result, future):
span = getattr(future, CURRENT_SPAN, None)
if not span:
log.debug('traced_set_final_result was not able to get the current span from the ResponseFuture')
return
try:
span.set_tags(_extract_result_metas(cassandra.cluster.ResultSet(future, result)))
except Exception:
log.debug('an exception occurred while setting tags', exc_info=True)
finally:
span.finish()
delattr(future, CURRENT_SPAN)
def traced_set_final_result(func, instance, args, kwargs):
result = args[0]
_close_span_on_success(result, instance)
return func(*args, **kwargs)
def _close_span_on_error(exc, future):
span = getattr(future, CURRENT_SPAN, None)
if not span:
log.debug('traced_set_final_exception was not able to get the current span from the ResponseFuture')
return
try:
# handling the exception manually because we
# don't have an ongoing exception here
span.error = 1
span.set_tag(errors.ERROR_MSG, exc.args[0])
span.set_tag(errors.ERROR_TYPE, exc.__class__.__name__)
except Exception:
log.debug('traced_set_final_exception was not able to set the error, failed with error', exc_info=True)
finally:
span.finish()
delattr(future, CURRENT_SPAN)
def traced_set_final_exception(func, instance, args, kwargs):
exc = args[0]
_close_span_on_error(exc, instance)
return func(*args, **kwargs)
def traced_start_fetching_next_page(func, instance, args, kwargs):
has_more_pages = getattr(instance, 'has_more_pages', True)
if not has_more_pages:
return func(*args, **kwargs)
session = getattr(instance, 'session', None)
cluster = getattr(session, 'cluster', None)
pin = Pin.get_from(cluster)
if not pin or not pin.enabled():
return func(*args, **kwargs)
# In case the current span is not finished we make sure to finish it
old_span = getattr(instance, CURRENT_SPAN, None)
if old_span:
log.debug('previous span was not finished before fetching next page')
old_span.finish()
query = getattr(instance, 'query', None)
span = _start_span_and_set_tags(pin, query, session, cluster)
page_number = getattr(instance, PAGE_NUMBER, 1) + 1
setattr(instance, PAGE_NUMBER, page_number)
setattr(instance, CURRENT_SPAN, span)
try:
return func(*args, **kwargs)
except Exception:
with span:
span.set_exc_info(*sys.exc_info())
raise
def traced_execute_async(func, instance, args, kwargs):
cluster = getattr(instance, 'cluster', None)
pin = Pin.get_from(cluster)
if not pin or not pin.enabled():
return func(*args, **kwargs)
query = kwargs.get('query') or args[0]
span = _start_span_and_set_tags(pin, query, instance, cluster)
try:
result = func(*args, **kwargs)
setattr(result, CURRENT_SPAN, span)
setattr(result, PAGE_NUMBER, 1)
setattr(
result,
'_set_final_result',
wrapt.FunctionWrapper(
result._set_final_result,
traced_set_final_result
)
)
setattr(
result,
'_set_final_exception',
wrapt.FunctionWrapper(
result._set_final_exception,
traced_set_final_exception
)
)
setattr(
result,
'start_fetching_next_page',
wrapt.FunctionWrapper(
result.start_fetching_next_page,
traced_start_fetching_next_page
)
)
# Since we cannot be sure that the previous methods were overwritten
# before the call ended, we add callbacks that will be run
# synchronously if the call already returned and we remove them right
# after.
result.add_callbacks(
_close_span_on_success,
_close_span_on_error,
callback_args=(result,),
errback_args=(result,)
)
result.clear_callbacks()
return result
except Exception:
with span:
span.set_exc_info(*sys.exc_info())
raise
def _start_span_and_set_tags(pin, query, session, cluster):
service = pin.service
tracer = pin.tracer
span = tracer.trace('cassandra.query', service=service, span_type=SpanTypes.CASSANDRA)
_sanitize_query(span, query)
span.set_tags(_extract_session_metas(session)) # FIXME[matt] do once?
span.set_tags(_extract_cluster_metas(cluster))
# set analytics sample rate if enabled
span.set_tag(
ANALYTICS_SAMPLE_RATE_KEY,
config.cassandra.get_analytics_sample_rate()
)
return span
def _extract_session_metas(session):
metas = {}
if getattr(session, 'keyspace', None):
# FIXME the keyspace can be overridden explicitly in the query itself
# e.g. 'select * from trace.hash_to_resource'
metas[cassx.KEYSPACE] = session.keyspace.lower()
return metas
def _extract_cluster_metas(cluster):
metas = {}
if deep_getattr(cluster, 'metadata.cluster_name'):
metas[cassx.CLUSTER] = cluster.metadata.cluster_name
if getattr(cluster, 'port', None):
metas[net.TARGET_PORT] = cluster.port
return metas
def _extract_result_metas(result):
metas = {}
if result is None:
return metas
future = getattr(result, 'response_future', None)
if future:
# get the host
host = getattr(future, 'coordinator_host', None)
if host:
metas[net.TARGET_HOST] = host
elif hasattr(future, '_current_host'):
address = deep_getattr(future, '_current_host.address')
if address:
metas[net.TARGET_HOST] = address
query = getattr(future, 'query', None)
if getattr(query, 'consistency_level', None):
metas[cassx.CONSISTENCY_LEVEL] = query.consistency_level
if getattr(query, 'keyspace', None):
metas[cassx.KEYSPACE] = query.keyspace.lower()
page_number = getattr(future, PAGE_NUMBER, 1)
has_more_pages = getattr(future, 'has_more_pages')
is_paginated = has_more_pages or page_number > 1
metas[cassx.PAGINATED] = is_paginated
if is_paginated:
metas[cassx.PAGE_NUMBER] = page_number
if hasattr(result, 'current_rows'):
result_rows = result.current_rows or []
metas[cassx.ROW_COUNT] = len(result_rows)
return metas
def _sanitize_query(span, query):
# TODO (aaditya): fix this hacky type check. we need it to avoid circular imports
t = type(query).__name__
resource = None
if t in ('SimpleStatement', 'PreparedStatement'):
# reset query if a string is available
resource = getattr(query, 'query_string', query)
elif t == 'BatchStatement':
resource = 'BatchStatement'
# Each element in `_statements_and_parameters` is:
# (is_prepared, statement, parameters)
# ref:https://github.com/datastax/python-driver/blob/13d6d72be74f40fcef5ec0f2b3e98538b3b87459/cassandra/query.py#L844
#
# For prepared statements, the `statement` value is just the query_id
# which is not a statement and when trying to join with other strings
# raises an error in python3 around joining bytes to unicode, so this
# just filters out prepared statements from this tag value
q = '; '.join(q[1] for q in query._statements_and_parameters[:2] if not q[0])
span.set_tag('cassandra.query', q)
span.set_metric('cassandra.batch_size', len(query._statements_and_parameters))
elif t == 'BoundStatement':
ps = getattr(query, 'prepared_statement', None)
if ps:
resource = getattr(ps, 'query_string', None)
elif t == 'str':
resource = query
else:
resource = 'unknown-query-type' # FIXME[matt] what else to do here?
span.resource = stringify(resource)[:RESOURCE_MAX_LENGTH]
#
# DEPRECATED
#
@deprecated(message='Use patching instead (see the docs).', version='1.0.0')
def get_traced_cassandra(*args, **kwargs):
return _get_traced_cluster(*args, **kwargs)
def _get_traced_cluster(*args, **kwargs):
return cassandra.cluster.Cluster

View File

@ -1,29 +0,0 @@
"""Instrument Consul to trace KV queries.
Only supports tracing for the synchronous client.
``patch_all`` will automatically patch your Consul client to make it work.
::
from ddtrace import Pin, patch
import consul
# If not patched yet, you can patch consul specifically
patch(consul=True)
# This will report a span with the default settings
client = consul.Consul(host="127.0.0.1", port=8500)
client.get("my-key")
# Use a pin to specify metadata related to this client
Pin.override(client, service='consul-kv')
"""
from ...utils.importlib import require_modules
required_modules = ['consul']
with require_modules(required_modules) as missing_modules:
if not missing_modules:
from .patch import patch, unpatch
__all__ = ['patch', 'unpatch']

View File

@ -1,57 +0,0 @@
import consul
from ddtrace.vendor.wrapt import wrap_function_wrapper as _w
from ddtrace import config
from ...constants import ANALYTICS_SAMPLE_RATE_KEY
from ...ext import consul as consulx
from ...pin import Pin
from ...utils.wrappers import unwrap as _u
_KV_FUNCS = ['put', 'get', 'delete']
def patch():
if getattr(consul, '__datadog_patch', False):
return
setattr(consul, '__datadog_patch', True)
pin = Pin(service=consulx.SERVICE, app=consulx.APP)
pin.onto(consul.Consul.KV)
for f_name in _KV_FUNCS:
_w('consul', 'Consul.KV.%s' % f_name, wrap_function(f_name))
def unpatch():
if not getattr(consul, '__datadog_patch', False):
return
setattr(consul, '__datadog_patch', False)
for f_name in _KV_FUNCS:
_u(consul.Consul.KV, f_name)
def wrap_function(name):
def trace_func(wrapped, instance, args, kwargs):
pin = Pin.get_from(instance)
if not pin or not pin.enabled():
return wrapped(*args, **kwargs)
# Only patch the synchronous implementation
if not isinstance(instance.agent.http, consul.std.HTTPClient):
return wrapped(*args, **kwargs)
path = kwargs.get('key') or args[0]
resource = name.upper()
with pin.tracer.trace(consulx.CMD, service=pin.service, resource=resource) as span:
rate = config.consul.get_analytics_sample_rate()
if rate is not None:
span.set_tag(ANALYTICS_SAMPLE_RATE_KEY, rate)
span.set_tag(consulx.KEY, path)
span.set_tag(consulx.CMD, resource)
return wrapped(*args, **kwargs)
return trace_func

View File

@ -1,48 +0,0 @@
"""
Instrument dogpile.cache__ to report all cached lookups.
This will add spans around the calls to your cache backend (eg. redis, memory,
etc). The spans will also include the following tags:
- key/keys: The key(s) dogpile passed to your backend. Note that this will be
the output of the region's ``function_key_generator``, but before any key
mangling is applied (ie. the region's ``key_mangler``).
- region: Name of the region.
- backend: Name of the backend class.
- hit: If the key was found in the cache.
- expired: If the key is expired. This is only relevant if the key was found.
While cache tracing will generally already have keys in tags, some caching
setups will not have useful tag values; for example, with consistent hashing
in memcached the key(s) will appear as a mangled hash.
::
# Patch before importing dogpile.cache
from ddtrace import patch
patch(dogpile_cache=True)
from dogpile.cache import make_region
region = make_region().configure(
"dogpile.cache.pylibmc",
expiration_time=3600,
arguments={"url": ["127.0.0.1"]},
)
@region.cache_on_arguments()
def hello(name):
# Some complicated, slow calculation
return "Hello, {}".format(name)
.. __: https://dogpilecache.sqlalchemy.org/
"""
from ...utils.importlib import require_modules
required_modules = ['dogpile.cache']
with require_modules(required_modules) as missing_modules:
if not missing_modules:
from .patch import patch, unpatch
__all__ = ['patch', 'unpatch']

View File

@ -1,37 +0,0 @@
import dogpile
from ...pin import Pin
from ...utils.formats import asbool
def _wrap_lock_ctor(func, instance, args, kwargs):
"""
This seems rather odd. But to track hits, we need to patch the wrapped function that
dogpile passes to the region and locks. Unfortunately it's a closure defined inside
the get_or_create* methods themselves, so we can't easily patch those.
"""
func(*args, **kwargs)
ori_backend_fetcher = instance.value_and_created_fn
def wrapped_backend_fetcher():
pin = Pin.get_from(dogpile.cache)
if not pin or not pin.enabled():
return ori_backend_fetcher()
hit = False
expired = True
try:
value, createdtime = ori_backend_fetcher()
hit = value is not dogpile.cache.api.NO_VALUE
# dogpile sometimes returns None, but only checks for truthiness. Coalesce
# to minimize APM users' confusion.
expired = instance._is_expired(createdtime) or False
return value, createdtime
finally:
# Keys are checked in random order so the 'final' answer for partial hits
# should really be false (ie. if any are 'negative', then the tag value
# should be). This means ANDing all hit values and ORing all expired values.
span = pin.tracer.current_span()
span.set_tag('hit', asbool(span.get_tag('hit') or 'True') and hit)
span.set_tag('expired', asbool(span.get_tag('expired') or 'False') or expired)
instance.value_and_created_fn = wrapped_backend_fetcher

View File

@ -1,37 +0,0 @@
import dogpile
from ddtrace.pin import Pin, _DD_PIN_NAME, _DD_PIN_PROXY_NAME
from ddtrace.vendor.wrapt import wrap_function_wrapper as _w
from .lock import _wrap_lock_ctor
from .region import _wrap_get_create, _wrap_get_create_multi
_get_or_create = dogpile.cache.region.CacheRegion.get_or_create
_get_or_create_multi = dogpile.cache.region.CacheRegion.get_or_create_multi
_lock_ctor = dogpile.lock.Lock.__init__
def patch():
if getattr(dogpile.cache, '_datadog_patch', False):
return
setattr(dogpile.cache, '_datadog_patch', True)
_w('dogpile.cache.region', 'CacheRegion.get_or_create', _wrap_get_create)
_w('dogpile.cache.region', 'CacheRegion.get_or_create_multi', _wrap_get_create_multi)
_w('dogpile.lock', 'Lock.__init__', _wrap_lock_ctor)
Pin(app='dogpile.cache', service='dogpile.cache').onto(dogpile.cache)
def unpatch():
if not getattr(dogpile.cache, '_datadog_patch', False):
return
setattr(dogpile.cache, '_datadog_patch', False)
# This looks silly but the unwrap util doesn't support class instance methods, even
# though wrapt does. This was causing the patches to stack on top of each other
# during testing.
dogpile.cache.region.CacheRegion.get_or_create = _get_or_create
dogpile.cache.region.CacheRegion.get_or_create_multi = _get_or_create_multi
dogpile.lock.Lock.__init__ = _lock_ctor
setattr(dogpile.cache, _DD_PIN_NAME, None)
setattr(dogpile.cache, _DD_PIN_PROXY_NAME, None)

View File

@ -1,29 +0,0 @@
import dogpile
from ...pin import Pin
def _wrap_get_create(func, instance, args, kwargs):
pin = Pin.get_from(dogpile.cache)
if not pin or not pin.enabled():
return func(*args, **kwargs)
key = args[0]
with pin.tracer.trace('dogpile.cache', resource='get_or_create', span_type='cache') as span:
span.set_tag('key', key)
span.set_tag('region', instance.name)
span.set_tag('backend', instance.actual_backend.__class__.__name__)
return func(*args, **kwargs)
def _wrap_get_create_multi(func, instance, args, kwargs):
pin = Pin.get_from(dogpile.cache)
if not pin or not pin.enabled():
return func(*args, **kwargs)
keys = args[0]
with pin.tracer.trace('dogpile.cache', resource='get_or_create_multi', span_type='cache') as span:
span.set_tag('keys', keys)
span.set_tag('region', instance.name)
span.set_tag('backend', instance.actual_backend.__class__.__name__)
return func(*args, **kwargs)

View File

@ -1,59 +0,0 @@
"""
To trace the falcon web framework, install the trace middleware::
import falcon
from ddtrace import tracer
from ddtrace.contrib.falcon import TraceMiddleware
mw = TraceMiddleware(tracer, 'my-falcon-app')
falcon.API(middleware=[mw])
You can also use the autopatching functionality::
import falcon
from ddtrace import tracer, patch
patch(falcon=True)
app = falcon.API()
To disable distributed tracing when using autopatching, set the
``DATADOG_FALCON_DISTRIBUTED_TRACING`` environment variable to ``False``.
To enable generating APM events for Trace Search & Analytics, set the
``DD_FALCON_ANALYTICS_ENABLED`` environment variable to ``True``.
**Supported span hooks**
The following is a list of available tracer hooks that can be used to intercept
and modify spans created by this integration.
- ``request``
- Called before the response has been finished
- ``def on_falcon_request(span, request, response)``
Example::
import falcon
from ddtrace import config, patch_all
patch_all()
app = falcon.API()
@config.falcon.hooks.on('request')
def on_falcon_request(span, request, response):
span.set_tag('my.custom', 'tag')
:ref:`Headers tracing <http-headers-tracing>` is supported for this integration.
"""
from ...utils.importlib import require_modules
required_modules = ['falcon']
with require_modules(required_modules) as missing_modules:
if not missing_modules:
from .middleware import TraceMiddleware
from .patch import patch
__all__ = ['TraceMiddleware', 'patch']

View File

@ -1,116 +0,0 @@
import sys
from ddtrace.ext import SpanTypes, http as httpx
from ddtrace.http import store_request_headers, store_response_headers
from ddtrace.propagation.http import HTTPPropagator
from ...compat import iteritems
from ...constants import ANALYTICS_SAMPLE_RATE_KEY
from ...settings import config
class TraceMiddleware(object):
def __init__(self, tracer, service='falcon', distributed_tracing=True):
# store tracing references
self.tracer = tracer
self.service = service
self._distributed_tracing = distributed_tracing
def process_request(self, req, resp):
if self._distributed_tracing:
# Falcon uppercases all header names.
headers = dict((k.lower(), v) for k, v in iteritems(req.headers))
propagator = HTTPPropagator()
context = propagator.extract(headers)
# Only activate the new context if there was a trace id extracted
if context.trace_id:
self.tracer.context_provider.activate(context)
span = self.tracer.trace(
'falcon.request',
service=self.service,
span_type=SpanTypes.WEB,
)
# set analytics sample rate with global config enabled
span.set_tag(
ANALYTICS_SAMPLE_RATE_KEY,
config.falcon.get_analytics_sample_rate(use_global_config=True)
)
span.set_tag(httpx.METHOD, req.method)
span.set_tag(httpx.URL, req.url)
if config.falcon.trace_query_string:
span.set_tag(httpx.QUERY_STRING, req.query_string)
# Note: any request header set after this line will not be stored in the span
store_request_headers(req.headers, span, config.falcon)
def process_resource(self, req, resp, resource, params):
span = self.tracer.current_span()
if not span:
return # unexpected
span.resource = '%s %s' % (req.method, _name(resource))
def process_response(self, req, resp, resource, req_succeeded=None):
# req_succeeded is not a kwarg in the API, but we need it to support
# Falcon 1.0, which doesn't provide this argument
span = self.tracer.current_span()
if not span:
return # unexpected
status = httpx.normalize_status_code(resp.status)
# Note: any response header set after this line will not be stored in the span
store_response_headers(resp._headers, span, config.falcon)
# FIXME[matt] falcon does not map errors or unmatched routes
# to proper status codes, so we have to try to infer them
# here. See https://github.com/falconry/falcon/issues/606
if resource is None:
status = '404'
span.resource = '%s 404' % req.method
span.set_tag(httpx.STATUS_CODE, status)
span.finish()
return
err_type = sys.exc_info()[0]
if err_type is not None:
if req_succeeded is None:
# backward-compatibility with Falcon 1.0; any version
# greater than 1.0 has req_succeeded in [True, False]
# TODO[manu]: drop the support at some point
status = _detect_and_set_status_error(err_type, span)
elif req_succeeded is False:
# Falcon 1.1+ provides that argument that is set to False
# if we get an Exception (404 is still an exception)
status = _detect_and_set_status_error(err_type, span)
span.set_tag(httpx.STATUS_CODE, status)
# Emit span hook for this response
# DEV: Emit before closing so they can overwrite `span.resource` if they want
config.falcon.hooks._emit('request', span, req, resp)
# Close the span
span.finish()
def _is_404(err_type):
return 'HTTPNotFound' in err_type.__name__
def _detect_and_set_status_error(err_type, span):
"""Detect the HTTP status code from the current stacktrace and
set the traceback to the given Span
"""
if not _is_404(err_type):
span.set_traceback()
return '500'
elif _is_404(err_type):
return '404'
def _name(r):
return '%s.%s' % (r.__module__, r.__class__.__name__)

View File

@ -1,31 +0,0 @@
import os
from ddtrace.vendor import wrapt
import falcon
from ddtrace import tracer
from .middleware import TraceMiddleware
from ...utils.formats import asbool, get_env
def patch():
"""
Patch falcon.API to include contrib.falcon.TraceMiddleware
by default
"""
if getattr(falcon, '_datadog_patch', False):
return
setattr(falcon, '_datadog_patch', True)
wrapt.wrap_function_wrapper('falcon', 'API.__init__', traced_init)
def traced_init(wrapped, instance, args, kwargs):
mw = kwargs.pop('middleware', [])
service = os.environ.get('DATADOG_SERVICE_NAME') or 'falcon'
distributed_tracing = asbool(get_env('falcon', 'distributed_tracing', True))
mw.insert(0, TraceMiddleware(tracer, service, distributed_tracing))
kwargs['middleware'] = mw
wrapped(*args, **kwargs)

View File

@ -1,44 +0,0 @@
"""
The flask cache tracer will track any access to a cache backend.
You can use this tracer together with the Flask tracer middleware.
To install the tracer, ``from ddtrace import tracer`` needs to be added::
from ddtrace import tracer
from ddtrace.contrib.flask_cache import get_traced_cache
and the tracer needs to be initialized::
Cache = get_traced_cache(tracer, service='my-flask-cache-app')
Here is the end result, in a sample app::
from flask import Flask
from ddtrace import tracer
from ddtrace.contrib.flask_cache import get_traced_cache
app = Flask(__name__)
# get the traced Cache class
Cache = get_traced_cache(tracer, service='my-flask-cache-app')
# use the Cache as usual with your preferred CACHE_TYPE
cache = Cache(app, config={'CACHE_TYPE': 'simple'})
def counter():
# this access is traced
conn_counter = cache.get("conn_counter")
"""
from ...utils.importlib import require_modules
required_modules = ['flask_cache']
with require_modules(required_modules) as missing_modules:
if not missing_modules:
from .tracers import get_traced_cache
__all__ = ['get_traced_cache']

View File

@ -1,146 +0,0 @@
"""
Datadog trace code for flask_cache
"""
# stdlib
import logging
# project
from .utils import _extract_conn_tags, _resource_from_cache_prefix
from ...constants import ANALYTICS_SAMPLE_RATE_KEY
from ...ext import SpanTypes
from ...settings import config
# 3rd party
from flask.ext.cache import Cache
log = logging.Logger(__name__)
DEFAULT_SERVICE = 'flask-cache'
# standard tags
COMMAND_KEY = 'flask_cache.key'
CACHE_BACKEND = 'flask_cache.backend'
CONTACT_POINTS = 'flask_cache.contact_points'
def get_traced_cache(ddtracer, service=DEFAULT_SERVICE, meta=None):
"""
Return a traced Cache object that behaves exactly as the ``flask.ext.cache.Cache`` class
"""
class TracedCache(Cache):
"""
Traced cache backend that monitors any operations done by flask_cache. Observed actions are:
* get, set, add, delete, clear
* all ``many_`` operations
"""
_datadog_tracer = ddtracer
_datadog_service = service
_datadog_meta = meta
def __trace(self, cmd):
"""
Start a trace with default attributes and tags
"""
# create a new span
s = self._datadog_tracer.trace(
cmd,
span_type=SpanTypes.CACHE,
service=self._datadog_service
)
# set span tags
s.set_tag(CACHE_BACKEND, self.config.get('CACHE_TYPE'))
s.set_tags(self._datadog_meta)
# set analytics sample rate
s.set_tag(
ANALYTICS_SAMPLE_RATE_KEY,
config.flask_cache.get_analytics_sample_rate()
)
# add connection meta if there is one
if getattr(self.cache, '_client', None):
try:
s.set_tags(_extract_conn_tags(self.cache._client))
except Exception:
log.debug('error parsing connection tags', exc_info=True)
return s
def get(self, *args, **kwargs):
"""
Track ``get`` operation
"""
with self.__trace('flask_cache.cmd') as span:
span.resource = _resource_from_cache_prefix('GET', self.config)
if len(args) > 0:
span.set_tag(COMMAND_KEY, args[0])
return super(TracedCache, self).get(*args, **kwargs)
def set(self, *args, **kwargs):
"""
Track ``set`` operation
"""
with self.__trace('flask_cache.cmd') as span:
span.resource = _resource_from_cache_prefix('SET', self.config)
if len(args) > 0:
span.set_tag(COMMAND_KEY, args[0])
return super(TracedCache, self).set(*args, **kwargs)
def add(self, *args, **kwargs):
"""
Track ``add`` operation
"""
with self.__trace('flask_cache.cmd') as span:
span.resource = _resource_from_cache_prefix('ADD', self.config)
if len(args) > 0:
span.set_tag(COMMAND_KEY, args[0])
return super(TracedCache, self).add(*args, **kwargs)
def delete(self, *args, **kwargs):
"""
Track ``delete`` operation
"""
with self.__trace('flask_cache.cmd') as span:
span.resource = _resource_from_cache_prefix('DELETE', self.config)
if len(args) > 0:
span.set_tag(COMMAND_KEY, args[0])
return super(TracedCache, self).delete(*args, **kwargs)
def delete_many(self, *args, **kwargs):
"""
Track ``delete_many`` operation
"""
with self.__trace('flask_cache.cmd') as span:
span.resource = _resource_from_cache_prefix('DELETE_MANY', self.config)
span.set_tag(COMMAND_KEY, list(args))
return super(TracedCache, self).delete_many(*args, **kwargs)
def clear(self, *args, **kwargs):
"""
Track ``clear`` operation
"""
with self.__trace('flask_cache.cmd') as span:
span.resource = _resource_from_cache_prefix('CLEAR', self.config)
return super(TracedCache, self).clear(*args, **kwargs)
def get_many(self, *args, **kwargs):
"""
Track ``get_many`` operation
"""
with self.__trace('flask_cache.cmd') as span:
span.resource = _resource_from_cache_prefix('GET_MANY', self.config)
span.set_tag(COMMAND_KEY, list(args))
return super(TracedCache, self).get_many(*args, **kwargs)
def set_many(self, *args, **kwargs):
"""
Track ``set_many`` operation
"""
with self.__trace('flask_cache.cmd') as span:
span.resource = _resource_from_cache_prefix('SET_MANY', self.config)
if len(args) > 0:
span.set_tag(COMMAND_KEY, list(args[0].keys()))
return super(TracedCache, self).set_many(*args, **kwargs)
return TracedCache

View File

@ -1,46 +0,0 @@
# project
from ...ext import net
from ..redis.util import _extract_conn_tags as extract_redis_tags
from ..pylibmc.addrs import parse_addresses
def _resource_from_cache_prefix(resource, cache):
"""
Combine the resource name with the cache prefix (if any)
"""
if getattr(cache, 'key_prefix', None):
name = '{} {}'.format(resource, cache.key_prefix)
else:
name = resource
# enforce lowercase to make the output nicer to read
return name.lower()
def _extract_conn_tags(client):
"""
For the given client extracts connection tags
"""
tags = {}
if hasattr(client, 'servers'):
# Memcached backend supports an address pool
if isinstance(client.servers, list) and len(client.servers) > 0:
# use the first address of the pool as a host because
# the code doesn't expose more information
contact_point = client.servers[0].address
tags[net.TARGET_HOST] = contact_point[0]
tags[net.TARGET_PORT] = contact_point[1]
elif hasattr(client, 'connection_pool'):
# Redis main connection
redis_tags = extract_redis_tags(client.connection_pool.connection_kwargs)
tags.update(**redis_tags)
elif hasattr(client, 'addresses'):
# pylibmc
# FIXME[matt] should we memoize this?
addrs = parse_addresses(client.addresses)
if addrs:
_, host, port, _ = addrs[0]
tags[net.TARGET_PORT] = port
tags[net.TARGET_HOST] = host
return tags

View File

@ -1,29 +0,0 @@
"""
The ``futures`` integration propagates the current active Tracing Context
between threads. The integration ensures that when operations are executed
in a new thread, that thread can continue the previously generated trace.
The integration doesn't automatically trace thread execution, so manual
instrumentation or another integration must be activated. Thread propagation
is not enabled by default with the `patch_all()` method and must be activated
as follows::
from ddtrace import patch, patch_all
patch(futures=True)
# or, when instrumenting all libraries
patch_all(futures=True)
"""
from ...utils.importlib import require_modules
required_modules = ['concurrent.futures']
with require_modules(required_modules) as missing_modules:
if not missing_modules:
from .patch import patch, unpatch
__all__ = [
'patch',
'unpatch',
]
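A minimal sketch (span names assumed) of the propagation described above: after ``patch(futures=True)``, work submitted to a ``ThreadPoolExecutor`` continues the trace started in the submitting thread:

```python
# Sketch only; span names are illustrative.
from concurrent.futures import ThreadPoolExecutor

from ddtrace import patch, tracer

patch(futures=True)

def work():
    # parented to 'example.request' thanks to the propagated Context
    with tracer.trace('example.work'):
        return 'done'

with tracer.trace('example.request'):
    with ThreadPoolExecutor(max_workers=1) as executor:
        result = executor.submit(work).result()
```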

View File

@ -1,24 +0,0 @@
from concurrent import futures
from ddtrace.vendor.wrapt import wrap_function_wrapper as _w
from .threading import _wrap_submit
from ...utils.wrappers import unwrap as _u
def patch():
"""Enables Context Propagation between threads"""
if getattr(futures, '__datadog_patch', False):
return
setattr(futures, '__datadog_patch', True)
_w('concurrent.futures', 'ThreadPoolExecutor.submit', _wrap_submit)
def unpatch():
"""Disables Context Propagation between threads"""
if not getattr(futures, '__datadog_patch', False):
return
setattr(futures, '__datadog_patch', False)
_u(futures.ThreadPoolExecutor, 'submit')

View File

@ -1,46 +0,0 @@
import ddtrace
def _wrap_submit(func, instance, args, kwargs):
"""
Wrap the `Executor` method used to submit work executed in another
thread. This wrapper ensures that a new `Context` is created and
properly propagated using an intermediate function.
"""
# If there isn't a currently active context, then do not create one
# DEV: Calling `.active()` when there isn't an active context will create a new context
# DEV: We need to do this in case they are either:
# - Starting nested futures
# - Starting futures from outside of an existing context
#
# In either of these cases we essentially will propagate the wrong context between futures
#
# The resolution is to not create/propagate a new context if one does not exist, but let the
# future's thread create the context instead.
current_ctx = None
if ddtrace.tracer.context_provider._has_active_context():
current_ctx = ddtrace.tracer.context_provider.active()
# If we have a context then make sure we clone it
# DEV: We don't know if the future will finish executing before the parent span finishes
# so we clone to ensure we properly collect/report the future's spans
current_ctx = current_ctx.clone()
# extract the target function that must be executed in
# a new thread and the `target` arguments
fn = args[0]
fn_args = args[1:]
return func(_wrap_execution, current_ctx, fn, fn_args, kwargs)
def _wrap_execution(ctx, fn, args, kwargs):
"""
Intermediate target function that is executed in a new thread;
it receives the original function with arguments and keyword
arguments, including our tracing `Context`. The current context
provider sets the Active context in a thread local storage
variable because it's outside the asynchronous loop.
"""
if ctx is not None:
ddtrace.tracer.context_provider.activate(ctx)
return fn(*args, **kwargs)

View File

@ -1,48 +0,0 @@
"""
To trace a request in a ``gevent`` environment, configure the tracer to use the greenlet
context provider, rather than the default one that relies on thread-local storage.
This allows the tracer to pick up a transaction exactly where it left off as greenlets
yield the context to another one.
The simplest way to trace a ``gevent`` application is to configure the tracer and
patch ``gevent`` **before importing** the library::
# patch before importing gevent
from ddtrace import patch, tracer
patch(gevent=True)
# use gevent as usual with or without the monkey module
from gevent import monkey; monkey.patch_thread()
def my_parent_function():
with tracer.trace("web.request") as span:
span.service = "web"
gevent.spawn(worker_function)
def worker_function():
# then trace its child
with tracer.trace("greenlet.call") as span:
span.service = "greenlet"
...
with tracer.trace("greenlet.child_call") as child:
...
"""
from ...utils.importlib import require_modules
required_modules = ['gevent']
with require_modules(required_modules) as missing_modules:
if not missing_modules:
from .provider import GeventContextProvider
from .patch import patch, unpatch
context_provider = GeventContextProvider()
__all__ = [
'patch',
'unpatch',
'context_provider',
]

View File

@ -1,58 +0,0 @@
import gevent
import gevent.pool as gpool
from .provider import CONTEXT_ATTR
GEVENT_VERSION = gevent.version_info[0:3]
class TracingMixin(object):
def __init__(self, *args, **kwargs):
# get the current Context if available
current_g = gevent.getcurrent()
ctx = getattr(current_g, CONTEXT_ATTR, None)
# create the Greenlet as usual
super(TracingMixin, self).__init__(*args, **kwargs)
# the context is always available, except for the main greenlet
if ctx:
# create a new context that inherits the current active span
new_ctx = ctx.clone()
setattr(self, CONTEXT_ATTR, new_ctx)
class TracedGreenlet(TracingMixin, gevent.Greenlet):
"""
``Greenlet`` class that is used to replace the original ``gevent``
class. This class handles the ``Context`` replacement, so
that any greenlet inherits the context from the parent Greenlet.
When a new greenlet is spawned from the main greenlet, a new instance
of ``Context`` is created. The main greenlet is not affected by this behavior.
There is no need to subclass this class to create or optimize greenlet
instances, because this class replaces ``gevent.greenlet.Greenlet``
through the ``patch()`` method. After the patch, extending the gevent
``Greenlet`` class automatically means extending ``TracedGreenlet``.
"""
def __init__(self, *args, **kwargs):
super(TracedGreenlet, self).__init__(*args, **kwargs)
class TracedIMapUnordered(TracingMixin, gpool.IMapUnordered):
def __init__(self, *args, **kwargs):
super(TracedIMapUnordered, self).__init__(*args, **kwargs)
if GEVENT_VERSION >= (1, 3) or GEVENT_VERSION < (1, 1):
# For gevent <1.1 and >=1.3, IMap is its own class, so we derive
# from TracingMixin
class TracedIMap(TracingMixin, gpool.IMap):
def __init__(self, *args, **kwargs):
super(TracedIMap, self).__init__(*args, **kwargs)
else:
# For gevent >=1.1 and <1.3, IMap derives from IMapUnordered, so we derive
# from TracedIMapUnordered and get tracing that way
class TracedIMap(gpool.IMap, TracedIMapUnordered):
def __init__(self, *args, **kwargs):
super(TracedIMap, self).__init__(*args, **kwargs)

View File

@ -1,63 +0,0 @@
import gevent
import gevent.pool
import ddtrace
from .greenlet import TracedGreenlet, TracedIMap, TracedIMapUnordered, GEVENT_VERSION
from .provider import GeventContextProvider
from ...provider import DefaultContextProvider
__Greenlet = gevent.Greenlet
__IMap = gevent.pool.IMap
__IMapUnordered = gevent.pool.IMapUnordered
def patch():
"""
Patch the gevent module so that all references to the
internal ``Greenlet`` class point to the ``TracedGreenlet``
class.
This action ensures that if a user extends the ``Greenlet``
class, the ``TracedGreenlet`` is used as a parent class.
"""
_replace(TracedGreenlet, TracedIMap, TracedIMapUnordered)
ddtrace.tracer.configure(context_provider=GeventContextProvider())
def unpatch():
"""
Restore the original ``Greenlet``. This function must be invoked
before executing application code, otherwise the ``TracedGreenlet``
class may be used during initialization.
"""
_replace(__Greenlet, __IMap, __IMapUnordered)
ddtrace.tracer.configure(context_provider=DefaultContextProvider())
def _replace(g_class, imap_class, imap_unordered_class):
"""
Utility function that replaces the gevent Greenlet class with the given one.
"""
# replace the original Greenlet classes with the new one
gevent.greenlet.Greenlet = g_class
if GEVENT_VERSION >= (1, 3):
# For gevent >= 1.3.0, IMap and IMapUnordered were pulled out of
# gevent.pool and into gevent._imap
gevent._imap.IMap = imap_class
gevent._imap.IMapUnordered = imap_unordered_class
gevent.pool.IMap = gevent._imap.IMap
gevent.pool.IMapUnordered = gevent._imap.IMapUnordered
gevent.pool.Greenlet = gevent.greenlet.Greenlet
else:
# For gevent < 1.3, only patching of gevent.pool classes necessary
gevent.pool.IMap = imap_class
gevent.pool.IMapUnordered = imap_unordered_class
gevent.pool.Group.greenlet_class = g_class
# replace gevent shortcuts
gevent.Greenlet = gevent.greenlet.Greenlet
gevent.spawn = gevent.greenlet.Greenlet.spawn
gevent.spawn_later = gevent.greenlet.Greenlet.spawn_later

View File

@ -1,55 +0,0 @@
import gevent
from ...context import Context
from ...provider import BaseContextProvider
# Greenlet attribute used to set/get the Context instance
CONTEXT_ATTR = '__datadog_context'
class GeventContextProvider(BaseContextProvider):
"""
Context provider that retrieves all contexts for the current asynchronous
execution. It must be used in asynchronous programming that relies
on the ``gevent`` library. Framework instrumentation that uses the
gevent WSGI server (or gevent in general) can use this provider.
"""
def _get_current_context(self):
"""Helper to get the current context from the current greenlet"""
current_g = gevent.getcurrent()
if current_g is not None:
return getattr(current_g, CONTEXT_ATTR, None)
return None
def _has_active_context(self):
"""Helper to determine if we have a currently active context"""
return self._get_current_context() is not None
def activate(self, context):
"""Sets the scoped ``Context`` for the current running ``Greenlet``.
"""
current_g = gevent.getcurrent()
if current_g is not None:
setattr(current_g, CONTEXT_ATTR, context)
return context
def active(self):
"""
Returns the scoped ``Context`` for this execution flow. The ``Context``
uses the ``Greenlet`` class as a carrier, and every time a greenlet
is created it receives the "parent" context.
"""
ctx = self._get_current_context()
if ctx is not None:
# return the active Context for this greenlet (if any)
return ctx
# the Greenlet doesn't have a Context so it's created and attached
# even to the main greenlet. This is required in Distributed Tracing
# when a new arbitrary Context is provided.
current_g = gevent.getcurrent()
if current_g:
ctx = Context()
setattr(current_g, CONTEXT_ATTR, ctx)
return ctx
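A hedged sketch of wiring this provider up by hand (``patch(gevent=True)`` already does this; shown only to illustrate how the provider plugs into the tracer):

import ddtrace
from ddtrace.contrib.gevent import context_provider

# make the global tracer store and retrieve its Context on the current greenlet
ddtrace.tracer.configure(context_provider=context_provider)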

View File

@ -1,33 +0,0 @@
"""
Patch the built-in ``httplib``/``http.client`` libraries to trace all HTTP calls.
Usage::
# Patch all supported modules/functions
from ddtrace import patch
patch(httplib=True)
# Python 2
import httplib
import urllib
resp = urllib.urlopen('http://www.datadog.com/')
# Python 3
import http.client
import urllib.request
resp = urllib.request.urlopen('http://www.datadog.com/')
``httplib`` spans do not include a default service name. Before HTTP calls are
made, ensure a parent span has been started with a service name to be used for
spans generated from those calls::
with tracer.trace('main', service='my-httplib-operation'):
resp = urllib.request.urlopen('http://www.datadog.com/')
:ref:`Headers tracing <http-headers-tracing>` is supported for this integration.
"""
from .patch import patch, unpatch
__all__ = ['patch', 'unpatch']

View File

@ -1,138 +0,0 @@
# Third party
from ddtrace.vendor import wrapt
# Project
from ...compat import PY2, httplib, parse
from ...constants import ANALYTICS_SAMPLE_RATE_KEY
from ...ext import SpanTypes, http as ext_http
from ...http import store_request_headers, store_response_headers
from ...internal.logger import get_logger
from ...pin import Pin
from ...settings import config
from ...utils.wrappers import unwrap as _u
span_name = 'httplib.request' if PY2 else 'http.client.request'
log = get_logger(__name__)
def _wrap_init(func, instance, args, kwargs):
Pin(app='httplib', service=None).onto(instance)
return func(*args, **kwargs)
def _wrap_getresponse(func, instance, args, kwargs):
# Use any attached tracer if available, otherwise use the global tracer
pin = Pin.get_from(instance)
if not pin or not pin.enabled():
return func(*args, **kwargs)
resp = None
try:
resp = func(*args, **kwargs)
return resp
finally:
try:
# Get the span attached to this instance, if available
span = getattr(instance, '_datadog_span', None)
if span:
if resp:
span.set_tag(ext_http.STATUS_CODE, resp.status)
span.error = int(500 <= resp.status)
store_response_headers(dict(resp.getheaders()), span, config.httplib)
span.finish()
delattr(instance, '_datadog_span')
except Exception:
log.debug('error applying request tags', exc_info=True)
def _wrap_putrequest(func, instance, args, kwargs):
# Use any attached tracer if available, otherwise use the global tracer
pin = Pin.get_from(instance)
if should_skip_request(pin, instance):
return func(*args, **kwargs)
try:
# Create a new span and attach to this instance (so we can retrieve/update/close later on the response)
span = pin.tracer.trace(span_name, span_type=SpanTypes.HTTP)
setattr(instance, '_datadog_span', span)
method, path = args[:2]
scheme = 'https' if isinstance(instance, httplib.HTTPSConnection) else 'http'
port = ':{port}'.format(port=instance.port)
if (scheme == 'http' and instance.port == 80) or (scheme == 'https' and instance.port == 443):
port = ''
url = '{scheme}://{host}{port}{path}'.format(scheme=scheme, host=instance.host, port=port, path=path)
# sanitize url
parsed = parse.urlparse(url)
sanitized_url = parse.urlunparse((
parsed.scheme,
parsed.netloc,
parsed.path,
parsed.params,
None, # drop query
parsed.fragment
))
span.set_tag(ext_http.URL, sanitized_url)
span.set_tag(ext_http.METHOD, method)
if config.httplib.trace_query_string:
span.set_tag(ext_http.QUERY_STRING, parsed.query)
# set analytics sample rate
span.set_tag(
ANALYTICS_SAMPLE_RATE_KEY,
config.httplib.get_analytics_sample_rate()
)
except Exception:
log.debug('error applying request tags', exc_info=True)
return func(*args, **kwargs)
def _wrap_putheader(func, instance, args, kwargs):
span = getattr(instance, '_datadog_span', None)
if span:
store_request_headers({args[0]: args[1]}, span, config.httplib)
return func(*args, **kwargs)
def should_skip_request(pin, request):
"""Helper to determine if the provided request should be traced"""
if not pin or not pin.enabled():
return True
api = pin.tracer.writer.api
return request.host == api.hostname and request.port == api.port
def patch():
""" patch the built-in urllib/httplib/httplib.client methods for tracing"""
if getattr(httplib, '__datadog_patch', False):
return
setattr(httplib, '__datadog_patch', True)
# Patch the desired methods
setattr(httplib.HTTPConnection, '__init__',
wrapt.FunctionWrapper(httplib.HTTPConnection.__init__, _wrap_init))
setattr(httplib.HTTPConnection, 'getresponse',
wrapt.FunctionWrapper(httplib.HTTPConnection.getresponse, _wrap_getresponse))
setattr(httplib.HTTPConnection, 'putrequest',
wrapt.FunctionWrapper(httplib.HTTPConnection.putrequest, _wrap_putrequest))
setattr(httplib.HTTPConnection, 'putheader',
wrapt.FunctionWrapper(httplib.HTTPConnection.putheader, _wrap_putheader))
def unpatch():
""" unpatch any previously patched modules """
if not getattr(httplib, '__datadog_patch', False):
return
setattr(httplib, '__datadog_patch', False)
_u(httplib.HTTPConnection, '__init__')
_u(httplib.HTTPConnection, 'getresponse')
_u(httplib.HTTPConnection, 'putrequest')
_u(httplib.HTTPConnection, 'putheader')
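A minimal Python 3 sketch tying the pieces above together; the parent span supplies the service name and the response status ends up on the ``http.client.request`` span (URL taken from the package docstring, purely illustrative):

import urllib.request

from ddtrace import patch, tracer

patch(httplib=True)

with tracer.trace('main', service='my-httplib-operation'):
    resp = urllib.request.urlopen('http://www.datadog.com/')
    print(resp.status)  # also recorded as the span's status code tag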

View File

@ -1,43 +0,0 @@
"""Instrument kombu to report AMQP messaging.
``patch_all`` will not automatically patch your Kombu client to make it work, as this would conflict with the
Celery integration. You must specifically request kombu be patched, as in the example below.
Note: To permit distributed tracing for the kombu integration you must enable the tracer with priority
sampling. Refer to the documentation here:
http://pypi.datadoghq.com/trace/docs/advanced_usage.html#priority-sampling
Without enabling distributed tracing, spans within a trace generated by the kombu integration might be dropped
without the whole trace being dropped.
::
from ddtrace import Pin, patch
import kombu
# If not patched yet, you can patch kombu specifically
patch(kombu=True)
# This will report a span with the default settings
conn = kombu.Connection("amqp://guest:guest@127.0.0.1:5672//")
conn.connect()
task_queue = kombu.Queue('tasks', kombu.Exchange('tasks'), routing_key='tasks')
to_publish = {'hello': 'world'}
producer = conn.Producer()
producer.publish(to_publish,
exchange=task_queue.exchange,
routing_key=task_queue.routing_key,
declare=[task_queue])
# Use a pin to specify metadata related to this client
Pin.override(producer, service='kombu-consumer')
"""
from ...utils.importlib import require_modules
required_modules = ['kombu', 'kombu.messaging']
with require_modules(required_modules) as missing_modules:
if not missing_modules:
from .patch import patch
__all__ = ['patch']
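Since the note above calls for priority sampling, a hedged sketch of enabling it before patching (the ``priority_sampling`` argument name follows the ddtrace docs linked above and should be treated as an assumption):

from ddtrace import patch, tracer

# keep the propagated sampling decision so consumer spans are not dropped
tracer.configure(priority_sampling=True)
patch(kombu=True)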

View File

@ -1 +0,0 @@
DEFAULT_SERVICE = 'kombu'

View File

@ -1,118 +0,0 @@
# 3p
import kombu
from ddtrace.vendor import wrapt
# project
from ...constants import ANALYTICS_SAMPLE_RATE_KEY
from ...ext import SpanTypes, kombu as kombux
from ...pin import Pin
from ...propagation.http import HTTPPropagator
from ...settings import config
from ...utils.formats import get_env
from ...utils.wrappers import unwrap
from .constants import DEFAULT_SERVICE
from .utils import (
get_exchange_from_args,
get_body_length_from_args,
get_routing_key_from_args,
extract_conn_tags,
HEADER_POS
)
# kombu default settings
config._add('kombu', {
'service_name': get_env('kombu', 'service_name', DEFAULT_SERVICE)
})
propagator = HTTPPropagator()
def patch():
"""Patch the instrumented methods
This duplication doesn't look nice. A nicer alternative would be to use an ObjectProxy on top
of Kombu. However, it means that any "import kombu.Connection" won't be instrumented.
"""
if getattr(kombu, '_datadog_patch', False):
return
setattr(kombu, '_datadog_patch', True)
_w = wrapt.wrap_function_wrapper
# We wrap the _publish method because the publish method:
# * defines defaults in its kwargs
# * potentially overrides kwargs with values from self
# * extracts/normalizes things like exchange
_w('kombu', 'Producer._publish', traced_publish)
_w('kombu', 'Consumer.receive', traced_receive)
Pin(
service=config.kombu['service_name'],
app='kombu'
).onto(kombu.messaging.Producer)
Pin(
service=config.kombu['service_name'],
app='kombu'
).onto(kombu.messaging.Consumer)
def unpatch():
if getattr(kombu, '_datadog_patch', False):
setattr(kombu, '_datadog_patch', False)
unwrap(kombu.Producer, '_publish')
unwrap(kombu.Consumer, 'receive')
#
# tracing functions
#
def traced_receive(func, instance, args, kwargs):
pin = Pin.get_from(instance)
if not pin or not pin.enabled():
return func(*args, **kwargs)
# Signature only takes 2 args: (body, message)
message = args[1]
context = propagator.extract(message.headers)
# only need to activate the new context if something was propagated
if context.trace_id:
pin.tracer.context_provider.activate(context)
with pin.tracer.trace(kombux.RECEIVE_NAME, service=pin.service, span_type=SpanTypes.WORKER) as s:
# run the command
exchange = message.delivery_info['exchange']
s.resource = exchange
s.set_tag(kombux.EXCHANGE, exchange)
s.set_tags(extract_conn_tags(message.channel.connection))
s.set_tag(kombux.ROUTING_KEY, message.delivery_info['routing_key'])
# set analytics sample rate
s.set_tag(
ANALYTICS_SAMPLE_RATE_KEY,
config.kombu.get_analytics_sample_rate()
)
return func(*args, **kwargs)
def traced_publish(func, instance, args, kwargs):
pin = Pin.get_from(instance)
if not pin or not pin.enabled():
return func(*args, **kwargs)
with pin.tracer.trace(kombux.PUBLISH_NAME, service=pin.service, span_type=SpanTypes.WORKER) as s:
exchange_name = get_exchange_from_args(args)
s.resource = exchange_name
s.set_tag(kombux.EXCHANGE, exchange_name)
if pin.tags:
s.set_tags(pin.tags)
s.set_tag(kombux.ROUTING_KEY, get_routing_key_from_args(args))
s.set_tags(extract_conn_tags(instance.channel.connection))
s.set_metric(kombux.BODY_LEN, get_body_length_from_args(args))
# set analytics sample rate
s.set_tag(
ANALYTICS_SAMPLE_RATE_KEY,
config.kombu.get_analytics_sample_rate()
)
# run the command
propagator.inject(s.context, args[HEADER_POS])
return func(*args, **kwargs)
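For the consumer side handled by ``traced_receive`` above, a hedged sketch; the connection, queue, and exchange names mirror the producer example in the package docstring and are illustrative only:

import kombu

from ddtrace import patch

patch(kombu=True)

conn = kombu.Connection('amqp://guest:guest@127.0.0.1:5672//')
task_queue = kombu.Queue('tasks', kombu.Exchange('tasks'), routing_key='tasks')

def process(body, message):
    # runs inside the 'kombu.receive' span; any context propagated via the
    # message headers has already been activated by traced_receive
    print(body)
    message.ack()

with kombu.Consumer(conn, queues=[task_queue], callbacks=[process]):
    conn.drain_events(timeout=2)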

View File

@ -1,47 +0,0 @@
"""
Some utils used by the ddtrace kombu integration
"""
from ...ext import kombu as kombux, net
PUBLISH_BODY_IDX = 0
PUBLISH_ROUTING_KEY = 6
PUBLISH_EXCHANGE_IDX = 9
HEADER_POS = 4
def extract_conn_tags(connection):
""" Transform kombu conn info into dogtrace metas """
try:
host, port = connection.host.split(':')
return {
net.TARGET_HOST: host,
net.TARGET_PORT: port,
kombux.VHOST: connection.virtual_host,
}
except AttributeError:
# Unlikely that we don't have .host or .virtual_host but let's not die over it
return {}
def get_exchange_from_args(args):
"""Extract the exchange
The publish method extracts the name and hands that off to _publish (what we patch)
"""
return args[PUBLISH_EXCHANGE_IDX]
def get_routing_key_from_args(args):
"""Extract the routing key"""
name = args[PUBLISH_ROUTING_KEY]
return name
def get_body_length_from_args(args):
"""Extract the length of the body"""
length = len(args[PUBLISH_BODY_IDX])
return length

View File

@ -1,66 +0,0 @@
"""
Datadog APM traces can be integrated with Logs by first having the tracing
library patch the standard library ``logging`` module and updating the log
formatter used by an application. This feature enables you to inject the current
trace information into a log entry.
Before the trace information can be injected into logs, the formatter has to be
updated to include ``dd.trace_id`` and ``dd.span_id`` attributes from the log
record. The integration with Logs occurs as long as the log entry includes
``dd.trace_id=%(dd.trace_id)s`` and ``dd.span_id=%(dd.span_id)s``.
ddtrace-run
-----------
When using ``ddtrace-run``, enable patching by setting the environment variable
``DD_LOGS_INJECTION=true``. The logger by default will have a format that
includes trace information::
import logging
from ddtrace import tracer
log = logging.getLogger()
log.level = logging.INFO
@tracer.wrap()
def hello():
log.info('Hello, World!')
hello()
Manual Instrumentation
----------------------
If you prefer to instrument manually, patch the logging library then update the
log formatter as in the following example::
from ddtrace import patch_all; patch_all(logging=True)
import logging
from ddtrace import tracer
FORMAT = ('%(asctime)s %(levelname)s [%(name)s] [%(filename)s:%(lineno)d] '
'[dd.trace_id=%(dd.trace_id)s dd.span_id=%(dd.span_id)s] '
'- %(message)s')
logging.basicConfig(format=FORMAT)
log = logging.getLogger()
log.level = logging.INFO
@tracer.wrap()
def hello():
log.info('Hello, World!')
hello()
"""
from ...utils.importlib import require_modules
required_modules = ['logging']
with require_modules(required_modules) as missing_modules:
if not missing_modules:
from .patch import patch, unpatch
__all__ = ['patch', 'unpatch']

View File

@ -1,49 +0,0 @@
import logging
from ddtrace import config
from ...helpers import get_correlation_ids
from ...utils.wrappers import unwrap as _u
from ...vendor.wrapt import wrap_function_wrapper as _w
RECORD_ATTR_TRACE_ID = 'dd.trace_id'
RECORD_ATTR_SPAN_ID = 'dd.span_id'
RECORD_ATTR_VALUE_NULL = 0
config._add('logging', dict(
tracer=None, # by default, override here for custom tracer
))
def _w_makeRecord(func, instance, args, kwargs):
record = func(*args, **kwargs)
# add correlation identifiers to LogRecord
trace_id, span_id = get_correlation_ids(tracer=config.logging.tracer)
if trace_id and span_id:
setattr(record, RECORD_ATTR_TRACE_ID, trace_id)
setattr(record, RECORD_ATTR_SPAN_ID, span_id)
else:
setattr(record, RECORD_ATTR_TRACE_ID, RECORD_ATTR_VALUE_NULL)
setattr(record, RECORD_ATTR_SPAN_ID, RECORD_ATTR_VALUE_NULL)
return record
def patch():
"""
Patch ``logging`` module in the Python Standard Library for injection of
tracer information by wrapping the base factory method ``Logger.makeRecord``
"""
if getattr(logging, '_datadog_patch', False):
return
setattr(logging, '_datadog_patch', True)
_w(logging.Logger, 'makeRecord', _w_makeRecord)
def unpatch():
if getattr(logging, '_datadog_patch', False):
setattr(logging, '_datadog_patch', False)
_u(logging.Logger, 'makeRecord')
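A small sketch of the effect of the patch above: log records carry ``dd.trace_id``/``dd.span_id`` (``0`` outside any span), so a formatter can reference them; the format string and span name are only examples:

import logging

from ddtrace import patch, tracer

patch(logging=True)

logging.basicConfig(
    format='%(asctime)s [dd.trace_id=%(dd.trace_id)s dd.span_id=%(dd.span_id)s] %(message)s'
)
log = logging.getLogger(__name__)

with tracer.trace('work'):
    log.warning('inside a traced block')   # non-zero identifiers
log.warning('outside any span')            # identifiers are 0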

View File

@ -1,24 +0,0 @@
"""
The ``mako`` integration traces templates rendering.
Auto instrumentation is available using the ``patch`` function. The following is an example::
from ddtrace import patch
from mako.template import Template
patch(mako=True)
t = Template(filename="index.html")
"""
from ...utils.importlib import require_modules
required_modules = ['mako']
with require_modules(required_modules) as missing_modules:
if not missing_modules:
from .patch import patch, unpatch
__all__ = [
'patch',
'unpatch',
]

View File

@ -1 +0,0 @@
DEFAULT_TEMPLATE_NAME = '<memory>'

View File

@ -1,47 +0,0 @@
import mako
from mako.template import Template
from ...ext import SpanTypes
from ...pin import Pin
from ...utils.importlib import func_name
from ...utils.wrappers import unwrap as _u
from ...vendor.wrapt import wrap_function_wrapper as _w
from .constants import DEFAULT_TEMPLATE_NAME
def patch():
if getattr(mako, '__datadog_patch', False):
# already patched
return
setattr(mako, '__datadog_patch', True)
Pin(service='mako', app='mako').onto(Template)
_w(mako, 'template.Template.render', _wrap_render)
_w(mako, 'template.Template.render_unicode', _wrap_render)
_w(mako, 'template.Template.render_context', _wrap_render)
def unpatch():
if not getattr(mako, '__datadog_patch', False):
return
setattr(mako, '__datadog_patch', False)
_u(mako.template.Template, 'render')
_u(mako.template.Template, 'render_unicode')
_u(mako.template.Template, 'render_context')
def _wrap_render(wrapped, instance, args, kwargs):
pin = Pin.get_from(instance)
if not pin or not pin.enabled():
return wrapped(*args, **kwargs)
template_name = instance.filename or DEFAULT_TEMPLATE_NAME
with pin.tracer.trace(func_name(wrapped), pin.service, span_type=SpanTypes.TEMPLATE) as span:
try:
template = wrapped(*args, **kwargs)
return template
finally:
span.resource = template_name
span.set_tag('mako.template_name', template_name)
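A hedged usage sketch of the wrapper above; a string template has no filename, so the span resource falls back to ``'<memory>'``:

from ddtrace import patch

patch(mako=True)

from mako.template import Template

# traced as a TEMPLATE span with resource and 'mako.template_name' set to '<memory>'
Template('hello ${name}!').render(name='world')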

View File

@ -1,53 +0,0 @@
"""
The molten web framework is automatically traced by ``ddtrace`` when calling ``patch``::
from molten import App, Route
from ddtrace import patch_all; patch_all(molten=True)
def hello(name: str, age: int) -> str:
return f'Hello {age} year old named {name}!'
app = App(routes=[Route('/hello/{name}/{age}', hello)])
You may also enable molten tracing automatically via ``ddtrace-run``::
ddtrace-run python app.py
Configuration
~~~~~~~~~~~~~
.. py:data:: ddtrace.config.molten['distributed_tracing']
Whether to parse distributed tracing headers from requests received by your Molten app.
Default: ``True``
.. py:data:: ddtrace.config.molten['analytics_enabled']
Whether to generate APM events in Trace Search & Analytics.
Can also be enabled with the ``DD_MOLTEN_ANALYTICS_ENABLED`` environment variable.
Default: ``None``
.. py:data:: ddtrace.config.molten['service_name']
The service name reported for your Molten app.
Can also be configured via the ``DD_MOLTEN_SERVICE_NAME`` environment variable.
Default: ``'molten'``
"""
from ...utils.importlib import require_modules
required_modules = ['molten']
with require_modules(required_modules) as missing_modules:
if not missing_modules:
from . import patch as _patch
patch = _patch.patch
unpatch = _patch.unpatch
__all__ = ['patch', 'unpatch']
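A hedged sketch of overriding these settings from code rather than through the environment variables listed above (the service name is illustrative):

from ddtrace import config, patch

patch(molten=True)

config.molten['service_name'] = 'my-molten-api'   # instead of DD_MOLTEN_SERVICE_NAME
config.molten['distributed_tracing'] = False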

View File

@ -1,169 +0,0 @@
from ddtrace.vendor import wrapt
from ddtrace.vendor.wrapt import wrap_function_wrapper as _w
import molten
from ... import Pin, config
from ...compat import urlencode
from ...constants import ANALYTICS_SAMPLE_RATE_KEY
from ...ext import SpanTypes, http
from ...propagation.http import HTTPPropagator
from ...utils.formats import asbool, get_env
from ...utils.importlib import func_name
from ...utils.wrappers import unwrap as _u
from .wrappers import WrapperComponent, WrapperRenderer, WrapperMiddleware, WrapperRouter, MOLTEN_ROUTE
MOLTEN_VERSION = tuple(map(int, molten.__version__.split()[0].split('.')))
# Configure default configuration
config._add('molten', dict(
service_name=get_env('molten', 'service_name', 'molten'),
app='molten',
distributed_tracing=asbool(get_env('molten', 'distributed_tracing', True)),
))
def patch():
"""Patch the instrumented methods
"""
if getattr(molten, '_datadog_patch', False):
return
setattr(molten, '_datadog_patch', True)
pin = Pin(
service=config.molten['service_name'],
app=config.molten['app']
)
# add pin to module since many classes use __slots__
pin.onto(molten)
_w(molten.BaseApp, '__init__', patch_app_init)
_w(molten.App, '__call__', patch_app_call)
def unpatch():
"""Remove instrumentation
"""
if getattr(molten, '_datadog_patch', False):
setattr(molten, '_datadog_patch', False)
# remove pin
pin = Pin.get_from(molten)
if pin:
pin.remove_from(molten)
_u(molten.BaseApp, '__init__')
_u(molten.App, '__call__')
_u(molten.Router, 'add_route')
def patch_app_call(wrapped, instance, args, kwargs):
"""Patch wsgi interface for app
"""
pin = Pin.get_from(molten)
if not pin or not pin.enabled():
return wrapped(*args, **kwargs)
# DEV: This is safe because these are the args for a WSGI handler
# https://www.python.org/dev/peps/pep-3333/
environ, start_response = args
request = molten.http.Request.from_environ(environ)
resource = func_name(wrapped)
# Configure distributed tracing
if config.molten.get('distributed_tracing', True):
propagator = HTTPPropagator()
# request.headers is type Iterable[Tuple[str, str]]
context = propagator.extract(dict(request.headers))
# Only need to activate the new context if something was propagated
if context.trace_id:
pin.tracer.context_provider.activate(context)
with pin.tracer.trace('molten.request', service=pin.service, resource=resource, span_type=SpanTypes.WEB) as span:
# set analytics sample rate with global config enabled
span.set_tag(
ANALYTICS_SAMPLE_RATE_KEY,
config.molten.get_analytics_sample_rate(use_global_config=True)
)
@wrapt.function_wrapper
def _w_start_response(wrapped, instance, args, kwargs):
""" Patch respond handling to set metadata """
pin = Pin.get_from(molten)
if not pin or not pin.enabled():
return wrapped(*args, **kwargs)
status, headers, exc_info = args
code, _, _ = status.partition(' ')
try:
code = int(code)
except ValueError:
pass
if not span.get_tag(MOLTEN_ROUTE):
# if the route never resolved, update the root resource
span.resource = u'{} {}'.format(request.method, code)
span.set_tag(http.STATUS_CODE, code)
# mark 5xx spans as error
if 500 <= code < 600:
span.error = 1
return wrapped(*args, **kwargs)
# patching for extracting response code
start_response = _w_start_response(start_response)
span.set_tag(http.METHOD, request.method)
span.set_tag(http.URL, '%s://%s:%s%s' % (
request.scheme, request.host, request.port, request.path,
))
if config.molten.trace_query_string:
span.set_tag(http.QUERY_STRING, urlencode(dict(request.params)))
span.set_tag('molten.version', molten.__version__)
return wrapped(environ, start_response, **kwargs)
def patch_app_init(wrapped, instance, args, kwargs):
"""Patch app initialization of middleware, components and renderers
"""
# allow instance to be initialized before wrapping them
wrapped(*args, **kwargs)
# add Pin to instance
pin = Pin.get_from(molten)
if not pin or not pin.enabled():
return
# Wrappers here allow us to trace objects without altering class or instance
# attributes, which presents a problem when classes in molten use
# ``__slots__``
instance.router = WrapperRouter(instance.router)
# wrap middleware functions/callables
instance.middleware = [
WrapperMiddleware(mw)
for mw in instance.middleware
]
# wrap components objects within injector
# NOTE: the app instance also contains a list of components but it does not
# appear to be used for anything beyond being passed along to the dependency injector
instance.injector.components = [
WrapperComponent(c)
for c in instance.injector.components
]
# wrap renderer objects
instance.renderers = [
WrapperRenderer(r)
for r in instance.renderers
]

View File

@ -1,95 +0,0 @@
from ddtrace.vendor import wrapt
import molten
from ... import Pin
from ...utils.importlib import func_name
MOLTEN_ROUTE = 'molten.route'
def trace_wrapped(resource, wrapped, *args, **kwargs):
pin = Pin.get_from(molten)
if not pin or not pin.enabled():
return wrapped(*args, **kwargs)
with pin.tracer.trace(func_name(wrapped), service=pin.service, resource=resource):
return wrapped(*args, **kwargs)
def trace_func(resource):
"""Trace calls to function using provided resource name
"""
@wrapt.function_wrapper
def _trace_func(wrapped, instance, args, kwargs):
pin = Pin.get_from(molten)
if not pin or not pin.enabled():
return wrapped(*args, **kwargs)
with pin.tracer.trace(func_name(wrapped), service=pin.service, resource=resource):
return wrapped(*args, **kwargs)
return _trace_func
class WrapperComponent(wrapt.ObjectProxy):
""" Tracing of components """
def can_handle_parameter(self, *args, **kwargs):
func = self.__wrapped__.can_handle_parameter
cname = func_name(self.__wrapped__)
resource = '{}.{}'.format(cname, func.__name__)
return trace_wrapped(resource, func, *args, **kwargs)
# TODO[tahir]: the signature of a wrapped resolve method causes DIError to
# be thrown since parameter types cannot be determined
class WrapperRenderer(wrapt.ObjectProxy):
""" Tracing of renderers """
def render(self, *args, **kwargs):
func = self.__wrapped__.render
cname = func_name(self.__wrapped__)
resource = '{}.{}'.format(cname, func.__name__)
return trace_wrapped(resource, func, *args, **kwargs)
class WrapperMiddleware(wrapt.ObjectProxy):
""" Tracing of callable functional-middleware """
def __call__(self, *args, **kwargs):
func = self.__wrapped__.__call__
resource = func_name(self.__wrapped__)
return trace_wrapped(resource, func, *args, **kwargs)
class WrapperRouter(wrapt.ObjectProxy):
""" Tracing of router on the way back from a matched route """
def match(self, *args, **kwargs):
# catch matched route and wrap tracer around its handler and set root span resource
func = self.__wrapped__.match
route_and_params = func(*args, **kwargs)
pin = Pin.get_from(molten)
if not pin or not pin.enabled():
return route_and_params
if route_and_params is not None:
route, params = route_and_params
route.handler = trace_func(func_name(route.handler))(route.handler)
# update root span resource while we know the matched route
resource = '{} {}'.format(
route.method,
route.template,
)
root_span = pin.tracer.current_root_span()
root_span.resource = resource
# if no route was set on the root span yet, record it based on this resolved
# route
if root_span and not root_span.get_tag(MOLTEN_ROUTE):
root_span.set_tag(MOLTEN_ROUTE, route.name)
return route, params
return route_and_params

View File

@ -1,29 +0,0 @@
"""Instrument mongoengine to report MongoDB queries.
``patch_all`` will automatically patch your mongoengine connect method to make it work.
::
from ddtrace import Pin, patch
import mongoengine
# If not patched yet, you can patch mongoengine specifically
patch(mongoengine=True)
# At that point, mongoengine is instrumented with the default settings
mongoengine.connect('db', alias='default')
# Use a pin to specify metadata related to this client
client = mongoengine.connect('db', alias='master')
Pin.override(client, service="mongo-master")
"""
from ...utils.importlib import require_modules
required_modules = ['mongoengine']
with require_modules(required_modules) as missing_modules:
if not missing_modules:
from .patch import patch, trace_mongoengine
__all__ = ['patch', 'trace_mongoengine']

View File

@ -1,20 +0,0 @@
import mongoengine
from .trace import WrappedConnect
from ...utils.deprecation import deprecated
# Original connect function
_connect = mongoengine.connect
def patch():
setattr(mongoengine, 'connect', WrappedConnect(_connect))
def unpatch():
setattr(mongoengine, 'connect', _connect)
@deprecated(message='Use patching instead (see the docs).', version='1.0.0')
def trace_mongoengine(*args, **kwargs):
return _connect

View File

@ -1,32 +0,0 @@
# 3p
from ddtrace.vendor import wrapt
# project
import ddtrace
from ddtrace.ext import mongo as mongox
from ddtrace.contrib.pymongo.client import TracedMongoClient
# TODO(Benjamin): we should instrument register_connection instead, because it is more generic
# We should also extract the "alias" attribute and set it as a meta
class WrappedConnect(wrapt.ObjectProxy):
""" WrappedConnect wraps mongoengines 'connect' function to ensure
that all returned connections are wrapped for tracing.
"""
def __init__(self, connect):
super(WrappedConnect, self).__init__(connect)
ddtrace.Pin(service=mongox.SERVICE, tracer=ddtrace.tracer).onto(self)
def __call__(self, *args, **kwargs):
client = self.__wrapped__(*args, **kwargs)
pin = ddtrace.Pin.get_from(self)
if pin:
# mongoengine uses pymongo internally, so we can just piggyback on the
# existing pymongo integration and make sure that the connections it
# uses internally are traced.
client = TracedMongoClient(client)
ddtrace.Pin(service=pin.service, tracer=pin.tracer).onto(client)
return client

View File

@ -1,39 +0,0 @@
"""Instrument mysql to report MySQL queries.
``patch_all`` will automatically patch your mysql connection to make it work.
::
# Make sure to import mysql.connector and not the 'connect' function,
# otherwise you won't have access to the patched version
from ddtrace import Pin, patch
import mysql.connector
# If not patched yet, you can patch mysql specifically
patch(mysql=True)
# This will report a span with the default settings
conn = mysql.connector.connect(user="alice", password="b0b", host="localhost", port=3306, database="test")
cursor = conn.cursor()
cursor.execute("SELECT 6*7 AS the_answer;")
# Use a pin to specify metadata related to this connection
Pin.override(conn, service='mysql-users')
Only the default full-Python integration works. The binary C connector,
provided by _mysql_connector, is not supported yet.
Help on mysql.connector can be found on:
https://dev.mysql.com/doc/connector-python/en/
"""
from ...utils.importlib import require_modules
# check `mysql-connector` availability
required_modules = ['mysql.connector']
with require_modules(required_modules) as missing_modules:
if not missing_modules:
from .patch import patch
from .tracers import get_traced_mysql_connection
__all__ = ['get_traced_mysql_connection', 'patch']

View File

@ -1,46 +0,0 @@
# 3p
from ddtrace.vendor import wrapt
import mysql.connector
# project
from ddtrace import Pin
from ddtrace.contrib.dbapi import TracedConnection
from ...ext import net, db
CONN_ATTR_BY_TAG = {
net.TARGET_HOST: 'server_host',
net.TARGET_PORT: 'server_port',
db.USER: 'user',
db.NAME: 'database',
}
def patch():
wrapt.wrap_function_wrapper('mysql.connector', 'connect', _connect)
# `Connect` is an alias for `connect`, patch it too
if hasattr(mysql.connector, 'Connect'):
mysql.connector.Connect = mysql.connector.connect
def unpatch():
if isinstance(mysql.connector.connect, wrapt.ObjectProxy):
mysql.connector.connect = mysql.connector.connect.__wrapped__
if hasattr(mysql.connector, 'Connect'):
mysql.connector.Connect = mysql.connector.connect
def _connect(func, instance, args, kwargs):
conn = func(*args, **kwargs)
return patch_conn(conn)
def patch_conn(conn):
tags = {t: getattr(conn, a) for t, a in CONN_ATTR_BY_TAG.items() if getattr(conn, a, '') != ''}
pin = Pin(service='mysql', app='mysql', tags=tags)
# grab the metadata from the conn
wrapped = TracedConnection(conn, pin=pin)
pin.onto(wrapped)
return wrapped

View File

@ -1,8 +0,0 @@
import mysql.connector
from ...utils.deprecation import deprecated
@deprecated(message='Use patching instead (see the docs).', version='1.0.0')
def get_traced_mysql_connection(*args, **kwargs):
return mysql.connector.MySQLConnection

View File

@ -1,38 +0,0 @@
"""Instrument mysqlclient / MySQL-python to report MySQL queries.
``patch_all`` will automatically patch your mysql connection to make it work.
::
# Make sure to import MySQLdb and not the 'connect' function,
# otherwise you won't have access to the patched version
from ddtrace import Pin, patch
import MySQLdb
# If not patched yet, you can patch mysqldb specifically
patch(mysqldb=True)
# This will report a span with the default settings
conn = MySQLdb.connect(user="alice", passwd="b0b", host="localhost", port=3306, db="test")
cursor = conn.cursor()
cursor.execute("SELECT 6*7 AS the_answer;")
# Use a pin to specify metadata related to this connection
Pin.override(conn, service='mysql-users')
This package works for mysqlclient or MySQL-python. Only the default
full-Python integration works. The binary C connector provided by
_mysql is not yet supported.
Help on mysqlclient can be found on:
https://mysqlclient.readthedocs.io/
"""
from ...utils.importlib import require_modules
required_modules = ['MySQLdb']
with require_modules(required_modules) as missing_modules:
if not missing_modules:
from .patch import patch
__all__ = ['patch']

View File

@ -1,63 +0,0 @@
# 3p
import MySQLdb
from ddtrace.vendor.wrapt import wrap_function_wrapper as _w
# project
from ddtrace import Pin
from ddtrace.contrib.dbapi import TracedConnection
from ...ext import net, db
from ...utils.wrappers import unwrap as _u
KWPOS_BY_TAG = {
net.TARGET_HOST: ('host', 0),
db.USER: ('user', 1),
db.NAME: ('db', 3),
}
def patch():
# patch only once
if getattr(MySQLdb, '__datadog_patch', False):
return
setattr(MySQLdb, '__datadog_patch', True)
# `Connection` and `connect` are aliases for
# `Connect`; patch them too
_w('MySQLdb', 'Connect', _connect)
if hasattr(MySQLdb, 'Connection'):
_w('MySQLdb', 'Connection', _connect)
if hasattr(MySQLdb, 'connect'):
_w('MySQLdb', 'connect', _connect)
def unpatch():
if not getattr(MySQLdb, '__datadog_patch', False):
return
setattr(MySQLdb, '__datadog_patch', False)
# unpatch MySQLdb
_u(MySQLdb, 'Connect')
if hasattr(MySQLdb, 'Connection'):
_u(MySQLdb, 'Connection')
if hasattr(MySQLdb, 'connect'):
_u(MySQLdb, 'connect')
def _connect(func, instance, args, kwargs):
conn = func(*args, **kwargs)
return patch_conn(conn, *args, **kwargs)
def patch_conn(conn, *args, **kwargs):
tags = {t: kwargs[k] if k in kwargs else args[p]
for t, (k, p) in KWPOS_BY_TAG.items()
if k in kwargs or len(args) > p}
tags[net.TARGET_PORT] = conn.port
pin = Pin(service='mysql', app='mysql', tags=tags)
# grab the metadata from the conn
wrapped = TracedConnection(conn, pin=pin)
pin.onto(wrapped)
return wrapped

View File

@ -1,31 +0,0 @@
"""Instrument pylibmc to report Memcached queries.
``patch_all`` will automatically patch your pylibmc client to make it work.
::
# Be sure to import pylibmc and not pylibmc.Client directly,
# otherwise you won't have access to the patched version
from ddtrace import Pin, patch
import pylibmc
# If not patched yet, you can patch pylibmc specifically
patch(pylibmc=True)
# One client instrumented with default configuration
client = pylibmc.Client(["localhost:11211"]
client.set("key1", "value1")
# Use a pin to specify metadata related to this client
Pin.override(client, service="memcached-sessions")
"""
from ...utils.importlib import require_modules
required_modules = ['pylibmc']
with require_modules(required_modules) as missing_modules:
if not missing_modules:
from .client import TracedClient
from .patch import patch
__all__ = ['TracedClient', 'patch']

View File

@ -1,14 +0,0 @@
translate_server_specs = None
try:
# NOTE: we rely on an undocumented method to parse addresses,
# so be a bit defensive and don't assume it exists.
from pylibmc.client import translate_server_specs
except ImportError:
pass
def parse_addresses(addrs):
if not translate_server_specs:
return []
return translate_server_specs(addrs)

View File

@ -1,158 +0,0 @@
from contextlib import contextmanager
import random
# 3p
from ddtrace.vendor.wrapt import ObjectProxy
import pylibmc
# project
import ddtrace
from ...constants import ANALYTICS_SAMPLE_RATE_KEY
from ...ext import SpanTypes, memcached, net
from ...internal.logger import get_logger
from ...settings import config
from .addrs import parse_addresses
# Original Client class
_Client = pylibmc.Client
log = get_logger(__name__)
class TracedClient(ObjectProxy):
""" TracedClient is a proxy for a pylibmc.Client that times it's network operations. """
def __init__(self, client=None, service=memcached.SERVICE, tracer=None, *args, **kwargs):
""" Create a traced client that wraps the given memcached client.
"""
# The client instance/service/tracer attributes are kept for compatibility
# with the old interface: TracedClient(client=pylibmc.Client(['localhost:11211']))
# TODO(Benjamin): Remove these in favor of patching.
if not isinstance(client, _Client):
# We are in the patched situation, just pass down all arguments to the pylibmc.Client
# Note that, in that case, client isn't a real client (just the first argument)
client = _Client(client, *args, **kwargs)
else:
log.warning('TracedClient instantiation is deprecated and will be removed '
'in future versions (0.6.0). Use patching instead (see the docs).')
super(TracedClient, self).__init__(client)
pin = ddtrace.Pin(service=service, tracer=tracer)
pin.onto(self)
# attempt to collect the pool of urls this client talks to
try:
self._addresses = parse_addresses(client.addresses)
except Exception:
log.debug('error setting addresses', exc_info=True)
def clone(self, *args, **kwargs):
# rewrap new connections.
cloned = self.__wrapped__.clone(*args, **kwargs)
traced_client = TracedClient(cloned)
pin = ddtrace.Pin.get_from(self)
if pin:
pin.clone().onto(traced_client)
return traced_client
def get(self, *args, **kwargs):
return self._trace_cmd('get', *args, **kwargs)
def set(self, *args, **kwargs):
return self._trace_cmd('set', *args, **kwargs)
def delete(self, *args, **kwargs):
return self._trace_cmd('delete', *args, **kwargs)
def gets(self, *args, **kwargs):
return self._trace_cmd('gets', *args, **kwargs)
def touch(self, *args, **kwargs):
return self._trace_cmd('touch', *args, **kwargs)
def cas(self, *args, **kwargs):
return self._trace_cmd('cas', *args, **kwargs)
def incr(self, *args, **kwargs):
return self._trace_cmd('incr', *args, **kwargs)
def decr(self, *args, **kwargs):
return self._trace_cmd('decr', *args, **kwargs)
def append(self, *args, **kwargs):
return self._trace_cmd('append', *args, **kwargs)
def prepend(self, *args, **kwargs):
return self._trace_cmd('prepend', *args, **kwargs)
def get_multi(self, *args, **kwargs):
return self._trace_multi_cmd('get_multi', *args, **kwargs)
def set_multi(self, *args, **kwargs):
return self._trace_multi_cmd('set_multi', *args, **kwargs)
def delete_multi(self, *args, **kwargs):
return self._trace_multi_cmd('delete_multi', *args, **kwargs)
def _trace_cmd(self, method_name, *args, **kwargs):
""" trace the execution of the method with the given name and will
patch the first arg.
"""
method = getattr(self.__wrapped__, method_name)
with self._span(method_name) as span:
if span and args:
span.set_tag(memcached.QUERY, '%s %s' % (method_name, args[0]))
return method(*args, **kwargs)
def _trace_multi_cmd(self, method_name, *args, **kwargs):
""" trace the execution of the multi command with the given name. """
method = getattr(self.__wrapped__, method_name)
with self._span(method_name) as span:
pre = kwargs.get('key_prefix')
if span and pre:
span.set_tag(memcached.QUERY, '%s %s' % (method_name, pre))
return method(*args, **kwargs)
@contextmanager
def _no_span(self):
yield None
def _span(self, cmd_name):
""" Return a span timing the given command. """
pin = ddtrace.Pin.get_from(self)
if not pin or not pin.enabled():
return self._no_span()
span = pin.tracer.trace(
'memcached.cmd',
service=pin.service,
resource=cmd_name,
span_type=SpanTypes.CACHE)
try:
self._tag_span(span)
except Exception:
log.debug('error tagging span', exc_info=True)
return span
def _tag_span(self, span):
# FIXME[matt] the host selection is buried in c code. we can't tell what it's actually
# using, so fall back to randomly choosing one. can we do better?
if self._addresses:
_, host, port, _ = random.choice(self._addresses)
span.set_meta(net.TARGET_HOST, host)
span.set_meta(net.TARGET_PORT, port)
# set analytics sample rate
span.set_tag(
ANALYTICS_SAMPLE_RATE_KEY,
config.pylibmc.get_analytics_sample_rate()
)
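A hedged sketch of the multi commands above; when a ``key_prefix`` is given, the prefix rather than the individual keys is recorded in the ``memcached.query`` tag (server address and prefix are illustrative):

import pylibmc

from ddtrace import patch

patch(pylibmc=True)

client = pylibmc.Client(['localhost:11211'])
client.set_multi({'a': '1', 'b': '2'}, key_prefix='session:')
client.get_multi(['a', 'b'], key_prefix='session:')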

View File

@ -1,14 +0,0 @@
import pylibmc
from .client import TracedClient
# Original Client class
_Client = pylibmc.Client
def patch():
setattr(pylibmc, 'Client', TracedClient)
def unpatch():
setattr(pylibmc, 'Client', _Client)

View File

@ -1,28 +0,0 @@
"""Instrument rediscluster to report Redis Cluster queries.
``patch_all`` will automatically patch your Redis Cluster client to make it work.
::
from ddtrace import Pin, patch
import rediscluster
# If not patched yet, you can patch rediscluster specifically
patch(rediscluster=True)
# This will report a span with the default settings
client = rediscluster.StrictRedisCluster(startup_nodes=[{'host':'localhost', 'port':'7000'}])
client.get('my-key')
# Use a pin to specify metadata related to this client
Pin.override(client, service='redis-queue')
"""
from ...utils.importlib import require_modules
required_modules = ['rediscluster', 'rediscluster.client']
with require_modules(required_modules) as missing_modules:
if not missing_modules:
from .patch import patch
__all__ = ['patch']

View File

@ -1,59 +0,0 @@
# 3p
import rediscluster
from ddtrace.vendor import wrapt
# project
from ddtrace import config
from ...constants import ANALYTICS_SAMPLE_RATE_KEY
from ...pin import Pin
from ...ext import SpanTypes, redis as redisx
from ...utils.wrappers import unwrap
from ..redis.patch import traced_execute_command, traced_pipeline
from ..redis.util import format_command_args
def patch():
"""Patch the instrumented methods
"""
if getattr(rediscluster, '_datadog_patch', False):
return
setattr(rediscluster, '_datadog_patch', True)
_w = wrapt.wrap_function_wrapper
_w('rediscluster', 'StrictRedisCluster.execute_command', traced_execute_command)
_w('rediscluster', 'StrictRedisCluster.pipeline', traced_pipeline)
_w('rediscluster', 'StrictClusterPipeline.execute', traced_execute_pipeline)
Pin(service=redisx.DEFAULT_SERVICE, app=redisx.APP).onto(rediscluster.StrictRedisCluster)
def unpatch():
if getattr(rediscluster, '_datadog_patch', False):
setattr(rediscluster, '_datadog_patch', False)
unwrap(rediscluster.StrictRedisCluster, 'execute_command')
unwrap(rediscluster.StrictRedisCluster, 'pipeline')
unwrap(rediscluster.StrictClusterPipeline, 'execute')
#
# tracing functions
#
def traced_execute_pipeline(func, instance, args, kwargs):
pin = Pin.get_from(instance)
if not pin or not pin.enabled():
return func(*args, **kwargs)
cmds = [format_command_args(c.args) for c in instance.command_stack]
resource = '\n'.join(cmds)
tracer = pin.tracer
with tracer.trace(redisx.CMD, resource=resource, service=pin.service, span_type=SpanTypes.REDIS) as s:
s.set_tag(redisx.RAWCMD, resource)
s.set_metric(redisx.PIPELINE_LEN, len(instance.command_stack))
# set analytics sample rate if enabled
s.set_tag(
ANALYTICS_SAMPLE_RATE_KEY,
config.rediscluster.get_analytics_sample_rate()
)
return func(*args, **kwargs)
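A hedged sketch of the pipeline tracing above; the startup node mirrors the package docstring and is illustrative:

import rediscluster

from ddtrace import patch

patch(rediscluster=True)

client = rediscluster.StrictRedisCluster(startup_nodes=[{'host': 'localhost', 'port': '7000'}])
pipe = client.pipeline()
pipe.set('key1', 'value1')
pipe.get('key1')
# executed as one traced span; the resource is the newline-joined command list
pipe.execute()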

View File

@ -1,126 +0,0 @@
r"""
The Tornado integration traces all ``RequestHandler`` defined in a Tornado web application.
Auto instrumentation is available using the ``patch`` function that **must be called before**
importing the tornado library.
**Note:** Tornado 5 and 6 are supported only for Python 3.7.
The following is an example::
# patch before importing tornado and concurrent.futures
from ddtrace import tracer, patch
patch(tornado=True)
import tornado.web
import tornado.gen
import tornado.ioloop
# create your handlers
class MainHandler(tornado.web.RequestHandler):
@tornado.gen.coroutine
def get(self):
self.write("Hello, world")
# create your application
app = tornado.web.Application([
(r'/', MainHandler),
])
# and run it as usual
app.listen(8888)
tornado.ioloop.IOLoop.current().start()
When any type of ``RequestHandler`` is hit, a request root span is automatically created. If
you want to trace more parts of your application, you can use the ``wrap()`` decorator and
the ``trace()`` method as usual::
class MainHandler(tornado.web.RequestHandler):
@tornado.gen.coroutine
def get(self):
yield self.notify()
yield self.blocking_method()
with tracer.trace('tornado.before_write') as span:
# trace more work in the handler
@tracer.wrap('tornado.executor_handler')
@tornado.concurrent.run_on_executor
def blocking_method(self):
# do something expensive
@tracer.wrap('tornado.notify', service='tornado-notification')
@tornado.gen.coroutine
def notify(self):
# do something
If you are overriding the ``on_finish`` or ``log_exception`` methods on a
``RequestHandler``, you will need to call the super method to ensure the
tracer's patched methods are called::
class MainHandler(tornado.web.RequestHandler):
@tornado.gen.coroutine
def get(self):
self.write("Hello, world")
def on_finish(self):
super(MainHandler, self).on_finish()
# do other clean-up
def log_exception(self, typ, value, tb):
super(MainHandler, self).log_exception(typ, value, tb)
# do other logging
Tornado settings can be used to change some tracing configuration, like::
settings = {
'datadog_trace': {
'default_service': 'my-tornado-app',
'tags': {'env': 'production'},
'distributed_tracing': False,
'analytics_enabled': False,
'settings': {
'FILTERS': [
FilterRequestsOnUrl(r'http://test\\.example\\.com'),
],
},
},
}
app = tornado.web.Application([
(r'/', MainHandler),
], **settings)
The available settings are:
* ``default_service`` (default: `tornado-web`): set the service name used by the tracer. Usually
this configuration must be updated with a meaningful name.
* ``tags`` (default: `{}`): set global tags that should be applied to all spans.
* ``enabled`` (default: `True`): define if the tracer is enabled or not. If set to `false`, the
code is still instrumented but no spans are sent to the APM agent.
* ``distributed_tracing`` (default: `True`): enable distributed tracing if this is called
remotely from an instrumented application.
We suggest enabling it only for internal services where headers are under your control.
* ``analytics_enabled`` (default: `None`): enable generating APM events for Trace Search & Analytics.
* ``agent_hostname`` (default: `localhost`): define the hostname of the APM agent.
* ``agent_port`` (default: `8126`): define the port of the APM agent.
* ``settings`` (default: ``{}``): Tracer extra settings used to change, for instance, the filtering behavior.
"""
from ...utils.importlib import require_modules
required_modules = ['tornado']
with require_modules(required_modules) as missing_modules:
if not missing_modules:
from .stack_context import run_with_trace_context, TracerStackContext
context_provider = TracerStackContext()
from .patch import patch, unpatch
__all__ = [
'patch',
'unpatch',
'context_provider',
'run_with_trace_context',
'TracerStackContext',
]

View File

@ -1,56 +0,0 @@
import ddtrace
from tornado import template
from . import decorators, context_provider
from .constants import CONFIG_KEY
def tracer_config(__init__, app, args, kwargs):
"""
Wrap the Tornado web application so that we can configure service info and
tracing settings after initialization.
"""
# call the Application constructor
__init__(*args, **kwargs)
# default settings
settings = {
'tracer': ddtrace.tracer,
'default_service': 'tornado-web',
'distributed_tracing': True,
'analytics_enabled': None
}
# update defaults with user settings
user_settings = app.settings.get(CONFIG_KEY)
if user_settings:
settings.update(user_settings)
app.settings[CONFIG_KEY] = settings
tracer = settings['tracer']
service = settings['default_service']
# extract extra settings
extra_settings = settings.get('settings', {})
# the tracer must use the right Context propagation and wrap executor;
# this action is done twice because the patch() method uses the
# global tracer while here we can have a different instance (even if
# this is not usual).
tracer.configure(
context_provider=context_provider,
wrap_executor=decorators.wrap_executor,
enabled=settings.get('enabled', None),
hostname=settings.get('agent_hostname', None),
port=settings.get('agent_port', None),
settings=extra_settings,
)
# set global tags if any
tags = settings.get('tags', None)
if tags:
tracer.set_tags(tags)
# configure the PIN object for template rendering
ddtrace.Pin(app='tornado', service=service, tracer=tracer).onto(template)

View File

@ -1,12 +0,0 @@
try:
# detect if concurrent.futures is available as a Python
# stdlib or Python 2.7 backport
from ..futures import patch as wrap_futures, unpatch as unwrap_futures
futures_available = True
except ImportError:
def wrap_futures():
pass
def unwrap_futures():
pass
futures_available = False

View File

@ -1,9 +0,0 @@
"""
This module defines Tornado settings that are shared between
integration modules.
"""
CONFIG_KEY = 'datadog_trace'
REQUEST_CONTEXT_KEY = 'datadog_context'
REQUEST_SPAN_KEY = '__datadog_request_span'
FUTURE_SPAN_KEY = '__datadog_future_span'
PARENT_SPAN_KEY = '__datadog_parent_span'

View File

@ -1,151 +0,0 @@
import ddtrace
import sys
from functools import wraps
from .constants import FUTURE_SPAN_KEY, PARENT_SPAN_KEY
from .stack_context import TracerStackContext
def _finish_span(future):
"""
Finish the span if it's attached to the given ``Future`` object.
This method is a Tornado callback used to close a decorated function
executed as a coroutine or as a synchronous function in another thread.
"""
span = getattr(future, FUTURE_SPAN_KEY, None)
if span:
# `tornado.concurrent.Future` in PY3 tornado>=4.0,<5 has `exc_info`
if callable(getattr(future, 'exc_info', None)):
# retrieve the exception from the coroutine object
exc_info = future.exc_info()
if exc_info:
span.set_exc_info(*exc_info)
elif callable(getattr(future, 'exception', None)):
# in tornado>=4.0,<5 with PY2 `concurrent.futures._base.Future`
# `exception_info()` returns `(exception, traceback)` but
# `exception()` only returns the first element in the tuple
if callable(getattr(future, 'exception_info', None)):
exc, exc_tb = future.exception_info()
if exc and exc_tb:
exc_type = type(exc)
span.set_exc_info(exc_type, exc, exc_tb)
# in tornado>=5 with PY3, `tornado.concurrent.Future` is an alias for
# `asyncio.Future`; in PY3 `exc_info` is not available, so use the
# exception method instead
else:
exc = future.exception()
if exc:
# we expect exception object to have a traceback attached
if hasattr(exc, '__traceback__'):
exc_type = type(exc)
exc_tb = getattr(exc, '__traceback__', None)
span.set_exc_info(exc_type, exc, exc_tb)
# if all else fails use currently handled exception for
# current thread
else:
span.set_exc_info(*sys.exc_info())
span.finish()
def _run_on_executor(run_on_executor, _, params, kw_params):
"""
Wrap the `run_on_executor` function so that when a function is executed
in a different thread, we pass the current parent Span to the intermediate
function that will execute the original call. The original function
is then executed within a `TracerStackContext` so that `tracer.trace()`
can be used as usual, both with empty or existing `Context`.
"""
def pass_context_decorator(fn):
"""
Decorator that is used to wrap the original `run_on_executor_decorator`
so that we can pass the current active context before the `executor.submit`
is called. In this case we get the `parent_span` reference and we pass
that reference to `fn` reference. Because in the outer wrapper we replace
the original call with our `traced_wrapper`, we're sure that the `parent_span`
is passed to our intermediate function and not to the user function.
"""
@wraps(fn)
def wrapper(*args, **kwargs):
# from the current context, retrieve the active span
current_ctx = ddtrace.tracer.get_call_context()
parent_span = getattr(current_ctx, '_current_span', None)
# pass the current parent span in the Future call so that
# it can be retrieved later
kwargs.update({PARENT_SPAN_KEY: parent_span})
return fn(*args, **kwargs)
return wrapper
# we expect exceptions here if the `run_on_executor` is called with
# wrong arguments; in that case we should not do anything because
# the exception must not be handled here
decorator = run_on_executor(*params, **kw_params)
# `run_on_executor` can be called with arguments; in this case we
# return an inner decorator that holds the real function that should be
# called
if decorator.__module__ == 'tornado.concurrent':
def run_on_executor_decorator(deco_fn):
def inner_traced_wrapper(*args, **kwargs):
# retrieve the parent span from the function kwargs
parent_span = kwargs.pop(PARENT_SPAN_KEY, None)
return run_executor_stack_context(deco_fn, args, kwargs, parent_span)
return pass_context_decorator(decorator(inner_traced_wrapper))
return run_on_executor_decorator
# return our wrapper function that executes an intermediate function to
# trace the real execution in a different thread
def traced_wrapper(*args, **kwargs):
# retrieve the parent span from the function kwargs
parent_span = kwargs.pop(PARENT_SPAN_KEY, None)
return run_executor_stack_context(params[0], args, kwargs, parent_span)
return pass_context_decorator(run_on_executor(traced_wrapper))
def run_executor_stack_context(fn, args, kwargs, parent_span):
"""
This intermediate function is always executed in a newly created thread. Here
using a `TracerStackContext` is legit because this function doesn't interfere
with the main thread loop. `StackContext` states are thread-local and retrieving
the context here will always yield an empty `Context`.
"""
with TracerStackContext():
ctx = ddtrace.tracer.get_call_context()
ctx._current_span = parent_span
return fn(*args, **kwargs)
def wrap_executor(tracer, fn, args, kwargs, span_name, service=None, resource=None, span_type=None):
"""
Wrap executor function used to change the default behavior of
``Tracer.wrap()`` method. A decorated Tornado function can be
a regular function or a coroutine; if a coroutine is decorated, a
span is attached to the returned ``Future`` and a callback is set
so that it will close the span when the ``Future`` is done.
"""
span = tracer.trace(span_name, service=service, resource=resource, span_type=span_type)
# catch standard exceptions raised in synchronous executions
try:
future = fn(*args, **kwargs)
# duck-typing: if it has `add_done_callback` it's a Future
# object whatever is the underlying implementation
if callable(getattr(future, 'add_done_callback', None)):
setattr(future, FUTURE_SPAN_KEY, span)
future.add_done_callback(_finish_span)
else:
# we don't have a future so the `future` variable
# holds the result of the function
span.finish()
except Exception:
span.set_traceback()
span.finish()
raise
return future
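For orientation, a minimal sketch of how the ``wrap_executor`` behaviour surfaces in user code once the integration is patched. This is illustrative only and assumes the public ddtrace and Tornado APIs of that era (``patch``, ``tracer.wrap``, ``tornado.gen``), not code from the removed module:

from ddtrace import patch, tracer
from tornado import gen

patch(tornado=True)  # installs wrap_executor via ddtrace.tracer.configure()

@tracer.wrap('notify', service='tornado-notification')
@gen.coroutine
def notify():
    # the 'notify' span is attached to the returned Future and closed by
    # _finish_span() only when the coroutine completes or raises
    yield gen.sleep(0.1)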

View File

@ -1,105 +0,0 @@
from tornado.web import HTTPError
from .constants import CONFIG_KEY, REQUEST_CONTEXT_KEY, REQUEST_SPAN_KEY
from .stack_context import TracerStackContext
from ...constants import ANALYTICS_SAMPLE_RATE_KEY
from ...ext import SpanTypes, http
from ...propagation.http import HTTPPropagator
from ...settings import config
def execute(func, handler, args, kwargs):
"""
Wrap the handler execute method so that the entire request is within the same
``TracerStackContext``. This simplifies user code when the automatic ``Context``
retrieval is used via the ``Tracer.trace()`` method.
"""
# retrieve tracing settings
settings = handler.settings[CONFIG_KEY]
tracer = settings['tracer']
service = settings['default_service']
distributed_tracing = settings['distributed_tracing']
with TracerStackContext():
# attach the context to the request
setattr(handler.request, REQUEST_CONTEXT_KEY, tracer.get_call_context())
# Read and use propagated context from HTTP headers
if distributed_tracing:
propagator = HTTPPropagator()
context = propagator.extract(handler.request.headers)
if context.trace_id:
tracer.context_provider.activate(context)
# store the request span in the request so that it can be used later
request_span = tracer.trace(
'tornado.request',
service=service,
span_type=SpanTypes.WEB
)
# set analytics sample rate
# DEV: tornado is a special case; it maintains separate configuration from the config API
analytics_enabled = settings['analytics_enabled']
if (config.analytics_enabled and analytics_enabled is not False) or analytics_enabled is True:
request_span.set_tag(
ANALYTICS_SAMPLE_RATE_KEY,
settings.get('analytics_sample_rate', True)
)
setattr(handler.request, REQUEST_SPAN_KEY, request_span)
return func(*args, **kwargs)
def on_finish(func, handler, args, kwargs):
"""
Wrap the ``RequestHandler.on_finish`` method. This is the last executed method
after the response has been sent, and it's used to retrieve and close the
current request span (if available).
"""
request = handler.request
request_span = getattr(request, REQUEST_SPAN_KEY, None)
if request_span:
# use the class name as a resource; if a handler is not available, the
# default handler class will be used so we don't pollute the resource
# space here
klass = handler.__class__
request_span.resource = '{}.{}'.format(klass.__module__, klass.__name__)
request_span.set_tag('http.method', request.method)
request_span.set_tag('http.status_code', handler.get_status())
request_span.set_tag(http.URL, request.full_url().rsplit('?', 1)[0])
if config.tornado.trace_query_string:
request_span.set_tag(http.QUERY_STRING, request.query)
request_span.finish()
return func(*args, **kwargs)
def log_exception(func, handler, args, kwargs):
"""
Wrap the ``RequestHandler.log_exception``. This method is called when an
Exception is not handled in the user code. In this case, we save the exception
in the current active span. If the Tornado ``Finish`` exception is raised, this wrapper
is not called, because ``Finish`` is used for flow control rather than to signal an error.
"""
# safe-guard: expected arguments -> log_exception(self, typ, value, tb)
value = args[1] if len(args) == 3 else None
if not value:
return func(*args, **kwargs)
# retrieve the current span
tracer = handler.settings[CONFIG_KEY]['tracer']
current_span = tracer.current_span()
if isinstance(value, HTTPError):
# Tornado uses HTTPError exceptions to stop and return a status code that
# is not a 2xx. In this case we want to check the status code to be sure that
# only 5xx are traced as errors, while any other HTTPError exception is handled as
# usual.
if 500 <= value.status_code <= 599:
current_span.set_exc_info(*args)
else:
# any other uncaught exception should be reported as error
current_span.set_exc_info(*args)
return func(*args, **kwargs)
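A minimal sketch of the application side of these wrappers. It assumes the ``datadog_trace`` settings key defined in the integration's constants module (not shown in this diff); every request handled by the application below is opened as a ``tornado.request`` span by ``execute()`` and closed by ``on_finish()``:

import tornado.web
from ddtrace import patch

patch(tornado=True)  # wraps RequestHandler._execute / on_finish / log_exception

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        # runs inside the automatically created 'tornado.request' span
        self.write('Hello, world')

settings = {
    'datadog_trace': {
        'default_service': 'my-tornado-app',
        'distributed_tracing': True,  # let execute() honour incoming trace headers
    },
}
app = tornado.web.Application([(r'/', MainHandler)], **settings)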

View File

@ -1,58 +0,0 @@
import ddtrace
import tornado
from ddtrace.vendor.wrapt import wrap_function_wrapper as _w
from . import handlers, application, decorators, template, compat, context_provider
from ...utils.wrappers import unwrap as _u
def patch():
"""
Patch the Tornado web framework so that applications, handlers and templates
are traced using the global ``ddtrace`` tracer.
"""
# patch only once
if getattr(tornado, '__datadog_patch', False):
return
setattr(tornado, '__datadog_patch', True)
# patch Application to properly initialize our settings and tracer
_w('tornado.web', 'Application.__init__', application.tracer_config)
# patch RequestHandler to trace all Tornado handlers
_w('tornado.web', 'RequestHandler._execute', handlers.execute)
_w('tornado.web', 'RequestHandler.on_finish', handlers.on_finish)
_w('tornado.web', 'RequestHandler.log_exception', handlers.log_exception)
# patch Template system
_w('tornado.template', 'Template.generate', template.generate)
# patch Python Futures if available when an Executor pool is used
compat.wrap_futures()
# configure the global tracer
ddtrace.tracer.configure(
context_provider=context_provider,
wrap_executor=decorators.wrap_executor,
)
def unpatch():
"""
Remove all tracing functions in a Tornado web application.
"""
if not getattr(tornado, '__datadog_patch', False):
return
setattr(tornado, '__datadog_patch', False)
# unpatch Tornado
_u(tornado.web.RequestHandler, '_execute')
_u(tornado.web.RequestHandler, 'on_finish')
_u(tornado.web.RequestHandler, 'log_exception')
_u(tornado.web.Application, '__init__')
_u(tornado.concurrent, 'run_on_executor')
_u(tornado.template.Template, 'generate')
# unpatch `futures`
compat.unwrap_futures()
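The wrappers registered above follow the ``wrapt`` calling convention: each wrapper receives the original function, the bound instance and the call arguments. A standalone sketch of that convention, using the upstream ``wrapt`` package rather than the copy vendored under ``ddtrace.vendor``:

from wrapt import wrap_function_wrapper

def execute(func, handler, args, kwargs):
    # 'handler' is the RequestHandler instance whose _execute() was invoked
    print('tracing request for %r' % handler)
    return func(*args, **kwargs)

wrap_function_wrapper('tornado.web', 'RequestHandler._execute', execute)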

View File

@ -1,142 +0,0 @@
import tornado
from tornado.ioloop import IOLoop
import sys
from ...context import Context
from ...provider import DefaultContextProvider
# tornado.stack_context was deprecated in Tornado 5 and removed in Tornado 6;
# use DefaultContextProvider with ContextVarContextManager for asyncio instead
_USE_STACK_CONTEXT = not (
sys.version_info >= (3, 7) and tornado.version_info >= (5, 0)
)
if _USE_STACK_CONTEXT:
from tornado.stack_context import StackContextInconsistentError, _state
class TracerStackContext(DefaultContextProvider):
"""
A context manager that manages ``Context`` instances in a thread-local state.
It must be used every time a Tornado handler or coroutine is used within a
tracing Context. It is meant to work like a traditional ``StackContext``,
preserving the state across asynchronous calls.
Every time a new manager is initialized, a new ``Context()`` is created for
this execution flow. A context created in a ``TracerStackContext`` is not
shared between different threads.
This implementation follows some suggestions provided here:
https://github.com/tornadoweb/tornado/issues/1063
"""
def __init__(self):
# DEV: skip resetting context manager since TracerStackContext is used
# as a with-statement context where we do not want to be clearing the
# current context for a thread or task
super(TracerStackContext, self).__init__(reset_context_manager=False)
self._active = True
self._context = Context()
def enter(self):
"""
Required to preserve the ``StackContext`` protocol.
"""
pass
def exit(self, type, value, traceback): # noqa: A002
"""
Required to preserve the ``StackContext`` protocol.
"""
pass
def __enter__(self):
self.old_contexts = _state.contexts
self.new_contexts = (self.old_contexts[0] + (self,), self)
_state.contexts = self.new_contexts
return self
def __exit__(self, type, value, traceback): # noqa: A002
final_contexts = _state.contexts
_state.contexts = self.old_contexts
if final_contexts is not self.new_contexts:
raise StackContextInconsistentError(
'stack_context inconsistency (may be caused by yield '
'within a "with TracerStackContext" block)')
# break the reference to allow faster GC on CPython
self.new_contexts = None
def deactivate(self):
self._active = False
def _has_io_loop(self):
"""Helper to determine if we are currently in an IO loop"""
return getattr(IOLoop._current, 'instance', None) is not None
def _has_active_context(self):
"""Helper to determine if we have an active context or not"""
if not self._has_io_loop():
return self._local._has_active_context()
else:
# we're inside a Tornado loop so the TracerStackContext is used
return self._get_state_active_context() is not None
def _get_state_active_context(self):
"""Helper to get the currently active context from the TracerStackContext"""
# we're inside a Tornado loop so the TracerStackContext is used
for stack in reversed(_state.contexts[0]):
if isinstance(stack, self.__class__) and stack._active:
return stack._context
return None
def active(self):
"""
Return the ``Context`` from the current execution flow. This method can be
used inside a Tornado coroutine to retrieve and use the current tracing context.
If used in a separate thread, the `_state` thread-local storage is used to
propagate the current Active context from the `MainThread`.
"""
if not self._has_io_loop():
# if a Tornado loop is not available, it means that this method
# has been called from synchronous code, so we can rely on
# thread-local storage
return self._local.get()
else:
# we're inside a Tornado loop so the TracerStackContext is used
return self._get_state_active_context()
def activate(self, ctx):
"""
Set the active ``Context`` for this async execution. If a ``TracerStackContext``
is not found, the context is discarded.
If used in a separate thread, the `_state` thread-local storage is used to
propagate the current Active context from the `MainThread`.
"""
if not self._has_io_loop():
# because we're outside of an asynchronous execution, we store
# the current context in a thread-local storage
self._local.set(ctx)
else:
# we're inside a Tornado loop so the TracerStackContext is used
for stack_ctx in reversed(_state.contexts[0]):
if isinstance(stack_ctx, self.__class__) and stack_ctx._active:
stack_ctx._context = ctx
return ctx
else:
# no-op when not using stack_context
class TracerStackContext(DefaultContextProvider):
def __enter__(self):
pass
def __exit__(self, *exc):
pass
def run_with_trace_context(func, *args, **kwargs):
"""
Run the given function within a traced StackContext. This function is used to
trace Tornado web handlers, but it can also be used in your code to trace
coroutine execution.
"""
with TracerStackContext():
return func(*args, **kwargs)
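A sketch of tracing an arbitrary coroutine with ``run_with_trace_context``, assuming the integration's public export (``ddtrace.contrib.tornado.run_with_trace_context``) and a Tornado version older than 6.0:

from ddtrace import tracer
from ddtrace.contrib.tornado import run_with_trace_context
from tornado import gen, ioloop

@gen.coroutine
def fetch_all():
    with tracer.trace('fetch_all'):
        # the active Context is preserved across the yield point
        yield gen.sleep(0.1)

ioloop.IOLoop.current().run_sync(lambda: run_with_trace_context(fetch_all))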

View File

@ -1,31 +0,0 @@
from tornado import template
from ddtrace import Pin
from ...ext import SpanTypes
def generate(func, renderer, args, kwargs):
"""
Wrap the ``generate`` method used in template rendering. Because the method
may be called anywhere, the execution is traced in a tracer StackContext that
inherits the current one if it's already available.
"""
# get the module pin
pin = Pin.get_from(template)
if not pin or not pin.enabled():
return func(*args, **kwargs)
# change the resource and the template name
# if it's created from a string instead of a file
if '<string>' in renderer.name:
resource = template_name = 'render_string'
else:
resource = template_name = renderer.name
# trace the original call
with pin.tracer.trace(
'tornado.template', service=pin.service, resource=resource, span_type=SpanTypes.TEMPLATE
) as span:
span.set_meta('tornado.template_name', template_name)
return func(*args, **kwargs)
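For reference, a sketch of how the ``Pin`` lookup above is typically exercised. It assumes ``patch(tornado=True)`` has wrapped ``Template.generate``; ``Pin.override`` attaches a module-level ``Pin`` so the lookup succeeds even outside a traced ``Application``:

from ddtrace import Pin, patch
from tornado import template

patch(tornado=True)

# override (or create) the module-level Pin to change the reported service
Pin.override(template, service='my-tornado-templates')

t = template.Template('<html>{{ name }}</html>')
# traced as 'tornado.template' with resource 'render_string'
print(t.generate(name='world'))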

View File

@ -1,16 +0,0 @@
# [Backward compatibility]: keep re-exporting these module functions
from ..utils.deprecation import deprecation
from ..utils.importlib import require_modules, func_name, module_name
deprecation(
name='ddtrace.contrib.util',
message='Use `ddtrace.utils.importlib` module instead',
version='1.0.0',
)
__all__ = [
'require_modules',
'func_name',
'module_name',
]
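A small sketch of what this shim does in practice: importing ``ddtrace.contrib.util`` runs the module-level ``deprecation()`` call, which emits a warning (via the standard ``warnings`` machinery, assuming that is how ``ddtrace.utils.deprecation`` reports it) pointing users at ``ddtrace.utils.importlib``:

import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    import ddtrace.contrib.util  # noqa: F401  (triggers the deprecation() call)

print([str(w.message) for w in caught])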

View File

@ -1,51 +0,0 @@
"""
The Vertica integration will trace queries made using the vertica-python
library.
Vertica will be automatically instrumented with ``patch_all``, or when using
the ``ddtrace-run`` command.
Vertica is instrumented on import. To instrument Vertica manually use the
``patch`` function. Note the ordering of the following statements::
from ddtrace import patch
patch(vertica=True)
import vertica_python
# use vertica_python like usual
To configure the Vertica integration globally you can use the ``Config`` API::
from ddtrace import config, patch
patch(vertica=True)
config.vertica['service_name'] = 'my-vertica-database'
To configure the Vertica integration on an instance-per-instance basis use the
``Pin`` API::
from ddtrace import Pin, patch, Tracer
patch(vertica=True)
import vertica_python
custom_tracer = Tracer()
conn = vertica_python.connect(**YOUR_VERTICA_CONFIG)
# override the service and tracer to be used
Pin.override(conn, service='myverticaservice', tracer=custom_tracer)
"""
from ...utils.importlib import require_modules
required_modules = ['vertica_python']
with require_modules(required_modules) as missing_modules:
if not missing_modules:
from .patch import patch, unpatch
__all__ = ['patch', 'unpatch']

View File

@ -1,2 +0,0 @@
# Service info
APP = 'vertica'

Some files were not shown because too many files have changed in this diff.