Compare commits

...

115 Commits

Author SHA1 Message Date
renovate[bot] 43fafe224e
chore(deps): update debian:bookworm docker digest to 731dd13 (#358) 2025-08-14 01:58:42 -05:00
Victor_Canard e7dadc986d
feat(kernel-collector): Support AWS IMDSv2 with IMDSv1 fallback (#357)
The kernel-collector currently fails to retrieve instance metadata on modern AWS EC2 instances where IMDSv1 is disabled by default. This makes the collector non-functional in a standard, secure AWS environment.

This fix introduces support for IMDSv2 while maintaining backward compatibility with IMDSv1.

The implementation follows this logic:
- A short-lived (2-second timeout) PUT request is made to the IMDSv2 token endpoint to fetch a session token.
- If a token is successfully retrieved, it is used in the 'X-aws-ec2-metadata-token' header for all subsequent metadata requests, which are directed to the IMDSv2 '/latest/' endpoints.
- If the token request fails (due to timeout, network error, or IMDSv1-only environment), the system gracefully falls back to the original IMDSv1 behavior, making requests without the token to the '/2016-09-02/' endpoints.
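
A minimal sketch of this token-then-fallback flow, using libcurl purely for illustration (the endpoints, header, and 2-second timeout come from the description above; the helper names are hypothetical, not the collector's actual code):

/* Hypothetical sketch, not the collector's code: IMDSv2 token fetch with
 * graceful IMDSv1 fallback. Build with: cc imds.c -lcurl */
#include <curl/curl.h>
#include <stdio.h>
#include <string.h>

struct buf { char data[256]; size_t len; };

static size_t on_data(char *p, size_t sz, size_t n, void *userdata)
{
  struct buf *b = userdata;
  size_t total = sz * n;
  size_t room = sizeof(b->data) - 1 - b->len;
  size_t take = total < room ? total : room;
  memcpy(b->data + b->len, p, take);
  b->len += take;
  b->data[b->len] = '\0';
  return total; /* claim all bytes so curl does not abort the transfer */
}

/* Short-lived PUT to the IMDSv2 token endpoint (2-second timeout). */
static int fetch_imdsv2_token(struct buf *token)
{
  CURL *c = curl_easy_init();
  if (!c) return -1;
  struct curl_slist *h =
      curl_slist_append(NULL, "X-aws-ec2-metadata-token-ttl-seconds: 21600");
  curl_easy_setopt(c, CURLOPT_URL, "http://169.254.169.254/latest/api/token");
  curl_easy_setopt(c, CURLOPT_CUSTOMREQUEST, "PUT");
  curl_easy_setopt(c, CURLOPT_HTTPHEADER, h);
  curl_easy_setopt(c, CURLOPT_TIMEOUT_MS, 2000L);
  curl_easy_setopt(c, CURLOPT_WRITEFUNCTION, on_data);
  curl_easy_setopt(c, CURLOPT_WRITEDATA, token);
  CURLcode rc = curl_easy_perform(c);
  curl_slist_free_all(h);
  curl_easy_cleanup(c);
  return (rc == CURLE_OK && token->len > 0) ? 0 : -1;
}

int main(void)
{
  struct buf token = {0}, id = {0};
  char hdr[300];
  struct curl_slist *h = NULL;
  CURL *c = curl_easy_init();
  if (!c) return 1;
  if (fetch_imdsv2_token(&token) == 0) {
    /* IMDSv2: send the session token to the /latest/ endpoints. */
    snprintf(hdr, sizeof(hdr), "X-aws-ec2-metadata-token: %s", token.data);
    h = curl_slist_append(NULL, hdr);
    curl_easy_setopt(c, CURLOPT_HTTPHEADER, h);
    curl_easy_setopt(c, CURLOPT_URL,
                     "http://169.254.169.254/latest/meta-data/instance-id");
  } else {
    /* Fallback to IMDSv1: no token, versioned /2016-09-02/ endpoints. */
    curl_easy_setopt(c, CURLOPT_URL,
                     "http://169.254.169.254/2016-09-02/meta-data/instance-id");
  }
  curl_easy_setopt(c, CURLOPT_WRITEFUNCTION, on_data);
  curl_easy_setopt(c, CURLOPT_WRITEDATA, &id);
  if (curl_easy_perform(c) == CURLE_OK)
    printf("instance-id: %s\n", id.data);
  curl_slist_free_all(h);
  curl_easy_cleanup(c);
  return 0;
}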

Fixes: #356
2025-08-13 08:49:29 -05:00
renovate[bot] 164d510f0f
chore(deps): update all patch versions (#336) 2025-08-11 11:12:02 -05:00
renovate[bot] e573ade50f
chore(deps): update docker/login-action action to v3.5.0 (#354) 2025-08-11 11:11:14 -05:00
renovate[bot] d9ec046e59
chore(deps): update actions/download-artifact action to v5 (#355) 2025-08-11 11:10:26 -05:00
Jonathan Perry 2458282067
fix(nat): add nat existing kprobe with exported symbol (#353) 2025-08-10 14:51:14 -05:00
renovate[bot] 65b256357b
chore(deps): update softprops/action-gh-release digest to f82d31e (#351)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-10 05:15:50 -05:00
renovate[bot] 869319daaf
chore(deps): update docker.io/bitnami/minideb:bookworm docker digest to c08bf19 (#352)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-10 05:13:38 -05:00
Jonathan Perry a3dabc3ebb
feat(kernel-collector)!: support CO-RE (compile once run everywhere) by transitioning from bcc to libbpf (#350)
* add vmlinux.h submodule

* add libbpf cmake target and switch #includes to vmlinux.h

* fix static assert for clang / libbpf

* add missing include

* update the kernel version macros

* fix map definitions and their accessors

* fix perf ring event output in renderc

* fix include directory

* fix bpf_log line output

* refactor checking the error codes in bpf_map_lookup_elem to keep the same behavior

* remove old checking of delete unexpected return value

* migrate calls for ntohs to bpf_ntohs helper macros

* solve integer to non-integer cast warning with explicit cast

* migrate BPF_PROG_ARRAY to libbpf

* migrate tail calls

* add networking macros not in vmlinux.h

* add bpf_tracing.h helpers for PT_REGS_PARM

* solve unsigned-to-signed assignment warning

* migrate htons (we only changed ntohs earlier)

* port tcp processor's rings

* migrate DNS perf event output

* add another define for a macro missing from vmlinux.h

* port dns_message_array

* solve errors for missing define

* solve signed-unsigned conversion warnings

* add some netfilter structs missing from vmlinux.h

* move added kernel structs to vmlinux_extensions.h and added doc

* fix string_starts_with without strlen

* add structs from previous versions

* solve multiple kernel version compatibility and missing definition errors

* fix enum read from cgroup_subsys_id

* add enum declaration for cgroup_subsys_id

* convert another bcc map lookup to libbpf format

* remove preserve_access_index attribute from enum (not allowed)

* support configuration of constants using libbpf global variables. max_pid needs to be constant however

* fix string_starts_with

* fix number of parameters by inlining

* fix bpf_log exceeding number of function parameters by encoding fields onto stack to pass to bpf_log

* support functions with many parameters

* migrate userspace code to libbpf

* more migration work

* use forward declaration for the skeleton where possible

* removed redundant includes of the skel file

* clang-format

* reorder #includes for correct ordering

* fix the too long prototype for tcp_recvmsg

* fix permission check to use libbpf

* run clang-format

* add SEC annotations to bpf programs

* remove kernel header fetching

* update kernel header test to just run the collector

* run clang format

* statically link against libbpf

* add libelf1 to kernel collector dockerfile

* run a reducer when testing the kernel collector

* remove kernel fetching from the kernel-collector-test, add libelf1

* do not cancel running workflows

* mount sysfs inside the kernel collector container, for BTF

* add host networking so kernel-collector components can reach the test reducer

* remove double "--network=host"

* extract kprobe parameters using REGS_PARM for taskstats_exit

* add license to bpf code

* change to kernel-compatible license

* try to fix load test checks

* parse kprobe parameters from ctx, first update

* parse kprobe parameters from ctx, second update

* run clang format

* fix conversion typo

* support pre-mounted /sys in kernel-collector entrypoint

* give verifier hint as to tgid range in END_SAVE_ARGS

* try using u64 for bpf_trace_printk in END_SAVE_ARGS

* output pid_tgid in bpf_trace_printk in END_SAVE_ARGS to appease the verifier

* allow mounted /sys in kernel-collector-test

* fix typo

* add &= to satisfy verifier

* make new unsigned variable for printing

* fix missing escape

* remove tgid printing in macro (cannot satisfy verifier)

* try to re-get pid_tgid

* different printk format string

* add explicit cast

* try signed printing

* print dummy

* give up on getting verifier to agree to trace in macro, count in global variables instead.

* make reads verifier safe and remove BPF_REGS_PARMS with KERNEL_VERSION (not well supported it seems)

* run clang-format

* use nc instead of reducer

* increase simple test time to 30 seconds

* switch some paths from kernel version to bpf_core_field_exists

* remove getting fd of bpf program

* remove kernel symbol resolution lookup

* remove unneeded fd tracking for kprobes

* install netcat for simple test

* add missing SEC markings

* fix program parameter parsing to use PT_REGS_PARM*

* run clang-format

* fix netcat fetching

* use CORE for msg_iter

* fix duplicate SEC markings

* fix probe naming

* annotate onret_cgroup_control with SEC

* add more missing SEC annotations

* fix PT_REGS_PARM and BPF_CORE_READ in new SEC markings

* clang format

* add BPF_CORE_READ where appropriate in udp_update_stats_impl()

* use the openbsd version of netcat

* move udp_update_stats to always inline

* simplify BPF_CORE_READ

* explicitly disable UDP tracing (was already disabled)

* add BPF_CORE_READ where appropriate

* enhance error reporting when loading tail calls

* mark tail calls with SEC("kprobe")

* add BPF_CORE_READ and PT_REGS_PARM to tail probes in render_bpf.c

* fix verifier in DNS reporting

* move reading of struct iov closer to its origin to help verifier

* fix verifier issues with string_starts_with

* run clang-format

* tackle verifier error on 5.10

* remove bpf-to-bpf calls in functions that also use tail calls so 5.4 kernels will verify

* enumerate existing cgroups using cgroup_get_from_fd

* run clang-format

* fail the simple tests if the word "error" appears in the kernel-collector output

* add __always_inline to avoid mixing bpf-to-bpf calls with tail calls (forbidden in older kernels)

* change failing BPF_CORE_READ of type to bpf_probe_read_kernel

* fix bpf_log variable

* add more __always_inline directives to allow tail calls

* run clang-format

* add __always_inline to another function where compiler complained

* add __always_inline to all functions with 6 or more parameters

* add explicit null checks after casting to pre 5.14 msg struct

* simplify verification for 5.10 kernel

* add __always_inline to functions that might be called from handle_receive_udp_skb to allow tail calls on 5.4 kernels

* simplify backwards compat of handle_kprobe__tcp_sendmsg

* run clang-format

* add more __always_inline to enable tail calls in continue_tcp_sendmsg on 5.4 kernels

* fix printouts during simple run

* remove debug print

* simplify backwards compat code in handle_kprobe__tcp_recvmsg to aid verifier on 5.10 kernels

* run clang-format

* add more __always_inline on functions we missed, for continue_tcp_sendmsg on 5.4 kernels

* fix the bpf_log in handle_kprobe__tcp_recvmsg

* remove the 4.19 kernel from the test matrix since it does not support BTF

* help verifier on 5.4 kernel limit iteration size in string_starts_with

* move bpf configuration to libbpf and remove the bpf code string handling

* fix kernel_collector_test unused variable

* remove dead code

* simplify function name and remove irrelevant docs

* remove `report_debug_events` global variable -- now unused

* remove unneeded defines and pragmas

* remove old comment

* remove tcp_*_handler wrappers, unneeded

* remove redundant LINUX_KERNEL_VERSION externs

* Simplify parameter loading in tcp_recvmsg kprobe

* simplify parameter loading in tcp_sendmsg kprobe

* remove dead compat code

* remove bcc from makefiles

* remove compilation and header fetching errors from troubleshooting

* fix bcc-related code comments

* add deprecation warning to the bcc-based tcp-processor python script

* remove kprobe cleanup -- unnecessary with libbpf

* simplify flow

* remove entrypoint error reporting

* remove probe_handler class member -- unneeded

* run clang-format

* remove backwards compatibility artifact

* simplify cgroup selection

* move variables to right scope

* remove dead code

* remove dead code

* remove prefix __ from __onret_udp_get_port_impl

* remove unused parameters

* don't re-read sk twice in on_ip_send_skb

* fix wrong porting bcc->libbpf hidden behind #ifdef

* simplify on_skb_free_datagram_locked

* remove some pre 3.12 cgroup support

* run clang-format

* simplify backwards compatibility

* simplify get_css_id

* rename handle_cgroup_processing -> handle_existing_cgroup

* simplify backwards compatibility in onret_cgroup_control

* suppress libbpf prints

* remove entrypoint_error.h include in reducer

* run clang-format

* keep entrypoint_error message printing, but do not require the enum values

* clang format

* run kernel-collector tests on every gha run

* only check out submodules in ext/

* remove extraneous dependency of kct test on kernel-collector

* fix removal of change set

* Use CMAKE_HOST_SYSTEM_PROCESSOR instead of `uname -m`

* remove redundant cmd_args parameters in entrypoint_kct.sh

* write all libbpf messages as debug in the BPF logging kind

* clang-format

* remove redundant __attribute__((preserve_access_index))

* write libbpf logs in default debug mode

* increase libbpf debug buffer size

* move LIBBPF_DEBUG messages to LOG::trace

* clang-format
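
For orientation, a minimal sketch of what a probe looks like after this migration (illustrative only; the program, map, and variable names are hypothetical rather than lifted from render_bpf.c): types come from vmlinux.h instead of fetched kernel headers, programs carry SEC() annotations, kernel fields are read with BPF_CORE_READ, parameters are taken from ctx via PT_REGS_PARM*, helpers stay __always_inline so tail calls still verify on older kernels, and load-time configuration flows through const volatile globals:

/* Illustrative CO-RE probe sketch (hypothetical names); built with
 * clang -target bpf -g and e.g. -D__TARGET_ARCH_x86 so PT_REGS_PARM*
 * resolves. */
#include "vmlinux.h"            /* generated BTF types; no kernel headers */
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_core_read.h>
#include <bpf/bpf_tracing.h>
#include <bpf/bpf_endian.h>

char LICENSE[] SEC("license") = "GPL"; /* kernel-compatible license */

/* Load-time configuration via the skeleton's .rodata, replacing bcc-era
 * string substitution of constants into the program text. */
const volatile u32 filter_pid = 0;     /* hypothetical knob */

struct {
  __uint(type, BPF_MAP_TYPE_HASH);
  __uint(max_entries, 1024);
  __type(key, u32);
  __type(value, u64);
} send_bytes SEC(".maps");             /* libbpf-style map definition */

/* __always_inline avoids bpf-to-bpf calls, which may not be mixed with
 * tail calls on older kernels such as 5.4. */
static __always_inline void record(u32 pid, u64 bytes)
{
  u64 *cur = bpf_map_lookup_elem(&send_bytes, &pid);
  if (cur)
    __sync_fetch_and_add(cur, bytes);
  else
    bpf_map_update_elem(&send_bytes, &pid, &bytes, BPF_ANY);
}

SEC("kprobe/tcp_sendmsg")
int handle_kprobe__tcp_sendmsg(struct pt_regs *ctx)
{
  /* Arguments come from ctx via PT_REGS_PARM*, not bcc's implicit args. */
  struct sock *sk = (struct sock *)PT_REGS_PARM1(ctx);
  u64 size = (u64)PT_REGS_PARM3(ctx);

  u32 pid = bpf_get_current_pid_tgid() >> 32;
  if (filter_pid && pid != filter_pid)
    return 0;

  /* CO-RE relocatable read, checked against the running kernel's BTF. */
  u16 dport = bpf_ntohs(BPF_CORE_READ(sk, __sk_common.skc_dport));
  if (dport != 0)
    record(pid, size);
  return 0;
}

Userspace would then open the generated skeleton, set the .rodata values (e.g. skel->rodata->filter_pid) before load, and attach, replacing bcc's runtime compilation.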
2025-08-10 02:35:43 -05:00
OpenTelemetry Bot 5dca6134b4
Add subscript to issue templates (#349) 2025-08-01 10:59:18 -05:00
renovate[bot] ba0d124e69
chore(deps): update docker/login-action action to v3.4.0 (#348) 2025-07-27 22:40:26 -05:00
Jonathan Perry d175ad7adb
build: move the opentelemetry-network-build-tools repo into build-tools (#347) 2025-07-25 20:39:21 -05:00
Jonathan Perry f57f9ec8b1
chore(deps): update renderc for gradle 8 and update gradle dependency versions (#346) 2025-07-25 13:36:09 -05:00
Jonathan Perry 9849d93b9b
chore(deps): update go and go dependency versions (#345)
* don't require HTTPS if pushing to localhost:5000

* fix variable scope

* add missing image pull from the temporary container registry

* clean up registry push script

* update go versions and vendor
2025-07-25 13:10:05 -05:00
Jonathan Perry 40737cbf8d
fix(ci): correctly pull containers from container builds in build-and-test.yaml (#344)
* don't require HTTPS if pushing to localhost:5000

* fix variable scope

* add missing image pull from the temporary container registry

* clean up registry push script
2025-07-25 13:09:51 -05:00
renovate[bot] d58a49cd30
chore(deps): update actions/download-artifact action to v4.3.0 (#341) 2025-07-25 10:49:14 -05:00
renovate[bot] e1db15f19c
chore(deps): update dorny/paths-filter action to v3 (#342) 2025-07-25 10:48:47 -05:00
renovate[bot] bd071c6dc2
chore(deps): update docker.io/bitnami/minideb:bookworm docker digest to 53344e9 (#340) 2025-07-25 10:39:55 -05:00
renovate[bot] 70972063df
chore(deps): pin actions/download-artifact action to d3f86a1 (#339) 2025-07-25 10:37:24 -05:00
renovate[bot] afaee40176
chore(deps): update github/codeql-action action to v3.29.4 (#334) 2025-07-25 10:36:20 -05:00
Jonathan Perry f80da3dc85
ci: use build tools with distro packages of curl, openssl, grpc, and abseil (#338)
* migrate ebpf tests to little-vm-helper

* fix actions versions

* speed up apt install

* build the container images for kernel collector, reducer and tests in separate jobs

* fix image references

* different way of speeding up apt installations

* add error handling

* get the container's internal error code, for the kernel-collector-test

* ci: remove snyk gha (#333)

* build: rename shell variable in clang-format scripts for better readability (#330)

found by @bjandras

* fix build for distro packages of curl, openssl, grpc, and abseil

* add header to include unique_ptr template

* fix by_key with initializer list

* add support for printing __int128 (for ipv6 addresses)

* add log formatter for protobuf

* fix constexpr enforcement in logging code

* fix protobuf dependencies so they won't recompile every time

* remove OpenSSL version check (we use the distro package)

* fix initializer list

* add function parameters required by new llvm version

* fix returning weak reference (a deleted function started getting enforced with llvm update)

* fix chrono logging support

* remove double template default parameter

* fix logging of optional<>

* move chrono support only to files requiring it

* update bcc for clang/llvm version 16

* use find_package for protobuf and grpc

* run clang-format with updated version (required for github actions to succeed)

* ci: disable fossa, ossf-scorecard, and trivy-scans on forks

* allow users to override their benv image

* use the same clang-format version as in benv for gha

* explicitly set clang-format binary with version

* run clang-format-16 on codebase

* use new mount structure in build-and-test.yaml

* update release directory structure

* reinstate linking workaround for zlib

* fix reference to protobuf

* fix protobuf generation dependency

* ensure protoc runs after all preparations

* add a protobuf build target

* fix overly restrictive casting in test

* remove counter_to_rate (dead code)

* removed mention of dead code

* fix debug build - requires chrono fmt

* update makefiles to use podman

* upgrade docker images to bookworm

* switch to install_packages (reduces apt state in container)

* update comment on protobuf

* add required libraries to containers

* add permissions for podman to make containers

* use podman when tagging docker images ahead of registry push
2025-07-25 08:37:12 -05:00
Jonathan Perry fa0c4fe40d
ci: use little-vm-helper for kernel tests (#337)
* migrate ebpf tests to little-vm-helper

* fix actions versions

* speed up apt install

* build the container images for kernel collector, reducer and tests in separate jobs

* fix image references

* different way of speeding up apt installations

* add error handling

* get the container's internal error code, for the kernel-collector-test
2025-07-25 08:35:25 -05:00
Jonathan Perry ba3ed58d04
build: rename shell variable in clang-format scripts for better readability (#330)
found by @bjandras
2025-07-23 11:27:16 -05:00
Jonathan Perry 30d6d15dda
ci: remove snyk gha (#333) 2025-07-23 11:26:48 -05:00
renovate[bot] f307564283
fix(deps): update xtextversion to v2.39.0 (#322)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-07-22 15:04:28 -05:00
renovate[bot] 82af5eba32
chore(deps): update actions/upload-artifact action to v4 (#331)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-07-22 14:19:07 -05:00
renovate[bot] b5b9e6fb44
chore(deps): update bitnami/minideb:bullseye docker digest to a6f3a96 (#328)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-07-22 14:12:17 -05:00
renovate[bot] 7686e06e3b
chore(deps): update aquasecurity/trivy-action action to v0.32.0 (#314)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-07-22 11:03:37 -05:00
renovate[bot] 8e9828c1bb
fix(deps): update dependency args4j:args4j to v2.37 (#317)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-07-22 11:01:33 -05:00
renovate[bot] d3bf411ef5
chore(deps): update softprops/action-gh-release digest to f2352b9 (#318)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-07-22 10:38:03 -05:00
OpenTelemetry Bot 5d072c59f1
Add minimum token permissions for all github workflow files (#332)
Co-authored-by: otelbot <197425009+otelbot@users.noreply.github.com>
Co-authored-by: Trask Stalnaker <trask.stalnaker@gmail.com>
2025-07-12 23:02:27 -05:00
OpenTelemetry Bot 591ac61ed4
Fix outdated community membership link (#327) 2025-07-08 14:34:15 -05:00
Jonathan Perry 073f9fb973
ci: update runners to ubuntu-24.04 (#329)
* build: update runners to ubuntu-24.04 from deprecated ubuntu-20.04

* update clang-format package installation

* use `clang-format` in checker script

* run clang-format

* add clang-format script

* run clang-format from ubuntu-24.04

* fix JSON request parsing in otlp grpc formatter test

* switch from camelCase to snake_case in otlp formatter verification
2025-06-27 14:43:15 -05:00
renovate[bot] 7e1b5c7edf
chore(deps): update actions/checkout action to v4 (#325)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Jonathan Perry <yonch@yonch.com>
2025-06-25 23:16:52 -05:00
renovate[bot] f3dc5d8d6b
chore(deps): update bitnami/minideb:bullseye docker digest to b78b0e1 (#323)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-06-25 23:15:13 -05:00
OpenTelemetry Bot 68fdf5b69c
Update community member listings (#326)
Co-authored-by: otelbot <197425009+otelbot@users.noreply.github.com>
2025-06-24 22:55:12 -05:00
renovate[bot] a823ef4a6e chore(deps): update dependency go to v1.24.4 2025-06-09 03:30:32 -05:00
renovate[bot] a76fca219e chore(deps): update fossas/fossa-action action to v1.7.0 2025-06-09 03:26:33 -05:00
renovate[bot] 9d496c7d35 chore(deps): update dorny/paths-filter action to v2.12.0 2025-06-02 19:58:19 -05:00
renovate[bot] 048a9fcbd3 chore(deps): update aquasecurity/trivy-action action to v0.30.0 2025-06-01 21:15:49 -05:00
renovate[bot] ab4f85fd12 chore(deps): update actions/upload-artifact action to v3.2.1 2025-06-01 21:05:28 -05:00
renovate[bot] b63e0ba19e chore(deps): update ossf/scorecard-action action to v2.4.2 2025-06-01 21:03:47 -05:00
renovate[bot] 418038039c chore(deps): update actions/checkout action to v3.6.0 2025-06-01 21:02:53 -05:00
renovate[bot] aa8860a169 chore(deps): update bitnami/minideb:bullseye docker digest to a652b44 2025-05-30 22:26:10 -05:00
renovate[bot] 886c57b669 chore(deps): update bitnami/minideb:bullseye docker digest to e8a5447 2025-05-28 22:10:26 -05:00
renovate[bot] 59764336b1 chore(deps): update github/codeql-action action to v3.28.18 2025-05-28 21:25:56 -05:00
renovate[bot] 56c3922e99 chore(deps): update softprops/action-gh-release digest to 37fd9d0 2025-05-28 08:40:57 -05:00
Jonathan Perry 05ece02299
Merge pull request #300 from open-telemetry/renovate/pin-dependencies
chore(deps): pin dependencies
2025-05-28 08:40:04 -05:00
renovate[bot] 2ef73b935a chore(deps): pin dependencies 2025-05-28 08:39:55 -05:00
Jonathan Perry e2832ceef2
Merge pull request #292 from opentelemetrybot/fossa
Add FOSSA scanning workflow
2025-05-28 08:39:26 -05:00
Jonathan Perry 0cc402a0ad
Merge branch 'main' into fossa 2025-05-28 08:39:09 -05:00
Jonathan Perry 0b2eca8d6c
Merge pull request #298 from opentelemetrybot/renovate-config 2025-05-21 22:57:05 -05:00
otelbot 395cafbd61 Add Renovate configuration 2025-05-12 13:18:24 -07:00
Jonathan Perry c2522ed850
Merge pull request #297 from opentelemetrybot/ossf-scorecard 2025-04-01 09:42:07 +01:00
otelbot a141bc94b3 Add end of file newline 2025-04-01 00:40:56 +01:00
otelbot 6a0baf8fb8 Add ossf-scorecard scanning workflow 2025-03-31 22:33:11 +01:00
Jonathan Perry 0ff70641ff
Merge pull request #296 from open-telemetry/samiura-patch-1 2025-03-29 13:57:25 +00:00
Samiur Arif 8d66032a29
Update README.md 2025-03-28 10:21:50 -07:00
otelbot 71f5c0754e Add FOSSA scanning workflow 2025-02-17 20:48:23 -08:00
Jonathan Perry eab93bef5b
Merge pull request #291 from hanshal101/fix-synk-scan 2025-02-14 13:33:33 -06:00
Hanshal Mehta 1abb28bc93 fix: snyk repository scan for cpp
Signed-off-by: Hanshal Mehta <122217807+hanshal101@users.noreply.github.com>
2025-02-10 11:07:46 +00:00
Jonathan Perry 69804ab63f
Merge pull request #290 from hanshal101/add-synk-scans 2025-02-08 09:34:16 -06:00
Hanshal Mehta 89cac7cc32 feat: add snyk repository scan
Signed-off-by: Hanshal Mehta <122217807+hanshal101@users.noreply.github.com>
2025-02-08 12:40:24 +00:00
Hanshal Mehta 6c8e3a2da8 feat: add snyk repository scan
Signed-off-by: Hanshal Mehta <122217807+hanshal101@users.noreply.github.com>
2025-02-08 12:32:01 +00:00
Jonathan Perry 84c7315ada
Merge pull request #274 from codeboten/codeboten/remove-logging-reference
examples: update references to logging exporter
2025-01-07 16:58:39 -06:00
Jonathan Perry a7258b46ae
Merge branch 'main' into codeboten/remove-logging-reference 2025-01-07 16:57:59 -06:00
Jonathan Perry 2de840b151
Merge pull request #289 from yonch/main
Update build image in build-and-test CI workflow
2025-01-07 15:00:44 -06:00
Jonathan Perry e5fa29db60 change the build env image used in the build-and-test workflow
we previously only changed the build-and-release workflow to use the container image generated by opentelemetry-network-build-tools. This also changes the build-and-test workflow.
2025-01-07 14:59:23 -06:00
Jonathan Perry 3b96af0b5e
Merge pull request #288 from yonch/main
Fix clang formatting in #286
2025-01-07 14:52:24 -06:00
Jonathan Perry e4b5be8801 run clang-format to fix lint errors 2025-01-07 14:43:13 -06:00
Jonathan Perry 15f808e339
Merge pull request #286 from yonch:bump-packages
merge build fixes by jakub-racek-swi and shivanshuraj1333
2025-01-07 12:38:42 -06:00
Jonathan Perry 47079e53c4 Merge remote-tracking branch 'shivanshuraj1333/grpc-testing' into bump-packages 2025-01-07 12:30:38 -06:00
Jonathan Perry 6c19ff1a11 build with the image that GitHub Actions produces from opentelemetry-network-build-tools 2025-01-07 11:57:14 -06:00
Jonathan Perry e146b9a6b7 Merge branch 'main' into bump-packages 2025-01-07 11:53:17 -06:00
shivanshu e73ef23519
fix: update otlp_grpc 2024-12-29 00:58:29 +05:30
shivanshu a6424b5740
fix: GUARDED_BY(stats_mu_) 2024-12-29 00:25:00 +05:30
shivanshu 59f383de6f
fix: GUARDED_BY(mu_) 2024-12-29 00:13:25 +05:30
shivanshu dde5d8104e
fix: GUARDED_BY(mu_) 2024-12-28 23:10:08 +05:30
shivanshu 7dcaff965e
fix: json modifier 2024-12-28 17:42:02 +05:30
shivanshu 10f79bbc65
fix: json modifier 2024-12-28 16:44:03 +05:30
shivanshu 3c51360ea1
fix: log modifier 2024-12-28 12:25:08 +05:30
shivanshu 155792240b
initial commit 2024-12-28 11:09:23 +05:30
Jonathan Perry 3e8320faf4
Merge pull request #281 from letian0805/fix/reducer_config_example 2024-12-15 19:26:55 -06:00
letian0805 d571ab69db fix wrong reducer example of enable-metrics and disable-metrics
Signed-off-by: letian0805 <letian0805@gmail.com>
2024-12-13 15:55:37 +08:00
Jonathan Perry 791c9f8c03
Merge pull request #273 from fidelity-contributions/Increase-bpf_max_cpus
Increase logical CPU count
2024-11-13 21:13:22 -06:00
KarthikeyanB, Arun 13d8ffc5c3 Increase logical CPU count
Signed-off-by: KarthikeyanB, Arun <Arun.KarthikeyanB@fmr.com>
2024-11-12 22:52:32 -05:00
jakub-racek-swi b61f0fffdc Merge branch 'bump-packages' of https://github.com/jakub-racek-swi/opentelemetry-network into bump-packages 2024-09-23 13:35:45 +00:00
jakub-racek-swi 9e7a319ed5 fix dynamic linking issue
Signed-off-by: jakub-racek-swi <jakub.racek@solarwinds.com>
2024-09-23 13:35:07 +00:00
Jakub Ráček 03cc02ad1d
rerun build 2024-09-23 11:05:35 +02:00
jakub-racek-swi 49d15d8211 Merge branch 'bump-packages' of https://github.com/jakub-racek-swi/opentelemetry-network into bump-packages 2024-09-23 07:18:40 +00:00
jakub-racek-swi bbbf89dd5a bump protoc
Signed-off-by: jakub-racek-swi <jakub.racek@solarwinds.com>
2024-09-23 07:18:04 +00:00
jakub-racek-swi e66d1b35fd bump go packages
Signed-off-by: jakub-racek-swi <jakub.racek@solarwinds.com>
2024-09-23 07:17:03 +00:00
Jakub Ráček beeca204cb
Update build-and-test.yaml 2024-09-23 09:11:06 +02:00
jakub-racek-swi 79ff0cd645 Set correct MessageToString return value
Signed-off-by: jakub-racek-swi <jakub.racek@solarwinds.com>
2024-09-17 13:53:19 +00:00
jakub-racek-swi fe2bbaff98 Merge branch 'bump-packages' of https://github.com/jakub-racek-swi/opentelemetry-network into bump-packages 2024-09-17 13:30:33 +00:00
jakub-racek-swi 0a1453a214 Handle MessageToJsonString return value
Signed-off-by: jakub-racek-swi <jakub.racek@solarwinds.com>
2024-09-17 13:28:49 +00:00
Jakub Ráček 24ffaf601f
Bump MacOs ver in workflow 2024-09-17 15:01:17 +02:00
Jakub Ráček 65924270e3
Change build-env path in order to test build 2024-09-17 14:49:54 +02:00
jakub-racek-swi 82f9896e08 Bump abseil and grpc
Signed-off-by: jakub-racek-swi <jakub.racek@solarwinds.com>
2024-09-12 16:22:33 +00:00
jakub-racek-swi e34d776112 remove grpc required version
Signed-off-by: jakub-racek-swi <jakub.racek@solarwinds.com>
2024-09-12 11:58:15 +00:00
jakub-racek-swi f9a0d9b171 go mod vendor
Signed-off-by: jakub-racek-swi <jakub.racek@solarwinds.com>
2024-09-12 11:52:51 +00:00
Jakub Ráček 6a2345b91e
keep local gradle 2024-09-12 10:57:25 +02:00
jakub-racek-swi b6f8f57086
bump go packages 2024-09-12 07:26:03 +00:00
jakub-racek-swi 27b63738ed Bump up packages
Signed-off-by: jakub-racek-swi <jakub.racek@solarwinds.com>
2024-09-11 14:15:28 +00:00
Alex Boten f420efc739
examples: update references to logging exporter
This exporter has been replaced by the debug exporter and will be removed soon

Signed-off-by: Alex Boten <223565+codeboten@users.noreply.github.com>
2024-09-06 14:45:56 -07:00
Jonathan Perry 71fe60b01c
Merge pull request #266 from jakub-racek-swi/main
Enable full DNS names
2024-09-03 11:20:20 -05:00
Jonathan Perry 95569c7635
Merge branch 'main' into main 2024-09-03 11:19:58 -05:00
Jonathan Perry 21df659450
Merge pull request #269 from fidelity-contributions/update-ebfp-to-be-ebpf
fix typo ebfp to ebpf
2024-08-11 20:36:24 -06:00
Zhao, Dirk cc5d1c39bb fix typo ebfp to ebpf
Signed-off-by: Zhao, Dirk <dirk.zhao@fmr.com>
2024-07-11 14:49:48 +08:00
jakub-racek-swi 40dcc8902d
sign, migrate to new branch
Signed-off-by: jakub-racek-swi <jakub.racek@solarwinds.com>
2024-05-16 15:44:35 +02:00
Jonathan Perry d7441104b8
Merge pull request #259 from yonch/yonch-change-affiliation
update maintainer company affiliation
2024-04-02 11:31:22 -05:00
Jonathan Perry 7c82169b6b fix 2024-04-02 11:29:28 -05:00
Jonathan Perry 62fd7e990b update maintainer affiliation 2024-04-02 11:23:55 -05:00
Jonathan Perry 0e33f66a3f
Merge pull request #255 from golisai/nat
fix NAT handling for IPv6 formatted IPv4 addresses
2024-03-19 09:24:15 -07:00
Sri Goli 96b49fdea1 NAT handling for IPv6 formatted IPv4 addresses 2024-03-18 11:51:20 -07:00
3033 changed files with 267191 additions and 110461 deletions

.github/CODEOWNERS

@@ -18,4 +18,4 @@
# important for validation steps
#
* @open-telemetry/ebpf-approvers
* @open-telemetry/network-approvers


@@ -54,4 +54,11 @@ body:
attributes:
label: Additional context
description: Any additional information you think may be relevant to this issue.
- type: dropdown
attributes:
label: Tip
description: This element is static, used to render a helpful sub-heading for end-users and community members to help prioritize issues. Please leave as is.
options:
- <sub>[React](https://github.blog/news-insights/product-news/add-reactions-to-pull-requests-issues-and-comments/) with 👍 to help prioritize this issue. Please use comments to provide useful context, avoiding `+1` or `me too`, to help us triage it. Learn more [here](https://opentelemetry.io/community/end-user/issue-participation/).</sub>
default: 0


@@ -22,4 +22,11 @@ body:
attributes:
label: Additional context
description: Add any other context or screenshots about the feature request here.
- type: dropdown
attributes:
label: Tip
description: This element is static, used to render a helpful sub-heading for end-users and community members to help prioritize issues. Please leave as is.
options:
- <sub>[React](https://github.blog/news-insights/product-news/add-reactions-to-pull-requests-issues-and-comments/) with 👍 to help prioritize this issue. Please use comments to provide useful context, avoiding `+1` or `me too`, to help us triage it. Learn more [here](https://opentelemetry.io/community/end-user/issue-participation/).</sub>
default: 0


@@ -8,3 +8,10 @@ body:
description: A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
validations:
required: true
- type: dropdown
attributes:
label: Tip
description: This element is static, used to render a helpful sub-heading for end-users and community members to help prioritize issues. Please leave as is.
options:
- <sub>[React](https://github.blog/news-insights/product-news/add-reactions-to-pull-requests-issues-and-comments/) with 👍 to help prioritize this issue. Please use comments to provide useful context, avoiding `+1` or `me too`, to help us triage it. Learn more [here](https://opentelemetry.io/community/end-user/issue-participation/).</sub>
default: 0


@@ -0,0 +1,278 @@
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0
name: 'Build and Push Container'
description: 'Build and push a Docker container with dependency management and registry caching'
inputs:
directory:
description: 'Directory name to build'
required: true
registry:
description: 'Container registry to use'
required: true
default: 'ghcr.io'
registry_username:
description: 'Registry username'
required: true
registry_password:
description: 'Registry password/token'
required: true
image_prefix:
description: 'Prefix for image names'
required: false
default: 'benv'
ref:
description: 'Git ref to checkout'
required: false
default: 'main'
force_rebuild:
description: 'Force rebuild and push even if image exists in registry'
required: false
default: 'false'
outputs:
image-tag:
description: 'The computed image tag'
value: ${{ steps.compute-recursive-tags.outputs.image-tag }}
full-image-tag:
description: 'The full image tag with registry'
value: ${{ steps.compute-recursive-tags.outputs.full-image-tag }}
image-exists:
description: 'Whether the image already exists in registry'
value: ${{ steps.check-exists.outputs.exists }}
build-needed:
description: 'Whether a build was needed'
value: ${{ steps.check-exists.outputs.exists == 'false' }}
runs:
using: 'composite'
steps:
- name: Checkout sources
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.ref }}
fetch-depth: 0
- name: Compute recursive tags for all directories
id: compute-recursive-tags
shell: bash
env:
DOCKER_TAG_PREFIX: ${{ github.repository_owner }}/
run: |
DIRECTORY="build-tools/${{ inputs.directory }}"
BASE_DIRECTORY="${{ inputs.directory }}"
# Define dependency mapping based on CMakeLists.txt
declare -A DEPS
DEPS["base"]=""
DEPS["bcc"]="base"
DEPS["libuv"]="base"
DEPS["cpp_misc"]="base"
DEPS["go"]="base"
DEPS["libmaxminddb"]="base"
DEPS["libbpf"]="base"
DEPS["aws_sdk"]="base"
DEPS["gcp_cpp"]="base"
DEPS["opentelemetry"]="base"
DEPS["final"]="base bcc libuv aws_sdk cpp_misc go libmaxminddb gcp_cpp opentelemetry libbpf"
# Compute direct hashes for all directories upfront
declare -A DIRECT_HASHES
ALL_DIRS="base bcc libuv cpp_misc go libmaxminddb libbpf aws_sdk gcp_cpp opentelemetry final"
echo "Computing direct hashes..." >&2
for dir in $ALL_DIRS; do
direct_hash=$(git log -1 --format=%h "build-tools/${dir}")
DIRECT_HASHES[$dir]=$direct_hash
echo "Direct hash for $dir: $direct_hash" >&2
done
# Function to compute dependency closure (all transitive dependencies)
compute_closure() {
local target="$1"
local visited_key="VISITED_$target"
# Check for circular dependency
if [[ -n "${!visited_key:-}" ]]; then
echo "ERROR: Circular dependency detected for $target" >&2
exit 1
fi
# Mark as visiting
declare -g "$visited_key=1"
# Start with direct dependencies
local deps="${DEPS[$target]:-}"
local closure_set=""
# Add direct dependencies
for dep in $deps; do
closure_set="$closure_set $dep"
# Recursively add their closures
local dep_closure=$(compute_closure "$dep")
closure_set="$closure_set $dep_closure"
done
# Remove duplicates by converting to array and back
local unique_closure=($(echo $closure_set | tr ' ' '\n' | sort -u | tr '\n' ' '))
# Unmark visiting
unset "$visited_key"
echo "${unique_closure[@]}"
}
# Function to compute recursive hash using closure approach
compute_recursive_hash() {
local dir="$1"
# Get the full dependency closure
local closure=$(compute_closure "$dir")
# Include the directory itself in the hash computation
local all_dirs_for_hash="$dir $closure"
# Sort all directories
local sorted_dirs=($(echo $all_dirs_for_hash | tr ' ' '\n' | sort -u | tr '\n' ' '))
# Concatenate their direct hashes with dashes
local hash_input=""
for d in "${sorted_dirs[@]}"; do
if [[ -n "$d" ]]; then
if [[ -n "$hash_input" ]]; then
hash_input="$hash_input-${DIRECT_HASHES[$d]}"
else
hash_input="${DIRECT_HASHES[$d]}"
fi
fi
done
# Use the dash-separated hashes directly as the tag
local final_hash="$hash_input"
echo "Closure for $dir: ${sorted_dirs[@]}" >&2
echo "Final hash for $dir: $final_hash" >&2
echo "$final_hash"
}
# Compute recursive hash for target directory
RECURSIVE_HASH=$(compute_recursive_hash "$BASE_DIRECTORY")
# Create image tag
IMAGE_TAG="${{ github.repository_owner }}/opentelemetry-network-build-tools-cache:${BASE_DIRECTORY}-${RECURSIVE_HASH}"
FULL_IMAGE_TAG="${{ inputs.registry }}/${IMAGE_TAG}"
echo "image-tag=${IMAGE_TAG}" >> $GITHUB_OUTPUT
echo "full-image-tag=${FULL_IMAGE_TAG}" >> $GITHUB_OUTPUT
echo "recursive-hash=${RECURSIVE_HASH}" >> $GITHUB_OUTPUT
echo "Computed recursive image tag: ${IMAGE_TAG}" >&2
echo "Full image tag: ${FULL_IMAGE_TAG}" >&2
echo "Recursive hash: ${RECURSIVE_HASH}" >&2
# Compute all dependency tags for build args
echo "Computing all dependency tags..." >&2
for dir in $ALL_DIRS; do
if [[ "$dir" != "$BASE_DIRECTORY" ]]; then
dir_hash=$(compute_recursive_hash "$dir")
dir_image_tag="${{ github.repository_owner }}/opentelemetry-network-build-tools-cache:${dir}-${dir_hash}"
dir_full_tag="${{ inputs.registry }}/${dir_image_tag}"
# Export as environment variable for use in build args
export "${dir}_IMAGE_TAG=${dir_full_tag}"
echo "${dir}_IMAGE_TAG=${dir_full_tag}" >> $GITHUB_OUTPUT
echo "Dependency: ${dir} -> ${dir_full_tag}" >&2
fi
done
- name: Check if image exists in registry
id: check-exists
shell: bash
run: |
FULL_IMAGE_TAG="${{ steps.compute-recursive-tags.outputs.full-image-tag }}"
if [[ "${{ inputs.force_rebuild }}" == "true" ]]; then
echo "exists=false" >> $GITHUB_OUTPUT
echo "Force rebuild enabled - will rebuild ${FULL_IMAGE_TAG} regardless of registry state"
elif docker manifest inspect "${FULL_IMAGE_TAG}" >/dev/null 2>&1; then
echo "exists=true" >> $GITHUB_OUTPUT
echo "Image ${FULL_IMAGE_TAG} already exists in registry"
else
echo "exists=false" >> $GITHUB_OUTPUT
echo "Image ${FULL_IMAGE_TAG} does not exist in registry"
fi
- name: Initialize directory submodules
if: steps.check-exists.outputs.exists == 'false'
shell: bash
run: |
DIRECTORY="build-tools/${{ inputs.directory }}"
echo "Initializing submodules for directory: ${DIRECTORY}"
# Initialize submodules for the specific directory path
git submodule update --init --recursive -- "${DIRECTORY}/"
- name: Log in to Container Registry
if: steps.check-exists.outputs.exists == 'false'
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
with:
registry: ${{ inputs.registry }}
username: ${{ inputs.registry_username }}
password: ${{ inputs.registry_password }}
- name: Build and push image
if: steps.check-exists.outputs.exists == 'false'
shell: bash
run: |
DIRECTORY="build-tools/${{ inputs.directory }}"
FULL_IMAGE_TAG="${{ steps.compute-recursive-tags.outputs.full-image-tag }}"
# Start building the docker command
BUILD_ARGS="--build-arg NPROC=$(nproc)"
# Add all dependency image tags as build args using outputs from compute-recursive-tags step
BUILD_ARGS="${BUILD_ARGS} --build-arg base_IMAGE_TAG=${{ steps.compute-recursive-tags.outputs.base_IMAGE_TAG }}"
BUILD_ARGS="${BUILD_ARGS} --build-arg bcc_IMAGE_TAG=${{ steps.compute-recursive-tags.outputs.bcc_IMAGE_TAG }}"
BUILD_ARGS="${BUILD_ARGS} --build-arg libuv_IMAGE_TAG=${{ steps.compute-recursive-tags.outputs.libuv_IMAGE_TAG }}"
BUILD_ARGS="${BUILD_ARGS} --build-arg cpp_misc_IMAGE_TAG=${{ steps.compute-recursive-tags.outputs.cpp_misc_IMAGE_TAG }}"
BUILD_ARGS="${BUILD_ARGS} --build-arg go_IMAGE_TAG=${{ steps.compute-recursive-tags.outputs.go_IMAGE_TAG }}"
BUILD_ARGS="${BUILD_ARGS} --build-arg libmaxminddb_IMAGE_TAG=${{ steps.compute-recursive-tags.outputs.libmaxminddb_IMAGE_TAG }}"
BUILD_ARGS="${BUILD_ARGS} --build-arg libbpf_IMAGE_TAG=${{ steps.compute-recursive-tags.outputs.libbpf_IMAGE_TAG }}"
BUILD_ARGS="${BUILD_ARGS} --build-arg aws_sdk_IMAGE_TAG=${{ steps.compute-recursive-tags.outputs.aws_sdk_IMAGE_TAG }}"
BUILD_ARGS="${BUILD_ARGS} --build-arg gcp_cpp_IMAGE_TAG=${{ steps.compute-recursive-tags.outputs.gcp_cpp_IMAGE_TAG }}"
BUILD_ARGS="${BUILD_ARGS} --build-arg opentelemetry_IMAGE_TAG=${{ steps.compute-recursive-tags.outputs.opentelemetry_IMAGE_TAG }}"
BUILD_ARGS="${BUILD_ARGS} --build-arg final_IMAGE_TAG=${{ steps.compute-recursive-tags.outputs.final_IMAGE_TAG }}"
# Add environment-specific build args if they exist
if [ -n "${BENV_BASE_IMAGE_DISTRO}" ]; then
BUILD_ARGS="${BUILD_ARGS} --build-arg BENV_BASE_IMAGE_DISTRO=${BENV_BASE_IMAGE_DISTRO}"
fi
if [ -n "${BENV_BASE_IMAGE_VERSION}" ]; then
BUILD_ARGS="${BUILD_ARGS} --build-arg BENV_BASE_IMAGE_VERSION=${BENV_BASE_IMAGE_VERSION}"
fi
# Add CMAKE_BUILD_TYPE (defaults to Release if not set)
CMAKE_BUILD_TYPE="${CMAKE_BUILD_TYPE:-Release}"
BUILD_ARGS="${BUILD_ARGS} --build-arg CMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}"
# Add BUILD_CFLAGS based on build type
if [ "${CMAKE_BUILD_TYPE}" = "Debug" ]; then
BUILD_ARGS="${BUILD_ARGS} --build-arg BUILD_CFLAGS='-O0 -g'"
fi
# Build the image
echo "Building image: ${FULL_IMAGE_TAG}"
echo "Build args: ${BUILD_ARGS}"
docker build -t "${FULL_IMAGE_TAG}" ${BUILD_ARGS} "${DIRECTORY}/"
# Always push intermediate builds to cache registry (dry_run only affects final Docker Hub push)
echo "Pushing image to cache registry: ${FULL_IMAGE_TAG}"
docker push "${FULL_IMAGE_TAG}"
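
As a worked example of the tagging scheme in this action (with hypothetical short hashes): if git log -1 --format=%h reports a1b2c3d for build-tools/base and 9f8e7d6 for build-tools/bcc, the dependency closure of bcc is {base}, the sorted directory list is "base bcc", the recursive hash is a1b2c3d-9f8e7d6, and the cached image tag becomes ghcr.io/<owner>/opentelemetry-network-build-tools-cache:bcc-a1b2c3d-9f8e7d6. A commit touching base therefore retags every dependent image, while a commit touching only bcc leaves the other leaf images' cache entries valid.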

.github/renovate.json5 (new file)

@@ -0,0 +1,21 @@
{
"$schema": "https://docs.renovatebot.com/renovate-schema.json",
"extends": [
"config:best-practices",
"helpers:pinGitHubActionDigestsToSemver"
],
"packageRules": [
{
"groupName": "all patch versions",
"matchUpdateTypes": ["patch"],
"schedule": ["before 8am every weekday"]
},
{
"matchUpdateTypes": ["minor", "major"],
"schedule": ["before 8am on Monday"]
}
],
"labels": [
"dependencies"
]
}


@@ -35,8 +35,11 @@ on:
type: boolean
default: false
permissions:
contents: read
env:
BENV_IMAGE: quay.io/splunko11ytest/network-explorer-debug/build-env
BENV_IMAGE: ${{ vars.BENV_IMAGE || 'docker.io/otel/opentelemetry-network-build-tools' }}
DOCKER_USERNAME: ${{ secrets.DOCKER_USERNAME }}
DOCKER_PASSWORD: ${{ secrets.DOCKER_PASSWORD }}
DOCKER_REGISTRY: ${{ vars.DOCKER_REGISTRY }}
@@ -45,16 +48,22 @@ env:
jobs:
build-and-release:
permissions:
contents: write # required for creating releases
name: Build and release
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
steps:
- name: Checkout sources
uses: actions/checkout@v3
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.ref }}
fetch-depth: 0
submodules: recursive
submodules: false
path: src
- name: Checkout ext/ submodules
run: |
cd $GITHUB_WORKSPACE/src
git submodule update --init --recursive ext/
- name: Compute version numbers
run: |
# sets environment variables for use in later steps.
@@ -97,29 +106,29 @@ jobs:
mkdir -p $GITHUB_WORKSPACE/out
docker run -t --rm \
--mount "type=bind,source=/var/run/docker.sock,destination=/var/run/docker.sock" \
--mount "type=bind,source=$GITHUB_WORKSPACE/src,destination=/root/src,readonly" \
--mount "type=bind,source=$GITHUB_WORKSPACE/out,destination=/root/out" \
--env EBPF_NET_SRC_ROOT=/root/src \
--mount "type=bind,source=$GITHUB_WORKSPACE/src,destination=/home/user/src,readonly" \
--mount "type=bind,source=$GITHUB_WORKSPACE/out,destination=/home/user/out" \
--env EBPF_NET_SRC_ROOT=/home/user/src \
$BENV_IMAGE \
./build.sh pipeline-docker
- name: Build packages
run: |
docker run -t --rm \
--mount "type=bind,source=$GITHUB_WORKSPACE/src,destination=/root/src,readonly" \
--mount "type=bind,source=$GITHUB_WORKSPACE/out,destination=/root/out" \
--env EBPF_NET_SRC_ROOT=/root/src \
--workdir /root/out \
--mount "type=bind,source=$GITHUB_WORKSPACE/src,destination=/home/user/src,readonly" \
--mount "type=bind,source=$GITHUB_WORKSPACE/out,destination=/home/user/out" \
--env EBPF_NET_SRC_ROOT=/home/user/src \
--workdir /home/user/out \
$BENV_IMAGE \
cpack -G 'RPM;DEB'
- name: Upload packages to GitHub Action artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: packages
path: |
out/opentelemetry-ebpf-*.rpm
out/opentelemetry-ebpf-*.deb
- name: Upload packages to Release
uses: softprops/action-gh-release@c9b46fe7aad9f02afd89b12450b780f52dacfb2d
uses: softprops/action-gh-release@f82d31e53e61a962573dd0c5fcd6b446ca78871f
if: ${{ !inputs.dry_run }}
with:
tag_name: ${{ env.github_tag }}


@@ -10,16 +10,19 @@ on:
pull_request:
paths:
env:
BENV_IMAGE: quay.io/splunko11ytest/network-explorer-debug/build-env
permissions:
contents: read
concurrency:
group: build-and-test-${{ github.event.pull_request_number || github.ref }}
cancel-in-progress: true
env:
BENV_IMAGE: ${{ vars.BENV_IMAGE || 'docker.io/otel/opentelemetry-network-build-tools' }}
# concurrency:
# group: build-and-test-${{ github.event.pull_request_number || github.ref }}
# cancel-in-progress: true
jobs:
clang-format-check:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
name: clang-format-check
steps:
@@ -31,7 +34,7 @@ jobs:
echo "$GITHUB_CONTEXT"
- name: Check out the codebase
uses: actions/checkout@v3
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Get current date
id: date
@@ -39,8 +42,12 @@
- name: Runs format checker
run: |
# disable man page updates for faster apt install
echo "set man-db/auto-update false" | sudo debconf-communicate || true
sudo dpkg-reconfigure man-db
sudo apt update
sudo apt install -y clang-format-11
sudo apt install -y --no-install-recommends clang-format-16
cd ${{ github.workspace }}
./.github/workflows/scripts/check-clang-format.sh
@@ -49,11 +56,11 @@
build-reducer:
name: build-reducer
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
needs: [clang-format-check]
steps:
- name: Check out the codebase
uses: actions/checkout@v3
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
@@ -63,22 +70,45 @@
run: |
echo "github.workspace = ${{ github.workspace }}"
docker pull $BENV_IMAGE
git submodule update --init --recursive
git submodule update --init --recursive ext/
# Start local registry for the build process
docker run -d -p 5000:5000 --name registry docker.io/library/registry:2
# Build reducer with registry access
docker run -t \
--rm \
--mount "type=bind,source=/var/run/docker.sock,destination=/var/run/docker.sock" \
--mount "type=bind,source=$(git rev-parse --show-toplevel),destination=/root/src,readonly" \
--env EBPF_NET_SRC_ROOT=/root/src \
--mount "type=bind,source=$(git rev-parse --show-toplevel),destination=/home/user/src,readonly" \
--env EBPF_NET_SRC_ROOT=/home/user/src \
--network host \
--privileged \
$BENV_IMAGE \
./build.sh reducer
./build.sh reducer-docker-registry
# Export reducer container
mkdir -p container-exports
docker pull localhost:5000/reducer
docker save localhost:5000/reducer > container-exports/reducer.tar
# Clean up registry
docker stop registry
docker rm registry
- name: Upload reducer container
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: reducer-container
path: container-exports/reducer.tar
if-no-files-found: error
retention-days: 1
build-kernel-collector:
name: build-kernel-collector
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
needs: [clang-format-check]
steps:
- name: Check out the codebase
uses: actions/checkout@v3
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
@@ -88,22 +118,93 @@
run: |
echo "github.workspace = ${{ github.workspace }}"
docker pull $BENV_IMAGE
git submodule update --init --recursive
git submodule update --init --recursive ext/
# Start local registry for the build process
docker run -d -p 5000:5000 --name registry docker.io/library/registry:2
# Build kernel-collector with registry access
docker run -t \
--rm \
--mount "type=bind,source=/var/run/docker.sock,destination=/var/run/docker.sock" \
--mount "type=bind,source=$(git rev-parse --show-toplevel),destination=/root/src,readonly" \
--env EBPF_NET_SRC_ROOT=/root/src \
--mount "type=bind,source=$(git rev-parse --show-toplevel),destination=/home/user/src,readonly" \
--env EBPF_NET_SRC_ROOT=/home/user/src \
--network host \
--privileged \
$BENV_IMAGE \
./build.sh kernel-collector
./build.sh kernel-collector-docker-registry
# Export kernel-collector container
mkdir -p container-exports
docker pull localhost:5000/kernel-collector
docker save localhost:5000/kernel-collector > container-exports/kernel-collector.tar
# Clean up registry
docker stop registry
docker rm registry
build-k8s-relay:
name: build-k8s-relay
runs-on: ubuntu-20.04
- name: Upload kernel collector container
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kernel-collector-container
path: container-exports/kernel-collector.tar
if-no-files-found: error
retention-days: 1
build-kernel-collector-test:
name: build-kernel-collector-test
runs-on: ubuntu-24.04
needs: [clang-format-check]
steps:
- name: Check out the codebase
uses: actions/checkout@v3
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
- name: build kernel-collector-test container
env:
PASS: ${{ secrets.DOCKER_PASSWORD }}
run: |
echo "github.workspace = ${{ github.workspace }}"
docker pull $BENV_IMAGE
git submodule update --init --recursive ext/
# Start local registry for the build process
docker run -d -p 5000:5000 --name registry docker.io/library/registry:2
# Build kernel-collector-test with registry access
docker run -t \
--rm \
--mount "type=bind,source=$(git rev-parse --show-toplevel),destination=/home/user/src,readonly" \
--env EBPF_NET_SRC_ROOT=/home/user/src \
--network host \
--privileged \
$BENV_IMAGE \
./build.sh kernel-collector-test-docker-registry
# Export kernel-collector-test container
mkdir -p container-exports
docker pull localhost:5000/kernel-collector-test
docker save localhost:5000/kernel-collector-test > container-exports/kernel-collector-test.tar
# Clean up registry
docker stop registry
docker rm registry
- name: Upload kernel-collector-test container
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kernel-collector-test-container
path: container-exports/kernel-collector-test.tar
if-no-files-found: error
retention-days: 1
build-k8s-relay:
name: build-k8s-relay
runs-on: ubuntu-24.04
needs: [clang-format-check]
steps:
- name: Check out the codebase
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
@@ -111,22 +212,22 @@
run: |
echo "github.workspace = ${{ github.workspace }}"
docker pull $BENV_IMAGE
git submodule update --init --recursive
git submodule update --init --recursive ext/
docker run -t \
--rm \
--mount "type=bind,source=/var/run/docker.sock,destination=/var/run/docker.sock" \
--mount "type=bind,source=$(git rev-parse --show-toplevel),destination=/root/src,readonly" \
--env EBPF_NET_SRC_ROOT=/root/src \
--mount "type=bind,source=$(git rev-parse --show-toplevel),destination=/home/user/src,readonly" \
--env EBPF_NET_SRC_ROOT=/home/user/src \
$BENV_IMAGE \
./build.sh k8s-relay
build-cloud-collector:
name: build-cloud-collector
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
needs: [clang-format-check]
steps:
- name: Check out the codebase
uses: actions/checkout@v3
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
@@ -134,22 +235,22 @@
run: |
echo "github.workspace = ${{ github.workspace }}"
docker pull $BENV_IMAGE
git submodule update --init --recursive
git submodule update --init --recursive ext/
docker run -t \
--rm \
--mount "type=bind,source=/var/run/docker.sock,destination=/var/run/docker.sock" \
--mount "type=bind,source=$(git rev-parse --show-toplevel),destination=/root/src,readonly" \
--env EBPF_NET_SRC_ROOT=/root/src \
--mount "type=bind,source=$(git rev-parse --show-toplevel),destination=/home/user/src,readonly" \
--env EBPF_NET_SRC_ROOT=/home/user/src \
$BENV_IMAGE \
./build.sh cloud-collector
build-k8s-watcher:
name: build-k8s-watcher
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
needs: [clang-format-check]
steps:
- name: Check out the codebase
uses: actions/checkout@v3
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
@@ -157,34 +258,34 @@
run: |
echo "github.workspace = ${{ github.workspace }}"
docker pull $BENV_IMAGE
git submodule update --init --recursive
git submodule update --init --recursive ext/
docker run -t \
--rm \
--mount "type=bind,source=/var/run/docker.sock,destination=/var/run/docker.sock" \
--mount "type=bind,source=$(git rev-parse --show-toplevel),destination=/root/src,readonly" \
--env EBPF_NET_SRC_ROOT=/root/src \
--mount "type=bind,source=$(git rev-parse --show-toplevel),destination=/home/user/src,readonly" \
--env EBPF_NET_SRC_ROOT=/home/user/src \
$BENV_IMAGE \
./build.sh k8s-watcher
build-run-unit-tests:
name: build-run-unit-tests
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
needs: [clang-format-check]
steps:
- name: Check out the codebase
uses: actions/checkout@v3
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
- name: run unit tests
run: |
echo "github.workspace = ${{ github.workspace }}"
docker pull $BENV_IMAGE
git submodule update --init --recursive
git submodule update --init --recursive ext/
docker run -t \
--rm \
--mount "type=bind,source=/var/run/docker.sock,destination=/var/run/docker.sock" \
--mount "type=bind,source=$(git rev-parse --show-toplevel),destination=/root/src,readonly" \
--env EBPF_NET_SRC_ROOT=/root/src \
--mount "type=bind,source=$(git rev-parse --show-toplevel),destination=/home/user/src,readonly" \
--env EBPF_NET_SRC_ROOT=/home/user/src \
--env ARGS="--output-on-failure --repeat until-pass:3" \
--env SPDLOG_LEVEL="trace" \
$BENV_IMAGE \
@@ -192,175 +293,281 @@
build-run-unit-tests-with-asan-and-debug-flags:
name: build-run-unit-tests-with-asan-and-debug-flags
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
needs: [clang-format-check]
steps:
- name: Check out the codebase
uses: actions/checkout@v3
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
- name: build unit tests with asan and debug flags on then run all tests
run: |
docker pull $BENV_IMAGE
git submodule update --init --recursive
git submodule update --init --recursive ext/
docker run -t \
--rm \
--mount "type=bind,source=/var/run/docker.sock,destination=/var/run/docker.sock" \
--mount "type=bind,source=$(git rev-parse --show-toplevel),destination=/root/src,readonly" \
--env EBPF_NET_SRC_ROOT=/root/src \
--mount "type=bind,source=$(git rev-parse --show-toplevel),destination=/home/user/src,readonly" \
--env EBPF_NET_SRC_ROOT=/home/user/src \
--env ARGS="--output-on-failure --repeat until-pass:3" \
--env SPDLOG_LEVEL="trace" \
$BENV_IMAGE \
./build.sh --debug --asan unit_tests test
run-kernel-header-tests:
name: run-kernel-header-tests
needs: [clang-format-check]
runs-on: macos-12
env:
EBPF_NET_SRC_ROOT: ${{ github.workspace }}
run-kernel-collector-simple-tests:
name: run-kernel-collector-simple-tests
needs: [build-kernel-collector]
runs-on: ubuntu-24.04
strategy:
fail-fast: false
matrix:
include:
# renovate: datasource=docker depName=quay.io/lvh-images/complexity-test
- kernel: '5.4-20250721.013324'
description: 'Kernel 5.4'
# renovate: datasource=docker depName=quay.io/lvh-images/complexity-test
- kernel: '5.10-20250507.063028'
description: 'Kernel 5.10'
# renovate: datasource=docker depName=quay.io/lvh-images/complexity-test
- kernel: '5.15-20250507.063028'
description: 'Kernel 5.15'
# renovate: datasource=docker depName=quay.io/lvh-images/complexity-test
- kernel: '6.1-20250507.063028'
description: 'Kernel 6.1'
# renovate: datasource=docker depName=quay.io/lvh-images/complexity-test
- kernel: '6.6-20250507.063028'
description: 'Kernel 6.6'
# renovate: datasource=docker depName=quay.io/lvh-images/complexity-test
- kernel: '6.12-20250507.063028'
description: 'Kernel 6.12'
timeout-minutes: 10
steps:
- name: Check out the codebase
uses: actions/checkout@v3
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
- uses: dorny/paths-filter@v2
id: changes
- name: Download kernel-collector container
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
with:
filters: |
kernel:
- 'channel/**'
- 'cmake/**'
- 'collector/kernel/**'
- 'common/**'
- 'config/**'
- 'ext/**'
- 'geoip/**'
- 'jitbuf/**'
- 'otlp/**'
- 'platform/**'
- 'render/**'
- 'renderc/**'
- 'scheduling/**'
- 'test/kernel/**'
- 'util/**'
github:
- '.github/**'
name: kernel-collector-container
path: ./container-exports
- name: Run kernel header tests on multiple linux distributions
id: run-kernel-header-tests
if: steps.changes.outputs.kernel == 'true' || steps.changes.outputs.github == 'true'
- name: Run kernel collector simple tests on ${{ matrix.description }}
uses: yonch/little-vm-helper@main
with:
test-name: kernel-collector-simple-test-${{ matrix.kernel }}
image: 'complexity-test'
image-version: ${{ matrix.kernel }}
host-mount: ./
images-folder-parent: "/tmp"
cpu: 2
mem: 2G
cpu-kind: 'host,pmu=on'
lvh-version: "v0.0.23"
install-dependencies: 'true'
verbose: 'true'
cmd: |
set -e # Exit on any error
cd /host
# Load container images
docker load < container-exports/kernel-collector.tar
# Start nc listener
apt-get update && apt-get install -y netcat-openbsd
echo "Starting netcat listener on port 8000..."
nc -vl 8000 &
nc_pid=$!
echo "NC listener started with PID: $nc_pid"
# Wait a moment for nc to start
sleep 2
# Test: Verify kernel collector loads successfully with libbpf
echo "=== Kernel Collector Simple Test with libbpf ==="
# Run kernel collector and verify it starts successfully
container_id=$(docker create \
--name "test-kernel-collector-libbpf" \
--env EBPF_NET_INTAKE_PORT="8000" \
--env EBPF_NET_INTAKE_HOST="127.0.0.1" \
--env EBPF_NET_HOST_DIR="/hostfs" \
--privileged --pid host --network host \
--volume /sys/fs/cgroup:/hostfs/sys/fs/cgroup \
--volume /etc:/hostfs/etc \
--volume /var/run/docker.sock:/var/run/docker.sock \
localhost:5000/kernel-collector --log-console --debug)
echo "Starting kernel collector and running for 30 seconds..."
docker start $container_id &
collector_pid=$!
# Wait for 30 seconds
sleep 30
# Check if container is still running
echo Checking if container is still running:
if docker ps --filter "id=$container_id" --filter "status=running" --quiet > /dev/null; then
echo "✓ Kernel collector loaded successfully and ran for 30 seconds"
echo "---Kernel collector logs:"
collector_logs=$(docker logs $container_id 2>&1 || true)
echo "$collector_logs"
# Check for error strings in the logs (exclude GCP metadata fetch errors which are expected)
if echo "$collector_logs" | grep -i "error" | grep -v "Unable to fetch GCP metadata: error while fetching Google Cloud Platform instance metadata" > /dev/null 2>&1; then
echo "✗ Found 'error' in kernel collector output - test failed"
docker stop $container_id || true
docker rm $container_id || true
# Stop nc listener
kill $nc_pid || true
exit 1
fi
docker stop $container_id || true
docker rm $container_id || true
# Stop nc listener
kill $nc_pid || true
exit 0
else
echo "✗ Kernel collector failed to run properly"
echo "---Kernel collector logs:"
docker logs $container_id || true
docker rm $container_id || true
# Stop nc listener
kill $nc_pid || true
exit 1
fi
- name: Stop qemu
if: always()
run: |
sudo spctl --add /usr/local/bin/brew
# The following four lines work around brew link failures seen while executing brew update.
rm /usr/local/bin/2to3*
rm /usr/local/bin/idle*
rm /usr/local/bin/pydoc*
rm /usr/local/bin/python3*
brew update
brew install podman
sudo spctl --add /usr/local/bin/podman
podman machine init --rootful --cpus 3 --disk-size 14 --memory 12384
podman machine start
podman pull $BENV_IMAGE
podman machine ssh 'cat > /etc/containers/registries.conf.d/localhost.conf' <<EOF
[[registry]]
location = "localhost:5000"
insecure = true
EOF
podman machine ssh systemctl restart podman
podman container run -dt -p 5000:5000 --name registry docker.io/library/registry:2
podman info
git submodule update --init --recursive
podman run -t --rm \
--mount type=bind,source=$PWD,destination=/root/src,readonly \
--mount type=bind,source=/var/run/docker.sock,destination=/var/run/docker.sock \
--env EBPF_NET_SRC_ROOT=/root/src \
--name benv \
--privileged \
$BENV_IMAGE \
./build.sh -j 3 reducer-docker-registry kernel-collector-docker-registry
vagrant plugin install vagrant-sshfs
vagrant plugin install vagrant-scp
./test/kernel/run-tests.sh --kernel-header-test
sudo pkill -f qemu-system-x86_64 || true
run-kernel-collector-tests:
name: run-kernel-collector-tests
needs: [clang-format-check]
runs-on: macos-12
env:
EBPF_NET_SRC_ROOT: ${{ github.workspace }}
needs: [build-reducer, build-kernel-collector-test]
runs-on: ubuntu-24.04
strategy:
fail-fast: false
matrix:
include:
# renovate: datasource=docker depName=quay.io/lvh-images/complexity-test
- kernel: '5.4-20250721.013324'
description: 'Kernel 5.4'
# renovate: datasource=docker depName=quay.io/lvh-images/complexity-test
- kernel: '5.10-20250507.063028'
description: 'Kernel 5.10'
# renovate: datasource=docker depName=quay.io/lvh-images/complexity-test
- kernel: '5.15-20250507.063028'
description: 'Kernel 5.15'
# renovate: datasource=docker depName=quay.io/lvh-images/complexity-test
- kernel: '6.1-20250507.063028'
description: 'Kernel 6.1'
# renovate: datasource=docker depName=quay.io/lvh-images/complexity-test
- kernel: '6.6-20250507.063028'
description: 'Kernel 6.6'
# renovate: datasource=docker depName=quay.io/lvh-images/complexity-test
- kernel: '6.12-20250507.063028'
description: 'Kernel 6.12'
timeout-minutes: 10
steps:
- name: Check out the codebase
uses: actions/checkout@v3
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
- uses: dorny/paths-filter@v2
id: changes
- name: Download reducer container
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
with:
filters: |
kernel:
- 'channel/**'
- 'cmake/**'
- 'collector/kernel/**'
- 'common/**'
- 'config/**'
- 'ext/**'
- 'geoip/**'
- 'jitbuf/**'
- 'otlp/**'
- 'platform/**'
- 'render/**'
- 'renderc/**'
- 'scheduling/**'
- 'test/kernel/**'
- 'util/**'
github:
- '.github/**'
name: reducer-container
path: ./container-exports
- name: Run kernel_collector_test on multiple linux distributions
id: run-kernel-collector-test
if: steps.changes.outputs.kernel == 'true' || steps.changes.outputs.github == 'true'
- name: Download kernel-collector-test container
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
with:
name: kernel-collector-test-container
path: ./container-exports
- name: Run kernel collector tests on ${{ matrix.description }}
uses: yonch/little-vm-helper@main
with:
test-name: kernel-collector-test-${{ matrix.kernel }}
image: 'complexity-test'
image-version: ${{ matrix.kernel }}
host-mount: ./
images-folder-parent: "/tmp"
cpu: 2
mem: 2G
cpu-kind: 'host,pmu=on'
lvh-version: "v0.0.23"
install-dependencies: 'true'
verbose: 'true'
cmd: |
set -e # Exit on any error
cd /host
# Load container images
docker load < container-exports/reducer.tar
docker load < container-exports/kernel-collector-test.tar
# Create data directory
mkdir -p data
# Start reducer
reducer_id=$(docker run --detach --rm \
--network=host \
localhost:5000/reducer \
--port 8000 \
--prom 0.0.0.0:7000 \
--partitions-per-shard 1 \
--num-ingest-shards=1 \
--num-matching-shards=1 \
--num-aggregation-shards=1 \
--enable-aws-enrichment \
--enable-otlp-grpc-metrics \
--log-console \
--debug)
echo "Reducer started with ID: $reducer_id"
# Wait a moment for reducer to start
sleep 5
# Run kernel collector test
container_id=$(docker create -t --rm \
--env EBPF_NET_HOST_DIR="/hostfs" \
--privileged \
--network host \
--volume /sys/fs/cgroup:/hostfs/sys/fs/cgroup \
--volume /usr/src:/hostfs/usr/src \
--volume /lib/modules:/hostfs/lib/modules \
--volume /etc:/hostfs/etc \
--volume /var/cache:/hostfs/cache \
--volume /var/run/docker.sock:/var/run/docker.sock \
--env EBPF_NET_KERNEL_HEADERS_AUTO_FETCH="true" \
--env EBPF_NET_EXPORT_BPF_SRC_FILE="/hostfs/data/bpf.src.c" \
--env EBPF_NET_MINIDUMP_DIR="/hostfs/data/minidump" \
--volume "$(pwd)/data:/hostfs/data" \
localhost:5000/kernel-collector-test \
--log-console)
echo "Starting kernel collector test..."
docker start -a $container_id
set +e # disable exit on error
docker wait $container_id
test_exit_code=$?
set -e # re-enable exit on error
# Stop reducer
docker stop $reducer_id || true
echo "Test completed with exit code: $test_exit_code"
exit $test_exit_code
- name: Stop qemu
if: always()
run: |
sudo spctl --add /usr/local/bin/brew
# The following four lines work around brew link failures seen while executing brew update.
rm /usr/local/bin/2to3*
rm /usr/local/bin/idle*
rm /usr/local/bin/pydoc*
rm /usr/local/bin/python3*
brew update
brew install podman
sudo spctl --add /usr/local/bin/podman
podman machine init --rootful --cpus 3 --disk-size 14 --memory 12348
podman machine start
podman pull $BENV_IMAGE
podman machine ssh 'cat > /etc/containers/registries.conf.d/localhost.conf' <<EOF
[[registry]]
location = "localhost:5000"
insecure = true
EOF
podman machine ssh systemctl restart podman
podman container run -dt -p 5000:5000 --name registry docker.io/library/registry:2
podman info
git submodule update --init --recursive
podman run -t --rm \
--mount type=bind,source=$PWD,destination=/root/src,readonly \
--mount type=bind,source=/var/run/docker.sock,destination=/var/run/docker.sock \
--env EBPF_NET_SRC_ROOT=/root/src \
--name benv \
--privileged \
$BENV_IMAGE \
./build.sh -j 3 kernel-collector-test-docker-registry
vagrant plugin install vagrant-sshfs
vagrant plugin install vagrant-scp
./test/kernel/run-tests.sh --kernel-collector-test
sudo pkill -f qemu-system-x86_64 || true


@ -0,0 +1,361 @@
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0
name: create-build-tools-container
run-name: Create the opentelemetry-network-build-tools container
on:
push:
branches:
- main
paths:
- 'build-tools/**'
- '.github/workflows/build_and_push_parallel.yml'
workflow_dispatch:
inputs:
ref:
description: "Tag, branch or SHA to checkout"
required: true
type: string
default: "main"
image_prefix:
description: "Prefix to use for destination image name"
required: false
type: string
default: "opentelemetry-network-"
additional_tag:
description: "Additional tag to use when pushing to docker repository"
required: false
type: string
dry_run:
description: "Build everything but don't actually push to repository"
required: false
type: boolean
default: false
registry_workspace:
description: "Registry workspace/namespace to push final image to"
required: false
type: string
default: "otel"
force_rebuild:
description: "Force rebuild all containers (ignore cache)"
required: false
type: boolean
default: false
permissions:
contents: read
packages: write
env:
CACHE_REGISTRY: ghcr.io
FINAL_REGISTRY: docker.io
IMAGE_PREFIX: ${{ inputs.image_prefix || 'opentelemetry-network-' }}
DOCKER_USERNAME: ${{ secrets.DOCKER_USERNAME }}
DOCKER_PASSWORD: ${{ secrets.DOCKER_PASSWORD }}
DOCKER_NAMESPACE: ${{ inputs.registry_workspace || 'otel' }}
DRY_RUN: ${{ github.event_name != 'workflow_dispatch' || inputs.dry_run }}
REF: ${{ inputs.ref || github.ref }}
FORCE_REBUILD: ${{ inputs.force_rebuild || false }}
jobs:
build-base:
name: Build base image
runs-on: ubuntu-24.04
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ env.REF }}
- name: Build and push base image
uses: ./.github/actions/build-tools-single-stage/
with:
directory: base
registry: ${{ env.CACHE_REGISTRY }}
registry_username: ${{ github.actor }}
registry_password: ${{ secrets.GITHUB_TOKEN }}
ref: ${{ env.REF }}
force_rebuild: ${{ env.FORCE_REBUILD }}
build-bcc:
name: Build bcc image
runs-on: ubuntu-24.04
needs: build-base
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ env.REF }}
- name: Build and push bcc image
uses: ./.github/actions/build-tools-single-stage/
with:
directory: bcc
registry: ${{ env.CACHE_REGISTRY }}
registry_username: ${{ github.actor }}
registry_password: ${{ secrets.GITHUB_TOKEN }}
ref: ${{ env.REF }}
force_rebuild: ${{ env.FORCE_REBUILD }}
build-libuv:
name: Build libuv image
runs-on: ubuntu-24.04
needs: build-base
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ env.REF }}
- name: Build and push libuv image
uses: ./.github/actions/build-tools-single-stage/
with:
directory: libuv
registry: ${{ env.CACHE_REGISTRY }}
registry_username: ${{ github.actor }}
registry_password: ${{ secrets.GITHUB_TOKEN }}
ref: ${{ env.REF }}
force_rebuild: ${{ env.FORCE_REBUILD }}
build-cpp-misc:
name: Build cpp_misc image
runs-on: ubuntu-24.04
needs: build-base
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ env.REF }}
- name: Build and push cpp_misc image
uses: ./.github/actions/build-tools-single-stage/
with:
directory: cpp_misc
registry: ${{ env.CACHE_REGISTRY }}
registry_username: ${{ github.actor }}
registry_password: ${{ secrets.GITHUB_TOKEN }}
ref: ${{ env.REF }}
force_rebuild: ${{ env.FORCE_REBUILD }}
build-go:
name: Build go image
runs-on: ubuntu-24.04
needs: build-base
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ env.REF }}
- name: Build and push go image
uses: ./.github/actions/build-tools-single-stage/
with:
directory: go
registry: ${{ env.CACHE_REGISTRY }}
registry_username: ${{ github.actor }}
registry_password: ${{ secrets.GITHUB_TOKEN }}
ref: ${{ env.REF }}
force_rebuild: ${{ env.FORCE_REBUILD }}
build-libmaxminddb:
name: Build libmaxminddb image
runs-on: ubuntu-24.04
needs: build-base
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ env.REF }}
- name: Build and push libmaxminddb image
uses: ./.github/actions/build-tools-single-stage/
with:
directory: libmaxminddb
registry: ${{ env.CACHE_REGISTRY }}
registry_username: ${{ github.actor }}
registry_password: ${{ secrets.GITHUB_TOKEN }}
ref: ${{ env.REF }}
force_rebuild: ${{ env.FORCE_REBUILD }}
build-libbpf:
name: Build libbpf image
runs-on: ubuntu-24.04
needs: build-base
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ env.REF }}
- name: Build and push libbpf image
uses: ./.github/actions/build-tools-single-stage/
with:
directory: libbpf
registry: ${{ env.CACHE_REGISTRY }}
registry_username: ${{ github.actor }}
registry_password: ${{ secrets.GITHUB_TOKEN }}
ref: ${{ env.REF }}
force_rebuild: ${{ env.FORCE_REBUILD }}
build-aws-sdk:
name: Build aws_sdk image
runs-on: ubuntu-24.04
needs: [build-base]
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ env.REF }}
- name: Build and push aws_sdk image
uses: ./.github/actions/build-tools-single-stage/
with:
directory: aws_sdk
registry: ${{ env.CACHE_REGISTRY }}
registry_username: ${{ github.actor }}
registry_password: ${{ secrets.GITHUB_TOKEN }}
ref: ${{ env.REF }}
force_rebuild: ${{ env.FORCE_REBUILD }}
build-gcp-cpp:
name: Build gcp_cpp image
runs-on: ubuntu-24.04
needs: [build-base]
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ env.REF }}
- name: Build and push gcp_cpp image
uses: ./.github/actions/build-tools-single-stage/
with:
directory: gcp_cpp
registry: ${{ env.CACHE_REGISTRY }}
registry_username: ${{ github.actor }}
registry_password: ${{ secrets.GITHUB_TOKEN }}
ref: ${{ env.REF }}
force_rebuild: ${{ env.FORCE_REBUILD }}
build-opentelemetry:
name: Build opentelemetry image
runs-on: ubuntu-24.04
needs: [build-base]
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ env.REF }}
- name: Build and push opentelemetry image
uses: ./.github/actions/build-tools-single-stage/
with:
directory: opentelemetry
registry: ${{ env.CACHE_REGISTRY }}
registry_username: ${{ github.actor }}
registry_password: ${{ secrets.GITHUB_TOKEN }}
ref: ${{ env.REF }}
force_rebuild: ${{ env.FORCE_REBUILD }}
build-final:
name: Build final image
runs-on: ubuntu-24.04
needs: [
build-base,
build-bcc,
build-libuv,
build-aws-sdk,
build-cpp-misc,
build-go,
build-libmaxminddb,
build-gcp-cpp,
build-opentelemetry,
build-libbpf
]
outputs:
image-tag: ${{ steps.build.outputs.image-tag }}
full-image-tag: ${{ steps.build.outputs.full-image-tag }}
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ env.REF }}
- name: Build and push final image to cache registry
id: build
uses: ./.github/actions/build-tools-single-stage/
with:
directory: final
registry: ${{ env.CACHE_REGISTRY }}
registry_username: ${{ github.actor }}
registry_password: ${{ secrets.GITHUB_TOKEN }}
ref: ${{ env.REF }}
force_rebuild: ${{ env.FORCE_REBUILD }}
# Push final image to docker.io with proper tags
push-to-dockerhub:
name: Push final image to Docker Hub
runs-on: ubuntu-24.04
needs: build-final
if: github.event_name == 'workflow_dispatch'
steps:
- name: Checkout sources
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ env.REF }}
fetch-depth: 0
- name: Log in to GitHub Container Registry
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
with:
registry: ${{ env.CACHE_REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Log in to Docker Hub
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
with:
registry: ${{ env.FINAL_REGISTRY }}
username: ${{ env.DOCKER_USERNAME }}
password: ${{ env.DOCKER_PASSWORD }}
- name: Pull, tag and push final image to Docker Hub
run: |
# Use the recursive final image tag from the build job
CACHE_IMAGE="${{ needs.build-final.outputs.full-image-tag }}"
echo "Pulling final image: ${CACHE_IMAGE}"
docker pull "${CACHE_IMAGE}"
# Compute git hash for additional tagging
git_short_hash=$(git rev-parse --short=8 HEAD)
# Set up tags
tags=(
latest
git-${git_short_hash}
)
if [[ "${{ inputs.additional_tag }}" != "" ]]; then
tags=(${tags[@]} "${{ inputs.additional_tag }}")
fi
# Set up image name and path for Docker Hub
image_name="${{ env.IMAGE_PREFIX }}build-tools"
docker_registry=$(sed -e 's,^https://,,' -e 's,/*$,,' <<< ${{ env.FINAL_REGISTRY }})
image_path="${docker_registry}/${{ env.DOCKER_NAMESPACE }}/${image_name}"
# Tag and push to Docker Hub
for tag in ${tags[@]}; do
docker tag "${CACHE_IMAGE}" "${image_path}:${tag}"
if [[ "${{ env.DRY_RUN }}" == "false" ]]; then
docker push "${image_path}:${tag}"
echo "Pushed ${image_path}:${tag}"
else
echo "Dry run: would push ${image_path}:${tag}"
fi
done
# List all images for verification
docker images --no-trunc
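Editor's note: a quick illustration of the tag computation above (a hedged sketch, not part of the workflow). The sed expression strips a leading scheme and any trailing slashes from the registry host, so the default inputs yield the image paths shown in the trailing comments:

# Hypothetical values mirroring the workflow defaults (illustrative only)
FINAL_REGISTRY="docker.io"
DOCKER_NAMESPACE="otel"
IMAGE_PREFIX="opentelemetry-network-"
git_short_hash="0123abcd"
# "docker.io" and "https://docker.io/" both normalize to "docker.io"
docker_registry=$(sed -e 's,^https://,,' -e 's,/*$,,' <<< "$FINAL_REGISTRY")
image_path="${docker_registry}/${DOCKER_NAMESPACE}/${IMAGE_PREFIX}build-tools"
for tag in latest "git-${git_short_hash}"; do
  echo "would push ${image_path}:${tag}"
done
# prints:
#   would push docker.io/otel/opentelemetry-network-build-tools:latest
#   would push docker.io/otel/opentelemetry-network-build-tools:git-0123abcd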

.github/workflows/fossa.yml

@ -0,0 +1,21 @@
name: FOSSA scanning
on:
push:
branches:
- main
permissions:
contents: read
jobs:
fossa:
if: github.repository == 'open-telemetry/opentelemetry-network-build-tools'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: fossas/fossa-action@3ebcea1862c6ffbd5cf1b4d0bd6b3fe7bd6f2cac # v1.7.0
with:
api-key: ${{secrets.FOSSA_API_KEY}}
team: OpenTelemetry

.github/workflows/ossf-scorecard.yml

@ -0,0 +1,48 @@
name: OSSF Scorecard
on:
push:
branches:
- main
schedule:
- cron: "50 10 * * 3" # once a week
workflow_dispatch:
permissions: read-all
jobs:
analysis:
if: github.repository == 'open-telemetry/opentelemetry-network-build-tools'
runs-on: ubuntu-latest
permissions:
# Needed for Code scanning upload
security-events: write
# Needed for GitHub OIDC token if publish_results is true
id-token: write
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- uses: ossf/scorecard-action@05b42c624433fc40578a4040d5cf5e36ddca8cde # v2.4.2
with:
results_file: results.sarif
results_format: sarif
publish_results: true
# Upload the results as artifacts (optional). Commenting out will disable
# uploads of run results in SARIF format to the repository Actions tab.
# https://docs.github.com/en/actions/advanced-guides/storing-workflow-data-as-artifacts
- name: "Upload artifact"
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: SARIF file
path: results.sarif
retention-days: 5
# Upload the results to GitHub's code scanning dashboard (optional).
# Commenting out will disable upload of results to your repo's Code Scanning dashboard
- name: "Upload to code-scanning"
uses: github/codeql-action/upload-sarif@76621b61decf072c1cee8dd1ce2d2a82d33c17ed # v3.29.8
with:
sarif_file: results.sarif


@ -3,15 +3,15 @@
# SPDX-License-Identifier: Apache-2.0
CLANG_FORMAT_VERSION="clang-format-11"
if ! command -v ${CLANG_FORMAT_VERSION}
CLANG_FORMAT="clang-format-16"
if ! command -v ${CLANG_FORMAT}
then
echo "ERROR: requires ${CLANG_FORMAT_VERSION}"
echo "ERROR: requires ${CLANG_FORMAT}"
exit 1
fi
RC=0
CMD="${CLANG_FORMAT_VERSION} -Werror --dry-run -style=file"
CMD="${CLANG_FORMAT} -Werror --dry-run -style=file"
function check_file
{
if ! ${CMD} $1


@ -9,14 +9,17 @@ on:
- '.github/workflows/trivy-scans.yml'
- '.trivyignore'
permissions:
contents: read
jobs:
trivy-fs-scan:
# Use 20.04.5 until https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/16450 is resolved
runs-on: ubuntu-20.04
if: github.repository == 'open-telemetry/opentelemetry-network-build-tools'
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Run trivy filesystem scan
uses: aquasecurity/trivy-action@0.8.0
uses: aquasecurity/trivy-action@dc5a429b52fcf669ce959baa2c2dd26090d2a6c4 # 0.32.0
with:
scan-type: 'fs'
scan-ref: '.'

.gitmodules

@ -1,3 +1,54 @@
[submodule "ext/civetweb"]
path = ext/civetweb
url = https://github.com/civetweb/civetweb.git
[submodule "build-tools/libmaxminddb/libmaxminddb"]
path = build-tools/libmaxminddb/libmaxminddb
url = https://github.com/maxmind/libmaxminddb
[submodule "build-tools/libbpf/bpftool"]
path = build-tools/libbpf/bpftool
url = https://github.com/libbpf/bpftool.git
[submodule "build-tools/cpp_misc/lz4"]
path = build-tools/cpp_misc/lz4
url = https://github.com/lz4/lz4.git
[submodule "build-tools/aws_sdk/aws-sdk-cpp"]
path = build-tools/aws_sdk/aws-sdk-cpp
url = https://github.com/aws/aws-sdk-cpp
[submodule "build-tools/gcp_cpp/google-cloud-cpp-common"]
path = build-tools/gcp_cpp/google-cloud-cpp-common
url = https://github.com/googleapis/google-cloud-cpp-common.git
[submodule "build-tools/opentelemetry/opentelemetry-proto"]
path = build-tools/opentelemetry/opentelemetry-proto
url = https://github.com/open-telemetry/opentelemetry-proto.git
[submodule "build-tools/bcc/bcc"]
path = build-tools/bcc/bcc
url = https://github.com/iovisor/bcc.git
[submodule "build-tools/cpp_misc/json"]
path = build-tools/cpp_misc/json
url = https://github.com/nlohmann/json.git
[submodule "build-tools/libuv/libuv"]
path = build-tools/libuv/libuv
url = https://github.com/libuv/libuv.git
[submodule "build-tools/cpp_misc/spdlog"]
path = build-tools/cpp_misc/spdlog
url = https://github.com/gabime/spdlog.git
[submodule "build-tools/cpp_misc/args"]
path = build-tools/cpp_misc/args
url = https://github.com/Taywee/args.git
[submodule "build-tools/libbpf/libbpf"]
path = build-tools/libbpf/libbpf
url = https://github.com/libbpf/libbpf.git
[submodule "build-tools/cpp_misc/yaml-cpp"]
path = build-tools/cpp_misc/yaml-cpp
url = https://github.com/jbeder/yaml-cpp.git
[submodule "build-tools/gcp_cpp/google-cloud-cpp"]
path = build-tools/gcp_cpp/google-cloud-cpp
url = https://github.com/googleapis/google-cloud-cpp.git
[submodule "build-tools/gcp_cpp/googleapis"]
path = build-tools/gcp_cpp/googleapis
url = https://github.com/googleapis/googleapis.git
[submodule "build-tools/cpp_misc/googletest"]
path = build-tools/cpp_misc/googletest
url = https://github.com/google/googletest.git
[submodule "ext/vmlinux.h"]
path = ext/vmlinux.h
url = https://github.com/libbpf/vmlinux.h.git


@ -44,12 +44,12 @@ include(protobuf)
include(llvm)
include(clang)
include(libelf)
include(bcc)
include(test)
include(uv)
include(breakpad)
include(abseil)
include(yamlcpp)
include(libbpf)
include(render)
include_directories(


@ -1721,6 +1721,189 @@ https://github.com/maxmind/libmaxminddb
limitations under the License.
-------------------------------------------------------------------------------
gradle
https://github.com/gradle/gradle
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
-------------------------------------------------------------------------------
demangle


@ -32,23 +32,27 @@ Check out the [Developer Guide](docs/developing.md).
See the [Roadmap](docs/roadmap.md) for an overview of the project's goals.
Triagers ([@open-telemetry/ebpf-triagers](https://github.com/orgs/open-telemetry/teams/ebpf-triagers))
### Maintainers
- [Borko Jandras](https://github.com/bjandras)
- [Jim Wilson](https://github.com/jmw51798), DataDog
- [Jonathan Perry](https://github.com/yonch)
For more information about the maintainer role, see the [community repository](https://github.com/open-telemetry/community/blob/main/guides/contributor/membership.md#maintainer).
### Approvers
- [Samiur Arif](https://github.com/samiura), Sumo Logic
- Actively seeking approvers to review pull requests
For more information about the approver role, see the [community repository](https://github.com/open-telemetry/community/blob/main/guides/contributor/membership.md#approver).
### Triagers
- [Antoine Toulme](https://github.com/atoulme), Splunk
- Actively seeking contributors to triage issues
Approvers ([@open-telemetry/ebpf-approvers](https://github.com/orgs/open-telemetry/teams/ebpf-approvers)):
- [Samiur Arif](https://github.com/samiura), Splunk
- Actively seeking approvers to review pull requests
Maintainers ([@open-telemetry/ebpf-maintainers](https://github.com/orgs/open-telemetry/teams/ebpf-maintainers)):
- [Borko Jandras](https://github.com/bjandras), Splunk
- [Jim Wilson](https://github.com/jmw51798), DataDog
- [Jonathan Perry](https://github.com/yonch), Splunk
Learn more about roles in the [community repository](https://github.com/open-telemetry/community/blob/main/community-membership.md).
For more information about the triager role, see the [community repository](https://github.com/open-telemetry/community/blob/main/guides/contributor/membership.md#triager).
## Questions ##

build-tools/.gitignore

@ -0,0 +1,2 @@
/Debug/
/Release/


@ -0,0 +1,19 @@
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0
# DEPENDENCY_NAME
ARG base_IMAGE_TAG
FROM $base_IMAGE_TAG
ARG NPROC
WORKDIR $HOME
COPY DEPENDENCY_NAME DEPENDENCY_NAME
WORKDIR $HOME/DEPENDENCY_NAME
# add build/install commands here, e.g.:
#RUN ./bootstrap
#RUN ./configure --prefix=$HOME/install \
# --enable-static
#RUN nice make -j$NPROC && make install

build-tools/CMakeLists.txt

@ -0,0 +1,157 @@
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0
cmake_minimum_required (VERSION 3.5)
project (opentelemetry-ebpf-build-tools VERSION 0.1.0)
# Architecture:
#
# The build is composed of multiple directories, each making one Docker image.
# Each such directory can use other docker images as substrates or for artifacts.
# The build system should rebuild a container when:
# 1. There was a more recent commit into the directory, or
# 2. One of the dependencies was rebuilt, or
# 3. The build was cleaned, or
# 4. docker doesn't have the images (e.g., because a dev explicitly erased it)
#
# We maintain two files for each directory inside the build-status directory:
# A. `missing`: touch'd if docker doesn't already have an image for the most recent
# commit into the directory
# B. `built`: touch'd when we've successfully created a docker image
#
# Each directory's docker build depends on its `missing` and the `built` of its dependencies,
# and outputs (touch's) its own `built`.
#
# On every run, the build system always checks if the most recent commit to the directory is in
# docker, and if not touches `missing`. This handles (1), and (4). Furthermore, if `missing`
# is itself not on the filesystem, then it is touched -- this handles (3). When one of the
# dependencies is rebuilt, this causes a rebuild of the container, solving (2).
set(STATUS_DIR "build-status")
# a dummy target to force checks against docker
add_custom_command(
OUTPUT "dummy_target_to_force_rebuild"
COMMAND true
)
include(ProcessorCount)
ProcessorCount(NPROCS)
if(${NPROCS} GREATER 1)
# don't use up all the cores, leave at least one for other processes and
# scheduling to avoid thrashing
math(EXPR NPROCS "${NPROCS} - 1")
endif()
message(STATUS "using ${NPROCS} parallel jobs to build")
option(BENV_UNMINIMIZE "whether or not to unminimize the benv image" OFF)
function(build_directory NAME)
cmake_parse_arguments(P "" "" "DEPENDS" ${ARGN})
# the missing filename
set(MISSING_FILENAME ${CMAKE_CURRENT_BINARY_DIR}/${STATUS_DIR}/${NAME}/missing)
# update the `missing` file
add_custom_target(check_missing_${NAME}
# OUTPUT
# ${MISSING_FILENAME} # our real output
COMMAND ${CMAKE_COMMAND} -E make_directory ${CMAKE_CURRENT_BINARY_DIR}/${STATUS_DIR}
COMMAND ${CMAKE_COMMAND} -E make_directory ${CMAKE_CURRENT_BINARY_DIR}/${STATUS_DIR}/${NAME}
COMMAND ${CMAKE_SOURCE_DIR}/check_missing.sh ${NAME} ${MISSING_FILENAME}
# DEPENDS "dummy_target_to_force_rebuild" # fake, to force the check to always run
)
# for each dependent directory, make command line parameters to pass docker the
# directory's resulting docker tag
set(DOCKER_PARAMS) # the image tags for the dependencies to pass to docker
set(DEPENDS_FILES) # the `built` files of the dependencies
list(APPEND DOCKER_PARAMS "--build-arg" "NPROC=${NPROCS}")
foreach(DEP ${P_DEPENDS})
list(APPEND DOCKER_PARAMS "--build-arg" "${DEP}_IMAGE_TAG=$$(${CMAKE_SOURCE_DIR}/get_tag.sh" "${DEP})")
list(APPEND DEPENDS_FILES "${CMAKE_CURRENT_BINARY_DIR}/${STATUS_DIR}/${DEP}/built")
endforeach()
if(DEFINED ENV{BENV_BASE_IMAGE_DISTRO})
message(STATUS "using $ENV{BENV_BASE_IMAGE_DISTRO} as base image distro for ${NAME}")
list(APPEND DOCKER_PARAMS "--build-arg" "BENV_BASE_IMAGE_DISTRO=$ENV{BENV_BASE_IMAGE_DISTRO}")
endif()
if(DEFINED ENV{BENV_BASE_IMAGE_VERSION})
message(STATUS "using $ENV{BENV_BASE_IMAGE_VERSION} as base image version for ${NAME}")
list(APPEND DOCKER_PARAMS "--build-arg" "BENV_BASE_IMAGE_VERSION=$ENV{BENV_BASE_IMAGE_VERSION}")
endif()
if(BENV_UNMINIMIZE)
list(APPEND DOCKER_PARAMS "--build-arg" "BENV_UNMINIMIZE=true")
endif()
list(APPEND DOCKER_PARAMS "--build-arg" "CMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}")
if(CMAKE_BUILD_TYPE STREQUAL "Debug")
list(APPEND DOCKER_PARAMS "--build-arg" "RESTRICTED_NPROC='1'")
list(APPEND DOCKER_PARAMS "--build-arg" "BUILD_CFLAGS='-O0 -g'")
list(APPEND DOCKER_PARAMS "--build-arg" "CONFIGURE_ENABLE_DEBUG='--enable-debug'")
list(APPEND DOCKER_PARAMS "--build-arg" "CONFIGURE_DEBUG='--debug'")
list(APPEND DOCKER_PARAMS "--build-arg" "CONFIGURE_RELEASE_DEBUG='--debug'")
else()
list(APPEND DOCKER_PARAMS "--build-arg" "RESTRICTED_NPROC=${NPROCS}")
list(APPEND DOCKER_PARAMS "--build-arg" "GRPC_BUILD_CFLAGS='-Wno-error=class-memaccess -Wno-error=ignored-qualifiers -Wno-error=stringop-truncation'")
list(APPEND DOCKER_PARAMS "--build-arg" "CONFIGURE_RELEASE_DEBUG='--release'")
endif()
add_custom_command(
OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/${STATUS_DIR}/${NAME}/built
COMMAND docker build -t "$$(${CMAKE_SOURCE_DIR}/get_tag.sh" "${NAME})" ${DOCKER_PARAMS} ${CMAKE_SOURCE_DIR}/${NAME}
COMMAND touch ${CMAKE_CURRENT_BINARY_DIR}/${STATUS_DIR}/${NAME}/built
DEPENDS ${P_DEPENDS} check_missing_${NAME} ${MISSING_FILENAME} ${DEPENDS_FILES}
)
add_custom_target(${NAME} DEPENDS ${CMAKE_CURRENT_BINARY_DIR}/${STATUS_DIR}/${NAME}/built)
endfunction(build_directory)
build_directory(base)
build_directory(openssl DEPENDS base)
build_directory(curl DEPENDS base openssl)
build_directory(bcc DEPENDS base)
build_directory(libuv DEPENDS base)
build_directory(aws_sdk DEPENDS base openssl curl)
build_directory(cpp_misc DEPENDS base)
build_directory(go DEPENDS base)
build_directory(grpc_cpp DEPENDS base abseil_cpp openssl)
build_directory(gcp_cpp DEPENDS base openssl curl grpc_cpp)
build_directory(abseil_cpp DEPENDS base)
build_directory(libmaxminddb DEPENDS base)
build_directory(opentelemetry DEPENDS base grpc_cpp)
build_directory(libbpf DEPENDS base)
#gen:dep-dir
build_directory(
final
DEPENDS
base
openssl
curl
bcc
libuv
aws_sdk
cpp_misc
go
grpc_cpp
abseil_cpp
libmaxminddb
gcp_cpp
opentelemetry
libbpf
)
if(CMAKE_BUILD_TYPE STREQUAL "Debug")
add_custom_target(benv ALL DEPENDS final
COMMAND docker tag "$$(${CMAKE_SOURCE_DIR}/get_tag.sh" "final)" debug-build-env)
else()
add_custom_target(debug-benv ALL DEPENDS final
COMMAND docker tag "$$(${CMAKE_SOURCE_DIR}/get_tag.sh" "final)" build-env)
endif()
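Editor's note: the Architecture comment at the top of this CMakeLists.txt describes the missing/built stamp protocol; below is a minimal bash sketch of that logic for a single directory with one dependency. This is an illustration of the mechanism only, assuming get_tag.sh prints a content-derived image tag; the real implementation is the CMake rules above plus check_missing.sh later in this diff.

dir=bcc
stamp="build-status/$dir"
tag=$(./get_tag.sh "$dir")   # content-derived docker image tag
mkdir -p "$stamp"
# cases (1), (3), (4): touch `missing` when docker lacks the image or the stamp is absent
if [ -z "$(docker images -q "$tag")" ] || [ ! -f "$stamp/missing" ]; then
  touch "$stamp/missing"
fi
# case (2): rebuild when `missing` or a dependency's `built` is newer than our `built`
if [ ! -f "$stamp/built" ] || [ "$stamp/missing" -nt "$stamp/built" ] \
   || [ "build-status/base/built" -nt "$stamp/built" ]; then
  docker build -t "$tag" "$dir" && touch "$stamp/built"
fi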

build-tools/add_dependency.sh

@ -0,0 +1,47 @@
#!/bin/bash
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0
set -e
if [ -z "$2" ]; then
echo "usage: $0 dependency_name dependency_git_url"
exit 1
fi
dep_name="$1"
dep_repo="$2"
echo "adding dependency $dep_name"
echo " from repo $dep_repo"
mkdir "$dep_name"
sed "s/DEPENDENCY_NAME/$dep_name/g" \
".templates/dependency/Dockerfile" \
> "$dep_name/Dockerfile"
git add "$dep_name/Dockerfile"
sed -i \
-e "s/^#gen:dep-dir\$/build_directory($dep_name DEPENDS base)\n&/g" \
-e "s/^\(build_directory(final DEPENDS base .*\))\$/\1 $dep_name)/g" \
CMakeLists.txt
git add CMakeLists.txt
sed -i \
-e "s/^#gen:dep-arg\$/ARG ${dep_name}_IMAGE_TAG\n&/g" \
-e "s/^#gen:dep-from\$/FROM \\\$${dep_name}_IMAGE_TAG as build-${dep_name}\n&/g" \
-e "s/^#gen:dep-copy\$/COPY --from=build-${dep_name} \\\$HOME\/install \\\$HOME\/install\n&/g" \
final/Dockerfile
git add final/Dockerfile
git submodule add "$dep_repo" "$dep_name/$dep_name"
git commit -m "adding dependency $dep_name"
echo
echo "ACTION REQUIRED:"
echo "update file \`$dep_name/Dockerfile\` with proper build instructions then update commit with:"
echo
echo " editor \"$dep_name/Dockerfile\" \\"
echo " && git add \"$dep_name/Dockerfile\" \\"
echo " && git commit --amend --no-edit"


@ -0,0 +1,35 @@
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0
# AWS SDK
ARG base_IMAGE_TAG
ARG CMAKE_BUILD_TYPE
FROM $base_IMAGE_TAG AS build
ARG NPROC
WORKDIR $HOME
COPY --chown=${UID}:${GID} aws-sdk-cpp aws-sdk-cpp
WORKDIR $HOME/build/aws-sdk-cpp
RUN cmake \
-DCUSTOM_MEMORY_MANAGEMENT=0 \
-DBUILD_SHARED_LIBS=OFF \
-DBUILD_ONLY="ec2;s3" \
-DFORCE_CURL=ON \
-DUSE_OPENSSL=ON \
-DENABLE_TESTING=OFF \
-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE} \
-DCMAKE_INSTALL_PREFIX:PATH=$HOME/install \
$HOME/aws-sdk-cpp
RUN nice make -j${NPROC:-3}
RUN nice make install
# Runtime stage - copy only necessary artifacts
FROM $base_IMAGE_TAG
COPY --from=build $HOME/install $HOME/install

@ -0,0 +1 @@
Subproject commit d759f4ae94d319170a7fbfd0cd32abad10cf9289

build-tools/base/Dockerfile

@ -0,0 +1,110 @@
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0
ARG BENV_BASE_IMAGE_DISTRO=debian
ARG BENV_BASE_IMAGE_VERSION=bookworm@sha256:731dd1380d6a8d170a695dbeb17fe0eade0e1c29f654cf0a3a07f372191c3f4b
FROM ${BENV_BASE_IMAGE_DISTRO}:${BENV_BASE_IMAGE_VERSION} AS build-main
################ DEPENDENCIES ################
# fixes for some of the build bugs/warnings in docker
RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
ENV BENV_BASE_IMAGE_DISTRO=${BENV_BASE_IMAGE_DISTRO}
ENV BENV_BASE_IMAGE_VERSION=${BENV_BASE_IMAGE_VERSION}
ARG GO_VERSION="1.21.0"
# Package definitions
ARG PKG_CORE="wget curl git gnupg bc aptitude netcat-openbsd sudo"
ARG PKG_TEXT="xxd sed ripgrep less jq"
ARG PKG_COMPILERS="g++"
ARG PKG_BUILD="ninja-build"
ARG PKG_LINTERS="clang-format-16 clang-tidy-16 shellcheck"
ARG PKG_MANAGERS="pkg-config rpm"
ARG PKG_KERNEL="dkms build-essential"
ARG PKG_DEV="gdb cgdb tmux strace"
ARG PKG_LIBS="libc-ares-dev libelf-dev libssl-dev libzstd-dev libgrpc-dev libcurl4-openssl-dev libabsl-dev protobuf-compiler-grpc libcurlpp-dev libgrpc++-dev libprotobuf-dev"
ARG PKG_PY_TEST="python3-pytest python3-dev python3-pip python3-setuptools python3-wheel pylint"
ARG PKG_JAVA="default-jdk-headless"
ARG PKG_LLVM="llvm-16-dev libclang-16-dev clang-16 libpolly-16-dev"
ARG PKG_MAKE="cmake ccache autoconf autoconf-archive automake libtool make"
ARG PKG_BCC="bison flex zip"
ARG PKG_LIBBPF="zip pkg-config libelf-dev zlib1g-dev libbfd-dev libcap-dev"
# setup apt (add non-free, contrib and backports)
RUN [ "$BENV_BASE_IMAGE_DISTRO" != 'debian' ] || [ "$BENV_BASE_IMAGE_VERSION" != 'sid' ] \
|| cat > /etc/apt/sources.list << EOF \
deb http://deb.debian.org/debian/ $BENV_BASE_IMAGE_VERSION main non-free contrib \
deb http://deb.debian.org/debian-security/ $BENV_BASE_IMAGE_VERSION/updates main non-free contrib \
deb http://deb.debian.org/debian/ $BENV_BASE_IMAGE_VERSION-updates main non-free contrib \
deb http://deb.debian.org/debian/ $BENV_BASE_IMAGE_VERSION-backports main non-free contrib \
EOF
# Update, upgrade, and install all packages
RUN apt-get -y update && \
apt-get -y install --no-install-recommends apt-utils && \
apt-get upgrade -y --no-install-recommends && \
apt-get -y install --no-install-recommends \
$PKG_CORE \
$PKG_TEXT \
$PKG_COMPILERS \
$PKG_BUILD \
$PKG_LINTERS \
$PKG_MANAGERS \
$PKG_KERNEL \
$PKG_DEV \
$PKG_LIBS \
$PKG_PY_TEST \
$PKG_JAVA \
$PKG_LLVM \
$PKG_LIBBPF && \
apt-get -y install \
$PKG_MAKE \
$PKG_BCC && \
apt-get upgrade -y --no-install-recommends && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
# Recent version of Go
WORKDIR /usr/local
RUN case $(uname -m) in \
x86_64) curl -L "https://go.dev/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar -xz ;; \
aarch64) curl -L "https://go.dev/dl/go${GO_VERSION}.linux-arm64.tar.gz" | tar -xz ;; \
esac
# recommended: add PATH=$PATH:/usr/local/go/bin to ~/.bashrc
################ ENVIRONMENT ################
ARG UNAME=user
ARG UID=1000
ARG GNAME=user
ARG GID=1000
RUN set -x; \
# These commands are allowed to fail (it happens for root, for example).
# The result will be checked in the next RUN.
userdel -r `getent passwd ${UID} | cut -d : -f 1` > /dev/null 2>&1; \
groupdel -f `getent group ${GID} | cut -d : -f 1` > /dev/null 2>&1; \
groupadd -g ${GID} ${GNAME}; \
useradd -u $UID -g $GID -G sudo -ms /bin/bash ${UNAME}; \
echo "${UNAME} ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
USER ${UNAME}:${GNAME}
ENV HOME=/home/${UNAME}
WORKDIR $HOME
RUN set -ex; \
id | grep "uid=${UID}(${UNAME}) gid=${GID}(${GNAME})" || (echo "ERROR: User ID verification failed" && exit 1); \
sudo ls || (echo "ERROR: sudo test failed" && exit 1); \
pwd | grep "^/home/${UNAME}" || (echo "ERROR: Working directory verification failed" && exit 1); \
echo $HOME | grep "^/home/${UNAME}" || (echo "ERROR: HOME directory verification failed" && exit 1); \
touch $HOME/test || (echo "ERROR: File creation test failed" && exit 1); \
rm $HOME/test || (echo "ERROR: File removal test failed" && exit 1)
# setup path in both ENV and shell profile
ENV PATH="$HOME/install/bin:/usr/local/go/bin:$PATH"
RUN echo 'export PATH="$HOME/install/bin:/usr/local/go/bin:$PATH"' >> $HOME/.profile
# set UID, GID
ENV UID=${UID}
ENV GID=${GID}
# note: we do not export UNAME as it interferes with LZ4 builds
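Editor's note: the UNAME/UID/GNAME/GID build args above let the in-container user mirror the invoking host user, so files written to bind mounts keep the developer's ownership. A hedged example invocation (assumed usage; the image name and exact flags are not from this diff):

# Build the base image so files created inside the container
# are owned by the invoking host user
docker build \
  --build-arg UID="$(id -u)" \
  --build-arg GID="$(id -g)" \
  --build-arg UNAME="$(id -un)" \
  --build-arg GNAME="$(id -gn)" \
  -t benv-base build-tools/base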


@ -0,0 +1,33 @@
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0
# compile our own bcc
ARG base_IMAGE_TAG
FROM $base_IMAGE_TAG AS build
ARG CMAKE_BUILD_TYPE
ARG RESTRICTED_NPROC
WORKDIR $HOME
RUN git clone --depth 1 https://github.com/iovisor/bcc.git && \
cd bcc && \
git fetch --unshallow && \
git checkout 6acb86effa7a6e8029b68eccb805dd1ee60ecc5a
WORKDIR $HOME/build/bcc
RUN echo $PATH
RUN cmake \
-G Ninja \
-DCMAKE_INSTALL_PREFIX:PATH=$HOME/install \
-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE} \
-DENABLE_LLVM_SHARED=OFF \
$HOME/bcc
RUN nice ninja -j ${RESTRICTED_NPROC:-1} && ninja -j ${RESTRICTED_NPROC:-1} install
# Runtime stage - copy only necessary artifacts
FROM $base_IMAGE_TAG
COPY --from=build $HOME/install $HOME/install

build-tools/bcc/bcc Submodule

@ -0,0 +1 @@
Subproject commit 6acb86effa7a6e8029b68eccb805dd1ee60ecc5a


@ -0,0 +1,5 @@
#!/bin/bash
docker_args+=(
--mount "type=bind,source=/sys/fs/cgroup,destination=/sys/fs/cgroup,readonly"
)

build-tools/benv/docker.d/gdb

@ -0,0 +1,6 @@
#!/bin/bash
docker_args+=(
--cap-add=SYS_PTRACE
--security-opt seccomp=unconfined
)


@ -0,0 +1,11 @@
#!/bin/bash
docker_args+=(
--mount "type=bind,source=/lib/modules/`uname --kernel-release`,destination=/lib/modules/`uname --kernel-release`,readonly"
--mount "type=bind,source=/lib/modules/`uname --kernel-release`/build,destination=/lib/modules/`uname --kernel-release`/build,readonly"
--mount "type=bind,source=/lib/modules/`uname --kernel-release`/build/scripts,destination=/lib/modules/`uname --kernel-release`/build/scripts,readonly"
--mount "type=bind,source=/lib/modules/`uname --kernel-release`/build/tools,destination=/lib/modules/`uname --kernel-release`/build/tools,readonly"
--mount "type=bind,source=/lib/modules/`uname --kernel-release`/source,destination=/lib/modules/`uname --kernel-release`/source,readonly"
--mount "type=bind,source=/lib/modules/`uname --kernel-release`/source/scripts,destination=/lib/modules/`uname --kernel-release`/source/scripts,readonly"
--mount "type=bind,source=/lib/modules/`uname --kernel-release`/source/tools,destination=/lib/modules/`uname --kernel-release`/source/tools,readonly"
)


@ -0,0 +1,5 @@
#!/bin/bash
docker_args+=(
--pid=host
)


@ -0,0 +1,5 @@
#!/bin/bash
docker_args+=(
--privileged
)


@ -0,0 +1,6 @@
#!/bin/bash
docker_args+=(
--mount "type=bind,source=$HOME/.vim,destination=/root/.vim,readonly"
--mount "type=bind,source=$HOME/.vimrc,destination=/root/.vimrc,readonly"
)
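Editor's note: each docker.d fragment above only appends flags to a docker_args array, which implies some launcher sources every fragment before invoking docker run. A minimal sketch of such a launcher (assumed; the actual consumer is not part of this diff):

#!/bin/bash
# Assemble docker run flags from the docker.d fragments, then launch benv
docker_args=()
for fragment in build-tools/benv/docker.d/*; do
  source "$fragment"   # each fragment extends docker_args
done
docker run -it --rm "${docker_args[@]}" build-env bash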

build-tools/build.sh

@ -0,0 +1,68 @@
#!/bin/bash -e
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0
# Builds the build environment -- a container image which is then used
# to build the project in the main repo.
# Call with `VERBOSE=1` for verbose output
# e.g.: `./build.sh VERBOSE=1`
# use env variables BENV_BASE_IMAGE_DISTRO and BENV_BASE_IMAGE_VERSION
# to customize the base image used to build benv:
#
# BENV_BASE_IMAGE_DISTRO=debian BENV_BASE_IMAGE_VERSION=testing ./build.sh
# use command line argument -DBENV_UNMINIMIZE=ON to unminimize the benv image
# Note: the `--jobs` flag requires git 2.8.0 or later
# Call with 'debug' to build a debug version of the build-env
if [[ "$1" == "--help" ]]; then
echo "usage: $0 [{--help | debug}]"
echo
echo " --help: shows this help message"
echo " debug: builds benv with debug builds of 3rd party libraries"
exit 0
fi
set -x
nproc="$(./nproc.sh)"
git submodule update --init --recursive --jobs "${nproc}"
# If this is a debug build set some extra flags to rename the benv image
if [[ "$1" == "debug" ]]; then
# Debug build
echo Enabling debug build
EXTRA_CMAKE_OPTIONS=-DCMAKE_BUILD_TYPE=Debug
export DOCKER_TAG_PREFIX=debug-
shift
mkdir -p Debug
cd Debug
CMAKE_SOURCE_DIR=..
else
# Release build
echo Enabling release build
EXTRA_CMAKE_OPTIONS=-DCMAKE_BUILD_TYPE=Release
mkdir -p Release
cd Release
CMAKE_SOURCE_DIR=..
fi
cmake $EXTRA_CMAKE_OPTIONS "$@" $CMAKE_SOURCE_DIR
if [[ "$1" == "--cmake-only" ]]; then
echo "=============================================="
echo "cmake completed - skipping make and docker tag"
echo "=============================================="
exit 0
fi
make
if [[ -n "${BENV_BASE_IMAGE_DISTRO}" ]] && [[ -n "${BENV_BASE_IMAGE_VERSION}" ]]; then
echo docker tag ${DOCKER_TAG_PREFIX}build-env:latest "${DOCKER_TAG_PREFIX}build-env:${BENV_BASE_IMAGE_DISTRO}-${BENV_BASE_IMAGE_VERSION}"
docker tag ${DOCKER_TAG_PREFIX}build-env:latest "${DOCKER_TAG_PREFIX}build-env:${BENV_BASE_IMAGE_DISTRO}-${BENV_BASE_IMAGE_VERSION}"
fi

build-tools/build_directory.sh

@ -0,0 +1,25 @@
#!/bin/bash
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0
# This script gets the latest git modification hash for a specific directory (DIR=$1).
# If docker doesn't have an image named ${BENV_PREFIX}-${DIR}:${VERSION_HASH},
# builds that image in DIR.
SCRIPTDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
DIR="$1"
shift 1
IMAGE_TAG=$(${SCRIPTDIR}/get_tag.sh ${DIR})
EXISTING=$(docker images --filter "reference=${IMAGE_TAG}" -q)
# if the docker image already exists, we're done
if [ "${EXISTING}" != "" ]
then
echo ${IMAGE_TAG}: exists
exit 0
fi
echo ${IMAGE_TAG}: does not exist, building
docker build ${DIR} -t ${IMAGE_TAG} $@
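Editor's note: a hypothetical invocation of this script (arguments assumed for illustration), wiring a dependency's image tag through a build arg the way the CMake rules above do:

# Build the bcc directory, passing in the base image produced earlier
./build_directory.sh bcc --build-arg "base_IMAGE_TAG=$(./get_tag.sh base)"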

build-tools/check_missing.sh

@ -0,0 +1,35 @@
#!/bin/bash
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0
# For a docker image directory DIR ($1), checks if that version is not in docker. If so,
# touches FILENAME ($2). If FILENAME itself is missing, creates it.
SCRIPTDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
DIR="$1"
FILENAME="$2"
IMAGE_TAG=$(${SCRIPTDIR}/get_tag.sh ${DIR})
EXISTING=$(docker images --filter "reference=${IMAGE_TAG}" -q)
# if the docker image doesn't exist, touch the file
if [ "${EXISTING}" == "" ]
then
echo "No existing image ${IMAGE_TAG}. Touching ${FILENAME}."
touch "${FILENAME}"
exit 0
fi
# if `missing` is itself not on the filesystem, create it
# does the version file exist
if [ ! -f "${FILENAME}" ]
then
echo "File ${FILENAME} does not exist when checking for existing image ${IMAGE_TAG}. Touching. $(pwd)"
# doesn't exist, just create it.
touch "${FILENAME}"
exit 0
fi
#echo "${IMAGE_TAG} and ${FILENAME} exist. No change."


@ -0,0 +1,102 @@
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0
# various c++ libraries:
# - LZ4
# - yaml-cpp
# - breakpad
# - args
# - nlohmann_json
# - spdlog
# - ccan
# - googletest
ARG base_IMAGE_TAG
FROM $base_IMAGE_TAG AS build
ARG CMAKE_BUILD_TYPE
ARG BUILD_CFLAGS
ARG NPROC
# LZ4
WORKDIR $HOME
COPY --chown=${UID}:${GID} lz4 lz4
WORKDIR $HOME/lz4
RUN make prefix=$HOME/install install
# yaml-cpp
WORKDIR $HOME
COPY --chown=${UID}:${GID} yaml-cpp yaml-cpp
WORKDIR $HOME/build/yaml-cpp
RUN cmake \
-G Ninja \
-DCMAKE_INSTALL_PREFIX:PATH=$HOME/install \
-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE} \
$HOME/yaml-cpp
RUN nice ninja && ninja install
# google Breakpad
WORKDIR $HOME/build/breakpad
RUN git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git
RUN PATH=$PATH:$HOME/build/breakpad/depot_tools fetch breakpad
# Switch to a version before the c++20 change that isn't supported in debian bullseye
RUN cd src && git checkout 2c73630 && cd ..
# configure breakpad to avoid using getrandom() to reduce GLIBC_2.25 dependency
RUN CXXFLAGS="-Wno-narrowing" src/configure ac_cv_func_getrandom=no --prefix=$HOME/install
RUN CFLAGS=`echo ${BUILD_CFLAGS} | sed 's/\\\\ / /g'`; nice make -j${NPROC:-3} && nice make install
# args
WORKDIR $HOME
COPY --chown=${UID}:${GID} args args
WORKDIR $HOME/args
RUN make DESTDIR=$HOME/install install
# nlohmann_json
WORKDIR $HOME
COPY --chown=${UID}:${GID} json json
WORKDIR $HOME/json
RUN cmake \
"-DCMAKE_INSTALL_PREFIX=$HOME/install" \
-DCMAKE_BUILD_TYPE=$CMAKE_BUILD_TYPE \
-DJSON_BuildTests=OFF \
.
RUN nice cmake --build . --target install -j ${NPROC:-3} --config $CMAKE_BUILD_TYPE
# spdlog
WORKDIR $HOME
COPY --chown=${UID}:${GID} spdlog spdlog
WORKDIR $HOME/spdlog
RUN cmake \
"-DCMAKE_INSTALL_PREFIX=$HOME/install" \
-DCMAKE_BUILD_TYPE=$CMAKE_BUILD_TYPE \
-DSPDLOG_BUILD_BENCH=OFF \
-DSPDLOG_BUILD_EXAMPLES=OFF \
-DSPDLOG_BUILD_TESTING=OFF \
.
RUN nice cmake --build . --target install -j ${NPROC:-3} --config $CMAKE_BUILD_TYPE
# ccan
WORKDIR $HOME
COPY --chown=${UID}:${GID} ccan ccan
RUN tar -cf- ccan/**/*.h | tar -xvf- -C $HOME/install/include
# googletest
WORKDIR $HOME
COPY --chown=${UID}:${GID} googletest googletest
WORKDIR $HOME/googletest
RUN cmake \
"-DCMAKE_INSTALL_PREFIX=$HOME/install" \
-DCMAKE_BUILD_TYPE=$CMAKE_BUILD_TYPE \
-DBUILD_GMOCK=ON \
-DINSTALL_GTEST=OFF \
.
RUN nice cmake --build . -j ${NPROC:-3} --config $CMAKE_BUILD_TYPE
RUN cp lib/*.a $HOME/install/lib
RUN cp -R googletest/include/gtest $HOME/install/include
RUN cp -R googlemock/include/gmock $HOME/install/include
# Runtime stage - copy only necessary artifacts
FROM $base_IMAGE_TAG
COPY --from=build $HOME/install $HOME/install

@ -0,0 +1 @@
Subproject commit cc2368ca0d8a962862c96c00fe919e1480050f51


@ -0,0 +1 @@
../licenses/CC0


@ -0,0 +1,49 @@
#include <stdio.h>
#include <string.h>
#include "config.h"
/**
* build_assert - routines for build-time assertions
*
* This code provides routines which will cause compilation to fail should some
* assertion be untrue: such failures are preferable to run-time assertions,
* but much more limited since they can only depend on compile-time constants.
*
* These assertions are most useful when two parts of the code must be kept in
* sync: it is better to avoid such cases if possible, but second best is to
* detect invalid changes at build time.
*
* For example, a tricky piece of code might rely on a certain element being at
* the start of the structure. To ensure that future changes don't break it,
* you would catch such changes in your code like so:
*
* Example:
* #include <stddef.h>
* #include <ccan/build_assert/build_assert.h>
*
* struct foo {
* char string[5];
* int x;
* };
*
* static char *foo_string(struct foo *foo)
* {
* // This trick requires that the string be first in the structure
* BUILD_ASSERT(offsetof(struct foo, string) == 0);
* return (char *)foo;
* }
*
* License: CC0 (Public domain)
* Author: Rusty Russell <rusty@rustcorp.com.au>
*/
int main(int argc, char *argv[])
{
if (argc != 2)
return 1;
if (strcmp(argv[1], "depends") == 0)
/* Nothing. */
return 0;
return 1;
}

@ -0,0 +1,40 @@
/* CC0 (Public domain) - see LICENSE file for details */
#ifndef CCAN_BUILD_ASSERT_H
#define CCAN_BUILD_ASSERT_H
/**
* BUILD_ASSERT - assert a build-time dependency.
* @cond: the compile-time condition which must be true.
*
* Your compile will fail if the condition isn't true, or can't be evaluated
* by the compiler. This can only be used within a function.
*
* Example:
* #include <stddef.h>
* ...
* static char *foo_to_char(struct foo *foo)
* {
* // This code needs string to be at start of foo.
* BUILD_ASSERT(offsetof(struct foo, string) == 0);
* return (char *)foo;
* }
*/
#define BUILD_ASSERT(cond) \
do { (void) sizeof(char [1 - 2*!(cond)]); } while(0)
/**
* BUILD_ASSERT_OR_ZERO - assert a build-time dependency, as an expression.
* @cond: the compile-time condition which must be true.
*
* Your compile will fail if the condition isn't true, or can't be evaluated
* by the compiler. This can be used in an expression: its value is "0".
*
* Example:
* #define foo_to_char(foo) \
* ((char *)(foo) \
* + BUILD_ASSERT_OR_ZERO(offsetof(struct foo, string) == 0))
*/
#define BUILD_ASSERT_OR_ZERO(cond) \
(sizeof(char [1 - 2*!(cond)]) - 1)
#endif /* CCAN_BUILD_ASSERT_H */
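
Why this works: sizeof(char [1 - 2*!(cond)]) asks for an array of length 1 when cond holds and length -1 when it does not, and a negative array length is a hard compile error. A minimal sketch of the expression form (the struct and macro names here are illustrative, not part of the module):

#include <stddef.h>
#include <stdint.h>
#include <ccan/build_assert/build_assert.h>

struct pkt_hdr {
	uint8_t version;	/* assumed to stay the first byte on the wire */
	uint8_t flags;
};

/* BUILD_ASSERT_OR_ZERO contributes 0 to the sum, but compilation
 * stops the moment the layout assumption below is broken. */
#define PKT_VERSION_OFF \
	(offsetof(struct pkt_hdr, version) + \
	 BUILD_ASSERT_OR_ZERO(offsetof(struct pkt_hdr, version) == 0))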

@ -0,0 +1,10 @@
#include <ccan/build_assert/build_assert.h>
int main(int argc, char *argv[])
{
#ifdef FAIL
return BUILD_ASSERT_OR_ZERO(1 == 0);
#else
return 0;
#endif
}

@ -0,0 +1,9 @@
#include <ccan/build_assert/build_assert.h>
int main(int argc, char *argv[])
{
#ifdef FAIL
BUILD_ASSERT(1 == 0);
#endif
return 0;
}

@ -0,0 +1,7 @@
#include <ccan/build_assert/build_assert.h>
int main(int argc, char *argv[])
{
BUILD_ASSERT(1 == 1);
return 0;
}

@ -0,0 +1,9 @@
#include <ccan/build_assert/build_assert.h>
#include <ccan/tap/tap.h>
int main(int argc, char *argv[])
{
plan_tests(1);
ok1(BUILD_ASSERT_OR_ZERO(1 == 1) == 0);
return exit_status();
}

@ -0,0 +1 @@
../licenses/CC0

@ -0,0 +1,33 @@
#include <stdio.h>
#include <string.h>
#include "config.h"
/**
* check_type - routines for compile time type checking
*
* C has fairly weak typing: ints get automatically converted to longs, signed
* to unsigned, etc. There are some cases where this is best avoided, and
* these macros provide methods for evoking warnings (or build errors) when
* a precise type isn't used.
*
* On compilers which don't support typeof() these routines are less effective,
* since they have to use sizeof() which can only distinguish between types of
* different size.
*
* License: CC0 (Public domain)
* Author: Rusty Russell <rusty@rustcorp.com.au>
*/
int main(int argc, char *argv[])
{
if (argc != 2)
return 1;
if (strcmp(argv[1], "depends") == 0) {
#if !HAVE_TYPEOF
printf("ccan/build_assert\n");
#endif
return 0;
}
return 1;
}

@ -0,0 +1,65 @@
/* CC0 (Public domain) - see LICENSE file for details */
#ifndef CCAN_CHECK_TYPE_H
#define CCAN_CHECK_TYPE_H
#define HAVE_TYPEOF 1
/**
* check_type - issue a warning or build failure if type is not correct.
* @expr: the expression whose type we should check (not evaluated).
* @type: the exact type we expect the expression to be.
*
* This macro is usually used within other macros to try to ensure that a macro
* argument is of the expected type. No type promotion of the expression is
* done: an unsigned int is not the same as an int!
*
* check_type() always evaluates to 0.
*
* If your compiler does not support typeof, then the best we can do is fail
* to compile if the sizes of the types are unequal (a less complete check).
*
* Example:
* // They should always pass a 64-bit value to _set_some_value!
* #define set_some_value(expr) \
* _set_some_value((check_type((expr), uint64_t), (expr)))
*/
/**
* check_types_match - issue a warning or build failure if types are not same.
* @expr1: the first expression (not evaluated).
* @expr2: the second expression (not evaluated).
*
* This macro is usually used within other macros to try to ensure that
* arguments are of identical types. No type promotion of the expressions is
* done: an unsigned int is not the same as an int!
*
* check_types_match() always evaluates to 0.
*
* If your compiler does not support typeof, then the best we can do is fail
* to compile if the sizes of the types are unequal (a less complete check).
*
* Example:
* // Do subtraction to get to enclosing type, but make sure that
* // pointer is of correct type for that member.
* #define container_of(mbr_ptr, encl_type, mbr) \
* (check_types_match((mbr_ptr), &((encl_type *)0)->mbr), \
* ((encl_type *) \
* ((char *)(mbr_ptr) - offsetof(encl_type, mbr))))
*/
#if HAVE_TYPEOF
#define check_type(expr, type) \
((typeof(expr) *)0 != (type *)0)
#define check_types_match(expr1, expr2) \
((typeof(expr1) *)0 != (typeof(expr2) *)0)
#else
#include <ccan/build_assert/build_assert.h>
/* Without typeof, we can only test the sizes. */
#define check_type(expr, type) \
BUILD_ASSERT_OR_ZERO(sizeof(expr) == sizeof(type))
#define check_types_match(expr1, expr2) \
BUILD_ASSERT_OR_ZERO(sizeof(expr1) == sizeof(expr2))
#endif /* HAVE_TYPEOF */
#endif /* CCAN_CHECK_TYPE_H */
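
In practice check_type() is dropped into wrapper macros through the comma operator, as the header's own examples suggest: with typeof() support, passing anything but the exact type makes the pointer comparison (typeof(expr) *)0 != (type *)0 trigger a distinct-pointer-types diagnostic (an error under -Werror). A minimal sketch, assuming a hypothetical _set_deadline() consumer:

#include <stdint.h>
#include <ccan/check_type/check_type.h>

static void _set_deadline(uint64_t ns) { (void)ns; }	/* hypothetical */

/* check_type() evaluates to 0, so only (t) reaches the call, but the
 * macro rejects any argument that is not exactly a uint64_t. */
#define set_deadline(t) _set_deadline((check_type((t), uint64_t), (t)))

int main(void)
{
	uint64_t t = 42;
	set_deadline(t);	/* compiles cleanly */
	/* set_deadline(42);	-- int literal: diagnostic with typeof() */
	return 0;
}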

@ -0,0 +1,9 @@
#include <ccan/check_type/check_type.h>
int main(int argc, char *argv[])
{
#ifdef FAIL
check_type(argc, char);
#endif
return 0;
}

@ -0,0 +1,14 @@
#include <ccan/check_type/check_type.h>
int main(int argc, char *argv[])
{
#ifdef FAIL
#if HAVE_TYPEOF
check_type(argc, unsigned int);
#else
/* This doesn't work without typeof, so just fail */
#error "Fail without typeof"
#endif
#endif
return 0;
}

@ -0,0 +1,10 @@
#include <ccan/check_type/check_type.h>
int main(int argc, char *argv[])
{
unsigned char x = argc;
#ifdef FAIL
check_types_match(argc, x);
#endif
return x;
}

@ -0,0 +1,22 @@
#include <ccan/check_type/check_type.h>
#include <ccan/tap/tap.h>
int main(int argc, char *argv[])
{
int x = 0, y = 0;
plan_tests(9);
ok1(check_type(argc, int) == 0);
ok1(check_type(&argc, int *) == 0);
ok1(check_types_match(argc, argc) == 0);
ok1(check_types_match(argc, x) == 0);
ok1(check_types_match(&argc, &x) == 0);
ok1(check_type(x++, int) == 0);
ok(x == 0, "check_type does not evaluate expression");
ok1(check_types_match(x++, y++) == 0);
ok(x == 0 && y == 0, "check_types_match does not evaluate expressions");
return exit_status();
}

@ -0,0 +1 @@
../licenses/CC0

@ -0,0 +1,64 @@
#include <string.h>
#include <stdio.h>
#include "config.h"
/**
* compiler - macros for common compiler extensions
*
* Abstracts away some compiler hints. Currently these include:
* - COLD
* For functions not called in fast paths (aka. cold functions)
* - PRINTF_FMT
* For functions which take printf-style parameters.
* - CONST_FUNCTION
* For functions which return the same value for same parameters.
* - NEEDED
* For functions and variables which must be emitted even if unused.
* - UNNEEDED
* For functions and variables which need not be emitted if unused.
* - UNUSED
* For parameters which are not used.
* - IS_COMPILE_CONSTANT()
* For using different tradeoffs for compiletime vs runtime evaluation.
*
* License: CC0 (Public domain)
* Author: Rusty Russell <rusty@rustcorp.com.au>
*
* Example:
* #include <ccan/compiler/compiler.h>
* #include <stdio.h>
* #include <stdarg.h>
*
* // Example of a (slow-path) logging function.
* static int log_threshold = 2;
* static void COLD PRINTF_FMT(2,3)
* logger(int level, const char *fmt, ...)
* {
* va_list ap;
* va_start(ap, fmt);
* if (level >= log_threshold)
* vfprintf(stderr, fmt, ap);
* va_end(ap);
* }
*
* int main(int argc, char *argv[])
* {
* if (argc != 1) {
* logger(3, "Don't want %i arguments!\n", argc-1);
* return 1;
* }
* return 0;
* }
*/
int main(int argc, char *argv[])
{
/* Expect exactly one argument */
if (argc != 2)
return 1;
if (strcmp(argv[1], "depends") == 0) {
return 0;
}
return 1;
}

@ -0,0 +1,216 @@
/* CC0 (Public domain) - see LICENSE file for details */
#ifndef CCAN_COMPILER_H
#define CCAN_COMPILER_H
#ifndef COLD
#if HAVE_ATTRIBUTE_COLD
/**
* COLD - a function is unlikely to be called.
*
* Used to mark an unlikely code path and optimize appropriately.
* It is usually used on logging or error routines.
*
* Example:
* static void COLD moan(const char *reason)
* {
* fprintf(stderr, "Error: %s (%s)\n", reason, strerror(errno));
* }
*/
#define COLD __attribute__((cold))
#else
#define COLD
#endif
#endif
#ifndef NORETURN
#if HAVE_ATTRIBUTE_NORETURN
/**
* NORETURN - a function does not return
*
* Used to mark a function which exits; useful for suppressing warnings.
*
* Example:
* static void NORETURN fail(const char *reason)
* {
* fprintf(stderr, "Error: %s (%s)\n", reason, strerror(errno));
* exit(1);
* }
*/
#define NORETURN __attribute__((noreturn))
#else
#define NORETURN
#endif
#endif
#ifndef PRINTF_FMT
#if HAVE_ATTRIBUTE_PRINTF
/**
* PRINTF_FMT - a function takes printf-style arguments
* @nfmt: the 1-based number of the function's format argument.
* @narg: the 1-based number of the function's first variable argument.
*
* This allows the compiler to check your parameters as it does for printf().
*
* Example:
* void PRINTF_FMT(2,3) my_printf(const char *prefix, const char *fmt, ...);
*/
#define PRINTF_FMT(nfmt, narg) \
__attribute__((format(__printf__, nfmt, narg)))
#else
#define PRINTF_FMT(nfmt, narg)
#endif
#endif
#ifndef CONST_FUNCTION
#if HAVE_ATTRIBUTE_CONST
/**
* CONST_FUNCTION - a function's return depends only on its argument
*
* This allows the compiler to assume that the function will return the exact
* same value for the exact same arguments. This implies that the function
* must not use global variables, or dereference pointer arguments.
*/
#define CONST_FUNCTION __attribute__((const))
#else
#define CONST_FUNCTION
#endif
#endif
#if HAVE_ATTRIBUTE_UNUSED
#ifndef UNNEEDED
/**
* UNNEEDED - a variable/function may not be needed
*
* This suppresses warnings about unused variables or functions, but tells
* the compiler that if it is unused it need not emit it into the object code.
*
* Example:
* // With some preprocessor options, this is unnecessary.
* static UNNEEDED int counter;
*
* // With some preprocessor options, this is unnecessary.
* static UNNEEDED void add_to_counter(int add)
* {
* counter += add;
* }
*/
#define UNNEEDED __attribute__((unused))
#endif
#ifndef NEEDED
#if HAVE_ATTRIBUTE_USED
/**
* NEEDED - a variable/function is needed
*
* This suppresses warnings about unused variables or functions, but tells
* the compiler that it must exist even if it (seems) unused.
*
* Example:
* // Even if this is unused, these are vital for debugging.
* static NEEDED int counter;
* static NEEDED void dump_counter(void)
* {
* printf("Counter is %i\n", counter);
* }
*/
#define NEEDED __attribute__((used))
#else
/* Before the "used" attribute existed, unused functions and vars were always emitted. */
#define NEEDED __attribute__((unused))
#endif
#endif
#ifndef UNUSED
/**
* UNUSED - a parameter is unused
*
* Some compilers (eg. gcc with -W or -Wunused) warn about unused
* function parameters. This suppresses such warnings and indicates
* to the reader that it's deliberate.
*
* Example:
* // This is used as a callback, so needs to have this prototype.
* static int some_callback(void *unused UNUSED)
* {
* return 0;
* }
*/
#define UNUSED __attribute__((unused))
#endif
#else
#ifndef UNNEEDED
#define UNNEEDED
#endif
#ifndef NEEDED
#define NEEDED
#endif
#ifndef UNUSED
#define UNUSED
#endif
#endif
#ifndef IS_COMPILE_CONSTANT
#if HAVE_BUILTIN_CONSTANT_P
/**
* IS_COMPILE_CONSTANT - does the compiler know the value of this expression?
* @expr: the expression to evaluate
*
* When an expression manipulation is complicated, it is usually better to
* implement it in a function. However, if the expression being manipulated is
* known at compile time, it is better to have the compiler see the entire
* expression so it can simply substitute the result.
*
* This can be done using the IS_COMPILE_CONSTANT() macro.
*
* Example:
* enum greek { ALPHA, BETA, GAMMA, DELTA, EPSILON };
*
* // Out-of-line version.
* const char *greek_name(enum greek greek);
*
* // Inline version.
* static inline const char *_greek_name(enum greek greek)
* {
* switch (greek) {
* case ALPHA: return "alpha";
* case BETA: return "beta";
* case GAMMA: return "gamma";
* case DELTA: return "delta";
* case EPSILON: return "epsilon";
* default: return "**INVALID**";
* }
* }
*
* // Use inline if compiler knows answer. Otherwise call function
* // to avoid copies of the same code everywhere.
* #define greek_name(g) \
* (IS_COMPILE_CONSTANT(g) ? _greek_name(g) : greek_name(g))
*/
#define IS_COMPILE_CONSTANT(expr) __builtin_constant_p(expr)
#else
/* If we don't know, assume it's not. */
#define IS_COMPILE_CONSTANT(expr) 0
#endif
#endif
#ifndef WARN_UNUSED_RESULT
#if HAVE_WARN_UNUSED_RESULT
/**
* WARN_UNUSED_RESULT - warn if a function return value is unused.
*
* Used to mark a function where it is extremely unlikely that the caller
* can ignore the result, eg realloc().
*
* Example:
* // buf param may be freed by this; need return value!
* static char *WARN_UNUSED_RESULT enlarge(char *buf, unsigned *size)
* {
* return realloc(buf, (*size) *= 2);
* }
*/
#define WARN_UNUSED_RESULT __attribute__((warn_unused_result))
#else
#define WARN_UNUSED_RESULT
#endif
#endif
#endif /* CCAN_COMPILER_H */

@ -0,0 +1,22 @@
#include <ccan/compiler/compiler.h>
static void PRINTF_FMT(2,3) my_printf(int x, const char *fmt, ...)
{
}
int main(int argc, char *argv[])
{
unsigned int i = 0;
my_printf(1, "Not a pointer "
#ifdef FAIL
"%p",
#if !HAVE_ATTRIBUTE_PRINTF
#error "Unfortunately we don't fail if !HAVE_ATTRIBUTE_PRINTF."
#endif
#else
"%i",
#endif
i);
return 0;
}

@ -0,0 +1,15 @@
#include <ccan/compiler/compiler.h>
#include <ccan/tap/tap.h>
int main(int argc, char *argv[])
{
plan_tests(2);
ok1(!IS_COMPILE_CONSTANT(argc));
#if HAVE_BUILTIN_CONSTANT_P
ok1(IS_COMPILE_CONSTANT(7));
#else
pass("If !HAVE_BUILTIN_CONSTANT_P, IS_COMPILE_CONSTANT always false");
#endif
return exit_status();
}

@ -0,0 +1 @@
../licenses/CC0

@ -0,0 +1,63 @@
#include <stdio.h>
#include <string.h>
#include "config.h"
/**
* container_of - routine for upcasting
*
* It is often convenient to create code where the caller registers a pointer
* to a generic structure and a callback. The callback might know that the
* pointer points to within a larger structure, and container_of gives a
* convenient and fairly type-safe way of returning to the enclosing structure.
*
* This idiom is an alternative to providing a void * pointer for every
* callback.
*
* Example:
* #include <stdio.h>
* #include <ccan/container_of/container_of.h>
*
* struct timer {
* void *members;
* };
*
* struct info {
* int my_stuff;
* struct timer timer;
* };
*
* static void register_timer(struct timer *timer)
* {
* //...
* }
*
* static void my_timer_callback(struct timer *timer)
* {
* struct info *info = container_of(timer, struct info, timer);
* printf("my_stuff is %u\n", info->my_stuff);
* }
*
* int main(void)
* {
* struct info info = { .my_stuff = 1 };
*
* register_timer(&info.timer);
* // ...
* return 0;
* }
*
* License: CC0 (Public domain)
* Author: Rusty Russell <rusty@rustcorp.com.au>
*/
int main(int argc, char *argv[])
{
if (argc != 2)
return 1;
if (strcmp(argv[1], "depends") == 0) {
printf("ccan/check_type\n");
return 0;
}
return 1;
}

@ -0,0 +1,108 @@
/* CC0 (Public domain) - see LICENSE file for details */
#ifndef CCAN_CONTAINER_OF_H
#define CCAN_CONTAINER_OF_H
#include <stddef.h>
#include <ccan/check_type/check_type.h>
/**
* container_of - get pointer to enclosing structure
* @member_ptr: pointer to the structure member
* @containing_type: the type this member is within
* @member: the name of this member within the structure.
*
* Given a pointer to a member of a structure, this macro does pointer
* subtraction to return the pointer to the enclosing type.
*
* Example:
* struct foo {
* int fielda, fieldb;
* // ...
* };
* struct info {
* int some_other_field;
* struct foo my_foo;
* };
*
* static struct info *foo_to_info(struct foo *foo)
* {
* return container_of(foo, struct info, my_foo);
* }
*/
#define container_of(member_ptr, containing_type, member) \
((containing_type *) \
((char *)(member_ptr) \
- container_off(containing_type, member)) \
+ check_types_match(*(member_ptr), ((containing_type *)0)->member))
/**
* container_off - get offset to enclosing structure
* @containing_type: the type this member is within
* @member: the name of this member within the structure.
*
* Given a pointer to a member of a structure, this macro does
* typechecking and figures out the offset to the enclosing type.
*
* Example:
* struct foo {
* int fielda, fieldb;
* // ...
* };
* struct info {
* int some_other_field;
* struct foo my_foo;
* };
*
* static struct info *foo_to_info(struct foo *foo)
* {
* size_t off = container_off(struct info, my_foo);
* return (void *)((char *)foo - off);
* }
*/
#define container_off(containing_type, member) \
offsetof(containing_type, member)
/**
* container_of_var - get pointer to enclosing structure using a variable
* @member_ptr: pointer to the structure member
* @container_var: a pointer of same type as this member's container
* @member: the name of this member within the structure.
*
* Given a pointer to a member of a structure, this macro does pointer
* subtraction to return the pointer to the enclosing type.
*
* Example:
* static struct info *foo_to_i(struct foo *foo)
* {
* struct info *i = container_of_var(foo, i, my_foo);
* return i;
* }
*/
#if HAVE_TYPEOF
#define container_of_var(member_ptr, container_var, member) \
container_of(member_ptr, typeof(*container_var), member)
#else
#define container_of_var(member_ptr, container_var, member) \
((void *)((char *)(member_ptr) - \
container_off_var(container_var, member)))
#endif
/**
* container_off_var - get offset of a field in enclosing structure
* @container_var: a pointer to a container structure
* @member: the name of a member within the structure.
*
* Given (any) pointer to a structure and its member name, this
* macro does pointer subtraction to return offset of member in a
* structure memory layout.
*
*/
#if HAVE_TYPEOF
#define container_off_var(var, member) \
container_off(typeof(*var), member)
#else
#define container_off_var(var, member) \
((char *)&(var)->member - (char *)(var))
#endif
#endif /* CCAN_CONTAINER_OF_H */

@ -0,0 +1,22 @@
#include <ccan/container_of/container_of.h>
#include <stdlib.h>
struct foo {
int a;
char b;
};
int main(int argc, char *argv[])
{
struct foo foo = { .a = 1, .b = 2 };
int *intp = &foo.a;
char *p;
#ifdef FAIL
/* p is a char *, but this gives a struct foo * */
p = container_of(intp, struct foo, a);
#else
p = (char *)intp;
#endif
return p == NULL;
}

@ -0,0 +1,22 @@
#include <ccan/container_of/container_of.h>
#include <stdlib.h>
struct foo {
int a;
char b;
};
int main(int argc, char *argv[])
{
struct foo foo = { .a = 1, .b = 2 }, *foop;
int *intp = &foo.a;
#ifdef FAIL
/* b is a char, but intp is an int * */
foop = container_of(intp, struct foo, b);
#else
foop = NULL;
#endif
(void) foop; /* Suppress unused-but-set-variable warning. */
return intp == NULL;
}

@ -0,0 +1,25 @@
#include <ccan/container_of/container_of.h>
#include <stdlib.h>
struct foo {
int a;
char b;
};
int main(int argc, char *argv[])
{
struct foo foo = { .a = 1, .b = 2 }, *foop;
int *intp = &foo.a;
#ifdef FAIL
/* b is a char, but intp is an int * */
foop = container_of_var(intp, foop, b);
#if !HAVE_TYPEOF
#error "Unfortunately we don't fail if we don't have typeof."
#endif
#else
foop = NULL;
#endif
(void) foop; /* Suppress unused-but-set-variable warning. */
return intp == NULL;
}

@ -0,0 +1,26 @@
#include <ccan/container_of/container_of.h>
#include <ccan/tap/tap.h>
struct foo {
int a;
char b;
};
int main(int argc, char *argv[])
{
struct foo foo = { .a = 1, .b = 2 };
int *intp = &foo.a;
char *charp = &foo.b;
plan_tests(8);
ok1(container_of(intp, struct foo, a) == &foo);
ok1(container_of(charp, struct foo, b) == &foo);
ok1(container_of_var(intp, &foo, a) == &foo);
ok1(container_of_var(charp, &foo, b) == &foo);
ok1(container_off(struct foo, a) == 0);
ok1(container_off(struct foo, b) == offsetof(struct foo, b));
ok1(container_off_var(&foo, a) == 0);
ok1(container_off_var(&foo, b) == offsetof(struct foo, b));
return exit_status();
}

@ -0,0 +1 @@
../licenses/CC0

@ -0,0 +1,31 @@
#include <string.h>
#include <stdio.h>
/**
* hash - routines for hashing bytes
*
* When creating a hash table it's important to have a hash function
* which mixes well and is fast. This package supplies such functions.
*
* The hash functions come in two flavors: the normal ones and the
* stable ones. The normal ones can vary from machine-to-machine and
* may change if we find better or faster hash algorithms in future.
* The stable ones will always give the same results on any computer,
* and on any version of this package.
*
* License: CC0 (Public domain)
* Maintainer: Rusty Russell <rusty@rustcorp.com.au>
* Author: Bob Jenkins <bob_jenkins@burtleburtle.net>
*/
int main(int argc, char *argv[])
{
if (argc != 2)
return 1;
if (strcmp(argv[1], "depends") == 0) {
printf("ccan/build_assert\n");
return 0;
}
return 1;
}

@ -0,0 +1,928 @@
/* CC0 (Public domain) - see LICENSE file for details */
/*
-------------------------------------------------------------------------------
lookup3.c, by Bob Jenkins, May 2006, Public Domain.
These are functions for producing 32-bit hashes for hash table lookup.
hash_word(), hashlittle(), hashlittle2(), hashbig(), mix(), and final()
are externally useful functions. Routines to test the hash are included
if SELF_TEST is defined. You can use this free for any purpose. It's in
the public domain. It has no warranty.
You probably want to use hashlittle(). hashlittle() and hashbig()
hash byte arrays. hashlittle() is faster than hashbig() on
little-endian machines. Intel and AMD are little-endian machines.
On second thought, you probably want hashlittle2(), which is identical to
hashlittle() except it returns two 32-bit hashes for the price of one.
You could implement hashbig2() if you wanted but I haven't bothered here.
If you want to find a hash of, say, exactly 7 integers, do
a = i1; b = i2; c = i3;
mix(a,b,c);
a += i4; b += i5; c += i6;
mix(a,b,c);
a += i7;
final(a,b,c);
then use c as the hash value. If you have a variable length array of
4-byte integers to hash, use hash_word(). If you have a byte array (like
a character string), use hashlittle(). If you have several byte arrays, or
a mix of things, see the comments above hashlittle().
Why is this so big? I read 12 bytes at a time into 3 4-byte integers,
then mix those integers. This is fast (you can do a lot more thorough
mixing with 12*3 instructions on 3 integers than you can with 3 instructions
on 1 byte), but shoehorning those bytes into integers efficiently is messy.
-------------------------------------------------------------------------------
*/
//#define SELF_TEST 1
#if 0
#include <stdio.h> /* defines printf for tests */
#include <time.h> /* defines time_t for timings in the test */
#include <stdint.h> /* defines uint32_t etc */
#include <sys/param.h> /* attempt to define endianness */
#ifdef linux
# include <endian.h> /* attempt to define endianness */
#endif
/*
* My best guess at if you are big-endian or little-endian. This may
* need adjustment.
*/
#if (defined(__BYTE_ORDER) && defined(__LITTLE_ENDIAN) && \
__BYTE_ORDER == __LITTLE_ENDIAN) || \
(defined(i386) || defined(__i386__) || defined(__i486__) || \
defined(__i586__) || defined(__i686__) || defined(__x86_64) || \
defined(vax) || defined(MIPSEL))
# define HASH_LITTLE_ENDIAN 1
# define HASH_BIG_ENDIAN 0
#elif (defined(__BYTE_ORDER) && defined(__BIG_ENDIAN) && \
__BYTE_ORDER == __BIG_ENDIAN) || \
(defined(sparc) || defined(POWERPC) || defined(mc68000) || defined(sel))
# define HASH_LITTLE_ENDIAN 0
# define HASH_BIG_ENDIAN 1
#else
#define HASH_LITTLE_ENDIAN 1
#define HASH_BIG_ENDIAN 0
#endif
#endif /* old hash.c headers. */
#include "hash.h"
#if HAVE_LITTLE_ENDIAN
#define HASH_LITTLE_ENDIAN 1
#define HASH_BIG_ENDIAN 0
#elif HAVE_BIG_ENDIAN
#define HASH_LITTLE_ENDIAN 0
#define HASH_BIG_ENDIAN 1
#else
#define HASH_LITTLE_ENDIAN 1
#define HASH_BIG_ENDIAN 0
#endif
#define hashsize(n) ((uint32_t)1<<(n))
#define hashmask(n) (hashsize(n)-1)
#define rot(x,k) (((x)<<(k)) | ((x)>>(32-(k))))
/*
-------------------------------------------------------------------------------
mix -- mix 3 32-bit values reversibly.
This is reversible, so any information in (a,b,c) before mix() is
still in (a,b,c) after mix().
If four pairs of (a,b,c) inputs are run through mix(), or through
mix() in reverse, there are at least 32 bits of the output that
are sometimes the same for one pair and different for another pair.
This was tested for:
* pairs that differed by one bit, by two bits, in any combination
of top bits of (a,b,c), or in any combination of bottom bits of
(a,b,c).
* "differ" is defined as +, -, ^, or ~^. For + and -, I transformed
the output delta to a Gray code (a^(a>>1)) so a string of 1's (as
is commonly produced by subtraction) look like a single 1-bit
difference.
* the base values were pseudorandom, all zero but one bit set, or
all zero plus a counter that starts at zero.
Some k values for my "a-=c; a^=rot(c,k); c+=b;" arrangement that
satisfy this are
4 6 8 16 19 4
9 15 3 18 27 15
14 9 3 7 17 3
Well, "9 15 3 18 27 15" didn't quite get 32 bits diffing
for "differ" defined as + with a one-bit base and a two-bit delta. I
used http://burtleburtle.net/bob/hash/avalanche.html to choose
the operations, constants, and arrangements of the variables.
This does not achieve avalanche. There are input bits of (a,b,c)
that fail to affect some output bits of (a,b,c), especially of a. The
most thoroughly mixed value is c, but it doesn't really even achieve
avalanche in c.
This allows some parallelism. Read-after-writes are good at doubling
the number of bits affected, so the goal of mixing pulls in the opposite
direction as the goal of parallelism. I did what I could. Rotates
seem to cost as much as shifts on every machine I could lay my hands
on, and rotates are much kinder to the top and bottom bits, so I used
rotates.
-------------------------------------------------------------------------------
*/
#define mix(a,b,c) \
{ \
a -= c; a ^= rot(c, 4); c += b; \
b -= a; b ^= rot(a, 6); a += c; \
c -= b; c ^= rot(b, 8); b += a; \
a -= c; a ^= rot(c,16); c += b; \
b -= a; b ^= rot(a,19); a += c; \
c -= b; c ^= rot(b, 4); b += a; \
}
/*
-------------------------------------------------------------------------------
final -- final mixing of 3 32-bit values (a,b,c) into c
Pairs of (a,b,c) values differing in only a few bits will usually
produce values of c that look totally different. This was tested for
* pairs that differed by one bit, by two bits, in any combination
of top bits of (a,b,c), or in any combination of bottom bits of
(a,b,c).
* "differ" is defined as +, -, ^, or ~^. For + and -, I transformed
the output delta to a Gray code (a^(a>>1)) so a string of 1's (as
is commonly produced by subtraction) look like a single 1-bit
difference.
* the base values were pseudorandom, all zero but one bit set, or
all zero plus a counter that starts at zero.
These constants passed:
14 11 25 16 4 14 24
12 14 25 16 4 14 24
and these came close:
4 8 15 26 3 22 24
10 8 15 26 3 22 24
11 8 15 26 3 22 24
-------------------------------------------------------------------------------
*/
#define final(a,b,c) \
{ \
c ^= b; c -= rot(b,14); \
a ^= c; a -= rot(c,11); \
b ^= a; b -= rot(a,25); \
c ^= b; c -= rot(b,16); \
a ^= c; a -= rot(c,4); \
b ^= a; b -= rot(a,14); \
c ^= b; c -= rot(b,24); \
}
/*
--------------------------------------------------------------------
This works on all machines. To be useful, it requires
-- that the key be an array of uint32_t's, and
-- that the length be the number of uint32_t's in the key
The function hash_word() is identical to hashlittle() on little-endian
machines, and identical to hashbig() on big-endian machines,
except that the length has to be measured in uint32_ts rather than in
bytes. hashlittle() is more complicated than hash_word() only because
hashlittle() has to dance around fitting the key bytes into registers.
--------------------------------------------------------------------
*/
uint32_t hash_u32(
const uint32_t *k, /* the key, an array of uint32_t values */
size_t length, /* the length of the key, in uint32_ts */
uint32_t initval) /* the previous hash, or an arbitrary value */
{
uint32_t a,b,c;
/* Set up the internal state */
a = b = c = 0xdeadbeef + (((uint32_t)length)<<2) + initval;
/*------------------------------------------------- handle most of the key */
while (length > 3)
{
a += k[0];
b += k[1];
c += k[2];
mix(a,b,c);
length -= 3;
k += 3;
}
/*------------------------------------------- handle the last 3 uint32_t's */
switch(length) /* all the case statements fall through */
{
case 3 : c+=k[2];
case 2 : b+=k[1];
case 1 : a+=k[0];
final(a,b,c);
case 0: /* case 0: nothing left to add */
break;
}
/*------------------------------------------------------ report the result */
return c;
}
/*
-------------------------------------------------------------------------------
hashlittle() -- hash a variable-length key into a 32-bit value
k : the key (the unaligned variable-length array of bytes)
length : the length of the key, counting by bytes
val2 : IN: can be any 4-byte value OUT: second 32 bit hash.
Returns a 32-bit value. Every bit of the key affects every bit of
the return value. Two keys differing by one or two bits will have
totally different hash values. Note that the return value is better
mixed than val2, so use that first.
The best hash table sizes are powers of 2. There is no need to do
mod a prime (mod is sooo slow!). If you need less than 32 bits,
use a bitmask. For example, if you need only 10 bits, do
h = (h & hashmask(10));
In which case, the hash table should have hashsize(10) elements.
If you are hashing n strings (uint8_t **)k, do it like this:
for (i=0, h=0; i<n; ++i) h = hashlittle( k[i], len[i], h);
By Bob Jenkins, 2006. bob_jenkins@burtleburtle.net. You may use this
code any way you wish, private, educational, or commercial. It's free.
Use for hash table lookup, or anything where one collision in 2^^32 is
acceptable. Do NOT use for cryptographic purposes.
-------------------------------------------------------------------------------
*/
static uint32_t hashlittle( const void *key, size_t length, uint32_t *val2 )
{
uint32_t a,b,c; /* internal state */
union { const void *ptr; size_t i; } u; /* needed for Mac Powerbook G4 */
/* Set up the internal state */
a = b = c = 0xdeadbeef + ((uint32_t)length) + *val2;
u.ptr = key;
if (HASH_LITTLE_ENDIAN && ((u.i & 0x3) == 0)) {
const uint32_t *k = (const uint32_t *)key; /* read 32-bit chunks */
const uint8_t *k8;
/*------ all but last block: aligned reads and affect 32 bits of (a,b,c) */
while (length > 12)
{
a += k[0];
b += k[1];
c += k[2];
mix(a,b,c);
length -= 12;
k += 3;
}
/*----------------------------- handle the last (probably partial) block */
/*
* "k[2]&0xffffff" actually reads beyond the end of the string, but
* then masks off the part it's not allowed to read. Because the
* string is aligned, the masked-off tail is in the same word as the
* rest of the string. Every machine with memory protection I've seen
* does it on word boundaries, so is OK with this. But VALGRIND will
* still catch it and complain. The masking trick does make the hash
* noticeably faster for short strings (like English words).
*
* Not on my testing with gcc 4.5 on an intel i5 CPU, at least --RR.
*/
#if 0
switch(length)
{
case 12: c+=k[2]; b+=k[1]; a+=k[0]; break;
case 11: c+=k[2]&0xffffff; b+=k[1]; a+=k[0]; break;
case 10: c+=k[2]&0xffff; b+=k[1]; a+=k[0]; break;
case 9 : c+=k[2]&0xff; b+=k[1]; a+=k[0]; break;
case 8 : b+=k[1]; a+=k[0]; break;
case 7 : b+=k[1]&0xffffff; a+=k[0]; break;
case 6 : b+=k[1]&0xffff; a+=k[0]; break;
case 5 : b+=k[1]&0xff; a+=k[0]; break;
case 4 : a+=k[0]; break;
case 3 : a+=k[0]&0xffffff; break;
case 2 : a+=k[0]&0xffff; break;
case 1 : a+=k[0]&0xff; break;
case 0 : return c; /* zero length strings require no mixing */
}
#else /* make valgrind happy */
k8 = (const uint8_t *)k;
switch(length)
{
case 12: c+=k[2]; b+=k[1]; a+=k[0]; break;
case 11: c+=((uint32_t)k8[10])<<16; /* fall through */
case 10: c+=((uint32_t)k8[9])<<8; /* fall through */
case 9 : c+=k8[8]; /* fall through */
case 8 : b+=k[1]; a+=k[0]; break;
case 7 : b+=((uint32_t)k8[6])<<16; /* fall through */
case 6 : b+=((uint32_t)k8[5])<<8; /* fall through */
case 5 : b+=k8[4]; /* fall through */
case 4 : a+=k[0]; break;
case 3 : a+=((uint32_t)k8[2])<<16; /* fall through */
case 2 : a+=((uint32_t)k8[1])<<8; /* fall through */
case 1 : a+=k8[0]; break;
case 0 : return c;
}
#endif /* !valgrind */
} else if (HASH_LITTLE_ENDIAN && ((u.i & 0x1) == 0)) {
const uint16_t *k = (const uint16_t *)key; /* read 16-bit chunks */
const uint8_t *k8;
/*--------------- all but last block: aligned reads and different mixing */
while (length > 12)
{
a += k[0] + (((uint32_t)k[1])<<16);
b += k[2] + (((uint32_t)k[3])<<16);
c += k[4] + (((uint32_t)k[5])<<16);
mix(a,b,c);
length -= 12;
k += 6;
}
/*----------------------------- handle the last (probably partial) block */
k8 = (const uint8_t *)k;
switch(length)
{
case 12: c+=k[4]+(((uint32_t)k[5])<<16);
b+=k[2]+(((uint32_t)k[3])<<16);
a+=k[0]+(((uint32_t)k[1])<<16);
break;
case 11: c+=((uint32_t)k8[10])<<16; /* fall through */
case 10: c+=k[4];
b+=k[2]+(((uint32_t)k[3])<<16);
a+=k[0]+(((uint32_t)k[1])<<16);
break;
case 9 : c+=k8[8]; /* fall through */
case 8 : b+=k[2]+(((uint32_t)k[3])<<16);
a+=k[0]+(((uint32_t)k[1])<<16);
break;
case 7 : b+=((uint32_t)k8[6])<<16; /* fall through */
case 6 : b+=k[2];
a+=k[0]+(((uint32_t)k[1])<<16);
break;
case 5 : b+=k8[4]; /* fall through */
case 4 : a+=k[0]+(((uint32_t)k[1])<<16);
break;
case 3 : a+=((uint32_t)k8[2])<<16; /* fall through */
case 2 : a+=k[0];
break;
case 1 : a+=k8[0];
break;
case 0 : return c; /* zero length requires no mixing */
}
} else { /* need to read the key one byte at a time */
const uint8_t *k = (const uint8_t *)key;
/*--------------- all but the last block: affect some 32 bits of (a,b,c) */
while (length > 12)
{
a += k[0];
a += ((uint32_t)k[1])<<8;
a += ((uint32_t)k[2])<<16;
a += ((uint32_t)k[3])<<24;
b += k[4];
b += ((uint32_t)k[5])<<8;
b += ((uint32_t)k[6])<<16;
b += ((uint32_t)k[7])<<24;
c += k[8];
c += ((uint32_t)k[9])<<8;
c += ((uint32_t)k[10])<<16;
c += ((uint32_t)k[11])<<24;
mix(a,b,c);
length -= 12;
k += 12;
}
/*-------------------------------- last block: affect all 32 bits of (c) */
switch(length) /* all the case statements fall through */
{
case 12: c+=((uint32_t)k[11])<<24;
case 11: c+=((uint32_t)k[10])<<16;
case 10: c+=((uint32_t)k[9])<<8;
case 9 : c+=k[8];
case 8 : b+=((uint32_t)k[7])<<24;
case 7 : b+=((uint32_t)k[6])<<16;
case 6 : b+=((uint32_t)k[5])<<8;
case 5 : b+=k[4];
case 4 : a+=((uint32_t)k[3])<<24;
case 3 : a+=((uint32_t)k[2])<<16;
case 2 : a+=((uint32_t)k[1])<<8;
case 1 : a+=k[0];
break;
case 0 : return c;
}
}
final(a,b,c);
*val2 = b;
return c;
}
/*
* hashbig():
* This is the same as hash_word() on big-endian machines. It is different
* from hashlittle() on all machines. hashbig() takes advantage of
* big-endian byte ordering.
*/
static uint32_t hashbig( const void *key, size_t length, uint32_t *val2)
{
uint32_t a,b,c;
union { const void *ptr; size_t i; } u; /* to cast key to (size_t) happily */
/* Set up the internal state */
a = b = c = 0xdeadbeef + ((uint32_t)length) + *val2;
u.ptr = key;
if (HASH_BIG_ENDIAN && ((u.i & 0x3) == 0)) {
const uint32_t *k = (const uint32_t *)key; /* read 32-bit chunks */
const uint8_t *k8;
/*------ all but last block: aligned reads and affect 32 bits of (a,b,c) */
while (length > 12)
{
a += k[0];
b += k[1];
c += k[2];
mix(a,b,c);
length -= 12;
k += 3;
}
/*----------------------------- handle the last (probably partial) block */
/*
* "k[2]<<8" actually reads beyond the end of the string, but
* then shifts out the part it's not allowed to read. Because the
* string is aligned, the illegal read is in the same word as the
* rest of the string. Every machine with memory protection I've seen
* does it on word boundaries, so is OK with this. But VALGRIND will
* still catch it and complain. The masking trick does make the hash
* noticeably faster for short strings (like English words).
*
* Not on my testing with gcc 4.5 on an intel i5 CPU, at least --RR.
*/
#if 0
switch(length)
{
case 12: c+=k[2]; b+=k[1]; a+=k[0]; break;
case 11: c+=k[2]&0xffffff00; b+=k[1]; a+=k[0]; break;
case 10: c+=k[2]&0xffff0000; b+=k[1]; a+=k[0]; break;
case 9 : c+=k[2]&0xff000000; b+=k[1]; a+=k[0]; break;
case 8 : b+=k[1]; a+=k[0]; break;
case 7 : b+=k[1]&0xffffff00; a+=k[0]; break;
case 6 : b+=k[1]&0xffff0000; a+=k[0]; break;
case 5 : b+=k[1]&0xff000000; a+=k[0]; break;
case 4 : a+=k[0]; break;
case 3 : a+=k[0]&0xffffff00; break;
case 2 : a+=k[0]&0xffff0000; break;
case 1 : a+=k[0]&0xff000000; break;
case 0 : return c; /* zero length strings require no mixing */
}
#else /* make valgrind happy */
k8 = (const uint8_t *)k;
switch(length) /* all the case statements fall through */
{
case 12: c+=k[2]; b+=k[1]; a+=k[0]; break;
case 11: c+=((uint32_t)k8[10])<<8; /* fall through */
case 10: c+=((uint32_t)k8[9])<<16; /* fall through */
case 9 : c+=((uint32_t)k8[8])<<24; /* fall through */
case 8 : b+=k[1]; a+=k[0]; break;
case 7 : b+=((uint32_t)k8[6])<<8; /* fall through */
case 6 : b+=((uint32_t)k8[5])<<16; /* fall through */
case 5 : b+=((uint32_t)k8[4])<<24; /* fall through */
case 4 : a+=k[0]; break;
case 3 : a+=((uint32_t)k8[2])<<8; /* fall through */
case 2 : a+=((uint32_t)k8[1])<<16; /* fall through */
case 1 : a+=((uint32_t)k8[0])<<24; break;
case 0 : return c;
}
#endif /* !VALGRIND */
} else { /* need to read the key one byte at a time */
const uint8_t *k = (const uint8_t *)key;
/*--------------- all but the last block: affect some 32 bits of (a,b,c) */
while (length > 12)
{
a += ((uint32_t)k[0])<<24;
a += ((uint32_t)k[1])<<16;
a += ((uint32_t)k[2])<<8;
a += ((uint32_t)k[3]);
b += ((uint32_t)k[4])<<24;
b += ((uint32_t)k[5])<<16;
b += ((uint32_t)k[6])<<8;
b += ((uint32_t)k[7]);
c += ((uint32_t)k[8])<<24;
c += ((uint32_t)k[9])<<16;
c += ((uint32_t)k[10])<<8;
c += ((uint32_t)k[11]);
mix(a,b,c);
length -= 12;
k += 12;
}
/*-------------------------------- last block: affect all 32 bits of (c) */
switch(length) /* all the case statements fall through */
{
case 12: c+=k[11];
case 11: c+=((uint32_t)k[10])<<8;
case 10: c+=((uint32_t)k[9])<<16;
case 9 : c+=((uint32_t)k[8])<<24;
case 8 : b+=k[7];
case 7 : b+=((uint32_t)k[6])<<8;
case 6 : b+=((uint32_t)k[5])<<16;
case 5 : b+=((uint32_t)k[4])<<24;
case 4 : a+=k[3];
case 3 : a+=((uint32_t)k[2])<<8;
case 2 : a+=((uint32_t)k[1])<<16;
case 1 : a+=((uint32_t)k[0])<<24;
break;
case 0 : return c;
}
}
final(a,b,c);
*val2 = b;
return c;
}
/* I basically use hashlittle here, but use native endian within each
* element. This delivers least-surprise: hash such as "int arr[] = {
* 1, 2 }; hash_stable(arr, 2, 0);" will be the same on big and little
* endian machines, even though a bytewise hash wouldn't be. */
uint64_t hash64_stable_64(const void *key, size_t n, uint64_t base)
{
const uint64_t *k = key;
uint32_t a,b,c;
/* Set up the internal state */
a = b = c = 0xdeadbeef + ((uint32_t)n*8) + (base >> 32) + base;
while (n > 3) {
a += (uint32_t)k[0];
b += (uint32_t)(k[0] >> 32);
c += (uint32_t)k[1];
mix(a,b,c);
a += (uint32_t)(k[1] >> 32);
b += (uint32_t)k[2];
c += (uint32_t)(k[2] >> 32);
mix(a,b,c);
n -= 3;
k += 3;
}
switch (n) {
case 2:
a += (uint32_t)k[0];
b += (uint32_t)(k[0] >> 32);
c += (uint32_t)k[1];
mix(a,b,c);
a += (uint32_t)(k[1] >> 32);
break;
case 1:
a += (uint32_t)k[0];
b += (uint32_t)(k[0] >> 32);
break;
case 0:
return c;
}
final(a,b,c);
return ((uint64_t)b << 32) | c;
}
uint64_t hash64_stable_32(const void *key, size_t n, uint64_t base)
{
const uint32_t *k = key;
uint32_t a,b,c;
/* Set up the internal state */
a = b = c = 0xdeadbeef + ((uint32_t)n*4) + (base >> 32) + base;
while (n > 3) {
a += k[0];
b += k[1];
c += k[2];
mix(a,b,c);
n -= 3;
k += 3;
}
switch (n) {
case 2:
b += (uint32_t)k[1];
case 1:
a += (uint32_t)k[0];
break;
case 0:
return c;
}
final(a,b,c);
return ((uint64_t)b << 32) | c;
}
uint64_t hash64_stable_16(const void *key, size_t n, uint64_t base)
{
const uint16_t *k = key;
uint32_t a,b,c;
/* Set up the internal state */
a = b = c = 0xdeadbeef + ((uint32_t)n*2) + (base >> 32) + base;
while (n > 6) {
a += (uint32_t)k[0] + ((uint32_t)k[1] << 16);
b += (uint32_t)k[2] + ((uint32_t)k[3] << 16);
c += (uint32_t)k[4] + ((uint32_t)k[5] << 16);
mix(a,b,c);
n -= 6;
k += 6;
}
switch (n) {
case 5:
c += (uint32_t)k[4];
case 4:
b += ((uint32_t)k[3] << 16);
case 3:
b += (uint32_t)k[2];
case 2:
a += ((uint32_t)k[1] << 16);
case 1:
a += (uint32_t)k[0];
break;
case 0:
return c;
}
final(a,b,c);
return ((uint64_t)b << 32) | c;
}
uint64_t hash64_stable_8(const void *key, size_t n, uint64_t base)
{
uint32_t b32 = base + (base >> 32);
uint32_t lower = hashlittle(key, n, &b32);
return ((uint64_t)b32 << 32) | lower;
}
uint32_t hash_any(const void *key, size_t length, uint32_t base)
{
if (HASH_BIG_ENDIAN)
return hashbig(key, length, &base);
else
return hashlittle(key, length, &base);
}
uint32_t hash_stable_64(const void *key, size_t n, uint32_t base)
{
return hash64_stable_64(key, n, base);
}
uint32_t hash_stable_32(const void *key, size_t n, uint32_t base)
{
return hash64_stable_32(key, n, base);
}
uint32_t hash_stable_16(const void *key, size_t n, uint32_t base)
{
return hash64_stable_16(key, n, base);
}
uint32_t hash_stable_8(const void *key, size_t n, uint32_t base)
{
return hashlittle(key, n, &base);
}
/* Jenkins' lookup8 is a 64 bit hash, but he says it's obsolete. Use
* the plain one and recombine into 64 bits. */
uint64_t hash64_any(const void *key, size_t length, uint64_t base)
{
uint32_t b32 = base + (base >> 32);
uint32_t lower;
if (HASH_BIG_ENDIAN)
lower = hashbig(key, length, &b32);
else
lower = hashlittle(key, length, &b32);
return ((uint64_t)b32 << 32) | lower;
}
#ifdef SELF_TEST
/* used for timings */
void driver1()
{
uint8_t buf[256];
uint32_t i;
uint32_t h=0;
time_t a,z;
time(&a);
for (i=0; i<256; ++i) buf[i] = 'x';
for (i=0; i<1; ++i)
{
h = hashlittle(&buf[0],1,h);
}
time(&z);
if (z-a > 0) printf("time %d %.8x\n", z-a, h);
}
/* check that every input bit changes every output bit half the time */
#define HASHSTATE 1
#define HASHLEN 1
#define MAXPAIR 60
#define MAXLEN 70
void driver2()
{
uint8_t qa[MAXLEN+1], qb[MAXLEN+2], *a = &qa[0], *b = &qb[1];
uint32_t c[HASHSTATE], d[HASHSTATE], i=0, j=0, k, l, m=0, z;
uint32_t e[HASHSTATE],f[HASHSTATE],g[HASHSTATE],h[HASHSTATE];
uint32_t x[HASHSTATE],y[HASHSTATE];
uint32_t hlen;
printf("No more than %d trials should ever be needed \n",MAXPAIR/2);
for (hlen=0; hlen < MAXLEN; ++hlen)
{
z=0;
for (i=0; i<hlen; ++i) /*----------------------- for each input byte, */
{
for (j=0; j<8; ++j) /*------------------------ for each input bit, */
{
for (m=1; m<8; ++m) /*------------ for several possible initvals, */
{
for (l=0; l<HASHSTATE; ++l)
e[l]=f[l]=g[l]=h[l]=x[l]=y[l]=~((uint32_t)0);
/*---- check that every output bit is affected by that input bit */
for (k=0; k<MAXPAIR; k+=2)
{
uint32_t finished=1;
/* keys have one bit different */
for (l=0; l<hlen+1; ++l) {a[l] = b[l] = (uint8_t)0;}
/* have a and b be two keys differing in only one bit */
a[i] ^= (k<<j);
a[i] ^= (k>>(8-j));
c[0] = hashlittle(a, hlen, m);
b[i] ^= ((k+1)<<j);
b[i] ^= ((k+1)>>(8-j));
d[0] = hashlittle(b, hlen, m);
/* check every bit is 1, 0, set, and not set at least once */
for (l=0; l<HASHSTATE; ++l)
{
e[l] &= (c[l]^d[l]);
f[l] &= ~(c[l]^d[l]);
g[l] &= c[l];
h[l] &= ~c[l];
x[l] &= d[l];
y[l] &= ~d[l];
if (e[l]|f[l]|g[l]|h[l]|x[l]|y[l]) finished=0;
}
if (finished) break;
}
if (k>z) z=k;
if (k==MAXPAIR)
{
printf("Some bit didn't change: ");
printf("%.8x %.8x %.8x %.8x %.8x %.8x ",
e[0],f[0],g[0],h[0],x[0],y[0]);
printf("i %d j %d m %d len %d\n", i, j, m, hlen);
}
if (z==MAXPAIR) goto done;
}
}
}
done:
if (z < MAXPAIR)
{
printf("Mix success %2d bytes %2d initvals ",i,m);
printf("required %d trials\n", z/2);
}
}
printf("\n");
}
/* Check for reading beyond the end of the buffer and alignment problems */
void driver3()
{
uint8_t buf[MAXLEN+20], *b;
uint32_t len;
uint8_t q[] = "This is the time for all good men to come to the aid of their country...";
uint32_t h;
uint8_t qq[] = "xThis is the time for all good men to come to the aid of their country...";
uint32_t i;
uint8_t qqq[] = "xxThis is the time for all good men to come to the aid of their country...";
uint32_t j;
uint8_t qqqq[] = "xxxThis is the time for all good men to come to the aid of their country...";
uint32_t ref,x,y;
uint8_t *p;
printf("Endianness. These lines should all be the same (for values filled in):\n");
printf("%.8x %.8x %.8x\n",
hash_word((const uint32_t *)q, (sizeof(q)-1)/4, 13),
hash_word((const uint32_t *)q, (sizeof(q)-5)/4, 13),
hash_word((const uint32_t *)q, (sizeof(q)-9)/4, 13));
p = q;
printf("%.8x %.8x %.8x %.8x %.8x %.8x %.8x %.8x %.8x %.8x %.8x %.8x\n",
hashlittle(p, sizeof(q)-1, 13), hashlittle(p, sizeof(q)-2, 13),
hashlittle(p, sizeof(q)-3, 13), hashlittle(p, sizeof(q)-4, 13),
hashlittle(p, sizeof(q)-5, 13), hashlittle(p, sizeof(q)-6, 13),
hashlittle(p, sizeof(q)-7, 13), hashlittle(p, sizeof(q)-8, 13),
hashlittle(p, sizeof(q)-9, 13), hashlittle(p, sizeof(q)-10, 13),
hashlittle(p, sizeof(q)-11, 13), hashlittle(p, sizeof(q)-12, 13));
p = &qq[1];
printf("%.8x %.8x %.8x %.8x %.8x %.8x %.8x %.8x %.8x %.8x %.8x %.8x\n",
hashlittle(p, sizeof(q)-1, 13), hashlittle(p, sizeof(q)-2, 13),
hashlittle(p, sizeof(q)-3, 13), hashlittle(p, sizeof(q)-4, 13),
hashlittle(p, sizeof(q)-5, 13), hashlittle(p, sizeof(q)-6, 13),
hashlittle(p, sizeof(q)-7, 13), hashlittle(p, sizeof(q)-8, 13),
hashlittle(p, sizeof(q)-9, 13), hashlittle(p, sizeof(q)-10, 13),
hashlittle(p, sizeof(q)-11, 13), hashlittle(p, sizeof(q)-12, 13));
p = &qqq[2];
printf("%.8x %.8x %.8x %.8x %.8x %.8x %.8x %.8x %.8x %.8x %.8x %.8x\n",
hashlittle(p, sizeof(q)-1, 13), hashlittle(p, sizeof(q)-2, 13),
hashlittle(p, sizeof(q)-3, 13), hashlittle(p, sizeof(q)-4, 13),
hashlittle(p, sizeof(q)-5, 13), hashlittle(p, sizeof(q)-6, 13),
hashlittle(p, sizeof(q)-7, 13), hashlittle(p, sizeof(q)-8, 13),
hashlittle(p, sizeof(q)-9, 13), hashlittle(p, sizeof(q)-10, 13),
hashlittle(p, sizeof(q)-11, 13), hashlittle(p, sizeof(q)-12, 13));
p = &qqqq[3];
printf("%.8x %.8x %.8x %.8x %.8x %.8x %.8x %.8x %.8x %.8x %.8x %.8x\n",
hashlittle(p, sizeof(q)-1, 13), hashlittle(p, sizeof(q)-2, 13),
hashlittle(p, sizeof(q)-3, 13), hashlittle(p, sizeof(q)-4, 13),
hashlittle(p, sizeof(q)-5, 13), hashlittle(p, sizeof(q)-6, 13),
hashlittle(p, sizeof(q)-7, 13), hashlittle(p, sizeof(q)-8, 13),
hashlittle(p, sizeof(q)-9, 13), hashlittle(p, sizeof(q)-10, 13),
hashlittle(p, sizeof(q)-11, 13), hashlittle(p, sizeof(q)-12, 13));
printf("\n");
/* check that hashlittle2 and hashlittle produce the same results */
i=47; j=0;
hashlittle2(q, sizeof(q), &i, &j);
if (hashlittle(q, sizeof(q), 47) != i)
printf("hashlittle2 and hashlittle mismatch\n");
/* check that hash_word2 and hash_word produce the same results */
len = 0xdeadbeef;
i=47, j=0;
hash_word2(&len, 1, &i, &j);
if (hash_word(&len, 1, 47) != i)
printf("hash_word2 and hash_word mismatch %x %x\n",
i, hash_word(&len, 1, 47));
/* check hashlittle doesn't read before or after the ends of the string */
for (h=0, b=buf+1; h<8; ++h, ++b)
{
for (i=0; i<MAXLEN; ++i)
{
len = i;
for (j=0; j<i; ++j) *(b+j)=0;
/* these should all be equal */
ref = hashlittle(b, len, (uint32_t)1);
*(b+i)=(uint8_t)~0;
*(b-1)=(uint8_t)~0;
x = hashlittle(b, len, (uint32_t)1);
y = hashlittle(b, len, (uint32_t)1);
if ((ref != x) || (ref != y))
{
printf("alignment error: %.8x %.8x %.8x %d %d\n",ref,x,y,
h, i);
}
}
}
}
/* check for problems with nulls */
void driver4()
{
uint8_t buf[1];
uint32_t h,i,state[HASHSTATE];
buf[0] = ~0;
for (i=0; i<HASHSTATE; ++i) state[i] = 1;
printf("These should all be different\n");
for (i=0, h=0; i<8; ++i)
{
h = hashlittle(buf, 0, h);
printf("%2ld 0-byte strings, hash is %.8x\n", i, h);
}
}
int main()
{
driver1(); /* test that the key is hashed: used for timings */
driver2(); /* test that whole key is hashed thoroughly */
driver3(); /* test that nothing but the key is hashed */
driver4(); /* test hashing multiple buffers (all buffers are null) */
return 1;
}
#endif /* SELF_TEST */

@ -0,0 +1,312 @@
/* CC0 (Public domain) - see LICENSE file for details */
#ifndef CCAN_HASH_H
#define CCAN_HASH_H
#include <stdint.h>
#include <stdlib.h>
#include <ccan/build_assert/build_assert.h>
/* Stolen mostly from: lookup3.c, by Bob Jenkins, May 2006, Public Domain.
*
* http://burtleburtle.net/bob/c/lookup3.c
*/
/**
* hash - fast hash of an array for internal use
* @p: the array or pointer to first element
* @num: the number of elements to hash
* @base: the base number to roll into the hash (usually 0)
*
* The memory region pointed to by p is combined with the base to form
* a 32-bit hash.
*
* This hash will have different results on different machines, so is
* only useful for internal hashes (ie. not hashes sent across the
* network or saved to disk).
*
* It may also change with future versions: it could even detect at runtime
* what the fastest hash to use is.
*
* See also: hash64, hash_stable.
*
* Example:
* #include <ccan/hash/hash.h>
* #include <err.h>
* #include <stdio.h>
* #include <string.h>
*
* // Simple demonstration: identical strings will have the same hash, but
* // two different strings will probably not.
* int main(int argc, char *argv[])
* {
* uint32_t hash1, hash2;
*
* if (argc != 3)
* err(1, "Usage: %s <string1> <string2>", argv[0]);
*
* hash1 = hash(argv[1], strlen(argv[1]), 0);
* hash2 = hash(argv[2], strlen(argv[2]), 0);
* printf("Hash is %s\n", hash1 == hash2 ? "same" : "different");
* return 0;
* }
*/
#define hash(p, num, base) hash_any((p), (num)*sizeof(*(p)), (base))
/**
* hash_stable - hash of an array for external use
* @p: the array or pointer to first element
* @num: the number of elements to hash
* @base: the base number to roll into the hash (usually 0)
*
* The array of simple integer types pointed to by p is combined with
* the base to form a 32-bit hash.
*
* This hash will have the same results on different machines, so can
* be used for external hashes (ie. hashes sent across the network or
* saved to disk). The results will not change in future versions of
* this module.
*
* Note that it is only legal to hand an array of simple integer types
* to this hash (ie. char, uint16_t, int64_t, etc). In these cases,
* the same values will have the same hash result, even though the
* memory representations of integers depend on the machine
* endianness.
*
* See also:
* hash64_stable
*
* Example:
* #include <ccan/hash/hash.h>
* #include <err.h>
* #include <stdio.h>
* #include <string.h>
*
* int main(int argc, char *argv[])
* {
* if (argc != 2)
* err(1, "Usage: %s <string-to-hash>", argv[0]);
*
* printf("Hash stable result is %u\n",
* hash_stable(argv[1], strlen(argv[1]), 0));
* return 0;
* }
*/
#define hash_stable(p, num, base) \
(BUILD_ASSERT_OR_ZERO(sizeof(*(p)) == 8 || sizeof(*(p)) == 4 \
|| sizeof(*(p)) == 2 || sizeof(*(p)) == 1) + \
sizeof(*(p)) == 8 ? hash_stable_64((p), (num), (base)) \
: sizeof(*(p)) == 4 ? hash_stable_32((p), (num), (base)) \
: sizeof(*(p)) == 2 ? hash_stable_16((p), (num), (base)) \
: hash_stable_8((p), (num), (base)))
/**
* hash_u32 - fast hash an array of 32-bit values for internal use
* @key: the array of uint32_t
* @num: the number of elements to hash
* @base: the base number to roll into the hash (usually 0)
*
* The array of uint32_t pointed to by @key is combined with the base
* to form a 32-bit hash. This is 2-3 times faster than hash() on small
* arrays, but the advantage vanishes over large hashes.
*
* This hash will have different results on different machines, so is
* only useful for internal hashes (ie. not hashes sent across the
* network or saved to disk).
*/
uint32_t hash_u32(const uint32_t *key, size_t num, uint32_t base);
/**
* hash_string - very fast hash of an ascii string
* @str: the nul-terminated string
*
* The string is hashed, using a hash function optimized for ASCII and
* similar strings. It's weaker than the other hash functions.
*
* This hash may have different results on different machines, so is
* only useful for internal hashes (ie. not hashes sent across the
* network or saved to disk). The results will be different from the
* other hash functions in this module, too.
*/
static inline uint32_t hash_string(const char *string)
{
/* This is Karl Nelson <kenelson@ece.ucdavis.edu>'s X31 hash.
* It's a little faster than the (much better) lookup3 hash(): 56ns vs
* 84ns on my 2GHz Intel Core Duo 2 laptop for a 10 char string. */
uint32_t ret;
for (ret = 0; *string; string++)
ret = (ret << 5) - ret + *string;
return ret;
}
/**
* hash64 - fast 64-bit hash of an array for internal use
* @p: the array or pointer to first element
* @num: the number of elements to hash
* @base: the 64-bit base number to roll into the hash (usually 0)
*
* The memory region pointed to by p is combined with the base to form
* a 64-bit hash.
*
* This hash will have different results on different machines, so is
* only useful for internal hashes (ie. not hashes sent across the
* network or saved to disk).
*
* It may also change with future versions: it could even detect at runtime
* what the fastest hash to use is.
*
* See also: hash.
*
* Example:
* #include <ccan/hash/hash.h>
* #include <err.h>
* #include <stdio.h>
* #include <string.h>
*
* // Simple demonstration: identical strings will have the same hash, but
* // two different strings will probably not.
* int main(int argc, char *argv[])
* {
* uint64_t hash1, hash2;
*
* if (argc != 3)
* err(1, "Usage: %s <string1> <string2>", argv[0]);
*
* hash1 = hash64(argv[1], strlen(argv[1]), 0);
* hash2 = hash64(argv[2], strlen(argv[2]), 0);
* printf("Hash is %s\n", hash1 == hash2 ? "same" : "different");
* return 0;
* }
*/
#define hash64(p, num, base) hash64_any((p), (num)*sizeof(*(p)), (base))
/**
* hash64_stable - 64 bit hash of an array for external use
* @p: the array or pointer to first element
* @num: the number of elements to hash
* @base: the base number to roll into the hash (usually 0)
*
* The array of simple integer types pointed to by p is combined with
* the base to form a 64-bit hash.
*
* This hash will have the same results on different machines, so can
* be used for external hashes (ie. hashes sent across the network or
* saved to disk). The results will not change in future versions of
* this module.
*
* Note that it is only legal to hand an array of simple integer types
* to this hash (ie. char, uint16_t, int64_t, etc). In these cases,
* the same values will have the same hash result, even though the
* memory representations of integers depend on the machine
* endianness.
*
* See also:
* hash_stable
*
* Example:
* #include <ccan/hash/hash.h>
* #include <err.h>
* #include <stdio.h>
* #include <string.h>
*
* int main(int argc, char *argv[])
* {
* if (argc != 2)
* err(1, "Usage: %s <string-to-hash>", argv[0]);
*
* printf("Hash stable result is %llu\n",
* (long long)hash64_stable(argv[1], strlen(argv[1]), 0));
* return 0;
* }
*/
#define hash64_stable(p, num, base) \
(BUILD_ASSERT_OR_ZERO(sizeof(*(p)) == 8 || sizeof(*(p)) == 4 \
|| sizeof(*(p)) == 2 || sizeof(*(p)) == 1) + \
sizeof(*(p)) == 8 ? hash64_stable_64((p), (num), (base)) \
: sizeof(*(p)) == 4 ? hash64_stable_32((p), (num), (base)) \
: sizeof(*(p)) == 2 ? hash64_stable_16((p), (num), (base)) \
: hash64_stable_8((p), (num), (base)))
/**
* hashl - fast 32/64-bit hash of an array for internal use
* @p: the array or pointer to first element
* @num: the number of elements to hash
* @base: the base number to roll into the hash (usually 0)
*
* This is either hash() or hash64(), on 32/64 bit long machines.
*/
#define hashl(p, num, base) \
(BUILD_ASSERT_OR_ZERO(sizeof(long) == sizeof(uint32_t) \
|| sizeof(long) == sizeof(uint64_t)) + \
(sizeof(long) == sizeof(uint64_t) \
? hash64((p), (num), (base)) : hash((p), (num), (base))))
/* Our underlying operations. */
uint32_t hash_any(const void *key, size_t length, uint32_t base);
uint32_t hash_stable_64(const void *key, size_t n, uint32_t base);
uint32_t hash_stable_32(const void *key, size_t n, uint32_t base);
uint32_t hash_stable_16(const void *key, size_t n, uint32_t base);
uint32_t hash_stable_8(const void *key, size_t n, uint32_t base);
uint64_t hash64_any(const void *key, size_t length, uint64_t base);
uint64_t hash64_stable_64(const void *key, size_t n, uint64_t base);
uint64_t hash64_stable_32(const void *key, size_t n, uint64_t base);
uint64_t hash64_stable_16(const void *key, size_t n, uint64_t base);
uint64_t hash64_stable_8(const void *key, size_t n, uint64_t base);
/**
* hash_pointer - hash a pointer for internal use
* @p: the pointer value to hash
* @base: the base number to roll into the hash (usually 0)
*
* The pointer p (not what p points to!) is combined with the base to form
* a 32-bit hash.
*
* This hash will have different results on different machines, so is
* only useful for internal hashes (ie. not hashes sent across the
* network or saved to disk).
*
* Example:
* #include <ccan/hash/hash.h>
*
* // Code to keep track of memory regions.
* struct region {
* struct region *chain;
* void *start;
* unsigned int size;
* };
* // We keep a simple hash table.
* static struct region *region_hash[128];
*
* static void add_region(struct region *r)
* {
* unsigned int h = hash_pointer(r->start, 0) % 128;
*
* r->chain = region_hash[h];
* region_hash[h] = r;
* }
*
* static struct region *find_region(const void *start)
* {
* struct region *r;
*
* for (r = region_hash[hash_pointer(start, 0) % 128]; r; r = r->chain)
* if (r->start == start)
* return r;
* return NULL;
* }
*/
static inline uint32_t hash_pointer(const void *p, uint32_t base)
{
if (sizeof(p) % sizeof(uint32_t) == 0) {
/* This convoluted union is the right way of aliasing. */
union {
uint32_t a[sizeof(p) / sizeof(uint32_t)];
const void *p;
} u;
u.p = p;
return hash_u32(u.a, sizeof(p) / sizeof(uint32_t), base);
} else
return hash(&p, 1, base);
}
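/*
 * Since hash_pointer() returns a full 32-bit value, callers must reduce
 * it before indexing a fixed-size table; a sketch for the 128-entry
 * region_hash example above:
 *
 *	unsigned int idx = hash_pointer(ptr, 0) % 128;
 */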
#endif /* HASH_H */

@@ -0,0 +1,300 @@
#include <ccan/hash/hash.h>
#include <ccan/tap/tap.h>
#include <stdbool.h>
#include <string.h>
#define ARRAY_WORDS 5
int main(int argc, char *argv[])
{
unsigned int i;
uint8_t u8array[ARRAY_WORDS];
uint16_t u16array[ARRAY_WORDS];
uint32_t u32array[ARRAY_WORDS];
uint64_t u64array[ARRAY_WORDS];
/* Initialize arrays. */
for (i = 0; i < ARRAY_WORDS; i++) {
u8array[i] = i;
u16array[i] = i;
u32array[i] = i;
u64array[i] = i;
}
plan_tests(264);
/* hash_stable is API-guaranteed. */
ok1(hash_stable(u8array, ARRAY_WORDS, 0) == 0x1d4833cc);
ok1(hash_stable(u8array, ARRAY_WORDS, 1) == 0x37125e2 );
ok1(hash_stable(u8array, ARRAY_WORDS, 2) == 0x330a007a);
ok1(hash_stable(u8array, ARRAY_WORDS, 4) == 0x7b0df29b);
ok1(hash_stable(u8array, ARRAY_WORDS, 8) == 0xe7e5d741);
ok1(hash_stable(u8array, ARRAY_WORDS, 16) == 0xaae57471);
ok1(hash_stable(u8array, ARRAY_WORDS, 32) == 0xc55399e5);
ok1(hash_stable(u8array, ARRAY_WORDS, 64) == 0x67f21f7 );
ok1(hash_stable(u8array, ARRAY_WORDS, 128) == 0x1d795b71);
ok1(hash_stable(u8array, ARRAY_WORDS, 256) == 0xeb961671);
ok1(hash_stable(u8array, ARRAY_WORDS, 512) == 0xc2597247);
ok1(hash_stable(u8array, ARRAY_WORDS, 1024) == 0x3f5c4d75);
ok1(hash_stable(u8array, ARRAY_WORDS, 2048) == 0xe65cf4f9);
ok1(hash_stable(u8array, ARRAY_WORDS, 4096) == 0xf2cd06cb);
ok1(hash_stable(u8array, ARRAY_WORDS, 8192) == 0x443041e1);
ok1(hash_stable(u8array, ARRAY_WORDS, 16384) == 0xdfc618f5);
ok1(hash_stable(u8array, ARRAY_WORDS, 32768) == 0x5e3d5b97);
ok1(hash_stable(u8array, ARRAY_WORDS, 65536) == 0xd5f64730);
ok1(hash_stable(u8array, ARRAY_WORDS, 131072) == 0x372bbecc);
ok1(hash_stable(u8array, ARRAY_WORDS, 262144) == 0x7c194c8d);
ok1(hash_stable(u8array, ARRAY_WORDS, 524288) == 0x16cbb416);
ok1(hash_stable(u8array, ARRAY_WORDS, 1048576) == 0x53e99222);
ok1(hash_stable(u8array, ARRAY_WORDS, 2097152) == 0x6394554a);
ok1(hash_stable(u8array, ARRAY_WORDS, 4194304) == 0xd83a506d);
ok1(hash_stable(u8array, ARRAY_WORDS, 8388608) == 0x7619d9a4);
ok1(hash_stable(u8array, ARRAY_WORDS, 16777216) == 0xfe98e5f6);
ok1(hash_stable(u8array, ARRAY_WORDS, 33554432) == 0x6c262927);
ok1(hash_stable(u8array, ARRAY_WORDS, 67108864) == 0x3f0106fd);
ok1(hash_stable(u8array, ARRAY_WORDS, 134217728) == 0xc91e3a28);
ok1(hash_stable(u8array, ARRAY_WORDS, 268435456) == 0x14229579);
ok1(hash_stable(u8array, ARRAY_WORDS, 536870912) == 0x9dbefa76);
ok1(hash_stable(u8array, ARRAY_WORDS, 1073741824) == 0xb05c0c78);
ok1(hash_stable(u8array, ARRAY_WORDS, 2147483648U) == 0x88f24d81);
ok1(hash_stable(u16array, ARRAY_WORDS, 0) == 0xecb5f507);
ok1(hash_stable(u16array, ARRAY_WORDS, 1) == 0xadd666e6);
ok1(hash_stable(u16array, ARRAY_WORDS, 2) == 0xea0f214c);
ok1(hash_stable(u16array, ARRAY_WORDS, 4) == 0xae4051ba);
ok1(hash_stable(u16array, ARRAY_WORDS, 8) == 0x6ed28026);
ok1(hash_stable(u16array, ARRAY_WORDS, 16) == 0xa3917a19);
ok1(hash_stable(u16array, ARRAY_WORDS, 32) == 0xf370f32b);
ok1(hash_stable(u16array, ARRAY_WORDS, 64) == 0x807af460);
ok1(hash_stable(u16array, ARRAY_WORDS, 128) == 0xb4c8cd83);
ok1(hash_stable(u16array, ARRAY_WORDS, 256) == 0xa10cb5b0);
ok1(hash_stable(u16array, ARRAY_WORDS, 512) == 0x8b7d7387);
ok1(hash_stable(u16array, ARRAY_WORDS, 1024) == 0x9e49d1c );
ok1(hash_stable(u16array, ARRAY_WORDS, 2048) == 0x288830d1);
ok1(hash_stable(u16array, ARRAY_WORDS, 4096) == 0xbe078a43);
ok1(hash_stable(u16array, ARRAY_WORDS, 8192) == 0xa16d5d88);
ok1(hash_stable(u16array, ARRAY_WORDS, 16384) == 0x46839fcd);
ok1(hash_stable(u16array, ARRAY_WORDS, 32768) == 0x9db9bd4f);
ok1(hash_stable(u16array, ARRAY_WORDS, 65536) == 0xedff58f8);
ok1(hash_stable(u16array, ARRAY_WORDS, 131072) == 0x95ecef18);
ok1(hash_stable(u16array, ARRAY_WORDS, 262144) == 0x23c31b7d);
ok1(hash_stable(u16array, ARRAY_WORDS, 524288) == 0x1d85c7d0);
ok1(hash_stable(u16array, ARRAY_WORDS, 1048576) == 0x25218842);
ok1(hash_stable(u16array, ARRAY_WORDS, 2097152) == 0x711d985c);
ok1(hash_stable(u16array, ARRAY_WORDS, 4194304) == 0x85470eca);
ok1(hash_stable(u16array, ARRAY_WORDS, 8388608) == 0x99ed4ceb);
ok1(hash_stable(u16array, ARRAY_WORDS, 16777216) == 0x67b3710c);
ok1(hash_stable(u16array, ARRAY_WORDS, 33554432) == 0x77f1ab35);
ok1(hash_stable(u16array, ARRAY_WORDS, 67108864) == 0x81f688aa);
ok1(hash_stable(u16array, ARRAY_WORDS, 134217728) == 0x27b56ca5);
ok1(hash_stable(u16array, ARRAY_WORDS, 268435456) == 0xf21ba203);
ok1(hash_stable(u16array, ARRAY_WORDS, 536870912) == 0xd48d1d1 );
ok1(hash_stable(u16array, ARRAY_WORDS, 1073741824) == 0xa542b62d);
ok1(hash_stable(u16array, ARRAY_WORDS, 2147483648U) == 0xa04c7058);
ok1(hash_stable(u32array, ARRAY_WORDS, 0) == 0x13305f8c);
ok1(hash_stable(u32array, ARRAY_WORDS, 1) == 0x171abf74);
ok1(hash_stable(u32array, ARRAY_WORDS, 2) == 0x7646fcc7);
ok1(hash_stable(u32array, ARRAY_WORDS, 4) == 0xa758ed5);
ok1(hash_stable(u32array, ARRAY_WORDS, 8) == 0x2dedc2e4);
ok1(hash_stable(u32array, ARRAY_WORDS, 16) == 0x28e2076b);
ok1(hash_stable(u32array, ARRAY_WORDS, 32) == 0xb73091c5);
ok1(hash_stable(u32array, ARRAY_WORDS, 64) == 0x87daf5db);
ok1(hash_stable(u32array, ARRAY_WORDS, 128) == 0xa16dfe20);
ok1(hash_stable(u32array, ARRAY_WORDS, 256) == 0x300c63c3);
ok1(hash_stable(u32array, ARRAY_WORDS, 512) == 0x255c91fc);
ok1(hash_stable(u32array, ARRAY_WORDS, 1024) == 0x6357b26);
ok1(hash_stable(u32array, ARRAY_WORDS, 2048) == 0x4bc5f339);
ok1(hash_stable(u32array, ARRAY_WORDS, 4096) == 0x1301617c);
ok1(hash_stable(u32array, ARRAY_WORDS, 8192) == 0x506792c9);
ok1(hash_stable(u32array, ARRAY_WORDS, 16384) == 0xcd596705);
ok1(hash_stable(u32array, ARRAY_WORDS, 32768) == 0xa8713cac);
ok1(hash_stable(u32array, ARRAY_WORDS, 65536) == 0x94d9794);
ok1(hash_stable(u32array, ARRAY_WORDS, 131072) == 0xac753e8);
ok1(hash_stable(u32array, ARRAY_WORDS, 262144) == 0xcd8bdd20);
ok1(hash_stable(u32array, ARRAY_WORDS, 524288) == 0xd44faf80);
ok1(hash_stable(u32array, ARRAY_WORDS, 1048576) == 0x2547ccbe);
ok1(hash_stable(u32array, ARRAY_WORDS, 2097152) == 0xbab06dbc);
ok1(hash_stable(u32array, ARRAY_WORDS, 4194304) == 0xaac0e882);
ok1(hash_stable(u32array, ARRAY_WORDS, 8388608) == 0x443f48d0);
ok1(hash_stable(u32array, ARRAY_WORDS, 16777216) == 0xdff49fcc);
ok1(hash_stable(u32array, ARRAY_WORDS, 33554432) == 0x9ce0fd65);
ok1(hash_stable(u32array, ARRAY_WORDS, 67108864) == 0x9ddb1def);
ok1(hash_stable(u32array, ARRAY_WORDS, 134217728) == 0x86096f25);
ok1(hash_stable(u32array, ARRAY_WORDS, 268435456) == 0xe713b7b5);
ok1(hash_stable(u32array, ARRAY_WORDS, 536870912) == 0x5baeffc5);
ok1(hash_stable(u32array, ARRAY_WORDS, 1073741824) == 0xde874f52);
ok1(hash_stable(u32array, ARRAY_WORDS, 2147483648U) == 0xeca13b4e);
ok1(hash_stable(u64array, ARRAY_WORDS, 0) == 0x12ef6302);
ok1(hash_stable(u64array, ARRAY_WORDS, 1) == 0xe9aeb406);
ok1(hash_stable(u64array, ARRAY_WORDS, 2) == 0xc4218ceb);
ok1(hash_stable(u64array, ARRAY_WORDS, 4) == 0xb3d11412);
ok1(hash_stable(u64array, ARRAY_WORDS, 8) == 0xdafbd654);
ok1(hash_stable(u64array, ARRAY_WORDS, 16) == 0x9c336cba);
ok1(hash_stable(u64array, ARRAY_WORDS, 32) == 0x65059721);
ok1(hash_stable(u64array, ARRAY_WORDS, 64) == 0x95b5bbe6);
ok1(hash_stable(u64array, ARRAY_WORDS, 128) == 0xe7596b84);
ok1(hash_stable(u64array, ARRAY_WORDS, 256) == 0x503622a2);
ok1(hash_stable(u64array, ARRAY_WORDS, 512) == 0xecdcc5ca);
ok1(hash_stable(u64array, ARRAY_WORDS, 1024) == 0xc40d0513);
ok1(hash_stable(u64array, ARRAY_WORDS, 2048) == 0xaab25e4d);
ok1(hash_stable(u64array, ARRAY_WORDS, 4096) == 0xcc353fb9);
ok1(hash_stable(u64array, ARRAY_WORDS, 8192) == 0x18e2319f);
ok1(hash_stable(u64array, ARRAY_WORDS, 16384) == 0xfddaae8d);
ok1(hash_stable(u64array, ARRAY_WORDS, 32768) == 0xef7976f2);
ok1(hash_stable(u64array, ARRAY_WORDS, 65536) == 0x86359fc9);
ok1(hash_stable(u64array, ARRAY_WORDS, 131072) == 0x8b5af385);
ok1(hash_stable(u64array, ARRAY_WORDS, 262144) == 0x80d4ee31);
ok1(hash_stable(u64array, ARRAY_WORDS, 524288) == 0x42f5f85b);
ok1(hash_stable(u64array, ARRAY_WORDS, 1048576) == 0x9a6920e1);
ok1(hash_stable(u64array, ARRAY_WORDS, 2097152) == 0x7b7c9850);
ok1(hash_stable(u64array, ARRAY_WORDS, 4194304) == 0x69573e09);
ok1(hash_stable(u64array, ARRAY_WORDS, 8388608) == 0xc942bc0e);
ok1(hash_stable(u64array, ARRAY_WORDS, 16777216) == 0x7a89f0f1);
ok1(hash_stable(u64array, ARRAY_WORDS, 33554432) == 0x2dd641ca);
ok1(hash_stable(u64array, ARRAY_WORDS, 67108864) == 0x89bbd391);
ok1(hash_stable(u64array, ARRAY_WORDS, 134217728) == 0xbcf88e31);
ok1(hash_stable(u64array, ARRAY_WORDS, 268435456) == 0xfa7a3460);
ok1(hash_stable(u64array, ARRAY_WORDS, 536870912) == 0x49a37be0);
ok1(hash_stable(u64array, ARRAY_WORDS, 1073741824) == 0x1b346394);
ok1(hash_stable(u64array, ARRAY_WORDS, 2147483648U) == 0x6c3a1592);
ok1(hash64_stable(u8array, ARRAY_WORDS, 0) == 16887282882572727244ULL);
ok1(hash64_stable(u8array, ARRAY_WORDS, 1) == 12032777473133454818ULL);
ok1(hash64_stable(u8array, ARRAY_WORDS, 2) == 18183407363221487738ULL);
ok1(hash64_stable(u8array, ARRAY_WORDS, 4) == 17860764172704150171ULL);
ok1(hash64_stable(u8array, ARRAY_WORDS, 8) == 18076051600675559233ULL);
ok1(hash64_stable(u8array, ARRAY_WORDS, 16) == 9909361918431556721ULL);
ok1(hash64_stable(u8array, ARRAY_WORDS, 32) == 12937969888744675813ULL);
ok1(hash64_stable(u8array, ARRAY_WORDS, 64) == 5245669057381736951ULL);
ok1(hash64_stable(u8array, ARRAY_WORDS, 128) == 4376874646406519665ULL);
ok1(hash64_stable(u8array, ARRAY_WORDS, 256) == 14219974419871569521ULL);
ok1(hash64_stable(u8array, ARRAY_WORDS, 512) == 2263415354134458951ULL);
ok1(hash64_stable(u8array, ARRAY_WORDS, 1024) == 4953859694526221685ULL);
ok1(hash64_stable(u8array, ARRAY_WORDS, 2048) == 3432228642067641593ULL);
ok1(hash64_stable(u8array, ARRAY_WORDS, 4096) == 1219647244417697483ULL);
ok1(hash64_stable(u8array, ARRAY_WORDS, 8192) == 7629939424585859553ULL);
ok1(hash64_stable(u8array, ARRAY_WORDS, 16384) == 10041660531376789749ULL);
ok1(hash64_stable(u8array, ARRAY_WORDS, 32768) == 13859885793922603927ULL);
ok1(hash64_stable(u8array, ARRAY_WORDS, 65536) == 15069060338344675120ULL);
ok1(hash64_stable(u8array, ARRAY_WORDS, 131072) == 818163430835601100ULL);
ok1(hash64_stable(u8array, ARRAY_WORDS, 262144) == 14914314323019517069ULL);
ok1(hash64_stable(u8array, ARRAY_WORDS, 524288) == 17518437749769352214ULL);
ok1(hash64_stable(u8array, ARRAY_WORDS, 1048576) == 14920048004901212706ULL);
ok1(hash64_stable(u8array, ARRAY_WORDS, 2097152) == 8758567366332536138ULL);
ok1(hash64_stable(u8array, ARRAY_WORDS, 4194304) == 6226655736088907885ULL);
ok1(hash64_stable(u8array, ARRAY_WORDS, 8388608) == 13716650013685832100ULL);
ok1(hash64_stable(u8array, ARRAY_WORDS, 16777216) == 305325651636315638ULL);
ok1(hash64_stable(u8array, ARRAY_WORDS, 33554432) == 16784147606583781671ULL);
ok1(hash64_stable(u8array, ARRAY_WORDS, 67108864) == 16509467555140798205ULL);
ok1(hash64_stable(u8array, ARRAY_WORDS, 134217728) == 8717281234694060584ULL);
ok1(hash64_stable(u8array, ARRAY_WORDS, 268435456) == 8098476701725660537ULL);
ok1(hash64_stable(u8array, ARRAY_WORDS, 536870912) == 16345871539461094006ULL);
ok1(hash64_stable(u8array, ARRAY_WORDS, 1073741824) == 3755557000429964408ULL);
ok1(hash64_stable(u8array, ARRAY_WORDS, 2147483648U) == 15017348801959710081ULL);
ok1(hash64_stable(u16array, ARRAY_WORDS, 0) == 1038028831307724039ULL);
ok1(hash64_stable(u16array, ARRAY_WORDS, 1) == 10155473272642627302ULL);
ok1(hash64_stable(u16array, ARRAY_WORDS, 2) == 5714751190106841420ULL);
ok1(hash64_stable(u16array, ARRAY_WORDS, 4) == 3923885607767527866ULL);
ok1(hash64_stable(u16array, ARRAY_WORDS, 8) == 3931017318293995558ULL);
ok1(hash64_stable(u16array, ARRAY_WORDS, 16) == 1469696588339313177ULL);
ok1(hash64_stable(u16array, ARRAY_WORDS, 32) == 11522218526952715051ULL);
ok1(hash64_stable(u16array, ARRAY_WORDS, 64) == 6953517591561958496ULL);
ok1(hash64_stable(u16array, ARRAY_WORDS, 128) == 7406689491740052867ULL);
ok1(hash64_stable(u16array, ARRAY_WORDS, 256) == 10101844489704093104ULL);
ok1(hash64_stable(u16array, ARRAY_WORDS, 512) == 12511348870707245959ULL);
ok1(hash64_stable(u16array, ARRAY_WORDS, 1024) == 1614019938016861468ULL);
ok1(hash64_stable(u16array, ARRAY_WORDS, 2048) == 5294796182374592721ULL);
ok1(hash64_stable(u16array, ARRAY_WORDS, 4096) == 16089570706643716675ULL);
ok1(hash64_stable(u16array, ARRAY_WORDS, 8192) == 1689302638424579464ULL);
ok1(hash64_stable(u16array, ARRAY_WORDS, 16384) == 1446340172370386893ULL);
ok1(hash64_stable(u16array, ARRAY_WORDS, 32768) == 16535503506744393039ULL);
ok1(hash64_stable(u16array, ARRAY_WORDS, 65536) == 3496794142527150328ULL);
ok1(hash64_stable(u16array, ARRAY_WORDS, 131072) == 6568245367474548504ULL);
ok1(hash64_stable(u16array, ARRAY_WORDS, 262144) == 9487676460765485949ULL);
ok1(hash64_stable(u16array, ARRAY_WORDS, 524288) == 4519762130966530000ULL);
ok1(hash64_stable(u16array, ARRAY_WORDS, 1048576) == 15623412069215340610ULL);
ok1(hash64_stable(u16array, ARRAY_WORDS, 2097152) == 544013388676438108ULL);
ok1(hash64_stable(u16array, ARRAY_WORDS, 4194304) == 5594904760290840266ULL);
ok1(hash64_stable(u16array, ARRAY_WORDS, 8388608) == 18098755780041592043ULL);
ok1(hash64_stable(u16array, ARRAY_WORDS, 16777216) == 6389168672387330316ULL);
ok1(hash64_stable(u16array, ARRAY_WORDS, 33554432) == 896986127732419381ULL);
ok1(hash64_stable(u16array, ARRAY_WORDS, 67108864) == 13232626471143901354ULL);
ok1(hash64_stable(u16array, ARRAY_WORDS, 134217728) == 53378562890493093ULL);
ok1(hash64_stable(u16array, ARRAY_WORDS, 268435456) == 10072361400297824771ULL);
ok1(hash64_stable(u16array, ARRAY_WORDS, 536870912) == 14511948118285144529ULL);
ok1(hash64_stable(u16array, ARRAY_WORDS, 1073741824) == 6981033484844447277ULL);
ok1(hash64_stable(u16array, ARRAY_WORDS, 2147483648U) == 5619339091684126808ULL);
ok1(hash64_stable(u32array, ARRAY_WORDS, 0) == 3037571077312110476ULL);
ok1(hash64_stable(u32array, ARRAY_WORDS, 1) == 14732398743825071988ULL);
ok1(hash64_stable(u32array, ARRAY_WORDS, 2) == 14949132158206672071ULL);
ok1(hash64_stable(u32array, ARRAY_WORDS, 4) == 1291370080511561429ULL);
ok1(hash64_stable(u32array, ARRAY_WORDS, 8) == 10792665964172133092ULL);
ok1(hash64_stable(u32array, ARRAY_WORDS, 16) == 14250138032054339435ULL);
ok1(hash64_stable(u32array, ARRAY_WORDS, 32) == 17136741522078732741ULL);
ok1(hash64_stable(u32array, ARRAY_WORDS, 64) == 3260193403318236635ULL);
ok1(hash64_stable(u32array, ARRAY_WORDS, 128) == 10526616652205653536ULL);
ok1(hash64_stable(u32array, ARRAY_WORDS, 256) == 9019690373358576579ULL);
ok1(hash64_stable(u32array, ARRAY_WORDS, 512) == 6997491436599677436ULL);
ok1(hash64_stable(u32array, ARRAY_WORDS, 1024) == 18302783371416533798ULL);
ok1(hash64_stable(u32array, ARRAY_WORDS, 2048) == 10149320644446516025ULL);
ok1(hash64_stable(u32array, ARRAY_WORDS, 4096) == 7073759949410623868ULL);
ok1(hash64_stable(u32array, ARRAY_WORDS, 8192) == 17442399482223760073ULL);
ok1(hash64_stable(u32array, ARRAY_WORDS, 16384) == 2983906194216281861ULL);
ok1(hash64_stable(u32array, ARRAY_WORDS, 32768) == 4975845419129060524ULL);
ok1(hash64_stable(u32array, ARRAY_WORDS, 65536) == 594019910205413268ULL);
ok1(hash64_stable(u32array, ARRAY_WORDS, 131072) == 11903010186073691112ULL);
ok1(hash64_stable(u32array, ARRAY_WORDS, 262144) == 7339636527154847008ULL);
ok1(hash64_stable(u32array, ARRAY_WORDS, 524288) == 15243305400579108736ULL);
ok1(hash64_stable(u32array, ARRAY_WORDS, 1048576) == 16737926245392043198ULL);
ok1(hash64_stable(u32array, ARRAY_WORDS, 2097152) == 15725083267699862972ULL);
ok1(hash64_stable(u32array, ARRAY_WORDS, 4194304) == 12527834265678833794ULL);
ok1(hash64_stable(u32array, ARRAY_WORDS, 8388608) == 13908436455987824848ULL);
ok1(hash64_stable(u32array, ARRAY_WORDS, 16777216) == 9672773345173872588ULL);
ok1(hash64_stable(u32array, ARRAY_WORDS, 33554432) == 2305314279896710501ULL);
ok1(hash64_stable(u32array, ARRAY_WORDS, 67108864) == 1866733780381408751ULL);
ok1(hash64_stable(u32array, ARRAY_WORDS, 134217728) == 11906263969465724709ULL);
ok1(hash64_stable(u32array, ARRAY_WORDS, 268435456) == 5501594918093830069ULL);
ok1(hash64_stable(u32array, ARRAY_WORDS, 536870912) == 15823785789276225477ULL);
ok1(hash64_stable(u32array, ARRAY_WORDS, 1073741824) == 17353000723889475410ULL);
ok1(hash64_stable(u32array, ARRAY_WORDS, 2147483648U) == 7494736910655503182ULL);
ok1(hash64_stable(u64array, ARRAY_WORDS, 0) == 9765419389786481410ULL);
ok1(hash64_stable(u64array, ARRAY_WORDS, 1) == 11182806172127114246ULL);
ok1(hash64_stable(u64array, ARRAY_WORDS, 2) == 2559155171395472619ULL);
ok1(hash64_stable(u64array, ARRAY_WORDS, 4) == 3311692033324815378ULL);
ok1(hash64_stable(u64array, ARRAY_WORDS, 8) == 1297175419505333844ULL);
ok1(hash64_stable(u64array, ARRAY_WORDS, 16) == 617896928653569210ULL);
ok1(hash64_stable(u64array, ARRAY_WORDS, 32) == 1517398559958603553ULL);
ok1(hash64_stable(u64array, ARRAY_WORDS, 64) == 4504821917445110758ULL);
ok1(hash64_stable(u64array, ARRAY_WORDS, 128) == 1971743331114904452ULL);
ok1(hash64_stable(u64array, ARRAY_WORDS, 256) == 6177667912354374306ULL);
ok1(hash64_stable(u64array, ARRAY_WORDS, 512) == 15570521289777792458ULL);
ok1(hash64_stable(u64array, ARRAY_WORDS, 1024) == 9204559632415917331ULL);
ok1(hash64_stable(u64array, ARRAY_WORDS, 2048) == 9008982669760028237ULL);
ok1(hash64_stable(u64array, ARRAY_WORDS, 4096) == 14803537660281700281ULL);
ok1(hash64_stable(u64array, ARRAY_WORDS, 8192) == 2873966517448487327ULL);
ok1(hash64_stable(u64array, ARRAY_WORDS, 16384) == 5859277625928363661ULL);
ok1(hash64_stable(u64array, ARRAY_WORDS, 32768) == 15520461285618185970ULL);
ok1(hash64_stable(u64array, ARRAY_WORDS, 65536) == 16746489793331175369ULL);
ok1(hash64_stable(u64array, ARRAY_WORDS, 131072) == 514952025484227461ULL);
ok1(hash64_stable(u64array, ARRAY_WORDS, 262144) == 10867212269810675249ULL);
ok1(hash64_stable(u64array, ARRAY_WORDS, 524288) == 9822204377278314587ULL);
ok1(hash64_stable(u64array, ARRAY_WORDS, 1048576) == 3295088921987850465ULL);
ok1(hash64_stable(u64array, ARRAY_WORDS, 2097152) == 7559197431498053712ULL);
ok1(hash64_stable(u64array, ARRAY_WORDS, 4194304) == 1667267269116771849ULL);
ok1(hash64_stable(u64array, ARRAY_WORDS, 8388608) == 2916804068951374862ULL);
ok1(hash64_stable(u64array, ARRAY_WORDS, 16777216) == 14422558383125688561ULL);
ok1(hash64_stable(u64array, ARRAY_WORDS, 33554432) == 10083112683694342602ULL);
ok1(hash64_stable(u64array, ARRAY_WORDS, 67108864) == 7222777647078298513ULL);
ok1(hash64_stable(u64array, ARRAY_WORDS, 134217728) == 18424513674048212529ULL);
ok1(hash64_stable(u64array, ARRAY_WORDS, 268435456) == 14913668581101810784ULL);
ok1(hash64_stable(u64array, ARRAY_WORDS, 536870912) == 14377721174297902048ULL);
ok1(hash64_stable(u64array, ARRAY_WORDS, 1073741824) == 6031715005667500948ULL);
ok1(hash64_stable(u64array, ARRAY_WORDS, 2147483648U) == 4827100319722378642ULL);
return exit_status();
}

@@ -0,0 +1,149 @@
#include <ccan/hash/hash.h>
#include <ccan/tap/tap.h>
#include <ccan/hash/hash.c>
#include <stdbool.h>
#include <string.h>
#define ARRAY_WORDS 5
int main(int argc, char *argv[])
{
unsigned int i, j, k;
uint32_t array[ARRAY_WORDS], val;
char array2[sizeof(array) + sizeof(uint32_t)];
uint32_t results[256];
/* Initialize array. */
for (i = 0; i < ARRAY_WORDS; i++)
array[i] = i;
plan_tests(39);
/* Hash should be the same, indep of memory alignment. */
val = hash(array, ARRAY_WORDS, 0);
for (i = 0; i < sizeof(uint32_t); i++) {
memcpy(array2 + i, array, sizeof(array));
ok(hash(array2 + i, ARRAY_WORDS, 0) != val,
"hash matched at offset %i", i);
}
/* Hash of random values should have random distribution:
* check one byte at a time. */
for (i = 0; i < sizeof(uint32_t); i++) {
unsigned int lowest = -1U, highest = 0;
memset(results, 0, sizeof(results));
for (j = 0; j < 256000; j++) {
for (k = 0; k < ARRAY_WORDS; k++)
array[k] = random();
results[(hash(array, ARRAY_WORDS, 0) >> i*8)&0xFF]++;
}
for (j = 0; j < 256; j++) {
if (results[j] < lowest)
lowest = results[j];
if (results[j] > highest)
highest = results[j];
}
/* Expect within 20% */
ok(lowest > 800, "Byte %i lowest %i", i, lowest);
ok(highest < 1200, "Byte %i highest %i", i, highest);
diag("Byte %i, range %u-%u", i, lowest, highest);
}
/* Hash of random values should have random distribution:
 * check one byte at a time (64-bit hash this time). */
for (i = 0; i < sizeof(uint64_t); i++) {
unsigned int lowest = -1U, highest = 0;
memset(results, 0, sizeof(results));
for (j = 0; j < 256000; j++) {
for (k = 0; k < ARRAY_WORDS; k++)
array[k] = random();
results[(hash64(array, sizeof(array)/sizeof(uint64_t),
0) >> i*8)&0xFF]++;
}
for (j = 0; j < 256; j++) {
if (results[j] < lowest)
lowest = results[j];
if (results[j] > highest)
highest = results[j];
}
/* Expect within 20% */
ok(lowest > 800, "Byte %i lowest %i", i, lowest);
ok(highest < 1200, "Byte %i highest %i", i, highest);
diag("Byte %i, range %u-%u", i, lowest, highest);
}
/* Hash of pointer values should also have random distribution. */
for (i = 0; i < sizeof(uint32_t); i++) {
unsigned int lowest = -1U, highest = 0;
char *p = malloc(256000);
memset(results, 0, sizeof(results));
for (j = 0; j < 256000; j++)
results[(hash_pointer(p + j, 0) >> i*8)&0xFF]++;
free(p);
for (j = 0; j < 256; j++) {
if (results[j] < lowest)
lowest = results[j];
if (results[j] > highest)
highest = results[j];
}
/* Expect within 20% */
ok(lowest > 800, "hash_pointer byte %i lowest %i", i, lowest);
ok(highest < 1200, "hash_pointer byte %i highest %i",
i, highest);
diag("hash_pointer byte %i, range %u-%u", i, lowest, highest);
}
if (sizeof(long) == sizeof(uint32_t))
ok1(hashl(array, ARRAY_WORDS, 0)
== hash(array, ARRAY_WORDS, 0));
else
ok1(hashl(array, ARRAY_WORDS, 0)
== hash64(array, ARRAY_WORDS, 0));
/* String hash: weak, so only test bottom byte */
for (i = 0; i < 1; i++) {
unsigned int num = 0, cursor, lowest = -1U, highest = 0;
char p[5];
memset(results, 0, sizeof(results));
memset(p, 'A', sizeof(p));
p[sizeof(p)-1] = '\0';
for (;;) {
for (cursor = 0; cursor < sizeof(p)-1; cursor++) {
p[cursor]++;
if (p[cursor] <= 'z')
break;
p[cursor] = 'A';
}
if (cursor == sizeof(p)-1)
break;
results[(hash_string(p) >> i*8)&0xFF]++;
num++;
}
for (j = 0; j < 256; j++) {
if (results[j] < lowest)
lowest = results[j];
if (results[j] > highest)
highest = results[j];
}
/* Expect within 20% */
ok(lowest > 35000, "hash_string byte %i lowest %i", i, lowest);
ok(highest < 53000, "hash_string byte %i highest %i",
i, highest);
diag("hash_string byte %i, range %u-%u", i, lowest, highest);
}
return exit_status();
}

@@ -0,0 +1,17 @@
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

@@ -0,0 +1,28 @@
Statement of Purpose
The laws of most jurisdictions throughout the world automatically confer exclusive Copyright and Related Rights (defined below) upon the creator and subsequent owner(s) (each and all, an "owner") of an original work of authorship and/or a database (each, a "Work").
Certain owners wish to permanently relinquish those rights to a Work for the purpose of contributing to a commons of creative, cultural and scientific works ("Commons") that the public can reliably and without fear of later claims of infringement build upon, modify, incorporate in other works, reuse and redistribute as freely as possible in any form whatsoever and for any purposes, including without limitation commercial purposes. These owners may contribute to the Commons to promote the ideal of a free culture and the further production of creative, cultural and scientific works, or to gain reputation or greater distribution for their Work in part through the use and efforts of others.
For these and/or other purposes and motivations, and without any expectation of additional consideration or compensation, the person associating CC0 with a Work (the "Affirmer"), to the extent that he or she is an owner of Copyright and Related Rights in the Work, voluntarily elects to apply CC0 to the Work and publicly distribute the Work under its terms, with knowledge of his or her Copyright and Related Rights in the Work and the meaning and intended legal effect of CC0 on those rights.
1. Copyright and Related Rights. A Work made available under CC0 may be protected by copyright and related or neighboring rights ("Copyright and Related Rights"). Copyright and Related Rights include, but are not limited to, the following:
the right to reproduce, adapt, distribute, perform, display, communicate, and translate a Work;
moral rights retained by the original author(s) and/or performer(s);
publicity and privacy rights pertaining to a person's image or likeness depicted in a Work;
rights protecting against unfair competition in regards to a Work, subject to the limitations in paragraph 4(a), below;
rights protecting the extraction, dissemination, use and reuse of data in a Work;
database rights (such as those arising under Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, and under any national implementation thereof, including any amended or successor version of such directive); and
other similar, equivalent or corresponding rights throughout the world based on applicable law or treaty, and any national implementations thereof.
2. Waiver. To the greatest extent permitted by, but not in contravention of, applicable law, Affirmer hereby overtly, fully, permanently, irrevocably and unconditionally waives, abandons, and surrenders all of Affirmer's Copyright and Related Rights and associated claims and causes of action, whether now known or unknown (including existing as well as future claims and causes of action), in the Work (i) in all territories worldwide, (ii) for the maximum duration provided by applicable law or treaty (including future time extensions), (iii) in any current or future medium and for any number of copies, and (iv) for any purpose whatsoever, including without limitation commercial, advertising or promotional purposes (the "Waiver"). Affirmer makes the Waiver for the benefit of each member of the public at large and to the detriment of Affirmer's heirs and successors, fully intending that such Waiver shall not be subject to revocation, rescission, cancellation, termination, or any other legal or equitable action to disrupt the quiet enjoyment of the Work by the public as contemplated by Affirmer's express Statement of Purpose.
3. Public License Fallback. Should any part of the Waiver for any reason be judged legally invalid or ineffective under applicable law, then the Waiver shall be preserved to the maximum extent permitted taking into account Affirmer's express Statement of Purpose. In addition, to the extent the Waiver is so judged Affirmer hereby grants to each affected person a royalty-free, non transferable, non sublicensable, non exclusive, irrevocable and unconditional license to exercise Affirmer's Copyright and Related Rights in the Work (i) in all territories worldwide, (ii) for the maximum duration provided by applicable law or treaty (including future time extensions), (iii) in any current or future medium and for any number of copies, and (iv) for any purpose whatsoever, including without limitation commercial, advertising or promotional purposes (the "License"). The License shall be deemed effective as of the date CC0 was applied by Affirmer to the Work. Should any part of the License for any reason be judged legally invalid or ineffective under applicable law, such partial invalidity or ineffectiveness shall not invalidate the remainder of the License, and in such case Affirmer hereby affirms that he or she will not (i) exercise any of his or her remaining Copyright and Related Rights in the Work or (ii) assert any associated claims and causes of action with respect to the Work, in either case contrary to Affirmer's express Statement of Purpose.
4. Limitations and Disclaimers.
No trademark or patent rights held by Affirmer are waived, abandoned, surrendered, licensed or otherwise affected by this document.
Affirmer offers the Work as-is and makes no representations or warranties of any kind concerning the Work, express, implied, statutory or otherwise, including without limitation warranties of title, merchantability, fitness for a particular purpose, non infringement, or the absence of latent or other defects, accuracy, or the present or absence of errors, whether or not discoverable, all to the greatest extent permissible under applicable law.
Affirmer disclaims responsibility for clearing rights of other persons that may apply to the Work or any use thereof, including without limitation any person's Copyright and Related Rights in the Work. Further, Affirmer disclaims responsibility for obtaining any necessary consents, permissions or other rights required for any use of the Work.
Affirmer understands and acknowledges that Creative Commons is not a party to this document and has no duty or obligation with respect to this CC0 or use of the Work.

@@ -0,0 +1 @@
../licenses/BSD-MIT

@@ -0,0 +1,70 @@
#include <stdio.h>
#include <string.h>
#include "config.h"
/**
* list - double linked list routines
*
* The list header contains routines for manipulating double linked lists.
* It defines two types: struct list_head used for anchoring lists, and
* struct list_node which is usually embedded in the structure which is placed
* in the list.
*
* Example:
* #include <err.h>
* #include <stdio.h>
* #include <stdlib.h>
* #include <ccan/list/list.h>
*
* struct parent {
* const char *name;
* struct list_head children;
* unsigned int num_children;
* };
*
* struct child {
* const char *name;
* struct list_node list;
* };
*
* int main(int argc, char *argv[])
* {
* struct parent p;
* struct child *c;
* unsigned int i;
*
* if (argc < 2)
* errx(1, "Usage: %s parent children...", argv[0]);
*
* p.name = argv[1];
* list_head_init(&p.children);
* p.num_children = 0;
* for (i = 2; i < argc; i++) {
* c = malloc(sizeof(*c));
* c->name = argv[i];
* list_add(&p.children, &c->list);
* p.num_children++;
* }
*
* printf("%s has %u children:", p.name, p.num_children);
* list_for_each(&p.children, c, list)
* printf("%s ", c->name);
* printf("\n");
* return 0;
* }
*
* License: BSD-MIT
* Author: Rusty Russell <rusty@rustcorp.com.au>
*/
int main(int argc, char *argv[])
{
if (argc != 2)
return 1;
if (strcmp(argv[1], "depends") == 0) {
printf("ccan/container_of\n");
return 0;
}
return 1;
}

@@ -0,0 +1,43 @@
/* Licensed under BSD-MIT - see LICENSE file for details */
#include <stdio.h>
#include <stdlib.h>
#include "list.h"
static void *corrupt(const char *abortstr,
const struct list_node *head,
const struct list_node *node,
unsigned int count)
{
if (abortstr) {
fprintf(stderr,
"%s: prev corrupt in node %p (%u) of %p\n",
abortstr, node, count, head);
abort();
}
return NULL;
}
struct list_node *list_check_node(const struct list_node *node,
const char *abortstr)
{
const struct list_node *p, *n;
int count = 0;
for (p = node, n = node->next; n != node; p = n, n = n->next) {
count++;
if (n->prev != p)
return corrupt(abortstr, node, n, count);
}
/* Check prev on head node. */
if (node->prev != p)
return corrupt(abortstr, node, node, 0);
return (struct list_node *)node;
}
struct list_head *list_check(const struct list_head *h, const char *abortstr)
{
if (!list_check_node(&h->n, abortstr))
return NULL;
return (struct list_head *)h;
}

@@ -0,0 +1,615 @@
/* Licensed under BSD-MIT - see LICENSE file for details */
#ifndef CCAN_LIST_H
#define CCAN_LIST_H
#include <stdbool.h>
#include <assert.h>
#include <ccan/container_of/container_of.h>
#include <ccan/check_type/check_type.h>
/**
* struct list_node - an entry in a doubly-linked list
* @next: next entry (self if empty)
* @prev: previous entry (self if empty)
*
* This is used as an entry in a linked list.
* Example:
* struct child {
* const char *name;
* // Linked list of all us children.
* struct list_node list;
* };
*/
struct list_node
{
struct list_node *next, *prev;
};
/**
* struct list_head - the head of a doubly-linked list
* @h: the list_head (containing next and prev pointers)
*
* This is used as the head of a linked list.
* Example:
* struct parent {
* const char *name;
* struct list_head children;
* unsigned int num_children;
* };
*/
struct list_head
{
struct list_node n;
};
/**
* list_check - check head of a list for consistency
* @h: the list_head
* @abortstr: the location to print on aborting, or NULL.
*
* Because list_nodes have redundant information, consistency checking between
* the back and forward links can be done. This is useful as a debugging check.
* If @abortstr is non-NULL, that will be printed in a diagnostic if the list
* is inconsistent, and the function will abort.
*
* Returns the list head if the list is consistent, NULL if not (it
* can never return NULL if @abortstr is set).
*
* See also: list_check_node()
*
* Example:
* static void dump_parent(struct parent *p)
* {
* struct child *c;
*
* printf("%s (%u children):\n", p->name, p->num_children);
* list_check(&p->children, "bad child list");
* list_for_each(&p->children, c, list)
* printf(" -> %s\n", c->name);
* }
*/
struct list_head *list_check(const struct list_head *h, const char *abortstr);
/**
* list_check_node - check node of a list for consistency
* @n: the list_node
* @abortstr: the location to print on aborting, or NULL.
*
* Check the consistency of the list that @n is in (it must be in one).
*
* See also: list_check()
*
* Example:
* static void dump_child(const struct child *c)
* {
* list_check_node(&c->list, "bad child list");
* printf("%s\n", c->name);
* }
*/
struct list_node *list_check_node(const struct list_node *n,
const char *abortstr);
#ifdef CCAN_LIST_DEBUG
#define list_debug(h) list_check((h), __func__)
#define list_debug_node(n) list_check_node((n), __func__)
#else
#define list_debug(h) (h)
#define list_debug_node(n) (n)
#endif
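/*
 * A build sketch: compiling with CCAN_LIST_DEBUG defined (e.g.
 * cc -DCCAN_LIST_DEBUG -c ccan/list/list.c) turns list_debug() and
 * list_debug_node() into full consistency checks, so list_add(),
 * list_del() and friends validate the list on every call; without it
 * they reduce to the bare pointer and cost nothing.
 */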
/**
* LIST_HEAD_INIT - initializer for an empty list_head
* @name: the name of the list.
*
* Explicit initializer for an empty list.
*
* See also:
* LIST_HEAD, list_head_init()
*
* Example:
* static struct list_head my_list = LIST_HEAD_INIT(my_list);
*/
#define LIST_HEAD_INIT(name) { { &name.n, &name.n } }
/**
* LIST_HEAD - define and initialize an empty list_head
* @name: the name of the list.
*
* The LIST_HEAD macro defines a list_head and initializes it to an empty
* list. It can be prepended by "static" to define a static list_head.
*
* See also:
* LIST_HEAD_INIT, list_head_init()
*
* Example:
* static LIST_HEAD(my_global_list);
*/
#define LIST_HEAD(name) \
struct list_head name = LIST_HEAD_INIT(name)
/**
* list_head_init - initialize a list_head
* @h: the list_head to set to the empty list
*
* Example:
* ...
* struct parent *parent = malloc(sizeof(*parent));
*
* list_head_init(&parent->children);
* parent->num_children = 0;
*/
static inline void list_head_init(struct list_head *h)
{
h->n.next = h->n.prev = &h->n;
}
/**
* list_add - add an entry at the start of a linked list.
* @h: the list_head to add the node to
* @n: the list_node to add to the list.
*
* The list_node does not need to be initialized; it will be overwritten.
* Example:
* struct child *child = malloc(sizeof(*child));
*
* child->name = "marvin";
* list_add(&parent->children, &child->list);
* parent->num_children++;
*/
static inline void list_add(struct list_head *h, struct list_node *n)
{
n->next = h->n.next;
n->prev = &h->n;
h->n.next->prev = n;
h->n.next = n;
(void)list_debug(h);
}
/**
* list_add_tail - add an entry at the end of a linked list.
* @h: the list_head to add the node to
* @n: the list_node to add to the list.
*
* The list_node does not need to be initialized; it will be overwritten.
* Example:
* list_add_tail(&parent->children, &child->list);
* parent->num_children++;
*/
static inline void list_add_tail(struct list_head *h, struct list_node *n)
{
n->next = &h->n;
n->prev = h->n.prev;
h->n.prev->next = n;
h->n.prev = n;
(void)list_debug(h);
}
/**
* list_empty - is a list empty?
* @h: the list_head
*
* If the list is empty, returns true.
*
* Example:
* assert(list_empty(&parent->children) == (parent->num_children == 0));
*/
static inline bool list_empty(const struct list_head *h)
{
(void)list_debug(h);
return h->n.next == &h->n;
}
/**
* list_del - delete an entry from an (unknown) linked list.
* @n: the list_node to delete from the list.
*
* Note that this leaves @n in an undefined state; it can be added to
* another list, but not deleted again.
*
* See also:
* list_del_from()
*
* Example:
* list_del(&child->list);
* parent->num_children--;
*/
static inline void list_del(struct list_node *n)
{
(void)list_debug_node(n);
n->next->prev = n->prev;
n->prev->next = n->next;
#ifdef CCAN_LIST_DEBUG
/* Catch use-after-del. */
n->next = n->prev = NULL;
#endif
}
/**
* list_del_from - delete an entry from a known linked list.
* @h: the list_head the node is in.
* @n: the list_node to delete from the list.
*
* This explicitly indicates which list a node is expected to be in,
* which is better documentation and can catch more bugs.
*
* See also: list_del()
*
* Example:
* list_del_from(&parent->children, &child->list);
* parent->num_children--;
*/
static inline void list_del_from(struct list_head *h, struct list_node *n)
{
#ifdef CCAN_LIST_DEBUG
{
/* Thorough check: make sure it was in list! */
struct list_node *i;
for (i = h->n.next; i != n; i = i->next)
assert(i != &h->n);
}
#endif /* CCAN_LIST_DEBUG */
/* Quick test that catches a surprising number of bugs. */
assert(!list_empty(h));
list_del(n);
}
/**
* list_entry - convert a list_node back into the structure containing it.
* @n: the list_node
* @type: the type of the entry
* @member: the list_node member of the type
*
* Example:
* // First list entry is children.next; convert back to child.
* child = list_entry(parent->children.n.next, struct child, list);
*
* See Also:
* list_top(), list_for_each()
*/
#define list_entry(n, type, member) container_of(n, type, member)
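/*
 * Under the hood this is plain offset arithmetic; roughly (a sketch, not
 * the actual ccan/container_of implementation, which also type-checks
 * @member; offsetof() comes from <stddef.h>):
 *
 *	#define list_entry(n, type, member) \
 *		((type *)((char *)(n) - offsetof(type, member)))
 */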
/**
* list_top - get the first entry in a list
* @h: the list_head
* @type: the type of the entry
* @member: the list_node member of the type
*
* If the list is empty, returns NULL.
*
* Example:
* struct child *first;
* first = list_top(&parent->children, struct child, list);
* if (!first)
* printf("Empty list!\n");
*/
#define list_top(h, type, member) \
((type *)list_top_((h), list_off_(type, member)))
static inline const void *list_top_(const struct list_head *h, size_t off)
{
if (list_empty(h))
return NULL;
return (const char *)h->n.next - off;
}
/**
* list_pop - remove the first entry in a list
* @h: the list_head
* @type: the type of the entry
* @member: the list_node member of the type
*
* If the list is empty, returns NULL.
*
* Example:
* struct child *one;
* one = list_pop(&parent->children, struct child, list);
* if (!one)
* printf("Empty list!\n");
*/
#define list_pop(h, type, member) \
((type *)list_pop_((h), list_off_(type, member)))
static inline void *list_pop_(const struct list_head *h, size_t off)
{
struct list_node *n;
if (list_empty(h))
return NULL;
n = h->n.next;
list_del(n);
return (char *)n - off;
}
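/*
 * Combined with list_add_tail(), list_pop() gives a simple FIFO queue;
 * a sketch reusing the parent/child example above:
 *
 *	list_add_tail(&parent->children, &child->list);          // enqueue
 *	one = list_pop(&parent->children, struct child, list);   // dequeue
 */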
/**
* list_tail - get the last entry in a list
* @h: the list_head
* @type: the type of the entry
* @member: the list_node member of the type
*
* If the list is empty, returns NULL.
*
* Example:
* struct child *last;
* last = list_tail(&parent->children, struct child, list);
* if (!last)
* printf("Empty list!\n");
*/
#define list_tail(h, type, member) \
((type *)list_tail_((h), list_off_(type, member)))
static inline const void *list_tail_(const struct list_head *h, size_t off)
{
if (list_empty(h))
return NULL;
return (const char *)h->n.prev - off;
}
/**
* list_for_each - iterate through a list.
* @h: the list_head (warning: evaluated multiple times!)
* @i: the structure containing the list_node
* @member: the list_node member of the structure
*
* This is a convenient wrapper to iterate @i over the entire list. It's
* a for loop, so you can break and continue as normal.
*
* Example:
* list_for_each(&parent->children, child, list)
* printf("Name: %s\n", child->name);
*/
#define list_for_each(h, i, member) \
list_for_each_off(h, i, list_off_var_(i, member))
/**
* list_for_each_rev - iterate through a list backwards.
* @h: the list_head
* @i: the structure containing the list_node
* @member: the list_node member of the structure
*
* This is a convenient wrapper to iterate @i over the entire list. It's
* a for loop, so you can break and continue as normal.
*
* Example:
* list_for_each_rev(&parent->children, child, list)
* printf("Name: %s\n", child->name);
*/
#define list_for_each_rev(h, i, member) \
for (i = container_of_var(list_debug(h)->n.prev, i, member); \
&i->member != &(h)->n; \
i = container_of_var(i->member.prev, i, member))
/**
* list_for_each_safe - iterate through a list, maybe during deletion
* @h: the list_head
* @i: the structure containing the list_node
* @nxt: the structure containing the list_node
* @member: the list_node member of the structure
*
* This is a convenient wrapper to iterate @i over the entire list. It's
* a for loop, so you can break and continue as normal. The extra variable
* @nxt is used to hold the next element, so you can delete @i from the list.
*
* Example:
* struct child *next;
* list_for_each_safe(&parent->children, child, next, list) {
* list_del(&child->list);
* parent->num_children--;
* }
*/
#define list_for_each_safe(h, i, nxt, member) \
list_for_each_safe_off(h, i, nxt, list_off_var_(i, member))
/**
* list_next - get the next entry in a list
* @h: the list_head
* @i: a pointer to an entry in the list.
* @member: the list_node member of the structure
*
* If @i was the last entry in the list, returns NULL.
*
* Example:
* struct child *second;
* second = list_next(&parent->children, first, list);
* if (!second)
* printf("No second child!\n");
*/
#define list_next(h, i, member) \
((list_typeof(i))list_entry_or_null(list_debug(h), \
(i)->member.next, \
list_off_var_((i), member)))
/**
* list_prev - get the previous entry in a list
* @h: the list_head
* @i: a pointer to an entry in the list.
* @member: the list_node member of the structure
*
* If @i was the first entry in the list, returns NULL.
*
* Example:
* first = list_prev(&parent->children, second, list);
* if (!first)
* printf("Can't go back to first child?!\n");
*/
#define list_prev(h, i, member) \
((list_typeof(i))list_entry_or_null(list_debug(h), \
(i)->member.prev, \
list_off_var_((i), member)))
/**
* list_append_list - empty one list onto the end of another.
* @to: the list to append into
* @from: the list to empty.
*
* This takes the entire contents of @from and moves it to the end of
* @to. After this @from will be empty.
*
* Example:
* struct list_head adopter;
*
* list_append_list(&adopter, &parent->children);
* assert(list_empty(&parent->children));
* parent->num_children = 0;
*/
static inline void list_append_list(struct list_head *to,
struct list_head *from)
{
struct list_node *from_tail = list_debug(from)->n.prev;
struct list_node *to_tail = list_debug(to)->n.prev;
/* Sew in head and entire list. */
to->n.prev = from_tail;
from_tail->next = &to->n;
to_tail->next = &from->n;
from->n.prev = to_tail;
/* Now remove head. */
list_del(&from->n);
list_head_init(from);
}
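/*
 * Concretely: if @to holds [A, B] and @from holds [C, D], then after
 * list_append_list(to, from) @to holds [A, B, C, D] and @from is
 * re-initialized to empty. No nodes are copied; only the four boundary
 * next/prev pointers are resewn before the old head is removed.
 */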
/**
* list_prepend_list - empty one list into the start of another.
* @to: the list to prepend into
* @from: the list to empty.
*
* This takes the entire contents of @from and moves it to the start
* of @to. After this @from will be empty.
*
* Example:
* list_prepend_list(&adopter, &parent->children);
* assert(list_empty(&parent->children));
* parent->num_children = 0;
*/
static inline void list_prepend_list(struct list_head *to,
struct list_head *from)
{
struct list_node *from_tail = list_debug(from)->n.prev;
struct list_node *to_head = list_debug(to)->n.next;
/* Sew in head and entire list. */
to->n.next = &from->n;
from->n.prev = &to->n;
to_head->prev = from_tail;
from_tail->next = to_head;
/* Now remove head. */
list_del(&from->n);
list_head_init(from);
}
/**
* list_for_each_off - iterate through a list of memory regions.
* @h: the list_head
* @i: the pointer to a memory region which contains list node data.
* @off: offset (relative to @i) at which list node data resides.
*
* This is a low-level wrapper to iterate @i over the entire list, used to
* implement all other, more high-level, for-each constructs. It's a for
* loop, so you can break and continue as normal.
*
* WARNING! Being the low-level macro that it is, this wrapper doesn't know
* nor care about the type of @i. The only assumption made is that @i points
* to a chunk of memory that, at some @off relative to @i, contains a
* properly filled `struct list_node' which in turn contains pointers to
* memory chunks, and it's turtles all the way down. With all that in mind,
* remember that given the wrong pointer/offset pair this macro will happily
* churn through all your memory until a SEGFAULT stops it; in other words,
* caveat emptor.
*
* It is worth mentioning that one legitimate use-case for this wrapper is
* operating on opaque types with a known offset for the `struct list_node'
* member (preferably 0), because it allows you not to disclose the type of
* @i.
*
* Example:
* list_for_each_off(&parent->children, child,
* offsetof(struct child, list))
* printf("Name: %s\n", child->name);
*/
#define list_for_each_off(h, i, off) \
for (i = (typeof(i))list_node_to_off_(list_debug(h)->n.next, (off)); \
list_node_from_off_((void *)i, (off)) != &(h)->n; \
i = (typeof(i))list_node_to_off_(list_node_from_off_((void *)i, (off))->next, \
(off)))
/**
* list_for_each_safe_off - iterate through a list of memory regions, maybe
* during deletion
* @h: the list_head
* @i: the pointer to a memory region which contains list node data.
* @nxt: a second pointer of the same kind as @i, used to hold the next region
* @off: offset (relative to @i) at which list node data resides.
*
* For details see `list_for_each_off' and `list_for_each_safe'
* descriptions.
*
* Example:
* list_for_each_safe_off(&parent->children, child,
* next, offsetof(struct child, list))
* printf("Name: %s\n", child->name);
*/
#define list_for_each_safe_off(h, i, nxt, off) \
for (i = (typeof(i))list_node_to_off_(list_debug(h)->n.next, (off)), \
nxt = (typeof(nxt))list_node_to_off_(list_node_from_off_(i, (off))->next, \
(off)); \
list_node_from_off_(i, (off)) != &(h)->n; \
i = nxt, \
nxt = (typeof(nxt))list_node_to_off_(list_node_from_off_(i, (off))->next, \
(off)))
/* Other -off variants. */
#define list_entry_off(n, type, off) \
((type *)list_node_from_off_((n), (off)))
#define list_head_off(h, type, off) \
((type *)list_top_((h), (off)))
#define list_tail_off(h, type, off) \
((type *)list_tail_((h), (off)))
#define list_add_off(h, n, off) \
list_add((h), list_node_from_off_((n), (off)))
#define list_del_off(n, off) \
list_del(list_node_from_off_((n), (off)))
#define list_del_from_off(h, n, off) \
list_del_from(h, list_node_from_off_((n), (off)))
/* Offset helper functions so we only single-evaluate. */
static inline void *list_node_to_off_(struct list_node *node, size_t off)
{
return (void *)((char *)node - off);
}
static inline struct list_node *list_node_from_off_(void *ptr, size_t off)
{
return (struct list_node *)((char *)ptr + off);
}
/* Get the offset of the member, but make sure it's a list_node. */
#define list_off_(type, member) \
(container_off(type, member) + \
check_type(((type *)0)->member, struct list_node))
#define list_off_var_(var, member) \
(container_off_var(var, member) + \
check_type(var->member, struct list_node))
#if HAVE_TYPEOF
#define list_typeof(var) typeof(var)
#else
#define list_typeof(var) void *
#endif
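/*
 * Note: without typeof support, list_typeof() degrades to void *, so
 * list_next() and list_prev() still work but their casts lose
 * compile-time type checking.
 */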
/* Returns member, or NULL if at end of list. */
static inline void *list_entry_or_null(const struct list_head *h,
struct list_node *n,
size_t off)
{
if (n == &h->n)
return NULL;
return (char *)n - off;
}
#endif /* CCAN_LIST_H */

@@ -0,0 +1,49 @@
#include <ccan/list/list.h>
#include <ccan/tap/tap.h>
#include <ccan/list/list.c>
#include <stdbool.h>
#include <stdio.h>
struct child {
const char *name;
struct list_node list;
};
static bool children(const struct list_head *list)
{
return !list_empty(list);
}
static const struct child *first_child(const struct list_head *list)
{
return list_top(list, struct child, list);
}
static const struct child *last_child(const struct list_head *list)
{
return list_tail(list, struct child, list);
}
static void check_children(const struct list_head *list)
{
list_check(list, "bad child list");
}
static void print_children(const struct list_head *list)
{
const struct child *c;
list_for_each(list, c, list)
printf("%s\n", c->name);
}
int main(void)
{
LIST_HEAD(h);
children(&h);
first_child(&h);
last_child(&h);
check_children(&h);
print_children(&h);
return 0;
}

@@ -0,0 +1,56 @@
#include <stdlib.h>
#include <stdbool.h>
#include <time.h>
#include <ccan/list/list.h>
#include "helper.h"
#define ANSWER_TO_THE_ULTIMATE_QUESTION_OF_LIFE_THE_UNIVERSE_AND_EVERYTHING \
(42)
struct opaque {
struct list_node list;
size_t secret_offset;
char secret_drawer[42];
};
static bool not_randomized = true;
struct opaque *create_opaque_blob(void)
{
struct opaque *blob = calloc(1, sizeof(struct opaque));
if (not_randomized) {
srandom((int)time(NULL));
not_randomized = false;
}
blob->secret_offset = random() % (sizeof(blob->secret_drawer));
blob->secret_drawer[blob->secret_offset] =
ANSWER_TO_THE_ULTIMATE_QUESTION_OF_LIFE_THE_UNIVERSE_AND_EVERYTHING;
return blob;
}
bool if_blobs_know_the_secret(struct opaque *blob)
{
bool answer = true;
int i;
for (i = 0; i < sizeof(blob->secret_drawer) /
sizeof(blob->secret_drawer[0]); i++)
if (i != blob->secret_offset)
answer = answer && (blob->secret_drawer[i] == 0);
else
answer = answer &&
(blob->secret_drawer[blob->secret_offset] ==
ANSWER_TO_THE_ULTIMATE_QUESTION_OF_LIFE_THE_UNIVERSE_AND_EVERYTHING);
return answer;
}
void destroy_opaque_blob(struct opaque *blob)
{
free(blob);
}

@@ -0,0 +1,9 @@
/* These are in a separate C file so we can test undefined structures. */
struct opaque;
typedef struct opaque opaque_t;
opaque_t *create_opaque_blob(void);
bool if_blobs_know_the_secret(opaque_t *blob);
void destroy_opaque_blob(opaque_t *blob);

@@ -0,0 +1,89 @@
#include <setjmp.h>
#include <stdlib.h>
#include <stdio.h>
#include <stdarg.h>
#include <string.h>
#include <err.h>
/* We don't actually want it to exit... */
static jmp_buf aborted;
#define abort() longjmp(aborted, 1)
#define fprintf my_fprintf
static char printf_buffer[1000];
static int my_fprintf(FILE *stream, const char *format, ...)
{
va_list ap;
int ret;
va_start(ap, format);
ret = vsprintf(printf_buffer, format, ap);
va_end(ap);
return ret;
}
#include <ccan/list/list.h>
#include <ccan/tap/tap.h>
#include <ccan/list/list.c>
int main(int argc, char *argv[])
{
struct list_head list;
struct list_node n1;
char expect[100];
plan_tests(9);
/* Empty list. */
list.n.next = &list.n;
list.n.prev = &list.n;
ok1(list_check(&list, NULL) == &list);
/* Bad back ptr */
list.n.prev = &n1;
/* Non-aborting version. */
ok1(list_check(&list, NULL) == NULL);
/* Aborting version. */
sprintf(expect, "test message: prev corrupt in node %p (0) of %p\n",
&list, &list);
if (setjmp(aborted) == 0) {
list_check(&list, "test message");
fail("list_check on empty with bad back ptr didn't fail!");
} else {
ok1(strcmp(printf_buffer, expect) == 0);
}
/* n1 in list. */
list.n.next = &n1;
list.n.prev = &n1;
n1.prev = &list.n;
n1.next = &list.n;
ok1(list_check(&list, NULL) == &list);
ok1(list_check_node(&n1, NULL) == &n1);
/* Bad back ptr */
n1.prev = &n1;
ok1(list_check(&list, NULL) == NULL);
ok1(list_check_node(&n1, NULL) == NULL);
/* Aborting version. */
sprintf(expect, "test message: prev corrupt in node %p (1) of %p\n",
&n1, &list);
if (setjmp(aborted) == 0) {
list_check(&list, "test message");
fail("list_check on n1 bad back ptr didn't fail!");
} else {
ok1(strcmp(printf_buffer, expect) == 0);
}
sprintf(expect, "test message: prev corrupt in node %p (0) of %p\n",
&n1, &n1);
if (setjmp(aborted) == 0) {
list_check_node(&n1, "test message");
fail("list_check_node on n1 bad back ptr didn't fail!");
} else {
ok1(strcmp(printf_buffer, expect) == 0);
}
return exit_status();
}

@@ -0,0 +1,36 @@
#define CCAN_LIST_DEBUG 1
#include <ccan/list/list.h>
#include <ccan/tap/tap.h>
#include <ccan/list/list.c>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <signal.h>
int main(int argc, char *argv[])
{
struct list_head list1, list2;
struct list_node n1, n2, n3;
pid_t child;
int status;
plan_tests(1);
list_head_init(&list1);
list_head_init(&list2);
list_add(&list1, &n1);
list_add(&list2, &n2);
list_add_tail(&list2, &n3);
child = fork();
if (child) {
wait(&status);
} else {
/* This should abort. */
list_del_from(&list1, &n3);
exit(0);
}
ok1(WIFSIGNALED(status) && WTERMSIG(status) == SIGABRT);
list_del_from(&list2, &n3);
return exit_status();
}

@@ -0,0 +1,65 @@
#include <ccan/list/list.h>
#include <ccan/tap/tap.h>
#include <ccan/list/list.c>
#include "helper.h"
struct parent {
const char *name;
unsigned int num_children;
struct list_head children;
};
struct child {
const char *name;
struct list_node list;
};
int main(int argc, char *argv[])
{
struct parent parent;
struct child c1, c2, c3;
const struct parent *p;
const struct child *c;
plan_tests(20);
parent.num_children = 0;
list_head_init(&parent.children);
c1.name = "c1";
list_add(&parent.children, &c1.list);
ok1(list_next(&parent.children, &c1, list) == NULL);
ok1(list_prev(&parent.children, &c1, list) == NULL);
c2.name = "c2";
list_add_tail(&parent.children, &c2.list);
ok1(list_next(&parent.children, &c1, list) == &c2);
ok1(list_prev(&parent.children, &c1, list) == NULL);
ok1(list_next(&parent.children, &c2, list) == NULL);
ok1(list_prev(&parent.children, &c2, list) == &c1);
c3.name = "c3";
list_add_tail(&parent.children, &c3.list);
ok1(list_next(&parent.children, &c1, list) == &c2);
ok1(list_prev(&parent.children, &c1, list) == NULL);
ok1(list_next(&parent.children, &c2, list) == &c3);
ok1(list_prev(&parent.children, &c2, list) == &c1);
ok1(list_next(&parent.children, &c3, list) == NULL);
ok1(list_prev(&parent.children, &c3, list) == &c2);
/* Const variants */
p = &parent;
c = &c2;
ok1(list_next(&p->children, &c1, list) == &c2);
ok1(list_prev(&p->children, &c1, list) == NULL);
ok1(list_next(&p->children, c, list) == &c3);
ok1(list_prev(&p->children, c, list) == &c1);
ok1(list_next(&parent.children, c, list) == &c3);
ok1(list_prev(&parent.children, c, list) == &c1);
ok1(list_next(&p->children, &c3, list) == NULL);
ok1(list_prev(&p->children, &c3, list) == &c2);
return exit_status();
}


@ -0,0 +1,111 @@
#include <ccan/list/list.h>
#include <ccan/tap/tap.h>
#include <ccan/list/list.c>
#include <stdarg.h>
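/* Check that the nodes reachable from h's head match the
NULL-terminated sequence of expected nodes, and that the list then
closes back on the head. */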
static bool list_expect(struct list_head *h, ...)
{
va_list ap;
struct list_node *n = &h->n, *expected;
va_start(ap, h);
while ((expected = va_arg(ap, struct list_node *)) != NULL) {
n = n->next;
if (n != expected)
return false;
}
return (n->next == &h->n);
}
int main(int argc, char *argv[])
{
struct list_head h1, h2;
struct list_node n[4];
plan_tests(40);
list_head_init(&h1);
list_head_init(&h2);
/* Append an empty list to an empty list. */
list_append_list(&h1, &h2);
ok1(list_empty(&h1));
ok1(list_empty(&h2));
ok1(list_check(&h1, NULL));
ok1(list_check(&h2, NULL));
/* Prepend an empty list to an empty list. */
list_prepend_list(&h1, &h2);
ok1(list_empty(&h1));
ok1(list_empty(&h2));
ok1(list_check(&h1, NULL));
ok1(list_check(&h2, NULL));
/* Append an empty list to a non-empty list */
list_add(&h1, &n[0]);
list_append_list(&h1, &h2);
ok1(list_empty(&h2));
ok1(list_check(&h1, NULL));
ok1(list_check(&h2, NULL));
ok1(list_expect(&h1, &n[0], NULL));
/* Prepend an empty list to a non-empty list */
list_prepend_list(&h1, &h2);
ok1(list_empty(&h2));
ok1(list_check(&h1, NULL));
ok1(list_check(&h2, NULL));
ok1(list_expect(&h1, &n[0], NULL));
/* Append a non-empty list to an empty list. */
list_append_list(&h2, &h1);
ok1(list_empty(&h1));
ok1(list_check(&h1, NULL));
ok1(list_check(&h2, NULL));
ok1(list_expect(&h2, &n[0], NULL));
/* Prepend a non-empty list to an empty list. */
list_prepend_list(&h1, &h2);
ok1(list_empty(&h2));
ok1(list_check(&h1, NULL));
ok1(list_check(&h2, NULL));
ok1(list_expect(&h1, &n[0], NULL));
/* Prepend a non-empty list to non-empty list. */
list_add(&h2, &n[1]);
list_prepend_list(&h1, &h2);
ok1(list_empty(&h2));
ok1(list_check(&h1, NULL));
ok1(list_check(&h2, NULL));
ok1(list_expect(&h1, &n[1], &n[0], NULL));
/* Append a non-empty list to non-empty list. */
list_add(&h2, &n[2]);
list_append_list(&h1, &h2);
ok1(list_empty(&h2));
ok1(list_check(&h1, NULL));
ok1(list_check(&h2, NULL));
ok1(list_expect(&h1, &n[1], &n[0], &n[2], NULL));
/* Prepend a 2-entry list to a 2-entry list. */
list_del_from(&h1, &n[2]);
list_add(&h2, &n[2]);
list_add_tail(&h2, &n[3]);
list_prepend_list(&h1, &h2);
ok1(list_empty(&h2));
ok1(list_check(&h1, NULL));
ok1(list_check(&h2, NULL));
ok1(list_expect(&h1, &n[2], &n[3], &n[1], &n[0], NULL));
/* Append a 2-entry list to a 2-entry list. */
list_del_from(&h1, &n[2]);
list_del_from(&h1, &n[3]);
list_add(&h2, &n[2]);
list_add_tail(&h2, &n[3]);
list_append_list(&h1, &h2);
ok1(list_empty(&h2));
ok1(list_check(&h1, NULL));
ok1(list_check(&h2, NULL));
ok1(list_expect(&h1, &n[1], &n[0], &n[2], &n[3], NULL));
return exit_status();
}
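
list_append_list() and list_prepend_list() splice every node of the donor list onto the tail (or head) of the target and leave the donor empty, which is why each block above re-checks list_empty() on the donor. A minimal sketch of the tail splice on bare circular-ring nodes (hypothetical names, not CCAN's code):

struct node { struct node *next, *prev; };

/* Move all of `from`'s nodes to the tail of `to`; `from` ends up empty. */
static void splice_tail(struct node *to, struct node *from)
{
	if (from->next == from)
		return;			/* donor already empty */
	from->next->prev = to->prev;	/* first donor node links back to old tail */
	to->prev->next = from->next;	/* old tail links forward to first donor node */
	from->prev->next = to;		/* last donor node closes the ring on `to` */
	to->prev = from->prev;		/* target's tail is now the last donor node */
	from->next = from->prev = from;	/* reinitialize the donor head */
}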


@ -0,0 +1,168 @@
/* Make sure macros only evaluate their args once. */
#include <ccan/list/list.h>
#include <ccan/tap/tap.h>
#include <ccan/list/list.c>
struct parent {
const char *name;
struct list_head children;
unsigned int num_children;
int eval_count;
};
struct child {
const char *name;
struct list_node list;
};
static LIST_HEAD(static_list);
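/* ref() yields obj unchanged but bumps counter as a side effect, so
each assertion below can verify that a list macro evaluated its
argument exactly once. */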
#define ref(obj, counter) ((counter)++, (obj))
int main(int argc, char *argv[])
{
struct parent parent;
struct child c1, c2, c3, *c, *n;
unsigned int i;
unsigned int static_count = 0, parent_count = 0, list_count = 0,
node_count = 0;
struct list_head list = LIST_HEAD_INIT(list);
plan_tests(74);
/* Test LIST_HEAD, LIST_HEAD_INIT, list_empty and check_list */
ok1(list_empty(ref(&static_list, static_count)));
ok1(static_count == 1);
ok1(list_check(ref(&static_list, static_count), NULL));
ok1(static_count == 2);
ok1(list_empty(ref(&list, list_count)));
ok1(list_count == 1);
ok1(list_check(ref(&list, list_count), NULL));
ok1(list_count == 2);
parent.num_children = 0;
list_head_init(ref(&parent.children, parent_count));
ok1(parent_count == 1);
/* Test list_head_init */
ok1(list_empty(ref(&parent.children, parent_count)));
ok1(parent_count == 2);
ok1(list_check(ref(&parent.children, parent_count), NULL));
ok1(parent_count == 3);
c2.name = "c2";
list_add(ref(&parent.children, parent_count), &c2.list);
ok1(parent_count == 4);
/* Test list_add and !list_empty. */
ok1(!list_empty(ref(&parent.children, parent_count)));
ok1(parent_count == 5);
ok1(c2.list.next == &parent.children.n);
ok1(c2.list.prev == &parent.children.n);
ok1(parent.children.n.next == &c2.list);
ok1(parent.children.n.prev == &c2.list);
/* Test list_check */
ok1(list_check(ref(&parent.children, parent_count), NULL));
ok1(parent_count == 6);
c1.name = "c1";
list_add(ref(&parent.children, parent_count), &c1.list);
ok1(parent_count == 7);
/* Test list_add and !list_empty. */
ok1(!list_empty(ref(&parent.children, parent_count)));
ok1(parent_count == 8);
ok1(c2.list.next == &parent.children.n);
ok1(c2.list.prev == &c1.list);
ok1(parent.children.n.next == &c1.list);
ok1(parent.children.n.prev == &c2.list);
ok1(c1.list.next == &c2.list);
ok1(c1.list.prev == &parent.children.n);
/* Test list_check */
ok1(list_check(ref(&parent.children, parent_count), NULL));
ok1(parent_count == 9);
c3.name = "c3";
list_add_tail(ref(&parent.children, parent_count), &c3.list);
ok1(parent_count == 10);
/* Test list_add_tail and !list_empty. */
ok1(!list_empty(ref(&parent.children, parent_count)));
ok1(parent_count == 11);
ok1(parent.children.n.next == &c1.list);
ok1(parent.children.n.prev == &c3.list);
ok1(c1.list.next == &c2.list);
ok1(c1.list.prev == &parent.children.n);
ok1(c2.list.next == &c3.list);
ok1(c2.list.prev == &c1.list);
ok1(c3.list.next == &parent.children.n);
ok1(c3.list.prev == &c2.list);
/* Test list_check */
ok1(list_check(ref(&parent.children, parent_count), NULL));
ok1(parent_count == 12);
/* Test list_check_node */
ok1(list_check_node(&c1.list, NULL));
ok1(list_check_node(&c2.list, NULL));
ok1(list_check_node(&c3.list, NULL));
/* Test list_top */
ok1(list_top(ref(&parent.children, parent_count), struct child, list) == &c1);
ok1(parent_count == 13);
/* Test list_tail */
ok1(list_tail(ref(&parent.children, parent_count), struct child, list) == &c3);
ok1(parent_count == 14);
/* Test list_for_each. */
i = 0;
list_for_each(&parent.children, c, list) {
switch (i++) {
case 0:
ok1(c == &c1);
break;
case 1:
ok1(c == &c2);
break;
case 2:
ok1(c == &c3);
break;
}
if (i > 2)
break;
}
ok1(i == 3);
/* Test list_for_each_safe, list_del and list_del_from. */
i = 0;
list_for_each_safe(&parent.children, c, n, list) {
switch (i++) {
case 0:
ok1(c == &c1);
list_del(ref(&c->list, node_count));
ok1(node_count == 1);
break;
case 1:
ok1(c == &c2);
list_del_from(ref(&parent.children, parent_count),
ref(&c->list, node_count));
ok1(node_count == 2);
break;
case 2:
ok1(c == &c3);
list_del_from(ref(&parent.children, parent_count),
ref(&c->list, node_count));
ok1(node_count == 3);
break;
}
ok1(list_check(ref(&parent.children, parent_count), NULL));
if (i > 2)
break;
}
ok1(i == 3);
ok1(parent_count == 19);
ok1(list_empty(ref(&parent.children, parent_count)));
ok1(parent_count == 20);
/* Test list_top/list_tail on empty list. */
ok1(list_top(ref(&parent.children, parent_count), struct child, list) == NULL);
ok1(parent_count == 21);
ok1(list_tail(ref(&parent.children, parent_count), struct child, list) == NULL);
ok1(parent_count == 22);
return exit_status();
}


@ -0,0 +1,3 @@
/* Just like run.c, but with all debug checks enabled. */
#define CCAN_LIST_DEBUG 1
#include <ccan/list/test/run.c>


@ -0,0 +1,206 @@
#include <ccan/list/list.h>
#include <ccan/tap/tap.h>
#include <ccan/list/list.c>
#include "helper.h"
struct parent {
const char *name;
struct list_head children;
unsigned int num_children;
};
struct child {
const char *name;
struct list_node list;
};
static LIST_HEAD(static_list);
int main(int argc, char *argv[])
{
struct parent parent;
struct child c1, c2, c3, *c, *n;
unsigned int i;
struct list_head list = LIST_HEAD_INIT(list);
opaque_t *q, *nq;
struct list_head opaque_list = LIST_HEAD_INIT(opaque_list);
plan_tests(68);
/* Test LIST_HEAD, LIST_HEAD_INIT, list_empty and check_list */
ok1(list_empty(&static_list));
ok1(list_check(&static_list, NULL));
ok1(list_empty(&list));
ok1(list_check(&list, NULL));
parent.num_children = 0;
list_head_init(&parent.children);
/* Test list_head_init */
ok1(list_empty(&parent.children));
ok1(list_check(&parent.children, NULL));
c2.name = "c2";
list_add(&parent.children, &c2.list);
/* Test list_add and !list_empty. */
ok1(!list_empty(&parent.children));
ok1(c2.list.next == &parent.children.n);
ok1(c2.list.prev == &parent.children.n);
ok1(parent.children.n.next == &c2.list);
ok1(parent.children.n.prev == &c2.list);
/* Test list_check */
ok1(list_check(&parent.children, NULL));
c1.name = "c1";
list_add(&parent.children, &c1.list);
/* Test list_add and !list_empty. */
ok1(!list_empty(&parent.children));
ok1(c2.list.next == &parent.children.n);
ok1(c2.list.prev == &c1.list);
ok1(parent.children.n.next == &c1.list);
ok1(parent.children.n.prev == &c2.list);
ok1(c1.list.next == &c2.list);
ok1(c1.list.prev == &parent.children.n);
/* Test list_check */
ok1(list_check(&parent.children, NULL));
c3.name = "c3";
list_add_tail(&parent.children, &c3.list);
/* Test list_add_tail and !list_empty. */
ok1(!list_empty(&parent.children));
ok1(parent.children.n.next == &c1.list);
ok1(parent.children.n.prev == &c3.list);
ok1(c1.list.next == &c2.list);
ok1(c1.list.prev == &parent.children.n);
ok1(c2.list.next == &c3.list);
ok1(c2.list.prev == &c1.list);
ok1(c3.list.next == &parent.children.n);
ok1(c3.list.prev == &c2.list);
/* Test list_check */
ok1(list_check(&parent.children, NULL));
/* Test list_check_node */
ok1(list_check_node(&c1.list, NULL));
ok1(list_check_node(&c2.list, NULL));
ok1(list_check_node(&c3.list, NULL));
/* Test list_top */
ok1(list_top(&parent.children, struct child, list) == &c1);
/* Test list_pop */
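/* list_pop() removes and returns the head entry, so c1 comes off and
c2 becomes the new top. */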
ok1(list_pop(&parent.children, struct child, list) == &c1);
ok1(list_top(&parent.children, struct child, list) == &c2);
list_add(&parent.children, &c1.list);
/* Test list_tail */
ok1(list_tail(&parent.children, struct child, list) == &c3);
/* Test list_for_each. */
i = 0;
list_for_each(&parent.children, c, list) {
switch (i++) {
case 0:
ok1(c == &c1);
break;
case 1:
ok1(c == &c2);
break;
case 2:
ok1(c == &c3);
break;
}
if (i > 2)
break;
}
ok1(i == 3);
/* Test list_for_each_rev. */
i = 0;
list_for_each_rev(&parent.children, c, list) {
switch (i++) {
case 0:
ok1(c == &c3);
break;
case 1:
ok1(c == &c2);
break;
case 2:
ok1(c == &c1);
break;
}
if (i > 2)
break;
}
ok1(i == 3);
/* Test list_for_each_safe, list_del and list_del_from. */
i = 0;
list_for_each_safe(&parent.children, c, n, list) {
switch (i++) {
case 0:
ok1(c == &c1);
list_del(&c->list);
break;
case 1:
ok1(c == &c2);
list_del_from(&parent.children, &c->list);
break;
case 2:
ok1(c == &c3);
list_del_from(&parent.children, &c->list);
break;
}
ok1(list_check(&parent.children, NULL));
if (i > 2)
break;
}
ok1(i == 3);
ok1(list_empty(&parent.children));
/* Test list_for_each_off. */
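/* The blobs are opaque; the casts below assume their list_node sits at
offset 0, which is also the offset passed to the _off iterators and
the list_del_off()/list_del_from_off() calls. */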
list_add_tail(&opaque_list,
(struct list_node *)create_opaque_blob());
list_add_tail(&opaque_list,
(struct list_node *)create_opaque_blob());
list_add_tail(&opaque_list,
(struct list_node *)create_opaque_blob());
i = 0;
list_for_each_off(&opaque_list, q, 0) {
i++;
ok1(if_blobs_know_the_secret(q));
}
ok1(i == 3);
/* Test list_for_each_safe_off, list_del_off and list_del_from_off. */
i = 0;
list_for_each_safe_off(&opaque_list, q, nq, 0) {
switch (i++) {
case 0:
ok1(if_blobs_know_the_secret(q));
list_del_off(q, 0);
destroy_opaque_blob(q);
break;
case 1:
ok1(if_blobs_know_the_secret(q));
list_del_from_off(&opaque_list, q, 0);
destroy_opaque_blob(q);
break;
case 2:
ok1(if_blobs_know_the_secret(q));
list_del_from_off(&opaque_list, q, 0);
destroy_opaque_blob(q);
break;
}
ok1(list_check(&opaque_list, NULL));
if (i > 2)
break;
}
ok1(i == 3);
ok1(list_empty(&opaque_list));
/* Test list_top/list_tail/list_pop on empty list. */
ok1(list_top(&parent.children, struct child, list) == NULL);
ok1(list_tail(&parent.children, struct child, list) == NULL);
ok1(list_pop(&parent.children, struct child, list) == NULL);
return exit_status();
}

@ -0,0 +1 @@
Subproject commit 0953a17a4281fc26831da647ad3fcd5e21e6473b

@ -0,0 +1 @@
Subproject commit b36f4c477c40356a0ae1204b567cca3c2a57d201

@ -0,0 +1 @@
Subproject commit e0781fbe4b667f529864b268c5df491a9094e3f0

@ -0,0 +1 @@
Subproject commit ad0e89cbfb4d0c1ce4d097e134eb7be67baebb36

@ -0,0 +1 @@
Subproject commit 47cd2725d61e54719933b83ea51c64ad60c24066


@ -0,0 +1,114 @@
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0
ARG base_IMAGE_TAG
ARG bcc_IMAGE_TAG
ARG libuv_IMAGE_TAG
ARG aws_sdk_IMAGE_TAG
ARG cpp_misc_IMAGE_TAG
ARG go_IMAGE_TAG
ARG libmaxminddb_IMAGE_TAG
ARG gcp_cpp_IMAGE_TAG
ARG opentelemetry_IMAGE_TAG
ARG libbpf_IMAGE_TAG
#gen:dep-arg
FROM $base_IMAGE_TAG AS build-main
FROM $bcc_IMAGE_TAG AS build-bcc
FROM $libuv_IMAGE_TAG AS build-libuv
FROM $aws_sdk_IMAGE_TAG AS build-aws-sdk
FROM $cpp_misc_IMAGE_TAG AS build-cpp-misc
FROM $go_IMAGE_TAG AS build-go
FROM $libmaxminddb_IMAGE_TAG AS build-libmaxminddb
FROM $gcp_cpp_IMAGE_TAG AS build-gcp_cpp
FROM $opentelemetry_IMAGE_TAG AS build-opentelemetry
FROM $libbpf_IMAGE_TAG AS build-libbpf
#gen:dep-from
# Bring everything together
FROM build-main AS build-result
# Package definitions
ARG PKG_DOCKER="podman uidmap slirp4netns"
ARG PKG_KERNEL_TOOLS="kmod selinux-utils"
ARG PKG_CORE_TOOLS="pass"
ARG PKG_DEV_TOOLS="vim-nox lsof silversearcher-ag ssh"
ARG PKG_AWS_TOOLS="awscli"
ARG BENV_JAVA_VERSION=17
ARG PKG_EXTRA_PACKAGES="openjdk-${BENV_JAVA_VERSION}-jdk-headless google-cloud-sdk google-cloud-sdk-skaffold"
ARG PKG_PYTHON_LIBS="python3-ijson python3-docker"
RUN sudo sh -c 'echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] http://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list' && \
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add - && \
sudo apt-get -y update && \
sudo apt-get install --no-install-recommends -y \
$PKG_DOCKER \
$PKG_KERNEL_TOOLS \
$PKG_CORE_TOOLS \
$PKG_DEV_TOOLS \
$PKG_AWS_TOOLS \
$PKG_EXTRA_PACKAGES \
$PKG_PYTHON_LIBS \
libcap2-bin && \
# START fix podman permissions -- see comment below \
sudo chmod 0755 /usr/bin/newuidmap /usr/bin/newgidmap && \
sudo setcap cap_setuid=ep /usr/bin/newuidmap && \
sudo setcap cap_setgid=ep /usr/bin/newgidmap && \
sudo apt-get autoremove --purge -y libcap2-bin && \
# END fix podman permissions \
sudo apt-get clean && \
sudo rm -rf /var/lib/apt/lists/*
# For info on the fix to podman in container, see https://samuel.forestier.app/blog/security/podman-rootless-in-podman-rootless-the-debian-way
# Replace setuid bits by proper file capabilities for uidmap binaries.
# See <https://github.com/containers/podman/discussions/19931>.
## java version required by render framework parser
RUN case $(uname -m) in \
x86_64) sudo update-alternatives --set java /usr/lib/jvm/java-${BENV_JAVA_VERSION}-openjdk-amd64/bin/java && \
sudo update-alternatives --set javac /usr/lib/jvm/java-${BENV_JAVA_VERSION}-openjdk-amd64/bin/javac \
;; \
aarch64) sudo update-alternatives --set java /usr/lib/jvm/java-${BENV_JAVA_VERSION}-openjdk-arm64/bin/java && \
sudo update-alternatives --set javac /usr/lib/jvm/java-${BENV_JAVA_VERSION}-openjdk-arm64/bin/javac \
;; \
esac
# gradle
RUN sudo wget https://services.gradle.org/distributions/gradle-8.14.3-bin.zip -O /usr/local/lib/gradle.zip
# swagger-codegen-cli, used to build the API docs
RUN sudo wget \
https://repo1.maven.org/maven2/io/swagger/swagger-codegen-cli/2.4.12/swagger-codegen-cli-2.4.12.jar \
-O /usr/local/lib/swagger-codegen-cli.jar
# Preprocessor for BPF used by cmake
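# --break-system-packages is required because Debian bookworm marks the
# system Python as externally managed (PEP 668).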
RUN pip3 install --break-system-packages pcpp
# add a script to setup build inside of container
# to be run after we build the image.
RUN ln -s $HOME/src/dev/benv-build.sh build.sh
# Licensing information
#
COPY LICENSE.txt $HOME/
COPY NOTICE.txt $HOME/
# copy artifacts from individual builds
COPY --from=build-bcc $HOME/install $HOME/install
COPY --from=build-libuv $HOME/install $HOME/install
COPY --from=build-aws-sdk $HOME/install $HOME/install
COPY --from=build-cpp-misc $HOME/install $HOME/install
COPY --from=build-libmaxminddb $HOME/install $HOME/install
COPY --from=build-gcp_cpp $HOME/install $HOME/install
COPY --from=build-opentelemetry $HOME/install $HOME/install
COPY --from=build-libbpf $HOME/install $HOME/install
#gen:dep-copy
COPY --from=build-go $HOME/go/bin /usr/local/go/bin
COPY --from=build-go $HOME/go/src /usr/local/go/src
COPY --from=build-go $HOME/go/pkg /usr/local/go/pkg
ARG BENV_UNMINIMIZE=false
RUN (which unminimize && $BENV_UNMINIMIZE && (yes | sudo unminimize)) || true
RUN echo 'if [ -e "$HOME/src/dev/build-env/profile" ]; then source "$HOME/src/dev/build-env/profile"; fi' >> $HOME/.profile

build-tools/final/NOTICE.txt (new file, 1800 lines)

File diff suppressed because it is too large.


@ -0,0 +1,61 @@
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0
# gcp_cpp
ARG base_IMAGE_TAG
FROM $base_IMAGE_TAG AS build
ARG NPROC
ARG CMAKE_BUILD_TYPE
##############
# googleapis #
##############
WORKDIR $HOME
COPY --chown=${UID}:${GID} googleapis googleapis
###########################
# google-cloud-cpp-common #
###########################
WORKDIR $HOME
COPY --chown=${UID}:${GID} google-cloud-cpp-common google-cloud-cpp-common
WORKDIR $HOME/build/google-cloud-cpp-common
RUN nice cmake \
-DCMAKE_INSTALL_PREFIX:PATH=$HOME/install \
-DBUILD_TESTING=OFF \
-DGOOGLE_CLOUD_CPP_ENABLE_GRPC_UTILS=OFF \
-DCMAKE_C_FLAGS="-fPIC" \
-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE} \
$HOME/google-cloud-cpp-common
RUN nice make -j${NPROC:-3}
RUN nice make -j${NPROC:-3} install
####################
# google-cloud-cpp #
####################
WORKDIR $HOME
COPY --chown=${UID}:${GID} google-cloud-cpp google-cloud-cpp
WORKDIR $HOME/build/google-cloud-cpp
RUN nice cmake \
-DCMAKE_INSTALL_PREFIX:PATH=$HOME/install \
-DBUILD_TESTING=OFF \
-DGOOGLE_CLOUD_CPP_ENABLE_BIGTABLE=OFF \
-DGOOGLE_CLOUD_CPP_ENABLE_STORAGE=OFF \
-DGOOGLE_CLOUD_CPP_ENABLE_FIRESTORE=OFF \
-DGOOGLE_CLOUD_CPP_ENABLE_GRPC_UTILS=OFF \
-DCMAKE_C_FLAGS="-fPIC" \
-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE} \
$HOME/google-cloud-cpp
RUN nice make -j${NPROC:-3}
#RUN nice make -j${NPROC:-3} install
# Runtime stage - copy only necessary artifacts
FROM $base_IMAGE_TAG
COPY --from=build $HOME/install $HOME/install

@ -0,0 +1 @@
Subproject commit e60c67ae5b3aa4dc725cab9adc8ac211b34af85c

@ -0,0 +1 @@
Subproject commit 4192f662349a35bdd34090c20bade14839b1bfb7

@ -0,0 +1 @@
Subproject commit 0f44538daf93e648e4fe5529acf8219cef3a0a39

build-tools/get_tag.sh (new executable file, 17 lines)

@ -0,0 +1,17 @@
#!/bin/bash
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0
# This script gets a tag for the given directory (DIR=$1) from the latest git modification.
# The image+tag will be ${BENV_PREFIX}-${DIR}:${VERSION_HASH}
SCRIPTDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
DIR="$1"
BENV_PREFIX="${DOCKER_TAG_PREFIX}benv"
VERSION_HASH=$(git log -1 --format=%h "$SCRIPTDIR/${DIR}")
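# %h is the abbreviated hash of the newest commit touching $DIR, so the
# tag changes only when that directory's contents change.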
IMAGE_TAG="${BENV_PREFIX}-${DIR}:${VERSION_HASH}"
echo "${IMAGE_TAG}"

build-tools/go/Dockerfile (new file, 30 lines)

@ -0,0 +1,30 @@
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0
# Golang: grpc, grpc-gateway
ARG base_IMAGE_TAG
FROM $base_IMAGE_TAG AS build
# protoc-gen-go
RUN /usr/local/go/bin/go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.34.2
RUN /usr/local/go/bin/go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@v1.5.1
# protoc-gen-grpc-gateway and protoc-gen-swagger
# use v1 branch of grpc-gateway
RUN mkdir -p $HOME/go/src/github.com/grpc-ecosystem && \
cd $HOME/go/src/github.com/grpc-ecosystem && \
git clone https://github.com/grpc-ecosystem/grpc-gateway && \
cd grpc-gateway && git checkout v1 && \
/usr/local/go/bin/go install github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway && \
/usr/local/go/bin/go install github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger
# staticcheck linter
RUN /usr/local/go/bin/go install honnef.co/go/tools/cmd/staticcheck@2023.1
# Runtime stage - copy only necessary artifacts
FROM $base_IMAGE_TAG
COPY --from=build $HOME/go/bin $HOME/go/bin
COPY --from=build $HOME/go/src $HOME/go/src
COPY --from=build $HOME/go/pkg $HOME/go/pkg


@ -0,0 +1,26 @@
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0
# compile our own libbpf
ARG base_IMAGE_TAG
FROM $base_IMAGE_TAG AS build
ARG CMAKE_BUILD_TYPE
ARG RESTRICTED_NPROC
WORKDIR $HOME
COPY --chown=${UID}:${GID} libbpf libbpf
COPY --chown=${UID}:${GID} bpftool bpftool
# Build libbpf first
WORKDIR $HOME/libbpf/src
RUN make -j ${RESTRICTED_NPROC:-1} DESTDIR=$HOME/install install
# Build bpftool (it will use the libbpf we just built)
WORKDIR $HOME/bpftool/src
RUN make -j ${RESTRICTED_NPROC:-1} DESTDIR=$HOME/install install
# Runtime stage - copy only necessary artifacts
FROM $base_IMAGE_TAG
COPY --from=build $HOME/install $HOME/install

Some files were not shown because too many files have changed in this diff.