Compare commits

45 Commits

Author SHA1 Message Date
Daniel J Walsh 3f8e31a073
Merge pull request #1714 from slp/install-virglrenderer
container-images: add virglrenderer to vulkan
2025-07-19 06:35:54 -04:00
Daniel J Walsh 08722738cf
Merge pull request #1718 from containers/konflux/references/main
Update Konflux references
2025-07-19 06:34:54 -04:00
red-hat-konflux-kflux-prd-rh03[bot] ab7adbb430
Update Konflux references
Signed-off-by: red-hat-konflux-kflux-prd-rh03 <206760901+red-hat-konflux-kflux-prd-rh03[bot]@users.noreply.github.com>
2025-07-19 08:03:10 +00:00
Mike Bonnet 72504179fc
Merge pull request #1716 from containers/fix-sentencepiece-build
build_rag.sh: install cmake
2025-07-18 10:54:06 -07:00
Mike Bonnet dcfeee8538 build_rag.sh: install cmake
cmake is required to build sentencepiece.

Signed-off-by: Mike Bonnet <mikeb@redhat.com>
2025-07-18 09:17:33 -07:00
Daniel J Walsh 1d903e746c
Merge pull request #1677 from containers/vllm-cpu
Add vllm to cpu inferencing Containerfile
2025-07-18 06:05:26 -04:00
Daniel J Walsh 13a22f6671
Merge pull request #1708 from containers/konflux-more-images
konflux: add pipelines for asahi, cann, intel-gpu, llama-stack, musa, openvino, and ramalama-cli
2025-07-18 06:04:12 -04:00
Daniel J Walsh 1d6aa51cd7
Merge pull request #1712 from tonyjames/main
Add support for Intel Iris Xe Graphics (46AA, 46A6, 46A8)
2025-07-18 06:03:34 -04:00
Tony James 50d01f177b Add support for Intel Iris Xe Graphics (46AA, 46A6, 46A8)
Signed-off-by: Tony James <3128081+tonyjames@users.noreply.github.com>
2025-07-17 18:58:07 -04:00
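
A quick way to check for one of these newly supported iGPUs is to look for the PCI device IDs from the commit title (a sketch; 8086 is Intel's PCI vendor ID):

lspci -nn | grep -Ei '8086:(46aa|46a6|46a8)'
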
Eric Curtin 234134b5cc Add vllm to cpu inferencing Containerfile
To be built upon the "ramalama" image

Signed-off-by: Eric Curtin <ecurtin@redhat.com>
2025-07-17 21:09:20 +01:00
Daniel J Walsh 64ca9cfb4a
Merge pull request #1709 from containers/fix-cuda-gpu
fix GPU selection and pytorch URL when building rag images
2025-07-17 11:31:41 -04:00
Eric Curtin e3dda75ec6
Merge pull request #1707 from rhatdan/install
README: remove duplicate statements
2025-07-17 15:57:12 +01:00
Daniel J Walsh 075df4bb87
Merge pull request #1617 from jwieleRH/check_nvidia
Improve NVIDIA GPU detection.
2025-07-17 06:29:40 -04:00
Daniel J Walsh 5b46b23f2e
README: remove duplicate statements
Simplify ramalama's top-level description. Remove the duplicate
statements.

Also make sure all references to PyPI are spelled this way.

Signed-off-by: Ken Dreyer <kdreyer@redhat.com>
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-07-17 06:26:55 -04:00
Daniel J Walsh 1fe1b20c8c
Merge pull request #1711 from carlwgeorge/include-config-in-wheel
Included ramalama.conf in wheel
2025-07-17 06:21:47 -04:00
Mike Bonnet f5512c8f65 build_rag.sh: install sentencepiece via pip
python3-sentencepiece was pulling in an older version of protobuf.

Signed-off-by: Mike Bonnet <mikeb@redhat.com>
2025-07-16 23:27:54 -07:00
Mike Bonnet 7132d5a7f8 build_rag.sh: disable pip cache
pip's caching behavior was causing errors when downloading huge (4.5G) torch wheels during
the rocm-ubi-rag build.

Signed-off-by: Mike Bonnet <mikeb@redhat.com>
2025-07-16 23:27:54 -07:00
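
A sketch of the combined effect of the two build_rag.sh changes above: install sentencepiece from PyPI rather than as a distro package (sidestepping the older protobuf), with pip's cache disabled so huge wheels are not cached inside the build container:

pip install --no-cache-dir sentencepiece
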
Mike Bonnet 2d3f8dfe28 fix GPU selection and pytorch URL when building rag images
A previous commit changed the second argument to add_rag() from the image name to the
full repo path. Update the case statement accordingly, so the "GPU" variable is set correctly.

The "cuda" directory is no longer available on download.pytorch.org. When building for cuda,
pull wheels from the "cu128" directory, which contains binaries built for CUDA 12.8.

When building rocm* images, download binaries from the "rocm6.3" directory, which are built
for ROCm 6.3.

Signed-off-by: Mike Bonnet <mikeb@redhat.com>
2025-07-16 23:27:54 -07:00
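
In pip terms, the fix amounts to pointing at the versioned wheel directories named above (a sketch; the package list is abbreviated):

pip install --no-cache-dir torch --index-url https://download.pytorch.org/whl/cu128
pip install --no-cache-dir torch --index-url https://download.pytorch.org/whl/rocm6.3

The first form pulls binaries built for CUDA 12.8, the second binaries built for ROCm 6.3.
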
Carl George 1d8a2e5b6c Included ramalama.conf in wheel
Currently other data files such as shortnames.conf, man pages, and shell
completions are included in the Python wheel.  Including ramalama.conf
as well means we can avoid several calls to make in the RPM spec file,
instead relying on the wheel mechanisms to put these files in place.  As
long as `make docs` is run before the wheel generation, all the
necessary files are included.

Signed-off-by: Carl George <carlwgeorge@gmail.com>
2025-07-17 01:20:28 -05:00
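
The ordering the commit depends on looks roughly like this (a sketch, assuming the PyPA "build" frontend produces the wheel):

make docs
python3 -m build --wheel

Running "make docs" first guarantees that ramalama.conf, the man pages, and the shell completions exist before the wheel is assembled.
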
Eric Curtin 42ac787686
Merge pull request #1710 from containers/konflux/mintmaker/main/registry.access.redhat.com-ubi9-ubi-9.x
chore(deps): update registry.access.redhat.com/ubi9/ubi docker tag to v9.6-1752625787
2025-07-17 01:15:43 +01:00
red-hat-konflux-kflux-prd-rh03[bot] 18c560fff6
chore(deps): update registry.access.redhat.com/ubi9/ubi docker tag to v9.6-1752625787
Signed-off-by: red-hat-konflux-kflux-prd-rh03 <206760901+red-hat-konflux-kflux-prd-rh03[bot]@users.noreply.github.com>
2025-07-17 00:03:57 +00:00
John Wiele ce35ccb4c3 Remove pyYAML as a dependency.
Extract information directly from the CDI YAML file by making some
simplifying assumptions instead of doing a complete YAML parse.

Default to all devices known to nvidia-smi.

Fix the signature of check_nvidia().

Remove some debug logging.

Signed-off-by: John Wiele <jwiele@redhat.com>
2025-07-16 16:11:39 -04:00
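
A hypothetical sketch of the "simplifying assumptions" approach: if every CDI device is declared on its own "name:" line, a line-oriented scan recovers the device names without a full YAML parser (the spec path and key layout here are assumptions, not taken from the commit):

grep -E '^[[:space:]]*-?[[:space:]]*name:' /etc/cdi/nvidia.yaml | awk '{print $NF}'
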
John Wiele b97177b408 Apply suggestions from code review
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
Signed-off-by: John Wiele <jwiele@redhat.com>
2025-07-16 16:08:32 -04:00
John Wiele 14c4aaca39 Improve NVIDIA GPU detection.
Allow GPUs to be specified by UUID as well as index since the index is
not guaranteed to persist across reboots.

Crosscheck requested GPUs with nvidia-smi and CDI configuration. If
any requested GPUs lack corresponding CDI configuration, print a
message with a pointer to documentation.

If the only GPU specified in the CDI configuration is "all", as
appears to be the case on WSL2, use "all" as the default.

Add an optional encoding argument to run_cmd() to facilitate checking
the output of the command.

Add pyYAML as a dependency for parsing the CDI configuration.

Signed-off-by: John Wiele <jwiele@redhat.com>
2025-07-16 16:08:32 -04:00
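
The crosscheck described above can start from a query like this (a sketch): nvidia-smi lists each GPU's index alongside its UUID, and only the UUID is guaranteed to persist across reboots:

nvidia-smi --query-gpu=index,uuid --format=csv,noheader
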
Mike Bonnet bf4fd56106 konflux: add pipelines for asahi, cann, intel-gpu, llama-stack, musa, openvino, and ramalama-cli
Signed-off-by: Mike Bonnet <mikeb@redhat.com>
2025-07-16 12:23:12 -07:00
Mike Bonnet 1373a8e7ba konflux: don't trigger pipelines on PR transition to "Ready for Review"
By default, Konflux triggers new pipelines when a PR moves from Draft to
"Ready for Review". Because the commit SHA hasn't changed, no new builds
are performed. However, a new integration test is also triggered, and because
no builds were performed it is unable to find the URL and digest of the images,
causing the integration test to fail. Updating the "on-cel-expression" to exclude
the transition to "Ready for Review" avoids the unnecessary pipelines and the
false integration test failures.

Update the whitespace of the "on-cel-expression" in the push pipelines for consistency.
No functional change.

Signed-off-by: Mike Bonnet <mikeb@redhat.com>
2025-07-16 12:11:54 -07:00
Sergio Lopez 74584d0b5e container-images: add virglrenderer to vulkan
When running in a krun-isolated container, we need
"/usr/libexec/virgl_render_server" to be present in the container
image to launch it before entering the microVM.

Install the virglrenderer package in addition to mesa-vulkan-drivers.

Signed-off-by: Sergio Lopez <slp@redhat.com>
2025-07-16 18:53:11 +02:00
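
Inside the vulkan image build the change reduces to one extra package (a sketch, assuming a dnf-based Containerfile):

dnf install -y mesa-vulkan-drivers virglrenderer

This provides /usr/libexec/virgl_render_server for krun to launch before entering the microVM.
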
Daniel J Walsh 4dea2ee02f
Merge pull request #1687 from containers/konflux-cuda-arm64
konflux: build cuda on arm64, and simplify testing
2025-07-16 12:01:45 -04:00
Mike Bonnet 069e98c095 fix unit tests to be independent of environment
Setting RAMALAMA_IMAGE would cause some unit tests to fail. Make those
tests independent of the calling environment.

Signed-off-by: Mike Bonnet <mikeb@redhat.com>
2025-07-16 06:44:00 -07:00
Mike Bonnet f57b8eb284 konflux: copy source into the bats image
Including the source in the bats image ensures that we're always testing with the same
version of the code that was used to build the images. It also eliminates the need for
repeated checkouts of the repo and simplifies testing, avoiding additional volumes and
artifact references.

Signed-off-by: Mike Bonnet <mikeb@redhat.com>
2025-07-16 06:44:00 -07:00
Mike Bonnet 299d3b9b75 konflux: build cuda and layered images on arm64
Signed-off-by: Mike Bonnet <mikeb@redhat.com>
2025-07-16 06:44:00 -07:00
Stephen Smoogen 683b8fb8a0
Minor fixes to rpm builds by packit and spec file. (#1704)
* This removes epel9 from packit rules as epel9 does not currently
  build without many additional packages added to the distro.
* This fixes a breakage in epel10 by adding mailcap as a buildrequires.

Signed-off-by: Stephen Smoogen <ssmoogen@redhat.com>
Co-authored-by: Stephen Smoogen <ssmoogen@redhat.com>
2025-07-16 09:37:00 -04:00
Mike Bonnet 64e22ee0aa
Merge pull request #1700 from containers/test-optimization-and-fixup
reduce unnecessary image pulls during testing, and re-enable a couple tests
2025-07-15 11:34:59 -07:00
Mike Bonnet 651fc503bd implement "ps --noheading" for docker using --format
"docker ps" does not support the "--noheading" option. Use the --format
option to emulate the behavior.

Signed-off-by: Mike Bonnet <mikeb@redhat.com>
2025-07-15 10:32:53 -07:00
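
A sketch of the emulation (the field selection is illustrative, not the exact format string used): because "docker ps" lacks "--noheading", a Go-template --format string prints one row per container and suppresses the header line entirely:

docker ps --format '{{.ID}} {{.Names}} {{.Status}}'
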
Daniel J Walsh 384cad7161
Merge pull request #1696 from containers/renovate/quay.io-konflux-ci-build-trusted-artifacts-latest
chore(deps): update quay.io/konflux-ci/build-trusted-artifacts:latest docker digest to f7d0c51
2025-07-15 13:17:33 -04:00
Daniel J Walsh 3dec0d7487
Merge pull request #1699 from containers/renovate/registry.access.redhat.com-ubi9-ubi-9.x
chore(deps): update registry.access.redhat.com/ubi9/ubi docker tag to v9.6-1752587049
2025-07-15 13:16:27 -04:00
Daniel J Walsh d7763ad1c5
Merge pull request #1698 from containers/mistral
Mistral should point to lmstudio gguf
2025-07-15 13:15:11 -04:00
Mike Bonnet b550cc97d2 bats: re-enable a couple tests, and minor cleanup
Fix the "serve and stop" test by passing the correct (possibly random) port to "ramalama chat".

Fix the definition of "ramalama_runtime".

Signed-off-by: Mike Bonnet <mikeb@redhat.com>
2025-07-15 09:03:49 -07:00
Mike Bonnet 927d2f992a bats: allow the container to use the overlay driver when possible
Remove the STORAGE_DRIVER env var from the container so it doesn't force use
of the vfs driver in all cases.

Mount /dev/fuse into the container when running locally.

Signed-off-by: Mike Bonnet <mikeb@redhat.com>
2025-07-15 09:03:49 -07:00
Mike Bonnet f176bb3926 add a dryrun field to Config, and set it early
accel_image() is called to set option defaults, before options are even parsed.
This can cause images to be pulled even if they will not actually be used, slowing
down testing and making the CLI less responsive. Set the "dryrun" option before
the first call to accel_image() to avoid unnecessary image pulls.

Signed-off-by: Mike Bonnet <mikeb@redhat.com>
2025-07-15 09:03:49 -07:00
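
The user-visible effect is that a dry run no longer pulls an image just to resolve the default (a sketch; the model name is hypothetical):

ramalama --dryrun run tinyllama
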
renovate[bot] f38c736d23
chore(deps): update registry.access.redhat.com/ubi9/ubi docker tag to v9.6-1752587049
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-07-15 15:47:38 +00:00
Eric Curtin fa2f485175 Mistral should point to lmstudio gguf
I don't know who MaziyarPanahi is, but I know who lmstudio are

Signed-off-by: Eric Curtin <ecurtin@redhat.com>
2025-07-15 15:04:54 +01:00
Mike Bonnet f8c41b38c1 avoid unnecessary image pulls
Don't pull images in _get_rag() and _get_source_model() if pull == "never"
or if running with "--dryrun".

Signed-off-by: Mike Bonnet <mikeb@redhat.com>
2025-07-14 14:42:21 -07:00
renovate[bot] b7323f7972
chore(deps): update quay.io/konflux-ci/build-trusted-artifacts:latest docker digest to f7d0c51
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-07-14 20:13:34 +00:00
Daniel J Walsh 53e38dea8f
Merge pull request #1694 from rhatdan/VERSION
Bump to 0.11.0
2025-07-14 10:59:06 -04:00
102 changed files with 2078 additions and 270 deletions

View File

@@ -79,7 +79,7 @@ jobs:
 dist_git_branches: &fedora_targets
 - fedora-all
 - epel10
-- epel9
+- epel10.0
 - job: koji_build
 trigger: commit
@@ -92,4 +92,4 @@ jobs:
 dist_git_branches:
 - fedora-branched # rawhide updates are created automatically
 - epel10
-- epel9
+- epel10.0

View File

@@ -0,0 +1,48 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/pull_request_number: '{{pull_request_number}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "true"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: asahi-llama-server
pipelines.appstudio.openshift.io/type: build
name: asahi-llama-server-on-pull-request
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/asahi-llama-server:on-pr-{{revision}}
- name: image-expires-after
value: 5d
- name: build-platforms
value:
- linux-m2xlarge/amd64
- linux-m2xlarge/arm64
- name: dockerfile
value: container-images/common/Containerfile.entrypoint
- name: parent-image
value: quay.io/redhat-user-workloads/ramalama-tenant/asahi:on-pr-{{revision}}
- name: build-args
value:
- PARENT=quay.io/redhat-user-workloads/ramalama-tenant/asahi:on-pr-{{revision}}
- ENTRYPOINT=/usr/bin/llama-server.sh
pipelineRef:
name: pull-request-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@@ -0,0 +1,45 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "false"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "push" && target_branch == "main"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: asahi-llama-server
pipelines.appstudio.openshift.io/type: build
name: asahi-llama-server-on-push
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/asahi-llama-server:{{revision}}
- name: build-platforms
value:
- linux-m2xlarge/amd64
- linux-m2xlarge/arm64
- name: dockerfile
value: container-images/common/Containerfile.entrypoint
- name: parent-image
value: quay.io/redhat-user-workloads/ramalama-tenant/asahi:{{revision}}
- name: build-args
value:
- PARENT=quay.io/redhat-user-workloads/ramalama-tenant/asahi:{{revision}}
- ENTRYPOINT=/usr/bin/llama-server.sh
pipelineRef:
name: push-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@@ -0,0 +1,48 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/pull_request_number: '{{pull_request_number}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "true"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: asahi-rag
pipelines.appstudio.openshift.io/type: build
name: asahi-rag-on-pull-request
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/asahi-rag:on-pr-{{revision}}
- name: image-expires-after
value: 5d
- name: build-platforms
value:
- linux-d160-m2xlarge/amd64
- linux-d160-m2xlarge/arm64
- name: dockerfile
value: container-images/common/Containerfile.rag
- name: parent-image
value: quay.io/redhat-user-workloads/ramalama-tenant/asahi:on-pr-{{revision}}
- name: build-args
value:
- PARENT=quay.io/redhat-user-workloads/ramalama-tenant/asahi:on-pr-{{revision}}
- GPU=cpu
pipelineRef:
name: pull-request-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@@ -0,0 +1,45 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "false"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "push" && target_branch == "main"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: asahi-rag
pipelines.appstudio.openshift.io/type: build
name: asahi-rag-on-push
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/asahi-rag:{{revision}}
- name: build-platforms
value:
- linux-d160-m2xlarge/amd64
- linux-d160-m2xlarge/arm64
- name: dockerfile
value: container-images/common/Containerfile.rag
- name: parent-image
value: quay.io/redhat-user-workloads/ramalama-tenant/asahi:{{revision}}
- name: build-args
value:
- PARENT=quay.io/redhat-user-workloads/ramalama-tenant/asahi:{{revision}}
- GPU=cpu
pipelineRef:
name: push-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@@ -0,0 +1,48 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/pull_request_number: '{{pull_request_number}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "true"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: asahi-whisper-server
pipelines.appstudio.openshift.io/type: build
name: asahi-whisper-server-on-pull-request
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/asahi-whisper-server:on-pr-{{revision}}
- name: image-expires-after
value: 5d
- name: build-platforms
value:
- linux-m2xlarge/amd64
- linux-m2xlarge/arm64
- name: dockerfile
value: container-images/common/Containerfile.entrypoint
- name: parent-image
value: quay.io/redhat-user-workloads/ramalama-tenant/asahi:on-pr-{{revision}}
- name: build-args
value:
- PARENT=quay.io/redhat-user-workloads/ramalama-tenant/asahi:on-pr-{{revision}}
- ENTRYPOINT=/usr/bin/whisper-server.sh
pipelineRef:
name: pull-request-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@@ -0,0 +1,45 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "false"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "push" && target_branch == "main"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: asahi-whisper-server
pipelines.appstudio.openshift.io/type: build
name: asahi-whisper-server-on-push
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/asahi-whisper-server:{{revision}}
- name: build-platforms
value:
- linux-m2xlarge/amd64
- linux-m2xlarge/arm64
- name: dockerfile
value: container-images/common/Containerfile.entrypoint
- name: parent-image
value: quay.io/redhat-user-workloads/ramalama-tenant/asahi:{{revision}}
- name: build-args
value:
- PARENT=quay.io/redhat-user-workloads/ramalama-tenant/asahi:{{revision}}
- ENTRYPOINT=/usr/bin/whisper-server.sh
pipelineRef:
name: push-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@@ -0,0 +1,42 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/pull_request_number: '{{pull_request_number}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "true"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: asahi
pipelines.appstudio.openshift.io/type: build
name: asahi-on-pull-request
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/asahi:on-pr-{{revision}}
- name: image-expires-after
value: 5d
- name: build-platforms
value:
- linux-c4xlarge/amd64
- linux-c4xlarge/arm64
- name: dockerfile
value: container-images/asahi/Containerfile
pipelineRef:
name: pull-request-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@@ -0,0 +1,39 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "false"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "push" && target_branch == "main"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: asahi
pipelines.appstudio.openshift.io/type: build
name: asahi-on-push
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/asahi:{{revision}}
- name: build-platforms
value:
- linux-c4xlarge/amd64
- linux-c4xlarge/arm64
- name: dockerfile
value: container-images/asahi/Containerfile
pipelineRef:
name: push-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@@ -8,8 +8,8 @@ metadata:
 build.appstudio.redhat.com/target_branch: '{{target_branch}}'
 pipelinesascode.tekton.dev/cancel-in-progress: "true"
 pipelinesascode.tekton.dev/max-keep-runs: "3"
-pipelinesascode.tekton.dev/on-cel-expression: event == "pull_request" && target_branch
-== "main"
+pipelinesascode.tekton.dev/on-cel-expression: >-
+event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
 labels:
 appstudio.openshift.io/application: ramalama
 appstudio.openshift.io/component: bats

View File

@@ -7,8 +7,8 @@ metadata:
 build.appstudio.redhat.com/target_branch: '{{target_branch}}'
 pipelinesascode.tekton.dev/cancel-in-progress: "false"
 pipelinesascode.tekton.dev/max-keep-runs: "3"
-pipelinesascode.tekton.dev/on-cel-expression: event == "push" && target_branch
-== "main"
+pipelinesascode.tekton.dev/on-cel-expression: >-
+event == "push" && target_branch == "main"
 labels:
 appstudio.openshift.io/application: ramalama
 appstudio.openshift.io/component: bats

View File

@@ -0,0 +1,48 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/pull_request_number: '{{pull_request_number}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "true"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: cann-llama-server
pipelines.appstudio.openshift.io/type: build
name: cann-llama-server-on-pull-request
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/cann-llama-server:on-pr-{{revision}}
- name: image-expires-after
value: 5d
- name: build-platforms
value:
- linux-m2xlarge/amd64
- linux-m2xlarge/arm64
- name: dockerfile
value: container-images/common/Containerfile.entrypoint
- name: parent-image
value: quay.io/redhat-user-workloads/ramalama-tenant/cann:on-pr-{{revision}}
- name: build-args
value:
- PARENT=quay.io/redhat-user-workloads/ramalama-tenant/cann:on-pr-{{revision}}
- ENTRYPOINT=/usr/bin/llama-server.sh
pipelineRef:
name: pull-request-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@@ -0,0 +1,45 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "false"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "push" && target_branch == "main"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: cann-llama-server
pipelines.appstudio.openshift.io/type: build
name: cann-llama-server-on-push
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/cann-llama-server:{{revision}}
- name: build-platforms
value:
- linux-m2xlarge/amd64
- linux-m2xlarge/arm64
- name: dockerfile
value: container-images/common/Containerfile.entrypoint
- name: parent-image
value: quay.io/redhat-user-workloads/ramalama-tenant/cann:{{revision}}
- name: build-args
value:
- PARENT=quay.io/redhat-user-workloads/ramalama-tenant/cann:{{revision}}
- ENTRYPOINT=/usr/bin/llama-server.sh
pipelineRef:
name: push-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@@ -0,0 +1,48 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/pull_request_number: '{{pull_request_number}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "true"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: cann-rag
pipelines.appstudio.openshift.io/type: build
name: cann-rag-on-pull-request
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/cann-rag:on-pr-{{revision}}
- name: image-expires-after
value: 5d
- name: build-platforms
value:
- linux-d160-m2xlarge/amd64
- linux-d160-m2xlarge/arm64
- name: dockerfile
value: container-images/common/Containerfile.rag
- name: parent-image
value: quay.io/redhat-user-workloads/ramalama-tenant/cann:on-pr-{{revision}}
- name: build-args
value:
- PARENT=quay.io/redhat-user-workloads/ramalama-tenant/cann:on-pr-{{revision}}
- GPU=cpu
pipelineRef:
name: pull-request-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@@ -0,0 +1,45 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "false"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "push" && target_branch == "main"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: cann-rag
pipelines.appstudio.openshift.io/type: build
name: cann-rag-on-push
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/cann-rag:{{revision}}
- name: build-platforms
value:
- linux-d160-m2xlarge/amd64
- linux-d160-m2xlarge/arm64
- name: dockerfile
value: container-images/common/Containerfile.rag
- name: parent-image
value: quay.io/redhat-user-workloads/ramalama-tenant/cann:{{revision}}
- name: build-args
value:
- PARENT=quay.io/redhat-user-workloads/ramalama-tenant/cann:{{revision}}
- GPU=cpu
pipelineRef:
name: push-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@@ -0,0 +1,48 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/pull_request_number: '{{pull_request_number}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "true"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: cann-whisper-server
pipelines.appstudio.openshift.io/type: build
name: cann-whisper-server-on-pull-request
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/cann-whisper-server:on-pr-{{revision}}
- name: image-expires-after
value: 5d
- name: build-platforms
value:
- linux-m2xlarge/amd64
- linux-m2xlarge/arm64
- name: dockerfile
value: container-images/common/Containerfile.entrypoint
- name: parent-image
value: quay.io/redhat-user-workloads/ramalama-tenant/cann:on-pr-{{revision}}
- name: build-args
value:
- PARENT=quay.io/redhat-user-workloads/ramalama-tenant/cann:on-pr-{{revision}}
- ENTRYPOINT=/usr/bin/whisper-server.sh
pipelineRef:
name: pull-request-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@@ -0,0 +1,45 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "false"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "push" && target_branch == "main"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: cann-whisper-server
pipelines.appstudio.openshift.io/type: build
name: cann-whisper-server-on-push
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/cann-whisper-server:{{revision}}
- name: build-platforms
value:
- linux-m2xlarge/amd64
- linux-m2xlarge/arm64
- name: dockerfile
value: container-images/common/Containerfile.entrypoint
- name: parent-image
value: quay.io/redhat-user-workloads/ramalama-tenant/cann:{{revision}}
- name: build-args
value:
- PARENT=quay.io/redhat-user-workloads/ramalama-tenant/cann:{{revision}}
- ENTRYPOINT=/usr/bin/whisper-server.sh
pipelineRef:
name: push-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@@ -0,0 +1,42 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/pull_request_number: '{{pull_request_number}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "true"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: cann
pipelines.appstudio.openshift.io/type: build
name: cann-on-pull-request
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/cann:on-pr-{{revision}}
- name: image-expires-after
value: 5d
- name: build-platforms
value:
- linux-c4xlarge/amd64
- linux-c4xlarge/arm64
- name: dockerfile
value: container-images/cann/Containerfile
pipelineRef:
name: pull-request-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@@ -0,0 +1,39 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "false"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "push" && target_branch == "main"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: cann
pipelines.appstudio.openshift.io/type: build
name: cann-on-push
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/cann:{{revision}}
- name: build-platforms
value:
- linux-c4xlarge/amd64
- linux-c4xlarge/arm64
- name: dockerfile
value: container-images/cann/Containerfile
pipelineRef:
name: push-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@@ -8,8 +8,8 @@ metadata:
 build.appstudio.redhat.com/target_branch: '{{target_branch}}'
 pipelinesascode.tekton.dev/cancel-in-progress: "true"
 pipelinesascode.tekton.dev/max-keep-runs: "3"
-pipelinesascode.tekton.dev/on-cel-expression: event == "pull_request" && target_branch
-== "main"
+pipelinesascode.tekton.dev/on-cel-expression: >-
+event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
 labels:
 appstudio.openshift.io/application: ramalama
 appstudio.openshift.io/component: cuda-llama-server
@@ -29,6 +29,7 @@ spec:
 - name: build-platforms
 value:
 - linux-m2xlarge/amd64
+- linux-m2xlarge/arm64
 - name: dockerfile
 value: container-images/common/Containerfile.entrypoint
 - name: parent-image

View File

@@ -7,8 +7,8 @@ metadata:
 build.appstudio.redhat.com/target_branch: '{{target_branch}}'
 pipelinesascode.tekton.dev/cancel-in-progress: "false"
 pipelinesascode.tekton.dev/max-keep-runs: "3"
-pipelinesascode.tekton.dev/on-cel-expression: event == "push" && target_branch
-== "main"
+pipelinesascode.tekton.dev/on-cel-expression: >-
+event == "push" && target_branch == "main"
 labels:
 appstudio.openshift.io/application: ramalama
 appstudio.openshift.io/component: cuda-llama-server
@@ -26,6 +26,7 @@ spec:
 - name: build-platforms
 value:
 - linux-m2xlarge/amd64
+- linux-m2xlarge/arm64
 - name: dockerfile
 value: container-images/common/Containerfile.entrypoint
 - name: parent-image

View File

@@ -8,8 +8,8 @@ metadata:
 build.appstudio.redhat.com/target_branch: '{{target_branch}}'
 pipelinesascode.tekton.dev/cancel-in-progress: "true"
 pipelinesascode.tekton.dev/max-keep-runs: "3"
-pipelinesascode.tekton.dev/on-cel-expression: event == "pull_request" && target_branch
-== "main"
+pipelinesascode.tekton.dev/on-cel-expression: >-
+event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
 labels:
 appstudio.openshift.io/application: ramalama
 appstudio.openshift.io/component: cuda-rag
@@ -29,6 +29,7 @@ spec:
 - name: build-platforms
 value:
 - linux-d160-m2xlarge/amd64
+- linux-d160-m2xlarge/arm64
 - name: dockerfile
 value: container-images/common/Containerfile.rag
 - name: parent-image

View File

@@ -7,8 +7,8 @@ metadata:
 build.appstudio.redhat.com/target_branch: '{{target_branch}}'
 pipelinesascode.tekton.dev/cancel-in-progress: "false"
 pipelinesascode.tekton.dev/max-keep-runs: "3"
-pipelinesascode.tekton.dev/on-cel-expression: event == "push" && target_branch
-== "main"
+pipelinesascode.tekton.dev/on-cel-expression: >-
+event == "push" && target_branch == "main"
 labels:
 appstudio.openshift.io/application: ramalama
 appstudio.openshift.io/component: cuda-rag
@@ -26,6 +26,7 @@ spec:
 - name: build-platforms
 value:
 - linux-d160-m2xlarge/amd64
+- linux-d160-m2xlarge/arm64
 - name: dockerfile
 value: container-images/common/Containerfile.rag
 - name: parent-image

View File

@@ -8,8 +8,8 @@ metadata:
 build.appstudio.redhat.com/target_branch: '{{target_branch}}'
 pipelinesascode.tekton.dev/cancel-in-progress: "true"
 pipelinesascode.tekton.dev/max-keep-runs: "3"
-pipelinesascode.tekton.dev/on-cel-expression: event == "pull_request" && target_branch
-== "main"
+pipelinesascode.tekton.dev/on-cel-expression: >-
+event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
 labels:
 appstudio.openshift.io/application: ramalama
 appstudio.openshift.io/component: cuda-whisper-server
@@ -29,6 +29,7 @@ spec:
 - name: build-platforms
 value:
 - linux-m2xlarge/amd64
+- linux-m2xlarge/arm64
 - name: dockerfile
 value: container-images/common/Containerfile.entrypoint
 - name: parent-image

View File

@@ -7,8 +7,8 @@ metadata:
 build.appstudio.redhat.com/target_branch: '{{target_branch}}'
 pipelinesascode.tekton.dev/cancel-in-progress: "false"
 pipelinesascode.tekton.dev/max-keep-runs: "3"
-pipelinesascode.tekton.dev/on-cel-expression: event == "push" && target_branch
-== "main"
+pipelinesascode.tekton.dev/on-cel-expression: >-
+event == "push" && target_branch == "main"
 labels:
 appstudio.openshift.io/application: ramalama
 appstudio.openshift.io/component: cuda-whisper-server
@@ -26,6 +26,7 @@ spec:
 - name: build-platforms
 value:
 - linux-m2xlarge/amd64
+- linux-m2xlarge/arm64
 - name: dockerfile
 value: container-images/common/Containerfile.entrypoint
 - name: parent-image

View File

@@ -8,8 +8,8 @@ metadata:
 build.appstudio.redhat.com/target_branch: '{{target_branch}}'
 pipelinesascode.tekton.dev/cancel-in-progress: "true"
 pipelinesascode.tekton.dev/max-keep-runs: "3"
-pipelinesascode.tekton.dev/on-cel-expression: event == "pull_request" && target_branch
-== "main"
+pipelinesascode.tekton.dev/on-cel-expression: >-
+event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
 labels:
 appstudio.openshift.io/application: ramalama
 appstudio.openshift.io/component: cuda
@@ -29,6 +29,7 @@ spec:
 - name: build-platforms
 value:
 - linux-c4xlarge/amd64
+- linux-c4xlarge/arm64
 - name: dockerfile
 value: container-images/cuda/Containerfile
 pipelineRef:

View File

@@ -7,8 +7,8 @@ metadata:
 build.appstudio.redhat.com/target_branch: '{{target_branch}}'
 pipelinesascode.tekton.dev/cancel-in-progress: "false"
 pipelinesascode.tekton.dev/max-keep-runs: "3"
-pipelinesascode.tekton.dev/on-cel-expression: event == "push" && target_branch
-== "main"
+pipelinesascode.tekton.dev/on-cel-expression: >-
+event == "push" && target_branch == "main"
 labels:
 appstudio.openshift.io/application: ramalama
 appstudio.openshift.io/component: cuda
@@ -26,6 +26,7 @@ spec:
 - name: build-platforms
 value:
 - linux-c4xlarge/amd64
+- linux-c4xlarge/arm64
 - name: dockerfile
 value: container-images/cuda/Containerfile
 pipelineRef:

View File

@@ -52,10 +52,6 @@ spec:
 params:
 - name: image
 value: $(tasks.init.results.bats-image)
-- name: git-url
-value: $(params.git-url)
-- name: git-revision
-value: $(params.git-revision)
 - name: envs
 value:
 - RAMALAMA_IMAGE=$(tasks.init.results.ramalama-image)

View File

@@ -9,10 +9,6 @@ spec:
 description: The platform of the VM to provision
 - name: image
 description: The image to use when setting up the test environment
-- name: git-url
-description: The URL of the source code repository
-- name: git-revision
-description: The revision of the source code to test
 - name: cmd
 description: The command to run
 - name: envs
@@ -38,10 +34,6 @@ spec:
 name: ssh
 workingDir: /var/workdir
 env:
-- name: GIT_URL
-value: $(params.git-url)
-- name: GIT_REVISION
-value: $(params.git-revision)
 - name: TEST_IMAGE
 value: $(params.image)
 - name: TEST_CMD
@@ -57,13 +49,7 @@ spec:
 }
 log Install packages
-dnf -y install openssh-clients rsync git-core jq
-log Clone source
-git clone -n "$GIT_URL" source
-pushd source
-git checkout "$GIT_REVISION"
-popd
+dnf -y install openssh-clients rsync jq
 log Prepare connection
@@ -107,7 +93,7 @@ spec:
 --security-opt label=disable \
 --security-opt unmask=/proc/* \
 --device /dev/net/tun \
--v \$PWD/source:/src \
+--device /dev/fuse \
 ${PODMAN_ENV[*]} \
 $TEST_IMAGE $TEST_CMD
 SCRIPTEOF
@@ -119,7 +105,7 @@ spec:
 export SSH_ARGS="-o StrictHostKeyChecking=no -o ServerAliveInterval=60 -o ServerAliveCountMax=10"
 # ssh once before rsync to retrieve the host key
 ssh $SSH_ARGS "$SSH_HOST" "uname -a"
-rsync -ra scripts source "$SSH_HOST:$BUILD_DIR"
+rsync -ra scripts "$SSH_HOST:$BUILD_DIR"
 ssh $SSH_ARGS "$SSH_HOST" "$BUILD_DIR/scripts/test.sh"
 log End VM exec
 else

View File

@@ -0,0 +1,47 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/pull_request_number: '{{pull_request_number}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "true"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: intel-gpu-llama-server
pipelines.appstudio.openshift.io/type: build
name: intel-gpu-llama-server-on-pull-request
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/intel-gpu-llama-server:on-pr-{{revision}}
- name: image-expires-after
value: 5d
- name: build-platforms
value:
- linux-m2xlarge/amd64
- name: dockerfile
value: container-images/common/Containerfile.entrypoint
- name: parent-image
value: quay.io/redhat-user-workloads/ramalama-tenant/intel-gpu:on-pr-{{revision}}
- name: build-args
value:
- PARENT=quay.io/redhat-user-workloads/ramalama-tenant/intel-gpu:on-pr-{{revision}}
- ENTRYPOINT=/usr/bin/llama-server.sh
pipelineRef:
name: pull-request-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@@ -0,0 +1,44 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "false"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "push" && target_branch == "main"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: intel-gpu-llama-server
pipelines.appstudio.openshift.io/type: build
name: intel-gpu-llama-server-on-push
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/intel-gpu-llama-server:{{revision}}
- name: build-platforms
value:
- linux-m2xlarge/amd64
- name: dockerfile
value: container-images/common/Containerfile.entrypoint
- name: parent-image
value: quay.io/redhat-user-workloads/ramalama-tenant/intel-gpu:{{revision}}
- name: build-args
value:
- PARENT=quay.io/redhat-user-workloads/ramalama-tenant/intel-gpu:{{revision}}
- ENTRYPOINT=/usr/bin/llama-server.sh
pipelineRef:
name: push-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@@ -0,0 +1,47 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/pull_request_number: '{{pull_request_number}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "true"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: intel-gpu-rag
pipelines.appstudio.openshift.io/type: build
name: intel-gpu-rag-on-pull-request
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/intel-gpu-rag:on-pr-{{revision}}
- name: image-expires-after
value: 5d
- name: build-platforms
value:
- linux-d160-m2xlarge/amd64
- name: dockerfile
value: container-images/common/Containerfile.rag
- name: parent-image
value: quay.io/redhat-user-workloads/ramalama-tenant/intel-gpu:on-pr-{{revision}}
- name: build-args
value:
- PARENT=quay.io/redhat-user-workloads/ramalama-tenant/intel-gpu:on-pr-{{revision}}
- GPU=cpu
pipelineRef:
name: pull-request-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@@ -0,0 +1,44 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "false"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "push" && target_branch == "main"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: intel-gpu-rag
pipelines.appstudio.openshift.io/type: build
name: intel-gpu-rag-on-push
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/intel-gpu-rag:{{revision}}
- name: build-platforms
value:
- linux-d160-m2xlarge/amd64
- name: dockerfile
value: container-images/common/Containerfile.rag
- name: parent-image
value: quay.io/redhat-user-workloads/ramalama-tenant/intel-gpu:{{revision}}
- name: build-args
value:
- PARENT=quay.io/redhat-user-workloads/ramalama-tenant/intel-gpu:{{revision}}
- GPU=cpu
pipelineRef:
name: push-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@@ -0,0 +1,47 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/pull_request_number: '{{pull_request_number}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "true"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: intel-gpu-whisper-server
pipelines.appstudio.openshift.io/type: build
name: intel-gpu-whisper-server-on-pull-request
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/intel-gpu-whisper-server:on-pr-{{revision}}
- name: image-expires-after
value: 5d
- name: build-platforms
value:
- linux-m2xlarge/amd64
- name: dockerfile
value: container-images/common/Containerfile.entrypoint
- name: parent-image
value: quay.io/redhat-user-workloads/ramalama-tenant/intel-gpu:on-pr-{{revision}}
- name: build-args
value:
- PARENT=quay.io/redhat-user-workloads/ramalama-tenant/intel-gpu:on-pr-{{revision}}
- ENTRYPOINT=/usr/bin/whisper-server.sh
pipelineRef:
name: pull-request-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@@ -0,0 +1,44 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "false"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "push" && target_branch == "main"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: intel-gpu-whisper-server
pipelines.appstudio.openshift.io/type: build
name: intel-gpu-whisper-server-on-push
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/intel-gpu-whisper-server:{{revision}}
- name: build-platforms
value:
- linux-m2xlarge/amd64
- name: dockerfile
value: container-images/common/Containerfile.entrypoint
- name: parent-image
value: quay.io/redhat-user-workloads/ramalama-tenant/intel-gpu:{{revision}}
- name: build-args
value:
- PARENT=quay.io/redhat-user-workloads/ramalama-tenant/intel-gpu:{{revision}}
- ENTRYPOINT=/usr/bin/whisper-server.sh
pipelineRef:
name: push-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@@ -0,0 +1,41 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/pull_request_number: '{{pull_request_number}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "true"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: intel-gpu
pipelines.appstudio.openshift.io/type: build
name: intel-gpu-on-pull-request
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/intel-gpu:on-pr-{{revision}}
- name: image-expires-after
value: 5d
- name: build-platforms
value:
- linux-c4xlarge/amd64
- name: dockerfile
value: container-images/intel-gpu/Containerfile
pipelineRef:
name: pull-request-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@@ -0,0 +1,38 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "false"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "push" && target_branch == "main"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: intel-gpu
pipelines.appstudio.openshift.io/type: build
name: intel-gpu-on-push
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/intel-gpu:{{revision}}
- name: build-platforms
value:
- linux-c4xlarge/amd64
- name: dockerfile
value: container-images/intel-gpu/Containerfile
pipelineRef:
name: push-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@@ -0,0 +1,42 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/pull_request_number: '{{pull_request_number}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "true"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: llama-stack
pipelines.appstudio.openshift.io/type: build
name: llama-stack-on-pull-request
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/llama-stack:on-pr-{{revision}}
- name: image-expires-after
value: 5d
- name: build-platforms
value:
- linux-c4xlarge/amd64
- linux-c4xlarge/arm64
- name: dockerfile
value: container-images/llama-stack/Containerfile
pipelineRef:
name: pull-request-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@@ -0,0 +1,39 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "false"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "push" && target_branch == "main"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: llama-stack
pipelines.appstudio.openshift.io/type: build
name: llama-stack-on-push
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/llama-stack:{{revision}}
- name: build-platforms
value:
- linux-c4xlarge/amd64
- linux-c4xlarge/arm64
- name: dockerfile
value: container-images/llama-stack/Containerfile
pipelineRef:
name: push-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@ -0,0 +1,47 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/pull_request_number: '{{pull_request_number}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "true"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: musa-llama-server
pipelines.appstudio.openshift.io/type: build
name: musa-llama-server-on-pull-request
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/musa-llama-server:on-pr-{{revision}}
- name: image-expires-after
value: 5d
- name: build-platforms
value:
- linux-m2xlarge/amd64
- name: dockerfile
value: container-images/common/Containerfile.entrypoint
- name: parent-image
value: quay.io/redhat-user-workloads/ramalama-tenant/musa:on-pr-{{revision}}
- name: build-args
value:
- PARENT=quay.io/redhat-user-workloads/ramalama-tenant/musa:on-pr-{{revision}}
- ENTRYPOINT=/usr/bin/llama-server.sh
pipelineRef:
name: pull-request-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@ -0,0 +1,44 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "false"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "push" && target_branch == "main"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: musa-llama-server
pipelines.appstudio.openshift.io/type: build
name: musa-llama-server-on-push
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/musa-llama-server:{{revision}}
- name: build-platforms
value:
- linux-m2xlarge/amd64
- name: dockerfile
value: container-images/common/Containerfile.entrypoint
- name: parent-image
value: quay.io/redhat-user-workloads/ramalama-tenant/musa:{{revision}}
- name: build-args
value:
- PARENT=quay.io/redhat-user-workloads/ramalama-tenant/musa:{{revision}}
- ENTRYPOINT=/usr/bin/llama-server.sh
pipelineRef:
name: push-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'
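
Note: the two musa-llama-server runs above build no dedicated Containerfile; they rebase the shared container-images/common/Containerfile.entrypoint onto the musa parent image via the PARENT and ENTRYPOINT build args. A rough local equivalent, assuming podman is installed and using an illustrative parent tag:

```
import subprocess

# Sketch only: CI passes the revision-tagged parent produced by the musa
# pipeline; "quay.io/ramalama/musa" here is an illustrative stand-in.
subprocess.run(
    [
        "podman", "build",
        "-f", "container-images/common/Containerfile.entrypoint",
        "--build-arg", "PARENT=quay.io/ramalama/musa",
        "--build-arg", "ENTRYPOINT=/usr/bin/llama-server.sh",
        "-t", "musa-llama-server",
        ".",
    ],
    check=True,
)
```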

View File

@ -0,0 +1,47 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/pull_request_number: '{{pull_request_number}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "true"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: musa-rag
pipelines.appstudio.openshift.io/type: build
name: musa-rag-on-pull-request
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/musa-rag:on-pr-{{revision}}
- name: image-expires-after
value: 5d
- name: build-platforms
value:
- linux-d160-m2xlarge/amd64
- name: dockerfile
value: container-images/common/Containerfile.rag
- name: parent-image
value: quay.io/redhat-user-workloads/ramalama-tenant/musa:on-pr-{{revision}}
- name: build-args
value:
- PARENT=quay.io/redhat-user-workloads/ramalama-tenant/musa:on-pr-{{revision}}
- GPU=musa
pipelineRef:
name: pull-request-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@ -0,0 +1,44 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "false"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "push" && target_branch == "main"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: musa-rag
pipelines.appstudio.openshift.io/type: build
name: musa-rag-on-push
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/musa-rag:{{revision}}
- name: build-platforms
value:
- linux-d160-m2xlarge/amd64
- name: dockerfile
value: container-images/common/Containerfile.rag
- name: parent-image
value: quay.io/redhat-user-workloads/ramalama-tenant/musa:{{revision}}
- name: build-args
value:
- PARENT=quay.io/redhat-user-workloads/ramalama-tenant/musa:{{revision}}
- GPU=musa
pipelineRef:
name: push-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@ -0,0 +1,47 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/pull_request_number: '{{pull_request_number}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "true"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: musa-whisper-server
pipelines.appstudio.openshift.io/type: build
name: musa-whisper-server-on-pull-request
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/musa-whisper-server:on-pr-{{revision}}
- name: image-expires-after
value: 5d
- name: build-platforms
value:
- linux-m2xlarge/amd64
- name: dockerfile
value: container-images/common/Containerfile.entrypoint
- name: parent-image
value: quay.io/redhat-user-workloads/ramalama-tenant/musa:on-pr-{{revision}}
- name: build-args
value:
- PARENT=quay.io/redhat-user-workloads/ramalama-tenant/musa:on-pr-{{revision}}
- ENTRYPOINT=/usr/bin/whisper-server.sh
pipelineRef:
name: pull-request-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@ -0,0 +1,44 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "false"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "push" && target_branch == "main"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: musa-whisper-server
pipelines.appstudio.openshift.io/type: build
name: musa-whisper-server-on-push
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/musa-whisper-server:{{revision}}
- name: build-platforms
value:
- linux-m2xlarge/amd64
- name: dockerfile
value: container-images/common/Containerfile.entrypoint
- name: parent-image
value: quay.io/redhat-user-workloads/ramalama-tenant/musa:{{revision}}
- name: build-args
value:
- PARENT=quay.io/redhat-user-workloads/ramalama-tenant/musa:{{revision}}
- ENTRYPOINT=/usr/bin/whisper-server.sh
pipelineRef:
name: push-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@ -0,0 +1,41 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/pull_request_number: '{{pull_request_number}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "true"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: musa
pipelines.appstudio.openshift.io/type: build
name: musa-on-pull-request
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/musa:on-pr-{{revision}}
- name: image-expires-after
value: 5d
- name: build-platforms
value:
- linux-c4xlarge/amd64
- name: dockerfile
value: container-images/musa/Containerfile
pipelineRef:
name: pull-request-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@ -0,0 +1,38 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "false"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "push" && target_branch == "main"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: musa
pipelines.appstudio.openshift.io/type: build
name: musa-on-push
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/musa:{{revision}}
- name: build-platforms
value:
- linux-c4xlarge/amd64
- name: dockerfile
value: container-images/musa/Containerfile
pipelineRef:
name: push-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@ -0,0 +1,41 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/pull_request_number: '{{pull_request_number}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "true"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: openvino
pipelines.appstudio.openshift.io/type: build
name: openvino-on-pull-request
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/openvino:on-pr-{{revision}}
- name: image-expires-after
value: 5d
- name: build-platforms
value:
- linux-c4xlarge/amd64
- name: dockerfile
value: container-images/openvino/Containerfile
pipelineRef:
name: pull-request-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@ -0,0 +1,38 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "false"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "push" && target_branch == "main"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: openvino
pipelines.appstudio.openshift.io/type: build
name: openvino-on-push
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/openvino:{{revision}}
- name: build-platforms
value:
- linux-c4xlarge/amd64
- name: dockerfile
value: container-images/openvino/Containerfile
pipelineRef:
name: push-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@ -113,7 +113,7 @@ spec:
- name: name
value: init
- name: bundle
value: quay.io/konflux-ci/tekton-catalog/task-init:0.2@sha256:66e90d31e1386bf516fb548cd3e3f0082b5d0234b8b90dbf9e0d4684b70dbe1a
value: quay.io/konflux-ci/tekton-catalog/task-init:0.2@sha256:1d8221c84f91b923d89de50bf16481ea729e3b68ea04a9a7cbe8485ddbb27ee6
- name: kind
value: task
resolver: bundles
@ -163,7 +163,7 @@ spec:
- name: name
value: prefetch-dependencies-oci-ta
- name: bundle
value: quay.io/konflux-ci/tekton-catalog/task-prefetch-dependencies-oci-ta:0.2@sha256:f10a4841e6f75fbb314b1d8cbf14f652499c1fe7f59e59aed59f7431c680aa17
value: quay.io/konflux-ci/tekton-catalog/task-prefetch-dependencies-oci-ta:0.2@sha256:092491ac0f6e1009d10c58a1319d1029371bf637cc1293cceba53c6da5314ed1
- name: kind
value: task
resolver: bundles
@ -225,7 +225,7 @@ spec:
- name: name
value: buildah-remote-oci-ta
- name: bundle
value: quay.io/konflux-ci/tekton-catalog/task-buildah-remote-oci-ta:0.4@sha256:5b8d51fa889cdac873750904c3fccc0cca1c4f65af16902ebb2b573151f80657
value: quay.io/konflux-ci/tekton-catalog/task-buildah-remote-oci-ta:0.4@sha256:9e866d4d0489a6ab84ae263db416c9f86d2d6117ef4444f495a0e97388ae3ac0
- name: kind
value: task
resolver: bundles
@ -254,7 +254,7 @@ spec:
- name: name
value: build-image-index
- name: bundle
value: quay.io/konflux-ci/tekton-catalog/task-build-image-index:0.1@sha256:846dc9975914f31380ec2712fdbac9df3b06c00a9cc7df678315a7f97145efc2
value: quay.io/konflux-ci/tekton-catalog/task-build-image-index:0.1@sha256:3499772af90aad0d3935629be6d37dd9292195fb629e6f43ec839c7f545a0faa
- name: kind
value: task
resolver: bundles
@ -285,8 +285,6 @@ spec:
params:
- name: image
value: $(params.test-image)@$(tasks.wait-for-test-image.results.digest)
- name: source-artifact
value: $(tasks.prefetch-dependencies.results.SOURCE_ARTIFACT)
- name: envs
value:
- $(params.test-envs[*])
@ -364,7 +362,7 @@ spec:
- name: name
value: clair-scan
- name: bundle
value: quay.io/konflux-ci/tekton-catalog/task-clair-scan:0.2@sha256:d354939892f3a904223ec080cc3771bd11931085a5d202323ea491ee8e8c5e43
value: quay.io/konflux-ci/tekton-catalog/task-clair-scan:0.2@sha256:417f44117f8d87a4a62fea6589b5746612ac61640b454dbd88f74892380411f2
- name: kind
value: task
resolver: bundles
@ -384,7 +382,7 @@ spec:
- name: name
value: ecosystem-cert-preflight-checks
- name: bundle
value: quay.io/konflux-ci/tekton-catalog/task-ecosystem-cert-preflight-checks:0.2@sha256:abbe195626eec925288df6425679559025d1be4af5ae70ca6dbbcb49ad3bf08b
value: quay.io/konflux-ci/tekton-catalog/task-ecosystem-cert-preflight-checks:0.2@sha256:f99d2bdb02f13223d494077a2cde31418d09369f33c02134a8e7e5fad2f61eda
- name: kind
value: task
resolver: bundles
@ -410,7 +408,7 @@ spec:
- name: name
value: sast-snyk-check-oci-ta
- name: bundle
value: quay.io/konflux-ci/tekton-catalog/task-sast-snyk-check-oci-ta:0.4@sha256:e61f541189b30d14292ef8df36ccaf13f7feb2378fed5f74cb6293b3e79eb687
value: quay.io/konflux-ci/tekton-catalog/task-sast-snyk-check-oci-ta:0.4@sha256:fe5e5ba3a72632cd505910de2eacd62c9d11ed570c325173188f8d568ac60771
- name: kind
value: task
resolver: bundles
@ -432,7 +430,7 @@ spec:
- name: name
value: clamav-scan
- name: bundle
value: quay.io/konflux-ci/tekton-catalog/task-clamav-scan:0.2@sha256:9cab95ac9e833d77a63c079893258b73b8d5a298d93aaf9bdd6722471bc2f338
value: quay.io/konflux-ci/tekton-catalog/task-clamav-scan:0.2@sha256:7749146f7e4fe530846f1b15c9366178ec9f44776ef1922a60d3e7e2b8c6426b
- name: kind
value: task
resolver: bundles
@ -477,7 +475,7 @@ spec:
- name: name
value: sast-coverity-check-oci-ta
- name: bundle
value: quay.io/konflux-ci/tekton-catalog/task-sast-coverity-check-oci-ta:0.3@sha256:c926568ce63e4f63e18bb6a4178caca2e8192f6e3b830bbcd354e6485d29458c
value: quay.io/konflux-ci/tekton-catalog/task-sast-coverity-check-oci-ta:0.3@sha256:f9ca942208dc2e63b479384ccc56a611cc793397ecc837637b5b9f89c2ecbefe
- name: kind
value: task
resolver: bundles
@ -524,7 +522,7 @@ spec:
- name: name
value: sast-shell-check-oci-ta
- name: bundle
value: quay.io/konflux-ci/tekton-catalog/task-sast-shell-check-oci-ta:0.1@sha256:808bcaf75271db6a999f53fdefb973a385add94a277d37fbd3df68f8ac7dfaa3
value: quay.io/konflux-ci/tekton-catalog/task-sast-shell-check-oci-ta:0.1@sha256:bf7bdde00b7212f730c1356672290af6f38d070da2c8a316987b5c32fd49e0b9
- name: kind
value: task
resolver: bundles
@ -595,7 +593,7 @@ spec:
- name: name
value: push-dockerfile-oci-ta
- name: bundle
value: quay.io/konflux-ci/tekton-catalog/task-push-dockerfile-oci-ta:0.1@sha256:5d8013b6a27bbc5e4ff261144616268f28417ed0950d583ef36349fcd59d3d3d
value: quay.io/konflux-ci/tekton-catalog/task-push-dockerfile-oci-ta:0.1@sha256:8c75c4a747e635e5f3e12266a3bb6e5d3132bf54e37eaa53d505f89897dd8eca
- name: kind
value: task
resolver: bundles
@ -631,7 +629,7 @@ spec:
- name: name
value: show-sbom
- name: bundle
value: quay.io/konflux-ci/tekton-catalog/task-show-sbom:0.1@sha256:1b1df4da95966d08ac6a5b8198710e09e68b5c2cdc707c37d9d19769e65884b2
value: quay.io/konflux-ci/tekton-catalog/task-show-sbom:0.1@sha256:86c069cac0a669797e8049faa8aa4088e70ff7fcd579d5bdc37626a9e0488a05
- name: kind
value: task
resolver: bundles

View File

@ -113,7 +113,7 @@ spec:
- name: name
value: init
- name: bundle
value: quay.io/konflux-ci/tekton-catalog/task-init:0.2@sha256:66e90d31e1386bf516fb548cd3e3f0082b5d0234b8b90dbf9e0d4684b70dbe1a
value: quay.io/konflux-ci/tekton-catalog/task-init:0.2@sha256:1d8221c84f91b923d89de50bf16481ea729e3b68ea04a9a7cbe8485ddbb27ee6
- name: kind
value: task
resolver: bundles
@ -163,7 +163,7 @@ spec:
- name: name
value: prefetch-dependencies-oci-ta
- name: bundle
value: quay.io/konflux-ci/tekton-catalog/task-prefetch-dependencies-oci-ta:0.2@sha256:f10a4841e6f75fbb314b1d8cbf14f652499c1fe7f59e59aed59f7431c680aa17
value: quay.io/konflux-ci/tekton-catalog/task-prefetch-dependencies-oci-ta:0.2@sha256:092491ac0f6e1009d10c58a1319d1029371bf637cc1293cceba53c6da5314ed1
- name: kind
value: task
resolver: bundles
@ -225,7 +225,7 @@ spec:
- name: name
value: buildah-remote-oci-ta
- name: bundle
value: quay.io/konflux-ci/tekton-catalog/task-buildah-remote-oci-ta:0.4@sha256:5b8d51fa889cdac873750904c3fccc0cca1c4f65af16902ebb2b573151f80657
value: quay.io/konflux-ci/tekton-catalog/task-buildah-remote-oci-ta:0.4@sha256:9e866d4d0489a6ab84ae263db416c9f86d2d6117ef4444f495a0e97388ae3ac0
- name: kind
value: task
resolver: bundles
@ -254,7 +254,7 @@ spec:
- name: name
value: build-image-index
- name: bundle
value: quay.io/konflux-ci/tekton-catalog/task-build-image-index:0.1@sha256:846dc9975914f31380ec2712fdbac9df3b06c00a9cc7df678315a7f97145efc2
value: quay.io/konflux-ci/tekton-catalog/task-build-image-index:0.1@sha256:3499772af90aad0d3935629be6d37dd9292195fb629e6f43ec839c7f545a0faa
- name: kind
value: task
resolver: bundles
@ -285,8 +285,6 @@ spec:
params:
- name: image
value: $(params.test-image)@$(tasks.wait-for-test-image.results.digest)
- name: source-artifact
value: $(tasks.prefetch-dependencies.results.SOURCE_ARTIFACT)
- name: envs
value:
- $(params.test-envs[*])
@ -364,7 +362,7 @@ spec:
- name: name
value: clair-scan
- name: bundle
value: quay.io/konflux-ci/tekton-catalog/task-clair-scan:0.2@sha256:d354939892f3a904223ec080cc3771bd11931085a5d202323ea491ee8e8c5e43
value: quay.io/konflux-ci/tekton-catalog/task-clair-scan:0.2@sha256:417f44117f8d87a4a62fea6589b5746612ac61640b454dbd88f74892380411f2
- name: kind
value: task
resolver: bundles
@ -384,7 +382,7 @@ spec:
- name: name
value: ecosystem-cert-preflight-checks
- name: bundle
value: quay.io/konflux-ci/tekton-catalog/task-ecosystem-cert-preflight-checks:0.2@sha256:abbe195626eec925288df6425679559025d1be4af5ae70ca6dbbcb49ad3bf08b
value: quay.io/konflux-ci/tekton-catalog/task-ecosystem-cert-preflight-checks:0.2@sha256:f99d2bdb02f13223d494077a2cde31418d09369f33c02134a8e7e5fad2f61eda
- name: kind
value: task
resolver: bundles
@ -410,7 +408,7 @@ spec:
- name: name
value: sast-snyk-check-oci-ta
- name: bundle
value: quay.io/konflux-ci/tekton-catalog/task-sast-snyk-check-oci-ta:0.4@sha256:e61f541189b30d14292ef8df36ccaf13f7feb2378fed5f74cb6293b3e79eb687
value: quay.io/konflux-ci/tekton-catalog/task-sast-snyk-check-oci-ta:0.4@sha256:fe5e5ba3a72632cd505910de2eacd62c9d11ed570c325173188f8d568ac60771
- name: kind
value: task
resolver: bundles
@ -432,7 +430,7 @@ spec:
- name: name
value: clamav-scan
- name: bundle
value: quay.io/konflux-ci/tekton-catalog/task-clamav-scan:0.2@sha256:9cab95ac9e833d77a63c079893258b73b8d5a298d93aaf9bdd6722471bc2f338
value: quay.io/konflux-ci/tekton-catalog/task-clamav-scan:0.2@sha256:7749146f7e4fe530846f1b15c9366178ec9f44776ef1922a60d3e7e2b8c6426b
- name: kind
value: task
resolver: bundles
@ -477,7 +475,7 @@ spec:
- name: name
value: sast-coverity-check-oci-ta
- name: bundle
value: quay.io/konflux-ci/tekton-catalog/task-sast-coverity-check-oci-ta:0.3@sha256:c926568ce63e4f63e18bb6a4178caca2e8192f6e3b830bbcd354e6485d29458c
value: quay.io/konflux-ci/tekton-catalog/task-sast-coverity-check-oci-ta:0.3@sha256:f9ca942208dc2e63b479384ccc56a611cc793397ecc837637b5b9f89c2ecbefe
- name: kind
value: task
resolver: bundles
@ -524,7 +522,7 @@ spec:
- name: name
value: sast-shell-check-oci-ta
- name: bundle
value: quay.io/konflux-ci/tekton-catalog/task-sast-shell-check-oci-ta:0.1@sha256:808bcaf75271db6a999f53fdefb973a385add94a277d37fbd3df68f8ac7dfaa3
value: quay.io/konflux-ci/tekton-catalog/task-sast-shell-check-oci-ta:0.1@sha256:bf7bdde00b7212f730c1356672290af6f38d070da2c8a316987b5c32fd49e0b9
- name: kind
value: task
resolver: bundles
@ -595,7 +593,7 @@ spec:
- name: name
value: push-dockerfile-oci-ta
- name: bundle
value: quay.io/konflux-ci/tekton-catalog/task-push-dockerfile-oci-ta:0.1@sha256:5d8013b6a27bbc5e4ff261144616268f28417ed0950d583ef36349fcd59d3d3d
value: quay.io/konflux-ci/tekton-catalog/task-push-dockerfile-oci-ta:0.1@sha256:8c75c4a747e635e5f3e12266a3bb6e5d3132bf54e37eaa53d505f89897dd8eca
- name: kind
value: task
resolver: bundles
@ -631,7 +629,7 @@ spec:
- name: name
value: show-sbom
- name: bundle
value: quay.io/konflux-ci/tekton-catalog/task-show-sbom:0.1@sha256:1b1df4da95966d08ac6a5b8198710e09e68b5c2cdc707c37d9d19769e65884b2
value: quay.io/konflux-ci/tekton-catalog/task-show-sbom:0.1@sha256:86c069cac0a669797e8049faa8aa4088e70ff7fcd579d5bdc37626a9e0488a05
- name: kind
value: task
resolver: bundles
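
Each task bundle above is pinned by both a tag and a digest; the digest is what actually resolves, so these updates move the pinned task content even though the tags stay put. A small parser for the reference format, using a digest copied from this diff:

```
# repo:tag@sha256:digest, as used by the bundles resolver above.
ref = ("quay.io/konflux-ci/tekton-catalog/task-init:0.2"
       "@sha256:1d8221c84f91b923d89de50bf16481ea729e3b68ea04a9a7cbe8485ddbb27ee6")

name_and_tag, digest = ref.split("@", 1)
repo, tag = name_and_tag.rsplit(":", 1)
print(repo)    # quay.io/konflux-ci/tekton-catalog/task-init
print(tag)     # 0.2
print(digest)  # sha256:1d8221...
```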

View File

@ -0,0 +1,42 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/pull_request_number: '{{pull_request_number}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "true"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: ramalama-cli
pipelines.appstudio.openshift.io/type: build
name: ramalama-cli-on-pull-request
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/ramalama-cli:on-pr-{{revision}}
- name: image-expires-after
value: 5d
- name: build-platforms
value:
- linux-c4xlarge/amd64
- linux-c4xlarge/arm64
- name: dockerfile
value: container-images/ramalama-cli/Containerfile
pipelineRef:
name: pull-request-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@ -0,0 +1,39 @@
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
annotations:
build.appstudio.openshift.io/repo: https://github.com/containers/ramalama?rev={{revision}}
build.appstudio.redhat.com/commit_sha: '{{revision}}'
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "false"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "push" && target_branch == "main"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: ramalama-cli
pipelines.appstudio.openshift.io/type: build
name: ramalama-cli-on-push
namespace: ramalama-tenant
spec:
params:
- name: git-url
value: '{{source_url}}'
- name: revision
value: '{{revision}}'
- name: output-image
value: quay.io/redhat-user-workloads/ramalama-tenant/ramalama-cli:{{revision}}
- name: build-platforms
value:
- linux-c4xlarge/amd64
- linux-c4xlarge/arm64
- name: dockerfile
value: container-images/ramalama-cli/Containerfile
pipelineRef:
name: push-pipeline
timeouts:
pipeline: 6h
workspaces:
- name: git-auth
secret:
secretName: '{{ git_auth_secret }}'

View File

@ -8,8 +8,8 @@ metadata:
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "true"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: event == "pull_request" && target_branch
== "main"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: ramalama-llama-server
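
These hunks replace an annotation value that YAML had line-wrapped with an explicit `>-` folded block scalar, which joins the wrapped lines with single spaces and drops the trailing newline, so the CEL expression reaches Pipelines-as-Code as one line. A quick check of that behavior, assuming PyYAML is available:

```
import yaml  # PyYAML, assumed available

snippet = """
on-cel-expression: >-
  event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
"""
print(yaml.safe_load(snippet)["on-cel-expression"])
# event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
```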

View File

@ -7,8 +7,8 @@ metadata:
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "false"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: event == "push" && target_branch
== "main"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "push" && target_branch == "main"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: ramalama-llama-server

View File

@ -8,8 +8,8 @@ metadata:
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "true"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: event == "pull_request" && target_branch
== "main"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: ramalama-rag

View File

@ -7,8 +7,8 @@ metadata:
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "false"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: event == "push" && target_branch
== "main"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "push" && target_branch == "main"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: ramalama-rag

View File

@ -8,8 +8,8 @@ metadata:
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "true"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: event == "pull_request" && target_branch
== "main"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: ramalama-whisper-server

View File

@ -7,8 +7,8 @@ metadata:
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "false"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: event == "push" && target_branch
== "main"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "push" && target_branch == "main"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: ramalama-whisper-server

View File

@ -8,8 +8,8 @@ metadata:
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "true"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: event == "pull_request" && target_branch
== "main"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: ramalama

View File

@ -7,8 +7,8 @@ metadata:
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "false"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: event == "push" && target_branch
== "main"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "push" && target_branch == "main"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: ramalama

View File

@ -8,8 +8,8 @@ metadata:
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "true"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: event == "pull_request" && target_branch
== "main"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: rocm-llama-server

View File

@ -7,8 +7,8 @@ metadata:
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "false"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: event == "push" && target_branch
== "main"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "push" && target_branch == "main"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: rocm-llama-server

View File

@ -8,8 +8,8 @@ metadata:
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "true"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: event == "pull_request" && target_branch
== "main"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: rocm-rag

View File

@ -7,8 +7,8 @@ metadata:
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "false"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: event == "push" && target_branch
== "main"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "push" && target_branch == "main"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: rocm-rag

View File

@ -8,8 +8,8 @@ metadata:
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "true"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: event == "pull_request" && target_branch
== "main"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: rocm-ubi-llama-server

View File

@ -7,8 +7,8 @@ metadata:
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "false"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: event == "push" && target_branch
== "main"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "push" && target_branch == "main"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: rocm-ubi-llama-server

View File

@ -8,8 +8,8 @@ metadata:
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "true"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: event == "pull_request" && target_branch
== "main"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: rocm-ubi-rag

View File

@ -7,8 +7,8 @@ metadata:
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "false"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: event == "push" && target_branch
== "main"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "push" && target_branch == "main"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: rocm-ubi-rag

View File

@ -8,8 +8,8 @@ metadata:
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "true"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: event == "pull_request" && target_branch
== "main"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: rocm-ubi-whisper-server

View File

@ -7,8 +7,8 @@ metadata:
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "false"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: event == "push" && target_branch
== "main"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "push" && target_branch == "main"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: rocm-ubi-whisper-server

View File

@ -8,8 +8,8 @@ metadata:
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "true"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: event == "pull_request" && target_branch
== "main"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: rocm-ubi

View File

@ -7,8 +7,8 @@ metadata:
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "false"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: event == "push" && target_branch
== "main"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "push" && target_branch == "main"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: rocm-ubi

View File

@ -8,8 +8,8 @@ metadata:
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "true"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: event == "pull_request" && target_branch
== "main"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: rocm-whisper-server

View File

@ -7,8 +7,8 @@ metadata:
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "false"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: event == "push" && target_branch
== "main"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "push" && target_branch == "main"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: rocm-whisper-server

View File

@ -8,8 +8,8 @@ metadata:
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "true"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: event == "pull_request" && target_branch
== "main"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "pull_request" && target_branch == "main" && body.action != "ready_for_review"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: rocm

View File

@ -7,8 +7,8 @@ metadata:
build.appstudio.redhat.com/target_branch: '{{target_branch}}'
pipelinesascode.tekton.dev/cancel-in-progress: "false"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression: event == "push" && target_branch
== "main"
pipelinesascode.tekton.dev/on-cel-expression: >-
event == "push" && target_branch == "main"
labels:
appstudio.openshift.io/application: ramalama
appstudio.openshift.io/component: rocm

View File

@ -7,58 +7,29 @@ spec:
params:
- name: image
description: The image to use when setting up the test environment.
- name: source-artifact
description: The Trusted Artifact URI pointing to the artifact with the application source code.
- name: cmd
description: The command to run.
- name: envs
description: List of environment variables (NAME=VALUE) to be set in the test environment.
type: array
default: []
volumes:
- name: workdir
emptyDir: {}
stepTemplate:
volumeMounts:
- mountPath: /var/workdir
name: workdir
steps:
- name: run
image: $(params.image)
computeResources:
limits:
memory: 4Gi
requests:
cpu: "1"
memory: 1Gi
steps:
- name: use-trusted-artifact
image: quay.io/konflux-ci/build-trusted-artifacts:latest@sha256:4689f88dd253bd1feebf57f1a76a5a751880f739000719cd662bbdc76990a7fd
args:
- use
- $(params.source-artifact)=/var/workdir/source
- name: set-env
image: $(params.image)
workingDir: /var/workdir/source
args:
- $(params.envs[*])
script: |
#!/bin/bash -e
rm -f .bashenv
while [ $# -ne 0 ]; do
echo "$1" >> .bashenv
shift
done
- name: run
image: $(params.image)
securityContext:
capabilities:
add:
- SETFCAP
workingDir: /var/workdir/source
env:
- name: BASH_ENV
value: .bashenv
command:
- /usr/bin/entrypoint.sh
args:
- $(params.envs[*])
- /bin/bash
- -ex
- -c

View File

@ -168,7 +168,7 @@ bats-image:
podman inspect $(BATS_IMAGE) &> /dev/null || \
podman build -t $(BATS_IMAGE) -f container-images/bats/Containerfile .
bats-in-container: extra-opts = --security-opt unmask=/proc/* --device /dev/net/tun
bats-in-container: extra-opts = --security-opt unmask=/proc/* --device /dev/net/tun --device /dev/fuse
%-in-container: bats-image
podman run -it --rm \

View File

@ -2,14 +2,12 @@
<img src="https://github.com/user-attachments/assets/1a338ecf-dc84-4495-8c70-16882955da47" width=50%>
</p>
[RamaLama](https://ramalama.ai) is an open-source tool that simplifies the local use and serving of AI models for inference from any source through the familiar approach of containers.
[RamaLama](https://ramalama.ai) strives to make working with AI simple, straightforward, and familiar by using OCI containers.
<br>
<br>
## Description
RamaLama strives to make working with AI simple, straightforward, and familiar by using OCI containers.
RamaLama is an open-source tool that simplifies the local use and serving of AI models for inference from any source through the familiar approach of containers. Using a container engine like Podman, engineers can use container-centric development patterns and benefits to extend to AI use cases.
RamaLama is an open-source tool that simplifies the local use and serving of AI models for inference from any source through the familiar approach of containers. It allows engineers to use container-centric development patterns and benefits to extend to AI use cases.
RamaLama eliminates the need to configure the host system by instead pulling a container image specific to the GPUs discovered on the host system, and allowing you to work with various models and platforms.
@ -23,6 +21,25 @@ RamaLama eliminates the need to configure the host system by instead pulling a c
- Interact with models via REST API or as a chatbot.
<br>
## Install
### Install on Fedora
RamaLama is available in [Fedora 40](https://fedoraproject.org/) and later. To install it, run:
```
sudo dnf install python3-ramalama
```
### Install via PyPI
RamaLama is available via PyPI at [https://pypi.org/project/ramalama](https://pypi.org/project/ramalama)
```
pip install ramalama
```
### Install script (Linux and macOS)
Install RamaLama by running:
```
curl -fsSL https://ramalama.ai/install.sh | bash
```
## Accelerated images
| Accelerator | Image |
@ -103,25 +120,6 @@ pip install mlx-lm
ramalama --runtime=mlx serve hf://mlx-community/Unsloth-Phi-4-4bit
```
## Install
### Install on Fedora
RamaLama is available in [Fedora 40](https://fedoraproject.org/) and later. To install it, run:
```
sudo dnf install ramalama
```
### Install via PyPi
RamaLama is available via PyPi at [https://pypi.org/project/ramalama](https://pypi.org/project/ramalama)
```
pip install ramalama
```
### Install script (Linux and macOS)
Install RamaLama by running:
```
curl -fsSL https://ramalama.ai/install.sh | bash
```
#### Default Container Engine
When both Podman and Docker are installed, RamaLama defaults to Podman. The `RAMALAMA_CONTAINER_ENGINE=docker` environment variable can override this behaviour. When neither are installed, RamaLama will attempt to run the model with software on the local system.
<br>

View File

@ -1,12 +1,11 @@
FROM quay.io/fedora/fedora:42
ENV HOME=/tmp \
XDG_RUNTIME_DIR=/tmp \
STORAGE_DRIVER=vfs
XDG_RUNTIME_DIR=/tmp
WORKDIR /src
ENTRYPOINT ["/usr/bin/entrypoint.sh"]
RUN dnf -y install make bats jq iproute podman openssl httpd-tools \
RUN dnf -y install make bats jq iproute podman openssl httpd-tools diffutils \
python3-huggingface-hub \
$([ $(uname -m) == "x86_64" ] && echo ollama) \
# for validate and unit-tests
@ -26,4 +25,6 @@ RUN git clone --depth=1 https://github.com/ggml-org/llama.cpp && \
COPY container-images/bats/entrypoint.sh /usr/bin
COPY container-images/bats/containers.conf /etc/containers
COPY . /src
RUN chmod -R a+rw /src
RUN chmod a+rw /etc/subuid /etc/subgid

View File

@ -3,6 +3,16 @@
echo "$(id -un):10000:2000" > /etc/subuid
echo "$(id -un):10000:2000" > /etc/subgid
while [ $# -gt 0 ]; do
if [[ "$1" =~ = ]]; then
# shellcheck disable=SC2163
export "$1"
shift
else
break
fi
done
if [ $# -gt 0 ]; then
exec "$@"
else
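
A rough Python equivalent of the argument handling above, for illustration only (the real logic is the bash in container-images/bats/entrypoint.sh): leading NAME=VALUE arguments become environment variables, and whatever follows is exec'd.

```
import os
import sys

def split_env_args(argv):
    # Split leading NAME=VALUE pairs from the command that follows them.
    for i, arg in enumerate(argv):
        if "=" not in arg:
            return dict(a.split("=", 1) for a in argv[:i]), argv[i:]
    return dict(a.split("=", 1) for a in argv), []

env, cmd = split_env_args(sys.argv[1:])
os.environ.update(env)
if cmd:
    os.execvp(cmd[0], cmd)  # like `exec "$@"`
```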

View File

@ -1,4 +1,4 @@
FROM registry.access.redhat.com/ubi9/ubi:9.6-1752069608
FROM registry.access.redhat.com/ubi9/ubi:9.6-1752625787
# Install Python development dependencies
RUN dnf install -y python3-devel wget compat-openssl11 python3-jinja2 python3-markupsafe

View File

@ -0,0 +1,17 @@
FROM quay.io/ramalama/ramalama
ENV PATH="/root/.local/bin:$PATH"
ENV VIRTUAL_ENV="/opt/venv"
ENV UV_PYTHON_INSTALL_DIR="/opt/uv/python"
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
ENV UV_HTTP_TIMEOUT=500
ENV UV_INDEX_STRATEGY="unsafe-best-match"
ENV UV_LINK_MODE="copy"
COPY . /src/ramalama
WORKDIR /src/ramalama
RUN container-images/scripts/build-vllm.sh
WORKDIR /

View File

@ -1,4 +1,4 @@
FROM registry.access.redhat.com/ubi9/ubi:9.6-1752069608
FROM registry.access.redhat.com/ubi9/ubi:9.6-1752625787
COPY container-images/rocm-ubi/amdgpu.repo /etc/yum.repos.d/
COPY container-images/rocm-ubi/rocm.repo /etc/yum.repos.d/

View File

@ -0,0 +1,90 @@
#!/bin/bash
available() {
command -v "$1" >/dev/null
}
install_deps() {
set -eux -o pipefail
if available dnf; then
dnf install -y git curl wget ca-certificates gcc gcc-c++ \
gperftools-libs numactl-devel ffmpeg libSM libXext mesa-libGL jq lsof \
vim numactl
dnf -y clean all
rm -rf /var/cache/*dnf*
elif available apt-get; then
apt-get update -y
apt-get install -y --no-install-recommends git curl wget ca-certificates \
gcc g++ libtcmalloc-minimal4 libnuma-dev ffmpeg libsm6 libxext6 libgl1 \
jq lsof vim numactl
rm -rf /var/lib/apt/lists/*
fi
curl -LsSf https://astral.sh/uv/0.7.21/install.sh | bash
}
preload_and_ulimit() {
local ld_preload_file="libtcmalloc_minimal.so.4"
local ld_preload_file_1="/usr/lib/$arch-linux-gnu/$ld_preload_file"
local ld_preload_file_2="/usr/lib64/$ld_preload_file"
if [ -e "$ld_preload_file_1" ]; then
ld_preload_file="$ld_preload_file_1"
elif [ -e "$ld_preload_file_2" ]; then
ld_preload_file="$ld_preload_file_2"
fi
if [ -e "$ld_preload_file" ]; then
echo "LD_PRELOAD=$ld_preload_file" >> /etc/environment
fi
echo 'ulimit -c 0' >> ~/.bashrc
}
pip_install() {
local url="https://download.pytorch.org/whl/cpu"
uv pip install -v -r "$1" --extra-index-url $url
}
git_clone_specific_commit() {
local repo="${vllm_url##*/}"
git init "$repo"
cd "$repo"
git remote add origin "$vllm_url"
git fetch --depth 1 origin $commit
git reset --hard $commit
}
main() {
set -eux -o pipefail
install_deps
local arch
arch=$(uname -m)
preload_and_ulimit
uv venv --python 3.12 --seed "$VIRTUAL_ENV"
uv pip install --upgrade pip
local vllm_url="https://github.com/vllm-project/vllm"
local commit="ac9fb732a5c0b8e671f8c91be8b40148282bb14a"
git_clone_specific_commit
if [ "$arch" == "x86_64" ]; then
export VLLM_CPU_DISABLE_AVX512="0"
export VLLM_CPU_AVX512BF16="0"
export VLLM_CPU_AVX512VNNI="0"
elif [ "$arch" == "aarch64" ]; then
export VLLM_CPU_DISABLE_AVX512="true"
fi
pip_install requirements/cpu-build.txt
pip_install requirements/cpu.txt
MAX_JOBS=2 VLLM_TARGET_DEVICE=cpu python3 setup.py install
cd -
rm -rf vllm /root/.cache
}
main "$@"
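
git_clone_specific_commit avoids a full clone by fetching just the pinned commit at depth 1; GitHub generally permits fetching a full commit SHA this way. A rough Python rendering, using the same URL and commit the script pins:

```
import subprocess

def clone_at_commit(url: str, commit: str) -> None:
    repo = url.rstrip("/").rsplit("/", 1)[-1]  # like ${vllm_url##*/}
    subprocess.run(["git", "init", repo], check=True)
    subprocess.run(["git", "remote", "add", "origin", url], cwd=repo, check=True)
    subprocess.run(["git", "fetch", "--depth", "1", "origin", commit], cwd=repo, check=True)
    subprocess.run(["git", "reset", "--hard", commit], cwd=repo, check=True)

clone_at_commit("https://github.com/vllm-project/vllm",
                "ac9fb732a5c0b8e671f8c91be8b40148282bb14a")
```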

View File

@ -99,10 +99,10 @@ is_rhel_based() { # doesn't include openEuler
dnf_install_mesa() {
if [ "${ID}" = "fedora" ]; then
dnf copr enable -y slp/mesa-libkrun-vulkan
dnf install -y mesa-vulkan-drivers-25.0.7-100.fc42 "${vulkan_rpms[@]}"
dnf install -y mesa-vulkan-drivers-25.0.7-100.fc42 virglrenderer "${vulkan_rpms[@]}"
dnf versionlock add mesa-vulkan-drivers-25.0.7-100.fc42
else
dnf install -y mesa-vulkan-drivers "${vulkan_rpms[@]}"
dnf install -y mesa-vulkan-drivers virglrenderer "${vulkan_rpms[@]}"
fi
rm_non_ubi_repos

View File

@ -40,7 +40,18 @@ update_python() {
}
docling() {
${python} -m pip install --prefix=/usr docling docling-core accelerate --extra-index-url https://download.pytorch.org/whl/"$1"
case $1 in
cuda)
PYTORCH_DIR="cu128"
;;
rocm)
PYTORCH_DIR="rocm6.3"
;;
*)
PYTORCH_DIR="cpu"
;;
esac
${python} -m pip install --prefix=/usr docling docling-core accelerate --extra-index-url "https://download.pytorch.org/whl/$PYTORCH_DIR"
# Preloads models (assumes its installed from container_build.sh)
doc2rag load
}
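
For reference, a standalone sketch of the new GPU-to-wheel-index mapping in docling():

```
PYTORCH_DIRS = {"cuda": "cu128", "rocm": "rocm6.3"}

def pytorch_index_url(gpu: str) -> str:
    # Anything that is not cuda or rocm falls back to the cpu wheel index.
    return f"https://download.pytorch.org/whl/{PYTORCH_DIRS.get(gpu, 'cpu')}"

assert pytorch_index_url("cuda") == "https://download.pytorch.org/whl/cu128"
assert pytorch_index_url("vulkan") == "https://download.pytorch.org/whl/cpu"
```
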
@ -51,6 +62,8 @@ rag() {
}
to_gguf() {
# required to build under GCC 15 until a new release is available, see https://github.com/google/sentencepiece/issues/1108 for details
export CXXFLAGS="-include cstdint"
${python} -m pip install --prefix=/usr "numpy~=1.26.4" "sentencepiece~=0.2.0" "transformers>=4.45.1,<5.0.0" git+https://github.com/ggml-org/llama.cpp#subdirectory=gguf-py "protobuf>=4.21.0,<5.0.0"
}
@ -60,6 +73,9 @@ main() {
# shellcheck disable=SC1091
source /etc/os-release
# caching in a container build is unhelpful, and can cause errors
export PIP_NO_CACHE_DIR=1
local arch
arch="$(uname -m)"
local gpu="${1-cpu}"
@ -67,18 +83,14 @@ main() {
python=$(python_version)
local pkgs
if available dnf; then
pkgs=("git-core" "gcc" "gcc-c++")
pkgs=("git-core" "gcc" "gcc-c++" "cmake")
else
pkgs=("git" "gcc" "g++")
pkgs=("git" "gcc" "g++" "cmake")
fi
if [ "${gpu}" = "cuda" ]; then
pkgs+=("libcudnn9-devel-cuda-12" "libcusparselt0" "cuda-cupti-12-*")
fi
if [[ "$ID" = "fedora" && "$VERSION_ID" -ge 42 ]] ; then
pkgs+=("python3-sentencepiece-0.2.0")
fi
update_python
to_gguf

View File

@ -62,7 +62,7 @@ add_rag() {
tag=$tag-rag
containerfile="container-images/common/Containerfile.rag"
GPU=cpu
case $2 in
case "${2##*/}" in
cuda)
GPU=cuda
;;
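
The new case key, `${2##*/}`, strips everything up to and including the last slash, so the match is against the bare image name even when the full repo path is passed. The Python equivalent, with an illustrative path:

```
repo_path = "quay.io/redhat-user-workloads/ramalama-tenant/cuda"  # illustrative
image_name = repo_path.rsplit("/", 1)[-1]  # same effect as ${2##*/}
assert image_name == "cuda"
```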

View File

@ -110,7 +110,7 @@ log_cli_date_format = "%Y-%m-%d %H:%M:%S"
include = ["ramalama", "ramalama.*"]
[tool.setuptools.data-files]
"share/ramalama" = ["shortnames/shortnames.conf"]
"share/ramalama" = ["shortnames/shortnames.conf", "docs/ramalama.conf"]
"share/man/man1" = ["docs/*.1"]
"share/man/man5" = ["docs/*.5"]
"share/man/man7" = ["docs/*.7"]

View File

@ -133,6 +133,10 @@ def get_parser():
def init_cli():
"""Initialize the RamaLama CLI and parse command line arguments."""
# Need to know if we're running with --dryrun or --generate before adding the subcommands,
# otherwise calls to accel_image() when setting option defaults will cause unnecessary image pulls.
if any(arg in ("--dryrun", "--dry-run", "--generate") or arg.startswith("--generate=") for arg in sys.argv[1:]):
CONFIG.dryrun = True
parser = get_parser()
args = parse_arguments(parser)
post_parse_setup(args)
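
The early scan at the top of init_cli(), shown standalone: it has to run before subcommands are added because computing option defaults calls accel_image(), which may otherwise pull an image. A minimal sketch:

```
def wants_dryrun(argv: list[str]) -> bool:
    # Mirrors the check above: any of the dry-run/generate flags anywhere
    # in the arguments enables dry-run mode for config purposes.
    return any(
        arg in ("--dryrun", "--dry-run", "--generate") or arg.startswith("--generate=")
        for arg in argv
    )

assert wants_dryrun(["--dryrun", "run", "tiny"])
assert wants_dryrun(["--generate=kube", "serve", "tiny"])
assert not wants_dryrun(["run", "tiny"])
```
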
@ -703,7 +707,7 @@ def _get_source_model(args):
smodel = New(src, args)
if smodel.type == "OCI":
raise ValueError(f"converting from an OCI based image {src} is not supported")
if not smodel.exists():
if not smodel.exists() and not args.dryrun:
smodel.pull(args)
return smodel
@ -1004,6 +1008,8 @@ def serve_parser(subparsers):
def _get_rag(args):
if os.path.exists(args.rag):
return
if args.pull == "never" or args.dryrun:
return
model = New(args.rag, args=args, transport="oci")
if not model.exists():
model.pull(args)

View File

@ -127,7 +127,7 @@ def exec_cmd(args, stdout2null: bool = False, stderr2null: bool = False):
raise
def run_cmd(args, cwd=None, stdout=subprocess.PIPE, ignore_stderr=False, ignore_all=False):
def run_cmd(args, cwd=None, stdout=subprocess.PIPE, ignore_stderr=False, ignore_all=False, encoding=None):
"""
Run the given command arguments.
@ -137,6 +137,7 @@ def run_cmd(args, cwd=None, stdout=subprocess.PIPE, ignore_stderr=False, ignore_
stdout: standard output configuration
ignore_stderr: if True, ignore standard error
ignore_all: if True, ignore both standard output and standard error
encoding: encoding to apply to the result text
"""
logger.debug(f"run_cmd: {quoted(args)}")
logger.debug(f"Working directory: {cwd}")
@ -151,7 +152,7 @@ def run_cmd(args, cwd=None, stdout=subprocess.PIPE, ignore_stderr=False, ignore_
if ignore_all:
sout = subprocess.DEVNULL
result = subprocess.run(args, check=True, cwd=cwd, stdout=sout, stderr=serr)
result = subprocess.run(args, check=True, cwd=cwd, stdout=sout, stderr=serr, encoding=encoding)
logger.debug(f"Command finished with return code: {result.returncode}")
return result
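
With the new encoding parameter, callers can receive text directly instead of decoding bytes themselves. A minimal usage sketch; the import path is assumed from context:

```
from ramalama.common import run_cmd  # import path assumed from this diff

result = run_cmd(["uname", "-m"], encoding="utf-8")
print(result.stdout.strip())  # already str, no .decode() needed
```
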
@ -225,34 +226,72 @@ def engine_version(engine: SUPPORTED_ENGINES) -> str:
return run_cmd(cmd_args).stdout.decode("utf-8").strip()
def resolve_cdi(spec_dirs: List[str]):
"""Loads all CDI specs from the given directories."""
for spec_dir in spec_dirs:
for root, _, files in os.walk(spec_dir):
for file in files:
if file.endswith('.json') or file.endswith('.yaml'):
if load_spec(os.path.join(root, file)):
return True
def load_cdi_yaml(stream) -> dict:
# Returns a dict containing just the "devices" key, whose value is
# a list of dicts, each mapping the key "name" to a device name.
# For example: {'devices': [{'name': 'all'}]}
# This depends on the key "name" being unique to the list of dicts
# under "devices" and the value of the "name" key being on the
# same line following a colon.
return False
def yaml_safe_load(stream) -> dict:
data = {}
data = {"devices": []}
for line in stream:
if ':' in line:
key, value = line.split(':', 1)
data[key.strip()] = value.strip()
if key.strip() == "name":
data['devices'].append({'name': value.strip().strip('"')})
return data
def load_spec(path: str):
"""Loads a single CDI spec file."""
with open(path, 'r') as f:
spec = json.load(f) if path.endswith('.json') else yaml_safe_load(f)
def load_cdi_config(spec_dirs: List[str]) -> dict | None:
# Loads the first YAML or JSON CDI configuration file found in the
# given directories.
return spec.get('kind')
for spec_dir in spec_dirs:
for root, _, files in os.walk(spec_dir):
for file in files:
_, ext = os.path.splitext(file)
file_path = os.path.join(root, file)
if ext in [".yaml", ".yml"]:
try:
with open(file_path, "r") as stream:
return load_cdi_yaml(stream)
except OSError:
continue
elif ext == ".json":
try:
with open(file_path, "r") as stream:
return json.load(stream)
except json.JSONDecodeError:
continue
except UnicodeDecodeError:
continue
except OSError:
continue
return None
def find_in_cdi(devices: List[str]) -> tuple[List[str], List[str]]:
# Attempts to find a CDI configuration for each device in devices
# and returns a list of configured devices and a list of
# unconfigured devices.
cdi = load_cdi_config(['/etc/cdi', '/var/run/cdi'])
cdi_devices = cdi.get("devices", []) if cdi else []
cdi_device_names = [name for cdi_device in cdi_devices if (name := cdi_device.get("name"))]
configured = []
unconfigured = []
for device in devices:
if device in cdi_device_names:
configured.append(device)
# A device can be specified by a prefix of the uuid
elif device.startswith("GPU") and any(name.startswith(device) for name in cdi_device_names):
configured.append(device)
else:
perror(f"Device {device} does not have a CDI configuration")
unconfigured.append(device)
return configured, unconfigured
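
A sketch of what load_cdi_yaml() recovers from a generated CDI spec (file contents trimmed and illustrative; the import path is assumed):

```
import io
from ramalama.common import load_cdi_yaml  # import path assumed from this diff

# Shape of a generated CDI spec; the minimal parser only recovers the
# device names, as its comment above explains.
spec = io.StringIO(
    'cdiVersion: "0.5.0"\n'
    "kind: nvidia.com/gpu\n"
    "devices:\n"
    "- containerEdits: {}\n"
    '  name: "0"\n'
    "- containerEdits: {}\n"
    "  name: all\n"
)
print(load_cdi_yaml(spec))
# {'devices': [{'name': '0'}, {'name': 'all'}]}
```
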
def check_asahi() -> Literal["asahi"] | None:
@ -278,27 +317,41 @@ def check_metal(args: ContainerArgType) -> bool:
 @lru_cache(maxsize=1)
 def check_nvidia() -> Literal["cuda"] | None:
     try:
-        command = ['nvidia-smi']
-        run_cmd(command).stdout.decode("utf-8")
+        command = ['nvidia-smi', '--query-gpu=index,uuid', '--format=csv,noheader']
+        result = run_cmd(command, encoding="utf-8")
+    except OSError:
+        return None

-        # ensure at least one CDI device resolves
-        if resolve_cdi(['/etc/cdi', '/var/run/cdi']):
-            if "CUDA_VISIBLE_DEVICES" not in os.environ:
-                dev_command = ['nvidia-smi', '--query-gpu=index', '--format=csv,noheader']
-                try:
-                    result = run_cmd(dev_command)
-                    output = result.stdout.decode("utf-8").strip()
-                    if not output:
-                        raise ValueError("nvidia-smi returned empty GPU indices")
-                    devices = ','.join(output.split('\n'))
-                except Exception:
-                    devices = "0"
-                os.environ["CUDA_VISIBLE_DEVICES"] = devices
-            return "cuda"
-    except Exception:
-        pass
+    smi_lines = result.stdout.splitlines()
+    parsed_lines = [[item.strip() for item in line.split(',')] for line in smi_lines if line]
+    if not parsed_lines:
+        return None
+
+    indices, uuids = zip(*parsed_lines) if parsed_lines else (tuple(), tuple())
+
+    # Get the list of devices specified by CUDA_VISIBLE_DEVICES, if any
+    cuda_visible_devices = os.environ.get("CUDA_VISIBLE_DEVICES", "")
+    visible_devices = cuda_visible_devices.split(',') if cuda_visible_devices else []
+    for device in visible_devices:
+        if device not in indices and not any(uuid.startswith(device) for uuid in uuids):
+            perror(f"{device} not found")
+            return None
+
+    configured, unconfigured = find_in_cdi(visible_devices + ["all"])
+
+    if unconfigured and "all" not in configured:
+        perror(f"No CDI configuration found for {','.join(unconfigured)}")
+        perror("You can use the \"nvidia-ctk cdi generate\" command from the ")
+        perror("nvidia-container-toolkit to generate a CDI configuration.")
+        perror("See ramalama-cuda(7).")
+        return None
+    elif configured:
+        if "all" in configured:
+            configured.remove("all")
+        if not configured:
+            configured = indices
+        os.environ["CUDA_VISIBLE_DEVICES"] = ','.join(configured)
+        return "cuda"
+
     return None
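
The net effect of the rewrite: a single nvidia-smi call yields both indices and UUIDs, any CUDA_VISIBLE_DEVICES entry is validated against them instead of silently defaulting to "0", and a CDI configuration becomes a hard requirement for selecting the cuda backend. A behavioral sketch with hypothetical UUIDs (check_nvidia is cached by lru_cache, so this is illustrative, not a test):

    import os

    # Suppose nvidia-smi reports:
    #   0, GPU-5ce2c041
    #   1, GPU-8f1e6c4a
    os.environ["CUDA_VISIBLE_DEVICES"] = "1"
    backend = check_nvidia()
    # With a CDI entry for "1" (or "all"): backend == "cuda", env stays "1".
    # With CUDA_VISIBLE_DEVICES="7": perror("7 not found"), backend is None.
    # With no CDI spec at all: the nvidia-ctk hint is printed, backend is None.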
@@ -350,6 +403,9 @@ def check_intel() -> Literal["intel"] | None:
     intel_gpus = (
         b"0xe20b",
         b"0xe20c",
+        b"0x46a6",
+        b"0x46a8",
+        b"0x46aa",
         b"0x56a0",
         b"0x56a1",
         b"0x7d51",
@@ -566,7 +622,7 @@ def accel_image(config: Config) -> str:
     vers = minor_release()

-    should_pull = config.pull in ["always", "missing"]
+    should_pull = config.pull in ["always", "missing"] and not config.dryrun
     if attempt_to_use_versioned(config.engine, image, vers, True, should_pull):
         return f"{image}:{vers}"


@@ -98,6 +98,7 @@ class BaseConfig:
     default_image: str = DEFAULT_IMAGE
     user: UserConfig = field(default_factory=UserConfig)
     selinux: bool = False
+    dryrun: bool = False
     settings: RamalamaSettings = field(default_factory=RamalamaSettings)

     def __post_init__(self):
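
With the new field, --dryrun threads through the config object and disables image pulls (see the accel_image hunk above). A sketch, assuming BaseConfig can be constructed with defaults and carries the pull field used there:

    cfg = BaseConfig(dryrun=True)
    should_pull = cfg.pull in ["always", "missing"] and not cfg.dryrun
    # should_pull is always False during --dryrun, whatever cfg.pull says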


@@ -216,7 +216,11 @@ def containers(args):
     conman_args = [conman, "ps", "-a", "--filter", "label=ai.ramalama"]
     if getattr(args, "noheading", False):
-        conman_args += ["--noheading"]
+        if conman == "docker" and not args.format:
+            # implement --noheading by using --format
+            conman_args += ["--format={{.ID}} {{.Image}} {{.Command}} {{.CreatedAt}} {{.Status}} {{.Ports}} {{.Names}}"]
+        else:
+            conman_args += ["--noheading"]

     if getattr(args, "notrunc", False):
         conman_args += ["--no-trunc"]


@@ -15,7 +15,7 @@ from ramalama.model_store.snapshot_file import SnapshotFileType

 missing_huggingface = """
 Optional: Huggingface models require the huggingface-cli module.
-This module can be installed via PyPi tools like uv, pip, pip3, pipx, or via
+This module can be installed via PyPI tools like uv, pip, pip3, pipx, or via
 distribution package managers like dnf or apt. Example:
    uv pip install huggingface_hub
 """


@@ -12,7 +12,7 @@ from ramalama.model_store.snapshot_file import SnapshotFileType

 missing_modelscope = """
 Optional: ModelScope models require the modelscope module.
-This module can be installed via PyPi tools like uv, pip, pip3, pipx, or via
+This module can be installed via PyPI tools like uv, pip, pip3, pipx, or via
 distribution package managers like dnf or apt. Example:
    uv pip install modelscope
 """


@@ -31,9 +31,10 @@ BuildRequires: make
 BuildRequires: python3-devel
 BuildRequires: podman
 BuildRequires: python3-pytest
+BuildRequires: mailcap

 Provides: python3-ramalama = %{version}-%{release}
-Obsoletes: python3-ramalama < 0.10.1-2
+Obsoletes: python3-ramalama < 0.11.0-1

 Requires: podman
@@ -55,14 +56,12 @@ will run the AI Models within a container based on the OCI image.
 %forgeautosetup -p1

 %build
-make docs
 %pyproject_wheel
+%{__make} docs

 %install
 %pyproject_install
 %pyproject_save_files -l %{pypi_name}
-%{__make} DESTDIR=%{buildroot} PREFIX=%{_prefix} install-docs install-shortnames
+%{__make} DESTDIR=%{buildroot} PREFIX=%{_prefix} install-completions

 %check
 %pytest -v test/unit


@@ -34,9 +34,9 @@
 "merlinite-lab:7b" = "huggingface://instructlab/merlinite-7b-lab-GGUF/merlinite-7b-lab-Q4_K_M.gguf"
 "merlinite-lab-7b" = "huggingface://instructlab/merlinite-7b-lab-GGUF/merlinite-7b-lab-Q4_K_M.gguf"
 "tiny" = "ollama://tinyllama"
-"mistral" = "huggingface://TheBloke/Mistral-7B-Instruct-v0.2-GGUF/mistral-7b-instruct-v0.2.Q4_K_M.gguf"
-"mistral:7b" = "huggingface://TheBloke/Mistral-7B-Instruct-v0.2-GGUF/mistral-7b-instruct-v0.2.Q4_K_M.gguf"
-"mistral:7b-v3" = "huggingface://MaziyarPanahi/Mistral-7B-Instruct-v0.3-GGUF/Mistral-7B-Instruct-v0.3.Q4_K_M.gguf"
+"mistral" = "hf://lmstudio-community/Mistral-7B-Instruct-v0.3-GGUF/Mistral-7B-Instruct-v0.3-Q4_K_M.gguf"
+"mistral:7b" = "hf://lmstudio-community/Mistral-7B-Instruct-v0.3-GGUF/Mistral-7B-Instruct-v0.3-Q4_K_M.gguf"
+"mistral:7b-v3" = "hf://lmstudio-community/Mistral-7B-Instruct-v0.3-GGUF/Mistral-7B-Instruct-v0.3-Q4_K_M.gguf"
 "mistral:7b-v2" = "huggingface://TheBloke/Mistral-7B-Instruct-v0.2-GGUF/mistral-7b-instruct-v0.2.Q4_K_M.gguf"
 "mistral:7b-v1" = "huggingface://TheBloke/Mistral-7B-Instruct-v0.1-GGUF/mistral-7b-instruct-v0.1.Q5_K_M.gguf"
 "mistral-small3.1" = "hf://bartowski/mistralai_Mistral-Small-3.1-24B-Instruct-2503-GGUF/mistralai_Mistral-Small-3.1-24B-Instruct-2503-IQ2_M.gguf"


@@ -14,7 +14,7 @@ EOF

     if is_container; then
         run_ramalama info
-        conman=$(jq .Engine.Name <<< $output | tr -d '"' )
+        conman=$(jq -r .Engine.Name <<< $output)
         verify_begin="${conman} run --rm"

         run_ramalama -q --dryrun run ${MODEL}
@@ -140,7 +140,7 @@ EOF
     skip_if_docker
     run_ramalama 22 run --image bogus --pull=never tiny
     is "$output" ".*Error: bogus: image not known"
-    run_ramalama 125 run --image bogus1 --rag quay.io/ramalama/testrag --pull=never tiny
+    run_ramalama 125 run --image bogus1 --rag quay.io/ramalama/rag --pull=never tiny
     is "$output" ".*Error: bogus1: image not known"
 }
@@ -148,13 +148,10 @@ EOF
     skip_if_nocontainer
     skip_if_darwin
     skip_if_docker
-    run_ramalama 125 --dryrun run --rag quay.io/ramalama/rag --pull=never tiny
-    is "$output" "Error: quay.io/ramalama/rag: image not known.*"
-    run_ramalama --dryrun run --rag quay.io/ramalama/testrag --pull=never tiny
+    run_ramalama --dryrun run --rag quay.io/ramalama/rag --pull=never tiny
     is "$output" ".*quay.io/ramalama/.*-rag:"
-    run_ramalama --dryrun run --image quay.io/ramalama/ramalama:1.0 --rag quay.io/ramalama/testrag --pull=never tiny
+    run_ramalama --dryrun run --image quay.io/ramalama/ramalama:1.0 --rag quay.io/ramalama/rag --pull=never tiny
     is "$output" ".*quay.io/ramalama/ramalama:1.0"
 }


@@ -115,7 +115,6 @@ verify_begin=".*run --rm"
 }

 @test "ramalama serve and stop" {
-    skip "Seems to cause race conditions"
     skip_if_nocontainer

     model=ollama://smollm:135m
@@ -123,17 +122,15 @@ verify_begin=".*run --rm"
     container2=c_$(safename)

     run_ramalama serve --name ${container1} --detach ${model}
     cid="$output"
-    run_ramalama info
-    conmon=$(jq .Engine <<< $output)
-    run -0 ${conman} inspect1 $cid
-
-    run_ramalama ps
-    is "$output" ".*${container1}" "list correct for container1"
-
-    run_ramalama chat --ls
-    is "$output" "ollama://smollm:135m" "list of models available correct"
+    run_ramalama ps --format '{{.Ports}}'
+    port=${output: -8:4}
+
+    run_ramalama chat --ls --url http://127.0.0.1:${port}/v1
+    is "$output" "smollm:135m" "list of models available correct"

     run_ramalama containers --noheading
     is "$output" ".*${container1}" "list correct for container1"
@@ -151,7 +148,6 @@ verify_begin=".*run --rm"
 }

 @test "ramalama --detach serve multiple" {
-    skip "Seems to cause race conditions"
     skip_if_nocontainer

     model=ollama://smollm:135m
@@ -349,7 +345,7 @@ verify_begin=".*run --rm"
     rm /tmp/$name.yaml
 }

-@test "ramalama serve --api llama-stack --generate=kube:/tmp" {
+@test "ramalama serve --api llama-stack" {
     skip_if_docker
     skip_if_nocontainer
     model=tiny
@@ -393,7 +389,7 @@ verify_begin=".*run --rm"

     run_ramalama 125 serve --image bogus --pull=never tiny
     is "$output" "Error: bogus: image not known"
-    run_ramalama 125 serve --image bogus1 --rag quay.io/ramalama/testrag --pull=never tiny
+    run_ramalama 125 serve --image bogus1 --rag quay.io/ramalama/rag --pull=never tiny
     is "$output" ".*Error: bogus1: image not known"
 }
@@ -402,13 +398,10 @@ verify_begin=".*run --rm"
     skip_if_darwin
     skip_if_docker
     run_ramalama ? stop ${name}
-    run_ramalama ? --dryrun serve --rag quay.io/ramalama/rag --pull=never tiny
-    is "$output" ".*Error: quay.io/ramalama/rag: image not known"
-    run_ramalama --dryrun serve --rag quay.io/ramalama/testrag --pull=never tiny
+    run_ramalama --dryrun serve --rag quay.io/ramalama/rag --pull=never tiny
     is "$output" ".*quay.io/ramalama/.*-rag:"
-    run_ramalama --dryrun serve --image quay.io/ramalama/ramalama:1.0 --rag quay.io/ramalama/testrag --pull=never tiny
+    run_ramalama --dryrun serve --image quay.io/ramalama/ramalama:1.0 --rag quay.io/ramalama/rag --pull=never tiny
     is "$output" ".*quay.io/ramalama/ramalama:1.0"
 }

Some files were not shown because too many files have changed in this diff.