This commit adds `sharedscripts`, which ensures that our `postrotate`
script is only run once even if multiple log files in `/shared/log/rails/`
are rotated. Without `sharedscripts`, we send `sv 1 unicorn` once per
rotated log file, which has resulted in odd behaviours such as our Sidekiq
process hanging indefinitely.
Note the following from the manpage for logrotate:
```
sharedscripts
Normally, prerotate and postrotate scripts are run for each log which is rotated and the absolute path to the log file is passed as first argument to the script. That means a single script may be run multiple times for log file entries which match multiple files (such as the /var/log/news/* example). If sharedscripts is specified, the scripts are only run once, no matter how many logs match the wildcarded pattern, and whole pattern is passed to them.
```
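For illustration, a minimal logrotate stanza of the kind this change targets could look like the following; only `sharedscripts` and the `sv 1 unicorn` postrotate come from this change, the remaining directives are placeholders:
```
/shared/log/rails/*.log {
  # rotation frequency and retention below are placeholders
  daily
  rotate 7
  missingok
  compress
  # run the postrotate script once for the whole glob instead of once per file
  sharedscripts
  postrotate
    sv 1 unicorn
  endscript
}
```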
* DEV: Updated vanilla.template.yml
* updated vanilla.template.yml to make the migration process more straightforward
* removed branch pull
* implemented suggested changes
* added suggested changes
* added before_code hook to set remote fork
* updated with suggested changes
Bumping Ruby to 3.3.1 to pull in the latest performance and memory
improvements made to YJIT. On Discourse hosting services running Ruby
3.3.1 + YJIT, we saw an estimated 10-20% improvement in time spent
executing Ruby code over Ruby 3.2.3 + YJIT.
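A quick way to confirm that a container's Ruby is actually running with YJIT (how YJIT is enabled may differ between images, so this is only a sanity check):
```
RUBY_YJIT_ENABLE=1 ruby -v
# => ruby 3.3.1 ... +YJIT ...
RUBY_YJIT_ENABLE=1 ruby -e 'puts RubyVM::YJIT.enabled?'
# => true
```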
In order to download the free MaxMind GeoLite2 databases, an account ID
and license key are required going forward. This commit updates
`discourse-setup` to start prompting the user to provide the MaxMind
Account ID first before asking for the MaxMind license key. If the user
does not provide the Account ID, the script will not prompt for the
license key as we assume the user has opted out.
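For reference, the credentials typically end up as environment variables in `containers/app.yml`; the variable names below are assumptions based on Discourse's MaxMind settings and are not confirmed by this commit:
```
## containers/app.yml (sketch; env var names assumed)
env:
  DISCOURSE_MAXMIND_ACCOUNT_ID: "123456"
  DISCOURSE_MAXMIND_LICENSE_KEY: "your-license-key"
```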
We are aware that we don't have a reliable way to test for changes to
the `discourse-setup` script but it is what it is at this point in time.
We intend to invest resources in improving things in the future but now
is not the time.
This commit adds a `ruby_3_3` job to our GitHub workflow which releases
a `discourse/base:release-ruby-3.3.1` Docker image to allow us to test
Ruby 3.3.1 before eventually changing to that version as the default.
This commit does 2 things:
1. Added a new yarn hook to replace the npm mirror before `yarn install`.
2. Modified `web.china.template.yml` to add more mirror sources.
Below is an explanation of these modifications:
- The GitHub proxy added in `web.china.template.yml` has been in use in China for many years, and its repository https://github.com/hunshcn/gh-proxy has 6k+ stars, which gives some assurance of its security and stability.
- The NPM mirror site added in `web.china.template.yml` is maintained by Alibaba Group, one of the largest Internet companies in China.
- Modified the Gem mirror in `web.china.template.yml` to the mirror provided by Tsinghua University, one of the top universities in China.
- sed is used to rewrite the registry URLs in the `yarn.lock` file because `yarn install --frozen-lockfile` is used for installation below; if the URLs are not replaced, the NPM mirror will not take effect.
After applying these modifications, I successfully installed Discourse on a Tencent Cloud server in China with no further network problems.
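As a rough sketch of the kind of hook this adds (the hook name and mirror URLs below are illustrative rather than copied from the template):
```
## web.china.template.yml (sketch)
hooks:
  before_yarn:
    - exec:
        cd: $home
        cmd:
          # rewrite the locked registry so `yarn install --frozen-lockfile` resolves via the mirror
          - sed -i 's#https://registry.yarnpkg.com#https://registry.npmmirror.com#g' yarn.lock
```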
This commit updates Ruby to 3.2.4 which includes security fixes for the
following CVEs:
* CVE-2024-27282: Arbitrary memory address read vulnerability with Regex search
* CVE-2024-27281: RCE vulnerability with .rdoc_options in RDoc
* CVE-2024-27280: Buffer overread vulnerability in StringIO
* Add tags to pups templates
The purpose here is to allow greater flexibility in how and where
docker images are built and run. It achieves this by breaking up
build steps into distinct run steps which can be saved along the way.
Customizable base images may then be prebuilt with as many batteries
included as possible, with zero environment setup so those images
can then be configured at a later stage.
Add the ability to run a partial pups configuration:
`build`: build base image with no db - ember build.
`precompile`: precompile stage that requires postgres and redis.
`migrate`: run migration tasks.
`db`: start bundled postgres/redis, if included.
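For example, a tagged subset of a template can be run on its own, roughly as follows (the `--tags` flag reflects pups' tag filtering; check your pups version for the exact option name):
```
# Run only the steps tagged `migrate` from a pups config (sketch)
pups --tags "migrate" /path/to/config.yaml
```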
Adds a create_db script in the postgres template for creating the db on the fly.
It is called from the unicorn run command below.
Updates the unicorn run command with 3 env flags:
CREATE_DB_ON_BOOT: if 1, creates base db schema, allows for deferral of creation.
MIGRATE_ON_BOOT: if 1, runs db:migrate - allows for deferral of db migration.
PRECOMPILE_ON_BOOT: if 1, precompiles assets (without ember build).
PRECOMPILE_ON_BOOT initially defaults to 1 in base builds (no tags).
During the `precompile` build step, this updates the default to be 0.
All other new flags default to 0 (off). With these three flags, we're now able
to ship and start a container from a base image, and it'll be able to bootstrap
a blank database.
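Put together, a prebuilt base image can be started roughly like this and will bootstrap its own database on boot (image name is illustrative):
```
# CREATE_DB_ON_BOOT=1  -> create the base db schema on first boot
# MIGRATE_ON_BOOT=1    -> run db:migrate on boot
# PRECOMPILE_ON_BOOT=1 -> precompile assets on boot (without the ember build)
docker run -d \
  -e CREATE_DB_ON_BOOT=1 \
  -e MIGRATE_ON_BOOT=1 \
  -e PRECOMPILE_ON_BOOT=1 \
  local_discourse/app
```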
Updates the hook that starts redis to run as before_db_migrate, as the before_code
hook is not guaranteed to fire before migrate tasks if pups is filtered by tags.
Removing the -p from the "nc" command.
Reason:
```
# nc -w 4 -l -p 80
nc: cannot use -p and -l
```
Without -p it works just fine.
> `-l` Used to specify that nc should listen for an incoming connection rather than initiate a connection to a remote host. It is an error to use this option in conjunction with the -p, -s, or -z options. Additionally, any timeouts specified with the -w option are ignored.
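For reference, the working form of the command is simply:
```
# Listen on the port given as a positional argument; no -p needed with -l
nc -w 4 -l 80
```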
Chrome isn’t available for aarch64 yet, but Chromium (which is basically
the same browser without the proprietary bits from Google) is shipped by
Debian. They also ship a Chrome driver compiled for aarch64.
This patch adds Chromium to our images without removing Chrome on
x86_64, allowing a smooth transition to using Chromium only.
Chrome isn’t available yet for aarch64, but Chromium (which is basically
the same browser without the proprietary bits from Google) is shipped by
Debian. They also ship a Chrome driver compiled for aarch64.
By using Chromium instead of Chrome, we unify how we do things
regardless of the architecture used in the generated image.
Why this change?
Now that we can efficiently build Docker images targeted at `linux/arm64`,
we will start to release images for `linux/arm64` in the same way we do
for `linux/amd64` images.
Images released for `linux/amd64` are tagged as follows:
1. discourse/base:2.0.\<datetime\>-slim
2. discourse/base:slim
3. discourse/base:2.0.\<datetime\>
4. discourse/base:release
For `linux/arm64`, the images are tagged as follows:
1. discourse/base:2.0.\<datetime\>-slim-arm64
2. discourse/base:slim-arm64
3. discourse/base:2.0.\<datetime\>-arm64
4. discourse/base:release-arm64
5. discourse/base:aarch64 (For backwards compatibility)
For `linux/arm64`, we unfortunately cannot install Chrome because Google
does not currently release binaries for that architecture. Therefore, we
install Chromium, which Chrome is based on, and also install the
chromedriver binary for `linux/arm64` released by the Electron project.
Why this change?
We have been given access to GitHub's private beta of ARM hosted
runners. Switching to ARM runners should drastically speed up the time
required for us to build our ARM image.
What does this change do?
1. Switch to using GitHub's ARM hosted runners.
2. Build the release image for arm64 as well. We previously only built the
   slim image because building the release image through emulation was far
   too slow.
3. Update `bundle` in `release.Dockerfile` to install gems in parallel
   based on the number of cores automatically instead of hardcoding it to
   4 jobs, as sketched below.
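The change is roughly of this shape (the actual line in `release.Dockerfile` carries additional options not shown here):
```
# release.Dockerfile (sketch): scale bundler jobs to the available cores instead of a fixed 4
cd /var/www/discourse && bundle install --jobs "$(nproc)"
```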
While x64 is still on jemalloc 3.6, arm64 is using the latest jemalloc.
They have different names for the library file, so we will now use the
symlink to automatically load whichever one is available.
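Assuming the library is still loaded via `LD_PRELOAD` as in earlier images (paths and sonames below are illustrative):
```
# The unversioned symlink hides the soname difference between jemalloc builds
ls -l /usr/lib/libjemalloc.so
# e.g. libjemalloc.so -> libjemalloc.so.1 (jemalloc 3.6) or libjemalloc.so.2 (newer jemalloc)
LD_PRELOAD=/usr/lib/libjemalloc.so ruby -e 'puts "jemalloc loaded"'
```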
The instructions have, for quite some time now, pointed users at the
`discourse-setup` script. That will prompt the user to create a swapfile
if necessary and configure relevant sysctls.
While `DISCOURSE_MAIL_ENDPOINT` is still accepted by the mail-receiver code, the documentation prefers `DISCOURSE_BASE_URL`, and so should this example.
see deae52039f/README.md
The new templates/postgres.15.template.yml file allows bootstrapping
new containers using PostgreSQL version 15, or upgrading an existing
container running on older PostgreSQL versions.
The default postgres template and base image shall be bumped in a
follow-up commit.
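To opt in before the default is bumped, a container definition can reference the new template in place of the current default, for example:
```
## containers/app.yml (sketch)
templates:
  - "templates/postgres.15.template.yml"   # instead of templates/postgres.template.yml
  - "templates/redis.template.yml"
  - "templates/web.template.yml"
```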
On an M3 Max MacBook Pro with 14 cores,
Before:
```
=> [25/44] RUN /tmp/install-imagemagick 150.6s
=> [27/44] RUN /tmp/install-jemalloc 54.9s
=> [31/44] RUN /tmp/install-redis 42.9s
```
After:
```
=> [25/44] RUN /tmp/install-imagemagick 44.4s
=> [27/44] RUN /tmp/install-jemalloc 13.7s
=> [31/44] RUN /tmp/install-redis 11.7s
```
Why this change?
We have noticed that our compiled imagemagick binary is slower than the
distributed binaries in the same environment and started debugging why.
One thing I noticed is that the distributed binaries usually include the
`-O2` gcc compilation flag. When applying it locally, I saw a significant
speed-up.
Without -O2 flag:
```
root@1d7277f72a4f:/# time convert -limit memory 10GiB -limit disk 10GiB -size $(seq 8000 8500 | shuf | head -n1)x9000 xc:"rgb($(shuf -i 0-255 -n1),$(shuf -i 0-255 -n1),$(shuf -i 0-255 -n1))" random_image.png
real 0m3.376s
user 0m6.355s
sys 0m0.410s
root@1d7277f72a4f:/# time identify -format "%Q" random_image.png
92
real 0m1.018s
user 0m0.883s
sys 0m0.135s
```
With -O2 flag:
```
root@0779afa71102:/# time convert -limit memory 10GiB -limit disk 10GiB -size $(seq 8000 8500 | shuf | head -n1)x9000 xc:"rgb($(shuf -i 0-255 -n1),$(shuf -i 0-255 -n1),$(shuf -i 0-255 -n1))" random_image.png
real 0m1.118s
user 0m1.555s
sys 0m1.680s
root@0779afa71102:/# time identify -format "%Q" random_image.png
92
real 0m0.330s
user 0m0.197s
sys 0m0.133s
```
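The fix amounts to passing the optimization flag through to the ImageMagick build, roughly as below (the real install-imagemagick script sets many more configure options than shown):
```
# Sketch: compile ImageMagick with -O2 (other configure options omitted)
CFLAGS="-O2" CXXFLAGS="-O2" ./configure --prefix=/usr/local
make -j"$(nproc)"
make install
```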
This patch adds some new steps to support the aarch64 architecture
on Linux.
An updated version of Rust is needed to compile the `selenium-manager`
binary as it’s not shipped with the `selenium-webdriver` gem yet.
In the same vein, Google doesn’t ship an aarch64 version of Chrome yet,
so it doesn’t make sense to install even Chromium in the image. We have
to rely on Firefox to run the system specs.
Why this change?
In
dec68d780c,
the `plugin:install_all_gems` Rake task was made a noop because the Rake
task itself was flawed, and running a Rake task will actually activate
all plugins, which installs the required gems in the process. However,
plugins are not automatically activated in the test environment which
this image operates in. As such, we need to set `LOAD_PLUGINS=1` when
running the `plugin:install_all_gems` Rake task.
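In the image build this boils down to something like the following (the checkout path is assumed to be the standard `/var/www/discourse` location):
```
# Force plugins to load so their gems are installed in the test image
cd /var/www/discourse
LOAD_PLUGINS=1 bundle exec rake plugin:install_all_gems
```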