Jekyll: don't put {% raw %} directives in pre blocks

Replace all occurrences of

    ```foo
    {% raw %}
    bar
    {% endraw %}
    ```

(which generates spurious empty lines in the rendered pre block: Liquid
strips the raw tags themselves, but the blank lines they occupied remain
inside the fence) with

    {% raw %}
    ```foo
    bar
    ```
    {% endraw %}

Also, fix some occurrences where the raw section was too large and
prevented interpretation of Jekyll directives.

This is the syntax used in the documentation of Jekyll itself:

https://raw.githubusercontent.com/jekyll/jekyll/master/docs/_docs/templates.md

FTR, done with two perl substitutions:

    '^([\t ]*```[^\n]*
    )([ \t]*\{% raw %\}[^\n]*
    )' '$2$1'

and

    '^([ \t]*\{% endraw %\}[^\n]*
    )([\t ]*```[^\n]*
    )' '$2$1'

and manual tweaks.  A mechanical check would be most useful.
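
For reference, the full invocation probably looked something like this
(a hedged reconstruction, not the exact command used: slurp mode and the
`git ls-files` file selection are assumptions):

    # Swap "fence, raw" into "raw, fence", and "endraw, fence" into
    # "fence, endraw", across all tracked Markdown files.
    perl -0777 -pi -e '
      s/^([\t ]*```[^\n]*\n)([ \t]*\{% raw %\}[^\n]*\n)/$2$1/gm;
      s/^([ \t]*\{% endraw %\}[^\n]*\n)([\t ]*```[^\n]*\n)/$2$1/gm;
    ' $(git ls-files '*.md')

A minimal sketch of such a mechanical check, assuming GNU grep with PCRE
support (-P); it only flags the two bad layouts rewritten by this commit,
so treat it as a heuristic rather than a complete linter:

    # grep succeeds when a {% raw %} still sits right after an opening
    # fence, or an {% endraw %} right before a closing fence; fail then.
    if grep -RzlP --include='*.md' \
         '(?m)^[\t ]*```[^\n]*\n[ \t]*\{% raw %\}|^[ \t]*\{% endraw %\}[^\n]*\n[\t ]*```' . ; then
      echo 'error: {% raw %} directives inside pre blocks' >&2
      exit 1
    fi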

Signed-off-by: Akim Demaille <akim.demaille@docker.com>
Akim Demaille, 2018-03-13 10:13:23 +01:00, committed by Joao Fernandes
parent d99b2b4852
commit 14b53b68c3
33 changed files with 90 additions and 95 deletions

View File

@@ -82,13 +82,13 @@ To find the current logging driver for a running container, if the daemon
 is using the `json-file` logging driver, run the following `docker inspect`
 command, substituting the container name or ID for `<CONTAINER>`:
-```bash
 {% raw %}
+```bash
 $ docker inspect -f '{{.HostConfig.LogConfig.Type}}' <CONTAINER>
 json-file
-{% endraw %}
 ```
+{% endraw %}
 ## Configure the delivery mode of log messages from container to log driver

View File

@@ -83,8 +83,8 @@ The path to the root certificate and Common Name is specified using an HTTPS
 scheme. This is used for verification. The `SplunkServerDefaultCert` is
 automatically generated by Splunk certificates.
-```bash
 {% raw %}
+```bash
 $ docker run --log-driver=splunk \
 --log-opt splunk-token=176FCEBF-4CF5-4EDF-91BC-703796522D20 \
 --log-opt splunk-url=https://splunkhost:8088 \
@@ -96,8 +96,8 @@ $ docker run --log-driver=splunk \
 --env "TEST=false" \
 --label location=west \
 your/application
-{% endraw %}
 ```
+{% endraw %}
 The `splunk-url` for Splunk instances hosted on Splunk Cloud is in a format
 like `https://http-inputs-XXXXXXXX.splunkcloud.com` and does not include a

View File

@@ -49,8 +49,8 @@ DTR replica to check the DTR internal state.
 Use SSH to log into a node that is running a DTR replica, and run the following
 commands:
-```bash
 {% raw %}
+```bash
 # REPLICA_ID will be the replica ID for the current node.
 REPLICA_ID=$(docker ps -lf name='^/dtr-rethinkdb-.{12}$' --format '{{.Names}}' | cut -d- -f3)
 # This command will start a RethinkDB client attached to the database
@@ -59,8 +59,8 @@ docker run -it --rm \
 --net dtr-ol \
 -v dtr-ca-$REPLICA_ID:/ca dockerhubenterprise/rethinkcli:v2.2.0 \
 $REPLICA_ID
-{% endraw %}
 ```
+{% endraw %}
 This container connects to the local DTR replica and launches a RethinkDB client
 that can be used to inspect the contents of the DB. RethinkDB

View File

@@ -66,12 +66,11 @@ you can backup the images by using ssh to log into a node where DTR is running,
 and creating a tar archive of the [dtr-registry volume](../architecture.md):
 ```none
-{% raw %}
 sudo tar -cf {{ image_backup_file }} \
-$(dirname $(docker volume inspect --format '{{.Mountpoint}}' dtr-registry-<replica-id>))
-{% endraw %}
+{% raw %}$(dirname $(docker volume inspect --format '{{.Mountpoint}}' dtr-registry-<replica-id>)){% endraw %}
 ```
 If you're using a different storage backend, follow the best practices
 recommended for that system.

View File

@@ -51,16 +51,16 @@ same images.
 To check how much space your images are taking in the local filesystem, you
 can ssh into the node where DTR is deployed and run:
-```
 {% raw %}
+```
 # Find the path to the volume
 docker volume inspect dtr-registry-<replica-id>
 # Check the disk usage
 sudo du -hs \
 $(dirname $(docker volume inspect --format '{{.Mountpoint}}' dtr-registry-<dtr-replica>))
-{% endraw %}
 ```
+{% endraw %}
 ## NFS

View File

@@ -49,8 +49,8 @@ DTR replica to check the DTR internal state.
 Use SSH to log into a node that is running a DTR replica, and run the following
 commands:
-```bash
 {% raw %}
+```bash
 # REPLICA_ID will be the replica ID for the current node.
 REPLICA_ID=$(docker ps -lf name='^/dtr-rethinkdb-.{12}$' --format '{{.Names}}' | cut -d- -f3)
 # This command will start a RethinkDB client attached to the database
@@ -59,8 +59,8 @@ docker run -it --rm \
 --net dtr-ol \
 -v dtr-ca-$REPLICA_ID:/ca dockerhubenterprise/rethinkcli:v2.2.0 \
 $REPLICA_ID
-{% endraw %}
 ```
+{% endraw %}
 This container connects to the local DTR replica and launches a RethinkDB client
 that can be used to inspect the contents of the DB. RethinkDB

View File

@@ -67,7 +67,7 @@ and creating a tar archive of the [dtr-registry volume](../architecture.md):
 ```none
 sudo tar -cf {{ image_backup_file }} \
-{% raw %}$(dirname $(docker volume inspect --format '{{.Mountpoint}}' dtr-registry-<replica-id>)){% endraw %}
+{% raw %}$(dirname $(docker volume inspect --format '{{.Mountpoint}}' dtr-registry-<replica-id>)){% endraw %}
 ```
 If you're using a different storage backend, follow the best practices

View File

@@ -51,16 +51,16 @@ same images.
 To check how much space your images are taking in the local filesystem, you
 can ssh into the node where DTR is deployed and run:
-```
 {% raw %}
+```
 # Find the path to the volume
 docker volume inspect dtr-registry-<replica-id>
 # Check the disk usage
 sudo du -hs \
 $(dirname $(docker volume inspect --format '{{.Mountpoint}}' dtr-registry-<dtr-replica>))
-{% endraw %}
 ```
+{% endraw %}
 ## NFS

View File

@@ -49,13 +49,13 @@ DTR replica to check the DTR internal state.
 Use SSH to log into a node that is running a DTR replica, and run the following
 commands:
-```bash
 {% raw %}
+```bash
 # This command will start a RethinkDB client attached to the database
 # on the current node.
 docker exec -it $(docker ps -q --filter name=dtr-rethinkdb) rethinkcli
-{% endraw %}
 ```
+{% endraw %}
 RethinkDB stores data in different databases that contain multiple tables. The `rethinkcli`
 tool launches an interactive prompt where you can run RethinkDB

View File

@@ -56,12 +56,12 @@ To verify a client certificate bundle has been loaded and the client is
 successfully communicating with UCP, look for `ucp` in the `Server Version`
 returned by `docker version`.
-```bash
 {% raw %}
+```bash
 $ docker version --format '{{.Server.Version}}'
-{% endraw %}
 ucp/2.0.0
 ```
+{% endraw %}
 From now on, when you use the Docker CLI client, it includes your client
 certificates as part of the request to the Docker Engine.

View File

@@ -37,12 +37,12 @@ that join the cluster.
 You can also do this from the CLI by first running:
-```bash
 {% raw %}
+```bash
 $ docker node inspect --format '{{ index .Spec.Labels "com.docker.ucp.SANs" }}' <node-id>
 default-cs,127.0.0.1,172.17.0.1
-{% endraw %}
 ```
+{% endraw %}
 This gets the current set of SANs for the given manager node. Append your
 desired SAN to this list. For example, `default-cs,127.0.0.1,172.17.0.1,example.com`.

View File

@@ -19,10 +19,10 @@ with administrator credentials to change your password.
 If you're the only administrator, use **ssh** to log in to a manager
 node managed by UCP, and run:
-```none
 {% raw %}
+```none
 docker exec -it ucp-auth-api enzi \
 "$(docker inspect --format '{{ index .Args 0 }}' ucp-auth-api)" \
 passwd -i
-{% endraw %}
 ```
+{% endraw %}

View File

@@ -88,8 +88,8 @@ The examples below assume you are logged in with ssh into a UCP manager node.
 ### Check the status of the database
-```bash
 {% raw %}
+```bash
 # NODE_ADDRESS will be the IP address of this Docker Swarm manager node
 NODE_ADDRESS=$(docker info --format '{{.Swarm.NodeAddr}}')
 # VERSION will be your most recent version of the docker/ucp-auth image
@@ -97,13 +97,13 @@ VERSION=$(docker image ls --format '{{.Tag}}' docker/ucp-auth | head -n 1)
 # This command will output detailed status of all servers and database tables
 # in the RethinkDB cluster.
 docker run --rm -v ucp-auth-store-certs:/tls docker/ucp-auth:${VERSION} --db-addr=${NODE_ADDRESS}:12383 db-status
-{% endraw %}
 ```
+{% endraw %}
 ### Manually reconfigure database replication
-```bash
 {% raw %}
+```bash
 # NODE_ADDRESS will be the IP address of this Docker Swarm manager node
 NODE_ADDRESS=$(docker info --format '{{.Swarm.NodeAddr}}')
 # NUM_MANAGERS will be the current number of manager nodes in the cluster
@@ -113,8 +113,8 @@ VERSION=$(docker image ls --format '{{.Tag}}' docker/ucp-auth | head -n 1)
 # This reconfigure-db command will repair the RethinkDB cluster to have a
 # number of replicas equal to the number of manager nodes in the cluster.
 docker run --rm -v ucp-auth-store-certs:/tls docker/ucp-auth:${VERSION} --db-addr=${NODE_ADDRESS}:12383 --debug reconfigure-db --num-replicas ${NUM_MANAGERS} --emergency-repair
-{% endraw %}
 ```
+{% endraw %}
 ## Where to go next

View File

@@ -56,12 +56,12 @@ To verify a client certificate bundle has been loaded and the client is
 successfully communicating with UCP, look for `ucp` in the `Server Version`
 returned by `docker version`.
-```bash
 {% raw %}
+```bash
 $ docker version --format '{{.Server.Version}}'
-{% endraw %}
 ucp/2.1.0
 ```
+{% endraw %}
 From now on, when you use the Docker CLI client, it includes your client
 certificates as part of the request to the Docker Engine.

View File

@@ -23,10 +23,10 @@ with administrator credentials to change your password.
 If you're the only administrator, use **ssh** to log in to a manager
 node managed by UCP, and run:
-```none
 {% raw %}
+```none
 docker exec -it ucp-auth-api enzi \
 $(docker inspect --format '{{range .Args}}{{if eq "--db-addr=" (printf "%.10s" .)}}{{.}}{{end}}{{end}}' ucp-auth-api) \
 passwd -i
-{% endraw %}
 ```
+{% endraw %}

View File

@@ -36,12 +36,12 @@ that join the swarm.
 You can also do this from the CLI by first running:
-```bash
 {% raw %}
+```bash
 $ docker node inspect --format '{{ index .Spec.Labels "com.docker.ucp.SANs" }}' <node-id>
 default-cs,127.0.0.1,172.17.0.1
-{% endraw %}
 ```
+{% endraw %}
 This will get the current set of SANs for the given manager node. Append your
 desired SAN to this list, for example `default-cs,127.0.0.1,172.17.0.1,example.com`,

View File

@@ -23,19 +23,19 @@ $ docker container run --rm {{ page.ucp_org }}/{{ page.ucp_repo }}:{{ page.ucp_v
 1. Use the following command to extract the name of the currently active
 configuration from the `ucp-agent` service.
-```bash
 {% raw %}
+```bash
 $ CURRENT_CONFIG_NAME=$(docker service inspect --format '{{ range $config := .Spec.TaskTemplate.ContainerSpec.Configs }}{{ $config.ConfigName }}{{ "\n" }}{{ end }}' ucp-agent | grep 'com.docker.ucp.config-')
-{% endraw %}
 ```
+{% endraw %}
 2. Get the current configuration and save it to a TOML file.
-```bash
 {% raw %}
+```bash
 $ docker config inspect --format '{{ printf "%s" .Spec.Data }}' $CURRENT_CONFIG_NAME > config.toml
-{% endraw %}
 ```
+{% endraw %}
 3. Use the output of the `example-config` command as a guide to edit your
 `config.toml` file. Under the `[auth]` sections, set `backend = "ldap"`

View File

@@ -33,14 +33,14 @@ number that increases with each version, like `com.docker.ucp.config-1`. The
 Use the `docker config inspect` command to view the current settings and emit
 them to a file.
-```bash
 {% raw %}
+```bash
 # CURRENT_CONFIG_NAME will be the name of the currently active UCP configuration
 CURRENT_CONFIG_NAME=$(docker service inspect ucp-agent --format '{{range .Spec.TaskTemplate.ContainerSpec.Configs}}{{if eq "/etc/ucp/ucp.toml" .File.Name}}{{.ConfigName}}{{end}}{{end}}')
 # Collect the current config with `docker config inspect`
 docker config inspect --format '{{ printf "%s" .Spec.Data }}' $CURRENT_CONFIG_NAME > ucp-config.toml
-{% endraw %}
 ```
+{% endraw %}
 Edit the file, then use the `docker config create` and `docker service update`
 commands to create and apply the configuration from the file.

View File

@@ -17,9 +17,7 @@ a [UCP support dump](..\..\get-support.md) to use an environment variable
 that indicates the current architecture:
 ```bash
-{% raw %}
-[[ $(docker info --format='{{.Architecture}}') == s390x ]] && export _ARCH='-s390x' || export _ARCH=''
-{% endraw %}
+{% raw %}[[ $(docker info --format='{{.Architecture}}') == s390x ]] && export _ARCH='-s390x' || export _ARCH=''{% endraw %}
 docker container run --rm \
 --name ucp \

View File

@@ -91,8 +91,8 @@ The examples below assume you are logged in with ssh into a UCP manager node.
 ### Check the status of the database
-```bash
 {% raw %}
+```bash
 # NODE_ADDRESS will be the IP address of this Docker Swarm manager node
 NODE_ADDRESS=$(docker info --format '{{.Swarm.NodeAddr}}')
 # VERSION will be your most recent version of the docker/ucp-auth image
@@ -117,13 +117,13 @@ Server Status: [
 }
 ]
 ...
-{% endraw %}
 ```
+{% endraw %}
 ### Manually reconfigure database replication
-```bash
 {% raw %}
+```bash
 # NODE_ADDRESS will be the IP address of this Docker Swarm manager node
 NODE_ADDRESS=$(docker info --format '{{.Swarm.NodeAddr}}')
 # NUM_MANAGERS will be the current number of manager nodes in the cluster
@@ -140,8 +140,8 @@ time="2017-07-14T20:46:09Z" level=debug msg="Reconfiguring number of replicas to
 time="2017-07-14T20:46:09Z" level=debug msg="(00/16) Emergency Repairing Tables..."
 time="2017-07-14T20:46:09Z" level=debug msg="(01/16) Emergency Repaired Table \"grant_objects\""
 ...
-{% endraw %}
 ```
+{% endraw %}
 ## Where to go next

View File

@@ -61,9 +61,7 @@ successfully communicating with UCP, look for `ucp` in the `Server Version`
 returned by `docker version`.
 ```bash
-{% raw %}
-docker version --format '{{.Server.Version}}'
-{% endraw %}
+{% raw %}docker version --format '{{.Server.Version}}'{% endraw %}
 {{ page.ucp_repo }}/{{ page.ucp_version }}
 ```

View File

@@ -14,8 +14,8 @@ For instructions, see [Link Amazon Web Services to Docker Cloud](/docker-cloud/c
 This feature is called [AWS CloudFormation Service Role](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-servicerole.html?icmpid=docs_cfn_console)
 follow the link for more information.
-```none
 {% raw %}
+```none
 {
 "Version": "2012-10-17",
 "Statement": [
@@ -343,5 +343,5 @@ follow the link for more information.
 }
 ]
 }
-{% endraw %}
 ```
+{% endraw %}

View File

@@ -131,15 +131,15 @@ as well.
 The only option available for EFS is `perfmode`. You can set `perfmode` to
 `maxio` for high IO throughput:
-```bash
 {% raw %}
+```bash
 $ docker service create \
 --replicas 5 \
 --name ping3 \
 --mount type=volume,volume-driver=cloudstor:aws,source={{.Service.Name}}-{{.Task.Slot}}-vol5,destination=/mydata,volume-opt=perfmode=maxio \
 alpine ping docker.com
-{% endraw %}
 ```
+{% endraw %}
 You can also create `shared` Cloudstor volumes using the
 `docker volume create` CLI:
@@ -164,15 +164,15 @@ EBS volumes typically take a few minutes to be created. Besides
 Example usage:
-```bash
 {% raw %}
+```bash
 $ docker service create \
 --replicas 5 \
 --name ping3 \
 --mount type=volume,volume-driver=cloudstor:aws,source={{.Service.Name}}-{{.Task.Slot}}-vol,destination=/mydata,volume-opt=backing=relocatable,volume-opt=size=25,volume-opt=ebstype=gp2 \
 alpine ping docker.com
-{% endraw %}
 ```
+{% endraw %}
 The above example creates and mounts a distinct Cloudstor volume backed by 25 GB EBS
 volumes of type `gp2` for each task of the `ping3` service. Each task mounts its
@@ -210,15 +210,15 @@ mount a unique EFS-backed volume into each task of a service. This is useful if
 you already have too many EBS volumes or want to reduce the amount of time it
 takes to transfer volume data across availability zones.
-```bash
 {% raw %}
+```bash
 $ docker service create \
 --replicas 5 \
 --name ping2 \
 --mount type=volume,volume-driver=cloudstor:aws,source={{.Service.Name}}-{{.Task.Slot}}-vol,destination=/mydata \
 alpine ping docker.com
-{% endraw %}
 ```
+{% endraw %}
 Here, each task has mounted its own volume at `/mydata/` and the files under
 that mountpoint are unique to that task.

View File

@@ -76,15 +76,15 @@ create and mount a unique Cloudstor volume for each task in a swarm service.
 It is possible to use the templatized notation to indicate to Docker Swarm that a unique Cloudstor volume be created and mounted for each replica/task of a service. This may be useful if the tasks write to the same file under the same path which may lead to corruption in case of shared storage. Example:
-```bash
 {% raw %}
+```bash
 $ docker service create \
 --replicas 5 \
 --name ping2 \
 --mount type=volume,volume-driver=cloudstor:azure,source={{.Service.Name}}-{{.Task.Slot}}-vol,destination=/mydata \
 alpine ping docker.com
-{% endraw %}
 ```
+{% endraw %}
 A unique volume is created and mounted for each task participating in the
 `ping2` service. Each task mounts its own volume at `/mydata/` and all files

View File

@@ -58,8 +58,8 @@ There are three main steps in binding IBM Cloud services to your Docker EE for I
 5. Update your service code to use the secret that you created. For example:
-```none
 {% raw %}
+```none
 ...
 // WatsonSecret holds Watson VR service keys
 type WatsonSecret struct {
@@ -82,8 +82,8 @@ There are three main steps in binding IBM Cloud services to your Docker EE for I
 ...
 msgQ.Add("api_key", watsonSecrets.APIKey)
 ...
-{% endraw %}
 ```
+{% endraw %}
 ### Step 2: Build a Docker image
 1. Log in to the registry that you are using to store the image.
@@ -96,8 +96,8 @@ There are three main steps in binding IBM Cloud services to your Docker EE for I
 **Example** snippet for a Dockerfile that uses _mmssearch_ service.
-```none
 {% raw %}
+```none
 FROM golang:latest
 WORKDIR /go/src/mmssearch
 COPY . /go/src/mmssearch
@@ -109,8 +109,8 @@ There are three main steps in binding IBM Cloud services to your Docker EE for I
 COPY --from=0 go/src/mmssearch/main .
 CMD ["./main"]
 LABEL version=demo-3
-{% endraw %}
 ```
+{% endraw %}
 3. Navigate to the directory of the Dockerfile, and build the image. Don't forget the period in the `docker build` command.
@@ -141,8 +141,8 @@ There are three main steps in binding IBM Cloud services to your Docker EE for I
 * For the service `environment` field, use a service environment, such as a workspace ID, from the IBM Cloud service that you made before you began.
 * **Example** snippet for a `docker-service.yaml` that uses _mmssearch_ with a Watson secret.
-```none
 {% raw %}
+```none
 mmssearch:
 image: mmssearch:latest
 build: .
@@ -154,8 +154,8 @@ There are three main steps in binding IBM Cloud services to your Docker EE for I
 secrets:
 watson-secret:
 external: true
-{% endraw %}
 ```
+{% endraw %}
 2. Connect to your cluster by setting the environment variables from the [client certificate bundle that you downloaded](administering-swarms.md#download-client-certificates).

View File

@@ -63,8 +63,8 @@ If you want to change the default settings for storage type, IOPS, or capacity,
 Example output:
-```bash
 {% raw %}
+```bash
 [
 {
 "Driver": "d4ic-volume:latest",
@@ -89,8 +89,8 @@ If you want to change the default settings for storage type, IOPS, or capacity,
 }
 }
 ]
-{% endraw %}
 ```
+{% endraw %}
 > File storage provisioning
 >
@@ -110,11 +110,11 @@ Use an existing IBM Cloud infrastructure file storage volume with Docker EE for
 2. Retrieve the cluster ID:
-```bash
 {% raw %}
+```bash
 $ docker info --format={{.Swarm.Cluster.ID}}
-{% endraw %}
 ```
+{% endraw %}
 3. From your browser, log in to your [IBM Cloud infrastructure account](https://control.softlayer.com/) and access the file storage volume that you want to use.

View File

@@ -450,25 +450,25 @@ You can get the container IP address by using [`docker inspect`](/engine/referen
 the command would look like this, using the name we gave to the container
 (`webserver`) instead of the container ID:
-```bash
 {% raw %}
+```bash
 $ docker inspect \
 --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' \
 webserver
-{% endraw %}
 ```
+{% endraw %}
 This gives you the IP address of the container, for example:
-```bash
 {% raw %}
+```bash
 $ docker inspect \
 --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' \
 webserver
 172.17.0.2
-{% endraw %}
 ```
+{% endraw %}
 Now you can connect to the webserver by using `http://172.17.0.2:80` (or simply
 `http://172.17.0.2`, since port `80` is the default HTTP port.)

View File

@@ -160,21 +160,21 @@ for more information.
 From the command line, run `docker node inspect <id-node>` to query the nodes.
 For instance, to query the reachability of the node as a manager:
-```bash
 {% raw %}
+```bash
 docker node inspect manager1 --format "{{ .ManagerStatus.Reachability }}"
 reachable
-{% endraw %}
 ```
+{% endraw %}
 To query the status of the node as a worker that accept tasks:
-```bash
 {% raw %}
+```bash
 docker node inspect manager1 --format "{{ .Status.State }}"
 ready
-{% endraw %}
 ```
+{% endraw %}
 From those commands, we can see that `manager1` is both at the status
 `reachable` as a manager and `ready` as a worker.

View File

@@ -82,13 +82,13 @@ $ docker service update \
 You can use `docker service inspect` to view the service's published port. For
 instance:
-```bash
 {% raw %}
+```bash
 $ docker service inspect --format="{{json .Endpoint.Spec.Ports}}" my-web
 [{"Protocol":"tcp","TargetPort":80,"PublishedPort":8080}]
-{% endraw %}
 ```
+{% endraw %}
 The output shows the `<CONTAINER-PORT>` (labeled `TargetPort`) from the containers and the
 `<PUBLISHED-PORT>` (labeled `PublishedPort`) where nodes listen for requests for the service.

View File

@@ -961,13 +961,13 @@ Valid placeholders for the Go template are:
 This example sets the template of the created containers based on the
 service's name and the ID of the node where the container is running:
-```bash
 {% raw %}
+```bash
 $ docker service create --name hosttempl \
 --hostname="{{.Node.ID}}-{{.Service.Name}}"\
 busybox top
-{% endraw %}
 ```
+{% endraw %}
 To see the result of using the template, use the `docker service ps` and
 `docker inspect` commands.
@@ -979,11 +979,11 @@ ID NAME IMAGE
 wo41w8hg8qan hosttempl.1 busybox:latest@sha256:29f5d56d12684887bdfa50dcd29fc31eea4aaf4ad3bec43daf19026a7ce69912 2e7a8a9c4da2 Running Running about a minute ago
 ```
-```bash
 {% raw %}
+```bash
 $ docker inspect --format="{{.Config.Hostname}}" hosttempl.1.wo41w8hg8qanxwjwsg4kxpprj
-{% endraw %}
 ```
+{% endraw %}
 ## Learn More

View File

@@ -53,12 +53,12 @@ $ docker-machine inspect dev
 For the most part, you can pick out any field from the JSON in a fairly
 straightforward manner.
-```none
 {% raw %}
+```none
 $ docker-machine inspect --format='{{.Driver.IPAddress}}' dev
 192.168.5.99
-{% endraw %}
 ```
+{% endraw %}
 **Formatting details:**
@@ -73,8 +73,8 @@ $ docker-machine inspect --format='{{json .Driver}}' dev-fusion
 While this is usable, it's not very human-readable. For this reason, there is
 `prettyjson`:
-```none
 {% raw %}
+```none
 $ docker-machine inspect --format='{{prettyjson .Driver}}' dev-fusion
 {
 "Boot2DockerURL": "",

View File

@@ -97,21 +97,21 @@ when using the table directive, includes column headers as well.
 The following example uses a template without headers and outputs the `Name` and `Driver` entries separated by a colon
 for all running machines:
-```none
 {% raw %}
+```none
 $ docker-machine ls --format "{{.Name}}: {{.DriverName}}"
 default: virtualbox
 ec2: amazonec2
-{% endraw %}
 ```
+{% endraw %}
 To list all machine names with their driver in a table format you can use:
-```none
 {% raw %}
+```none
 $ docker-machine ls --format "table {{.Name}} {{.DriverName}}"
 NAME DRIVER
 default virtualbox
 ec2 amazonec2
+```
 {% endraw %}
-```

View File

@@ -658,23 +658,23 @@ we use often.
 The raw markup is needed to keep Liquid from interperting the things with double
 braces as templating language.
+{% raw %}
 ```none
 none with raw
-{% raw %}
 $ some command with {{double braces}}
 $ some other command
-{% endraw %}
 ```
+{% endraw %}
 ### Raw, Bash
+{% raw %}
 ```bash
 bash with raw
-{% raw %}
 $ some command with {{double braces}}
 $ some other command
-{% endraw %}
 ```
+{% endraw %}
 ### Bash