bump kubernetes to 1.11

Andy Xie 2018-07-02 16:36:54 +08:00
parent bf3b9f18b6
commit bcb230560f
1282 changed files with 118492 additions and 83558 deletions


@ -27,18 +27,14 @@ representing the resource name and `unit` labels representing the resource unit.
* kube_pod_container_resource_limits_cpu_cores
* kube_pod_container_resource_requests_memory_bytes
* kube_pod_container_resource_limits_memory_bytes
* kube_pod_container_resource_requests_nvidia_gpu_devices
* kube_pod_container_resource_limits_nvidia_gpu_devices
* **The following non-generic resource metrics for nodes are marked deprecated. They will be removed in kube-state-metrics v2.0.0.**
`kube_node_status_capacity` and `kube_node_status_allocatable` are the replacements with `resource` labels
representing the resource name and `unit` labels representing the resource unit. (A minimal sketch of the generic form follows this list.)
* kube_node_status_capacity_pods
* kube_node_status_capacity_cpu_cores
* kube_node_status_capacity_nvidia_gpu_cards
* kube_node_status_capacity_memory_bytes
* kube_node_status_allocatable_pods
* kube_node_status_allocatable_cpu_cores
* kube_node_status_allocatable_nvidia_gpu_cards
* kube_node_status_allocatable_memory_bytes
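The generic replacements carry the resource name and unit as labels instead of encoding them in the metric name. Below is a minimal, hypothetical sketch (not kube-state-metrics' actual collector code) of emitting such a gauge with the Prometheus Go client; the help string is assumed for illustration, and the node/resource/unit values mirror the test fixture later in this commit.

```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
	dto "github.com/prometheus/client_model/go"
)

func main() {
	// One generic descriptor with "resource" and "unit" labels replaces
	// per-resource names such as kube_node_status_capacity_cpu_cores.
	desc := prometheus.NewDesc(
		"kube_node_status_capacity",
		"The capacity for different resources of a node.", // help text assumed
		[]string{"node", "resource", "unit"},
		nil,
	)

	// 4.3 CPU cores of capacity on node 127.0.0.1.
	m := prometheus.MustNewConstMetric(desc, prometheus.GaugeValue, 4.3,
		"127.0.0.1", "cpu", "core")

	out := &dto.Metric{}
	_ = m.Write(out)
	fmt.Println(out.String()) // prints the label pairs and the gauge value
}
```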
## Exposed Metrics


@ -9,12 +9,10 @@
| kube_node_status_phase | Gauge | `node`=&lt;node-address&gt; <br> `phase`=&lt;Pending\|Running\|Terminated&gt; | STABLE |
| kube_node_status_capacity | Gauge | `node`=&lt;node-address&gt; <br> `resource`=&lt;resource-name&gt; <br> `unit`=&lt;resource-unit&gt;| STABLE |
| kube_node_status_capacity_cpu_cores | Gauge | `node`=&lt;node-address&gt;| STABLE |
| kube_node_status_capacity_nvidia_gpu_cards | Gauge | `node`=&lt;node-address&gt;| EXPERIMENTAL |
| kube_node_status_capacity_memory_bytes | Gauge | `node`=&lt;node-address&gt;| STABLE |
| kube_node_status_capacity_pods | Gauge | `node`=&lt;node-address&gt;| STABLE |
| kube_node_status_allocatable | Gauge | `node`=&lt;node-address&gt; <br> `resource`=&lt;resource-name&gt; <br> `unit`=&lt;resource-unit&gt;| STABLE |
| kube_node_status_allocatable_cpu_cores | Gauge | `node`=&lt;node-address&gt;| STABLE |
| kube_node_status_allocatable_nvidia_gpu_cards | Gauge | `node`=&lt;node-address&gt;| EXPERIMENTAL |
| kube_node_status_allocatable_memory_bytes | Gauge | `node`=&lt;node-address&gt;| STABLE |
| kube_node_status_allocatable_pods | Gauge | `node`=&lt;node-address&gt;| STABLE |
| kube_node_status_condition | Gauge | `node`=&lt;node-address&gt; <br> `condition`=&lt;node-condition&gt; <br> `status`=&lt;true\|false\|unknown&gt; | STABLE |


@ -24,8 +24,6 @@
| kube_pod_container_resource_limits_cpu_cores | Gauge | `container`=&lt;container-name&gt; <br> `pod`=&lt;pod-name&gt; <br> `namespace`=&lt;pod-namespace&gt; <br> `node`=&lt; node-name&gt; | STABLE |
| kube_pod_container_resource_limits | Gauge | `resource`=&lt;resource-name&gt; <br> `unit`=&lt;resource-unit&gt; <br> `container`=&lt;container-name&gt; <br> `pod`=&lt;pod-name&gt; <br> `namespace`=&lt;pod-namespace&gt; <br> `node`=&lt; node-name&gt; | STABLE |
| kube_pod_container_resource_limits_memory_bytes | Gauge | `container`=&lt;container-name&gt; <br> `pod`=&lt;pod-name&gt; <br> `namespace`=&lt;pod-namespace&gt; <br> `node`=&lt; node-name&gt; | STABLE |
| kube_pod_container_resource_requests_nvidia_gpu_devices | Gauge | `container`=&lt;container-name&gt; <br> `pod`=&lt;pod-name&gt; <br> `namespace`=&lt;pod-namespace&gt; <br> `node`=&lt; node-name&gt; | EXPERIMENTAL |
| kube_pod_container_resource_limits_nvidia_gpu_devices | Gauge | `container`=&lt;container-name&gt; <br> `pod`=&lt;pod-name&gt; <br> `namespace`=&lt;pod-namespace&gt; <br> `node`=&lt; node-name&gt; | EXPERIMENTAL |
| kube_pod_created | Gauge | `pod`=&lt;pod-name&gt; <br> `namespace`=&lt;pod-namespace&gt; |
| kube_pod_spec_volumes_persistentvolumeclaims_info | Gauge | `pod`=&lt;pod-name&gt; <br> `namespace`=&lt;pod-namespace&gt; <br> `volume`=&lt;volume-name&gt; <br> `persistentvolumeclaim`=&lt;persistentvolumeclaim-claimname&gt; | STABLE |
| kube_pod_spec_volumes_persistentvolumeclaims_readonly | Gauge | `pod`=&lt;pod-name&gt; <br> `namespace`=&lt;pod-namespace&gt; <br> `volume`=&lt;volume-name&gt; <br> `persistentvolumeclaim`=&lt;persistentvolumeclaim-claimname&gt; | STABLE |

Godeps/Godeps.json (generated, 787 changed lines)

File diff suppressed because it is too large


@ -55,14 +55,14 @@ All additional compatibility is only best effort, or happens to still/already be
#### Compatibility matrix
At most 5 kube-state-metrics releases will be recorded below.
| kube-state-metrics | client-go | **Kubernetes 1.6** | **Kubernetes 1.7** | **Kubernetes 1.8** | **Kubernetes 1.9** |
| kube-state-metrics | client-go | **Kubernetes 1.8** | **Kubernetes 1.9** | **Kubernetes 1.10** | **Kubernetes 1.11** |
|--------------------|-----------|--------------------|--------------------|--------------------|--------------------|
| **v1.0.x** | 4.0.0-beta.0 | ✓ | ✓ | - | - |
| **v1.1.0** | release-5.0 | ✓ | ✓ | ✓ | - |
| **v1.2.0** | v6.0.0 | ✓ | ✓ | ✓ | ✓ |
| **v1.3.0** | v6.0.0 | ✓ | ✓ | ✓ | ✓ |
| **v1.3.1** | v6.0.0 | ✓ | ✓ | ✓ | ✓ |
| **master** | v6.0.0 | ✓ | ✓ | ✓ | ✓ |
| **master** | v8.0.0 | ✓ | ✓ | ✓ | ✓ |
- `✓` Fully supported version range.
- `-` The Kubernetes cluster has features the client-go library can't use (additional API objects, etc).
@ -96,7 +96,7 @@ additional metrics!
> * kube_node_status_capacity_nvidia_gpu_cards
> * kube_node_status_allocatable_nvidia_gpu_cards
>
> are deprecated and will be completely removed when the Kubernetes accelerator feature support is removed in version v1.11. (Kubernetes accelerator support is already deprecated in v1.10).
> are removed in kube-state-metrics v1.4.0.
>
> Any collectors and metrics based on alpha Kubernetes APIs are excluded from any stability guarantee,
> which may be changed at any given release.


@ -99,12 +99,6 @@ var (
descNodeLabelsDefaultLabels,
nil,
)
descNodeStatusCapacityNvidiaGPU = prometheus.NewDesc(
"kube_node_status_capacity_nvidia_gpu_cards",
"The total Nvidia GPU resources of the node.",
descNodeLabelsDefaultLabels,
nil,
)
descNodeStatusCapacityMemory = prometheus.NewDesc(
"kube_node_status_capacity_memory_bytes",
"The total memory resources of the node.",
@ -129,12 +123,6 @@ var (
descNodeLabelsDefaultLabels,
nil,
)
descNodeStatusAllocatableNvidiaGPU = prometheus.NewDesc(
"kube_node_status_allocatable_nvidia_gpu_cards",
"The Nvidia GPU resources of a node that are available for scheduling.",
descNodeLabelsDefaultLabels,
nil,
)
descNodeStatusAllocatableMemory = prometheus.NewDesc(
"kube_node_status_allocatable_memory_bytes",
"The memory resources of a node that are available for scheduling.",
@ -192,11 +180,9 @@ func (nc *nodeCollector) Describe(ch chan<- *prometheus.Desc) {
if !nc.opts.DisableNodeNonGenericResourceMetrics {
ch <- descNodeStatusCapacityCPU
ch <- descNodeStatusCapacityNvidiaGPU
ch <- descNodeStatusCapacityMemory
ch <- descNodeStatusCapacityPods
ch <- descNodeStatusAllocatableCPU
ch <- descNodeStatusAllocatableNvidiaGPU
ch <- descNodeStatusAllocatableMemory
ch <- descNodeStatusAllocatablePods
}
@ -285,12 +271,10 @@ func (nc *nodeCollector) collectNode(ch chan<- prometheus.Metric, n v1.Node) {
}
addResource(descNodeStatusCapacityCPU, n.Status.Capacity, v1.ResourceCPU)
addResource(descNodeStatusCapacityNvidiaGPU, n.Status.Capacity, v1.ResourceNvidiaGPU)
addResource(descNodeStatusCapacityMemory, n.Status.Capacity, v1.ResourceMemory)
addResource(descNodeStatusCapacityPods, n.Status.Capacity, v1.ResourcePods)
addResource(descNodeStatusAllocatableCPU, n.Status.Allocatable, v1.ResourceCPU)
addResource(descNodeStatusAllocatableNvidiaGPU, n.Status.Allocatable, v1.ResourceNvidiaGPU)
addResource(descNodeStatusAllocatableMemory, n.Status.Allocatable, v1.ResourceMemory)
addResource(descNodeStatusAllocatablePods, n.Status.Allocatable, v1.ResourcePods)
}
@ -308,8 +292,6 @@ func (nc *nodeCollector) collectNode(ch chan<- prometheus.Metric, n v1.Node) {
fallthrough
case v1.ResourceMemory:
addGauge(descNodeStatusCapacity, float64(val.MilliValue())/1000, sanitizeLabelName(string(resourceName)), string(constant.UnitByte))
case v1.ResourceNvidiaGPU:
addGauge(descNodeStatusCapacity, float64(val.MilliValue())/1000, sanitizeLabelName(string(resourceName)), string(constant.UnitInteger))
case v1.ResourcePods:
addGauge(descNodeStatusCapacity, float64(val.MilliValue())/1000, sanitizeLabelName(string(resourceName)), string(constant.UnitInteger))
default:
@ -332,8 +314,6 @@ func (nc *nodeCollector) collectNode(ch chan<- prometheus.Metric, n v1.Node) {
fallthrough
case v1.ResourceMemory:
addGauge(descNodeStatusAllocatable, float64(val.MilliValue())/1000, sanitizeLabelName(string(resourceName)), string(constant.UnitByte))
case v1.ResourceNvidiaGPU:
addGauge(descNodeStatusAllocatable, float64(val.MilliValue())/1000, sanitizeLabelName(string(resourceName)), string(constant.UnitInteger))
case v1.ResourcePods:
addGauge(descNodeStatusAllocatable, float64(val.MilliValue())/1000, sanitizeLabelName(string(resourceName)), string(constant.UnitInteger))
default:


@ -57,8 +57,6 @@ func TestNodeCollector(t *testing.T) {
# HELP kube_node_status_capacity_pods The total pod resources of the node.
# TYPE kube_node_status_capacity_cpu_cores gauge
# HELP kube_node_status_capacity_cpu_cores The total CPU resources of the node.
# HELP kube_node_status_capacity_nvidia_gpu_cards The total Nvidia GPU resources of the node.
# TYPE kube_node_status_capacity_nvidia_gpu_cards gauge
# TYPE kube_node_status_capacity_memory_bytes gauge
# HELP kube_node_status_capacity_memory_bytes The total memory resources of the node.
# TYPE kube_node_status_allocatable gauge
@ -67,8 +65,6 @@ func TestNodeCollector(t *testing.T) {
# HELP kube_node_status_allocatable_pods The pod resources of a node that are available for scheduling.
# TYPE kube_node_status_allocatable_cpu_cores gauge
# HELP kube_node_status_allocatable_cpu_cores The CPU resources of a node that are available for scheduling.
# HELP kube_node_status_allocatable_nvidia_gpu_cards The Nvidia GPU resources of a node that are available for scheduling.
# TYPE kube_node_status_allocatable_nvidia_gpu_cards gauge
# TYPE kube_node_status_allocatable_memory_bytes gauge
# HELP kube_node_status_allocatable_memory_bytes The memory resources of a node that are available for scheduling.
# HELP kube_node_status_condition The condition of a cluster node.
@ -131,7 +127,6 @@ func TestNodeCollector(t *testing.T) {
},
Capacity: v1.ResourceList{
v1.ResourceCPU: resource.MustParse("4.3"),
v1.ResourceNvidiaGPU: resource.MustParse("4"),
v1.ResourceMemory: resource.MustParse("2G"),
v1.ResourcePods: resource.MustParse("1000"),
v1.ResourceStorage: resource.MustParse("3G"),
@ -140,7 +135,6 @@ func TestNodeCollector(t *testing.T) {
},
Allocatable: v1.ResourceList{
v1.ResourceCPU: resource.MustParse("3"),
v1.ResourceNvidiaGPU: resource.MustParse("2"),
v1.ResourceMemory: resource.MustParse("1G"),
v1.ResourcePods: resource.MustParse("555"),
v1.ResourceStorage: resource.MustParse("2G"),
@ -156,25 +150,21 @@ func TestNodeCollector(t *testing.T) {
kube_node_labels{label_type="master",node="127.0.0.1"} 1
kube_node_spec_unschedulable{node="127.0.0.1"} 1
kube_node_status_capacity{node="127.0.0.1",resource="cpu",unit="core"} 4.3
kube_node_status_capacity{node="127.0.0.1",resource="alpha_kubernetes_io_nvidia_gpu",unit="integer"} 4
kube_node_status_capacity{node="127.0.0.1",resource="memory",unit="byte"} 2e9
kube_node_status_capacity{node="127.0.0.1",resource="pods",unit="integer"} 1000
kube_node_status_capacity{node="127.0.0.1",resource="nvidia_com_gpu",unit="integer"} 4
kube_node_status_capacity{node="127.0.0.1",resource="storage",unit="byte"} 3e9
kube_node_status_capacity{node="127.0.0.1",resource="ephemeral_storage",unit="byte"} 4e9
kube_node_status_capacity_cpu_cores{node="127.0.0.1"} 4.3
kube_node_status_capacity_nvidia_gpu_cards{node="127.0.0.1"} 4
kube_node_status_capacity_memory_bytes{node="127.0.0.1"} 2e9
kube_node_status_capacity_pods{node="127.0.0.1"} 1000
kube_node_status_allocatable{node="127.0.0.1",resource="cpu",unit="core"} 3
kube_node_status_allocatable{node="127.0.0.1",resource="alpha_kubernetes_io_nvidia_gpu",unit="integer"} 2
kube_node_status_allocatable{node="127.0.0.1",resource="memory",unit="byte"} 1e9
kube_node_status_allocatable{node="127.0.0.1",resource="pods",unit="integer"} 555
kube_node_status_allocatable{node="127.0.0.1",resource="nvidia_com_gpu",unit="integer"} 1
kube_node_status_allocatable{node="127.0.0.1",resource="storage",unit="byte"} 2e9
kube_node_status_allocatable{node="127.0.0.1",resource="nvidia_com_gpu",unit="integer"} 1
kube_node_status_allocatable{node="127.0.0.1",resource="ephemeral_storage",unit="byte"} 3e9
kube_node_status_allocatable_cpu_cores{node="127.0.0.1"} 3
kube_node_status_allocatable_nvidia_gpu_cards{node="127.0.0.1"} 2
kube_node_status_allocatable_memory_bytes{node="127.0.0.1"} 1e9
kube_node_status_allocatable_pods{node="127.0.0.1"} 555
`,


@ -182,18 +182,6 @@ var (
append(descPodLabelsDefaultLabels, "container", "node"),
nil,
)
descPodContainerResourceRequestsNvidiaGPUDevices = prometheus.NewDesc(
"kube_pod_container_resource_requests_nvidia_gpu_devices",
"The number of requested gpu devices by a container.",
append(descPodLabelsDefaultLabels, "container", "node"),
nil,
)
descPodContainerResourceLimitsNvidiaGPUDevices = prometheus.NewDesc(
"kube_pod_container_resource_limits_nvidia_gpu_devices",
"The limit on gpu devices to be used by a container.",
append(descPodLabelsDefaultLabels, "container", "node"),
nil,
)
descPodSpecVolumesPersistentVolumeClaimsInfo = prometheus.NewDesc(
"kube_pod_spec_volumes_persistentvolumeclaims_info",
"Information about persistentvolumeclaim volumes in a pod.",
@ -273,8 +261,6 @@ func (pc *podCollector) Describe(ch chan<- *prometheus.Desc) {
ch <- descPodContainerResourceRequestsMemoryBytes
ch <- descPodContainerResourceLimitsCPUCores
ch <- descPodContainerResourceLimitsMemoryBytes
ch <- descPodContainerResourceRequestsNvidiaGPUDevices
ch <- descPodContainerResourceLimitsNvidiaGPUDevices
}
}
@ -435,10 +421,6 @@ func (pc *podCollector) collectPod(ch chan<- prometheus.Metric, p v1.Pod) {
c.Name, nodeName)
}
if gpu, ok := req[v1.ResourceNvidiaGPU]; ok {
addGauge(descPodContainerResourceRequestsNvidiaGPUDevices, float64(gpu.Value()), c.Name, nodeName)
}
if cpu, ok := lim[v1.ResourceCPU]; ok {
addGauge(descPodContainerResourceLimitsCPUCores, float64(cpu.MilliValue())/1000,
c.Name, nodeName)
@ -448,10 +430,6 @@ func (pc *podCollector) collectPod(ch chan<- prometheus.Metric, p v1.Pod) {
addGauge(descPodContainerResourceLimitsMemoryBytes, float64(mem.Value()),
c.Name, nodeName)
}
if gpu, ok := lim[v1.ResourceNvidiaGPU]; ok {
addGauge(descPodContainerResourceLimitsNvidiaGPUDevices, float64(gpu.Value()), c.Name, nodeName)
}
}
}
@ -471,9 +449,6 @@ func (pc *podCollector) collectPod(ch chan<- prometheus.Metric, p v1.Pod) {
case v1.ResourceMemory:
addGauge(descPodContainerResourceRequests, float64(val.Value()),
c.Name, nodeName, sanitizeLabelName(string(resourceName)), string(constant.UnitByte))
case v1.ResourceNvidiaGPU:
addGauge(descPodContainerResourceRequests, float64(val.Value()),
c.Name, nodeName, sanitizeLabelName(string(resourceName)), string(constant.UnitInteger))
default:
if helper.IsHugePageResourceName(resourceName) {
addGauge(descPodContainerResourceRequests, float64(val.Value()),
@ -498,9 +473,6 @@ func (pc *podCollector) collectPod(ch chan<- prometheus.Metric, p v1.Pod) {
case v1.ResourceMemory:
addGauge(descPodContainerResourceLimits, float64(val.Value()),
c.Name, nodeName, sanitizeLabelName(string(resourceName)), string(constant.UnitByte))
case v1.ResourceNvidiaGPU:
addGauge(descPodContainerResourceLimits, float64(val.Value()),
c.Name, nodeName, sanitizeLabelName(string(resourceName)), string(constant.UnitInteger))
default:
if helper.IsHugePageResourceName(resourceName) {
addGauge(descPodContainerResourceLimits, float64(val.Value()),


@ -93,10 +93,6 @@ func TestPodCollector(t *testing.T) {
# TYPE kube_pod_container_resource_limits_cpu_cores gauge
# HELP kube_pod_container_resource_limits_memory_bytes The limit on memory to be used by a container in bytes.
# TYPE kube_pod_container_resource_limits_memory_bytes gauge
# HELP kube_pod_container_resource_requests_nvidia_gpu_devices The number of requested gpu devices by a container.
# TYPE kube_pod_container_resource_requests_nvidia_gpu_devices gauge
# HELP kube_pod_container_resource_limits_nvidia_gpu_devices The limit on gpu devices to be used by a container.
# TYPE kube_pod_container_resource_limits_nvidia_gpu_devices gauge
# HELP kube_pod_spec_volumes_persistentvolumeclaims_info Information about persistentvolumeclaim volumes in a pod.
# TYPE kube_pod_spec_volumes_persistentvolumeclaims_info gauge
# HELP kube_pod_spec_volumes_persistentvolumeclaims_readonly Describes whether a persistentvolumeclaim is mounted read only.
@ -655,7 +651,6 @@ func TestPodCollector(t *testing.T) {
Requests: map[v1.ResourceName]resource.Quantity{
v1.ResourceCPU: resource.MustParse("200m"),
v1.ResourceMemory: resource.MustParse("100M"),
v1.ResourceNvidiaGPU: resource.MustParse("3"),
v1.ResourceEphemeralStorage: resource.MustParse("300M"),
v1.ResourceStorage: resource.MustParse("400M"),
v1.ResourceName("nvidia.com/gpu"): resource.MustParse("1"),
@ -663,7 +658,6 @@ func TestPodCollector(t *testing.T) {
Limits: map[v1.ResourceName]resource.Quantity{
v1.ResourceCPU: resource.MustParse("200m"),
v1.ResourceMemory: resource.MustParse("100M"),
v1.ResourceNvidiaGPU: resource.MustParse("3"),
v1.ResourceEphemeralStorage: resource.MustParse("300M"),
v1.ResourceStorage: resource.MustParse("400M"),
v1.ResourceName("nvidia.com/gpu"): resource.MustParse("1"),
@ -674,14 +668,12 @@ func TestPodCollector(t *testing.T) {
Name: "pod1_con2",
Resources: v1.ResourceRequirements{
Requests: map[v1.ResourceName]resource.Quantity{
v1.ResourceCPU: resource.MustParse("300m"),
v1.ResourceMemory: resource.MustParse("200M"),
v1.ResourceNvidiaGPU: resource.MustParse("2"),
},
Limits: map[v1.ResourceName]resource.Quantity{
v1.ResourceCPU: resource.MustParse("300m"),
v1.ResourceMemory: resource.MustParse("200M"),
v1.ResourceNvidiaGPU: resource.MustParse("2"),
},
},
},
@ -699,14 +691,12 @@ func TestPodCollector(t *testing.T) {
Name: "pod2_con1",
Resources: v1.ResourceRequirements{
Requests: map[v1.ResourceName]resource.Quantity{
v1.ResourceCPU: resource.MustParse("400m"),
v1.ResourceMemory: resource.MustParse("300M"),
v1.ResourceNvidiaGPU: resource.MustParse("3"),
},
Limits: map[v1.ResourceName]resource.Quantity{
v1.ResourceCPU: resource.MustParse("400m"),
v1.ResourceMemory: resource.MustParse("300M"),
v1.ResourceNvidiaGPU: resource.MustParse("3"),
},
},
},
@ -714,14 +704,12 @@ func TestPodCollector(t *testing.T) {
Name: "pod2_con2",
Resources: v1.ResourceRequirements{
Requests: map[v1.ResourceName]resource.Quantity{
v1.ResourceCPU: resource.MustParse("500m"),
v1.ResourceMemory: resource.MustParse("400M"),
v1.ResourceNvidiaGPU: resource.MustParse("5"),
},
Limits: map[v1.ResourceName]resource.Quantity{
v1.ResourceCPU: resource.MustParse("500m"),
v1.ResourceMemory: resource.MustParse("400M"),
v1.ResourceNvidiaGPU: resource.MustParse("5"),
},
},
},
@ -742,10 +730,6 @@ func TestPodCollector(t *testing.T) {
kube_pod_container_resource_requests_memory_bytes{container="pod1_con2",namespace="ns1",node="node1",pod="pod1"} 2e+08
kube_pod_container_resource_requests_memory_bytes{container="pod2_con1",namespace="ns2",node="node2",pod="pod2"} 3e+08
kube_pod_container_resource_requests_memory_bytes{container="pod2_con2",namespace="ns2",node="node2",pod="pod2"} 4e+08
kube_pod_container_resource_requests_nvidia_gpu_devices{container="pod1_con1",namespace="ns1",node="node1",pod="pod1"} 3
kube_pod_container_resource_requests_nvidia_gpu_devices{container="pod1_con2",namespace="ns1",node="node1",pod="pod1"} 2
kube_pod_container_resource_requests_nvidia_gpu_devices{container="pod2_con1",namespace="ns2",node="node2",pod="pod2"} 3
kube_pod_container_resource_requests_nvidia_gpu_devices{container="pod2_con2",namespace="ns2",node="node2",pod="pod2"} 5
kube_pod_container_resource_limits_cpu_cores{container="pod1_con1",namespace="ns1",node="node1",pod="pod1"} 0.2
kube_pod_container_resource_limits_cpu_cores{container="pod1_con2",namespace="ns1",node="node1",pod="pod1"} 0.3
kube_pod_container_resource_limits_cpu_cores{container="pod2_con1",namespace="ns2",node="node2",pod="pod2"} 0.4
@ -754,10 +738,6 @@ func TestPodCollector(t *testing.T) {
kube_pod_container_resource_limits_memory_bytes{container="pod1_con2",namespace="ns1",node="node1",pod="pod1"} 2e+08
kube_pod_container_resource_limits_memory_bytes{container="pod2_con1",namespace="ns2",node="node2",pod="pod2"} 3e+08
kube_pod_container_resource_limits_memory_bytes{container="pod2_con2",namespace="ns2",node="node2",pod="pod2"} 4e+08
kube_pod_container_resource_limits_nvidia_gpu_devices{container="pod1_con1",namespace="ns1",node="node1",pod="pod1"} 3
kube_pod_container_resource_limits_nvidia_gpu_devices{container="pod1_con2",namespace="ns1",node="node1",pod="pod1"} 2
kube_pod_container_resource_limits_nvidia_gpu_devices{container="pod2_con1",namespace="ns2",node="node2",pod="pod2"} 3
kube_pod_container_resource_limits_nvidia_gpu_devices{container="pod2_con2",namespace="ns2",node="node2",pod="pod2"} 5
kube_pod_container_resource_requests{container="pod1_con1",namespace="ns1",node="node1",pod="pod1",resource="cpu",unit="core"} 0.2
kube_pod_container_resource_requests{container="pod1_con2",namespace="ns1",node="node1",pod="pod1",resource="cpu",unit="core"} 0.3
kube_pod_container_resource_requests{container="pod1_con1",namespace="ns1",node="node1",pod="pod1",resource="nvidia_com_gpu",unit="integer"} 1
@ -769,10 +749,6 @@ func TestPodCollector(t *testing.T) {
kube_pod_container_resource_requests{container="pod1_con1",namespace="ns1",node="node1",pod="pod1",resource="ephemeral_storage",unit="byte"} 3e+08
kube_pod_container_resource_requests{container="pod2_con1",namespace="ns2",node="node2",pod="pod2",resource="memory",unit="byte"} 3e+08
kube_pod_container_resource_requests{container="pod2_con2",namespace="ns2",node="node2",pod="pod2",resource="memory",unit="byte"} 4e+08
kube_pod_container_resource_requests{container="pod1_con1",namespace="ns1",node="node1",pod="pod1",resource="alpha_kubernetes_io_nvidia_gpu",unit="integer"} 3
kube_pod_container_resource_requests{container="pod1_con2",namespace="ns1",node="node1",pod="pod1",resource="alpha_kubernetes_io_nvidia_gpu",unit="integer"} 2
kube_pod_container_resource_requests{container="pod2_con1",namespace="ns2",node="node2",pod="pod2",resource="alpha_kubernetes_io_nvidia_gpu",unit="integer"} 3
kube_pod_container_resource_requests{container="pod2_con2",namespace="ns2",node="node2",pod="pod2",resource="alpha_kubernetes_io_nvidia_gpu",unit="integer"} 5
kube_pod_container_resource_limits{container="pod1_con1",namespace="ns1",node="node1",pod="pod1",resource="cpu",unit="core"} 0.2
kube_pod_container_resource_limits{container="pod1_con1",namespace="ns1",node="node1",pod="pod1",resource="nvidia_com_gpu",unit="integer"} 1
kube_pod_container_resource_limits{container="pod1_con2",namespace="ns1",node="node1",pod="pod1",resource="cpu",unit="core"} 0.3
@ -784,18 +760,12 @@ func TestPodCollector(t *testing.T) {
kube_pod_container_resource_limits{container="pod1_con1",namespace="ns1",node="node1",pod="pod1",resource="ephemeral_storage",unit="byte"} 3e+08
kube_pod_container_resource_limits{container="pod2_con1",namespace="ns2",node="node2",pod="pod2",resource="memory",unit="byte"} 3e+08
kube_pod_container_resource_limits{container="pod2_con2",namespace="ns2",node="node2",pod="pod2",resource="memory",unit="byte"} 4e+08
kube_pod_container_resource_limits{container="pod1_con1",namespace="ns1",node="node1",pod="pod1",resource="alpha_kubernetes_io_nvidia_gpu",unit="integer"} 3
kube_pod_container_resource_limits{container="pod1_con2",namespace="ns1",node="node1",pod="pod1",resource="alpha_kubernetes_io_nvidia_gpu",unit="integer"} 2
kube_pod_container_resource_limits{container="pod2_con1",namespace="ns2",node="node2",pod="pod2",resource="alpha_kubernetes_io_nvidia_gpu",unit="integer"} 3
kube_pod_container_resource_limits{container="pod2_con2",namespace="ns2",node="node2",pod="pod2",resource="alpha_kubernetes_io_nvidia_gpu",unit="integer"} 5
`,
metrics: []string{
"kube_pod_container_resource_requests_cpu_cores",
"kube_pod_container_resource_requests_memory_bytes",
"kube_pod_container_resource_requests_nvidia_gpu_devices",
"kube_pod_container_resource_limits_cpu_cores",
"kube_pod_container_resource_limits_memory_bytes",
"kube_pod_container_resource_limits_nvidia_gpu_devices",
"kube_pod_container_resource_requests",
"kube_pod_container_resource_limits",
},


@ -17,7 +17,7 @@
set -e
set -o pipefail
KUBERNETES_VERSION=v1.8.0
KUBERNETES_VERSION=v1.10.0
KUBE_STATE_METRICS_LOG_DIR=./log
KUBE_STATE_METRICS_IMAGE_NAME='quay.io/coreos/kube-state-metrics'
PROMETHEUS_VERSION=2.0.0
@ -141,7 +141,7 @@ kubectl proxy &
# this for loop waits until kube-state-metrics is running by accessing the healthz endpoint
for i in {1..30}; do # timeout for 1 minutes
KUBE_STATE_METRICS_STATUS=$(curl -s "http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kube-state-metrics:8080/healthz")
KUBE_STATE_METRICS_STATUS=$(curl -s "http://localhost:8001/api/v1/namespaces/kube-system/services/kube-state-metrics:http-metrics/proxy/healthz")
if [ "$KUBE_STATE_METRICS_STATUS" == "ok" ]; then
is_kube_state_metrics_running="true"
break
@ -162,7 +162,7 @@ set -e
echo "kube-state-metrics is up and running"
echo "access kube-state-metrics metrics endpoint"
curl -s "http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kube-state-metrics:8080/metrics" >$KUBE_STATE_METRICS_LOG_DIR/metrics
curl -s "http://localhost:8001/api/v1/namespaces/kube-system/services/kube-state-metrics:http-metrics/proxy/metrics" >$KUBE_STATE_METRICS_LOG_DIR/metrics
echo "check metrics format with promtool"
[ -n "$E2E_SETUP_PROMTOOL" ] && setup_promtool
@ -175,7 +175,7 @@ for collector in $collectors; do
grep "^kube_${collector}_" $KUBE_STATE_METRICS_LOG_DIR/metrics
done
KUBE_STATE_METRICS_STATUS=$(curl -s "http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kube-state-metrics:8080/healthz")
KUBE_STATE_METRICS_STATUS=$(curl -s "http://localhost:8001/api/v1/namespaces/kube-system/services/kube-state-metrics:http-metrics/proxy/healthz")
if [ "$KUBE_STATE_METRICS_STATUS" == "ok" ]; then
echo "kube-state-metrics is still running after accessing metrics endpoint"
exit 0
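The endpoints are now reached through the API server's services proxy subresource, addressing the service port by name (`http-metrics`) instead of by number; the older `/api/v1/proxy/...` form was deprecated and dropped in newer Kubernetes releases. A minimal Go sketch of the same health check, assuming `kubectl proxy` is listening on localhost:8001 as in the script above:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	// Services proxy subresource path, with the service port referenced
	// by its name (http-metrics) rather than by number.
	url := "http://localhost:8001/api/v1/namespaces/kube-system/services/kube-state-metrics:http-metrics/proxy/healthz"

	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Printf("healthz: %s\n", body) // "ok" once kube-state-metrics is up
}
```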


@ -1,19 +1,24 @@
# Azure Active Directory library for Go
# Azure Active Directory authentication for Go
This project provides a stand alone Azure Active Directory library for Go. The code was extracted
from [go-autorest](https://github.com/Azure/go-autorest/) project, which is used as a base for
[azure-sdk-for-go](https://github.com/Azure/azure-sdk-for-go).
This is a standalone package for authenticating with Azure Active
Directory from other Go libraries and applications, in particular the [Azure SDK
for Go](https://github.com/Azure/azure-sdk-for-go).
Note: Despite the package's name it is not related to other "ADAL" libraries
maintained in the [github.com/AzureAD](https://github.com/AzureAD) org. Issues
should be opened in [this repo's](https://github.com/Azure/go-autorest/issues)
or [the SDK's](https://github.com/Azure/azure-sdk-for-go/issues) issue
trackers.
## Installation
## Install
```bash
go get -u github.com/Azure/go-autorest/autorest/adal
```
## Usage
An Active Directory application is required in order to use this library. An application can be registered in the [Azure Portal](https://portal.azure.com/) follow these [guidelines](https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-integrating-applications) or using the [Azure CLI](https://github.com/Azure/azure-cli).
An Active Directory application is required in order to use this library. An application can be registered in the [Azure Portal](https://portal.azure.com/) by following these [guidelines](https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-integrating-applications) or using the [Azure CLI](https://github.com/Azure/azure-cli).
### Register an Azure AD Application with secret
@ -218,6 +223,40 @@ if (err == nil) {
}
```
#### Username password authenticate
```Go
spt, err := adal.NewServicePrincipalTokenFromUsernamePassword(
oauthConfig,
applicationID,
username,
password,
resource,
callbacks...)
if (err == nil) {
token := spt.Token
}
```
#### Authorization code authenticate
``` Go
spt, err := adal.NewServicePrincipalTokenFromAuthorizationCode(
oauthConfig,
applicationID,
clientSecret,
authorizationCode,
redirectURI,
resource,
callbacks...)
err = spt.Refresh()
if (err == nil) {
token := spt.Token
}
```
### Command Line Tool
A command line tool is available in `cmd/adal.go` that can acquire a token for a given resource. It supports all flows mentioned above.


@ -32,8 +32,24 @@ type OAuthConfig struct {
DeviceCodeEndpoint url.URL
}
// IsZero returns true if the OAuthConfig object is zero-initialized.
func (oac OAuthConfig) IsZero() bool {
return oac == OAuthConfig{}
}
func validateStringParam(param, name string) error {
if len(param) == 0 {
return fmt.Errorf("parameter '" + name + "' cannot be empty")
}
return nil
}
// NewOAuthConfig returns an OAuthConfig with tenant specific urls
func NewOAuthConfig(activeDirectoryEndpoint, tenantID string) (*OAuthConfig, error) {
if err := validateStringParam(activeDirectoryEndpoint, "activeDirectoryEndpoint"); err != nil {
return nil, err
}
// it's legal for tenantID to be empty so don't validate it
const activeDirectoryEndpointTemplate = "%s/oauth2/%s?api-version=%s"
u, err := url.Parse(activeDirectoryEndpoint)
if err != nil {
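The hunk above ends mid-function; for context, here is a short, hypothetical usage sketch of the stricter constructor. The endpoint and tenant ID are placeholders, and the `TokenEndpoint` field is assumed from the rest of the `OAuthConfig` struct, which this diff only shows in part.

```go
package main

import (
	"fmt"

	"github.com/Azure/go-autorest/autorest/adal"
)

func main() {
	// Placeholders: supply your own AAD endpoint and tenant ID. An empty
	// endpoint is now rejected, while an empty tenant ID remains legal.
	endpoint := "https://login.microsoftonline.com"
	tenantID := "00000000-0000-0000-0000-000000000000"

	oauthConfig, err := adal.NewOAuthConfig(endpoint, tenantID)
	if err != nil {
		fmt.Println("invalid OAuth configuration:", err)
		return
	}
	fmt.Println("token endpoint:", oauthConfig.TokenEndpoint.String())
}
```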


@ -1,20 +0,0 @@
// +build !windows
package adal
// Copyright 2017 Microsoft Corporation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// msiPath is the path to the MSI Extension settings file (to discover the endpoint)
var msiPath = "/var/lib/waagent/ManagedIdentity-Settings"


@ -1,25 +0,0 @@
// +build windows
package adal
// Copyright 2017 Microsoft Corporation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
import (
"os"
"strings"
)
// msiPath is the path to the MSI Extension settings file (to discover the endpoint)
var msiPath = strings.Join([]string{os.Getenv("SystemDrive"), "WindowsAzure/Config/ManagedIdentity-Settings"}, "/")


@ -27,6 +27,7 @@ import (
"net/url" "net/url"
"strconv" "strconv"
"strings" "strings"
"sync"
"time" "time"
"github.com/Azure/go-autorest/autorest/date" "github.com/Azure/go-autorest/autorest/date"
@ -42,11 +43,20 @@ const (
// OAuthGrantTypeClientCredentials is the "grant_type" identifier used in credential flows
OAuthGrantTypeClientCredentials = "client_credentials"
// OAuthGrantTypeUserPass is the "grant_type" identifier used in username and password auth flows
OAuthGrantTypeUserPass = "password"
// OAuthGrantTypeRefreshToken is the "grant_type" identifier used in refresh token flows
OAuthGrantTypeRefreshToken = "refresh_token"
// OAuthGrantTypeAuthorizationCode is the "grant_type" identifier used in authorization code flows
OAuthGrantTypeAuthorizationCode = "authorization_code"
// metadataHeader is the header required by MSI extension
metadataHeader = "Metadata"
// msiEndpoint is the well known endpoint for getting MSI authentications tokens
msiEndpoint = "http://169.254.169.254/metadata/identity/oauth2/token"
)
// OAuthTokenProvider is an interface which should be implemented by an access token retriever
@ -54,6 +64,12 @@ type OAuthTokenProvider interface {
OAuthToken() string
}
// TokenRefreshError is an interface used by errors returned during token refresh.
type TokenRefreshError interface {
error
Response() *http.Response
}
// Refresher is an interface for token refresh functionality
type Refresher interface {
Refresh() error
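A hedged sketch of how a caller might use the new `TokenRefreshError` interface: a failed refresh can be type-asserted to recover the underlying HTTP response, assuming the error produced by the refresh actually implements the interface and carries a response.

```go
package example

import (
	"fmt"

	"github.com/Azure/go-autorest/autorest/adal"
)

// reportRefreshFailure refreshes spt and, when the returned error implements
// TokenRefreshError, prints the HTTP status of the failed token request.
func reportRefreshFailure(spt *adal.ServicePrincipalToken) error {
	err := spt.Refresh()
	if err == nil {
		return nil
	}
	if tre, ok := err.(adal.TokenRefreshError); ok && tre.Response() != nil {
		fmt.Printf("token refresh failed with HTTP %d\n", tre.Response().StatusCode)
	}
	return err
}
```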
@ -78,6 +94,11 @@ type Token struct {
Type string `json:"token_type"`
}
// IsZero returns true if the token object is zero-initialized.
func (t Token) IsZero() bool {
return t == Token{}
}
// Expires returns the time.Time when the Token expires.
func (t Token) Expires() time.Time {
s, err := strconv.Atoi(t.ExpiresOn)
@ -145,6 +166,34 @@ type ServicePrincipalCertificateSecret struct {
type ServicePrincipalMSISecret struct {
}
// ServicePrincipalUsernamePasswordSecret implements ServicePrincipalSecret for username and password auth.
type ServicePrincipalUsernamePasswordSecret struct {
Username string
Password string
}
// ServicePrincipalAuthorizationCodeSecret implements ServicePrincipalSecret for authorization code auth.
type ServicePrincipalAuthorizationCodeSecret struct {
ClientSecret string
AuthorizationCode string
RedirectURI string
}
// SetAuthenticationValues is a method of the interface ServicePrincipalSecret.
func (secret *ServicePrincipalAuthorizationCodeSecret) SetAuthenticationValues(spt *ServicePrincipalToken, v *url.Values) error {
v.Set("code", secret.AuthorizationCode)
v.Set("client_secret", secret.ClientSecret)
v.Set("redirect_uri", secret.RedirectURI)
return nil
}
// SetAuthenticationValues is a method of the interface ServicePrincipalSecret.
func (secret *ServicePrincipalUsernamePasswordSecret) SetAuthenticationValues(spt *ServicePrincipalToken, v *url.Values) error {
v.Set("username", secret.Username)
v.Set("password", secret.Password)
return nil
}
// SetAuthenticationValues is a method of the interface ServicePrincipalSecret.
func (msiSecret *ServicePrincipalMSISecret) SetAuthenticationValues(spt *ServicePrincipalToken, v *url.Values) error {
return nil
@ -197,27 +246,47 @@ func (secret *ServicePrincipalCertificateSecret) SetAuthenticationValues(spt *Se
// ServicePrincipalToken encapsulates a Token created for a Service Principal.
type ServicePrincipalToken struct {
Token
token Token
secret ServicePrincipalSecret
oauthConfig OAuthConfig
clientID string
resource string
autoRefresh bool
refreshLock *sync.RWMutex
refreshWithin time.Duration
sender Sender
refreshCallbacks []TokenRefreshCallback
}
func validateOAuthConfig(oac OAuthConfig) error {
if oac.IsZero() {
return fmt.Errorf("parameter 'oauthConfig' cannot be zero-initialized")
}
return nil
}
// NewServicePrincipalTokenWithSecret create a ServicePrincipalToken using the supplied ServicePrincipalSecret implementation.
func NewServicePrincipalTokenWithSecret(oauthConfig OAuthConfig, id string, resource string, secret ServicePrincipalSecret, callbacks ...TokenRefreshCallback) (*ServicePrincipalToken, error) {
if err := validateOAuthConfig(oauthConfig); err != nil {
return nil, err
}
if err := validateStringParam(id, "id"); err != nil {
return nil, err
}
if err := validateStringParam(resource, "resource"); err != nil {
return nil, err
}
if secret == nil {
return nil, fmt.Errorf("parameter 'secret' cannot be nil")
}
spt := &ServicePrincipalToken{
oauthConfig: oauthConfig,
secret: secret,
clientID: id,
resource: resource,
autoRefresh: true,
refreshLock: &sync.RWMutex{},
refreshWithin: defaultRefresh,
sender: &http.Client{},
refreshCallbacks: callbacks,
@ -227,6 +296,18 @@ func NewServicePrincipalTokenWithSecret(oauthConfig OAuthConfig, id string, reso
// NewServicePrincipalTokenFromManualToken creates a ServicePrincipalToken using the supplied token
func NewServicePrincipalTokenFromManualToken(oauthConfig OAuthConfig, clientID string, resource string, token Token, callbacks ...TokenRefreshCallback) (*ServicePrincipalToken, error) {
if err := validateOAuthConfig(oauthConfig); err != nil {
return nil, err
}
if err := validateStringParam(clientID, "clientID"); err != nil {
return nil, err
}
if err := validateStringParam(resource, "resource"); err != nil {
return nil, err
}
if token.IsZero() {
return nil, fmt.Errorf("parameter 'token' cannot be zero-initialized")
}
spt, err := NewServicePrincipalTokenWithSecret(
oauthConfig,
clientID,
@ -237,7 +318,7 @@ func NewServicePrincipalTokenFromManualToken(oauthConfig OAuthConfig, clientID s
return nil, err
}
spt.Token = token
spt.token = token
return spt, nil
}
@ -245,6 +326,18 @@ func NewServicePrincipalTokenFromManualToken(oauthConfig OAuthConfig, clientID s
// NewServicePrincipalToken creates a ServicePrincipalToken from the supplied Service Principal
// credentials scoped to the named resource.
func NewServicePrincipalToken(oauthConfig OAuthConfig, clientID string, secret string, resource string, callbacks ...TokenRefreshCallback) (*ServicePrincipalToken, error) {
if err := validateOAuthConfig(oauthConfig); err != nil {
return nil, err
}
if err := validateStringParam(clientID, "clientID"); err != nil {
return nil, err
}
if err := validateStringParam(secret, "secret"); err != nil {
return nil, err
}
if err := validateStringParam(resource, "resource"); err != nil {
return nil, err
}
return NewServicePrincipalTokenWithSecret(
oauthConfig,
clientID,
@ -256,8 +349,23 @@ func NewServicePrincipalToken(oauthConfig OAuthConfig, clientID string, secret s
)
}
// NewServicePrincipalTokenFromCertificate create a ServicePrincipalToken from the supplied pkcs12 bytes.
// NewServicePrincipalTokenFromCertificate creates a ServicePrincipalToken from the supplied pkcs12 bytes.
func NewServicePrincipalTokenFromCertificate(oauthConfig OAuthConfig, clientID string, certificate *x509.Certificate, privateKey *rsa.PrivateKey, resource string, callbacks ...TokenRefreshCallback) (*ServicePrincipalToken, error) {
if err := validateOAuthConfig(oauthConfig); err != nil {
return nil, err
}
if err := validateStringParam(clientID, "clientID"); err != nil {
return nil, err
}
if err := validateStringParam(resource, "resource"); err != nil {
return nil, err
}
if certificate == nil {
return nil, fmt.Errorf("parameter 'certificate' cannot be nil")
}
if privateKey == nil {
return nil, fmt.Errorf("parameter 'privateKey' cannot be nil")
}
	return NewServicePrincipalTokenWithSecret(
		oauthConfig,
		clientID,
@ -270,59 +378,163 @@ func NewServicePrincipalTokenFromCertificate(oauthConfig OAuthConfig, clientID s
	)
}

// NewServicePrincipalTokenFromUsernamePassword creates a ServicePrincipalToken from the username and password.
func NewServicePrincipalTokenFromUsernamePassword(oauthConfig OAuthConfig, clientID string, username string, password string, resource string, callbacks ...TokenRefreshCallback) (*ServicePrincipalToken, error) {
	if err := validateOAuthConfig(oauthConfig); err != nil {
		return nil, err
	}
	if err := validateStringParam(clientID, "clientID"); err != nil {
		return nil, err
	}
	if err := validateStringParam(username, "username"); err != nil {
		return nil, err
	}
	if err := validateStringParam(password, "password"); err != nil {
		return nil, err
	}
	if err := validateStringParam(resource, "resource"); err != nil {
		return nil, err
	}
	return NewServicePrincipalTokenWithSecret(
		oauthConfig,
		clientID,
		resource,
		&ServicePrincipalUsernamePasswordSecret{
			Username: username,
			Password: password,
		},
		callbacks...,
	)
}

// NewServicePrincipalTokenFromAuthorizationCode creates a ServicePrincipalToken from the supplied authorization code.
func NewServicePrincipalTokenFromAuthorizationCode(oauthConfig OAuthConfig, clientID string, clientSecret string, authorizationCode string, redirectURI string, resource string, callbacks ...TokenRefreshCallback) (*ServicePrincipalToken, error) {
	if err := validateOAuthConfig(oauthConfig); err != nil {
		return nil, err
	}
	if err := validateStringParam(clientID, "clientID"); err != nil {
		return nil, err
	}
	if err := validateStringParam(clientSecret, "clientSecret"); err != nil {
		return nil, err
	}
	if err := validateStringParam(authorizationCode, "authorizationCode"); err != nil {
		return nil, err
	}
	if err := validateStringParam(redirectURI, "redirectURI"); err != nil {
		return nil, err
	}
	if err := validateStringParam(resource, "resource"); err != nil {
		return nil, err
	}
	return NewServicePrincipalTokenWithSecret(
		oauthConfig,
		clientID,
		resource,
		&ServicePrincipalAuthorizationCodeSecret{
			ClientSecret:      clientSecret,
			AuthorizationCode: authorizationCode,
			RedirectURI:       redirectURI,
		},
		callbacks...,
	)
}

// GetMSIVMEndpoint gets the MSI endpoint on Virtual Machines.
func GetMSIVMEndpoint() (string, error) {
	return msiEndpoint, nil
}
// NewServicePrincipalTokenFromMSI creates a ServicePrincipalToken via the MSI VM Extension.
// It will use the system assigned identity when creating the token.
func NewServicePrincipalTokenFromMSI(msiEndpoint, resource string, callbacks ...TokenRefreshCallback) (*ServicePrincipalToken, error) {
return newServicePrincipalTokenFromMSI(msiEndpoint, resource, nil, callbacks...)
}
// NewServicePrincipalTokenFromMSIWithUserAssignedID creates a ServicePrincipalToken via the MSI VM Extension.
// It will use the specified user assigned identity when creating the token.
func NewServicePrincipalTokenFromMSIWithUserAssignedID(msiEndpoint, resource string, userAssignedID string, callbacks ...TokenRefreshCallback) (*ServicePrincipalToken, error) {
return newServicePrincipalTokenFromMSI(msiEndpoint, resource, &userAssignedID, callbacks...)
}
func newServicePrincipalTokenFromMSI(msiEndpoint, resource string, userAssignedID *string, callbacks ...TokenRefreshCallback) (*ServicePrincipalToken, error) {
if err := validateStringParam(msiEndpoint, "msiEndpoint"); err != nil {
return nil, err
}
if err := validateStringParam(resource, "resource"); err != nil {
return nil, err
}
if userAssignedID != nil {
if err := validateStringParam(*userAssignedID, "userAssignedID"); err != nil {
return nil, err
}
}
	// We set the oauth config token endpoint to be MSI's endpoint
	msiEndpointURL, err := url.Parse(msiEndpoint)
	if err != nil {
		return nil, err
	}

	v := url.Values{}
	v.Set("resource", resource)
	v.Set("api-version", "2018-02-01")
	if userAssignedID != nil {
		v.Set("client_id", *userAssignedID)
	}
	msiEndpointURL.RawQuery = v.Encode()

	spt := &ServicePrincipalToken{
		oauthConfig: OAuthConfig{
			TokenEndpoint: *msiEndpointURL,
		},
		secret:           &ServicePrincipalMSISecret{},
		resource:         resource,
		autoRefresh:      true,
		refreshLock:      &sync.RWMutex{},
		refreshWithin:    defaultRefresh,
		sender:           &http.Client{},
		refreshCallbacks: callbacks,
	}

	if userAssignedID != nil {
		spt.clientID = *userAssignedID
	}

	return spt, nil
}
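For orientation, here is a minimal, hypothetical usage sketch (not part of the vendored source) showing how a caller on an Azure VM might exercise the MSI constructors above; the ARM resource URL is an assumed example.

package main

import (
	"fmt"
	"log"

	"github.com/Azure/go-autorest/autorest/adal"
)

func main() {
	// GetMSIVMEndpoint returns the fixed IMDS token endpoint on Azure VMs.
	msiEndpoint, err := adal.GetMSIVMEndpoint()
	if err != nil {
		log.Fatal(err)
	}
	// Request a token for ARM using the system-assigned identity.
	spt, err := adal.NewServicePrincipalTokenFromMSI(msiEndpoint, "https://management.azure.com/")
	if err != nil {
		log.Fatal(err)
	}
	// EnsureFresh acquires (and later refreshes) the token as needed.
	if err := spt.EnsureFresh(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("token acquired:", spt.OAuthToken() != "")
}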
// internal type that implements TokenRefreshError
type tokenRefreshError struct {
message string
resp *http.Response
}
// Error implements the error interface which is part of the TokenRefreshError interface.
func (tre tokenRefreshError) Error() string {
return tre.message
}
// Response implements the TokenRefreshError interface, it returns the raw HTTP response from the refresh operation.
func (tre tokenRefreshError) Response() *http.Response {
return tre.resp
}
func newTokenRefreshError(message string, resp *http.Response) TokenRefreshError {
return tokenRefreshError{message: message, resp: resp}
}
// EnsureFresh will refresh the token if it will expire within the refresh window (as set by
// RefreshWithin) and autoRefresh flag is on. This method is safe for concurrent use.
func (spt *ServicePrincipalToken) EnsureFresh() error {
	if spt.autoRefresh && spt.token.WillExpireIn(spt.refreshWithin) {
		// take the write lock then check to see if the token was already refreshed
		spt.refreshLock.Lock()
		defer spt.refreshLock.Unlock()
		if spt.token.WillExpireIn(spt.refreshWithin) {
			return spt.refreshInternal(spt.resource)
		}
	}
	return nil
}
@ -331,7 +543,7 @@ func (spt *ServicePrincipalToken) EnsureFresh() error {
func (spt *ServicePrincipalToken) InvokeRefreshCallbacks(token Token) error {
	if spt.refreshCallbacks != nil {
		for _, callback := range spt.refreshCallbacks {
			err := callback(spt.token)
			if err != nil {
				return fmt.Errorf("adal: TokenRefreshCallback handler failed. Error = '%v'", err)
			}
@ -341,44 +553,80 @@ func (spt *ServicePrincipalToken) InvokeRefreshCallbacks(token Token) error {
}
// Refresh obtains a fresh token for the Service Principal.
// This method is not safe for concurrent use and should be synchronized.
func (spt *ServicePrincipalToken) Refresh() error {
	spt.refreshLock.Lock()
	defer spt.refreshLock.Unlock()
	return spt.refreshInternal(spt.resource)
}

// RefreshExchange refreshes the token, but for a different resource.
// This method is not safe for concurrent use and should be synchronized.
func (spt *ServicePrincipalToken) RefreshExchange(resource string) error {
	spt.refreshLock.Lock()
	defer spt.refreshLock.Unlock()
	return spt.refreshInternal(resource)
}
func (spt *ServicePrincipalToken) getGrantType() string {
	switch spt.secret.(type) {
	case *ServicePrincipalUsernamePasswordSecret:
		return OAuthGrantTypeUserPass
	case *ServicePrincipalAuthorizationCodeSecret:
		return OAuthGrantTypeAuthorizationCode
	default:
		return OAuthGrantTypeClientCredentials
	}
}

func isIMDS(u url.URL) bool {
	imds, err := url.Parse(msiEndpoint)
	if err != nil {
		return false
	}
	return u.Host == imds.Host && u.Path == imds.Path
}
func (spt *ServicePrincipalToken) refreshInternal(resource string) error {
	req, err := http.NewRequest(http.MethodPost, spt.oauthConfig.TokenEndpoint.String(), nil)
	if err != nil {
		return fmt.Errorf("adal: Failed to build the refresh request. Error = '%v'", err)
	}

	if !isIMDS(spt.oauthConfig.TokenEndpoint) {
		v := url.Values{}
		v.Set("client_id", spt.clientID)
		v.Set("resource", resource)

		if spt.token.RefreshToken != "" {
			v.Set("grant_type", OAuthGrantTypeRefreshToken)
			v.Set("refresh_token", spt.token.RefreshToken)
		} else {
			v.Set("grant_type", spt.getGrantType())
			err := spt.secret.SetAuthenticationValues(spt, &v)
			if err != nil {
				return err
			}
		}

		s := v.Encode()
		body := ioutil.NopCloser(strings.NewReader(s))
		req.ContentLength = int64(len(s))
		req.Header.Set(contentType, mimeTypeFormPost)
		req.Body = body
	}

	if _, ok := spt.secret.(*ServicePrincipalMSISecret); ok {
		req.Method = http.MethodGet
		req.Header.Set(metadataHeader, "true")
	}

	var resp *http.Response
	if isIMDS(spt.oauthConfig.TokenEndpoint) {
		resp, err = retry(spt.sender, req)
	} else {
		resp, err = spt.sender.Do(req)
	}
	if err != nil {
		return fmt.Errorf("adal: Failed to execute the refresh request. Error = '%v'", err)
	}
@ -388,9 +636,9 @@ func (spt *ServicePrincipalToken) refreshInternal(resource string) error {
if resp.StatusCode != http.StatusOK { if resp.StatusCode != http.StatusOK {
if err != nil { if err != nil {
return fmt.Errorf("adal: Refresh request failed. Status Code = '%d'. Failed reading response body", resp.StatusCode) return newTokenRefreshError(fmt.Sprintf("adal: Refresh request failed. Status Code = '%d'. Failed reading response body", resp.StatusCode), resp)
} }
return fmt.Errorf("adal: Refresh request failed. Status Code = '%d'. Response body: %s", resp.StatusCode, string(rb)) return newTokenRefreshError(fmt.Sprintf("adal: Refresh request failed. Status Code = '%d'. Response body: %s", resp.StatusCode, string(rb)), resp)
} }
if err != nil { if err != nil {
@ -405,11 +653,84 @@ func (spt *ServicePrincipalToken) refreshInternal(resource string) error {
return fmt.Errorf("adal: Failed to unmarshal the service principal token during refresh. Error = '%v' JSON = '%s'", err, string(rb)) return fmt.Errorf("adal: Failed to unmarshal the service principal token during refresh. Error = '%v' JSON = '%s'", err, string(rb))
} }
spt.Token = token spt.token = token
return spt.InvokeRefreshCallbacks(token) return spt.InvokeRefreshCallbacks(token)
} }
func retry(sender Sender, req *http.Request) (resp *http.Response, err error) {
retries := []int{
http.StatusRequestTimeout, // 408
http.StatusTooManyRequests, // 429
http.StatusInternalServerError, // 500
http.StatusBadGateway, // 502
http.StatusServiceUnavailable, // 503
http.StatusGatewayTimeout, // 504
}
// Extra retry status codes required
retries = append(retries, http.StatusNotFound,
// all remaining 5xx
http.StatusNotImplemented,
http.StatusHTTPVersionNotSupported,
http.StatusVariantAlsoNegotiates,
http.StatusInsufficientStorage,
http.StatusLoopDetected,
http.StatusNotExtended,
http.StatusNetworkAuthenticationRequired)
attempt := 0
maxAttempts := 5
for attempt < maxAttempts {
resp, err = sender.Do(req)
if err != nil {
return
}
if resp.StatusCode == http.StatusOK {
return
}
if containsInt(retries, resp.StatusCode) {
delayed := false
if resp.StatusCode == http.StatusTooManyRequests {
delayed = delay(resp, req.Cancel)
}
if !delayed {
time.Sleep(time.Second)
attempt++
}
} else {
return
}
}
return
}
func containsInt(ints []int, n int) bool {
for _, i := range ints {
if i == n {
return true
}
}
return false
}
func delay(resp *http.Response, cancel <-chan struct{}) bool {
if resp == nil {
return false
}
retryAfter, _ := strconv.Atoi(resp.Header.Get("Retry-After"))
if resp.StatusCode == http.StatusTooManyRequests && retryAfter > 0 {
select {
case <-time.After(time.Duration(retryAfter) * time.Second):
return true
case <-cancel:
return false
}
}
return false
}
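The retry helper above honours Retry-After on 429 responses and otherwise sleeps one second between attempts, up to five attempts. Since retry and delay are unexported, the following self-contained sketch (using net/http/httptest, not the package's helpers) merely reproduces that behaviour against a throttling test server to illustrate it.

package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"strconv"
	"sync/atomic"
	"time"
)

func main() {
	var calls int32
	// The first two responses are 429 with Retry-After: 1, the third succeeds.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if atomic.AddInt32(&calls, 1) < 3 {
			w.Header().Set("Retry-After", "1")
			w.WriteHeader(http.StatusTooManyRequests)
			return
		}
		w.WriteHeader(http.StatusOK)
	}))
	defer srv.Close()

	for attempt := 0; attempt < 5; attempt++ {
		resp, err := http.Get(srv.URL)
		if err != nil {
			panic(err)
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Printf("succeeded after %d attempts\n", attempt+1)
			return
		}
		// Honour Retry-After on 429, otherwise back off for one second.
		if ra, _ := strconv.Atoi(resp.Header.Get("Retry-After")); resp.StatusCode == http.StatusTooManyRequests && ra > 0 {
			time.Sleep(time.Duration(ra) * time.Second)
		} else {
			time.Sleep(time.Second)
		}
	}
}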
// SetAutoRefresh enables or disables automatic refreshing of stale tokens. // SetAutoRefresh enables or disables automatic refreshing of stale tokens.
func (spt *ServicePrincipalToken) SetAutoRefresh(autoRefresh bool) { func (spt *ServicePrincipalToken) SetAutoRefresh(autoRefresh bool) {
spt.autoRefresh = autoRefresh spt.autoRefresh = autoRefresh
@ -425,3 +746,17 @@ func (spt *ServicePrincipalToken) SetRefreshWithin(d time.Duration) {
// SetSender sets the http.Client used when obtaining the Service Principal token. An // SetSender sets the http.Client used when obtaining the Service Principal token. An
// undecorated http.Client is used by default. // undecorated http.Client is used by default.
func (spt *ServicePrincipalToken) SetSender(s Sender) { spt.sender = s } func (spt *ServicePrincipalToken) SetSender(s Sender) { spt.sender = s }
// OAuthToken implements the OAuthTokenProvider interface. It returns the current access token.
func (spt *ServicePrincipalToken) OAuthToken() string {
spt.refreshLock.RLock()
defer spt.refreshLock.RUnlock()
return spt.token.OAuthToken()
}
// Token returns a copy of the current token.
func (spt *ServicePrincipalToken) Token() Token {
spt.refreshLock.RLock()
defer spt.refreshLock.RUnlock()
return spt.token
}
View File
@ -24,9 +24,12 @@ import (
) )
const ( const (
bearerChallengeHeader = "Www-Authenticate" bearerChallengeHeader = "Www-Authenticate"
bearer = "Bearer" bearer = "Bearer"
tenantID = "tenantID" tenantID = "tenantID"
apiKeyAuthorizerHeader = "Ocp-Apim-Subscription-Key"
bingAPISdkHeader = "X-BingApis-SDK-Client"
golangBingAPISdkHeaderValue = "Go-SDK"
) )
// Authorizer is the interface that provides a PrepareDecorator used to supply request // Authorizer is the interface that provides a PrepareDecorator used to supply request
@ -44,6 +47,53 @@ func (na NullAuthorizer) WithAuthorization() PrepareDecorator {
return WithNothing() return WithNothing()
} }
// APIKeyAuthorizer implements API Key authorization.
type APIKeyAuthorizer struct {
headers map[string]interface{}
queryParameters map[string]interface{}
}
// NewAPIKeyAuthorizerWithHeaders creates an ApiKeyAuthorizer with headers.
func NewAPIKeyAuthorizerWithHeaders(headers map[string]interface{}) *APIKeyAuthorizer {
return NewAPIKeyAuthorizer(headers, nil)
}
// NewAPIKeyAuthorizerWithQueryParameters creates an ApiKeyAuthorizer with query parameters.
func NewAPIKeyAuthorizerWithQueryParameters(queryParameters map[string]interface{}) *APIKeyAuthorizer {
return NewAPIKeyAuthorizer(nil, queryParameters)
}
// NewAPIKeyAuthorizer creates an APIKeyAuthorizer with the specified headers and query parameters.
func NewAPIKeyAuthorizer(headers map[string]interface{}, queryParameters map[string]interface{}) *APIKeyAuthorizer {
return &APIKeyAuthorizer{headers: headers, queryParameters: queryParameters}
}
// WithAuthorization returns a PrepareDecorator that adds HTTP headers and query parameters to the request.
func (aka *APIKeyAuthorizer) WithAuthorization() PrepareDecorator {
return func(p Preparer) Preparer {
return DecoratePreparer(p, WithHeaders(aka.headers), WithQueryParameters(aka.queryParameters))
}
}
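A brief hypothetical sketch (not part of the vendored source) of the API-key authorizer above; the header value and base URL are placeholders.

package main

import (
	"fmt"
	"net/http"

	"github.com/Azure/go-autorest/autorest"
)

func main() {
	// Attach a subscription-key header to every prepared request.
	headers := map[string]interface{}{"Ocp-Apim-Subscription-Key": "<subscription-key>"}
	authorizer := autorest.NewAPIKeyAuthorizerWithHeaders(headers)

	req, err := autorest.Prepare(&http.Request{},
		autorest.WithBaseURL("https://example.invalid/api"),
		authorizer.WithAuthorization())
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Header.Get("Ocp-Apim-Subscription-Key"))
}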
// CognitiveServicesAuthorizer implements authorization for Cognitive Services.
type CognitiveServicesAuthorizer struct {
subscriptionKey string
}
// NewCognitiveServicesAuthorizer creates a CognitiveServicesAuthorizer from the given subscription key.
func NewCognitiveServicesAuthorizer(subscriptionKey string) *CognitiveServicesAuthorizer {
return &CognitiveServicesAuthorizer{subscriptionKey: subscriptionKey}
}
// WithAuthorization returns a PrepareDecorator that adds the Cognitive Services subscription key and Bing SDK headers.
func (csa *CognitiveServicesAuthorizer) WithAuthorization() PrepareDecorator {
headers := make(map[string]interface{})
headers[apiKeyAuthorizerHeader] = csa.subscriptionKey
headers[bingAPISdkHeader] = golangBingAPISdkHeaderValue
return NewAPIKeyAuthorizerWithHeaders(headers).WithAuthorization()
}
// BearerAuthorizer implements the bearer authorization // BearerAuthorizer implements the bearer authorization
type BearerAuthorizer struct { type BearerAuthorizer struct {
tokenProvider adal.OAuthTokenProvider tokenProvider adal.OAuthTokenProvider
@ -54,10 +104,6 @@ func NewBearerAuthorizer(tp adal.OAuthTokenProvider) *BearerAuthorizer {
return &BearerAuthorizer{tokenProvider: tp} return &BearerAuthorizer{tokenProvider: tp}
} }
func (ba *BearerAuthorizer) withBearerAuthorization() PrepareDecorator {
return WithHeader(headerAuthorization, fmt.Sprintf("Bearer %s", ba.tokenProvider.OAuthToken()))
}
// WithAuthorization returns a PrepareDecorator that adds an HTTP Authorization header whose // WithAuthorization returns a PrepareDecorator that adds an HTTP Authorization header whose
// value is "Bearer " followed by the token. // value is "Bearer " followed by the token.
// //
@ -65,15 +111,23 @@ func (ba *BearerAuthorizer) withBearerAuthorization() PrepareDecorator {
func (ba *BearerAuthorizer) WithAuthorization() PrepareDecorator { func (ba *BearerAuthorizer) WithAuthorization() PrepareDecorator {
return func(p Preparer) Preparer { return func(p Preparer) Preparer {
return PreparerFunc(func(r *http.Request) (*http.Request, error) { return PreparerFunc(func(r *http.Request) (*http.Request, error) {
refresher, ok := ba.tokenProvider.(adal.Refresher) r, err := p.Prepare(r)
if ok { if err == nil {
err := refresher.EnsureFresh() refresher, ok := ba.tokenProvider.(adal.Refresher)
if err != nil { if ok {
return r, NewErrorWithError(err, "azure.BearerAuthorizer", "WithAuthorization", nil, err := refresher.EnsureFresh()
"Failed to refresh the Token for request to %s", r.URL) if err != nil {
var resp *http.Response
if tokError, ok := err.(adal.TokenRefreshError); ok {
resp = tokError.Response()
}
return r, NewErrorWithError(err, "azure.BearerAuthorizer", "WithAuthorization", resp,
"Failed to refresh the Token for request to %s", r.URL)
}
} }
return Prepare(r, WithHeader(headerAuthorization, fmt.Sprintf("Bearer %s", ba.tokenProvider.OAuthToken())))
} }
return (ba.withBearerAuthorization()(p)).Prepare(r) return r, err
}) })
} }
} }
@ -103,25 +157,28 @@ func NewBearerAuthorizerCallback(sender Sender, callback BearerAuthorizerCallbac
func (bacb *BearerAuthorizerCallback) WithAuthorization() PrepareDecorator { func (bacb *BearerAuthorizerCallback) WithAuthorization() PrepareDecorator {
return func(p Preparer) Preparer { return func(p Preparer) Preparer {
return PreparerFunc(func(r *http.Request) (*http.Request, error) { return PreparerFunc(func(r *http.Request) (*http.Request, error) {
// make a copy of the request and remove the body as it's not r, err := p.Prepare(r)
// required and avoids us having to create a copy of it. if err == nil {
rCopy := *r // make a copy of the request and remove the body as it's not
removeRequestBody(&rCopy) // required and avoids us having to create a copy of it.
rCopy := *r
removeRequestBody(&rCopy)
resp, err := bacb.sender.Do(&rCopy) resp, err := bacb.sender.Do(&rCopy)
if err == nil && resp.StatusCode == 401 { if err == nil && resp.StatusCode == 401 {
defer resp.Body.Close() defer resp.Body.Close()
if hasBearerChallenge(resp) { if hasBearerChallenge(resp) {
bc, err := newBearerChallenge(resp) bc, err := newBearerChallenge(resp)
if err != nil {
return r, err
}
if bacb.callback != nil {
ba, err := bacb.callback(bc.values[tenantID], bc.values["resource"])
if err != nil { if err != nil {
return r, err return r, err
} }
return ba.WithAuthorization()(p).Prepare(r) if bacb.callback != nil {
ba, err := bacb.callback(bc.values[tenantID], bc.values["resource"])
if err != nil {
return r, err
}
return Prepare(r, ba.WithAuthorization())
}
} }
} }
} }
@ -179,3 +236,22 @@ func newBearerChallenge(resp *http.Response) (bc bearerChallenge, err error) {
return bc, err return bc, err
} }
// EventGridKeyAuthorizer implements authorization for event grid using key authentication.
type EventGridKeyAuthorizer struct {
topicKey string
}
// NewEventGridKeyAuthorizer creates a new EventGridKeyAuthorizer
// with the specified topic key.
func NewEventGridKeyAuthorizer(topicKey string) EventGridKeyAuthorizer {
return EventGridKeyAuthorizer{topicKey: topicKey}
}
// WithAuthorization returns a PrepareDecorator that adds the aeg-sas-key authentication header.
func (egta EventGridKeyAuthorizer) WithAuthorization() PrepareDecorator {
headers := map[string]interface{}{
"aeg-sas-key": egta.topicKey,
}
return NewAPIKeyAuthorizerWithHeaders(headers).WithAuthorization()
}
View File
@ -72,6 +72,7 @@ package autorest
// limitations under the License. // limitations under the License.
import ( import (
"context"
"net/http" "net/http"
"time" "time"
) )
@ -87,6 +88,9 @@ const (
// ResponseHasStatusCode returns true if the status code in the HTTP Response is in the passed set // ResponseHasStatusCode returns true if the status code in the HTTP Response is in the passed set
// and false otherwise. // and false otherwise.
func ResponseHasStatusCode(resp *http.Response, codes ...int) bool { func ResponseHasStatusCode(resp *http.Response, codes ...int) bool {
if resp == nil {
return false
}
return containsInt(codes, resp.StatusCode) return containsInt(codes, resp.StatusCode)
} }
@ -127,3 +131,20 @@ func NewPollingRequest(resp *http.Response, cancel <-chan struct{}) (*http.Reque
return req, nil return req, nil
} }
// NewPollingRequestWithContext allocates and returns a new http.Request with the specified context to poll for the passed response.
func NewPollingRequestWithContext(ctx context.Context, resp *http.Response) (*http.Request, error) {
location := GetLocation(resp)
if location == "" {
return nil, NewErrorWithResponse("autorest", "NewPollingRequestWithContext", resp, "Location header missing from response that requires polling")
}
req, err := Prepare((&http.Request{}).WithContext(ctx),
AsGet(),
WithBaseURL(location))
if err != nil {
return nil, NewErrorWithError(err, "autorest", "NewPollingRequestWithContext", nil, "Failure creating poll request to %s", location)
}
return req, nil
}
View File
@ -16,6 +16,8 @@ package azure
import ( import (
"bytes" "bytes"
"context"
"encoding/json"
"fmt" "fmt"
"io/ioutil" "io/ioutil"
"net/http" "net/http"
@ -37,6 +39,184 @@ const (
operationSucceeded string = "Succeeded" operationSucceeded string = "Succeeded"
) )
var pollingCodes = [...]int{http.StatusNoContent, http.StatusAccepted, http.StatusCreated, http.StatusOK}
// Future provides a mechanism to access the status and results of an asynchronous request.
// Since futures are stateful they should be passed by value to avoid race conditions.
type Future struct {
req *http.Request
resp *http.Response
ps pollingState
}
// NewFuture returns a new Future object initialized with the specified request.
func NewFuture(req *http.Request) Future {
return Future{req: req}
}
// Response returns the last HTTP response or nil if there isn't one.
func (f Future) Response() *http.Response {
return f.resp
}
// Status returns the last status message of the operation.
func (f Future) Status() string {
if f.ps.State == "" {
return "Unknown"
}
return f.ps.State
}
// PollingMethod returns the method used to monitor the status of the asynchronous operation.
func (f Future) PollingMethod() PollingMethodType {
return f.ps.PollingMethod
}
// Done queries the service to see if the operation has completed.
func (f *Future) Done(sender autorest.Sender) (bool, error) {
// exit early if this future has terminated
if f.ps.hasTerminated() {
return true, f.errorInfo()
}
resp, err := sender.Do(f.req)
f.resp = resp
if err != nil {
return false, err
}
if !autorest.ResponseHasStatusCode(resp, pollingCodes[:]...) {
// check response body for error content
if resp.Body != nil {
type respErr struct {
ServiceError ServiceError `json:"error"`
}
re := respErr{}
defer resp.Body.Close()
b, err := ioutil.ReadAll(resp.Body)
if err != nil {
return false, err
}
err = json.Unmarshal(b, &re)
if err != nil {
return false, err
}
return false, re.ServiceError
}
// try to return something meaningful
return false, ServiceError{
Code: fmt.Sprintf("%v", resp.StatusCode),
Message: resp.Status,
}
}
err = updatePollingState(resp, &f.ps)
if err != nil {
return false, err
}
if f.ps.hasTerminated() {
return true, f.errorInfo()
}
f.req, err = newPollingRequest(f.ps)
return false, err
}
// GetPollingDelay returns a duration the application should wait before checking
// the status of the asynchronous request and true; this value is returned from
// the service via the Retry-After response header. If the header wasn't returned
// then the function returns the zero-value time.Duration and false.
func (f Future) GetPollingDelay() (time.Duration, bool) {
if f.resp == nil {
return 0, false
}
retry := f.resp.Header.Get(autorest.HeaderRetryAfter)
if retry == "" {
return 0, false
}
d, err := time.ParseDuration(retry + "s")
if err != nil {
panic(err)
}
return d, true
}
// WaitForCompletion will return when one of the following conditions is met: the long
// running operation has completed, the provided context is cancelled, or the client's
// polling duration has been exceeded. It will retry failed polling attempts based on
// the retry value defined in the client up to the maximum retry attempts.
func (f Future) WaitForCompletion(ctx context.Context, client autorest.Client) error {
ctx, cancel := context.WithTimeout(ctx, client.PollingDuration)
defer cancel()
done, err := f.Done(client)
for attempts := 0; !done; done, err = f.Done(client) {
if attempts >= client.RetryAttempts {
return autorest.NewErrorWithError(err, "azure", "WaitForCompletion", f.resp, "the number of retries has been exceeded")
}
// we want delayAttempt to be zero in the non-error case so
// that DelayForBackoff doesn't perform exponential back-off
var delayAttempt int
var delay time.Duration
if err == nil {
// check for Retry-After delay, if not present use the client's polling delay
var ok bool
delay, ok = f.GetPollingDelay()
if !ok {
delay = client.PollingDelay
}
} else {
// there was an error polling for status so perform exponential
// back-off based on the number of attempts using the client's retry
// duration. update attempts after delayAttempt to avoid off-by-one.
delayAttempt = attempts
delay = client.RetryDuration
attempts++
}
// wait until the delay elapses or the context is cancelled
delayElapsed := autorest.DelayForBackoff(delay, delayAttempt, ctx.Done())
if !delayElapsed {
return autorest.NewErrorWithError(ctx.Err(), "azure", "WaitForCompletion", f.resp, "context has been cancelled")
}
}
return err
}
// if the operation failed the polling state will contain
// error information and implements the error interface
func (f *Future) errorInfo() error {
if !f.ps.hasSucceeded() {
return f.ps
}
return nil
}
// MarshalJSON implements the json.Marshaler interface.
func (f Future) MarshalJSON() ([]byte, error) {
return json.Marshal(&f.ps)
}
// UnmarshalJSON implements the json.Unmarshaler interface.
func (f *Future) UnmarshalJSON(data []byte) error {
err := json.Unmarshal(data, &f.ps)
if err != nil {
return err
}
f.req, err = newPollingRequest(f.ps)
return err
}
// PollingURL returns the URL used for retrieving the status of the long-running operation.
// For LROs that use the Location header the final URL value is used to retrieve the result.
func (f Future) PollingURL() string {
return f.ps.URI
}
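For context, a hypothetical sketch (not part of the vendored source) of driving the new Future type by hand; in generated SDK clients this is wrapped for you, and the client settings below are assumed example values.

package main

import (
	"context"
	"net/http"
	"time"

	"github.com/Azure/go-autorest/autorest"
	"github.com/Azure/go-autorest/autorest/azure"
)

// waitForLRO polls the long-running operation started by initialReq until it
// terminates, the polling duration elapses, or the context is cancelled.
func waitForLRO(initialReq *http.Request) error {
	client := autorest.Client{
		Sender:          &http.Client{},
		PollingDelay:    30 * time.Second,
		PollingDuration: 15 * time.Minute,
		RetryAttempts:   3,
		RetryDuration:   5 * time.Second,
	}
	future := azure.NewFuture(initialReq)
	return future.WaitForCompletion(context.Background(), client)
}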
// DoPollForAsynchronous returns a SendDecorator that polls if the http.Response is for an Azure // DoPollForAsynchronous returns a SendDecorator that polls if the http.Response is for an Azure
// long-running operation. It will delay between requests for the duration specified in the // long-running operation. It will delay between requests for the duration specified in the
// RetryAfter header or, if the header is absent, the passed delay. Polling may be canceled by // RetryAfter header or, if the header is absent, the passed delay. Polling may be canceled by
@ -48,8 +228,7 @@ func DoPollForAsynchronous(delay time.Duration) autorest.SendDecorator {
if err != nil { if err != nil {
return resp, err return resp, err
} }
pollingCodes := []int{http.StatusAccepted, http.StatusCreated, http.StatusOK} if !autorest.ResponseHasStatusCode(resp, pollingCodes[:]...) {
if !autorest.ResponseHasStatusCode(resp, pollingCodes...) {
return resp, nil return resp, nil
} }
@ -66,10 +245,11 @@ func DoPollForAsynchronous(delay time.Duration) autorest.SendDecorator {
break break
} }
r, err = newPollingRequest(resp, ps) r, err = newPollingRequest(ps)
if err != nil { if err != nil {
return resp, err return resp, err
} }
r = r.WithContext(resp.Request.Context())
delay = autorest.GetRetryAfter(resp, delay) delay = autorest.GetRetryAfter(resp, delay)
resp, err = autorest.SendWithSender(s, r, resp, err = autorest.SendWithSender(s, r,
@ -86,20 +266,15 @@ func getAsyncOperation(resp *http.Response) string {
} }
func hasSucceeded(state string) bool {
	return strings.EqualFold(state, operationSucceeded)
}

func hasTerminated(state string) bool {
	return strings.EqualFold(state, operationCanceled) || strings.EqualFold(state, operationFailed) || strings.EqualFold(state, operationSucceeded)
}

func hasFailed(state string) bool {
	return strings.EqualFold(state, operationFailed)
}
type provisioningTracker interface { type provisioningTracker interface {
@ -157,39 +332,50 @@ func (ps provisioningStatus) hasTerminated() bool {
} }
func (ps provisioningStatus) hasProvisioningError() bool {
	// code and message are required fields so only check them
	return len(ps.ProvisioningError.Code) > 0 ||
		len(ps.ProvisioningError.Message) > 0
}
// PollingMethodType defines a type used for enumerating polling mechanisms.
type PollingMethodType string

const (
	// PollingAsyncOperation indicates the polling method uses the Azure-AsyncOperation header.
	PollingAsyncOperation PollingMethodType = "AsyncOperation"

	// PollingLocation indicates the polling method uses the Location header.
	PollingLocation PollingMethodType = "Location"

	// PollingUnknown indicates an unknown polling method and is the default value.
	PollingUnknown PollingMethodType = ""
)

type pollingState struct {
	PollingMethod PollingMethodType `json:"pollingMethod"`
	URI           string            `json:"uri"`
	State         string            `json:"state"`
	ServiceError  *ServiceError     `json:"error,omitempty"`
}

func (ps pollingState) hasSucceeded() bool {
	return hasSucceeded(ps.State)
}

func (ps pollingState) hasTerminated() bool {
	return hasTerminated(ps.State)
}

func (ps pollingState) hasFailed() bool {
	return hasFailed(ps.State)
}

func (ps pollingState) Error() string {
	s := fmt.Sprintf("Long running operation terminated with status '%s'", ps.State)
	if ps.ServiceError != nil {
		s = fmt.Sprintf("%s: %+v", s, *ps.ServiceError)
	}
	return s
}
// updatePollingState maps the operation status -- retrieved from either a provisioningState // updatePollingState maps the operation status -- retrieved from either a provisioningState
@ -204,7 +390,7 @@ func updatePollingState(resp *http.Response, ps *pollingState) error {
// -- The first response will always be a provisioningStatus response; only the polling requests, // -- The first response will always be a provisioningStatus response; only the polling requests,
// depending on the header returned, may be something otherwise. // depending on the header returned, may be something otherwise.
var pt provisioningTracker var pt provisioningTracker
if ps.responseFormat == usesOperationResponse { if ps.PollingMethod == PollingAsyncOperation {
pt = &operationResource{} pt = &operationResource{}
} else { } else {
pt = &provisioningStatus{} pt = &provisioningStatus{}
@ -212,30 +398,30 @@ func updatePollingState(resp *http.Response, ps *pollingState) error {
// If this is the first request (that is, the polling response shape is unknown), determine how // If this is the first request (that is, the polling response shape is unknown), determine how
// to poll and what to expect // to poll and what to expect
if ps.responseFormat == formatIsUnknown { if ps.PollingMethod == PollingUnknown {
req := resp.Request req := resp.Request
if req == nil { if req == nil {
return autorest.NewError("azure", "updatePollingState", "Azure Polling Error - Original HTTP request is missing") return autorest.NewError("azure", "updatePollingState", "Azure Polling Error - Original HTTP request is missing")
} }
// Prefer the Azure-AsyncOperation header // Prefer the Azure-AsyncOperation header
ps.uri = getAsyncOperation(resp) ps.URI = getAsyncOperation(resp)
if ps.uri != "" { if ps.URI != "" {
ps.responseFormat = usesOperationResponse ps.PollingMethod = PollingAsyncOperation
} else { } else {
ps.responseFormat = usesProvisioningStatus ps.PollingMethod = PollingLocation
} }
// Else, use the Location header // Else, use the Location header
if ps.uri == "" { if ps.URI == "" {
ps.uri = autorest.GetLocation(resp) ps.URI = autorest.GetLocation(resp)
} }
// Lastly, requests against an existing resource, use the last request URI // Lastly, requests against an existing resource, use the last request URI
if ps.uri == "" { if ps.URI == "" {
m := strings.ToUpper(req.Method) m := strings.ToUpper(req.Method)
if m == http.MethodPatch || m == http.MethodPut || m == http.MethodGet { if m == http.MethodPatch || m == http.MethodPut || m == http.MethodGet {
ps.uri = req.URL.String() ps.URI = req.URL.String()
} }
} }
} }
@ -256,23 +442,23 @@ func updatePollingState(resp *http.Response, ps *pollingState) error {
// -- Unknown states are per-service inprogress states // -- Unknown states are per-service inprogress states
// -- Otherwise, infer state from HTTP status code // -- Otherwise, infer state from HTTP status code
if pt.hasTerminated() { if pt.hasTerminated() {
ps.state = pt.state() ps.State = pt.state()
} else if pt.state() != "" { } else if pt.state() != "" {
ps.state = operationInProgress ps.State = operationInProgress
} else { } else {
switch resp.StatusCode { switch resp.StatusCode {
case http.StatusAccepted: case http.StatusAccepted:
ps.state = operationInProgress ps.State = operationInProgress
case http.StatusNoContent, http.StatusCreated, http.StatusOK: case http.StatusNoContent, http.StatusCreated, http.StatusOK:
ps.state = operationSucceeded ps.State = operationSucceeded
default: default:
ps.state = operationFailed ps.State = operationFailed
} }
} }
if ps.state == operationInProgress && ps.uri == "" { if strings.EqualFold(ps.State, operationInProgress) && ps.URI == "" {
return autorest.NewError("azure", "updatePollingState", "Azure Polling Error - Unable to obtain polling URI for %s %s", resp.Request.Method, resp.Request.URL) return autorest.NewError("azure", "updatePollingState", "Azure Polling Error - Unable to obtain polling URI for %s %s", resp.Request.Method, resp.Request.URL)
} }
@ -281,36 +467,45 @@ func updatePollingState(resp *http.Response, ps *pollingState) error {
// -- Response // -- Response
// -- Otherwise, Unknown // -- Otherwise, Unknown
if ps.hasFailed() { if ps.hasFailed() {
if ps.responseFormat == usesOperationResponse { if or, ok := pt.(*operationResource); ok {
or := pt.(*operationResource) ps.ServiceError = &or.OperationError
ps.code = or.OperationError.Code } else if p, ok := pt.(*provisioningStatus); ok && p.hasProvisioningError() {
ps.message = or.OperationError.Message ps.ServiceError = &p.ProvisioningError
} else { } else {
p := pt.(*provisioningStatus) ps.ServiceError = &ServiceError{
if p.hasProvisioningError() { Code: "Unknown",
ps.code = p.ProvisioningError.Code Message: "None",
ps.message = p.ProvisioningError.Message
} else {
ps.code = "Unknown"
ps.message = "None"
} }
} }
} }
return nil return nil
} }
func newPollingRequest(resp *http.Response, ps pollingState) (*http.Request, error) { func newPollingRequest(ps pollingState) (*http.Request, error) {
req := resp.Request reqPoll, err := autorest.Prepare(&http.Request{},
if req == nil {
return nil, autorest.NewError("azure", "newPollingRequest", "Azure Polling Error - Original HTTP request is missing")
}
reqPoll, err := autorest.Prepare(&http.Request{Cancel: req.Cancel},
autorest.AsGet(), autorest.AsGet(),
autorest.WithBaseURL(ps.uri)) autorest.WithBaseURL(ps.URI))
if err != nil { if err != nil {
return nil, autorest.NewErrorWithError(err, "azure", "newPollingRequest", nil, "Failure creating poll request to %s", ps.uri) return nil, autorest.NewErrorWithError(err, "azure", "newPollingRequest", nil, "Failure creating poll request to %s", ps.URI)
} }
return reqPoll, nil return reqPoll, nil
} }
// AsyncOpIncompleteError is the type that's returned from a future that has not completed.
type AsyncOpIncompleteError struct {
// FutureType is the name of the type composed of a azure.Future.
FutureType string
}
// Error returns an error message including the originating type name of the error.
func (e AsyncOpIncompleteError) Error() string {
return fmt.Sprintf("%s: asynchronous operation has not completed", e.FutureType)
}
// NewAsyncOpIncompleteError creates a new AsyncOpIncompleteError with the specified parameters.
func NewAsyncOpIncompleteError(futureType string) AsyncOpIncompleteError {
return AsyncOpIncompleteError{
FutureType: futureType,
}
}
View File
@ -1,8 +1,5 @@
// Package azure provides Azure-specific implementations used with AutoRest.
// See the included examples for more detail.
package azure
// Copyright 2017 Microsoft Corporation // Copyright 2017 Microsoft Corporation
@ -24,7 +21,9 @@ import (
"fmt" "fmt"
"io/ioutil" "io/ioutil"
"net/http" "net/http"
"regexp"
"strconv" "strconv"
"strings"
"github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest"
) )
@ -43,21 +42,88 @@ const (
) )
// ServiceError encapsulates the error response from an Azure service.
// It adheres to the OData v4 specification for error responses.
type ServiceError struct {
	Code       string                   `json:"code"`
	Message    string                   `json:"message"`
	Target     *string                  `json:"target"`
	Details    []map[string]interface{} `json:"details"`
	InnerError map[string]interface{}   `json:"innererror"`
}
func (se ServiceError) Error() string {
	result := fmt.Sprintf("Code=%q Message=%q", se.Code, se.Message)

	if se.Target != nil {
		result += fmt.Sprintf(" Target=%q", *se.Target)
	}

	if se.Details != nil {
		d, err := json.Marshal(se.Details)
		if err != nil {
			result += fmt.Sprintf(" Details=%v", se.Details)
		}
		result += fmt.Sprintf(" Details=%v", string(d))
	}

	if se.InnerError != nil {
		d, err := json.Marshal(se.InnerError)
		if err != nil {
			result += fmt.Sprintf(" InnerError=%v", se.InnerError)
		}
		result += fmt.Sprintf(" InnerError=%v", string(d))
	}

	return result
}
// UnmarshalJSON implements the json.Unmarshaler interface for the ServiceError type.
func (se *ServiceError) UnmarshalJSON(b []byte) error {
// per the OData v4 spec the details field must be an array of JSON objects.
// unfortunately not all services adhere to the spec and just return a single
// object instead of an array with one object. so we have to perform some
// shenanigans to accommodate both cases.
// http://docs.oasis-open.org/odata/odata-json-format/v4.0/os/odata-json-format-v4.0-os.html#_Toc372793091
type serviceError1 struct {
Code string `json:"code"`
Message string `json:"message"`
Target *string `json:"target"`
Details []map[string]interface{} `json:"details"`
InnerError map[string]interface{} `json:"innererror"`
}
type serviceError2 struct {
Code string `json:"code"`
Message string `json:"message"`
Target *string `json:"target"`
Details map[string]interface{} `json:"details"`
InnerError map[string]interface{} `json:"innererror"`
}
se1 := serviceError1{}
err := json.Unmarshal(b, &se1)
if err == nil {
se.populate(se1.Code, se1.Message, se1.Target, se1.Details, se1.InnerError)
return nil
}
se2 := serviceError2{}
err = json.Unmarshal(b, &se2)
if err == nil {
se.populate(se2.Code, se2.Message, se2.Target, nil, se2.InnerError)
se.Details = append(se.Details, se2.Details)
return nil
}
return err
}
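A small, hypothetical sketch (not part of the vendored source) showing that the custom UnmarshalJSON above accepts both the spec-compliant array form of "details" and the single-object form some services return; the payloads are made up.

package main

import (
	"encoding/json"
	"fmt"

	"github.com/Azure/go-autorest/autorest/azure"
)

func main() {
	spec := []byte(`{"code":"Conflict","message":"busy","details":[{"reason":"lock"}]}`)
	loose := []byte(`{"code":"Conflict","message":"busy","details":{"reason":"lock"}}`)

	var a, b azure.ServiceError
	if err := json.Unmarshal(spec, &a); err != nil {
		panic(err)
	}
	if err := json.Unmarshal(loose, &b); err != nil {
		panic(err)
	}
	// Both forms end up with a one-element Details slice.
	fmt.Println(len(a.Details), len(b.Details))
}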
func (se *ServiceError) populate(code, message string, target *string, details []map[string]interface{}, inner map[string]interface{}) {
se.Code = code
se.Message = message
se.Target = target
se.Details = details
se.InnerError = inner
} }
// RequestError describes an error response returned by Azure service. // RequestError describes an error response returned by Azure service.
@ -83,6 +149,41 @@ func IsAzureError(e error) bool {
return ok return ok
} }
// Resource contains details about an Azure resource.
type Resource struct {
SubscriptionID string
ResourceGroup string
Provider string
ResourceType string
ResourceName string
}
// ParseResourceID parses a resource ID into a ResourceDetails struct.
// See https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-template-functions-resource#return-value-4.
func ParseResourceID(resourceID string) (Resource, error) {
const resourceIDPatternText = `(?i)subscriptions/(.+)/resourceGroups/(.+)/providers/(.+?)/(.+?)/(.+)`
resourceIDPattern := regexp.MustCompile(resourceIDPatternText)
match := resourceIDPattern.FindStringSubmatch(resourceID)
if len(match) == 0 {
return Resource{}, fmt.Errorf("parsing failed for %s. Invalid resource Id format", resourceID)
}
v := strings.Split(match[5], "/")
resourceName := v[len(v)-1]
result := Resource{
SubscriptionID: match[1],
ResourceGroup: match[2],
Provider: match[3],
ResourceType: match[4],
ResourceName: resourceName,
}
return result, nil
}
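A minimal sketch (not part of the vendored source) of ParseResourceID on a made-up ARM resource ID.

package main

import (
	"fmt"
	"log"

	"github.com/Azure/go-autorest/autorest/azure"
)

func main() {
	// A fabricated resource ID in the standard ARM format.
	id := "/subscriptions/00000000-0000-0000-0000-000000000000" +
		"/resourceGroups/example-rg/providers/Microsoft.Compute/virtualMachines/example-vm"

	res, err := azure.ParseResourceID(id)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(res.ResourceGroup) // example-rg
	fmt.Println(res.Provider)      // Microsoft.Compute
	fmt.Println(res.ResourceType)  // virtualMachines
	fmt.Println(res.ResourceName)  // example-vm
}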
// NewErrorWithError creates a new Error conforming object from the // NewErrorWithError creates a new Error conforming object from the
// passed packageType, method, statusCode of the given resp (UndefinedStatusCode // passed packageType, method, statusCode of the given resp (UndefinedStatusCode
// if resp is nil), message, and original error. message is treated as a format // if resp is nil), message, and original error. message is treated as a format
View File
@ -15,10 +15,17 @@ package azure
// limitations under the License. // limitations under the License.
import ( import (
"encoding/json"
"fmt" "fmt"
"io/ioutil"
"os"
"strings" "strings"
) )
// EnvironmentFilepathName captures the name of the environment variable containing the path to the file
// to be used while populating the Azure Environment.
const EnvironmentFilepathName = "AZURE_ENVIRONMENT_FILEPATH"
var environments = map[string]Environment{ var environments = map[string]Environment{
"AZURECHINACLOUD": ChinaCloud, "AZURECHINACLOUD": ChinaCloud,
"AZUREGERMANCLOUD": GermanCloud, "AZUREGERMANCLOUD": GermanCloud,
@ -37,6 +44,8 @@ type Environment struct {
GalleryEndpoint string `json:"galleryEndpoint"` GalleryEndpoint string `json:"galleryEndpoint"`
KeyVaultEndpoint string `json:"keyVaultEndpoint"` KeyVaultEndpoint string `json:"keyVaultEndpoint"`
GraphEndpoint string `json:"graphEndpoint"` GraphEndpoint string `json:"graphEndpoint"`
ServiceBusEndpoint string `json:"serviceBusEndpoint"`
BatchManagementEndpoint string `json:"batchManagementEndpoint"`
StorageEndpointSuffix string `json:"storageEndpointSuffix"` StorageEndpointSuffix string `json:"storageEndpointSuffix"`
SQLDatabaseDNSSuffix string `json:"sqlDatabaseDNSSuffix"` SQLDatabaseDNSSuffix string `json:"sqlDatabaseDNSSuffix"`
TrafficManagerDNSSuffix string `json:"trafficManagerDNSSuffix"` TrafficManagerDNSSuffix string `json:"trafficManagerDNSSuffix"`
@ -45,6 +54,7 @@ type Environment struct {
ServiceManagementVMDNSSuffix string `json:"serviceManagementVMDNSSuffix"` ServiceManagementVMDNSSuffix string `json:"serviceManagementVMDNSSuffix"`
ResourceManagerVMDNSSuffix string `json:"resourceManagerVMDNSSuffix"` ResourceManagerVMDNSSuffix string `json:"resourceManagerVMDNSSuffix"`
ContainerRegistryDNSSuffix string `json:"containerRegistryDNSSuffix"` ContainerRegistryDNSSuffix string `json:"containerRegistryDNSSuffix"`
TokenAudience string `json:"tokenAudience"`
} }
var ( var (
@ -59,14 +69,17 @@ var (
GalleryEndpoint: "https://gallery.azure.com/", GalleryEndpoint: "https://gallery.azure.com/",
KeyVaultEndpoint: "https://vault.azure.net/", KeyVaultEndpoint: "https://vault.azure.net/",
GraphEndpoint: "https://graph.windows.net/", GraphEndpoint: "https://graph.windows.net/",
ServiceBusEndpoint: "https://servicebus.windows.net/",
BatchManagementEndpoint: "https://batch.core.windows.net/",
StorageEndpointSuffix: "core.windows.net", StorageEndpointSuffix: "core.windows.net",
SQLDatabaseDNSSuffix: "database.windows.net", SQLDatabaseDNSSuffix: "database.windows.net",
TrafficManagerDNSSuffix: "trafficmanager.net", TrafficManagerDNSSuffix: "trafficmanager.net",
KeyVaultDNSSuffix: "vault.azure.net", KeyVaultDNSSuffix: "vault.azure.net",
ServiceBusEndpointSuffix: "servicebus.azure.com", ServiceBusEndpointSuffix: "servicebus.windows.net",
ServiceManagementVMDNSSuffix: "cloudapp.net", ServiceManagementVMDNSSuffix: "cloudapp.net",
ResourceManagerVMDNSSuffix: "cloudapp.azure.com", ResourceManagerVMDNSSuffix: "cloudapp.azure.com",
ContainerRegistryDNSSuffix: "azurecr.io", ContainerRegistryDNSSuffix: "azurecr.io",
TokenAudience: "https://management.azure.com/",
} }
// USGovernmentCloud is the cloud environment for the US Government // USGovernmentCloud is the cloud environment for the US Government
@ -76,10 +89,12 @@ var (
PublishSettingsURL: "https://manage.windowsazure.us/publishsettings/index", PublishSettingsURL: "https://manage.windowsazure.us/publishsettings/index",
ServiceManagementEndpoint: "https://management.core.usgovcloudapi.net/", ServiceManagementEndpoint: "https://management.core.usgovcloudapi.net/",
ResourceManagerEndpoint: "https://management.usgovcloudapi.net/", ResourceManagerEndpoint: "https://management.usgovcloudapi.net/",
ActiveDirectoryEndpoint: "https://login.microsoftonline.com/", ActiveDirectoryEndpoint: "https://login.microsoftonline.us/",
GalleryEndpoint: "https://gallery.usgovcloudapi.net/", GalleryEndpoint: "https://gallery.usgovcloudapi.net/",
KeyVaultEndpoint: "https://vault.usgovcloudapi.net/", KeyVaultEndpoint: "https://vault.usgovcloudapi.net/",
GraphEndpoint: "https://graph.usgovcloudapi.net/", GraphEndpoint: "https://graph.windows.net/",
ServiceBusEndpoint: "https://servicebus.usgovcloudapi.net/",
BatchManagementEndpoint: "https://batch.core.usgovcloudapi.net/",
StorageEndpointSuffix: "core.usgovcloudapi.net", StorageEndpointSuffix: "core.usgovcloudapi.net",
SQLDatabaseDNSSuffix: "database.usgovcloudapi.net", SQLDatabaseDNSSuffix: "database.usgovcloudapi.net",
TrafficManagerDNSSuffix: "usgovtrafficmanager.net", TrafficManagerDNSSuffix: "usgovtrafficmanager.net",
@ -88,6 +103,7 @@ var (
ServiceManagementVMDNSSuffix: "usgovcloudapp.net", ServiceManagementVMDNSSuffix: "usgovcloudapp.net",
ResourceManagerVMDNSSuffix: "cloudapp.windowsazure.us", ResourceManagerVMDNSSuffix: "cloudapp.windowsazure.us",
ContainerRegistryDNSSuffix: "azurecr.io", ContainerRegistryDNSSuffix: "azurecr.io",
TokenAudience: "https://management.usgovcloudapi.net/",
} }
// ChinaCloud is the cloud environment operated in China // ChinaCloud is the cloud environment operated in China
@ -101,14 +117,17 @@ var (
GalleryEndpoint: "https://gallery.chinacloudapi.cn/", GalleryEndpoint: "https://gallery.chinacloudapi.cn/",
KeyVaultEndpoint: "https://vault.azure.cn/", KeyVaultEndpoint: "https://vault.azure.cn/",
GraphEndpoint: "https://graph.chinacloudapi.cn/", GraphEndpoint: "https://graph.chinacloudapi.cn/",
ServiceBusEndpoint: "https://servicebus.chinacloudapi.cn/",
BatchManagementEndpoint: "https://batch.chinacloudapi.cn/",
StorageEndpointSuffix: "core.chinacloudapi.cn", StorageEndpointSuffix: "core.chinacloudapi.cn",
SQLDatabaseDNSSuffix: "database.chinacloudapi.cn", SQLDatabaseDNSSuffix: "database.chinacloudapi.cn",
TrafficManagerDNSSuffix: "trafficmanager.cn", TrafficManagerDNSSuffix: "trafficmanager.cn",
KeyVaultDNSSuffix: "vault.azure.cn", KeyVaultDNSSuffix: "vault.azure.cn",
ServiceBusEndpointSuffix: "servicebus.chinacloudapi.net", ServiceBusEndpointSuffix: "servicebus.chinacloudapi.cn",
ServiceManagementVMDNSSuffix: "chinacloudapp.cn", ServiceManagementVMDNSSuffix: "chinacloudapp.cn",
ResourceManagerVMDNSSuffix: "cloudapp.azure.cn", ResourceManagerVMDNSSuffix: "cloudapp.azure.cn",
ContainerRegistryDNSSuffix: "azurecr.io", ContainerRegistryDNSSuffix: "azurecr.io",
TokenAudience: "https://management.chinacloudapi.cn/",
} }
// GermanCloud is the cloud environment operated in Germany // GermanCloud is the cloud environment operated in Germany
@ -122,6 +141,8 @@ var (
GalleryEndpoint: "https://gallery.cloudapi.de/", GalleryEndpoint: "https://gallery.cloudapi.de/",
KeyVaultEndpoint: "https://vault.microsoftazure.de/", KeyVaultEndpoint: "https://vault.microsoftazure.de/",
GraphEndpoint: "https://graph.cloudapi.de/", GraphEndpoint: "https://graph.cloudapi.de/",
ServiceBusEndpoint: "https://servicebus.cloudapi.de/",
BatchManagementEndpoint: "https://batch.cloudapi.de/",
StorageEndpointSuffix: "core.cloudapi.de", StorageEndpointSuffix: "core.cloudapi.de",
SQLDatabaseDNSSuffix: "database.cloudapi.de", SQLDatabaseDNSSuffix: "database.cloudapi.de",
TrafficManagerDNSSuffix: "azuretrafficmanager.de", TrafficManagerDNSSuffix: "azuretrafficmanager.de",
@ -130,15 +151,41 @@ var (
ServiceManagementVMDNSSuffix: "azurecloudapp.de", ServiceManagementVMDNSSuffix: "azurecloudapp.de",
ResourceManagerVMDNSSuffix: "cloudapp.microsoftazure.de", ResourceManagerVMDNSSuffix: "cloudapp.microsoftazure.de",
ContainerRegistryDNSSuffix: "azurecr.io", ContainerRegistryDNSSuffix: "azurecr.io",
TokenAudience: "https://management.microsoftazure.de/",
} }
) )
// EnvironmentFromName returns an Environment based on the common name specified.
func EnvironmentFromName(name string) (Environment, error) {
	// IMPORTANT
	// As per @radhikagupta5:
	// This is technical debt, fundamentally here because Kubernetes is not currently accepting
	// contributions to the providers. Once that is an option, the provider should be updated to
	// directly call `EnvironmentFromFile`. Until then, we rely on dispatching Azure Stack environment creation
	// from this method based on the name that is provided to us.
	if strings.EqualFold(name, "AZURESTACKCLOUD") {
		return EnvironmentFromFile(os.Getenv(EnvironmentFilepathName))
	}
	name = strings.ToUpper(name)
	env, ok := environments[name]
	if !ok {
		return env, fmt.Errorf("autorest/azure: There is no cloud environment matching the name %q", name)
	}
	return env, nil
}
// EnvironmentFromFile loads an Environment from a configuration file available on disk.
// This function is particularly useful in the Hybrid Cloud model, where one must define their own
// endpoints.
func EnvironmentFromFile(location string) (unmarshaled Environment, err error) {
fileContents, err := ioutil.ReadFile(location)
if err != nil {
return
}
err = json.Unmarshal(fileContents, &unmarshaled)
return
}
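A short, hypothetical sketch (not part of the vendored source) of resolving an environment by name, including the Azure Stack path introduced above; the file path is a placeholder.

package main

import (
	"fmt"
	"log"
	"os"

	"github.com/Azure/go-autorest/autorest/azure"
)

func main() {
	// Standard clouds resolve from the built-in table (the name is case-insensitive).
	env, err := azure.EnvironmentFromName("AzurePublicCloud")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(env.ResourceManagerEndpoint)

	// For Azure Stack, point AZURE_ENVIRONMENT_FILEPATH at a JSON environment
	// definition (the path below is a placeholder) and ask for AZURESTACKCLOUD.
	os.Setenv(azure.EnvironmentFilepathName, "/etc/kubernetes/azurestackcloud.json")
	stack, err := azure.EnvironmentFromName("AZURESTACKCLOUD")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(stack.Name)
}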
View File
@ -0,0 +1,245 @@
package azure
import (
"encoding/json"
"fmt"
"io/ioutil"
"net/http"
"strings"
"github.com/Azure/go-autorest/autorest"
)
// Copyright 2017 Microsoft Corporation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
type audience []string
type authentication struct {
LoginEndpoint string `json:"loginEndpoint"`
Audiences audience `json:"audiences"`
}
type environmentMetadataInfo struct {
GalleryEndpoint string `json:"galleryEndpoint"`
GraphEndpoint string `json:"graphEndpoint"`
PortalEndpoint string `json:"portalEndpoint"`
Authentication authentication `json:"authentication"`
}
// EnvironmentProperty represent property names that clients can override
type EnvironmentProperty string
const (
// EnvironmentName ...
EnvironmentName EnvironmentProperty = "name"
// EnvironmentManagementPortalURL ..
EnvironmentManagementPortalURL EnvironmentProperty = "managementPortalURL"
// EnvironmentPublishSettingsURL ...
EnvironmentPublishSettingsURL EnvironmentProperty = "publishSettingsURL"
// EnvironmentServiceManagementEndpoint ...
EnvironmentServiceManagementEndpoint EnvironmentProperty = "serviceManagementEndpoint"
// EnvironmentResourceManagerEndpoint ...
EnvironmentResourceManagerEndpoint EnvironmentProperty = "resourceManagerEndpoint"
// EnvironmentActiveDirectoryEndpoint ...
EnvironmentActiveDirectoryEndpoint EnvironmentProperty = "activeDirectoryEndpoint"
// EnvironmentGalleryEndpoint ...
EnvironmentGalleryEndpoint EnvironmentProperty = "galleryEndpoint"
// EnvironmentKeyVaultEndpoint ...
EnvironmentKeyVaultEndpoint EnvironmentProperty = "keyVaultEndpoint"
// EnvironmentGraphEndpoint ...
EnvironmentGraphEndpoint EnvironmentProperty = "graphEndpoint"
// EnvironmentServiceBusEndpoint ...
EnvironmentServiceBusEndpoint EnvironmentProperty = "serviceBusEndpoint"
// EnvironmentBatchManagementEndpoint ...
EnvironmentBatchManagementEndpoint EnvironmentProperty = "batchManagementEndpoint"
// EnvironmentStorageEndpointSuffix ...
EnvironmentStorageEndpointSuffix EnvironmentProperty = "storageEndpointSuffix"
// EnvironmentSQLDatabaseDNSSuffix ...
EnvironmentSQLDatabaseDNSSuffix EnvironmentProperty = "sqlDatabaseDNSSuffix"
// EnvironmentTrafficManagerDNSSuffix ...
EnvironmentTrafficManagerDNSSuffix EnvironmentProperty = "trafficManagerDNSSuffix"
// EnvironmentKeyVaultDNSSuffix ...
EnvironmentKeyVaultDNSSuffix EnvironmentProperty = "keyVaultDNSSuffix"
// EnvironmentServiceBusEndpointSuffix ...
EnvironmentServiceBusEndpointSuffix EnvironmentProperty = "serviceBusEndpointSuffix"
// EnvironmentServiceManagementVMDNSSuffix ...
EnvironmentServiceManagementVMDNSSuffix EnvironmentProperty = "serviceManagementVMDNSSuffix"
// EnvironmentResourceManagerVMDNSSuffix ...
EnvironmentResourceManagerVMDNSSuffix EnvironmentProperty = "resourceManagerVMDNSSuffix"
// EnvironmentContainerRegistryDNSSuffix ...
EnvironmentContainerRegistryDNSSuffix EnvironmentProperty = "containerRegistryDNSSuffix"
// EnvironmentTokenAudience ...
EnvironmentTokenAudience EnvironmentProperty = "tokenAudience"
)
// OverrideProperty represents a property name and value that clients can override
type OverrideProperty struct {
Key EnvironmentProperty
Value string
}
// EnvironmentFromURL loads an Environment from a URL
// This function is particularly useful in the Hybrid Cloud model, where one may define their own
// endpoints.
func EnvironmentFromURL(resourceManagerEndpoint string, properties ...OverrideProperty) (environment Environment, err error) {
var metadataEnvProperties environmentMetadataInfo
if resourceManagerEndpoint == "" {
return environment, fmt.Errorf("Metadata resource manager endpoint is empty")
}
if metadataEnvProperties, err = retrieveMetadataEnvironment(resourceManagerEndpoint); err != nil {
return environment, err
}
// Give priority to user's override values
overrideProperties(&environment, properties)
if environment.Name == "" {
environment.Name = "HybridEnvironment"
}
stampDNSSuffix := environment.StorageEndpointSuffix
if stampDNSSuffix == "" {
stampDNSSuffix = strings.TrimSuffix(strings.TrimPrefix(strings.Replace(resourceManagerEndpoint, strings.Split(resourceManagerEndpoint, ".")[0], "", 1), "."), "/")
environment.StorageEndpointSuffix = stampDNSSuffix
}
if environment.KeyVaultDNSSuffix == "" {
environment.KeyVaultDNSSuffix = fmt.Sprintf("%s.%s", "vault", stampDNSSuffix)
}
if environment.KeyVaultEndpoint == "" {
environment.KeyVaultEndpoint = fmt.Sprintf("%s%s", "https://", environment.KeyVaultDNSSuffix)
}
if environment.TokenAudience == "" {
environment.TokenAudience = metadataEnvProperties.Authentication.Audiences[0]
}
if environment.ActiveDirectoryEndpoint == "" {
environment.ActiveDirectoryEndpoint = metadataEnvProperties.Authentication.LoginEndpoint
}
if environment.ResourceManagerEndpoint == "" {
environment.ResourceManagerEndpoint = resourceManagerEndpoint
}
if environment.GalleryEndpoint == "" {
environment.GalleryEndpoint = metadataEnvProperties.GalleryEndpoint
}
if environment.GraphEndpoint == "" {
environment.GraphEndpoint = metadataEnvProperties.GraphEndpoint
}
return environment, nil
}
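A hedged sketch of calling the new EnvironmentFromURL helper against a hybrid resource manager endpoint; the metadata URL and the override value below are illustrative only:

```go
package main

import (
	"fmt"
	"log"

	"github.com/Azure/go-autorest/autorest/azure"
)

func main() {
	// Overrides take priority over values derived from /metadata/endpoints.
	env, err := azure.EnvironmentFromURL(
		"https://management.local.azurestack.external/", // invented endpoint
		azure.OverrideProperty{Key: azure.EnvironmentName, Value: "MyAzureStack"},
	)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(env.TokenAudience, env.KeyVaultEndpoint)
}
```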
func overrideProperties(environment *Environment, properties []OverrideProperty) {
for _, property := range properties {
switch property.Key {
case EnvironmentName:
{
environment.Name = property.Value
}
case EnvironmentManagementPortalURL:
{
environment.ManagementPortalURL = property.Value
}
case EnvironmentPublishSettingsURL:
{
environment.PublishSettingsURL = property.Value
}
case EnvironmentServiceManagementEndpoint:
{
environment.ServiceManagementEndpoint = property.Value
}
case EnvironmentResourceManagerEndpoint:
{
environment.ResourceManagerEndpoint = property.Value
}
case EnvironmentActiveDirectoryEndpoint:
{
environment.ActiveDirectoryEndpoint = property.Value
}
case EnvironmentGalleryEndpoint:
{
environment.GalleryEndpoint = property.Value
}
case EnvironmentKeyVaultEndpoint:
{
environment.KeyVaultEndpoint = property.Value
}
case EnvironmentGraphEndpoint:
{
environment.GraphEndpoint = property.Value
}
case EnvironmentServiceBusEndpoint:
{
environment.ServiceBusEndpoint = property.Value
}
case EnvironmentBatchManagementEndpoint:
{
environment.BatchManagementEndpoint = property.Value
}
case EnvironmentStorageEndpointSuffix:
{
environment.StorageEndpointSuffix = property.Value
}
case EnvironmentSQLDatabaseDNSSuffix:
{
environment.SQLDatabaseDNSSuffix = property.Value
}
case EnvironmentTrafficManagerDNSSuffix:
{
environment.TrafficManagerDNSSuffix = property.Value
}
case EnvironmentKeyVaultDNSSuffix:
{
environment.KeyVaultDNSSuffix = property.Value
}
case EnvironmentServiceBusEndpointSuffix:
{
environment.ServiceBusEndpointSuffix = property.Value
}
case EnvironmentServiceManagementVMDNSSuffix:
{
environment.ServiceManagementVMDNSSuffix = property.Value
}
case EnvironmentResourceManagerVMDNSSuffix:
{
environment.ResourceManagerVMDNSSuffix = property.Value
}
case EnvironmentContainerRegistryDNSSuffix:
{
environment.ContainerRegistryDNSSuffix = property.Value
}
case EnvironmentTokenAudience:
{
environment.TokenAudience = property.Value
}
}
}
}
func retrieveMetadataEnvironment(endpoint string) (environment environmentMetadataInfo, err error) {
client := autorest.NewClientWithUserAgent("")
managementEndpoint := fmt.Sprintf("%s%s", strings.TrimSuffix(endpoint, "/"), "/metadata/endpoints?api-version=1.0")
req, _ := http.NewRequest("GET", managementEndpoint, nil)
response, err := client.Do(req)
if err != nil {
return environment, err
}
defer response.Body.Close()
jsonResponse, err := ioutil.ReadAll(response.Body)
if err != nil {
return environment, err
}
err = json.Unmarshal(jsonResponse, &environment)
return environment, err
}

View File

@ -1,3 +1,17 @@
// Copyright 2017 Microsoft Corporation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package azure package azure
import ( import (
@ -30,7 +44,7 @@ func DoRetryWithRegistration(client autorest.Client) autorest.SendDecorator {
return resp, err return resp, err
} }
if resp.StatusCode != http.StatusConflict { if resp.StatusCode != http.StatusConflict || client.SkipResourceProviderRegistration {
return resp, err return resp, err
} }
var re RequestError var re RequestError
@ -41,25 +55,23 @@ func DoRetryWithRegistration(client autorest.Client) autorest.SendDecorator {
if err != nil { if err != nil {
return resp, err return resp, err
} }
err = re
if re.ServiceError != nil && re.ServiceError.Code == "MissingSubscriptionRegistration" { if re.ServiceError != nil && re.ServiceError.Code == "MissingSubscriptionRegistration" {
err = register(client, r, re) regErr := register(client, r, re)
if err != nil { if regErr != nil {
return resp, fmt.Errorf("failed auto registering Resource Provider: %s", err) return resp, fmt.Errorf("failed auto registering Resource Provider: %s. Original error: %s", regErr, err)
} }
} }
} }
return resp, errors.New("failed request and resource provider registration") return resp, fmt.Errorf("failed request: %s", err)
}) })
} }
} }
func getProvider(re RequestError) (string, error) { func getProvider(re RequestError) (string, error) {
if re.ServiceError != nil { if re.ServiceError != nil && len(re.ServiceError.Details) > 0 {
if re.ServiceError.Details != nil && len(*re.ServiceError.Details) > 0 { return re.ServiceError.Details[0]["target"].(string), nil
detail := (*re.ServiceError.Details)[0].(map[string]interface{})
return detail["target"].(string), nil
}
} }
return "", errors.New("provider was not found in the response") return "", errors.New("provider was not found in the response")
} }
@ -103,7 +115,7 @@ func register(client autorest.Client, originalReq *http.Request, re RequestError
if err != nil { if err != nil {
return err return err
} }
req.Cancel = originalReq.Cancel req = req.WithContext(originalReq.Context())
resp, err := autorest.SendWithSender(client, req, resp, err := autorest.SendWithSender(client, req,
autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...),
@ -142,9 +154,9 @@ func register(client autorest.Client, originalReq *http.Request, re RequestError
if err != nil { if err != nil {
return err return err
} }
req.Cancel = originalReq.Cancel req = req.WithContext(originalReq.Context())
resp, err := autorest.SendWithSender(client.Sender, req, resp, err := autorest.SendWithSender(client, req,
autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...), autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...),
) )
if err != nil { if err != nil {
@ -166,9 +178,9 @@ func register(client autorest.Client, originalReq *http.Request, re RequestError
break break
} }
delayed := autorest.DelayWithRetryAfter(resp, originalReq.Cancel) delayed := autorest.DelayWithRetryAfter(resp, originalReq.Context().Done())
if !delayed { if !delayed && !autorest.DelayForBackoff(client.PollingDelay, 0, originalReq.Context().Done()) {
autorest.DelayForBackoff(client.PollingDelay, 0, originalReq.Cancel) return originalReq.Context().Err()
} }
} }
if !(time.Since(now) < client.PollingDuration) { if !(time.Since(now) < client.PollingDuration) {

View File

@ -35,6 +35,9 @@ const (
// DefaultRetryAttempts is number of attempts for retry status codes (5xx). // DefaultRetryAttempts is number of attempts for retry status codes (5xx).
DefaultRetryAttempts = 3 DefaultRetryAttempts = 3
// DefaultRetryDuration is the duration to wait between retries.
DefaultRetryDuration = 30 * time.Second
) )
var ( var (
@ -163,6 +166,9 @@ type Client struct {
UserAgent string UserAgent string
Jar http.CookieJar Jar http.CookieJar
// Set to true to skip attempted registration of resource providers (false by default).
SkipResourceProviderRegistration bool
} }
// NewClientWithUserAgent returns an instance of a Client with the UserAgent set to the passed // NewClientWithUserAgent returns an instance of a Client with the UserAgent set to the passed
@ -172,9 +178,10 @@ func NewClientWithUserAgent(ua string) Client {
PollingDelay: DefaultPollingDelay, PollingDelay: DefaultPollingDelay,
PollingDuration: DefaultPollingDuration, PollingDuration: DefaultPollingDuration,
RetryAttempts: DefaultRetryAttempts, RetryAttempts: DefaultRetryAttempts,
RetryDuration: 30 * time.Second, RetryDuration: DefaultRetryDuration,
UserAgent: defaultUserAgent, UserAgent: defaultUserAgent,
} }
c.Sender = c.sender()
c.AddToUserAgent(ua) c.AddToUserAgent(ua)
return c return c
} }
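As a short sketch of how a caller might opt out of the automatic registration performed by DoRetryWithRegistration above (statement-level snippet; assumes `import "github.com/Azure/go-autorest/autorest"`, and the user agent string is illustrative):

```go
c := autorest.NewClientWithUserAgent("example-client/1.0")
// Return 409 MissingSubscriptionRegistration errors to the caller instead of
// attempting to register the resource provider automatically.
c.SkipResourceProviderRegistration = true
// The retry defaults are now exported constants.
c.RetryAttempts = autorest.DefaultRetryAttempts
c.RetryDuration = autorest.DefaultRetryDuration
```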
@ -196,15 +203,22 @@ func (c Client) Do(r *http.Request) (*http.Response, error) {
r, _ = Prepare(r, r, _ = Prepare(r,
WithUserAgent(c.UserAgent)) WithUserAgent(c.UserAgent))
} }
// NOTE: c.WithInspection() must be last in the list so that it can inspect all preceding operations
r, err := Prepare(r, r, err := Prepare(r,
c.WithInspection(), c.WithAuthorization(),
c.WithAuthorization()) c.WithInspection())
if err != nil { if err != nil {
return nil, NewErrorWithError(err, "autorest/Client", "Do", nil, "Preparing request failed") var resp *http.Response
if detErr, ok := err.(DetailedError); ok {
// if the authorization failed (e.g. invalid credentials) there will
// be a response associated with the error, be sure to return it.
resp = detErr.Response
}
return resp, NewErrorWithError(err, "autorest/Client", "Do", nil, "Preparing request failed")
} }
resp, err := SendWithSender(c.sender(), r) resp, err := SendWithSender(c.sender(), r)
Respond(resp, Respond(resp, c.ByInspecting())
c.ByInspecting())
return resp, err return resp, err
} }

View File

@ -27,8 +27,9 @@ import (
) )
const ( const (
mimeTypeJSON = "application/json" mimeTypeJSON = "application/json"
mimeTypeFormPost = "application/x-www-form-urlencoded" mimeTypeOctetStream = "application/octet-stream"
mimeTypeFormPost = "application/x-www-form-urlencoded"
headerAuthorization = "Authorization" headerAuthorization = "Authorization"
headerContentType = "Content-Type" headerContentType = "Content-Type"
@ -112,6 +113,28 @@ func WithHeader(header string, value string) PrepareDecorator {
} }
} }
// WithHeaders returns a PrepareDecorator that sets the specified HTTP headers of the http.Request to
// the passed value. It canonicalizes the passed headers name (via http.CanonicalHeaderKey) before
// adding them.
func WithHeaders(headers map[string]interface{}) PrepareDecorator {
h := ensureValueStrings(headers)
return func(p Preparer) Preparer {
return PreparerFunc(func(r *http.Request) (*http.Request, error) {
r, err := p.Prepare(r)
if err == nil {
if r.Header == nil {
r.Header = make(http.Header)
}
for name, value := range h {
r.Header.Set(http.CanonicalHeaderKey(name), value)
}
}
return r, err
})
}
}
// WithBearerAuthorization returns a PrepareDecorator that adds an HTTP Authorization header whose // WithBearerAuthorization returns a PrepareDecorator that adds an HTTP Authorization header whose
// value is "Bearer " followed by the supplied token. // value is "Bearer " followed by the supplied token.
func WithBearerAuthorization(token string) PrepareDecorator { func WithBearerAuthorization(token string) PrepareDecorator {
@ -142,6 +165,11 @@ func AsJSON() PrepareDecorator {
return AsContentType(mimeTypeJSON) return AsContentType(mimeTypeJSON)
} }
// AsOctetStream returns a PrepareDecorator that adds the "application/octet-stream" Content-Type header.
func AsOctetStream() PrepareDecorator {
return AsContentType(mimeTypeOctetStream)
}
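A small sketch combining the two new preparer decorators; the header names and values are invented, and imports of `fmt`, `log`, `net/http`, and `github.com/Azure/go-autorest/autorest` are assumed:

```go
req, err := autorest.Prepare(&http.Request{},
	autorest.AsOctetStream(), // Content-Type: application/octet-stream
	autorest.WithHeaders(map[string]interface{}{
		"x-ms-client-request-id": "00000000-0000-0000-0000-000000000000",
		"Accept":                 "application/json",
	}),
)
if err != nil {
	log.Fatal(err)
}
fmt.Println(req.Header)
```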
// WithMethod returns a PrepareDecorator that sets the HTTP method of the passed request. The // WithMethod returns a PrepareDecorator that sets the HTTP method of the passed request. The
// decorator does not validate that the passed method string is a known HTTP method. // decorator does not validate that the passed method string is a known HTTP method.
func WithMethod(method string) PrepareDecorator { func WithMethod(method string) PrepareDecorator {
@ -215,6 +243,11 @@ func WithFormData(v url.Values) PrepareDecorator {
r, err := p.Prepare(r) r, err := p.Prepare(r)
if err == nil { if err == nil {
s := v.Encode() s := v.Encode()
if r.Header == nil {
r.Header = make(http.Header)
}
r.Header.Set(http.CanonicalHeaderKey(headerContentType), mimeTypeFormPost)
r.ContentLength = int64(len(s)) r.ContentLength = int64(len(s))
r.Body = ioutil.NopCloser(strings.NewReader(s)) r.Body = ioutil.NopCloser(strings.NewReader(s))
} }
@ -430,11 +463,16 @@ func WithQueryParameters(queryParameters map[string]interface{}) PrepareDecorato
if r.URL == nil { if r.URL == nil {
return r, NewError("autorest", "WithQueryParameters", "Invoked with a nil URL") return r, NewError("autorest", "WithQueryParameters", "Invoked with a nil URL")
} }
v := r.URL.Query() v := r.URL.Query()
for key, value := range parameters { for key, value := range parameters {
v.Add(key, value) d, err := url.QueryUnescape(value)
if err != nil {
return r, err
}
v.Add(key, d)
} }
r.URL.RawQuery = createQuery(v) r.URL.RawQuery = v.Encode()
} }
return r, err return r, err
}) })

View File

@ -86,7 +86,7 @@ func SendWithSender(s Sender, r *http.Request, decorators ...SendDecorator) (*ht
func AfterDelay(d time.Duration) SendDecorator { func AfterDelay(d time.Duration) SendDecorator {
return func(s Sender) Sender { return func(s Sender) Sender {
return SenderFunc(func(r *http.Request) (*http.Response, error) { return SenderFunc(func(r *http.Request) (*http.Response, error) {
if !DelayForBackoff(d, 0, r.Cancel) { if !DelayForBackoff(d, 0, r.Context().Done()) {
return nil, fmt.Errorf("autorest: AfterDelay canceled before full delay") return nil, fmt.Errorf("autorest: AfterDelay canceled before full delay")
} }
return s.Do(r) return s.Do(r)
@ -165,7 +165,7 @@ func DoPollForStatusCodes(duration time.Duration, delay time.Duration, codes ...
resp, err = s.Do(r) resp, err = s.Do(r)
if err == nil && ResponseHasStatusCode(resp, codes...) { if err == nil && ResponseHasStatusCode(resp, codes...) {
r, err = NewPollingRequest(resp, r.Cancel) r, err = NewPollingRequestWithContext(r.Context(), resp)
for err == nil && ResponseHasStatusCode(resp, codes...) { for err == nil && ResponseHasStatusCode(resp, codes...) {
Respond(resp, Respond(resp,
@ -198,7 +198,9 @@ func DoRetryForAttempts(attempts int, backoff time.Duration) SendDecorator {
if err == nil { if err == nil {
return resp, err return resp, err
} }
DelayForBackoff(backoff, attempt, r.Cancel) if !DelayForBackoff(backoff, attempt, r.Context().Done()) {
return nil, r.Context().Err()
}
} }
return resp, err return resp, err
}) })
@ -215,18 +217,25 @@ func DoRetryForStatusCodes(attempts int, backoff time.Duration, codes ...int) Se
rr := NewRetriableRequest(r) rr := NewRetriableRequest(r)
// Increment to add the first call (attempts denotes number of retries) // Increment to add the first call (attempts denotes number of retries)
attempts++ attempts++
for attempt := 0; attempt < attempts; attempt++ { for attempt := 0; attempt < attempts; {
err = rr.Prepare() err = rr.Prepare()
if err != nil { if err != nil {
return resp, err return resp, err
} }
resp, err = s.Do(rr.Request()) resp, err = s.Do(rr.Request())
if err != nil || !ResponseHasStatusCode(resp, codes...) { // we want to retry if err is not nil (e.g. transient network failure). note that for failed authentication
// resp and err will both have a value, so in this case we don't want to retry as it will never succeed.
if err == nil && !ResponseHasStatusCode(resp, codes...) || IsTokenRefreshError(err) {
return resp, err return resp, err
} }
delayed := DelayWithRetryAfter(resp, r.Cancel) delayed := DelayWithRetryAfter(resp, r.Context().Done())
if !delayed { if !delayed && !DelayForBackoff(backoff, attempt, r.Context().Done()) {
DelayForBackoff(backoff, attempt, r.Cancel) return nil, r.Context().Err()
}
// don't count a 429 against the number of attempts
// so that we continue to retry until it succeeds
if resp == nil || resp.StatusCode != http.StatusTooManyRequests {
attempt++
} }
} }
return resp, err return resp, err
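A minimal sketch of the decorator in use, reflecting the behaviour described in the comments above (429 responses honour Retry-After and do not consume an attempt); the helper name is invented, and imports of `net/http` and `github.com/Azure/go-autorest/autorest` are assumed:

```go
func sendWithRetry(client autorest.Client, req *http.Request) (*http.Response, error) {
	return autorest.SendWithSender(client, req,
		autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration,
			autorest.StatusCodesForRetry...),
	)
}
```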
@ -237,6 +246,9 @@ func DoRetryForStatusCodes(attempts int, backoff time.Duration, codes ...int) Se
// DelayWithRetryAfter invokes time.After for the duration specified in the "Retry-After" header in // DelayWithRetryAfter invokes time.After for the duration specified in the "Retry-After" header in
// responses with status code 429 // responses with status code 429
func DelayWithRetryAfter(resp *http.Response, cancel <-chan struct{}) bool { func DelayWithRetryAfter(resp *http.Response, cancel <-chan struct{}) bool {
if resp == nil {
return false
}
retryAfter, _ := strconv.Atoi(resp.Header.Get("Retry-After")) retryAfter, _ := strconv.Atoi(resp.Header.Get("Retry-After"))
if resp.StatusCode == http.StatusTooManyRequests && retryAfter > 0 { if resp.StatusCode == http.StatusTooManyRequests && retryAfter > 0 {
select { select {
@ -267,7 +279,9 @@ func DoRetryForDuration(d time.Duration, backoff time.Duration) SendDecorator {
if err == nil { if err == nil {
return resp, err return resp, err
} }
DelayForBackoff(backoff, attempt, r.Cancel) if !DelayForBackoff(backoff, attempt, r.Context().Done()) {
return nil, r.Context().Err()
}
} }
return resp, err return resp, err
}) })

View File

@ -20,10 +20,12 @@ import (
"encoding/xml" "encoding/xml"
"fmt" "fmt"
"io" "io"
"net/http"
"net/url" "net/url"
"reflect" "reflect"
"sort"
"strings" "strings"
"github.com/Azure/go-autorest/autorest/adal"
) )
// EncodedAs is a series of constants specifying various data encodings // EncodedAs is a series of constants specifying various data encodings
@ -137,13 +139,38 @@ func MapToValues(m map[string]interface{}) url.Values {
return v return v
} }
// String method converts interface v to string. If interface is a list, it // AsStringSlice method converts interface{} to []string. It expects the
// joins list elements using separator. // parameter to be a slice or array whose elements have an underlying
func String(v interface{}, sep ...string) string { // type of string.
if len(sep) > 0 { func AsStringSlice(s interface{}) ([]string, error) {
return ensureValueString(strings.Join(v.([]string), sep[0])) v := reflect.ValueOf(s)
if v.Kind() != reflect.Slice && v.Kind() != reflect.Array {
return nil, NewError("autorest", "AsStringSlice", "the value's type is not an array.")
} }
return ensureValueString(v) stringSlice := make([]string, 0, v.Len())
for i := 0; i < v.Len(); i++ {
stringSlice = append(stringSlice, v.Index(i).String())
}
return stringSlice, nil
}
// String method converts interface v to string. If interface is a list, it
// joins list elements using the separator. Note that only sep[0] will be used for
// joining if any separator is specified.
func String(v interface{}, sep ...string) string {
if len(sep) == 0 {
return ensureValueString(v)
}
stringSlice, ok := v.([]string)
if ok == false {
var err error
stringSlice, err = AsStringSlice(v)
if err != nil {
panic(fmt.Sprintf("autorest: Couldn't convert value to a string %s.", err))
}
}
return ensureValueString(strings.Join(stringSlice, sep[0]))
} }
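For illustration, how the reworked String and the new AsStringSlice behave; the input values are invented:

```go
package main

import (
	"fmt"

	"github.com/Azure/go-autorest/autorest"
)

func main() {
	// Only the first separator is used when joining.
	fmt.Println(autorest.String([]string{"eastus", "westus"}, ",")) // eastus,westus

	// AsStringSlice accepts any slice or array whose element kind is string.
	s, err := autorest.AsStringSlice([2]string{"a", "b"})
	fmt.Println(s, err) // [a b] <nil>
}
```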
// Encode method encodes url path and query parameters. // Encode method encodes url path and query parameters.
@ -167,26 +194,25 @@ func queryEscape(s string) string {
return url.QueryEscape(s) return url.QueryEscape(s)
} }
// This method is same as Encode() method of "net/url" go package, // ChangeToGet turns the specified http.Request into a GET (it assumes it wasn't).
// except it does not encode the query parameters because they // This is mainly useful for long-running operations that use the Azure-AsyncOperation
// already come encoded. It formats values map in query format (bar=foo&a=b). // header, so we change the initial PUT into a GET to retrieve the final result.
func createQuery(v url.Values) string { func ChangeToGet(req *http.Request) *http.Request {
var buf bytes.Buffer req.Method = "GET"
keys := make([]string, 0, len(v)) req.Body = nil
for k := range v { req.ContentLength = 0
keys = append(keys, k) req.Header.Del("Content-Length")
} return req
sort.Strings(keys) }
for _, k := range keys {
vs := v[k] // IsTokenRefreshError returns true if the specified error implements the TokenRefreshError
prefix := url.QueryEscape(k) + "=" // interface. If err is a DetailedError it will walk the chain of Original errors.
for _, v := range vs { func IsTokenRefreshError(err error) bool {
if buf.Len() > 0 { if _, ok := err.(adal.TokenRefreshError); ok {
buf.WriteByte('&') return true
} }
buf.WriteString(prefix) if de, ok := err.(DetailedError); ok {
buf.WriteString(v) return IsTokenRefreshError(de.Original)
} }
} return false
return buf.String()
} }

View File

@ -14,36 +14,7 @@ package autorest
// See the License for the specific language governing permissions and // See the License for the specific language governing permissions and
// limitations under the License. // limitations under the License.
import (
"bytes"
"fmt"
"strings"
"sync"
)
const (
major = 8
minor = 0
patch = 0
tag = ""
)
var once sync.Once
var version string
// Version returns the semantic version (see http://semver.org). // Version returns the semantic version (see http://semver.org).
func Version() string { func Version() string {
once.Do(func() { return "v10.5.0"
semver := fmt.Sprintf("%d.%d.%d", major, minor, patch)
verBuilder := bytes.NewBufferString(semver)
if tag != "" && tag != "-" {
updated := strings.TrimPrefix(tag, "-")
_, err := verBuilder.WriteString("-" + updated)
if err == nil {
verBuilder = bytes.NewBufferString(semver)
}
}
version = verBuilder.String()
})
return version
} }

View File

@ -1,5 +0,0 @@
*.sublime-*
.DS_Store
*.swp
*.swo
tags

View File

@ -1,12 +0,0 @@
Copyright (c) 2012, Martin Angers
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
* Neither the name of the author nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

View File

@ -1,185 +0,0 @@
# Purell
Purell is a tiny Go library to normalize URLs. It returns a pure URL. Pure-ell. Sanitizer and all. Yeah, I know...
Based on the [wikipedia paper][wiki] and the [RFC 3986 document][rfc].
[![build status](https://secure.travis-ci.org/PuerkitoBio/purell.png)](http://travis-ci.org/PuerkitoBio/purell)
## Install
`go get github.com/PuerkitoBio/purell`
## Changelog
* **2016-07-27 (v1.0.0)** : Normalize IDN to ASCII (thanks to @zenovich).
* **2015-02-08** : Add fix for relative paths issue ([PR #5][pr5]) and add fix for unnecessary encoding of reserved characters ([see issue #7][iss7]).
* **v0.2.0** : Add benchmarks, Attempt IDN support.
* **v0.1.0** : Initial release.
## Examples
From `example_test.go` (note that in your code, you would import "github.com/PuerkitoBio/purell", and would prefix references to its methods and constants with "purell."):
```go
package purell
import (
"fmt"
"net/url"
)
func ExampleNormalizeURLString() {
if normalized, err := NormalizeURLString("hTTp://someWEBsite.com:80/Amazing%3f/url/",
FlagLowercaseScheme|FlagLowercaseHost|FlagUppercaseEscapes); err != nil {
panic(err)
} else {
fmt.Print(normalized)
}
// Output: http://somewebsite.com:80/Amazing%3F/url/
}
func ExampleMustNormalizeURLString() {
normalized := MustNormalizeURLString("hTTpS://someWEBsite.com:443/Amazing%fa/url/",
FlagsUnsafeGreedy)
fmt.Print(normalized)
// Output: http://somewebsite.com/Amazing%FA/url
}
func ExampleNormalizeURL() {
if u, err := url.Parse("Http://SomeUrl.com:8080/a/b/.././c///g?c=3&a=1&b=9&c=0#target"); err != nil {
panic(err)
} else {
normalized := NormalizeURL(u, FlagsUsuallySafeGreedy|FlagRemoveDuplicateSlashes|FlagRemoveFragment)
fmt.Print(normalized)
}
// Output: http://someurl.com:8080/a/c/g?c=3&a=1&b=9&c=0
}
```
## API
As seen in the examples above, purell offers three methods, `NormalizeURLString(string, NormalizationFlags) (string, error)`, `MustNormalizeURLString(string, NormalizationFlags) (string)` and `NormalizeURL(*url.URL, NormalizationFlags) (string)`. They all normalize the provided URL based on the specified flags. Here are the available flags:
```go
const (
// Safe normalizations
FlagLowercaseScheme NormalizationFlags = 1 << iota // HTTP://host -> http://host, applied by default in Go1.1
FlagLowercaseHost // http://HOST -> http://host
FlagUppercaseEscapes // http://host/t%ef -> http://host/t%EF
FlagDecodeUnnecessaryEscapes // http://host/t%41 -> http://host/tA
FlagEncodeNecessaryEscapes // http://host/!"#$ -> http://host/%21%22#$
FlagRemoveDefaultPort // http://host:80 -> http://host
FlagRemoveEmptyQuerySeparator // http://host/path? -> http://host/path
// Usually safe normalizations
FlagRemoveTrailingSlash // http://host/path/ -> http://host/path
FlagAddTrailingSlash // http://host/path -> http://host/path/ (should choose only one of these add/remove trailing slash flags)
FlagRemoveDotSegments // http://host/path/./a/b/../c -> http://host/path/a/c
// Unsafe normalizations
FlagRemoveDirectoryIndex // http://host/path/index.html -> http://host/path/
FlagRemoveFragment // http://host/path#fragment -> http://host/path
FlagForceHTTP // https://host -> http://host
FlagRemoveDuplicateSlashes // http://host/path//a///b -> http://host/path/a/b
FlagRemoveWWW // http://www.host/ -> http://host/
FlagAddWWW // http://host/ -> http://www.host/ (should choose only one of these add/remove WWW flags)
FlagSortQuery // http://host/path?c=3&b=2&a=1&b=1 -> http://host/path?a=1&b=1&b=2&c=3
	// Normalizations not in the wikipedia article, required to cover test cases
// submitted by jehiah
FlagDecodeDWORDHost // http://1113982867 -> http://66.102.7.147
FlagDecodeOctalHost // http://0102.0146.07.0223 -> http://66.102.7.147
FlagDecodeHexHost // http://0x42660793 -> http://66.102.7.147
FlagRemoveUnnecessaryHostDots // http://.host../path -> http://host/path
FlagRemoveEmptyPortSeparator // http://host:/path -> http://host/path
// Convenience set of safe normalizations
FlagsSafe NormalizationFlags = FlagLowercaseHost | FlagLowercaseScheme | FlagUppercaseEscapes | FlagDecodeUnnecessaryEscapes | FlagEncodeNecessaryEscapes | FlagRemoveDefaultPort | FlagRemoveEmptyQuerySeparator
// For convenience sets, "greedy" uses the "remove trailing slash" and "remove www. prefix" flags,
// while "non-greedy" uses the "add (or keep) the trailing slash" and "add www. prefix".
// Convenience set of usually safe normalizations (includes FlagsSafe)
FlagsUsuallySafeGreedy NormalizationFlags = FlagsSafe | FlagRemoveTrailingSlash | FlagRemoveDotSegments
FlagsUsuallySafeNonGreedy NormalizationFlags = FlagsSafe | FlagAddTrailingSlash | FlagRemoveDotSegments
// Convenience set of unsafe normalizations (includes FlagsUsuallySafe)
FlagsUnsafeGreedy NormalizationFlags = FlagsUsuallySafeGreedy | FlagRemoveDirectoryIndex | FlagRemoveFragment | FlagForceHTTP | FlagRemoveDuplicateSlashes | FlagRemoveWWW | FlagSortQuery
FlagsUnsafeNonGreedy NormalizationFlags = FlagsUsuallySafeNonGreedy | FlagRemoveDirectoryIndex | FlagRemoveFragment | FlagForceHTTP | FlagRemoveDuplicateSlashes | FlagAddWWW | FlagSortQuery
// Convenience set of all available flags
FlagsAllGreedy = FlagsUnsafeGreedy | FlagDecodeDWORDHost | FlagDecodeOctalHost | FlagDecodeHexHost | FlagRemoveUnnecessaryHostDots | FlagRemoveEmptyPortSeparator
FlagsAllNonGreedy = FlagsUnsafeNonGreedy | FlagDecodeDWORDHost | FlagDecodeOctalHost | FlagDecodeHexHost | FlagRemoveUnnecessaryHostDots | FlagRemoveEmptyPortSeparator
)
```
For convenience, the set of flags `FlagsSafe`, `FlagsUsuallySafe[Greedy|NonGreedy]`, `FlagsUnsafe[Greedy|NonGreedy]` and `FlagsAll[Greedy|NonGreedy]` are provided for the similarly grouped normalizations on [wikipedia's URL normalization page][wiki]. You can add (using the bitwise OR `|` operator) or remove (using the bitwise AND NOT `&^` operator) individual flags from the sets if required, to build your own custom set.
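For instance, a custom set might be composed like this (a one-line sketch; assumes `import "github.com/PuerkitoBio/purell"`):

```go
// Usually-safe greedy normalizations, minus trailing-slash removal, plus query sorting.
var customFlags = (purell.FlagsUsuallySafeGreedy &^ purell.FlagRemoveTrailingSlash) | purell.FlagSortQuery
```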
The [full godoc reference is available on gopkgdoc][godoc].
Some things to note:
* `FlagDecodeUnnecessaryEscapes`, `FlagEncodeNecessaryEscapes`, `FlagUppercaseEscapes` and `FlagRemoveEmptyQuerySeparator` are always implicitly set, because internally, the URL string is parsed as an URL object, which automatically decodes unnecessary escapes, uppercases and encodes necessary ones, and removes empty query separators (an unnecessary `?` at the end of the url). So this operation cannot **not** be done. For this reason, `FlagRemoveEmptyQuerySeparator` (as well as the other three) has been included in the `FlagsSafe` convenience set, instead of `FlagsUnsafe`, where Wikipedia puts it.
* The `FlagDecodeUnnecessaryEscapes` decodes the following escapes (*from -> to*):
- %24 -> $
- %26 -> &
- %2B-%3B -> +,-./0123456789:;
- %3D -> =
- %40-%5A -> @ABCDEFGHIJKLMNOPQRSTUVWXYZ
- %5F -> _
- %61-%7A -> abcdefghijklmnopqrstuvwxyz
- %7E -> ~
* When the `NormalizeURL` function is used (passing an URL object), this source URL object is modified (that is, after the call, the URL object will be modified to reflect the normalization).
* The *replace IP with domain name* normalization (`http://208.77.188.166/ → http://www.example.com/`) is obviously not possible for a library without making some network requests. This is not implemented in purell.
* The *remove unused query string parameters* and *remove default query parameters* are also not implemented, since this is a very case-specific normalization, and it is quite trivial to do with an URL object.
### Safe vs Usually Safe vs Unsafe
Purell allows you to control the level of risk you take while normalizing an URL. You can aggressively normalize, play it totally safe, or anything in between.
Consider the following URL:
`HTTPS://www.RooT.com/toto/t%45%1f///a/./b/../c/?z=3&w=2&a=4&w=1#invalid`
Normalizing with the `FlagsSafe` gives:
`https://www.root.com/toto/tE%1F///a/./b/../c/?z=3&w=2&a=4&w=1#invalid`
With the `FlagsUsuallySafeGreedy`:
`https://www.root.com/toto/tE%1F///a/c?z=3&w=2&a=4&w=1#invalid`
And with `FlagsUnsafeGreedy`:
`http://root.com/toto/tE%1F/a/c?a=4&w=1&w=2&z=3`
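A runnable sketch applying the three levels to that URL; the output should correspond to the three example results quoted above:

```go
package main

import (
	"fmt"
	"log"

	"github.com/PuerkitoBio/purell"
)

func main() {
	raw := `HTTPS://www.RooT.com/toto/t%45%1f///a/./b/../c/?z=3&w=2&a=4&w=1#invalid`
	for _, f := range []purell.NormalizationFlags{
		purell.FlagsSafe,
		purell.FlagsUsuallySafeGreedy,
		purell.FlagsUnsafeGreedy,
	} {
		normalized, err := purell.NormalizeURLString(raw, f)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(normalized)
	}
}
```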
## TODOs
* Add a class/default instance to allow specifying custom directory index names? At the moment, removing directory index removes `(^|/)((?:default|index)\.\w{1,4})$`.
## Thanks / Contributions
@rogpeppe
@jehiah
@opennota
@pchristopher1275
@zenovich
## License
The [BSD 3-Clause license][bsd].
[bsd]: http://opensource.org/licenses/BSD-3-Clause
[wiki]: http://en.wikipedia.org/wiki/URL_normalization
[rfc]: http://tools.ietf.org/html/rfc3986#section-6
[godoc]: http://go.pkgdoc.org/github.com/PuerkitoBio/purell
[pr5]: https://github.com/PuerkitoBio/purell/pull/5
[iss7]: https://github.com/PuerkitoBio/purell/issues/7

View File

@ -1,375 +0,0 @@
/*
Package purell offers URL normalization as described on the wikipedia page:
http://en.wikipedia.org/wiki/URL_normalization
*/
package purell
import (
"bytes"
"fmt"
"net/url"
"regexp"
"sort"
"strconv"
"strings"
"github.com/PuerkitoBio/urlesc"
"golang.org/x/net/idna"
"golang.org/x/text/secure/precis"
"golang.org/x/text/unicode/norm"
)
// A set of normalization flags determines how a URL will
// be normalized.
type NormalizationFlags uint
const (
// Safe normalizations
FlagLowercaseScheme NormalizationFlags = 1 << iota // HTTP://host -> http://host, applied by default in Go1.1
FlagLowercaseHost // http://HOST -> http://host
FlagUppercaseEscapes // http://host/t%ef -> http://host/t%EF
FlagDecodeUnnecessaryEscapes // http://host/t%41 -> http://host/tA
FlagEncodeNecessaryEscapes // http://host/!"#$ -> http://host/%21%22#$
FlagRemoveDefaultPort // http://host:80 -> http://host
FlagRemoveEmptyQuerySeparator // http://host/path? -> http://host/path
// Usually safe normalizations
FlagRemoveTrailingSlash // http://host/path/ -> http://host/path
FlagAddTrailingSlash // http://host/path -> http://host/path/ (should choose only one of these add/remove trailing slash flags)
FlagRemoveDotSegments // http://host/path/./a/b/../c -> http://host/path/a/c
// Unsafe normalizations
FlagRemoveDirectoryIndex // http://host/path/index.html -> http://host/path/
FlagRemoveFragment // http://host/path#fragment -> http://host/path
FlagForceHTTP // https://host -> http://host
FlagRemoveDuplicateSlashes // http://host/path//a///b -> http://host/path/a/b
FlagRemoveWWW // http://www.host/ -> http://host/
FlagAddWWW // http://host/ -> http://www.host/ (should choose only one of these add/remove WWW flags)
FlagSortQuery // http://host/path?c=3&b=2&a=1&b=1 -> http://host/path?a=1&b=1&b=2&c=3
// Normalizations not in the wikipedia article, required to cover test cases
// submitted by jehiah
FlagDecodeDWORDHost // http://1113982867 -> http://66.102.7.147
FlagDecodeOctalHost // http://0102.0146.07.0223 -> http://66.102.7.147
FlagDecodeHexHost // http://0x42660793 -> http://66.102.7.147
FlagRemoveUnnecessaryHostDots // http://.host../path -> http://host/path
FlagRemoveEmptyPortSeparator // http://host:/path -> http://host/path
// Convenience set of safe normalizations
FlagsSafe NormalizationFlags = FlagLowercaseHost | FlagLowercaseScheme | FlagUppercaseEscapes | FlagDecodeUnnecessaryEscapes | FlagEncodeNecessaryEscapes | FlagRemoveDefaultPort | FlagRemoveEmptyQuerySeparator
// For convenience sets, "greedy" uses the "remove trailing slash" and "remove www. prefix" flags,
// while "non-greedy" uses the "add (or keep) the trailing slash" and "add www. prefix".
// Convenience set of usually safe normalizations (includes FlagsSafe)
FlagsUsuallySafeGreedy NormalizationFlags = FlagsSafe | FlagRemoveTrailingSlash | FlagRemoveDotSegments
FlagsUsuallySafeNonGreedy NormalizationFlags = FlagsSafe | FlagAddTrailingSlash | FlagRemoveDotSegments
// Convenience set of unsafe normalizations (includes FlagsUsuallySafe)
FlagsUnsafeGreedy NormalizationFlags = FlagsUsuallySafeGreedy | FlagRemoveDirectoryIndex | FlagRemoveFragment | FlagForceHTTP | FlagRemoveDuplicateSlashes | FlagRemoveWWW | FlagSortQuery
FlagsUnsafeNonGreedy NormalizationFlags = FlagsUsuallySafeNonGreedy | FlagRemoveDirectoryIndex | FlagRemoveFragment | FlagForceHTTP | FlagRemoveDuplicateSlashes | FlagAddWWW | FlagSortQuery
// Convenience set of all available flags
FlagsAllGreedy = FlagsUnsafeGreedy | FlagDecodeDWORDHost | FlagDecodeOctalHost | FlagDecodeHexHost | FlagRemoveUnnecessaryHostDots | FlagRemoveEmptyPortSeparator
FlagsAllNonGreedy = FlagsUnsafeNonGreedy | FlagDecodeDWORDHost | FlagDecodeOctalHost | FlagDecodeHexHost | FlagRemoveUnnecessaryHostDots | FlagRemoveEmptyPortSeparator
)
const (
defaultHttpPort = ":80"
defaultHttpsPort = ":443"
)
// Regular expressions used by the normalizations
var rxPort = regexp.MustCompile(`(:\d+)/?$`)
var rxDirIndex = regexp.MustCompile(`(^|/)((?:default|index)\.\w{1,4})$`)
var rxDupSlashes = regexp.MustCompile(`/{2,}`)
var rxDWORDHost = regexp.MustCompile(`^(\d+)((?:\.+)?(?:\:\d*)?)$`)
var rxOctalHost = regexp.MustCompile(`^(0\d*)\.(0\d*)\.(0\d*)\.(0\d*)((?:\.+)?(?:\:\d*)?)$`)
var rxHexHost = regexp.MustCompile(`^0x([0-9A-Fa-f]+)((?:\.+)?(?:\:\d*)?)$`)
var rxHostDots = regexp.MustCompile(`^(.+?)(:\d+)?$`)
var rxEmptyPort = regexp.MustCompile(`:+$`)
// Map of flags to implementation function.
// FlagDecodeUnnecessaryEscapes has no action, since it is done automatically
// by parsing the string as an URL. Same for FlagUppercaseEscapes and FlagRemoveEmptyQuerySeparator.
// Since maps have undefined traversing order, make a slice of ordered keys
var flagsOrder = []NormalizationFlags{
FlagLowercaseScheme,
FlagLowercaseHost,
FlagRemoveDefaultPort,
FlagRemoveDirectoryIndex,
FlagRemoveDotSegments,
FlagRemoveFragment,
FlagForceHTTP, // Must be after remove default port (because https=443/http=80)
FlagRemoveDuplicateSlashes,
FlagRemoveWWW,
FlagAddWWW,
FlagSortQuery,
FlagDecodeDWORDHost,
FlagDecodeOctalHost,
FlagDecodeHexHost,
FlagRemoveUnnecessaryHostDots,
FlagRemoveEmptyPortSeparator,
FlagRemoveTrailingSlash, // These two (add/remove trailing slash) must be last
FlagAddTrailingSlash,
}
// ... and then the map, where order is unimportant
var flags = map[NormalizationFlags]func(*url.URL){
FlagLowercaseScheme: lowercaseScheme,
FlagLowercaseHost: lowercaseHost,
FlagRemoveDefaultPort: removeDefaultPort,
FlagRemoveDirectoryIndex: removeDirectoryIndex,
FlagRemoveDotSegments: removeDotSegments,
FlagRemoveFragment: removeFragment,
FlagForceHTTP: forceHTTP,
FlagRemoveDuplicateSlashes: removeDuplicateSlashes,
FlagRemoveWWW: removeWWW,
FlagAddWWW: addWWW,
FlagSortQuery: sortQuery,
FlagDecodeDWORDHost: decodeDWORDHost,
FlagDecodeOctalHost: decodeOctalHost,
FlagDecodeHexHost: decodeHexHost,
FlagRemoveUnnecessaryHostDots: removeUnncessaryHostDots,
FlagRemoveEmptyPortSeparator: removeEmptyPortSeparator,
FlagRemoveTrailingSlash: removeTrailingSlash,
FlagAddTrailingSlash: addTrailingSlash,
}
// MustNormalizeURLString returns the normalized string, and panics if an error occurs.
// It takes an URL string as input, as well as the normalization flags.
func MustNormalizeURLString(u string, f NormalizationFlags) string {
result, e := NormalizeURLString(u, f)
if e != nil {
panic(e)
}
return result
}
// NormalizeURLString returns the normalized string, or an error if it can't be parsed into an URL object.
// It takes an URL string as input, as well as the normalization flags.
func NormalizeURLString(u string, f NormalizationFlags) (string, error) {
if parsed, e := url.Parse(u); e != nil {
return "", e
} else {
options := make([]precis.Option, 1, 3)
options[0] = precis.IgnoreCase
if f&FlagLowercaseHost == FlagLowercaseHost {
options = append(options, precis.FoldCase())
}
options = append(options, precis.Norm(norm.NFC))
profile := precis.NewFreeform(options...)
if parsed.Host, e = idna.ToASCII(profile.NewTransformer().String(parsed.Host)); e != nil {
return "", e
}
return NormalizeURL(parsed, f), nil
}
panic("Unreachable code.")
}
// NormalizeURL returns the normalized string.
// It takes a parsed URL object as input, as well as the normalization flags.
func NormalizeURL(u *url.URL, f NormalizationFlags) string {
for _, k := range flagsOrder {
if f&k == k {
flags[k](u)
}
}
return urlesc.Escape(u)
}
func lowercaseScheme(u *url.URL) {
if len(u.Scheme) > 0 {
u.Scheme = strings.ToLower(u.Scheme)
}
}
func lowercaseHost(u *url.URL) {
if len(u.Host) > 0 {
u.Host = strings.ToLower(u.Host)
}
}
func removeDefaultPort(u *url.URL) {
if len(u.Host) > 0 {
scheme := strings.ToLower(u.Scheme)
u.Host = rxPort.ReplaceAllStringFunc(u.Host, func(val string) string {
if (scheme == "http" && val == defaultHttpPort) || (scheme == "https" && val == defaultHttpsPort) {
return ""
}
return val
})
}
}
func removeTrailingSlash(u *url.URL) {
if l := len(u.Path); l > 0 {
if strings.HasSuffix(u.Path, "/") {
u.Path = u.Path[:l-1]
}
} else if l = len(u.Host); l > 0 {
if strings.HasSuffix(u.Host, "/") {
u.Host = u.Host[:l-1]
}
}
}
func addTrailingSlash(u *url.URL) {
if l := len(u.Path); l > 0 {
if !strings.HasSuffix(u.Path, "/") {
u.Path += "/"
}
} else if l = len(u.Host); l > 0 {
if !strings.HasSuffix(u.Host, "/") {
u.Host += "/"
}
}
}
func removeDotSegments(u *url.URL) {
if len(u.Path) > 0 {
var dotFree []string
var lastIsDot bool
sections := strings.Split(u.Path, "/")
for _, s := range sections {
if s == ".." {
if len(dotFree) > 0 {
dotFree = dotFree[:len(dotFree)-1]
}
} else if s != "." {
dotFree = append(dotFree, s)
}
lastIsDot = (s == "." || s == "..")
}
// Special case if host does not end with / and new path does not begin with /
u.Path = strings.Join(dotFree, "/")
if u.Host != "" && !strings.HasSuffix(u.Host, "/") && !strings.HasPrefix(u.Path, "/") {
u.Path = "/" + u.Path
}
// Special case if the last segment was a dot, make sure the path ends with a slash
if lastIsDot && !strings.HasSuffix(u.Path, "/") {
u.Path += "/"
}
}
}
func removeDirectoryIndex(u *url.URL) {
if len(u.Path) > 0 {
u.Path = rxDirIndex.ReplaceAllString(u.Path, "$1")
}
}
func removeFragment(u *url.URL) {
u.Fragment = ""
}
func forceHTTP(u *url.URL) {
if strings.ToLower(u.Scheme) == "https" {
u.Scheme = "http"
}
}
func removeDuplicateSlashes(u *url.URL) {
if len(u.Path) > 0 {
u.Path = rxDupSlashes.ReplaceAllString(u.Path, "/")
}
}
func removeWWW(u *url.URL) {
if len(u.Host) > 0 && strings.HasPrefix(strings.ToLower(u.Host), "www.") {
u.Host = u.Host[4:]
}
}
func addWWW(u *url.URL) {
if len(u.Host) > 0 && !strings.HasPrefix(strings.ToLower(u.Host), "www.") {
u.Host = "www." + u.Host
}
}
func sortQuery(u *url.URL) {
q := u.Query()
if len(q) > 0 {
arKeys := make([]string, len(q))
i := 0
for k, _ := range q {
arKeys[i] = k
i++
}
sort.Strings(arKeys)
buf := new(bytes.Buffer)
for _, k := range arKeys {
sort.Strings(q[k])
for _, v := range q[k] {
if buf.Len() > 0 {
buf.WriteRune('&')
}
buf.WriteString(fmt.Sprintf("%s=%s", k, urlesc.QueryEscape(v)))
}
}
// Rebuild the raw query string
u.RawQuery = buf.String()
}
}
func decodeDWORDHost(u *url.URL) {
if len(u.Host) > 0 {
if matches := rxDWORDHost.FindStringSubmatch(u.Host); len(matches) > 2 {
var parts [4]int64
dword, _ := strconv.ParseInt(matches[1], 10, 0)
for i, shift := range []uint{24, 16, 8, 0} {
parts[i] = dword >> shift & 0xFF
}
u.Host = fmt.Sprintf("%d.%d.%d.%d%s", parts[0], parts[1], parts[2], parts[3], matches[2])
}
}
}
func decodeOctalHost(u *url.URL) {
if len(u.Host) > 0 {
if matches := rxOctalHost.FindStringSubmatch(u.Host); len(matches) > 5 {
var parts [4]int64
for i := 1; i <= 4; i++ {
parts[i-1], _ = strconv.ParseInt(matches[i], 8, 0)
}
u.Host = fmt.Sprintf("%d.%d.%d.%d%s", parts[0], parts[1], parts[2], parts[3], matches[5])
}
}
}
func decodeHexHost(u *url.URL) {
if len(u.Host) > 0 {
if matches := rxHexHost.FindStringSubmatch(u.Host); len(matches) > 2 {
// Conversion is safe because of regex validation
parsed, _ := strconv.ParseInt(matches[1], 16, 0)
// Set host as DWORD (base 10) encoded host
u.Host = fmt.Sprintf("%d%s", parsed, matches[2])
// The rest is the same as decoding a DWORD host
decodeDWORDHost(u)
}
}
}
func removeUnncessaryHostDots(u *url.URL) {
if len(u.Host) > 0 {
if matches := rxHostDots.FindStringSubmatch(u.Host); len(matches) > 1 {
// Trim the leading and trailing dots
u.Host = strings.Trim(matches[1], ".")
if len(matches) > 2 {
u.Host += matches[2]
}
}
}
}
func removeEmptyPortSeparator(u *url.URL) {
if len(u.Host) > 0 {
u.Host = rxEmptyPort.ReplaceAllString(u.Host, "")
}
}

View File

@ -1,11 +0,0 @@
language: go
go:
- 1.4
- tip
install:
- go build .
script:
- go test -v

View File

@ -1,16 +0,0 @@
urlesc [![Build Status](https://travis-ci.org/PuerkitoBio/urlesc.png?branch=master)](https://travis-ci.org/PuerkitoBio/urlesc) [![GoDoc](http://godoc.org/github.com/PuerkitoBio/urlesc?status.svg)](http://godoc.org/github.com/PuerkitoBio/urlesc)
======
Package urlesc implements query escaping as per RFC 3986.
It contains some parts of the net/url package, modified so as to allow
some reserved characters incorrectly escaped by net/url (see [issue 5684](https://github.com/golang/go/issues/5684)).
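For example, a small sketch with arbitrary inputs:

```go
package main

import (
	"fmt"
	"net/url"

	"github.com/PuerkitoBio/urlesc"
)

func main() {
	// '&' is escaped in a query component; spaces become '+', as in net/url.
	fmt.Println(urlesc.QueryEscape("a&b c")) // a%26b+c

	// Escape reassembles a parsed URL, leaving RFC 3986 sub-delims such as
	// '!' and parentheses unescaped in the path.
	u, _ := url.Parse("http://example.com/a!b('c')/d")
	fmt.Println(urlesc.Escape(u))
}
```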
## Install
go get github.com/PuerkitoBio/urlesc
## License
Go license (BSD-3-Clause)

View File

@ -1,180 +0,0 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package urlesc implements query escaping as per RFC 3986.
// It contains some parts of the net/url package, modified so as to allow
// some reserved characters incorrectly escaped by net/url.
// See https://github.com/golang/go/issues/5684
package urlesc
import (
"bytes"
"net/url"
"strings"
)
type encoding int
const (
encodePath encoding = 1 + iota
encodeUserPassword
encodeQueryComponent
encodeFragment
)
// Return true if the specified character should be escaped when
// appearing in a URL string, according to RFC 3986.
func shouldEscape(c byte, mode encoding) bool {
// §2.3 Unreserved characters (alphanum)
if 'A' <= c && c <= 'Z' || 'a' <= c && c <= 'z' || '0' <= c && c <= '9' {
return false
}
switch c {
case '-', '.', '_', '~': // §2.3 Unreserved characters (mark)
return false
// §2.2 Reserved characters (reserved)
case ':', '/', '?', '#', '[', ']', '@', // gen-delims
'!', '$', '&', '\'', '(', ')', '*', '+', ',', ';', '=': // sub-delims
// Different sections of the URL allow a few of
// the reserved characters to appear unescaped.
switch mode {
case encodePath: // §3.3
// The RFC allows sub-delims and : @.
// '/', '[' and ']' can be used to assign meaning to individual path
// segments. This package only manipulates the path as a whole,
// so we allow those as well. That leaves only ? and # to escape.
return c == '?' || c == '#'
case encodeUserPassword: // §3.2.1
// The RFC allows : and sub-delims in
// userinfo. The parsing of userinfo treats ':' as special so we must escape
// all the gen-delims.
return c == ':' || c == '/' || c == '?' || c == '#' || c == '[' || c == ']' || c == '@'
case encodeQueryComponent: // §3.4
// The RFC allows / and ?.
return c != '/' && c != '?'
case encodeFragment: // §4.1
// The RFC text is silent but the grammar allows
// everything, so escape nothing but #
return c == '#'
}
}
// Everything else must be escaped.
return true
}
// QueryEscape escapes the string so it can be safely placed
// inside a URL query.
func QueryEscape(s string) string {
return escape(s, encodeQueryComponent)
}
func escape(s string, mode encoding) string {
spaceCount, hexCount := 0, 0
for i := 0; i < len(s); i++ {
c := s[i]
if shouldEscape(c, mode) {
if c == ' ' && mode == encodeQueryComponent {
spaceCount++
} else {
hexCount++
}
}
}
if spaceCount == 0 && hexCount == 0 {
return s
}
t := make([]byte, len(s)+2*hexCount)
j := 0
for i := 0; i < len(s); i++ {
switch c := s[i]; {
case c == ' ' && mode == encodeQueryComponent:
t[j] = '+'
j++
case shouldEscape(c, mode):
t[j] = '%'
t[j+1] = "0123456789ABCDEF"[c>>4]
t[j+2] = "0123456789ABCDEF"[c&15]
j += 3
default:
t[j] = s[i]
j++
}
}
return string(t)
}
var uiReplacer = strings.NewReplacer(
"%21", "!",
"%27", "'",
"%28", "(",
"%29", ")",
"%2A", "*",
)
// unescapeUserinfo unescapes some characters that need not be escaped as per RFC 3986.
func unescapeUserinfo(s string) string {
return uiReplacer.Replace(s)
}
// Escape reassembles the URL into a valid URL string.
// The general form of the result is one of:
//
// scheme:opaque
// scheme://userinfo@host/path?query#fragment
//
// If u.Opaque is non-empty, String uses the first form;
// otherwise it uses the second form.
//
// In the second form, the following rules apply:
// - if u.Scheme is empty, scheme: is omitted.
// - if u.User is nil, userinfo@ is omitted.
// - if u.Host is empty, host/ is omitted.
// - if u.Scheme and u.Host are empty and u.User is nil,
// the entire scheme://userinfo@host/ is omitted.
// - if u.Host is non-empty and u.Path begins with a /,
// the form host/path does not add its own /.
// - if u.RawQuery is empty, ?query is omitted.
// - if u.Fragment is empty, #fragment is omitted.
func Escape(u *url.URL) string {
var buf bytes.Buffer
if u.Scheme != "" {
buf.WriteString(u.Scheme)
buf.WriteByte(':')
}
if u.Opaque != "" {
buf.WriteString(u.Opaque)
} else {
if u.Scheme != "" || u.Host != "" || u.User != nil {
buf.WriteString("//")
if ui := u.User; ui != nil {
buf.WriteString(unescapeUserinfo(ui.String()))
buf.WriteByte('@')
}
if h := u.Host; h != "" {
buf.WriteString(h)
}
}
if u.Path != "" && u.Path[0] != '/' && u.Host != "" {
buf.WriteByte('/')
}
buf.WriteString(escape(u.Path, encodePath))
}
if u.RawQuery != "" {
buf.WriteByte('?')
buf.WriteString(u.RawQuery)
}
if u.Fragment != "" {
buf.WriteByte('#')
buf.WriteString(escape(u.Fragment, encodeFragment))
}
return buf.String()
}

View File

@ -1,4 +1,6 @@
Copyright (c) 2012-2013 Dave Collins <dave@davec.name> ISC License
Copyright (c) 2012-2016 Dave Collins <dave@davec.name>
Permission to use, copy, modify, and distribute this software for any Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above purpose with or without fee is hereby granted, provided that the above

View File

@ -1,4 +1,4 @@
// Copyright (c) 2015 Dave Collins <dave@davec.name> // Copyright (c) 2015-2016 Dave Collins <dave@davec.name>
// //
// Permission to use, copy, modify, and distribute this software for any // Permission to use, copy, modify, and distribute this software for any
// purpose with or without fee is hereby granted, provided that the above // purpose with or without fee is hereby granted, provided that the above
@ -13,9 +13,10 @@
// OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. // OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
// NOTE: Due to the following build constraints, this file will only be compiled // NOTE: Due to the following build constraints, this file will only be compiled
// when the code is not running on Google App Engine and "-tags disableunsafe" // when the code is not running on Google App Engine, compiled by GopherJS, and
// is not added to the go build command line. // "-tags safe" is not added to the go build command line. The "disableunsafe"
// +build !appengine,!disableunsafe // tag is deprecated and thus should not be used.
// +build !js,!appengine,!safe,!disableunsafe
package spew package spew

View File

@ -1,4 +1,4 @@
// Copyright (c) 2015 Dave Collins <dave@davec.name> // Copyright (c) 2015-2016 Dave Collins <dave@davec.name>
// //
// Permission to use, copy, modify, and distribute this software for any // Permission to use, copy, modify, and distribute this software for any
// purpose with or without fee is hereby granted, provided that the above // purpose with or without fee is hereby granted, provided that the above
@ -13,9 +13,10 @@
// OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. // OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
// NOTE: Due to the following build constraints, this file will only be compiled // NOTE: Due to the following build constraints, this file will only be compiled
// when either the code is running on Google App Engine or "-tags disableunsafe" // when the code is running on Google App Engine, compiled by GopherJS, or
// is added to the go build command line. // "-tags safe" is added to the go build command line. The "disableunsafe"
// +build appengine disableunsafe // tag is deprecated and thus should not be used.
// +build js appengine safe disableunsafe
package spew package spew

View File

@ -1,5 +1,5 @@
/* /*
* Copyright (c) 2013 Dave Collins <dave@davec.name> * Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
* *
* Permission to use, copy, modify, and distribute this software for any * Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above * purpose with or without fee is hereby granted, provided that the above
@ -180,7 +180,7 @@ func printComplex(w io.Writer, c complex128, floatPrecision int) {
w.Write(closeParenBytes) w.Write(closeParenBytes)
} }
// printHexPtr outputs a uintptr formatted as hexidecimal with a leading '0x' // printHexPtr outputs a uintptr formatted as hexadecimal with a leading '0x'
// prefix to Writer w. // prefix to Writer w.
func printHexPtr(w io.Writer, p uintptr) { func printHexPtr(w io.Writer, p uintptr) {
// Null pointer. // Null pointer.

View File

@ -1,5 +1,5 @@
/* /*
* Copyright (c) 2013 Dave Collins <dave@davec.name> * Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
* *
* Permission to use, copy, modify, and distribute this software for any * Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above * purpose with or without fee is hereby granted, provided that the above
@ -64,9 +64,18 @@ type ConfigState struct {
// inside these interface methods. As a result, this option relies on // inside these interface methods. As a result, this option relies on
// access to the unsafe package, so it will not have any effect when // access to the unsafe package, so it will not have any effect when
// running in environments without access to the unsafe package such as // running in environments without access to the unsafe package such as
// Google App Engine or with the "disableunsafe" build tag specified. // Google App Engine or with the "safe" build tag specified.
DisablePointerMethods bool DisablePointerMethods bool
// DisablePointerAddresses specifies whether to disable the printing of
// pointer addresses. This is useful when diffing data structures in tests.
DisablePointerAddresses bool
// DisableCapacities specifies whether to disable the printing of capacities
// for arrays, slices, maps and channels. This is useful when diffing
// data structures in tests.
DisableCapacities bool
// ContinueOnMethod specifies whether or not recursion should continue once // ContinueOnMethod specifies whether or not recursion should continue once
// a custom error or Stringer interface is invoked. The default, false, // a custom error or Stringer interface is invoked. The default, false,
// means it will print the results of invoking the custom error or Stringer // means it will print the results of invoking the custom error or Stringer

View File

@ -1,5 +1,5 @@
/* /*
* Copyright (c) 2013 Dave Collins <dave@davec.name> * Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
* *
* Permission to use, copy, modify, and distribute this software for any * Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above * purpose with or without fee is hereby granted, provided that the above
@ -91,6 +91,15 @@ The following configuration options are available:
which only accept pointer receivers from non-pointer variables. which only accept pointer receivers from non-pointer variables.
Pointer method invocation is enabled by default. Pointer method invocation is enabled by default.
* DisablePointerAddresses
DisablePointerAddresses specifies whether to disable the printing of
pointer addresses. This is useful when diffing data structures in tests.
* DisableCapacities
DisableCapacities specifies whether to disable the printing of
capacities for arrays, slices, maps and channels. This is useful when
diffing data structures in tests.
* ContinueOnMethod * ContinueOnMethod
Enables recursion into types after invoking error and Stringer interface Enables recursion into types after invoking error and Stringer interface
methods. Recursion after method invocation is disabled by default. methods. Recursion after method invocation is disabled by default.
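The DisablePointerAddresses and DisableCapacities options listed above are aimed at producing stable dumps that can be diffed in tests. A minimal sketch of how they might be combined, assuming only the documented ConfigState fields and its Dump method:
```Go
package main

import "github.com/davecgh/go-spew/spew"

type node struct {
	name string
	next *node
}

func main() {
	// Pointer addresses and capacities change between runs; disabling both
	// keeps the dump stable so it can be compared or diffed in tests.
	diffCfg := spew.ConfigState{
		Indent:                  " ",
		DisablePointerAddresses: true,
		DisableCapacities:       true,
	}
	diffCfg.Dump(&node{name: "head", next: &node{name: "tail"}})
}
```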

View File

@ -1,5 +1,5 @@
/* /*
* Copyright (c) 2013 Dave Collins <dave@davec.name> * Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
* *
* Permission to use, copy, modify, and distribute this software for any * Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above * purpose with or without fee is hereby granted, provided that the above
@ -129,7 +129,7 @@ func (d *dumpState) dumpPtr(v reflect.Value) {
d.w.Write(closeParenBytes) d.w.Write(closeParenBytes)
// Display pointer information. // Display pointer information.
if len(pointerChain) > 0 { if !d.cs.DisablePointerAddresses && len(pointerChain) > 0 {
d.w.Write(openParenBytes) d.w.Write(openParenBytes)
for i, addr := range pointerChain { for i, addr := range pointerChain {
if i > 0 { if i > 0 {
@ -282,13 +282,13 @@ func (d *dumpState) dump(v reflect.Value) {
case reflect.Map, reflect.String: case reflect.Map, reflect.String:
valueLen = v.Len() valueLen = v.Len()
} }
if valueLen != 0 || valueCap != 0 { if valueLen != 0 || !d.cs.DisableCapacities && valueCap != 0 {
d.w.Write(openParenBytes) d.w.Write(openParenBytes)
if valueLen != 0 { if valueLen != 0 {
d.w.Write(lenEqualsBytes) d.w.Write(lenEqualsBytes)
printInt(d.w, int64(valueLen), 10) printInt(d.w, int64(valueLen), 10)
} }
if valueCap != 0 { if !d.cs.DisableCapacities && valueCap != 0 {
if valueLen != 0 { if valueLen != 0 {
d.w.Write(spaceBytes) d.w.Write(spaceBytes)
} }

View File

@ -1,5 +1,5 @@
/* /*
* Copyright (c) 2013 Dave Collins <dave@davec.name> * Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
* *
* Permission to use, copy, modify, and distribute this software for any * Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above * purpose with or without fee is hereby granted, provided that the above

View File

@ -1,5 +1,5 @@
/* /*
* Copyright (c) 2013 Dave Collins <dave@davec.name> * Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
* *
* Permission to use, copy, modify, and distribute this software for any * Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above * purpose with or without fee is hereby granted, provided that the above

View File

@ -1,70 +0,0 @@
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so
# Folders
_obj
_test
# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out
*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*
_testmain.go
*.exe
restful.html
*.out
tmp.prof
go-restful.test
examples/restful-basic-authentication
examples/restful-encoding-filter
examples/restful-filters
examples/restful-hello-world
examples/restful-resource-functions
examples/restful-serve-static
examples/restful-user-service
*.DS_Store
examples/restful-user-resource
examples/restful-multi-containers
examples/restful-form-handling
examples/restful-CORS-filter
examples/restful-options-filter
examples/restful-curly-router
examples/restful-cpuprofiler-service
examples/restful-pre-post-filters
curly.prof
examples/restful-NCSA-logging
examples/restful-html-template
s.html
restful-path-tail

View File

@ -1,163 +0,0 @@
Change history of go-restful
=
2016-02-14
- take the quality factor of the Accept header mediatype into account when deciding the content type of the response
- add constructors for custom entity accessors for xml and json
2015-09-27
- rename new WriteStatusAnd... to WriteHeaderAnd... for consistency
2015-09-25
- fixed problem with changing Header after WriteHeader (issue 235)
2015-09-14
- changed behavior of WriteHeader (immediate write) and WriteEntity (no status write)
- added support for custom EntityReaderWriters.
2015-08-06
- add support for reading entities from compressed request content
- use sync.Pool for compressors of http response and request body
- add Description to Parameter for documentation in Swagger UI
2015-03-20
- add configurable logging
2015-03-18
- if not specified, the Operation is derived from the Route function
2015-03-17
- expose Parameter creation functions
- make trace logger an interface
- fix OPTIONSFilter
- customize rendering of ServiceError
- JSR311 router now handles wildcards
- add Notes to Route
2014-11-27
- (api add) PrettyPrint per response. (as proposed in #167)
2014-11-12
- (api add) ApiVersion(.) for documentation in Swagger UI
2014-11-10
- (api change) struct fields tagged with "description" show up in Swagger UI
2014-10-31
- (api change) ReturnsError -> Returns
- (api add) RouteBuilder.Do(aBuilder) for DRY use of RouteBuilder
- fix swagger nested structs
- sort Swagger response messages by code
2014-10-23
- (api add) ReturnsError allows you to document Http codes in swagger
- fixed problem with greedy CurlyRouter
- (api add) Access-Control-Max-Age in CORS
- add tracing functionality (injectable) for debugging purposes
- support JSON parse 64bit int
- fix empty parameters for swagger
- WebServicesUrl is now optional for swagger
- fixed duplicate AccessControlAllowOrigin in CORS
- (api change) expose ServeMux in container
- (api add) added AllowedDomains in CORS
- (api add) ParameterNamed for detailed documentation
2014-04-16
- (api add) expose constructor of Request for testing.
2014-06-27
- (api add) ParameterNamed gives access to a Parameter definition and its data (for further specification).
- (api add) SetCacheReadEntity allows control over whether or not the request body is being cached (default true for compatibility reasons).
2014-07-03
- (api add) CORS can be configured with a list of allowed domains
2014-03-12
- (api add) Route path parameters can use wildcard or regular expressions. (requires CurlyRouter)
2014-02-26
- (api add) Request now provides information about the matched Route, see method SelectedRoutePath
2014-02-17
- (api change) renamed parameter constants (go-lint checks)
2014-01-10
- (api add) support for CloseNotify, see http://golang.org/pkg/net/http/#CloseNotifier
2014-01-07
- (api change) Write* methods in Response now return the error or nil.
- added example of serving HTML from a Go template.
- fixed comparing Allowed headers in CORS (is now case-insensitive)
2013-11-13
- (api add) Response knows how many bytes are written to the response body.
2013-10-29
- (api add) RecoverHandler(handler RecoverHandleFunction) to change how panic recovery is handled. Default behavior is to log and return a stacktrace. This may be a security issue as it exposes sourcecode information.
2013-10-04
- (api add) Response knows what HTTP status has been written
- (api add) Request can have attributes (map of string->interface, also called request-scoped variables
2013-09-12
- (api change) Router interface simplified
- Implemented CurlyRouter, a Router that does not use|allow regular expressions in paths
2013-08-05
- add OPTIONS support
- add CORS support
2013-08-27
- fixed some reported issues (see github)
- (api change) deprecated use of WriteError; use WriteErrorString instead
2014-04-15
- (fix) v1.0.1 tag: fix Issue 111: WriteErrorString
2013-08-08
- (api add) Added implementation Container: a WebServices collection with its own http.ServeMux allowing multiple endpoints per program. Existing uses of go-restful will register their services to the DefaultContainer.
- (api add) the swagger package has been extended to have a UI per container.
- if panic is detected then a small stack trace is printed (thanks to runner-mei)
- (api add) WriteErrorString to Response
Important API changes:
- (api remove) package variable DoNotRecover no longer works ; use restful.DefaultContainer.DoNotRecover(true) instead.
- (api remove) package variable EnableContentEncoding no longer works ; use restful.DefaultContainer.EnableContentEncoding(true) instead.
2013-07-06
- (api add) Added support for response encoding (gzip and deflate(zlib)). This feature is disabled on default (for backwards compatibility). Use restful.EnableContentEncoding = true in your initialization to enable this feature.
2013-06-19
- (improve) DoNotRecover option, moved request body closer, improved ReadEntity
2013-06-03
- (api change) removed Dispatcher interface, hide PathExpression
- changed receiver names of type functions to be more idiomatic Go
2013-06-02
- (optimize) Cache the RegExp compilation of Paths.
2013-05-22
- (api add) Added support for request/response filter functions
2013-05-18
- (api add) Added feature to change the default Http Request Dispatch function (travis cline)
- (api change) Moved Swagger Webservice to swagger package (see example restful-user)
[2012-11-14 .. 2013-05-18>
- See https://github.com/emicklei/go-restful/commits
2012-11-14
- Initial commit

View File

@ -1,22 +0,0 @@
Copyright (c) 2012,2013 Ernest Micklei
MIT License
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

View File

@ -1,74 +0,0 @@
go-restful
==========
package for building REST-style Web Services using Google Go
REST asks developers to use HTTP methods explicitly and in a way that's consistent with the protocol definition. This basic REST design principle establishes a one-to-one mapping between create, read, update, and delete (CRUD) operations and HTTP methods. According to this mapping:
- GET = Retrieve a representation of a resource
- POST = Create if you are sending content to the server to create a subordinate of the specified resource collection, using some server-side algorithm.
- PUT = Create if you are sending the full content of the specified resource (URI).
- PUT = Update if you are updating the full content of the specified resource.
- DELETE = Delete if you are requesting the server to delete the resource
- PATCH = Update partial content of a resource
- OPTIONS = Get information about the communication options for the request URI
### Example
```Go
ws := new(restful.WebService)
ws.
Path("/users").
Consumes(restful.MIME_XML, restful.MIME_JSON).
Produces(restful.MIME_JSON, restful.MIME_XML)
ws.Route(ws.GET("/{user-id}").To(u.findUser).
Doc("get a user").
Param(ws.PathParameter("user-id", "identifier of the user").DataType("string")).
Writes(User{}))
...
func (u UserResource) findUser(request *restful.Request, response *restful.Response) {
id := request.PathParameter("user-id")
...
}
```
[Full API of a UserResource](https://github.com/emicklei/go-restful/tree/master/examples/restful-user-resource.go)
### Features
- Routes for request &#8594; function mapping with path parameter (e.g. {id}) support
- Configurable router:
- Routing algorithm after [JSR311](http://jsr311.java.net/nonav/releases/1.1/spec/spec.html) that is implemented using (but does **not** accept) regular expressions (See RouterJSR311 which is used by default)
- Fast routing algorithm that allows static elements, regular expressions and dynamic parameters in the URL path (e.g. /meetings/{id} or /static/{subpath:*}, See CurlyRouter)
- Request API for reading structs from JSON/XML and accessing parameters (path,query,header)
- Response API for writing structs to JSON/XML and setting headers
- Filters for intercepting the request &#8594; response flow on Service or Route level
- Request-scoped variables using attributes
- Containers for WebServices on different HTTP endpoints
- Content encoding (gzip,deflate) of request and response payloads
- Automatic responses on OPTIONS (using a filter)
- Automatic CORS request handling (using a filter)
- API declaration for Swagger UI (see swagger package)
- Panic recovery to produce HTTP 500, customizable using RecoverHandler(...)
- Route errors produce HTTP 404/405/406/415 errors, customizable using ServiceErrorHandler(...)
- Configurable (trace) logging
- Customizable encoding using EntityReaderWriter registration
- Customizable gzip/deflate readers and writers using CompressorProvider registration
### Resources
- [Documentation on godoc.org](http://godoc.org/github.com/emicklei/go-restful)
- [Code examples](https://github.com/emicklei/go-restful/tree/master/examples)
- [Example posted on blog](http://ernestmicklei.com/2012/11/go-restful-first-working-example/)
- [Design explained on blog](http://ernestmicklei.com/2012/11/go-restful-api-design/)
- [sourcegraph](https://sourcegraph.com/github.com/emicklei/go-restful)
- [gopkg.in](https://gopkg.in/emicklei/go-restful.v1)
- [showcase: Mora - MongoDB REST Api server](https://github.com/emicklei/mora)
[![Build Status](https://drone.io/github.com/emicklei/go-restful/status.png)](https://drone.io/github.com/emicklei/go-restful/latest)
(c) 2012 - 2015, http://ernestmicklei.com. MIT License
Type ```git shortlog -s``` for a full list of contributors.

View File

@ -1 +0,0 @@
{"SkipDirs": ["examples"]}

View File

@ -1,10 +0,0 @@
#go test -run=none -file bench_test.go -test.bench . -cpuprofile=bench_test.out
go test -c
./go-restful.test -test.run=none -test.cpuprofile=tmp.prof -test.bench=BenchmarkMany
./go-restful.test -test.run=none -test.cpuprofile=curly.prof -test.bench=BenchmarkManyCurly
#go tool pprof go-restful.test tmp.prof
go tool pprof go-restful.test curly.prof

View File

@ -1,123 +0,0 @@
package restful
// Copyright 2013 Ernest Micklei. All rights reserved.
// Use of this source code is governed by a license
// that can be found in the LICENSE file.
import (
"bufio"
"compress/gzip"
"compress/zlib"
"errors"
"io"
"net"
"net/http"
"strings"
)
// OBSOLETE : use restful.DefaultContainer.EnableContentEncoding(true) to change this setting.
var EnableContentEncoding = false
// CompressingResponseWriter is a http.ResponseWriter that can perform content encoding (gzip and zlib)
type CompressingResponseWriter struct {
writer http.ResponseWriter
compressor io.WriteCloser
encoding string
}
// Header is part of http.ResponseWriter interface
func (c *CompressingResponseWriter) Header() http.Header {
return c.writer.Header()
}
// WriteHeader is part of http.ResponseWriter interface
func (c *CompressingResponseWriter) WriteHeader(status int) {
c.writer.WriteHeader(status)
}
// Write is part of http.ResponseWriter interface
// It is passed through the compressor
func (c *CompressingResponseWriter) Write(bytes []byte) (int, error) {
if c.isCompressorClosed() {
return -1, errors.New("Compressing error: tried to write data using closed compressor")
}
return c.compressor.Write(bytes)
}
// CloseNotify is part of http.CloseNotifier interface
func (c *CompressingResponseWriter) CloseNotify() <-chan bool {
return c.writer.(http.CloseNotifier).CloseNotify()
}
// Close the underlying compressor
func (c *CompressingResponseWriter) Close() error {
if c.isCompressorClosed() {
return errors.New("Compressing error: tried to close already closed compressor")
}
c.compressor.Close()
if ENCODING_GZIP == c.encoding {
currentCompressorProvider.ReleaseGzipWriter(c.compressor.(*gzip.Writer))
}
if ENCODING_DEFLATE == c.encoding {
currentCompressorProvider.ReleaseZlibWriter(c.compressor.(*zlib.Writer))
}
// gc hint needed?
c.compressor = nil
return nil
}
func (c *CompressingResponseWriter) isCompressorClosed() bool {
return nil == c.compressor
}
// Hijack implements the Hijacker interface
// This is especially useful when combining Container.EnabledContentEncoding
// in combination with websockets (for instance gorilla/websocket)
func (c *CompressingResponseWriter) Hijack() (net.Conn, *bufio.ReadWriter, error) {
hijacker, ok := c.writer.(http.Hijacker)
if !ok {
return nil, nil, errors.New("ResponseWriter doesn't support Hijacker interface")
}
return hijacker.Hijack()
}
// wantsCompressedResponse reads the Accept-Encoding header to see if and which encoding is requested.
func wantsCompressedResponse(httpRequest *http.Request) (bool, string) {
header := httpRequest.Header.Get(HEADER_AcceptEncoding)
gi := strings.Index(header, ENCODING_GZIP)
zi := strings.Index(header, ENCODING_DEFLATE)
// use in order of appearance
if gi == -1 {
return zi != -1, ENCODING_DEFLATE
} else if zi == -1 {
return gi != -1, ENCODING_GZIP
} else {
if gi < zi {
return true, ENCODING_GZIP
}
return true, ENCODING_DEFLATE
}
}
// NewCompressingResponseWriter creates a CompressingResponseWriter for a known encoding = {gzip,deflate}
func NewCompressingResponseWriter(httpWriter http.ResponseWriter, encoding string) (*CompressingResponseWriter, error) {
httpWriter.Header().Set(HEADER_ContentEncoding, encoding)
c := new(CompressingResponseWriter)
c.writer = httpWriter
var err error
if ENCODING_GZIP == encoding {
w := currentCompressorProvider.AcquireGzipWriter()
w.Reset(httpWriter)
c.compressor = w
c.encoding = ENCODING_GZIP
} else if ENCODING_DEFLATE == encoding {
w := currentCompressorProvider.AcquireZlibWriter()
w.Reset(httpWriter)
c.compressor = w
c.encoding = ENCODING_DEFLATE
} else {
return nil, errors.New("Unknown encoding:" + encoding)
}
return c, err
}

View File

@ -1,103 +0,0 @@
package restful
// Copyright 2015 Ernest Micklei. All rights reserved.
// Use of this source code is governed by a license
// that can be found in the LICENSE file.
import (
"compress/gzip"
"compress/zlib"
)
// BoundedCachedCompressors is a CompressorProvider that uses a cache with a fixed amount
// of writers and readers (resources).
// If a new resource is acquired and all are in use, it will return a new unmanaged resource.
type BoundedCachedCompressors struct {
gzipWriters chan *gzip.Writer
gzipReaders chan *gzip.Reader
zlibWriters chan *zlib.Writer
writersCapacity int
readersCapacity int
}
// NewBoundedCachedCompressors returns a new BoundedCachedCompressors with a filled cache.
func NewBoundedCachedCompressors(writersCapacity, readersCapacity int) *BoundedCachedCompressors {
b := &BoundedCachedCompressors{
gzipWriters: make(chan *gzip.Writer, writersCapacity),
gzipReaders: make(chan *gzip.Reader, readersCapacity),
zlibWriters: make(chan *zlib.Writer, writersCapacity),
writersCapacity: writersCapacity,
readersCapacity: readersCapacity,
}
for ix := 0; ix < writersCapacity; ix++ {
b.gzipWriters <- newGzipWriter()
b.zlibWriters <- newZlibWriter()
}
for ix := 0; ix < readersCapacity; ix++ {
b.gzipReaders <- newGzipReader()
}
return b
}
// AcquireGzipWriter returns a resettable *gzip.Writer. Needs to be released.
func (b *BoundedCachedCompressors) AcquireGzipWriter() *gzip.Writer {
var writer *gzip.Writer
select {
case writer, _ = <-b.gzipWriters:
default:
// return a new unmanaged one
writer = newGzipWriter()
}
return writer
}
// ReleaseGzipWriter accepts a writer (does not have to be one that was cached)
// only when the cache has room for it. It will ignore it otherwise.
func (b *BoundedCachedCompressors) ReleaseGzipWriter(w *gzip.Writer) {
// forget the unmanaged ones
if len(b.gzipWriters) < b.writersCapacity {
b.gzipWriters <- w
}
}
// AcquireGzipReader returns a *gzip.Reader. Needs to be released.
func (b *BoundedCachedCompressors) AcquireGzipReader() *gzip.Reader {
var reader *gzip.Reader
select {
case reader, _ = <-b.gzipReaders:
default:
// return a new unmanaged one
reader = newGzipReader()
}
return reader
}
// ReleaseGzipReader accepts a reader (does not have to be one that was cached)
// only when the cache has room for it. It will ignore it otherwise.
func (b *BoundedCachedCompressors) ReleaseGzipReader(r *gzip.Reader) {
// forget the unmanaged ones
if len(b.gzipReaders) < b.readersCapacity {
b.gzipReaders <- r
}
}
// AcquireZlibWriter returns a resettable *zlib.Writer. Needs to be released.
func (b *BoundedCachedCompressors) AcquireZlibWriter() *zlib.Writer {
var writer *zlib.Writer
select {
case writer, _ = <-b.zlibWriters:
default:
// return a new unmanaged one
writer = newZlibWriter()
}
return writer
}
// ReleaseZlibWriter accepts a writer (does not have to be one that was cached)
// only when the cache has room for it. It will ignore it otherwise.
func (b *BoundedCachedCompressors) ReleaseZlibWriter(w *zlib.Writer) {
// forget the unmanaged ones
if len(b.zlibWriters) < b.writersCapacity {
b.zlibWriters <- w
}
}

View File

@ -1,91 +0,0 @@
package restful
// Copyright 2015 Ernest Micklei. All rights reserved.
// Use of this source code is governed by a license
// that can be found in the LICENSE file.
import (
"bytes"
"compress/gzip"
"compress/zlib"
"sync"
)
// SyncPoolCompessors is a CompressorProvider that uses the standard sync.Pool.
type SyncPoolCompessors struct {
GzipWriterPool *sync.Pool
GzipReaderPool *sync.Pool
ZlibWriterPool *sync.Pool
}
// NewSyncPoolCompessors returns a new ("empty") SyncPoolCompessors.
func NewSyncPoolCompessors() *SyncPoolCompessors {
return &SyncPoolCompessors{
GzipWriterPool: &sync.Pool{
New: func() interface{} { return newGzipWriter() },
},
GzipReaderPool: &sync.Pool{
New: func() interface{} { return newGzipReader() },
},
ZlibWriterPool: &sync.Pool{
New: func() interface{} { return newZlibWriter() },
},
}
}
func (s *SyncPoolCompessors) AcquireGzipWriter() *gzip.Writer {
return s.GzipWriterPool.Get().(*gzip.Writer)
}
func (s *SyncPoolCompessors) ReleaseGzipWriter(w *gzip.Writer) {
s.GzipWriterPool.Put(w)
}
func (s *SyncPoolCompessors) AcquireGzipReader() *gzip.Reader {
return s.GzipReaderPool.Get().(*gzip.Reader)
}
func (s *SyncPoolCompessors) ReleaseGzipReader(r *gzip.Reader) {
s.GzipReaderPool.Put(r)
}
func (s *SyncPoolCompessors) AcquireZlibWriter() *zlib.Writer {
return s.ZlibWriterPool.Get().(*zlib.Writer)
}
func (s *SyncPoolCompessors) ReleaseZlibWriter(w *zlib.Writer) {
s.ZlibWriterPool.Put(w)
}
func newGzipWriter() *gzip.Writer {
// create with an empty bytes writer; it will be replaced before using the gzipWriter
writer, err := gzip.NewWriterLevel(new(bytes.Buffer), gzip.BestSpeed)
if err != nil {
panic(err.Error())
}
return writer
}
func newGzipReader() *gzip.Reader {
// create with an empty reader (but with GZIP header); it will be replaced before using the gzipReader
// we can safely use currentCompressorProvider because it is set on package initialization.
w := currentCompressorProvider.AcquireGzipWriter()
defer currentCompressorProvider.ReleaseGzipWriter(w)
b := new(bytes.Buffer)
w.Reset(b)
w.Flush()
w.Close()
reader, err := gzip.NewReader(bytes.NewReader(b.Bytes()))
if err != nil {
panic(err.Error())
}
return reader
}
func newZlibWriter() *zlib.Writer {
writer, err := zlib.NewWriterLevel(new(bytes.Buffer), gzip.BestSpeed)
if err != nil {
panic(err.Error())
}
return writer
}

View File

@ -1,53 +0,0 @@
package restful
// Copyright 2015 Ernest Micklei. All rights reserved.
// Use of this source code is governed by a license
// that can be found in the LICENSE file.
import (
"compress/gzip"
"compress/zlib"
)
type CompressorProvider interface {
// Returns a *gzip.Writer which needs to be released later.
// Before using it, call Reset().
AcquireGzipWriter() *gzip.Writer
// Releases an acquired *gzip.Writer.
ReleaseGzipWriter(w *gzip.Writer)
// Returns a *gzip.Reader which needs to be released later.
AcquireGzipReader() *gzip.Reader
// Releases an acquired *gzip.Reader.
ReleaseGzipReader(w *gzip.Reader)
// Returns a *zlib.Writer which needs to be released later.
// Before using it, call Reset().
AcquireZlibWriter() *zlib.Writer
// Releases an acquired *zlib.Writer.
ReleaseZlibWriter(w *zlib.Writer)
}
// currentCompressorProvider is the actual provider of compressors (zlib or gzip).
var currentCompressorProvider CompressorProvider
func init() {
currentCompressorProvider = NewSyncPoolCompessors()
}
// CurrentCompressorProvider returns the current CompressorProvider.
// It is initialized using a SyncPoolCompessors.
func CurrentCompressorProvider() CompressorProvider {
return currentCompressorProvider
}
// SetCompressorProvider sets the actual provider of compressors (zlib or gzip).
func SetCompressorProvider(p CompressorProvider) {
if p == nil {
panic("cannot set compressor provider to nil")
}
currentCompressorProvider = p
}

View File

@ -1,30 +0,0 @@
package restful
// Copyright 2013 Ernest Micklei. All rights reserved.
// Use of this source code is governed by a license
// that can be found in the LICENSE file.
const (
MIME_XML = "application/xml" // Accept or Content-Type used in Consumes() and/or Produces()
MIME_JSON = "application/json" // Accept or Content-Type used in Consumes() and/or Produces()
MIME_OCTET = "application/octet-stream" // If Content-Type is not present in request, use the default
HEADER_Allow = "Allow"
HEADER_Accept = "Accept"
HEADER_Origin = "Origin"
HEADER_ContentType = "Content-Type"
HEADER_LastModified = "Last-Modified"
HEADER_AcceptEncoding = "Accept-Encoding"
HEADER_ContentEncoding = "Content-Encoding"
HEADER_AccessControlExposeHeaders = "Access-Control-Expose-Headers"
HEADER_AccessControlRequestMethod = "Access-Control-Request-Method"
HEADER_AccessControlRequestHeaders = "Access-Control-Request-Headers"
HEADER_AccessControlAllowMethods = "Access-Control-Allow-Methods"
HEADER_AccessControlAllowOrigin = "Access-Control-Allow-Origin"
HEADER_AccessControlAllowCredentials = "Access-Control-Allow-Credentials"
HEADER_AccessControlAllowHeaders = "Access-Control-Allow-Headers"
HEADER_AccessControlMaxAge = "Access-Control-Max-Age"
ENCODING_GZIP = "gzip"
ENCODING_DEFLATE = "deflate"
)

View File

@ -1,361 +0,0 @@
package restful
// Copyright 2013 Ernest Micklei. All rights reserved.
// Use of this source code is governed by a license
// that can be found in the LICENSE file.
import (
"bytes"
"errors"
"fmt"
"net/http"
"os"
"runtime"
"strings"
"sync"
"github.com/emicklei/go-restful/log"
)
// Container holds a collection of WebServices and a http.ServeMux to dispatch http requests.
// The requests are further dispatched to routes of WebServices using a RouteSelector
type Container struct {
webServicesLock sync.RWMutex
webServices []*WebService
ServeMux *http.ServeMux
isRegisteredOnRoot bool
containerFilters []FilterFunction
doNotRecover bool // default is false
recoverHandleFunc RecoverHandleFunction
serviceErrorHandleFunc ServiceErrorHandleFunction
router RouteSelector // default is a RouterJSR311, CurlyRouter is the faster alternative
contentEncodingEnabled bool // default is false
}
// NewContainer creates a new Container using a new ServeMux and default router (RouterJSR311)
func NewContainer() *Container {
return &Container{
webServices: []*WebService{},
ServeMux: http.NewServeMux(),
isRegisteredOnRoot: false,
containerFilters: []FilterFunction{},
doNotRecover: false,
recoverHandleFunc: logStackOnRecover,
serviceErrorHandleFunc: writeServiceError,
router: RouterJSR311{},
contentEncodingEnabled: false}
}
// RecoverHandleFunction declares functions that can be used to handle a panic situation.
// The first argument is what recover() returns. The second must be used to communicate an error response.
type RecoverHandleFunction func(interface{}, http.ResponseWriter)
// RecoverHandler changes the default function (logStackOnRecover) to be called
// when a panic is detected. DoNotRecover must have its default value (=false).
func (c *Container) RecoverHandler(handler RecoverHandleFunction) {
c.recoverHandleFunc = handler
}
// ServiceErrorHandleFunction declares functions that can be used to handle a service error situation.
// The first argument is the service error, the second is the request that resulted in the error and
// the third must be used to communicate an error response.
type ServiceErrorHandleFunction func(ServiceError, *Request, *Response)
// ServiceErrorHandler changes the default function (writeServiceError) to be called
// when a ServiceError is detected.
func (c *Container) ServiceErrorHandler(handler ServiceErrorHandleFunction) {
c.serviceErrorHandleFunc = handler
}
// DoNotRecover controls whether panics will be caught to return HTTP 500.
// If set to true, Route functions are responsible for handling any error situation.
// Default value is false = recover from panics. This has performance implications.
func (c *Container) DoNotRecover(doNot bool) {
c.doNotRecover = doNot
}
// Router changes the default Router (currently RouterJSR311)
func (c *Container) Router(aRouter RouteSelector) {
c.router = aRouter
}
// EnableContentEncoding (default=false) allows for GZIP or DEFLATE encoding of responses.
func (c *Container) EnableContentEncoding(enabled bool) {
c.contentEncodingEnabled = enabled
}
// Add a WebService to the Container. It will detect duplicate root paths and exit in that case.
func (c *Container) Add(service *WebService) *Container {
c.webServicesLock.Lock()
defer c.webServicesLock.Unlock()
// if rootPath was not set then lazy initialize it
if len(service.rootPath) == 0 {
service.Path("/")
}
// cannot have duplicate root paths
for _, each := range c.webServices {
if each.RootPath() == service.RootPath() {
log.Printf("[restful] WebService with duplicate root path detected:['%v']", each)
os.Exit(1)
}
}
// If not registered on root then add specific mapping
if !c.isRegisteredOnRoot {
c.isRegisteredOnRoot = c.addHandler(service, c.ServeMux)
}
c.webServices = append(c.webServices, service)
return c
}
// addHandler may set a new HandleFunc for the serveMux
// this function must run inside the critical region protected by the webServicesLock.
// returns true if the function was registered on root ("/")
func (c *Container) addHandler(service *WebService, serveMux *http.ServeMux) bool {
pattern := fixedPrefixPath(service.RootPath())
// check if root path registration is needed
if "/" == pattern || "" == pattern {
serveMux.HandleFunc("/", c.dispatch)
return true
}
// detect if registration already exists
alreadyMapped := false
for _, each := range c.webServices {
if each.RootPath() == service.RootPath() {
alreadyMapped = true
break
}
}
if !alreadyMapped {
serveMux.HandleFunc(pattern, c.dispatch)
if !strings.HasSuffix(pattern, "/") {
serveMux.HandleFunc(pattern+"/", c.dispatch)
}
}
return false
}
func (c *Container) Remove(ws *WebService) error {
if c.ServeMux == http.DefaultServeMux {
errMsg := fmt.Sprintf("[restful] cannot remove a WebService from a Container using the DefaultServeMux: ['%v']", ws)
log.Printf(errMsg)
return errors.New(errMsg)
}
c.webServicesLock.Lock()
defer c.webServicesLock.Unlock()
// build a new ServeMux and re-register all WebServices
newServeMux := http.NewServeMux()
newServices := []*WebService{}
newIsRegisteredOnRoot := false
for _, each := range c.webServices {
if each.rootPath != ws.rootPath {
// If not registered on root then add specific mapping
if !newIsRegisteredOnRoot {
newIsRegisteredOnRoot = c.addHandler(each, newServeMux)
}
newServices = append(newServices, each)
}
}
c.webServices, c.ServeMux, c.isRegisteredOnRoot = newServices, newServeMux, newIsRegisteredOnRoot
return nil
}
// logStackOnRecover is the default RecoverHandleFunction and is called
// when DoNotRecover is false and the recoverHandleFunc is not set for the container.
// Default implementation logs the stacktrace and writes the stacktrace on the response.
// This may be a security issue as it exposes sourcecode information.
func logStackOnRecover(panicReason interface{}, httpWriter http.ResponseWriter) {
var buffer bytes.Buffer
buffer.WriteString(fmt.Sprintf("[restful] recover from panic situation: - %v\r\n", panicReason))
for i := 2; ; i += 1 {
_, file, line, ok := runtime.Caller(i)
if !ok {
break
}
buffer.WriteString(fmt.Sprintf(" %s:%d\r\n", file, line))
}
log.Print(buffer.String())
httpWriter.WriteHeader(http.StatusInternalServerError)
httpWriter.Write(buffer.Bytes())
}
// writeServiceError is the default ServiceErrorHandleFunction and is called
// when a ServiceError is returned during route selection. Default implementation
// calls resp.WriteErrorString(err.Code, err.Message)
func writeServiceError(err ServiceError, req *Request, resp *Response) {
resp.WriteErrorString(err.Code, err.Message)
}
// Dispatch the incoming Http Request to a matching WebService.
func (c *Container) dispatch(httpWriter http.ResponseWriter, httpRequest *http.Request) {
writer := httpWriter
// CompressingResponseWriter should be closed after all operations are done
defer func() {
if compressWriter, ok := writer.(*CompressingResponseWriter); ok {
compressWriter.Close()
}
}()
// Install panic recovery unless told otherwise
if !c.doNotRecover { // catch all for 500 response
defer func() {
if r := recover(); r != nil {
c.recoverHandleFunc(r, writer)
return
}
}()
}
// Install closing the request body (if any)
defer func() {
if nil != httpRequest.Body {
httpRequest.Body.Close()
}
}()
// Detect if compression is needed
// assume without compression, test for override
if c.contentEncodingEnabled {
doCompress, encoding := wantsCompressedResponse(httpRequest)
if doCompress {
var err error
writer, err = NewCompressingResponseWriter(httpWriter, encoding)
if err != nil {
log.Print("[restful] unable to install compressor: ", err)
httpWriter.WriteHeader(http.StatusInternalServerError)
return
}
}
}
// Find best match Route ; err is non nil if no match was found
var webService *WebService
var route *Route
var err error
func() {
c.webServicesLock.RLock()
defer c.webServicesLock.RUnlock()
webService, route, err = c.router.SelectRoute(
c.webServices,
httpRequest)
}()
if err != nil {
// a non-200 response has already been written
// run container filters anyway ; they should not touch the response...
chain := FilterChain{Filters: c.containerFilters, Target: func(req *Request, resp *Response) {
switch err.(type) {
case ServiceError:
ser := err.(ServiceError)
c.serviceErrorHandleFunc(ser, req, resp)
}
// TODO
}}
chain.ProcessFilter(NewRequest(httpRequest), NewResponse(writer))
return
}
wrappedRequest, wrappedResponse := route.wrapRequestResponse(writer, httpRequest)
// pass through filters (if any)
if len(c.containerFilters)+len(webService.filters)+len(route.Filters) > 0 {
// compose filter chain
allFilters := []FilterFunction{}
allFilters = append(allFilters, c.containerFilters...)
allFilters = append(allFilters, webService.filters...)
allFilters = append(allFilters, route.Filters...)
chain := FilterChain{Filters: allFilters, Target: func(req *Request, resp *Response) {
// handle request by route after passing all filters
route.Function(wrappedRequest, wrappedResponse)
}}
chain.ProcessFilter(wrappedRequest, wrappedResponse)
} else {
// no filters, handle request by route
route.Function(wrappedRequest, wrappedResponse)
}
}
// fixedPrefixPath returns the fixed part of the pathspec; it may include template vars {}
func fixedPrefixPath(pathspec string) string {
varBegin := strings.Index(pathspec, "{")
if -1 == varBegin {
return pathspec
}
return pathspec[:varBegin]
}
// ServeHTTP implements net/http.Handler therefore a Container can be a Handler in a http.Server
func (c *Container) ServeHTTP(httpwriter http.ResponseWriter, httpRequest *http.Request) {
c.ServeMux.ServeHTTP(httpwriter, httpRequest)
}
// Handle registers the handler for the given pattern. If a handler already exists for pattern, Handle panics.
func (c *Container) Handle(pattern string, handler http.Handler) {
c.ServeMux.Handle(pattern, handler)
}
// HandleWithFilter registers the handler for the given pattern.
// Container's filter chain is applied for handler.
// If a handler already exists for pattern, HandleWithFilter panics.
func (c *Container) HandleWithFilter(pattern string, handler http.Handler) {
f := func(httpResponse http.ResponseWriter, httpRequest *http.Request) {
if len(c.containerFilters) == 0 {
handler.ServeHTTP(httpResponse, httpRequest)
return
}
chain := FilterChain{Filters: c.containerFilters, Target: func(req *Request, resp *Response) {
handler.ServeHTTP(httpResponse, httpRequest)
}}
chain.ProcessFilter(NewRequest(httpRequest), NewResponse(httpResponse))
}
c.Handle(pattern, http.HandlerFunc(f))
}
// Filter appends a container FilterFunction. These are called before dispatching
// a http.Request to a WebService from the container
func (c *Container) Filter(filter FilterFunction) {
c.containerFilters = append(c.containerFilters, filter)
}
// RegisteredWebServices returns the collections of added WebServices
func (c *Container) RegisteredWebServices() []*WebService {
c.webServicesLock.RLock()
defer c.webServicesLock.RUnlock()
result := make([]*WebService, len(c.webServices))
for ix := range c.webServices {
result[ix] = c.webServices[ix]
}
return result
}
// computeAllowedMethods returns a list of HTTP methods that are valid for a Request
func (c *Container) computeAllowedMethods(req *Request) []string {
// Go through all RegisteredWebServices() and all its Routes to collect the options
methods := []string{}
requestPath := req.Request.URL.Path
for _, ws := range c.RegisteredWebServices() {
matches := ws.pathExpr.Matcher.FindStringSubmatch(requestPath)
if matches != nil {
finalMatch := matches[len(matches)-1]
for _, rt := range ws.Routes() {
matches := rt.pathExpr.Matcher.FindStringSubmatch(finalMatch)
if matches != nil {
lastMatch := matches[len(matches)-1]
if lastMatch == "" || lastMatch == "/" { // do not include if value is neither empty nor /.
methods = append(methods, rt.Method)
}
}
}
}
}
// methods = append(methods, "OPTIONS") not sure about this
return methods
}
// newBasicRequestResponse creates a pair of Request,Response from its http versions.
// It is basic because no parameter or (produces) content-type information is given.
func newBasicRequestResponse(httpWriter http.ResponseWriter, httpRequest *http.Request) (*Request, *Response) {
resp := NewResponse(httpWriter)
resp.requestAccept = httpRequest.Header.Get(HEADER_Accept)
return NewRequest(httpRequest), resp
}

View File

@ -1,202 +0,0 @@
package restful
// Copyright 2013 Ernest Micklei. All rights reserved.
// Use of this source code is governed by a license
// that can be found in the LICENSE file.
import (
"regexp"
"strconv"
"strings"
)
// CrossOriginResourceSharing is used to create a Container Filter that implements CORS.
// Cross-origin resource sharing (CORS) is a mechanism that allows JavaScript on a web page
// to make XMLHttpRequests to another domain, not the domain the JavaScript originated from.
//
// http://en.wikipedia.org/wiki/Cross-origin_resource_sharing
// http://enable-cors.org/server.html
// http://www.html5rocks.com/en/tutorials/cors/#toc-handling-a-not-so-simple-request
type CrossOriginResourceSharing struct {
ExposeHeaders []string // list of Header names
AllowedHeaders []string // list of Header names
AllowedDomains []string // list of allowed values for Http Origin. An allowed value can be a regular expression to support subdomain matching. If empty all are allowed.
AllowedMethods []string
MaxAge int // number of seconds before requiring new Options request
CookiesAllowed bool
Container *Container
allowedOriginPatterns []*regexp.Regexp // internal field for origin regexp check.
}
// Filter is a filter function that implements the CORS flow as documented on http://enable-cors.org/server.html
// and http://www.html5rocks.com/static/images/cors_server_flowchart.png
func (c CrossOriginResourceSharing) Filter(req *Request, resp *Response, chain *FilterChain) {
origin := req.Request.Header.Get(HEADER_Origin)
if len(origin) == 0 {
if trace {
traceLogger.Print("no Http header Origin set")
}
chain.ProcessFilter(req, resp)
return
}
if !c.isOriginAllowed(origin) { // check whether this origin is allowed
if trace {
traceLogger.Printf("HTTP Origin:%s is not part of %v, neither matches any part of %v", origin, c.AllowedDomains, c.allowedOriginPatterns)
}
chain.ProcessFilter(req, resp)
return
}
if req.Request.Method != "OPTIONS" {
c.doActualRequest(req, resp)
chain.ProcessFilter(req, resp)
return
}
if acrm := req.Request.Header.Get(HEADER_AccessControlRequestMethod); acrm != "" {
c.doPreflightRequest(req, resp)
} else {
c.doActualRequest(req, resp)
chain.ProcessFilter(req, resp)
return
}
}
func (c CrossOriginResourceSharing) doActualRequest(req *Request, resp *Response) {
c.setOptionsHeaders(req, resp)
// continue processing the response
}
func (c *CrossOriginResourceSharing) doPreflightRequest(req *Request, resp *Response) {
if len(c.AllowedMethods) == 0 {
if c.Container == nil {
c.AllowedMethods = DefaultContainer.computeAllowedMethods(req)
} else {
c.AllowedMethods = c.Container.computeAllowedMethods(req)
}
}
acrm := req.Request.Header.Get(HEADER_AccessControlRequestMethod)
if !c.isValidAccessControlRequestMethod(acrm, c.AllowedMethods) {
if trace {
traceLogger.Printf("Http header %s:%s is not in %v",
HEADER_AccessControlRequestMethod,
acrm,
c.AllowedMethods)
}
return
}
acrhs := req.Request.Header.Get(HEADER_AccessControlRequestHeaders)
if len(acrhs) > 0 {
for _, each := range strings.Split(acrhs, ",") {
if !c.isValidAccessControlRequestHeader(strings.Trim(each, " ")) {
if trace {
traceLogger.Printf("Http header %s:%s is not in %v",
HEADER_AccessControlRequestHeaders,
acrhs,
c.AllowedHeaders)
}
return
}
}
}
resp.AddHeader(HEADER_AccessControlAllowMethods, strings.Join(c.AllowedMethods, ","))
resp.AddHeader(HEADER_AccessControlAllowHeaders, acrhs)
c.setOptionsHeaders(req, resp)
// return http 200 response, no body
}
func (c CrossOriginResourceSharing) setOptionsHeaders(req *Request, resp *Response) {
c.checkAndSetExposeHeaders(resp)
c.setAllowOriginHeader(req, resp)
c.checkAndSetAllowCredentials(resp)
if c.MaxAge > 0 {
resp.AddHeader(HEADER_AccessControlMaxAge, strconv.Itoa(c.MaxAge))
}
}
func (c CrossOriginResourceSharing) isOriginAllowed(origin string) bool {
if len(origin) == 0 {
return false
}
if len(c.AllowedDomains) == 0 {
return true
}
allowed := false
for _, domain := range c.AllowedDomains {
if domain == origin {
allowed = true
break
}
}
if !allowed {
if len(c.allowedOriginPatterns) == 0 {
// compile allowed domains to allowed origin patterns
allowedOriginRegexps, err := compileRegexps(c.AllowedDomains)
if err != nil {
return false
}
c.allowedOriginPatterns = allowedOriginRegexps
}
for _, pattern := range c.allowedOriginPatterns {
if allowed = pattern.MatchString(origin); allowed {
break
}
}
}
return allowed
}
func (c CrossOriginResourceSharing) setAllowOriginHeader(req *Request, resp *Response) {
origin := req.Request.Header.Get(HEADER_Origin)
if c.isOriginAllowed(origin) {
resp.AddHeader(HEADER_AccessControlAllowOrigin, origin)
}
}
func (c CrossOriginResourceSharing) checkAndSetExposeHeaders(resp *Response) {
if len(c.ExposeHeaders) > 0 {
resp.AddHeader(HEADER_AccessControlExposeHeaders, strings.Join(c.ExposeHeaders, ","))
}
}
func (c CrossOriginResourceSharing) checkAndSetAllowCredentials(resp *Response) {
if c.CookiesAllowed {
resp.AddHeader(HEADER_AccessControlAllowCredentials, "true")
}
}
func (c CrossOriginResourceSharing) isValidAccessControlRequestMethod(method string, allowedMethods []string) bool {
for _, each := range allowedMethods {
if each == method {
return true
}
}
return false
}
func (c CrossOriginResourceSharing) isValidAccessControlRequestHeader(header string) bool {
for _, each := range c.AllowedHeaders {
if strings.ToLower(each) == strings.ToLower(header) {
return true
}
}
return false
}
// Take a list of strings and compile them into a list of regular expressions.
func compileRegexps(regexpStrings []string) ([]*regexp.Regexp, error) {
regexps := []*regexp.Regexp{}
for _, regexpStr := range regexpStrings {
r, err := regexp.Compile(regexpStr)
if err != nil {
return regexps, err
}
regexps = append(regexps, r)
}
return regexps, nil
}

View File

@ -1,2 +0,0 @@
go test -coverprofile=coverage.out
go tool cover -html=coverage.out

View File

@ -1,162 +0,0 @@
package restful
// Copyright 2013 Ernest Micklei. All rights reserved.
// Use of this source code is governed by a license
// that can be found in the LICENSE file.
import (
"net/http"
"regexp"
"sort"
"strings"
)
// CurlyRouter expects Routes with paths that contain zero or more parameters in curly brackets.
type CurlyRouter struct{}
// SelectRoute is part of the Router interface and returns the best match
// for the WebService and its Route for the given Request.
func (c CurlyRouter) SelectRoute(
webServices []*WebService,
httpRequest *http.Request) (selectedService *WebService, selected *Route, err error) {
requestTokens := tokenizePath(httpRequest.URL.Path)
detectedService := c.detectWebService(requestTokens, webServices)
if detectedService == nil {
if trace {
traceLogger.Printf("no WebService was found to match URL path:%s\n", httpRequest.URL.Path)
}
return nil, nil, NewError(http.StatusNotFound, "404: Page Not Found")
}
candidateRoutes := c.selectRoutes(detectedService, requestTokens)
if len(candidateRoutes) == 0 {
if trace {
traceLogger.Printf("no Route in WebService with path %s was found to match URL path:%s\n", detectedService.rootPath, httpRequest.URL.Path)
}
return detectedService, nil, NewError(http.StatusNotFound, "404: Page Not Found")
}
selectedRoute, err := c.detectRoute(candidateRoutes, httpRequest)
if selectedRoute == nil {
return detectedService, nil, err
}
return detectedService, selectedRoute, nil
}
// selectRoutes return a collection of Route from a WebService that matches the path tokens from the request.
func (c CurlyRouter) selectRoutes(ws *WebService, requestTokens []string) sortableCurlyRoutes {
candidates := sortableCurlyRoutes{}
for _, each := range ws.routes {
matches, paramCount, staticCount := c.matchesRouteByPathTokens(each.pathParts, requestTokens)
if matches {
candidates.add(curlyRoute{each, paramCount, staticCount}) // TODO make sure Routes() return pointers?
}
}
sort.Sort(sort.Reverse(candidates))
return candidates
}
// matchesRouteByPathTokens computes whether the route matches, how many parameters match and how many static path elements there are.
func (c CurlyRouter) matchesRouteByPathTokens(routeTokens, requestTokens []string) (matches bool, paramCount int, staticCount int) {
if len(routeTokens) < len(requestTokens) {
// proceed in matching only if last routeToken is wildcard
count := len(routeTokens)
if count == 0 || !strings.HasSuffix(routeTokens[count-1], "*}") {
return false, 0, 0
}
// proceed
}
for i, routeToken := range routeTokens {
if i == len(requestTokens) {
// reached end of request path
return false, 0, 0
}
requestToken := requestTokens[i]
if strings.HasPrefix(routeToken, "{") {
paramCount++
if colon := strings.Index(routeToken, ":"); colon != -1 {
// match by regex
matchesToken, matchesRemainder := c.regularMatchesPathToken(routeToken, colon, requestToken)
if !matchesToken {
return false, 0, 0
}
if matchesRemainder {
break
}
}
} else { // no { prefix
if requestToken != routeToken {
return false, 0, 0
}
staticCount++
}
}
return true, paramCount, staticCount
}
// regularMatchesPathToken tests whether the regular expression part of routeToken matches the requestToken or all remaining tokens
// format routeToken is {someVar:someExpression}, e.g. {zipcode:[\d][\d][\d][\d][A-Z][A-Z]}
func (c CurlyRouter) regularMatchesPathToken(routeToken string, colon int, requestToken string) (matchesToken bool, matchesRemainder bool) {
regPart := routeToken[colon+1 : len(routeToken)-1]
if regPart == "*" {
if trace {
traceLogger.Printf("wildcard parameter detected in route token %s that matches %s\n", routeToken, requestToken)
}
return true, true
}
matched, err := regexp.MatchString(regPart, requestToken)
return (matched && err == nil), false
}
// detectRoute selects from a list of Route the first match by inspecting both the Accept and Content-Type
// headers of the Request. See also RouterJSR311 in jsr311.go
func (c CurlyRouter) detectRoute(candidateRoutes sortableCurlyRoutes, httpRequest *http.Request) (*Route, error) {
// tracing is done inside detectRoute
return RouterJSR311{}.detectRoute(candidateRoutes.routes(), httpRequest)
}
// detectWebService returns the best matching webService given the list of path tokens.
// see also computeWebserviceScore
func (c CurlyRouter) detectWebService(requestTokens []string, webServices []*WebService) *WebService {
var best *WebService
score := -1
for _, each := range webServices {
matches, eachScore := c.computeWebserviceScore(requestTokens, each.pathExpr.tokens)
if matches && (eachScore > score) {
best = each
score = eachScore
}
}
return best
}
// computeWebserviceScore returns whether tokens match and
// the weighted score of the longest matching consecutive tokens from the beginning.
func (c CurlyRouter) computeWebserviceScore(requestTokens []string, tokens []string) (bool, int) {
if len(tokens) > len(requestTokens) {
return false, 0
}
score := 0
for i := 0; i < len(tokens); i++ {
each := requestTokens[i]
other := tokens[i]
if len(each) == 0 && len(other) == 0 {
score++
continue
}
if len(other) > 0 && strings.HasPrefix(other, "{") {
// no empty match
if len(each) == 0 {
return false, score
}
score += 1
} else {
// not a parameter
if each != other {
return false, score
}
score += (len(tokens) - i) * 10 //fuzzy
}
}
return true, score
}

View File

@ -1,52 +0,0 @@
package restful
// Copyright 2013 Ernest Micklei. All rights reserved.
// Use of this source code is governed by a license
// that can be found in the LICENSE file.
// curlyRoute exists for sorting Routes by the CurlyRouter based on number of parameters and number of static path elements.
type curlyRoute struct {
route Route
paramCount int
staticCount int
}
type sortableCurlyRoutes []curlyRoute
func (s *sortableCurlyRoutes) add(route curlyRoute) {
*s = append(*s, route)
}
func (s sortableCurlyRoutes) routes() (routes []Route) {
for _, each := range s {
routes = append(routes, each.route) // TODO change return type
}
return routes
}
func (s sortableCurlyRoutes) Len() int {
return len(s)
}
func (s sortableCurlyRoutes) Swap(i, j int) {
s[i], s[j] = s[j], s[i]
}
func (s sortableCurlyRoutes) Less(i, j int) bool {
ci := s[i]
cj := s[j]
// primary key
if ci.staticCount < cj.staticCount {
return true
}
if ci.staticCount > cj.staticCount {
return false
}
// secondary key
if ci.paramCount < cj.paramCount {
return true
}
if ci.paramCount > cj.paramCount {
return false
}
return ci.route.Path < cj.route.Path
}

View File

@ -1,196 +0,0 @@
/*
Package restful, a lean package for creating REST-style WebServices without magic.
WebServices and Routes
A WebService has a collection of Route objects that dispatch incoming Http Requests to function calls.
Typically, a WebService has a root path (e.g. /users) and defines common MIME types for its routes.
WebServices must be added to a container (see below) in order to handle Http requests from a server.
A Route is defined by an HTTP method, a URL path and (optionally) the MIME types it consumes (Content-Type) and produces (Accept).
This package has the logic to find the best matching Route and if found, call its Function.
ws := new(restful.WebService)
ws.
Path("/users").
Consumes(restful.MIME_JSON, restful.MIME_XML).
Produces(restful.MIME_JSON, restful.MIME_XML)
ws.Route(ws.GET("/{user-id}").To(u.findUser)) // u is a UserResource
...
// GET http://localhost:8080/users/1
func (u UserResource) findUser(request *restful.Request, response *restful.Response) {
id := request.PathParameter("user-id")
...
}
The (*Request, *Response) arguments provide functions for reading information from the request and writing information back to the response.
See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-user-resource.go with a full implementation.
Regular expression matching Routes
A Route parameter can be specified using the format "uri/{var[:regexp]}" or the special version "uri/{var:*}" for matching the tail of the path.
For example, /persons/{name:[A-Z][A-Z]} can be used to restrict values for the parameter "name" to only contain capital alphabetic characters.
Regular expressions must use the standard Go syntax as described in the regexp package. (https://code.google.com/p/re2/wiki/Syntax)
This feature requires the use of a CurlyRouter.
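For illustration, a small sketch of both parameter forms is shown below. The findPerson and serveTail handlers, paths and port are invented for the example, and the CurlyRouter is installed explicitly because the regexp syntax is not handled by the default RouterJSR311:
```Go
package main

import (
	"io"
	"net/http"

	"github.com/emicklei/go-restful"
)

func findPerson(req *restful.Request, resp *restful.Response) {
	io.WriteString(resp, "person "+req.PathParameter("name"))
}

func serveTail(req *restful.Request, resp *restful.Response) {
	io.WriteString(resp, "tail "+req.PathParameter("subpath"))
}

func main() {
	restful.DefaultContainer.Router(restful.CurlyRouter{}) // regexp parameters need the CurlyRouter

	ws := new(restful.WebService)
	// "name" must consist of exactly two capital letters, e.g. /persons/AB
	ws.Route(ws.GET("/persons/{name:[A-Z][A-Z]}").To(findPerson))
	// "{subpath:*}" matches whatever remains of the path, e.g. /files/css/site.css
	ws.Route(ws.GET("/files/{subpath:*}").To(serveTail))
	restful.Add(ws)

	http.ListenAndServe(":8080", nil)
}
```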
Containers
A Container holds a collection of WebServices, Filters and a http.ServeMux for multiplexing http requests.
Using the statements "restful.Add(...) and restful.Filter(...)" will register WebServices and Filters to the Default Container.
The Default container of go-restful uses the http.DefaultServeMux.
You can create your own Container and create a new http.Server for that particular container.
container := restful.NewContainer()
server := &http.Server{Addr: ":8081", Handler: container}
Filters
A filter dynamically intercepts requests and responses to transform or use the information contained in the requests or responses.
You can use filters to perform generic logging, measurement, authentication, redirect, set response headers etc.
In the restful package there are three hooks into the request,response flow where filters can be added.
Each filter must define a FilterFunction:
func (req *restful.Request, resp *restful.Response, chain *restful.FilterChain)
Use the following statement to pass the request,response pair to the next filter or RouteFunction
chain.ProcessFilter(req, resp)
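As a sketch only, a hypothetical FilterFunction that logs every request and then hands control back to the chain could look like this (the hello route and the port are invented for the example):
```Go
package main

import (
	"io"
	"log"
	"net/http"

	"github.com/emicklei/go-restful"
)

// globalLogging logs the method and URL, then passes the request/response
// pair on to the next filter or, eventually, the matched RouteFunction.
func globalLogging(req *restful.Request, resp *restful.Response, chain *restful.FilterChain) {
	log.Printf("[restful] %s %s", req.Request.Method, req.Request.URL)
	chain.ProcessFilter(req, resp)
}

func main() {
	restful.Filter(globalLogging) // container (global) filter on the DefaultContainer

	ws := new(restful.WebService)
	ws.Route(ws.GET("/hello").To(func(req *restful.Request, resp *restful.Response) {
		io.WriteString(resp, "hello")
	}))
	restful.Add(ws)

	http.ListenAndServe(":8080", nil)
}
```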
Container Filters
These are processed before any registered WebService.
// install a (global) filter for the default container (processed before any webservice)
restful.Filter(globalLogging)
WebService Filters
These are processed before any Route of a WebService.
// install a webservice filter (processed before any route)
ws.Filter(webserviceLogging).Filter(measureTime)
Route Filters
These are processed before calling the function associated with the Route.
// install 2 chained route filters (processed before calling findUser)
ws.Route(ws.GET("/{user-id}").Filter(routeLogging).Filter(NewCountFilter().routeCounter).To(findUser))
See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-filters.go with full implementations.
Response Encoding
Two encodings are supported: gzip and deflate. To enable this for all responses:
restful.DefaultContainer.EnableContentEncoding(true)
If a Http request includes the Accept-Encoding header then the response content will be compressed using the specified encoding.
Alternatively, you can create a Filter that performs the encoding and install it per WebService or Route.
See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-encoding-filter.go
OPTIONS support
By installing a pre-defined container filter, your WebService(s) can respond to the OPTIONS Http request.
Filter(OPTIONSFilter())
CORS
By installing the filter of a CrossOriginResourceSharing (CORS), your WebService(s) can handle CORS requests.
cors := CrossOriginResourceSharing{ExposeHeaders: []string{"X-My-Header"}, CookiesAllowed: false, Container: DefaultContainer}
Filter(cors.Filter)
Error Handling
Unexpected things happen. If a request cannot be processed because of a failure, your service needs to tell the client, via the response, what happened and why.
For this reason HTTP status codes exist and it is important to use the correct code in every exceptional situation.
400: Bad Request
If path or query parameters are not valid (content or type) then use http.StatusBadRequest.
404: Not Found
Despite a valid URI, the resource requested may not be available.
500: Internal Server Error
If the application logic could not process the request (or write the response) then use http.StatusInternalServerError.
405: Method Not Allowed
The request has a valid URL but the method (GET,PUT,POST,...) is not allowed.
406: Not Acceptable
The request does not have an Accept header, or it specifies one that this operation cannot produce.
415: Unsupported Media Type
The request does not have a Content-Type header, or it specifies one that this operation cannot consume.
ServiceError
In addition to setting the correct (error) Http status code, you can choose to write a ServiceError message on the response.
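For example, from within a RouteFunction such as findUser (a sketch; the parameter name 'q' is only illustrative):
response.WriteServiceError(http.StatusBadRequest, restful.NewError(http.StatusBadRequest, "missing query parameter 'q'"))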
Performance options
This package has several options that affect the performance of your service. It is important to understand them and how you can change them.
restful.DefaultContainer.Router(CurlyRouter{})
The default router is RouterJSR311, which is an implementation of the JSR311 spec (http://jsr311.java.net/nonav/releases/1.1/spec/spec.html).
However, it uses regular expressions for all its routes which, depending on your use case, may consume a significant amount of time.
The CurlyRouter implementation is more lightweight and still allows you to use wildcards and regular expressions, but only where needed.
restful.DefaultContainer.DoNotRecover(true)
DoNotRecover controls whether panics will be caught to return HTTP 500.
If set to true, Route functions are responsible for handling any error situation.
Default value is false; it will recover from panics. This has performance implications.
restful.SetCacheReadEntity(false)
SetCacheReadEntity controls whether the request body data ([]byte) is cached such that ReadEntity is repeatable.
If you expect to read large amounts of payload data, and you do not use this feature, you should set it to false.
restful.SetCompressorProvider(NewBoundedCachedCompressors(20, 20))
If content encoding is enabled then the default strategy for getting new gzip/zlib writers and readers is to use a sync.Pool.
Because writers are expensive structures, performance improves further when using a preloaded cache. You can also inject your own implementation.
Trouble shooting
This package has the means to produce detailed logging of the complete Http request matching process and filter invocation.
Enabling this feature requires you to set a restful.StdLogger implementation (e.g. a log.Logger) such as:
restful.TraceLogger(log.New(os.Stdout, "[restful] ", log.LstdFlags|log.Lshortfile))
Logging
The restful.SetLogger() method allows you to override the logger used by the package. By default restful
uses the standard library `log` package and logs to stderr. Different logging packages are supported as
long as they conform to the `StdLogger` interface defined in the `log` sub-package; writing an adapter for your
preferred package is simple.
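A sketch of such an adapter (myLoggerAdapter is a hypothetical name; the interface only requires Print and Printf):
type myLoggerAdapter struct{} // wraps your preferred logging package
func (a myLoggerAdapter) Print(v ...interface{}) { /* forward to your logger */ }
func (a myLoggerAdapter) Printf(format string, v ...interface{}) { /* forward to your logger */ }
restful.SetLogger(myLoggerAdapter{})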
Resources
[project]: https://github.com/emicklei/go-restful
[examples]: https://github.com/emicklei/go-restful/blob/master/examples
[design]: http://ernestmicklei.com/2012/11/11/go-restful-api-design/
[showcases]: https://github.com/emicklei/mora, https://github.com/emicklei/landskape
(c) 2012-2015, http://ernestmicklei.com. MIT License
*/
package restful

View File

@ -1,163 +0,0 @@
package restful
// Copyright 2015 Ernest Micklei. All rights reserved.
// Use of this source code is governed by a license
// that can be found in the LICENSE file.
import (
"encoding/json"
"encoding/xml"
"strings"
"sync"
)
// EntityReaderWriter can read and write values using an encoding such as JSON,XML.
type EntityReaderWriter interface {
// Read a serialized version of the value from the request.
// The Request may have a decompressing reader. Depends on Content-Encoding.
Read(req *Request, v interface{}) error
// Write a serialized version of the value on the response.
// The Response may have a compressing writer. Depends on Accept-Encoding.
// status should be a valid Http Status code
Write(resp *Response, status int, v interface{}) error
}
// entityAccessRegistry is a singleton
var entityAccessRegistry = &entityReaderWriters{
protection: new(sync.RWMutex),
accessors: map[string]EntityReaderWriter{},
}
// entityReaderWriters associates MIME to an EntityReaderWriter
type entityReaderWriters struct {
protection *sync.RWMutex
accessors map[string]EntityReaderWriter
}
func init() {
RegisterEntityAccessor(MIME_JSON, NewEntityAccessorJSON(MIME_JSON))
RegisterEntityAccessor(MIME_XML, NewEntityAccessorXML(MIME_XML))
}
// RegisterEntityAccessor adds or overrides the ReaderWriter for encoding content with this MIME type.
func RegisterEntityAccessor(mime string, erw EntityReaderWriter) {
entityAccessRegistry.protection.Lock()
defer entityAccessRegistry.protection.Unlock()
entityAccessRegistry.accessors[mime] = erw
}
// NewEntityAccessorJSON returns a new EntityReaderWriter for accessing JSON content.
// This package is already initialized with such an accessor using the MIME_JSON contentType.
func NewEntityAccessorJSON(contentType string) EntityReaderWriter {
return entityJSONAccess{ContentType: contentType}
}
// NewEntityAccessorXML returns a new EntityReaderWriter for accessing XML content.
// This package is already initialized with such an accessor using the MIME_XML contentType.
func NewEntityAccessorXML(contentType string) EntityReaderWriter {
return entityXMLAccess{ContentType: contentType}
}
// accessorAt returns the registered ReaderWriter for this MIME type.
func (r *entityReaderWriters) accessorAt(mime string) (EntityReaderWriter, bool) {
r.protection.RLock()
defer r.protection.RUnlock()
er, ok := r.accessors[mime]
if !ok {
// retry with reverse lookup
// more expensive but we are in an exceptional situation anyway
for k, v := range r.accessors {
if strings.Contains(mime, k) {
return v, true
}
}
}
return er, ok
}
// entityXMLAccess is a EntityReaderWriter for XML encoding
type entityXMLAccess struct {
// This is used for setting the Content-Type header when writing
ContentType string
}
// Read unmarshals the value from XML
func (e entityXMLAccess) Read(req *Request, v interface{}) error {
return xml.NewDecoder(req.Request.Body).Decode(v)
}
// Write marshals the value to XML and sets the Content-Type header.
func (e entityXMLAccess) Write(resp *Response, status int, v interface{}) error {
return writeXML(resp, status, e.ContentType, v)
}
// writeXML marshals the value to XML and sets the Content-Type header.
func writeXML(resp *Response, status int, contentType string, v interface{}) error {
if v == nil {
resp.WriteHeader(status)
// do not write a nil representation
return nil
}
if resp.prettyPrint {
// pretty output must be created and written explicitly
output, err := xml.MarshalIndent(v, " ", " ")
if err != nil {
return err
}
resp.Header().Set(HEADER_ContentType, contentType)
resp.WriteHeader(status)
_, err = resp.Write([]byte(xml.Header))
if err != nil {
return err
}
_, err = resp.Write(output)
return err
}
// not-so-pretty
resp.Header().Set(HEADER_ContentType, contentType)
resp.WriteHeader(status)
return xml.NewEncoder(resp).Encode(v)
}
// entityJSONAccess is a EntityReaderWriter for JSON encoding
type entityJSONAccess struct {
// This is used for setting the Content-Type header when writing
ContentType string
}
// Read unmarshals the value from JSON
func (e entityJSONAccess) Read(req *Request, v interface{}) error {
decoder := json.NewDecoder(req.Request.Body)
decoder.UseNumber()
return decoder.Decode(v)
}
// Write marshals the value to JSON and sets the Content-Type header.
func (e entityJSONAccess) Write(resp *Response, status int, v interface{}) error {
return writeJSON(resp, status, e.ContentType, v)
}
// writeJSON marshals the value to JSON and sets the Content-Type header.
func writeJSON(resp *Response, status int, contentType string, v interface{}) error {
if v == nil {
resp.WriteHeader(status)
// do not write a nil representation
return nil
}
if resp.prettyPrint {
// pretty output must be created and written explicitly
output, err := json.MarshalIndent(v, " ", " ")
if err != nil {
return err
}
resp.Header().Set(HEADER_ContentType, contentType)
resp.WriteHeader(status)
_, err = resp.Write(output)
return err
}
// not-so-pretty
resp.Header().Set(HEADER_ContentType, contentType)
resp.WriteHeader(status)
return json.NewEncoder(resp).Encode(v)
}

View File

@ -1,26 +0,0 @@
package restful
// Copyright 2013 Ernest Micklei. All rights reserved.
// Use of this source code is governed by a license
// that can be found in the LICENSE file.
// FilterChain is a request scoped object to process one or more filters before calling the target RouteFunction.
type FilterChain struct {
Filters []FilterFunction // ordered list of FilterFunction
Index int // index into filters that is currently in progress
Target RouteFunction // function to call after passing all filters
}
// ProcessFilter passes the request,response pair through the next of Filters.
// Each filter can decide to proceed to the next Filter or handle the Response itself.
func (f *FilterChain) ProcessFilter(request *Request, response *Response) {
if f.Index < len(f.Filters) {
f.Index++
f.Filters[f.Index-1](request, response, f)
} else {
f.Target(request, response)
}
}
// FilterFunction definitions must call ProcessFilter on the FilterChain to pass on the control and eventually call the RouteFunction
type FilterFunction func(*Request, *Response, *FilterChain)

View File

@ -1,10 +0,0 @@
go test -test.v ...restful && \
go test -test.v ...swagger && \
go vet ...restful && \
go fmt ...swagger && \
go install ...swagger && \
go fmt ...restful && \
go install ...restful
cd examples
ls *.go | xargs -I {} go build -o /tmp/ignore {}
cd ..

View File

@ -1,248 +0,0 @@
package restful
// Copyright 2013 Ernest Micklei. All rights reserved.
// Use of this source code is governed by a license
// that can be found in the LICENSE file.
import (
"errors"
"fmt"
"net/http"
"sort"
)
// RouterJSR311 implements the flow for matching Requests to Routes (and consequently Resource Functions)
// as specified by the JSR311 http://jsr311.java.net/nonav/releases/1.1/spec/spec.html.
// RouterJSR311 implements the Router interface.
// Concept of locators is not implemented.
type RouterJSR311 struct{}
// SelectRoute is part of the Router interface and returns the best match
// for the WebService and its Route for the given Request.
func (r RouterJSR311) SelectRoute(
webServices []*WebService,
httpRequest *http.Request) (selectedService *WebService, selectedRoute *Route, err error) {
// Identify the root resource class (WebService)
dispatcher, finalMatch, err := r.detectDispatcher(httpRequest.URL.Path, webServices)
if err != nil {
return nil, nil, NewError(http.StatusNotFound, "")
}
// Obtain the set of candidate methods (Routes)
routes := r.selectRoutes(dispatcher, finalMatch)
if len(routes) == 0 {
return dispatcher, nil, NewError(http.StatusNotFound, "404: Page Not Found")
}
// Identify the method (Route) that will handle the request
route, ok := r.detectRoute(routes, httpRequest)
return dispatcher, route, ok
}
// http://jsr311.java.net/nonav/releases/1.1/spec/spec3.html#x3-360003.7.2
func (r RouterJSR311) detectRoute(routes []Route, httpRequest *http.Request) (*Route, error) {
// http method
methodOk := []Route{}
for _, each := range routes {
if httpRequest.Method == each.Method {
methodOk = append(methodOk, each)
}
}
if len(methodOk) == 0 {
if trace {
traceLogger.Printf("no Route found (in %d routes) that matches HTTP method %s\n", len(routes), httpRequest.Method)
}
return nil, NewError(http.StatusMethodNotAllowed, "405: Method Not Allowed")
}
inputMediaOk := methodOk
// content-type
contentType := httpRequest.Header.Get(HEADER_ContentType)
inputMediaOk = []Route{}
for _, each := range methodOk {
if each.matchesContentType(contentType) {
inputMediaOk = append(inputMediaOk, each)
}
}
if len(inputMediaOk) == 0 {
if trace {
traceLogger.Printf("no Route found (from %d) that matches HTTP Content-Type: %s\n", len(methodOk), contentType)
}
return nil, NewError(http.StatusUnsupportedMediaType, "415: Unsupported Media Type")
}
// accept
outputMediaOk := []Route{}
accept := httpRequest.Header.Get(HEADER_Accept)
if len(accept) == 0 {
accept = "*/*"
}
for _, each := range inputMediaOk {
if each.matchesAccept(accept) {
outputMediaOk = append(outputMediaOk, each)
}
}
if len(outputMediaOk) == 0 {
if trace {
traceLogger.Printf("no Route found (from %d) that matches HTTP Accept: %s\n", len(inputMediaOk), accept)
}
return nil, NewError(http.StatusNotAcceptable, "406: Not Acceptable")
}
// return r.bestMatchByMedia(outputMediaOk, contentType, accept), nil
return &outputMediaOk[0], nil
}
// http://jsr311.java.net/nonav/releases/1.1/spec/spec3.html#x3-360003.7.2
// n/m > n/* > */*
func (r RouterJSR311) bestMatchByMedia(routes []Route, contentType string, accept string) *Route {
// TODO
return &routes[0]
}
// http://jsr311.java.net/nonav/releases/1.1/spec/spec3.html#x3-360003.7.2 (step 2)
func (r RouterJSR311) selectRoutes(dispatcher *WebService, pathRemainder string) []Route {
filtered := &sortableRouteCandidates{}
for _, each := range dispatcher.Routes() {
pathExpr := each.pathExpr
matches := pathExpr.Matcher.FindStringSubmatch(pathRemainder)
if matches != nil {
lastMatch := matches[len(matches)-1]
if len(lastMatch) == 0 || lastMatch == "/" { // only include if the remaining match is empty or "/"
filtered.candidates = append(filtered.candidates,
routeCandidate{each, len(matches) - 1, pathExpr.LiteralCount, pathExpr.VarCount})
}
}
}
if len(filtered.candidates) == 0 {
if trace {
traceLogger.Printf("WebService on path %s has no routes that match URL path remainder:%s\n", dispatcher.rootPath, pathRemainder)
}
return []Route{}
}
sort.Sort(sort.Reverse(filtered))
// select other routes from candidates whose expression also matches the path remainder
matchingRoutes := []Route{filtered.candidates[0].route}
for c := 1; c < len(filtered.candidates); c++ {
each := filtered.candidates[c]
if each.route.pathExpr.Matcher.MatchString(pathRemainder) {
matchingRoutes = append(matchingRoutes, each.route)
}
}
return matchingRoutes
}
// http://jsr311.java.net/nonav/releases/1.1/spec/spec3.html#x3-360003.7.2 (step 1)
func (r RouterJSR311) detectDispatcher(requestPath string, dispatchers []*WebService) (*WebService, string, error) {
filtered := &sortableDispatcherCandidates{}
for _, each := range dispatchers {
matches := each.pathExpr.Matcher.FindStringSubmatch(requestPath)
if matches != nil {
filtered.candidates = append(filtered.candidates,
dispatcherCandidate{each, matches[len(matches)-1], len(matches), each.pathExpr.LiteralCount, each.pathExpr.VarCount})
}
}
if len(filtered.candidates) == 0 {
if trace {
traceLogger.Printf("no WebService was found to match URL path:%s\n", requestPath)
}
return nil, "", errors.New("not found")
}
sort.Sort(sort.Reverse(filtered))
return filtered.candidates[0].dispatcher, filtered.candidates[0].finalMatch, nil
}
// Types and functions to support the sorting of Routes
type routeCandidate struct {
route Route
matchesCount int // the number of capturing groups
literalCount int // the number of literal characters (means those not resulting from template variable substitution)
nonDefaultCount int // the number of capturing groups with non-default regular expressions (i.e. not ([^ /]+?))
}
func (r routeCandidate) expressionToMatch() string {
return r.route.pathExpr.Source
}
func (r routeCandidate) String() string {
return fmt.Sprintf("(m=%d,l=%d,n=%d)", r.matchesCount, r.literalCount, r.nonDefaultCount)
}
type sortableRouteCandidates struct {
candidates []routeCandidate
}
func (rcs *sortableRouteCandidates) Len() int {
return len(rcs.candidates)
}
func (rcs *sortableRouteCandidates) Swap(i, j int) {
rcs.candidates[i], rcs.candidates[j] = rcs.candidates[j], rcs.candidates[i]
}
func (rcs *sortableRouteCandidates) Less(i, j int) bool {
ci := rcs.candidates[i]
cj := rcs.candidates[j]
// primary key
if ci.literalCount < cj.literalCount {
return true
}
if ci.literalCount > cj.literalCount {
return false
}
// secondary key
if ci.matchesCount < cj.matchesCount {
return true
}
if ci.matchesCount > cj.matchesCount {
return false
}
// tertiary key
if ci.nonDefaultCount < cj.nonDefaultCount {
return true
}
if ci.nonDefaultCount > cj.nonDefaultCount {
return false
}
// quaternary key ("source" is interpreted as Path)
return ci.route.Path < cj.route.Path
}
// Types and functions to support the sorting of Dispatchers
type dispatcherCandidate struct {
dispatcher *WebService
finalMatch string
matchesCount int // the number of capturing groups
literalCount int // the number of literal characters (means those not resulting from template variable substitution)
nonDefaultCount int // the number of capturing groups with non-default regular expressions (i.e. not ([^ /]+?))
}
type sortableDispatcherCandidates struct {
candidates []dispatcherCandidate
}
func (dc *sortableDispatcherCandidates) Len() int {
return len(dc.candidates)
}
func (dc *sortableDispatcherCandidates) Swap(i, j int) {
dc.candidates[i], dc.candidates[j] = dc.candidates[j], dc.candidates[i]
}
func (dc *sortableDispatcherCandidates) Less(i, j int) bool {
ci := dc.candidates[i]
cj := dc.candidates[j]
// primary key
if ci.matchesCount < cj.matchesCount {
return true
}
if ci.matchesCount > cj.matchesCount {
return false
}
// secondary key
if ci.literalCount < cj.literalCount {
return true
}
if ci.literalCount > cj.literalCount {
return false
}
// tertiary key
return ci.nonDefaultCount < cj.nonDefaultCount
}

View File

@ -1,31 +0,0 @@
package log
import (
stdlog "log"
"os"
)
// StdLogger corresponds to a minimal subset of the interface satisfied by stdlib log.Logger
type StdLogger interface {
Print(v ...interface{})
Printf(format string, v ...interface{})
}
var Logger StdLogger
func init() {
// default Logger
SetLogger(stdlog.New(os.Stderr, "[restful] ", stdlog.LstdFlags|stdlog.Lshortfile))
}
func SetLogger(customLogger StdLogger) {
Logger = customLogger
}
func Print(v ...interface{}) {
Logger.Print(v...)
}
func Printf(format string, v ...interface{}) {
Logger.Printf(format, v...)
}

View File

@ -1,32 +0,0 @@
package restful
// Copyright 2014 Ernest Micklei. All rights reserved.
// Use of this source code is governed by a license
// that can be found in the LICENSE file.
import (
"github.com/emicklei/go-restful/log"
)
var trace bool = false
var traceLogger log.StdLogger
func init() {
traceLogger = log.Logger // use the package logger by default
}
// TraceLogger enables detailed logging of Http request matching and filter invocation. Default no logger is set.
// You may call EnableTracing() directly to enable trace logging to the package-wide logger.
func TraceLogger(logger log.StdLogger) {
traceLogger = logger
EnableTracing(logger != nil)
}
// expose the setter for the global logger on the top-level package
func SetLogger(customLogger log.StdLogger) {
log.SetLogger(customLogger)
}
// EnableTracing can be used to turn trace logging on and off.
func EnableTracing(enabled bool) {
trace = enabled
}

View File

@ -1,45 +0,0 @@
package restful
import (
"strconv"
"strings"
)
type mime struct {
media string
quality float64
}
// insertMime adds a mime to a list and keeps it sorted by quality.
func insertMime(l []mime, e mime) []mime {
for i, each := range l {
// if current mime has lower quality then insert before
if e.quality > each.quality {
left := append([]mime{}, l[0:i]...)
return append(append(left, e), l[i:]...)
}
}
return append(l, e)
}
// sortedMimes returns a list of mime sorted (desc) by its specified quality.
func sortedMimes(accept string) (sorted []mime) {
for _, each := range strings.Split(accept, ",") {
typeAndQuality := strings.Split(strings.Trim(each, " "), ";")
if len(typeAndQuality) == 1 {
sorted = insertMime(sorted, mime{typeAndQuality[0], 1.0})
} else {
// take factor
parts := strings.Split(typeAndQuality[1], "=")
if len(parts) == 2 {
f, err := strconv.ParseFloat(parts[1], 64)
if err != nil {
traceLogger.Printf("unable to parse quality in %s, %v", each, err)
} else {
sorted = insertMime(sorted, mime{typeAndQuality[0], f})
}
}
}
}
return
}

View File

@ -1,26 +0,0 @@
package restful
import "strings"
// Copyright 2013 Ernest Micklei. All rights reserved.
// Use of this source code is governed by a license
// that can be found in the LICENSE file.
// OPTIONSFilter is a filter function that inspects the Http Request for the OPTIONS method
// and provides the response with a set of allowed methods for the request URL Path.
// As for any filter, you can also install it for a particular WebService within a Container.
// Note: this filter is not needed when using CrossOriginResourceSharing (for CORS).
func (c *Container) OPTIONSFilter(req *Request, resp *Response, chain *FilterChain) {
if "OPTIONS" != req.Request.Method {
chain.ProcessFilter(req, resp)
return
}
resp.AddHeader(HEADER_Allow, strings.Join(c.computeAllowedMethods(req), ","))
}
// OPTIONSFilter is a filter function that inspects the Http Request for the OPTIONS method
// and provides the response with a set of allowed methods for the request URL Path.
// Note: this filter is not needed when using CrossOriginResourceSharing (for CORS).
func OPTIONSFilter() FilterFunction {
return DefaultContainer.OPTIONSFilter
}

View File

@ -1,114 +0,0 @@
package restful
// Copyright 2013 Ernest Micklei. All rights reserved.
// Use of this source code is governed by a license
// that can be found in the LICENSE file.
const (
// PathParameterKind = indicator of Request parameter type "path"
PathParameterKind = iota
// QueryParameterKind = indicator of Request parameter type "query"
QueryParameterKind
// BodyParameterKind = indicator of Request parameter type "body"
BodyParameterKind
// HeaderParameterKind = indicator of Request parameter type "header"
HeaderParameterKind
// FormParameterKind = indicator of Request parameter type "form"
FormParameterKind
)
// Parameter is for documenting the parameter used in an Http Request
// ParameterData kinds are Path,Query and Body
type Parameter struct {
data *ParameterData
}
// ParameterData represents the state of a Parameter.
// It is made public to make it accessible to e.g. the Swagger package.
type ParameterData struct {
Name, Description, DataType, DataFormat string
Kind int
Required bool
AllowableValues map[string]string
AllowMultiple bool
DefaultValue string
}
// Data returns the state of the Parameter
func (p *Parameter) Data() ParameterData {
return *p.data
}
// Kind returns the parameter type indicator (see const for valid values)
func (p *Parameter) Kind() int {
return p.data.Kind
}
func (p *Parameter) bePath() *Parameter {
p.data.Kind = PathParameterKind
return p
}
func (p *Parameter) beQuery() *Parameter {
p.data.Kind = QueryParameterKind
return p
}
func (p *Parameter) beBody() *Parameter {
p.data.Kind = BodyParameterKind
return p
}
func (p *Parameter) beHeader() *Parameter {
p.data.Kind = HeaderParameterKind
return p
}
func (p *Parameter) beForm() *Parameter {
p.data.Kind = FormParameterKind
return p
}
// Required sets the required field and returns the receiver
func (p *Parameter) Required(required bool) *Parameter {
p.data.Required = required
return p
}
// AllowMultiple sets the allowMultiple field and returns the receiver
func (p *Parameter) AllowMultiple(multiple bool) *Parameter {
p.data.AllowMultiple = multiple
return p
}
// AllowableValues sets the allowableValues field and returns the receiver
func (p *Parameter) AllowableValues(values map[string]string) *Parameter {
p.data.AllowableValues = values
return p
}
// DataType sets the dataType field and returns the receiver
func (p *Parameter) DataType(typeName string) *Parameter {
p.data.DataType = typeName
return p
}
// DataFormat sets the dataFormat field for Swagger UI
func (p *Parameter) DataFormat(formatName string) *Parameter {
p.data.DataFormat = formatName
return p
}
// DefaultValue sets the default value field and returns the receiver
func (p *Parameter) DefaultValue(stringRepresentation string) *Parameter {
p.data.DefaultValue = stringRepresentation
return p
}
// Description sets the description value field and returns the receiver
func (p *Parameter) Description(doc string) *Parameter {
p.data.Description = doc
return p
}

View File

@ -1,69 +0,0 @@
package restful
// Copyright 2013 Ernest Micklei. All rights reserved.
// Use of this source code is governed by a license
// that can be found in the LICENSE file.
import (
"bytes"
"fmt"
"regexp"
"strings"
)
// PathExpression holds a compiled path expression (RegExp) needed to match against
// Http request paths and to extract path parameter values.
type pathExpression struct {
LiteralCount int // the number of literal characters (means those not resulting from template variable substitution)
VarCount int // the number of named parameters (enclosed by {}) in the path
Matcher *regexp.Regexp
Source string // Path as defined by the RouteBuilder
tokens []string
}
// NewPathExpression creates a PathExpression from the input URL path.
// Returns an error if the path is invalid.
func newPathExpression(path string) (*pathExpression, error) {
expression, literalCount, varCount, tokens := templateToRegularExpression(path)
compiled, err := regexp.Compile(expression)
if err != nil {
return nil, err
}
return &pathExpression{literalCount, varCount, compiled, expression, tokens}, nil
}
// http://jsr311.java.net/nonav/releases/1.1/spec/spec3.html#x3-370003.7.3
func templateToRegularExpression(template string) (expression string, literalCount int, varCount int, tokens []string) {
var buffer bytes.Buffer
buffer.WriteString("^")
//tokens = strings.Split(template, "/")
tokens = tokenizePath(template)
for _, each := range tokens {
if each == "" {
continue
}
buffer.WriteString("/")
if strings.HasPrefix(each, "{") {
// check for regular expression in variable
colon := strings.Index(each, ":")
if colon != -1 {
// extract expression
paramExpr := strings.TrimSpace(each[colon+1 : len(each)-1])
if paramExpr == "*" { // special case
buffer.WriteString("(.*)")
} else {
buffer.WriteString(fmt.Sprintf("(%s)", paramExpr)) // between colon and closing moustache
}
} else {
// plain var
buffer.WriteString("([^/]+?)")
}
varCount += 1
} else {
literalCount += len(each)
encoded := each // TODO URI encode
buffer.WriteString(regexp.QuoteMeta(encoded))
}
}
return strings.TrimRight(buffer.String(), "/") + "(/.*)?$", literalCount, varCount, tokens
}

View File

@ -1,131 +0,0 @@
package restful
// Copyright 2013 Ernest Micklei. All rights reserved.
// Use of this source code is governed by a license
// that can be found in the LICENSE file.
import (
"bytes"
"compress/zlib"
"io/ioutil"
"net/http"
)
var defaultRequestContentType string
var doCacheReadEntityBytes = true
// Request is a wrapper for a http Request that provides convenience methods
type Request struct {
Request *http.Request
bodyContent *[]byte // to cache the request body for multiple reads of ReadEntity
pathParameters map[string]string
attributes map[string]interface{} // for storing request-scoped values
selectedRoutePath string // root path + route path that matched the request, e.g. /meetings/{id}/attendees
}
func NewRequest(httpRequest *http.Request) *Request {
return &Request{
Request: httpRequest,
pathParameters: map[string]string{},
attributes: map[string]interface{}{},
} // empty parameters, attributes
}
// If ContentType is missing or */* is given then fall back to this type, otherwise
// an "Unable to unmarshal content of type:" response is returned.
// Valid values are restful.MIME_JSON and restful.MIME_XML
// Example:
// restful.DefaultRequestContentType(restful.MIME_JSON)
func DefaultRequestContentType(mime string) {
defaultRequestContentType = mime
}
// SetCacheReadEntity controls whether the request body data ([]byte) is cached such that ReadEntity is repeatable.
// Default is true (due to backward compatibility). For better performance, you should set it to false if you don't need it.
func SetCacheReadEntity(doCache bool) {
doCacheReadEntityBytes = doCache
}
// PathParameter accesses the Path parameter value by its name
func (r *Request) PathParameter(name string) string {
return r.pathParameters[name]
}
// PathParameters accesses the Path parameter values
func (r *Request) PathParameters() map[string]string {
return r.pathParameters
}
// QueryParameter returns the (first) Query parameter value by its name
func (r *Request) QueryParameter(name string) string {
return r.Request.FormValue(name)
}
// BodyParameter parses the body of the request (typically once, for a POST or a PUT) and returns the value of the given name or an error.
func (r *Request) BodyParameter(name string) (string, error) {
err := r.Request.ParseForm()
if err != nil {
return "", err
}
return r.Request.PostFormValue(name), nil
}
// HeaderParameter returns the HTTP Header value of a Header name or empty if missing
func (r *Request) HeaderParameter(name string) string {
return r.Request.Header.Get(name)
}
// ReadEntity checks the Content-Type header and reads the content into the entityPointer.
func (r *Request) ReadEntity(entityPointer interface{}) (err error) {
contentType := r.Request.Header.Get(HEADER_ContentType)
contentEncoding := r.Request.Header.Get(HEADER_ContentEncoding)
// OLD feature, cache the body for reads
if doCacheReadEntityBytes {
if r.bodyContent == nil {
data, err := ioutil.ReadAll(r.Request.Body)
if err != nil {
return err
}
r.bodyContent = &data
}
r.Request.Body = ioutil.NopCloser(bytes.NewReader(*r.bodyContent))
}
// check if the request body needs decompression
if ENCODING_GZIP == contentEncoding {
gzipReader := currentCompressorProvider.AcquireGzipReader()
defer currentCompressorProvider.ReleaseGzipReader(gzipReader)
gzipReader.Reset(r.Request.Body)
r.Request.Body = gzipReader
} else if ENCODING_DEFLATE == contentEncoding {
zlibReader, err := zlib.NewReader(r.Request.Body)
if err != nil {
return err
}
r.Request.Body = zlibReader
}
// lookup the EntityReader
entityReader, ok := entityAccessRegistry.accessorAt(contentType)
if !ok {
return NewError(http.StatusBadRequest, "Unable to unmarshal content of type:"+contentType)
}
return entityReader.Read(r, entityPointer)
}
// SetAttribute adds or replaces the attribute with the given value.
func (r *Request) SetAttribute(name string, value interface{}) {
r.attributes[name] = value
}
// Attribute returns the value associated to the given name. Returns nil if absent.
func (r Request) Attribute(name string) interface{} {
return r.attributes[name]
}
// SelectedRoutePath root path + route path that matched the request, e.g. /meetings/{id}/attendees
func (r Request) SelectedRoutePath() string {
return r.selectedRoutePath
}

View File

@ -1,235 +0,0 @@
package restful
// Copyright 2013 Ernest Micklei. All rights reserved.
// Use of this source code is governed by a license
// that can be found in the LICENSE file.
import (
"errors"
"net/http"
)
// DEPRECATED, use DefaultResponseContentType(mime)
var DefaultResponseMimeType string
//PrettyPrintResponses controls the indentation feature of XML and JSON serialization
var PrettyPrintResponses = true
// Response is a wrapper on the actual http ResponseWriter
// It provides several convenience methods to prepare and write response content.
type Response struct {
http.ResponseWriter
requestAccept string // mime-type what the Http Request says it wants to receive
routeProduces []string // mime-types what the Route says it can produce
statusCode int // HTTP status code that has been written explicitly (if zero then net/http has written 200)
contentLength int // number of bytes written for the response body
prettyPrint bool // controls the indentation feature of XML and JSON serialization. It is initialized using var PrettyPrintResponses.
err error // err property is kept when WriteError is called
}
// Creates a new response based on a http ResponseWriter.
func NewResponse(httpWriter http.ResponseWriter) *Response {
return &Response{httpWriter, "", []string{}, http.StatusOK, 0, PrettyPrintResponses, nil} // empty content-types
}
// If Accept header matching fails, fall back to this type.
// Valid values are restful.MIME_JSON and restful.MIME_XML
// Example:
// restful.DefaultResponseContentType(restful.MIME_JSON)
func DefaultResponseContentType(mime string) {
DefaultResponseMimeType = mime
}
// InternalServerError writes the StatusInternalServerError header.
// DEPRECATED, use WriteErrorString(http.StatusInternalServerError,reason)
func (r Response) InternalServerError() Response {
r.WriteHeader(http.StatusInternalServerError)
return r
}
// PrettyPrint changes whether this response must produce pretty (line-by-line, indented) JSON or XML output.
func (r *Response) PrettyPrint(bePretty bool) {
r.prettyPrint = bePretty
}
// AddHeader is a shortcut for .Header().Add(header,value)
func (r Response) AddHeader(header string, value string) Response {
r.Header().Add(header, value)
return r
}
// SetRequestAccepts tells the response what Mime-type(s) the HTTP request said it wants to accept. Exposed for testing.
func (r *Response) SetRequestAccepts(mime string) {
r.requestAccept = mime
}
// EntityWriter returns the registered EntityWriter that the entity (requested resource)
// can write according to what the request wants (Accept) and what the Route can produce or what the restful defaults say.
// If called before WriteEntity and WriteHeader then a false return value can be used to write a 406: Not Acceptable.
func (r *Response) EntityWriter() (EntityReaderWriter, bool) {
sorted := sortedMimes(r.requestAccept)
for _, eachAccept := range sorted {
for _, eachProduce := range r.routeProduces {
if eachProduce == eachAccept.media {
if w, ok := entityAccessRegistry.accessorAt(eachAccept.media); ok {
return w, true
}
}
}
if eachAccept.media == "*/*" {
for _, each := range r.routeProduces {
if w, ok := entityAccessRegistry.accessorAt(each); ok {
return w, true
}
}
}
}
// if requestAccept is empty
writer, ok := entityAccessRegistry.accessorAt(r.requestAccept)
if !ok {
// if not registered then fallback to the defaults (if set)
if DefaultResponseMimeType == MIME_JSON {
return entityAccessRegistry.accessorAt(MIME_JSON)
}
if DefaultResponseMimeType == MIME_XML {
return entityAccessRegistry.accessorAt(MIME_XML)
}
// Fallback to whatever the route says it can produce.
// https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
for _, each := range r.routeProduces {
if w, ok := entityAccessRegistry.accessorAt(each); ok {
return w, true
}
}
if trace {
traceLogger.Printf("no registered EntityReaderWriter found for %s", r.requestAccept)
}
}
return writer, ok
}
// WriteEntity calls WriteHeaderAndEntity with Http Status OK (200)
func (r *Response) WriteEntity(value interface{}) error {
return r.WriteHeaderAndEntity(http.StatusOK, value)
}
// WriteHeaderAndEntity marshals the value using the representation denoted by the Accept Header and the registered EntityWriters.
// If no Accept header is specified (or */*) then respond with the Content-Type as specified by the first in the Route.Produces.
// If an Accept header is specified then respond with the Content-Type as specified by the first in the Route.Produces that is matched with the Accept header.
// If the value is nil then no response is sent except for the Http status. You may want to call WriteHeader(http.StatusNotFound) instead.
// If there is no writer available that can represent the value in the requested MIME type then Http Status NotAcceptable is written.
// Current implementation ignores any q-parameters in the Accept Header.
// Returns an error if the value could not be written on the response.
func (r *Response) WriteHeaderAndEntity(status int, value interface{}) error {
writer, ok := r.EntityWriter()
if !ok {
r.WriteHeader(http.StatusNotAcceptable)
return nil
}
return writer.Write(r, status, value)
}
// WriteAsXml is a convenience method for writing a value in xml (requires Xml tags on the value)
// It uses the standard encoding/xml package for marshalling the value ; not using a registered EntityReaderWriter.
func (r *Response) WriteAsXml(value interface{}) error {
return writeXML(r, http.StatusOK, MIME_XML, value)
}
// WriteHeaderAndXml is a convenience method for writing a status and value in xml (requires Xml tags on the value)
// It uses the standard encoding/xml package for marshalling the value ; not using a registered EntityReaderWriter.
func (r *Response) WriteHeaderAndXml(status int, value interface{}) error {
return writeXML(r, status, MIME_XML, value)
}
// WriteAsJson is a convenience method for writing a value in json.
// It uses the standard encoding/json package for marshalling the value ; not using a registered EntityReaderWriter.
func (r *Response) WriteAsJson(value interface{}) error {
return writeJSON(r, http.StatusOK, MIME_JSON, value)
}
// WriteJson is a convenience method for writing a value in Json with a given Content-Type.
// It uses the standard encoding/json package for marshalling the value ; not using a registered EntityReaderWriter.
func (r *Response) WriteJson(value interface{}, contentType string) error {
return writeJSON(r, http.StatusOK, contentType, value)
}
// WriteHeaderAndJson is a convenience method for writing the status and a value in Json with a given Content-Type.
// It uses the standard encoding/json package for marshalling the value ; not using a registered EntityReaderWriter.
func (r *Response) WriteHeaderAndJson(status int, value interface{}, contentType string) error {
return writeJSON(r, status, contentType, value)
}
// WriteError write the http status and the error string on the response.
func (r *Response) WriteError(httpStatus int, err error) error {
r.err = err
return r.WriteErrorString(httpStatus, err.Error())
}
// WriteServiceError is a convenience method for responding with a status and a ServiceError
func (r *Response) WriteServiceError(httpStatus int, err ServiceError) error {
r.err = err
return r.WriteHeaderAndEntity(httpStatus, err)
}
// WriteErrorString is a convenience method for an error status with the actual error
func (r *Response) WriteErrorString(httpStatus int, errorReason string) error {
if r.err == nil {
// if not called from WriteError
r.err = errors.New(errorReason)
}
r.WriteHeader(httpStatus)
if _, err := r.Write([]byte(errorReason)); err != nil {
return err
}
return nil
}
// Flush implements http.Flusher interface, which sends any buffered data to the client.
func (r *Response) Flush() {
if f, ok := r.ResponseWriter.(http.Flusher); ok {
f.Flush()
} else if trace {
traceLogger.Printf("ResponseWriter %v doesn't support Flush", r)
}
}
// WriteHeader is overridden to remember the Status Code that has been written.
// Changes to the Header of the response have no effect after this.
func (r *Response) WriteHeader(httpStatus int) {
r.statusCode = httpStatus
r.ResponseWriter.WriteHeader(httpStatus)
}
// StatusCode returns the code that has been written using WriteHeader.
func (r Response) StatusCode() int {
if 0 == r.statusCode {
// no status code has been written yet; assume OK
return http.StatusOK
}
return r.statusCode
}
// Write writes the data to the connection as part of an HTTP reply.
// Write is part of http.ResponseWriter interface.
func (r *Response) Write(bytes []byte) (int, error) {
written, err := r.ResponseWriter.Write(bytes)
r.contentLength += written
return written, err
}
// ContentLength returns the number of bytes written for the response content.
// Note that this value is only correct if all data is written through the Response using its Write* methods.
// Data written directly using the underlying http.ResponseWriter is not accounted for.
func (r Response) ContentLength() int {
return r.contentLength
}
// CloseNotify is part of http.CloseNotifier interface
func (r Response) CloseNotify() <-chan bool {
return r.ResponseWriter.(http.CloseNotifier).CloseNotify()
}
// Error returns the err created by WriteError
func (r Response) Error() error {
return r.err
}

View File

@ -1,183 +0,0 @@
package restful
// Copyright 2013 Ernest Micklei. All rights reserved.
// Use of this source code is governed by a license
// that can be found in the LICENSE file.
import (
"bytes"
"net/http"
"strings"
)
// RouteFunction declares the signature of a function that can be bound to a Route.
type RouteFunction func(*Request, *Response)
// Route binds a HTTP Method,Path,Consumes combination to a RouteFunction.
type Route struct {
Method string
Produces []string
Consumes []string
Path string // webservice root path + described path
Function RouteFunction
Filters []FilterFunction
// cached values for dispatching
relativePath string
pathParts []string
pathExpr *pathExpression // cached compilation of relativePath as RegExp
// documentation
Doc string
Notes string
Operation string
ParameterDocs []*Parameter
ResponseErrors map[int]ResponseError
ReadSample, WriteSample interface{} // structs that model an example request or response payload
}
// Initialize for Route
func (r *Route) postBuild() {
r.pathParts = tokenizePath(r.Path)
}
// Create Request and Response from their http versions
func (r *Route) wrapRequestResponse(httpWriter http.ResponseWriter, httpRequest *http.Request) (*Request, *Response) {
params := r.extractParameters(httpRequest.URL.Path)
wrappedRequest := NewRequest(httpRequest)
wrappedRequest.pathParameters = params
wrappedRequest.selectedRoutePath = r.Path
wrappedResponse := NewResponse(httpWriter)
wrappedResponse.requestAccept = httpRequest.Header.Get(HEADER_Accept)
wrappedResponse.routeProduces = r.Produces
return wrappedRequest, wrappedResponse
}
// dispatchWithFilters call the function after passing through its own filters
func (r *Route) dispatchWithFilters(wrappedRequest *Request, wrappedResponse *Response) {
if len(r.Filters) > 0 {
chain := FilterChain{Filters: r.Filters, Target: r.Function}
chain.ProcessFilter(wrappedRequest, wrappedResponse)
} else {
// unfiltered
r.Function(wrappedRequest, wrappedResponse)
}
}
// Return whether the mimeType matches what this Route can produce.
func (r Route) matchesAccept(mimeTypesWithQuality string) bool {
parts := strings.Split(mimeTypesWithQuality, ",")
for _, each := range parts {
var withoutQuality string
if strings.Contains(each, ";") {
withoutQuality = strings.Split(each, ";")[0]
} else {
withoutQuality = each
}
// trim before compare
withoutQuality = strings.Trim(withoutQuality, " ")
if withoutQuality == "*/*" {
return true
}
for _, producibleType := range r.Produces {
if producibleType == "*/*" || producibleType == withoutQuality {
return true
}
}
}
return false
}
// Return whether this Route can consume content with a type specified by mimeTypes (can be empty).
func (r Route) matchesContentType(mimeTypes string) bool {
if len(r.Consumes) == 0 {
// did not specify what it can consume ; any media type (“*/*”) is assumed
return true
}
if len(mimeTypes) == 0 {
// idempotent methods with (most-likely or guaranteed) empty content match missing Content-Type
m := r.Method
if m == "GET" || m == "HEAD" || m == "OPTIONS" || m == "DELETE" || m == "TRACE" {
return true
}
// proceed with default
mimeTypes = MIME_OCTET
}
parts := strings.Split(mimeTypes, ",")
for _, each := range parts {
var contentType string
if strings.Contains(each, ";") {
contentType = strings.Split(each, ";")[0]
} else {
contentType = each
}
// trim before compare
contentType = strings.Trim(contentType, " ")
for _, consumeableType := range r.Consumes {
if consumeableType == "*/*" || consumeableType == contentType {
return true
}
}
}
return false
}
// Extract the parameters from the request url path
func (r Route) extractParameters(urlPath string) map[string]string {
urlParts := tokenizePath(urlPath)
pathParameters := map[string]string{}
for i, key := range r.pathParts {
var value string
if i >= len(urlParts) {
value = ""
} else {
value = urlParts[i]
}
if strings.HasPrefix(key, "{") { // path-parameter
if colon := strings.Index(key, ":"); colon != -1 {
// extract by regex
regPart := key[colon+1 : len(key)-1]
keyPart := key[1:colon]
if regPart == "*" {
pathParameters[keyPart] = untokenizePath(i, urlParts)
break
} else {
pathParameters[keyPart] = value
}
} else {
// without enclosing {}
pathParameters[key[1:len(key)-1]] = value
}
}
}
return pathParameters
}
// Untokenize back into a URL path using the slash separator
func untokenizePath(offset int, parts []string) string {
var buffer bytes.Buffer
for p := offset; p < len(parts); p++ {
buffer.WriteString(parts[p])
// do not end
if p < len(parts)-1 {
buffer.WriteString("/")
}
}
return buffer.String()
}
// Tokenize a URL path using the slash separator ; the result does not have empty tokens
func tokenizePath(path string) []string {
if "/" == path {
return []string{}
}
return strings.Split(strings.Trim(path, "/"), "/")
}
// for debugging
func (r Route) String() string {
return r.Method + " " + r.Path
}

View File

@ -1,240 +0,0 @@
package restful
// Copyright 2013 Ernest Micklei. All rights reserved.
// Use of this source code is governed by a license
// that can be found in the LICENSE file.
import (
"os"
"reflect"
"runtime"
"strings"
"github.com/emicklei/go-restful/log"
)
// RouteBuilder is a helper to construct Routes.
type RouteBuilder struct {
rootPath string
currentPath string
produces []string
consumes []string
httpMethod string // required
function RouteFunction // required
filters []FilterFunction
// documentation
doc string
notes string
operation string
readSample, writeSample interface{}
parameters []*Parameter
errorMap map[int]ResponseError
}
// Do evaluates each argument with the RouteBuilder itself.
// This allows you to follow DRY principles without breaking the fluent programming style.
// Example:
// ws.Route(ws.DELETE("/{name}").To(t.deletePerson).Do(Returns200, Returns500))
//
// func Returns500(b *RouteBuilder) {
// b.Returns(500, "Internal Server Error", restful.ServiceError{})
// }
func (b *RouteBuilder) Do(oneArgBlocks ...func(*RouteBuilder)) *RouteBuilder {
for _, each := range oneArgBlocks {
each(b)
}
return b
}
// To bind the route to a function.
// If this route is matched with the incoming Http Request then call this function with the *Request,*Response pair. Required.
func (b *RouteBuilder) To(function RouteFunction) *RouteBuilder {
b.function = function
return b
}
// Method specifies what HTTP method to match. Required.
func (b *RouteBuilder) Method(method string) *RouteBuilder {
b.httpMethod = method
return b
}
// Produces specifies what MIME types can be produced ; the matched one will appear in the Content-Type Http header.
func (b *RouteBuilder) Produces(mimeTypes ...string) *RouteBuilder {
b.produces = mimeTypes
return b
}
// Consumes specifies what MIME types can be consumed ; the Content-Type header of the request must match one of these
func (b *RouteBuilder) Consumes(mimeTypes ...string) *RouteBuilder {
b.consumes = mimeTypes
return b
}
// Path specifies the relative (w.r.t WebService root path) URL path to match. Default is "/".
func (b *RouteBuilder) Path(subPath string) *RouteBuilder {
b.currentPath = subPath
return b
}
// Doc tells what this route is all about. Optional.
func (b *RouteBuilder) Doc(documentation string) *RouteBuilder {
b.doc = documentation
return b
}
// Notes is a verbose explanation of the operation behavior. Optional.
func (b *RouteBuilder) Notes(notes string) *RouteBuilder {
b.notes = notes
return b
}
// Reads tells what resource type will be read from the request payload. Optional.
// A parameter of type "body" is added, required is set to true and the dataType is set to the qualified name of the sample's type.
func (b *RouteBuilder) Reads(sample interface{}) *RouteBuilder {
b.readSample = sample
typeAsName := reflect.TypeOf(sample).String()
bodyParameter := &Parameter{&ParameterData{Name: "body"}}
bodyParameter.beBody()
bodyParameter.Required(true)
bodyParameter.DataType(typeAsName)
b.Param(bodyParameter)
return b
}
// ParameterNamed returns a Parameter already known to the RouteBuilder. Returns nil if not found.
// Use this to modify or extend information for the Parameter (through its Data()).
func (b RouteBuilder) ParameterNamed(name string) (p *Parameter) {
for _, each := range b.parameters {
if each.Data().Name == name {
return each
}
}
return p
}
// Writes tells what resource type will be written as the response payload. Optional.
func (b *RouteBuilder) Writes(sample interface{}) *RouteBuilder {
b.writeSample = sample
return b
}
// Param allows you to document the parameters of the Route. It adds a new Parameter (does not check for duplicates).
func (b *RouteBuilder) Param(parameter *Parameter) *RouteBuilder {
if b.parameters == nil {
b.parameters = []*Parameter{}
}
b.parameters = append(b.parameters, parameter)
return b
}
// Operation allows you to document what the actual method/function call is of the Route.
// Unless called, the operation name is derived from the RouteFunction set using To(..).
func (b *RouteBuilder) Operation(name string) *RouteBuilder {
b.operation = name
return b
}
// ReturnsError is deprecated, use Returns instead.
func (b *RouteBuilder) ReturnsError(code int, message string, model interface{}) *RouteBuilder {
log.Print("ReturnsError is deprecated, use Returns instead.")
return b.Returns(code, message, model)
}
// Returns allows you to document what responses (errors or regular) can be expected.
// The model parameter is optional ; either pass a struct instance or use nil if not applicable.
func (b *RouteBuilder) Returns(code int, message string, model interface{}) *RouteBuilder {
err := ResponseError{
Code: code,
Message: message,
Model: model,
}
// lazy init because there is no NewRouteBuilder (yet)
if b.errorMap == nil {
b.errorMap = map[int]ResponseError{}
}
b.errorMap[code] = err
return b
}
type ResponseError struct {
Code int
Message string
Model interface{}
}
func (b *RouteBuilder) servicePath(path string) *RouteBuilder {
b.rootPath = path
return b
}
// Filter appends a FilterFunction to the end of filters for this Route to build.
func (b *RouteBuilder) Filter(filter FilterFunction) *RouteBuilder {
b.filters = append(b.filters, filter)
return b
}
// copyDefaults applies the WebService defaults to this RouteBuilder:
// if no specific Produces is set then use rootProduces,
// if no specific Consumes is set then use rootConsumes.
func (b *RouteBuilder) copyDefaults(rootProduces, rootConsumes []string) {
if len(b.produces) == 0 {
b.produces = rootProduces
}
if len(b.consumes) == 0 {
b.consumes = rootConsumes
}
}
// Build creates a new Route using the specification details collected by the RouteBuilder
func (b *RouteBuilder) Build() Route {
pathExpr, err := newPathExpression(b.currentPath)
if err != nil {
log.Printf("[restful] Invalid path:%s because:%v", b.currentPath, err)
os.Exit(1)
}
if b.function == nil {
log.Printf("[restful] No function specified for route:" + b.currentPath)
os.Exit(1)
}
operationName := b.operation
if len(operationName) == 0 && b.function != nil {
// extract from definition
operationName = nameOfFunction(b.function)
}
route := Route{
Method: b.httpMethod,
Path: concatPath(b.rootPath, b.currentPath),
Produces: b.produces,
Consumes: b.consumes,
Function: b.function,
Filters: b.filters,
relativePath: b.currentPath,
pathExpr: pathExpr,
Doc: b.doc,
Notes: b.notes,
Operation: operationName,
ParameterDocs: b.parameters,
ResponseErrors: b.errorMap,
ReadSample: b.readSample,
WriteSample: b.writeSample}
route.postBuild()
return route
}
func concatPath(path1, path2 string) string {
return strings.TrimRight(path1, "/") + "/" + strings.TrimLeft(path2, "/")
}
// nameOfFunction returns the short name of the function f for documentation.
// It uses a runtime feature for debugging ; its value may change for later Go versions.
func nameOfFunction(f interface{}) string {
fun := runtime.FuncForPC(reflect.ValueOf(f).Pointer())
tokenized := strings.Split(fun.Name(), ".")
last := tokenized[len(tokenized)-1]
last = strings.TrimSuffix(last, ")·fm") // < Go 1.5
last = strings.TrimSuffix(last, ")-fm") // Go 1.5
last = strings.TrimSuffix(last, "·fm") // < Go 1.5
last = strings.TrimSuffix(last, "-fm") // Go 1.5
return last
}

View File

@ -1,18 +0,0 @@
package restful
// Copyright 2013 Ernest Micklei. All rights reserved.
// Use of this source code is governed by a license
// that can be found in the LICENSE file.
import "net/http"
// A RouteSelector finds the best matching Route given the input HTTP Request
type RouteSelector interface {
// SelectRoute finds a Route given the input HTTP Request and a list of WebServices.
// It returns a selected Route and its containing WebService or an error indicating
// a problem.
SelectRoute(
webServices []*WebService,
httpRequest *http.Request) (selectedService *WebService, selected *Route, err error)
}

View File

@ -1,23 +0,0 @@
package restful
// Copyright 2013 Ernest Micklei. All rights reserved.
// Use of this source code is governed by a license
// that can be found in the LICENSE file.
import "fmt"
// ServiceError is a transport object to pass information about a non-Http error that occurred in a WebService while processing a request.
type ServiceError struct {
Code int
Message string
}
// NewError returns a ServiceError using the code and reason
func NewError(code int, message string) ServiceError {
return ServiceError{Code: code, Message: message}
}
// Error returns a text representation of the service error
func (s ServiceError) Error() string {
return fmt.Sprintf("[ServiceError:%v] %v", s.Code, s.Message)
}

View File

@ -1,268 +0,0 @@
package restful
import (
"errors"
"os"
"sync"
"github.com/emicklei/go-restful/log"
)
// Copyright 2013 Ernest Micklei. All rights reserved.
// Use of this source code is governed by a license
// that can be found in the LICENSE file.
// WebService holds a collection of Route values that bind a Http Method + URL Path to a function.
type WebService struct {
rootPath string
pathExpr *pathExpression // cached compilation of rootPath as RegExp
routes []Route
produces []string
consumes []string
pathParameters []*Parameter
filters []FilterFunction
documentation string
apiVersion string
dynamicRoutes bool
// protects 'routes' if dynamic routes are enabled
routesLock sync.RWMutex
}
func (w *WebService) SetDynamicRoutes(enable bool) {
w.dynamicRoutes = enable
}
// compilePathExpression ensures that the path is compiled into a RegEx for those routers that need it.
func (w *WebService) compilePathExpression() {
compiled, err := newPathExpression(w.rootPath)
if err != nil {
log.Printf("[restful] invalid path:%s because:%v", w.rootPath, err)
os.Exit(1)
}
w.pathExpr = compiled
}
// ApiVersion sets the API version for documentation purposes.
func (w *WebService) ApiVersion(apiVersion string) *WebService {
w.apiVersion = apiVersion
return w
}
// Version returns the API version for documentation purposes.
func (w *WebService) Version() string { return w.apiVersion }
// Path specifies the root URL template path of the WebService.
// All Routes will be relative to this path.
func (w *WebService) Path(root string) *WebService {
w.rootPath = root
if len(w.rootPath) == 0 {
w.rootPath = "/"
}
w.compilePathExpression()
return w
}
// Param adds a PathParameter to document parameters used in the root path.
func (w *WebService) Param(parameter *Parameter) *WebService {
if w.pathParameters == nil {
w.pathParameters = []*Parameter{}
}
w.pathParameters = append(w.pathParameters, parameter)
return w
}
// PathParameter creates a new Parameter of kind Path for documentation purposes.
// It is initialized as required with string as its DataType.
func (w *WebService) PathParameter(name, description string) *Parameter {
return PathParameter(name, description)
}
// PathParameter creates a new Parameter of kind Path for documentation purposes.
// It is initialized as required with string as its DataType.
func PathParameter(name, description string) *Parameter {
p := &Parameter{&ParameterData{Name: name, Description: description, Required: true, DataType: "string"}}
p.bePath()
return p
}
// QueryParameter creates a new Parameter of kind Query for documentation purposes.
// It is initialized as not required with string as its DataType.
func (w *WebService) QueryParameter(name, description string) *Parameter {
return QueryParameter(name, description)
}
// QueryParameter creates a new Parameter of kind Query for documentation purposes.
// It is initialized as not required with string as its DataType.
func QueryParameter(name, description string) *Parameter {
p := &Parameter{&ParameterData{Name: name, Description: description, Required: false, DataType: "string"}}
p.beQuery()
return p
}
// BodyParameter creates a new Parameter of kind Body for documentation purposes.
// It is initialized as required without a DataType.
func (w *WebService) BodyParameter(name, description string) *Parameter {
return BodyParameter(name, description)
}
// BodyParameter creates a new Parameter of kind Body for documentation purposes.
// It is initialized as required without a DataType.
func BodyParameter(name, description string) *Parameter {
p := &Parameter{&ParameterData{Name: name, Description: description, Required: true}}
p.beBody()
return p
}
// HeaderParameter creates a new Parameter of kind (Http) Header for documentation purposes.
// It is initialized as not required with string as its DataType.
func (w *WebService) HeaderParameter(name, description string) *Parameter {
return HeaderParameter(name, description)
}
// HeaderParameter creates a new Parameter of kind (Http) Header for documentation purposes.
// It is initialized as not required with string as its DataType.
func HeaderParameter(name, description string) *Parameter {
p := &Parameter{&ParameterData{Name: name, Description: description, Required: false, DataType: "string"}}
p.beHeader()
return p
}
// FormParameter creates a new Parameter of kind Form (using application/x-www-form-urlencoded) for documentation purposes.
// It is initialized as required with string as its DataType.
func (w *WebService) FormParameter(name, description string) *Parameter {
return FormParameter(name, description)
}
// FormParameter creates a new Parameter of kind Form (using application/x-www-form-urlencoded) for documentation purposes.
// It is initialized as required with string as its DataType.
func FormParameter(name, description string) *Parameter {
p := &Parameter{&ParameterData{Name: name, Description: description, Required: false, DataType: "string"}}
p.beForm()
return p
}
// Route creates a new Route using the RouteBuilder and adds it to the ordered list of Routes.
func (w *WebService) Route(builder *RouteBuilder) *WebService {
w.routesLock.Lock()
defer w.routesLock.Unlock()
builder.copyDefaults(w.produces, w.consumes)
w.routes = append(w.routes, builder.Build())
return w
}
// RemoveRoute removes the route that matches the given 'path' and 'method' from this WebService.
func (w *WebService) RemoveRoute(path, method string) error {
if !w.dynamicRoutes {
return errors.New("dynamic routes are not enabled")
}
w.routesLock.Lock()
defer w.routesLock.Unlock()
// Rebuild the slice with append so that a non-matching path/method leaves
// the routes unchanged instead of writing past the end of the new slice.
newRoutes := make([]Route, 0, len(w.routes))
for ix := range w.routes {
if w.routes[ix].Method == method && w.routes[ix].Path == path {
continue
}
newRoutes = append(newRoutes, w.routes[ix])
}
w.routes = newRoutes
return nil
}
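// Usage sketch (not part of the original file): RemoveRoute only has an effect
// after SetDynamicRoutes(true), and the path argument must equal the full
// built route path. pingHandler is a hypothetical handler and To comes from
// go-restful's RouteBuilder, which is not shown in this diff:
//
//	ws.SetDynamicRoutes(true)
//	ws.Route(ws.GET("/ping").To(pingHandler))
//	err := ws.RemoveRoute(ws.RootPath()+"/ping", "GET")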
// Method creates a new RouteBuilder and initialize its http method
func (w *WebService) Method(httpMethod string) *RouteBuilder {
return new(RouteBuilder).servicePath(w.rootPath).Method(httpMethod)
}
// Produces specifies that this WebService can produce one or more MIME types.
// Http requests must have one of these values set for the Accept header.
func (w *WebService) Produces(contentTypes ...string) *WebService {
w.produces = contentTypes
return w
}
// Consumes specifies that this WebService can consume one or more MIME types.
// Http requests must have one of these values set for the Content-Type header.
func (w *WebService) Consumes(accepts ...string) *WebService {
w.consumes = accepts
return w
}
// Routes returns the Routes associated with this WebService
func (w *WebService) Routes() []Route {
if !w.dynamicRoutes {
return w.routes
}
// Make a copy of the array to prevent concurrency problems
w.routesLock.RLock()
defer w.routesLock.RUnlock()
result := make([]Route, len(w.routes))
for ix := range w.routes {
result[ix] = w.routes[ix]
}
return result
}
// RootPath returns the RootPath associated with this WebService. Default "/"
func (w *WebService) RootPath() string {
return w.rootPath
}
// PathParameters returns the path parameters shared among the Routes of this WebService.
func (w *WebService) PathParameters() []*Parameter {
return w.pathParameters
}
// Filter adds a filter function to the chain of filters applicable to all its Routes
func (w *WebService) Filter(filter FilterFunction) *WebService {
w.filters = append(w.filters, filter)
return w
}
// Doc is used to set the documentation of this service.
func (w *WebService) Doc(plainText string) *WebService {
w.documentation = plainText
return w
}
// Documentation returns the documentation text set with Doc.
func (w *WebService) Documentation() string {
return w.documentation
}
/*
Convenience methods
*/
// HEAD is a shortcut for .Method("HEAD").Path(subPath)
func (w *WebService) HEAD(subPath string) *RouteBuilder {
return new(RouteBuilder).servicePath(w.rootPath).Method("HEAD").Path(subPath)
}
// GET is a shortcut for .Method("GET").Path(subPath)
func (w *WebService) GET(subPath string) *RouteBuilder {
return new(RouteBuilder).servicePath(w.rootPath).Method("GET").Path(subPath)
}
// POST is a shortcut for .Method("POST").Path(subPath)
func (w *WebService) POST(subPath string) *RouteBuilder {
return new(RouteBuilder).servicePath(w.rootPath).Method("POST").Path(subPath)
}
// PUT is a shortcut for .Method("PUT").Path(subPath)
func (w *WebService) PUT(subPath string) *RouteBuilder {
return new(RouteBuilder).servicePath(w.rootPath).Method("PUT").Path(subPath)
}
// PATCH is a shortcut for .Method("PATCH").Path(subPath)
func (w *WebService) PATCH(subPath string) *RouteBuilder {
return new(RouteBuilder).servicePath(w.rootPath).Method("PATCH").Path(subPath)
}
// DELETE is a shortcut for .Method("DELETE").Path(subPath)
func (w *WebService) DELETE(subPath string) *RouteBuilder {
return new(RouteBuilder).servicePath(w.rootPath).Method("DELETE").Path(subPath)
}
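The following is a minimal sketch of wiring a WebService together with the builder methods above. The To method and the Request/Response types used by the handlers are part of go-restful but do not appear in this diff, and getUser is a hypothetical handler.

package example

import (
	"net/http"

	"github.com/emicklei/go-restful"
)

// getUser echoes the path parameter back; a hypothetical handler for the sketch.
func getUser(req *restful.Request, resp *restful.Response) {
	resp.Write([]byte(req.PathParameter("user-id")))
}

// newUserService builds a service rooted at /users with two routes.
func newUserService() *restful.WebService {
	ws := new(restful.WebService)
	ws.Path("/users").
		Consumes("application/json").
		Produces("application/json").
		Doc("manage users")

	// Document the path parameter shared by this service's routes.
	ws.Param(ws.PathParameter("user-id", "identifier of the user"))

	ws.Route(ws.GET("/{user-id}").To(getUser))
	ws.Route(ws.DELETE("/{user-id}").To(func(req *restful.Request, resp *restful.Response) {
		resp.WriteHeader(http.StatusNoContent)
	}))
	return ws
}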

View File

@ -1,39 +0,0 @@
package restful
// Copyright 2013 Ernest Micklei. All rights reserved.
// Use of this source code is governed by a license
// that can be found in the LICENSE file.
import (
"net/http"
)
// DefaultContainer is a restful.Container that uses http.DefaultServeMux
var DefaultContainer *Container
func init() {
DefaultContainer = NewContainer()
DefaultContainer.ServeMux = http.DefaultServeMux
}
// DoNotRecover, if set to true, means panics will not be caught and turned into an HTTP 500 response.
// In that case, Route functions are responsible for handling any error situation.
// The default is false (recover from panics), which has performance implications.
// OBSOLETE; use restful.DefaultContainer.DoNotRecover(true) instead.
var DoNotRecover = false
// Add registers a new WebService and adds it to the DefaultContainer.
func Add(service *WebService) {
DefaultContainer.Add(service)
}
// Filter appends a container FilterFunction to the DefaultContainer.
// These are called before dispatching a http.Request to a WebService.
func Filter(filter FilterFunction) {
DefaultContainer.Filter(filter)
}
// RegisteredWebServices returns the collection of WebServices from the DefaultContainer.
func RegisteredWebServices() []*WebService {
return DefaultContainer.RegisteredWebServices()
}
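A minimal sketch of serving through the DefaultContainer declared above: because it reuses http.DefaultServeMux, registering a WebService with restful.Add and starting a plain net/http server is enough. The To method and the Request/Response handler types are go-restful APIs not shown in this diff.

package main

import (
	"log"
	"net/http"

	"github.com/emicklei/go-restful"
)

func main() {
	ws := new(restful.WebService)
	ws.Path("/healthz")
	ws.Route(ws.GET("/").To(func(req *restful.Request, resp *restful.Response) {
		resp.Write([]byte("ok"))
	}))
	restful.Add(ws)

	// DefaultContainer registers on http.DefaultServeMux, so a nil handler works.
	log.Fatal(http.ListenAndServe(":8080", nil))
}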

View File

@ -1 +0,0 @@
eyJhbGciOiJSU0EtT0FFUCIsImVuYyI6IkExMjhHQ00ifQ.pDqezepze0YqRx4u6M8GFaWtnVR-utTWZic-GX-RvMATAoYpG4H2sc9tlnGNCxa44dbRY0vY10qfBU7Sno8vkp21fsK42ofGLfen_suum_0ilm0sFS0X-kAwk7TIq5L5lPPKiChPMUiGp5oJW-g5MqMFX1jNiI-4fP-vSM3B3-eyZtJD_O517TgfIRLnblCzqwIkyRmAfPNopi-Fe8Y31TmO2Vd0nFc1Aqro_VaJSACzEVxOHTNpjETcMjlYzwgMXLeiAfLV-5hM0f6DXgHMlLSuMkB_Ndnw25dkB7hreGk4x0tHQ3X9mUfTgLq1hIDoyeeKDIM83Tqw4LBRph20BQ.qd_pNuyi23B0PlWz.JtpO7kqOm0SWOGzWDalkWheHuNd-eDpVbqI9WPAEFDOIBvz7TbsYMBlIYVWEGWbat4mkx_ejxnMn1L1l996NJnyP7eY-QE82cfPJbjx94d0Ob70KZ4DCm_UxcY2t-OKFiPJqxW7MA5jKyDuGD16bdxpjLEoe_cMSEr8FNu-MVG6wcchPcyYyRkqTQSl4mb09KikkAzHjwjo-DcO0f8ps4Uzsoc0aqAAWdE-ocG0YqierLoemjusYMiLH-eLF6MvaLRvHSte-cLzPuYCeZURnBDgxu3i3UApgddnX7g1c7tdGGBGvgCl-tEEDW58Vxgdjksim2S7y3lfoJ8FFzSWeRH2y7Kq04hgew3b2J_RiDB9ejzIopzG8ZGjJa3EO1-i9ORTl12nXK1RdlLGqu604ENaeVOPCIHL-0C8e6_wHdUGHydLZImSxKYSrNvy8resP1D_9t4B-3q2mkS9mhnMONrXbPDVw5QY5mvXlWs0Db99ARwzsl-Qlu0A_tsZwMjWT2I1QMvWPyTRScmMm0FJSv9zStjzxWa_q2GL7Naz1fI4Dd6ZgNJWYYq-mHN5chEeBdIcwb_zMPHczMQXXNL5nmfRGM1aPffkToFWCDpIlI8IXec83ZC6_POxZegS6n9Drrvc.6Nz8EXxs1lWX3ASaCeNElA

View File

@ -1,32 +0,0 @@
clone:
path: github.com/go-openapi/jsonpointer
matrix:
GO_VERSION:
- "1.6"
build:
integration:
image: golang:$$GO_VERSION
pull: true
commands:
- go get -u github.com/stretchr/testify/assert
- go get -u github.com/go-openapi/swag
- go test -race
- go test -v -cover -coverprofile=coverage.out -covermode=count ./...
notify:
slack:
channel: bots
webhook_url: $$SLACK_URL
username: drone
publish:
coverage:
server: https://coverage.vmware.run
token: $$GITHUB_TOKEN
# threshold: 70
# must_increase: true
when:
matrix:
GO_VERSION: "1.6"

View File

@ -1 +0,0 @@
secrets.yml

View File

@ -1,13 +0,0 @@
approve_by_comment: true
approve_regex: '^(:shipit:|:\+1:|\+1|LGTM|lgtm|Approved)'
reject_regex: ^[Rr]ejected
reset_on_push: false
reviewers:
members:
- casualjim
- chancez
- frapposelli
- vburenin
- pytlesk4
name: pullapprove
required: 1

View File

@ -1,74 +0,0 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, gender identity and expression, level of experience,
nationality, personal appearance, race, religion, or sexual identity and
orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at ivan+abuse@flanders.co.nz. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at [http://contributor-covenant.org/version/1/4][version]
[homepage]: http://contributor-covenant.org
[version]: http://contributor-covenant.org/version/1/4/

View File

@ -1,15 +0,0 @@
# gojsonpointer [![Build Status](https://ci.vmware.run/api/badges/go-openapi/jsonpointer/status.svg)](https://ci.vmware.run/go-openapi/jsonpointer) [![Coverage](https://coverage.vmware.run/badges/go-openapi/jsonpointer/coverage.svg)](https://coverage.vmware.run/go-openapi/jsonpointer) [![Slack Status](https://slackin.goswagger.io/badge.svg)](https://slackin.goswagger.io)
[![license](http://img.shields.io/badge/license-Apache%20v2-orange.svg)](https://raw.githubusercontent.com/go-openapi/jsonpointer/master/LICENSE) [![GoDoc](https://godoc.org/github.com/go-openapi/jsonpointer?status.svg)](http://godoc.org/github.com/go-openapi/jsonpointer)
An implementation of JSON Pointer - Go language
## Status
Completed YES
Tested YES
## References
http://tools.ietf.org/html/draft-ietf-appsawg-json-pointer-07
### Note
Section 4 (Evaluation) of the reference above, starting with 'If the currently referenced value is a JSON array, the reference token MUST contain either...', is not implemented.

View File

@ -1,238 +0,0 @@
// Copyright 2013 sigu-399 ( https://github.com/sigu-399 )
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// author sigu-399
// author-github https://github.com/sigu-399
// author-mail sigu.399@gmail.com
//
// repository-name jsonpointer
// repository-desc An implementation of JSON Pointer - Go language
//
// description Main and unique file.
//
// created 25-02-2013
package jsonpointer
import (
"errors"
"fmt"
"reflect"
"strconv"
"strings"
"github.com/go-openapi/swag"
)
const (
emptyPointer = ``
pointerSeparator = `/`
invalidStart = `JSON pointer must be empty or start with a "` + pointerSeparator
)
var jsonPointableType = reflect.TypeOf(new(JSONPointable)).Elem()
// JSONPointable is an interface for structs to implement when they need to customize the
// json pointer process
type JSONPointable interface {
JSONLookup(string) (interface{}, error)
}
type implStruct struct {
mode string // "SET" or "GET"
inDocument interface{}
setInValue interface{}
getOutNode interface{}
getOutKind reflect.Kind
outError error
}
// New creates a new json pointer for the given string
func New(jsonPointerString string) (Pointer, error) {
var p Pointer
err := p.parse(jsonPointerString)
return p, err
}
// Pointer is the JSON pointer representation.
type Pointer struct {
referenceTokens []string
}
// "Constructor", parses the given string JSON pointer
func (p *Pointer) parse(jsonPointerString string) error {
var err error
if jsonPointerString != emptyPointer {
if !strings.HasPrefix(jsonPointerString, pointerSeparator) {
err = errors.New(invalidStart)
} else {
referenceTokens := strings.Split(jsonPointerString, pointerSeparator)
for _, referenceToken := range referenceTokens[1:] {
p.referenceTokens = append(p.referenceTokens, referenceToken)
}
}
}
return err
}
// Get uses the pointer to retrieve a value from a JSON document
func (p *Pointer) Get(document interface{}) (interface{}, reflect.Kind, error) {
return p.get(document, swag.DefaultJSONNameProvider)
}
// GetForToken gets a value for a json pointer token 1 level deep
func GetForToken(document interface{}, decodedToken string) (interface{}, reflect.Kind, error) {
return getSingleImpl(document, decodedToken, swag.DefaultJSONNameProvider)
}
func getSingleImpl(node interface{}, decodedToken string, nameProvider *swag.NameProvider) (interface{}, reflect.Kind, error) {
kind := reflect.Invalid
rValue := reflect.Indirect(reflect.ValueOf(node))
kind = rValue.Kind()
switch kind {
case reflect.Struct:
if rValue.Type().Implements(jsonPointableType) {
r, err := node.(JSONPointable).JSONLookup(decodedToken)
if err != nil {
return nil, kind, err
}
return r, kind, nil
}
nm, ok := nameProvider.GetGoNameForType(rValue.Type(), decodedToken)
if !ok {
return nil, kind, fmt.Errorf("object has no field %q", decodedToken)
}
fld := rValue.FieldByName(nm)
return fld.Interface(), kind, nil
case reflect.Map:
kv := reflect.ValueOf(decodedToken)
mv := rValue.MapIndex(kv)
if mv.IsValid() && !swag.IsZero(mv) {
return mv.Interface(), kind, nil
}
return nil, kind, fmt.Errorf("object has no key %q", decodedToken)
case reflect.Slice:
tokenIndex, err := strconv.Atoi(decodedToken)
if err != nil {
return nil, kind, err
}
sLength := rValue.Len()
if tokenIndex < 0 || tokenIndex >= sLength {
return nil, kind, fmt.Errorf("index out of bounds array[0,%d] index '%d'", sLength, tokenIndex)
}
elem := rValue.Index(tokenIndex)
return elem.Interface(), kind, nil
default:
return nil, kind, fmt.Errorf("invalid token reference %q", decodedToken)
}
}
func (p *Pointer) get(node interface{}, nameProvider *swag.NameProvider) (interface{}, reflect.Kind, error) {
if nameProvider == nil {
nameProvider = swag.DefaultJSONNameProvider
}
kind := reflect.Invalid
// Full document when empty
if len(p.referenceTokens) == 0 {
return node, kind, nil
}
for _, token := range p.referenceTokens {
decodedToken := Unescape(token)
r, knd, err := getSingleImpl(node, decodedToken, nameProvider)
if err != nil {
return nil, knd, err
}
node, kind = r, knd
}
rValue := reflect.ValueOf(node)
kind = rValue.Kind()
return node, kind, nil
}
// DecodedTokens returns the decoded tokens
func (p *Pointer) DecodedTokens() []string {
result := make([]string, 0, len(p.referenceTokens))
for _, t := range p.referenceTokens {
result = append(result, Unescape(t))
}
return result
}
// IsEmpty returns true if this is an empty json pointer
// this indicates that it points to the root document
func (p *Pointer) IsEmpty() bool {
return len(p.referenceTokens) == 0
}
// String returns the string representation of the pointer.
func (p *Pointer) String() string {
if len(p.referenceTokens) == 0 {
return emptyPointer
}
pointerString := pointerSeparator + strings.Join(p.referenceTokens, pointerSeparator)
return pointerString
}
// Specific JSON pointer encoding here
// ~0 => ~
// ~1 => /
// ... and vice versa
const (
encRefTok0 = `~0`
encRefTok1 = `~1`
decRefTok0 = `~`
decRefTok1 = `/`
)
// Unescape unescapes a json pointer reference token string to the original representation
func Unescape(token string) string {
step1 := strings.Replace(token, encRefTok1, decRefTok1, -1)
step2 := strings.Replace(step1, encRefTok0, decRefTok0, -1)
return step2
}
// Escape escapes a pointer reference token string
func Escape(token string) string {
step1 := strings.Replace(token, decRefTok0, encRefTok0, -1)
step2 := strings.Replace(step1, decRefTok1, encRefTok1, -1)
return step2
}
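A small usage sketch of the pointer API above, resolving a token that itself contains a "/" and therefore has to go through Escape (the ~0/~1 encoding described in the comments); the document literal is invented for the example.

package main

import (
	"fmt"
	"log"

	"github.com/go-openapi/jsonpointer"
)

func main() {
	document := map[string]interface{}{
		"paths": map[string]interface{}{
			"/pets": map[string]interface{}{"get": "list pets"},
		},
	}

	// "/pets" contains a "/", so it must be escaped to "~1pets" inside the pointer.
	p, err := jsonpointer.New("/paths/" + jsonpointer.Escape("/pets") + "/get")
	if err != nil {
		log.Fatal(err)
	}

	value, kind, err := p.Get(document)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(value, kind) // list pets string
}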

View File

@ -1 +0,0 @@
eyJhbGciOiJSU0EtT0FFUCIsImVuYyI6IkExMjhHQ00ifQ.Xe40Wx6g5Y-iN0JVMhKyFfubtOId3zAVE564szw_yYGzFNhc_cGZO9F3BtAcJ55CfHG9C_ozn9dpnUDl_zYZoy_6cPCq13Ekb95z8NAC3ekDtbAATsc9HZwRNwI7UfkhstdwxljEouGB01qoLcUn6lFutrou-Ho21COHeDb2caemnPSA-rEAnXkOiBFu0RQ1MIwMygzvHXIHHYNpNwAtXqmiggM10miSjqBM3JmRPxCi7VK6_Rxij5p6LlhmK1BDi8Y6oBh-9BX3--5GAJeWZ6Vof5TnP-Enioia18j8c8KFtfY4q0y6Ednjb-AarLZ12gj695ppkBNJUdTJQmwGwA.fVcz_RiLrUB5fgMS.rjWllDYC6m_NB-ket_LizNEy9mlJ27odBTZQcMKaUqqXZBtWUCmPrOoMXGq-_cc-c7chg7D-WMh9SPQ23pV0P-DY-jsDpbOqHG2STOMEfW9ZREoaOLJXQaWcuBldLjRyWFcq0HGj97LgE6szD1Zlou3bmdHS_Q-U9Up9YQ_8_YnDcESD_cj1w5FZom7HjchKJFeGjQjfDQpoCKCQNMJaavUqy9jHQEeQ_uVocSrETg3GpewDcUF2tuv8uGq7ZZWu7Vl8zmnY1MFTynaGBWzTCSRmCkAXjcsaUheDP_NT5D7k-xUS6LwtqEUiXAXV07SNFraorFj5lnBQZRDlZMYcA3NWR6zHiOxekR9LBYPofst6w1rIqUchj_5m1tDpVTBMPir1eAaFcnJtPgo4ch17OF-kmcmQGLhJI3U7n8wv4sTrmP1dewtRRKrvlJe5r3_6eDiK4xZ8K0rnK1D4g6zuQqU1gA8KaU7pmZkKpFx3Bew4v-6DH32YwQBvAI7Lbb8afou9WsCNB_iswz5XGimP4bifiJRwpWBEz9VGhZFdiw-hZpYWgbxzVb5gtqfTDLIvpbLDmFz1vge16uUQHHVFpo1pSozyr7A60X8qsh9pmmO3RcJ-ZGZBWqiRC-Kl5ejz7WQ.LFoK4Ibi11B2lWQ5WcPSag

View File

@ -1,33 +0,0 @@
clone:
path: github.com/go-openapi/jsonreference
matrix:
GO_VERSION:
- "1.6"
build:
integration:
image: golang:$$GO_VERSION
pull: true
commands:
- go get -u github.com/stretchr/testify/assert
- go get -u github.com/PuerkitoBio/purell
- go get -u github.com/go-openapi/jsonpointer
- go test -race
- go test -v -cover -coverprofile=coverage.out -covermode=count ./...
notify:
slack:
channel: bots
webhook_url: $$SLACK_URL
username: drone
publish:
coverage:
server: https://coverage.vmware.run
token: $$GITHUB_TOKEN
# threshold: 70
# must_increase: true
when:
matrix:
GO_VERSION: "1.6"

View File

@ -1 +0,0 @@
secrets.yml

View File

@ -1,13 +0,0 @@
approve_by_comment: true
approve_regex: '^(:shipit:|:\+1:|\+1|LGTM|lgtm|Approved)'
reject_regex: ^[Rr]ejected
reset_on_push: false
reviewers:
members:
- casualjim
- chancez
- frapposelli
- vburenin
- pytlesk4
name: pullapprove
required: 1

View File

@ -1,74 +0,0 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, gender identity and expression, level of experience,
nationality, personal appearance, race, religion, or sexual identity and
orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at ivan+abuse@flanders.co.nz. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at [http://contributor-covenant.org/version/1/4][version]
[homepage]: http://contributor-covenant.org
[version]: http://contributor-covenant.org/version/1/4/

View File

@ -1,15 +0,0 @@
# gojsonreference [![Build Status](https://ci.vmware.run/api/badges/go-openapi/jsonreference/status.svg)](https://ci.vmware.run/go-openapi/jsonreference) [![Coverage](https://coverage.vmware.run/badges/go-openapi/jsonreference/coverage.svg)](https://coverage.vmware.run/go-openapi/jsonreference) [![Slack Status](https://slackin.goswagger.io/badge.svg)](https://slackin.goswagger.io)
[![license](http://img.shields.io/badge/license-Apache%20v2-orange.svg)](https://raw.githubusercontent.com/go-openapi/jsonreference/master/LICENSE) [![GoDoc](https://godoc.org/github.com/go-openapi/jsonreference?status.svg)](http://godoc.org/github.com/go-openapi/jsonreference)
An implementation of JSON Reference - Go language
## Status
Work in progress ( 90% done )
## Dependencies
https://github.com/xeipuuv/gojsonpointer
## References
http://tools.ietf.org/html/draft-ietf-appsawg-json-pointer-07
http://tools.ietf.org/html/draft-pbryan-zyp-json-ref-03

View File

@ -1,156 +0,0 @@
// Copyright 2013 sigu-399 ( https://github.com/sigu-399 )
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// author sigu-399
// author-github https://github.com/sigu-399
// author-mail sigu.399@gmail.com
//
// repository-name jsonreference
// repository-desc An implementation of JSON Reference - Go language
//
// description Main and unique file.
//
// created 26-02-2013
package jsonreference
import (
"errors"
"net/url"
"strings"
"github.com/PuerkitoBio/purell"
"github.com/go-openapi/jsonpointer"
)
const (
fragmentRune = `#`
)
// New creates a new reference for the given string
func New(jsonReferenceString string) (Ref, error) {
var r Ref
err := r.parse(jsonReferenceString)
return r, err
}
// MustCreateRef parses the ref string and panics when it's invalid.
// Use the New method for a version that returns an error
func MustCreateRef(ref string) Ref {
r, err := New(ref)
if err != nil {
panic(err)
}
return r
}
// Ref represents a json reference object
type Ref struct {
referenceURL *url.URL
referencePointer jsonpointer.Pointer
HasFullURL bool
HasURLPathOnly bool
HasFragmentOnly bool
HasFileScheme bool
HasFullFilePath bool
}
// GetURL gets the URL for this reference
func (r *Ref) GetURL() *url.URL {
return r.referenceURL
}
// GetPointer gets the json pointer for this reference
func (r *Ref) GetPointer() *jsonpointer.Pointer {
return &r.referencePointer
}
// String returns the best version of the url for this reference
func (r *Ref) String() string {
if r.referenceURL != nil {
return r.referenceURL.String()
}
if r.HasFragmentOnly {
return fragmentRune + r.referencePointer.String()
}
return r.referencePointer.String()
}
// IsRoot returns true if this reference is a root document
func (r *Ref) IsRoot() bool {
return r.referenceURL != nil &&
!r.IsCanonical() &&
!r.HasURLPathOnly &&
r.referenceURL.Fragment == ""
}
// IsCanonical returns true when this pointer starts with http(s):// or file://
func (r *Ref) IsCanonical() bool {
return (r.HasFileScheme && r.HasFullFilePath) || (!r.HasFileScheme && r.HasFullURL)
}
// "Constructor", parses the given string JSON reference
func (r *Ref) parse(jsonReferenceString string) error {
parsed, err := url.Parse(jsonReferenceString)
if err != nil {
return err
}
r.referenceURL, _ = url.Parse(purell.NormalizeURL(parsed, purell.FlagsSafe|purell.FlagRemoveDuplicateSlashes))
refURL := r.referenceURL
if refURL.Scheme != "" && refURL.Host != "" {
r.HasFullURL = true
} else {
if refURL.Path != "" {
r.HasURLPathOnly = true
} else if refURL.RawQuery == "" && refURL.Fragment != "" {
r.HasFragmentOnly = true
}
}
r.HasFileScheme = refURL.Scheme == "file"
r.HasFullFilePath = strings.HasPrefix(refURL.Path, "/")
// invalid json-pointer error means url has no json-pointer fragment. simply ignore error
r.referencePointer, _ = jsonpointer.New(refURL.Fragment)
return nil
}
// Inherits creates a new reference from a parent and a child
// If the child cannot inherit from the parent, an error is returned
func (r *Ref) Inherits(child Ref) (*Ref, error) {
childURL := child.GetURL()
parentURL := r.GetURL()
if childURL == nil {
return nil, errors.New("child url is nil")
}
if parentURL == nil {
return &child, nil
}
ref, err := New(parentURL.ResolveReference(childURL).String())
if err != nil {
return nil, err
}
return &ref, nil
}
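A short sketch of the Ref API above: a fragment-only reference is resolved against an absolute parent with Inherits. The URLs are invented for the example, and the printed output shown in the comments assumes the default purell normalization leaves them unchanged.

package main

import (
	"fmt"
	"log"

	"github.com/go-openapi/jsonreference"
)

func main() {
	parent, err := jsonreference.New("http://example.com/openapi/spec.json")
	if err != nil {
		log.Fatal(err)
	}
	child := jsonreference.MustCreateRef("#/definitions/Pet")

	// Inherits resolves the fragment-only child against the absolute parent.
	resolved, err := parent.Inherits(child)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resolved.String())                     // http://example.com/openapi/spec.json#/definitions/Pet
	fmt.Println(resolved.GetPointer().DecodedTokens()) // [definitions Pet]
}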

View File

@ -1 +0,0 @@
eyJhbGciOiJSU0EtT0FFUCIsImVuYyI6IkExMjhHQ00ifQ.Epk8dDFH8U1RPYIPDpajZO26L5zFJ1wnQNGWxVHHo5cXrWF148kENoZzh35FT9cAxxPS_4CeVVpf59EgvCc8bem1puuj0gBZptn-lYa7iXZdI-ESN2Te7nF5VbZfwbnI62nEikYGyxz-ozL_IFuMl-qWek4iLerF8Z_xh0MZOJ_w8Nog7qb2WQov72d997TJv5ZKjWcRYPbnsAy1q60-Cqxq3a6enhcSPXqpK46nYSXGKfHvognWBJ_pxwkEqIBPN6hE4EfNtJjMf2LFKEdYy02nbHz78d-2YZ8wIUSJ-IWIwn3GTzObdGqRed20Qf3JtWTsOespmexDrLSeo3HW6A.7XaHW-Y1jjRAWt_W.S1Adut62RLOYZc-lN02M0MGczEucch3zIr4J1UPBPnZooWzntiE5UaUz0UdhjHVszQE5hTfG-yocKD1rDQGER6qrLtnJVrCm9J3n4lHglM-xOz1eZln1XKrWcAgZnAKaKSzuAa5scPG4iTHW6RwbWi_PWm04tBJ1yazdjaVo3uvuhflwvU9if7uMPMtscrDesbBVvpG89xmeudiFjX-wjsV5oGBIjz6ukEBAMKzNDMqikNoG4SnGenpxUpjUjMkDXxiC3BC8oL2_myeIfFeEOF066DqEN3CLkqBVO25zdpWAF4Ou2jKv--mgGEb_E1aMgiSoAVBnybene0TKn2IJ8rtkyRdmWlLIRKZdDT3v775C1FPK6-tYzS7NVg9nnuvpta5PhzYNkqI1Ie74Sl0I-RFClhsdx9dLDhoFEKCx2etC4UDX9jhj2u0Y2MrL76dRGE9kEV1hL1fh6HMvS4ZAAWw3Qce4skCjcL-2YyIOHzKjgLGkZsR5cTUQwCJyacVkdHUOUKFdDGZaUzWkFyeZ1oyrlG2d52svaplpU5-vCOVbWkqUN9rOALGPTC51Ur0L7DFx29aDImhaxZqTe2t9mcdqY7VLcO3JgUiD3JKsEet7s2EDeN44MqITv9KBS8wqJW4.sRv4ov0wB0IxTHw90kJy-A

View File

@ -1,35 +0,0 @@
clone:
path: github.com/go-openapi/spec
matrix:
GO_VERSION:
- "1.6"
build:
integration:
image: golang:$$GO_VERSION
pull: true
commands:
- go get -u github.com/stretchr/testify/assert
- go get -u gopkg.in/yaml.v2
- go get -u github.com/go-openapi/swag
- go get -u github.com/go-openapi/jsonpointer
- go get -u github.com/go-openapi/jsonreference
- go test -race
- go test -v -cover -coverprofile=coverage.out -covermode=count ./...
notify:
slack:
channel: bots
webhook_url: $$SLACK_URL
username: drone
publish:
coverage:
server: https://coverage.vmware.run
token: $$GITHUB_TOKEN
# threshold: 70
# must_increase: true
when:
matrix:
GO_VERSION: "1.6"

View File

@ -1,2 +0,0 @@
secrets.yml
coverage.out

View File

@ -1,13 +0,0 @@
approve_by_comment: true
approve_regex: '^(:shipit:|:\+1:|\+1|LGTM|lgtm|Approved)'
reject_regex: ^[Rr]ejected
reset_on_push: false
reviewers:
members:
- casualjim
- chancez
- frapposelli
- vburenin
- pytlesk4
name: pullapprove
required: 1

View File

@ -1,74 +0,0 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, gender identity and expression, level of experience,
nationality, personal appearance, race, religion, or sexual identity and
orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at ivan+abuse@flanders.co.nz. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at [http://contributor-covenant.org/version/1/4][version]
[homepage]: http://contributor-covenant.org
[version]: http://contributor-covenant.org/version/1/4/

View File

@ -1,5 +0,0 @@
# OAI object model [![Build Status](https://ci.vmware.run/api/badges/go-openapi/spec/status.svg)](https://ci.vmware.run/go-openapi/spec) [![Coverage](https://coverage.vmware.run/badges/go-openapi/spec/coverage.svg)](https://coverage.vmware.run/go-openapi/spec) [![Slack Status](https://slackin.goswagger.io/badge.svg)](https://slackin.goswagger.io)
[![license](http://img.shields.io/badge/license-Apache%20v2-orange.svg)](https://raw.githubusercontent.com/go-openapi/spec/master/LICENSE) [![GoDoc](https://godoc.org/github.com/go-openapi/spec?status.svg)](http://godoc.org/github.com/go-openapi/spec)
The object model for OpenAPI specification documents

File diff suppressed because one or more lines are too long

View File

@ -1,24 +0,0 @@
// Copyright 2015 go-swagger maintainers
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package spec
// ContactInfo contact information for the exposed API.
//
// For more information: http://goo.gl/8us55a#contactObject
type ContactInfo struct {
Name string `json:"name,omitempty"`
URL string `json:"url,omitempty"`
Email string `json:"email,omitempty"`
}
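A quick sketch of what the omitempty tags above mean in practice: fields left empty are dropped from the serialized object. The values are invented for the example.

package main

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/go-openapi/spec"
)

func main() {
	info := spec.ContactInfo{Name: "API Support", Email: "support@example.com"}

	out, err := json.Marshal(info)
	if err != nil {
		log.Fatal(err)
	}
	// URL is empty, so omitempty drops it from the output.
	fmt.Println(string(out)) // {"name":"API Support","email":"support@example.com"}
}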

Some files were not shown because too many files have changed in this diff.