Initial commit

commit 49ab8aadd1
Author: Donny
Date:   2019-04-22 20:46:32 +08:00

25441 changed files with 4055000 additions and 0 deletions

vendor/k8s.io/test-infra/LICENSE (202 lines, generated, vendored)

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2016 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -0,0 +1,25 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = ["register.go"],
importpath = "k8s.io/test-infra/prow/apis/prowjobs",
visibility = ["//visibility:public"],
)
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [
":package-srcs",
"//vendor/k8s.io/test-infra/prow/apis/prowjobs/v1:all-srcs",
],
tags = ["automanaged"],
visibility = ["//visibility:public"],
)


@@ -0,0 +1,21 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package prowjobs
const (
GroupName = "prow.k8s.io"
)


@@ -0,0 +1,35 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = [
"doc.go",
"register.go",
"types.go",
"zz_generated.deepcopy.go",
],
importpath = "k8s.io/test-infra/prow/apis/prowjobs/v1",
visibility = ["//visibility:public"],
deps = [
"//vendor/github.com/knative/build/pkg/apis/build/v1alpha1:go_default_library",
"//vendor/k8s.io/api/core/v1:go_default_library",
"//vendor/k8s.io/apimachinery/pkg/apis/meta/v1:go_default_library",
"//vendor/k8s.io/apimachinery/pkg/runtime:go_default_library",
"//vendor/k8s.io/apimachinery/pkg/runtime/schema:go_default_library",
"//vendor/k8s.io/test-infra/prow/apis/prowjobs:go_default_library",
],
)
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [":package-srcs"],
tags = ["automanaged"],
visibility = ["//visibility:public"],
)

vendor/k8s.io/test-infra/prow/apis/prowjobs/v1/doc.go (21 lines, generated, vendored)

@@ -0,0 +1,21 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// +k8s:deepcopy-gen=package
// Package v1 is the v1 version of the API.
// +groupName=prow.k8s.io
package v1


@@ -0,0 +1,53 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/test-infra/prow/apis/prowjobs"
)
// SchemeGroupVersion is group version used to register these objects
var SchemeGroupVersion = schema.GroupVersion{Group: prowjobs.GroupName, Version: "v1"}
// Kind takes an unqualified kind and returns back a Group qualified GroupKind
func Kind(kind string) schema.GroupKind {
return SchemeGroupVersion.WithKind(kind).GroupKind()
}
// Resource takes an unqualified resource and returns a Group qualified GroupResource
func Resource(resource string) schema.GroupResource {
return SchemeGroupVersion.WithResource(resource).GroupResource()
}
var (
SchemeBuilder = runtime.NewSchemeBuilder(addKnownTypes)
AddToScheme = SchemeBuilder.AddToScheme
)
// Adds the list of known types to Scheme.
func addKnownTypes(scheme *runtime.Scheme) error {
scheme.AddKnownTypes(SchemeGroupVersion,
&ProwJob{},
&ProwJobList{},
)
metav1.AddToGroupVersion(scheme, SchemeGroupVersion)
return nil
}

vendor/k8s.io/test-infra/prow/apis/prowjobs/v1/types.go (487 lines, generated, vendored)

@@ -0,0 +1,487 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1
import (
"errors"
"fmt"
"strings"
"time"
buildv1alpha1 "github.com/knative/build/pkg/apis/build/v1alpha1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// ProwJobType specifies how the job is triggered.
type ProwJobType string
// Various job types.
const (
// PresubmitJob means it runs on unmerged PRs.
PresubmitJob ProwJobType = "presubmit"
// PostsubmitJob means it runs on each new commit.
PostsubmitJob = "postsubmit"
// Periodic job means it runs on a time-basis, unrelated to git changes.
PeriodicJob = "periodic"
// BatchJob tests multiple unmerged PRs at the same time.
BatchJob = "batch"
)
// ProwJobState specifies whether the job is running
type ProwJobState string
// Various job states.
const (
// TriggeredState means the job has been created but not yet scheduled.
TriggeredState ProwJobState = "triggered"
// PendingState means the job is scheduled but not yet running.
PendingState = "pending"
// SuccessState means the job completed without error (exit 0)
SuccessState = "success"
// FailureState means the job completed with errors (exit non-zero)
FailureState = "failure"
// AbortedState means prow killed the job early (new commit pushed, perhaps).
AbortedState = "aborted"
// ErrorState means the job could not schedule (bad config, perhaps).
ErrorState = "error"
)
// ProwJobAgent specifies the controller (such as plank or jenkins-agent) that runs the job.
type ProwJobAgent string
const (
// KubernetesAgent means prow will create a pod to run this job.
KubernetesAgent ProwJobAgent = "kubernetes"
// JenkinsAgent means prow will schedule the job on jenkins.
JenkinsAgent = "jenkins"
// KnativeBuildAgent means prow will schedule the job via a build-crd resource.
KnativeBuildAgent = "knative-build"
)
const (
// DefaultClusterAlias specifies the default cluster key to schedule jobs.
DefaultClusterAlias = "default"
)
// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// ProwJob contains the spec as well as runtime metadata.
type ProwJob struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
GitType string `json:"gittype,omitempty"`
Spec ProwJobSpec `json:"spec,omitempty"`
Status ProwJobStatus `json:"status,omitempty"`
}
// ProwJobSpec configures the details of the prow job.
//
// Details include the podspec, code to clone, the cluster it runs
// any child jobs, concurrency limitations, etc.
type ProwJobSpec struct {
// Type is the type of job and informs how
// the job is triggered
Type ProwJobType `json:"type,omitempty"`
// Agent determines which controller fulfills
// this specific ProwJobSpec and runs the job
Agent ProwJobAgent `json:"agent,omitempty"`
// Cluster is which Kubernetes cluster is used
// to run the job, only applicable for that
// specific agent
Cluster string `json:"cluster,omitempty"`
// Namespace defines where to create pods/resources.
Namespace string `json:"namespace,omitempty"`
// Job is the name of the job
Job string `json:"job,omitempty"`
// Refs is the code under test, determined at
// runtime by Prow itself
Refs *Refs `json:"refs,omitempty"`
// ExtraRefs are auxiliary repositories that
// need to be cloned, determined from config
ExtraRefs []Refs `json:"extra_refs,omitempty"`
// Report determines if the result of this job should
// be posted as a status on GitHub
Report bool `json:"report,omitempty"`
// Context is the name of the status context used to
// report back to GitHub
Context string `json:"context,omitempty"`
// RerunCommand is the command a user would write to
// trigger this job on their pull request
RerunCommand string `json:"rerun_command,omitempty"`
// MaxConcurrency restricts the total number of instances
// of this job that can run in parallel at once
MaxConcurrency int `json:"max_concurrency,omitempty"`
// ErrorOnEviction indicates that the ProwJob should be completed and given
// the ErrorState status if the pod that is executing the job is evicted.
// If this field is unspecified or false, a new pod will be created to replace
// the evicted one.
ErrorOnEviction bool `json:"error_on_eviction,omitempty"`
// PodSpec provides the basis for running the test under
// a Kubernetes agent
PodSpec *corev1.PodSpec `json:"pod_spec,omitempty"`
// BuildSpec provides the basis for running the test as
// a build-crd resource
// https://github.com/knative/build
BuildSpec *buildv1alpha1.BuildSpec `json:"build_spec,omitempty"`
// DecorationConfig holds configuration options for
// decorating PodSpecs that users provide
DecorationConfig *DecorationConfig `json:"decoration_config,omitempty"`
// RunAfterSuccess are jobs that should be triggered if
// this job runs and does not fail
RunAfterSuccess []ProwJobSpec `json:"run_after_success,omitempty"`
}
// DecorationConfig specifies how to augment pods.
//
// This is primarily used to provide automatic integration with gubernator
// and testgrid.
type DecorationConfig struct {
// Timeout is how long the pod utilities will wait
// before aborting a job with SIGINT.
Timeout time.Duration `json:"timeout,omitempty"`
// GracePeriod is how long the pod utilities will wait
// after sending SIGINT to send SIGKILL when aborting
// a job. Only applicable if decorating the PodSpec.
GracePeriod time.Duration `json:"grace_period,omitempty"`
// UtilityImages holds pull specs for utility container
// images used to decorate a PodSpec.
UtilityImages *UtilityImages `json:"utility_images,omitempty"`
// GCSConfiguration holds options for pushing logs and
// artifacts to GCS from a job.
GCSConfiguration *GCSConfiguration `json:"gcs_configuration,omitempty"`
// GCSCredentialsSecret is the name of the Kubernetes secret
// that holds GCS push credentials
GCSCredentialsSecret string `json:"gcs_credentials_secret,omitempty"`
// SSHKeySecrets are the names of Kubernetes secrets that contain
// SSH keys which should be used during the cloning process
SSHKeySecrets []string `json:"ssh_key_secrets,omitempty"`
// SSHHostFingerprints are the fingerprints of known ssh hosts
// that the cloning process can trust.
// Create with ssh-keyscan [-t rsa] host
SSHHostFingerprints []string `json:"ssh_host_fingerprints,omitempty"`
// SkipCloning determines if we should clone source code in the
// initcontainers for jobs that specify refs
SkipCloning *bool `json:"skip_cloning,omitempty"`
// CookiefileSecret is the name of a Kubernetes secret that contains
// a git http.cookiefile, which should be used during the cloning process.
CookiefileSecret string `json:"cookiefile_secret,omitempty"`
}
func (d *DecorationConfig) ApplyDefault(def *DecorationConfig) *DecorationConfig {
if d == nil && def == nil {
return nil
}
var merged DecorationConfig
if d != nil {
merged = *d
} else {
merged = *def
}
if d == nil || def == nil {
return &merged
}
merged.UtilityImages = merged.UtilityImages.ApplyDefault(def.UtilityImages)
merged.GCSConfiguration = merged.GCSConfiguration.ApplyDefault(def.GCSConfiguration)
if merged.Timeout == 0 {
merged.Timeout = def.Timeout
}
if merged.GracePeriod == 0 {
merged.GracePeriod = def.GracePeriod
}
if merged.GCSCredentialsSecret == "" {
merged.GCSCredentialsSecret = def.GCSCredentialsSecret
}
if len(merged.SSHKeySecrets) == 0 {
merged.SSHKeySecrets = def.SSHKeySecrets
}
if len(merged.SSHHostFingerprints) == 0 {
merged.SSHHostFingerprints = def.SSHHostFingerprints
}
if merged.SkipCloning == nil {
merged.SkipCloning = def.SkipCloning
}
if merged.CookiefileSecret == "" {
merged.CookiefileSecret = def.CookiefileSecret
}
return &merged
}
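
The `ApplyDefault` methods above all follow the same merge pattern: start from the job-level config, then fall back to the global default for every field still at its zero value, with `nil` on either side short-circuiting the field-by-field merge. A minimal, self-contained sketch of that pattern (the `config` struct and its two fields are simplified stand-ins, not the actual prow types):

```go
package main

import "fmt"

// config is a trimmed-down stand-in for DecorationConfig: two fields are
// enough to show the merge rule (zero values fall back to def).
type config struct {
	Timeout              int
	GCSCredentialsSecret string
}

// applyDefault mirrors the nil handling and zero-value fallback of
// DecorationConfig.ApplyDefault.
func applyDefault(d, def *config) *config {
	if d == nil && def == nil {
		return nil
	}
	var merged config
	if d != nil {
		merged = *d
	} else {
		merged = *def
	}
	// With only one side present, there is nothing to merge field-by-field.
	if d == nil || def == nil {
		return &merged
	}
	if merged.Timeout == 0 {
		merged.Timeout = def.Timeout
	}
	if merged.GCSCredentialsSecret == "" {
		merged.GCSCredentialsSecret = def.GCSCredentialsSecret
	}
	return &merged
}

func main() {
	job := &config{Timeout: 30}                                   // job overrides Timeout only
	global := &config{Timeout: 120, GCSCredentialsSecret: "gcs-push"}
	fmt.Println(*applyDefault(job, global))
}
```

The job-level value wins wherever it is set, so the merged config keeps `Timeout: 30` but inherits the credentials secret from the global default.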
func (d *DecorationConfig) Validate() error {
if d.UtilityImages == nil {
return errors.New("utility image config is not specified")
}
var missing []string
if d.UtilityImages.CloneRefs == "" {
missing = append(missing, "clonerefs")
}
if d.UtilityImages.InitUpload == "" {
missing = append(missing, "initupload")
}
if d.UtilityImages.Entrypoint == "" {
missing = append(missing, "entrypoint")
}
if d.UtilityImages.Sidecar == "" {
missing = append(missing, "sidecar")
}
if len(missing) > 0 {
return fmt.Errorf("the following utility images are not specified: %q", missing)
}
if d.GCSConfiguration == nil {
return errors.New("GCS upload configuration is not specified")
}
if d.GCSCredentialsSecret == "" {
return errors.New("GCS upload credential secret is not specified")
}
if err := d.GCSConfiguration.Validate(); err != nil {
return fmt.Errorf("GCS configuration is invalid: %v", err)
}
return nil
}
// UtilityImages holds pull specs for the utility images
// to be used for a job
type UtilityImages struct {
// CloneRefs is the pull spec used for the clonerefs utility
CloneRefs string `json:"clonerefs,omitempty"`
// InitUpload is the pull spec used for the initupload utility
InitUpload string `json:"initupload,omitempty"`
// Entrypoint is the pull spec used for the entrypoint utility
Entrypoint string `json:"entrypoint,omitempty"`
// Sidecar is the pull spec used for the sidecar utility
Sidecar string `json:"sidecar,omitempty"`
}
func (u *UtilityImages) ApplyDefault(def *UtilityImages) *UtilityImages {
if u == nil {
return def
} else if def == nil {
return u
}
merged := *u
if merged.CloneRefs == "" {
merged.CloneRefs = def.CloneRefs
}
if merged.InitUpload == "" {
merged.InitUpload = def.InitUpload
}
if merged.Entrypoint == "" {
merged.Entrypoint = def.Entrypoint
}
if merged.Sidecar == "" {
merged.Sidecar = def.Sidecar
}
return &merged
}
// PathStrategy specifies minutiae about how to construct the URL.
// Usually consumed by gubernator/testgrid.
const (
PathStrategyLegacy = "legacy"
PathStrategySingle = "single"
PathStrategyExplicit = "explicit"
)
// GCSConfiguration holds options for pushing logs and
// artifacts to GCS from a job.
type GCSConfiguration struct {
// Bucket is the GCS bucket to upload to
Bucket string `json:"bucket,omitempty"`
// PathPrefix is an optional path that follows the
// bucket name and comes before any structure
PathPrefix string `json:"path_prefix,omitempty"`
// PathStrategy dictates how the org and repo are used
// when calculating the full path to an artifact in GCS
PathStrategy string `json:"path_strategy,omitempty"`
// DefaultOrg is omitted from GCS paths when using the
// legacy or simple strategy
DefaultOrg string `json:"default_org,omitempty"`
// DefaultRepo is omitted from GCS paths when using the
// legacy or simple strategy
DefaultRepo string `json:"default_repo,omitempty"`
}
func (g *GCSConfiguration) ApplyDefault(def *GCSConfiguration) *GCSConfiguration {
if g == nil && def == nil {
return nil
}
var merged GCSConfiguration
if g != nil {
merged = *g
} else {
merged = *def
}
if g == nil || def == nil {
return &merged
}
if merged.Bucket == "" {
merged.Bucket = def.Bucket
}
if merged.PathPrefix == "" {
merged.PathPrefix = def.PathPrefix
}
if merged.PathStrategy == "" {
merged.PathStrategy = def.PathStrategy
}
if merged.DefaultOrg == "" {
merged.DefaultOrg = def.DefaultOrg
}
if merged.DefaultRepo == "" {
merged.DefaultRepo = def.DefaultRepo
}
return &merged
}
func (g *GCSConfiguration) Validate() error {
if g.PathStrategy != PathStrategyLegacy && g.PathStrategy != PathStrategyExplicit && g.PathStrategy != PathStrategySingle {
return fmt.Errorf("gcs_path_strategy must be one of %q, %q, or %q", PathStrategyLegacy, PathStrategyExplicit, PathStrategySingle)
}
if g.PathStrategy != PathStrategyExplicit && (g.DefaultOrg == "" || g.DefaultRepo == "") {
return fmt.Errorf("default org and repo must be provided for GCS strategy %q", g.PathStrategy)
}
return nil
}
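
`GCSConfiguration.Validate` enforces two rules: the strategy must be one of the three known names, and any strategy other than `explicit` needs a default org and repo to substitute into paths. A self-contained sketch of those checks (the standalone `validateGCS` function is an illustration, not part of the package):

```go
package main

import "fmt"

// Path strategy names, copied from the constants above.
const (
	pathStrategyLegacy   = "legacy"
	pathStrategySingle   = "single"
	pathStrategyExplicit = "explicit"
)

// validateGCS reproduces the two checks in GCSConfiguration.Validate.
func validateGCS(strategy, defaultOrg, defaultRepo string) error {
	if strategy != pathStrategyLegacy && strategy != pathStrategyExplicit && strategy != pathStrategySingle {
		return fmt.Errorf("gcs_path_strategy must be one of %q, %q, or %q",
			pathStrategyLegacy, pathStrategyExplicit, pathStrategySingle)
	}
	// Non-explicit strategies elide the org/repo, so defaults are required.
	if strategy != pathStrategyExplicit && (defaultOrg == "" || defaultRepo == "") {
		return fmt.Errorf("default org and repo must be provided for GCS strategy %q", strategy)
	}
	return nil
}

func main() {
	fmt.Println(validateGCS("explicit", "", ""))         // explicit needs no defaults
	fmt.Println(validateGCS("legacy", "kubernetes", "")) // legacy needs a default repo
}
```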
// ProwJobStatus provides runtime metadata, such as when it finished, whether it is running, etc.
type ProwJobStatus struct {
StartTime metav1.Time `json:"startTime,omitempty"`
CompletionTime *metav1.Time `json:"completionTime,omitempty"`
State ProwJobState `json:"state,omitempty"`
Description string `json:"description,omitempty"`
URL string `json:"url,omitempty"`
// PodName applies only to ProwJobs fulfilled by
// plank. This field should always be the same as
// the ProwJob.ObjectMeta.Name field.
PodName string `json:"pod_name,omitempty"`
// BuildID is the build identifier vended either by tot
// or the snowflake library for this job and used as an
// identifier for grouping artifacts in GCS for views in
// TestGrid and Gubernator. Identifiers vended by tot
// are monotonically increasing whereas identifiers vended
// by the snowflake library are not.
BuildID string `json:"build_id,omitempty"`
// JenkinsBuildID applies only to ProwJobs fulfilled
// by the jenkins-operator. This field is the build
// identifier that Jenkins gave to the build for this
// ProwJob.
JenkinsBuildID string `json:"jenkins_build_id,omitempty"`
// PrevReportStates stores the previous reported prowjob state per reporter
// So crier won't make duplicated report attempt
PrevReportStates map[string]ProwJobState `json:"prev_report_states,omitempty"`
}
// Complete returns true if the prow job has finished
func (j *ProwJob) Complete() bool {
// TODO(fejta): support a timeout?
return j.Status.CompletionTime != nil
}
// SetComplete marks the job as completed (at time now).
func (j *ProwJob) SetComplete() {
j.Status.CompletionTime = new(metav1.Time)
*j.Status.CompletionTime = metav1.Now()
}
// ClusterAlias specifies the key in the clusters map to use.
//
// This allows scheduling a prow job somewhere aside from the default build cluster.
func (j *ProwJob) ClusterAlias() string {
if j.Spec.Cluster == "" {
return DefaultClusterAlias
}
return j.Spec.Cluster
}
// Pull describes a pull request at a particular point in time.
type Pull struct {
Number int `json:"number,omitempty"`
Author string `json:"author,omitempty"`
SHA string `json:"sha,omitempty"`
// Ref is the git ref that can be checked out for a change,
// for example,
// github: pull/123/head
// gerrit: refs/changes/00/123/1
Ref string `json:"ref,omitempty"`
}
// Refs describes how the repo was constructed.
type Refs struct {
GitType string `json:"gittype,omitempty"`
// Org is something like kubernetes or k8s.io
Org string `json:"org,omitempty"`
// Repo is something like test-infra
Repo string `json:"repo,omitempty"`
BaseRef string `json:"base_ref,omitempty"`
BaseSHA string `json:"base_sha,omitempty"`
Pulls []Pull `json:"pulls,omitempty"`
// PathAlias is the location under <root-dir>/src
// where this repository is cloned. If this is not
// set, <root-dir>/src/github.com/org/repo will be
// used as the default.
PathAlias string `json:"path_alias,omitempty"`
// CloneURI is the URI that is used to clone the
// repository. If unset, will default to
// `https://github.com/org/repo.git`.
CloneURI string `json:"clone_uri,omitempty"`
// SkipSubmodules determines if submodules should be
// cloned when the job is run. Defaults to true.
SkipSubmodules bool `json:"skip_submodules,omitempty"`
}
func (r Refs) String() string {
rs := []string{fmt.Sprintf("%s:%s", r.BaseRef, r.BaseSHA)}
for _, pull := range r.Pulls {
ref := fmt.Sprintf("%d:%s", pull.Number, pull.SHA)
if pull.Ref != "" {
ref = fmt.Sprintf("%s:%s", ref, pull.Ref)
}
rs = append(rs, ref)
}
return strings.Join(rs, ",")
}
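
`Refs.String` renders the base ref plus each pull as `<base_ref>:<base_sha>,<number>:<sha>[:<ref>],...`. A self-contained demonstration, using minimal copies of `Pull` and `Refs` (only the fields the method reads) rather than the real package types:

```go
package main

import (
	"fmt"
	"strings"
)

// pull and refs hold just the fields Refs.String reads.
type pull struct {
	Number int
	SHA    string
	Ref    string
}

type refs struct {
	BaseRef string
	BaseSHA string
	Pulls   []pull
}

// String reproduces the formatting logic above: base ref first, then one
// entry per pull, with the optional ref appended when present.
func (r refs) String() string {
	rs := []string{fmt.Sprintf("%s:%s", r.BaseRef, r.BaseSHA)}
	for _, p := range r.Pulls {
		ref := fmt.Sprintf("%d:%s", p.Number, p.SHA)
		if p.Ref != "" {
			ref = fmt.Sprintf("%s:%s", ref, p.Ref)
		}
		rs = append(rs, ref)
	}
	return strings.Join(rs, ",")
}

func main() {
	r := refs{
		BaseRef: "master",
		BaseSHA: "deadbeef",
		Pulls:   []pull{{Number: 42, SHA: "abc123", Ref: "pull/42/head"}},
	}
	fmt.Println(r.String()) // master:deadbeef,42:abc123:pull/42/head
}
```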
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// ProwJobList is a list of ProwJob resources
type ProwJobList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata"`
Items []ProwJob `json:"items"`
}


@@ -0,0 +1,276 @@
// +build !ignore_autogenerated
/*
Copyright The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by deepcopy-gen. DO NOT EDIT.
package v1
import (
v1alpha1 "github.com/knative/build/pkg/apis/build/v1alpha1"
corev1 "k8s.io/api/core/v1"
runtime "k8s.io/apimachinery/pkg/runtime"
)
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *DecorationConfig) DeepCopyInto(out *DecorationConfig) {
*out = *in
if in.UtilityImages != nil {
in, out := &in.UtilityImages, &out.UtilityImages
*out = new(UtilityImages)
**out = **in
}
if in.GCSConfiguration != nil {
in, out := &in.GCSConfiguration, &out.GCSConfiguration
*out = new(GCSConfiguration)
**out = **in
}
if in.SSHKeySecrets != nil {
in, out := &in.SSHKeySecrets, &out.SSHKeySecrets
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.SSHHostFingerprints != nil {
in, out := &in.SSHHostFingerprints, &out.SSHHostFingerprints
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.SkipCloning != nil {
in, out := &in.SkipCloning, &out.SkipCloning
*out = new(bool)
**out = **in
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DecorationConfig.
func (in *DecorationConfig) DeepCopy() *DecorationConfig {
if in == nil {
return nil
}
out := new(DecorationConfig)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *GCSConfiguration) DeepCopyInto(out *GCSConfiguration) {
*out = *in
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GCSConfiguration.
func (in *GCSConfiguration) DeepCopy() *GCSConfiguration {
if in == nil {
return nil
}
out := new(GCSConfiguration)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ProwJob) DeepCopyInto(out *ProwJob) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
in.Status.DeepCopyInto(&out.Status)
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ProwJob.
func (in *ProwJob) DeepCopy() *ProwJob {
if in == nil {
return nil
}
out := new(ProwJob)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *ProwJob) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ProwJobList) DeepCopyInto(out *ProwJobList) {
*out = *in
out.TypeMeta = in.TypeMeta
out.ListMeta = in.ListMeta
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]ProwJob, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ProwJobList.
func (in *ProwJobList) DeepCopy() *ProwJobList {
if in == nil {
return nil
}
out := new(ProwJobList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *ProwJobList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ProwJobSpec) DeepCopyInto(out *ProwJobSpec) {
*out = *in
if in.Refs != nil {
in, out := &in.Refs, &out.Refs
*out = new(Refs)
(*in).DeepCopyInto(*out)
}
if in.ExtraRefs != nil {
in, out := &in.ExtraRefs, &out.ExtraRefs
*out = make([]Refs, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.PodSpec != nil {
in, out := &in.PodSpec, &out.PodSpec
*out = new(corev1.PodSpec)
(*in).DeepCopyInto(*out)
}
if in.BuildSpec != nil {
in, out := &in.BuildSpec, &out.BuildSpec
*out = new(v1alpha1.BuildSpec)
(*in).DeepCopyInto(*out)
}
if in.DecorationConfig != nil {
in, out := &in.DecorationConfig, &out.DecorationConfig
*out = new(DecorationConfig)
(*in).DeepCopyInto(*out)
}
if in.RunAfterSuccess != nil {
in, out := &in.RunAfterSuccess, &out.RunAfterSuccess
*out = make([]ProwJobSpec, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ProwJobSpec.
func (in *ProwJobSpec) DeepCopy() *ProwJobSpec {
if in == nil {
return nil
}
out := new(ProwJobSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ProwJobStatus) DeepCopyInto(out *ProwJobStatus) {
*out = *in
in.StartTime.DeepCopyInto(&out.StartTime)
if in.CompletionTime != nil {
in, out := &in.CompletionTime, &out.CompletionTime
*out = (*in).DeepCopy()
}
if in.PrevReportStates != nil {
in, out := &in.PrevReportStates, &out.PrevReportStates
*out = make(map[string]ProwJobState, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ProwJobStatus.
func (in *ProwJobStatus) DeepCopy() *ProwJobStatus {
if in == nil {
return nil
}
out := new(ProwJobStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Pull) DeepCopyInto(out *Pull) {
*out = *in
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Pull.
func (in *Pull) DeepCopy() *Pull {
if in == nil {
return nil
}
out := new(Pull)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Refs) DeepCopyInto(out *Refs) {
*out = *in
if in.Pulls != nil {
in, out := &in.Pulls, &out.Pulls
*out = make([]Pull, len(*in))
copy(*out, *in)
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Refs.
func (in *Refs) DeepCopy() *Refs {
if in == nil {
return nil
}
out := new(Refs)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *UtilityImages) DeepCopyInto(out *UtilityImages) {
*out = *in
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new UtilityImages.
func (in *UtilityImages) DeepCopy() *UtilityImages {
if in == nil {
return nil
}
out := new(UtilityImages)
in.DeepCopyInto(out)
return out
}
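A small sketch of why the generated code above uses the builtin `copy` for `[]string` but element-by-element `DeepCopyInto` for `[]ProwJob`: `copy` duplicates slice elements shallowly, which is enough for plain strings but not for elements that themselves contain slices or pointers. The `item` type here is a hypothetical stand-in:

```go
package main

import "fmt"

// item is a hypothetical element type whose field itself needs deep copying.
type item struct{ tags []string }

// deepCopyItems copies each element's inner slice explicitly, the way the
// generated DeepCopyInto iterates []ProwJob above; a bare copy(out, in)
// would leave both slices sharing the same backing arrays for tags.
func deepCopyItems(in []item) []item {
	out := make([]item, len(in))
	for i := range in {
		out[i].tags = make([]string, len(in[i].tags))
		copy(out[i].tags, in[i].tags)
	}
	return out
}

func main() {
	a := []item{{tags: []string{"x"}}}
	b := deepCopyItems(a)
	b[0].tags[0] = "y" // mutating the copy must not affect the original
	fmt.Println(a[0].tags[0], b[0].tags[0]) // x y
}
```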

vendor/k8s.io/test-infra/prow/clonerefs/BUILD.bazel generated vendored Normal file

@@ -0,0 +1,33 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = [
"doc.go",
"options.go",
"parse.go",
"run.go",
],
importpath = "k8s.io/test-infra/prow/clonerefs",
visibility = ["//visibility:public"],
deps = [
"//vendor/github.com/sirupsen/logrus:go_default_library",
"//vendor/k8s.io/apimachinery/pkg/util/sets:go_default_library",
"//vendor/k8s.io/test-infra/prow/kube:go_default_library",
"//vendor/k8s.io/test-infra/prow/pod-utils/clone:go_default_library",
],
)
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [":package-srcs"],
tags = ["automanaged"],
visibility = ["//visibility:public"],
)

vendor/k8s.io/test-infra/prow/clonerefs/doc.go generated vendored Normal file

@@ -0,0 +1,18 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package clonerefs is a library for cloning references
package clonerefs

vendor/k8s.io/test-infra/prow/clonerefs/options.go generated vendored Normal file

@@ -0,0 +1,238 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package clonerefs
import (
"bytes"
"encoding/json"
"errors"
"flag"
"fmt"
"strings"
"text/template"
"k8s.io/apimachinery/pkg/util/sets"
"k8s.io/test-infra/prow/kube"
)
// Options configures the clonerefs tool
// completely and may be provided using JSON
// or user-specified flags, but not both.
type Options struct {
// SrcRoot is the root directory under which
// all source code is cloned
SrcRoot string `json:"src_root"`
// Log is the log file to which clone records are written
Log string `json:"log"`
// GitUserName is an optional field that is used with
// `git config user.name`
GitUserName string `json:"git_user_name,omitempty"`
// GitUserEmail is an optional field that is used with
// `git config user.email`
GitUserEmail string `json:"git_user_email,omitempty"`
// GitRefs are the refs to clone
GitRefs []kube.Refs `json:"refs"`
// KeyFiles are files containing SSH keys to be used
// when cloning. Will be added to `ssh-agent`.
KeyFiles []string `json:"key_files,omitempty"`
// HostFingerPrints are ssh-keyscan host fingerprint lines to use
// when cloning. Will be added to ~/.ssh/known_hosts
HostFingerprints []string `json:"host_fingerprints,omitempty"`
// MaxParallelWorkers determines how many repositories
// can be cloned in parallel. If 0, interpreted as no
// limit to parallelism
MaxParallelWorkers int `json:"max_parallel_workers,omitempty"`
// used to hold flag values
refs gitRefs
clonePath orgRepoFormat
cloneURI orgRepoFormat
keys stringSlice
CookiePath string `json:"cookie_path,omitempty"`
}
// Validate ensures that the configuration options are valid
func (o *Options) Validate() error {
if o.SrcRoot == "" {
return errors.New("no source root specified")
}
if o.Log == "" {
return errors.New("no log file specified")
}
if len(o.GitRefs) == 0 {
return errors.New("no refs specified to clone")
}
seen := map[string]sets.String{}
for _, ref := range o.GitRefs {
if _, seenOrg := seen[ref.Org]; seenOrg {
if seen[ref.Org].Has(ref.Repo) {
return fmt.Errorf("sync config for %s/%s provided more than once", ref.Org, ref.Repo)
}
seen[ref.Org].Insert(ref.Repo)
} else {
seen[ref.Org] = sets.NewString(ref.Repo)
}
}
return nil
}
const (
// JSONConfigEnvVar is the environment variable that
// clonerefs expects to find a full JSON configuration
// in when run.
JSONConfigEnvVar = "CLONEREFS_OPTIONS"
// DefaultGitUserName is the default name used in git config
DefaultGitUserName = "ci-robot"
// DefaultGitUserEmail is the default email used in git config
DefaultGitUserEmail = "ci-robot@k8s.io"
)
// ConfigVar exposes the environment variable used
// to store serialized configuration
func (o *Options) ConfigVar() string {
return JSONConfigEnvVar
}
// LoadConfig loads options from serialized config
func (o *Options) LoadConfig(config string) error {
return json.Unmarshal([]byte(config), o)
}
// Complete internalizes command line arguments
func (o *Options) Complete(args []string) {
o.GitRefs = o.refs.gitRefs
o.KeyFiles = o.keys.data
// mutate by index; ranging over values would update a copy and lose the changes
for i := range o.GitRefs {
alias, err := o.clonePath.Execute(OrgRepo{Org: o.GitRefs[i].Org, Repo: o.GitRefs[i].Repo})
if err != nil {
panic(err)
}
o.GitRefs[i].PathAlias = alias
alias, err = o.cloneURI.Execute(OrgRepo{Org: o.GitRefs[i].Org, Repo: o.GitRefs[i].Repo})
if err != nil {
panic(err)
}
o.GitRefs[i].CloneURI = alias
}
}
// AddFlags adds flags to the FlagSet that populate
// the GCS upload options struct given.
func (o *Options) AddFlags(fs *flag.FlagSet) {
fs.StringVar(&o.SrcRoot, "src-root", "", "Where to root source checkouts")
fs.StringVar(&o.Log, "log", "", "Where to write logs")
fs.StringVar(&o.GitUserName, "git-user-name", DefaultGitUserName, "Username to set in git config")
fs.StringVar(&o.GitUserEmail, "git-user-email", DefaultGitUserEmail, "Email to set in git config")
fs.Var(&o.refs, "repo", "Mapping of Git URI to refs to check out, can be provided more than once")
fs.Var(&o.keys, "ssh-key", "Path to SSH key to enable during cloning, can be provided more than once")
fs.Var(&o.clonePath, "clone-alias", "Format string for the path to clone to")
fs.Var(&o.cloneURI, "uri-prefix", "Format string for the URI prefix to clone from")
fs.IntVar(&o.MaxParallelWorkers, "max-workers", 0, "Maximum number of parallel workers, unset for unlimited.")
fs.StringVar(&o.CookiePath, "cookiefile", "", "Path to git http.cookiefile")
}
type gitRefs struct {
gitRefs []kube.Refs
}
func (r *gitRefs) String() string {
representation := bytes.Buffer{}
for _, ref := range r.gitRefs {
fmt.Fprintf(&representation, "%s,%s=%s", ref.Org, ref.Repo, ref.String())
}
return representation.String()
}
// Set parses out a kube.Refs from the user string.
// The following example shows all possible fields:
// org,repo=base-ref[:base-sha][,pull-number[:pull-sha[:pull-ref]]]...
// For the base ref and every pull number, the SHAs
// are optional and any number of them may be set or
// unset.
func (r *gitRefs) Set(value string) error {
gitRef, err := ParseRefs(value)
if err != nil {
return err
}
r.gitRefs = append(r.gitRefs, *gitRef)
return nil
}
type stringSlice struct {
data []string
}
func (r *stringSlice) String() string {
return strings.Join(r.data, ",")
}
// Set records the value passed
func (r *stringSlice) Set(value string) error {
r.data = append(r.data, value)
return nil
}
type orgRepoFormat struct {
raw string
format *template.Template
}
func (a *orgRepoFormat) String() string {
return a.raw
}
// Set parses out overrides from user input
func (a *orgRepoFormat) Set(value string) error {
templ, err := template.New("format").Parse(value)
if err != nil {
return err
}
a.raw = value
a.format = templ
return nil
}
// OrgRepo hold both an org and repo name.
type OrgRepo struct {
Org, Repo string
}
func (a *orgRepoFormat) Execute(data OrgRepo) (string, error) {
if a.format != nil {
output := bytes.Buffer{}
err := a.format.Execute(&output, data)
return output.String(), err
}
return "", nil
}
// Encode will encode the set of options in the format that
// is expected for the configuration environment variable
func Encode(options Options) (string, error) {
encoded, err := json.Marshal(options)
return string(encoded), err
}

vendor/k8s.io/test-infra/prow/clonerefs/parse.go generated vendored Normal file

@@ -0,0 +1,91 @@
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package clonerefs
import (
"fmt"
"strconv"
"strings"
"k8s.io/test-infra/prow/kube"
)
// ParseRefs parses a human-provided string into the repo
// that should be cloned and the refs that need to be
// checked out once it is. The format is:
// org,repo=base-ref[:base-sha][,pull-id[:pull-sha[:pull-ref]]]...
// For the base ref and pull IDs, a SHA may optionally be
// provided or may be omitted for the latest available SHA.
// Examples:
// kubernetes,test-infra=master
// kubernetes,test-infra=master:abcde12
// kubernetes,test-infra=master:abcde12,34
// kubernetes,test-infra=master:abcde12,34:fghij56
// kubernetes,test-infra=master,34:fghij56
// kubernetes,test-infra=master:abcde12,34:fghij56,78
// gerrit,test-infra=master:abcde12,34:fghij56:refs/changes/00/123/1
func ParseRefs(value string) (*kube.Refs, error) {
gitRef := &kube.Refs{}
values := strings.SplitN(value, "=", 2)
if len(values) != 2 {
return gitRef, fmt.Errorf("refspec %s invalid: does not contain '='", value)
}
info := values[0]
allRefs := values[1]
infoValues := strings.SplitN(info, ",", 2)
if len(infoValues) != 2 {
return gitRef, fmt.Errorf("refspec %s invalid: does not contain 'org,repo' as prefix", value)
}
gitRef.Org = infoValues[0]
gitRef.Repo = infoValues[1]
refValues := strings.Split(allRefs, ",")
if len(refValues) == 1 && refValues[0] == "" {
return gitRef, fmt.Errorf("refspec %s invalid: does not contain any refs", value)
}
baseRefParts := strings.Split(refValues[0], ":")
if len(baseRefParts) != 1 && len(baseRefParts) != 2 {
return gitRef, fmt.Errorf("refspec %s invalid: malformed base ref", refValues[0])
}
gitRef.BaseRef = baseRefParts[0]
if len(baseRefParts) == 2 {
gitRef.BaseSHA = baseRefParts[1]
}
for _, refValue := range refValues[1:] {
refParts := strings.Split(refValue, ":")
if len(refParts) == 0 || len(refParts) > 3 {
return gitRef, fmt.Errorf("refspec %s invalid: malformed pull ref", refValue)
}
pullNumber, err := strconv.Atoi(refParts[0])
if err != nil {
return gitRef, fmt.Errorf("refspec %s invalid: pull request identifier not a number: %v", refValue, err)
}
pullRef := kube.Pull{
Number: pullNumber,
}
if len(refParts) > 1 {
pullRef.SHA = refParts[1]
}
if len(refParts) > 2 {
pullRef.Ref = refParts[2]
}
gitRef.Pulls = append(gitRef.Pulls, pullRef)
}
return gitRef, nil
}
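A self-contained sketch of the first stage of the refspec grammar `ParseRefs` documents above: splitting off the `org,repo=` prefix. This is a deliberately simplified subset; the real function goes on to parse the base ref, SHAs, and any pull refs.

```go
package main

import (
	"fmt"
	"strings"
)

// parseOrgRepo extracts the "org,repo" prefix from a refspec such as
// "kubernetes,test-infra=master:abcde12", mirroring the two SplitN
// steps at the top of ParseRefs. Sketch only; error messages are
// abbreviated relative to the real implementation.
func parseOrgRepo(value string) (org, repo string, err error) {
	parts := strings.SplitN(value, "=", 2)
	if len(parts) != 2 {
		return "", "", fmt.Errorf("refspec %s invalid: does not contain '='", value)
	}
	info := strings.SplitN(parts[0], ",", 2)
	if len(info) != 2 {
		return "", "", fmt.Errorf("refspec %s invalid: does not contain 'org,repo' as prefix", value)
	}
	return info[0], info[1], nil
}

func main() {
	org, repo, err := parseOrgRepo("kubernetes,test-infra=master:abcde12,34:fghij56")
	fmt.Println(org, repo, err) // kubernetes test-infra <nil>
}
```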

vendor/k8s.io/test-infra/prow/clonerefs/run.go generated vendored Normal file

@@ -0,0 +1,161 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package clonerefs
import (
"encoding/json"
"fmt"
"io/ioutil"
"os"
"os/exec"
"path/filepath"
"strings"
"sync"
"github.com/sirupsen/logrus"
"k8s.io/test-infra/prow/kube"
"k8s.io/test-infra/prow/pod-utils/clone"
)
var cloneFunc = clone.Run
// Run clones the configured refs
func (o Options) Run() error {
var env []string
if len(o.KeyFiles) > 0 {
var err error
env, err = addSSHKeys(o.KeyFiles)
if err != nil {
logrus.WithError(err).Error("Failed to add SSH keys.")
// Continue on error. Clones will fail with an appropriate error message
// that initupload can consume whereas quitting without writing the clone
// record log is silent and results in an errored prow job instead of a
// failed one.
}
}
if len(o.HostFingerprints) > 0 {
if err := addHostFingerprints(o.HostFingerprints); err != nil {
logrus.WithError(err).Error("failed to add host fingerprints")
}
}
var numWorkers int
if o.MaxParallelWorkers != 0 {
numWorkers = o.MaxParallelWorkers
} else {
numWorkers = len(o.GitRefs)
}
wg := &sync.WaitGroup{}
wg.Add(numWorkers)
input := make(chan kube.Refs)
output := make(chan clone.Record, len(o.GitRefs))
for i := 0; i < numWorkers; i++ {
go func() {
defer wg.Done()
for ref := range input {
output <- cloneFunc(ref, o.SrcRoot, o.GitUserName, o.GitUserEmail, o.CookiePath, env)
}
}()
}
for _, ref := range o.GitRefs {
input <- ref
}
close(input)
wg.Wait()
close(output)
var results []clone.Record
for record := range output {
results = append(results, record)
}
logData, err := json.Marshal(results)
if err != nil {
return fmt.Errorf("failed to marshal clone records: %v", err)
}
if err := ioutil.WriteFile(o.Log, logData, 0755); err != nil {
return fmt.Errorf("failed to write clone records: %v", err)
}
return nil
}
func addHostFingerprints(fingerprints []string) error {
path := filepath.Join(os.Getenv("HOME"), ".ssh", "known_hosts")
f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
if err != nil {
return fmt.Errorf("could not create/append to %s: %v", path, err)
}
if _, err := f.Write([]byte(strings.Join(fingerprints, "\n") + "\n")); err != nil {
return fmt.Errorf("failed to write fingerprints to %s: %v", path, err)
}
if err := f.Close(); err != nil {
return fmt.Errorf("failed to close %s: %v", path, err)
}
return nil
}
// addSSHKeys will start the ssh-agent and add all the specified
// keys, returning the ssh-agent environment variables for reuse
func addSSHKeys(paths []string) ([]string, error) {
vars, err := exec.Command("ssh-agent").CombinedOutput()
if err != nil {
return []string{}, fmt.Errorf("failed to start ssh-agent: %v", err)
}
logrus.Info("Started SSH agent")
// ssh-agent will output three lines of text, in the form:
// SSH_AUTH_SOCK=xxx; export SSH_AUTH_SOCK;
// SSH_AGENT_PID=xxx; export SSH_AGENT_PID;
// echo Agent pid xxx;
// We need to parse out the environment variables from that.
parts := strings.Split(string(vars), ";")
env := []string{strings.TrimSpace(parts[0]), strings.TrimSpace(parts[2])}
for _, keyPath := range paths {
// we can be given literal paths to keys or paths to dirs
// that are mounted from a secret, so we need to check which
// we have
if err := filepath.Walk(keyPath, func(path string, info os.FileInfo, err error) error {
if err != nil {
// info is nil when Walk fails to stat the path
return err
}
if strings.HasPrefix(info.Name(), "..") {
// kubernetes volumes also include files we
// should not be looking into for keys
if info.IsDir() {
return filepath.SkipDir
}
return nil
}
if info.IsDir() {
return nil
}
cmd := exec.Command("ssh-add", path)
cmd.Env = append(cmd.Env, env...)
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("failed to add ssh key at %s: %v: %s", path, err, output)
}
logrus.Infof("Added SSH key at %s", path)
return nil
}); err != nil {
return env, fmt.Errorf("error walking path %q: %v", keyPath, err)
}
}
return env, nil
}
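The `ssh-agent` output parsing inside `addSSHKeys` above can be exercised on its own. A hedged, self-contained sketch (the sample socket path and PID are made up):

```go
package main

import (
	"fmt"
	"strings"
)

// parseAgentEnv extracts the SSH_AUTH_SOCK and SSH_AGENT_PID assignments
// from ssh-agent's stdout, mirroring addSSHKeys: the output is
// ';'-separated, and the two variable assignments are the first and
// third fields once surrounding whitespace is trimmed.
func parseAgentEnv(out string) []string {
	parts := strings.Split(out, ";")
	return []string{strings.TrimSpace(parts[0]), strings.TrimSpace(parts[2])}
}

func main() {
	sample := "SSH_AUTH_SOCK=/tmp/agent.sock; export SSH_AUTH_SOCK;\n" +
		"SSH_AGENT_PID=123; export SSH_AGENT_PID;\n" +
		"echo Agent pid 123;\n"
	fmt.Println(parseAgentEnv(sample)) // [SSH_AUTH_SOCK=/tmp/agent.sock SSH_AGENT_PID=123]
}
```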

vendor/k8s.io/test-infra/prow/config/BUILD.bazel generated vendored Normal file

@@ -0,0 +1,58 @@
package(default_visibility = ["//visibility:public"])
load(
"@io_bazel_rules_go//go:def.bzl",
"go_library",
)
go_library(
name = "go_default_library",
srcs = [
"agent.go",
"branch_protection.go",
"build_status.go",
"config.go",
"githuboauth.go",
"gitlaboauth.go",
"jobs.go",
"secrets_agent.go",
"tide.go",
],
importpath = "k8s.io/test-infra/prow/config",
tags = ["manual"],
deps = [
"//vendor/github.com/ghodss/yaml:go_default_library",
"//vendor/github.com/gorilla/sessions:go_default_library",
"//vendor/github.com/knative/build/pkg/apis/build/v1alpha1:go_default_library",
"//vendor/github.com/sirupsen/logrus:go_default_library",
"//vendor/github.com/xanzy/go-gitlab:go_default_library",
"//vendor/golang.org/x/oauth2:go_default_library",
"//vendor/gopkg.in/robfig/cron.v2:go_default_library",
"//vendor/k8s.io/api/core/v1:go_default_library",
"//vendor/k8s.io/apimachinery/pkg/labels:go_default_library",
"//vendor/k8s.io/apimachinery/pkg/util/sets:go_default_library",
"//vendor/k8s.io/apimachinery/pkg/util/validation:go_default_library",
"//vendor/k8s.io/test-infra/prow/apis/prowjobs/v1:go_default_library",
"//vendor/k8s.io/test-infra/prow/config/org:go_default_library",
"//vendor/k8s.io/test-infra/prow/gitserver:go_default_library",
"//vendor/k8s.io/test-infra/prow/kube:go_default_library",
"//vendor/k8s.io/test-infra/prow/pod-utils/decorate:go_default_library",
"//vendor/k8s.io/test-infra/prow/pod-utils/downwardapi:go_default_library",
],
)
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [
":package-srcs",
"//vendor/k8s.io/test-infra/prow/config/org:all-srcs",
],
tags = ["automanaged"],
)

vendor/k8s.io/test-infra/prow/config/README.md generated vendored Normal file

@@ -0,0 +1,4 @@
# Prow Configuration
Core Prow component configuration is managed by the `config` package and stored in the [`Config` struct](https://godoc.org/k8s.io/test-infra/prow/config#Config). If a configuration guide is available for a component, it can be found in the [`/prow/cmd`](/prow/cmd) directory. See [`jobs.md`](/prow/jobs.md) for a guide to configuring ProwJobs.
Configuration for plugins is handled and stored separately. See the [`plugins`](/prow/plugins) package for details.

vendor/k8s.io/test-infra/prow/config/agent.go generated vendored Normal file

@@ -0,0 +1,130 @@
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package config
import (
"os"
"sync"
"time"
"github.com/sirupsen/logrus"
)
// Agent watches a path and automatically loads the config stored
// therein.
type Agent struct {
sync.Mutex
c *Config
subscriptions []chan<- ConfigDelta
}
// Start will begin polling the config file at the path. If the first load
// fails, Start will return the error and abort. Future load failures will log
// the failure message but continue attempting to load.
func (ca *Agent) Start(prowConfig, jobConfig string) error {
c, err := Load(prowConfig, jobConfig)
if err != nil {
return err
}
ca.Set(c)
go func() {
var lastModTime time.Time
// Rarely, if two changes happen in the same second, mtime will
// be the same for the second change, and an mtime-based check would
// fail. Reload periodically just in case.
skips := 0
for range time.Tick(1 * time.Second) {
if skips < 600 {
// Check if the file changed to see if it needs to be re-read.
// os.Stat follows symbolic links, which is how ConfigMaps work.
prowStat, err := os.Stat(prowConfig)
if err != nil {
logrus.WithField("prowConfig", prowConfig).WithError(err).Error("Error loading prow config.")
continue
}
recentModTime := prowStat.ModTime()
// TODO(krzyzacy): allow empty jobConfig till fully migrate config to subdirs
if jobConfig != "" {
jobConfigStat, err := os.Stat(jobConfig)
if err != nil {
logrus.WithField("jobConfig", jobConfig).WithError(err).Error("Error loading job configs.")
continue
}
if jobConfigStat.ModTime().After(recentModTime) {
recentModTime = jobConfigStat.ModTime()
}
}
if !recentModTime.After(lastModTime) {
skips++
continue // file hasn't been modified
}
lastModTime = recentModTime
}
if c, err := Load(prowConfig, jobConfig); err != nil {
logrus.WithField("prowConfig", prowConfig).
WithField("jobConfig", jobConfig).
WithError(err).Error("Error loading config.")
} else {
skips = 0
ca.Set(c)
}
}
}()
return nil
}
// ConfigDelta carries the config before and after a reload.
type ConfigDelta struct {
Before, After Config
}
// Subscribe registers the channel for messages on config reload.
// The caller can expect a copy of the previous and current config
// to be sent down the subscribed channel when a new configuration
// is loaded.
func (ca *Agent) Subscribe(subscription chan<- ConfigDelta) {
ca.Lock()
defer ca.Unlock()
ca.subscriptions = append(ca.subscriptions, subscription)
}
// Config returns the latest config. Do not modify the config.
func (ca *Agent) Config() *Config {
ca.Lock()
defer ca.Unlock()
return ca.c
}
// Set sets the config. Useful for testing.
func (ca *Agent) Set(c *Config) {
ca.Lock()
defer ca.Unlock()
var oldConfig Config
if ca.c != nil {
oldConfig = *ca.c
}
delta := ConfigDelta{oldConfig, *c}
for _, subscription := range ca.subscriptions {
// we can't let unbuffered channels for subscriptions lock us up
// here, so we will send events best-effort into the channels we have
go func(out chan<- ConfigDelta) { out <- delta }(subscription)
}
ca.c = c
}
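The best-effort fan-out in `Set` above can be sketched in isolation: each subscriber channel receives the delta from its own goroutine, so a slow or unread subscriber cannot block `Set`. The `delta` and `notify` names below are illustrative stand-ins, not the package's API:

```go
package main

import "fmt"

// delta is a simplified stand-in for ConfigDelta.
type delta struct{ before, after string }

// notify sends d to every subscriber from a dedicated goroutine,
// mirroring the loop in Agent.Set: the sender never blocks on a
// subscriber that is unbuffered or not currently reading.
func notify(subs []chan<- delta, d delta) {
	for _, s := range subs {
		go func(out chan<- delta) { out <- d }(s)
	}
}

func main() {
	sub := make(chan delta, 1)
	notify([]chan<- delta{sub}, delta{before: "v1", after: "v2"})
	fmt.Println(<-sub) // {v1 v2}
}
```

The trade-off of this pattern is that delivery order across successive `Set` calls is not guaranteed for any one subscriber, since each send races in its own goroutine.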


@@ -0,0 +1,336 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package config
import (
"errors"
"fmt"
"github.com/sirupsen/logrus"
"k8s.io/apimachinery/pkg/util/sets"
)
// Policy for the config/org/repo/branch.
// When merging policies, a nil value results in inheriting the parent policy.
type Policy struct {
deprecatedPolicy
deprecatedWarning bool // true if a warning message was sent
// Protect overrides whether branch protection is enabled if set.
Protect *bool `json:"protect,omitempty"`
// RequiredStatusChecks configures github contexts
RequiredStatusChecks *ContextPolicy `json:"required_status_checks,omitempty"`
// Admins overrides whether protections apply to admins if set.
Admins *bool `json:"enforce_admins,omitempty"`
// Restrictions limits who can merge
Restrictions *Restrictions `json:"restrictions,omitempty"`
// RequiredPullRequestReviews specifies github approval/review criteria.
RequiredPullRequestReviews *ReviewPolicy `json:"required_pull_request_reviews,omitempty"`
}
// deprecatedPolicy deserializes fields that are no longer in use
type deprecatedPolicy struct {
DeprecatedProtect *bool `json:"protect-by-default,omitempty"`
DeprecatedContexts []string `json:"require-contexts,omitempty"`
DeprecatedPushers []string `json:"allow-push,omitempty"`
}
func (d deprecatedPolicy) defined() bool {
return d.DeprecatedProtect != nil || d.DeprecatedContexts != nil || d.DeprecatedPushers != nil
}
func (p Policy) defined() bool {
return p.Protect != nil || p.RequiredStatusChecks != nil || p.Admins != nil || p.Restrictions != nil || p.RequiredPullRequestReviews != nil
}
// HasProtect returns true if the policy or deprecated policy defines protection
func (p Policy) HasProtect() bool {
return p.Protect != nil || p.deprecatedPolicy.DeprecatedProtect != nil
}
// ContextPolicy configures required github contexts.
// When merging policies, contexts are appended to context list from parent.
// Strict determines whether merging to the branch invalidates existing contexts.
type ContextPolicy struct {
// Contexts appends required contexts that must be green to merge
Contexts []string `json:"contexts,omitempty"`
// Strict overrides whether new commits in the base branch require updating the PR if set
Strict *bool `json:"strict,omitempty"`
}
// ReviewPolicy specifies github approval/review criteria.
// Any nil values inherit the policy from the parent, otherwise bool/ints are overridden.
// Non-empty lists are appended to parent lists.
type ReviewPolicy struct {
// DismissalRestrictions appends users/teams that are allowed to dismiss pull request reviews
DismissalRestrictions *Restrictions `json:"dismissal_restrictions,omitempty"`
// DismissStale overrides whether new commits automatically dismiss old reviews if set
DismissStale *bool `json:"dismiss_stale_reviews,omitempty"`
// RequireOwners overrides whether CODEOWNERS must approve PRs if set
RequireOwners *bool `json:"require_code_owner_reviews,omitempty"`
// Approvals overrides the number of approvals required if set (set to 0 to disable)
Approvals *int `json:"required_approving_review_count,omitempty"`
}
// Restrictions limits who can merge
// Users and Teams items are appended to parent lists.
type Restrictions struct {
Users []string `json:"users"`
Teams []string `json:"teams"`
}
// selectInt returns the child if set, else parent
func selectInt(parent, child *int) *int {
if child != nil {
return child
}
return parent
}
// selectBool returns the child argument if set, otherwise the parent
func selectBool(parent, child *bool) *bool {
if child != nil {
return child
}
return parent
}
// unionStrings merges the parent and child items together
func unionStrings(parent, child []string) []string {
if child == nil {
return parent
}
if parent == nil {
return child
}
s := sets.NewString(parent...)
s.Insert(child...)
return s.List()
}
func mergeContextPolicy(parent, child *ContextPolicy) *ContextPolicy {
if child == nil {
return parent
}
if parent == nil {
return child
}
return &ContextPolicy{
Contexts: unionStrings(parent.Contexts, child.Contexts),
Strict: selectBool(parent.Strict, child.Strict),
}
}
func mergeReviewPolicy(parent, child *ReviewPolicy) *ReviewPolicy {
if child == nil {
return parent
}
if parent == nil {
return child
}
return &ReviewPolicy{
DismissalRestrictions: mergeRestrictions(parent.DismissalRestrictions, child.DismissalRestrictions),
DismissStale: selectBool(parent.DismissStale, child.DismissStale),
RequireOwners: selectBool(parent.RequireOwners, child.RequireOwners),
Approvals: selectInt(parent.Approvals, child.Approvals),
}
}
func mergeRestrictions(parent, child *Restrictions) *Restrictions {
if child == nil {
return parent
}
if parent == nil {
return child
}
return &Restrictions{
Users: unionStrings(parent.Users, child.Users),
Teams: unionStrings(parent.Teams, child.Teams),
}
}
// Apply returns a policy that merges the child into the parent
func (p Policy) Apply(child Policy) (Policy, error) {
if old := child.deprecatedPolicy.defined(); old && child.defined() {
return p, errors.New("cannot mix Policy and deprecatedPolicy branch protection fields")
} else if old {
if !p.deprecatedWarning {
p.deprecatedWarning = true
logrus.Warn("WARNING: protect-by-default, require-contexts, allow-push are deprecated. Please replace them before July 2018")
}
d := child.deprecatedPolicy
child = Policy{
Protect: d.DeprecatedProtect,
}
if d.DeprecatedContexts != nil {
child.RequiredStatusChecks = &ContextPolicy{
Contexts: d.DeprecatedContexts,
}
}
if d.DeprecatedPushers != nil {
child.Restrictions = &Restrictions{
Teams: d.DeprecatedPushers,
}
}
}
return Policy{
Protect: selectBool(p.Protect, child.Protect),
RequiredStatusChecks: mergeContextPolicy(p.RequiredStatusChecks, child.RequiredStatusChecks),
Admins: selectBool(p.Admins, child.Admins),
Restrictions: mergeRestrictions(p.Restrictions, child.Restrictions),
RequiredPullRequestReviews: mergeReviewPolicy(p.RequiredPullRequestReviews, child.RequiredPullRequestReviews),
deprecatedWarning: p.deprecatedWarning,
}, nil
}
// BranchProtection specifies the global branch protection policy
type BranchProtection struct {
Policy
ProtectTested bool `json:"protect-tested-repos,omitempty"`
Orgs map[string]Org `json:"orgs,omitempty"`
AllowDisabledPolicies bool `json:"allow_disabled_policies,omitempty"`
warned bool // warn if deprecated fields are used
}
// Org holds the default protection policy for an entire org, as well as any repo overrides.
type Org struct {
Policy
Repos map[string]Repo `json:"repos,omitempty"`
}
// Repo holds protection policy overrides for all branches in a repo, as well as specific branch overrides.
type Repo struct {
Policy
Branches map[string]Branch `json:"branches,omitempty"`
}
// Branch holds protection policy overrides for a particular branch.
type Branch struct {
Policy
}
// GetBranchProtection returns the policy for a given branch.
//
// Handles merging any policies defined at repo/org/global levels into the branch policy.
func (c *Config) GetBranchProtection(org, repo, branch string) (*Policy, error) {
bp := c.BranchProtection
var policy Policy
policy, err := policy.Apply(bp.Policy)
if err != nil {
return nil, err
}
if o, ok := bp.Orgs[org]; ok {
policy, err = policy.Apply(o.Policy)
if err != nil {
return nil, err
}
if r, ok := o.Repos[repo]; ok {
policy, err = policy.Apply(r.Policy)
if err != nil {
return nil, err
}
if b, ok := r.Branches[branch]; ok {
policy, err = policy.Apply(b.Policy)
if err != nil {
return nil, err
}
if policy.Protect == nil {
return nil, errors.New("defined branch policies must set protect")
}
}
}
} else {
return nil, nil
}
// Automatically require any required prow jobs
if prowContexts, _ := BranchRequirements(org, repo, branch, c.Presubmits); len(prowContexts) > 0 {
// Error if protection is disabled
if policy.Protect != nil && !*policy.Protect {
return nil, fmt.Errorf("required prow jobs require branch protection")
}
ps := Policy{
RequiredStatusChecks: &ContextPolicy{
Contexts: prowContexts,
},
}
// Require protection by default if ProtectTested is true
if bp.ProtectTested {
yes := true
ps.Protect = &yes
}
policy, err = policy.Apply(ps)
if err != nil {
return nil, err
}
}
if policy.Protect != nil && !*policy.Protect {
// Ensure that protection is false => no protection settings
var old *bool
old, policy.Protect = policy.Protect, old
switch {
case policy.defined() && bp.AllowDisabledPolicies:
logrus.Warnf("%s/%s=%s defines a policy but has protect: false", org, repo, branch)
policy = Policy{
Protect: policy.Protect,
}
case policy.defined():
return nil, fmt.Errorf("%s/%s=%s defines a policy, which requires protect: true", org, repo, branch)
}
policy.Protect = old
}
if !policy.defined() {
return nil, nil
}
return &policy, nil
}
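GetBranchProtection walks global → org → repo → branch, applying each more specific policy on top of the accumulated one, so a nil field at any level inherits from its parent. A minimal stdlib sketch of that cascade for a single *bool field (types and data here are hypothetical stand-ins):

```go
package main

import "fmt"

// policy holds one overridable field; nil means "inherit from parent".
type policy struct {
	protect *bool
}

// apply layers child over parent: a set child field wins, a nil one inherits.
func (p policy) apply(child policy) policy {
	if child.protect != nil {
		p.protect = child.protect
	}
	return p
}

func main() {
	yes, no := true, false
	global := policy{protect: &yes}
	org := policy{}              // inherits from global
	repo := policy{protect: &no} // overrides
	branch := policy{}           // inherits from repo

	eff := global.apply(org).apply(repo).apply(branch)
	fmt.Println(*eff.protect) // false: the repo-level override wins
}
```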
func jobRequirements(jobs []Presubmit, branch string, after bool) ([]string, []string) {
var required, optional []string
for _, j := range jobs {
if !j.Brancher.RunsAgainstBranch(branch) {
continue
}
// Does this job require a context or have kids that might need one?
if !after && !j.AlwaysRun && j.RunIfChanged == "" {
continue // No
}
if j.ContextRequired() { // This job needs a context
required = append(required, j.Context)
} else {
optional = append(optional, j.Context)
}
// Check which children require contexts
r, o := jobRequirements(j.RunAfterSuccess, branch, true)
required = append(required, r...)
optional = append(optional, o...)
}
return required, optional
}
// BranchRequirements returns the required and optional presubmit prow job contexts for a given org/repo branch.
func BranchRequirements(org, repo, branch string, presubmits map[string][]Presubmit) ([]string, []string) {
p, ok := presubmits[org+"/"+repo]
if !ok {
return nil, nil
}
return jobRequirements(p, branch, false)
}
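jobRequirements splits a job list into required and optional GitHub status contexts: only automatically-run jobs contribute a context, and a job that reports (not SkipReport) and is not Optional contributes a required one. A stdlib sketch of that classification (the struct is a simplified stand-in for Presubmit):

```go
package main

import "fmt"

type job struct {
	context    string
	optional   bool
	skipReport bool
	alwaysRun  bool
}

// contextRequired mirrors Presubmit.ContextRequired: optional or
// non-reporting jobs never gate a merge.
func contextRequired(j job) bool {
	return !j.optional && !j.skipReport
}

func requirements(jobs []job) (required, optional []string) {
	for _, j := range jobs {
		if !j.alwaysRun {
			continue // only automatically-run jobs need a context up front
		}
		if contextRequired(j) {
			required = append(required, j.context)
		} else {
			optional = append(optional, j.context)
		}
	}
	return required, optional
}

func main() {
	req, opt := requirements([]job{
		{context: "ci/build", alwaysRun: true},
		{context: "ci/flaky", alwaysRun: true, optional: true},
		{context: "ci/manual"}, // not always run: skipped
	})
	fmt.Println(req, opt) // [ci/build] [ci/flaky]
}
```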

vendor/k8s.io/test-infra/prow/config/build_status.go generated vendored Normal file

@@ -0,0 +1,17 @@
package config
import "time"
type BuildStatus struct {
DB DB `json:"db,omitempty"`
}
type DB struct {
IP string `json:"ip,omitempty"`
Port string `json:"port,omitempty"`
Name string `json:"name,omitempty"`
Username string `json:"username,omitempty"`
Password string `json:"password,omitempty"`
Active int `json:"active,omitempty"`
Idle int `json:"idle,omitempty"`
IdleTimeout time.Duration `json:"idleTimeout,omitempty"`
}

vendor/k8s.io/test-infra/prow/config/config.go generated vendored Normal file

File diff suppressed because it is too large

vendor/k8s.io/test-infra/prow/config/githuboauth.go generated vendored Normal file

@@ -0,0 +1,48 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package config
import (
"encoding/gob"
"github.com/gorilla/sessions"
"golang.org/x/oauth2"
)
// Cookie holds the secret returned from GitHub that authenticates the user who authorized this app.
type Cookie struct {
Secret string `json:"secret,omitempty"`
}
// GithubOAuthConfig is a config for requesting user access tokens from the GitHub API. It also has
// a Cookie Store that retains user credentials derived from the GitHub API.
type GithubOAuthConfig struct {
ClientID string `json:"client_id"`
ClientSecret string `json:"client_secret"`
RedirectURL string `json:"redirect_url"`
Scopes []string `json:"scopes,omitempty"`
FinalRedirectURL string `json:"final_redirect_url"`
CookieStore *sessions.CookieStore `json:"-"`
}
// InitGithubOAuthConfig creates an OAuthClient using GithubOAuth config and a Cookie Store
// to retain user credentials.
func (gac *GithubOAuthConfig) InitGithubOAuthConfig(cookie *sessions.CookieStore) {
gob.Register(&oauth2.Token{})
gac.CookieStore = cookie
}

vendor/k8s.io/test-infra/prow/config/gitlaboauth.go generated vendored Normal file

@@ -0,0 +1,43 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package config
import (
"encoding/gob"
"github.com/gorilla/sessions"
"golang.org/x/oauth2"
)
// GitlabOAuthConfig is a config for requesting user access tokens from the GitLab API. It also has
// a Cookie Store that retains user credentials derived from the GitLab API.
type GitlabOAuthConfig struct {
ClientID string `json:"client_id"`
ClientSecret string `json:"client_secret"`
RedirectURL string `json:"redirect_url"`
Scopes []string `json:"scopes,omitempty"`
FinalRedirectURL string `json:"final_redirect_url"`
CookieStore *sessions.CookieStore `json:"-"`
}
// InitGitlabOAuthConfig creates an OAuthClient using the GitlabOAuth config and a Cookie Store
// to retain user credentials.
func (gac *GitlabOAuthConfig) InitGitlabOAuthConfig(cookie *sessions.CookieStore) {
gob.Register(&oauth2.Token{})
gac.CookieStore = cookie
}

vendor/k8s.io/test-infra/prow/config/jobs.go generated vendored Normal file

@@ -0,0 +1,486 @@
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package config
import (
"fmt"
"regexp"
"time"
buildv1alpha1 "github.com/knative/build/pkg/apis/build/v1alpha1"
"k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/util/sets"
"k8s.io/test-infra/prow/kube"
)
// Preset is intended to match the k8s' PodPreset feature, and may be removed
// if that feature goes beta.
type Preset struct {
Labels map[string]string `json:"labels"`
Env []v1.EnvVar `json:"env"`
Volumes []v1.Volume `json:"volumes"`
VolumeMounts []v1.VolumeMount `json:"volumeMounts"`
}
func mergePreset(preset Preset, labels map[string]string, pod *v1.PodSpec) error {
if pod == nil {
return nil
}
for l, v := range preset.Labels {
if v2, ok := labels[l]; !ok || v2 != v {
return nil
}
}
for _, e1 := range preset.Env {
for i := range pod.Containers {
for _, e2 := range pod.Containers[i].Env {
if e1.Name == e2.Name {
return fmt.Errorf("env var duplicated in pod spec: %s", e1.Name)
}
}
pod.Containers[i].Env = append(pod.Containers[i].Env, e1)
}
}
for _, v1 := range preset.Volumes {
for _, v2 := range pod.Volumes {
if v1.Name == v2.Name {
return fmt.Errorf("volume duplicated in pod spec: %s", v1.Name)
}
}
pod.Volumes = append(pod.Volumes, v1)
}
for _, vm1 := range preset.VolumeMounts {
for i := range pod.Containers {
for _, vm2 := range pod.Containers[i].VolumeMounts {
if vm1.Name == vm2.Name {
return fmt.Errorf("volume mount duplicated in pod spec: %s", vm1.Name)
}
}
pod.Containers[i].VolumeMounts = append(pod.Containers[i].VolumeMounts, vm1)
}
}
return nil
}
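mergePreset injects a preset's env vars, volumes, and mounts into every container of a pod whose labels match, and refuses to overwrite anything already present. A stdlib sketch of the duplicate-checked env injection (simplified local types, hypothetical names):

```go
package main

import "fmt"

type envVar struct{ name, value string }
type container struct{ env []envVar }

// injectEnv appends each preset var to every container, erroring on a
// name collision, as mergePreset does for pod specs.
func injectEnv(preset []envVar, containers []container) error {
	for _, e := range preset {
		for i := range containers {
			for _, existing := range containers[i].env {
				if existing.name == e.name {
					return fmt.Errorf("env var duplicated in pod spec: %s", e.name)
				}
			}
			containers[i].env = append(containers[i].env, e)
		}
	}
	return nil
}

func main() {
	cs := []container{{env: []envVar{{"PATH", "/usr/bin"}}}}
	fmt.Println(injectEnv([]envVar{{"GOPATH", "/go"}}, cs)) // <nil>
	fmt.Println(injectEnv([]envVar{{"PATH", "/sbin"}}, cs)) // duplicate: error
}
```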
// JobBase contains attributes common to all job types
type JobBase struct {
// The name of the job.
// e.g. pull-test-infra-bazel-build
Name string `json:"name"`
// Labels are added to prowjobs and pods created for this job.
Labels map[string]string `json:"labels,omitempty"`
// MaximumConcurrency of this job, 0 implies no limit.
MaxConcurrency int `json:"max_concurrency,omitempty"`
// Agent that will take care of running this job.
Agent string `json:"agent"`
// Cluster is the alias of the cluster to run this job in.
// (Default: kube.DefaultClusterAlias)
Cluster string `json:"cluster,omitempty"`
// Namespace is the namespace in which pods schedule.
// nil: results in config.PodNamespace (aka pod default)
// empty: results in config.ProwJobNamespace (aka same as prowjob)
Namespace *string `json:"namespace,omitempty"`
// ErrorOnEviction indicates that the ProwJob should be completed and given
// the ErrorState status if the pod that is executing the job is evicted.
// If this field is unspecified or false, a new pod will be created to replace
// the evicted one.
ErrorOnEviction bool `json:"error_on_eviction,omitempty"`
// SourcePath contains the path where this job is defined
SourcePath string `json:"-"`
// Spec is the Kubernetes pod spec used if Agent is kubernetes.
Spec *v1.PodSpec `json:"spec,omitempty"`
// BuildSpec is the Knative build spec used if Agent is knative-build.
BuildSpec *buildv1alpha1.BuildSpec `json:"build_spec,omitempty"`
UtilityConfig
}
// Presubmit runs on PRs.
type Presubmit struct {
JobBase
// AlwaysRun determines whether the job runs automatically for every PR, or only when a comment triggers it.
AlwaysRun bool `json:"always_run"`
// RunIfChanged triggers the job automatically if the PR modifies a file that matches this regex.
RunIfChanged string `json:"run_if_changed,omitempty"`
// TrustedLabels triggers the job automatically if the PR has a label in TrustedLabels.
TrustedLabels []string `json:"trusted_labels,omitempty"`
// UntrustedLabels prevents the job from running automatically if the PR has a label in UntrustedLabels.
UntrustedLabels []string `json:"untrusted_labels,omitempty"`
// RunPRPushed triggers the job automatically when the source branch is pushed.
RunPRPushed bool `json:"run_pr_pushed"`
// Context is the name of the GitHub status context for the job.
Context string `json:"context"`
// Optional indicates that the job's status context should not be required for merge.
Optional bool `json:"optional,omitempty"`
// SkipReport skips commenting and setting status on GitHub.
SkipReport bool `json:"skip_report,omitempty"`
// Trigger is the regular expression to trigger the job.
// e.g. `@k8s-bot e2e test this`
// RerunCommand must also be specified if this field is specified.
// (Default: `(?m)^/test (?:.*? )?<job name>(?: .*?)?$`)
Trigger string `json:"trigger"`
// The RerunCommand to give users. Must match Trigger.
// Trigger must also be specified if this field is specified.
// (Default: `/test <job name>`)
RerunCommand string `json:"rerun_command"`
// RunAfterSuccess is a list of jobs to run after successfully running this one.
RunAfterSuccess []Presubmit `json:"run_after_success,omitempty"`
Brancher
// We'll set these when we load it.
re *regexp.Regexp // from Trigger.
reChanges *regexp.Regexp // from RunIfChanged
}
// Postsubmit runs on push events.
type Postsubmit struct {
JobBase
RegexpChangeMatcher
Brancher
// Run these jobs after successfully running this one.
RunAfterSuccess []Postsubmit `json:"run_after_success,omitempty"`
}
// Periodic runs on a timer.
type Periodic struct {
JobBase
// Interval to wait between two runs of the job (deprecated; use Cron instead).
Interval string `json:"interval"`
// Cron representation of job trigger time
Cron string `json:"cron"`
// Tags for config entries
Tags []string `json:"tags,omitempty"`
// Run these jobs after successfully running this one.
RunAfterSuccess []Periodic `json:"run_after_success,omitempty"`
interval time.Duration
}
// SetInterval updates interval, the duration between runs of the job.
func (p *Periodic) SetInterval(d time.Duration) {
p.interval = d
}
// GetInterval returns interval, the duration between runs of the job.
func (p *Periodic) GetInterval() time.Duration {
return p.interval
}
// RegexpChangeMatcher is for code shared between jobs that run only when certain files are changed.
type RegexpChangeMatcher struct {
// RunIfChanged defines a regex used to select which subset of file changes should trigger this job.
// If any file in the changeset matches this regex, the job will be triggered
RunIfChanged string `json:"run_if_changed,omitempty"`
reChanges *regexp.Regexp // from RunIfChanged
}
// RunsAgainstChanges returns true if any of the changed input paths match the run_if_changed regex.
func (cm RegexpChangeMatcher) RunsAgainstChanges(changes []string) bool {
if cm.RunIfChanged == "" {
return true
}
for _, change := range changes {
if cm.reChanges.MatchString(change) {
return true
}
}
return false
}
// Brancher is for shared code between jobs that only run against certain
// branches. An empty brancher runs against all branches.
type Brancher struct {
// Do not run against these branches. Default is no branches.
SkipBranches []string `json:"skip_branches,omitempty"`
// Only run against these branches. Default is all branches.
Branches []string `json:"branches,omitempty"`
// We'll set these when we load it.
re *regexp.Regexp
reSkip *regexp.Regexp
}
// RunsAgainstAllBranch returns true if both branches and skip_branches are unset.
func (br Brancher) RunsAgainstAllBranch() bool {
return len(br.SkipBranches) == 0 && len(br.Branches) == 0
}
// RunsAgainstBranch returns true if the input branch matches, given the whitelist/blacklist.
func (br Brancher) RunsAgainstBranch(branch string) bool {
if br.RunsAgainstAllBranch() {
return true
}
// Favor SkipBranches over Branches
if len(br.SkipBranches) != 0 && br.reSkip.MatchString(branch) {
return false
}
if len(br.Branches) == 0 || br.re.MatchString(branch) {
return true
}
return false
}
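RunsAgainstBranch gives SkipBranches priority over Branches: a branch on the skip list never runs, even if it also matches the allow list, and an empty brancher runs everywhere. A stdlib sketch of the same precedence (plain string comparison in place of the compiled regexes):

```go
package main

import "fmt"

func runsAgainstBranch(branches, skip []string, branch string) bool {
	if len(branches) == 0 && len(skip) == 0 {
		return true // empty brancher runs against all branches
	}
	for _, b := range skip {
		if b == branch {
			return false // skip list wins over allow list
		}
	}
	if len(branches) == 0 {
		return true
	}
	for _, b := range branches {
		if b == branch {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(runsAgainstBranch(nil, nil, "master"))                               // true
	fmt.Println(runsAgainstBranch([]string{"master"}, nil, "dev"))                   // false
	fmt.Println(runsAgainstBranch([]string{"master"}, []string{"master"}, "master")) // false: skip wins
}
```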
// Intersects checks if other Brancher would trigger for the same branch.
func (br Brancher) Intersects(other Brancher) bool {
if br.RunsAgainstAllBranch() || other.RunsAgainstAllBranch() {
return true
}
if len(br.Branches) > 0 {
baseBranches := sets.NewString(br.Branches...)
if len(other.Branches) > 0 {
otherBranches := sets.NewString(other.Branches...)
if baseBranches.Intersection(otherBranches).Len() > 0 {
return true
}
return false
}
if !baseBranches.Intersection(sets.NewString(other.SkipBranches...)).Equal(baseBranches) {
return true
}
return false
}
if len(other.Branches) == 0 {
// There can only be one Brancher with skip_branches.
return true
}
return other.Intersects(br)
}
// RunsAgainstChanges returns true if any of the changed input paths match the run_if_changed regex.
func (ps Presubmit) RunsAgainstChanges(changes []string) bool {
for _, change := range changes {
if ps.reChanges.MatchString(change) {
return true
}
}
return false
}
// TriggerMatches returns true if the comment body should trigger this presubmit.
//
// This is usually a /test foo string.
func (ps Presubmit) TriggerMatches(body string) bool {
return ps.re.MatchString(body)
}
// ContextRequired checks whether a context is required from GitHub's point of view (required check).
func (ps Presubmit) ContextRequired() bool {
if ps.Optional || ps.SkipReport {
return false
}
return true
}
// ChangedFilesProvider returns a slice of modified files.
type ChangedFilesProvider func() ([]string, error)
func matching(j Presubmit, body string, testAll bool) []Presubmit {
// When matching, ignore whether the job runs for the branch or whether the job runs for the
// PR's changes. Even if the job doesn't run, it still matches the PR and may need to be marked
// as skipped on github.
var result []Presubmit
if (testAll && (j.AlwaysRun || j.RunIfChanged != "")) || j.TriggerMatches(body) {
result = append(result, j)
}
for _, child := range j.RunAfterSuccess {
result = append(result, matching(child, body, testAll)...)
}
return result
}
// MatchingPresubmits returns a slice of presubmits to trigger based on the repo and a comment text.
func (c *JobConfig) MatchingPresubmits(fullRepoName, body string, testAll bool) []Presubmit {
var result []Presubmit
if jobs, ok := c.Presubmits[fullRepoName]; ok {
for _, job := range jobs {
result = append(result, matching(job, body, testAll)...)
}
}
return result
}
// UtilityConfig holds decoration metadata, such as how to clone and additional containers/etc
type UtilityConfig struct {
// Decorate determines if we decorate the PodSpec or not
Decorate bool `json:"decorate,omitempty"`
// PathAlias is the location under <root-dir>/src
// where the repository under test is cloned. If this
// is not set, <root-dir>/src/github.com/org/repo will
// be used as the default.
PathAlias string `json:"path_alias,omitempty"`
// CloneURI is the URI that is used to clone the
// repository. If unset, will default to
// `https://github.com/org/repo.git`.
CloneURI string `json:"clone_uri,omitempty"`
// SkipSubmodules determines if submodules should be
// cloned when the job is run. Defaults to true.
SkipSubmodules bool `json:"skip_submodules,omitempty"`
// ExtraRefs are auxiliary repositories that
// need to be cloned, determined from config
ExtraRefs []kube.Refs `json:"extra_refs,omitempty"`
// DecorationConfig holds configuration options for
// decorating PodSpecs that users provide
DecorationConfig *kube.DecorationConfig `json:"decoration_config,omitempty"`
}
// RetestPresubmits returns all presubmits that should be run given a /retest command.
// This is the set of all presubmits intersected with ((alwaysRun + runContexts) - skipContexts)
func (c *JobConfig) RetestPresubmits(fullRepoName string, skipContexts, runContexts map[string]bool) []Presubmit {
var result []Presubmit
if jobs, ok := c.Presubmits[fullRepoName]; ok {
for _, job := range jobs {
if skipContexts[job.Context] {
continue
}
if job.AlwaysRun || job.RunIfChanged != "" || runContexts[job.Context] {
result = append(result, job)
}
}
}
return result
}
// GetPresubmit returns the presubmit job for the provided repo and job name.
func (c *JobConfig) GetPresubmit(repo, jobName string) *Presubmit {
presubmits := c.AllPresubmits([]string{repo})
for i := range presubmits {
ps := presubmits[i]
if ps.Name == jobName {
return &ps
}
}
return nil
}
// SetPresubmits updates c.Presubmits to jobs, after compiling and validating their regexes.
func (c *JobConfig) SetPresubmits(jobs map[string][]Presubmit) error {
nj := map[string][]Presubmit{}
for k, v := range jobs {
nj[k] = make([]Presubmit, len(v))
copy(nj[k], v)
if err := SetPresubmitRegexes(nj[k]); err != nil {
return err
}
}
c.Presubmits = nj
return nil
}
// SetPostsubmits updates c.Postsubmits to jobs, after compiling and validating their regexes.
func (c *JobConfig) SetPostsubmits(jobs map[string][]Postsubmit) error {
nj := map[string][]Postsubmit{}
for k, v := range jobs {
nj[k] = make([]Postsubmit, len(v))
copy(nj[k], v)
if err := SetPostsubmitRegexes(nj[k]); err != nil {
return err
}
}
c.Postsubmits = nj
return nil
}
// listPresubmits lists all the presubmits for a given repo, including the run-after-success jobs.
func listPresubmits(ps []Presubmit) []Presubmit {
var res []Presubmit
for _, p := range ps {
res = append(res, p)
res = append(res, listPresubmits(p.RunAfterSuccess)...)
}
return res
}
// AllPresubmits returns all prow presubmit jobs in repos.
// if repos is empty, return all presubmits.
func (c *JobConfig) AllPresubmits(repos []string) []Presubmit {
var res []Presubmit
for repo, v := range c.Presubmits {
if len(repos) == 0 {
res = append(res, listPresubmits(v)...)
} else {
for _, r := range repos {
if r == repo {
res = append(res, listPresubmits(v)...)
break
}
}
}
}
return res
}
// listPostsubmits lists all the postsubmits for a given repo, including the run-after-success jobs.
func listPostsubmits(ps []Postsubmit) []Postsubmit {
var res []Postsubmit
for _, p := range ps {
res = append(res, p)
res = append(res, listPostsubmits(p.RunAfterSuccess)...)
}
return res
}
// AllPostsubmits returns all prow postsubmit jobs in repos.
// if repos is empty, return all postsubmits.
func (c *JobConfig) AllPostsubmits(repos []string) []Postsubmit {
var res []Postsubmit
for repo, v := range c.Postsubmits {
if len(repos) == 0 {
res = append(res, listPostsubmits(v)...)
} else {
for _, r := range repos {
if r == repo {
res = append(res, listPostsubmits(v)...)
break
}
}
}
}
return res
}
// AllPeriodics returns all prow periodic jobs.
func (c *JobConfig) AllPeriodics() []Periodic {
var listPeriodic func(ps []Periodic) []Periodic
listPeriodic = func(ps []Periodic) []Periodic {
var res []Periodic
for _, p := range ps {
res = append(res, p)
res = append(res, listPeriodic(p.RunAfterSuccess)...)
}
return res
}
return listPeriodic(c.Periodics)
}
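AllPeriodics (like listPresubmits and listPostsubmits above) flattens the RunAfterSuccess tree with a recursive traversal, emitting each job before its descendants. A stdlib sketch of the same depth-first walk over a hypothetical job tree:

```go
package main

import "fmt"

type periodic struct {
	name     string
	children []periodic // stands in for RunAfterSuccess
}

// flatten appends each job followed by its run-after-success descendants,
// depth-first, matching listPeriodic in the source.
func flatten(ps []periodic) []periodic {
	var res []periodic
	for _, p := range ps {
		res = append(res, p)
		res = append(res, flatten(p.children)...)
	}
	return res
}

func main() {
	tree := []periodic{{
		name: "nightly-build",
		children: []periodic{
			{name: "nightly-publish"},
			{name: "nightly-notify"},
		},
	}}
	for _, p := range flatten(tree) {
		fmt.Println(p.name) // nightly-build, then nightly-publish, then nightly-notify
	}
}
```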

vendor/k8s.io/test-infra/prow/config/org/BUILD.bazel generated vendored Normal file

@@ -0,0 +1,22 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = ["org.go"],
importpath = "k8s.io/test-infra/prow/config/org",
visibility = ["//visibility:public"],
)
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [":package-srcs"],
tags = ["automanaged"],
visibility = ["//visibility:public"],
)

vendor/k8s.io/test-infra/prow/config/org/org.go generated vendored Normal file

@@ -0,0 +1,133 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package org
import (
"fmt"
)
// Metadata declares metadata about the GitHub org.
//
// See https://developer.github.com/v3/orgs/#edit-an-organization
type Metadata struct {
BillingEmail *string `json:"billing_email,omitempty"`
Company *string `json:"company,omitempty"`
Email *string `json:"email,omitempty"`
Name *string `json:"name,omitempty"`
Description *string `json:"description,omitempty"`
Location *string `json:"location,omitempty"`
HasOrganizationProjects *bool `json:"has_organization_projects,omitempty"`
HasRepositoryProjects *bool `json:"has_repository_projects,omitempty"`
DefaultRepositoryPermission *RepoPermissionLevel `json:"default_repository_permission,omitempty"`
MembersCanCreateRepositories *bool `json:"members_can_create_repositories,omitempty"`
}
// Config declares org metadata as well as its people and teams.
type Config struct {
Metadata
Teams map[string]Team `json:"teams,omitempty"`
Members []string `json:"members,omitempty"`
Admins []string `json:"admins,omitempty"`
}
// TeamMetadata declares metadata about the github team.
//
// See https://developer.github.com/v3/teams/#edit-team
type TeamMetadata struct {
Description *string `json:"description,omitempty"`
Privacy *Privacy `json:"privacy,omitempty"`
}
// Team declares metadata as well as its people.
type Team struct {
TeamMetadata
Members []string `json:"members,omitempty"`
Maintainers []string `json:"maintainers,omitempty"`
Children map[string]Team `json:"teams,omitempty"`
Previously []string `json:"previously,omitempty"`
}
// RepoPermissionLevel is admin, write, read or none.
//
// See https://developer.github.com/v3/repos/collaborators/#review-a-users-permission-level
type RepoPermissionLevel string
const (
// Read allows pull but not push
Read RepoPermissionLevel = "read"
// Write allows Read plus push
Write RepoPermissionLevel = "write"
// Admin allows Write plus change others' rights.
Admin RepoPermissionLevel = "admin"
// None disallows everything
None RepoPermissionLevel = "none"
)
var repoPermissionLevels = map[RepoPermissionLevel]bool{
Read: true,
Write: true,
Admin: true,
None: true,
}
// MarshalText returns the byte representation of the permission
func (l RepoPermissionLevel) MarshalText() ([]byte, error) {
return []byte(l), nil
}
// UnmarshalText validates the text is a valid string
func (l *RepoPermissionLevel) UnmarshalText(text []byte) error {
v := RepoPermissionLevel(text)
if _, ok := repoPermissionLevels[v]; !ok {
return fmt.Errorf("bad repo permission: %s not in %v", v, repoPermissionLevels)
}
*l = v
return nil
}
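RepoPermissionLevel validates itself during unmarshaling by implementing encoding.TextUnmarshaler, so a bad value fails the config load instead of silently slipping through. A self-contained stdlib sketch of the same pattern with a hypothetical enum:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type level string

var validLevels = map[level]bool{"read": true, "write": true, "admin": true, "none": true}

// UnmarshalText rejects values outside the enum; encoding/json calls it
// automatically for string values decoded into a type that implements it.
func (l *level) UnmarshalText(text []byte) error {
	v := level(text)
	if !validLevels[v] {
		return fmt.Errorf("bad repo permission: %s", v)
	}
	*l = v
	return nil
}

func main() {
	var cfg struct {
		Permission level `json:"permission"`
	}
	fmt.Println(json.Unmarshal([]byte(`{"permission":"write"}`), &cfg), cfg.Permission) // <nil> write
	fmt.Println(json.Unmarshal([]byte(`{"permission":"owner"}`), &cfg))                // validation error
}
```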
// Privacy is secret or closed.
//
// See https://developer.github.com/v3/teams/#edit-team
type Privacy string
const (
// Closed means it is only visible to org members
Closed Privacy = "closed"
// Secret means it is only visible to team members.
Secret Privacy = "secret"
)
var privacySettings = map[Privacy]bool{
Closed: true,
Secret: true,
}
// MarshalText returns bytes that equal secret or closed
func (p Privacy) MarshalText() ([]byte, error) {
return []byte(p), nil
}
// UnmarshalText returns an error if text != secret or closed
func (p *Privacy) UnmarshalText(text []byte) error {
v := Privacy(text)
if _, ok := privacySettings[v]; !ok {
return fmt.Errorf("bad privacy setting: %s", v)
}
*p = v
return nil
}

vendor/k8s.io/test-infra/prow/config/secrets_agent.go generated vendored Normal file

@@ -0,0 +1,104 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Implements an agent to read and reload the secrets.
package config
import (
"os"
"sync"
"time"
"github.com/sirupsen/logrus"
)
// SecretAgent watches a path and automatically loads the secrets stored.
type SecretAgent struct {
sync.Mutex
secretsMap map[string][]byte
}
// Start will begin polling the secret file at the path. If the first load
// fails, Start will return the error and abort. Future load failures will log
// the failure message but continue attempting to load.
func (sa *SecretAgent) Start(paths []string) error {
secretsMap, err := LoadSecrets(paths)
if err != nil {
return err
}
sa.secretsMap = secretsMap
// Start one goroutine for each file to monitor and update the secret's values.
for secretPath := range secretsMap {
go sa.reloadSecret(secretPath)
}
return nil
}
func (sa *SecretAgent) reloadSecret(secretPath string) {
var lastModTime time.Time
logger := logrus.NewEntry(logrus.StandardLogger())
skips := 0
for range time.Tick(1 * time.Second) {
if skips < 600 {
// Check if the file changed to see if it needs to be re-read.
secretStat, err := os.Stat(secretPath)
if err != nil {
logger.WithField("secret-path", secretPath).
WithError(err).Error("Error loading secret file.")
continue
}
recentModTime := secretStat.ModTime()
if !recentModTime.After(lastModTime) {
skips++
continue // file hasn't been modified
}
lastModTime = recentModTime
}
if secretValue, err := LoadSingleSecret(secretPath); err != nil {
logger.WithField("secret-path", secretPath).
WithError(err).Error("Error loading secret.")
} else {
sa.SetSecret(secretPath, secretValue)
}
}
}
// GetSecret returns the value of a secret stored in a map.
func (sa *SecretAgent) GetSecret(secretPath string) []byte {
sa.Lock()
defer sa.Unlock()
return sa.secretsMap[secretPath]
}
// SetSecret sets the value of a secret at the given path.
func (sa *SecretAgent) SetSecret(secretPath string, secretValue []byte) {
sa.Lock()
defer sa.Unlock()
sa.secretsMap[secretPath] = secretValue
}
// GetTokenGenerator returns a function that gets the value of a given secret.
func (sa *SecretAgent) GetTokenGenerator(secretPath string) func() []byte {
return func() []byte {
return sa.GetSecret(secretPath)
}
}
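The agent guards its map with a mutex and hands callers a closure via GetTokenGenerator rather than a raw value, so every call re-reads the map and observes rotated secrets. A minimal, self-contained sketch of that pattern (names are illustrative; the file polling is omitted):

```go
package main

import (
	"fmt"
	"sync"
)

// agent mirrors the mutex-guarded map shape used by SecretAgent,
// simplified to just the accessor/closure pattern.
type agent struct {
	sync.Mutex
	secrets map[string][]byte
}

func (a *agent) set(path string, value []byte) {
	a.Lock()
	defer a.Unlock()
	a.secrets[path] = value
}

func (a *agent) get(path string) []byte {
	a.Lock()
	defer a.Unlock()
	return a.secrets[path]
}

// tokenGenerator returns a closure, so callers re-read the map on every
// call and see rotated values without re-plumbing configuration.
func (a *agent) tokenGenerator(path string) func() []byte {
	return func() []byte { return a.get(path) }
}

func main() {
	a := &agent{secrets: map[string][]byte{}}
	a.set("/etc/github/oauth", []byte("token-v1"))
	gen := a.tokenGenerator("/etc/github/oauth")
	fmt.Println(string(gen())) // token-v1
	a.set("/etc/github/oauth", []byte("token-v2"))
	fmt.Println(string(gen())) // token-v2: same closure, fresh value
}
```

The closure indirection is why callers such as GitHub clients can hold one token generator for the process lifetime while the underlying secret rotates underneath them.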

530
vendor/k8s.io/test-infra/prow/config/tide.go generated vendored Normal file
View File

@@ -0,0 +1,530 @@
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package config
import (
"errors"
"fmt"
"github.com/xanzy/go-gitlab"
"strings"
"sync"
"time"
"github.com/sirupsen/logrus"
"k8s.io/apimachinery/pkg/util/sets"
"k8s.io/test-infra/prow/gitserver"
)
// TideQueries is a TideQuery slice.
type TideQueries []TideQuery
// TideContextPolicy configures options about how to handle various contexts.
type TideContextPolicy struct {
// whether to consider unknown contexts optional (skip) or required.
SkipUnknownContexts *bool `json:"skip-unknown-contexts,omitempty"`
RequiredContexts []string `json:"required-contexts,omitempty"`
OptionalContexts []string `json:"optional-contexts,omitempty"`
// Infer required and optional jobs from Branch Protection configuration
FromBranchProtection *bool `json:"from-branch-protection,omitempty"`
}
// TideOrgContextPolicy overrides the policy for an org, and any repo overrides.
type TideOrgContextPolicy struct {
TideContextPolicy
Repos map[string]TideRepoContextPolicy `json:"repos,omitempty"`
}
// TideRepoContextPolicy overrides the policy for a repo, and any branch overrides.
type TideRepoContextPolicy struct {
TideContextPolicy
Branches map[string]TideContextPolicy `json:"branches,omitempty"`
}
// TideContextPolicyOptions holds the default policy, and any org overrides.
type TideContextPolicyOptions struct {
TideContextPolicy
// Github Orgs
Orgs map[string]TideOrgContextPolicy `json:"orgs,omitempty"`
}
// Tide is config for the tide pool.
type Tide struct {
// SyncPeriodString compiles into SyncPeriod at load time.
SyncPeriodString string `json:"sync_period,omitempty"`
// SyncPeriod specifies how often Tide will sync jobs with Github. Defaults to 1m.
SyncPeriod time.Duration `json:"-"`
// StatusUpdatePeriodString compiles into StatusUpdatePeriod at load time.
StatusUpdatePeriodString string `json:"status_update_period,omitempty"`
// StatusUpdatePeriod specifies how often Tide will update Github status contexts.
// Defaults to the value of SyncPeriod.
StatusUpdatePeriod time.Duration `json:"-"`
// Queries represents a list of GitHub search queries that collectively
// specify the set of PRs that meet merge requirements.
Queries TideQueries `json:"queries,omitempty"`
// MergeType maps an org or org/repo to the merge method that overrides the
// default. Valid options are squash, rebase, and merge.
MergeType map[string]gitserver.PullRequestMergeType `json:"merge_method,omitempty"`
// URL for tide status contexts.
// We can consider allowing this to be set separately for separate repos, or
// allowing it to be a template.
TargetURL string `json:"target_url,omitempty"`
// PRStatusBaseURL is the base URL for the PR status page.
// This is used to link to a merge requirements overview
// in the tide status context.
PRStatusBaseURL string `json:"pr_status_base_url,omitempty"`
// BlockerLabel is an optional label that is used to identify merge blocking
// Github issues.
// Leave this blank to disable this feature and save 1 API token per sync loop.
BlockerLabel string `json:"blocker_label,omitempty"`
// SquashLabel is an optional label that is used to identify PRs that should
// always be squash merged.
// Leave this blank to disable this feature.
SquashLabel string `json:"squash_label,omitempty"`
// MaxGoroutines is the maximum number of goroutines spawned inside the
// controller to handle org/repo:branch pools. Defaults to 20. Needs to be a
// positive number.
MaxGoroutines int `json:"max_goroutines,omitempty"`
// ContextOptions defines merge options for contexts. If not set, the required
// and optional contexts are inferred from the configured prow jobs and the
// GitHub combined status; otherwise the branch protection settings or
// user-defined options are applied.
ContextOptions TideContextPolicyOptions `json:"context_options,omitempty"`
}
// MergeMethod returns the merge method to use for a repo. The default of merge is
// returned when not overridden.
func (t *Tide) MergeMethod(org, repo string) gitserver.PullRequestMergeType {
name := org + "/" + repo
v, ok := t.MergeType[name]
if !ok {
if ov, found := t.MergeType[org]; found {
return ov
}
return gitserver.MergeMerge
}
return v
}
// TideQuery is turned into a GitHub search query. See the docs for details:
// https://help.github.com/articles/searching-issues-and-pull-requests/
type TideQuery struct {
Orgs []string `json:"orgs,omitempty"`
Repos []string `json:"repos,omitempty"`
ExcludedRepos []string `json:"excludedRepos,omitempty"`
ExcludedBranches []string `json:"excludedBranches,omitempty"`
IncludedBranches []string `json:"includedBranches,omitempty"`
Labels []string `json:"labels,omitempty"`
MissingLabels []string `json:"missingLabels,omitempty"`
Milestone string `json:"milestone,omitempty"`
ReviewApprovedRequired bool `json:"reviewApprovedRequired,omitempty"`
}
// Query returns the corresponding github search string for the tide query.
func (tq *TideQuery) Query() string {
toks := []string{"is:pr", "state:open"}
for _, o := range tq.Orgs {
toks = append(toks, fmt.Sprintf("org:\"%s\"", o))
}
for _, r := range tq.Repos {
toks = append(toks, fmt.Sprintf("repo:\"%s\"", r))
}
for _, r := range tq.ExcludedRepos {
toks = append(toks, fmt.Sprintf("-repo:\"%s\"", r))
}
for _, b := range tq.ExcludedBranches {
toks = append(toks, fmt.Sprintf("-base:\"%s\"", b))
}
for _, b := range tq.IncludedBranches {
toks = append(toks, fmt.Sprintf("base:\"%s\"", b))
}
for _, l := range tq.Labels {
toks = append(toks, fmt.Sprintf("label:\"%s\"", l))
}
for _, l := range tq.MissingLabels {
toks = append(toks, fmt.Sprintf("-label:\"%s\"", l))
}
if tq.Milestone != "" {
toks = append(toks, fmt.Sprintf("milestone:\"%s\"", tq.Milestone))
}
if tq.ReviewApprovedRequired {
toks = append(toks, "review:approved")
}
return strings.Join(toks, " ")
}
// ListProjectMergeRequestsOptions returns GitLab list options selecting open
// merge requests created between start and end.
func (tq *TideQuery) ListProjectMergeRequestsOptions(start, end *time.Time) *gitlab.ListProjectMergeRequestsOptions {
opened := "opened"
options := &gitlab.ListProjectMergeRequestsOptions{
State: &opened,
CreatedAfter: start,
CreatedBefore: end,
}
if len(tq.Labels) > 0 {
options.Labels = gitlab.Labels(tq.Labels)
}
if tq.Milestone != "" {
options.Milestone = &tq.Milestone
}
return options
}
// ListProjectIssuesOptions returns GitLab list options selecting open issues
// created between start and end.
func (tq *TideQuery) ListProjectIssuesOptions(start, end *time.Time) *gitlab.ListProjectIssuesOptions {
opened := "opened"
options := &gitlab.ListProjectIssuesOptions{
State: &opened,
CreatedAfter: start,
CreatedBefore: end,
}
if len(tq.Labels) > 0 {
options.Labels = gitlab.Labels(tq.Labels)
}
if tq.Milestone != "" {
options.Milestone = &tq.Milestone
}
return options
}
// QueryGitlab returns the corresponding GitLab search string for the tide
// query. It is currently unimplemented and returns an empty string.
func (tq *TideQuery) QueryGitlab() string {
return ""
}
// ForRepo indicates if the tide query applies to the specified repo.
func (tq TideQuery) ForRepo(org, repo string) bool {
fullName := fmt.Sprintf("%s/%s", org, repo)
for _, queryOrg := range tq.Orgs {
if queryOrg != org {
continue
}
// Check for repos excluded from the org.
for _, excludedRepo := range tq.ExcludedRepos {
if excludedRepo == fullName {
return false
}
}
return true
}
for _, queryRepo := range tq.Repos {
if queryRepo == fullName {
return true
}
}
return false
}
func reposInOrg(org string, repos []string) []string {
prefix := org + "/"
var res []string
for _, repo := range repos {
if strings.HasPrefix(repo, prefix) {
res = append(res, repo)
}
}
return res
}
// OrgExceptionsAndRepos determines which orgs and repos a set of queries cover.
// Output is returned as a mapping from 'included org'->'repos excluded in the org'
// and a set of included repos.
func (tqs TideQueries) OrgExceptionsAndRepos() (map[string]sets.String, sets.String) {
orgs := make(map[string]sets.String)
for i := range tqs {
for _, org := range tqs[i].Orgs {
applicableRepos := sets.NewString(reposInOrg(org, tqs[i].ExcludedRepos)...)
if excepts, ok := orgs[org]; !ok {
// We have not seen this org so the exceptions are just applicable
// members of 'excludedRepos'.
orgs[org] = applicableRepos
} else {
// We have seen this org so the exceptions are the applicable
// members of 'excludedRepos' intersected with existing exceptions.
orgs[org] = excepts.Intersection(applicableRepos)
}
}
}
repos := sets.NewString()
for i := range tqs {
repos.Insert(tqs[i].Repos...)
}
// Remove any org exceptions that are explicitly included in a different query.
reposList := repos.UnsortedList()
for _, excepts := range orgs {
excepts.Delete(reposList...)
}
return orgs, repos
}
// QueryMap is a struct mapping from "org/repo" -> TideQueries that
// apply to that org or repo. It is lazily populated, but threadsafe.
type QueryMap struct {
queries TideQueries
cache map[string]TideQueries
sync.Mutex
}
// QueryMap creates a QueryMap from TideQueries
func (tqs TideQueries) QueryMap() *QueryMap {
return &QueryMap{
queries: tqs,
cache: make(map[string]TideQueries),
}
}
// ForRepo returns the tide queries that apply to a repo.
func (qm *QueryMap) ForRepo(org, repo string) TideQueries {
res := TideQueries(nil)
fullName := fmt.Sprintf("%s/%s", org, repo)
qm.Lock()
defer qm.Unlock()
if qs, ok := qm.cache[fullName]; ok {
return append(res, qs...) // Return a copy.
}
// Cache miss. Need to determine relevant queries.
for _, query := range qm.queries {
if query.ForRepo(org, repo) {
res = append(res, query)
}
}
qm.cache[fullName] = res
return res
}
// Validate returns an error if the query has any errors.
//
// Examples include:
// * an org name that is empty or includes a /
// * repos that are not org/repo
// * a label that is in both the labels and missing_labels section
// * a branch that is in both included and excluded branch set.
func (tq *TideQuery) Validate() error {
duplicates := func(field string, list []string) error {
dups := sets.NewString()
seen := sets.NewString()
for _, elem := range list {
if seen.Has(elem) {
dups.Insert(elem)
} else {
seen.Insert(elem)
}
}
dupCount := len(list) - seen.Len()
if dupCount == 0 {
return nil
}
return fmt.Errorf("%q contains %d duplicate entries: %s", field, dupCount, strings.Join(dups.List(), ", "))
}
orgs := sets.NewString()
for o := range tq.Orgs {
if strings.Contains(tq.Orgs[o], "/") {
return fmt.Errorf("orgs[%d]: %q contains a '/' which is not valid", o, tq.Orgs[o])
}
if len(tq.Orgs[o]) == 0 {
return fmt.Errorf("orgs[%d]: is an empty string", o)
}
orgs.Insert(tq.Orgs[o])
}
if err := duplicates("orgs", tq.Orgs); err != nil {
return err
}
for r := range tq.Repos {
parts := strings.Split(tq.Repos[r], "/")
if len(parts) != 2 || len(parts[0]) == 0 || len(parts[1]) == 0 {
return fmt.Errorf("repos[%d]: %q is not of the form \"org/repo\"", r, tq.Repos[r])
}
if orgs.Has(parts[0]) {
return fmt.Errorf("repos[%d]: %q is already included via org: %q", r, tq.Repos[r], parts[0])
}
}
if err := duplicates("repos", tq.Repos); err != nil {
return err
}
if len(tq.Orgs) == 0 && len(tq.Repos) == 0 {
return errors.New("'orgs' and 'repos' cannot both be empty")
}
for er := range tq.ExcludedRepos {
parts := strings.Split(tq.ExcludedRepos[er], "/")
if len(parts) != 2 || len(parts[0]) == 0 || len(parts[1]) == 0 {
return fmt.Errorf("excludedRepos[%d]: %q is not of the form \"org/repo\"", er, tq.ExcludedRepos[er])
}
if !orgs.Has(parts[0]) {
return fmt.Errorf("excludedRepos[%d]: %q has no effect because org %q is not included", er, tq.ExcludedRepos[er], parts[0])
}
// Note: At this point we also know that this excludedRepo is not found in 'repos'.
}
if err := duplicates("excludedRepos", tq.ExcludedRepos); err != nil {
return err
}
if invalids := sets.NewString(tq.Labels...).Intersection(sets.NewString(tq.MissingLabels...)); len(invalids) > 0 {
return fmt.Errorf("the labels: %q are both required and forbidden", invalids.List())
}
if err := duplicates("labels", tq.Labels); err != nil {
return err
}
if err := duplicates("missingLabels", tq.MissingLabels); err != nil {
return err
}
if len(tq.ExcludedBranches) > 0 && len(tq.IncludedBranches) > 0 {
return errors.New("both 'includedBranches' and 'excludedBranches' are specified ('excludedBranches' have no effect)")
}
if err := duplicates("includedBranches", tq.IncludedBranches); err != nil {
return err
}
if err := duplicates("excludedBranches", tq.ExcludedBranches); err != nil {
return err
}
return nil
}
// Validate returns an error if any contexts are both required and optional.
func (cp *TideContextPolicy) Validate() error {
inter := sets.NewString(cp.RequiredContexts...).Intersection(sets.NewString(cp.OptionalContexts...))
if inter.Len() > 0 {
return fmt.Errorf("contexts %s are defined as both required and optional", strings.Join(inter.List(), ", "))
}
return nil
}
func mergeTideContextPolicy(a, b TideContextPolicy) TideContextPolicy {
mergeBool := func(a, b *bool) *bool {
if b == nil {
return a
}
return b
}
c := TideContextPolicy{}
c.FromBranchProtection = mergeBool(a.FromBranchProtection, b.FromBranchProtection)
c.SkipUnknownContexts = mergeBool(a.SkipUnknownContexts, b.SkipUnknownContexts)
required := sets.NewString(a.RequiredContexts...)
optional := sets.NewString(a.OptionalContexts...)
required.Insert(b.RequiredContexts...)
optional.Insert(b.OptionalContexts...)
if required.Len() > 0 {
c.RequiredContexts = required.List()
}
if optional.Len() > 0 {
c.OptionalContexts = optional.List()
}
return c
}
func parseTideContextPolicyOptions(org, repo, branch string, options TideContextPolicyOptions) TideContextPolicy {
option := options.TideContextPolicy
if o, ok := options.Orgs[org]; ok {
option = mergeTideContextPolicy(option, o.TideContextPolicy)
if r, ok := o.Repos[repo]; ok {
option = mergeTideContextPolicy(option, r.TideContextPolicy)
if b, ok := r.Branches[branch]; ok {
option = mergeTideContextPolicy(option, b)
}
}
}
return option
}
// GetTideContextPolicy parses the prow config to find context merge options.
// If none are set, it infers the required and optional contexts from the
// configured prow jobs and the default GitHub combined status; otherwise it
// applies the branch protection settings or the explicitly listed contexts.
func (c Config) GetTideContextPolicy(org, repo, branch string) (*TideContextPolicy, error) {
options := parseTideContextPolicyOptions(org, repo, branch, c.Tide.ContextOptions)
// Adding required and optional contexts from options
required := sets.NewString(options.RequiredContexts...)
optional := sets.NewString(options.OptionalContexts...)
// automatically generate required and optional entries for Prow Jobs
prowRequired, prowOptional := BranchRequirements(org, repo, branch, c.Presubmits)
required.Insert(prowRequired...)
optional.Insert(prowOptional...)
// Using Branch protection configuration
if options.FromBranchProtection != nil && *options.FromBranchProtection {
bp, err := c.GetBranchProtection(org, repo, branch)
if err != nil {
logrus.WithError(err).Warningf("Error getting branch protection for %s/%s+%s", org, repo, branch)
} else if bp == nil {
logrus.Warningf("branch protection not set for %s/%s+%s", org, repo, branch)
} else if bp.Protect != nil && *bp.Protect {
required.Insert(bp.RequiredStatusChecks.Contexts...)
}
}
t := &TideContextPolicy{
RequiredContexts: required.List(),
OptionalContexts: optional.List(),
SkipUnknownContexts: options.SkipUnknownContexts,
}
if err := t.Validate(); err != nil {
return t, err
}
return t, nil
}
// IsOptional checks whether a context can be ignored.
// It returns true if
// - the context is registered as optional, or
// - the context is not registered as required and SkipUnknownContexts is set.
// Otherwise it returns false: the context is required.
func (cp *TideContextPolicy) IsOptional(c string) bool {
if sets.NewString(cp.OptionalContexts...).Has(c) {
return true
}
if sets.NewString(cp.RequiredContexts...).Has(c) {
return false
}
if cp.SkipUnknownContexts != nil && *cp.SkipUnknownContexts {
return true
}
return false
}
// MissingRequiredContexts discards the optional contexts and returns the required contexts that are missing from the provided list.
func (cp *TideContextPolicy) MissingRequiredContexts(contexts []string) []string {
if len(cp.RequiredContexts) == 0 {
return nil
}
existingContexts := sets.NewString()
for _, c := range contexts {
existingContexts.Insert(c)
}
var missingContexts []string
for c := range sets.NewString(cp.RequiredContexts...).Difference(existingContexts) {
missingContexts = append(missingContexts, c)
}
return missingContexts
}
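Query above assembles GitHub search qualifiers by quoting each value and joining the tokens with spaces. A standalone sketch of the same token assembly for a subset of the fields (the function name is illustrative, not part of the package):

```go
package main

import (
	"fmt"
	"strings"
)

// buildQuery assembles a GitHub search string the same way
// TideQuery.Query does, for a subset of its fields.
func buildQuery(orgs, repos, labels, missingLabels []string) string {
	toks := []string{"is:pr", "state:open"}
	for _, o := range orgs {
		toks = append(toks, fmt.Sprintf("org:%q", o))
	}
	for _, r := range repos {
		toks = append(toks, fmt.Sprintf("repo:%q", r))
	}
	for _, l := range labels {
		toks = append(toks, fmt.Sprintf("label:%q", l))
	}
	for _, l := range missingLabels {
		// A leading '-' negates a qualifier in GitHub search syntax.
		toks = append(toks, fmt.Sprintf("-label:%q", l))
	}
	return strings.Join(toks, " ")
}

func main() {
	q := buildQuery([]string{"kubernetes"}, nil,
		[]string{"lgtm", "approved"}, []string{"do-not-merge"})
	fmt.Println(q)
	// is:pr state:open org:"kubernetes" label:"lgtm" label:"approved" -label:"do-not-merge"
}
```

Each query therefore maps to exactly one GitHub search request, which is why Tide can batch PR discovery per query rather than per repo.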

30
vendor/k8s.io/test-infra/prow/entrypoint/BUILD.bazel generated vendored Normal file
View File

@@ -0,0 +1,30 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = [
"doc.go",
"options.go",
"run.go",
],
importpath = "k8s.io/test-infra/prow/entrypoint",
visibility = ["//visibility:public"],
deps = [
"//vendor/github.com/sirupsen/logrus:go_default_library",
"//vendor/k8s.io/test-infra/prow/pod-utils/wrapper:go_default_library",
],
)
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [":package-srcs"],
tags = ["automanaged"],
visibility = ["//visibility:public"],
)

19
vendor/k8s.io/test-infra/prow/entrypoint/doc.go generated vendored Normal file
View File

@@ -0,0 +1,19 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package entrypoint is a library that knows how to wrap
// a process and write its output and exit code to disk
package entrypoint

103
vendor/k8s.io/test-infra/prow/entrypoint/options.go generated vendored Normal file
View File

@@ -0,0 +1,103 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package entrypoint
import (
"encoding/json"
"errors"
"flag"
"time"
"k8s.io/test-infra/prow/pod-utils/wrapper"
)
// NewOptions returns an empty Options with no nil fields
func NewOptions() *Options {
return &Options{
Options: &wrapper.Options{},
}
}
// Options exposes the configuration necessary
// for defining the process being watched and
// where in GCS an upload will land.
type Options struct {
// Args is the process and args to run
Args []string `json:"args"`
// Timeout determines how long to wait before the
// entrypoint sends SIGINT to the process
Timeout time.Duration `json:"timeout"`
// GracePeriod determines how long to wait after
// sending SIGINT before the entrypoint sends
// SIGKILL.
GracePeriod time.Duration `json:"grace_period"`
// ArtifactDir is a directory where test processes can dump artifacts
// for upload to persistent storage (courtesy of sidecar).
// If specified, it is created by entrypoint before starting the test process.
// May be ignored if not using sidecar.
ArtifactDir string `json:"artifact_dir,omitempty"`
*wrapper.Options
}
// Validate ensures that the set of options are
// self-consistent and valid
func (o *Options) Validate() error {
if len(o.Args) == 0 {
return errors.New("no process to wrap specified")
}
return o.Options.Validate()
}
const (
// JSONConfigEnvVar is the environment variable that
// utilities expect to find a full JSON configuration
// in when run.
JSONConfigEnvVar = "ENTRYPOINT_OPTIONS"
)
// ConfigVar exposes the environment variable used
// to store serialized configuration
func (o *Options) ConfigVar() string {
return JSONConfigEnvVar
}
// LoadConfig loads options from serialized config
func (o *Options) LoadConfig(config string) error {
return json.Unmarshal([]byte(config), o)
}
// AddFlags binds flags to options
func (o *Options) AddFlags(flags *flag.FlagSet) {
flags.DurationVar(&o.Timeout, "timeout", DefaultTimeout, "Timeout for the test command.")
flags.DurationVar(&o.GracePeriod, "grace-period", DefaultGracePeriod, "Grace period after timeout for the test command.")
flags.StringVar(&o.ArtifactDir, "artifact-dir", "", "directory where test artifacts should be placed for upload to persistent storage")
o.Options.AddFlags(flags)
}
// Complete internalizes command line arguments
func (o *Options) Complete(args []string) {
o.Args = args
}
// Encode will encode the set of options in the format that
// is expected for the configuration environment variable
func Encode(options Options) (string, error) {
encoded, err := json.Marshal(options)
return string(encoded), err
}

208
vendor/k8s.io/test-infra/prow/entrypoint/run.go generated vendored Normal file
View File

@@ -0,0 +1,208 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package entrypoint
import (
"errors"
"fmt"
"io"
"io/ioutil"
"os"
"os/exec"
"os/signal"
"path/filepath"
"strconv"
"syscall"
"time"
"github.com/sirupsen/logrus"
)
const (
// InternalErrorCode is what we write to the marker file to
// indicate that we failed to start the wrapped command
InternalErrorCode = 127
// AbortedErrorCode is what we write to the marker file to
// indicate that we were terminated via a signal.
AbortedErrorCode = 130
// DefaultTimeout is the default timeout for the test
// process before SIGINT is sent
DefaultTimeout = 120 * time.Minute
// DefaultGracePeriod is the default timeout for the test
// process after SIGINT is sent before SIGKILL is sent
DefaultGracePeriod = 15 * time.Second
)
var (
// errTimedOut is used as the command's error when the command
// is terminated after the timeout is reached
errTimedOut = errors.New("process timed out")
// errAborted is used as the command's error when the command
// is shut down by an external signal
errAborted = errors.New("process aborted")
)
// Run executes the test process then writes the exit code to the marker file.
// This function returns the status code that should be passed to os.Exit().
func (o Options) Run() int {
code, err := o.ExecuteProcess()
if err != nil {
logrus.WithError(err).Error("Error executing test process")
}
if err := o.mark(code); err != nil {
logrus.WithError(err).Error("Error writing exit code to marker file")
return InternalErrorCode
}
return code
}
// ExecuteProcess creates the artifact directory then executes the process as
// configured, writing the output to the process log.
func (o Options) ExecuteProcess() (int, error) {
if o.ArtifactDir != "" {
if err := os.MkdirAll(o.ArtifactDir, os.ModePerm); err != nil {
return InternalErrorCode, fmt.Errorf("could not create artifact directory(%s): %v", o.ArtifactDir, err)
}
}
processLogFile, err := os.Create(o.ProcessLog)
if err != nil {
return InternalErrorCode, fmt.Errorf("could not create process logfile(%s): %v", o.ProcessLog, err)
}
defer processLogFile.Close()
output := io.MultiWriter(os.Stdout, processLogFile)
logrus.SetOutput(output)
defer logrus.SetOutput(os.Stdout)
executable := o.Args[0]
var arguments []string
if len(o.Args) > 1 {
arguments = o.Args[1:]
}
command := exec.Command(executable, arguments...)
command.Stderr = output
command.Stdout = output
if err := command.Start(); err != nil {
return InternalErrorCode, fmt.Errorf("could not start the process: %v", err)
}
// if we get asked to terminate we need to forward
// that to the wrapped process as if it timed out
interrupt := make(chan os.Signal, 1)
signal.Notify(interrupt, os.Interrupt, syscall.SIGTERM)
timeout := optionOrDefault(o.Timeout, DefaultTimeout)
gracePeriod := optionOrDefault(o.GracePeriod, DefaultGracePeriod)
var commandErr error
cancelled, aborted := false, false
done := make(chan error)
go func() {
done <- command.Wait()
}()
select {
case err := <-done:
commandErr = err
case <-time.After(timeout):
logrus.Errorf("Process did not finish before %s timeout", timeout)
cancelled = true
gracefullyTerminate(command, done, gracePeriod)
case s := <-interrupt:
logrus.Errorf("Entrypoint received interrupt: %v", s)
cancelled = true
aborted = true
gracefullyTerminate(command, done, gracePeriod)
}
var returnCode int
if cancelled {
if aborted {
commandErr = errAborted
returnCode = AbortedErrorCode
} else {
commandErr = errTimedOut
returnCode = InternalErrorCode
}
} else {
if status, ok := command.ProcessState.Sys().(syscall.WaitStatus); ok {
returnCode = status.ExitStatus()
} else if commandErr == nil {
returnCode = 0
} else {
returnCode = 1
}
if returnCode != 0 {
commandErr = fmt.Errorf("wrapped process failed: %v", commandErr)
}
}
return returnCode, commandErr
}
func (o *Options) mark(exitCode int) error {
content := []byte(strconv.Itoa(exitCode))
// create temp file in the same directory as the desired marker file
dir := filepath.Dir(o.MarkerFile)
tempFile, err := ioutil.TempFile(dir, "temp-marker")
if err != nil {
return fmt.Errorf("could not create temp marker file in %s: %v", dir, err)
}
// write the exit code to the tempfile, sync to disk and close
if _, err = tempFile.Write(content); err != nil {
return fmt.Errorf("could not write to temp marker file (%s): %v", tempFile.Name(), err)
}
if err = tempFile.Sync(); err != nil {
return fmt.Errorf("could not sync temp marker file (%s): %v", tempFile.Name(), err)
}
tempFile.Close()
// set desired permission bits, then rename to the desired file name
if err = os.Chmod(tempFile.Name(), os.ModePerm); err != nil {
return fmt.Errorf("could not chmod (%x) temp marker file (%s): %v", os.ModePerm, tempFile.Name(), err)
}
if err := os.Rename(tempFile.Name(), o.MarkerFile); err != nil {
return fmt.Errorf("could not move marker file to destination path (%s): %v", o.MarkerFile, err)
}
return nil
}
// optionOrDefault defaults to a value if option
// is the zero value
func optionOrDefault(option, defaultValue time.Duration) time.Duration {
if option == 0 {
return defaultValue
}
return option
}
func gracefullyTerminate(command *exec.Cmd, done <-chan error, gracePeriod time.Duration) {
if err := command.Process.Signal(os.Interrupt); err != nil {
logrus.WithError(err).Error("Could not interrupt process after timeout")
}
select {
case <-done:
logrus.Errorf("Process gracefully exited before %s grace period", gracePeriod)
// The wrapped process's error is discarded here; the caller reports errTimedOut or errAborted instead.
case <-time.After(gracePeriod):
logrus.Errorf("Process did not exit before %s grace period", gracePeriod)
if err := command.Process.Kill(); err != nil {
logrus.WithError(err).Error("Could not kill process after grace period")
}
}
}

25
vendor/k8s.io/test-infra/prow/errorutil/BUILD.bazel generated vendored Normal file
View File

@@ -0,0 +1,25 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = [
"aggregate.go",
"doc.go",
],
importpath = "k8s.io/test-infra/prow/errorutil",
visibility = ["//visibility:public"],
)
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [":package-srcs"],
tags = ["automanaged"],
visibility = ["//visibility:public"],
)

85
vendor/k8s.io/test-infra/prow/errorutil/aggregate.go generated vendored Normal file
View File

@@ -0,0 +1,85 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package errorutil
import (
"fmt"
"strings"
)
// Aggregate represents an object that contains multiple errors, but does not
// necessarily have singular semantic meaning.
type Aggregate interface {
error
Errors() []error
Strings() []string
}
// NewAggregate converts a slice of errors into an Aggregate interface, which
// is itself an implementation of the error interface. If the slice is empty,
// this returns nil.
// Nil entries in the input list are filtered out to avoid a nil pointer
// panic when Error() is called.
func NewAggregate(errlist ...error) Aggregate {
if len(errlist) == 0 {
return nil
}
// Filter out nil entries from the input list.
var errs []error
for _, e := range errlist {
if e != nil {
errs = append(errs, e)
}
}
if len(errs) == 0 {
return nil
}
return aggregate(errs)
}
// This helper implements the error and Errors interfaces. Keeping it private
// prevents people from making an aggregate of 0 errors, which is not
// an error, but does satisfy the error interface.
type aggregate []error
// Error is part of the error interface.
func (agg aggregate) Error() string {
if len(agg) == 0 {
// This should never happen, really.
return ""
}
return fmt.Sprintf("[%s]", strings.Join(agg.Strings(), ", "))
}
// Strings flattens the aggregate (and any sub aggregates) into a
// slice of strings.
func (agg aggregate) Strings() []string {
strs := make([]string, 0, len(agg))
for _, e := range agg {
if subAgg, ok := e.(aggregate); ok {
strs = append(strs, subAgg.Strings()...)
} else {
strs = append(strs, e.Error())
}
}
return strs
}
// Errors is part of the Aggregate interface.
func (agg aggregate) Errors() []error {
return []error(agg)
}

18
vendor/k8s.io/test-infra/prow/errorutil/doc.go generated vendored Normal file
View File

@@ -0,0 +1,18 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package errorutil provides utilities for errors
package errorutil

35
vendor/k8s.io/test-infra/prow/gcsupload/BUILD.bazel generated vendored Normal file

@@ -0,0 +1,35 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = [
"doc.go",
"options.go",
"run.go",
],
importpath = "k8s.io/test-infra/prow/gcsupload",
visibility = ["//visibility:public"],
deps = [
"//vendor/cloud.google.com/go/storage:go_default_library",
"//vendor/github.com/sirupsen/logrus:go_default_library",
"//vendor/google.golang.org/api/option:go_default_library",
"//vendor/k8s.io/test-infra/prow/kube:go_default_library",
"//vendor/k8s.io/test-infra/prow/pod-utils/downwardapi:go_default_library",
"//vendor/k8s.io/test-infra/prow/pod-utils/gcs:go_default_library",
"//vendor/k8s.io/test-infra/testgrid/util/gcs:go_default_library",
],
)
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [":package-srcs"],
tags = ["automanaged"],
visibility = ["//visibility:public"],
)

19
vendor/k8s.io/test-infra/prow/gcsupload/doc.go generated vendored Normal file

@@ -0,0 +1,19 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package gcsupload uploads artifacts to a GCS path
// resolved from job configuration
package gcsupload

120
vendor/k8s.io/test-infra/prow/gcsupload/options.go generated vendored Normal file

@@ -0,0 +1,120 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package gcsupload
import (
"encoding/json"
"errors"
"flag"
"k8s.io/test-infra/prow/kube"
"k8s.io/test-infra/testgrid/util/gcs"
)
// NewOptions returns an empty Options with no nil fields
func NewOptions() *Options {
return &Options{
GCSConfiguration: &kube.GCSConfiguration{},
}
}
// Options exposes the configuration necessary
// for defining where in GCS an upload will land.
type Options struct {
// Items are files or directories to upload
Items []string `json:"items,omitempty"`
// SubDir is appended to the GCS path
SubDir string `json:"sub_dir,omitempty"`
*kube.GCSConfiguration
// GcsCredentialsFile is the path to the JSON
// credentials for pushing to GCS
GcsCredentialsFile string `json:"gcs_credentials_file,omitempty"`
DryRun bool `json:"dry_run"`
// gcsPath is used to store human-provided GCS
// paths that are parsed to get more granular
// fields
gcsPath gcs.Path
}
// Validate ensures that the set of options is
// self-consistent and valid
func (o *Options) Validate() error {
if o.gcsPath.String() != "" {
o.Bucket = o.gcsPath.Bucket()
o.PathPrefix = o.gcsPath.Object()
}
if !o.DryRun {
if o.Bucket == "" {
return errors.New("GCS upload was requested but no GCS bucket was provided")
}
if o.GcsCredentialsFile == "" {
return errors.New("GCS upload was requested but no GCS credentials file was provided")
}
}
return o.GCSConfiguration.Validate()
}
// ConfigVar exposes the environment variable used
// to store serialized configuration
func (o *Options) ConfigVar() string {
return JSONConfigEnvVar
}
// LoadConfig loads options from serialized config
func (o *Options) LoadConfig(config string) error {
return json.Unmarshal([]byte(config), o)
}
// Complete internalizes command line arguments
func (o *Options) Complete(args []string) {
o.Items = args
}
// AddFlags adds flags to the FlagSet that populate
// the GCS upload options struct given.
func (o *Options) AddFlags(fs *flag.FlagSet) {
fs.StringVar(&o.SubDir, "sub-dir", "", "Optional sub-directory of the job's path to which artifacts are uploaded")
fs.StringVar(&o.PathStrategy, "path-strategy", kube.PathStrategyExplicit, "how to encode org and repo into GCS paths")
fs.StringVar(&o.DefaultOrg, "default-org", "", "optional default org for GCS path encoding")
fs.StringVar(&o.DefaultRepo, "default-repo", "", "optional default repo for GCS path encoding")
fs.Var(&o.gcsPath, "gcs-path", "GCS path to upload into")
fs.StringVar(&o.GcsCredentialsFile, "gcs-credentials-file", "", "file where Google Cloud authentication credentials are stored")
fs.BoolVar(&o.DryRun, "dry-run", true, "do not interact with GCS")
}
const (
// JSONConfigEnvVar is the environment variable that
// utilities expect to find a full JSON configuration
// in when run.
JSONConfigEnvVar = "GCSUPLOAD_OPTIONS"
)
// Encode will encode the set of options in the format that
// is expected for the configuration environment variable
func Encode(options Options) (string, error) {
encoded, err := json.Marshal(options)
return string(encoded), err
}
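The `Encode`/`LoadConfig` pair implements a JSON-over-environment-variable round trip. A self-contained sketch with a stand-in struct (the real `Options` embeds `kube.GCSConfiguration` and more fields, so the struct here is simplified):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// options is a simplified stand-in for gcsupload.Options.
type options struct {
	SubDir string `json:"sub_dir,omitempty"`
	DryRun bool   `json:"dry_run"`
}

// configEnvVar matches JSONConfigEnvVar above.
const configEnvVar = "GCSUPLOAD_OPTIONS"

// encode serializes options, as Encode does.
func encode(o options) (string, error) {
	b, err := json.Marshal(o)
	return string(b), err
}

// load deserializes options, as LoadConfig does.
func load(o *options, config string) error {
	return json.Unmarshal([]byte(config), o)
}

func main() {
	in := options{SubDir: "artifacts", DryRun: true}
	encoded, _ := encode(in)
	os.Setenv(configEnvVar, encoded) // the pod utility reads this at startup

	var out options
	if err := load(&out, os.Getenv(configEnvVar)); err != nil {
		panic(err)
	}
	fmt.Println(out.SubDir, out.DryRun) // artifacts true
}
```

This is how Prow's pod utilities pass full configuration through a single environment variable rather than a long flag list.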

165
vendor/k8s.io/test-infra/prow/gcsupload/run.go generated vendored Normal file

@@ -0,0 +1,165 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package gcsupload
import (
"context"
"fmt"
"os"
"path"
"path/filepath"
"strings"
"cloud.google.com/go/storage"
"github.com/sirupsen/logrus"
"google.golang.org/api/option"
"k8s.io/test-infra/prow/kube"
"k8s.io/test-infra/prow/pod-utils/downwardapi"
"k8s.io/test-infra/prow/pod-utils/gcs"
)
// Run will upload files to GCS as prescribed by
// the options. Any extra files can be passed as
// a parameter and will have the prefix prepended
// to their destination in GCS, so the caller can
// operate relative to the base of the GCS dir.
func (o Options) Run(spec *downwardapi.JobSpec, extra map[string]gcs.UploadFunc) error {
uploadTargets := o.assembleTargets(spec, extra)
if !o.DryRun {
ctx := context.Background()
gcsClient, err := storage.NewClient(ctx, option.WithCredentialsFile(o.GcsCredentialsFile))
if err != nil {
return fmt.Errorf("could not connect to GCS: %v", err)
}
if err := gcs.Upload(gcsClient.Bucket(o.Bucket), uploadTargets); err != nil {
return fmt.Errorf("failed to upload to GCS: %v", err)
}
} else {
for destination := range uploadTargets {
logrus.WithField("dest", destination).Info("Would upload")
}
}
logrus.Info("Finished upload to GCS")
return nil
}
func (o Options) assembleTargets(spec *downwardapi.JobSpec, extra map[string]gcs.UploadFunc) map[string]gcs.UploadFunc {
jobBasePath, gcsPath, builder := PathsForJob(o.GCSConfiguration, spec, o.SubDir)
uploadTargets := map[string]gcs.UploadFunc{}
// ensure that an alias exists for any
// job we're uploading artifacts for
if alias := gcs.AliasForSpec(spec); alias != "" {
fullBasePath := "gs://" + path.Join(o.Bucket, jobBasePath)
uploadTargets[alias] = gcs.DataUpload(strings.NewReader(fullBasePath))
}
if latestBuilds := gcs.LatestBuildForSpec(spec, builder); len(latestBuilds) > 0 {
for _, latestBuild := range latestBuilds {
uploadTargets[latestBuild] = gcs.DataUpload(strings.NewReader(spec.BuildID))
}
}
for _, item := range o.Items {
info, err := os.Stat(item)
if err != nil {
logrus.Warnf("Encountered error in resolving items to upload for %s: %v", item, err)
continue
}
if info.IsDir() {
gatherArtifacts(item, gcsPath, info.Name(), uploadTargets)
} else {
destination := path.Join(gcsPath, info.Name())
if _, exists := uploadTargets[destination]; exists {
logrus.Warnf("Encountered duplicate upload of %s, skipping...", destination)
continue
}
uploadTargets[destination] = gcs.FileUpload(item)
}
}
for destination, upload := range extra {
uploadTargets[path.Join(gcsPath, destination)] = upload
}
return uploadTargets
}
// PathsForJob determines the following for a job:
// - path in GCS under the bucket where job artifacts will be uploaded for:
// - the job
// - this specific run of the job (if any subdir is present)
// The builder for the job is also returned for use in other path resolution.
func PathsForJob(options *kube.GCSConfiguration, spec *downwardapi.JobSpec, subdir string) (string, string, gcs.RepoPathBuilder) {
builder := builderForStrategy(options.PathStrategy, options.DefaultOrg, options.DefaultRepo)
jobBasePath := gcs.PathForSpec(spec, builder)
if options.PathPrefix != "" {
jobBasePath = path.Join(options.PathPrefix, jobBasePath)
}
var gcsPath string
if subdir == "" {
gcsPath = jobBasePath
} else {
gcsPath = path.Join(jobBasePath, subdir)
}
return jobBasePath, gcsPath, builder
}
func builderForStrategy(strategy, defaultOrg, defaultRepo string) gcs.RepoPathBuilder {
var builder gcs.RepoPathBuilder
switch strategy {
case kube.PathStrategyExplicit:
builder = gcs.NewExplicitRepoPathBuilder()
case kube.PathStrategyLegacy:
builder = gcs.NewLegacyRepoPathBuilder(defaultOrg, defaultRepo)
case kube.PathStrategySingle:
builder = gcs.NewSingleDefaultRepoPathBuilder(defaultOrg, defaultRepo)
}
return builder
}
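The strategy dispatch above returns a path-builder function value. A simplified sketch of the idea (the builders here are illustrative only; the real `gcs.New*RepoPathBuilder` constructors encode more cases, e.g. same-org-different-repo under the legacy strategy):

```go
package main

import (
	"fmt"
	"path"
)

// repoPathBuilder mirrors gcs.RepoPathBuilder: it maps an org/repo
// pair to a path segment under the bucket.
type repoPathBuilder func(org, repo string) string

// builderForStrategy sketches the switch above with simplified builders.
func builderForStrategy(strategy, defaultOrg, defaultRepo string) repoPathBuilder {
	switch strategy {
	case "explicit":
		// Always encode both org and repo.
		return func(org, repo string) string { return path.Join(org, repo) }
	case "legacy":
		// The default org/repo gets the bare job path.
		return func(org, repo string) string {
			if org == defaultOrg && repo == defaultRepo {
				return ""
			}
			return path.Join(org, repo)
		}
	}
	return nil
}

func main() {
	explicit := builderForStrategy("explicit", "kubernetes", "test-infra")
	fmt.Println(explicit("kubernetes", "kubernetes")) // kubernetes/kubernetes

	legacy := builderForStrategy("legacy", "kubernetes", "test-infra")
	fmt.Println(legacy("kubernetes", "test-infra") == "") // true
}
```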
func gatherArtifacts(artifactDir, gcsPath, subDir string, uploadTargets map[string]gcs.UploadFunc) {
logrus.Printf("Gathering artifacts from artifact directory: %s", artifactDir)
filepath.Walk(artifactDir, func(fspath string, info os.FileInfo, err error) error {
if info == nil || info.IsDir() {
return nil
}
// we know path will be below artifactDir, but we can't
// communicate that to the filepath module. We can ignore
// this error as we can be certain it won't occur and best-
// effort upload is OK in any case
if relPath, err := filepath.Rel(artifactDir, fspath); err == nil {
destination := path.Join(gcsPath, subDir, relPath)
if _, exists := uploadTargets[destination]; exists {
logrus.Warnf("Encountered duplicate upload of %s, skipping...", destination)
return nil
}
logrus.Printf("Found %s in artifact directory. Uploading as %s\n", fspath, destination)
uploadTargets[destination] = gcs.FileUpload(fspath)
} else {
logrus.Warnf("Encountered error in relative path calculation for %s under %s: %v", fspath, artifactDir, err)
}
return nil
})
}

26
vendor/k8s.io/test-infra/prow/gitserver/BUILD.bazel generated vendored Normal file

@@ -0,0 +1,26 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = [
"helpers.go",
"types.go",
],
importpath = "k8s.io/test-infra/prow/gitserver",
tags = ["automanaged"],
visibility = ["//visibility:public"],
)
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [":package-srcs"],
tags = ["automanaged"],
visibility = ["//visibility:public"],
)

34
vendor/k8s.io/test-infra/prow/gitserver/helpers.go generated vendored Normal file

@@ -0,0 +1,34 @@
package gitserver
import (
"strings"
)
// HasLabel checks if label is in the label set "issueLabels".
func HasLabel(label string, issueLabels []Label) bool {
for _, l := range issueLabels {
if strings.EqualFold(l.Name, label) {
return true
}
}
return false
}
// ChangedLabels returns the labels that were added or removed on a GitLab PR for the given event action
func ChangedLabels(action PullRequestEventAction, previous, current []Label) []Label {
labels := make([]Label, 0)
if action == PullRequestActionLabeled {
for _, l := range current {
if !HasLabel(l.Name, previous) {
labels = append(labels, l)
}
}
} else if action == PullRequestActionUnlabeled {
for _, l := range previous {
if !HasLabel(l.Name, current) {
labels = append(labels, l)
}
}
}
return labels
}

1024
vendor/k8s.io/test-infra/prow/gitserver/types.go generated vendored Normal file

File diff suppressed because it is too large

32
vendor/k8s.io/test-infra/prow/initupload/BUILD.bazel generated vendored Normal file

@@ -0,0 +1,32 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = [
"doc.go",
"options.go",
"run.go",
],
importpath = "k8s.io/test-infra/prow/initupload",
visibility = ["//visibility:public"],
deps = [
"//vendor/k8s.io/test-infra/prow/gcsupload:go_default_library",
"//vendor/k8s.io/test-infra/prow/pod-utils/clone:go_default_library",
"//vendor/k8s.io/test-infra/prow/pod-utils/downwardapi:go_default_library",
"//vendor/k8s.io/test-infra/prow/pod-utils/gcs:go_default_library",
],
)
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [":package-srcs"],
tags = ["automanaged"],
visibility = ["//visibility:public"],
)

20
vendor/k8s.io/test-infra/prow/initupload/doc.go generated vendored Normal file

@@ -0,0 +1,20 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package initupload determines the output status of clone
// operations and posts that status along with artifacts and
// logs to cloud storage
package initupload

75
vendor/k8s.io/test-infra/prow/initupload/options.go generated vendored Normal file

@@ -0,0 +1,75 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package initupload
import (
"encoding/json"
"flag"
"k8s.io/test-infra/prow/gcsupload"
)
const (
// JSONConfigEnvVar is the environment variable that
// utilities expect to find a full JSON configuration
// in when run.
JSONConfigEnvVar = "INITUPLOAD_OPTIONS"
)
// NewOptions returns an empty Options with no nil fields
func NewOptions() *Options {
return &Options{
Options: gcsupload.NewOptions(),
}
}
// Options holds the GCS upload options along with the clone log location
type Options struct {
*gcsupload.Options
// Log is the log file to which clone records are written.
// If unspecified, no clone records are uploaded.
Log string `json:"log,omitempty"`
}
// ConfigVar exposes the environment variable used
// to store serialized configuration
func (o *Options) ConfigVar() string {
return JSONConfigEnvVar
}
// LoadConfig loads options from serialized config
func (o *Options) LoadConfig(config string) error {
return json.Unmarshal([]byte(config), o)
}
// AddFlags binds flags to options
func (o *Options) AddFlags(flags *flag.FlagSet) {
flags.StringVar(&o.Log, "clone-log", "", "Path to the clone records log")
o.Options.AddFlags(flags)
}
// Complete internalizes command line arguments
func (o *Options) Complete(args []string) {
o.Options.Complete(args)
}
// Encode will encode the set of options in the format
// that is expected for the configuration environment variable
func Encode(options Options) (string, error) {
encoded, err := json.Marshal(options)
return string(encoded), err
}

109
vendor/k8s.io/test-infra/prow/initupload/run.go generated vendored Normal file

@@ -0,0 +1,109 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package initupload
import (
"bytes"
"encoding/json"
"errors"
"fmt"
"io/ioutil"
"time"
"k8s.io/test-infra/prow/pod-utils/clone"
"k8s.io/test-infra/prow/pod-utils/downwardapi"
"k8s.io/test-infra/prow/pod-utils/gcs"
)
func (o Options) Run() error {
spec, err := downwardapi.ResolveSpecFromEnv()
if err != nil {
return fmt.Errorf("could not resolve job spec: %v", err)
}
started := struct {
Timestamp int64 `json:"timestamp"`
}{
Timestamp: time.Now().Unix(),
}
startedData, err := json.Marshal(&started)
if err != nil {
return fmt.Errorf("could not marshal starting data: %v", err)
}
uploadTargets := map[string]gcs.UploadFunc{
"started.json": gcs.DataUpload(bytes.NewReader(startedData)),
}
var failed bool
if o.Log != "" {
if failed, err = processCloneLog(o.Log, uploadTargets); err != nil {
return err
}
}
if err := o.Options.Run(spec, uploadTargets); err != nil {
return fmt.Errorf("failed to upload to GCS: %v", err)
}
if failed {
return errors.New("cloning the appropriate refs failed")
}
return nil
}
func processCloneLog(logfile string, uploadTargets map[string]gcs.UploadFunc) (bool, error) {
var cloneRecords []clone.Record
data, err := ioutil.ReadFile(logfile)
if err != nil {
return true, fmt.Errorf("could not read clone log: %v", err)
}
if err = json.Unmarshal(data, &cloneRecords); err != nil {
return true, fmt.Errorf("could not unmarshal clone records: %v", err)
}
// Do not read from cloneLog directly.
// Instead create multiple readers from cloneLog so it can be uploaded to
// both clone-log.txt and build-log.txt on failure.
cloneLog := bytes.Buffer{}
failed := false
for _, record := range cloneRecords {
cloneLog.WriteString(clone.FormatRecord(record))
failed = failed || record.Failed
}
uploadTargets["clone-log.txt"] = gcs.DataUpload(bytes.NewReader(cloneLog.Bytes()))
uploadTargets["clone-records.json"] = gcs.FileUpload(logfile)
if failed {
uploadTargets["build-log.txt"] = gcs.DataUpload(bytes.NewReader(cloneLog.Bytes()))
finished := struct {
Timestamp int64 `json:"timestamp"`
Passed bool `json:"passed"`
Result string `json:"result"`
}{
Timestamp: time.Now().Unix(),
Passed: false,
Result: "FAILURE",
}
finishedData, err := json.Marshal(&finished)
if err != nil {
return true, fmt.Errorf("could not marshal finishing data: %v", err)
}
uploadTargets["finished.json"] = gcs.DataUpload(bytes.NewReader(finishedData))
}
return failed, nil
}

39
vendor/k8s.io/test-infra/prow/kube/BUILD.bazel generated vendored Normal file

@@ -0,0 +1,39 @@
package(default_visibility = ["//visibility:public"])
load(
"@io_bazel_rules_go//go:def.bzl",
"go_library",
)
go_library(
name = "go_default_library",
srcs = [
"client.go",
"metrics.go",
"prowjob.go",
"types.go",
],
importpath = "k8s.io/test-infra/prow/kube",
deps = [
"//vendor/github.com/ghodss/yaml:go_default_library",
"//vendor/github.com/prometheus/client_golang/prometheus:go_default_library",
"//vendor/github.com/sirupsen/logrus:go_default_library",
"//vendor/k8s.io/api/core/v1:go_default_library",
"//vendor/k8s.io/apimachinery/pkg/apis/meta/v1:go_default_library",
"//vendor/k8s.io/apimachinery/pkg/util/sets:go_default_library",
"//vendor/k8s.io/test-infra/prow/apis/prowjobs/v1:go_default_library",
],
)
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [":package-srcs"],
tags = ["automanaged"],
)

684
vendor/k8s.io/test-infra/prow/kube/client.go generated vendored Normal file

@@ -0,0 +1,684 @@
/*
Copyright 2016 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package kube
import (
"bytes"
"crypto/tls"
"crypto/x509"
"encoding/base64"
"encoding/json"
"errors"
"flag"
"fmt"
"io"
"io/ioutil"
"net/http"
"strconv"
"strings"
"time"
"github.com/ghodss/yaml"
"github.com/sirupsen/logrus"
"k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/util/sets"
)
var InClusterBaseURL string
func init() {
flag.StringVar(&InClusterBaseURL, "in-cluster-base-url", "https://kubernetes.default", "the base url to request k8s apiserver in cluster")
}
const (
// TestContainerName specifies the primary container name.
TestContainerName = "test"
https = "https"
maxRetries = 8
retryDelay = 2 * time.Second
requestTimeout = time.Minute
// EmptySelector selects everything
EmptySelector = ""
// DefaultClusterAlias specifies the default cluster key to schedule jobs.
DefaultClusterAlias = "default"
)
// newClient is used to allow mocking out the behavior of 'NewClient' while testing.
var newClient = NewClient
// Logger can print debug messages
type Logger interface {
Debugf(s string, v ...interface{})
}
// Client interacts with the Kubernetes api-server.
type Client struct {
// If logger is non-nil, log all method calls with it.
logger Logger
baseURL string
deckURL string
client *http.Client
token string
namespace string
fake bool
hiddenReposProvider func() []string
hiddenOnly bool
}
// SetHiddenReposProvider takes a continuation that fetches a list of orgs and repos for
// which PJs should not be returned.
// NOTE: This function is not thread safe and should be called before the client is in use.
func (c *Client) SetHiddenReposProvider(p func() []string, hiddenOnly bool) {
c.hiddenReposProvider = p
c.hiddenOnly = hiddenOnly
}
// Namespace returns a copy of the client pointing at the specified namespace.
func (c *Client) Namespace(ns string) *Client {
nc := *c
nc.namespace = ns
return &nc
}
func (c *Client) log(methodName string, args ...interface{}) {
if c.logger == nil {
return
}
var as []string
for _, arg := range args {
as = append(as, fmt.Sprintf("%v", arg))
}
c.logger.Debugf("%s(%s)", methodName, strings.Join(as, ", "))
}
// ConflictError is http 409.
type ConflictError struct {
e error
}
func (e ConflictError) Error() string {
return e.e.Error()
}
// NewConflictError returns an error with the embedded inner error
func NewConflictError(e error) ConflictError {
return ConflictError{e: e}
}
// UnprocessableEntityError happens when the apiserver returns http 422.
type UnprocessableEntityError struct {
e error
}
func (e UnprocessableEntityError) Error() string {
return e.e.Error()
}
// NewUnprocessableEntityError returns an error with the embedded inner error
func NewUnprocessableEntityError(e error) UnprocessableEntityError {
return UnprocessableEntityError{e: e}
}
// NotFoundError happens when the apiserver returns http 404
type NotFoundError struct {
e error
}
func (e NotFoundError) Error() string {
return e.e.Error()
}
// NewNotFoundError returns an error with the embedded inner error
func NewNotFoundError(e error) NotFoundError {
return NotFoundError{e: e}
}
type request struct {
method string
path string
deckPath string
query map[string]string
requestBody interface{}
}
func (c *Client) request(r *request, ret interface{}) error {
out, err := c.requestRetry(r)
if err != nil {
return err
}
if ret != nil {
if err := json.Unmarshal(out, ret); err != nil {
return err
}
}
return nil
}
func (c *Client) retry(r *request) (*http.Response, error) {
var resp *http.Response
var err error
backoff := retryDelay
for retries := 0; retries < maxRetries; retries++ {
resp, err = c.doRequest(r.method, r.deckPath, r.path, r.query, r.requestBody)
if err == nil {
if resp.StatusCode < 500 {
break
}
resp.Body.Close()
}
time.Sleep(backoff)
backoff *= 2
}
return resp, err
}
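The retry loop above doubles its delay after each attempt and treats any 5xx status as retryable. A self-contained sketch of the same exponential-backoff shape, with the HTTP call abstracted to a closure (names here are illustrative, not part of the client's API):

```go
package main

import (
	"fmt"
	"time"
)

// retryWithBackoff retries op up to maxRetries times, doubling the
// delay after each failed attempt. Any status >= 500, or a transport
// error, counts as a failure, mirroring the loop in retry() above.
func retryWithBackoff(maxRetries int, baseDelay time.Duration, op func() (int, error)) (int, error) {
	var status int
	var err error
	backoff := baseDelay
	for retries := 0; retries < maxRetries; retries++ {
		status, err = op()
		if err == nil && status < 500 {
			return status, nil
		}
		time.Sleep(backoff)
		backoff *= 2
	}
	return status, err
}

func main() {
	calls := 0
	// Fails twice with a 503, then succeeds: three calls in total.
	status, err := retryWithBackoff(8, time.Millisecond, func() (int, error) {
		calls++
		if calls < 3 {
			return 503, nil
		}
		return 200, nil
	})
	fmt.Println(status, err, calls) // 200 <nil> 3
}
```

Note the real `retry()` also closes the response body before each re-attempt, which matters for connection reuse with `net/http`.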
// Retry on transport failures and on 500s. Does not retry on other status codes.
func (c *Client) requestRetryStream(r *request) (io.ReadCloser, error) {
if c.fake && r.deckPath == "" {
return nil, nil
}
resp, err := c.retry(r)
if err != nil {
return nil, err
}
if resp.StatusCode == 409 {
return nil, NewConflictError(fmt.Errorf("body cannot be streamed"))
} else if resp.StatusCode < 200 || resp.StatusCode > 299 {
return nil, fmt.Errorf("response has status \"%s\"", resp.Status)
}
return resp.Body, nil
}
// Retry on transport failures and on 500s. Does not retry on other status codes.
func (c *Client) requestRetry(r *request) ([]byte, error) {
if c.fake && r.deckPath == "" {
return []byte("{}"), nil
}
resp, err := c.retry(r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
rb, err := ioutil.ReadAll(resp.Body)
if err != nil {
return nil, err
}
if resp.StatusCode == 409 {
return nil, NewConflictError(fmt.Errorf("body: %s", string(rb)))
} else if resp.StatusCode == 422 {
return nil, NewUnprocessableEntityError(fmt.Errorf("body: %s", string(rb)))
} else if resp.StatusCode == 404 {
return nil, NewNotFoundError(fmt.Errorf("body: %s", string(rb)))
} else if resp.StatusCode < 200 || resp.StatusCode > 299 {
return nil, fmt.Errorf("response has status \"%s\" and body \"%s\"", resp.Status, string(rb))
}
return rb, nil
}
func (c *Client) doRequest(method, deckPath, urlPath string, query map[string]string, body interface{}) (*http.Response, error) {
url := c.baseURL + urlPath
if c.deckURL != "" && deckPath != "" {
url = c.deckURL + deckPath
}
var buf io.Reader
if body != nil {
b, err := json.Marshal(body)
if err != nil {
return nil, err
}
buf = bytes.NewBuffer(b)
}
req, err := http.NewRequest(method, url, buf)
if err != nil {
return nil, err
}
if c.token != "" {
req.Header.Set("Authorization", "Bearer "+c.token)
}
if method == http.MethodPatch {
req.Header.Set("Content-Type", "application/strategic-merge-patch+json")
} else {
req.Header.Set("Content-Type", "application/json")
}
q := req.URL.Query()
for k, v := range query {
q.Add(k, v)
}
req.URL.RawQuery = q.Encode()
return c.client.Do(req)
}
// NewFakeClient creates a client that doesn't do anything. If you provide a
// deck URL then the client will hit that for the supported calls.
func NewFakeClient(deckURL string) *Client {
return &Client{
namespace: "default",
deckURL: deckURL,
client: &http.Client{},
fake: true,
}
}
// NewClientInCluster creates a Client that works from within a pod.
func NewClientInCluster(namespace string) (*Client, error) {
tokenFile := "/var/run/secrets/kubernetes.io/serviceaccount/token"
token, err := ioutil.ReadFile(tokenFile)
if err != nil {
return nil, err
}
client := &http.Client{Timeout: requestTimeout}
if strings.HasPrefix(InClusterBaseURL, https) {
rootCAFile := "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
certData, err := ioutil.ReadFile(rootCAFile)
if err != nil {
return nil, err
}
cp := x509.NewCertPool()
cp.AppendCertsFromPEM(certData)
client.Transport = &http.Transport{
TLSClientConfig: &tls.Config{
MinVersion: tls.VersionTLS12,
RootCAs: cp,
},
}
}
return &Client{
logger: logrus.WithField("client", "kube"),
baseURL: InClusterBaseURL,
client: client,
token: string(token),
namespace: namespace,
}, nil
}
// Cluster represents the information necessary to talk to a Kubernetes
// master endpoint.
// NOTE: if your cluster runs on GKE you can use the following command to get these credentials:
// gcloud --project <gcp_project> container clusters describe --zone <zone> <cluster_name>
type Cluster struct {
// The IP address of the cluster's master endpoint.
Endpoint string `json:"endpoint"`
// Base64-encoded public cert used by clients to authenticate to the
// cluster endpoint.
ClientCertificate string `json:"clientCertificate"`
// Base64-encoded private key used by clients..
ClientKey string `json:"clientKey"`
// Base64-encoded public certificate that is the root of trust for the
// cluster.
ClusterCACertificate string `json:"clusterCaCertificate"`
}
// NewClientFromFile reads a Cluster object at clusterPath and returns an
// authenticated client using the keys within.
func NewClientFromFile(clusterPath, namespace string) (*Client, error) {
data, err := ioutil.ReadFile(clusterPath)
if err != nil {
return nil, err
}
var c Cluster
if err := yaml.Unmarshal(data, &c); err != nil {
return nil, err
}
return NewClient(&c, namespace)
}
// UnmarshalClusterMap reads a map[string]Cluster in yaml bytes.
func UnmarshalClusterMap(data []byte) (map[string]Cluster, error) {
var raw map[string]Cluster
if err := yaml.Unmarshal(data, &raw); err != nil {
// If we failed to unmarshal the multicluster format try the single Cluster format.
var singleConfig Cluster
if err := yaml.Unmarshal(data, &singleConfig); err != nil {
return nil, err
}
raw = map[string]Cluster{DefaultClusterAlias: singleConfig}
}
return raw, nil
}
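The fallback in `UnmarshalClusterMap` tries the multi-cluster map form first, then retries as a single `Cluster`. A self-contained sketch of the same two-pass decode, using `encoding/json` as a stand-in for the `ghodss/yaml` dependency (that library converts YAML to JSON internally, and JSON is itself valid YAML, so the shape of the logic is the same):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// cluster is a simplified stand-in for kube.Cluster.
type cluster struct {
	Endpoint string `json:"endpoint"`
}

const defaultAlias = "default" // matches DefaultClusterAlias above

// unmarshalClusterMap tries the map-of-clusters form first; on failure
// it falls back to a single cluster stored under the default alias.
func unmarshalClusterMap(data []byte) (map[string]cluster, error) {
	var raw map[string]cluster
	if err := json.Unmarshal(data, &raw); err != nil {
		var single cluster
		if err := json.Unmarshal(data, &single); err != nil {
			return nil, err
		}
		raw = map[string]cluster{defaultAlias: single}
	}
	return raw, nil
}

func main() {
	multi, _ := unmarshalClusterMap([]byte(`{"default": {"endpoint": "https://a"}, "build": {"endpoint": "https://b"}}`))
	fmt.Println(len(multi)) // 2

	single, _ := unmarshalClusterMap([]byte(`{"endpoint": "https://a"}`))
	fmt.Println(single["default"].Endpoint) // https://a
}
```

The single-cluster input fails the map decode (its values are strings, not objects), which is exactly what triggers the fallback path.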
// MarshalClusterMap writes c as yaml bytes.
func MarshalClusterMap(c map[string]Cluster) ([]byte, error) {
return yaml.Marshal(c)
}
// ClientMapFromFile reads the file at clustersPath and attempts to load a map of cluster aliases
// to authenticated clients to the respective clusters.
// The file at clustersPath is expected to be a yaml map from strings to Cluster structs OR it may
// simply be a single Cluster struct which will be assigned the alias $DefaultClusterAlias.
// If the file is an alias map, it must include the alias $DefaultClusterAlias.
func ClientMapFromFile(clustersPath, namespace string) (map[string]*Client, error) {
data, err := ioutil.ReadFile(clustersPath)
if err != nil {
return nil, fmt.Errorf("read error: %v", err)
}
raw, err := UnmarshalClusterMap(data)
if err != nil {
return nil, fmt.Errorf("unmarshal error: %v", err)
}
foundDefault := false
result := map[string]*Client{}
for alias, config := range raw {
client, err := newClient(&config, namespace)
if err != nil {
return nil, fmt.Errorf("failed to load config for build cluster alias %q in file %q: %v", alias, clustersPath, err)
}
result[alias] = client
if alias == DefaultClusterAlias {
foundDefault = true
}
}
if !foundDefault {
return nil, fmt.Errorf("failed to find the required %q alias in build cluster config %q", DefaultClusterAlias, clustersPath)
}
return result, nil
}
// NewClient returns an authenticated Client using the keys in the Cluster.
func NewClient(c *Cluster, namespace string) (*Client, error) {
cc, err := base64.StdEncoding.DecodeString(c.ClientCertificate)
if err != nil {
return nil, err
}
ck, err := base64.StdEncoding.DecodeString(c.ClientKey)
if err != nil {
return nil, err
}
ca, err := base64.StdEncoding.DecodeString(c.ClusterCACertificate)
if err != nil {
return nil, err
}
cert, err := tls.X509KeyPair(cc, ck)
if err != nil {
return nil, err
}
cp := x509.NewCertPool()
cp.AppendCertsFromPEM(ca)
tr := &http.Transport{
TLSClientConfig: &tls.Config{
MinVersion: tls.VersionTLS12,
Certificates: []tls.Certificate{cert},
RootCAs: cp,
},
}
return &Client{
logger: logrus.WithField("client", "kube"),
baseURL: c.Endpoint,
client: &http.Client{Transport: tr, Timeout: requestTimeout},
namespace: namespace,
}, nil
}
// GetPod is analogous to kubectl get pods/NAME --namespace=client.namespace
func (c *Client) GetPod(name string) (Pod, error) {
c.log("GetPod", name)
var retPod Pod
err := c.request(&request{
path: fmt.Sprintf("/api/v1/namespaces/%s/pods/%s", c.namespace, name),
}, &retPod)
return retPod, err
}
// ListPods is analogous to kubectl get pods --selector=SELECTOR --namespace=client.namespace
func (c *Client) ListPods(selector string) ([]Pod, error) {
c.log("ListPods", selector)
var pl struct {
Items []Pod `json:"items"`
}
err := c.request(&request{
path: fmt.Sprintf("/api/v1/namespaces/%s/pods", c.namespace),
query: map[string]string{"labelSelector": selector},
}, &pl)
return pl.Items, err
}
// DeletePod deletes the pod at name in the client's default namespace.
//
// Analogous to kubectl delete pod
func (c *Client) DeletePod(name string) error {
c.log("DeletePod", name)
return c.request(&request{
method: http.MethodDelete,
path: fmt.Sprintf("/api/v1/namespaces/%s/pods/%s", c.namespace, name),
}, nil)
}
// CreateProwJob creates a prowjob in the client's default namespace.
//
// Analogous to kubectl create prowjob
func (c *Client) CreateProwJob(j ProwJob) (ProwJob, error) {
var representation string
if out, err := json.Marshal(j); err == nil {
representation = string(out)
} else {
representation = fmt.Sprintf("%v", j)
}
c.log("CreateProwJob", representation)
var retJob ProwJob
err := c.request(&request{
method: http.MethodPost,
path: fmt.Sprintf("/apis/prow.k8s.io/v1/namespaces/%s/prowjobs", c.namespace),
requestBody: &j,
}, &retJob)
return retJob, err
}
func (c *Client) getHiddenRepos() sets.String {
if c.hiddenReposProvider == nil {
return nil
}
return sets.NewString(c.hiddenReposProvider()...)
}
func shouldHide(pj *ProwJob, hiddenRepos sets.String, showHiddenOnly bool) bool {
if pj.Spec.Refs == nil {
// periodic jobs do not have refs and therefore cannot be
// hidden by the org/repo mechanism
return false
}
shouldHide := hiddenRepos.HasAny(fmt.Sprintf("%s/%s", pj.Spec.Refs.Org, pj.Spec.Refs.Repo), pj.Spec.Refs.Org)
if showHiddenOnly {
return !shouldHide
}
return shouldHide
}
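The hiding rule in shouldHide can be restated with a plain map-backed set. A sketch only: the real code uses sets.String and the ProwJob refs, both assumed away here.

```go
package main

import "fmt"

// hidden mirrors shouldHide: a job is hidden when either its org or its
// org/repo pair is listed; showHiddenOnly inverts the result so an instance
// can serve only the hidden jobs.
func hidden(org, repo string, hiddenRepos map[string]bool, showHiddenOnly bool) bool {
	h := hiddenRepos[org+"/"+repo] || hiddenRepos[org]
	if showHiddenOnly {
		return !h
	}
	return h
}

func main() {
	hide := map[string]bool{"secret-org": true, "kubernetes/private": true}
	fmt.Println(hidden("kubernetes", "private", hide, false))    // true: org/repo entry
	fmt.Println(hidden("secret-org", "anything", hide, false))   // true: org entry
	fmt.Println(hidden("kubernetes", "test-infra", hide, false)) // false
}
```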
// GetProwJob returns the prowjob at name in the client's default namespace.
//
// Analogous to kubectl get prowjob/NAME
func (c *Client) GetProwJob(name string) (ProwJob, error) {
c.log("GetProwJob", name)
var pj ProwJob
err := c.request(&request{
path: fmt.Sprintf("/apis/prow.k8s.io/v1/namespaces/%s/prowjobs/%s", c.namespace, name),
}, &pj)
if err == nil && shouldHide(&pj, c.getHiddenRepos(), c.hiddenOnly) {
pj = ProwJob{}
// Revealing the existence of this prow job is ok because the pj name cannot be used to
// retrieve the pj itself. Furthermore, a timing attack could differentiate true 404s from
// 404s returned when a hidden pj is queried so returning a 404 wouldn't hide the pj's existence.
err = errors.New("403 ProwJob is hidden")
}
return pj, err
}
// ListProwJobs lists prowjobs using the specified labelSelector in the client's default namespace.
//
// Analogous to kubectl get prowjobs --selector=SELECTOR
func (c *Client) ListProwJobs(selector string) ([]ProwJob, error) {
c.log("ListProwJobs", selector)
var jl struct {
Items []ProwJob `json:"items"`
}
err := c.request(&request{
path: fmt.Sprintf("/apis/prow.k8s.io/v1/namespaces/%s/prowjobs", c.namespace),
deckPath: "/prowjobs.js",
query: map[string]string{"labelSelector": selector},
}, &jl)
if err == nil {
hidden := c.getHiddenRepos()
var pjs []ProwJob
for _, pj := range jl.Items {
if !shouldHide(&pj, hidden, c.hiddenOnly) {
pjs = append(pjs, pj)
}
}
jl.Items = pjs
}
return jl.Items, err
}
// DeleteProwJob deletes the prowjob at name in the client's default namespace.
func (c *Client) DeleteProwJob(name string) error {
c.log("DeleteProwJob", name)
return c.request(&request{
method: http.MethodDelete,
path: fmt.Sprintf("/apis/prow.k8s.io/v1/namespaces/%s/prowjobs/%s", c.namespace, name),
}, nil)
}
// ReplaceProwJob will replace name with job in the client's default namespace.
//
// Analogous to kubectl replace prowjobs/NAME
func (c *Client) ReplaceProwJob(name string, job ProwJob) (ProwJob, error) {
c.log("ReplaceProwJob", name, job)
var retJob ProwJob
err := c.request(&request{
method: http.MethodPut,
path: fmt.Sprintf("/apis/prow.k8s.io/v1/namespaces/%s/prowjobs/%s", c.namespace, name),
requestBody: &job,
}, &retJob)
return retJob, err
}
// CreatePod creates a pod in the client's default namespace.
//
// Analogous to kubectl create pod
func (c *Client) CreatePod(p v1.Pod) (Pod, error) {
c.log("CreatePod", p)
var retPod Pod
err := c.request(&request{
method: http.MethodPost,
path: fmt.Sprintf("/api/v1/namespaces/%s/pods", c.namespace),
requestBody: &p,
}, &retPod)
return retPod, err
}
// GetLog returns the log of the default container in the specified pod, in the client's default namespace.
//
// Analogous to kubectl logs pod
func (c *Client) GetLog(pod string) ([]byte, error) {
c.log("GetLog", pod)
return c.requestRetry(&request{
path: fmt.Sprintf("/api/v1/namespaces/%s/pods/%s/log", c.namespace, pod),
})
}
// GetLogTail returns the last n bytes of the log of the specified container in the specified pod,
// in the client's default namespace.
//
// Analogous to kubectl logs pod --tail -1 --limit-bytes n -c container
func (c *Client) GetLogTail(pod, container string, n int64) ([]byte, error) {
c.log("GetLogTail", pod, n)
return c.requestRetry(&request{
path: fmt.Sprintf("/api/v1/namespaces/%s/pods/%s/log", c.namespace, pod),
query: map[string]string{ // Because we want last n bytes, we fetch all lines and then limit to n bytes
"tailLines": "-1",
"container": container,
"limitBytes": strconv.FormatInt(n, 10),
},
})
}
// GetContainerLog returns the log of a container in the specified pod, in the client's default namespace.
//
// Analogous to kubectl logs pod -c container
func (c *Client) GetContainerLog(pod, container string) ([]byte, error) {
c.log("GetContainerLog", pod)
return c.requestRetry(&request{
path: fmt.Sprintf("/api/v1/namespaces/%s/pods/%s/log", c.namespace, pod),
query: map[string]string{"container": container},
})
}
// CreateConfigMap creates a configmap.
//
// Analogous to kubectl create configmap
func (c *Client) CreateConfigMap(content ConfigMap) (ConfigMap, error) {
c.log("CreateConfigMap")
var retConfigMap ConfigMap
err := c.request(&request{
method: http.MethodPost,
path: fmt.Sprintf("/api/v1/namespaces/%s/configmaps", c.namespace),
requestBody: &content,
}, &retConfigMap)
return retConfigMap, err
}
// GetConfigMap gets the configmap identified.
func (c *Client) GetConfigMap(name, namespace string) (ConfigMap, error) {
c.log("GetConfigMap", name)
if namespace == "" {
namespace = c.namespace
}
var retConfigMap ConfigMap
err := c.request(&request{
path: fmt.Sprintf("/api/v1/namespaces/%s/configmaps/%s", namespace, name),
}, &retConfigMap)
return retConfigMap, err
}
// ReplaceConfigMap puts the configmap into name.
//
// Analogous to kubectl replace configmap
//
// If config.Namespace is empty, the client's default namespace is used.
// Returns the content returned by the apiserver
func (c *Client) ReplaceConfigMap(name string, config ConfigMap) (ConfigMap, error) {
c.log("ReplaceConfigMap", name)
namespace := c.namespace
if config.Namespace != "" {
namespace = config.Namespace
}
var retConfigMap ConfigMap
err := c.request(&request{
method: http.MethodPut,
path: fmt.Sprintf("/api/v1/namespaces/%s/configmaps/%s", namespace, name),
requestBody: &config,
}, &retConfigMap)
return retConfigMap, err
}

67
vendor/k8s.io/test-infra/prow/kube/metrics.go generated vendored Normal file

@@ -0,0 +1,67 @@
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package kube
import (
"github.com/prometheus/client_golang/prometheus"
)
var (
prowJobs = prometheus.NewGaugeVec(prometheus.GaugeOpts{
Name: "prowjobs",
Help: "Number of prowjobs in the system",
}, []string{
// name of the job
"job_name",
// type of the prowjob: presubmit, postsubmit, periodic, batch
"type",
// state of the prowjob: triggered, pending, success, failure, aborted, error
"state",
})
)
func init() {
prometheus.MustRegister(prowJobs)
}
// GatherProwJobMetrics gathers prometheus metrics for prowjobs.
func GatherProwJobMetrics(pjs []ProwJob) {
// map of job to job type to state to count
metricMap := make(map[string]map[string]map[string]float64)
for _, pj := range pjs {
if metricMap[pj.Spec.Job] == nil {
metricMap[pj.Spec.Job] = make(map[string]map[string]float64)
}
if metricMap[pj.Spec.Job][string(pj.Spec.Type)] == nil {
metricMap[pj.Spec.Job][string(pj.Spec.Type)] = make(map[string]float64)
}
metricMap[pj.Spec.Job][string(pj.Spec.Type)][string(pj.Status.State)]++
}
// This may be racing with the prometheus server but we need to remove
// stale metrics like triggered or pending jobs that are now complete.
prowJobs.Reset()
for job, jobMap := range metricMap {
for jobType, typeMap := range jobMap {
for state, count := range typeMap {
prowJobs.WithLabelValues(job, jobType, state).Set(count)
}
}
}
}
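The nested-map aggregation in GatherProwJobMetrics can be tested in isolation; a sketch with a simplified `job` struct standing in for ProwJob:

```go
package main

import "fmt"

type job struct{ name, jobType, state string }

// countJobs aggregates jobs into job -> type -> state counts, the same shape
// GatherProwJobMetrics feeds into the prowjobs gauge vector.
func countJobs(jobs []job) map[string]map[string]map[string]float64 {
	m := make(map[string]map[string]map[string]float64)
	for _, j := range jobs {
		if m[j.name] == nil {
			m[j.name] = make(map[string]map[string]float64)
		}
		if m[j.name][j.jobType] == nil {
			m[j.name][j.jobType] = make(map[string]float64)
		}
		m[j.name][j.jobType][j.state]++
	}
	return m
}

func main() {
	m := countJobs([]job{
		{"pull-test", "presubmit", "success"},
		{"pull-test", "presubmit", "success"},
		{"pull-test", "presubmit", "failure"},
	})
	fmt.Println(m["pull-test"]["presubmit"]["success"]) // 2
}
```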

143
vendor/k8s.io/test-infra/prow/kube/prowjob.go generated vendored Normal file

@@ -0,0 +1,143 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package kube
import (
"k8s.io/test-infra/prow/apis/prowjobs/v1"
)
// The following are aliases to aid in the refactoring while we move
// API definitions under prow/apis/
// ProwJobType specifies how the job is triggered.
type ProwJobType = v1.ProwJobType
// ProwJobState specifies whether the job is running
type ProwJobState = v1.ProwJobState
// ProwJobAgent specifies the controller (such as plank or jenkins-agent) that runs the job.
type ProwJobAgent = v1.ProwJobAgent
// Various job types.
const (
// PresubmitJob means it runs on unmerged PRs.
PresubmitJob = v1.PresubmitJob
// PostsubmitJob means it runs on each new commit.
PostsubmitJob = v1.PostsubmitJob
// PeriodicJob means it runs on a time-basis, unrelated to git changes.
PeriodicJob = v1.PeriodicJob
// BatchJob tests multiple unmerged PRs at the same time.
BatchJob = v1.BatchJob
)
// Various job states.
const (
// TriggeredState means the job has been created but not yet scheduled.
TriggeredState = v1.TriggeredState
// PendingState means the job is scheduled but not yet running.
PendingState = v1.PendingState
// SuccessState means the job completed without error (exit 0)
SuccessState = v1.SuccessState
// FailureState means the job completed with errors (exit non-zero)
FailureState = v1.FailureState
// AbortedState means prow killed the job early (new commit pushed, perhaps).
AbortedState = v1.AbortedState
// ErrorState means the job could not schedule (bad config, perhaps).
ErrorState = v1.ErrorState
)
const (
// KubernetesAgent means prow will create a pod to run this job.
KubernetesAgent = v1.KubernetesAgent
// JenkinsAgent means prow will schedule the job on jenkins.
JenkinsAgent = v1.JenkinsAgent
)
const (
// CreatedByProw is added on pods created by prow. We cannot
// really use owner references because pods may reside on a
// different namespace from the one the parent prowjobs
// live in, and that would cause the k8s garbage collector to
// identify those prow pods as orphans and delete them
// instantly.
// TODO: Namespace this label.
CreatedByProw = "created-by-prow"
// ProwJobTypeLabel is added in pods created by prow and
// carries the job type (presubmit, postsubmit, periodic, batch)
// that the pod is running.
ProwJobTypeLabel = "prow.k8s.io/type"
// ProwJobIDLabel is added in pods created by prow and
// carries the ID of the ProwJob that the pod is fulfilling.
// We also name pods after the ProwJob that spawned them but
// this allows for multiple resources to be linked to one
// ProwJob.
ProwJobIDLabel = "prow.k8s.io/id"
// ProwJobAnnotation is added in pods created by prow and
// carries the name of the job that the pod is running. Since
// job names can be arbitrarily long, this is added as
// an annotation instead of a label.
ProwJobAnnotation = "prow.k8s.io/job"
// OrgLabel is added in resources created by prow and
// carries the org associated with the job, eg kubernetes-sigs.
OrgLabel = "prow.k8s.io/refs.org"
// RepoLabel is added in resources created by prow and
// carries the repo associated with the job, eg test-infra
RepoLabel = "prow.k8s.io/refs.repo"
// PullLabel is added in resources created by prow and
// carries the PR number associated with the job, eg 321.
PullLabel = "prow.k8s.io/refs.pull"
)
// ProwJob contains the spec as well as runtime metadata.
type ProwJob = v1.ProwJob
// ProwJobSpec configures the details of the prow job.
//
// Details include the podspec, code to clone, the cluster it runs
// any child jobs, concurrency limitations, etc.
type ProwJobSpec = v1.ProwJobSpec
// DecorationConfig specifies how to augment pods.
//
// This is primarily used to provide automatic integration with gubernator
// and testgrid.
type DecorationConfig = v1.DecorationConfig
// UtilityImages holds pull specs for the utility images
// to be used for a job
type UtilityImages = v1.UtilityImages
// PathStrategy specifies minutiae about how to construct the URL.
// Usually consumed by gubernator/testgrid.
const (
PathStrategyLegacy = v1.PathStrategyLegacy
PathStrategySingle = v1.PathStrategySingle
PathStrategyExplicit = v1.PathStrategyExplicit
)
// GCSConfiguration holds options for pushing logs and
// artifacts to GCS from a job.
type GCSConfiguration = v1.GCSConfiguration
// ProwJobStatus provides runtime metadata, such as when it finished, whether it is running, etc.
type ProwJobStatus = v1.ProwJobStatus
// Pull describes a pull request at a particular point in time.
type Pull = v1.Pull
// Refs describes how the repo was constructed.
type Refs = v1.Refs

86
vendor/k8s.io/test-infra/prow/kube/types.go generated vendored Normal file

@@ -0,0 +1,86 @@
/*
Copyright 2016 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package kube
import (
"k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// TODO: Drop all of these, please!
// ObjectMeta is a kubernetes v1 ObjectMeta
type ObjectMeta = metav1.ObjectMeta
// Pod is a kubernetes v1 Pod
type Pod = v1.Pod
// PodTemplateSpec is a kubernetes v1 PodTemplateSpec
type PodTemplateSpec = v1.PodTemplateSpec
// PodSpec is a kubernetes v1 PodSpec
type PodSpec = v1.PodSpec
// PodStatus is a kubernetes v1 PodStatus
type PodStatus = v1.PodStatus
// Phase constants
const (
PodPending = v1.PodPending
PodRunning = v1.PodRunning
PodSucceeded = v1.PodSucceeded
PodFailed = v1.PodFailed
PodUnknown = v1.PodUnknown
)
// PodStatus constants
const (
Evicted = "Evicted"
)
// Container is a kubernetes v1 Container
type Container = v1.Container
// Port is a kubernetes v1 ContainerPort
type Port = v1.ContainerPort
// EnvVar is a kubernetes v1 EnvVar
type EnvVar = v1.EnvVar
// Volume is a kubernetes v1 Volume
type Volume = v1.Volume
// VolumeMount is a kubernetes v1 VolumeMount
type VolumeMount = v1.VolumeMount
// VolumeSource is a kubernetes v1 VolumeSource
type VolumeSource = v1.VolumeSource
// EmptyDirVolumeSource is a kubernetes v1 EmptyDirVolumeSource
type EmptyDirVolumeSource = v1.EmptyDirVolumeSource
// SecretSource is a kubernetes v1 SecretVolumeSource
type SecretSource = v1.SecretVolumeSource
// ConfigMapSource is a kubernetes v1 ConfigMapVolumeSource
type ConfigMapSource = v1.ConfigMapVolumeSource
// ConfigMap is a kubernetes v1 ConfigMap
type ConfigMap = v1.ConfigMap
// Secret is a kubernetes v1 secret
type Secret = v1.Secret


@@ -0,0 +1,30 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = [
"clone.go",
"format.go",
"types.go",
],
importpath = "k8s.io/test-infra/prow/pod-utils/clone",
visibility = ["//visibility:public"],
deps = [
"//vendor/github.com/sirupsen/logrus:go_default_library",
"//vendor/k8s.io/test-infra/prow/kube:go_default_library",
],
)
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [":package-srcs"],
tags = ["automanaged"],
visibility = ["//visibility:public"],
)

146
vendor/k8s.io/test-infra/prow/pod-utils/clone/clone.go generated vendored Normal file
View File

@@ -0,0 +1,146 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package clone
import (
"bytes"
"fmt"
"os/exec"
"strings"
"github.com/sirupsen/logrus"
"k8s.io/test-infra/prow/kube"
)
// Run clones the refs under the prescribed directory and optionally
// configures the git username and email in the repository as well.
func Run(refs kube.Refs, dir, gitUserName, gitUserEmail, cookiePath string, env []string) Record {
logrus.WithFields(logrus.Fields{"refs": refs}).Info("Cloning refs")
record := Record{Refs: refs}
for _, command := range commandsForRefs(refs, dir, gitUserName, gitUserEmail, cookiePath, env) {
formattedCommand, output, err := command.run()
logrus.WithFields(logrus.Fields{"command": formattedCommand, "output": output, "error": err}).Info("Ran command")
message := ""
if err != nil {
message = err.Error()
record.Failed = true
}
record.Commands = append(record.Commands, Command{Command: formattedCommand, Output: output, Error: message})
if err != nil {
break
}
}
return record
}
// PathForRefs determines the full path to where
// refs should be cloned
func PathForRefs(baseDir string, refs kube.Refs) string {
var clonePath string
if refs.PathAlias != "" {
clonePath = refs.PathAlias
} else {
clonePath = fmt.Sprintf("github.com/%s/%s", refs.Org, refs.Repo)
}
return fmt.Sprintf("%s/src/%s", baseDir, clonePath)
}
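Under GOPATH conventions this yields paths like the following; a standalone restatement of the PathForRefs logic with the refs fields flattened into parameters:

```go
package main

import "fmt"

// pathForRefs mirrors PathForRefs: a clone lands under baseDir/src/<path>,
// where <path> is the path alias when set and github.com/org/repo otherwise.
func pathForRefs(baseDir, pathAlias, org, repo string) string {
	clonePath := pathAlias
	if clonePath == "" {
		clonePath = fmt.Sprintf("github.com/%s/%s", org, repo)
	}
	return fmt.Sprintf("%s/src/%s", baseDir, clonePath)
}

func main() {
	fmt.Println(pathForRefs("/home/prow/go", "", "kubernetes", "test-infra"))
	// /home/prow/go/src/github.com/kubernetes/test-infra
	fmt.Println(pathForRefs("/home/prow/go", "k8s.io/test-infra", "kubernetes", "test-infra"))
	// /home/prow/go/src/k8s.io/test-infra
}
```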
func commandsForRefs(refs kube.Refs, dir, gitUserName, gitUserEmail, cookiePath string, env []string) []cloneCommand {
repositoryURI := fmt.Sprintf("https://github.com/%s/%s.git", refs.Org, refs.Repo)
if refs.CloneURI != "" {
repositoryURI = refs.CloneURI
}
cloneDir := PathForRefs(dir, refs)
commands := []cloneCommand{{"/", env, "mkdir", []string{"-p", cloneDir}}}
gitCommand := func(args ...string) cloneCommand {
return cloneCommand{dir: cloneDir, env: env, command: "git", args: args}
}
commands = append(commands, gitCommand("init"))
if gitUserName != "" {
commands = append(commands, gitCommand("config", "user.name", gitUserName))
}
if gitUserEmail != "" {
commands = append(commands, gitCommand("config", "user.email", gitUserEmail))
}
if cookiePath != "" {
commands = append(commands, gitCommand("config", "http.cookiefile", cookiePath))
}
commands = append(commands, gitCommand("fetch", repositoryURI, "--tags", "--prune"))
commands = append(commands, gitCommand("fetch", repositoryURI, refs.BaseRef))
// unless the user specifically asks us not to, init submodules
if !refs.SkipSubmodules {
commands = append(commands, gitCommand("submodule", "update", "--init", "--recursive"))
}
var target string
if refs.BaseSHA != "" {
target = refs.BaseSHA
} else {
target = "FETCH_HEAD"
}
// we need to be "on" the target branch after the sync
// so we need to set the branch to point to the base ref,
// but we cannot update a branch we are on, so in case we
// are on the branch we are syncing, we check out the SHA
// first and reset the branch second, then check out the
// branch we just reset to be in the correct final state
commands = append(commands, gitCommand("checkout", target))
commands = append(commands, gitCommand("branch", "--force", refs.BaseRef, target))
commands = append(commands, gitCommand("checkout", refs.BaseRef))
for _, prRef := range refs.Pulls {
ref := fmt.Sprintf("pull/%d/head", prRef.Number)
if prRef.Ref != "" {
ref = prRef.Ref
}
commands = append(commands, gitCommand("fetch", repositoryURI, ref))
var prCheckout string
if prRef.SHA != "" {
prCheckout = prRef.SHA
} else {
prCheckout = "FETCH_HEAD"
}
commands = append(commands, gitCommand("merge", prCheckout))
}
return commands
}
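The checkout/branch/checkout dance described in the comment above can be isolated as a three-command sequence. A sketch of just that step, with the commands as plain argument slices:

```go
package main

import "fmt"

// baseCheckout reproduces the sequence commandsForRefs emits to end up on the
// base branch at the fetched target without trying to move a checked-out branch.
func baseCheckout(baseRef, target string) [][]string {
	return [][]string{
		{"git", "checkout", target},                   // detach onto the target commit
		{"git", "branch", "--force", baseRef, target}, // the branch can now be moved
		{"git", "checkout", baseRef},                  // land on the refreshed branch
	}
}

func main() {
	for _, c := range baseCheckout("master", "FETCH_HEAD") {
		fmt.Println(c)
	}
}
```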
type cloneCommand struct {
dir string
env []string
command string
args []string
}
func (c *cloneCommand) run() (string, string, error) {
output := bytes.Buffer{}
cmd := exec.Command(c.command, c.args...)
cmd.Dir = c.dir
cmd.Env = append(cmd.Env, c.env...)
cmd.Stdout = &output
cmd.Stderr = &output
err := cmd.Run()
return strings.Join(append([]string{c.command}, c.args...), " "), output.String(), err
}
func (c *cloneCommand) String() string {
return fmt.Sprintf("PWD=%s %s %s %s", c.dir, strings.Join(c.env, " "), c.command, strings.Join(c.args, " "))
}


@@ -0,0 +1,55 @@
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package clone
import (
"bytes"
"fmt"
)
// FormatRecord describes the record in a human-readable
// manner for inclusion into build logs
func FormatRecord(record Record) string {
output := bytes.Buffer{}
if record.Failed {
fmt.Fprintln(&output, "# FAILED!")
}
fmt.Fprintf(&output, "# Cloning %s/%s at %s", record.Refs.Org, record.Refs.Repo, record.Refs.BaseRef)
if record.Refs.BaseSHA != "" {
fmt.Fprintf(&output, "(%s)", record.Refs.BaseSHA)
}
output.WriteString("\n")
if len(record.Refs.Pulls) > 0 {
output.WriteString("# Checking out pulls:\n")
for _, pull := range record.Refs.Pulls {
fmt.Fprintf(&output, "#\t%d", pull.Number)
if pull.SHA != "" {
fmt.Fprintf(&output, "(%s)", pull.SHA)
}
fmt.Fprint(&output, "\n")
}
}
for _, command := range record.Commands {
fmt.Fprintf(&output, "$ %s\n", command.Command)
fmt.Fprint(&output, command.Output)
if command.Error != "" {
fmt.Fprintf(&output, "# Error: %s\n", command.Error)
}
}
return output.String()
}

38
vendor/k8s.io/test-infra/prow/pod-utils/clone/types.go generated vendored Normal file

@@ -0,0 +1,38 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package clone
import (
"k8s.io/test-infra/prow/kube"
)
// Record is a trace of what the desired
// git state was, what steps we took to get there,
// and whether or not we were successful.
type Record struct {
Refs kube.Refs `json:"refs"`
Commands []Command `json:"commands"`
Failed bool `json:"failed"`
}
// Command is a trace of a command executed
// while achieving the desired git state.
type Command struct {
Command string `json:"command"`
Output string `json:"output,omitempty"`
Error string `json:"error,omitempty"`
}


@@ -0,0 +1,40 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = [
"doc.go",
"podspec.go",
],
importpath = "k8s.io/test-infra/prow/pod-utils/decorate",
visibility = ["//visibility:public"],
deps = [
"//vendor/github.com/sirupsen/logrus:go_default_library",
"//vendor/k8s.io/api/core/v1:go_default_library",
"//vendor/k8s.io/apimachinery/pkg/apis/meta/v1:go_default_library",
"//vendor/k8s.io/apimachinery/pkg/util/validation:go_default_library",
"//vendor/k8s.io/test-infra/prow/clonerefs:go_default_library",
"//vendor/k8s.io/test-infra/prow/entrypoint:go_default_library",
"//vendor/k8s.io/test-infra/prow/gcsupload:go_default_library",
"//vendor/k8s.io/test-infra/prow/initupload:go_default_library",
"//vendor/k8s.io/test-infra/prow/kube:go_default_library",
"//vendor/k8s.io/test-infra/prow/pod-utils/clone:go_default_library",
"//vendor/k8s.io/test-infra/prow/pod-utils/downwardapi:go_default_library",
"//vendor/k8s.io/test-infra/prow/pod-utils/wrapper:go_default_library",
"//vendor/k8s.io/test-infra/prow/sidecar:go_default_library",
],
)
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [":package-srcs"],
tags = ["automanaged"],
visibility = ["//visibility:public"],
)


@@ -0,0 +1,19 @@
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package decorate is a library for adding to a user-provided PodSpec
// in order to create a full Pod that will fulfill a test job
package decorate


@@ -0,0 +1,512 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package decorate
import (
"fmt"
"path"
"path/filepath"
"sort"
"strconv"
"strings"
"github.com/sirupsen/logrus"
"k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/validation"
"k8s.io/test-infra/prow/clonerefs"
"k8s.io/test-infra/prow/entrypoint"
"k8s.io/test-infra/prow/gcsupload"
"k8s.io/test-infra/prow/initupload"
"k8s.io/test-infra/prow/kube"
"k8s.io/test-infra/prow/pod-utils/clone"
"k8s.io/test-infra/prow/pod-utils/downwardapi"
"k8s.io/test-infra/prow/pod-utils/wrapper"
"k8s.io/test-infra/prow/sidecar"
)
const (
logMountName = "logs"
logMountPath = "/logs"
artifactsEnv = "ARTIFACTS"
artifactsPath = logMountPath + "/artifacts"
codeMountName = "code"
codeMountPath = "/home/prow/go"
gopathEnv = "GOPATH"
toolsMountName = "tools"
toolsMountPath = "/tools"
gcsCredentialsMountName = "gcs-credentials"
gcsCredentialsMountPath = "/secrets/gcs"
)
// Labels returns a string slice with label consts from kube.
func Labels() []string {
return []string{kube.ProwJobTypeLabel, kube.CreatedByProw, kube.ProwJobIDLabel}
}
// VolumeMounts returns a string slice with *MountName consts in it.
func VolumeMounts() []string {
return []string{logMountName, codeMountName, toolsMountName, gcsCredentialsMountName}
}
// VolumeMountPaths returns a string slice with *MountPath consts in it.
func VolumeMountPaths() []string {
return []string{logMountPath, codeMountPath, toolsMountPath, gcsCredentialsMountPath}
}
// LabelsAndAnnotationsForSpec returns a minimal set of labels to add to prowjobs or its owned resources.
//
// User-provided extraLabels and extraAnnotations values will take precedence over auto-provided values.
func LabelsAndAnnotationsForSpec(spec kube.ProwJobSpec, extraLabels, extraAnnotations map[string]string) (map[string]string, map[string]string) {
jobNameForLabel := spec.Job
if len(jobNameForLabel) > validation.LabelValueMaxLength {
// TODO(fejta): consider truncating middle rather than end.
jobNameForLabel = strings.TrimRight(spec.Job[:validation.LabelValueMaxLength], "-")
logrus.Warnf("Cannot use full job name '%s' for '%s' label, will be truncated to '%s'",
spec.Job,
kube.ProwJobAnnotation,
jobNameForLabel,
)
}
labels := map[string]string{
kube.CreatedByProw: "true",
kube.ProwJobTypeLabel: string(spec.Type),
kube.ProwJobAnnotation: jobNameForLabel,
}
if spec.Type != kube.PeriodicJob && spec.Refs != nil {
labels[kube.OrgLabel] = spec.Refs.Org
labels[kube.RepoLabel] = spec.Refs.Repo
if len(spec.Refs.Pulls) > 0 {
labels[kube.PullLabel] = strconv.Itoa(spec.Refs.Pulls[0].Number)
}
}
for k, v := range extraLabels {
labels[k] = v
}
// let's validate labels
for key, value := range labels {
if errs := validation.IsValidLabelValue(value); len(errs) > 0 {
// if the value looks like a path, try its basename, since '/' is invalid in label values
base := filepath.Base(value)
if errs := validation.IsValidLabelValue(base); len(errs) == 0 {
labels[key] = base
continue
}
logrus.Warnf("Removing invalid label: key - %s, value - %s, error: %s", key, value, errs)
delete(labels, key)
}
}
annotations := map[string]string{
kube.ProwJobAnnotation: spec.Job,
}
for k, v := range extraAnnotations {
annotations[k] = v
}
return labels, annotations
}
// LabelsAndAnnotationsForJob returns a standard set of labels to add to pod/build/etc resources.
func LabelsAndAnnotationsForJob(pj kube.ProwJob) (map[string]string, map[string]string) {
var extraLabels map[string]string
if extraLabels = pj.ObjectMeta.Labels; extraLabels == nil {
extraLabels = map[string]string{}
}
extraLabels[kube.ProwJobIDLabel] = pj.ObjectMeta.Name
return LabelsAndAnnotationsForSpec(pj.Spec, extraLabels, nil)
}
// ProwJobToPod converts a ProwJob to a Pod that will run the tests.
func ProwJobToPod(pj kube.ProwJob, buildID string) (*v1.Pod, error) {
if pj.Spec.PodSpec == nil {
return nil, fmt.Errorf("prowjob %q lacks a pod spec", pj.Name)
}
rawEnv, err := downwardapi.EnvForSpec(downwardapi.NewJobSpec(pj.Spec, buildID, pj.Name))
if err != nil {
return nil, err
}
spec := pj.Spec.PodSpec.DeepCopy()
spec.RestartPolicy = "Never"
spec.Containers[0].Name = kube.TestContainerName
// we treat this as false if unset, while kubernetes treats it as true if
// unset because it was added in v1.6
if spec.AutomountServiceAccountToken == nil {
myFalse := false
spec.AutomountServiceAccountToken = &myFalse
}
if pj.Spec.DecorationConfig == nil {
spec.Containers[0].Env = append(spec.Containers[0].Env, kubeEnv(rawEnv)...)
} else {
if err := decorate(spec, &pj, rawEnv); err != nil {
return nil, fmt.Errorf("error decorating podspec: %v", err)
}
}
podLabels, annotations := LabelsAndAnnotationsForJob(pj)
return &v1.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: pj.ObjectMeta.Name,
Labels: podLabels,
Annotations: annotations,
},
Spec: *spec,
}, nil
}
const cloneLogPath = "clone.json"
// CloneLogPath returns the path to the clone log file in the volume mount.
func CloneLogPath(logMount kube.VolumeMount) string {
return filepath.Join(logMount.MountPath, cloneLogPath)
}
// Exposed for testing
const (
cloneRefsName = "clonerefs"
cloneRefsCommand = "/clonerefs"
)
// cloneEnv encodes clonerefs Options into json and puts it into an environment variable
func cloneEnv(opt clonerefs.Options) ([]v1.EnvVar, error) {
// TODO(fejta): use flags
cloneConfigEnv, err := clonerefs.Encode(opt)
if err != nil {
return nil, err
}
return kubeEnv(map[string]string{clonerefs.JSONConfigEnvVar: cloneConfigEnv}), nil
}
// sshVolume converts a secret holding ssh keys into the corresponding volume and mount.
//
// This is used by CloneRefs to attach the mount to the clonerefs container.
func sshVolume(secret string) (kube.Volume, kube.VolumeMount) {
var sshKeyMode int32 = 0400 // this is octal, so symbolic ref is `u+r`
name := strings.Join([]string{"ssh-keys", secret}, "-")
mountPath := path.Join("/secrets/ssh", secret)
v := kube.Volume{
Name: name,
VolumeSource: kube.VolumeSource{
Secret: &kube.SecretSource{
SecretName: secret,
DefaultMode: &sshKeyMode,
},
},
}
vm := kube.VolumeMount{
Name: name,
MountPath: mountPath,
ReadOnly: true,
}
return v, vm
}
// cookiefileVolumes converts a secret holding cookies into the corresponding volume and mount.
//
// Secret can be of the form secret-name/base-name or just secret-name.
// Here secret-name refers to the kubernetes secret volume to mount, and base-name refers to the key in the secret
// where the cookies are stored. The secret-name pattern is equivalent to secret-name/secret-name.
//
// This is used by CloneRefs to attach the mount to the clonerefs container.
// The returned string value is the path to the cookiefile for use with --cookiefile.
func cookiefileVolume(secret string) (kube.Volume, kube.VolumeMount, string) {
// Separate secret-name/key-in-secret
parts := strings.SplitN(secret, "/", 2)
cookieSecret := parts[0]
var base string
if len(parts) == 1 {
base = parts[0] // Assume key-in-secret == secret-name
} else {
base = parts[1]
}
var cookiefileMode int32 = 0400 // u+r
vol := kube.Volume{
Name: "cookiefile",
VolumeSource: kube.VolumeSource{
Secret: &kube.SecretSource{
SecretName: cookieSecret,
DefaultMode: &cookiefileMode,
},
},
}
mount := kube.VolumeMount{
Name: vol.Name,
MountPath: "/secrets/cookiefile", // append base to flag
ReadOnly: true,
}
return vol, mount, path.Join(mount.MountPath, base)
}
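The secret-name/base-name convention that cookiefileVolume parses can be illustrated standalone. The helper name `splitCookieSecret` is ours; only the `strings.SplitN` parsing and the `/secrets/cookiefile` mount path come from the code above.

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// splitCookieSecret mirrors the parsing in cookiefileVolume:
// "secret-name/base-name" selects a key inside the secret, while a bare
// "secret-name" assumes the key in the secret is named after the secret.
func splitCookieSecret(secret string) (secretName, cookiePath string) {
	parts := strings.SplitN(secret, "/", 2)
	secretName = parts[0]
	base := parts[0] // assume key-in-secret == secret-name
	if len(parts) == 2 {
		base = parts[1]
	}
	return secretName, path.Join("/secrets/cookiefile", base)
}

func main() {
	fmt.Println(splitCookieSecret("oauth-cookie"))             // prints oauth-cookie /secrets/cookiefile/oauth-cookie
	fmt.Println(splitCookieSecret("oauth-cookie/cookies.txt")) // prints oauth-cookie /secrets/cookiefile/cookies.txt
}
```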
// CloneRefs constructs the container and volumes necessary to clone the refs requested by the ProwJob.
//
// The container checks out repositories specified by the ProwJob Refs to `codeMount`.
// A log of what it checked out is written to `clone.json` in `logMount`.
//
// The container may need to mount SSH keys and/or cookiefiles in order to access private refs.
// CloneRefs returns a list of volumes containing these secrets required by the container.
func CloneRefs(pj kube.ProwJob, codeMount, logMount kube.VolumeMount) (*kube.Container, []kube.Refs, []kube.Volume, error) {
if pj.Spec.DecorationConfig == nil {
return nil, nil, nil, nil
}
if skip := pj.Spec.DecorationConfig.SkipCloning; skip != nil && *skip {
return nil, nil, nil, nil
}
var cloneVolumes []kube.Volume
var refs []kube.Refs // Do not return []*kube.Refs which we do not own
if pj.Spec.Refs != nil {
refs = append(refs, *pj.Spec.Refs)
}
for _, r := range pj.Spec.ExtraRefs {
refs = append(refs, r)
}
if len(refs) == 0 { // nothing to clone
return nil, nil, nil, nil
}
if codeMount.Name == "" || codeMount.MountPath == "" {
return nil, nil, nil, fmt.Errorf("codeMount must set Name and MountPath")
}
if logMount.Name == "" || logMount.MountPath == "" {
return nil, nil, nil, fmt.Errorf("logMount must set Name and MountPath")
}
var cloneMounts []kube.VolumeMount
var sshKeyPaths []string
for _, secret := range pj.Spec.DecorationConfig.SSHKeySecrets {
volume, mount := sshVolume(secret)
cloneMounts = append(cloneMounts, mount)
sshKeyPaths = append(sshKeyPaths, mount.MountPath)
cloneVolumes = append(cloneVolumes, volume)
}
var cloneArgs []string
var cookiefilePath string
if cp := pj.Spec.DecorationConfig.CookiefileSecret; cp != "" {
v, vm, vp := cookiefileVolume(cp)
cloneMounts = append(cloneMounts, vm)
cloneVolumes = append(cloneVolumes, v)
cookiefilePath = vp
cloneArgs = append(cloneArgs, "--cookiefile="+cookiefilePath)
}
env, err := cloneEnv(clonerefs.Options{
CookiePath: cookiefilePath,
GitRefs: refs,
GitUserEmail: clonerefs.DefaultGitUserEmail,
GitUserName: clonerefs.DefaultGitUserName,
HostFingerprints: pj.Spec.DecorationConfig.SSHHostFingerprints,
KeyFiles: sshKeyPaths,
Log: CloneLogPath(logMount),
SrcRoot: codeMount.MountPath,
})
if err != nil {
return nil, nil, nil, fmt.Errorf("clone env: %v", err)
}
container := kube.Container{
Name: cloneRefsName,
Image: pj.Spec.DecorationConfig.UtilityImages.CloneRefs,
Command: []string{cloneRefsCommand},
Args: cloneArgs,
Env: env,
VolumeMounts: append([]kube.VolumeMount{logMount, codeMount}, cloneMounts...),
}
return &container, refs, cloneVolumes, nil
}
func decorate(spec *kube.PodSpec, pj *kube.ProwJob, rawEnv map[string]string) error {
rawEnv[artifactsEnv] = artifactsPath
rawEnv[gopathEnv] = codeMountPath
logMount := kube.VolumeMount{
Name: logMountName,
MountPath: logMountPath,
}
logVolume := kube.Volume{
Name: logMountName,
VolumeSource: kube.VolumeSource{
EmptyDir: &kube.EmptyDirVolumeSource{},
},
}
codeMount := kube.VolumeMount{
Name: codeMountName,
MountPath: codeMountPath,
}
codeVolume := kube.Volume{
Name: codeMountName,
VolumeSource: kube.VolumeSource{
EmptyDir: &kube.EmptyDirVolumeSource{},
},
}
toolsMount := kube.VolumeMount{
Name: toolsMountName,
MountPath: toolsMountPath,
}
toolsVolume := kube.Volume{
Name: toolsMountName,
VolumeSource: kube.VolumeSource{
EmptyDir: &kube.EmptyDirVolumeSource{},
},
}
gcsCredentialsMount := kube.VolumeMount{
Name: gcsCredentialsMountName,
MountPath: gcsCredentialsMountPath,
}
gcsCredentialsVolume := kube.Volume{
Name: gcsCredentialsMountName,
VolumeSource: kube.VolumeSource{
Secret: &kube.SecretSource{
SecretName: pj.Spec.DecorationConfig.GCSCredentialsSecret,
},
},
}
cloner, refs, cloneVolumes, err := CloneRefs(*pj, codeMount, logMount)
if err != nil {
return fmt.Errorf("could not create clonerefs container: %v", err)
}
if cloner != nil {
spec.InitContainers = append([]kube.Container{*cloner}, spec.InitContainers...)
}
gcsOptions := gcsupload.Options{
// TODO: pass the artifact dir here too once we figure that out
GCSConfiguration: pj.Spec.DecorationConfig.GCSConfiguration,
GcsCredentialsFile: fmt.Sprintf("%s/service-account.json", gcsCredentialsMountPath),
DryRun: false,
}
initUploadOptions := initupload.Options{
Options: &gcsOptions,
}
if cloner != nil {
initUploadOptions.Log = CloneLogPath(logMount)
}
// TODO(fejta): use flags
initUploadConfigEnv, err := initupload.Encode(initUploadOptions)
if err != nil {
return fmt.Errorf("could not encode initupload configuration as JSON: %v", err)
}
entrypointLocation := fmt.Sprintf("%s/entrypoint", toolsMountPath)
spec.InitContainers = append(spec.InitContainers,
kube.Container{
Name: "initupload",
Image: pj.Spec.DecorationConfig.UtilityImages.InitUpload,
Command: []string{"/initupload"},
Env: kubeEnv(map[string]string{
initupload.JSONConfigEnvVar: initUploadConfigEnv,
downwardapi.JobSpecEnv: rawEnv[downwardapi.JobSpecEnv], // TODO: shouldn't need this?
}),
VolumeMounts: []kube.VolumeMount{logMount, gcsCredentialsMount},
},
kube.Container{
Name: "place-tools",
Image: pj.Spec.DecorationConfig.UtilityImages.Entrypoint,
Command: []string{"/bin/cp"},
Args: []string{"/entrypoint", entrypointLocation},
VolumeMounts: []kube.VolumeMount{toolsMount},
},
)
wrapperOptions := wrapper.Options{
ProcessLog: fmt.Sprintf("%s/process-log.txt", logMountPath),
MarkerFile: fmt.Sprintf("%s/marker-file.txt", logMountPath),
}
// TODO(fejta): use flags
entrypointConfigEnv, err := entrypoint.Encode(entrypoint.Options{
Args: append(spec.Containers[0].Command, spec.Containers[0].Args...),
Options: &wrapperOptions,
Timeout: pj.Spec.DecorationConfig.Timeout,
GracePeriod: pj.Spec.DecorationConfig.GracePeriod,
ArtifactDir: artifactsPath,
})
if err != nil {
return fmt.Errorf("could not encode entrypoint configuration as JSON: %v", err)
}
allEnv := rawEnv
allEnv[entrypoint.JSONConfigEnvVar] = entrypointConfigEnv
spec.Containers[0].Command = []string{entrypointLocation}
spec.Containers[0].Args = []string{}
spec.Containers[0].Env = append(spec.Containers[0].Env, kubeEnv(allEnv)...)
spec.Containers[0].VolumeMounts = append(spec.Containers[0].VolumeMounts, logMount, toolsMount)
gcsOptions.Items = append(gcsOptions.Items, artifactsPath)
// TODO(fejta): use flags
sidecarConfigEnv, err := sidecar.Encode(sidecar.Options{
GcsOptions: &gcsOptions,
WrapperOptions: &wrapperOptions,
})
if err != nil {
return fmt.Errorf("could not encode sidecar configuration as JSON: %v", err)
}
spec.Containers = append(spec.Containers, kube.Container{
Name: "sidecar",
Image: pj.Spec.DecorationConfig.UtilityImages.Sidecar,
Command: []string{"/sidecar"},
Env: kubeEnv(map[string]string{
sidecar.JSONConfigEnvVar: sidecarConfigEnv,
downwardapi.JobSpecEnv: rawEnv[downwardapi.JobSpecEnv], // TODO: shouldn't need this?
}),
VolumeMounts: []kube.VolumeMount{logMount, gcsCredentialsMount},
})
spec.Volumes = append(spec.Volumes, logVolume, toolsVolume, gcsCredentialsVolume)
if len(refs) > 0 {
spec.Containers[0].WorkingDir = clone.PathForRefs(codeMount.MountPath, refs[0])
spec.Containers[0].VolumeMounts = append(spec.Containers[0].VolumeMounts, codeMount)
spec.Volumes = append(spec.Volumes, append(cloneVolumes, codeVolume)...)
}
return nil
}
// kubeEnv transforms a mapping of environment variables
// into their serialized form for a PodSpec, sorting by
// the name of the env vars
func kubeEnv(environment map[string]string) []v1.EnvVar {
var keys []string
for key := range environment {
keys = append(keys, key)
}
sort.Strings(keys)
var kubeEnvironment []v1.EnvVar
for _, key := range keys {
kubeEnvironment = append(kubeEnvironment, v1.EnvVar{
Name: key,
Value: environment[key],
})
}
return kubeEnvironment
}
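kubeEnv sorts keys because Go map iteration order is randomized; without the sort, the generated pod spec would differ between runs. A minimal standalone sketch of the same pattern, with `envVar` as a local stand-in for `v1.EnvVar`:

```go
package main

import (
	"fmt"
	"sort"
)

// envVar is a local stand-in for v1.EnvVar.
type envVar struct {
	Name, Value string
}

// sortedEnv mirrors kubeEnv: sort the keys first so the resulting slice
// is stable across runs, which keeps generated pod specs diffable.
func sortedEnv(environment map[string]string) []envVar {
	keys := make([]string, 0, len(environment))
	for key := range environment {
		keys = append(keys, key)
	}
	sort.Strings(keys)
	out := make([]envVar, 0, len(keys))
	for _, key := range keys {
		out = append(out, envVar{Name: key, Value: environment[key]})
	}
	return out
}

func main() {
	for _, e := range sortedEnv(map[string]string{"JOB_NAME": "test", "BUILD_ID": "42"}) {
		fmt.Printf("%s=%s\n", e.Name, e.Value) // prints BUILD_ID=42, then JOB_NAME=test
	}
}
```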

vendor/k8s.io/test-infra/prow/pod-utils/downwardapi/BUILD.bazel
load("@io_bazel_rules_go//go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = [
"doc.go",
"jobspec.go",
],
importpath = "k8s.io/test-infra/prow/pod-utils/downwardapi",
visibility = ["//visibility:public"],
deps = ["//vendor/k8s.io/test-infra/prow/kube:go_default_library"],
)
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [":package-srcs"],
tags = ["automanaged"],
visibility = ["//visibility:public"],
)

vendor/k8s.io/test-infra/prow/pod-utils/downwardapi/doc.go
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package downwardapi declares the types used to expose
// job configuration to the jobs themselves
package downwardapi

vendor/k8s.io/test-infra/prow/pod-utils/downwardapi/jobspec.go
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package downwardapi
import (
"encoding/json"
"fmt"
"os"
"strconv"
"k8s.io/test-infra/prow/kube"
)
// JobSpec is the full downward API that we expose to
// jobs that realize a ProwJob. We will provide this
// data to jobs with environment variables in two ways:
// - the full spec, in serialized JSON in one variable
// - individual fields of the spec in their own variables
type JobSpec struct {
Type kube.ProwJobType `json:"type,omitempty"`
Job string `json:"job,omitempty"`
BuildID string `json:"buildid,omitempty"`
ProwJobID string `json:"prowjobid,omitempty"`
Refs kube.Refs `json:"refs,omitempty"`
// we need to keep track of the agent until we
// migrate everyone away from using the $BUILD_NUMBER
// environment variable
agent kube.ProwJobAgent
}
// NewJobSpec converts a kube.ProwJobSpec invocation into a JobSpec
func NewJobSpec(spec kube.ProwJobSpec, buildID, prowJobID string) JobSpec {
refs := kube.Refs{}
if spec.Refs != nil {
refs = *spec.Refs
}
return JobSpec{
Type: spec.Type,
Job: spec.Job,
BuildID: buildID,
ProwJobID: prowJobID,
Refs: refs,
agent: spec.Agent,
}
}
// ResolveSpecFromEnv will determine the JobSpec for the current job
// by parsing the Prow environment variable contents
func ResolveSpecFromEnv() (*JobSpec, error) {
specEnv, ok := os.LookupEnv(JobSpecEnv)
if !ok {
return nil, fmt.Errorf("$%s unset", JobSpecEnv)
}
spec := &JobSpec{}
if err := json.Unmarshal([]byte(specEnv), spec); err != nil {
return nil, fmt.Errorf("malformed $%s: %v", JobSpecEnv, err)
}
return spec, nil
}
const (
// JobSpecEnv is the name that contains JobSpec marshaled into a string.
JobSpecEnv = "JOB_SPEC"
jobNameEnv = "JOB_NAME"
jobTypeEnv = "JOB_TYPE"
prowJobIDEnv = "PROW_JOB_ID"
buildIDEnv = "BUILD_ID"
prowBuildIDEnv = "BUILD_NUMBER" // Deprecated, will be removed in the future.
repoOwnerEnv = "REPO_OWNER"
repoNameEnv = "REPO_NAME"
pullBaseRefEnv = "PULL_BASE_REF"
pullBaseShaEnv = "PULL_BASE_SHA"
pullRefsEnv = "PULL_REFS"
pullNumberEnv = "PULL_NUMBER"
pullPullShaEnv = "PULL_PULL_SHA"
)
// EnvForSpec returns a mapping of environment variables
// to their values that should be available for a job spec
func EnvForSpec(spec JobSpec) (map[string]string, error) {
env := map[string]string{
jobNameEnv: spec.Job,
buildIDEnv: spec.BuildID,
prowJobIDEnv: spec.ProwJobID,
jobTypeEnv: string(spec.Type),
}
// for backwards compatibility, we provide the build ID
// in both $BUILD_ID and $BUILD_NUMBER for Prow agents
// and in both $buildId and $BUILD_NUMBER for Jenkins
if spec.agent == kube.KubernetesAgent {
env[prowBuildIDEnv] = spec.BuildID
}
raw, err := json.Marshal(spec)
if err != nil {
return env, fmt.Errorf("failed to marshal job spec: %v", err)
}
env[JobSpecEnv] = string(raw)
if spec.Type == kube.PeriodicJob {
return env, nil
}
env[repoOwnerEnv] = spec.Refs.Org
env[repoNameEnv] = spec.Refs.Repo
env[pullBaseRefEnv] = spec.Refs.BaseRef
env[pullBaseShaEnv] = spec.Refs.BaseSHA
env[pullRefsEnv] = spec.Refs.String()
if spec.Type == kube.PostsubmitJob || spec.Type == kube.BatchJob {
return env, nil
}
env[pullNumberEnv] = strconv.Itoa(spec.Refs.Pulls[0].Number)
env[pullPullShaEnv] = spec.Refs.Pulls[0].SHA
return env, nil
}
// EnvForType returns the slice of environment variables to export for jobType
func EnvForType(jobType kube.ProwJobType) []string {
baseEnv := []string{jobNameEnv, JobSpecEnv, jobTypeEnv, prowJobIDEnv, buildIDEnv, prowBuildIDEnv}
refsEnv := []string{repoOwnerEnv, repoNameEnv, pullBaseRefEnv, pullBaseShaEnv, pullRefsEnv}
pullEnv := []string{pullNumberEnv, pullPullShaEnv}
switch jobType {
case kube.PeriodicJob:
return baseEnv
case kube.PostsubmitJob, kube.BatchJob:
return append(baseEnv, refsEnv...)
case kube.PresubmitJob:
return append(append(baseEnv, refsEnv...), pullEnv...)
default:
return []string{}
}
}

vendor/k8s.io/test-infra/prow/pod-utils/gcs/BUILD.bazel
load("@io_bazel_rules_go//go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = [
"doc.go",
"target.go",
"upload.go",
],
importpath = "k8s.io/test-infra/prow/pod-utils/gcs",
visibility = ["//visibility:public"],
deps = [
"//vendor/cloud.google.com/go/storage:go_default_library",
"//vendor/github.com/sirupsen/logrus:go_default_library",
"//vendor/k8s.io/test-infra/prow/errorutil:go_default_library",
"//vendor/k8s.io/test-infra/prow/kube:go_default_library",
"//vendor/k8s.io/test-infra/prow/pod-utils/downwardapi:go_default_library",
],
)
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [":package-srcs"],
tags = ["automanaged"],
visibility = ["//visibility:public"],
)

vendor/k8s.io/test-infra/prow/pod-utils/gcs/doc.go
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package gcs handles uploading files and raw data
// to GCS and determines where in the GCS
// bucket data should go given a specific
// job specification
package gcs

vendor/k8s.io/test-infra/prow/pod-utils/gcs/target.go
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package gcs
import (
"fmt"
"path"
"strconv"
"strings"
"github.com/sirupsen/logrus"
"k8s.io/test-infra/prow/kube"
"k8s.io/test-infra/prow/pod-utils/downwardapi"
)
// PathForSpec determines the GCS path prefix for files uploaded
// for a specific job spec
func PathForSpec(spec *downwardapi.JobSpec, pathSegment RepoPathBuilder) string {
switch spec.Type {
case kube.PeriodicJob, kube.PostsubmitJob:
return path.Join("logs", spec.Job, spec.BuildID)
case kube.PresubmitJob:
return path.Join("pr-logs", "pull", pathSegment(spec.Refs.Org, spec.Refs.Repo), strconv.Itoa(spec.Refs.Pulls[0].Number), spec.Job, spec.BuildID)
case kube.BatchJob:
return path.Join("pr-logs", "pull", "batch", spec.Job, spec.BuildID)
default:
logrus.Fatalf("unknown job spec type: %v", spec.Type)
}
return ""
}
// AliasForSpec determines the GCS path aliases for a job spec
func AliasForSpec(spec *downwardapi.JobSpec) string {
switch spec.Type {
case kube.PeriodicJob, kube.PostsubmitJob, kube.BatchJob:
return ""
case kube.PresubmitJob:
return path.Join("pr-logs", "directory", spec.Job, fmt.Sprintf("%s.txt", spec.BuildID))
default:
logrus.Fatalf("unknown job spec type: %v", spec.Type)
}
return ""
}
// LatestBuildForSpec determines the GCS path for storing the latest
// build id for a job. pathSegment can be nil so callers of this
// helper are not required to choose a path strategy but can still
// get back a result.
func LatestBuildForSpec(spec *downwardapi.JobSpec, pathSegment RepoPathBuilder) []string {
var latestBuilds []string
switch spec.Type {
case kube.PeriodicJob, kube.PostsubmitJob:
latestBuilds = append(latestBuilds, path.Join("logs", spec.Job, "latest-build.txt"))
case kube.PresubmitJob:
latestBuilds = append(latestBuilds, path.Join("pr-logs", "directory", spec.Job, "latest-build.txt"))
// Gubernator expects presubmit tests to upload latest-build.txt
// under the PR-specific directory too.
if pathSegment != nil {
latestBuilds = append(latestBuilds, path.Join("pr-logs", "pull", pathSegment(spec.Refs.Org, spec.Refs.Repo), strconv.Itoa(spec.Refs.Pulls[0].Number), spec.Job, "latest-build.txt"))
}
case kube.BatchJob:
latestBuilds = append(latestBuilds, path.Join("pr-logs", "directory", spec.Job, "latest-build.txt"))
default:
logrus.Errorf("unknown job spec type: %v", spec.Type)
return nil
}
return latestBuilds
}
// RootForSpec determines the root GCS path for storing artifacts about
// the provided job.
func RootForSpec(spec *downwardapi.JobSpec) string {
switch spec.Type {
case kube.PeriodicJob, kube.PostsubmitJob:
return path.Join("logs", spec.Job)
case kube.PresubmitJob, kube.BatchJob:
return path.Join("pr-logs", "directory", spec.Job)
default:
logrus.Errorf("unknown job spec type: %v", spec.Type)
}
return ""
}
// RepoPathBuilder builds GCS path segments and embeds defaulting behavior
type RepoPathBuilder func(org, repo string) string
// NewLegacyRepoPathBuilder returns a builder that handles the legacy path
// encoding where a path will only contain an org or repo if they are non-default
func NewLegacyRepoPathBuilder(defaultOrg, defaultRepo string) RepoPathBuilder {
return func(org, repo string) string {
if org == defaultOrg {
if repo == defaultRepo {
return ""
}
return repo
}
// handle gerrit repo
repo = strings.Replace(repo, "/", "_", -1)
return fmt.Sprintf("%s_%s", org, repo)
}
}
// NewSingleDefaultRepoPathBuilder returns a builder that handles the legacy path
// encoding where a path will contain org and repo for all but one default repo
func NewSingleDefaultRepoPathBuilder(defaultOrg, defaultRepo string) RepoPathBuilder {
return func(org, repo string) string {
if org == defaultOrg && repo == defaultRepo {
return ""
}
// handle gerrit repo
repo = strings.Replace(repo, "/", "_", -1)
return fmt.Sprintf("%s_%s", org, repo)
}
}
// NewExplicitRepoPathBuilder returns a builder that handles the path encoding
// where a path will always have an explicit "org_repo" path segment
func NewExplicitRepoPathBuilder() RepoPathBuilder {
return func(org, repo string) string {
// handle gerrit repo
repo = strings.Replace(repo, "/", "_", -1)
return fmt.Sprintf("%s_%s", org, repo)
}
}
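The builder strategies above differ only in when they collapse the org/repo pair to a shorter segment. A standalone sketch of the legacy variant (the function names and the org/repo values are illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// repoPathBuilder mirrors the RepoPathBuilder signature.
type repoPathBuilder func(org, repo string) string

// legacyBuilder mirrors NewLegacyRepoPathBuilder: the default org and
// repo collapse to an empty segment, a non-default repo in the default
// org keeps only its name, and everything else becomes "org_repo".
func legacyBuilder(defaultOrg, defaultRepo string) repoPathBuilder {
	return func(org, repo string) string {
		if org == defaultOrg {
			if repo == defaultRepo {
				return ""
			}
			return repo
		}
		// Gerrit repo names may contain slashes, which cannot appear
		// in a single path segment, so replace them with underscores.
		return fmt.Sprintf("%s_%s", org, strings.Replace(repo, "/", "_", -1))
	}
}

func main() {
	build := legacyBuilder("kubernetes", "kubernetes")
	fmt.Printf("%q\n", build("kubernetes", "kubernetes")) // ""
	fmt.Printf("%q\n", build("kubernetes", "test-infra")) // "test-infra"
	fmt.Printf("%q\n", build("openstack", "gerrit/repo")) // "openstack_gerrit_repo"
}
```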

vendor/k8s.io/test-infra/prow/pod-utils/gcs/upload.go
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package gcs
import (
"context"
"fmt"
"io"
"os"
"sync"
"cloud.google.com/go/storage"
"github.com/sirupsen/logrus"
"k8s.io/test-infra/prow/errorutil"
)
// UploadFunc knows how to upload into an object
type UploadFunc func(obj *storage.ObjectHandle) error
// Upload uploads all of the data in the
// uploadTargets map to GCS in parallel. The map is
// keyed on GCS path under the bucket
func Upload(bucket *storage.BucketHandle, uploadTargets map[string]UploadFunc) error {
errCh := make(chan error, len(uploadTargets))
group := &sync.WaitGroup{}
group.Add(len(uploadTargets))
for dest, upload := range uploadTargets {
obj := bucket.Object(dest)
logrus.WithField("dest", dest).Info("Queued for upload")
go func(f UploadFunc, obj *storage.ObjectHandle, name string) {
defer group.Done()
if err := f(obj); err != nil {
errCh <- err
}
logrus.WithField("dest", name).Info("Finished upload")
}(upload, obj, dest)
}
group.Wait()
close(errCh)
if len(errCh) != 0 {
var uploadErrors []error
for err := range errCh {
uploadErrors = append(uploadErrors, err)
}
return fmt.Errorf("encountered errors during upload: %v", uploadErrors)
}
return nil
}
// FileUpload returns an UploadFunc which copies all
// data from the file on disk to the GCS object
func FileUpload(file string) UploadFunc {
return func(obj *storage.ObjectHandle) error {
reader, err := os.Open(file)
if err != nil {
return err
}
uploadErr := DataUpload(reader)(obj)
closeErr := reader.Close()
return errorutil.NewAggregate(uploadErr, closeErr)
}
}
// DataUpload returns an UploadFunc which copies all
// data from src reader into GCS
func DataUpload(src io.Reader) UploadFunc {
return func(obj *storage.ObjectHandle) error {
writer := obj.NewWriter(context.Background())
_, copyErr := io.Copy(writer, src)
closeErr := writer.Close()
return errorutil.NewAggregate(copyErr, closeErr)
}
}

vendor/k8s.io/test-infra/prow/pod-utils/wrapper/BUILD.bazel
load("@io_bazel_rules_go//go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = [
"doc.go",
"options.go",
],
importpath = "k8s.io/test-infra/prow/pod-utils/wrapper",
visibility = ["//visibility:public"],
)
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [":package-srcs"],
tags = ["automanaged"],
visibility = ["//visibility:public"],
)

vendor/k8s.io/test-infra/prow/pod-utils/wrapper/doc.go
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package wrapper contains utilities for the processes that
// wrap the test execution in a ProwJob test container
package wrapper

vendor/k8s.io/test-infra/prow/pod-utils/wrapper/options.go
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package wrapper
import (
"errors"
"flag"
)
// Options exposes the configuration options
// used when wrapping test execution
type Options struct {
// ProcessLog will contain std{out,err} from the
// wrapped test process
ProcessLog string `json:"process_log"`
// MarkerFile will be written with the exit code
// of the test process or an internal error code
// if the entrypoint fails.
MarkerFile string `json:"marker_file"`
}
// AddFlags adds flags to the FlagSet that populate
// the wrapper options struct provided.
func (o *Options) AddFlags(fs *flag.FlagSet) {
fs.StringVar(&o.ProcessLog, "process-log", "", "path to the log where stdout and stderr are streamed for the process we execute")
fs.StringVar(&o.MarkerFile, "marker-file", "", "file we write the return code of the process we execute once it has finished running")
}
// Validate ensures that the set of options are
// self-consistent and valid
func (o *Options) Validate() error {
if o.ProcessLog == "" {
return errors.New("no log file specified with --process-log")
}
if o.MarkerFile == "" {
return errors.New("no marker file specified with --marker-file")
}
return nil
}

vendor/k8s.io/test-infra/prow/sidecar/BUILD.bazel
load("@io_bazel_rules_go//go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = [
"doc.go",
"options.go",
"run.go",
],
importpath = "k8s.io/test-infra/prow/sidecar",
visibility = ["//visibility:public"],
deps = [
"//vendor/github.com/fsnotify/fsnotify:go_default_library",
"//vendor/github.com/sirupsen/logrus:go_default_library",
"//vendor/k8s.io/test-infra/prow/gcsupload:go_default_library",
"//vendor/k8s.io/test-infra/prow/pod-utils/downwardapi:go_default_library",
"//vendor/k8s.io/test-infra/prow/pod-utils/gcs:go_default_library",
"//vendor/k8s.io/test-infra/prow/pod-utils/wrapper:go_default_library",
],
)
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [":package-srcs"],
tags = ["automanaged"],
visibility = ["//visibility:public"],
)

vendor/k8s.io/test-infra/prow/sidecar/doc.go
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package sidecar is a library that knows how to report on the
// output of a process that writes its output and exit code to
// disk
package sidecar

vendor/k8s.io/test-infra/prow/sidecar/options.go
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package sidecar
import (
"encoding/json"
"flag"
"k8s.io/test-infra/prow/gcsupload"
"k8s.io/test-infra/prow/pod-utils/wrapper"
)
// NewOptions returns an empty Options with no nil fields
func NewOptions() *Options {
return &Options{
GcsOptions: gcsupload.NewOptions(),
WrapperOptions: &wrapper.Options{},
}
}
// Options exposes the configuration necessary
// for defining the process being watched and
// where in GCS an upload will land.
type Options struct {
GcsOptions *gcsupload.Options `json:"gcs_options"`
WrapperOptions *wrapper.Options `json:"wrapper_options"`
}
// Validate ensures that the set of options are
// self-consistent and valid
func (o *Options) Validate() error {
if err := o.WrapperOptions.Validate(); err != nil {
return err
}
return o.GcsOptions.Validate()
}
const (
// JSONConfigEnvVar is the environment variable that
// utilities expect to find a full JSON configuration
// in when run.
JSONConfigEnvVar = "SIDECAR_OPTIONS"
)
// ConfigVar exposes the environment variable used
// to store serialized configuration
func (o *Options) ConfigVar() string {
return JSONConfigEnvVar
}
// LoadConfig loads options from serialized config
func (o *Options) LoadConfig(config string) error {
return json.Unmarshal([]byte(config), o)
}
// AddFlags binds flags to options
func (o *Options) AddFlags(flags *flag.FlagSet) {
o.GcsOptions.AddFlags(flags)
o.WrapperOptions.AddFlags(flags)
}
// Complete internalizes command line arguments
func (o *Options) Complete(args []string) {
o.GcsOptions.Complete(args)
}
// Encode will encode the set of options in the format that
// is expected for the configuration environment variable
func Encode(options Options) (string, error) {
encoded, err := json.Marshal(options)
return string(encoded), err
}

162
vendor/k8s.io/test-infra/prow/sidecar/run.go generated vendored Normal file

@@ -0,0 +1,162 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package sidecar
import (
"bytes"
"encoding/json"
"fmt"
"io/ioutil"
"os"
"os/signal"
"path/filepath"
"strconv"
"strings"
"sync"
"syscall"
"time"
"github.com/fsnotify/fsnotify"
"github.com/sirupsen/logrus"
"k8s.io/test-infra/prow/pod-utils/downwardapi"
"k8s.io/test-infra/prow/pod-utils/gcs"
)
// Run will watch for the process being wrapped to exit
// and then post the status of that process and any artifacts
// to cloud storage.
func (o Options) Run() error {
spec, err := downwardapi.ResolveSpecFromEnv()
if err != nil {
return fmt.Errorf("could not resolve job spec: %v", err)
}
// If we are being asked to terminate by the kubelet but we have
// NOT seen the test process exit cleanly, we need to start
// uploading artifacts to GCS immediately. If we notice the process
// exit while doing this best-effort upload, we can race with the
// second upload but we can tolerate this as we'd rather get SOME
// data into GCS than attempt to cancel these uploads and get none.
interrupt := make(chan os.Signal, 1) // buffered, so signal.Notify cannot drop the signal
signal.Notify(interrupt, os.Interrupt, syscall.SIGTERM)
go func() {
s := <-interrupt
logrus.Errorf("Received an interrupt: %s", s)
o.doUpload(spec, false, true)
}()
// Only start watching file events if the file doesn't exist
// If the file exists, it means the main process already completed.
if _, err := os.Stat(o.WrapperOptions.MarkerFile); os.IsNotExist(err) {
watcher, err := fsnotify.NewWatcher()
if err != nil {
return fmt.Errorf("could not begin fsnotify watch: %v", err)
}
defer watcher.Close()
ticker := time.NewTicker(30 * time.Second)
group := sync.WaitGroup{}
group.Add(1)
go func() {
defer group.Done()
for {
select {
case event := <-watcher.Events:
if event.Name == o.WrapperOptions.MarkerFile && event.Op&fsnotify.Create == fsnotify.Create {
return
}
case err := <-watcher.Errors:
logrus.WithError(err).Info("Encountered an error during fsnotify watch")
case <-ticker.C:
if _, err := os.Stat(o.WrapperOptions.MarkerFile); err == nil {
return
}
}
}
}()
dir := filepath.Dir(o.WrapperOptions.MarkerFile)
if err := watcher.Add(dir); err != nil {
return fmt.Errorf("could not add to fsnotify watch: %v", err)
}
group.Wait()
ticker.Stop()
}
// If we are being asked to terminate by the kubelet but we have
// seen the test process exit cleanly, we need a chance to upload
// artifacts to GCS. The only valid way for this program to exit
after a SIGINT or SIGTERM in this situation is to finish
// uploading, so we ignore the signals.
signal.Ignore(os.Interrupt, syscall.SIGTERM)
passed := false
aborted := false
returnCodeData, err := ioutil.ReadFile(o.WrapperOptions.MarkerFile)
if err != nil {
logrus.WithError(err).Warn("Could not read return code from marker file")
} else {
returnCode, err := strconv.Atoi(strings.TrimSpace(string(returnCodeData)))
if err != nil {
logrus.WithError(err).Warn("Failed to parse process return code")
}
passed = returnCode == 0 && err == nil
aborted = returnCode == 130
}
return o.doUpload(spec, passed, aborted)
}
func (o Options) doUpload(spec *downwardapi.JobSpec, passed, aborted bool) error {
uploadTargets := map[string]gcs.UploadFunc{
"build-log.txt": gcs.FileUpload(o.WrapperOptions.ProcessLog),
}
var result string
switch {
case passed:
result = "SUCCESS"
case aborted:
result = "ABORTED"
default:
result = "FAILURE"
}
finished := struct {
Timestamp int64 `json:"timestamp"`
Passed bool `json:"passed"`
Result string `json:"result"`
}{
Timestamp: time.Now().Unix(),
Passed: passed,
Result: result,
}
finishedData, err := json.Marshal(&finished)
if err != nil {
logrus.WithError(err).Warn("Could not marshal finishing data")
} else {
uploadTargets["finished.json"] = gcs.DataUpload(bytes.NewBuffer(finishedData))
}
if err := o.GcsOptions.Run(spec, uploadTargets); err != nil {
return fmt.Errorf("failed to upload to GCS: %v", err)
}
return nil
}
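The tail of `Run` above turns the wrapper's marker file into a verdict: exit code 0 means `SUCCESS`, 130 (128 + SIGINT, the shell convention) means `ABORTED`, and anything else, including an unparseable marker, falls through to `FAILURE`. A stdlib-only sketch of that mapping (`interpretMarker` is an illustrative helper, not part of the package, and it skips the warning logs the real code emits):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// interpretMarker maps the marker-file contents to (passed, aborted, result),
// mirroring the logic at the end of Options.Run and in doUpload.
func interpretMarker(contents string) (passed, aborted bool, result string) {
	returnCode, err := strconv.Atoi(strings.TrimSpace(contents))
	passed = err == nil && returnCode == 0
	aborted = err == nil && returnCode == 130 // 128+SIGINT: process was interrupted
	switch {
	case passed:
		result = "SUCCESS"
	case aborted:
		result = "ABORTED"
	default:
		result = "FAILURE"
	}
	return passed, aborted, result
}

func main() {
	for _, marker := range []string{"0\n", "1\n", "130\n", "garbage"} {
		passed, aborted, result := interpretMarker(marker)
		fmt.Printf("marker=%q passed=%v aborted=%v result=%s\n",
			strings.TrimSpace(marker), passed, aborted, result)
	}
}
```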

30
vendor/k8s.io/test-infra/testgrid/util/gcs/BUILD.bazel generated vendored Normal file

@@ -0,0 +1,30 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = ["gcs.go"],
importpath = "k8s.io/test-infra/testgrid/util/gcs",
visibility = [
"//testgrid:__subpackages__",
"//vendor/k8s.io/test-infra/prow/gcsupload:__subpackages__",
"//vendor/k8s.io/test-infra/prow/spyglass:__subpackages__",
],
deps = [
"//vendor/cloud.google.com/go/storage:go_default_library",
"//vendor/google.golang.org/api/option:go_default_library",
],
)
filegroup(
name = "package-srcs",
srcs = glob(["**"]),
tags = ["automanaged"],
visibility = ["//visibility:private"],
)
filegroup(
name = "all-srcs",
srcs = [":package-srcs"],
tags = ["automanaged"],
visibility = ["//visibility:public"],
)

133
vendor/k8s.io/test-infra/testgrid/util/gcs/gcs.go generated vendored Normal file

@@ -0,0 +1,133 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package gcs
import (
"context"
"errors"
"fmt"
"hash/crc32"
"log"
"net/url"
"strings"
"cloud.google.com/go/storage"
"google.golang.org/api/option"
)
// ClientWithCreds returns a storage client, optionally authenticated with the specified .json creds
func ClientWithCreds(ctx context.Context, creds ...string) (*storage.Client, error) {
var options []option.ClientOption
switch l := len(creds); l {
case 0: // Do nothing
case 1:
options = append(options, option.WithCredentialsFile(creds[0]))
default:
return nil, fmt.Errorf("%d creds files unsupported (at most 1)", l)
}
return storage.NewClient(ctx, options...)
}
// Path parses gs://bucket/obj urls
type Path struct {
url url.URL
}
// String returns the gs://bucket/obj url
func (g Path) String() string {
return g.url.String()
}
// Set updates the value from a gs://bucket/obj string, validating the url.
func (g *Path) Set(v string) error {
u, err := url.Parse(v)
if err != nil {
return fmt.Errorf("invalid gs:// url %s: %v", v, err)
}
return g.SetURL(u)
}
// SetURL updates value to the passed in gs://bucket/obj url
func (g *Path) SetURL(u *url.URL) error {
switch {
case u == nil:
return errors.New("nil url")
case u.Scheme != "gs":
return fmt.Errorf("must use a gs:// url: %s", u)
case strings.Contains(u.Host, ":"):
return fmt.Errorf("gs://bucket may not contain a port: %s", u)
case u.Opaque != "":
return fmt.Errorf("url must start with gs://: %s", u)
case u.User != nil:
return fmt.Errorf("gs://bucket may not contain a user@ prefix: %s", u)
case u.RawQuery != "":
return fmt.Errorf("gs:// url may not contain a ?query suffix: %s", u)
case u.Fragment != "":
return fmt.Errorf("gs:// url may not contain a #fragment suffix: %s", u)
}
g.url = *u
return nil
}
// ResolveReference resolves ref against the current path and returns the result
func (g Path) ResolveReference(ref *url.URL) (*Path, error) {
var newP Path
if err := newP.SetURL(g.url.ResolveReference(ref)); err != nil {
return nil, err
}
return &newP, nil
}
// Bucket returns bucket in gs://bucket/obj
func (g Path) Bucket() string {
return g.url.Host
}
// Object returns path/to/something in gs://bucket/path/to/something
func (g Path) Object() string {
if g.url.Path == "" {
return g.url.Path
}
return g.url.Path[1:]
}
func calcCRC(buf []byte) uint32 {
return crc32.Checksum(buf, crc32.MakeTable(crc32.Castagnoli))
}
// Upload writes bytes to the specified Path
func Upload(ctx context.Context, client *storage.Client, path Path, buf []byte) error {
crc := calcCRC(buf)
w := client.Bucket(path.Bucket()).Object(path.Object()).NewWriter(ctx)
w.SendCRC32C = true
// Send our CRC32 to ensure Google received the same data we sent.
// See checksum example at:
// https://godoc.org/cloud.google.com/go/storage#Writer.Write
w.ObjectAttrs.CRC32C = crc
w.ProgressFunc = func(bytes int64) {
log.Printf("Uploading %s: %d/%d...", path, bytes, len(buf))
}
if n, err := w.Write(buf); err != nil {
return fmt.Errorf("writing %s failed: %v", path, err)
} else if n != len(buf) {
return fmt.Errorf("partial write of %s: %d < %d", path, n, len(buf))
}
if err := w.Close(); err != nil {
return fmt.Errorf("closing %s failed: %v", path, err)
}
return nil
}