// forcefullyDeletePod deletes the pod immediately, with no grace period.
func forcefullyDeletePod(c client.Interface, pod *api.Pod) {
	var zero int64
	err := c.Pods(pod.Namespace).Delete(pod.Name, &api.DeleteOptions{GracePeriodSeconds: &zero})
	if err != nil {
		util.HandleError(err)
	}
}
// deletePods deletes all pods in the namespace. It returns the longest
// termination grace period still in flight; once `before` has passed,
// deletions are forced and the estimate is 0.
func deletePods(kubeClient client.Interface, ns string, before unversioned.Time) (int64, error) {
	items, err := kubeClient.Pods(ns).List(labels.Everything(), fields.Everything())
	if err != nil {
		return 0, err
	}
	expired := unversioned.Now().After(before.Time)
	var deleteOptions *api.DeleteOptions
	if expired {
		deleteOptions = api.NewDeleteOptions(0)
	}
	estimate := int64(0)
	for i := range items.Items {
		if items.Items[i].Spec.TerminationGracePeriodSeconds != nil {
			grace := *items.Items[i].Spec.TerminationGracePeriodSeconds
			if grace > estimate {
				estimate = grace
			}
		}
		err := kubeClient.Pods(ns).Delete(items.Items[i].Name, deleteOptions)
		if err != nil && !errors.IsNotFound(err) {
			return 0, err
		}
	}
	if expired {
		estimate = 0
	}
	return estimate, nil
}
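// The interplay between `before` and the return value is easy to miss: while
// `before` is in the future, pods get their normal graceful deletion and the
// caller learns the longest grace period still pending; after `before` has
// passed, deletes are forced with grace 0 and the estimate collapses to 0.
// A hypothetical caller, for illustration only (drainPods is not part of this
// package; contentRemainingError is sketched after syncNamespace further down):
func drainPods(kubeClient client.Interface, ns string, deletionStart unversioned.Time) error {
	estimate, err := deletePods(kubeClient, ns, deletionStart)
	if err != nil {
		return err
	}
	if estimate > 0 {
		// Graceful deletions are still in flight; surface the estimate so the
		// caller can requeue, as NewNamespaceController does below.
		return &contentRemainingError{Estimate: estimate}
	}
	return nil
}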
// GetOldRCs returns the old RCs targeted by the given Deployment: RCs whose
// selector matches the deployment's pods but whose pod template differs.
func GetOldRCs(deployment extensions.Deployment, c client.Interface) ([]*api.ReplicationController, error) {
	namespace := deployment.ObjectMeta.Namespace
	// 1. Find all pods whose labels match deployment.Spec.Selector.
	podList, err := c.Pods(namespace).List(labels.SelectorFromSet(deployment.Spec.Selector), fields.Everything())
	if err != nil {
		return nil, fmt.Errorf("error listing pods: %v", err)
	}
	// 2. Find the corresponding RCs for pods in podList.
	// TODO: Right now we list all RCs and then filter. We should add an API for this.
	oldRCs := map[string]api.ReplicationController{}
	rcList, err := c.ReplicationControllers(namespace).List(labels.Everything())
	if err != nil {
		return nil, fmt.Errorf("error listing replication controllers: %v", err)
	}
	for _, pod := range podList.Items {
		podLabelsSelector := labels.Set(pod.ObjectMeta.Labels)
		for _, rc := range rcList.Items {
			rcLabelsSelector := labels.SelectorFromSet(rc.Spec.Selector)
			if rcLabelsSelector.Matches(podLabelsSelector) {
				// Filter out the RC that has the same pod template spec as the deployment - that is the new RC.
				if api.Semantic.DeepEqual(rc.Spec.Template, GetNewRCTemplate(deployment)) {
					continue
				}
				oldRCs[rc.ObjectMeta.Name] = rc
			}
		}
	}
	requiredRCs := []*api.ReplicationController{}
	for _, value := range oldRCs {
		value := value // copy: taking the address of the range variable would alias every entry
		requiredRCs = append(requiredRCs, &value)
	}
	return requiredRCs, nil
}
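// The per-iteration copy above matters: before Go 1.22, a range variable is
// declared once and reused across iterations, so appending its address would
// leave every element of requiredRCs pointing at the same RC (the last one
// iterated). A standalone illustration of the footgun, plain Go with no
// Kubernetes imports:
package main

import "fmt"

func main() {
	items := map[string]int{"a": 1, "b": 2, "c": 3}

	var aliased []*int
	for _, v := range items {
		aliased = append(aliased, &v) // pre-Go 1.22: all three pointers share one variable
	}
	fmt.Println(*aliased[0] == *aliased[1] && *aliased[1] == *aliased[2]) // true before Go 1.22

	var copied []*int
	for _, v := range items {
		v := v // fresh copy per iteration, as in GetOldRCs above
		copied = append(copied, &v)
	}
	distinct := map[int]bool{}
	for _, p := range copied {
		distinct[*p] = true
	}
	fmt.Println(len(distinct)) // 3: each pointer refers to its own value
}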
// GetSwaggerSchema returns the swagger spec from master.
func GetSwaggerSchema(apiVer string, kubeClient client.Interface) (*swagger.ApiDeclaration, error) {
	swaggerSchema, err := kubeClient.SwaggerSchema(apiVer)
	if err != nil {
		return nil, fmt.Errorf("couldn't read swagger schema from server: %v", err)
	}
	return swaggerSchema, nil
}
// New returns a GCController that watches terminated pods and force-deletes
// them once the cluster-wide count exceeds the given threshold.
func New(kubeClient client.Interface, resyncPeriod controller.ResyncPeriodFunc, threshold int) *GCController {
	eventBroadcaster := record.NewBroadcaster()
	eventBroadcaster.StartLogging(glog.Infof)
	eventBroadcaster.StartRecordingToSink(kubeClient.Events(""))

	gcc := &GCController{
		kubeClient: kubeClient,
		threshold:  threshold,
		deletePod: func(namespace, name string) error {
			return kubeClient.Pods(namespace).Delete(name, api.NewDeleteOptions(0))
		},
	}

	terminatedSelector := compileTerminatedPodSelector()

	gcc.podStore.Store, gcc.podStoreSyncer = framework.NewInformer(
		&cache.ListWatch{
			ListFunc: func() (runtime.Object, error) {
				return gcc.kubeClient.Pods(api.NamespaceAll).List(labels.Everything(), terminatedSelector)
			},
			WatchFunc: func(rv string) (watch.Interface, error) {
				return gcc.kubeClient.Pods(api.NamespaceAll).Watch(labels.Everything(), terminatedSelector, rv)
			},
		},
		&api.Pod{},
		resyncPeriod(),
		framework.ResourceEventHandlerFuncs{},
	)
	return gcc
}
// GetServerVersion writes the server's version to w, exiting on failure.
func GetServerVersion(w io.Writer, kubeClient client.Interface) {
	serverVersion, err := kubeClient.ServerVersion()
	if err != nil {
		fmt.Fprintf(os.Stderr, "Couldn't read server version from server: %v\n", err)
		os.Exit(1)
	}
	fmt.Fprintf(w, "Server Version: %#v\n", *serverVersion)
}
// UpdateExistingReplicationController ensures the given RC carries the
// deployment key in its selector and records the name of the next controller.
func UpdateExistingReplicationController(c client.Interface, oldRc *api.ReplicationController, namespace, newName, deploymentKey, deploymentValue string, out io.Writer) (*api.ReplicationController, error) {
	SetNextControllerAnnotation(oldRc, newName)
	if _, found := oldRc.Spec.Selector[deploymentKey]; !found {
		return AddDeploymentKeyToReplicationController(oldRc, c, deploymentKey, deploymentValue, namespace, out)
	}
	// If we didn't need to update the controller for the deployment key, we still need to write
	// the "next" controller annotation set above.
	return c.ReplicationControllers(namespace).Update(oldRc)
}
// updateNamespaceStatusFunc moves a namespace that is being deleted into the
// Terminating phase; namespaces not being deleted (or already Terminating)
// are returned unchanged.
func updateNamespaceStatusFunc(kubeClient client.Interface, namespace *api.Namespace) (*api.Namespace, error) {
	if namespace.DeletionTimestamp.IsZero() || namespace.Status.Phase == api.NamespaceTerminating {
		return namespace, nil
	}
	newNamespace := api.Namespace{}
	newNamespace.ObjectMeta = namespace.ObjectMeta
	newNamespace.Status = namespace.Status
	newNamespace.Status.Phase = api.NamespaceTerminating
	return kubeClient.Namespaces().Status(&newNamespace)
}
// getPodsForRCs lists the pods selected by each of the given RCs.
func getPodsForRCs(c client.Interface, replicationControllers []*api.ReplicationController) ([]api.Pod, error) {
	allPods := []api.Pod{}
	for _, rc := range replicationControllers {
		podList, err := c.Pods(rc.ObjectMeta.Namespace).List(labels.SelectorFromSet(rc.Spec.Selector), fields.Everything())
		if err != nil {
			return allPods, fmt.Errorf("error listing pods: %v", err)
		}
		allPods = append(allPods, podList.Items...)
	}
	return allPods, nil
}
// New returns a DeploymentController that records events under the
// "deployment-controller" component.
func New(client client.Interface) *DeploymentController {
	eventBroadcaster := record.NewBroadcaster()
	eventBroadcaster.StartLogging(glog.Infof)
	eventBroadcaster.StartRecordingToSink(client.Events(""))

	return &DeploymentController{
		client:        client,
		expClient:     client.Extensions(),
		eventRecorder: eventBroadcaster.NewRecorder(api.EventSource{Component: "deployment-controller"}),
	}
}
// NewResourceQuota creates a new resource quota admission control handler.
func NewResourceQuota(client client.Interface) admission.Interface {
	lw := &cache.ListWatch{
		ListFunc: func() (runtime.Object, error) {
			return client.ResourceQuotas(api.NamespaceAll).List(labels.Everything())
		},
		WatchFunc: func(resourceVersion string) (watch.Interface, error) {
			return client.ResourceQuotas(api.NamespaceAll).Watch(labels.Everything(), fields.Everything(), resourceVersion)
		},
	}
	indexer, reflector := cache.NewNamespaceKeyedIndexerAndReflector(lw, &api.ResourceQuota{}, 0)
	reflector.Run()
	return createResourceQuota(client, indexer)
}
// deletePersistentVolumeClaims deletes all persistent volume claims in the namespace.
func deletePersistentVolumeClaims(kubeClient client.Interface, ns string) error {
	items, err := kubeClient.PersistentVolumeClaims(ns).List(labels.Everything(), fields.Everything())
	if err != nil {
		return err
	}
	for i := range items.Items {
		err := kubeClient.PersistentVolumeClaims(ns).Delete(items.Items[i].Name)
		if err != nil && !errors.IsNotFound(err) {
			return err
		}
	}
	return nil
}
// deleteReplicationControllers deletes all replication controllers in the namespace.
func deleteReplicationControllers(kubeClient client.Interface, ns string) error {
	items, err := kubeClient.ReplicationControllers(ns).List(labels.Everything())
	if err != nil {
		return err
	}
	for i := range items.Items {
		err := kubeClient.ReplicationControllers(ns).Delete(items.Items[i].Name)
		if err != nil && !errors.IsNotFound(err) {
			return err
		}
	}
	return nil
}
// deleteResourceQuotas deletes all resource quotas in the namespace.
func deleteResourceQuotas(kubeClient client.Interface, ns string) error {
	resourceQuotas, err := kubeClient.ResourceQuotas(ns).List(labels.Everything())
	if err != nil {
		return err
	}
	for i := range resourceQuotas.Items {
		err := kubeClient.ResourceQuotas(ns).Delete(resourceQuotas.Items[i].Name)
		if err != nil && !errors.IsNotFound(err) {
			return err
		}
	}
	return nil
}
// syncNamespace orchestrates deletion of a Namespace and its associated content.
func syncNamespace(kubeClient client.Interface, versions *unversioned.APIVersions, namespace *api.Namespace) (err error) {
	if namespace.DeletionTimestamp == nil {
		return nil
	}
	glog.V(4).Infof("Syncing namespace %s", namespace.Name)

	// Ensure that the status is up to date on the namespace.
	// If we get a not found error, we assume the namespace is truly gone.
	namespace, err = retryOnConflictError(kubeClient, namespace, updateNamespaceStatusFunc)
	if err != nil {
		if errors.IsNotFound(err) {
			return nil
		}
		return err
	}

	// If the namespace is already finalized, delete it.
	if finalized(namespace) {
		err = kubeClient.Namespaces().Delete(namespace.Name)
		if err != nil && !errors.IsNotFound(err) {
			return err
		}
		return nil
	}

	// There may still be content for us to remove.
	estimate, err := deleteAllContent(kubeClient, versions, namespace.Name, *namespace.DeletionTimestamp)
	if err != nil {
		return err
	}
	if estimate > 0 {
		return &contentRemainingError{estimate}
	}

	// We have removed content, so mark the namespace finalized by us.
	result, err := retryOnConflictError(kubeClient, namespace, finalizeNamespaceFunc)
	if err != nil {
		return err
	}

	// Now check whether all finalizers have reported; if so, we can delete.
	if finalized(result) {
		err = kubeClient.Namespaces().Delete(namespace.Name)
		if err != nil && !errors.IsNotFound(err) {
			return err
		}
	}
	return nil
}
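// syncNamespace leans on two helpers that are not shown in this section. These
// are minimal sketches inferred from the call sites here and in
// NewNamespaceController below; the real definitions may differ.

// contentRemainingError signals that some content is still being deleted and
// carries an estimate (in seconds) of how long that may take.
type contentRemainingError struct {
	Estimate int64
}

func (e *contentRemainingError) Error() string {
	return fmt.Sprintf("some content remains in the namespace, estimate %d seconds before it is removed", e.Estimate)
}

// finalized reports whether every finalizer has released the namespace.
func finalized(namespace *api.Namespace) bool {
	return len(namespace.Spec.Finalizers) == 0
}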
// finalizeNamespaceFunc removes the kubernetes finalizer from the namespace,
// leaving any other finalizers in place.
func finalizeNamespaceFunc(kubeClient client.Interface, namespace *api.Namespace) (*api.Namespace, error) {
	namespaceFinalize := api.Namespace{}
	namespaceFinalize.ObjectMeta = namespace.ObjectMeta
	namespaceFinalize.Spec = namespace.Spec
	finalizerSet := sets.NewString()
	for i := range namespace.Spec.Finalizers {
		if namespace.Spec.Finalizers[i] != api.FinalizerKubernetes {
			finalizerSet.Insert(string(namespace.Spec.Finalizers[i]))
		}
	}
	namespaceFinalize.Spec.Finalizers = make([]api.FinalizerName, 0, len(finalizerSet))
	for _, value := range finalizerSet.List() {
		namespaceFinalize.Spec.Finalizers = append(namespaceFinalize.Spec.Finalizers, api.FinalizerName(value))
	}
	return kubeClient.Namespaces().Finalize(&namespaceFinalize)
}
// New returns a new service controller to keep cloud provider service resources
// (like external load balancers) in sync with the registry.
func New(cloud cloudprovider.Interface, kubeClient client.Interface, clusterName string) *ServiceController {
	broadcaster := record.NewBroadcaster()
	broadcaster.StartRecordingToSink(kubeClient.Events(""))
	recorder := broadcaster.NewRecorder(api.EventSource{Component: "service-controller"})

	return &ServiceController{
		cloud:            cloud,
		kubeClient:       kubeClient,
		clusterName:      clusterName,
		cache:            &serviceCache{serviceMap: make(map[string]*cachedService)},
		eventBroadcaster: broadcaster,
		eventRecorder:    recorder,
		nodeLister: cache.StoreToNodeLister{
			Store: cache.NewStore(cache.MetaNamespaceKeyFunc),
		},
	}
}
// GetNewRC returns an RC that matches the intent of the given deployment.
// It returns nil if the new RC doesn't exist yet.
func GetNewRC(deployment extensions.Deployment, c client.Interface) (*api.ReplicationController, error) {
	namespace := deployment.ObjectMeta.Namespace
	rcList, err := c.ReplicationControllers(namespace).List(labels.Everything())
	if err != nil {
		return nil, fmt.Errorf("error listing replication controllers: %v", err)
	}
	newRCTemplate := GetNewRCTemplate(deployment)
	for i := range rcList.Items {
		if api.Semantic.DeepEqual(rcList.Items[i].Spec.Template, newRCTemplate) {
			// This is the new RC. Index into the slice rather than taking the
			// address of the range variable.
			return &rcList.Items[i], nil
		}
	}
	// The new RC does not exist yet.
	return nil, nil
}
// retryOnConflictError retries the specified fn if there was a conflict error,
// refreshing the namespace from the server between attempts.
// TODO: RetryOnConflict should be a generic concept in client code.
func retryOnConflictError(kubeClient client.Interface, namespace *api.Namespace, fn updateNamespaceFunc) (result *api.Namespace, err error) {
	latestNamespace := namespace
	for {
		result, err = fn(kubeClient, latestNamespace)
		if err == nil {
			return result, nil
		}
		if !errors.IsConflict(err) {
			return nil, err
		}
		latestNamespace, err = kubeClient.Namespaces().Get(latestNamespace.Name)
		if err != nil {
			return nil, err
		}
	}
}
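// updateNamespaceFunc is referenced above but not defined in this section. Its
// shape is pinned down by the call fn(kubeClient, latestNamespace) and by the
// two functions passed to retryOnConflictError (updateNamespaceStatusFunc and
// finalizeNamespaceFunc both fit it), so this sketch should be faithful; treat
// it as inferred rather than authoritative.
type updateNamespaceFunc func(kubeClient client.Interface, namespace *api.Namespace) (*api.Namespace, error)

// Illustrative call, mirroring syncNamespace above:
//   namespace, err = retryOnConflictError(kubeClient, namespace, updateNamespaceStatusFunc)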
// NewProvision creates a new namespace provision admission control handler.
func NewProvision(c client.Interface) admission.Interface {
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)
	reflector := cache.NewReflector(
		&cache.ListWatch{
			ListFunc: func() (runtime.Object, error) {
				return c.Namespaces().List(labels.Everything(), fields.Everything())
			},
			WatchFunc: func(resourceVersion string) (watch.Interface, error) {
				return c.Namespaces().Watch(labels.Everything(), fields.Everything(), resourceVersion)
			},
		},
		&api.Namespace{},
		store,
		0,
	)
	reflector.Run()
	return createProvision(c, store)
}
// NewLimitRanger returns an object that enforces limits based on the supplied limit function.
func NewLimitRanger(client client.Interface, limitFunc LimitFunc) admission.Interface {
	lw := &cache.ListWatch{
		ListFunc: func() (runtime.Object, error) {
			return client.LimitRanges(api.NamespaceAll).List(labels.Everything())
		},
		WatchFunc: func(resourceVersion string) (watch.Interface, error) {
			return client.LimitRanges(api.NamespaceAll).Watch(labels.Everything(), fields.Everything(), resourceVersion)
		},
	}
	indexer, reflector := cache.NewNamespaceKeyedIndexerAndReflector(lw, &api.LimitRange{}, 0)
	reflector.Run()
	return &limitRanger{
		Handler:   admission.NewHandler(admission.Create, admission.Update),
		client:    client,
		limitFunc: limitFunc,
		indexer:   indexer,
	}
}
// NewHorizontalController returns a HorizontalController that scales targets
// based on readings from the supplied metrics client.
func NewHorizontalController(client client.Interface, metricsClient metrics.MetricsClient) *HorizontalController {
	broadcaster := record.NewBroadcaster()
	broadcaster.StartRecordingToSink(client.Events(""))
	recorder := broadcaster.NewRecorder(api.EventSource{Component: "horizontal-pod-autoscaler"})

	return &HorizontalController{
		client:        client,
		metricsClient: metricsClient,
		eventRecorder: recorder,
	}
}
// NewPersistentVolumeClaimBinder creates a new PersistentVolumeClaimBinder.
func NewPersistentVolumeClaimBinder(kubeClient client.Interface, syncPeriod time.Duration) *PersistentVolumeClaimBinder {
	volumeIndex := NewPersistentVolumeOrderedIndex()
	binderClient := NewBinderClient(kubeClient)
	binder := &PersistentVolumeClaimBinder{
		volumeIndex: volumeIndex,
		client:      binderClient,
	}

	_, volumeController := framework.NewInformer(
		&cache.ListWatch{
			ListFunc: func() (runtime.Object, error) {
				return kubeClient.PersistentVolumes().List(labels.Everything(), fields.Everything())
			},
			WatchFunc: func(resourceVersion string) (watch.Interface, error) {
				return kubeClient.PersistentVolumes().Watch(labels.Everything(), fields.Everything(), resourceVersion)
			},
		},
		&api.PersistentVolume{},
		// TODO: Can we have much longer period here?
		syncPeriod,
		framework.ResourceEventHandlerFuncs{
			AddFunc:    binder.addVolume,
			UpdateFunc: binder.updateVolume,
			DeleteFunc: binder.deleteVolume,
		},
	)
	_, claimController := framework.NewInformer(
		&cache.ListWatch{
			ListFunc: func() (runtime.Object, error) {
				return kubeClient.PersistentVolumeClaims(api.NamespaceAll).List(labels.Everything(), fields.Everything())
			},
			WatchFunc: func(resourceVersion string) (watch.Interface, error) {
				return kubeClient.PersistentVolumeClaims(api.NamespaceAll).Watch(labels.Everything(), fields.Everything(), resourceVersion)
			},
		},
		&api.PersistentVolumeClaim{},
		// TODO: Can we have much longer period here?
		syncPeriod,
		framework.ResourceEventHandlerFuncs{
			AddFunc:    binder.addClaim,
			UpdateFunc: binder.updateClaim,
			// No DeleteFunc needed: a claim requires no clean-up;
			// syncVolume handles the missing claim.
		},
	)

	binder.claimController = claimController
	binder.volumeController = volumeController
	return binder
}
// NewExists creates a new namespace exists admission control handler.
func NewExists(c client.Interface) admission.Interface {
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)
	reflector := cache.NewReflector(
		&cache.ListWatch{
			ListFunc: func() (runtime.Object, error) {
				return c.Namespaces().List(labels.Everything(), fields.Everything())
			},
			WatchFunc: func(resourceVersion string) (watch.Interface, error) {
				return c.Namespaces().Watch(labels.Everything(), fields.Everything(), resourceVersion)
			},
		},
		&api.Namespace{},
		store,
		5*time.Minute,
	)
	reflector.Run()
	return &exists{
		client:  c,
		store:   store,
		Handler: admission.NewHandler(admission.Create, admission.Update, admission.Delete),
	}
}
// NewPersistentVolumeRecycler creates a new PersistentVolumeRecycler.
func NewPersistentVolumeRecycler(kubeClient client.Interface, syncPeriod time.Duration, plugins []volume.VolumePlugin) (*PersistentVolumeRecycler, error) {
	recyclerClient := NewRecyclerClient(kubeClient)
	recycler := &PersistentVolumeRecycler{
		client:     recyclerClient,
		kubeClient: kubeClient,
	}

	if err := recycler.pluginMgr.InitPlugins(plugins, recycler); err != nil {
		return nil, fmt.Errorf("could not initialize volume plugins for PersistentVolumeRecycler: %+v", err)
	}

	_, volumeController := framework.NewInformer(
		&cache.ListWatch{
			ListFunc: func() (runtime.Object, error) {
				return kubeClient.PersistentVolumes().List(labels.Everything(), fields.Everything())
			},
			WatchFunc: func(resourceVersion string) (watch.Interface, error) {
				return kubeClient.PersistentVolumes().Watch(labels.Everything(), fields.Everything(), resourceVersion)
			},
		},
		&api.PersistentVolume{},
		// TODO: Can we have much longer period here?
		syncPeriod,
		framework.ResourceEventHandlerFuncs{
			AddFunc: func(obj interface{}) {
				pv := obj.(*api.PersistentVolume)
				recycler.reclaimVolume(pv)
			},
			UpdateFunc: func(oldObj, newObj interface{}) {
				pv := newObj.(*api.PersistentVolume)
				recycler.reclaimVolume(pv)
			},
		},
	)

	recycler.volumeController = volumeController
	return recycler, nil
}
// NewServiceAccount returns an admission.Interface implementation which limits admission of Pod CREATE requests based on the pod's ServiceAccount:
//  1. If the pod does not specify a ServiceAccount, it sets the pod's ServiceAccount to "default".
//  2. It ensures the ServiceAccount referenced by the pod exists.
//  3. If LimitSecretReferences is true, it rejects the pod if the pod references Secret objects which the pod's ServiceAccount does not reference.
//  4. If the pod does not contain any ImagePullSecrets, the ImagePullSecrets of the service account are added.
//  5. If MountServiceAccountToken is true, it adds a VolumeMount with the pod's ServiceAccount's API token secret to containers.
func NewServiceAccount(cl client.Interface) *serviceAccount {
	serviceAccountsIndexer, serviceAccountsReflector := cache.NewNamespaceKeyedIndexerAndReflector(
		&cache.ListWatch{
			ListFunc: func() (runtime.Object, error) {
				return cl.ServiceAccounts(api.NamespaceAll).List(labels.Everything(), fields.Everything())
			},
			WatchFunc: func(resourceVersion string) (watch.Interface, error) {
				return cl.ServiceAccounts(api.NamespaceAll).Watch(labels.Everything(), fields.Everything(), resourceVersion)
			},
		},
		&api.ServiceAccount{},
		0,
	)

	tokenSelector := fields.SelectorFromSet(map[string]string{client.SecretType: string(api.SecretTypeServiceAccountToken)})
	secretsIndexer, secretsReflector := cache.NewNamespaceKeyedIndexerAndReflector(
		&cache.ListWatch{
			ListFunc: func() (runtime.Object, error) {
				return cl.Secrets(api.NamespaceAll).List(labels.Everything(), tokenSelector)
			},
			WatchFunc: func(resourceVersion string) (watch.Interface, error) {
				return cl.Secrets(api.NamespaceAll).Watch(labels.Everything(), tokenSelector, resourceVersion)
			},
		},
		&api.Secret{},
		0,
	)

	return &serviceAccount{
		Handler: admission.NewHandler(admission.Create),
		// TODO: enable this once we've swept secret usage to account for adding secret references to service accounts
		LimitSecretReferences: false,
		// Auto mount service account API token secrets
		MountServiceAccountToken: true,
		// Reject pod creation until a service account token is available
		RequireAPIToken: true,

		client:                   cl,
		serviceAccounts:          serviceAccountsIndexer,
		serviceAccountsReflector: serviceAccountsReflector,
		secrets:                  secretsIndexer,
		secretsReflector:         secretsReflector,
	}
}
// NewDaemonSetsController returns a DaemonSetsController that watches daemon
// sets, pods, and nodes, and reconciles them on the given resync period.
func NewDaemonSetsController(kubeClient client.Interface, resyncPeriod controller.ResyncPeriodFunc) *DaemonSetsController {
	eventBroadcaster := record.NewBroadcaster()
	eventBroadcaster.StartLogging(glog.Infof)
	eventBroadcaster.StartRecordingToSink(kubeClient.Events(""))

	dsc := &DaemonSetsController{
		kubeClient: kubeClient,
		podControl: controller.RealPodControl{
			KubeClient: kubeClient,
			Recorder:   eventBroadcaster.NewRecorder(api.EventSource{Component: "daemon-set"}),
		},
		expectations: controller.NewControllerExpectations(),
		queue:        workqueue.New(),
	}
	// Manage addition/update of daemon sets.
	dsc.dsStore.Store, dsc.dsController = framework.NewInformer(
		&cache.ListWatch{
			ListFunc: func() (runtime.Object, error) {
				return dsc.kubeClient.Extensions().DaemonSets(api.NamespaceAll).List(labels.Everything())
			},
			WatchFunc: func(rv string) (watch.Interface, error) {
				return dsc.kubeClient.Extensions().DaemonSets(api.NamespaceAll).Watch(labels.Everything(), fields.Everything(), rv)
			},
		},
		&extensions.DaemonSet{},
		// TODO: Can we have much longer period here?
		FullDaemonSetResyncPeriod,
		framework.ResourceEventHandlerFuncs{
			AddFunc: func(obj interface{}) {
				ds := obj.(*extensions.DaemonSet)
				glog.V(4).Infof("Adding daemon set %s", ds.Name)
				dsc.enqueueDaemonSet(obj)
			},
			UpdateFunc: func(old, cur interface{}) {
				oldDS := old.(*extensions.DaemonSet)
				glog.V(4).Infof("Updating daemon set %s", oldDS.Name)
				dsc.enqueueDaemonSet(cur)
			},
			DeleteFunc: func(obj interface{}) {
				ds := obj.(*extensions.DaemonSet)
				glog.V(4).Infof("Deleting daemon set %s", ds.Name)
				dsc.enqueueDaemonSet(obj)
			},
		},
	)
	// Watch for creation/deletion of pods. The reason we watch is that we don't want a daemon set to create/delete
	// more pods until all the effects (expectations) of a daemon set's create/delete have been observed.
	dsc.podStore.Store, dsc.podController = framework.NewInformer(
		&cache.ListWatch{
			ListFunc: func() (runtime.Object, error) {
				return dsc.kubeClient.Pods(api.NamespaceAll).List(labels.Everything(), fields.Everything())
			},
			WatchFunc: func(rv string) (watch.Interface, error) {
				return dsc.kubeClient.Pods(api.NamespaceAll).Watch(labels.Everything(), fields.Everything(), rv)
			},
		},
		&api.Pod{},
		resyncPeriod(),
		framework.ResourceEventHandlerFuncs{
			AddFunc:    dsc.addPod,
			UpdateFunc: dsc.updatePod,
			DeleteFunc: dsc.deletePod,
		},
	)
	// Watch for new nodes or updates to nodes - daemon pods are launched on new nodes,
	// and possibly when labels on nodes change.
	dsc.nodeStore.Store, dsc.nodeController = framework.NewInformer(
		&cache.ListWatch{
			ListFunc: func() (runtime.Object, error) {
				return dsc.kubeClient.Nodes().List(labels.Everything(), fields.Everything())
			},
			WatchFunc: func(rv string) (watch.Interface, error) {
				return dsc.kubeClient.Nodes().Watch(labels.Everything(), fields.Everything(), rv)
			},
		},
		&api.Node{},
		resyncPeriod(),
		framework.ResourceEventHandlerFuncs{
			AddFunc:    dsc.addNode,
			UpdateFunc: dsc.updateNode,
		},
	)
	dsc.syncHandler = dsc.syncDaemonSet
	dsc.podStoreSynced = dsc.podController.HasSynced
	return dsc
}
// IncrementUsage updates the supplied ResourceQuotaStatus object based on the incoming operation.
// It returns true if the usage must be recorded prior to admitting the new resource,
// and an error if the operation should not pass admission control.
func IncrementUsage(a admission.Attributes, status *api.ResourceQuotaStatus, client client.Interface) (bool, error) {
	var errs []error
	dirty := true
	set := map[api.ResourceName]bool{}
	for k := range status.Hard {
		set[k] = true
	}
	obj := a.GetObject()
	// Handle max counts for each kind of resource (pods, services, replicationControllers, etc.).
	if a.GetOperation() == admission.Create {
		resourceName := resourceToResourceName[a.GetResource()]
		hard, hardFound := status.Hard[resourceName]
		if hardFound {
			used, usedFound := status.Used[resourceName]
			if !usedFound {
				return false, fmt.Errorf("quota usage stats are not yet known, unable to admit resource until an accurate count is completed.")
			}
			if used.Value() >= hard.Value() {
				errs = append(errs, fmt.Errorf("limited to %s %s", hard.String(), resourceName))
				dirty = false
			} else {
				status.Used[resourceName] = *resource.NewQuantity(used.Value()+int64(1), resource.DecimalSI)
			}
		}
	}
	if a.GetResource() == "pods" {
		for _, resourceName := range []api.ResourceName{api.ResourceMemory, api.ResourceCPU} {
			// Ignore the resource if it's not tracked in the quota document.
			if !set[resourceName] {
				continue
			}
			hard, hardFound := status.Hard[resourceName]
			if !hardFound {
				continue
			}
			// If we do not yet know how much of the current resource is used, we cannot accept any request.
			used, usedFound := status.Used[resourceName]
			if !usedFound {
				return false, fmt.Errorf("unable to admit pod until quota usage stats are calculated.")
			}
			// The amount of resource being requested, or an error if the pod makes no request
			// for a resource that is tracked.
			pod := obj.(*api.Pod)
			delta, err := resourcequotacontroller.PodRequests(pod, resourceName)
			if err != nil {
				return false, fmt.Errorf("must make a non-zero request for %s since it is tracked by quota.", resourceName)
			}
			// If this operation is an update, we need to find the delta usage from the previous state.
			if a.GetOperation() == admission.Update {
				oldPod, err := client.Pods(a.GetNamespace()).Get(pod.Name)
				if err != nil {
					return false, err
				}
				// If the previous version of the resource made a resource request, we need to subtract the old request
				// from the current to get the actual resource request delta. If the previous version of the pod
				// made no request on the resource, then we get an err value. We ignore the err value, and delta
				// will just be equal to the total resource request on the pod since there is nothing to subtract.
				oldRequest, err := resourcequotacontroller.PodRequests(oldPod, resourceName)
				if err == nil {
					err = delta.Sub(*oldRequest)
					if err != nil {
						return false, err
					}
				}
			}
			newUsage := used.Copy()
			newUsage.Add(*delta)

			// Make the most precise comparison possible.
			newUsageValue := newUsage.Value()
			hardUsageValue := hard.Value()
			if newUsageValue <= resource.MaxMilliValue && hardUsageValue <= resource.MaxMilliValue {
				newUsageValue = newUsage.MilliValue()
				hardUsageValue = hard.MilliValue()
			}
			if newUsageValue > hardUsageValue {
				errs = append(errs, fmt.Errorf("unable to admit pod without exceeding quota for resource %s: limited to %s but require %s to succeed.", resourceName, hard.String(), newUsage.String()))
				dirty = false
			} else {
				status.Used[resourceName] = *newUsage
			}
		}
	}
	return dirty, errors.NewAggregate(errs)
}
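// The Value/MilliValue step at the end of IncrementUsage is the subtle part:
// Quantity.Value() rounds fractions up to whole units, so a 1600m request
// against a 1500m quota compares as 2 vs 2 and would slip through. A small
// runnable sketch, assuming the same resource package used by the code above
// (k8s.io/kubernetes/pkg/api/resource):
package main

import (
	"fmt"

	"k8s.io/kubernetes/pkg/api/resource"
)

func main() {
	hard := resource.MustParse("1500m")     // quota hard limit: 1.5 CPU
	newUsage := resource.MustParse("1600m") // proposed total usage: 1.6 CPU

	// Coarse comparison: both round up to 2, so the over-quota request passes.
	fmt.Println(newUsage.Value() > hard.Value()) // false (2 > 2)

	// Precise comparison, used when both values can scale to milli-units
	// without overflowing int64, as IncrementUsage does.
	if newUsage.Value() <= resource.MaxMilliValue && hard.Value() <= resource.MaxMilliValue {
		fmt.Println(newUsage.MilliValue() > hard.MilliValue()) // true (1600 > 1500): rejected
	}
}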
// NewNamespaceController creates a new NamespaceController.
func NewNamespaceController(kubeClient client.Interface, versions *unversioned.APIVersions, resyncPeriod time.Duration) *NamespaceController {
	var controller *framework.Controller
	_, controller = framework.NewInformer(
		&cache.ListWatch{
			ListFunc: func() (runtime.Object, error) {
				return kubeClient.Namespaces().List(labels.Everything(), fields.Everything())
			},
			WatchFunc: func(resourceVersion string) (watch.Interface, error) {
				return kubeClient.Namespaces().Watch(labels.Everything(), fields.Everything(), resourceVersion)
			},
		},
		&api.Namespace{},
		// TODO: Can we have much longer period here?
		resyncPeriod,
		framework.ResourceEventHandlerFuncs{
			AddFunc: func(obj interface{}) {
				namespace := obj.(*api.Namespace)
				if err := syncNamespace(kubeClient, versions, namespace); err != nil {
					if estimate, ok := err.(*contentRemainingError); ok {
						go func() {
							// Estimate is the aggregate total of TerminationGracePeriodSeconds, which defaults to 30s
							// for pods. However, most processes will terminate faster - within a few seconds, probably
							// with a peak within 5-10s. So this division is a heuristic that avoids waiting the full
							// duration when in many cases things complete more quickly. The extra second added is to
							// ensure we never wait 0 seconds.
							t := estimate.Estimate/2 + 1
							glog.V(4).Infof("Content remaining in namespace %s, waiting %d seconds", namespace.Name, t)
							time.Sleep(time.Duration(t) * time.Second)
							if err := controller.Requeue(namespace); err != nil {
								util.HandleError(err)
							}
						}()
						return
					}
					util.HandleError(err)
				}
			},
			UpdateFunc: func(oldObj, newObj interface{}) {
				namespace := newObj.(*api.Namespace)
				if err := syncNamespace(kubeClient, versions, namespace); err != nil {
					if estimate, ok := err.(*contentRemainingError); ok {
						go func() {
							// Same wait heuristic as in AddFunc above.
							t := estimate.Estimate/2 + 1
							glog.V(4).Infof("Content remaining in namespace %s, waiting %d seconds", namespace.Name, t)
							time.Sleep(time.Duration(t) * time.Second)
							if err := controller.Requeue(namespace); err != nil {
								util.HandleError(err)
							}
						}()
						return
					}
					util.HandleError(err)
				}
			},
		},
	)
	return &NamespaceController{
		controller: controller,
	}
}
// deleteAllContent will delete all content known to the system in a namespace. It returns an estimate
// of the time remaining before the remaining resources are deleted. If estimate > 0, not all resources
// are guaranteed to be gone.
func deleteAllContent(kubeClient client.Interface, versions *unversioned.APIVersions, namespace string, before unversioned.Time) (estimate int64, err error) {
	err = deleteServiceAccounts(kubeClient, namespace)
	if err != nil {
		return estimate, err
	}
	err = deleteServices(kubeClient, namespace)
	if err != nil {
		return estimate, err
	}
	err = deleteReplicationControllers(kubeClient, namespace)
	if err != nil {
		return estimate, err
	}
	estimate, err = deletePods(kubeClient, namespace, before)
	if err != nil {
		return estimate, err
	}
	err = deleteSecrets(kubeClient, namespace)
	if err != nil {
		return estimate, err
	}
	err = deletePersistentVolumeClaims(kubeClient, namespace)
	if err != nil {
		return estimate, err
	}
	err = deleteLimitRanges(kubeClient, namespace)
	if err != nil {
		return estimate, err
	}
	err = deleteResourceQuotas(kubeClient, namespace)
	if err != nil {
		return estimate, err
	}
	err = deleteEvents(kubeClient, namespace)
	if err != nil {
		return estimate, err
	}
	// If the extensions group is enabled, delete all of its resources in the namespace.
	if containsVersion(versions, "extensions/v1beta1") {
		resources, err := kubeClient.Discovery().ServerResourcesForGroupVersion("extensions/v1beta1")
		if err != nil {
			return estimate, err
		}
		if containsResource(resources, "horizontalpodautoscalers") {
			err = deleteHorizontalPodAutoscalers(kubeClient.Extensions(), namespace)
			if err != nil {
				return estimate, err
			}
		}
		if containsResource(resources, "ingresses") {
			err = deleteIngress(kubeClient.Extensions(), namespace)
			if err != nil {
				return estimate, err
			}
		}
		if containsResource(resources, "daemonsets") {
			err = deleteDaemonSets(kubeClient.Extensions(), namespace)
			if err != nil {
				return estimate, err
			}
		}
		if containsResource(resources, "jobs") {
			err = deleteJobs(kubeClient.Extensions(), namespace)
			if err != nil {
				return estimate, err
			}
		}
		if containsResource(resources, "deployments") {
			err = deleteDeployments(kubeClient.Extensions(), namespace)
			if err != nil {
				return estimate, err
			}
		}
	}
	return estimate, nil
}
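// containsVersion and containsResource are used above but not defined in this
// section. These are minimal sketches consistent with the call sites; the field
// names are taken from the unversioned API types of this era, so treat them as
// plausible rather than authoritative:
func containsVersion(versions *unversioned.APIVersions, version string) bool {
	for _, v := range versions.Versions {
		if v == version {
			return true
		}
	}
	return false
}

func containsResource(resources *unversioned.APIResourceList, resourceName string) bool {
	if resources == nil {
		return false
	}
	for _, r := range resources.APIResources {
		if r.Name == resourceName {
			return true
		}
	}
	return false
}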