// NewGarbageCollector builds a garbage collector for the given resources,
// wiring up the propagator and one monitor per (non-ignored) resource.
func NewGarbageCollector(clientPool dynamic.ClientPool, resources []unversioned.GroupVersionResource) (*GarbageCollector, error) {
	gc := &GarbageCollector{
		clientPool:  clientPool,
		dirtyQueue:  workqueue.New(),
		orphanQueue: workqueue.New(),
		// TODO: should use a dynamic RESTMapper built from the discovery results.
		restMapper: registered.RESTMapper(),
	}
	gc.propagator = &Propagator{
		eventQueue: workqueue.New(),
		uidToNode: &concurrentUIDToNode{
			RWMutex:   &sync.RWMutex{},
			uidToNode: make(map[types.UID]*node),
		},
		gc: gc,
	}
	for _, resource := range resources {
		if _, ok := ignoredResources[resource]; ok {
			glog.V(6).Infof("ignore resource %#v", resource)
			continue
		}
		monitor, err := monitorFor(gc.propagator, gc.clientPool, resource)
		if err != nil {
			return nil, err
		}
		gc.monitors = append(gc.monitors, monitor)
	}
	return gc, nil
}
func TestReinsert(t *testing.T) {
	q := workqueue.New()
	q.Add("foo")

	// Start processing
	i, _ := q.Get()
	if i != "foo" {
		t.Errorf("Expected %v, got %v", "foo", i)
	}

	// Add it back while processing
	q.Add(i)

	// Finish it up
	q.Done(i)

	// It should be back on the queue
	i, _ = q.Get()
	if i != "foo" {
		t.Errorf("Expected %v, got %v", "foo", i)
	}

	// Finish that one up
	q.Done(i)

	if a := q.Len(); a != 0 {
		t.Errorf("Expected queue to be empty. Has %v items", a)
	}
}
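// A standalone illustration (not from the original sources) of the semantics
// TestReinsert exercises above: an item re-Added while it is being processed is
// parked in a "dirty" set rather than the queue, and is only re-queued once
// Done is called. Assumes the workqueue package lives at
// k8s.io/kubernetes/pkg/util/workqueue, as in the snippets here.
package main

import (
	"fmt"

	"k8s.io/kubernetes/pkg/util/workqueue"
)

func main() {
	q := workqueue.New()
	q.Add("foo")
	item, _ := q.Get()   // "foo" is now marked as processing
	q.Add("foo")         // parked in the dirty set; not queued again yet
	fmt.Println(q.Len()) // 0 - dirty items are not counted as queued
	q.Done(item)         // processing finished; the dirty copy is re-queued
	fmt.Println(q.Len()) // 1
}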
// NewJobController creates a controller that watches Jobs and their Pods and
// enqueues jobs for the sync handler.
func NewJobController(kubeClient client.Interface) *JobController {
	eventBroadcaster := record.NewBroadcaster()
	eventBroadcaster.StartLogging(glog.Infof)
	eventBroadcaster.StartRecordingToSink(kubeClient.Events(""))

	jm := &JobController{
		kubeClient: kubeClient,
		podControl: controller.RealPodControl{
			KubeClient: kubeClient,
			Recorder:   eventBroadcaster.NewRecorder(api.EventSource{Component: "job"}),
		},
		expectations: controller.NewControllerExpectations(),
		queue:        workqueue.New(),
	}

	jm.jobStore.Store, jm.jobController = framework.NewInformer(
		&cache.ListWatch{
			ListFunc: func() (runtime.Object, error) {
				return jm.kubeClient.Experimental().Jobs(api.NamespaceAll).List(labels.Everything(), fields.Everything())
			},
			WatchFunc: func(rv string) (watch.Interface, error) {
				return jm.kubeClient.Experimental().Jobs(api.NamespaceAll).Watch(labels.Everything(), fields.Everything(), rv)
			},
		},
		&experimental.Job{},
		replicationcontroller.FullControllerResyncPeriod,
		framework.ResourceEventHandlerFuncs{
			AddFunc: jm.enqueueController,
			UpdateFunc: func(old, cur interface{}) {
				if job := cur.(*experimental.Job); !isJobFinished(job) {
					jm.enqueueController(job)
				}
			},
			DeleteFunc: jm.enqueueController,
		},
	)

	jm.podStore.Store, jm.podController = framework.NewInformer(
		&cache.ListWatch{
			ListFunc: func() (runtime.Object, error) {
				return jm.kubeClient.Pods(api.NamespaceAll).List(labels.Everything(), fields.Everything())
			},
			WatchFunc: func(rv string) (watch.Interface, error) {
				return jm.kubeClient.Pods(api.NamespaceAll).Watch(labels.Everything(), fields.Everything(), rv)
			},
		},
		&api.Pod{},
		replicationcontroller.PodRelistPeriod,
		framework.ResourceEventHandlerFuncs{
			AddFunc:    jm.addPod,
			UpdateFunc: jm.updatePod,
			DeleteFunc: jm.deletePod,
		},
	)

	jm.updateHandler = jm.updateJobStatus
	jm.syncHandler = jm.syncJob
	jm.podStoreSynced = jm.podController.HasSynced
	return jm
}
// newIPVSController creates a new controller from the given config.
func newIPVSController(kubeClient *unversioned.Client, namespace string, useUnicast bool, password string) *ipvsControllerController {
	ipvsc := ipvsControllerController{
		client:            kubeClient,
		queue:             workqueue.New(),
		reloadRateLimiter: util.NewTokenBucketRateLimiter(reloadQPS, int(reloadQPS)),
		reloadLock:        &sync.Mutex{},
	}

	clusterNodes := getClusterNodesIP(kubeClient)
	nodeInfo, err := getNodeInfo(clusterNodes)
	if err != nil {
		glog.Fatalf("Error getting local IP from nodes in the cluster: %v", err)
	}

	neighbors := getNodeNeighbors(nodeInfo, clusterNodes)
	ipvsc.keepalived = &keepalived{
		iface:      nodeInfo.iface,
		ip:         nodeInfo.ip,
		netmask:    nodeInfo.netmask,
		nodes:      clusterNodes,
		neighbors:  neighbors,
		priority:   getNodePriority(nodeInfo.ip, clusterNodes),
		useUnicast: useUnicast,
		password:   password,
	}

	enqueue := func(obj interface{}) {
		key, err := keyFunc(obj)
		if err != nil {
			glog.Infof("Couldn't get key for object %+v: %v", obj, err)
			return
		}
		ipvsc.queue.Add(key)
	}

	eventHandlers := framework.ResourceEventHandlerFuncs{
		AddFunc:    enqueue,
		DeleteFunc: enqueue,
		UpdateFunc: func(old, cur interface{}) {
			if !reflect.DeepEqual(old, cur) {
				enqueue(cur)
			}
		},
	}

	ipvsc.svcLister.Store, ipvsc.svcController = framework.NewInformer(
		cache.NewListWatchFromClient(
			ipvsc.client, "services", namespace, fields.Everything()),
		&api.Service{}, resyncPeriod, eventHandlers)

	ipvsc.epLister.Store, ipvsc.epController = framework.NewInformer(
		cache.NewListWatchFromClient(
			ipvsc.client, "endpoints", namespace, fields.Everything()),
		&api.Endpoints{}, resyncPeriod, eventHandlers)

	return &ipvsc
}
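// Hedged sketch, not part of the snippet above: how a controller like this
// typically consumes its reloadRateLimiter and reloadLock when applying
// configuration. The reload method name and keepalived.Reload are assumptions
// for illustration; Accept, which blocks until a token is available and so
// caps reloads at reloadQPS, is the real token-bucket rate limiter interface.
func (ipvsc *ipvsControllerController) reload() error {
	ipvsc.reloadRateLimiter.Accept()
	ipvsc.reloadLock.Lock()
	defer ipvsc.reloadLock.Unlock()
	return ipvsc.keepalived.Reload()
}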
// newQuotaEvaluator configures an admission controller that can enforce quota constraints
// using the provided registry. The registry must have the capability to handle group/kinds that
// are persisted by the server this admission controller is intercepting.
func newQuotaEvaluator(client clientset.Interface, registry quota.Registry) (*quotaEvaluator, error) {
	liveLookupCache, err := lru.New(100)
	if err != nil {
		return nil, err
	}
	updatedCache, err := lru.New(100)
	if err != nil {
		return nil, err
	}

	lw := &cache.ListWatch{
		ListFunc: func(options api.ListOptions) (runtime.Object, error) {
			return client.Core().ResourceQuotas(api.NamespaceAll).List(options)
		},
		WatchFunc: func(options api.ListOptions) (watch.Interface, error) {
			return client.Core().ResourceQuotas(api.NamespaceAll).Watch(options)
		},
	}
	indexer, reflector := cache.NewNamespaceKeyedIndexerAndReflector(lw, &api.ResourceQuota{}, 0)
	reflector.Run()

	return &quotaEvaluator{
		client:          client,
		indexer:         indexer,
		registry:        registry,
		liveLookupCache: liveLookupCache,
		liveTTL:         time.Duration(30 * time.Second),
		updatedQuotas:   updatedCache,

		queue:      workqueue.New(),
		work:       map[string][]*admissionWaiter{},
		dirtyWork:  map[string][]*admissionWaiter{},
		inProgress: sets.String{},
	}, nil
}
// NewTaskQueue creates a new task queue with the given sync function.
// The sync function is called for every element inserted into the queue.
func NewTaskQueue(syncFn func(string)) *taskQueue {
	return &taskQueue{
		queue:      workqueue.New(),
		sync:       syncFn,
		workerDone: make(chan struct{}),
	}
}
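// Hedged sketch, not from the source: the worker a taskQueue like the one
// above typically pairs with. It drains the queue, invokes the stored sync
// function for every key, and signals workerDone once the queue shuts down.
// The worker method name is an assumption for illustration.
func (t *taskQueue) worker() {
	for {
		key, quit := t.queue.Get()
		if quit {
			// Unblock anyone waiting for a clean shutdown.
			close(t.workerDone)
			return
		}
		t.sync(key.(string))
		t.queue.Done(key)
	}
}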
// newLoadBalancerController creates a new controller from the given config.
func newLoadBalancerController(cfg *loadBalancerConfig, kubeClient *unversioned.Client, namespace string) *loadBalancerController {
	lbc := loadBalancerController{
		cfg:    cfg,
		client: kubeClient,
		queue:  workqueue.New(),
		reloadRateLimiter: util.NewTokenBucketRateLimiter(
			reloadQPS, int(reloadQPS)),
		targetService:   *targetService,
		forwardServices: *forwardServices,
		httpPort:        *httpPort,
		tcpServices:     map[string]int{},
	}

	for _, service := range strings.Split(*tcpServices, ",") {
		portSplit := strings.Split(service, ":")
		if len(portSplit) != 2 {
			glog.Errorf("Ignoring misconfigured TCP service %v", service)
			continue
		}
		if port, err := strconv.Atoi(portSplit[1]); err != nil {
			glog.Errorf("Ignoring misconfigured TCP service %v: %v", service, err)
			continue
		} else {
			lbc.tcpServices[portSplit[0]] = port
		}
	}

	enqueue := func(obj interface{}) {
		key, err := keyFunc(obj)
		if err != nil {
			glog.Infof("Couldn't get key for object %+v: %v", obj, err)
			return
		}
		lbc.queue.Add(key)
	}

	eventHandlers := framework.ResourceEventHandlerFuncs{
		AddFunc:    enqueue,
		DeleteFunc: enqueue,
		UpdateFunc: func(old, cur interface{}) {
			if !reflect.DeepEqual(old, cur) {
				enqueue(cur)
			}
		},
	}

	lbc.svcLister.Store, lbc.svcController = framework.NewInformer(
		cache.NewListWatchFromClient(
			lbc.client, "services", namespace, fields.Everything()),
		&api.Service{}, resyncPeriod, eventHandlers)

	lbc.epLister.Store, lbc.epController = framework.NewInformer(
		cache.NewListWatchFromClient(
			lbc.client, "endpoints", namespace, fields.Everything()),
		&api.Endpoints{}, resyncPeriod, eventHandlers)

	return &lbc
}
// NewCertificateController creates a controller that watches
// CertificateSigningRequests and hands them to the sync handler, which may
// sign them with the CA configured from the given cert/key files.
func NewCertificateController(kubeClient clientset.Interface, syncPeriod time.Duration, caCertFile, caKeyFile string, approveAllKubeletCSRsForGroup string) (*CertificateController, error) {
	// Send events to the apiserver
	eventBroadcaster := record.NewBroadcaster()
	eventBroadcaster.StartLogging(glog.Infof)
	eventBroadcaster.StartRecordingToSink(&unversionedcore.EventSinkImpl{Interface: kubeClient.Core().Events("")})

	// Configure cfssl signer
	// TODO: support non-default policy and remote/pkcs11 signing
	policy := &config.Signing{
		Default: config.DefaultConfig(),
	}
	ca, err := local.NewSignerFromFile(caCertFile, caKeyFile, policy)
	if err != nil {
		return nil, err
	}

	cc := &CertificateController{
		kubeClient: kubeClient,
		queue:      workqueue.New(),
		signer:     ca,
		approveAllKubeletCSRsForGroup: approveAllKubeletCSRsForGroup,
	}

	// Manage the addition/update of certificate requests
	cc.csrStore.Store, cc.csrController = framework.NewInformer(
		&cache.ListWatch{
			ListFunc: func(options api.ListOptions) (runtime.Object, error) {
				return cc.kubeClient.Certificates().CertificateSigningRequests().List(options)
			},
			WatchFunc: func(options api.ListOptions) (watch.Interface, error) {
				return cc.kubeClient.Certificates().CertificateSigningRequests().Watch(options)
			},
		},
		&certificates.CertificateSigningRequest{},
		syncPeriod,
		framework.ResourceEventHandlerFuncs{
			AddFunc: func(obj interface{}) {
				csr := obj.(*certificates.CertificateSigningRequest)
				glog.V(4).Infof("Adding certificate request %s", csr.Name)
				cc.enqueueCertificateRequest(obj)
			},
			UpdateFunc: func(old, new interface{}) {
				oldCSR := old.(*certificates.CertificateSigningRequest)
				glog.V(4).Infof("Updating certificate request %s", oldCSR.Name)
				cc.enqueueCertificateRequest(new)
			},
			DeleteFunc: func(obj interface{}) {
				csr := obj.(*certificates.CertificateSigningRequest)
				glog.V(4).Infof("Deleting certificate request %s", csr.Name)
				cc.enqueueCertificateRequest(obj)
			},
		},
	)

	cc.syncHandler = cc.maybeSignCertificate
	return cc, nil
}
// NewJobController creates a job controller that uses a shared pod informer
// and its own Jobs informer, and enqueues jobs for the sync handler.
func NewJobController(podInformer framework.SharedIndexInformer, kubeClient clientset.Interface) *JobController {
	eventBroadcaster := record.NewBroadcaster()
	eventBroadcaster.StartLogging(glog.Infof)
	// TODO: remove the wrapper when all clients have moved to use the clientset.
	eventBroadcaster.StartRecordingToSink(&unversionedcore.EventSinkImpl{Interface: kubeClient.Core().Events("")})

	if kubeClient != nil && kubeClient.Core().GetRESTClient().GetRateLimiter() != nil {
		metrics.RegisterMetricAndTrackRateLimiterUsage("job_controller", kubeClient.Core().GetRESTClient().GetRateLimiter())
	}

	jm := &JobController{
		kubeClient: kubeClient,
		podControl: controller.RealPodControl{
			KubeClient: kubeClient,
			Recorder:   eventBroadcaster.NewRecorder(api.EventSource{Component: "job-controller"}),
		},
		expectations: controller.NewControllerExpectations(),
		queue:        workqueue.New(),
		recorder:     eventBroadcaster.NewRecorder(api.EventSource{Component: "job-controller"}),
	}

	jm.jobStore.Store, jm.jobController = framework.NewInformer(
		&cache.ListWatch{
			ListFunc: func(options api.ListOptions) (runtime.Object, error) {
				return jm.kubeClient.Batch().Jobs(api.NamespaceAll).List(options)
			},
			WatchFunc: func(options api.ListOptions) (watch.Interface, error) {
				return jm.kubeClient.Batch().Jobs(api.NamespaceAll).Watch(options)
			},
		},
		&batch.Job{},
		// TODO: Can we have much longer period here?
		replicationcontroller.FullControllerResyncPeriod,
		framework.ResourceEventHandlerFuncs{
			AddFunc: jm.enqueueController,
			UpdateFunc: func(old, cur interface{}) {
				if job := cur.(*batch.Job); !IsJobFinished(job) {
					jm.enqueueController(job)
				}
			},
			DeleteFunc: jm.enqueueController,
		},
	)

	podInformer.AddEventHandler(framework.ResourceEventHandlerFuncs{
		AddFunc:    jm.addPod,
		UpdateFunc: jm.updatePod,
		DeleteFunc: jm.deletePod,
	})
	jm.podStore.Indexer = podInformer.GetIndexer()
	jm.podStoreSynced = podInformer.HasSynced

	jm.updateHandler = jm.updateJobStatus
	jm.syncHandler = jm.syncJob
	return jm
}
// newReplicationManager configures a replication manager with the specified event recorder
func newReplicationManager(eventRecorder record.EventRecorder, podInformer framework.SharedIndexInformer, kubeClient clientset.Interface, resyncPeriod controller.ResyncPeriodFunc, burstReplicas int, lookupCacheSize int, garbageCollectorEnabled bool) *ReplicationManager {
	if kubeClient != nil && kubeClient.Core().GetRESTClient().GetRateLimiter() != nil {
		metrics.RegisterMetricAndTrackRateLimiterUsage("replication_controller", kubeClient.Core().GetRESTClient().GetRateLimiter())
	}

	rm := &ReplicationManager{
		kubeClient: kubeClient,
		podControl: controller.RealPodControl{
			KubeClient: kubeClient,
			Recorder:   eventRecorder,
		},
		burstReplicas: burstReplicas,
		expectations:  controller.NewUIDTrackingControllerExpectations(controller.NewControllerExpectations()),
		queue:         workqueue.New(),
		garbageCollectorEnabled: garbageCollectorEnabled,
	}

	rm.rcStore.Indexer, rm.rcController = framework.NewIndexerInformer(
		&cache.ListWatch{
			ListFunc: func(options api.ListOptions) (runtime.Object, error) {
				return rm.kubeClient.Core().ReplicationControllers(api.NamespaceAll).List(options)
			},
			WatchFunc: func(options api.ListOptions) (watch.Interface, error) {
				return rm.kubeClient.Core().ReplicationControllers(api.NamespaceAll).Watch(options)
			},
		},
		&api.ReplicationController{},
		// TODO: Can we have much longer period here?
		FullControllerResyncPeriod,
		framework.ResourceEventHandlerFuncs{
			AddFunc:    rm.enqueueController,
			UpdateFunc: rm.updateRC,
			// This will enter the sync loop and no-op, because the controller has been deleted from the store.
			// Note that deleting a controller immediately after scaling it to 0 will not work. The recommended
			// way of achieving this is by performing a `stop` operation on the controller.
			DeleteFunc: rm.enqueueController,
		},
		cache.Indexers{cache.NamespaceIndex: cache.MetaNamespaceIndexFunc},
	)

	podInformer.AddEventHandler(framework.ResourceEventHandlerFuncs{
		AddFunc: rm.addPod,
		// This invokes the rc for every pod change, eg: host assignment. Though this might seem like overkill
		// the most frequent pod update is status, and the associated rc will only list from local storage, so
		// it should be ok.
		UpdateFunc: rm.updatePod,
		DeleteFunc: rm.deletePod,
	})
	rm.podStore.Indexer = podInformer.GetIndexer()
	rm.podController = podInformer.GetController()

	rm.syncHandler = rm.syncReplicationController
	rm.podStoreSynced = rm.podController.HasSynced
	rm.lookupCache = controller.NewMatchingCache(lookupCacheSize)
	return rm
}
// NewPetSetController creates a new petset controller.
func NewPetSetController(podInformer framework.SharedIndexInformer, kubeClient *client.Client, resyncPeriod time.Duration) *PetSetController {
	eventBroadcaster := record.NewBroadcaster()
	eventBroadcaster.StartLogging(glog.Infof)
	eventBroadcaster.StartRecordingToSink(kubeClient.Events(""))
	recorder := eventBroadcaster.NewRecorder(api.EventSource{Component: "petset"})
	pc := &apiServerPetClient{kubeClient, recorder, &defaultPetHealthChecker{}}

	psc := &PetSetController{
		kubeClient:       kubeClient,
		blockingPetStore: newUnHealthyPetTracker(pc),
		newSyncer: func(blockingPet *pcb) *petSyncer {
			return &petSyncer{pc, blockingPet}
		},
		queue: workqueue.New(),
	}

	podInformer.AddEventHandler(framework.ResourceEventHandlerFuncs{
		// lookup the petset and enqueue
		AddFunc: psc.addPod,
		// lookup current and old petset if labels changed
		UpdateFunc: psc.updatePod,
		// lookup petset accounting for deletion tombstones
		DeleteFunc: psc.deletePod,
	})
	psc.podStore.Indexer = podInformer.GetIndexer()
	psc.podController = podInformer.GetController()

	psc.psStore.Store, psc.psController = framework.NewInformer(
		&cache.ListWatch{
			ListFunc: func(options api.ListOptions) (runtime.Object, error) {
				return psc.kubeClient.Apps().PetSets(api.NamespaceAll).List(options)
			},
			WatchFunc: func(options api.ListOptions) (watch.Interface, error) {
				return psc.kubeClient.Apps().PetSets(api.NamespaceAll).Watch(options)
			},
		},
		&apps.PetSet{},
		petSetResyncPeriod,
		framework.ResourceEventHandlerFuncs{
			AddFunc: psc.enqueuePetSet,
			UpdateFunc: func(old, cur interface{}) {
				oldPS := old.(*apps.PetSet)
				curPS := cur.(*apps.PetSet)
				if oldPS.Status.Replicas != curPS.Status.Replicas {
					glog.V(4).Infof("Observed updated replica count for PetSet: %v, %d->%d", curPS.Name, oldPS.Status.Replicas, curPS.Status.Replicas)
				}
				psc.enqueuePetSet(cur)
			},
			DeleteFunc: psc.enqueuePetSet,
		},
	)
	// TODO: Watch volumes
	psc.podStoreSynced = psc.podController.HasSynced
	psc.syncHandler = psc.Sync
	return psc
}
// NewEndpointController returns a new *EndpointController.
func NewEndpointController(client *client.Client, resyncPeriod controller.ResyncPeriodFunc) *EndpointController {
	e := &EndpointController{
		client: client,
		queue:  workqueue.New(),
	}

	e.serviceStore.Store, e.serviceController = framework.NewInformer(
		&cache.ListWatch{
			ListFunc: func() (runtime.Object, error) {
				return e.client.Services(api.NamespaceAll).List(labels.Everything(), fields.Everything())
			},
			WatchFunc: func(rv string) (watch.Interface, error) {
				options := api.ListOptions{ResourceVersion: rv}
				return e.client.Services(api.NamespaceAll).Watch(labels.Everything(), fields.Everything(), options)
			},
		},
		&api.Service{},
		// TODO: Can we have much longer period here?
		FullServiceResyncPeriod,
		framework.ResourceEventHandlerFuncs{
			AddFunc: e.enqueueService,
			UpdateFunc: func(old, cur interface{}) {
				e.enqueueService(cur)
			},
			DeleteFunc: e.enqueueService,
		},
	)

	e.podStore.Store, e.podController = framework.NewInformer(
		&cache.ListWatch{
			ListFunc: func() (runtime.Object, error) {
				return e.client.Pods(api.NamespaceAll).List(labels.Everything(), fields.Everything())
			},
			WatchFunc: func(rv string) (watch.Interface, error) {
				options := api.ListOptions{ResourceVersion: rv}
				return e.client.Pods(api.NamespaceAll).Watch(labels.Everything(), fields.Everything(), options)
			},
		},
		&api.Pod{},
		resyncPeriod(),
		framework.ResourceEventHandlerFuncs{
			AddFunc:    e.addPod,
			UpdateFunc: e.updatePod,
			DeleteFunc: e.deletePod,
		},
	)

	return e
}
func TestLen(t *testing.T) {
	q := workqueue.New()
	q.Add("foo")
	if e, a := 1, q.Len(); e != a {
		t.Errorf("Expected %v, got %v", e, a)
	}
	q.Add("bar")
	if e, a := 2, q.Len(); e != a {
		t.Errorf("Expected %v, got %v", e, a)
	}
	q.Add("foo") // should not increase the queue length.
	if e, a := 2, q.Len(); e != a {
		t.Errorf("Expected %v, got %v", e, a)
	}
}
// NewQuotaEvaluator configures an admission controller that can enforce quota constraints
// using the provided registry. The registry must have the capability to handle group/kinds that
// are persisted by the server this admission controller is intercepting.
func NewQuotaEvaluator(quotaAccessor QuotaAccessor, registry quota.Registry, workers int, stopCh <-chan struct{}) Evaluator {
	return &quotaEvaluator{
		quotaAccessor: quotaAccessor,
		registry:      registry,

		queue:      workqueue.New(),
		work:       map[string][]*admissionWaiter{},
		dirtyWork:  map[string][]*admissionWaiter{},
		inProgress: sets.String{},

		workers: workers,
		stopCh:  stopCh,
	}
}
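// Hedged sketch, not from the source: how an evaluator constructed above
// typically launches its configured number of workers against the queue.
// wait.Until and utilruntime.HandleCrash are real helpers from the Kubernetes
// util packages; the run and doWork method names are assumptions for
// illustration.
func (e *quotaEvaluator) run() {
	defer utilruntime.HandleCrash()
	for i := 0; i < e.workers; i++ {
		go wait.Until(e.doWork, time.Second, e.stopCh)
	}
}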
// newLoadBalancerController creates a new controller from the given config.
func newLoadBalancerController(c *client.Client, namespace string, domain string, nodes []string) *loadBalancerController {
	mgr := &haproxy.HAProxyManager{
		Exec:       exec.New(),
		ConfigFile: "haproxy.cfg",
		DomainName: domain,
	}

	lbc := loadBalancerController{
		client:            c,
		queue:             workqueue.New(),
		reloadRateLimiter: util.NewTokenBucketRateLimiter(reloadQPS, int(reloadQPS)),
		haproxy:           mgr,
		domain:            domain,
		clusterNodes:      nodes,
	}

	enqueue := func(obj interface{}) {
		key, err := keyFunc(obj)
		if err != nil {
			glog.Infof("Couldn't get key for object %+v: %v", obj, err)
			return
		}
		lbc.queue.Add(key)
	}

	eventHandlers := framework.ResourceEventHandlerFuncs{
		AddFunc:    enqueue,
		DeleteFunc: enqueue,
		UpdateFunc: func(old, cur interface{}) {
			if !reflect.DeepEqual(old, cur) {
				enqueue(cur)
			}
		},
	}

	lbc.svcLister.Store, lbc.svcController = framework.NewInformer(
		cache.NewListWatchFromClient(
			lbc.client, "services", namespace, fields.Everything()),
		&api.Service{}, resyncPeriod, eventHandlers)

	lbc.epLister.Store, lbc.epController = framework.NewInformer(
		cache.NewListWatchFromClient(
			lbc.client, "endpoints", namespace, fields.Everything()),
		&api.Endpoints{}, resyncPeriod, eventHandlers)

	return &lbc
}
// NewNamespaceController creates a new NamespaceController
func NewNamespaceController(
	kubeClient clientset.Interface,
	clientPool dynamic.ClientPool,
	groupVersionResources []unversioned.GroupVersionResource,
	resyncPeriod time.Duration,
	finalizerToken api.FinalizerName) *NamespaceController {

	// create the controller so we can inject the enqueue function
	namespaceController := &NamespaceController{
		kubeClient: kubeClient,
		clientPool: clientPool,
		queue:      workqueue.New(),
		groupVersionResources: groupVersionResources,
		opCache:               operationNotSupportedCache{},
		finalizerToken:        finalizerToken,
	}

	if kubeClient != nil && kubeClient.Core().GetRESTClient().GetRateLimiter() != nil {
		metrics.RegisterMetricAndTrackRateLimiterUsage("namespace_controller", kubeClient.Core().GetRESTClient().GetRateLimiter())
	}

	// configure the backing store/controller
	store, controller := framework.NewInformer(
		&cache.ListWatch{
			ListFunc: func(options api.ListOptions) (runtime.Object, error) {
				return kubeClient.Core().Namespaces().List(options)
			},
			WatchFunc: func(options api.ListOptions) (watch.Interface, error) {
				return kubeClient.Core().Namespaces().Watch(options)
			},
		},
		&api.Namespace{},
		resyncPeriod,
		framework.ResourceEventHandlerFuncs{
			AddFunc: func(obj interface{}) {
				namespace := obj.(*api.Namespace)
				namespaceController.enqueueNamespace(namespace)
			},
			UpdateFunc: func(oldObj, newObj interface{}) {
				namespace := newObj.(*api.Namespace)
				namespaceController.enqueueNamespace(namespace)
			},
		},
	)

	namespaceController.store = store
	namespaceController.controller = controller
	return namespaceController
}
func TestBasic(t *testing.T) {
	// If something is seriously wrong this test will never complete.
	q := workqueue.New()

	// Start producers
	const producers = 50
	producerWG := sync.WaitGroup{}
	producerWG.Add(producers)
	for i := 0; i < producers; i++ {
		go func(i int) {
			defer producerWG.Done()
			for j := 0; j < 50; j++ {
				q.Add(i)
				time.Sleep(time.Millisecond)
			}
		}(i)
	}

	// Start consumers
	const consumers = 10
	consumerWG := sync.WaitGroup{}
	consumerWG.Add(consumers)
	for i := 0; i < consumers; i++ {
		go func(i int) {
			defer consumerWG.Done()
			for {
				item, quit := q.Get()
				if item == "added after shutdown!" {
					t.Errorf("Got an item added after shutdown.")
				}
				if quit {
					return
				}
				t.Logf("Worker %v: begin processing %v", i, item)
				time.Sleep(3 * time.Millisecond)
				t.Logf("Worker %v: done processing %v", i, item)
				q.Done(item)
			}
		}(i)
	}

	producerWG.Wait()
	q.ShutDown()
	q.Add("added after shutdown!")
	consumerWG.Wait()
}
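// A standalone sketch (not from the original sources) of the shutdown contract
// TestBasic relies on: items queued before ShutDown are still delivered and
// drained, an Add after ShutDown is silently dropped, and Get reports
// quit=true once the queue is empty. Assumes the same workqueue import path
// used throughout these snippets.
package main

import (
	"fmt"

	"k8s.io/kubernetes/pkg/util/workqueue"
)

func main() {
	q := workqueue.New()
	q.Add("pending")
	q.ShutDown()
	q.Add("dropped") // ignored: the queue is already shutting down

	for {
		item, quit := q.Get()
		if quit {
			fmt.Println("queue drained") // reached after "pending" is handed out
			return
		}
		fmt.Println("got", item) // prints: got pending
		q.Done(item)
	}
}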
// NewEndpointController returns a new *EndpointController.
func NewEndpointController(client *client.Client) *EndpointController {
	e := &EndpointController{
		client: client,
		queue:  workqueue.New(),
	}

	e.serviceStore.Store, e.serviceController = framework.NewInformer(
		&cache.ListWatch{
			ListFunc: func() (runtime.Object, error) {
				return e.client.Services(api.NamespaceAll).List(labels.Everything())
			},
			WatchFunc: func(rv string) (watch.Interface, error) {
				return e.client.Services(api.NamespaceAll).Watch(labels.Everything(), fields.Everything(), rv)
			},
		},
		&api.Service{},
		FullServiceResyncPeriod,
		framework.ResourceEventHandlerFuncs{
			AddFunc: e.enqueueService,
			UpdateFunc: func(old, cur interface{}) {
				e.enqueueService(cur)
			},
			DeleteFunc: e.enqueueService,
		},
	)

	e.podStore.Store, e.podController = framework.NewInformer(
		&cache.ListWatch{
			ListFunc: func() (runtime.Object, error) {
				return e.client.Pods(api.NamespaceAll).List(labels.Everything(), fields.Everything())
			},
			WatchFunc: func(rv string) (watch.Interface, error) {
				return e.client.Pods(api.NamespaceAll).Watch(labels.Everything(), fields.Everything(), rv)
			},
		},
		&api.Pod{},
		PodRelistPeriod,
		framework.ResourceEventHandlerFuncs{
			AddFunc:    e.addPod,
			UpdateFunc: e.updatePod,
			DeleteFunc: e.deletePod,
		},
	)

	return e
}
// NewEndpointController returns a new *endpointController.
func NewEndpointController(client *clientset.Clientset) *endpointController {
	e := &endpointController{
		client: client,
		queue:  workqueue.New(),
	}

	e.serviceStore.Store, e.serviceController = framework.NewInformer(
		&cache.ListWatch{
			ListFunc: func(options api.ListOptions) (runtime.Object, error) {
				return e.client.Core().Services(api.NamespaceAll).List(options)
			},
			WatchFunc: func(options api.ListOptions) (watch.Interface, error) {
				return e.client.Core().Services(api.NamespaceAll).Watch(options)
			},
		},
		&api.Service{},
		kservice.FullServiceResyncPeriod,
		framework.ResourceEventHandlerFuncs{
			AddFunc: e.enqueueService,
			UpdateFunc: func(old, cur interface{}) {
				e.enqueueService(cur)
			},
			DeleteFunc: e.enqueueService,
		},
	)

	e.podStore.Indexer, e.podController = framework.NewIndexerInformer(
		&cache.ListWatch{
			ListFunc: func(options api.ListOptions) (runtime.Object, error) {
				return e.client.Core().Pods(api.NamespaceAll).List(options)
			},
			WatchFunc: func(options api.ListOptions) (watch.Interface, error) {
				return e.client.Core().Pods(api.NamespaceAll).Watch(options)
			},
		},
		&api.Pod{},
		5*time.Minute,
		framework.ResourceEventHandlerFuncs{
			AddFunc:    e.addPod,
			UpdateFunc: e.updatePod,
			DeleteFunc: e.deletePod,
		},
		cache.Indexers{cache.NamespaceIndex: cache.MetaNamespaceIndexFunc},
	)

	return e
}
// newLoadBalancerController creates a new controller from the given config.
func newLoadBalancerController(cfg *loadBalancerConfig, kubeClient *unversioned.Client, namespace string, tcpServices map[string]int) *loadBalancerController {
	lbc := loadBalancerController{
		cfg:    cfg,
		client: kubeClient,
		queue:  workqueue.New(),
		reloadRateLimiter: util.NewTokenBucketRateLimiter(
			reloadQPS, int(reloadQPS)),
		targetService:   *targetService,
		forwardServices: *forwardServices,
		httpPort:        *httpPort,
		tcpServices:     tcpServices,
	}

	enqueue := func(obj interface{}) {
		key, err := keyFunc(obj)
		if err != nil {
			glog.Infof("Couldn't get key for object %+v: %v", obj, err)
			return
		}
		lbc.queue.Add(key)
	}

	eventHandlers := framework.ResourceEventHandlerFuncs{
		AddFunc:    enqueue,
		DeleteFunc: enqueue,
		UpdateFunc: func(old, cur interface{}) {
			if !reflect.DeepEqual(old, cur) {
				enqueue(cur)
			}
		},
	}

	lbc.svcLister.Store, lbc.svcController = framework.NewInformer(
		cache.NewListWatchFromClient(
			lbc.client, "services", namespace, fields.Everything()),
		&api.Service{}, resyncPeriod, eventHandlers)

	lbc.epLister.Store, lbc.epController = framework.NewInformer(
		cache.NewListWatchFromClient(
			lbc.client, "endpoints", namespace, fields.Everything()),
		&api.Endpoints{}, resyncPeriod, eventHandlers)

	return &lbc
}
func TestAddWhileProcessing(t *testing.T) {
	q := workqueue.New()

	// Start producers
	const producers = 50
	producerWG := sync.WaitGroup{}
	producerWG.Add(producers)
	for i := 0; i < producers; i++ {
		go func(i int) {
			defer producerWG.Done()
			q.Add(i)
		}(i)
	}

	// Start consumers
	const consumers = 10
	consumerWG := sync.WaitGroup{}
	consumerWG.Add(consumers)
	for i := 0; i < consumers; i++ {
		go func(i int) {
			defer consumerWG.Done()
			// Every worker will re-add every item up to two times.
			// This tests the dirty-while-processing case.
			counters := map[interface{}]int{}
			for {
				item, quit := q.Get()
				if quit {
					return
				}
				counters[item]++
				if counters[item] < 2 {
					q.Add(item)
				}
				q.Done(item)
			}
		}(i)
	}

	producerWG.Wait()
	q.ShutDown()
	consumerWG.Wait()
}
// NewEndpointController returns a new *EndpointController.
func NewEndpointController(podInformer framework.SharedIndexInformer, client *clientset.Clientset) *EndpointController {
	if client != nil && client.Core().GetRESTClient().GetRateLimiter() != nil {
		metrics.RegisterMetricAndTrackRateLimiterUsage("endpoint_controller", client.Core().GetRESTClient().GetRateLimiter())
	}

	e := &EndpointController{
		client: client,
		queue:  workqueue.New(),
	}

	e.serviceStore.Store, e.serviceController = framework.NewInformer(
		&cache.ListWatch{
			ListFunc: func(options api.ListOptions) (runtime.Object, error) {
				return e.client.Core().Services(api.NamespaceAll).List(options)
			},
			WatchFunc: func(options api.ListOptions) (watch.Interface, error) {
				return e.client.Core().Services(api.NamespaceAll).Watch(options)
			},
		},
		&api.Service{},
		// TODO: Can we have much longer period here?
		FullServiceResyncPeriod,
		framework.ResourceEventHandlerFuncs{
			AddFunc: e.enqueueService,
			UpdateFunc: func(old, cur interface{}) {
				e.enqueueService(cur)
			},
			DeleteFunc: e.enqueueService,
		},
	)

	podInformer.AddEventHandler(framework.ResourceEventHandlerFuncs{
		AddFunc:    e.addPod,
		UpdateFunc: e.updatePod,
		DeleteFunc: e.deletePod,
	})
	e.podStore.Indexer = podInformer.GetIndexer()
	e.podController = podInformer.GetController()
	e.podStoreSynced = podInformer.HasSynced

	return e
}
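// Hedged sketch, not from the source: the Run/worker pair that consumes the
// queue the constructor above fills. The worker method, the syncService
// handler, and the one-second retry period are assumptions for illustration;
// the Get/Done pairing is the part the workqueue actually requires.
func (e *EndpointController) Run(workers int, stopCh <-chan struct{}) {
	defer utilruntime.HandleCrash()
	defer e.queue.ShutDown()
	go e.serviceController.Run(stopCh)
	for i := 0; i < workers; i++ {
		go wait.Until(e.worker, time.Second, stopCh)
	}
	<-stopCh
}

func (e *EndpointController) worker() {
	for {
		key, quit := e.queue.Get()
		if quit {
			return
		}
		// Done must follow every Get, even on error, or a key re-Added while
		// processing would never be delivered again.
		func() {
			defer e.queue.Done(key)
			e.syncService(key.(string))
		}()
	}
}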
// NewNamespaceController creates a new NamespaceController
func NewNamespaceController(kubeClient clientset.Interface, versions *unversioned.APIVersions, resyncPeriod time.Duration) *NamespaceController {
	// create the controller so we can inject the enqueue function
	namespaceController := &NamespaceController{
		kubeClient: kubeClient,
		versions:   versions,
		queue:      workqueue.New(),
	}

	// configure the backing store/controller
	store, controller := framework.NewInformer(
		&cache.ListWatch{
			ListFunc: func(options api.ListOptions) (runtime.Object, error) {
				return kubeClient.Core().Namespaces().List(options)
			},
			WatchFunc: func(options api.ListOptions) (watch.Interface, error) {
				return kubeClient.Core().Namespaces().Watch(options)
			},
		},
		&api.Namespace{},
		resyncPeriod,
		framework.ResourceEventHandlerFuncs{
			AddFunc: func(obj interface{}) {
				namespace := obj.(*api.Namespace)
				namespaceController.enqueueNamespace(namespace)
			},
			UpdateFunc: func(oldObj, newObj interface{}) {
				namespace := newObj.(*api.Namespace)
				namespaceController.enqueueNamespace(namespace)
			},
		},
	)

	namespaceController.store = store
	namespaceController.controller = controller
	return namespaceController
}
// NewReplicaSetController creates a new ReplicaSetController.
func NewReplicaSetController(kubeClient clientset.Interface, resyncPeriod controller.ResyncPeriodFunc, burstReplicas int) *ReplicaSetController {
	eventBroadcaster := record.NewBroadcaster()
	eventBroadcaster.StartLogging(glog.Infof)
	eventBroadcaster.StartRecordingToSink(&unversioned_legacy.EventSinkImpl{kubeClient.Legacy().Events("")})

	rsc := &ReplicaSetController{
		kubeClient: kubeClient,
		podControl: controller.RealPodControl{
			KubeClient: kubeClient,
			Recorder:   eventBroadcaster.NewRecorder(api.EventSource{Component: "replicaset-controller"}),
		},
		burstReplicas: burstReplicas,
		expectations:  controller.NewControllerExpectations(),
		queue:         workqueue.New(),
	}

	rsc.rsStore.Store, rsc.rsController = framework.NewInformer(
		&cache.ListWatch{
			ListFunc: func(options api.ListOptions) (runtime.Object, error) {
				return rsc.kubeClient.Extensions().ReplicaSets(api.NamespaceAll).List(options)
			},
			WatchFunc: func(options api.ListOptions) (watch.Interface, error) {
				return rsc.kubeClient.Extensions().ReplicaSets(api.NamespaceAll).Watch(options)
			},
		},
		&extensions.ReplicaSet{},
		// TODO: Can we have much longer period here?
		FullControllerResyncPeriod,
		framework.ResourceEventHandlerFuncs{
			AddFunc: rsc.enqueueReplicaSet,
			UpdateFunc: func(old, cur interface{}) {
				// You might imagine that we only really need to enqueue the
				// replica set when Spec changes, but it is safer to sync any
				// time this function is triggered. That way a full informer
				// resync can requeue any replica sets that don't yet have pods
				// but whose last attempts at creating a pod have failed (since
				// we don't block on creation of pods) instead of those
				// replica sets stalling indefinitely. Enqueueing every time
				// does result in some spurious syncs (like when Status.Replicas
				// is updated and the watch notification from it retriggers
				// this function), but in general extra resyncs shouldn't be
				// that bad as ReplicaSets that haven't met expectations yet won't
				// sync, and all the listing is done using local stores.
				oldRS := old.(*extensions.ReplicaSet)
				curRS := cur.(*extensions.ReplicaSet)
				if oldRS.Status.Replicas != curRS.Status.Replicas {
					glog.V(4).Infof("Observed updated replica count for ReplicaSet: %v, %d->%d", curRS.Name, oldRS.Status.Replicas, curRS.Status.Replicas)
				}
				rsc.enqueueReplicaSet(cur)
			},
			// This will enter the sync loop and no-op, because the replica set has been deleted from the store.
			// Note that deleting a replica set immediately after scaling it to 0 will not work. The recommended
			// way of achieving this is by performing a `stop` operation on the replica set.
			DeleteFunc: rsc.enqueueReplicaSet,
		},
	)

	rsc.podStore.Store, rsc.podController = framework.NewInformer(
		&cache.ListWatch{
			ListFunc: func(options api.ListOptions) (runtime.Object, error) {
				return rsc.kubeClient.Legacy().Pods(api.NamespaceAll).List(options)
			},
			WatchFunc: func(options api.ListOptions) (watch.Interface, error) {
				return rsc.kubeClient.Legacy().Pods(api.NamespaceAll).Watch(options)
			},
		},
		&api.Pod{},
		resyncPeriod(),
		framework.ResourceEventHandlerFuncs{
			AddFunc: rsc.addPod,
			// This invokes the ReplicaSet for every pod change, eg: host assignment. Though this might seem like
			// overkill the most frequent pod update is status, and the associated ReplicaSet will only list from
			// local storage, so it should be ok.
			UpdateFunc: rsc.updatePod,
			DeleteFunc: rsc.deletePod,
		},
	)

	rsc.syncHandler = rsc.syncReplicaSet
	rsc.podStoreSynced = rsc.podController.HasSynced
	return rsc
}
// NewDaemonSetsController creates a controller that watches DaemonSets, Pods,
// and Nodes and enqueues daemon sets for the sync handler.
func NewDaemonSetsController(kubeClient client.Interface, resyncPeriod controller.ResyncPeriodFunc) *DaemonSetsController {
	eventBroadcaster := record.NewBroadcaster()
	eventBroadcaster.StartLogging(glog.Infof)
	eventBroadcaster.StartRecordingToSink(kubeClient.Events(""))

	dsc := &DaemonSetsController{
		kubeClient: kubeClient,
		podControl: controller.RealPodControl{
			KubeClient: kubeClient,
			Recorder:   eventBroadcaster.NewRecorder(api.EventSource{Component: "daemon-set"}),
		},
		expectations: controller.NewControllerExpectations(),
		queue:        workqueue.New(),
	}

	// Manage addition/update of daemon sets.
	dsc.dsStore.Store, dsc.dsController = framework.NewInformer(
		&cache.ListWatch{
			ListFunc: func() (runtime.Object, error) {
				return dsc.kubeClient.Extensions().DaemonSets(api.NamespaceAll).List(labels.Everything(), fields.Everything(), unversioned.ListOptions{})
			},
			WatchFunc: func(options unversioned.ListOptions) (watch.Interface, error) {
				return dsc.kubeClient.Extensions().DaemonSets(api.NamespaceAll).Watch(options)
			},
		},
		&extensions.DaemonSet{},
		// TODO: Can we have much longer period here?
		FullDaemonSetResyncPeriod,
		framework.ResourceEventHandlerFuncs{
			AddFunc: func(obj interface{}) {
				ds := obj.(*extensions.DaemonSet)
				glog.V(4).Infof("Adding daemon set %s", ds.Name)
				dsc.enqueueDaemonSet(obj)
			},
			UpdateFunc: func(old, cur interface{}) {
				oldDS := old.(*extensions.DaemonSet)
				glog.V(4).Infof("Updating daemon set %s", oldDS.Name)
				dsc.enqueueDaemonSet(cur)
			},
			DeleteFunc: func(obj interface{}) {
				ds := obj.(*extensions.DaemonSet)
				glog.V(4).Infof("Deleting daemon set %s", ds.Name)
				dsc.enqueueDaemonSet(obj)
			},
		},
	)

	// Watch for creation/deletion of pods. The reason we watch is that we don't want a daemon set to create/delete
	// more pods until all the effects (expectations) of a daemon set's create/delete have been observed.
	dsc.podStore.Store, dsc.podController = framework.NewInformer(
		&cache.ListWatch{
			ListFunc: func() (runtime.Object, error) {
				return dsc.kubeClient.Pods(api.NamespaceAll).List(labels.Everything(), fields.Everything(), unversioned.ListOptions{})
			},
			WatchFunc: func(options unversioned.ListOptions) (watch.Interface, error) {
				return dsc.kubeClient.Pods(api.NamespaceAll).Watch(options)
			},
		},
		&api.Pod{},
		resyncPeriod(),
		framework.ResourceEventHandlerFuncs{
			AddFunc:    dsc.addPod,
			UpdateFunc: dsc.updatePod,
			DeleteFunc: dsc.deletePod,
		},
	)

	// Watch for new nodes or updates to nodes - daemon pods are launched on new nodes, and possibly when labels on nodes change.
	dsc.nodeStore.Store, dsc.nodeController = framework.NewInformer(
		&cache.ListWatch{
			ListFunc: func() (runtime.Object, error) {
				return dsc.kubeClient.Nodes().List(labels.Everything(), fields.Everything(), unversioned.ListOptions{})
			},
			WatchFunc: func(options unversioned.ListOptions) (watch.Interface, error) {
				return dsc.kubeClient.Nodes().Watch(options)
			},
		},
		&api.Node{},
		resyncPeriod(),
		framework.ResourceEventHandlerFuncs{
			AddFunc:    dsc.addNode,
			UpdateFunc: dsc.updateNode,
		},
	)

	dsc.syncHandler = dsc.syncDaemonSet
	dsc.podStoreSynced = dsc.podController.HasSynced
	return dsc
}
// NewResourceQuotaController creates a new ResourceQuotaController
func NewResourceQuotaController(kubeClient clientset.Interface, resyncPeriod controller.ResyncPeriodFunc) *ResourceQuotaController {
	rq := &ResourceQuotaController{
		kubeClient:   kubeClient,
		queue:        workqueue.New(),
		resyncPeriod: resyncPeriod,
	}

	rq.rqIndexer, rq.rqController = framework.NewIndexerInformer(
		&cache.ListWatch{
			ListFunc: func(options api.ListOptions) (runtime.Object, error) {
				return rq.kubeClient.Core().ResourceQuotas(api.NamespaceAll).List(options)
			},
			WatchFunc: func(options api.ListOptions) (watch.Interface, error) {
				return rq.kubeClient.Core().ResourceQuotas(api.NamespaceAll).Watch(options)
			},
		},
		&api.ResourceQuota{},
		resyncPeriod(),
		framework.ResourceEventHandlerFuncs{
			AddFunc: rq.enqueueResourceQuota,
			UpdateFunc: func(old, cur interface{}) {
				// We are only interested in observing updates to quota.spec to drive updates to quota.status.
				// We ignore all updates to quota.Status because they are all driven by this controller.
				// IMPORTANT:
				// We do not use this function to queue up a full quota recalculation. To do so would require
				// us to enqueue all quota.Status updates, and since quota.Status updates involve additional queries
				// that cannot be backed by a cache and result in a full query of a namespace's content, we do not
				// want to pay the price on spurious status updates. As a result, we have a separate routine that is
				// responsible for enqueue of all resource quotas when doing a full resync (enqueueAll).
				oldResourceQuota := old.(*api.ResourceQuota)
				curResourceQuota := cur.(*api.ResourceQuota)
				if api.Semantic.DeepEqual(oldResourceQuota.Spec.Hard, curResourceQuota.Status.Hard) {
					return
				}
				glog.V(4).Infof("Observed updated quota spec for %v/%v", curResourceQuota.Namespace, curResourceQuota.Name)
				rq.enqueueResourceQuota(curResourceQuota)
			},
			// This will enter the sync loop and no-op, because the controller has been deleted from the store.
			// Note that deleting a controller immediately after scaling it to 0 will not work. The recommended
			// way of achieving this is by performing a `stop` operation on the controller.
			DeleteFunc: rq.enqueueResourceQuota,
		},
		cache.Indexers{"namespace": cache.MetaNamespaceIndexFunc},
	)

	// We use this pod controller to rapidly observe when a pod deletion occurs in order to
	// release compute resources from any associated quota.
	rq.podStore.Store, rq.podController = framework.NewInformer(
		&cache.ListWatch{
			ListFunc: func(options api.ListOptions) (runtime.Object, error) {
				return rq.kubeClient.Core().Pods(api.NamespaceAll).List(options)
			},
			WatchFunc: func(options api.ListOptions) (watch.Interface, error) {
				return rq.kubeClient.Core().Pods(api.NamespaceAll).Watch(options)
			},
		},
		&api.Pod{},
		resyncPeriod(),
		framework.ResourceEventHandlerFuncs{
			DeleteFunc: rq.deletePod,
		},
	)

	// set the synchronization handler
	rq.syncHandler = rq.syncResourceQuotaFromKey
	return rq
}
// NewDaemonSetsController creates a controller that watches DaemonSets, Pods
// (via the shared informer), and Nodes, and enqueues daemon sets for the sync
// handler.
func NewDaemonSetsController(podInformer framework.SharedIndexInformer, kubeClient clientset.Interface, resyncPeriod controller.ResyncPeriodFunc, lookupCacheSize int) *DaemonSetsController {
	eventBroadcaster := record.NewBroadcaster()
	eventBroadcaster.StartLogging(glog.Infof)
	// TODO: remove the wrapper when all clients have moved to use the clientset.
	eventBroadcaster.StartRecordingToSink(&unversionedcore.EventSinkImpl{Interface: kubeClient.Core().Events("")})

	if kubeClient != nil && kubeClient.Core().GetRESTClient().GetRateLimiter() != nil {
		metrics.RegisterMetricAndTrackRateLimiterUsage("daemon_controller", kubeClient.Core().GetRESTClient().GetRateLimiter())
	}

	dsc := &DaemonSetsController{
		kubeClient:    kubeClient,
		eventRecorder: eventBroadcaster.NewRecorder(api.EventSource{Component: "daemonset-controller"}),
		podControl: controller.RealPodControl{
			KubeClient: kubeClient,
			Recorder:   eventBroadcaster.NewRecorder(api.EventSource{Component: "daemon-set"}),
		},
		burstReplicas: BurstReplicas,
		expectations:  controller.NewControllerExpectations(),
		queue:         workqueue.New(),
	}

	// Manage addition/update of daemon sets.
	dsc.dsStore.Store, dsc.dsController = framework.NewInformer(
		&cache.ListWatch{
			ListFunc: func(options api.ListOptions) (runtime.Object, error) {
				return dsc.kubeClient.Extensions().DaemonSets(api.NamespaceAll).List(options)
			},
			WatchFunc: func(options api.ListOptions) (watch.Interface, error) {
				return dsc.kubeClient.Extensions().DaemonSets(api.NamespaceAll).Watch(options)
			},
		},
		&extensions.DaemonSet{},
		// TODO: Can we have much longer period here?
		FullDaemonSetResyncPeriod,
		framework.ResourceEventHandlerFuncs{
			AddFunc: func(obj interface{}) {
				ds := obj.(*extensions.DaemonSet)
				glog.V(4).Infof("Adding daemon set %s", ds.Name)
				dsc.enqueueDaemonSet(ds)
			},
			UpdateFunc: func(old, cur interface{}) {
				oldDS := old.(*extensions.DaemonSet)
				curDS := cur.(*extensions.DaemonSet)
				// We should invalidate the whole lookup cache if a DS's selector has been updated.
				//
				// Imagine that you have two DaemonSets:
				// * old DS1
				// * new DS2
				// You also have a pod that is attached to DS2 (because it doesn't match DS1's selector).
				// Now imagine that you are changing DS1's selector so that it now matches that pod;
				// in such a case we must invalidate the whole cache so the pod can be adopted by DS1.
				//
				// This makes the lookup cache less helpful, but selector updates are rare,
				// so it's not a big problem.
				if !reflect.DeepEqual(oldDS.Spec.Selector, curDS.Spec.Selector) {
					dsc.lookupCache.InvalidateAll()
				}
				glog.V(4).Infof("Updating daemon set %s", oldDS.Name)
				dsc.enqueueDaemonSet(curDS)
			},
			DeleteFunc: dsc.deleteDaemonset,
		},
	)

	// Watch for creation/deletion of pods. The reason we watch is that we don't want a daemon set to create/delete
	// more pods until all the effects (expectations) of a daemon set's create/delete have been observed.
	podInformer.AddEventHandler(framework.ResourceEventHandlerFuncs{
		AddFunc:    dsc.addPod,
		UpdateFunc: dsc.updatePod,
		DeleteFunc: dsc.deletePod,
	})
	dsc.podStore.Indexer = podInformer.GetIndexer()
	dsc.podController = podInformer.GetController()
	dsc.podStoreSynced = podInformer.HasSynced

	// Watch for new nodes or updates to nodes - daemon pods are launched on new nodes, and possibly when labels on nodes change.
	dsc.nodeStore.Store, dsc.nodeController = framework.NewInformer(
		&cache.ListWatch{
			ListFunc: func(options api.ListOptions) (runtime.Object, error) {
				return dsc.kubeClient.Core().Nodes().List(options)
			},
			WatchFunc: func(options api.ListOptions) (watch.Interface, error) {
				return dsc.kubeClient.Core().Nodes().Watch(options)
			},
		},
		&api.Node{},
		resyncPeriod(),
		framework.ResourceEventHandlerFuncs{
			AddFunc:    dsc.addNode,
			UpdateFunc: dsc.updateNode,
		},
	)

	dsc.syncHandler = dsc.syncDaemonSet
	dsc.lookupCache = controller.NewMatchingCache(lookupCacheSize)
	return dsc
}
// New returns a service controller that keeps federated Services in sync
// across the member clusters, using the given DNS provider and zone for
// service DNS records.
func New(federationClient federation_release_1_4.Interface, dns dnsprovider.Interface, federationName, zoneName string) *ServiceController {
	broadcaster := record.NewBroadcaster()
	// federationClient event is not supported yet
	// broadcaster.StartRecordingToSink(&unversioned_core.EventSinkImpl{Interface: kubeClient.Core().Events("")})
	recorder := broadcaster.NewRecorder(api.EventSource{Component: UserAgentName})

	s := &ServiceController{
		dns:              dns,
		federationClient: federationClient,
		federationName:   federationName,
		zoneName:         zoneName,
		serviceCache:     &serviceCache{fedServiceMap: make(map[string]*cachedService)},
		clusterCache: &clusterClientCache{
			rwlock:    sync.Mutex{},
			clientMap: make(map[string]*clusterCache),
		},
		eventBroadcaster: broadcaster,
		eventRecorder:    recorder,
		queue:            workqueue.New(),
		knownClusterSet:  make(sets.String),
	}

	s.serviceStore.Store, s.serviceController = framework.NewInformer(
		&cache.ListWatch{
			ListFunc: func(options api.ListOptions) (pkg_runtime.Object, error) {
				return s.federationClient.Core().Services(v1.NamespaceAll).List(options)
			},
			WatchFunc: func(options api.ListOptions) (watch.Interface, error) {
				return s.federationClient.Core().Services(v1.NamespaceAll).Watch(options)
			},
		},
		&v1.Service{},
		serviceSyncPeriod,
		framework.ResourceEventHandlerFuncs{
			AddFunc: s.enqueueService,
			UpdateFunc: func(old, cur interface{}) {
				// There are cases where old and cur are equal but we still
				// receive an event; only enqueue when they actually differ.
				if !reflect.DeepEqual(old, cur) {
					s.enqueueService(cur)
				}
			},
			DeleteFunc: s.enqueueService,
		},
	)

	s.clusterStore.Store, s.clusterController = framework.NewInformer(
		&cache.ListWatch{
			ListFunc: func(options api.ListOptions) (pkg_runtime.Object, error) {
				return s.federationClient.Federation().Clusters().List(options)
			},
			WatchFunc: func(options api.ListOptions) (watch.Interface, error) {
				return s.federationClient.Federation().Clusters().Watch(options)
			},
		},
		&v1beta1.Cluster{},
		clusterSyncPeriod,
		framework.ResourceEventHandlerFuncs{
			DeleteFunc: s.clusterCache.delFromClusterSet,
			AddFunc:    s.clusterCache.addToClientMap,
			UpdateFunc: func(old, cur interface{}) {
				oldCluster, ok := old.(*v1beta1.Cluster)
				if !ok {
					return
				}
				curCluster, ok := cur.(*v1beta1.Cluster)
				if !ok {
					return
				}
				if !reflect.DeepEqual(oldCluster.Spec, curCluster.Spec) {
					// update when spec is changed
					s.clusterCache.addToClientMap(cur)
				}

				pred := getClusterConditionPredicate()
				// only update when the condition changed from not-ready to ready
				if !pred(*oldCluster) && pred(*curCluster) {
					s.clusterCache.addToClientMap(cur)
				}
				// The ready -> not-ready transition is not handled here;
				// how would we stop a controller?
			},
		},
	)
	return s
}
// NewDeploymentController creates a new DeploymentController.
func NewDeploymentController(client clientset.Interface, resyncPeriod controller.ResyncPeriodFunc) *DeploymentController {
	eventBroadcaster := record.NewBroadcaster()
	eventBroadcaster.StartLogging(glog.Infof)
	// TODO: remove the wrapper when all clients have moved to use the clientset.
	eventBroadcaster.StartRecordingToSink(&unversionedcore.EventSinkImpl{client.Core().Events("")})

	dc := &DeploymentController{
		client:          client,
		eventRecorder:   eventBroadcaster.NewRecorder(api.EventSource{Component: "deployment-controller"}),
		queue:           workqueue.New(),
		podExpectations: controller.NewControllerExpectations(),
		rsExpectations:  controller.NewControllerExpectations(),
	}

	dc.dStore.Store, dc.dController = framework.NewInformer(
		&cache.ListWatch{
			ListFunc: func(options api.ListOptions) (runtime.Object, error) {
				return dc.client.Extensions().Deployments(api.NamespaceAll).List(options)
			},
			WatchFunc: func(options api.ListOptions) (watch.Interface, error) {
				return dc.client.Extensions().Deployments(api.NamespaceAll).Watch(options)
			},
		},
		&extensions.Deployment{},
		FullDeploymentResyncPeriod,
		framework.ResourceEventHandlerFuncs{
			AddFunc: func(obj interface{}) {
				d := obj.(*extensions.Deployment)
				glog.V(4).Infof("Adding deployment %s", d.Name)
				dc.enqueueDeployment(obj)
			},
			UpdateFunc: func(old, cur interface{}) {
				oldD := old.(*extensions.Deployment)
				glog.V(4).Infof("Updating deployment %s", oldD.Name)
				// Resync on deployment object relist.
				dc.enqueueDeployment(cur)
			},
			// This will enter the sync loop and no-op, because the deployment has been deleted from the store.
			DeleteFunc: func(obj interface{}) {
				d := obj.(*extensions.Deployment)
				glog.V(4).Infof("Deleting deployment %s", d.Name)
				dc.enqueueDeployment(obj)
			},
		},
	)

	dc.rsStore.Store, dc.rsController = framework.NewInformer(
		&cache.ListWatch{
			ListFunc: func(options api.ListOptions) (runtime.Object, error) {
				return dc.client.Extensions().ReplicaSets(api.NamespaceAll).List(options)
			},
			WatchFunc: func(options api.ListOptions) (watch.Interface, error) {
				return dc.client.Extensions().ReplicaSets(api.NamespaceAll).Watch(options)
			},
		},
		&extensions.ReplicaSet{},
		resyncPeriod(),
		framework.ResourceEventHandlerFuncs{
			AddFunc:    dc.addReplicaSet,
			UpdateFunc: dc.updateReplicaSet,
			DeleteFunc: dc.deleteReplicaSet,
		},
	)

	dc.podStore.Store, dc.podController = framework.NewInformer(
		&cache.ListWatch{
			ListFunc: func(options api.ListOptions) (runtime.Object, error) {
				return dc.client.Core().Pods(api.NamespaceAll).List(options)
			},
			WatchFunc: func(options api.ListOptions) (watch.Interface, error) {
				return dc.client.Core().Pods(api.NamespaceAll).Watch(options)
			},
		},
		&api.Pod{},
		resyncPeriod(),
		framework.ResourceEventHandlerFuncs{
			// When a pod updates (becomes ready), we need to enqueue the deployment
			UpdateFunc: dc.updatePod,
			// When a pod is deleted, we need to update the deployment's expectations
			DeleteFunc: dc.deletePod,
		},
	)

	dc.syncHandler = dc.syncDeployment
	dc.rsStoreSynced = dc.rsController.HasSynced
	dc.podStoreSynced = dc.podController.HasSynced
	return dc
}
func TestProcessEvent(t *testing.T) {
	var testScenarios = []struct {
		name string
		// a series of events that will be supplied to the
		// Propagator.eventQueue.
		events []event
	}{
		{
			name: "test1",
			events: []event{
				createEvent(addEvent, "1", []string{}),
				createEvent(addEvent, "2", []string{"1"}),
				createEvent(addEvent, "3", []string{"1", "2"}),
			},
		},
		{
			name: "test2",
			events: []event{
				createEvent(addEvent, "1", []string{}),
				createEvent(addEvent, "2", []string{"1"}),
				createEvent(addEvent, "3", []string{"1", "2"}),
				createEvent(addEvent, "4", []string{"2"}),
				createEvent(deleteEvent, "2", []string{"doesn't matter"}),
			},
		},
		{
			name: "test3",
			events: []event{
				createEvent(addEvent, "1", []string{}),
				createEvent(addEvent, "2", []string{"1"}),
				createEvent(addEvent, "3", []string{"1", "2"}),
				createEvent(addEvent, "4", []string{"3"}),
				createEvent(updateEvent, "2", []string{"4"}),
			},
		},
		{
			name: "reverse test2",
			events: []event{
				createEvent(addEvent, "4", []string{"2"}),
				createEvent(addEvent, "3", []string{"1", "2"}),
				createEvent(addEvent, "2", []string{"1"}),
				createEvent(addEvent, "1", []string{}),
				createEvent(deleteEvent, "2", []string{"doesn't matter"}),
			},
		},
	}

	for _, scenario := range testScenarios {
		propagator := &Propagator{
			eventQueue: workqueue.New(),
			uidToNode: &concurrentUIDToNode{
				RWMutex:   &sync.RWMutex{},
				uidToNode: make(map[types.UID]*node),
			},
			gc: &GarbageCollector{
				dirtyQueue: workqueue.New(),
			},
		}
		for i := 0; i < len(scenario.events); i++ {
			propagator.eventQueue.Add(scenario.events[i])
			propagator.processEvent()
			verifyGraphInvariants(scenario.name, propagator.uidToNode.uidToNode, t)
		}
	}
}