func handleConn(in <-chan *net.TCPConn, out chan<- *net.TCPConn, rAddr *net.TCPAddr, cb *circuit.Breaker) {
	for conn := range in {
		cb.Call(func() error {
			return proxy(conn, rAddr)
		}, 0)
	}
}
func NewBreaker(threshold int64, breaker string, breaks map[string]func()) *circuit.Breaker {
	var cb *circuit.Breaker
	switch breaker {
	case "threshold":
		cb = circuit.NewThresholdBreaker(threshold)
	case "consecutive":
		cb = circuit.NewConsecutiveBreaker(threshold)
	default:
		log.Fatal("invalid breaker type")
	}

	events := cb.Subscribe()
	go func() {
		for {
			e := <-events
			switch e {
			case circuit.BreakerTripped:
				breakMe("trip", breaks)
			case circuit.BreakerReset:
				breakMe("reset", breaks)
			case circuit.BreakerFail:
				breakMe("fail", breaks)
			case circuit.BreakerReady:
				breakMe("ready", breaks)
			default:
				breakMe("event", breaks)
			}
		}
	}()
	return cb
}
// start dials the remote addr and commences gossip once connected. Upon exit,
// the client is sent on the disconnected channel. This method starts client
// processing in a goroutine and returns immediately.
func (c *client) start(
	g *Gossip,
	disconnected chan *client,
	rpcCtx *rpc.Context,
	stopper *stop.Stopper,
	nodeID roachpb.NodeID,
	breaker *circuit.Breaker,
) {
	stopper.RunWorker(func() {
		ctx, cancel := context.WithCancel(c.AnnotateCtx(context.Background()))
		var wg sync.WaitGroup
		defer func() {
			// This closes the outgoing stream, causing any attempt to send or
			// receive to return an error.
			//
			// Note: it is still possible for incoming gossip to be processed after
			// this point.
			cancel()

			// The stream is closed, but there may still be some incoming gossip
			// being processed. Wait until that is complete to avoid racing the
			// client's removal against the discovery of its remote's node ID.
			wg.Wait()
			disconnected <- c
		}()

		consecFailures := breaker.ConsecFailures()
		var stream Gossip_GossipClient
		if err := breaker.Call(func() error {
			// Note: avoid using `grpc.WithBlock` here. This code is already
			// asynchronous from the caller's perspective, so the only effect of
			// `WithBlock` here is blocking shutdown - at the time of this writing,
			// that ends up making `kv` tests take twice as long.
			conn, err := rpcCtx.GRPCDial(c.addr.String())
			if err != nil {
				return err
			}
			if stream, err = NewGossipClient(conn).Gossip(ctx); err != nil {
				return err
			}
			return c.requestGossip(g, stream)
		}, 0); err != nil {
			if consecFailures == 0 {
				log.Warningf(ctx, "node %d: failed to start gossip client: %s", nodeID, err)
			}
			return
		}

		// Start gossiping.
		log.Infof(ctx, "node %d: started gossip client to %s", nodeID, c.addr)
		if err := c.gossip(ctx, g, stream, stopper, &wg); err != nil {
			if !grpcutil.IsClosedConnection(err) {
				g.mu.Lock()
				if c.peerID != 0 {
					log.Infof(ctx, "node %d: closing client to node %d (%s): %s", nodeID, c.peerID, c.addr, err)
				} else {
					log.Infof(ctx, "node %d: closing client to %s: %s", nodeID, c.addr, err)
				}
				g.mu.Unlock()
			}
		}
	})
}