func TestProducerReturnsExpectationsToChannels(t *testing.T) {
	config := sarama.NewConfig()
	config.Producer.Return.Successes = true
	mp := NewAsyncProducer(t, config)

	mp.ExpectInputAndSucceed()
	mp.ExpectInputAndSucceed()
	mp.ExpectInputAndFail(sarama.ErrOutOfBrokers)

	mp.Input() <- &sarama.ProducerMessage{Topic: "test 1"}
	mp.Input() <- &sarama.ProducerMessage{Topic: "test 2"}
	mp.Input() <- &sarama.ProducerMessage{Topic: "test 3"}

	msg1 := <-mp.Successes()
	msg2 := <-mp.Successes()
	err1 := <-mp.Errors()

	if msg1.Topic != "test 1" {
		t.Error("Expected message 1 to be returned first")
	}
	if msg2.Topic != "test 2" {
		t.Error("Expected message 2 to be returned second")
	}
	if err1.Msg.Topic != "test 3" || err1.Err != sarama.ErrOutOfBrokers {
		t.Error("Expected message 3 to be returned as error")
	}

	if err := mp.Close(); err != nil {
		t.Error(err)
	}
}
// kafkaClient initializes a connection to a Kafka cluster and
// initializes one or more clientProducer() (producer instances).
func kafkaClient(n int) {
	switch noop {
	// If not noop, actually fire up Kafka connections and send messages.
	case false:
		cId := "client_" + strconv.Itoa(n)

		conf := kafka.NewConfig()
		if compression != kafka.CompressionNone {
			conf.Producer.Compression = compression
		}
		conf.Producer.Flush.MaxMessages = batchSize

		client, err := kafka.NewClient(brokers, conf)
		if err != nil {
			log.Println(err)
			os.Exit(1)
		} else {
			log.Printf("%s connected\n", cId)
		}

		for i := 0; i < producers; i++ {
			go clientProducer(client)
		}
	// If noop, we're not creating connections at all.
	// Just generate messages and burn CPU.
	default:
		for i := 0; i < producers; i++ {
			go clientDummyProducer()
		}
	}
	<-killClients
}
// NewConsumer returns a new mock Consumer instance. The t argument should
// be the *testing.T instance of your test method. An error will be written to it if
// an expectation is violated. The config argument is currently unused and can be set to nil.
func NewConsumer(t ErrorReporter, config *sarama.Config) *Consumer {
	if config == nil {
		config = sarama.NewConfig()
	}

	c := &Consumer{
		t:                  t,
		config:             config,
		partitionConsumers: make(map[string]map[int32]*PartitionConsumer),
	}
	return c
}
// NewAsyncProducer instantiates a new Producer mock. The t argument should
// be the *testing.T instance of your test method. An error will be written to it if
// an expectation is violated. The config argument is used to determine whether it
// should ack successes on the Successes channel.
func NewAsyncProducer(t ErrorReporter, config *sarama.Config) *AsyncProducer {
	if config == nil {
		config = sarama.NewConfig()
	}
	mp := &AsyncProducer{
		t:            t,
		closed:       make(chan struct{}),
		expectations: make([]*producerExpectation, 0),
		input:        make(chan *sarama.ProducerMessage, config.ChannelBufferSize),
		successes:    make(chan *sarama.ProducerMessage, config.ChannelBufferSize),
		errors:       make(chan *sarama.ProducerError, config.ChannelBufferSize),
	}

	go func() {
		defer func() {
			close(mp.successes)
			close(mp.errors)
		}()

		for msg := range mp.input {
			mp.l.Lock()
			if len(mp.expectations) == 0 {
				mp.t.Errorf("No more expectation set on this mock producer to handle the input message.")
			} else {
				expectation := mp.expectations[0]
				mp.expectations = mp.expectations[1:]
				if expectation.Result == errProduceSuccess {
					mp.lastOffset++
					if config.Producer.Return.Successes {
						msg.Offset = mp.lastOffset
						mp.successes <- msg
					}
				} else if config.Producer.Return.Errors {
					mp.errors <- &sarama.ProducerError{Err: expectation.Result, Msg: msg}
				}
			}
			mp.l.Unlock()
		}

		mp.l.Lock()
		if len(mp.expectations) > 0 {
			mp.t.Errorf("Expected to exhaust all expectations, but %d are left.", len(mp.expectations))
		}
		mp.l.Unlock()

		close(mp.closed)
	}()

	return mp
}
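// The goroutine in NewAsyncProducer drains the input channel and pops one
// queued expectation per message, routing the result to either the successes
// or the errors channel. A minimal, stdlib-only sketch of that
// expectation-queue pattern follows; the names (mockProducer, expectation,
// newMockProducer) are illustrative and not part of the sarama mocks API.

package main

import (
	"fmt"
	"sync"
)

// expectation is the queued outcome for the next message; a nil err means success.
type expectation struct{ err error }

// mockProducer routes each input message to successes or errors according to
// the next queued expectation, mirroring the AsyncProducer mock's goroutine.
type mockProducer struct {
	l            sync.Mutex
	expectations []*expectation
	input        chan string
	successes    chan string
	errors       chan error
	done         chan struct{}
}

func newMockProducer() *mockProducer {
	mp := &mockProducer{
		input:     make(chan string),
		successes: make(chan string, 10),
		errors:    make(chan error, 10),
		done:      make(chan struct{}),
	}
	go func() {
		defer close(mp.successes)
		defer close(mp.errors)
		for msg := range mp.input {
			mp.l.Lock()
			if len(mp.expectations) == 0 {
				mp.errors <- fmt.Errorf("unexpected message: %s", msg)
			} else {
				e := mp.expectations[0]
				mp.expectations = mp.expectations[1:]
				if e.err == nil {
					mp.successes <- msg
				} else {
					mp.errors <- e.err
				}
			}
			mp.l.Unlock()
		}
		close(mp.done)
	}()
	return mp
}

// expect queues the outcome for the next message sent on input.
func (mp *mockProducer) expect(err error) {
	mp.l.Lock()
	mp.expectations = append(mp.expectations, &expectation{err: err})
	mp.l.Unlock()
}

func main() {
	mp := newMockProducer()
	mp.expect(nil)
	mp.expect(fmt.Errorf("broker down"))

	mp.input <- "m1"
	mp.input <- "m2"
	close(mp.input)
	<-mp.done

	fmt.Println(<-mp.successes)
	fmt.Println(<-mp.errors)
}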