func (d *EventGenerator) GetBlkioEvent(container *docker.APIContainers, stats *docker.Stats) common.MapStr {
    logp.Debug("generator", "Generate blkio event %v", container.ID)

    blkioStats := d.buildStats(stats.Read, stats.BlkioStats.IOServicedRecursive)

    var event common.MapStr

    d.BlkioStats.RLock()
    oldBlkioStats, ok := d.BlkioStats.M[container.ID]
    d.BlkioStats.RUnlock()

    if ok {
        calculator := d.CalculatorFactory.NewBlkioCalculator(oldBlkioStats, blkioStats)
        event = common.MapStr{
            "@timestamp":      common.Time(stats.Read),
            "type":            "blkio",
            "containerID":     container.ID,
            "containerName":   d.extractContainerName(container.Names),
            "containerLabels": d.buildLabelArray(container.Labels),
            "dockerSocket":    d.Socket,
            "blkio": common.MapStr{
                "read_ps":  calculator.GetReadPs(),
                "write_ps": calculator.GetWritePs(),
                "total_ps": calculator.GetTotalPs(),
            },
        }
    } else {
        event = common.MapStr{
            "@timestamp":      common.Time(stats.Read),
            "type":            "blkio",
            "containerID":     container.ID,
            "containerName":   d.extractContainerName(container.Names),
            "containerLabels": d.buildLabelArray(container.Labels),
            "dockerSocket":    d.Socket,
            "blkio": common.MapStr{
                "read_ps":  float64(0),
                "write_ps": float64(0),
                "total_ps": float64(0),
            },
        }
    }

    d.BlkioStats.Lock()
    d.BlkioStats.M[container.ID] = blkioStats

    // purge old saved data
    for containerID, blkioStat := range d.BlkioStats.M {
        // if data older than two ticks, then delete it
        if d.expiredSavedData(blkioStat.Time) {
            delete(d.BlkioStats.M, containerID)
        }
    }
    d.BlkioStats.Unlock()

    return event
}
func toTime(key string, data map[string]interface{}) (interface{}, error) {
    emptyIface, exists := data[key]
    if !exists {
        return common.Time(time.Unix(0, 0)), fmt.Errorf("Key %s not found", key)
    }
    ts, ok := emptyIface.(time.Time)
    if !ok {
        return common.Time(time.Unix(0, 0)), fmt.Errorf("Expected date, found %T", emptyIface)
    }
    return common.Time(ts), nil
}
func processGroups(groups []string, topic string, pids map[int32]int64) []common.MapStr {
    var events []common.MapStr
    for _, group := range groups {
        pid_offsets, err := getConsumerOffsets(group, topic, pids)
        if err != nil {
            logp.Debug("kafkabeat", "No offsets for group %s on topic %s", group, topic)
            continue
        }
        for pid, offset := range pid_offsets {
            event := common.MapStr{
                "@timestamp": common.Time(time.Now()),
                "type":       "consumer",
                "partition":  pid,
                "topic":      topic,
                "group":      group,
                "offset":     offset,
            }
            if size, ok := pids[pid]; ok {
                event.Update(common.MapStr{"lag": size - offset})
            }
            events = append(events, event)
        }
    }
    return events
}
func (fp *PhpfpmPublisher) Publish(data map[string]interface{}) {
    fp.client.PublishEvent(common.MapStr{
        "@timestamp": common.Time(time.Now()),
        "type":       "phpfpm",
        "phpfpm":     data,
    })
}
func (d *EventGenerator) GetCpuEvent(container *docker.APIContainers, stats *docker.Stats) common.MapStr {
    logp.Debug("generator", "Generate cpu event %v", container.ID)

    calculator := d.CalculatorFactory.NewCPUCalculator(
        calculator.CPUData{
            PerCpuUsage:       stats.PreCPUStats.CPUUsage.PercpuUsage,
            TotalUsage:        stats.PreCPUStats.CPUUsage.TotalUsage,
            UsageInKernelmode: stats.PreCPUStats.CPUUsage.UsageInKernelmode,
            UsageInUsermode:   stats.PreCPUStats.CPUUsage.UsageInUsermode,
        },
        calculator.CPUData{
            PerCpuUsage:       stats.CPUStats.CPUUsage.PercpuUsage,
            TotalUsage:        stats.CPUStats.CPUUsage.TotalUsage,
            UsageInKernelmode: stats.CPUStats.CPUUsage.UsageInKernelmode,
            UsageInUsermode:   stats.CPUStats.CPUUsage.UsageInUsermode,
        },
    )

    event := common.MapStr{
        "@timestamp":      common.Time(stats.Read),
        "type":            "cpu",
        "containerID":     container.ID,
        "containerName":   d.extractContainerName(container.Names),
        "containerLabels": d.buildLabelArray(container.Labels),
        "dockerSocket":    d.Socket,
        "cpu": common.MapStr{
            "percpuUsage":       calculator.PerCpuUsage(),
            "totalUsage":        calculator.TotalUsage(),
            "usageInKernelmode": calculator.UsageInKernelmode(),
            "usageInUsermode":   calculator.UsageInUsermode(),
        },
    }
    return event
}
func (lr LogRecord) ToMapStr() common.MapStr {
    m := common.MapStr{
        "eventLogName": lr.EventLogName,
        "sourceName":   lr.SourceName,
        "computerName": lr.ComputerName,
        "recordNumber": lr.RecordNumber,
        "eventID":      lr.EventID,
        "eventType":    lr.EventType,
        "message":      lr.Message,
        "@timestamp":   common.Time(lr.TimeGenerated),
        "type":         "eventlog",
    }
    if lr.EventCategory != "" {
        m["eventCategory"] = lr.EventCategory
    }
    if lr.UserSID != nil {
        m["userSID"] = common.MapStr{
            "name":   lr.UserSID.Name,
            "domain": lr.UserSID.Domain,
            "type":   lr.UserSID.SIDType.String(),
        }
    }
    return m
}
func TestHttpEventToMapStr(t *testing.T) {
    now := time.Now()

    fields := map[string]string{
        "field1": "value1",
        "field2": "value2",
    }

    request := Request{}
    request.Url = "www.example.org"
    request.Headers = map[string]string{
        "header1": "value1",
    }
    request.Body = "Body"
    request.Method = "get"

    event := HttpEvent{}
    event.Fields = fields
    event.DocumentType = "test"
    event.ReadTime = now
    event.Request = request

    mapStr := event.ToMapStr()

    _, fieldsExist := mapStr["fields"]
    assert.True(t, fieldsExist)

    _, requestExist := mapStr["request"]
    assert.True(t, requestExist)

    assert.Equal(t, "test", mapStr["type"])
    assert.Equal(t, common.Time(now), mapStr["@timestamp"])
}
func (e *Event) ToMapStr() common.MapStr {
    event := common.MapStr{
        common.EventMetadataKey: e.EventMetadata,
        "@timestamp":            common.Time(e.ReadTime),
        "source":                e.State.Source,
        "offset":                e.State.Offset, // Offset here is the offset before the starting char.
        "type":                  e.DocumentType,
        "input_type":            e.InputType,
    }

    // Add data fields which are added by the readers
    for key, value := range e.Data {
        event[key] = value
    }

    // Check if json fields exist
    var jsonFields common.MapStr
    if fields, ok := event["json"]; ok {
        jsonFields = fields.(common.MapStr)
    }

    if e.JSONConfig != nil && len(jsonFields) > 0 {
        mergeJSONFields(e, event, jsonFields)
    } else if e.Text != nil {
        event["message"] = *e.Text
    }

    return event
}
func (f *FileEvent) ToMapStr() common.MapStr {
    event := common.MapStr{
        "@timestamp": common.Time(f.ReadTime),
        "source":     f.Source,
        "offset":     f.Offset, // Offset here is the offset before the starting char.
        "message":    f.Text,
        "type":       f.DocumentType,
        "input_type": f.InputType,
        "count":      1,
    }

    if f.Fields != nil {
        if f.fieldsUnderRoot {
            for key, value := range *f.Fields {
                // in case of conflicts, overwrite
                if _, found := event[key]; found {
                    logp.Warn("Overwriting %s key", key)
                }
                event[key] = value
            }
        } else {
            event["fields"] = f.Fields
        }
    }

    return event
}
func collectFileSystemStats(fss []sigar.FileSystem) []common.MapStr {
    events := make([]common.MapStr, 0, len(fss))
    for _, fs := range fss {
        fsStat, err := GetFileSystemStat(fs)
        if err != nil {
            logp.Debug("topbeat", "Skip filesystem %v: %v", fsStat, err)
            continue
        }
        addFileSystemUsedPercentage(fsStat)

        event := common.MapStr{
            "@timestamp": common.Time(time.Now()),
            "type":       "filesystem",
            "count":      1,
            "fs": common.MapStr{
                "device_name": fsStat.DevName,
                "mount_point": fsStat.Mount,
                "total":       fsStat.Total,
                "used":        fsStat.Used,
                "free":        fsStat.Free,
                "avail":       fsStat.Avail,
                "files":       fsStat.Files,
                "free_files":  fsStat.FreeFiles,
                "used_p":      fsStat.UsedPercent,
            },
        }
        events = append(events, event)
    }
    return events
}
func (d *EventGenerator) getCpuEvent(container *docker.APIContainers, stats *docker.Stats) common.MapStr {
    calculator := d.calculatorFactory.newCPUCalculator(
        CPUData{
            perCpuUsage:       stats.PreCPUStats.CPUUsage.PercpuUsage,
            totalUsage:        stats.PreCPUStats.CPUUsage.TotalUsage,
            usageInKernelmode: stats.PreCPUStats.CPUUsage.UsageInKernelmode,
            usageInUsermode:   stats.PreCPUStats.CPUUsage.UsageInUsermode,
        },
        CPUData{
            perCpuUsage:       stats.CPUStats.CPUUsage.PercpuUsage,
            totalUsage:        stats.CPUStats.CPUUsage.TotalUsage,
            usageInKernelmode: stats.CPUStats.CPUUsage.UsageInKernelmode,
            usageInUsermode:   stats.CPUStats.CPUUsage.UsageInUsermode,
        },
    )

    event := common.MapStr{
        "@timestamp":    common.Time(stats.Read),
        "type":          "cpu",
        "containerID":   container.ID,
        "containerName": d.extractContainerName(container.Names),
        "cpu": common.MapStr{
            "percpuUsage":       calculator.perCpuUsage(),
            "totalUsage":        calculator.totalUsage(),
            "usageInKernelmode": calculator.usageInKernelmode(),
            "usageInUsermode":   calculator.usageInUsermode(),
        },
    }
    return event
}
func testSendMultipleViaLogstash(t *testing.T, name string, tls bool) {
    ls := newTestLogstashOutput(t, name, tls)
    defer ls.Cleanup()

    for i := 0; i < 10; i++ {
        event := common.MapStr{
            "@timestamp": common.Time(time.Now()),
            "host":       "test-host",
            "type":       "log",
            "message":    fmt.Sprintf("hello world - %v", i),
        }
        ls.PublishEvent(nil, testOptions, event)
    }

    // wait for logstash event flush + elasticsearch
    waitUntilTrue(5*time.Second, checkIndex(ls, 10))

    // search value in logstash elasticsearch index
    resp, err := ls.Read()
    if err != nil {
        return
    }
    if len(resp) != 10 {
        t.Errorf("wrong number of results: %d", len(resp))
    }
}
func TestClientPublishEvent(t *testing.T) {
    index := "beat-int-pub-single-event"
    output, client := connectTestEs(t, map[string]interface{}{
        "index": index,
    })

    // drop old index preparing test
    client.Delete(index, "", "", nil)

    event := outputs.Data{Event: common.MapStr{
        "@timestamp": common.Time(time.Now()),
        "type":       "libbeat",
        "message":    "Test message from libbeat",
    }}

    err := output.PublishEvent(nil, outputs.Options{Guaranteed: true}, event)
    if err != nil {
        t.Fatal(err)
    }

    _, _, err = client.Refresh(index)
    if err != nil {
        t.Fatal(err)
    }

    _, resp, err := client.CountSearchURI(index, "", nil)
    if err != nil {
        t.Fatal(err)
    }

    assert.Equal(t, 1, resp.Count)
}
func TestUseType(t *testing.T) {
    if testing.Short() {
        t.Skip("Skipping in short mode. Requires Kafka")
    }
    if testing.Verbose() {
        logp.LogInit(logp.LOG_DEBUG, "", false, true, []string{"kafka"})
    }

    id := strconv.Itoa(rand.New(rand.NewSource(int64(time.Now().Nanosecond()))).Int())
    logType := fmt.Sprintf("log-type-%s", id)

    kafka := newTestKafkaOutput(t, "", true)
    event := common.MapStr{
        "@timestamp": common.Time(time.Now()),
        "host":       "test-host",
        "type":       logType,
        "message":    id,
    }
    if err := kafka.PublishEvent(nil, testOptions, event); err != nil {
        t.Fatal(err)
    }

    messages := testReadFromKafkaTopic(t, logType, 1, 5*time.Second)
    if assert.Len(t, messages, 1) {
        msg := messages[0]
        logp.Debug("kafka", "%s: %s", msg.Key, msg.Value)
        assert.Contains(t, string(msg.Value), id)
    }
}
func (cpu *CPU) GetCoreStats() ([]common.MapStr, error) {
    events := []common.MapStr{}

    cpuCoreStat, err := GetCpuTimesList()
    if err != nil {
        logp.Warn("Getting cpu core times: %v", err)
        return nil, err
    }
    cpu.AddCpuPercentageList(cpuCoreStat)

    for coreNumber, stat := range cpuCoreStat {
        coreStat := cpu.GetCpuStatEvent(&stat)
        coreStat["id"] = coreNumber

        event := common.MapStr{
            "@timestamp": common.Time(time.Now()),
            "type":       "core",
            "core":       coreStat,
        }
        events = append(events, event)
    }
    return events, nil
}
func eventMapping(cont *dc.APIContainers) common.MapStr {
    event := common.MapStr{
        "created": common.Time(time.Unix(cont.Created, 0)),
        "id":      cont.ID,
        "name":    docker.ExtractContainerName(cont.Names),
        "command": cont.Command,
        "image":   cont.Image,
        "size": common.MapStr{
            "root_fs": cont.SizeRootFs,
            "rw":      cont.SizeRw,
        },
        "status": cont.Status,
    }

    labels := docker.BuildLabelArray(cont.Labels)
    if len(labels) > 0 {
        event["labels"] = labels
    }

    ports := convertContainerPorts(cont.Ports)
    if len(ports) > 0 {
        event["ports"] = ports
    }

    return event
}
func testSendMessageViaLogstash(t *testing.T, name string, tls bool) {
    if testing.Short() {
        t.Skip("Skipping in short mode. Requires Logstash and Elasticsearch")
    }

    ls := newTestLogstashOutput(t, name, tls)
    defer ls.Cleanup()

    event := common.MapStr{
        "@timestamp": common.Time(time.Now()),
        "host":       "test-host",
        "type":       "log",
        "message":    "hello world",
    }
    ls.PublishEvent(nil, testOptions, event)

    // wait for logstash event flush + elasticsearch
    waitUntilTrue(5*time.Second, checkIndex(ls, 1))

    // search value in logstash elasticsearch index
    resp, err := ls.Read()
    if err != nil {
        return
    }
    if len(resp) != 1 {
        t.Errorf("wrong number of results: %d", len(resp))
    }
}
func (redis *Redis) publishTransaction(t *transaction) {
    if redis.results == nil {
        return
    }

    event := common.MapStr{}
    event["type"] = "redis"
    if !t.IsError {
        event["status"] = common.OK_STATUS
    } else {
        event["status"] = common.ERROR_STATUS
    }
    event["responsetime"] = t.ResponseTime
    if redis.SendRequest {
        event["request"] = t.RequestRaw
    }
    if redis.SendResponse {
        event["response"] = t.ResponseRaw
    }
    event["redis"] = common.MapStr(t.Redis)
    event["method"] = strings.ToUpper(t.Method)
    event["resource"] = t.Path
    event["query"] = t.Query
    event["bytes_in"] = uint64(t.BytesIn)
    event["bytes_out"] = uint64(t.BytesOut)
    event["@timestamp"] = common.Time(t.ts)
    event["src"] = &t.Src
    event["dst"] = &t.Dst

    redis.results.PublishEvent(event)
}
func (t *Topbeat) exportSystemStats() error {
    load_stat, err := GetSystemLoad()
    if err != nil {
        logp.Warn("Getting load statistics: %v", err)
        return err
    }

    cpu_stat, err := GetCpuTimes()
    if err != nil {
        logp.Warn("Getting cpu times: %v", err)
        return err
    }
    t.addCpuPercentage(cpu_stat)

    cpu_core_stat, err := GetCpuTimesList()
    if err != nil {
        logp.Warn("Getting cpu core times: %v", err)
        return err
    }
    t.addCpuPercentageList(cpu_core_stat)

    mem_stat, err := GetMemory()
    if err != nil {
        logp.Warn("Getting memory details: %v", err)
        return err
    }
    t.addMemPercentage(mem_stat)

    swap_stat, err := GetSwap()
    if err != nil {
        logp.Warn("Getting swap details: %v", err)
        return err
    }
    t.addSwapPercentage(swap_stat)

    event := common.MapStr{
        "@timestamp": common.Time(time.Now()),
        "type":       "system",
        "load":       load_stat,
        "cpu":        cpu_stat,
        "mem":        mem_stat,
        "swap":       swap_stat,
        "count":      1,
    }

    if t.cpuPerCore {
        cpus := common.MapStr{}
        for coreNumber, stat := range cpu_core_stat {
            cpus["cpu"+strconv.Itoa(coreNumber)] = stat
        }
        event["cpus"] = cpus
    }

    t.events.PublishEvent(event)
    return nil
}
// TestOutputLoadTemplate checks that the template is inserted before
// the first event is published.
func TestOutputLoadTemplate(t *testing.T) {
    client := GetTestingElasticsearch()
    err := client.Connect(5 * time.Second)
    if err != nil {
        t.Fatal(err)
    }

    // delete template if it exists
    client.request("DELETE", "/_template/libbeat", "", nil, nil)

    // Make sure template is not yet there
    assert.False(t, client.CheckTemplate("libbeat"))

    templatePath := "../../../packetbeat/packetbeat.template.json"
    if strings.HasPrefix(client.Connection.version, "2.") {
        templatePath = "../../../packetbeat/packetbeat.template-es2x.json"
    }

    tPath, err := filepath.Abs(templatePath)
    if err != nil {
        t.Fatal(err)
    }

    config := map[string]interface{}{
        "hosts": GetEsHost(),
        "template": map[string]interface{}{
            "name":                "libbeat",
            "path":                tPath,
            "versions.2x.enabled": false,
        },
    }

    cfg, err := common.NewConfigFrom(config)
    if err != nil {
        t.Fatal(err)
    }

    output, err := New("libbeat", cfg, 0)
    if err != nil {
        t.Fatal(err)
    }

    event := outputs.Data{Event: common.MapStr{
        "@timestamp": common.Time(time.Now()),
        "host":       "test-host",
        "type":       "libbeat",
        "message":    "Test message from libbeat",
    }}

    err = output.PublishEvent(nil, outputs.Options{Guaranteed: true}, event)
    if err != nil {
        t.Fatal(err)
    }

    // Guaranteed publish, so the template should be there
    assert.True(t, client.CheckTemplate("libbeat"))
}
func TestConversions(t *testing.T) {
    ts := time.Now()

    input := map[string]interface{}{
        "testString":       "hello",
        "testInt":          42,
        "testIntFromFloat": 42.0,
        "testIntFromInt64": int64(42),
        "testBool":         true,
        "testObj": map[string]interface{}{
            "testObjString": "hello, object",
        },
        "testNonNestedObj": "hello from top level",
        "testTime":         ts,

        // wrong types
        "testErrorInt":    "42",
        "testErrorTime":   12,
        "testErrorBool":   "false",
        "testErrorString": 32,
    }

    schema := s.Schema{
        "test_string":         Str("testString"),
        "test_int":            Int("testInt"),
        "test_int_from_float": Int("testIntFromFloat"),
        "test_int_from_int64": Int("testIntFromInt64"),
        "test_bool":           Bool("testBool"),
        "test_time":           Time("testTime"),
        "test_obj_1": s.Object{
            "test": Str("testNonNestedObj"),
        },
        "test_obj_2": Dict("testObj", s.Schema{
            "test": Str("testObjString"),
        }),
        "test_error_int":    Int("testErrorInt", s.Optional),
        "test_error_time":   Time("testErrorTime", s.Optional),
        "test_error_bool":   Bool("testErrorBool", s.Optional),
        "test_error_string": Str("testErrorString", s.Optional),
    }

    expected := common.MapStr{
        "test_string":         "hello",
        "test_int":            int64(42),
        "test_int_from_float": int64(42),
        "test_int_from_int64": int64(42),
        "test_bool":           true,
        "test_time":           common.Time(ts),
        "test_obj_1": common.MapStr{
            "test": "hello from top level",
        },
        "test_obj_2": common.MapStr{
            "test": "hello, object",
        },
    }

    output := schema.Apply(input)

    // assert.Equal takes the expected value first, then the actual value
    assert.Equal(t, expected, output)
}
func testEvent() outputs.Data {
    return outputs.Data{Event: common.MapStr{
        "@timestamp": common.Time(time.Now()),
        "type":       "log",
        "extra":      10,
        "message":    "message",
    }}
}
// Publish Nginx Stub status.
func (p *StubPublisher) Publish(s map[string]interface{}, source string) {
    p.client.PublishEvent(common.MapStr{
        "@timestamp": common.Time(time.Now()),
        "type":       "stub",
        "source":     source,
        "stub":       s,
    })
}
func testEvent() common.MapStr {
    return common.MapStr{
        "@timestamp": common.Time(time.Now()),
        "type":       "log",
        "extra":      10,
        "message":    "message",
    }
}
func testEvent() common.MapStr {
    return common.MapStr{
        "@timestamp": common.Time(time.Now()),
        "type":       "test",
        "src":        &common.Endpoint{},
        "dst":        &common.Endpoint{},
    }
}
// testEvent returns a new common.MapStr with the required fields
// populated.
func testEvent() outputs.Data {
    return outputs.Data{Event: common.MapStr{
        "@timestamp": common.Time(time.Now()),
        "type":       "test",
        "src":        &common.Endpoint{},
        "dst":        &common.Endpoint{},
    }}
}
func (d *EventGenerator) getBlkioEvent(container *docker.APIContainers, stats *docker.Stats) common.MapStr {
    blkioStats := d.buildStats(stats.Read, stats.BlkioStats.IOServicedRecursive)

    var event common.MapStr

    oldBlkioStats, ok := d.blkioStats[container.ID]
    if ok {
        calculator := d.calculatorFactory.newBlkioCalculator(oldBlkioStats, blkioStats)
        event = common.MapStr{
            "@timestamp":    common.Time(stats.Read),
            "type":          "blkio",
            "containerID":   container.ID,
            "containerName": d.extractContainerName(container.Names),
            "blkio": common.MapStr{
                "read_ps":  calculator.getReadPs(),
                "write_ps": calculator.getWritePs(),
                "total_ps": calculator.getTotalPs(),
            },
        }
    } else {
        event = common.MapStr{
            "@timestamp":    common.Time(stats.Read),
            "type":          "blkio",
            "containerID":   container.ID,
            "containerName": d.extractContainerName(container.Names),
            "blkio": common.MapStr{
                "read_ps":  float64(0),
                "write_ps": float64(0),
                "total_ps": float64(0),
            },
        }
    }

    d.blkioStats[container.ID] = blkioStats

    // purge old saved data
    for containerID, blkioStat := range d.blkioStats {
        // if data older than two ticks, then delete it
        if d.expiredSavedData(blkioStat.time) {
            delete(d.blkioStats, containerID)
        }
    }

    return event
}
func partTestSimple(N int, makeKey bool) partTestScenario {
    numPartitions := int32(15)

    return func(t *testing.T, reachableOnly bool, part sarama.Partitioner) error {
        t.Logf("  simple test with %v partitions", numPartitions)

        partitions := make([]int, numPartitions)

        requiresConsistency := !reachableOnly
        assert.Equal(t, requiresConsistency, part.RequiresConsistency())

        for i := 0; i <= N; i++ {
            ts := time.Now()
            event := common.MapStr{
                "@timestamp": common.Time(ts),
                "type":       "test",
                "message":    randString(20),
            }

            jsonEvent, err := json.Marshal(event)
            if err != nil {
                return fmt.Errorf("json encoding failed with %v", err)
            }

            msg := &message{partition: -1}
            msg.event = event
            msg.topic = "test"
            if makeKey {
                msg.key = randASCIIBytes(10)
            }
            msg.value = jsonEvent
            msg.ts = ts
            msg.initProducerMessage()

            p, err := part.Partition(&msg.msg, numPartitions)
            if err != nil {
                return err
            }
            assert.True(t, 0 <= p && p < numPartitions)
            partitions[p]++
        }

        // count number of partitions being used
        nPartitions := 0
        for _, p := range partitions {
            if p > 0 {
                nPartitions++
            }
        }
        t.Logf("  partitions used: %v/%v", nPartitions, numPartitions)
        assert.True(t, nPartitions > 3)

        return nil
    }
}
func (redis *Redis) newTransaction(requ, resp *redisMessage) common.MapStr {
    status := common.OK_STATUS
    if resp.IsError {
        status = common.ERROR_STATUS
    }

    var returnValue map[string]common.NetString
    if resp.IsError {
        returnValue = map[string]common.NetString{
            "error": resp.Message,
        }
    } else {
        returnValue = map[string]common.NetString{
            "return_value": resp.Message,
        }
    }

    src := &common.Endpoint{
        Ip:   requ.TcpTuple.Src_ip.String(),
        Port: requ.TcpTuple.Src_port,
        Proc: string(requ.CmdlineTuple.Src),
    }
    dst := &common.Endpoint{
        Ip:   requ.TcpTuple.Dst_ip.String(),
        Port: requ.TcpTuple.Dst_port,
        Proc: string(requ.CmdlineTuple.Dst),
    }
    if requ.Direction == tcp.TcpDirectionReverse {
        src, dst = dst, src
    }

    // resp_time in milliseconds
    responseTime := int32(resp.Ts.Sub(requ.Ts).Nanoseconds() / 1e6)

    event := common.MapStr{
        "@timestamp":   common.Time(requ.Ts),
        "type":         "redis",
        "status":       status,
        "responsetime": responseTime,
        "redis":        returnValue,
        "method":       common.NetString(bytes.ToUpper(requ.Method)),
        "resource":     requ.Path,
        "query":        requ.Message,
        "bytes_in":     uint64(requ.Size),
        "bytes_out":    uint64(resp.Size),
        "src":          src,
        "dst":          dst,
    }
    if redis.SendRequest {
        event["request"] = requ.Message
    }
    if redis.SendResponse {
        event["response"] = resp.Message
    }

    return event
}
func (cpu *CPU) GetSystemStats() (common.MapStr, error) {
    loadStat, err := GetSystemLoad()
    if err != nil {
        logp.Warn("Getting load statistics: %v", err)
        return nil, err
    }

    cpuStat, err := GetCpuTimes()
    if err != nil {
        logp.Warn("Getting cpu times: %v", err)
        return nil, err
    }
    cpu.AddCpuPercentage(cpuStat)

    memStat, err := GetMemory()
    if err != nil {
        logp.Warn("Getting memory details: %v", err)
        return nil, err
    }
    AddMemPercentage(memStat)

    swapStat, err := GetSwap()
    if err != nil {
        logp.Warn("Getting swap details: %v", err)
        return nil, err
    }
    AddSwapPercentage(swapStat)

    event := common.MapStr{
        "@timestamp": common.Time(time.Now()),
        "type":       "system",
        "load":       loadStat,
        "cpu":        GetCpuStatEvent(cpuStat),
        "mem":        GetMemoryEvent(memStat),
        "swap":       GetSwapEvent(swapStat),
    }

    if cpu.CpuPerCore {
        cpuCoreStat, err := GetCpuTimesList()
        if err != nil {
            logp.Warn("Getting cpu core times: %v", err)
            return nil, err
        }
        cpu.AddCpuPercentageList(cpuCoreStat)

        cpus := common.MapStr{}
        for coreNumber, stat := range cpuCoreStat {
            cpus["cpu"+strconv.Itoa(coreNumber)] = GetCpuStatEvent(&stat)
        }
        event["cpus"] = cpus
    }

    return event, nil
}