func Test_Proto_GRPC_Timeout(t *testing.T) {
	t.Skip()
	logx.SetLevel(logx.DEBUG)

	srv1 := new(GRPCServer)
	srv1.Start()
	defer srv1.srv.Stop()

	conn, err := grpc.Dial(fmt.Sprintf("localhost:%d", srv1.Port), grpc.WithInsecure())
	assert.NoError(t, err)
	defer conn.Close()

	c := pb3.NewBarClient(conn)
	req := pb3.TestRep{
		"test",
		[]byte(strings.Repeat("m", 10)),
	}

	res, err := c.Test(context.Background(), &req)
	assert.NoError(t, err)
	assert.EqualValues(t, req, *res)

	res1, err := c.Test(context.Background(), &req)
	assert.NoError(t, err)
	assert.EqualValues(t, req, *res1)
}
func Benchmark_Proto_GRPC_Large(b *testing.B) {
	b.Skip()
	logx.SetLevel(logx.DEBUG)

	srv1 := new(GRPCServer)
	srv1.Start()
	defer srv1.srv.Stop()

	conn, err := grpc.Dial(fmt.Sprintf("localhost:%d", srv1.Port), grpc.WithInsecure())
	assert.NoError(b, err)
	defer conn.Close()

	c := pb3.NewBarClient(conn)
	b.Log(srv1.Port, b.N)

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		// Build the 1 MiB payload outside the timed section so only the RPC
		// round-trip is measured, not strings.Repeat.
		req := pb3.TestRep{
			fmt.Sprintf("%d", i),
			[]byte(strings.Repeat("m", 1024*1024)),
		}

		b.StartTimer()
		res, err := c.Test(context.Background(), &req)
		b.StopTimer()

		assert.NoError(b, err)
		assert.EqualValues(b, req, *res)
		b.SetBytes(int64(len(req.Body) * 2)) // request + response bytes per iteration
	}
}
func main() {
	fs := flags.New().SetVersion(Version)
	fs.UseEnv = false
	fs.Boot()
	logx.SetLevel(log_level)

	prog, err := program.ParseFiles(flag.Args()...)
	logx.OnFatal(err)

	opts := options.NewGenOptions(layer, pack, core)
	gen_root := gen.NewGenRoot(prog, opts)

	// Make package dir
	err = os.MkdirAll(filepath.Join(out, pack), os.ModePerm)
	logx.OnFatal(err)

	// Write types
	res, err := gen_root.Out()
	logx.OnFatal(err)
	for filename, box_res := range res {
		target := filepath.Join(out, pack, filename)
		err = ioutil.WriteFile(target, box_res, os.ModePerm)
		logx.OnFatal(err)
		logx.Infof("==> %s", target)
	}

	// Write Fns
	// fn_fname, fn_res, err := gen_root.PipeOut()
	// logx.OnFatal(err)
	// target := filepath.Join(out, pack, fn_fname)
	// err = ioutil.WriteFile(target, fn_res, os.ModePerm)
	// logx.Infof("==> %s", target)
	//
	// // Write package
	// pack_res, err := gen_root.PackageOut()
	// logx.OnFatal(err)
	// target = filepath.Join(out, pack, pack) + ".go"
	// err = ioutil.WriteFile(target, pack_res, os.ModePerm)
	// logx.OnFatal(err)
	// logx.Infof("==> %s", target)
}
func Test_Storage_Upload_FinishUpload(t *testing.T) {
	logx.SetLevel(logx.DEBUG)

	tree := fixtures.NewTree("finish-upload", "")
	defer tree.Squash()
	assert.NoError(t, tree.Populate())

	m, err := model.New(tree.CWD, false, proto.CHUNK_SIZE, 16)
	assert.NoError(t, err)

	os.RemoveAll("testdata/finish-upload-storage")
	stor := storage.NewBlockStorage(&storage.BlockStorageOptions{
		"testdata/finish-upload-storage", 2, 16, 32,
	})
	defer os.RemoveAll("testdata/finish-upload-storage")

	names := lists.NewFileList().ListDir(tree.CWD)
	mans, err := m.FeedManifests(true, false, true, names...)
	assert.NoError(t, err)

	uID, _ := uuid.NewV4()
	missing, err := stor.CreateUploadSession(*uID, mans.GetManifestSlice(), time.Hour)
	assert.NoError(t, err)

	toUpload := mans.GetChunkLinkSlice(missing)
	for _, v := range toUpload {
		r, err := os.Open(tree.BlobFilename(v.Name))
		assert.NoError(t, err)

		buf := make([]byte, v.Size)
		_, err = r.ReadAt(buf, v.Offset)
		assert.NoError(t, err)
		// Close inside the loop body: a defer here would keep every blob
		// open until the test returns.
		r.Close()

		err = stor.UploadChunk(*uID, v.Chunk.ID, bytes.NewReader(buf))
		assert.NoError(t, err)
	}

	err = stor.FinishUploadSession(*uID)
	assert.NoError(t, err)
}
func NewFixtureServer(name string) (res *FixtureServer, err error) {
	logx.SetLevel(logx.DEBUG)

	wd, _ := os.Getwd()
	rt := filepath.Join(wd, "testdata", name+"-srv")
	os.RemoveAll(rt)
	p := storage.NewBlockStorage(&storage.BlockStorageOptions{rt, 2, 32, 64})

	ports, err := GetOpenPorts(2)
	if err != nil {
		return
	}

	ctx := context.Background()
	info := &proto.ServerInfo{
		HTTPEndpoint: fmt.Sprintf("http://localhost:%d/v1", ports[0]),
		RPCEndpoints: []string{fmt.Sprintf("localhost:%d", ports[1])},
		ChunkSize:    1024 * 1024 * 2,
		PoolSize:     16,
		BufferSize:   1024 * 1024 * 8,
	}
	tServer := thrift.NewServer(ctx, &thrift.Options{info, fmt.Sprintf(":%d", ports[1])}, p)
	hServer := front.NewServer(ctx, &front.Options{info, fmt.Sprintf(":%d", ports[0]), ""}, p)

	res = &FixtureServer{
		server.NewCompositeServer(ctx, tServer, hServer),
		info,
		rt,
	}
	go res.Start()
	// Was `time.Millisecond * 00`, i.e. a zero-length sleep; give the
	// servers a moment to bind before callers connect.
	time.Sleep(time.Millisecond * 100)
	return
}
func main() {
	flags.New().SetPrefix("WAVES").Boot()
	logx.SetLevel(log_level)

	ctx, cancel := context.WithCancel(context.Background())
	mach := machine.NewMachine(ctx, interval)
	go mach.Start()

	datapointReporter := datapoint.NewReporter(&datapointOptions, noop)

	u, err := url.Parse(annotationsEndpoint)
	logx.OnFatal(err)
	if token != "" {
		u.User = url.User(token)
	}
	annReporter := annotation.NewReporter(&annotation.ReporterOptions{u}, noop)

	dp := dockerprobe.NewDockerProbe(dockerEndpoint,
		makeDockerReporter(datapointReporter),
		report.NewMetaReporter(strings.Split(dockerAnnotationTags, ","), annReporter),
	)
	err = mach.Add(dp)
	logx.OnFatal(err)

	sigChan := make(chan os.Signal, 1)
	signal.Notify(sigChan, syscall.SIGHUP, syscall.SIGINT, syscall.SIGTERM, syscall.SIGQUIT)
	<-sigChan

	logx.Info("bye")
	cancel()
}
func Test_Storage_Upload_CreateUpload(t *testing.T) {
	logx.SetLevel(logx.DEBUG)

	tree := fixtures.NewTree("create-upload", "")
	defer tree.Squash()
	assert.NoError(t, tree.Populate())

	m, err := model.New(tree.CWD, false, proto.CHUNK_SIZE, 16)
	assert.NoError(t, err)

	os.RemoveAll("testdata/create-upload-storage")
	stor := storage.NewBlockStorage(&storage.BlockStorageOptions{
		"testdata/create-upload-storage", 2, 16, 32,
	})
	defer os.RemoveAll("testdata/create-upload-storage")

	names := lists.NewFileList().ListDir(tree.CWD)
	mans, err := m.FeedManifests(true, false, true, names...)
	assert.NoError(t, err)

	uID, _ := uuid.NewV4()
	missing, err := stor.CreateUploadSession(*uID, mans.GetManifestSlice(), time.Hour)
	assert.NoError(t, err)
	assert.Len(t, missing, 4)
}