func (s *UserService) Create(ctx context.Context, u *User) error {
	span := trace.FromContext(ctx).NewChild("trythings.user.Create")
	defer span.Finish()

	// TODO: Make sure u.GoogleID == user.Current(ctx).ID
	if u.ID != "" {
		return fmt.Errorf("u already has id %q", u.ID)
	}

	if u.CreatedAt.IsZero() {
		u.CreatedAt = time.Now()
	}

	id, _, err := datastore.AllocateIDs(ctx, "User", nil, 1)
	if err != nil {
		return err
	}
	u.ID = fmt.Sprintf("%x", id)

	rootKey := datastore.NewKey(ctx, "Root", "root", 0, nil)
	k := datastore.NewKey(ctx, "User", u.ID, 0, rootKey)
	_, err = datastore.Put(ctx, k, u)
	if err != nil {
		return err
	}

	return nil
}
func NewFileKey(c context.Context) (*datastore.Key, error) {
	parentKey := GetParentKey(c)
	lowID, _, err := datastore.AllocateIDs(c, KIND_FILE, parentKey, 1)
	if err != nil {
		return nil, err
	}
	return datastore.NewKey(c, KIND_FILE, "", lowID, parentKey), nil
}
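// A minimal usage sketch (not from the original source): NewFileKey reserves
// a numeric ID up front, so the caller can hold a complete key before the
// entity is ever written. "File" here is a hypothetical struct standing in
// for whatever entity KIND_FILE maps to.
type File struct {
	Name string
}

func saveFileSketch(c context.Context, name string) (*datastore.Key, error) {
	key, err := NewFileKey(c)
	if err != nil {
		return nil, err
	}
	// Put stores the entity under the pre-allocated key.
	return datastore.Put(c, key, &File{Name: name})
}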
func (d rdsImpl) AllocateIDs(incomplete *ds.Key, n int) (start int64, err error) {
	par, err := dsF2R(d.aeCtx, incomplete.Parent())
	if err != nil {
		return
	}

	start, _, err = datastore.AllocateIDs(d.aeCtx, incomplete.Kind(), par, n)
	return
}
func (s *SpotGet) key(ctx context.Context) *datastore.Key {
	if s.SpotCode == 0 {
		low, _, err := datastore.AllocateIDs(ctx, "Spot", nil, 1)
		if err != nil {
			return nil
		}
		return datastore.NewKey(ctx, "Spot", "", low, nil)
	}
	return datastore.NewKey(ctx, "Spot", "", s.SpotCode, nil)
}
func submissionsAddHandler(w http.ResponseWriter, r *http.Request) {
	ctx := appengine.NewContext(r)
	if err := r.ParseForm(); err != nil {
		serveErr(ctx, err, w)
		return
	}
	ID, _, err := datastore.AllocateIDs(ctx, "Podcast", nil, 1)
	if err != nil {
		serveErr(ctx, err, w)
		return
	}
	date, err := time.Parse(yyyymmdd, r.FormValue("date"))
	if err != nil {
		serveErr(ctx, err, w)
		return
	}
	podcast := Podcast{
		ID:         ID,
		Show:       r.FormValue("show"),
		Title:      r.FormValue("title"),
		Desc:       r.FormValue("desc"),
		URL:        template.URL(r.FormValue("url")),
		MediaURL:   template.URL(r.FormValue("media_url")),
		RuntimeSec: r.FormValue("runtime"),
		Size:       r.FormValue("size"),
		Date:       date,
		Added:      time.Now(),
	}
	if _, err := datastore.Put(ctx, datastore.NewKey(ctx, "Podcast", "", ID, nil), &podcast); err != nil {
		serveErr(ctx, err, w)
		return
	}
	key, err := datastore.DecodeKey(r.FormValue("key"))
	if err != nil {
		serveErr(ctx, err, w)
		return
	}
	if err := datastore.Delete(ctx, key); err != nil {
		serveErr(ctx, err, w)
		return
	}
	if err := memcache.Delete(ctx, cacheKey); err != nil {
		log.Errorf(ctx, "memcache delete error %v", err)
	}
	successTmpl.ExecuteTemplate(w, "base", nil)
}
func (s *TaskService) Create(ctx context.Context, t *Task) error {
	span := trace.FromContext(ctx).NewChild("trythings.task.Create")
	defer span.Finish()

	if t.ID != "" {
		return fmt.Errorf("t already has id %q", t.ID)
	}

	if t.CreatedAt.IsZero() {
		t.CreatedAt = time.Now()
	}

	if t.SpaceID == "" {
		return errors.New("SpaceID is required")
	}

	ok, err := s.IsVisible(ctx, t)
	if err != nil {
		return err
	}
	if !ok {
		return errors.New("cannot access space to create task")
	}

	id, _, err := datastore.AllocateIDs(ctx, "Task", nil, 1)
	if err != nil {
		return err
	}
	t.ID = fmt.Sprintf("%x", id)

	rootKey := datastore.NewKey(ctx, "Root", "root", 0, nil)
	k := datastore.NewKey(ctx, "Task", t.ID, 0, rootKey)
	_, err = datastore.Put(ctx, k, t)
	if err != nil {
		return err
	}

	err = s.Index(ctx, t)
	if err != nil {
		return err
	}

	return nil
}
func createTask(w http.ResponseWriter, r *http.Request, _ httprouter.Params) {
	ctx := appengine.NewContext(r)
	low, _, err := datastore.AllocateIDs(ctx, "tasks", nil, 1)
	if err != nil {
		sendJSONResponse(w, http.StatusNotModified, nil)
		return
	}
	t, err := maketask(low, r.Body)
	if err != nil {
		sendJSONResponse(w, http.StatusNotModified, nil)
		return
	}
	if _, e := datastore.Put(ctx, datastore.NewKey(ctx, "tasks", "", low, nil), t); e != nil {
		sendJSONResponse(w, http.StatusNotModified, nil)
		return
	}
	sendJSONResponse(w, http.StatusCreated, nil)
}
func (s *ViewService) Create(ctx context.Context, v *View) error {
	span := trace.FromContext(ctx).NewChild("trythings.view.Create")
	defer span.Finish()

	if v.ID != "" {
		return fmt.Errorf("v already has id %q", v.ID)
	}

	if v.CreatedAt.IsZero() {
		v.CreatedAt = time.Now()
	}

	if v.Name == "" {
		return errors.New("Name is required")
	}

	if v.SpaceID == "" {
		return errors.New("SpaceID is required")
	}

	ok, err := s.IsVisible(ctx, v)
	if err != nil {
		return err
	}
	if !ok {
		return errors.New("cannot access space to create view")
	}

	id, _, err := datastore.AllocateIDs(ctx, "View", nil, 1)
	if err != nil {
		return err
	}
	v.ID = fmt.Sprintf("%x", id)

	rootKey := datastore.NewKey(ctx, "Root", "root", 0, nil)
	k := datastore.NewKey(ctx, "View", v.ID, 0, rootKey)
	_, err = datastore.Put(ctx, k, v)
	if err != nil {
		return err
	}

	return nil
}
func (s *SpaceService) Create(ctx context.Context, sp *Space) error {
	span := trace.FromContext(ctx).NewChild("trythings.space.Create")
	defer span.Finish()

	if sp.ID != "" {
		return fmt.Errorf("sp already has id %q", sp.ID)
	}

	if sp.CreatedAt.IsZero() {
		sp.CreatedAt = time.Now()
	}

	if len(sp.UserIDs) > 0 {
		return errors.New("UserIDs must be empty")
	}

	su, err := IsSuperuser(ctx)
	if err != nil {
		return err
	}
	if !su {
		u, err := s.UserService.FromContext(ctx)
		if err != nil {
			return err
		}
		sp.UserIDs = []string{u.ID}
	}

	id, _, err := datastore.AllocateIDs(ctx, "Space", nil, 1)
	if err != nil {
		return err
	}
	sp.ID = fmt.Sprintf("%x", id)

	rootKey := datastore.NewKey(ctx, "Root", "root", 0, nil)
	k := datastore.NewKey(ctx, "Space", sp.ID, 0, rootKey)
	_, err = datastore.Put(ctx, k, sp)
	if err != nil {
		return err
	}

	return nil
}
// PostSubmission creates a new submission.
func PostSubmission(ctx context.Context, w http.ResponseWriter, r *http.Request) (status int, err error) {
	if r.Method != "POST" {
		return http.StatusMethodNotAllowed, nil
	}

	p, ok := passenger.FromContext(ctx)
	if !ok {
		return http.StatusUnauthorized, nil
	}

	mediaType, _, err := mime.ParseMediaType(r.Header.Get("Content-Type"))
	if err != nil {
		return http.StatusBadRequest, err
	}
	if !strings.HasPrefix(mediaType, "multipart/") {
		return http.StatusUnsupportedMediaType, nil
	}

	resultKey, err := datastore.DecodeKey(mux.Vars(r)["resultKey"])
	if err != nil {
		return http.StatusNotFound, err
	}
	if !util.HasParent(p.User, resultKey) {
		return http.StatusBadRequest, errors.New("cannot submit answer for other users")
	}

	taskKey, err := datastore.DecodeKey(mux.Vars(r)["taskKey"])
	if err != nil {
		return http.StatusNotFound, err
	}

	if err := r.ParseMultipartForm(16 << 20); err != nil {
		return http.StatusBadRequest, err
	}
	files, ok := r.MultipartForm.File["files"]
	if !ok {
		return http.StatusBadRequest, errors.New("missing files")
	}

	var task model.Task
	if err = datastore.Get(ctx, taskKey, &task); err != nil {
		return http.StatusNotFound, err
	}

	// Furthermore, the name of the GCS object is derived from the key of the
	// encapsulating Submission. To avoid race conditions, allocate an ID.
	low, _, err := datastore.AllocateIDs(ctx, model.SubmissionKind, resultKey, 1)
	if err != nil {
		return http.StatusInternalServerError, err
	}
	submissionKey := datastore.NewKey(ctx, model.SubmissionKind, "", low, resultKey)

	storedCode := model.StoredObject{
		Bucket: util.SubmissionBucket(),
		Name:   nameObject(submissionKey) + "/Code/",
	}

	submission := model.Submission{
		Task:     taskKey,
		Time:     time.Now(),
		Language: detectLanguage(files),
		Code:     storedCode,
	}

	if _, err = datastore.Put(ctx, submissionKey, &submission); err != nil {
		return http.StatusInternalServerError, err
	}

	var tests model.Tests
	testKeys, err := model.NewQueryForTest().
		Ancestor(taskKey).
		GetAll(ctx, &tests)
	if err != nil {
		return http.StatusInternalServerError, err
	}

	prrs, pwrs := multiPipe(len(tests))
	go maketar(pwrs, files)

	for i, t := range tests {
		go func(i int, t model.Test) {
			if err := test.Tester(t.Tester).Call(ctx, *t.Key(testKeys[i]), *submission.Key(submissionKey), prrs[i]); err != nil {
				log.Warningf(ctx, "%s", err)
			}
		}(i, t)
	}

	if err := upload(util.CloudContext(ctx), storedCode.Bucket, storedCode.Name, files); err != nil {
		return http.StatusInternalServerError, err
	}

	return http.StatusOK, nil
}
// AllocateIDs wraps datastore.AllocateIDs.
func (d *Driver) AllocateIDs(parent *datastore.Key, n int) (low, high int64, err error) {
	return datastore.AllocateIDs(d.ctx, d.kind, parent, n)
}
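// A short usage sketch for the wrapper above (this helper is assumed, not
// part of the original source; it relies on the ctx and kind fields implied
// by the method body): reserve a contiguous block of n IDs and turn them
// into complete keys. datastore.AllocateIDs returns the half-open range
// [low, high), which the automatic ID allocator will not hand out again.
func (d *Driver) allocateKeysSketch(parent *datastore.Key, n int) ([]*datastore.Key, error) {
	low, _, err := d.AllocateIDs(parent, n)
	if err != nil {
		return nil, err
	}
	keys := make([]*datastore.Key, n)
	for i := 0; i < n; i++ {
		keys[i] = datastore.NewKey(d.ctx, d.kind, "", low+int64(i), parent)
	}
	return keys, nil
}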
func Run(c context.Context, ds appwrap.Datastore, job MapReduceJob) (int64, error) {
	readerNames, err := job.Inputs.ReaderNames()
	if err != nil {
		return 0, fmt.Errorf("forming reader names: %s", err)
	} else if len(readerNames) == 0 {
		return 0, fmt.Errorf("no input readers")
	}

	writerNames, err := job.Outputs.WriterNames(c)
	if err != nil {
		return 0, fmt.Errorf("forming writer names: %s", err)
	} else if len(writerNames) == 0 {
		return 0, fmt.Errorf("no output writers")
	}

	reducerCount := len(writerNames)

	jobKey, err := createJob(ds, job.UrlPrefix, writerNames, job.OnCompleteUrl, job.SeparateReduceItems, job.JobParameters, job.RetryCount)
	if err != nil {
		return 0, fmt.Errorf("creating job: %s", err)
	}

	firstId, _, err := datastore.AllocateIDs(c, TaskEntity, nil, len(readerNames))
	if err != nil {
		return 0, fmt.Errorf("allocating task ids: %s", err)
	}

	taskKeys := makeTaskKeys(ds, firstId, len(readerNames))
	tasks := make([]JobTask, len(readerNames))
	for i, readerName := range readerNames {
		url := fmt.Sprintf("%s/map?taskKey=%s;reader=%s;shards=%d", job.UrlPrefix, taskKeys[i].Encode(), readerName, reducerCount)
		tasks[i] = JobTask{
			Status: TaskStatusPending,
			Url:    url,
			Type:   TaskTypeMap,
		}
	}

	if err := createTasks(ds, jobKey, taskKeys, tasks, StageMapping); err != nil {
		if _, innerErr := markJobFailed(c, ds, jobKey); innerErr != nil {
			logError(c, "failed to log job %d as failed: %s", jobKey.IntID(), innerErr)
		}
		return 0, fmt.Errorf("creating tasks: %s", err)
	}

	for i := range tasks {
		if err := job.PostTask(c, tasks[i].Url, job.JobParameters); err != nil {
			if _, innerErr := markJobFailed(c, ds, jobKey); innerErr != nil {
				logError(c, "failed to log job %d as failed: %s", jobKey.IntID(), innerErr)
			}
			return 0, fmt.Errorf("posting task: %s", err)
		}
	}

	if err := job.PostStatus(c, fmt.Sprintf("%s/map-monitor?jobKey=%s", job.UrlPrefix, jobKey.Encode())); err != nil {
		logCritical(c, "failed to start map monitor task: %s", err)
	}

	return jobKey.IntID(), nil
}
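// A sketch of what a helper like makeTaskKeys is assumed to do here (the
// real implementation lives elsewhere in this codebase and may differ):
// expand the first ID of the block reserved by AllocateIDs into one key per
// task. It assumes the appwrap.Datastore wrapper exposes a NewKey method
// with the usual kind/stringID/intID/parent signature.
func makeTaskKeysSketch(ds appwrap.Datastore, firstId int64, count int) []*datastore.Key {
	keys := make([]*datastore.Key, count)
	for i := 0; i < count; i++ {
		keys[i] = ds.NewKey(TaskEntity, "", firstId+int64(i), nil)
	}
	return keys
}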
func (s *SearchService) Create(ctx context.Context, se *Search) error {
	span := trace.FromContext(ctx).NewChild("trythings.search.Create")
	defer span.Finish()

	if se.ID != "" {
		return fmt.Errorf("se already has id %q", se.ID)
	}

	if se.CreatedAt.IsZero() {
		se.CreatedAt = time.Now()
	}

	if se.Name == "" {
		return errors.New("Name is required")
	}

	if se.ViewID == "" {
		return errors.New("ViewID is required")
	}

	v, err := s.ViewService.ByID(ctx, se.ViewID)
	if err != nil {
		return err
	}

	if se.SpaceID == "" {
		se.SpaceID = v.SpaceID
	}

	if se.SpaceID != v.SpaceID {
		return errors.New("Search's SpaceID must match View's")
	}

	if len(se.ViewRank) != 0 {
		return fmt.Errorf("se already has a view rank %x", se.ViewRank)
	}

	// TODO#Performance: Add a shared or per-request cache to support these small, repeated queries.
	if se.Query == "" {
		return errors.New("Query is required")
	}

	rootKey := datastore.NewKey(ctx, "Root", "root", 0, nil)

	// Create a ViewRank for the search.
	// It should come after every other search in the view.
	var ranks []*struct {
		ViewRank datastore.ByteString
	}
	_, err = datastore.NewQuery("Search").
		Ancestor(rootKey).
		Filter("ViewID =", se.ViewID).
		Project("ViewRank").
		Order("-ViewRank").
		Limit(1).
		GetAll(ctx, &ranks)
	if err != nil {
		return err
	}

	maxViewRank := MinRank
	if len(ranks) != 0 {
		maxViewRank = Rank(ranks[0].ViewRank)
	}

	rank, err := NewRank(maxViewRank, MaxRank)
	if err != nil {
		return err
	}
	se.ViewRank = datastore.ByteString(rank)

	ok, err := s.IsVisible(ctx, se)
	if err != nil {
		return err
	}
	if !ok {
		return errors.New("cannot access view to create search")
	}

	id, _, err := datastore.AllocateIDs(ctx, "Search", nil, 1)
	if err != nil {
		return err
	}
	se.ID = fmt.Sprintf("%x", id)

	k := datastore.NewKey(ctx, "Search", se.ID, 0, rootKey)
	_, err = datastore.Put(ctx, k, se)
	if err != nil {
		return err
	}

	return nil
}
// PostSubmission creates a new submission.
func PostSubmission(ctx context.Context, w http.ResponseWriter, r *http.Request) (status int, err error) {
	if r.Method != "POST" {
		return http.StatusMethodNotAllowed, nil
	}

	var body = struct {
		Code     string
		Language string
	}{}
	if err := json.NewDecoder(r.Body).Decode(&body); err != nil {
		return http.StatusBadRequest, err
	}

	p, ok := passenger.FromContext(ctx)
	if !ok {
		return http.StatusUnauthorized, nil
	}

	resultKey, err := datastore.DecodeKey(mux.Vars(r)["resultKey"])
	if err != nil {
		return http.StatusNotFound, err
	}
	if !util.HasParent(p.User, resultKey) {
		return http.StatusBadRequest, errors.New("cannot submit answer for other users")
	}

	taskKey, err := datastore.DecodeKey(mux.Vars(r)["taskKey"])
	if err != nil {
		return http.StatusNotFound, err
	}

	var task model.Task
	if err = datastore.Get(ctx, taskKey, &task); err != nil {
		return http.StatusInternalServerError, err
	}

	// Furthermore, the name of the GCS object is derived from the key of the
	// encapsulating Submission. To avoid race conditions, allocate an ID.
	low, _, err := datastore.AllocateIDs(ctx, model.SubmissionKind, resultKey, 1)
	if err != nil {
		return http.StatusInternalServerError, err
	}
	submissionKey := datastore.NewKey(ctx, model.SubmissionKind, "", low, resultKey)

	submission := model.Submission{
		Task: taskKey,
		Time: time.Now(),
	}

	if body.Code != "" {
		submission.Code, err = store(ctx, submissionKey, body.Code, body.Language)
		if err != nil {
			return http.StatusInternalServerError, err
		}
		submission.Language = body.Language
	}

	// Set the submission in stone.
	if _, err = datastore.Put(ctx, submissionKey, &submission); err != nil {
		return http.StatusInternalServerError, err
	}

	var tests model.Tests
	_, err = model.NewQueryForTest().
		Ancestor(taskKey).
		GetAll(ctx, &tests)
	if err != nil {
		return http.StatusInternalServerError, err
	}

	for _, t := range tests {
		if err := test.Tester(t.Tester).Call(ctx, t.Params, *submission.Key(submissionKey)); err != nil {
			log.Warningf(ctx, "%s", err)
			continue
		}
	}

	// TODO(flowlo): Return something meaningful.
	return http.StatusOK, nil
}
func mapMonitorTask(c context.Context, ds appwrap.Datastore, pipeline MapReducePipeline, jobKey *datastore.Key, r *http.Request, timeout time.Duration) int {
	start := time.Now()

	job, err := waitForStageCompletion(c, ds, pipeline, jobKey, StageMapping, StageReducing, timeout)
	if err != nil {
		logCritical(c, "waitForStageCompletion() failed: %s", err)
		return 200
	} else if job.Stage == StageMapping {
		logInfo(c, "wait timed out -- returning an error and letting us automatically restart")
		return 500
	}

	logInfo(c, "map stage completed -- stage is now %s", job.Stage)

	// erm... we just did this in jobStageComplete. dumb to do it again
	mapTasks, err := gatherTasks(ds, job)
	if err != nil {
		logError(c, "failed loading tasks: %s", err)
		jobFailed(c, ds, pipeline, jobKey, fmt.Errorf("error loading tasks after map complete: %s", err.Error()))
		return 200
	}

	// we have one set for each reducer task
	storageNames := make([][]string, len(job.WriterNames))

	for i := range mapTasks {
		var shardNames map[string]int
		if err = json.Unmarshal([]byte(mapTasks[i].Result), &shardNames); err != nil {
			logError(c, `unmarshal error for result from map %d result '%+v'`, job.FirstTaskId+int64(i), mapTasks[i].Result)
			jobFailed(c, ds, pipeline, jobKey, fmt.Errorf("cannot unmarshal map shard names: %s", err.Error()))
			return 200
		}
		for name, shard := range shardNames {
			storageNames[shard] = append(storageNames[shard], name)
		}
	}

	firstId, _, err := datastore.AllocateIDs(c, TaskEntity, nil, len(job.WriterNames))
	if err != nil {
		jobFailed(c, ds, pipeline, jobKey, fmt.Errorf("failed to allocate ids for reduce tasks: %s", err.Error()))
		return 200
	}

	taskKeys := makeTaskKeys(ds, firstId, len(job.WriterNames))
	tasks := make([]JobTask, 0, len(job.WriterNames))

	for shard := range job.WriterNames {
		if shards := storageNames[shard]; len(shards) > 0 {
			url := fmt.Sprintf("%s/reduce?taskKey=%s;shard=%d;writer=%s", job.UrlPrefix, taskKeys[len(tasks)].Encode(), shard, url.QueryEscape(job.WriterNames[shard]))
			firstId++

			shardJson, _ := json.Marshal(shards)
			shardZ := &bytes.Buffer{}
			w := zlib.NewWriter(shardZ)
			w.Write(shardJson)
			w.Close()

			tasks = append(tasks, JobTask{
				Status:              TaskStatusPending,
				Url:                 url,
				ReadFrom:            shardZ.Bytes(),
				SeparateReduceItems: job.SeparateReduceItems,
				Type:                TaskTypeReduce,
			})
		}
	}

	// this means we got nothing from maps. there is no result. so, we're done!
	// right? that's hard to communicate though, so we'll just start a single
	// task with no inputs
	if len(tasks) == 0 {
		logInfo(c, "no results from maps -- starting noop reduce task")
		url := fmt.Sprintf("%s/reduce?taskKey=%s;shard=%d;writer=%s", job.UrlPrefix, taskKeys[len(tasks)].Encode(), 0, url.QueryEscape(job.WriterNames[0]))
		tasks = append(tasks, JobTask{
			Status:              TaskStatusPending,
			Url:                 url,
			ReadFrom:            []byte(``),
			SeparateReduceItems: job.SeparateReduceItems,
			Type:                TaskTypeReduce,
		})
	}

	taskKeys = taskKeys[0:len(tasks)]

	if err := createTasks(ds, jobKey, taskKeys, tasks, StageReducing); err != nil {
		jobFailed(c, ds, pipeline, jobKey, fmt.Errorf("failed to create reduce tasks: %s", err.Error()))
		return 200
	}

	for i := range tasks {
		if err := pipeline.PostTask(c, tasks[i].Url, job.JsonParameters); err != nil {
			jobFailed(c, ds, pipeline, jobKey, fmt.Errorf("failed to post reduce task: %s", err.Error()))
			return 200
		}
	}

	if err := pipeline.PostStatus(c, fmt.Sprintf("%s/reduce-monitor?jobKey=%s", job.UrlPrefix, jobKey.Encode())); err != nil {
		jobFailed(c, ds, pipeline, jobKey, fmt.Errorf("failed to start reduce monitor: %s", err.Error()))
		return 200
	}

	logInfo(c, "mapping complete after %s of monitoring", time.Since(start))
	return 200
}