func (gd *GDrive) getMetadataChanges(svc *drive.Service, startChangeId int64,
	changeChan chan<- []*drive.Change, errorChan chan<- error) {
	var about *drive.About
	var err error

	// Get the Drive About information in order to figure out how many
	// changes we need to download to get up to date.
	for try := 0; ; try++ {
		about, err = svc.About.Get().Do()
		if err == nil {
			break
		} else {
			err = gd.tryToHandleDriveAPIError(err, try)
		}
		if err != nil {
			errorChan <- err
			return
		}
	}

	// Don't clutter the output with a progress bar unless it looks like
	// downloading changes may take a while.
	// TODO: consider using timer.AfterFunc to put up the progress bar if
	// we're not done after a few seconds? It's not clear if this is worth
	// the trouble.
	var bar *pb.ProgressBar
	numChanges := about.LargestChangeId - startChangeId
	if numChanges > 1000 && !gd.quiet {
		bar = pb.New64(numChanges)
		bar.ShowBar = true
		bar.ShowCounters = false
		bar.Output = os.Stderr
		bar.Prefix("Updating metadata cache: ")
		bar.Start()
	}

	pageToken := ""
	try := 0
	// Keep asking Drive for more changes until we get through them all.
	for {
		// Only ask for the fields in the drive.Change structure that we
		// actually need to be filled in, to save some bandwidth...
		fields := []googleapi.Field{"nextPageToken", "items/id",
			"items/fileId", "items/deleted", "items/file/id",
			"items/file/parents", "items/file/title",
			"items/file/fileSize", "items/file/mimeType",
			"items/file/properties", "items/file/modifiedDate",
			"items/file/md5Checksum", "items/file/labels"}
		q := svc.Changes.List().MaxResults(1000).IncludeSubscribed(false).Fields(fields...)
		if startChangeId >= 0 {
			q = q.StartChangeId(startChangeId + 1)
		}
		if pageToken != "" {
			q = q.PageToken(pageToken)
		}

		r, err := q.Do()
		if err != nil {
			err = gd.tryToHandleDriveAPIError(err, try)
			if err != nil {
				errorChan <- err
				return
			}
			try++
			continue
		}

		// Success. Reset the try counter in case we had errors leading up
		// to this.
		try = 0

		if len(r.Items) > 0 {
			// Send the changes along to the goroutine that's updating the
			// local cache.
			changeChan <- r.Items

			if bar != nil {
				bar.Set(int(r.Items[len(r.Items)-1].Id - startChangeId))
			}
		}

		pageToken = r.NextPageToken
		if pageToken == "" {
			break
		}
	}

	// Signal that no more changes are coming.
	close(changeChan)

	if bar != nil {
		bar.Finish()
	}
	gd.debug("Done updating metadata from Drive")
}
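
// A minimal sketch of how the producer above might be paired with a
// consumer. getMetadataChanges sends batches on changeChan and closes it
// only on success, so the consumer must select on both channels rather
// than simply ranging over changeChan. The name updateCache and the body
// of the loop are hypothetical; the real consumer elsewhere in this code
// may differ.
func (gd *GDrive) updateCache(svc *drive.Service, startChangeId int64) error {
	changeChan := make(chan []*drive.Change, 8)
	errorChan := make(chan error, 1)

	go gd.getMetadataChanges(svc, startChangeId, changeChan, errorChan)

	for {
		select {
		case batch, ok := <-changeChan:
			if !ok {
				// Producer closed the channel: we're fully caught up.
				return nil
			}
			for _, c := range batch {
				_ = c // apply each change to the local metadata cache here
			}
		case err := <-errorChan:
			// Producer gave up after retries and returned early.
			return err
		}
	}
}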
func main() {
	flag.Usage = usage
	help := flag.Bool("help", false, "show this message")
	version := flag.Bool("version", false, "show version")
	failpath := flag.String("faildir", "", "dir where failed torrentzips should be copied")

	flag.Parse()

	if *help {
		flag.Usage()
		os.Exit(0)
	}

	if *version {
		fmt.Fprintf(os.Stdout, "%s version %s, Copyright (c) 2013 Uwe Hoffmann. All rights reserved.\n", os.Args[0], versionStr)
		os.Exit(0)
	}

	if *failpath == "" {
		flag.Usage()
		os.Exit(0)
	}

	cv := new(countVisitor)

	for _, name := range flag.Args() {
		fmt.Fprintf(os.Stdout, "initial scan of %s to determine amount of work\n", name)

		err := filepath.Walk(name, cv.visit)
		if err != nil {
			fmt.Fprintf(os.Stderr, "failed to count in dir %s: %v\n", name, err)
			os.Exit(1)
		}
	}

	mg := int(cv.numBytes / megabyte)

	fmt.Fprintf(os.Stdout, "found %d files and %d MB to do. starting work...\n", cv.numFiles, mg)

	var byteProgress *pb.ProgressBar

	if mg > 10 {
		pb.BarStart = "MB ["
		byteProgress = pb.New(mg)
		byteProgress.RefreshRate = 5 * time.Second
		byteProgress.ShowCounters = true
		byteProgress.Start()
	}

	inwork := make(chan *workUnit)

	sv := &scanVisitor{
		inwork: inwork,
	}

	wg := new(sync.WaitGroup)
	wg.Add(cv.numFiles)

	for i := 0; i < 8; i++ {
		worker := &testWorker{
			byteProgress: byteProgress,
			failpath:     *failpath,
			inwork:       inwork,
			wg:           wg,
		}
		go worker.run()
	}

	for _, name := range flag.Args() {
		err := filepath.Walk(name, sv.visit)
		if err != nil {
			fmt.Fprintf(os.Stderr, "failed to scan dir %s: %v\n", name, err)
			os.Exit(1)
		}
	}

	wg.Wait()
	close(inwork)

	if byteProgress != nil {
		byteProgress.Set(int(byteProgress.Total))
		byteProgress.Finish()
	}

	fmt.Fprintf(os.Stdout, "Done.\n")
}
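
// A minimal sketch of the loop each of the eight worker goroutines above
// might run. The real testWorker.run is defined elsewhere in this package;
// this version only illustrates the pattern: range over the shared inwork
// channel until main closes it, process each unit, advance the shared
// progress bar, and call wg.Done so that main's wg.Wait() (initialized to
// cv.numFiles) can return. The process helper and the workUnit fields used
// here (path, size) are assumptions, not the package's actual API.
func (w *testWorker) run() {
	for unit := range w.inwork {
		// On failure, a real implementation would copy the offending
		// torrentzip into w.failpath for later inspection.
		if err := process(unit); err != nil {
			fmt.Fprintf(os.Stderr, "failed on %s: %v\n", unit.path, err)
		}

		if w.byteProgress != nil {
			w.byteProgress.Add(int(unit.size / megabyte))
		}
		w.wg.Done()
	}
}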