func (t *mockTagAdder) AddTag(ref reference.Named, id digest.Digest, force bool) error {
	if t.refs == nil {
		t.refs = make(map[string]string)
	}
	t.refs[ref.String()] = id.String()
	return nil
}
func translatePullError(err error, ref reference.Named) error {
	switch v := err.(type) {
	case errcode.Errors:
		if len(v) != 0 {
			for _, extra := range v[1:] {
				logrus.Infof("Ignoring extra error returned from registry: %v", extra)
			}
			return translatePullError(v[0], ref)
		}
	case errcode.Error:
		var newErr error
		switch v.Code {
		case errcode.ErrorCodeDenied:
			// ErrorCodeDenied is used when access to the repository was denied
			newErr = errors.Errorf("repository %s not found: does not exist or no read access", ref.Name())
		case v2.ErrorCodeManifestUnknown:
			newErr = errors.Errorf("manifest for %s not found", ref.String())
		case v2.ErrorCodeNameUnknown:
			newErr = errors.Errorf("repository %s not found", ref.Name())
		}
		if newErr != nil {
			logrus.Infof("Translating %q to %q", err, newErr)
			return newErr
		}
	case xfer.DoNotRetry:
		return translatePullError(v.Err, ref)
	}
	return err
}
// TagImageWithReference adds the given reference to the image ID provided.
func (daemon *Daemon) TagImageWithReference(imageID image.ID, newTag reference.Named) error {
	if err := daemon.referenceStore.AddTag(newTag, imageID.Digest(), true); err != nil {
		return err
	}

	daemon.LogImageEvent(imageID.String(), newTag.String(), "tag")
	return nil
}
func (p *v1Puller) pullRepository(ctx context.Context, ref reference.Named) error {
	progress.Message(p.config.ProgressOutput, "", "Pulling repository "+p.repoInfo.FullName())

	tagged, isTagged := ref.(reference.NamedTagged)

	repoData, err := p.session.GetRepositoryData(p.repoInfo)
	if err != nil {
		if strings.Contains(err.Error(), "HTTP code: 404") {
			if isTagged {
				return fmt.Errorf("Error: image %s:%s not found", p.repoInfo.RemoteName(), tagged.Tag())
			}
			return fmt.Errorf("Error: image %s not found", p.repoInfo.RemoteName())
		}
		// Unexpected HTTP error
		return err
	}

	logrus.Debug("Retrieving the tag list")
	var tagsList map[string]string
	if !isTagged {
		tagsList, err = p.session.GetRemoteTags(repoData.Endpoints, p.repoInfo)
	} else {
		var tagID string
		tagsList = make(map[string]string)
		tagID, err = p.session.GetRemoteTag(repoData.Endpoints, p.repoInfo, tagged.Tag())
		if err == registry.ErrRepoNotFound {
			return fmt.Errorf("Tag %s not found in repository %s", tagged.Tag(), p.repoInfo.FullName())
		}
		tagsList[tagged.Tag()] = tagID
	}
	if err != nil {
		logrus.Errorf("unable to get remote tags: %s", err)
		return err
	}

	for tag, id := range tagsList {
		repoData.ImgList[id] = &registry.ImgData{
			ID:       id,
			Tag:      tag,
			Checksum: "",
		}
	}

	layersDownloaded := false
	for _, imgData := range repoData.ImgList {
		if isTagged && imgData.Tag != tagged.Tag() {
			continue
		}
		err := p.downloadImage(ctx, repoData, imgData, &layersDownloaded)
		if err != nil {
			return err
		}
	}

	writeStatus(ref.String(), p.config.ProgressOutput, layersDownloaded)
	return nil
}
// trustedPush handles content trust pushing of an image
func trustedPush(ctx context.Context, cli *command.DockerCli, repoInfo *registry.RepositoryInfo, ref reference.Named, authConfig types.AuthConfig, requestPrivilege types.RequestPrivilegeFunc) error {
	responseBody, err := imagePushPrivileged(ctx, cli, authConfig, ref.String(), requestPrivilege)
	if err != nil {
		return err
	}
	defer responseBody.Close()

	return PushTrustedReference(cli, repoInfo, ref, authConfig, responseBody)
}
// TagImage creates a tag, newTag, pointing to the image named imageName.
func (daemon *Daemon) TagImage(newTag reference.Named, imageName string) error {
	imageID, err := daemon.GetImageID(imageName)
	if err != nil {
		return err
	}

	if err := daemon.referenceStore.AddTag(newTag, imageID, true); err != nil {
		return err
	}

	daemon.EventsService.Log("tag", newTag.String(), "")
	return nil
}
func (p *v2Puller) pullV2Repository(ctx context.Context, ref reference.Named) (err error) {
	var layersDownloaded bool
	if !reference.IsNameOnly(ref) {
		var err error
		layersDownloaded, err = p.pullV2Tag(ctx, ref)
		if err != nil {
			return err
		}
	} else {
		manSvc, err := p.repo.Manifests(ctx)
		if err != nil {
			return err
		}

		tags, err := manSvc.Tags()
		if err != nil {
			// If this repository doesn't exist on V2, we should
			// permit a fallback to V1.
			return allowV1Fallback(err)
		}

		// The v2 registry knows about this repository, so we will not
		// allow fallback to the v1 protocol even if we encounter an
		// error later on.
		p.confirmedV2 = true

		// This probably becomes a lot nicer after the manifest
		// refactor...
		for _, tag := range tags {
			tagRef, err := reference.WithTag(ref, tag)
			if err != nil {
				return err
			}
			pulledNew, err := p.pullV2Tag(ctx, tagRef)
			if err != nil {
				// Since this is the pull-all-tags case, don't
				// allow an error pulling a particular tag to
				// make the whole pull fall back to v1.
				if fallbackErr, ok := err.(fallbackError); ok {
					return fallbackErr.err
				}
				return err
			}
			// pulledNew is true if either new layers were downloaded OR if existing images were newly tagged
			// TODO(tiborvass): should we change the name of `layersDownload`? What about message in WriteStatus?
			layersDownloaded = layersDownloaded || pulledNew
		}
	}

	writeStatus(ref.String(), p.config.ProgressOutput, layersDownloaded)

	return nil
}
func (i *Image) PullImage(ctx context.Context, ref reference.Named, metaHeaders map[string][]string, authConfig *types.AuthConfig, outStream io.Writer) error {
	defer trace.End(trace.Begin("PullImage"))

	log.Printf("PullImage: ref = %+v, metaheaders = %+v\n", ref, metaHeaders)

	var cmdArgs []string
	cmdArgs = append(cmdArgs, "-reference", ref.String())

	if authConfig != nil {
		if len(authConfig.Username) > 0 {
			cmdArgs = append(cmdArgs, "-username", authConfig.Username)
		}
		if len(authConfig.Password) > 0 {
			cmdArgs = append(cmdArgs, "-password", authConfig.Password)
		}
	}

	portLayerServer := PortLayerServer()
	if portLayerServer != "" {
		cmdArgs = append(cmdArgs, "-host", portLayerServer)
	}

	// instruct imagec to use os.TempDir
	cmdArgs = append(cmdArgs, "-destination", os.TempDir())

	log.Printf("PullImage: cmd = %s %+v\n", Imagec, cmdArgs)

	cmd := exec.Command(Imagec, cmdArgs...)
	cmd.Stdout = outStream
	cmd.Stderr = outStream

	// Execute
	err := cmd.Start()
	if err != nil {
		log.Printf("Error starting %s - %s\n", Imagec, err)
		return fmt.Errorf("Error starting %s - %s\n", Imagec, err)
	}

	err = cmd.Wait()
	if err != nil {
		log.Println("imagec exit code:", err)
		return err
	}

	client := PortLayerClient()
	ImageCache().Update(client)
	return nil
}
func (i *Image) PullImage(ctx context.Context, ref reference.Named, metaHeaders map[string][]string, authConfig *types.AuthConfig, outStream io.Writer) error {
	defer trace.End(trace.Begin(ref.String()))

	log.Debugf("PullImage: ref = %+v, metaheaders = %+v\n", ref, metaHeaders)

	options := imagec.Options{
		Destination: os.TempDir(),
		Reference:   ref.String(),
		Timeout:     imagec.DefaultHTTPTimeout,
		Outstream:   outStream,
	}

	if authConfig != nil {
		if len(authConfig.Username) > 0 {
			options.Username = authConfig.Username
		}
		if len(authConfig.Password) > 0 {
			options.Password = authConfig.Password
		}
	}

	portLayerServer := PortLayerServer()
	if portLayerServer != "" {
		options.Host = portLayerServer
	}

	insecureRegistries := InsecureRegistries()
	for _, registry := range insecureRegistries {
		if registry == ref.Hostname() {
			options.InsecureAllowHTTP = true
			break
		}
	}

	log.Infof("PullImage: reference: %s, %s, portlayer: %#v", options.Reference, options.Host, portLayerServer)

	ic := imagec.NewImageC(options, streamformatter.NewJSONStreamFormatter())
	err := ic.PullImage()
	if err != nil {
		return err
	}

	return nil
}
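The insecure-registry check above is a plain membership test on the reference's hostname. In isolation (the registry list and hostnames below are illustrative, not VIC's configuration):

```go
package main

import "fmt"

// allowHTTP reports whether host appears in the configured insecure list,
// mirroring the InsecureRegistries loop in PullImage.
func allowHTTP(host string, insecure []string) bool {
	for _, r := range insecure {
		if r == host {
			return true
		}
	}
	return false
}

func main() {
	insecure := []string{"registry.local:5000"}
	fmt.Println(allowHTTP("registry.local:5000", insecure)) // true
	fmt.Println(allowHTTP("docker.io", insecure))           // false
}
```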
func (p *v2Puller) pullV2Repository(ctx context.Context, ref reference.Named) (err error) {
	var refs []reference.Named
	if !reference.IsNameOnly(ref) {
		refs = []reference.Named{ref}
	} else {
		manSvc, err := p.repo.Manifests(ctx)
		if err != nil {
			return err
		}

		tags, err := manSvc.Tags()
		if err != nil {
			return err
		}

		// If this call succeeded, we can be confident that the
		// registry on the other side speaks the v2 protocol.
		p.confirmedV2 = true

		// This probably becomes a lot nicer after the manifest
		// refactor...
		for _, tag := range tags {
			tagRef, err := reference.WithTag(ref, tag)
			if err != nil {
				return err
			}
			refs = append(refs, tagRef)
		}
	}

	var layersDownloaded bool
	for _, pullRef := range refs {
		// pulledNew is true if either new layers were downloaded OR if existing images were newly tagged
		// TODO(tiborvass): should we change the name of `layersDownload`? What about message in WriteStatus?
		pulledNew, err := p.pullV2Tag(ctx, pullRef)
		if err != nil {
			return err
		}
		layersDownloaded = layersDownloaded || pulledNew
	}

	writeStatus(ref.String(), p.config.ProgressOutput, layersDownloaded)

	return nil
}
// Get returns the imageID for a parsed reference
func (store *repoCache) Get(ref reference.Named) (string, error) {
	defer trace.End(trace.Begin(""))

	ref = reference.WithDefaultTag(ref)

	store.mu.RLock()
	defer store.mu.RUnlock()

	repository, exists := store.Repositories[ref.Name()]
	if !exists || repository == nil {
		return "", ErrDoesNotExist
	}

	imageID, exists := repository[ref.String()]
	if !exists {
		return "", ErrDoesNotExist
	}

	return imageID, nil
}
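Get is a read-locked lookup in a two-level map: repository name, then full reference string. A minimal self-contained sketch of that shape using stdlib sync (the `refStore` names are illustrative, not the repoCache API):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

var errDoesNotExist = errors.New("reference does not exist")

// refStore mirrors the repoCache layout: repo name -> full ref -> image ID.
type refStore struct {
	mu           sync.RWMutex
	repositories map[string]map[string]string
}

// Get takes a read lock, then walks both map levels; a miss at either
// level is reported with the same sentinel error.
func (s *refStore) Get(repo, ref string) (string, error) {
	s.mu.RLock()
	defer s.mu.RUnlock()

	repository, ok := s.repositories[repo]
	if !ok {
		return "", errDoesNotExist
	}
	id, ok := repository[ref]
	if !ok {
		return "", errDoesNotExist
	}
	return id, nil
}

func main() {
	s := &refStore{repositories: map[string]map[string]string{
		"library/busybox": {"library/busybox:latest": "sha256:abc"},
	}}
	id, err := s.Get("library/busybox", "library/busybox:latest")
	fmt.Println(id, err)
}
```

The RWMutex lets concurrent readers proceed; only mutating operations (AddTag, Delete) need the write lock.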
// Delete deletes a reference from the store. It returns true if a deletion
// happened, or false otherwise.
func (store *repoCache) Delete(ref reference.Named, save bool) (bool, error) {
	defer trace.End(trace.Begin(""))

	ref = reference.WithDefaultTag(ref)

	store.mu.Lock()
	defer store.mu.Unlock()

	var err error
	// return code -- assume success
	rtc := true

	repoName := ref.Name()

	repository, exists := store.Repositories[repoName]
	if !exists {
		return false, ErrDoesNotExist
	}

	refStr := ref.String()
	if imageID, exists := repository[refStr]; exists {
		delete(repository, refStr)
		if len(repository) == 0 {
			delete(store.Repositories, repoName)
		}
		if store.referencesByIDCache[imageID] != nil {
			delete(store.referencesByIDCache[imageID], refStr)
			if len(store.referencesByIDCache[imageID]) == 0 {
				delete(store.referencesByIDCache, imageID)
			}
		}
		if layer, exists := store.images[imageID]; exists {
			delete(store.Layers, imageID)
			delete(store.images, layer)
		}
		if save {
			err = store.Save()
			if err != nil {
				rtc = false
			}
		}
		return rtc, err
	}

	return false, ErrDoesNotExist
}
func verifySchema1Manifest(signedManifest *schema1.SignedManifest, ref reference.Named) (m *schema1.Manifest, err error) {
	// If pull by digest, then verify the manifest digest. NOTE: It is
	// important to do this first, before any other content validation. If the
	// digest cannot be verified, don't even bother with those other things.
	if digested, isCanonical := ref.(reference.Canonical); isCanonical {
		verifier, err := digest.NewDigestVerifier(digested.Digest())
		if err != nil {
			return nil, err
		}
		if _, err := verifier.Write(signedManifest.Canonical); err != nil {
			return nil, err
		}
		if !verifier.Verified() {
			err := fmt.Errorf("image verification failed for digest %s", digested.Digest())
			logrus.Error(err)
			return nil, err
		}
	}
	m = &signedManifest.Manifest

	if m.SchemaVersion != 1 {
		return nil, fmt.Errorf("unsupported schema version %d for %q", m.SchemaVersion, ref.String())
	}
	if len(m.FSLayers) != len(m.History) {
		return nil, fmt.Errorf("length of history not equal to number of layers for %q", ref.String())
	}
	if len(m.FSLayers) == 0 {
		return nil, fmt.Errorf("no FSLayers in manifest for %q", ref.String())
	}
	return m, nil
}
func (pm *Manager) pull(ref reference.Named, metaHeader http.Header, authConfig *types.AuthConfig, pluginID string) (types.PluginPrivileges, error) {
	pd, err := distribution.Pull(ref, pm.registryService, metaHeader, authConfig)
	if err != nil {
		logrus.Debugf("error in distribution.Pull(): %v", err)
		return nil, err
	}

	if err := distribution.WritePullData(pd, filepath.Join(pm.libRoot, pluginID), true); err != nil {
		logrus.Debugf("error in distribution.WritePullData(): %v", err)
		return nil, err
	}

	tag := distribution.GetTag(ref)
	p := v2.NewPlugin(ref.Name(), pluginID, pm.runRoot, pm.libRoot, tag)
	if err := p.InitPlugin(); err != nil {
		return nil, err
	}
	pm.pluginStore.Add(p)

	pm.pluginEventLogger(pluginID, ref.String(), "pull")
	return p.ComputePrivileges(), nil
}
func verifyManifest(signedManifest *schema1.SignedManifest, ref reference.Named) (m *schema1.Manifest, err error) {
	// If pull by digest, then verify the manifest digest. NOTE: It is
	// important to do this first, before any other content validation. If the
	// digest cannot be verified, don't even bother with those other things.
	if digested, isCanonical := ref.(reference.Canonical); isCanonical {
		verifier, err := digest.NewDigestVerifier(digested.Digest())
		if err != nil {
			return nil, err
		}
		payload, err := signedManifest.Payload()
		if err != nil {
			// If this failed, the signatures section was corrupted
			// or missing. Treat the entire manifest as the payload.
			payload = signedManifest.Raw
		}
		if _, err := verifier.Write(payload); err != nil {
			return nil, err
		}
		if !verifier.Verified() {
			err := fmt.Errorf("image verification failed for digest %s", digested.Digest())
			logrus.Error(err)
			return nil, err
		}

		var verifiedManifest schema1.Manifest
		if err = json.Unmarshal(payload, &verifiedManifest); err != nil {
			return nil, err
		}
		m = &verifiedManifest
	} else {
		m = &signedManifest.Manifest
	}

	if m.SchemaVersion != 1 {
		return nil, fmt.Errorf("unsupported schema version %d for %q", m.SchemaVersion, ref.String())
	}
	if len(m.FSLayers) != len(m.History) {
		return nil, fmt.Errorf("length of history not equal to number of layers for %q", ref.String())
	}
	if len(m.FSLayers) == 0 {
		return nil, fmt.Errorf("no FSLayers in manifest for %q", ref.String())
	}
	return m, nil
}
func (r *pluginReference) Get(ref reference.Named) (digest.Digest, error) {
	if r.name.String() != ref.String() {
		return digest.Digest(""), reference.ErrDoesNotExist
	}
	return r.pluginID, nil
}
func (p *v2Puller) pullV2Tag(ctx context.Context, ref reference.Named) (tagUpdated bool, err error) {
	manSvc, err := p.repo.Manifests(ctx)
	if err != nil {
		return false, err
	}

	var (
		manifest    distribution.Manifest
		tagOrDigest string // Used for logging/progress only
	)
	if tagged, isTagged := ref.(reference.NamedTagged); isTagged {
		// NOTE: not using TagService.Get, since it uses HEAD requests
		// against the manifests endpoint, which are not supported by
		// all registry versions.
		manifest, err = manSvc.Get(ctx, "", client.WithTag(tagged.Tag()))
		if err != nil {
			return false, allowV1Fallback(err)
		}
		tagOrDigest = tagged.Tag()
	} else if digested, isDigested := ref.(reference.Canonical); isDigested {
		manifest, err = manSvc.Get(ctx, digested.Digest())
		if err != nil {
			return false, err
		}
		tagOrDigest = digested.Digest().String()
	} else {
		return false, fmt.Errorf("internal error: reference has neither a tag nor a digest: %s", ref.String())
	}

	if manifest == nil {
		return false, fmt.Errorf("image manifest does not exist for tag or digest %q", tagOrDigest)
	}

	// If manSvc.Get succeeded, we can be confident that the registry on
	// the other side speaks the v2 protocol.
	p.confirmedV2 = true

	logrus.Debugf("Pulling ref from V2 registry: %s", ref.String())
	progress.Message(p.config.ProgressOutput, tagOrDigest, "Pulling from "+p.repo.Name())

	var (
		imageID        image.ID
		manifestDigest digest.Digest
	)

	switch v := manifest.(type) {
	case *schema1.SignedManifest:
		imageID, manifestDigest, err = p.pullSchema1(ctx, ref, v)
		if err != nil {
			return false, err
		}
	case *schema2.DeserializedManifest:
		imageID, manifestDigest, err = p.pullSchema2(ctx, ref, v)
		if err != nil {
			return false, err
		}
	case *manifestlist.DeserializedManifestList:
		imageID, manifestDigest, err = p.pullManifestList(ctx, ref, v)
		if err != nil {
			return false, err
		}
	default:
		return false, errors.New("unsupported manifest format")
	}

	progress.Message(p.config.ProgressOutput, "", "Digest: "+manifestDigest.String())

	oldTagImageID, err := p.config.ReferenceStore.Get(ref)
	if err == nil {
		if oldTagImageID == imageID {
			return false, nil
		}
	} else if err != reference.ErrDoesNotExist {
		return false, err
	}

	if canonical, ok := ref.(reference.Canonical); ok {
		if err = p.config.ReferenceStore.AddDigest(canonical, imageID, true); err != nil {
			return false, err
		}
	} else if err = p.config.ReferenceStore.AddTag(ref, imageID, true); err != nil {
		return false, err
	}

	return true, nil
}
// Push initiates a push operation on the repository named localName.
// ref is the specific variant of the image to be pushed.
// If no tag is provided, all tags will be pushed.
func Push(ctx context.Context, ref reference.Named, imagePushConfig *ImagePushConfig) error {
	// FIXME: Allow to interrupt current push when new push of same image is done.

	// Resolve the Repository name from fqn to RepositoryInfo
	repoInfo, err := imagePushConfig.RegistryService.ResolveRepository(ref)
	if err != nil {
		return err
	}

	endpoints, err := imagePushConfig.RegistryService.LookupPushEndpoints(repoInfo)
	if err != nil {
		return err
	}

	progress.Messagef(imagePushConfig.ProgressOutput, "", "The push refers to a repository [%s]", repoInfo.FullName())

	associations := imagePushConfig.ReferenceStore.ReferencesByName(repoInfo)
	if len(associations) == 0 {
		return fmt.Errorf("Repository does not exist: %s", repoInfo.Name())
	}

	var (
		lastErr error

		// confirmedV2 is set to true if a push attempt managed to
		// confirm that it was talking to a v2 registry. This will
		// prevent fallback to the v1 protocol.
		confirmedV2 bool
	)

	for _, endpoint := range endpoints {
		if confirmedV2 && endpoint.Version == registry.APIVersion1 {
			logrus.Debugf("Skipping v1 endpoint %s because v2 registry was detected", endpoint.URL)
			continue
		}

		logrus.Debugf("Trying to push %s to %s %s", repoInfo.FullName(), endpoint.URL, endpoint.Version)

		pusher, err := NewPusher(ref, endpoint, repoInfo, imagePushConfig)
		if err != nil {
			lastErr = err
			continue
		}
		if err := pusher.Push(ctx); err != nil {
			// Was this push cancelled? If so, don't try to fall
			// back.
			select {
			case <-ctx.Done():
			default:
				if fallbackErr, ok := err.(fallbackError); ok {
					confirmedV2 = confirmedV2 || fallbackErr.confirmedV2
					err = fallbackErr.err
					lastErr = err
					continue
				}
			}

			logrus.Debugf("Not continuing with error: %v", err)
			return err
		}

		imagePushConfig.ImageEventLogger(ref.String(), repoInfo.Name(), "push")
		return nil
	}

	if lastErr == nil {
		lastErr = fmt.Errorf("no endpoints found for %s", repoInfo.FullName())
	}
	return lastErr
}
func (p *v2Puller) pullV2Tag(ctx context.Context, ref reference.Named) (tagUpdated bool, err error) {
	manSvc, err := p.repo.Manifests(ctx)
	if err != nil {
		return false, err
	}

	var (
		manifest    distribution.Manifest
		tagOrDigest string // Used for logging/progress only
	)
	if tagged, isTagged := ref.(reference.NamedTagged); isTagged {
		manifest, err = manSvc.Get(ctx, "", distribution.WithTag(tagged.Tag()))
		if err != nil {
			return false, allowV1Fallback(err)
		}
		tagOrDigest = tagged.Tag()
	} else if digested, isDigested := ref.(reference.Canonical); isDigested {
		manifest, err = manSvc.Get(ctx, digested.Digest())
		if err != nil {
			return false, err
		}
		tagOrDigest = digested.Digest().String()
	} else {
		return false, fmt.Errorf("internal error: reference has neither a tag nor a digest: %s", ref.String())
	}

	if manifest == nil {
		return false, fmt.Errorf("image manifest does not exist for tag or digest %q", tagOrDigest)
	}

	if m, ok := manifest.(*schema2.DeserializedManifest); ok {
		if m.Manifest.Config.MediaType == schema2.MediaTypePluginConfig ||
			m.Manifest.Config.MediaType == "application/vnd.docker.plugin.image.v0+json" { //TODO: remove this v0 before 1.13 GA
			return false, errMediaTypePlugin
		}
	}

	// If manSvc.Get succeeded, we can be confident that the registry on
	// the other side speaks the v2 protocol.
	p.confirmedV2 = true

	logrus.Debugf("Pulling ref from V2 registry: %s", ref.String())
	progress.Message(p.config.ProgressOutput, tagOrDigest, "Pulling from "+p.repo.Named().Name())

	var (
		id             digest.Digest
		manifestDigest digest.Digest
	)

	switch v := manifest.(type) {
	case *schema1.SignedManifest:
		id, manifestDigest, err = p.pullSchema1(ctx, ref, v)
		if err != nil {
			return false, err
		}
	case *schema2.DeserializedManifest:
		id, manifestDigest, err = p.pullSchema2(ctx, ref, v)
		if err != nil {
			return false, err
		}
	case *manifestlist.DeserializedManifestList:
		id, manifestDigest, err = p.pullManifestList(ctx, ref, v)
		if err != nil {
			return false, err
		}
	default:
		return false, errors.New("unsupported manifest format")
	}

	progress.Message(p.config.ProgressOutput, "", "Digest: "+manifestDigest.String())

	oldTagID, err := p.config.ReferenceStore.Get(ref)
	if err == nil {
		if oldTagID == id {
			return false, addDigestReference(p.config.ReferenceStore, ref, manifestDigest, id)
		}
	} else if err != reference.ErrDoesNotExist {
		return false, err
	}

	if canonical, ok := ref.(reference.Canonical); ok {
		if err = p.config.ReferenceStore.AddDigest(canonical, id, true); err != nil {
			return false, err
		}
	} else {
		if err = addDigestReference(p.config.ReferenceStore, ref, manifestDigest, id); err != nil {
			return false, err
		}
		if err = p.config.ReferenceStore.AddTag(ref, id, true); err != nil {
			return false, err
		}
	}

	return true, nil
}
// Pull initiates a pull operation. image is the repository name to pull, and
// tag may be either empty, or indicate a specific tag to pull.
func Pull(ctx context.Context, ref reference.Named, imagePullConfig *ImagePullConfig) error {
	// Resolve the Repository name from fqn to RepositoryInfo
	repoInfo, err := imagePullConfig.RegistryService.ResolveRepository(ref)
	if err != nil {
		return err
	}

	// makes sure name is not empty or `scratch`
	if err := ValidateRepoName(repoInfo.Name()); err != nil {
		return err
	}

	endpoints, err := imagePullConfig.RegistryService.LookupPullEndpoints(repoInfo.Hostname())
	if err != nil {
		return err
	}

	var (
		lastErr error

		// discardNoSupportErrors is used to track whether an endpoint encountered an error of type registry.ErrNoSupport.
		// By default it is false, which means that if an ErrNoSupport error is encountered, it will be saved in lastErr.
		// As soon as another kind of error is encountered, discardNoSupportErrors is set to true, avoiding the saving of
		// any subsequent ErrNoSupport errors in lastErr.
		// It's needed for pull-by-digest on v1 endpoints: if there are only v1 endpoints configured, the error should be
		// returned and displayed, but if there was a v2 endpoint which supports pull-by-digest, then the last relevant
		// errors are the ones from v2 endpoints, not v1.
		discardNoSupportErrors bool

		// confirmedV2 is set to true if a pull attempt managed to
		// confirm that it was talking to a v2 registry. This will
		// prevent fallback to the v1 protocol.
		confirmedV2 bool

		// confirmedTLSRegistries is a map indicating which registries
		// are known to be using TLS. There should never be a plaintext
		// retry for any of these.
		confirmedTLSRegistries = make(map[string]struct{})
	)

	for _, endpoint := range endpoints {
		if confirmedV2 && endpoint.Version == registry.APIVersion1 {
			logrus.Debugf("Skipping v1 endpoint %s because v2 registry was detected", endpoint.URL)
			continue
		}
		if endpoint.URL.Scheme != "https" {
			if _, confirmedTLS := confirmedTLSRegistries[endpoint.URL.Host]; confirmedTLS {
				logrus.Debugf("Skipping non-TLS endpoint %s for host/port that appears to use TLS", endpoint.URL)
				continue
			}
		}

		logrus.Debugf("Trying to pull %s from %s %s", repoInfo.Name(), endpoint.URL, endpoint.Version)

		puller, err := newPuller(endpoint, repoInfo, imagePullConfig)
		if err != nil {
			lastErr = err
			continue
		}
		if err := puller.Pull(ctx, ref); err != nil {
			// Was this pull cancelled? If so, don't try to fall
			// back.
			fallback := false
			select {
			case <-ctx.Done():
			default:
				if fallbackErr, ok := err.(fallbackError); ok {
					fallback = true
					confirmedV2 = confirmedV2 || fallbackErr.confirmedV2
					if fallbackErr.transportOK && endpoint.URL.Scheme == "https" {
						confirmedTLSRegistries[endpoint.URL.Host] = struct{}{}
					}
					err = fallbackErr.err
				}
			}
			if fallback {
				if _, ok := err.(ErrNoSupport); !ok {
					// Because we found an error that's not ErrNoSupport, discard all subsequent ErrNoSupport errors.
					discardNoSupportErrors = true
					// append subsequent errors
					lastErr = err
				} else if !discardNoSupportErrors {
					// Save the ErrNoSupport error, because it's either the first error or all encountered errors
					// were also ErrNoSupport errors.
					// append subsequent errors
					lastErr = err
				}
				logrus.Errorf("Attempting next endpoint for pull after error: %v", err)
				continue
			}
			logrus.Errorf("Not continuing with pull after error: %v", err)
			return err
		}

		imagePullConfig.ImageEventLogger(ref.String(), repoInfo.Name(), "pull")
		return nil
	}

	if lastErr == nil {
		lastErr = fmt.Errorf("no endpoints found for %s", ref.String())
	}

	return lastErr
}
// TrustedPush handles content trust pushing of an image
func (cli *DockerCli) TrustedPush(ctx context.Context, repoInfo *registry.RepositoryInfo, ref reference.Named, authConfig types.AuthConfig, requestPrivilege types.RequestPrivilegeFunc) error {
	responseBody, err := cli.ImagePushPrivileged(ctx, authConfig, ref.String(), requestPrivilege)
	if err != nil {
		return err
	}

	defer responseBody.Close()

	// If it is a trusted push we would like to find the target entry which matches the
	// tag provided in the function and then do an AddTarget later.
	target := &client.Target{}
	// Count the number of times handleTarget is called;
	// if it is called more than once, that should be considered an error in a trusted push.
	cnt := 0
	handleTarget := func(aux *json.RawMessage) {
		cnt++
		if cnt > 1 {
			// handleTarget should only be called once. This will be treated as an error.
			return
		}

		var pushResult distribution.PushResult
		err := json.Unmarshal(*aux, &pushResult)
		if err == nil && pushResult.Tag != "" && pushResult.Digest.Validate() == nil {
			h, err := hex.DecodeString(pushResult.Digest.Hex())
			if err != nil {
				target = nil
				return
			}
			target.Name = registry.ParseReference(pushResult.Tag).String()
			target.Hashes = data.Hashes{string(pushResult.Digest.Algorithm()): h}
			target.Length = int64(pushResult.Size)
		}
	}

	var tag string
	switch x := ref.(type) {
	case reference.Canonical:
		return errors.New("cannot push a digest reference")
	case reference.NamedTagged:
		tag = x.Tag()
	}

	// We want trust signatures to always take an explicit tag,
	// otherwise it will act as an untrusted push.
	if tag == "" {
		if err = jsonmessage.DisplayJSONMessagesToStream(responseBody, cli.Out(), nil); err != nil {
			return err
		}
		fmt.Fprintln(cli.out, "No tag specified, skipping trust metadata push")
		return nil
	}

	if err = jsonmessage.DisplayJSONMessagesToStream(responseBody, cli.Out(), handleTarget); err != nil {
		return err
	}

	if cnt > 1 {
		return fmt.Errorf("internal error: only one call to handleTarget expected")
	}

	if target == nil {
		fmt.Fprintln(cli.out, "No targets found, please provide a specific tag in order to sign it")
		return nil
	}

	fmt.Fprintln(cli.out, "Signing and pushing trust metadata")

	repo, err := cli.getNotaryRepository(repoInfo, authConfig, "push", "pull")
	if err != nil {
		fmt.Fprintf(cli.out, "Error establishing connection to notary repository: %s\n", err)
		return err
	}

	// get the latest repository metadata so we can figure out which roles to sign
	err = repo.Update(false)

	switch err.(type) {
	case client.ErrRepoNotInitialized, client.ErrRepositoryNotExist:
		keys := repo.CryptoService.ListKeys(data.CanonicalRootRole)
		var rootKeyID string
		// always select the first root key
		if len(keys) > 0 {
			sort.Strings(keys)
			rootKeyID = keys[0]
		} else {
			rootPublicKey, err := repo.CryptoService.Create(data.CanonicalRootRole, "", data.ECDSAKey)
			if err != nil {
				return err
			}
			rootKeyID = rootPublicKey.ID()
		}

		// Initialize the notary repository with a remotely managed snapshot key
		if err := repo.Initialize(rootKeyID, data.CanonicalSnapshotRole); err != nil {
			return notaryError(repoInfo.FullName(), err)
		}
		fmt.Fprintf(cli.out, "Finished initializing %q\n", repoInfo.FullName())
		err = repo.AddTarget(target, data.CanonicalTargetsRole)
	case nil:
		// already initialized and we have successfully downloaded the latest metadata
		err = cli.addTargetToAllSignableRoles(repo, target)
	default:
		return notaryError(repoInfo.FullName(), err)
	}

	if err == nil {
		err = repo.Publish()
	}

	if err != nil {
		fmt.Fprintf(cli.out, "Failed to sign %q:%s - %s\n", repoInfo.FullName(), tag, err.Error())
		return notaryError(repoInfo.FullName(), err)
	}

	fmt.Fprintf(cli.out, "Successfully signed %q:%s\n", repoInfo.FullName(), tag)
	return nil
}
// Push initiates a push operation on the repository named localName.
// ref is the specific variant of the image to be pushed.
// If no tag is provided, all tags will be pushed.
func Push(ctx context.Context, ref reference.Named, imagePushConfig *ImagePushConfig) error {
	// FIXME: Allow to interrupt current push when new push of same image is done.

	// Resolve the Repository name from fqn to RepositoryInfo
	repoInfo, err := imagePushConfig.RegistryService.ResolveRepository(ref)
	if err != nil {
		return err
	}

	endpoints, err := imagePushConfig.RegistryService.LookupPushEndpoints(repoInfo)
	if err != nil {
		return err
	}

	progress.Messagef(imagePushConfig.ProgressOutput, "", "The push refers to a repository [%s]", repoInfo.FullName())

	associations := imagePushConfig.ReferenceStore.ReferencesByName(repoInfo)
	if len(associations) == 0 {
		return fmt.Errorf("Repository does not exist: %s", repoInfo.Name())
	}

	var (
		lastErr error

		// confirmedV2 is set to true if a push attempt managed to
		// confirm that it was talking to a v2 registry. This will
		// prevent fallback to the v1 protocol.
		confirmedV2 bool

		// confirmedTLSRegistries is a map indicating which registries
		// are known to be using TLS. There should never be a plaintext
		// retry for any of these.
		confirmedTLSRegistries = make(map[string]struct{})
	)

	for _, endpoint := range endpoints {
		if confirmedV2 && endpoint.Version == registry.APIVersion1 {
			logrus.Debugf("Skipping v1 endpoint %s because v2 registry was detected", endpoint.URL)
			continue
		}
		if endpoint.URL.Scheme != "https" {
			if _, confirmedTLS := confirmedTLSRegistries[endpoint.URL.Host]; confirmedTLS {
				logrus.Debugf("Skipping non-TLS endpoint %s for host/port that appears to use TLS", endpoint.URL)
				continue
			}
		}

		logrus.Debugf("Trying to push %s to %s %s", repoInfo.FullName(), endpoint.URL, endpoint.Version)

		pusher, err := NewPusher(ref, endpoint, repoInfo, imagePushConfig)
		if err != nil {
			lastErr = err
			continue
		}
		if err := pusher.Push(ctx); err != nil {
			// Was this push cancelled? If so, don't try to fall
			// back.
			select {
			case <-ctx.Done():
			default:
				if fallbackErr, ok := err.(fallbackError); ok {
					confirmedV2 = confirmedV2 || fallbackErr.confirmedV2
					if fallbackErr.transportOK && endpoint.URL.Scheme == "https" {
						confirmedTLSRegistries[endpoint.URL.Host] = struct{}{}
					}
					err = fallbackErr.err
					lastErr = err
					logrus.Errorf("Attempting next endpoint for push after error: %v", err)
					continue
				}
			}

			logrus.Errorf("Not continuing with push after error: %v", err)
			return err
		}

		imagePushConfig.ImageEventLogger(ref.String(), repoInfo.Name(), "push")
		return nil
	}

	if lastErr == nil {
		lastErr = fmt.Errorf("no endpoints found for %s", repoInfo.FullName())
	}
	return lastErr
}
func (mf *v2ManifestFetcher) fetchWithRepository(ctx context.Context, ref reference.Named) (*imageInspect, error) {
	var (
		manifest    distribution.Manifest
		tagOrDigest string // Used for logging/progress only
		tagList     = []string{}
	)

	manSvc, err := mf.repo.Manifests(ctx)
	if err != nil {
		return nil, err
	}

	if _, isTagged := ref.(reference.NamedTagged); !isTagged {
		ref, err = reference.WithTag(ref, reference.DefaultTag)
		if err != nil {
			return nil, err
		}
	}

	if tagged, isTagged := ref.(reference.NamedTagged); isTagged {
		// NOTE: not using TagService.Get, since it uses HEAD requests
		// against the manifests endpoint, which are not supported by
		// all registry versions.
		manifest, err = manSvc.Get(ctx, "", client.WithTag(tagged.Tag()))
		if err != nil {
			return nil, allowV1Fallback(err)
		}
		tagOrDigest = tagged.Tag()
	} else if digested, isDigested := ref.(reference.Canonical); isDigested {
		manifest, err = manSvc.Get(ctx, digested.Digest())
		if err != nil {
			return nil, err
		}
		tagOrDigest = digested.Digest().String()
	} else {
		return nil, fmt.Errorf("internal error: reference has neither a tag nor a digest: %s", ref.String())
	}

	if manifest == nil {
		return nil, fmt.Errorf("image manifest does not exist for tag or digest %q", tagOrDigest)
	}

	// If manSvc.Get succeeded, we can be confident that the registry on
	// the other side speaks the v2 protocol.
	mf.confirmedV2 = true

	tagList, err = mf.repo.Tags(ctx).All(ctx)
	if err != nil {
		// If this repository doesn't exist on V2, we should
		// permit a fallback to V1.
		return nil, allowV1Fallback(err)
	}

	var (
		image          *image.Image
		manifestDigest digest.Digest
	)

	switch v := manifest.(type) {
	case *schema1.SignedManifest:
		image, manifestDigest, err = mf.pullSchema1(ctx, ref, v)
		if err != nil {
			return nil, err
		}
	case *schema2.DeserializedManifest:
		image, manifestDigest, err = mf.pullSchema2(ctx, ref, v)
		if err != nil {
			return nil, err
		}
	case *manifestlist.DeserializedManifestList:
		image, manifestDigest, err = mf.pullManifestList(ctx, ref, v)
		if err != nil {
			return nil, err
		}
	default:
		return nil, errors.New("unsupported manifest format")
	}

	// TODO(runcom)
	//var showTags bool
	//if reference.IsNameOnly(ref) {
	//showTags = true
	//logrus.Debug("Using default tag: latest")
	//ref = reference.WithDefaultTag(ref)
	//}
	//_ = showTags

	return makeImageInspect(image, tagOrDigest, manifestDigest, tagList), nil
}
func getData(c *cli.Context, ref reference.Named) (*imageInspect, error) {
	repoInfo, err := registry.ParseRepositoryInfo(ref)
	if err != nil {
		return nil, err
	}
	authConfig, err := getAuthConfig(c, repoInfo.Index)
	if err != nil {
		return nil, err
	}
	if err := validateRepoName(repoInfo.Name()); err != nil {
		return nil, err
	}

	//options := &registry.Options{}
	//options.Mirrors = opts.NewListOpts(nil)
	//options.InsecureRegistries = opts.NewListOpts(nil)
	//options.InsecureRegistries.Set("0.0.0.0/0")
	//registryService := registry.NewService(options)
	registryService := registry.NewService(nil)
	//// TODO(runcom): hacky, provide a way of passing tls cert (flag?) to be used to lookup
	//for _, ic := range registryService.Config.IndexConfigs {
	//ic.Secure = false
	//}

	endpoints, err := registryService.LookupPullEndpoints(repoInfo)
	if err != nil {
		return nil, err
	}

	var (
		ctx                    = context.Background()
		lastErr                error
		discardNoSupportErrors bool
		imgInspect             *imageInspect
		confirmedV2            bool
	)

	for _, endpoint := range endpoints {
		// make sure I can reach the registry, same as docker pull does
		v1endpoint, err := endpoint.ToV1Endpoint(nil)
		if err != nil {
			return nil, err
		}
		if _, err := v1endpoint.Ping(); err != nil {
			if strings.Contains(err.Error(), "timeout") {
				return nil, err
			}
			continue
		}

		if confirmedV2 && endpoint.Version == registry.APIVersion1 {
			logrus.Debugf("Skipping v1 endpoint %s because v2 registry was detected", endpoint.URL)
			continue
		}

		logrus.Debugf("Trying to fetch image manifest of %s repository from %s %s", repoInfo.Name(), endpoint.URL, endpoint.Version)

		//fetcher, err := newManifestFetcher(endpoint, repoInfo, config)
		fetcher, err := newManifestFetcher(endpoint, repoInfo, authConfig, registryService)
		if err != nil {
			lastErr = err
			continue
		}

		if imgInspect, err = fetcher.Fetch(ctx, ref); err != nil {
			// Was this fetch cancelled? If so, don't try to fall back.
			fallback := false
			select {
			case <-ctx.Done():
			default:
				if fallbackErr, ok := err.(fallbackError); ok {
					fallback = true
					confirmedV2 = confirmedV2 || fallbackErr.confirmedV2
					err = fallbackErr.err
				}
			}
			if fallback {
				if _, ok := err.(registry.ErrNoSupport); !ok {
					// Because we found an error that's not ErrNoSupport, discard all subsequent ErrNoSupport errors.
					discardNoSupportErrors = true
					// save the current error
					lastErr = err
				} else if !discardNoSupportErrors {
					// Save the ErrNoSupport error, because it's either the first error or all encountered errors
					// were also ErrNoSupport errors.
					lastErr = err
				}
				continue
			}
			logrus.Debugf("Not continuing with error: %v", err)
			return nil, err
		}

		return imgInspect, nil
	}

	if lastErr == nil {
		lastErr = fmt.Errorf("no endpoints found for %s", ref.String())
	}
	return nil, lastErr
}
// trustedPull handles content trust pulling of an image
func trustedPull(ctx context.Context, cli *command.DockerCli, repoInfo *registry.RepositoryInfo, ref reference.Named, authConfig types.AuthConfig, requestPrivilege types.RequestPrivilegeFunc) error {
	var refs []target

	notaryRepo, err := trust.GetNotaryRepository(cli, repoInfo, authConfig, "pull")
	if err != nil {
		fmt.Fprintf(cli.Out(), "Error establishing connection to trust repository: %s\n", err)
		return err
	}

	if tagged, isTagged := ref.(reference.NamedTagged); !isTagged {
		// List all targets
		targets, err := notaryRepo.ListTargets(trust.ReleasesRole, data.CanonicalTargetsRole)
		if err != nil {
			return trust.NotaryError(repoInfo.FullName(), err)
		}
		for _, tgt := range targets {
			t, err := convertTarget(tgt.Target)
			if err != nil {
				fmt.Fprintf(cli.Out(), "Skipping target for %q\n", repoInfo.Name())
				continue
			}
			// Only list tags in the top level targets role or the releases delegation role - ignore
			// all other delegation roles
			if tgt.Role != trust.ReleasesRole && tgt.Role != data.CanonicalTargetsRole {
				continue
			}
			refs = append(refs, t)
		}
		if len(refs) == 0 {
			return trust.NotaryError(repoInfo.FullName(), fmt.Errorf("No trusted tags for %s", repoInfo.FullName()))
		}
	} else {
		t, err := notaryRepo.GetTargetByName(tagged.Tag(), trust.ReleasesRole, data.CanonicalTargetsRole)
		if err != nil {
			return trust.NotaryError(repoInfo.FullName(), err)
		}
		// Only get the tag if it's in the top level targets role or the releases delegation role
		// ignore it if it's in any other delegation roles
		if t.Role != trust.ReleasesRole && t.Role != data.CanonicalTargetsRole {
			return trust.NotaryError(repoInfo.FullName(), fmt.Errorf("No trust data for %s", tagged.Tag()))
		}

		logrus.Debugf("retrieving target for %s role", t.Role)
		r, err := convertTarget(t.Target)
		if err != nil {
			return err
		}
		refs = append(refs, r)
	}

	for i, r := range refs {
		displayTag := r.name
		if displayTag != "" {
			displayTag = ":" + displayTag
		}
		fmt.Fprintf(cli.Out(), "Pull (%d of %d): %s%s@%s\n", i+1, len(refs), repoInfo.Name(), displayTag, r.digest)

		ref, err := reference.WithDigest(reference.TrimNamed(repoInfo), r.digest)
		if err != nil {
			return err
		}
		if err := imagePullPrivileged(ctx, cli, authConfig, ref.String(), requestPrivilege, false); err != nil {
			return err
		}

		tagged, err := reference.WithTag(repoInfo, r.name)
		if err != nil {
			return err
		}
		trustedRef, err := reference.WithDigest(reference.TrimNamed(repoInfo), r.digest)
		if err != nil {
			return err
		}
		if err := TagTrusted(ctx, cli, trustedRef, tagged); err != nil {
			return err
		}
	}
	return nil
}
func (p *v2Puller) pullV2Tag(ctx context.Context, ref reference.Named) (tagUpdated bool, err error) {
	tagOrDigest := ""
	if tagged, isTagged := ref.(reference.NamedTagged); isTagged {
		tagOrDigest = tagged.Tag()
	} else if digested, isCanonical := ref.(reference.Canonical); isCanonical {
		tagOrDigest = digested.Digest().String()
	} else {
		return false, fmt.Errorf("internal error: reference has neither a tag nor a digest: %s", ref.String())
	}

	logrus.Debugf("Pulling ref from V2 registry: %q", tagOrDigest)

	manSvc, err := p.repo.Manifests(ctx)
	if err != nil {
		return false, err
	}

	unverifiedManifest, err := manSvc.GetByTag(tagOrDigest)
	if err != nil {
		return false, err
	}
	if unverifiedManifest == nil {
		return false, fmt.Errorf("image manifest does not exist for tag or digest %q", tagOrDigest)
	}

	// If GetByTag succeeded, we can be confident that the registry on
	// the other side speaks the v2 protocol.
	p.confirmedV2 = true

	var verifiedManifest *schema1.Manifest
	verifiedManifest, err = verifyManifest(unverifiedManifest, ref)
	if err != nil {
		return false, err
	}

	rootFS := image.NewRootFS()

	if err := detectBaseLayer(p.config.ImageStore, verifiedManifest, rootFS); err != nil {
		return false, err
	}

	// remove duplicate layers and check parent chain validity
	err = fixManifestLayers(verifiedManifest)
	if err != nil {
		return false, err
	}

	progress.Message(p.config.ProgressOutput, tagOrDigest, "Pulling from "+p.repo.Name())

	var descriptors []xfer.DownloadDescriptor

	// Image history converted to the new format
	var history []image.History

	// Note that the order of this loop is in the direction of bottom-most
	// to top-most, so that the downloads slice gets ordered correctly.
	for i := len(verifiedManifest.FSLayers) - 1; i >= 0; i-- {
		blobSum := verifiedManifest.FSLayers[i].BlobSum

		var throwAway struct {
			ThrowAway bool `json:"throwaway,omitempty"`
		}
		if err := json.Unmarshal([]byte(verifiedManifest.History[i].V1Compatibility), &throwAway); err != nil {
			return false, err
		}

		h, err := v1.HistoryFromConfig([]byte(verifiedManifest.History[i].V1Compatibility), throwAway.ThrowAway)
		if err != nil {
			return false, err
		}
		history = append(history, h)

		if throwAway.ThrowAway {
			continue
		}

		layerDescriptor := &v2LayerDescriptor{
			digest:         blobSum,
			repo:           p.repo,
			blobSumService: p.blobSumService,
		}

		descriptors = append(descriptors, layerDescriptor)
	}

	resultRootFS, release, err := p.config.DownloadManager.Download(ctx, *rootFS, descriptors, p.config.ProgressOutput)
	if err != nil {
		return false, err
	}
	defer release()

	config, err := v1.MakeConfigFromV1Config([]byte(verifiedManifest.History[0].V1Compatibility), &resultRootFS, history)
	if err != nil {
		return false, err
	}

	imageID, err := p.config.ImageStore.Create(config)
	if err != nil {
		return false, err
	}

	manifestDigest, _, err := digestFromManifest(unverifiedManifest, p.repoInfo)
	if err != nil {
		return false, err
	}

	if manifestDigest != "" {
		progress.Message(p.config.ProgressOutput, "", "Digest: "+manifestDigest.String())
	}

	oldTagImageID, err := p.config.ReferenceStore.Get(ref)
	if err == nil && oldTagImageID == imageID {
		return false, nil
	}

	if canonical, ok := ref.(reference.Canonical); ok {
		if err = p.config.ReferenceStore.AddDigest(canonical, imageID, true); err != nil {
			return false, err
		}
	} else if err = p.config.ReferenceStore.AddTag(ref, imageID, true); err != nil {
		return false, err
	}

	return true, nil
}
// Pull initiates a pull operation. image is the repository name to pull, and
// tag may be either empty, or indicate a specific tag to pull.
func Pull(ctx context.Context, ref reference.Named, imagePullConfig *ImagePullConfig) error {
	// Resolve the Repository name from fqn to RepositoryInfo
	repoInfo, err := imagePullConfig.RegistryService.ResolveRepository(ref)
	if err != nil {
		return err
	}

	// makes sure name is not empty or `scratch`
	if err := validateRepoName(repoInfo.Name()); err != nil {
		return err
	}

	endpoints, err := imagePullConfig.RegistryService.LookupPullEndpoints(repoInfo)
	if err != nil {
		return err
	}

	var (
		// use a slice to append the error strings and return a joined string to caller
		errors []string
		// discardNoSupportErrors is used to track whether an endpoint encountered an error of type registry.ErrNoSupport
		// By default it is false, which means that if a ErrNoSupport error is encountered, it will be saved in errors.
		// As soon as another kind of error is encountered, discardNoSupportErrors is set to true, avoiding the saving of
		// any subsequent ErrNoSupport errors in errors.
		// It's needed for pull-by-digest on v1 endpoints: if there are only v1 endpoints configured, the error should be
		// returned and displayed, but if there was a v2 endpoint which supports pull-by-digest, then the last relevant
		// error is the ones from v2 endpoints not v1.
		discardNoSupportErrors bool

		// confirmedV2 is set to true if a pull attempt managed to
		// confirm that it was talking to a v2 registry. This will
		// prevent fallback to the v1 protocol.
		confirmedV2 bool
	)

	for _, endpoint := range endpoints {
		if confirmedV2 && endpoint.Version == registry.APIVersion1 {
			logrus.Debugf("Skipping v1 endpoint %s because v2 registry was detected", endpoint.URL)
			continue
		}

		logrus.Debugf("Trying to pull %s from %s %s", repoInfo.Name(), endpoint.URL, endpoint.Version)

		puller, err := newPuller(endpoint, repoInfo, imagePullConfig)
		if err != nil {
			errors = append(errors, err.Error())
			continue
		}
		if err := puller.Pull(ctx, ref); err != nil {
			// Was this pull cancelled? If so, don't try to fall
			// back.
			fallback := false
			select {
			case <-ctx.Done():
			default:
				if fallbackErr, ok := err.(fallbackError); ok {
					fallback = true
					confirmedV2 = confirmedV2 || fallbackErr.confirmedV2
					err = fallbackErr.err
				}
			}
			if fallback {
				if _, ok := err.(registry.ErrNoSupport); !ok {
					// Because we found an error that's not ErrNoSupport, discard all subsequent ErrNoSupport errors.
					discardNoSupportErrors = true
					// append subsequent errors
					errors = append(errors, err.Error())
				} else if !discardNoSupportErrors {
					// Save the ErrNoSupport error, because it's either the first error or all encountered errors
					// were also ErrNoSupport errors.
					// append subsequent errors
					errors = append(errors, err.Error())
				}
				continue
			}
			errors = append(errors, err.Error())
			logrus.Debugf("Not continuing with error: %v", fmt.Errorf(strings.Join(errors, "\n")))
			if len(errors) > 0 {
				return fmt.Errorf(strings.Join(errors, "\n"))
			}
		}

		imagePullConfig.ImageEventLogger(ref.String(), repoInfo.Name(), "pull")
		return nil
	}

	if len(errors) == 0 {
		return fmt.Errorf("no endpoints found for %s", ref.String())
	}

	if len(errors) > 0 {
		return fmt.Errorf(strings.Join(errors, "\n"))
	}

	return nil
}
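`Pull` rejects an empty or reserved repository name up front ("makes sure name is not empty or `scratch`"). A minimal sketch of `validateRepoName` consistent with that comment, hedged because the real implementation lives elsewhere in the package and may phrase its errors differently:

```go
package main

import "fmt"

// validateRepoName is a hypothetical sketch of the helper called by Pull
// and getData: it rejects an empty name and the reserved base-image name
// "scratch", which can never be pulled from a registry.
func validateRepoName(name string) error {
	if name == "" {
		return fmt.Errorf("Repository name can't be empty")
	}
	if name == "scratch" {
		return fmt.Errorf("'scratch' is a reserved name")
	}
	return nil
}

func main() {
	// A normal repository name passes; the reserved name is rejected.
	fmt.Println(validateRepoName("library/busybox"))
	fmt.Println(validateRepoName("scratch"))
}
```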
func (store *repoCache) AddReference(ref reference.Named, imageID string, force bool, layerID string, save bool) error {
	defer trace.End(trace.Begin(""))

	if ref.Name() == string(digest.Canonical) {
		return errors.New("refusing to create an ambiguous tag using digest algorithm as name")
	}

	var err error
	store.mu.Lock()
	defer store.mu.Unlock()

	// does this repo (i.e. busybox) exist?
	repository, exists := store.Repositories[ref.Name()]
	if !exists || repository == nil {
		repository = make(map[string]string)
		store.Repositories[ref.Name()] = repository
	}

	refStr := ref.String()
	oldID, exists := repository[refStr]
	if exists {
		if oldID == imageID {
			log.Debugf("Image %s is already tagged as %s", oldID, ref.String())
			return nil
		}
		// force only works for tags
		if digested, isDigest := ref.(reference.Canonical); isDigest {
			log.Debugf("Unable to overwrite %s with digest %s", oldID, digested.Digest().String())
			return fmt.Errorf("Cannot overwrite digest %s", digested.Digest().String())
		}

		if !force {
			log.Debugf("Refusing to overwrite %s with %s unless force is specified", oldID, ref.String())
			return fmt.Errorf("Conflict: Tag %s is already set to image %s, if you want to replace it, please use -f option", ref.String(), oldID)
		}

		if store.referencesByIDCache[oldID] != nil {
			delete(store.referencesByIDCache[oldID], refStr)
			if len(store.referencesByIDCache[oldID]) == 0 {
				delete(store.referencesByIDCache, oldID)
			}
		}
	}

	repository[refStr] = imageID
	if store.referencesByIDCache[imageID] == nil {
		store.referencesByIDCache[imageID] = make(map[string]reference.Named)
	}
	store.referencesByIDCache[imageID][refStr] = ref
	store.Layers[layerID] = imageID
	store.images[imageID] = layerID

	// should we save this input?
	if save {
		err = store.Save()
	}

	return err
}
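The overwrite rules enforced by `repoCache.AddReference` reduce to: re-adding the same mapping is a no-op, a digest reference is never replaced, and a tag is replaced only when `force` is set. A simplified standalone sketch of just those rules (the real store also maintains layer and reverse-lookup caches, locking, and persistence):

```go
package main

import "fmt"

// addRef applies the overwrite rules of repoCache.AddReference to a bare
// map. This is an illustrative reduction, not the real implementation:
// ref is the reference string, isDigest marks canonical (digest) refs.
func addRef(repo map[string]string, ref, imageID string, isDigest, force bool) error {
	oldID, exists := repo[ref]
	if exists && oldID != imageID {
		// Digest references are immutable; force does not apply.
		if isDigest {
			return fmt.Errorf("Cannot overwrite digest %s", ref)
		}
		// Tags may only be retargeted with force.
		if !force {
			return fmt.Errorf("Conflict: Tag %s is already set to image %s", ref, oldID)
		}
	}
	repo[ref] = imageID
	return nil
}

func main() {
	repo := map[string]string{"busybox:latest": "id1"}
	fmt.Println(addRef(repo, "busybox:latest", "id2", false, false)) // conflict without force
	fmt.Println(addRef(repo, "busybox:latest", "id2", false, true))  // forced retag succeeds
	fmt.Println(repo["busybox:latest"])
}
```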
func (p *v2Puller) pullV2Tag(ctx context.Context, ref reference.Named) (tagUpdated bool, err error) {
	manSvc, err := p.repo.Manifests(ctx)
	if err != nil {
		return false, err
	}

	var (
		manifest    distribution.Manifest
		tagOrDigest string // Used for logging/progress only
	)
	if tagged, isTagged := ref.(reference.NamedTagged); isTagged {
		manifest, err = manSvc.Get(ctx, "", distribution.WithTag(tagged.Tag()))
		if err != nil {
			return false, allowV1Fallback(err)
		}
		tagOrDigest = tagged.Tag()
	} else if digested, isDigested := ref.(reference.Canonical); isDigested {
		manifest, err = manSvc.Get(ctx, digested.Digest())
		if err != nil {
			return false, err
		}
		tagOrDigest = digested.Digest().String()
	} else {
		return false, fmt.Errorf("internal error: reference has neither a tag nor a digest: %s", ref.String())
	}

	if manifest == nil {
		return false, fmt.Errorf("image manifest does not exist for tag or digest %q", tagOrDigest)
	}

	if m, ok := manifest.(*schema2.DeserializedManifest); ok {
		var allowedMediatype bool
		for _, t := range p.config.Schema2Types {
			if m.Manifest.Config.MediaType == t {
				allowedMediatype = true
				break
			}
		}
		if !allowedMediatype {
			configClass := mediaTypeClasses[m.Manifest.Config.MediaType]
			if configClass == "" {
				configClass = "unknown"
			}
			return false, fmt.Errorf("target is %s", configClass)
		}
	}

	// If manSvc.Get succeeded, we can be confident that the registry on
	// the other side speaks the v2 protocol.
	p.confirmedV2 = true

	logrus.Debugf("Pulling ref from V2 registry: %s", ref.String())
	progress.Message(p.config.ProgressOutput, tagOrDigest, "Pulling from "+p.repo.Named().Name())

	var (
		id             digest.Digest
		manifestDigest digest.Digest
	)

	switch v := manifest.(type) {
	case *schema1.SignedManifest:
		if p.config.RequireSchema2 {
			return false, fmt.Errorf("invalid manifest: not schema2")
		}
		id, manifestDigest, err = p.pullSchema1(ctx, ref, v)
		if err != nil {
			return false, err
		}
	case *schema2.DeserializedManifest:
		id, manifestDigest, err = p.pullSchema2(ctx, ref, v)
		if err != nil {
			return false, err
		}
	case *manifestlist.DeserializedManifestList:
		id, manifestDigest, err = p.pullManifestList(ctx, ref, v)
		if err != nil {
			return false, err
		}
	default:
		return false, errors.New("unsupported manifest format")
	}

	progress.Message(p.config.ProgressOutput, "", "Digest: "+manifestDigest.String())

	if p.config.ReferenceStore != nil {
		oldTagID, err := p.config.ReferenceStore.Get(ref)
		if err == nil {
			if oldTagID == id {
				return false, addDigestReference(p.config.ReferenceStore, ref, manifestDigest, id)
			}
		} else if err != reference.ErrDoesNotExist {
			return false, err
		}

		if canonical, ok := ref.(reference.Canonical); ok {
			if err = p.config.ReferenceStore.AddDigest(canonical, id, true); err != nil {
				return false, err
			}
		} else {
			if err = addDigestReference(p.config.ReferenceStore, ref, manifestDigest, id); err != nil {
				return false, err
			}
			if err = p.config.ReferenceStore.AddTag(ref, id, true); err != nil {
				return false, err
			}
		}
	}
	return true, nil
}