
Shock

About:

Shock is a platform to support computation, storage, and distribution. It is designed from the ground up to be fast, scalable, fault tolerant, and federated.

Shock is RESTful, accessible from desktops, HPC systems, exotic hardware, the cloud, and your smartphone.

Shock is for scientific data. One of the challenges of large-volume scientific data is that, without often-complex metadata, it is of little to no value. Shock stores and queries (querying in development) complex metadata.

Shock is a data management system. Annotate, anonymize, convert, filter, quality control, and statistically subsample bioinformatics sequence data at line speed. Extensible plug-in architecture (in development).

Most importantly, Shock is still very much in development. Be patient and contribute.

Shock is actively being developed at github.com/MG-RAST/Shock.


Building:

Shock requires Go >= 1.1.0 ([golang.org](http://golang.org/)), git, mercurial, and bazaar:
go get github.com/MG-RAST/Shock/...

The built binary will be located under the configured $GOPATH or $GOROOT, depending on your Go configuration.
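
For example, with a conventional $GOPATH layout the build might look like the following sketch (paths are illustrative and not prescribed by Shock):

# assumes $GOPATH is set and go, git, mercurial, and bazaar are installed
export GOPATH=$HOME/go
go get github.com/MG-RAST/Shock/...
# the shock-server binary should now be under $GOPATH/bin
ls $GOPATH/bin/shock-server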


Configuration:

The Shock configuration file is in INI format. A template of the config file is located at the root level of the repository.
[Admin]
email=admin@host.com
secretkey=supersecretkey

[Anonymous]
# Controls an anonymous user's ability to read/write
# values: true/false
read=true
write=false
create-user=false

[Auth]
# defaults to local user management with basic auth
type=basic
# comment out the line above and uncomment the lines below to use Globus Online as the auth provider
#type=globus 
#globus_token_url=https://nexus.api.globusonline.org/goauth/token?grant_type=client_credentials
#globus_profile_url=https://nexus.api.globusonline.org/users

[Directories]
# See documentation for details of deploying Shock
site=/usr/local/shock/site
data=/usr/local/shock
logs=/var/log/shock

# Comma-delimited search paths available for remote path uploads. Only remote paths that
# prefix-match one of the following will be allowed. Note: poor choices can result in security concerns.
local_paths=N/A

[External]
site-url=http://localhost

[SSL]
enable=false
#key=<path_to_key_file>
#cert=<path_to_cert_file>

[Mongodb]
# Mongodb configuration:
# Hostnames and ports hosts=host1[,host2:port,...,hostN]
hosts=localhost
database=ShockDB
user=
password=

[Mongodb-Node-Indices]
# See http://www.mongodb.org/display/DOCS/Indexes#Indexes-CreationOptions for more info on mongodb index options.
# key=unique:true/false[,dropDups:true/false][,sparse:true/false]
id=unique:true

[Ports]
# Ports for site/api
# Note: use of port 80 may require root access
site-port=7444
api-port=7445

To run (additionally requires MongoDB >= 2.0.3):

shock-server -conf <path_to_config_file>
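
For example, a quick smoke test against a local instance might look like the following (the config filename is illustrative; the api-port of 7445 comes from the template above):

# start the server with a config file based on the repository template
shock-server -conf ./shock.cfg
# in another shell, confirm the API answers on the configured api-port
curl -X GET http://localhost:7445/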

Routes Overview

API Routes (default port 7445):

##### OPTIONS

  • all OPTIONS requests respond with CORS headers and 200 OK

##### GET

##### PUT

##### POST

##### DELETE

Site Routes (default port 7444):

##### GET

  • / this documentation and future site
  • /raw listing of data dir
  • /assets js, img, css, README.md

Authentication:

Shock supports multiple forms of authentication via plugin modules. Credentials are cached for one hour to speed up high-transaction loads. Server restarts will clear the credential cache.

Globus Online

In this configuration Shock locally stores only UUIDs for users that it has already seen. The registration of new users is done exclusively through the external auth provider. The user API is disabled in this mode.

Examples:

# globus online username & password
curl --user username:password ...

# globus online bearer token 
curl -H "Authorization: OAuth $TOKEN" ...

Data Types

Node:

  • id: unique identifier
  • file: name, size, checksum(s).
  • attributes: arbitrary JSON. Queryable.
  • indexes: A set of indexes to use.
  • version: a version stamp for this node.
node example:
{
    "data": {
        "attributes": null, 
        "file": {
            "checksum": {}, 
            "format": "", 
            "name": "", 
            "size": 0, 
            "virtual": false, 
            "virtual_parts": []
        }, 
        "id": "130cadb5-9435-4bd9-be13-715ec40b2bb5", 
        "relatives": [], 
        "type": [], 
        "version": "4da883924aa8ae9eb95f6cd247f2f554"
    }, 
    "error": null, 
    "status": 200
}

### Index:

Currently there is support for two types of indices: virtual and file.

virtual index:

A virtual index is one that can be generated on the fly without the support of precalculated data. The current working example of this is the size virtual index. Based on the file size and the desired chunk size, the partitions become individually addressable.
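
As a worked example (the 1 MB default chunk size is documented under GET /node/{id} below), a 25,000,000-byte file divides into ceil(25,000,000 / 1,048,576) = 24 addressable parts; host and node id are placeholders:

# parts 1-23 are 1,048,576 bytes each; part 24 holds the remaining 882,752 bytes
curl -X GET http://<host>[:<port>]/node/{id}/?download&index=size&part=24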

file index:

Currently in early development, the file index is a JSON file stored on disk in the node's directory.

# abstract form
{
	index_type : <type>,
	filename : <filename>,
	checksum_type : <type>,
	version : <version>,
	index : [
		[<position>,<length>,<optional_checksum>]...
	]
}

# example
{
	"index_type" : "fasta",
	"filename" : "none",
	"checksum_type" : "none",
	"version" : 1,
	"index" : [[0,1861]]
}
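
As a sketch of consuming this format, the byte range of a record can be pulled out of the index JSON with jq; the on-disk filename is not specified above, so the path here is a placeholder:

# print the offset and length of the first record from the example index above
jq -r '.index[0] | "offset=\(.[0]) length=\(.[1])"' <path_to_node_dir>/<index_file>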



API by example

All examples use curl but can be easily modified for any HTTP client or library. Note: authentication is required for most of these commands.

#### Node creation (details):

# without file or attributes
curl -X POST http://<host>[:<port>]/node

# with attributes
curl -X POST -F "attributes=@<path_to_json_file>" http://<host>[:<port>]/node

# with file
curl -X POST -F "upload=@<path_to_data_file>" http://<host>[:<port>]/node

# with file local to the shock server
curl -X POST -F "path=<path_to_data_file>" http://<host>[:<port>]/node

# with file upload in N parts (part uploads may be done in parallel)
curl -X POST -F "parts=N" http://<host>[:<port>]/node
curl -X PUT -F "1=@<file_part_1>" http://<host>[:<port>]/node/<node_id>
curl -X PUT -F "2=@<file_part_2>" http://<host>[:<port>]/node/<node_id>
...
curl -X PUT -F "N=@<file_part_N>" http://<host>[:<port>]/node/<node_id>

#### Node retrieval ([details](#get_node)):
# node information
curl -X GET http://<host>[:<port>]/node/{id}

# download file
curl -X GET http://<host>[:<port>]/node/{id}/?download

# download first 1mb of file
curl -X GET http://<host>[:<port>]/node/{id}/?download&index=size&part=1
    
# download first 10mb of file
curl -X GET http://<host>[:<port>]/node/{id}/?download&index=size&chunk_size=10485760&part=1

# download Nth 10mb of file
curl -X GET http://<host>[:<port>]/node/{id}/?download&index=size&chunk_size=10485760&part=N

#### Node acls:
# view all acls
curl -X GET http://<host>[:<port>]/node/{id}/acl/

# view specific acls
curl -X GET http://<host>[:<port>]/node/{id}/acl/[ read | write | delete | owner ]

# changing owner (chown)
curl -X PUT http://<host>[:<port>]/node/{id}/acl/?owner=<email-address_or_uuid>
or
curl -X PUT http://<host>[:<port>]/node/{id}/acl/owner?users=<email-address_or_uuid>

# adding user to all acls (except owner)
curl -X PUT http://<host>[:<port>]/node/{id}/acl/?all=<list_of_email-addresses_or_uuids>

# adding user to specific acls
curl -X PUT http://<host>[:<port>]/node/{id}/acl/[ read | write | delete | owner ]?users=<list_of_email-addresses_or_uuids>
or
curl -X PUT http://<host>[:<port>]/node/{id}/acl/?[ read | write | delete ]=<list_of_email-addresses_or_uuids>

# adding users to both read and write acls:
curl -X PUT http://<host>[:<port>]/node/{id}/acl/?read=<list_of_email-addresses_or_uuids>&write=<list_of_email-addresses_or_uuids>

# deleting user from all acls (except owner)
curl -X DELETE http://<host>[:<port>]/node/{id}/acl/?all=<list_of_email-addresses_or_uuids>    

# deleting user from specific acls
curl -X DELETE http://<host>[:<port>]/node/{id}/acl/[ read | write | delete ]?users=<list_of_email-addresses_or_uuids>
or
curl -X DELETE http://<host>[:<port>]/node/{id}/acl/?[ read | write | delete ]=<list_of_email-addresses_or_uuids>

# deleting users from both read and write acls:
curl -X DELETE http://<host>[:<port>]/node/{id}/acl/?read=<list_of_email-addresses_or_uuids>&write=<list_of_email-addresses_or_uuids>

#### Querying ([details](#get_nodes)):
# by attribute key value
curl -X GET http://<host>[:<port>]/node/?query&<key>=<value>

# by attribute key value, limit 10
curl -X GET http://<host>[:<port>]/node/?query&<key>=<value>&limit=10

# by attribute key value, limit 10, offset 10
curl -X GET http://<host>[:<port>]/node/?query&<key>=<value>&limit=10&offset=10

API

Response wrapper:

All responses from Shock currently are in the following encoding.

{ "data": , "error": , "status": <int: http status code>, "limit": <int: paginated requests only>, "offset": <int: paginated requests only>, "total_count": <int: paginated requests only> }


### GET /

Description of resources available through this API

curl -X GET http://<host>[:<port>]/
returns
{"resources":["node", "user"],"url":"http://localhost:8000/","documentation":"http://localhost/","contact":"admin@host.com","id":"Shock","type":"Shock"}

### POST /node

Create node

  • optionally takes user/password via Basic Auth. If set, only that user will have access to the node
  • accepts multipart/form-data encoded
  • to set attributes include file field named "attributes" containing a json file of attributes
  • to set file include file field named "upload" containing any file or include field named "path" containing the file system path to the file accessible from the Shock server
curl -X POST [ see Authentication ] [ -F "attributes=@<path_to_json>" ( -F "upload=@<path_to_data_file>" || -F "path=<path_to_file>") ] http://<host>[:<port>]/node
returns
{
    "data": {<node>},
    "error": <error message or null>, 
    "status": <http status of response (also set in headers)>
} 

### GET /node

List nodes

  • optionally takes user/password via Basic Auth. Grants access to non-public data
  • by adding ?offset=N you get the nodes starting at N+1
  • by adding ?limit=N you get a maximum of N nodes returned

All attributes are queryable. For example, if a node has in its attributes "about" : "metagenome" the url

/node/?query&about=metagenome

would return it and all other nodes with that attribute. Addressing nested attributes like "metadata": { "env_biome": "ENVO:human-associated habitat", ... } is done via dot notation

/node/?query&metadata.env_biome=ENVO:human-associated%20habitat

Multiple attributes can be selected in a single query and are treated as AND operations

/node/?query&metadata.env_biome=ENVO:human-associated%20habitat&about=metagenome

Note: all special characters, like a space, must be URL encoded.

example
curl -X GET [ see Authentication ] http://<host>[:<port>]/node/[?offset=<offset>&limit=<count>][&query&<tag>=<value>]
returns
{
  "data": {[<array of nodes>]},
  "error": <string or null: error message>,
  "status": <int: http status code>,
  "limit": <limit>, 
  "offset": <offset>,
  "total_count": <count>
}
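
For instance, a second page of 25 nodes from a local instance could be fetched as follows (host and port are the defaults from the template config; values are illustrative):

# returns nodes 26-50; limit, offset, and total_count are echoed in the response wrapper
curl -X GET "http://localhost:7445/node/?limit=25&offset=25"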

### GET /node/{id}

View node, download file (full or partial)

  • optionally takes user/password via Basic Auth
  • ?download - complete file download
  • ?download&index=size&part=1[&part=2...][&chunk_size=<bytes>] - download a portion of the file via the size virtual index. Chunk size defaults to 1MB (1048576 bytes).
curl -X GET [ see Authentication ] http://<host>[:<port>]/node/{id}
returns
{
    "data": {<node>},
    "error": <error message or null>, 
    "status": <http status of request>
}

### PUT /node/{id}

Modify node, create index

  • optionally takes user/password via Basic Auth

Modify:

  • Once the file or attributes of a node are set they are immutable.
  • accepts multipart/form-data encoded
  • to set attributes include file field named "attributes" containing a json file of attributes
  • to set file include file field named "upload" containing any file or include field named "path" containing the file system path to the file accessible from the Shock server
curl -X PUT [ see Authentication ] [ -F "attributes=@<path_to_json>" ( -F "upload=@<path_to_data_file>" || -F "path=<path_to_file>") ] http://<host>[:<port>]/node/{id}
returns
{
    "data": {<node>},
    "error": <error message or null>, 
    "status": <http status of request>
}

**Create index:**

  • currently available index types: size, record (for sequence file types)
example
curl -X PUT [ see Authentication ] http://<host>[:<port>]/node/{id}?index=<type>
returns
{
    "data": null,
    "error": <error message or null>, 
    "status": <http status of request>
}

License

Copyright (c) 2010-2012, University of Chicago. All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
