Implementing storage with MongoDB
I need to write a REST interface to MongoDB. Please help me improve the architecture.
The program's main goal is to respond to HTTP requests, store big files in Mongo, send file meta information, and so on. I want to build the application from independent components, but that seems difficult to me. Maybe I need to use some pattern such as MVC?
My program is written in Go. These are its components:
LocalStorage is responsible for creating local files received over HTTP:
package main

import (
    "log"
    "os"
)

// LocalStorage manages files saved on disk.
// All files are saved into the dir directory.
type LocalStorage struct{}

const dir = "./tmp/"

// CreateTempFile creates a temporary file in the dir directory.
func (s *LocalStorage) CreateTempFile(name string) (*os.File, error) {
    // The flag list after os.O_CREATE was cut off in the post; os.O_WRONLY and 0644 are assumed.
    file, err := os.OpenFile(dir+name, os.O_CREATE|os.O_WRONLY, 0644)
    if err != nil {
        log.Println(err)
        return nil, err
    }
    return file, nil
}
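To give context for how LocalStorage is used, a handler along these lines would stream the request body into a temporary file. This handler is a simplified sketch, not code from my project; the handler name and the fixed file name are made up.

package main

import (
    "io"
    "net/http"
)

// uploadHandler is a hypothetical handler: it streams the request body into
// a temporary file created by LocalStorage. The fixed file name is only for
// illustration.
func uploadHandler(w http.ResponseWriter, r *http.Request) {
    var storage LocalStorage
    tmp, err := storage.CreateTempFile("upload.bin")
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    defer tmp.Close()

    if _, err := io.Copy(tmp, r.Body); err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    w.WriteHeader(http.StatusCreated)
}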
The LocalFile struct is responsible for local file management (removing, calculating SHA-256). Before uploading a local file to GridFS, the application must check that the actual SHA-256 checksum equals the expected one.
package main

import (
    "crypto/sha256"
    "fmt"
    "io"
    "log"
    "os"
    "strings"
)

// LocalFile describes a file saved on disk.
type LocalFile struct {
    Path   string
    Hash   string
    Handle *os.File
    Prev   *LocalFile
}

// Sha256 calculates the SHA-256 checksum of this file.
func (f *LocalFile) Sha256() string {
    file, err := os.Open(f.Path)
    if err != nil {
        log.Println(err)
        return ""
    }
    defer file.Close()

    hasher := sha256.New()
    if _, err := io.Copy(hasher, file); err != nil {
        log.Println(err)
        return ""
    }

    var b strings.Builder
    fmt.Fprintf(&b, "%x", hasher.Sum(nil))
    return b.String()
}

// Remove removes the file from disk.
func (f *LocalFile) Remove() {
    err := os.Remove(f.Path)
    if err != nil {
        log.Println(err)
    }
}
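The intended flow ties LocalFile to the Storage and FileMeta types shown further down: hash the file, compare against the expected checksum, then upload and clean up. The function below is a hypothetical illustration of that flow, not code from my project, and the explicit comparison duplicates the check inside StoreFromDisk, which is part of what my questions at the end are about.

package main

import "log"

// verifyAndStore is a hypothetical example of the intended flow: hash the
// local file, compare with the expected value from FileMeta, and only then
// hand it to GridFS. The explicit check duplicates the one in StoreFromDisk.
func verifyAndStore(s *Storage, f *LocalFile, meta *FileMeta) {
    if f.Sha256() != meta.Hash {
        log.Printf("checksum mismatch for %s, refusing to upload", f.Path)
        f.Remove()
        return
    }
    if err := s.StoreFromDisk(f, meta); err != nil {
        log.Println(err)
        return
    }
    f.Remove() // the local temporary copy is no longer needed
}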
Storage is a CRUD interface for GridFS. It implements operations on Grid files (creating, storing):
package main

import (
    "errors"
    "fmt"
    "io"
    "log"
    "os"

    "gopkg.in/mgo.v2"
)

// Storage is a CRUD interface for GridFS.
type Storage struct {
    Server     string
    Database   string
    Collection string
}

var db *mgo.Database

// Connect connects to the default server and opens the default DB.
func (s *Storage) Connect() {
    session, err := mgo.Dial("localhost")
    if err != nil {
        log.Fatal(err)
    }
    db = session.DB("binary")
}

// CreateGridFile creates a file in GridFS.
func (s *Storage) CreateGridFile(name string) (*mgo.GridFile, error) {
    file, err := db.GridFS("fs").Create(name)
    if err != nil {
        return nil, errors.New("can not create Grid file")
    }
    return file, nil
}

// StoreFromDisk stores a disk file in GridFS.
// If the local file's SHA-256 is not equal to the SHA-256 value in FileMeta,
// an error is returned.
func (s *Storage) StoreFromDisk(file *LocalFile, meta *FileMeta) error {
    if file.Sha256() == meta.Hash {
        gridFile, err := s.CreateGridFile(meta.Name)
        if err != nil {
            log.Println("In StoreFromDisk: ", err)
            return err
        }
        defer gridFile.Close()
        // TODO: delegate opening file to LocalFile struct
        localFile, err := os.Open(file.Path)
        if err != nil {
            log.Println("While opening local file in StoreFromDisk: ", err)
            return err
        }
        defer localFile.Close()
        bytesWritten, err := io.Copy(gridFile, localFile)
        if err != nil {
            log.Println("While copying local file to GridFS: ", err)
            return err
        }
        log.Printf("Copied %d bytes to GridFS.", bytesWritten)
        return nil
    }
    return errors.New("file.sha256 != meta.sha256")
}

func (s *Storage) UploadGridFile() {
    file, err := s.CreateGridFile("hello")
    if err != nil {
        log.Fatal(err)
    }
    file.Write([]byte("hello world")) // was byte("hello world"), which does not compile
    file.Close()
}

func (s *Storage) OpenFile(name string) (io.ReadCloser, error) {
    file, err := db.GridFS("fs").Open(name)
    if err != nil {
        log.Println(err)
        return nil, err
    }
    return file, nil
}

// SaveFileToDisk copies a GridFS file to a local file on disk.
// (The middle of this function was cut off in the post; the two open calls
// below are assumed.)
func (s *Storage) SaveFileToDisk(name string) {
    file, err := s.OpenFile(name)
    if err != nil {
        log.Println("While opening grid file: ", err)
        return
    }
    defer file.Close()
    dest, err := os.OpenFile(dir+name, os.O_CREATE|os.O_WRONLY, 0644)
    if err != nil {
        log.Println("While creating file: ", err)
        return
    }
    defer dest.Close()
    // Copies from grid file to disk file
    if _, err := io.Copy(dest, file); err != nil {
        fmt.Println(err)
    }
}

func (s *Storage) WriteToGridFile() {
    // (body not shown in the post)
}
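Since GridFS only computes an MD5 on its own (see the comments at the end), one idea I am considering is storing the SHA-256 and the other FileMeta fields in the files document's metadata, and reading just that document when only meta information is needed. The sketch below assumes mgo's GridFile.SetMeta, GridFS.Find and bson.M; the gridMeta struct and the method names StoreWithMeta/LookupMeta are made up for illustration and are not part of my code.

package main

import "gopkg.in/mgo.v2/bson"

// gridMeta mirrors the fields I would keep in the GridFS files document's
// metadata field (the struct name is made up for this sketch).
type gridMeta struct {
    Sha256  string `bson:"sha256"`
    Creator string `bson:"creator"`
    SysID   string `bson:"sysid"`
}

// StoreWithMeta creates a Grid file and attaches custom metadata before writing.
func (s *Storage) StoreWithMeta(name string, meta *FileMeta, data []byte) error {
    gridFile, err := s.CreateGridFile(name)
    if err != nil {
        return err
    }
    defer gridFile.Close()
    gridFile.SetMeta(gridMeta{Sha256: meta.Hash, Creator: meta.Creator, SysID: meta.SysID})
    _, err = gridFile.Write(data)
    return err
}

// LookupMeta reads only the files document for a name, without touching the
// chunks collection, which is what the last question below is asking about.
func (s *Storage) LookupMeta(name string) (bson.M, error) {
    var doc bson.M
    err := db.GridFS("fs").Find(bson.M{"filename": name}).One(&doc)
    return doc, err
}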
There is also a Meta struct that describes the HTTP meta information of a request.
package main

import (
    "errors"
    "fmt"
    "log"
    "mime"
    "net/http"
)

// Note: mime, errors, fmt and log are used by the parse helpers, which are
// not shown in this post.

// Meta describes HTTP meta information.
type Meta struct {
    MediaType string
    Boundary  string
    Range     *Range
    FileName  string
    Property  *FileMeta
}

// Range describes an HTTP range.
type Range struct {
    Start int64
    End   int64
    Size  int64
}

// FileMeta describes user-defined file info.
type FileMeta struct {
    Name    string
    Hash    string
    Creator string
    SysID   string
}

// ParseMeta parses request information and builds a Meta.
func ParseMeta(req *http.Request) (*Meta, error) {
    meta := &Meta{}
    if err := meta.parseContentType(req.Header.Get("Content-Type")); err != nil {
        return nil, err
    }
    if err := meta.parseContentRange(req.Header.Get("Content-Range")); err != nil {
        return nil, err
    }
    if err := meta.parseContentDisposition(req.Header.Get("Content-Disposition")); err != nil {
        return nil, err
    }
    // Parse user-defined HTTP headers
    meta.parseUserProperties(req)
    return meta, nil
}
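The parse helpers (parseContentType, parseContentRange, parseContentDisposition, parseUserProperties) are omitted above. As an illustration of what they do, a Content-Range parser could look roughly like this; it is a sketch only, and my actual helper may differ.

package main

import (
    "errors"
    "fmt"
)

// parseContentRange sketches one of the omitted helpers: it fills m.Range
// from a header of the form "bytes 0-1023/4096".
// (Illustrative only; the real helper is not shown in this post.)
func (m *Meta) parseContentRange(header string) error {
    if header == "" {
        return nil // no Content-Range header on this request
    }
    r := &Range{}
    if _, err := fmt.Sscanf(header, "bytes %d-%d/%d", &r.Start, &r.End, &r.Size); err != nil {
        return errors.New("malformed Content-Range header: " + header)
    }
    m.Range = r
    return nil
}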
- How do I determine which operations these structs must implement in order to build a system with low complexity and low coupling?
- How do I design a file interface so that the file wrapper allows reading the file, calculating a checksum, comparing information about the file with the meta information, removing the file, and so on? (See the sketch after this list.)
- How can I improve my GridFS interface so that it works with such local file wrappers?
- How can I retrieve meta information about a file in GridFS without downloading it to disk?
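To make the second question more concrete, the kind of file wrapper interface I have in mind looks roughly like this; the name and method set are made up for illustration and are not part of my code.

package main

import "io"

// StoredFile sketches the file wrapper interface I have in mind for the
// second question above (hypothetical; names are made up).
type StoredFile interface {
    io.ReadCloser                // read the file contents
    Sum() (string, error)        // checksum as a hex string (SHA-256 in my case)
    Matches(meta *FileMeta) bool // compare file info against the meta information
    Remove() error               // delete the underlying file
}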
design-patterns go rest mongodb
asked May 21 at 6:54 by typemoon
Do you want to serve the files via REST, or is the REST interface only for uploading? Or do you want to implement a FUSE filesystem? If yes, single mount or shareable? And are you aware of the fact that GridFS actually calculates a checksum of the file automatically? And that you can store arbitrary metadata in the files collection? – Markus W Mahlberg, May 21 at 8:31
I want to serve the files via REST, not only upload. Single mount. GridFS calculates MD5, but I need SHA-256. – typemoon, May 21 at 8:58
@Markus W Mahlberg, the REST API must serve the following requests: put a "little" file (one that can be transmitted in the body of a single POST request), get file meta info, put a big chunked file, and append bytes to an existing big file. – typemoon, May 21 at 10:25
This actually is a problem. Operations on GridFS are all or nothing; there simply is no append. While you could manipulate the entries in the files and chunks collections, I doubt that this really is useful. In general, your use case sounds more like a job for a filesystem like Gluster or Ceph. – Markus W Mahlberg, Jul 16 at 20:10