Google Drive upload for large files

I'm using the Google Drive API to upload large disk image files (>100 GB). My code uses this wrapper to upload the files in chunks of 40 MB. The files are stored on a NAS drive.



The code works fine; however, I'm losing a lot of performance because each chunk has to be fetched from the NAS before it can be uploaded. See this graph of download (fetching a chunk from the NAS) and upload (sending it to Google Drive) activity.



Graph



Notice the upload pauses for the duration of the fetch from disk, which is understandable. However, I'm wondering if there is any way to fetch the next chunk from the disk while the previous chunk is uploading, as this could significantly improve my upload times.
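
The overlap I'm after is essentially double buffering: read chunk N+1 from the NAS while chunk N is in flight. As a minimal sketch of the idea (UploadChunkAsync here is a hypothetical placeholder for the Drive API's per-chunk send, and the read loop glosses over short reads):

using System.IO;
using System.Threading.Tasks;

// Minimal double-buffering sketch: two buffers alternate so the disk read
// of the next chunk overlaps the network send of the current one.
public static class PrefetchUploadSketch
{
    public static async Task UploadWithPrefetchAsync(Stream source, int chunkSize)
    {
        var current = new byte[chunkSize];
        var next = new byte[chunkSize];

        int currentLen = await source.ReadAsync(current, 0, chunkSize);
        while (currentLen > 0)
        {
            // Start reading the next chunk from disk while the current one uploads.
            Task<int> prefetch = source.ReadAsync(next, 0, chunkSize);
            await UploadChunkAsync(current, currentLen);
            currentLen = await prefetch;
            (current, next) = (next, current); // swap buffers for the next round
        }
    }

    // Hypothetical placeholder for the real network send; simulated with a delay.
    private static Task UploadChunkAsync(byte[] buffer, int count) => Task.Delay(100);
}

My current code follows.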



using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.Threading.Tasks;
using Google.Apis.Auth.OAuth2;
using Google.Apis.Drive.v3;
using Google.Apis.Drive.v3.Data;
using Google.Apis.Services;
using Google.Apis.Upload;
using File = Google.Apis.Drive.v3.Data.File;

namespace Backup
{
    public class GoogleDrive
    {
        private readonly DriveService _service;

        public GoogleDrive()
        {
            _service = CreateDriveService();
        }

        private FileInfo uploadingFile;

        /// <summary>Uploads the given file to Google Drive, creating a new file or updating the existing one.</summary>
        /// <param name="fileInfo">The file on the NAS to upload.</param>
        /// <param name="maximumTransferRate">Indicates the maximum transfer rate for the upload, in MB/s.</param>
        /// <param name="chunkSizeMb">The chunk size for the resumable upload, in MB.</param>
        /// <returns>Returns ID of new file.</returns>
        public async Task<string> Upload(FileInfo fileInfo, int maximumTransferRate = 0, int chunkSizeMb = 10)
        {
            var uploadStream = new FileStream(fileInfo.FullName, FileMode.Open, FileAccess.Read);

            uploadingFile = fileInfo;

            var uploadArgs = await GetUploadType(fileInfo);

            switch (uploadArgs.UploadType)
            {
                default:
                    return "";

                case UploadType.Create:
                {
                    var insertRequest = _service.Files.Create(
                        new File
                        {
                            Name = fileInfo.Name,
                            Parents = new List<string>
                            {
                                BackupFolderId
                            }
                        },
                        uploadStream,
                        MimeTypes.GetMimeType(fileInfo.Name));

                    insertRequest.ChunkSize = chunkSizeMb * 1024 * 1024;
                    insertRequest.ProgressChanged += Upload_ProgressChanged;
                    insertRequest.ResponseReceived += Upload_ResponseReceived;

                    var createFileTask = insertRequest.UploadAsync();
                    await createFileTask.ContinueWith(t =>
                    {
                        uploadStream.Dispose();
                    });

                    return insertRequest.ResponseBody.Id;
                }

                case UploadType.Update:
                {
                    var updateRequest = _service.Files.Update(
                        new File
                        {
                            Name = fileInfo.Name,
                            Parents = new List<string>
                            {
                                BackupFolderId
                            }
                        },
                        uploadArgs.FileId,
                        uploadStream,
                        MimeTypes.GetMimeType(fileInfo.Name));

                    updateRequest.ChunkSize = chunkSizeMb * 1024 * 1024;
                    updateRequest.ProgressChanged += Upload_ProgressChanged;
                    updateRequest.ResponseReceived += Upload_ResponseReceived;

                    var updateFileTask = updateRequest.UploadAsync();
                    await updateFileTask.ContinueWith(t =>
                    {
                        uploadStream.Dispose();
                    });

                    return updateRequest.ResponseBody.Id;
                }
            }
        }

        public async Task<FileList> GetFiles()
        {
            var files = _service.Files.List();
            files.PageSize = 1000;
            files.Fields = "files(id, name, md5Checksum, mimeType, kind)";
            return await files.ExecuteAsync();
        }

        private async Task<UploadArgs> GetUploadType(FileSystemInfo fileOnNas)
        {
            var files = _service.Files.List();
            files.Q = $"name={fileOnNas.Name.ToDbQuote()}";
            files.Fields = "files(id, name, md5Checksum, size)";
            var result = await files.ExecuteAsync();

            if (result.Files.Count == 0)
            {
                return new UploadArgs { UploadType = Backup.UploadType.Create };
            }

            if (result.Files.Count == 1)
            {
                var fileInDrive = result.Files[0];

                using (var stream = System.IO.File.OpenRead(fileOnNas.FullName))
                {
                    // Same size on both ends: nothing to do; otherwise update the existing file.
                    return stream.Length == fileInDrive.Size
                        ? new UploadArgs
                        {
                            UploadType = UploadType.None
                        }
                        : new UploadArgs
                        {
                            UploadType = UploadType.Update,
                            FileId = fileInDrive.Id
                        };
                }
            }

            throw new NotSupportedException();
        }

        public delegate void UploadProgressChanged(IUploadProgress progress);

        public event UploadProgressChanged OnUploadProgressChanged;

        private void Upload_ProgressChanged(IUploadProgress progress)
        {
            //OnUploadProgressChanged.Invoke(progress);
            EventLog.WriteEntry("NAS Drive Backup",
                progress.Status +
                $" {uploadingFile.Name} {(progress.BytesSent / 1024 / 1024)} of {(uploadingFile.Length / 1024 / 1024)}MB (" +
                ((float)progress.BytesSent / (float)uploadingFile.Length).ToString("P2") + ") complete" +
                $"{(progress.Status == UploadStatus.Failed ? progress.Exception.Message + "\r\n" + progress.Exception.StackTrace : "")}",
                progress.Status == UploadStatus.Failed ? EventLogEntryType.Error : EventLogEntryType.Information);
        }

        private void Upload_ResponseReceived(File file)
        {
            EventLog.WriteEntry("NAS Drive Backup", file.Name + " was uploaded successfully", EventLogEntryType.Information);
        }

        public async Task Delete(string fileId)
        {
            var deleteRequest = _service.Files.Delete(fileId);
            try
            {
                await deleteRequest.ExecuteAsync();
            }
            catch
            {
                // If deletion fails (e.g. we are not the owner), try transferring ownership.
                var createPermissions = _service.Permissions.Create(new Permission
                {
                    Type = "anyone",
                    Role = "owner"
                }, fileId);

                createPermissions.TransferOwnership = true;
                await createPermissions.ExecuteAsync();
            }
        }
    }
}

  • I suggest keeping the performance tag. The .net is useless - everybody knows that c# is .net :-]
    – t3chb0t
    Jun 22 at 7:15










  • Righto you win :)
    – Brendan Gooden
    Jun 22 at 7:16










  • I don't see anything that could significantly be improved here, because all you are doing is plugging in parameters for the DriveService, which handles the uploadStream by itself, and this is Google's library, right? But maybe it's just your connection or device that cannot handle that much data in full-duplex mode?
    – t3chb0t
    Jun 23 at 8:00

















asked Jun 22 at 6:43, edited Jun 22 at 8:46
– Brendan Gooden

1 Answer
I ended up diving in boots and all and figuring it out myself. Using the google-apis-dotnet-client source as a starting point, I modified it to suit my requirements. I believe this could be implemented in future releases of the Drive SDK for .NET.



Glad if anyone can spot any improvements in this code :)



Much better result!



/// <summary>The core logic for uploading a stream. It is used by the upload and resume methods.</summary>
private async Task<IUploadProgress> UploadCoreAsync(CancellationToken cancellationToken)
{
    try
    {
        using (var callback = new ServerErrorCallback(this))
        {
            byte[] chunk1 = { };
            byte[] chunk2 = { };

            // Prepare the first chunk synchronously so there is something to send immediately.
            PrepareNextChunkKnownSize(ContentStream, cancellationToken, out chunk1, out int _);

            bool isCompleted = false;
            bool usingChunk1 = true;

            while (!isCompleted)
            {
                // Read the next chunk from disk into whichever buffer is not currently being sent.
                var getNextChunkTask = Task.Run(() =>
                {
                    if (usingChunk1)
                    {
                        PrepareNextChunkKnownSizeCustom(ContentStream, cancellationToken, BytesServerReceived + ChunkSize, out chunk2);
                    }
                    else
                    {
                        PrepareNextChunkKnownSizeCustom(ContentStream, cancellationToken, BytesServerReceived + ChunkSize, out chunk1);
                    }
                }, cancellationToken);

                var sendChunkTask = usingChunk1
                    ? SendNextChunkAsync(chunk1, cancellationToken)
                    : SendNextChunkAsync(chunk2, cancellationToken);

                // The disk read and the network send now overlap.
                await Task.WhenAll(getNextChunkTask, sendChunkTask).ConfigureAwait(false);

                isCompleted = await sendChunkTask;

                UpdateProgress(new ResumableUploadProgress(UploadStatus.Uploading, BytesServerReceived));

                usingChunk1 = !usingChunk1;
            }

            UpdateProgress(new ResumableUploadProgress(UploadStatus.Completed, BytesServerReceived));
        }
    }
    catch (TaskCanceledException ex)
    {
        Logger.Error(ex, "MediaUpload[{0}] - Task was canceled", UploadUri);
        UpdateProgress(new ResumableUploadProgress(ex, BytesServerReceived));
        throw; // rethrow without resetting the stack trace
    }
    catch (Exception ex)
    {
        Logger.Error(ex, "MediaUpload[{0}] - Exception occurred while uploading media", UploadUri);
        UpdateProgress(new ResumableUploadProgress(ex, BytesServerReceived));
    }

    return Progress;
}

/// <summary>Prepares the next chunk to send, for the case where the stream length is known.</summary>
private void PrepareNextChunkKnownSizeCustom(Stream stream, CancellationToken cancellationToken, long bytesSent, out byte[] chunk)
{
    // Because we fetch the next chunk while the previous one uploads, we can be asked for a chunk
    // past the end of the stream once the last chunk is in flight. Return an empty chunk in that case.
    long remaining = StreamLength - bytesSent;
    if (remaining <= 0)
    {
        chunk = new byte[0];
        return;
    }

    int chkSize = (int)Math.Min(remaining, (long)ChunkSize);

    // Stream length is known and it supports seek and position operations.
    // If the number of bytes received by the server isn't equal to the number of bytes the client
    // sent, we need to change the position of the input stream; otherwise we can continue from the
    // current position.
    if (stream.Position != bytesSent)
    {
        stream.Position = bytesSent;
    }

    chunk = new byte[chkSize];
    int bytesLeft = chkSize;
    int bytesRead = 0;
    while (bytesLeft > 0)
    {
        cancellationToken.ThrowIfCancellationRequested();
        // Make sure we only read at most BufferSize bytes at a time.
        int readSize = Math.Min(bytesLeft, BufferSize);
        int len = stream.Read(chunk, bytesRead, readSize);
        if (len == 0)
        {
            // Presumably the stream lied about its length. Not great, but we still have a chunk to send.
            break;
        }
        bytesRead += len;
        bytesLeft -= len;
    }
}
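
A note on the design: the two alternating buffers exist so that the buffer currently being sent is never overwritten by the prefetch. The same overlap could also be expressed as a bounded producer/consumer pipeline; below is a minimal sketch of that alternative, assuming a hypothetical sendChunkAsync delegate in place of the real wire call:

using System;
using System.IO;
using System.Threading.Channels;
using System.Threading.Tasks;

// Alternative sketch: the read/send overlap as a bounded producer/consumer
// pipeline. A capacity of 1 gives exactly one chunk of read-ahead, like the
// two alternating buffers above, and a fresh buffer per chunk avoids the
// buffer-swapping bookkeeping.
public static class ChannelPipelineSketch
{
    public static async Task RunAsync(Stream source, int chunkSize, Func<byte[], Task> sendChunkAsync)
    {
        var channel = Channel.CreateBounded<byte[]>(1);

        var producer = Task.Run(async () =>
        {
            while (true)
            {
                var buffer = new byte[chunkSize];
                int read = await source.ReadAsync(buffer, 0, chunkSize);
                if (read == 0) break;
                if (read < chunkSize) Array.Resize(ref buffer, read); // trim the final short chunk
                await channel.Writer.WriteAsync(buffer); // blocks while one chunk is already queued
            }
            channel.Writer.Complete();
        });

        while (await channel.Reader.WaitToReadAsync())
        {
            while (channel.Reader.TryRead(out var chunk))
            {
                await sendChunkAsync(chunk); // upload while the producer reads ahead
            }
        }

        await producer;
    }
}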








answered Jun 26 at 23:01
– Brendan Gooden

  • So I was right? It wasn't your code that was inefficient but google's? :-)
    – t3chb0t
    Jun 27 at 8:47









