Google Drive upload for large files
I'm using the Google Drive API to upload large disk image files (>100 GB). My code uses this wrapper to upload the files in chunks of 40 MB. The files are stored on a NAS drive.
The code works fine, however I'm losing a lot of performance because each chunk has to be read from the NAS before it can be uploaded. See this graph of download (fetching a chunk off the NAS) and upload (sending to Google Drive).
Notice that the upload pauses for the duration of the fetch from disk, which is understandable. However, is there any way to fetch the next chunk off the disk while the previous chunk is uploading? That could significantly improve my upload times.
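To make the overlap I'm hoping for concrete, it would look roughly like the sketch below. This is only a sketch of the pattern, not working code against the Drive API: ReadChunkAsync is a local helper I've written here, and uploadChunkAsync stands in for whatever actually sends the bytes. The idea is that the read of chunk N+1 starts while chunk N is still being sent.

using System;
using System.IO;
using System.Threading.Tasks;

static async Task<byte[]> ReadChunkAsync(Stream source, long offset, int chunkSize)
{
    // Read up to chunkSize bytes starting at offset; returns an empty array at end of stream.
    source.Position = offset;
    long remaining = Math.Max(0, source.Length - offset);
    var buffer = new byte[(int)Math.Min(chunkSize, remaining)];
    int read = 0;
    while (read < buffer.Length)
    {
        int n = await source.ReadAsync(buffer, read, buffer.Length - read);
        if (n == 0) break;
        read += n;
    }
    return buffer;
}

static async Task UploadWithReadAheadAsync(Stream source, int chunkSize, Func<byte[], long, Task> uploadChunkAsync)
{
    long offset = 0;
    byte[] current = await ReadChunkAsync(source, offset, chunkSize);
    while (current.Length > 0)
    {
        // Start fetching the next chunk off the NAS while the current chunk uploads.
        Task<byte[]> nextRead = ReadChunkAsync(source, offset + current.Length, chunkSize);
        Task send = uploadChunkAsync(current, offset);
        await Task.WhenAll(nextRead, send);
        offset += current.Length;
        current = await nextRead;
    }
}

The full class I'm currently using is below.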
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.Threading.Tasks;
using Google.Apis.Auth.OAuth2;
using Google.Apis.Drive.v3;
using Google.Apis.Drive.v3.Data;
using Google.Apis.Services;
using Google.Apis.Upload;
using File = Google.Apis.Drive.v3.Data.File;

namespace Backup
{
    public class GoogleDrive
    {
        private readonly DriveService _service;

        public GoogleDrive()
        {
            _service = CreateDriveService();
        }

        private FileInfo uploadingFile;

        /// <summary>
        /// Uploads a file to Google Drive, creating a new file or updating the existing one.
        /// </summary>
        /// <param name="fileInfo">The local file to upload.</param>
        /// <param name="maximumTransferRate">Indicates the maximum transfer rate for the upload, in MB/s.</param>
        /// <param name="chunkSizeMb">Size of each upload chunk, in MB.</param>
        /// <returns>Returns ID of new file.</returns>
        public async Task<string> Upload(FileInfo fileInfo, int maximumTransferRate = 0, int chunkSizeMb = 10)
        {
            var uploadStream = new FileStream(fileInfo.FullName, FileMode.Open, FileAccess.Read);
            uploadingFile = fileInfo;

            var uploadArgs = await GetUploadType(fileInfo);
            switch (uploadArgs.UploadType)
            {
                default:
                    return "";

                case UploadType.Create:
                    var insertRequest = _service.Files.Create(
                        new File
                        {
                            Name = fileInfo.Name,
                            Parents = new List<string>
                            {
                                BackupFolderId
                            }
                        },
                        uploadStream,
                        MimeTypes.GetMimeType(fileInfo.Name)
                    );

                    insertRequest.ChunkSize = chunkSizeMb * 1024 * 1024;
                    insertRequest.ProgressChanged += Upload_ProgressChanged;
                    insertRequest.ResponseReceived += Upload_ResponseReceived;

                    var createFileTask = insertRequest.UploadAsync();
                    await createFileTask.ContinueWith(t =>
                    {
                        uploadStream.Dispose();
                    });

                    return insertRequest.ResponseBody.Id;

                case UploadType.Update:
                    var updateRequest = _service.Files.Update(
                        new File
                        {
                            Name = fileInfo.Name,
                            Parents = new List<string>
                            {
                                BackupFolderId
                            }
                        },
                        uploadArgs.FileId,
                        uploadStream,
                        MimeTypes.GetMimeType(fileInfo.Name)
                    );

                    updateRequest.ChunkSize = chunkSizeMb * 1024 * 1024;
                    updateRequest.ProgressChanged += Upload_ProgressChanged;
                    updateRequest.ResponseReceived += Upload_ResponseReceived;

                    var updateFileTask = updateRequest.UploadAsync();
                    await updateFileTask.ContinueWith(t =>
                    {
                        uploadStream.Dispose();
                    });

                    return updateRequest.ResponseBody.Id;
            }
        }

        public async Task<FileList> GetFiles()
        {
            var files = _service.Files.List();
            files.PageSize = 1000;
            files.Fields = "files(id, name, md5Checksum, mimeType, kind)";
            return await files.ExecuteAsync();
        }

        private async Task<UploadArgs> GetUploadType(FileSystemInfo fileOnNas)
        {
            var files = _service.Files.List();
            files.Q = $"name={fileOnNas.Name.ToDbQuote()}";
            files.Fields = "files(id, name, md5Checksum, size)";
            var result = await files.ExecuteAsync();

            if (result.Files.Count == 0)
            {
                return new UploadArgs { UploadType = Backup.UploadType.Create };
            }

            if (result.Files.Count == 1)
            {
                var fileInDrive = result.Files[0];
                using (var stream = System.IO.File.OpenRead(fileOnNas.FullName))
                {
                    return stream.Length == fileInDrive.Size
                        ? new UploadArgs
                        {
                            UploadType = UploadType.None
                        }
                        : new UploadArgs
                        {
                            UploadType = UploadType.Update,
                            FileId = fileInDrive.Id
                        };
                }
            }

            throw new NotSupportedException();
        }

        public delegate void UploadProgressChanged(IUploadProgress progress);
        public event UploadProgressChanged OnUploadProgressChanged;

        private void Upload_ProgressChanged(IUploadProgress progress)
        {
            //OnUploadProgressChanged.Invoke(progress);
            EventLog.WriteEntry("NAS Drive Backup",
                progress.Status + $" {uploadingFile.Name} {progress.BytesSent / 1024 / 1024} of {uploadingFile.Length / 1024 / 1024}MB (" +
                ((float)progress.BytesSent / (float)uploadingFile.Length).ToString("P2") + ") complete" +
                (progress.Status == UploadStatus.Failed ? progress.Exception.Message + "\r\n" + progress.Exception.StackTrace : ""),
                progress.Status == UploadStatus.Failed ? EventLogEntryType.Error : EventLogEntryType.Information);
        }

        private void Upload_ResponseReceived(File file)
        {
            EventLog.WriteEntry("NAS Drive Backup", file.Name + " was uploaded successfully", EventLogEntryType.Information);
        }

        public async Task Delete(string fileId)
        {
            var deleteRequest = _service.Files.Delete(fileId);
            try
            {
                await deleteRequest.ExecuteAsync();
            }
            catch
            {
                var createPermissions = _service.Permissions.Create(new Permission
                {
                    Type = "anyone",
                    Role = "owner"
                }, fileId);
                createPermissions.TransferOwnership = true;
                await createPermissions.ExecuteAsync();
            }
        }
    }
}
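For context, the backup service drives this class roughly like this. It's a simplified, hypothetical snippet rather than the real service code: the UNC path and file filter are only examples, and the loop runs inside an async method.

// Simplified, hypothetical driver - not part of the class above.
var drive = new GoogleDrive();
foreach (var file in new DirectoryInfo(@"\\nas\images").GetFiles("*.img"))
{
    // 40 MB chunks, matching the chunk size mentioned above.
    var id = await drive.Upload(file, chunkSizeMb: 40);
    EventLog.WriteEntry("NAS Drive Backup", $"{file.Name} uploaded as Drive file {id}", EventLogEntryType.Information);
}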
c# performance async-await network-file-transfer google-drive
edited Jun 22 at 8:46
asked Jun 22 at 6:43
Brendan Gooden
I suggest keeping the performance tag. The .net tag is useless - everybody knows that c# is .net :-]
– t3chb0t
Jun 22 at 7:15
Righto, you win :)
– Brendan Gooden
Jun 22 at 7:16
I don't see anything that could significantly be improved here, because all you are doing is plugging in parameters for the DriveService, which handles the uploadStream by itself, and this is Google's library, right? But maybe it's just your connection or device that cannot handle that much data in full-duplex mode?
– t3chb0t
Jun 23 at 8:00
1 Answer
I ended up diving in boots and all and figuring it out myself. With the help of the google-apis-dotnet-client source, I modified the resumable upload logic to suit my requirements. I believe this could be implemented in future releases of the .NET Drive SDK.
Glad if anyone can spot any improvements in this code :)
/// <summary>The core logic for uploading a stream. It is used by the upload and resume methods.</summary>
private async Task<IUploadProgress> UploadCoreAsync(CancellationToken cancellationToken)
{
    try
    {
        using (var callback = new ServerErrorCallback(this))
        {
            byte[] chunk1 = { };
            byte[] chunk2 = { };

            PrepareNextChunkKnownSize(ContentStream, cancellationToken, out chunk1, out int _); // Prepare first chunk
            bool isCompleted = false;
            bool usingChunk1 = true;

            while (!isCompleted)
            {
                // Read the next chunk from disk into whichever buffer is not being sent right now.
                var getNextChunkTask = Task.Run(() =>
                {
                    if (usingChunk1)
                    {
                        PrepareNextChunkKnownSizeCustom(ContentStream, cancellationToken, BytesServerReceived + ChunkSize, out chunk2);
                    }
                    else
                    {
                        PrepareNextChunkKnownSizeCustom(ContentStream, cancellationToken, BytesServerReceived + ChunkSize, out chunk1);
                    }
                }, cancellationToken);

                // Meanwhile, send the chunk that was prepared on the previous iteration.
                var sendChunkTask = usingChunk1
                    ? SendNextChunkAsync(chunk1, cancellationToken)
                    : SendNextChunkAsync(chunk2, cancellationToken);

                await Task.WhenAll(getNextChunkTask, sendChunkTask).ConfigureAwait(false);

                isCompleted = await sendChunkTask;
                UpdateProgress(new ResumableUploadProgress(UploadStatus.Uploading, BytesServerReceived));
                usingChunk1 = !usingChunk1;
            }

            UpdateProgress(new ResumableUploadProgress(UploadStatus.Completed, BytesServerReceived));
        }
    }
    catch (TaskCanceledException ex)
    {
        Logger.Error(ex, "MediaUpload[{0}] - Task was canceled", UploadUri);
        UpdateProgress(new ResumableUploadProgress(ex, BytesServerReceived));
        throw ex;
    }
    catch (Exception ex)
    {
        Logger.Error(ex, "MediaUpload[{0}] - Exception occurred while uploading media", UploadUri);
        UpdateProgress(new ResumableUploadProgress(ex, BytesServerReceived));
    }

    return Progress;
}

/// <summary>Prepares the given request with the next chunk in case the stream length is known.</summary>
private void PrepareNextChunkKnownSizeCustom(Stream stream, CancellationToken cancellationToken, long bytesSent, out byte[] chunk)
{
    int chkSize;
    try
    {
        chkSize = (int)Math.Min(StreamLength - bytesSent, (long)ChunkSize);
    }
    catch
    {
        // Because we fetch the next chunk and upload at the same time, this can fail when the last chunk
        // has uploaded and it tries to fetch the next chunk, which doesn't exist.
        // In this case, return an empty chunk.
        chunk = new byte[0];
        return;
    }

    // Stream length is known and it supports seek and position operations.
    // We can change the stream position and read bytes from the last point.
    // If the number of bytes received by the server isn't equal to the amount of bytes the client sent, we
    // need to change the position of the input stream, otherwise we can continue from the current position.
    if (stream.Position != bytesSent)
    {
        stream.Position = bytesSent;
    }

    chunk = new byte[chkSize];
    int bytesLeft = chkSize;
    int bytesRead = 0;
    while (bytesLeft > 0)
    {
        cancellationToken.ThrowIfCancellationRequested();

        // Make sure we only read at most BufferSize bytes at a time.
        int readSize = Math.Min(bytesLeft, BufferSize);
        int len = stream.Read(chunk, bytesRead, readSize);
        if (len == 0)
        {
            // Presumably the stream lied about its length. Not great, but we still have a chunk to send.
            break;
        }
        bytesRead += len;
        bytesLeft -= len;
    }
}
answered Jun 26 at 23:01
Brendan Gooden
So I was right? It wasn't your code that was inefficient but google's? :-)
– t3chb0t
Jun 27 at 8:47