Uploads a chunk of a file for an upload session.
The actual endpoint URL is returned by the Create upload session
and Get upload session endpoints.
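For illustration, the part-upload URL can be read from the session object those endpoints return. The Python sketch below assumes a dictionary holding that response; all values are placeholders only.

# Hedged sketch: the upload session object carries the part-upload URL in its
# session_endpoints field. The values below are illustrative only.
session = {
    "id": "D5E3F7A",
    "part_size": 8388608,
    "total_parts": 54,
    "session_endpoints": {
        "upload_part": "https://upload.box.com/2.0/files/upload_sessions/D5E3F7A",
    },
}

upload_part_url = session["session_endpoints"]["upload_part"]
part_size = session["part_size"]  # every part except the last must be exactly this size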
content-range (example: bytes 8388608-16777215/445856194)
The byte range of the chunk. Must not overlap with the range of a part already uploaded in this session. Each part must be exactly equal in size to the part size specified in the upload session that you created. The one exception is the last part of the file, which can be smaller.
When providing the value for content-range, remember that:
- The lower bound of each part's byte range must be a multiple of the part size.
- The upper bound must be one byte less than the next multiple of the part size, except for the last part, which ends at the final byte of the file.
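As a sketch of how these rules play out, the content-range value for a given part can be derived from the part size and the total file size. The snippet below is illustrative only and uses the example values shown on this page.

# Hedged sketch: build the content-range header value for a 0-based part index,
# using the example part size and file size from this page.
PART_SIZE = 8388608
TOTAL_SIZE = 445856194

def content_range(part_index: int) -> str:
    start = part_index * PART_SIZE                # lower bound: a multiple of the part size
    end = min(start + PART_SIZE, TOTAL_SIZE) - 1  # the last part may be smaller
    return f"bytes {start}-{end}/{TOTAL_SIZE}"

print(content_range(1))  # bytes 8388608-16777215/445856194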
digest (example: sha=fpRyg5eVQletdZqEKaFlqwBXJzM=)
The RFC 3230 message digest of the chunk uploaded. Only SHA1 is supported, and the SHA1 digest must be Base64 encoded. The format of this header is sha=BASE64_ENCODED_DIGEST.
To get the value for the SHA digest, use the OpenSSL command to encode the file part: openssl sha1 -binary <FILE_PART_NAME> | base64.
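The same header value can be computed without OpenSSL; for example, a minimal Python sketch, assuming the chunk is already in memory as bytes:

import base64
import hashlib

# Hedged sketch: equivalent of `openssl sha1 -binary <FILE_PART_NAME> | base64`,
# producing the full value to send in the digest header.
def digest_header(chunk: bytes) -> str:
    raw_sha1 = hashlib.sha1(chunk).digest()  # binary SHA1 digest of the chunk
    return "sha=" + base64.b64encode(raw_sha1).decode("ascii")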
upload_session_id (example: D5E3F7A)
The ID of the upload session.
Request body (application/octet-stream)
The binary content of the file.
200: Chunk has been uploaded successfully.
409: Returns an error if the chunk conflicts with another chunk previously uploaded.
412: Returns an error if a precondition was not met.
416: Returns an error if the content range does not match a specified range for the session.
default: An unexpected client error.
cURL:
curl -i -X PUT "https://upload.box.com/2.0/files/upload_sessions/F971964745A5CD0C001BBE4E58196BFD" \
     -H "authorization: Bearer <ACCESS_TOKEN>" \
     -H "digest: sha=fpRyg5eVQletdZqEKaFlqwBXJzM=" \
     -H "content-range: bytes 8388608-16777215/445856194" \
     -H "content-type: application/octet-stream" \
     --data-binary @<FILE_NAME>

TypeScript:
await client.chunkedUploads.uploadFilePart(
  acc.uploadSessionId,
  generateByteStreamFromBuffer(chunkBuffer),
  {
    digest: digest,
    contentRange: contentRange,
  } satisfies UploadFilePartHeadersInput,
);

Python:
client.chunked_uploads.upload_file_part(
    acc.upload_session_id,
    generate_byte_stream_from_buffer(chunk_buffer),
    digest,
    content_range,
)

.NET:
await client.ChunkedUploads.UploadFilePartAsync(uploadSessionId: acc.UploadSessionId, requestBody: Utils.GenerateByteStreamFromBuffer(buffer: chunkBuffer), headers: new UploadFilePartHeaders(digest: digest, contentRange: contentRange));

Swift:
try await client.chunkedUploads.uploadFilePart(uploadSessionId: acc.uploadSessionId, requestBody: Utils.generateByteStreamFromBuffer(buffer: chunkBuffer), headers: UploadFilePartHeaders(digest: digest, contentRange: contentRange))

Java:
client.getChunkedUploads().uploadFilePart(acc.getUploadSessionId(), generateByteStreamFromBuffer(chunkBuffer), new UploadFilePartHeaders(digest, contentRange));
Response example:

{
"part": {
"offset": 16777216,
"part_id": "6F2D3486",
"sha1": "134b65991ed521fcfe4724b7d814ab8ded5185dc",
"size": 3222784
}
}
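Putting the pieces together, a chunk-upload loop over plain HTTP could look like the following Python sketch. The access token, session URL, part size, file size, and file path are hypothetical placeholders; the headers mirror the cURL example above.

import base64
import hashlib
import requests

ACCESS_TOKEN = "<ACCESS_TOKEN>"  # hypothetical token
UPLOAD_PART_URL = "https://upload.box.com/2.0/files/upload_sessions/D5E3F7A"  # taken from the session object
PART_SIZE = 8388608       # part_size reported by the upload session
TOTAL_SIZE = 445856194    # file size declared when the session was created
FILE_PATH = "large_file.bin"  # hypothetical local file

uploaded_parts = []
with open(FILE_PATH, "rb") as f:
    offset = 0
    while True:
        chunk = f.read(PART_SIZE)
        if not chunk:
            break
        sha1_b64 = base64.b64encode(hashlib.sha1(chunk).digest()).decode("ascii")
        headers = {
            "authorization": f"Bearer {ACCESS_TOKEN}",
            "digest": f"sha={sha1_b64}",
            "content-range": f"bytes {offset}-{offset + len(chunk) - 1}/{TOTAL_SIZE}",
            "content-type": "application/octet-stream",
        }
        response = requests.put(UPLOAD_PART_URL, headers=headers, data=chunk)
        response.raise_for_status()
        uploaded_parts.append(response.json()["part"])  # keep each returned part record
        offset += len(chunk)

The part records collected this way are what a later Commit upload session call would typically take to finalize the file.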