Migrate from Google Drive to Zata.ai S3 storage


To migrate data from Google Drive to Zata, you can use Rclone. Rclone is a command-line tool that supports a wide range of operations for managing data across two or more locations. This article describes how to use Rclone to migrate data from Google Drive to Zata. For more information about Rclone, please visit the Rclone project page at https://rclone.org/.
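
If Rclone is not installed yet, download the executable for your platform from the Rclone downloads page. As one example (this assumes a Linux host with curl available, which is not a Zata requirement), the project provides an official install script:

sudo -v ; curl https://rclone.org/install.sh | sudo bash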

1. Prerequisites

  • Active Zata.ai Account

  • Zata Bucket

  • Zata Endpoint, Access & Secret Key Pair

  • Rclone Executable (version 1.63.1; you can verify the installed version as shown below)
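
To confirm that the Rclone executable is available from your terminal and to check its version, you can run:

rclone version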

2. Configure Google Drive as a Remote Location

2.1 Open your OS-specific terminal and navigate to the folder where the Rclone executable is stored.

2.2 Type in rclone config to create a new Remote

rclone config

2.3 Type in/Select "n" for a new remote connection

No remotes found, make a new one? 
n) New remote 
r) Rename remote 
c) Copy remote 
s) Set configuration password 
q) Quit config
n/r/c/s/q> n

2.4 Enter a name for your Google Drive remote connection

Example :

Enter name for new remote.
name> drive

2.5 Type in/Select "20" for Google Drive

 1 / 1Fichier
   \ (fichier)
 2 / Akamai NetStorage
   \ (netstorage)
 3 / Alias for an existing remote
   \ (alias)
 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others
   \ (s3)
 5 / Backblaze B2
   \ (b2)
 6 / Better checksums for other remotes
   \ (hasher)
 7 / Box
   \ (box)
 8 / Cache a remote
   \ (cache)
 9 / Citrix Sharefile
   \ (sharefile)
10 / Cloudinary
   \ (cloudinary)
11 / Combine several remotes into one
   \ (combine)
12 / Compress a remote
   \ (compress)
13 / Dropbox
   \ (dropbox)
14 / Encrypt/Decrypt a remote
   \ (crypt)
15 / Enterprise File Fabric
   \ (filefabric)
16 / FTP
   \ (ftp)
17 / Files.com
   \ (filescom)
18 / Gofile
   \ (gofile)
19 / Google Cloud Storage (this is not Google Drive)
   \ (google cloud storage)
20 / Google Drive
   \ (drive)
21 / Google Photos
   \ (google photos)

2.6 Leave Blank and Hit Enter

Option client_id.
Google Application Client Id
Setting your own is recommended.
See https://rclone.org/drive/#making-your-own-client-id for how to create your own.
If you leave this blank, it will use an internal key which is low performance.
Enter a value. Press Enter to leave empty.
client_id>

2.7 Leave Blank and Hit Enter

Option client_secret.
OAuth Client Secret.
Leave blank normally.
Enter a value. Press Enter to leave empty.
client_secret>

2.8 Choose "1" or the scope you prefer, and hit Enter

Option scope.
Scope that rclone should use when requesting access from drive.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / Full access all files, excluding Application Data Folder.
\ (drive)
2 / Read-only access to file metadata and file contents.
\ (drive.readonly)
/ Access to files created by rclone only.
3 | These are visible in the drive website.
| File authorization is revoked when the user deauthorizes the app.
\ (drive.file)
/ Allows read and write access to the Application Data folder.
4 | This is not visible in the drive website.
\ (drive.appfolder)
/ Allows read-only access to file metadata but
5 | does not allow any access to read or download file content.
\ (drive.metadata.readonly)
scope> 1

2.9 Leave Blank and Hit Enter

Option service_account_file.
Service Account Credentials JSON file path.
Leave blank normally.
Needed only if you want use SA instead of interactive login.
Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`.
Enter a value. Press Enter to leave empty.
service_account_file>

2.10 Type in/Select "n" for No

Edit advanced config?
y) Yes
n) No (default)
y/n> n

2.11 Type in "y" for Yes to authenticate to your Google Drive using a web browser

Use web browser to automatically authenticate rclone with remote?
* Say Y if the machine running rclone has a web browser you can use
* Say N if running rclone on a (remote) machine without web browser access
If not sure try Y. If Y failed, try N.

y) Yes (default)
n) No
y/n>y

2023/07/27 13:16:20 NOTICE: If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=hy8ONFPQCykB30YldCCQfA
2023/07/27 13:16:20 NOTICE: Log in and authorize rclone for access
2023/07/27 13:16:20 NOTICE: Waiting for code...

2.12 A web browser window will open, allowing you to enter the credentials for your Google Drive account

2.13 After signing in, you will see a success message; return to the command prompt to continue the setup.

2.14 Type in/Select "n" for No when asked whether to configure this as a Shared Drive (Team Drive).

2023/07/27 13:24:14 NOTICE: Got code
Configure this as a Shared Drive (Team Drive)?

y) Yes
n) No (default)
y/n> n

2.15 Configuration of Google Drive is complete. Review the information for accuracy, then type in/select "y" for Yes to save the configuration.

Configuration complete.
Options:
- type: drive
- scope: drive
- token: {"access_token":"ya29.a0AbVbYH7EgnFk1J2fKDtmXsqIlSff9lQnejKJweBLU9NteOyJtPxsqUTNAqMu-cJIuqwsgQOfO_SH7zbsKGoql0w4v0H3L80VDRsoK4U48LVXppsfgNb0-wJh3_YdAnTOcZQrqeLkaguSJAupy3IIhaCgYKAZQSARMSFQFWKvPlsfpcY6uxG-vsoPowhX3p-Q0163","token_type":"Bearer","refresh_token":"1//01c-s3KbSkfWzCgYIARAAGAESNwF-L9IrkIUDhEWLzb8MUJefx5INb8WAgj6tGm4QH7tvCQ-O3T-AnrztykMfFsv6KM0t9Inv7co","expiry":"2023-07-27T14:24:13.4851761-07:00"}
- team_drive:
Keep this "google drive" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d>y
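
Optionally, you can check that the new Google Drive remote works by listing the top-level folders in your Drive (this assumes you kept the remote name "drive" from the example above):

rclone lsd drive: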

3. Configure Zata as a Remote Location

3.1 Enter "n" for a new remote connection

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> n

3.2 Type in a name corresponding to Zata for your Remote location

Example :

Enter name for new remote.
name> zata

3.3 Type in/select "4" for the Amazon S3 Compliant Storage Providers option

Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
 1 / 1Fichier
   \ (fichier)
 2 / Akamai NetStorage
   \ (netstorage)
 3 / Alias for an existing remote
   \ (alias)
 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others
   \ (s3)
 5 / Backblaze B2
   \ (b2)
 6 / Better checksums for other remotes
   \ (hasher)

3.4 Type in/select "34" or choose "Any other S3 compatible provider" from the list

29 / Storj (S3 Compatible Gateway)
   \ (Storj)
30 / Synology C2 Object Storage
   \ (Synology)
31 / Tencent Cloud Object Storage (COS)
   \ (TencentCOS)
32 / Wasabi Object Storage
   \ (Wasabi)
33 / Qiniu Object Storage (Kodo)
   \ (Qiniu)
34 / Any other S3 compatible provider
   \ (Other)

3.5 Choose option "1" so that you can type in your Zata credentials in the following steps

Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
1 / Enter AWS credentials in the next step.
\ (false)
2 / Get AWS credentials from the environment (env vars or IAM).
\ (true)
env_auth> 1

3.6 Type in your Zata Access Key

Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> xxxxxxxxxxx

3.7 Type in your Zata Secret Key

Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> xxxxxxxxxxxxxxxxxxx

3.8 Choose Option 1 for your Region selection

Option region.
Region to connect to.
Leave blank if you are using an S3 clone and you don't have a region.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
/ Use this if unsure.
1 | Will use v4 signatures and an empty region.
\ ()
/ Use this only if v4 signatures don't work.
2 | E.g. pre Jewel/v10 CEPH.
\ (other-v2-signature)
region> 1

3.9 Type in your endpoint for Zata where your bucket is located.

Option endpoint.
Endpoint for S3 API.
Required when using an S3 clone.
Enter a value. Press Enter to leave empty.
endpoint> https://idro1.zata.ai

3.10 Leave Blank and Hit Enter

Option location_constraint.
Location constraint - must be set to match the Region.
Leave blank if not sure. Used when creating buckets only.
Enter a value. Press Enter to leave empty.
location_constraint>

3.11 Choose Option 1 or the ACL option you wish to use.

Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
If the acl is an empty string then no X-Amz-Acl: header is added and
the default (private) will be used.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
/ Owner gets FULL_CONTROL.
1 | No one else has access rights (default).
\ (private)
/ Owner gets FULL_CONTROL.
2 | The AllUsers group gets READ access.
\ (public-read)
/ Owner gets FULL_CONTROL.
3 | The AllUsers group gets READ and WRITE access.
| Granting this on a bucket is generally not recommended.
\ (public-read-write)
/ Owner gets FULL_CONTROL.
4 | The AuthenticatedUsers group gets READ access.
\ (authenticated-read)
/ Object owner gets FULL_CONTROL.
5 | Bucket owner gets READ access.
| If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ (bucket-owner-read)
/ Both the object owner and the bucket owner get FULL_CONTROL over the object.
6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ (bucket-owner-full-control)
acl> 1

3.12 Type in/Select "n" for No

Edit advanced config?
y) Yes
n) No (default)
y/n> n

3.13 Configuration of your remote location for Zata is now complete. Review the information for accuracy, then type in/select "y" for Yes to save it.

Configuration complete.
Options:
- type: s3
- provider: Other
- access_key_id: XXXXXXXXXXXXXX
- secret_access_key: XXXXXXXXXXXXXXXXXXX
- endpoint: https://idro1.zata.ai
- acl: private
Keep this "Zata" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y

3.14 Type in/Select "q" to quit from the Remote Configuration and go back to the root level

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q

c:\rclone-v1.63.1>
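
Before moving any data, you can optionally confirm that the Zata remote is working by listing the buckets in your account (this assumes the remote name "zata" entered in step 3.2; adjust it if you chose a different name):

rclone lsd zata: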

4. Move Data to Zata

4.1 Copy Data from Drive to Zata (Without Deleting Data from Google Drive)

If you only want to copy the data from Google Drive to Zata, leaving the files in Google Drive untouched, use the rclone copy command below (a worked example follows the notes):

rclone copy "drive:" "Zata:bucket name" -P

Note :

  1. All the variables must be written within double quotes ("").

  2. "drive" is just an example; replace it with the name you gave your Google Drive remote connection.

  3. "Zata" is just an example; replace it with the name you gave your Zata remote connection.

  4. "bucket name" is just an example; replace it with the name of the bucket into which you wish to copy your Google Drive data.
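
For example, with the remote names used in this article ("drive" and "zata") and a hypothetical bucket called "my-backup-bucket", copying the entire Drive, or just one folder ("Photos" is also a hypothetical name), would look like this (-P shows live transfer progress):

rclone copy "drive:" "zata:my-backup-bucket" -P
rclone copy "drive:Photos" "zata:my-backup-bucket/Photos" -P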

4.2 Move Data from Drive to Zata (Deleting the Data from Google Drive)

If you want to copy the data from Google Drive to Zata and also delete it from Google Drive after the transfer, use the rclone move command below (a dry-run example follows the notes):

rclone move "drive:" "Zata:bucket name" -P -v

Note :

  1. All the variables must be written within double quotes ("").

  2. "drive" is just an example; replace it with the name you gave your Google Drive remote connection.

  3. "Zata" is just an example; replace it with the name you gave your Zata remote connection.

  4. "bucket name" is just an example; replace it with the name of the bucket into which you wish to copy your Google Drive data.
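
Because rclone move deletes the source files once they have been transferred, you may want to preview the operation first. Rclone's --dry-run flag reports what would be transferred and deleted without changing anything (the remote names and the bucket "my-backup-bucket" below are the same hypothetical examples used earlier):

rclone move "drive:" "zata:my-backup-bucket" -P -v --dry-run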
