TL;DR: I migrated about 295 GB / 400k files from a Microsoft 365 Business Standard OneDrive (source) to my personal Microsoft 365 OneDrive (destination) for about ten dollars, using an Azure Container Instance (ACI) running an rclone image pulled from Docker Hub. Everything ran in the cloud, deployed with a simple Azure Resource Manager (ARM) template, and LLM assistance made solution selection, development, and troubleshooting much easier.
Why I Chose rclone + Azure Container Instances (ACI)
My first attempt was a Microsoft Graph API Python script that downloaded files from the Microsoft 365 Business Standard OneDrive (source) to my local device and then uploaded them to my personal Microsoft 365 OneDrive (destination).
However, this required keeping the local device powered on, and it would have taken many weeks to complete, so a pure cloud solution was clearly needed. Paid migration services exist, but I wanted guaranteed security. Since OneDrive is a Microsoft product, why not use Azure, and why not use rclone, an open source tool? LLMs made the solution clear and easy to implement.
| Requirement | Options I Considered | Why rclone + ACI Won |
|---|---|---|
| Headless, runs unattended | Azure VM, ShareGate UI, Mover.io | ACI spins up in seconds, shuts down cleanly, and bills by the second. The job ran for about 48 hours, using the open source Alpine rclone image pulled directly from Docker Hub at deployment. |
| Robust | PowerShell PnP, Robocopy | rclone's copy and check commands are battle‑tested and transparent, and support OneDrive as both source and destination. |
| Secure | Logic Apps, Azure Functions | No reliance on third parties. ACI on a private VNet keeps tokens off the public internet. |
| Low cost | Dedicated VM disk & NIC | ACI + Log Analytics Workspace came in at less than CAD $10. |
| Repeatable | Portal‑only clicks | One ARM template pasted once, then Redeploy in the Portal anytime. |
Architecture Snapshot
┌─ OneDrive (Business Std) ─┐
│ source │
└───────────────────────────┘
▲ private VNet
│
┌───────────────────────────┐ stdout/stderr ┌──────────────────────────────┐
│ Azure Container Instance │ ───────────────▶ │ Azure Log Analytics (KQL) │
│ –– rclone/rclone:latest │ └──────────────────────────────┘
└───────────────────────────┘
│
▼
┌── OneDrive (Personal M365) ─┐
│ destination │
└─────────────────────────────┘
Full sanitized ARM template and parameters can be found in the repo link at bottom of post.
Prerequisites
- Azure subscription (any pay‑as‑you‑go works).
- Two OneDrive tenants:
- source — Microsoft 365 Business Standard.
- destination — Microsoft 365 Personal.
- The open source rclone executable, downloaded and run locally once to create your rclone.conf.
Step 1 — Deploy the Container Group (Portal‑Only)
- In the Azure Portal, search “Deploy a custom template” and choose Build your own template in the editor.
- Paste in template.json (see repo) and Save.
- On the Parameters blade, paste in your edited parameters.json.
- Hit Review + Create → Create.
- The Container Instance deploys and automatically starts running your rclone job as specified in the commandOverrideArray value.
- The Container Instance terminates automatically when the rclone copy job completes (in my case it took about 48 hours to copy 295 GB / 400k files).
- You can stop and redeploy the Container Instance at any time to continue or complete the job.
Need to tweak settings later? Go to the Resource Group ▶ Deployments, click the deployment name, and hit Redeploy; at that point you can modify the template and parameters. For example, you could modify the rclone commands:
// commandOverrideArray for initial COPY job
"commandOverrideArray": {
"value": [
"sh","-c",
"mkdir -p /root/.config/rclone && echo \"$RCLONE_CONF_B64\" | base64 -d > /root/.config/rclone/rclone.conf && unset RCLONE_CONF_B64 && rclone copy source: destination: --progress --transfers 6 --tpslimit 8 --log-level INFO && tail -f /dev/null"
]
}
Note: tail -f /dev/null keeps the container alive for interactive troubleshooting.
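A subtle detail: the copy job above chains tail with &&, which only runs if rclone succeeds, whereas ; would run it regardless of exit status. A tiny sketch of the difference, with echo standing in for tail -f /dev/null:

```shell
#!/bin/sh
# '&&' only runs the next command when the previous one succeeds,
# so after a failed rclone the container would exit and surface the failure.
first=$(sh -c 'false && echo kept-alive' || true)
echo "with &&: [$first]"     # with &&: [] -- nothing ran after the failure

# ';' runs the next command unconditionally,
# so the container stays up for troubleshooting even after an error.
second=$(sh -c 'false ; echo kept-alive')
echo "with ; : [$second]"    # with ; : [kept-alive]
```

This is why the later check job (Step 5) deliberately uses ; instead of &&: the container should stay up so the report file can be inspected even if the check reports problems.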
Why the conservative limits, i.e. why use --transfers 6 --tpslimit 8?
OneDrive throttles if you get greedy. These values could likely have been increased to finish faster, but rather than risk throttling issues I stuck with conservative options, which produced a stable ~2.9 MiB/s transfer of 295 GB / 400k files in about 48 hours.
Step 2 — Authenticate & Build the rclone Config
Use the rclone executable locally to create your rclone config: download the latest rclone executable from the rclone site and run the command below. It walks you through a series of steps to create the config.
This results in an rclone.conf file containing the names and OAuth tokens for the source and destination OneDrives.
rclone config
# create two remotes:
# name: source type: onedrive tenant: <business‑tenant‑id>
# name: destination type: onedrive tenant: <personal‑tenant‑id>
When running rclone config locally, it pops a browser window at one point to complete the OAuth flow for each OneDrive account. Note that I needed to disable my VPN during the initial OAuth flow, as rclone's browser pop-up failed with it on.
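For reference, rclone.conf is a plain INI-style file where each remote is a bracketed section. Assuming the two remotes are named source and destination as above, you can list the remote names with a one-liner (a sample file is created here for illustration; real files contain token JSON):

```shell
#!/bin/sh
# Create a sample rclone.conf for illustration (real tokens omitted).
cat > /tmp/rclone.conf <<'EOF'
[source]
type = onedrive
token = {"access_token":"..."}

[destination]
type = onedrive
token = {"access_token":"..."}
EOF

# Remote names are the INI section headers, i.e. lines like "[name]".
sed -n 's/^\[\(.*\)\]$/\1/p' /tmp/rclone.conf
# prints:
# source
# destination
```

These section names are exactly what the rclone commands reference as source: and destination:.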
I base64 encoded mine to avoid any formatting error issues in the template parameters:
base64 -w0 ~/.config/rclone/rclone.conf > conf.b64
Paste that base64 string into environmentVariable0 in parameters.json.
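Before pasting the string in, it is worth confirming it decodes back byte-for-byte, since that is exactly what the container's startup command does with RCLONE_CONF_B64. A quick round-trip check (paths are illustrative):

```shell
#!/bin/sh
# Create a minimal stand-in config and encode it the same way as the post does.
printf '[source]\ntype = onedrive\n' > /tmp/rclone.conf
base64 -w0 /tmp/rclone.conf > /tmp/conf.b64   # -w0 = no line wrapping (GNU coreutils)

# Decode it back, mimicking the container's `echo "$RCLONE_CONF_B64" | base64 -d`.
base64 -d /tmp/conf.b64 > /tmp/roundtrip.conf

# cmp is silent and exits 0 when the round trip is lossless.
cmp /tmp/rclone.conf /tmp/roundtrip.conf && echo "round-trip OK"
```

Note that -w0 is the GNU coreutils flag; macOS's base64 does not wrap by default and does not accept -w.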
Step 3 — Kick Off the Copy
Deploying the container instance is really easy: simply deploy an Azure custom template in your resource group, pasting in the template and parameters JSON.
Once the deployment succeeds, open Container Instances ▶ <container> ▶ Connect ▶ Bash to run shell commands on the running container instance. You can run rclone commands or Alpine Linux commands to inspect system state, or, for example, watch the logs:
tail -f /dev/stderr
You’ll note that in my check phase I output a local text file containing a list of file diffs, so the Bash connection was handy for reading that file.
Step 4 — Monitor with KQL, Regex & CI Metrics
Another easy-to-use feature of the Azure stack is the Log Analytics Workspace, which captures the container's raw rclone stdout/stderr messages.
These messages are saved automatically and retained in the workspace (per its retention settings), and the history can be queried with KQL to find issues, errors, the last rclone action, and so on.
- Container Metrics blade shows CPU, memory, and network charts.
- Log Analytics Workspace ▶ Logs shows raw stdout/stderr.
ContainerInstanceLog_CL
| where ContainerName_s == "AZURE_CONTAINER_INSTANCE_NAME_HERE"
| where Log_s matches regex @"Copied \(new\)"
| project TimeGenerated, Log_s
Step 5 — Spin Up a Second ACI for an rclone check File Diff
rclone has a source-to-destination file diff feature called rclone check. After the initial rclone copy job I used rclone check to diff the source and destination drives. This took only a couple of hours for the 295 GB / ~400k files.
"commandOverrideArray": {
"value": [
"sh","-c",
"mkdir -p /root/.config/rclone && echo \"$RCLONE_CONF_B64\" | base64 -d > /root/.config/rclone/rclone.conf && unset RCLONE_CONF_B64 && rclone check source: destination: --combined /root/rclone_check_report.txt --exclude 'Personal Vault/**' --log-level INFO ; tail -f /dev/null"
]
}
This exports the check results to a /root/rclone_check_report.txt file.
Adding ; tail -f /dev/null to the rclone command keeps the ACI running after the rclone job completes, so you can inspect rclone_check_report.txt using cat /root/rclone_check_report.txt or download it from the Files tab.
The file diff worked perfectly. rclone check compares files using OneDrive's hash compare feature, as required for a reliable diff.
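rclone check's --combined report prefixes every line with a status symbol: = identical, + only on the source, - only on the destination, * present on both but different, ! error reading or hashing. A quick shell summary of such a report (sample content assumed; the real file lives at /root/rclone_check_report.txt):

```shell
#!/bin/sh
# Sample --combined report for illustration.
cat > /tmp/rclone_check_report.txt <<'EOF'
= docs/report.pdf
= photos/img001.jpg
+ new-on-source.txt
- only-on-destination.txt
* differs.bin
EOF

# Tally lines by status symbol; anything other than '=' needs attention.
awk '{count[$1]++} END {for (s in count) print s, count[s]}' /tmp/rclone_check_report.txt

# Show just the problem files (everything not identical).
grep -v '^=' /tmp/rclone_check_report.txt
```

In my case the goal was simply a report full of = lines; the grep gives you a short worklist if it isn't.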
Cost & Performance Summary
| Metric | Value |
|---|---|
| Data copied | ~295 GB / 400k files |
| Time to complete | 48 h |
| Average throughput | ~2.9 MiB/s |
| Azure ACI cost | CAD $9 |
| Azure Log Analytics cost | CAD $1 |
| Total | CAD $10 |
10 Things I Learned — and You Might Too
- Base64 your rclone.conf — bulletproof, with no YAML/JSON escaping headaches in the template parameters.
- tail -f /dev/null keeps the container alive post‑job instead of auto-terminating when the rclone job finishes.
- Private IP > public IP for token security — the rclone image has nothing listening, but a public IP is still public; an Azure VNet is private and secure.
- Azure File Share isn’t in Canadian regions — it was my initial choice for logging storage, but in the end Log Analytics Workspace was easy and robust.
- Use your own Docker Hub account to pull the rclone image for the CI — avoids the public rate limits that can block deployment.
- Ignore OneDrive admin metrics — they include file versions and are misleading; count files via Explorer or rclone size instead.
- Slow and steady wins — conservative options like --transfers 6 --tpslimit 8 avoid API throttling storms.
- Regex + KQL is a powerful combo — LLMs helped me craft the queries.
- Redeploying a Container Instance is simple via the Portal — no CLI, just click Redeploy, no headaches.
- LLMs saved hours — from regex to troubleshooting.
Wrap‑Up
295 GB / 400k files transferred in 48 hours, for ten dollars, with zero surprises. Azure's serverless containers, infrastructure as code, and rclone did the heavy lifting.
You can find my template and parameters at the repo below:
https://github.com/sitrucp/onedrive_rclone_copy
Happy copying!