Tridion Content Delivery website webfarm using Azure File Storage

When scaling a Tridion website across multiple virtual machines, the major challenge is keeping the content on all servers in sync.

There are a couple of approaches that can be considered:

  1. Multiple deployers, each corresponding to one virtual machine.

While publishing, all the deployers can be configured on one publishing target so that the operation is performed as a single transaction.

Possible Issues:

  • Publishing time will increase as the number of VMs grows.
  • A new deployer needs to be configured or removed whenever a VM is added or removed, i.e. on every scale-up or scale-down.
  2. File replication script such as robocopy

In this approach, files are published to a single physical location on one VM. A scheduler running on that VM executes a robocopy script that syncs this folder to the website folder on the other servers.

Possible Issues:

  • How quickly changes are reflected depends on the scheduler frequency. For example, if the scheduler interval is 5 minutes, changes will appear on the website only after up to 5 minutes. This delay also grows with the number of VMs and the volume of files and assets.
  • There is no guarantee that all the folders will be in sync, as the script may fail after syncing only a few VMs.
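As a rough sketch, the scheduled job on the publishing VM could be a batch script along these lines (the server names, paths, and log locations are placeholders, not taken from an actual setup):

```
:: sync-webfarm.cmd - run by Task Scheduler every few minutes (hypothetical paths).
:: /MIR mirrors the source tree (including deletions); /R:2 /W:5 limit retries
:: so that one unreachable server does not stall the whole run.
robocopy D:\publish\website \\WEB01\website$ /MIR /R:2 /W:5 /LOG+:D:\logs\sync-web01.log
robocopy D:\publish\website \\WEB02\website$ /MIR /R:2 /W:5 /LOG+:D:\logs\sync-web02.log
```

Note that each target is synced sequentially, which is exactly why a failure partway through leaves some servers with stale content.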


  3. Creating a shared network folder: all content is published to this shared folder, and the website instances on the different VMs all point to it.

Possible Issues:

This approach does not have any of the issues mentioned above, but the major challenge is a single point of failure (SPOF). If for some reason the network folder becomes unavailable, all the websites go down. You also have to provide an explicit backup mechanism, or the data may be lost.


  4. Quite similar to the above approach, but without the single point of failure: Azure File Storage, which offers both high availability and high performance. With Azure File Storage, the web content can be stored independently of the web servers.

Possible Issues:

If the file storage is in a different geographical region from the VMs, there might be performance issues.

Azure File Storage is a highly scalable and highly available file storage service that applications running on different Azure VMs can access just like a network share.
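To verify connectivity from a VM before wiring up IIS, the share can be mounted manually with the standard Windows net use command (the storage account name, share name, and key below are placeholders):

```
:: Mount the Azure file share as drive Z: for a quick smoke test.
:: <account> is the storage account name; the password is the account key.
net use Z: \\<account>.file.core.windows.net\websitedirectory /user:AZURE\<account> <account-key>
```

If this command fails, check that outbound port 445 is open from the VM, which Azure File Storage requires for SMB access.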



The following information will be required when implementing a Tridion CD website using Azure File Storage:


Account Name: azureusername

File Endpoint: https://<account>

Account Key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx


Steps to configure shared file storage for a webfarm

  1. Create an Active Directory user with the same name as the account name and a password equal to the account key of the Azure file storage. If no Active Directory is available, you will have to create a local user with the same name and password on each VM hosting the application; this is harder to maintain but will work. Remember to set the password to never expire.
  2. Create a web application in IIS and, for the physical path, provide the UNC path of the Azure shared file storage. Make sure you have copied all the website's physical assets into a folder on this shared storage.


  3. Click on "Connect as…" and select the "Specific user" radio button.



  4. Provide the credentials of the domain user and password and click OK.
  5. Since xmogrt.dll is not a .NET assembly, it will not be accessible from a network location. You will have to delete this DLL from the bin folder of your application and copy it to %SystemDrive%\Windows\System32.
  6. Add the domain user to the IIS_IUSRS group on the local system.
  7. Recycle the application pool.
  8. Repeat steps 2 to 7 on each web server that will be attached to the load balancer.
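When no Active Directory is available, step 1 can be scripted per VM. A sketch using the built-in PowerShell LocalAccounts cmdlets (the account name and key are placeholders matching the earlier example, not real credentials):

```
# Create a local user whose name matches the storage account name
# and whose password is the storage account key (step 1).
$key = ConvertTo-SecureString "<account-key>" -AsPlainText -Force
New-LocalUser -Name "azureusername" -Password $key -PasswordNeverExpires

# Add the user to IIS_IUSRS (step 6).
Add-LocalGroupMember -Group "IIS_IUSRS" -Member "azureusername"
```

Running this on each web server keeps the per-VM local accounts consistent, which is the main maintenance pain this step otherwise involves.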



Setting up the deployer to publish to shared file storage

There are only two changes required in the HTTP deployer to deploy files to the shared storage.

  1. In cd_storage.xml, in the storage section for the file system, provide the UNC path of the storage:

<Storage Type="filesystem" Class="" Id="defaultFile" defaultFilesystem="false">
    <Root Path="\\<account>\websitedirectory\" />
</Storage>



  2. Create a new application pool with the identity set to a custom account. Here you need to specify the credentials of the domain user you created earlier so that the deployer can access the shared storage.
  3. Assign this application pool to your httpupload application.
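Steps 2 and 3 above can also be done from the command line with IIS's appcmd tool (the pool name, user, and site path here are illustrative placeholders, not from an actual setup):

```
:: Create the pool and set its identity to the domain user (placeholders).
%windir%\system32\inetsrv\appcmd add apppool /name:DeployerPool
%windir%\system32\inetsrv\appcmd set apppool "DeployerPool" /processModel.identityType:SpecificUser /processModel.userName:"DOMAIN\azureusername" /processModel.password:"<account-key>"

:: Point the httpupload application at the new pool.
%windir%\system32\inetsrv\appcmd set app "Default Web Site/httpupload" /applicationPool:DeployerPool
```

Scripting this makes it easy to keep the deployer configuration identical if it is ever rebuilt or moved to another VM.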