
This article describes the steps to perform when you need to copy one customer environment to another (new) environment for the same customer, e.g. copying test to prod or the opposite.

Start by unzipping the content of the relevant zipped folder - it contains all the needed scripts and this guide in PDF format. (Version 3 is for older, pre-DC5 DAM versions.)

Transfer steps v4.zip       Transfer steps v3.zip

Step-by-step guide

  • Most of the scripts need inputs before you can run them!

1) Back up the DAM database on Production and the destination database. (For Azure-hosted servers these backups are normally saved in H:\SQLBackups; keeping them there removes the need to transfer/copy the backup files to the Test server via FTP later in the process.)
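The backup in step 1 can be scripted with sqlcmd. A minimal sketch, assuming hypothetical database names (customer_dam, customer_dam_jobs) and the H:\SQLBackups path mentioned above; the echo lines only print the commands so they can be reviewed before running:

```shell
#!/bin/sh
# Hypothetical database names - adjust to the customer's actual databases.
DB_LIST="customer_dam customer_dam_jobs"
BACKUP_DIR='H:\SQLBackups'

for DB in $DB_LIST; do
  # COPY_ONLY avoids disturbing any existing backup chain.
  SQL="BACKUP DATABASE [$DB] TO DISK = N'$BACKUP_DIR\\$DB.bak' WITH COPY_ONLY, COMPRESSION, INIT"
  # Printed for review; drop the echo to execute via sqlcmd on the server.
  echo sqlcmd -S localhost -E -Q "$SQL"
done
```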

FOR AZURE HOSTED SERVERS: CREATE A SNAPSHOT AND DISK OF THE PRODUCTION STORAGE DISK, THEN ATTACH THE NEWLY CREATED PRODUCTION STORAGE DISK TO THE TEST SERVER (this process is detailed further down this page; steps 5 & 6 are not needed for Azure hosted servers)
2) Stop Digibatch, Digimonitor, the website and the application pools (PLEASE NOTE THAT IN NEWER VERSIONS OF THE DAM, DIGIMONITOR AND THE JOBS DATABASE ARE NO LONGER PRESENT)
4) Delete the log files for Digibatch, Digimonitor and the websites (Optional)
5) Copy the .bak files to the server hosting the databases you want to overwrite. (Only for non-Azure hosted servers)
6) Restore the databases.
6b) Check the ownership of the databases (dam and dam_jobs); it should be an SA user.
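Steps 6 and 6b can be scripted in the same way. A sketch assuming a hypothetical database name customer_dam; the echoes print the statements for review rather than executing them:

```shell
#!/bin/sh
DB="customer_dam"   # hypothetical name; repeat for customer_dam_jobs

RESTORE_SQL="RESTORE DATABASE [$DB] FROM DISK = N'H:\\SQLBackups\\$DB.bak' WITH REPLACE"
# Step 6b: database ownership should be an SA user.
OWNER_SQL="ALTER AUTHORIZATION ON DATABASE::[$DB] TO [sa]"
CHECK_SQL="SELECT name, SUSER_SNAME(owner_sid) AS owner_login FROM sys.databases WHERE name = '$DB'"

# Printed for review; drop the echo to execute via sqlcmd on the server.
for SQL in "$RESTORE_SQL" "$OWNER_SQL" "$CHECK_SQL"; do
  echo sqlcmd -S localhost -E -Q "$SQL"
done
```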


7) Create schemas on both databases (1a_CreateSchemas.sql)
8) Create/connect users to the schemas with the correct roles (1b_Addusers.sql)
9) Copy over the schema rights using the script 1c_MoveSchmeaObjects.sql. Run this script against the DAM database. The script generates a set of queries, which you then need to copy and paste into a new query window and run manually. (When running this script you may receive an error along the lines of: Cannot find 'SqlQueryNotificationService-ac71525d-ab10-45d7-a1b8-9aac35189f759', because it does not exist or you do not have permission.)

Both of these can be deleted without any issues, as long as the website is not running.


10) Change the owner of the schemas admin_<database name>_dam, admin_<database name>_dam_jobs and UserMgmt (1d_ChangeSchemas.sql)
11) Delete the old schemas admin_OLDNAME_dam and admin_OLDNAME_dam_jobs on both databases


If it is not possible to delete the old schemas, do not proceed to the next step; it means the schemas are still in use.
12) Null the completed date for search proxy scripts (1e_ResetSearches.sql)
13) Delete everything from the Service Broker (2a_DROPSB.sql). (Check that everything has been removed; sometimes a few autogenerated queues and services need to be cleared manually.)

To check this, navigate to the DAM database > Service Broker, then expand Queues (this should be empty) and Services (this should be empty).
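The same check can be done with a query instead of expanding the object tree. A sketch; the NOT LIKE filter is an assumption to hide SQL Server's built-in services, and the echo prints the call for review:

```shell
#!/bin/sh
# After 2a_DROPSB.sql, both of these lists should come back empty.
CHECK_SQL="SELECT name FROM sys.service_queues WHERE is_ms_shipped = 0;
SELECT name FROM sys.services WHERE name NOT LIKE 'http://schemas.microsoft.com/SQL/ServiceBroker%'"

# Printed for review; drop the echo to execute (customer_dam is a hypothetical name).
echo sqlcmd -S localhost -E -d customer_dam -Q "$CHECK_SQL"
```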
14) Delete the old users from both databases.
If it is not possible to delete the old users, do not proceed to the next step; it means the users are still in use.

NB: Steps 15-19 apply only to older DAM versions (pre DAM 5.2).
15) Log in with admin_<database name>_dam (password = admin_<database name>_dam)
16) Run scripts for SB (2b_RebuildServiceBroker.sql + 2c_ServiceBrokerItemLastChanged.sql)

17) Run the first script entry from search_proxy_scripts to create the search service broker. (You can get it by right-clicking the table, selecting Edit Top 200 Rows, and selecting all from the script column.)

     

Then run the script.


18) Log in as your Server admin
19) Run the script 2d_EnableServiceBroker.sql

20) Run the updateDZConfig script.

NB: When @runscript = 0 the script only simulates the changes; when the settings are as expected, set this value to 1 and run again.

NB: Skip step 21 if the DAM does not have the table install_config_actualsite.
21) Run the updateinstallAcctuallsite script. (ONLY PRESENT IN OLDER VERSIONS OF THE DAM)
22) Edit the stored procedures returned by the script 5_GetSPs.sql, replacing the old database name (old DB name_dam_Jobs) with the new one (new DB name_dam_Jobs).

23a) Reconfigure the web.config files by updating the user passwords.
23b) Create the database ref. Please note that if the site existed beforehand and you have used the same database name, this step can be omitted.

23c) Recycle all AppPools related to site

  1. Check that the admin_dbname_dam & admin_dbname_dam_jobs user accounts can log in to the database. If they cannot, the accounts are orphaned and need to be repaired by running the following scripts: List Orphaned Users.sql and then Fix Orphaned Users.sql.
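If the two scripts are not at hand, orphaned SQL users can typically be listed and re-mapped like this. A sketch with hypothetical database and user names; the echoes print the statements for review rather than executing them:

```shell
#!/bin/sh
# Hypothetical user name; repeat the fix for admin_dbname_dam_jobs.
USERNAME="admin_dbname_dam"

# List SQL-authentication users whose SID has no matching server login (orphans).
# principal_id > 4 skips the built-in users (dbo, guest, sys, INFORMATION_SCHEMA).
LIST_SQL="SELECT dp.name FROM sys.database_principals dp
LEFT JOIN sys.server_principals sp ON dp.sid = sp.sid
WHERE dp.type = 'S' AND dp.principal_id > 4 AND sp.sid IS NULL"

# Re-map the database user to the server login of the same name.
FIX_SQL="ALTER USER [$USERNAME] WITH LOGIN = [$USERNAME]"

# Printed for review; drop the echo to execute (customer_dam is a hypothetical name).
echo sqlcmd -S localhost -E -d customer_dam -Q "$LIST_SQL"
echo sqlcmd -S localhost -E -d customer_dam -Q "$FIX_SQL"
```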

23d) Start the Digizuite website and check you can log in
24) Update the Digimonitor instances by running the script 6_UpdateDMInstance.sql. Before running this script you should check whether the environment uses an Ingest folder; you do this by logging into the DAM and checking whether the following two workflows are in use. (ONLY PRESENT IN OLDER VERSIONS OF THE DAM)

Navigate to System Tools > Workflows, select DigiFileWatcher, select Edit, then Edit on the Standard Import.

Then do the same on IngestImporter_XML2metadata

25) Navigate to System Tools / Digizuite Configuration and find the following Digizuite constants:

WEBDATABASEREF - this constant should match the connection string name in web.config for the dam database

JOBDATABASEREF - this constant should match the connection string name in web.config for the job database
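For reference, these constants correspond to entries like the following in web.config. A sketch only; the database names, user names and server are hypothetical placeholders:

```xml
<connectionStrings>
  <!-- WEBDATABASEREF should match this name attribute -->
  <add name="customer_dam"
       connectionString="Data Source=.;Initial Catalog=customer_dam;User ID=admin_customer_dam;Password=***"
       providerName="System.Data.SqlClient" />
  <!-- JOBDATABASEREF should match this name attribute -->
  <add name="customer_dam_jobs"
       connectionString="Data Source=.;Initial Catalog=customer_dam_jobs;User ID=admin_customer_dam_jobs;Password=***"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```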


26) Check the Digimonitor instances to make sure you didn't miss any.
27) Repopulate the searches by calling https://URL/apiproxy/JobService.js?accesskey=xxx&method=PopulateAllSearches, replacing xxx with a valid access key from the script 5_GetAccessGUID.sql.
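This call can also be made from a shell. A sketch with hypothetical host and key values; the echo prints the request for review rather than sending it:

```shell
#!/bin/sh
DAM_HOST="dam.example.com"                          # hypothetical host
ACCESS_KEY="00000000-0000-0000-0000-000000000000"   # replace with the key from 5_GetAccessGUID.sql

URL="https://$DAM_HOST/apiproxy/JobService.js?accesskey=$ACCESS_KEY&method=PopulateAllSearches"
# Printed for review; drop the echo to fire the request.
echo curl -sk "$URL"
```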
28) Start DigiMonitor (Reconfigure config files first -update user passwords)

28b) Check for CDN configuration in Destinations. If there are any, make sure to change them to match the environment.
29) Configure Digibatch and re-enroll the job engines.
30) Start Digibatch
31) Check the jobs created by the "Repopulate the searches" are being processed.
32) Recreate and publish any SOLR searches and clean up the old ones. To clean up the Solr searches, perform the following steps:

Navigate to the DAM database > Tables and drill down to the dbo.search_version table. Right-click the table, select Select Top 1000 Rows, then add where usesolr = 1 at the bottom of the query and run it again.

This will display all of the Solr searches in the results pane. These searches can be deleted using the Delete Solr Searches script by adding the search versionid of the searches you wish to delete.

 


To verify, you can run the query with where usesolr = 1 again.

You should be left with just the searches you have created; if you haven't created any searches yet, the result should be empty.
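The verification query boils down to a one-liner. A sketch; the echo prints the call for review, and customer_dam is a hypothetical database name:

```shell
#!/bin/sh
# Lists the remaining Solr searches; after cleanup this should return only
# the searches you created (or nothing at all).
LIST_SQL="SELECT * FROM dbo.search_version WHERE usesolr = 1"

# Printed for review; drop the echo to execute.
echo sqlcmd -S localhost -E -d customer_dam -Q "$LIST_SQL"
```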



Once all searches have been repopulated, it's time to test your DAM.


How to Copy the Production Storage disk to the test server in Azure.

Steps:

  1. On the Test server, copy the following folders to one of the SQL disks (Webs, DZInstall & SQLBackups); be sure to clean up the SQLBackups folder before copying.
  2. Take a snapshot of the Production storage disk (must be a managed disk).
  3. Create a new disk from the snapshot (must be created in the Test resource group).
  4. Remove the current storage disk from the Test app server.
  5. Mount the newly created storage disk on the Test app server.
  6. RDP onto the Test server and use Disk Management to ensure the disk is present and online.
  7. Restore the folders you copied in step 1 to their original location (Webs, DZInstall & SQLBackups); ensure that the production database backup is not overwritten.
  8. Recreate the Storage share and set the permissions accordingly.
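The snapshot, disk and attach steps can equivalently be scripted with the Azure CLI instead of the portal. A sketch with hypothetical resource names and a placeholder subscription id; each command is echoed so it can be reviewed before running:

```shell
#!/bin/sh
# Hypothetical resource names - adjust to your subscription.
PROD_RG="prod-rg"
TEST_RG="test-rg"
PROD_DISK="prod-storage-disk"
SNAP="prod-storage-snap"
NEW_DISK="test-storage-disk"
TEST_VM="test-app-server"
OLD_TEST_DISK="old-test-storage-disk"

# Printed for review; drop the leading echo on each line to execute.
echo az snapshot create -g "$TEST_RG" -n "$SNAP" --sku Standard_LRS \
  --source "/subscriptions/<sub-id>/resourceGroups/$PROD_RG/providers/Microsoft.Compute/disks/$PROD_DISK"
echo az disk create -g "$TEST_RG" -n "$NEW_DISK" --source "$SNAP" --sku Standard_LRS
echo az vm disk detach -g "$TEST_RG" --vm-name "$TEST_VM" -n "$OLD_TEST_DISK"
echo az vm disk attach -g "$TEST_RG" --vm-name "$TEST_VM" -n "$NEW_DISK"
```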

Commence the database restoration process


Login to the Azure portal and search for virtual machines

Select the production app server 

Select disks

Select the storage disk, which will be a standard HDD of at least 1 TB in size

Select Create Snapshot

Create the snapshot in the Test resource group, ensure the snapshot is a standard hdd and give the snapshot an easily recognizable name

Click Review + Create; once validation has passed, click Create

Once the deployment is finished, click Go to resource

Now click Create disk

Ensure the resource group is the test resource group, give the disk a name and change the type to standard hdd by clicking on Change size

Change the type to standard hdd and click OK

Click Review + Create and after Validation is passed, click Create

After creation click Go to resource

Check that all is as it should be: correct resource group, size, type, and that the disk is unattached.

The next step is to attach the newly created disk to the test app server. Browse to the test app server in the azure portal and select disks.

Remove the current storage disk by clicking on the X and click save.

Once the virtual machine has been updated, attach the newly created Production storage disk to the test app server by clicking Attach existing disks and selecting the disk from the dropdown box; ensure the host caching matches the other disks and click Save.

Once the virtual machine is updated, we are ready to log on to the test server over RDP.

Once logged into the server open an mmc console by clicking on the start button and typing mmc

After the console has opened we need to add the disk management snap in, click File, Add/Remove Snap-In

Then select Disk Management and click Add

Ensure This computer is selected then click Finish

Then click OK, next double click on Disk Management (Local)

Ensure the disk is online and healthy, once confirmed you can close the mmc console

It is important to check that the SQLDATA & SQLLOGS disks have the same drive letters as production; if not, SQL Server will not load until this is rectified. (Drive letters can be changed using Disk Management.)

Rename the following folders to reflect the Test environment URLs: websites, Digimonitor and log files

Recreate the storage share and apply the relevant permissions

Rerun the PowerShell installation for the current version. (This will ensure all the passwords, usernames, URLs and storage locations are correct.)

Now we are ready to commence with the database restoration process.

After completing the copy process, log back into the Azure portal and clean up, i.e. delete the snapshot you created and the redundant storage drive (the original test storage drive):

Delete the snapshot

Delete the redundant (unattached) original test storage disk


How to Copy Assets from Production to Test using FTP. (Non Azure hosted Servers)

  1. Ensure FTP server software is installed on the Test environment. The recommended software is SolarWinds; it can be found here: https://www.solarwinds.com/free-tools/free-sftp-server
  2. Once installed on the Test server, configure the root folder to be the Storage folder on the server
  3. Create a login for the SFTP server
  4. Configure Windows Firewall to allow Port 22
  5. Create a new Inbound rule for Port 22
  6. Start the SFTP server
  7. Install an FTP client on the Production server. The recommended client is WinSCP; it can be downloaded from here: https://winscp.net/eng/index.php
  8. Once installed on the Production server, use this to connect to the Test server with the FTP login you created previously on the Test server.
  9. Once connected you need to copy the contents of the following folders from Production to Test:
  10. Storage\DMM\Assets
  11. Storage\Frontend data
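Steps 4-5 above amount to a single inbound Windows Firewall rule. A sketch using netsh, where the rule name is a hypothetical choice; the command is printed for review here and should be run in an elevated prompt on the Test server:

```shell
#!/bin/sh
# One inbound rule allowing TCP 22 for the SFTP server.
RULE='netsh advfirewall firewall add rule name="Allow SFTP (TCP 22)" dir=in action=allow protocol=TCP localport=22'
# Printed for review; run the command itself on the Test server.
echo "$RULE"
```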


Once the copying of the assets is complete, they should be visible in both the DAM & Media Manager.


