This article describes the steps to perform when you need to copy one customer environment to another (new) environment for the same customer - e.g. copy test to prod or the opposite.
Start by unzipping the content of the relevant zipped folder - it contains all the needed scripts and the guide in PDF format. (Version 3 is for older DAMS pre DC5)
Transfer steps v4.zip
Transfer steps v3.zip
Step-by-step guide
- Most of the scripts need inputs before you can run them!
- Improvement suggestions:
- Add checkmarks such that this is a checklist to do the procedure.
- Automate the whole process in PowerShell for example.
1. Back up the DAM database on the production server and the DAM database on the test server that will be overwritten
Back up the DAM database on production and the destination database. (For Azure hosted servers these are normally saved in H:\SQLBackups; this removes the need to transfer/copy the backup files to the test server using FTP later in the process.)
FOR AZURE HOSTED SERVERS: create a snapshot and disk of the production storage disk and attach the newly created production storage disk to the test server (this process is detailed further down this page; steps 5 & 6 are not needed for Azure hosted servers).
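As a sketch, the backups in step 1 can be taken with a plain T-SQL BACKUP DATABASE command; the database name CustomerProd_dam and the target path are placeholders for the actual customer environment:

```sql
-- Placeholder names: replace CustomerProd_dam and the path with the actual
-- customer database and backup location (H:\SQLBackups on Azure hosted servers).
BACKUP DATABASE [CustomerProd_dam]
TO DISK = N'H:\SQLBackups\CustomerProd_dam.bak'
WITH COPY_ONLY,   -- does not break the regular backup chain
     COMPRESSION,
     STATS = 10;  -- report progress every 10%
```

COPY_ONLY is used here so the ad-hoc backup does not interfere with any scheduled differential/log backup chain on the production server.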
2. Stop Digibatch and Digimonitor website + application pools
PLEASE NOTE THAT IN NEWER VERSIONS OF THE DAM, DIGIMONITOR AND THE JOBS DATABASE ARE NO LONGER PRESENT.
3. Restore the databases
4. Check the ownership of the databases (dam)
The owner of the dam database must be the SA user; if it is not, change it to the SA user.
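A minimal sketch of checking and fixing the owner (the database name CustomerTest_dam is a placeholder for the actual customer database):

```sql
-- Show the current owner of the dam database
SELECT name, SUSER_SNAME(owner_sid) AS owner_name
FROM sys.databases
WHERE name = N'CustomerTest_dam';

-- If the owner is not sa, change it
ALTER AUTHORIZATION ON DATABASE::[CustomerTest_dam] TO [sa];
```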
7. Create schemas on both databases (applies to versions before 5.4)
(1a_CreateSchemas.sql)
8. Create/connect users to the schemas with the correct roles (relevant so the SQL login is bound to the DB user)
(1b_Addusers.sql)
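The actual user/role mapping lives in 1b_Addusers.sql; as an illustration only (the user, login and role names below are placeholders, not the real values from the script), binding a SQL login to a database user looks like:

```sql
USE [CustomerTest_dam];
-- Bind the existing server login to a database user of the same name
CREATE USER [admin_CustomerTest_dam] FOR LOGIN [admin_CustomerTest_dam];
-- Grant the role the environment expects (db_owner here is an assumption)
ALTER ROLE [db_owner] ADD MEMBER [admin_CustomerTest_dam];
```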
9. Copy over schema rights (only relevant if step 7 is relevant)
Copy over the schema rights using the script 1c_MoveSchmeaObjects.sql. Run this script against the DAM database. The script generates a set of queries, which you then need to copy and paste into a new query window and run manually. (When running this script you may receive an error stating that 'SqlQueryNotificationService-ac71525d-ab10-45d7-a1b8-9aac35189f759' cannot be found because it does not exist or you do not have permission.)
These objects can both be deleted without any issues, as long as the website is not running; please see the screenshot below.
10. Change the owner of the schemas admin_database name_dam + admin_database name_dam + UserMgmt
(1d_ChangeSchemas.sql)
This step is only relevant if the database contains schemas other than the dbo schema. In that case the database owner must be set to SA.
11. Delete the old schemas admin_OLDNAME_dam and admin_OLDNAME_dam_jobs on both databases
If it is not possible to delete the old schemas, do not proceed to the next step: it means the schemas are still in use.
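One way to see why a schema refuses to be deleted is to list the objects still bound to it; admin_OLDNAME_dam below is the same placeholder name used in step 11:

```sql
-- Objects still attached to the old schema; DROP SCHEMA fails while any remain
SELECT o.name, o.type_desc
FROM sys.objects AS o
JOIN sys.schemas AS s ON o.schema_id = s.schema_id
WHERE s.name = N'admin_OLDNAME_dam';

-- Only run this once the query above returns no rows
DROP SCHEMA [admin_OLDNAME_dam];
```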
12. Null the completed date for the search proxy scripts (1e_ResetSearches.sql)
When the DZ updater is run this step is performed automatically, but run the script anyway to be safe.
13. Delete everything from the service broker (2a_DROPSB.sql). (Check that everything has been removed; sometimes a few autogenerated objects need to be cleared manually.)
Only relevant for versions before 5.0.
To check this, navigate to the DAM database > Service Broker, expand Queues (this should be empty) and expand Services (this should be empty).
14. Delete the old users from both databases. If it is not possible to delete the old users, do not proceed to the next step: it means the users are still in use.
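A user that still owns a schema cannot be dropped. This sketch (the old user name is a placeholder) shows how to find such ownership before deleting:

```sql
-- Schemas still owned by the old user; this ownership blocks DROP USER
SELECT s.name AS owned_schema
FROM sys.schemas AS s
JOIN sys.database_principals AS p ON s.principal_id = p.principal_id
WHERE p.name = N'admin_OLDNAME_dam';

-- Only when the user no longer owns any schemas or objects:
DROP USER [admin_OLDNAME_dam];
```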
20. Run the updateDZConfig script
NB: When @runscript = 0 the script only simulates the changes; once the settings are as expected, set the value to 1 and run it again.
Very important: the configuration must be changed to point at test instead of production.
22. Edit the stored procedures you get from running the script 5_GetSPs.sql: replace the old database name (old DB name_dam_Jobs) with the new one (new DB name_dam_Jobs)
Still relevant (change from the production name to the test name). Do not put the database in single user mode; otherwise the command cannot run, because every query against the database is seen as a session, and sessions will be blocked if there is more than one.
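To see which sessions are currently connected to the database (each open query window counts as one), something like the following can be run; the database name is a placeholder:

```sql
-- Other sessions connected to the database, excluding the current one
SELECT session_id, login_name, program_name, status
FROM sys.dm_exec_sessions
WHERE database_id = DB_ID(N'CustomerTest_dam')
  AND session_id <> @@SPID;  -- @@SPID is this session's id
```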
23. Reconfigure web.config files by updating user passwords
Still relevant.
24. Create the database ref. Note that if the site existed beforehand and you have used the same database name, this step can be omitted.
Not relevant for versions before 5.0.
Go into the site's web.config and change the connection string and databaseRef fields.
25. Recycle all AppPools related to site
- Check that the admin_dbname_dam & admin_dbname_dam_jobs user accounts can log in to the database. If they cannot, the accounts are orphaned and need to be repaired by running the following scripts: List Orphaned Users.sql and then Fix Orphaned Users.sql
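The repair itself is done by the two scripts named above; as a minimal sketch of what they do (the user name below is a placeholder):

```sql
-- List SQL-authenticated users whose SID no longer matches any server login
SELECT dp.name AS orphaned_user
FROM sys.database_principals AS dp
LEFT JOIN sys.server_principals AS sp ON dp.sid = sp.sid
WHERE dp.type = 'S'              -- SQL user
  AND dp.authentication_type = 1 -- instance (SQL login) authentication
  AND sp.sid IS NULL;

-- Re-map an orphaned user to the server login of the same name
ALTER USER [admin_dbname_dam] WITH LOGIN = [admin_dbname_dam];
```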
26. Start the Digizuite website and check you can log in
27. Navigate to SystemTools / Digizuite Configuration.
Find the Digizuite constant:
WEBDATABASEREF - this constant should match the connection string name in web.config for the dam database
Find the Digizuite constant:
JOBDATABASEREF - this constant should match the connection string name in web.config for the job database (JOBDATABASEREF no longer exists from version 5.4 onward)
28. Repopulate the searches by running https://URL/apiproxy/JobService.js?accesskey=xxx&method=PopulateAllSearches - replace xxx with a valid access key from the script 5_GetAccessGUID.sql
Still relevant - this step already exists in the DZ installer.
29. Check for Azure CDN configuration in Destinations. If there are any, make sure to change them to match the environment.
30. Configure Digibatch and re-enroll the job engines.
Not relevant from 5.4 onward.
31. Start Digibatch
32. Check that the jobs created by the "Repopulate the searches" API call are being processed. The messages are placed in the RabbitMQ DAM Center queue and should be in an unacked state, which means a consumer has taken a lock on the message and is processing it.
33. Recreate and publish any SOLR searches - clean up the old ones.
Not relevant, as this is done by the API call in step 28.
34. Clean up the Solr searches (cores/index) by performing the following steps
Clean up the Solr core (index) so that only the newest index is maintained, instead of extracting data for all indexes.
Delete inactive searches in the Solr portal (the green-marked button in the picture below).
35. Validate that assets in the DAM Center are working and served from test storage, not from production storage - upload a sample asset to verify this.
COPY OF PRODUCTION TO TEST IS DONE NOW
====================================================================
Discarded:
Navigate to the DAM database > Tables and drill down to the dbo.search_version table. Right-click the table, select "Select Top 1000 Rows", then at the bottom of the query add the filter: where usesolr = 1
This will display all of the Solr searches in the results pane. These searches can be deleted using the Delete Solr Searches script by adding the search versionid of the searches you wish to delete; please see the screenshot below.
Your results should look something like the screenshot below.
To check, you can run the query again with the same filter: where usesolr = 1
You should be left with just the search you have created, or if you haven't created any searches yet, your result should be empty.
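The full query behind the where usesolr = 1 filter is, as a sketch (the column layout may differ between DAM versions):

```sql
-- All Solr-backed searches in the DAM database
SELECT TOP (1000) *
FROM dbo.search_version
WHERE usesolr = 1;
```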
Once all searches have been repopulated, it's time to test your DAM.
How to Copy the Production Storage disk to the test server in Azure.
Steps:
- On the Test server, copy the following folders to one of the SQL disks (Webs, DZInstall & SQL Backups); ensure you clean up the SQL Backups folder before copying
- Take a snapshot of the Production storage disk: (must be managed disks)
- Create a new disk from the snapshot (must be created in the test resource group)
- Remove the current storage disk from the test app server
- Mount the newly created storage disk on to the test app server
- RDP onto the test server and use disk management to ensure the disk is present and online.
- Restore the folders you copied in step 1 to their original location (Webs, DZInstall & SQL Backups); ensure that the production database backup is not overwritten
- Recreate the Storage share and set the permissions accordingly
Commence the database restoration process
Login to the Azure portal and search for virtual machines
Select the production app server
Select disks
Select the storage disk, which will be a standard HDD of at least 1 TB in size
Select Create Snapshot
Create the snapshot in the Test resource group, ensure the snapshot is a standard hdd and give the snapshot an easily recognizable name
Click Review + Create, Once validation is passed then click create
Once the deployment is finished, click Go to resource
Now click Create disk
Ensure the resource group is the test resource group, give the disk a name and change the type to standard hdd by clicking on Change size
Change the type to standard hdd and click OK
Click Review + Create and after Validation is passed, click Create
After creation click Go to resource
Check that all is as it should be: correct resource group, size, type, and unattached.
The next step is to attach the newly created disk to the test app server. Browse to the test app server in the azure portal and select disks.
Remove the current storage disk by clicking on the X and click save.
Once the virtual machine has been updated, you can now attach the newly created Production storage disk to the test app server by clicking Attach existing disks and selecting the disk from the dropdown box, ensure the host caching matches the other disks and click save
Once the virtual machine is updated, we are ready to logon to the test server over RDP.
Once logged into the server open an mmc console by clicking on the start button and typing mmc
After the console has opened we need to add the disk management snap in, click File, Add/Remove Snap-In
Then select Disk Management and click Add
Ensure This computer is selected then click Finish
Then click OK, next double click on Disk Management (Local)
Ensure the disk is online and healthy, once confirmed you can close the mmc console
It is important to check that the SQLDATA & SQLLOGS disks have the same drive letters as production; if not, SQL Server will not load until this is rectified. (Drive letters can be changed using Disk Management.)
Rename the following folders: websites, Digimonitor and Log files to reflect the Test environment URLs
Recreate the storage share and apply the relevant permissions
Rerun the PowerShell installation for the current version (this will ensure all the passwords, usernames, URLs and storage locations are correct)
Now we are ready to commence with the database restoration process.
After completing the copy process, log back into the Azure portal and clean up, i.e. delete the snapshot you created and the redundant storage drive (the original test storage drive):
Delete the snapshot
Delete the redundant (unattached) original test storage disk
How to Copy Assets from Production to Test using FTP (non-Azure hosted servers).
- Ensure FTP server software is installed on the Test environment. The recommended software is SolarWinds SFTP Server; it can be found here: https://www.solarwinds.com/free-tools/free-sftp-server
- Once installed on the Test server, configure the root folder to be Storage folder on the server
- Create a login for the SFTP server
- Configure Windows Firewall to allow Port 22
- Create a new Inbound rule for Port 22
- Start the SFTP server
- Install an FTP client on the Production server. The recommended client is WinSCP; it can be downloaded from here: https://winscp.net/eng/index.php
- Once installed on the Production server, use this to connect to the Test server with the FTP login you created previously on the Test server.
- Once connected you need to copy the contents of the following folders from Production to Test:
- Storage\DMM\Assets
- Storage\Frontend data
Once the copying of Assets is complete they should be visible in both the DAM & Media Manager.
Discarded
4) Delete the logfiles for Digibatch, Digimonitor and websites (Optional)
3. Copy the .bak file to the database you want to overwrite. (Only for non azure hosted servers)
NB: Steps 15, 16, 17, 18 and 19 apply only to older DAM versions (pre DAM 5.2)
15) Login with admin_database name_dam (password =admin_database name_dam)
16) Run scripts for SB (2b_RebuildServiceBroker.sql + 2c_ServiceBrokerItemLastChanged.sql)
17) Run the first script entry from search_proxy_scripts to create the search service broker (you can get it by right-clicking the table, selecting Edit Top 200 Rows and selecting everything from the script column).
Then run the script.
18) Log in as your Server admin
19) Run the script 2d_EnableServiceBroker.sql
21) run the updateinstallAcctuallsite script (ONLY PRESENT IN OLDER VERSIONS OF THE DAM)
NB: Skip step 21 if the DAM does not have the table install_config_actualsite
27. Update the Digimonitor instances by running the script 6_UpdateDMInstance.sql. (Before running this script you should check whether the environment is using an Ingest folder; you do this by logging into the DAM and looking at 2 workflows to check if they are in use.) (ONLY PRESENT IN OLDER VERSIONS OF THE DAM)
Navigate to System Tools > Workflows, select DigiFileWatcher, select Edit, then Edit on the Standard Import.
Then do the same on IngestImporter_XML2metadata
26) Check the Digimonitor instances to make sure you didn't miss any.
28) Start DigiMonitor (Reconfigure config files first -update user passwords)