Delete xDB contacts and interactions from Collection databases

When cleaning unwanted contact data out of your Collection databases, referential integrity constraints mean the deletions must happen in a specific order. The following statements delete all contacts, facets, interactions, interaction facets, and contact identifiers from xDB.

  DELETE FROM [xdb_collection].[ContactIdentifiers]
  DELETE FROM [xdb_collection].[ContactIdentifiersIndex]

  DELETE FROM [xdb_collection].[ContactFacets]
  DELETE FROM [xdb_collection].[InteractionFacets]
  DELETE FROM [xdb_collection].[Interactions]

  DELETE FROM [xdb_collection].[Contacts]

You will need to run this on both Collection shards.
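If you'd rather not risk a partial clean-up, the same deletes can be wrapped in a transaction — a sketch only, using the table order above:

```sql
-- Run against each Collection shard. All-or-nothing: if any delete
-- hits a referential constraint, everything rolls back.
BEGIN TRANSACTION;

DELETE FROM [xdb_collection].[ContactIdentifiers];
DELETE FROM [xdb_collection].[ContactIdentifiersIndex];

DELETE FROM [xdb_collection].[ContactFacets];
DELETE FROM [xdb_collection].[InteractionFacets];
DELETE FROM [xdb_collection].[Interactions];

DELETE FROM [xdb_collection].[Contacts];

COMMIT TRANSACTION;
```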

The mysterious case of the Zombie Automation Plan

Yesterday we worked with our customer to deploy some integrations that trigger Sitecore Marketing Automation plans. The plans do their thing and then push messages onto an Azure Service Bus topic and the messages end up on a Dynamics Contact Activity feed. To test the integrations, I created an Automation Plan that would be triggered from an event and then pushed some test content to the service bus. All appeared to be working, so I deleted the plan and uploaded the “real” plans ready for action.

Next morning, lots of messages had been processed, but the Activity Feeds on the Dynamics contacts had double the number of activity records: one was correct, and the other had content that looked a lot like my test automation plan's output. No worries, I thought, it must still be there. Go and check in /sitecore/system/Marketing Control Panel/Automation Plans. Nope, not there.

OK, so maybe it’s orphaned in the Master database. Search for the Item by ID (which conveniently enough was included in the service bus message). Nope, not there either.

Hmm, so it must be cached by the maengine.exe. Go to Azure Portal, restart ma-ops app service. Turns out this doesn’t recycle the WebJob, so I edited the maengine.exe.config and that recycled the WebJob.

Trigger a plan as a test. Nope, still getting duplicate messages on the bus.

Alright, what about Deploy Marketing Definitions? Cool, go do that, trigger the plan again. No luck – still getting duplicate messages.

What was the cause?

Well, Automation Plans are not actually retrieved from the Master database, since the MA engine doesn’t have access to the Sitecore databases directly. It uses the REFDATA service which in turn pulls from the refdata database (which is where the data is sent when you click the magical Deploy Marketing Definitions button), so that was the next place to check.

The refdata database contains a table called xdb_refdata.Definitions, which holds the deployed definition data.

A query of the xdb_refdata.DefinitionTypes table revealed that Automation Plans have a TypeID of 2DE34E0A-5FB6-4AE0-ACFC-69ACF97076B4, so the query to find all Automation Plans in the refdata Definitions table was:

SELECT *
FROM [xdb_refdata].[Definitions]
WHERE TypeID = '2DE34E0A-5FB6-4AE0-ACFC-69ACF97076B4'

I can see the plans, but which is the zombie plan? The Data column is varbinary(max) and the ID column has no relation to the ItemID in the Master database, so to find out which plan doesn’t belong we need to get the plan data from the Data column:

SELECT id, CAST(data AS varchar(max)) AS planData
FROM [xdb_refdata].[Definitions]
WHERE TypeID = '2DE34E0A-5FB6-4AE0-ACFC-69ACF97076B4'
ORDER BY LastModified

Then it was just a case of looking at the planData output and finding the zombie plan. Great, but what now? Kill the zombies with fire? The Definition items were referenced in two other tables, xdb_refdata.DefinitionCultures and xdb_refdata.DefinitionMonikers, and I wasn’t keen to start deleting stuff in Production, so instead I marked the offending Definition items as IsActive=0:

UPDATE [xdb_refdata].[Definitions]
SET IsActive=0
WHERE id in ('B143F382-7014-4E55-A764-0180D82AED74','33BFBC5E-C3B9-49C0-9E37-710BA146AA79');

Run the tests again and no more duplicate messages coming through. Just to be sure, I deployed the marketing definitions again, but the data remained set to IsActive=0, so all was well.

The moral of the story is this: don’t delete Marketing Automation Plans without deactivating them first.


The above approach still had issues: some contacts enrolling in plans were being blocked, and the following error was occurring (although not all the time):

[12/02/2019 00:03:54 > fdf166: INFO] 2019-12-02 00:03:54 ERR An exception occurred during processing for worker '35f908d6-638e-433e-aef6-9184a80378f4'
[12/02/2019 00:03:54 > fdf166: INFO] Sitecore.Framework.Conditions.PostconditionException: Postcondition 'planDefinition should not be null' failed.
[12/02/2019 00:03:54 > fdf166: INFO]    at Sitecore.Framework.Conditions.EnsuresValidator`1.ThrowExceptionCore(String condition, String additionalMessage, ConstraintViolationType type)
[12/02/2019 00:03:54 > fdf166: INFO]    at Sitecore.Framework.Conditions.Throw.ValueShouldNotBeNull[T](ConditionValidator`1 validator, String conditionDescription)
[12/02/2019 00:03:54 > fdf166: INFO]    at Sitecore.Framework.Conditions.ValidatorExtensions.IsNotNull[T]

The key was in the “planDefinition should not be null” message. The zombie plan was still taking new enrolments.

To fix this, I needed to re-activate the plans in refdata and set the EndDate of the plan, but the Data column is varbinary(max) so a bit of SQL was needed:

SELECT id, CAST(data AS varchar(max)) AS planData
FROM [xdb_refdata].[Definitions]
WHERE id = '33BFBC5E-C3B9-49C0-9E37-710BA146AA79'

DECLARE @planData varchar(max);
DECLARE @binPlanData varbinary(max);
DECLARE @test varchar(max);

SET @planData = '{-- insert the modified planData from SELECT query above with EndDate set to a date in the past --}';
SET @binPlanData = CAST(@planData AS varbinary(max));
SET @test = CAST(@binPlanData AS varchar(max));

-- Sanity check: round-trip the binary back to text before updating
PRINT @test;

UPDATE [xdb_refdata].[Definitions]
SET Data = @binPlanData, IsActive=1
WHERE id = '33BFBC5E-C3B9-49C0-9E37-710BA146AA79';
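If the only change needed is the EndDate, a string REPLACE over the decoded text avoids hand-pasting the whole blob. A sketch only — the EndDate values below are hypothetical placeholders; substitute the actual strings from your plan's data:

```sql
DECLARE @planData varchar(max);

SELECT @planData = CAST(Data AS varchar(max))
FROM [xdb_refdata].[Definitions]
WHERE id = '33BFBC5E-C3B9-49C0-9E37-710BA146AA79';

-- Hypothetical values: swap in the real EndDate strings from your plan data
SET @planData = REPLACE(@planData,
    '"EndDate":"9999-12-31T23:59:59Z"',
    '"EndDate":"2019-01-01T00:00:00Z"');

UPDATE [xdb_refdata].[Definitions]
SET Data = CAST(@planData AS varbinary(max)), IsActive = 1
WHERE id = '33BFBC5E-C3B9-49C0-9E37-710BA146AA79';
```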

Next time I am going to use a flamethrower.

Automate Everything!

Recently I had the privilege and good fortune to present at the 2019 Sitecore Symposium in Orlando, Florida. Below are links to the slides and notes from my presentation on customising Sitecore Marketing Automation plans.



GitHub repository containing sample code, utilities, and a content package:

Troubleshooting and debugging tips

See this blog post for some tips on creating custom activity types in Sitecore Marketing Automation plans.


Sitecore docs:
Custom Activity Types in 9.0.x:
Automation deep dive:
Marketing Automation Architecture:

Troubleshooting custom activity types and Marketing Automation

Below are some tips for debugging and troubleshooting customisations in your Sitecore Marketing Automation plans. See also the slides from my recent Sitecore Symposium presentation “Automate Everything!” and the supporting code on GitHub.


  • Activity Properties must be PUBLIC to be populated
    • In your .NET class, ensure that the properties representing your Activity Type Parameters are public, otherwise the MA UI cannot set the values and the Automation Engine cannot read them (e.g. the Message property in this file).
  • Ensure your activity ID is the same across:
    • Your config patch
    • Sitecore Activity Descriptor item
    • Angular package (lower case)
  • Watch out for DLL hell
    • Be careful which package versions you use in your .NET activity class or injected services. The dependencies that you use must be compatible with the Sitecore CM server and the Automation Engine.
  • If using custom facets, you have to include a config so that the Engine knows about them (contact loader XML)
  • Patch files MUST have the filename format sc.<name>.xml.
    • For example: sc.MarketingAutomation.ContactLoader.xml
  • MA engine won’t recycle on XML changes.
    • Unlike a typical Sitecore config, the Automation Engine will not recycle when you edit or add an XML config file. You need to restart the service or WebJob manually for the changes to be picked up.
  • Activity not showing in the MA UI?
    • Make sure you have an icon in your Activity Type Descriptor
    • Check the XHR request in your browser network tab – the error will be in the HTTP response.
    • Ensure the DLL and plugin JS have been deployed to the Content Management server and that your Activity Type Descriptor is using the correct DLL reference.
  • Activity Type Parameters – set the editor and ObjectType (e.g. System.String).
    • This avoids getting quotes around your parameter values inbound and outbound

Some other tips

Sitecore kernel

You don’t have access to the Sitecore kernel or context database in the Automation Engine. The engine runs independently of Sitecore XM, so you cannot retrieve Sitecore items and you should not try to do so. Besides, you don’t have the connection strings in the engine config. You can use the built-in APIs to access some kinds of content.


Logging in the MA engine is completely different to “traditional” Sitecore logging, which uses Log4Net. The Automation Engine uses the ILogger interface from Microsoft.Extensions.Logging.

Log files are separate from the Sitecore logs:

Debugging your code

Attach to the MA engine for debugging in your local instance. The service will probably be named maengine. Make sure that you pick the right one, since you might have more than one MA engine service running. Mouse over the “maengine” process name in the list and it should show the file path to the maengine.exe file.

In PaaS, consult the WebJobs dashboard (as per the previous section above). Errors will be output to the console:


Sitecore automation plans are just Sitecore items. You can serialise MA plans and deploy them via Unicorn, TDS, and Sitecore packages.

Plans are stored in buckets under:
/sitecore/system/Marketing Control Panel/Automation Plans

Activity Type Descriptors are stored in:
/sitecore/system/Settings/Analytics/Marketing Automation/Activity Types

Predicate definitions are stored in a folder of your choosing – e.g. in the “XConnect – Marketing Automation” folder.

Update: if you get this error below in the XHR calls from the MA UI, then you are missing an icon on your Sitecore Activity Descriptor item.

message: "An error has occurred.",
exceptionMessage: "The 'ObjectContent`1' type failed to serialize the response body for content type 'application/json'.",
exceptionType: "System.InvalidOperationException",
stackTrace: null,
innerException: {
    message: "An error has occurred.",
    exceptionMessage: "Value cannot be null. Parameter name: image",
    exceptionType: "System.ArgumentNullException",
    stackTrace: "
        at Sitecore.Diagnostics.Error.AssertString(String argument, String name, Boolean allowEmpty)
        at Sitecore.Resources.Images.GetThemedImageSource(String image)
        at Sitecore.Marketing.Automation.Extensions.ActivityDescriptorViewModelExtensions.ToViewModel(IActivityDescriptor activityDescriptor)
        at System.Linq.Enumerable.WhereSelectEnumerableIterator`2.MoveNext()
        at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.SerializeList(JsonWriter writer, IEnumerable values, JsonArrayContract contract, JsonProperty member, JsonContainerContract collectionContract, JsonProperty containerProperty)
        at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.Serialize(JsonWriter jsonWriter, Object value, Type objectType)
        at Newtonsoft.Json.JsonSerializer.SerializeInternal(JsonWriter jsonWriter, Object value, Type objectType)
        at System.Net.Http.Formatting.BaseJsonMediaTypeFormatter.WriteToStream(Type type, Object value, Stream writeStream, Encoding effectiveEncoding)
        at System.Net.Http.Formatting.JsonMediaTypeFormatter.WriteToStream(Type type, Object value, Stream writeStream, Encoding effectiveEncoding)
        at System.Net.Http.Formatting.BaseJsonMediaTypeFormatter.WriteToStreamAsync(Type type, Object value, Stream writeStream, HttpContent content, TransportContext transportContext, CancellationToken cancellationToken)
        --- End of stack trace from previous location where exception was thrown ---
        at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
        at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
        at System.Web.Http.WebHost.HttpControllerHandler.d__22.MoveNext()"
}

Securing xDB data with Azure Key Vault and SQL Always Encrypted (part 3)

In part 1 we looked at the reasons for using Always Encrypted for xDB data and creating encryption keys in Azure Key Vault. In part 2 we looked at Column Master Keys and Column Encryption Keys in SQL Server. In this post I will cover the process of encrypting the data.

Before getting onto that topic I should point out that if you encrypt the data now, then your xDB collection, index worker, and search services will no longer be able to read xDB data until you create a Client ID and Client Secret that enable those services to retrieve the key from Azure Key Vault and use it to decrypt the column data in SQL. The documentation on how to do this is currently incorrect, and I will cover this in part 4. There are also some issues with the current documentation for encryption that are covered below.


The process of encryption entails a number of steps:

  1. Generate a script to recreate the stored procedures in your shard databases
  2. Delete stored procedures from the shards
  3. Disable change tracking on some of the tables in the shards
  4. Configure Always Encrypted on various columns in the shard database tables
  5. Restore the stored procedures
  6. Re-enable change tracking
  7. Grant permissions to the collection user

Encryption checklist


You will need to do every step listed above on each of your shard databases. To help keep track of the steps and ensure that you don’t miss anything, I’ve created a spreadsheet that you can download and use to mark off each step in the process.

Step 1 – Generate a script containing all stored procedures for each shard

The stored procedures need to be backed up as a script, deleted, and then re-created after encryption. It was not obvious to me how to do this for all Stored Procs in one go, but it turns out there’s a handy trick. Navigate to the shard database Programmability > Stored Procedures area of the tree and click on Stored Procedures:


Hit F7 and the Object Explorer Details window will come up with all the stored procs listed. Select all of them (except for System Stored Procedures), right-click, and select Script Stored Procedure as > CREATE To > New Query Editor Window (or to a file, whatever you prefer):


A script of all stored procedures will be generated that you can save for use in Step 5.

Step 2 – Delete stored procedures

Using the Object Explorer Details window as in Step 1, select all the stored procedures (except for System Stored Procedures) and delete them.

Step 3 – Disable change tracking

Some of the tables need to have change tracking disabled (and re-enabled later on). For each table, right-click it, open the Properties dialog, click Change Tracking in the left menu and turn it off. Alternatively you can use a script that I wrote.

Do this for the following tables:

  • Contacts
  • ContactFacets
  • Interactions
  • InteractionFacets
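A scripted equivalent of those four toggles might look like this — a sketch; re-enable later with the matching ENABLE CHANGE_TRACKING statements:

```sql
-- Disable change tracking on the four tables before encrypting (run per shard)
ALTER TABLE [xdb_collection].[Contacts] DISABLE CHANGE_TRACKING;
ALTER TABLE [xdb_collection].[ContactFacets] DISABLE CHANGE_TRACKING;
ALTER TABLE [xdb_collection].[Interactions] DISABLE CHANGE_TRACKING;
ALTER TABLE [xdb_collection].[InteractionFacets] DISABLE CHANGE_TRACKING;

-- Later, in Step 6, re-enable each one:
-- ALTER TABLE [xdb_collection].[Contacts] ENABLE CHANGE_TRACKING;
```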

Step 4 – Configure Always Encrypted

For each of the tables listed below, configure the appropriate encryption type on the appropriate columns. Before doing this, if you are using Azure SQL I recommend that you bump up the DTUs significantly, as the processor-intensive encryption will run a lot faster. Also, ensure that you have turned off any services that are connected to the databases (xConnect collection service, etc.).

NOTE: The documentation is incorrect. Differences are highlighted below.


To do this, right mouse click each table and select Encryption from the context menu:


From the modal dialog box, set the encryption type for the columns:


Click OK and then, er, wait for a while…

Step 5 – Restore the stored procedures

This bit is pretty simple if you followed the list of tables above and encrypted the Identifier and Source columns in the UnlockContactIdentifiersIndex_Staging table. Just run the script for each shard (you did create a script for each shard, right?). Otherwise, your stored procedure script will throw an error due to dependencies between tables. If you get the following error, then you’ve missed this table from your encryption:

Msg 402, Level 16, State 2, Procedure UnlockContactIdentifiersIndex, Line 26 [Batch Start Line xxxx]

The data types varbinary and varbinary(700) encrypted with [……] are incompatible in the equal to operator.

If there are any TMP tables left over after encryption then you can safely delete them (e.g. tmp_ms_xx_ContactFacets1 was one that I had left over after encryption).

Step 6 – Enable Change Tracking

You can use the script from Step 3 above for this, or do it manually.

Step 7 – Grant permissions to the collection user

This step will depend on the name of the user that has access to the Collection Shard Map Manager database, since that is the user that accesses the Shards. Look in your connection strings config file for the xConnect Collection service and find the connection string named “collection” and identify the user in the connection string. In my case it was “xcsmmuser” (xConnect Shard Map Manager user). Yours may have a different user name. (Note that this user is not a Contained Database User, unlike most of the other Sitecore database users in the connection strings, because it needs to access more than one database.)

On both shards, run the following commands:

grant VIEW ANY COLUMN MASTER KEY DEFINITION to [your-xcsmmuser];
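In my experience, clients reading Always Encrypted columns also need the companion permission for the column encryption keys, so if the grant above alone still produces permission errors, add:

```sql
grant VIEW ANY COLUMN ENCRYPTION KEY DEFINITION to [your-xcsmmuser];
```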


If you later on see the following error in your xConnect logs, then you’ve neglected to do the above step:

[Error] Sitecore.XConnect.Web.Infrastructure.Operations.GetEntitiesOperation`1[Sitecore.XConnect.Contact]: Sitecore.XConnect.Operations.DependencyFailedException: One or more dependencies failed ---> Sitecore.Xdb.Collection.Failures.DataProviderException: Cannot access destination table '[xdb_collection].[GetContactIdsByIdentifiers_Staging]'. ---> System.InvalidOperationException: Cannot access destination table '[xdb_collection].[GetContactIdsByIdentifiers_Staging]'. ---> System.Data.SqlClient.SqlException: VIEW ANY COLUMN MASTER KEY DEFINITION permission denied in database '[shard database]'.

At which point, you are done with encryption but don’t forget to scale down your Azure SQL databases to their prior DTU settings to avoid incurring excess service charges. If you want to start querying encrypted data in SQL Management Studio then you will need to configure your database connection according to this super helpful post otherwise you will not be able to query or view the data. This post is also useful.

In the next article I will look at configuring client Id and Client Secret and configuring the xConnect services.

Securing xDB data with Azure Key Vault and SQL Always Encrypted (part 2)

In Part 1 we looked at the reasons for encrypting xDB data and at creating a key in an Azure Key Vault. In this second article in the series we will look at creating Column Master Keys and Column Encryption Keys in SQL Server. The process needs to be performed on all xDB collection database shards and for each shard you need to create a Column Master Key (CMK) based on the key we created in Part 1 and a Column Encryption Key (CEK) based on the CMK.

Step 1 – create the Column Master Key

Open up SQL Server Management Studio (SSMS) and navigate to the xDB Collection databases. You will most likely have a Shard Map Manager (SMM) and at least 2 Shard databases. The changes you make in the rest of this process will be made to ALL shard databases but the ShardMapManager will not be affected.

Here’s a sample screenshot of the 3 relevant databases (the prefix ma-demo_ is irrelevant, the databases are for illustrative purposes only):


Expand Xdb.Collection.Shard0 and navigate to Security > Always Encrypted Keys > Column Master Keys.


From the context menu of the Column Master Keys node, select the New Column Master Key option:


A modal dialog window will pop up from which you can choose the location of the key (or certificate) from which you will create your Column Master Key (CMK). You can see below the options are available to use a Windows Certificate Store (Current User or Local Machine), Azure Key Vault, or CNG. I have selected Azure Key Vault.


You will then need to log into your Azure subscription:


Once logged in, you should see your Key Vault and the key that you created earlier:


Select the key – in this case “SQL-DEV-AlwaysEncryptedKey” – and type a suitable name (in the screenshot above I have called it “DEV-CMK-Shard0”). Click Generate Key. You should now see the new Always Encrypted Column Master Key in your Shard0 database.


Step 2 – create a Column Encryption Key

Once you have a Column Master Key based on your key (or certificate) in the Azure Key Vault you can create a Column Encryption Key (CEK). The CEK is, as you might expect, the key that is used to encrypt (and decrypt!) the columns in the tables of your database. Each Shard database will have its own CMK and each CEK will be based on the CMK in the appropriate Shard database.

This bit is very simple. Click on the context menu for Column Encryption Keys and select New Column Encryption Key:


Give the new CEK a name, choose the CMK from the Column master key drop down and click OK.


If you experience an error like this:


then go back to Part 1 of this series of articles and revisit Step 2. You need to assign appropriate permissions to your account in order to create the CEK.

Otherwise you should see the following in your Shard0 database:


You should now have a new CEK in Shard0. Repeat the above steps for the other xDB collection shard database(s) and you will be (nearly) ready to encrypt the data. We will cover off the next step in Part 3 of this series.

Securing xDB data with Azure Key Vault and SQL Always Encrypted (part 1)

Sitecore offers a number of features relating to the protection of Personally Identifiable Information (PII) data and GDPR compliance. The indexing of personal information data is disabled by default and details of how Sitecore addresses GDPR “data subject rights” can be found here. But what about the xDB SQL data ? Experience Profile data and custom facet data is stored in the xDB collection database shards and anyone with privileges to perform SQL queries against that data can easily read this information or, even more worrisome, dump the entire database out to a BACPAC and share that BACPAC file or accidentally leave it on a hard drive somewhere. The Red Cross database leak should be enough to keep you awake at night if you have sensitive personal information in your xDB database (or any other databases for that matter.)

Azure SQL databases created after May 2017 are encrypted “at rest” by default; however, that isn’t the whole story. Encryption at rest does not address SQL query access to the data or the export of databases from Azure SQL Server via BACPAC. A developer with access to connection strings would be able to query the data, and anyone with SQL admin access could export the data and inadvertently or intentionally expose it (hmm, now where did I put that USB stick with 2 GB of xDB data again?). What can we do to improve the security of the data? Enter SQL Always Encrypted.

The Security Guide documentation for Sitecore contains a section on configuring SQL Always Encrypted which goes part way to explaining how to achieve this; however, it outlines a number of steps that are not explicitly detailed and are a bit impenetrable if you’ve not done them before, as well as requiring you to hunt down the relevant documentation across various sites. There are also a few gaps in the Sitecore documentation, although these should hopefully be addressed on the Sitecore docs site soon.

I’m going to talk here about doing this via an Azure Key Vault rather than the Windows certificate store, even though most Sitecore developers are more familiar with using MMC and the Windows certificate store. Many production deployments will be on Azure PaaS, so the Key Vault is a logical choice, but the steps have enough in common that they should be useful in either case. There are a number of ways to do this, but I’m going to outline the approach that I used; for example, you could use a certificate to create your Column Master Key if you prefer. I ran into issues with the PowerShell approach outlined on various Microsoft documentation pages, so I opted for the more manual approach outlined below. If you get the PowerShell approach to work, then please share.

Disclaimer: I am not a SQL Server DBA nor a Security Administrator. I accept no responsibility for damage or data loss caused by following these instructions. Please test this approach in a safe environment before performing changes to production data.

Step 1 – create a key in Azure KeyVault

Firstly you’ll need a KeyVault in your Azure subscription. Just add a Key Vault resource from the portal:


Then create a key by clicking Generate/Import:


Give the key a name and choose the key type and key size:


You should now have a key in your key vault:


Step 2 – Assign appropriate permissions to your user to access and use the key

Now comes a REALLY IMPORTANT STEP that is currently undocumented. You must ensure that you have sufficient rights to create and use the keys. If you run into problems later when creating the Column Encryption Keys then come back to this step.

Click on Access policies and then select your Azure portal user (in my case as per below):


The permissions blade will then open and you must select the policies as per below:


The bottom 4 cryptographic operations will probably not be checked – you absolutely need the following operations: Unwrap Key, Wrap Key, Verify, Sign. Without those you will encounter some, ahem, cryptic error messages.

Finally click OK to exit the permissions blade and then make sure you click SAVE on the Access Policies screen that is displayed after that. It’s an easy step to miss and it will cause you grief later on if you forget to save.


In Part 2, I will cover how to create Column Master Keys and Column Encryption Keys based on the new key we have just created. In subsequent posts I will cover the creation of Application Registrations and Client Secrets, assigning permissions, configuring xConnect, and encrypting the data, as well as some other related topics.

Improving password hashing for Sitecore 9.1 and Identity Server 2.0

Sitecore hashes passwords for users and stores them in a database. Out of the box, Sitecore 9.1 and 9.1.1 use SHA1 as the hashing algorithm in the Web.config:

<membership defaultProvider="sitecore" hashAlgorithmType="SHA1">

Unfortunately SHA1 is considered insecure (although it takes a bit of time and money to crack) so for safety it’s probably a good idea to change it to something stronger.
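For a sense of the difference, T-SQL's HASHBYTES can show the digest sizes side by side. This is an illustration only — the membership provider salts and Base64-encodes passwords before storing them, so this is not the stored hash format:

```sql
-- SHA1 produces a 20-byte digest; SHA2_512 produces 64 bytes.
SELECT
    DATALENGTH(HASHBYTES('SHA1',     N'p@ssw0rd')) AS Sha1Bytes,    -- 20
    DATALENGTH(HASHBYTES('SHA2_512', N'p@ssw0rd')) AS Sha512Bytes;  -- 64
```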

The algorithm can be changed to a number of different algorithms by changing the hashAlgorithmType in the Web.config file, but 9.1 introduced Sitecore Identity – a separate service running on Sitecore Host – which had a hardcoded dependency on SHA1 and did not allow it to be changed. There was also no config element that could be updated.

We lodged a ticket with Support and they provided a hotfix in response – 306226 – which enabled hashing with SHA1, SHA256, SHA384, SHA512, HMACSHA1, HMACSHA256, HMACSHA384, and HMACSHA512.

Below is a guide to installing the hotfix and updating Sitecore and Identity Server to use SHA512.

NOTE: Changing the hashing algorithm will prevent you from logging in and require that all current passwords are reset. Following the approach below should allow you to update your admin level password and not lock yourself out 😊

To change the algorithm:

  • firstly log into the CM server as an admin level user,
  • change the Web.config hashAlgorithmType from SHA1 to SHA512,
  • refresh the Sitecore Dashboard screen,
  • go to User Manager,
  • change the password of the Admin user (or an admin level user of your choice),
  • take a note of the new password.

This will ensure that you have a new SHA512 hashed password in the SQL database.

Now you can update the Identity Server with the hotfix and change it to SHA512.

To install the Hotfix:

  • Stop the Sitecore Identity app service
  • Copy the contents of the Website folder in the hotfix zip into the /site/wwwroot folder of the app service
  • Edit the \sitecore\Sitecore.Plugin.IdentityServer\Config\identityServer.xml file and change the <PasswordHashAlgorithm> value
  • Start the app service

You should now be able to log in to the CM server with the new password.

To revert, you can change back to SHA1 on the CM and SI roles and log in with a user that did not have their password changed.

Contained database users and Sitecore

From Sitecore 9.0 onwards, the database users changed from being traditional SQL logins to “contained database users”. With the exception of the Shard Map Manager and Collection shard databases, the users for each database are contained within the database itself. The users are portable with the database, and creating and updating them differs from the “old” way.
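You can check whether a database's users are contained by inspecting sys.database_principals — contained users authenticate against the database itself rather than a server login:

```sql
-- Contained users show authentication_type_desc = 'DATABASE';
-- traditional login-mapped users show 'INSTANCE'.
SELECT name, type_desc, authentication_type_desc
FROM sys.database_principals
WHERE type IN ('S', 'U')  -- SQL and Windows users
ORDER BY name;
```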

An example of where this might be needed is creating a secondary reporting database for a reporting rebuild. If you deploy the DACPAC you will need to create a user. The SQL for this is:

CREATE USER reportinguser WITH PASSWORD = 'new_password';

To change the password:

ALTER USER reportinguser WITH PASSWORD = 'new_password';

To connect to a contained database in SQL Server Management Studio, enter your contained database username and password, then click the Options>> button on the SQL Server login modal and choose a database.

This way you can test your connection for troubleshooting purposes.

Also, slightly off topic, but for the secondary reporting database you can assign the db_owner role to the reporting user with sp_addrolemember:

exec sp_addrolemember 'db_owner', 'reportinguser';
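Note that sp_addrolemember is deprecated in newer SQL Server versions; the equivalent modern syntax is:

```sql
ALTER ROLE db_owner ADD MEMBER reportinguser;
```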

What I learned on my Sitecore Hackathon journey

So you’re thinking about doing the Sitecore Hackathon?

This year I had another crack at the Sitecore Hackathon, having done it a couple of years previously. I highly recommend it – not only is it fun (no really!) it’s also a great way to learn new things and get involved in the global Sitecore community. In a nutshell it’s a 24-hour coding marathon where you and two other teammates get together, eat pizza and attempt to build something new in Sitecore with the possible bonus of also winning a prize and the incredible fame and fortune that follows. (I made up that last bit, but there really are prizes.)

On my journey through this 24-hour marathon with my teammates I noticed a few things that I thought would be worth sharing which we either did and found useful (a short list) or wish we had done (a much longer list). Hope it helps someone next year.


  • With some foresight you can have a pretty good attempt at guessing what topics or product areas will be covered in the Hackathon. Announcements from last year’s Symposium are a good guide, or simply just whatever is a hot topic for blogs and Sitecore product webinars. This year, for example, JSS and Universal Tracker featured as ideas proposed for Hackathon entries, along with SXA, Commerce and some others. All four of those should have been fairly obvious choices.
  • Decide which of these areas you are comfortable with, or perhaps pick one that you want to learn! It’s a great opportunity to do stuff outside of your normal day job, get across new features and learn new things just for the heck of it.
  • Once you’ve had a think about what topics might be right for you and your team, do some R&D! For example, get JSS installed and spin up the sample app, or get SXA installed and watch some videos. If you change your mind, you haven’t lost anything and in fact you’ve learned something new.
  • Get together with your team and come up with some ideas about what might be a good project to develop within your areas of interest. What sort of thing could you build with JSS, given 24 hours and a lot of coffee?

Be prepared

  • Get the relevant version of Sitecore installed and working. Quite a few people were posting questions about how to install and configure 9.1 during the earlier parts of the Hackathon, which cost them valuable time that could have been saved by preparing an environment in advance.
  • Install the modules that you think you will be working on and make sure that they are configured. For example, make sure Universal Tracker is configured with client certificates and that it can push data to your development xConnect instance.
  • Check that you have access and can commit to the repository created for you by the Hackathon team. Do this before the Hackathon starts.
  • Make sure your team are all lined up ready to go at the agreed start time.
  • Choose a venue that you can all get to, with suitable transport and most importantly delicious options for Uber Eats and pizza deliveries 😊
  • What equipment will you need? Laptops (duh) and a video camera or suitable phone for making your submission video will be a good start. A/V adapters, USB sticks, and whatever else you think you might require.
  • Who is bringing what? One of my team had the foresight to bring snacks, drinks, and chocolates. We could have planned that better and shared the load.

On the day

  • Get together with your team an hour before the start time so that you can review the ideas sent out by the Hackathon organisers. Run through them and see what fits your previous R&D work (hopefully something!)
  • Get on the Slack channel and Twitter (#SCHackathon)
  • Come up with a rough plan of attack and decide who is going to start looking into different aspects.
  • Identify the tasks necessary to achieve your goal.
  • Put some rough estimates against these tasks and timebox them so you can decide whether or not to continue down a chosen path or to pull back and regroup.
  • Set some checkpoints for partway through the day to make sure you’re all travelling in the same direction. It’s easy to spend hours working on something and not even notice how fast the time has gone, so a checkpoint will keep you focused on how much time you have and what progress you are making.

Some other tips

  • Time flies. It really really does. You’ll look down at the screen at midday and then it will be 5pm – but what happened in between? Be conscious of time.
  • Don’t miss the deadline. There are no extensions – you’re not in high school (are you?)
  • If at all possible, don’t leave your video and doco until 4 in the morning. You won’t be thinking straight and you’ll probably be like “oh yeah that’s good enough, just submit it”. (Not that we did that… no, of course not…). This is more of an issue in ANZ and Asia due to the timezone differences.
  • Read all the submission requirements and follow them, otherwise you could spend 24 hours working and then submit something that isn’t deemed to be “valid”. Make sure you’ve done the doco, the video, created a package and ticked all the boxes. At least 10 submissions didn’t make the cut this year due to not providing a Sitecore package.

Good luck for next year !