Using GraphQL with XM+EDGE in the Sitecore Demo Portal

Setting up an XM+EDGE instance in the Sitecore Demo Portal, I encountered a couple of gotchas with querying data via the Experience Edge GraphQL IDE. If you are already familiar with the Portal but are getting the error:

The field 'item' does not exist on the type 'query'

then skip down a couple of paragraphs to the solution.

tl;dr

  • Create a publishing target with database “experienceedge”
  • Check the Publish to Experience Edge checkbox and save the new target
  • Publish the site to your new Edge publishing target and wait a few seconds before querying it, while the schema gets generated
  • PROFIT

The Demo Portal

The Sitecore Demo Portal was created to let you easily spin up demo sites that you can play with, learn from, and showcase. You can very quickly create a complete headless site or commerce instance, as well as an empty XM + Experience Edge instance.


It’s available to Sitecore partners, MVPs, and Sitecore employees, and was covered a few months back by Jeremy Davis and others, so I won’t go into any detail here, but you can learn more from Jeremy’s blog post or Neil Killen’s blog post.

I was looking to learn more about Experience Edge (and in particular to use GraphQL on Edge) as I haven’t had the opportunity to work on a new Edge site for a customer, so I logged in and set up an XM+EDGE demo instance. This deployed surprisingly fast – in a matter of seconds. The team have really supercharged the deployment process since I last tried it a while back.

NOTE: This is NOT XM Cloud, it is a demo instance of XM that uses Experience Edge.

The problem

Once I navigated to the Experience Edge URL at https://edge.sitecorecloud.io/api/graphql/ide, the first step was to configure the HTTP header for my API key. The IDE’s HTTP HEADERS pane takes the key in an sc_apikey header – assuming the standard Edge header name – which, with a placeholder value, looks like this:
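{
  "sc_apikey": "<your-Edge-API-key>"
}

With the header in place, I tried to run a simple query to retrieve the default Home item: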

query {
  item(path: "/sitecore/content/home", language: "en") {
    id
  }
}

The result was the following error:

{
  "errors": [
    {
      "message": "The field `item` does not exist on the type `query`.",
      "locations": [
        {
          "line": 2,
          "column": 3
        }
      ]
    }
  ]
}

The solution

The cause of the error was that I had not yet published my site to Edge, so the schema was not available to query. To fix this, do the following:

  • Create a new publishing target with Target Database “experienceedge” and check the Publish to Experience Edge checkbox
  • Publish your site to that publishing target
  • Wait until it finishes, refresh your GraphQL IDE window, give the schema a few seconds to generate, then run your query
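Once the schema is up, a quick way to confirm everything works is to re-run the item query with a few more fields – a minimal example, assuming the standard Edge schema fields:

query {
  item(path: "/sitecore/content/home", language: "en") {
    id
    name
    path
  }
}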


More info

https://doc.sitecore.com/xp/en/developers/hd/201/sitecore-headless-development/install-and-configure-the-experience-edge-connector.html

https://doc.sitecore.com/xp/en/developers/hd/201/sitecore-headless-development/configure-publishing-targets.html

Slides from my SUGCON ANZ 2022 presentation

This year I had the very good fortune to attend and present at SUGCON ANZ in Melbourne – the first ANZ SUGCON since 2019 in Sydney. It was fantastic to hang out with some long-lost Sitecore friends that I hadn’t seen since before COVID times, and to meet new Sitecore community folks as well as the crew from Sitecore – some of whom had travelled a long way to get there.

This year my presentation was “Making the journey to headless without losing your head” which looks at some of the challenges and choices that you might want to consider when taking your first steps towards a Sitecore headless implementation, based on my experiences over the last couple of years doing headless development with Sitecore XP and JSS.

Anyway, for what it’s worth here are my slides. There is some supporting content in the speaker notes, if you get that far… Thanks to all who attended, especially the Perth Sitecore crew, and thanks again to the organisers of SUGCON for the opportunity and good times.

Headless content API options with Sitecore

[image: Nearly Headless Nick from Harry Potter]

Consuming data from a “headless” CMS is a pretty popular approach these days, as the trend towards the delivery of API-driven front end rendering solutions continues to grow. Sitecore has been increasing its footprint in this space for several years now, and has developed some offerings that leverage the legacy of Sitecore’s content management, personalisation, and analytics features whilst also enabling the delivery of content to headless rendering apps. In this post I outline the options available for delivering and shaping Sitecore content via APIs using the features that Sitecore Headless Services adds on to the base platform, and also mention a couple of methods that are already available without the need for a JSS license.

Headless Services

Sitecore Headless Services (formerly JSS Server Components) provides additional server-side functionality that exposes Sitecore content and rendering information via APIs. Released as part of Sitecore JavaScript Services 9 Tech Preview, nearly four years ago now, it’s moved on considerably since then and provides a number of different methods by which we can extract and query data for consumption by apps using the JSS JavaScript Rendering SDKs or the ASP.NET Core Rendering SDK, or by any solution that needs to consume content from your Sitecore platform. It’s available with an Enterprise license, or as an add-on for other editions, and you can try it out via the Front End Developer Trial program at no cost.

[N.B. The following methods require Headless Services to be installed on your Sitecore instance]

Layout Service

The Layout Service exposes a REST endpoint (or endpoints – you can have multiple) through which content items can be retrieved using an API key, with the results returned in a specific JSON structure. I won’t go into the specifics of it here, but you can read more details in one of my earlier blog posts.

The API is quite flexible and extensible, and you can use a few different approaches to tailor the data to your needs:

Routes

Requesting an item (a route in JSS parlance) from the API returns a fully assembled description of the route content, as well as the renderings assigned to the placeholders on that route. One of the key differentiators of the layout service is that it leverages the Sitecore rendering engine to include rendering information along with content, and this means that you can use Sitecore personalisation and content management techniques to customise the data returned to your headless solution. Thus your digital producers and content authors can take advantage of the tools with which they are already familiar, such as Experience Editor, to add renderings to placeholders, apply rules based personalisation, and create and deploy tests, all of which are fully supported in headless mode when using the API.

The Headless SDKs have been specifically developed to consume the data structures returned by the Layout Service, so this is the “go to” option for consuming Sitecore content and rendering information in headless solutions based on the SDKs, in both the JavaScript and .NET versions.
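For illustration, here is a rough TypeScript sketch of what a raw route request looks like without the SDKs – the host name and API key are placeholders, and the endpoint shown is the default JSS Layout Service route:

const host = "https://my-sitecore-host"; // placeholder
const apiKey = "{YOUR-API-KEY-GUID}";    // placeholder

// Fetch the fully assembled layout data (content + renderings) for a route.
async function fetchRoute(route: string, language = "en") {
  const url =
    `${host}/sitecore/api/layout/render/jss` +
    `?item=${encodeURIComponent(route)}&sc_lang=${language}&sc_apikey=${apiKey}`;
  const response = await fetch(url);
  if (!response.ok) throw new Error(`Layout Service returned ${response.status}`);
  return response.json(); // { sitecore: { context, route } }
}

fetchRoute("/").then((data) => console.log(data.sitecore.route.name));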

Placeholders

One lesser-known option available in the layout service API is that you can use it to retrieve only the contents of a specific placeholder. This might not sound like much, but it is really quite a powerful and useful feature. In a headless delivery model, once the layout data has been retrieved from the API for a given route, the user can continue to interact with the site UI without the need for additional calls to the Sitecore delivery server (unless they request a new route, of course). Using the placeholder API call, you can dynamically retrieve and update the contents of specific placeholders in your headless application based on interactions that your user has had with your site.

For example you could append a querystring to the placeholder API call and personalise the rendering data in the placeholder on the server side, then dynamically update the UI with the freshly customised rendering information. Or you could send data to a custom endpoint, update a goal or a facet, and then pull the personalised rendering data based on the new information about your user. Or perhaps you could lazy load content into your UI to improve performance.

In addition, the JSON returned from the placeholder API call is very lean compared to the route API data, so requests are lightweight and fast.
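A sketch of the placeholder variant of the call, reusing the placeholder host and apiKey constants from the route example above – jss-main is just the sample app’s main placeholder name:

// Fetch only the renderings assigned to a single placeholder on a route.
async function fetchPlaceholder(placeholderName: string, route: string, language = "en") {
  const url =
    `${host}/sitecore/api/layout/placeholder/jss` +
    `?placeholderName=${encodeURIComponent(placeholderName)}` +
    `&item=${encodeURIComponent(route)}&sc_lang=${language}&sc_apikey=${apiKey}`;
  const response = await fetch(url);
  return response.json(); // lean JSON: just the placeholder's renderings
}

fetchPlaceholder("jss-main", "/");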

Rendering contents resolvers

Headless Services introduced the concept of Rendering Contents Resolvers. These are a set of 6 “resolvers” that can be used to tailor the data returned for a specific rendering. The tailored rendering contents are then passed back to the Layout Service. The out-of-the-box resolvers are quite flexible and provide a quick and simple way to tailor rendering contents without the need for custom code. They can be easily extended and you can read more details about how to do that here.

Integrated GraphQL

GraphQL was added to Headless Services and announced at Sitecore Symposium back in 2018, along with the “official” release of JSS. The Sitecore docs describe it as “a generic GraphQL service platform on top of Sitecore. It hosts your data and presents it through GraphQL queries. The API supports real-time data using GraphQL subscriptions.”

What this means in practical terms is that you can query Sitecore items and perform Content Search queries via GraphQL. Integrated GraphQL is the use of GraphQL queries to shape your rendering contents. This is done by simply pasting the query code into a multi-line text field in your rendering (what could go wrong?). This will override any other rendering behaviours and return the query results instead of a datasource, or the output of a rendering contents resolver. GraphQL always wins.
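As a minimal illustration, here is the sort of query you might paste into the rendering’s field – $datasource and $contextItem are the variables that Headless Services supplies to integrated queries:

query IntegratedDemoQuery($datasource: String!, $contextItem: String!) {
  datasource: item(path: $datasource) {
    id
    name
  }
  contextItem: item(path: $contextItem) {
    id
  }
}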

One key difference that should be borne in mind, however, is that the JSON contract returned by an integrated GraphQL query will be quite different to the rendering contracts returned when using out-of-the-box options like rendering contents resolvers. This can result in a bit of a mixed bag of data structures being returned to your headless data consumers, some using the “standard” approach and some using a variety of GraphQL shaped rendering contents.

GraphQL can also be extended. For an example of extending Content Search in GraphQL, see Aaron Bickle’s excellent blog post on the subject.

Layout Service extensions

This is Sitecore: everything is extensible! So it is not difficult to extend the code that powers the API and customise the data contract. You can read about extending the context here.

That pretty much sums up the options for using the Layout Service but it’s not the end of the story. More techniques are available for powering headless solutions.

Other options using Headless Services

Connected GraphQL

This approach uses the same schema as Integrated GraphQL, but exposes API endpoints to which you can send your queries. Using this feature your apps can query Sitecore content and send variables with those queries, perhaps based on client interactions. One example might be to provide a headless search feature, passing user-supplied search terms back to the API endpoint which would in turn use the Content Search API to query Sitecore. Or perhaps use it to retrieve configuration settings or other values on-the-fly without the need to use the Layout Service. Customised GraphQL approaches such as that described in Aaron Bickle’s post mentioned earlier can also be leveraged to customise the default functionality available via Connected GraphQL.
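A rough sketch of a connected query sent from a client app – the /api/my-jss-app endpoint path and the API key are placeholders for whatever your GraphQL endpoint configuration defines:

// Post a GraphQL query with variables to a connected GraphQL endpoint.
async function queryItem(path: string) {
  const response = await fetch(
    "https://my-sitecore-host/api/my-jss-app?sc_apikey={YOUR-API-KEY-GUID}",
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        query: `query ($path: String!) { item(path: $path) { id name } }`,
        variables: { path },
      }),
    }
  );
  return response.json();
}

queryItem("/sitecore/content/home");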

Options that don’t require Headless Services

Don’t have JSS and Headless Services? Not to worry! There are other options available to feed data to your headless solutions.

JSON renderings

Using SXA? Great! SXA is awesome! This also means that you can use SXA data modelling and JSON renderings and variants to return data to your headless app without a JSS license. This approach is pretty flexible because you have the power of SXA rendering variants and Scriban at your disposal.

Sitecore Services Client

The Sitecore Services Client has been around for quite a while and in a headless scenario it would most likely be used to provide read-only access to items via the ItemService. It’s a flexible API and if you want to simply pull content items out of Sitecore and consume that data in a headless app, then this is a great alternative to Headless Services. One downside is that you don’t get the rendering information that you would have been able to retrieve via the Layout Service, but if you only need items then this is your simplest, best, and cheapest option.
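For example, a read-only item fetch looks roughly like this – the host is a placeholder, the /sitecore/api/ssc/item route is the ItemService default, and the GUID is the well-known Home item ID:

// Read an item by ID via the ItemService (assumes read access is configured).
async function getItem(itemId: string, language = "en") {
  const url =
    `https://my-sitecore-host/sitecore/api/ssc/item/${itemId}` +
    `?language=${language}`;
  const response = await fetch(url);
  return response.json(); // flat JSON of the item's fields
}

getItem("{110D559F-DEA5-42EA-9C1C-8A5DF7E70EF9}");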

Custom APIs

Finally, you can always create your own API endpoints using .NET. In ASP.NET MVC this will probably be an MVC controller or Web API endpoint with custom routing. This approach is very flexible, is something that any .NET developer will be comfortable creating, and it doesn’t require any additional licenses to serve content.

Summary

No doubt there are other ways to pull content out of Sitecore for consumption in your headless solutions (e.g. the Item Web API – does that still exist?) but these are the mainstream approaches. If you want to take full advantage of the headless rendering SDKs and leverage analytics, personalisation, content testing, and the power of Sitecore’s content management feature set (placeholders, renderings, templates, datasources, etc.) then Headless Services is probably the best option, but if you don’t need those features, or if your client/employer doesn’t want to foot the bill for headless, then there are still some solid options available for pulling data out of Sitecore to power your headless solutions.

Securing xDB data with Azure Key Vault and SQL Always Encrypted (part 3)

In part 1 we looked at the reasons for using Always Encrypted for xDB data and creating encryption keys in Azure Key Vault. In part 2 we looked at Column Master Keys and Column Encryption Keys in SQL Server. In this post I will cover the process of encrypting the data.

Before getting onto that topic I should point out that if you encrypt the data now, then your xDB collection, index worker, and search services will no longer be able to read xDB data until you create a Client ID and Client Secret that enable those services to retrieve the key from Azure Key Vault and use it to decrypt the column data in SQL. The documentation on how to do this is currently incorrect, and I will cover this in part 4. There are also some issues with the current documentation for encryption that are covered below.

Overview

The process of encryption entails a number of steps:

  1. Generate a script to recreate the stored procedures in your shard databases
  2. Delete stored procedures from the shards
  3. Disable change tracking on some of the tables in the shards
  4. Configure Always Encrypted on various columns in the shard database tables
  5. Restore the stored procedures
  6. Re-enable change tracking
  7. Grant permissions to the collection user

Encryption checklist

[screenshot: encryption checklist spreadsheet]

You will need to do every step listed above on each of your shard databases. To help keep track of the steps and ensure that you don’t miss anything out, I’ve created a spreadsheet that you can download and use to mark off each step in the process.

Step 1 – Generate a script containing all stored procedures for each shard

The stored procedures need to be backed up as a script, deleted, and then re-created after encryption. It was not obvious to me how to do this for all Stored Procs in one go, but it turns out there’s a handy trick. Navigate to the shard database Programmability > Stored Procedures area of the tree and click on Stored Procedures:

[screenshot: Stored Procedures node in the SSMS tree]

Hit F7 and the Object Explorer Details window will come up with all the stored procs listed. Select all of them (except for the System Stored Procedures), right-click, and select Script Stored Procedure as > CREATE To > New Query Editor Window (or to a file, whatever you prefer):

[screenshot: scripting stored procedures from Object Explorer Details]

A script of all stored procedures will be generated that you can save for use in Step 5.

Step 2 – Delete stored procedures

Using the Object Explorer Details window as in Step 1, just select all the stored procedures (except for the System Stored Procedures) and delete them.

Step 3 – Disable change tracking

Some of the tables need to have change tracking disabled (and re-enabled later on). For each table, right-click it, open the Properties dialog, click Change Tracking in the left menu, and turn it off. Alternatively you can use a script that I wrote – a minimal sketch follows the table list below.

Do this for the following tables:

  • Contacts
  • ContactFacets
  • Interactions
  • InteractionFacets
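A minimal T-SQL sketch of that disable script, assuming the default xdb_collection schema (run it against each shard database):

-- Disable change tracking prior to encryption. Run on every shard database.
ALTER TABLE [xdb_collection].[Contacts] DISABLE CHANGE_TRACKING;
ALTER TABLE [xdb_collection].[ContactFacets] DISABLE CHANGE_TRACKING;
ALTER TABLE [xdb_collection].[Interactions] DISABLE CHANGE_TRACKING;
ALTER TABLE [xdb_collection].[InteractionFacets] DISABLE CHANGE_TRACKING;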

Step 4 – Configure Always Encrypted

For each of the tables listed below, configure the appropriate encryption type on the appropriate columns. Before doing this, if you are using Azure SQL I recommend that you bump up the DTUs significantly, as encryption is processor-intensive and will run a lot faster with the extra headroom. Also, ensure that you have turned off any services that are connected to the databases (xConnect collection service, etc).

NOTE: The documentation is incorrect. Differences are highlighted below.

[screenshot: tables, columns, and encryption types to configure]

To do this, right-click each table and select Encryption from the context menu:

[screenshot: Encryption option in the table context menu]

From the modal dialog box, set the encryption type for the columns:

[screenshot: Always Encrypted column selection dialog]

Click OK and then, er, wait for a while…

Step 5 – Restore the stored procedures

This bit is pretty simple if you followed the list of tables above and encrypted the Identifier and Source columns in the UnlockContactIdentifiersIndex_Staging table. Just run the script for each shard (you did create a script for each shard, right?). Otherwise, your stored procedure script will throw an error due to dependencies between tables. If you get the following error, then you’ve missed this table in your encryption:

Msg 402, Level 16, State 2, Procedure UnlockContactIdentifiersIndex, Line 26 [Batch Start Line xxxx]

The data types varbinary and varbinary(700) encrypted with [……] are incompatible in the equal to operator.

If there are any TMP tables left over after encryption then you can safely delete them (e.g. tmp_ms_xx_ContactFacets1 was one that I had left over after encryption).

Step 6 – Enable Change Tracking

You can use the script provided in Step 3 above for this, or do it manually.
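If you went the script route, the equivalent T-SQL is a minimal sketch like this (again assuming the default xdb_collection schema; run on every shard):

-- Re-enable change tracking after encryption. Run on every shard database.
ALTER TABLE [xdb_collection].[Contacts] ENABLE CHANGE_TRACKING;
ALTER TABLE [xdb_collection].[ContactFacets] ENABLE CHANGE_TRACKING;
ALTER TABLE [xdb_collection].[Interactions] ENABLE CHANGE_TRACKING;
ALTER TABLE [xdb_collection].[InteractionFacets] ENABLE CHANGE_TRACKING;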

Step 7 – Grant permissions to the collection user

This step will depend on the name of the user that has access to the Collection Shard Map Manager database, since that is the user that accesses the Shards. Look in your connection strings config file for the xConnect Collection service and find the connection string named “collection” and identify the user in the connection string. In my case it was “xcsmmuser” (xConnect Shard Map Manager user). Yours may have a different user name. (Note that this user is not a Contained Database User, unlike most of the other Sitecore database users in the connection strings, because it needs to access more than one database.)

On both shards, run the following commands:

grant VIEW ANY COLUMN MASTER KEY DEFINITION to [your-xcsmmuser];

grant VIEW ANY COLUMN ENCRYPTION KEY DEFINITION to [your-xcsmmuser];

If you later on see the following error in your xConnect logs, then you’ve neglected to do the above step:

[Error] Sitecore.XConnect.Web.Infrastructure.Operations.GetEntitiesOperation`1[Sitecore.XConnect.Contact]: Sitecore.XConnect.Operations.DependencyFailedException: One or more dependencies failed ---> Sitecore.Xdb.Collection.Failures.DataProviderException: Cannot access destination table '[xdb_collection].[GetContactIdsByIdentifiers_Staging]'. ---> System.InvalidOperationException: Cannot access destination table '[xdb_collection].[GetContactIdsByIdentifiers_Staging]'. ---> System.Data.SqlClient.SqlException: VIEW ANY COLUMN MASTER KEY DEFINITION permission denied in database '[shard database]'.

At which point, you are done with encryption, but don’t forget to scale down your Azure SQL databases to their prior DTU settings to avoid incurring excess service charges. If you want to start querying encrypted data in SQL Management Studio then you will need to configure your database connection according to this super helpful post, otherwise you will not be able to query or view the data. This post is also useful.

In the next article I will look at configuring client Id and Client Secret and configuring the xConnect services.

Securing xDB data with Azure Key Vault and SQL Always Encrypted (part 2)

In Part 1 we looked at the reasons for encrypting xDB data and at creating a key in an Azure Key Vault. In this second article in the series we will look at creating Column Master Keys and Column Encryption Keys in SQL Server. The process needs to be performed on all xDB collection database shards: for each shard you need to create a Column Master Key (CMK) based on the key we created in Part 1, and a Column Encryption Key (CEK) based on that CMK.

Step 1 – create the Column Master Key

Open up SQL Server Management Studio (SSMS) and navigate to the xDB Collection databases. You will most likely have a Shard Map Manager (SMM) and at least 2 Shard databases. The changes you make in the rest of this process will be made to ALL shard databases but the ShardMapManager will not be affected.

Here’s a sample screenshot of the 3 relevant databases (the prefix ma-demo_ is irrelevant; the databases are for illustrative purposes only):

[screenshot: Shard Map Manager and shard databases in SSMS]

Expand Xdb.Collection.Shard0 and navigate to Security > Always Encrypted Keys > Column Master Keys.

[screenshot: Always Encrypted Keys node in the Security tree]

From the context menu of the Column Master Keys node, select the New Column Master Key option:

[screenshot: New Column Master Key context menu option]

A modal dialog window will pop up from which you can choose the location of the key (or certificate) from which you will create your Column Master Key (CMK). You can see below the options are available to use a Windows Certificate Store (Current User or Local Machine), Azure Key Vault, or CNG. I have selected Azure Key Vault.

[screenshot: Column Master Key key store options]

You will then need to log into your Azure subscription:

[screenshot: Azure sign-in prompt]

Once logged in, you should see your Key Vault and the key that you created earlier:

[screenshot: selecting the Key Vault and key]

Select the key – in this case “SQL-DEV-AlwaysEncryptedKey” – and type a suitable name (in the screenshot above I have called it “DEV-CMK-Shard0”). Click Generate Key. You should now see the new Always Encrypted Column Master Key in your Shard0 database.

[screenshot: new Column Master Key in the Shard0 database]
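If you prefer to script this step, the equivalent T-SQL looks roughly like the following – the key path is the key identifier URI from your vault, and the names are the examples used above:

CREATE COLUMN MASTER KEY [DEV-CMK-Shard0]
WITH (
    KEY_STORE_PROVIDER_NAME = N'AZURE_KEY_VAULT',
    KEY_PATH = N'https://<your-vault>.vault.azure.net/keys/SQL-DEV-AlwaysEncryptedKey/<key-version>'
);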

Step 2 – create a Column Encryption Key

Once you have a Column Master Key based on your key (or certificate) in the Azure Key Vault you can create a Column Encryption Key (CEK). The CEK is, as you might expect, the key that is used to encrypt (and decrypt!) the columns in the tables of your database. Each Shard database will have its own CMK and each CEK will be based on the CMK in the appropriate Shard database.

This bit is very simple. Click on the context menu for Column Encryption Keys and select New Column Encryption Key:

[screenshot: New Column Encryption Key context menu option]

Give the new CEK a name, choose the CMK from the Column master key drop down and click OK.

[screenshot: Column Encryption Key creation dialog]

If you experience an error like this:

[screenshot: key vault permissions error when creating the CEK]

then go back to Part 1 of this series of articles and revisit Step 2. You need to assign appropriate permissions to your account in order to create the CEK.

Otherwise you should see the following in your Shard0 database:

[screenshot: new Column Encryption Key in the Shard0 database]

You should now have a new CEK in Shard0. Repeat the above steps for the other xDB collection shard database(s) and you will be (nearly) ready to encrypt the data. We will cover the next step in Part 3 of this series.

Securing xDB data with Azure Key Vault and SQL Always Encrypted (part 1)

Sitecore offers a number of features relating to the protection of Personally Identifiable Information (PII) data and GDPR compliance. The indexing of personal information data is disabled by default and details of how Sitecore addresses GDPR “data subject rights” can be found here. But what about the xDB SQL data? Experience Profile data and custom facet data is stored in the xDB collection database shards, and anyone with privileges to perform SQL queries against that data can easily read this information or, even more worrisome, dump the entire database out to a BACPAC and share that BACPAC file or accidentally leave it on a hard drive somewhere. The Red Cross database leak should be enough to keep you awake at night if you have sensitive personal information in your xDB database (or any other databases, for that matter).

Azure SQL databases created after May 2017 are encrypted “at rest” by default, however that isn’t the whole story. Encryption at rest does not address SQL query access to the data or the export of databases from Azure SQL Server via BACPAC. A developer with access to connection strings would be able to query the data, and anyone with SQL admin access could export the data and inadvertently or intentionally expose it (hmm, now where did I put that USB stick with 2 GB of xDB data again?). What can we do to improve the security of the data? Enter SQL Always Encrypted.

The Security Guide documentation for Sitecore contains a section on configuring SQL Always Encrypted which goes part way to explaining how to achieve this; however, it outlines a number of steps that are not explicitly detailed and are a bit impenetrable if you’ve not done them before, as well as requiring you to hunt down the relevant documentation across various sites. There are also a few gaps in the Sitecore documentation, although these should hopefully be updated on the Sitecore docs site soon.

I’m going to talk here about doing this via an Azure Key Vault rather than the Windows Key Store, since most Sitecore developers are already familiar with using MMC and the Windows certificate store. Plus, many production deployments will be on Azure PaaS, so the Key Vault is a logical choice, but the steps have some commonality which hopefully will be apparent and useful in either case. Also, there are a number of ways to do this, but I’m going to outline the approach that I used – for example, you could use a certificate to create your Column Master Key if you prefer. I ran into issues with the PowerShell approach outlined on various Microsoft documentation pages, so I opted for the more manual approach outlined below. If you get the PowerShell approach to work, then please share.

Disclaimer: I am not a SQL Server DBA nor a Security Administrator. I accept no responsibility for damage or data loss caused by following these instructions. Please test this approach in a safe environment before performing changes to production data.

Step 1 – create a key in Azure Key Vault

Firstly you’ll need a Key Vault in your Azure subscription. Just add a Key Vault resource from the portal:

[screenshot: creating a Key Vault resource]

Then create a key by clicking Generate/Import:

[screenshot: Generate/Import a key]

Give the key a name and choose the key type and key size:

[screenshot: key name, type, and size]

You should now have a key in your key vault:

[screenshot: the new key in the vault]

Step 2 – Assign appropriate permissions to your user to access and use the key

Now comes a REALLY IMPORTANT STEP that is currently undocumented. You must ensure that you have sufficient rights to create and use the keys. If you run into problems later when creating the Column Encryption Keys then come back to this step.

Click on Access policies and then select your Azure portal user (in my case as per below):

[screenshot: Access policies user selection]

The permissions blade will then open and you must select the policies as per below:

[screenshot: permissions blade]

The bottom 4 cryptographic operations will probably not be checked – you absolutely need the following operations: Unwrap Key, Wrap Key, Verify, Sign. Without those you will encounter some, ahem, cryptic error messages.

Finally click OK to exit the permissions blade and then make sure you click SAVE on the Access Policies screen that is displayed after that. It’s an easy step to miss and it will cause you grief later on if you forget to save.

[screenshot: saving the access policy]

In Part 2, I will cover how to create Column Master Keys and Column Encryption Keys based on the new key we have just created. In subsequent posts I will cover the creation of Application Registrations and Client Secrets, assigning permissions, configuring xConnect, and encrypting the data, as well as some other related topics.

What I learned on my Sitecore Hackathon journey

So you’re thinking about doing the Sitecore Hackathon?

This year I had another crack at the Sitecore Hackathon, having done it a couple of years previously. I highly recommend it – not only is it fun (no really!) it’s also a great way to learn new things and get involved in the global Sitecore community. In a nutshell it’s a 24-hour coding marathon where you and two other teammates get together, eat pizza and attempt to build something new in Sitecore with the possible bonus of also winning a prize and the incredible fame and fortune that follows. (I made up that last bit, but there really are prizes.)

On my journey through this 24-hour marathon with my teammates I noticed a few things that I thought would be worth sharing which we either did and found useful (a short list) or wish we had done (a much longer list). Hope it helps someone next year.

Planning

  • With some foresight you can have a pretty good attempt at guessing what topics or product areas will be covered in the Hackathon. Announcements from last year’s Symposium are a good guide, or simply just whatever is a hot topic for blogs and Sitecore product webinars. This year, for example, JSS and Universal Tracker featured as ideas proposed for Hackathon entries, along with SXA, Commerce and some others. All four of those should have been fairly obvious choices.
  • Decide which of these areas you are comfortable with, or perhaps pick one that you want to learn! It’s a great opportunity to do stuff outside of your normal day job, get across new features and learn new things just for the heck of it.
  • Once you’ve had a think about what topics might be right for you and your team, do some R&D! For example, get JSS installed and spin up the sample app, or get SXA installed and watch some videos. If you change your mind, you haven’t lost anything and in fact you’ve learned something new.
  • Get together with your team and come up with some ideas about what might be a good project to develop within your areas of interest. What sort of thing could you build with JSS, given 24 hours and a lot of coffee?

Be prepared

  • Get the relevant version of Sitecore installed and working. Quite a few people were posting questions about how to install and configure 9.1 during the earlier parts of the Hackathon, which cost them valuable time that could have been saved by preparing an environment in advance.
  • Install the modules that you think you will be working on and make sure that they are configured. For example, make sure Universal Tracker is configured with client certificates and that it can push data to your development xConnect instance.
  • Check that you have access and can commit to the repository created for you by the Hackathon team. Do this before the Hackathon starts.
  • Make sure your team are all lined up ready to go at the agreed start time.
  • Choose a venue that you can all get to, with suitable transport and most importantly delicious options for Uber Eats and pizza deliveries 😊
  • What equipment will you need? Laptops (duh) and a video camera or suitable phone for making your submission video will be a good start. A/V adapters, USB sticks, and whatever else you think you might require.
  • Who is bringing what? One of my team had the foresight to bring snacks, drinks, and chocolates. We could have planned that better and shared the load.

On the day

  • Get together with your team an hour before the start time so that you can review the ideas sent out by the Hackathon organisers. Run through them and see what fits your previous R&D work (hopefully something!)
  • Get on the Slack channel and Twitter (#SCHackathon)
  • Come up with a rough plan of attack and decide who is going to start looking into different aspects.
  • Identify the tasks necessary to achieve your goal.
  • Put some rough estimates against these tasks and timebox them so you can decide whether or not to continue down a chosen path or to pull back and regroup.
  • Set some checkpoints for partway through the day to make sure you’re all travelling in the same direction. It’s easy to spend hours working on something and not even notice how fast the time has gone, so a checkpoint will keep you focused on how much time you have and what progress you are making.

Some other tips

  • Time flies. It really, really does. You’ll look down at the screen at midday and then it will be 5pm, but what happened in between? Be conscious of time.
  • Don’t miss the deadline. There are no extensions – you’re not in high school (are you?)
  • If at all possible, don’t leave your video and doco until 4 in the morning. You won’t be thinking straight and you’ll probably be like “oh yeah that’s good enough, just submit it”. (Not that we did that… no, of course not…). This is more of an issue in ANZ and Asia due to the timezone differences.
  • Read all the submission requirements and follow them, otherwise you could spend 24 hours working and then submit something that isn’t deemed to be “valid”. Make sure you’ve done the doco, the video, created a package and ticked all the boxes. At least 10 submissions didn’t make the cut this year due to not providing a Sitecore package.

Good luck for next year !